TreeVector: scalable, interactive, phylogenetic trees for the web.
Pethica, Ralph; Barker, Gary; Kovacs, Tim; Gough, Julian
2010-01-28
Phylogenetic trees are complex data forms that need to be graphically displayed to be human-readable. Traditional techniques of plotting phylogenetic trees focus on rendering a single static image, but increases in the production of biological data and large-scale analyses demand scalable, browsable, and interactive trees. We introduce TreeVector, a Scalable Vector Graphics- and Java-based method that allows trees to be integrated and viewed seamlessly in standard web browsers with no extra software required, and that can be modified and linked using standard web technologies. There are now many bioinformatics servers and databases with a range of dynamic processes and updates to cope with the increasing volume of data. TreeVector is designed as a framework to integrate with these processes and produce user-customized phylogenies automatically. We also address the strengths of phylogenetic trees as part of a linked-in browsing process rather than as an end graphic for print. TreeVector is fast and easy to use, is available to download precompiled, and is also open source. It can be run from the web server listed below or from the user's own web server. It has already been deployed on two recognized and widely used database websites.
Scalability of Robotic Controllers: An Evaluation of Controller Options-Experiment II
2011-09-01
…for the Soldier, to ensure mission success while maximizing the survivability and lethality through the synergistic interaction of equipment… based touch interface for gloved finger interactions. This interface had to have larger-than-normal touch-screen buttons for commanding the robot… C.; Hill, S.; Pillalamarri, K. Extreme Scalability: Designing Interfaces and Algorithms for Soldier-Robotic Swarm Interaction, Year 2; ARL-TR
Iterative Integration of Visual Insights during Scalable Patent Search and Analysis.
Koch, S; Bosch, H; Giereth, M; Ertl, T
2011-05-01
Patents are of growing importance in current economic markets. Analyzing patent information has therefore become a common task for many interest groups. As a prerequisite for patent analysis, extensive search for relevant patent information is essential. Unfortunately, the complexity of patent material inhibits a straightforward retrieval of all relevant patent documents and leads to iterative, time-consuming approaches in practice. The sheer amount of patent data to be analyzed already poses scalability challenges; further scalability issues arise from the diversity of users and the large variety of analysis tasks. PatViz, a system for the interactive analysis of patent information, has been developed to address scalability at various levels. PatViz provides a visual environment that allows insights to be reintegrated interactively into subsequent search iterations, thereby bridging the gap between search and analytic processes. Because of its extensibility, we expect that the approach we have taken can be employed in other problem domains that require high-quality search results with respect to completeness.
NeuroLines: A Subway Map Metaphor for Visualizing Nanoscale Neuronal Connectivity.
Al-Awami, Ali K; Beyer, Johanna; Strobelt, Hendrik; Kasthuri, Narayanan; Lichtman, Jeff W; Pfister, Hanspeter; Hadwiger, Markus
2014-12-01
We present NeuroLines, a novel visualization technique designed for scalable detailed analysis of neuronal connectivity at the nanoscale level. The topology of 3D brain tissue data is abstracted into a multi-scale, relative distance-preserving subway map visualization that allows domain scientists to conduct an interactive analysis of neurons and their connectivity. Nanoscale connectomics aims at reverse-engineering the wiring of the brain. Reconstructing and analyzing the detailed connectivity of neurons and neurites (axons, dendrites) will be crucial for understanding the brain and its development and diseases. However, the enormous scale and complexity of nanoscale neuronal connectivity pose big challenges to existing visualization techniques in terms of scalability. NeuroLines offers a scalable visualization framework that can interactively render thousands of neurites, and that supports the detailed analysis of neuronal structures and their connectivity. We describe and analyze the design of NeuroLines based on two real-world use-cases of our collaborators in developmental neuroscience, and investigate its scalability to large-scale neuronal connectivity data.
Large-Scale Networked Virtual Environments: Architecture and Applications
ERIC Educational Resources Information Center
Lamotte, Wim; Quax, Peter; Flerackers, Eddy
2008-01-01
Purpose: Scalability is an important research topic in the context of networked virtual environments (NVEs). This paper aims to describe the ALVIC (Architecture for Large-scale Virtual Interactive Communities) approach to NVE scalability. Design/methodology/approach: The setup and results from two case studies are shown: a 3-D learning environment…
Scalability in Distance Education: "Can We Have Our Cake and Eat It Too?"
ERIC Educational Resources Information Center
Laws, R. Dwight; Howell, Scott L.; Lindsay, Nathan K.
2003-01-01
The decision to increase distance education enrollment hinges on the factors of pedagogical effectiveness, interactivity, audience, faculty incentives, retention, program type, and profitability. A complex interplay exists among these scalability concerns (i.e., issues related to meeting the growing enrollment demand), and any program's approach…
Scalable isosurface visualization of massive datasets on commodity off-the-shelf clusters
Bajaj, Chandrajit
2009-01-01
Tomographic imaging and computer simulations are increasingly yielding massive datasets. Interactive and exploratory visualizations have rapidly become indispensable tools to study large volumetric imaging and simulation data. Our scalable isosurface visualization framework on commodity off-the-shelf clusters is an end-to-end parallel and progressive platform, from initial data access to the final display. Interactive browsing of extracted isosurfaces is made possible by using parallel isosurface extraction and rendering, in conjunction with a new specialized piece of image compositing hardware called Metabuffer. In this paper, we focus on the back-end scalability by introducing a fully parallel and out-of-core isosurface extraction algorithm. It achieves scalability by using both parallel and out-of-core processing and parallel disks. It statically partitions the volume data to parallel disks with a balanced workload spectrum, and builds I/O-optimal external interval trees to minimize the number of I/O operations of loading large data from disk. We also describe an isosurface compression scheme that is efficient for progressive extraction, transmission and storage of isosurfaces. PMID:19756231
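The interval-tree idea above can be illustrated with a minimal sketch: each data block records the (min, max) of its scalar values, and only blocks whose range straddles the isovalue need to be fetched from disk. A production system would use an I/O-optimal external interval tree; this brute-force stabbing query (with made-up block ranges) just shows the selection criterion.

```python
def blocks_to_load(block_ranges, isovalue):
    """Return ids of blocks whose [min, max] scalar range contains isovalue."""
    return [bid for bid, (lo, hi) in block_ranges.items() if lo <= isovalue <= hi]

# Hypothetical per-block scalar ranges from a statically partitioned volume.
ranges = {0: (0.0, 0.4), 1: (0.3, 0.9), 2: (0.8, 1.2), 3: (1.1, 2.0)}
print(blocks_to_load(ranges, 0.85))  # blocks 1 and 2 straddle the isovalue
```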
An MPI-based MoSST core dynamics model
NASA Astrophysics Data System (ADS)
Jiang, Weiyuan; Kuang, Weijia
2008-09-01
Distributed systems are among the main cost-effective and expandable platforms for high-end scientific computing. Therefore, scalable numerical models are important for effective use of such systems. In this paper, we present an MPI-based numerical core dynamics model for simulation of geodynamo and planetary dynamos, and for simulation of core-mantle interactions. The model is developed based on MPI libraries. Two algorithms are used for node-node communication: a "master-slave" architecture and a "divide-and-conquer" architecture. The former is easy to implement but not scalable in communication. The latter is scalable in both computation and communication. The model's scalability is tested on Linux PC clusters with up to 128 nodes. The model is also benchmarked against a published numerical dynamo model solution.
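The scaling contrast above can be sketched: a master-slave gather touches the master once per worker (O(N) communication steps), while a divide-and-conquer reduction combines results pairwise in O(log N) rounds. This toy single-process reduction counts the rounds; it illustrates the scaling argument only and is not the MoSST model's MPI code.

```python
def tree_reduce(values, op):
    """Pairwise (divide-and-conquer) reduction; returns (result, communication rounds)."""
    rounds = 0
    while len(values) > 1:
        # Each round, adjacent pairs are combined in parallel; odd element carries over.
        values = [op(values[i], values[i + 1]) if i + 1 < len(values) else values[i]
                  for i in range(0, len(values), 2)]
        rounds += 1
    return values[0], rounds

total, rounds = tree_reduce(list(range(128)), lambda a, b: a + b)
print(total, rounds)  # sum of 0..127 in log2(128) = 7 rounds, vs 127 master-slave steps
```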
Equalizer: a scalable parallel rendering framework.
Eilemann, Stefan; Makhinya, Maxim; Pajarola, Renato
2009-01-01
Continuing improvements in CPU and GPU performance, as well as increasing multi-core processor and cluster-based parallelism, demand flexible and scalable parallel rendering solutions that can exploit multipipe, hardware-accelerated graphics. In fact, to achieve interactive visualization, scalable rendering systems are essential to cope with the rapid growth of data sets. However, parallel rendering systems are non-trivial to develop, and often only application-specific implementations have been proposed. The task of developing a scalable parallel rendering framework is even more difficult if it should be generic enough to support various types of data and visualization applications and, at the same time, work efficiently on a cluster with distributed graphics cards. In this paper we introduce Equalizer, a toolkit for scalable parallel rendering based on OpenGL that provides an application programming interface (API) for developing scalable graphics applications for a wide range of systems, from large distributed visualization clusters and multi-processor multipipe graphics systems to single-processor single-pipe desktop machines. We describe the system architecture and the basic API, discuss its advantages over previous approaches, and present example configurations, usage scenarios, and scalability results.
NASA Astrophysics Data System (ADS)
Le, Minh Tuan; Nguyen, Congdu; Yoon, Dae-Il; Jung, Eun Ku; Kim, Hae-Kwang
2007-12-01
In this paper, we introduce a graphics-to-Scalable Vector Graphics (SVG) adaptation framework with a mechanism for vector graphics transmission, to overcome the shortcomings in real-time representation and interaction of 3D graphics applications running on mobile devices. We then develop an interactive 3D visualization system based on the proposed framework for rapidly representing a 3D scene on mobile devices without having to download it from the server. Our system is composed of a client viewer and a graphics-to-SVG adaptation server. The client viewer allows the user to access the same 3D content from different devices according to consumer interactions.
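The adaptation step boils down to serializing graphics primitives into SVG markup that any SVG-capable client can render. A minimal sketch of that serialization is below; element names and attributes follow the SVG 1.1 specification, while the shape list and canvas size are made up for the example (this is not the paper's adaptation server).

```python
def to_svg(shapes, width=320, height=240):
    """Serialize a list of simple 2D primitives into an SVG document string."""
    body = []
    for s in shapes:
        if s["type"] == "circle":
            body.append('<circle cx="{cx}" cy="{cy}" r="{r}"/>'.format(**s))
        elif s["type"] == "rect":
            body.append('<rect x="{x}" y="{y}" width="{w}" height="{h}"/>'.format(**s))
    return ('<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{width}" height="{height}">' + "".join(body) + "</svg>")

doc = to_svg([{"type": "circle", "cx": 50, "cy": 50, "r": 20},
              {"type": "rect", "x": 100, "y": 30, "w": 60, "h": 40}])
print(doc)
```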
Scalable Visual Analytics of Massive Textual Datasets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishnan, Manoj Kumar; Bohn, Shawn J.; Cowley, Wendy E.
2007-04-01
This paper describes the first scalable implementation of the text processing engine used in visual analytics tools. These tools aid information analysts in interacting with and understanding large textual information content through visual interfaces. By developing a parallel implementation of the text processing engine, we enable visual analytics tools to exploit cluster architectures and handle massive datasets. The paper describes key elements of our parallelization approach and demonstrates virtually linear scaling when processing multi-gigabyte data sets such as PubMed. This approach enables interactive analysis of large datasets beyond the capabilities of existing state-of-the-art visual analytics tools.
High-fidelity cluster state generation for ultracold atoms in an optical lattice.
Inaba, Kensuke; Tokunaga, Yuuki; Tamaki, Kiyoshi; Igeta, Kazuhiro; Yamashita, Makoto
2014-03-21
We propose a method for generating high-fidelity multipartite spin entanglement of ultracold atoms in an optical lattice in a short operation time and in a scalable manner, which is suitable for measurement-based quantum computation. To perform the desired operations based on the perturbative spin-spin interactions, we propose to actively utilize the extra degrees of freedom (DOFs) usually neglected in the perturbative treatment but included in the Hubbard Hamiltonian of atoms, such as the (pseudo-)charge and orbital DOFs. Our method simultaneously achieves high fidelity, short operation time, and scalability by overcoming the following fundamental problem: enhancing the interaction strength to shorten the operation time breaks the perturbative condition of the interaction and inevitably induces unwanted correlations among the spin and extra DOFs.
Learning directed acyclic graphs from large-scale genomics data.
Nikolay, Fabio; Pesavento, Marius; Kritikos, George; Typas, Nassos
2017-09-20
In this paper, we consider the problem of learning the genetic interaction map, i.e., the topology of a directed acyclic graph (DAG) of genetic interactions from noisy double-knockout (DK) data. Based on a set of well-established biological interaction models, we detect and classify the interactions between genes. We propose a novel linear integer optimization program called the Genetic-Interactions-Detector (GENIE) to identify the complex biological dependencies among genes and to compute the DAG topology that matches the DK measurements best. Furthermore, we extend the GENIE program by incorporating genetic interaction profile (GI-profile) data to further enhance the detection performance. In addition, we propose a sequential scalability technique for large sets of genes under study, in order to provide statistically significant results for real measurement data. Finally, we show via numerical simulations that the GENIE program and the GI-profile data extended GENIE (GI-GENIE) program clearly outperform the conventional techniques and present real data results for our proposed sequential scalability technique.
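The detection step rests on comparing a double-knockout fitness measurement against the value expected if the two genes acted independently. Under the standard multiplicative model, expected fitness is the product of the single-knockout fitnesses, and deviations (epistasis) indicate interaction. The sketch below illustrates that comparison with an illustrative tolerance; the thresholds and labels are assumptions, not the GENIE program's actual classifier.

```python
def classify_interaction(w_a, w_b, w_ab, tol=0.05):
    """Classify a gene pair from single-knockout (w_a, w_b) and DK (w_ab) fitness."""
    expected = w_a * w_b          # multiplicative independence model
    eps = w_ab - expected         # epistasis score
    if eps > tol:
        return "alleviating"      # DK is fitter than expected
    if eps < -tol:
        return "aggravating"      # DK is sicker than expected (e.g. synthetic lethality)
    return "neutral"

print(classify_interaction(0.9, 0.8, 0.3))  # far below the expected 0.72
```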
Visualization for genomics: the Microbial Genome Viewer.
Kerkhoven, Robert; van Enckevort, Frank H J; Boekhorst, Jos; Molenaar, Douwe; Siezen, Roland J
2004-07-22
A Web-based visualization tool, the Microbial Genome Viewer, is presented that allows the user to combine complex genomic data in a highly interactive way. This Web tool enables the interactive generation of chromosome wheels and linear genome maps from genome annotation data stored in a MySQL database. The generated images are in scalable vector graphics (SVG) format, which is suitable for creating high-quality scalable images and dynamic Web representations. Gene-related data such as transcriptome and time-course microarray experiments can be superimposed on the maps for visual inspection. The Microbial Genome Viewer 1.0 is freely available at http://www.cmbi.kun.nl/MGV
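Chromosome wheels place features on a circle at angles proportional to their genomic position, which is then emitted as SVG. This geometric sketch maps a position to wheel coordinates; the centre, radius, and 12-o'clock origin convention are illustrative assumptions, not the Microbial Genome Viewer's actual code.

```python
import math

def wheel_point(position, genome_length, cx=200, cy=200, radius=150):
    """Map a genomic position to (x, y) on a circle centred at (cx, cy)."""
    # Angle grows clockwise from 12 o'clock, where the origin of replication sits.
    angle = 2 * math.pi * position / genome_length - math.pi / 2
    return (cx + radius * math.cos(angle), cy + radius * math.sin(angle))

x, y = wheel_point(0, 4_000_000)   # origin of replication, top of the wheel
print(round(x), round(y))          # (200, 50)
```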
Wiewiórka, Marek S; Messina, Antonio; Pacholewska, Alicja; Maffioletti, Sergio; Gawrysiak, Piotr; Okoniewski, Michał J
2014-09-15
Many time-consuming analyses of next-generation sequencing data can be addressed with modern cloud computing. Apache Hadoop-based solutions have become popular in genomics because of their scalability in a cloud infrastructure. So far, most of these tools have been used for batch data processing rather than interactive data querying. The SparkSeq software has been created to take advantage of a new MapReduce framework, Apache Spark, for next-generation sequencing data. SparkSeq is a general-purpose, flexible and easily extendable library for genomic cloud computing. It can be used to build genomic analysis pipelines in Scala and run them in an interactive way. SparkSeq opens up the possibility of customized ad hoc secondary analyses and iterative machine learning algorithms. This article demonstrates its scalability and overall fast performance by running analyses of sequencing datasets. Tests of SparkSeq also prove that the cache usage and HDFS block size can be tuned for optimal performance on multiple worker nodes. Available under the open source Apache 2.0 license: https://bitbucket.org/mwiewiorka/sparkseq/.
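The pipeline style described above chains map/filter/reduce transformations over read records, which Spark evaluates lazily across a cluster. This single-machine sketch mimics that style with Python built-ins on a toy list of (chromosome, mapping quality) records; the records and quality cutoff are made up, and this is not SparkSeq's Scala API.

```python
from functools import reduce

reads = [("chr1", 60), ("chr1", 10), ("chr2", 60), ("chr1", 60), ("chr2", 30)]

# filter: keep confidently mapped reads; reduce: count per chromosome.
high_q = filter(lambda r: r[1] >= 30, reads)
counts = reduce(lambda acc, r: {**acc, r[0]: acc.get(r[0], 0) + 1}, high_q, {})
print(counts)  # {'chr1': 2, 'chr2': 2}
```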
An efficient and scalable deformable model for virtual reality-based medical applications.
Choi, Kup-Sze; Sun, Hanqiu; Heng, Pheng-Ann
2004-09-01
Modeling of tissue deformation is of great importance to virtual reality (VR)-based medical simulations. Considerable effort has been dedicated to the development of interactively deformable virtual tissues. In this paper, an efficient and scalable deformable model is presented for virtual-reality-based medical applications. It considers deformation as a localized force transmittal process which is governed by algorithms based on breadth-first search (BFS). The computational speed is scalable to facilitate real-time interaction by adjusting the penetration depth. Simulated annealing (SA) algorithms are developed to optimize the model parameters by using the reference data generated with the linear static finite element method (FEM). The mechanical behavior and timing performance of the model have been evaluated. The model has been applied to simulate the typical behavior of living tissues and anisotropic materials. Integration with a haptic device has also been achieved on a generic personal computer (PC) platform. The proposed technique provides a feasible solution for VR-based medical simulations and has the potential for multi-user collaborative work in virtual environment.
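The BFS-governed force transmittal can be sketched directly: a contact force spreads outward from the touched node, attenuating per layer, and the search stops at a configurable penetration depth, which is what makes the cost adjustable. The mesh, attenuation factor, and depth below are illustrative assumptions, not the paper's calibrated model.

```python
from collections import deque

def propagate(adjacency, touched, force, max_depth, attenuation=0.5):
    """Return per-node displacement magnitudes from a depth-limited BFS spread."""
    disp = {touched: force}
    frontier = deque([(touched, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_depth:        # penetration depth bounds the work done
            continue
        for nbr in adjacency[node]:
            if nbr not in disp:
                disp[nbr] = disp[node] * attenuation
                frontier.append((nbr, depth + 1))
    return disp

mesh = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # a chain of 4 mesh nodes
print(propagate(mesh, 0, 1.0, max_depth=2))     # {0: 1.0, 1: 0.5, 2: 0.25}
```

Raising `max_depth` trades speed for a wider, more accurate deformation region, which is the scalability knob the abstract describes.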
SeleCon: Scalable IoT Device Selection and Control Using Hand Gestures.
Alanwar, Amr; Alzantot, Moustafa; Ho, Bo-Jhang; Martin, Paul; Srivastava, Mani
2017-04-01
Although different interaction modalities have been proposed in the field of human-computer interfaces (HCI), only a few of these techniques have reached end users, because of scalability and usability issues. Given the popularity and the growing number of IoT devices, selecting one device out of many becomes a hurdle in a typical smart-home environment. Therefore, an easy-to-learn, scalable, and non-intrusive interaction modality has to be explored. In this paper, we propose a pointing approach to interact with devices, as pointing is arguably a natural way for device selection. We introduce SeleCon, a system for device selection and control that uses an ultra-wideband (UWB) equipped smartwatch. To interact with a device in our system, people can point to the device to select it, then draw a hand gesture in the air to specify a control action. To this end, SeleCon employs inertial sensors for pointing gesture detection and a UWB transceiver for identifying the selected device from ranging measurements. Furthermore, SeleCon supports an alphabet of gestures that can be used for controlling the selected devices. We performed our experiment in a 9 m by 10 m lab space with eight deployed devices. The results demonstrate that SeleCon achieves 84.5% accuracy for device selection and 97% accuracy for hand gesture recognition. We also show that SeleCon is power-efficient enough to sustain daily use, by turning off the UWB transceiver when a user's wrist is stationary.
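During a pointing gesture the wrist moves toward the intended device, so that device's UWB range should shrink the most. The toy selector below picks the device with the largest range decrease between gesture start and end; this is one plausible decision rule for illustration, not necessarily SeleCon's exact classifier, and the distances are made up.

```python
def select_device(ranges_start, ranges_end):
    """ranges_*: dict of device -> UWB distance (m); return the pointed-at device."""
    # Most negative (end - start) delta = largest range decrease while pointing.
    return min(ranges_end, key=lambda d: ranges_end[d] - ranges_start[d])

start = {"lamp": 4.0, "tv": 6.0, "fan": 5.0}
end   = {"lamp": 3.9, "tv": 5.2, "fan": 5.1}
print(select_device(start, end))  # "tv": its range dropped by 0.8 m
```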
Scalable quantum memory in the ultrastrong coupling regime.
Kyaw, T H; Felicetti, S; Romero, G; Solano, E; Kwek, L-C
2015-03-02
Circuit quantum electrodynamics, consisting of superconducting artificial atoms coupled to on-chip resonators, is a prime candidate for implementing a scalable quantum computing architecture because of its good tunability and controllability. Furthermore, recent advances have pushed the technology towards the ultrastrong coupling regime of light-matter interaction, where the qubit-resonator coupling strength reaches a considerable fraction of the resonator frequency. Here, we propose a qubit-resonator system operating in that regime as a quantum memory device, and study the storage and retrieval of quantum information in and from the Z2 parity-protected quantum memory within experimentally feasible schemes. We also believe that our proposal may pave the way to a scalable quantum random-access memory, owing to its fast storage and readout performance.
Social Media Tools for Teaching and Learning
ERIC Educational Resources Information Center
Wagner, Ronald
2011-01-01
According to Wikipedia, "social media is the media designed to be disseminated through social interaction, created using highly accessible scalable techniques. Social media is the use of web-based and mobile technologies to turn communication into interactive dialogue." Social networks, such as Facebook and Twitter, contain millions of members who…
Multiyear, Multi-Instructor Evaluation of a Large-Class Interactive-Engagement Curriculum
ERIC Educational Resources Information Center
Cahill, Michael J.; Hynes, K. Mairin; Trousil, Rebecca; Brooks, Lisa A.; McDaniel, Mark A.; Repice, Michelle; Zhao, Jiuqing; Frey, Regina F.
2014-01-01
Interactive-engagement (IE) techniques consistently enhance conceptual learning gains relative to traditional-lecture courses, but attitudinal gains typically emerge only in small, inquiry-based curricula. The current study evaluated whether a "scalable IE" curriculum--a curriculum used in a large course (~130 students per section) and…
A scalable method for computing quadruplet wave-wave interactions
NASA Astrophysics Data System (ADS)
Van Vledder, Gerbrant
2017-04-01
Non-linear four-wave interactions are a key physical process in the evolution of wind-generated ocean waves. The present generation of operational wave models uses the Discrete Interaction Approximation (DIA), but its accuracy is poor. It is now generally acknowledged that the DIA should be replaced with a more accurate method to improve predicted spectral shapes and derived parameters. The search for such a method is challenging, as one must find a balance between accuracy and computational requirements. Such a method is presented here in the form of a scalable and adaptive method that can mimic both the time-consuming exact Snl4 approach and the fast but inaccurate DIA, and everything in between. The method provides an elegant approach to improve the DIA, not by including more arbitrarily shaped wave number configurations, but by a mathematically consistent reduction of an exact method, viz. the WRT method. The adaptiveness lies in adapting the abscissas of the locus integrand in relation to the magnitude of the known terms. The adaptiveness is extended to the highest level of the WRT method to select interacting wavenumber configurations hierarchically, in relation to their importance. This adaptiveness results in a speed-up of one to three orders of magnitude, depending on the measure of accuracy. This measure of accuracy should not be expressed in terms of the quality of the transfer integral for academic spectra, but rather in terms of wave model performance in a dynamic run. This has consequences for the balance between the required accuracy and the computational workload of evaluating these interactions. The performance of the scalable method on different scales is illustrated with results ranging from academic spectra and simple growth curves to more complicated field cases using a 3G wave model.
NASA Astrophysics Data System (ADS)
Kong, Fande; Cai, Xiao-Chuan
2017-07-01
Nonlinear fluid-structure interaction (FSI) problems on unstructured meshes in 3D appear in many applications in science and engineering, such as vibration analysis of aircrafts and patient-specific diagnosis of cardiovascular diseases. In this work, we develop a highly scalable, parallel algorithmic and software framework for FSI problems consisting of a nonlinear fluid system and a nonlinear solid system that are coupled monolithically. The FSI system is discretized by a stabilized finite element method in space and a fully implicit backward difference scheme in time. To solve the large, sparse system of nonlinear algebraic equations at each time step, we propose an inexact Newton-Krylov method together with a multilevel, smoothed Schwarz preconditioner with isogeometric coarse meshes generated by a geometry preserving coarsening algorithm. Here "geometry" includes the boundary of the computational domain and the wet interface between the fluid and the solid. We show numerically that the proposed algorithm and implementation are highly scalable in terms of the number of linear and nonlinear iterations and the total compute time on a supercomputer with more than 10,000 processor cores for several problems with hundreds of millions of unknowns.
NASA Astrophysics Data System (ADS)
West, Ruth G.; Margolis, Todd; Prudhomme, Andrew; Schulze, Jürgen P.; Mostafavi, Iman; Lewis, J. P.; Gossmann, Joachim; Singh, Rajvikram
2014-02-01
Scalable Metadata Environments (MDEs) are an artistic approach for designing immersive environments for large scale data exploration in which users interact with data by forming multiscale patterns that they alternatively disrupt and reform. Developed and prototyped as part of an art-science research collaboration, we define an MDE as a 4D virtual environment structured by quantitative and qualitative metadata describing multidimensional data collections. Entire data sets (e.g., tens of millions of records) can be visualized and sonified at multiple scales and at different levels of detail so they can be explored interactively in real-time within MDEs. They are designed to reflect similarities and differences in the underlying data or metadata such that patterns can be visually/aurally sorted in an exploratory fashion by an observer who is not familiar with the details of the mapping from data to visual, auditory or dynamic attributes. While many approaches for visual and auditory data mining exist, MDEs are distinct in that they utilize qualitative and quantitative data and metadata to construct multiple interrelated conceptual coordinate systems. These "regions" function as conceptual lattices for scalable auditory and visual representations within virtual environments computationally driven by multi-GPU CUDA-enabled fluid dynamics systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Xujun; Li, Jiyuan; Jiang, Xikai
An efficient parallel Stokes solver is developed towards the complete inclusion of hydrodynamic interactions of Brownian particles in any geometry. A Langevin description of the particle dynamics is adopted, where the long-range interactions are included using a Green's function formalism. We present a scalable parallel computational approach, where the general geometry Stokeslet is calculated following a matrix-free algorithm using the General geometry Ewald-like method. Our approach employs a highly efficient iterative finite element Stokes solver for the accurate treatment of long-range hydrodynamic interactions within arbitrary confined geometries. A combination of mid-point time integration of the Brownian stochastic differential equation, the parallel Stokes solver, and a Chebyshev polynomial approximation for the fluctuation-dissipation theorem results in an O(N) parallel algorithm. We also illustrate the new algorithm in the context of the dynamics of confined polymer solutions in equilibrium and non-equilibrium conditions. Our method is extended to treat suspended finite-size particles of arbitrary shape in any geometry using an Immersed Boundary approach.
Schlecht, Ulrich; Liu, Zhimin; Blundell, Jamie R; St Onge, Robert P; Levy, Sasha F
2017-05-25
Several large-scale efforts have systematically catalogued protein-protein interactions (PPIs) of a cell in a single environment. However, little is known about how the protein interactome changes across environmental perturbations. Current technologies, which assay one PPI at a time, are too low throughput to make it practical to study protein interactome dynamics. Here, we develop a highly parallel protein-protein interaction sequencing (PPiSeq) platform that uses a novel double barcoding system in conjunction with the dihydrofolate reductase protein-fragment complementation assay in Saccharomyces cerevisiae. PPiSeq detects PPIs at a rate that is on par with current assays and, in contrast with current methods, quantitatively scores PPIs with enough accuracy and sensitivity to detect changes across environments. Both PPI scoring and the bulk of strain construction can be performed with cell pools, making the assay scalable and easily reproduced across environments. PPiSeq is therefore a powerful new tool for large-scale investigations of dynamic PPIs.
Integrating distributed multimedia systems and interactive television networks
NASA Astrophysics Data System (ADS)
Shvartsman, Alex A.
1996-01-01
Recent advances in networks, storage and video delivery systems are about to make commercial deployment of interactive multimedia services over digital television networks a reality. The emerging components individually have the potential to satisfy the technical requirements in the near future. However, no single vendor is offering a complete end-to-end, commercially deployable, and scalable interactive multimedia application system over digital/analog television systems. Integrating a large set of maturing sub-assemblies and interactive multimedia applications is a major task in deploying such systems. Here we deal with integration issues, requirements and trade-offs in building delivery platforms and applications for interactive television services. Such integration efforts must overcome a lack of standards, and deal with unpredictable development cycles and quality problems of leading-edge technology. There are also the conflicting goals of optimizing systems for video delivery while enabling highly interactive distributed applications. It is becoming possible to deliver continuous video streams from specific sources, but it is difficult and expensive to provide the ability to rapidly switch among multiple sources of video and data. Finally, there is the ever-present challenge of integrating and deploying expensive systems whose scalability and extensibility is limited, while ensuring some resiliency in the face of inevitable changes. This proceedings version of the paper is an extended abstract.
ERIC Educational Resources Information Center
LoCasale-Crouch, Jennifer; Hamre, Bridget; Roberts, Amy; Neesen, Kathy
2016-01-01
The "Effective Classroom Interactions" (ECI) online courses were designed to provide an engaging, effective and scalable approach to enhancing early childhood teachers' use of classroom practices that impact children's school readiness. The created courses included several versions aimed at testing whether or not certain design aspects…
Gil-Santos, Eduardo; Baker, Christopher; Lemaître, Aristide; Gomez, Carmen; Leo, Giuseppe; Favero, Ivan
2017-01-01
Photonic lattices of mutually interacting indistinguishable cavities represent a cornerstone of collective phenomena in optics and could become important in advanced sensing or communication devices. The disorder induced by fabrication technologies has so far hindered the development of such resonant cavity architectures, while post-fabrication tuning methods have been limited by complexity and poor scalability. Here we present a new simple and scalable tuning method for ensembles of microphotonic and nanophotonic resonators, which enables their permanent collective spectral alignment. The method introduces an approach of cavity-enhanced photoelectrochemical etching in a fluid, a resonant process triggered by sub-bandgap light that allows for high selectivity and precision. The technique is presented on a gallium arsenide nanophotonic platform and illustrated by finely tuning one, two and up to five resonators. It opens the way to applications requiring large networks of identical resonators and their spectral referencing to external etalons. PMID:28117394
Scalable manufacturing of biomimetic moldable hydrogels for industrial applications.
Yu, Anthony C; Chen, Haoxuan; Chan, Doreen; Agmon, Gillie; Stapleton, Lyndsay M; Sevit, Alex M; Tibbitt, Mark W; Acosta, Jesse D; Zhang, Tony; Franzia, Paul W; Langer, Robert; Appel, Eric A
2016-12-13
Hydrogels are a class of soft material that is exploited in many, often completely disparate, industrial applications, on account of their unique and tunable properties. Advances in soft material design are yielding next-generation moldable hydrogels that address engineering criteria in several industrial settings such as complex viscosity modifiers, hydraulic or injection fluids, and sprayable carriers. Industrial implementation of these viscoelastic materials requires extreme volumes of material, upwards of several hundred million gallons per year. Here, we demonstrate a paradigm for the scalable fabrication of self-assembled moldable hydrogels using rationally engineered, biomimetic polymer-nanoparticle interactions. Cellulose derivatives are linked together by selective adsorption to silica nanoparticles via dynamic and multivalent interactions. We show that the self-assembly process for gel formation is easily scaled in a linear fashion from 0.5 mL to over 15 L without alteration of the mechanical properties of the resultant materials. The facile and scalable preparation of these materials leveraging self-assembly of inexpensive, renewable, and environmentally benign starting materials, coupled with the tunability of their properties, make them amenable to a range of industrial applications. In particular, we demonstrate their utility as injectable materials for pipeline maintenance and product recovery in industrial food manufacturing as well as their use as sprayable carriers for robust application of fire retardants in preventing wildland fires.
What Makes a Message Stick? The Role of Content and Context in Social Media Epidemics
2013-09-23
First, we propose visual memes, or frequently re-posted short video segments, for detecting and monitoring latent video interactions at scale. Content... interactions (such as quoting, or remixing, parts of a video). Visual memes are extracted by scalable detection algorithms that we develop, with... high accuracy. We further augment visual memes with text, via a statistical model of latent topics. We model content interactions on YouTube with
A Scalable, Collaborative, Interactive Light-field Display System
2014-06-01
Keywords: light-field, holographic displays, 3D display, holographic video, integral photography, plenoptic, computed photography. Distribution A: Approved
Complete quantum control of exciton qubits bound to isoelectronic centres.
Éthier-Majcher, G; St-Jean, P; Boso, G; Tosi, A; Klem, J F; Francoeur, S
2014-05-30
In recent years, impressive demonstrations related to quantum information processing have been realized. The scalability of quantum interactions between arbitrary qubits within an array remains however a significant hurdle to the practical realization of a quantum computer. Among the proposed ideas to achieve fully scalable quantum processing, the use of photons is appealing because they can mediate long-range quantum interactions and could serve as buses to build quantum networks. Quantum dots or nitrogen-vacancy centres in diamond can be coupled to light, but the former system lacks optical homogeneity while the latter suffers from a low dipole moment, rendering their large-scale interconnection challenging. Here, through the complete quantum control of exciton qubits, we demonstrate that nitrogen isoelectronic centres in GaAs combine both the uniformity and predictability of atomic defects and the dipole moment of semiconductor quantum dots. This establishes isoelectronic centres as a promising platform for quantum information processing.
Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas
2016-01-01
Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments.
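PyURDME and MOLNs target spatial stochastic reaction-diffusion models; the simulation kernel underneath such tools is Gillespie's stochastic simulation algorithm (SSA). A minimal well-mixed sketch for a single birth-death species, illustrative only and not the PyURDME API:

```python
import math
import random

def ssa_birth_death(k_birth, k_death, x0, t_end, seed=1):
    """Minimal Gillespie SSA for the reactions
    0 -> A (rate k_birth) and A -> 0 (rate k_death * x).
    Spatial RDME solvers run this kind of kernel per voxel,
    with diffusion treated as jumps between voxels."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    while t < t_end:
        a1, a2 = k_birth, k_death * x   # reaction propensities
        a0 = a1 + a2
        if a0 == 0.0:
            break
        t += -math.log(rng.random()) / a0   # exponential waiting time
        if t >= t_end:
            break
        if rng.random() * a0 < a1:
            x += 1
        else:
            x -= 1
    return x
```

The stationary distribution of this process is Poisson with mean k_birth/k_death, which gives an easy correctness check for Monte Carlo runs.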
Entangling spin-spin interactions of ions in individually controlled potential wells
NASA Astrophysics Data System (ADS)
Wilson, Andrew; Colombe, Yves; Brown, Kenton; Knill, Emanuel; Leibfried, Dietrich; Wineland, David
2014-03-01
Physical systems that cannot be modeled with classical computers appear in many different branches of science, including condensed-matter physics, statistical mechanics, high-energy physics, atomic physics and quantum chemistry. Despite impressive progress on the control and manipulation of various quantum systems, implementation of scalable devices for quantum simulation remains a formidable challenge. As one approach to scalability in simulation, here we demonstrate an elementary building-block of a configurable quantum simulator based on atomic ions. Two ions are trapped in separate potential wells that can individually be tailored to emulate a number of different spin-spin couplings mediated by the ions' Coulomb interaction together with classical laser and microwave fields. We demonstrate deterministic tuning of this interaction by independent control of the local wells and emulate a particular spin-spin interaction to entangle the internal states of the two ions with 0.81(2) fidelity. Extension of the building-block demonstrated here to a 2D-network, which ion-trap micro-fabrication processes enable, may provide a new quantum simulator architecture with broad flexibility in designing and scaling the arrangement of ions and their mutual interactions. This research was funded by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), ONR, and the NIST Quantum Information Program.
Compositional mining of multiple object API protocols through state abstraction.
Dai, Ziying; Mao, Xiaoguang; Lei, Yan; Qi, Yuhua; Wang, Rui; Gu, Bin
2013-01-01
API protocols specify correct sequences of method invocations. Despite their usefulness, API protocols are often unavailable in practice because writing them is cumbersome and error prone. Multiple object API protocols are more expressive than single object API protocols. However, the huge number of objects of typical object-oriented programs poses a major challenge to the automatic mining of multiple object API protocols: besides maintaining scalability, it is important to capture various object interactions. Current approaches utilize various heuristics to focus on small sets of methods. In this paper, we present a general, scalable, multiple object API protocols mining approach that can capture all object interactions. Our approach uses abstract field values to label object states during the mining process. We first mine single object typestates as finite state automata whose transitions are annotated with states of interacting objects before and after the execution of the corresponding method and then construct multiple object API protocols by composing these annotated single object typestates. We implement our approach for Java and evaluate it through a series of experiments.
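The single-object typestate mining step can be illustrated with a much simpler state abstraction than the paper's abstract field values: label each state by the last method invoked and record observed successions. A hedged sketch with hypothetical names:

```python
from collections import defaultdict

def mine_typestate(traces):
    """Mine a finite-state automaton over API method calls from
    execution traces. States are abstracted to the last method
    seen; transitions record which calls were observed to follow."""
    fsa = defaultdict(set)
    for trace in traces:
        state = "<init>"
        for method in trace:
            fsa[state].add(method)
            state = method
    return fsa

def accepts(fsa, trace):
    """Check whether a call sequence follows the mined protocol."""
    state = "<init>"
    for method in trace:
        if method not in fsa.get(state, set()):
            return False
        state = method
    return True
```

Mining from traces like open/read/close and open/write/close yields a protocol that rejects, e.g., reading before opening; composing several such annotated automata is the paper's multiple-object step.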
NASA Astrophysics Data System (ADS)
Jiang, Xikai; Li, Jiyuan; Zhao, Xujun; Qin, Jian; Karpeev, Dmitry; Hernandez-Ortiz, Juan; de Pablo, Juan J.; Heinonen, Olle
2016-08-01
Large classes of materials systems in physics and engineering are governed by magnetic and electrostatic interactions. Continuum or mesoscale descriptions of such systems can be cast in terms of integral equations, whose direct computational evaluation requires O(N2) operations, where N is the number of unknowns. Such a scaling, which arises from the many-body nature of the relevant Green's function, has precluded wide-spread adoption of integral methods for solution of large-scale scientific and engineering problems. In this work, a parallel computational approach is presented that relies on using scalable open source libraries and utilizes a kernel-independent Fast Multipole Method (FMM) to evaluate the integrals in O(N) operations, with O(N) memory cost, thereby substantially improving the scalability and efficiency of computational integral methods. We demonstrate the accuracy, efficiency, and scalability of our approach in the context of two examples. In the first, we solve a boundary value problem for a ferroelectric/ferromagnetic volume in free space. In the second, we solve an electrostatic problem involving polarizable dielectric bodies in an unbounded dielectric medium. The results from these test cases show that our proposed parallel approach, which is built on a kernel-independent FMM, can enable highly efficient and accurate simulations and allow for considerable flexibility in a broad range of applications.
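The O(N^2) direct evaluation that the kernel-independent FMM avoids can be made concrete for a Coulomb-type Green's function; this sketch shows only the brute-force sum (the FMM itself is far more involved):

```python
import numpy as np

def direct_coulomb(q, pos):
    """Direct O(N^2) evaluation of the Green's-function sum
    phi_i = sum_{j != i} q_j / |r_i - r_j|.
    A kernel-independent FMM evaluates the same sums in O(N)
    operations by clustering far-field contributions."""
    n = len(q)
    phi = np.zeros(n)
    for i in range(n):
        d = np.linalg.norm(pos - pos[i], axis=1)
        d[i] = np.inf          # exclude self-interaction
        phi[i] = np.sum(q / d)
    return phi
```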
Zhao, Xue Jiao; Zhu, Guang; Fan, You Jun; Li, Hua Yang; Wang, Zhong Lin
2015-07-28
We report a flexible and area-scalable energy-harvesting technique for converting kinetic wave energy. Triboelectrification as a result of direct interaction between a dynamic wave and a large-area nanostructured solid surface produces an induced current among an array of electrodes. An integration method ensures that the induced current between any pair of electrodes can be constructively added up, which enables significant enhancement in output power and realizes area-scalable integration of electrode arrays. Internal and external factors that affect the electric output are comprehensively discussed. The produced electricity not only drives small electronics but also achieves effective impressed current cathodic protection. This type of thin-film-based device is a potentially practical solution of on-site sustained power supply at either coastal or off-shore sites wherever a dynamic wave is available. Potential applications include corrosion protection, pollution degradation, water desalination, and wireless sensing for marine surveillance.
Akama, Toshiki; Okita, Wakana; Nagai, Reito; Li, Chao; Kaneko, Toshiro; Kato, Toshiaki
2017-09-20
Few-layered transition metal dichalcogenides (TMDs) are known as true two-dimensional materials, with excellent semiconducting properties and strong light-matter interaction. Thus, TMDs are attractive materials for semitransparent and flexible solar cells for use in various applications. However, despite the recent progress, the development of a scalable method to fabricate semitransparent and flexible solar cells with mono- or few-layered TMDs remains a crucial challenge. Here, we show easy and scalable fabrication of a few-layered TMD solar cell using a Schottky-type configuration to obtain a power conversion efficiency (PCE) of approximately 0.7%, which is the highest value reported with few-layered TMDs. Clear power generation was also observed for a device fabricated on a large SiO2 and flexible substrate, demonstrating that our method has high potential for scalable production. In addition, systematic investigation revealed that the PCE and external quantum efficiency (EQE) strongly depended on the type of photogenerated excitons (A, B, and C) because of different carrier dynamics. Because high solar cell performance along with excellent scalability can be achieved through the proposed process, our fabrication method will contribute to accelerating the industrial use of TMDs as semitransparent and flexible solar cells.
Dynamics of person-to-person interactions from distributed RFID sensor networks.
Cattuto, Ciro; Van den Broeck, Wouter; Barrat, Alain; Colizza, Vittoria; Pinton, Jean-François; Vespignani, Alessandro
2010-07-15
Digital networks, mobile devices, and the possibility of mining the ever-increasing amount of digital traces that we leave behind in our daily activities are changing the way we can approach the study of human and social interactions. Large-scale datasets, however, are mostly available for collective and statistical behaviors, at coarse granularities, while high-resolution data on person-to-person interactions are generally limited to relatively small groups of individuals. Here we present a scalable experimental framework for gathering real-time data resolving face-to-face social interactions with tunable spatial and temporal granularities. We use active Radio Frequency Identification (RFID) devices that assess mutual proximity in a distributed fashion by exchanging low-power radio packets. We analyze the dynamics of person-to-person interaction networks obtained in three high-resolution experiments carried out at different orders of magnitude in community size. The data sets exhibit common statistical properties and lack of a characteristic time scale from 20 seconds to several hours. The association between the number of connections and their duration shows an interesting super-linear behavior, which indicates the possibility of defining super-connectors both in the number and intensity of connections. Taking advantage of scalability and resolution, this experimental framework allows the monitoring of social interactions, uncovering similarities in the way individuals interact in different contexts, and identifying patterns of super-connector behavior in the community. These results could impact our understanding of all phenomena driven by face-to-face interactions, such as the spreading of transmissible infectious diseases and information.
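The proximity packets produced by such RFID devices can be aggregated into contact durations by merging packets for a pair that arrive within the exchange interval. A hedged sketch: the 20 s gap mirrors the granularity quoted above, and the function and field names are illustrative.

```python
from collections import defaultdict

def contact_durations(packets, gap=20):
    """Turn timestamped proximity packets (id_a, id_b, t) into
    per-contact durations: successive packets for the same pair
    within `gap` seconds are merged into one contact."""
    by_pair = defaultdict(list)
    for a, b, t in packets:
        by_pair[tuple(sorted((a, b)))].append(t)
    durations = []
    for times in by_pair.values():
        times.sort()
        start = prev = times[0]
        for t in times[1:]:
            if t - prev > gap:
                durations.append(prev - start + gap)  # close contact
                start = t
            prev = t
        durations.append(prev - start + gap)
    return durations
```

Histogramming the returned durations is the kind of analysis behind the broad, scale-free duration distributions reported above.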
Controlled Interactions between Two Dimensional Layered Inorganic Nanosheets and Polymers
2016-06-15
transition metal and non-pair electrons of amine allows us to develop scalable, stable and uniform composite films with numerous combinations of TMD... modification of TMDs sheets with amine-terminated polymers is introduced and the strong Lewis acid-base interaction between transition metal and non-pair... can be readily entangled with other chains of the matrix polymer, thereby ensuring homogeneous PNC formation. The solvent medium offers an extra
An empirical comparison of several recent epistatic interaction detection methods.
Wang, Yue; Liu, Guimei; Feng, Mengling; Wong, Limsoon
2011-11-01
Many new methods have recently been proposed for detecting epistatic interactions in GWAS data. There is, however, no in-depth independent comparison of these methods yet. Five recent methods (TEAM, BOOST, SNPHarvester, SNPRuler, and Screen and Clean (SC)) are evaluated here in terms of power, type-1 error rate, scalability and completeness. In terms of power, TEAM performs best on data with main effect and BOOST performs best on data without main effect. In terms of type-1 error rate, TEAM and BOOST have higher type-1 error rates than SNPRuler and SNPHarvester. SC does not control type-1 error rate well. In terms of scalability, we tested the five methods using a dataset with 100 000 SNPs on a 64-bit Ubuntu system with an Intel(R) Xeon(R) 2.66 GHz CPU and 16 GB memory. TEAM takes ~36 days to finish and SNPRuler reports heap allocation problems. BOOST scales up to 100 000 SNPs and its cost is much lower than that of TEAM. SC and SNPHarvester are the most scalable. In terms of completeness, we study how frequently the pruning techniques employed by these methods incorrectly prune away the most significant epistatic interactions. We find that, on average, 20% of datasets without main effect and 60% of datasets with main effect are pruned incorrectly by BOOST, SNPRuler and SNPHarvester. The software for the five methods tested is available from the URLs below. TEAM: http://csbio.unc.edu/epistasis/download.php. BOOST: http://ihome.ust.hk/~eeyang/papers.html. SNPHarvester: http://bioinformatics.ust.hk/SNPHarvester.html. SNPRuler: http://bioinformatics.ust.hk/SNPRuler.zip. Screen and Clean: http://wpicr.wpic.pitt.edu/WPICCompGen/. wangyue@nus.edu.sg.
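The exhaustive two-locus scan that these methods accelerate or prune is simple to state: for every pair of SNPs, tabulate the 3x3 genotype combinations against case/control status, then score each table. A sketch of the tabulation step only; the scoring statistic (e.g. chi-square) is omitted, and all names are illustrative:

```python
from itertools import combinations

def pairwise_scan(genotypes, phenotype):
    """Exhaustive two-locus scan. `genotypes` is a list of per-SNP
    genotype vectors (values 0/1/2); `phenotype` holds 0/1 labels.
    For each of the O(M^2) SNP pairs, build a 9x2 contingency table
    of genotype combination vs. case/control status. The quadratic
    pair count is exactly what BOOST-style methods make tractable."""
    tables = {}
    for i, j in combinations(range(len(genotypes)), 2):
        counts = [[0, 0] for _ in range(9)]
        for gi, gj, y in zip(genotypes[i], genotypes[j], phenotype):
            counts[3 * gi + gj][y] += 1
        tables[(i, j)] = counts
    return tables
```

With 100 000 SNPs this loop visits roughly 5 x 10^9 pairs, which is why per-pair cost and pruning dominate the scalability comparison above.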
SLURM: Simple Linux Utility for Resource Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jette, M; Dunlap, C; Garlick, J
2002-04-24
Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, and scheduling modules. The design also includes a scalable, general-purpose communication infrastructure. Development will take place in four phases: Phase I results in a solid infrastructure; Phase II produces a functional but limited interactive job initiation capability without use of the interconnect/switch; Phase III provides switch support and documentation; Phase IV provides job status, fault-tolerance, and job queuing and control through Livermore's Distributed Production Control System (DPCS), a meta-batch and resource management system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xantheas, Sotiris S.; Werhahn, Jasper C.
Based on the formulation of the analytical expression of the potential V(r) describing intermolecular interactions in terms of the dimensionless variables r* = r/r_m and V* = V/ε, where r_m is the separation at the minimum and ε the well depth, we propose more generalized scalable forms for the commonly used Lennard-Jones, Mie, Morse and Buckingham exponential-6 potential energy functions (PEFs). These new generalized forms have an additional parameter and revert to the original ones for some choice of that parameter. In this respect, the original forms can be considered as special cases of the more general forms that are introduced. We also propose a scalable, but nonrevertible to the original one, 4-parameter extended Morse potential.
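In the dimensionless variables above, every member of a potential family has its minimum at r* = 1 with depth V* = -1, which is what makes the forms scalable. A quick illustration for Lennard-Jones and Morse; the Morse width parameter a is a free shape parameter of the kind the generalized forms add:

```python
import math

def lj_star(r_star):
    """Lennard-Jones in reduced form V* = V/eps, r* = r/r_m:
    V*(r*) = r***-12 - 2*r***-6, with minimum V*(1) = -1."""
    return r_star**-12 - 2.0 * r_star**-6

def morse_star(r_star, a=6.0):
    """Morse in reduced form: V*(r*) = (1 - exp(-a*(r*-1)))**2 - 1.
    The width parameter a changes the well shape but not the
    scaled minimum position or depth."""
    return (1.0 - math.exp(-a * (r_star - 1.0)))**2 - 1.0
```

Plotting both in (r*, V*) coordinates collapses their minima onto the same point, which is the sense in which the paper compares PEF families on equal footing.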
A Scalable Implementation of Van der Waals Density Functionals
NASA Astrophysics Data System (ADS)
Wu, Jun; Gygi, Francois
2010-03-01
Recently developed Van der Waals density functionals [1] offer the promise to account for weak intermolecular interactions that are not described accurately by local exchange-correlation density functionals. In spite of recent progress [2], the computational cost of such calculations remains high. We present a scalable parallel implementation of the functional proposed by Dion et al. [1]. The method is implemented in the Qbox first-principles simulation code (http://eslab.ucdavis.edu/software/qbox). Application to large molecular systems will be presented. [1] M. Dion et al., Phys. Rev. Lett. 92, 246401 (2004). [2] G. Roman-Perez and J. M. Soler, Phys. Rev. Lett. 103, 096102 (2009).
NASA Technical Reports Server (NTRS)
Shen, Bo-Wen; Tao, Wei-Kuo; Chern, Jiun-Dar
2007-01-01
Improving our understanding of hurricane inter-annual variability and the impact of climate change (e.g., doubling CO2 and/or global warming) on hurricanes brings both scientific and computational challenges to researchers. As hurricane dynamics involves multiscale interactions among synoptic-scale flows, mesoscale vortices, and small-scale cloud motions, an ideal numerical model suitable for hurricane studies should demonstrate its capabilities in simulating these interactions. The newly-developed multiscale modeling framework (MMF, Tao et al., 2007) and the substantial computing power by the NASA Columbia supercomputer show promise in pursuing the related studies, as the MMF inherits the advantages of two NASA state-of-the-art modeling components: the GEOS4/fvGCM and 2D GCEs. This article focuses on the computational issues and proposes a revised methodology to improve the MMF's performance and scalability. It is shown that this prototype implementation enables 12-fold performance improvements with 364 CPUs, thereby making it more feasible to study hurricane climate.
JuxtaView - A tool for interactive visualization of large imagery on scalable tiled displays
Krishnaprasad, N.K.; Vishwanath, V.; Venkataraman, S.; Rao, A.G.; Renambot, L.; Leigh, J.; Johnson, A.E.; Davis, B.
2004-01-01
JuxtaView is a cluster-based application for viewing ultra-high-resolution images on scalable tiled displays. In JuxtaView, we present a new parallel computing and distributed memory approach for out-of-core montage visualization, using LambdaRAM, a software-based network-level cache system. The ultimate goal of JuxtaView is to enable a user to interactively roam through potentially terabytes of distributed, spatially referenced image data such as those from electron microscopes, satellites and aerial photographs. In working towards this goal, we describe our first prototype implemented over a local area network, where the image is distributed using LambdaRAM on the memory of all nodes of a PC cluster driving a tiled display wall. Aggressive pre-fetching schemes employed by LambdaRAM help to reduce the latency involved in remote memory access. We compare LambdaRAM with a more traditional memory-mapped file approach for out-of-core visualization. © 2004 IEEE.
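As a toy, single-process analogue of the prefetching idea (not LambdaRAM's actual interface), an LRU tile cache that speculatively loads the 4-neighbourhood of each requested tile might look like this; all names are illustrative:

```python
from collections import OrderedDict

class TileCache:
    """Toy LRU cache for image tiles with neighbour prefetching,
    loosely inspired by the out-of-core roaming idea in JuxtaView."""
    def __init__(self, fetch, capacity=64):
        self.fetch = fetch          # function (row, col) -> tile data
        self.capacity = capacity
        self.cache = OrderedDict()  # (row, col) -> tile, in LRU order
        self.misses = 0

    def _load(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)         # mark as recently used
        else:
            self.misses += 1
            self.cache[key] = self.fetch(*key)
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)  # evict least-recent tile
        return self.cache[key]

    def get(self, row, col):
        tile = self._load((row, col))
        # Prefetch the 4-neighbourhood so panning hits the cache.
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            self._load((row + dr, col + dc))
        return tile

cache = TileCache(fetch=lambda r, c: f"tile({r},{c})")
cache.get(5, 5)                          # miss: loads (5,5) and neighbours
assert cache.get(4, 5) == "tile(4,5)"    # already prefetched
```

In JuxtaView the same idea operates across the network: LambdaRAM stages remote tiles into cluster memory ahead of demand to hide remote-access latency.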
Scalable and Interactive Segmentation and Visualization of Neural Processes in EM Datasets
Jeong, Won-Ki; Beyer, Johanna; Hadwiger, Markus; Vazquez, Amelio; Pfister, Hanspeter; Whitaker, Ross T.
2011-01-01
Recent advances in scanning technology provide high resolution EM (Electron Microscopy) datasets that allow neuroscientists to reconstruct complex neural connections in a nervous system. However, due to the enormous size and complexity of the resulting data, segmentation and visualization of neural processes in EM data is usually a difficult and very time-consuming task. In this paper, we present NeuroTrace, a novel EM volume segmentation and visualization system that consists of two parts: a semi-automatic multiphase level set segmentation with 3D tracking for reconstruction of neural processes, and a specialized volume rendering approach for visualization of EM volumes. It employs view-dependent on-demand filtering and evaluation of a local histogram edge metric, as well as on-the-fly interpolation and ray-casting of implicit surfaces for segmented neural structures. Both methods are implemented on the GPU for interactive performance. NeuroTrace is designed to be scalable to large datasets and data-parallel hardware architectures. A comparison of NeuroTrace with a commonly used manual EM segmentation tool shows that our interactive workflow is faster and easier to use for the reconstruction of complex neural processes. PMID:19834227
Measuring Social-Emotional Skills to Advance Science and Practice
ERIC Educational Resources Information Center
McKown, Clark; Russo-Ponsaran, Nicole; Johnson, Jason
2016-01-01
The ability to understand and effectively interact with others is a critical determinant of academic, social, and life success (DiPerna & Elliott, 2002). An area in particular need of scalable, feasible, usable, and scientifically sound assessment tools is social-emotional comprehension, which includes mental processes enlisted to encode,…
Massive and Reproducible Production of Liver Buds Entirely from Human Pluripotent Stem Cells.
Takebe, Takanori; Sekine, Keisuke; Kimura, Masaki; Yoshizawa, Emi; Ayano, Satoru; Koido, Masaru; Funayama, Shizuka; Nakanishi, Noriko; Hisai, Tomoko; Kobayashi, Tatsuya; Kasai, Toshiharu; Kitada, Rina; Mori, Akira; Ayabe, Hiroaki; Ejiri, Yoko; Amimoto, Naoki; Yamazaki, Yosuke; Ogawa, Shimpei; Ishikawa, Momotaro; Kiyota, Yasujiro; Sato, Yasuhiko; Nozawa, Kohei; Okamoto, Satoshi; Ueno, Yasuharu; Taniguchi, Hideki
2017-12-05
Organoid technology provides a revolutionary paradigm toward therapy but has yet to be applied in humans, mainly because of reproducibility and scalability challenges. Here, we overcome these limitations by evolving a scalable organ bud production platform entirely from human induced pluripotent stem cells (iPSC). By conducting massive "reverse" screen experiments, we identified three progenitor populations that can effectively generate liver buds in a highly reproducible manner: hepatic endoderm, endothelium, and septum mesenchyme. Furthermore, we achieved human scalability by developing an omni-well-array culture platform for mass producing homogeneous and miniaturized liver buds on a clinically relevant large scale (>10^8). Vascularized and functional liver tissues generated entirely from iPSCs significantly improved subsequent hepatic functionalization potentiated by stage-matched developmental progenitor interactions, enabling functional rescue against acute liver failure via transplantation. Overall, our study provides a stringent manufacturing platform for multicellular organoid supply, thus facilitating clinical and pharmaceutical applications especially for the treatment of liver diseases through multi-industrial collaborations. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Processing Diabetes Mellitus Composite Events in MAGPIE.
Brugués, Albert; Bromuri, Stefano; Barry, Michael; Del Toro, Óscar Jiménez; Mazurkiewicz, Maciej R; Kardas, Przemyslaw; Pegueroles, Josep; Schumacher, Michael
2016-02-01
The focus of this research is the definition of programmable expert Personal Health Systems (PHS) to monitor patients affected by chronic diseases, using agent-oriented programming and mobile computing to represent the interactions among the components of the system. The paper also discusses issues of knowledge representation within the medical domain when dealing with temporal patterns in the physiological values of the patient. In the presented agent-based PHS, doctors can personalize for each patient monitoring rules that can be defined in a graphical way. Furthermore, to achieve better scalability, the computations for monitoring the patients are distributed among their devices rather than being performed in a centralized server. The system is evaluated using data from 21 diabetic patients to detect temporal patterns according to a defined set of monitoring rules, and its scalability is evaluated by comparison with a centralized approach. The evaluation of temporal-pattern detection highlights the system's ability to monitor chronic patients affected by diabetes. Regarding scalability, the results show that an approach exploiting mobile computing is more scalable than a centralized approach, and therefore more likely to satisfy the needs of next-generation PHSs. PHSs are becoming an adopted technology to deal with the surge of patients affected by chronic illnesses. This paper discusses architectural choices to make an agent-based PHS more scalable by using a distributed mobile computing approach. It also discusses how to model the medical knowledge in the PHS in such a way that it is modifiable at run time. The evaluation highlights the necessity of distributing the reasoning to the mobile part of the system, and shows that modifiable rules are able to deal with changes in the lifestyle of patients affected by chronic illnesses.
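MAGPIE's graphical rule language is not reproduced here; as a hedged sketch of the kind of temporal pattern such a system might evaluate on the patient's own device, consider a rule that fires when several high glucose readings cluster within a sliding window. The threshold, window, and names are invented for illustration:

```python
from datetime import datetime, timedelta

def hyperglycemia_pattern(readings, threshold=180,
                          window=timedelta(hours=24), min_events=3):
    """Fire when at least `min_events` glucose readings (mg/dL) exceed
    `threshold` within any sliding `window` -- a toy stand-in for the
    personalized monitoring rules described for MAGPIE."""
    highs = sorted(t for t, value in readings if value > threshold)
    for i in range(len(highs) - min_events + 1):
        if highs[i + min_events - 1] - highs[i] <= window:
            return True
    return False

day = datetime(2016, 2, 1)
readings = [
    (day + timedelta(hours=1), 150),
    (day + timedelta(hours=3), 190),   # high
    (day + timedelta(hours=9), 210),   # high
    (day + timedelta(hours=20), 185),  # high: 3 highs within 24 h
]
assert hyperglycemia_pattern(readings)
```

Running such checks on the mobile device rather than on a central server is what gives the distributed design its scalability advantage.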
SPV: a JavaScript Signaling Pathway Visualizer.
Calderone, Alberto; Cesareni, Gianni
2018-03-24
The visualization of molecular interactions annotated in web resources is useful for presenting such information to users in a clear, intuitive layout. These interactions are frequently represented as binary interactions laid out in free space, where different entities, cellular compartments, and interaction types are hard to distinguish. SPV (Signaling Pathway Visualizer) is a free, open-source JavaScript library that offers a series of pre-defined elements, compartments, and interaction types meant to facilitate the representation of signaling pathways consisting of causal interactions, without neglecting simple protein-protein interaction networks. Freely available under the Apache version 2 license; Source code: https://github.com/Sinnefa/SPV_Signaling_Pathway_Visualizer_v1.0. Language: JavaScript; Web technology: Scalable Vector Graphics; Libraries: D3.js. sinnefa@gmail.com.
Declarative Knowledge Acquisition in Immersive Virtual Learning Environments
ERIC Educational Resources Information Center
Webster, Rustin
2016-01-01
The author investigated the interaction effect of immersive virtual reality (VR) in the classroom. The objective of the project was to develop and provide a low-cost, scalable, and portable VR system containing purposely designed and developed immersive virtual learning environments for the US Army. The purpose of the mixed design experiment was…
Coordinating Decentralized Learning and Conflict Resolution across Agent Boundaries
ERIC Educational Resources Information Center
Cheng, Shanjun
2012-01-01
It is crucial for embedded systems to adapt to the dynamics of open environments. This adaptation process becomes especially challenging in the context of multiagent systems because of scalability, partial information accessibility and complex interaction of agents. It is a challenge for agents to learn good policies, when they need to plan and…
Scalable Online Network Modeling and Simulation
2005-08-01
Szymanski, Boleslaw; Kalyanaraman, Shivkumar; Sikdar, Biplab; Carothers, Christopher
…performance for a wide range of parameter values (parameter sensitivity), understanding of protocol stability and dynamics, and studying feature interactions…
Using Pot-Magnets to Enable Stable and Scalable Electromagnetic Tactile Displays.
Zarate, Juan Jose; Shea, Herbert
2017-01-01
We present the design, fabrication, characterization, and psychophysical testing of a scalable haptic display based on electromagnetic (EM) actuators. The display consists of a 4 × 4 array of taxels, each of which can be in a raised or a lowered position, thus generating different static configurations. One of the most challenging aspects when designing densely-packed arrays of EM actuators is obtaining large actuation forces while simultaneously generating only weak interactions between neighboring taxels. In this work, we introduce a lightweight and effective magnetic shielding architecture. The moving part of each taxel is a cylindrical permanent magnet embedded in a ferromagnetic pot, forming a pot-magnet. An array of planar microcoils attracts or repels each pot-magnet. This configuration reduces the interaction between neighboring magnets by more than one order of magnitude, while the coil/magnet interaction is only reduced by 10 percent. For 4 mm diameter pins on an 8 mm pitch, we obtained displacements of 0.55 mm and forces of 40 mN using 1.7 W. We measured the accuracy of human perception under two actuation configurations which differed in the force versus displacement curve. We obtained 91 percent of correct answers in pulling configuration and 100 percent in pushing configuration.
Scalable Nonlinear Solvers for Fully Implicit Coupled Nuclear Fuel Modeling. Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Xiao-Chuan; Keyes, David; Yang, Chao
2014-09-29
The focus of the project is on the development and customization of some highly scalable domain decomposition based preconditioning techniques for the numerical solution of nonlinear, coupled systems of partial differential equations (PDEs) arising from nuclear fuel simulations. These high-order PDEs represent multiple interacting physical fields (for example, heat conduction, oxygen transport, solid deformation), each modeled by a certain type of Cahn-Hilliard and/or Allen-Cahn equations. Most existing approaches involve a careful splitting of the fields and the use of field-by-field iterations to obtain a solution of the coupled problem. Such approaches have many advantages, such as ease of implementation since only single-field solvers are needed, but also exhibit disadvantages. For example, certain nonlinear interactions between the fields may not be fully captured, and for unsteady problems, stable time integration schemes are difficult to design. In addition, when implemented on large-scale parallel computers, the sequential nature of the field-by-field iterations substantially reduces the parallel efficiency. To overcome the disadvantages, fully coupled approaches have been investigated in order to obtain full physics simulations.
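As a minimal illustration of the contrast between field-by-field splitting and the fully coupled approach, the sketch below applies a coupled Newton iteration to a toy two-field nonlinear system, updating both unknowns simultaneously from one Jacobian solve. The system is invented for illustration and is unrelated to the Cahn-Hilliard/Allen-Cahn equations in the report:

```python
def newton_coupled(u, v, tol=1e-12, max_iter=50):
    """Fully coupled Newton iteration on the toy system
        f1(u, v) = u^2 + v - 2 = 0
        f2(u, v) = u + v^2 - 2 = 0
    (one solution is u = v = 1), updating both fields at once."""
    for it in range(max_iter):
        f1 = u * u + v - 2.0
        f2 = u + v * v - 2.0
        if abs(f1) + abs(f2) < tol:
            return u, v, it
        # Jacobian [[2u, 1], [1, 2v]]; solve J * delta = f by Cramer's rule.
        det = 4.0 * u * v - 1.0
        du = (2.0 * v * f1 - f2) / det
        dv = (2.0 * u * f2 - f1) / det
        u, v = u - du, v - dv
    return u, v, max_iter

u, v, iters = newton_coupled(2.0, 0.5)
```

A field-by-field iteration would instead freeze v while solving f1 for u and vice versa, which converges only linearly and can miss strong cross-field coupling; the fully coupled update retains Newton's quadratic convergence.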
2016-01-01
The development of a practical and scalable process for the asymmetric synthesis of sitagliptin is reported. Density functional theory calculations reveal that two noncovalent interactions are responsible for the high diastereoselection. The first is an intramolecular hydrogen bond between the enamide NH and the boryl mesylate S=O, consistent with MsOH being crucial for high selectivity. The second is a novel C–H···F interaction between the aryl C5-fluoride and the methyl of the mesylate ligand. PMID:25799267
KOLAM: a cross-platform architecture for scalable visualization and tracking in wide-area imagery
NASA Astrophysics Data System (ADS)
Fraser, Joshua; Haridas, Anoop; Seetharaman, Guna; Rao, Raghuveer M.; Palaniappan, Kannappan
2013-05-01
KOLAM is an open, cross-platform, interoperable, scalable and extensible framework supporting a novel multiscale spatiotemporal dual-cache data structure for big data visualization and visual analytics. This paper focuses on the use of KOLAM for target tracking in high-resolution, high throughput wide format video also known as wide-area motion imagery (WAMI). It was originally developed for the interactive visualization of extremely large geospatial imagery of high spatial and spectral resolution. KOLAM is platform, operating system and (graphics) hardware independent, and supports embedded datasets scalable from hundreds of gigabytes to feasibly petabytes in size on clusters, workstations, desktops and mobile computers. In addition to rapid roam, zoom and hyper-jump spatial operations, a large number of simultaneously viewable embedded pyramid layers (also referred to as multiscale or sparse imagery), interactive colormap and histogram enhancement, spherical projection and terrain maps are supported. The KOLAM software architecture was extended to support airborne wide-area motion imagery by organizing spatiotemporal tiles in very large format video frames using a temporal cache of tiled pyramid cached data structures. The current version supports WAMI animation, fast intelligent inspection, trajectory visualization and target tracking (digital tagging); the latter by interfacing with external automatic tracking software. One of the critical needs for working with WAMI is a supervised tracking and visualization tool that allows analysts to digitally tag multiple targets, quickly review and correct tracking results and apply geospatial visual analytic tools on the generated trajectories. One-click manual tracking combined with multiple automated tracking algorithms are available to assist the analyst and increase human effectiveness.
Shadid, J. N.; Pawlowski, R. P.; Cyr, E. C.; ...
2016-02-10
Here we discuss how the computational solution of the governing balance equations for mass, momentum, heat transfer and magnetic induction for resistive magnetohydrodynamics (MHD) systems can be extremely challenging. These difficulties arise from both the strong nonlinear, nonsymmetric coupling of fluid and electromagnetic phenomena, and the significant range of time- and length-scales that the interactions of these physical mechanisms produce. This paper explores the development of a scalable, fully implicit stabilized unstructured finite element (FE) capability for 3D incompressible resistive MHD. The discussion considers the development of a stabilized FE formulation in the context of the variational multiscale (VMS) method, and describes the scalable implicit time integration and direct-to-steady-state solution capability. The nonlinear solver strategy employs Newton–Krylov methods, which are preconditioned using fully coupled algebraic multilevel preconditioners. These preconditioners are shown to enable a robust, scalable and efficient solution approach for the large-scale sparse linear systems generated by the Newton linearization. Verification results demonstrate the expected order of accuracy for the stabilized FE discretization. The approach is tested on a variety of prototype problems that include MHD duct flows, an unstable hydromagnetic Kelvin–Helmholtz shear layer, and a 3D island coalescence problem used to model magnetic reconnection. Initial results that explore the scaling of the solution methods are also presented on up to 128K processors for problems with up to 1.8B unknowns on a Cray XK7.
Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy
NASA Astrophysics Data System (ADS)
Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli
2014-03-01
One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time that is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionality. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
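The Amdahl's-law analysis the abstract refers to is easy to reproduce; the sketch below computes the predicted multicore speedup for an assumed parallel fraction (the fraction is illustrative, not the paper's measured value):

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Amdahl's law: speedup on n cores when a fraction p of the work
    parallelizes perfectly and the remaining (1 - p) stays serial."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_cores)

# With an assumed p = 0.97, twelve cores give only ~9x, and the
# asymptotic ceiling as the core count grows is 1 / (1 - p) ~ 33x.
s12 = amdahl_speedup(0.97, 12)
limit = 1.0 / (1.0 - 0.97)
```

A measured 12-fold gain on 12 cores, as reported for the virtual-colonoscopy workloads, therefore indicates an almost fully parallelizable process (p close to 1).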
NASA Astrophysics Data System (ADS)
Manfredi, Sabato
2016-06-01
Large-scale dynamic systems are becoming highly pervasive, with applications ranging from systems biology and environmental monitoring to sensor networks and power systems. They are characterised by high dimensionality, complexity, and uncertainty in the node dynamics and interactions, and they require increasingly computationally demanding methods for analysis and control design as the network size and the complexity of node systems and interactions grow. It is therefore a challenging problem to find scalable computational methods for distributed control design of large-scale networks. In this paper, we investigate the robust distributed stabilisation problem of large-scale nonlinear multi-agent systems (briefly, MASs) composed of non-identical (heterogeneous) linear dynamical systems coupled by uncertain nonlinear time-varying interconnections. By employing Lyapunov stability theory and the linear matrix inequality (LMI) technique, new conditions are given for the distributed control design of large-scale MASs that can be easily solved with the MATLAB toolbox. The stabilisability of each node dynamic is a sufficient assumption for designing a globally stabilising distributed control. The proposed approach improves some of the existing LMI-based results on MASs by both overcoming their computational limits and extending the applicative scenario to large-scale nonlinear heterogeneous MASs. Additionally, the proposed LMI conditions are further reduced in terms of computational requirements in the case of weakly heterogeneous MASs, a common scenario in real applications where the network nodes and links are affected by parameter uncertainties.
One of the main advantages of the proposed approach is that it allows moving from a centralised towards a distributed computing architecture, so that the expensive computational workload spent solving LMIs may be shared among processors located at the networked nodes, thus improving the scalability of the approach with the network size. Finally, a numerical example shows the applicability of the proposed method and its advantage in terms of computational complexity when compared with existing approaches.
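The stabilisability assumption on each node can be verified cheaply for low-order node dynamics. As an illustrative sketch (pure Python, not the paper's LMI machinery), a 2×2 continuous-time system ẋ = Ax is Hurwitz stable, i.e. every eigenvalue has negative real part, exactly when trace(A) < 0 and det(A) > 0:

```python
def is_hurwitz_2x2(a11, a12, a21, a22):
    """Routh-Hurwitz criterion for a 2x2 matrix A: both eigenvalues
    have negative real part iff trace(A) < 0 and det(A) > 0."""
    trace = a11 + a22
    det = a11 * a22 - a12 * a21
    return trace < 0 and det > 0

# A damped oscillator node is stable; a double integrator is not.
assert is_hurwitz_2x2(0.0, 1.0, -1.0, -0.5)
assert not is_hurwitz_2x2(0.0, 1.0, 0.0, 0.0)
```

For higher-order or uncertain node dynamics one would instead solve a Lyapunov inequality numerically, which is the role the LMI toolbox plays in the paper.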
Scalable Creation of Long-Lived Multipartite Entanglement
NASA Astrophysics Data System (ADS)
Kaufmann, H.; Ruster, T.; Schmiegelow, C. T.; Luda, M. A.; Kaushal, V.; Schulz, J.; von Lindenfels, D.; Schmidt-Kaler, F.; Poschinger, U. G.
2017-10-01
We demonstrate the deterministic generation of multipartite entanglement based on scalable methods. Four qubits are encoded in 40Ca+, stored in a microstructured segmented Paul trap. These qubits are sequentially entangled by laser-driven pairwise gate operations. Between these, the qubit register is dynamically reconfigured via ion shuttling operations, where ion crystals are separated and merged, and ions are moved in and out of a fixed laser interaction zone. A sequence consisting of three pairwise entangling gates yields a four-ion Greenberger-Horne-Zeilinger state |ψ⟩ = (1/√2)(|0000⟩ + |1111⟩), and full quantum state tomography reveals a state fidelity of 94.4(3)%. We analyze the decoherence of this state and employ dynamic decoupling on the spatially distributed constituents to maintain 69(5)% coherence at a storage time of 1.1 sec.
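The gate sequence requires an ion trap, but the target state itself is easy to construct and check numerically. A hedged pure-Python sketch (no quantum library assumed) that builds the n-qubit GHZ state vector and evaluates the pure-state fidelity |⟨ψ|φ⟩|² used in such tomography comparisons:

```python
import math

def ghz_state(n):
    """State vector of the n-qubit GHZ state (|0...0> + |1...1>)/sqrt(2),
    as a list of 2**n real amplitudes in the computational basis."""
    amp = 1.0 / math.sqrt(2.0)
    state = [0.0] * (2 ** n)
    state[0] = amp      # |00...0>
    state[-1] = amp     # |11...1>
    return state

def fidelity(psi, phi):
    """|<psi|phi>|^2 for real-amplitude state vectors."""
    overlap = sum(a * b for a, b in zip(psi, phi))
    return overlap * overlap

ghz4 = ghz_state(4)
assert abs(fidelity(ghz4, ghz4) - 1.0) < 1e-12
```

For example, the overlap of the four-qubit GHZ state with the bare |0000⟩ basis state gives a fidelity of 0.5, whereas the reported experimental state reaches 0.944 against the ideal GHZ target.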
Dynamic full-scalability conversion in scalable video coding
NASA Astrophysics Data System (ADS)
Lee, Dong Su; Bae, Tae Meon; Thang, Truong Cong; Ro, Yong Man
2007-02-01
For outstanding coding efficiency with scalability functions, SVC (Scalable Video Coding) is being standardized. SVC can support spatial, temporal and SNR scalability, and these scalabilities are useful for providing smooth video streaming even in a time-varying network such as a mobile environment. But current SVC is insufficient to support dynamic video conversion with scalability, so bitrate adaptation to a fluctuating network condition is limited. In this paper, we propose dynamic full-scalability conversion methods for QoS-adaptive video streaming in SVC. To accomplish dynamic full-scalability conversion, we develop corresponding bitstream extraction, encoding and decoding schemes. At the encoder, we insert the IDR NAL periodically to solve the problems of spatial scalability conversion. At the extractor, we analyze the SVC bitstream to obtain the information that enables dynamic extraction; real-time extraction is achieved by using this information. Finally, we develop the decoder so that it can manage the changing scalability. Experimental results verified dynamic full-scalability conversion and showed that it is necessary under time-varying network conditions.
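The extractor's core decision, picking the highest-rate operating point that fits the currently measured channel, can be sketched as follows; the layer identifiers and bitrates are invented for illustration and are not the SVC standard's actual signalling:

```python
def select_layer(layers, available_kbps):
    """Pick the highest-bitrate operating point that fits the measured
    channel rate; fall back to the base layer if nothing fits.
    `layers` maps a (spatial, temporal, quality) id to bitrate in kbit/s."""
    feasible = [(rate, lid) for lid, rate in layers.items()
                if rate <= available_kbps]
    if not feasible:
        return min(layers, key=layers.get)   # base layer as last resort
    return max(feasible)[1]

# Illustrative operating points: (spatial, temporal, quality) -> kbit/s
layers = {
    (0, 0, 0): 128,    # base layer
    (0, 1, 0): 256,
    (1, 1, 0): 512,
    (1, 1, 1): 1024,
}
assert select_layer(layers, 600) == (1, 1, 0)
assert select_layer(layers, 64) == (0, 0, 0)
```

Re-running this selection as the channel estimate changes is the essence of the dynamic conversion the paper targets; the encoder-side IDR insertion is what makes switching spatial layers at those points legal.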
Three-Dimensional Online Visualization and Engagement Tools for the Geosciences
NASA Astrophysics Data System (ADS)
Cockett, R.; Moran, T.; Pidlisecky, A.
2013-12-01
Educational tools often sacrifice interactivity in favour of scalability so they can reach more users. This compromise leads to tools that may be viewed as second tier when compared to more engaging activities performed in a laboratory; however, the resources required to deliver scalable laboratory exercises are often impractical. Geoscience education is well situated to benefit from interactive online learning tools that allow users to work in a 3D environment. Visible Geology (http://3ptscience.com/visiblegeology) is an innovative web-based application designed to enable visualization of geologic structures and processes through the use of interactive 3D models. The platform allows users to conceptualize difficult, yet important, geologic principles in a scientifically accurate manner by developing unique geologic models. The environment allows students to interactively practice their visualization and interpretation skills by creating and interacting with their own models and terrains. Visible Geology has been designed from a user-centric perspective, resulting in a simple and intuitive interface. The platform directs students to build their own geologic models by adding beds and creating geologic events such as tilting, folding, or faulting. The level of ownership and interactivity encourages engagement, leading learners to discover geologic relationships on their own, in the context of guided assignments. In January 2013, an interactive geologic history assignment was developed for a 700-student introductory geology class at The University of British Columbia. The assignment required students to determine the relative age of geologic events to construct a geologic history. Traditionally this type of exercise has been taught through the use of simple geologic cross-sections showing crosscutting relationships; from these cross-sections students infer the relative age of geologic events.
In contrast, the Visible Geology assignment offers students a unique experience where they first create their own geologic events, allowing them to directly see how the timing of a geologic event manifests in the model and resulting cross-sections. By creating each geologic event in the model themselves, the students gain a deeper understanding of the processes and relative order of events. The resulting models can be shared amongst students, and provide instructors with a basis for guiding inquiry to address misconceptions. The ease of use of the assignment, including automatic assessment, made this tool practical for deployment in this 700-person class. The outcome of this type of large-scale deployment is that students, who would normally not experience a lab exercise, gain exposure to interactive 3D thinking. Engaging tools and software that put the user in control of their learning experiences are critical for moving to scalable, yet engaging, online learning environments.
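The crosscutting-relationship reasoning that both the cross-section exercise and the Visible Geology assignment teach is, computationally, a topological sort: every "A cuts B" observation constrains B to be older than A. A small illustrative sketch (the event names are invented):

```python
def relative_ages(cuts):
    """Order geologic events oldest-first given crosscutting
    observations: (a, b) means event `a` cuts -- hence postdates --
    event `b`. A simple Kahn-style topological sort."""
    older_than = {}          # event -> set of strictly older events
    for cutter, cut in cuts:
        older_than.setdefault(cutter, set()).add(cut)
        older_than.setdefault(cut, set())
    order = []
    remaining = set(older_than)
    while remaining:
        # Events all of whose known predecessors are already placed.
        ready = sorted(e for e in remaining if older_than[e] <= set(order))
        if not ready:
            raise ValueError("inconsistent crosscutting relationships")
        order.append(ready[0])
        remaining.remove(ready[0])
    return order

# A dike cuts a fault, which cuts tilted beds:
history = relative_ages([("dike", "fault"), ("fault", "beds")])
assert history == ["beds", "fault", "dike"]
```

When the observations are contradictory (a cycle), no consistent geologic history exists, which is the kind of misconception an instructor can surface from students' shared models.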
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jain, Atul K.
The overall objective of this DOE-funded project is to combine scientific and computational challenges in climate modeling by expanding our understanding of the biogeophysical-biogeochemical processes and their interactions in the northern high latitudes (NHLs) using an earth system modeling (ESM) approach, and by adopting an adaptive parallel runtime system in an ESM to achieve efficient and scalable climate simulations through improved load-balancing algorithms.
(DCT-FY08) Target Detection Using Multiple Modality Airborne and Ground Based Sensors
2013-03-01
…"Plenoptic modeling: an image-based rendering system," in SIGGRAPH '95: Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques. New York, NY, USA: ACM, 1995, pp. 39–46. [21] D. G. Aliaga and I. Carlbom, "Plenoptic stitching: a scalable method for reconstructing 3D…
Integrated Visible Photonics for Trapped-Ion Quantum Computing
2017-06-10
Kharas, Dave; Sorace-Agaskar, Cheryl; Bramhavar, Suraj; Loh, William; Sage, Jeremy M.; Juodawlkis, Paul W.; John…
…necessarily reflect the views of the Department of Defense. Abstract: A scalable trapped-ion-based quantum-computing architecture requires the… …coherence times, strong Coulomb interactions, and optical addressability, hold great promise for implementation of practical quantum information…
ERIC Educational Resources Information Center
Greaney, Mary L.; Puleo, Elaine; Bennett, Gary G.; Haines, Jess; Viswanath, K.; Gillman, Matthew W.; Sprunck-Harrild, Kim; Coeling, Molly; Rusinak, Donna; Emmons, Karen M.
2014-01-01
Background: Many U.S. adults have multiple behavioral risk factors, and effective, scalable interventions are needed to promote population-level health. In the health care setting, interventions are often provided in print; although print materials are accessible to nearly everyone, they are brief (e.g., pamphlets), are not interactive, and can require some logistics…
Scalable quantum information processing with atomic ensembles and flying photons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mei Feng; Yu Yafei; Feng Mang
2009-10-15
We present a scheme for scalable quantum information processing with atomic ensembles and flying photons. Using the Rydberg blockade, we encode the qubits in collective atomic states, which can be manipulated quickly and easily owing to the enhanced interaction compared with the single-atom case. We demonstrate that our proposed gating can be applied to the generation of two-dimensional cluster states for measurement-based quantum computation. Moreover, the atomic ensembles also function as quantum repeaters useful for long-distance quantum state transfer. We show that our scheme can work in the bad-cavity or weak-coupling regime, which could much relax the experimental requirements. The efficient coherent operations on the ensemble qubits enable our scheme to be switchable between quantum computation and quantum communication using atomic ensembles.
Scalable large format 3D displays
NASA Astrophysics Data System (ADS)
Chang, Nelson L.; Damera-Venkata, Niranjan
2010-02-01
We present a general framework for the modeling and optimization of scalable large format 3-D displays using multiple projectors. Based on this framework, we derive algorithms that can robustly optimize the visual quality of an arbitrary combination of projectors (e.g. tiled, superimposed, combinations of the two) without manual adjustment. The framework creates for the first time a new unified paradigm that is agnostic to a particular configuration of projectors yet robustly optimizes for the brightness, contrast, and resolution of that configuration. In addition, we demonstrate that our algorithms support high resolution stereoscopic video at real-time interactive frame rates achieved on commodity graphics hardware. Through complementary polarization, the framework creates high quality multi-projector 3-D displays at low hardware and operational cost for a variety of applications including digital cinema, visualization, and command-and-control walls.
Iowa State University – Final Report for SciDAC3/NUCLEI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vary, James P
The Iowa State University (ISU) contributions to the NUCLEI project are focused on developing, implementing and running an efficient and scalable configuration interaction code (Many-Fermion Dynamics – nuclear or MFDn) for leadership class supercomputers addressing forefront research problems in low-energy nuclear physics. We investigate nuclear structure and reactions with realistic nucleon-nucleon (NN) and three-nucleon (3N) interactions. We select a few highlights from our work that has produced a total of more than 82 refereed publications and more than 109 invited talks under SciDAC3/NUCLEI.
Thermodynamic effects of single-qubit operations in silicon-based quantum computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lougovski, Pavel; Peters, Nicholas A.
2018-05-21
Silicon-based quantum logic is a promising technology to implement universal quantum computing. It is widely believed that a millikelvin cryogenic environment will be necessary to accommodate silicon-based qubits. This prompts a question of the ultimate scalability of the technology due to finite cooling capacity of refrigeration systems. In this work, we answer this question by studying energy dissipation due to interactions between nuclear spin impurities and qubit control pulses. Furthermore, we demonstrate that this interaction constrains the sustainable number of single-qubit operations per second for a given cooling capacity.
NASA Astrophysics Data System (ADS)
Jack-Scott, E.; Arnott, J. C.; Katzenberger, J.; Davis, S. J.; Delman, E.
2015-12-01
It has been a generational challenge to simultaneously meet the world's energy requirements, while remaining within the bounds of acceptable cost and environmental impact. To this end, substantial research has explored various energy futures on a global scale, leaving decision-makers and the public overwhelmed by information on energy options. In response, this interactive energy table was developed as a comprehensive resource through which users can explore the availability, scalability, and growth potentials of all energy technologies currently in use or development. Extensive research from peer-reviewed papers and reports was compiled and summarized, detailing technology costs, technical considerations, imminent breakthroughs, and obstacles to integration, as well as political, social, and environmental considerations. Energy technologies fall within categories of coal, oil, natural gas, nuclear, solar, wind, hydropower, ocean, geothermal and biomass. In addition to 360 expandable cells of cited data, the interactive table also features educational windows with background information on each energy technology. The table seeks not to advocate for specific energy futures, but to succinctly and accurately centralize peer-reviewed research and information in an interactive, accessible resource. With this tool, decision-makers, researchers and the public alike can explore various combinations of energy technologies and their quantitative and qualitative attributes that can satisfy the world's total primary energy supply (TPES) while making progress towards a near zero carbon future.
An Extensible Sensing and Control Platform for Building Energy Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rowe, Anthony; Berges, Mario; Martin, Christopher
2016-04-03
The goal of this project is to develop Mortar.io, an open-source BAS platform designed to simplify data collection, archiving, event scheduling and coordination of cross-system interactions. Mortar.io is optimized for (1) robustness to network outages, (2) ease of installation using plug-and-play and (3) scalable support for small to large buildings and campuses.
Error Awareness and Recovery in Conversational Spoken Language Interfaces
2007-05-01
portant step towards constructing autonomously self-improving systems. Furthermore, we developed a scalable, data-driven approach that allows a system...problems in spoken dialog (as well as other interactive systems) and constitutes an important step towards building autonomously self-improving...implicitly-supervised learning approach is applicable to other problems, and represents an important step towards developing autonomous, self
Sharing knowledge with the public during a crisis: NASA's public portal
NASA Technical Reports Server (NTRS)
Holm, Jeanne
2003-01-01
This case study looks at integrating web governance policies and procedures, migrating to a single content management solution, and integrating best-of-breed technology with high-impact, interactive components. In particular, this case study is interesting for the dynamic scalability of this application to meet the needs of an organization on the front lines during a crisis.
Long-range interactions and parallel scalability in molecular simulations
NASA Astrophysics Data System (ADS)
Patra, Michael; Hyvönen, Marja T.; Falck, Emma; Sabouri-Ghomi, Mohsen; Vattulainen, Ilpo; Karttunen, Mikko
2007-01-01
Typical biomolecular systems such as cellular membranes, DNA, and protein complexes are highly charged. Thus, efficient and accurate treatment of electrostatic interactions is of great importance in computational modeling of such systems. We have employed the GROMACS simulation package to perform extensive benchmarking of different commonly used electrostatic schemes on a range of computer architectures (Pentium-4, IBM Power 4, and Apple/IBM G5) for single-processor and parallel performance up to 8 nodes. We have also tested the scalability on four different networks, namely InfiniBand, Gigabit Ethernet, Fast Ethernet, and nearly uniform memory architecture, i.e. communication between CPUs is possible by directly reading from or writing to other CPUs' local memory. It turns out that the particle-mesh Ewald method (PME) performs surprisingly well and offers competitive performance unless parallel runs on PC hardware with older network infrastructure are needed. Lipid bilayers of sizes 128, 512, and 2048 lipid molecules were used as the test systems representing typical cases encountered in biomolecular simulations. Our results enable an accurate prediction of computational speed on most current computing systems, both for serial and parallel runs. These results should be helpful in, for example, choosing the most suitable configuration for a small departmental computer cluster.
SuperDCA for genome-wide epistasis analysis.
Puranen, Santeri; Pesonen, Maiju; Pensar, Johan; Xu, Ying Ying; Lees, John A; Bentley, Stephen D; Croucher, Nicholas J; Corander, Jukka
2018-05-29
The potential for genome-wide modelling of epistasis has recently surfaced given the possibility of sequencing densely sampled populations and the emerging families of statistical interaction models. Direct coupling analysis (DCA) has previously been shown to yield valuable predictions for single protein structures, and has recently been extended to genome-wide analysis of bacteria, identifying novel interactions in the co-evolution between resistance, virulence and core genome elements. However, earlier computational DCA methods have not been scalable to enable model fitting simultaneously to 10^4-10^5 polymorphisms, representing the amount of core genomic variation observed in analyses of many bacterial species. Here, we introduce a novel inference method (SuperDCA) that employs a new scoring principle, efficient parallelization, optimization and filtering on phylogenetic information to achieve scalability for up to 10^5 polymorphisms. Using two large population samples of Streptococcus pneumoniae, we demonstrate the ability of SuperDCA to make additional significant biological findings about this major human pathogen. We also show that our method can uncover signals of selection that are not detectable by genome-wide association analysis, even though our analysis does not require phenotypic measurements. SuperDCA, thus, holds considerable potential in building understanding about numerous organisms at a systems biological level.
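SuperDCA's pseudolikelihood-based scoring is not reproduced here; as a toy stand-in for the underlying idea of scoring co-evolving polymorphism pairs from an alignment, the sketch below computes naive mutual-information couplings between columns (all column data are invented; real DCA additionally disentangles direct from indirect couplings, which mutual information cannot do):

```python
import math
from collections import Counter

def mutual_information(col_a, col_b):
    """Naive co-evolution score: mutual information between two alignment columns."""
    n = len(col_a)
    pa, pb = Counter(col_a), Counter(col_b)
    pab = Counter(zip(col_a, col_b))
    mi = 0.0
    for (a, b), c in pab.items():
        # p(a,b) * log( p(a,b) / (p(a) p(b)) ), with counts converted to frequencies
        mi += (c / n) * math.log(c * n / (pa[a] * pb[b]))
    return mi

# Invented toy columns: the first two co-vary perfectly, the third is independent
col_covary_a = "AAAACCCC"
col_covary_b = "GGGGTTTT"
col_indep = "ACACACAC"

high = mutual_information(col_covary_a, col_covary_b)  # log(2): perfect covariation
low = mutual_information(col_covary_a, col_indep)      # 0: independent columns
```

A genome-wide scan would evaluate such a score over all ~10^10 column pairs for 10^5 polymorphisms, which is why the parallelization and filtering described above matter.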
Rethinking Visual Analytics for Streaming Data Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crouser, R. Jordan; Franklin, Lyndsey; Cook, Kris
In the age of data science, the use of interactive information visualization techniques has become increasingly ubiquitous. From online scientific journals to the New York Times graphics desk, the utility of interactive visualization for both storytelling and analysis has become ever more apparent. As these techniques have become more readily accessible, the appeal of combining interactive visualization with computational analysis continues to grow. Arising out of a need for scalable, human-driven analysis, the primary objective of visual analytics systems is to capitalize on the complementary strengths of human and machine analysis, using interactive visualization as a medium for communication between the two. These systems leverage developments from the fields of information visualization, computer graphics, machine learning, and human-computer interaction to support insight generation in areas where purely computational analyses fall short. Over the past decade, visual analytics systems have generated remarkable advances in many historically challenging analytical contexts. These include areas such as modeling political systems [Crouser et al. 2012], detecting financial fraud [Chang et al. 2008], and cybersecurity [Harrison et al. 2012]. In each of these contexts, domain expertise and human intuition is a necessary component of the analysis. This intuition is essential to building trust in the analytical products, as well as supporting the translation of evidence into actionable insight. In addition, each of these examples also highlights the need for scalable analysis. In each case, it is infeasible for a human analyst to manually assess the raw information unaided, and the communication overhead to divide the task between a large number of analysts makes simple parallelism intractable.
Regardless of the domain, visual analytics tools strive to optimize the allocation of human analytical resources, and to streamline the sensemaking process on data that is massive, complex, incomplete, and uncertain in scenarios requiring human judgment.
Introducing a distributed unstructured mesh into gyrokinetic particle-in-cell code, XGC
NASA Astrophysics Data System (ADS)
Yoon, Eisung; Shephard, Mark; Seol, E. Seegyoung; Kalyanaraman, Kaushik
2017-10-01
XGC has shown good scalability on large leadership supercomputers. The current production version uses a copy of the entire unstructured finite element mesh on every MPI rank. Although this is an obvious scalability issue if mesh sizes are to be dramatically increased, the current approach is also not optimal with respect to data locality of particles and mesh information. To address these issues we have initiated the development of a distributed-mesh PIC method. This approach directly addresses the base scalability issue with respect to mesh size and, through the use of a mesh-entity-centric view of the particle-mesh relationship, provides opportunities to address the data locality needs of many-core and GPU-supported heterogeneous systems. The parallel mesh PIC capabilities are being built on the Parallel Unstructured Mesh Infrastructure (PUMI). The presentation will first overview the form of mesh distribution used and indicate the structures and functions used to support the mesh, the particles, and their interaction. Attention will then focus on the node-level optimizations being carried out to ensure performant operation of all PIC operations on the distributed mesh. Partnership for Edge Physics Simulation (EPSI) Grant No. DE-SC0008449 and Center for Extended Magnetohydrodynamic Modeling (CEMM) Grant No. DE-SC0006618.
A scalable strategy for high-throughput GFP tagging of endogenous human proteins.
Leonetti, Manuel D; Sekine, Sayaka; Kamiyama, Daichi; Weissman, Jonathan S; Huang, Bo
2016-06-21
A central challenge of the postgenomic era is to comprehensively characterize the cellular role of the ∼20,000 proteins encoded in the human genome. To systematically study protein function in a native cellular background, libraries of human cell lines expressing proteins tagged with a functional sequence at their endogenous loci would be very valuable. Here, using electroporation of Cas9 nuclease/single-guide RNA ribonucleoproteins and taking advantage of a split-GFP system, we describe a scalable method for the robust, scarless, and specific tagging of endogenous human genes with GFP. Our approach requires no molecular cloning and allows a large number of cell lines to be processed in parallel. We demonstrate the scalability of our method by targeting 48 human genes and show that the resulting GFP fluorescence correlates with protein expression levels. We next present how our protocols can be easily adapted for the tagging of a given target with GFP repeats, critically enabling the study of low-abundance proteins. Finally, we show that our GFP tagging approach allows the biochemical isolation of native protein complexes for proteomic studies. Taken together, our results pave the way for the large-scale generation of endogenously tagged human cell lines for the proteome-wide analysis of protein localization and interaction networks in a native cellular context.
Scalable Method to Produce Biodegradable Nanoparticles that Rapidly Penetrate Human Mucus
Xu, Qingguo; Boylan, Nicholas J.; Cai, Shutian; Miao, Bolong; Patel, Himatkumar; Hanes, Justin
2013-01-01
Mucus typically traps and rapidly removes foreign particles from the airways, gastrointestinal tract, nasopharynx, female reproductive tract and the surface of the eye. Nanoparticles capable of rapid penetration through mucus can potentially avoid rapid clearance, and open significant opportunities for controlled drug delivery at mucosal surfaces. Here, we report an industrially scalable emulsification method to produce biodegradable mucus-penetrating particles (MPP). The emulsification of diblock copolymers of poly(lactic-co-glycolic acid) and polyethylene glycol (PLGA-PEG) using low molecular weight (MW) emulsifiers forms dense brush PEG coatings on nanoparticles that allow rapid nanoparticle penetration through fresh undiluted human mucus. In comparison, conventional high MW emulsifiers, such as polyvinyl alcohol (PVA), interrupt the PEG coating on nanoparticles, resulting in their immobilization in mucus owing to adhesive interactions with mucus mesh elements. PLGA-PEG nanoparticles with a wide range of PEG MW (1, 2, 5, and 10 kDa), prepared by the emulsification method using low MW emulsifiers, all rapidly penetrated mucus. A range of drugs, from hydrophobic small molecules to hydrophilic large biologics, can be efficiently loaded into biodegradable MPP using the method described. This readily scalable method should facilitate the production of MPP products for mucosal drug delivery, as well as potentially longer-circulating particles following intravenous administration. PMID:23751567
Algorithmically scalable block preconditioner for fully implicit shallow-water equations in CAM-SE
Lott, P. Aaron; Woodward, Carol S.; Evans, Katherine J.
2014-10-19
Performing accurate and efficient numerical simulation of global atmospheric climate models is challenging due to the disparate length and time scales over which physical processes interact. Implicit solvers enable the physical system to be integrated with a time step commensurate with processes being studied. The dominant cost of an implicit time step is the ancillary linear system solves, so we have developed a preconditioner aimed at improving the efficiency of these linear system solves. Our preconditioner is based on an approximate block factorization of the linearized shallow-water equations and has been implemented within the spectral element dynamical core within the Community Atmospheric Model (CAM-SE). Furthermore, in this paper we discuss the development and scalability of the preconditioner for a suite of test cases with the implicit shallow-water solver within CAM-SE.
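A minimal sketch of the block-factorization idea, not the CAM-SE preconditioner itself (all matrix values are invented): for a 2x2-block system M = [[A, B], [C, D]], preconditioning with the block upper-triangular factor P = [[A, B], [0, S]] built from the exact Schur complement S = D - C A⁻¹ B makes I - P⁻¹M nilpotent, so a simple Richardson iteration converges in two steps. In practice S is only approximated, trading this exactness for cheaper solves:

```python
def mat_vec(M, v):
    return [sum(a * b for a, b in zip(row, v)) for row in M]

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def mat_sub(X, Y):
    return [[a - b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

def solve2(M, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(b[0] * M[1][1] - M[0][1] * b[1]) / det,
            (M[0][0] * b[1] - b[0] * M[1][0]) / det]

def inv2(M):
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det, M[0][0] / det]]

# Invented 2x2-block system M = [[A, B], [C, D]]
A = [[4.0, 1.0], [1.0, 3.0]]
B = [[1.0, 0.0], [0.0, 1.0]]
C = [[0.0, 1.0], [1.0, 0.0]]
D = [[5.0, 0.0], [0.0, 5.0]]
M = [[4.0, 1.0, 1.0, 0.0],
     [1.0, 3.0, 0.0, 1.0],
     [0.0, 1.0, 5.0, 0.0],
     [1.0, 0.0, 0.0, 5.0]]

# Schur complement S = D - C A^{-1} B; preconditioner P = [[A, B], [0, S]]
S = mat_sub(D, mat_mul(C, mat_mul(inv2(A), B)))

def apply_precond(r):
    """Solve P z = r by block back-substitution."""
    r1, r2 = r[:2], r[2:]
    z2 = solve2(S, r2)
    t = mat_vec(B, z2)
    z1 = solve2(A, [a - b for a, b in zip(r1, t)])
    return z1 + z2

b = [1.0, 2.0, 3.0, 4.0]
x = [0.0, 0.0, 0.0, 0.0]
for _ in range(2):  # with the exact Schur complement, two steps suffice
    r = [bi - mi for bi, mi in zip(b, mat_vec(M, x))]
    x = [xi + zi for xi, zi in zip(x, apply_precond(r))]
residual = max(abs(bi - mi) for bi, mi in zip(b, mat_vec(M, x)))  # ~0
```

The approximation quality of S is exactly what governs how many Krylov iterations the real solver needs per implicit time step.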
Molecular nanomagnets with switchable coupling for quantum simulation
Chiesa, Alessandro; Whitehead, George F. S.; Carretta, Stefano; ...
2014-12-11
Molecular nanomagnets are attractive candidate qubits because of their wide inter- and intra-molecular tunability. Uniform magnetic pulses could be exploited to implement one- and two-qubit gates in the presence of a properly engineered pattern of interactions, but the synthesis of suitable and potentially scalable supramolecular complexes has proven a very hard task. Indeed, no quantum algorithms have ever been implemented, not even a proof-of-principle two-qubit gate. In this paper we show that the magnetic couplings in two supramolecular {Cr7Ni}-Ni-{Cr7Ni} assemblies can be chemically engineered to fit the above requisites for conditional gates with no need of local control. Microscopic parameters are determined by a recently developed many-body ab-initio approach and used to simulate quantum gates. We find that these systems are optimal for proof-of-principle two-qubit experiments and can be exploited as building blocks of scalable architectures for quantum simulation.
Modeling Wind Wave Evolution from Deep to Shallow Water
2012-09-30
WORK COMPLETED Development of a Lumped Quadruplet Approximation (LQA) A scalable parameterization of non-linear four-wave interactions is being...what we refer to as the Lumped Quadruplet Approximation (LQA), in which discrete contributions on the locus are treated as individual wave number...includes inhomogeneous wave fields, but is compatible with the action balance generally used in operational wave models. RESULTS Development LQA
Scalable Spin-Qubit Circuits with Quantum Dots
2006-12-31
Kondo entanglement” Phys. Rev. B 75, 035332 (2007). 14. W. A. Coish, Vitaly N. Golovach, J. Carlos Egues, Daniel Loss, “Measurement, control, and...Spin-orbit interaction in symmetric wells and cycloidal orbits without magnetic fields”, cond-mat/0607218. 16. Mircea Trif, Vitaly N. Golovach, Daniel...195-199 (2006); Supplementary Information. 22. Vitaly N. Golovach, Massoud Borhani, Daniel Loss, “Electric Dipole Induced Spin Resonance in Quantum
Department of Defense High Performance Computing Modernization Program. 2006 Annual Report
2007-03-01
Department. We successfully completed several software development projects that introduced parallel, scalable production software now in use across the...imagined. They are developing and deploying weather and ocean models that allow our soldiers, sailors, marines and airmen to plan missions more effectively...and to navigate adverse environments safely. They are modeling molecular interactions leading to the development of higher energy fuels, munitions
Wang, Sibo; Wu, Yunchao; Miao, Ran; ...
2017-07-26
Scalable and cost-effective synthesis and assembly of technologically important nanostructures in three-dimensional (3D) substrates hold keys to bridge the demonstrated nanotechnologies in academia with industrially relevant scalable manufacturing. In this paper, using ZnO nanorod arrays as an example, a hydrothermal-based continuous flow synthesis (CFS) method is successfully used to integrate the nano-arrays in multi-channeled monolithic cordierite. Compared to the batch process, CFS enhances the average growth rate of nano-arrays by 125%, with the average length increasing from 2 μm to 4.5 μm within the same growth time of 4 hours. The precursor utilization efficiency of CFS is enhanced by 9 times compared to that of the batch process by preserving the majority of precursors in recyclable solution. Computational fluid dynamic simulation suggests a steady-state solution flow and mass transport inside the channels of honeycomb substrates, giving rise to steady and consecutive growth of ZnO nano-arrays with an average length of 10 μm in 12 h. The monolithic ZnO nano-array-integrated cordierite obtained through CFS shows enhanced low-temperature (200 °C) desulfurization capacity and recyclability in comparison to ZnO powder wash-coated cordierite. This can be attributed to exposed ZnO {101̄0} planes, better dispersion and stronger interactions between sorbent and reactant in the ZnO nanorod arrays, as well as the sintering-resistance of nano-array configurations during sulfidation-regeneration cycles. Finally, with the demonstrated scalable synthesis and desulfurization performance of ZnO nano-arrays, a promising, industrially relevant integration strategy is provided to fabricate metal oxide nano-array-based monolithic devices for various environmental and energy applications.
Wolfram technologies as an integrated scalable platform for interactive learning
NASA Astrophysics Data System (ADS)
Kaurov, Vitaliy
2012-02-01
We rely on technology profoundly with the prospect of even greater integration in the future. Well known challenges in education are a technology-inadequate curriculum and many software platforms that are difficult to scale or interconnect. We'll review an integrated technology, much of it free, that addresses these issues for individuals and small schools as well as for universities. Topics include: Mathematica, a programming environment that offers a diverse range of functionality; natural language programming for getting started quickly and accessing data from Wolfram|Alpha; quick and easy construction of interactive courseware and scientific applications; partnering with publishers to create interactive e-textbooks; course assistant apps for mobile platforms; the computable document format (CDF); teacher-student and student-student collaboration on interactive projects and web publishing at the Wolfram Demonstrations site.
NASA Technical Reports Server (NTRS)
Mohr, Karen Irene; Tao, Wei-Kuo; Chern, Jiun-Dar; Kumar, Sujay V.; Peters-Lidard, Christa D.
2013-01-01
The present generation of general circulation models (GCM) use parameterized cumulus schemes and run at hydrostatic grid resolutions. To improve the representation of cloud-scale moist processes and land-atmosphere interactions, a global, Multi-scale Modeling Framework (MMF) coupled to the Land Information System (LIS) has been developed at NASA-Goddard Space Flight Center. The MMF-LIS has three components, a finite-volume (fv) GCM (Goddard Earth Observing System Ver. 4, GEOS-4), a 2D cloud-resolving model (Goddard Cumulus Ensemble, GCE), and the LIS, representing the large-scale atmospheric circulation, cloud processes, and land surface processes, respectively. The non-hydrostatic GCE model replaces the single-column cumulus parameterization of fvGCM. The model grid is composed of an array of fvGCM gridcells, each with a series of embedded GCE models. A horizontal coupling strategy, GCE-fvGCM-Coupler-LIS, offered significant computational efficiency, with the scalability and I/O capabilities of LIS permitting land-atmosphere interactions at cloud-scale. Global simulations of 2007-2008 and comparisons to observations and reanalysis products were conducted. Using two different versions of the same land surface model but the same initial conditions, divergence in regional, synoptic-scale surface pressure patterns emerged within two weeks. The sensitivity of large-scale circulations to land surface model physics revealed significant functional value to using a scalable, multi-model land surface modeling system in global weather and climate prediction.
A Scalable Approach for Discovering Conserved Active Subnetworks across Species
Verfaillie, Catherine M.; Hu, Wei-Shou; Myers, Chad L.
2010-01-01
Overlaying differential changes in gene expression on protein interaction networks has proven to be a useful approach to interpreting the cell's dynamic response to a changing environment. Despite successes in finding active subnetworks in the context of a single species, the idea of overlaying lists of differentially expressed genes on networks has not yet been extended to support the analysis of multiple species' interaction networks. To address this problem, we designed a scalable, cross-species network search algorithm, neXus (Network - cross(X)-species - Search), that discovers conserved, active subnetworks based on parallel differential expression studies in multiple species. Our approach leverages functional linkage networks, which provide more comprehensive coverage of functional relationships than physical interaction networks by combining heterogeneous types of genomic data. We applied our cross-species approach to identify conserved modules that are differentially active in stem cells relative to differentiated cells based on parallel gene expression studies and functional linkage networks from mouse and human. We find hundreds of conserved active subnetworks enriched for stem cell-associated functions such as cell cycle, DNA repair, and chromatin modification processes. Using a variation of this approach, we also find a number of species-specific networks, which likely reflect mechanisms of stem cell function that have diverged between mouse and human. We assess the statistical significance of the subnetworks by comparing them with subnetworks discovered on random permutations of the differential expression data. We also describe several case examples that illustrate the utility of comparative analysis of active subnetworks. PMID:21170309
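The neXus algorithm itself is not reproduced here; as a hedged, single-species sketch of the general active-subnetwork idea it builds on, the following grows a connected subnetwork from a seed gene, greedily adding the neighbor that most improves an aggregate activity score (the network, z-scores, and the sum/sqrt(k) score form are all invented for illustration):

```python
import math

def greedy_active_subnetwork(edges, z, seed):
    """Grow a connected subnetwork from `seed`, greedily adding the neighbor that
    most improves the aggregate activity score sum(z) / sqrt(|subnetwork|)."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    sub = {seed}
    score = z[seed]
    while True:
        frontier = set().union(*(adj[u] for u in sub)) - sub
        best, best_score = None, score
        for v in frontier:
            s = sum(z[u] for u in sub | {v}) / math.sqrt(len(sub) + 1)
            if s > best_score:
                best, best_score = v, s
        if best is None:       # no neighbor improves the score: stop
            return sub, score
        sub.add(best)
        score = best_score

# Invented toy network with differential-expression z-scores per gene
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("b", "e")]
z = {"a": 2.0, "b": 1.5, "c": 1.8, "d": -1.0, "e": 0.1}
sub, score = greedy_active_subnetwork(edges, z, "a")
# sub == {"a", "b", "c"}: adding "d" or "e" would lower the aggregate score
```

The cross-species extension described above would additionally require the grown modules to align across the two species' linkage networks, and the permutation test compares scores like this one against scores on shuffled z-values.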
Scalable Matrix Algorithms for Interactive Analytics of Very Large Informatics Graphs
2017-06-14
information networks. Depending on the situation, these larger networks may not fit on a single machine. Although we considered traditional matrix and graph...
Scalable and Axiomatic Ranking of Network Role Similarity
Jin, Ruoming; Lee, Victor E.; Li, Longjie
2014-01-01
A key task in analyzing social networks and other complex networks is role analysis: describing and categorizing nodes according to how they interact with other nodes. Two nodes have the same role if they interact with equivalent sets of neighbors. The most fundamental role equivalence is automorphic equivalence. Unfortunately, the fastest algorithms known for graph automorphism are nonpolynomial. Moreover, since exact equivalence is rare, a more meaningful task is measuring the role similarity between any two nodes. This task is closely related to the structural or link-based similarity problem that SimRank addresses. However, SimRank and other existing similarity measures are not sufficient because they do not guarantee to recognize automorphically or structurally equivalent nodes. This paper makes two contributions. First, we present and justify several axiomatic properties necessary for a role similarity measure or metric. Second, we present RoleSim, a new similarity metric which satisfies these axioms and which can be computed with a simple iterative algorithm. We rigorously prove that RoleSim satisfies all these axiomatic properties. We also introduce Iceberg RoleSim, a scalable algorithm which discovers all pairs with RoleSim scores above a user-defined threshold θ. We demonstrate the interpretative power of RoleSim on both synthetic and real datasets. PMID:25383066
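A compact sketch of the iterative update behind a RoleSim-style metric (this uses a greedy rather than an exact maximal weighted matching of neighbors, so it is only an approximation of the published metric; beta stands in for its decay parameter). On a 5-node path graph, the automorphically equivalent endpoint pair and the equivalent interior pair both keep similarity 1:

```python
def rolesim(adj, beta=0.15, iters=25):
    """Iterative RoleSim-style similarity with a greedy neighbor matching.
    beta plays the role of the decay/damping parameter in the published metric."""
    nodes = sorted(adj)
    sim = {(u, v): 1.0 for u in nodes for v in nodes}  # optimistic initialization
    for _ in range(iters):
        new = {}
        for u in nodes:
            for v in nodes:
                nu, nv = adj[u], adj[v]
                if not nu or not nv:
                    new[(u, v)] = beta  # convention chosen here for isolated nodes
                    continue
                # greedy (approximate) maximal weighted matching of the two
                # neighbor sets: take highest-similarity disjoint pairs first
                pairs = sorted(((sim[(x, y)], x, y) for x in nu for y in nv),
                               reverse=True)
                used_x, used_y, w, m = set(), set(), 0.0, 0
                for s, x, y in pairs:
                    if x not in used_x and y not in used_y:
                        used_x.add(x)
                        used_y.add(y)
                        w += s
                        m += 1
                new[(u, v)] = (1 - beta) * w / (len(nu) + len(nv) - m) + beta
        sim = new
    return sim

# Path graph a-b-c-d-e: a/e and b/d are automorphically equivalent roles
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c", "e"}, "e": {"d"}}
sim = rolesim(adj)
# sim[("a", "e")] and sim[("b", "d")] stay at 1.0; non-equivalent pairs drop below 1
```

Iceberg RoleSim, as described in the abstract, avoids computing this full pairwise table by pruning pairs that cannot exceed the threshold θ.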
Novel high-fidelity realistic explosion damage simulation for urban environments
NASA Astrophysics Data System (ADS)
Liu, Xiaoqing; Yadegar, Jacob; Zhu, Youding; Raju, Chaitanya; Bhagavathula, Jaya
2010-04-01
Realistic building damage simulation has a significant impact on modern modeling and simulation systems, especially in the diverse panoply of military and civil applications where these simulation systems are widely used for personnel training, critical mission planning, disaster management, etc. Realistic building damage simulation should incorporate accurate physics-based explosion models, rubble generation, rubble flyout, and interactions between flying rubble and surrounding entities. However, none of the existing building damage simulation systems achieves the degree of realism required for effective military applications. In this paper, we present a novel physics-based, high-fidelity, and runtime-efficient explosion simulation system that realistically simulates destruction to buildings. In the proposed system, a family of novel blast models is applied to accurately and realistically simulate explosions based on static and/or dynamic detonation conditions. The system also accounts for rubble pile formation and applies a generic and scalable multi-component-based object representation to describe scene entities, together with a highly scalable agent-subsumption architecture and scheduler to schedule clusters of sequential and parallel events. The proposed system utilizes a highly efficient and scalable tetrahedral decomposition approach to realistically simulate rubble formation. Experimental results demonstrate that the proposed system is capable of realistically simulating rubble generation, rubble flyout, and their primary and secondary impacts on surrounding objects, including buildings, constructions, vehicles, and pedestrians, in clusters of sequential and parallel damage events.
The TOTEM DAQ based on the Scalable Readout System (SRS)
NASA Astrophysics Data System (ADS)
Quinto, Michele; Cafagna, Francesco S.; Fiergolski, Adrian; Radicioni, Emilio
2018-02-01
The TOTEM (TOTal cross section, Elastic scattering and diffraction dissociation Measurement at the LHC) experiment at the LHC has been designed to measure the total proton-proton cross-section and study elastic and diffractive scattering at LHC energies. In order to cope with the increased machine luminosity and the higher statistics required by the extension of the TOTEM physics program approved for the LHC's Run Two phase, the previous VME-based data acquisition system has been replaced with a new one based on the Scalable Readout System. The system features an aggregated data throughput of 2 GB/s towards the online storage system. This makes it possible to sustain a maximum trigger rate of ~24 kHz, to be compared with the 1 kHz rate of the previous system. The trigger rate is further improved by implementing zero-suppression and second-level hardware algorithms in the Scalable Readout System. The new system fulfils the requirements for increased efficiency, providing higher bandwidth and increasing the purity of the recorded data. Moreover, full compatibility has been guaranteed with the legacy front-end hardware, as well as with the DAQ interface of the CMS experiment and with the LHC's Timing, Trigger and Control distribution system. In this contribution we describe in detail the architecture of the full system and its performance as measured during the commissioning phase at the LHC Interaction Point.
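The zero-suppression mentioned above is conceptually simple; a generic software sketch (not the TOTEM firmware; channel values and threshold are invented) shows how discarding below-threshold channels shrinks each readout frame and thus raises the sustainable trigger rate for a fixed bandwidth:

```python
def zero_suppress(samples, threshold):
    """Keep only (channel, adc) pairs whose value exceeds the threshold."""
    return [(ch, adc) for ch, adc in enumerate(samples) if adc > threshold]

# One invented readout frame: mostly pedestal noise, two real hits
frame = [0, 0, 3, 0, 0, 41, 2, 0, 0, 37, 0, 0]
hits = zero_suppress(frame, 5)  # [(5, 41), (9, 37)]
```

Here a 12-channel frame compresses to two channel/value pairs; in hardware the same selection runs in FPGA logic before the data reach the network.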
Perovskite Technology is Scalable, But Questions Remain about the Best Methods
News Release (NREL): NREL researchers examined potential scalable deposition methods for perovskite technology to be used on larger surfaces.
Quality Scalability Aware Watermarking for Visual Content.
Bhowmik, Deepayan; Abhayaratne, Charith
2016-11-01
Scalable coding-based content adaptation poses serious challenges to traditional watermarking algorithms, which do not consider the scalable coding structure and hence cannot guarantee correct watermark extraction in the media consumption chain. In this paper, we propose a novel concept of scalable blind watermarking that ensures more robust watermark extraction at various compression ratios while not affecting the visual quality of the host media. The proposed algorithm generates a scalable and robust watermarked image code-stream that allows the user to constrain embedding distortion for target content adaptations. The watermarked image code-stream consists of hierarchically nested joint distortion-robustness coding atoms. The code-stream is generated by a new wavelet-domain blind watermarking algorithm guided by a quantization-based binary tree. The code-stream can be truncated at any distortion-robustness atom to generate the watermarked image with the desired distortion-robustness requirements. A blind extractor is capable of extracting watermark data from the watermarked images. The algorithm is further extended to incorporate a bit-plane discarding-based quantization model used in scalable coding-based content adaptation, e.g., JPEG2000. This improves the robustness against the quality scalability of JPEG2000 compression. The simulation results verify the feasibility of the proposed concept, its applications, and its improved robustness against quality-scalable content adaptation. Our proposed algorithm also outperforms existing methods, showing a 35% improvement. In terms of robustness to quality-scalable video content adaptation using Motion JPEG2000 and wavelet-based scalable video coding, the proposed method shows major improvement for video watermarking.
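The quantization-driven blind embedding the abstract describes belongs to the broader family of quantization index modulation (QIM) schemes. As a hedged illustration of that general idea only (not the paper's wavelet-domain algorithm or its binary-tree construction), a blind embed/extract pair on scalar coefficients might look like:

```python
# Generic QIM-style sketch: embed one bit per coefficient by snapping it to
# one of two interleaved quantization lattices; extraction is blind (needs
# only the step size). Step size and coefficients are invented values.

def embed(coeff, bit, step=8.0):
    """Quantize a coefficient onto the lattice selected by the bit."""
    offset = step / 2.0 if bit else 0.0
    return round((coeff - offset) / step) * step + offset

def extract(coeff, step=8.0):
    """Decode the bit from whichever lattice is nearest."""
    r = coeff % step
    return 1 if abs(r - step / 2.0) < min(r, step - r) else 0

bits = [1, 0, 1, 1, 0]
coeffs = [13.2, -7.9, 41.0, 5.5, 28.3]
marked = [embed(c, b) for c, b in zip(coeffs, bits)]
# mild distortion (stand-in for quality-scalable re-coding) under step/4
noisy = [c + d for c, d in zip(marked, [1.5, -1.2, 0.9, -1.8, 1.1])]
print([extract(c) for c in noisy])  # -> recovers [1, 0, 1, 1, 0]
```

The robustness/distortion trade-off the abstract's "coding atoms" expose corresponds here to the step size: a larger step survives coarser re-quantization at the cost of more embedding distortion.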
Kazmi, S M Shams; Richards, Lisa M; Schrandt, Christian J; Davis, Mitchell A; Dunn, Andrew K
2015-01-01
Laser speckle contrast imaging (LSCI) provides a rapid characterization of cortical flow dynamics for functional monitoring of the microcirculation. The technique stems from interactions of laser light with moving particles. These interactions encode the encountered Doppler phenomena within a random interference pattern imaged in widefield, known as laser speckle. Studies of neurovascular function and coupling with LSCI have benefited from the real-time characterization of functional dynamics in the laboratory setting through quantification of perfusion dynamics. While the technique has largely been relegated to acute small animal imaging, its scalability is being assessed and characterized for both chronic and clinical neurovascular imaging. PMID:25944593
Entanglement classification with matrix product states
NASA Astrophysics Data System (ADS)
Sanz, M.; Egusquiza, I. L.; di Candia, R.; Saberi, H.; Lamata, L.; Solano, E.
2016-07-01
We propose an entanglement classification for symmetric quantum states based on their diagonal matrix-product-state (MPS) representation. The proposed classification, which preserves the stochastic local operation assisted with classical communication (SLOCC) criterion, relates entanglement families to the interaction length of Hamiltonians. In this manner, we establish a connection between entanglement classification and condensed matter models from a quantum information perspective. Moreover, we introduce a scalable nesting property for the proposed entanglement classification, in which the families for N parties carry over to the N + 1 case. Finally, using techniques from algebraic geometry, we prove that the minimal nontrivial interaction length n for any symmetric state is bounded by .
Improving Big Data Visual Analytics with Interactive Virtual Reality
2015-05-22
gain a better understanding of data include scalable zooms, dynamic filtering, and annotation. Below, we describe some tasks that can be performed ... pages 609-614. IEEE, 2014. [13] Matt R Fetterman, Zachary J Weber, Robert Freking, Alessio Volpe, D Scott, et al. Luminocity: a 3D printed, illuminated ... IBM Institute for Business Value executive report, IBM Institute for Business Value, 2012. [24] James J Thomas. Illuminating the path: [the research and ...
2007-09-01
behaviour based on past experience of interacting with the operator), and mobile (i.e., can move themselves from one machine to another). Edwards argues that ... Sofge, D.; Bugajska, M.; Adams, W.; Perzanowski, D.; Schultz, A. (2003). Agent-based Multimodal Interface for Dynamically Autonomous Mobile Robots ... based architecture can provide a natural and scalable approach to implementing a multimodal interface to control mobile robots through dynamic ...
Adaptive format conversion for scalable video coding
NASA Astrophysics Data System (ADS)
Wan, Wade K.; Lim, Jae S.
2001-12-01
The enhancement layer in many scalable coding algorithms is composed of residual coding information. There is another type of information that can be transmitted instead of (or in addition to) residual coding. Since the encoder has access to the original sequence, it can utilize adaptive format conversion (AFC) to generate the enhancement layer and transmit the different format conversion methods as enhancement data. This paper investigates the use of adaptive format conversion information as enhancement data in scalable video coding. Experimental results are shown for a wide range of base layer qualities and enhancement bitrates to determine when AFC can improve video scalability. Since the parameters needed for AFC are small compared to residual coding, AFC can provide video scalability at low enhancement layer bitrates that are not possible with residual coding. In addition, AFC can also be used in addition to residual coding to improve video scalability at higher enhancement layer bitrates. Adaptive format conversion has not been studied in detail, but many scalable applications may benefit from it. An example of an application that AFC is well-suited for is the migration path for digital television where AFC can provide immediate video scalability as well as assist future migrations.
Moyle, Richard L.; Carvalhais, Lilia C.; Pretorius, Lara-Simone; Nowak, Ekaterina; Subramaniam, Gayathery; Dalton-Morgan, Jessica; Schenk, Peer M.
2017-01-01
Studies investigating the action of small RNAs on computationally predicted target genes require some form of experimental validation. Classical molecular methods of validating microRNA action on target genes are laborious, while approaches that tag predicted target sequences to qualitative reporter genes encounter technical limitations. The aim of this study was to address the challenge of experimentally validating large numbers of computationally predicted microRNA-target transcript interactions using an optimized, quantitative, cost-effective, and scalable approach. The presented method combines transient expression via agroinfiltration of Nicotiana benthamiana leaves with a quantitative dual luciferase reporter system, where firefly luciferase is used to report the microRNA-target sequence interaction and Renilla luciferase is used as an internal standard to normalize expression between replicates. We report the appropriate concentration of N. benthamiana leaf extracts and dilution factor to apply in order to avoid inhibition of firefly LUC activity. Furthermore, the optimal ratio of microRNA precursor expression construct to reporter construct and duration of the incubation period post-agroinfiltration were determined. The optimized dual luciferase assay provides an efficient, repeatable and scalable method to validate and quantify microRNA action on predicted target sequences. The optimized assay was used to validate five predicted targets of rice microRNA miR529b, with as few as six technical replicates. The assay can be extended to assess other small RNA-target sequence interactions, including assessing the functionality of an artificial miRNA or an RNAi construct on a targeted sequence. PMID:28979287
LOGISTIC NETWORK REGRESSION FOR SCALABLE ANALYSIS OF NETWORKS WITH JOINT EDGE/VERTEX DYNAMICS
Almquist, Zack W.; Butts, Carter T.
2015-01-01
Change in group size and composition has long been an important area of research in the social sciences. Similarly, interest in interaction dynamics has a long history in sociology and social psychology. However, the effects of endogenous group change on interaction dynamics are a surprisingly understudied area. One way to explore these relationships is through social network models. Network dynamics may be viewed as a process of change in the edge structure of a network, in the vertex set on which edges are defined, or in both simultaneously. Although early studies of such processes were primarily descriptive, recent work on this topic has increasingly turned to formal statistical models. Although showing great promise, many of these modern dynamic models are computationally intensive and scale very poorly in the size of the network under study and/or the number of time points considered. Likewise, currently used models focus on edge dynamics, with little support for endogenously changing vertex sets. Here, the authors show how an existing approach based on logistic network regression can be extended to serve as a highly scalable framework for modeling large networks with dynamic vertex sets. The authors place this approach within a general dynamic exponential family (exponential-family random graph modeling) context, clarifying the assumptions underlying the framework (and providing a clear path for extensions), and they show how model assessment methods for cross-sectional networks can be extended to the dynamic case. Finally, the authors illustrate this approach on a classic data set involving interactions among windsurfers on a California beach. PMID:26120218
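The computational appeal of the logistic-network-regression approach is that, once each dyad-time observation is treated as an independent logistic term, estimation reduces to ordinary logistic regression over pooled observations. The toy example below illustrates only that core idea with an invented one-covariate model (the lagged state of the same dyad); the authors' framework is far richer, with vertex dynamics, ERGM statistics, and model assessment:

```python
# Toy sketch: simulate a dynamic network whose edges depend on their own
# lagged state, then recover the coefficients by pooled logistic regression.
# All data and parameters are illustrative, not from the paper.
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

random.seed(0)
T, n = 30, 10
beta_true = (-2.0, 3.0)            # (intercept, lagged-edge effect)
nets = [[[0] * n for _ in range(n)]]
for t in range(1, T):
    prev, cur = nets[-1], [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                p = sigmoid(beta_true[0] + beta_true[1] * prev[i][j])
                cur[i][j] = 1 if random.random() < p else 0
    nets.append(cur)

# Pooling all dyad-time observations turns estimation into ordinary
# logistic regression -- the source of the approach's scalability.
obs = [(nets[t - 1][i][j], nets[t][i][j])
       for t in range(1, T) for i in range(n) for j in range(n) if i != j]
b0 = b1 = 0.0
for _ in range(600):               # plain gradient ascent on the likelihood
    g0 = g1 = 0.0
    for x, y in obs:
        err = y - sigmoid(b0 + b1 * x)
        g0 += err
        g1 += err * x
    b0 += g0 / len(obs)
    b1 += g1 / len(obs)

print(b0, b1)  # estimates should land near the true (-2, 3)
```

Because each observation contributes one independent likelihood term, the fit scales linearly in dyads and time points, unlike simulation-based dynamic network estimators.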
Technology for On-Chip Qubit Control with Microfabricated Surface Ion Traps
DOE Office of Scientific and Technical Information (OSTI.GOV)
Highstrete, Clark; Scott, Sean Michael; Nordquist, Christopher D.
2013-11-01
Trapped atomic ions are a leading physical system for quantum information processing. However, scalability and operational fidelity remain limiting technical issues often associated with optical qubit control. One promising approach is to develop on-chip microwave electronic control of ion qubits based on the atomic hyperfine interaction. This project developed expertise and capabilities at Sandia toward on-chip electronic qubit control in a scalable architecture. The project developed a foundation of laboratory capabilities, including trapping the 171Yb+ hyperfine ion qubit and developing an experimental microwave coherent control capability. Additionally, the project investigated the integration of microwave device elements with surface ion traps utilizing Sandia's state-of-the-art MEMS microfabrication processing. This effort culminated in a device design for a multi-purpose ion trap experimental platform for investigating on-chip microwave qubit control, laying the groundwork for further funded R&D to develop on-chip microwave qubit control in an architecture that is suitable for engineering development.
Sahoo, Satya S; Tao, Shiqiang; Parchman, Andrew; Luo, Zhihui; Cui, Licong; Mergler, Patrick; Lanese, Robert; Barnholtz-Sloan, Jill S; Meropol, Neal J; Zhang, Guo-Qiang
2014-01-01
Cancer is responsible for approximately 7.6 million deaths per year worldwide. A 2012 survey in the United Kingdom found dramatic improvement in survival rates for childhood cancer because of increased participation in clinical trials. Unfortunately, overall patient participation in cancer clinical studies is low. A key logistical barrier to patient and physician participation is the time required for identification of appropriate clinical trials for individual patients. We introduce the Trial Prospector tool that supports end-to-end management of cancer clinical trial recruitment workflow with (a) structured entry of trial eligibility criteria, (b) automated extraction of patient data from multiple sources, (c) a scalable matching algorithm, and (d) interactive user interface (UI) for physicians with both matching results and a detailed explanation of causes for ineligibility of available trials. We report the results from deployment of Trial Prospector at the National Cancer Institute (NCI)-designated Case Comprehensive Cancer Center (Case CCC) with 1,367 clinical trial eligibility evaluations performed with 100% accuracy. PMID:25506198
Negative autoregulation matches production and demand in synthetic transcriptional networks.
Franco, Elisa; Giordano, Giulia; Forsberg, Per-Ola; Murray, Richard M
2014-08-15
We propose a negative feedback architecture that regulates activity of artificial genes, or "genelets", to meet their output downstream demand, achieving robustness with respect to uncertain open-loop output production rates. In particular, we consider the case where the outputs of two genelets interact to form a single assembled product. We show with analysis and experiments that negative autoregulation matches the production and demand of the outputs: the magnitude of the regulatory signal is proportional to the "error" between the circuit output concentration and its actual demand. This two-device system is experimentally implemented using in vitro transcriptional networks, where reactions are systematically designed by optimizing nucleic acid sequences with publicly available software packages. We build a predictive ordinary differential equation (ODE) model that captures the dynamics of the system and can be used to numerically assess the scalability of this architecture to larger sets of interconnected genes. Finally, with numerical simulations we contrast our negative autoregulation scheme with a cross-activation architecture, which is less scalable and results in slower response times.
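The matched production/demand behavior can be caricatured with a one-state ODE and Euler integration: production is repressed in proportion to the excess of free output over what downstream demand consumes. The rate law and constants below are illustrative stand-ins, not the paper's fitted transcriptional model:

```python
# Minimal ODE sketch of negative autoregulation matching production to
# demand. Rates, the repression form 1/(1 + k_fb*x), and all constants are
# invented for illustration.

def simulate(k_prod, k_demand, k_fb, t_end=50.0, dt=0.01):
    """Euler-integrate dx/dt = k_prod/(1 + k_fb*x) - k_demand*x.
    Free output x represses its own production (negative feedback);
    k_demand models downstream consumption of the output."""
    x, t = 0.0, 0.0
    while t < t_end:
        x += dt * (k_prod / (1.0 + k_fb * x) - k_demand * x)
        t += dt
    return x

# Robustness to an uncertain open-loop production rate: double k_prod and
# compare how much the steady state moves with and without feedback.
lo = simulate(k_prod=1.0, k_demand=1.0, k_fb=5.0)
hi = simulate(k_prod=2.0, k_demand=1.0, k_fb=5.0)
open_lo = simulate(k_prod=1.0, k_demand=1.0, k_fb=0.0)
open_hi = simulate(k_prod=2.0, k_demand=1.0, k_fb=0.0)
print(hi / lo, open_hi / open_lo)  # feedback ratio is closer to 1
```

Doubling the open-loop rate doubles the unregulated steady state but shifts the regulated one by only about 50% here, the qualitative robustness property the circuit is designed for.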
Silicon quantum processor with robust long-distance qubit couplings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tosi, Guilherme; Mohiyaddin, Fahd A.; Schmitt, Vivien
Practical quantum computers require a large network of highly coherent qubits, interconnected in a design robust against errors. Donor spins in silicon provide state-of-the-art coherence and quantum gate fidelities, in a platform adapted from industrial semiconductor processing. Here we present a scalable design for a silicon quantum processor that does not require precise donor placement and leaves ample space for the routing of interconnects and readout devices. We introduce the flip-flop qubit, a combination of the electron-nuclear spin states of a phosphorus donor that can be controlled by microwave electric fields. Two-qubit gates exploit a second-order electric dipole-dipole interaction, allowing selective coupling beyond the nearest neighbor, at separations of hundreds of nanometers, while microwave resonators can extend the entanglement to macroscopic distances. We predict gate fidelities within fault-tolerance thresholds using realistic noise models. This design provides a realizable blueprint for scalable spin-based quantum computers in silicon.
Frequency-domain nonlinear optics in two-dimensionally patterned quasi-phase-matching media.
Phillips, C R; Mayer, B W; Gallmann, L; Keller, U
2016-07-11
Advances in the amplification and manipulation of ultrashort laser pulses have led to revolutions in several areas. Examples include chirped pulse amplification for generating high peak-power lasers, power-scalable amplification techniques, pulse shaping via modulation of spatially-dispersed laser pulses, and efficient frequency-mixing in quasi-phase-matched nonlinear crystals to access new spectral regions. In this work, we introduce and demonstrate a new platform for nonlinear optics which has the potential to combine these separate functionalities (pulse amplification, frequency transfer, and pulse shaping) into a single monolithic device that is bandwidth- and power-scalable. The approach is based on two-dimensional (2D) patterning of quasi-phase-matching (QPM) gratings combined with optical parametric interactions involving spatially dispersed laser pulses. Our proof of principle experiment demonstrates this technique via mid-infrared optical parametric chirped pulse amplification of few-cycle pulses. Additionally, we present a detailed theoretical and numerical analysis of such 2D-QPM devices and how they can be designed.
Gate-tunable electron interaction in high-κ dielectric films
Kondovych, Svitlana; Luk’yanchuk, Igor; Baturina, Tatyana I.; ...
2017-02-20
The two-dimensional (2D) logarithmic character of the Coulomb interaction between charges, and the resulting logarithmic confinement, is a remarkable inherent property of high dielectric constant (high-κ) thin films with far-reaching implications. First and foremost, this is the charge Berezinskii-Kosterlitz-Thouless transition, with its notable manifestation, the low-temperature superinsulating topological phase. Here we show that the range of the confinement can be tuned by an external gate electrode, and we unravel a variety of electrostatic interactions in high-κ films. Our findings open a unique laboratory for the in-depth study of topological phase transitions and a plethora of related phenomena, ranging from the criticality of quantum metal-insulator and superconductor-insulator transitions to the effects of charge trapping and Coulomb scalability in memory nanodevices.
Rydberg blockade in three-atom systems
NASA Astrophysics Data System (ADS)
Barredo, Daniel; Ravets, Sylvain; Labuhn, Henning; Beguin, Lucas; Vernier, Aline; Chicireanu, Radu; Nogrette, Florence; Lahaye, Thierry; Browaeys, Antoine
2014-05-01
The control of individual neutral atoms in arrays of optical tweezers is a promising avenue for quantum science and technology. Here we demonstrate unprecedented control over a system of three Rydberg atoms arranged in linear and triangular configurations. The interaction between Rydberg atoms results in the observation of an almost perfect van der Waals blockade. When the single-atom Rabi frequency for excitation to the Rydberg state is comparable to the interaction energy, we directly observe the anisotropy of the interaction between nD-states. Using the independently measured two-body interaction energy shifts we fully reproduce the dynamics of the three-atom system with a model based on a master equation without any adjustable parameter. Combined with our ability to trap single atoms in arbitrary patterns of 2D arrays of up to 100 traps separated by a few microns, these results are very promising for a scalable implementation of quantum simulation of frustrated quantum magnetism with Rydberg atoms.
Human-computer interface including haptically controlled interactions
Anderson, Thomas G.
2005-10-11
The present invention provides a method of human-computer interfacing that provides haptic feedback to control interface interactions such as scrolling or zooming within an application. Haptic feedback in the present method allows the user more intuitive control of the interface interactions, and allows the user's visual focus to remain on the application. The method comprises providing a control domain within which the user can control interactions. For example, a haptic boundary can be provided corresponding to scrollable or scalable portions of the application domain. The user can position a cursor near such a boundary, feeling its presence haptically (reducing the requirement for visual attention for control of scrolling of the display). The user can then apply force relative to the boundary, causing the interface to scroll the domain. The rate of scrolling can be related to the magnitude of applied force, providing the user with additional intuitive, non-visual control of scrolling.
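The force-to-scroll mapping the patent describes ("the rate of scrolling can be related to the magnitude of applied force") can be sketched in a few lines. The threshold, gain, and cap below are made-up illustration values, not figures from the patent:

```python
# Hedged sketch of haptic boundary scrolling: below a force threshold the
# boundary only "pushes back"; above it, scroll rate grows with the excess
# force, capped at a maximum. All constants are invented.

def scroll_rate(force, threshold=0.5, gain=20.0, max_rate=100.0):
    """Return scroll speed (lines/second) for an applied force
    (arbitrary units) pressed against the haptic boundary."""
    excess = force - threshold
    if excess <= 0.0:
        return 0.0          # cursor rests against the boundary; no scroll
    return min(gain * excess, max_rate)

# light touch: no scrolling; firm press: proportional; hard press: capped
print(scroll_rate(0.3), scroll_rate(1.0), scroll_rate(10.0))
```

The dead zone below the threshold is what lets the user feel the boundary haptically without triggering scrolling, keeping visual attention on the application.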
Ultracoherent operation of spin qubits with superexchange coupling
NASA Astrophysics Data System (ADS)
Rančić, Marko J.; Burkard, Guido
2017-11-01
With the use of nuclear-spin-free materials such as silicon and germanium, spin-based quantum bits (qubits) have evolved to become among the most coherent systems for quantum information processing. The new frontier for spin qubits has therefore shifted to the ubiquitous charge noise and spin-orbit interaction, which are limiting the coherence times and gate fidelities of solid-state qubits. In this paper we investigate superexchange, as a means of indirect exchange interaction between two single electron spin qubits, each embedded in a single semiconductor quantum dot (QD), mediated by an intermediate, empty QD. Our results suggest the existence of "supersweet spots", in which the qubit operations implemented by superexchange interaction are simultaneously first-order-insensitive to charge noise and to errors due to spin-orbit interaction. The proposed spin-qubit architecture is scalable and within the manufacturing capabilities of semiconductor industry.
Topological Magnonics: A Paradigm for Spin-Wave Manipulation and Device Design
NASA Astrophysics Data System (ADS)
Wang, X. S.; Zhang, H. W.; Wang, X. R.
2018-02-01
Conventional magnonic devices use magnetostatic waves whose properties are sensitive to device geometry and the details of the magnetization structure, so the design and scalability of such devices or circuitry are difficult. We propose topological magnonics, in which topological exchange spin waves are used as information carriers; these do not suffer from the conventional problems of magnonic devices and offer the additional features of nanoscale wavelength and high frequency. We show that a perpendicularly magnetized ferromagnet on a honeycomb lattice is generically a topological magnetic material in the sense that topologically protected chiral edge spin waves exist in the band gap as long as a spin-orbit-induced nearest-neighbor pseudodipolar interaction (and/or a next-nearest-neighbor Dzyaloshinskii-Moriya interaction) is present. The edge spin waves propagate unidirectionally along sample edges and domain walls regardless of the system geometry and defects. As a proof of concept, spin-wave diodes, spin-wave beam splitters, and spin-wave interferometers are designed by using sample edges and domain walls to manipulate the propagation of topologically protected chiral spin waves. Since magnetic domain walls can be controlled by magnetic fields or electric currents or fields, one can essentially draw, erase, and redraw different spin-wave devices and circuitry on the same magnetic plate, so the proposed devices are reconfigurable and tunable. Topological magnonics opens up an alternative direction towards robust, reconfigurable, and scalable spin-wave circuitry.
NASA Astrophysics Data System (ADS)
Brown, Sheldon
As the world around us is transformed into digitally enabled forms and processes, aesthetic strategies are required that articulate this underlying condition. One method for doing so involves a formal and conceptual strategy derived from collage, montage, and assemblage. This triple "age" is termed "troiage", and it uses a style of computational apparency which articulates the edges of our current representational forms and processes as the semantic elements of culture. Each of these component aesthetics has previously had an important effect upon different areas of contemporary art and culture. Collage in painting, montage in film, and assemblage in sculpture and architecture are recombined via algorithmic methods, forefronting the structure of the algorithm itself. The dynamic of the aesthetic is put into play by examining binary relationships such as nature/culture, personal/public, U.S./Mexico, freedom/coercion, mediation/experience, etc. Through this process, the pervasiveness of common algorithmic approaches across cultural and social operations is revealed. This aesthetic is used in the project "The Scalable City", in which a virtual urban landscape is created by users interacting with data taken from the physical world in the form of different photographic techniques. These data are transformed by algorithmic methods that have previously been unfamiliar to the types of data they are utilizing. The Scalable City project creates works across many media, such as prints, procedural animations, digital cinema, and interactive 3D computer graphic installations.
A scalable healthcare information system based on a service-oriented architecture.
Yang, Tzu-Hsiang; Sun, Yeali S; Lai, Feipei
2011-06-01
Many existing healthcare information systems are composed of a number of heterogeneous systems and face the important issue of system scalability. This paper first describes the comprehensive healthcare information systems used in National Taiwan University Hospital (NTUH) and then presents a service-oriented architecture (SOA)-based healthcare information system (HIS) built on the HL7 service standard. The proposed architecture focuses on system scalability, in terms of both hardware and software. Moreover, we describe how scalability is implemented in rightsizing, service groups, databases, and hardware. Although SOA-based systems sometimes display poor performance, a performance evaluation of our SOA-based HIS shows that the average response times for the outpatient, inpatient, and emergency HL7 central systems are 0.035, 0.04, and 0.036 s, respectively. The outpatient, inpatient, and emergency WebUI average response times are 0.79, 1.25, and 0.82 s. The scalability of the rightsizing project and our evaluation results provide evidence that SOA can deliver system scalability and sustainability in a highly demanding healthcare information system.
A scalable multi-DLP pico-projector system for virtual reality
NASA Astrophysics Data System (ADS)
Teubl, F.; Kurashima, C.; Cabral, M.; Fels, S.; Lopes, R.; Zuffo, M.
2014-03-01
Virtual Reality (VR) environments can offer immersion, interaction, and realistic images to users. A VR system is usually expensive and requires special equipment in a complex setup. One approach is to use commodity-off-the-shelf (COTS) desktop multi-projectors, calibrated manually or by camera, to reduce the cost of VR systems without a significant decrease in visual experience. Additionally, for non-planar screen shapes, special optics such as lenses and mirrors are required, further increasing costs. We propose a low-cost, scalable, flexible, and mobile solution for building complex VR systems that project images onto a variety of arbitrary surfaces, such as planar, cylindrical, and spherical surfaces. This approach combines three key aspects: 1) clusters of DLP pico-projectors to provide homogeneous and continuous pixel density upon arbitrary surfaces without additional optics; 2) LED lighting technology for energy efficiency and light control; 3) a smaller physical footprint for flexibility. The proposed system is therefore scalable in terms of pixel density, energy, and physical space. To achieve these goals, we developed a multi-projector software library called FastFusion that calibrates all projectors into a uniform image presented to viewers. FastFusion uses a camera to automatically calibrate the geometric and photometric correction of images projected from ad-hoc positioned projectors; the only requirement is a few overlapping pixels among them. We present results with eight pico-projectors, each with 7 lumens (LED) and a DLP 0.17 HVGA chipset.
NASA Astrophysics Data System (ADS)
Jing, Changfeng; Liang, Song; Ruan, Yong; Huang, Jie
2008-10-01
During the urbanization process, when facing the complex requirements of city development, ever-growing urban data, the rapid development of planning business, and increasing planning complexity, a scalable, extensible urban planning management information system is urgently needed. PM2006 is such a system. In response to the status and problems in urban planning, the scalability and extensibility of PM2006 are introduced, including business-oriented workflow extensibility, the scalability of its DLL-based architecture, flexibility with respect to GIS platforms and databases, and the scalability of data updating and maintenance. It is verified that the PM2006 system has good extensibility and scalability, can meet the requirements of all levels of administrative divisions, and can adapt to ever-growing changes in urban planning business. At the end of this paper, the application of PM2006 in the Urban Planning Bureau of Suzhou city is described.
Lambert, Jean-Philippe; Ivosev, Gordana; Couzens, Amber L; Larsen, Brett; Taipale, Mikko; Lin, Zhen-Yuan; Zhong, Quan; Lindquist, Susan; Vidal, Marc; Aebersold, Ruedi; Pawson, Tony; Bonner, Ron; Tate, Stephen; Gingras, Anne-Claude
2013-12-01
Characterizing changes in protein-protein interactions associated with sequence variants (e.g., disease-associated mutations or splice forms) or following exposure to drugs, growth factors or hormones is critical to understanding how protein complexes are built, localized and regulated. Affinity purification (AP) coupled with mass spectrometry permits the analysis of protein interactions under near-physiological conditions, yet monitoring interaction changes requires the development of a robust and sensitive quantitative approach, especially for large-scale studies in which cost and time are major considerations. We have coupled AP to data-independent mass spectrometric acquisition (sequential window acquisition of all theoretical spectra, SWATH) and implemented an automated data extraction and statistical analysis pipeline to score modulated interactions. We used AP-SWATH to characterize changes in protein-protein interactions imparted by the HSP90 inhibitor NVP-AUY922 or melanoma-associated mutations in the human kinase CDK4. We show that AP-SWATH is a robust label-free approach to characterize such changes and propose a scalable pipeline for systems biology studies.
A quantum annealing architecture with all-to-all connectivity from local interactions.
Lechner, Wolfgang; Hauke, Philipp; Zoller, Peter
2015-10-01
Quantum annealers are physical devices that aim at solving NP-complete optimization problems by exploiting quantum mechanics. The basic principle of quantum annealing is to encode the optimization problem in Ising interactions between quantum bits (qubits). A fundamental challenge in building a fully programmable quantum annealer is the competing requirements of fully controllable all-to-all connectivity and the quasi-locality of the interactions between physical qubits. We present a scalable architecture with full connectivity, which can be implemented with local interactions only. The input of the optimization problem is encoded in local fields acting on an extended set of physical qubits. The output is, in the spirit of topological quantum memories, redundantly encoded in the physical qubits, resulting in an intrinsic fault tolerance. Our model can be understood as a lattice gauge theory, where long-range interactions are mediated by gauge constraints. The architecture can be realized on various platforms with local controllability, including superconducting qubits, NV centers, quantum dots, and atomic systems.
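The counting behind this parity encoding can be sketched in a few lines (an illustration of the bookkeeping only, not the authors' implementation; the function name and data layout are my own assumptions):

```python
from itertools import combinations

def lhz_parity_encoding(J):
    """Sketch of the input encoding in the Lechner-Hauke-Zoller scheme:
    each pairwise Ising coupling J[i][j] between logical spins becomes a
    purely local field on one physical 'parity' qubit representing the
    relative alignment s_i * s_j of the logical pair (i, j)."""
    n = len(J)
    local_fields = {(i, j): J[i][j] for i, j in combinations(range(n), 2)}
    n_physical = len(local_fields)       # K = N(N-1)/2 physical qubits
    # The redundancy of the extended qubit set is removed by quasi-local
    # (gauge) constraints; K - (N - 1) of them are independent.
    n_constraints = n_physical - (n - 1)
    return local_fields, n_constraints

# All-to-all problem on 4 logical spins: 6 physical qubits, 3 constraints.
J = [[0, 1, -1, 2],
     [1, 0, 3, -2],
     [-1, 3, 0, 1],
     [2, -2, 1, 0]]
fields, n_constraints = lhz_parity_encoding(J)
print(len(fields), n_constraints)  # 6 3
```

The point of the scheme is visible in the dictionary: every long-range coupling has become a single-qubit field, so only the fixed constraint plaquettes require (quasi-local) multi-qubit interactions.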
NASA Astrophysics Data System (ADS)
Zhu, F.; Yu, H.; Rilee, M. L.; Kuo, K. S.; Yu, L.; Pan, Y.; Jiang, H.
2017-12-01
Since the establishment of data archive centers and the standardization of file formats, scientists have been required to search metadata catalogs for the data they need and download the data files to their local machines to carry out analysis. This approach has facilitated data discovery and access for decades, but it inevitably leads to data transfer from data archive centers to scientists' computers through low-bandwidth Internet connections. Data transfer thus becomes a major performance bottleneck, and combined with generally constrained local compute and storage resources, it limits the extent of scientists' studies and deprives them of timely outcomes. This conventional approach is therefore not scalable with respect to either the volume or the variety of geoscience data. A much more viable solution is to couple analysis and storage systems to minimize data transfer. In our study, we compare loosely coupled approaches (exemplified by Spark and Hadoop) with tightly coupled approaches (exemplified by parallel distributed database management systems, e.g., SciDB). In particular, we investigate the optimization of data placement and movement to effectively tackle the variety challenge, and broaden the adoption of parallelization to address the volume challenge. Our goal is to enable high-performance interactive analysis for a good portion of geoscience data analysis exercises. We show that tightly coupled approaches can concentrate data traffic between local storage systems and compute units, thereby optimizing bandwidth utilization to achieve better throughput. Based on these observations, we develop a geoscience data analysis system that tightly couples analysis engines with storage and has direct access to a detailed map of data partition locations. Through an innovative data partitioning and distribution scheme, our system has demonstrated scalable and interactive performance in real-world geoscience data analysis applications.
Cruella: developing a scalable tissue microarray data management system.
Cowan, James D; Rimm, David L; Tuck, David P
2006-06-01
Compared with DNA microarray technology, relatively little information is available concerning the special requirements, design influences, and implementation strategies of data systems for tissue microarray technology. These issues include the requirement to accommodate new and different data elements for each new project, as well as the need to interact with pre-existing models for clinical, biological, and specimen-related data. Our goal was to design and implement a flexible, scalable tissue microarray data storage and management system that could accommodate information on different disease types, different clinical investigators, and different clinical investigation questions, any of which could contribute unforeseen data types requiring dynamic integration with existing data. The unpredictability of the data elements, combined with the novelty of automated analysis algorithms and controlled-vocabulary standards in this area, requires flexible designs and practical decisions. Our design includes a custom Java-based persistence layer to mediate and facilitate interaction with an object-relational database model and a novel database schema. User interaction is provided through a Java Servlet-based Web interface. Cruella has become an indispensable resource and is used by dozens of researchers every day. The system stores millions of experimental values covering more than 300 biological markers and more than 30 disease types. The experimental data are merged with clinical data aggregated from multiple sources and are available to researchers for management, analysis, and export. Cruella addresses many of the special considerations in managing tissue microarray experimental data and the associated clinical information. A metadata-driven approach provides a practical solution to many of the unique issues inherent in tissue microarray research and allows relatively straightforward interoperability with, and accommodation of, new data models.
Adjustable Spin-Spin Interaction with 171Yb+ ions and Addressing of a Quantum Byte
NASA Astrophysics Data System (ADS)
Wunderlich, Christof
2015-05-01
Trapped atomic ions are a well-advanced physical system for investigating fundamental questions of quantum physics and for quantum information science and its applications. When contemplating the scalability of trapped ions for quantum information science one notes that the use of laser light for coherent operations gives rise to technical and also physical issues that can be remedied by replacing laser light by microwave (MW) and radio-frequency (RF) radiation employing suitably modified ion traps. Magnetic gradient induced coupling (MAGIC) makes it possible to coherently manipulate trapped ions using exclusively MW and RF radiation. After introducing the general concept of MAGIC, I shall report on recent experimental progress using 171Yb+ ions, confined in a suitable Paul trap, as effective spin-1/2 systems interacting via MAGIC. Entangling gates between non-neighbouring ions will be presented. The spin-spin coupling strength is variable and can be adjusted by variation of the secular trap frequency. In general, executing a quantum gate with a single qubit, or a subset of qubits, affects the quantum states of all other qubits. This reduced fidelity of the whole quantum register may preclude scalability. We demonstrate addressing of individual qubits within a quantum byte (eight qubits interacting via MAGIC) using MW radiation and measure the error induced in all non-addressed qubits (cross-talk) associated with the application of single-qubit gates. The measured cross-talk is on the order of 10^-5 and therefore below the threshold commonly agreed sufficient to efficiently realize fault-tolerant quantum computing. Furthermore, experimental results on continuous and pulsed dynamical decoupling (DD) for protecting quantum memories and quantum gates against decoherence will be briefly discussed. Finally, I report on using continuous DD to realize a broadband ultrasensitive single-atom magnetometer.
A multiplexed microfluidic system for evaluation of dynamics of immune-tumor interactions.
Moore, N; Doty, D; Zielstorff, M; Kariv, I; Moy, L Y; Gimbel, A; Chevillet, J R; Lowry, N; Santos, J; Mott, V; Kratchman, L; Lau, T; Addona, G; Chen, H; Borenstein, J T
2018-05-25
Recapitulation of the tumor microenvironment is critical for probing mechanisms involved in cancer, and for evaluating the tumor-killing potential of chemotherapeutic agents, targeted therapies and immunotherapies. Microfluidic devices have emerged as valuable tools for both mechanistic studies and for preclinical evaluation of therapeutic agents, due to their ability to precisely control drug concentrations and gradients of oxygen and other species in a scalable and potentially high throughput manner. Most existing in vitro microfluidic cancer models are comprised of cultured cancer cells embedded in a physiologically relevant matrix, collocated with vascular-like structures. However, the recent emergence of immune checkpoint inhibitors (ICI) as a powerful therapeutic modality against many cancers has created a need for preclinical in vitro models that accommodate interactions between tumors and immune cells, particularly for assessment of unprocessed tumor fragments harvested directly from patient biopsies. Here we report on a microfluidic model, termed EVIDENT (ex vivo immuno-oncology dynamic environment for tumor biopsies), that accommodates up to 12 separate tumor biopsy fragments interacting with flowing tumor-infiltrating lymphocytes (TILs) in a dynamic microenvironment. Flow control is achieved with a single pump in a simple and scalable configuration, and the entire system is constructed using low-sorption materials, addressing two principal concerns with existing microfluidic cancer models. The system sustains tumor fragments for multiple days, and permits real-time, high-resolution imaging of the interaction between autologous TILs and tumor fragments, enabling mapping of TIL-mediated tumor killing and testing of various ICI treatments versus tumor response. Custom image analytic algorithms based on machine learning reported here provide automated and quantitative assessment of experimental results. 
Initial studies indicate that the system is capable of quantifying temporal levels of TIL infiltration and tumor death, and that the EVIDENT model mimics the known in vivo tumor response to anti-PD-1 ICI treatment of flowing TILs relative to isotype control treatments for syngeneic mouse MC38 tumors.
Visual Analytics for Heterogeneous Geoscience Data
NASA Astrophysics Data System (ADS)
Pan, Y.; Yu, L.; Zhu, F.; Rilee, M. L.; Kuo, K. S.; Jiang, H.; Yu, H.
2017-12-01
Geoscience data obtained from diverse sources have been routinely leveraged by scientists to study various phenomena. The principal data sources include observations and model simulation outputs. These data are characterized by spatiotemporal heterogeneity originated from different instrument design specifications and/or computational model requirements used in data generation processes. Such inherent heterogeneity poses several challenges in exploring and analyzing geoscience data. First, scientists often wish to identify features or patterns co-located among multiple data sources to derive and validate certain hypotheses. Heterogeneous data make it a tedious task to search such features in dissimilar datasets. Second, features of geoscience data are typically multivariate. It is challenging to tackle the high dimensionality of geoscience data and explore the relations among multiple variables in a scalable fashion. Third, there is a lack of transparency in traditional automated approaches, such as feature detection or clustering, in that scientists cannot intuitively interact with their analysis processes and interpret results. To address these issues, we present a new scalable approach that can assist scientists in analyzing voluminous and diverse geoscience data. We expose a high-level query interface that allows users to easily express customized queries to search features of interest across multiple heterogeneous datasets. For identified features, we develop a visualization interface that enables interactive exploration and analytics in a linked-view manner. Specific visualization techniques, from scatter plots to parallel coordinates, are employed in each view to allow users to explore various aspects of features. Different views are linked and refreshed according to user interactions in any individual view. In such a manner, a user can interactively and iteratively gain understanding of the data through a variety of visual analytics operations.
We demonstrate with use cases how scientists can combine the query and visualization interfaces to enable a customized workflow facilitating studies using heterogeneous geoscience datasets.
Scalability problems of simple genetic algorithms.
Thierens, D
1999-01-01
Scalable evolutionary computation has become an intensively studied research topic in recent years. The issue of scalability is predominant in any field of algorithmic design, but it became particularly relevant for the design of competent genetic algorithms once the scalability problems of simple genetic algorithms were understood. Here we present some of the work that has aided in getting a clear insight into the scalability problems of simple genetic algorithms. In particular, we discuss the important issue of building block mixing. We show how the need for mixing places a boundary in the GA parameter space that, together with the boundary from the schema theorem, delimits the region where the GA converges reliably to the optimum in problems of bounded difficulty. This region shrinks rapidly with increasing problem size unless the building blocks are tightly linked in the problem coding structure. In addition, we show that straightforward extensions of the simple genetic algorithm (namely elitism, niching, and restricted mating) do not significantly mitigate these scalability problems.
Low-complexity transcoding algorithm from H.264/AVC to SVC using data mining
NASA Astrophysics Data System (ADS)
Garrido-Cantos, Rosario; De Cock, Jan; Martínez, Jose Luis; Van Leuven, Sebastian; Cuenca, Pedro; Garrido, Antonio
2013-12-01
Nowadays, networks and terminals with diverse characteristics of bandwidth and capabilities coexist. To ensure a good quality of experience, this diverse environment demands adaptability of the video stream. In general, video contents are compressed to save storage capacity and to reduce the bandwidth required for their transmission. Therefore, if these compressed video streams were compressed using scalable video coding schemes, they would be able to adapt to those heterogeneous networks and a wide range of terminals. Since the majority of multimedia contents are compressed using H.264/AVC, they cannot benefit from that scalability. This paper proposes a low-complexity algorithm to convert an H.264/AVC bitstream without scalability to scalable bitstreams with temporal scalability in baseline and main profiles by accelerating the mode decision task of the scalable video coding encoding stage using machine learning tools. The results show that when our technique is applied, the complexity is reduced by 87% while maintaining coding efficiency.
Azcorra, A; Chiroque, L F; Cuevas, R; Fernández Anta, A; Laniado, H; Lillo, R E; Romo, J; Sguera, C
2018-05-03
Billions of users interact intensively every day via Online Social Networks (OSNs) such as Facebook, Twitter, or Google+. This makes OSNs an invaluable source of information, and channel of actuation, for sectors like advertising, marketing, or politics. To get the most of OSNs, analysts need to identify influential users that can be leveraged for promoting products, distributing messages, or improving the image of companies. In this report we propose a new unsupervised method, Massive Unsupervised Outlier Detection (MUOD), based on outlier detection, for providing support in the identification of influential users. MUOD is scalable and can hence be used in large OSNs. Moreover, it labels the outliers as of shape, magnitude, or amplitude, depending on their features. This allows classifying the outlier users into multiple different classes, which are likely to include different types of influential users. Applying MUOD to a subset of roughly 400 million Google+ users allowed us to automatically identify and discriminate sets of outlier users which present features associated with different definitions of influential users, like capacity to attract engagement, capacity to attract a large number of followers, or high infection capacity.
On delay adjustment for dynamic load balancing in distributed virtual environments.
Deng, Yunhua; Lau, Rynson W H
2012-04-01
Distributed virtual environments (DVEs) have become very popular in recent years, due to the rapid growth of applications such as massive multiplayer online games (MMOGs). As the number of concurrent users increases, scalability becomes one of the major challenges in designing an interactive DVE system. One solution to this scalability problem is to adopt a multi-server architecture. While some methods focus on the quality of partitioning the load among the servers, others focus on the efficiency of the partitioning process itself. However, all these methods neglect the effect of network delay among the servers on the accuracy of the load balancing solutions. As we show in this paper, the change in the load of the servers due to network delay affects the performance of the load balancing algorithm. In this work, we conduct a formal analysis of this problem and discuss two efficient delay adjustment schemes to address it. Our experimental results show that our proposed schemes can significantly improve the performance of the load balancing algorithm with negligible computational overhead.
DOE Office of Scientific and Technical Information (OSTI.GOV)
D'Azevedo, Eduardo; Abbott, Stephen; Koskela, Tuomas
The XGC fusion gyrokinetic code combines state-of-the-art, portable computational and algorithmic technologies to enable complicated multiscale simulations of turbulence and transport dynamics in the ITER edge plasma on the largest US open-science computer, the CRAY XK7 Titan, at its maximal heterogeneous capability. Such simulations were not possible before because the time-to-solution fell short, by a factor of over 10, of completing one physics case in less than 5 days of wall-clock time. Frontier techniques such as nested OpenMP parallelism, adaptive parallel I/O, staging I/O and data reduction using dynamic and asynchronous application interactions, dynamic repartitioning for balancing computational work in pushing particles and in grid-related work, scalable and accurate discretization algorithms for non-linear Coulomb collisions, and communication-avoiding subcycling technology for pushing particles on both CPUs and GPUs are also utilized to dramatically improve the scalability and time-to-solution, hence enabling the difficult kinetic ITER edge simulation on a present-day leadership-class computer.
Dewari, Pooran Singh; Southgate, Benjamin; Mccarten, Katrina; Monogarov, German; O'Duibhir, Eoghan; Quinn, Niall; Tyrer, Ashley; Leitner, Marie-Christin; Plumb, Colin; Kalantzaki, Maria; Blin, Carla; Finch, Rebecca; Bressan, Raul Bardini; Morrison, Gillian; Jacobi, Ashley M; Behlke, Mark A; von Kriegsheim, Alex; Tomlinson, Simon; Krijgsveld, Jeroen
2018-01-01
CRISPR/Cas9 can be used for precise genetic knock-in of epitope tags into endogenous genes, simplifying experimental analysis of protein function. However, Cas9-assisted epitope tagging in primary mammalian cell cultures is often inefficient and reliant on plasmid-based selection strategies. Here, we demonstrate improved knock-in efficiencies of diverse tags (V5, 3XFLAG, Myc, HA) using co-delivery of Cas9 protein pre-complexed with two-part synthetic modified RNAs (annealed crRNA:tracrRNA) and single-stranded oligodeoxynucleotide (ssODN) repair templates. Knock-in efficiencies of ~5–30% were achieved without selection in embryonic stem (ES) cells, neural stem (NS) cells, and brain-tumor-derived stem cells. Biallelic-tagged clonal lines were readily derived and used to define Olig2 chromatin-bound interacting partners. Using our novel web-based design tool, we established a 96-well format pipeline that enabled V5-tagging of 60 different transcription factors. This efficient, selection-free and scalable epitope tagging pipeline enables systematic surveys of protein expression levels, subcellular localization, and interactors across diverse mammalian stem cells.
A scalable population code for time in the striatum.
Mello, Gustavo B M; Soares, Sofia; Paton, Joseph J
2015-05-04
To guide behavior and learn from its consequences, the brain must represent time over many scales. Yet, the neural signals used to encode time in the seconds-to-minute range are not known. The striatum is a major input area of the basal ganglia associated with learning and motor function. Previous studies have also shown that the striatum is necessary for normal timing behavior. To address how striatal signals might be involved in timing, we recorded from striatal neurons in rats performing an interval timing task. We found that neurons fired at delays spanning tens of seconds and that this pattern of responding reflected the interaction between time and the animals' ongoing sensorimotor state. Surprisingly, cells rescaled responses in time when intervals changed, indicating that striatal populations encoded relative time. Moreover, time estimates decoded from activity predicted timing behavior as animals adjusted to new intervals, and disrupting striatal function led to a decrease in timing performance. These results suggest that striatal activity forms a scalable population code for time, providing timing signals that animals use to guide their actions. Copyright © 2015 Elsevier Ltd. All rights reserved.
Scalable cell alignment on optical media substrates.
Anene-Nzelu, Chukwuemeka G; Choudhury, Deepak; Li, Huipeng; Fraiszudeen, Azmall; Peh, Kah-Yim; Toh, Yi-Chin; Ng, Sum Huan; Leo, Hwa Liang; Yu, Hanry
2013-07-01
Cell alignment by underlying topographical cues has been shown to affect important biological processes such as differentiation and functional maturation in vitro. However, the routine use of cell culture substrates with micro- or nano-topographies, such as grooves, is currently hampered by the high cost and specialized facilities required to produce these substrates. Here we present cost-effective, commercially available optical media as substrates for aligning cells in culture. These optical media, including CD-R, DVD-R and optical grating, allow different cell types to attach and grow well on them. The physical dimensions of the grooves in these optical media allowed cells to be aligned in confluent cell culture with maximal cell-cell interaction, and this alignment affected the morphology and differentiation of cardiac (H9C2), skeletal muscle (C2C12) and neuronal (PC12) cell lines. The optical media are amenable to various chemical modifications with fibronectin, laminin and gelatin for culturing different cell types. These low-cost, commercially available optical media can serve as scalable substrates for research or for drug safety screening applications at industrial scale. Copyright © 2013 Elsevier Ltd. All rights reserved.
Implementing Journaling in a Linux Shared Disk File System
NASA Technical Reports Server (NTRS)
Preslan, Kenneth W.; Barry, Andrew; Brassow, Jonathan; Cattelan, Russell; Manthei, Adam; Nygaard, Erling; VanOort, Seth; Teigland, David; Tilstra, Mike; O'Keefe, Matthew;
2000-01-01
In computer systems today, speed and responsiveness are often determined by network and storage subsystem performance. Faster, more scalable networking interfaces like Fibre Channel and Gigabit Ethernet provide the scaffolding from which higher performance computer systems implementations may be constructed, but new thinking is required about how machines interact with network-enabled storage devices. In this paper we describe how we implemented journaling in the Global File System (GFS), a shared-disk, cluster file system for Linux. Our previous three papers on GFS at the Mass Storage Symposium discussed our first three GFS implementations, their performance, and the lessons learned. Our fourth paper describes, appropriately enough, the evolution of GFS version 3 to version 4, which supports journaling and recovery from client failures. In addition, GFS scalability tests extending to 8 machines accessing 8 4-disk enclosures were conducted: these tests showed good scaling. We describe the GFS cluster infrastructure, which is necessary for proper recovery from machine and disk failures in a collection of machines sharing disks using GFS. Finally, we discuss the suitability of Linux for handling the big data requirements of supercomputing centers.
DSPCP: A Data Scalable Approach for Identifying Relationships in Parallel Coordinates.
Nguyen, Hoa; Rosen, Paul
2018-03-01
Parallel coordinates plots (PCPs) are a well-studied technique for exploring multi-attribute datasets. In many situations, users find them a flexible method to analyze and interact with data. Unfortunately, using PCPs becomes challenging as the number of data items grows large or multiple trends within the data mix in the visualization. The resulting overdraw can obscure important features. A number of modifications to PCPs have been proposed, including using color, opacity, smooth curves, frequency, density, and animation to mitigate this problem. However, these modified PCPs tend to have their own limitations in the kinds of relationships they emphasize. We propose a new data scalable design for representing and exploring data relationships in PCPs. The approach exploits the point/line duality property of PCPs and a local linear assumption of data to extract and represent relationship summarizations. This approach simultaneously shows relationships in the data and the consistency of those relationships. Our approach supports various visualization tasks, including mixed linear and nonlinear pattern identification, noise detection, and outlier detection, all in large data. We demonstrate these tasks on multiple synthetic and real-world datasets.
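The point/line duality the authors exploit can be verified in a few lines (a self-contained sketch, not the DSPCP code; placing the two axes at x = 0 and x = 1 is an assumption): a 2-D data point (u, v) becomes the PCP segment from (0, u) to (1, v), and all points on one data-space line y = m*x + b produce segments through a single dual point at x = 1/(1 - m).

```python
def pcp_segment(u, v):
    # A 2-D data point (u, v) maps to the polyline segment joining
    # its value on the left axis (x=0) to the right axis (x=1).
    return (0.0, float(u)), (1.0, float(v))

def dual_point(seg_a, seg_b):
    """Intersection of two PCP segments; for collinear data points this
    is the dual point shared by every segment of the linear relation."""
    (_, u1), (_, v1) = seg_a
    (_, u2), (_, v2) = seg_b
    t = (u1 - u2) / ((u1 - u2) - (v1 - v2))
    return t, u1 + t * (v1 - u1)

# Three points on y = 2x + 1: every pair of segments meets at the same
# dual point, x = 1/(1 - 2) = -1.
pts = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]
segs = [pcp_segment(x, y) for x, y in pts]
print(dual_point(segs[0], segs[1]))  # (-1.0, -1.0)
print(dual_point(segs[1], segs[2]))  # (-1.0, -1.0)
```

Clustering such intersection points is one way a local linear trend in the data shows up as a compact structure in PCP space rather than as overdraw.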
Participatory monitoring to connect local and global priorities for forest restoration.
Evans, Kristen; Guariguata, Manuel R; Brancalion, Pedro H S
2018-06-01
New global initiatives to restore forest landscapes present an unparalleled opportunity to reverse deforestation and forest degradation. Participatory monitoring could play a crucial role in providing accountability, generating local buy-in, and catalyzing learning in monitoring systems that need scalability and adaptability to a range of local sites. We synthesized current knowledge from literature searches and interviews to provide lessons for the development of a scalable, multisite participatory monitoring system. Studies show that local people can collect accurate data on forest change, drivers of change, threats to reforestation, and biophysical and socioeconomic impacts that remote sensing cannot. They can do this at one-third the cost of professionals. Successful participatory monitoring systems collect information on a few simple indicators, respond to local priorities, provide appropriate incentives for participation, and catalyze learning and decision making based on frequent analyses and multilevel interactions with other stakeholders. Participatory monitoring could provide a framework for linking global, national, and local needs, aspirations, and capacities for forest restoration. © 2018 The Authors. Conservation Biology published by Wiley Periodicals, Inc. on behalf of Society for Conservation Biology.
Scalable Integrated Region-Based Image Retrieval Using IRM and Statistical Clustering.
ERIC Educational Resources Information Center
Wang, James Z.; Du, Yanping
Statistical clustering is critical in designing scalable image retrieval systems. This paper presents a scalable algorithm for indexing and retrieving images based on region segmentation. The method uses statistical clustering on region features and IRM (Integrated Region Matching), a measure developed to evaluate overall similarity between images…
Temporally Scalable Visual SLAM using a Reduced Pose Graph
2012-05-25
MIT-CSAIL-TR-2012-013, May 25, 2012. MIT CSAIL, Cambridge, MA, USA (www.csail.mit.edu). We demonstrate a system for temporally scalable visual SLAM using a reduced pose graph representation.
Hermann, Gunter; Pohl, Vincent; Tremblay, Jean Christophe
2017-10-30
In this contribution, we extend our framework for analyzing and visualizing correlated many-electron dynamics to a non-variational, highly scalable electronic structure method. Specifically, an explicitly time-dependent electronic wave packet is written as a linear combination of N-electron wave functions at the configuration interaction singles (CIS) level, which are obtained from a reference time-dependent density functional theory (TDDFT) calculation. The procedure is implemented in the open-source Python program detCI@ORBKIT, which extends the capabilities of our recently published post-processing toolbox (Hermann et al., J. Comput. Chem. 2016, 37, 1511). From the output of standard quantum chemistry packages using atom-centered Gaussian-type basis functions, the framework exploits the multideterminantal structure of the hybrid TDDFT/CIS wave packet to compute fundamental one-electron quantities such as difference electronic densities, transient electronic flux densities, and transition dipole moments. The hybrid scheme is benchmarked against wave function data for the laser-driven state-selective excitation in LiH. It is shown that all features of the electron dynamics are in good quantitative agreement with the higher-level method, provided a judicious choice of functional is made. Broadband excitation of a medium-sized organic chromophore further demonstrates the scalability of the method. In addition, the time-dependent flux densities unravel the mechanistic details of the simulated charge migration process at a glance. © 2017 Wiley Periodicals, Inc.
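The expansion described in the first sentences can be written out explicitly (standard notation reconstructed from the abstract, not copied from the paper; the density-matrix expression for one-electron observables is the generic form):

```latex
% Hybrid TDDFT/CIS wave packet: CIS eigenstates with TDDFT-derived
% time-dependent coefficients (notation assumed, not taken from the paper)
|\Psi(t)\rangle = \sum_{k} c_k(t)\,\bigl|\Phi_k^{\mathrm{CIS}}\bigr\rangle ,
\qquad
\rho(\mathbf{r},t) = \sum_{k,l} c_k^{*}(t)\, c_l(t)\,
  \bigl\langle \Phi_k^{\mathrm{CIS}} \bigr|\, \hat{\rho}(\mathbf{r})\,
  \bigl| \Phi_l^{\mathrm{CIS}} \bigr\rangle
```

One-electron quantities such as difference densities and flux densities then reduce to matrix elements between pairs of CIS determinants, which is what makes the multideterminantal post-processing tractable.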
Studying the Interface between Nanomaterials and Biomolecules
NASA Astrophysics Data System (ADS)
Torelli, Marco Diego
As engineered nanomaterials become ubiquitous in society, their inevitable entrance into the environment invites questions as to potential implications. As the field of nanotechnology progresses, responsible development of nanomaterials requires a broad availability of useful tools. To this aim, this work seeks to improve analytical abilities to address fundamental molecular interactions of nanomaterials with biological systems that can be expanded broadly, divided into the following: (1) A model applicable to X-ray photoelectron spectroscopy was developed and validated to correct the over-estimated signal for core:shell nanomaterials that can occur at small particle sizes approaching the electron attenuation length of the material being investigated. (2) To understand the role of the underlying substrate in particle interactions, diamond and gold functionalized with a protein-resisting molecule (hexaethylene glycol) were compared to test the ability of each to resist adsorption of charged proteins. It was demonstrated that the underlying substrate can affect the ability to properly resist proteins, with charged proteins adsorbing to gold, believed to be due to the ability of gold to form an image dipole. (3) To advance the use of nanodiamond in biological settings, methods to create robust chemical linkages at single-digit sizes were developed. Alkene-based oligo(ethylene glycol) molecules were successfully photochemically grafted to fully disaggregated detonation nanodiamond. Because limited scalability currently restricts the broad application of such functionalization, polyelectrolytic wrapping of nanodiamond was developed as a useful and scalable method to produce diamond nanoparticles with varying amine-based functionalities. (4) Phage display was adapted as a method to determine chemical functionalities that interact with anatase titanium dioxide below 20 nm.
In contrast to finding specific, individual inorganic-binding sequences, we lowered the selection stringency to allow a broader number of peptides to be sampled. While no statistically significant size-dependent differences were observed in the amino acid chemistries that interact with anatase TiO2, chemical functionalities and motifs that appear to be important for interaction with nano-anatase were identified. Specifically, positively charged and aromatic motifs working in concert were found to be important.
Empirical Comparison of Visualization Tools for Larger-Scale Network Analysis
Pavlopoulos, Georgios A.; Paez-Espino, David; Kyrpides, Nikos C.; ...
2017-07-18
Gene expression, signal transduction, protein/chemical interactions, biomedical literature co-occurrences, and other concepts are often captured in biological network representations where nodes represent a certain bioentity and edges the connections between them. While many tools to manipulate, visualize, and interactively explore such networks already exist, only a few of them can scale up and follow today’s indisputable information growth. In this review, we briefly list a catalog of available network visualization tools and, from a user-experience point of view, identify four candidate tools suitable for larger-scale network analysis, visualization, and exploration. Lastly, we comment on their strengths and weaknesses and empirically discuss their scalability, user friendliness, and post-visualization capabilities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
LUMA: A many-core, Fluid-Structure Interaction solver based on the Lattice-Boltzmann Method
NASA Astrophysics Data System (ADS)
Harwood, Adrian R. G.; O'Connor, Joseph; Sanchez Muñoz, Jonathan; Camps Santasmasas, Marta; Revell, Alistair J.
2018-01-01
The Lattice-Boltzmann Method at the University of Manchester (LUMA) project was commissioned to build a collaborative research environment in which researchers of all abilities can study fluid-structure interaction (FSI) problems in engineering applications from aerodynamics to medicine. It is built on the principles of accessibility, simplicity and flexibility. The LUMA software at the core of the project is a capable FSI solver with turbulence modelling and many-core scalability, as well as a wealth of input/output and pre- and post-processing facilities. The software has been validated, and several major releases have been benchmarked on supercomputing facilities internationally. The software architecture is modular and arranged logically, using a minimal amount of object-orientation to keep the software simple and accessible.
Geometric quantification of features in large flow fields.
Kendall, Wesley; Huang, Jian; Peterka, Tom
2012-01-01
Interactive exploration of flow features in large-scale 3D unsteady-flow data is one of the most challenging visualization problems today. To comprehensively explore the complex feature spaces in these datasets, a proposed system employs a scalable framework for investigating a multitude of characteristics from traced field lines. This capability supports the examination of various neighborhood-based geometric attributes in concert with other scalar quantities. Such an analysis wasn't previously possible because of the large computational overhead and I/O requirements. The system integrates visual analytics methods by letting users procedurally and interactively describe and extract high-level flow features. An exploration of various phenomena in a large global ocean-modeling simulation demonstrates the approach's generality and expressiveness as well as its efficacy.
2013-01-01
Complementary in situ X-ray photoelectron spectroscopy (XPS), X-ray diffractometry, and environmental scanning electron microscopy are used to fingerprint the entire graphene chemical vapor deposition process on technologically important polycrystalline Cu catalysts to address the current lack of understanding of the underlying fundamental growth mechanisms and catalyst interactions. Graphene forms directly on metallic Cu during the high-temperature hydrocarbon exposure, whereby an upshift in the binding energies of the corresponding C1s XPS core level signatures is indicative of coupling between the Cu catalyst and the growing graphene. Minor carbon uptake into Cu can under certain conditions manifest itself as carbon precipitation upon cooling. Postgrowth, ambient air exposure even at room temperature decouples the graphene from Cu by (reversible) oxygen intercalation. The importance of these dynamic interactions is discussed for graphene growth, processing, and device integration. PMID:24041311
Single-photon non-linear optics with a quantum dot in a waveguide
NASA Astrophysics Data System (ADS)
Javadi, A.; Söllner, I.; Arcari, M.; Hansen, S. Lindskov; Midolo, L.; Mahmoodian, S.; Kiršanskė, G.; Pregnolato, T.; Lee, E. H.; Song, J. D.; Stobbe, S.; Lodahl, P.
2015-10-01
Strong non-linear interactions between photons enable logic operations for both classical and quantum-information technology. Unfortunately, non-linear interactions are usually feeble and therefore all-optical logic gates tend to be inefficient. A quantum emitter deterministically coupled to a propagating mode fundamentally changes the situation, since each photon inevitably interacts with the emitter, and highly correlated many-photon states may be created. Here we show that a single quantum dot in a photonic-crystal waveguide can be used as a giant non-linearity sensitive at the single-photon level. The non-linear response is revealed from the intensity and quantum statistics of the scattered photons, and contains contributions from an entangled photon-photon bound state. The quantum non-linearity will find immediate applications for deterministic Bell-state measurements and single-photon transistors and paves the way to scalable waveguide-based photonic quantum-computing architectures.
Myneni, Sahiti; Cobb, Nathan K; Cohen, Trevor
2016-01-01
Analysis of user interactions in online communities could improve our understanding of health-related behaviors and inform the design of technological solutions that support behavior change. However, to achieve this we would need methods that provide granular perspective, yet are scalable. In this paper, we present a methodology for high-throughput semantic and network analysis of large social media datasets, combining semi-automated text categorization with social network analytics. We apply this method to derive content-specific network visualizations of 16,492 user interactions in an online community for smoking cessation. Performance of the categorization system was reasonable (average F-measure of 0.74, with system-rater reliability approaching rater-rater reliability). The resulting semantically specific network analysis of user interactions reveals content- and behavior-specific network topologies. Implications for socio-behavioral health and wellness platforms are also discussed.
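The pipeline above, categorizing each message and then building one network per content category, can be sketched with plain data structures. The interaction tuples and theme labels below are invented for illustration, not drawn from the study's smoking-cessation dataset:

```python
from collections import defaultdict

# Hypothetical categorized interactions: (sender, receiver, semantic theme).
interactions = [
    ("ann", "bob", "cravings"),
    ("bob", "ann", "cravings"),
    ("ann", "cat", "social support"),
    ("cat", "dan", "social support"),
    ("dan", "ann", "cravings"),
]

def theme_networks(interactions):
    """Split interactions into one edge set per semantic category,
    yielding the content-specific networks described in the abstract."""
    nets = defaultdict(set)
    for sender, receiver, theme in interactions:
        nets[theme].add((sender, receiver))
    return nets

def out_degree(edges):
    """Out-degree per node within one content-specific network."""
    d = defaultdict(int)
    for sender, _ in edges:
        d[sender] += 1
    return dict(d)

nets = theme_networks(interactions)
cravings_degrees = out_degree(nets["cravings"])
```

Each theme yields its own topology, so behavior-specific patterns (e.g., who initiates craving-related exchanges) can be analyzed separately.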
An Immersive VR System for Sports Education
NASA Astrophysics Data System (ADS)
Song, Peng; Xu, Shuhong; Fong, Wee Teck; Chin, Ching Ling; Chua, Gim Guan; Huang, Zhiyong
The development of new technologies has undoubtedly promoted the advances of modern education, among which Virtual Reality (VR) technologies have made the education more visually accessible for students. However, classroom education has been the focus of VR applications whereas not much research has been done in promoting sports education using VR technologies. In this paper, an immersive VR system is designed and implemented to create a more intuitive and visual way of teaching tennis. A scalable system architecture is proposed in addition to the hardware setup layout, which can be used for various immersive interactive applications such as architecture walkthroughs, military training simulations, other sports game simulations, interactive theaters, and telepresent exhibitions. Realistic interaction experience is achieved through accurate and robust hybrid tracking technology, while the virtual human opponent is animated in real time using shader-based skin deformation. Potential future extensions are also discussed to improve the teaching/learning experience.
Deterministic control of radiative processes by shaping the mode field
NASA Astrophysics Data System (ADS)
Pellegrino, D.; Pagliano, F.; Genco, A.; Petruzzella, M.; van Otten, F. W.; Fiore, A.
2018-04-01
Quantum dots (QDs) interacting with confined light fields in photonic crystal cavities represent a scalable light source for the generation of single photons and laser radiation in the solid-state platform. The complete control of light-matter interaction in these sources is needed to fully exploit their potential, but it has been challenging due to the small length scales involved. In this work, we experimentally demonstrate the control of the radiative interaction between InAs QDs and one mode of three coupled nanocavities. By non-locally moulding the mode field experienced by the QDs inside one of the cavities, we are able to deterministically tune, and even inhibit, the spontaneous emission into the mode. The presented method will enable the real-time switching of Rabi oscillations, the shaping of the temporal waveform of single photons, and the implementation of unexplored nanolaser modulation schemes.
A Flexible Sensor Technology for the Distributed Measurement of Interaction Pressure
Donati, Marco; Vitiello, Nicola; De Rossi, Stefano Marco Maria; Lenzi, Tommaso; Crea, Simona; Persichetti, Alessandro; Giovacchini, Francesco; Koopman, Bram; Podobnik, Janez; Munih, Marko; Carrozza, Maria Chiara
2013-01-01
We present a sensor technology, developed in recent years at Scuola Superiore Sant'Anna, for the measurement of physical human-robot interaction pressure. The system is composed of flexible matrices of opto-electronic sensors covered by a soft silicone cover. This sensory system is completely modular and scalable, allowing one to cover areas of any size and shape and to measure different pressure ranges. In this work we present the main application areas for this technology. A first generation of the system was used to monitor human-robot interaction in upper-limb (NEUROExos; Scuola Superiore Sant'Anna) and lower-limb (LOPES; University of Twente) exoskeletons for rehabilitation. A second generation, with increased resolution and a wireless connection, was used to develop a pressure-sensitive foot insole and an improved human-robot interaction measurement system. The experimental characterization of the latter system, along with its validation on three healthy subjects, is presented here for the first time. A perspective on future uses and development of the technology is finally drafted. PMID:23322104
Rewiring MAP kinases in Saccharomyces cerevisiae to regulate novel targets through ubiquitination.
Groves, Benjamin; Khakhar, Arjun; Nadel, Cory M; Gardner, Richard G; Seelig, Georg
2016-08-15
Evolution has often copied and repurposed the mitogen-activated protein kinase (MAPK) signaling module. Understanding how connections form during evolution, in disease and across individuals requires knowledge of the basic tenets that govern kinase-substrate interactions. We identify criteria sufficient for establishing regulatory links between a MAPK and a non-native substrate. The yeast MAPK Fus3 and human MAPK ERK2 can be functionally redirected if only two conditions are met: the kinase and substrate contain matching interaction domains and the substrate includes a phospho-motif that can be phosphorylated by the kinase and recruit a downstream effector. We used a panel of interaction domains and phosphorylation-activated degradation motifs to demonstrate modular and scalable retargeting. We applied our approach to reshape the signaling behavior of an existing kinase pathway. Together, our results demonstrate that a MAPK can be largely defined by its interaction domains and compatible phospho-motifs and provide insight into how MAPK-substrate connections form.
Energy-absorption capability and scalability of square cross section composite tube specimens
NASA Technical Reports Server (NTRS)
Farley, Gary L.
1987-01-01
Static crushing tests were conducted on graphite/epoxy and Kevlar/epoxy square cross section tubes to study the influence of specimen geometry on the energy-absorption capability and scalability of composite materials. The tube inside width-to-wall thickness (W/t) ratio was determined to significantly affect the energy-absorption capability of composite materials. As W/t ratio decreases, the energy-absorption capability increases nonlinearly. The energy-absorption capability of Kevlar/epoxy tubes was found to be geometrically scalable, but the energy-absorption capability of graphite/epoxy tubes was not geometrically scalable.
Scalable and Resilient Middleware to Handle Information Exchange during Environment Crisis
NASA Astrophysics Data System (ADS)
Tao, R.; Poslad, S.; Moßgraber, J.; Middleton, S.; Hammitzsch, M.
2012-04-01
The EU FP7 TRIDEC project focuses on enabling real-time, intelligent information management of collaborative, complex, critical decision processes for earth management. A key challenge is to provide a communication infrastructure that facilitates interoperable environment information services during environment events and crises such as tsunamis and drilling operations, during which increasing volumes and dimensionality of disparate information sources, both sensor-based and human-based, arise and need to be managed. Such a system needs to support: scalable, distributed messaging; asynchronous messaging; open messaging that handles changing clients, such as new and retired automated systems and human information sources coming online or going offline; flexible data filtering; and heterogeneous access networks (e.g., GSM, WLAN and LAN). In addition, the system needs to be resilient to ICT system problems, e.g. failures, degradation and overloads, during environment events. There are several system middleware choices for TRIDEC based upon a Service-Oriented Architecture (SOA), Event-Driven Architecture (EDA), Cloud Computing, and an Enterprise Service Bus (ESB). In an SOA, everything is a service (e.g. data access, processing and exchange); clients can request services on demand or subscribe to services registered by providers; interaction is more often synchronous. In an EDA system, events that represent significant changes in state can be processed simply, as streams, or in more complex ways. Cloud computing is a virtualized, interoperable and elastic resource allocation model. An ESB, a fundamental component for enterprise messaging, supports synchronous and asynchronous message exchange models and has inbuilt resilience against ICT failure.
Our middleware proposal is an ESB-based hybrid architecture model: an SOA extension supports more synchronous workflows; EDA assists the ESB in handling more complex event processing; and Cloud computing can be used to increase and decrease ESB resources on demand. To realize this hybrid ESB-centric architecture, we will adopt two complementary approaches: an open-source one to improve scalability and resilience, and a commercial one for ultra-fast messaging, with a bridge between the two to support interoperability. In TRIDEC, to manage such a hybrid messaging system, overlay and underlay management techniques will be adopted. The managers (both global and local) will collect, store and update status information (e.g. CPU utilization, free space, number of clients) and balance usage, throughput, and delays to improve resilience and scalability. The expected resilience improvements include dynamic failover, self-healing, pre-emptive load balancing, and bottleneck prediction, while the expected scalability improvements include capacity estimation, an HTTP bridge, and automatic configuration and reconfiguration (e.g. adding or deleting clients and servers).
Disparity: scalable anomaly detection for clusters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Desai, N.; Bradshaw, R.; Lusk, E.
2008-01-01
In this paper, we describe disparity, a tool that performs parallel, scalable anomaly detection for clusters. Disparity uses basic statistical methods and scalable reduction operations to perform data reduction on client nodes and uses these results to locate node anomalies. We discuss the implementation of disparity and present results of its use on a SiCortex SC5832 system.
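As a rough illustration of the kind of basic statistical test described above, a z-score outlier check over per-node metrics might look like the sketch below. This is an assumption-laden simplification: disparity's actual statistics and its parallel reduction machinery are not shown, and the timing data is invented.

```python
import statistics

def find_anomalies(node_metrics, threshold=2.0):
    """Flag nodes whose metric deviates from the cluster-wide mean by more
    than `threshold` standard deviations, the sort of simple statistical
    test a reduction-based anomaly detector can apply to reduced data."""
    values = list(node_metrics.values())
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # perfectly uniform cluster: nothing to flag
    return [node for node, v in node_metrics.items()
            if abs(v - mean) / stdev > threshold]

# One slow node among otherwise uniform benchmark timings (illustrative data).
timings = {f"node{i}": 1.0 for i in range(31)}
timings["node31"] = 9.0
anomalies = find_anomalies(timings)  # → ['node31']
```

In a real cluster tool the mean and standard deviation would themselves be computed with scalable reductions rather than gathered to one process.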
BAMSI: a multi-cloud service for scalable distributed filtering of massive genome data.
Ausmees, Kristiina; John, Aji; Toor, Salman Z; Hellander, Andreas; Nettelblad, Carl
2018-06-26
The advent of next-generation sequencing (NGS) has made whole-genome sequencing of cohorts of individuals a reality. Primary datasets of raw or aligned reads of this sort can get very large. For scientific questions where curated called variants are not sufficient, the sheer size of the datasets makes analysis prohibitively expensive. In order to make re-analysis of such data feasible without the need to have access to a large-scale computing facility, we have developed a highly scalable, storage-agnostic framework, an associated API and an easy-to-use web user interface to execute custom filters on large genomic datasets. We present BAMSI, a Software-as-a-Service (SaaS) solution for filtering of the 1000 Genomes phase 3 set of aligned reads, with the possibility of extension and customization to other sets of files. Unique to our solution is the capability of simultaneously utilizing many different mirrors of the data to increase the speed of the analysis. In particular, if the data is available in private or public clouds - an increasingly common scenario for both academic and commercial cloud providers - our framework allows for seamless deployment of filtering workers close to data. We show results indicating that such a setup improves the horizontal scalability of the system, and present a possible use case of the framework by performing an analysis of structural variation in the 1000 Genomes data set. BAMSI constitutes a framework for efficient filtering of large genomic data sets that is flexible in the use of compute as well as storage resources. The data resulting from the filter is assumed to be greatly reduced in size, and can easily be downloaded or routed into, e.g., a Hadoop cluster for subsequent interactive analysis using Hive, Spark or similar tools.
In this respect, our framework also suggests a general model for making very large datasets of high scientific value more accessible by offering the possibility for organizations to share the cost of hosting data on hot storage, without compromising the scalability of downstream analysis.
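The mirror-aware scheduling idea, assigning each file to a worker region that already hosts a copy, can be sketched minimally as below. The region and file names are hypothetical and this is not BAMSI's actual API, just an illustration of keeping filtering close to the data:

```python
def assign_work(files, mirrors):
    """mirrors maps region -> set of files hosted there.
    Returns a plan mapping region -> list of files to filter locally,
    balancing load across the regions that hold a copy."""
    plan = {region: [] for region in mirrors}
    for f in files:
        candidates = [r for r, hosted in mirrors.items() if f in hosted]
        if not candidates:
            raise LookupError(f"no mirror hosts {f}")
        # Least-loaded region among those with a local copy.
        best = min(candidates, key=lambda r: len(plan[r]))
        plan[best].append(f)
    return plan

# Two clouds mirroring overlapping slices of the dataset (invented names).
mirrors = {"cloud-a": {"chr1.bam", "chr2.bam"},
           "cloud-b": {"chr2.bam", "chr3.bam"}}
plan = assign_work(["chr1.bam", "chr2.bam", "chr3.bam"], mirrors)
```

Files held by several mirrors go to whichever region is currently least loaded, which is one simple way such a setup can improve horizontal scalability.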
Joint source-channel coding for motion-compensated DCT-based SNR scalable video.
Kondi, Lisimachos P; Ishtiaq, Faisal; Katsaggelos, Aggelos K
2002-01-01
In this paper, we develop an approach toward joint source-channel coding for motion-compensated DCT-based scalable video coding and transmission. A framework for the optimal selection of the source and channel coding rates over all scalable layers is presented such that the overall distortion is minimized. The algorithm utilizes universal rate distortion characteristics which are obtained experimentally and show the sensitivity of the source encoder and decoder to channel errors. The proposed algorithm allocates the available bit rate between scalable layers and, within each layer, between source and channel coding. We present the results of this rate allocation algorithm for video transmission over a wireless channel using the H.263 Version 2 signal-to-noise ratio (SNR) scalable codec for source coding and rate-compatible punctured convolutional (RCPC) codes for channel coding. We discuss the performance of the algorithm with respect to the channel conditions, coding methodologies, layer rates, and number of layers.
Lee, Chankyun; Cao, Xiaoyuan; Yoshikane, Noboru; Tsuritani, Takehiro; Rhee, June-Koo Kevin
2015-10-19
The feasibility of software-defined optical networking (SDON) for practical applications critically depends on the scalability of centralized control performance. In this paper, highly scalable routing and wavelength assignment (RWA) algorithms are investigated on an OpenFlow-based SDON testbed for a proof-of-concept demonstration. Efficient RWA algorithms are proposed to achieve high network capacity at reduced computation cost, a significant attribute in a scalable centralized-control SDON. The proposed heuristic RWA algorithms differ in the order in which requests are processed and in the procedures for routing table updates. Combined with a shortest-path-based routing algorithm, a hottest-request-first processing policy that considers demand intensity and end-to-end distance information offers both the highest network throughput and acceptable computation scalability. We further investigate the trade-off between network throughput and computation complexity in the routing table update procedure through a simulation study.
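A simplified reading of the hottest-request-first policy can be sketched as first-fit wavelength assignment over precomputed shortest paths, ordering requests by demand intensity times hop distance. The topology, weighting, and data structures are illustrative assumptions, not the authors' exact algorithm:

```python
def hottest_first_rwa(requests, paths, n_wavelengths):
    """First-fit wavelength assignment with a hottest-request-first policy:
    process requests in descending order of demand intensity times
    end-to-end hop count, then assign the lowest wavelength free on
    every link of the request's precomputed shortest path."""
    usage = set()    # occupied (link, wavelength) pairs
    assigned = {}
    order = sorted(requests,
                   key=lambda r: r["demand"] * len(paths[r["pair"]]),
                   reverse=True)
    for req in order:
        links = paths[req["pair"]]
        for w in range(n_wavelengths):
            if all((link, w) not in usage for link in links):
                usage.update((link, w) for link in links)
                assigned[req["pair"]] = w
                break
    return assigned

# Tiny line topology A-B-C with precomputed shortest paths (illustrative).
paths = {("A", "C"): [("A", "B"), ("B", "C")],
         ("A", "B"): [("A", "B")],
         ("B", "C"): [("B", "C")]}
requests = [{"pair": ("A", "B"), "demand": 1},
            {"pair": ("B", "C"), "demand": 1},
            {"pair": ("A", "C"), "demand": 3}]
assigned = hottest_first_rwa(requests, paths, n_wavelengths=2)
```

Serving the hottest, longest request first reserves a continuous wavelength along its whole path before shorter requests fragment the spectrum.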
Sol-Gel Processing of MgF₂ Antireflective Coatings.
Löbmann, Peer
2018-05-02
There are different approaches to the preparation of porous antireflective λ/4 MgF₂ films from liquid precursors. Among these, the non-aqueous fluorolytic synthesis of precursor solutions offers many advantages in terms of processing simplicity and scalability. In this paper, the structural features and optical performance of the resulting films are highlighted, and their specific interactions with different inorganic substrates are discussed. Due to their excellent abrasion resistance, the coatings have high potential for applications on glass. Using solvothermal treatment of precursor solutions, the processing of thermally sensitive polymer substrates also becomes feasible.
Architecture Knowledge for Evaluating Scalable Databases
2015-01-16
problems, arising from the proliferation of new data models and distributed technologies for building scalable, available data stores. Architects must...longer are relational databases the de facto standard for building data repositories. Highly distributed, scalable “NoSQL” databases [11] have emerged...This is especially challenging at the data storage layer. The multitude of competing NoSQL database technologies creates a complex and rapidly
Scalable and Manageable Storage Systems
2000-12-01
Despite our long-distance relationship, my brothers and sisters, Charfeddine, Amel, Ghazi, Hajer, Nabeel, and Ines overwhelmed me with more love and...that enable storage systems to be more cost-effectively scalable. Furthermore, the dissertation proposes an approach to ensure automatic load...and addresses three key technical challenges to making storage systems more cost-effectively scalable and manageable. 1.2 Dissertation research The
Scalable Quantum Networks for Distributed Computing and Sensing
2016-04-01
probabilistic measurement, so we developed quantum memories and guided-wave implementations of same, demonstrating controlled delay of a heralded single...Second, fundamental scalability requires a method to synchronize protocols based on quantum measurements, which are inherently probabilistic. To meet...AFRL-AFOSR-UK-TR-2016-0007 Scalable Quantum Networks for Distributed Computing and Sensing Ian Walmsley THE UNIVERSITY OF OXFORD Final Report 04/01
A scalable infrastructure for CMS data analysis based on OpenStack Cloud and Gluster file system
NASA Astrophysics Data System (ADS)
Toor, S.; Osmani, L.; Eerola, P.; Kraemer, O.; Lindén, T.; Tarkoma, S.; White, J.
2014-06-01
The challenge of providing a resilient and scalable computational and data management solution for massive scale research environments requires continuous exploration of new technologies and techniques. In this project the aim has been to design a scalable and resilient infrastructure for CERN HEP data analysis. The infrastructure is based on OpenStack components for structuring a private Cloud with the Gluster File System. We integrate the state-of-the-art Cloud technologies with the traditional Grid middleware infrastructure. Our test results show that the adopted approach provides a scalable and resilient solution for managing resources without compromising on performance and high availability.
Adaptive UEP and Packet Size Assignment for Scalable Video Transmission over Burst-Error Channels
NASA Astrophysics Data System (ADS)
Lee, Chen-Wei; Yang, Chu-Sing; Su, Yih-Ching
2006-12-01
This work proposes an adaptive unequal error protection (UEP) and packet size assignment scheme for scalable video transmission over a burst-error channel. An analytic model is developed to evaluate the impact of channel bit error rate on the quality of streaming scalable video. A video transmission scheme, which combines the adaptive assignment of packet size with unequal error protection to increase end-to-end video quality, is proposed. Several distinct scalable video transmission schemes over a burst-error channel have been compared, and the simulation results reveal that the proposed transmission schemes can react to varying channel conditions with smaller and smoother quality degradation.
Scalability enhancement of AODV using local link repairing
NASA Astrophysics Data System (ADS)
Jain, Jyoti; Gupta, Roopam; Bandhopadhyay, T. K.
2014-09-01
Dynamic changes in the topology of an ad hoc network make it difficult to design an efficient routing protocol. Scalability of an ad hoc network is also one of the important criteria of research in this field. Most research on ad hoc networks focuses on routing and medium access protocols and produces simulation results for limited-size networks. Ad hoc on-demand distance vector (AODV) is one of the best reactive routing protocols. In this article, modified routing protocols based on local link repairing of AODV are proposed, including a method of finding alternate routes to the next-to-next node in case of link failure. These protocols are beacon-less: the periodic hello message is removed from basic AODV to improve scalability. A few control packet formats have been changed to accommodate the suggested modification. The proposed protocols are simulated to investigate scalability performance and compared with the basic AODV protocol. Simulation results make clear that the local link repairing method improves the scalability of the network. We tested the protocols over different terrain areas with approximately constant node densities and different traffic loads.
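The next-to-next-node repair idea can be sketched as a breadth-first search for a detour around the broken link. This is a simplified illustration of the routing logic only; the protocol's actual control-packet exchange and timers are not modeled, and the topology is invented:

```python
from collections import deque

def local_repair(graph, route, broken_index):
    """When the link route[i] -> route[i+1] breaks, try to splice in a
    detour from route[i] to the next-to-next node route[i+2], instead of
    triggering a full source-initiated route rediscovery."""
    upstream = route[broken_index]
    target = route[broken_index + 2]          # next-to-next node
    bad = route[broken_index + 1]             # unreachable next hop
    # BFS from the upstream node, avoiding the unreachable node.
    frontier, seen = deque([[upstream]]), {upstream, bad}
    while frontier:
        path = frontier.popleft()
        if path[-1] == target:
            return route[:broken_index] + path + route[broken_index + 3:]
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # no detour: fall back to full AODV route discovery

# Route S-A-B-C-D with an alternate neighbor X bridging A and C.
graph = {"S": ["A"], "A": ["S", "B", "X"], "B": ["A", "C"],
         "X": ["A", "C"], "C": ["B", "X", "D"], "D": ["C"]}
repaired = local_repair(graph, ["S", "A", "B", "C", "D"], broken_index=1)
```

Repairing locally at the point of failure keeps the upstream portion of the route intact, which is where the scalability gain over end-to-end rediscovery comes from.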
Wanted: Scalable Tracers for Diffusion Measurements
2015-01-01
Scalable tracers are potentially a useful tool to examine diffusion mechanisms and to predict diffusion coefficients, particularly for hindered diffusion in complex, heterogeneous, or crowded systems. Scalable tracers are defined as a series of tracers varying in size but with the same shape, structure, surface chemistry, deformability, and diffusion mechanism. Both chemical homology and constant dynamics are required. In particular, branching must not vary with size, and there must be no transition between ordinary diffusion and reptation. Measurements using scalable tracers yield the mean diffusion coefficient as a function of size alone; measurements using nonscalable tracers yield the variation due to differences in the other properties. Candidate scalable tracers are discussed for two-dimensional (2D) diffusion in membranes and three-dimensional diffusion in aqueous solutions. Correlations to predict the mean diffusion coefficient of globular biomolecules from molecular mass are reviewed briefly. Specific suggestions for the 3D case include the use of synthetic dendrimers or random hyperbranched polymers instead of dextran and the use of core–shell quantum dots. Another useful tool would be a series of scalable tracers varying in deformability alone, prepared by varying the density of crosslinking in a polymer to make say “reinforced Ficoll” or “reinforced hyperbranched polyglycerol.” PMID:25319586
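For the globular-biomolecule correlations mentioned above, a Stokes-Einstein estimate from molecular mass alone can be sketched as follows. The density and viscosity values are typical textbook numbers assumed for illustration, not figures taken from this review:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def diffusion_coefficient(mass_da, temp_k=293.15, viscosity=1.0e-3):
    """Estimate D = k_B T / (6 pi eta R) for a globular biomolecule,
    deriving the radius R from molecular mass by assuming a compact
    sphere of density ~1370 kg/m^3 (a typical protein value)."""
    mass_kg = mass_da * 1.66053906660e-27   # dalton -> kg
    density = 1370.0                        # kg/m^3, assumed
    radius = (3 * mass_kg / (4 * math.pi * density)) ** (1 / 3)
    return K_B * temp_k / (6 * math.pi * viscosity * radius)

# Doubling the mass shrinks D by 2**(1/3): the size-only scaling that a
# scalable-tracer series (same shape and chemistry, varying size) isolates.
ratio = diffusion_coefficient(28_000) / diffusion_coefficient(14_000)
```

Nonscalable tracers would deviate from this clean mass-only dependence, which is precisely the variation the review attributes to differences in shape, branching, or deformability.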
Ontology and modeling patterns for state-based behavior representation
NASA Technical Reports Server (NTRS)
Castet, Jean-Francois; Rozek, Matthew L.; Ingham, Michel D.; Rouquette, Nicolas F.; Chung, Seung H.; Kerzhner, Aleksandr A.; Donahue, Kenneth M.; Jenkins, J. Steven; Wagner, David A.; Dvorak, Daniel L.;
2015-01-01
This paper provides an approach to capture state-based behavior of elements, that is, the specification of their state evolution in time, and the interactions amongst them. Elements can be components (e.g., sensors, actuators) or environments, and are characterized by state variables that vary with time. The behaviors of these elements, as well as interactions among them are represented through constraints on state variables. This paper discusses the concepts and relationships introduced in this behavior ontology, and the modeling patterns associated with it. Two example cases are provided to illustrate their usage, as well as to demonstrate the flexibility and scalability of the behavior ontology: a simple flashlight electrical model and a more complex spacecraft model involving instruments, power and data behaviors. Finally, an implementation in a SysML profile is provided.
Conceptual Architecture for Obtaining Cyber Situational Awareness
2014-06-01
1-893723-17-8. [10] SKYBOX SECURITY. Developer's Guide. Skybox View. Manual. Version 11. 2010. [11] SCALABLE Network. EXata communications simulation platform. Available: <http://www.scalable ...E. Understanding command and control. Washington, D.C.: CCRP Publication Series, 2006. 255 p. ISBN 1-893723-17-8.
Scalable Power-Component Models for Concept Testing
2011-08-17
Scalable Power-Component Models for Concept Testing, Mazzola, et al. UNCLASSIFIED: Dist A. Approved for public release. 2011 NDIA GROUND VEHICLE...Technology Symposium (GVSETS)...technology that has yet...
TriG: Next Generation Scalable Spaceborne GNSS Receiver
NASA Technical Reports Server (NTRS)
Tien, Jeffrey Y.; Okihiro, Brian Bachman; Esterhuizen, Stephan X.; Franklin, Garth W.; Meehan, Thomas K.; Munson, Timothy N.; Robison, David E.; Turbiner, Dmitry; Young, Lawrence E.
2012-01-01
TriG is the next-generation NASA scalable space GNSS science receiver. It will track all GNSS and additional signals (i.e., GPS, GLONASS, Galileo, Compass and Doris). Its scalable 3U architecture is fully software- and firmware-reconfigurable, enabling optimization to meet specific mission requirements. The TriG GNSS EM is currently undergoing testing and is expected to complete full performance testing later this year.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Drotar, Alexander P.; Quinn, Erin E.; Sutherland, Landon D.
2012-07-30
The project description is: (1) build a high-performance computer; and (2) create a tool to monitor node applications in the Component Based Tool Framework (CBTF) using code from the Lightweight Data Metric Service (LDMS). The importance of this project is that: (1) there is a need for a scalable, parallel tool to monitor nodes on clusters; and (2) new LDMS plugins need to be easily added to the tool. CBTF stands for Component Based Tool Framework. It is scalable and adjusts to different topologies automatically. It uses the MRNet (Multicast/Reduction Network) mechanism for information transport. CBTF is flexible and general enough to be used for any tool that needs to perform a task on many nodes. Its components are reusable and easily added to a new tool. There are three levels of CBTF: (1) the frontend node, which interacts with users; (2) filter nodes, which filter or concatenate information from backend nodes; and (3) backend nodes, where the actual work of the tool is done. LDMS stands for Lightweight Data Metric Service; it is a tool used for monitoring nodes. Ltool is the name of the tool we derived from LDMS. It is dynamically linked and includes the following components: Vmstat, Meminfo, Procinterrupts and more. It works as follows: the Ltool command is run on the frontend node; Ltool collects information from the backend nodes; the backend nodes send information to the filter nodes; and the filter nodes concatenate the information and send it to a database on the frontend node. Ltool is a useful tool for monitoring nodes on a cluster because the overhead involved in running it is not particularly high, and it automatically scales to any size of cluster.
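The three-level flow described above (frontend, filter nodes, backend nodes) can be sketched in a few lines. This is a hypothetical mimic of the information flow only; the function names, metric fields, and fanout are invented and do not reflect the real CBTF/MRNet or Ltool APIs.

```python
# Illustrative sketch of a CBTF-style three-level tool topology:
# backend nodes collect metrics, filter nodes concatenate them, and the
# frontend merges the filtered output. All names here are hypothetical.

def backend_collect(node_id):
    """Each backend node gathers local metrics (stand-in for vmstat/meminfo)."""
    return {"node": node_id, "free_mem_mb": 1024 + node_id}

def filter_concatenate(reports):
    """Filter nodes concatenate (or reduce) reports from their backends."""
    return sorted(reports, key=lambda r: r["node"])

def frontend_run(backend_ids, fanout=4):
    """Frontend fans work out to filter groups, then merges their output."""
    merged = []
    for i in range(0, len(backend_ids), fanout):
        group = backend_ids[i:i + fanout]
        merged.extend(filter_concatenate([backend_collect(n) for n in group]))
    return merged

reports = frontend_run(list(range(10)))
print(len(reports))  # one report per backend node
```

The filter level is what keeps the frontend's load independent of cluster size, which is the property that lets such a tool "automatically scale to any size of cluster".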
Coelho, Daniel H; Hammerschlag, Paul E; Bat-Chava, Yael; Kohan, Darius
2009-06-01
The Cochlear Implant Function Index (CIFI) was created to assess adult cochlear implant (CI) auditory effectiveness in real-world situations. Our objective is to evaluate the CIFI as a reliable psychometric tool to assess 1) reliance on visual assistance, 2) telephone use, 3) communication at work, 4) 'hearing' in noise, 5) in groups, and 6) in large room settings. Based upon Guttman scaling properties, the CIFI elicits implanted respondents' functional level of auditory independence, from Level 1 (still requiring signing) to Level 4 (without any help beyond the CI). A blinded, retrospective questionnaire was anonymously answered by cochlear implant recipients. CI centers of tertiary care medical centers, a CI support group, and an interactive web page of a hearing and speech center in a large metropolitan region. 245 respondents from a varied adult CI population, implanted from one month to 19 years prior to answering the questionnaire. An assessment tool of CI function. A coefficient of reproducibility (CR) for the Guttman scale format equal to or greater than 0.90, indicating good scalability. CR in the CIFI was above 0.90. Effective scalability and mean scores from 2.5 to 3.5 for the six areas examined (1.00-4.00) were achieved. The psychometric properties of this user-friendly survey demonstrate consistently good scalability. Based on these findings, the CIFI provides a validated tool that can be used for systematic comparisons between groups of patients or for follow-up outcomes in patients who use cochlear implants. Further study is indicated to correlate CIFI scores with sound and speech perception scores. Copyright 2009 John Wiley & Sons, Ltd.
High-Performance Monitoring Architecture for Large-Scale Distributed Systems Using Event Filtering
NASA Technical Reports Server (NTRS)
Maly, K.
1998-01-01
Monitoring is an essential process for observing and improving the reliability and performance of large-scale distributed (LSD) systems. In an LSD environment, a large number of events is generated by the system components during execution or interaction with external objects (e.g., users or processes). Monitoring such events is necessary for observing the run-time behavior of LSD systems and providing the status information required for debugging, tuning and managing such applications. However, correlated events are generated concurrently and can be distributed across various locations in the application environment, which complicates the management decision process and thereby makes monitoring LSD systems an intricate task. We propose a scalable, high-performance monitoring architecture for LSD systems to detect and classify interesting local and global events and disseminate the monitoring information to the corresponding endpoint management applications, such as debugging and reactive control tools, to improve application performance and reliability. A large volume of events may be generated due to the extensive demands of the monitoring applications and the high interaction of LSD systems. The monitoring architecture employs a high-performance event filtering mechanism to efficiently process the large volume of event traffic generated by LSD systems and to minimize the intrusiveness of the monitoring process by reducing the event traffic flow in the system and distributing the monitoring computation. Our architecture also supports dynamic and flexible reconfiguration of the monitoring mechanism via its instrumentation and subscription components. As a case study, we show how our monitoring architecture can be utilized to improve the reliability and performance of the Interactive Remote Instruction (IRI) system, a large-scale distributed system for collaborative distance learning.
The filtering mechanism represents an intrinsic component integrated with the monitoring architecture to reduce the volume of event traffic flow in the system, and thereby reduce the intrusiveness of the monitoring process. We are developing an event filtering architecture to efficiently process the large volume of event traffic generated by LSD systems (such as distributed interactive applications). This filtering architecture is used to monitor a collaborative distance learning application to obtain debugging and feedback information. Our architecture supports the dynamic (re)configuration and optimization of event filters in large-scale distributed systems. Our work represents a major contribution by (1) surveying and evaluating existing event filtering mechanisms for monitoring LSD systems and (2) devising an integrated, scalable, high-performance event filtering architecture that spans several key application domains, presenting techniques to improve functionality, performance and scalability. This paper describes the primary characteristics and challenges of developing high-performance event filtering for monitoring LSD systems. We survey existing event filtering mechanisms and explain the key characteristics of each technique. In addition, we discuss limitations of existing event filtering mechanisms and outline how our architecture will improve key aspects of event filtering.
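The core idea of subscription-based event filtering can be shown in a few lines: subscribers register predicates, and the monitor forwards only matching events, cutting the traffic that reaches each consumer. The `Event` shape and API below are invented for illustration, not the paper's actual interfaces.

```python
# Minimal sketch of subscription-based event filtering: only events that
# match a subscriber's predicate are forwarded, reducing monitoring traffic.

from dataclasses import dataclass

@dataclass
class Event:
    source: str
    severity: int   # 0 = debug .. 3 = critical

class FilteringMonitor:
    def __init__(self):
        self.subscriptions = []   # (predicate, sink) pairs

    def subscribe(self, predicate, sink):
        self.subscriptions.append((predicate, sink))

    def publish(self, event):
        """Forward the event only to subscribers whose filter matches."""
        for predicate, sink in self.subscriptions:
            if predicate(event):
                sink.append(event)

monitor = FilteringMonitor()
critical = []
monitor.subscribe(lambda e: e.severity >= 2, critical)

for sev in [0, 1, 2, 3, 1, 3]:
    monitor.publish(Event(source="node-7", severity=sev))

print(len(critical))  # only the high-severity events got through
```

In a distributed deployment the predicates would be pushed toward the event sources, so low-value events are dropped before they ever cross the network; that placement is what reduces intrusiveness.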
NASA Astrophysics Data System (ADS)
Jubran, Mohammad K.; Bansal, Manu; Kondi, Lisimachos P.
2006-01-01
In this paper, we consider the problem of optimal bit allocation for wireless video transmission over fading channels. We use a newly developed hybrid scalable/multiple-description codec that combines the functionality of both scalable and multiple-description codecs. It produces a base layer and multiple-description enhancement layers. Any of the enhancement layers can be decoded (in a non-hierarchical manner) with the base layer to improve the reconstructed video quality. Two different channel coding schemes (Rate-Compatible Punctured Convolutional (RCPC)/Cyclic Redundancy Check (CRC) coding, and product-code Reed-Solomon (RS)+RCPC/CRC coding) are used for unequal error protection of the layered bitstream. Optimal allocation of the bitrate between source and channel coding is performed over discrete sets of source coding rates and channel coding rates. Experimental results are presented for a wide range of channel conditions. Comparisons with classical scalable coding also show the effectiveness of using hybrid scalable/multiple-description coding for wireless transmission.
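Because both the source rates and the channel-code rates come from discrete sets, the allocation above reduces to a search over rate pairs under a bitrate budget. The sketch below shows that optimization shape with invented rate/quality numbers; the paper's actual codec, distortion model, and channel statistics are far richer.

```python
# Hedged sketch of discrete joint source/channel rate allocation: pick a
# source rate and a channel-code rate maximizing expected quality within a
# total bitrate budget. All numbers here are fabricated for illustration.

import itertools

# (source_kbps, base_quality) options and (code_rate, survival_prob) options
SOURCE_OPTS = [(200, 30.0), (400, 33.0), (800, 35.5)]
CHANNEL_OPTS = [(1/2, 0.99), (2/3, 0.95), (4/5, 0.85)]

def expected_quality(src, chan):
    kbps, quality = src
    rate, p_ok = chan
    total_kbps = kbps / rate           # channel coding adds redundancy
    return total_kbps, p_ok * quality  # expected quality if the layer survives

def best_allocation(budget_kbps):
    best = None
    for src, chan in itertools.product(SOURCE_OPTS, CHANNEL_OPTS):
        total, q = expected_quality(src, chan)
        if total <= budget_kbps and (best is None or q > best[0]):
            best = (q, src, chan)
    return best

q, src, chan = best_allocation(1000)
print(src, chan)
```

Note the trade-off the search captures: a lower code rate costs bandwidth but raises the survival probability, so the optimum moves toward stronger protection as the channel worsens.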
Performances of the PIPER scalable child human body model in accident reconstruction
Giordano, Chiara; Kleiven, Svein
2017-01-01
Human body models (HBMs) have the potential to provide significant insights into the pediatric response to impact. This study describes a scalable/posable approach to performing child accident reconstructions using the Position and Personalize Advanced Human Body Models for Injury Prediction (PIPER) scalable child HBM, at different ages and in different positions obtained with the PIPER tool. Overall, the PIPER scalable child HBM predicted reasonably well the injury severity and location for the children involved in real-life crash scenarios documented in the medical records. The developed methodology and workflow are essential for future work to determine child injury tolerances based on the full Child Advanced Safety Project for European Roads (CASPER) accident reconstruction database. With the workflow presented in this study, the open-source PIPER scalable HBM combined with the PIPER tool is also foreseen to have implications for improved safety designs for better protection of children in traffic accidents. PMID:29135997
Silicon quantum processor with robust long-distance qubit couplings.
Tosi, Guilherme; Mohiyaddin, Fahd A; Schmitt, Vivien; Tenberg, Stefanie; Rahman, Rajib; Klimeck, Gerhard; Morello, Andrea
2017-09-06
Practical quantum computers require a large network of highly coherent qubits, interconnected in a design robust against errors. Donor spins in silicon provide state-of-the-art coherence and quantum gate fidelities, in a platform adapted from industrial semiconductor processing. Here we present a scalable design for a silicon quantum processor that does not require precise donor placement and leaves ample space for the routing of interconnects and readout devices. We introduce the flip-flop qubit, a combination of the electron-nuclear spin states of a phosphorus donor that can be controlled by microwave electric fields. Two-qubit gates exploit a second-order electric dipole-dipole interaction, allowing selective coupling beyond the nearest-neighbor, at separations of hundreds of nanometers, while microwave resonators can extend the entanglement to macroscopic distances. We predict gate fidelities within fault-tolerance thresholds using realistic noise models. This design provides a realizable blueprint for scalable spin-based quantum computers in silicon. Quantum computers will require a large network of coherent qubits, connected in a noise-resilient way. Tosi et al. present a design for a quantum processor based on electron-nuclear spins in silicon, with electrical control and coupling schemes that simplify qubit fabrication and operation.
NASA Astrophysics Data System (ADS)
Luo, Jun-Wei; Li, Shu-Shen; Zunger, Alex
2017-09-01
The electric field manipulation of the Rashba spin-orbit coupling effects provides a route to electrically control spins, constituting the foundation of the field of semiconductor spintronics. In general, the strength of the Rashba effects depends linearly on the applied electric field and is significant only for heavy-atom materials with large intrinsic spin-orbit interaction under high electric fields. Here, we illustrate in 1D semiconductor nanowires an anomalous field dependence of the hole (but not electron) Rashba effect (HRE). (i) At low fields, the strength of the HRE exhibits a steep increase with the field so that even low fields can be used for device switching. (ii) At higher fields, the HRE undergoes a rapid transition to saturation with a giant strength even for light-atom materials such as Si (exceeding 100 meV Å). (iii) The nanowire-size dependence of the saturation HRE is rather weak for light-atom Si, so size fluctuations would have a limited effect; this is a key requirement for scalability of Rashba-field-based spintronic devices. These three features offer Si nanowires as a promising platform for the realization of scalable complementary metal-oxide-semiconductor compatible spintronic devices.
Approaching the exa-scale: a real-world evaluation of rendering extremely large data sets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patchett, John M; Ahrens, James P; Lo, Li - Ta
2010-10-15
Extremely large scale analysis is becoming increasingly important as supercomputers and their simulations move from petascale to exascale. The lack of dedicated hardware acceleration for rendering on today's supercomputing platforms motivates our detailed evaluation of the possibility of interactive rendering on the supercomputer. In order to facilitate our understanding of rendering on the supercomputing platform, we focus on the scalability of rendering algorithms and architectures envisioned for exascale datasets. To understand the tradeoffs in dealing with extremely large datasets, we compare three different rendering algorithms for large polygonal data: software-based ray tracing, software-based rasterization and hardware-accelerated rasterization. We present a case study of strong and weak scaling of rendering extremely large data on both GPU- and CPU-based parallel supercomputers using ParaView, a parallel visualization tool. We use three different data sets: two synthetic and one from a scientific application. At an extreme scale, algorithmic rendering choices make a difference and should be considered while approaching exascale computing, visualization, and analysis. We find software-based ray tracing offers a viable approach for scalable rendering of the projected future massive data sizes.
Power-Scalable Blue-Green Bessel Beams
2016-02-23
Final technical report (Jan 2011 - Dec 2013). Siddharth Ramachandran, Photonics Center, Boston University, 8 Saint Mary's Street, Boston, MA 02215; phone: (617) 353-9811. Subject terms: fiber lasers, non-traditional emission wavelengths, high-power blue-green tunable lasers.
Discretized Streams: A Fault-Tolerant Model for Scalable Stream Processing
2012-12-14
Discretized Streams: A Fault-Tolerant Model for Scalable Stream Processing. Matei Zaharia, Tathagata Das, Haoyuan Li, Timothy Hunter, Scott Shenker, Ion... Abstract fragment: current programming models for distributed stream processing are relatively low-level, often leaving the user to worry about consistency of...
Modular Universal Scalable Ion-trap Quantum Computer
2016-06-02
Final report (1-Aug-2010 to 31-Jan-2016): Modular Universal Scalable Ion-trap Quantum Computer. The main goal of the original MUSIQC proposal was to construct and demonstrate a modular and universally-expandable ion... Subject terms: ion trap quantum computation, scalable modular architectures.
Scalable L-infinite coding of meshes.
Munteanu, Adrian; Cernea, Dan C; Alecu, Alin; Cornelis, Jan; Schelkens, Peter
2010-01-01
The paper investigates the novel concept of local-error control in mesh geometry encoding. In contrast to traditional mesh-coding systems that use the mean-square error as the target distortion metric, this paper proposes a new L-infinite mesh-coding approach, for which the target distortion metric is the L-infinite distortion. In this context, a novel wavelet-based L-infinite-constrained coding approach for meshes is proposed, which ensures that the maximum error between the vertex positions in the original and decoded meshes is lower than a given upper bound. Furthermore, the proposed system achieves scalability in the L-infinite sense, that is, any decoding of the input stream will correspond to a perfectly predictable L-infinite distortion upper bound. An instantiation of the proposed L-infinite-coding approach is demonstrated for MESHGRID, a scalable 3D object encoding system that is part of MPEG-4 AFX. In this context, the advantages of scalable L-infinite coding over L-2-oriented coding are experimentally demonstrated. One concludes that the proposed L-infinite mesh-coding approach guarantees an upper bound on the local error in the decoded mesh, enables a fast real-time implementation of the rate allocation, and preserves all the scalability features and animation capabilities of the employed scalable mesh codec.
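The two distortion metrics being contrasted are easy to state concretely: mean-square error averages the per-vertex error, while the L-infinite metric bounds the worst single vertex. The vertex data below is made up; real MESHGRID coding operates on wavelet-transformed 3D geometry, not raw coordinate lists.

```python
# Sketch of the two distortion metrics: mean-square error (traditional)
# vs L-infinite (maximum per-vertex) error between original and decoded
# vertex positions. Coordinates here are 1-D stand-ins for mesh vertices.

def l2_distortion(orig, decoded):
    """Mean-square error: an average -- a few bad vertices can hide."""
    return sum((a - b) ** 2 for a, b in zip(orig, decoded)) / len(orig)

def l_inf_distortion(orig, decoded):
    """L-infinite error: the guaranteed bound on any single vertex."""
    return max(abs(a - b) for a, b in zip(orig, decoded))

orig    = [0.0, 1.0, 2.0, 3.0]
decoded = [0.1, 1.0, 1.8, 3.0]   # one vertex is off by 0.2

print(l_inf_distortion(orig, decoded))
```

A decoder that promises an L-infinite bound of 0.2 guarantees no vertex moves more than that, whereas the same mesh's small MSE says nothing about the worst vertex; that distinction is the paper's motivation for local-error control.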
Scalability of grid- and subbasin-based land surface modeling approaches for hydrologic simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tesfa, Teklu K.; Ruby Leung, L.; Huang, Maoyi
2014-03-27
This paper investigates the relative merits of grid- and subbasin-based land surface modeling approaches for hydrologic simulations, with a focus on their scalability (i.e., abilities to perform consistently across a range of spatial resolutions) in simulating runoff generation. Simulations produced by the grid- and subbasin-based configurations of the Community Land Model (CLM) are compared at four spatial resolutions (0.125°, 0.25°, 0.5° and 1°) over the topographically diverse region of the U.S. Pacific Northwest. Using the 0.125° resolution simulation as the "reference", statistical skill metrics are calculated and compared across simulations at 0.25°, 0.5° and 1° spatial resolutions of each modeling approach at basin and topographic region levels. Results suggest a significant scalability advantage for the subbasin-based approach compared to the grid-based approach for runoff generation. Basin-level annual average relative errors of surface runoff at 0.25°, 0.5°, and 1° compared to 0.125° are 3%, 4%, and 6% for the subbasin-based configuration and 4%, 7%, and 11% for the grid-based configuration, respectively. The scalability advantages of the subbasin-based approach are more pronounced during winter/spring and over mountainous regions. The source of runoff scalability is found to be related to the scalability of major meteorological and land surface parameters of runoff generation. More specifically, the subbasin-based approach is more consistent across spatial scales than the grid-based approach in snowfall/rainfall partitioning, which is related to air temperature and surface elevation. Scalability of a topographic parameter used in the runoff parameterization also contributes to improved scalability of the rain-driven saturated surface runoff component, particularly during winter.
Hence this study demonstrates the importance of spatial structure for multi-scale modeling of hydrological processes, with implications for surface heat fluxes in coupled land-atmosphere modeling.
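The headline numbers above are annual-average relative errors of a coarse run against the 0.125-degree reference. That metric is simple to compute; the runoff values below are fabricated, and the study of course derives them from full CLM output per basin.

```python
# Sketch of the skill metric quoted above: annual-average relative error
# of coarse-resolution surface runoff against the reference simulation.
# The four values stand in for, e.g., seasonal runoff totals of one basin.

def annual_avg_relative_error(reference, coarse):
    """Mean absolute relative error across paired runoff values."""
    errs = [abs(c - r) / r for r, c in zip(reference, coarse) if r > 0]
    return sum(errs) / len(errs)

ref_runoff    = [10.0, 20.0, 40.0, 30.0]   # reference (0.125 deg) run
coarse_runoff = [10.4, 19.0, 41.0, 31.5]   # same basin at 1 deg

err = annual_avg_relative_error(ref_runoff, coarse_runoff)
print(round(100 * err, 1), "%")
```

A "scalable" configuration in the paper's sense is one for which this error stays small as the resolution coarsens from 0.25° to 1°.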
Leveraging social networks for understanding the evolution of epidemics
2011-01-01
Background To understand how infectious agents disseminate throughout a population it is essential to capture the social model in a realistic manner. This paper presents a novel approach to modeling the propagation of the influenza virus throughout a realistic interconnection network based on actual individual interactions, which we extract from online social networks. The advantage is that these networks can be extracted from existing sources which faithfully record interactions between people in their natural environment. We additionally allow modeling the characteristics of each individual as well as customizing their daily interaction patterns by making them time-dependent. Our purpose is to understand how the infection spreads depending on the structure of the contact network and the individuals who introduce the infection into the population. This would help public health authorities to respond more efficiently to epidemics. Results We implement a scalable, fully distributed simulator and validate the epidemic model by comparing the simulation results against the data in the 2004-2005 New York State Department of Health Report (NYSDOH), with similar temporal distribution results for the number of infected individuals. We analyze the impact of different types of connection models on the virus propagation. Lastly, we analyze and compare the effects of adopting several different vaccination policies, some of them based on individual characteristics (such as age) while others target the super-connectors in the social model. Conclusions This paper presents an approach to modeling the propagation of the influenza virus via a realistic social model based on actual individual interactions extracted from online social networks. We implemented a scalable, fully distributed simulator and we analyzed both the dissemination of the infection and the effect of different vaccination policies on the progress of the epidemics.
The epidemic values predicted by our simulator match real data from NYSDOH. Our results show that our simulator can be a useful tool in understanding the differences in the evolution of an epidemic within populations with different characteristics and can provide guidance with regard to which, and how many, individuals should be vaccinated to slow down the virus propagation and reduce the number of infections. PMID:22784620
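The kind of simulation described above can be reduced to a toy: a contact graph, seed individuals, and probabilistic transmission per time step. Everything below (graph, infection probability, one-step infectious period) is invented for illustration; the paper's simulator is distributed and models individuals and time-dependent contacts in far more detail.

```python
# Toy SIR-style spread on a contact network, in the spirit of the
# simulator described above. Node 0 is a "super-connector" touching
# everyone, which is why targeting such nodes for vaccination helps.

import random

def simulate(contacts, seeds, p_infect=0.5, steps=10, rng=None):
    """contacts: dict node -> list of neighbours; returns ever-infected set."""
    rng = rng or random.Random(42)
    infected, recovered = set(seeds), set()
    for _ in range(steps):
        newly = set()
        for node in infected:
            for neigh in contacts.get(node, []):
                if neigh not in infected and neigh not in recovered:
                    if rng.random() < p_infect:
                        newly.add(neigh)
        recovered |= infected        # one-step infectious period
        infected = newly
    return recovered | infected

contacts = {0: [1, 2, 3, 4], 1: [2], 2: [3], 3: [4], 4: []}
outbreak = simulate(contacts, seeds=[0])
print(len(outbreak))
```

Vaccinating a node amounts to removing it from `contacts` before simulating; comparing outbreak sizes with the super-connector removed versus a random node removed reproduces, in miniature, the policy comparison the paper performs.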
Large scale analysis of signal reachability.
Todor, Andrei; Gabr, Haitham; Dobra, Alin; Kahveci, Tamer
2014-06-15
Major disorders, such as leukemia, have been shown to alter the transcription of genes. Understanding how gene regulation is affected by such aberrations is of utmost importance. One promising strategy toward this objective is to compute whether signals can reach the transcription factors through the transcription regulatory network (TRN). Due to the uncertainty of the regulatory interactions, this is a #P-complete problem, and thus solving it for very large TRNs remains a challenge. We develop a novel and scalable method to compute the probability that a signal originating at any given set of source genes can arrive at any given set of target genes (i.e., transcription factors) when the topology of the underlying signaling network is uncertain. Our method tackles this problem for large networks while providing a provably accurate result. It follows a divide-and-conquer strategy: we break the given network down into a sequence of non-overlapping subnetworks such that reachability can be computed autonomously and sequentially on each subnetwork. We represent each interaction using a small polynomial. The product of these polynomials expresses the different scenarios in which a signal can or cannot reach the target genes from the source genes. We introduce polynomial collapsing operators for each subnetwork; these operators reduce the size of the resulting polynomial and thus the computational complexity dramatically. We show that our method scales to entire human regulatory networks in only seconds, while existing methods fail beyond a few tens of genes and interactions. We demonstrate that our method can successfully characterize key reachability characteristics of the entire transcription regulatory networks of patients affected by eight different subtypes of leukemia, as well as those from healthy control samples. All the datasets and code used in this article are available at bioinformatics.cise.ufl.edu/PReach/scalable.htm. © The Author 2014.
Published by Oxford University Press.
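The underlying question (what is the probability a signal reaches a target when each edge exists only with some probability?) can be answered by brute force on a tiny network, which is exactly the exponential enumeration the paper's polynomial-collapsing operators avoid. The three-edge network and its probabilities below are invented.

```python
# Brute-force signal reachability under edge uncertainty: sum the
# probability of every edge subset in which the target is reachable.
# This is the #P-hard computation that PReach makes scale via
# divide-and-conquer polynomial collapsing; here we just enumerate.

import itertools

EDGES = [("s", "a", 0.9), ("a", "t", 0.8), ("s", "t", 0.3)]

def reaches(present, source="s", target="t"):
    """Directed reachability over the edges that are present."""
    frontier, seen = {source}, {source}
    while frontier:
        nxt = {v for (u, v, _) in present for x in frontier if u == x} - seen
        seen |= nxt
        frontier = nxt
    return target in seen

def reach_probability():
    total = 0.0
    for bits in itertools.product([0, 1], repeat=len(EDGES)):
        p = 1.0
        present = []
        for edge, b in zip(EDGES, bits):
            p *= edge[2] if b else (1 - edge[2])
            if b:
                present.append(edge)
        if reaches(present):
            total += p
    return total

print(round(reach_probability(), 4))
```

With independent edges the answer here is 1 - (1 - 0.9·0.8)(1 - 0.3) = 0.804; the enumeration has 2^|E| terms, which is why a collapsing scheme is needed for genome-scale networks.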
Driving force for hydrophobic interaction at different length scales.
Zangi, Ronen
2011-03-17
We study by molecular dynamics simulations the driving force for the hydrophobic interaction between graphene sheets of different sizes down to the atomic scale. Similar to the prediction by Lum, Chandler, and Weeks for hard-sphere solvation [J. Phys. Chem. B 1999, 103, 4570-4577], we find the driving force to be length-scale dependent, despite the fact that our model systems do not exhibit dewetting. For small hydrophobic solutes, the association is purely entropic, while enthalpy favors dissociation. The latter is demonstrated to arise from the enhancement of hydrogen bonding between the water molecules around small hydrophobes. On the other hand, the attraction between large graphene sheets is dominated by enthalpy which mainly originates from direct solute-solute interactions. The crossover length is found to be inside the range of 0.3-1.5 nm^2 of the surface area of the hydrophobe that is eliminated in the association process. In the large-scale regime, different thermodynamic properties are scalable with this change of surface area. In particular, upon dimerization, a total and a water-induced stabilization of approximately 65 and 12 kJ/mol/nm^2 are obtained, respectively, and on average around one hydrogen bond is gained per 1 nm^2 of graphene sheet association. Furthermore, the potential of mean force between the sheets is also scalable except for interplate distances smaller than 0.64 nm which corresponds to the region around the barrier for removing the last layer of water. It turns out that, as the surface area increases, the relative height of the barrier for association decreases and the range of attraction increases. It is also shown that, around small hydrophobic solutes, the lifetime of the hydrogen bonds is longer than in the bulk, while around large hydrophobes it is the same. Nevertheless, the rearrangement of the hydrogen-bond network for both length-scale regimes is slower than in bulk water. © 2011 American Chemical Society
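In the large-scale regime the stabilization energies scale linearly with the eliminated surface area, so they can be applied as simple per-area coefficients. The sketch below uses the two numbers quoted above (about 65 and 12 kJ/mol/nm^2); the 2 nm^2 example area is our own, and attributing the remainder to direct solute-solute terms follows the abstract's qualitative statement only.

```python
# Back-of-the-envelope use of the per-area stabilization coefficients
# quoted in the abstract (large-sheet regime only; the small-solute
# regime is explicitly not scalable in this way).

TOTAL_KJ_PER_NM2 = 65.0          # total stabilization upon dimerization
WATER_INDUCED_KJ_PER_NM2 = 12.0  # water-induced contribution

def stabilization(area_nm2):
    total = TOTAL_KJ_PER_NM2 * area_nm2
    water = WATER_INDUCED_KJ_PER_NM2 * area_nm2
    direct = total - water   # remainder from direct solute-solute interactions
    return total, water, direct

total, water, direct = stabilization(2.0)
print(total, water, direct)  # 130.0 24.0 106.0
```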
Multi-Purpose, Application-Centric, Scalable I/O Proxy Application
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, M. C.
2015-06-15
MACSio is a Multi-purpose, Application-Centric, Scalable I/O proxy application. It is designed to support a number of goals with respect to parallel I/O performance testing and benchmarking including the ability to test and compare various I/O libraries and I/O paradigms, to predict scalable performance of real applications and to help identify where improvements in I/O performance can be made within the HPC I/O software stack.
SeqPig: simple and scalable scripting for large sequencing data sets in Hadoop.
Schumacher, André; Pireddu, Luca; Niemenmaa, Matti; Kallio, Aleksi; Korpelainen, Eija; Zanetti, Gianluigi; Heljanko, Keijo
2014-01-01
Hadoop MapReduce-based approaches have become increasingly popular due to their scalability in processing large sequencing datasets. However, as these methods typically require in-depth expertise in Hadoop and Java, they are still out of reach of many bioinformaticians. To solve this problem, we have created SeqPig, a library and a collection of tools to manipulate, analyze and query sequencing datasets in a scalable and simple manner. SeqPig scripts use the Hadoop-based distributed scripting engine Apache Pig, which automatically parallelizes and distributes data processing tasks. We demonstrate SeqPig's scalability over many computing nodes and illustrate its use with example scripts. Available under the open source MIT license at http://sourceforge.net/projects/seqpig/
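Pig scripts compile high-level queries down to parallel map/shuffle/reduce jobs. As a stand-in, here is the same kind of query a SeqPig user might write (count reads per reference sequence) expressed as an explicit map/group/reduce in plain Python; the read records are invented, and SeqPig itself reads BAM/FastQ through Hadoop rather than Python lists.

```python
# MapReduce-shaped query in miniature: count aligned reads per chromosome.
# map: emit (chrom, 1); shuffle: group by chrom; reduce: sum the ones.

from collections import defaultdict

reads = [
    {"name": "r1", "chrom": "chr1"},
    {"name": "r2", "chrom": "chr2"},
    {"name": "r3", "chrom": "chr1"},
]

grouped = defaultdict(int)
for key, one in ((r["chrom"], 1) for r in reads):   # map + shuffle
    grouped[key] += one                              # reduce

print(dict(grouped))  # {'chr1': 2, 'chr2': 1}
```

The point of Pig (and SeqPig's wrappers around it) is that the user writes only the query; the partitioning across nodes, which this sketch does in-process, is handled by the engine.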
Scalable Implementation of Finite Elements by NASA - Implicit (ScIFEi)
NASA Technical Reports Server (NTRS)
Warner, James E.; Bomarito, Geoffrey F.; Heber, Gerd; Hochhalter, Jacob D.
2016-01-01
Scalable Implementation of Finite Elements by NASA (ScIFEN) is a parallel finite element analysis code written in C++. ScIFEN is designed to provide scalable solutions to computational mechanics problems. It supports a variety of finite element types, nonlinear material models, and boundary conditions. This report provides an overview of ScIFEi ("Sci-Fi"), the implicit solid mechanics driver within ScIFEN. A description of ScIFEi's capabilities is provided, including an overview of the tools and features that accompany the software, as well as a description of the input and output file formats. Results from several problems are included, demonstrating the efficiency and scalability of ScIFEi by comparison to finite element analysis using a commercial code.
Scalable Motion Estimation Processor Core for Multimedia System-on-Chip Applications
NASA Astrophysics Data System (ADS)
Lai, Yeong-Kang; Hsieh, Tian-En; Chen, Lien-Fei
2007-04-01
In this paper, we describe a high-throughput and scalable motion estimation processor architecture for multimedia system-on-chip applications. The number of processing elements (PEs) is scalable according to the variable algorithm parameters and the performance required for different applications. By using the PE rings efficiently together with an intelligent memory-interleaving organization, the efficiency of the architecture is increased. Moreover, the use of efficient on-chip memories and a data management technique effectively decreases the power consumption and memory bandwidth. Techniques for reducing the number of interconnections and external memory accesses are also presented. Our results demonstrate that the proposed scalable PE-ringed architecture is a flexible and high-performance processor core for multimedia system-on-chip applications.
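The kernel such a PE array accelerates is block matching: for each block of the current frame, search the reference frame for the displacement minimizing the sum of absolute differences (SAD). The frames below are tiny Python lists purely for illustration; the paper's contribution is the scalable PE-ring and memory organization around exactly this kernel, not the kernel itself.

```python
# Full-search block matching minimizing SAD -- the computation a motion
# estimation PE array parallelizes (each PE evaluating candidate offsets).

def sad(cur, ref, bx, by, dx, dy, bsize):
    """Sum of absolute differences for candidate displacement (dx, dy)."""
    total = 0
    for y in range(bsize):
        for x in range(bsize):
            total += abs(cur[by + y][bx + x] - ref[by + y + dy][bx + x + dx])
    return total

def full_search(cur, ref, bx, by, bsize, radius):
    best = None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if 0 <= bx + dx and bx + dx + bsize <= len(ref[0]) \
               and 0 <= by + dy and by + dy + bsize <= len(ref):
                cost = sad(cur, ref, bx, by, dx, dy, bsize)
                if best is None or cost < best[0]:
                    best = (cost, dx, dy)
    return best

# reference frame: a 2x2 bright block at column 2; current frame: same
# block shifted one pixel left, so the true motion vector is (+1, 0).
ref = [[0] * 6 for _ in range(6)]
cur = [[0] * 6 for _ in range(6)]
for y in (1, 2):
    ref[y][2] = ref[y][3] = 9
    cur[y][1] = cur[y][2] = 9

cost, dx, dy = full_search(cur, ref, bx=1, by=1, bsize=2, radius=2)
print((dx, dy))
```

In hardware, the independent `(dx, dy)` candidates map naturally onto parallel PEs, which is why the PE count can scale with the search range and throughput requirements.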
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shamis, Pavel; Graham, Richard L; Gorentla Venkata, Manjunath
The scalability and performance of collective communication operations limit the scalability and performance of many scientific applications. This paper presents two new blocking and nonblocking Broadcast algorithms for communicators with arbitrary communication topology, and studies their performance. These algorithms benefit from increased concurrency and a reduced memory footprint, making them suitable for use on large-scale systems. Measuring small, medium, and large data Broadcasts on a Cray XT5, using 24,576 MPI processes, the Cheetah algorithms outperform the native MPI on that system by 51%, 69%, and 9%, respectively, at the same process count. These results demonstrate an algorithmic approach to the implementation of the important class of collective communications, which is high performing, scalable, and also uses resources in a scalable manner.
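A tree-structured Broadcast is the standard building block such collective algorithms refine: each round, every rank that already holds the data forwards it to one that does not, so the holder count doubles and the whole broadcast finishes in O(log P) rounds. The sketch below models only that communication schedule; it involves no MPI, topology awareness, or pipelining, all of which the Cheetah work addresses.

```python
# Schedule for a binomial-tree style Broadcast: each round, every rank
# holding the data sends to one rank that lacks it (holders double).

def binomial_broadcast(nranks, root=0):
    """Return the (round, sender, receiver) schedule for a broadcast."""
    have = {root}
    schedule = []
    rnd = 0
    while len(have) < nranks:
        targets = [r for r in range(nranks) if r not in have]
        sends = [(rnd, s, t) for s, t in zip(sorted(have), targets)]
        for _, _, receiver in sends:
            have.add(receiver)
        schedule.extend(sends)
        rnd += 1
    return schedule

sched = binomial_broadcast(8)
rounds = max(r for r, _, _ in sched) + 1
print(rounds)  # log2(8) rounds for 8 ranks
```

Every rank sends at most once per round, so per-rank memory and concurrency stay bounded as the process count grows; making that property hold on real network topologies is where algorithms like Cheetah's do their work.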
On-the-Fly Control of High-Harmonic Generation Using a Structured Pump Beam
NASA Astrophysics Data System (ADS)
Hareli, Liran; Lobachinsky, Lilya; Shoulga, Georgiy; Eliezer, Yaniv; Michaeli, Linor; Bahabad, Alon
2018-05-01
We demonstrate experimentally a relatively simple yet powerful all-optical enhancement and control technique for high harmonic generation. This is achieved by using as a pump beam two different spatial optical modes interfering together to realize tunable periodic quasi-phase matching of the interaction. With this technique, we demonstrate on-the-fly quasi-phase matching of harmonic orders 29-41 at ambient gas pressure levels of 50 and 100 Torr, where an up to 100-fold enhancement of the emission is observed. The technique is scalable to different harmonic orders and ambient pressure conditions.
Sol-Gel Processing of MgF2 Antireflective Coatings
Löbmann, Peer
2018-01-01
There are different approaches for the preparation of porous antireflective λ/4 MgF2 films from liquid precursors. Among these, the non-aqueous fluorolytic synthesis of precursor solutions offers many advantages in terms of processing simplicity and scalability. In this paper, the structural features and optical performance of the resulting films are highlighted, and their specific interactions with different inorganic substrates are discussed. Due to their excellent abrasion resistance, the coatings have a high potential for applications on glass. Using solvothermal treatment of the precursor solutions, the processing of thermally sensitive polymer substrates also becomes feasible. PMID:29724064
Efficient creation of dipolar coupled nitrogen-vacancy spin qubits in diamond
NASA Astrophysics Data System (ADS)
Jakobi, I.; Momenzadeh, S. A.; Fávaro de Oliveira, F.; Michl, J.; Ziem, F.; Schreck, M.; Neumann, P.; Denisenko, A.; Wrachtrup, J.
2016-09-01
Coherently coupled pairs or multimers of nitrogen-vacancy defect electron spins in diamond have many promising applications, especially in quantum information processing (QIP) but also in nanoscale sensing. Scalable registers of spin qubits are essential to the progress of QIP. Ion implantation is the only known technique able to produce defect pairs close enough to allow spin coupling via dipolar interaction. Although several competing methods have been proposed to increase the resulting resolution of ion implantation, the reliable creation of working registers is still to be demonstrated. The current limitation is residual radiation-induced defects, which degrade qubit performance as a trade-off for positioning accuracy. Here we present an optimized estimation of nanomask implantation parameters that are most likely to produce interacting qubits under standard conditions. We apply our findings to a well-established technique, namely masks written by electron-beam lithography, to create coupled defect pairs with a reasonable probability. Furthermore, we investigate the scaling behavior and the improvements necessary to efficiently engineer interacting spin architectures.
Aakhus, Mark
2011-11-01
The International Radiation Protection Association's guiding principles for stakeholder engagement focus on fostering, facilitating, and enabling interaction among stakeholders that is inclusive and fosters competent decision making. Implicit in these standards is a call to cultivate knowledge and competence in designing communication for stakeholder engagement among radiation protection professionals. Communication as design is an approach to risk communication in science and policy that differs from, yet complements, the more well-known communication practices of informing and persuading. Design focuses on the recurring practical problem faced by professionals in making communication possible among stakeholders where it has otherwise been difficult, impossible, or even unimagined. The knowledge and competence associated with design involves principles for crafting interactivity across a variety of mediated and non-mediated encounters among stakeholders. Risk communication can be improved by cultivating expertise in scalable communication design that embraces the demands of involvement without abandoning the need for competence in science and policy communication.
LOLAweb: a containerized web server for interactive genomic locus overlap enrichment analysis.
Nagraj, V P; Magee, Neal E; Sheffield, Nathan C
2018-06-06
The past few years have seen an explosion of interest in understanding the role of regulatory DNA. This interest has driven large-scale production of functional genomics data and analytical methods. One popular analysis is to test for enrichment of overlaps between a query set of genomic regions and a database of region sets. In this way, new genomic data can be easily connected to annotations from external data sources. Here, we present an interactive interface for enrichment analysis of genomic locus overlaps using a web server called LOLAweb. LOLAweb accepts a set of genomic ranges from the user and tests it for enrichment against a database of region sets. LOLAweb renders results in an R Shiny application to provide interactive visualization features, enabling users to filter, sort, and explore enrichment results dynamically. LOLAweb is built and deployed in a Linux container, making it scalable to many concurrent users on our servers and also enabling users to download and run LOLAweb locally.
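The core statistic behind this kind of region-set overlap enrichment is a Fisher-style test on a 2x2 contingency table. Below is a stdlib-only sketch of the one-tailed hypergeometric form; hashable region IDs stand in for genomic intervals here, whereas LOLAweb itself tests actual coordinate overlaps against its region-set database.

```python
from math import comb

def overlap_enrichment(universe, query, dbset):
    """One-tailed hypergeometric test: the probability of observing at
    least `k` query regions inside the database set by chance, given a
    universe of N testable regions."""
    N = len(universe)
    K = len(dbset & universe)          # universe regions in the database set
    n = len(query & universe)          # testable query regions
    k = len(query & dbset & universe)  # observed overlap
    p = sum(comb(K, i) * comb(N - K, n - i)
            for i in range(k, min(K, n) + 1)) / comb(N, n)
    return k, p
```

A small p-value indicates the query set hits the database set more often than random draws from the universe would.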
Interactive Machine Learning at Scale with CHISSL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arendt, Dustin L.; Grace, Emily A.; Volkova, Svitlana
We demonstrate CHISSL, a scalable client-server system for real-time interactive machine learning. Our system is capable of incorporating user feedback incrementally and immediately without a structured or pre-defined prediction task. Computation is partitioned between a lightweight web client and a heavyweight server. The server relies on representation learning and agglomerative clustering to learn a dendrogram, a hierarchical approximation of a representation space. The client uses only this dendrogram to incorporate user feedback into the model via transduction. Distances and predictions for each unlabeled instance are updated incrementally and deterministically, with O(n) space and time complexity. Our algorithm is implemented in a functional prototype, designed to be easy to use by non-experts. The prototype organizes the large amounts of data into recommendations. This allows the user to interact with actual instances by dragging and dropping to provide feedback in an intuitive manner. We applied CHISSL to several domains including cyber, social media, and geo-temporal analysis.
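The transduction step can be cartooned without the dendrogram: propagate the user's few labels to every unlabeled instance through distances in a representation space. The 1-D coordinates and function name below are illustrative assumptions, not CHISSL's implementation, which achieves O(n) incremental updates via its precomputed hierarchy.

```python
def transduce(points, labels):
    """Assign every point the label of its nearest labeled point.
    `points` is a list of 1-D coordinates in some representation space;
    `labels` maps a few indices to user-provided labels. Re-running after
    each new label mimics incremental feedback."""
    pred = {}
    for j, p in enumerate(points):
        if j in labels:
            pred[j] = labels[j]
        else:
            nearest = min(labels, key=lambda i: abs(points[i] - p))
            pred[j] = labels[nearest]
    return pred
```

Dragging an instance onto a group in the prototype corresponds to adding one entry to `labels` and letting the predictions update.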
NASA Astrophysics Data System (ADS)
Yang, Xu-Chen; Wang, Xin
The manipulation of coupled quantum dot devices is crucial to scalable, fault-tolerant quantum computation. We present a theoretical study of a four-electron four-quantum-dot system based on molecular orbital methods, which depicts a pair of singlet-triplet (S-T) qubits. We find that while the two S-T qubits are coupled by the capacitive interaction when they are sufficiently far away, the admixture of wave functions undergoes a substantial change as the two S-T qubits get closer. We find that in certain parameter regime the exchange interaction may only be defined in the sense of an effective one when the computational basis states no longer dominate the eigenstates. We further discuss the gate crosstalk as a consequence of this wave function mixing. This work was supported by the Research Grants Council of the Hong Kong Special Administrative Region, China (No. CityU 21300116) and the National Natural Science Foundation of China (No. 11604277).
Real-time scalable visual analysis on mobile devices
NASA Astrophysics Data System (ADS)
Pattath, Avin; Ebert, David S.; May, Richard A.; Collins, Timothy F.; Pike, William
2008-02-01
Interactive visual presentation of information can help an analyst gain faster and better insight from data. When combined with situational or context information, visualization on mobile devices is invaluable to in-field responders and investigators. However, several challenges are posed by the form-factor of mobile devices in developing such systems. In this paper, we classify these challenges into two broad categories - issues in general mobile computing and issues specific to visual analysis on mobile devices. Using NetworkVis and Infostar as example systems, we illustrate some of the techniques that we employed to overcome many of the identified challenges. NetworkVis is an OpenVG-based real-time network monitoring and visualization system developed for Windows Mobile devices. Infostar is a flash-based interactive, real-time visualization application intended to provide attendees access to conference information. Linked time-synchronous visualization, stylus/button-based interactivity, vector graphics, overview-context techniques, details-on-demand and statistical information display are some of the highlights of these applications.
NASA Astrophysics Data System (ADS)
Rosenblum, Serge; Borne, Adrien; Dayan, Barak
2017-03-01
The long-standing goal of deterministic quantum interactions between single photons and single atoms was recently realized in various experiments. Among these, an appealing demonstration relied on single-photon Raman interaction (SPRINT) in a three-level atom coupled to a single-mode waveguide. In essence, the interference-based process of SPRINT deterministically swaps the qubits encoded in a single photon and a single atom, without the need for additional control pulses. It can also be harnessed to construct passive entangling quantum gates, and can therefore form the basis for scalable quantum networks in which communication between the nodes is carried out only by single-photon pulses. Here we present an analytical and numerical study of SPRINT, characterizing its limitations and defining parameters for its optimal operation. Specifically, we study the effect of losses, imperfect polarization, and the presence of multiple excited states. In all cases we discuss strategies for restoring the operation of SPRINT.
Large Scale Analysis of Geospatial Data with Dask and XArray
NASA Astrophysics Data System (ADS)
Zender, C. S.; Hamman, J.; Abernathey, R.; Evans, K. J.; Rocklin, M.
2017-12-01
The analysis of geospatial data with high level languages has accelerated innovation and the impact of existing data resources. However, as datasets grow beyond single-machine memory, data structures within these high level languages can become a bottleneck. New libraries like Dask and XArray resolve some of these scalability issues, providing interactive workflows that are both familiar to high-level-language researchers while also scaling out to much larger datasets. This broadens the access of researchers to larger datasets on high performance computers and, through interactive development, reduces time-to-insight when compared to traditional parallel programming techniques (MPI). This talk describes Dask, a distributed dynamic task scheduler; Dask.array, a multi-dimensional array that copies the popular NumPy interface; and XArray, a library that wraps NumPy/Dask.array with labeled and indexed axes, implementing the CF conventions. We discuss both the basic design of these libraries and how they change interactive analysis of geospatial data, and also recent benefits and challenges of distributed computing on clusters of machines.
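The pattern Dask.array automates, reducing over chunks that never coexist in memory, can be shown with a stdlib-only fold; Dask additionally parallelizes the chunks and hides the chunking behind a NumPy-like API (e.g. `dask.array.from_array(x, chunks=...)`).

```python
def chunked_mean(chunks):
    """Fold a mean over an iterable of chunks so only one chunk is ever
    resident in memory -- the out-of-core reduction pattern that
    Dask.array generalizes and parallelizes."""
    total, count = 0.0, 0
    for chunk in chunks:
        total += sum(chunk)
        count += len(chunk)
    return total / count

def in_chunks(n, size):
    """Yield range(n) piecewise, never materializing the full sequence."""
    for start in range(0, n, size):
        yield range(start, min(start + size, n))
```

The same fold structure underlies lazy reductions over datasets far larger than RAM.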
Passing messages between biological networks to refine predicted interactions.
Glass, Kimberly; Huttenhower, Curtis; Quackenbush, John; Yuan, Guo-Cheng
2013-01-01
Regulatory network reconstruction is a fundamental problem in computational biology. There are significant limitations to such reconstruction using individual datasets, and increasingly people attempt to construct networks using multiple, independent datasets obtained from complementary sources, but methods for this integration are lacking. We developed PANDA (Passing Attributes between Networks for Data Assimilation), a message-passing model using multiple sources of information to predict regulatory relationships, and used it to integrate protein-protein interaction, gene expression, and sequence motif data to reconstruct genome-wide, condition-specific regulatory networks in yeast as a model. The resulting networks were not only more accurate than those produced using individual data sets and other existing methods, but they also captured information regarding specific biological mechanisms and pathways that were missed using other methodologies. PANDA is scalable to higher eukaryotes, applicable to specific tissue or cell type data and conceptually generalizable to include a variety of regulatory, interaction, expression, and other genome-scale data. An implementation of the PANDA algorithm is available at www.sourceforge.net/projects/panda-net.
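PANDA's published update rule operates on z-scored motif, expression, and protein-interaction matrices; as a much-reduced cartoon of the message-passing idea, one can nudge each predicted edge toward the support an independent data source gives it, so agreement is reinforced and unsupported edges decay. The edge names and blending rule below are illustrative, not the published algorithm.

```python
def refine_edges(prior, evidence, alpha=0.5, steps=10):
    """Blend a prior regulatory network with independent evidence:
    each step moves every edge weight a fraction `alpha` of the way
    toward that edge's evidence score (0.0 if the evidence source
    offers no support)."""
    w = dict(prior)
    for _ in range(steps):
        for edge in w:
            w[edge] = (1 - alpha) * w[edge] + alpha * evidence.get(edge, 0.0)
    return w
```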
Tezaur, Irina K.; Tuminaro, Raymond S.; Perego, Mauro; ...
2015-01-01
We examine the scalability of the recently developed Albany/FELIX finite-element based code for the first-order Stokes momentum balance equations for ice flow. We focus our analysis on the performance of two possible preconditioners for the iterative solution of the sparse linear systems that arise from the discretization of the governing equations: (1) a preconditioner based on the incomplete LU (ILU) factorization, and (2) a recently-developed algebraic multigrid (AMG) preconditioner, constructed using the idea of semi-coarsening. A strong scalability study on a realistic, high resolution Greenland ice sheet problem reveals that, for a given number of processor cores, the AMG preconditioner results in faster linear solve times but the ILU preconditioner exhibits better scalability. In addition, a weak scalability study is performed on a realistic, moderate resolution Antarctic ice sheet problem, a substantial fraction of which contains floating ice shelves, making it fundamentally different from the Greenland ice sheet problem. We show that as the problem size increases, the performance of the ILU preconditioner deteriorates whereas the AMG preconditioner maintains scalability. This is because the linear systems are extremely ill-conditioned in the presence of floating ice shelves, and the ill-conditioning has a greater negative effect on the ILU preconditioner than on the AMG preconditioner.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Ying-Jie, E-mail: yingjiezhang@qfnu.edu.cn; Han, Wei; Xia, Yun-Jie, E-mail: yjxia@qfnu.edu.cn
We propose a scheme for controlling the entanglement dynamics of a quantum system by applying an external classical driving field to two atoms separately located in a single-mode photon cavity. It is shown that, with a judicious choice of the classical-driving strength and the atom–photon detuning, the effective atom–photon interaction Hamiltonian can be switched from the Jaynes–Cummings model to the anti-Jaynes–Cummings model. By tuning the controllable atom–photon interaction induced by the classical field, we illustrate that the evolution trajectory of the Bell-like entangled states can be manipulated from entanglement-sudden-death to no-entanglement-sudden-death, and from no-entanglement-invariant to entanglement-invariant. Furthermore, the robustness of the initial Bell-like entanglement can be improved by the classical driving field in the leaky cavities. This classical-driving-assisted architecture is easily extensible to multi-atom quantum systems for scalability.
CoMD Implementation Suite in Emerging Programming Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haque, Riyaz; Reeve, Sam; Juallmes, Luc
CoMD-Em is a software implementation suite of the CoMD [4] proxy app using different emerging programming models. It is intended to analyze the features and capabilities of novel programming models that could help ensure code and performance portability and scalability across heterogeneous platforms while improving programmer productivity. Another goal is to provide the authors and vendors with meaningful feedback regarding the capabilities and limitations of their models. The actual application is a classical molecular dynamics (MD) simulation using either the Lennard-Jones (LJ) method or the embedded atom method (EAM) for primary particle interaction. The code can be extended to support alternate interaction models. The code is expected to run on a wide class of heterogeneous hardware configurations, such as shared/distributed/hybrid memory, GPUs, and any other platform supported by the underlying programming model.
PrePhyloPro: phylogenetic profile-based prediction of whole proteome linkages
Niu, Yulong; Liu, Chengcheng; Moghimyfiroozabad, Shayan; Yang, Yi
2017-01-01
Direct and indirect functional links between proteins as well as their interactions as part of larger protein complexes or common signaling pathways may be predicted by analyzing the correlation of their evolutionary patterns. Based on phylogenetic profiling, here we present a highly scalable and time-efficient computational framework for predicting linkages within the whole human proteome. We have validated this method through analysis of 3,697 human pathways and molecular complexes and a comparison of our results with the prediction outcomes of previously published co-occurrency model-based and normalization methods. Here we also introduce PrePhyloPro, a web-based software that uses our method for accurately predicting proteome-wide linkages. We present data on interactions of human mitochondrial proteins, verifying the performance of this software. PrePhyloPro is freely available at http://prephylopro.org/phyloprofile/. PMID:28875072
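The signal phylogenetic profiling exploits is simple to state: proteins whose presence/absence patterns across genomes co-vary are candidate functional partners. A minimal sketch scoring one pair of binary profiles follows; PrePhyloPro's actual co-occurrency model and normalization are more elaborate.

```python
def jaccard_profile_similarity(profile_a, profile_b):
    """Jaccard similarity between two binary phylogenetic profiles,
    where each position is one genome (1 = protein present, 0 = absent).
    High similarity across many genomes suggests a functional linkage."""
    both = sum(1 for a, b in zip(profile_a, profile_b) if a and b)
    either = sum(1 for a, b in zip(profile_a, profile_b) if a or b)
    return both / either if either else 0.0
```

Scaling this to the whole proteome means scoring all protein pairs, which is why time efficiency is a central design goal of such frameworks.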
NGL Viewer: Web-based molecular graphics for large complexes.
Rose, Alexander S; Bradley, Anthony R; Valasatava, Yana; Duarte, Jose M; Prlic, Andreas; Rose, Peter W
2018-05-29
The interactive visualization of very large macromolecular complexes on the web is becoming a challenging problem as experimental techniques advance at an unprecedented rate and deliver structures of increasing size. We have tackled this problem by developing highly memory-efficient and scalable extensions for the NGL WebGL-based molecular viewer and by using MMTF, a binary and compressed Macromolecular Transmission Format. These enable NGL to download and render molecular complexes with millions of atoms interactively on desktop computers and smartphones alike, making it a tool of choice for web-based molecular visualization in research and education. The source code is freely available under the MIT license at github.com/arose/ngl and distributed on NPM (npmjs.com/package/ngl). MMTF-JavaScript encoders and decoders are available at github.com/rcsb/mmtf-javascript. asr.moin@gmail.com.
Towards Interactive Construction of Topical Hierarchy: A Recursive Tensor Decomposition Approach
Wang, Chi; Liu, Xueqing; Song, Yanglei; Han, Jiawei
2015-01-01
Automatic construction of user-desired topical hierarchies over large volumes of text data is a highly desirable but challenging task. This study proposes to give users freedom to construct topical hierarchies via interactive operations such as expanding a branch and merging several branches. Existing hierarchical topic modeling techniques are inadequate for this purpose because (1) they cannot consistently preserve the topics when the hierarchy structure is modified; and (2) the slow inference prevents swift response to user requests. In this study, we propose a novel method, called STROD, that allows efficient and consistent modification of topic hierarchies, based on a recursive generative model and a scalable tensor decomposition inference algorithm with theoretical performance guarantee. Empirical evaluation shows that STROD reduces the runtime of construction by several orders of magnitude, while generating consistent and quality hierarchies. PMID:26705505
NASA Astrophysics Data System (ADS)
Riedel-Kruse, Ingmar
Modern biotechnology is increasingly powerful at manipulating and measuring microscopic biophysical processes. Nevertheless, no platform exists to truly interact with these processes, certainly not with the convenience that we are accustomed to from our electronic smart devices. In my talk I will provide the rationale for such Interactive Biotechnology and conceptualize its core component, the BPU (biotic processing unit), which is then connected to an according user interface. The biophysical phenomena currently featured on these platforms utilize the phototactic response of motile microorganisms, e.g., Euglena gracilis, resulting in spatio-temporal dynamics from the single-cell to the self-organized multi-cellular scale. I will demonstrate multiple platforms, such as scalable biology cloud experimentation labs, tangible museum exhibits, biotic video games, low-cost interactive DIY kits using smartphones, and programming languages for swarm robotics. I will discuss applications for education as well as for professional and citizen science. Hence, we turn traditionally observational microscopy into an interactive experience.
MicROS-drt: supporting real-time and scalable data distribution in distributed robotic systems.
Ding, Bo; Wang, Huaimin; Fan, Zedong; Zhang, Pengfei; Liu, Hui
A primary requirement in distributed robotic software systems is the dissemination of data to all interested collaborative entities in a timely and scalable manner. However, providing such a service in a highly dynamic and resource-limited robotic environment is a challenging task, and existing robot software infrastructure has limitations in this aspect. This paper presents a novel robot software infrastructure, micROS-drt, which supports real-time and scalable data distribution. The solution is based on a loosely coupled data publish-subscribe model with the ability to support various time-related constraints. To realize this model, a mature data distribution standard, the Data Distribution Service for real-time systems (DDS), is adopted as the foundation of the transport layer of this software infrastructure. By elaborately adapting and encapsulating the capability of the underlying DDS middleware, micROS-drt can meet the requirement of real-time and scalable data distribution in distributed robotic systems. Evaluation results in terms of scalability, latency jitter, and transport priority, as well as experiments on real robots, validate the effectiveness of this work.
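The publish-subscribe model with transport priority described above can be cartooned with an in-process topic bus. There is no networking or DDS QoS negotiation here, and the class and method names are illustrative; the sketch only shows how higher-priority messages overtake lower-priority ones at dispatch time.

```python
import heapq

class PrioritizedBus:
    """Minimal topic-based publish-subscribe dispatcher in which each
    message carries a transport priority; pending messages are delivered
    highest-priority first (FIFO within equal priority)."""
    def __init__(self):
        self.subs = {}    # topic -> list of subscriber callbacks
        self.queue = []   # heap of (-priority, seq, topic, msg)
        self.seq = 0
    def subscribe(self, topic, callback):
        self.subs.setdefault(topic, []).append(callback)
    def publish(self, topic, msg, priority=0):
        heapq.heappush(self.queue, (-priority, self.seq, topic, msg))
        self.seq += 1
    def dispatch(self):
        while self.queue:
            _, _, topic, msg = heapq.heappop(self.queue)
            for cb in self.subs.get(topic, []):
                cb(msg)
```

A real DDS layer adds discovery, deadlines, reliability, and network transport on top of this basic decoupling of publishers from subscribers.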
BactoGeNIE: A large-scale comparative genome visualization for big displays
Aurisano, Jillian; Reda, Khairi; Johnson, Andrew; ...
2015-08-13
The volume of complete bacterial genome sequence data available to comparative genomics researchers is rapidly increasing. However, visualizations in comparative genomics--which aim to enable analysis tasks across collections of genomes--suffer from visual scalability issues. While large, multi-tiled and high-resolution displays have the potential to address scalability issues, new approaches are needed to take advantage of such environments, in order to enable the effective visual analysis of large genomics datasets. In this paper, we present the Bacterial Gene Neighborhood Investigation Environment, or BactoGeNIE, a novel and visually scalable design for comparative gene neighborhood analysis on large display environments. We evaluate BactoGeNIE through a case study on close to 700 draft Escherichia coli genomes, and present lessons learned from our design process. In conclusion, BactoGeNIE accommodates comparative tasks over substantially larger collections of neighborhoods than existing tools and explicitly addresses visual scalability. Given current trends in data generation, scalable designs of this type may inform visualization design for large-scale comparative research problems in genomics.
BactoGeNIE: a large-scale comparative genome visualization for big displays
2015-01-01
Background The volume of complete bacterial genome sequence data available to comparative genomics researchers is rapidly increasing. However, visualizations in comparative genomics--which aim to enable analysis tasks across collections of genomes--suffer from visual scalability issues. While large, multi-tiled and high-resolution displays have the potential to address scalability issues, new approaches are needed to take advantage of such environments, in order to enable the effective visual analysis of large genomics datasets. Results In this paper, we present Bacterial Gene Neighborhood Investigation Environment, or BactoGeNIE, a novel and visually scalable design for comparative gene neighborhood analysis on large display environments. We evaluate BactoGeNIE through a case study on close to 700 draft Escherichia coli genomes, and present lessons learned from our design process. Conclusions BactoGeNIE accommodates comparative tasks over substantially larger collections of neighborhoods than existing tools and explicitly addresses visual scalability. Given current trends in data generation, scalable designs of this type may inform visualization design for large-scale comparative research problems in genomics. PMID:26329021
NASA Astrophysics Data System (ADS)
Bucay, Igal; Helal, Ahmed; Dunsky, David; Leviyev, Alex; Mallavarapu, Akhila; Sreenivasan, S. V.; Raizen, Mark
2017-04-01
Ionization of atoms and molecules is an important process in many applications such as mass spectrometry. Ionization is typically accomplished by electron bombardment, which, while scalable to large volumes, is very inefficient due to the small cross section of electron-atom collisions. Photoionization methods can be highly efficient, but are not scalable due to the small ionization volume. Electric field ionization is accomplished using ultra-sharp conducting tips biased to a few kilovolts, but suffers from a low ionization volume and tip fabrication limitations. We report on our progress towards an efficient, robust, and scalable method of atomic and molecular ionization using orderly arrays of sharp, gold-doped silicon nanowires. As demonstrated in earlier work, the presence of the gold greatly enhances the ionization probability, which was attributed to an increase in available acceptor surface states. We present here a novel process used to fabricate the nanowire array, results of simulations aimed at optimizing the configuration of the array, and our progress towards demonstrating efficient and scalable ionization.
Medusa: A Scalable MR Console Using USB
Stang, Pascal P.; Conolly, Steven M.; Santos, Juan M.; Pauly, John M.; Scott, Greig C.
2012-01-01
MRI pulse sequence consoles typically employ closed proprietary hardware, software, and interfaces, making difficult any adaptation for innovative experimental technology. Yet MRI systems research is trending to higher channel count receivers, transmitters, gradient/shims, and unique interfaces for interventional applications. Customized console designs are now feasible for researchers with modern electronic components, but high data rates, synchronization, scalability, and cost present important challenges. Implementing large multi-channel MR systems with efficiency and flexibility requires a scalable modular architecture. With Medusa, we propose an open system architecture using the Universal Serial Bus (USB) for scalability, combined with distributed processing and buffering to address the high data rates and strict synchronization required by multi-channel MRI. Medusa uses a modular design concept based on digital synthesizer, receiver, and gradient blocks, in conjunction with fast programmable logic for sampling and synchronization. Medusa is a form of synthetic instrument, being reconfigurable for a variety of medical/scientific instrumentation needs. The Medusa distributed architecture, scalability, and data bandwidth limits are presented, and its flexibility is demonstrated in a variety of novel MRI applications. PMID:21954200
The Scalable Checkpoint/Restart Library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moody, A.
The Scalable Checkpoint/Restart (SCR) library provides an interface that codes may use to write out and read in application-level checkpoints in a scalable fashion. In the current implementation, checkpoint files are cached in local storage (hard disk or RAM disk) on the compute nodes. This technique provides scalable aggregate bandwidth and uses storage resources that are fully dedicated to the job. This approach addresses the two common drawbacks of checkpointing a large-scale application to a shared parallel file system, namely, limited bandwidth and file system contention. In fact, on current platforms, SCR scales linearly with the number of compute nodes. It has been benchmarked as high as 720 GB/s on 1094 nodes of Atlas, which is nearly two orders of magnitude faster than the parallel file system.
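SCR itself exposes a C API; the application-level pattern it supports, writing a checkpoint to node-local storage atomically so a crash never leaves a torn file, looks roughly like the stdlib sketch below (file format and function names are illustrative, not SCR's interface).

```python
import json
import os
import tempfile

def write_checkpoint(path, state):
    """Atomically write an application-level checkpoint: dump to a temp
    file in the same directory, then rename over the target, so readers
    see either the old checkpoint or the new one, never a partial write."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)
    with os.fdopen(fd, "w") as f:
        json.dump(state, f)
    os.replace(tmp, path)

def read_checkpoint(path):
    """Load the most recent checkpoint, or None if none exists yet."""
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return json.load(f)
```

Writing to node-local storage keeps the bandwidth dedicated to the job; a library like SCR then adds cross-node redundancy and scalable drain to the parallel file system.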
Next Generation Integrated Environment for Collaborative Work Across Internets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harvey B. Newman
2009-02-24
We are now well advanced in our development, prototyping and deployment of a high-performance next-generation Integrated Environment for Collaborative Work. The system, aimed at using the capability of ESnet and Internet2 for rapid data exchange, is based on the Virtual Room Videoconferencing System (VRVS) developed by Caltech. The VRVS system has been chosen by the Internet2 Digital Video (I2-DV) Initiative as a preferred foundation for the development of advanced video, audio and multimedia collaborative applications by the Internet2 community. Today, the system supports high-end, broadcast-quality interactivity while enabling a wide variety of clients (Mbone, H.323) to participate in the same conference by running different standard protocols in different contexts with different bandwidth connection limitations. It has a fully Web-integrated user interface, developer and administrative APIs, a widely scalable video network topology based on both multicast domains and unicast tunnels, and demonstrated multiplatform support. This has led to its rapidly expanding production use for national and international scientific collaborations in more than 60 countries. We are also in the process of creating a 'testbed video network' and developing the necessary middleware to support a set of new and essential requirements for rapid data exchange and a high level of interactivity in large-scale scientific collaborations. These include a set of tunable, scalable differentiated network services adapted to each of the data streams associated with a large number of collaborative sessions; policy-based and network-state-based resource scheduling; authentication; and optional encryption to maintain the confidentiality of interpersonal communications. High-performance testbed video networks will be established in ESnet and Internet2 to test and tune the implementation, using a few target application sets.
Memory-Scalable GPU Spatial Hierarchy Construction.
Qiming Hou; Xin Sun; Kun Zhou; Lauterbach, C; Manocha, D
2011-04-01
Recent GPU algorithms for constructing spatial hierarchies have achieved promising performance for moderately complex models by using the breadth-first search (BFS) construction order. While being able to exploit the massive parallelism on the GPU, the BFS order also consumes excessive GPU memory, which becomes a serious issue for interactive applications involving very complex models with more than a few million triangles. In this paper, we propose to use the partial breadth-first search (PBFS) construction order to control memory consumption while maximizing performance. We apply the PBFS order to two hierarchy construction algorithms. The first algorithm is for kd-trees that automatically balances between the level of parallelism and intermediate memory usage. With PBFS, peak memory consumption during construction can be efficiently controlled without costly CPU-GPU data transfer. We also develop memory allocation strategies to effectively limit memory fragmentation. The resulting algorithm scales well with GPU memory and constructs kd-trees of models with millions of triangles at interactive rates on GPUs with 1 GB memory. Compared with existing algorithms, our algorithm is an order of magnitude more scalable for a given GPU memory bound. The second algorithm is for out-of-core bounding volume hierarchy (BVH) construction for very large scenes based on the PBFS construction order. At each iteration, all constructed nodes are dumped to the CPU memory, and the GPU memory is freed for the next iteration's use. In this way, the algorithm is able to build trees that are too large to be stored in the GPU memory. Experiments show that our algorithm can construct BVHs for scenes with up to 20 M triangles, several times larger than previous GPU algorithms.
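The partial breadth-first search order described above can be sketched in plain Python. This is an illustrative serial model of capping the BFS frontier; the paper's actual contribution is the GPU implementation, memory allocation strategy and out-of-core node dumping, none of which appear here. The `expand` callback and `max_frontier` bound are assumptions of the sketch, not the authors' API.

```python
from collections import deque

def pbfs_order(roots, expand, max_frontier):
    """Visit a tree in partial breadth-first (PBFS) order: run BFS, but
    never hold more than max_frontier nodes in the active frontier.
    Overflow children are deferred and picked up in a later pass."""
    deferred = deque(roots)
    order = []
    while deferred:
        # start a pass with at most max_frontier deferred nodes
        take = min(max_frontier, len(deferred))
        frontier = [deferred.popleft() for _ in range(take)]
        while frontier:
            nxt = []
            for node in frontier:
                order.append(node)
                for child in expand(node):
                    if len(nxt) < max_frontier:
                        nxt.append(child)       # stay within the memory bound
                    else:
                        deferred.append(child)  # defer to a later pass
            frontier = nxt
    return order

# complete binary tree on nodes 1..15
expand = lambda n: [2 * n, 2 * n + 1] if 2 * n + 1 <= 15 else []
order = pbfs_order([1], expand, max_frontier=2)
```

With `max_frontier=2` the traversal still visits every node exactly once, but holds at most two active nodes at a time, trading some of plain BFS's parallel width for a bounded peak memory footprint.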
SVGenes: a library for rendering genomic features in scalable vector graphic format.
Etherington, Graham J; MacLean, Daniel
2013-08-01
Drawing genomic features in attractive and informative ways is a key task in the visualization of genomics data. Scalable Vector Graphics (SVG) is a modern and flexible open standard that provides advanced features including modular graphic design, advanced web interactivity and animation within a suitable client. SVGs do not suffer from loss of image quality on re-scaling and allow individual elements of a graphic to be edited at the object level, independently of the whole image. These features make SVG a potentially useful format for the preparation of publication-quality figures including genomic objects such as genes or sequencing coverage, and for web applications that require rich user interaction with the graphical elements. SVGenes is a Ruby-language library that uses SVG primitives to render typical genomic glyphs through a simple and flexible Ruby interface. The library implements a simple Page object that spaces and contains horizontal Track objects, which in turn style, colour and position the features within them. Tracks are the level at which visual information is supplied, providing the full styling capability of the SVG standard. Genomic entities such as genes, transcripts and histograms are modelled as Glyph objects that are attached to a track and take advantage of SVG primitives to render the genomic features in a track as any of a selection of defined glyphs. The feature model within SVGenes is simple but flexible and not dependent on particular existing gene-feature formats, meaning graphics for existing datasets can easily be created without the need for conversion. The library is provided as a Ruby Gem from https://rubygems.org/gems/bio-svgenes under the MIT license, and open source code is available at https://github.com/danmaclean/bioruby-svgenes, also under the MIT License. dan.maclean@tsl.ac.uk.
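SVGenes itself is a Ruby library; as a language-neutral illustration of the underlying idea, the Python sketch below renders a gene glyph as a handful of SVG primitives (a baseline plus exon boxes). The function name, coordinates and layout conventions are invented for the sketch and are not the SVGenes API.

```python
def gene_glyph_svg(start, end, exons, track_y, height=12, color="#3366cc"):
    """Render a simple gene glyph as SVG primitives: a horizontal
    baseline spanning the gene, with filled rectangles for exons."""
    mid = track_y + height // 2
    parts = ['<line x1="%d" y1="%d" x2="%d" y2="%d" stroke="%s"/>'
             % (start, mid, end, mid, color)]
    for ex_start, ex_end in exons:
        parts.append('<rect x="%d" y="%d" width="%d" height="%d" fill="%s"/>'
                     % (ex_start, track_y, ex_end - ex_start, height, color))
    return "\n".join(parts)

# a hypothetical three-exon gene on a single track
svg = ('<svg xmlns="http://www.w3.org/2000/svg" width="900" height="40">\n%s\n</svg>'
       % gene_glyph_svg(100, 800, [(100, 250), (400, 520), (700, 800)], track_y=10))
```

Because each exon is a distinct `<rect>` element, a client can style, hyperlink or animate it individually, which is exactly the property the abstract highlights for SVG over raster formats.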
NASA Astrophysics Data System (ADS)
Karami, Mojtaba; Rangzan, Kazem; Saberi, Azim
2013-10-01
With emergence of air-borne and space-borne hyperspectral sensors, spectroscopic measurements are gaining more importance in remote sensing. Therefore, the number of available spectral reference data is constantly increasing. This rapid increase often exhibits a poor data management, which leads to ultimate isolation of data on disk storages. Spectral data without precise description of the target, methods, environment, and sampling geometry cannot be used by other researchers. Moreover, existing spectral data (in case it accompanied with good documentation) become virtually invisible or unreachable for researchers. Providing documentation and a data-sharing framework for spectral data, in which researchers are able to search for or share spectral data and documentation, would definitely improve the data lifetime. Relational Database Management Systems (RDBMS) are main candidates for spectral data management and their efficiency is proven by many studies and applications to date. In this study, a new approach to spectral data administration is presented based on spatial identity of spectral samples. This method benefits from scalability and performance of RDBMS for storage of spectral data, but uses GIS servers to provide users with interactive maps as an interface to the system. The spectral files, photographs and descriptive data are considered as belongings of a geospatial object. A spectral processing unit is responsible for evaluation of metadata quality and performing routine spectral processing tasks for newly-added data. As a result, by using internet browser software the users would be able to visually examine availability of data and/or search for data based on descriptive attributes associated to it. The proposed system is scalable and besides giving the users good sense of what data are available in the database, it facilitates participation of spectral reference data in producing geoinformation.
Khammash, Mustafa
2014-01-01
Reaction networks are systems in which the populations of a finite number of species evolve through predefined interactions. Such networks are found as modeling tools in many biological disciplines such as biochemistry, ecology, epidemiology, immunology, systems biology and synthetic biology. It is now well-established that, for small population sizes, stochastic models for biochemical reaction networks are necessary to capture randomness in the interactions. The tools for analyzing such models, however, still lag far behind their deterministic counterparts. In this paper, we bridge this gap by developing a constructive framework for examining the long-term behavior and stability properties of the reaction dynamics in a stochastic setting. In particular, we address the problems of determining ergodicity of the reaction dynamics, which is analogous to having a globally attracting fixed point for deterministic dynamics. We also examine when the statistical moments of the underlying process remain bounded with time and when they converge to their steady state values. The framework we develop relies on a blend of ideas from probability theory, linear algebra and optimization theory. We demonstrate that the stability properties of a wide class of biological networks can be assessed from our sufficient theoretical conditions that can be recast as efficient and scalable linear programs, well-known for their tractability. It is notably shown that the computational complexity is often linear in the number of species. We illustrate the validity, the efficiency and the wide applicability of our results on several reaction networks arising in biochemistry, systems biology, epidemiology and ecology. The biological implications of the results as well as an example of a non-ergodic biological network are also discussed. PMID:24968191
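The paper's linear-programming certificates for ergodicity are not reproduced here; as a hedged illustration of the stochastic reaction dynamics being analyzed, the sketch below simulates a birth-death network (0 → X at rate k_birth; X → 0 at rate k_death·x) with Gillespie's stochastic simulation algorithm. This network is ergodic with a Poisson stationary distribution of mean k_birth/k_death; the rate constants are invented for the example.

```python
import random

def gillespie_birth_death(k_birth, k_death, x0, t_end, rng):
    """Gillespie SSA for the birth-death network 0 -> X, X -> 0.
    Returns the sampled trajectory as (time, population) pairs."""
    t, x = 0.0, x0
    trajectory = [(t, x)]
    while t < t_end:
        rates = [k_birth, k_death * x]
        total = sum(rates)
        if total == 0.0:
            break
        t += rng.expovariate(total)          # exponential waiting time
        if rng.random() * total < rates[0]:  # pick reaction by propensity
            x += 1
        else:
            x -= 1
        trajectory.append((t, x))
    return trajectory

rng = random.Random(0)
traj = gillespie_birth_death(k_birth=10.0, k_death=1.0, x0=0, t_end=200.0, rng=rng)

# ergodicity: the long-run average should settle near k_birth/k_death = 10
late = [x for t, x in traj if t > 50.0]
mean_late = sum(late) / len(late)
```

Ergodicity, in the sense of the abstract, is what guarantees that this long-run average converges to the stationary mean regardless of the initial population.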
NASA Astrophysics Data System (ADS)
Tolba, Khaled Ibrahim; Morgenthal, Guido
2018-01-01
This paper presents an analysis of the scalability and efficiency of a simulation framework based on the vortex particle method. The code is applied for the numerical aerodynamic analysis of line-like structures. The numerical code runs on multicore CPU and GPU architectures using OpenCL framework. The focus of this paper is the analysis of the parallel efficiency and scalability of the method being applied to an engineering test case, specifically the aeroelastic response of a long-span bridge girder at the construction stage. The target is to assess the optimal configuration and the required computer architecture, such that it becomes feasible to efficiently utilise the method within the computational resources available for a regular engineering office. The simulations and the scalability analysis are performed on a regular gaming type computer.
Design of an H.264/SVC resilient watermarking scheme
NASA Astrophysics Data System (ADS)
Van Caenegem, Robrecht; Dooms, Ann; Barbarien, Joeri; Schelkens, Peter
2010-01-01
The rapid dissemination of media technologies has led to an increase in unauthorized copying and distribution of digital media. Digital watermarking, i.e. embedding information in the multimedia signal in a robust and imperceptible manner, can tackle this problem. Recently, there has been a huge growth in the number of different terminals and connections that can be used to consume multimedia. To tackle the resulting distribution challenges, scalable coding is often employed. Scalable coding allows the adaptation of a single bit-stream to varying terminal and transmission characteristics. As a result of this evolution, watermarking techniques that are robust against scalable compression have become essential in order to control illegal copying. In this paper, a watermarking technique resilient against scalable video compression using the state-of-the-art H.264/SVC codec is therefore proposed and evaluated.
NASA Technical Reports Server (NTRS)
Kikuchi, Hideaki; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya; Shimojo, Fuyuki; Saini, Subhash
2003-01-01
Scalability of a low-cost, Intel Xeon-based, multi-teraflop Linux cluster is tested for two high-end scientific applications: classical atomistic simulation based on the molecular dynamics method, and quantum mechanical calculation based on density functional theory. These scalable parallel applications use space-time multiresolution algorithms and feature computational-space decomposition, wavelet-based adaptive load balancing, and space-filling-curve-based data compression for scalable I/O. Comparative performance tests are performed on a 1,024-processor Linux cluster and a conventional higher-end parallel supercomputer, a 1,184-processor IBM SP4. The results show that the performance of the Linux cluster is comparable to that of the SP4. We also study various effects, such as the sharing of memory and L2 cache among processors, on the performance.
SeqPig: simple and scalable scripting for large sequencing data sets in Hadoop
Schumacher, André; Pireddu, Luca; Niemenmaa, Matti; Kallio, Aleksi; Korpelainen, Eija; Zanetti, Gianluigi; Heljanko, Keijo
2014-01-01
Summary: Hadoop MapReduce-based approaches have become increasingly popular due to their scalability in processing large sequencing datasets. However, as these methods typically require in-depth expertise in Hadoop and Java, they are still out of reach of many bioinformaticians. To solve this problem, we have created SeqPig, a library and a collection of tools to manipulate, analyze and query sequencing datasets in a scalable and simple manner. SeqPig scripts use the Hadoop-based distributed scripting engine Apache Pig, which automatically parallelizes and distributes data processing tasks. We demonstrate SeqPig's scalability over many computing nodes and illustrate its use with example scripts. Availability and Implementation: Available under the open source MIT license at http://sourceforge.net/projects/seqpig/ Contact: andre.schumacher@yahoo.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24149054
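SeqPig pipelines are written in Pig Latin and executed on Hadoop, which is not reproduced here; as a toy stand-in in plain Python (with invented data), the same map → group → reduce shape that Pig parallelizes looks like this:

```python
from itertools import groupby

# toy aligned reads as (read_name, chromosome) pairs -- invented data
reads = [("r1", "chr1"), ("r2", "chr2"), ("r3", "chr1"), ("r4", "chr1")]

# map: emit (chromosome, 1); shuffle: sort and group by key; reduce: sum
mapped = sorted((chrom, 1) for _, chrom in reads)
counts = {chrom: sum(n for _, n in grp)
          for chrom, grp in groupby(mapped, key=lambda kv: kv[0])}
```

In a real SeqPig/Pig job, the sort-and-group step is the distributed shuffle, so the same three-stage structure scales across many nodes without the script changing.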
Evaluation of 3D printed anatomically scalable transfemoral prosthetic knee.
Ramakrishnan, Tyagi; Schlafly, Millicent; Reed, Kyle B
2017-07-01
This case study compares a transfemoral amputee's gait while using the existing Ossur Total Knee 2000 and our novel 3D printed anatomically scalable transfemoral prosthetic knee. The anatomically scalable transfemoral prosthetic knee is 3D printed out of a carbon-fiber and nylon composite and has a gear-mesh coupling with a hard-stop weight-actuated locking mechanism aided by a cross-linked four-bar spring mechanism. This design can be scaled using anatomical dimensions of a human femur and tibia to provide a unique fit for each user. The transfemoral amputee tested is high functioning and walked on the Computer Assisted Rehabilitation Environment (CAREN) at a self-selected pace. The motion capture and force data collected showed distinct differences in gait dynamics. The data were used to compute the Combined Gait Asymmetry Metric (CGAM), whose scores revealed that gait on the Ossur Total Knee was overall more asymmetric than on the anatomically scalable transfemoral prosthetic knee. The anatomically scalable transfemoral prosthetic knee had higher peak knee flexion, which caused a large step-time asymmetry. This made walking on the anatomically scalable transfemoral prosthetic knee more strenuous due to the compensatory movements needed to adapt to the different dynamics. This can be overcome by tuning the cross-linked spring mechanism to better emulate the subject's dynamics. The subject stated that the knee would be good for daily use and has the potential to be adapted as a running knee.
Osborn, Sarah; Zulian, Patrick; Benson, Thomas; ...
2018-01-30
This work describes a domain embedding technique between two nonmatching meshes used for generating realizations of spatially correlated random fields with applications to large-scale sampling-based uncertainty quantification. The goal is to apply the multilevel Monte Carlo (MLMC) method for the quantification of output uncertainties of PDEs with random input coefficients on general and unstructured computational domains. We propose a highly scalable, hierarchical sampling method to generate realizations of a Gaussian random field on a given unstructured mesh by solving a reaction–diffusion PDE with a stochastic right-hand side. The stochastic PDE is discretized using the mixed finite element method on an embedded domain with a structured mesh, and then the solution is projected onto the unstructured mesh. This work describes implementation details on how to efficiently transfer data between the structured and unstructured meshes at coarse levels, assuming that this can be done efficiently on the finest level. We investigate the efficiency and parallel scalability of the technique for the scalable generation of Gaussian random fields in three dimensions. An application of the MLMC method is presented for quantifying uncertainties of subsurface flow problems. Here, we demonstrate the scalability of the sampling method with nonmatching mesh embedding, coupled with a parallel forward model problem solver, for large-scale 3D MLMC simulations with up to 1.9·10⁹ unknowns.
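The MLMC estimator the abstract relies on is a telescoping sum: many cheap samples on coarse levels, few expensive samples on fine correction levels. The sketch below shows that structure on a toy scalar problem (a level-dependent rounding of ω in place of a PDE solve); the payoff function and sample counts are invented for illustration.

```python
import random

def mlmc_estimate(sample_level, n_samples, rng):
    """Multilevel Monte Carlo: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}].
    Each correction term uses the same random input on both levels so
    the difference has small variance and needs few samples."""
    total = 0.0
    for level, n in enumerate(n_samples):
        acc = 0.0
        for _ in range(n):
            omega = rng.random()  # shared random input for both levels
            fine = sample_level(level, omega)
            coarse = sample_level(level - 1, omega) if level > 0 else 0.0
            acc += fine - coarse
        total += acc / n
    return total

def payoff(level, omega):
    # toy level-l "discretization": snap omega to a grid of spacing 2^-(l+2)
    h = 2.0 ** -(level + 2)
    return (round(omega / h) * h) ** 2

# estimate E[omega^2] = 1/3: 4000 coarse samples, far fewer on fine levels
estimate = mlmc_estimate(payoff, [4000, 1000, 250], random.Random(1))
```

Coupling the fine and coarse evaluations through the same ω is the essential trick; in the paper that coupling is what the hierarchical Gaussian-field sampler provides across mesh levels.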
Responsive, Flexible and Scalable Broader Impacts (Invited)
NASA Astrophysics Data System (ADS)
Decharon, A.; Companion, C.; Steinman, M.
2010-12-01
In many educator professional development workshops, scientists present content in a slideshow-type format and field questions afterwards. Drawbacks of this approach include the inability to begin the lecture with content that is responsive to audience needs, the lack of flexible access to specific material within the linear presentation, and the limited scalability of “Q&A” sessions to broader audiences. Often this type of traditional interaction provides little direct benefit to the scientists. The Centers for Ocean Sciences Education Excellence - Ocean Systems (COSEE-OS) applies the technique of concept mapping, with demonstrated effectiveness in helping scientists and educators “get on the same page” (deCharon et al., 2009). A key aspect is scientist professional development geared towards improving face-to-face and online communication with non-scientists. COSEE-OS promotes scientist-educator collaboration, tests the application of scientist-educator maps in new contexts through webinars, and is piloting the expansion of maps as long-lived resources for the broader community. Collaboration - COSEE-OS has developed and tested a workshop model bringing scientists and educators together in a peer-oriented process, often clarifying common misconceptions. Scientist-educator teams develop online concept maps that are hyperlinked to “assets” (i.e., images, videos, news) and are responsive to the needs of non-scientist audiences. In workshop evaluations, 91% of educators said that the process of concept mapping helped them think through science topics and 89% said that concept mapping helped build a bridge of communication with scientists (n=53). Application - After developing a concept map, with COSEE-OS staff assistance, scientists are invited to give webinar presentations that include live “Q&A” sessions.
The webinars extend the reach of scientist-created concept maps to new contexts, both geographically and topically (e.g., oil spill), with a relatively small investment of time. Initiated in summer 2010, the webinars are interactive and highly flexible: people can participate from their homes anywhere and interact according to their comfort levels (i.e., submitting questions in “chat boxes” rather than orally). Expansion - To extend scientists' research beyond educators attending a workshop or webinar, COSEE-OS uses a blog as an additional mode of communication. Topically focused by concept maps, blogs serve as a forum for scalable content. The varied types of formatting allow scientists to create long-lived resources that remain attributed to them while supporting sustained educator engagement. Blogs are another point of contact and allow educators further asynchronous access to scientists. Based on COSEE-OS evaluations, interacting on a blog was found to be educators' preferred method of following up with scientists. Sustained engagement of scientists or educators requires a specific return on investment. Workshops and web tools can be used together to maximize scientist impact with a relatively small investment of time. As one educator stated, “It really helps my students' interest when we discuss concepts and I tell them my knowledge comes directly from a scientist!” [A. deCharon et al. (2009), Online tools help get scientists and educators on the same page, Eos Transactions, American Geophysical Union, 90(34), 289-290.]
Fabrication of Scalable Indoor Light Energy Harvester and Study for Agricultural IoT Applications
NASA Astrophysics Data System (ADS)
Watanabe, M.; Nakamura, A.; Kunii, A.; Kusano, K.; Futagawa, M.
2015-12-01
A scalable indoor light energy harvester was fabricated by microelectromechanical system (MEMS) and printing hybrid technology and evaluated for agricultural IoT applications under different environmental input power density conditions, such as outdoor farming under the sun, greenhouse farming under scattered lighting, and a plant factory under LEDs. We fabricated and evaluated a dye-sensitized solar cell (DSC) as a low-cost and “scalable” optical harvester device. We developed a transparent conductive oxide (TCO)-less process with a honeycomb metal mesh substrate fabricated by MEMS technology. In terms of the electrical and optical properties, we achieved scalable harvester output power through cell area sizing. Second, because the harvested environmental input power is unstable, we evaluated the dependence of the input-power-scalable characteristics on the input light intensity, spectrum distribution, and light inlet direction angle. The TiO2 fabrication relied on nanoimprint technology, which was designed for optical optimization and fabrication, and we confirmed that the harvesters are robust to a variety of environments. Finally, we studied optical energy harvesting applications for agricultural IoT systems. These scalable indoor light harvesters could be used in many applications and situations in smart agriculture.
Scalable domain decomposition solvers for stochastic PDEs in high performance computing
Desai, Ajit; Khalil, Mohammad; Pettit, Chris; ...
2017-09-21
Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems, or linearized systems for non-linear problems, with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems, and although these algorithms exhibit excellent scalability, significant algorithmic and implementation challenges remain in extending them to solve extreme-scale stochastic systems on emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both the spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain-level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation with a spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.
Theoretical and Empirical Analysis of a Spatial EA Parallel Boosting Algorithm.
Kamath, Uday; Domeniconi, Carlotta; De Jong, Kenneth
2018-01-01
Many real-world problems involve massive amounts of data. Under these circumstances, learning algorithms often become prohibitively expensive, making scalability a pressing issue to be addressed. A common approach is to perform sampling to reduce the size of the dataset and enable efficient learning. Alternatively, one customizes learning algorithms to achieve scalability. In either case, the key challenge is to obtain algorithmic efficiency without compromising the quality of the results. In this article, we discuss a meta-learning algorithm (PSBML) that combines concepts from spatially structured evolutionary algorithms (SSEAs) with concepts from ensemble and boosting methodologies to achieve the desired scalability property. We present both theoretical and empirical analyses which show that PSBML preserves a critical property of boosting, specifically, convergence to a distribution centered around the margin. We then present additional empirical analyses showing that this meta-level algorithm provides a general and effective framework that can be used in combination with a variety of learning classifiers. We perform extensive experiments to investigate the trade-off achieved between scalability and accuracy, and robustness to noise, on both synthetic and real-world data. These empirical results corroborate our theoretical analysis, and demonstrate the potential of PSBML in achieving scalability without sacrificing accuracy.
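PSBML itself couples a grid of spatially structured learners, which is beyond a short sketch; the margin-concentration property it inherits from boosting can, however, be seen in a tiny reweighting sketch. Exponential-loss reweighting with fixed per-example margins (hypothetical numbers) concentrates weight on the hardest, smallest-margin examples:

```python
import math

def boost_weights(margins, rounds=20, lr=0.5):
    """Exponential-loss reweighting: examples with small margins (hard,
    near the decision boundary) accumulate weight; easy, large-margin
    examples fade. Weights are renormalized each round."""
    w = [1.0] * len(margins)
    for _ in range(rounds):
        w = [wi * math.exp(-lr * m) for wi, m in zip(w, margins)]
        s = sum(w)
        w = [wi / s for wi in w]
    return w

# three examples: hard (margin 0.1), moderate (1.0), easy (2.0)
weights = boost_weights([0.1, 1.0, 2.0])
```

After a few rounds nearly all of the mass sits on the small-margin example, i.e. the weight distribution concentrates around the margin, which is the boosting property the article proves PSBML preserves at the meta level.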
Progressive Dictionary Learning with Hierarchical Predictive Structure for Scalable Video Coding.
Dai, Wenrui; Shen, Yangmei; Xiong, Hongkai; Jiang, Xiaoqian; Zou, Junni; Taubman, David
2017-04-12
Dictionary learning has emerged as a promising alternative to the conventional hybrid coding framework. However, the rigid structure of sequential training and prediction degrades its performance in scalable video coding. This paper proposes a progressive dictionary learning framework with a hierarchical predictive structure for scalable video coding, especially in the low-bitrate region. For pyramidal layers, sparse representation based on a spatio-temporal dictionary is adopted to improve the coding efficiency of enhancement layers (ELs) with a guarantee of reconstruction performance. The overcomplete dictionary is trained to adaptively capture local structures along motion trajectories as well as exploit the correlations between neighboring layers of resolutions. Furthermore, progressive dictionary learning is developed to enable scalability in the temporal domain and restrict error propagation in a closed-loop predictor. Under the hierarchical predictive structure, online learning is leveraged to guarantee the training and prediction performance with an improved convergence rate. To remain compatible with the state-of-the-art scalable extension of H.264/AVC and the latest HEVC, standardized codec cores are utilized to encode the base and enhancement layers. Experimental results show that the proposed method outperforms the latest SHVC and HEVC simulcast over extensive test sequences with various resolutions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ibrahim, Khaled Z.; Epifanovsky, Evgeny; Williams, Samuel W.
Coupled-cluster methods provide highly accurate models of molecular structure by explicit numerical calculation of tensors representing the correlation between electrons. These calculations are dominated by a sequence of tensor contractions, motivating the development of numerical libraries for such operations. While based on matrix-matrix multiplication, these libraries are specialized to exploit symmetries in the molecular structure and in electronic interactions, and thus reduce the size of the tensor representation and the complexity of contractions. The resulting algorithms are irregular, and their parallelization has previously been achieved via dynamic scheduling or specialized data decompositions. We introduce our efforts to extend the Libtensor framework to work in the distributed memory environment in a scalable and energy-efficient manner. We achieve up to 240× speedup compared with the best optimized shared memory implementation. We attain scalability to hundreds of thousands of compute cores on three distributed-memory architectures (Cray XC30 and XC40, BlueGene/Q) and on a heterogeneous GPU-CPU system (Cray XK7). As the bottlenecks shift from compute-bound DGEMMs to communication-bound collectives as the size of the molecular system scales, we adopt two radically different parallelization approaches for handling load imbalance. Nevertheless, we preserve a unified interface to both programming models to maintain the productivity of computational quantum chemists.
NASA Astrophysics Data System (ADS)
Wang, Shuguang; Zhou, Tong; Li, Dehui; Zhong, Zhenyang
2016-06-01
Scalable arrays of ordered nano-pillars with precisely controllable quantum nanostructures (QNs) are ideal candidates for exploring the fundamental features of cavity quantum electrodynamics. They also have great potential for innovative nano-optoelectronic devices for future quantum communication and integrated photonic circuits. Here, we present a synthesis of such a hybrid system combining nanosphere lithography and self-assembly during heteroepitaxy. The precise positioning and controllable evolution of self-assembled Ge QNs, including quantum dot necklaces (QDNs), QD molecules (QDMs) and quantum rings (QRs), on Si nano-pillars are readily achieved. Considering the strain relaxation and the non-uniform Ge growth due to the thickness-dependent and anisotropic surface diffusion of adatoms on the pillars, a comprehensive scenario of Ge growth on Si pillars emerges. It clarifies the inherent mechanism underlying the controllable growth of the QNs on the pillar and inspires a deliberate two-step growth procedure to engineer controllable QNs on the pillar. Our results pave a promising avenue to the desired nano-pillar-QN systems that facilitate strong light-matter interaction through both spectral and spatial coupling between the QNs and the cavity modes of a single pillar and of periodic pillars.
Luo, Jun-Wei; Li, Shu-Shen; Zunger, Alex
2017-09-22
The electric field manipulation of the Rashba spin-orbit coupling effects provides a route to electrically control spins, constituting the foundation of the field of semiconductor spintronics. In general, the strength of the Rashba effects depends linearly on the applied electric field and is significant only for heavy-atom materials with large intrinsic spin-orbit interaction under high electric fields. Here, we illustrate in 1D semiconductor nanowires an anomalous field dependence of the hole (but not electron) Rashba effect (HRE). (i) At low fields, the strength of the HRE exhibits a steep increase with the field so that even low fields can be used for device switching. (ii) At higher fields, the HRE undergoes a rapid transition to saturation with a giant strength even for light-atom materials such as Si (exceeding 100 meV Å). (iii) The nanowire-size dependence of the saturation HRE is rather weak for light-atom Si, so size fluctuations would have a limited effect; this is a key requirement for scalability of Rashba-field-based spintronic devices. These three features offer Si nanowires as a promising platform for the realization of scalable complementary metal-oxide-semiconductor compatible spintronic devices.
Interface induced spin-orbit interaction in silicon quantum dots and prospects of scalability
NASA Astrophysics Data System (ADS)
Ferdous, Rifat; Wai, Kok; Veldhorst, Menno; Hwang, Jason; Yang, Henry; Klimeck, Gerhard; Dzurak, Andrew; Rahman, Rajib
A scalable quantum computing architecture requires reproducibility of key qubit properties, such as resonance frequency and coherence time. Randomness in these properties would necessitate individual knowledge of each qubit in a quantum computer. Spin qubits hosted in silicon (Si) quantum dots (QDs) are promising building blocks for a large-scale quantum computer because of their long coherence times. The Stark shift of the electron g-factor in these QDs has been used to selectively address multiple qubits. From atomistic tight-binding studies we investigated the effect of interface non-ideality on the Stark shift of the g-factor in a Si QD. We find that both the sign and magnitude of the Stark shift change depending on the location of a monoatomic step at the interface relative to the dot center. Thus the presence of interface steps in these devices will cause variability in the electron g-factor and its Stark shift depending on the location of the qubit. This behavior will also cause varying sensitivity to charge noise from one qubit to another, which will randomize the dephasing times T2*. This predicted device-to-device variability was recently observed experimentally in three qubits fabricated at a Si/SiO2 interface, which validates the issues discussed.
Scalable Combinatorial Tools for Health Disparities Research
Langston, Michael A.; Levine, Robert S.; Kilbourne, Barbara J.; Rogers, Gary L.; Kershenbaum, Anne D.; Baktash, Suzanne H.; Coughlin, Steven S.; Saxton, Arnold M.; Agboto, Vincent K.; Hood, Darryl B.; Litchveld, Maureen Y.; Oyana, Tonny J.; Matthews-Juarez, Patricia; Juarez, Paul D.
2014-01-01
Despite staggering investments made in unraveling the human genome, current estimates suggest that as much as 90% of the variance in cancer and chronic diseases can be attributed to factors outside an individual’s genetic endowment, particularly to environmental exposures experienced across his or her life course. New analytical approaches are clearly required as investigators turn to complicated systems theory and ecological, place-based and life-history perspectives in order to understand more clearly the relationships between social determinants, environmental exposures and health disparities. While traditional data analysis techniques remain foundational to health disparities research, they are easily overwhelmed by the ever-increasing size and heterogeneity of available data needed to illuminate latent gene × environment interactions. This has prompted the adaptation and application of scalable combinatorial methods, many from genome science research, to the study of population health. Most of these powerful tools are algorithmically sophisticated, highly automated and mathematically abstract. Their utility motivates the main theme of this paper, which is to describe real applications of innovative transdisciplinary models and analyses in an effort to help move the research community closer toward identifying the causal mechanisms and associated environmental contexts underlying health disparities. The public health exposome is used as a contemporary focus for addressing the complex nature of this subject. PMID:25310540
Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli; Brett, Bevin
2013-01-01
One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time, which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. In this work, we have developed a software platform that is designed to support high-performance 3D medical image processing for a wide range of applications using increasingly available and affordable commodity computing systems: multi-core, clusters, and cloud computing systems. To achieve scalable, high-performance computing, our platform (1) employs size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D image processing algorithms; (2) supports task scheduling for efficient load distribution and balancing; and (3) consists of a layered parallel software libraries that allow a wide range of medical applications to share the same functionalities. We evaluated the performance of our platform by applying it to an electronic cleansing system in virtual colonoscopy, with initial experimental results showing a 10 times performance improvement on an 8-core workstation over the original sequential implementation of the system. PMID:23366803
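The size-adaptive, distributable block-volume structure described above can be illustrated with a minimal sketch (an assumption for illustration, not the authors' platform code): tile a 3D volume into index blocks that independent workers can process in parallel.

```python
# Minimal sketch (assumed for illustration): partition a 3D volume into
# fixed-size blocks, clamping the final block on each axis to the volume edge.
def block_volume(shape, block):
    """Yield ((z0, z1), (y0, y1), (x0, x1)) index ranges tiling the volume."""
    zs, ys, xs = shape
    bz, by, bx = block
    for z in range(0, zs, bz):
        for y in range(0, ys, by):
            for x in range(0, xs, bx):
                yield ((z, min(z + bz, zs)),
                       (y, min(y + by, ys)),
                       (x, min(x + bx, xs)))

# A 512 x 512 x 300 volume tiled in 128^3 blocks: 4 * 4 * 3 = 48 work units.
blocks = list(block_volume((512, 512, 300), (128, 128, 128)))
print(len(blocks))  # -> 48
```

Each range tuple is a self-contained work unit, so a task scheduler can hand blocks to cores or cluster nodes for load balancing.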
RF wave simulation for cold edge plasmas using the MFEM library
NASA Astrophysics Data System (ADS)
Shiraiwa, S.; Wright, J. C.; Bonoli, P. T.; Kolev, T.; Stowell, M.
2017-10-01
A newly developed generic electromagnetic (EM) simulation tool for modeling RF wave propagation in SOL plasmas is presented. The primary motivation of this development is to extend the domain partitioning approach for incorporating arbitrarily shaped SOL plasmas and antennas into the TORIC core ICRF solver, previously demonstrated in 2D geometry [S. Shiraiwa et al., "HISTORIC: extending core ICRF wave simulation to include realistic SOL plasmas", Nucl. Fusion, in press], to larger and more complicated simulations by including a realistic 3D antenna and integrating an RF rectified sheath potential model. Such an extension requires a scalable, high-fidelity 3D edge plasma wave simulation. We used the MFEM [
Wang, Shuguang; Zhou, Tong; Li, Dehui; Zhong, Zhenyang
2016-01-01
The scalable array of ordered nano-pillars with precisely controllable quantum nanostructures (QNs) is an ideal candidate for exploring the fundamental features of cavity quantum electrodynamics. It also has great potential for innovative nano-optoelectronic devices for future quantum communication and integrated photonic circuits. Here, we present a synthesis of such a hybrid system that combines nanosphere lithography with self-assembly during heteroepitaxy. The precise positioning and controllable evolution of self-assembled Ge QNs, including quantum dot necklaces (QDNs), QD molecules (QDMs) and quantum rings (QRs), on Si nano-pillars are readily achieved. Considering the strain relaxation and the non-uniform Ge growth due to the thickness-dependent and anisotropic surface diffusion of adatoms on the pillars, a comprehensive scenario of Ge growth on Si pillars is established. It clarifies the inherent mechanism underlying the controllable growth of the QNs on the pillar and inspires a deliberate two-step growth procedure to engineer controllable QNs on the pillar. Our results pave a promising avenue toward the desired nano-pillar-QN system, which facilitates strong light-matter interaction through both spectral and spatial coupling between the QNs and the cavity modes of a single pillar and the periodic pillar array. PMID:27353231
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clement, T. Prabhakar; Barnett, Mark O.; Zheng, Chunmiao
DE-FG02-06ER64213: Development of Modeling Methods and Tools for Predicting Coupled Reactive Transport Processes in Porous Media at Multiple Scales. Investigators: T. Prabhakar Clement (PD/PI) and Mark O. Barnett (Auburn), Chunmiao Zheng (Univ. of Alabama), and Norman L. Jones (BYU). The objective of this project was to develop scalable modeling approaches for predicting the reactive transport of metal contaminants. We studied two contaminants, a radioactive cation [U(VI)] and a metal(loid) oxyanion system [As(III/V)], and investigated their interactions with two types of subsurface materials, iron and manganese oxyhydroxides. We also developed modeling methods for describing the experimental results. Overall, the project supported 25 researchers at three universities and produced 15 journal articles, 3 book chapters, 6 PhD dissertations and 6 MS theses. Three key journal articles are: 1) Jeppu et al., A scalable surface complexation modeling framework for predicting arsenate adsorption on goethite-coated sands, Environ. Eng. Sci., 27(2): 147-158, 2010. 2) Loganathan et al., Scaling of adsorption reactions: U(VI) experiments and modeling, Applied Geochemistry, 24(11), 2051-2060, 2009. 3) Phillippi et al., Theoretical solid/solution ratio effects on adsorption and transport: uranium (VI) and carbonate, Soil Sci. Soc. of America, 71:329-335, 2007.
Yoshikura, Hiroshi
2018-04-27
The relation between the number of measles patients (y) and population size (x) was expressed by the equation y = ax^s, where a is a constant and s is the slope of the log-log plot; s was 2.04-2.17 for prefectures in Japan, i.e., the number of patients was proportional to the square of the prefecture population size. For European countries that joined the European Union no later than 2009, the slope was 1.43-1.87. The population dependency of measles found among prefectures in Japan was thus scalable up to European countries. This was surprising because, unlike in Japan, population density in EU countries was neither uniform nor proportional to population size. The population size dependency was not observed among Western Pacific and South-East Asian countries, probably on account of confounding, interacting socioeconomic factors. Correlation between measles incidence and birth rate, infant mortality or GDP per capita was almost insignificant. The size distribution of local infection clusters (LICs) of measles and rubella in Japan followed a power law. For measles, though the population dependency remained unchanged after "elimination", there was a change in the Zipf-type plot of LIC sizes. After the "elimination", LICs linked to importation-related outbreaks in less populated prefectures emerged as the top-ranked LICs.
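Estimating the exponent s of a power law y = ax^s, as in the abstract above, amounts to a linear least-squares fit in log-log space; a minimal sketch with synthetic data (assumed for illustration, not the paper's surveillance data) follows:

```python
import numpy as np

# Synthetic data generated exactly on y = a * x**s, so the fit should
# recover the assumed slope. Real case counts would scatter around the line.
x = np.array([1.2e6, 2.5e6, 5.0e6, 9.0e6, 1.3e7])  # population sizes
a_true, s_true = 2e-12, 2.0                         # assumed parameters
y = a_true * x**s_true                              # synthetic case counts

# log y = log a + s * log x, so an ordinary degree-1 polyfit gives s.
s_hat, log_a_hat = np.polyfit(np.log(x), np.log(y), 1)
print(round(float(s_hat), 3))  # -> 2.0
```

A fitted slope near 2, as reported for Japanese prefectures, means case counts grow with the square of population size.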
Sample-to-answer palm-sized nucleic acid testing device towards low-cost malaria mass screening.
Choi, Gihoon; Prince, Theodore; Miao, Jun; Cui, Liwang; Guan, Weihua
2018-05-19
The effectiveness of malaria screening and treatment depends heavily on low-cost access to highly sensitive and specific malaria tests. We report a real-time fluorescence nucleic acid testing device for malaria field detection with automated and scalable sample preparation capability. The device consists of a compact analyzer and a disposable microfluidic reagent compact disc. The parasite DNA sample preparation and subsequent real-time LAMP detection were seamlessly integrated on a single microfluidic compact disc, driven by energy-efficient, non-centrifuge-based magnetic field interactions. Each disc contains four parallel testing units, which can be configured either as four identical tests or as four species-specific tests. When configured as species-specific tests, it can identify two of the most life-threatening malaria species (P. falciparum and P. vivax). The NAT device can process four samples simultaneously within a 50 min turnaround time. It achieves a detection limit of ~0.5 parasites/µl of whole blood, sufficient for detecting asymptomatic parasite carriers. The combination of sensitivity, specificity, cost, and scalable sample preparation suggests the real-time fluorescence LAMP device could be particularly useful for malaria screening in field settings. Copyright © 2018 Elsevier B.V. All rights reserved.
Providing scalable system software for high-end simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greenberg, D.
1997-12-31
Detailed, full-system, complex physics simulations have been shown to be feasible on systems containing thousands of processors. In order to manage these computer systems it has been necessary to create scalable system services. In this talk Sandia's research on scalable systems will be described. The key concepts of low-overhead data movement through portals and of flexible services through multi-partition architectures will be illustrated in detail. The talk will conclude with a discussion of how these techniques can be applied outside of the standard monolithic MPP system.
NASA Technical Reports Server (NTRS)
Luke, Edward Allen
1993-01-01
Two algorithms capable of computing a transonic 3-D inviscid flow field about rotating machines are considered for parallel implementation. During the study of these algorithms, a significant new method of measuring the performance of parallel algorithms is developed. The theory that supports this new method creates an empirical definition of scalable parallel algorithms that is used to produce quantifiable evidence that a scalable parallel application was developed. The implementation of the parallel application and an automated domain decomposition tool are also discussed.
Rohmer, Kai; Jendersie, Johannes; Grosch, Thorsten
2017-11-01
Augmented Reality offers many applications today, especially on mobile devices. Due to the lack of mobile hardware for illumination measurements, photorealistic rendering with consistent appearance of virtual objects is still an area of active research. In this paper, we present a full two-stage pipeline for environment acquisition and augmentation of live camera images using a mobile device with a depth sensor. We show how to directly work on a recorded 3D point cloud of the real environment containing high dynamic range color values. For unknown and automatically changing camera settings, a color compensation method is introduced. Based on this, we show photorealistic augmentations using variants of differential light simulation techniques. The presented methods are tailored for mobile devices and run at interactive frame rates. However, our methods are scalable to trade performance for quality and can produce quality renderings on desktop hardware.
Quantum annealing with all-to-all connected nonlinear oscillators
Puri, Shruti; Andersen, Christian Kraglund; Grimsmo, Arne L.; Blais, Alexandre
2017-01-01
Quantum annealing aims at solving combinatorial optimization problems mapped to Ising interactions between quantum spins. Here, with the objective of developing a noise-resilient annealer, we propose a paradigm for quantum annealing with a scalable network of two-photon-driven Kerr-nonlinear resonators. Each resonator encodes an Ising spin in a robust degenerate subspace formed by two coherent states of opposite phases. A fully connected optimization problem is mapped to local fields driving the resonators, which are connected with only local four-body interactions. We describe an adiabatic annealing protocol in this system and analyse its performance in the presence of photon loss. Numerical simulations indicate substantial resilience to this noise channel, leading to a high success probability for quantum annealing. Finally, we propose a realistic circuit QED implementation of this promising platform for implementing a large-scale quantum Ising machine. PMID:28593952
Optimizing Interactive Development of Data-Intensive Applications
Interlandi, Matteo; Tetali, Sai Deep; Gulzar, Muhammad Ali; Noor, Joseph; Condie, Tyson; Kim, Miryung; Millstein, Todd
2017-01-01
Modern Data-Intensive Scalable Computing (DISC) systems are designed to process data through batch jobs that execute programs (e.g., queries) compiled from a high-level language. These programs are often developed interactively by posing ad-hoc queries over the base data until a desired result is generated. We observe that there can be significant overlap in the structure of these queries used to derive the final program. Yet, each successive execution of a slightly modified query is performed anew, which can significantly increase the development cycle. Vega is an Apache Spark framework that we have implemented for optimizing a series of similar Spark programs, likely originating from a development or exploratory data analysis session. Spark developers (e.g., data scientists) can leverage Vega to significantly reduce the amount of time it takes to re-execute a modified Spark program, reducing the overall time to market for their Big Data applications. PMID:28405637
Framework for scalable adsorbate–adsorbate interaction models
Hoffmann, Max J.; Medford, Andrew J.; Bligaard, Thomas
2016-06-02
Here, we present a framework for physically motivated models of adsorbate–adsorbate interaction between small molecules on transition and coinage metals based on modifications to the substrate electronic structure due to adsorption. We use this framework to develop one model for transition and one for coinage metal surfaces. The models for transition metals are based on the d-band center position, and the models for coinage metals are based on partial charges. The models require no empirical parameters, only two first-principles calculations per adsorbate as input, and therefore scale linearly with the number of reaction intermediates. By theory-to-theory comparison with explicit density functional theory calculations over a wide range of adsorbates and surfaces, we show that the root-mean-squared error for differential adsorption energies is less than 0.2 eV for up to 1 ML coverage.
Developing a comprehensive curriculum for public health advocacy.
Hines, Ayelet; Jernigan, David H
2012-11-01
There is a substantial gap in public health school curricula regarding advocacy. Development of such a curriculum faces three challenges: faculty lack advocacy skills and experience; the public health literature on effective advocacy is limited; and yet a successful curriculum must be scalable to meet the needs of approximately 9,000 public health students graduating each year. To meet these challenges, we propose a 100-hour interactive online curriculum in five sections: campaigning and organizing, policy making and lobbying, campaign communications, new media, and fund-raising. We outline the content for individual modules in each of these sections, describe how the curriculum would build on existing interactive learning and social media technologies, and provide readers the opportunity to "test-drive" excerpts of a module on "grasstops" organizing. Developing advocacy skills and expertise is critical to meeting the challenges of public health today, and we provide a blueprint for how such training might be brought to scale in the field.
Multi-neuron intracellular recording in vivo via interacting autopatching robots
Holst, Gregory L; Singer, Annabelle C; Han, Xue; Brown, Emery N
2018-01-01
The activities of groups of neurons in a circuit or brain region are important for neuronal computations that contribute to behaviors and disease states. Traditional extracellular recordings have been powerful and scalable, but much less is known about the intracellular processes that lead to spiking activity. We present a robotic system, the multipatcher, capable of automatically obtaining blind whole-cell patch clamp recordings from multiple neurons simultaneously. The multipatcher significantly extends automated patch clamping, or 'autopatching', to guide four interacting electrodes in a coordinated fashion, avoiding mechanical coupling in the brain. We demonstrate its performance in the cortex of anesthetized and awake mice. A multipatcher with four electrodes took an average of 10 min to obtain dual or triple recordings in 29% of trials in anesthetized mice, and in 18% of the trials in awake mice, thus illustrating practical yield and throughput to obtain multiple, simultaneous whole-cell recordings in vivo. PMID:29297466
Carbon Nanotube Based Groundwater Remediation: The Case of Trichloroethylene.
Jha, Kshitij C; Liu, Zhuonan; Vijwani, Hema; Nadagouda, Mallikarjuna; Mukhopadhyay, Sharmila M; Tsige, Mesfin
2016-07-21
Adsorption of chlorinated organic contaminants (COCs) on carbon nanotubes (CNTs) has been gaining ground as a remedial platform for groundwater treatment. Applications depend on our mechanistic understanding of COC adsorption on CNTs. This paper lays out the nature of competing interactions at play in hybrid, membrane, and pure CNT based systems and presents results with the perspective of existing gaps in design strategies. First, current remediation approaches to trichloroethylene (TCE), the most ubiquitous of the COCs, is presented along with examination of forces contributing to adsorption of analogous contaminants at the molecular level. Second, we present results on TCE adsorption and remediation on pure and hybrid CNT systems with a stress on the specific nature of substrate and molecular architecture that would contribute to competitive adsorption. The delineation of intermolecular interactions that contribute to efficient remediation is needed for custom, scalable field design of purification systems for a wide range of contaminants.
Accelerating Full Configuration Interaction Calculations for Nuclear Structure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Chao; Sternberg, Philip; Maris, Pieter
2008-04-14
One of the emerging computational approaches in nuclear physics is the full configuration interaction (FCI) method for solving the many-body nuclear Hamiltonian in a sufficiently large single-particle basis space to obtain exact answers - either directly or by extrapolation. The lowest eigenvalues and corresponding eigenvectors for very large, sparse and unstructured nuclear Hamiltonian matrices are obtained and used to evaluate additional experimental quantities. These matrices pose a significant challenge to the design and implementation of efficient and scalable algorithms for obtaining solutions on massively parallel computer systems. In this paper, we describe the computational strategies employed in a state-of-the-art FCI code, MFDn (Many Fermion Dynamics - nuclear), as well as techniques we recently developed to enhance the computational efficiency of MFDn. We will demonstrate the current capability of MFDn and report the latest performance improvement we have achieved. We will also outline our future research directions.
phiGENOME: an integrative navigation throughout bacteriophage genomes.
Stano, Matej; Klucar, Lubos
2011-11-01
phiGENOME is a web-based genome browser generating dynamic and interactive graphical representations of phage genomes stored in phiSITE, a database of gene regulation in bacteriophages. phiGENOME is an integral part of the phiSITE web portal (http://www.phisite.org/phigenome) and was optimised for visualisation of phage genomes with an emphasis on gene regulatory elements. phiGENOME consists of three components: (i) a genome map viewer built using Adobe Flash technology, providing dynamic and interactive graphical display of phage genomes; (ii) a sequence browser based on precisely formatted HTML tags, providing detailed exploration of genome features at the sequence level and (iii) a regulation illustrator, based on Scalable Vector Graphics (SVG) and designed for graphical representation of gene regulations. Bringing 542 complete genome sequences accompanied by their rich annotations and references makes phiGENOME a unique information resource in the field of phage genomics. Copyright © 2011 Elsevier Inc. All rights reserved.
In vivo generation of DNA sequence diversity for cellular barcoding
Peikon, Ian D.; Gizatullina, Diana I.; Zador, Anthony M.
2014-01-01
Heterogeneity is a ubiquitous feature of biological systems. A complete understanding of such systems requires a method for uniquely identifying and tracking individual components and their interactions with each other. We have developed a novel method of uniquely tagging individual cells in vivo with a genetic ‘barcode’ that can be recovered by DNA sequencing. Our method is a two-component system comprised of a genetic barcode cassette whose fragments are shuffled by Rci, a site-specific DNA invertase. The system is highly scalable, with the potential to generate theoretical diversities in the billions. We demonstrate the feasibility of this technique in Escherichia coli. Currently, this method could be employed to track the dynamics of populations of microbes through various bottlenecks. Advances of this method should prove useful in tracking interactions of cells within a network, and/or heterogeneity within complex biological samples. PMID:25013177
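As a rough illustration of how fragment shuffling can yield "theoretical diversities in the billions" (an assumed combinatorial model, not the paper's exact cassette design): if an invertase can independently reorder and flip n distinguishable fragments, the number of distinct barcode states is n! * 2^n.

```python
import math

# Assumed model: each of n distinguishable fragments can occupy any position
# (n! orderings) and appear in either orientation (2**n flip patterns).
def barcode_diversity(n):
    return math.factorial(n) * 2**n

# With just 10 fragments the state space already exceeds 3.7 billion.
print(barcode_diversity(10))  # -> 3715891200
```

In practice the realized diversity is lower (not all states are equally reachable by the invertase), so this counts an upper bound under the stated assumptions.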
Declarative language design for interactive visualization.
Heer, Jeffrey; Bostock, Michael
2010-01-01
We investigate the design of declarative, domain-specific languages for constructing interactive visualizations. By separating specification from execution, declarative languages can simplify development, enable unobtrusive optimization, and support retargeting across platforms. We describe the design of the Protovis specification language and its implementation within an object-oriented, statically-typed programming language (Java). We demonstrate how to support rich visualizations without requiring a toolkit-specific data model and extend Protovis to enable declarative specification of animated transitions. To support cross-platform deployment, we introduce rendering and event-handling infrastructures decoupled from the runtime platform, letting designers retarget visualization specifications (e.g., from desktop to mobile phone) with reduced effort. We also explore optimizations such as runtime compilation of visualization specifications, parallelized execution, and hardware-accelerated rendering. We present benchmark studies measuring the performance gains provided by these optimizations and compare performance to existing Java-based visualization tools, demonstrating scalability improvements exceeding an order of magnitude.
NASA Astrophysics Data System (ADS)
Mortezapour, Ali; Ahmadi Borji, Mahdi; Lo Franco, Rosario
2017-05-01
Efficient entanglement preservation in open quantum systems is a crucial scope towards a reliable exploitation of quantum resources. We address this issue by studying how two-qubit entanglement dynamically behaves when two atom qubits move inside two separated identical cavities. The moving qubits independently interact with their respective cavity. As a main general result, we find that under resonant qubit-cavity interaction the initial entanglement between two moving qubits remains closer to its initial value as time passes compared to the case of stationary qubits. In particular, we show that the initial entanglement can be strongly protected from decay by suitably adjusting the velocities of the qubits according to the non-Markovian features of the cavities. Our results supply a further way of preserving quantum correlations against noise with a natural implementation in cavity-QED scenarios and are straightforwardly extendable to many qubits for scalability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Janjusic, Tommy; Kartsaklis, Christos
Memory scalability is an enduring problem and bottleneck that plagues many parallel codes. Parallel codes designed for high-performance systems are typically developed over the span of several, and in some instances 10+, years. As a result, optimization practices that were appropriate for earlier systems may no longer be valid and thus require careful reconsideration. Specifically, parallel codes whose memory footprint is a function of their scalability must be carefully considered for future exa-scale systems. In this paper we present a methodology and tool to study the memory scalability of parallel codes. Using our methodology we evaluate an application's memory footprint as a function of scalability, which we have coined memory efficiency, and describe our results. In particular, using our in-house tools we can pinpoint the specific application components which contribute to the application's overall memory footprint (application data structures, libraries, etc.).
A novel processing platform for post tape out flows
NASA Astrophysics Data System (ADS)
Vu, Hien T.; Kim, Soohong; Word, James; Cai, Lynn Y.
2018-03-01
As the computational requirements for post tape out (PTO) flows increase at the 7nm and below technology nodes, there is a need to increase the scalability of the computational tools in order to reduce the turn-around time (TAT) of the flows. Utilization of design hierarchy has been one proven method to provide sufficient partitioning to enable PTO processing. However, as the data is processed through the PTO flow, its effective hierarchy is reduced. The reduction is necessary to achieve the desired accuracy. Also, the sequential nature of the PTO flow is inherently non-scalable. To address these limitations, we are proposing a quasi-hierarchical solution that combines multiple levels of parallelism to increase the scalability of the entire PTO flow. In this paper, we describe the system and present experimental results demonstrating the runtime reduction through scalable processing with thousands of computational cores.
Scalable architecture for a room temperature solid-state quantum information processor.
Yao, N Y; Jiang, L; Gorshkov, A V; Maurer, P C; Giedke, G; Cirac, J I; Lukin, M D
2012-04-24
The realization of a scalable quantum information processor has emerged over the past decade as one of the central challenges at the interface of fundamental science and engineering. Here we propose and analyse an architecture for a scalable, solid-state quantum information processor capable of operating at room temperature. Our approach is based on recent experimental advances involving nitrogen-vacancy colour centres in diamond. In particular, we demonstrate that the multiple challenges associated with operation at ambient temperature, individual addressing at the nanoscale, strong qubit coupling, robustness against disorder and low decoherence rates can be simultaneously achieved under realistic, experimentally relevant conditions. The architecture uses a novel approach to quantum information transfer and includes a hierarchy of control at successive length scales. Moreover, it alleviates the stringent constraints currently limiting the realization of scalable quantum processors and will provide fundamental insights into the physics of non-equilibrium many-body quantum systems.
Scalable free energy calculation of proteins via multiscale essential sampling
NASA Astrophysics Data System (ADS)
Moritsugu, Kei; Terada, Tohru; Kidera, Akinori
2010-12-01
A multiscale simulation method, "multiscale essential sampling (MSES)," is proposed for calculating free energy surface of proteins in a sizable dimensional space with good scalability. In MSES, the configurational sampling of a full-dimensional model is enhanced by coupling with the accelerated dynamics of the essential degrees of freedom. Applying the Hamiltonian exchange method to MSES can remove the biasing potential from the coupling term, deriving the free energy surface of the essential degrees of freedom. The form of the coupling term ensures good scalability in the Hamiltonian exchange. As a test application, the free energy surface of the folding process of a miniprotein, chignolin, was calculated in the continuum solvent model. Results agreed with the free energy surface derived from the multicanonical simulation. Significantly improved scalability with the MSES method was clearly shown in the free energy calculation of chignolin in explicit solvent, which was achieved without increasing the number of replicas in the Hamiltonian exchange.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rizzi, Silvio; Hereld, Mark; Insley, Joseph
In this work we perform in-situ visualization of molecular dynamics simulations, which can help scientists to visualize simulation output on-the-fly, without incurring storage overheads. We present a case study to couple LAMMPS, the large-scale molecular dynamics simulation code, with vl3, our parallel framework for large-scale visualization and analysis. Our motivation is to identify effective approaches for co-visualization and exploration of large-scale atomistic simulations at interactive frame rates. We propose a system of coupled libraries and describe its architecture, with an implementation that runs on GPU-based clusters. We present the results of strong and weak scalability experiments, as well as future research avenues based on our results.
Simulation of Tasks Distribution in Horizontally Scalable Management System
NASA Astrophysics Data System (ADS)
Kustov, D.; Sherstneva, A.; Botygin, I.
2016-08-01
This paper presents a simulation model of the task distribution system for the components of a territorially distributed automated management system with a dynamically changing topology. Each resource of the distributed automated management system is represented by an agent, which makes it possible to specify the behavior of every resource appropriately and to ensure their interaction. Agent workload was imitated via service queries generated in a system-dynamics style using a flow diagram. Queries were generated in an abstractly represented center and then sent to the drive to be distributed to management system resources according to a ranking table.
Collagen based magnetic nanocomposites for oil removal applications
Thanikaivelan, Palanisamy; Narayanan, Narayanan T.; Pradhan, Bhabendra K.; Ajayan, Pulickel M.
2012-01-01
A stable magnetic nanocomposite of collagen and superparamagnetic iron oxide nanoparticles (SPIONs) is prepared by a simple process utilizing protein wastes from the leather industry. Molecular interaction between helical collagen fibers and spherical SPIONs is proven through calorimetric, microscopic and spectroscopic techniques. This nanocomposite exhibited selective oil absorption and magnetic tracking ability, allowing it to be used in oil removal applications. The environmental sustainability of the oil-adsorbed nanobiocomposite is also demonstrated here through its conversion into a bi-functional graphitic nanocarbon material via heat treatment. The approach highlights new avenues for converting bio-wastes into useful nanomaterials in scalable and inexpensive ways. PMID:22355744
Integration of an intelligent systems behavior simulator and a scalable soldier-machine interface
NASA Astrophysics Data System (ADS)
Johnson, Tony; Manteuffel, Chris; Brewster, Benjamin; Tierney, Terry
2007-04-01
As the Army's Future Combat Systems (FCS) introduce emerging technologies and new force structures to the battlefield, soldiers will increasingly face new challenges in workload management. The next generation warfighter will be responsible for effectively managing robotic assets in addition to performing other missions. Studies of future battlefield operational scenarios involving the use of automation, including the specification of existing and proposed technologies, will provide significant insight into potential problem areas regarding soldier workload. The US Army Tank Automotive Research, Development, and Engineering Center (TARDEC) is currently executing an Army technology objective program to analyze and evaluate the effect of automated technologies and their associated control devices with respect to soldier workload. The Human-Robotic Interface (HRI) Intelligent Systems Behavior Simulator (ISBS) is a human performance measurement simulation system that allows modelers to develop constructive simulations of military scenarios with various deployments of interface technologies in order to evaluate operator effectiveness. One such interface is TARDEC's Scalable Soldier-Machine Interface (SMI). The scalable SMI provides a configurable machine interface application that is capable of adapting to several hardware platforms by recognizing the physical space limitations of the display device. This paper describes the integration of the ISBS and Scalable SMI applications, which will ultimately benefit both systems. The ISBS will be able to use the Scalable SMI to visualize the behaviors of virtual soldiers performing HRI tasks, such as route planning, and the scalable SMI will benefit from stimuli provided by the ISBS simulation environment. The paper describes the background of each system and details of the system integration approach.
Level-2 Milestone 3504: Scalable Applications Preparations and Outreach for the Sequoia ID (Dawn)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Futral, W. Scott; Gyllenhaal, John C.; Hedges, Richard M.
2010-07-02
This report documents LLNL SAP project activities in anticipation of the ASC Sequoia system, ASC L2 milestone 3504: Scalable Applications Preparations and Outreach for the Sequoia ID (Dawn), due June 30, 2010.
Scalable Metadata Management for a Large Multi-Source Seismic Data Repository
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaylord, J. M.; Dodge, D. A.; Magana-Zook, S. A.
In this work, we implemented the key metadata management components of a scalable seismic data ingestion framework to address limitations in our existing system, and to position it for anticipated growth in volume and complexity.
NASA Astrophysics Data System (ADS)
Gomez de Arco, Lewis Mortimer
Graphene and carbon nanotubes have outstanding electrical and thermal conductivity. These characteristics make them exciting materials with high potential to replace silicon and surpass its performance in the next generation of semiconductor devices, which ought to be considerably smaller and faster than those used in present technology. Despite the excellent electrical and thermal conduction properties of graphene and carbon nanotubes, the advance of nanoelectronics based on them has been hampered by fundamental limitations of the current synthesis and integration technologies for these carbon nanomaterials. Therefore, there is a strong need for research at the fundamental and applied levels to help find the roadmap these materials need to follow in order to become a real alternative to silicon in future technologies. This dissertation presents our approach to overcoming some of the most critical problems that hinder the implementation of graphene and carbon nanotubes as important components in real-life macro- and nanoelectronic devices. Towards this end, we systematically studied synthesis methods for scalable, high-quality graphene and evaluated our large-scale synthesized graphene as transparent electrodes in functional energy conversion devices. In addition, we explored scalable methods to obtain carbon nanotube field-effect transistors with only semiconducting nanotube channels and studied the substrate influence on the structure and metal-to-semiconductor ratio of aligned nanotubes. Although we have successfully tackled some of the most important challenges of the above-mentioned one- and two-dimensional carbon nanostructures, more remains to be done to integrate them as functional components in electronic devices and to reach the goal of transferring them from the laboratory to the manufacturing industry, and ultimately to society.
In chapter 1, a general introduction to carbon nanomaterials is presented, followed by a more focused discussion of the structure and properties of graphene and carbon nanotubes. Chapter 2 presents the development of a chemical vapor deposition method for scalable graphene synthesis and the evaluation of its electrical properties as the active channel in field-effect transistors and as a transparent conductor. Chapter 3 presents further work on graphene synthesis on single-crystal nickel and the influence of the substrate atomic arrangement on the synthesized graphene. Chapter 4 presents the implementation of the highly scalable graphene synthesized by CVD as the transparent electrode in flexible organic photovoltaic cells. Chapter 5 evaluates the influence of substrate/nanotube interactions during aligned nanotube growth on the Raman signature, structure and metal-to-semiconductor ratio of the resulting aligned nanotubes. Chapter 6 presents our findings on a scalable method that can be used at wafer scale to achieve metal-to-semiconductor conversion of carbon nanotubes by light irradiation and its application to achieve semiconducting CNTFETs. Finally, in chapter 7, future research directions in related areas of science and technology are proposed.
Jara, Antonio J.; Moreno-Sanchez, Pedro; Skarmeta, Antonio F.; Varakliotis, Socrates; Kirstein, Peter
2013-01-01
Sensors utilize a large number of heterogeneous technologies for a varied set of application environments. The sheer number of devices involved requires that this Internet be the Future Internet, with a core network based on IPv6 and a higher scalability in order to be able to address all the devices, sensors and things located around us. This capability to connect through IPv6 devices, sensors and things is what is defining the so-called Internet of Things (IoT). IPv6 provides addressing space to reach this ubiquitous set of sensors, but legacy technologies, such as X10, European Installation Bus (EIB), Controller Area Network (CAN) and radio frequency ID (RFID) from the industrial, home automation and logistic application areas, do not support the IPv6 protocol. For that reason, a technique must be devised to map the sensor and identification technologies to IPv6, thus allowing homogeneous access via IPv6 features in the context of the IoT. This paper proposes a mapping between the native addressing of each technology and an IPv6 address following a set of rules that are discussed and proposed in this work. Specifically, the paper presents a technology-dependent IPv6 addressing proxy, which maps each device to the different subnetworks built under the IPv6 prefix addresses provided by the internet service provider for each home, building or user. The IPv6 addressing proxy offers a common addressing environment based on IPv6 for all the devices, regardless of the device technology. Thereby, this offers a scalable and homogeneous solution to interact with devices that do not support IPv6 addressing. The IPv6 addressing proxy has been implemented in a multi-protocol card, and its performance, scalability and interoperability have been evaluated successfully through a protocol built over IPv6. PMID:23686145
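The central mapping idea, deterministically embedding each legacy device's technology identifier and native address into the host bits of an IPv6 address under the ISP-delegated prefix, can be sketched as follows (the field layout and technology IDs here are hypothetical illustrations, not the paper's exact rule set):

```python
import ipaddress

# Hypothetical technology identifiers; the paper defines its own rules.
TECH_IDS = {"X10": 1, "EIB": 2, "CAN": 3, "RFID": 4}

def map_to_ipv6(prefix: str, technology: str, native_addr: int) -> ipaddress.IPv6Address:
    """Embed an 8-bit technology ID and a device's native address into
    the host part of an IPv6 address under the given prefix (/64 or shorter)."""
    net = ipaddress.IPv6Network(prefix)
    if net.prefixlen > 64:
        raise ValueError("need at least 64 host bits")
    # Host part: technology ID in the top 8 bits, native address below it.
    host = (TECH_IDS[technology] << 56) | (native_addr & ((1 << 56) - 1))
    return net[host]

addr = map_to_ipv6("2001:db8:abcd:1::/64", "CAN", 0x1A2B)
print(addr)  # 2001:db8:abcd:1:300::1a2b
```

Because the mapping is a pure function of (prefix, technology, native address), the proxy can translate in both directions without keeping per-device state.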
Scalable Multi-Platform Distribution of Spatial 3d Contents
NASA Astrophysics Data System (ADS)
Klimke, J.; Hagedorn, B.; Döllner, J.
2013-09-01
Virtual 3D city models provide powerful user interfaces for communication of 2D and 3D geoinformation. Providing high-quality visualization of massive 3D geoinformation in a scalable, fast, and cost-efficient manner is still a challenging task. Especially for mobile and web-based system environments, software and hardware configurations of target systems differ significantly. This makes it hard to provide fast, visually appealing renderings of 3D data throughout a variety of platforms and devices. Current mobile or web-based solutions for 3D visualization usually require raw 3D scene data, such as triangle meshes together with textures, to be delivered from server to client, which makes them severely limited in terms of the size and complexity of the models they can handle. In this paper, we introduce a new approach for provisioning of massive virtual 3D city models on different platforms, namely web browsers, smartphones and tablets, by means of an interactive map assembled from artificial oblique image tiles. The key concept is to synthesize such images of a virtual 3D city model by a 3D rendering service in a preprocessing step. This service encapsulates model handling and 3D rendering techniques for high-quality visualization of massive 3D models. By generating image tiles using this service, the 3D rendering process is shifted from the client side, which provides major advantages: (a) the complexity of the 3D city model data is decoupled from data transfer complexity, (b) the implementation of client applications is simplified significantly as 3D rendering is encapsulated on the server side, and (c) 3D city models can be easily deployed for and used by a large number of concurrent users, leading to a high degree of scalability of the overall approach. All core 3D rendering techniques are performed on a dedicated 3D rendering server, and thin-client applications can be compactly implemented for various devices and platforms.
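Pre-rendered tiles are addressed like any web-map tile pyramid. The standard Web Mercator ("slippy map") tile indexing that such thin clients typically request by looks like this (the paper's oblique image tiles add a view direction on top of this scheme):

```python
import math

def tile_index(lon, lat, zoom):
    """Standard Web Mercator tile indices (x, y) for a given
    longitude/latitude and zoom level; at zoom z the world is a
    2**z by 2**z grid of tiles."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_r = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_r)) / math.pi) / 2.0 * n)
    return x, y

print(tile_index(13.4, 52.5, 10))  # Berlin at zoom 10 -> (550, 335)
```

A client only ever fetches the handful of tiles covering its viewport, which is exactly how the approach decouples model complexity from transfer complexity.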
EarthServer: Cross-Disciplinary Earth Science Through Data Cube Analytics
NASA Astrophysics Data System (ADS)
Baumann, P.; Rossi, A. P.
2016-12-01
The unprecedented increase of imagery, in-situ measurements, and simulation data produced by Earth (and Planetary) Science observation missions bears a rich, yet unleveraged, potential for gaining insight by integrating such diverse datasets and transforming scientific questions into actual queries to data, formulated in a standardized way. The intercontinental EarthServer [1] initiative is demonstrating new directions for flexible, scalable Earth Science services based on innovative NoSQL technology. Researchers from Europe, the US and Australia have teamed up to rigorously implement the concept of the datacube. Such a datacube may have spatial and temporal dimensions (such as a satellite image time series) and may unite an unlimited number of scenes. Independently from whatever efficient data structuring a server network may perform internally, users (scientists, planners, decision makers) will always see just a few datacubes they can slice and dice. EarthServer has established client [2] and server technology for such spatio-temporal datacubes. The underlying scalable array engine, rasdaman [3,4], enables direct interaction, including 3-D visualization, common EO data processing, and general analytics. Services exclusively rely on the open OGC "Big Geo Data" standards suite, the Web Coverage Service (WCS). Conversely, EarthServer has shaped and advanced WCS based on the experience gained. The first phase of EarthServer has advanced scalable array database technology into 150+ TB services. Currently, Petabyte datacubes are being built for ad-hoc and cross-disciplinary querying, e.g. using climate, Earth observation and ocean data. We will present the EarthServer approach, its impact on OGC / ISO / INSPIRE standardization, and its platform technology, rasdaman.
References: [1] Baumann et al. (2015) DOI: 10.1080/17538947.2014.1003106. [2] Hogan, P. (2011) NASA World Wind, Proc. 2nd International Conference on Computing for Geospatial Research & Applications, ACM. [3] Baumann, P. et al. (2014) Proc. 10th ICDM, 194-201. [4] Dumitru, A. et al. (2014) Proc. ACM SIGMOD Workshop on Data Analytics in the Cloud (DanaC'2014), 1-4.
Jara, Antonio J; Moreno-Sanchez, Pedro; Skarmeta, Antonio F; Varakliotis, Socrates; Kirstein, Peter
2013-05-17
Sensors utilize a large number of heterogeneous technologies for a varied set of application environments. The sheer number of devices involved requires that this Internet be the Future Internet, with a core network based on IPv6 and a higher scalability in order to be able to address all the devices, sensors and things located around us. This capability to connect through IPv6 devices, sensors and things is what is defining the so-called Internet of Things (IoT). IPv6 provides addressing space to reach this ubiquitous set of sensors, but legacy technologies, such as X10, European Installation Bus (EIB), Controller Area Network (CAN) and radio frequency ID (RFID) from the industrial, home automation and logistic application areas, do not support the IPv6 protocol. For that reason, a technique must be devised to map the sensor and identification technologies to IPv6, thus allowing homogeneous access via IPv6 features in the context of the IoT. This paper proposes a mapping between the native addressing of each technology and an IPv6 address following a set of rules that are discussed and proposed in this work. Specifically, the paper presents a technology-dependent IPv6 addressing proxy, which maps each device to the different subnetworks built under the IPv6 prefix addresses provided by the internet service provider for each home, building or user. The IPv6 addressing proxy offers a common addressing environment based on IPv6 for all the devices, regardless of the device technology. Thereby, this offers a scalable and homogeneous solution to interact with devices that do not support IPv6 addressing. The IPv6 addressing proxy has been implemented in a multi-protocol card, and its performance, scalability and interoperability have been evaluated successfully through a protocol built over IPv6.
Lin, Zhaoyang; Yin, Anxiang; Mao, Jun; Xia, Yi; Kempf, Nicholas; He, Qiyuan; Wang, Yiliu; Chen, Chih-Yen; Zhang, Yanliang; Ozolins, Vidvuds; Ren, Zhifeng; Huang, Yu; Duan, Xiangfeng
2016-10-01
Epitaxial heterostructures with precisely controlled composition and electronic modulation are of central importance for electronics, optoelectronics, thermoelectrics, and catalysis. In general, epitaxial material growth requires identical or nearly identical crystal structures with small misfit in lattice symmetry and parameters and is typically achieved by vapor-phase depositions in vacuum. We report a scalable solution-phase growth of symmetry-mismatched PbSe/Bi2Se3 epitaxial heterostructures by using two-dimensional (2D) Bi2Se3 nanoplates as soft templates. The dangling-bond-free surface of 2D Bi2Se3 nanoplates guides the growth of PbSe crystal without requiring a one-to-one match in the atomic structure, which exerts minimal restriction on the epitaxial layer. With a layered structure and weak van der Waals interlayer interaction, the interface layer in the 2D Bi2Se3 nanoplates can deform to accommodate the incoming layer, thus functioning as a soft template for symmetry-mismatched epitaxial growth of cubic PbSe crystal on rhombohedral Bi2Se3 nanoplates. We show that a solution chemistry approach can be readily used for the synthesis of gram-scale PbSe/Bi2Se3 epitaxial heterostructures, in which the square PbSe (001) layer forms on the trigonal/hexagonal (0001) plane of Bi2Se3 nanoplates. We further show that the resulting PbSe/Bi2Se3 heterostructures can be readily processed into a bulk pellet with considerably suppressed thermal conductivity (0.30 W/m·K at room temperature) while retaining respectable electrical conductivity, together delivering a thermoelectric figure of merit ZT three times higher than that of the pristine Bi2Se3 nanoplates at 575 K. Our study demonstrates a unique epitaxy mode enabled by the 2D nanocrystal soft template via an affordable and scalable solution chemistry approach. It opens up new opportunities for the creation of diverse epitaxial heterostructures with highly disparate structures and functions.
Lin, Zhaoyang; Yin, Anxiang; Mao, Jun; Xia, Yi; Kempf, Nicholas; He, Qiyuan; Wang, Yiliu; Chen, Chih-Yen; Zhang, Yanliang; Ozolins, Vidvuds; Ren, Zhifeng; Huang, Yu; Duan, Xiangfeng
2016-01-01
Epitaxial heterostructures with precisely controlled composition and electronic modulation are of central importance for electronics, optoelectronics, thermoelectrics, and catalysis. In general, epitaxial material growth requires identical or nearly identical crystal structures with small misfit in lattice symmetry and parameters and is typically achieved by vapor-phase depositions in vacuum. We report a scalable solution-phase growth of symmetry-mismatched PbSe/Bi2Se3 epitaxial heterostructures by using two-dimensional (2D) Bi2Se3 nanoplates as soft templates. The dangling-bond-free surface of 2D Bi2Se3 nanoplates guides the growth of PbSe crystal without requiring a one-to-one match in the atomic structure, which exerts minimal restriction on the epitaxial layer. With a layered structure and weak van der Waals interlayer interaction, the interface layer in the 2D Bi2Se3 nanoplates can deform to accommodate the incoming layer, thus functioning as a soft template for symmetry-mismatched epitaxial growth of cubic PbSe crystal on rhombohedral Bi2Se3 nanoplates. We show that a solution chemistry approach can be readily used for the synthesis of gram-scale PbSe/Bi2Se3 epitaxial heterostructures, in which the square PbSe (001) layer forms on the trigonal/hexagonal (0001) plane of Bi2Se3 nanoplates. We further show that the resulting PbSe/Bi2Se3 heterostructures can be readily processed into a bulk pellet with considerably suppressed thermal conductivity (0.30 W/m·K at room temperature) while retaining respectable electrical conductivity, together delivering a thermoelectric figure of merit ZT three times higher than that of the pristine Bi2Se3 nanoplates at 575 K. Our study demonstrates a unique epitaxy mode enabled by the 2D nanocrystal soft template via an affordable and scalable solution chemistry approach. It opens up new opportunities for the creation of diverse epitaxial heterostructures with highly disparate structures and functions. PMID:27730211
Modeling time-series data from microbial communities.
Ridenhour, Benjamin J; Brooker, Sarah L; Williams, Janet E; Van Leuven, James T; Miller, Aaron W; Dearing, M Denise; Remien, Christopher H
2017-11-01
As sequencing technologies have advanced, the amount of information regarding the composition of bacterial communities from various environments (for example, skin or soil) has grown exponentially. To date, most work has focused on cataloging taxa present in samples and determining whether the distribution of taxa shifts with exogenous covariates. However, important questions regarding how taxa interact with each other and their environment remain open, thus preventing in-depth ecological understanding of microbiomes. Time-series data from 16S rDNA amplicon sequencing are becoming more common within microbial ecology, but methods to infer ecological interactions from these longitudinal data are limited. We address this gap by presenting a method of analysis using Poisson regression fit with an elastic-net penalty that (1) takes advantage of the fact that the data are time series; (2) constrains estimates to allow for the possibility of many more interactions than data; and (3) is scalable enough to handle data consisting of thousands of taxa. We test the method on gut microbiome data from white-throated woodrats (Neotoma albigula) that were fed varying amounts of the plant secondary compound oxalate over a period of 22 days to estimate interactions between OTUs and their environment.
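The estimator described, where each taxon's counts at time t+1 are regressed on all taxa at time t under a Poisson likelihood with an elastic-net penalty, can be sketched with proximal gradient descent (synthetic data and hand-rolled optimizer for illustration; the authors' actual fitting procedure and tuning differ):

```python
import numpy as np

def fit_poisson_enet(X_prev, y_next, lam=0.1, alpha=0.5, lr=1e-3, iters=5000):
    """Poisson regression of one taxon's next-step counts on all taxa at
    the previous step, with elastic-net penalty
    lam * (alpha * ||b||_1 + (1 - alpha)/2 * ||b||_2^2),
    fit by proximal gradient descent (soft-thresholding for the L1 part)."""
    Xs = (X_prev - X_prev.mean(0)) / X_prev.std(0)  # standardize predictors
    n, p = Xs.shape
    b, b0 = np.zeros(p), 0.0
    for _ in range(iters):
        mu = np.exp(np.clip(b0 + Xs @ b, -20, 20))           # Poisson mean
        grad = Xs.T @ (mu - y_next) / n + lam * (1 - alpha) * b
        b = b - lr * grad
        b = np.sign(b) * np.maximum(np.abs(b) - lr * lam * alpha, 0.0)
        b0 -= lr * np.mean(mu - y_next)
    return b0, b

# Synthetic counts: 5 taxa over 30 time points
rng = np.random.default_rng(0)
X = rng.poisson(10, size=(30, 5)).astype(float)
b0, b = fit_poisson_enet(X[:-1], X[1:, 0])  # interactions driving taxon 0
print(np.round(b, 3))
```

The L1 part of the penalty zeroes out weak coefficients, which is what makes the approach tractable when candidate interactions vastly outnumber observations.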
An Efficient Interactive Model for On-Demand Sensing-As-A-Service of Sensor-Cloud
Dinh, Thanh; Kim, Younghan
2016-01-01
This paper proposes an efficient interactive model for the sensor-cloud to enable the sensor-cloud to efficiently provide on-demand sensing services for multiple applications with different requirements at the same time. The interactive model is designed for both the cloud and sensor nodes to optimize the resource consumption of physical sensors, as well as the bandwidth consumption of sensing traffic. In the model, the sensor-cloud plays a key role in aggregating application requests to minimize the workloads required for constrained physical nodes while guaranteeing that the requirements of all applications are satisfied. Physical sensor nodes perform their sensing under the guidance of the sensor-cloud. Based on the interactions with the sensor-cloud, physical sensor nodes adapt their scheduling accordingly to minimize their energy consumption. Comprehensive experimental results show that our proposed system achieves a significant improvement in terms of the energy consumption of physical sensors, the bandwidth consumption from the sink node to the sensor-cloud, the packet delivery latency, reliability and scalability, compared to current approaches. Based on the obtained results, we discuss the economic benefits and how the proposed system enables a win-win model in the sensor-cloud. PMID:27367689
Parallel mapping of optical near-field interactions by molecular motor-driven quantum dots.
Groß, Heiko; Heil, Hannah S; Ehrig, Jens; Schwarz, Friedrich W; Hecht, Bert; Diez, Stefan
2018-04-30
In the vicinity of metallic nanostructures, absorption and emission rates of optical emitters can be modulated by several orders of magnitude [1,2]. Control of such near-field light-matter interaction is essential for applications in biosensing [3], light harvesting [4] and quantum communication [5,6], and requires precise mapping of optical near-field interactions, for which single-emitter probes are promising candidates [7-11]. However, currently available techniques are limited in terms of throughput, resolution and/or non-invasiveness. Here, we present an approach for the parallel mapping of optical near-field interactions with a resolution of <5 nm using surface-bound motor proteins to transport microtubules carrying single emitters (quantum dots). The deterministic motion of the quantum dots allows for the interpolation of their tracked positions, resulting in an increased spatial resolution and a suppression of localization artefacts. We apply this method to map the near-field distribution of nanoslits engraved into gold layers and find an excellent agreement with finite-difference time-domain simulations. Our technique can be readily applied to a variety of surfaces for scalable, nanometre-resolved and artefact-free near-field mapping using conventional wide-field microscopes.
An Efficient Interactive Model for On-Demand Sensing-As-A-Service of Sensor-Cloud.
Dinh, Thanh; Kim, Younghan
2016-06-28
This paper proposes an efficient interactive model for the sensor-cloud to enable the sensor-cloud to efficiently provide on-demand sensing services for multiple applications with different requirements at the same time. The interactive model is designed for both the cloud and sensor nodes to optimize the resource consumption of physical sensors, as well as the bandwidth consumption of sensing traffic. In the model, the sensor-cloud plays a key role in aggregating application requests to minimize the workloads required for constrained physical nodes while guaranteeing that the requirements of all applications are satisfied. Physical sensor nodes perform their sensing under the guidance of the sensor-cloud. Based on the interactions with the sensor-cloud, physical sensor nodes adapt their scheduling accordingly to minimize their energy consumption. Comprehensive experimental results show that our proposed system achieves a significant improvement in terms of the energy consumption of physical sensors, the bandwidth consumption from the sink node to the sensor-cloud, the packet delivery latency, reliability and scalability, compared to current approaches. Based on the obtained results, we discuss the economic benefits and how the proposed system enables a win-win model in the sensor-cloud.
Classification of group behaviors in social media via social behavior grammars
NASA Astrophysics Data System (ADS)
Levchuk, Georgiy; Getoor, Lise; Smith, Marc
2014-06-01
The increasing use of online collaboration and information sharing in the last decade has resulted in an explosion of criminal and anti-social activities in online communities. Detection of such behaviors is of interest to commercial enterprises that want to guard themselves against cyber criminals, and to military intelligence analysts who wish to detect and counteract cyberwars waged by adversarial states and organizations. The most challenging behaviors to detect are those involving multiple individuals who share actions and roles in the hostile activities and individually appear benign. To detect these behaviors, theories of group behaviors and interactions must be developed. In this paper we describe our exploration of data from a collaborative social platform to categorize the behaviors of multiple individuals. We applied graph matching algorithms to explore consistent social interactions. Our research led us to the conclusion that complex collaborative behaviors can be modeled and detected using a concept of group behavior grammars, in a manner analogous to natural language processing. These grammars capture constraints on how people take on roles in virtual environments, form groups, and interact over time, providing the building blocks for scalable and accurate multi-entity interaction analysis and social behavior hypothesis testing.
Solar wind interaction with Venus and Mars in a parallel hybrid code
NASA Astrophysics Data System (ADS)
Jarvinen, Riku; Sandroos, Arto
2013-04-01
We discuss the development and applications of a new parallel hybrid simulation, in which ions are treated as particles and electrons as a charge-neutralizing fluid, for the interaction between the solar wind and Venus and Mars. The new simulation code under construction is based on the algorithm of the sequential global planetary hybrid model developed at the Finnish Meteorological Institute (FMI) and on the Corsair parallel simulation platform, also developed at the FMI. The FMI's sequential hybrid model has been used for studies of the plasma interactions of several unmagnetized and weakly magnetized celestial bodies for more than a decade. In particular, the model has been used to interpret in situ particle and magnetic field observations from the plasma environments of Mars, Venus and Titan. Further, Corsair is an open-source MPI (Message Passing Interface) particle and mesh simulation platform, mainly aimed at simulations of diffusive shock acceleration in the solar corona and interplanetary space, but now also being extended to global planetary hybrid simulations. In this presentation we discuss the challenges and strategies of parallelizing a legacy simulation code, as well as possible applications and prospects of a scalable parallel hybrid model for the solar wind interactions of Venus and Mars.
A Vision for Co-optimized T&D System Interaction with Renewables and Demand Response
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Lindsay; Zéphyr, Luckny; Cardell, Judith B.
The evolution of the power system to the reliable, efficient and sustainable system of the future will involve development of both demand- and supply-side technology and operations. The use of demand response to counterbalance the intermittency of renewable generation brings the consumer into the spotlight. Though individual consumers are interconnected at the low-voltage distribution system, these resources are typically modeled as variables at the transmission network level. In this paper, a vision for co-optimized interaction of distribution systems, or microgrids, with the high-voltage transmission system is described. In this framework, microgrids encompass consumers, distributed renewables and storage. The energy management system of the microgrid can also sell (buy) excess (necessary) energy to (from) the transmission system. Preliminary work explores price mechanisms to manage the microgrid and its interactions with the transmission system. Wholesale market operations are addressed through the development of scalable stochastic optimization methods that provide the ability to co-optimize interactions between the transmission and distribution systems. Modeling challenges of the co-optimization are addressed via solution methods for large-scale stochastic optimization, including decomposition and stochastic dual dynamic programming.
A Vision for Co-optimized T&D System Interaction with Renewables and Demand Response
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, C. Lindsay; Zéphyr, Luckny; Liu, Jialin
The evolution of the power system to the reliable, efficient and sustainable system of the future will involve development of both demand- and supply-side technology and operations. The use of demand response to counterbalance the intermittency of renewable generation brings the consumer into the spotlight. Though individual consumers are interconnected at the low-voltage distribution system, these resources are typically modeled as variables at the transmission network level. In this paper, a vision for co-optimized interaction of distribution systems, or microgrids, with the high-voltage transmission system is described. In this framework, microgrids encompass consumers, distributed renewables and storage. The energy management system of the microgrid can also sell (buy) excess (necessary) energy from the transmission system. Preliminary work explores price mechanisms to manage the microgrid and its interactions with the transmission system. Wholesale market operations are addressed through the development of scalable stochastic optimization methods that provide the ability to co-optimize interactions between the transmission and distribution systems. Modeling challenges of the co-optimization are addressed via solution methods for large-scale stochastic optimization, including decomposition and stochastic dual dynamic programming.
A Fermi-degenerate three-dimensional optical lattice clock
NASA Astrophysics Data System (ADS)
Goban, Akihisa; Campbell, Sara; Hutson, Ross; Marti, G. Edward; Sonderhouse, Lindsay; Robinson, John; Zhang, Wei; Ye, Jun
2017-04-01
The pursuit of better atomic clocks has advanced many research areas, providing better quantum state control, tighter limits on fundamental constant variation, and improved tests of relativity. Recent progress in optical lattice clocks, reaching an accuracy of 2E-18, has benefited from the understanding of atomic interactions. The precision of clock spectroscopy has also been applied to explore many-body interactions, including SU(N) symmetry. In our previous 1D optical lattice, atomic interactions caused suppression and broadening of the atomic resonance, limiting the clock stability. To overcome this limitation, we demonstrate a scalable solution that takes advantage of the high density of a degenerate Fermi gas in a three-dimensional optical lattice to protect against on-site interaction shifts. Using an ultrastable laser, we achieve an unprecedented level of atom-light coherence, reaching a spectroscopic quality factor of 5.2E15. We investigate clock systematics unique to this design; on-site interactions are resolved, so their contribution to clock shifts is suppressed by orders of magnitude compared to the 1D optical lattice experiments. We also measure the combined scalar and tensor magic wavelengths for state-independent trapping along all three lattice axes. We acknowledge support from NIST, DARPA and the NSF JILA Physics Frontier Center.
Scalable Domain Decomposed Monte Carlo Particle Transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Brien, Matthew Joseph
2013-12-05
In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malony, Allen D; Shende, Sameer
This is the final progress report for the FastOS (Phase 2) (FastOS-2) project with Argonne National Laboratory and the University of Oregon (UO). The project started at UO on July 1, 2008 and ran until April 30, 2010, at which time a six-month no-cost extension began. The FastOS-2 work at UO delivered excellent results in all research work areas: * scalable parallel monitoring * kernel-level performance measurement * parallel I/O system measurement * large-scale and hybrid application performance measurement * online scalable performance data reduction and analysis * binary instrumentation
Scalable cloud without dedicated storage
NASA Astrophysics Data System (ADS)
Batkovich, D. V.; Kompaniets, M. V.; Zarochentsev, A. K.
2015-05-01
We present a prototype of a scalable computing cloud. It is intended to be deployed on the basis of a cluster without separate dedicated storage. The dedicated storage is replaced by distributed software storage. In addition, all cluster nodes are used both as computing nodes and as storage nodes. This solution increases utilization of the cluster resources as well as improving fault tolerance and performance of the distributed storage. Another advantage of this solution is high scalability with a relatively low initial and maintenance cost. The solution is built from open-source components such as OpenStack and CEPH.
Scalable Robust Principal Component Analysis Using Grassmann Averages.
Hauberg, Søren; Feragen, Aasa; Enficiaud, Raffi; Black, Michael J
2016-11-01
In large datasets, manual data verification is impossible, and we must expect the number of outliers to increase with data size. While principal component analysis (PCA) can reduce data size, and scalable solutions exist, it is well known that outliers can arbitrarily corrupt the results. Unfortunately, state-of-the-art approaches for robust PCA are not scalable. We note that in a zero-mean dataset, each observation spans a one-dimensional subspace, giving a point on the Grassmann manifold. We show that the average subspace corresponds to the leading principal component for Gaussian data. We provide a simple algorithm for computing this Grassmann Average (GA), and show that the subspace estimate is less sensitive to outliers than PCA for general distributions. Because averages can be efficiently computed, we immediately gain scalability. We exploit robust averaging to formulate the Robust Grassmann Average (RGA) as a form of robust PCA. The resulting Trimmed Grassmann Average (TGA) is appropriate for computer vision because it is robust to pixel outliers. The algorithm has linear computational complexity and minimal memory requirements. We demonstrate TGA for background modeling, video restoration, and shadow removal. We show scalability by performing robust PCA on the entire Star Wars IV movie, a task beyond any current method. Source code is available online.
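The basic Grassmann Average iteration described in this abstract admits a very small sketch. The NumPy code below is an illustrative re-implementation of the plain GA update only (not the authors' released code, and without the per-coordinate trimming that defines TGA); the data, seed, and tolerance are arbitrary choices for the demonstration.

```python
import numpy as np

def grassmann_average(X, n_iter=100, tol=1e-10, seed=0):
    """One leading component via the (untrimmed) Grassmann Average.

    X : (n, d) zero-mean data; each row spans a 1-D subspace.
    The subspace average is found by repeatedly sign-aligning each
    observation with the current estimate and re-averaging.
    """
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(X.shape[1])
    q /= np.linalg.norm(q)
    for _ in range(n_iter):
        signs = np.sign(X @ q)
        signs[signs == 0] = 1.0            # break ties consistently
        q_new = signs @ X                  # sign-aligned average direction
        q_new /= np.linalg.norm(q_new)
        if 1.0 - abs(q_new @ q) < tol:     # converged up to sign
            q = q_new
            break
        q = q_new
    return q

# For Gaussian data the GA direction matches the leading PCA axis.
X = np.random.default_rng(1).standard_normal((2000, 5)) @ np.diag([3, 1, 1, 1, 1])
X -= X.mean(axis=0)
q = grassmann_average(X)
u = np.linalg.svd(X, full_matrices=False)[2][0]   # leading right singular vector
print(abs(float(q @ u)))   # alignment with the PCA direction, near 1 here
```

Robustness comes from replacing the sum in `signs @ X` with a trimmed mean per coordinate, which is what makes TGA resistant to pixel outliers.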
Novel Scalable 3-D MT Inverse Solver
NASA Astrophysics Data System (ADS)
Kuvshinov, A. V.; Kruglyakov, M.; Geraskin, A.
2016-12-01
We present a new, robust and fast, three-dimensional (3-D) magnetotelluric (MT) inverse solver. As a forward modelling engine, the highly-scalable solver extrEMe [1] is used. The (regularized) inversion is based on an iterative gradient-type optimization (quasi-Newton method) and exploits an adjoint-sources approach for fast calculation of the gradient of the misfit. The inverse solver is able to deal with highly detailed and contrasting models, allows for working (separately or jointly) with any type of MT (single-site and/or inter-site) responses, and supports massive parallelization. Different parallelization strategies implemented in the code allow for optimal usage of available computational resources for a given problem setup. To parameterize an inverse domain, a mask approach is implemented, which means that one can merge any subset of forward modelling cells in order to account for the (usually) irregular distribution of observation sites. We report results of 3-D numerical experiments aimed at analysing the robustness, performance and scalability of the code. In particular, our computational experiments carried out on platforms ranging from modern laptops to high-performance clusters demonstrate practically linear scalability of the code up to thousands of nodes. 1. Kruglyakov, M., A. Geraskin, A. Kuvshinov, 2016. Novel accurate and scalable 3-D MT forward solver based on a contracting integral equation method, Computers and Geosciences, in press.
NASA Astrophysics Data System (ADS)
Plaza, Antonio; Plaza, Javier; Paz, Abel
2010-10-01
Latest generation remote sensing instruments (called hyperspectral imagers) are now able to generate hundreds of images, corresponding to different wavelength channels, for the same area on the surface of the Earth. In previous work, we have reported that the scalability of parallel processing algorithms dealing with these high-dimensional data volumes is affected by the amount of data to be exchanged through the communication network of the system. However, large messages are common in hyperspectral imaging applications since processing algorithms are pixel-based, and each pixel vector to be exchanged through the communication network is made up of hundreds of spectral values. Thus, decreasing the amount of data to be exchanged could improve the scalability and parallel performance. In this paper, we propose a new framework based on intelligent utilization of wavelet-based data compression techniques for improving the scalability of a standard hyperspectral image processing chain on heterogeneous networks of workstations. This type of parallel platform is quickly becoming a standard in hyperspectral image processing due to the distributed nature of collected hyperspectral data as well as its flexibility and low cost. Our experimental results indicate that adaptive lossy compression can lead to improvements in the scalability of the hyperspectral processing chain without sacrificing analysis accuracy, even at sub-pixel precision levels.
: A Scalable and Transparent System for Simulating MPI Programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perumalla, Kalyan S
2010-01-01
is a scalable, transparent system for experimenting with the execution of parallel programs on simulated computing platforms. The level of simulated detail can be varied for application behavior as well as for machine characteristics. Unique features are repeatability of execution, scalability to millions of simulated (virtual) MPI ranks, scalability to hundreds of thousands of host (real) MPI ranks, portability of the system to a variety of host supercomputing platforms, and the ability to experiment with scientific applications whose source code is available. The set of supported source-code interfaces is being expanded to support a wider set of applications, and MPI-based scientific computing benchmarks are being ported. In proof-of-concept experiments, the system has been successfully exercised to spawn and sustain very large-scale executions of an MPI test program given in source-code form. Low slowdowns are observed, due to its use of a purely discrete event style of execution, and due to the scalability and efficiency of the underlying parallel discrete event simulation engine, sik. In the largest runs, the system has been executed on up to 216,000 cores of a Cray XT5 supercomputer, successfully simulating over 27 million virtual MPI ranks, each virtual rank containing its own thread context, and all ranks fully synchronized by virtual time.
Transportation Network Topologies
NASA Technical Reports Server (NTRS)
Holmes, Bruce J.; Scott, John M.
2004-01-01
A discomforting reality has materialized on the transportation scene: our existing air and ground infrastructures will not scale to meet our nation's 21st century demands and expectations for mobility, commerce, safety, and security. The consequence of inaction is diminished quality of life and economic opportunity in the 21st century. Clearly, new thinking is required for transportation that can scale to meet the realities of a networked, knowledge-based economy in which the value of time is a new coin of the realm. This paper proposes a framework, or topology, for thinking about the problem of scalability of the system of networks that comprise the aviation system. This framework highlights the role of integrated communication-navigation-surveillance systems in enabling scalability of future air transportation networks. Scalability, in this vein, is a goal of the recently formed Joint Planning and Development Office for the Next Generation Air Transportation System. New foundations for 21st-century thinking about air transportation are underpinned by several technological developments in the traditional aircraft disciplines as well as in communication, navigation, surveillance and information systems. Complexity science and modern network theory give rise to one of the technological developments of importance. Scale-free (i.e., scalable) networks represent a promising concept space for modeling airspace system architectures, and for assessing network performance in terms of scalability, efficiency, robustness, resilience, and other metrics. The paper offers an air transportation system topology as a framework for transportation system innovation. Successful outcomes of innovation in air transportation could lay the foundations for new paradigms for aircraft and their operating capabilities, air transportation system architectures, and airspace architectures and procedural concepts.
The topology proposed considers air transportation as a system of networks, within which strategies for scalability of the topology may be enabled by technologies and policies. In particular, the effects of scalable ICNS concepts are evaluated within this proposed topology. Alternative business models are appearing on the scene as the old centralized hub-and-spoke model reaches the limits of its scalability. These models include growth of point-to-point scheduled air transportation service (e.g., the RJ phenomenon and the 'Southwest Effect'). Another is a new business model for on-demand, widely distributed, air mobility in jet taxi services. The new businesses forming around this vision are targeting personal air mobility to virtually any of the thousands of origins and destinations throughout suburban, rural, and remote communities and regions. Such advancement in air mobility has many implications for requirements for airports, airspace, and consumers. These new paradigms could support scalable alternatives for the expansion of future air mobility to more consumers in more places.
Park, Christopher Y.; Krishnan, Arjun; Zhu, Qian; Wong, Aaron K.; Lee, Young-Suk; Troyanskaya, Olga G.
2015-01-01
Motivation: Leveraging the large compendium of genomic data to predict biomedical pathways and specific mechanisms of protein interactions genome-wide in metazoan organisms has been challenging. In contrast to unicellular organisms, biological and technical variation originating from diverse tissues and cell-lineages is often the largest source of variation in metazoan data compendia. Therefore, a new computational strategy accounting for the tissue heterogeneity in the functional genomic data is needed to accurately translate the vast amount of human genomic data into specific interaction-level hypotheses. Results: We developed an integrated, scalable strategy for inferring multiple human gene interaction types that takes advantage of data from diverse tissue and cell-lineage origins. Our approach specifically predicts both the presence of a functional association and also the most likely interaction type among human genes or their protein products on a whole-genome scale. We demonstrate that directly incorporating tissue contextual information improves the accuracy of our predictions, and further, that such genome-wide results can be used to significantly refine regulatory interactions from primary experimental datasets (e.g. ChIP-Seq, mass spectrometry). Availability and implementation: An interactive website hosting all of our interaction predictions is publicly available at http://pathwaynet.princeton.edu. Software was implemented using the open-source Sleipnir library, which is available for download at https://bitbucket.org/libsleipnir/libsleipnir.bitbucket.org. Contact: ogt@cs.princeton.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25431329
Intelligent indexing: a semi-automated, trainable system for field labeling
NASA Astrophysics Data System (ADS)
Clawson, Robert; Barrett, William
2015-01-01
We present Intelligent Indexing: a general, scalable, collaborative approach to indexing and transcription of non-machine-readable documents that exploits visual consensus and group labeling while harnessing human recognition and domain expertise. In our system, indexers work directly on the page, and with minimal context switching can navigate the page, enter labels, and interact with the recognition engine. Interaction with the recognition engine occurs through preview windows that allow the indexer to quickly verify and correct recommendations. This interaction is far superior to conventional, tedious, inefficient post-correction and editing. Intelligent Indexing is a trainable system that improves over time and can provide benefit even without prior knowledge. A user study was performed to compare Intelligent Indexing to a basic, manual indexing system. Volunteers report that using Intelligent Indexing is less mentally fatiguing and more enjoyable than the manual indexing system. Their results also show that it reduces significantly (30.2%) the time required to index census records, while maintaining comparable accuracy. (A video demonstration is available at http://youtube.com/gqdVzEPnBEw.)
NASA Technical Reports Server (NTRS)
Hutto, Clayton; Briscoe, Erica; Trewhitt, Ethan
2012-01-01
Societal-level macro models of social behavior do not sufficiently capture the nuances needed to adequately represent the dynamics of person-to-person interactions. Likewise, individual agent-level micro models have limited scalability - even minute parameter changes can drastically affect a model's response characteristics. This work presents an approach that uses agent-based modeling to represent detailed intra- and inter-personal interactions, as well as a system dynamics model to integrate societal-level influences via reciprocating functions. A Cognitive Network Model (CNM) is proposed as a method of quantitatively characterizing cognitive mechanisms at the intra-individual level. To capture the rich dynamics of interpersonal communication for the propagation of beliefs and attitudes, a Socio-Cognitive Network Model (SCNM) is presented. The SCNM uses socio-cognitive tie strength to regulate how agents influence--and are influenced by--one another's beliefs during social interactions. We then present experimental results which support the use of this network analytical approach, and we discuss its applicability towards characterizing and understanding human information processing.
Passing Messages between Biological Networks to Refine Predicted Interactions
Glass, Kimberly; Huttenhower, Curtis; Quackenbush, John; Yuan, Guo-Cheng
2013-01-01
Regulatory network reconstruction is a fundamental problem in computational biology. There are significant limitations to such reconstruction using individual datasets, and increasingly people attempt to construct networks using multiple, independent datasets obtained from complementary sources, but methods for this integration are lacking. We developed PANDA (Passing Attributes between Networks for Data Assimilation), a message-passing model using multiple sources of information to predict regulatory relationships, and used it to integrate protein-protein interaction, gene expression, and sequence motif data to reconstruct genome-wide, condition-specific regulatory networks in yeast as a model. The resulting networks were not only more accurate than those produced using individual data sets and other existing methods, but they also captured information regarding specific biological mechanisms and pathways that were missed using other methodologies. PANDA is scalable to higher eukaryotes, applicable to specific tissue or cell type data and conceptually generalizable to include a variety of regulatory, interaction, expression, and other genome-scale data. An implementation of the PANDA algorithm is available at www.sourceforge.net/projects/panda-net. PMID:23741402
Zonal methods for the parallel execution of range-limited N-body simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowers, Kevin J.; Dror, Ron O.; Shaw, David E.
2007-01-20
Particle simulations in fields ranging from biochemistry to astrophysics require the evaluation of interactions between all pairs of particles separated by less than some fixed interaction radius. The applicability of such simulations is often limited by the time required for calculation, but the use of massive parallelism to accelerate these computations is typically limited by inter-processor communication requirements. Recently, Snir [M. Snir, A note on N-body computations with cutoffs, Theor. Comput. Syst. 37 (2004) 295-318] and Shaw [D.E. Shaw, A fast, scalable method for the parallel evaluation of distance-limited pairwise particle interactions, J. Comput. Chem. 26 (2005) 1318-1328] independently introduced two distinct methods that offer asymptotic reductions in the amount of data transferred between processors. In the present paper, we show that these schemes represent special cases of a more general class of methods, and introduce several new algorithms in this class that offer practical advantages over all previously described methods for a wide range of problem parameters. We also show that several of these algorithms approach an approximate lower bound on inter-processor data transfer.
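The data-transfer reductions discussed in this abstract all stem from the locality of range-limited interactions: a particle only interacts with others within the cutoff radius. As a minimal illustration of that locality (a serial cell-list pair search, not any of the parallel zonal methods the paper analyzes; box size, cutoff, and particle counts are invented), consider:

```python
import numpy as np
from itertools import product

def count_pairs_within(positions, box, rc):
    """Count particle pairs closer than cutoff rc in a periodic box.

    Binning into cells of side >= rc means each particle only needs
    its own and neighboring cells -- the same locality that zonal
    methods exploit to bound inter-processor communication.
    """
    ncell = max(1, int(box // rc))
    side = box / ncell
    cells = {}
    for i, p in enumerate(positions):
        key = tuple((p // side).astype(int) % ncell)
        cells.setdefault(key, []).append(i)
    count = 0
    for key, members in cells.items():
        # deduplicate wrapped neighbor cells so small ncell stays correct
        nkeys = {tuple((k + dk) % ncell for k, dk in zip(key, d))
                 for d in product((-1, 0, 1), repeat=3)}
        for nkey in nkeys:
            for i in members:
                for j in cells.get(nkey, ()):
                    if i < j:                       # count each pair once
                        r = positions[i] - positions[j]
                        r -= box * np.round(r / box)  # minimum image
                        if r @ r < rc * rc:
                            count += 1
    return count

rng = np.random.default_rng(0)
pts = rng.random((200, 3)) * 10.0
print(count_pairs_within(pts, box=10.0, rc=1.5))
```

In a parallel setting the question the paper studies is which of these cells (the "import zone") each processor must receive from its neighbors; the serial search above is only the underlying interaction pattern.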
Global Static Indexing for Real-Time Exploration of Very Large Regular Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pascucci, V; Frank, R
2001-07-23
In this paper we introduce a new indexing scheme for progressive traversal and visualization of large regular grids. We demonstrate the potential of our approach by providing a tool that displays at interactive rates planar slices of scalar field data with very modest computing resources. We obtain unprecedented results both in terms of absolute performance and, more importantly, in terms of scalability. On a laptop computer we provide real-time interaction with a 2048^3 grid (8 giga-nodes) using only 20 MB of memory. On an SGI Onyx we slice interactively an 8192^3 grid (1/2 tera-nodes) using only 60 MB of memory. The scheme relies simply on the determination of an appropriate reordering of the rectilinear grid data and a progressive construction of the output slice. The reordering minimizes the amount of I/O performed during the out-of-core computation. The progressive and asynchronous computation of the output provides flexible quality/speed tradeoffs and a time-critical and interruptible user interface.
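The reordering the abstract mentions is, to our understanding, a hierarchical variant of Z-order (Morton-order) traversal. The sketch below shows only plain, non-hierarchical Z-order bit interleaving, as an illustrative hint of how such a reordering keeps spatially close grid samples close together on disk:

```python
def morton3d(x, y, z, bits=10):
    """Interleave the bits of integer coordinates (x, y, z) into one
    Morton (Z-order) key. Sorting grid samples by this key groups
    spatially nearby samples into nearby file offsets, which is the
    locality an out-of-core slicer needs to minimize I/O.
    """
    code = 0
    for b in range(bits):
        code |= ((x >> b) & 1) << (3 * b)       # x bit -> position 3b
        code |= ((y >> b) & 1) << (3 * b + 1)   # y bit -> position 3b+1
        code |= ((z >> b) & 1) << (3 * b + 2)   # z bit -> position 3b+2
    return code

# The unit cube's eight corners get the eight smallest keys 0..7:
print([morton3d(x, y, z) for z in (0, 1) for y in (0, 1) for x in (0, 1)])
```

A progressive traversal then amounts to visiting keys in an order that refines resolution level by level, rather than the flat sort shown here.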
Improved Swimming Performance in Hydrodynamically-Coupled Airfoils
NASA Astrophysics Data System (ADS)
Heydari, Sina; Shelley, Michael J.; Kanso, Eva
2017-11-01
Collective motion is a widespread phenomenon in the animal kingdom, from fish schools to bird flocks. Half of the known fish species are thought to exhibit schooling behavior during some phase of their life cycle. Schooling likely occurs to serve multiple purposes, including foraging for resources and protection from predators. Growing experimental and theoretical evidence supports the hypothesis that fish can benefit from the hydrodynamic interactions with their neighbors, but it is unclear whether this requires particular configurations or regulations. Here, we propose a physics-based approach that accounts for hydrodynamic interactions among swimmers based on the vortex sheet model. The benefit of this model is that it is scalable to a large number of swimmers. We start by examining the case of two swimmers, heaving plates, moving in parallel and in tandem. We find that for the same heaving amplitude and frequency, the coupled swimmers move faster and more efficiently. This increase in velocity depends strongly on the configuration and separation distance between the swimmers. Our results are consistent with recent experimental findings on heaving airfoils and underline the role of fluid dynamic interactions in the collective behavior of swimmers.
Metabolic interactions and dynamics in microbial communities
NASA Astrophysics Data System (ADS)
Segrè, Daniel
Metabolism, in addition to being the engine of every living cell, plays a major role in the cell-cell and cell-environment relations that shape the dynamics and evolution of microbial communities, e.g. by mediating competition and cross-feeding interactions between different species. Despite the increasing availability of metagenomic sequencing data for numerous microbial ecosystems, fundamental aspects of these communities, such as the unculturability of many isolates, and the conditions necessary for taxonomic or functional stability, are still poorly understood. We are developing mechanistic computational approaches for studying the interactions between different organisms based on the knowledge of their entire metabolic networks. In particular, we have recently built an open source platform for the Computation of Microbial Ecosystems in Time and Space (COMETS), which combines metabolic models with convection-diffusion equations to simulate the spatio-temporal dynamics of metabolism in microbial communities. COMETS has been experimentally tested on small artificial communities, and is scalable to hundreds of species in complex environments. I will discuss recent developments and challenges towards the implementation of models for microbiomes and synthetic microbial communities.
Kastner, Monika; Sayal, Radha; Oliver, Doug; Straus, Sharon E; Dolovich, Lisa
2017-08-01
Chronic diseases are a significant public health concern, particularly in older adults. To address the delivery of health care services to optimally meet the needs of older adults with multiple chronic diseases, Health TAPESTRY (Teams Advancing Patient Experience: Strengthening Quality) uses a novel approach that involves patient home visits by trained volunteers to collect and transmit relevant health information using e-health technology to inform appropriate care from an inter-professional healthcare team. Health TAPESTRY was implemented, pilot tested, and evaluated in a randomized controlled trial (analysis underway). Knowledge translation (KT) interventions such as Health TAPESTRY should involve an investigation of their sustainability and scalability determinants to inform further implementation. However, this is seldom considered in research, or considered early enough, so the objectives of this study were to assess the sustainability and scalability potential of Health TAPESTRY from the perspective of the team who developed and pilot-tested it. Our objectives were addressed using a sequential mixed-methods approach involving the administration of a validated sustainability survey developed by the National Health Service (NHS) to all members of the Health TAPESTRY team who were actively involved in the development, implementation and pilot evaluation of the intervention (Phase 1: n = 38). Mean sustainability scores were calculated to identify the best potential for improvement across sustainability factors. Phase 2 was a qualitative study of interviews with purposively selected Health TAPESTRY team members to gain a more in-depth understanding of the factors that influence the sustainability and scalability of Health TAPESTRY. Two independent reviewers coded transcribed interviews and completed a multi-step thematic analysis. Outcomes were participant perceptions of the determinants influencing the sustainability and scalability of Health TAPESTRY.
Twenty Health TAPESTRY team members (53% response rate) completed the NHS sustainability survey. The overall mean sustainability score was 64.6 (range 22.8-96.8). Important opportunities for improving sustainability were better staff involvement and training, clinical leadership engagement, and infrastructure for sustainability. Interviews with 25 participants (response rate 60%) showed that factors influencing the sustainability and scalability of Health TAPESTRY emerged across two dimensions: I) Health TAPESTRY operations (development and implementation activities undertaken by the central team); and II) the Health TAPESTRY intervention (factors specific to the intervention and its elements). Resource capacity appears to be an important factor to consider for Health TAPESTRY operations as it was identified across both sustainability and scalability factors; and perceived lack of interprofessional team and volunteer resource capacity and the need for stakeholder buy-in are important considerations for the Health TAPESTRY intervention. We used these findings to create actionable recommendations to initiate dialogue among Health TAPESTRY team members to improve the intervention. Our study identified sustainability and scalability determinants of the Health TAPESTRY intervention that can be used to optimize its potential for impact. Next steps will involve using findings to inform a guide to facilitate sustainability and scalability of Health TAPESTRY in other jurisdictions considering its adoption. Our findings build on the limited current knowledge of sustainability, and advances KT science related to the sustainability and scalability of KT interventions.
Towards Rapid Re-Certification Using Formal Analysis
2015-07-22
profiles will help ensure that information assurance requirements are commensurate with risk and scalable based on an application’s changing external... agility in certification processes. Software re-certification processes require significant expenditure in order to provide evidence of information
Mavrommatis, Kostas
2017-12-22
DOE JGI's Kostas Mavrommatis, chair of the Scalability of Comparative Analysis, Novel Algorithms and Tools panel, at the Metagenomics Informatics Challenges Workshop held at the DOE JGI on October 12-13, 2011.
2004-10-01
MONITORING AGENCY NAME(S) AND ADDRESS(ES): Defense Advanced Research Projects Agency; AFRL/IFTC, 3701 North Fairfax Drive... "Scalable Parallel Libraries for Large-Scale Concurrent Applications," Technical Report UCRL-JC-109251, Lawrence Livermore National Laboratory
NASA Technical Reports Server (NTRS)
Stoica, A.; Keymeulen, D.; Zebulum, R. S.; Ferguson, M. I.
2003-01-01
This paper describes scalability issues of evolutionary-driven automatic synthesis of electronic circuits. The article begins by reviewing the concepts of circuit evolution and discussing the limitations of this technique when trying to achieve more complex systems.
SciSpark: Highly Interactive and Scalable Model Evaluation and Climate Metrics
NASA Astrophysics Data System (ADS)
Wilson, B. D.; Mattmann, C. A.; Waliser, D. E.; Kim, J.; Loikith, P.; Lee, H.; McGibbney, L. J.; Whitehall, K. D.
2014-12-01
Remote sensing data and climate model output are multi-dimensional arrays of massive sizes locked away in heterogeneous file formats (HDF5/4, NetCDF 3/4) and metadata models (HDF-EOS, CF), making it difficult to perform multi-stage, iterative science processing since each stage requires writing and reading data to and from disk. We are developing a lightning-fast Big Data technology called SciSpark based on Apache Spark. Spark implements the map-reduce paradigm for parallel computing on a cluster, but emphasizes in-memory computation, "spilling" to disk only as needed; it thus outperforms the disk-based Apache Hadoop by 100x in memory and by 10x on disk, and makes iterative algorithms feasible. SciSpark will enable scalable model evaluation by executing large-scale comparisons of A-Train satellite observations to model grids on a cluster of 100 to 1000 compute nodes. This 2nd-generation capability for NASA's Regional Climate Model Evaluation System (RCMES) will compute simple climate metrics at interactive speeds, and extend to quite sophisticated iterative algorithms such as machine-learning (ML) based clustering of temperature PDFs, and even graph-based algorithms for searching for Mesoscale Convective Complexes. The goals of SciSpark are to: (1) decrease the time to compute comparison statistics and plots from minutes to seconds; (2) allow for interactive exploration of time-series properties over seasons and years; (3) decrease the time for satellite data ingestion into RCMES to hours; (4) allow for Level-2 comparisons with higher-order statistics or PDFs in minutes to hours; and (5) move RCMES into a near-real-time decision-making platform.
We will report on: the architecture and design of SciSpark; our efforts to integrate climate science algorithms in Python and Scala; parallel ingest and partitioning (sharding) of A-Train satellite observations from HDF files and model grids from netCDF files; first parallel runs to compute comparison statistics and PDFs; and first metrics quantifying parallel speedups and memory and disk usage.
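The map-reduce style of comparison statistics described in this abstract can be illustrated with a small sketch. This is not the SciSpark or Spark API; it is a plain-Python analogy, assuming each "partition" is an in-memory list of (model, observation) value pairs, with a map step emitting partial sums and a reduce step merging them, the way a cluster would compute RMSE without writing intermediates to disk.

```python
from functools import reduce
import math

def map_partition(pairs):
    """Map step: emit (sum of squared errors, count) for one in-memory partition."""
    sse = sum((m - o) ** 2 for m, o in pairs)
    return (sse, len(pairs))

def merge(a, b):
    """Reduce step: combine partial (sse, count) sums from two partitions."""
    return (a[0] + b[0], a[1] + b[1])

def rmse(partitions):
    """Global RMSE between model and observation values across all partitions."""
    sse, n = reduce(merge, (map_partition(p) for p in partitions))
    return math.sqrt(sse / n)
```

In an actual Spark job the same two functions would be passed to `map` and `reduce` over a resilient distributed dataset; the point of the sketch is that only two small numbers per partition cross the network.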
Smits, Samuel A; Ouverney, Cleber C
2010-08-18
Many software packages have been developed to address the need for generating phylogenetic trees intended for print. With an increased use of the web to disseminate scientific literature, there is a need for phylogenetic trees to be viewable across many types of devices and feature some of the interactive elements that are integral to the browsing experience. We propose a novel approach for publishing interactive phylogenetic trees. We present a javascript library, jsPhyloSVG, which facilitates constructing interactive phylogenetic trees from raw Newick or phyloXML formats directly within the browser in Scalable Vector Graphics (SVG) format. It is designed to work across all major browsers and renders an alternative format for those browsers that do not support SVG. The library provides tools for building rectangular and circular phylograms with integrated charting. Interactive features may be integrated and made to respond to events such as clicks on any element of the tree, including labels. jsPhyloSVG is an open-source solution for rendering dynamic phylogenetic trees. It is capable of generating complex and interactive phylogenetic trees across all major browsers without the need for plugins. It is novel in supporting the ability to interpret the tree inference formats directly, exposing the underlying markup to data-mining services. The library source code, extensive documentation and live examples are freely accessible at www.jsphylosvg.com.
A General-purpose Framework for Parallel Processing of Large-scale LiDAR Data
NASA Astrophysics Data System (ADS)
Li, Z.; Hodgson, M.; Li, W.
2016-12-01
Light detection and ranging (LiDAR) technologies have proven efficiency to quickly obtain very detailed Earth surface data for a large spatial extent. Such data is important for scientific discoveries such as Earth and ecological sciences and natural disasters and environmental applications. However, handling LiDAR data poses grand geoprocessing challenges due to data intensity and computational intensity. Previous studies received notable success on parallel processing of LiDAR data to these challenges. However, these studies either relied on high performance computers and specialized hardware (GPUs) or focused mostly on finding customized solutions for some specific algorithms. We developed a general-purpose scalable framework coupled with sophisticated data decomposition and parallelization strategy to efficiently handle big LiDAR data. Specifically, 1) a tile-based spatial index is proposed to manage big LiDAR data in the scalable and fault-tolerable Hadoop distributed file system, 2) two spatial decomposition techniques are developed to enable efficient parallelization of different types of LiDAR processing tasks, and 3) by coupling existing LiDAR processing tools with Hadoop, this framework is able to conduct a variety of LiDAR data processing tasks in parallel in a highly scalable distributed computing environment. The performance and scalability of the framework is evaluated with a series of experiments conducted on a real LiDAR dataset using a proof-of-concept prototype system. The results show that the proposed framework 1) is able to handle massive LiDAR data more efficiently than standalone tools; and 2) provides almost linear scalability in terms of either increased workload (data volume) or increased computing nodes with both spatial decomposition strategies. We believe that the proposed framework provides valuable references on developing a collaborative cyberinfrastructure for processing big earth science data in a highly scalable environment.
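The tile-based decomposition underlying the framework above can be sketched in a few lines. This is a hedged illustration, not the authors' implementation: the tile size and key scheme are assumptions, and in the real system each tile group would map to an HDFS block processed by a Hadoop worker rather than a Python dict entry.

```python
from collections import defaultdict

def tile_key(x, y, tile_size):
    """Integer tile coordinates for a point, given a square tile size."""
    return (int(x // tile_size), int(y // tile_size))

def partition_points(points, tile_size=100.0):
    """Group (x, y, z) LiDAR points by tile; each tile is an independent
    unit of work that can be processed in parallel."""
    tiles = defaultdict(list)
    for x, y, z in points:
        tiles[tile_key(x, y, tile_size)].append((x, y, z))
    return dict(tiles)
```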
Coordinated Transformation among Community Colleges Lacking a State System
ERIC Educational Resources Information Center
Russell, James Thad
2016-01-01
Community colleges face many challenges in the face of demands for increased student success. Institutions continually seek scalable interventions and initiatives focused on improving student achievement. Effectively implementing sustainable change that moves the needle of student success remains elusive. Facilitating systemic, scalable change…
Scalable Video Streaming Relay for Smart Mobile Devices in Wireless Networks
Kwon, Dongwoo; Je, Huigwang; Kim, Hyeonwoo; Ju, Hongtaek; An, Donghyeok
2016-01-01
Recently, smart mobile devices and wireless communication technologies such as WiFi, third generation (3G), and long-term evolution (LTE) have been rapidly deployed. Many smart mobile device users can access the Internet wirelessly, which has increased mobile traffic. In 2014, more than half of the mobile traffic around the world was devoted to satisfying the increased demand for video streaming. In this paper, we propose a scalable video streaming relay scheme. Because many collisions degrade the scalability of video streaming, we first separate networks to prevent excessive contention between devices. In addition, the member device controls the video download rate in order to adapt to video playback. If the data are sufficiently buffered, the member device stops the download. If not, it requests additional video data. We implemented apps to evaluate the proposed scheme and conducted experiments with smart mobile devices. The results showed that our scheme improves the scalability of video streaming in a wireless local area network (WLAN). PMID:27907113
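The stop/resume download control described in the abstract is a simple hysteresis loop. The sketch below is illustrative only: the watermark values are assumptions, not figures from the paper, and the real member device would act on measured playback buffer occupancy rather than a passed-in number.

```python
def next_action(buffered_s, downloading, low_watermark_s=5.0, high_watermark_s=15.0):
    """Decide whether to keep downloading, given seconds of buffered video.

    Returns True to (keep) download(ing), False to pause.
    """
    if downloading and buffered_s >= high_watermark_s:
        return False         # data sufficiently buffered: stop the download
    if not downloading and buffered_s <= low_watermark_s:
        return True          # running low: request additional video data
    return downloading       # between watermarks: keep current state (hysteresis)
```

The gap between the two watermarks prevents the device from toggling the download on every small buffer fluctuation, which is what keeps contention on the shared wireless medium low.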
The Quantum Socket: Wiring for Superconducting Qubits - Part 3
NASA Astrophysics Data System (ADS)
Mariantoni, M.; Bejianin, J. H.; McConkey, T. G.; Rinehart, J. R.; Bateman, J. D.; Earnest, C. T.; McRae, C. H.; Rohanizadegan, Y.; Shiri, D.; Penava, B.; Breul, P.; Royak, S.; Zapatka, M.; Fowler, A. G.
The implementation of a quantum computer requires quantum error correction codes, which make it possible to correct errors occurring on physical quantum bits (qubits). Ensembles of physical qubits will be grouped to form a logical qubit with a lower error rate. Reaching low error rates will necessitate a large number of physical qubits; thus, a scalable qubit architecture must be developed. Superconducting qubits have been used to realize error correction. However, a truly scalable qubit architecture has yet to be demonstrated. A critical step towards scalability is the realization of a wiring method that allows qubits to be addressed densely and accurately. A quantum socket that serves this purpose has been designed and tested at microwave frequencies. In this talk, we show results where the socket is used at millikelvin temperatures to measure an on-chip superconducting resonator. The control electronics is another fundamental element for scalability. We will present a proposal based on the quantum socket to interconnect classical control hardware to superconducting qubit hardware, where both are operated at millikelvin temperatures.
Peterson, Kevin J.; Pathak, Jyotishman
2014-01-01
Automated execution of electronic Clinical Quality Measures (eCQMs) from electronic health records (EHRs) on large patient populations remains a significant challenge, and the testability, interoperability, and scalability of measure execution are critical. The High Throughput Phenotyping (HTP; http://phenotypeportal.org) project aligns with these goals by using the standards-based HL7 Health Quality Measures Format (HQMF) and Quality Data Model (QDM) for measure specification, as well as Common Terminology Services 2 (CTS2) for semantic interpretation. The HQMF/QDM representation is automatically transformed into a JBoss® Drools workflow, enabling horizontal scalability via clustering and MapReduce algorithms. Using Project Cypress, automated verification metrics can then be produced. Our results show linear scalability for nine executed 2014 Center for Medicare and Medicaid Services (CMS) eCQMs for eligible professionals and hospitals for >1,000,000 patients, and verified execution correctness of 96.4% based on Project Cypress test data of 58 eCQMs. PMID:25954459
On-chip detection of non-classical light by scalable integration of single-photon detectors
Najafi, Faraz; Mower, Jacob; Harris, Nicholas C.; Bellei, Francesco; Dane, Andrew; Lee, Catherine; Hu, Xiaolong; Kharel, Prashanta; Marsili, Francesco; Assefa, Solomon; Berggren, Karl K.; Englund, Dirk
2015-01-01
Photonic-integrated circuits have emerged as a scalable platform for complex quantum systems. A central goal is to integrate single-photon detectors to reduce optical losses, latency and wiring complexity associated with off-chip detectors. Superconducting nanowire single-photon detectors (SNSPDs) are particularly attractive because of high detection efficiency, sub-50-ps jitter and nanosecond-scale reset time. However, while single detectors have been incorporated into individual waveguides, the system detection efficiency of multiple SNSPDs in one photonic circuit—required for scalable quantum photonic circuits—has been limited to <0.2%. Here we introduce a micrometer-scale flip-chip process that enables scalable integration of SNSPDs on a range of photonic circuits. Ten low-jitter detectors are integrated on one circuit with 100% device yield. With an average system detection efficiency beyond 10%, and estimated on-chip detection efficiency of 14–52% for four detectors operated simultaneously, we demonstrate, to the best of our knowledge, the first on-chip photon correlation measurements of non-classical light. PMID:25575346
A Systems Approach to Scalable Transportation Network Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perumalla, Kalyan S
2006-01-01
Emerging needs in transportation network modeling and simulation are raising new challenges with respect to scalability of network size and vehicular traffic intensity, speed of simulation for simulation-based optimization, and fidelity of vehicular behavior for accurate capture of event phenomena. Parallel execution is warranted to sustain the required detail, size and speed. However, few parallel simulators exist for such applications, partly due to the challenges underlying their development. Moreover, many simulators are based on time-stepped models, which can be computationally inefficient for the purposes of modeling evacuation traffic. Here an approach is presented to designing a simulator with memory and speed efficiency as the goals from the outset, and, specifically, scalability via parallel execution. The design makes use of discrete event modeling techniques as well as parallel simulation methods. Our simulator, called SCATTER, is being developed, incorporating such design considerations. Preliminary performance results are presented on benchmark road networks, showing scalability to one million vehicles simulated on one processor.
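The discrete-event modeling style the abstract contrasts with time-stepped simulation can be reduced to a minimal skeleton: a priority queue of timestamped events processed in order, with handlers free to schedule follow-on events. This is a generic illustration of the technique, not SCATTER's design; the event names and horizon are hypothetical.

```python
import heapq

def run(events, horizon):
    """Process (time, name) events in timestamp order up to the horizon.

    Returns the log of fired events. Unlike a time-stepped model, no work
    is done for instants at which nothing happens.
    """
    heapq.heapify(events)
    log = []
    while events:
        t, name = heapq.heappop(events)
        if t > horizon:
            break
        log.append((t, name))
        # A real handler would push follow-on events here, e.g. a vehicle's
        # arrival at the next road link: heapq.heappush(events, (t + dt, ...))
    return log
```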
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lopez, Cecilia C.; Theoretische Physik, Universitaet des Saarlandes, D-66041 Saarbruecken; Departament de Fisica, Universitat Autonoma de Barcelona, E-08193 Bellaterra
2010-06-15
We present in a unified manner the existing methods for scalable partial quantum process tomography. We focus on two main approaches: the one presented in Bendersky et al. [Phys. Rev. Lett. 100, 190403 (2008)] and the ones described, respectively, in Emerson et al. [Science 317, 1893 (2007)] and Lopez et al. [Phys. Rev. A 79, 042328 (2009)], which can be combined together. The methods share an essential feature: They are based on the idea that the tomography of a quantum map can be efficiently performed by studying certain properties of a twirling of such a map. From this perspective, in this paper we present extensions, improvements, and comparative analyses of the scalable methods for partial quantum process tomography. We also clarify the significance of the extracted information, and we introduce interesting and useful properties of the χ-matrix representation of quantum maps that can be used to establish a clearer path toward achieving full tomography of quantum processes in a scalable way.
Modeling Cardiac Electrophysiology at the Organ Level in the Peta FLOPS Computing Age
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, Lawrence; Bishop, Martin; Hoetzl, Elena
2010-09-30
Despite a steep increase in available compute power, in-silico experimentation with highly detailed models of the heart remains challenging due to the high computational cost involved. It is hoped that next-generation high performance computing (HPC) resources will lead to significant reductions in execution times to leverage a new class of in-silico applications. However, performance gains with these new platforms can only be achieved by engaging a much larger number of compute cores, necessitating strongly scalable numerical techniques. So far, strong scalability has been demonstrated only for a moderate number of cores, orders of magnitude below the range required to achieve the desired performance boost. In this study, strong scalability of currently used techniques to solve the bidomain equations is investigated. Benchmark results suggest that scalability is limited to 512-4096 cores within the range of relevant problem sizes, even when systems are carefully load-balanced and advanced I/O strategies are employed.
Validation of a Scalable Solar Sailcraft
NASA Technical Reports Server (NTRS)
Murphy, D. M.
2006-01-01
The NASA In-Space Propulsion (ISP) program sponsored intensive solar sail technology and systems design, development, and hardware demonstration activities over the past 3 years. Efforts to validate a scalable solar sail system by functional demonstration in relevant environments, together with test-analysis correlation activities, have recently been successfully completed. A review of the program is presented, with descriptions of the design, results of testing, and analytical model validations of component and assembly functional, strength, stiffness, shape, and dynamic behavior. The scaled performance of the validated system is projected to demonstrate applicability to flight demonstration and important NASA road-map missions.
NASA Astrophysics Data System (ADS)
Yu, Leiming; Nina-Paravecino, Fanny; Kaeli, David; Fang, Qianqian
2018-01-01
We present a highly scalable Monte Carlo (MC) three-dimensional photon transport simulation platform designed for heterogeneous computing systems. Through the development of a massively parallel MC algorithm using the Open Computing Language framework, this research extends our existing graphics processing unit (GPU)-accelerated MC technique to a highly scalable vendor-independent heterogeneous computing environment, achieving significantly improved performance and software portability. A number of parallel computing techniques are investigated to achieve portable performance over a wide range of computing hardware. Furthermore, multiple thread-level and device-level load-balancing strategies are developed to obtain efficient simulations using multiple central processing units and GPUs.
Wavelet-based scalable L-infinity-oriented compression.
Alecu, Alin; Munteanu, Adrian; Cornelis, Jan P H; Schelkens, Peter
2006-09-01
Among the different classes of coding techniques proposed in the literature, predictive schemes have proven their outstanding performance in near-lossless compression. However, these schemes are incapable of providing embedded L∞-oriented compression, or, at most, provide a very limited number of potential L∞ bit-stream truncation points. We propose a new multidimensional wavelet-based L∞-constrained scalable coding framework that generates a fully embedded L∞-oriented bit stream and that retains the coding performance and all the scalability options of state-of-the-art L2-oriented wavelet codecs. Moreover, our codec instantiation of the proposed framework clearly outperforms JPEG2000 in the L∞ coding sense.
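The L∞ (maximum absolute error) criterion at the heart of near-lossless coding can be made concrete with a toy example. The sketch below is not the authors' wavelet codec; it only shows the kind of hard per-sample guarantee that L∞-oriented coding controls: a uniform quantizer with step 2δ+1 bounds the reconstruction error of any integer sample by δ.

```python
def quantize(x, delta):
    """Map an integer sample to a quantizer index with step 2*delta + 1."""
    step = 2 * delta + 1
    return (x + delta) // step        # floor division centers each bin

def dequantize(q, delta):
    """Reconstruct the bin center; error is guaranteed <= delta."""
    step = 2 * delta + 1
    return q * step
```

An embedded L∞ bit stream, as proposed in the paper, extends this idea so that truncating the stream at many points yields a whole ladder of such guaranteed δ bounds rather than a single fixed one.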
NASA Technical Reports Server (NTRS)
Fineberg, Samuel A.; Kutler, Paul (Technical Monitor)
1997-01-01
The Whitney project is integrating commodity off-the-shelf PC hardware and software technology to build a parallel supercomputer with hundreds to thousands of nodes. To build such a system, one must have a scalable software model, and the installation and maintenance of the system software must be completely automated. We describe the design of an architecture for booting, installing, and configuring nodes in such a system with particular consideration given to scalability and ease of maintenance. This system has been implemented on a 40-node prototype of Whitney and is to be used on the 500 processor Whitney system to be built in 1998.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-14
... best demonstrate that they have the managerial and operational capacity, including significant and demonstrable scalability in their management, finances, systems, and infrastructure, to assume the...--Scalability in operations and management to perform timely, accurate, and comprehensive lender claims review...
NASA Astrophysics Data System (ADS)
Yan, Beichuan; Regueiro, Richard A.
2018-02-01
A three-dimensional (3D) DEM code for simulating complex-shaped granular particles is parallelized using the message-passing interface (MPI). The concepts of link-block, ghost/border layer, and migration layer are put forward for the design of the parallel algorithm, and a theoretical function for 3-D DEM scalability and memory usage is derived. Many performance-critical implementation details are managed optimally to achieve high performance and scalability, such as minimizing communication overhead, maintaining dynamic load balance, handling particle migrations across block borders, transmitting C++ dynamic objects of particles between MPI processes efficiently, and eliminating redundant contact information between adjacent MPI processes. The code executes on multiple US Department of Defense (DoD) supercomputers and is tested on up to 2048 compute nodes simulating 10 million three-axis ellipsoidal particles. Performance analyses of the code, including speedup, efficiency, scalability, and granularity across five orders of magnitude of simulation scale (number of particles), are provided, and they demonstrate high speedup and excellent scalability. It is also discovered that communication time is a decreasing function of the number of compute nodes in strong scaling measurements. The code's capability of simulating a large number of complex-shaped particles on modern supercomputers will be of value in both laboratory studies on micromechanical properties of granular materials and many realistic engineering applications involving granular materials.
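The ghost/border-layer concept mentioned above can be illustrated without MPI. In this hedged 1-D sketch (the real code is 3-D and exchanges layers between MPI processes), each block owns a slice of a cell array and is padded with read-only copies of its neighbors' border cells, so contacts that straddle a block boundary can still be detected locally.

```python
def split_with_ghosts(cells, nblocks, ghost=1):
    """Split cells into contiguous blocks, each padded with ghost copies
    of up to `ghost` neighboring cells on each side."""
    n = len(cells)
    size = n // nblocks
    blocks = []
    for b in range(nblocks):
        lo = b * size
        hi = (b + 1) * size if b < nblocks - 1 else n   # last block takes remainder
        g_lo = max(0, lo - ghost)      # left ghost layer (absent at domain edge)
        g_hi = min(n, hi + ghost)      # right ghost layer (absent at domain edge)
        blocks.append(cells[g_lo:g_hi])
    return blocks
```

In the MPI setting, after each time step every process would re-send its border cells so neighbors can refresh their ghost copies; the migration layer then handles particles whose owning block changes.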
Scalability and Validation of Big Data Bioinformatics Software.
Yang, Andrian; Troup, Michael; Ho, Joshua W K
2017-01-01
This review examines two important aspects that are central to modern big data bioinformatics analysis: software scalability and validity. We argue that not only are the issues of scalability and validation common to all big data bioinformatics analyses, they can be tackled by conceptually related methodological approaches, namely divide-and-conquer (scalability) and multiple executions (validation). Scalability is defined as the ability of a program to scale with workload. It has always been an important consideration when developing bioinformatics algorithms and programs. Nonetheless, the surge in volume and variety of biological and biomedical data has posed new challenges. We discuss how modern cloud computing and big data programming frameworks such as MapReduce and Spark are being used to effectively implement divide-and-conquer in a distributed computing environment. Validation of software is another important issue in big data bioinformatics that is often ignored. Software validation is the process of determining whether the program under test fulfils the task for which it was designed. Determining the correctness of the computational output of big data bioinformatics software is especially difficult due to the large input space and complex algorithms involved. We discuss how state-of-the-art software testing techniques that are based on the idea of multiple executions, such as metamorphic testing, can be used to implement an effective bioinformatics quality assurance strategy. We hope this review will raise awareness of these critical issues in bioinformatics.
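Metamorphic testing, as discussed in this review, checks that related inputs produce outputs related in a predictable way even when no oracle gives the "correct" answer. A minimal, generic sketch (the function under test and the relations are illustrative, not from the review):

```python
import random

def mean(xs):
    """The program under test."""
    return sum(xs) / len(xs)

def metamorphic_check(xs, trials=10):
    """Check two metamorphic relations of `mean`: invariance under
    permutation of the input, and linearity under scaling."""
    base = mean(xs)
    for _ in range(trials):
        ys = xs[:]
        random.shuffle(ys)
        assert abs(mean(ys) - base) < 1e-9                       # permutation relation
        assert abs(mean([2 * x for x in xs]) - 2 * base) < 1e-9  # scaling relation
    return True
```

For a genomic pipeline the same pattern applies with domain relations, e.g. reverse-complementing reads should not change an alignment count; violations flag bugs without ever needing a known-good output.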
Efficient Prediction Structures for H.264 Multi View Coding Using Temporal Scalability
NASA Astrophysics Data System (ADS)
Guruvareddiar, Palanivel; Joseph, Biju K.
2014-03-01
Prediction structures with "disposable view components based" hierarchical coding have been proven to be efficient for H.264 multi-view coding. Though these prediction structures, along with the QP cascading schemes, provide superior compression efficiency when compared to the traditional IBBP coding scheme, the temporal scalability requirements of the bit stream could not be met to the fullest. On the other hand, a fully scalable bit stream, obtained by "temporal identifier based" hierarchical coding, provides a number of advantages including bit rate adaptation and improved error resilience, but falls short in compression efficiency when compared to the former scheme. In this paper, it is proposed to combine the two approaches such that a fully scalable bit stream can be realized with minimal reduction in compression efficiency when compared to state-of-the-art "disposable view components based" hierarchical coding. Simulation results show that the proposed method enables full temporal scalability with a maximum BDPSNR reduction of only 0.34 dB. A novel method has also been proposed for the identification of the temporal identifier for legacy H.264/AVC base layer packets. Simulation results also show that this enables the scenario where the enhancement views can be extracted at a lower frame rate (one-half or one-quarter of the base view) with an average extraction time per view component of only 0.38 ms.
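Temporal-identifier-based hierarchical coding assigns each frame a layer so that dropping the highest layers halves the frame rate per layer dropped. The sketch below shows one common dyadic assignment (a generic hierarchical-B scheme, not necessarily the exact structure used in the paper): within a GOP of 2^(L-1) frames, a frame's temporal id is determined by how many times its position is divisible by two.

```python
def temporal_id(frame_index, layers):
    """Temporal layer of a frame in a dyadic hierarchy with `layers` levels
    over a GOP of 2**(layers-1) frames."""
    gop = 1 << (layers - 1)
    pos = frame_index % gop
    if pos == 0:
        return 0                 # GOP anchor frame, lowest layer
    tid = layers - 1
    while pos % 2 == 0:          # each factor of two moves one layer down
        pos //= 2
        tid -= 1
    return tid
```

Extracting only frames with `temporal_id < t_max` yields the reduced-rate sub-streams: with three layers, discarding layer 2 keeps every second frame, and discarding layers 1-2 keeps every fourth.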
Cotič, Živa; Rees, Rebecca; Wark, Petra A; Car, Josip
2016-10-19
In 2013, there was a shortage of approximately 7.2 million health workers worldwide, which is larger among family physicians than among specialists. eLearning could provide a potential solution to some of these global workforce challenges. However, there is little evidence on factors facilitating or hindering implementation, adoption, use, scalability and sustainability of eLearning. This review aims to synthesise results from qualitative and mixed methods studies to provide insight on factors influencing implementation of eLearning for family medicine specialty education and training. Additionally, this review aims to identify the actions needed to increase effectiveness of eLearning and identify the strategies required to improve eLearning implementation, adoption, use, sustainability and scalability for family medicine specialty education and training. A systematic search will be conducted across a range of databases for qualitative studies focusing on experiences, barriers, facilitators, and other factors related to the implementation, adoption, use, sustainability and scalability of eLearning for family medicine specialty education and training. Studies will be synthesised by using the framework analysis approach. This study will contribute to the evaluation of eLearning implementation, adoption, use, sustainability and scalability for family medicine specialty training and education, and to the development of eLearning guidelines for postgraduate medical education. PROSPERO http://www.crd.york.ac.uk/PROSPERO/display_record.asp?ID=CRD42016036449.
Visual Analytics for Power Grid Contingency Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, Pak C.; Huang, Zhenyu; Chen, Yousu
2014-01-20
Contingency analysis is the process of employing different measures to model scenarios, analyze them, and then derive the best response to remove the threats. This application paper focuses on a class of contingency analysis problems found in the power grid management system. A power grid is a geographically distributed interconnected transmission network that transmits and delivers electricity from generators to end users. The power grid contingency analysis problem is increasingly important because of both the growing size of the underlying raw data that need to be analyzed and the urgency to deliver working solutions in an aggressive timeframe. Failure to do so may bring significant financial, economic, and security impacts to all parties involved and the society at large. The paper presents a scalable visual analytics pipeline that transforms about 100 million contingency scenarios to a manageable size and form for grid operators to examine different scenarios and come up with preventive or mitigation strategies to address the problems in a predictive and timely manner. Great attention is given to the computational scalability, information scalability, visual scalability, and display scalability issues surrounding the data analytics pipeline. Most of the large-scale computation requirements of our work are conducted on a Cray XMT multi-threaded parallel computer. The paper demonstrates a number of examples using western North American power grid models and data.
Framework of distributed coupled atmosphere-ocean-wave modeling system
NASA Astrophysics Data System (ADS)
Wen, Yuanqiao; Huang, Liwen; Deng, Jian; Zhang, Jinfeng; Wang, Sisi; Wang, Lijun
2006-05-01
In order to research the interactions between the atmosphere and ocean as well as their important role in the intensive weather systems of coastal areas, and to improve the forecasting ability of the hazardous weather processes of coastal areas, a coupled atmosphere-ocean-wave modeling system has been developed. The agent-based environment framework for linking models allows flexible and dynamic information exchange between models. For the purpose of flexibility, portability and scalability, the framework of the whole system takes a multi-layer architecture that includes a user interface layer, computational layer and service-enabling layer. The numerical experiment presented in this paper demonstrates the performance of the distributed coupled modeling system.
JBrowse: a dynamic web platform for genome visualization and analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buels, Robert; Yao, Eric; Diesh, Colin M.
JBrowse is a fast and full-featured genome browser built with JavaScript and HTML5. It is easily embedded into websites or apps but can also be served as a standalone web page. Overall improvements to speed and scalability are accompanied by specific enhancements that support complex interactive queries on large track sets. Analysis functions can readily be added using the plugin framework; most visual aspects of tracks can also be customized, along with clicks, mouseovers, menus, and popup boxes. JBrowse can also be used to browse local annotation files offline and to generate high-resolution figures for publication. JBrowse is a mature web application suitable for genome visualization and analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pruitt, Spencer R.; Nakata, Hiroya; Nagata, Takeshi
2016-04-12
The analytic first derivative with respect to nuclear coordinates is formulated and implemented in the framework of the three-body fragment molecular orbital (FMO) method. The gradient has been derived and implemented for restricted Hartree-Fock, second-order Møller-Plesset perturbation, and density functional theories. The importance of the three-body fully analytic gradient is illustrated through the failure of the two-body FMO method during molecular dynamics simulations of a small water cluster. The parallel implementation of the fragment molecular orbital method, its parallel efficiency, and its scalability on the Blue Gene/Q architecture up to 262,144 CPU cores, are also discussed.
The Newick utilities: high-throughput phylogenetic tree processing in the UNIX shell.
Junier, Thomas; Zdobnov, Evgeny M
2010-07-01
We present a suite of Unix shell programs for processing any number of phylogenetic trees of any size. They perform frequently-used tree operations without requiring user interaction. They also allow tree drawing as scalable vector graphics (SVG), suitable for high-quality presentations and further editing, and as ASCII graphics for command-line inspection. As an example we include an implementation of bootscanning, a procedure for finding recombination breakpoints in viral genomes. C source code, Python bindings and executables for various platforms are available from http://cegg.unige.ch/newick_utils. The distribution includes a manual and example data. The package is distributed under the BSD License. thomas.junier@unige.ch
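The tree operations described above all begin with parsing the Newick serialization itself. As a hedged illustration of the format (a minimal sketch in Python rather than the package's C, and ignoring quoted labels and comments), a recursive-descent Newick parser might look like:

```python
def parse_newick(s):
    """Parse a Newick string into nested (label, children) tuples.
    Minimal sketch: branch lengths after ':' are simply dropped."""
    pos = 0

    def parse_clade():
        nonlocal pos
        children = []
        if s[pos] == '(':
            pos += 1  # consume '('
            children.append(parse_clade())
            while s[pos] == ',':
                pos += 1  # consume ',' between siblings
                children.append(parse_clade())
            pos += 1  # consume ')'
        start = pos
        while pos < len(s) and s[pos] not in '(),;':
            pos += 1  # read label (and any branch length) up to a delimiter
        label = s[start:pos].split(':')[0]  # drop branch length
        return (label, children)

    return parse_clade()

tree = parse_newick("((A:0.1,B:0.2)ab:0.3,C:0.4)root;")
```

Each node becomes a (label, children) pair; the real utilities additionally preserve branch lengths, which this sketch discards for brevity.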
JBrowse: a dynamic web platform for genome visualization and analysis.
Buels, Robert; Yao, Eric; Diesh, Colin M; Hayes, Richard D; Munoz-Torres, Monica; Helt, Gregg; Goodstein, David M; Elsik, Christine G; Lewis, Suzanna E; Stein, Lincoln; Holmes, Ian H
2016-04-12
JBrowse is a fast and full-featured genome browser built with JavaScript and HTML5. It is easily embedded into websites or apps but can also be served as a standalone web page. Overall improvements to speed and scalability are accompanied by specific enhancements that support complex interactive queries on large track sets. Analysis functions can readily be added using the plugin framework; most visual aspects of tracks can also be customized, along with clicks, mouseovers, menus, and popup boxes. JBrowse can also be used to browse local annotation files offline and to generate high-resolution figures for publication. JBrowse is a mature web application suitable for genome visualization and analysis.
XNsim: Internet-Enabled Collaborative Distributed Simulation via an Extensible Network
NASA Technical Reports Server (NTRS)
Novotny, John; Karpov, Igor; Zhang, Chendi; Bedrossian, Nazareth S.
2007-01-01
In this paper, the XNsim approach to achieve Internet-enabled, dynamically scalable collaborative distributed simulation capabilities is presented. With this approach, a complete simulation can be assembled from shared component subsystems written in different formats, that run on different computing platforms, with different sampling rates, in different geographic locations, and over single/multiple networks. The subsystems interact securely with each other via the Internet. Furthermore, the simulation topology can be dynamically modified. The distributed simulation uses a combination of hub-and-spoke and peer-to-peer network topology. A proof-of-concept demonstrator is also presented. The XNsim demonstrator can be accessed at http://www.jsc.draver.corn/xn, which hosts various examples of Internet-enabled simulations.
Hum, D S; Route, R K; Fejer, M M
2007-04-15
Quasi-phase-matched second-harmonic generation of 532 nm radiation in 25 degrees -rotated, x-cut, near-stoichiometric lithium tantalate has been performed. Using a face-normal topology for frequency conversion applications allows scalable surface area to avoid surface and volume damage in high-power interactions. First-order, quasi-phase-matched second-harmonic generation was achieved using near-stoichiometric lithium tantalate fabricated by vapor transport equilibration. These crystals supported 1 J of 1064 nm radiation and generated 21 mJ of 532 nm radiation from a 7 ns, Q-switched Nd:YAG laser within a factor of 4.2 of expectation.
Towards Scalable Deep Learning via I/O Analysis and Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pumma, Sarunya; Si, Min; Feng, Wu-Chun
Deep learning systems have been growing in prominence as a way to automatically characterize objects, trends, and anomalies. Given the importance of deep learning systems, researchers have been investigating techniques to optimize such systems. An area of particular interest has been using large supercomputing systems to quickly generate effective deep learning networks: a phase often referred to as "training" of the deep learning neural network. As we scale existing deep learning frameworks, such as Caffe, on these large supercomputing systems, we notice that the parallelism can help improve the computation tremendously, leaving data I/O as the major bottleneck limiting the overall system scalability. In this paper, we first present a detailed analysis of the performance bottlenecks of Caffe on large supercomputing systems. Our analysis shows that the I/O subsystem of Caffe, LMDB, relies on memory-mapped I/O to access its database, which can be highly inefficient on large-scale systems because of its interaction with the process scheduling system and the network-based parallel filesystem. Based on this analysis, we then present LMDBIO, our optimized I/O plugin for Caffe that takes into account the data access pattern of Caffe in order to vastly improve I/O performance. Our experimental results show that LMDBIO can improve the overall execution time of Caffe by nearly 20-fold in some cases.
Ibrahim, Khaled Z.; Epifanovsky, Evgeny; Williams, Samuel; ...
2017-03-08
Coupled-cluster methods provide highly accurate models of molecular structure through explicit numerical calculation of tensors representing the correlation between electrons. These calculations are dominated by a sequence of tensor contractions, motivating the development of numerical libraries for such operations. While based on matrix–matrix multiplication, these libraries are specialized to exploit symmetries in the molecular structure and in electronic interactions, and thus reduce the size of the tensor representation and the complexity of contractions. The resulting algorithms are irregular and their parallelization has been previously achieved via the use of dynamic scheduling or specialized data decompositions. We introduce our efforts to extend the Libtensor framework to work in the distributed memory environment in a scalable and energy-efficient manner. We achieve up to 240× speedup compared with the optimized shared memory implementation of Libtensor. We attain scalability to hundreds of thousands of compute cores on three distributed-memory architectures (Cray XC30 and XC40, and IBM Blue Gene/Q), and on a heterogeneous GPU-CPU system (Cray XK7). As the bottlenecks shift from compute-bound DGEMMs to communication-bound collectives as the size of the molecular system scales, we adopt two radically different parallelization approaches for handling load imbalance: tasking and bulk synchronous models. Nevertheless, we preserve a unified interface to both programming models to maintain the productivity of computational quantum chemists.
Scalable gamma-ray camera for wide-area search based on silicon photomultipliers array
NASA Astrophysics Data System (ADS)
Jeong, Manhee; Van, Benjamin; Wells, Byron T.; D'Aries, Lawrence J.; Hammig, Mark D.
2018-03-01
Portable coded-aperture imaging systems based on scintillators and semiconductors have found use in a variety of radiological applications. For stand-off detection of weakly emitting materials, large volume detectors can facilitate the rapid localization of emitting materials. We describe a scalable coded-aperture imaging system based on 5.02 × 5.02 cm2 CsI(Tl) scintillator modules, each partitioned into 4 × 4 × 20 mm3 pixels that are optically coupled to 12 × 12 pixel silicon photo-multiplier (SiPM) arrays. The 144 pixels per module are read-out with a resistor-based charge-division circuit that reduces the readout outputs from 144 to four signals per module, from which the interaction position and total deposited energy can be extracted. All 144 CsI(Tl) pixels are readily distinguishable with an average energy resolution, at 662 keV, of 13.7% FWHM, a peak-to-valley ratio of 8.2, and a peak-to-Compton ratio of 2.9. The detector module is composed of a SiPM array coupled with a 2 cm thick scintillator and modified uniformly redundant array mask. For the image reconstruction, cross correlation and maximum likelihood expectation maximization methods are used. The system shows a field of view of 45° and an angular resolution of 4.7° FWHM.
Harris, Daniel R.; Henderson, Darren W.; Kavuluru, Ramakanth; Stromberg, Arnold J.; Johnson, Todd R.
2015-01-01
We present a custom, Boolean query generator utilizing common-table expressions (CTEs) that is capable of scaling with big datasets. The generator maps user-defined Boolean queries, such as those interactively created in clinical-research and general-purpose healthcare tools, into SQL. We demonstrate the effectiveness of this generator by integrating our work into the Informatics for Integrating Biology and the Bedside (i2b2) query tool and show that it is capable of scaling. Our custom generator replaces and outperforms the default query generator found within the Clinical Research Chart (CRC) cell of i2b2. In our experiments, sixteen different types of i2b2 queries were identified by varying four constraints: date, frequency, exclusion criteria, and whether selected concepts occurred in the same encounter. We generated non-trivial, random Boolean queries based on these 16 types; the corresponding SQL queries produced by both generators were compared by execution times. The CTE-based solution significantly outperformed the default query generator and provided a much more consistent response time across all query types (M=2.03, SD=6.64 vs. M=75.82, SD=238.88 seconds). Without costly hardware upgrades, we provide a scalable solution based on CTEs with very promising empirical results centered on performance gains. The evaluation methodology used for this provides a means of profiling clinical data warehouse performance. PMID:25192572
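The generator itself is tied to the i2b2 schema and is not reproduced in the abstract, but the core idea is easy to sketch: each Boolean operand becomes a common-table expression selecting matching patients, and AND/OR map to INTERSECT/UNION over those CTEs. The following is a hedged Python sketch against a hypothetical facts(patient_id, concept) table, not the i2b2 CRC schema:

```python
import sqlite3

def boolean_to_sql(concepts, op="INTERSECT"):
    """Map a flat Boolean query to SQL: one CTE per concept, combined
    with INTERSECT (AND) or UNION (OR). A real generator would bind
    parameters instead of interpolating strings."""
    ctes = ",\n".join(
        f"c{i} AS (SELECT patient_id FROM facts WHERE concept = '{c}')"
        for i, c in enumerate(concepts)
    )
    body = f"\n{op}\n".join(
        f"SELECT patient_id FROM c{i}" for i in range(len(concepts))
    )
    return f"WITH {ctes}\n{body}"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE facts (patient_id INTEGER, concept TEXT)")
conn.executemany("INSERT INTO facts VALUES (?, ?)",
                 [(1, "diabetes"), (1, "hypertension"), (2, "diabetes")])
sql = boolean_to_sql(["diabetes", "hypertension"])  # AND of two concepts
rows = conn.execute(sql).fetchall()  # patients having both concepts
```

A production generator would also attach the date, frequency, exclusion, and same-encounter constraints that the paper's sixteen query types vary.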
NASA Astrophysics Data System (ADS)
Hammer, Sebastian; Mangold, Hans-Moritz; Nguyen, Ariana E.; Martinez-Ta, Dominic; Naghibi Alvillar, Sahar; Bartels, Ludwig; Krenner, Hubert J.
2018-02-01
We review the fully scalable fabrication of a large array of hybrid molybdenum disulfide (MoS2) - silicon dioxide (SiO2) one-dimensional (1D), freestanding photonic-crystal cavities (PCCs) capable of enhancement of the MoS2 photoluminescence (PL) at the narrow cavity resonance. As demonstrated in our prior work [S. Hammer et al., Sci. Rep. 7, 7251 (2017)], geometric mode tuning over the wide spectral range of MoS2 PL can be achieved by changing the PC period. In this contribution, we provide a step-by-step description of the fabrication process and give additional detailed information on the degradation of MoS2 by XeF2 vapor. We avoid potential damage of the MoS2 monolayer during the crucial XeF2 etch by refraining from stripping the electron beam (e-beam) resist after dry etching of the photonic crystal pattern. The remaining resist on top of the samples encapsulates and protects the MoS2 film during the entire fabrication process. Although the thickness of the remaining resist depends strongly on the fabrication process, the resulting encapsulation of the MoS2 layer improves the confinement of the optical modes and gives rise to a potential enhancement of the light-matter interaction.
Li, Jin; Lindley-Start, Jack; Porch, Adrian; Barrow, David
2017-07-24
High-specification polymer capsules for producing inertial fusion energy targets were continuously fabricated using surfactant-free inertial centralisation and ultrafast polymerisation in a scalable flow reactor. Laser-driven inertial confinement fusion depends upon the interaction of high-energy lasers and hydrogen isotopes, contained within small, spherical and concentric target shells, causing a nuclear fusion reaction at ~150 M°C. Potentially, targets will be consumed at ~1 M per day per reactor, demanding a 5000x unit-cost reduction to ~$0.20, which is a critical key challenge. Experimentally, double emulsions were used as templates for capsule shells and were formed at 20 Hz on a fluidic chip. Droplets were centralised in a dynamic flow, and their shapes were both evaluated and mathematically modeled before subsequent shell solidification. The shells were photo-cured individually, on-the-fly, with precisely actuated, millisecond-length (70 ms), uniform-intensity UV pulses delivered through eight radially orchestrated light-pipes. The near-100% yield of uniform shells had a minimum 99.0% concentricity and sphericity, and the solidification processing period was significantly reduced compared with conventional batch methods. The data suggest the new possibility of a continuous, on-the-fly IFE target fabrication process, employing sequential processing operations within a continuous enclosed duct system, which may include cryogenic fuel-filling and shell curing, to produce ready-to-use IFE targets.
Hadwiger, M; Beyer, J; Jeong, Won-Ki; Pfister, H
2012-12-01
This paper presents the first volume visualization system that scales to petascale volumes imaged as a continuous stream of high-resolution electron microscopy images. Our architecture scales to dense, anisotropic petascale volumes because it: (1) decouples construction of the 3D multi-resolution representation required for visualization from data acquisition, and (2) decouples sample access time during ray-casting from the size of the multi-resolution hierarchy. Our system is designed around a scalable multi-resolution virtual memory architecture that handles missing data naturally, does not pre-compute any 3D multi-resolution representation such as an octree, and can accept a constant stream of 2D image tiles from the microscopes. A novelty of our system design is that it is visualization-driven: we restrict most computations to the visible volume data. Leveraging the virtual memory architecture, missing data are detected during volume ray-casting as cache misses, which are propagated backwards for on-demand out-of-core processing. 3D blocks of volume data are only constructed from 2D microscope image tiles when they have actually been accessed during ray-casting. We extensively evaluate our system design choices with respect to scalability and performance, compare to previous best-of-breed systems, and illustrate the effectiveness of our system for real microscopy data from neuroscience.
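The visualization-driven strategy, in which 3D blocks are materialized only when ray-casting actually touches them, can be caricatured with a tiny on-demand block cache. This is a hedged sketch: `BlockCache` and `_build_block` are illustrative names, and block construction here merely stands in for assembling 2D microscope tiles into a 3D block:

```python
class BlockCache:
    """On-demand block store: a block is built only at its first access
    (a cache miss), mimicking visualization-driven, out-of-core fetching."""
    def __init__(self, block_size):
        self.block_size = block_size
        self.blocks = {}
        self.misses = 0

    def sample(self, x, y, z):
        key = (x // self.block_size, y // self.block_size, z // self.block_size)
        if key not in self.blocks:
            self.misses += 1  # propagate miss to out-of-core processing
            self.blocks[key] = self._build_block(key)
        return self.blocks[key]

    def _build_block(self, key):
        return sum(key)  # placeholder for stitching 2D tiles into 3D data

cache = BlockCache(block_size=16)
for x in range(32):  # a short ray along x crosses exactly two blocks
    cache.sample(x, 0, 0)
```

Only visible data triggers work: 32 samples cost just two block constructions, and blocks never touched by a ray are never built at all.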
This Small Business Innovative Research (SBIR) project will develop and ready for commercialization a scalable, low-cost process for purification of water containing Contaminants of Emerging Concern (CECs) using anodic oxidation with boron-doped ultrananocrystalline diam...
ERIC Educational Resources Information Center
Kenney, Jacqueline; Hermens, Antoine; Clarke, Thomas
2004-01-01
The development of e-learning by government through policy, funding allocations, research-based collaborative projects and alliances has increased recently in both developed and under-developed nations. The paper notes that government, industry and corporate users are increasingly focusing on standardisation issues and the scalability of…
Scalability study of solid xenon
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, J.; Cease, H.; Jaskierny, W. F.
2015-04-01
We report a demonstration of the scalability of optically transparent xenon in the solid phase for use as a particle detector above the kilogram scale. We employed a cryostat cooled by liquid nitrogen combined with a xenon purification and chiller system. A modified Bridgman technique reproduces a large-scale, optically transparent solid xenon.
Air-stable ink for scalable, high-throughput layer deposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weil, Benjamin D; Connor, Stephen T; Cui, Yi
A method for producing and depositing air-stable, easily decomposable, vulcanized ink on any of a wide range of substrates is disclosed. The ink enables high-volume production of optoelectronic and/or electronic devices using scalable production methods, such as roll-to-roll transfer, fast rolling processes, and the like.
Slices: A Scalable Partitioner for Finite Element Meshes
NASA Technical Reports Server (NTRS)
Ding, H. Q.; Ferraro, R. D.
1995-01-01
A parallel partitioner for partitioning unstructured finite element meshes on distributed memory architectures is developed. The element based partitioner can handle mixtures of different element types. All algorithms adopted in the partitioner are scalable, including a communication template for unpredictable incoming messages, as shown in actual timing measurements.
Estimates of the Sampling Distribution of Scalability Coefficient H
ERIC Educational Resources Information Center
Van Onna, Marieke J. H.
2004-01-01
Coefficient "H" is used as an index of scalability in nonparametric item response theory (NIRT). It indicates the degree to which a set of items rank orders examinees. Theoretical sampling distributions, however, have only been derived asymptotically and only under restrictive conditions. Bootstrap methods offer an alternative possibility to…
Scalability Assessments for the Malicious Activity Simulation Tool (MAST)
2012-09-01
the scalability characteristics of MAST. Specifically, we show that an exponential increase in clients using the MAST software does not significantly impact network and system resources.
Hohenstein, Edward G; Parrish, Robert M; Sherrill, C David; Turney, Justin M; Schaefer, Henry F
2011-11-07
Symmetry-adapted perturbation theory (SAPT) provides a means of probing the fundamental nature of intermolecular interactions. Low orders of SAPT (here, SAPT0) are especially attractive since they provide qualitative (sometimes quantitative) results while remaining tractable for large systems. The application of density fitting and Laplace transformation techniques to SAPT0 can significantly reduce the expense associated with these computations and make even larger systems accessible. We present new factorizations of the SAPT0 equations with density-fitted two-electron integrals and the first application of Laplace transformations of energy denominators to SAPT. The improved scalability of the DF-SAPT0 implementation allows it to be applied to systems with more than 200 atoms and 2800 basis functions. The Laplace-transformed energy denominators are compared to analogous partial Cholesky decompositions of the energy denominator tensor. Application of our new DF-SAPT0 program to the intercalation of DNA by proflavine has allowed us to determine the nature of the proflavine-DNA interaction. Overall, the proflavine-DNA interaction contains important contributions from both electrostatics and dispersion. The energetics of the intercalator interaction are dominated by the stacking interactions (two-thirds of the total), but contain important contributions from the intercalator-backbone interactions. It is hypothesized that the geometry of the complex will be determined by the interactions of the intercalator with the backbone, because by shifting toward one side of the backbone, the intercalator can form two long hydrogen-bonding type interactions. The long-range interactions between the intercalator and the next-nearest base pairs appear to be negligible, justifying the use of truncated DNA models in computational studies of intercalation interaction energies.
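For reference, the SAPT0 interaction energy discussed here is conventionally partitioned into electrostatic, exchange, induction, and dispersion components (standard notation from the SAPT literature, not reproduced from this abstract):

```latex
E_{\mathrm{int}}^{\mathrm{SAPT0}} =
  E_{\mathrm{elst}}^{(10)} + E_{\mathrm{exch}}^{(10)}
  + E_{\mathrm{ind,resp}}^{(20)} + E_{\mathrm{exch\text{-}ind,resp}}^{(20)}
  + E_{\mathrm{disp}}^{(20)} + E_{\mathrm{exch\text{-}disp}}^{(20)}
  + \delta E_{\mathrm{HF}}^{(2)}
```

The electrostatic and dispersion terms in this partition are the contributions the abstract identifies as dominant in the proflavine-DNA interaction.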
A Scalability Model for ECS's Data Server
NASA Technical Reports Server (NTRS)
Menasce, Daniel A.; Singhal, Mukesh
1998-01-01
This report presents in four chapters a model for the scalability analysis of the Data Server subsystem of the Earth Observing System Data and Information System (EOSDIS) Core System (ECS). The model analyzes if the planned architecture of the Data Server will support an increase in the workload with the possible upgrade and/or addition of processors, storage subsystems, and networks. The approaches in the report include a summary of the architecture of ECS's Data server as well as a high level description of the Ingest and Retrieval operations as they relate to ECS's Data Server. This description forms the basis for the development of the scalability model of the data server and the methodology used to solve it.
Scalable load balancing for massively parallel distributed Monte Carlo particle transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Brien, M. J.; Brantley, P. S.; Joy, K. I.
2013-07-01
In order to run computer simulations efficiently on massively parallel computers with hundreds of thousands or millions of processors, care must be taken that the calculation is load balanced across the processors. Examining the workload of every processor leads to an unscalable algorithm, with run time at least as large as O(N), where N is the number of processors. We present a scalable load balancing algorithm, with run time O(log N), that involves iterated processor-pair-wise balancing steps, ultimately leading to a globally balanced workload. We demonstrate scalability of the algorithm up to 2 million processors on the Sequoia supercomputer at Lawrence Livermore National Laboratory.
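The abstract does not give the algorithm's details, but an iterated processor-pair-wise balancing step can be illustrated with the classic dimension-exchange scheme on a hypercube, where log2(N) rounds of pair averaging yield a globally balanced load. This is a toy model under stated assumptions (N a power of two, infinitely divisible work), not the authors' Monte Carlo implementation:

```python
def pairwise_balance(loads):
    """Iterated pair-wise load balancing over hypercube dimensions.
    After log2(N) rounds every processor holds the global average,
    and no processor ever inspects more than its current partner."""
    n = len(loads)
    assert n & (n - 1) == 0, "toy model assumes N is a power of two"
    loads = list(loads)
    dim = 1
    while dim < n:
        for i in range(n):
            j = i ^ dim  # hypercube partner in this round
            if i < j:
                avg = (loads[i] + loads[j]) / 2
                loads[i] = loads[j] = avg
        dim <<= 1  # next hypercube dimension
    return loads

balanced = pairwise_balance([8, 0, 4, 0, 2, 6, 0, 4])  # average is 3
```

Each round performs N/2 independent exchanges, so the wall-clock cost is the O(log N) round count rather than the O(N) cost of a global workload survey.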
Prior knowledge based mining functional modules from Yeast PPI networks with gene ontology
2010-01-01
Background In the literature, there are fruitful algorithmic approaches for identifying functional modules in protein-protein interaction (PPI) networks. Because of the accumulation of large-scale interaction data on multiple organisms and of interactions not yet recorded in existing PPI databases, it remains urgent to design novel computational techniques that can correctly and scalably analyze interaction data sets. Indeed, a number of large-scale biological data sets provide indirect evidence for protein-protein interaction relationships. Results The main aim of this paper is to present a prior-knowledge-based mining strategy to identify functional modules from PPI networks with the aid of Gene Ontology. A higher similarity value in Gene Ontology means that two gene products are more functionally related to each other, so it is better to group such gene products into one functional module. We study (i) how to encode the functional pairs into the existing PPI networks; and (ii) how to use these functional pairs as pairwise constraints to supervise the existing functional module identification algorithms. A topology-based modularity metric and complex annotations in MIPS will be used to evaluate the functional modules identified by these two approaches. Conclusions The experimental results on Yeast PPI networks and GO have shown that the prior-knowledge-based learning methods perform better than the existing algorithms. PMID:21172053
Profiling cellular protein complexes by proximity ligation with dual tag microarray readout.
Hammond, Maria; Nong, Rachel Yuan; Ericsson, Olle; Pardali, Katerina; Landegren, Ulf
2012-01-01
Patterns of protein interactions provide important insights in basic biology, and their analysis plays an increasing role in drug development and diagnostics of disease. We have established a scalable technique to compare two biological samples for the levels of all pairwise interactions among a set of targeted protein molecules. The technique is a combination of the proximity ligation assay with readout via dual tag microarrays. In the proximity ligation assay protein identities are encoded as DNA sequences by attaching DNA oligonucleotides to antibodies directed against the proteins of interest. Upon binding by pairs of antibodies to proteins present in the same molecular complexes, ligation reactions give rise to reporter DNA molecules that contain the combined sequence information from the two DNA strands. The ligation reactions also serve to incorporate a sample barcode in the reporter molecules to allow for direct comparison between pairs of samples. The samples are evaluated using a dual tag microarray where information is decoded, revealing which pairs of tags have become joined. As a proof of concept we demonstrate that this approach can be used to detect a set of five proteins and their pairwise interactions both in cellular lysates and in fixed tissue culture cells. This paper provides a general strategy to analyze the extent of any pairwise interactions in large sets of molecules by decoding reporter DNA strands that identify the interacting molecules.
A Scalable Cyberinfrastructure for Interactive Visualization of Terascale Microscopy Data
Venkat, A.; Christensen, C.; Gyulassy, A.; Summa, B.; Federer, F.; Angelucci, A.; Pascucci, V.
2017-01-01
The goal of the recently emerged field of connectomics is to generate a wiring diagram of the brain at different scales. To identify brain circuitry, neuroscientists use specialized microscopes to perform multichannel imaging of labeled neurons at a very high resolution. CLARITY tissue clearing allows imaging labeled circuits through entire tissue blocks, without the need for tissue sectioning and section-to-section alignment. Imaging the large and complex non-human primate brain with sufficient resolution to identify and disambiguate between axons, in particular, produces massive data, creating great computational challenges to the study of neural circuits. Researchers require novel software capabilities for compiling, stitching, and visualizing large imagery. In this work, we detail the image acquisition process and a hierarchical streaming platform, ViSUS, that enables interactive visualization of these massive multi-volume datasets using a standard desktop computer. The ViSUS visualization framework has previously been shown to be suitable for 3D combustion simulation, climate simulation and visualization of large scale panoramic images. The platform is organized around a hierarchical cache oblivious data layout, called the IDX file format, which enables interactive visualization and exploration in ViSUS, scaling to the largest 3D images. In this paper we showcase the ViSUS framework used in an interactive setting with the microscopy data. PMID:28638896
Circuit quantum electrodynamics with a spin qubit.
Petersson, K D; McFaul, L W; Schroer, M D; Jung, M; Taylor, J M; Houck, A A; Petta, J R
2012-10-18
Electron spins trapped in quantum dots have been proposed as basic building blocks of a future quantum processor. Although fast, 180-picosecond, two-quantum-bit (two-qubit) operations can be realized using nearest-neighbour exchange coupling, a scalable, spin-based quantum computing architecture will almost certainly require long-range qubit interactions. Circuit quantum electrodynamics (cQED) allows spatially separated superconducting qubits to interact via a superconducting microwave cavity that acts as a 'quantum bus', making possible two-qubit entanglement and the implementation of simple quantum algorithms. Here we combine the cQED architecture with spin qubits by coupling an indium arsenide nanowire double quantum dot to a superconducting cavity. The architecture allows us to achieve a charge-cavity coupling rate of about 30 megahertz, consistent with coupling rates obtained in gallium arsenide quantum dots. Furthermore, the strong spin-orbit interaction of indium arsenide allows us to drive spin rotations electrically with a local gate electrode, and the charge-cavity interaction provides a measurement of the resulting spin dynamics. Our results demonstrate how the cQED architecture can be used as a sensitive probe of single-spin physics and that a spin-cavity coupling rate of about one megahertz is feasible, presenting the possibility of long-range spin coupling via superconducting microwave cavities.
Collective Behaviors of Mobile Robots Beyond the Nearest Neighbor Rules With Switching Topology.
Ning, Boda; Han, Qing-Long; Zuo, Zongyu; Jin, Jiong; Zheng, Jinchuan
2018-05-01
This paper is concerned with the collective behaviors of robots beyond the nearest neighbor rules, i.e., dispersion and flocking, when robots interact with others by applying an acute angle test (AAT)-based interaction rule. Different from a conventional nearest neighbor rule or its variations, the AAT-based interaction rule allows interactions with some far-neighbors and excludes unnecessary nearest neighbors. The resulting dispersion and flocking hold the advantages of scalability, connectivity, robustness, and effective area coverage. For the dispersion, a spring-like controller is proposed to achieve collision-free coordination. With switching topology, a new fixed-time consensus-based energy function is developed to guarantee the system stability. An upper bound of settling time for energy consensus is obtained, and a uniform time interval is accordingly set so that energy distribution is conducted in a fair manner. For the flocking, based on a class of generalized potential functions taking nonsmooth switching into account, a new controller is proposed to ensure that the same velocity for all robots is eventually reached. A co-optimizing problem is further investigated to accomplish additional tasks, such as enhancing communication performance, while maintaining the collective behaviors of mobile robots. Simulation results are presented to show the effectiveness of the theoretical results.
Spin-orbit qubit in a semiconductor nanowire.
Nadj-Perge, S; Frolov, S M; Bakkers, E P A M; Kouwenhoven, L P
2010-12-23
Motion of electrons can influence their spins through a fundamental effect called spin-orbit interaction. This interaction provides a way to control spins electrically and thus lies at the foundation of spintronics. Even at the level of single electrons, the spin-orbit interaction has proven promising for coherent spin rotations. Here we implement a spin-orbit quantum bit (qubit) in an indium arsenide nanowire, where the spin-orbit interaction is so strong that spin and motion can no longer be separated. In this regime, we realize fast qubit rotations and universal single-qubit control using only electric fields; the qubits are hosted in single-electron quantum dots that are individually addressable. We enhance coherence by dynamically decoupling the qubits from the environment. Nanowires offer various advantages for quantum computing: they can serve as one-dimensional templates for scalable qubit registers, and it is possible to vary the material even during wire growth. Such flexibility can be used to design wires with suppressed decoherence and to push semiconductor qubit fidelities towards error correction levels. Furthermore, electrical dots can be integrated with optical dots in p-n junction nanowires. The coherence times achieved here are sufficient for the conversion of an electronic qubit into a photon, which can serve as a flying qubit for long-distance quantum communication.
Tunable spin-spin interactions and entanglement of ions in separate potential wells.
Wilson, A C; Colombe, Y; Brown, K R; Knill, E; Leibfried, D; Wineland, D J
2014-08-07
Quantum simulation--the use of one quantum system to simulate a less controllable one--may provide an understanding of the many quantum systems which cannot be modelled using classical computers. Considerable progress in control and manipulation has been achieved for various quantum systems, but one of the remaining challenges is the implementation of scalable devices. In this regard, individual ions trapped in separate tunable potential wells are promising. Here we implement the basic features of this approach and demonstrate deterministic tuning of the Coulomb interaction between two ions, independently controlling their local wells. The scheme is suitable for emulating a range of spin-spin interactions, but to characterize the performance of our set-up we select one that entangles the internal states of the two ions with a fidelity of 0.82(1) (the digit in parentheses shows the standard error of the mean). Extension of this building block to a two-dimensional network, which is possible using ion-trap microfabrication processes, may provide a new quantum simulator architecture with broad flexibility in designing and scaling the arrangement of ions and their mutual interactions. To perform useful quantum simulations, including those of condensed-matter phenomena such as the fractional quantum Hall effect, an array of tens of ions might be sufficient.
Abbas, Syed Ali; Ding, Jiang; Wu, Sheng Hui; Fang, Jason; Boopathi, Karunakara Moorthy; Mohapatra, Anisha; Lee, Li Wei; Wang, Pen-Cheng; Chang, Chien-Cheng; Chu, Chih Wei
2017-12-26
In this paper we describe a modified (AEG/CH) coated separator for Li-S batteries in which the shuttling phenomenon of the lithium polysulfides is restrained through two types of interactions: activated expanded graphite (AEG) flakes interacted physically with the lithium polysulfides, while chitosan (CH), used to bind the AEG flakes on the separator, interacted chemically through its abundance of amino and hydroxyl functional groups. Moreover, the AEG flakes facilitated ionic and electronic transfer during the redox reaction. Live H-cell discharging experiments revealed that the modified separator was effective at curbing polysulfide shuttling; moreover, X-ray photoelectron spectroscopy analysis of the cycled separator confirmed the presence of lithium polysulfides in the AEG/CH matrix. Using this dual functional interaction approach, the lifetime of the pure sulfur-based cathode was extended to 3000 cycles at 1C-rate (1C = 1670 mA/g), decreasing the decay rate to 0.021% per cycle, a value that is among the best reported to date. A flexible battery based on this modified separator exhibited stable performance and could turn on multiple light-emitting diodes. Such modified membranes with good mechanical strength, high electronic conductivity, and anti-self-discharging shield appear to be a scalable solution for future high-energy battery systems.
High-performance biocomputing for simulating the spread of contagion over large contact networks
2012-01-01
Background: Many important biological problems can be modeled as contagion diffusion processes over interaction networks. This article shows how the EpiSimdemics interaction-based simulation system can be applied to the general contagion diffusion problem. Two specific problems, computational epidemiology and human immune system modeling, are given as examples. We then show how the graphics processing unit (GPU) within each compute node of a cluster can effectively be used to speed-up the execution of these types of problems. Results: We show that a single GPU can accelerate the EpiSimdemics computation kernel by a factor of 6 and the entire application by a factor of 3.3, compared to the execution time on a single core. When 8 CPU cores and 2 GPU devices are utilized, the speed-up of the computational kernel increases to 9.5. When combined with effective techniques for inter-node communication, excellent scalability can be achieved without significant loss of accuracy in the results. Conclusions: We show that interaction-based simulation systems can be used to model disparate and highly relevant problems in biology. We also show that offloading some of the work to GPUs in distributed interaction-based simulations can be an effective way to achieve increased intra-node efficiency. PMID:22537298
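The reported speedups are internally consistent under a simple Amdahl's-law model. This is our own consistency check, not an analysis from the article: if a 6x kernel speedup yields a 3.3x overall speedup, the kernel must account for roughly 84% of single-core runtime.

```python
# Amdahl's-law sketch (our assumption, not the article's model): infer what
# fraction of runtime the simulation kernel must occupy for a 6x kernel
# speedup to produce the reported 3.3x overall speedup.

def overall_speedup(kernel_fraction: float, kernel_speedup: float) -> float:
    """Overall speedup when only a fraction of the work is accelerated."""
    return 1.0 / ((1.0 - kernel_fraction) + kernel_fraction / kernel_speedup)

def kernel_fraction_for(overall: float, kernel_speedup: float) -> float:
    """Invert Amdahl's law for the accelerated fraction of runtime."""
    return (1.0 - 1.0 / overall) / (1.0 - 1.0 / kernel_speedup)

f = kernel_fraction_for(overall=3.3, kernel_speedup=6.0)
print(f"kernel fraction ≈ {f:.2f}")             # ≈ 0.84 of single-core runtime
print(f"check: {overall_speedup(f, 6.0):.2f}")  # recovers 3.3
```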
Kuethe, Jeffrey T; Basu, Kallol; Orr, Robert K; Ashley, Eric; Poirier, Marc; Tan, Lushi
2018-02-15
The evolution of a scalable process for the preparation of methylcyclobutanol-pyridyl ether 1 is described. Key aspects of this development, including careful control of the stereochemistry, elimination of chromatography, and application to kilogram-scale synthesis, are addressed. Copyright © 2017 Elsevier Ltd. All rights reserved.
Adapting for Scalability: Automating the Video Assessment of Instructional Learning
ERIC Educational Resources Information Center
Roberts , Amy M.; LoCasale-Crouch, Jennifer; Hamre, Bridget K.; Buckrop, Jordan M.
2017-01-01
Although scalable programs, such as online courses, have the potential to reach broad audiences, they may pose challenges to evaluating learners' knowledge and skills. Automated scoring offers a possible solution. In the current paper, we describe the process of creating and testing an automated means of scoring a validated measure of teachers'…
Design for Scalability: A Case Study of the River City Curriculum
ERIC Educational Resources Information Center
Clarke, Jody; Dede, Chris
2009-01-01
One-size-fits-all educational innovations do not work because they ignore contextual factors that determine an intervention's efficacy in a particular local situation. This paper presents a framework on how to design educational innovations for scalability through enhancing their adaptability for effective usage in a wide variety of settings. The…
USDA-ARS's Scientific Manuscript database
A scalable and modular LED illumination dome for microscopic scientific photography is described and illustrated, and methods for constructing such a dome are detailed. Dome illumination for insect specimens has become standard practice across the field of insect systematics, but many dome designs ...
Algorithmic Coordination in Robotic Networks
2010-11-29
appropriate performance, robustness and scalability properties for various task allocation, surveillance, and information gathering applications is...networking, we envision designing and analyzing algorithms with appropriate performance, robustness and scalability properties for various task...distributed algorithms for target assignments; based on the classic auction algorithms in static networks, we intend to design efficient algorithms in worst
ERIC Educational Resources Information Center
Kleiman, Glenn M.; Wolf, Mary Ann; Frye, David
2013-01-01
In conjunction with the relaunch of the Digital Learning Transition (DLT) Massive Open Online Course for Educators (MOOC-Ed) in September 2013, the Alliance and the Friday Institute released "The Digital Learning Transition MOOC for Educators: Exploring a Scalable Approach to Professional Development", a new paper that describes the…
Scalable Domain Decomposed Monte Carlo Particle Transport
NASA Astrophysics Data System (ADS)
O'Brien, Matthew Joseph
In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation. The main algorithms we consider are:
• Domain decomposition of constructive solid geometry: enables extremely large calculations in which the background geometry is too large to fit in the memory of a single computational node.
• Load balancing: keeps the workload per processor as even as possible so the calculation runs efficiently.
• Global particle find: if particles are on the wrong processor, globally resolve their locations to the correct processor based on particle coordinate and background domain.
• Visualizing constructive solid geometry, sourcing particles, deciding that particle streaming communication is completed, and spatial redecomposition.
These algorithms are some of the most important parallel algorithms required for domain decomposed Monte Carlo particle transport. We demonstrate that our previous algorithms were not scalable, prove that our new algorithms are scalable, and run some of the algorithms up to 2 million MPI processes on the Sequoia supercomputer.
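The "global particle find" step can be illustrated with a toy sketch. The uniform 1D slab decomposition and the function names below are our own simplifications for illustration, not the dissertation's implementation, which uses MPI exchanges over a constructive-solid-geometry decomposition:

```python
# Hypothetical sketch of a "global particle find": route particles that have
# streamed off their domain to the processor owning that spatial region.
# We assume a uniform 1D slab decomposition purely for illustration.

def owner_of(x: float, x_min: float, x_max: float, n_domains: int) -> int:
    """Map a particle coordinate to the rank owning that spatial slab."""
    width = (x_max - x_min) / n_domains
    idx = int((x - x_min) / width)
    return min(max(idx, 0), n_domains - 1)  # clamp boundary particles

def global_particle_find(particles, x_min, x_max, n_domains):
    """Bucket stray particles by destination rank (serial stand-in for an MPI exchange)."""
    outgoing = {rank: [] for rank in range(n_domains)}
    for x in particles:
        outgoing[owner_of(x, x_min, x_max, n_domains)].append(x)
    return outgoing

buckets = global_particle_find([0.1, 3.7, 9.9, 5.0], 0.0, 10.0, 4)
print(buckets)  # {0: [0.1], 1: [3.7], 2: [5.0], 3: [9.9]}
```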
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Brien, M. J.; Brantley, P. S.
2015-01-20
In order to run Monte Carlo particle transport calculations on new supercomputers with hundreds of thousands or millions of processors, care must be taken to implement scalable algorithms. This means that the algorithms must continue to perform well as the processor count increases. In this paper, we examine the scalability of: (1) globally resolving the particle locations on the correct processor, (2) deciding that particle streaming communication has finished, and (3) efficiently coupling neighbor domains together with different replication levels. We have run domain decomposed Monte Carlo particle transport on up to 2^21 = 2,097,152 MPI processes on the IBM BG/Q Sequoia supercomputer and observed scalable results that agree with our theoretical predictions. These calculations were carefully constructed to have the same amount of work on every processor, i.e. the calculation is already load balanced. We also examine load imbalanced calculations where each domain's replication level is proportional to its particle workload. In this case we show how to efficiently couple together adjacent domains to maintain within-workgroup load balance and minimize memory usage.
Heat-treated stainless steel felt as scalable anode material for bioelectrochemical systems.
Guo, Kun; Soeriyadi, Alexander H; Feng, Huajun; Prévoteau, Antonin; Patil, Sunil A; Gooding, J Justin; Rabaey, Korneel
2015-11-01
This work reports a simple and scalable method to convert stainless steel (SS) felt into an effective anode for bioelectrochemical systems (BESs) by means of heat treatment. X-ray photoelectron spectroscopy and cyclic voltammetry elucidated that the heat treatment generated an iron oxide rich layer on the SS felt surface. The iron oxide layer dramatically enhanced the electroactive biofilm formation on SS felt surface in BESs. Consequently, the sustained current densities achieved on the treated electrodes (1 cm²) were around 1.5±0.13 mA/cm², which was seven times higher than the untreated electrodes (0.22±0.04 mA/cm²). To test the scalability of this material, the heat-treated SS felt was scaled up to 150 cm² and similar current density (1.5 mA/cm²) was achieved on the larger electrode. The low cost, straightforwardness of the treatment, high conductivity and high bioelectrocatalytic performance make heat-treated SS felt a scalable anodic material for BESs. Copyright © 2015 Elsevier Ltd. All rights reserved.
Scalability, Timing, and System Design Issues for Intrinsic Evolvable Hardware
NASA Technical Reports Server (NTRS)
Hereford, James; Gwaltney, David
2004-01-01
In this paper we address several issues pertinent to intrinsic evolvable hardware (EHW). The first issue is scalability; namely, how the design space scales as the programming string for the programmable device gets longer. We develop a model for population size and the number of generations as a function of the programming string length, L, and show that the number of circuit evaluations is an O(L²) process. We compare our model to several successful intrinsic EHW experiments and discuss the many implications of our model. The second issue that we address is the timing of intrinsic EHW experiments. We show that the processing time is a small part of the overall time to derive or evolve a circuit and that major improvements in processor speed alone will have only a minimal impact on improving the scalability of intrinsic EHW. The third issue we consider is the system-level design of intrinsic EHW experiments. We review what other researchers have done to break the scalability barrier and contend that the type of reconfigurable platform and the evolutionary algorithm are tied together and impose limits on each other.
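The O(L²) growth in circuit evaluations can be made concrete with a toy model. We assume here, purely as a placeholder, that both population size and generation count grow linearly in L; the paper derives the actual functional forms, and the coefficients below are arbitrary:

```python
# Illustrative only: if both population size and generation count grow
# linearly with the programming-string length L, the total number of
# circuit evaluations grows as O(L^2). Coefficients a and b are arbitrary
# placeholders, not values from the paper.

def circuit_evaluations(L: int, a: float = 2.0, b: float = 0.5) -> int:
    population = int(a * L)   # assumed linear growth in L
    generations = int(b * L)  # assumed linear growth in L
    return population * generations

for L in (64, 128, 256):
    print(L, circuit_evaluations(L))
# doubling L quadruples the evaluation count: 4096 -> 16384 -> 65536
```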
Scalability of voltage-controlled filamentary and nanometallic resistance memory devices.
Lu, Yang; Lee, Jong Ho; Chen, I-Wei
2017-08-31
Much effort has been devoted to device and materials engineering to realize nanoscale resistance random access memory (RRAM) for practical applications, but a rational physical basis to be relied on to design scalable devices spanning many length scales is still lacking. In particular, there is no clear criterion for switching control in those RRAM devices in which resistance changes are limited to localized nanoscale filaments that experience concentrated heat, electric current and field. Here, we demonstrate voltage-controlled resistance switching, always at a constant characteristic critical voltage, for macro and nanodevices in both filamentary RRAM and nanometallic RRAM, and the latter switches uniformly and does not require a forming process. As a result, area-scalability can be achieved under a device-area-proportional current compliance for the low resistance state of the filamentary RRAM, and for both the low and high resistance states of the nanometallic RRAM. This finding will help design area-scalable RRAM at the nanoscale. It also establishes an analogy between RRAM and synapses, in which signal transmission is also voltage-controlled.
Superlinearly scalable noise robustness of redundant coupled dynamical systems.
Kohar, Vivek; Kia, Behnam; Lindner, John F; Ditto, William L
2016-03-01
We illustrate through theory and numerical simulations that redundant coupled dynamical systems can be extremely robust against local noise in comparison to uncoupled dynamical systems evolving in the same noisy environment. Previous studies have shown that the noise robustness of redundant coupled dynamical systems is linearly scalable and deviations due to noise can be minimized by increasing the number of coupled units. Here, we demonstrate that the noise robustness can actually be scaled superlinearly if some conditions are met and very high noise robustness can be realized with very few coupled units. We discuss these conditions and show that this superlinear scalability depends on the nonlinearity of the individual dynamical units. The phenomenon is demonstrated in discrete as well as continuous dynamical systems. This superlinear scalability not only provides us an opportunity to exploit the nonlinearity of physical systems without being bogged down by noise but may also help us in understanding the functional role of coupled redundancy found in many biological systems. Moreover, engineers can exploit superlinear noise suppression by starting a coupled system near (not necessarily at) the appropriate initial condition.
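The linear baseline that the abstract contrasts against can be sketched numerically: averaging N independently noisy units reduces the mean-square deviation by a factor of N. This toy demo is our own illustration of that baseline with Gaussian noise; it does not capture the paper's superlinear, nonlinearity-dependent effect:

```python
import random

# Linear-redundancy baseline: the variance of the average of N independent
# noisy units shrinks by a factor of N. The paper's contribution is showing
# that coupled nonlinear units can beat this linear scaling; this sketch
# demonstrates only the baseline.

random.seed(0)

def mean_square_deviation(n_units: int, trials: int = 20000, sigma: float = 1.0) -> float:
    """Monte Carlo estimate of the mean-square deviation of an N-unit average."""
    total = 0.0
    for _ in range(trials):
        avg = sum(random.gauss(0.0, sigma) for _ in range(n_units)) / n_units
        total += avg * avg
    return total / trials

v1, v4 = mean_square_deviation(1), mean_square_deviation(4)
print(v1 / v4)  # ≈ 4: variance shrinks linearly with the number of units
```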
Error-Resilient Unequal Error Protection of Fine Granularity Scalable Video Bitstreams
NASA Astrophysics Data System (ADS)
Cai, Hua; Zeng, Bing; Shen, Guobin; Xiong, Zixiang; Li, Shipeng
2006-12-01
This paper deals with the optimal packet loss protection issue for streaming the fine granularity scalable (FGS) video bitstreams over IP networks. Unlike many other existing protection schemes, we develop an error-resilient unequal error protection (ER-UEP) method that adds redundant information optimally for loss protection and, at the same time, cancels completely the dependency among bitstream after loss recovery. In our ER-UEP method, the FGS enhancement-layer bitstream is first packetized into a group of independent and scalable data packets. Parity packets, which are also scalable, are then generated. Unequal protection is finally achieved by properly shaping the data packets and the parity packets. We present an algorithm that can optimally allocate the rate budget between data packets and parity packets, together with several simplified versions that have lower complexity. Compared with conventional UEP schemes that suffer from bit contamination (caused by the bit dependency within a bitstream), our method guarantees successful decoding of all received bits, thus leading to strong error-resilience (at any fixed channel bandwidth) and high robustness (under varying and/or unclean channel conditions).
PROPER: global protein interaction network alignment through percolation matching.
Kazemi, Ehsan; Hassani, Hamed; Grossglauser, Matthias; Pezeshgi Modarres, Hassan
2016-12-12
The alignment of protein-protein interaction (PPI) networks enables us to uncover the relationships between different species, which leads to a deeper understanding of biological systems. Network alignment can be used to transfer biological knowledge between species. Although different PPI-network alignment algorithms were introduced during the last decade, developing an accurate and scalable algorithm that can find alignments with high biological and structural similarities among PPI networks is still challenging. In this paper, we introduce a new global network alignment algorithm for PPI networks called PROPER. Compared to other global network alignment methods, our algorithm shows higher accuracy and speed over real PPI datasets and synthetic networks. We show that the PROPER algorithm can detect large portions of conserved biological pathways between species. Also, using a simple parsimonious evolutionary model, we explain why PROPER performs well based on several different comparison criteria. We highlight that PROPER has high potential in further applications such as detecting biological pathways, finding protein complexes and PPI prediction. The PROPER algorithm is available at http://proper.epfl.ch.
Direct Preparation of Few Layer Graphene Epoxy Nanocomposites from Untreated Flake Graphite.
Throckmorton, James; Palmese, Giuseppe
2015-07-15
The natural availability of flake graphite and the exceptional properties of graphene and graphene-polymer composites create a demand for simple, cost-effective, and scalable methods for top-down graphite exfoliation. This work presents a novel method of few layer graphite nanocomposite preparation directly from untreated flake graphite using a room temperature ionic liquid and laminar shear processing regimen. The ionic liquid serves both as a solvent and initiator for epoxy polymerization and is incorporated chemically into the matrix. This nanocomposite shows low electrical percolation (0.005 v/v) and low-thickness (1-3 layers) graphite/graphene flakes by TEM. Additionally, studies of the processing conditions by rheometry, together with comparisons against solvent-free conditions, reveal the interactions between processing and matrix properties and provide insight into the theory of the chemical and physical exfoliation of graphite crystals and the resulting polymer matrix dispersion. An interaction model that correlates the interlayer shear physics of graphite flakes and processing parameters is proposed and tested.
Photogenerated Lectin Sensors Produced by Thiol-Ene/Yne Photo-Click Chemistry in Aqueous Solution
Norberg, Oscar; Lee, Irene H.; Aastrup, Teodor; Yan, Mingdi; Ramström, Olof
2012-01-01
The photoinitiated radical reactions between thiols and alkenes/alkynes (thiol-ene and thiol-yne chemistry) have been applied to a functionalization methodology to produce carbohydrate-presenting surfaces for analyses of biomolecular interactions. Polymer-coated quartz surfaces were functionalized with alkenes or alkynes in a straightforward photochemical procedure utilizing perfluorophenylazide (PFPA) chemistry. The alkene/alkyne surfaces were subsequently allowed to react with carbohydrate thiols in water under UV-irradiation. The reaction can be carried out in a drop of water directly on the surface without photoinitiator and any disulfide side products were easily washed away after the functionalization process. The resulting carbohydrate-presenting surfaces were evaluated in real-time studies of protein-carbohydrate interactions using a quartz crystal microbalance flow-through system with recurring injections of selected lectins with intermediate regeneration steps using low pH buffer. The resulting methodology proved fast, efficient and scalable to high-throughput analysis formats, and the produced surfaces showed significant protein binding with expected selectivities of the lectins used in the study. PMID:22341757
FastaValidator: an open-source Java library to parse and validate FASTA formatted sequences.
Waldmann, Jost; Gerken, Jan; Hankeln, Wolfgang; Schweer, Timmy; Glöckner, Frank Oliver
2014-06-14
Advances in sequencing technologies challenge the efficient importing and validation of FASTA formatted sequence data, which is still a prerequisite for most bioinformatic tools and pipelines. Comparative analysis of commonly used Bio*-frameworks (BioPerl, BioJava and Biopython) shows that their scalability and accuracy are hampered. FastaValidator represents a platform-independent, standardized, light-weight software library written in the Java programming language. It targets computer scientists and bioinformaticians writing software that needs to parse large amounts of sequence data quickly and accurately. For end-users FastaValidator includes an interactive out-of-the-box validation of FASTA formatted files, as well as a non-interactive mode designed for high-throughput validation in software pipelines. The accuracy and performance of the FastaValidator library qualifies it for large data sets such as those commonly produced by massive parallel (NGS) technologies. It offers scientists a fast, accurate and standardized method for parsing and validating FASTA formatted sequence data.
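The kind of check such a validator performs can be illustrated with a minimal stand-in. The sketch below is our own Python illustration, not the FastaValidator Java API, and the nucleotide alphabet is a simplifying assumption (real validators support the full IUPAC codes):

```python
# Minimal stand-in (not the FastaValidator API) for FASTA validation:
# every record starts with a '>' header and must carry at least one
# non-empty sequence line drawn from a permitted alphabet.

VALID_NUCLEOTIDES = set("ACGTUNacgtun")  # simplified; IUPAC defines more codes

def validate_fasta(lines):
    """Return (ok, message) for an iterable of FASTA lines."""
    seen_header = False
    has_sequence = False
    for n, line in enumerate(lines, start=1):
        line = line.rstrip("\n")
        if not line:
            continue
        if line.startswith(">"):
            if seen_header and not has_sequence:
                return False, f"line {n}: previous record has no sequence"
            seen_header, has_sequence = True, False
        else:
            if not seen_header:
                return False, f"line {n}: sequence before any header"
            if not set(line) <= VALID_NUCLEOTIDES:
                return False, f"line {n}: invalid characters"
            has_sequence = True
    if not seen_header:
        return False, "no FASTA records found"
    if not has_sequence:
        return False, "last record has no sequence"
    return True, "ok"

print(validate_fasta([">seq1", "ACGT", ">seq2", "GGTA"]))  # (True, 'ok')
```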
NASA Technical Reports Server (NTRS)
Splettstoesser, W. R.; Schultz, K. J.; Boxwell, D. A.; Schmitz, F. H.
1984-01-01
Acoustic data taken in the anechoic Deutsch-Niederlaendischer Windkanal (DNW) have documented the blade vortex interaction (BVI) impulsive noise radiated from a 1/7-scale model main rotor of the AH-1 series helicopter. Averaged model scale data were compared with averaged full scale, inflight acoustic data under similar nondimensional test conditions. At low advance ratios (mu = 0.164 to 0.194), the data scale remarkably well in level and waveform shape, and also duplicate the directivity pattern of BVI impulsive noise. At moderate advance ratios (mu = 0.224 to 0.270), the scaling deteriorates, suggesting that the model scale rotor is not adequately simulating the full scale BVI noise; presently, no proven explanation of this discrepancy exists. Carefully performed parametric variations over a complete matrix of testing conditions have shown that all of the four governing nondimensional parameters - tip Mach number at hover, advance ratio, local inflow ratio, and thrust coefficient - are highly sensitive to BVI noise radiation.
NASA Astrophysics Data System (ADS)
Millan, Jaime; McMillan, Janet; Brodin, Jeff; Lee, Byeongdu; Mirkin, Chad; Olvera de La Cruz, Monica
Programmable DNA interactions represent a robust scheme to self-assemble a rich variety of tunable superlattices, where intrinsic and in some cases non-desirable nano-scale building block interactions are substituted for DNA hybridization events. Recent advances in synthesis have allowed the extension of this successful scheme to proteins, where DNA distribution can be tuned independently of protein shape by selectively addressing surface residues, giving rise to assembly properties in three-dimensional protein-nanoparticle superlattices dependent on DNA distribution. In parallel to these advances, we introduced a scalable coarse-grained model that faithfully reproduces the previously observed co-assemblies from nanoparticle and protein conjugates. Herein, we implement this numerical model to explain the stability of complex protein-nanoparticle binary superlattices and to elucidate experimentally inaccessible features such as protein orientation. Also, we will discuss systematic studies that highlight the role of DNA distribution and sequence on two-dimensional protein-protein and protein-nanoparticle superlattices.
Silicon CMOS architecture for a spin-based quantum computer.
Veldhorst, M; Eenink, H G J; Yang, C H; Dzurak, A S
2017-12-15
Recent advances in quantum error correction codes for fault-tolerant quantum computing and physical realizations of high-fidelity qubits in multiple platforms give promise for the construction of a quantum computer based on millions of interacting qubits. However, the classical-quantum interface remains a nascent field of exploration. Here, we propose an architecture for a silicon-based quantum computer processor based on complementary metal-oxide-semiconductor (CMOS) technology. We show how a transistor-based control circuit together with charge-storage electrodes can be used to operate a dense and scalable two-dimensional qubit system. The qubits are defined by the spin state of a single electron confined in quantum dots, coupled via exchange interactions, controlled using a microwave cavity, and measured via gate-based dispersive readout. We implement a spin qubit surface code, showing the prospects for universal quantum computation. We discuss the challenges and focus areas that need to be addressed, providing a path for large-scale quantum computing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ozden, Sehmus; Tsafack, Thierry; Owuor, Peter S.
Owing to the weak physical interactions such as van der Waals and π-π interactions, which hold nanotubes together in carbon nanotube (CNT) bulk structures, the tubes can easily slide on each other. In creating covalent interconnections between individual carbon nanotube (CNT) structures we saw remarkable improvements in the properties of their three-dimensional (3D) bulk structures. The creation of such nanoengineered 3D solid structures with improved properties and low density remains one of the fundamental challenges in real-world applications. We also report the scalable synthesis of a low-density 3D macroscopic structure made of covalently interconnected nanotubes using a free-radical polymerization method after functionalizing CNTs with allylamine monomers. The resulting interconnected, highly porous solid structure exhibits higher mechanical properties, larger surface area and greater porosity than non-crosslinked nanotube structures. To gain further insights into the deformation mechanisms of nanotubes, fully atomistic reactive molecular dynamics simulations are used. Here we demonstrate one such utility in CO2 uptake, whose interconnected solid structure performed better than non-interconnected structures.
Wang, Letian; Rho, Yoonsoo; Shou, Wan; Hong, Sukjoon; Kato, Kimihiko; Eliceiri, Matthew; Shi, Meng; Grigoropoulos, Costas P; Pan, Heng; Carraro, Carlo; Qi, Dongfeng
2018-03-27
Manipulating and tuning nanoparticles by means of optical field interactions is of key interest for nanoscience and applications in electronics and photonics. We report scalable, direct, and optically modulated writing of nanoparticle patterns (size, number, and location) of high precision using a pulsed nanosecond laser. The complex nanoparticle arrangement is modulated by the laser pulse energy and polarization with the particle size ranging from 60 to 330 nm. Furthermore, we report fast cooling-rate induced phase switching of crystalline Si nanoparticles to the amorphous state. Such phase switching has usually been observed in compound phase change materials like GeSbTe. The ensuing modification of atomic structure leads to dielectric constant switching. Based on these effects, a multiscale laser-assisted method of fabricating Mie resonator arrays is proposed. The number of Mie resonators, as well as the resonance peaks and dielectric constants of selected resonators, can be programmed. The programmable light-matter interaction serves as a mechanism to fabricate optical metasurfaces, structural color, and multidimensional optical storage devices.
High strength films from oriented, hydrogen-bonded "graphamid" 2D polymer molecular ensembles.
Sandoz-Rosado, Emil; Beaudet, Todd D; Andzelm, Jan W; Wetzel, Eric D
2018-02-27
The linear polymer poly(p-phenylene terephthalamide), better known by its tradename Kevlar, is an icon of modern materials science due to its remarkable strength, stiffness, and environmental resistance. Here, we propose a new two-dimensional (2D) polymer, "graphamid", that closely resembles Kevlar in chemical structure, but is mechanically advantaged by virtue of its 2D structure. Using atomistic calculations, we show that graphamid comprises covalently-bonded sheets bridged by a high population of strong intermolecular hydrogen bonds. Molecular and micromechanical calculations predict that these strong intermolecular interactions allow stiff, high strength (6-8 GPa), and tough films from ensembles of finite graphamid molecules. In contrast, traditional 2D materials like graphene have weak intermolecular interactions, leading to ensembles of low strength (0.1-0.5 GPa) and brittle fracture behavior. These results suggest that hydrogen-bonded 2D polymers like graphamid would be transformative in enabling scalable, lightweight, high performance polymer films of unprecedented mechanical performance.
NASA Astrophysics Data System (ADS)
Puri, Shruti; McMahon, Peter L.; Yamamoto, Yoshihisa
2014-10-01
We propose a scheme to perform single-shot quantum nondemolition (QND) readout of the spin of an electron trapped in a semiconductor quantum dot (QD). Our proposal relies on the interaction of the QD electron spin with optically excited, quantum well (QW) microcavity exciton-polaritons. The spin-dependent Coulomb exchange interaction between the QD electron and cavity polaritons causes the phase and intensity response of left circularly polarized light to be different than that of right circularly polarized light, in such a way that the QD electron's spin can be inferred from the response to a linearly polarized probe reflected or transmitted from the cavity. We show that with careful device design it is possible to essentially eliminate spin-flip Raman transitions. Thus a QND measurement of the QD electron spin can be performed within a few tens of nanoseconds with fidelity ~99.95%. This improves upon current optical QD spin readout techniques across multiple metrics, including speed and scalability.
Scuba: scalable kernel-based gene prioritization.
Zampieri, Guido; Tran, Dinh Van; Donini, Michele; Navarin, Nicolò; Aiolli, Fabio; Sperduti, Alessandro; Valle, Giorgio
2018-01-25
The uncovering of genes linked to human diseases is a pressing challenge in molecular biology and precision medicine. This task is often hindered by the large number of candidate genes and by the heterogeneity of the available information. Computational methods for the prioritization of candidate genes can help to cope with these problems. In particular, kernel-based methods are a powerful resource for the integration of heterogeneous biological knowledge; however, their practical implementation is often precluded by their limited scalability. We propose Scuba, a scalable kernel-based method for gene prioritization. It implements a novel multiple kernel learning approach, based on a semi-supervised perspective and on the optimization of the margin distribution. Scuba is optimized to cope with strongly unbalanced settings where known disease genes are few and large-scale predictions are required. Importantly, it is able to deal efficiently both with a large number of candidate genes and with an arbitrary number of data sources. As a direct consequence of scalability, Scuba also integrates a new efficient strategy to select optimal kernel parameters for each data source. We performed cross-validation experiments and simulated a realistic usage setting, showing that Scuba outperforms a wide range of state-of-the-art methods. Scuba achieves state-of-the-art performance and has enhanced scalability compared to existing kernel-based approaches for genomic data. This method can be useful to prioritize candidate genes, particularly when their number is large or when input data are highly heterogeneous. The code is freely available at https://github.com/gzampieri/Scuba .
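The multiple kernel combination at the heart of such methods can be pictured, in a deliberately simplified form, as a convex combination of trace-normalized Gram matrices; this is not Scuba's actual margin-distribution optimization, and all names and data below are illustrative:

```python
import numpy as np

def combine_kernels(kernels, weights):
    """Weighted sum of trace-normalized kernel matrices.

    A simplified stand-in for multiple kernel learning: each data
    source contributes one Gram matrix, and trace normalization keeps
    sources on a comparable scale before mixing.
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()          # convex combination
    combined = np.zeros_like(kernels[0], dtype=float)
    for K, w in zip(kernels, weights):
        K = np.asarray(K, dtype=float)
        combined += w * K / np.trace(K)        # trace-normalize each source
    return combined

# Two toy "sources" over three genes
K1 = np.array([[2.0, 1.0, 0.0], [1.0, 2.0, 1.0], [0.0, 1.0, 2.0]])
K2 = np.eye(3)
K = combine_kernels([K1, K2], weights=[3, 1])
```

In a real prioritization setting, the weights themselves would be learned from the known disease genes rather than fixed by hand.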
Scalable hybrid computation with spikes.
Sarpeshkar, Rahul; O'Halloran, Micah
2002-09-01
We outline a hybrid analog-digital scheme for computing with three important features that enable it to scale to systems of large complexity: First, like digital computation, which uses several one-bit precise logical units to collectively compute a precise answer to a computation, the hybrid scheme uses several moderate-precision analog units to collectively compute a precise answer to a computation. Second, frequent discrete signal restoration of the analog information prevents analog noise and offset from degrading the computation. And, third, a state machine enables complex computations to be created using a sequence of elementary computations. A natural choice for implementing this hybrid scheme is one based on spikes because spike-count codes are digital, while spike-time codes are analog. We illustrate how spikes afford easy ways to implement all three components of scalable hybrid computation. First, as an important example of distributed analog computation, we show how spikes can create a distributed modular representation of an analog number by implementing digital carry interactions between spiking analog neurons. Second, we show how signal restoration may be performed by recursive spike-count quantization of spike-time codes. And, third, we use spikes from an analog dynamical system to trigger state transitions in a digital dynamical system, which reconfigures the analog dynamical system using a binary control vector; such feedback interactions between analog and digital dynamical systems create a hybrid state machine (HSM). The HSM extends and expands the concept of a digital finite-state-machine to the hybrid domain. We present experimental data from a two-neuron HSM on a chip that implements error-correcting analog-to-digital conversion with the concurrent use of spike-time and spike-count codes. We also present experimental data from silicon circuits that implement HSM-based pattern recognition using spike-time synchrony. We outline how HSMs may be used to perform learning, vector quantization, spike pattern recognition and generation, and how they may be reconfigured.
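The distributed digit-with-carry representation described above can be caricatured in a few lines of software; this is an illustrative analogue (base, values, and rounding are invented for the sketch), not the spiking circuit itself:

```python
def restore(digits, base=4):
    """Propagate 'carry interactions' so each moderate-precision unit
    ends up holding a digit in [0, base): a software caricature of
    discrete signal restoration in the hybrid scheme.
    """
    out = []
    carry = 0
    for d in digits:                 # least-significant unit first
        total = round(d) + carry     # quantize: spike-count restoration
        out.append(total % base)
        carry = total // base        # carry passed to the next unit
    return out, carry

def value(digits, base=4):
    """Decode the number represented collectively by the units."""
    return sum(d * base**i for i, d in enumerate(digits))

# Noisy "analog" digits, each unit only moderately precise:
noisy = [5.1, 2.9, 0.2]              # intended value 5 + 3*4 + 0*16 = 17
digits, carry = restore(noisy)       # -> clean digits [1, 0, 1], carry 0
```

After restoration, each unit again holds a valid digit, and the collective value (17) is preserved despite per-unit noise, which is the point of the distributed modular representation.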
An Interactive Web-Based Analysis Framework for Remote Sensing Cloud Computing
NASA Astrophysics Data System (ADS)
Wang, X. Z.; Zhang, H. M.; Zhao, J. H.; Lin, Q. H.; Zhou, Y. C.; Li, J. H.
2015-07-01
Spatiotemporal data, especially remote sensing data, are widely used in ecological, geographical, agricultural, and military research and applications. With the development of remote sensing technology, more and more remote sensing data are accumulated and stored in the cloud. Providing cloud users with an effective way to access and analyse these massive spatiotemporal data in web clients has become an urgent issue. In this paper, we propose a new scalable, interactive, and web-based cloud computing solution for massive remote sensing data analysis. We build a spatiotemporal analysis platform to provide end users with a safe and convenient way to access massive remote sensing data stored in the cloud. The lightweight cloud storage system used to store public data and users' private data is constructed on an open-source distributed file system: massive remote sensing data are stored as public data, while intermediate and input data are stored as private data. The elastic, scalable, and flexible cloud computing environment is built using Docker, an open-source lightweight container technology for the Linux operating system. In the Docker container, open-source software such as IPython, NumPy, GDAL, and GRASS GIS is deployed. Users can write scripts in an IPython Notebook web page through the browser to process data, and the scripts are submitted to the IPython kernel for execution. Comparing the performance of remote sensing analysis tasks executed in Docker containers, KVM virtual machines, and physical machines shows that the Docker-based environment makes the greatest use of host system resources and can handle more concurrent spatiotemporal computing tasks. Docker provides resource isolation for I/O, CPU, and memory, which offers a security guarantee when processing remote sensing data in the IPython Notebook. Users can write complex data-processing code directly on the web, so they can design their own data-processing algorithms.
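As a minimal sketch of the kind of script a user might run in such a notebook environment, the per-pixel computation below uses synthetic NumPy arrays in place of rasters that would normally be read through GDAL (the band values are invented):

```python
import numpy as np

def ndvi(red, nir, eps=1e-9):
    """Normalized difference vegetation index, a typical per-pixel
    remote-sensing computation: (NIR - red) / (NIR + red)."""
    red = red.astype(float)
    nir = nir.astype(float)
    return (nir - red) / (nir + red + eps)

# Synthetic 2x2 "bands"; on the platform these would come from
# cloud-stored imagery, e.g. read into arrays with GDAL.
red = np.array([[0.1, 0.2], [0.3, 0.4]])
nir = np.array([[0.5, 0.6], [0.3, 0.8]])
result = ndvi(red, nir)
```

Because such a script touches only arrays and the standard scientific stack, it runs identically in a Docker container, a KVM guest, or on bare metal, which is what makes the performance comparison in the abstract meaningful.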
Systems 2020: Strategic Initiative
2010-08-29
research areas that enable agile, assured, efficient, and scalable systems engineering approaches to support the development of these systems. This...To increase development efficiency and ensure flexible solutions in the field, systems engineers need powerful, agile, interoperable, and scalable...design and development will be transformed as a result of Systems 2020, along with complementary enabling acquisition practice improvements initiated in
CASTOR: Widely Distributed Scalable Infospaces
2008-11-01
1 i Progress against Planned Objectives Enable nimble apps that react fast as...generation of scalable, reliable, ultra-fast event notification in Linux data centers. • Maelstrom, a spin-off from Ricochet, offers a powerful new option...out potential enhancements to WS-EVENTING and WS-NOTIFICATION based on our work. Potential impact for the warfighter. QSM achieves extremely fast
2011-01-01
present performance statistics to explain the scalability behavior. Keywords-atmospheric models, time integrators, MPI, scalability, performance; I...across inter-element boundaries. Basis functions are constructed as tensor products of Lagrange polynomials ψ_i(x) = h_α(ξ) ⊗ h_β(η) ⊗ h_γ(ζ), where h_α
Kelly, Benjamin J; Fitch, James R; Hu, Yangqiu; Corsmeier, Donald J; Zhong, Huachun; Wetzel, Amy N; Nordquist, Russell D; Newsom, David L; White, Peter
2015-01-20
While advances in genome sequencing technology make population-scale genomics a possibility, current approaches for analysis of these data rely upon parallelization strategies that have limited scalability, complex implementation and lack reproducibility. Churchill, a balanced regional parallelization strategy, overcomes these challenges, fully automating the multiple steps required to go from raw sequencing reads to variant discovery. Through implementation of novel deterministic parallelization techniques, Churchill allows computationally efficient analysis of a high-depth whole genome sample in less than two hours. The method is highly scalable, enabling full analysis of the 1000 Genomes raw sequence dataset in a week using cloud resources. http://churchill.nchri.org/.
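The regional idea can be illustrated with a toy splitter that divides chromosomes into contiguous, near-equal regions for independent workers; this is a schematic of balanced partitioning in general, not Churchill's actual algorithm (which must also handle reads spanning region boundaries while keeping results deterministic):

```python
def balanced_regions(chrom_lengths, n_workers):
    """Split a genome into contiguous regions of near-equal size so
    each worker receives a similar load. Illustrative only: lengths
    and worker count below are invented toy values.
    """
    total = sum(chrom_lengths.values())
    target = total // n_workers + 1          # bases per region
    regions = []
    for chrom, length in chrom_lengths.items():
        start = 0
        while start < length:
            end = min(start + target, length)
            regions.append((chrom, start, end))
            start = end                      # regions tile each chromosome
    return regions

# Toy genome: two chromosomes, four workers
regions = balanced_regions({"chr1": 2_500_000, "chr2": 1_500_000}, n_workers=4)
```

Each region can then be dispatched to a separate process or cloud node, and because the split depends only on the chromosome lengths, the same input always yields the same partition.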
Scalability of transport parameters with pore sizes in isodense disordered media
NASA Astrophysics Data System (ADS)
Reginald, S. William; Schmitt, V.; Vallée, R. A. L.
2014-09-01
We study light multiple scattering in complex disordered porous materials. High internal phase emulsion-based isodense polystyrene foams are designed. Two types of samples, exhibiting different pore size distributions, are investigated for different slab thicknesses varying from L = 1 mm to 10 mm. Optical measurements combining steady-state and time-resolved detection are used to characterize the photon transport parameters. Very interestingly, a clear scalability of the transport mean free path ℓ_t with the average size of the pores S is observed, featuring a constant velocity of the transport energy in these isodense structures. This study strongly motivates further investigations into the limits of validity of this scalability as the scattering strength of the system increases.
Scalable, full-colour and controllable chromotropic plasmonic printing
Xue, Jiancai; Zhou, Zhang-Kai; Wei, Zhiqiang; Su, Rongbin; Lai, Juan; Li, Juntao; Li, Chao; Zhang, Tengwei; Wang, Xue-Hua
2015-01-01
Plasmonic colour printing has drawn wide attention as a promising candidate for the next-generation colour-printing technology. However, an efficient approach to realizing full colour and scalable fabrication is still lacking, which prevents plasmonic colour printing from reaching practical applications. Here we present a scalable and full-colour plasmonic printing approach by combining conjugate twin-phase modulation with a plasmonic broadband absorber. More importantly, our approach also demonstrates controllable chromotropic capability, that is, the ability to undergo reversible colour transformations. This chromotropic capability affords enormous potential for building functionalized prints for anticounterfeiting, special labels, and high-density data-encryption storage. With such excellent performance in functional colour applications, this colour-printing approach could pave the way for plasmonic colour printing in real-world commercial use. PMID:26567803
A Study on Fast Gates for Large-Scale Quantum Simulation with Trapped Ions
Taylor, Richard L.; Bentley, Christopher D. B.; Pedernales, Julen S.; Lamata, Lucas; Solano, Enrique; Carvalho, André R. R.; Hope, Joseph J.
2017-01-01
Large-scale digital quantum simulations require thousands of fundamental entangling gates to construct the simulated dynamics. Despite success in a variety of small-scale simulations, quantum information processing platforms have hitherto failed to demonstrate the combination of precise control and scalability required to systematically outmatch classical simulators. We analyze how fast gates could enable trapped-ion quantum processors to achieve the requisite scalability to outperform classical computers without error correction. We analyze the performance of a large-scale digital simulator and find that a fidelity of around 70% is realizable for π-pulse infidelities below 10−5 in traps subject to realistic rates of heating and dephasing. This scalability relies on fast gates: entangling gates faster than the trap period. PMID:28401945
Scalable cluster administration - Chiba City I approach and lessons learned.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Navarro, J. P.; Evard, R.; Nurmi, D.
2002-07-01
Systems administrators of large clusters often need to perform the same administrative activity hundreds or thousands of times. Often such activities are time-consuming, especially the tasks of installing and maintaining software. By combining network services such as DHCP, TFTP, FTP, HTTP, and NFS with remote hardware control, cluster administrators can automate all administrative tasks. Scalable cluster administration addresses the following challenge: what systems design techniques can cluster builders use to automate cluster administration on very large clusters? We describe the approach used in the Mathematics and Computer Science Division of Argonne National Laboratory on Chiba City I, a 314-node Linux cluster, and we analyze the scalability, flexibility, and reliability benefits and limitations of that approach.
Nuclear data made easily accessible through the Notre Dame Nuclear Database
NASA Astrophysics Data System (ADS)
Khouw, Timothy; Lee, Kevin; Fasano, Patrick; Mumpower, Matthew; Aprahamian, Ani
2014-09-01
In 1994, the NNDC revolutionized nuclear research by providing a colorful, clickable, searchable database over the internet. Over the last twenty years, web technology has evolved dramatically. Our project, the Notre Dame Nuclear Database, aims to provide a more comprehensive and broadly searchable interactive body of data. The database can be searched by an array of filters, including metadata such as the facility where a measurement was made, the author(s), or the date of publication for the datum of interest. The user interface takes full advantage of HTML, a web markup language; CSS (Cascading Style Sheets), which defines the aesthetics of the website; and JavaScript, a language that can process complex data. A command-line interface is supported that interacts with the database directly on a user's local machine, providing single-command access to data. This is possible through the use of a standardized API (application programming interface) that relies upon well-defined filtering variables to produce customized search results. We offer an innovative chart of nuclides utilizing scalable vector graphics (SVG) to deliver users an unsurpassed level of interactivity supported on all computers and mobile devices. We will present a functional demo of our database at the conference.
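The filter-driven API can be pictured as assembling a request URL from metadata filters; the endpoint and parameter names below are hypothetical stand-ins, since the database defines its own API variables:

```python
from urllib.parse import urlencode

def build_query(base_url, **filters):
    """Compose an API request URL from metadata filters (author,
    facility, year, ...). Endpoint and parameter names here are
    hypothetical illustrations, not the database's real API.
    """
    params = {k: v for k, v in filters.items() if v is not None}
    return f"{base_url}?{urlencode(sorted(params.items()))}"

# A command-line client could wrap exactly this call.
url = build_query("https://example.org/api/levels",
                  nuclide="26Al", author="Aprahamian", year=2014)
```

Sorting the parameters makes the generated URLs stable, which is convenient when caching or scripting repeated queries.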
Baran, Michael; Lehrer, Nicole; Duff, Margaret; Venkataraman, Vinay; Turaga, Pavan; Ingalls, Todd; Rymer, W. Zev; Wolf, Steven L.; Rikakis, Thanassis
2015-01-01
Interactive neurorehabilitation (INR) systems provide therapy that can evaluate and deliver feedback on a patient's movement computationally. There are currently many approaches to INR design and implementation, without a clear indication of which methods to utilize best. This article presents key interactive computing, motor learning, and media arts concepts utilized by an interdisciplinary group to develop adaptive, mixed reality INR systems for upper extremity therapy of patients with stroke. Two INR systems are used as examples to show how the concepts can be applied within: (1) a small-scale INR clinical study that achieved integrated improvement of movement quality and functionality through continuously supervised therapy and (2) a pilot study that achieved improvement of clinical scores with minimal supervision. The notion is proposed that some of the successful approaches developed and tested within these systems can form the basis of a scalable design methodology for other INR systems. A coherent approach to INR design is needed to facilitate the use of the systems by physical therapists, increase the number of successful INR studies, and generate rich clinical data that can inform the development of best practices for use of INR in physical therapy. PMID:25425694
Gallium arsenide based surface plasmon resonance for glucose monitoring
NASA Astrophysics Data System (ADS)
Patil, Harshada; Sane, Vani; Sriram, G.; Indumathi, T. S; Sharan, Preeta
2015-07-01
Recent trends in the semiconductor and microwave industries have enabled the development of scalable microfabrication technology with performance superior to that of its counterparts. Surface plasmon resonance (SPR) based biosensors are a special class of optical sensors that respond to electromagnetic waves. A bio-molecular recognition element immobilized on the SPR sensor surface layer reveals a characteristic interaction with various sample solutions during the passage of light. The present work revolves around developing painless glucose monitoring systems using fluids containing glucose, such as saliva, urine, sweat, or tears, instead of blood samples. Non-invasive glucose monitoring has long been simulated using label-free detection mechanisms, and the same concept is adopted here. In label-free detection, target molecules are not labeled or altered and are detected in their natural forms. Label-free detection involves measuring the refractive index (RI) change induced by molecular interactions, which relates to sample concentration or surface density rather than total sample mass. Simulation shows that the results obtained are highly accurate and sensitive. The structure used here is an SPR sensor based on a channel waveguide. The tools used for simulation are RSoft FullWAVE, MEEP, and MATLAB.
Global Alignment of Pairwise Protein Interaction Networks for Maximal Common Conserved Patterns
Tian, Wenhong; Samatova, Nagiza F.
2013-01-01
A number of tools for the alignment of protein-protein interaction (PPI) networks have laid the foundation for PPI network analysis. Most alignment tools focus on finding conserved interaction regions across PPI networks through either local or global mapping of similar sequences. Researchers are still trying to improve the speed, scalability, and accuracy of network alignment. In view of this, we introduce a connected-components based fast algorithm, HopeMap, for network alignment. Observing that the number of true orthologs across species is small compared to the total number of proteins in all species, we take a different approach based on a precompiled list of homologs identified by KO terms. Applying this approach to S. cerevisiae (yeast) and D. melanogaster (fly), E. coli K12 and S. typhimurium, and E. coli K12 and C. crescentus, we analyze all clusters identified in the alignment. The results are evaluated against up-to-date known gene annotations, gene ontology (GO), and KEGG ortholog groups (KO). Compared to existing tools, our approach is fast, with linear computational cost; highly accurate in terms of KO- and GO-term specificity and sensitivity; and easily extended to multiple alignments.
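The connected-components step can be sketched as follows, where nodes stand for homolog pairs and edges for interactions conserved in both networks (the graph data are toy values, and HopeMap's full pipeline involves considerably more):

```python
from collections import defaultdict

def connected_components(edges):
    """Group nodes into connected components via iterative DFS.
    In a HopeMap-style alignment, nodes would be homolog pairs and
    edges conserved interactions shared by both PPI networks.
    """
    graph = defaultdict(set)
    for u, v in edges:
        graph[u].add(v)
        graph[v].add(u)
    seen, components = set(), []
    for start in graph:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:                      # depth-first traversal
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(graph[node] - comp)
        seen |= comp
        components.append(comp)
    return components

# Toy conserved-interaction edges between homolog pairs a..e
comps = connected_components([("a", "b"), ("b", "c"), ("d", "e")])
```

Each pass over the edge list is linear in the number of edges, which is consistent with the linear computational cost claimed for the connected-components strategy.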
BioEve Search: A Novel Framework to Facilitate Interactive Literature Search
Ahmed, Syed Toufeeq; Davulcu, Hasan; Tikves, Sukru; Nair, Radhika; Zhao, Zhongming
2012-01-01
Background. Recent advances in computational and biological methods over the last two decades have remarkably changed the scale of biomedical research, and with them began unprecedented growth in both the production of biomedical data and the amount of published literature discussing it. An automated extraction system coupled with a cognitive search and navigation service over these document collections would not only save time and effort but also pave the way to discovering hitherto unknown information implicitly conveyed in the texts. Results. We developed a novel framework (named "BioEve") that seamlessly integrates Faceted Search (Information Retrieval) with an Information Extraction module to provide an interactive search experience for researchers in the life sciences. It enables guided step-by-step search query refinement by suggesting concepts and entities (like genes, drugs, and diseases) to quickly filter and modify the search direction, thereby facilitating an enriched paradigm in which users can discover related concepts and keywords while information seeking. Conclusions. The BioEve Search framework makes it easier to enable scalable interactive search over large collections of textual articles and to discover knowledge hidden in thousands of biomedical literature articles with ease. PMID:22693501
A Synthetic Community System for Probing Microbial Interactions Driven by Exometabolites
Chodkowski, John L.
2017-01-01
Though most microorganisms live within a community, we have modest knowledge about microbial interactions and their implications for community properties and ecosystem functions. To advance understanding of microbial interactions, we describe a straightforward synthetic community system that can be used to interrogate exometabolite interactions among microorganisms. The filter plate system (also known as the Transwell system) physically separates microbial populations, but allows for chemical interactions via a shared medium reservoir. Exometabolites, including small molecules, extracellular enzymes, and antibiotics, are assayed from the reservoir using sensitive mass spectrometry. Community member outcomes, such as growth, productivity, and gene regulation, can be determined using flow cytometry, biomass measurements, and transcript analyses, respectively. The synthetic community design allows for determination of the consequences of microbiome diversity for emergent community properties and for functional changes over time or after perturbation. Because it is versatile, scalable, and accessible, this synthetic community system has the potential to practically advance knowledge of microbial interactions that occur within both natural and artificial communities. IMPORTANCE Understanding microbial interactions is a fundamental objective in microbiology and ecology. The synthetic community system described here can set into motion a range of research to investigate how the diversity of a microbiome and interactions among its members impact its function, where function can be measured as exometabolites. The system allows for community exometabolite profiling to be coupled with genome mining, transcript analysis, and measurements of member productivity and population size. It can also facilitate discovery of natural products that are only produced within microbial consortia. Thus, this synthetic community system has utility to address fundamental questions about a diversity of possible microbial interactions that occur in both natural and engineered ecosystems. PMID:29152587
NASA Astrophysics Data System (ADS)
de la Cita, V. M.; Bosch-Ramon, V.; Paredes-Fortuny, X.; Khangulyan, D.; Perucho, M.
2016-06-01
Context. Stars and their winds can contribute to the non-thermal emission in extragalactic jets. Because of the complexity of jet-star interactions, the properties of the resulting emission are closely linked to those of the emitting flows. Aims. We simulate the interaction between a stellar wind and a relativistic extragalactic jet and use the hydrodynamic results to compute the non-thermal emission under different conditions. Methods. We performed relativistic axisymmetric hydrodynamical simulations of a relativistic jet interacting with a supersonic, non-relativistic stellar wind. We computed the corresponding streamlines out of the simulation results and calculated the injection, evolution, and emission of non-thermal particles accelerated in the jet shock, focusing on electrons or e±-pairs. Several cases were explored, considering different jet-star interaction locations, magnetic fields, and observer lines of sight. The jet luminosity and star properties were fixed, but the results are easily scalable when these parameters are changed. Results. Individual jet-star interactions produce synchrotron and inverse Compton emission that peaks from X-rays to MeV energies (depending on the magnetic field), and at ~100-1000 GeV (depending on the stellar type), respectively. The radiation spectrum is hard in the scenarios explored here as a result of non-radiative cooling dominance, as low-energy electrons are efficiently advected even under relatively high magnetic fields. Interactions of jets with cold stars lead to an even harder inverse Compton spectrum because of the Klein-Nishina effect in the cross section. Doppler boosting has a strong effect on the observer luminosity. Conclusions. The emission levels for individual interactions found here are in line with previous, more approximate estimates, strengthening the hypothesis that collective jet-star interactions could significantly contribute at high energies under efficient particle acceleration.
NASA Astrophysics Data System (ADS)
Sanan, P.; Tackley, P. J.; Gerya, T.; Kaus, B. J. P.; May, D.
2017-12-01
StagBL is an open-source parallel solver and discretization library for geodynamic simulation, encapsulating and optimizing operations essential to staggered-grid finite volume Stokes flow solvers. It provides a parallel staggered-grid abstraction with a high-level interface in C and Fortran. On top of this abstraction, tools are available to define boundary conditions and interact with particle systems. Tools and examples to efficiently solve Stokes systems defined on the grid are provided in small (direct solver), medium (simple preconditioners), and large (block factorization and multigrid) model regimes. By working directly with leading application codes (StagYY, I3ELVIS, and LaMEM) and providing an API and examples to integrate with others, StagBL aims to become a community tool supplying scalable, portable, reproducible performance toward novel science in regional- and planet-scale geodynamics and planetary science. By implementing kernels used by many research groups beneath a uniform abstraction layer, the library will enable optimization for modern hardware, thus reducing community barriers to large- or extreme-scale parallel simulation on modern architectures. In particular, the library will include CPU-, manycore-, and GPU-optimized variants of matrix-free operators and multigrid components. The common layer provides a framework upon which to introduce innovative new tools. StagBL will leverage p4est to provide distributed adaptive meshes and incorporate a multigrid convergence analysis tool. These options, in addition to a wealth of solver options provided by an interface to PETSc, will make the most modern solution techniques available from a common interface. StagBL in turn provides a PETSc interface, DMStag, to its central staggered-grid abstraction. We present public version 0.5 of StagBL, including preliminary integration with application codes and demonstrations with its own demonstration application, StagBLDemo. Central to StagBL is the notion of an uninterrupted pipeline from toy/teaching codes to high-performance, extreme-scale solves. StagBLDemo replicates the functionality of an advanced MATLAB-style regional geodynamics code, thus providing users with a concrete procedure to exceed the performance and scalability limitations of smaller-scale tools.
Agents, assemblers, and ANTS: scheduling assembly with market and biological software mechanisms
NASA Astrophysics Data System (ADS)
Toth-Fejel, Tihamer T.
2000-06-01
Nanoscale assemblers will need robust, scalable, flexible, and well-understood mechanisms such as software agents to control them. This paper discusses assemblers and agents, and proposes a taxonomy of their possible interaction. Molecular assembly is seen as a special case of general assembly, subject to many of the same issues, such as the advantages of convergent assembly, and the problem of scheduling. This paper discusses the contract net architecture of ANTS, an agent-based scheduling application under development. It also describes an algorithm for least commitment scheduling, which uses probabilistic committed capacity profiles of resources over time, along with realistic costs, to provide an abstract search space over which the agents can wander to quickly find optimal solutions.
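The least-commitment idea of scoring candidate bookings against probabilistic committed-capacity profiles can be illustrated with a small sketch. Everything here is an illustrative assumption (function names, the cost model, the profile values), not the ANTS implementation:

```python
# Hypothetical sketch of least-commitment scheduling over probabilistic
# committed-capacity profiles. Names and the cost model are illustrative,
# not taken from the ANTS contract-net application.

def expected_cost(profile, start, duration, slot_cost):
    """Expected cost of booking [start, start+duration) on a resource whose
    profile[t] is the probability that slot t is already committed."""
    slots = profile[start:start + duration]
    if len(slots) < duration:
        return float("inf")  # task does not fit within the horizon
    # Penalize slots likely to be taken, plus a base cost per slot used.
    return sum(slot_cost + p for p in slots)

def best_slot(profile, duration, slot_cost=0.1):
    """Least-commitment choice: the start time with the lowest expected cost."""
    horizon = len(profile) - duration + 1
    return min(range(horizon),
               key=lambda s: expected_cost(profile, s, duration, slot_cost))

profile = [0.9, 0.8, 0.1, 0.2, 0.1, 0.7]  # commitment probability per time slot
start = best_slot(profile, duration=3)    # agents would bid using such scores
```

An agent in a contract net could announce tasks, have resource agents bid with such expected costs, and award the booking to the cheapest bidder.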
NASA Astrophysics Data System (ADS)
Sharpanskykh, Alexei; Treur, Jan
Employing rich internal agent models of actors in large-scale socio-technical systems often results in scalability issues. The problem addressed in this paper is how to improve the computational properties of a complex internal agent model while preserving its behavioral properties. The problem is addressed for the case of an existing affective-cognitive decision-making model instantiated for an emergency scenario. For this internal decision model, an abstracted behavioral agent model is obtained, which yields a substantial increase in computational efficiency at the cost of approximately 1% behavioral error. The abstraction technique used can be applied to a wide range of internal agent models with loops, for example, those involving mutual affective-cognitive interactions.
Improving the gate fidelity of capacitively coupled spin qubits
NASA Astrophysics Data System (ADS)
Wang, Xin; Barnes, Edwin
2015-03-01
Precise execution of quantum gates acting on two or multiple qubits is essential to quantum computation. For semiconductor spin qubits coupled via capacitive interaction, the best fidelity for a two-qubit gate demonstrated so far is around 70%, insufficient for fault-tolerant quantum computation. In this talk we present control protocols that may substantially improve the robustness of two-qubit gates against both nuclear noise and charge noise. Our pulse sequences incorporate simultaneous dynamical decoupling protocols and are simple enough for immediate experimental realization. Together with existing control protocols for single-qubit gates, our results constitute an important step toward scalable quantum computation using spin qubits. This work is done in collaboration with Sankar Das Sarma and supported by LPS-NSA-CMTC and IARPA-MQCO.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Papp, A., E-mail: apapp@nd.edu; Pázmány Péter Catholic University, Faculty of Information Technology, Budapest 1088; Porod, W., E-mail: porod@nd.edu
We study coupled ferromagnetic layers, which could facilitate low-loss, sub-100 nm wavelength spin-wave propagation and manipulation. One of the layers is a low-loss garnet film (such as yttrium iron garnet (YIG)) that enables long-distance, coherent spin-wave propagation. The other layer is made of metal-based (Permalloy, Co, and CoFe) magnetoelectronic structures that can be used to generate, manipulate, and detect the spin waves. Using micromagnetic simulations, we analyze the interactions between the spin waves in the YIG and the metallic nanomagnet structures and demonstrate the components of a scalable spin-wave-based signal processing device. We argue that such hybrid metallic-ferromagnet structures can be the basis of potentially high-performance, ultra-low-power computing devices.
Charge reconfiguration in arrays of quantum dots
NASA Astrophysics Data System (ADS)
Bayer, Johannes C.; Wagner, Timo; Rugeramigabo, Eddy P.; Haug, Rolf J.
2017-12-01
Semiconductor quantum dots are potential building blocks for scalable qubit architectures. Efficient control over the exchange interaction and the possibility of coherently manipulating electron states are essential ingredients towards this goal. We studied experimentally the shuttling of electrons trapped in serial quantum dot arrays isolated from the reservoirs. The isolation hereby enables a high degree of control over the tunnel couplings between the quantum dots, while electrons can be transferred through the array by gate voltage variations. Model calculations are compared with our experimental results for double, triple, and quadruple quantum dot arrays. We are able to identify all transitions observed in our experiments, including cotunneling transitions between distant quantum dots. The shuttling of individual electrons between quantum dots along chosen paths is demonstrated.
Proposal for Microwave Boson Sampling.
Peropadre, Borja; Guerreschi, Gian Giacomo; Huh, Joonsuk; Aspuru-Guzik, Alán
2016-09-30
Boson sampling, the task of sampling the probability distribution of photons at the output of a photonic network, is believed to be hard for any classical device. Unlike other models of quantum computation that require thousands of qubits to outperform classical computers, boson sampling requires only a handful of single photons. However, a scalable implementation of boson sampling is missing. Here, we show how superconducting circuits provide such platform. Our proposal differs radically from traditional quantum-optical implementations: rather than injecting photons in waveguides, making them pass through optical elements like phase shifters and beam splitters, and finally detecting their output mode, we prepare the required multiphoton input state in a superconducting resonator array, control its dynamics via tunable and dispersive interactions, and measure it with nondemolition techniques.
The Newick utilities: high-throughput phylogenetic tree processing in the Unix shell
Junier, Thomas; Zdobnov, Evgeny M.
2010-01-01
Summary: We present a suite of Unix shell programs for processing any number of phylogenetic trees of any size. They perform frequently used tree operations without requiring user interaction. They also allow tree drawing as scalable vector graphics (SVG), suitable for high-quality presentations and further editing, and as ASCII graphics for command-line inspection. As an example, we include an implementation of bootscanning, a procedure for finding recombination breakpoints in viral genomes. Availability: C source code, Python bindings and executables for various platforms are available from http://cegg.unige.ch/newick_utils. The distribution includes a manual and example data. The package is distributed under the BSD License. Contact: thomas.junier@unige.ch PMID:20472542
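To illustrate the Newick notation that these utilities process, here is a minimal recursive-descent parser for a simple subset of the format (labels and branch lengths only). This is a hedged sketch for illustration; it is not part of the Newick utilities package and omits quoting and comments:

```python
# Minimal parser for a subset of the Newick tree format (labels and branch
# lengths; no quoted labels or comments). Illustrative only -- not part of
# the Newick utilities distribution.

def parse_newick(s):
    pos = 0

    def node():
        nonlocal pos
        children = []
        if s[pos] == "(":
            pos += 1  # consume '('
            children.append(node())
            while s[pos] == ",":
                pos += 1
                children.append(node())
            pos += 1  # consume ')'
        # optional node label
        start = pos
        while pos < len(s) and s[pos] not in ",():;":
            pos += 1
        label = s[start:pos]
        # optional branch length after ':'
        length = None
        if pos < len(s) and s[pos] == ":":
            pos += 1
            start = pos
            while pos < len(s) and s[pos] not in ",();":
                pos += 1
            length = float(s[start:pos])
        return {"label": label, "length": length, "children": children}

    return node()

tree = parse_newick("((A:0.1,B:0.2):0.05,C:0.3);")
```

The resulting nested dictionaries mirror the tree topology, which is the structure the `nw_*` tools traverse when rerooting, pruning, or rendering trees.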
Noise-Resilient Quantum Computing with a Nitrogen-Vacancy Center and Nuclear Spins.
Casanova, J; Wang, Z-Y; Plenio, M B
2016-09-23
Selective control of qubits in a quantum register for the purposes of quantum information processing represents a critical challenge for dense spin ensembles in solid-state systems. Here we present a protocol that achieves a complete set of selective electron-nuclear gates and single nuclear rotations in such an ensemble in diamond facilitated by a nearby nitrogen-vacancy (NV) center. The protocol suppresses internuclear interactions as well as unwanted coupling between the NV center and other spins of the ensemble to achieve quantum gate fidelities well exceeding 99%. Notably, our method can be applied to weakly coupled, distant spins representing a scalable procedure that exploits the exceptional properties of nuclear spins in diamond as robust quantum memories.
NASA Astrophysics Data System (ADS)
Pfister, Olivier
2017-05-01
When it comes to practical quantum computing, the two main challenges are circumventing decoherence (devastating quantum errors due to interactions with the environmental bath) and achieving scalability (as many qubits as needed for a real-life, game-changing computation). We show that using, in lieu of qubits, the "qumodes" represented by the resonant fields of the quantum optical frequency comb of an optical parametric oscillator allows one to create bona fide, large scale quantum computing processors, pre-entangled in a cluster state. We detail our recent demonstration of 60-qumode entanglement (out of an estimated 3000) and present an extension to combining this frequency-tagged with time-tagged entanglement, in order to generate an arbitrarily large, universal quantum computing processor.
Generating a Reduced Gravity Environment on Earth
NASA Technical Reports Server (NTRS)
Dungan, Larry K.; Cunningham, Tom; Poncia, Dina
2010-01-01
Since the 1950s several reduced gravity simulators have been designed and utilized in preparing humans for spaceflight and in reduced gravity system development. The Active Response Gravity Offload System (ARGOS) is the newest and most realistic gravity offload simulator. ARGOS provides three degrees of motion within the test area and is scalable for full building deployment. The inertia of the overhead system is eliminated by an active motor and control system. This presentation will discuss what ARGOS is, how it functions, and the unique challenges of interfacing to the human. Test data and video for human and robotic systems will be presented. A major variable in the human machine interaction is the interface of ARGOS to the human. These challenges along with design solutions will be discussed.
Demonstration of entanglement of electrostatically coupled singlet-triplet qubits.
Shulman, M D; Dial, O E; Harvey, S P; Bluhm, H; Umansky, V; Yacoby, A
2012-04-13
Quantum computers have the potential to solve certain problems faster than classical computers. To exploit their power, it is necessary to perform interqubit operations and generate entangled states. Spin qubits are a promising candidate for implementing a quantum processor because of their potential for scalability and miniaturization. However, their weak interactions with the environment, which lead to their long coherence times, make interqubit operations challenging. We performed a controlled two-qubit operation between singlet-triplet qubits using a dynamically decoupled sequence that maintains the two-qubit coupling while decoupling each qubit from its fluctuating environment. Using state tomography, we measured the full density matrix of the system and determined the concurrence and the fidelity of the generated state, providing proof of entanglement.
JBrowse: A dynamic web platform for genome visualization and analysis
Buels, Robert; Yao, Eric; Diesh, Colin M.; ...
2016-04-12
Background: JBrowse is a fast and full-featured genome browser built with JavaScript and HTML5. It is easily embedded into websites or apps but can also be served as a standalone web page. Results: Overall improvements to speed and scalability are accompanied by specific enhancements that support complex interactive queries on large track sets. Analysis functions can readily be added using the plugin framework; most visual aspects of tracks can also be customized, along with clicks, mouseovers, menus, and popup boxes. JBrowse can also be used to browse local annotation files offline and to generate high-resolution figures for publication. Conclusions: JBrowse is a mature web application suitable for genome visualization and analysis.
Morris, Amanda Sheffield; Robinson, Lara R; Hays-Grudo, Jennifer; Claussen, Angelika H; Hartwig, Sophie A; Treat, Amy E
2017-03-01
In this article, the authors posit that programs promoting nurturing parent-child relationships influence outcomes of parents and young children living in poverty through two primary mechanisms: (a) strengthening parents' social support and (b) increasing positive parent-child interactions. The authors discuss evidence for these mechanisms as catalysts for change and provide examples from selected parenting programs that support the influence of nurturing relationships on child and parenting outcomes. The article focuses on prevention programs targeted at children and families living in poverty and closes with a discussion of the potential for widespread implementation and scalability for public health impact. © 2017 The Authors. Child Development © 2017 Society for Research in Child Development, Inc.
Impact of packet losses in scalable 3D holoscopic video coding
NASA Astrophysics Data System (ADS)
Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.
2014-05-01
Holoscopic imaging has become a prospective glasses-free 3D technology for providing more natural 3D viewing experiences to the end user. Additionally, holoscopic systems allow new post-production degrees of freedom, such as controlling the plane of focus or the viewing angle presented to the user. However, to successfully introduce this technology into the consumer market, a display-scalable coding approach is essential to achieve backward compatibility with legacy 2D and 3D displays. Moreover, to effectively transmit 3D holoscopic content over error-prone networks, e.g., wireless networks or the Internet, error resilience techniques are required to mitigate the impact of data impairments on the user's perceived quality. Therefore, it is essential to understand in depth the impact of packet losses on decoded video quality for the specific case of 3D holoscopic content, notably when a scalable approach is used. In this context, this paper studies the impact of packet losses when using a three-layer display-scalable 3D holoscopic video coding architecture previously proposed, where each layer represents a different level of display scalability (i.e., L0 - 2D, L1 - stereo or multiview, and L2 - full 3D holoscopic). For this, a simple error concealment algorithm is used, which makes use of inter-layer redundancy between multiview and 3D holoscopic content and the inherent correlation of the 3D holoscopic content to estimate lost data. Furthermore, a study of the influence of the 2D view generation parameters used in lower layers on the performance of the error concealment algorithm is also presented.
Space Situational Awareness Data Processing Scalability Utilizing Google Cloud Services
NASA Astrophysics Data System (ADS)
Greenly, D.; Duncan, M.; Wysack, J.; Flores, F.
Space Situational Awareness (SSA) is a fundamental and critical component of current space operations. The term SSA encompasses the awareness, understanding, and predictability of all objects in space. As the population of orbital space objects and debris increases, the number of collision avoidance maneuvers grows and prompts the need for accurate and timely process measures. The SSA mission continually evolves toward near real-time assessment and analysis, demanding higher processing capabilities. By conventional methods, meeting these demands requires the integration of new hardware to keep pace with the growing complexity of maneuver planning algorithms. SpaceNav has implemented a highly scalable architecture that will track satellites and debris by utilizing powerful virtual machines on the Google Cloud Platform. SpaceNav algorithms for processing CDMs outpace conventional means. A robust processing environment for tracking data, collision avoidance maneuvers, and various other aspects of SSA can be created and deleted on demand. Migrating SpaceNav tools and algorithms into the Google Cloud Platform will be discussed, along with the trials and tribulations involved. Information will be shared on how and why certain cloud products were used, as well as the integration techniques that were implemented. Key items to be presented are:
1. Scientific algorithms and SpaceNav tools integrated into a scalable architecture
   a) Maneuver Planning
   b) Parallel Processing
   c) Monte Carlo Simulations
   d) Optimization Algorithms
   e) SW Application Development/Integration into the Google Cloud Platform
2. Compute Engine Processing
   a) Application Engine Automated Processing
   b) Performance Testing and Performance Scalability
   c) Cloud MySQL Databases and Database Scalability
   d) Cloud Data Storage
   e) Redundancy and Availability
SLURM: Simple Linux Utility for Resource Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jette, M; Dunlap, C; Garlick, J
2002-07-08
Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, scheduling, and stream copy modules. The design also includes a scalable, general-purpose communication infrastructure. This paper presents an overview of the SLURM architecture and functionality.
Scalable Database Design of End-Game Model with Decoupled Countermeasure and Threat Information
2017-11-01
by Decetria Akole and Michael Chen. Approved for public release; distribution is unlimited.
High Performance Computing Multicast
2012-02-01
responsiveness, first-tier applications often implement replicated in-memory key-value stores, using them to store state or to cache data from services...alternative that replicates data, combines agreement on update ordering with amnesia freedom, and supports both good scalability and fast response.
Bamdad Barari; Thomas K. Ellingham; Issam I. Ghamhia; Krishna M. Pillai; Rani El-Hajjar; Lih-Sheng Turng; Ronald Sabo
2016-01-01
Plant derived cellulose nano-fibers (CNF) are a material with remarkable mechanical properties compared to other natural fibers. However, efforts to produce nano-composites on a large scale using CNF have yet to be investigated. In this study, scalable CNF nano-composites were made from isotropically porous CNF preforms using a freeze drying process. An improvised...
Scalable microreactors and methods for using same
Lawal, Adeniyi; Qian, Dongying
2010-03-02
The present invention provides a scalable microreactor comprising a multilayered reaction block having alternating reaction plates and heat exchanger plates that have a plurality of microchannels; a multilaminated reactor input manifold, a collecting reactor output manifold, a heat exchange input manifold and a heat exchange output manifold. The present invention also provides methods of using the microreactor for multiphase chemical reactions.
ERIC Educational Resources Information Center
Britton, Todd Alan
2014-01-01
Purpose: The purpose of this study was to examine the key considerations of community, scalability, supportability, security, and functionality for selecting open-source software in California universities as perceived by technology leaders. Methods: After a review of the cogent literature, the key conceptual framework categories were identified…
Scalability of Classical Terramechanics Models for Lightweight Vehicle Applications
2013-08-01
Paramsothy Jayakumar; Daniel Melanz; Jamie MacLennan (U.S. Army TARDEC, Warren, MI, USA); Carmine Senatore; Karl Iagnemma
Automation of multi-agent control for complex dynamic systems in heterogeneous computational network
NASA Astrophysics Data System (ADS)
Oparin, Gennady; Feoktistov, Alexander; Bogdanova, Vera; Sidorov, Ivan
2017-01-01
The rapid progress of high-performance computing entails new challenges related to solving large scientific problems in various subject domains within a heterogeneous distributed computing environment (e.g., a network, Grid system, or Cloud infrastructure). Specialists in the field of parallel and distributed computing pay special attention to the scalability of applications for problem solving. Effective management of a scalable application in a heterogeneous distributed computing environment is still a non-trivial issue; control systems that operate in networks especially relate to it. We propose a new approach to multi-agent management of scalable applications in a heterogeneous computational network. The fundamentals of our approach are the integrated use of conceptual programming, simulation modeling, network monitoring, multi-agent management, and service-oriented programming. We developed a special framework to automate problem solving. The advantages of the proposed approach are demonstrated on the example of parametric synthesis of a static linear regulator for complex dynamic systems. Benefits of the scalable application for solving this problem include automation of multi-agent control of the systems in parallel mode with various degrees of detailed elaboration.
On scalable lossless video coding based on sub-pixel accurate MCTF
NASA Astrophysics Data System (ADS)
Yea, Sehoon; Pearlman, William A.
2006-01-01
We propose two approaches to scalable lossless coding of motion video. They achieve an SNR-scalable bitstream up to lossless reconstruction based upon subpixel-accurate MCTF-based wavelet video coding. The first approach is based upon a two-stage encoding strategy in which a lossy reconstruction layer is augmented by a following residual layer in order to obtain (nearly) lossless reconstruction. The key advantages of our approach include an 'on-the-fly' determination of bit budget distribution between the lossy and residual layers, freedom to use almost any progressive lossy video coding scheme as the first layer, and an added feature of near-lossless compression. The second approach capitalizes on the fact that we can maintain the invertibility of MCTF with arbitrary sub-pixel accuracy, even in the presence of an extra truncation step for lossless reconstruction, thanks to the lifting implementation. Experimental results show that the proposed schemes achieve compression ratios not obtainable by intra-frame coders such as Motion JPEG-2000, thanks to their inter-frame coding nature. They are also shown to outperform the state-of-the-art non-scalable inter-frame coder H.264 (JM) in lossless mode, with the added benefit of bitstream embeddedness.
Improved inter-layer prediction for light field content coding with display scalability
NASA Astrophysics Data System (ADS)
Conti, Caroline; Ducla Soares, Luís.; Nunes, Paulo
2016-09-01
Light field imaging based on microlens arrays - also known as plenoptic, holoscopic, and integral imaging - has recently emerged as a feasible and prospective technology due to its ability to support functionalities not straightforwardly available in conventional imaging systems, such as post-production refocusing and depth-of-field changing. However, to gradually reach the consumer market and to provide interoperability with current 2D and 3D representations, a display-scalable coding solution is essential. In this context, this paper proposes an improved display-scalable light field codec comprising a three-layer hierarchical coding architecture (previously proposed by the authors) that provides interoperability with 2D (Base Layer) and 3D stereo and multiview (First Layer) representations, while the Second Layer supports the complete light field content. To further improve the compression performance, novel exemplar-based inter-layer coding tools are proposed here for the Second Layer, namely: (i) an inter-layer reference picture construction relying on an exemplar-based optimization algorithm for texture synthesis, and (ii) a direct prediction mode based on exemplar texture samples from lower layers. Experimental results show that the proposed solution performs better than the tested benchmark solutions, including the authors' previous scalable codec.
Scalable splitting algorithms for big-data interferometric imaging in the SKA era
NASA Astrophysics Data System (ADS)
Onose, Alexandru; Carrillo, Rafael E.; Repetti, Audrey; McEwen, Jason D.; Thiran, Jean-Philippe; Pesquet, Jean-Christophe; Wiaux, Yves
2016-11-01
In the context of next-generation radio telescopes, like the Square Kilometre Array (SKA), the efficient processing of large-scale data sets is extremely important. Convex optimization tasks under the compressive sensing framework have recently emerged and provide both enhanced image reconstruction quality and scalability to increasingly larger data sets. We focus herein mainly on scalability and propose two new convex optimization algorithmic structures able to solve the convex optimization tasks arising in radio-interferometric imaging. They rely on proximal splitting and forward-backward iterations and can be seen, by analogy with the CLEAN major-minor cycle, as running sophisticated CLEAN-like iterations in parallel in multiple data, prior, and image spaces. Both methods support any convex regularization function, in particular the well-studied ℓ1 priors promoting image sparsity in an adequate domain. Tailored for big data, they employ parallel and distributed computations to achieve scalability in terms of memory and computational requirements. One of them also exploits randomization over data blocks at each iteration, offering further flexibility. We present simulation results showing the feasibility of the proposed methods as well as their advantages compared to state-of-the-art algorithmic solvers. Our MATLAB code is available online on GitHub.
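The forward-backward iteration underlying such solvers can be sketched in its simplest, non-distributed form as ISTA for an ℓ1-regularized least-squares problem. This is a generic sketch under illustrative assumptions (random operator, synthetic sparse signal); the paper's parallel, randomized machinery is not shown:

```python
import numpy as np

# Generic forward-backward (ISTA) sketch for min_x 0.5*||y - Phi x||^2 + lam*||x||_1.
# Illustrative only -- not the distributed, randomized solvers of the paper.

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 -- the 'backward' step."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(Phi, y, lam, n_iter=2000):
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2   # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)                      # forward (gradient) step
        x = soft_threshold(x - step * grad, step * lam)   # backward (proximal) step
    return x

rng = np.random.default_rng(0)
Phi = rng.standard_normal((40, 100))      # random measurement operator
x_true = np.zeros(100)
x_true[[3, 50, 97]] = [1.0, -2.0, 1.5]    # sparse ground truth
y = Phi @ x_true
x_hat = forward_backward(Phi, y, lam=0.1)
```

The distributed variants in the paper split the forward step across data blocks, which is what makes the scheme scale; the proximal step stays as above.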
NASA Astrophysics Data System (ADS)
Tsang, Sik-Ho; Chan, Yui-Lam; Siu, Wan-Chi
2017-01-01
Weighted prediction (WP) is an efficient video coding tool, introduced with the H.264/AVC video coding standard, for compensating temporal illumination changes in motion estimation and compensation. WP parameters, comprising a multiplicative weight and an additive offset for each reference frame, must be estimated and transmitted to the decoder in the slice header. These parameters add extra bits to the coded video bitstream. High efficiency video coding (HEVC) provides WP parameter prediction to reduce this overhead. WP parameter prediction is therefore crucial to research and applications related to WP. Prior work has sought to further improve WP parameter prediction through implicit prediction of image characteristics and derivation of parameters. By exploiting both temporal and interlayer redundancies, we propose three WP parameter prediction algorithms, namely enhanced implicit WP parameter prediction, enhanced direct WP parameter derivation, and interlayer WP parameter prediction, to further improve the coding efficiency of HEVC. Results show that our proposed algorithms can achieve up to 5.83% and 5.23% bitrate reduction compared to conventional scalable HEVC in the base layer for SNR scalability and 2× spatial scalability, respectively.
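The WP model predicts each pixel as weight × reference + offset. One common way to estimate the two parameters, sketched below, is to match the mean and standard deviation of the current and reference frames; this is an illustrative textbook method, not necessarily the estimation used by the authors:

```python
import numpy as np

# Sketch of weighted-prediction (WP) parameter estimation: the decoder forms
# pred = w * ref + o. Here w and o are estimated by matching the mean and
# standard deviation of the current and reference frames -- an illustrative
# method, not necessarily the one used by the authors.

def estimate_wp(cur, ref):
    w = np.std(cur) / np.std(ref)         # multiplicative weight
    o = np.mean(cur) - w * np.mean(ref)   # additive offset
    return w, o

ref = np.linspace(16.0, 235.0, 64).reshape(8, 8)  # reference luma block
cur = 0.8 * ref + 10.0                            # global illumination change
w, o = estimate_wp(cur, ref)
```

Under a purely global illumination change like the one simulated above, the estimated pair (w, o) reproduces the current frame exactly, which is why transmitting just two parameters per reference frame can pay for itself.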
Optimized bit extraction using distortion modeling in the scalable extension of H.264/AVC.
Maani, Ehsan; Katsaggelos, Aggelos K
2009-09-01
The newly adopted scalable extension of H.264/AVC video coding standard (SVC) demonstrates significant improvements in coding efficiency in addition to an increased degree of supported scalability relative to the scalable profiles of prior video coding standards. Due to the complicated hierarchical prediction structure of the SVC and the concept of key pictures, content-aware rate adaptation of SVC bit streams to intermediate bit rates is a nontrivial task. The concept of quality layers has been introduced in the design of the SVC to allow for fast content-aware prioritized rate adaptation. However, existing quality layer assignment methods are suboptimal and do not consider all network abstraction layer (NAL) units from different layers for the optimization. In this paper, we first propose a technique to accurately and efficiently estimate the quality degradation resulting from discarding an arbitrary number of NAL units from multiple layers of a bitstream by properly taking drift into account. Then, we utilize this distortion estimation technique to assign quality layers to NAL units for a more efficient extraction. Experimental results show that a significant gain can be achieved by the proposed scheme.
Ogi, Jun; Kato, Yuri; Matoba, Yoshihisa; Yamane, Chigusa; Nagahata, Kazunori; Nakashima, Yusaku; Kishimoto, Takuya; Hashimoto, Shigeki; Maari, Koichi; Oike, Yusuke; Ezaki, Takayuki
2017-12-19
A 24-μm-pitch microelectrode array (MEA) with 6912 readout channels at 12 kHz and 23.2-μV rms random noise is presented. The aim is to reduce noise in a "highly scalable" MEA with a complementary metal-oxide-semiconductor integration circuit (CMOS-MEA), in which a large number of readout channels and a high electrode density can be expected. Despite the small dimension and the simplicity of the in-pixel circuit for the high electrode-density and the relatively large number of readout channels of the prototype CMOS-MEA chip developed in this work, the noise within the chip is successfully reduced to less than half that reported in a previous work, for a device with similar in-pixel circuit simplicity and a large number of readout channels. Further, the action potential was clearly observed on cardiomyocytes using the CMOS-MEA. These results indicate the high-scalability of the CMOS-MEA. The highly scalable CMOS-MEA provides high-spatial-resolution mapping of cell action potentials, and the mapping can aid understanding of complex activities in cells, including neuron network activities.
A Secure and Efficient Scalable Secret Image Sharing Scheme with Flexible Shadow Sizes.
Xie, Dong; Li, Lixiang; Peng, Haipeng; Yang, Yixian
2017-01-01
In a general (k, n) scalable secret image sharing (SSIS) scheme, the secret image is shared among n participants, and any k or more participants have the ability to reconstruct it. Scalability means that the amount of information in the reconstructed image scales in proportion to the number of participants. In most existing SSIS schemes, the size of each image shadow is relatively large, and the dealer does not have a flexible control strategy to adjust it to meet the demands of different applications. Besides, almost all existing SSIS schemes are not applicable under noisy circumstances. To address these deficiencies, in this paper we present a novel SSIS scheme based on a brand-new technique, called compressed sensing, which has been widely used in many fields such as image processing, wireless communication, and medical imaging. Our scheme has the property of flexibility, meaning that the dealer can achieve a compromise between the size of each shadow and the quality of the reconstructed image. In addition, our scheme has many other advantages, including smooth scalability, noise-resilient capability, and high security. The experimental results and the comparison with similar works demonstrate the feasibility and superiority of our scheme.
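The (k, n) access structure itself can be illustrated with classic polynomial (Shamir-style) secret sharing over a prime field. This sketch only clarifies the threshold property; the paper's scheme is instead built on compressed sensing and shares image content:

```python
import random

# Classic Shamir-style (k, n) threshold sharing over a prime field, shown to
# illustrate the (k, n) access structure only; the paper's SSIS scheme is
# built on compressed sensing and shares image data instead.

P = 2**31 - 1  # prime field modulus

def make_shares(secret, k, n, seed=42):
    rng = random.Random(seed)
    coeffs = [secret % P] + [rng.randrange(P) for _ in range(k - 1)]

    def f(x):  # evaluate the random degree-(k-1) polynomial at x
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term (the secret).
    secret = 0
    for xj, yj in shares:
        num, den = 1, 1
        for xm, _ in shares:
            if xm != xj:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123456789, k=3, n=5)
```

Any three of the five shares reconstruct the secret, while fewer than three reveal nothing about it; SSIS schemes layer image quality on top of this all-or-nothing threshold.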
Chai, Zhimin; Abbasi, Salman A; Busnaina, Ahmed A
2018-05-30
Assembly of organic semiconductors with ordered crystal structure has been actively pursued for electronics applications such as organic field-effect transistors (OFETs). Among various film deposition methods, solution-based film growth from small molecule semiconductors is preferable because of its low material and energy consumption, low cost, and scalability. Here, we show scalable and controllable directed assembly of highly crystalline 2,7-dioctyl[1]benzothieno[3,2- b][1]benzothiophene (C8-BTBT) films via a dip-coating process. Self-aligned stripe patterns with tunable thickness and morphology over a centimeter scale are obtained by adjusting two governing parameters: the pulling speed of a substrate and the solution concentration. OFETs are fabricated using the C8-BTBT films assembled at various conditions. A field-effect hole mobility up to 3.99 cm 2 V -1 s -1 is obtained. Owing to the highly scalable crystalline film formation, the dip-coating directed assembly process could be a great candidate for manufacturing next-generation electronics. Meanwhile, the film formation mechanism discussed in this paper could provide a general guideline to prepare other organic semiconducting films from small molecule solutions.
A repeatable and scalable fabrication method for sharp, hollow silicon microneedles
NASA Astrophysics Data System (ADS)
Kim, H.; Theogarajan, L. S.; Pennathur, S.
2018-03-01
Scalability and manufacturability are impeding the mass commercialization of microneedles in the medical field. Specifically, microneedle geometries need to be sharp, beveled, and completely controllable, which is difficult to achieve with microelectromechanical fabrication techniques. In this work, we performed a parametric study using silicon etch chemistries to optimize the fabrication of scalable and manufacturable beveled silicon hollow microneedles. We theoretically verified our parametric results with diffusion-reaction equations and created a design guideline for a varied set of microneedles (80-160 µm needle base width, 100-1000 µm pitch, 40-50 µm inner bore diameter, and 150-350 µm height) to show the repeatability, scalability, and manufacturability of our process. As a result, hollow silicon microneedles with any dimensions can be fabricated with less than 2% non-uniformity across a wafer and 5% deviation between different processes. The key to achieving such high uniformity and consistency is a non-agitated HF-HNO3 bath, silicon nitride masks, and surrounding silicon filler materials with well-defined dimensions. Our proposed method is not labor-intensive, is well defined by theory, and is straightforward for wafer-scale mass production, opening doors to a plethora of potential medical and biosensing applications.
Lin, Xiaotong; Liu, Mei; Chen, Xue-wen
2009-04-29
Protein-protein interactions play vital roles in nearly all cellular processes and are involved in the construction of biological pathways such as metabolic and signal transduction pathways. Although large-scale experiments have enabled the discovery of thousands of previously unknown linkages among proteins in many organisms, the high-throughput interaction data is often associated with high error rates. Since protein interaction networks have been utilized in numerous biological inferences, the included experimental errors inevitably affect the quality of such predictions. Thus, it is essential to assess the quality of the protein interaction data. In this paper, a novel Bayesian network-based integrative framework is proposed to assess the reliability of protein-protein interactions. We develop a cross-species in silico model that assigns likelihood scores to individual protein pairs based on the information entirely extracted from model organisms. Our proposed approach integrates multiple microarray datasets and novel features derived from gene ontology. Furthermore, the confidence scores for cross-species protein mappings are explicitly incorporated into our model. Applying our model to predict protein interactions in the human genome, we are able to achieve 80% in sensitivity and 70% in specificity. Finally, we assess the overall quality of the experimentally determined yeast protein-protein interaction dataset. We observe that the more high-throughput experiments confirm an interaction, the higher its likelihood score, which confirms the effectiveness of our approach. This study demonstrates that model organisms certainly provide important information for protein-protein interaction inference and assessment. The proposed method is able to assess not only the overall quality of an interaction dataset, but also the quality of individual protein-protein interactions.
We expect the method to improve continually as more high-quality interaction data from more model organisms become available, and it is readily scalable to genome-wide application.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krause, Josua; Dasgupta, Aritra; Fekete, Jean-Daniel
Dealing with the curse of dimensionality is a key challenge in high-dimensional data visualization. We present SeekAView to address three main gaps in the existing research literature. First, automated methods like dimensionality reduction or clustering suffer from a lack of transparency in letting analysts interact with their outputs in real-time to suit their exploration strategies. The results often suffer from a lack of interpretability, especially for domain experts not trained in statistics and machine learning. Second, exploratory visualization techniques like scatter plots or parallel coordinates suffer from a lack of visual scalability: it is difficult to present a coherent overview of interesting combinations of dimensions. Third, the existing techniques do not provide a flexible workflow that allows for multiple perspectives into the analysis process by automatically detecting and suggesting potentially interesting subspaces. In SeekAView we address these issues using suggestion based visual exploration of interesting patterns for building and refining multidimensional subspaces. Compared to the state-of-the-art in subspace search and visualization methods, we achieve higher transparency in showing not only the results of the algorithms, but also interesting dimensions calibrated against different metrics. We integrate a visually scalable design space with an iterative workflow guiding the analysts by choosing the starting points and letting them slice and dice through the data to find interesting subspaces and detect correlations, clusters, and outliers. We present two usage scenarios for demonstrating how SeekAView can be applied in real-world data analysis scenarios.
Unbiased, scalable sampling of protein loop conformations from probabilistic priors.
Zhang, Yajia; Hauser, Kris
2013-01-01
Protein loops are flexible structures that are intimately tied to function, but understanding loop motion and generating loop conformation ensembles remain significant computational challenges. Discrete search techniques scale poorly to large loops, optimization and molecular dynamics techniques are prone to local minima, and inverse kinematics techniques can only incorporate structural preferences in ad hoc fashion. This paper presents Sub-Loop Inverse Kinematics Monte Carlo (SLIKMC), a new Markov chain Monte Carlo algorithm for generating conformations of closed loops according to experimentally available, heterogeneous structural preferences. Our simulation experiments demonstrate that the method computes high-scoring conformations of large loops (>10 residues) orders of magnitude faster than standard Monte Carlo and discrete search techniques. Two new developments contribute to the scalability of the new method. First, structural preferences are specified via a probabilistic graphical model (PGM) that links conformation variables, spatial variables (e.g., atom positions), constraints and prior information in a unified framework. The method uses a sparse PGM that exploits locality of interactions between atoms and residues. Second, a novel method for sampling sub-loops is developed to generate statistically unbiased samples of probability densities restricted by loop-closure constraints. Numerical experiments confirm that SLIKMC generates conformation ensembles that are statistically consistent with specified structural preferences. Protein conformations with 100+ residues are sampled on standard PC hardware in seconds. Application to proteins involved in ion-binding demonstrates its potential as a tool for loop ensemble generation and missing structure completion.
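The central idea, Markov chain Monte Carlo sampling of a structural prior restricted by a closure constraint, can be sketched in miniature. The two-angle "loop", Gaussian preference, and closure test below are stand-ins of our own, not SLIKMC's actual PGM or kinematics:

```python
import math, random

random.seed(1)

# Two torsion angles (phi, psi) with a Gaussian structural preference,
# restricted to a toy "closure" constraint |phi + psi| <= 0.5. Both the
# prior and the constraint are illustrative, not the paper's model.
def log_prior(phi, psi):
    return -0.5 * (phi ** 2 + psi ** 2)

def closed(phi, psi):
    return abs(phi + psi) <= 0.5

def metropolis(n_samples, step=0.4):
    phi, psi = 0.0, 0.0
    samples = []
    for _ in range(n_samples):
        cand = (phi + random.gauss(0, step), psi + random.gauss(0, step))
        # A proposal outside the constraint set has target density zero,
        # so rejecting it outright keeps the chain unbiased on that set.
        if closed(*cand) and math.log(random.random()) < log_prior(*cand) - log_prior(phi, psi):
            phi, psi = cand
        samples.append((phi, psi))
    return samples

chain = metropolis(5000)
```

Every sample satisfies the constraint exactly, while the empirical distribution follows the prior restricted to the constraint set; SLIKMC's contribution is making this kind of unbiased constrained sampling efficient for real loops.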
Grindon, Christina; Harris, Sarah; Evans, Tom; Novik, Keir; Coveney, Peter; Laughton, Charles
2004-07-15
Molecular modelling played a central role in the discovery of the structure of DNA by Watson and Crick. Today, such modelling is done on computers: the more powerful these computers are, the more detailed and extensive can be the study of the dynamics of such biological macromolecules. To fully harness the power of modern massively parallel computers, however, we need to develop and deploy algorithms which can exploit the structure of such hardware. The Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) is a scalable molecular dynamics code including long-range Coulomb interactions, which has been specifically designed to function efficiently on parallel platforms. Here we describe the implementation of the AMBER98 force field in LAMMPS and its validation for molecular dynamics investigations of DNA structure and flexibility against the benchmark of results obtained with the long-established code AMBER6 (Assisted Model Building with Energy Refinement, version 6). Extended molecular dynamics simulations on the hydrated DNA dodecamer d(CTTTTGCAAAAG)(2), which has previously been the subject of extensive dynamical analysis using AMBER6, show that it is possible to obtain excellent agreement in terms of static, dynamic and thermodynamic parameters between AMBER6 and LAMMPS. In comparison with AMBER6, LAMMPS shows greatly improved scalability in massively parallel environments, opening up the possibility of efficient simulations of order-of-magnitude larger systems and/or for order-of-magnitude greater simulation times.
AEGIS: a robust and scalable real-time public health surveillance system.
Reis, Ben Y; Kirby, Chaim; Hadden, Lucy E; Olson, Karen; McMurry, Andrew J; Daniel, James B; Mandl, Kenneth D
2007-01-01
In this report, we describe the Automated Epidemiological Geotemporal Integrated Surveillance system (AEGIS), developed for real-time population health monitoring in the state of Massachusetts. AEGIS provides public health personnel with automated near-real-time situational awareness of utilization patterns at participating healthcare institutions, supporting surveillance of bioterrorism and naturally occurring outbreaks. As real-time public health surveillance systems become integrated into regional and national surveillance initiatives, the challenges of scalability, robustness, and data security become increasingly prominent. A modular and fault tolerant design helps AEGIS achieve scalability and robustness, while a distributed storage model with local autonomy helps to minimize risk of unauthorized disclosure. The report includes a description of the evolution of the design over time in response to the challenges of a regional and national integration environment.
A Scalable Data Access Layer to Manage Structured Heterogeneous Biomedical Data.
Delussu, Giovanni; Lianas, Luca; Frexia, Francesca; Zanetti, Gianluigi
2016-01-01
This work presents a scalable data access layer, called PyEHR, designed to support the implementation of data management systems for secondary use of structured heterogeneous biomedical and clinical data. PyEHR adopts the openEHR's formalisms to guarantee the decoupling of data descriptions from implementation details and exploits structure indexing to accelerate searches. Data persistence is guaranteed by a driver layer with a common driver interface. Interfaces for two NoSQL Database Management Systems are already implemented: MongoDB and Elasticsearch. We evaluated the scalability of PyEHR experimentally through two types of tests, called "Constant Load" and "Constant Number of Records", with queries of increasing complexity on synthetic datasets of ten million records each, containing very complex openEHR archetype structures, distributed on up to ten computing nodes.
Toward Scalable Trustworthy Computing Using the Human-Physiology-Immunity Metaphor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hively, Lee M; Sheldon, Frederick T
The cybersecurity landscape consists of an ad hoc patchwork of solutions. Optimal cybersecurity is difficult for various reasons: complexity, immense data and processing requirements, resource-agnostic cloud computing, practical time-space-energy constraints, inherent flaws in 'Maginot Line' defenses, and the growing number and sophistication of cyberattacks. This article defines the high-priority problems and examines the potential solution space. In that space, achieving scalable trustworthy computing and communications is possible through real-time knowledge-based decisions about cyber trust. This vision is based on the human-physiology-immunity metaphor and the human brain's ability to extract knowledge from data and information. The article outlines future steps toward scalable trustworthy systems, which require a long-term commitment to solve the well-known challenges.
Scalable digital hardware for a trapped ion quantum computer
NASA Astrophysics Data System (ADS)
Mount, Emily; Gaultney, Daniel; Vrijsen, Geert; Adams, Michael; Baek, So-Young; Hudek, Kai; Isabella, Louis; Crain, Stephen; van Rynbach, Andre; Maunz, Peter; Kim, Jungsang
2016-12-01
Many of the challenges of scaling quantum computer hardware lie at the interface between the qubits and the classical control signals used to manipulate them. Modular ion trap quantum computer architectures address scalability by constructing individual quantum processors interconnected via a network of quantum communication channels. Successful operation of such quantum hardware requires a fully programmable classical control system capable of frequency stabilizing the continuous wave lasers necessary for loading, cooling, initialization, and detection of the ion qubits, stabilizing the optical frequency combs used to drive logic gate operations on the ion qubits, providing a large number of analog voltage sources to drive the trap electrodes, and a scheme for maintaining phase coherence among all the controllers that manipulate the qubits. In this work, we describe scalable solutions to these hardware development challenges.
A look at scalable dense linear algebra libraries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dongarra, J.J.; Van de Geijn, R.A.; Walker, D.W.
1992-01-01
We discuss the essential design features of a library of scalable software for performing dense linear algebra computations on distributed memory concurrent computers. The square block scattered decomposition is proposed as a flexible and general-purpose way of decomposing most, if not all, dense matrix problems. An object-oriented interface to the library permits more portable applications to be written, and is easy to learn and use, since details of the parallel implementation are hidden from the user. Experiments on the Intel Touchstone Delta system with a prototype code that uses the square block scattered decomposition to perform LU factorization are presented and analyzed. It was found that the code was both scalable and efficient, performing at about 14 GFLOPS (double precision) for the largest problem considered.
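The square block scattered (two-dimensional block-cyclic) decomposition the abstract proposes can be sketched in a few lines; the helper names are hypothetical, not the library's API:

```python
# Global matrix block (I, J) is owned by process (I mod P, J mod Q) on a
# P x Q process grid, so each process holds blocks scattered across the
# whole matrix rather than one contiguous panel.
def owner(I, J, P, Q):
    """Process-grid coordinates owning global block (I, J)."""
    return (I % P, J % Q)

def blocks_of(p, q, n_blocks, P, Q):
    """All global blocks stored by process (p, q) for an n_blocks x n_blocks grid."""
    return [(I, J) for I in range(p, n_blocks, P)
                   for J in range(q, n_blocks, Q)]
```

On a 2 x 3 process grid with a 6 x 6 grid of matrix blocks, each process owns six blocks spread over the whole matrix; this scattering is what keeps load balanced as an algorithm such as LU factorization works its way down the matrix.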
Embedded DCT and wavelet methods for fine granular scalable video: analysis and comparison
NASA Astrophysics Data System (ADS)
van der Schaar-Mitrea, Mihaela; Chen, Yingwei; Radha, Hayder
2000-04-01
Video transmission over bandwidth-varying networks is becoming increasingly important due to emerging applications such as streaming of video over the Internet. The fundamental obstacle in designing such systems resides in the varying characteristics of the Internet (i.e. bandwidth variations and packet-loss patterns). In MPEG-4, a new SNR scalability scheme, called Fine-Granular-Scalability (FGS), is currently under standardization, which is able to adapt in real-time (i.e. at transmission time) to Internet bandwidth variations. The FGS framework consists of a non-scalable motion-predicted base-layer and an intra-coded fine-granular scalable enhancement layer. For example, the base layer can be coded using a DCT-based MPEG-4 compliant, highly efficient video compression scheme. Subsequently, the difference between the original and decoded base-layer is computed, and the resulting FGS-residual signal is intra-frame coded with an embedded scalable coder. In order to achieve high coding efficiency when compressing the FGS enhancement layer, it is crucial to analyze the nature and characteristics of residual signals common to the SNR scalability framework (including FGS). In this paper, we present a thorough analysis of SNR residual signals by evaluating their statistical properties, compaction efficiency and frequency characteristics. The signal analysis revealed that the energy compaction of the DCT and wavelet transforms is limited and that the frequency characteristics of SNR residual signals decay rather slowly. Moreover, the blockiness artifacts of the low bit-rate coded base-layer result in artificial high frequencies in the residual signal. Subsequently, a variety of wavelet and embedded DCT coding techniques applicable to the FGS framework are evaluated and their results are interpreted based on the identified signal properties.
As expected from the theoretical signal analysis, the rate-distortion performances of the embedded wavelet and DCT-based coders are very similar. However, improved results can be obtained for the wavelet coder by deblocking the base-layer prior to the FGS residual computation. Based on the theoretical analysis and our measurements, we can conclude that for an optimal complexity versus coding-efficiency trade-off, only limited wavelet decomposition (e.g. 2 stages) needs to be performed for the FGS-residual signal. Also, it was observed that the good rate-distortion performance of a coding technique for a certain image type (e.g. natural still-images) does not necessarily translate into similarly good performance for signals with different visual characteristics and statistical properties.
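The embedded-coding property behind FGS, namely that the enhancement bitstream can be truncated anywhere and still refine the base layer, can be sketched with plain bit-plane coding of a residual. The representation below is an illustrative toy, not the MPEG-4 FGS syntax:

```python
# The residual is sent most-significant bit-plane first, so a decoder that
# receives only a prefix of the planes still gets a coarser but valid
# refinement of the base layer (fine-granular scalability).
def encode_bitplanes(residual, n_planes=8):
    """Split |residual| into bit-planes, most significant first."""
    mags = [abs(r) for r in residual]
    signs = [1 if r >= 0 else -1 for r in residual]
    planes = [[(m >> b) & 1 for m in mags] for b in reversed(range(n_planes))]
    return signs, planes

def decode_bitplanes(signs, planes, n_planes=8):
    """Reconstruct from however many planes were received (truncation point is arbitrary)."""
    mags = [0] * len(signs)
    for k, plane in enumerate(planes):
        b = n_planes - 1 - k
        for i, bit in enumerate(plane):
            mags[i] |= bit << b
    return [s * m for s, m in zip(signs, mags)]

residual = [37, -5, 120, 0, -64]
signs, planes = encode_bitplanes(residual)
full = decode_bitplanes(signs, planes)        # all 8 planes: lossless
coarse = decode_bitplanes(signs, planes[:3])  # truncated stream: coarser residual
```

Decoding all planes is lossless, while a three-plane prefix yields a quantized residual; a rate controller can cut the stream at any bit to match available bandwidth.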
2017-02-01
enable high scalability and reconfigurability for inter-CPU/Memory communications with an increased number of communication channels in frequency ...interconnect technology (MRFI) to enable high scalability and re-configurability for inter-CPU/Memory communications with an increased number of communication ...testing in the University of California, Los Angeles (UCLA) Center for High Frequency Electronics, and Dr. Afshin Momtaz at Broadcom Corporation for
Scalable Anonymous Group Communication in the Anytrust Model
2012-04-10
Scalable Anonymous Group Communication in the Anytrust Model David Isaac Wolinsky, Henry Corrigan-Gibbs, and Bryan Ford Yale University...12th KDD, Aug. 2006. [10] D. Chaum . Untraceable electronic mail, return addresses, and digital pseudonyms. Communications of the ACM, 24(2), Feb...1981. [11] D. Chaum . The dining cryptographers problem: Unconditional sender and recipient untraceability. Journal of Cryptology, 1(1):65–75, Jan. 1988
A Laboratory for Characterizing the Efficacy of Moving Target Defense
2016-10-25
of William and Mary are developing a scalable, dynamic, adaptive security system that combines virtualization , emulation, and mutable network...goal with the resource constraints of a small number of servers, and making virtual nodes “real enough” from the view of attackers. Unfortunately, with...we at College of William and Mary are developing a scalable, dynamic, adaptive security system that combines virtualization , emulation, and mutable
Tradespace and Affordability - Phase 2
2013-12-31
infrastructure capacity. Figure 15 locates the thirteen feasible configurations in survivability- mobility capability space (capability levels are scaled...battery power, or display size decreases. Other quantities may be applicable, such as the number of nodes in a scalable-up mobile network or the...limited size of a scalable-down mobile platform. Versatility involves the range of capabilities provided by a system as it is currently configured. A
Photoignition Torch Applied to Cryogenic H2/O2 Coaxial Jet
2016-12-06
suitable for certain thrusters and liquid rocket engines. This ignition system is scalable for applications in different combustion chambers such as gas ...turbines, gas generators, liquid rocket engines, and multi grain solid rocket motors. photoignition, fuel spray ignition, high pressure ignition...thrusters and liquid rocket engines. This ignition system is scalable for applications in different combustion chambers such as gas turbines, gas
Toward cost-effective solar energy use.
Lewis, Nathan S
2007-02-09
At present, solar energy conversion technologies face cost and scalability hurdles in the technologies required for a complete energy system. To provide a truly widespread primary energy source, solar energy must be captured, converted, and stored in a cost-effective fashion. New developments in nanotechnology, biotechnology, and the materials and physical sciences may enable step-change approaches to cost-effective, globally scalable systems for solar energy use.
U.S. Army Research Laboratory Annual Review 2011
2011-12-01
pioneered a defect reduction process using thermal cycle annealing (TCA) for improving mercury cadmium telluride ( MCT ) grown on scalable silicon (Si...substrates. Currently, the use of MCT -- a mainstay material for Army infrared (IR) systems -- is limited due to high levels of dislocations when...grown on scalable substrates such as Si (an inexpensive substrate material). These dislocations increase pixel noise and limit IR focal plane array
Volume-scalable high-brightness three-dimensional visible light source
Subramania, Ganapathi; Fischer, Arthur J; Wang, George T; Li, Qiming
2014-02-18
A volume-scalable, high-brightness, electrically driven visible light source comprises a three-dimensional photonic crystal (3DPC) comprising one or more direct bandgap semiconductors. The improved light emission performance of the invention is achieved based on the enhancement of radiative emission of light emitters placed inside a 3DPC due to the strong modification of the photonic density-of-states engendered by the 3DPC.
Leveraging the Cloud for Integrated Network Experimentation
2014-03-01
kernel settings, or any of the low-level subcomponents. 3. Scalable Solutions: Businesses can build scalable solutions for their clients , ranging from...values. These values 13 can assume several distributions that include normal, Pareto , uniform, exponential and Poisson, among others [21]. Additionally, D...communication, the web client establishes a connection to the server before traffic begins to flow. Web servers do not initiate connections to clients in
ERIC Educational Resources Information Center
Mantri, Archana
2014-01-01
The intent of the study presented in this paper is to show that the model of problem-based learning (PBL) can be made scalable by designing curriculum around a set of open-ended problems (OEPs). The detailed statistical analysis of the data collected to measure the effects of traditional and PBL instructions for three courses in Electronics and…
A real-time architecture for time-aware agents.
Prouskas, Konstantinos-Vassileios; Pitt, Jeremy V
2004-06-01
This paper describes the specification and implementation of a new three-layer time-aware agent architecture. This architecture is designed for applications and environments where societies of humans and agents play equally active roles, but interact and operate in completely different time frames. The architecture consists of three layers: the April real-time run-time (ART) layer, the time-aware layer (TAL), and the application agents layer (AAL). The ART layer forms the underlying real-time agent platform. An original online, real-time, dynamic priority-based scheduling algorithm is described for scheduling the computation time of agent processes, and it is shown that the algorithm's O(n) complexity and scalable performance are sufficient for application in real-time domains. The TAL layer forms an abstraction layer through which human and agent interactions are temporally unified, that is, handled in a common way irrespective of their temporal representation and scale. A novel O(n²) interaction scheduling algorithm is described for predicting and guaranteeing interactions' initiation and completion times. The time-aware predicting component of a workflow management system is also presented as an instance of the AAL layer. The described time-aware architecture addresses two key challenges in enabling agents to be effectively configured and applied in environments where humans and agents play equally active roles. It provides flexibility and adaptability in its real-time mechanisms while placing them under direct agent control, and it temporally unifies human and agent interactions.
Fast algorithms for evaluating the stress field of dislocation lines in anisotropic elastic media
NASA Astrophysics Data System (ADS)
Chen, C.; Aubry, S.; Oppelstrup, T.; Arsenlis, A.; Darve, E.
2018-06-01
In dislocation dynamics (DD) simulations, the most computationally intensive step is the evaluation of the elastic interaction forces among dislocation ensembles. Because the pair-wise interaction between dislocations is long-range, this force calculation step can be significantly accelerated by the fast multipole method (FMM). We implemented and compared four different methods in isotropic and anisotropic elastic media: one based on the Taylor series expansion (Taylor FMM), one based on the spherical harmonics expansion (Spherical FMM), one kernel-independent method based on the Chebyshev interpolation (Chebyshev FMM), and a new kernel-independent method that we call the Lagrange FMM. The Taylor FMM is an existing method, used in ParaDiS, one of the most popular DD simulation software packages. The Spherical FMM employs a more compact multipole representation than the Taylor FMM does and is thus more efficient. However, both the Taylor FMM and the Spherical FMM are difficult to derive in anisotropic elastic media because the interaction force is complex and has no closed analytical formula. The Chebyshev FMM requires only being able to evaluate the interaction between dislocations and thus can be applied easily in anisotropic elastic media. But it has a relatively large memory footprint, which limits its usage. The Lagrange FMM was designed to be a memory-efficient black-box method. Various numerical experiments are presented to demonstrate the convergence and the scalability of the four methods.
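The kernel-independent idea behind the Chebyshev FMM can be shown in one dimension: interpolate the kernel at Chebyshev nodes on two well-separated clusters, yielding a low-rank approximation that needs only point evaluations of the kernel. The toy kernel and cluster geometry here are our own choices, not the dislocation interaction:

```python
import math

def cheb_nodes(n, lo, hi):
    """n Chebyshev nodes mapped onto the interval [lo, hi]."""
    return [0.5 * (lo + hi) + 0.5 * (hi - lo) * math.cos((2 * k + 1) * math.pi / (2 * n))
            for k in range(n)]

def lagrange(x, nodes, a):
    """Lagrange basis polynomial for node a, evaluated at x."""
    v = 1.0
    for b, nb in enumerate(nodes):
        if b != a:
            v *= (x - nb) / (nodes[a] - nb)
    return v

def kernel(x, y):
    return 1.0 / abs(x - y)  # toy long-range kernel, evaluated as a black box

# Two well-separated 1D clusters: targets in [0, 0.9], sources in [2, 2.9].
trg = [0.1 * i for i in range(10)]
src = [2.0 + 0.1 * i for i in range(10)]
p = 6
xs, ys = cheb_nodes(p, 0.0, 0.9), cheb_nodes(p, 2.0, 2.9)

def approx(x, y):
    """Low-rank evaluation: K(x, y) ~ sum_ab S_a(x) K(xs_a, ys_b) S_b(y)."""
    return sum(lagrange(x, xs, a) * kernel(xs[a], ys[b]) * lagrange(y, ys, b)
               for a in range(p) for b in range(p))

err = max(abs(kernel(x, y) - approx(x, y)) for x in trg for y in src)
```

Because only `kernel` evaluations are needed, the same machinery applies to an anisotropic elastic kernel with no closed-form expansion, which is the property the abstract highlights.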
2015-01-01
Background Though cluster analysis has become a routine analytic task for bioinformatics research, it is still arduous for researchers to assess the quality of a clustering result. To select the best clustering method and its parameters for a dataset, researchers have to run multiple clustering algorithms and compare them. However, such a comparison task with multiple clustering results is cognitively demanding and laborious. Results In this paper, we present XCluSim, a visual analytics tool that enables users to interactively compare multiple clustering results based on the Visual Information Seeking Mantra. We build a taxonomy for categorizing existing techniques of clustering results visualization in terms of the Gestalt principles of grouping. Using the taxonomy, we choose the most appropriate interactive visualizations for presenting individual clustering results from different types of clustering algorithms. The efficacy of XCluSim is shown through case studies with a bioinformatician. Conclusions Compared to other relevant tools, XCluSim enables users to compare multiple clustering results in a more scalable manner. Moreover, XCluSim supports diverse clustering algorithms and dedicated visualizations and interactions for different types of clustering results, allowing more effective exploration of details on demand. Through case studies with a bioinformatics researcher, we received positive feedback on the functionalities of XCluSim, including its ability to help identify stably clustered items across multiple clustering results. PMID:26328893
Zhang, Guo-Qiang; Luo, Lingyun; Ogbuji, Chime; Joslyn, Cliff; Mejino, Jose; Sahoo, Satya S
2012-01-01
The interaction of multiple types of relationships among anatomical classes in the Foundational Model of Anatomy (FMA) can provide inferred information valuable for quality assurance. This paper introduces a method called Motif Checking (MOCH) to study the effects of such multi-relation type interactions for detecting logical inconsistencies as well as other anomalies represented by the motifs. MOCH represents patterns of multi-type interaction as small labeled (with multiple types of edges) sub-graph motifs, whose nodes represent class variables, and labeled edges represent relational types. By representing FMA as an RDF graph and motifs as SPARQL queries, fragments of FMA are automatically obtained as auditing candidates. Leveraging the scalability and reconfigurability of Semantic Web Technology, we performed exhaustive analyses of a variety of labeled sub-graph motifs. The quality assurance feature of MOCH comes from the distinct use of a subset of the edges of the graph motifs as constraints for disjointness, thereby bringing a rule-based flavor to the approach as well. With possible disjointness implied by antonyms, we performed manual inspection of the resulting FMA fragments and tracked down sources of abnormal inferred conclusions (logical inconsistencies), which are amenable to programmatic revision of the FMA. Our results demonstrate that MOCH provides a unique source of valuable information for quality assurance. Since our approach is general, it is applicable to any ontological system with an OWL representation.
Zhang, Guo-Qiang; Luo, Lingyun; Ogbuji, Chime; Joslyn, Cliff; Mejino, Jose; Sahoo, Satya S
2012-01-01
The interaction of multiple types of relationships among anatomical classes in the Foundational Model of Anatomy (FMA) can provide inferred information valuable for quality assurance. This paper introduces a method called Motif Checking (MOCH) to study the effects of such multi-relation type interactions for detecting logical inconsistencies as well as other anomalies represented by the motifs. MOCH represents patterns of multi-type interaction as small labeled (with multiple types of edges) sub-graph motifs, whose nodes represent class variables, and labeled edges represent relational types. By representing FMA as an RDF graph and motifs as SPARQL queries, fragments of FMA are automatically obtained as auditing candidates. Leveraging the scalability and reconfigurability of Semantic Web Technology, we performed exhaustive analyses of a variety of labeled sub-graph motifs. The quality assurance feature of MOCH comes from the distinct use of a subset of the edges of the graph motifs as constraints for disjointness, thereby bringing a rule-based flavor to the approach as well. With possible disjointness implied by antonyms, we performed manual inspection of the resulting FMA fragments and tracked down sources of abnormal inferred conclusions (logical inconsistencies), which are amenable to programmatic revision of the FMA. Our results demonstrate that MOCH provides a unique source of valuable information for quality assurance. Since our approach is general, it is applicable to any ontological system with an OWL representation. PMID:23304382
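A minimal, illustrative sketch of the motif-checking idea, not the authors' implementation: plain Python dictionaries stand in for the RDF graph and SPARQL queries, and all class names and relation types below are hypothetical.

```python
# Minimal sketch of motif checking (MOCH): match a sub-graph motif whose
# edges carry two relation types, yielding fragments as audit candidates.
# Plain tuples stand in for RDF triples; names here are hypothetical.

# Labeled edges: (subject, relation_type, object)
edges = {
    ("LeftHand",  "part_of",     "UpperLimb"),
    ("RightHand", "part_of",     "UpperLimb"),
    ("LeftHand",  "adjacent_to", "RightHand"),
}

def match_motif(edges, rel_a, rel_b):
    """Find node triples (x, y, z) with x -rel_a-> z, y -rel_a-> z,
    and x -rel_b-> y (a two-relation-type triangle motif)."""
    matches = []
    for (x, ra, z) in edges:
        if ra != rel_a:
            continue
        for (y, rb, z2) in edges:
            if rb == rel_a and z2 == z and y != x and (x, rel_b, y) in edges:
                matches.append((x, y, z))
    return matches

# Audit candidates: sibling parts linked by 'adjacent_to' suggest the two
# classes should be disjoint, so any inferred equivalence would be flagged.
candidates = match_motif(edges, "part_of", "adjacent_to")
print(candidates)  # [('LeftHand', 'RightHand', 'UpperLimb')]
```

In the paper's setting the motif would be a SPARQL query over the FMA RDF graph; the dictionary version above only illustrates the pattern-matching step.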
Chikkagoudar, Satish; Wang, Kai; Li, Mingyao
2011-05-26
Gene-gene interaction in genetic association studies is computationally intensive when a large number of SNPs are involved. Most of the latest Central Processing Units (CPUs) have multiple cores, whereas Graphics Processing Units (GPUs) also have hundreds of cores and have been recently used to implement faster scientific software. However, currently there are no genetic analysis software packages that allow users to fully utilize the computing power of these multi-core devices for genetic interaction analysis for binary traits. Here we present a novel software package GENIE, which utilizes the power of multiple GPU or CPU processor cores to parallelize the interaction analysis. GENIE reads an entire genetic association study dataset into memory and partitions the dataset into fragments with non-overlapping sets of SNPs. For each fragment, GENIE analyzes: 1) the interaction of SNPs within it in parallel, and 2) the interaction between the SNPs of the current fragment and other fragments in parallel. We tested GENIE on a large-scale candidate gene study on high-density lipoprotein cholesterol. Using an NVIDIA Tesla C1060 graphics card, the GPU mode of GENIE achieves a speedup of 27 times over its single-core CPU mode run. GENIE is open-source, economical, user-friendly, and scalable. Since the computing power and memory capacity of graphics cards are increasing rapidly while their cost is going down, we anticipate that GENIE will achieve greater speedups with faster GPU cards. Documentation, source code, and precompiled binaries can be downloaded from http://www.cceb.upenn.edu/~mli/software/GENIE/.
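The fragment-based work partitioning that GENIE uses can be sketched as follows. This is an assumption-laden illustration of the scheme described in the abstract (non-overlapping SNP fragments, within-fragment and between-fragment pair analysis), not GENIE's released source; the actual interaction test per pair is omitted.

```python
# Sketch of GENIE-style partitioning (assumed from the abstract): split
# SNPs into non-overlapping fragments, then enumerate within-fragment and
# between-fragment SNP pairs -- the units of work farmed out to GPU or
# CPU cores.

def make_fragments(n_snps, fragment_size):
    """Partition SNP indices 0..n_snps-1 into non-overlapping fragments."""
    return [list(range(i, min(i + fragment_size, n_snps)))
            for i in range(0, n_snps, fragment_size)]

def interaction_pairs(fragments):
    """Yield every unordered SNP pair exactly once: first pairs inside
    each fragment, then pairs between distinct fragments."""
    for f in fragments:
        for i, a in enumerate(f):
            for b in f[i + 1:]:
                yield (a, b)               # within-fragment pair
    for fi in range(len(fragments)):
        for fj in range(fi + 1, len(fragments)):
            for a in fragments[fi]:
                for b in fragments[fj]:
                    yield (a, b)           # between-fragment pair

fragments = make_fragments(6, 3)           # [[0, 1, 2], [3, 4, 5]]
pairs = list(interaction_pairs(fragments))
# 6 choose 2 = 15 unordered pairs in total, each produced exactly once
print(len(pairs))  # 15
```

Because each pair is produced exactly once and pairs are independent, the per-fragment batches can be dispatched to separate cores or GPU kernels without coordination.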
ExSTraCS 2.0: Description and Evaluation of a Scalable Learning Classifier System.
Urbanowicz, Ryan J; Moore, Jason H
2015-09-01
Algorithmic scalability is a major concern for any machine learning strategy in this age of 'big data'. A large number of potentially predictive attributes is emblematic of problems in bioinformatics, genetic epidemiology, and many other fields. Previously, ExSTraCS was introduced as an extended Michigan-style supervised learning classifier system that combined a set of powerful heuristics to successfully tackle the challenges of classification, prediction, and knowledge discovery in complex, noisy, and heterogeneous problem domains. While Michigan-style learning classifier systems are powerful and flexible learners, they are not considered to be particularly scalable. For the first time, this paper presents a complete description of the ExSTraCS algorithm and introduces an effective strategy to dramatically improve learning classifier system scalability. ExSTraCS 2.0 addresses scalability with (1) a rule specificity limit, (2) new approaches to expert knowledge guided covering and mutation mechanisms, and (3) the implementation and utilization of the TuRF algorithm for improving the quality of expert knowledge discovery in larger datasets. Performance over a complex spectrum of simulated genetic datasets demonstrated that these new mechanisms dramatically improve nearly every performance metric on datasets with 20 attributes and made it possible for ExSTraCS to reliably scale up to perform on related 200 and 2000-attribute datasets. ExSTraCS 2.0 was also able to reliably solve the 6, 11, 20, 37, 70, and 135 multiplexer problems, and did so in similar or fewer learning iterations than previously reported, with smaller finite training sets, and without using building blocks discovered from simpler multiplexer problems. Furthermore, ExSTraCS usability was made simpler through the elimination of previously critical run parameters.
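The rule specificity limit can be illustrated with a small sketch. The details below are assumptions drawn from the abstract, not the published ExSTraCS 2.0 implementation: when covering generates a rule for a training instance, the number of specified attributes is capped so rules stay general enough to scale to thousands of attributes.

```python
# Illustrative sketch of a rule specificity limit (RSL) during covering,
# as described for ExSTraCS 2.0. All details here are assumptions, not
# the published implementation: a rule is a dict mapping attribute index
# -> required value, and covering specifies at most `specificity_limit`
# of the instance's attributes.
import random

def cover_with_rsl(instance, specificity_limit, rng=random):
    """Build a rule that matches `instance`, specifying at most
    `specificity_limit` attributes chosen at random."""
    n = len(instance)
    k = min(specificity_limit, n)
    chosen = rng.sample(range(n), k)
    return {i: instance[i] for i in chosen}

rng = random.Random(0)                     # seeded for reproducibility
instance = [1, 0, 1, 1, 0, 0, 1, 0]
rule = cover_with_rsl(instance, specificity_limit=3, rng=rng)
print(len(rule))  # 3 -- never more specified attributes than the limit
```

With 2000 attributes, an uncapped covering operator could specify hundreds of attributes per rule; the cap keeps rule matching cheap and the rule population general. In the real system the choice of which attributes to specify is guided by expert knowledge weights rather than uniform sampling.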
NASA Technical Reports Server (NTRS)
West, Jeff; Yang, H. Q.
2014-01-01
There are many instances involving liquid/gas interfaces and their dynamics in the design of liquid engine powered rockets such as the Space Launch System (SLS). Some examples of these applications are: propellant tank draining and slosh, subcritical condition injector analysis for gas generators, preburners and thrust chambers, water deluge mitigation for launch induced environments, and even solid rocket motor liquid slag dynamics. Commercially available CFD programs simulating gas/liquid interfaces using the Volume of Fluid approach are currently limited in their parallel scalability. In 2010 for instance, an internal NASA/MSFC review of three commercial tools revealed that parallel scalability was seriously compromised at 8 cpus and no additional speedup was possible after 32 cpus. Other non-interface CFD applications at the time were demonstrating useful parallel scalability up to 4,096 processors or more. Based on this review, NASA/MSFC initiated an effort to implement a Volume of Fluid capability within the unstructured mesh, pressure-based algorithm CFD program, Loci-STREAM. After verification was achieved by comparing results to the commercial CFD program CFD-Ace+, and validation by direct comparison with data, Loci-STREAM-VoF is now the production CFD tool for propellant slosh force and slosh damping rate simulations at NASA/MSFC. On these applications, good parallel scalability has been demonstrated for problem sizes of tens of millions of cells and thousands of cpu cores. Ongoing efforts are focused on the application of Loci-STREAM-VoF to predict the transient flow patterns of water on the SLS Mobile Launch Platform in order to support the phasing of water for launch environment mitigation so that detrimental effects on the vehicle are not realized.
Kosa, Gergely; Vuoristo, Kiira S; Horn, Svein Jarle; Zimmermann, Boris; Afseth, Nils Kristian; Kohler, Achim; Shapaval, Volha
2018-06-01
Recent developments in molecular biology and metabolic engineering have resulted in a large increase in the number of strains that need to be tested, positioning high-throughput screening of microorganisms as an important step in bioprocess development. Scalability is crucial for performing reliable screening of microorganisms. Most of the scalability studies from microplate screening systems to controlled stirred-tank bioreactors have been performed so far with unicellular microorganisms. We have compared cultivation of industrially relevant oleaginous filamentous fungi and microalga in a Duetz-microtiter plate system to benchtop and pre-pilot bioreactors. Maximal glucose consumption rate, biomass concentration, lipid content of the biomass, and biomass and lipid yield values showed good scalability for the filamentous fungi Mucor circinelloides (less than 20% differences) and Mortierella alpina (less than 30% differences). Maximal glucose consumption and biomass production rates were identical for Crypthecodinium cohnii in the microtiter plate and the benchtop bioreactor. Most likely due to shear stress sensitivity of this microalga in the stirred bioreactor, biomass concentration and lipid content of biomass were significantly higher in the microtiter plate system than in the benchtop bioreactor. Still, fermentation results obtained in the Duetz-microtiter plate system for Crypthecodinium cohnii are encouraging compared to what has been reported in the literature. Good reproducibility (coefficient of variation less than 15% for biomass growth, glucose consumption, lipid content, and pH) was achieved in the Duetz-microtiter plate system for Mucor circinelloides and Crypthecodinium cohnii. Mortierella alpina cultivation reproducibility might be improved with inoculation optimization. In conclusion, we have demonstrated the suitability of the Duetz-microtiter plate system for reproducible, scalable, and cost-efficient high-throughput screening of oleaginous microorganisms.
Scalable Light Module for Low-Cost, High-Efficiency Light- Emitting Diode Luminaires
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tarsa, Eric
2015-08-31
During this two-year program Cree developed a scalable, modular optical architecture for low-cost, high-efficacy light emitting diode (LED) luminaires. Stated simply, the goal of this architecture was to efficiently and cost-effectively convey light from LEDs (point sources) to broad luminaire surfaces (area sources). By simultaneously developing warm-white LED components and low-cost, scalable optical elements, a high system optical efficiency resulted. To meet program goals, Cree evaluated novel approaches to improve LED component efficacy at high color quality while not sacrificing LED optical efficiency relative to conventional packages. Meanwhile, efficiently coupling light from LEDs into modular optical elements, followed by optimally distributing and extracting this light, were challenges that were addressed via novel optical design coupled with frequent experimental evaluations. Minimizing luminaire bill of materials and assembly costs were two guiding principles for all design work, in the effort to achieve luminaires with significantly lower normalized cost ($/klm) than existing LED fixtures. Chief project accomplishments included the achievement of >150 lm/W warm-white LEDs having primary optics compatible with low-cost modular optical elements. In addition, a prototype Light Module optical efficiency of over 90% was measured, demonstrating the potential of this scalable architecture for ultra-high-efficacy LED luminaires. Since the project ended, Cree has continued to evaluate optical element fabrication and assembly methods in an effort to rapidly transfer this scalable, cost-effective technology to Cree production development groups. The Light Module concept is likely to make a strong contribution to the development of new cost-effective, high-efficacy luminaires, thereby accelerating widespread adoption of energy-saving SSL in the U.S.
A scalable parallel black oil simulator on distributed memory parallel computers
NASA Astrophysics Data System (ADS)
Wang, Kun; Liu, Hui; Chen, Zhangxin
2015-11-01
This paper presents our work on developing a parallel black oil simulator for distributed memory computers based on our in-house parallel platform. The parallel simulator is designed to overcome the performance issues of common simulators that are implemented for personal computers and workstations. The finite difference method is applied to discretize the black oil model. In addition, some advanced techniques are employed to strengthen the robustness and parallel scalability of the simulator, including an inexact Newton method, matrix decoupling methods, and algebraic multigrid methods. A new multi-stage preconditioner is proposed to accelerate the solution of linear systems from the Newton methods. Numerical experiments show that our simulator is scalable and efficient, and is capable of simulating extremely large-scale black oil problems with tens of millions of grid blocks using thousands of MPI processes on parallel computers.
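The inexact Newton idea mentioned above can be shown on a toy scalar problem. This is purely illustrative and unrelated to the paper's discretized black oil model: the Newton correction is applied only approximately (here, by damping the step), mimicking an inner linear solve that is stopped early.

```python
# Toy sketch of an inexact Newton iteration. The paper applies the idea
# to large linear systems from the discretized black oil model; this
# scalar example only illustrates the accept-an-approximate-step logic.

def inexact_newton(f, df, x0, eta=0.5, tol=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        r = f(x)
        if abs(r) < tol:
            return x
        step = -r / df(x)          # exact Newton direction...
        # ...but deliberately truncated, as a loose inner solve would be;
        # accept the damped step whenever it reduces the residual.
        x = x + eta * step if abs(f(x + eta * step)) < abs(r) else x + step
    return x

# Solve x^2 - 2 = 0; converges to sqrt(2) despite the inexact steps.
root = inexact_newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
print(abs(root - 2 ** 0.5) < 1e-8)  # True
```

In the production setting, the "inexactness" comes from solving the Jacobian system only to a loose tolerance with the multi-stage preconditioner and algebraic multigrid, which is what makes each Newton step cheap at scale.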
Towards Scalable Entangled Photon Sources with Self-Assembled InAs /GaAs Quantum Dots
NASA Astrophysics Data System (ADS)
Wang, Jianping; Gong, Ming; Guo, G.-C.; He, Lixin
2015-08-01
The biexciton cascade process in self-assembled quantum dots (QDs) provides an ideal system for realizing deterministic entangled photon-pair sources, which are essential to quantum information science. The entangled photon pairs have recently been generated in experiments after eliminating the fine-structure splitting (FSS) of excitons using a number of different methods. Thus far, however, QD-based sources of entangled photons have not been scalable because the wavelengths of QDs differ from dot to dot. Here, we propose a wavelength-tunable entangled photon emitter mounted on a three-dimensional stressor, in which the FSS and exciton energy can be tuned independently, thereby enabling photon entanglement between dissimilar QDs. We confirm these results via atomistic pseudopotential calculations. This provides a first step towards future realization of scalable entangled photon generators for quantum information applications.
Forrest, C. J.; Radha, P. B.; Knauer, J. P.; ...
2017-03-03
In this study, the deuterium-tritium (D-T) and deuterium-deuterium neutron yield ratio in cryogenic inertial confinement fusion (ICF) experiments is used to examine multifluid effects, traditionally not included in ICF modeling. This ratio has been measured for ignition-scalable direct-drive cryogenic DT implosions at the Omega Laser Facility using a high-dynamic-range neutron time-of-flight spectrometer. The experimentally inferred yield ratio is consistent with both the calculated values of the nuclear reaction rates and the measured preshot target-fuel composition. These observations indicate that the physical mechanisms that have been proposed to alter the fuel composition, such as species separation of the hydrogen isotopes, are not significant during the period of peak neutron production in ignition-scalable cryogenic direct-drive DT implosions.
Yan Wei, Xiao; Kuang, Shuang Yang; Yang Li, Hua; Pan, Caofeng; Zhu, Guang; Wang, Zhong Lin
2015-01-01
A self-powered, interface-free system is greatly desired for area-scalable applications. Here we report a self-powered electroluminescent system that consists of a triboelectric generator (TEG) and a thin-film electroluminescent (TFEL) lamp. The TEG provides high-voltage alternating electric output, which fits in well with the needs of the TFEL lamp. Induced charges pumped onto the lamp by the TEG generate an electric field that is sufficient to excite luminescence without an electrical interface circuit. Through rational serial connection of multiple TFEL lamps, effective and area-scalable luminescence is realized. It is demonstrated that multiple types of TEGs are applicable to the self-powered system, indicating that the system can make use of diverse mechanical sources and thus has potentially broad applications in illumination, display, entertainment, indication, surveillance and many others. PMID:26338365
The P-Mesh: A Commodity-based Scalable Network Architecture for Clusters
NASA Technical Reports Server (NTRS)
Nitzberg, Bill; Kuszmaul, Chris; Stockdale, Ian; Becker, Jeff; Jiang, John; Wong, Parkson; Tweten, David (Technical Monitor)
1998-01-01
We designed a new network architecture, the P-Mesh, which combines the scalability and fault resilience of a torus with the performance of a switch. We compare the scalability, performance, and cost of the hub, switch, torus, tree, and P-Mesh architectures. The latter three are capable of scaling to thousands of nodes; however, the torus has severe performance limitations with that many processors. The tree and P-Mesh have similar latency, bandwidth, and bisection bandwidth, but the P-Mesh outperforms the switch architecture (a lower bound for tree performance) on 16-node NAS Parallel Benchmark tests by up to 23%, and costs 40% less. Further, the P-Mesh has better fault resilience characteristics. The P-Mesh architecture trades increased management overhead for lower cost, and is a good bridging technology while the price of tree uplinks is expensive.
A Scalable Data Access Layer to Manage Structured Heterogeneous Biomedical Data
Lianas, Luca; Frexia, Francesca; Zanetti, Gianluigi
2016-01-01
This work presents a scalable data access layer, called PyEHR, designed to support the implementation of data management systems for secondary use of structured heterogeneous biomedical and clinical data. PyEHR adopts the openEHR’s formalisms to guarantee the decoupling of data descriptions from implementation details and exploits structure indexing to accelerate searches. Data persistence is guaranteed by a driver layer with a common driver interface. Interfaces for two NoSQL Database Management Systems are already implemented: MongoDB and Elasticsearch. We evaluated the scalability of PyEHR experimentally through two types of tests, called “Constant Load” and “Constant Number of Records”, with queries of increasing complexity on synthetic datasets of ten million records each, containing very complex openEHR archetype structures, distributed on up to ten computing nodes. PMID:27936191
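The structure-indexing idea behind PyEHR can be sketched briefly. The details below are assumptions inferred from the abstract, not PyEHR's actual code: each record's structural skeleton (its nested key paths, ignoring values) is hashed, so a structural query first narrows the search to the few structures that can possibly match and then scans only those records.

```python
# Sketch of structure indexing for heterogeneous records (assumed from
# the PyEHR abstract, not its implementation): hash each record's nested
# key paths so structurally identical records share an index bucket.
import hashlib
import json
from collections import defaultdict

def walk(node, prefix=""):
    """Yield every leaf path in a nested dict, e.g. '/a/b'."""
    if isinstance(node, dict):
        for k, v in node.items():
            yield from walk(v, prefix + "/" + k)
    else:
        yield prefix

def structure_key(record):
    """Return a stable short hash of the record's structural skeleton."""
    paths = sorted(walk(record))
    return hashlib.sha1(json.dumps(paths).encode()).hexdigest()[:8]

index = defaultdict(list)                  # structure hash -> record ids
records = {
    1: {"blood_pressure": {"systolic": 120, "diastolic": 80}},
    2: {"blood_pressure": {"systolic": 135, "diastolic": 90}},
    3: {"body_weight": {"kg": 71}},
}
for rid, rec in records.items():
    index[structure_key(rec)].append(rid)

# Records 1 and 2 share a structure; a structural query touches only them.
print(sorted(len(v) for v in index.values()))  # [1, 2]
```

In the real system the structures are openEHR archetypes and the buckets live in MongoDB or Elasticsearch, but the principle is the same: structure selection first, value filtering second.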
Generation of scalable terahertz radiation from cylindrically focused two-color laser pulses in air
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuk, D.; Yoo, Y. J.; Rosenthal, E. W.
2016-03-21
We demonstrate scalable terahertz (THz) generation by focusing terawatt, two-color laser pulses in air with a cylindrical lens. This focusing geometry creates a two-dimensional air plasma sheet, which yields two diverging THz lobe profiles in the far field. This setup can avoid plasma-induced laser defocusing and subsequent THz saturation, previously observed with spherical lens focusing of high-power laser pulses. By expanding the plasma source into a two-dimensional sheet, cylindrical focusing can lead to scalable THz generation. This scheme provides an energy conversion efficiency of 7 × 10⁻⁴, ∼7 times better than spherical lens focusing. The diverging THz lobes are refocused with a combination of cylindrical and parabolic mirrors to produce strong THz fields (>21 MV/cm) at the focal point.
NPTool: Towards Scalability and Reliability of Business Process Management
NASA Astrophysics Data System (ADS)
Braghetto, Kelly Rosa; Ferreira, João Eduardo; Pu, Calton
Currently, one important challenge in business process management is to provide both scalability and reliability of business process executions. This difficulty becomes more pronounced when the execution control involves countless complex business processes. This work presents NavigationPlanTool (NPTool), a tool to control the execution of business processes. NPTool is supported by Navigation Plan Definition Language (NPDL), a language for business process specification that uses process algebra as its formal foundation. NPTool implements the NPDL language as a SQL extension. The main contribution of this paper is a description of NPTool showing how process algebra features combined with a relational database model can be used to provide scalable and reliable control of business process execution. The next steps for NPTool include reuse of control-flow patterns and support for data flow management.
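The process-algebra foundation can be made concrete with a small example. The operators and activity names below are illustrative, not NPDL's actual syntax: sequential composition fixes execution order, while parallel composition allows any interleaving of the two sub-processes.

```python
# Minimal sketch of process-algebra semantics of the kind NPDL builds
# on (operators and names here are illustrative, not NPDL syntax):
# ('seq', a, b) runs a then b; ('par', a, b) allows any interleaving;
# a plain string is a single activity.

def traces(expr):
    """Enumerate the valid execution orders of a tiny process term."""
    if isinstance(expr, str):
        return [[expr]]
    op, a, b = expr
    if op == "seq":
        return [ta + tb for ta in traces(a) for tb in traces(b)]
    if op == "par":
        out = []
        for ta in traces(a):
            for tb in traces(b):
                out.extend(interleave(ta, tb))
        return out
    raise ValueError(op)

def interleave(xs, ys):
    """All order-preserving merges of two activity sequences."""
    if not xs:
        return [ys]
    if not ys:
        return [xs]
    return ([[xs[0]] + t for t in interleave(xs[1:], ys)] +
            [[ys[0]] + t for t in interleave(xs, ys[1:])])

# 'approve' must precede both notifications, which may run in parallel.
process = ("seq", "approve", ("par", "notify_email", "notify_sms"))
print(traces(process))
# [['approve', 'notify_email', 'notify_sms'],
#  ['approve', 'notify_sms', 'notify_email']]
```

In NPTool the analogous terms are stored and evaluated through the SQL extension, which lets the relational engine enforce which execution orders are legal at scale.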