Science.gov

Sample records for scalable isosurface visualization

  1. Scalable isosurface visualization of massive datasets on commodity off-the-shelf clusters

    PubMed Central

    Bajaj, Chandrajit

    2009-01-01

    Tomographic imaging and computer simulations are increasingly yielding massive datasets. Interactive and exploratory visualizations have rapidly become indispensable tools for studying large volumetric imaging and simulation data. Our scalable isosurface visualization framework on commodity off-the-shelf clusters is an end-to-end parallel and progressive platform, from initial data access to the final display. Interactive browsing of extracted isosurfaces is made possible by parallel isosurface extraction and rendering, in conjunction with a new specialized piece of image-compositing hardware called the Metabuffer. In this paper, we focus on back-end scalability by introducing a fully parallel and out-of-core isosurface extraction algorithm. It achieves scalability by combining parallel and out-of-core processing with parallel disks. It statically partitions the volume data across parallel disks with a balanced workload spectrum, and builds I/O-optimal external interval trees to minimize the number of I/O operations needed to load large data from disk. We also describe an isosurface compression scheme that is efficient for progressive extraction, transmission, and storage of isosurfaces. PMID:19756231
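
    The active-cell search mentioned above can be illustrated with a small in-memory sketch. The paper builds I/O-optimal external interval trees over per-cell value ranges; the stabbing-query idea is the same, but the class and function names below are illustrative assumptions, not the authors' implementation.

      # Minimal in-memory interval-tree sketch for active-cell queries.
      # Each cell contributes the interval [min(cell values), max(cell values)];
      # a cell is "active" for isovalue w if its interval contains w.
      # (The paper uses an I/O-optimal *external* interval tree; this shows only the idea.)

      class IntervalNode:
          def __init__(self, intervals):
              endpoints = sorted(p for lo, hi, _ in intervals for p in (lo, hi))
              self.center = endpoints[len(endpoints) // 2]
              here, left, right = [], [], []
              for lo, hi, cell_id in intervals:
                  if hi < self.center:
                      left.append((lo, hi, cell_id))
                  elif lo > self.center:
                      right.append((lo, hi, cell_id))
                  else:
                      here.append((lo, hi, cell_id))
              # Intervals spanning the center, sorted by each endpoint for early exit.
              self.by_lo = sorted(here, key=lambda iv: iv[0])
              self.by_hi = sorted(here, key=lambda iv: -iv[1])
              self.left = IntervalNode(left) if left else None
              self.right = IntervalNode(right) if right else None

          def stab(self, w, out):
              if w <= self.center:
                  for lo, hi, cell_id in self.by_lo:
                      if lo > w:
                          break
                      out.append(cell_id)
                  if self.left:
                      self.left.stab(w, out)
              else:
                  for lo, hi, cell_id in self.by_hi:
                      if hi < w:
                          break
                      out.append(cell_id)
                  if self.right:
                      self.right.stab(w, out)

      def active_cells(cell_ranges, isovalue):
          """cell_ranges: list of (min, max, cell_id). Returns ids of active cells."""
          if not cell_ranges:
              return []
          out = []
          IntervalNode(cell_ranges).stab(isovalue, out)
          return out

      # Example: three cells; isovalue 0.5 makes cells 0 and 2 active.
      print(active_cells([(0.1, 0.7, 0), (0.8, 0.9, 1), (0.4, 0.6, 2)], 0.5))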

  2. Faster isosurface ray tracing using implicit KD-trees.

    PubMed

    Wald, Ingo; Friedrich, Heiko; Marmitt, Gerd; Slusallek, Philipp; Seidel, Hans-Peter

    2005-01-01

    The visualization of high-quality isosurfaces at interactive rates is an important tool in many simulation and visualization applications. Today, isosurfaces are most often visualized by extracting a polygonal approximation that is then rendered via graphics hardware or by using a special variant of preintegrated volume rendering. However, these approaches have a number of limitations in terms of the quality of the isosurface, lack of performance for complex data sets, or supported shading models. An alternative isosurface rendering method that does not suffer from these limitations is to directly ray trace the isosurface. However, this approach has been much too slow for interactive applications unless massively parallel shared-memory supercomputers have been used. In this paper, we implement interactive isosurface ray tracing on commodity desktop PCs by building on recent advances in real-time ray tracing of polygonal scenes and using those to improve isosurface ray tracing performance as well. The high performance and scalability of our approach will be demonstrated with several practical examples, including the visualization of highly complex isosurface data sets, the interactive rendering of hybrid polygonal/isosurface scenes, including high-quality ray traced shading effects, and even interactive global illumination on isosurfaces.
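
    A hedged sketch of the min/max culling that makes implicit KD-trees effective: each node stores the minimum and maximum scalar value of its subtree, so whole regions whose value range excludes the isovalue are skipped without touching individual cells. This is a one-dimensional stand-in for the spatial hierarchy in the paper; the names and layout are illustrative assumptions.

      import numpy as np

      def build_minmax_tree(cell_minmax):
          """Binary min/max tree over per-cell (min, max) scalar ranges.
          Returns a list of levels; levels[0] holds the cells, the last level is the root."""
          levels = [np.asarray(cell_minmax, dtype=float)]
          while len(levels[-1]) > 1:
              prev = levels[-1]
              if len(prev) % 2:                          # duplicate last node on odd levels
                  prev = np.vstack([prev, prev[-1]])
              pairs = prev.reshape(-1, 2, 2)
              levels.append(np.stack([pairs[:, :, 0].min(axis=1),
                                      pairs[:, :, 1].max(axis=1)], axis=1))
          return levels

      def candidate_cells(levels, isovalue):
          """Top-down traversal that prunes subtrees whose [min, max] misses the isovalue."""
          hits, stack = [], [(len(levels) - 1, 0)]
          while stack:
              level, idx = stack.pop()
              lo, hi = levels[level][idx]
              if not (lo <= isovalue <= hi):
                  continue                               # whole subtree culled
              if level == 0:
                  hits.append(idx)                       # a potentially active cell
              else:
                  for child in (2 * idx, 2 * idx + 1):
                      if child < len(levels[level - 1]):
                          stack.append((level - 1, child))
          return sorted(hits)

      cells = np.array([[0.1, 0.4], [0.6, 0.9], [0.3, 0.5], [0.0, 0.2]])
      print(candidate_cells(build_minmax_tree(cells), 0.35))   # -> [0, 2]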

  3. Large Scale Isosurface Bicubic Subdivision-Surface Wavelets for Representation and Visualization

    SciTech Connect

    Bertram, M.; Duchaineau, M.A.; Hamann, B.; Joy, K.I.

    2000-01-05

    We introduce a new subdivision-surface wavelet transform for arbitrary two-manifolds with boundary that is the first to use simple lifting-style filtering operations with bicubic precision. We also describe a conversion process for re-mapping large-scale isosurfaces to have subdivision connectivity and fair parameterizations so that the new wavelet transform can be used for compression and visualization. The main idea enabling our wavelet transform is the circular symmetrization of the filters in irregular neighborhoods, which replaces the traditional separation of filters into two 1-D passes. Our wavelet transform uses polygonal base meshes to represent surface topology, from which a Catmull-Clark-style subdivision hierarchy is generated. The details between these levels of resolution are quickly computed and compactly stored as wavelet coefficients. The isosurface conversion process begins with a contour triangulation computed using conventional techniques, which we subsequently simplify with a variant of the edge-collapse procedure, followed by an edge-removal process. This provides a coarse initial base mesh, which is subsequently refined, relaxed, and attracted in phases to converge to the contour. The conversion is designed to produce smooth, untangled, and minimally skewed parameterizations, which improves the subsequent compression after applying the transform. We have demonstrated our conversion and transform for an isosurface obtained from a high-resolution turbulent-mixing hydrodynamics simulation, showing the potential for compression and level-of-detail visualization.

  4. Effects of VR system fidelity on analyzing isosurface visualization of volume datasets.

    PubMed

    Laha, Bireswar; Bowman, Doug A; Socha, John J

    2014-04-01

    Volume visualization is an important technique for analyzing datasets from a variety of different scientific domains. Volume data analysis is inherently difficult because volumes are three-dimensional, dense, and unfamiliar, requiring scientists to precisely control the viewpoint and to make precise spatial judgments. Researchers have proposed that more immersive (higher fidelity) VR systems might improve task performance with volume datasets, and significant results tied to different components of display fidelity have been reported. However, more information is needed to generalize these results to different task types, domains, and rendering styles. We visualized isosurfaces extracted from synchrotron microscopic computed tomography (SR-μCT) scans of beetles, in a CAVE-like display. We ran a controlled experiment evaluating the effects of three components of system fidelity (field of regard, stereoscopy, and head tracking) on a variety of abstract task categories that are applicable to various scientific domains, and also compared our results with those from our prior experiment using 3D texture-based rendering. We report many significant findings. For example, for search and spatial judgment tasks with isosurface visualization, a stereoscopic display provides better performance, but for tasks with 3D texture-based rendering, displays with higher field of regard were more effective, independent of the levels of the other display components. We also found that systems with high field of regard and head tracking improve performance in spatial judgment tasks. Our results extend existing knowledge and produce new guidelines for designing VR systems to improve the effectiveness of volume data analysis.

  5. Interactive isosurface ray tracing of time-varying tetrahedral volumes.

    PubMed

    Wald, Ingo; Friedrich, Heiko; Knoll, Aaron; Hansen, Charles D

    2007-01-01

    We describe a system for interactively rendering isosurfaces of tetrahedral finite-element scalar fields using coherent ray tracing techniques on the CPU. By employing state-of-the-art methods in polygonal ray tracing, namely aggressive packet/frustum traversal of a bounding volume hierarchy, we can accommodate large and time-varying unstructured data. In conjunction with this efficiency structure, we introduce a novel technique for intersecting ray packets with tetrahedral primitives. Ray tracing is flexible, allowing for dynamic changes in isovalue and time step, visualization of multiple isosurfaces, shadows, and depth-peeling transparency effects. The resulting system offers the intuitive simplicity of isosurfacing, guaranteed-correct visual results, and ultimately a scalable, dynamic and consistently interactive solution for visualizing unstructured volumes.

  6. Seamless multiresolution isosurfaces using wavelets

    SciTech Connect

    Udeshi, T.; Hudson, R.; Papka, M. E.

    2000-04-11

    Data sets that are being produced by today's simulations, such as the ones generated by DOE's ASCI program, are too large for real-time exploration and visualization. Therefore, new methods of visualizing these data sets need to be investigated. The authors present a method that combines isosurface representations of different resolutions into a seamless solution, virtually free of cracks and overlaps. The solution combines existing isosurface generation algorithms and wavelet theory to produce a real-time solution to multiple-resolution isosurfaces.

  7. Volume Haptics with Topology-Consistent Isosurfaces.

    PubMed

    Corenthy, Loïc; Otaduy, Miguel A; Pastor, Luis; Garcia, Marcos

    2015-01-01

    Haptic interfaces offer an intuitive way to interact with and manipulate 3D datasets, and may simplify the interpretation of visual information. This work proposes an algorithm to provide haptic feedback directly from volumetric datasets, as an aid to regular visualization. The haptic rendering algorithm lets users perceive isosurfaces in volumetric datasets, and it relies on several design features that ensure a robust and efficient rendering. A marching tetrahedra approach enables the dynamic extraction of a piecewise linear continuous isosurface. Robustness is achieved using a continuous collision detection step coupled with state-of-the-art proxy-based rendering methods over the extracted isosurface. The introduced marching tetrahedra approach guarantees that the extracted isosurface will match the topology of an equivalent isosurface computed using trilinear interpolation. The proposed haptic rendering algorithm improves the consistency between haptic and visual cues by computing a second proxy on the isosurface displayed on screen. Our experiments demonstrate the improvements in the isosurface extraction stage as well as the robustness and efficiency of the complete algorithm.
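
    A minimal sketch of the marching-tetrahedra step mentioned above: inside one tetrahedron the field is interpolated linearly along the edges, and zero, one, or two triangles are emitted depending on how many vertices lie above the isovalue. The topology-consistency machinery of the paper is not reproduced here; function and variable names are illustrative assumptions.

      import numpy as np

      def edge_point(p0, p1, v0, v1, iso):
          """Linear interpolation of the isovalue crossing along edge p0-p1."""
          t = (iso - v0) / (v1 - v0)
          return p0 + t * (p1 - p0)

      def march_tetrahedron(pts, vals, iso):
          """pts: 4x3 vertex positions, vals: 4 scalar values.
          Returns a list of triangles (each a 3x3 array) approximating the isosurface."""
          pts = np.asarray(pts, float)
          vals = np.asarray(vals, float)
          above = [i for i in range(4) if vals[i] > iso]
          below = [i for i in range(4) if vals[i] <= iso]
          if not above or not below:
              return []                                   # isosurface does not cross this tet
          cross = lambda a, b: edge_point(pts[a], pts[b], vals[a], vals[b], iso)
          if len(above) == 1 or len(below) == 1:
              apex = above[0] if len(above) == 1 else below[0]
              others = [i for i in range(4) if i != apex]
              return [np.array([cross(apex, o) for o in others])]
          # Two vertices above, two below: the crossing is a quad, split into two triangles.
          a0, a1 = above
          b0, b1 = below
          q = [cross(a0, b0), cross(a0, b1), cross(a1, b1), cross(a1, b0)]
          return [np.array([q[0], q[1], q[2]]), np.array([q[0], q[2], q[3]])]

      # Unit tetrahedron with one vertex above the isovalue 0.5 -> one triangle.
      pts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
      vals = [0.9, 0.1, 0.2, 0.3]
      print(march_tetrahedron(pts, vals, 0.5))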

  8. Case study of isosurface extraction algorithm performance

    SciTech Connect

    Sutton, P M; Hansen, C D; Shen, H; Schikore, D

    1999-12-14

    Isosurface extraction is an important and useful visualization method. Over the past ten years, the field has seen numerous isosurface techniques published, leaving the user in a quandary about which one should be used. Some papers have published complexity analyses of the techniques, yet empirical evidence comparing different methods is lacking. This case study presents a comparative study of several representative isosurface extraction algorithms. It reports and analyzes empirical measurements of execution times and memory behavior for each algorithm. The results show that asymptotically optimal techniques may not be the best choice when implemented on modern computer architectures.

  9. A graph algebra for scalable visual analytics.

    PubMed

    Shaverdian, Anna A; Zhou, Hao; Michailidis, George; Jagadish, Hosagrahar V

    2012-01-01

    Visual analytics (VA), which combines analytical techniques with advanced visualization features, is fast becoming a standard tool for extracting information from graph data. Researchers have developed many tools for this purpose, suggesting a need for formal methods to guide these tools' creation. Increased data demands on computing require redesigning VA tools to consider performance and reliability in the context of analyzing exascale datasets. Furthermore, visual analysts need a way to document their analyses for reuse and results justification. A VA graph framework encapsulated in a graph algebra helps address these needs. Its atomic operators include selection and aggregation. The framework employs a visual operator and supports dynamic attributes of data to enable scalable visual exploration of data.

  10. Scalable Visual Reasoning: Supporting Collaboration through Distributed Analysis

    SciTech Connect

    Pike, William A.; May, Richard A.; Baddeley, Bob; Riensche, Roderick M.; Bruce, Joe; Younkin, Katarina

    2007-05-21

    We present a visualization environment called the Scalable Reasoning System (SRS) that provides a suite of tools for the collection, analysis, and dissemination of reasoning products. This environment is designed to function across multiple platforms, bringing the display of visual information and the capture of reasoning associated with that information to both mobile and desktop clients. The service-oriented architecture of SRS promotes collaboration and interaction between users regardless of their location or platform. Visualization services allow data processing to be centralized and analysis results collected from distributed clients in real time. We use the concept of “reasoning artifacts” to capture the analytic value attached to individual pieces of information and collections thereof, helping to fuse the foraging and sense-making loops in information analysis. Reasoning structures composed of these artifacts can be shared across platforms while maintaining references to the analytic activity (such as interactive visualization) that produced them.

  11. The Scalable Reasoning System: Lightweight Visualization for Distributed Analytics

    SciTech Connect

    Pike, William A.; Bruce, Joseph R.; Baddeley, Robert L.; Best, Daniel M.; Franklin, Lyndsey; May, Richard A.; Rice, Douglas M.; Riensche, Roderick M.; Younkin, Katarina

    2008-11-01

    A central challenge in visual analytics is the creation of accessible, widely distributable analysis applications that bring the benefits of visual discovery to as broad a user base as possible. Moreover, to support the role of visualization in the knowledge creation process, it is advantageous to allow users to describe the reasoning strategies they employ while interacting with analytic environments. We introduce an application suite called the Scalable Reasoning System (SRS), which provides web-based and mobile interfaces for visual analysis. The service-oriented analytic framework that underlies SRS provides a platform for deploying pervasive visual analytic environments across an enterprise. SRS represents a “lightweight” approach to visual analytics whereby thin client analytic applications can be rapidly deployed in a platform-agnostic fashion. Client applications support multiple coordinated views while giving analysts the ability to record evidence, assumptions, hypotheses and other reasoning artifacts. We describe the capabilities of SRS in the context of a real-world deployment at a regional law enforcement organization.

  12. Fast scalable visualization techniques for interactive billion-particle walkthrough

    NASA Astrophysics Data System (ADS)

    Liu, Xinlian

    This research develops a comprehensive framework for interactive walkthrough involving one billion particles in an immersive virtual environment, to enable interrogative visualization of large atomistic simulation data. As a mixture of scientific and engineering approaches, the framework is based on four key techniques: adaptive data compression based on space-filling curves, octree-based visibility and occlusion culling, predictive caching based on machine learning, and scalable data reduction based on parallel and distributed processing. In terms of parallel rendering, this system combines functional parallelism, data parallelism, and temporal parallelism to improve interactivity. The visualization framework will be applicable not only to material simulation, but also to computational biology, applied mathematics, mechanical engineering, and nanotechnology.
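
    The space-filling-curve ordering mentioned above can be sketched with a Morton (Z-order) key, which interleaves the bits of the x, y, z cell indices so that particles that are close in space tend to land close together on disk. This is a generic illustration of the idea, not the dissertation's actual code; the grid spacing `cell` is an assumed parameter.

      def morton_key(x, y, z, bits=10):
          """Interleave the low `bits` bits of x, y, z into a single Z-order key."""
          key = 0
          for i in range(bits):
              key |= ((x >> i) & 1) << (3 * i)
              key |= ((y >> i) & 1) << (3 * i + 1)
              key |= ((z >> i) & 1) << (3 * i + 2)
          return key

      # Sort particles by their Morton key so that spatially nearby particles
      # end up in nearby file blocks (better compression and cache behaviour).
      particles = [(5.0, 1.0, 7.0), (4.0, 1.0, 7.0), (100.0, 200.0, 50.0)]
      cell = 1.0   # grid spacing used to quantise positions into integer cells (assumed)
      keys = [morton_key(int(px / cell), int(py / cell), int(pz / cell))
              for px, py, pz in particles]
      order = sorted(range(len(particles)), key=lambda i: keys[i])
      print(order)   # the two nearby particles sort next to each other, the distant one last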

  13. ViSUS: Visualization Streams for Ultimate Scalability

    SciTech Connect

    Pascucci, V

    2005-02-14

    In this project we developed a suite of progressive visualization algorithms and a data-streaming infrastructure that enable interactive exploration of scientific datasets of unprecedented size. The methodology aims to globally optimize the data flow in a pipeline of processing modules. Each module reads a multi-resolution representation of the input while producing a multi-resolution representation of the output. The use of multi-resolution representations provides the necessary flexibility to trade speed for accuracy in the visualization process. Maximum coherency and minimum delay in the data-flow is achieved by extensive use of progressive algorithms that continuously map local geometric updates of the input stream into immediate updates of the output stream. We implemented a prototype software infrastructure that demonstrated the flexibility and scalability of this approach by allowing large data visualization on single desktop computers, on PC clusters, and on heterogeneous computing resources distributed over a wide area network. When processing terabytes of scientific data, we have achieved an effective increase in visualization performance of several orders of magnitude in two major settings: (i) interactive visualization on desktop workstations of large datasets that cannot be stored locally; (ii) real-time monitoring of a large scientific simulation with negligible impact on the computing resources available. The ViSUS streaming infrastructure enabled the real-time execution and visualization of the two LLNL simulation codes (Miranda and Raptor) run at Supercomputing 2004 on Blue Gene/L at its presentation as the fastest supercomputer in the world. In addition to SC04, we have run live demonstrations at the IEEE VIS conference and at invited talks at the DOE MICS office, DOE computer graphics forum, UC Riverside, and the University of Maryland. In all cases we have shown the capability to stream and visualize interactively data stored remotely at the San

  14. Interactive high-resolution isosurface ray casting on multicore processors.

    PubMed

    Wang, Qin; JaJa, Joseph

    2008-01-01

    We present a new method for the interactive rendering of isosurfaces using ray casting on multi-core processors. This method consists of a combination of an object-order traversal that coarsely identifies possible candidate 3D data blocks for each small set of contiguous pixels, and an isosurface ray casting strategy tailored to the resulting limited-size lists of candidate 3D data blocks. While static screen partitioning is widely used in the literature, our scheme performs dynamic allocation of groups of ray casting tasks to ensure almost equal loads among the different threads running on the multi-core processors while maintaining spatial locality. We also make careful use of the memory management environment commonly present in multi-core processors. We test our system on a two-processor Clovertown platform, each processor being a quad-core 1.86-GHz Intel Xeon, for a number of widely different benchmarks. The detailed experimental results show that our system is efficient and scalable, and achieves high cache performance and excellent load balancing, resulting in an overall performance that is superior to any of the previous algorithms. In fact, we achieve interactive isosurface rendering on a 1024×1024 screen for all the datasets tested, up to the maximum size of the main memory of our platform.
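
    The dynamic load balancing described above can be hedged into a few lines: instead of statically assigning screen regions to threads, small tiles of pixels are pulled from a shared pool, so workers that finish cheap tiles simply take more of them. The tile renderer below is a placeholder and the tile size is an assumed value; none of this is the authors' implementation, and Python threads only illustrate the scheduling (the real system uses native threads for CPU-bound ray casting).

      from concurrent.futures import ThreadPoolExecutor
      from itertools import product

      WIDTH, HEIGHT, TILE = 1024, 1024, 32   # screen and tile size (assumed values)

      def render_tile(tile):
          """Placeholder for isosurface ray casting of one small pixel tile."""
          x0, y0 = tile
          # ... identify candidate data blocks for this tile, then cast its rays ...
          return (x0, y0, "rendered")

      tiles = list(product(range(0, WIDTH, TILE), range(0, HEIGHT, TILE)))

      # Dynamic allocation: workers pull tiles as they finish, which keeps the load
      # balanced even when some tiles are far more expensive than others.
      with ThreadPoolExecutor(max_workers=8) as pool:
          results = list(pool.map(render_tile, tiles))

      print(len(results), "tiles rendered")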

  15. Direct Isosurface Ray Casting of NURBS-Based Isogeometric Analysis.

    PubMed

    Schollmeyer, Andre; Froehlich, Bernd

    2014-09-01

    In NURBS-based isogeometric analysis, the basis functions of a 3D model's geometric description also form the basis for the solution space of variational formulations of partial differential equations. In order to visualize the results of a NURBS-based isogeometric analysis, we developed a novel GPU-based multi-pass isosurface visualization technique which performs directly on an equivalent rational Bézier representation without the need for discretization or approximation. Our approach utilizes rasterization to generate a list of intervals along the ray that each potentially contain boundary or isosurface intersections. Depth-sorting this list for each ray allows us to proceed in front-to-back order and enables early ray termination. We detect multiple intersections of a ray with the higher-order surface of the model using a sampling-based root-isolation method. The model's surfaces and the isosurfaces always appear smooth, independent of the zoom level due to our pixel-precise processing scheme. Our adaptive sampling strategy minimizes costs for point evaluations and intersection computations. The implementation shows that the proposed approach interactively visualizes volume meshes containing hundreds of thousands of Bézier elements on current graphics hardware. A comparison to a GPU-based ray casting implementation using spatial data structures indicates that our approach generally performs significantly faster while being more accurate.

  16. Scalable and portable visualization of large atomistic datasets

    NASA Astrophysics Data System (ADS)

    Sharma, Ashish; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya

    2004-10-01

    A scalable and portable code named Atomsviewer has been developed to interactively visualize a large atomistic dataset consisting of up to a billion atoms. The code uses a hierarchical view frustum-culling algorithm based on the octree data structure to efficiently remove atoms outside of the user's field-of-view. Probabilistic and depth-based occlusion-culling algorithms then select atoms that have a high probability of being visible. Finally, a multiresolution algorithm is used to render the selected subset of visible atoms at varying levels of detail. Atomsviewer is written in C++ and OpenGL, and it has been tested on a number of architectures including Windows, Macintosh, and SGI. Atomsviewer has been used to visualize tens of millions of atoms on a standard desktop computer and, in its parallel version, up to a billion atoms.
    Program summary
    Title of program: Atomsviewer
    Catalogue identifier: ADUM
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADUM
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Computer for which the program is designed and others on which it has been tested: 2.4 GHz Pentium 4/Xeon processor, professional graphics card; Apple G4 (867 MHz)/G5, professional graphics card
    Operating systems under which the program has been tested: Windows 2000/XP, Mac OS 10.2/10.3, SGI IRIX 6.5
    Programming languages used: C++, C and OpenGL
    Memory required to execute with typical data: 1 gigabyte of RAM
    High speed storage required: 60 gigabytes
    No. of lines in the distributed program including test data, etc.: 550 241
    No. of bytes in the distributed program including test data, etc.: 6 258 245
    Number of bits in a word: Arbitrary
    Number of processors used: 1
    Has the code been vectorized or parallelized: No
    Distribution format: tar gzip file
    Nature of physical problem: Scientific visualization of atomic systems
    Method of solution: Rendering of atoms using computer graphic techniques, culling algorithms for data

  17. Scalable nanohelices for predictive studies and enhanced 3D visualization.

    PubMed

    Meagher, Kwyn A; Doblack, Benjamin N; Ramirez, Mercedes; Davila, Lilian P

    2014-11-12

    Spring-like materials are ubiquitous in nature and of interest in nanotechnology for energy harvesting, hydrogen storage, and biological sensing applications. For predictive simulations, it has become increasingly important to be able to model the structure of nanohelices accurately. To study the effect of local structure on the properties of these complex geometries one must develop realistic models. To date, software packages are rather limited in creating atomistic helical models. This work focuses on producing atomistic models of silica glass (SiO₂) nanoribbons and nanosprings for molecular dynamics (MD) simulations. Using an MD model of "bulk" silica glass, two computational procedures to precisely create the shape of nanoribbons and nanosprings are presented. The first method employs the AWK programming language and open-source software to effectively carve various shapes of silica nanoribbons from the initial bulk model, using desired dimensions and parametric equations to define a helix. With this method, accurate atomistic silica nanoribbons can be generated for a range of pitch values and dimensions. The second method involves a more robust code which allows flexibility in modeling nanohelical structures. This approach utilizes a C++ code particularly written to implement pre-screening methods as well as the mathematical equations for a helix, resulting in greater precision and efficiency when creating nanospring models. Using these codes, well-defined and scalable nanoribbons and nanosprings suited for atomistic simulations can be effectively created. An added value in both open-source codes is that they can be adapted to reproduce different helical structures, independent of material. In addition, a MATLAB graphical user interface (GUI) is used to enhance learning through visualization and interaction for a general user with the atomistic helical structures. One application of these methods is the recent study of nanohelices via MD simulations for
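
    The helix-carving idea lends itself to a short sketch: given the parametric equations of a helix, keep only the atoms that lie within a chosen wire radius of the curve. Parameter names and the nearest-point distance test below are assumptions for illustration; the published codes use AWK and C++ with additional pre-screening steps.

      import numpy as np

      def carve_nanospring(atoms, helix_radius, pitch, turns, wire_radius, samples=500):
          """Keep atoms within `wire_radius` of the helix
             x = R cos(t), y = R sin(t), z = pitch * t / (2*pi),  t in [0, 2*pi*turns]."""
          t = np.linspace(0.0, 2.0 * np.pi * turns, samples)
          curve = np.stack([helix_radius * np.cos(t),
                            helix_radius * np.sin(t),
                            pitch * t / (2.0 * np.pi)], axis=1)
          atoms = np.asarray(atoms, float)
          # Distance from every atom to the nearest sampled point on the helix axis curve.
          d = np.linalg.norm(atoms[:, None, :] - curve[None, :, :], axis=2).min(axis=1)
          return atoms[d <= wire_radius]

      # Toy "bulk" box of points standing in for the silica glass model.
      rng = np.random.default_rng(0)
      bulk = rng.uniform([-15, -15, 0], [15, 15, 30], size=(5000, 3))
      spring = carve_nanospring(bulk, helix_radius=10.0, pitch=10.0, turns=3, wire_radius=2.0)
      print(spring.shape)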

  18. Scalable Nanohelices for Predictive Studies and Enhanced 3D Visualization

    PubMed Central

    Meagher, Kwyn A.; Doblack, Benjamin N.; Ramirez, Mercedes; Davila, Lilian P.

    2014-01-01

    Spring-like materials are ubiquitous in nature and of interest in nanotechnology for energy harvesting, hydrogen storage, and biological sensing applications.  For predictive simulations, it has become increasingly important to be able to model the structure of nanohelices accurately.  To study the effect of local structure on the properties of these complex geometries one must develop realistic models.  To date, software packages are rather limited in creating atomistic helical models.  This work focuses on producing atomistic models of silica glass (SiO2) nanoribbons and nanosprings for molecular dynamics (MD) simulations. Using an MD model of “bulk” silica glass, two computational procedures to precisely create the shape of nanoribbons and nanosprings are presented.  The first method employs the AWK programming language and open-source software to effectively carve various shapes of silica nanoribbons from the initial bulk model, using desired dimensions and parametric equations to define a helix.  With this method, accurate atomistic silica nanoribbons can be generated for a range of pitch values and dimensions.  The second method involves a more robust code which allows flexibility in modeling nanohelical structures.  This approach utilizes a C++ code particularly written to implement pre-screening methods as well as the mathematical equations for a helix, resulting in greater precision and efficiency when creating nanospring models.  Using these codes, well-defined and scalable nanoribbons and nanosprings suited for atomistic simulations can be effectively created.  An added value in both open-source codes is that they can be adapted to reproduce different helical structures, independent of material.  In addition, a MATLAB graphical user interface (GUI) is used to enhance learning through visualization and interaction for a general user with the atomistic helical structures.  One application of these methods is the recent study of nanohelices

  19. Trident: scalable compute archives: workflows, visualization, and analysis

    NASA Astrophysics Data System (ADS)

    Gopu, Arvind; Hayashi, Soichi; Young, Michael D.; Kotulla, Ralf; Henschel, Robert; Harbeck, Daniel

    2016-08-01

    The Astronomy scientific community has embraced Big Data processing challenges, e.g. those associated with time-domain astronomy, and has come up with a variety of novel and efficient data processing solutions. However, data processing is only a small part of the Big Data challenge. Efficient knowledge discovery and scientific advancement in the Big Data era require new and equally efficient tools: modern user interfaces for searching, identifying and viewing data online without direct access to the data; tracking of data provenance; searching, plotting and analyzing metadata; interactive visual analysis, especially of (time-dependent) image data; and the ability to execute pipelines on supercomputing and cloud resources with minimal user overhead or expertise, even for novice computing users. The Trident project at Indiana University offers a comprehensive web and cloud-based microservice software suite that enables the straightforward deployment of highly customized Scalable Compute Archive (SCA) systems, including extensive visualization and analysis capabilities, with a minimal amount of additional coding. Trident seamlessly scales up or down in terms of data volumes and computational needs, and allows feature sets within a web user interface to be quickly adapted to meet individual project requirements. Domain experts only have to provide code or business logic about handling/visualizing their domain's data products and about executing their pipelines and application workflows. Trident's microservices architecture is made up of lightweight services connected by a REST API and/or a message bus; web interface elements are built using NodeJS, AngularJS, and HighCharts JavaScript libraries, among others, while backend services are written in NodeJS, PHP/Zend, and Python. The software suite currently consists of (1) a simple workflow execution framework to integrate, deploy, and execute pipelines and applications, (2) a progress service to monitor workflows and sub

  20. ParaText : scalable text analysis and visualization.

    SciTech Connect

    Dunlavy, Daniel M.; Stanton, Eric T.; Shead, Timothy M.

    2010-07-01

    Automated analysis of unstructured text documents (e.g., web pages, newswire articles, research publications, business reports) is a key capability for solving important problems in areas including decision making, risk assessment, social network analysis, intelligence analysis, scholarly research and others. However, as data sizes continue to grow in these areas, scalable processing, modeling, and semantic analysis of text collections becomes essential. In this paper, we present the ParaText text analysis engine, a distributed-memory software framework for processing, modeling, and analyzing collections of unstructured text documents. Results on several document collections using hundreds of processors are presented to illustrate the flexibility, extensibility, and scalability of the entire process of text modeling from raw data ingestion to application analysis.

  1. Infrastructure for Scalable and Interoperable Visualization and Analysis Software Technology

    SciTech Connect

    Bethel, E. Wes

    2004-08-01

    This document describes the LBNL vision for issues to be considered when assembling a large, multi-institution visualization and analysis effort. It was drafted at the request of the PNNL National Visual Analytics Center in July 2004.

  2. Visual Communications for Heterogeneous Networks/Visually Optimized Scalable Image Compression. Final Report for September 1, 1995 - February 28, 2002

    SciTech Connect

    Hemami, S. S.

    2003-06-03

    The authors developed image and video compression algorithms that provide scalability, reconstructibility, and network adaptivity, and developed compression and quantization strategies that are visually optimal at all bit rates. The goal of this research is to enable reliable "universal access" to visual communications over the National Information Infrastructure (NII). All users, regardless of their individual network connection bandwidths, qualities-of-service, or terminal capabilities, should have the ability to access still images, video clips, and multimedia information services, and to use interactive visual communications services. To do so requires special capabilities for image and video compression algorithms: scalability, reconstructibility, and network adaptivity. Scalability allows an information service to provide visual information at many rates, without requiring additional compression or storage after the stream has been compressed the first time. Reconstructibility allows reliable visual communications over an imperfect network. Network adaptivity permits real-time modification of compression parameters to adjust to changing network conditions. Furthermore, to optimize the efficiency of the compression algorithms, they should be visually optimal, where each bit expended reduces the visual distortion. Visual optimality is achieved through first extensive experimentation to quantify human sensitivity to supra-threshold compression artifacts and then incorporation of these experimental results into quantization strategies and compression algorithms.

  3. Scalable Visualization, applied to Galaxies,Oceans & Brains

    NASA Astrophysics Data System (ADS)

    Pailthorpe, Bernard

    2001-06-01

    The frontiers of Scientific Visualisation now include problems arising with data that scales in size or complexity. New metaphors may be needed to navigate, analyse and display the data emerging from bio-diversity, genomic and socio-economic studies. This talk addresses the challenges in generating algorithms and software libraries that are suitable for the large-scale data emerging from tera-scale simulations and instruments. With larger and more complex datasets, moving into the 100 GB-1 TB realm, scalable methodologies and tools are required. The collaborative efforts to address these challenges, currently underway at the San Diego Supercomputer Center and within the National Partnership for Advanced Computational Infrastructure (NPACI), will be summarised. The ultimate aim of this R&D program is to facilitate queries and analysis of multiple, large data sets derived from motivating applications in astrophysics, planetary-scale oceanographic simulations and human brain mapping. Research challenges in such science application domains provide the justification for developing such tools. Previously, planetary-scale oceanographic simulations had resolutions limited to 2 degrees in latitude and longitude. With teraflop computing resources coming on line, such simulations will be conducted at 10x (and presently 100x) resolution, soon yielding multiple sets of 100 GByte numerical output. In mapping the human brain, up to four distinct imaging modalities are used, with datasets already at tens of GBytes. The immediate research challenge is to composite these images, facilitating simultaneous analysis of structural and functional information. These applications manifest the need for high-capacity computer displays, moving beyond the usual 1-megapixel desktops to 10 megapixels and more. Developments in this area will be discussed.

  4. CGLX: a scalable, high-performance visualization framework for networked display environments.

    PubMed

    Doerr, Kai-Uwe; Kuester, Falko

    2011-03-01

    The Cross Platform Cluster Graphics Library (CGLX) is a flexible and transparent OpenGL-based graphics framework for distributed, high-performance visualization systems. CGLX allows OpenGL based applications to utilize massively scalable visualization clusters such as multiprojector or high-resolution tiled display environments and to maximize the achievable performance and resolution. The framework features a programming interface for hardware-accelerated rendering of OpenGL applications on visualization clusters, mimicking a GLUT-like (OpenGL-Utility-Toolkit) interface to enable smooth translation of single-node applications to distributed parallel rendering applications. CGLX provides a unified, scalable, distributed OpenGL context to the user by intercepting and manipulating certain OpenGL directives. CGLX's interception mechanism, in combination with the core functionality for users to register callbacks, enables this framework to manage a visualization grid without additional implementation requirements to the user. Although CGLX grants access to its core engine, allowing users to change its default behavior, general development can occur in the context of a standalone desktop. The framework provides an easy-to-use graphical user interface (GUI) and tools to test, setup, and configure a visualization cluster. This paper describes CGLX's architecture, tools, and systems components. We present performance and scalability tests with different types of applications, and we compare the results with a Chromium-based approach.

  5. Scalable Multivariate Volume Visualization and Analysis Based on Dimension Projection and Parallel Coordinates.

    PubMed

    Guo, Hanqi; Xiao, He; Yuan, Xiaoru

    2012-09-01

    In this paper, we present an effective and scalable system for multivariate volume data visualization and analysis with a novel transfer function interface design that tightly couples parallel coordinates plots (PCP) and MDS-based dimension projection plots. In our system, the PCP visualizes the data distribution of each variate (dimension) and the MDS plots project features. They are integrated seamlessly to provide flexible feature classification without context switching between different data presentations during the user interaction. The proposed interface enables users to identify relevant correlation clusters and assign optical properties with lassos, magic wand, and other tools. Furthermore, direct sketching on the volume rendered images has been implemented to probe and edit features. With our system, users can interactively analyze multivariate volumetric data sets by navigating and exploring feature spaces in unified PCP and MDS plots. To further support large-scale multivariate volume data visualization and analysis, Scalable Pivot MDS (SPMDS), parallel adaptive continuous PCP rendering, as well as parallel rendering techniques are developed and integrated into our visualization system. Our experiments show that the system is effective in multivariate volume data visualization and its performance is highly scalable for data sets with different sizes and number of variates.

  6. A transparently scalable visualization architecture for exploring the universe.

    PubMed

    Fu, Chi-Wing; Hanson, Andrew J

    2007-01-01

    Modern astronomical instruments produce enormous amounts of three-dimensional data describing the physical Universe. The currently available data sets range from the solar system to nearby stars and portions of the Milky Way Galaxy, including the interstellar medium and some extrasolar planets, and extend out to include galaxies billions of light years away. Because of its gigantic scale and the fact that it is dominated by empty space, modeling and rendering the Universe is very different from modeling and rendering ordinary three-dimensional virtual worlds at human scales. Our purpose is to introduce a comprehensive approach to an architecture solving this visualization problem that encompasses the entire Universe while seeking to be as scale-neutral as possible. One key element is the representation of model-rendering procedures using power scaled coordinates (PSC), along with various PSC-based techniques that we have devised to generalize and optimize the conventional graphics framework to the scale domains of astronomical visualization. Employing this architecture, we have developed an assortment of scale-independent modeling and rendering methods for a large variety of astronomical models, and have demonstrated scale-insensitive interactive visualizations of the physical Universe covering scales ranging from human scale to the Earth, to the solar system, to the Milky Way Galaxy, and to the entire observable Universe.

  7. Isosurface extraction and spatial filtering using Persistent OcTree (POT).

    PubMed

    Shi, Qingmin; JaJa, Joseph

    2006-01-01

    We propose a novel Persistent OcTree (POT) indexing structure for accelerating isosurface extraction and spatial filtering from volumetric data. This data structure efficiently handles a wide range of visualization problems such as the generation of view-dependent isosurfaces, ray tracing, and isocontour slicing for high dimensional data. POT can be viewed as a hybrid data structure between the interval tree and the Branch-On-Need Octree (BONO) in the sense that it achieves the asymptotic bound of the interval tree for identifying the active cells corresponding to an isosurface and is more efficient than BONO for handling spatial queries. We encode a compact octree for each isovalue. Each such octree contains only the corresponding active cells, in such a way that the combined structure has linear space. The inherent hierarchical structure associated with the active cells enables very fast filtering of the active cells based on spatial constraints. We demonstrate the effectiveness of our approach by performing view-dependent isosurfacing on a wide variety of volumetric data sets and 4D isocontour slicing on the time-varying Richtmyer-Meshkov instability dataset.

  8. Shrink-wrapped isosurface from cross sectional images.

    PubMed

    Choi, Y K; Hahn, J K

    2007-12-01

    This paper addresses a new surface reconstruction scheme for approximating the isosurface from a set of tomographic cross-sectional images. Unlike the well-known Marching Cubes (MC) algorithm, our method does not extract the iso-density surface (isosurface) directly from the voxel data but calculates the iso-density points (isopoints) first. After building a coarse initial mesh approximating the ideal isosurface by the cell-boundary representation, it metamorphoses the mesh into the final isosurface by a relaxation scheme, called the shrink-wrapping process. Compared with the MC algorithm, our method is robust and does not produce any cracks on the surface. Furthermore, since it is possible to utilize many additional isopoints during the surface reconstruction process by extending the adjacency definition, the resulting surface can theoretically be of better quality than that of the MC algorithm. Experiments show it to be very robust and efficient for isosurface reconstruction from cross-sectional images.

  9. Isosurface construction in any dimension using Convex Hulls.

    PubMed

    Bhaniramka, Praveen; Wenger, Rephael; Crawfis, Roger

    2004-01-01

    We present an algorithm for constructing isosurfaces in any dimension. The input to the algorithm is a set of scalar values in a d-dimensional regular grid of (topological) hypercubes. The output is a set of (d-1)-dimensional simplices forming a piecewise linear approximation to the isosurface. The algorithm constructs the isosurface piecewise within each hypercube in the grid using the convex hull of an appropriate set of points. We prove that our algorithm correctly produces a triangulation of a (d-1)-manifold with boundary. In dimensions three and four, lookup tables with 2^8 and 2^16 entries, respectively, can be used to speed the algorithm's running time. In three dimensions, this gives the popular Marching Cubes algorithm. We discuss applications of four-dimensional isosurface construction to time varying isosurfaces, interval volumes, and morphing.
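
    A rough sketch of the convex-hull construction for a single 3D cube, under the assumption that the isosurface patch consists of the hull facets whose vertices are all edge-intersection points (grid corners above the isovalue are added only to close the hull and are then discarded). This mirrors the idea of the paper rather than its table-driven, arbitrary-dimension implementation; names are illustrative.

      import itertools
      import numpy as np
      from scipy.spatial import ConvexHull

      def cube_isosurface(values, iso):
          """values: dict mapping each cube corner (x, y, z) in {0,1}^3 to a scalar.
          Returns triangles (lists of 3 points) approximating the isosurface patch."""
          corners = list(itertools.product((0, 1), repeat=3))
          points, is_edge_point = [], []
          # Corners above the isovalue close the hull around the positive region.
          for c in corners:
              if values[c] > iso:
                  points.append(np.array(c, float))
                  is_edge_point.append(False)
          # Edge intersection points, by linear interpolation along cube edges.
          for a, b in itertools.combinations(corners, 2):
              if sum(abs(ai - bi) for ai, bi in zip(a, b)) != 1:
                  continue                                    # not a cube edge
              va, vb = values[a], values[b]
              if (va > iso) != (vb > iso):
                  t = (iso - va) / (vb - va)
                  p = np.array(a, float) + t * (np.array(b, float) - np.array(a, float))
                  points.append(p)
                  is_edge_point.append(True)
          if len(points) < 4:
              return []
          hull = ConvexHull(np.array(points))
          # Keep only facets made entirely of edge-intersection points: the isosurface patch.
          return [[points[i] for i in simplex]
                  for simplex in hull.simplices
                  if all(is_edge_point[i] for i in simplex)]

      vals = {c: 0.0 for c in itertools.product((0, 1), repeat=3)}
      vals[(0, 0, 0)] = 1.0                                   # one corner above the isovalue
      print(cube_isosurface(vals, 0.5))                       # a single triangle near that corner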

  10. AggreSet: Rich and Scalable Set Exploration using Visualizations of Element Aggregations.

    PubMed

    Yalçin, M Adil; Elmqvist, Niklas; Bederson, Benjamin B

    2016-01-01

    Datasets commonly include multi-value (set-typed) attributes that describe set memberships over elements, such as genres per movie or courses taken per student. Set-typed attributes describe rich relations across elements, sets, and the set intersections. Increasing the number of sets results in a combinatorial growth of relations and creates scalability challenges. Exploratory tasks (e.g. selection, comparison) have commonly been designed in separation for set-typed attributes, which reduces interface consistency. To improve on scalability and to support rich, contextual exploration of set-typed data, we present AggreSet. AggreSet creates aggregations for each data dimension: sets, set-degrees, set-pair intersections, and other attributes. It visualizes the element count per aggregate using a matrix plot for set-pair intersections, and histograms for set lists, set-degrees and other attributes. Its non-overlapping visual design is scalable to numerous and large sets. AggreSet supports selection, filtering, and comparison as core exploratory tasks. It allows analysis of set relations including subsets, disjoint sets and set intersection strength, and also features perceptual set ordering for detecting patterns in set matrices. Its interaction is designed for rich and rapid data exploration. We demonstrate results on a wide range of datasets from different domains with varying characteristics, and report on expert reviews and a case study using student enrollment and degree data with assistant deans at a major public university.
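
    The per-dimension aggregations described above can be sketched in a few lines: count elements per set, per set-degree, and per set-pair intersection, which feed the histograms and the matrix plot respectively. The toy data and names below are illustrative, not from the paper.

      from itertools import combinations
      from collections import Counter

      # Toy data: genres per movie (a set-typed attribute).
      movies = {
          "Alien": {"horror", "scifi"},
          "Blade": {"action", "horror"},
          "Her":   {"drama", "scifi"},
          "Heat":  {"action", "drama"},
          "Moon":  {"scifi"},
      }

      # Aggregations per dimension, in the spirit of AggreSet:
      set_sizes   = Counter(g for genres in movies.values() for g in genres)      # per set
      set_degrees = Counter(len(genres) for genres in movies.values())            # per degree
      pair_counts = Counter(frozenset(p)                                          # per set pair
                            for genres in movies.values()
                            for p in combinations(sorted(genres), 2))

      print(set_sizes)      # element count per set (histogram)
      print(set_degrees)    # how many elements belong to 1, 2, ... sets
      print(pair_counts)    # matrix-plot cells: shared elements per set pair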

  11. Dynamic Isosurface Extraction and Level-of-Detail in Voxel Space

    SciTech Connect

    Lamphere, P.B.; Linebarger, J.M.

    1999-03-01

    A new visualization representation is described, which dramatically improves interactivity for scientific visualizations of structured grid data sets by creating isosurfaces at interactive speeds and with dynamically changeable levels-of-detail (LOD). This representation enables greater interactivity by allowing an analyst to dynamically specify both the desired isosurface threshold and required level-of-detail to be used while rendering the image. A scientist can therefore view very large isosurfaces at interactive speeds (with a low level-of-detail), but has the full data set always available for analysis. The key idea is that various levels-of-detail are represented as differently sized hexahedral virtual voxels, which are stored in a three-dimensional binary tree, or kd-tree; thus the level-of-detail representation is done in voxel space instead of the traditional approach which relies on surface or geometry space decimations. Utilizing the voxel space is an essential step to moving from a post-processing visualization paradigm to a quantitative, real-time paradigm. This algorithm has been implemented as an integral component of the EIGEN/VR project at Sandia National Laboratories, which provides a rich environment for scientists to interactively explore and visualize the results of very large-scale simulations performed on massively parallel supercomputers.

  12. Ray-tracing polymorphic multidomain spectral/hp elements for isosurface rendering.

    PubMed

    Nelson, Blake; Kirby, Robert M

    2006-01-01

    The purpose of this paper is to present a ray-tracing isosurface rendering algorithm for spectral/hp (high-order finite) element methods in which the visualization error is both quantified and minimized. Determination of the ray-isosurface intersection is accomplished by classic polynomial root-finding applied to a polynomial approximation obtained by projecting the finite element solution over element-partitioned segments along the ray. Combining the smoothness properties of spectral/hp elements with classic orthogonal polynomial approximation theory, we devise an adaptive scheme which allows the polynomial approximation along a ray-segment to be arbitrarily close to the true solution. The resulting images converge toward a pixel-exact image at a rate far faster than sampling the spectral/hp element solution and applying classic low-order visualization techniques such as marching cubes.
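
    The ray-isosurface step described above can be hedged into a small example: sample the (assumed smooth) field along one element-partitioned ray segment, fit an orthogonal-polynomial approximation, and take its real roots in the segment as candidate intersections. The adaptive error control of the paper is omitted, Chebyshev fitting stands in for its projection scheme, and all names are illustrative.

      import numpy as np
      from numpy.polynomial import chebyshev as C

      def ray_isosurface_hits(field_along_ray, t0, t1, isovalue, degree=8):
          """Approximate f(t) - isovalue on [t0, t1] by a Chebyshev polynomial and
          return its real roots inside the segment as candidate intersections."""
          t = np.linspace(t0, t1, 4 * (degree + 1))          # sample the solution along the ray
          f = np.array([field_along_ray(ti) for ti in t]) - isovalue
          poly = C.Chebyshev.fit(t, f, degree, domain=[t0, t1])
          roots = poly.roots()
          real = roots[np.isclose(roots.imag, 0.0)].real
          return np.sort(real[(real >= t0) & (real <= t1)])

      # Example: a smooth scalar field sampled along one ray segment.
      field = lambda t: np.sin(3.0 * t) + 0.2 * t
      print(ray_isosurface_hits(field, 0.0, 2.0, 0.5))        # ray parameters of candidate hits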

  13. Interactive Querying over Large Network Data: Scalability, Visualization, and Interaction Design.

    PubMed

    Pienta, Robert; Tamersoy, Acar; Tong, Hanghang; Endert, Alex; Chau, Duen Horng

    2015-01-01

    Given the explosive growth of modern graph data, new methods are needed that allow for the querying of complex graph structures without the need for complicated query languages; in short, interactive graph querying is desirable. We describe our work towards achieving our overall research goal of designing and developing an interactive querying system for large network data. We focus on three critical aspects: scalable data mining algorithms, graph visualization, and interaction design. In our previous work we completed an approximate subgraph matching system called MAGE, which provides the algorithmic foundation that allows us to query a graph with hundreds of millions of edges. Our preliminary work on visual graph querying, Graphite, was the first step toward making an interactive graph querying system. We are in the process of designing the graph visualization and robust interaction needed to make truly interactive graph querying a reality.

  14. KOLAM: a cross-platform architecture for scalable visualization and tracking in wide-area imagery

    NASA Astrophysics Data System (ADS)

    Fraser, Joshua; Haridas, Anoop; Seetharaman, Guna; Rao, Raghuveer M.; Palaniappan, Kannappan

    2013-05-01

    KOLAM is an open, cross-platform, interoperable, scalable and extensible framework supporting a novel multi-scale spatiotemporal dual-cache data structure for big data visualization and visual analytics. This paper focuses on the use of KOLAM for target tracking in high-resolution, high-throughput, wide-format video, also known as wide-area motion imagery (WAMI). It was originally developed for the interactive visualization of extremely large geospatial imagery of high spatial and spectral resolution. KOLAM is platform, operating system and (graphics) hardware independent, and supports embedded datasets scalable from hundreds of gigabytes to feasibly petabytes in size on clusters, workstations, desktops and mobile computers. In addition to rapid roam, zoom and hyper-jump spatial operations, a large number of simultaneously viewable embedded pyramid layers (also referred to as multiscale or sparse imagery), interactive colormap and histogram enhancement, spherical projection and terrain maps are supported. The KOLAM software architecture was extended to support airborne wide-area motion imagery by organizing spatiotemporal tiles in very large format video frames using a temporal cache of tiled pyramid cached data structures. The current version supports WAMI animation, fast intelligent inspection, trajectory visualization and target tracking (digital tagging); the latter by interfacing with external automatic tracking software. One of the critical needs for working with WAMI is a supervised tracking and visualization tool that allows analysts to digitally tag multiple targets, quickly review and correct tracking results and apply geospatial visual analytic tools on the generated trajectories. One-click manual tracking combined with multiple automated tracking algorithms is available to assist the analyst and increase human effectiveness.

  15. Reinventing the Contingency Wheel: Scalable Visual Analytics of Large Categorical Data.

    PubMed

    Alsallakh, B; Aigner, W; Miksch, S; Gröller, M E

    2012-12-01

    Contingency tables summarize the relations between categorical variables and arise in both scientific and business domains. Asymmetrically large two-way contingency tables pose a problem for common visualization methods. The Contingency Wheel has been recently proposed as an interactive visual method to explore and analyze such tables. However, the scalability and readability of this method are limited when dealing with large and dense tables. In this paper we present Contingency Wheel++, new visual analytics methods that overcome these major shortcomings: (1) regarding automated methods, a measure of association based on Pearson's residuals alleviates the bias of the raw residuals originally used, (2) regarding visualization methods, a frequency-based abstraction of the visual elements eliminates overlapping and makes analyzing both positive and negative associations possible, and (3) regarding the interactive exploration environment, a multi-level overview+detail interface enables exploring individual data items that are aggregated in the visualization or in the table using coordinated views. We illustrate the applicability of these new methods with a use case and show how they enable discovering and analyzing nontrivial patterns and associations in large categorical data.

  16. Decoupling Illumination from Isosurface Generation Using 4D Light Transport

    PubMed Central

    Banks, David C.; Beason, Kevin M.

    2014-01-01

    One way to provide global illumination for the scientist who performs an interactive sweep through a 3D scalar dataset is to pre-compute global illumination, resample the radiance onto a 3D grid, then use it as a 3D texture. The basic approach of repeatedly extracting isosurfaces, illuminating them, and then building a 3D illumination grid suffers from the non-uniform sampling that arises from coupling the sampling of radiance with the sampling of isosurfaces. We demonstrate how the illumination step can be decoupled from the isosurface extraction step by illuminating the entire 3D scalar function as a 3-manifold in 4-dimensional space. By reformulating light transport in a higher dimension, one can sample a 3D volume without requiring the radiance samples to aggregate along individual isosurfaces in the pre-computed illumination grid. PMID:19834238
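
    The "use it as a 3D texture" step amounts to trilinear interpolation into the precomputed radiance grid at shading time. A minimal sketch follows; the grid layout, point parameterization, and names are assumptions for illustration, not the paper's implementation.

      import numpy as np

      def sample_radiance(grid, p):
          """Trilinear lookup into a precomputed radiance grid at point p.
          grid: (nx, ny, nz) array of radiance values; p lies in [0, nx-1] x [0, ny-1] x [0, nz-1]."""
          p = np.clip(p, 0, np.array(grid.shape) - 1 - 1e-9)
          i0 = np.floor(p).astype(int)
          f = p - i0                                          # fractional position inside the cell
          acc = 0.0
          for dx in (0, 1):
              for dy in (0, 1):
                  for dz in (0, 1):
                      w = ((1 - f[0]) if dx == 0 else f[0]) * \
                          ((1 - f[1]) if dy == 0 else f[1]) * \
                          ((1 - f[2]) if dz == 0 else f[2])
                      acc += w * grid[i0[0] + dx, i0[1] + dy, i0[2] + dz]
          return acc

      radiance = np.random.default_rng(1).random((16, 16, 16))   # stand-in illumination grid
      print(sample_radiance(radiance, np.array([3.2, 7.8, 0.5])))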

  17. Scalable Predictive Analysis in Critically Ill Patients Using a Visual Open Data Analysis Platform

    PubMed Central

    Poucke, Sven Van; Zhang, Zhongheng; Schmitz, Martin; Vukicevic, Milan; Laenen, Margot Vander; Celi, Leo Anthony; Deyne, Cathy De

    2016-01-01

    With the accumulation of large amounts of health-related data, predictive analytics could stimulate the transformation of reactive medicine towards Predictive, Preventive and Personalized (PPPM) Medicine, ultimately affecting both cost and quality of care. However, the high dimensionality and complexity of the data involved prevent data-driven methods from being easily translated into clinically relevant models. Additionally, the application of cutting-edge predictive methods and data manipulation requires substantial programming skills, limiting their direct exploitation by medical domain experts. This leaves a gap between potential and actual data usage. In this study, the authors address this problem by focusing on open, visual environments suited to be applied by the medical community. Moreover, we review code-free applications of big data technologies. As a showcase, a framework was developed for the meaningful use of data from critical care patients by integrating the MIMIC-II database in a data mining environment (RapidMiner) supporting scalable predictive analytics using visual tools (RapidMiner’s Radoop extension). Guided by the CRoss-Industry Standard Process for Data Mining (CRISP-DM), the ETL process (Extract, Transform, Load) was initiated by retrieving data from the MIMIC-II tables of interest. As a use case, the correlation of platelet count and ICU survival was quantitatively assessed. Using visual tools for ETL on Hadoop and predictive modeling in RapidMiner, we developed robust processes for automatic building, parameter optimization and evaluation of various predictive models, under different feature selection schemes. Because these processes can be easily adopted in other projects, this environment is attractive for scalable predictive analytics in health research. PMID:26731286

  18. Scalable Predictive Analysis in Critically Ill Patients Using a Visual Open Data Analysis Platform.

    PubMed

    Van Poucke, Sven; Zhang, Zhongheng; Schmitz, Martin; Vukicevic, Milan; Laenen, Margot Vander; Celi, Leo Anthony; De Deyne, Cathy

    2016-01-01

    With the accumulation of large amounts of health-related data, predictive analytics could stimulate the transformation of reactive medicine towards Predictive, Preventive and Personalized (PPPM) Medicine, ultimately affecting both cost and quality of care. However, the high dimensionality and complexity of the data involved prevent data-driven methods from being easily translated into clinically relevant models. Additionally, the application of cutting-edge predictive methods and data manipulation requires substantial programming skills, limiting their direct exploitation by medical domain experts. This leaves a gap between potential and actual data usage. In this study, the authors address this problem by focusing on open, visual environments suited to be applied by the medical community. Moreover, we review code-free applications of big data technologies. As a showcase, a framework was developed for the meaningful use of data from critical care patients by integrating the MIMIC-II database in a data mining environment (RapidMiner) supporting scalable predictive analytics using visual tools (RapidMiner's Radoop extension). Guided by the CRoss-Industry Standard Process for Data Mining (CRISP-DM), the ETL process (Extract, Transform, Load) was initiated by retrieving data from the MIMIC-II tables of interest. As a use case, the correlation of platelet count and ICU survival was quantitatively assessed. Using visual tools for ETL on Hadoop and predictive modeling in RapidMiner, we developed robust processes for automatic building, parameter optimization and evaluation of various predictive models, under different feature selection schemes. Because these processes can be easily adopted in other projects, this environment is attractive for scalable predictive analytics in health research.

  19. Scalable Linear Visual Feature Learning via Online Parallel Nonnegative Matrix Factorization.

    PubMed

    Zhao, Xueyi; Li, Xi; Zhang, Zhongfei; Shen, Chunhua; Zhuang, Yueting; Gao, Lixin; Li, Xuelong

    2016-12-01

    Visual feature learning, which aims to construct an effective feature representation for visual data, has a wide range of applications in computer vision. It is often posed as a problem of nonnegative matrix factorization (NMF), which constructs a linear representation for the data. Although NMF is typically parallelized for efficiency, traditional parallelization methods suffer from either an expensive computation or a high runtime memory usage. To alleviate this problem, we propose a parallel NMF method called alternating least square block decomposition (ALSD), which efficiently solves a set of conditionally independent optimization subproblems based on a highly parallelized fine-grained grid-based blockwise matrix decomposition. By assigning each block optimization subproblem to an individual computing node, ALSD can be effectively implemented in a MapReduce-based Hadoop framework. In order to cope with dynamically varying visual data, we further present an incremental version of ALSD, which is able to incrementally update the NMF solution with a low computational cost. Experimental results demonstrate the efficiency and scalability of the proposed methods as well as their applications to image clustering and image retrieval.
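
    The distributed, blockwise ALSD algorithm itself is not reproduced here, but the alternating least squares idea it builds on can be sketched compactly. The following is a single-machine illustration under the assumption of a small dense matrix; nonnegativity is enforced by simple clipping, and the MapReduce/Hadoop block decomposition is omitted.

        # Single-machine sketch of alternating least squares NMF with
        # nonnegativity enforced by clipping. ALSD distributes updates like
        # these over fine-grained matrix blocks on Hadoop (not shown).
        import numpy as np

        def nmf_als(V, rank, iters=100, eps=1e-9):
            rng = np.random.default_rng(0)
            m, n = V.shape
            W = rng.random((m, rank))
            H = rng.random((rank, n))
            for _ in range(iters):
                # Fix W, solve V ~ W H for H, project onto the nonnegative orthant.
                H = np.linalg.lstsq(W, V, rcond=None)[0].clip(min=eps)
                # Fix H, solve V^T ~ H^T W^T for W, project.
                W = np.linalg.lstsq(H.T, V.T, rcond=None)[0].T.clip(min=eps)
            return W, H

        V = np.abs(np.random.default_rng(1).random((60, 40)))  # stand-in for visual features
        W, H = nmf_als(V, rank=8)
        print("reconstruction error:", np.linalg.norm(V - W @ H))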

  20. JuxtaView - A tool for interactive visualization of large imagery on scalable tiled displays

    USGS Publications Warehouse

    Krishnaprasad, N.K.; Vishwanath, V.; Venkataraman, S.; Rao, A.G.; Renambot, L.; Leigh, J.; Johnson, A.E.; Davis, B.

    2004-01-01

    JuxtaView is a cluster-based application for viewing ultra-high-resolution images on scalable tiled displays. In JuxtaView, we present a new parallel computing and distributed-memory approach for out-of-core montage visualization using LambdaRAM, a software-based network-level cache system. The ultimate goal of JuxtaView is to enable a user to interactively roam through potentially terabytes of distributed, spatially referenced image data such as those from electron microscopes, satellites and aerial photographs. In working towards this goal, we describe our first prototype implemented over a local area network, where the image is distributed using LambdaRAM, on the memory of all nodes of a PC cluster driving a tiled display wall. Aggressive pre-fetching schemes employed by LambdaRAM help to reduce the latency involved in remote memory access. We compare LambdaRAM with a more traditional memory-mapped file approach for out-of-core visualization. © 2004 IEEE.

  1. CoreWall: A Scalable Interactive Tool for Visual Core Description, Data Visualization, and Stratigraphic Correlation

    NASA Astrophysics Data System (ADS)

    Rao, A. G.; Rack, F.; Kamp, B.; Fils, D.; Ito, E.; Morin, P.; Higgins, S.; Leigh, J.; Johnson, A.; Renambot, L.

    2005-12-01

    A primary need for studies of sediment, ice and rock cores is an integrated environment for visual core description. CoreWall is a tool that uses digital line-scan images of split-core surfaces as the fundamental template for all sediment descriptive work. Textual and image annotations support description of structures, lithologic variation, macroscopic grain size variation, bioturbation intensity, chemical composition, and micropaleontology at points of interest registered within the core image itself. The integration of core-section images with discrete data streams and nested annotations provides a robust approach to the description of sediment and rock cores. This project provides for the real-time and/or simultaneous display of multiple integrated databases, with all the data rectified (co-registered) to the fundamental template of the core image. This visualization tool enables rapid multidisciplinary interpretation during the Initial Core Description process. A prototype computer environment for working with the high-resolution data is the Personal GeoWall-2, a single computer used to drive six tiled LCD screens. As a wideband display, the Personal GeoWall-2 can show more content than a single display system. This new visualization tool is both scalable and portable, from the Personal GeoWall-2 environment down to a single screen driven by a laptop computer. Using the screen resolution, core sections are drawn at life-size scale with both core and downhole wireline logging data drawn alongside. Using standard computer interfaces, individuals can pan through meters of core imagery and data, annotating along the length of the core itself. They can zoom in on a high-resolution core image to see details that appear under the proper lighting in which the images were taken. Using the Internet, CoreWall can retrieve images and data files from remote databases or web portals/services, such as CHRONOS, allowing individuals from ship to shore to look at data and

  2. Interactive View-Dependent Rendering of Large Iso-Surfaces

    SciTech Connect

    Gregorski, B.; Duchaineau, M.A.; Lindstrom, P.; Pascucci, V.; Joy, K.J.

    2002-01-11

    The authors present an algorithm for interactively extracting and rendering iso-surfaces of large volume datasets in a view-dependent optimal fashion. A recursive tetrahedral mesh subdivision scheme, based on longest edge bisection, is used to hierarchically decompose the data into a multi-resolution structure. This data structure allows fast extraction of arbitrary iso-surfaces to within user specified view-dependent error bounds. A compact encoding of the mesh subdivision optimizes memory usage and processor performance necessary for large datasets. A data layout scheme based on hierarchical space filling curves provides optimal access to the data in a cache coherent manner.
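
    The basic refinement operation named above, longest edge bisection of a tetrahedron, is simple to state in code. The sketch below illustrates that single step only (not the paper's hierarchy, error bounds, or space-filling-curve data layout).

        # Illustrative single step of longest-edge bisection: split a tetrahedron
        # into two children at the midpoint of its longest edge.
        import numpy as np

        def bisect_longest_edge(tet):
            """tet: (4, 3) array of vertex coordinates -> two child tetrahedra."""
            edges = [(a, b) for a in range(4) for b in range(a + 1, 4)]
            a, b = max(edges, key=lambda e: np.linalg.norm(tet[e[0]] - tet[e[1]]))
            mid = 0.5 * (tet[a] + tet[b])
            rest = [i for i in range(4) if i not in (a, b)]
            child0 = np.array([tet[a], mid, tet[rest[0]], tet[rest[1]]])
            child1 = np.array([tet[b], mid, tet[rest[0]], tet[rest[1]]])
            return child0, child1

        tet = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
        for child in bisect_longest_edge(tet):
            print(child)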

  3. Interactive View-Dependent Rendering of Large Isosurfaces

    SciTech Connect

    Gregorski, B; Duchaineau, M; Lindstrom, P; Pascucci, V; Joy, K I

    2002-11-19

    We present an algorithm for interactively extracting and rendering isosurfaces of large volume datasets in a view-dependent fashion. A recursive tetrahedral mesh refinement scheme, based on longest edge bisection, is used to hierarchically decompose the data into a multiresolution structure. This data structure allows fast extraction of arbitrary isosurfaces to within user specified view-dependent error bounds. A data layout scheme based on hierarchical space filling curves provides access to the data in a cache coherent manner that follows the data access pattern indicated by the mesh refinement.

  4. A Scalable Cloud Library Empowering Big Data Management, Diagnosis, and Visualization of Cloud-Resolving Models

    NASA Astrophysics Data System (ADS)

    Zhou, S.; Tao, W. K.; Li, X.; Matsui, T.; Sun, X. H.; Yang, X.

    2015-12-01

    A cloud-resolving model (CRM) is an atmospheric numerical model that can numerically resolve clouds and cloud systems at 0.25~5 km horizontal grid spacings. The main advantage of the CRM is that it can allow explicit interactive processes between microphysics, radiation, turbulence, surface, and aerosols without subgrid cloud fraction, overlapping and convective parameterization. Because of their fine resolution and complex physical processes, it is challenging for the CRM community to i) visualize/inter-compare CRM simulations, ii) diagnose key processes for cloud-precipitation formation and intensity, and iii) evaluate against NASA's field campaign data and L1/L2 satellite data products, due to the large data volume (~10 TB) and the complexity of CRM physical processes. We have been building the Super Cloud Library (SCL) upon a Hadoop framework, capable of CRM database management, distribution, visualization, subsetting, and evaluation in a scalable way. The current SCL capability includes (1) an SCL data model that enables various CRM simulation outputs in NetCDF, including those of the NASA-Unified Weather Research and Forecasting (NU-WRF) and Goddard Cumulus Ensemble (GCE) models, to be accessed and processed by Hadoop, (2) a parallel NetCDF-to-CSV converter that supports NU-WRF and GCE model outputs, (3) a technique that visualizes Hadoop-resident data with IDL, (4) a technique that subsets Hadoop-resident data, compliant with the SCL data model, with HIVE or Impala via HUE's Web interface, (5) a prototype that enables a Hadoop MapReduce application to dynamically access and process data residing in a parallel file system, PVFS2 or CephFS, where high performance computing (HPC) simulation outputs such as NU-WRF's and GCE's are located. We are testing Apache Spark to speed up SCL data processing and analysis. With the SCL capabilities, SCL users can conduct large-domain on-demand tasks without downloading voluminous CRM datasets and various observations from NASA Field Campaigns and Satellite data to a
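
    As a rough, hypothetical illustration of the NetCDF-to-CSV step mentioned in item (2) above, the serial sketch below flattens a tiny synthetic dataset with xarray and pandas; the variable and file names are placeholders, and the SCL converter parallelizes this kind of flattening over many files and variables on Hadoop.

        # Minimal serial sketch of a NetCDF-to-CSV conversion; variable and file
        # names are placeholders, not actual NU-WRF/GCE conventions.
        import numpy as np
        import xarray as xr

        ds = xr.Dataset(
            {"temperature": (("time", "z"), np.random.rand(3, 4)),
             "cloud_water": (("time", "z"), np.random.rand(3, 4))},
            coords={"time": [0, 1, 2], "z": [100, 200, 300, 400]},
        )
        ds.to_netcdf("crm_output.nc")                  # stand-in for a model output file

        flat = xr.open_dataset("crm_output.nc").to_dataframe().reset_index()
        flat.to_csv("crm_output.csv", index=False)     # one row per (time, z) sample
        print(flat.head())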

  5. Adaptively synchronous scalable spread spectrum (A4S) data-hiding strategy for three-dimensional visualization

    NASA Astrophysics Data System (ADS)

    Hayat, Khizar; Puech, William; Gesquière, Gilles

    2010-04-01

    We propose an adaptively synchronous scalable spread spectrum (A4S) data-hiding strategy to integrate disparate data, needed for a typical 3-D visualization, into a single JPEG2000 format file. JPEG2000 encoding provides a standard format on one hand and the needed multiresolution for scalability on the other. The method has the potential of being imperceptible and robust at the same time. While spread spectrum (SS) methods are known for the high robustness they offer, our data-hiding strategy is at the same time removable, which ensures the highest possible visualization quality. The SS embedding of the discrete wavelet transform (DWT)-domain depth map is carried out in the transform-domain YCrCb components from the JPEG2000 coding stream, just after the DWT stage. To maintain synchronization, the embedding is carried out while taking into account the correspondence of subbands. Since security is not the immediate concern, we are at liberty with the strength of embedding. This permits us to increase the robustness and achieve reversibility of our method. To estimate the maximum tolerable error in the depth map according to a given viewpoint, a human visual system (HVS)-based psychovisual analysis is also presented.
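
    The A4S scheme itself operates inside the JPEG2000 pipeline, but the core spread-spectrum idea of adding key-dependent pseudorandom chips to transform coefficients and recovering bits by correlation can be illustrated in a few lines. The sketch below is a toy, numpy-only illustration; the embedding strength, coefficient statistics, subband synchronization and HVS-based analysis are all assumptions or omissions.

        # Toy additive spread-spectrum embedding/recovery in a block of
        # transform coefficients; JPEG2000 integration and subband handling
        # are not modeled, and alpha is an arbitrary illustrative strength.
        import numpy as np

        rng = np.random.default_rng(42)
        coeffs = rng.normal(scale=10.0, size=1024)       # stand-in for DWT coefficients
        bits = rng.integers(0, 2, size=8) * 2 - 1        # payload bits mapped to +/-1
        chips = rng.choice([-1.0, 1.0], size=(8, 1024))  # spreading sequences (the key)
        alpha = 2.0                                      # embedding strength

        marked = coeffs + alpha * (bits[:, None] * chips).sum(axis=0)

        # Correlation receiver: the sign of the correlation with each chip
        # sequence recovers the corresponding bit.
        recovered = np.sign(marked @ chips.T)
        print("payload recovered:", np.array_equal(recovered, bits))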

  6. ArrayXPath: mapping and visualizing microarray gene-expression data with integrated biological pathway resources using Scalable Vector Graphics.

    PubMed

    Chung, Hee-Joon; Kim, Mingoo; Park, Chan Hee; Kim, Jihoon; Kim, Ju Han

    2004-07-01

    Biological pathways can provide key information on the organization of biological systems. ArrayXPath (http://www.snubi.org/software/ArrayXPath/) is a web-based service for mapping and visualizing microarray gene-expression data with integrated biological pathway resources using Scalable Vector Graphics (SVG). By integrating major bio-databases and searching pathway resources, ArrayXPath automatically maps different types of identifiers from microarray probes and pathway elements. When one inputs gene-expression clusters, ArrayXPath produces a list of the best-matching pathways for each cluster. We applied Fisher's exact test and the false discovery rate (FDR) to evaluate the statistical significance of the association between a cluster and a pathway while correcting for the multiple-comparison problem. ArrayXPath produces JavaScript-enabled SVGs for web-enabled interactive visualization of pathways integrated with gene-expression profiles.
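
    The cluster-pathway association test described above (Fisher's exact test with FDR correction) can be sketched directly. The example below is an illustration only, with made-up gene sets and a simplified Benjamini-Hochberg adjustment; it is not ArrayXPath code.

        # Sketch of cluster-vs-pathway enrichment: Fisher's exact test on the
        # 2x2 overlap table, then a simplified Benjamini-Hochberg correction
        # (no monotonicity enforcement). Gene sets below are made up.
        from scipy.stats import fisher_exact

        def enrichment(cluster_genes, pathways, universe_size):
            pvals = {}
            for name, pathway_genes in pathways.items():
                overlap = len(cluster_genes & pathway_genes)
                table = [
                    [overlap, len(cluster_genes) - overlap],
                    [len(pathway_genes) - overlap,
                     universe_size - len(cluster_genes) - len(pathway_genes) + overlap],
                ]
                pvals[name] = fisher_exact(table, alternative="greater")[1]
            ranked = sorted(pvals.items(), key=lambda kv: kv[1])
            m = len(ranked)
            return {name: min(p * m / (i + 1), 1.0) for i, (name, p) in enumerate(ranked)}

        cluster = {"TP53", "BAX", "CASP3", "BCL2"}
        pathways = {"apoptosis": {"TP53", "BAX", "CASP3", "CASP9", "BCL2"},
                    "glycolysis": {"HK1", "PFKM", "PKM"}}
        print(enrichment(cluster, pathways, universe_size=20000))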

  7. Dynamic isosurface extraction and level-of-detail in voxel space

    SciTech Connect

    Linebarger, J.M.; Lamphere, P.B.; Breckenridge, A.R.

    1998-06-01

    A new visualization technique is reported, which dramatically improves interactivity for scientific visualizations by working directly with voxel data and by employing efficient algorithms and data structures. This discussion covers the research software, the file structures, examples of data creation, data search, and triangle rendering codes that allow geometric surfaces to be extracted from volumetric data. Uniquely, these methods enable greater interactivity by allowing an analyst to dynamically specify both the desired isosurface threshold and required level-of-detail to be used while rendering the image. The key idea behind this visualization paradigm is that various levels-of-detail are represented as differently sized hexahedral virtual voxels, which are stored in a three-dimensional kd-tree; thus the level-of-detail representation is done in voxel space instead of the traditional approach which relies on surface or geometry space decimations. This algorithm has been implemented as an integral component in the EIGEN/VR project at Sandia National Laboratories, which provides a rich environment for scientists to interactively explore and visualize the results of very large-scale simulations performed on massively parallel supercomputers.
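
    The core idea of differently sized virtual voxels can be illustrated with a min/max pyramid: each coarser voxel stores the value range of its 2x2x2 children, so a dynamically chosen isovalue only touches blocks whose range contains it. The sketch below is a toy illustration under the assumption of a power-of-two cubic volume; the kd-tree storage and the EIGEN/VR integration described above are not reproduced.

        # Toy min/max "virtual voxel" pyramid: coarser levels store the value
        # range of their 2x2x2 children, so blocks can be culled per isovalue.
        import numpy as np

        def build_minmax_pyramid(volume):
            levels = [(volume, volume)]          # level 0: min == max == data
            lo, hi = volume, volume
            while lo.shape[0] > 1:
                s = lo.shape[0] // 2
                lo = lo.reshape(s, 2, s, 2, s, 2).min(axis=(1, 3, 5))
                hi = hi.reshape(s, 2, s, 2, s, 2).max(axis=(1, 3, 5))
                levels.append((lo, hi))
            return levels

        def active_virtual_voxels(levels, level, isovalue):
            lo, hi = levels[level]
            return np.argwhere((lo <= isovalue) & (isovalue <= hi))

        volume = np.random.default_rng(0).random((16, 16, 16))
        pyramid = build_minmax_pyramid(volume)
        print("active level-2 voxels:", len(active_virtual_voxels(pyramid, 2, 0.5)))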

  8. Scalable Adaptive Graphics Environment (SAGE) Software for the Visualization of Large Data Sets on a Video Wall

    NASA Astrophysics Data System (ADS)

    Jedlovec, G.; Srikishen, J.; Edwards, R.; Cross, D.; Welch, J. D.; Smith, M. R.

    2013-12-01

    The use of collaborative scientific visualization systems for the analysis, visualization, and sharing of 'big data' available from new high resolution remote sensing satellite sensors or four-dimensional numerical model simulations is propelling the wider adoption of ultra-resolution tiled display walls interconnected by high speed networks. These systems require a globally connected and well-integrated operating environment that provides persistent visualization and collaboration services. This abstract and subsequent presentation describe a new collaborative visualization system installed for NASA's Short-term Prediction Research and Transition (SPoRT) program at Marshall Space Flight Center and its use for Earth science applications. The system consists of a 3 x 4 array of 1920 x 1080 pixel thin-bezel video monitors mounted on a wall in a scientific collaboration lab. The monitors are physically and virtually integrated into a 14' x 7' video display. The display of scientific data on the video wall is controlled by a single Alienware Aurora PC with a 2nd Generation Intel Core 4.1 GHz processor, 32 GB memory, and an AMD FirePro W600 video card with six Mini DisplayPort connections. Six Mini DisplayPort-to-dual-DVI cables are used to connect the 12 individual video monitors. The open source Scalable Adaptive Graphics Environment (SAGE) windowing and media control framework, running on top of the Ubuntu 12 Linux operating system, allows several users to simultaneously control the display and storage of high resolution still and moving graphics in a variety of formats, on tiled display walls of any size. SAGE provides a common environment, or framework, enabling its users to access, display and share a variety of data-intensive information. This information can be digital-cinema animations, high-resolution images, high-definition video

  9. Scalable Adaptive Graphics Environment (SAGE) Software for the Visualization of Large Data Sets on a Video Wall

    NASA Technical Reports Server (NTRS)

    Jedlovec, Gary; Srikishen, Jayanthi; Edwards, Rita; Cross, David; Welch, Jon; Smith, Matt

    2013-01-01

    The use of collaborative scientific visualization systems for the analysis, visualization, and sharing of "big data" available from new high resolution remote sensing satellite sensors or four-dimensional numerical model simulations is propelling the wider adoption of ultra-resolution tiled display walls interconnected by high speed networks. These systems require a globally connected and well-integrated operating environment that provides persistent visualization and collaboration services. This abstract and subsequent presentation describe a new collaborative visualization system installed for NASA's Short-term Prediction Research and Transition (SPoRT) program at Marshall Space Flight Center and its use for Earth science applications. The system consists of a 3 x 4 array of 1920 x 1080 pixel thin-bezel video monitors mounted on a wall in a scientific collaboration lab. The monitors are physically and virtually integrated into a 14' x 7' video display. The display of scientific data on the video wall is controlled by a single Alienware Aurora PC with a 2nd Generation Intel Core 4.1 GHz processor, 32 GB memory, and an AMD FirePro W600 video card with six Mini DisplayPort connections. Six Mini DisplayPort-to-dual-DVI cables are used to connect the 12 individual video monitors. The open source Scalable Adaptive Graphics Environment (SAGE) windowing and media control framework, running on top of the Ubuntu 12 Linux operating system, allows several users to simultaneously control the display and storage of high resolution still and moving graphics in a variety of formats, on tiled display walls of any size. SAGE provides a common environment, or framework, enabling its users to access, display and share a variety of data-intensive information. This information can be digital-cinema animations, high-resolution images, high-definition video

  10. PhyloDet: a scalable visualization tool for mapping multiple traits to large evolutionary trees

    PubMed Central

    Lee, Bongshin; Nachmanson, Lev; Robertson, George; Carlson, Jonathan M.; Heckerman, David

    2009-01-01

    Summary: Evolutionary biologists are often interested in finding correlations among biological traits across a number of species, as such correlations may lead to testable hypotheses about the underlying function. Because some species are more closely related than others, computing and visualizing these correlations must be done in the context of the evolutionary tree that relates species. In this note, we introduce PhyloDet (short for PhyloDetective), an evolutionary tree visualization tool that enables biologists to visualize multiple traits mapped to the tree. Availability: http://research.microsoft.com/cue/phylodet/ Contact: bongshin@microsoft.com. PMID:19633096

  11. SBIR Phase II Final Report for Scalable Grid Technologies for Visualization Services

    SciTech Connect

    Sebastien Barre; Will Schroeder

    2006-10-15

    This project developed software tools for the automation of grid computing. In particular, the project focused on visualization and imaging tools (VTK, ParaView and ITK); i.e., we developed tools to automatically create Grid services from C++ programs implemented using the open-source VTK visualization and ITK segmentation and registration systems. This approach helps non-Grid experts to create applications using tools with which they are familiar, ultimately producing Grid services for visualization and image analysis through an automated process.

  12. Scalable Inference and Learning in Very Large Graphical Models Patterned after the Primate Visual Cortex

    DTIC Science & Technology

    2008-04-07

    entitled "A Computational Model of the Cerebral Cortex" [5]. The paper (he scribed a graphical mo(lcl of the visual cortex inspired bY David Niumford’s...Proceedings of the ninth IEEE International Conference on Computer Vision, volume 1, pages 432-439, 2003. 117] Tai Sing Lee and David ’Mumford...Hierarchical Bayesian inference in the visual cortex. Journal of the Opthcal Socicty of America, 2(7):1434-1,148, July 2003. 16~] David Lowe. Distitict ivi

  13. Design of a High Resolution Scalable Cluster Based Portable Tiled Display for Earth Sciences Visualization

    NASA Astrophysics Data System (ADS)

    Nayak, A. M.; Dawe, G.; Samilo, D.; Keen, C.; Matthews, J.; Patel, A.; Im, T.; Orcutt, J.; Defanti, T.

    2006-12-01

    The Center for Earth Observations and Applications (CEOA) collaborated with researchers at the Scripps Institution of Oceanography Visualization Center and the California Institute for Telecommunications and Information Technology (Calit2) to design an advanced portable visualization system to explore geophysical and oceanography datasets at very high resolution. The system consists of 15 Dell 24" monitors arranged in a 3x5 grid (3 panels high and 5 wide). Each monitor supports a resolution of up to 1920 x 1200 and is driven by one node of a cluster of 15 Intel Mac Minis. The tiled display supports a total resolution of over 34 million pixels and can be used either as a single large desktop to display rendered animations, HD movies and image files or to display web-based content on each panel for simultaneous viewing of multiple datasets. The system is enclosed in a custom-built case that can hold all the required components and be transported to research sites or to meetings and conferences for public awareness activities. We call the system the 'Mobile INteractive Imaging Multidisplay Environment' or simply 'miniMe'. The design of the miniMe wall is based on a class of advanced display systems called GeoWall-2 developed at the Electronic Visualization Laboratory, University of Illinois at Chicago.

  14. A scalable architecture for extracting, aligning, linking, and visualizing multi-Int data

    NASA Astrophysics Data System (ADS)

    Knoblock, Craig A.; Szekely, Pedro

    2015-05-01

    An analyst today has a tremendous amount of data available, but each of the various data sources typically exists in its own silo, so an analyst has limited ability to see an integrated view of the data and has little or no access to contextual information that could help in understanding the data. We have developed the Domain-Insight Graph (DIG) system, an innovative architecture for extracting, aligning, linking, and visualizing massive amounts of domain-specific content from unstructured sources. Under the DARPA Memex program we have already successfully applied this architecture to multiple application domains, including the enormous international problem of human trafficking, where we extracted, aligned and linked data from 50 million online Web pages. DIG builds on our Karma data integration toolkit, which makes it easy to rapidly integrate structured data from a variety of sources, including databases, spreadsheets, XML, JSON, and Web services. The ability to integrate Web services allows Karma to pull in live data from various social media sites, such as Twitter, Instagram, and OpenStreetMaps. DIG then indexes the integrated data and provides an easy-to-use interface for query, visualization, and analysis.

  15. Direct interval volume visualization.

    PubMed

    Ament, Marco; Weiskopf, Daniel; Carr, Hamish

    2010-01-01

    We extend direct volume rendering with a unified model for generalized isosurfaces, also called interval volumes, allowing a wider spectrum of visual classification. We generalize the concept of scale-invariant opacity—typical for isosurface rendering—to semi-transparent interval volumes. Scale-invariant rendering is independent of physical space dimensions and therefore directly facilitates the analysis of data characteristics. Our model represents sharp isosurfaces as limits of interval volumes and combines them with features of direct volume rendering. Our objective is accurate rendering, guaranteeing that all isosurfaces and interval volumes are visualized in a crack-free way with correct spatial ordering. We achieve simultaneous direct and interval volume rendering by extending preintegration and explicit peak finding with data-driven splitting of ray integration and hybrid computation in physical and data domains. Our algorithm is suitable for efficient parallel processing for interactive applications as demonstrated by our CUDA implementation.

  16. Recent Advances in VisIt: Parallel Crack-free Isosurface Extraction

    NASA Astrophysics Data System (ADS)

    Weber, G. H.; Childs, H.; Meredith, J. S.

    2013-04-01

    We present a new method for extracting crack-free isosurfaces from adaptive mesh refinement (AMR) data. This method builds on prior work, which utilizes dual grids and stitch cells that fill the resulting gaps. To simplify implementation of stitch cell generation, we propose a case-table-based approach. The most significant benefit of our new technique is that it uses ghost data to handle parallel isosurface extraction in a distributed memory environment efficiently.

  17. Isosurface Computation Made Simple: Hardware Acceleration, Adaptive Refinement and Tetrahedral Stripping

    SciTech Connect

    Pascucci, V

    2004-02-18

    This paper presents a simple approach for rendering isosurfaces of a scalar field. Using the vertex programming capability of commodity graphics cards, we transfer the cost of computing an isosurface from the Central Processing Unit (CPU), running the main application, to the Graphics Processing Unit (GPU), rendering the images. We consider a tetrahedral decomposition of the domain and draw one quadrangle (quad) primitive per tetrahedron. A vertex program transforms the quad into the piece of isosurface within the tetrahedron (see Figure 2). In this way, the main application is only devoted to streaming the vertices of the tetrahedra from main memory to the graphics card. For adaptively refined rectilinear grids, the optimization of this streaming process leads to the definition of a new 3D space-filling curve, which generalizes the 2D Sierpinski curve used for efficient rendering of triangulated terrains. We maintain the simplicity of the scheme when constructing view-dependent adaptive refinements of the domain mesh. In particular, we guarantee the absence of T-junctions by satisfying local bounds in our nested error basis. The expensive stage of fixing cracks in the mesh is completely avoided. We discuss practical tradeoffs in the distribution of the workload between the application and the graphics hardware. With current GPUs it is convenient to perform certain computations on the main CPU. Beyond the performance considerations, which will change with new generations of GPUs, this approach has the major advantage of completely avoiding the storage in memory of the isosurface vertices and triangles.
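
    The per-tetrahedron piece of isosurface that the vertex program computes on the GPU can be illustrated with a plain CPU sketch of the marching-tetrahedra interpolation; the quad-submission mechanism, the space-filling-curve streaming and the adaptive refinement described above are not shown, and the vertex values below are arbitrary.

        # CPU illustration of the isosurface piece inside one tetrahedron:
        # interpolate the isovalue crossing on every edge whose endpoints
        # straddle the isosurface (marching-tetrahedra style).
        import numpy as np

        def tet_isosurface_piece(verts, scalars, isovalue):
            """verts: (4, 3) coords, scalars: (4,) values -> crossing points."""
            points = []
            for a in range(4):
                for b in range(a + 1, 4):
                    fa, fb = scalars[a], scalars[b]
                    if (fa - isovalue) * (fb - isovalue) < 0:   # edge straddles the surface
                        t = (isovalue - fa) / (fb - fa)
                        points.append(verts[a] + t * (verts[b] - verts[a]))
            return points   # 3 points form a triangle, 4 points a quad (two triangles)

        verts = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
        scalars = np.array([0.2, 0.8, 0.6, 0.9])
        print(tet_isosurface_piece(verts, scalars, isovalue=0.5))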

  18. Isosurface Display of 3-D Scalar Fields from a Meteorological Model on Google Earth

    DTIC Science & Technology

    2013-07-01

    The resulting isosurface is then displayed automatically in Google Earth's three-dimensional display and saved in a Keyhole Markup Language (KML)...modules and a GUI using HyperText Markup Language version 4 (HTML v4) and JavaScript (JS). Figure 6 is a screen capture of the GUI used to extract...

  19. A Unified Air-Sea Visualization System: Survey on Gridding Structures

    NASA Technical Reports Server (NTRS)

    Anand, Harsh; Moorhead, Robert

    1995-01-01

    The goal is to develop a Unified Air-Sea Visualization System (UASVS) to enable the rapid fusion of observational, archival, and model data for verification and analysis. To design and develop UASVS, modelers were polled to determine the gridding structures and visualization systems used, and their needs with respect to visual analysis. A basic UASVS requirement is to allow a modeler to explore multiple data sets within a single environment, or to interpolate multiple datasets onto one unified grid. From this survey, the UASVS should be able to visualize 3D scalar/vector fields; render isosurfaces; visualize arbitrary slices of the 3D data; visualize data defined on spectral element grids with the minimum number of interpolation stages; render contours; produce 3D vector plots and streamlines; provide unified visualization of satellite images, observations and model output overlays; display the visualization on a projection of the user's choice; implement functions so the user can derive diagnostic values; animate the data to see the time-evolution; animate ocean and atmosphere at different rates; store the record of cursor movement, smooth the path, and animate a window around the moving path; repeatedly start and stop the visual time-stepping; generate VHS tape animations; work on a variety of workstations; and allow visualization across clusters of workstations and scalable high performance computer systems.

  20. A Unified Air-Sea Visualization System: Survey on Gridding Structures

    NASA Technical Reports Server (NTRS)

    Anand, Harsh; Moorhead, Robert

    1995-01-01

    The goal is to develop a Unified Air-Sea Visualization System (UASVS) to enable the rapid fusion of observational, archival, and model data for verification and analysis. To design and develop UASVS, modelers were polled to determine the gridding structures and visualization systems used, and their needs with respect to visual analysis. A basic UASVS requirement is to allow a modeler to explore multiple data sets within a single environment, or to interpolate multiple datasets onto one unified grid. From this survey, the UASVS should be able to visualize 3D scalar/vector fields; render isosurfaces; visualize arbitrary slices of the 3D data; visualize data defined on spectral element grids with the minimum number of interpolation stages; render contours; produce 3D vector plots and streamlines; provide unified visualization of satellite images, observations and model output overlays; display the visualization on a projection of the user's choice; implement functions so the user can derive diagnostic values; animate the data to see the time-evolution; animate ocean and atmosphere at different rates; store the record of cursor movement, smooth the path, and animate a window around the moving path; repeatedly start and stop the visual time-stepping; generate VHS tape animations; work on a variety of workstations; and allow visualization across clusters of workstations and scalable high performance computer systems.

  1. Grid-based Visualization Framework

    NASA Astrophysics Data System (ADS)

    Thiebaux, M.; Tangmunarunkit, H.; Kesselman, C.

    2003-12-01

    Advances in science and engineering have put high demands on tools for high-performance large-scale visual data exploration and analysis. For example, earthquake scientists can now study earthquake phenomena from first principle physics-based simulations. These simulations can generate large amounts of data, possibly with high spatial resolution and long time series. Single-system visualization software running on commodity machines cannot scale up to the large amounts of data generated by these simulations. To address this problem, we propose a flexible and extensible Grid-based visualization framework for time-critical, interactively controlled visual browsing of spatially and temporally large datasets in a Grid environment. Our framework leverages Grid resources for scalable computation and data storage to maintain performance and interactivity with large visualization jobs. Our framework utilizes Globus Toolkit 2.4 components for security (i.e., GSI), resource allocation and management (i.e., DUROC, GRAM) and communication (i.e., Globus-IO) to couple commodity desktops with remote, scalable storage and computational resources in a Grid for interactive data exploration. There are two major components in this framework: Grid Data Transport (GDT) and the Grid Visualization Utility (GVU). GDT provides libraries for performing parallel data filtering and parallel data exchange among Grid resources. GDT allows arbitrary data filtering to be integrated into the system. It also facilitates multi-tiered pipeline topology construction of compute resources and displays. In addition to scientific visualization applications, GDT can be used to support other applications that require parallel processing and parallel transfer of partially ordered independent files, such as file-set transfer. On top of GDT, we have developed the Grid Visualization Utility (GVU), which is designed to assist visualization dataset management, including file formatting, data transport and automatic

  2. PHYLOViZ 2.0: providing scalable data integration and visualization for multiple phylogenetic inference methods.

    PubMed

    Nascimento, Marta; Sousa, Adriano; Ramirez, Mário; Francisco, Alexandre P; Carriço, João A; Vaz, Cátia

    2017-01-01

    High Throughput Sequencing provides a cost-effective means of generating high resolution data for hundreds or even thousands of strains, and is rapidly superseding methodologies based on a few genomic loci. The wealth of genomic data deposited in public databases such as the Sequence Read Archive/European Nucleotide Archive provides a powerful resource for evolutionary analysis and epidemiological surveillance. However, many of the analysis tools currently available do not scale well to these large datasets, nor provide the means to fully integrate ancillary data. Here we present PHYLOViZ 2.0, an extension of the PHYLOViZ tool, a platform-independent Java tool that allows phylogenetic inference and data visualization for large datasets of sequence-based typing methods, including Single Nucleotide Polymorphism (SNP) and whole genome/core genome Multilocus Sequence Typing (wg/cgMLST) analysis. PHYLOViZ 2.0 incorporates new data analysis algorithms and new visualization modules, as well as the capability of saving projects for subsequent work or for dissemination of results. Availability: http://www.phyloviz.net/ (licensed under GPLv3). Contact: cvaz@inesc-id.pt. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  3. An ISO-surface folding analysis method applied to premature neonatal brain development

    NASA Astrophysics Data System (ADS)

    Rodriguez-Carranza, Claudia E.; Rousseau, Francois; Iordanova, Bistra; Glenn, Orit; Vigneron, Daniel; Barkovich, James; Studholme, Colin

    2006-03-01

    In this paper we describe the application of folding measures to tracking in vivo cortical brain development in premature neonatal brain anatomy. The outer gray matter and the gray-white matter interface surfaces were extracted from semi-interactively segmented high-resolution T1 MRI data. Nine curvature- and geometric descriptor-based folding measures were applied to six premature infants, aged 28-37 weeks, using a direct voxelwise iso-surface representation. We have shown that using such an approach it is feasible to extract meaningful surfaces of adequate quality from typical clinically acquired neonatal MRI data. We have shown that most of the folding measures, including a new proposed measure, are sensitive to changes in age and therefore applicable in developing a model that tracks development in premature infants. For the first time gyrification measures have been computed on the gray-white matter interface and on cases whose age is representative of a period of intense brain development.

  4. Equalizer: a scalable parallel rendering framework.

    PubMed

    Eilemann, Stefan; Makhinya, Maxim; Pajarola, Renato

    2009-01-01

    Continuing improvements in CPU and GPU performance, as well as increasing multi-core processor and cluster-based parallelism, demand flexible and scalable parallel rendering solutions that can exploit multipipe hardware-accelerated graphics. In fact, to achieve interactive visualization, scalable rendering systems are essential to cope with the rapid growth of data sets. However, parallel rendering systems are non-trivial to develop and often only application-specific implementations have been proposed. The task of developing a scalable parallel rendering framework is even more difficult if it should be generic enough to support various types of data and visualization applications and, at the same time, work efficiently on a cluster with distributed graphics cards. In this paper we introduce a novel system called Equalizer, a toolkit for scalable parallel rendering based on OpenGL which provides an application programming interface (API) to develop scalable graphics applications for a wide range of systems, ranging from large distributed visualization clusters and multi-processor multipipe graphics systems to single-processor single-pipe desktop machines. We describe the system architecture and the basic API, discuss its advantages over previous approaches, and present example configurations and usage scenarios as well as scalability results.

  5. Scalable motion vector coding

    NASA Astrophysics Data System (ADS)

    Barbarien, Joeri; Munteanu, Adrian; Verdicchio, Fabio; Andreopoulos, Yiannis; Cornelis, Jan P.; Schelkens, Peter

    2004-11-01

    Modern video coding applications require transmission of video data over variable-bandwidth channels to a variety of terminals with different screen resolutions and available computational power. Scalable video coding is needed to optimally support these applications. Recently proposed wavelet-based video codecs employing spatial domain motion compensated temporal filtering (SDMCTF) provide quality, resolution and frame-rate scalability while delivering compression performance comparable to that of the state-of-the-art non-scalable H.264-codec. These codecs require scalable coding of the motion vectors in order to support a large range of bit-rates with optimal compression efficiency. Scalable motion vector coding algorithms based on the integer wavelet transform followed by embedded coding of the wavelet coefficients were recently proposed. In this paper, a new and fundamentally different scalable motion vector codec (MVC) using median-based motion vector prediction is proposed. Extensive experimental results demonstrate that the proposed MVC systematically outperforms the wavelet-based state-of-the-art solutions. To be able to take advantage of the proposed scalable MVC, a rate allocation mechanism capable of optimally dividing the available rate among texture and motion information is required. Two rate allocation strategies are proposed and compared. The proposed MVC and rate allocation schemes are incorporated into an SDMCTF-based video codec and the benefits of scalable motion vector coding are experimentally demonstrated.
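
    The median-based prediction at the heart of the proposed MVC is straightforward to sketch: each motion vector is predicted from the component-wise median of a few causal neighbours and only the residual would be entropy coded. The snippet below is a toy illustration with an assumed left/top/top-right neighbourhood; the embedded coding stage and the rate allocation strategies are not shown.

        # Toy median-based motion vector prediction: residual = vector minus the
        # component-wise median of its left, top and top-right neighbours.
        import numpy as np

        def median_prediction_residuals(mv_field):
            """mv_field: (H, W, 2) array of motion vectors -> residual field."""
            H, W, _ = mv_field.shape
            residuals = np.zeros((H, W, 2))
            for y in range(H):
                for x in range(W):
                    neighbours = []
                    if x > 0:
                        neighbours.append(mv_field[y, x - 1])       # left
                    if y > 0:
                        neighbours.append(mv_field[y - 1, x])       # top
                    if y > 0 and x + 1 < W:
                        neighbours.append(mv_field[y - 1, x + 1])   # top-right
                    pred = np.median(neighbours, axis=0) if neighbours else np.zeros(2)
                    residuals[y, x] = mv_field[y, x] - pred
            return residuals

        mv = np.random.default_rng(0).integers(-4, 5, size=(4, 6, 2))
        print(median_prediction_residuals(mv))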

  6. Visualization of Vector Field by Virtual Reality

    NASA Astrophysics Data System (ADS)

    Kageyama, A.; Tamura, Y.; Sato, T.

    A visualization software program is developed in order to analyze three-dimensional vector fields by means of today's advanced virtual reality technology. This program enables simulation researchers to interactively visualize streamlines, tracer particle motions, isosurfaces, etc., with stereo viewing. The program accepts any kind of vector field on a structured mesh as input data. A virtual reality hardware system, on which the program operates, and details of the program are described.

  7. Finite Element Results Visualization for Unstructured Grids

    SciTech Connect

    Speck, Douglas E.; Dovey, Donald J.

    1996-07-15

    GRIZ is a general-purpose post-processing application supporting interactive visualization of finite element analysis results on unstructured grids. In addition to basic pseudocolor renderings of state variables over the mesh surface, GRIZ provides modern visualization techniques such as isocontours and isosurfaces, cutting planes, vector field display, and particle traces. GRIZ accepts both command-line and mouse-driven input, and is portable to virtually any UNIX platform which provides Motif and OpenGL libraries.

  8. Isosurface extraction and view-dependent filtering from time-varying fields using Persistent Time-Octree (PTOT).

    PubMed

    Wang, Cong; Chiang, Yi-Jen

    2009-01-01

    We develop a new algorithm for isosurface extraction and view-dependent filtering from large time-varying fields, by using a novel Persistent Time-Octree (PTOT) indexing structure. Previously, the Persistent Octree (POT) was proposed to perform isosurface extraction and view-dependent filtering, which combines the advantages of the interval tree (for optimal searches of active cells) and of the Branch-On-Need Octree (BONO, for view-dependent filtering), but it only works for steady-state (i.e., single time step) data. For time-varying fields, a 4D version of POT, 4D-POT, was proposed for 4D isocontour slicing, where slicing on the time domain gives all active cells in the queried time step and isovalue. However, such slicing is not output sensitive and thus the searching is sub-optimal. Moreover, it was not known how to support view-dependent filtering in addition to time-domain slicing. In this paper, we develop a novel Persistent Time-Octree (PTOT) indexing structure, which has the advantages of POT and performs 4D isocontour slicing on the time domain with an output-sensitive and optimal searching. In addition, when we query the same isovalue q over m consecutive time steps, there is no additional searching overhead (except for reporting the additional active cells) compared to querying just the first time step. Such searching performance for finding active cells is asymptotically optimal, with asymptotically optimal space and preprocessing time as well. Moreover, our PTOT supports view-dependent filtering in addition to time-domain slicing. We propose a simple and effective out-of-core scheme, where we integrate our PTOT with implicit occluders, batched occlusion queries and batched CUDA computing tasks, so that we can greatly reduce the I/O cost as well as increase the amount of data being concurrently computed in GPU. This results in an efficient algorithm for isosurface extraction with view-dependent filtering utilizing a state-of-the-art programmable GPU for time
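
    The "active cell" query that both POT and PTOT accelerate, reporting every cell whose scalar range contains the isovalue, is classically answered with an interval tree. The sketch below is a minimal centered interval tree for that stabbing query only; the persistence across time steps, the BONO-style spatial ordering and the out-of-core/CUDA machinery described above are not modeled.

        # Minimal centered interval tree for the active-cell (stabbing) query:
        # report all cells whose [min, max] range contains the isovalue.
        class IntervalTree:
            def __init__(self, intervals):
                # intervals: list of (lo, hi, cell_id)
                self.center = sorted(i[0] for i in intervals)[len(intervals) // 2]
                here = [i for i in intervals if i[0] <= self.center <= i[1]]
                left = [i for i in intervals if i[1] < self.center]
                right = [i for i in intervals if i[0] > self.center]
                self.by_lo = sorted(here, key=lambda i: i[0])                 # queries left of center
                self.by_hi = sorted(here, key=lambda i: i[1], reverse=True)   # queries right of center
                self.left = IntervalTree(left) if left else None
                self.right = IntervalTree(right) if right else None

            def stab(self, isovalue, out):
                if isovalue < self.center:
                    for lo, hi, cid in self.by_lo:
                        if lo > isovalue:
                            break
                        out.append(cid)
                    if self.left:
                        self.left.stab(isovalue, out)
                else:
                    for lo, hi, cid in self.by_hi:
                        if hi < isovalue:
                            break
                        out.append(cid)
                    if self.right:
                        self.right.stab(isovalue, out)
                return out

        cells = [(0.1, 0.4, "c0"), (0.3, 0.9, "c1"), (0.5, 0.7, "c2"), (0.8, 1.0, "c3")]
        print(IntervalTree(cells).stab(0.6, []))   # active cells for isovalue 0.6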

  9. Efficient visualization of unsteady and huge scalar and vector fields

    NASA Astrophysics Data System (ADS)

    Vetter, Michael; Olbrich, Stephan

    2016-04-01

    and methods, we are developing a stand-alone post-processor, adding further data structures and mapping algorithms, and cooperating with the ICON developers and users. With the implementation of a DSVR-based post-processor, a milestone was achieved. By using the DSVR post-processor, the three processes mentioned above are completely separated: the data set is processed in batch mode - e.g. on the same supercomputer on which the data was generated - and the interactive 3D rendering is done afterwards on the scientist's local system. At the current stage of implementation, the DSVR post-processor supports the generation of isosurfaces and colored slicers on volume data set time series based on rectilinear grids, as well as the visualization of pathlines on time-varying flow fields based on either rectilinear grids or prism grids. The software implementation and evaluation is done on the supercomputers at DKRZ, including scalability tests using ICON output files in NetCDF format. The next milestones will be (a) the in-situ integration of the DSVR library in the ICON model and (b) the implementation of an isosurface algorithm for prism grids.

  10. Scalable coherent interface

    SciTech Connect

    Alnaes, K.; Kristiansen, E.H.; Gustavson, D.B.; James, D.V.

    1990-01-01

    The Scalable Coherent Interface (IEEE P1596) is establishing an interface standard for very high performance multiprocessors, supporting a cache-coherent-memory model scalable to systems with up to 64K nodes. This Scalable Coherent Interface (SCI) will supply a peak bandwidth per node of 1 GigaByte/second. The SCI standard should facilitate assembly of processor, memory, I/O and bus bridge cards from multiple vendors into massively parallel systems with throughput far above what is possible today. The SCI standard encompasses two levels of interface, a physical level and a logical level. The physical level specifies electrical, mechanical and thermal characteristics of connectors and cards that meet the standard. The logical level describes the address space, data transfer protocols, cache coherence mechanisms, synchronization primitives and error recovery. In this paper we address logical level issues such as packet formats, packet transmission, transaction handshake, flow control, and cache coherence. 11 refs., 10 figs.

  11. Sandia Scalable Encryption Software

    SciTech Connect

    Tarman, Thomas D.

    1997-08-13

    Sandia Scalable Encryption Library (SSEL) Version 1.0 is a library of functions that implement Sandia's scalable encryption algorithm. This algorithm is used to encrypt Asynchronous Transfer Mode (ATM) data traffic, and is capable of operating on an arbitrary number of bits at a time (which permits scaling via parallel implementations), while being interoperable with differently scaled versions of this algorithm. The routines in this library implement 8 bit and 32 bit versions of a non-linear mixer which is compatible with Sandia's hardware-based ATM encryptor.

  12. Visualizing the Positive-Negative Interface of Molecular Electrostatic Potentials as an Educational Tool for Assigning Chemical Polarity

    ERIC Educational Resources Information Center

    Schonborn, Konrad; Host, Gunnar; Palmerius, Karljohan

    2010-01-01

    To help in interpreting the polarity of a molecule, charge separation can be visualized by mapping the electrostatic potential at the van der Waals surface using a color gradient or by indicating positive and negative regions of the electrostatic potential using different colored isosurfaces. Although these visualizations capture the molecular…

  13. Visualizing the Positive-Negative Interface of Molecular Electrostatic Potentials as an Educational Tool for Assigning Chemical Polarity

    ERIC Educational Resources Information Center

    Schonborn, Konrad; Host, Gunnar; Palmerius, Karljohan

    2010-01-01

    To help in interpreting the polarity of a molecule, charge separation can be visualized by mapping the electrostatic potential at the van der Waals surface using a color gradient or by indicating positive and negative regions of the electrostatic potential using different colored isosurfaces. Although these visualizations capture the molecular…

  14. Scalable Parallel Utopia

    SciTech Connect

    King, D.; Pierson, L.

    1998-10-01

    This contribution proposes a 128-bit-wide interface structure clocked at approximately 80 MHz that will operate at 10 Gbps as a strawman for an OC-192c Utopia specification. In addition, the concept of scalable width of data transfers in order to maintain manageably low clock rates is proposed.

  15. Visualization of High-Order Finite Element Methods

    DTIC Science & Technology

    2013-03-27

    Peters, Valerio Pascucci, Robert M. Kirby and Claudio T. Silva, "Topology Verification for Isosurface Extraction", IEEE Transactions on Visualization... Visualization of High-Order Methods, Professor Robert M. Kirby, Mr. Robert Haimes, University of Utah, Office of Sponsored Programs, University of Utah, Salt Lake...

  16. GRIZ. Finite Element Results Visualization for Unstructured Grids

    SciTech Connect

    Dovey, D.; Spelce, T.E.; Christon, M.A.

    1996-03-01

    GRIZ is a general-purpose post-processing application supporting interactive visualization of finite element analysis results on unstructured grids. In addition to basic pseudocolor renderings of state variables over the mesh surface, GRIZ provides modern visualization techniques such as isocontours and isosurfaces, cutting planes, vector field display, and particle traces. GRIZ accepts both command-line and mouse-driven input, and is portable to virtually any UNIX platform which provides Motif and OpenGL libraries.

  17. Scalable Work Stealing

    SciTech Connect

    Dinan, James S.; Larkins, D. B.; Sadayappan, Ponnuswamy; Krishnamoorthy, Sriram; Nieplocha, Jaroslaw

    2009-11-14

    Irregular and dynamic parallel applications pose significant challenges to achieving scalable performance on large-scale multicore clusters. These applications often require ongoing, dynamic load balancing in order to maintain efficiency. While effective at small scale, centralized load balancing schemes quickly become a bottleneck on large-scale clusters. Work stealing is a popular approach to distributed dynamic load balancing; however its performance on large-scale clusters is not well understood. Prior work on work stealing has largely focused on shared memory machines. In this work we investigate the design and scalability of work stealing on modern distributed memory systems. We demonstrate high efficiency and low overhead when scaling to 8,192 processors for three benchmark codes: a producer-consumer benchmark, the unbalanced tree search benchmark, and a multiresolution analysis kernel.
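
    The basic work-stealing loop evaluated in the paper can be illustrated on a single node: each worker pops tasks from its own queue and, when it runs dry, steals from a randomly chosen victim. The sketch below is a thread-based toy with a deliberately crude termination check; the distributed-memory implementation studied in the paper is not reproduced.

        # Single-node toy of work stealing: local pops from one end of the
        # owner's deque, steals from the other end of a random victim's deque.
        import random
        import threading
        from collections import deque

        NUM_WORKERS = 4
        queues = [deque() for _ in range(NUM_WORKERS)]
        results = [0] * NUM_WORKERS

        def worker(wid):
            while True:
                try:
                    task = queues[wid].pop()                  # local work (LIFO end)
                except IndexError:
                    victim = random.randrange(NUM_WORKERS)
                    try:
                        task = queues[victim].popleft()       # steal (FIFO end)
                    except IndexError:
                        if not any(queues):                   # crude termination check
                            return
                        continue
                results[wid] += task                          # "execute" the task

        for i in range(64):
            queues[i % 2].append(i)                           # deliberately unbalanced load

        threads = [threading.Thread(target=worker, args=(w,)) for w in range(NUM_WORKERS)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        print("work done per worker:", results)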

  18. Visualizing higher order finite elements. Final report

    SciTech Connect

    Thompson, David C; Pebay, Philippe Pierre

    2005-11-01

    This report contains an algorithm for decomposing higher-order finite elements into regions appropriate for isosurfacing and proves the conditions under which the algorithm will terminate. Finite elements are used to create piecewise polynomial approximants to the solution of partial differential equations for which no analytical solution exists. These polynomials represent fields such as pressure, stress, and momentum. In the past, these polynomials have been linear in each parametric coordinate. Each polynomial coefficient must be uniquely determined by a simulation, and these coefficients are called degrees of freedom. When there are not enough degrees of freedom, simulations will typically fail to produce a valid approximation to the solution. Recent work has shown that increasing the number of degrees of freedom by increasing the order of the polynomial approximation (instead of increasing the number of finite elements, each of which has its own set of coefficients) can allow some types of simulations to produce a valid approximation with many fewer degrees of freedom than increasing the number of finite elements alone. However, once the simulation has determined the values of all the coefficients in a higher-order approximant, tools do not exist for visual inspection of the solution. This report focuses on a technique for the visual inspection of higher-order finite element simulation results based on decomposing each finite element into simplicial regions where existing visualization algorithms such as isosurfacing will work. The requirements of the isosurfacing algorithm are enumerated and related to the places where the partial derivatives of the polynomial become zero. The original isosurfacing algorithm is then applied to each of these regions in turn.

  19. Complexity in scalable computing.

    SciTech Connect

    Rouson, Damian W. I.

    2008-12-01

    The rich history of scalable computing research owes much to a rapid rise in computing platform scale in terms of size and speed. As platforms evolve, so must algorithms and the software expressions of those algorithms. Unbridled growth in scale inevitably leads to complexity. This special issue grapples with two facets of this complexity: scalable execution and scalable development. The former results from efficient programming of novel hardware with increasing numbers of processing units (e.g., cores, processors, threads or processes). The latter results from efficient development of robust, flexible software with increasing numbers of programming units (e.g., procedures, classes, components or developers). The progression in the above two parenthetical lists goes from the lowest levels of abstraction (hardware) to the highest (people). This issue's theme encompasses this entire spectrum. The lead author of each article resides in the Scalable Computing Research and Development Department at Sandia National Laboratories in Livermore, CA. Their co-authors hail from other parts of Sandia, other national laboratories and academia. Their research sponsors include several programs within the Department of Energy's Office of Advanced Scientific Computing Research and its National Nuclear Security Administration, along with Sandia's Laboratory Directed Research and Development program and the Office of Naval Research. The breadth of interests of these authors and their customers reflects in the breadth of applications this issue covers. This article demonstrates how to obtain scalable execution on the increasingly dominant high-performance computing platform: a Linux cluster with multicore chips. The authors describe how deep memory hierarchies necessitate reducing communication overhead by using threads to exploit shared register and cache memory. On a matrix-matrix multiplication problem, they achieve up to 96% parallel efficiency with a three-part strategy: intra

  20. A Scalable Analysis Toolkit

    NASA Technical Reports Server (NTRS)

    Aiken, Alexander

    2001-01-01

    The Scalable Analysis Toolkit (SAT) project aimed to demonstrate that it is feasible and useful to statically detect software bugs in very large systems. The technical focus of the project was on a relatively new class of constraint-based techniques for analysis software, where the desired facts about programs (e.g., the presence of a particular bug) are phrased as constraint problems to be solved. At the beginning of this project, the most successful forms of formal software analysis were limited forms of automatic theorem proving (as exemplified by the analyses used in language type systems and optimizing compilers), semi-automatic theorem proving for full verification, and model checking. With a few notable exceptions these approaches had not been demonstrated to scale to software systems of even 50,000 lines of code. Realistic approaches to large-scale software analysis cannot hope to make every conceivable formal method scale. Thus, the SAT approach is to mix different methods in one application by using coarse and fast but still adequate methods at the largest scales, and reserving the use of more precise but also more expensive methods at smaller scales for critical aspects (that is, aspects critical to the analysis problem under consideration) of a software system. The principled method proposed for combining a heterogeneous collection of formal systems with different scalability characteristics is mixed constraints. This idea had been used previously in small-scale applications with encouraging results: using mostly coarse methods and narrowly targeted precise methods, useful information (meaning the discovery of bugs in real programs) was obtained with excellent scalability.

  1. Scalable solvers and applications

    SciTech Connect

    Ribbens, C J

    2000-10-27

    The purpose of this report is to summarize research activities carried out under Lawrence Livermore National Laboratory (LLNL) research subcontract B501073. This contract supported the principal investigator (PI), Dr. Calvin Ribbens, during his sabbatical visit to LLNL from August 1999 through June 2000. Results and conclusions from the work are summarized below in two major sections. The first section covers contributions to the Scalable Linear Solvers and hypre projects in the Center for Applied Scientific Computing (CASC). The second section describes results from collaboration with Patrice Turchi of LLNL's Chemistry and Materials Science Directorate (CMS). A list of publications supported by this subcontract appears at the end of the report.

  2. Scalable optical quantum computer

    SciTech Connect

    Manykin, E A; Mel'nichenko, E V

    2014-12-31

    A way of designing a scalable optical quantum computer based on the photon echo effect is proposed. Individual rare earth ions Pr3+, regularly located in the lattice of the orthosilicate (Y2SiO5) crystal, are suggested to be used as optical qubits. Operations with qubits are performed using coherent and incoherent laser pulses. The operation protocol includes both the method of measurement-based quantum computations and the technique of optical computations. Modern hybrid photon echo protocols, which provide a sufficient quantum efficiency when reading recorded states, are considered as most promising for quantum computations and communications. (quantum computer)

  3. Scalable optical quantum computer

    NASA Astrophysics Data System (ADS)

    Manykin, E. A.; Mel'nichenko, E. V.

    2014-12-01

    A way of designing a scalable optical quantum computer based on the photon echo effect is proposed. Individual rare earth ions Pr3+, regularly located in the lattice of the orthosilicate (Y2SiO5) crystal, are suggested to be used as optical qubits. Operations with qubits are performed using coherent and incoherent laser pulses. The operation protocol includes both the method of measurement-based quantum computations and the technique of optical computations. Modern hybrid photon echo protocols, which provide a sufficient quantum efficiency when reading recorded states, are considered as most promising for quantum computations and communications.

  4. SFT: Scalable Fault Tolerance

    SciTech Connect

    Petrini, Fabrizio; Nieplocha, Jarek; Tipparaju, Vinod

    2006-04-15

    In this paper we will present a new technology that we are currently developing within the SFT: Scalable Fault Tolerance FastOS project which seeks to implement fault tolerance at the operating system level. Major design goals include dynamic reallocation of resources to allow continuing execution in the presence of hardware failures, very high scalability, high efficiency (low overhead), and transparency—requiring no changes to user applications. Our technology is based on a global coordination mechanism, that enforces transparent recovery lines in the system, and TICK, a lightweight, incremental checkpointing software architecture implemented as a Linux kernel module. TICK is completely user-transparent and does not require any changes to user code or system libraries; it is highly responsive: an interrupt, such as a timer interrupt, can trigger a checkpoint in as little as 2.5μs; and it supports incremental and full checkpoints with minimal overhead—less than 6% with full checkpointing to disk performed as frequently as once per minute.
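
    The incremental-checkpointing idea above can be illustrated outside the kernel. The sketch below is a conceptual, user-space analogue, not TICK itself (which tracks dirty memory pages inside the Linux kernel): state is split into fixed-size blocks, and only blocks whose hash changed since the previous checkpoint are kept.

    ```python
    # Conceptual sketch only: user-space incremental checkpointing by hashing
    # fixed-size blocks of state and keeping just the blocks that changed.
    import hashlib

    class IncrementalCheckpointer:
        def __init__(self, block_size=4096):
            self.block_size = block_size
            self.last_digests = {}

        def checkpoint(self, state_bytes):
            """Return {block_offset: block_bytes} for blocks changed since the last call."""
            delta = {}
            for offset in range(0, len(state_bytes), self.block_size):
                block = state_bytes[offset:offset + self.block_size]
                digest = hashlib.sha1(block).digest()
                if self.last_digests.get(offset) != digest:
                    delta[offset] = block
                    self.last_digests[offset] = digest
            return delta

    ckpt = IncrementalCheckpointer()
    first = ckpt.checkpoint(b"x" * 16384)                  # full checkpoint: all blocks dirty
    second = ckpt.checkpoint(b"x" * 8192 + b"y" * 8192)    # incremental: only changed blocks
    ```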

  5. Visualization of a Large Set of Hydrogen Atomic Orbital Contours Using New and Expanded Sets of Parametric Equations

    ERIC Educational Resources Information Center

    Rhile, Ian J.

    2014-01-01

    Atomic orbitals are a theme throughout the undergraduate chemistry curriculum, and visualizing them has been a theme in this journal. Contour plots as isosurfaces or contour lines in a plane are the most familiar representations of the hydrogen wave functions. In these representations, a surface of a fixed value of the wave function ψ is plotted…

  6. Visualization of a Large Set of Hydrogen Atomic Orbital Contours Using New and Expanded Sets of Parametric Equations

    ERIC Educational Resources Information Center

    Rhile, Ian J.

    2014-01-01

    Atomic orbitals are a theme throughout the undergraduate chemistry curriculum, and visualizing them has been a theme in this journal. Contour plots as isosurfaces or contour lines in a plane are the most familiar representations of the hydrogen wave functions. In these representations, a surface of a fixed value of the wave function ψ is plotted…

  7. Scalable Domain Decomposed Monte Carlo Particle Transport

    NASA Astrophysics Data System (ADS)

    O'Brien, Matthew Joseph

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation. The main algorithms we consider are: • Domain decomposition of constructive solid geometry: enables extremely large calculations in which the background geometry is too large to fit in the memory of a single computational node. • Load Balancing: keeps the workload per processor as even as possible so the calculation runs efficiently. • Global Particle Find: if particles are on the wrong processor, globally resolve their locations to the correct processor based on particle coordinate and background domain. • Visualizing constructive solid geometry, sourcing particles, deciding when particle streaming communication is complete, and performing spatial redecomposition. These algorithms are some of the most important parallel algorithms required for domain decomposed Monte Carlo particle transport. We demonstrate that our previous algorithms were not scalable, prove that our new algorithms are scalable, and run some of the algorithms up to 2 million MPI processes on the Sequoia supercomputer.
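
    As a rough illustration of the "global particle find" step described above, the sketch below assigns each particle to the rank whose domain box contains its coordinates. The domain boxes and the NumPy-based routing are illustrative assumptions; the dissertation's actual data structures and MPI communication are not reproduced here.

    ```python
    # Hypothetical sketch: route each particle to the rank whose spatial domain
    # contains it, based purely on particle coordinates and per-rank bounding boxes.
    # The exchange step (MPI in a real code) is omitted.
    import numpy as np

    def owning_rank(coords, domain_bounds):
        """coords: (N, 3) particle positions; domain_bounds: list of (lo, hi) boxes per rank."""
        ranks = np.full(len(coords), -1, dtype=int)
        for rank, (lo, hi) in enumerate(domain_bounds):
            inside = np.all((coords >= lo) & (coords < hi), axis=1)
            ranks[inside] = rank
        return ranks

    # Toy example: the unit cube split into two domains along x.
    bounds = [(np.array([0.0, 0.0, 0.0]), np.array([0.5, 1.0, 1.0])),
              (np.array([0.5, 0.0, 0.0]), np.array([1.0, 1.0, 1.0]))]
    particles = np.random.rand(8, 3)
    print(owning_rank(particles, bounds))   # rank each particle should be sent to
    ```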

  8. Optimized scalable network switch

    DOEpatents

    Blumrich, Matthias A.; Chen, Dong; Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Heidelberger, Philip; Steinmacher-Burow, Burkhard D.; Takken, Todd E.; Vranas, Pavlos M.

    2007-12-04

    In a massively parallel computing system having a plurality of nodes configured in m multi-dimensions, each node including a computing device, a method for routing packets towards their destination nodes is provided which includes generating at least one of a 2m plurality of compact bit vectors containing information derived from downstream nodes. A multilevel arbitration process in which downstream information stored in the compact vectors, such as link status information and fullness of downstream buffers, is used to determine a preferred direction and virtual channel for packet transmission. Preferred direction ranges are encoded and virtual channels are selected by examining the plurality of compact bit vectors. This dynamic routing method eliminates the necessity of routing tables, thus enhancing scalability of the switch.

  9. Engineering scalable biological systems

    PubMed Central

    2010-01-01

    Synthetic biology is focused on engineering biological organisms to study natural systems and to provide new solutions for pressing medical, industrial and environmental problems. At the core of engineered organisms are synthetic biological circuits that execute the tasks of sensing inputs, processing logic and performing output functions. In the last decade, significant progress has been made in developing basic designs for a wide range of biological circuits in bacteria, yeast and mammalian systems. However, significant challenges in the construction, probing, modulation and debugging of synthetic biological systems must be addressed in order to achieve scalable higher-complexity biological circuits. Furthermore, concomitant efforts to evaluate the safety and biocontainment of engineered organisms and address public and regulatory concerns will be necessary to ensure that technological advances are translated into real-world solutions. PMID:21468204

  10. Optimized scalable network switch

    DOEpatents

    Blumrich, Matthias A.; Chen, Dong; Coteus, Paul W.

    2010-02-23

    In a massively parallel computing system having a plurality of nodes configured in m multi-dimensions, each node including a computing device, a method for routing packets towards their destination nodes is provided which includes generating at least one of a 2m plurality of compact bit vectors containing information derived from downstream nodes. A multilevel arbitration process in which downstream information stored in the compact vectors, such as link status information and fullness of downstream buffers, is used to determine a preferred direction and virtual channel for packet transmission. Preferred direction ranges are encoded and virtual channels are selected by examining the plurality of compact bit vectors. This dynamic routing method eliminates the necessity of routing tables, thus enhancing scalability of the switch.

  11. Scalable Node Monitoring

    SciTech Connect

    Drotar, Alexander P.; Quinn, Erin E.; Sutherland, Landon D.

    2012-07-30

    The project description is: (1) build a high performance computer; and (2) create a tool to monitor node applications in the Component Based Tool Framework (CBTF) using code from the Lightweight Data Metric Service (LDMS). The importance of this project is that: (1) there is a need for a scalable, parallel tool to monitor nodes on clusters; and (2) new LDMS plugins need to be easily added to the tool. CBTF stands for Component Based Tool Framework. It's scalable and adjusts to different topologies automatically. It uses the MRNet (Multicast/Reduction Network) mechanism for information transport. CBTF is flexible and general enough to be used for any tool that needs to do a task on many nodes. Its components are reusable and easily added to a new tool. There are three levels of CBTF: (1) frontend node - interacts with users; (2) filter nodes - filters or concatenates information from backend nodes; and (3) backend nodes - where the actual work of the tool is done. LDMS stands for lightweight data metric services. It's a tool used for monitoring nodes. Ltool is the name of the tool we derived from LDMS. It's dynamically linked and includes the following components: Vmstat, Meminfo, Procinterrupts and more. It works as follows: the Ltool command is run on the frontend node; Ltool collects information from the backend nodes; backend nodes send information to the filter nodes; and filter nodes concatenate the information and send it to a database on the frontend node. Ltool is a useful tool when it comes to monitoring nodes on a cluster because the overhead involved with running the tool is not particularly high and it will automatically scale to any size cluster.

  12. AstroBlend: Visualization package for use with Blender

    NASA Astrophysics Data System (ADS)

    Naiman, J. P.

    2015-12-01

    AstroBlend is a visualization package for use in the three dimensional animation and modeling software, Blender. It reads data in via a text file or can use pre-fab isosurface files stored as OBJ or Wavefront files. AstroBlend supports a variety of codes such as FLASH (ascl:1010.082), Enzo (ascl:1010.072), and Athena (ascl:1010.014), and combines artistic 3D models with computational astrophysics datasets to create models and animations.

  13. Scientific Visualization for Atmospheric Data Analysis in Collaborative Virtual Environments

    NASA Astrophysics Data System (ADS)

    Engelke, Wito; Flatken, Markus; Garcia, Arturo S.; Bar, Christian; Gerndt, Andreas

    2016-04-01

    terabytes. The combination of different data sources (e.g., MOLA, HRSC, HiRISE) and selection of presented data (e.g., infrared, spectral, imagery) is also supported. Furthermore, the data is presented unchanged and with the highest possible resolution for the target setup (e.g., power-wall, workstation, laptop) and view distance. The visualization techniques for the volumetric data sets can handle VTK [6] based data sets and also support different grid types as well as a time component. In detail, the integrated volume rendering uses a GPU based ray casting algorithm which was adapted to work in spherical coordinate systems. This approach results in interactive frame-rates without compromising visual fidelity. Besides direct visualization via volume rendering, the prototype supports interactive slicing, extraction of iso-surfaces and probing. The latter can also be used for side-by-side comparison and on-the-fly diagram generation within the application. Similarly to the surface data, a combination of different data sources is supported as well. For example, the extracted iso-surface of a scalar pressure field can be used for the visualization of the temperature. The software development is supported by the ViSTA VR-toolkit [7] and supports different target systems as well as a wide range of VR-devices. Furthermore, the prototype is scalable to run on laptops, workstations and cluster setups. REFERENCES [1] A. S. Garcia, D. J. Roberts, T. Fernando, C. Bar, R. Wolff, J. Dodiya, W. Engelke, and A. Gerndt, "A collaborative workspace architecture for strengthening collaboration among space scientists," in IEEE Aerospace Conference, (Big Sky, Montana, USA), 7-14 March 2015. [2] W. Engelke, "Mars Cartography VR System 2/3." German Aerospace Center (DLR), 2015. Project Deliverable D4.2. [3] E. Hivon, F. K. Hansen, and A. J. Banday, "The healpix primer," arXiv preprint astro-ph/9905275, 1999. [4] K. M. Gorski, E. Hivon, A. Banday, B. D. Wandelt, F. K. Hansen, M. Reinecke, and M

  14. PADMA: PArallel Data Mining Agents for scalable text classification

    SciTech Connect

    Kargupta, H.; Hamzaoglu, I.; Stafford, B.

    1997-03-01

    This paper introduces PADMA (PArallel Data Mining Agents), a parallel agent based system for scalable text classification. PADMA contains modules for (1) parallel data accessing operations, (2) parallel hierarchical clustering, and (3) web-based data visualization. This paper introduces the general architecture of PADMA and presents a detailed description of its different modules.

  15. Customer oriented SNR scalability scheme for scalable video coding

    NASA Astrophysics Data System (ADS)

    Li, Z. G.; Rahardja, S.

    2005-07-01

    Let the whole region be the whole bit rate range that customers are interested in, and a sub-region be a specific bit rate range. The weighting factor of each sub-region is determined according to customers' interest. A new type of region of interest (ROI) is defined for SNR scalability such that the gap between the coding efficiency of the SNR scalability scheme and that of state-of-the-art single layer coding for a sub-region is a monotonically non-increasing function of its weighting factor. This type of ROI is used as a performance index to design a customer oriented SNR scalability scheme. Our scheme can be used to achieve an optimal customer oriented scalable tradeoff (COST). The profit can thus be maximized.

  16. Scalable Nonlinear Compact Schemes

    SciTech Connect

    Ghosh, Debojyoti; Constantinescu, Emil M.; Brown, Jed

    2014-04-01

    In this work, we focus on compact schemes resulting in tridiagonal systems of equations, specifically the fifth-order CRWENO scheme. We propose a scalable implementation of the nonlinear compact schemes by implementing a parallel tridiagonal solver based on the partitioning/substructuring approach. We use an iterative solver for the reduced system of equations; however, we solve this system to machine zero accuracy to ensure that no parallelization errors are introduced. It is possible to achieve machine-zero convergence with few iterations because of the diagonal dominance of the system. The number of iterations is specified a priori instead of a norm-based exit criterion, and collective communications are avoided. The overall algorithm thus involves only point-to-point communication between neighboring processors. Our implementation of the tridiagonal solver differs from and avoids the drawbacks of past efforts in the following ways: it introduces no parallelization-related approximations (multiprocessor solutions are exactly identical to uniprocessor ones), it involves minimal communication, the mathematical complexity is similar to that of the Thomas algorithm on a single processor, and it does not require any communication and computation scheduling.
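
    For reference, the serial building block of such a tridiagonal solve is the Thomas algorithm, sketched below in NumPy. The partitioning/substructuring across processors and the fixed-iteration reduced-system solve described in the abstract are not shown; this is only the per-block step one might start from.

    ```python
    # Minimal Thomas-algorithm sketch for a tridiagonal system A x = d, the
    # per-processor building block of a partitioned/substructured solver.
    import numpy as np

    def thomas(a, b, c, d):
        """a: sub-diagonal (n-1), b: diagonal (n), c: super-diagonal (n-1), d: rhs (n)."""
        n = len(b)
        cp, dp = np.empty(n - 1), np.empty(n)
        cp[0] = c[0] / b[0]
        dp[0] = d[0] / b[0]
        for i in range(1, n):                      # forward elimination
            denom = b[i] - a[i - 1] * cp[i - 1]
            if i < n - 1:
                cp[i] = c[i] / denom
            dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / denom
        x = np.empty(n)
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):             # back substitution
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    a = np.array([1.0, 1.0, 1.0])                  # sub-diagonal
    b = np.array([4.0, 4.0, 4.0, 4.0])             # diagonal
    c = np.array([1.0, 1.0, 1.0])                  # super-diagonal
    d = np.array([5.0, 6.0, 6.0, 5.0])             # right-hand side
    x = thomas(a, b, c, d)                          # solves to x = [1, 1, 1, 1]
    ```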

  17. Scalable SCPPM Decoder

    NASA Technical Reports Server (NTRS)

    Quir, Kevin J.; Gin, Jonathan W.; Nguyen, Danh H.; Nguyen, Huy; Nakashima, Michael A.; Moision, Bruce E.

    2012-01-01

    A decoder was developed that decodes a serial concatenated pulse position modulation (SCPPM) encoded information sequence. The decoder takes as input a sequence of four bit log-likelihood ratios (LLR) for each PPM slot in a codeword via a XAUI 10-Gb/s quad optical fiber interface. If the decoder is unavailable, it passes the LLRs on to the next decoder via a XAUI 10-Gb/s quad optical fiber interface. Otherwise, it decodes the sequence and outputs information bits through a 1-GB/s Ethernet UDP/IP (User Datagram Protocol/Internet Protocol) interface. The throughput for a single decoder unit is 150-Mb/s at an average of four decoding iterations; by connecting a number of decoder units in series, a decoding rate equal to the aggregate rate is achieved. The unit is controlled through a 1-GB/s Ethernet UDP/IP interface. This ground station decoder was developed to demonstrate a deep space optical communication link capability, and is unique in the scalable design to achieve real-time SCPPM decoding at the aggregate data rate.

  18. Scalable large format 3D displays

    NASA Astrophysics Data System (ADS)

    Chang, Nelson L.; Damera-Venkata, Niranjan

    2010-02-01

    We present a general framework for the modeling and optimization of scalable large format 3-D displays using multiple projectors. Based on this framework, we derive algorithms that can robustly optimize the visual quality of an arbitrary combination of projectors (e.g. tiled, superimposed, combinations of the two) without manual adjustment. The framework creates for the first time a new unified paradigm that is agnostic to a particular configuration of projectors yet robustly optimizes for the brightness, contrast, and resolution of that configuration. In addition, we demonstrate that our algorithms support high resolution stereoscopic video at real-time interactive frame rates achieved on commodity graphics hardware. Through complementary polarization, the framework creates high quality multi-projector 3-D displays at low hardware and operational cost for a variety of applications including digital cinema, visualization, and command-and-control walls.

  19. Declarative Visualization Queries

    NASA Astrophysics Data System (ADS)

    Pinheiro da Silva, P.; Del Rio, N.; Leptoukh, G. G.

    2011-12-01

    necessarily entirely exposed to scientists writing visualization queries, facilitates the automated construction of visualization pipelines. VisKo queries have been successfully used in support of visualization scenarios from Earth Science domains including: velocity model isosurfaces, gravity data raster, and contour map renderings. The synergistic environment provided by our CYBER-ShARE initiative at the University of Texas at El Paso has allowed us to work closely with Earth Science experts who have provided both our test data and validation as to whether the execution of VisKo queries is returning visualizations that can be used for data analysis. Additionally, we have employed VisKo queries to support visualization scenarios associated with Giovanni, an online platform for data analysis developed by NASA GES DISC. VisKo-enhanced visualizations included time series plotting of aerosol data as well as contour and raster map generation of gridded brightness-temperature data.

  20. Scalable cloud without dedicated storage

    NASA Astrophysics Data System (ADS)

    Batkovich, D. V.; Kompaniets, M. V.; Zarochentsev, A. K.

    2015-05-01

    We present a prototype of a scalable computing cloud. It is intended to be deployed on the basis of a cluster without the separate dedicated storage. The dedicated storage is replaced by the distributed software storage. In addition, all cluster nodes are used both as computing nodes and as storage nodes. This solution increases utilization of the cluster resources as well as improves fault tolerance and performance of the distributed storage. Another advantage of this solution is high scalability with a relatively low initial and maintenance cost. The solution is built on the basis of the open source components like OpenStack, CEPH, etc.

  1. Libra: Scalable Load Balance Analysis

    SciTech Connect

    2009-09-16

    Libra is a tool for scalable analysis of load balance data from all processes in a parallel application. Libra contains an instrumentation module that collects model data from parallel applications and a parallel compression mechanism that uses distributed wavelet transforms to gather load balance model data in a scalable fashion. Data is output to files, and these files can be viewed in a GUI tool by Libra users. The GUI tool associates particular load balance data with regions of code, enabling users to view the load balance properties of distributed "slices" of their application code.

  2. iSIGHT-FD scalability test report.

    SciTech Connect

    Clay, Robert L.; Shneider, Max S.

    2008-07-01

    The engineering analysis community at Sandia National Laboratories uses a number of internal and commercial software codes and tools, including mesh generators, preprocessors, mesh manipulators, simulation codes, post-processors, and visualization packages. We define an analysis workflow as the execution of an ordered, logical sequence of these tools. Various forms of analysis (and in particular, methodologies that use multiple function evaluations or samples) involve executing parameterized variations of these workflows. As part of the DART project, we are evaluating various commercial workflow management systems, including iSIGHT-FD from Engineous. This report documents the results of a scalability test that was driven by DAKOTA and conducted on a parallel computer (Thunderbird). The purpose of this experiment was to examine the suitability and performance of iSIGHT-FD for large-scale, parameterized analysis workflows. As the results indicate, we found iSIGHT-FD to be suitable for this type of application.

  3. 3D visualization of biomedical CT images based on OpenGL and VRML techniques

    NASA Astrophysics Data System (ADS)

    Yin, Meng; Luo, Qingming; Xia, Fuhua

    2002-04-01

    Current high-performance computers and advanced image processing capabilities have made three-dimensional visualization of biomedical computed tomography (CT) images a great aid to biomedical engineering research. To work with current Internet technology, where 3D data are typically stored and processed on powerful servers accessible over TCP/IP, the extracted isosurface results should be broadly applicable to medical visualization. Furthermore, this project is a future part of the PACS system our lab is working on. In this system we therefore use the 3D file format VRML2.0, which is accessed through a Web interface for manipulating 3D models. The program generates and modifies triangular isosurface meshes with the marching cubes algorithm, and then uses OpenGL and MFC techniques to render the isosurface and manipulate voxel data. The software provides adequate visualization of volumetric data. Its drawbacks are that 3D image processing on personal computers is rather slow and the set of tools for 3D visualization is limited. However, these limitations do not affect the applicability of the platform for the tasks needed in elementary laboratory experiments or data preprocessing.
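
    As a present-day stand-in for the marching cubes pipeline described above, the sketch below extracts a triangular isosurface with scikit-image and writes a simple Wavefront OBJ mesh, analogous in spirit to the VRML export. It assumes a recent scikit-image installation and is not the paper's own code.

    ```python
    # Illustrative sketch only: marching cubes isosurface extraction plus a plain
    # Wavefront OBJ export. scikit-image stands in for the paper's implementation.
    import numpy as np
    from skimage import measure  # assumes scikit-image is installed

    def volume_to_obj(volume, level, path):
        verts, faces, normals, values = measure.marching_cubes(volume, level=level)
        with open(path, "w") as f:
            for v in verts:
                f.write(f"v {v[0]} {v[1]} {v[2]}\n")
            for tri in faces + 1:                  # OBJ indices are 1-based
                f.write(f"f {tri[0]} {tri[1]} {tri[2]}\n")

    # Toy CT-like volume: distance from the center, isosurface = sphere of radius 20.
    z, y, x = np.mgrid[-32:32, -32:32, -32:32]
    volume = np.sqrt(x**2 + y**2 + z**2)
    volume_to_obj(volume, level=20.0, path="isosurface.obj")
    ```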

  4. Scalable Computation of Streamlines on Very Large Datasets

    SciTech Connect

    Pugmire, David; Childs, Hank; Garth, Christoph; Ahern, Sean; Weber, Gunther H.

    2009-09-01

    Understanding vector fields resulting from large scientific simulations is an important and often difficult task. Streamlines, curves that are tangential to a vector field at each point, are a powerful visualization method in this context. Application of streamline-based visualization to very large vector field data represents a significant challenge due to the non-local and data-dependent nature of streamline computation, and requires careful balancing of computational demands placed on I/O, memory, communication, and processors. In this paper we review two parallelization approaches based on established parallelization paradigms (static decomposition and on-demand loading) and present a novel hybrid algorithm for computing streamlines. Our algorithm is aimed at good scalability and performance across the widely varying computational characteristics of streamline-based problems. We perform performance and scalability studies of all three algorithms on a number of prototypical application problems and demonstrate that our hybrid scheme is able to perform well in different settings.
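
    The core serial operation behind any of these parallelization schemes is integrating a single streamline through a gridded vector field. The sketch below shows a fourth-order Runge-Kutta trace over a toy 2-D field using SciPy interpolation; the static, on-demand, and hybrid work-distribution strategies discussed in the paper are deliberately omitted.

    ```python
    # Serial sketch: RK4 integration of one streamline through a gridded 2-D
    # vector field (u, v). Assumes SciPy is available.
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    def trace_streamline(xs, ys, u, v, seed, h=0.05, n_steps=200):
        interp_u = RegularGridInterpolator((xs, ys), u, bounds_error=False, fill_value=0.0)
        interp_v = RegularGridInterpolator((xs, ys), v, bounds_error=False, fill_value=0.0)
        velocity = lambda p: np.array([interp_u([p])[0], interp_v([p])[0]])
        points = [np.asarray(seed, dtype=float)]
        for _ in range(n_steps):
            p = points[-1]
            k1 = velocity(p)
            k2 = velocity(p + 0.5 * h * k1)
            k3 = velocity(p + 0.5 * h * k2)
            k4 = velocity(p + h * k3)
            points.append(p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4))
        return np.array(points)

    # Toy solid-body rotation field on a 64x64 grid; the streamline traces a circle.
    xs = ys = np.linspace(-1.0, 1.0, 64)
    X, Y = np.meshgrid(xs, ys, indexing="ij")
    line = trace_streamline(xs, ys, -Y, X, seed=(0.5, 0.0))
    ```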

  5. A New Approach to the Visual Rendering of Mantle Tomography

    NASA Astrophysics Data System (ADS)

    Holtzman, B. K.; Pratt, M. J.; Turk, M.; Hannasch, D. A.

    2016-12-01

    Visualization of mantle tomographic models requires a range of subjective aesthetic decisions that are often made subconsciously or unarticulated by authors. Many of these decisions affect the interpretations of the model, and therefore should be articulated and understood. In 2D these decisions are manifest in the choice of colormap, including the data values associated with the neutral/transitional colorband, as well as the correspondence between the extrema in the colormap and the parameters of the extrema. For example, we generally choose warm colors to signify slow and cool colors to signify fast velocities (or perturbations), but where is the transition, and what are the color gradients from transition to extrema? In 3D, volumes are generally rendered by choosing an isosurface of a velocity perturbation (relative to a model at each depth) and coloring it slow to fast. The choice of isosurface is arbitrary or guided by a researcher's intuition, again strongly affecting (or driven by) the interpretation. Here, we present a different approach to 3-D rendering of tomography models, using true volumetric rendering with "yt", a Python package for visualization and analysis of data. In our approach, we do not use isosurfaces; instead, we render the extrema in the tomographic model as the most opaque, with an opacity function that touches zero (totally transparent) at dynamically selected values, or at the average value at each depth. The intent is that the most robust aspects of the model are visually clear, and the visualization emphasizes the nature of the interfaces between regions as well as the form of distinct mantle regions. Much of the current scientific discussion in upper mantle tomography focuses on the nature of interfaces, so we will demonstrate how decisions in the definition of the transparent regions influence interpretation of tomographic models. Our aim is to develop a visual language for tomographic visualization that can help focus geodynamic questions.
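
    A schematic version of the opacity mapping described above is sketched below in plain NumPy: opacity goes to zero at the per-depth average value and to one at the largest perturbation, so model extrema render most opaque. The actual rendering in the abstract uses the yt volume renderer; the function and parameter names here are illustrative only.

    ```python
    # Illustrative NumPy sketch of an extrema-emphasizing opacity function;
    # not the yt renderer itself.
    import numpy as np

    def opacity(dv, depth_mean, gamma=1.5):
        """dv: perturbation values at one depth; gamma shapes the falloff toward extrema."""
        span = np.abs(dv - depth_mean).max()
        span = span if span > 0 else 1.0
        alpha = np.abs(dv - depth_mean) / span     # 0 at the average, 1 at the extrema
        return np.clip(alpha, 0.0, 1.0) ** gamma

    depth_slice = 2.0 * np.random.randn(128, 128)  # toy shear-velocity perturbations (%)
    alpha = opacity(depth_slice, depth_slice.mean())
    ```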

  6. Scalability study of solid xenon

    SciTech Connect

    Yoo, J.; Cease, H.; Jaskierny, W. F.; Markley, D.; Pahlka, R. B.; Balakishiyeva, D.; Saab, T.; Filipenko, M.

    2015-04-01

    We report a demonstration of the scalability of optically transparent xenon in the solid phase for use as a particle detector above a kilogram scale. We employed a cryostat cooled by liquid nitrogen combined with a xenon purification and chiller system. A modified Bridgman technique reproduces a large-scale, optically transparent solid xenon.

  7. Scalable, distributed data mining using an agent based architecture

    SciTech Connect

    Kargupta, H.; Hamzaoglu, I.; Stafford, B.

    1997-05-01

    Algorithm scalability and the distributed nature of both data and computation deserve serious attention in the context of data mining. This paper presents PADMA (PArallel Data Mining Agents), a parallel agent based system, that makes an effort to address these issues. PADMA contains modules for (1) parallel data accessing operations, (2) parallel hierarchical clustering, and (3) web-based data visualization. This paper describes the general architecture of PADMA and experimental results.

  8. A Scalable Database Infrastructure

    NASA Astrophysics Data System (ADS)

    Arko, R. A.; Chayes, D. N.

    2001-12-01

    The rapidly increasing volume and complexity of MG&G data, and the growing demand from funding agencies and the user community that it be easily accessible, demand that we improve our approach to data management in order to reach a broader user-base and operate more efficiently and effectively. We have chosen an approach based on industry-standard relational database management systems (RDBMS) that use community-wide data specifications, where there is a clear and well-documented external interface that allows use of general purpose as well as customized clients. Rapid prototypes assembled with this approach show significant advantages over the traditional, custom-built data management systems that often use "in-house" legacy file formats, data specifications, and access tools. We have developed an effective database prototype based on a public domain RDBMS (PostgreSQL) and metadata standard (FGDC), and used it as a template for several ongoing MG&G database management projects - including ADGRAV (Antarctic Digital Gravity Synthesis), MARGINS, the Community Review system of the Digital Library for Earth Science Education, multibeam swath bathymetry metadata, and the R/V Maurice Ewing onboard acquisition system. By using standard formats and specifications, and working from a common prototype, we are able to reuse code and deploy rapidly. Rather than spend time on low-level details such as storage and indexing (which are built into the RDBMS), we can focus on high-level details such as documentation and quality control. In addition, because many commercial off-the-shelf (COTS) and public domain data browsers and visualization tools have built-in RDBMS support, we can focus on backend development and leave the choice of a frontend client(s) up to the end user. While our prototype is running under an open source RDBMS on a single processor host, the choice of standard components allows this implementation to scale to commercial RDBMS products and multiprocessor servers as

  9. A Scalable and Integrative System for Pathway Bioinformatics and Systems Biology

    PubMed Central

    Compani, Behnam; Su, Trent; Chang, Ivan; Cheng, Jianlin; Shah, Kandarp H.; Whisenant, Thomas; Dou, Yimeng; Bergmann, Adriel; Cheong, Raymond; Wold, Barbara; Bardwell, Lee; Levchenko, Andre; Baldi, Pierre; Mjolsness, Eric

    2011-01-01

    Motivation: Progress in systems biology depends on developing scalable informatics tools to predictively model, visualize, and flexibly store information about complex biological systems. Scalability of these tools, as well as their ability to integrate within larger frameworks of evolving tools, is critical to address the multi-scale and size complexity of biological systems. Results: Using current software technology, such as self-generation of database and object code from UML schemas, facilitates rapid updating of a scalable expert assistance system for modeling biological pathways. Distribution of key components along with connectivity to external data sources and analysis tools is achieved via a web service interface. PMID:20865537

  10. Medical visualization based on VRML technology and its application

    NASA Astrophysics Data System (ADS)

    Yin, Meng; Luo, Qingming; Lu, Qiang; Sheng, Rongbing; Liu, Yafeng

    2003-07-01

    Current high-performance computers and advanced image processing capabilities have made three-dimensional visualization of biomedical images a great aid to biomedical engineering research. To work with current Internet technology, where 3-D data are typically stored and processed on powerful servers accessible over TCP/IP, the extracted isosurface results should be broadly applicable to medical visualization. In this system we therefore use the 3-D file format VRML2.0, which is accessed through a Web interface for manipulating 3-D models. The program generates and modifies triangular isosurface meshes with the marching cubes algorithm, using OpenGL and MFC techniques to render the isosurface and manipulate voxel data. The software provides adequate visualization of volumetric data. Its drawbacks are that 3-D image processing on personal computers is rather slow and the set of tools for 3-D visualization is limited. However, these limitations do not affect the applicability of the platform for the tasks needed in elementary laboratory experiments or data preprocessing. With the help of OCT and MPE scanning image systems, applying these techniques to the visualization of the rabbit brain and constructing data sets of hierarchical subdivisions of cerebral information, we can establish a virtual environment on the World Wide Web for rabbit brain research from gross anatomy down to tissue and cellular levels of detail, providing graphical modeling and information management of both the outer and the inner space of the rabbit brain.

  11. A scalable tools communication infrastructure.

    SciTech Connect

    Buntinas, D.; Bosilca, G.; Graham, R. L.; Vallee, G.; Watson, G. R.; Mathematics and Computer Science; Univ. of Tennessee; ORNL; IBM

    2008-07-01

    The Scalable Tools Communication Infrastructure (STCI) is an open source collaborative effort intended to provide high-performance, scalable, resilient, and portable communications and process control services for a wide variety of user and system tools. STCI is aimed specifically at tools for ultrascale computing and uses a component architecture to simplify tailoring the infrastructure to a wide range of scenarios. This paper describes STCI's design philosophy, the various components that will be used to provide an STCI implementation for a range of ultrascale platforms, and a range of tool types. These include tools supporting parallel run-time environments, such as MPI, parallel application correctness tools and performance analysis tools, as well as system monitoring and management tools.

  12. A Scalable Tools Communication Infrastructure

    SciTech Connect

    Buntinas, Darius; Bosilca, George; Graham, Richard L; Vallee, Geoffroy R; Watson, Gregory R.

    2008-01-01

    The Scalable Tools Communication Infrastructure (STCI) is an open source collaborative effort intended to provide high-performance, scalable, resilient, and portable communications and process control services for a wide variety of user and system tools. STCI is aimed specifically at tools for ultrascale computing and uses a component architecture to simplify tailoring the infrastructure to a wide range of scenarios. This paper describes STCI's design philosophy, the various components that will be used to provide an STCI implementation for a range of ultrascale platforms, and a range of tool types. These include tools supporting parallel run-time environments, such as MPI, parallel application correctness tools and performance analysis tools, as well as system monitoring and management tools.

  13. A Scalable Media Multicasting Scheme

    NASA Astrophysics Data System (ADS)

    Youwei, Zhang

    IP multicast has proved infeasible to deploy, whereas Application Layer Multicast (ALM) based on end-system multicast is practical and more scalable than IP multicast in the Internet. In this paper, an ALM protocol called Scalable multicast for High Definition streaming media (SHD) is proposed, in which end-to-end transmission capability is fully exploited for HD media transmission without much additional control overhead. Similar to the transmission style of BitTorrent, hosts forward only part of a data piece according to the available bandwidth, which greatly improves bandwidth usage. On the other hand, some novel strategies are adopted to overcome the disadvantages of the BitTorrent protocol in streaming media transmission. Data transmission between hosts is implemented in a many-to-one transmission style in a hierarchical architecture in most circumstances. Simulations on an Internet-like topology indicate that SHD achieves low link stress, low end-to-end latency, and stability.

  14. Scalable computations in penetration mechanics

    SciTech Connect

    Kimsey, K.D.; Schraml, S.J.; Hertel, E.S.

    1998-01-01

    This paper presents an overview of an explicit message passing paradigm for an Eulerian finite volume method for modeling solid dynamics problems involving shock wave propagation, multiple materials, and large deformations. Three-dimensional simulations of high-velocity impact were conducted on the IBM SP2, the SGI Power challenge Array, and the SGI Origin 2000. The scalability of the message-passing code on distributed-memory and symmetric multiprocessor architectures is presented and compared to the ideal linear performance.

  15. Study on scalable coding algorithm for medical image.

    PubMed

    Hongxin, Chen; Zhengguang, Liu; Hongwei, Zhang

    2005-01-01

    According to the characteristics of medical images and the wavelet transform, a scalable coding algorithm is presented, which can be used in image transmission over a network. The wavelet transform makes up for the weaknesses of the DCT transform and is similar to the human visual system. The second generation of wavelet transform, the lifting scheme, can be completed in integer form: it is divided into several steps, each realized by calculation from integer to integer. The lifting scheme can simplify the computing process and increase transform precision. According to the properties of the wavelet sub-bands, wavelet coefficients are organized in order of their importance, so the code stream is formed progressively and is scalable in resolution. Experimental results show that the algorithm can be used effectively for medical image compression and is suitable for long-distance browsing.
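
    As an illustration of the integer-to-integer lifting step the abstract refers to, the sketch below applies one level of LeGall 5/3 lifting to a 1-D signal with symmetric boundary extension. This is a generic textbook formulation, not the paper's codec.

    ```python
    # One level of integer LeGall 5/3 lifting (predict + update) on a 1-D signal.
    # Both outputs stay integer-valued, matching the "integer to integer" idea.
    import numpy as np

    def lift_53_forward(x):
        x = np.asarray(x, dtype=np.int64)
        even, odd = x[0::2].copy(), x[1::2].copy()
        # Predict step: detail = odd - floor((left + right) / 2)
        right = np.append(even[1:], even[-1])[: len(odd)]
        detail = odd - ((even[: len(odd)] + right) >> 1)
        # Update step: approx = even + floor((d_left + d_right + 2) / 4)
        d_left = np.insert(detail, 0, detail[0])[: len(even)]
        d_right = np.append(detail, detail[-1])[: len(even)]
        approx = even + ((d_left + d_right + 2) >> 2)
        return approx, detail

    signal = np.array([12, 14, 15, 15, 14, 12, 10, 9])
    approx, detail = lift_53_forward(signal)       # low-pass and detail halves, both integer
    ```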

  16. Design and Analysis of Scalable Parallel Algorithms

    DTIC Science & Technology

    1993-11-15

    Publication excerpts include: Anshul Gupta, Vipin Kumar, and Ahmed Sameh, "Performance and Scalability of Preconditioned Conjugate Gradient Methods on Parallel Computers," Technical Report TR 92-64, University of Minnesota, Minneapolis, 1993; related versions appear in conference proceedings and the Journal of Parallel Programming, 20(2), 1991.

  17. Perspective: n-type oxide thermoelectrics via visual search strategies

    NASA Astrophysics Data System (ADS)

    Xing, Guangzong; Sun, Jifeng; Ong, Khuong P.; Fan, Xiaofeng; Zheng, Weitao; Singh, David J.

    2016-05-01

    We discuss and present search strategies for finding new thermoelectric compositions based on first principles electronic structure and transport calculations. We illustrate them by application to a search for potential n-type oxide thermoelectric materials. This includes a screen based on visualization of electronic energy isosurfaces. We report compounds that show potential as thermoelectric materials along with detailed properties, including SrTiO3, which is a known thermoelectric, and appropriately doped KNbO3 and rutile TiO2.

  18. Perspective: n-type oxide thermoelectrics via visual search strategies

    DOE PAGES

    Xing, Guangzong; Sun, Jifeng; Ong, Khuong P.; ...

    2016-02-12

    We discuss and present search strategies for finding new thermoelectric compositions based on first principles electronic structure and transport calculations. We illustrate them by application to a search for potential n-type oxide thermoelectric materials. This includes a screen based on visualization of electronic energy isosurfaces. Lastly, we report compounds that show potential as thermoelectric materials along with detailed properties, including SrTiO3, which is a known thermoelectric, and appropriately doped KNbO3 and rutile TiO2.

  19. Scalable WIM: effective exploration in large-scale astrophysical environments.

    PubMed

    Li, Yinggang; Fu, Chi-Wing; Hanson, Andrew J

    2006-01-01

    Navigating through large-scale virtual environments such as simulations of the astrophysical Universe is difficult. The huge spatial range of astronomical models and the dominance of empty space make it hard for users to travel across cosmological scales effectively, and the problem of wayfinding further impedes the user's ability to acquire reliable spatial knowledge of astronomical contexts. We introduce a new technique called the scalable world-in-miniature (WIM) map as a unifying interface to facilitate travel and wayfinding in a virtual environment spanning gigantic spatial scales: Power-law spatial scaling enables rapid and accurate transitions among widely separated regions; logarithmically mapped miniature spaces offer a global overview mode when the full context is too large; 3D landmarks represented in the WIM are enhanced by scale, positional, and directional cues to augment spatial context awareness; a series of navigation models are incorporated into the scalable WIM to improve the performance of travel tasks posed by the unique characteristics of virtual cosmic exploration. The scalable WIM user interface supports an improved physical navigation experience and assists pragmatic cognitive understanding of a visualization context that incorporates the features of large-scale astronomy.

  20. Rapid and scalable assembly of firefly luciferase substrates†

    PubMed Central

    McCutcheon, David C.; Porterfield, William B.; Prescher, Jennifer A.

    2015-01-01

    Bioluminescence imaging with luciferase-luciferin pairs is a popular method for visualizing biological processes in vivo. Unfortunately, most luciferins are difficult to access and remain prohibitively expensive for some imaging applications. Here we report cost-effective and efficient syntheses of D-luciferin and 6′-aminoluciferin, two widely used bioluminescent substrates. Our approach employs inexpensive anilines and Appel's salt to generate the luciferin cores in a single pot. Additionally, the syntheses are scalable and can provide multi-gram quantities of both substrates. The streamlined production and improved accessibility of luciferin reagents will bolster in vivo imaging efforts. PMID:25525906

  1. Scripts for Scalable Monitoring of Parallel Filesystem Infrastructure

    SciTech Connect

    Caldwell, Blake

    2014-02-27

    Scripts for scalable monitoring of parallel filesystem infrastructure provide frameworks for monitoring the health of block storage arrays and large InfiniBand fabrics. The block storage framework uses Python multiprocessing to scale the number of monitored arrays with the number of processors in the system. This enables live monitoring of HPC-scale filesystems with 10-50 storage arrays. For InfiniBand monitoring, there are scripts included that monitor the InfiniBand health of each host, along with visualization tools for mapping complex fabric topologies.
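
    The pattern described above might look roughly like the following sketch: health checks for many storage arrays fanned out over a Python multiprocessing pool so polling scales with the host's processor count. The array hostnames and the ping-based check are placeholders, not the actual scripts.

    ```python
    # Hypothetical sketch of pool-based array polling; hostnames and the check
    # itself are placeholders (a real script would query the array management API).
    import multiprocessing as mp
    import subprocess

    ARRAYS = [f"storage-array-{i:02d}" for i in range(1, 21)]   # placeholder hostnames

    def check_array(host):
        ok = subprocess.call(["ping", "-c", "1", "-W", "1", host],
                             stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) == 0
        return host, "healthy" if ok else "UNREACHABLE"

    if __name__ == "__main__":
        with mp.Pool(processes=min(len(ARRAYS), mp.cpu_count())) as pool:
            for host, status in pool.imap_unordered(check_array, ARRAYS):
                print(f"{host}: {status}")
    ```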

  2. Rapid and scalable assembly of firefly luciferase substrates.

    PubMed

    McCutcheon, David C; Porterfield, William B; Prescher, Jennifer A

    2015-02-21

    Bioluminescence imaging with luciferase-luciferin pairs is a popular method for visualizing biological processes in vivo. Unfortunately, most luciferins are difficult to access and remain prohibitively expensive for some imaging applications. Here we report cost-effective and efficient syntheses of d-luciferin and 6'-aminoluciferin, two widely used bioluminescent substrates. Our approach employs inexpensive anilines and Appel's salt to generate the luciferin cores in a single pot. Additionally, the syntheses are scalable and can provide multi-gram quantities of both substrates. The streamlined production and improved accessibility of luciferin reagents will bolster in vivo imaging efforts.

  3. Scripts for Scalable Monitoring of Parallel Filesystem Infrastructure

    SciTech Connect

    Caldwell, Blake

    2014-02-27

    Scripts for scalable monitoring of parallel filesystem infrastructure provide frameworks for monitoring the health of block storage arrays and large InfiniBand fabrics. The block storage framework uses Python multiprocessing to scale the number of monitored arrays with the number of processors in the system. This enables live monitoring of HPC-scale filesystems with 10-50 storage arrays. For InfiniBand monitoring, there are scripts included that monitor the InfiniBand health of each host, along with visualization tools for mapping complex fabric topologies.

  4. Scalable coding of encrypted images.

    PubMed

    Zhang, Xinpeng; Feng, Guorui; Ren, Yanli; Qian, Zhenxing

    2012-06-01

    This paper proposes a novel scheme of scalable coding for encrypted images. In the encryption phase, the original pixel values are masked by a modulo-256 addition with pseudorandom numbers that are derived from a secret key. After decomposing the encrypted data into a downsampled subimage and several data sets with a multiple-resolution construction, an encoder quantizes the subimage and the Hadamard coefficients of each data set to reduce the data amount. Then, the data of quantized subimage and coefficients are regarded as a set of bitstreams. At the receiver side, while a subimage is decrypted to provide the rough information of the original content, the quantized coefficients can be used to reconstruct the detailed content with an iteratively updating procedure. Because of the hierarchical coding mechanism, the principal original content with higher resolution can be reconstructed when more bitstreams are received.
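
    The encryption step described above can be illustrated with a small NumPy sketch: pixels are masked by modulo-256 addition of key-derived pseudorandom numbers, and the mask is subtracted (modulo 256) to decrypt. The PRNG choice below is an assumption, and the multi-resolution Hadamard coding and iterative reconstruction are not reproduced.

    ```python
    # Toy sketch of the masking step only; np.random.default_rng stands in for the
    # paper's key-derived pseudorandom number generator.
    import numpy as np

    def encrypt(image_u8, key):
        rng = np.random.default_rng(key)
        mask = rng.integers(0, 256, size=image_u8.shape, dtype=np.uint8)
        cipher = (image_u8.astype(np.uint16) + mask) % 256
        return cipher.astype(np.uint8), mask

    def decrypt(cipher_u8, mask):
        plain = (cipher_u8.astype(np.int16) - mask) % 256
        return plain.astype(np.uint8)

    image = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    cipher, mask = encrypt(image, key=2012)
    assert np.array_equal(decrypt(cipher, mask), image)
    ```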

  5. Highly scalable coherent fiber combining

    NASA Astrophysics Data System (ADS)

    Antier, M.; Bourderionnet, J.; Larat, C.; Lallier, E.; Brignon, A.

    2015-10-01

    An architecture for active coherent fiber laser beam combining using an interferometric measurement is demonstrated. This technique allows measuring the exact phase errors of each fiber beam in a single shot. Therefore, this method is a promising candidate for combining a very large number of fibers. Our experimental system, composed of 16 independent fiber channels, is used to evaluate the achieved phase locking stability in terms of phase shift error and bandwidth. We show that only 8 pixels per fiber on the camera are required for stable closed-loop operation with a residual phase error of λ/20 rms, which demonstrates the scalability of this concept. Furthermore, we propose a beam shaping technique to increase the combining efficiency.

  6. Scalable Performance Measurement and Analysis

    SciTech Connect

    Gamblin, Todd

    2009-01-01

    Concurrency levels in large-scale, distributed-memory supercomputers are rising exponentially. Modern machines may contain 100,000 or more microprocessor cores, and the largest of these, IBM's Blue Gene/L, contains over 200,000 cores. Future systems are expected to support millions of concurrent tasks. In this dissertation, we focus on efficient techniques for measuring and analyzing the performance of applications running on very large parallel machines. Tuning the performance of large-scale applications can be a subtle and time-consuming task because application developers must measure and interpret data from many independent processes. While the volume of the raw data scales linearly with the number of tasks in the running system, the number of tasks is growing exponentially, and data for even small systems quickly becomes unmanageable. Transporting performance data from so many processes over a network can perturb application performance and make measurements inaccurate, and storing such data would require a prohibitive amount of space. Moreover, even if it were stored, analyzing the data would be extremely time-consuming. In this dissertation, we present novel methods for reducing performance data volume. The first draws on multi-scale wavelet techniques from signal processing to compress systemwide, time-varying load-balance data. The second uses statistical sampling to select a small subset of running processes to generate low-volume traces. A third approach combines sampling and wavelet compression to stratify performance data adaptively at run-time and to reduce further the cost of sampled tracing. We have integrated these approaches into Libra, a toolset for scalable load-balance analysis. We present Libra and show how it can be used to analyze data from large scientific applications scalably.

  7. Visualization of Turbulence with OpenGL

    NASA Astrophysics Data System (ADS)

    Avril, A.; Makowski, M. A.; Umansky, M.; Kalling, R.; Schissel, D. P.

    2009-11-01

    Turbulence is an all-pervasive phenomenon in plasmas. The edge turbulence is of particular interest for the containment of plasmas during fusion processes. It is simulated with BOUT, a 4D (3 spatial + time coordinates) edge turbulence simulation code that is typical of modern codes in many ways. While predictive, the 4D outputs of these codes are difficult to visualize. In an effort to better understand the macroscopic trends of edge turbulence in toroidal plasmas, we are developing routines to render the BOUT output, using the OpenGL framework in C++. These routines will allow us to follow the evolution of isosurfaces through time, and we anticipate gaining insight into the nonlinear dynamics of turbulence as a result. Additionally, these routines could potentially be used to visualize the output of other modeling codes.

  8. Evaluation of angiogram visualization methods for fast and reliable aneurysm diagnosis

    NASA Astrophysics Data System (ADS)

    Lesar, Žiga; Bohak, Ciril; Marolt, Matija

    2015-03-01

    In this paper we present the results of an evaluation of different visualization methods for angiogram volumetric data: ray casting, marching cubes, and multi-level partition of unity implicits. There are several options available with ray-casting: isosurface extraction, maximum intensity projection and alpha compositing, each producing fundamentally different results. Different visualization methods are suitable for different needs, so this choice is crucial in diagnosis and decision making processes. We also evaluate visual effects such as ambient occlusion, screen space ambient occlusion, and depth of field. Some visualization methods include transparency, so we address the question of relevancy of this additional visual information. We employ transfer functions to map data values to color and transparency, allowing us to view or hide particular tissues. All the methods presented in this paper were developed using OpenCL, striving for real-time rendering and quality interaction. An evaluation has been conducted to assess the suitability of the visualization methods. Results show superiority of isosurface extraction with ambient occlusion effects. Visual effects may positively or negatively affect perception of depth, motion, and relative positions in space.
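
    Two of the ray-casting options compared above, maximum intensity projection and front-to-back alpha compositing, can be contrasted with the toy NumPy sketch below. The crude linear transfer function and axis-aligned rays are simplifying assumptions; the paper's OpenCL renderer is not reproduced.

    ```python
    # Toy comparison: maximum intensity projection (MIP) versus front-to-back
    # alpha compositing along one axis of a volume.
    import numpy as np

    def mip(volume):
        return volume.max(axis=0)                          # brightest sample per ray

    def composite(volume, opacity_scale=0.05):
        norm = (volume - volume.min()) / max(np.ptp(volume), 1e-12)
        color, alpha = norm, np.clip(norm * opacity_scale, 0.0, 1.0)
        image = np.zeros(volume.shape[1:])
        accumulated = np.zeros(volume.shape[1:])
        for c, a in zip(color, alpha):                     # front-to-back accumulation
            image += (1.0 - accumulated) * a * c
            accumulated += (1.0 - accumulated) * a
        return image

    volume = np.random.rand(32, 64, 64)                    # toy angiogram-like volume
    image_mip, image_dvr = mip(volume), composite(volume)
    ```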

  9. Illustrative volume visualization using GPU-based particle systems.

    PubMed

    van Pelt, Roy; Vilanova, Anna; van de Wetering, Huub

    2010-01-01

    Illustrative techniques are generally applied to produce stylized renderings. Various illustrative styles have been applied to volumetric data sets, producing clearer images and effectively conveying visual information. We adopt particle systems to produce user-configurable stylized renderings from the volume data, imitating traditional pen-and-ink drawings. In the following, we present an interactive GPU-based illustrative volume rendering framework, called VolFliesGPU. In this framework, isosurfaces are sampled by evenly distributed particle sets, delineating surface shape by illustrative styles. The appearance of these styles is based on locally-measured surface properties. For instance, hatches convey surface shape by orientation and shape characteristics are enhanced by color, mapped using a curvature-based transfer function. Hidden-surfaces are generally removed to avoid visual clutter, after that a combination of styles is applied per isosurface. Multiple surfaces and styles can be explored interactively, exploiting parallelism in both graphics hardware and particle systems. We achieve real-time interaction and prompt parametrization of the illustrative styles, using an intuitive GPGPU paradigm that delivers the computational power to drive our particle system and visualization algorithms.

  10. Rate control scheme for consistent video quality in scalable video codec.

    PubMed

    Seo, Chan-Won; Han, Jong-Ki; Nguyen, Truong Q

    2011-08-01

    Multimedia data delivered to mobile devices over wireless channels or the Internet are complicated by bandwidth fluctuation and the variety of mobile devices. Scalable video coding has been developed as an extension of H.264/AVC to solve this problem. Since scalable video codec provides various scalabilities to adapt the bitstream for the channel conditions and terminal types, scalable codec is one of the useful codecs for wired or wireless multimedia communication systems, such as IPTV and streaming services. In such scalable multimedia communication systems, video quality fluctuation degrades the visual perception significantly. It is important to efficiently use the target bits in order to maintain a consistent video quality or achieve a small distortion variation throughout the whole video sequence. The scheme proposed in this paper provides a useful function to control video quality in applications supporting scalability, whereas conventional schemes have been proposed to control video quality in the H.264 and MPEG-4 systems. The proposed algorithm decides the quantization parameter of the enhancement layer to maintain a consistent video quality throughout the entire sequence. The video quality of the enhancement layer is controlled based on a closed-form formula which utilizes the residual data and quantization error of the base layer. The simulation results show that the proposed algorithm controls the frame quality of the enhancement layer in a simple operation, where the parameter decision algorithm is applied to each frame.

  11. Scalable Video Transcaling for the Wireless Internet

    NASA Astrophysics Data System (ADS)

    Radha, Hayder; van der Schaar, Mihaela; Karande, Shirish

    2004-12-01

    The rapid and unprecedented increase in the heterogeneity of multimedia networks and devices emphasizes the need for scalable and adaptive video solutions both for coding and transmission purposes. However, in general, there is an inherent trade-off between the level of scalability and the quality of scalable video streams. In other words, the higher the bandwidth variation, the lower the overall video quality of the scalable stream that is needed to support the desired bandwidth range. In this paper, we introduce the notion of wireless video transcaling (TS), which is a generalization of (nonscalable) transcoding. With TS, a scalable video stream, that covers a given bandwidth range, is mapped into one or more scalable video streams covering different bandwidth ranges. Our proposed TS framework exploits the fact that the level of heterogeneity changes at different points of the video distribution tree over wireless and mobile Internet networks. This provides the opportunity to improve the video quality by performing the appropriate TS process. We argue that an Internet/wireless network gateway represents a good candidate for performing TS. Moreover, we describe hierarchical TS (HTS), which provides a "transcaler" with the option of choosing among different levels of TS processes with different complexities. We illustrate the benefits of TS by considering the recently developed MPEG-4 fine granularity scalability (FGS) video coding. Extensive simulation results of video TS over bit rate ranges supported by emerging wireless LANs are presented.

  12. Fully scalable video coding with packed stream

    NASA Astrophysics Data System (ADS)

    Lopez, Manuel F.; Rodriguez, Sebastian G.; Ortiz, Juan Pablo; Dana, Jose Miguel; Ruiz, Vicente G.; Garcia, Inmaculada

    2005-03-01

    Scalable video coding is a technique which allows a compressed video stream to be decoded in several different ways. This ability allows a user to adaptively recover a specific version of a video depending on its own requirements. Video sequences have temporal, spatial and quality scalabilities. In this work we introduce a novel fully scalable video codec. It is based on a motion-compensated temporal filtering (MCTF) of the video sequences and it uses some of the basic elements of JPEG 2000. This paper describes several specific proposals for video on demand and video-conferencing applications over non-reliable packet-switching data networks.

  13. Scalable Computation of Streamlines on Very Large Datasets

    SciTech Connect

    Pugmire, Dave; Garth, Christoph; Childs, Hank; Ahern, Sean; Weber, Gunther H

    2009-01-01

    Understanding vector fields resulting from large scientific simulations is an important and often difficult task. Streamlines, curves that are tangential to a vector field at each point, are a powerful visualization method in this context. Application of streamline-based visualization to very large vector field data represents a significant challenge due to the non-local and data-dependent nature of streamline computation, and requires careful balancing of computational demands placed on I/O, memory, communication, and processors. In this paper we review two parallelization approaches based on established parallelization paradigms (static decomposition and on-demand loading) and present a novel hybrid algorithm for computing streamlines. Our algorithm is aimed at good scalability and performance across the widely varying computational characteristics of streamline-based problems. We perform performance and scalability studies of all three algorithms on a number of prototypical application problems and demonstrate that our hybrid scheme is able to perform well in different settings.

  14. Scalable architecture in mammalian brains.

    PubMed

    Clark, D A; Mitra, P P; Wang, S S

    2001-05-10

    Comparison of mammalian brain parts has often focused on differences in absolute size, revealing only a general tendency for all parts to grow together. Attempts to find size-independent effects using body weight as a reference variable obscure size relationships owing to independent variation of body size and give phylogenies of questionable significance. Here we use the brain itself as a size reference to define the cerebrotype, a species-by-species measure of brain composition. With this measure, across many mammalian taxa the cerebellum occupies a constant fraction of the total brain volume (0.13 +/- 0.02), arguing against the hypothesis that the cerebellum acts as a computational engine principally serving the neocortex. Mammalian taxa can be well separated by cerebrotype, thus allowing the use of quantitative neuroanatomical data to test evolutionary relationships. Primate cerebrotypes have progressively shifted and neocortical volume fractions have become successively larger in lemurs and lorises, New World monkeys, Old World monkeys, and hominoids, lending support to the idea that primate brain architecture has been driven by directed selection pressure. At the same time, absolute brain size can vary over 100-fold within a taxon, while maintaining a relatively uniform cerebrotype. Brains therefore constitute a scalable architecture.

  15. Scalable encryption using alpha rooting

    NASA Astrophysics Data System (ADS)

    Wharton, Eric J.; Panetta, Karen A.; Agaian, Sos S.

    2008-04-01

    Full and partial encryption methods are important for subscription based content providers, such as internet and cable TV pay channels. Providers need to be able to protect their products while at the same time being able to provide demonstrations to attract new customers without giving away the full value of the content. If an algorithm were introduced which could provide any level of full or partial encryption in a fast and cost effective manner, the applications to real-time commercial implementation would be numerous. In this paper, we present a novel application of alpha rooting, using it to achieve fast and straightforward scalable encryption with a single algorithm. We further present use of the measure of enhancement, the Logarithmic AME, to select optimal parameters for the partial encryption. When parameters are selected using the measure, the output image achieves a balance between protecting the important data in the image while still containing a good overall representation of the image. We will show results for this encryption method on a number of images, using histograms to evaluate the effectiveness of the encryption.
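
    Classical alpha rooting replaces each transform coefficient X by X|X|^(alpha-1), rescaling magnitudes while preserving phase. The NumPy sketch below shows that core operation in the 2-D Fourier domain; the Logarithmic AME-based parameter selection and the partial-encryption variants described in the paper are not reproduced, and the function name and test image are ours.

        import numpy as np

        def alpha_root(image, alpha):
            """Apply alpha rooting in the 2-D Fourier domain.

            Each Fourier coefficient X is replaced by X * |X|**(alpha - 1), which
            rescales magnitudes while preserving phase; values of alpha far from 1
            distort the image heavily, which is what is exploited for encryption.
            """
            X = np.fft.fft2(np.asarray(image, dtype=np.float64))
            mag = np.abs(X)
            safe = np.where(mag > 0, mag, 1.0)            # avoid 0 ** negative
            Y = X * np.where(mag > 0, safe ** (alpha - 1.0), 0.0)
            return np.real(np.fft.ifft2(Y))

        # Toy usage: "encrypt" a random test image with a strong alpha; applying
        # the reciprocal alpha restores it up to floating-point error.
        img = np.random.rand(64, 64)
        scrambled = alpha_root(img, alpha=0.2)
        restored = alpha_root(scrambled, alpha=1.0 / 0.2)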

  16. Vortex Filaments in Grids for Scalable, Fine Smoke Simulation.

    PubMed

    Meng, Zhang; Weixin, Si; Yinling, Qian; Hanqiu, Sun; Jing, Qin; Heng, Pheng-Ann

    2015-01-01

    Vortex modeling can produce attractive visual effects of dynamic fluids, which are widely applicable for dynamic media, computer games, special effects, and virtual reality systems. However, it is challenging to effectively simulate intensive and fine detailed fluids such as smoke with fast increasing vortex filaments and smoke particles. The authors propose a novel vortex filaments in grids scheme in which the uniform grids dynamically bridge the vortex filaments and smoke particles for scalable, fine smoke simulation with macroscopic vortex structures. Using the vortex model, their approach supports the trade-off between simulation speed and scale of details. After computing the whole velocity, external control can be easily exerted on the embedded grid to guide the vortex-based smoke motion. The experimental results demonstrate the efficiency of using the proposed scheme for a visually plausible smoke simulation with macroscopic vortex structures.

  17. Scalable analysis tools for sensitivity analysis and UQ (3160) results.

    SciTech Connect

    Karelitz, David B.; Ice, Lisa G.; Thompson, David C.; Bennett, Janine C.; Fabian, Nathan; Scott, W. Alan; Moreland, Kenneth D.

    2009-09-01

    The 9/30/2009 ASC Level 2 Scalable Analysis Tools for Sensitivity Analysis and UQ (Milestone 3160) contains feature recognition capability required by the user community for certain verification and validation tasks focused around sensitivity analysis and uncertainty quantification (UQ). These feature recognition capabilities include crater detection, characterization, and analysis from CTH simulation data; the ability to call fragment and crater identification code from within a CTH simulation; and the ability to output fragments in a geometric format that includes data values over the fragments. The feature recognition capabilities were tested extensively on sample and actual simulations. In addition, a number of stretch criteria were met including the ability to visualize CTH tracer particles and the ability to visualize output from within an S3D simulation.

  18. Scalable Multi-Platform Distribution of Spatial 3d Contents

    NASA Astrophysics Data System (ADS)

    Klimke, J.; Hagedorn, B.; Döllner, J.

    2013-09-01

    Virtual 3D city models provide powerful user interfaces for communication of 2D and 3D geoinformation. Providing high quality visualization of massive 3D geoinformation in a scalable, fast, and cost efficient manner is still a challenging task. Especially for mobile and web-based system environments, software and hardware configurations of target systems differ significantly. This makes it hard to provide fast, visually appealing renderings of 3D data throughout a variety of platforms and devices. Current mobile or web-based solutions for 3D visualization usually require raw 3D scene data such as triangle meshes together with textures delivered from server to client, which makes them strongly limited in terms of size and complexity of the models they can handle. In this paper, we introduce a new approach for provisioning of massive, virtual 3D city models on different platforms, namely web browsers, smartphones, or tablets, by means of an interactive map assembled from artificial oblique image tiles. The key concept is to synthesize such images of a virtual 3D city model by a 3D rendering service in a preprocessing step. This service encapsulates model handling and 3D rendering techniques for high quality visualization of massive 3D models. By generating image tiles using this service, the 3D rendering process is shifted away from the client side, which provides major advantages: (a) the complexity of the 3D city model data is decoupled from data transfer complexity; (b) the implementation of client applications is simplified significantly, as 3D rendering is encapsulated on the server side; and (c) 3D city models can be easily deployed for and used by a large number of concurrent users, leading to a high degree of scalability of the overall approach. All core 3D rendering techniques are performed on a dedicated 3D rendering server, and thin-client applications can be compactly implemented for various devices and platforms.

  19. Visual Debugging of Visualization Software: A Case Study for Particle Systems

    SciTech Connect

    Angel, Edward; Crossno, Patricia

    1999-07-12

    Visualization systems are complex dynamic software systems. Debugging such systems is difficult using conventional debuggers because the programmer must try to imagine the three-dimensional geometry based on a list of positions and attributes. In addition, the programmer must be able to mentally animate changes in those positions and attributes to grasp dynamic behaviors within the algorithm. In this paper we shall show that representing geometry, attributes, and relationships graphically permits visual pattern recognition skills to be applied to the debugging problem. The particular application is a particle system used for isosurface extraction from volumetric data. Coloring particles based on individual attributes is especially helpful when these colorings are viewed as animations over successive iterations in the program. Although we describe a particular application, the types of tools that we discuss can be applied to a variety of problems.

  20. Load balancing techniques for scalable web servers

    NASA Astrophysics Data System (ADS)

    Bryhni, Haakon; Klovning, Espen; Kure, Oivind

    1998-10-01

    Scalable web servers can be built using a Network of Workstations (NOW) where server capability can be added by adding new workstations as the workload increases. The task of load balancing Hyper Text Transfer Protocol traffic to scalable web servers is the topic of this paper. We present a classification framework for scalable web servers, and present simulations of a clustered web server. The cluster communication is modeled using a detailed, verified model of TCP/IP processing over Asynchronous Transfer Mode. The simulator is a trace-driven discrete event simulator, and the traces are obtained from the proxy server of a large Internet Service Provider in Norway. Various load balancing schemes are simulated. A round-robin load balancing policy implemented in a modified router gives better average response time and better load balancing than the Rotating Nameserver method used in current scalable web servers.
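
    The round-robin policy that performs well in these simulations is simple to state: the front-end router hands each incoming HTTP request to the next back-end node in a fixed cyclic order. The toy dispatcher below illustrates the idea only; it is not the paper's trace-driven TCP/IP-over-ATM simulator, and the class and node names are ours.

        from itertools import cycle

        class RoundRobinDispatcher:
            """Assign incoming requests to back-end servers in fixed cyclic order."""

            def __init__(self, servers):
                self._next = cycle(list(servers))

            def dispatch(self, request_id):
                return request_id, next(self._next)

        # Toy usage: spread ten requests across three web-server nodes.
        lb = RoundRobinDispatcher(["node-a", "node-b", "node-c"])
        assignments = [lb.dispatch(i) for i in range(10)]
        # node-a serves requests 0, 3, 6, 9; node-b serves 1, 4, 7; node-c serves 2, 5, 8.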

  1. Efficient entropy coding for scalable video coding

    NASA Astrophysics Data System (ADS)

    Choi, Woong Il; Yang, Jungyoup; Jeon, Byeungwoo

    2005-10-01

    The standardization for the scalable extension of H.264 has called for additional functionality based on the H.264 standard to support combined spatio-temporal and SNR scalability. For the entropy coding of the H.264 scalable extension, the Context-based Adaptive Binary Arithmetic Coding (CABAC) scheme has been considered so far. In this paper, we present a new context modeling scheme that uses inter-layer correlation between the syntax elements. As a result, it improves the coding efficiency of entropy coding in the H.264 scalable extension. In simulation results of applying the proposed scheme to encoding the syntax element mb_type, it is shown that the improvement in coding efficiency of the proposed method is up to 16% in terms of bit saving, owing to the estimation of a more adequate probability model.
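
    The notion of inter-layer context modeling can be sketched independently of the full CABAC engine: the probability context used for an enhancement-layer syntax element is selected from the value of the co-located base-layer element, and each context keeps its own adaptive probability estimate. The Python sketch below illustrates only that mechanism; the symbol values, the context mapping, and the simple counting estimator (instead of CABAC's state machine) are our assumptions.

        from collections import defaultdict

        class ContextModel:
            """One adaptive binary probability estimate per context index."""

            def __init__(self):
                # Laplace-smoothed counts of (zeros, ones) per context.
                self.counts = defaultdict(lambda: [1, 1])

            def p_one(self, ctx):
                zeros, ones = self.counts[ctx]
                return ones / (zeros + ones)

            def update(self, ctx, bit):
                self.counts[ctx][bit] += 1

        def context_index(base_layer_mb_type):
            """Pick the context from the co-located base-layer macroblock type.

            The mapping is hypothetical; the point is only that the base layer,
            not just spatial neighbours, selects the probability model.
            """
            return 0 if base_layer_mb_type == "SKIP" else 1

        model = ContextModel()
        stream = [("SKIP", 0), ("SKIP", 0), ("INTER", 1), ("SKIP", 1), ("INTER", 1)]
        for base_type, enhancement_bit in stream:
            ctx = context_index(base_type)
            p = model.p_one(ctx)                # probability an arithmetic coder would use
            model.update(ctx, enhancement_bit)  # adapt after coding the bit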

  2. A Scalable Segmented Decision Tree Abstract Domain

    NASA Astrophysics Data System (ADS)

    Cousot, Patrick; Cousot, Radhia; Mauborgne, Laurent

    The key to precision and scalability in all formal methods for static program analysis and verification is the handling of disjunctions arising in relational analyses, the flow-sensitive traversal of conditionals and loops, the context-sensitive inter-procedural calls, the interleaving of concurrent threads, etc. Explicit case enumeration immediately yields to combinatorial explosion. The art of scalable static analysis is therefore to abstract disjunctions to minimize cost while preserving weak forms of disjunctions for expressivity.

  3. Temporally Scalable Visual SLAM using a Reduced Pose Graph

    DTIC Science & Technology

    2012-05-25

    ... generated from Kinect data for illustration purposes. The pose graph optimization approach to SLAM was first introduced by Lu and Milios [19... such as the Microsoft Kinect) and results for both camera types are presented in Section V. Additionally, our approach can incorporate IMU (roll and... a camera, a Kinect sensor and a Microstrain IMU among other sensors. The data was collected in a large building over a period of six months.

  4. Visual Interface for Materials Simulations

    SciTech Connect

    Muller, Richard P.; Dorsey, David M.

    2004-08-01

    VIMES (Visual Interface for Materials Simulations) is a graphical user interface (GUI) for pre- and post-processing atomistic materials science calculations. The code includes tools for building and visualizing simple crystals, supercells, and surfaces, as well as tools for managing and modifying the input to Sandia materials simulations codes such as Quest (Peter Schultz, SNL 9235) and Towhee (Marcus Martin, SNL 9235). It is often useful to have a graphical interface to construct input for materials simulations codes and to analyze the output of these programs. VIMES has been designed not only to build and visualize different materials systems, but also to allow several Sandia codes to be easier to use and analyze. Furthermore, VIMES has been designed to be reasonably easy to extend to new materials programs. We anticipate that users of Sandia materials simulations codes will use VIMES to simplify the submission and analysis of these simulations. VIMES uses standard OpenGL graphics (as implemented in the Python programming language) to display the molecules. The algorithms used to rotate, zoom, and pan molecules are all standard applications using the OpenGL libraries. VIMES uses the Marching Cubes algorithm for isosurfacing 3D data such as molecular orbitals or electron densities around the molecules.
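
    The Marching Cubes step mentioned above can be reproduced in isolation with scikit-image, which is not what VIMES uses internally but implements the same algorithm; the synthetic Gaussian "density" below stands in for a real molecular-orbital or electron-density cube.

        import numpy as np
        from skimage import measure

        # Synthetic "electron density": a Gaussian blob on a 64^3 grid.
        ax = np.linspace(-1.0, 1.0, 64)
        X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
        density = np.exp(-(X**2 + Y**2 + Z**2) / 0.1)

        # Extract the triangle mesh of the isosurface at a chosen density value.
        verts, faces, normals, values = measure.marching_cubes(density, level=0.5)
        print(f"{len(verts)} vertices, {len(faces)} triangles on the 0.5 isosurface")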

  5. Scalable hybrid computation with spikes.

    PubMed

    Sarpeshkar, Rahul; O'Halloran, Micah

    2002-09-01

    We outline a hybrid analog-digital scheme for computing with three important features that enable it to scale to systems of large complexity: First, like digital computation, which uses several one-bit precise logical units to collectively compute a precise answer to a computation, the hybrid scheme uses several moderate-precision analog units to collectively compute a precise answer to a computation. Second, frequent discrete signal restoration of the analog information prevents analog noise and offset from degrading the computation. And, third, a state machine enables complex computations to be created using a sequence of elementary computations. A natural choice for implementing this hybrid scheme is one based on spikes because spike-count codes are digital, while spike-time codes are analog. We illustrate how spikes afford easy ways to implement all three components of scalable hybrid computation. First, as an important example of distributed analog computation, we show how spikes can create a distributed modular representation of an analog number by implementing digital carry interactions between spiking analog neurons. Second, we show how signal restoration may be performed by recursive spike-count quantization of spike-time codes. And, third, we use spikes from an analog dynamical system to trigger state transitions in a digital dynamical system, which reconfigures the analog dynamical system using a binary control vector; such feedback interactions between analog and digital dynamical systems create a hybrid state machine (HSM). The HSM extends and expands the concept of a digital finite-state-machine to the hybrid domain. We present experimental data from a two-neuron HSM on a chip that implements error-correcting analog-to-digital conversion with the concurrent use of spike-time and spike-count codes. We also present experimental data from silicon circuits that implement HSM-based pattern recognition using spike-time synchrony. We outline how HSMs may be

  6. Scalable extensions of HEVC for next generation services

    NASA Astrophysics Data System (ADS)

    Misra, Kiran; Segall, Andrew; Zhao, Jie; Kim, Seung-Hwan

    2013-02-01

    The high efficiency video coding (HEVC) standard being developed by ITU-T VCEG and ISO/IEC MPEG achieves a compression goal of reducing the bitrate by half for the same visual quality when compared with earlier video compression standards such as H.264/AVC. It achieves this goal with the use of several new tools such as quad-tree based partitioning of data, larger block sizes, improved intra prediction, the use of sophisticated prediction of motion information, inclusion of an in-loop sample adaptive offset process, etc. This paper describes an approach where the HEVC framework is extended to achieve spatial scalability using a multi-loop approach. The enhancement layer inter-predictive coding efficiency is improved by including within the decoded picture buffer multiple up-sampled versions of the decoded base layer picture. This approach has the advantage of achieving significant coding gains with a simple extension of the base layer tools such as inter-prediction, motion information signaling, etc. Coding efficiency of the enhancement layer is further improved using an adaptive loop filter and internal bit-depth increment. The performance of the proposed scalable video coding approach is compared to simulcast transmission of video data using high efficiency model version 6.1 (HM-6.1). The bitrate savings are measured using Bjontegaard Delta (BD) rate for spatial scalability factors of 2 and 1.5, respectively, when compared with simulcast anchors. It is observed that the proposed approach provides average luma BD rate gains of 33.7% and 50.5%, respectively.
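
    The multi-loop idea, decoding the base layer and inserting up-sampled copies of its pictures into the enhancement layer's reference buffer, can be sketched in a few lines. The cubic interpolation used below (SciPy's zoom) is an assumption for illustration, not the up-sampling filter of the actual proposal, and the variable names are ours.

        import numpy as np
        from scipy.ndimage import zoom

        def upsampled_reference(base_picture, factor):
            """Up-sample a decoded base-layer picture for enhancement-layer prediction."""
            return zoom(base_picture, factor, order=3)  # cubic filter; illustrative only

        # Toy decoded picture buffer for a spatial scalability factor of 2.
        base_picture = np.random.randint(0, 256, size=(90, 160)).astype(np.float64)
        decoded_picture_buffer = [upsampled_reference(base_picture, 2.0)]
        # ...previously decoded enhancement-layer pictures would be appended here too.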

  7. Wanted: Scalable Tracers for Diffusion Measurements

    PubMed Central

    2015-01-01

    Scalable tracers are potentially a useful tool to examine diffusion mechanisms and to predict diffusion coefficients, particularly for hindered diffusion in complex, heterogeneous, or crowded systems. Scalable tracers are defined as a series of tracers varying in size but with the same shape, structure, surface chemistry, deformability, and diffusion mechanism. Both chemical homology and constant dynamics are required. In particular, branching must not vary with size, and there must be no transition between ordinary diffusion and reptation. Measurements using scalable tracers yield the mean diffusion coefficient as a function of size alone; measurements using nonscalable tracers yield the variation due to differences in the other properties. Candidate scalable tracers are discussed for two-dimensional (2D) diffusion in membranes and three-dimensional diffusion in aqueous solutions. Correlations to predict the mean diffusion coefficient of globular biomolecules from molecular mass are reviewed briefly. Specific suggestions for the 3D case include the use of synthetic dendrimers or random hyperbranched polymers instead of dextran and the use of core–shell quantum dots. Another useful tool would be a series of scalable tracers varying in deformability alone, prepared by varying the density of crosslinking in a polymer to make say “reinforced Ficoll” or “reinforced hyperbranched polyglycerol.” PMID:25319586

  8. Wanted: scalable tracers for diffusion measurements.

    PubMed

    Saxton, Michael J

    2014-11-13

    Scalable tracers are potentially a useful tool to examine diffusion mechanisms and to predict diffusion coefficients, particularly for hindered diffusion in complex, heterogeneous, or crowded systems. Scalable tracers are defined as a series of tracers varying in size but with the same shape, structure, surface chemistry, deformability, and diffusion mechanism. Both chemical homology and constant dynamics are required. In particular, branching must not vary with size, and there must be no transition between ordinary diffusion and reptation. Measurements using scalable tracers yield the mean diffusion coefficient as a function of size alone; measurements using nonscalable tracers yield the variation due to differences in the other properties. Candidate scalable tracers are discussed for two-dimensional (2D) diffusion in membranes and three-dimensional diffusion in aqueous solutions. Correlations to predict the mean diffusion coefficient of globular biomolecules from molecular mass are reviewed briefly. Specific suggestions for the 3D case include the use of synthetic dendrimers or random hyperbranched polymers instead of dextran and the use of core-shell quantum dots. Another useful tool would be a series of scalable tracers varying in deformability alone, prepared by varying the density of crosslinking in a polymer to make say "reinforced Ficoll" or "reinforced hyperbranched polyglycerol."

  9. A Statistical Direct Volume Rendering Framework for Visualization of Uncertain Data.

    PubMed

    Sakhaee, Elham; Entezari, Alireza

    2016-12-08

    With uncertainty present in almost all modalities of data acquisition, reduction, transformation, and representation, there is a growing demand for mathematical analysis of uncertainty propagation in data processing pipelines. In this paper, we present a statistical framework for quantification of uncertainty and its propagation in the main stages of the visualization pipeline. We propose a novel generalization of Irwin-Hall distributions from the statistical viewpoint of splines and box-splines, which enables interpolation of random variables. Moreover, we introduce a probabilistic transfer function classification model that allows for incorporating probability density functions into the volume rendering integral. Our statistical framework allows for incorporating distributions from various sources of uncertainty, which makes it suitable in a wide range of visualization applications. We demonstrate the effectiveness of our approach in visualization of ensemble data, visualizing large datasets at reduced scale, iso-surface extraction, and visualization of noisy data.
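
    The connection to Irwin-Hall distributions can be illustrated with a small Monte Carlo check: the sum of n independent U(0,1) variables follows the Irwin-Hall distribution, which is exactly what appears when uniformly distributed noise is combined through interpolation weights. The sketch below compares the empirical density of the mean of two uniforms with the analytic (triangular) Irwin-Hall result; it illustrates the underlying distribution only, not the paper's box-spline generalization or transfer-function model.

        import numpy as np
        from math import comb, factorial

        def irwin_hall_pdf(x, n):
            """Density of the sum of n independent U(0,1) random variables."""
            x = np.asarray(x, dtype=float)
            total = np.zeros_like(x)
            for k in range(n + 1):
                total += (-1.0) ** k * comb(n, k) * np.clip(x - k, 0.0, None) ** (n - 1)
            return total / factorial(n - 1)

        # Linear interpolation midway between two uniform samples: (U1 + U2) / 2.
        rng = np.random.default_rng(0)
        samples = rng.uniform(size=(100_000, 2)).mean(axis=1)
        hist, edges = np.histogram(samples, bins=50, range=(0.0, 1.0), density=True)
        centers = 0.5 * (edges[:-1] + edges[1:])
        analytic = 2.0 * irwin_hall_pdf(2.0 * centers, n=2)  # change of variables y = s / 2
        print(np.max(np.abs(hist - analytic)))               # small Monte Carlo error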

  10. Garuda: a scalable tiled display wall using commodity PCs.

    PubMed

    Nirnimesh; Harish, Pawan; Narayanan, P J

    2007-01-01

    Cluster-based tiled display walls can provide cost-effective and scalable displays with high resolution and a large display area. The software to drive them needs to scale too if arbitrarily large displays are to be built. Chromium is a popular software API used to construct such displays. Chromium transparently renders any OpenGL application to a tiled display by partitioning and sending individual OpenGL primitives to each client per frame. Visualization applications often deal with massive geometric data with millions of primitives. Transmitting them every frame results in huge network requirements that adversely affect the scalability of the system. In this paper, we present Garuda, a client-server-based display wall framework that uses off-the-shelf hardware and a standard network. Garuda is scalable to large tile configurations and massive environments. It can transparently render any application built using the Open Scene Graph (OSG) API to a tiled display without any modification by the user. The Garuda server uses an object-based scene structure represented using a scene graph. The server determines the objects visible to each display tile using a novel adaptive algorithm that culls the scene graph to a hierarchy of frustums. Required parts of the scene graph are transmitted to the clients, which cache them to exploit the interframe redundancy. A multicast-based protocol is used to transmit the geometry to exploit the spatial redundancy present in tiled display systems. A geometry push philosophy from the server helps keep the clients in sync with one another. Neither the server nor a client needs to render the entire scene, making the system suitable for interactive rendering of massive models. Transparent rendering is achieved by intercepting the cull, draw, and swap functions of OSG and replacing them with our own. We demonstrate the performance and scalability of the Garuda system for different configurations of display wall. We also show that the

  11. Scalable k-means statistics with Titan.

    SciTech Connect

    Thompson, David C.; Bennett, Janine C.; Pebay, Philippe Pierre

    2009-11-01

    This report summarizes existing statistical engines in VTK/Titan and presents both the serial and parallel k-means statistics engines. It is a sequel to [PT08], [BPRT09], and [PT09] which studied the parallel descriptive, correlative, multi-correlative, principal component analysis, and contingency engines. The ease of use of the new parallel k-means engine is illustrated by the means of C++ code snippets and algorithm verification is provided. This report justifies the design of the statistics engines with parallel scalability in mind, and provides scalability and speed-up analysis results for the k-means engine.
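
    The data-parallel pattern behind a distributed k-means engine is independent of VTK/Titan: each rank assigns its local points to the nearest current centroid, accumulates per-cluster sums and counts, and a global reduction yields the new centroids on every rank. The mpi4py sketch below shows one such iteration for illustration; it is not the Titan engine's C++ implementation, and the function and variable names are ours.

        import numpy as np
        from mpi4py import MPI

        def kmeans_step(local_points, centroids, comm):
            """One distributed k-means update: assign locally, reduce globally."""
            k, dim = centroids.shape
            # Assign each local point to its nearest centroid.
            dists = np.linalg.norm(local_points[:, None, :] - centroids[None, :, :], axis=2)
            labels = np.argmin(dists, axis=1)
            # Local per-cluster sums and counts.
            sums = np.zeros((k, dim))
            counts = np.zeros(k)
            for j in range(k):
                members = local_points[labels == j]
                sums[j] = members.sum(axis=0)
                counts[j] = len(members)
            # Global reduction: every rank ends up with identical totals.
            global_sums = np.zeros_like(sums)
            global_counts = np.zeros_like(counts)
            comm.Allreduce(sums, global_sums, op=MPI.SUM)
            comm.Allreduce(counts, global_counts, op=MPI.SUM)
            new_centroids = centroids.copy()
            nonempty = global_counts > 0
            new_centroids[nonempty] = global_sums[nonempty] / global_counts[nonempty, None]
            return new_centroids

        # Run with e.g. "mpiexec -n 4 python kmeans_sketch.py".
        comm = MPI.COMM_WORLD
        rng = np.random.default_rng(comm.Get_rank())
        points = rng.normal(size=(1000, 3))           # this rank's share of the data
        centroids = np.array([[-1.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
        for _ in range(10):
            centroids = kmeans_step(points, centroids, comm)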

  12. Scalable still image coding based on wavelet

    NASA Astrophysics Data System (ADS)

    Yan, Yang; Zhang, Zhengbing

    2005-02-01

    Scalable image coding is an important objective of future image coding technologies. In this paper, we present a scalable image coding scheme based on the wavelet transform. This method uses the well-known EZW (Embedded Zerotree Wavelet) algorithm; we give a high-quality encoding to the ROI (region of interest) of the original image and a rough encoding to the rest. This method works well under limited memory conditions, as we encode the background region according to the memory capacity. In this way, we can store the encoded image in limited memory space without losing its main information. Simulation results show the method is effective.

  13. Validation of a Scalable Solar Sailcraft

    NASA Technical Reports Server (NTRS)

    Murphy, D. M.

    2006-01-01

    The NASA In-Space Propulsion (ISP) program sponsored intensive solar sail technology and systems design, development, and hardware demonstration activities over the past 3 years. Efforts to validate a scalable solar sail system by functional demonstration in relevant environments, together with test-analysis correlation activities on a scalable solar sail system have recently been successfully completed. A review of the program, with descriptions of the design, results of testing, and analytical model validations of component and assembly functional, strength, stiffness, shape, and dynamic behavior are discussed. The scaled performance of the validated system is projected to demonstrate the applicability to flight demonstration and important NASA road-map missions.

  14. Enabling New Visualization and Analysis Tools with a Kameleon-based Data Streaming Service

    NASA Astrophysics Data System (ADS)

    Berrios, D.; Rastaetter, L.; Maddox, M. M.

    2012-12-01

    The Community Coordinated Modeling Center (CCMC) at NASA Goddard Space Flight Center has developed a new data streaming service for space weather simulation data. Leveraging the capabilities of the Kameleon-Plus data access and interpolation library, it provides access to visualizations of high resolution simulations with an easy to use, well defined API. 2D slices, isosurfaces, fieldlines, and volumetric datacubes can be requested in multiple formats. We present the motivations for designing such a service, the capabilities of the underlying Kameleon-Plus library, the new data streaming service, and demo experimental tools using this service.

  15. Medusa: A Scalable MR Console Using USB

    PubMed Central

    Stang, Pascal P.; Conolly, Steven M.; Santos, Juan M.; Pauly, John M.; Scott, Greig C.

    2012-01-01

    MRI pulse sequence consoles typically employ closed proprietary hardware, software, and interfaces, making difficult any adaptation for innovative experimental technology. Yet MRI systems research is trending to higher channel count receivers, transmitters, gradient/shims, and unique interfaces for interventional applications. Customized console designs are now feasible for researchers with modern electronic components, but high data rates, synchronization, scalability, and cost present important challenges. Implementing large multi-channel MR systems with efficiency and flexibility requires a scalable modular architecture. With Medusa, we propose an open system architecture using the Universal Serial Bus (USB) for scalability, combined with distributed processing and buffering to address the high data rates and strict synchronization required by multi-channel MRI. Medusa uses a modular design concept based on digital synthesizer, receiver, and gradient blocks, in conjunction with fast programmable logic for sampling and synchronization. Medusa is a form of synthetic instrument, being reconfigurable for a variety of medical/scientific instrumentation needs. The Medusa distributed architecture, scalability, and data bandwidth limits are presented, and its flexibility is demonstrated in a variety of novel MRI applications. PMID:21954200

  16. Scalable Domain Decomposed Monte Carlo Particle Transport

    SciTech Connect

    O'Brien, Matthew Joseph

    2013-12-05

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.

  17. Scalable microreactors and methods for using same

    DOEpatents

    Lawal, Adeniyi; Qian, Dongying

    2010-03-02

    The present invention provides a scalable microreactor comprising a multilayered reaction block having alternating reaction plates and heat exchanger plates that have a plurality of microchannels; a multilaminated reactor input manifold, a collecting reactor output manifold, a heat exchange input manifold and a heat exchange output manifold. The present invention also provides methods of using the microreactor for multiphase chemical reactions.

  18. Medusa: a scalable MR console using USB.

    PubMed

    Stang, Pascal P; Conolly, Steven M; Santos, Juan M; Pauly, John M; Scott, Greig C

    2012-02-01

    Magnetic resonance imaging (MRI) pulse sequence consoles typically employ closed proprietary hardware, software, and interfaces, making difficult any adaptation for innovative experimental technology. Yet MRI systems research is trending to higher channel count receivers, transmitters, gradient/shims, and unique interfaces for interventional applications. Customized console designs are now feasible for researchers with modern electronic components, but high data rates, synchronization, scalability, and cost present important challenges. Implementing large multichannel MR systems with efficiency and flexibility requires a scalable modular architecture. With Medusa, we propose an open system architecture using the universal serial bus (USB) for scalability, combined with distributed processing and buffering to address the high data rates and strict synchronization required by multichannel MRI. Medusa uses a modular design concept based on digital synthesizer, receiver, and gradient blocks, in conjunction with fast programmable logic for sampling and synchronization. Medusa is a form of synthetic instrument, being reconfigurable for a variety of medical/scientific instrumentation needs. The Medusa distributed architecture, scalability, and data bandwidth limits are presented, and its flexibility is demonstrated in a variety of novel MRI applications.

  19. Scalable IP switching based on optical interconnect

    NASA Astrophysics Data System (ADS)

    Luo, Zhixiang; Cao, Mingcui; Liu, Erwu

    2000-10-01

    IP traffic on the Internet and enterprise networks has been growing exponentially in the last several years, and much attention is being focused on the use of IP multicast for real-time multimedia applications. The current soft and general-purpose CPU-based routers face great stress since they have great latency and low forwarding speeds. Based on the ASICs, layer 2 switching provides high-speed packet forwarding. Integrating high-speed of Layer 2 switching with the flexibility of Layer 3 routing, Layer 3 switching (IP switching) has been put forward in order to avoid the performance bottleneck associated with Layer 3 forwarding. In this paper, we present a prototype system of a scalable IP switching based on scalable ATM switching fabric and optical interconnect. The IP switching system mainly consists of the input/output interface unit, scalable ATM switching fabric and IP control component. Optical interconnects between the input fan-out stage and the interconnect stage, also the interconnect stage and the output concentration stage provide high-speed data paths. And the interconnect stage is composed of 16 X 16 CMOS-SEED ATM switching modules. With 64 ports of OC-12 interface, the maximum throughput of the prototype system is about 20 million packets per second (MPPS) for 256 bytes average packet length, and the packet loss ratio is less than 10e-9. Benefiting from the scalable architecture and the optical interconnect, this IP switching system can easily scale to very large network size.

  20. Novel Scalable 3-D MT Inverse Solver

    NASA Astrophysics Data System (ADS)

    Kuvshinov, A. V.; Kruglyakov, M.; Geraskin, A.

    2016-12-01

    We present a new, robust and fast, three-dimensional (3-D) magnetotelluric (MT) inverse solver. As a forward modelling engine a highly-scalable solver extrEMe [1] is used. The (regularized) inversion is based on an iterative gradient-type optimization (quasi-Newton method) and exploits adjoint sources approach for fast calculation of the gradient of the misfit. The inverse solver is able to deal with highly detailed and contrasting models, allows for working (separately or jointly) with any type of MT (single-site and/or inter-site) responses, and supports massive parallelization. Different parallelization strategies implemented in the code allow for optimal usage of available computational resources for a given problem set up. To parameterize an inverse domain a mask approach is implemented, which means that one can merge any subset of forward modelling cells in order to account for (usually) irregular distribution of observation sites. We report results of 3-D numerical experiments aimed at analysing the robustness, performance and scalability of the code. In particular, our computational experiments carried out at different platforms ranging from modern laptops to high-performance clusters demonstrate practically linear scalability of the code up to thousands of nodes. 1. Kruglyakov, M., A. Geraskin, A. Kuvshinov, 2016. Novel accurate and scalable 3-D MT forward solver based on a contracting integral equation method, Computers and Geosciences, in press.
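
    The inversion loop described here, a quasi-Newton iteration driven by gradients of the misfit, can be mimicked on a toy problem with SciPy's L-BFGS implementation. The quadratic misfit and analytic gradient below merely stand in for a real 3-D MT misfit and its adjoint-computed gradient; the operator, data, and regularization weight are arbitrary illustrative choices.

        import numpy as np
        from scipy.optimize import minimize

        # Toy Tikhonov-regularized "misfit": ||A m - d||^2 + lam * ||m||^2.
        rng = np.random.default_rng(42)
        A = rng.normal(size=(50, 20))      # stand-in for the forward operator
        d = rng.normal(size=50)            # stand-in for observed MT responses
        lam = 0.1

        def misfit(m):
            r = A @ m - d
            return float(r @ r + lam * m @ m)

        def gradient(m):
            # In a real MT inversion this gradient would come from adjoint sources.
            return 2.0 * (A.T @ (A @ m - d) + lam * m)

        result = minimize(misfit, x0=np.zeros(20), jac=gradient, method="L-BFGS-B")
        print(result.fun, result.nit)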

  1. GPView: A program for wave function analysis and visualization.

    PubMed

    Shi, Tian; Wang, Ping

    2016-11-01

    In this manuscript, we will introduce a recently developed program GPView, which can be used for wave function analysis and visualization. The wave function analysis module can calculate and generate 3D cubes for various types of molecular orbitals and electron density of electronic excited states, such as natural orbitals, natural transition orbitals, natural difference orbitals, hole-particle density, detachment-attachment density and transition density. The visualization module of GPView can display molecular and electronic (iso-surfaces) structures. It is also able to animate single trajectories of molecular dynamics and non-adiabatic excited state molecular dynamics using the data stored in existing files. There are also other utilities to extract and process the output of quantum chemistry calculations. GPView provides a full graphical user interface (GUI), so it is very easy to use. It is available from the website http://life-tp.com/gpview.

  2. Open source Scalable Vector Graphics components for enabling GIS in web-based public health surveillance systems.

    PubMed

    Kamadjeu, Raoul; Tolentino, Herman

    2006-01-01

    Geographic Information Systems (GIS) are useful for visual analysis and sharing of spatial data. Open standards like Scalable Vector Graphics (SVG) provide viable alternatives to overcome traditional GIS limitations in resource-constrained settings (low-bandwidth, cost and manpower barriers). This project describes the design and implementation of reusable SVG components for managing geographic information for web-based public health surveillance systems.

  3. A tool for intraoperative visualization of registration results

    NASA Astrophysics Data System (ADS)

    King, Franklin; Lasso, Andras; Pinter, Csaba; Fichtinger, Gabor

    2014-03-01

    PURPOSE: Validation of image registration algorithms is frequently accomplished by the visual inspection of the resulting linear or deformable transformation due to the lack of ground truth information. Visualization of transformations produced by image registration algorithms during image-guided interventions allows a clinician to evaluate the accuracy of the resulting transformation. Software packages that perform the visualization of transformations exist, but are not part of a clinically usable software application. We present a tool that visualizes both linear and deformable transformations and is integrated in an open-source software application framework suited for intraoperative use and general evaluation of registration algorithms. METHODS: A choice of six different modes is available for visualization of a transform. Glyph visualization mode uses oriented and scaled glyphs, such as arrows, to represent the displacement field in 3D, whereas glyph slice visualization mode creates arrows that can be seen as a 2D vector field. Grid visualization mode creates deformed grids shown in 3D, whereas grid slice visualization mode creates a series of 2D grids. Block visualization mode creates a deformed bounding box of the warped volume. Finally, contour visualization mode creates isosurfaces and isolines that visualize the magnitude of displacement across a volume. The application 3D Slicer was chosen as the platform for the transform visualizer tool. 3D Slicer is a comprehensive open-source application framework developed for medical image computing and used for intra-operative registration. RESULTS: The transform visualizer tool fulfilled the requirements for quick evaluation of intraoperative image registrations. Visualizations were generated in 3D Slicer with little computation time on realistic datasets. It is freely available as an extension for 3D Slicer. CONCLUSION: A tool for the visualization of displacement fields was created and integrated into 3D Slicer.

  4. Visualizing Higher Order Finite Elements: FY05 Yearly Report.

    SciTech Connect

    Thompson, David; Pebay, Philippe Pierre

    2005-11-01

    This report contains an algorithm for decomposing higher-order finite elements into regions appropriate for isosurfacing and proves the conditions under which the algorithm will terminate. Finite elements are used to create piecewise polynomial approximants to the solution of partial differential equations for which no analytical solution exists. These polynomials represent fields such as pressure, stress, and momentum. In the past, these polynomials have been linear in each parametric coordinate. Each polynomial coefficient must be uniquely determined by a simulation, and these coefficients are called degrees of freedom. When there are not enough degrees of freedom, simulations will typically fail to produce a valid approximation to the solution. Recent work has shown that increasing the number of degrees of freedom by increasing the order of the polynomial approximation (instead of increasing the number of finite elements, each of which has its own set of coefficients) can allow some types of simulations to produce a valid approximation with many fewer degrees of freedom than increasing the number of finite elements alone. However, once the simulation has determined the values of all the coefficients in a higher-order approximant, tools do not exist for visual inspection of the solution. This report focuses on a technique for the visual inspection of higher-order finite element simulation results based on decomposing each finite element into simplicial regions where existing visualization algorithms such as isosurfacing will work. The requirements of the isosurfacing algorithm are enumerated and related to the places where the partial derivatives of the polynomial become zero. The original isosurfacing algorithm is then applied to each of these regions in turn. The authors would like to thank David Day and Louis Romero for their insight into polynomial system solvers and the LDRD Senior Council for the opportunity to pursue this research.

  5. A Scalability Model for ECS's Data Server

    NASA Technical Reports Server (NTRS)

    Menasce, Daniel A.; Singhal, Mukesh

    1998-01-01

    This report presents in four chapters a model for the scalability analysis of the Data Server subsystem of the Earth Observing System Data and Information System (EOSDIS) Core System (ECS). The model analyzes if the planned architecture of the Data Server will support an increase in the workload with the possible upgrade and/or addition of processors, storage subsystems, and networks. The approaches in the report include a summary of the architecture of ECS's Data server as well as a high level description of the Ingest and Retrieval operations as they relate to ECS's Data Server. This description forms the basis for the development of the scalability model of the data server and the methodology used to solve it.

  6. Scalable coherent interface: Links to the future

    SciTech Connect

    Gustavson, D.B. ); Kristiansen, E. )

    1991-11-01

    Now that the Scalable Coherent Interface (SCI) has solved the bandwidth problem, what can we use it for? SCI was developed to support closely coupled multiprocessors and their caches in a distributed shared-memory environment, but its scalability and the efficient generality of its architecture make it work very well over a wide range of applications. It can replace a local area network for connecting workstations on a campus. It can be a powerful I/O channel for a supercomputer. It can be the processor-cache-memory-I/O connection in a highly parallel computer. It can gather data from enormous particle detectors and distribute it among thousands of processors. It can connect a desktop microprocessor to memory chips a few millimeters away, disk drives a few meters away, and servers a few kilometers away.

  7. Scalable coherent interface: Links to the future

    SciTech Connect

    Gustavson, D.B.; Kristiansen, E.

    1991-11-01

    Now that the Scalable Coherent Interface (SCI) has solved the bandwidth problem, what can we use it for? SCI was developed to support closely coupled multiprocessors and their caches in a distributed shared-memory environment, but its scalability and the efficient generality of its architecture make it work very well over a wide range of applications. It can replace a local area network for connecting workstations on a campus. It can be a powerful I/O channel for a supercomputer. It can be the processor-cache-memory-I/O connection in a highly parallel computer. It can gather data from enormous particle detectors and distribute it among thousands of processors. It can connect a desktop microprocessor to memory chips a few millimeters away, disk drives a few meters away, and servers a few kilometers away.

  8. Scalable metadata environments (MDE): artistically impelled immersive environments for large-scale data exploration

    NASA Astrophysics Data System (ADS)

    West, Ruth G.; Margolis, Todd; Prudhomme, Andrew; Schulze, Jürgen P.; Mostafavi, Iman; Lewis, J. P.; Gossmann, Joachim; Singh, Rajvikram

    2014-02-01

    Scalable Metadata Environments (MDEs) are an artistic approach for designing immersive environments for large scale data exploration in which users interact with data by forming multiscale patterns that they alternately disrupt and reform. Developed and prototyped as part of an art-science research collaboration, we define an MDE as a 4D virtual environment structured by quantitative and qualitative metadata describing multidimensional data collections. Entire data sets (e.g. 10s of millions of records) can be visualized and sonified at multiple scales and at different levels of detail so they can be explored interactively in real-time within MDEs. They are designed to reflect similarities and differences in the underlying data or metadata such that patterns can be visually/aurally sorted in an exploratory fashion by an observer who is not familiar with the details of the mapping from data to visual, auditory or dynamic attributes. While many approaches for visual and auditory data mining exist, MDEs are distinct in that they utilize qualitative and quantitative data and metadata to construct multiple interrelated conceptual coordinate systems. These "regions" function as conceptual lattices for scalable auditory and visual representations within virtual environments computationally driven by multi-GPU CUDA-enabled fluid dynamics systems.

  9. Scalable descriptive and correlative statistics with Titan.

    SciTech Connect

    Thompson, David C.; Pebay, Philippe Pierre

    2008-12-01

    This report summarizes the existing statistical engines in VTK/Titan and presents the parallel versions thereof which have already been implemented. The ease of use of these parallel engines is illustrated by the means of C++ code snippets. Furthermore, this report justifies the design of these engines with parallel scalability in mind; then, this theoretical property is verified with test runs that demonstrate optimal parallel speed-up with up to 200 processors.

  10. Scalable, Flexible and Active Learning on Distributions

    DTIC Science & Technology

    2016-09-01

    Scalable, Flexible and Active Learning on Distributions. Dougal J. Sutherland, CMU-CS-16-128, September 2016, School of Computer Science, Carnegie Mellon... Keywords: kernel methods, approximate embeddings, statistical machine learning, nonparametric statistics, two-sample testing, active learning. Abstract: A wide range of machine learning problems, including astronomical

  11. Architecture Knowledge for Evaluating Scalable Databases

    DTIC Science & Technology

    2015-01-16

    Designing massively scalable, highly available big data systems is an immense challenge for software architects. Big ... commercial technologies that can provide the required quality attributes. In big data systems, the data management layer presents unique engineering ... features by software architects. QuABaseBD links the taxonomy to general quality attribute scenarios and design tactics for big data systems. This

  12. A Scalable Distributed Approach to Mobile Robot Vision

    NASA Technical Reports Server (NTRS)

    Kuipers, Benjamin; Browning, Robert L.; Gribble, William S.

    1997-01-01

    This paper documents our progress during the first year of work on our original proposal entitled 'A Scalable Distributed Approach to Mobile Robot Vision'. We are pursuing a strategy for real-time visual identification and tracking of complex objects which does not rely on specialized image-processing hardware. In this system perceptual schemas represent objects as a graph of primitive features. Distributed software agents identify and track these features, using variable-geometry image subwindows of limited size. Active control of imaging parameters and selective processing makes simultaneous real-time tracking of many primitive features tractable. Perceptual schemas operate independently from the tracking of primitive features, so that real-time tracking of a set of image features is not hurt by latency in recognition of the object that those features make up. The architecture allows semantically significant features to be tracked with limited expenditure of computational resources, and allows the visual computation to be distributed across a network of processors. Early experiments are described which demonstrate the usefulness of this formulation, followed by a brief overview of our more recent progress (after the first year).

  13. DISP: Optimizations towards Scalable MPI Startup

    SciTech Connect

    Fu, Huansong; Pophale, Swaroop S; Gorentla Venkata, Manjunath; Yu, Weikuan

    2016-01-01

    Despite the popularity of MPI for high performance computing, the startup of MPI programs faces a scalability challenge as both the execution time and memory consumption increase drastically at scale. We have examined this problem using the collective modules of Cheetah and Tuned in Open MPI as representative implementations. Previous improvements for collectives have focused on algorithmic advances and hardware off-load. In this paper, we examine the startup cost of the collective module within a communicator and explore various techniques to improve its efficiency and scalability. Accordingly, we have developed a new scalable startup scheme with three internal techniques, namely Delayed Initialization, Module Sharing and Prediction-based Topology Setup (DISP). Our DISP scheme greatly benefits the collective initialization of the Cheetah module. At the same time, it helps boost the performance of non-collective initialization in the Tuned module. We evaluate the performance of our implementation on Titan supercomputer at ORNL with up to 4096 processes. The results show that our delayed initialization can speed up the startup of Tuned and Cheetah by an average of 32.0% and 29.2%, respectively, our module sharing can reduce the memory consumption of Tuned and Cheetah by up to 24.1% and 83.5%, respectively, and our prediction-based topology setup can speed up the startup of Cheetah by up to 80%.

  14. Pursuing Scalability for hypre's Conceptual Interfaces

    SciTech Connect

    Falgout, R D; Jones, J E; Yang, U M

    2004-07-21

    The software library hypre provides high performance preconditioners and solvers for the solution of large, sparse linear systems on massively parallel computers as well as conceptual interfaces that allow users to access the library in the way they naturally think about their problems. These interfaces include a stencil-based structured interface (Struct); a semi-structured interface (semiStruct), which is appropriate for applications that are mostly structured, e.g. block structured grids, composite grids in structured adaptive mesh refinement applications, and overset grids; a finite element interface (FEI) for unstructured problems, as well as a conventional linear-algebraic interface (IJ). It is extremely important to provide an efficient, scalable implementation of these interfaces in order to support the scalable solvers of the library, especially when using tens of thousands of processors. This paper describes the data structures, parallel implementation and resulting performance of the IJ, Struct and semiStruct interfaces. It investigates their scalability, presents successes as well as pitfalls of some of the approaches and suggests ways of dealing with them.

  15. Planetary subsurface investigation by 3D visualization model .

    NASA Astrophysics Data System (ADS)

    Seu, R.; Catallo, C.; Tragni, M.; Abbattista, C.; Cinquepalmi, L.

    Subsurface data analysis and visualization represents one of the main aspects of Planetary Observation (i.e. the search for water or geological characterization). The data are collected by subsurface sounding radars carried as instruments on board deep space missions. These data are generally represented as 2D radargrams in the perspective of the spacecraft track and the z axis (perpendicular to the subsurface), but without direct correlation to other data acquisitions or knowledge about the planet. In many cases there are plenty of data from other sensors of the same mission, or from other missions, with high continuity in time and in space, especially around the scientific sites of interest (i.e. candidate landing areas or particularly interesting scientific sites). The 2D perspective is good for analysing single acquisitions and for performing detailed analysis on the returned echo, but it is of little use for comparing the very large datasets now available for many planets and moons of the solar system. A better approach is to carry out the analysis on a 3D visualization model generated from the entire stack of data. First of all, this approach allows one to navigate the subsurface in all directions and to analyse different sections and slices, or to navigate the iso-surfaces with respect to a value (or interval). The latter allows one to isolate one or more iso-surfaces and to remove, in the visualization, other data not relevant to the analysis; finally, it helps to identify underground 3D bodies. Another aspect is the need to link the on-ground data, such as imaging, to the underground data by geographical position and context field of view.

  16. Design and implementation of scalable tape archiver

    NASA Technical Reports Server (NTRS)

    Nemoto, Toshihiro; Kitsuregawa, Masaru; Takagi, Mikio

    1996-01-01

    In order to reduce costs, computer manufacturers try to use commodity parts as much as possible. Mainframes using proprietary processors are being replaced by high performance RISC microprocessor-based workstations, which are further being replaced by the commodity microprocessor used in personal computers. Highly reliable disks for mainframes are also being replaced by disk arrays, which are complexes of disk drives. In this paper we try to clarify the feasibility of a large scale tertiary storage system composed of 8-mm tape archivers utilizing robotics. In the near future, the 8-mm tape archiver will be widely used and become a commodity part, since recent rapid growth of multimedia applications requires much larger storage than disk drives can provide. We designed a scalable tape archiver which connects as many 8-mm tape archivers (element archivers) as possible. In the scalable archiver, robotics can exchange a cassette tape between two adjacent element archivers mechanically. Thus, we can build a large scalable archiver inexpensively. In addition, a sophisticated migration mechanism distributes frequently accessed tapes (hot tapes) evenly among all of the element archivers, which improves the throughput considerably. Even with the failures of some tape drives, the system dynamically redistributes hot tapes to the other element archivers which have live tape drives. Several kinds of specially tailored huge archivers are on the market, however, the 8-mm tape scalable archiver could replace them. To maintain high performance in spite of high access locality when a large number of archivers are attached to the scalable archiver, it is necessary to scatter frequently accessed cassettes among the element archivers and to use the tape drives efficiently. For this purpose, we introduce two cassette migration algorithms, foreground migration and background migration. Background migration transfers cassettes between element archivers to redistribute frequently accessed

  17. Scalable and balanced dynamic hybrid data assimilation

    NASA Astrophysics Data System (ADS)

    Kauranne, Tuomo; Amour, Idrissa; Gunia, Martin; Kallio, Kari; Lepistö, Ahti; Koponen, Sampsa

    2017-04-01

    Scalability of complex weather forecasting suites is dependent on the technical tools available for implementing highly parallel computational kernels, but to an equally large extent also on the dependence patterns between various components of the suite, such as observation processing, data assimilation and the forecast model. Scalability is a particular challenge for 4D variational assimilation methods that necessarily couple the forecast model into the assimilation process and subject this combination to an inherently serial quasi-Newton minimization process. Ensemble based assimilation methods are naturally more parallel, but large models force ensemble sizes to be small and that results in poor assimilation accuracy, somewhat akin to shooting with a shotgun in a million-dimensional space. The Variational Ensemble Kalman Filter (VEnKF) is an ensemble method that can attain the accuracy of 4D variational data assimilation with a small ensemble size. It achieves this by processing a Gaussian approximation of the current error covariance distribution, instead of a set of ensemble members, analogously to the Extended Kalman Filter EKF. Ensemble members are re-sampled every time a new set of observations is processed from a new approximation of that Gaussian distribution which makes VEnKF a dynamic assimilation method. After this a smoothing step is applied that turns VEnKF into a dynamic Variational Ensemble Kalman Smoother VEnKS. In this smoothing step, the same process is iterated with frequent re-sampling of the ensemble but now using past iterations as surrogate observations until the end result is a smooth and balanced model trajectory. In principle, VEnKF could suffer from similar scalability issues as 4D-Var. However, this can be avoided by isolating the forecast model completely from the minimization process by implementing the latter as a wrapper code whose only link to the model is calling for many parallel and totally independent model runs, all of them
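
    VEnKF itself propagates a Gaussian approximation of the error covariance rather than a plain ensemble, but the flavour of ensemble-based analysis it builds on can be conveyed by the standard stochastic Ensemble Kalman Filter update with perturbed observations, sketched below in NumPy. This is a generic textbook EnKF step, not the authors' VEnKF or VEnKS algorithm; the toy state, observation operator and error covariance are our choices.

        import numpy as np

        def enkf_analysis(ensemble, obs, H, R, rng):
            """Stochastic EnKF analysis step with perturbed observations.

            ensemble : (n_state, n_members) forecast ensemble
            obs      : (n_obs,) observation vector
            H        : (n_obs, n_state) linear observation operator
            R        : (n_obs, n_obs) observation-error covariance
            """
            n_state, m = ensemble.shape
            mean = ensemble.mean(axis=1, keepdims=True)
            anomalies = ensemble - mean
            P = anomalies @ anomalies.T / (m - 1)              # sample covariance
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)       # Kalman gain
            # One perturbed observation draw per ensemble member.
            Y = obs[:, None] + rng.multivariate_normal(np.zeros(len(obs)), R, size=m).T
            return ensemble + K @ (Y - H @ ensemble)

        # Toy usage: 3-variable state, 20 members, observing the first two variables.
        rng = np.random.default_rng(1)
        forecast = rng.normal(size=(3, 20))
        H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
        R = 0.1 * np.eye(2)
        analysis = enkf_analysis(forecast, obs=np.array([0.5, -0.2]), H=H, R=R, rng=rng)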

  18. ParaText : scalable solutions for processing and searching very large document collections : final LDRD report.

    SciTech Connect

    Crossno, Patricia Joyce; Dunlavy, Daniel M.; Stanton, Eric T.; Shead, Timothy M.

    2010-09-01

    This report is a summary of the accomplishments of the 'Scalable Solutions for Processing and Searching Very Large Document Collections' LDRD, which ran from FY08 through FY10. Our goal was to investigate scalable text analysis; specifically, methods for information retrieval and visualization that could scale to extremely large document collections. Towards that end, we designed, implemented, and demonstrated a scalable framework for text analysis - ParaText - as a major project deliverable. Further, we demonstrated the benefits of using visual analysis in text analysis algorithm development, improved performance of heterogeneous ensemble models in data classification problems, and the advantages of information theoretic methods in user analysis and interpretation in cross language information retrieval. The project involved 5 members of the technical staff and 3 summer interns (including one who worked two summers). It resulted in a total of 14 publications, 3 new software libraries (2 open source and 1 internal to Sandia), several new end-user software applications, and over 20 presentations. Several follow-on projects have already begun or will start in FY11, with additional projects currently in proposal.

  19. Integration of an intelligent systems behavior simulator and a scalable soldier-machine interface

    NASA Astrophysics Data System (ADS)

    Johnson, Tony; Manteuffel, Chris; Brewster, Benjamin; Tierney, Terry

    2007-04-01

    As the Army's Future Combat Systems (FCS) introduce emerging technologies and new force structures to the battlefield, soldiers will increasingly face new challenges in workload management. The next generation warfighter will be responsible for effectively managing robotic assets in addition to performing other missions. Studies of future battlefield operational scenarios involving the use of automation, including the specification of existing and proposed technologies, will provide significant insight into potential problem areas regarding soldier workload. The US Army Tank Automotive Research, Development, and Engineering Center (TARDEC) is currently executing an Army technology objective program to analyze and evaluate the effect of automated technologies and their associated control devices with respect to soldier workload. The Human-Robotic Interface (HRI) Intelligent Systems Behavior Simulator (ISBS) is a human performance measurement simulation system that allows modelers to develop constructive simulations of military scenarios with various deployments of interface technologies in order to evaluate operator effectiveness. One such interface is TARDEC's Scalable Soldier-Machine Interface (SMI). The scalable SMI provides a configurable machine interface application that is capable of adapting to several hardware platforms by recognizing the physical space limitations of the display device. This paper describes the integration of the ISBS and Scalable SMI applications, which will ultimately benefit both systems. The ISBS will be able to use the Scalable SMI to visualize the behaviors of virtual soldiers performing HRI tasks, such as route planning, and the scalable SMI will benefit from stimuli provided by the ISBS simulation environment. The paper describes the background of each system and details of the system integration approach.

  20. VEEVVIE: Visual Explorer for Empirical Visualization, VR and Interaction Experiments.

    PubMed

    Papadopoulos, C; Gutenko, I; Kaufman, A E

    2016-01-01

    Empirical, hypothesis-driven, experimentation is at the heart of the scientific discovery process and has become commonplace in human-factors related fields. To enable the integration of visual analytics in such experiments, we introduce VEEVVIE, the Visual Explorer for Empirical Visualization, VR and Interaction Experiments. VEEVVIE is comprised of a back-end ontology which can model several experimental designs encountered in these fields. This formalization allows VEEVVIE to capture experimental data in a query-able form and makes it accessible through a front-end interface. This front-end offers several multi-dimensional visualization widgets with built-in filtering and highlighting functionality. VEEVVIE is also expandable to support custom experimental measurements and data types through a plug-in visualization widget architecture. We demonstrate VEEVVIE through several case studies of visual analysis, performed on the design and data collected during an experiment on the scalability of high-resolution, immersive, tiled-display walls.

  1. Scalable resource management in high performance computers.

    SciTech Connect

    Frachtenberg, E.; Petrini, F.; Fernandez Peinador, J.; Coll, S.

    2002-01-01

    Clusters of workstations have emerged as an important platform for building cost-effective, scalable and highly-available computers. Although many hardware solutions are available today, the largest challenge in making large-scale clusters usable lies in the system software. In this paper we present STORM, a resource management tool designed to provide scalability, low overhead and the flexibility necessary to efficiently support and analyze a wide range of job scheduling algorithms. STORM achieves these feats by closely integrating the management daemons with the low-level features that are common in state-of-the-art high-performance system area networks. The architecture of STORM is based on three main technical innovations. First, a sizable part of the scheduler runs in the thread processor located on the network interface. Second, we use hardware collectives that are highly scalable both for implementing control heartbeats and to distribute the binary of a parallel job in near-constant time, irrespective of job and machine sizes. Third, we use an I/O bypass protocol that allows fast data movements from the file system to the communication buffers in the network interface and vice versa. The experimental results show that STORM can launch a job with a binary of 12 MB on a 64-processor/32-node cluster in less than 0.25 sec on an empty network, in less than 0.45 sec when all the processors are busy computing other jobs, and in less than 0.65 sec when the network is flooded with background traffic. This paper provides experimental and analytical evidence that these results scale to a much larger number of nodes. To the best of our knowledge, STORM is at least two orders of magnitude faster than existing production schedulers in launching jobs, performing resource management tasks and gang scheduling.

  2. Tip-Based Nanofabrication for Scalable Manufacturing

    DOE PAGES

    Hu, Huan; Kim, Hoe; Somnath, Suhas

    2017-03-16

    Tip-based nanofabrication (TBN) is a family of emerging nanofabrication techniques that use a nanometer scale tip to fabricate nanostructures. Here in this review, we first introduce the history of the TBN and the technology development. We then briefly review various TBN techniques that use different physical or chemical mechanisms to fabricate features and discuss some of the state-of-the-art techniques. Subsequently, we focus on those TBN methods that have demonstrated potential to scale up the manufacturing throughput. Finally, we discuss several research directions that are essential for making TBN a scalable nano-manufacturing technology.

  3. Scalable Unix tools on parallel processors

    SciTech Connect

    Gropp, W.; Lusk, E.

    1994-12-31

    The introduction of parallel processors that run a separate copy of Unix on each processor has introduced new problems in managing the user's environment. This paper discusses some generalizations of common Unix commands for managing files (e.g., ls) and processes (e.g., ps) that are convenient and scalable. These basic tools, just like their Unix counterparts, are text-based. We also discuss a way to use these with a graphical user interface (GUI). Some notes on the implementation are provided. Prototypes of these commands are publicly available.
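
    A minimal sketch of the idea behind such a scalable ps: fan the ordinary Unix command out to all nodes concurrently and merge the text output. The host names are hypothetical, and this is not the tool distributed with the paper:

    ```python
    #!/usr/bin/env python3
    # Illustrative "parallel ps": run the ordinary Unix command on every node over
    # ssh, concurrently, and merge the text output (host names are hypothetical).
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    HOSTS = ["node01", "node02", "node03"]

    def remote_ps(host: str) -> str:
        result = subprocess.run(["ssh", host, "ps", "-ef"],
                                capture_output=True, text=True, timeout=30)
        return "\n".join(f"{host}: {line}" for line in result.stdout.splitlines())

    with ThreadPoolExecutor(max_workers=len(HOSTS)) as pool:
        for report in pool.map(remote_ps, HOSTS):
            print(report)
    ```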

  4. Overcoming Scalability Challenges for Tool Daemon Launching

    SciTech Connect

    Ahn, D H; Arnold, D C; de Supinski, B R; Lee, G L; Miller, B P; Schulz, M

    2008-02-15

    Many tools that target parallel and distributed environments must co-locate a set of daemons with the distributed processes of the target application. However, efficient and portable deployment of these daemons on large scale systems is an unsolved problem. We overcome this gap with LaunchMON, a scalable, robust, portable, secure, and general purpose infrastructure for launching tool daemons. Its API allows tool builders to identify all processes of a target job, launch daemons on the relevant nodes and control daemon interaction. Our results show that LaunchMON scales to very large daemon counts and substantially enhances performance over existing ad hoc mechanisms.

  5. First experience with the scalable coherent interface

    SciTech Connect

    Mueller, H. (ECP Division); RD24 Collaboration

    1994-02-01

    The research project RD24 is studying applications of the Scalable Coherent Interface (IEEE-1596) standard for the large hadron collider (LHC). First SCI node chips from Dolphin were used to demonstrate the use and functioning of SCI's packet protocols and to measure data rates. The authors present results from a first, two-node SCI ringlet at CERN, based on a R3000 RISC processor node and DMA node on a MC68040 processor bus. A diagnostic link analyzer monitors the SCI packet protocols up to full link bandwidth. In its second phase, RD24 will build a first implementation of a multi-ringlet SCI data merger.

  6. Scalable networks for discrete quantum random walks

    SciTech Connect

    Fujiwara, S.; Osaki, H.; Buluta, I.M.; Hasegawa, S.

    2005-09-15

    Recently, quantum random walks (QRWs) have been thoroughly studied in order to develop new quantum algorithms. In this paper we propose scalable quantum networks for discrete QRWs on circles, lines, and also in higher dimensions. In our method the information about the position of the walker is stored in a quantum register and the network consists of only one-qubit rotation and (controlled)ⁿ-NOT gates; therefore it is purely computational and independent of the physical implementation. As an example, we describe the experimental realization in an ion-trap system.
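
    For concreteness, the sketch below simulates a discrete quantum walk on a circle with a position register and a single coin qubit; it reproduces the effect of the coin rotation and the conditional shift numerically on a state vector, rather than as the proposed gate network:

    ```python
    # Direct numerical simulation of a discrete quantum walk on a circle of N sites
    # (illustrative; not the one-qubit-rotation / controlled-NOT network itself).
    import numpy as np

    N, steps = 8, 10
    psi = np.zeros((2, N), dtype=complex)            # psi[coin, position]
    psi[0, 0], psi[1, 0] = 1 / np.sqrt(2), 1j / np.sqrt(2)

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # one-qubit "coin" rotation

    for _ in range(steps):
        psi = H @ psi                                # toss the coin at every site
        psi[0] = np.roll(psi[0], -1)                 # coin 0: step left  (mod N)
        psi[1] = np.roll(psi[1], +1)                 # coin 1: step right (mod N)

    print(np.round((np.abs(psi) ** 2).sum(axis=0), 3))   # position distribution
    ```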

  7. Towards a Scalable, Biomimetic, Antibacterial Coating

    NASA Astrophysics Data System (ADS)

    Dickson, Mary Nora

    Corneal afflictions are the second leading cause of blindness worldwide. When a corneal transplant is unavailable or contraindicated, an artificial cornea device is the only chance to save sight. Bacterial or fungal biofilm build up on artificial cornea devices can lead to serious complications including the need for systemic antibiotic treatment and even explantation. As a result, much emphasis has been placed on anti-adhesion chemical coatings and antibiotic-leaching coatings. These methods are not long-lasting, and microorganisms can eventually circumvent these measures. Thus, I have developed a surface topographical antimicrobial coating. Various surface structures including rough surfaces, superhydrophobic surfaces, and the natural surfaces of insects' wings and sharks' skin are promising anti-biofilm candidates; however, none meet the criteria necessary for implementation on the surface of an artificial cornea device. In this thesis I: 1) developed scalable fabrication protocols for a library of biomimetic nanostructured polymer surfaces, 2) assessed the potential of these poly(methyl methacrylate) nanopillars to kill or prevent formation of biofilm by E. coli bacteria and species of Pseudomonas and Staphylococcus bacteria and improved upon a proposed mechanism for the rupture of Gram-negative bacterial cell walls, 3) developed a scalable, commercially viable method for producing antibacterial nanopillars on a curved, PMMA artificial cornea device and 4) developed scalable fabrication protocols for implantation of antibacterial nanopatterned surfaces on the surfaces of thermoplastic polyurethane materials, commonly used in catheter tubings. This project constitutes a first step towards fabrication of the first entirely PMMA artificial cornea device. The major finding of this work is that by precisely controlling the topography of a polymer surface at the nano-scale, we can kill adherent bacteria and prevent biofilm formation of certain pathogenic bacteria

  8. A Scalable High-Throughput Chemical Synthesizer

    PubMed Central

    Livesay, Eric A.; Liu, Ying-Horng; Luebke, Kevin J.; Irick, Joel; Belosludtsev, Yuri; Rayner, Simon; Balog, Robert; Johnston, Stephen Albert

    2002-01-01

    A machine that employs a novel reagent delivery technique for biomolecular synthesis has been developed. This machine separates the addressing of individual synthesis sites from the actual process of reagent delivery by using masks placed over the sites. Because of this separation, this machine is both cost-effective and scalable, and thus the time required to synthesize 384 or 1536 unique biomolecules is very nearly the same. Importantly, the mask design allows scaling of the number of synthesis sites without the addition of new valving. Physical and biological comparisons between DNA made on a commercially available synthesizer and this unit show that it produces DNA of similar quality. PMID:12466300

  9. Scalable Synthesis of (−)-Thapsigargin

    PubMed Central

    2016-01-01

    Total syntheses of the complex, highly oxygenated sesquiterpenes thapsigargin (1) and nortrilobolide (2) are presented. Access to analogues of these promising bioactive natural products has been limited to tedious isolation and semisynthetic efforts. Elegant prior total syntheses demonstrated the feasibility of creating these entities in 36–42 step processes. The currently reported route proceeds in a scalable and more concise fashion by utilizing two-phase terpene synthesis logic. Salient features of the work include application of the classic photosantonin rearrangement and precisely choreographed installation of the multiple oxygenations present on the guaianolide skeleton. PMID:28149952

  10. Scalable Optical-Fiber Communication Networks

    NASA Technical Reports Server (NTRS)

    Chow, Edward T.; Peterson, John C.

    1993-01-01

    Scalable arbitrary fiber extension network (SAFEnet) is conceptual fiber-optic communication network passing digital signals among variety of computers and input/output devices at rates from 200 Mb/s to more than 100 Gb/s. Intended for use with very-high-speed computers and other data-processing and communication systems in which message-passing delays must be kept short. Inherent flexibility makes it possible to match performance of network to computers by optimizing configuration of interconnections. In addition, interconnections made redundant to provide tolerance to faults.

  11. Young Investigator Program: Modular Paradigm for Scalable Quantum Information

    DTIC Science & Technology

    2016-03-04

    Final report AFRL-AFOSR-VA-TR-2016-0120, "Young Investigator Program: Modular Paradigm for Scalable Quantum Information," Paola Cappellaro, Massachusetts Institute of Technology, contract FA9550-12-1-0292. The goal of "Modular Paradigm for Scalable Quantum Information" was to address some of the challenges facing the field of quantum information science (QIS). The

  12. SCIMITAR: Scalable Stream-Processing for Sensor Information Brokering

    DTIC Science & Technology

    2013-11-01

    paradigms, one might consider using any of the highly scalable batched Map-Reduce technologies as, for example, implemented in Hadoop [10]. Although...extremely scalable for information processing, this approach cannot provide a scalable, low-latency approach to information. Hadoop needs to register...information in the Hadoop NameNode service, and then read from disk for any brokering function that could be supported by Hadoop. Whereas successful

  13. An Open Infrastructure for Scalable, Reconfigurable Analysis

    SciTech Connect

    de Supinski, B R; Fowler, R; Gamblin, T; Mueller, F; Ratn, P; Schulz, M

    2008-05-15

    Petascale systems will have hundreds of thousands of processor cores so their applications must be massively parallel. Effective use of petascale systems will require efficient interprocess communication through memory hierarchies and complex network topologies. Tools to collect and analyze detailed data about this communication would facilitate its optimization. However, several factors complicate tool design. First, large-scale runs on petascale systems will be a precious commodity, so scalable tools must have almost no overhead. Second, the volume of performance data from petascale runs could easily overwhelm hand analysis and, thus, tools must collect only data that is relevant to diagnosing performance problems. Analysis must be done in-situ, when available processing power is proportional to the data. We describe a tool framework that overcomes these complications. Our approach allows application developers to combine existing techniques for measurement, analysis, and data aggregation to develop application-specific tools quickly. Dynamic configuration enables application developers to select exactly the measurements needed and generic components support scalable aggregation and analysis of this data with little additional effort.

  14. Towards Scalable Graph Computation on Mobile Devices

    PubMed Central

    Chen, Yiqi; Lin, Zhiyuan; Pienta, Robert; Kahng, Minsuk; Chau, Duen Horng

    2015-01-01

    Mobile devices have become increasingly central to our everyday activities, due to their portability, multi-touch capabilities, and ever-improving computational power. Such attractive features have spurred research interest in leveraging mobile devices for computation. We explore a novel approach that aims to use a single mobile device to perform scalable graph computation on large graphs that do not fit in the device's limited main memory, opening up the possibility of performing on-device analysis of large datasets, without relying on the cloud. Based on the familiar memory mapping capability provided by today's mobile operating systems, our approach to scale up computation is powerful and intentionally kept simple to maximize its applicability across the iOS and Android platforms. Our experiments demonstrate that an iPad mini can perform fast computation on large real graphs with as many as 272 million edges (Google+ social graph), at a speed that is only a few times slower than a 13″ Macbook Pro. Through creating a real world iOS app with this technique, we demonstrate the strong potential application for scalable graph computation on a single mobile device using our approach. PMID:25859564
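
    The memory-mapping trick at the core of this approach can be illustrated on a desktop: the edge list lives in a binary file and the operating system pages it in on demand, so the computation never needs the whole graph in RAM. The file name and the tiny demo graph below are made up for the example:

    ```python
    # Out-of-core degree counting over a memory-mapped edge list (illustrative).
    import numpy as np

    EDGE_FILE = "edges.bin"                              # hypothetical file of uint32 (src, dst) pairs

    # Write a tiny demo graph so the sketch is self-contained.
    np.array([[0, 1], [0, 2], [1, 2], [2, 0]], dtype=np.uint32).tofile(EDGE_FILE)

    # np.memmap relies on mmap, the same OS facility the paper exploits on mobile devices.
    edges = np.memmap(EDGE_FILE, dtype=np.uint32, mode="r").reshape(-1, 2)

    degree = np.zeros(int(edges.max()) + 1, dtype=np.int64)
    chunk = 1_000_000                                    # touch a bounded window of edges at a time
    for start in range(0, len(edges), chunk):
        np.add.at(degree, edges[start:start + chunk, 0], 1)

    print(degree)                                        # out-degrees: [2 1 1]
    ```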

  15. Scalable enantioselective total synthesis of taxanes

    NASA Astrophysics Data System (ADS)

    Mendoza, Abraham; Ishihara, Yoshihiro; Baran, Phil S.

    2012-01-01

    Taxanes form a large family of terpenes comprising over 350 members, the most famous of which is Taxol (paclitaxel), a billion-dollar anticancer drug. Here, we describe the first practical and scalable synthetic entry to these natural products via a concise preparation of (+)-taxa-4(5),11(12)-dien-2-one, which has a suitable functional handle with which to access more oxidized members of its family. This route enables a gram-scale preparation of the ‘parent’ taxane—taxadiene—which is the largest quantity of this naturally occurring terpene ever isolated or prepared in pure form. The characteristic 6-8-6 tricyclic system of the taxane family, containing a bridgehead alkene, is forged via a vicinal difunctionalization/Diels-Alder strategy. Asymmetry is introduced by means of an enantioselective conjugate addition that forms an all-carbon quaternary centre, from which all other stereocentres are fixed through substrate control. This study lays a critical foundation for a planned access to minimally oxidized taxane analogues and a scalable laboratory preparation of Taxol itself.

  16. A scalable and operationally simple radical trifluoromethylation

    PubMed Central

    Beatty, Joel W.; Douglas, James J.; Cole, Kevin P.; Stephenson, Corey R. J.

    2015-01-01

    The large number of reagents that have been developed for the synthesis of trifluoromethylated compounds is a testament to the importance of the CF3 group as well as the associated synthetic challenge. Current state-of-the-art reagents for appending the CF3 functionality directly are highly effective; however, their use on preparative scale has minimal precedent because they require multistep synthesis for their preparation, and/or are prohibitively expensive for large-scale application. For a scalable trifluoromethylation methodology, trifluoroacetic acid and its anhydride represent an attractive solution in terms of cost and availability; however, because of the exceedingly high oxidation potential of trifluoroacetate, previous endeavours to use this material as a CF3 source have required the use of highly forcing conditions. Here we report a strategy for the use of trifluoroacetic anhydride for a scalable and operationally simple trifluoromethylation reaction using pyridine N-oxide and photoredox catalysis to effect a facile decarboxylation to the CF3 radical. PMID:26258541

  17. Towards Scalable Graph Computation on Mobile Devices.

    PubMed

    Chen, Yiqi; Lin, Zhiyuan; Pienta, Robert; Kahng, Minsuk; Chau, Duen Horng

    2014-10-01

    Mobile devices have become increasingly central to our everyday activities, due to their portability, multi-touch capabilities, and ever-improving computational power. Such attractive features have spurred research interest in leveraging mobile devices for computation. We explore a novel approach that aims to use a single mobile device to perform scalable graph computation on large graphs that do not fit in the device's limited main memory, opening up the possibility of performing on-device analysis of large datasets, without relying on the cloud. Based on the familiar memory mapping capability provided by today's mobile operating systems, our approach to scale up computation is powerful and intentionally kept simple to maximize its applicability across the iOS and Android platforms. Our experiments demonstrate that an iPad mini can perform fast computation on large real graphs with as many as 272 million edges (Google+ social graph), at a speed that is only a few times slower than a 13″ Macbook Pro. Through creating a real world iOS app with this technique, we demonstrate the strong potential application for scalable graph computation on a single mobile device using our approach.

  18. An atmospheric visual analysis and exploration system.

    PubMed

    Song, Yuyan; Ye, Jing; Svakhine, Nikolai; Lasher-Trapp, Sonia; Baldwin, Mike; Ebert, David S

    2006-01-01

    Meteorological research involves the analysis of multi-field, multi-scale, and multi-source data sets. In order to better understand these data sets, models and measurements at different resolutions must be analyzed. Unfortunately, traditional atmospheric visualization systems only provide tools to view a limited number of variables and small segments of the data. These tools are often restricted to two-dimensional contour or vector plots or three-dimensional isosurfaces. The meteorologist must mentally synthesize the data from multiple plots to glean the information needed to produce a coherent picture of the weather phenomenon of interest. In order to provide better tools to meteorologists and reduce system limitations, we have designed an integrated atmospheric visual analysis and exploration system for interactive analysis of weather data sets. Our system allows for the integrated visualization of 1D, 2D, and 3D atmospheric data sets in common meteorological grid structures and utilizes a variety of rendering techniques. These tools provide meteorologists with new abilities to analyze their data and answer questions on regions of interest, ranging from physics-based atmospheric rendering to illustrative rendering containing particles and glyphs. In this paper, we will discuss the use and performance of our visual analysis for two important meteorological applications. The first application is warm rain formation in small cumulus clouds. Here, our three-dimensional, interactive visualization of modeled drop trajectories within spatially correlated fields from a cloud simulation has provided researchers with new insight. Our second application is improving and validating severe storm models, specifically the Weather Research and Forecasting (WRF) model. This is done through correlative visualization of WRF model and experimental Doppler storm data.

  19. Designing Scalable PGAS Communication Subsystems on Cray Gemini Interconnect

    SciTech Connect

    Vishnu, Abhinav; Daily, Jeffrey A.; Palmer, Bruce J.

    2012-12-26

    The Cray Gemini Interconnect has been recently introduced as a next generation network architecture for building multi-petaflop supercomputers. Cray XE6 systems including LANL Cielo, NERSC Hopper, ORNL Titan and the proposed NCSA Blue Waters leverage the Gemini Interconnect as their primary interconnection network. At the same time, programming models such as the Message Passing Interface (MPI) and Partitioned Global Address Space (PGAS) models such as Unified Parallel C (UPC) and Co-Array Fortran (CAF) have become available on these systems. Global Arrays is a popular PGAS model used in a variety of application domains including hydrodynamics, chemistry and visualization. Global Arrays uses the Aggregate Remote Memory Copy Interface (ARMCI) as the communication runtime system for Remote Memory Access communication. This paper presents a design, implementation and performance evaluation of scalable and high performance communication subsystems on the Cray Gemini Interconnect using ARMCI. The design space is explored, and time-space complexities of communication protocols for one-sided communication primitives such as contiguous and uniformly non-contiguous datatypes, atomic memory operations (AMOs) and memory synchronization are presented. An implementation of the proposed design (referred to as ARMCI-Gemini) demonstrates its efficacy on communication primitives, application kernels such as LU decomposition and full applications such as a Smooth Particle Hydrodynamics (SPH) application.

  20. Network selection, Information filtering and Scalable computation

    NASA Astrophysics Data System (ADS)

    Ye, Changqing

    -complete factorizations, possibly with a high percentage of missing values. This promotes additional sparsity beyond rank reduction. Computationally, we design methods based on a "decomposition and combination" strategy, to break large-scale optimization into many small subproblems to solve in a recursive and parallel manner. On this basis, we implement the proposed methods through multi-platform shared-memory parallel programming, and through Mahout, a library for scalable machine learning and data mining, for MapReduce computation. For example, our methods are scalable to a dataset consisting of three billion observations on a single machine with sufficient memory, with good timing. Both theoretical and numerical investigations show that the proposed methods exhibit significant improvement in accuracy over state-of-the-art scalable methods.

  1. Scalable Feature Matching by Dual Cascaded Scalar Quantization for Image Retrieval.

    PubMed

    Zhou, Wengang; Yang, Ming; Wang, Xiaoyu; Li, Houqiang; Lin, Yuanqing; Tian, Qi

    2016-01-01

    In this paper, we investigate the problem of scalable visual feature matching in large-scale image search and propose a novel cascaded scalar quantization scheme in dual resolution. We formulate the visual feature matching as a range-based neighbor search problem and approach it by identifying hyper-cubes with a dual-resolution scalar quantization strategy. Specifically, for each dimension of the PCA-transformed feature, scalar quantization is performed at both coarse and fine resolutions. The scalar quantization results at the coarse resolution are cascaded over multiple dimensions to index an image database. The scalar quantization results over multiple dimensions at the fine resolution are concatenated into a binary super-vector and stored in the index list for efficient verification. The proposed cascaded scalar quantization (CSQ) method is free of the costly visual codebook training and thus is independent of any image descriptor training set. The index structure of the CSQ is flexible enough to accommodate new image features and scalable to index large-scale image databases. We evaluate our approach on the public benchmark datasets for large-scale image retrieval. Experimental results demonstrate the competitive retrieval performance of the proposed method compared with several recent retrieval algorithms on feature quantization.
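
    A toy sketch of the dual-resolution idea: each dimension is scalar-quantized coarsely (the coarse codes are cascaded into an index key) and finely (the fine codes are concatenated into a binary super-vector used for Hamming-distance verification). Bin counts, key length and data are illustrative assumptions, not the published parameters:

    ```python
    # Toy dual-resolution scalar quantization: coarse codes build the index key,
    # fine codes form a binary super-vector for Hamming-distance verification.
    import numpy as np

    rng = np.random.default_rng(1)

    def quantize(feature, coarse_bins=4, fine_bins=16, key_dims=8):
        """feature: 1-D descriptor with components in [0, 1)."""
        coarse = np.minimum((feature * coarse_bins).astype(int), coarse_bins - 1)
        fine = np.minimum((feature * fine_bins).astype(int), fine_bins - 1)
        key = tuple(coarse[:key_dims])                                  # cascaded coarse codes -> index key
        bits = ((fine[:, None] >> np.arange(3, -1, -1)) & 1).ravel()    # 4 bits per dimension
        return key, bits

    index = {}
    database = rng.random((1000, 32))                          # synthetic 32-D descriptors
    for img_id, f in enumerate(database):
        key, bits = quantize(f)
        index.setdefault(key, []).append((img_id, bits))

    query = np.clip(database[42] + 0.005 * rng.standard_normal(32), 0, 0.999)
    qkey, qbits = quantize(query)
    candidates = index.get(qkey, [])                           # codebook-free lookup
    ranked = sorted(candidates, key=lambda c: int(np.count_nonzero(c[1] != qbits)))
    print([img for img, _ in ranked[:3]])                      # image 42 ranks first if its coarse key survives
    ```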

  2. Interactive Visualization of Astrophysical Plasma Simulations with SDvision

    NASA Astrophysics Data System (ADS)

    Pomarède, D.; Fidaali, Y.; Audit, E.; Brun, A. S.; Masset, F.; Teyssier, R.

    2008-04-01

    SDvision is a graphical interface developed in the framework of IDL Object Graphics and designed for the interactive and immersive visualization of astrophysical plasma simulations. Three-dimensional scalar and vector fields distributed over regular mesh grids or more complex structures such as adaptive mesh refinement data or multiple embedded grids, as well as N-body systems, can be visualized in a number of different, complementary ways. Several complementary visualizations of the data are offered simultaneously, such as 3D isosurfaces, volume projections, hedgehog and streamline displays, surfaces and images of 2D subsets, profile plots, and particle clouds. The SDvision widget allows the user to visualize complex, composite scenes both from outside and from within the simulated volume. This tool is used to visualize data from RAMSES, a hybrid N-body and hydrodynamical code which solves the interplay of dark matter and the baryon gas in the study of cosmological structure formation, from HERACLES, a radiation hydrodynamics code used in particular to study turbulence in interstellar molecular clouds, from the ASH code dedicated to the study of stellar magnetohydrodynamics, and from the JUPITER multi-resolution code used in the study of protoplanetary disk formation.

  3. A scalable distributed paradigm for multi-user interaction with tiled rear projection display walls.

    PubMed

    Roman, Pablo; Lazarov, Maxim; Majumder, Aditi

    2010-01-01

    We present the first distributed paradigm for multiple users to interact simultaneously with large tiled rear projection display walls. Unlike earlier works, our paradigm allows easy scalability across different applications, interaction modalities, displays and users. The novelty of the design lies in its distributed nature allowing well-compartmented, application independent, and application specific modules. This enables adapting to different 2D applications and interaction modalities easily by changing a few application specific modules. We demonstrate four challenging 2D applications on a nine projector display to demonstrate the application scalability of our method: map visualization, virtual graffiti, virtual bulletin board and an emergency management system. We demonstrate the scalability of our method to multiple interaction modalities by showing both gesture-based and laser-based user interfaces. Finally, we improve earlier distributed methods to register multiple projectors. Previous works need multiple patterns to identify the neighbors, the configuration of the display and the registration across multiple projectors in logarithmic time with respect to the number of projectors in the display. We propose a new approach that achieves this using a single pattern based on specially augmented QR codes in constant time. Further, previous distributed registration algorithms are prone to large misregistrations. We propose a novel radially cascading geometric registration technique that yields significantly better accuracy. Thus, our improvements allow a significantly more efficient and accurate technique for distributed self-registration of multi-projector display walls.

  4. A New, Scalable and Low Cost Multi-Channel Monitoring System for Polymer Electrolyte Fuel Cells

    PubMed Central

    Calderón, Antonio José; González, Isaías; Calderón, Manuel; Segura, Francisca; Andújar, José Manuel

    2016-01-01

    In this work a new, scalable and low cost multi-channel monitoring system for Polymer Electrolyte Fuel Cells (PEFCs) has been designed, constructed and experimentally validated. This monitoring system performs non-intrusive voltage measurement of each individual cell of a PEFC stack and it is scalable, in the sense that it is capable of carrying out measurements in stacks from 1 to 120 cells (from watts to kilowatts). The developed system comprises two main subsystems: hardware devoted to data acquisition (DAQ) and software devoted to real-time monitoring. The DAQ subsystem is based on the low-cost open-source platform Arduino and the real-time monitoring subsystem has been developed using the high-level graphical language NI LabVIEW. Such integration can be considered a novelty in the scientific literature on PEFC monitoring systems. An original amplifying and multiplexing board has been designed to increase the Arduino input port availability. Data storage and real-time monitoring have been performed with an easy-to-use interface. Graphical and numerical visualization allows continuous tracking of cell voltage. Scalability, flexibility, ease of use, versatility and low cost are the main features of the proposed approach. The system is described and experimental results are presented. These results demonstrate its suitability to monitor the voltage in a PEFC at cell level. PMID:27005630

  5. A New, Scalable and Low Cost Multi-Channel Monitoring System for Polymer Electrolyte Fuel Cells.

    PubMed

    Calderón, Antonio José; González, Isaías; Calderón, Manuel; Segura, Francisca; Andújar, José Manuel

    2016-03-09

    In this work a new, scalable and low cost multi-channel monitoring system for Polymer Electrolyte Fuel Cells (PEFCs) has been designed, constructed and experimentally validated. This monitoring system performs non-intrusive voltage measurement of each individual cell of a PEFC stack and it is scalable, in the sense that it is capable of carrying out measurements in stacks from 1 to 120 cells (from watts to kilowatts). The developed system comprises two main subsystems: hardware devoted to data acquisition (DAQ) and software devoted to real-time monitoring. The DAQ subsystem is based on the low-cost open-source platform Arduino and the real-time monitoring subsystem has been developed using the high-level graphical language NI LabVIEW. Such integration can be considered a novelty in the scientific literature on PEFC monitoring systems. An original amplifying and multiplexing board has been designed to increase the Arduino input port availability. Data storage and real-time monitoring have been performed with an easy-to-use interface. Graphical and numerical visualization allows continuous tracking of cell voltage. Scalability, flexibility, ease of use, versatility and low cost are the main features of the proposed approach. The system is described and experimental results are presented. These results demonstrate its suitability to monitor the voltage in a PEFC at cell level.

  6. Efficient scalable solid-state neutron detector

    NASA Astrophysics Data System (ADS)

    Moses, Daniel

    2015-06-01

    We report on a scalable solid-state neutron detector system that is specifically designed to yield high thermal neutron detection sensitivity. The basic detector unit in this system is made of a 6Li foil coupled to two crystalline silicon diodes. The theoretical intrinsic efficiency of a detector-unit is 23.8% and that of a detector element comprising a stack of five detector-units is 60%. Based on the measured performance of this detector-unit, the performance of a detector system comprising a planar array of detector elements, scaled to encompass an effective area of 0.43 m2, is estimated to yield the minimum absolute efficiency required of radiological portal monitors used in homeland security.

  7. Scalable problems and memory bounded speedup

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He; Ni, Lionel M.

    1992-01-01

    In this paper three models of parallel speedup are studied. They are fixed-size speedup, fixed-time speedup and memory-bounded speedup. The latter two consider the relationship between speedup and problem scalability. Two sets of speedup formulations are derived for these three models. One set considers uneven workload allocation and communication overhead and gives more accurate estimation. Another set considers a simplified case and provides a clear picture on the impact of the sequential portion of an application on the possible performance gain from parallel processing. The simplified fixed-size speedup is Amdahl's law. The simplified fixed-time speedup is Gustafson's scaled speedup. The simplified memory-bounded speedup contains both Amdahl's law and Gustafson's scaled speedup as special cases. This study leads to a better understanding of parallel processing.
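
    The two simplified cases mentioned above can be stated directly; the sketch below evaluates Amdahl's law (fixed-size) and Gustafson's scaled speedup (fixed-time) for a given serial fraction, making the contrast between the two viewpoints explicit (the 5% serial fraction is just an example value):

    ```python
    # Amdahl's law (fixed-size) versus Gustafson's scaled speedup (fixed-time)
    # for a serial fraction s and p processors.
    def amdahl(s: float, p: int) -> float:
        return 1.0 / (s + (1.0 - s) / p)

    def gustafson(s: float, p: int) -> float:
        return s + (1.0 - s) * p

    for p in (16, 256, 4096):
        print(f"p={p:5d}  Amdahl={amdahl(0.05, p):7.1f}  Gustafson={gustafson(0.05, p):8.1f}")
    # With a 5% serial fraction, Amdahl's speedup saturates near 20 while
    # Gustafson's scaled speedup keeps growing with the machine size.
    ```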

  8. A scalable sparse eigensolver for petascale applications

    NASA Astrophysics Data System (ADS)

    Keceli, Murat; Zhang, Hong; Zapol, Peter; Dixon, David; Wagner, Albert

    2015-03-01

    Exploiting locality of chemical interactions, and therefore sparsity, is necessary to push the limits of quantum simulations beyond petascale. However, sparse numerical algorithms are known to have poor strong scaling. Here, we show that the shift-and-invert parallel spectral transformations (SIPs) method can scale up to two hundred thousand cores for density functional based tight-binding (DFTB) or semi-empirical molecular orbital (SEMO) applications. We demonstrate the robustness and scalability of the SIPs method on various kinds of systems including metallic carbon nanotubes, diamond crystals and water clusters. We analyze how the sparsity patterns and eigenvalue spectra of these different types of applications affect the computational performance of SIPs. The SIPs method enables us to perform simulations with more than five hundred thousand basis functions utilizing hundreds of thousands of cores. SIPs has better memory and computational-time scaling than dense eigensolvers, and it does not require fast interconnects.
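
    A single-node illustration of the shift-and-invert slicing idea using SciPy: eigenvalues are computed independently around a set of shifts, and it is this independence between slices that allows them to be spread over very many cores. The matrix, shift values and slice sizes below are toy assumptions, not the real SIPs setup:

    ```python
    # Shift-and-invert spectral slicing on one node (illustrative; the real SIPs
    # implementation distributes the independent slices across many cores).
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import eigsh

    n = 2000
    # Sparse symmetric tridiagonal test matrix, a stand-in for a DFTB/SEMO Hamiltonian.
    A = sp.diags([np.full(n - 1, -1.0), np.linspace(0.0, 4.0, n), np.full(n - 1, -1.0)],
                 offsets=[-1, 0, 1], format="csc")

    shifts = [0.5, 1.5, 2.5, 3.5]                 # spectral slices; each is an independent task
    found = []
    for sigma in shifts:
        # Passing sigma triggers shift-and-invert mode: the eigenvalues nearest the shift are returned.
        vals = eigsh(A, k=20, sigma=sigma, which="LM", return_eigenvectors=False)
        found.extend(vals.tolist())

    print(len(found), "eigenvalues computed around shifts", shifts)
    ```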

  9. Scalable, extensible, and portable numerical libraries

    SciTech Connect

    Gropp, W.; Smith, B.

    1995-01-01

    Designing a scalable and portable numerical library requires consideration of many factors, including choice of parallel communication technology, data structures, and user interfaces. The PETSc library (Portable Extensible Tools for Scientific computing) makes use of modern software technology to provide a flexible and portable implementation. This talk will discuss the use of a meta-communication layer (allowing the user to choose different transport layers such as MPI, p4, pvm, or vendor-specific libraries) for portability, an aggressive data-structure-neutral implementation that minimizes dependence on particular data structures (even vectors), permitting the library to adapt to the user rather than the other way around, and the separation of implementation language from user-interface language. Examples are presented.

  10. Parallel scalability of Hartree–Fock calculations

    SciTech Connect

    Chow, Edmond Liu, Xing; Smelyanskiy, Mikhail; Hammond, Jeff R.

    2015-03-14

    Quantum chemistry is increasingly performed using large cluster computers consisting of multiple interconnected nodes. For a fixed molecular problem, the efficiency of a calculation usually decreases as more nodes are used, due to the cost of communication between the nodes. This paper empirically investigates the parallel scalability of Hartree–Fock calculations. The construction of the Fock matrix and the density matrix calculation are analyzed separately. For the former, we use a parallelization of Fock matrix construction based on a static partitioning of work followed by a work stealing phase. For the latter, we use density matrix purification from the linear scaling methods literature, but without using sparsity. When using large numbers of nodes for moderately sized problems, density matrix computations are network-bandwidth bound, making purification methods potentially faster than eigendecomposition methods.
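
    The purification step referred to above can be illustrated with the McWeeny iteration P ← 3P² − 2P³, which drives a suitably scaled initial guess toward an idempotent density matrix using only matrix multiplications. The random "Fock" matrix, the occupation count and the use of a direct eigensolver to pick the chemical potential are purely for illustration, not the authors' procedure:

    ```python
    # McWeeny purification: obtain the density matrix from a (random, symmetric)
    # "Fock" matrix without eigendecomposition in the iteration itself.
    import numpy as np

    rng = np.random.default_rng(0)
    n, n_occ = 200, 40                                  # basis functions, occupied orbitals

    F = rng.standard_normal((n, n))
    F = 0.5 * (F + F.T)

    # For illustration we take the spectral bounds and chemical potential from a
    # direct diagonalization; production codes estimate these cheaply instead.
    evals = np.linalg.eigvalsh(F)
    emin, emax = evals[0], evals[-1]
    mu = 0.5 * (evals[n_occ - 1] + evals[n_occ])
    lam = max(emax - mu, mu - emin)
    P = (mu * np.eye(n) - F) / (2.0 * lam) + 0.5 * np.eye(n)   # eigenvalues mapped into [0, 1]

    for _ in range(50):
        P2 = P @ P
        P_new = 3.0 * P2 - 2.0 * P2 @ P                 # McWeeny step: P <- 3P^2 - 2P^3
        if np.linalg.norm(P_new - P) < 1e-10:
            P = P_new
            break
        P = P_new

    print("idempotency error:", np.linalg.norm(P @ P - P))
    print("electron count   :", round(float(np.trace(P)), 2), "(target", n_occ, ")")
    ```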

  11. Efficient scalable solid-state neutron detector

    SciTech Connect

    Moses, Daniel

    2015-06-15

    We report on a scalable solid-state neutron detector system that is specifically designed to yield high thermal neutron detection sensitivity. The basic detector unit in this system is made of a ⁶Li foil coupled to two crystalline silicon diodes. The theoretical intrinsic efficiency of a detector-unit is 23.8% and that of a detector element comprising a stack of five detector-units is 60%. Based on the measured performance of this detector-unit, the performance of a detector system comprising a planar array of detector elements, scaled to encompass an effective area of 0.43 m², is estimated to yield the minimum absolute efficiency required of radiological portal monitors used in homeland security.

  12. Scalable quantum search using trapped ions

    SciTech Connect

    Ivanov, S. S.; Ivanov, P. A.; Linington, I. E.; Vitanov, N. V.

    2010-04-15

    We propose a scalable implementation of Grover's quantum search algorithm in a trapped-ion quantum information processor. The system is initialized in an entangled Dicke state by using adiabatic techniques. The inversion-about-average and oracle operators take the form of single off-resonant laser pulses. This is made possible by utilizing the physical symmetries of the trapped-ion linear crystal. The physical realization of the algorithm represents a dramatic simplification: each logical iteration (oracle and inversion about average) requires only two physical interaction steps, in contrast to the large number of concatenated gates required by previous approaches. This not only facilitates the implementation but also increases the overall fidelity of the algorithm.
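
    The algebra of the two operators named above is compact enough to show directly: the oracle is a phase flip on the marked item and the inversion about average reflects every amplitude about the mean. The sketch below applies them to a plain state vector, which is the logical content of each physical iteration (register size and marked item are arbitrary):

    ```python
    # Grover iterations on a classical state-vector simulation (illustrative only;
    # this is the algorithm's algebra, not the trapped-ion realization).
    import numpy as np

    n_qubits, marked = 6, 37
    N = 2 ** n_qubits
    psi = np.full(N, 1.0 / np.sqrt(N))                 # uniform superposition over N items

    for _ in range(int(np.floor(np.pi / 4 * np.sqrt(N)))):
        psi[marked] *= -1.0                            # oracle: phase flip on the target
        psi = 2.0 * psi.mean() - psi                   # inversion about the average amplitude

    print("success probability:", round(float(psi[marked] ** 2), 4))   # ~0.997 for N = 64
    ```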

  13. BASSET: Scalable Gateway Finder in Large Graphs

    SciTech Connect

    Tong, H; Papadimitriou, S; Faloutsos, C; Yu, P S; Eliassi-Rad, T

    2010-11-03

    Given a social network, who is the best person to introduce you to, say, Chris Ferguson, the poker champion? Or, given a network of people and skills, who is the best person to help you learn about, say, wavelets? The goal is to find a small group of 'gateways': persons who are close enough to us, as well as close enough to the target (person, or skill) or, in other words, are crucial in connecting us to the target. The main contributions are the following: (a) we show how to formulate this problem precisely; (b) we show that it is sub-modular and thus it can be solved near-optimally; (c) we give fast, scalable algorithms to find such gateways. Experiments on real data sets validate the effectiveness and efficiency of the proposed methods, achieving up to 6,000,000x speedup.
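
    Point (b) above is what makes a simple greedy strategy near-optimal; the sketch below shows generic greedy selection for a monotone submodular objective, with a toy coverage function standing in for the actual gateway score (names and "paths" are made up):

    ```python
    # Greedy maximization of a monotone submodular set function (generic sketch;
    # the coverage objective below is a toy stand-in for the gateway score).
    def greedy_max(candidates, objective, k):
        chosen = set()
        for _ in range(k):
            best = max((c for c in candidates if c not in chosen),
                       key=lambda c: objective(chosen | {c}) - objective(chosen))
            chosen.add(best)                           # largest marginal gain first
        return chosen

    # Toy objective: number of source-to-target "paths" touched by the chosen set.
    paths = [{"ann", "bob"}, {"bob", "cat"}, {"ann", "dan"}, {"eve"}]
    coverage = lambda s: sum(1 for p in paths if p & s)

    print(greedy_max({"ann", "bob", "cat", "dan", "eve"}, coverage, k=2))
    ```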

  14. Porphyrins as Catalysts in Scalable Organic Reactions.

    PubMed

    Barona-Castaño, Juan C; Carmona-Vargas, Christian C; Brocksom, Timothy J; de Oliveira, Kleber T

    2016-03-08

    Catalysis is a topic of continuous interest since it was discovered in chemistry centuries ago. Aiming at the advance of reactions for efficient processes, a number of approaches have been developed over the last 180 years, and more recently, porphyrins have come to occupy an important role in this field. Porphyrins and metalloporphyrins are fascinating compounds which are involved in a number of synthetic transformations of great interest to industry and academia. The aim of this review is to cover the most recent progress in reactions catalysed by porphyrins in scalable procedures, thus presenting the state of the art in reactions of epoxidation, sulfoxidation, oxidation of alcohols to carbonyl compounds and C-H functionalization. In addition, the use of porphyrins as photocatalysts in continuous flow processes is covered.

  15. A versatile scalable PET processing system

    SciTech Connect

    H. Dong, A. Weisenberger, J. McKisson, Xi Wenze, C. Cuevas, J. Wilson, L. Zukerman

    2011-06-01

    Positron Emission Tomography (PET) historically has major clinical and preclinical applications in cancerous oncology, neurology, and cardiovascular diseases. Recently, in a new direction, an application specific PET system is being developed at Thomas Jefferson National Accelerator Facility (Jefferson Lab) in collaboration with Duke University, University of Maryland at Baltimore (UMAB), and West Virginia University (WVU) targeted for plant eco-physiology research. The new plant imaging PET system is versatile and scalable such that it could adapt to several plant imaging needs - imaging many important plant organs including leaves, roots, and stems. The mechanical arrangement of the detectors is designed to accommodate the unpredictable and random distribution in space of the plant organs without requiring the plant be disturbed. Prototyping such a system requires a new data acquisition system (DAQ) and data processing system which are adaptable to the requirements of these unique and versatile detectors.

  16. Scalable ranked retrieval using document images

    NASA Astrophysics Data System (ADS)

    Jain, Rajiv; Oard, Douglas W.; Doermann, David

    2013-12-01

    Despite the explosion of text on the Internet, hard copy documents that have been scanned as images still play a significant role for some tasks. The best method to perform ranked retrieval on a large corpus of document images, however, remains an open research question. The most common approach has been to perform text retrieval using terms generated by optical character recognition. This paper, by contrast, examines whether a scalable segmentation-free image retrieval algorithm, which matches sub-images containing text or graphical objects, can provide additional benefit in satisfying a user's information needs on a large, real world dataset. Results on 7 million scanned pages from the CDIP v1.0 test collection show that content based image retrieval finds a substantial number of documents that text retrieval misses, and that when used as a basis for relevance feedback can yield improvements in retrieval effectiveness.

  17. Scalable Sensor Data Processor: Development and Validation

    NASA Astrophysics Data System (ADS)

    Pinto, R.; Berrojo, L.; Garcia, E.; Trautner, R.; Rauwerda, G.; Sunesen, K.; Redant, S.; Thys, G.; Andersson, J.; Hernandez, F.; Habinc, S.; Lopez, J.

    2016-08-01

    Future science and robotic exploration missions are envisaged to be demanding w.r.t. on-board data processing capabilities, due to the scarcity of downlink bandwidth together with the massive amount of data which can be generated by next-generation instruments, both in terms of data rate and volume. Therefore, new architectures for on-board data processing are needed. The Scalable Sensor Data Processor (SSDP) is a next-generation heterogeneous multicore mixed-signal ASIC for on-board data processing, aiming at providing in a single chip the resources needed to perform data acquisition, control and high-performance processing. This paper presents the project background and design of the SSDP ASIC. The architecture of the control and processing subsystems is presented in detail. The current status and future development activities are also presented, together with prototyping and the envisaged testing and validation procedures.

  18. A scalable approach to combinatorial library design.

    PubMed

    Sharma, Puneet; Salapaka, Srinivasa; Beck, Carolyn

    2011-01-01

    In this chapter, we describe an algorithm for the design of lead-generation libraries required in combinatorial drug discovery. This algorithm addresses simultaneously the two key criteria of diversity and representativeness of compounds in the resulting library and is computationally efficient when applied to a large class of lead-generation design problems. At the same time, additional constraints on experimental resources are also incorporated in the framework presented in this chapter. A computationally efficient scalable algorithm is developed, where the ability of the deterministic annealing algorithm to identify clusters is exploited to truncate computations over the entire dataset to computations over individual clusters. An analysis of this algorithm quantifies the trade-off between the error due to truncation and computational effort. Results applied on test datasets corroborate the analysis and show improvement by factors as large as ten or more depending on the datasets.

  19. Toward Scalable Benchmarks for Mass Storage Systems

    NASA Technical Reports Server (NTRS)

    Miller, Ethan L.

    1996-01-01

    This paper presents guidelines for the design of a mass storage system benchmark suite, along with preliminary suggestions for programs to be included. The benchmarks will measure both peak and sustained performance of the system as well as predicting both short- and long-term behavior. These benchmarks should be both portable and scalable so they may be used on storage systems from tens of gigabytes to petabytes or more. By developing a standard set of benchmarks that reflect real user workload, we hope to encourage system designers and users to publish performance figures that can be compared with those of other systems. This will allow users to choose the system that best meets their needs and give designers a tool with which they can measure the performance effects of improvements to their systems.

  20. Improved volume rendering for the visualization of living cells examined with confocal microscopy

    NASA Astrophysics Data System (ADS)

    Enloe, L. Charity; Griffing, Lawrence R.

    2000-02-01

    This research applies recent advances in 3D isosurface reconstruction to images of test spheres and plant cells growing in suspension culture. Isosurfaces that represent object boundaries are constructed with a Marching Cubes algorithm applied to simple data sets, i.e., fluorescent test beads, and complex data sets, i.e., fluorescent plant cells, acquired with a Zeiss Confocal Laser Scanning Microscope (LSM). The marching cubes algorithm treats each pixel or voxel of the image as a separate entity when performing computations. To test the spatial accuracy of the reconstruction, control data representing the volume of a 25 micrometer test sphere was obtained with the LSM. This volume was then judged on the basis of uniformity and smoothness. Using polygon decimation and smoothing algorithms available through the Visualization Toolkit, 'voxellated' test spheres and cells were smoothed using several different smoothing algorithms after unessential polygons were eliminated. With these improvements, the shape of subcellular organelles could be modeled at various levels of accuracy. However, in order to accurately reconstruct these complex structures of interest to us, the subcellular organelles of the endosomal system or the endoplasmic reticulum of plant cells, measurements of the accuracy of connectedness of structures need to be developed.
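
    Since the processing described here relies on the Visualization Toolkit, a compact pipeline of the same shape (marching cubes, polygon decimation, smoothing) can be sketched against VTK's Python bindings. The synthetic sphere volume and the parameter values are illustrative, and the exact filters the authors used may differ:

    ```python
    # Marching cubes -> decimation -> smoothing on a synthetic test sphere
    # (illustrative VTK pipeline; parameter values are made up).
    import vtk

    sphere = vtk.vtkSphere()                       # implicit sphere standing in for a test bead
    sphere.SetRadius(12.5)

    sampler = vtk.vtkSampleFunction()              # sample it on a 64^3 volume
    sampler.SetImplicitFunction(sphere)
    sampler.SetModelBounds(-20, 20, -20, 20, -20, 20)
    sampler.SetSampleDimensions(64, 64, 64)

    mc = vtk.vtkMarchingCubes()                    # isosurface at the sphere boundary
    mc.SetInputConnection(sampler.GetOutputPort())
    mc.SetValue(0, 0.0)

    decimate = vtk.vtkDecimatePro()                # eliminate unessential polygons
    decimate.SetInputConnection(mc.GetOutputPort())
    decimate.SetTargetReduction(0.5)

    smooth = vtk.vtkSmoothPolyDataFilter()         # Laplacian smoothing of the surface
    smooth.SetInputConnection(decimate.GetOutputPort())
    smooth.SetNumberOfIterations(30)

    smooth.Update()
    surface = smooth.GetOutput()
    print(surface.GetNumberOfPoints(), "points,", surface.GetNumberOfCells(), "cells")
    ```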

  1. Scalable multiparticle entanglement of trapped ions.

    PubMed

    Häffner, H; Hänsel, W; Roos, C F; Benhelm, J; Chek-al-Kar, D; Chwalla, M; Körber, T; Rapol, U D; Riebe, M; Schmidt, P O; Becher, C; Gühne, O; Dür, W; Blatt, R

    2005-12-01

    The generation, manipulation and fundamental understanding of entanglement lies at the very heart of quantum mechanics. Entangled particles are non-interacting but are described by a common wavefunction; consequently, individual particles are not independent of each other and their quantum properties are inextricably interwoven. The intriguing features of entanglement become particularly evident if the particles can be individually controlled and physically separated. However, both the experimental realization and characterization of entanglement become exceedingly difficult for systems with many particles. The main difficulty is to manipulate and detect the quantum state of individual particles as well as to control the interaction between them. So far, entanglement of four ions or five photons has been demonstrated experimentally. The creation of scalable multiparticle entanglement demands a non-exponential scaling of resources with particle number. Among the various kinds of entangled states, the 'W state' plays an important role as its entanglement is maximally persistent and robust even under particle loss. Such states are central as a resource in quantum information processing and multiparty quantum communication. Here we report the scalable and deterministic generation of four-, five-, six-, seven- and eight-particle entangled states of the W type with trapped ions. We obtain the maximum possible information on these states by performing full characterization via state tomography, using individual control and detection of the ions. A detailed analysis proves that the entanglement is genuine. The availability of such multiparticle entangled states, together with full information in the form of their density matrices, creates a test-bed for theoretical studies of multiparticle entanglement. Independently, 'Greenberger-Horne-Zeilinger' entangled states with up to six ions have been created and analysed in Boulder.

  2. Scalable asynchronous execution of cellular automata

    NASA Astrophysics Data System (ADS)

    Folino, Gianluigi; Giordano, Andrea; Mastroianni, Carlo

    2016-10-01

    The performance and scalability of cellular automata, when executed on parallel/distributed machines, are limited by the necessity of synchronizing all the nodes at each time step, i.e., a node can execute only after the execution of the previous step at all the other nodes. However, these synchronization requirements can be relaxed: a node can execute one step after synchronizing only with the adjacent nodes. In this fashion, different nodes can execute different time steps. This can be notably advantageous in many novel and increasingly popular applications of cellular automata, such as smart city applications, simulation of natural phenomena, etc., in which the execution times can be different and variable, due to the heterogeneity of machines and/or data and/or executed functions. Indeed, a longer execution time at a node does not slow down the execution at all the other nodes but only at the neighboring nodes. This is particularly advantageous when the nodes that act as bottlenecks vary during the application execution. The goal of the paper is to analyze the benefits that can be achieved with the described asynchronous implementation of cellular automata, when compared to the classical all-to-all synchronization pattern. The performance and scalability have been evaluated through a Petri net model, as this model is very useful for representing the synchronization barrier among nodes. We examined the usual case in which the territory is partitioned into a number of regions, and the computation associated with a region is assigned to a computing node. We considered both the cases of mono-dimensional and two-dimensional partitioning. The results show that the advantage obtained through the asynchronous execution, when compared to the all-to-all synchronous approach, is notable, and it can be as large as 90% in terms of speedup.
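
    The benefit of the neighbour-only rule can be made concrete with a small timing model: under a global barrier every step costs as much as the slowest region, whereas with neighbour synchronization region i may start step t as soon as regions i-1, i and i+1 have finished step t-1. The per-step costs below are random numbers chosen purely for illustration:

    ```python
    # Barrier synchronization versus neighbour-only synchronization on a ring of
    # regions (toy timing model with made-up per-step costs).
    import random

    random.seed(7)
    n, steps = 8, 200
    cost = [[random.uniform(0.5, 1.5) for _ in range(n)] for _ in range(steps)]

    # All-to-all barrier: every time step waits for the slowest region.
    barrier_time = sum(max(row) for row in cost)

    # Neighbour-only rule: region i starts step t once i-1, i and i+1 finished step t-1.
    finish = [0.0] * n
    for t in range(steps):
        prev = finish[:]
        for i in range(n):
            ready = max(prev[i], prev[(i - 1) % n], prev[(i + 1) % n])
            finish[i] = ready + cost[t][i]
    neighbour_time = max(finish)

    print(f"barrier: {barrier_time:.1f}   neighbour-only: {neighbour_time:.1f}")
    ```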

  3. Network-aware scalable video monitoring system for emergency situations with operator-managed fidelity control

    NASA Astrophysics Data System (ADS)

    Al Hadhrami, Tawfik; Nightingale, James M.; Wang, Qi; Grecos, Christos

    2014-05-01

    In emergency situations, the ability to remotely monitor unfolding events using high-quality video feeds will significantly improve the incident commander's understanding of the situation and thereby aids effective decision making. This paper presents a novel, adaptive video monitoring system for emergency situations where the normal communications network infrastructure has been severely impaired or is no longer operational. The proposed scheme, operating over a rapidly deployable wireless mesh network, supports real-time video feeds between first responders, forward operating bases and primary command and control centers. Video feeds captured on portable devices carried by first responders and by static visual sensors are encoded in H.264/SVC, the scalable extension to H.264/AVC, allowing efficient, standard-based temporal, spatial, and quality scalability of the video. A three-tier video delivery system is proposed, which balances the need to avoid overuse of mesh nodes with the operational requirements of the emergency management team. In the first tier, the video feeds are delivered at a low spatial and temporal resolution employing only the base layer of the H.264/SVC video stream. Routing in this mode is designed to employ all nodes across the entire mesh network. In the second tier, whenever operational considerations require that commanders or operators focus on a particular video feed, a `fidelity control' mechanism at the monitoring station sends control messages to the routing and scheduling agents in the mesh network, which increase the quality of the received picture using SNR scalability while conserving bandwidth by maintaining a low frame rate. In this mode, routing decisions are based on reliable packet delivery with the most reliable routes being used to deliver the base and lower enhancement layers; as fidelity is increased and more scalable layers are transmitted they will be assigned to routes in descending order of reliability. The third tier

  4. Open Source Scalable Vector Graphics Components for Enabling GIS in Web-based Public Health Surveillance Systems

    PubMed Central

    Kamadjeu, Raoul; Tolentino, Herman

    2006-01-01

    Geographic Information Systems (GIS) are useful for visual analysis and sharing of spatial data. Open standards like Scalable Vector Graphics (SVG) provide viable alternatives to overcome traditional GIS limitations in resource-constrained settings (low-bandwidth, cost and manpower barriers). This project describes the design and implementation of reusable SVG components for managing geographic information for web-based public health surveillance systems. PMID:17238592

  5. GRIZ: Finite element analysis results visualization for unstructured grids. User manual

    SciTech Connect

    Dovey, D.J.; Spelce, T.E.

    1993-10-01

    GRIZ supports interactive visualization of finite element analysis results on unstructured grids. GRIZ is a general-purpose post-processing application which is designed to work with a variety of analysis codes. Currently, GRIZ is capable of calculating and displaying derived variables for the DYNA3D, NIKE3D and TOPAZ3D analysis codes. GRIZ reads in data files in the "MDG plotfile" format. GRIZ provides support for modern 3D visualization techniques such as isosurface display, cutting planes and display of vector data. GRIZ also incorporates the ability to animate data over time and to store animation frames to a video disk. GRIZ is designed to utilize the capabilities of modern graphics workstations which provide hardware support for 3D graphics, thereby giving the user as much interactive performance as possible. This should make it easier for analysts to explore and interrogate their analysis results.

  6. Visual Analytics for Power Grid Contingency Analysis

    SciTech Connect

    Wong, Pak C.; Huang, Zhenyu; Chen, Yousu; Mackey, Patrick S.; Jin, Shuangshuang

    2014-01-20

    Contingency analysis is the process of employing different measures to model scenarios, analyze them, and then derive the best response to remove the threats. This application paper focuses on a class of contingency analysis problems found in the power grid management system. A power grid is a geographically distributed interconnected transmission network that transmits and delivers electricity from generators to end users. The power grid contingency analysis problem is increasingly important because of both the growing size of the underlying raw data that need to be analyzed and the urgency to deliver working solutions in an aggressive timeframe. Failure to do so may bring significant financial, economic, and security impacts to all parties involved and the society at large. The paper presents a scalable visual analytics pipeline that transforms about 100 million contingency scenarios to a manageable size and form for grid operators to examine different scenarios and come up with preventive or mitigation strategies to address the problems in a predictive and timely manner. Great attention is given to the computational scalability, information scalability, visual scalability, and display scalability issues surrounding the data analytics pipeline. Most of the large-scale computation requirements of our work are conducted on a Cray XMT multi-threaded parallel computer. The paper demonstrates a number of examples using western North American power grid models and data.

  7. Block-based scalable wavelet image codec

    NASA Astrophysics Data System (ADS)

    Bao, Yiliang; Kuo, C.-C. Jay

    1999-10-01

    This paper presents a high performance block-based wavelet image coder which is designed to have very low implementation complexity yet offer rich features. In this image coder, the Dual-Sliding Wavelet Transform (DSWT) is first applied to image data to generate wavelet coefficients in fixed-size blocks. Here, a block only consists of wavelet coefficients from a single subband. The coefficient blocks are directly coded with the Low Complexity Binary Description (LCBiD) coefficient coding algorithm. Each block is encoded using binary context-based bitplane coding. No parent-child correlation is exploited in the coding process. There is also no intermediate buffering needed in between DSWT and LCBiD. The compressed bit stream generated by the proposed coder is both SNR and resolution scalable, as well as highly resilient to transmission errors. Both DSWT and LCBiD process the data in blocks whose size is independent of the size of the original image. This gives more flexibility in the implementation. The codec has very good coding performance even when the block size is as small as 16×16.
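
    As a rough illustration of bitplane coding of a coefficient block (a generic sketch, not the LCBiD algorithm, and with context modeling and sign handling omitted), each block of quantized coefficient magnitudes is emitted one bitplane at a time from the most significant bit down; this is what makes the stream SNR scalable, since truncating it simply drops the least significant planes.

      import numpy as np

      def bitplanes(block, n_planes=8):
          """Decompose non-negative quantized coefficients into bitplanes, MSB first."""
          mags = np.abs(block).astype(np.uint32)
          return [((mags >> p) & 1).astype(np.uint8) for p in range(n_planes - 1, -1, -1)]

      block = np.random.default_rng(0).integers(0, 256, size=(16, 16))
      planes = bitplanes(block)

      # Reconstruct from only the first k planes -> coarser (lower SNR) approximation.
      k = 4
      approx = sum(plane.astype(np.uint32) << (7 - i) for i, plane in enumerate(planes[:k]))
      print(np.abs(block - approx).max())   # bounded by 2**(8 - k) - 1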

  8. Scalable Tight-Binding Model for Graphene

    NASA Astrophysics Data System (ADS)

    Liu, Ming-Hao; Rickhaus, Peter; Makk, Péter; Tóvári, Endre; Maurand, Romain; Tkatschenko, Fedor; Weiss, Markus; Schönenberger, Christian; Richter, Klaus

    2015-01-01

    Artificial graphene consisting of honeycomb lattices other than the atomic layer of carbon has been shown to exhibit electronic properties similar to real graphene. Here, we reverse the argument to show that transport properties of real graphene can be captured by simulations using "theoretical artificial graphene." To prove this, we first derive a simple condition, along with its restrictions, to achieve band structure invariance for a scalable graphene lattice. We then present transport measurements for an ultraclean suspended single-layer graphene p-n junction device, where ballistic transport features from complex Fabry-Pérot interference (at zero magnetic field) to the quantum Hall effect (at unusually low field) are observed and are well reproduced by transport simulations based on properly scaled single-particle tight-binding models. Our findings indicate that transport simulations for graphene can be efficiently performed with a strongly reduced number of atomic sites, allowing for reliable predictions for electric properties of complex graphene devices. We demonstrate the capability of the model by applying it to predict so-far unexplored gate-defined conductance quantization in single-layer graphene.
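
    As a worked statement of the scaling idea (summarized here in standard graphene notation rather than quoted from the paper), enlarging the lattice constant by a factor s_f while reducing the nearest-neighbor hopping energy by the same factor leaves the low-energy band structure, and in particular the Fermi velocity, unchanged:

      \[
        a \;\longrightarrow\; s_f\, a, \qquad t \;\longrightarrow\; \frac{t}{s_f}
        \quad\Longrightarrow\quad
        v_F \;=\; \frac{3\, t\, a}{2\hbar} \;\;\text{is invariant,}
      \]

    with the restriction that the scaled lattice constant must remain much smaller than the physically relevant length scales (Fermi wavelength, magnetic length, device dimensions), which bounds how far the number of atomic sites can be reduced.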

  9. Scalable Combinatorial Tools for Health Disparities Research

    PubMed Central

    Langston, Michael A.; Levine, Robert S.; Kilbourne, Barbara J.; Rogers, Gary L.; Kershenbaum, Anne D.; Baktash, Suzanne H.; Coughlin, Steven S.; Saxton, Arnold M.; Agboto, Vincent K.; Hood, Darryl B.; Litchveld, Maureen Y.; Oyana, Tonny J.; Matthews-Juarez, Patricia; Juarez, Paul D.

    2014-01-01

    Despite staggering investments made in unraveling the human genome, current estimates suggest that as much as 90% of the variance in cancer and chronic diseases can be attributed to factors outside an individual’s genetic endowment, particularly to environmental exposures experienced across his or her life course. New analytical approaches are clearly required as investigators turn to complicated systems theory and ecological, place-based and life-history perspectives in order to understand more clearly the relationships between social determinants, environmental exposures and health disparities. While traditional data analysis techniques remain foundational to health disparities research, they are easily overwhelmed by the ever-increasing size and heterogeneity of available data needed to illuminate latent gene x environment interactions. This has prompted the adaptation and application of scalable combinatorial methods, many from genome science research, to the study of population health. Most of these powerful tools are algorithmically sophisticated, highly automated and mathematically abstract. Their utility motivates the main theme of this paper, which is to describe real applications of innovative transdisciplinary models and analyses in an effort to help move the research community closer toward identifying the causal mechanisms and associated environmental contexts underlying health disparities. The public health exposome is used as a contemporary focus for addressing the complex nature of this subject. PMID:25310540

  10. A scalable neuristor built with Mott memristors

    NASA Astrophysics Data System (ADS)

    Pickett, Matthew D.; Medeiros-Ribeiro, Gilberto; Williams, R. Stanley

    2013-02-01

    The Hodgkin-Huxley model for action potential generation in biological axons is central for understanding the computational capability of the nervous system and emulating its functionality. Owing to the historical success of silicon complementary metal-oxide-semiconductors, spike-based computing is primarily confined to software simulations and specialized analogue metal-oxide-semiconductor field-effect transistor circuits. However, there is interest in constructing physical systems that emulate biological functionality more directly, with the goal of improving efficiency and scale. The neuristor was proposed as an electronic device with properties similar to the Hodgkin-Huxley axon, but previous implementations were not scalable. Here we demonstrate a neuristor built using two nanoscale Mott memristors, dynamical devices that exhibit transient memory and negative differential resistance arising from an insulating-to-conducting phase transition driven by Joule heating. This neuristor exhibits the important neural functions of all-or-nothing spiking with signal gain and diverse periodic spiking, using materials and structures that are amenable to extremely high-density integration with or without silicon transistors.

  11. SCTP as scalable video coding transport

    NASA Astrophysics Data System (ADS)

    Ortiz, Jordi; Graciá, Eduardo Martínez; Skarmeta, Antonio F.

    2013-12-01

    This study presents an evaluation of the Stream Control Transmission Protocol (SCTP) for the transport of the scalable video codec (SVC), proposed by MPEG as an extension to H.264/AVC. Both technologies fit together properly. On the one hand, SVC makes it easy to split the bitstream into substreams carrying different video layers, each with different importance for the reconstruction of the complete video sequence at the receiver end. On the other hand, SCTP includes features, such as the multi-streaming and multi-homing capabilities, that allow the SVC layers to be transported robustly and efficiently. Several transmission strategies supported on baseline SCTP and its concurrent multipath transfer (CMT) extension are compared with the classical solutions based on the Transmission Control Protocol (TCP) and the Real-time Transport Protocol (RTP). Using ns-2 simulations, it is shown that CMT-SCTP outperforms TCP and RTP in error-prone networking environments. The comparison is established according to several performance measurements, including delay, throughput, packet loss, and peak signal-to-noise ratio of the received video.

  12. Scalable combinatorial tools for health disparities research.

    PubMed

    Langston, Michael A; Levine, Robert S; Kilbourne, Barbara J; Rogers, Gary L; Kershenbaum, Anne D; Baktash, Suzanne H; Coughlin, Steven S; Saxton, Arnold M; Agboto, Vincent K; Hood, Darryl B; Litchveld, Maureen Y; Oyana, Tonny J; Matthews-Juarez, Patricia; Juarez, Paul D

    2014-10-10

    Despite staggering investments made in unraveling the human genome, current estimates suggest that as much as 90% of the variance in cancer and chronic diseases can be attributed to factors outside an individual's genetic endowment, particularly to environmental exposures experienced across his or her life course. New analytical approaches are clearly required as investigators turn to complicated systems theory and ecological, place-based and life-history perspectives in order to understand more clearly the relationships between social determinants, environmental exposures and health disparities. While traditional data analysis techniques remain foundational to health disparities research, they are easily overwhelmed by the ever-increasing size and heterogeneity of available data needed to illuminate latent gene x environment interactions. This has prompted the adaptation and application of scalable combinatorial methods, many from genome science research, to the study of population health. Most of these powerful tools are algorithmically sophisticated, highly automated and mathematically abstract. Their utility motivates the main theme of this paper, which is to describe real applications of innovative transdisciplinary models and analyses in an effort to help move the research community closer toward identifying the causal mechanisms and associated environmental contexts underlying health disparities. The public health exposome is used as a contemporary focus for addressing the complex nature of this subject.

  13. Towards Scalable Optimal Sequence Homology Detection

    SciTech Connect

    Daily, Jeffrey A.; Krishnamoorthy, Sriram; Kalyanaraman, Anantharaman

    2012-12-26

    The field of bioinformatics and computational biology is experiencing a data revolution — experimental techniques to procure data have increased in throughput, improved in accuracy and reduced in costs. This has spurred an array of high profile sequencing and data generation projects. While the data repositories represent untapped reservoirs of rich information critical for scientific breakthroughs, the analytical software tools that are needed to analyze large volumes of such sequence data have significantly lagged behind in their capacity to scale. In this paper, we address homology detection, which is a fundamental problem in large-scale sequence analysis with numerous applications. We present a scalable framework to conduct large-scale optimal homology detection on massively parallel supercomputing platforms. Our approach employs distributed memory work stealing to effectively parallelize optimal pairwise alignment computation tasks. Results on 120,000 cores of the Hopper Cray XE6 supercomputer demonstrate strong scaling and up to 2.42 × 10^7 optimal pairwise sequence alignments computed per second (PSAPS), the highest reported in the literature.
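
    As a toy illustration of the underlying task structure (a shared process pool stands in for the paper's distributed-memory work stealing, and the sequences and scoring scheme are placeholder assumptions), every unordered pair of sequences becomes an independent optimal-alignment task:

      from itertools import combinations
      from multiprocessing import Pool

      def nw_score(a, b, match=1, mismatch=-1, gap=-1):
          """Needleman-Wunsch optimal global alignment score, computed row by row."""
          prev = [j * gap for j in range(len(b) + 1)]
          for i, ca in enumerate(a, 1):
              cur = [i * gap]
              for j, cb in enumerate(b, 1):
                  cur.append(max(prev[j - 1] + (match if ca == cb else mismatch),
                                 prev[j] + gap,       # gap in b
                                 cur[j - 1] + gap))   # gap in a
              prev = cur
          return prev[-1]

      def align_pair(task):
          (i, a), (j, b) = task
          return i, j, nw_score(a, b)

      if __name__ == "__main__":
          seqs = ["ACGTAC", "ACGTTC", "TTGACA", "ACGAAC"]   # placeholder sequences
          with Pool() as pool:
              for i, j, score in pool.map(align_pair, combinations(enumerate(seqs), 2)):
                  print(f"seq{i} vs seq{j}: {score}")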

  14. Deep Hashing for Scalable Image Search.

    PubMed

    Lu, Jiwen; Liong, Venice Erin; Zhou, Jie

    2017-05-01

    In this paper, we propose a new deep hashing (DH) approach to learn compact binary codes for scalable image search. Unlike most existing binary codes learning methods, which usually seek a single linear projection to map each sample into a binary feature vector, we develop a deep neural network to seek multiple hierarchical non-linear transformations to learn these binary codes, so that the non-linear relationship of samples can be well exploited. Our model is learned under three constraints at the top layer of the developed deep network: 1) the loss between the compact real-valued code and the learned binary vector is minimized, 2) the binary codes distribute evenly on each bit, and 3) different bits are as independent as possible. To further improve the discriminative power of the learned binary codes, we extend DH into supervised DH (SDH) and multi-label SDH by including a discriminative term into the objective function of DH, which simultaneously maximizes the inter-class variations and minimizes the intra-class variations of the learned binary codes with the single-label and multi-label settings, respectively. Extensive experimental results on eight widely used image search data sets show that our proposed methods achieve very competitive results compared with the state of the art.
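
    A minimal numpy sketch of the three top-layer constraints listed above, written generically rather than as the authors' exact objective (the batch of real-valued codes H, the code length, and the absence of term weights are illustrative assumptions):

      import numpy as np

      def dh_penalties(H):
          """H: (n_samples, n_bits) real-valued codes from the top layer of the network."""
          B = np.sign(H)                                     # target binary codes in {-1, +1}
          quant = np.mean((B - H) ** 2)                      # (1) real-valued vs. binary code loss
          balance = np.mean(np.mean(H, axis=0) ** 2)         # (2) each bit averages to ~0 over the batch
          gram = (H.T @ H) / H.shape[0]
          indep = np.mean((gram - np.eye(H.shape[1])) ** 2)  # (3) decorrelated (near-independent) bits
          return quant, balance, indep

      H = np.random.randn(128, 48)                           # e.g. 48-bit codes for a mini-batch of 128
      print(dh_penalties(H))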

  15. Parallel heuristics for scalable community detection

    DOE PAGES

    Lu, Hao; Halappanavar, Mahantesh; Kalyanaraman, Ananth

    2015-08-14

    Community detection has become a fundamental operation in numerous graph-theoretic applications. Despite its potential for application, there is only limited support for community detection on large-scale parallel computers, largely owing to the irregular and inherently sequential nature of the underlying heuristics. In this paper, we present parallelization heuristics for fast community detection using the Louvain method as the serial template. The Louvain method is an iterative heuristic for modularity optimization. Originally developed in 2008, the method has become increasingly popular owing to its ability to detect high modularity community partitions in a fast and memory-efficient manner. However, the method is also inherently sequential, thereby limiting its scalability. Here, we observe certain key properties of this method that present challenges for its parallelization, and consequently propose heuristics that are designed to break the sequential barrier. For evaluation purposes, we implemented our heuristics using OpenMP multithreading, and tested them over real world graphs derived from multiple application domains. Compared to the serial Louvain implementation, our parallel implementation is able to produce community outputs with a higher modularity for most of the inputs tested, in comparable number or fewer iterations, while providing real speedups of up to 16x using 32 threads.
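
    For context, the quantity the Louvain heuristic optimizes is Newman's modularity, and its local-move phase evaluates the gain of moving a vertex into a neighboring community (standard formulation due to Blondel et al., reproduced here in its commonly used simplified form rather than taken from this paper):

      \[
        Q \;=\; \frac{1}{2m}\sum_{i,j}\Bigl(A_{ij}-\frac{k_i k_j}{2m}\Bigr)\,\delta(c_i,c_j),
        \qquad
        \Delta Q_{i\to C} \;=\; \frac{k_{i,\mathrm{in}}}{m} \;-\; \frac{\Sigma_{\mathrm{tot}}\, k_i}{2m^{2}},
      \]

    where m is the total edge weight, k_i the weighted degree of vertex i, k_{i,in} the weight of edges connecting i to community C, and Σ_tot the total degree of the vertices already in C; each vertex is moved to the neighboring community with the largest positive gain, and the process repeats until no move improves Q.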

  16. Scalable multichannel MRI data acquisition system.

    PubMed

    Bodurka, Jerzy; Ledden, Patrick J; van Gelderen, Peter; Chu, Renxin; de Zwart, Jacco A; Morris, Doug; Duyn, Jeff H

    2004-01-01

    A scalable multichannel digital MRI receiver system was designed to achieve high bandwidth echo-planar imaging (EPI) acquisitions for applications such as BOLD-fMRI. The modular system design allows for easy extension to an arbitrary number of channels. A 16-channel receiver was developed and integrated with a General Electric (GE) Signa 3T VH/3 clinical scanner. Receiver performance was evaluated on phantoms and human volunteers using a custom-built 16-element receive-only brain surface coil array. At an output bandwidth of 1 MHz, a 100% acquisition duty cycle was achieved. Overall system noise figure and dynamic range were better than 0.85 dB and 84 dB, respectively. During repetitive EPI scanning on phantoms, the relative temporal standard deviation of the image intensity time-course was below 0.2%. As compared to the product birdcage head coil, 16-channel reception with the custom array yielded a nearly 6-fold SNR gain in the cerebral cortex and a 1.8-fold SNR gain in the center of the brain. The excellent system stability combined with the increased sensitivity and SENSE capabilities of 16-channel coils are expected to significantly benefit and enhance fMRI applications. Published 2003 Wiley-Liss, Inc.

  17. Scalability and interoperability within glideinWMS

    SciTech Connect

    Bradley, D.; Sfiligoi, I.; Padhi, S.; Frey, J.; Tannenbaum, T.; /Wisconsin U., Madison

    2010-01-01

    Physicists have access to thousands of CPUs in grid federations such as OSG and EGEE. With the start-up of the LHC, it is essential for individuals or groups of users to wrap together available resources from multiple sites across multiple grids under a higher user-controlled layer in order to provide a homogeneous pool of available resources. One such system is glideinWMS, which is based on the Condor batch system. A general discussion of glideinWMS can be found elsewhere. Here, we focus on recent advances in extending its reach: scalability and integration of heterogeneous compute elements. We demonstrate that the new developments exceed the design goal of over 10,000 simultaneous running jobs under a single Condor schedd, using strong security protocols across global networks, and sustaining a steady-state job completion rate of a few Hz. We also show interoperability across heterogeneous computing elements achieved using client-side methods. We discuss this technique and the challenges in direct access to NorduGrid and CREAM compute elements, in addition to Globus based systems.

  18. Scalability and interoperability within glideinWMS

    NASA Astrophysics Data System (ADS)

    Bradley, D.; Sfiligoi, I.; Padhi, S.; Frey, J.; Tannenbaum, T.

    2010-04-01

    Physicists have access to thousands of CPUs in grid federations such as OSG and EGEE. With the start-up of the LHC, it is essential for individuals or groups of users to wrap together available resources from multiple sites across multiple grids under a higher user-controlled layer in order to provide a homogeneous pool of available resources. One such system is glideinWMS, which is based on the Condor batch system. A general discussion of glideinWMS can be found elsewhere. Here, we focus on recent advances in extending its reach: scalability and integration of heterogeneous compute elements. We demonstrate that the new developments exceed the design goal of over 10,000 simultaneous running jobs under a single Condor schedd, using strong security protocols across global networks, and sustaining a steady-state job completion rate of a few Hz. We also show interoperability across heterogeneous computing elements achieved using client-side methods. We discuss this technique and the challenges in direct access to NorduGrid and CREAM compute elements, in addition to Globus based systems.

  19. Scalable Silicon Nanostructuring for Thermoelectric Applications

    NASA Astrophysics Data System (ADS)

    Koukharenko, E.; Boden, S. A.; Platzek, D.; Bagnall, D. M.; White, N. M.

    2013-07-01

    The current limitations of commercially available thermoelectric (TE) generators include their incompatibility with human body applications due to the toxicity of commonly used alloys and possible future shortage of raw materials (Bi-Sb-Te and Se). In this respect, exploiting silicon as an environmentally friendly candidate for thermoelectric applications is a promising alternative since it is an abundant, ecofriendly semiconductor for which there already exists an infrastructure for low-cost and high-yield processing. Contrary to the existing approaches, where n/p-legs were either heavily doped to an optimal carrier concentration of 10^19 cm^-3 or morphologically modified by increasing their roughness, in this work improved thermoelectric performance was achieved in smooth silicon nanostructures with low doping concentration (1.5 × 10^15 cm^-3). Scalable, highly reproducible e-beam lithographies, which are compatible with nanoimprint and followed by deep reactive-ion etching (DRIE), were employed to produce arrays of regularly spaced nanopillars of 400 nm height with diameters varying from 140 nm to 300 nm. A potential Seebeck microprobe (PSM) was used to measure the Seebeck coefficients of such nanostructures. This resulted in values ranging from -75 μV/K to -120 μV/K for n-type and 100 μV/K to 140 μV/K for p-type, which are significant improvements over previously reported data.

  20. Developing a scalable inert gas ion thruster

    NASA Technical Reports Server (NTRS)

    James, E.; Ramsey, W.; Steiner, G.

    1982-01-01

    Analytical studies to identify and then design a high performance scalable ion thruster operating with either argon or xenon for use in large space systems are presented. The magnetoelectrostatic containment concept is selected for its efficient ion generation capabilities. The iterative nature of the bounding magnetic fields allows the designer to scale both the diameter and length, so that the thruster can be adapted to spacecraft growth over time. Three different thruster assemblies (conical, hexagonal and hemispherical) are evaluated for a 12 cm diameter thruster and performance mapping of the various thruster configurations shows that conical discharge chambers produce the most efficient discharge operation, achieving argon efficiencies of 50-80% mass utilization at 240-310 eV/ion and xenon efficiencies of 60-97% at 240-280 eV/ion. Preliminary testing of the large 30 cm thruster, using argon propellant, indicates a 35% improvement over the 12 cm thruster in mass utilization efficiency. Since initial performance is found to be better than projected, a larger 50 cm thruster is already in the development stage.

  1. SCAN: A Scalable Model of Attentional Selection.

    PubMed

    Hudson, Patrick T.W.; van den Herik, H Jaap; Postma, Eric O.

    1997-08-01

    This paper describes the SCAN (Signal Channelling Attentional Network) model, a scalable neural network model for attentional scanning. The building block of SCAN is a gating lattice, a sparsely-connected neural network defined as a special case of the Ising lattice from statistical mechanics. The process of spatial selection through covert attention is interpreted as a biological solution to the problem of translation-invariant pattern processing. In SCAN, a sequence of pattern translations combines active selection with translation-invariant processing. Selected patterns are channelled through a gating network, formed by a hierarchical fractal structure of gating lattices, and mapped onto an output window. We show how the incorporation of an expectation-generating classifier network (e.g. Carpenter and Grossberg's ART network) into SCAN allows attentional selection to be driven by expectation. Simulation studies show the SCAN model to be capable of attending and identifying object patterns that are part of a realistically sized natural image. Copyright 1997 Elsevier Science Ltd.

  2. Scalable cell alignment on optical media substrates.

    PubMed

    Anene-Nzelu, Chukwuemeka G; Choudhury, Deepak; Li, Huipeng; Fraiszudeen, Azmall; Peh, Kah-Yim; Toh, Yi-Chin; Ng, Sum Huan; Leo, Hwa Liang; Yu, Hanry

    2013-07-01

    Cell alignment by underlying topographical cues has been shown to affect important biological processes such as differentiation and functional maturation in vitro. However, the routine use of cell culture substrates with micro- or nano-topographies, such as grooves, is currently hampered by the high cost and specialized facilities required to produce these substrates. Here we present cost-effective commercially available optical media as substrates for aligning cells in culture. These optical media, including CD-R, DVD-R and optical grating, allow different cell types to attach and grow well on them. The physical dimension of the grooves in these optical media allowed cells to be aligned in confluent cell culture with maximal cell-cell interaction, and this cell alignment affects the morphology and differentiation of cardiac (H9C2), skeletal muscle (C2C12) and neuronal (PC12) cell lines. The optical media are amenable to various chemical modifications with fibronectin, laminin and gelatin for culturing different cell types. These low-cost commercially available optical media can serve as scalable substrates for research or drug safety screening applications at industrial scale.

  3. Lightweight and scalable secure communication in VANET

    NASA Astrophysics Data System (ADS)

    Zhu, Xiaoling; Lu, Yang; Zhu, Xiaojuan; Qiu, Shuwei

    2015-05-01

    To prevent messages from being tampered with or forged in vehicular ad hoc networks (VANETs), the digital signature method is adopted by IEEE 1609.2. However, the costs of the method are excessively high for large-scale networks. The paper efficiently copes with this issue through a secure communication framework that introduces some lightweight cryptography primitives. In our framework, point-to-point and broadcast communications for vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) are studied, mainly based on symmetric cryptography. A new issue introduced is symmetric key management. Thus, we develop key distribution and agreement protocols for two-party and group keys under different environments, whether a road side unit (RSU) is deployed or not. The analysis shows that our protocols provide confidentiality, authentication, perfect forward secrecy, forward secrecy and backward secrecy. The proposed group key agreement protocol especially solves the key leak problem caused by members joining or leaving in existing key agreement protocols. Due to the aggregated signature and the substitution of XOR for point addition, the average computation and communication costs do not significantly increase with the increase in the number of vehicles; hence, our framework provides good scalability.

  4. Coupling Advanced Modeling and Visualization to Improve High-Impact Tropical Weather Prediction

    NASA Technical Reports Server (NTRS)

    Shen, Bo-Wen; Tao, Wei-Kuo; Green, Bryan

    2009-01-01

    To meet the goals of extreme weather event warning, this approach couples a modeling and visualization system that integrates existing NASA technologies and improves the modeling system's parallel scalability to take advantage of petascale supercomputers. It also streamlines the data flow for fast processing and 3D visualizations, and develops visualization modules to fuse NASA satellite data.

  5. Coupling Advanced Modeling and Visualization to Improve High-Impact Tropical Weather Prediction

    NASA Technical Reports Server (NTRS)

    Shen, Bo-Wen; Tao, Wei-Kuo; Green, Bryan

    2009-01-01

    To meet the goals of extreme weather event warning, this approach couples a modeling and visualization system that integrates existing NASA technologies and improves the modeling system's parallel scalability to take advantage of petascale supercomputers. It also streamlines the data flow for fast processing and 3D visualizations, and develops visualization modules to fuse NASA satellite data.

  6. Validity of the Developmental Test of Visual-Motor Integration Supplemental Developmental Test of Visual Perception.

    PubMed

    Brown, Ted; Rodger, Sylvia

    2008-06-01

    Visual perceptual skills of school-age children are often assessed using the Supplemental Developmental Test of Visual Perception of the Developmental Test of Visual-Motor Integration. The study purpose was to consider the construct validity of this test by evaluating its scalability (interval level measurement), unidimensionality, differential item functioning, and hierarchical ordering of its items. Visual perceptual performance scores from a sample of 356 typically developing children (171 boys and 185 girls ages 5 to 11 years) were used to complete a Rasch analysis of the test. Seven items were discarded for poor fit, while none of the items exhibited differential item functioning by sex. The construct validity, scalability, hierarchical ordering, and lack of differential item functioning requirements were met by the final test version. Since 7 test items did not fit the Rasch analysis specifications, the clinical value of the test is questionable and limited.

  7. Trelliscope: A System for Detailed Visualization in Analysis of Large Complex Data

    SciTech Connect

    Hafen, Ryan P.; Gosink, Luke J.; McDermott, Jason E.; Rodland, Karin D.; Kleese-Van Dam, Kerstin; Cleveland, William S.

    2013-12-01

    Visualization plays a critical role in the statistical model building and data analysis process. Data analysts, well-versed in statistical and machine learning methods, visualize data to hypothesize and validate models. These analysts need flexible, scalable visualization tools that are not decoupled from their analysis environment. In this paper we introduce Trelliscope, a visualization framework for statistical analysis of large complex data. Trelliscope extends Trellis, an effective visualization framework that divides data into subsets and applies a plotting method to each subset, arranging the results in rows and columns of panels. Trelliscope provides a way to create, arrange and interactively view panels for very large datasets, enabling flexible detailed visualization for data of any size. Scalability is achieved using distributed computing technologies. We discuss the underlying principles, design, and scalable architecture of Trelliscope, and illustrate its use on three analysis projects in the domains of proteomics, high energy physics, and power systems engineering.
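
    The Trellis idea that Trelliscope extends can be sketched with generic tools: split the data into subsets and apply the same plotting method to each subset, laying the panels out in a grid. The sketch below uses pandas and matplotlib as stand-ins and hypothetical column names; it is not Trelliscope's API, which adds interactive paging and filtering over very large panel collections.

      import matplotlib.pyplot as plt
      import pandas as pd  # df below is expected to be a pandas DataFrame

      def trellis(df, by, x, y, ncols=4):
          """Split df by one variable and draw the same panel for each subset (small multiples)."""
          groups = list(df.groupby(by))
          nrows = -(-len(groups) // ncols)                 # ceiling division
          fig, axes = plt.subplots(nrows, ncols, figsize=(3 * ncols, 2.5 * nrows), squeeze=False)
          panels = axes.ravel()
          for ax, (key, sub) in zip(panels, groups):
              ax.plot(sub[x], sub[y], ".", markersize=2)
              ax.set_title(str(key), fontsize=8)
          for ax in panels[len(groups):]:                  # blank out unused panels
              ax.set_visible(False)
          return fig

      # Hypothetical usage: one panel per station, intensity versus time.
      # fig = trellis(measurements, by="station_id", x="time", y="intensity")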

  8. Memory Scalability and Efficiency Analysis of Parallel Codes

    SciTech Connect

    Janjusic, Tommy; Kartsaklis, Christos

    2015-01-01

    Memory scalability is an enduring problem and bottleneck that plagues many parallel codes. Parallel codes designed for High Performance Systems are typically designed over the span of several, and in some instances 10+, years. As a result, optimization practices which were appropriate for earlier systems may no longer be valid and thus require careful optimization consideration. Specifically, parallel codes whose memory footprint is a function of their scalability must be carefully considered for future exa-scale systems. In this paper we present a methodology and tool to study the memory scalability of parallel codes. Using our methodology we evaluate an application's memory footprint as a function of scalability, which we coined memory efficiency, and describe our results. In particular, using our in-house tools we can pinpoint the specific application components which contribute to the application's overall memory footprint (application data structures, libraries, etc.).

  9. Sparse Distributed Representation and Hierarchy: Keys to Scalable Machine Intelligence

    DTIC Science & Technology

    2016-04-01

    AFRL-RY-WP-TR-2016-0030 (contract FA8650-13-C-7342), Gerard (Rod) Rinkus et al. The surviving abstract fragment reports classification accuracy on the Weizmann data set achieved with 3.5 minutes of training time, with no machine parallelism.

  10. TriG: Next Generation Scalable Spaceborne GNSS Receiver

    NASA Technical Reports Server (NTRS)

    Tien, Jeffrey Y.; Okihiro, Brian Bachman; Esterhuizen, Stephan X.; Franklin, Garth W.; Meehan, Thomas K.; Munson, Timothy N.; Robison, David E.; Turbiner, Dmitry; Young, Lawrence E.

    2012-01-01

    TriG is the next-generation NASA scalable space GNSS science receiver. It will track all GNSS and additional signals (i.e., GPS, GLONASS, Galileo, Compass and Doris). Its scalable 3U architecture is fully software- and firmware-reconfigurable, enabling optimization to meet specific mission requirements. The TriG GNSS EM is currently undergoing testing and is expected to complete full performance testing later this year.

  11. TriG: Next Generation Scalable Spaceborne GNSS Receiver

    NASA Technical Reports Server (NTRS)

    Tien, Jeffrey Y.; Okihiro, Brian Bachman; Esterhuizen, Stephan X.; Franklin, Garth W.; Meehan, Thomas K.; Munson, Timothy N.; Robison, David E.; Turbiner, Dmitry; Young, Lawrence E.

    2012-01-01

    TriG is the next-generation NASA scalable space GNSS science receiver. It will track all GNSS and additional signals (i.e., GPS, GLONASS, Galileo, Compass and Doris). Its scalable 3U architecture is fully software- and firmware-reconfigurable, enabling optimization to meet specific mission requirements. The TriG GNSS EM is currently undergoing testing and is expected to complete full performance testing later this year.

  12. Toward Scalable Ion Traps for Quantum Information Processing

    DTIC Science & Technology

    2010-01-01

  13. Responsive, Flexible and Scalable Broader Impacts (Invited)

    NASA Astrophysics Data System (ADS)

    Decharon, A.; Companion, C.; Steinman, M.

    2010-12-01

    In many educator professional development workshops, scientists present content in a slideshow-type format and field questions afterwards. Drawbacks of this approach include: inability to begin the lecture with content that is responsive to audience needs; lack of flexible access to specific material within the linear presentation; and “Q&A” sessions are not easily scalable to broader audiences. Often this type of traditional interaction provides little direct benefit to the scientists. The Centers for Ocean Sciences Education Excellence - Ocean Systems (COSEE-OS) applies the technique of concept mapping with demonstrated effectiveness in helping scientists and educators “get on the same page” (deCharon et al., 2009). A key aspect is scientist professional development geared towards improving face-to-face and online communication with non-scientists. COSEE-OS promotes scientist-educator collaboration, tests the application of scientist-educator maps in new contexts through webinars, and is piloting the expansion of maps as long-lived resources for the broader community. Collaboration - COSEE-OS has developed and tested a workshop model bringing scientists and educators together in a peer-oriented process, often clarifying common misconceptions. Scientist-educator teams develop online concept maps that are hyperlinked to “assets” (i.e., images, videos, news) and are responsive to the needs of non-scientist audiences. In workshop evaluations, 91% of educators said that the process of concept mapping helped them think through science topics and 89% said that concept mapping helped build a bridge of communication with scientists (n=53). Application - After developing a concept map, with COSEE-OS staff assistance, scientists are invited to give webinar presentations that include live “Q&A” sessions. The webinars extend the reach of scientist-created concept maps to new contexts, both geographically and topically (e.g., oil spill), with a relatively small

  14. Highly Scalable Matching Pursuit Signal Decomposition Algorithm

    NASA Technical Reports Server (NTRS)

    Christensen, Daniel; Das, Santanu; Srivastava, Ashok N.

    2009-01-01

    Matching Pursuit Decomposition (MPD) is a powerful iterative algorithm for signal decomposition and feature extraction. MPD decomposes any signal into linear combinations of its dictionary elements, or atoms. A best-fit atom from an arbitrarily defined dictionary is determined through cross-correlation. The selected atom is subtracted from the signal and this procedure is repeated on the residual in the subsequent iterations until a stopping criterion is met. The reconstructed signal reveals the waveform structure of the original signal. However, a sufficiently large dictionary is required for an accurate reconstruction; this in turn increases the computational burden of the algorithm, thus limiting its applicability and level of adoption. The purpose of this research is to improve the scalability and performance of the classical MPD algorithm. Correlation thresholds were defined to prune insignificant atoms from the dictionary. The Coarse-Fine Grids and Multiple Atom Extraction techniques were proposed to decrease the computational burden of the algorithm. The Coarse-Fine Grids method enabled the approximation and refinement of the parameters for the best-fit atom. The ability to extract multiple atoms within a single iteration enhanced the effectiveness and efficiency of each iteration. These improvements were implemented to produce an improved Matching Pursuit Decomposition algorithm entitled MPD++. Disparate signal decomposition applications may require a particular emphasis on accuracy or computational efficiency. The prominence of the key signal features required for the proper signal classification dictates the level of accuracy necessary in the decomposition. The MPD++ algorithm may be easily adapted to accommodate the imposed requirements. Certain feature extraction applications may require rapid signal decomposition. The full potential of MPD++ may be utilized to produce incredible performance gains while extracting only slightly less energy than the
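
    The classical loop that MPD++ accelerates can be sketched in a few lines (a plain single-atom-per-iteration version, without the paper's correlation thresholds, coarse-fine grids, or multiple-atom extraction; the dictionary and signal are random placeholders):

      import numpy as np

      def matching_pursuit(signal, dictionary, n_iters=50, tol=1e-6):
          """dictionary: (n_atoms, n_samples) rows of unit norm; returns coefficients and residual."""
          residual = signal.astype(float).copy()
          coeffs = np.zeros(dictionary.shape[0])
          for _ in range(n_iters):
              corr = dictionary @ residual               # cross-correlate residual with every atom
              best = np.argmax(np.abs(corr))             # best-fit atom
              coeffs[best] += corr[best]
              residual -= corr[best] * dictionary[best]  # subtract the selected component
              if np.linalg.norm(residual) < tol:
                  break
          return coeffs, residual

      rng = np.random.default_rng(0)
      D = rng.standard_normal((256, 64))
      D /= np.linalg.norm(D, axis=1, keepdims=True)      # unit-norm atoms
      x = 3.0 * D[10] - 1.5 * D[42]                      # signal built from two atoms
      coeffs, r = matching_pursuit(x, D)
      print(np.flatnonzero(np.abs(coeffs) > 0.5), np.linalg.norm(r))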

  15. Statistical Scalability Analysis of Communication Operations in Distributed Applications

    SciTech Connect

    Vetter, J S; McCracken, M O

    2001-02-27

    Current trends in high performance computing suggest that users will soon have widespread access to clusters of multiprocessors with hundreds, if not thousands, of processors. This unprecedented degree of parallelism will undoubtedly expose scalability limitations in existing applications, where scalability is the ability of a parallel algorithm on a parallel architecture to effectively utilize an increasing number of processors. Users will need precise and automated techniques for detecting the cause of limited scalability. This paper addresses this dilemma. First, we argue that users face numerous challenges in understanding application scalability: managing substantial amounts of experiment data, extracting useful trends from this data, and reconciling performance information with their application's design. Second, we propose a solution to automate this data analysis problem by applying fundamental statistical techniques to scalability experiment data. Finally, we evaluate our operational prototype on several applications, and show that statistical techniques offer an effective strategy for assessing application scalability. In particular, we find that non-parametric correlation of the number of tasks to the ratio of the time for individual communication operations to overall communication time provides a reliable measure for identifying communication operations that scale poorly.
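
    A compact sketch of the screening step described above (the run data are invented for illustration, and scipy's Spearman rank correlation plus the 0.8 cutoff stand in for the paper's non-parametric analysis): operations whose share of total communication time rises consistently with the task count are flagged as scaling poorly.

      import numpy as np
      from scipy.stats import spearmanr

      # Illustrative scaling experiment: task counts and per-operation timing ratios per run.
      tasks = np.array([16, 32, 64, 128, 256, 512])
      ratio_allreduce = np.array([0.12, 0.15, 0.21, 0.30, 0.41, 0.55])  # share of total comm. time
      ratio_send      = np.array([0.40, 0.39, 0.41, 0.38, 0.40, 0.39])

      for name, ratio in [("MPI_Allreduce", ratio_allreduce), ("MPI_Send", ratio_send)]:
          rho, p = spearmanr(tasks, ratio)
          flag = "scales poorly" if rho > 0.8 else "ok"
          print(f"{name}: rho={rho:+.2f} (p={p:.3f}) -> {flag}")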

  16. Physical principles for scalable neural recording

    PubMed Central

    Zamft, Bradley M.; Maguire, Yael G.; Shapiro, Mikhail G.; Cybulski, Thaddeus R.; Glaser, Joshua I.; Amodei, Dario; Stranges, P. Benjamin; Kalhor, Reza; Dalrymple, David A.; Seo, Dongjin; Alon, Elad; Maharbiz, Michel M.; Carmena, Jose M.; Rabaey, Jan M.; Boyden, Edward S.; Church, George M.; Kording, Konrad P.

    2013-01-01

    Simultaneously measuring the activities of all neurons in a mammalian brain at millisecond resolution is a challenge beyond the limits of existing techniques in neuroscience. Entirely new approaches may be required, motivating an analysis of the fundamental physical constraints on the problem. We outline the physical principles governing brain activity mapping using optical, electrical, magnetic resonance, and molecular modalities of neural recording. Focusing on the mouse brain, we analyze the scalability of each method, concentrating on the limitations imposed by spatiotemporal resolution, energy dissipation, and volume displacement. Based on this analysis, all existing approaches require orders of magnitude improvement in key parameters. Electrical recording is limited by the low multiplexing capacity of electrodes and their lack of intrinsic spatial resolution, optical methods are constrained by the scattering of visible light in brain tissue, magnetic resonance is hindered by the diffusion and relaxation timescales of water protons, and the implementation of molecular recording is complicated by the stochastic kinetics of enzymes. Understanding the physical limits of brain activity mapping may provide insight into opportunities for novel solutions. For example, unconventional methods for delivering electrodes may enable unprecedented numbers of recording sites, embedded optical devices could allow optical detectors to be placed within a few scattering lengths of the measured neurons, and new classes of molecularly engineered sensors might obviate cumbersome hardware architectures. We also study the physics of powering and communicating with microscale devices embedded in brain tissue and find that, while radio-frequency electromagnetic data transmission suffers from a severe power–bandwidth tradeoff, communication via infrared light or ultrasound may allow high data rates due to the possibility of spatial multiplexing. The use of embedded local recording and

  17. Scalable Machine Learning for Massive Astronomical Datasets

    NASA Astrophysics Data System (ADS)

    Ball, Nicholas M.; Gray, A.

    2014-04-01

    We present the ability to perform data mining and machine learning operations on a catalog of half a billion astronomical objects. This is the result of the combination of robust, highly accurate machine learning algorithms with linear scalability that renders the applications of these algorithms to massive astronomical data tractable. We demonstrate the core algorithms: kernel density estimation, K-means clustering, linear regression, nearest neighbors, random forest and gradient-boosted decision tree, singular value decomposition, support vector machine, and two-point correlation function. Each of these is relevant for astronomical applications such as finding novel astrophysical objects, characterizing artifacts in data, object classification (including for rare objects), object distances, finding the important features describing objects, density estimation of distributions, probabilistic quantities, and exploring the unknown structure of new data. The software, Skytree Server, runs on any UNIX-based machine, a virtual machine, or cloud-based and distributed systems including Hadoop. We have integrated it on the cloud computing system of the Canadian Astronomical Data Centre, the Canadian Advanced Network for Astronomical Research (CANFAR), creating the world's first cloud computing data mining system for astronomy. We demonstrate results showing the scaling of each of our major algorithms on large astronomical datasets, including the full 470,992,970 objects of the 2 Micron All-Sky Survey (2MASS) Point Source Catalog. We demonstrate the ability to find outliers in the full 2MASS dataset utilizing multiple methods, e.g., nearest neighbors. This is likely of particular interest to the radio astronomy community given, for example, that survey projects contain groups dedicated to this topic. 2MASS is used as a proof-of-concept dataset due to its convenience and availability. These results are of interest to any astronomical project with large and/or complex

  18. Scalability of Localized Arc Filament Plasma Actuators

    NASA Technical Reports Server (NTRS)

    Brown, Clifford A.

    2008-01-01

    Temporal flow control of a jet has been widely studied in the past to enhance jet mixing or reduce jet noise. Most of this research, however, has been done using small diameter low Reynolds number jets that often have little resemblance to the much larger jets common in real world applications because the flow actuators available lacked either the power or bandwidth to sufficiently impact these larger higher energy jets. The Localized Arc Filament Plasma Actuators (LAFPA), developed at the Ohio State University (OSU), have demonstrated the ability to impact a small high speed jet in experiments conducted at OSU and the power to perturb a larger high Reynolds number jet in experiments conducted at the NASA Glenn Research Center. However, the response measured in the large-scale experiments was significantly reduced for the same number of actuators compared to the jet response found in the small-scale experiments. A computational study has been initiated to simulate the LAFPA system with additional actuators on a large-scale jet to determine the number of actuators required to achieve the same desired response for a given jet diameter. Central to this computational study is a model for the LAFPA that both accurately represents the physics of the actuator and can be implemented into a computational fluid dynamics solver. One possible model, based on pressure waves created by the rapid localized heating that occurs at the actuator, is investigated using simplified axisymmetric simulations. The results of these simulations will be used to determine the validity of the model before more realistic and time consuming three-dimensional simulations are conducted to ultimately determine the scalability of the LAFPA system.

  19. Scalable tensor factorizations with missing data.

    SciTech Connect

    Morup, Morten; Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson

    2010-04-01

    The problem of missing data is ubiquitous in domains such as biomedical signal processing, network traffic analysis, bibliometrics, social network analysis, chemometrics, computer vision, and communication networks, all domains in which data collection is subject to occasional errors. Moreover, these data sets can be quite large and have more than two axes of variation, e.g., sender, receiver, time. Many applications in those domains aim to capture the underlying latent structure of the data; in other words, they need to factorize data sets with missing entries. If we cannot address the problem of missing data, many important data sets will be discarded or improperly analyzed. Therefore, we need a robust and scalable approach for factorizing multi-way arrays (i.e., tensors) in the presence of missing data. We focus on one of the most well-known tensor factorizations, CANDECOMP/PARAFAC (CP), and formulate the CP model as a weighted least squares problem that models only the known entries. We develop an algorithm called CP-WOPT (CP Weighted OPTimization) using a first-order optimization approach to solve the weighted least squares problem. Based on extensive numerical experiments, our algorithm is shown to successfully factor tensors with noise and up to 70% missing data. Moreover, our approach is significantly faster than the leading alternative and scales to larger problems. To show the real-world usefulness of CP-WOPT, we illustrate its applicability on a novel EEG (electroencephalogram) application where missing data is frequently encountered due to disconnections of electrodes.
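
    In symbols, the weighted formulation models only the known entries. Writing W for a binary indicator tensor (w_ijk = 1 where x_ijk is observed, 0 otherwise), * for the elementwise product, and [[A, B, C]] for the rank-R CP reconstruction, the objective, sketched here in standard tensor notation rather than quoted from the paper, is

      \[
        f_{\mathcal{W}}(\mathbf{A},\mathbf{B},\mathbf{C})
          \;=\; \bigl\lVert \mathcal{W} \ast \bigl( \boldsymbol{\mathcal{X}} - [\![\mathbf{A},\mathbf{B},\mathbf{C}]\!] \bigr) \bigr\rVert^{2}
          \;=\; \sum_{i,j,k} w_{ijk}\Bigl( x_{ijk} - \sum_{r=1}^{R} a_{ir}\, b_{jr}\, c_{kr} \Bigr)^{2},
      \]

    which CP-WOPT minimizes with a first-order (gradient-based) method over the factor matrices A, B, and C.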

  20. Memory-Scalable GPU Spatial Hierarchy Construction.

    PubMed

    Qiming Hou; Xin Sun; Kun Zhou; Lauterbach, C; Manocha, D

    2011-04-01

    Recent GPU algorithms for constructing spatial hierarchies have achieved promising performance for moderately complex models by using the breadth-first search (BFS) construction order. While being able to exploit the massive parallelism on the GPU, the BFS order also consumes excessive GPU memory, which becomes a serious issue for interactive applications involving very complex models with more than a few million triangles. In this paper, we propose to use the partial breadth-first search (PBFS) construction order to control memory consumption while maximizing performance. We apply the PBFS order to two hierarchy construction algorithms. The first algorithm is for kd-trees that automatically balances between the level of parallelism and intermediate memory usage. With PBFS, peak memory consumption during construction can be efficiently controlled without costly CPU-GPU data transfer. We also develop memory allocation strategies to effectively limit memory fragmentation. The resulting algorithm scales well with GPU memory and constructs kd-trees of models with millions of triangles at interactive rates on GPUs with 1 GB memory. Compared with existing algorithms, our algorithm is an order of magnitude more scalable for a given GPU memory bound. The second algorithm is for out-of-core bounding volume hierarchy (BVH) construction for very large scenes based on the PBFS construction order. At each iteration, all constructed nodes are dumped to the CPU memory, and the GPU memory is freed for the next iteration's use. In this way, the algorithm is able to build trees that are too large to be stored in the GPU memory. Experiments show that our algorithm can construct BVHs for scenes with up to 20 M triangles, several times larger than previous GPU algorithms.

  1. Physical principles for scalable neural recording.

    PubMed

    Marblestone, Adam H; Zamft, Bradley M; Maguire, Yael G; Shapiro, Mikhail G; Cybulski, Thaddeus R; Glaser, Joshua I; Amodei, Dario; Stranges, P Benjamin; Kalhor, Reza; Dalrymple, David A; Seo, Dongjin; Alon, Elad; Maharbiz, Michel M; Carmena, Jose M; Rabaey, Jan M; Boyden, Edward S; Church, George M; Kording, Konrad P

    2013-01-01

    Simultaneously measuring the activities of all neurons in a mammalian brain at millisecond resolution is a challenge beyond the limits of existing techniques in neuroscience. Entirely new approaches may be required, motivating an analysis of the fundamental physical constraints on the problem. We outline the physical principles governing brain activity mapping using optical, electrical, magnetic resonance, and molecular modalities of neural recording. Focusing on the mouse brain, we analyze the scalability of each method, concentrating on the limitations imposed by spatiotemporal resolution, energy dissipation, and volume displacement. Based on this analysis, all existing approaches require orders of magnitude improvement in key parameters. Electrical recording is limited by the low multiplexing capacity of electrodes and their lack of intrinsic spatial resolution, optical methods are constrained by the scattering of visible light in brain tissue, magnetic resonance is hindered by the diffusion and relaxation timescales of water protons, and the implementation of molecular recording is complicated by the stochastic kinetics of enzymes. Understanding the physical limits of brain activity mapping may provide insight into opportunities for novel solutions. For example, unconventional methods for delivering electrodes may enable unprecedented numbers of recording sites, embedded optical devices could allow optical detectors to be placed within a few scattering lengths of the measured neurons, and new classes of molecularly engineered sensors might obviate cumbersome hardware architectures. We also study the physics of powering and communicating with microscale devices embedded in brain tissue and find that, while radio-frequency electromagnetic data transmission suffers from a severe power-bandwidth tradeoff, communication via infrared light or ultrasound may allow high data rates due to the possibility of spatial multiplexing. The use of embedded local recording and

  2. Myria: Scalable Analytics as a Service

    NASA Astrophysics Data System (ADS)

    Howe, B.; Halperin, D.; Whitaker, A.

    2014-12-01

    At the UW eScience Institute, we're working to empower non-experts, especially in the sciences, to write and use data-parallel algorithms. To this end, we are building Myria, a web-based platform for scalable analytics and data-parallel programming. Myria's internal model of computation is the relational algebra extended with iteration, such that every program is inherently data-parallel, just as every query in a database is inherently data-parallel. But unlike databases, iteration is a first class concept, allowing us to express machine learning tasks, graph traversal tasks, and more. Programs can be expressed in a number of languages and can be executed on a number of execution environments, but we emphasize a particular language called MyriaL that supports both imperative and declarative styles and a particular execution engine called MyriaX that uses an in-memory column-oriented representation and asynchronous iteration. We deliver Myria over the web as a service, providing an editor, performance analysis tools, and catalog browsing features in a single environment. We find that this web-based "delivery vector" is critical in reaching non-experts: they are insulated from irrelevant effort, namely the technical work associated with installation, configuration, and resource management. The MyriaX backend, one of several execution runtimes we support, is a main-memory, column-oriented, RDBMS-on-the-worker system that supports cyclic data flows as a first-class citizen and has been shown to outperform competitive systems on 100-machine cluster sizes. I will describe the Myria system, give a demo, and present some new results in large-scale oceanographic microbiology.

  3. A Robust Scalable Transportation System Concept

    NASA Technical Reports Server (NTRS)

    Hahn, Andrew; DeLaurentis, Daniel

    2006-01-01

    This report documents the 2005 Revolutionary System Concept for Aeronautics (RSCA) study entitled "A Robust, Scalable Transportation System Concept". The objective of the study was to generate, at a high level of abstraction, characteristics of a new concept for the National Airspace System, or the new NAS, under which transportation goals such as increased throughput, delay reduction, and improved robustness could be realized. Since such an objective can be overwhelmingly complex if pursued at the lowest levels of detail, instead a System-of-Systems (SoS) approach was adopted to model alternative air transportation architectures at a high level. The SoS approach allows the consideration of not only the technical aspects of the NAS, but also incorporates policy, socio-economic, and alternative transportation system considerations into one architecture. While the representations of the individual systems are basic, the higher level approach allows for ways to optimize the SoS at the network level, determining the best topology (i.e. configuration of nodes and links). The final product (concept) is a set of rules of behavior and network structure that not only satisfies national transportation goals, but represents the high impact rules that accomplish those goals by getting the agents to "do the right thing" naturally. The novel combination of Agent Based Modeling and Network Theory provides the core analysis methodology in the System-of-Systems approach. Our method of approach is non-deterministic, which means, fundamentally, it asks and answers different questions than deterministic models. The nondeterministic method is necessary primarily due to our marriage of human systems with technological ones in a partially unknown set of future worlds. Our goal is to understand and simulate how the SoS, human and technological components combined, evolves.

  4. Parallel Heuristics for Scalable Community Detection

    SciTech Connect

    Lu, Howard; Kalyanaraman, Anantharaman; Halappanavar, Mahantesh; Choudhury, Sutanay

    2014-05-17

    Community detection has become a fundamental operation in numerous graph-theoretic applications. It is used to reveal natural divisions that exist within real world networks without imposing prior size or cardinality constraints on the set of communities. Despite its potential for application, there is only limited support for community detection on large-scale parallel computers, largely owing to the irregular and inherently sequential nature of the underlying heuristics. In this paper, we present parallelization heuristics for fast community detection using the Louvain method as the serial template. The Louvain method is an iterative heuristic for modularity optimization. Originally developed by Blondel et al. in 2008, the method has become increasingly popular owing to its ability to detect high modularity community partitions in a fast and memory-efficient manner. However, the method is also inherently sequential, thereby limiting its scalability to problems that can be solved on desktops. Here, we observe certain key properties of this method that present challenges for its parallelization, and consequently propose multiple heuristics that are designed to break the sequential barrier. Our heuristics are agnostic to the underlying parallel architecture. For evaluation purposes, we implemented our heuristics on shared memory (OpenMP) and distributed memory (MapReduce-MPI) machines, and tested them over real world graphs derived from multiple application domains (internet, biological, natural language processing). Experimental results demonstrate the ability of our heuristics to converge to high modularity solutions comparable to those output by the serial algorithm in nearly the same number of iterations, while also drastically reducing time to solution.
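
    The parallel heuristics in the paper start from the serial Louvain template; the snippet below exercises that serial baseline using networkx's built-in implementation (an assumption on our part, since the paper's own code targets OpenMP and MapReduce-MPI) and reports the modularity that Louvain optimizes.

```python
# Serial Louvain baseline via networkx (requires networkx >= 2.8); the paper's
# parallel heuristics target OpenMP and MapReduce-MPI implementations instead.
import networkx as nx

G = nx.karate_club_graph()   # small stand-in for the real-world graphs in the paper

# Greedy modularity optimization with repeated graph coarsening.
communities = nx.community.louvain_communities(G, seed=42)

# Modularity of the resulting partition: the quantity Louvain maximizes.
q = nx.community.modularity(G, communities)
print(f"{len(communities)} communities, modularity = {q:.3f}")
```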

  5. Scalable persistent identifier systems for dynamic datasets

    NASA Astrophysics Data System (ADS)

    Golodoniuc, P.; Cox, S. J. D.; Klump, J. F.

    2016-12-01

    Reliable and persistent identification of objects, whether tangible or not, is essential in information management. Many Internet-based systems have been developed to identify digital data objects, e.g., PURL, LSID, Handle, ARK. These were largely designed for identification of static digital objects. The amount of data made available online has grown exponentially over the last two decades and fine-grained identification of dynamically generated data objects within large datasets using conventional systems (e.g., PURL) has become impractical. We have compared capabilities of various technological solutions to enable resolvability of data objects in dynamic datasets, and developed a dataset-centric approach to resolution of identifiers. This is particularly important in Semantic Linked Data environments where dynamic, frequently changing data are delivered live via web services, so registration of individual data objects to obtain identifiers is impractical. We use identifier patterns and pattern hierarchies for identification of data objects, which allows relationships between identifiers to be expressed, and also provides a means for resolving a single identifier into multiple forms (i.e. views or representations of an object). The latter can be implemented through (a) HTTP content negotiation, or (b) use of URI querystring parameters. The pattern and hierarchy approach has been implemented in the Linked Data API supporting the United Nations Spatial Data Infrastructure (UNSDI) initiative and later in the implementation of geoscientific data delivery for the Capricorn Distal Footprints project using International Geo Sample Numbers (IGSN). This enables flexible resolution of multi-view persistent identifiers and provides a scalable solution for large heterogeneous datasets.
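
    A minimal sketch of the pattern-based resolution idea, assuming a hypothetical identifier namespace and endpoint URLs: an identifier is matched against registered patterns, and a querystring parameter (one of the two mechanisms mentioned above, the other being HTTP content negotiation) selects which view the identifier resolves to.

```python
# Hypothetical sketch: resolve a pattern-based persistent identifier into one
# of several views, selected via a URI querystring parameter.
import re
from urllib.parse import urlparse, parse_qs

PATTERNS = [
    # (regex over the identifier path, {view name: target URL template})
    (re.compile(r"^/igsn/(?P<sample>[A-Z0-9]+)$"), {
        "html": "https://samples.example.org/{sample}",
        "json": "https://samples.example.org/api/{sample}.json",
    }),
]

def resolve(identifier_uri: str) -> str:
    """Map a persistent identifier URI to the URL of the requested view."""
    parsed = urlparse(identifier_uri)
    view = parse_qs(parsed.query).get("view", ["html"])[0]   # default view
    for pattern, views in PATTERNS:
        match = pattern.match(parsed.path)
        if match and view in views:
            return views[view].format(**match.groupdict())
    raise KeyError(f"no pattern registered for {identifier_uri}")

print(resolve("https://pid.example.org/igsn/AU1234?view=json"))
```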

  6. Visualization for Molecular Dynamics Simulation of Gas and Metal Surface Interaction

    NASA Astrophysics Data System (ADS)

    Puzyrkov, D.; Polyakov, S.; Podryga, V.

    2016-02-01

    The development of methods, algorithms and applications for visualization of molecular dynamics simulation outputs is discussed. The visual analysis of the results of such calculations is a complex and pressing problem, especially in the case of large-scale simulations. To solve this challenging task it is necessary to decide on: 1) what data parameters to render, 2) what type of visualization to choose, 3) what development tools to use. In the present work an attempt to answer these questions was made. For visualization we propose to render the particles at their 3D coordinates, together with their velocity vectors, trajectories, and volume density in the form of isosurfaces or fog. We tested a post-processing and visualization approach based on the Python language and additional libraries. Parallel software was also developed that processes large volumes of data in the 3D regions of the examined system. This software produces results in parallel with the calculations and, at the end, assembles the individual frames into a video file. The software package "Enthought Mayavi2" was used as the tool for visualization. This visualization application gave us the opportunity to study the interaction of a gas with a metal surface and to closely observe the adsorption effect.
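
    A minimal Mayavi (mlab) sketch of the rendering modes described above, using synthetic particle data rather than the actual simulation outputs: particle positions, their velocity vectors, and a binned volume density shown as an isosurface.

```python
# Synthetic-data Mayavi sketch: particles, velocity vectors, and a density isosurface.
import numpy as np
from mayavi import mlab

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 10.0, size=(200, 3))   # particle coordinates
vel = rng.normal(0.0, 1.0, size=(200, 3))     # particle velocities

# Particles and their velocity vectors.
mlab.points3d(pos[:, 0], pos[:, 1], pos[:, 2], scale_factor=0.2)
mlab.quiver3d(pos[:, 0], pos[:, 1], pos[:, 2],
              vel[:, 0], vel[:, 1], vel[:, 2], scale_factor=0.5)

# Binned volume density, rendered as isosurfaces ("fog" would use
# mlab.pipeline.volume on the same grid instead).
density, _ = np.histogramdd(pos, bins=(40, 40, 40), range=[(0.0, 10.0)] * 3)
mlab.contour3d(density, contours=4, opacity=0.3)

mlab.show()
```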

  7. Visual Perception versus Visual Function.

    ERIC Educational Resources Information Center

    Lieberman, Laurence M.

    1984-01-01

    Distinctions are drawn between visual perception and visual function, and four optometrists respond with further analysis of the visual perception-visual function controversy and its implications for children with learning problems. (CL)

  8. Visualization of CFD Results in Immersive Virtual Environments

    NASA Technical Reports Server (NTRS)

    Wasfy, Tamer M.; Noor, Ahmed K.

    2001-01-01

    An object-oriented event-driven immersive virtual environment (VE) is described for the visualization of computational fluid dynamics (CFD) results. The VE incorporates the following types of primitive software objects: interface objects, support objects, geometric entities, and finite elements. The fluid domain is discretized using either a multi-block structured grid or an unstructured finite element mesh. The VE allows natural 'fly-through' visualization of the model, the CFD grid, and the model's surroundings. In order to help visualize the flow and its effects on the model, the VE incorporates the following objects: stream objects (lines, surface-restricted lines, ribbons, and volumes); colored surfaces; elevation surfaces; surface arrows; global and local iso-surfaces; vortex cores; and separation/attachment surfaces and lines. Most of these objects can be used for dynamically probing the flow. Particles and arrow animations can be displayed on top of stream objects. Primitive response quantities as well as derived quantities can be used. A recursive tree search algorithm is used for real-time point and value search in the CFD grid.

  10. Visual Text Analytics for Impromptu Analysts

    SciTech Connect

    Love, Oriana J.; Best, Daniel M.; Bruce, Joseph R.; Dowson, Scott T.; Larmey, Christopher S.

    2011-10-23

    The Scalable Reasoning System (SRS) is a lightweight visual analytics framework that makes analytical capabilities widely accessible to a class of users we have deemed “impromptu analysts.” By focusing on a deployment of SRS, the Lessons Learned Explorer (LLEx), we examine how to develop visualizations around analysis-oriented goals and data availability. We discuss how to help impromptu analysts explore deeper patterns. Through designing consistent interactions, we arrive at an interdependent view capable of showcasing patterns. With the combination of SRS widget visualizations and interactions around the underlying textual data, we aim to transition the casual, infrequent user into a viable, albeit impromptu, analyst.

  11. Provenance Storage, Querying, and Visualization in PBase

    SciTech Connect

    Kianmajd, Parisa; Ludascher, Bertram; Missier, Paolo; Chirigati, Fernando; Wei, Yaxing; Koop, David; Dey, Saumen

    2015-01-01

    We present PBase, a repository for scientific workflows and their corresponding provenance information that facilitates the sharing of experiments among the scientific community. PBase is interoperable since it uses ProvONE, a standard provenance model for scientific workflows. Workflows and traces are stored in RDF, and with the support of SPARQL and the tree cover encoding, the repository provides a scalable infrastructure for querying the provenance data. Furthermore, through its user interface, it is possible to: visualize workflows and execution traces; visualize reachability relations within these traces; issue SPARQL queries; and visualize query results.

  12. A scalable climate health justice assessment model.

    PubMed

    McDonald, Yolanda J; Grineski, Sara E; Collins, Timothy W; Kim, Young-An

    2015-05-01

    This paper introduces a scalable "climate health justice" model for assessing and projecting incidence, treatment costs, and sociospatial disparities for diseases with well-documented climate change linkages. The model is designed to employ low-cost secondary data, and it is rooted in a perspective that merges normative environmental justice concerns with theoretical grounding in health inequalities. Since the model employs International Classification of Diseases, Ninth Revision Clinical Modification (ICD-9-CM) disease codes, it is transferable to other contexts, appropriate for use across spatial scales, and suitable for comparative analyses. We demonstrate the utility of the model through analysis of 2008-2010 hospitalization discharge data at state and county levels in Texas (USA). We identified several disease categories (i.e., cardiovascular, gastrointestinal, heat-related, and respiratory) associated with climate change, and then selected corresponding ICD-9 codes with the highest hospitalization counts for further analyses. Selected diseases include ischemic heart disease, diarrhea, heat exhaustion/cramps/stroke/syncope, and asthma. Cardiovascular disease ranked first among the general categories of diseases for age-adjusted hospital admission rate (5286.37 per 100,000). In terms of specific selected diseases (per 100,000 population), asthma ranked first (517.51), followed by ischemic heart disease (195.20), diarrhea (75.35), and heat exhaustion/cramps/stroke/syncope (7.81). Charges associated with the selected diseases over the 3-year period amounted to US$5.6 billion. Blacks were disproportionately burdened by the selected diseases in comparison to non-Hispanic whites, while Hispanics were not. Spatial distributions of the selected disease rates revealed geographic zones of disproportionate risk. Based upon a downscaled regional climate-change projection model, we estimate a >5% increase in the incidence and treatment costs of asthma attributable to

  13. Scalable Designs for Planar Ion Trap Arrays

    NASA Astrophysics Data System (ADS)

    Slusher, R. E.

    2007-03-01

    , ``Architecture for a large-scale ion-trap quantum computer,'' Nature, Vol. 417, pp. 709--711, (2002). S. Seidelin, J. Chiaverini, R. Reichle, J. J. Bollinger, D. Leibfried, J. Britton, J. H. Wesenberg, R. B. Blakestad, R. J. Epstein, D. B. Hume, J. D. Jost, C. Langer, R. Ozeri, N. Shiga, and D. J. Wineland, ``A microfabricated surface-electrode ion trap for scalable quantum information processing,'' quant-ph/0601173, (2006). J. Kim, S. Pau, Z. Ma, H. R. McLellan, J. V. Gates, A. Kornblit, and R. E. Slusher, ``System design for large-scale ion trap quantum information processor,'' Quantum Inf. Comput., Vol. 5, pp. 515--537, (2005).

  14. A scalable climate health justice assessment model

    PubMed Central

    McDonald, Yolanda J.; Grineski, Sara E.; Collins, Timothy W.; Kim, Young-An

    2014-01-01

    This paper introduces a scalable “climate health justice” model for assessing and projecting incidence, treatment costs, and sociospatial disparities for diseases with well-documented climate change linkages. The model is designed to employ low-cost secondary data, and it is rooted in a perspective that merges normative environmental justice concerns with theoretical grounding in health inequalities. Since the model employs International Classification of Diseases, Ninth Revision Clinical Modification (ICD-9-CM) disease codes, it is transferable to other contexts, appropriate for use across spatial scales, and suitable for comparative analyses. We demonstrate the utility of the model through analysis of 2008–2010 hospitalization discharge data at state and county levels in Texas (USA). We identified several disease categories (i.e., cardiovascular, gastrointestinal, heat-related, and respiratory) associated with climate change, and then selected corresponding ICD-9 codes with the highest hospitalization counts for further analyses. Selected diseases include ischemic heart disease, diarrhea, heat exhaustion/cramps/stroke/syncope, and asthma. Cardiovascular disease ranked first among the general categories of diseases for age-adjusted hospital admission rate (5286.37 per 100,000). In terms of specific selected diseases (per 100,000 population), asthma ranked first (517.51), followed by ischemic heart disease (195.20), diarrhea (75.35), and heat exhaustion/cramps/stroke/syncope (7.81). Charges associated with the selected diseases over the 3-year period amounted to US$5.6 billion. Blacks were disproportionately burdened by the selected diseases in comparison to non-Hispanic whites, while Hispanics were not. Spatial distributions of the selected disease rates revealed geographic zones of disproportionate risk. Based upon a downscaled regional climate-change projection model, we estimate a >5% increase in the incidence and treatment costs of asthma attributable to

  15. Laplacian embedded regression for scalable manifold regularization.

    PubMed

    Chen, Lin; Tsang, Ivor W; Xu, Dong

    2012-06-01

    world data sets show the effectiveness and scalability of the proposed framework.

  16. Visualization of time-varying MRI data for MS lesion analysis

    NASA Astrophysics Data System (ADS)

    Tory, Melanie K.; Moeller, Torsten; Atkins, M. Stella

    2001-05-01

    Conventional methods to diagnose and follow treatment of Multiple Sclerosis require radiologists and technicians to compare current images with older images of a particular patient, on a slice-by-slice basis. Although there has been progress in creating 3D displays of medical images, little attempt has been made to design visual tools that emphasize change over time. We implemented several ideas that attempt to address this deficiency. In one approach, isosurfaces of segmented lesions at each time step were displayed either on the same image (each time step in a different color), or consecutively in an animation. In a second approach, voxel-wise differences between time steps were calculated and displayed statically using ray casting. Animation was used to show cumulative changes over time. Finally, in a method borrowed from computational fluid dynamics (CFD), glyphs (small arrow-like objects) were rendered with a surface model of the lesions to indicate changes at localized points.
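
    A minimal sketch of the two static approaches above, assuming binary lesion masks (one 3D array per time step) are already available from segmentation; the masks here are synthetic placeholders.

```python
# Synthetic sketch: per-time-step lesion isosurfaces and voxel-wise change maps.
import numpy as np
from skimage import measure

rng = np.random.default_rng(1)
mask_t0 = rng.random((64, 64, 64)) > 0.995   # placeholder lesion mask, time 0
mask_t1 = rng.random((64, 64, 64)) > 0.995   # placeholder lesion mask, time 1

# Approach 1: one isosurface per time step (rendered, e.g., in different colors).
verts0, faces0, _, _ = measure.marching_cubes(mask_t0.astype(float), level=0.5)
verts1, faces1, _, _ = measure.marching_cubes(mask_t1.astype(float), level=0.5)

# Approach 2: voxel-wise difference between time steps;
# +1 marks newly lesioned voxels, -1 marks voxels that resolved.
diff = mask_t1.astype(int) - mask_t0.astype(int)
print("grew:", int((diff == 1).sum()), "shrank:", int((diff == -1).sum()))
```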

  17. Visualization of AMR data with multi-level dual-mesh interpolation.

    PubMed

    Moran, Patrick J; Ellsworth, David

    2011-12-01

    We present a new technique for providing interpolation within cell-centered Adaptive Mesh Refinement (AMR) data that achieves C(0) continuity throughout the 3D domain. Our technique improves on earlier work in that it does not require that adjacent patches differ by at most one refinement level. Our approach takes the dual of each mesh patch and generates "stitching cells" on the fly to fill the gaps between dual meshes. We demonstrate applications of our technique with data from Enzo, an AMR cosmological structure formation simulation code. We show ray-cast visualizations that include contributions from particle data (dark matter and stars, also output by Enzo) and gridded hydrodynamic data. We also show results from isosurface studies, including surfaces in regions where adjacent patches differ by more than one refinement level.

  18. WebGL-enabled 3D visualization of a Solar Flare Simulation

    NASA Astrophysics Data System (ADS)

    Chen, A.; Cheung, C. M. M.; Chintzoglou, G.

    2016-12-01

    The visualization of magnetohydrodynamic (MHD) simulations of astrophysical systems such as solar flares often requires specialized software packages (e.g. Paraview and VAPOR). A shortcoming of using such software packages is the inability to share our findings with the public and scientific community in an interactive and engaging manner. By using the javascript-based WebGL application programming interface (API) and the three.js javascript package, we create an online in-browser experience for rendering solar flare simulations that will be interactive and accessible to the general public. The WebGL renderer displays objects such as vector flow fields, streamlines and textured isosurfaces. This allows the user to explore the spatial relation between the solar coronal magnetic field and the thermodynamic structure of the plasma in which the magnetic field is embedded. Plans for extending the features of the renderer will also be presented.

  19. Developing a personal computer-based data visualization system using public domain software

    NASA Astrophysics Data System (ADS)

    Chen, Philip C.

    1999-03-01

    The current research will investigate the possibility of developing a computing-visualization system using a public domain software system built on a personal computer. Visualization Toolkit (VTK) is available on UNIX and PC platforms. VTK uses C++ to build an executable. It has abundant programming classes/objects that are contained in the system library. Users can also develop their own classes/objects in addition to those existing in the class library. Users can develop applications with any of the C++, Tcl/Tk, and JAVA environments. The present research will show how a data visualization system can be developed with VTK running on a personal computer. The topics will include: execution efficiency; visual object quality; availability of the user interface design; and exploring the feasibility of the VTK-based World Wide Web data visualization system. The present research will feature a case study showing how to use VTK to visualize meteorological data with techniques including iso-surface extraction, volume rendering, vector display, and composite analysis. The study also shows how the VTK outline, axes, and two-dimensional annotation text and title enhance the data presentation. The present research will also demonstrate how VTK works in an internet environment while accessing an executable from a JAVA application running in a webpage.
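
    A short sketch of the kind of VTK isosurface pipeline the case study describes. It uses VTK's Python bindings and a synthetic volume source rather than meteorological data, but the source-filter-mapper-actor structure is the same as in the C++/Tcl/Java environments mentioned above.

```python
# Basic VTK isosurface pipeline (Python bindings), with a synthetic volume source.
import vtk

source = vtk.vtkRTAnalyticSource()          # built-in analytic "wavelet" volume
source.SetWholeExtent(-30, 30, -30, 30, -30, 30)

contour = vtk.vtkContourFilter()            # extract the isosurface
contour.SetInputConnection(source.GetOutputPort())
contour.SetValue(0, 150.0)                  # isovalue within the source's scalar range

mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(contour.GetOutputPort())

actor = vtk.vtkActor()
actor.SetMapper(mapper)

renderer = vtk.vtkRenderer()
renderer.AddActor(actor)

window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)

interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)

window.Render()
interactor.Start()
```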

  20. Scalability enhancement of AODV using local link repairing

    NASA Astrophysics Data System (ADS)

    Jain, Jyoti; Gupta, Roopam; Bandhopadhyay, T. K.

    2014-09-01

    Dynamic change in the topology of an ad hoc network makes it difficult to design an efficient routing protocol. Scalability of an ad hoc network is also one of the important criteria of research in this field. Most of the research works in ad hoc networks focus on routing and medium access protocols and produce simulation results for limited-size networks. Ad hoc on-demand distance vector (AODV) is one of the best reactive routing protocols. In this article, modified routing protocols based on local link repairing of AODV are proposed. A method of finding alternate routes to the next-to-next node in case of link failure is proposed. These protocols are beacon-less, meaning the periodic hello message is removed from basic AODV to improve scalability. A few control packet formats have been changed to accommodate the suggested modifications. The proposed protocols are simulated to investigate scalability performance and compared with the basic AODV protocol. From the simulation results, it is clear that the local link repairing method improves the scalability of the routing protocol. We have tested the protocols for different terrain areas with approximately constant node densities and different traffic loads.
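
    A toy sketch of the local-repair idea over a graph model of the network, not the actual AODV protocol implementation: when a link on an active route breaks, the upstream node searches locally for a short detour to the next-to-next node instead of triggering a full route rediscovery. Topology, route, and hop limit are hypothetical.

```python
# Toy model of local link repair over a graph topology (not the AODV implementation).
import networkx as nx

def local_repair(topology, route, broken_index, max_hops=4):
    """Repair a route whose link route[broken_index] -> route[broken_index + 1]
    has failed, by finding a detour to route[broken_index + 2] (the next-to-next
    node) within `max_hops`; return None if no local detour exists."""
    u = route[broken_index]
    next_to_next = route[broken_index + 2]
    g = topology.copy()
    g.remove_edge(route[broken_index], route[broken_index + 1])
    try:
        detour = nx.shortest_path(g, u, next_to_next)
    except nx.NetworkXNoPath:
        return None
    if len(detour) - 1 > max_hops:
        return None                      # too far away to count as a *local* repair
    return route[:broken_index] + detour + route[broken_index + 3:]

topology = nx.grid_2d_graph(4, 4)                      # hypothetical ad hoc topology
route = [(0, 0), (0, 1), (0, 2), (0, 3)]               # active route
print(local_repair(topology, route, broken_index=1))   # link (0,1)-(0,2) has failed
```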

  1. WIFIRE: A Scalable Data-Driven Monitoring, Dynamic Prediction and Resilience Cyberinfrastructure for Wildfires

    NASA Astrophysics Data System (ADS)

    Altintas, I.; Block, J.; Braun, H.; de Callafon, R. A.; Gollner, M. J.; Smarr, L.; Trouve, A.

    2013-12-01

    Recent studies confirm that climate change will cause wildfires to increase in frequency and severity in the coming decades, especially for California and much of the North American West. The most critical sustainability issue in the midst of these ever-changing dynamics is how to achieve a new social-ecological equilibrium of this fire ecology. Wildfire wind speeds and directions change in an instant, and first responders can only be effective when they take action as quickly as the conditions change. To deliver information needed for sustainable policy and management in this dynamically changing fire regime, we must capture these details to understand the environmental processes. We are building an end-to-end cyberinfrastructure (CI), called WIFIRE, for real-time and data-driven simulation, prediction and visualization of wildfire behavior. The WIFIRE integrated CI system supports social-ecological resilience to the changing fire ecology regime in the face of urban dynamics and climate change. Networked observations, e.g., heterogeneous satellite data and real-time remote sensor data, are integrated with computational techniques in signal processing, visualization, modeling and data assimilation to provide a scalable, technological, and educational solution to monitor weather patterns to predict a wildfire's Rate of Spread. Our collaborative WIFIRE team of scientists, engineers, technologists, government policy managers, private industry, and firefighters architects and implements CI pathways that enable joint innovation for wildfire management. Scientific workflows are used as an integrative distributed programming model and simplify the implementation of engineering modules for data-driven simulation, prediction and visualization while allowing integration with large-scale computing facilities. WIFIRE will be scalable to users with different skill-levels via specialized web interfaces and user-specified alerts for environmental events broadcasted to receivers before

  2. Large-Scale Visual Data Analysis

    NASA Astrophysics Data System (ADS)

    Johnson, Chris

    2014-04-01

    Modern high performance computers have speeds measured in petaflops and handle data set sizes measured in terabytes and petabytes. Although these machines offer enormous potential for solving very large-scale realistic computational problems, their effectiveness will hinge upon the ability of human experts to interact with their simulation results and extract useful information. One of the greatest scientific challenges of the 21st century is to effectively understand and make use of the vast amount of information being produced. Visual data analysis will be among our most important tools in helping to understand such large-scale information. Our research at the Scientific Computing and Imaging (SCI) Institute at the University of Utah has focused on innovative, scalable techniques for large-scale 3D visual data analysis. In this talk, I will present state-of-the-art visualization techniques, including scalable visualization algorithms and software, cluster-based visualization methods and innovative visualization techniques applied to problems in computational science, engineering, and medicine. I will conclude with an outline of future high-performance visualization research challenges and opportunities.

  3. FMOE-MR: content-driven multiresolution MPEG-4 fine grained scalable layered video encoding

    NASA Astrophysics Data System (ADS)

    Chattopadhyay, S.; Luo, X.; Bhandarkar, S. M.; Li, K.

    2007-01-01

    The MPEG-4 Fine Grained Scalability (FGS) profile aims at scalable layered video encoding, in order to ensure efficient video streaming in networks with fluctuating bandwidths. In this paper, we propose a novel technique, termed FMOE-MR, which delivers significantly improved rate distortion performance compared to existing MPEG-4 Base Layer encoding techniques. The video frames are re-encoded at high resolution at semantically and visually important regions of the video (termed Features, Motion and Objects) that are defined using a mask (FMO-Mask), and at low resolution in the remaining regions. The multiple-resolution re-rendering step is implemented such that further MPEG-4 compression leads to low bit rate Base Layer video encoding. The Features, Motion and Objects Encoded Multi-Resolution (FMOE-MR) scheme is an integrated approach that requires only encoder-side modifications, and is transparent to the decoder. Further, since the FMOE-MR scheme incorporates "smart" video preprocessing, it requires no change in existing MPEG-4 codecs. As a result, it is straightforward to use the proposed FMOE-MR scheme with any existing MPEG codec, thus allowing great flexibility in implementation. In this paper, we have described and implemented unsupervised and semi-supervised algorithms to create the FMO-Mask from a given video sequence, using state-of-the-art computer vision algorithms.

  4. NEXUS Scalable and Distributed Next-Generation Avionics Bus for Space Missions

    NASA Technical Reports Server (NTRS)

    He, Yutao; Shalom, Eddy; Chau, Savio N.; Some, Raphael R.; Bolotin, Gary S.

    2011-01-01

    A paper discusses NEXUS, a common, next-generation avionics interconnect that is transparently compatible with wired, fiber-optic, and RF physical layers; provides a flexible, scalable, packet switched topology; is fault-tolerant with sub-microsecond detection/recovery latency; has scalable bandwidth from 1 Kbps to 10 Gbps; has guaranteed real-time determinism with sub-microsecond latency/jitter; has built-in testability; features low power consumption (< 100 mW per Gbps); is lightweight with about a 5,000-logic-gate footprint; and is implemented in a small Bus Interface Unit (BIU) with reconfigurable back-end providing interface to legacy subsystems. NEXUS enhances a commercial interconnect standard, Serial RapidIO, to meet avionics interconnect requirements without breaking the standard. This unified interconnect technology can be used to meet performance, power, size, and reliability requirements of all ranges of equipment, sensors, and actuators at chip-to-chip, board-to-board, or box-to-box boundary. Early results from in-house modeling activity of Serial RapidIO using VisualSim indicate that the use of a switched, high-performance avionics network will provide a quantum leap in spacecraft onboard science and autonomy capability for science and exploration missions.

  5. Computing and Visualizing Reachable Volumes for Maneuvering Satellites

    SciTech Connect

    Jiang, M; de Vries, W H; Pertica, A J; Olivier, S S

    2011-09-11

    Detecting and predicting maneuvering satellites is an important problem for Space Situational Awareness. The spatial envelope of all possible locations within reach of such a maneuvering satellite is known as the Reachable Volume (RV). As soon as custody of a satellite is lost, calculating the RV and its subsequent time evolution is a critical component in the rapid recovery of the satellite. In this paper, we present a Monte Carlo approach to computing the RV for a given object. Essentially, our approach samples all possible trajectories by randomizing thrust-vectors, thrust magnitudes and time of burn. At any given instance, the distribution of the 'point-cloud' of the virtual particles defines the RV. For short orbital time-scales, the temporal evolution of the point-cloud can result in complex, multi-reentrant manifolds. Visualization plays an important role in gaining insight and understanding into this complex and evolving manifold. In the second part of this paper, we focus on how to effectively visualize the large number of virtual trajectories and the computed RV. We present a real-time out-of-core rendering technique for visualizing the large number of virtual trajectories. We also examine different techniques for visualizing the computed volume of probability density distribution, including volume slicing, convex hull and isosurfacing. We compare and contrast these techniques in terms of computational cost and visualization effectiveness, and describe the main implementation issues encountered during our development process. Finally, we will present some of the results from our end-to-end system for computing and visualizing RVs using examples of maneuvering satellites.
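
    A toy Monte Carlo sketch of the reachable-volume idea, under strong simplifying assumptions (impulsive burns, point-mass two-body gravity, illustrative parameter ranges): randomize thrust direction, magnitude, and burn time, propagate each virtual particle, and collect the end points as the point cloud whose spatial distribution approximates the RV.

```python
# Toy reachable-volume point cloud under impulsive burns and two-body gravity.
import numpy as np
from scipy.integrate import solve_ivp

MU = 398600.4418   # Earth's gravitational parameter, km^3/s^2

def two_body(_, y):
    r = y[:3]
    return np.hstack([y[3:], -MU * r / np.linalg.norm(r) ** 3])

def reachable_cloud(r0, v0, horizon, n=200, dv_max=0.1, seed=0):
    """Sample n virtual particles: coast, apply a random delta-v, coast again."""
    rng = np.random.default_rng(seed)
    points = []
    for _ in range(n):
        t_burn = rng.uniform(0.0, horizon)            # randomized time of burn (s)
        direction = rng.normal(size=3)
        direction /= np.linalg.norm(direction)
        dv = rng.uniform(0.0, dv_max) * direction     # randomized delta-v (km/s)
        leg1 = solve_ivp(two_body, (0.0, t_burn), np.hstack([r0, v0]), rtol=1e-8)
        state = leg1.y[:, -1]
        state[3:] += dv                               # impulsive burn
        leg2 = solve_ivp(two_body, (t_burn, horizon), state, rtol=1e-8)
        points.append(leg2.y[:3, -1])
    return np.array(points)

cloud = reachable_cloud(r0=[7000.0, 0.0, 0.0], v0=[0.0, 7.546, 0.0], horizon=3000.0)
print(cloud.shape)   # (200, 3): the point cloud approximating the RV at the horizon
```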

  6. Technical Report: Scalable Parallel Algorithms for High Dimensional Numerical Integration

    SciTech Connect

    Masalma, Yahya; Jiao, Yu

    2010-10-01

    We implemented a scalable parallel quasi-Monte Carlo numerical high-dimensional integration algorithm for tera-scale data points. The implemented algorithm uses Sobol's quasi-sequences to generate random samples. The Sobol sequence was used to avoid clustering effects in the generated random samples and to produce low-discrepancy random samples which cover the entire integration domain. The performance of the algorithm was tested. The obtained results demonstrate the scalability and accuracy of the implemented algorithm. The implemented algorithm could be used in different applications where a huge data volume is generated and numerical integration is required. We suggest using the hybrid MPI and OpenMP programming model to improve the performance of the algorithm. If the mixed model is used, attention should be paid to scalability and accuracy.
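
    A small sketch of Sobol-based quasi-Monte Carlo integration using scipy's qmc module (an assumption; the report's own implementation is a parallel MPI code). The integrand is a simple separable test function with a known answer; in a parallel setting, disjoint blocks of the sequence would be assigned to different ranks.

```python
# Quasi-Monte Carlo integration with a scrambled Sobol sequence (scipy >= 1.7).
import numpy as np
from scipy.stats import qmc

dim = 6
sampler = qmc.Sobol(d=dim, scramble=True, seed=0)
x = sampler.random_base2(m=14)        # 2**14 low-discrepancy points in [0, 1)^dim

# Separable test integrand: each factor (3/2) * x_i^2 integrates to 1/2,
# so the exact value of the integral over the unit cube is (1/2)**dim.
values = np.prod(1.5 * x ** 2, axis=1)
print("QMC estimate:", values.mean(), " exact:", 0.5 ** dim)
```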

  7. Automated Scalability Analysis Tools for Message Passing Parallel Programs

    NASA Technical Reports Server (NTRS)

    Sarukkai, Sekhar R.; Mehra, Pankaj; Tucker, Deanne (Technical Monitor)

    1994-01-01

    In order to develop scalable parallel applications, a number of programming decisions have to be made during the development of the program. Performance tools that help in making these decisions are few, if they exist at all. Traditionally, performance tools have focused on exposing performance bottlenecks of small-scale executions of the program. However, it is common knowledge that programs that perform exceptionally well on small processor configurations, more often than not, perform poorly when executed on larger processor configurations. Hence, new tools that predict the execution characteristics of scaled-up programs are an essential part of an application developer's toolkit. In this paper we discuss important issues that need to be considered in order to build useful scalability analysis tools for parallel programs. We introduce a simple tool that automatically extracts scalability characteristics of a class of deterministic parallel programs. We show, with the help of a number of results on the Intel iPSC/860, that the predictions are within reasonable bounds.
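
    The basic idea behind such prediction tools can be illustrated by fitting a simple scaling model to timings measured at small processor counts and extrapolating to larger configurations. The sketch below uses Amdahl's law as the model and hypothetical timing data; the tool described in the paper derives its predictions from program analysis rather than curve fitting.

```python
# Fit Amdahl's law to hypothetical small-scale timings, then extrapolate.
import numpy as np
from scipy.optimize import curve_fit

def amdahl(p, t1, serial_fraction):
    """Predicted runtime on p processors given single-processor runtime t1."""
    return t1 * (serial_fraction + (1.0 - serial_fraction) / p)

procs = np.array([1, 2, 4, 8, 16])                 # measured configurations
times = np.array([100.0, 52.0, 28.5, 16.0, 10.2])  # hypothetical runtimes (s)

(t1, f), _ = curve_fit(amdahl, procs, times, p0=[100.0, 0.05])
print(f"estimated serial fraction: {f:.3f}")
for p in (64, 256, 1024):
    print(f"predicted runtime on {p:4d} processors: {amdahl(p, t1, f):6.2f} s")
```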

  8. Current parallel I/O limitations to scalable data analysis.

    SciTech Connect

    Mascarenhas, Ajith Arthur; Pebay, Philippe Pierre

    2011-07-01

    This report describes the limitations to parallel scalability which we have encountered when applying our otherwise optimally scalable parallel statistical analysis tool kit to large data sets distributed across the parallel file system of the current premier DOE computational facility. This report describes our study to evaluate the effect of parallel I/O on the overall scalability of a parallel data analysis pipeline using our scalable parallel statistics tool kit [PTBM11]. To this end, we tested it on the Jaguar-pf DOE/ORNL peta-scale platform using large combustion simulation data under a variety of process counts and domain decomposition scenarios. In this report we have recalled the foundations of the parallel statistical analysis tool kit which we have designed and implemented, with the specific double intent of reproducing typical data analysis workflows, and achieving optimal design for scalable parallel implementations. We have briefly reviewed those earlier results and publications which allow us to conclude that we have achieved both goals. However, in this report we have further established that, when used in conjunction with a state-of-the-art parallel I/O system, as can be found on the premier DOE peta-scale platform, the scaling properties of the overall analysis pipeline comprising parallel data access routines degrade rapidly. This finding is problematic and must be addressed if peta-scale data analysis is to be made scalable, or even possible. In order to address these parallel I/O limitations, we will investigate the use of the Adaptable IO System (ADIOS) [LZL+10] to improve I/O performance, while maintaining flexibility for a variety of IO options, such as MPI IO and POSIX IO. This system is developed at ORNL and other collaborating institutions, and is being tested extensively on Jaguar-pf. Simulation code being developed on these systems will also use ADIOS to output the data thereby making it easier for other systems, such as ours, to

  9. geoKepler Workflow Module for Computationally Scalable and Reproducible Geoprocessing and Modeling

    NASA Astrophysics Data System (ADS)

    Cowart, C.; Block, J.; Crawl, D.; Graham, J.; Gupta, A.; Nguyen, M.; de Callafon, R.; Smarr, L.; Altintas, I.

    2015-12-01

    The NSF-funded WIFIRE project has developed an open-source, online geospatial workflow platform for unifying geoprocessing tools and models for fire and other geospatially dependent modeling applications. It is a product of WIFIRE's objective to build an end-to-end cyberinfrastructure for real-time and data-driven simulation, prediction and visualization of wildfire behavior. geoKepler includes a set of reusable GIS components, or actors, for the Kepler Scientific Workflow System (https://kepler-project.org). Actors exist for reading and writing GIS data in formats such as Shapefile, GeoJSON, KML, and using OGC web services such as WFS. The actors also allow for calling geoprocessing tools in other packages such as GDAL and GRASS. Kepler integrates functions from multiple platforms and file formats into one framework, thus enabling optimal GIS interoperability, model coupling, and scalability. Products of the GIS actors can be fed directly to models such as FARSITE and WRF. Kepler's ability to schedule and scale processes using Hadoop and Spark also makes geoprocessing ultimately extensible and computationally scalable. The reusable workflows in geoKepler can be made to run automatically when alerted by real-time environmental conditions. Here, we show breakthroughs in the speed of creating complex data for hazard assessments with this platform. We also demonstrate geoKepler workflows that use Data Assimilation to ingest real-time weather data into wildfire simulations, and data mining techniques to gain insight into environmental conditions affecting fire behavior. Existing machine learning tools and libraries such as R and MLlib are being leveraged for this purpose in Kepler, as well as Kepler's Distributed Data Parallel (DDP) capability to provide a framework for scalable processing. geoKepler workflows can be executed via an iPython notebook as a part of a Jupyter hub at UC San Diego for sharing and reporting of the scientific analysis and results from

  10. Providing scalable system software for high-end simulations

    SciTech Connect

    Greenberg, D.

    1997-12-31

    Detailed, full-system, complex physics simulations have been shown to be feasible on systems containing thousands of processors. In order to manage these computer systems it has been necessary to create scalable system services. In this talk Sandia's research on scalable systems will be described. The key concepts of low overhead data movement through portals and of flexible services through multi-partition architectures will be illustrated in detail. The talk will conclude with a discussion of how these techniques can be applied outside of the standard monolithic MPP system.

  11. SSEL1.0. Sandia Scalable Encryption Software

    SciTech Connect

    Tarman, T.D.

    1996-08-29

    Sandia Scalable Encryption Library (SSEL) Version 1.0 is a library of functions that implement Sandia's scalable encryption algorithm. This algorithm is used to encrypt Asynchronous Transfer Mode (ATM) data traffic, and is capable of operating on an arbitrary number of bits at a time (which permits scaling via parallel implementations), while being interoperable with differently scaled versions of this algorithm. The routines in this library implement 8 bit and 32 bit versions of a non-linear mixer which is compatible with Sandia's hardware-based ATM encryptor.

  12. Scalable photonic crystal chips for high sensitivity protein detection.

    PubMed

    Liang, Feng; Clarke, Nigel; Patel, Parth; Loncar, Marko; Quan, Qimin

    2013-12-30

    Scalable microfabrication technology has enabled semiconductor and microelectronics industries, among other fields. Meanwhile, rapid and sensitive bio-molecule detection is increasingly important for drug discovery and biomedical diagnostics. In this work, we designed and demonstrated that photonic crystal sensor chips have high sensitivity for protein detection and can be mass-produced with scalable deep-UV lithography. We demonstrated label-free detection of carcinoembryonic antigen from pg/mL to μg/mL, with high quality factor photonic crystal nanobeam cavities.

  13. Hierarchical aggregation for information visualization: overview, techniques, and design guidelines.

    PubMed

    Elmqvist, Niklas; Fekete, Jean-Daniel

    2010-01-01

    We present a model for building, visualizing, and interacting with multiscale representations of information visualization techniques using hierarchical aggregation. The motivation for this work is to make visual representations more visually scalable and less cluttered. The model allows for augmenting existing techniques with multiscale functionality, as well as for designing new visualization and interaction techniques that conform to this new class of visual representations. We give some examples of how to use the model for standard information visualization techniques such as scatterplots, parallel coordinates, and node-link diagrams, and discuss existing techniques that are based on hierarchical aggregation. This yields a set of design guidelines for aggregated visualizations. We also present a basic vocabulary of interaction techniques suitable for navigating these multiscale visualizations.
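
    A minimal sketch of hierarchical aggregation for one of the techniques mentioned above (a scatterplot): points are binned into grids whose resolution halves from level to level, so coarser levels display aggregates (counts) instead of cluttered individual marks. The data and level count are illustrative.

```python
# Multiscale aggregation of a 2D scatterplot: counts per cell at several levels.
import numpy as np

def aggregate_levels(points, levels=4):
    """Return {level: 2D count histogram}; level 0 is the finest resolution."""
    hierarchy = {}
    for level in range(levels):
        bins = 2 ** (levels - level + 2)    # 64, 32, 16, 8 bins per axis
        counts, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                                      bins=bins, range=[[0, 1], [0, 1]])
        hierarchy[level] = counts
    return hierarchy

rng = np.random.default_rng(2)
pts = rng.random((10000, 2))    # illustrative point cloud in the unit square
for level, grid in aggregate_levels(pts).items():
    print(f"level {level}: {grid.shape[0]}x{grid.shape[1]} cells, "
          f"max count {int(grid.max())}")
```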

  14. BactoGeNIE: A large-scale comparative genome visualization for big displays

    DOE PAGES

    Aurisano, Jillian; Reda, Khairi; Johnson, Andrew; ...

    2015-08-13

    The volume of complete bacterial genome sequence data available to comparative genomics researchers is rapidly increasing. However, visualizations in comparative genomics--which aim to enable analysis tasks across collections of genomes--suffer from visual scalability issues. While large, multi-tiled and high-resolution displays have the potential to address scalability issues, new approaches are needed to take advantage of such environments, in order to enable the effective visual analysis of large genomics datasets. In this paper, we present Bacterial Gene Neighborhood Investigation Environment, or BactoGeNIE, a novel and visually scalable design for comparative gene neighborhood analysis on large display environments. We evaluate BactoGeNIE through a case study on close to 700 draft Escherichia coli genomes, and present lessons learned from our design process. In conclusion, BactoGeNIE accommodates comparative tasks over substantially larger collections of neighborhoods than existing tools and explicitly addresses visual scalability. Given current trends in data generation, scalable designs of this type may inform visualization design for large-scale comparative research problems in genomics.

  15. BactoGeNIE: A large-scale comparative genome visualization for big displays

    SciTech Connect

    Aurisano, Jillian; Reda, Khairi; Johnson, Andrew; Marai, Elisabeta G.; Leigh, Jason

    2015-08-13

    The volume of complete bacterial genome sequence data available to comparative genomics researchers is rapidly increasing. However, visualizations in comparative genomics--which aim to enable analysis tasks across collections of genomes--suffer from visual scalability issues. While large, multi-tiled and high-resolution displays have the potential to address scalability issues, new approaches are needed to take advantage of such environments, in order to enable the effective visual analysis of large genomics datasets. In this paper, we present Bacterial Gene Neighborhood Investigation Environment, or BactoGeNIE, a novel and visually scalable design for comparative gene neighborhood analysis on large display environments. We evaluate BactoGeNIE through a case study on close to 700 draft Escherichia coli genomes, and present lessons learned from our design process. In conclusion, BactoGeNIE accommodates comparative tasks over substantially larger collections of neighborhoods than existing tools and explicitly addresses visual scalability. Given current trends in data generation, scalable designs of this type may inform visualization design for large-scale comparative research problems in genomics.

  16. Generation of pedigree diagrams for web display using scalable vector graphics from a clinical trials database.

    PubMed Central

    Fernando, S. K.; Brandt, C.; Nadkarni, P.

    2001-01-01

    The standard method of studying inherited disease is to observe its pattern of distribution in families, that is, its pattern in a pedigree. For clinical studies focused on inherited disease, a pedigree diagram is a valuable visual tool for the display of inheritance patterns. We describe the creation of a web-based pedigree display module for Trial/DB, a Web accessible database developed at the Yale Center for Medical Informatics (YCMI) to support clinical research studies. The pedigree diagram is generated dynamically from the database. The icons representing each subject in the pedigree are selectable hyperlinks that will display detailed clinical data collected on the subject. Microsoft Active Server Page and Scalable Vector Graphics (SVG) are used to create the interactive pedigree diagrams. PMID:11825175

  17. Diskless supercomputers: Scalable, reliable I/O for the Tera-Op technology base

    NASA Technical Reports Server (NTRS)

    Katz, Randy H.; Ousterhout, John K.; Patterson, David A.

    1993-01-01

    Computing is seeing an unprecedented improvement in performance; over the last five years there has been an order-of-magnitude improvement in the speeds of workstation CPUs. At least another order of magnitude seems likely in the next five years, to machines with 500 MIPS or more. The goal of the ARPA Teraop program is to realize even larger, more powerful machines, executing as many as a trillion operations per second. Unfortunately, we have seen no comparable breakthroughs in I/O performance; the speeds of I/O devices and the hardware and software architectures for managing them have not changed substantially in many years. We have completed a program of research to demonstrate hardware and software I/O architectures capable of supporting the kinds of internetworked 'visualization' workstations and supercomputers that will appear in the mid 1990s. The project had three overall goals: high performance, high reliability, and a scalable, multipurpose system.

  18. Three-dimensional Visualization of Cosmological and Galaxy Formation Simulations

    NASA Astrophysics Data System (ADS)

    Thooris, Bruno; Pomarède, Daniel

    2011-12-01

    Our understanding of the structuring of the Universe from large-scale cosmological structures down to the formation of galaxies now largely benefits from numerical simulations. The RAMSES code, relying on the Adaptive Mesh Refinement technique, is used to perform massively parallel simulations at multiple scales. The interactive, immersive, three-dimensional visualization of such complex simulations is a challenge that is addressed using the SDvision software package. Several rendering techniques are available, including ray-casting and isosurface reconstruction, to explore the simulated volumes at various resolution levels and construct temporal sequences. These techniques are illustrated in the context of different classes of simulations. We first report on the visualization of the HORIZON Galaxy Formation Simulation at MareNostrum, a cosmological simulation with detailed physics at work in the galaxy formation process. We then carry on in the context of an intermediate zoom simulation leading to the formation of a Milky-Way like galaxy. Finally, we present a variety of simulations of interacting galaxies, including a case-study of the Antennae Galaxies interaction.

  19. Advances in 3D visualization of air quality data

    NASA Astrophysics Data System (ADS)

    San José, R.; Pérez, J. L.; González, R. M.

    2012-10-01

    Air quality models produce a considerable amount of data, and raw data can be hard to conceptualize, particularly when the size of the data sets can reach terabytes, so to understand the atmospheric processes and the consequences of air pollution it is necessary to analyse the results of the air pollution simulations. The basis of the development of the visualization is shaped by the requirements of the different groups of users. We show different possibilities to represent 3D atmospheric data and geographic data. We present several examples developed with the IDV software, which is a generic tool that can be used directly with the simulation results. The remaining solutions are specific applications developed by the authors that integrate different tools and technologies. In the case of the buildings it has been necessary to build a 3D model from the building data using the COLLADA standard format. In the case of the Google Earth approach, the atmospheric part is handled with the Ferret software. In the case of gvSIG-3D, the atmospheric visualization uses the different geometric figures available: "QuadPoints", "Polylines", "Spheres" and isosurfaces. The last one is also displayed following the VRML standard.

  20. Scalable Track Initiation for Optical Space Surveillance

    NASA Astrophysics Data System (ADS)

    Schumacher, P.; Wilkins, M. P.

    2012-09-01

    least cubic and commonly quartic or higher. Therefore, practical implementations require attention to the scalability of the algorithms, when one is dealing with the very large number of observations from large surveillance telescopes. We address two broad categories of algorithms. The first category includes and extends the classical methods of Laplace and Gauss, as well as the more modern method of Gooding, in which one solves explicitly for the apparent range to the target in terms of the given data. In particular, recent ideas offered by Mortari and Karimi allow us to construct a family of range-solution methods that can be scaled to many processors efficiently. We find that the orbit solutions (data association hypotheses) can be ranked by means of a concept we call persistence, in which a simple statistical measure of likelihood is based on the frequency of occurrence of combinations of observations in consistent orbit solutions. Of course, range-solution methods can be expected to perform poorly if the orbit solutions of most interest are not well conditioned. The second category of algorithms addresses this difficulty. Instead of solving for range, these methods attach a set of range hypotheses to each measured line of sight. Then all pair-wise combinations of observations are considered and the family of Lambert problems is solved for each pair. These algorithms also have polynomial complexity, though now the complexity is quadratic in the number of observations and also quadratic in the number of range hypotheses. We offer a novel type of admissible-region analysis, constructing partitions of the orbital element space and deriving rigorous upper and lower bounds on the possible values of the range for each partition. This analysis allows us to parallelize with respect to the element partitions and to reduce the number of range hypotheses that have to be considered in each processor simply by making the partitions smaller. Naturally, there are many ways to

  1. Visualization methods for high-resolution, transient, 3-D, finite element situations

    SciTech Connect

    Christon, M.A.

    1995-01-10

    Scientific visualization is the process whereby numerical data is transformed into a visual form to augment the process of discovery and understanding. Visualizing the data generated by large-scale, transient, three-dimensional finite element simulations poses many challenges due to geometric complexity, the presence of multiple materials and multiple element types, and the inherent unstructured nature of the meshes. In this paper, the direct use of finite element data structures, nodal assembly procedures, and element interpolants for volumetric adaptive surface extraction, surface rendering, vector grids and particle tracing is discussed. A brief description of a "direct-to-disk" animation system is presented, and case studies which demonstrate the use of isosurfaces, vector plots, cutting planes, reference surfaces and particle tracing are then discussed in the context of transient incompressible viscous flow and acoustic fluid-structure interaction simulations. An overview of the implications of massively parallel computers on visualization is presented to highlight the issues in parallel visualization methodology, algorithms, data locality and the ultimate requirements for temporary and archival data storage and network bandwidth.

  2. A scalable portable object-oriented framework for parallel multisensor data-fusion applications in HPC systems

    NASA Astrophysics Data System (ADS)

    Gupta, Pankaj; Prasad, Guru

    2004-04-01

    Multi-sensor Data Fusion is the synergistic integration of multiple data sets. Data fusion includes processes for aligning, associating and combining data and information in estimating and predicting the state of objects, their relationships, and characterizing situations and their significance. The combination of complex data sets and the need for real-time data storage and retrieval compounds the data fusion problem. The systematic development and use of data fusion techniques are particularly critical in applications requiring massive, diverse, ambiguous, and time-critical data. Such conditions are characteristic of new emerging requirements; e.g., network-centric and information-centric warfare, low intensity conflicts such as special operations, counter narcotics, antiterrorism, information operations and CALOW (Conventional Arms, Limited Objectives Warfare), and economic and political intelligence. In this paper, Aximetric presents a novel, scalable, object-oriented, metamodel framework for a parallel, cluster-based data-fusion engine on High Performance Computing (HPC) systems. The data-clustering algorithms provide a fast, scalable technique to sift through massive, complex data sets coming through multiple streams in real-time. The load-balancing algorithm provides the capability to evenly distribute the workload among processors on-the-fly and achieve real-time scalability. The proposed data-fusion engine exploits unique data structures for fast storage, retrieval and interactive visualization of the multiple data streams.

  3. Evaluation of Graph Sampling: A Visualization Perspective.

    PubMed

    Wu, Yanhong; Cao, Nan; Archambault, Daniel; Shen, Qiaomu; Qu, Huamin; Cui, Weiwei

    2017-01-01

    Graph sampling is frequently used to address scalability issues when analyzing large graphs. Many algorithms have been proposed to sample graphs, and the performance of these algorithms has been quantified through metrics based on graph structural properties preserved by the sampling: degree distribution, clustering coefficient, and others. However, a perspective that is missing is the impact of these sampling strategies on the resultant visualizations. In this paper, we present the results of three user studies that investigate how sampling strategies influence node-link visualizations of graphs. In particular, five sampling strategies widely used in the graph mining literature are tested to determine how well they preserve visual features in node-link diagrams. Our results show that depending on the sampling strategy used, different visual features are preserved. These results provide a complementary view to metric evaluations conducted in the graph mining literature and provide an impetus to conduct future visualization studies.
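
    Two of the widely used sampling strategies referred to above (random node and random edge sampling) can be sketched as follows, together with one of the structural metrics typically reported; the visual comparison of the resulting node-link diagrams, which is the paper's contribution, is a separate step.

```python
# Random node and random edge sampling, compared via average clustering coefficient.
import random
import networkx as nx

def random_node_sample(g, fraction, seed=0):
    rng = random.Random(seed)
    kept = rng.sample(list(g.nodes), int(fraction * g.number_of_nodes()))
    return g.subgraph(kept).copy()

def random_edge_sample(g, fraction, seed=0):
    rng = random.Random(seed)
    kept = rng.sample(list(g.edges), int(fraction * g.number_of_edges()))
    return nx.Graph(kept)

g = nx.barabasi_albert_graph(2000, 3, seed=1)   # synthetic stand-in graph
print(f"original: clustering = {nx.average_clustering(g):.3f}")
for name, sample in [("node", random_node_sample(g, 0.2)),
                     ("edge", random_edge_sample(g, 0.2))]:
    print(f"{name} sample: {sample.number_of_nodes()} nodes, "
          f"clustering = {nx.average_clustering(sample):.3f}")
```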

  4. Scalable multi-variate analytics of seismic and satellite-based observational data.

    PubMed

    Yuan, Xiaoru; He, Xiao; Guo, Hanqi; Guo, Peihong; Kendall, Wesley; Huang, Jian; Zhang, Yongxian

    2010-01-01

    Over the past few years, large human populations around the world have been affected by an increase in significant seismic activities. For both conducting basic scientific research and for setting critical government policies, it is crucial to be able to explore and understand seismic and geographical information obtained through all scientific instruments. In this work, we present a visual analytics system that enables explorative visualization of seismic data together with satellite-based observational data, and introduce a suite of visual analytical tools. Seismic and satellite data are integrated temporally and spatially. Users can select temporal and spatial ranges to zoom in on specific seismic events, as well as to inspect changes both during and after the events. Tools for designing high dimensional transfer functions have been developed to enable efficient and intuitive comprehension of the multi-modal data. Spread-sheet style comparisons are used for data drill-down as well as presentation. Comparisons between distinct seismic events are also provided for characterizing event-wise differences. Our system has been designed for scalability in terms of data size, complexity (i.e. number of modalities), and varying form factors of display environments.

  5. Visual exploration of nasal airflow.

    PubMed

    Zachow, Stefan; Muigg, Philipp; Hildebrandt, Thomas; Doleisch, Helmut; Hege, Hans-Christian

    2009-01-01

    Rhinologists are often faced with the challenge of assessing nasal breathing from a functional point of view to derive effective therapeutic interventions. While the complex nasal anatomy can be revealed by visual inspection and medical imaging, only vague information is available regarding the nasal airflow itself: Rhinomanometry delivers rather unspecific integral information on the pressure gradient as well as on total flow and nasal flow resistance. In this article we demonstrate how the understanding of physiological nasal breathing can be improved by simulating and visually analyzing nasal airflow, based on an anatomically correct model of the upper human respiratory tract. In particular we demonstrate how various Information Visualization (InfoVis) techniques, such as a highly scalable implementation of parallel coordinates, time series visualizations, as well as unstructured grid multi-volume rendering, all integrated within a multiple linked views framework, can be utilized to gain a deeper understanding of nasal breathing. Evaluation is accomplished by visual exploration of spatio-temporal airflow characteristics that include not only information on flow features but also on accompanying quantities such as temperature and humidity. To our knowledge, this is the first in-depth visual exploration of the physiological function of the nose over several simulated breathing cycles under consideration of a complete model of the nasal airways, realistic boundary conditions, and all physically relevant time-varying quantities.

  6. UpSetR: an R package for the visualization of intersecting sets and their properties.

    PubMed

    Conway, Jake R; Lex, Alexander; Gehlenborg, Nils

    2017-09-15

    Venn and Euler diagrams are a popular yet inadequate solution for quantitative visualization of set intersections. A scalable alternative to Venn and Euler diagrams for visualizing intersecting sets and their properties is needed. We developed UpSetR, an open source R package that employs a scalable matrix-based visualization to show intersections of sets, their size, and other properties. UpSetR is available at https://github.com/hms-dbmi/UpSetR/ and released under the MIT License. A Shiny app is available at https://gehlenborglab.shinyapps.io/upsetr/ . nils@hms.harvard.edu. Supplementary data are available at Bioinformatics online.
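
    UpSetR itself is an R package; solely to illustrate the quantity its matrix-based layout encodes, the hypothetical Python sketch below computes the size of every exclusive intersection (elements belonging to exactly that combination of sets and no others), which is what each column of an UpSet-style plot represents.

      from itertools import combinations

      def exclusive_intersections(sets):
          """Size of each exclusive intersection: elements that are in exactly
          this combination of sets and in none of the others."""
          names = list(sets)
          counts = {}
          for r in range(1, len(names) + 1):
              for combo in combinations(names, r):
                  inside = set.intersection(*(sets[n] for n in combo))
                  outside = set().union(*[sets[n] for n in names if n not in combo])
                  counts[combo] = len(inside - outside)
          return counts

      sets = {"A": {1, 2, 3, 4}, "B": {3, 4, 5}, "C": {4, 5, 6, 7}}
      for combo, size in sorted(exclusive_intersections(sets).items(), key=lambda kv: -kv[1]):
          print("&".join(combo), size)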

  7. Scalable Power-Component Models for Concept Testing

    DTIC Science & Technology

    2011-08-16

    Scalable Battery Ver. 2.0 • Technology: Lithium-Ion – Lithium-Cobalt-Oxide (LiCoO2), Lithium-Iron-Phosphate (LiFePO4) • Model: Electrical Analogue

  8. Scalable Quantum Networks for Distributed Computing and Sensing

    DTIC Science & Technology

    2016-04-01

    creation of reliable and efficient network components that can be operated in large numbers, we developed chip-integrated photon sources and circuits ... To this end, we developed novel chip-integrated photon sources and photonic circuits that achieve low-loss transmission in a small footprint ... integrated interferometers for complex preparation of entangled states. Second, scalability requires a method to synchronize protocols on inherently

  9. Scalability and Validation of Big Data Bioinformatics Software.

    PubMed

    Yang, Andrian; Troup, Michael; Ho, Joshua W K

    2017-01-01

    This review examines two important aspects that are central to modern big data bioinformatics analysis: software scalability and validity. We argue that not only are the issues of scalability and validation common to all big data bioinformatics analyses, they can be tackled by conceptually related methodological approaches, namely divide-and-conquer (scalability) and multiple executions (validation). Scalability is defined as the ability for a program to scale based on workload. It has always been an important consideration when developing bioinformatics algorithms and programs. Nonetheless, the surge of volume and variety of biological and biomedical data has posed new challenges. We discuss how modern cloud computing and big data programming frameworks such as MapReduce and Spark are being used to effectively implement divide-and-conquer in a distributed computing environment. Validation of software is another important issue in big data bioinformatics that is often ignored. Software validation is the process of determining whether the program under test fulfils the task for which it was designed. Determining the correctness of the computational output of big data bioinformatics software is especially difficult due to the large input space and complex algorithms involved. We discuss how state-of-the-art software testing techniques that are based on the idea of multiple executions, such as metamorphic testing, can be used to implement an effective bioinformatics quality assurance strategy. We hope this review will raise awareness of these critical issues in bioinformatics.
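
    As a small, hypothetical illustration of the multiple-executions idea, the sketch below applies a metamorphic relation to a toy analysis function: permuting the order of the input records must not change the result, so the test can be run without knowing the correct output in advance. The function and data are stand-ins, not from the review.

      import random

      def count_gc(reads):
          """Toy stand-in for a bioinformatics computation: total G/C bases."""
          return sum(base in "GC" for read in reads for base in read)

      def metamorphic_permutation_test(func, reads, trials=10, seed=0):
          """Metamorphic relation: the output must be invariant under
          reordering of the input records (no oracle value is needed)."""
          rng = random.Random(seed)
          expected = func(reads)
          for _ in range(trials):
              shuffled = reads[:]
              rng.shuffle(shuffled)
              assert func(shuffled) == expected, "metamorphic relation violated"
          return True

      print(metamorphic_permutation_test(count_gc, ["ACGT", "GGGC", "ATAT", "CCGA"]))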

  10. : A Scalable and Transparent System for Simulating MPI Programs

    SciTech Connect

    Perumalla, Kalyan S

    2010-01-01

    is a scalable, transparent system for experimenting with the execution of parallel programs on simulated computing platforms. The level of simulated detail can be varied for application behavior as well as for machine characteristics. Unique features of are repeatability of execution, scalability to millions of simulated (virtual) MPI ranks, scalability to hundreds of thousands of host (real) MPI ranks, portability of the system to a variety of host supercomputing platforms, and the ability to experiment with scientific applications whose source-code is available. The set of source-code interfaces supported by is being expanded to support a wider set of applications, and MPI-based scientific computing benchmarks are being ported. In proof-of-concept experiments, has been successfully exercised to spawn and sustain very large-scale executions of an MPI test program given in source code form. Low slowdowns are observed, due to its use of purely discrete event style of execution, and due to the scalability and efficiency of the underlying parallel discrete event simulation engine, sik. In the largest runs, has been executed on up to 216,000 cores of a Cray XT5 supercomputer, successfully simulating over 27 million virtual MPI ranks, each virtual rank containing its own thread context, and all ranks fully synchronized by virtual time.

  11. Adapting for Scalability: Automating the Video Assessment of Instructional Learning

    ERIC Educational Resources Information Center

    Roberts, Amy M.; LoCasale-Crouch, Jennifer; Hamre, Bridget K.; Buckrop, Jordan M.

    2017-01-01

    Although scalable programs, such as online courses, have the potential to reach broad audiences, they may pose challenges to evaluating learners' knowledge and skills. Automated scoring offers a possible solution. In the current paper, we describe the process of creating and testing an automated means of scoring a validated measure of teachers'…

  12. Scalable Robust Principal Component Analysis using Grassmann Averages.

    PubMed

    Hauberg, Soren; Feragen, Aasa; Enficiaud, Raffi; Black, Michael

    2015-12-23

    In large datasets, manual data verification is impossible, and we must expect the number of outliers to increase with data size. While principal component analysis (PCA) can reduce data size, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results. Unfortunately, state-of-the-art approaches for robust PCA are not scalable. We note that in a zero-mean dataset, each observation spans a one-dimensional subspace, giving a point on the Grassmann manifold. We show that the average subspace corresponds to the leading principal component for Gaussian data. We provide a simple algorithm for computing this Grassmann Average (GA), and show that the subspace estimate is less sensitive to outliers than PCA for general distributions. Because averages can be efficiently computed, we immediately gain scalability. We exploit robust averaging to formulate the Robust Grassmann Average (RGA) as a form of robust PCA. The resulting Trimmed Grassmann Average (TGA) is appropriate for computer vision because it is robust to pixel outliers. The algorithm has linear computational complexity and minimal memory requirements. We demonstrate TGA for background modeling, video restoration, and shadow removal. We show scalability by performing robust PCA on the entire Star Wars IV movie; a task beyond any current method. Source code is available online.
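
    The NumPy sketch below is a simplified reading of the idea stated in the abstract, not the authors' exact TGA algorithm: it estimates the dominant one-dimensional subspace of zero-mean data by repeatedly averaging the observations with signs flipped to agree with the current estimate, which respects the antipodal symmetry of subspaces. Data and parameters are invented.

      import numpy as np

      def grassmann_average_direction(X, iters=20, seed=0):
          """Sign-aligned averaging of the 1-D subspaces spanned by the rows
          of zero-mean X (a simplified sketch of the Grassmann Average idea)."""
          rng = np.random.default_rng(seed)
          q = rng.standard_normal(X.shape[1])
          q /= np.linalg.norm(q)
          for _ in range(iters):
              signs = np.sign(X @ q)
              signs[signs == 0] = 1.0
              q = (signs[:, None] * X).mean(axis=0)
              q /= np.linalg.norm(q)
          return q

      rng = np.random.default_rng(1)
      X = rng.standard_normal((500, 1)) @ np.array([[3.0, 1.0, 0.5]])  # rank-1 signal
      X = X + 0.1 * rng.standard_normal(X.shape)
      X = X - X.mean(axis=0)
      print(grassmann_average_direction(X))  # roughly proportional to (3, 1, 0.5), up to sign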

  13. Scalable Robust Principal Component Analysis Using Grassmann Averages.

    PubMed

    Hauberg, Søren; Feragen, Aasa; Enficiaud, Raffi; Black, Michael J

    2016-11-01

    In large datasets, manual data verification is impossible, and we must expect the number of outliers to increase with data size. While principal component analysis (PCA) can reduce data size, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results. Unfortunately, state-of-the-art approaches for robust PCA are not scalable. We note that in a zero-mean dataset, each observation spans a one-dimensional subspace, giving a point on the Grassmann manifold. We show that the average subspace corresponds to the leading principal component for Gaussian data. We provide a simple algorithm for computing this Grassmann Average (GA), and show that the subspace estimate is less sensitive to outliers than PCA for general distributions. Because averages can be efficiently computed, we immediately gain scalability. We exploit robust averaging to formulate the Robust Grassmann Average (RGA) as a form of robust PCA. The resulting Trimmed Grassmann Average (TGA) is appropriate for computer vision because it is robust to pixel outliers. The algorithm has linear computational complexity and minimal memory requirements. We demonstrate TGA for background modeling, video restoration, and shadow removal. We show scalability by performing robust PCA on the entire Star Wars IV movie; a task beyond any current method. Source code is available online.

  14. Scalable complexity-distortion model for fast motion estimation

    NASA Astrophysics Data System (ADS)

    Yi, Xiaoquan; Ling, Nam

    2005-07-01

    The recently established international video coding standard H.264/AVC and the upcoming standard on scalable video coding (SVC) bring part of the solution to the high-compression-ratio and heterogeneity requirements. However, these algorithms have unbearable complexities for real-time encoding. Therefore, there is an important challenge to reduce encoding complexity, preferably in a scalable manner. Motion estimation and motion compensation techniques provide significant coding gain but are the most time-intensive parts in an encoder system. They present tremendous research challenges to design a flexible, rate-distortion optimized, yet computationally efficient encoder, especially for various applications. In this paper, we present a scalable motion estimation framework for complexity-distortion consideration. We propose a new progressive initial search (PIS) method to generate an accurate initial search point, followed by a fast search method, which can greatly benefit from the tighter bounds of the PIS. Such an approach offers not only significant speedup but also an optimal distortion performance for a given complexity constraint. We analyze the relationship between computational complexity and distortion (C-D) through a probabilistic distance measure extending from complexity and distortion theory. A configurable complexity quantization parameter (Q) is introduced. Simulation results demonstrate that the proposed scalable complexity-distortion framework enables a video encoder to conveniently adjust its complexity while providing the best possible service.
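
    The PIS method itself is not detailed in the abstract; as a generic illustration of complexity-scalable motion estimation, the sketch below runs exhaustive SAD block matching with the search range acting as a crude complexity knob: a smaller range means fewer candidate positions and possibly worse motion vectors. Frame contents and parameters are invented.

      import numpy as np

      def block_match(ref, cur, bx, by, block=8, search=8):
          """Exhaustive SAD block matching; `search` controls the number of
          candidate positions and hence the computational cost."""
          best, best_mv = np.inf, (0, 0)
          target = cur[by:by + block, bx:bx + block].astype(int)
          for dy in range(-search, search + 1):
              for dx in range(-search, search + 1):
                  y, x = by + dy, bx + dx
                  if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                      continue
                  sad = np.abs(ref[y:y + block, x:x + block].astype(int) - target).sum()
                  if sad < best:
                      best, best_mv = sad, (dx, dy)
          return best_mv, best

      rng = np.random.default_rng(0)
      ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
      cur = np.roll(ref, shift=(2, 3), axis=(0, 1))      # known motion
      for search in (2, 4, 8):                           # complexity vs. distortion trade-off
          print(search, block_match(ref, cur, 16, 16, search=search))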

  15. Estimates of the Sampling Distribution of Scalability Coefficient H

    ERIC Educational Resources Information Center

    Van Onna, Marieke J. H.

    2004-01-01

    Coefficient "H" is used as an index of scalability in nonparametric item response theory (NIRT). It indicates the degree to which a set of items rank orders examinees. Theoretical sampling distributions, however, have only been derived asymptotically and only under restrictive conditions. Bootstrap methods offer an alternative possibility to…
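
    To make the bootstrap alternative concrete, the hedged sketch below resamples examinees with replacement and recomputes a scalability coefficient on every replicate; H is computed here with one common covariance formulation for dichotomous items, and the simulated response data are purely illustrative.

      import numpy as np

      def loevinger_H(X):
          """Scalability coefficient H for dichotomous items (columns of X),
          using H = sum of inter-item covariances / sum of their maxima
          given the item marginals (one common formulation)."""
          p = X.mean(axis=0)
          num = den = 0.0
          k = X.shape[1]
          for i in range(k):
              for j in range(i + 1, k):
                  num += np.mean(X[:, i] * X[:, j]) - p[i] * p[j]
                  lo, hi = sorted((p[i], p[j]))
                  den += lo * (1.0 - hi)
          return num / den

      def bootstrap_H(X, n_boot=500, seed=0):
          """Empirical sampling distribution of H by resampling examinees."""
          rng = np.random.default_rng(seed)
          n = X.shape[0]
          return np.array([loevinger_H(X[rng.integers(0, n, n)]) for _ in range(n_boot)])

      rng = np.random.default_rng(1)
      theta = rng.standard_normal(300)                          # latent traits
      difficulty = np.array([-1.0, -0.3, 0.2, 0.8, 1.5])        # five items
      X = (theta[:, None] > difficulty[None, :]).astype(float)
      X = np.where(rng.random(X.shape) < 0.1, 1 - X, X)         # response noise
      boot = bootstrap_H(X)
      print(loevinger_H(X), boot.mean(), boot.std())            # estimate and bootstrap SE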

  16. Visual agnosia.

    PubMed

    Álvarez, R; Masjuan, J

    2016-03-01

    Visual agnosia is defined as an impairment of object recognition, in the absence of visual acuity or cognitive dysfunction that would explain this impairment. This condition is caused by lesions in the visual association cortex, sparing primary visual cortex. There are 2 main pathways that process visual information: the ventral stream, tasked with object recognition, and the dorsal stream, in charge of locating objects in space. Visual agnosia can therefore be divided into 2 major groups depending on which of the two streams is damaged. The aim of this article is to conduct a narrative review of the various visual agnosia syndromes, including recent developments in a number of these syndromes.

  17. Data Intensive Architecture for Scalable Cyber Analytics

    SciTech Connect

    Olsen, Bryan K.; Johnson, John R.; Critchlow, Terence J.

    2011-11-15

    Cyber analysts are tasked with the identification and mitigation of network exploits and threats. These compromises are difficult to identify due to the characteristics of cyber communication, the volume of traffic, and the duration of possible attack. It is necessary to have analytical tools to help analysts identify anomalies that span seconds, days, and weeks. Unfortunately, providing analytical tools effective access to the volumes of underlying data requires novel architectures, which is often overlooked in operational deployments. Our work is focused on a summary record of communication, called a flow. Flow records are intended to summarize a communication session between a source and a destination, providing a level of aggregation from the base data. Despite this aggregation, many enterprise network perimeter sensors store millions of network flow records per day. The volume of data makes analytics difficult, requiring the development of new techniques to efficiently identify temporal patterns and potential threats. The massive volume makes analytics difficult, but there are other characteristics in the data which compound the problem. Within the billions of records of communication that transact, there are millions of distinct IP addresses involved. Characterizing patterns of entity behavior is very difficult with the vast number of entities that exist in the data. Research has struggled to validate a model for typical network behavior with hopes it will enable the identification of atypical behavior. Complicating matters more, typically analysts are only able to visualize and interact with fractions of data and have the potential to miss long term trends and behaviors. Our analysis approach focuses on aggregate views and visualization techniques to enable flexible and efficient data exploration as well as the capability to view trends over long periods of time. Realizing that interactively exploring summary data allowed analysts to effectively identify
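
    As a toy illustration of the aggregate views described, the sketch below rolls a handful of hypothetical flow records up into per-hour, per-source byte totals; the record fields and values are invented and stand in for the millions of flows per day mentioned above.

      from collections import defaultdict
      from datetime import datetime

      # Hypothetical flow records: (timestamp, src_ip, dst_ip, bytes)
      flows = [
          ("2011-11-15 09:12:03", "10.0.0.5", "192.0.2.7", 4200),
          ("2011-11-15 09:47:41", "10.0.0.5", "192.0.2.9", 180),
          ("2011-11-15 10:02:10", "10.0.0.8", "192.0.2.7", 96000),
      ]

      def hourly_bytes_per_source(flows):
          """Aggregate raw flows into (hour, source) byte totals, the kind of
          summary view used to spot long-term trends and anomalies."""
          totals = defaultdict(int)
          for ts, src, _dst, nbytes in flows:
              hour = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S").replace(minute=0, second=0)
              totals[(hour, src)] += nbytes
          return dict(totals)

      for (hour, src), total in sorted(hourly_bytes_per_source(flows).items()):
          print(hour.isoformat(), src, total)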

  18. An application architecture for large data visualization : a case study /.

    SciTech Connect

    Law, C.; Ahrens, James; Henderson, Amy

    2001-01-01

    In this case study we present an open-source visualization application with a novel data-parallel application architecture. The architecture is unique because it uses the Tcl scripting language to synchronize the user interface with the VTK parallel visualization pipeline and parallel-rendering module. The resulting application shows scalable performance, and is easily extendable because of its simple modular architecture. We demonstrate the application with a 9.8 gigabyte structured-grid ocean model

  19. Computing and Visualizing Reachable Volumes for Maneuvering Satellites

    DTIC Science & Technology

    2011-09-01

    Computing and Visualizing Reachable Volumes for Maneuvering Satellites. Ming Jiang, Willem H. de Vries, Alexander J. Pertica, Scot S. Olivier ...

  20. Freeprocessing: Transparent in situ visualization via data interception

    PubMed Central

    Fogal, Thomas; Proch, Fabian; Schiewe, Alexander; Hasemann, Olaf; Kempf, Andreas; Krüger, Jens

    2014-01-01

    In situ visualization has become a popular method for avoiding the slowest component of many visualization pipelines: reading data from disk. Most previous in situ work has focused on achieving visualization scalability on par with simulation codes, or on the data movement concerns that become prevalent at extreme scales. In this work, we consider in situ analysis with respect to ease of use and programmability. We describe an abstraction that opens up new applications for in situ visualization, and demonstrate that this abstraction and an expanded set of use cases can be realized without a performance cost. PMID:25995996

  1. Scalable Parallel Dynamic Monte Carlo for Large Asynchronous Systems: Non-equilibrium Surface Growth and Algorithmic Scalability

    NASA Astrophysics Data System (ADS)

    Korniss, G.; Novotny, M. A.; Rikvold, P. A.; Toroczkai, Z.

    2001-06-01

    Modeling and simulation of the evolution of natural and artificial complex systems are of fundamental importance in both science and engineering. In a large class of systems, the underlying dynamic is inherently asynchronous. Examples of such systems include magnetization dynamics in condensed matter, cellular communication networks, and the spread of epidemics. Constructing a parallel scheme for simulating the time evolution of these systems, in which the local changes in the configuration are asynchronous, leads to at least two difficult questions. First, to put it simply, "How to design parallel algorithms for non-parallel dynamics?". Second, if there is a faithful parallel algorithm for the problem, "Is it scalable?". We describe novel parallel Monte Carlo algorithms for asynchronous systems with both the computation and the measurement part being asymptotically scalable. In particular, we discuss the intimate connection [1] between algorithmic scalability and non-equilibrium surface growth phenomena, which has led us to the engineering and fine-tuning of fully scalable massively parallel schemes. Supported by NSF, NERSC-DOE, CSIT-FSU, and RPI. [1] G. Korniss et al., Phys. Rev. Lett. 84, 1351 (2000).

  2. Oceanotron, Scalable Server for Marine Observations

    NASA Astrophysics Data System (ADS)

    Loubrieu, T.; Bregent, S.; Blower, J. D.; Griffiths, G.

    2013-12-01

    Ifremer, the French marine institute, is deeply involved in data management for different ocean in-situ observation programs (ARGO, OceanSites, GOSUD, ...) and other European programs aiming at networking ocean in-situ observation data repositories (myOcean, seaDataNet, Emodnet). To capitalize on the effort of implementing advanced data dissemination services (visualization, download with subsetting) for these programs and, generally speaking, for water-column observation repositories, Ifremer decided in 2010 to develop the oceanotron server. Knowing the diversity of data repository formats (RDBMS, netCDF, ODV, ...) and the temperamental nature of the standard interoperability interface profiles (OGC/WMS, OGC/WFS, OGC/SOS, OPeNDAP, ...), the server is designed to manage plugins: StorageUnits, which read specific data repository formats (netCDF/OceanSites, RDBMS schema, ODV binary format), and FrontDesks, which receive external requests and send results for interoperable protocols (OGC/WMS, OGC/SOS, OPeNDAP). In between, a third type of plugin may be inserted: TransformationUnits, which perform ocean-business-related transformations of the features (for example, conversion of vertical coordinates from pressure in dB to meters under the sea surface). The server is released under an open-source license so that partners can develop their own plugins. Within the MyOcean project, the University of Reading has plugged a WMS implementation in as an oceanotron frontdesk. The modules are connected together by sharing the same information model for marine observations (or sampling features: vertical profiles, point series and trajectories), dataset metadata and queries. The shared information model is based on the OGC/Observation & Measurement and Unidata/Common Data Model initiatives. The model is implemented in Java (http://www.ifremer.fr/isi/oceanotron/javadoc/). This inner interoperability level enables ocean business expertise to be capitalized in software development without being indentured to

  3. Pathfinder: Visual Analysis of Paths in Graphs

    PubMed Central

    Partl, C.; Gratzl, S.; Streit, M.; Wassermann, A. M.; Pfister, H.; Schmalstieg, D.; Lex, A.

    2016-01-01

    The analysis of paths in graphs is highly relevant in many domains. Typically, path-related tasks are performed in node-link layouts. Unfortunately, graph layouts often do not scale to the size of many real world networks. Also, many networks are multivariate, i.e., contain rich attribute sets associated with the nodes and edges. These attributes are often critical in judging paths, but directly visualizing attributes in a graph layout exacerbates the scalability problem. In this paper, we present visual analysis solutions dedicated to path-related tasks in large and highly multivariate graphs. We show that by focusing on paths, we can address the scalability problem of multivariate graph visualization, equipping analysts with a powerful tool to explore large graphs. We introduce Pathfinder (Figure 1), a technique that provides visual methods to query paths, while considering various constraints. The resulting set of paths is visualized in both a ranked list and as a node-link diagram. For the paths in the list, we display rich attribute data associated with nodes and edges, and the node-link diagram provides topological context. The paths can be ranked based on topological properties, such as path length or average node degree, and scores derived from attribute data. Pathfinder is designed to scale to graphs with tens of thousands of nodes and edges by employing strategies such as incremental query results. We demonstrate Pathfinder's fitness for use in scenarios with data from a coauthor network and biological pathways. PMID:27942090
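
    Pathfinder itself is an interactive system; purely as a sketch of the underlying query-and-rank idea, the snippet below enumerates simple paths between two nodes with networkx and ranks them by the average of a node attribute, standing in for the attribute-derived scores described. The graph, attribute name and values are invented.

      import networkx as nx

      G = nx.Graph()
      G.add_edges_from([("a", "b"), ("b", "d"), ("a", "c"), ("c", "d"), ("b", "c")])
      nx.set_node_attributes(G, {"a": 0.9, "b": 0.2, "c": 0.7, "d": 0.8}, name="score")

      def ranked_paths(G, source, target, max_len=4):
          """Enumerate simple paths of at most max_len edges and rank them by
          the mean node-attribute score along the path (higher is better)."""
          paths = nx.all_simple_paths(G, source, target, cutoff=max_len)
          scored = [(sum(G.nodes[n]["score"] for n in p) / len(p), p) for p in paths]
          return sorted(scored, reverse=True)

      for score, path in ranked_paths(G, "a", "d"):
          print(round(score, 3), path)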

  4. ElVis: A System for the Accurate and Interactive Visualization of High-Order Finite Element Solutions.

    PubMed

    Nelson, B; Liu, E; Kirby, R M; Haimes, R

    2012-12-01

    This paper presents the Element Visualizer (ElVis), a new, open-source scientific visualization system for use with high-order finite element solutions to PDEs in three dimensions. This system is designed to minimize visualization errors of these types of fields by querying the underlying finite element basis functions (e.g., high-order polynomials) directly, leading to pixel-exact representations of solutions and geometry. The system interacts with simulation data through runtime plugins, which only require users to implement a handful of operations fundamental to finite element solvers. The data in turn can be visualized through the use of cut surfaces, contours, isosurfaces, and volume rendering. These visualization algorithms are implemented using NVIDIA's OptiX GPU-based ray-tracing engine, which provides accelerated ray traversal of the high-order geometry, and CUDA, which allows for effective parallel evaluation of the visualization algorithms. The direct interface between ElVis and the underlying data differentiates it from existing visualization tools. Current tools assume the underlying data is composed of linear primitives; high-order data must be interpolated with linear functions as a result. In this work, examples drawn from aerodynamic simulations (high-order discontinuous Galerkin finite element solutions of aerodynamic flows in particular) will demonstrate the superiority of ElVis' pixel-exact approach when compared with traditional linear-interpolation methods. Such methods can introduce a number of inaccuracies in the resulting visualization, making it unclear if visual artifacts are genuine to the solution data or if these artifacts are the result of interpolation errors. Linear methods additionally cannot properly visualize curved geometries (elements or boundaries) which can greatly inhibit developers' debugging efforts. As we will show, pixel-exact visualization exhibits none of these issues, removing the visualization scheme as a source of

  5. Scalable Parallel Distance Field Construction for Large-Scale Applications.

    PubMed

    Yu, Hongfeng; Xie, Jinrong; Ma, Kwan-Liu; Kolla, Hemanth; Chen, Jacqueline H

    2015-10-01

    Computing distance fields is fundamental to many scientific and engineering applications. Distance fields can be used to direct analysis and reduce data. In this paper, we present a highly scalable method for computing 3D distance fields on massively parallel distributed-memory machines. A new distributed spatial data structure, named parallel distance tree, is introduced to manage the level sets of data and facilitate surface tracking over time, resulting in significantly reduced computation and communication costs for calculating the distance to the surface of interest from any spatial locations. Our method supports several data types and distance metrics from real-world applications. We demonstrate its efficiency and scalability on state-of-the-art supercomputers using both large-scale volume datasets and surface models. We also demonstrate in-situ distance field computation on dynamic turbulent flame surfaces for a petascale combustion simulation. Our work greatly extends the usability of distance fields for demanding applications.
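
    The paper's contribution is the distributed parallel distance tree; for orientation only, the serial sketch below computes a signed distance field to a simple implicit surface on a single node using SciPy's Euclidean distance transform, the basic operation that the parallel method scales up. The geometry is an assumption for illustration.

      import numpy as np
      from scipy import ndimage

      # Binary volume: voxels inside a sphere of radius 10 in a 64^3 grid.
      z, y, x = np.mgrid[0:64, 0:64, 0:64]
      inside = (x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 <= 10 ** 2

      # Signed distance to the surface: positive outside, negative inside.
      dist_out = ndimage.distance_transform_edt(~inside)
      dist_in = ndimage.distance_transform_edt(inside)
      signed_distance = dist_out - dist_in

      print(signed_distance[32, 32, 32])   # deep inside: about -11
      print(signed_distance[32, 32, 42])   # just inside the surface: about -1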

  6. Scalable, full-colour and controllable chromotropic plasmonic printing

    PubMed Central

    Xue, Jiancai; Zhou, Zhang-Kai; Wei, Zhiqiang; Su, Rongbin; Lai, Juan; Li, Juntao; Li, Chao; Zhang, Tengwei; Wang, Xue-Hua

    2015-01-01

    Plasmonic colour printing has drawn wide attention as a promising candidate for the next-generation colour-printing technology. However, an efficient approach to realize full colour and scalable fabrication is still lacking, which prevents plasmonic colour printing from practical applications. Here we present a scalable and full-colour plasmonic printing approach by combining conjugate twin-phase modulation with a plasmonic broadband absorber. More importantly, our approach also demonstrates controllable chromotropic capability, that is, the ability of reversible colour transformations. This chromotropic capability affords enormous potentials in building functionalized prints for anticounterfeiting, special label, and high-density data encryption storage. With such excellent performances in functional colour applications, this colour-printing approach could pave the way for plasmonic colour printing in real-world commercial utilization. PMID:26567803

  7. Performance and Scalability Evaluation of the Ceph Parallel File System

    SciTech Connect

    Wang, Feiyi; Nelson, Mark; Oral, H Sarp; Settlemyer, Bradley W; Atchley, Scott; Caldwell, Blake A; Hill, Jason J

    2013-01-01

    Ceph is an open-source and emerging parallel distributed file and storage system technology. By design, Ceph assumes running on unreliable and commodity storage and network hardware and provides reliability and fault-tolerance through controlled object placement and data replication. We evaluated the Ceph technology for scientific high-performance computing (HPC) environments. This paper presents our evaluation methodology, experiments, results and observations from mostly parallel I/O performance and scalability perspectives. Our work made two unique contributions. First, our evaluation is performed under a realistic setup for a large-scale capability HPC environment using a commercial high-end storage system. Second, our path of investigation, tuning efforts, and findings made direct contributions to Ceph's development and improved code quality, scalability, and performance. These changes should also benefit both Ceph and HPC communities at large. Throughout the evaluation, we observed that Ceph still is an evolving technology under fast-paced development and showing great promises.

  8. Scalable digital hardware for a trapped ion quantum computer

    NASA Astrophysics Data System (ADS)

    Mount, Emily; Gaultney, Daniel; Vrijsen, Geert; Adams, Michael; Baek, So-Young; Hudek, Kai; Isabella, Louis; Crain, Stephen; van Rynbach, Andre; Maunz, Peter; Kim, Jungsang

    2016-12-01

    Many of the challenges of scaling quantum computer hardware lie at the interface between the qubits and the classical control signals used to manipulate them. Modular ion trap quantum computer architectures address scalability by constructing individual quantum processors interconnected via a network of quantum communication channels. Successful operation of such quantum hardware requires a fully programmable classical control system capable of frequency stabilizing the continuous wave lasers necessary for loading, cooling, initialization, and detection of the ion qubits, stabilizing the optical frequency combs used to drive logic gate operations on the ion qubits, providing a large number of analog voltage sources to drive the trap electrodes, and a scheme for maintaining phase coherence among all the controllers that manipulate the qubits. In this work, we describe scalable solutions to these hardware development challenges.

  9. Multilayer microwave integrated quantum circuits for scalable quantum computing

    NASA Astrophysics Data System (ADS)

    Brecht, Teresa; Pfaff, Wolfgang; Wang, Chen; Chu, Yiwen; Frunzio, Luigi; Devoret, Michel H.; Schoelkopf, Robert J.

    2016-02-01

    As experimental quantum information processing (QIP) rapidly advances, an emerging challenge is to design a scalable architecture that combines various quantum elements into a complex device without compromising their performance. In particular, superconducting quantum circuits have successfully demonstrated many of the requirements for quantum computing, including coherence levels that approach the thresholds for scaling. However, it remains challenging to couple a large number of circuit components through controllable channels while suppressing any other interactions. We propose a hardware platform intended to address these challenges, which combines the advantages of integrated circuit fabrication and the long coherence times achievable in three-dimensional circuit quantum electrodynamics. This multilayer microwave integrated quantum circuit platform provides a path towards the realisation of increasingly complex superconducting devices in pursuit of a scalable quantum computer.

  10. Thermally assisted MRAMs: ultimate scalability and logic functionalities

    NASA Astrophysics Data System (ADS)

    Prejbeanu, I. L.; Bandiera, S.; Alvarez-Hérault, J.; Sousa, R. C.; Dieny, B.; Nozières, J.-P.

    2013-02-01

    This paper is focused on thermally assisted magnetic random access memories (TA-MRAMs). It explains how the heating produced by Joule dissipation around the tunnel barrier of magnetic tunnel junctions (MTJs) can be used advantageously to assist writing in MRAMs. The main idea is to apply a heating pulse to the junction simultaneously with a magnetic field (field-induced thermally assisted (TA) switching). Since the heating current also provides a spin-transfer torque (current-induced TA switching), the magnetic field lines can be removed to increase the storage density of TA-MRAMs. Ultimately, thermally induced anisotropy reorientation (TIAR)-assisted spin-transfer torque switching can be used in MTJs with perpendicular magnetic anisotropy to obtain ultimate downsize scalability with reduced power consumption. TA writing allows extending the downsize scalability of MRAMs as it does in hard disk drive technology, but it also allows introducing new functionalities particularly useful for security applications (Match-in-Place™ technology).

  11. A look at scalable dense linear algebra libraries

    SciTech Connect

    Dongarra, J. J.; van de Geijn, R.; Walker, D. W.

    1992-07-01

    We discuss the essential design features of a library of scalable software for performing dense linear algebra computations on distributed memory concurrent computers. The square block scattered decomposition is proposed as a flexible and general-purpose way of decomposing most, if not all, dense matrix problems. An object- oriented interface to the library permits more portable applications to be written, and is easy to learn and use, since details of the parallel implementation are hidden from the user. Experiments on the Intel Touchstone Delta system with a prototype code that uses the square block scattered decomposition to perform LU factorization are presented and analyzed. It was found that the code was both scalable and efficient, performing at about 14 Gflop/s (double precision) for the largest problem considered.
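
    To make the square block scattered (block-cyclic) decomposition concrete, the small sketch below maps each block of a global matrix to a coordinate on a two-dimensional process grid; the block size and grid shape are arbitrary illustrative choices, not values from the paper.

      def owner(i, j, block=2, prow=2, pcol=3):
          """Process coordinate on a prow x pcol grid that owns global matrix
          entry (i, j) under a square block-cyclic ('scattered') layout."""
          return ((i // block) % prow, (j // block) % pcol)

      # Which process owns each 2x2 block of a 12x12 matrix?
      n, block = 12, 2
      for i in range(0, n, block):
          print(" ".join(str(owner(i, j, block)) for j in range(0, n, block)))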

  12. Scalable fabrication of triboelectric nanogenerators for commercial applications

    NASA Astrophysics Data System (ADS)

    Dhakar, Lokesh; Shan, Xuechuan; Wang, Zhiping; Yang, Bin; Eng Hock Tay, Francis; Heng, Chun-Huat; Lee, Chengkuo

    2015-12-01

    Harvesting mechanical energy from irregular sources is a potential way to charge batteries for devices and sensor nodes. Triboelectric effect has been extensively utilized in energy harvesting devices as a method to convert mechanical energy into electrical energy. As triboelectric nanogenerators have immense potential to be commercialized, it is important to develop scalable fabrication methods to manufacture these devices. This paper presents scalable fabrication steps to realize large scale triboelectric nanogenerators. Roll-to-roll UV embossing and lamination techniques are used to fabricate different components of large scale triboelectric nanogenerators. The device generated a peak-to-peak voltage and current of 486 V and 21.2 μA, respectively at a frequency of 5 Hz.

  13. Scalable Fourier transform system for instantly structured illumination in lithography.

    PubMed

    Ye, Yan; Xu, Fengchuan; Wei, Guojun; Xu, Yishen; Pu, Donglin; Chen, Linsen; Huang, Zhiwei

    2017-05-15

    We report the development of a unique scalable Fourier transform 4-f system for instantly structured illumination in lithography. In the 4-f system, coupled with a 1-D grating and a phase retarder, the ±1st order of diffracted light from the grating serve as coherent incident sources for creating interference patterns on the image plane. By adjusting the grating and the phase retarder, the interference fringes with consecutive frequencies, as well as their orientations and phase shifts, can be generated instantly within a constant interference area. We demonstrate that by adapting this scalable Fourier transform system into lithography, the pixelated nano-fringe arrays with arbitrary frequencies and orientations can be dynamically produced in the photoresist with high variation resolution, suggesting its promising application for large-area functional materials based on space-variant nanostructures in lithography.

  14. Scalable Synthesis of Cortistatin A and Related Structures

    PubMed Central

    Shi, Jun; Manolikakes, Georg; Yeh, Chien-Hung; Guerrero, Carlos A.; Shenvi, Ryan A.; Shigehisa, Hiroki

    2011-01-01

    Full details are provided for an improved synthesis of cortistatin A and related structures as well as the underlying logic and evolution of strategy. The highly functionalized cortistatin A-ring embedded with a key heteroadamantane was synthesized by a simple and scalable 5-step sequence. A chemoselective, tandem geminal dihalogenation of an unactivated methyl group, a reductive fragmentation/trapping/elimination of a bromocyclopropane, and a facile chemoselective etherification reaction afforded the cortistatin A core, dubbed “cortistatinone”. A selective Δ16-alkene reduction with Raney Ni provided cortistatin A. With this scalable and practical route, copious quantities of cortistatinone, Δ16-cortistatin A-the equipotent direct precursor to cortistatin A, and its related analogs were prepared for further biological studies. PMID:21539314

  15. Potential of Scalable Vector Graphics (SVG) for Ocean Science Research

    NASA Astrophysics Data System (ADS)

    Sears, J. R.

    2002-12-01

    Scalable Vector Graphics (SVG), a graphic format encoded in Extensible Markup Language (XML), is a recent W3C standard. SVG is text-based and platform-neutral, allowing interoperability and a rich array of features that offer significant promise for the presentation and publication of ocean and earth science research. This presentation (a) provides a brief introduction to SVG with real-world examples; (b) reviews browsers, editors, and other SVG tools; and (c) talks about some of the more powerful capabilities of SVG that might be important for ocean and earth science data presentation, such as searchability, animation and scripting, interactivity, accessibility, dynamic SVG, layers, scalability, SVG Text, SVG Audio, server-side SVG, and embedding metadata and data. A list of useful SVG resources is also given.

  16. Scalable Quantum Photonics with Single Color Centers in Silicon Carbide.

    PubMed

    Radulaski, Marina; Widmann, Matthias; Niethammer, Matthias; Zhang, Jingyuan Linda; Lee, Sang-Yun; Rendler, Torsten; Lagoudakis, Konstantinos G; Son, Nguyen Tien; Janzén, Erik; Ohshima, Takeshi; Wrachtrup, Jörg; Vučković, Jelena

    2017-03-08

    Silicon carbide is a promising platform for single photon sources, quantum bits (qubits), and nanoscale sensors based on individual color centers. Toward this goal, we develop a scalable array of nanopillars incorporating single silicon vacancy centers in 4H-SiC, readily available for efficient interfacing with free-space objective and lensed-fibers. A commercially obtained substrate is irradiated with 2 MeV electron beams to create vacancies. Subsequent lithographic process forms 800 nm tall nanopillars with 400-1400 nm diameters. We obtain high collection efficiency of up to 22 kcounts/s optical saturation rates from a single silicon vacancy center while preserving the single photon emission and the optically induced electron-spin polarization properties. Our study demonstrates silicon carbide as a readily available platform for scalable quantum photonics architecture relying on single photon sources and qubits.

  17. Scalable quantum memory in the ultrastrong coupling regime.

    PubMed

    Kyaw, T H; Felicetti, S; Romero, G; Solano, E; Kwek, L-C

    2015-03-02

    Circuit quantum electrodynamics, consisting of superconducting artificial atoms coupled to on-chip resonators, represents a prime candidate to implement the scalable quantum computing architecture because of the presence of good tunability and controllability. Furthermore, recent advances have pushed the technology towards the ultrastrong coupling regime of light-matter interaction, where the qubit-resonator coupling strength reaches a considerable fraction of the resonator frequency. Here, we propose a qubit-resonator system operating in that regime, as a quantum memory device and study the storage and retrieval of quantum information in and from the Z2 parity-protected quantum memory, within experimentally feasible schemes. We are also convinced that our proposal might pave a way to realize a scalable quantum random-access memory due to its fast storage and readout performances.

  18. Development of Scalable Culture Systems for Human Embryonic Stem Cells

    PubMed Central

    Azarin, Samira M.; Palecek, Sean P.

    2009-01-01

    The use of human pluripotent stem cells, including embryonic and induced pluripotent stem cells, in therapeutic applications will require the development of robust, scalable culture technologies for undifferentiated cells. Advances made in large-scale cultures of other mammalian cells will facilitate expansion of undifferentiated human embryonic stem cells (hESCs), but challenges specific to hESCs will also have to be addressed, including development of defined, humanized culture media and substrates, monitoring spontaneous differentiation and heterogeneity in the cultures, and maintaining karyotypic integrity in the cells. This review will describe our current understanding of environmental factors that regulate hESC self-renewal and efforts to provide these cues in various scalable bioreactor culture systems. PMID:20161686

  19. A look at scalable dense linear algebra libraries

    SciTech Connect

    Dongarra, J. J.; van de Geijn, R.; Walker, D. W.

    1992-07-01

    We discuss the essential design features of a library of scalable software for performing dense linear algebra computations on distributed memory concurrent computers. The square block scattered decomposition is proposed as a flexible and general-purpose way of decomposing most, if not all, dense matrix problems. An object- oriented interface to the library permits more portable applications to be written, and is easy to learn and use, since details of the parallel implementation are hidden from the user. Experiments on the Intel Touchstone Delta system with a prototype code that uses the square block scattered decomposition to perform LU factorization are presented and analyzed. It was found that the code was both scalable and efficient, performing at about 14 Gflop/s (double precision) for the largest problem considered.

  20. Scalable cluster administration - Chiba City I approach and lessons learned.

    SciTech Connect

    Navarro, J. P.; Evard, R.; Nurmi, D.; Desai, N.

    2002-07-01

    Systems administrators of large clusters often need to perform the same administrative activity hundreds or thousands of times. Often such activities are time-consuming, especially the tasks of installing and maintaining software. By combining network services such as DHCP, TFTP, FTP, HTTP, and NFS with remote hardware control, cluster administrators can automate all administrative tasks. Scalable cluster administration addresses the following challenge: What systems design techniques can cluster builders use to automate cluster administration on very large clusters? We describe the approach used in the Mathematics and Computer Science Division of Argonne National Laboratory on Chiba City I, a 314-node Linux cluster; and we analyze the scalability, flexibility, and reliability benefits and limitations from that approach.

  1. Scalable quantum memory in the ultrastrong coupling regime

    PubMed Central

    Kyaw, T. H.; Felicetti, S.; Romero, G.; Solano, E.; Kwek, L.-C.

    2015-01-01

    Circuit quantum electrodynamics, consisting of superconducting artificial atoms coupled to on-chip resonators, represents a prime candidate to implement the scalable quantum computing architecture because of the presence of good tunability and controllability. Furthermore, recent advances have pushed the technology towards the ultrastrong coupling regime of light-matter interaction, where the qubit-resonator coupling strength reaches a considerable fraction of the resonator frequency. Here, we propose a qubit-resonator system operating in that regime, as a quantum memory device and study the storage and retrieval of quantum information in and from the Z2 parity-protected quantum memory, within experimentally feasible schemes. We are also convinced that our proposal might pave a way to realize a scalable quantum random-access memory due to its fast storage and readout performances. PMID:25727251

  2. Scalable graphene coatings for enhanced condensation heat transfer.

    PubMed

    Preston, Daniel J; Mafra, Daniela L; Miljkovic, Nenad; Kong, Jing; Wang, Evelyn N

    2015-05-13

    Water vapor condensation is commonly observed in nature and routinely used as an effective means of transferring heat with dropwise condensation on nonwetting surfaces exhibiting heat transfer improvement compared to filmwise condensation on wetting surfaces. However, state-of-the-art techniques to promote dropwise condensation rely on functional hydrophobic coatings that either have challenges with chemical stability or are so thick that any potential heat transfer improvement is negated due to the added thermal resistance of the coating. In this work, we show the effectiveness of ultrathin scalable chemical vapor deposited (CVD) graphene coatings to promote dropwise condensation while offering robust chemical stability and maintaining low thermal resistance. Heat transfer enhancements of 4× were demonstrated compared to filmwise condensation, and the robustness of these CVD coatings was superior to typical hydrophobic monolayer coatings. Our results indicate that graphene is a promising surface coating to promote dropwise condensation of water in industrial conditions with the potential for scalable application via CVD.

  3. Scalable parallel distance field construction for large-scale applications

    SciTech Connect

    Yu, Hongfeng; Xie, Jinrong; Ma, Kwan -Liu; Kolla, Hemanth; Chen, Jacqueline H.

    2015-10-01

    Computing distance fields is fundamental to many scientific and engineering applications. Distance fields can be used to direct analysis and reduce data. In this paper, we present a highly scalable method for computing 3D distance fields on massively parallel distributed-memory machines. A new distributed spatial data structure, named parallel distance tree, is introduced to manage the level sets of data and facilitate surface tracking over time, resulting in significantly reduced computation and communication costs for calculating the distance to the surface of interest from any spatial locations. Our method supports several data types and distance metrics from real-world applications. We demonstrate its efficiency and scalability on state-of-the-art supercomputers using both large-scale volume datasets and surface models. We also demonstrate in-situ distance field computation on dynamic turbulent flame surfaces for a petascale combustion simulation. In conclusion, our work greatly extends the usability of distance fields for demanding applications.

  4. Economical and scalable synthesis of 6-amino-2-cyanobenzothiazole

    PubMed Central

    Hauser, Jacob R; Beard, Hester A; Bayana, Mary E; Jolley, Katherine E; Warriner, Stuart L

    2016-01-01

    2-Cyanobenzothiazoles (CBTs) are useful building blocks for: 1) luciferin derivatives for bioluminescent imaging; and 2) handles for bioorthogonal ligations. A particularly versatile CBT is 6-amino-2-cyanobenzothiazole (ACBT), which has an amine handle for straightforward derivatisation. Here we present an economical and scalable synthesis of ACBT based on a cyanation catalysed by 1,4-diazabicyclo[2.2.2]octane (DABCO), and discuss its advantages for scale-up over previously reported routes. PMID:27829906

  5. Scalable Adaptive Architectures for Maritime Operations Center Command and Control

    DTIC Science & Technology

    2011-05-06

    ...the project to investigate the possibility of using earlier work on the validation and verification of rule bases in addressing the dynamically ... consideration of the cases identified by it. The approach, implemented as a software suite of tools, provides a repository of OPLAW/ROH sets of rules

  6. Scalable Real Time Data Management for Smart Grid

    SciTech Connect

    Yin, Jian; Kulkarni, Anand V.; Purohit, Sumit; Gorton, Ian; Akyol, Bora A.

    2011-12-16

    This paper presents GridMW, a scalable and reliable data middleware for smart grids. Smart grids promise to improve the efficiency of power grid systems and reduce greenhouse emissions through incorporating power generation from renewable sources and shaping demand to match the supply. As a result, power grid systems will become much more dynamic and require constant adjustments, which requires analysis and decision-making applications to improve the efficiency and reliability of smart grid systems.

  7. Scalable high-power optically pumped GaAs laser

    NASA Astrophysics Data System (ADS)

    Le, H. Q.; di Cecca, S.; Mooradian, A.

    1991-05-01

    The use of disk geometry, optically pumped semiconductor gain elements for high-power scalability and good transverse mode quality has been studied. A room-temperature TEM00 transverse mode, external-cavity GaAs disk laser has been demonstrated with 500 W peak-power output and 40-percent slope efficiency, when pumped by a Ti:Al2O3 laser. The conditions for diode laser pumping are shown to be consistent with available power level.

  8. Scalable Wavelet-Based Active Network Stepping Stone Detection

    DTIC Science & Technology

    2012-03-22

    network and mask malicious actions from detection. This research focuses on a novel active watermark technique using Discrete Wavelet Transformations...to mark and detect interactive network sessions. This technique is scalable, nearly invisible and resilient to multi-flow attacks. The watermark is...technique accurately detects the presence of a watermark at a 5% False Positive and False Negative rate for both the extracted timestamps as well as the

  9. Quicksilver: Middleware for Scalable Self-Regenerative Systems

    DTIC Science & Technology

    2006-04-01

    structured approach to establishing these paths. Nodes start by joining a Distributed Hash Table (DHT) called Pastry [ROW01]. DHTs in general, and Pastry in particular, have some specific internal structure that allows messages to be routed between any pair of nodes in under log(N) hops, where N ...

  10. Performance and Scalability of the NAS Parallel Benchmarks in Java

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael A.; Schultz, Matthew; Jin, Haoqiang; Yan, Jerry; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    Several features make Java an attractive choice for scientific applications. In order to gauge the applicability of Java to Computational Fluid Dynamics (CFD), we have implemented the NAS (NASA Advanced Supercomputing) Parallel Benchmarks in Java. The performance and scalability of the benchmarks point out the areas where improvement in Java compiler technology and in Java thread implementation would position Java closer to Fortran in the competition for scientific applications.

  11. Efficient Byzantine Fault Tolerance for Scalable Storage and Services

    DTIC Science & Technology

    2009-07-01

    [Figure 5.5.6: Throughput vs. client processes when f = 1] ... need only the minimal number of responsive servers to ensure high throughput, provide single round-trip latency, and provide scalability through

  12. Scalable Anonymous Group Communication in the Anytrust Model

    DTIC Science & Technology

    2012-04-10

    nets messaging phase was high and not a significant improvement over the shuffle alone. Herbivore [31] makes low latency guarantees (100s of ... practical anonymity systems such as Tor [16] or Herbivore [31], where a small number of "wrong" choices (e.g., the choice of entry and exit relay in Tor) can ... of-service attacks makes them largely impractical. Herbivore [31] attempts to make DC-nets more scalable, but it provides unconditional anonymity only

  13. A Scalable Platform for Functional Nanomaterials via Bubble-Bursting.

    PubMed

    Feng, Jie; Nunes, Janine K; Shin, Sangwoo; Yan, Jing; Kong, Yong Lin; Prud'homme, Robert K; Arnaudov, Luben N; Stoyanov, Simeon D; Stone, Howard A

    2016-06-01

    A continuous and scalable bubbling system to generate functional nanodroplets dispersed in a continuous phase is proposed. Scaling up of this system can be achieved by simply tuning the bubbling parameters. This new and versatile system is capable of encapsulating various functional nanomaterials to form functional nanoemulsions and nanoparticles in one step. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Space Situational Awareness Data Processing Scalability Utilizing Google Cloud Services

    NASA Astrophysics Data System (ADS)

    Greenly, D.; Duncan, M.; Wysack, J.; Flores, F.

    Space Situational Awareness (SSA) is a fundamental and critical component of current space operations. The term SSA encompasses the awareness, understanding and predictability of all objects in space. As the population of orbital space objects and debris increases, the number of collision avoidance maneuvers grows and prompts the need for accurate and timely process measures. The SSA mission continually evolves to near real-time assessment and analysis, demanding higher processing capabilities. By conventional methods, meeting these demands requires the integration of new hardware to keep pace with the growing complexity of maneuver planning algorithms. SpaceNav has implemented a highly scalable architecture that will track satellites and debris by utilizing powerful virtual machines on the Google Cloud Platform. SpaceNav algorithms for processing CDMs outpace conventional means. A robust processing environment for tracking data, collision avoidance maneuvers and various other aspects of SSA can be created and deleted on demand. The migration of SpaceNav tools and algorithms into the Google Cloud Platform will be discussed, along with the trials and tribulations involved. Information will be shared on how and why certain cloud products were used, as well as the integration techniques that were implemented. Key items to be presented are: 1. Scientific algorithms and SpaceNav tools integrated into a scalable architecture: (a) maneuver planning; (b) parallel processing; (c) Monte Carlo simulations; (d) optimization algorithms; (e) software application development and integration into the Google Cloud Platform. 2. Compute Engine processing: (a) Application Engine automated processing; (b) performance testing and performance scalability; (c) Cloud MySQL databases and database scalability; (d) cloud data storage; (e) redundancy and availability.

  15. Scalable graphene field-effect sensors for specific protein detection.

    PubMed

    Saltzgaber, Grant; Wojcik, Peter; Sharf, Tal; Leyden, Matthew R; Wardini, Jenna L; Heist, Christopher A; Adenuga, Adeniyi A; Remcho, Vincent T; Minot, Ethan D

    2013-09-06

    We demonstrate that micron-scale graphene field-effect transistor biosensors can be fabricated in a scalable fashion from large-area chemical vapor deposition derived graphene. We electrically detect the real-time binding and unbinding of a protein biomarker, thrombin, to and from aptamer-coated graphene surfaces. Our sensors have low background noise and high transconductance, comparable to exfoliated graphene devices. The devices are reusable and have a shelf-life greater than one week.

  16. Scalable Solutions for Interactive Virtual Humans that can Manipulate Objects

    DTIC Science & Technology

    2005-01-01

    A scalable approach is therefore sought for addressing such different requirements in a unified framework. Related Work: Only a few animation frameworks ... animation of human grasping using forward and inverse kinematics ...

  17. Visual Scripting.

    ERIC Educational Resources Information Center

    Halas, John

    Visual scripting is the coordination of words with pictures in sequence. This book presents the methods and viewpoints on visual scripting of fourteen film makers, from nine countries, who are involved in animated cinema; it contains concise examples of how a storybook and preproduction script can be prepared in visual terms; and it includes a…

  18. Scalable and Adaptive Streaming of 3D Mesh to Heterogeneous Devices

    NASA Astrophysics Data System (ADS)

    Abderrahim, Zeineb; Bouhlel, Mohamed Salim

    2016-12-01

    This article presents a web platform for the dissemination and visualization of compressed 3D data on the web. The major goal of this work is to adapt the transfer of three-dimensional data to the available resources (network bandwidth, type of visualization terminal, display resolution, user preferences, etc.). It also attempts to provide effective consultation adapted to the user's request (preferences, requested level of detail, etc.). The platform adapts the levels of detail to changes in bandwidth and to the rendering time when loading the mesh on the client. In addition, the levels of detail are adapted to the distance between the object and the camera. These features minimize latency and make real-time interaction possible. Experiments and comparisons with existing solutions show promising results in terms of latency, scalability, and the quality of the experience offered to users.
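
    The article's adaptation policy is not spelled out in the abstract; purely as an illustrative sketch of the kind of rule involved, the snippet below picks a mesh level of detail from the available bandwidth and then coarsens it with camera distance. Level names, sizes and thresholds are invented.

      def choose_lod(distance, bandwidth_kbps,
                     levels=(("coarse", 50), ("medium", 400), ("fine", 2000))):
          """Pick the finest level of detail (name, size in kB) deliverable in
          about one second at the given bandwidth, then coarsen it once past
          each of two distance thresholds."""
          affordable = [i for i, (_, kb) in enumerate(levels) if kb * 8 <= bandwidth_kbps]
          idx = max(affordable) if affordable else 0
          for threshold in (20, 40):          # arbitrary distance thresholds
              if distance > threshold:
                  idx = max(0, idx - 1)
          return levels[idx][0]

      for distance, bw in [(5, 20000), (30, 20000), (30, 1000), (80, 20000)]:
          print(distance, bw, "->", choose_lod(distance, bw))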

  19. ALMA software scalability experience with growing number of antennas

    NASA Astrophysics Data System (ADS)

    Mora, Matias; Avarias, Jorge; Tejeda, Alexis; Gil, Juan Pablo; Sommer, Heiko

    2012-09-01

    The ALMA Observatory is a challenging project in many ways. The hardware and software pieces were often designed specifically for ALMA, based on overall scientific requirements. The observatory is still in its construction phase, but already started Early Science observations with 16 antennas in September 2011, and has currently (June 2012) 39 accepted antennas, with 1 or 2 new antennas delivered every month. The finished array will integrate up to 66 antennas in 2014. The on-line software is a critical part of the operations: it controls everything from the low level real-time hardware and data processing up to the observations scheduler and data storage. Many pieces of the software are eventually affected by a growing number of antennas, as more processes are integrated into the distributed system, and more data flows to the Correlator and Database. Although some early scalability tests were performed in a simulated environment, the system proved to be very dependent on real deployment conditions and several unforeseen scalability issues have been found in the last year, starting with a critical number of about 15 antennas. Processes that grow with the number of antennas tend to quickly demand more powerful machines, unless alternatives are implemented. This paper describes the practical experience of dealing with (and hopefully preventing) blocking scalability issues during the construction phase, while the expectant users push the system to its limits. This may also be a very useful example for other upcoming radio-telescopes with a large number of receivers.

  20. Scalability improvements to NRLMOL for DFT calculations of large molecules

    NASA Astrophysics Data System (ADS)

    Diaz, Carlos Manuel

    Advances in high performance computing (HPC) have provided a way to treat large, computationally demanding tasks using thousands of processors. With the development of more powerful HPC architectures, the need to create efficient and scalable code has grown more important. Electronic structure calculations are valuable in understanding experimental observations and are routinely used for new materials predictions. For electronic structure calculations, memory and computation time grow with the number of atoms; memory requirements scale as N^2, where N is the number of atoms. While recent advances in HPC offer platforms with large numbers of cores, the limited amount of memory available on a given node and the poor scalability of the electronic structure code hinder efficient usage of these platforms. This thesis presents developments to overcome these bottlenecks in order to study large systems. These developments, which are implemented in the NRLMOL electronic structure code, involve the use of sparse matrix storage formats and linear algebra on sparse and distributed matrices. These developments, along with other related work, now allow ground state density functional calculations using up to 25,000 basis functions and excited state calculations using up to 17,000 basis functions while utilizing all cores on a node. An example on a light-harvesting triad molecule is described. Finally, future plans to further improve the scalability are presented.
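
    As an aside on the sparse-storage point above, the memory saving is easy to see with a generic example (this uses SciPy's CSR format purely for illustration, not NRLMOL's internal storage scheme; the matrix size and sparsity are made up):

      import numpy as np
      from scipy import sparse

      n = 5000                                    # stand-in for a basis-set dimension
      rng = np.random.default_rng(0)
      dense = np.zeros((n, n))
      idx = rng.integers(0, n, size=(50 * n, 2))  # roughly 0.2% of entries nonzero
      dense[idx[:, 0], idx[:, 1]] = rng.normal(size=50 * n)

      csr = sparse.csr_matrix(dense)
      dense_mb = dense.nbytes / 1e6
      csr_mb = (csr.data.nbytes + csr.indices.nbytes + csr.indptr.nbytes) / 1e6
      print(f"dense storage: {dense_mb:.0f} MB, CSR storage: {csr_mb:.1f} MB")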

  1. Scalable fault tolerant image communication and storage grid

    NASA Astrophysics Data System (ADS)

    Slik, David; Seiler, Oliver; Altman, Tym; Montour, Mike; Kermani, Mohammad; Proseilo, Walter; Terry, David; Kawahara, Midori; Leckie, Chris; Muir, Dale

    2003-05-01

    Increasing production and use of digital medical imagery are driving new approaches to information storage and management. Traditional, centralized approaches to image communication, storage and archiving are becoming increasingly expensive to scale and operate with high levels of reliability. Multi-site, geographically-distributed deployments connected by limited-bandwidth networks present further scalability, reliability, and availability challenges. A grid storage architecture built from a distributed network of low cost, off-the-shelf servers (nodes) provides scalable data and metadata storage, processing, and communication without single points of failure. Imaging studies are stored, replicated, cached, managed, and retrieved based on defined rules, and nodes within the grid can acquire studies and respond to queries. Grid nodes transparently load-balance queries, storage/retrieval requests, and replicate data for automated backup and disaster recovery. This approach reduces latency, increases availability, provides near-linear scalability and allows the creation of a geographically distributed medical imaging network infrastructure. This paper presents some key concepts in grid storage and discusses the results of a clinical deployment of a multi-site storage grid for cancer care in the province of British Columbia.

  2. The intergroup protocols: Scalable group communication for the internet

    SciTech Connect

    Berket, Karlo

    2000-12-04

    Reliable group ordered delivery of multicast messages in a distributed system is a useful service that simplifies the programming of distributed applications. Such a service helps to maintain the consistency of replicated information and to coordinate the activities of the various processes. With the increasing popularity of the Internet, there is an increasing interest in scaling the protocols that provide this service to the environment of the Internet. The InterGroup protocol suite, described in this dissertation, provides such a service, and is intended for the environment of the Internet with scalability to large numbers of nodes and high latency links. The InterGroup protocols approach the scalability problem from various directions. They redefine the meaning of group membership, allow voluntary membership changes, add a receiver-oriented selection of delivery guarantees that permits heterogeneity of the receiver set, and provide a scalable reliability service. The InterGroup system comprises several components, executing at various sites within the system. Each component provides part of the services necessary to implement a group communication system for the wide-area. The components can be categorized as: (1) control hierarchy, (2) reliable multicast, (3) message distribution and delivery, and (4) process group membership. We have implemented a prototype of the InterGroup protocols in Java, and have tested the system performance in both local-area and wide-area networks.

  3. Scalability, Timing, and System Design Issues for Intrinsic Evolvable Hardware

    NASA Technical Reports Server (NTRS)

    Hereford, James; Gwaltney, David

    2004-01-01

    In this paper we address several issues pertinent to intrinsic evolvable hardware (EHW). The first issue is scalability; namely, how the design space scales as the programming string for the programmable device gets longer. We develop a model for population size and the number of generations as a function of the programming string length, L, and show that the number of circuit evaluations is an O(L^2) process. We compare our model to several successful intrinsic EHW experiments and discuss the many implications of our model. The second issue that we address is the timing of intrinsic EHW experiments. We show that the processing time is a small part of the overall time to derive or evolve a circuit and that major improvements in processor speed alone will have only a minimal impact on improving the scalability of intrinsic EHW. The third issue we consider is the system-level design of intrinsic EHW experiments. We review what other researchers have done to break the scalability barrier and contend that the type of reconfigurable platform and the evolutionary algorithm are tied together and impose limits on each other.
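
    A back-of-the-envelope sketch of the quadratic scaling claim (the linear growth factors below are illustrative assumptions, not the paper's fitted model): if both the population size and the number of generations grow roughly linearly with the string length L, the evaluation count grows as O(L^2).

      def circuit_evaluations(L, pop_per_bit=2.0, gens_per_bit=5.0):
          """Illustrative model: total evaluations = population size * generations,
          with both factors assumed to grow linearly in string length L."""
          population_size = pop_per_bit * L
          generations = gens_per_bit * L
          return population_size * generations  # O(L^2)

      for L in (64, 128, 256):
          print(L, int(circuit_evaluations(L)))  # roughly quadruples when L doubles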

  4. A Systems Approach to Scalable Transportation Network Modeling

    SciTech Connect

    Perumalla, Kalyan S

    2006-01-01

    Emerging needs in transportation network modeling and simulation are raising new challenges with respect to scalability of network size and vehicular traffic intensity, speed of simulation for simulation-based optimization, and fidelity of vehicular behavior for accurate capture of event phenomena. Parallel execution is warranted to sustain the required detail, size and speed. However, few parallel simulators exist for such applications, partly due to the challenges underlying their development. Moreover, many simulators are based on time-stepped models, which can be computationally inefficient for the purposes of modeling evacuation traffic. Here an approach is presented to designing a simulator with memory and speed efficiency as the goals from the outset, and, specifically, scalability via parallel execution. The design makes use of discrete event modeling techniques as well as parallel simulation methods. Our simulator, called SCATTER, is being developed, incorporating such design considerations. Preliminary performance results are presented on benchmark road networks, showing scalability to one million vehicles simulated on one processor.
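
    A minimal sketch of the discrete-event style mentioned above (a toy link-traversal model, not SCATTER's actual design): rather than advancing every vehicle at every time step, each vehicle schedules only its own next link-arrival event.

      import heapq

      def simulate(vehicles, link_travel_time, horizon):
          """Toy discrete-event traffic model: events are (time, vehicle, link_index)."""
          events = [(0.0, v, 0) for v in range(vehicles)]
          heapq.heapify(events)
          processed = 0
          while events:
              t, v, link = heapq.heappop(events)
              if t > horizon:
                  break
              processed += 1
              # schedule this vehicle's arrival at the next link; a real simulator
              # would consult congestion, signals, and routing here
              heapq.heappush(events, (t + link_travel_time, v, link + 1))
          return processed

      print(simulate(vehicles=1000, link_travel_time=30.0, horizon=3600.0))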

  5. Scalable van der Waals Heterojunctions for High-Performance Photodetectors.

    PubMed

    Yeh, Chao-Hui; Liang, Zheng-Yong; Lin, Yung-Chang; Wu, Tien-Lin; Fan, Ta; Chu, Yu-Cheng; Ma, Chun-Hao; Liu, Yu-Chen; Chu, Ying-Hao; Suenaga, Kazutomo; Chiu, Po-Wen

    2017-10-05

    Atomically thin two-dimensional (2D) materials have attracted increasing attention for optoelectronic applications in view of their compact, ultrathin, flexible, and superior photosensing characteristics. Yet, scalable growth of 2D heterostructures and the fabrication of integrable optoelectronic devices remain unaddressed. Here, we show a scalable formation of 2D stacks and the fabrication of phototransistor arrays, with each photosensing element made of a graphene-WS2 vertical heterojunction and individually addressable by a local top gate. The constituent layers in the heterojunction are grown using chemical vapor deposition in combination with sulfurization, providing a clean junction interface and processing scalability. The aluminum top gate possesses a self-limiting oxide around the gate structure, allowing for a self-aligned deposition of drain/source contacts to reduce the access (ungated) channel regions and to boost the device performance. The generated photocurrent, inherently restricted by the limited optical absorption cross section of 2D materials, can be enhanced by 2 orders of magnitude by top gating. The resulting photoresponsivity can reach 4.0 A/W under an illumination power density of 0.5 mW/cm^2, and the dark current can be minimized to a few picoamperes, yielding a low noise-equivalent power of 2.5 × 10^-16 W/Hz^(1/2). Tailoring 2D heterostacks as well as the device architecture moves the applications of 2D-based optoelectronic devices one big step forward.

  6. Community health workers in global health: scale and scalability.

    PubMed

    Liu, Anne; Sullivan, Sarah; Khan, Mohammed; Sachs, Sonia; Singh, Prabhjot

    2011-01-01

    Community health worker programs have emerged as one of the most effective strategies to address human resources for health shortages while improving access to and quality of primary healthcare. Many developing countries have succeeded in deploying community health worker programs in recognition of the potential of community health workers to identify, refer, and in many cases treat illnesses at the household level. However, challenges in program design and sustainability are amplified when such programs are expanded at scale, particularly with regard to systems management and integration with primary health facilities. Several nongovernmental organizations provide cases of innovation in the management of community health worker programs that could support a sustainable system capable of being expanded without compromising its functionality or effectiveness, thereby providing stronger scalability. This paper explores community health worker programs that have been deployed at national scale, as well as scalable innovations found in successful nongovernmental organization-run community health worker programs. In exploration of strategies to ensure sustainable community health worker programs at scale, we reconcile scaling constraints and scalable innovations by mapping strengths of nongovernmental organizations' community health worker programs to the challenges faced by programs currently deployed at national scale. © 2011 Mount Sinai School of Medicine.

  7. Event metadata records as a testbed for scalable data mining

    NASA Astrophysics Data System (ADS)

    van Gemmeren, P.; Malon, D.

    2010-04-01

    At a data rate of 200 hertz, event metadata records ("TAGs," in ATLAS parlance) provide fertile grounds for development and evaluation of tools for scalable data mining. It is easy, of course, to apply HEP-specific selection or classification rules to event records and to label such an exercise "data mining," but our interest is different. Advanced statistical methods and tools such as classification, association rule mining, and cluster analysis are common outside the high energy physics community. These tools can prove useful, not for discovery physics, but for learning about our data, our detector, and our software. A fixed and relatively simple schema makes TAG export to other storage technologies such as HDF5 straightforward. This simplifies the task of exploiting very-large-scale parallel platforms such as Argonne National Laboratory's BlueGene/P, currently the largest supercomputer in the world for open science, in the development of scalable tools for data mining. Using a domain-neutral scientific data format may also enable us to take advantage of existing data mining components from other communities. There is, further, a substantial literature on the topic of one-pass algorithms and stream mining techniques, and such tools may be inserted naturally at various points in the event data processing and distribution chain. This paper describes early experience with event metadata records from ATLAS simulation and commissioning as a testbed for scalable data mining tool development and evaluation.
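
    A minimal sketch of the kind of export described above, assuming a flat TAG-like schema and using h5py (the field names and values are hypothetical; this is not the ATLAS export code):

      import numpy as np
      import h5py

      # Hypothetical flat event-metadata records with a fixed, simple schema
      tag_dtype = np.dtype([("run", "i4"), ("event", "i8"),
                            ("n_jets", "i2"), ("missing_et", "f4")])
      tags = np.zeros(1000, dtype=tag_dtype)
      tags["run"] = 167776
      tags["event"] = np.arange(1000)

      # Export to HDF5 so domain-neutral analysis and mining tools can read it
      with h5py.File("tags.h5", "w") as f:
          f.create_dataset("tags", data=tags, compression="gzip")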

  8. A scalable micro-mixer for biomedical applications

    NASA Astrophysics Data System (ADS)

    Cortelezzi, Luca; Ferrari, Simone; Dubini, Angelo

    2016-11-01

    Our study presents a geometrically scalable active micro-mixer suitable for biomedical/bioengineering applications and potentially assimilable into a Lab-on-Chip. We designed our micro-mixer with the goal of satisfying the following constraints: small dimensions, because the device must be able to process volumes of fluid in the range of 10^-6 to 10^-9 liters; high mixing speed, because mixing should be obtained in the shortest possible time; constructive simplicity, to facilitate realizability, assimilability and reusability of the micro-mixer; and geometrical scalability, because the micro-mixer should be assimilable into microfluidic systems of different dimensions. We studied numerically the mixing performance of our micro-mixer in both two and three dimensions. We characterize the mixing performance in terms of Reynolds, Strouhal and Péclet numbers in order to establish a practical range of operating conditions for our micro-mixer. Finally, we show that our micro-mixer is geometrically scalable, i.e., micro-mixers of different geometrical dimensions having the same nondimensional specifications produce nearly the same mixing performance.
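
    For reference, the three dimensionless groups are straightforward to evaluate for a candidate operating point; the property values below are made-up, water-like numbers, not the paper's design parameters.

      def reynolds(velocity, length, kinematic_viscosity):
          return velocity * length / kinematic_viscosity

      def strouhal(frequency, length, velocity):
          return frequency * length / velocity

      def peclet(velocity, length, diffusivity):
          return velocity * length / diffusivity

      # Hypothetical operating point (SI units)
      U, L_ch = 1e-2, 100e-6   # 1 cm/s mean velocity, 100 micron channel
      nu, D = 1e-6, 1e-9       # water kinematic viscosity, small-molecule diffusivity
      f = 50.0                 # actuation frequency, Hz
      print(reynolds(U, L_ch, nu), strouhal(f, L_ch, U), peclet(U, L_ch, D))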

  9. Design and Implementation of Ceph: A Scalable Distributed File System

    SciTech Connect

    Weil, S A; Brandt, S A; Miller, E L; Long, D E; Maltzahn, C

    2006-04-19

    File system designers continue to look to new architectures to improve scalability. Object-based storage diverges from server-based (e.g. NFS) and SAN-based storage systems by coupling processors and memory with disk drives, delegating low-level allocation to object storage devices (OSDs) and decoupling I/O (read/write) from metadata (file open/close) operations. Even recent object-based systems inherit decades-old architectural choices going back to early UNIX file systems, however, limiting their ability to effectively scale to hundreds of petabytes. We present Ceph, a distributed file system that provides excellent performance and reliability with unprecedented scalability. Ceph maximizes the separation between data and metadata management by replacing allocation tables with a pseudo-random data distribution function (CRUSH) designed for heterogeneous and dynamic clusters of unreliable OSDs. We leverage OSD intelligence to distribute data replication, failure detection and recovery with semi-autonomous OSDs running a specialized local object storage file system (EBOFS). Finally, Ceph is built around a dynamic distributed metadata management cluster that provides extremely efficient metadata management that seamlessly adapts to a wide range of general purpose and scientific computing file system workloads. We present performance measurements under a variety of workloads that show superior I/O performance and scalable metadata management (more than a quarter million metadata ops/sec).
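
    The key idea of replacing allocation tables with a computed placement function can be illustrated with a simplified stand-in (this is rendezvous-style hashing, not CRUSH itself, which additionally handles hierarchy, weights, and failure domains):

      import hashlib

      def place(object_name, osds, replicas=3):
          """Deterministically map an object to `replicas` OSDs with no lookup table.
          Simplified highest-random-weight (rendezvous) hashing for illustration."""
          def score(osd):
              digest = hashlib.sha256(f"{object_name}:{osd}".encode()).hexdigest()
              return int(digest, 16)
          return sorted(osds, key=score, reverse=True)[:replicas]

      osds = [f"osd.{i}" for i in range(12)]
      print(place("volume42/chunk007", osds))

    Because every client can evaluate the same function, no central allocation table needs to be consulted or kept consistent.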

  10. Rethinking Visual Analytics for Streaming Data Applications

    SciTech Connect

    Crouser, R. Jordan; Franklin, Lyndsey; Cook, Kris

    2017-01-01

    In the age of data science, the use of interactive information visualization techniques has become increasingly ubiquitous. From online scientific journals to the New York Times graphics desk, the utility of interactive visualization for both storytelling and analysis has become ever more apparent. As these techniques have become more readily accessible, the appeal of combining interactive visualization with computational analysis continues to grow. Arising out of a need for scalable, human-driven analysis, the primary objective of visual analytics systems is to capitalize on the complementary strengths of human and machine analysis, using interactive visualization as a medium for communication between the two. These systems leverage developments from the fields of information visualization, computer graphics, machine learning, and human-computer interaction to support insight generation in areas where purely computational analyses fall short. Over the past decade, visual analytics systems have generated remarkable advances in many historically challenging analytical contexts. These include areas such as modeling political systems [Crouser et al. 2012], detecting financial fraud [Chang et al. 2008], and cybersecurity [Harrison et al. 2012]. In each of these contexts, domain expertise and human intuition are necessary components of the analysis. This intuition is essential to building trust in the analytical products, as well as supporting the translation of evidence into actionable insight. In addition, each of these examples also highlights the need for scalable analysis. In each case, it is infeasible for a human analyst to manually assess the raw information unaided, and the communication overhead to divide the task between a large number of analysts makes simple parallelism intractable. Regardless of the domain, visual analytics tools strive to optimize the allocation of human analytical resources, and to streamline the sensemaking process on massive data.

  11. Evaluation of Bessel beam machining for scalable fabrication of conductive channels through diamond

    NASA Astrophysics Data System (ADS)

    Canfield, Brian K.; Davis, Lloyd M.

    2017-02-01

    Scalable methods must be developed for fabricating high-density arrays of conductive microchannels through 0.5 mm-thick synthetic diamonds in order to form radiation-hard 3D particle tracking detectors for use in future high-energy particle physics experiments, such as those beyond the scheduled 2022 high-luminosity upgrade of the Large Hadron Collider. Prototype detectors with small-area arrays of graphitic columns, each written by slowly translating a femtosecond laser beam focus through the diamond, have established proof of concept, but much faster procedures are needed to manufacture large arrays of electrodes with <100 micron spacing over diameters of 10 cm. We have used a Bessel beam, formed using a 10° axicon and 0.68 NA aspheric lens, to very quickly write micron-diameter columns through 0.5 mm-thick electronic grade CVD diamonds without axially translating the diamond with respect to the beam. We employ an optical microscope to visualize columns, Raman spectroscopy to ascertain the degree of graphitization, and cat-whisker probes to test overall conductivity. Bessel focusing enables formation of a complete column with just a few femtosecond laser pulses, and so provides a scalable manufacturing method. However, reduction in the electrode resistivity is desired. To this end, expulsion of material from the column is probably needed, as carbon plasma will otherwise condense back into diamond, due to the disparate densities of graphite and diamond. We describe the use of several different femtosecond laser systems to evaluate a range of pulse parameters with the goal of increasing the level of graphitization and improving the conductivity of the electrodes.

  12. QUADrATiC: scalable gene expression connectivity mapping for repurposing FDA-approved therapeutics.

    PubMed

    O'Reilly, Paul G; Wen, Qing; Bankhead, Peter; Dunne, Philip D; McArt, Darragh G; McPherson, Suzanne; Hamilton, Peter W; Mills, Ken I; Zhang, Shu-Dong

    2016-05-04

    Gene expression connectivity mapping has proven to be a powerful and flexible tool for research. Its application has been shown in a broad range of research topics, most commonly as a means of identifying potential small molecule compounds, which may be further investigated as candidates for repurposing to treat diseases. The public release of voluminous data from the Library of Integrated Cellular Signatures (LINCS) programme further enhanced the utilities and potentials of gene expression connectivity mapping in biomedicine. We describe QUADrATiC ( http://go.qub.ac.uk/QUADrATiC ), a user-friendly tool for the exploration of gene expression connectivity on the subset of the LINCS data set corresponding to FDA-approved small molecule compounds. It enables the identification of compounds for repurposing therapeutic potentials. The software is designed to cope with the increased volume of data over existing tools, by taking advantage of multicore computing architectures to provide a scalable solution, which may be installed and operated on a range of computers, from laptops to servers. This scalability is provided by the use of the modern concurrent programming paradigm provided by the Akka framework. The QUADrATiC Graphical User Interface (GUI) has been developed using advanced Javascript frameworks, providing novel visualization capabilities for further analysis of connections. There is also a web services interface, allowing integration with other programs or scripts. QUADrATiC has been shown to provide an improvement over existing connectivity map software, in terms of scope (based on the LINCS data set), applicability (using FDA-approved compounds), usability and speed. It offers potential to biological researchers to analyze transcriptional data and generate potential therapeutics for focussed study in the lab. QUADrATiC represents a step change in the process of investigating gene expression connectivity and provides more biologically-relevant results than

  13. Encryption and authentication for scalable multimedia: current state of the art and challenges

    NASA Astrophysics Data System (ADS)

    Zhu, Bin B.; Swanson, Mitchell D.; Li, Shipeng

    2004-10-01

    Scalable coding is a technology that encodes a multimedia signal in a scalable manner where various representations can be extracted from a single codestream to fit a wide range of applications. Many new scalable coders such as JPEG 2000 and MPEG-4 FGS offer fine granularity scalability to provide a near continuous optimal tradeoff between quality and rates over a large range. This fine granularity scalability poses great new challenges to the design of encryption and authentication systems for scalable media in Digital Rights Management (DRM) and other applications. It may be desirable or even mandatory to maintain a certain level of scalability in the encrypted or signed codestream so that no decryption or re-signing is needed when legitimate adaptations are applied. In other words, the encryption and authentication should be scalable, i.e., adaptation friendly. Otherwise secrets have to be shared with every intermediate stage along the content delivery system which performs adaptation manipulations. Sharing secrets with many parties would jeopardize the overall security of a system since the security depends on the weakest component of the system. In this paper, we first describe general requirements and desirable features for an encryption or authentication system for scalable media, especially those not encountered in the non-scalable case. Then we present an overview of the current state of the art of technologies in scalable encryption and authentication. These technologies include full and selective encryption schemes that maintain the original or coarser granularity of scalability offered by an unencrypted scalable codestream, layered access control and block level authentication that reduce the fine granularity of scalability to a block level, among others. Finally, we summarize existing challenges and propose future research directions.

  14. Visual signatures in video visualization.

    PubMed

    Chen, Min; Botchen, Ralf P; Hashim, Rudy R; Weiskopf, Daniel; Ertl, Thomas; Thornton, Ian M

    2006-01-01

    Video visualization is a computation process that extracts meaningful information from original video data sets and conveys the extracted information to users in appropriate visual representations. This paper presents a broad treatment of the subject, following a typical research pipeline involving concept formulation, system development, a path-finding user study, and a field trial with real application data. In particular, we have conducted a fundamental study on the visualization of motion events in videos. We have, for the first time, deployed flow visualization techniques in video visualization. We have compared the effectiveness of different abstract visual representations of videos. We have conducted a user study to examine whether users are able to learn to recognize visual signatures of motions, and to assist in the evaluation of different visualization techniques. We have applied our understanding and the developed techniques to a set of application video clips. Our study has demonstrated that video visualization is both technically feasible and cost-effective. It has provided the first set of evidence confirming that ordinary users can be accustomed to the visual features depicted in video visualizations, and can learn to recognize visual signatures of a variety of motion events.

  15. Visual Imagery without Visual Perception?

    ERIC Educational Resources Information Center

    Bertolo, Helder

    2005-01-01

    The question regarding visual imagery and visual perception remains an open issue. Many studies have tried to understand whether the two processes share the same mechanisms or whether they are independent, using different neural substrates. Most research has been directed towards the need for activation of primary visual areas during imagery. Here we review…

  16. The Flatworld Simulation Control Architecture (FSCA): A Framework for Scalable Immersive Visualization Systems

    DTIC Science & Technology

    2004-12-01

    handling using the X10 home automation protocol. Each 3D graphics client renders its scene according to an assigned virtual camera position. By having...control protocol. DMX is a versatile and robust framework which overcomes limitations of the X10 home automation protocol which we are currently using

  17. Customizing computational methods for visual analytics with big data.

    PubMed

    Choo, Jaegul; Park, Haesun

    2013-01-01

    The volume of available data has been growing exponentially, increasing the complexity and obscurity of data problems. In response, visual analytics (VA) has gained attention, yet its solutions haven't scaled well for big data. Computational methods can improve VA's scalability by giving users compact, meaningful information about the input data. However, the significant computation time these methods require hinders real-time interactive visualization of big data. By addressing crucial discrepancies between these methods and VA regarding precision and convergence, researchers have proposed ways to customize them for VA. These approaches, which include low-precision computation and iteration-level interactive visualization, ensure real-time interactive VA for big data.

  18. Amira: Multi-Dimensional Scientific Visualization for the GeoSciences in the 21st Century

    NASA Astrophysics Data System (ADS)

    Bartsch, H.; Erlebacher, G.

    2003-12-01

    amira (www.amiravis.com) is a general purpose framework for 3D scientific visualization that meets the needs of the non-programmer, the script writer, and the advanced programmer alike. Provided modules may be visually assembled in an interactive manner to create complex visual displays. These modules and their associated user interfaces are controlled either through a mouse, or via an interactive scripting mechanism based on Tcl. We provide interactive demonstrations of the various features of Amira and explain how these may be used to enhance the comprehension of datasets in use in the Earth Sciences community. Its features will be illustrated on scalar and vector fields on grid types ranging from Cartesian to fully unstructured. Specialized extension modules developed by some of our collaborators will be illustrated [1]. These include a module to automatically choose values for salient isosurface identification and extraction, and color maps suitable for volume rendering. During the session, we will present several demonstrations of remote networking, processing of very large spatio-temporal datasets, and various other projects that are underway. In particular, we will demonstrate WEB-IS, a java-applet interface to Amira that allows script editing via the web, and selected data analysis [2]. [1] G. Erlebacher, D. A. Yuen, F. Dubuffet, "Case Study: Visualization and Analysis of High Rayleigh Number -- 3D Convection in the Earth's Mantle", Proceedings of Visualization 2002, pp. 529--532. [2] Y. Wang, G. Erlebacher, Z. A. Garbow, D. A. Yuen, "Web-Based Service of a Visualization Package 'amira' for the Geosciences", Visual Geosciences, 2003.

  19. VPLS: an effective technology for building scalable transparent LAN services

    NASA Astrophysics Data System (ADS)

    Dong, Ximing; Yu, Shaohua

    2005-02-01

    Virtual Private LAN Service (VPLS) is generating considerable interest with enterprises and service providers as it offers multipoint transparent LAN service (TLS) over MPLS networks. This paper describes an effective technology, VPLS, which links virtual switch instances (VSIs) through MPLS to form an emulated Ethernet switch and build scalable transparent LAN services. It first focuses on the architecture of VPLS, with Ethernet bridging at the edge and MPLS at the core; then it elucidates the data forwarding mechanism within a VPLS domain, including learning and aging MAC addresses on a per-LSP basis, flooding of unknown frames and replication of unknown, multicast, and broadcast frames. The loop-avoidance mechanism, known as split horizon forwarding, is also analyzed. Another important aspect of the VPLS service, its basic operation, including autodiscovery and signaling, is discussed. From the perspective of efficiency and scalability, the paper compares two important signaling mechanisms, BGP and LDP, which are used to set up a PW between the PEs and bind the PWs to a particular VSI. With the extension of VPLS and the growth of the full mesh of PWs between PE devices (n*(n-1)/2 PWs in all, i.e., O(n^2) in the number of PEs), a VPLS instance could have a large number of remote PE associations, resulting in inefficient use of network bandwidth and system resources, as the ingress PE has to replicate each frame and append MPLS labels for each remote PE. So the latter part of this paper focuses on the scalability issue: Hierarchical VPLS. Within the H-VPLS architecture, this paper addresses two ways to cope with a possibly large number of MAC addresses, which make VPLS operate more efficiently.
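
    The scaling pressure that motivates hierarchical VPLS is easy to quantify; a toy comparison follows (the hub/spoke split below is hypothetical, not taken from the paper):

      def full_mesh_pws(n_pe):
          """Pseudowires needed for a full mesh of n_pe PE devices."""
          return n_pe * (n_pe - 1) // 2

      def hvpls_pws(n_hub, spokes_per_hub):
          """Simplified H-VPLS count: hub full mesh plus one spoke PW per edge device."""
          return full_mesh_pws(n_hub) + n_hub * spokes_per_hub

      print(full_mesh_pws(200))   # 19900 PWs for a flat mesh of 200 PEs
      print(hvpls_pws(10, 19))    # 235 PWs for 10 hubs with 19 spokes each (200 devices)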

  20. Focal plane array with modular pixel array components for scalability

    SciTech Connect

    Kay, Randolph R; Campbell, David V; Shinde, Subhash L; Rienstra, Jeffrey L; Serkland, Darwin K; Holmes, Michael L

    2014-12-09

    A modular, scalable focal plane array is provided as an array of integrated circuit dice, wherein each die includes a given amount of modular pixel array circuitry. The array of dice effectively multiplies the amount of modular pixel array circuitry to produce a larger pixel array without increasing die size. Desired pixel pitch across the enlarged pixel array is preserved by forming die stacks with each pixel array circuitry die stacked on a separate die that contains the corresponding signal processing circuitry. Techniques for die stack interconnections and die stack placement are implemented to ensure that the desired pixel pitch is preserved across the enlarged pixel array.

  1. Scalable Production Method for Graphene Oxide Water Vapor Separation Membranes

    SciTech Connect

    Fifield, Leonard S.; Shin, Yongsoon; Liu, Wei; Gotthold, David W.

    2016-01-01

    Membranes for selective water vapor separation were assembled from graphene oxide suspension using techniques compatible with high volume industrial production. The large-diameter graphene oxide flake suspensions were synthesized from graphite materials via relatively efficient chemical oxidation steps with attention paid to maintaining flake size and achieving high graphene oxide concentrations. Graphene oxide membranes produced using scalable casting methods exhibited water vapor flux and water/nitrogen selectivity performance meeting or exceeding that of membranes produced using vacuum-assisted laboratory techniques. (PNNL-SA-117497)

  2. pcircle - A Suite of Scalable Parallel File System Tools

    SciTech Connect

    WANG, FEIYI

    2015-10-01

    Most software for file systems is written for conventional local file systems; it is serialized and cannot take advantage of a large-scale parallel file system. The pcircle software builds on top of ubiquitous MPI in a cluster computing environment and a "work-stealing" pattern to provide a scalable, high-performance suite of file system tools. In particular, it implements parallel data copy and parallel data checksumming, with advanced features such as asynchronous progress reporting, checkpoint and restart, as well as integrity checking.

  4. Scalable brain network construction on white matter fibers

    NASA Astrophysics Data System (ADS)

    Chung, Moo K.; Adluru, Nagesh; Dalton, Kim M.; Alexander, Andrew L.; Davidson, Richard J.

    2011-03-01

    DTI offers a unique opportunity to characterize the structural connectivity of the human brain non-invasively by tracing white matter fiber tracts. Whole brain tractography studies routinely generate up to half a million tracts per brain, which serve as edges in an extremely large 3D graph with up to half a million edges. Currently there is no agreed-upon method for constructing brain structural network graphs out of such large numbers of white matter tracts. In this paper, we present a scalable iterative framework called the ɛ-neighbor method for building a network graph and apply it to testing abnormal connectivity in autism.
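
    A greedy sketch of an ɛ-neighbor style construction (simplified for illustration; the tract data and threshold are fabricated): tract endpoints closer than ɛ are merged into the same node, and each tract contributes an edge between the nodes containing its two endpoints.

      import numpy as np

      def epsilon_neighbor_graph(tracts, eps):
          """tracts: iterable of (start_xyz, end_xyz) pairs. Returns node coordinates
          and an edge set. Greedy, illustrative version of the epsilon-neighbor idea."""
          nodes = []                      # representative endpoint coordinates
          def node_for(p):
              for i, q in enumerate(nodes):
                  if np.linalg.norm(p - q) < eps:
                      return i
              nodes.append(p)
              return len(nodes) - 1
          edges = set()
          for start, end in tracts:
              i, j = node_for(np.asarray(start, float)), node_for(np.asarray(end, float))
              if i != j:
                  edges.add((min(i, j), max(i, j)))
          return nodes, edges

      rng = np.random.default_rng(1)
      tracts = [(rng.uniform(0, 50, 3), rng.uniform(0, 50, 3)) for _ in range(200)]
      nodes, edges = epsilon_neighbor_graph(tracts, eps=10.0)
      print(len(nodes), "nodes,", len(edges), "edges")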

  5. Scalability and Performance of a Large Linux Cluster

    SciTech Connect

    BRIGHTWELL,RONALD B.; PLIMPTON,STEVEN J.

    2000-01-20

    In this paper the authors present performance results from several parallel benchmarks and applications on a 400-node Linux cluster at Sandia National Laboratories. They compare the results on the Linux cluster to performance obtained on a traditional distributed-memory massively parallel processing machine, the Intel TeraFLOPS. They discuss the characteristics of these machines that influence the performance results and identify the key components of the system software that they feel are important to allow for scalability of commodity-based PC clusters to hundreds and possibly thousands of processors.

  6. Scalable C-H Oxidation with Copper: Synthesis of Polyoxypregnanes.

    PubMed

    See, Yi Yang; Herrmann, Aaron T; Aihara, Yoshinori; Baran, Phil S

    2015-11-04

    Steroids bearing C12 oxidations are widespread in nature, yet only one preparative chemical method addresses this challenge in a low-yielding and not fully understood fashion: Schönecker's Cu-mediated oxidation. This work shines new light onto this powerful C-H oxidation method through mechanistic investigation, optimization, and wider application. Culminating in a scalable, rapid, high-yielding, and operationally simple protocol, this procedure is applied to the first synthesis of several parent polyoxypregnane natural products, representing a gateway to over 100 family members.

  7. Simplex-stochastic collocation method with improved scalability

    SciTech Connect

    Edeling, W.N.; Dwight, R.P.; Cinnella, P.

    2016-04-01

    The Simplex-Stochastic Collocation (SSC) method is a robust tool used to propagate uncertain input distributions through a computer code. However, it becomes prohibitively expensive for problems with dimensions higher than 5. The main purpose of this paper is to identify bottlenecks and to improve upon this poor scalability. In order to do so, we propose an alternative interpolation stencil technique based upon the Set-Covering problem, and we integrate the SSC method in the High-Dimensional Model-Reduction framework. In addition, we address the issue of ill-conditioned sample matrices, and we present an analytical map to facilitate uniformly-distributed simplex sampling.

  8. Scalable syntheses of the BET bromodomain inhibitor JQ1.

    PubMed

    Syeda, Shameem Sultana; Jakkaraj, Sudhakar; Georg, Gunda I

    2015-06-03

    We have developed methods involving the use of alternate, safer reagents for the scalable syntheses of the potent BET bromodomain inhibitor JQ1. A one-pot, three-step method, involving the conversion of a benzodiazepine to a thioamide using Lawesson's reagent, followed by amidrazone formation and installation of the triazole moiety, furnished JQ1. This method provides good yields and a facile purification process. For the synthesis of enantiomerically enriched (+)-JQ1, the highly toxic reagent diethyl chlorophosphate, used in a previous synthesis, was replaced with the safer reagent diphenyl chlorophosphate in the three-step, one-pot triazole formation without affecting the yield or enantiomeric purity of (+)-JQ1.

  9. Scalable Architecture for Multihop Wireless ad Hoc Networks

    NASA Technical Reports Server (NTRS)

    Arabshahi, Payman; Gray, Andrew; Okino, Clayton; Yan, Tsun-Yee

    2004-01-01

    A scalable architecture for wireless digital data and voice communications via ad hoc networks has been proposed. Although the details of the architecture and of its implementation in hardware and software have yet to be developed, the broad outlines of the architecture are fairly clear: This architecture departs from current commercial wireless communication architectures, which are characterized by low effective bandwidth per user and are not well suited to low-cost, rapid scaling in large metropolitan areas. This architecture is inspired by a vision more akin to that of more than two dozen noncommercial community wireless networking organizations established by volunteers in North America and several European countries.

  10. A scalable parallel algorithm for multiple objective linear programs

    NASA Technical Reports Server (NTRS)

    Wiecek, Malgorzata M.; Zhang, Hong

    1994-01-01

    This paper presents an ADBASE-based parallel algorithm for solving multiple objective linear programs (MOLPs). Job balance, speedup and scalability are of primary interest in evaluating the efficiency of the new algorithm. Implementation results on Intel iPSC/2 and Paragon multiprocessors show that the algorithm significantly speeds up the process of solving MOLPs, which is understood as generating all or some efficient extreme points and unbounded efficient edges. The algorithm gives especially good results for large and very large problems. Motivation and justification for solving such large MOLPs are also included.

  11. Scalable and Robust Randomized Benchmarking of Quantum Processes

    NASA Astrophysics Data System (ADS)

    Magesan, Easwar; Gambetta, J. M.; Emerson, Joseph

    2011-05-01

    In this Letter we propose a fully scalable randomized benchmarking protocol for quantum information processors. We prove that the protocol provides an efficient and reliable estimate of the average error rate for a set of operations (gates) under a very general noise model that allows for both time- and gate-dependent errors. In particular, we obtain a sequence of fitting models for the observable fidelity decay as a function of a (convergent) perturbative expansion of the gate errors about the mean error. We illustrate the protocol through numerical examples.
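
    The fidelity decay referred to above follows, in the simplest (zeroth-order) model, F(m) = A p^m + B, with the average error per gate recovered from the decay parameter p. A hedged sketch of fitting synthetic data to that model (the data points are fabricated):

      import numpy as np
      from scipy.optimize import curve_fit

      def decay(m, A, p, B):
          return A * p**m + B

      # Synthetic benchmarking data: sequence lengths and survival probabilities
      m = np.array([1, 5, 10, 20, 50, 100, 200])
      rng = np.random.default_rng(0)
      F = decay(m, 0.5, 0.98, 0.5) + rng.normal(0, 0.005, m.size)

      (A, p, B), _ = curve_fit(decay, m, F, p0=[0.5, 0.95, 0.5])
      d = 2                                # single-qubit dimension
      avg_error = (d - 1) * (1 - p) / d    # standard zeroth-order relation
      print(f"p = {p:.4f}, average error per gate ~ {avg_error:.2e}")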

  12. SciDAC Institute for Ultra-Scale Visualization: Activity Recognition for Ultra-Scale Visualization

    SciTech Connect

    Silver, Deborah

    2014-04-30

    Understanding the science behind ultra-scale simulations requires extracting meaning from data sets of hundreds of terabytes or more. Developing scalable parallel visualization algorithms is a key step in enabling scientists to interact with and visualize their data at this scale. However, at extreme scales, the datasets are so huge that there is not even enough time to view the data, let alone explore it with basic visualization methods. Automated tools are necessary for knowledge discovery -- to help sift through the information and isolate characteristic patterns, thereby enabling the scientist to study local interactions, the origin of features and their evolution in large volumes of data. These tools must be able to operate on data of this scale and work with the visualization process. In this project, we developed a framework for activity detection to allow scientists to model and extract spatio-temporal patterns from time-varying data.

  13. Runtime volume visualization for parallel CFD

    NASA Technical Reports Server (NTRS)

    Ma, Kwan-Liu

    1995-01-01

    This paper discusses some aspects of design of a data distributed, massively parallel volume rendering library for runtime visualization of parallel computational fluid dynamics simulations in a message-passing environment. Unlike the traditional scheme in which visualization is a postprocessing step, the rendering is done in place on each node processor. Computational scientists who run large-scale simulations on a massively parallel computer can thus perform interactive monitoring of their simulations. The current library provides an interface to handle volume data on rectilinear grids. The same design principles can be generalized to handle other types of grids. For demonstration, we run a parallel Navier-Stokes solver making use of this rendering library on the Intel Paragon XP/S. The interactive visual response achieved is found to be very useful. Performance studies show that the parallel rendering process is scalable with the size of the simulation as well as with the parallel computer.

  14. Visual Knowledge.

    ERIC Educational Resources Information Center

    Chipman, Susan F.

    Visual knowledge is an enormously important part of our total knowledge. The psychological study of learning and knowledge has focused almost exclusively on verbal materials. Today, the advance of technology is making the use of visual communication increasingly feasible and popular. However, this enthusiasm involves the illusion that visual…

  15. Visual Theorems.

    ERIC Educational Resources Information Center

    Davis, Philip J.

    1993-01-01

    Argues for a mathematics education that interprets the word "theorem" in a sense that is wide enough to include the visual aspects of mathematical intuition and reasoning. Defines the term "visual theorems" and illustrates the concept using the Marigold of Theodorus. (Author/MDH)

  16. Visual Literacy.

    ERIC Educational Resources Information Center

    Lamberski, Richard J.

    A series of articles examines visual literacy from the perspectives of definition, research, curriculum, and resources. Articles examining the definition of visual literacy approach it in terms of semantics, techniques, and exploratory definition areas. There are surveys of present and potential research, and a discussion of the problem of…

  17. Visual Thinking.

    ERIC Educational Resources Information Center

    Arnheim, Rudolf

    Based on the more general principle that all thinking (including reasoning) is basically perceptual in nature, the author proposes that visual perception is not a passive recording of stimulus material but an active concern of the mind. He delineates the task of visually distinguishing changes in size, shape, and position and points out the…

  18. Visual Impairment

    MedlinePlus

    ... include: Visual acuity test. A person reads an eye chart to measure how well he or she sees at various ... Visual field test. Ophthalmologists use this test to measure side, or peripheral, ... inside the eye to evaluate for glaucoma. If your doctor determines ...

  20. Visual Closure.

    ERIC Educational Resources Information Center

    Groffman, Sidney

    An experimental test of visual closure based on an information-theory concept of perception was devised to test the ability to discriminate visual stimuli with reduced cues. The test is to be administered in a timed individual situation in which the subject is presented with sets of incomplete drawings of simple objects that he is required to name…

  1. Mathematical Visualization

    ERIC Educational Resources Information Center

    Rogness, Jonathan

    2011-01-01

    Advances in computer graphics have provided mathematicians with the ability to create stunning visualizations, both to gain insight and to help demonstrate the beauty of mathematics to others. As educators these tools can be particularly important as we search for ways to work with students raised with constant visual stimulation, from video games…

  2. Visual Thinking.

    ERIC Educational Resources Information Center

    Arnheim, Rudolf

    Based on the more general principle that all thinking (including reasoning) is basically perceptual in nature, the author proposes that visual perception is not a passive recording of stimulus material but an active concern of the mind. He delineates the task of visually distinguishing changes in size, shape, and position and points out the…

  3. Neutron generators with size scalability, ease of fabrication and multiple ion source functionalities

    DOEpatents

    Elizondo-Decanini, Juan M

    2014-11-18

    A neutron generator is provided with a flat, rectilinear geometry and surface mounted metallizations. This construction provides scalability and ease of fabrication, and permits multiple ion source functionalities.

  4. Scalable tuning of building models to hourly data

    DOE PAGES

    Garrett, Aaron; New, Joshua Ryan

    2015-03-31

    Energy models of existing buildings are unreliable unless calibrated so they correlate well with actual energy usage. Manual tuning requires a skilled professional, is prohibitively expensive for small projects, imperfect, non-repeatable, non-transferable, and not scalable to the dozens of sensor channels that smart meters, smart appliances, and cheap/ubiquitous sensors are beginning to make available today. A scalable, automated methodology is needed to quickly and intelligently calibrate building energy models to all available data, increase the usefulness of those models, and facilitate speed-and-scale penetration of simulation-based capabilities into the marketplace for actualized energy savings. The "Autotune" project is a novel, model-agnostic methodology which leverages supercomputing, large simulation ensembles, and big data mining with multiple machine learning algorithms to allow automatic calibration of simulations that match measured experimental data in a way that is deployable on commodity hardware. This paper shares several methodologies employed to reduce the combinatorial complexity to a computationally tractable search problem for hundreds of input parameters. Furthermore, accuracy metrics are provided which quantify model error to measured data for either monthly or hourly electrical usage from a highly-instrumented, emulated-occupancy research home.
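
    The accuracy metrics mentioned above are typically normalized errors between simulated and metered series; below is a sketch of two metrics commonly used for building-model calibration, CV(RMSE) and NMBE (the hourly data is fabricated, and these are not necessarily the exact formulas used by Autotune):

      import numpy as np

      def cvrmse(measured, simulated):
          """Coefficient of variation of the RMSE, in percent."""
          measured, simulated = np.asarray(measured), np.asarray(simulated)
          rmse = np.sqrt(np.mean((measured - simulated) ** 2))
          return 100.0 * rmse / measured.mean()

      def nmbe(measured, simulated):
          """Normalized mean bias error, in percent."""
          measured, simulated = np.asarray(measured), np.asarray(simulated)
          return 100.0 * np.sum(measured - simulated) / (measured.size * measured.mean())

      # Fabricated hourly electrical usage (kWh) for one day
      measured  = [1.2, 1.1, 1.0, 1.0, 1.3, 2.0, 3.1, 3.4, 2.8, 2.5, 2.4, 2.6,
                   2.7, 2.5, 2.4, 2.6, 3.0, 3.5, 3.2, 2.9, 2.3, 1.8, 1.5, 1.3]
      simulated = [1.1, 1.0, 1.0, 1.1, 1.4, 1.9, 2.9, 3.6, 2.9, 2.4, 2.3, 2.7,
                   2.6, 2.4, 2.5, 2.7, 2.9, 3.3, 3.3, 2.8, 2.2, 1.9, 1.4, 1.2]
      print(f"CV(RMSE) = {cvrmse(measured, simulated):.1f}%, NMBE = {nmbe(measured, simulated):.1f}%")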

  5. Developing a scalable artificial photosynthesis technology through nanomaterials by design

    NASA Astrophysics Data System (ADS)

    Lewis, Nathan S.

    2016-12-01

    An artificial photosynthetic system that directly produces fuels from sunlight could provide an approach to scalable energy storage and a technology for the carbon-neutral production of high-energy-density transportation fuels. A variety of designs are currently being explored to create a viable artificial photosynthetic system, and the most technologically advanced systems are based on semiconducting photoelectrodes. Here, I discuss the development of an approach that is based on an architecture, first conceived around a decade ago, that combines arrays of semiconducting microwires with flexible polymeric membranes. I highlight the key steps that have been taken towards delivering a fully functional solar fuels generator, which have exploited advances in nanotechnology at all hierarchical levels of device construction, and include the discovery of earth-abundant electrocatalysts for fuel formation and materials for the stabilization of light absorbers. Finally, I consider the remaining scientific and engineering challenges facing the fulfilment of an artificial photosynthetic system that is simultaneously safe, robust, efficient and scalable.

  6. Scalable manufacturing of biomimetic moldable hydrogels for industrial applications.

    PubMed

    Yu, Anthony C; Chen, Haoxuan; Chan, Doreen; Agmon, Gillie; Stapleton, Lyndsay M; Sevit, Alex M; Tibbitt, Mark W; Acosta, Jesse D; Zhang, Tony; Franzia, Paul W; Langer, Robert; Appel, Eric A

    2016-12-13

    Hydrogels are a class of soft material that is exploited in many, often completely disparate, industrial applications, on account of their unique and tunable properties. Advances in soft material design are yielding next-generation moldable hydrogels that address engineering criteria in several industrial settings such as complex viscosity modifiers, hydraulic or injection fluids, and sprayable carriers. Industrial implementation of these viscoelastic materials requires extreme volumes of material, upwards of several hundred million gallons per year. Here, we demonstrate a paradigm for the scalable fabrication of self-assembled moldable hydrogels using rationally engineered, biomimetic polymer-nanoparticle interactions. Cellulose derivatives are linked together by selective adsorption to silica nanoparticles via dynamic and multivalent interactions. We show that the self-assembly process for gel formation is easily scaled in a linear fashion from 0.5 mL to over 15 L without alteration of the mechanical properties of the resultant materials. The facile and scalable preparation of these materials leveraging self-assembly of inexpensive, renewable, and environmentally benign starting materials, coupled with the tunability of their properties, make them amenable to a range of industrial applications. In particular, we demonstrate their utility as injectable materials for pipeline maintenance and product recovery in industrial food manufacturing as well as their use as sprayable carriers for robust application of fire retardants in preventing wildland fires.

  7. Scalable parallel distance field construction for large-scale applications

    DOE PAGES

    Yu, Hongfeng; Xie, Jinrong; Ma, Kwan -Liu; ...

    2015-10-01

    Computing distance fields is fundamental to many scientific and engineering applications. Distance fields can be used to direct analysis and reduce data. In this paper, we present a highly scalable method for computing 3D distance fields on massively parallel distributed-memory machines. A new distributed spatial data structure, named parallel distance tree, is introduced to manage the level sets of data and facilitate surface tracking over time, resulting in significantly reduced computation and communication costs for calculating the distance to the surface of interest from any spatial location. Our method supports several data types and distance metrics from real-world applications. We demonstrate its efficiency and scalability on state-of-the-art supercomputers using both large-scale volume datasets and surface models. We also demonstrate in-situ distance field computation on dynamic turbulent flame surfaces for a petascale combustion simulation. In conclusion, our work greatly extends the usability of distance fields for demanding applications.
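
    For orientation, the quantity being computed can be sketched with a brute-force serial version (not the paper's parallel distance-tree algorithm): every grid point is assigned its minimum Euclidean distance to a set of surface sample points.

      import numpy as np

      def distance_field(grid_shape, surface_points):
          """Brute-force distance field on a regular grid: for each grid point, the
          distance to the nearest surface sample. Cost is O(grid points * samples)."""
          axes = np.indices(grid_shape)
          grid = np.stack(axes, axis=-1).reshape(-1, len(grid_shape)).astype(float)
          pts = np.asarray(surface_points, dtype=float)
          d = np.sqrt(((grid[:, None, :] - pts[None, :, :]) ** 2).sum(axis=-1))
          return d.min(axis=1).reshape(grid_shape)

      surface = [(8.0, 8.0, 8.0), (3.0, 12.0, 5.0)]   # made-up surface samples
      field = distance_field((16, 16, 16), surface)
      print(field.shape, field.min(), round(float(field.max()), 2))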

  8. Scalable and Sustainable Electrochemical Allylic C–H Oxidation

    PubMed Central

    Chen, Yong; Tang, Jiaze; Chen, Ke; Eastgate, Martin D.; Baran, Phil S.

    2016-01-01

    New methods and strategies for the direct functionalization of C–H bonds are beginning to reshape the fabric of retrosynthetic analysis, impacting the synthesis of natural products, medicines, and even materials1. The oxidation of allylic systems has played a prominent role in this context as possibly the most widely applied C–H functionalization due to the utility of enones and allylic alcohols as versatile intermediates, along with their prevalence in natural and unnatural materials2. Allylic oxidations have been featured in hundreds of syntheses, including some natural product syntheses regarded as “classics”3. Despite many attempts to improve the efficiency and practicality of this powerful transformation, the vast majority of conditions still employ highly toxic reagents (based around toxic elements such as chromium, selenium, etc.) or expensive catalysts (palladium, rhodium, etc.)2. These requirements are highly problematic in industrial settings; currently, no scalable and sustainable solution to allylic oxidation exists. As such, this oxidation strategy is rarely embraced for large-scale synthetic applications, limiting the adoption of this important retrosynthetic strategy by industrial scientists. In this manuscript, we describe an electrochemical solution to this problem that exhibits broad substrate scope, operational simplicity, and high chemoselectivity. This method employs inexpensive and readily available materials, representing the first example of a scalable allylic C–H oxidation (demonstrated on 100 grams), finally opening the door for the adoption of this C–H oxidation strategy in large-scale industrial settings without significant environmental impact. PMID:27096371

  9. Scalable manufacturing of biomimetic moldable hydrogels for industrial applications

    NASA Astrophysics Data System (ADS)

    Yu, Anthony C.; Chen, Haoxuan; Chan, Doreen; Agmon, Gillie; Stapleton, Lyndsay M.; Sevit, Alex M.; Tibbitt, Mark W.; Acosta, Jesse D.; Zhang, Tony; Franzia, Paul W.; Langer, Robert; Appel, Eric A.

    2016-12-01

    Hydrogels are a class of soft material that is exploited in many, often completely disparate, industrial applications, on account of their unique and tunable properties. Advances in soft material design are yielding next-generation moldable hydrogels that address engineering criteria in several industrial settings such as complex viscosity modifiers, hydraulic or injection fluids, and sprayable carriers. Industrial implementation of these viscoelastic materials requires extreme volumes of material, upwards of several hundred million gallons per year. Here, we demonstrate a paradigm for the scalable fabrication of self-assembled moldable hydrogels using rationally engineered, biomimetic polymer–nanoparticle interactions. Cellulose derivatives are linked together by selective adsorption to silica nanoparticles via dynamic and multivalent interactions. We show that the self-assembly process for gel formation is easily scaled in a linear fashion from 0.5 mL to over 15 L without alteration of the mechanical properties of the resultant materials. The facile and scalable preparation of these materials leveraging self-assembly of inexpensive, renewable, and environmentally benign starting materials, coupled with the tunability of their properties, make them amenable to a range of industrial applications. In particular, we demonstrate their utility as injectable materials for pipeline maintenance and product recovery in industrial food manufacturing as well as their use as sprayable carriers for robust application of fire retardants in preventing wildland fires.

  10. Scalable orbital-angular-momentum sorting without destroying photon states

    NASA Astrophysics Data System (ADS)

    Wang, Fang-Xiang; Chen, Wei; Yin, Zhen-Qiang; Wang, Shuang; Guo, Guang-Can; Han, Zheng-Fu

    2016-09-01

    Single photons with orbital angular momentum (OAM) have attracted substantial attention from researchers. In theory, a single photon can carry infinitely many OAM values. Thus, OAM photon states have been widely used in quantum information and fundamental quantum mechanics. Although there have been many methods for sorting quantum states with different OAM values, nondestructive and efficient sorting of high-dimensional OAM remains a fundamental challenge. Here, we propose a scalable OAM sorter which can categorize different OAM states simultaneously while preserving both OAM and spin angular momentum. The fundamental elements of the sorter are symmetric multiport beam splitters (BSs) and Dove prisms in a cascading structure, which in principle can be flexibly and effectively combined to sort arbitrarily high-dimensional OAM photons. The scalable structures proposed here greatly reduce the number of BSs required for sorting high-dimensional OAM states. In view of the nondestructive and extensible features, the sorters can be used as fundamental devices not only for high-dimensional quantum information processing, but also for traditional optics.

  11. Scalable tuning of building models to hourly data

    SciTech Connect

    Garrett, Aaron; New, Joshua Ryan

    2015-03-31

    Energy models of existing buildings are unreliable unless calibrated so they correlate well with actual energy usage. Manual tuning requires a skilled professional, is prohibitively expensive for small projects, imperfect, non-repeatable, non-transferable, and not scalable to the dozens of sensor channels that smart meters, smart appliances, and cheap/ubiquitous sensors are beginning to make available today. A scalable, automated methodology is needed to quickly and intelligently calibrate building energy models to all available data, increase the usefulness of those models, and facilitate speed-and-scale penetration of simulation-based capabilities into the marketplace for actualized energy savings. The "Autotune" project is a novel, model-agnostic methodology which leverages supercomputing, large simulation ensembles, and big data mining with multiple machine learning algorithms to allow automatic calibration of simulations that match measured experimental data in a way that is deployable on commodity hardware. This paper shares several methodologies employed to reduce the combinatorial complexity to a computationally tractable search problem for hundreds of input parameters. Furthermore, accuracy metrics are provided which quantify model error to measured data for either monthly or hourly electrical usage from a highly-instrumented, emulated-occupancy research home.

  12. Resolution scalable image coding with reversible cellular automata.

    PubMed

    Cappellari, Lorenzo; Milani, Simone; Cruz-Reyes, Carlos; Calvagno, Giancarlo

    2011-05-01

    In a resolution scalable image coding algorithm, a multiresolution representation of the data is often obtained using a linear filter bank. Reversible cellular automata have been recently proposed as simpler, nonlinear filter banks that produce a similar representation. The original image is decomposed into four subbands, such that one of them retains most of the features of the original image at a reduced scale. In this paper, we discuss the utilization of reversible cellular automata and arithmetic coding for scalable compression of binary and grayscale images. In the binary case, the proposed algorithm that uses simple local rules compares well with the JBIG compression standard, in particular for images where the foreground is made of a simple connected region. For complex images, more efficient local rules based upon the lifting principle have been designed. They provide compression performances very close to or even better than JBIG, depending upon the image characteristics. In the grayscale case, and in particular for smooth images such as depth maps, the proposed algorithm outperforms both the JBIG and the JPEG2000 standards under most coding conditions.

  13. Cheetah: A Framework for Scalable Hierarchical Collective Operations

    SciTech Connect

    Graham, Richard L; Gorentla Venkata, Manjunath; Ladd, Joshua S; Shamis, Pavel; Rabinovitz, Ishai; Filipov, Vasily; Shainer, Gilad

    2011-01-01

    Collective communication operations, used by many scientific applications, tend to limit overall parallel application performance and scalability. Computer systems are becoming more heterogeneous with increasing node and core-per-node counts. Also, a growing number of data-access mechanisms, of varying characteristics, are supported within a single computer system. We describe a new hierarchical collective communication framework that takes advantage of hardware-specific data-access mechanisms. It is flexible, with run-time hierarchy specification, and sharing of collective communication primitives between collective algorithms. Data buffers are shared between levels in the hierarchy reducing collective communication management overhead. We have implemented several versions of the Message Passing Interface (MPI) collective operations, MPI_Barrier() and MPI_Bcast(), and run experiments using up to 49,152 processes on a Cray XT5, and a small InfiniBand-based cluster. At 49,152 processes our barrier implementation outperforms the optimized native implementation by 75%. 32-byte and one-megabyte broadcasts outperform it by 62% and 11%, respectively, with better scalability characteristics. Improvements relative to the default Open MPI implementation are much larger.
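    Cheetah itself is not shown in this abstract, but the two collectives it benchmarks are standard MPI calls. A minimal timing sketch with mpi4py (the iteration counts and buffer size are assumptions; this exercises MPI_Barrier and MPI_Bcast, not the Cheetah framework):

```python
# Minimal mpi4py sketch exercising the two collectives benchmarked above
# (MPI_Barrier and MPI_Bcast). Run with e.g.: mpiexec -n 4 python bench.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Barrier timing: all ranks synchronize, rank 0 reports the mean time.
comm.Barrier()
t0 = MPI.Wtime()
for _ in range(100):
    comm.Barrier()
t_barrier = (MPI.Wtime() - t0) / 100

# Broadcast timing for a 1 MB buffer rooted at rank 0.
buf = np.zeros(1 << 20, dtype=np.uint8)
comm.Barrier()
t0 = MPI.Wtime()
for _ in range(10):
    comm.Bcast(buf, root=0)
t_bcast = (MPI.Wtime() - t0) / 10

if rank == 0:
    print(f"barrier: {t_barrier * 1e6:.1f} us, 1 MB bcast: {t_bcast * 1e6:.1f} us")
```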

  14. Scalable Multiprocessor for High-Speed Computing in Space

    NASA Technical Reports Server (NTRS)

    Lux, James; Lang, Minh; Nishimoto, Kouji; Clark, Douglas; Stosic, Dorothy; Bachmann, Alex; Wilkinson, William; Steffke, Richard

    2004-01-01

    A report discusses the continuing development of a scalable multiprocessor computing system for hard real-time applications aboard a spacecraft. "Hard real-time applications" signifies applications, like real-time radar signal processing, in which the data to be processed are generated at hundreds of pulses per second, each pulse requiring millions of arithmetic operations. In these applications, the digital processors must be tightly integrated with analog instrumentation (e.g., radar equipment), and data input/output must be synchronized with analog instrumentation, controlled to within fractions of a microsecond. The scalable multiprocessor is a cluster of identical commercial-off-the-shelf generic DSP (digital-signal-processing) computers plus generic interface circuits, including analog-to-digital converters, all controlled by software. The processors are computers interconnected by high-speed serial links. Performance can be increased by adding hardware modules and correspondingly modifying the software. Work is distributed among the processors in a parallel or pipeline fashion by means of a flexible master/slave control and timing scheme. Each processor operates under its own local clock; synchronization is achieved by broadcasting master time signals to all the processors, which compute offsets between the master clock and their local clocks.

  15. Biosurveillance of emerging biothreats using scalable genotype clustering.

    PubMed

    Gallego, Blanca; Sintchenko, Vitali; Wang, Qinning; Hiley, Lester; Gilbert, Gwendolyn L; Coiera, Enrico

    2009-02-01

    Developments in molecular fingerprinting of pathogens with epidemic potential have offered new opportunities for improving detection and monitoring of biothreats. However, the lack of scalable definitions for infectious disease clustering presents a barrier for effective use and evaluation of new data types for early warning systems. A novel working definition of an outbreak based on temporal and spatial clustering of molecular genotypes is introduced in this paper. It provides an unambiguous way of clustering of causative pathogens and is adjustable to local disease prevalence and availability of public health resources. The performance of this definition in prospective surveillance is assessed in the context of community outbreaks of food-borne salmonellosis. Molecular fingerprinting augmented with the scalable clustering allows the detection of more than 50% of the potential outbreaks before they reach the midpoint of the cluster duration. Clustering in time by imposing restrictions on intervals between collection dates results in a smaller number of outbreaks but does not significantly affect the timeliness of detection. Clustering in space and time by imposing restrictions on the spatial and temporal distance between cases results in a further reduction in the number of outbreaks and decreases the overall efficiency of prospective detection. Innovative bacterial genotyping technologies can enhance early warning systems for public health by aiding the detection of moderate and small epidemics.
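    The paper's cluster definition is adjustable to local prevalence and resources; the sketch below only illustrates the basic idea of grouping isolates that share a genotype and whose collection dates fall within a configurable interval. The window length, genotype labels, and example records are assumptions:

```python
# Illustrative grouping of isolates into putative clusters by shared genotype
# and proximity of collection dates (not the paper's exact definition).
from datetime import date
from itertools import groupby
from operator import itemgetter

MAX_GAP_DAYS = 14  # assumed maximum gap between consecutive cases in a cluster

isolates = [  # (genotype fingerprint, collection date) -- made-up records
    ("MLVA-03-10-14", date(2008, 1, 3)),
    ("MLVA-03-10-14", date(2008, 1, 9)),
    ("MLVA-03-10-14", date(2008, 3, 2)),
    ("MLVA-07-12-09", date(2008, 1, 5)),
]

clusters = []
for genotype, group in groupby(sorted(isolates, key=itemgetter(0)), key=itemgetter(0)):
    dates = sorted(d for _, d in group)
    current = [dates[0]]
    for d in dates[1:]:
        if (d - current[-1]).days <= MAX_GAP_DAYS:
            current.append(d)               # still within the temporal window
        else:
            clusters.append((genotype, current))
            current = [d]
    clusters.append((genotype, current))

for genotype, members in clusters:
    print(genotype, len(members), members[0], "->", members[-1])
```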

  16. Scalable and sustainable electrochemical allylic C-H oxidation.

    PubMed

    Horn, Evan J; Rosen, Brandon R; Chen, Yong; Tang, Jiaze; Chen, Ke; Eastgate, Martin D; Baran, Phil S

    2016-05-05

    New methods and strategies for the direct functionalization of C-H bonds are beginning to reshape the field of retrosynthetic analysis, affecting the synthesis of natural products, medicines and materials. The oxidation of allylic systems has played a prominent role in this context as possibly the most widely applied C-H functionalization, owing to the utility of enones and allylic alcohols as versatile intermediates, and their prevalence in natural and unnatural materials. Allylic oxidations have featured in hundreds of syntheses, including some natural product syntheses regarded as "classics". Despite many attempts to improve the efficiency and practicality of this transformation, the majority of conditions still use highly toxic reagents (based around toxic elements such as chromium or selenium) or expensive catalysts (such as palladium or rhodium). These requirements are problematic in industrial settings; currently, no scalable and sustainable solution to allylic oxidation exists. This oxidation strategy is therefore rarely used for large-scale synthetic applications, limiting the adoption of this retrosynthetic strategy by industrial scientists. Here we describe an electrochemical C-H oxidation strategy that exhibits broad substrate scope, operational simplicity and high chemoselectivity. It uses inexpensive and readily available materials, and represents a scalable allylic C-H oxidation (demonstrated on 100 grams), enabling the adoption of this C-H oxidation strategy in large-scale industrial settings without substantial environmental impact.

  17. Ion-photon networks for scalable quantum computing

    NASA Astrophysics Data System (ADS)

    Clark, Susan; Hayes, David; Hucul, David; Debnath, Shantanu; Quraishi, Qudsia; Olmschenk, Steven; Matsukevich, Dzmitry; Maunz, Peter; Monroe, Christopher

    2011-05-01

    Trapped ions connected by photons are a promising avenue for large-scale quantum computing and quantum information transfer. Previous experiments with photonically connected, distant, trapped ions established ion entanglement and teleportation. Here, we report advances toward combining these photonic gates between distant ions with Coulombic gates between nearby ions in order to demonstrate a scalable quantum network. Specifically, we show individual optical addressing of single ions, allowing for photonic entanglement to be performed separately from Coulombic gates. Additionally, we move from a four-rod trap to a segmented blade-type trap to more easily hold a larger number of ions. Finally, we implement a more robust method of ion entanglement that lessens dephasing due to rf trap noise. These improvements are important steps to the realization of a scalable ion-photon network. This work was supported by grants from the U.S. Army Research Office with funding from IARPA, the DARPA OLE program, and the MURI program; the NSF PIF Program; the NSF Physics Frontier Center at JQI; the European Commission AQUTE program.

  18. A Highly Scalable Peptide-Based Assay System for Proteomics

    PubMed Central

    Kozlov, Igor A.; Thomsen, Elliot R.; Munchel, Sarah E.; Villegas, Patricia; Capek, Petr; Gower, Austin J.; K. Pond, Stephanie J.; Chudin, Eugene; Chee, Mark S.

    2012-01-01

    We report a scalable and cost-effective technology for generating and screening high-complexity customizable peptide sets. The peptides are made as peptide-cDNA fusions by in vitro transcription/translation from pools of DNA templates generated by microarray-based synthesis. This approach enables large custom sets of peptides to be designed in silico, manufactured cost-effectively in parallel, and assayed efficiently in a multiplexed fashion. The utility of our peptide-cDNA fusion pools was demonstrated in two activity-based assays designed to discover protease and kinase substrates. In the protease assay, cleaved peptide substrates were separated from uncleaved and identified by digital sequencing of their cognate cDNAs. We screened the 3,011 amino acid HCV proteome for susceptibility to cleavage by the HCV NS3/4A protease and identified all 3 known trans cleavage sites with high specificity. In the kinase assay, peptide substrates phosphorylated by tyrosine kinases were captured and identified by sequencing of their cDNAs. We screened a pool of 3,243 peptides against Abl kinase and showed that phosphorylation events detected were specific and consistent with the known substrate preferences of Abl kinase. Our approach is scalable and adaptable to other protein-based assays. PMID:22701568

  19. Scalable manufacturing of biomimetic moldable hydrogels for industrial applications

    PubMed Central

    Yu, Anthony C.; Chen, Haoxuan; Chan, Doreen; Agmon, Gillie; Stapleton, Lyndsay M.; Sevit, Alex M.; Tibbitt, Mark W.; Acosta, Jesse D.; Zhang, Tony; Franzia, Paul W.; Langer, Robert

    2016-01-01

    Hydrogels are a class of soft material that is exploited in many, often completely disparate, industrial applications, on account of their unique and tunable properties. Advances in soft material design are yielding next-generation moldable hydrogels that address engineering criteria in several industrial settings such as complex viscosity modifiers, hydraulic or injection fluids, and sprayable carriers. Industrial implementation of these viscoelastic materials requires extreme volumes of material, upwards of several hundred million gallons per year. Here, we demonstrate a paradigm for the scalable fabrication of self-assembled moldable hydrogels using rationally engineered, biomimetic polymer–nanoparticle interactions. Cellulose derivatives are linked together by selective adsorption to silica nanoparticles via dynamic and multivalent interactions. We show that the self-assembly process for gel formation is easily scaled in a linear fashion from 0.5 mL to over 15 L without alteration of the mechanical properties of the resultant materials. The facile and scalable preparation of these materials leveraging self-assembly of inexpensive, renewable, and environmentally benign starting materials, coupled with the tunability of their properties, make them amenable to a range of industrial applications. In particular, we demonstrate their utility as injectable materials for pipeline maintenance and product recovery in industrial food manufacturing as well as their use as sprayable carriers for robust application of fire retardants in preventing wildland fires. PMID:27911849

  20. Scalable and Fault Tolerant Failure Detection and Consensus

    SciTech Connect

    Katti, Amogh; Di Fatta, Giuseppe; Naughton III, Thomas J; Engelmann, Christian

    2015-01-01

    Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a fault tolerant failure detection and consensus algorithm. This paper presents and compares two novel failure detection and consensus algorithms. The proposed algorithms are based on Gossip protocols and are inherently fault-tolerant and scalable. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in both algorithms the number of Gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and a perfect synchronization in achieving global consensus.
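    The two algorithms are specified in the paper; the toy simulation below only illustrates the generic push-gossip dissemination they build on, showing that the number of cycles needed for a failure notice to reach all alive processes grows roughly logarithmically with system size (the fanout and system sizes are assumptions, and this is not the paper's consensus protocol):

```python
# Toy push-gossip simulation: cycles until every alive process has learned of
# a failure. Illustrates the logarithmic scaling reported above; it is not
# the paper's failure detection or consensus algorithm.
import math
import random

def gossip_cycles(n_alive, fanout=1, seed=0):
    rng = random.Random(seed)
    informed = {0}                    # process 0 detects the failure first
    cycles = 0
    while len(informed) < n_alive:
        cycles += 1
        targets = set()
        for _ in informed:            # every informed process gossips to peers
            for _ in range(fanout):
                targets.add(rng.randrange(n_alive))
        informed |= targets
    return cycles

for n in (64, 512, 4096, 32768):
    print(f"n={n:6d}  cycles={gossip_cycles(n):3d}  log2(n)={math.log2(n):.1f}")
```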

  1. Efficient and scalable graph similarity joins in MapReduce.

    PubMed

    Chen, Yifan; Zhao, Xiang; Xiao, Chuan; Zhang, Weiming; Tang, Jiuyang

    2014-01-01

    Along with the emergence of massive graph-modeled data, it is of great importance to investigate graph similarity joins due to their wide applications for multiple purposes, including data cleaning and near-duplicate detection. This paper considers graph similarity joins with edit distance constraints, which return pairs of graphs such that their edit distances are no larger than a given threshold. Leveraging the MapReduce programming model, we propose MGSJoin, a scalable algorithm following the filtering-verification framework for efficient graph similarity joins. It relies on counting overlapping graph signatures for filtering out nonpromising candidates. With the potential issue of too many key-value pairs in the filtering phase, spectral Bloom filters are introduced to reduce the number of key-value pairs. Furthermore, we integrate the multiway join strategy to boost the verification, where a MapReduce-based method is proposed for graph edit distance (GED) calculation. The superior efficiency and scalability of the proposed algorithms are demonstrated by extensive experimental results.
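    MGSJoin's signatures and MapReduce pipeline are beyond a short snippet, but the filtering-verification idea can be sketched serially: a cheap size-based lower bound prunes pairs before an exact edit-distance check. NetworkX's graph_edit_distance stands in for the verification step, and the threshold and example graphs are assumptions:

```python
# Serial filter-then-verify sketch for graph similarity joins with an edit
# distance threshold (not the MGSJoin signatures or MapReduce pipeline).
from itertools import combinations
import networkx as nx

TAU = 2  # assumed graph edit distance threshold

def from_edges(edges):
    g = nx.Graph()
    g.add_edges_from(edges)
    return g

graphs = {
    "g1": from_edges([(1, 2), (2, 3), (3, 1)]),
    "g2": from_edges([(1, 2), (2, 3), (3, 1), (3, 4)]),
    "g3": from_edges([(1, 2), (2, 3), (3, 4), (4, 5)]),
}

def size_lower_bound(ga, gb):
    # Differences in node and edge counts are a cheap lower bound on GED.
    return (abs(ga.number_of_nodes() - gb.number_of_nodes())
            + abs(ga.number_of_edges() - gb.number_of_edges()))

matches = []
for (na, ga), (nb, gb) in combinations(graphs.items(), 2):
    if size_lower_bound(ga, gb) > TAU:
        continue                               # filtered without exact GED
    if nx.graph_edit_distance(ga, gb) <= TAU:  # expensive verification step
        matches.append((na, nb))

print(matches)   # [('g1', 'g2')]
```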

  2. Scalable Nernst thermoelectric power using a coiled galfenol wire

    NASA Astrophysics Data System (ADS)

    Yang, Zihao; Codecido, Emilio A.; Marquez, Jason; Zheng, Yuanhua; Heremans, Joseph P.; Myers, Roberto C.

    2017-09-01

    The Nernst thermopower is usually considered far too weak in most metals for waste heat recovery. However, its transverse orientation gives it an advantage over the Seebeck effect on non-flat surfaces. Here, we experimentally demonstrate the scalable generation of a Nernst voltage in an air-cooled metal wire coiled around a hot cylinder. In this geometry, a radial temperature gradient generates an azimuthal electric field in the coil. A Galfenol (Fe0.85Ga0.15) wire is wrapped around a cartridge heater, and the voltage drop across the wire is measured as a function of axial magnetic field. As expected, the Nernst voltage scales linearly with the length of the wire. Based on heat conduction and fluid dynamics equations, the finite-element method is used to calculate the temperature gradient across the Galfenol wire and determine the Nernst coefficient. A giant Nernst coefficient of -2.6 μV/(K·T) at room temperature is estimated, in agreement with measurements on bulk Galfenol. We expect that the giant Nernst effect in Galfenol arises from its magnetostriction, presumably through enhanced magnon-phonon coupling. Our results demonstrate the feasibility of a transverse thermoelectric generator capable of scalable output power from non-flat heat sources.
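    The linear length scaling reported above follows from the geometry: the azimuthal Nernst field along the wire is set by the axial field and the radial temperature gradient, so integrating along the wire gives a voltage proportional to its length. A sketch of the relation, with illustrative (assumed) numbers:

```latex
% Azimuthal Nernst field from an axial field B_z and radial gradient \nabla_r T,
% integrated along a wire of total length \ell (uniform-gradient assumption):
\[
  E_\varphi = N\, B_z\, \nabla_r T
  \qquad\Longrightarrow\qquad
  V = \int_0^{\ell} E_\varphi \,\mathrm{d}s = N\, B_z\, (\nabla_r T)\, \ell .
\]
% Illustrative numbers (assumed): with N = -2.6\,\mu\mathrm{V\,K^{-1}\,T^{-1}},
% B_z = 1\,\mathrm{T}, \nabla_r T = 10^{3}\,\mathrm{K\,m^{-1}} and
% \ell = 10\,\mathrm{m}, the voltage is V \approx -26\,\mathrm{mV}.
```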

  3. MPE graphics -- Scalable X11 graphics in MPI

    SciTech Connect

    Gropp, W.; Karrels, E.; Lusk, E.

    1994-12-31

    As parallel programs enter the mainstream, they need to provide the same facilities and ease-of-use features expected of uniprocessor programs. For many applications, this means that they need to provide graphical output. This talk discusses a library of routines that provide scalable X Window System graphics. These routines make use of the MPI message-passing standard to provide a safe and reliable system that can be easily used in parallel programs. At the same time they encapsulate commonly-used services to provide a convenient interface to X graphics facilities. The easiest way to provide X11 graphics to a parallel program is to allow each process to draw on the same X11 Window. That is, each process opens a connection to the X11 server and draws directly to it. In one sense, this is as scalable a system as possible, since the single graphics display is an unavoidable point of sequential access. However, in reality, an X server can only accept a relatively small number of connections. In addition, the latency associated with each transmission between a parallel process and the X Window server is relatively high. This talk addresses these issues.

  4. The Node Monitoring Component of a Scalable Systems Software Environment

    SciTech Connect

    Miller, Samuel James

    2006-01-01

    This research describes Fountain, a suite of programs used to monitor the resources of a cluster. A cluster is a collection of individual computers that are connected via a high-speed communication network. They are traditionally used by users who desire more resources, such as processing power and memory, than any single computer can provide. A common drawback to effectively utilizing such a large-scale system is the management infrastructure, which often does not scale well as the system grows. Large-scale parallel systems provide new research challenges in the area of systems software, the programs or tools that manage the system from boot-up to running a parallel job. The approach presented in this thesis utilizes a collection of separate components that communicate with each other to achieve a common goal. While systems software comprises a broad array of components, this thesis focuses on the design choices for a node monitoring component. We will describe Fountain, an implementation of the Scalable Systems Software (SSS) node monitor specification. It is targeted at aggregate node monitoring for clusters, focusing on both scalability and fault tolerance as its design goals. It leverages widely used technologies such as XML and HTTP to present an interface to other components in the SSS environment.

  5. Towards Scalable Strain Gauge-Based Joint Torque Sensors

    PubMed Central

    D’Imperio, Mariapaola; Cannella, Ferdinando; Caldwell, Darwin G.; Cuschieri, Alfred

    2017-01-01

    During recent decades, strain gauge-based joint torque sensors have been commonly used to provide high-fidelity torque measurements in robotics. Although measurement of joint torque/force is often required in engineering research and development, the gluing and wiring of strain gauges used as torque sensors pose difficulties during integration within the restricted space available in small joints. The problem is compounded by the need for a scalable geometric design to measure joint torque. In this communication, we describe a novel design of a strain gauge-based mono-axial torque sensor referred to as the square-cut torque sensor (SCTS), the significant features of which are a high degree of linearity, symmetry, and high scalability in terms of both size and measuring range. Most importantly, the SCTS provides easy access for gluing and wiring of the strain gauges on the sensor surface despite the limited available space. We demonstrated that the SCTS performed better in terms of symmetry (clockwise and counterclockwise rotation) and linearity. These capabilities have been shown through finite element modeling (ANSYS), confirmed by data obtained in load-testing experiments. The high performance of the SCTS was confirmed by studies involving changes in size, material, and/or wing width and thickness. Finally, we demonstrated that the SCTS can be successfully implemented inside the hip joints of the miniaturized hydraulically actuated quadruped robot MiniHyQ. This communication is based on work presented at the 18th International Conference on Climbing and Walking Robots (CLAWAR). PMID:28820446

  6. Efficient Buffer Management for Scalable Media-on-Demand

    NASA Astrophysics Data System (ADS)

    Waldvogel, Marcel; Deng, Wei; Janakiraman, Ramaprabhu

    2003-01-01

    Widespread availability of high-speed networks and fast, cheap computation have rendered high-quality Media-on-Demand (MoD) feasible. Research on scalable MoD has resulted in many efficient schemes that involve segmentation and asynchronous broadcast of media data, requiring clients to buffer and reorder out-of-order segments efficiently for serial playout. In such schemes, buffer space requirements run to several hundred megabytes and hence require efficient buffer management techniques involving both primary memory and secondary storage: while disk sizes have increased exponentially, access speeds have not kept pace at all. The conversion of out-of-order arrival to in-order playout suggests the use of external memory priority queues, but their content-agnostic nature prevents them from performing well under MoD loads. In this paper, we propose and evaluate a series of simple heuristic schemes which, in simulation studies and in combination with our scalable MoD scheme, achieve significant improvements in storage performance over existing schemes.
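    The paper's contribution is the set of disk-aware heuristics; the baseline they improve on, turning out-of-order segment arrivals into in-order playout with a priority queue, can be sketched directly (segment labels and arrival order below are assumptions):

```python
# Baseline reorder buffer for a segmented media broadcast: segments arrive
# out of order, playout consumes them strictly in order. Not the paper's
# disk-aware heuristics, just the priority-queue baseline they refine.
import heapq

def playout(arrivals):
    heap = []            # min-heap keyed by segment index
    next_needed = 0
    played = []
    for index, payload in arrivals:
        heapq.heappush(heap, (index, payload))
        # Drain every segment that is now contiguous with the playout point.
        while heap and heap[0][0] == next_needed:
            _, data = heapq.heappop(heap)
            played.append(data)
            next_needed += 1
    return played

# Example: segments 0..4 broadcast out of order.
arrivals = [(2, "seg2"), (0, "seg0"), (1, "seg1"), (4, "seg4"), (3, "seg3")]
print(playout(arrivals))   # ['seg0', 'seg1', 'seg2', 'seg3', 'seg4']
```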

  7. Protection of multicast scalable video by secret sharing: simulation results

    NASA Astrophysics Data System (ADS)

    Eskicioglu, Ahmet M.; Dexter, Scott; Delp, Edward J., III

    2003-06-01

    Security is an increasingly important attribute for multimedia applications that require prevention of unauthorized access to copyrighted data. Two approaches have been used to protect scalable video content in distribution: Partial encryption and progressive encryption. Partial encryption provides protection for only selected portions of the video. Progressive encryption allows transcoding with simple packet truncation, and eliminates the need to decrypt the video packets at intermediate network nodes with low complexity. Centralized Key Management with Secret Sharing (CKMSS) is a recent approach in which the group manager assigns unique secret shares to the nodes in the hierarchical key distribution tree. It allows the reconstruction of different keys by communicating different activating shares for the same prepositioned information. Once the group key is established, it is used until a member joins/leaves the multicast group or periodic rekeying occurs. In this paper, we will present simulation results regarding the communication and processing requirements of the CKMSS scheme applied to scalable video. In particular, we have measured the rekey message size and the processing time needed by the server for each join/leave request and periodic rekey event.
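    CKMSS relies on prepositioned shares activated by the group manager; the snippet below only illustrates the elementary secret-sharing idea that a group key is recoverable solely from a complete set of shares, using a generic n-of-n XOR split that is not the CKMSS construction:

```python
# Generic n-of-n XOR secret sharing: the key is recoverable only when every
# share is combined. Illustrative only; CKMSS instead uses prepositioned
# shares activated by messages from the group manager.
import os
from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def split(key, n):
    shares = [os.urandom(len(key)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, key))   # last share completes the XOR
    return shares

def reconstruct(shares):
    return reduce(xor_bytes, shares)

group_key = os.urandom(16)
shares = split(group_key, 4)
assert reconstruct(shares) == group_key
print("reconstructed group key:", reconstruct(shares).hex())
```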

  8. NOA: A Scalable Multi-Parent Clustering Hierarchy for WSNs

    SciTech Connect

    Cree, Johnathan V.; Delgado-Frias, Jose; Hughes, Michael A.; Burghard, Brion J.; Silvers, Kurt L.

    2012-08-10

    NOA is a multi-parent, N-tiered, hierarchical clustering algorithm that provides a scalable, robust and reliable solution to autonomous configuration of large-scale wireless sensor networks. The novel clustering hierarchy's inherent benefits can be utilized by in-network data processing techniques to provide equally robust, reliable and scalable in-network data processing solutions capable of reducing the amount of data sent to sinks. Utilizing a multi-parent framework, NOA reduces the cost of network setup when compared to hierarchical beaconing solutions by removing the expense of r-hop broadcasting (r is the radius of the cluster) needed to build the network and instead passes network topology information among shared children. NOA2, a two-parent clustering hierarchy solution, and NOA3, the three-parent variant, saw up to an 83% and 72% reduction in overhead, respectively, when compared to performing one round of one-parent hierarchical beaconing, as well as 92% and 88% less overhead when compared to one round of two- and three-parent hierarchical beaconing.

  9. Using Swarming Agents for Scalable Security in Large Network Environments

    SciTech Connect

    Crouse, Michael; White, Jacob L.; Fulp, Errin W.; Berenhaut, Kenneth S.; Fink, Glenn A.; Haack, Jereme N.

    2011-09-23

    The difficulty of securing computer infrastructures increases as they grow in size and complexity. Network-based security solutions such as IDS and firewalls cannot scale because of exponentially increasing computational costs inherent in detecting the rapidly growing number of threat signatures. Host-based solutions like virus scanners and IDS suffer similar issues, and these issues are compounded when enterprises try to monitor them in a centralized manner. Swarm-based autonomous agent systems like digital ants and artificial immune systems can provide a scalable security solution for large network environments. The digital ants approach offers a biologically inspired design where each ant in the virtual colony can detect atoms of evidence that may help identify a possible threat. By assembling the atomic evidence from different ant types, the colony may detect the threat. This decentralized approach can require, on average, fewer computational resources than traditional centralized solutions; however, there are limits to its scalability. This paper describes how dividing a large infrastructure into smaller managed enclaves allows the digital ant framework to effectively operate in larger environments. Experimental results will show that using smaller enclaves allows for more consistent distribution of agents and results in faster response times.

  10. Scalable metagenomic taxonomy classification using a reference genome database

    PubMed Central

    Ames, Sasha K.; Hysom, David A.; Gardner, Shea N.; Lloyd, G. Scott; Gokhale, Maya B.; Allen, Jonathan E.

    2013-01-01

    Motivation: Deep metagenomic sequencing of biological samples has the potential to recover otherwise difficult-to-detect microorganisms and accurately characterize biological samples with limited prior knowledge of sample contents. Existing metagenomic taxonomic classification algorithms, however, do not scale well to analyze large metagenomic datasets, and balancing classification accuracy with computational efficiency presents a fundamental challenge. Results: A method is presented to shift computational costs to an off-line computation by creating a taxonomy/genome index that supports scalable metagenomic classification. Scalable performance is demonstrated on real and simulated data to show accurate classification in the presence of novel organisms on samples that include viruses, prokaryotes, fungi and protists. Taxonomic classification of the previously published 150 giga-base Tyrolean Iceman dataset was found to take <20 h on a single node 40 core large memory machine and provide new insights on the metagenomic contents of the sample. Availability: Software was implemented in C++ and is freely available at http://sourceforge.net/projects/lmat Contact: allen99@llnl.gov Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23828782
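    The off-line taxonomy/genome index is the paper's key idea; a toy version of the resulting on-line classification step, mapping read k-mers to taxa through a dictionary index and voting, looks like the sketch below (the reference sequences, taxon names, and k value are assumptions, and this is not the LMAT data structure or scoring):

```python
# Toy k-mer -> taxon index with majority-vote read classification.
# Illustrative only; this is not the LMAT index structure or scoring.
from collections import Counter

K = 8  # assumed k-mer length

references = {  # made-up reference sequences keyed by made-up taxon names
    "taxonA": "ACGTACGTGGCCTTAAGGCCACGTACGT",
    "taxonB": "TTGGCCAATTGGCCAATTGGCCAATTGG",
}

index = {}
for taxon, seq in references.items():
    for i in range(len(seq) - K + 1):
        index.setdefault(seq[i:i + K], set()).add(taxon)

def classify(read):
    votes = Counter()
    for i in range(len(read) - K + 1):
        for taxon in index.get(read[i:i + K], ()):
            votes[taxon] += 1
    return votes.most_common(1)[0][0] if votes else None

print(classify("ACGTACGTGGCCTTAA"))   # -> taxonA
print(classify("TTTTTTTTTTTTTTTT"))   # -> None (no k-mer hits)
```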

  11. TreeVector: scalable, interactive, phylogenetic trees for the web.

    PubMed

    Pethica, Ralph; Barker, Gary; Kovacs, Tim; Gough, Julian

    2010-01-28

    Phylogenetic trees are complex data forms that need to be graphically displayed to be human-readable. Traditional techniques of plotting phylogenetic trees focus on rendering a single static image, but increases in the production of biological data and large-scale analyses demand scalable, browsable, and interactive trees. We introduce TreeVector, a Scalable Vector Graphics- and Java-based method that allows trees to be integrated and viewed seamlessly in standard web browsers with no extra software required, and can be modified and linked using standard web technologies. There are now many bioinformatics servers and databases with a range of dynamic processes and updates to cope with the increasing volume of data. TreeVector is designed as a framework to integrate with these processes and produce user-customized phylogenies automatically. We also address the strengths of phylogenetic trees as part of a linked-in browsing process rather than an end graphic for print. TreeVector is fast and easy to use and is available to download precompiled, but is also open source. It can also be run from the web server listed below or the user's own web server. It has already been deployed on two recognized and widely used database Web sites.

  12. Visual cognition

    PubMed Central

    Cavanagh, Patrick

    2011-01-01

    Visual cognition, high-level vision, mid-level vision and top-down processing all refer to decision-based scene analyses that combine prior knowledge with retinal input to generate representations. The label “visual cognition” is little used at present, but research and experiments on mid- and high-level, inference-based vision have flourished, becoming in the 21st century a significant, if often understated part, of current vision research. How does visual cognition work? What are its moving parts? This paper reviews the origins and architecture of visual cognition and briefly describes some work in the areas of routines, attention, surfaces, objects, and events (motion, causality, and agency). Most vision scientists avoid being too explicit when presenting concepts about visual cognition, having learned that explicit models invite easy criticism. What we see in the literature is ample evidence for visual cognition, but few or only cautious attempts to detail how it might work. This is the great unfinished business of vision research: at some point we will be done with characterizing how the visual system measures the world and we will have to return to the question of how vision constructs models of objects, surfaces, scenes, and events. PMID:21329719

  13. Scalable Approach To Construct Free-Standing and Flexible Carbon Networks for Lithium-Sulfur Battery.

    PubMed

    Li, Mengliu; Wahyudi, Wandi; Kumar, Pushpendra; Wu, Fengyu; Yang, Xiulin; Li, Henan; Li, Lain-Jong; Ming, Jun

    2017-03-08

    Reconstructing carbon nanomaterials (e.g., fullerene, carbon nanotubes (CNTs), and graphene) into multidimensional networks with hierarchical structure is a critical step in exploring their applications. Herein, a sacrificial-template casting method is developed to prepare highly flexible, free-standing carbon films consisting of CNTs, graphene, or both. Their scalable size, ultralight and binder-free characteristics, and tunable processing and properties make them promising for large-scale applications, such as use as interlayers in lithium-sulfur batteries. The capability of holding polysulfides (i.e., suppressing the sulfur diffusion) for the networks made from CNTs, graphene, or their mixture is pronounced, among which CNTs are the best. The diffusion process of polysulfides can be visualized in a specially designed glass tube battery. X-ray photoelectron spectroscopy analysis of discharged electrodes was performed to characterize the species in electrodes. A detailed analysis of the lithium diffusion constant, electrochemical impedance, and elemental distribution of sulfur in electrodes has been performed to further illustrate the differences between carbon interlayers for Li-S batteries. The proposed simple and scalable production of carbon-based networks may facilitate their application in the battery industry, even directly as flexible cathodes. The versatile and reconstructive strategy is extendable to prepare other flexible films and/or membranes for wider applications.

  14. On-line scalable image access for medical remote collaborative meetings

    NASA Astrophysics Data System (ADS)

    Tarando, Sebastian R.; Lucidarme, Olivier; Grenier, Philippe; Fetita, Catalin

    2015-03-01

    The increasing need for remote medical investigation services in the framework of collaborative multidisciplinary meetings (e.g., cancer follow-up) raises the challenge of on-line remote access to (large amounts of) radiologic data in a limited period of time. This paper proposes a scalable compression framework for DICOM images providing low-latency display over low-speed networks. The developed approach relies on removing useless information from images (i.e., regions not related to the patient body) and on exploiting the JPEG2000 standard to achieve progressive quality encoding and access to the data. This mechanism also allows the efficient exploitation of any idle times (corresponding to on-line visual image analysis) to download the remaining data at lossless quality in a way transparent to the user, thus minimizing the perceived latency. Experiments comparing the approach with exchanging uncompressed or JPEG-lossless compressed DICOM data showed the benefit of the proposed approach for collaborative on-line remote diagnosis and follow-up services.

  15. Rocinante, a virtual collaborative visualizer

    SciTech Connect

    McDonald, M.J.; Ice, L.G.

    1996-12-31

    With the goal of improving the ability of people around the world to share the development and use of intelligent systems, Sandia National Laboratories' Intelligent Systems and Robotics Center is developing new Virtual Collaborative Engineering (VCE) and Virtual Collaborative Control (VCC) technologies. A key area of VCE and VCC research is in shared visualization of virtual environments. This paper describes a Virtual Collaborative Visualizer (VCV), named Rocinante, that Sandia developed for VCE and VCC applications. Rocinante allows multiple participants to simultaneously view dynamic geometrically-defined environments. Each viewer can exclude extraneous detail or include additional information in the scene as desired. Shared information can be saved and later replayed in a stand-alone mode. Rocinante automatically scales visualization requirements with computer system capabilities. Models with 30,000 polygons and 4 Megabytes of texture display at 12 to 15 frames per second (fps) on an SGI Onyx and at 3 to 8 fps (without texture) on Indigo 2 Extreme computers. In its networked mode, Rocinante synchronizes its local geometric model with remote simulators and sensory systems by monitoring data transmitted through UDP packets. Rocinante's scalability and performance make it an ideal VCC tool. Users throughout the country can monitor robot motions and the thinking behind their motion planners and simulators.

  16. Reimagining the microscope in the 21st century using the scalable adaptive graphics environment

    PubMed Central

    Mateevitsi, Victor; Patel, Tushar; Leigh, Jason; Levy, Bruce

    2015-01-01

    Background: Whole-slide imaging (WSI), while technologically mature, remains in the early adopter phase of the technology adoption lifecycle. One reason for this situation is that current methods of visualizing and using WSI closely follow long-existing workflows for glass slides. We set out to “reimagine” the digital microscope in the era of cloud computing by combining WSI with the rich collaborative environment of the Scalable Adaptive Graphics Environment (SAGE). SAGE is a cross-platform, open-source visualization and collaboration tool that enables users to access, display and share a variety of data-intensive information, in a variety of resolutions and formats, from multiple sources, on display walls of arbitrary size. Methods: A prototype of a WSI viewer app in the SAGE environment was created. While not full featured, it enabled the testing of our hypothesis that these technologies could be blended together to change the essential nature of how microscopic images are utilized for patient care, medical education, and research. Results: Using the newly created WSI viewer app, demonstration scenarios were created for patient care and medical education. This included a live demonstration of a pathology consultation at the International Academy of Digital Pathology meeting in Boston in November 2014. Conclusions: SAGE is well suited to display, manipulate and collaborate using WSIs, along with other images and data, for a variety of purposes. It goes beyond how glass slides and current WSI viewers are being used today, changing the nature of digital pathology in the process. A fully developed WSI viewer app within SAGE has the potential to encourage the wider adoption of WSI throughout pathology. PMID:26110092

  17. SVGenes: a library for rendering genomic features in scalable vector graphic format.

    PubMed

    Etherington, Graham J; MacLean, Daniel

    2013-08-01

    Drawing genomic features in attractive and informative ways is a key task in visualization of genomics data. Scalable Vector Graphics (SVG) format is a modern and flexible open standard that provides advanced features including modular graphic design, advanced web interactivity and animation within a suitable client. SVGs do not suffer from loss of image quality on re-scaling and provide the ability to edit individual elements of a graphic on the whole object level independent of the whole image. These features make SVG a potentially useful format for the preparation of publication quality figures including genomic objects such as genes or sequencing coverage and for web applications that require rich user-interaction with the graphical elements. SVGenes is a Ruby-language library that uses SVG primitives to render typical genomic glyphs through a simple and flexible Ruby interface. The library implements a simple Page object that spaces and contains horizontal Track objects that in turn style, colour and position features within them. Tracks are the level at which visual information is supplied, providing the full styling capability of the SVG standard. Genomic entities like genes, transcripts and histograms are modelled in Glyph objects that are attached to a track and take advantage of SVG primitives to render the genomic features in a track as any of a selection of defined glyphs. The feature model within SVGenes is simple but flexible and not dependent on particular existing gene feature formats, meaning graphics for any existing datasets can easily be created without need for conversion. The library is provided as a Ruby Gem from https://rubygems.org/gems/bio-svgenes under the MIT license, and open source code is available at https://github.com/danmaclean/bioruby-svgenes also under the MIT License. dan.maclean@tsl.ac.uk.

  18. SVGenes: a library for rendering genomic features in scalable vector graphic format

    PubMed Central

    Etherington, Graham J.; MacLean, Daniel

    2013-01-01

    Motivation: Drawing genomic features in attractive and informative ways is a key task in visualization of genomics data. Scalable Vector Graphics (SVG) format is a modern and flexible open standard that provides advanced features including modular graphic design, advanced web interactivity and animation within a suitable client. SVGs do not suffer from loss of image quality on re-scaling and provide the ability to edit individual elements of a graphic on the whole object level independent of the whole image. These features make SVG a potentially useful format for the preparation of publication quality figures including genomic objects such as genes or sequencing coverage and for web applications that require rich user-interaction with the graphical elements. Results: SVGenes is a Ruby-language library that uses SVG primitives to render typical genomic glyphs through a simple and flexible Ruby interface. The library implements a simple Page object that spaces and contains horizontal Track objects that in turn style, colour and position features within them. Tracks are the level at which visual information is supplied, providing the full styling capability of the SVG standard. Genomic entities like genes, transcripts and histograms are modelled in Glyph objects that are attached to a track and take advantage of SVG primitives to render the genomic features in a track as any of a selection of defined glyphs. The feature model within SVGenes is simple but flexible and not dependent on particular existing gene feature formats, meaning graphics for any existing datasets can easily be created without need for conversion. Availability: The library is provided as a Ruby Gem from https://rubygems.org/gems/bio-svgenes under the MIT license, and open source code is available at https://github.com/danmaclean/bioruby-svgenes also under the MIT License. Contact: dan.maclean@tsl.ac.uk PMID:23749959

  19. Visual Impairment

    MedlinePlus

    ... or head with a baseball or having an automobile or motorcycle accident. Some babies have congenital blindness, ... how well he or she sees at various distances. Visual field test. Ophthalmologists use this test to ...

  20. Visual Confidence.

    PubMed

    Mamassian, Pascal

    2016-10-14

    Visual confidence refers to an observer's ability to judge the accuracy of her perceptual decisions. Even though confidence judgments have been recorded since the early days of psychophysics, only recently have they been recognized as essential for a deeper understanding of visual perception. The reluctance to study visual confidence may have come in part from the difficulty of obtaining convincing experimental evidence in favor of metacognitive abilities rather than just perceptual sensitivity. Some effort has thus been dedicated to offering different experimental paradigms to study visual confidence in humans and nonhuman animals. To understand the origins of confidence judgments, investigators have developed two competing frameworks. The approach based on signal detection theory is popular but fails to account for response times. In contrast, the approach based on accumulation-of-evidence models naturally includes the dynamics of perceptual decisions. These models can explain a range of results, including the apparently paradoxical dissociation between performance and confidence that is sometimes observed.

  1. Scalable graphene production: perspectives and challenges of plasma applications

    NASA Astrophysics Data System (ADS)

    Levchenko, Igor; Ostrikov, Kostya (Ken); Zheng, Jie; Li, Xingguo; Keidar, Michael; B. K. Teo, Kenneth

    2016-05-01

    Graphene, a newly discovered and extensively investigated material, has many unique and extraordinary properties which promise major technological advances in fields ranging from electronics to mechanical engineering and food production. Unfortunately, complex techniques and high production costs hinder commonplace applications. Scaling of existing graphene production techniques to the industrial level without compromising its properties is a current challenge. This article focuses on the perspectives and challenges of scalability, equipment, and technological perspectives of the plasma-based techniques which offer many unique possibilities for the synthesis of graphene and graphene-containing products. The plasma-based processes are amenable for scaling and could also be useful to enhance the controllability of the conventional chemical vapour deposition method and some other techniques, and to ensure a good quality of the produced graphene. We examine the unique features of the plasma-enhanced graphene production approaches, including the techniques based on inductively-coupled and arc discharges, in the context of their potential scaling to mass production following the generic scaling approaches applicable to the existing processes and systems. This work analyses a large amount of the recent literature on graphene production by various techniques and summarizes the results in a tabular form to provide a simple and convenient comparison of several available techniques. Our analysis reveals a significant potential of scalability for plasma-based technologies, based on the scaling-related process characteristics. Among other processes, a greater yield of 1 g × h⁻¹ m⁻² was reached for the arc discharge technology, whereas the other plasma-based techniques show process yields comparable to the neutral-gas based methods. Selected plasma-based techniques show lower energy consumption than in thermal CVD processes, and the ability to produce graphene flakes of various

  2. Scalable graphene production: perspectives and challenges of plasma applications.

    PubMed

    Levchenko, Igor; Ostrikov, Kostya Ken; Zheng, Jie; Li, Xingguo; Keidar, Michael; B K Teo, Kenneth

    2016-05-19

    Graphene, a newly discovered and extensively investigated material, has many unique and extraordinary properties which promise major technological advances in fields ranging from electronics to mechanical engineering and food production. Unfortunately, complex techniques and high production costs hinder commonplace applications. Scaling of existing graphene production techniques to the industrial level without compromising its properties is a current challenge. This article focuses on the perspectives and challenges of scalability, equipment, and technological perspectives of the plasma-based techniques which offer many unique possibilities for the synthesis of graphene and graphene-containing products. The plasma-based processes are amenable for scaling and could also be useful to enhance the controllability of the conventional chemical vapour deposition method and some other techniques, and to ensure a good quality of the produced graphene. We examine the unique features of the plasma-enhanced graphene production approaches, including the techniques based on inductively-coupled and arc discharges, in the context of their potential scaling to mass production following the generic scaling approaches applicable to the existing processes and systems. This work analyses a large amount of the recent literature on graphene production by various techniques and summarizes the results in a tabular form to provide a simple and convenient comparison of several available techniques. Our analysis reveals a significant potential of scalability for plasma-based technologies, based on the scaling-related process characteristics. Among other processes, a greater yield of 1 g × h(-1) m(-2) was reached for the arc discharge technology, whereas the other plasma-based techniques show process yields comparable to the neutral-gas based methods. Selected plasma-based techniques show lower energy consumption than in thermal CVD processes, and the ability to produce graphene flakes of

  3. Visual cognition

    SciTech Connect

    Pinker, S.

    1985-01-01

    This book consists of essays covering issues in visual cognition, presenting experimental techniques from cognitive psychology, methods of modeling cognitive processes on computers from artificial intelligence, and methods of studying brain organization from neuropsychology. Topics considered include: parts of recognition; visual routines; the upward direction; mental rotation and the discrimination of left and right turns in maps; individual differences in mental imagery; computational analysis and the neurological basis of mental imagery; and componential analysis.

  4. Earthscape, a Multi-Purpose Interactive 3d Globe Viewer for Hybrid Data Visualization and Analysis

    NASA Astrophysics Data System (ADS)

    Sarthou, A.; Mas, S.; Jacquin, M.; Moreno, N.; Salamon, A.

    2015-08-01

    The hybrid visualization and interaction tool EarthScape is presented here. The software is able to simultaneously display LiDAR point clouds, draped videos with moving footprint, volume scientific data (using volume rendering, isosurface and slice plane), raster data such as still satellite images, vector data and 3D models such as buildings or vehicles. The application runs on touch screen devices such as tablets. The software is based on open source libraries, such as OpenSceneGraph, osgEarth and OpenCV, and shader programming is used to implement volume rendering of scientific data. The next goal of EarthScape is to perform data analysis using ENVI Services Engine, a cloud data analysis solution. EarthScape is also designed to be a client of Jagwire, which provides multisource geo-referenced video streams. When all these components are included, EarthScape will be a multi-purpose platform that provides data analysis, hybrid visualization and complex interactions at the same time. The software is available on demand for free at france@exelisvis.com.

  5. In-Situ Visualization Experiments with ParaView Cinema in RAGE

    SciTech Connect

    Kares, Robert John

    2015-10-15

    A previous paper described some numerical experiments performed using the ParaView/Catalyst in-situ visualization infrastructure deployed in the Los Alamos RAGE radiation-hydrodynamics code to produce images from a running large-scale 3D ICF simulation. One challenge of the in-situ approach apparent in these experiments was the difficulty of choosing parameters like isosurface values for the visualizations to be produced from the running simulation without the benefit of prior knowledge of the simulation results, and the resultant cost of recomputing in-situ generated images when parameters are chosen suboptimally. A proposed method of addressing this difficulty is to simply render multiple images at runtime with a range of possible parameter values to produce a large database of images and to provide the user with a tool for managing the resulting database of imagery. Recently, ParaView/Catalyst has been extended to include such a capability via the so-called Cinema framework. Here I describe some initial experiments with the first delivery of Cinema and make some recommendations for future extensions of Cinema’s capabilities.
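    The Cinema workflow described here amounts to sweeping visualization parameters at run time and recording every rendered image in a browsable database. A minimal stand-in, with a placeholder render function since the actual ParaView/Catalyst pipeline is outside the scope of a snippet (file names and parameter ranges are assumptions):

```python
# Minimal parameter-sweep image database in the spirit of Cinema: one image
# per (time step, isovalue) combination plus an index for later browsing.
# render_isosurface() is a placeholder, not the ParaView/Catalyst API.
import csv
import itertools
import os

def render_isosurface(timestep, isovalue, path):
    """Placeholder: a real pipeline would extract and rasterize the isosurface."""
    with open(path, "wb") as f:
        f.write(b"")                      # stand-in for the PNG bytes

timesteps = range(0, 100, 25)
isovalues = [0.1, 0.3, 0.5, 0.7, 0.9]     # swept instead of chosen a priori
os.makedirs("cinema_db", exist_ok=True)

with open(os.path.join("cinema_db", "index.csv"), "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestep", "isovalue", "image"])
    for t, iso in itertools.product(timesteps, isovalues):
        image = f"t{t:04d}_iso{iso:.2f}.png"
        render_isosurface(t, iso, os.path.join("cinema_db", image))
        writer.writerow([t, iso, image])
```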

  6. Visual search, visual streams, and visual architectures.

    PubMed

    Green, M

    1991-10-01

    Most psychological, physiological, and computational models of early vision suggest that retinal information is divided into a parallel set of feature modules. The dominant theories of visual search assume that these modules form a "blackboard" architecture: a set of independent representations that communicate only through a central processor. A review of research shows that blackboard-based theories, such as feature-integration theory, cannot easily explain the existing data. The experimental evidence is more consistent with a "network" architecture, which stresses that: (1) feature modules are directly connected to one another, (2) features and their locations are represented together, (3) feature detection and integration are not distinct processing stages, and (4) no executive control process, such as focal attention, is needed to integrate features. Attention is not a spotlight that synthesizes objects from raw features. Instead, it is better to conceptualize attention as an aperture which masks irrelevant visual information.

  7. Scalable Integrated Region-Based Image Retrieval Using IRM and Statistical Clustering.

    ERIC Educational Resources Information Center

    Wang, James Z.; Du, Yanping

    Statistical clustering is critical in designing scalable image retrieval systems. This paper presents a scalable algorithm for indexing and retrieving images based on region segmentation. The method uses statistical clustering on region features and IRM (Integrated Region Matching), a measure developed to evaluate overall similarity between images…

  8. Final Report. Center for Scalable Application Development Software

    SciTech Connect

    Mellor-Crummey, John

    2014-10-26

    The Center for Scalable Application Development Software (CScADS) was established as a partnership between Rice University, Argonne National Laboratory, University of California Berkeley, University of Tennessee – Knoxville, and University of Wisconsin – Madison. CScADS pursued an integrated set of activities with the aim of increasing the productivity of DOE computational scientists by catalyzing the development of systems software, libraries, compilers, and tools for leadership computing platforms. Principal Center activities were workshops to engage the research community in the challenges of leadership computing, research and development of open-source software, and work with computational scientists to help them develop codes for leadership computing platforms. This final report summarizes CScADS activities at Rice University in these areas.

  9. A scalable control plane for optical-packet-switched networks

    NASA Astrophysics Data System (ADS)

    Kang, J.; Reed, M. J.

    2005-02-01

    This paper describes the design considerations and architecture of a Generalized Multi-Protocol Label Switching (GMPLS)-based scalable control plane that we are prototyping for optical packet switched (OPS) networks. Functional components of the control plane include a user network interface (UNI), optical label coding, a multi-layer routing/traffic-engineering algorithm, and an integrated signaling protocol. Initial implementation and experimentation have demonstrated the feasibility of our prototype as a testbed for various control schemes for OPS networks. One key element of the proposed architecture is the use of external MPLS labeling controlled by the UNI. This proposal reduces the header-processing load on the OPS domain while having little impact on the MPLS domain.

  10. Scalable Collaborative Infrastructure for a Learning Healthcare System (SCILHS): architecture.

    PubMed

    Mandl, Kenneth D; Kohane, Isaac S; McFadden, Douglas; Weber, Griffin M; Natter, Marc; Mandel, Joshua; Schneeweiss, Sebastian; Weiler, Sarah; Klann, Jeffrey G; Bickel, Jonathan; Adams, William G; Ge, Yaorong; Zhou, Xiaobo; Perkins, James; Marsolo, Keith; Bernstam, Elmer; Showalter, John; Quarshie, Alexander; Ofili, Elizabeth; Hripcsak, George; Murphy, Shawn N

    2014-01-01

    We describe the architecture of the Patient Centered Outcomes Research Institute (PCORI) funded Scalable Collaborative Infrastructure for a Learning Healthcare System (SCILHS, http://www.SCILHS.org) clinical data research network, which leverages the $48 billion federal investment in health information technology (IT) to enable a queryable semantic data model across 10 health systems covering more than 8 million patients, plugging universally into the point of care, generating evidence and discovery, and thereby enabling clinician and patient participation in research during the patient encounter. Central to the success of SCILHS is development of innovative 'apps' to improve PCOR research methods and capacitate point of care functions such as consent, enrollment, randomization, and outreach for patient-reported outcomes. SCILHS adapts and extends an existing national research network formed on an advanced IT infrastructure built with open source, free, modular components.

  11. Scalable Lunar Surface Networks and Adaptive Orbit Access

    NASA Technical Reports Server (NTRS)

    Wang, Xudong

    2015-01-01

    Teranovi Technologies, Inc., has developed innovative network architecture, protocols, and algorithms for both lunar surface and orbit access networks. A key component of the overall architecture is a medium access control (MAC) protocol that includes a novel mechanism of overlaying time division multiple access (TDMA) and carrier sense multiple access with collision avoidance (CSMA/CA), ensuring scalable throughput and quality of service. The new MAC protocol is compatible with legacy Institute of Electrical and Electronics Engineers (IEEE) 802.11 networks. Advanced features include efficient power management, adaptive channel-width adjustment, and error-control capability. A hybrid routing protocol combines the advantages of ad hoc on-demand distance vector (AODV) routing and disruption/delay-tolerant network (DTN) routing. Performance is significantly better than AODV or DTN and will be particularly effective for wireless networks with intermittent links, such as lunar and planetary surface networks and orbit access networks.

  12. Scalable lithography from Natural DNA Patterns via polyacrylamide gel

    NASA Astrophysics Data System (ADS)

    Qu, Jiehao; Hou, Xianliang; Fan, Wanchao; Xi, Guanghui; Diao, Hongyan; Liu, Xiangdon

    2015-12-01

    A facile strategy for fabricating scalable stamps has been developed using cross-linked polyacrylamide gel (PAMG) that controllably and precisely shrinks and swells with water content. Aligned patterns of natural DNA molecules were prepared by evaporative self-assembly on a PMMA substrate, and were transferred to unsaturated polyester resin (UPR) to form a negative replica. The negative was used to pattern the linear structures onto the surface of water-swollen PAMG, and the pattern sizes on the PAMG stamp were customized by adjusting the water content of the PAMG. As a result, consistent reproduction of DNA patterns could be achieved with feature sizes that can be controlled over the range of 40%-200% of the original pattern dimensions. This methodology is novel and may pave a new avenue for manufacturing stamp-based functional nanostructures in a simple and cost-effective manner on a large scale.

  13. A Scalable Nonuniform Pointer Analysis for Embedded Programs

    NASA Technical Reports Server (NTRS)

    Venet, Arnaud

    2004-01-01

    In this paper we present a scalable pointer analysis for embedded applications that is able to distinguish between instances of recursively defined data structures and elements of arrays. The main contribution consists of an efficient yet precise algorithm that can handle multithreaded programs. We first perform an inexpensive flow-sensitive analysis of each function in the program that generates semantic equations describing the effect of the function on the memory graph. These equations bear numerical constraints that describe nonuniform points-to relationships. We then iteratively solve these equations in order to obtain an abstract storage graph that describes the shape of data structures at every point of the program for all possible thread interleavings. We bring experimental evidence that this approach is tractable and precise for real-size embedded applications.
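
    The core of such an analysis is a fixed-point iteration over set equations. The sketch below is far simpler than the paper's nonuniform, flow-sensitive analysis: it solves plain Andersen-style points-to constraints, but it does show how repeatedly applying the equations until nothing changes yields an abstract points-to (storage) graph. The constraint forms and the toy program are illustrative assumptions, not the paper's actual equation language.

      def solve_points_to(constraints):
          """Iterate simple set constraints to a fixed point (Andersen-style).

          Constraint forms:
              ("addr",  p, x): p may point to x        # p = &x
              ("copy",  p, q): pts(p) >= pts(q)        # p = q
              ("load",  p, q): pts(p) >= pts(*q)       # p = *q
              ("store", p, q): pts(*p) >= pts(q)       # *p = q
          """
          pts = {}

          def get(v):
              return pts.setdefault(v, set())

          for kind, a, b in constraints:
              if kind == "addr":
                  get(a).add(b)

          changed = True
          while changed:                      # keep applying rules until stable
              changed = False
              for kind, a, b in constraints:
                  if kind == "copy":
                      targets = [(a, get(b))]
                  elif kind == "load":
                      targets = [(a, get(t)) for t in list(get(b))]
                  elif kind == "store":
                      targets = [(t, get(b)) for t in list(get(a))]
                  else:
                      continue
                  for dst, src in targets:
                      add = src - get(dst)
                      if add:
                          get(dst).update(add)
                          changed = True
          return pts


      if __name__ == "__main__":
          # toy program: p = &x; q = p; tmp = &y; *q = tmp; r = *p
          constraints = [("addr", "p", "x"), ("copy", "q", "p"),
                         ("addr", "tmp", "y"), ("store", "q", "tmp"),
                         ("load", "r", "p")]
          for var, targets in sorted(solve_points_to(constraints).items()):
              print(var, "->", sorted(targets))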

  14. Final Report: Center for Programming Models for Scalable Parallel Computing

    SciTech Connect

    Mellor-Crummey, John

    2011-09-13

    As part of the Center for Programming Models for Scalable Parallel Computing, Rice University collaborated with project partners in the design, development and deployment of language, compiler, and runtime support for parallel programming models to support application development for the “leadership-class” computer systems at DOE national laboratories. Work over the course of this project has focused on the design, implementation, and evaluation of a second-generation version of Coarray Fortran. Research and development efforts of the project have focused on the CAF 2.0 language, compiler, runtime system, and supporting infrastructure. This has involved working with the teams that provide infrastructure for CAF that we rely on, implementing new language and runtime features, producing an open source compiler that enabled us to evaluate our ideas, and evaluating our design and implementation through the use of benchmarks. The report details the research, development, findings, and conclusions from this work.

  15. Neuromorphic adaptive plastic scalable electronics: analog learning systems.

    PubMed

    Srinivasa, Narayan; Cruz-Albrecht, Jose

    2012-01-01

    Decades of research to build programmable intelligent machines have demonstrated limited utility in complex, real-world environments. Comparing their performance with biological systems, these machines are less efficient by a factor of 1 million to 1 billion in complex, real-world environments. The Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program is a multifaceted Defense Advanced Research Projects Agency (DARPA) project that seeks to break the programmable machine paradigm and define a new path for creating useful, intelligent machines. Since real-world systems exhibit infinite combinatorial complexity, electronic neuromorphic machine technology would be preferable in a host of applications, but useful and practical implementations still do not exist. HRL Laboratories LLC has embarked on addressing these challenges, and, in this article, we provide an overview of our project and progress made thus far.

  16. Scalable sensing electronics towards a motion capture suit

    NASA Astrophysics Data System (ADS)

    Xu, Daniel; Gisby, Todd A.; Xie, Shane; Anderson, Iain A.

    2013-04-01

    Being able to accurately record body motion allows complex movements to be characterised and studied. This is especially important in the film and sports coaching industries. Unfortunately, the human body has over 600 skeletal muscles, giving rise to multiple degrees of freedom. In order to accurately capture motion such as hand gestures, elbow or knee flexion and extension, vast numbers of sensors are required. Dielectric elastomer (DE) sensors are an emerging class of electroactive polymer (EAP) devices that are soft, lightweight, and compliant. These characteristics are ideal for a motion capture suit. One challenge is to design sensing electronics that can simultaneously measure multiple sensors. This paper describes a scalable capacitive sensing device that can measure up to 8 different sensors with an update rate of 20 Hz.

  17. A Scalable Framework to Detect Personal Health Mentions on Twitter

    PubMed Central

    Fabbri, Daniel; Rosenbloom, S Trent

    2015-01-01

    Background Biomedical research has traditionally been conducted via surveys and the analysis of medical records. However, these resources are limited in their content, such that non-traditional domains (eg, online forums and social media) have an opportunity to supplement the view of an individual’s health. Objective The objective of this study was to develop a scalable framework to detect personal health status mentions on Twitter and assess the extent to which such information is disclosed. Methods We collected more than 250 million tweets via the Twitter streaming API over a 2-month period in 2014. The corpus was filtered down to approximately 250,000 tweets, stratified across 34 high-impact health issues, based on guidance from the Medical Expenditure Panel Survey. We created a labeled corpus of several thousand tweets via a survey, administered over Amazon Mechanical Turk, that documents when terms correspond to mentions of personal health issues or an alternative (eg, a metaphor). We engineered a scalable classifier for personal health mentions via feature selection and assessed its potential over the health issues. We further investigated the utility of the tweets by determining the extent to which Twitter users disclose personal health status. Results Our investigation yielded several notable findings. First, we find that tweets from a small subset of the health issues can train a scalable classifier to detect health mentions. Specifically, training on 2000 tweets from four health issues (cancer, depression, hypertension, and leukemia) yielded a classifier with precision of 0.77 on all 34 health issues. Second, Twitter users disclosed personal health status for all health issues. Notably, personal health status was disclosed over 50% of the time for 11 out of 34 (33%) investigated health issues. Third, the disclosure rate was dependent on the health issue in a statistically significant manner (P<.001). For instance, more than 80% of the tweets about
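
    The classification step described above can be prototyped in a few lines. The sketch below is a generic TF-IDF plus logistic-regression pipeline over a tiny inline corpus; it is not the paper's actual feature set, labels, or Mechanical Turk data, and only illustrates training on a couple of health issues and applying the model to tweets about a different issue.

      # label 1 = personal health status mention, 0 = other usage (metaphor, news, ...)
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import precision_score
      from sklearn.pipeline import make_pipeline

      train_texts = [
          "just got diagnosed with depression, starting therapy next week",
          "my doctor says my hypertension is finally under control",
          "this traffic gives me hypertension lol",
          "new study links depression to sleep habits",
      ]
      train_labels = [1, 1, 0, 0]

      # evaluate on a health issue the model never saw during training
      test_texts = [
          "my mom starts chemo for her leukemia tomorrow",
          "reading about leukemia research funding today",
      ]
      test_labels = [1, 0]

      model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                            LogisticRegression(max_iter=1000))
      model.fit(train_texts, train_labels)

      predictions = model.predict(test_texts)
      print("predictions:", predictions)
      print("precision:", precision_score(test_labels, predictions, zero_division=0))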

  18. Fast and fully-scalable synthesis of reduced graphene oxide

    NASA Astrophysics Data System (ADS)

    Abdolhosseinzadeh, Sina; Asgharzadeh, Hamed; Seop Kim, Hyoung

    2015-05-01

    Exfoliation of graphite is a promising approach for large-scale production of graphene. Oxidation of graphite effectively facilitates the exfoliation process, yet necessitates several lengthy washing and reduction processes to convert the exfoliated graphite oxide (graphene oxide, GO) to reduced graphene oxide (RGO). Although filtration, centrifugation and dialysis have been frequently used in the washing stage, none of them is favorable for large-scale production. Here, we report the synthesis of RGO by sonication-assisted oxidation of graphite in a solution of potassium permanganate and concentrated sulfuric acid followed by reduction with ascorbic acid prior to any washing processes. GO loses its hydrophilicity during the reduction stage, which facilitates the washing step and reduces the time required for production of RGO. Furthermore, simultaneous oxidation and exfoliation significantly enhance the yield of few-layer GO. We hope this one-pot and fully-scalable protocol paves the way toward out-of-lab applications of graphene.

  19. A Modular, Scalable, Extensible, and Transparent Optical Packet Buffer

    NASA Astrophysics Data System (ADS)

    Small, Benjamin A.; Shacham, Assaf; Bergman, Keren

    2007-04-01

    We introduce a novel optical packet switching buffer architecture that is composed of multiple building-block modules, allowing for a large degree of scalability. The buffer supports independent and simultaneous read and write processes without packet rejection or misordering and can be considered a fully functional packet buffer. It can easily be programmed to support two prioritization schemes: first-in first-out (FIFO) and last-in first-out (LIFO). Because the system leverages semiconductor optical amplifiers as switching elements, wideband packets can be routed transparently. The operation of the system is discussed with illustrative packet sequences, which are then verified on an actual implementation composed of conventional fiber-optic componentry.

  20. Scalable load-balance measurement for SPMD codes

    SciTech Connect

    Gamblin, T; de Supinski, B R; Schulz, M; Fowler, R; Reed, D

    2008-08-05

    Good load balance is crucial on very large parallel systems, but the most sophisticated algorithms introduce dynamic imbalances through adaptation in domain decomposition or use of adaptive solvers. To observe and diagnose imbalance, developers need system-wide, temporally-ordered measurements from full-scale runs. This potentially requires data collection from multiple code regions on all processors over the entire execution. Doing this instrumentation naively can, in combination with the application itself, exceed available I/O bandwidth and storage capacity, and can induce severe behavioral perturbations. We present and evaluate a novel technique for scalable, low-error load balance measurement. This uses a parallel wavelet transform and other parallel encoding methods. We show that our technique collects and reconstructs system-wide measurements with low error. Compression time scales sublinearly with system size and data volume is several orders of magnitude smaller than the raw data. The overhead is low enough for online use in a production environment.
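
    The compression idea is easy to illustrate on a single process's load trace: transform it with a wavelet, keep only the largest coefficients, and reconstruct. The sketch below uses a plain serial Haar transform and a synthetic trace; the paper's contribution, the parallel transform across all processes, is not shown, and the 5% retention fraction is an arbitrary illustrative choice.

      import numpy as np


      def haar_forward(x):
          """Full multi-level orthonormal Haar transform (length must be a power of two)."""
          x = x.astype(float).copy()
          n = x.size
          while n > 1:
              half = n // 2
              even, odd = x[:n:2].copy(), x[1:n:2].copy()
              x[:half] = (even + odd) / np.sqrt(2.0)   # approximation coefficients
              x[half:n] = (even - odd) / np.sqrt(2.0)  # detail coefficients
              n = half
          return x


      def haar_inverse(c):
          c = c.astype(float).copy()
          n = 1
          while n < c.size:
              half, n = n, 2 * n
              approx, detail = c[:half].copy(), c[half:n].copy()
              c[0:n:2] = (approx + detail) / np.sqrt(2.0)
              c[1:n:2] = (approx - detail) / np.sqrt(2.0)
          return c


      def compress(trace, keep_fraction=0.05):
          """Keep only the largest-magnitude coefficients; a real tool stores them sparsely."""
          coeffs = haar_forward(trace)
          k = max(1, int(keep_fraction * coeffs.size))
          threshold = np.sort(np.abs(coeffs))[-k]
          return np.where(np.abs(coeffs) >= threshold, coeffs, 0.0)


      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          t = np.arange(1024)
          trace = np.sin(t / 64.0) + 0.05 * rng.standard_normal(t.size)  # smooth load + noise
          trace[300:310] += 2.0                                          # a transient imbalance
          sparse = compress(trace)
          error = np.sqrt(np.mean((trace - haar_inverse(sparse)) ** 2))
          print(f"kept {np.count_nonzero(sparse)} of {trace.size} coefficients, rmse = {error:.3f}")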

  1. Scalable antifouling reverse osmosis membranes utilizing perfluorophenyl azide photochemistry.

    PubMed

    McVerry, Brian T; Wong, Mavis C Y; Marsh, Kristofer L; Temple, James A T; Marambio-Jones, Catalina; Hoek, Eric M V; Kaner, Richard B

    2014-09-01

    We present a method to produce anti-fouling reverse osmosis (RO) membranes that maintains the process and scalability of current RO membrane manufacturing. Utilizing perfluorophenyl azide (PFPA) photochemistry, commercial reverse osmosis membranes were dipped into an aqueous solution containing PFPA-terminated poly(ethyleneglycol) species and then exposed to ultraviolet light under ambient conditions, a procedure that can easily be adapted to roll-to-roll processing. Successful covalent modification of commercial reverse osmosis membranes was confirmed with attenuated total reflectance infrared spectroscopy and contact angle measurements. By employing X-ray photoelectron spectroscopy, it was determined that PFPAs undergo UV-generated nitrene addition and bind to the membrane through an aziridine linkage. After modification with the PFPA-PEG derivatives, the reverse osmosis membranes exhibit high fouling-resistance. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Image and geometry processing with Oriented and Scalable Map.

    PubMed

    Hua, Hao

    2016-05-01

    We turn the Self-organizing Map (SOM) into an Oriented and Scalable Map (OS-Map) by generalizing the neighborhood function and the winner selection. The homogeneous Gaussian neighborhood function is replaced with the matrix exponential. Thus we can specify the orientation either in the map space or in the data space. Moreover, we associate the map's global scale with the locality of winner selection. Our model is suited for a number of graphical applications such as texture/image synthesis, surface parameterization, and solid texture synthesis. OS-Map is more generic and versatile than the task-specific algorithms for these applications. Our work reveals the overlooked strength of SOMs in processing images and geometries. Copyright © 2016 Elsevier Ltd. All rights reserved.
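
    A rough picture of how orientation and scale can enter the update: the sketch below replaces a SOM's isotropic Gaussian neighborhood with a quadratic form given by a symmetric positive-definite matrix, which is a simplified stand-in for (not a faithful reproduction of) OS-Map's matrix-exponential formulation. The grid size, learning rate, and matrix M are illustrative choices.

      import numpy as np


      def oriented_som_step(weights, grid, x, lr, M):
          """One training step with an oriented (matrix-shaped) neighborhood.

          weights : (H*W, D) codebook vectors
          grid    : (H*W, 2) map coordinates of each node
          x       : (D,) input sample
          M       : (2, 2) symmetric positive-definite matrix; its eigenvectors set
                    the neighborhood's orientation and its eigenvalues set its scale
          """
          winner = np.argmin(np.linalg.norm(weights - x, axis=1))     # winner selection
          d = grid - grid[winner]                                     # offsets in map space
          h = np.exp(-np.einsum("ni,ij,nj->n", d, M, d))              # oriented neighborhood
          weights += lr * h[:, None] * (x - weights)


      if __name__ == "__main__":
          rng = np.random.default_rng(1)
          H, W, D = 16, 16, 3
          grid = np.stack(np.meshgrid(np.arange(H), np.arange(W), indexing="ij"),
                          axis=-1).reshape(-1, 2).astype(float)
          weights = rng.random((H * W, D))
          M = np.array([[0.02, 0.0], [0.0, 0.2]])   # neighborhood elongated along one map axis
          for x in rng.random((2000, D)):
              oriented_som_step(weights, grid, x, lr=0.05, M=M)
          print("trained codebook shape:", weights.shape)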

  3. A scalable correlator for multichannel diffuse correlation spectroscopy

    NASA Astrophysics Data System (ADS)

    Stapels, Christopher J.; Kolodziejski, Noah J.; McAdams, Daniel; Podolsky, Matthew J.; Fernandez, Daniel E.; Farkas, Dana; Christian, James F.

    2016-03-01

    Diffuse correlation spectroscopy (DCS) is a technique which enables powerful and robust non-invasive optical studies of tissue micro-circulation and vascular blood flow. The technique amounts to autocorrelation analysis of coherent photons after their migration through moving scatterers and subsequent collection by single-mode optical fibers. A primary cost driver of DCS instruments is the commercial hardware-based correlator, which limits the proliferation of multi-channel instruments for validating perfusion analysis as a clinical diagnostic metric. We present the development of a low-cost scalable correlator enabled by microchip-based time-tagging, and a software-based multi-tau data analysis method. We will discuss the capabilities of the instrument as well as the implementation and validation of 2- and 8-channel systems built for live animal and pre-clinical settings.
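
    The software multi-tau idea can be sketched directly: correlate binned photon counts at a handful of lags, then repeatedly coarsen the bins so the lag grid grows logarithmically while the number of stored sums stays small. The sketch below is a minimal serial version on synthetic data; a real correlator updates its sums incrementally as time tags arrive, and normalization conventions and lag layouts vary between implementations.

      import numpy as np


      def multi_tau_g2(counts, dt, lags_per_level=8, levels=6):
          """Normalized intensity autocorrelation g2(tau) on a multi-tau lag grid."""
          taus, g2 = [], []
          x = counts.astype(float)
          bin_width = dt
          for level in range(levels):
              start = 1 if level == 0 else lags_per_level // 2
              for k in range(start, lags_per_level):
                  if x.size <= k + 1:
                      break
                  a, b = x[:-k], x[k:]
                  denominator = a.mean() * b.mean()
                  if denominator > 0:
                      taus.append(k * bin_width)
                      g2.append((a * b).mean() / denominator)
              n = (x.size // 2) * 2
              x = x[:n].reshape(-1, 2).sum(axis=1)   # coarsen bins for the next level
              bin_width *= 2
          return np.array(taus), np.array(g2)


      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          n = 100_000
          intensity = np.zeros(n)
          for i in range(1, n):                      # slowly varying synthetic intensity
              intensity[i] = 0.999 * intensity[i - 1] + rng.standard_normal()
          counts = rng.poisson(5.0 + intensity - intensity.min())
          taus, g2 = multi_tau_g2(counts, dt=1e-6)
          print("first lags (s):", taus[:4], "g2:", np.round(g2[:4], 3))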

  4. Photonic Architecture for Scalable Quantum Information Processing in Diamond

    NASA Astrophysics Data System (ADS)

    Nemoto, Kae; Trupke, Michael; Devitt, Simon J.; Stephens, Ashley M.; Scharfenberger, Burkhard; Buczak, Kathrin; Nöbauer, Tobias; Everitt, Mark S.; Schmiedmayer, Jörg; Munro, William J.

    2014-07-01

    Physics and information are intimately connected, and the ultimate information processing devices will be those that harness the principles of quantum mechanics. Many physical systems have been identified as candidates for quantum information processing, but none of them are immune from errors. The challenge remains to find a path from the experiments of today to a reliable and scalable quantum computer. Here, we develop an architecture based on a simple module comprising an optical cavity containing a single negatively charged nitrogen vacancy center in diamond. Modules are connected by photons propagating in a fiber-optical network and collectively used to generate a topological cluster state, a robust substrate for quantum information processing. In principle, all processes in the architecture can be deterministic, but current limitations lead to processes that are probabilistic but heralded. We find that the architecture enables large-scale quantum information processing with existing technology.

  5. Scalability and Parallelization of Monte-Carlo Tree Search

    NASA Astrophysics Data System (ADS)

    Bourki, Amine; Chaslot, Guillaume; Coulm, Matthieu; Danjean, Vincent; Doghmen, Hassen; Hoock, Jean-Baptiste; Hérault, Thomas; Rimmel, Arpad; Teytaud, Fabien; Teytaud, Olivier; Vayssière, Paul; Yu, Ziqin

    Monte-Carlo Tree Search is now a well established algorithm, in games and beyond. We analyze its scalability, and in particular its limitations and the implications in terms of parallelization. We focus on our Go program MoGo and our Havannah program Shakti. We use multicore machines and message-passing machines. For both games and on both types of machines we achieve adequate efficiency for the parallel version. However, in spite of promising results in self-play, there are situations for which increasing the time per move does not help, so parallelization alone does not solve every problem. Nonetheless, for problems where the Monte-Carlo part is less biased than in the game of Go, parallelization should be quite efficient, even without shared memory.
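
    One standard scheme in this setting is root parallelization: run independent searches with different random seeds and merge their root statistics before picking a move. The sketch below applies that idea to flat Monte-Carlo evaluation of a toy Nim game rather than to full MCTS on Go or Havannah, so it only illustrates the merge step; the game, playout budget, and worker count are illustrative.

      import multiprocessing as mp
      import random

      TAKE = (1, 2, 3)   # toy Nim: remove 1-3 stones, taking the last stone wins


      def random_playout(stones, my_turn):
          """Play random moves to the end; return True if the root player wins."""
          while stones > 0:
              stones -= random.choice([t for t in TAKE if t <= stones])
              if stones == 0:
                  return my_turn
              my_turn = not my_turn
          return not my_turn


      def search_worker(args):
          """Independent flat Monte-Carlo search; returns per-move win/visit counts."""
          stones, playouts, seed = args
          random.seed(seed)
          wins = {t: 0 for t in TAKE if t <= stones}
          visits = {t: 0 for t in wins}
          for _ in range(playouts):
              move = random.choice(list(wins))
              wins[move] += random_playout(stones - move, my_turn=False)
              visits[move] += 1
          return wins, visits


      def best_move_root_parallel(stones=21, playouts=20_000, workers=4):
          """Root parallelization: merge the root statistics of independent searches."""
          with mp.Pool(workers) as pool:
              results = pool.map(search_worker,
                                 [(stones, playouts // workers, seed) for seed in range(workers)])
          moves = results[0][0].keys()
          wins = {m: sum(r[0][m] for r in results) for m in moves}
          visits = {m: sum(r[1][m] for r in results) for m in moves}
          return max(moves, key=lambda m: wins[m] / max(1, visits[m]))


      if __name__ == "__main__":
          print("suggested move:", best_move_root_parallel())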

  6. Scalable Collaborative Infrastructure for a Learning Healthcare System (SCILHS): Architecture

    PubMed Central

    Mandl, Kenneth D; Kohane, Isaac S; McFadden, Douglas; Weber, Griffin M; Natter, Marc; Mandel, Joshua; Schneeweiss, Sebastian; Weiler, Sarah; Klann, Jeffrey G; Bickel, Jonathan; Adams, William G; Ge, Yaorong; Zhou, Xiaobo; Perkins, James; Marsolo, Keith; Bernstam, Elmer; Showalter, John; Quarshie, Alexander; Ofili, Elizabeth; Hripcsak, George; Murphy, Shawn N

    2014-01-01

    We describe the architecture of the Patient Centered Outcomes Research Institute (PCORI) funded Scalable Collaborative Infrastructure for a Learning Healthcare System (SCILHS, http://www.SCILHS.org) clinical data research network, which leverages the $48 billion federal investment in health information technology (IT) to enable a queryable semantic data model across 10 health systems covering more than 8 million patients, plugging universally into the point of care, generating evidence and discovery, and thereby enabling clinician and patient participation in research during the patient encounter. Central to the success of SCILHS is development of innovative ‘apps’ to improve PCOR research methods and capacitate point of care functions such as consent, enrollment, randomization, and outreach for patient-reported outcomes. SCILHS adapts and extends an existing national research network formed on an advanced IT infrastructure built with open source, free, modular components. PMID:24821734

  7. A Scalable Implementation of Van der Waals Density Functionals

    NASA Astrophysics Data System (ADS)

    Wu, Jun; Gygi, Francois

    2010-03-01

    Recently developed Van der Waals density functionals [1] offer the promise to account for weak intermolecular interactions that are not described accurately by local exchange-correlation density functionals. In spite of recent progress [2], the computational cost of such calculations remains high. We present a scalable parallel implementation of the functional proposed by Dion et al. [1]. The method is implemented in the Qbox first-principles simulation code (http://eslab.ucdavis.edu/software/qbox). Application to large molecular systems will be presented. [1] M. Dion et al., Phys. Rev. Lett. 92, 246401 (2004). [2] G. Roman-Perez and J. M. Soler, Phys. Rev. Lett. 103, 096102 (2009).

  8. LVFS: A Scalable Petabyte/Exabyte Data Storage System

    NASA Astrophysics Data System (ADS)

    Golpayegani, N.; Halem, M.; Masuoka, E. J.; Ye, G.; Devine, N. K.

    2013-12-01

    Managing petabytes of data with hundreds of millions of files is the first step necessary towards an effective big data computing and collaboration environment in a distributed system. We describe here the MODAPS LAADS Virtual File System (LVFS), a new storage architecture which replaces the previous MODAPS operational Level 1 Land Atmosphere Archive Distribution System (LAADS) NFS based approach to storing and distributing datasets from several instruments, such as MODIS, MERIS, and VIIRS. LAADS is responsible for the distribution of over 4 petabytes of data and over 300 million files across more than 500 disks. We present here the first LVFS big data comparative performance results and new capabilities not previously possible with the LAADS system. We consider two aspects in addressing inefficiencies of massive scales of data. First, is dealing in a reliable and resilient manner with the volume and quantity of files in such a dataset, and, second, minimizing the discovery and lookup times for accessing files in such large datasets. There are several popular file systems that successfully deal with the first aspect of the problem. Their solution, in general, is through distribution, replication, and parallelism of the storage architecture. The Hadoop Distributed File System (HDFS), Parallel Virtual File System (PVFS), and Lustre are examples of such file systems that deal with petabyte data volumes. The second aspect deals with data discovery among billions of files, the largest bottleneck in reducing access time. The metadata of a file, generally represented in a directory layout, is stored in ways that are not readily scalable. This is true for HDFS, PVFS, and Lustre as well. Recent experimental file systems, such as Spyglass or Pantheon, have attempted to address this problem through redesign of the metadata directory architecture. LVFS takes a radically different architectural approach by eliminating the need for a separate directory within the file system

  9. Scalable syntheses of the BET bromodomain inhibitor JQ1

    PubMed Central

    Syeda, Shameem Sultana; Jakkaraj, Sudhakar; Georg, Gunda I.

    2015-01-01

    We have developed methods involving the use of alternate, safer reagents for the scalable syntheses of the potent BET bromodomain inhibitor JQ1. A one-pot, three-step method, involving the conversion of a benzodiazepine to a thioamide using Lawesson’s reagent, followed by amidrazone formation and installation of the triazole moiety, furnished JQ1. This method provides good yields and a facile purification process. For the synthesis of enantiomerically enriched (+)-JQ1, the highly toxic reagent diethyl chlorophosphate, used in a previous synthesis, was replaced with the safer reagent diphenyl chlorophosphate in the three-step one-pot triazole formation without affecting the yield or enantiomeric purity of (+)-JQ1. PMID:26034331

  10. A Practical and Scalable Tool to Find Overlaps between Sequences

    PubMed Central

    Haj Rachid, Maan

    2015-01-01

    The evolution of the next generation sequencing technology increases the demand for efficient solutions, in terms of space and time, for several bioinformatics problems. This paper presents a practical and easy-to-implement solution for one of these problems, namely, the all-pairs suffix-prefix problem, using a compact prefix tree. The paper demonstrates an efficient construction of this time-efficient and space-economical tree data structure. The paper presents techniques for parallel implementations of the proposed solution. Experimental evaluation indicates superior results in terms of space and time over existing solutions. Results also show that the proposed technique is highly scalable in a parallel execution environment. PMID:25961045
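
    The problem and the trie-based idea are easy to make concrete. The sketch below indexes all reads in a plain (uncompacted) prefix trie and streams each read's proper suffixes through it, longest first; the published tool is far more space- and time-efficient and runs in parallel, so this is only an illustration of the underlying data structure, with a toy read set.

      def all_pairs_suffix_prefix(reads):
          """Longest suffix-prefix overlap length for every ordered read pair (a, b)."""
          MEMBERS = "$reads"            # key holding the reads that pass through a node
          root = {}
          for idx, read in enumerate(reads):
              node = root
              for ch in read:
                  node = node.setdefault(ch, {})
                  node.setdefault(MEMBERS, set()).add(idx)

          overlaps = {}
          for a, read in enumerate(reads):
              for start in range(1, len(read)):        # proper suffixes, longest first
                  node, matched = root, True
                  for ch in read[start:]:
                      if ch not in node:
                          matched = False
                          break
                      node = node[ch]
                  if not matched:
                      continue
                  for b in node[MEMBERS]:              # every read with this suffix as a prefix
                      if b != a and (a, b) not in overlaps:
                          overlaps[(a, b)] = len(read) - start
          return overlaps


      if __name__ == "__main__":
          reads = ["GATTACA", "TACAGGT", "GGTTAAC"]
          for (a, b), length in sorted(all_pairs_suffix_prefix(reads).items()):
              print(reads[a], "->", reads[b], "overlap", length)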

  11. Enzyme-Free Scalable DNA Digital Design Techniques: A Review.

    PubMed

    Konampurath George, Aby; Singh, Harpreet

    2016-12-02

    With the recent developments in DNA nanotechnology, DNA has been used as the basic building block for the design of nanostructures, autonomous molecular motors, various devices, and circuits. DNA is considered as a possible candidate for replacing silicon for designing digital circuits in a near future, especially in implantable medical devices, because of its parallelism, computational powers, small size, light weight, and compatibility with bio-signals. The research in DNA digital design is in early stages of development, and electrical and computer engineers are not much attracted towards this field. In this paper, we give a brief review of the existing enzyme-free scalable DNA digital design techniques which are recently developed. With the developments in DNA circuits, it would be possible to design synthetic molecular systems, therapeutic molecular devices, and other molecular scale devices and instruments. The ultimate aim will be to build complex digital designs using DNA strands which may even be placed inside a human body.

  12. Enzyme-Free Scalable DNA Digital Design Techniques: A Review.

    PubMed

    George, Aby K; Singh, Harpreet

    2016-12-01

    With the recent developments in DNA nanotechnology, DNA has been used as the basic building block for the design of nanostructures, autonomous molecular motors, various devices, and circuits. DNA is considered as a possible candidate for replacing silicon for designing digital circuits in a near future, especially in implantable medical devices, because of its parallelism, computational powers, small size, light weight, and compatibility with bio-signals. The research in DNA digital design is in early stages of development, and electrical and computer engineers are not much attracted towards this field. In this paper, we give a brief review of the existing enzyme-free scalable DNA digital design techniques which are recently developed. With the developments in DNA circuits, it would be possible to design synthetic molecular systems, therapeutic molecular devices, and other molecular scale devices and instruments. The ultimate aim will be to build complex digital designs using DNA strands which may even be placed inside a human body.

  13. Scalable lithography from Natural DNA Patterns via polyacrylamide gel

    PubMed Central

    Qu, JieHao; Hou, XianLiang; Fan, WanChao; Xi, GuangHui; Diao, HongYan; Liu, XiangDon

    2015-01-01

    A facile strategy for fabricating scalable stamps has been developed using cross-linked polyacrylamide gel (PAMG) that controllably and precisely shrinks and swells with water content. Aligned patterns of natural DNA molecules were prepared by evaporative self-assembly on a PMMA substrate, and were transferred to unsaturated polyester resin (UPR) to form a negative replica. The negative was used to pattern the linear structures onto the surface of water-swollen PAMG, and the pattern sizes on the PAMG stamp were customized by adjusting the water content of the PAMG. As a result, consistent reproduction of DNA patterns could be achieved with feature sizes that can be controlled over the range of 40%–200% of the original pattern dimensions. This methodology is novel and may pave a new avenue for manufacturing stamp-based functional nanostructures in a simple and cost-effective manner on a large scale. PMID:26639572

  14. A Scalable P2P Video Streaming Framework

    NASA Astrophysics Data System (ADS)

    Lee, Ivan

    Peer-to-peer (P2P) networking has vast potential to overcome many constraints of conventional content distribution networks, especially for real-time applications such as P2P streaming. In this chapter, a P2P streaming system is examined; the proposed system combines a multiple-description source coding technique with a scalable streaming infrastructure. The system aims to gradually offload congested traffic from a centralized bottleneck to under-utilized P2P networks and hence provides seamless transitions from client/server streaming to centralized P2P streaming and on to decentralized P2P streaming. The performance of the proposed framework is evaluated in terms of the video frame loss rate, which reflects the probability of frozen video frames.

  15. Ballistic Characterization of the Scalability of Mg Alloy AMX602

    NASA Astrophysics Data System (ADS)

    Jones, Tyrone L.; Kondoh, Katsuyoshi; Moore, David; Otsuka, Isamu; Annis, Alan; Nakazawa, Hiroto; Ohori, Yoshinori; Numasawa, Ryo; Takahashi, Masamichi

    The US Army Research Laboratory (ARL) and the Osaka University Joining and Welding Research Institute (JWRI) formed a collaborative partnership with Taber Extrusions, Epson, Pacific Sowa, Kurimoto, and National Material LP to domestically reproduce and scale-up military grade magnesium alloy AMX602 at the Taber Extrusion manufacturing facility in Gulfport, MS. AMX602 material was provided in the form of 38.1-mm (1.5-inch) wide bars, 101.6-mm (4-inch) wide plate, and 152.4-mm (6-in) wide plate. The ARL and the JWRI conducted mechanical analysis and dynamic impact examination to evaluate the lateral dimension scale-up of AMX602. The results were parametrically analyzed and compared to conventionally processed AZ31B-H24 and AA5083-H131. Details of the scalability of the AMX602 alloy are provided.

  16. On the scalability of ring fiber designs for OAM multiplexing.

    PubMed

    Ramachandran, S; Gregg, P; Kristensen, P; Golowich, S E

    2015-02-09

    The promise of the infinite-dimensionality of orbital angular momentum (OAM) and its application to free-space and fiber communications has attracted immense attention in recent years. In order to facilitate OAM-guidance, novel fibers have been proposed and developed, including a class of so-called ring-fibers. In these fibers, the wave-guiding region is a high-index annulus instead of a conventional circular core, which for reasons related to polarization-dependent differential phase shifts for light at waveguide boundaries, leads to enhanced stability for OAM modes. We review the theory and implementation of this nascent class of waveguides, and discuss the opportunities and limitations they present for OAM scalability.

  17. Characterization of scalable ion traps for quantum computation

    NASA Astrophysics Data System (ADS)

    Epstein, R. J.; Bollinger, J. J.; Leibfried, D.; Seidelin, S.; Britton, J.; Wesenberg, J. H.; Shiga, N.; Amini, J. M.; Blakestad, R. B.; Brown, K. R.; Home, J. P.; Itano, W. M.; Jost, J. D.; Langer, C.; Ozeri, R.; Wineland, D. J.

    2007-03-01

    We discuss the experimental characterization of several scalable ion trap architectures for quantum information processing. We have developed an apparatus for testing planar ion trap chips which features: a standardized chip carrier for ease of interchanging traps, a single-laser Raman cooling scheme, and photo-ionization loading of Mg+ ions. The primary benchmark for a given trap is the heating rate of the ion motional degrees of freedom, which can reduce multi-ion quantum gate fidelities. As the heating rate depends on the ion trap geometry and materials, our testing apparatus allows for efficient iteration and optimization of trap parameters. With the recent ability to fabricate planar traps with sufficiently low heating rates for quantum computation [2], we describe current results on the simulation and fabrication of planar traps with multiple intersecting trapping zones for versatile ion choreography. S. Seidelin et al., Phys. Rev. Lett. 96, 253003 (2006). J. Kim, et al., Quantum Inf. Comput. 5, 515 (2005).

  18. Lilith: A scalable secure tool for massively parallel distributed computing

    SciTech Connect

    Armstrong, R.C.; Camp, L.J.; Evensky, D.A.; Gentile, A.C.

    1997-06-01

    Changes in high performance computing have necessitated the ability to utilize and interrogate potentially many thousands of processors. The ASCI (Advanced Strategic Computing Initiative) program conducted by the United States Department of Energy, for example, envisions thousands of distinct operating systems connected by low-latency gigabit-per-second networks. In addition, multiple systems of this kind will be linked via high-capacity networks with latencies as low as the speed of light will allow. Code that spans systems of this sort must be scalable; yet constructing such code, whether for applications, debugging, or maintenance, is an unsolved problem. Lilith is a research software platform that addresses these challenges with an eye toward meeting these needs. Presently, Lilith exists as a test-bed, written in Java, for various spanning algorithms and security schemes. The test-bed software has, and enforces, hooks allowing implementation and testing of various security schemes.

  19. Scalable singular 3D modeling for digital battlefield applications

    NASA Astrophysics Data System (ADS)

    Jannson, Tomasz P.; Ternovskiy, Igor V.

    2000-10-01

    We propose a new classification algorithm to detect and classify targets of interest. It is based on a branch of the analytic geometry of manifolds known as catastrophe theory. Physical Optics Corporation's (POC) scalable 3D model representation provides automatic and real-time analysis of a discrete frame of sensed 2D imagery of terrain, urban, and target features. It then transforms this frame of discrete different-perspective 2D views of a target into a 3D continuous model called a pictogram. The unique local stereopsis feature of this modeling is the surprising ability to locally obtain a 3D pictogram from a single monoscopic photograph. The proposed 3D modeling, combined with more standard change detection algorithms and 3D terrain feature models, will constitute a novel classification algorithm and a new type of digital battlefield imagery for Imaging Systems.

  20. Development of a scalable pharmacogenomic clinical decision support service.

    PubMed

    Fusaro, Vincent A; Brownstein, Catherine; Wolf, Wendy; Clinton, Catherine; Savage, Sarah; Mandl, Kenneth D; Margulies, David; Manzi, Shannon

    2013-01-01

    Advances in sequencing technology are making genomic data more accessible within the healthcare environment. Published pharmacogenetic guidelines attempt to provide a clinical context for specific genomic variants; however, the actual implementation to convert genomic data into a clinical report integrated within an electronic medical record system is a major challenge for any hospital. We created a two-part solution that integrates with the medical record system and converts genetic variant results into an interpreted clinical report based on published guidelines. We successfully developed a scalable infrastructure to support TPMT genetic testing and are currently testing approximately two individuals per week in our production version. We plan to release an online variant to clinical interpretation reporting system in order to facilitate translation of pharmacogenetic information into clinical practice.

  1. Scalable lithography from Natural DNA Patterns via polyacrylamide gel.

    PubMed

    Qu, JieHao; Hou, XianLiang; Fan, WanChao; Xi, GuangHui; Diao, HongYan; Liu, XiangDon

    2015-12-07

    A facile strategy for fabricating scalable stamps has been developed using cross-linked polyacrylamide gel (PAMG) that controllably and precisely shrinks and swells with water content. Aligned patterns of natural DNA molecules were prepared by evaporative self-assembly on a PMMA substrate, and were transferred to unsaturated polyester resin (UPR) to form a negative replica. The negative was used to pattern the linear structures onto the surface of water-swollen PAMG, and the pattern sizes on the PAMG stamp were customized by adjusting the water content of the PAMG. As a result, consistent reproduction of DNA patterns could be achieved with feature sizes that can be controlled over the range of 40%-200% of the original pattern dimensions. This methodology is novel and may pave a new avenue for manufacturing stamp-based functional nanostructures in a simple and cost-effective manner on a large scale.

  2. Fast and fully-scalable synthesis of reduced graphene oxide

    PubMed Central

    Abdolhosseinzadeh, Sina; Asgharzadeh, Hamed; Seop Kim, Hyoung

    2015-01-01

    Exfoliation of graphite is a promising approach for large-scale production of graphene. Oxidation of graphite effectively facilitates the exfoliation process, yet necessitates several lengthy washing and reduction processes to convert the exfoliated graphite oxide (graphene oxide, GO) to reduced graphene oxide (RGO). Although filtration, centrifugation and dialysis have been frequently used in the washing stage, none of them is favorable for large-scale production. Here, we report the synthesis of RGO by sonication-assisted oxidation of graphite in a solution of potassium permanganate and concentrated sulfuric acid followed by reduction with ascorbic acid prior to any washing processes. GO loses its hydrophilicity during the reduction stage, which facilitates the washing step and reduces the time required for production of RGO. Furthermore, simultaneous oxidation and exfoliation significantly enhance the yield of few-layer GO. We hope this one-pot and fully-scalable protocol paves the way toward out-of-lab applications of graphene. PMID:25976732

  3. Memory bandwidth-scalable motion estimation for mobile video coding

    NASA Astrophysics Data System (ADS)

    Hsieh, Jui-Hung; Tai, Wei-Cheng; Chang, Tian-Sheuan

    2011-12-01

    The heavy memory access of motion estimation (ME) execution consumes significant power and could limit ME execution when the available memory bandwidth (BW) is reduced because of access congestion or changes in the dynamics of the power environment of modern mobile devices. In order to adapt to the changing BW while maintaining the rate-distortion (R-D) performance, this article proposes a novel data BW-scalable algorithm for ME with mobile multimedia chips. The available BW is modeled in a R-D sense and allocated to fit the dynamic contents. The simulation result shows 70% BW savings while keeping equivalent R-D performance compared with H.264 reference software for low-motion CIF-sized video. For high-motion sequences, the result shows our algorithm can better use the available BW to save an average bit rate of up to 13% with up to 0.1-dB PSNR increase for similar BW usage.

  4. Scalable in-situ qubit calibration during repetitive error detection

    NASA Astrophysics Data System (ADS)

    Kelly, J.; Barends, R.; Fowler, A.; Mutus, J.; Campbell, B.; Chen, Y.; Chen, Z.; Chiaro, B.; Dunsworth, A.; Jeffrey, E.; Lucero, E.; Megrant, A.; Neeley, M.; Neill, C.; O'Malley, P. J. J.; Roushan, P.; Sank, D.; Quintana, C.; Vainsencher, A.; Wenner, J.; White, T.; Martinis, J. M.

    A quantum computer protects a quantum state from the environment through the careful manipulations of thousands or millions of physical qubits. However, operating such quantities of qubits at the necessary level of precision is an open challenge, as optimal control parameters can vary between qubits and drift in time. We present a method to optimize physical qubit parameters while error detection is running using a nine qubit system performing the bit-flip repetition code. We demonstrate how gate optimization can be parallelized in a large-scale qubit array and show that the presented method can be used to simultaneously compensate for independent or correlated qubit parameter drifts. Our method is O(1) scalable to systems of arbitrary size, providing a path towards controlling the large numbers of qubits needed for a fault-tolerant quantum computer.

  5. Scalable in situ qubit calibration during repetitive error detection

    NASA Astrophysics Data System (ADS)

    Kelly, J.; Barends, R.; Fowler, A. G.; Megrant, A.; Jeffrey, E.; White, T. C.; Sank, D.; Mutus, J. Y.; Campbell, B.; Chen, Yu; Chen, Z.; Chiaro, B.; Dunsworth, A.; Lucero, E.; Neeley, M.; Neill, C.; O'Malley, P. J. J.; Quintana, C.; Roushan, P.; Vainsencher, A.; Wenner, J.; Martinis, John M.

    2016-09-01

    We present a method to optimize qubit control parameters during error detection which is compatible with large-scale qubit arrays. We demonstrate our method to optimize single or two-qubit gates in parallel on a nine-qubit system. Additionally, we show how parameter drift can be compensated for during computation by inserting a frequency drift and using our method to remove it. We remove both drift on a single qubit and independent drifts on all qubits simultaneously. We believe this method will be useful in keeping error rates low on all physical qubits throughout the course of a computation. Our method is O(1) scalable to systems of arbitrary size, providing a path towards controlling the large numbers of qubits needed for a fault-tolerant quantum computer.

  6. Dual-Matrix Sampling for Scalable Translucent Material Rendering.

    PubMed

    Wu, Yu-Ting; Li, Tzu-Mao; Lin, Yu-Hsun; Chuang, Yung-Yu

    2015-03-01

    This paper introduces a scalable algorithm for rendering translucent materials with complex lighting. We represent the light transport with a diffusion approximation by a dual-matrix representation with the Light-to-Surface and Surface-to-Camera matrices. By exploiting the structures within the matrices, the proposed method can locate surface samples with little contribution by using only subsampled matrices and avoid wasting computation on these samples. The decoupled estimation of irradiance and diffuse BSSRDFs also allows us to have a tight error bound, making the adaptive diffusion approximation more efficient and accurate. Experiments show that our method outperforms previous methods for translucent material rendering, especially in large scenes with massive translucent surfaces shaded by complex illumination.

  7. Center for Programming Models for Scalable Parallel Computing

    SciTech Connect

    John Mellor-Crummey

    2008-02-29

    Rice University's achievements as part of the Center for Programming Models for Scalable Parallel Computing include: (1) design and implementation of cafc, the first multi-platform CAF compiler for distributed and shared-memory machines, (2) performance studies of the efficiency of programs written using the CAF and UPC programming models, (3) a novel technique to analyze explicitly-parallel SPMD programs that facilitates optimization, (4) design, implementation, and evaluation of new language features for CAF, including communication topologies, multi-version variables, and distributed multithreading to simplify development of high-performance codes in CAF, and (5) a synchronization strength reduction transformation for automatically replacing barrier-based synchronization with more efficient point-to-point synchronization. The prototype Co-array Fortran compiler cafc developed in this project is available as open source software from http://www.hipersoft.rice.edu/caf.

  8. SCALABLE FUSED LASSO SVM FOR CONNECTOME-BASED DISEASE PREDICTION

    PubMed Central

    Watanabe, Takanori; Scott, Clayton D.; Kessler, Daniel; Angstadt, Michael; Sripada, Chandra S.

    2015-01-01

    There is substantial interest in developing machine-based methods that reliably distinguish patients from healthy controls using high-dimensional correlation maps known as functional connectomes (FCs) generated from resting state fMRI. To address the dimensionality of FCs, the current body of work relies on feature selection techniques that are blind to the spatial structure of the data. In this paper, we propose to use the fused Lasso regularized support vector machine to explicitly account for the 6-D structure of the FC (defined by pairs of points in 3-D brain space). In order to solve the resulting nonsmooth and large-scale optimization problem, we introduce a novel and scalable algorithm based on the alternating direction method. Experiments on real resting state scans show that our approach can recover results that are more neuroscientifically informative than previous methods. PMID:25892971

  9. Scalable Quantum Circuit and Control for a Superconducting Surface Code

    NASA Astrophysics Data System (ADS)

    Versluis, R.; Poletto, S.; Khammassi, N.; Tarasinski, B.; Haider, N.; Michalak, D. J.; Bruno, A.; Bertels, K.; DiCarlo, L.

    2017-09-01

    We present a scalable scheme for executing the error-correction cycle of a monolithic surface-code fabric composed of fast-flux-tunable transmon qubits with nearest-neighbor coupling. An eight-qubit unit cell forms the basis for repeating both the quantum hardware and coherent control, enabling spatial multiplexing. This control uses three fixed frequencies for all single-qubit gates and a unique frequency-detuning pattern for each qubit in the cell. By pipelining the interaction and readout steps of ancilla-based X- and Z-type stabilizer measurements, we can engineer detuning patterns that avoid all second-order transmon-transmon interactions except those exploited in controlled-phase gates, regardless of fabric size. Our scheme is applicable to defect-based and planar logical qubits, including lattice surgery.

  10. Overview of the Scalable Coherent Interface, IEEE STD 1596 (SCI)

    SciTech Connect

    Gustavson, D.B.; James, D.V.; Wiggers, H.A.

    1992-10-01

    The Scalable Coherent Interface standard defines a new generation of interconnection that spans the full range from supercomputer memory "bus" to campus-wide network. SCI provides bus-like services and a shared-memory software model while using an underlying packet protocol on many independent communication links. Initially these links are 1 GByte/s (wires) and 1 GBit/s (fiber), but the protocol scales well to future faster or lower-cost technologies. The interconnect may use switches, meshes, and rings. The SCI distributed-shared-memory model is simple and versatile, enabling for the first time a smooth integration of highly parallel multiprocessors, workstations, personal computers, I/O, networking and data acquisition.

  11. ParaText: scalable text modeling and analysis.

    SciTech Connect

    Dunlavy, Daniel M.; Stanton, Eric T.; Shead, Timothy M.

    2010-06-01

    Automated processing, modeling, and analysis of unstructured text (news documents, web content, journal articles, etc.) is a key task in many data analysis and decision making applications. As data sizes grow, scalability is essential for deep analysis. In many cases, documents are modeled as term or feature vectors and latent semantic analysis (LSA) is used to model latent, or hidden, relationships between documents and terms appearing in those documents. LSA supplies conceptual organization and analysis of document collections by modeling high-dimensional feature vectors in many fewer dimensions. While past work on the scalability of LSA modeling has focused on the SVD, the goal of our work is to investigate the use of distributed memory architectures for the entire text analysis process, from data ingestion to semantic modeling and analysis. ParaText is a set of software components for distributed processing, modeling, and analysis of unstructured text. The ParaText source code is available under a BSD license, as an integral part of the Titan toolkit. ParaText components are chained together into data-parallel pipelines that are replicated across processes on distributed-memory architectures. Individual components can be replaced or rewired to explore different computational strategies and implement new functionality. ParaText functionality can be embedded in applications on any platform using the native C++ API, Python, or Java. The ParaText MPI Process provides a 'generic' text analysis pipeline in a command-line executable that can be used for many serial and parallel analysis tasks. ParaText can also be deployed as a web service accessible via a RESTful (HTTP) API. In the web service configuration, any client can access the functionality provided by ParaText using commodity protocols, from standard web browsers to custom clients written in any language.
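
    The LSA step that ParaText distributes can be shown serially in a few lines: documents become TF-IDF feature vectors and a truncated SVD maps them into a low-dimensional concept space in which related documents sit close together. The sketch below uses scikit-learn and a toy corpus purely for illustration; ParaText itself implements this as data-parallel C++ pipeline components, which is not reflected here.

      from sklearn.decomposition import TruncatedSVD
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.metrics.pairwise import cosine_similarity

      docs = [
          "parallel text analysis on distributed memory architectures",
          "latent semantic analysis models documents and terms",
          "the quarterly report covers revenue and expenses",
          "terms and documents share a low dimensional concept space",
      ]

      tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
      lsa = TruncatedSVD(n_components=2, random_state=0)
      concept_vectors = lsa.fit_transform(tfidf)      # documents in the reduced concept space

      # documents about semantic modeling should be more similar to each other
      # than to the financial report
      print(cosine_similarity(concept_vectors).round(2))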

  12. A scalable method for computing quadruplet wave-wave interactions

    NASA Astrophysics Data System (ADS)

    Van Vledder, Gerbrant

    2017-04-01

    Non-linear four-wave interactions are a key physical process in the evolution of wind-generated ocean waves. The present generation of operational wave models uses the Discrete Interaction Approximation (DIA), but its accuracy is poor. It is now generally acknowledged that the DIA should be replaced with a more accurate method to improve predicted spectral shapes and derived parameters. The search for such a method is challenging, as one should find a balance between accuracy and computational requirements. Such a method is presented here in the form of a scalable and adaptive method that can mimic both the time-consuming exact Snl4 approach and the fast but inaccurate DIA, and everything in between. The method provides an elegant approach to improve the DIA, not by including more arbitrarily shaped wave number configurations, but by a mathematically consistent reduction of an exact method, viz. the WRT method. The adaptivity lies in adapting the abscissae of the locus integrand in relation to the magnitude of the known terms. The adaptivity is extended to the highest level of the WRT method to select interacting wavenumber configurations in a hierarchical way in relation to their importance. This adaptivity results in a speed-up of one to three orders of magnitude, depending on the measure of accuracy. This measure of accuracy should not be expressed in terms of the quality of the transfer integral for academic spectra, but rather in terms of wave model performance in a dynamic run. This has consequences for the balance between the required accuracy and the computational workload for evaluating these interactions. The performance of the scalable method on different scales is illustrated with results ranging from academic spectra and simple growth curves to more complicated field cases using a 3G-wave model.

  13. Scalable Conjunction Processing using Spatiotemporally Indexed Ephemeris Data

    NASA Astrophysics Data System (ADS)

    Budianto-Ho, I.; Johnson, S.; Sivilli, R.; Alberty, C.; Scarberry, R.

    2014-09-01

    The collision warnings produced by the Joint Space Operations Center (JSpOC) are of critical importance in protecting U.S. and allied spacecraft against destructive collisions and protecting the lives of astronauts during space flight. As the Space Surveillance Network (SSN) improves its sensor capabilities for tracking small and dim space objects, the number of tracked objects increases from thousands to hundreds of thousands of objects, while the number of potential conjunctions increases with the square of the number of tracked objects. Classical filtering techniques such as apogee and perigee filters have proven insufficient. Novel conjunction analysis algorithms that are orders of magnitude faster are required to find conjunctions in a timely manner. Stellar Science has developed innovative filtering techniques for satellite conjunction processing using spatiotemporally indexed ephemeris data that efficiently and accurately reduce the number of objects requiring high-fidelity and computationally intensive conjunction analysis. Two such algorithms, one based on the k-d Tree pioneered in robotics applications and the other based on Spatial Hash Tables used in computer gaming and animation, use, at worst, an initial O(N log N) preprocessing pass (where N is the number of tracked objects) to build large O(N) spatial data structures that substantially reduce the required number of O(N^2) computations, substituting linear memory usage for quadratic processing time. The filters have been implemented as Open Services Gateway initiative (OSGi) plug-ins for the Continuous Anomalous Orbital Situation Discriminator (CAOS-D) conjunction analysis architecture. We have demonstrated the effectiveness, efficiency, and scalability of the techniques using a catalog of 100,000 objects and an analysis window of one day on a 64-core computer with 1 TB of shared memory. Each algorithm can process the full catalog in 6 minutes or less, almost a twenty-fold performance improvement over the
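
    The coarse screening stage can be sketched with an off-the-shelf k-d tree: index every object's position at one time step and keep only the pairs closer than a screening radius, which are then handed to the expensive high-fidelity analysis. The synthetic catalog, radius, and single-epoch treatment below are illustrative only; the actual filters index ephemerides spatiotemporally and run as CAOS-D plug-ins.

      import numpy as np
      from scipy.spatial import cKDTree

      rng = np.random.default_rng(0)
      n_objects = 100_000
      screening_radius_km = 10.0

      # synthetic positions (km) in a shell roughly spanning low Earth orbit altitudes
      directions = rng.normal(size=(n_objects, 3))
      directions /= np.linalg.norm(directions, axis=1, keepdims=True)
      radii = rng.uniform(6771.0, 7371.0, size=n_objects)
      positions = directions * radii[:, None]

      tree = cKDTree(positions)                       # O(N log N) build
      candidate_pairs = tree.query_pairs(r=screening_radius_km)
      print(f"{len(candidate_pairs)} candidate pairs out of "
            f"{n_objects * (n_objects - 1) // 2} possible pairings")
      # a full system repeats this per time step (or indexes time as an extra
      # dimension) and passes candidate_pairs to the high-fidelity stage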

  14. Bringing the CMS distributed computing system into scalable operations

    NASA Astrophysics Data System (ADS)

    Belforte, S.; Fanfani, A.; Fisk, I.; Flix, J.; Hernández, J. M.; Kress, T.; Letts, J.; Magini, N.; Miccio, V.; Sciabà, A.

    2010-04-01

    Establishing efficient and scalable operations of the CMS distributed computing system critically relies on the proper integration, commissioning and scale testing of the data and workload management tools, the various computing workflows and the underlying computing infrastructure, located at more than 50 computing centres worldwide and interconnected by the Worldwide LHC Computing Grid. Computing challenges periodically undertaken by CMS in recent years with increasing scale and complexity have revealed the need for a sustained effort on computing integration and commissioning activities. The Processing and Data Access (PADA) Task Force was established at the beginning of 2008 within the CMS Computing Program with the mandate of validating the infrastructure for organized processing and user analysis, including the sites and the workload and data management tools; validating the distributed production system by performing functionality, reliability and scale tests; helping sites to commission, configure and optimize the networking and storage through scale-testing of data transfers and data processing; and improving the efficiency of accessing data across the CMS computing system, from global transfers to local access. This contribution reports on the tools and procedures developed by CMS for computing commissioning and scale testing, as well as the improvements accomplished towards efficient, reliable and scalable computing operations. The activities include the development and operation of load generators for job submission and data transfers, with the aim of stressing the experiment and Grid data management and workload management systems; site commissioning procedures and tools to monitor and improve site availability and reliability; and activities targeted at the commissioning of the distributed production, user analysis and monitoring systems.

  15. Scalable desktop visualisation of very large radio astronomy data cubes

    NASA Astrophysics Data System (ADS)

    Perkins, Simon; Questiaux, Jacques; Finniss, Stephen; Tyler, Robin; Blyth, Sarah; Kuttel, Michelle M.

    2014-07-01

    Observation data from radio telescopes are typically stored in three- (or higher-) dimensional data cubes, whose resolution, coverage and size continue to grow as ever larger radio telescopes come online. The Square Kilometre Array, slated to be the largest radio telescope in the world, will generate multi-terabyte data cubes, several orders of magnitude larger than the current norm. Despite this imminent data deluge, scalable approaches to file access in astronomical visualisation software are rare: most current software packages cannot read astronomical data cubes that do not fit into system memory, or provide access only at a serious performance cost. In addition, there is little support for interactive exploration of 3D data. We describe a scalable, hierarchical approach to 3D visualisation of very large spectral data cubes that enables rapid visualisation of large data files on standard desktop hardware. Our hierarchical approach, embodied in the AstroVis prototype, aims to provide a means of viewing large datasets that do not fit into system memory. The focus is on rapid initial response: our system initially presents a reduced, coarse-grained 3D view of the selected data cube, which is then gradually refined. The user may select sub-regions of the cube to be explored in more detail, or extracted for use in applications that do not support large files. We thus shift the focus from data analysis informed by narrow slices of detailed information to analysis informed by overview information, with details on demand. Our hierarchical solution to the rendering of large data cubes reduces the overall time to complete file reading, provides user feedback during file processing and is memory efficient. This solution does not require high-performance computing hardware and can be implemented on any platform supporting the OpenGL rendering library.
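    A minimal sketch of the coarse-view-then-refine idea follows, assuming a raw float32 cube on disk (the file layout, names and dimensions are illustrative; this is not the AstroVis code): the cube is memory-mapped, a heavily down-sampled overview is built for immediate display, and full-resolution sub-regions are read only on demand.

      import numpy as np

      def open_cube(path, shape, dtype=np.float32):
          """Memory-map a raw binary cube of shape (nz, ny, nx) without loading it."""
          return np.memmap(path, mode="r", dtype=dtype, shape=shape)

      def coarse_view(cube, step=8):
          """Return a reduced in-memory copy holding every step-th voxel."""
          return np.asarray(cube[::step, ::step, ::step])

      def extract_subregion(cube, z, y, x):
          """Read one full-resolution sub-region, e.g. for detailed analysis."""
          return np.asarray(cube[z[0]:z[1], y[0]:y[1], x[0]:x[1]])

      # Usage (file name and dimensions are made up for illustration):
      # cube = open_cube("survey_cube.raw", shape=(4096, 2048, 2048))
      # overview = coarse_view(cube, step=16)                  # shown immediately
      # detail = extract_subregion(cube, (100, 200), (0, 512), (0, 512))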

  16. A scalable time-space multiscale domain decomposition method: adaptive time scale separation

    NASA Astrophysics Data System (ADS)

    Passieux, J.-C.; Ladevèze, P.; Néron, D.

    2010-09-01

    This paper deals with the scalability of a time-space multiscale domain decomposition method in the framework of time-dependent nonlinear problems. The strategy studied here is the multiscale LATIN method, whose scalability was shown in previous works when the distinction between macro and micro parts is made on the spatial level alone. The objective of this work is to propose an explanation of the loss-of-scalability phenomenon, along with a remedy which guarantees full scalability provided a suitable macro time part is chosen. This technique, which is quite general, is based on an adaptive separation of scales achieved by automatically adding the most relevant functions to the temporal macrobasis. When this method is used, the numerical scalability of the strategy is confirmed by the examples presented.
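    The following Python sketch illustrates, in a generic POD/SVD style, what "automatically adding the most relevant functions to the temporal macrobasis" could look like; it is an assumption-laden illustration rather than the LATIN implementation, and every name in it is invented.

      import numpy as np

      def enrich_macrobasis(signals, basis, n_new=1):
          """signals: (n_time, n_samples) time histories from the fine-scale problem.
          basis: (n_time, n_macro) orthonormal temporal macrobasis.
          Returns the basis extended with the n_new dominant residual modes."""
          residual = signals - basis @ (basis.T @ signals)   # remove macro content
          u, s, _ = np.linalg.svd(residual, full_matrices=False)
          extended = np.hstack([basis, u[:, :n_new]])
          q, _ = np.linalg.qr(extended)                      # re-orthonormalise
          return q

      # Synthetic example: start from a constant + linear macro time basis.
      t = np.linspace(0.0, 1.0, 200)
      basis = np.linalg.qr(np.stack([np.ones_like(t), t], axis=1))[0]
      signals = np.stack([np.sin(8 * np.pi * t), t ** 2], axis=1)
      print(enrich_macrobasis(signals, basis).shape)         # (200, 3)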

  17. UpSet: Visualization of Intersecting Sets

    PubMed Central

    Lex, Alexander; Gehlenborg, Nils; Strobelt, Hendrik; Vuillemot, Romain; Pfister, Hanspeter

    2016-01-01

    Understanding relationships between sets is an important analysis task that has received widespread attention in the visualization community. The major challenge in this context is the combinatorial explosion of the number of set intersections if the number of sets exceeds a trivial threshold. In this paper, we introduce UpSet, a novel visualization technique for the quantitative analysis of sets, their intersections, and aggregates of intersections. UpSet is focused on creating task-driven aggregates, communicating the size and properties of aggregates and intersections, and providing a duality between the visualization of the elements in a dataset and their set membership. UpSet visualizes set intersections in a matrix layout and introduces aggregates based on groupings and queries. The matrix layout enables the effective representation of associated data, such as the number of elements in the aggregates and intersections, as well as additional summary statistics derived from subset or element attributes. Sorting according to various measures enables a task-driven analysis of relevant intersections and aggregates. The elements represented in the sets and their associated attributes are visualized in a separate view. Queries based on containment in specific intersections or aggregates, or driven by attribute filters, are propagated between both views. We also introduce several advanced visual encodings and interaction methods to overcome the problems of varying scales and to address scalability. UpSet is web-based and open source. We demonstrate its general utility in multiple use cases from various domains. PMID:26356912
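    A small Python sketch of the bookkeeping behind an UpSet-style matrix follows (this is not the UpSet implementation, which is web-based; names are illustrative): for every combination of set memberships, count the elements whose membership is exactly that combination, i.e. the exclusive intersections shown as matrix columns, and sort them by size for a task-driven view.

      from collections import Counter

      def exclusive_intersections(sets):
          """sets: dict name -> set of elements.
          Returns a Counter mapping frozenset-of-set-names -> number of elements
          whose membership is exactly that combination."""
          membership = Counter()
          for element in set().union(*sets.values()):
              key = frozenset(name for name, s in sets.items() if element in s)
              membership[key] += 1
          return membership

      # Example with three small sets, sorted by intersection size.
      sets = {"A": {1, 2, 3, 4}, "B": {3, 4, 5}, "C": {4, 5, 6}}
      for combo, count in sorted(exclusive_intersections(sets).items(),
                                 key=lambda kv: -kv[1]):
          print(sorted(combo), count)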

  18. Visualizing water

    NASA Astrophysics Data System (ADS)

    Baart, F.; van Gils, A.; Hagenaars, G.; Donchyts, G.; Eisemann, E.; van Velzen, J. W.

    2016-12-01

    A compelling visualization is captivating, beautiful and narrative. Here we show how melding the skills of computer graphics, art, statistics, and environmental modeling can be used to generate innovative, attractive and very informative visualizations. We focus on the topic of visualizing forecasts and measurements of water (water level, waves, currents, density, and salinity). For the field of computer graphics and art, water is an important topic because it occurs in many natural scenes. For environmental modeling and statistics, water is an important topic because water is essential for transport, a healthy environment, fruitful agriculture, and a safe environment. The different disciplines take different approaches to visualizing water. In computer graphics, one focuses on making water look as realistic as possible. The focus on realistic perception (versus the focus on physical balance pursued by environmental scientists) has resulted in fascinating renderings, as seen in recent games and movies. Visualization techniques for statistical results have benefited from advances in design and journalism, resulting in enthralling infographics. The field of environmental modeling has absorbed advances in contemporary cartography, as seen in the latest interactive data-driven maps. We systematically review the design of emerging types of water visualizations. The examples that we analyze range from dynamically animated forecasts, interactive paintings, infographics, and modern cartography to web-based photorealistic rendering. By characterizing the intended audience, the design choices, the scales (e.g. time, space), and the explorability, we provide a set of guidelines and genres. The unique contributions of the different fields show how innovations in the current state of the art of water visualization have benefited from interdisciplinary collaborations.

  19. A Collection of Visual Thesauri for Browsing Large Collections of Geographic Images.

    ERIC Educational Resources Information Center

    Ramsey, Marshall C.; Chen, Hsinchun; Zhu, Bin; Schatz, Bruce R.

    1999-01-01

    Discusses problems in creating indices and thesauri for digital libraries of geo-spatial multimedia content and proposes a scalable method to automatically generate visual thesauri of large collections of geo-spatial media using fuzzy, unsupervised machine-learning techniques. Uses satellite photographs as examples and discusses texture-based…

  20. Visual Prosthesis

    PubMed Central

    Schiller, Peter H.; Tehovnik, Edward J.

    2009-01-01

    There are more than 40 million blind individuals in the world whose plight would be greatly ameliorated by creating a visual prosthetic. We begin by outlining the basic operational characteristics of the visual system, as this knowledge is essential for producing a prosthetic device based on electrical stimulation through arrays of implanted electrodes. We then list a series of tenets that we believe need to be followed in this effort. Central among these is our belief that the initial research in this area, which is in its infancy, should first be carried out in animals. We suggest that implantation in area V1 holds high promise, as the area is of a large volume and can therefore accommodate extensive electrode arrays. We then proceed to consider coding operations that can effectively convert visual images viewed by a camera into stimulation of electrode arrays, yielding visual impressions that can provide shape, motion and depth information. We advocate experimental work that mimics electrical stimulation effects non-invasively in sighted human subjects using a camera from which visual images are converted into displays on a monitor akin to those created by electrical stimulation. PMID:19065857