Science.gov

Sample records for scalable isosurface visualization

  1. Scalable isosurface visualization of massive datasets on commodity off-the-shelf clusters

    PubMed Central

    Bajaj, Chandrajit

    2009-01-01

    Tomographic imaging and computer simulations are increasingly yielding massive datasets. Interactive and exploratory visualizations have rapidly become indispensable tools to study large volumetric imaging and simulation data. Our scalable isosurface visualization framework on commodity off-the-shelf clusters is an end-to-end parallel and progressive platform, from initial data access to the final display. Interactive browsing of extracted isosurfaces is made possible by using parallel isosurface extraction and rendering in conjunction with a new specialized piece of image compositing hardware called Metabuffer. In this paper, we focus on back-end scalability by introducing a fully parallel and out-of-core isosurface extraction algorithm. It achieves scalability by combining parallel and out-of-core processing with parallel disks. It statically partitions the volume data across parallel disks with a balanced workload spectrum, and builds I/O-optimal external interval trees to minimize the number of I/O operations needed to load large data from disk. We also describe an isosurface compression scheme that is efficient for progressive extraction, transmission and storage of isosurfaces. PMID:19756231
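
    The I/O-optimal external interval tree described above indexes each cell (or block of cells) by the [min, max] range of its scalar values, so an isovalue query touches only data that can actually contribute to the surface. Below is a minimal in-memory sketch of that stabbing-query idea; the paper's structure is external and block-oriented, and the cell data and names here are invented for illustration.

```python
from dataclasses import dataclass
from statistics import median
from typing import List, Tuple

Interval = Tuple[float, float, int]  # (min, max, cell_id)

@dataclass
class IntervalNode:
    center: float
    here: List[Interval]                    # intervals that span `center`
    left: "IntervalNode | None" = None      # intervals entirely below center
    right: "IntervalNode | None" = None     # intervals entirely above center

def build(intervals: List[Interval]) -> "IntervalNode | None":
    if not intervals:
        return None
    center = median([lo for lo, _, _ in intervals] + [hi for _, hi, _ in intervals])
    here = [iv for iv in intervals if iv[0] <= center <= iv[1]]
    left = [iv for iv in intervals if iv[1] < center]
    right = [iv for iv in intervals if iv[0] > center]
    return IntervalNode(center, here, build(left), build(right))

def stab(node: "IntervalNode | None", isovalue: float) -> List[int]:
    """Return the ids of all cells whose [min, max] range contains `isovalue`."""
    if node is None:
        return []
    hits = [cid for lo, hi, cid in node.here if lo <= isovalue <= hi]
    if isovalue < node.center:
        hits += stab(node.left, isovalue)
    elif isovalue > node.center:
        hits += stab(node.right, isovalue)
    return hits

# Example: each "cell" carries the min/max of its corner samples.
cells = [(0.0, 0.4, 0), (0.3, 0.9, 1), (0.5, 0.7, 2), (0.8, 1.0, 3)]
print(stab(build(cells), 0.6))   # -> cells 1 and 2 are active
```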

  2. Faster isosurface ray tracing using implicit KD-trees.

    PubMed

    Wald, Ingo; Friedrich, Heiko; Marmitt, Gerd; Slusallek, Philipp; Seidel, Hans-Peter

    2005-01-01

    The visualization of high-quality isosurfaces at interactive rates is an important tool in many simulation and visualization applications. Today, isosurfaces are most often visualized by extracting a polygonal approximation that is then rendered via graphics hardware or by using a special variant of preintegrated volume rendering. However, these approaches have a number of limitations in terms of the quality of the isosurface, lack of performance for complex data sets, or supported shading models. An alternative isosurface rendering method that does not suffer from these limitations is to directly ray trace the isosurface. However, this approach has been much too slow for interactive applications unless massively parallel shared-memory supercomputers have been used. In this paper, we implement interactive isosurface ray tracing on commodity desktop PCs by building on recent advances in real-time ray tracing of polygonal scenes and using those to improve isosurface ray tracing performance as well. The high performance and scalability of our approach will be demonstrated with several practical examples, including the visualization of highly complex isosurface data sets, the interactive rendering of hybrid polygonal/isosurface scenes, including high-quality ray traced shading effects, and even interactive global illumination on isosurfaces.
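
    The implicit kd-tree used in this work stores, for every node, the min/max scalar value of its subtree, so whole regions whose range excludes the isovalue are skipped during ray traversal. The following is a rough single-ray sketch of that pruning idea over presampled values along a ray; the real structure precomputes the per-node min/max instead of rescanning, and traverses nodes in ray order, so this is only an illustration under those simplifications.

```python
import numpy as np

def find_crossings(samples, iso, lo=0, hi=None):
    """Return index pairs (i, i+1) along a ray where the sampled scalar field
    crosses `iso`, skipping whole spans whose min/max range excludes the isovalue.
    A real implicit kd-tree stores each span's min/max in the node rather than
    recomputing it here."""
    if hi is None:
        hi = len(samples) - 1
    if hi <= lo:
        return []
    span = samples[lo:hi + 1]
    if iso < span.min() or iso > span.max():
        return []                      # whole subtree culled
    if hi - lo == 1:                   # leaf: one sample interval
        a, b = samples[lo], samples[hi]
        return [(lo, hi)] if min(a, b) <= iso <= max(a, b) else []
    mid = (lo + hi) // 2
    return find_crossings(samples, iso, lo, mid) + find_crossings(samples, iso, mid, hi)

ray_samples = np.array([0.1, 0.3, 0.8, 0.6, 0.2, 0.9])
print(find_crossings(ray_samples, 0.5))   # -> [(1, 2), (3, 4), (4, 5)]
```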

  3. Large Scale Isosurface Bicubic Subdivision-Surface Wavelets for Representation and Visualization

    SciTech Connect

    Bertram, M.; Duchaineau, M.A.; Hamann, B.; Joy, K.I.

    2000-01-05

    We introduce a new subdivision-surface wavelet transform for arbitrary two-manifolds with boundary that is the first to use simple lifting-style filtering operations with bicubic precision. We also describe a conversion process for re-mapping large-scale isosurfaces to have subdivision connectivity and fair parameterizations so that the new wavelet transform can be used for compression and visualization. The main idea enabling our wavelet transform is the circular symmetrization of the filters in irregular neighborhoods, which replaces the traditional separation of filters into two 1-D passes. Our wavelet transform uses polygonal base meshes to represent surface topology, from which a Catmull-Clark-style subdivision hierarchy is generated. The details between these levels of resolution are quickly computed and compactly stored as wavelet coefficients. The isosurface conversion process begins with a contour triangulation computed using conventional techniques, which we subsequently simplify with a variant of the edge-collapse procedure, followed by an edge-removal process. This provides a coarse initial base mesh, which is subsequently refined, relaxed and attracted in phases to converge to the contour. The conversion is designed to produce smooth, untangled and minimally-skewed parameterizations, which improves the subsequent compression after applying the transform. We have demonstrated our conversion and transform for an isosurface obtained from a high-resolution turbulent-mixing hydrodynamics simulation, showing the potential for compression and level-of-detail visualization.

  4. Effects of VR system fidelity on analyzing isosurface visualization of volume datasets.

    PubMed

    Laha, Bireswar; Bowman, Doug A; Socha, John J

    2014-04-01

    Volume visualization is an important technique for analyzing datasets from a variety of different scientific domains. Volume data analysis is inherently difficult because volumes are three-dimensional, dense, and unfamiliar, requiring scientists to precisely control the viewpoint and to make precise spatial judgments. Researchers have proposed that more immersive (higher fidelity) VR systems might improve task performance with volume datasets, and significant results tied to different components of display fidelity have been reported. However, more information is needed to generalize these results to different task types, domains, and rendering styles. We visualized isosurfaces extracted from synchrotron microscopic computed tomography (SR-μCT) scans of beetles, in a CAVE-like display. We ran a controlled experiment evaluating the effects of three components of system fidelity (field of regard, stereoscopy, and head tracking) on a variety of abstract task categories that are applicable to various scientific domains, and also compared our results with those from our prior experiment using 3D texture-based rendering. We report many significant findings. For example, for search and spatial judgment tasks with isosurface visualization, a stereoscopic display provides better performance, but for tasks with 3D texture-based rendering, displays with higher field of regard were more effective, independent of the levels of the other display components. We also found that systems with high field of regard and head tracking improve performance in spatial judgment tasks. Our results extend existing knowledge and produce new guidelines for designing VR systems to improve the effectiveness of volume data analysis.

  5. Interactive isosurface ray tracing of time-varying tetrahedral volumes.

    PubMed

    Wald, Ingo; Friedrich, Heiko; Knoll, Aaron; Hansen, Charles D

    2007-01-01

    We describe a system for interactively rendering isosurfaces of tetrahedral finite-element scalar fields using coherent ray tracing techniques on the CPU. By employing state-of-the-art methods in polygonal ray tracing, namely aggressive packet/frustum traversal of a bounding volume hierarchy, we can accommodate large and time-varying unstructured data. In conjunction with this efficiency structure, we introduce a novel technique for intersecting ray packets with tetrahedral primitives. Ray tracing is flexible, allowing for dynamic changes in isovalue and time step, visualization of multiple isosurfaces, shadows, and depth-peeling transparency effects. The resulting system offers the intuitive simplicity of isosurfacing, guaranteed-correct visual results, and ultimately a scalable, dynamic and consistently interactive solution for visualizing unstructured volumes.
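
    Inside a single tetrahedron with linear (barycentric) interpolation, the scalar field is affine along a ray, so the ray-isosurface intersection reduces to one linear solve. The sketch below shows only that per-element calculation under the linear-interpolation assumption; the function name and inputs are mine, and the paper's packet/frustum traversal and packet-tetrahedron intersection technique are not reproduced.

```python
import numpy as np

def ray_tet_isosurface_t(verts, vals, origin, direction, iso):
    """Ray parameter t where origin + t*direction meets the isosurface of a linearly
    interpolated scalar field inside one tetrahedron, or None if it misses.
    verts: (4, 3) vertex positions; vals: (4,) scalar values at those vertices."""
    verts, vals = np.asarray(verts, float), np.asarray(vals, float)
    origin, direction = np.asarray(origin, float), np.asarray(direction, float)
    M = np.vstack([verts.T, np.ones(4)])               # columns are [v_i; 1]

    def barycentric(p):                                 # weights of p w.r.t. the tet
        return np.linalg.solve(M, np.append(p, 1.0))

    # Linear interpolation makes the field affine in t along the ray, so two
    # samples determine the single crossing.
    f0 = barycentric(origin) @ vals
    f1 = barycentric(origin + direction) @ vals
    if np.isclose(f0, f1):
        return None                                     # ray runs parallel to the level set
    t = (iso - f0) / (f1 - f0)
    if t < 0:
        return None
    b = barycentric(origin + t * direction)
    return t if np.all(b >= -1e-9) else None            # hit must lie inside the tet

# Unit tetrahedron whose field equals the x coordinate; the 0.5-isosurface is x = 0.5.
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
vals = [0.0, 1.0, 0.0, 0.0]
print(ray_tet_isosurface_t(verts, vals, (0.1, 0.1, 0.1), (1.0, 0.0, 0.0), 0.5))  # ~0.4
```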

  6. Seamless multiresolution isosurfaces using wavelets

    SciTech Connect

    Udeshi, T.; Hudson, R.; Papka, M. E.

    2000-04-11

    Data sets that are being produced by today's simulations, such as the ones generated by DOE's ASCI program, are too large for real-time exploration and visualization. Therefore, new methods of visualizing these data sets need to be investigated. The authors present a method that combines isosurface representations of different resolutions into a seamless solution, virtually free of cracks and overlaps. The solution combines existing isosurface generation algorithms and wavelet theory to produce a real-time solution to multiple-resolution isosurfaces.

  7. Volume Haptics with Topology-Consistent Isosurfaces.

    PubMed

    Corenthy, Loïc; Otaduy, Miguel A; Pastor, Luis; Garcia, Marcos

    2015-01-01

    Haptic interfaces offer an intuitive way to interact with and manipulate 3D datasets, and may simplify the interpretation of visual information. This work proposes an algorithm to provide haptic feedback directly from volumetric datasets, as an aid to regular visualization. The haptic rendering algorithm lets users perceive isosurfaces in volumetric datasets, and it relies on several design features that ensure a robust and efficient rendering. A marching tetrahedra approach enables the dynamic extraction of a piecewise linear continuous isosurface. Robustness is achieved using a continuous collision detection step coupled with state-of-the-art proxy-based rendering methods over the extracted isosurface. The introduced marching tetrahedra approach guarantees that the extracted isosurface will match the topology of an equivalent isosurface computed using trilinear interpolation. The proposed haptic rendering algorithm improves the consistency between haptic and visual cues by computing a second proxy on the isosurface displayed on screen. Our experiments demonstrate the improvements in the isosurface extraction stage as well as the robustness and efficiency of the complete algorithm.
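
    Marching tetrahedra classifies the four vertices of each tetrahedron against the isovalue and emits one or two triangles from linearly interpolated edge crossings. Below is a minimal sketch of that per-tetrahedron case logic only; vertex winding, the paper's topology guarantees, and the haptic proxy computation are not handled here.

```python
import numpy as np

def marching_tet(verts, vals, iso):
    """Triangles (lists of 3 points) approximating the isosurface inside one
    tetrahedron, using linear interpolation along the edges that cross `iso`.
    Triangle orientation/winding is not made consistent in this sketch."""
    verts, vals = np.asarray(verts, float), np.asarray(vals, float)
    inside = [i for i in range(4) if vals[i] < iso]
    outside = [i for i in range(4) if vals[i] >= iso]

    def edge_point(i, j):              # point on edge (i, j) where the field equals iso
        t = (iso - vals[i]) / (vals[j] - vals[i])
        return verts[i] + t * (verts[j] - verts[i])

    if len(inside) in (0, 4):
        return []                                        # no crossing in this tet
    if len(inside) == 1 or len(outside) == 1:            # one vertex cut off: one triangle
        a = inside[0] if len(inside) == 1 else outside[0]
        others = [i for i in range(4) if i != a]
        return [[edge_point(a, b) for b in others]]
    # Two vertices on each side: the cut is a quadrilateral, split into two triangles.
    (a, b), (c, d) = inside, outside
    p = [edge_point(a, c), edge_point(a, d), edge_point(b, d), edge_point(b, c)]
    return [[p[0], p[1], p[2]], [p[0], p[2], p[3]]]

tet = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
print(len(marching_tet(tet, vals=[0.0, 1.0, 1.0, 0.2], iso=0.5)))   # -> 2 triangles
```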

  8. Case study of isosurface extraction algorithm performance

    SciTech Connect

    Sutton, P M; Hansen, C D; Shen, H; Schikore, D

    1999-12-14

    Isosurface extraction is an important and useful visualization method. Over the past ten years, the field has seen numerous isosurface extraction techniques published, leaving the user in a quandary about which one should be used. Some papers have published complexity analyses of the techniques, yet empirical evidence comparing different methods is lacking. This case study presents a comparative study of several representative isosurface extraction algorithms. It reports and analyzes empirical measurements of execution times and memory behavior for each algorithm. The results show that asymptotically optimal techniques may not be the best choice when implemented on modern computer architectures.

  9. A graph algebra for scalable visual analytics.

    PubMed

    Shaverdian, Anna A; Zhou, Hao; Michailidis, George; Jagadish, Hosagrahar V

    2012-01-01

    Visual analytics (VA), which combines analytical techniques with advanced visualization features, is fast becoming a standard tool for extracting information from graph data. Researchers have developed many tools for this purpose, suggesting a need for formal methods to guide these tools' creation. Increased data demands on computing require redesigning VA tools to consider performance and reliability in the context of analysis of exascale datasets. Furthermore, visual analysts need a way to document their analyses for reuse and results justification. A VA graph framework encapsulated in a graph algebra helps address these needs. Its atomic operators include selection and aggregation. The framework employs a visual operator and supports dynamic attributes of data to enable scalable visual exploration of data.

  10. Scalable Visual Reasoning: Supporting Collaboration through Distributed Analysis

    SciTech Connect

    Pike, William A.; May, Richard A.; Baddeley, Bob; Riensche, Roderick M.; Bruce, Joe; Younkin, Katarina

    2007-05-21

    We present a visualization environment called the Scalable Reasoning System (SRS) that provides a suite of tools for the collection, analysis, and dissemination of reasoning products. This environment is designed to function across multiple platforms, bringing the display of visual information and the capture of reasoning associated with that information to both mobile and desktop clients. The service-oriented architecture of SRS promotes collaboration and interaction between users regardless of their location or platform. Visualization services allow data processing to be centralized and analysis results collected from distributed clients in real time. We use the concept of “reasoning artifacts” to capture the analytic value attached to individual pieces of information and collections thereof, helping to fuse the foraging and sense-making loops in information analysis. Reasoning structures composed of these artifacts can be shared across platforms while maintaining references to the analytic activity (such as interactive visualization) that produced them.

  11. The Scalable Reasoning System: Lightweight Visualization for Distributed Analytics

    SciTech Connect

    Pike, William A.; Bruce, Joseph R.; Baddeley, Robert L.; Best, Daniel M.; Franklin, Lyndsey; May, Richard A.; Rice, Douglas M.; Riensche, Roderick M.; Younkin, Katarina

    2008-11-01

    A central challenge in visual analytics is the creation of accessible, widely distributable analysis applications that bring the benefits of visual discovery to as broad a user base as possible. Moreover, to support the role of visualization in the knowledge creation process, it is advantageous to allow users to describe the reasoning strategies they employ while interacting with analytic environments. We introduce an application suite called the Scalable Reasoning System (SRS), which provides web-based and mobile interfaces for visual analysis. The service-oriented analytic framework that underlies SRS provides a platform for deploying pervasive visual analytic environments across an enterprise. SRS represents a “lightweight” approach to visual analytics whereby thin client analytic applications can be rapidly deployed in a platform-agnostic fashion. Client applications support multiple coordinated views while giving analysts the ability to record evidence, assumptions, hypotheses and other reasoning artifacts. We describe the capabilities of SRS in the context of a real-world deployment at a regional law enforcement organization.

  12. Fast scalable visualization techniques for interactive billion-particle walkthrough

    NASA Astrophysics Data System (ADS)

    Liu, Xinlian

    This research develops a comprehensive framework for interactive walkthrough involving one billion particles in an immersive virtual environment to enable interrogative visualization of large atomistic simulation data. As a mixture of scientific and engineering approaches, the framework is based on four key techniques: adaptive data compression based on space-filling curves, octree-based visibility and occlusion culling, predictive caching based on machine learning, and scalable data reduction based on parallel and distributed processing. In terms of parallel rendering, this system combines functional parallelism, data parallelism, and temporal parallelism to improve interactivity. The visualization framework will be applicable not only to material simulation, but also to computational biology, applied mathematics, mechanical engineering, and nanotechnology.
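
    Space-filling-curve layouts such as the Z-order (Morton) curve map 3D cell coordinates to a 1D index that preserves spatial locality, which is what makes them useful for adaptive compression and out-of-core access. The small sketch below shows only the indexing step; it does not reproduce the compression scheme itself, and the function name is mine.

```python
def morton3d(x: int, y: int, z: int, bits: int = 10) -> int:
    """Interleave the bits of (x, y, z) into a single Z-order (Morton) index.
    Nearby grid cells tend to get nearby indices, the locality property that
    space-filling-curve layouts exploit for compression and out-of-core access."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i)
        code |= ((y >> i) & 1) << (3 * i + 1)
        code |= ((z >> i) & 1) << (3 * i + 2)
    return code

# Sorting particle grid cells by Morton index groups spatial neighbors together.
cells = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 1), (2, 0, 0)]
print(sorted(cells, key=lambda c: morton3d(*c)))
# -> [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 1), (2, 0, 0)]
```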

  13. Scalable and portable visualization of large atomistic datasets

    NASA Astrophysics Data System (ADS)

    Sharma, Ashish; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya

    2004-10-01

    A scalable and portable code named Atomsviewer has been developed to interactively visualize a large atomistic dataset consisting of up to a billion atoms. The code uses a hierarchical view frustum-culling algorithm based on the octree data structure to efficiently remove atoms outside of the user's field-of-view. Probabilistic and depth-based occlusion-culling algorithms then select atoms that have a high probability of being visible. Finally, a multiresolution algorithm is used to render the selected subset of visible atoms at varying levels of detail. Atomsviewer is written in C++ and OpenGL, and it has been tested on a number of architectures including Windows, Macintosh, and SGI. Atomsviewer has been used to visualize tens of millions of atoms on a standard desktop computer and, in its parallel version, up to a billion atoms.

    Program summary
    Title of program: Atomsviewer
    Catalogue identifier: ADUM
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADUM
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Computer for which the program is designed and others on which it has been tested: 2.4 GHz Pentium 4/Xeon processor, professional graphics card; Apple G4 (867 MHz)/G5, professional graphics card
    Operating systems under which the program has been tested: Windows 2000/XP, Mac OS 10.2/10.3, SGI IRIX 6.5
    Programming languages used: C++, C and OpenGL
    Memory required to execute with typical data: 1 gigabyte of RAM
    High speed storage required: 60 gigabytes
    No. of lines in the distributed program including test data, etc.: 550 241
    No. of bytes in the distributed program including test data, etc.: 6 258 245
    Number of bits in a word: Arbitrary
    Number of processors used: 1
    Has the code been vectorized or parallelized: No
    Distribution format: tar gzip file
    Nature of physical problem: Scientific visualization of atomic systems
    Method of solution: Rendering of atoms using computer graphic techniques, culling algorithms for data

  14. Trident: scalable compute archives: workflows, visualization, and analysis

    NASA Astrophysics Data System (ADS)

    Gopu, Arvind; Hayashi, Soichi; Young, Michael D.; Kotulla, Ralf; Henschel, Robert; Harbeck, Daniel

    2016-08-01

    The Astronomy scientific community has embraced Big Data processing challenges, e.g. those associated with time-domain astronomy, and come up with a variety of novel and efficient data processing solutions. However, data processing is only a small part of the Big Data challenge. Efficient knowledge discovery and scientific advancement in the Big Data era require new and equally efficient tools: modern user interfaces for searching, identifying and viewing data online without direct access to the data; tracking of data provenance; searching, plotting and analyzing metadata; interactive visual analysis, especially of (time-dependent) image data; and the ability to execute pipelines on supercomputing and cloud resources with minimal user overhead or expertise, even for novice computing users. The Trident project at Indiana University offers a comprehensive web and cloud-based microservice software suite that enables the straightforward deployment of highly customized Scalable Compute Archive (SCA) systems, including extensive visualization and analysis capabilities, with a minimal amount of additional coding. Trident seamlessly scales up or down in terms of data volumes and computational needs, and allows feature sets within a web user interface to be quickly adapted to meet individual project requirements. Domain experts only have to provide code or business logic about handling/visualizing their domain's data products and about executing their pipelines and application workflows. Trident's microservices architecture is made up of light-weight services connected by a REST API and/or a message bus; web interface elements are built using the NodeJS, AngularJS, and HighCharts JavaScript libraries among others, while backend services are written in NodeJS, PHP/Zend, and Python. The software suite currently consists of (1) a simple workflow execution framework to integrate, deploy, and execute pipelines and applications, (2) a progress service to monitor work flows and sub

  15. Scalable nanohelices for predictive studies and enhanced 3D visualization.

    PubMed

    Meagher, Kwyn A; Doblack, Benjamin N; Ramirez, Mercedes; Davila, Lilian P

    2014-11-12

    Spring-like materials are ubiquitous in nature and of interest in nanotechnology for energy harvesting, hydrogen storage, and biological sensing applications. For predictive simulations, it has become increasingly important to be able to model the structure of nanohelices accurately. To study the effect of local structure on the properties of these complex geometries one must develop realistic models. To date, software packages are rather limited in creating atomistic helical models. This work focuses on producing atomistic models of silica glass (SiO₂) nanoribbons and nanosprings for molecular dynamics (MD) simulations. Using an MD model of "bulk" silica glass, two computational procedures to precisely create the shape of nanoribbons and nanosprings are presented. The first method employs the AWK programming language and open-source software to effectively carve various shapes of silica nanoribbons from the initial bulk model, using desired dimensions and parametric equations to define a helix. With this method, accurate atomistic silica nanoribbons can be generated for a range of pitch values and dimensions. The second method involves a more robust code which allows flexibility in modeling nanohelical structures. This approach utilizes a C++ code particularly written to implement pre-screening methods as well as the mathematical equations for a helix, resulting in greater precision and efficiency when creating nanospring models. Using these codes, well-defined and scalable nanoribbons and nanosprings suited for atomistic simulations can be effectively created. An added value in both open-source codes is that they can be adapted to reproduce different helical structures, independent of material. In addition, a MATLAB graphical user interface (GUI) is used to enhance learning through visualization and interaction for a general user with the atomistic helical structures. One application of these methods is the recent study of nanohelices via MD simulations for
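
    The carving step described above amounts to keeping only the atoms of the bulk model that lie within a wire radius of a parametric helix. The original procedures use AWK and C++; the sketch below uses Python with hypothetical function and parameter names, and only illustrates the geometric criterion.

```python
import numpy as np

def carve_nanospring(atoms, radius, pitch, turns, wire_radius):
    """Keep only the atoms that lie within `wire_radius` of the helix
    x = R cos(t), y = R sin(t), z = pitch * t / (2*pi), 0 <= t <= 2*pi*turns.
    `atoms` is an (N, 3) array of positions; returns the carved subset.
    For large bulk models, a scipy.spatial.cKDTree over the helix samples
    would avoid the dense distance matrix built here."""
    atoms = np.asarray(atoms, float)
    t = np.linspace(0.0, 2.0 * np.pi * turns, 600)       # sampled helix axis
    helix = np.stack([radius * np.cos(t),
                      radius * np.sin(t),
                      pitch * t / (2.0 * np.pi)], axis=1)
    # Distance from every atom to the nearest sampled point on the helix curve.
    d = np.linalg.norm(atoms[:, None, :] - helix[None, :, :], axis=2).min(axis=1)
    return atoms[d <= wire_radius]

# Toy example: carve a spring out of a random "bulk" block of points.
rng = np.random.default_rng(0)
bulk = rng.uniform([-15, -15, 0], [15, 15, 40], size=(5000, 3))
spring = carve_nanospring(bulk, radius=10.0, pitch=15.0, turns=2.0, wire_radius=2.0)
print(spring.shape)
```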

  16. Scalable Nanohelices for Predictive Studies and Enhanced 3D Visualization

    PubMed Central

    Meagher, Kwyn A.; Doblack, Benjamin N.; Ramirez, Mercedes; Davila, Lilian P.

    2014-01-01

    Spring-like materials are ubiquitous in nature and of interest in nanotechnology for energy harvesting, hydrogen storage, and biological sensing applications.  For predictive simulations, it has become increasingly important to be able to model the structure of nanohelices accurately.  To study the effect of local structure on the properties of these complex geometries one must develop realistic models.  To date, software packages are rather limited in creating atomistic helical models.  This work focuses on producing atomistic models of silica glass (SiO2) nanoribbons and nanosprings for molecular dynamics (MD) simulations. Using an MD model of “bulk” silica glass, two computational procedures to precisely create the shape of nanoribbons and nanosprings are presented.  The first method employs the AWK programming language and open-source software to effectively carve various shapes of silica nanoribbons from the initial bulk model, using desired dimensions and parametric equations to define a helix.  With this method, accurate atomistic silica nanoribbons can be generated for a range of pitch values and dimensions.  The second method involves a more robust code which allows flexibility in modeling nanohelical structures.  This approach utilizes a C++ code particularly written to implement pre-screening methods as well as the mathematical equations for a helix, resulting in greater precision and efficiency when creating nanospring models.  Using these codes, well-defined and scalable nanoribbons and nanosprings suited for atomistic simulations can be effectively created.  An added value in both open-source codes is that they can be adapted to reproduce different helical structures, independent of material.  In addition, a MATLAB graphical user interface (GUI) is used to enhance learning through visualization and interaction for a general user with the atomistic helical structures.  One application of these methods is the recent study of nanohelices

  17. ParaText: scalable text analysis and visualization.

    SciTech Connect

    Dunlavy, Daniel M.; Stanton, Eric T.; Shead, Timothy M.

    2010-07-01

    Automated analysis of unstructured text documents (e.g., web pages, newswire articles, research publications, business reports) is a key capability for solving important problems in areas including decision making, risk assessment, social network analysis, intelligence analysis, scholarly research and others. However, as data sizes continue to grow in these areas, scalable processing, modeling, and semantic analysis of text collections becomes essential. In this paper, we present the ParaText text analysis engine, a distributed memory software framework for processing, modeling, and analyzing collections of unstructured text documents. Results on several document collections using hundreds of processors are presented to illustrate the flexibility, extensibility, and scalability of the entire process of text modeling from raw data ingestion to application analysis.

  18. Infrastructure for Scalable and Interoperable Visualization and Analysis Software Technology

    SciTech Connect

    Bethel, E. Wes

    2004-08-01

    This document describes the LBNL vision for issues to be considered when assembling a large, multi-institution visualization and analysis effort. It was drafted at the request of the PNNL National Visual Analytics Center in July 2004.

  19. Scalable Visualization, applied to Galaxies, Oceans & Brains

    NASA Astrophysics Data System (ADS)

    Pailthorpe, Bernard

    2001-06-01

    The frontiers of Scientific Visualisation now include problems arising with data that scales in size or complexity. New metaphors may be needed to navigate, analyse and display the data emerging from bio-diversity, genomic and socio-economic studies. This talk addresses the challenges in generating algorithms and software libraries which are suitable for the large scale data emerging from tera-scale simulations and instruments. With larger and more complex datasets, moving into the 100GB-1TB realm, scalable methodologies and tools are required. The collaborative efforts to address these challenges, currently underway at the San Diego Supercomputer Center and within the National Partnership for Advanced Computational Infrastructure (NPACI), will be summarised. The ultimate aim of this R&D program is to facilitate queries and analysis of multiple, large data sets derived from motivating applications in astrophysics, planetary-scale oceanographic simulations and human brain mapping. Research challenges in such science application domains provide the justification for developing such tools. Previously, planetary-scale oceanographic simulations had resolutions limited to 2 deg. latitude and longitude. With Teraflop computing resources coming on line, such simulations will be conducted at 10x (and presently 100x) resolution, soon yielding multiple sets of 100 GByte numerical output. In mapping the human brain, up to four distinct imaging modalities are used, with datasets already at 10s of GBytes. The immediate research challenge is to composite these images, facilitating simultaneous analysis of structural and functional information. These applications manifest the need for high capacity computer displays, moving beyond the usual 1 Mega-pixel desktops to 10 M-pixel and more. Developments in this area will be discussed.

  20. A transparently scalable visualization architecture for exploring the universe.

    PubMed

    Fu, Chi-Wing; Hanson, Andrew J

    2007-01-01

    Modern astronomical instruments produce enormous amounts of three-dimensional data describing the physical Universe. The currently available data sets range from the solar system to nearby stars and portions of the Milky Way Galaxy, including the interstellar medium and some extrasolar planets, and extend out to include galaxies billions of light years away. Because of its gigantic scale and the fact that it is dominated by empty space, modeling and rendering the Universe is very different from modeling and rendering ordinary three-dimensional virtual worlds at human scales. Our purpose is to introduce a comprehensive approach to an architecture solving this visualization problem that encompasses the entire Universe while seeking to be as scale-neutral as possible. One key element is the representation of model-rendering procedures using power scaled coordinates (PSC), along with various PSC-based techniques that we have devised to generalize and optimize the conventional graphics framework to the scale domains of astronomical visualization. Employing this architecture, we have developed an assortment of scale-independent modeling and rendering methods for a large variety of astronomical models, and have demonstrated scale-insensitive interactive visualizations of the physical Universe covering scales ranging from human scale to the Earth, to the solar system, to the Milky Way Galaxy, and to the entire observable Universe.
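
    Power scaled coordinates store a point as a near-unit-length mantissa plus a scale exponent, so positions from human scale to cosmological scale fit in ordinary floating point. The following is a minimal sketch of the representation only, assuming a scaling base of 10; the paper defines PSC more generally, together with PSC arithmetic and rendering operations not shown here.

```python
import math

BASE = 10.0   # scaling base assumed for this sketch

def psc(x, y, z):
    """Encode a position as power scaled coordinates (x', y', z', s) with
    actual position = (x', y', z') * BASE**s and the mantissa kept near unit
    length, so scales from metres to gigaparsecs fit in ordinary floats."""
    r = math.sqrt(x * x + y * y + z * z)
    if r == 0.0:
        return (0.0, 0.0, 0.0, 0.0)
    s = math.floor(math.log(r, BASE))
    f = BASE ** (-s)
    return (x * f, y * f, z * f, float(s))

def psc_to_cartesian(p):
    x, y, z, s = p
    return (x * BASE ** s, y * BASE ** s, z * BASE ** s)

one_au_in_m = 1.495978707e11
print(psc(one_au_in_m, 0.0, 0.0))                      # mantissa ~1.5, exponent 11
print(psc_to_cartesian(psc(one_au_in_m, 0.0, 0.0)))    # round trip back to metres
```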

  1. CGLX: a scalable, high-performance visualization framework for networked display environments.

    PubMed

    Doerr, Kai-Uwe; Kuester, Falko

    2011-03-01

    The Cross Platform Cluster Graphics Library (CGLX) is a flexible and transparent OpenGL-based graphics framework for distributed, high-performance visualization systems. CGLX allows OpenGL based applications to utilize massively scalable visualization clusters such as multiprojector or high-resolution tiled display environments and to maximize the achievable performance and resolution. The framework features a programming interface for hardware-accelerated rendering of OpenGL applications on visualization clusters, mimicking a GLUT-like (OpenGL-Utility-Toolkit) interface to enable smooth translation of single-node applications to distributed parallel rendering applications. CGLX provides a unified, scalable, distributed OpenGL context to the user by intercepting and manipulating certain OpenGL directives. CGLX's interception mechanism, in combination with the core functionality for users to register callbacks, enables this framework to manage a visualization grid without additional implementation requirements to the user. Although CGLX grants access to its core engine, allowing users to change its default behavior, general development can occur in the context of a standalone desktop. The framework provides an easy-to-use graphical user interface (GUI) and tools to test, setup, and configure a visualization cluster. This paper describes CGLX's architecture, tools, and systems components. We present performance and scalability tests with different types of applications, and we compare the results with a Chromium-based approach.

  2. Scalable Multivariate Volume Visualization and Analysis Based on Dimension Projection and Parallel Coordinates.

    PubMed

    Guo, Hanqi; Xiao, He; Yuan, Xiaoru

    2012-09-01

    In this paper, we present an effective and scalable system for multivariate volume data visualization and analysis with a novel transfer function interface design that tightly couples parallel coordinates plots (PCP) and MDS-based dimension projection plots. In our system, the PCP visualizes the data distribution of each variate (dimension) and the MDS plots project features. They are integrated seamlessly to provide flexible feature classification without context switching between different data presentations during the user interaction. The proposed interface enables users to identify relevant correlation clusters and assign optical properties with lassos, magic wand, and other tools. Furthermore, direct sketching on the volume rendered images has been implemented to probe and edit features. With our system, users can interactively analyze multivariate volumetric data sets by navigating and exploring feature spaces in unified PCP and MDS plots. To further support large-scale multivariate volume data visualization and analysis, Scalable Pivot MDS (SPMDS), parallel adaptive continuous PCP rendering, as well as parallel rendering techniques are developed and integrated into our visualization system. Our experiments show that the system is effective in multivariate volume data visualization and its performance is highly scalable for data sets with different sizes and number of variates.

  3. AggreSet: Rich and Scalable Set Exploration using Visualizations of Element Aggregations.

    PubMed

    Yalçin, M Adil; Elmqvist, Niklas; Bederson, Benjamin B

    2016-01-01

    Datasets commonly include multi-value (set-typed) attributes that describe set memberships over elements, such as genres per movie or courses taken per student. Set-typed attributes describe rich relations across elements, sets, and the set intersections. Increasing the number of sets results in a combinatorial growth of relations and creates scalability challenges. Exploratory tasks (e.g. selection, comparison) have commonly been designed separately for set-typed attributes, which reduces interface consistency. To improve on scalability and to support rich, contextual exploration of set-typed data, we present AggreSet. AggreSet creates aggregations for each data dimension: sets, set-degrees, set-pair intersections, and other attributes. It visualizes the element count per aggregate using a matrix plot for set-pair intersections, and histograms for set lists, set-degrees and other attributes. Its non-overlapping visual design is scalable to numerous and large sets. AggreSet supports selection, filtering, and comparison as core exploratory tasks. It allows analysis of set relations including subsets, disjoint sets and set intersection strength, and also features perceptual set ordering for detecting patterns in set matrices. Its interaction is designed for rich and rapid data exploration. We demonstrate results on a wide range of datasets from different domains with varying characteristics, and report on expert reviews and a case study using student enrollment and degree data with assistant deans at a major public university.
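
    The aggregations AggreSet visualizes (elements per set, set-degree histogram, set-pair intersection counts) can all be computed in one pass over a set-typed attribute. Below is a toy sketch of those counts with made-up data; it says nothing about AggreSet's actual implementation or visual design.

```python
from collections import Counter
from itertools import combinations

# Toy set-typed attribute: genres per movie (element -> set memberships).
genres_per_movie = {
    "Alien":   {"SciFi", "Horror"},
    "Heat":    {"Crime", "Thriller"},
    "Gattaca": {"SciFi", "Drama", "Thriller"},
    "Se7en":   {"Crime", "Thriller", "Horror"},
}

set_sizes = Counter()       # elements per set
degree_hist = Counter()     # how many elements belong to 1, 2, 3, ... sets
pair_counts = Counter()     # elements per set-pair intersection

for element, sets in genres_per_movie.items():
    degree_hist[len(sets)] += 1
    for s in sets:
        set_sizes[s] += 1
    for a, b in combinations(sorted(sets), 2):
        pair_counts[(a, b)] += 1

print(set_sizes)      # counts backing the "set list" histogram
print(degree_hist)    # counts backing the "set-degree" histogram
print(pair_counts)    # counts backing the set-pair intersection matrix
```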

  4. Interactive Querying over Large Network Data: Scalability, Visualization, and Interaction Design.

    PubMed

    Pienta, Robert; Tamersoy, Acar; Tong, Hanghang; Endert, Alex; Chau, Duen Horng

    2015-01-01

    Given the explosive growth of modern graph data, new methods are needed that allow for the querying of complex graph structures without the need for complicated querying languages; in short, interactive graph querying is desirable. We describe our work towards achieving our overall research goal of designing and developing an interactive querying system for large network data. We focus on three critical aspects: scalable data mining algorithms, graph visualization, and interaction design. In our previous work we completed an approximate subgraph matching system called MAGE that provides the algorithmic foundation, allowing us to query a graph with hundreds of millions of edges. Our preliminary work on visual graph querying, Graphite, was the first step in the process of making an interactive graph querying system. We are in the process of designing the graph visualization and robust interaction needed to make truly interactive graph querying a reality.

  5. KOLAM: a cross-platform architecture for scalable visualization and tracking in wide-area imagery

    NASA Astrophysics Data System (ADS)

    Fraser, Joshua; Haridas, Anoop; Seetharaman, Guna; Rao, Raghuveer M.; Palaniappan, Kannappan

    2013-05-01

    KOLAM is an open, cross-platform, interoperable, scalable and extensible framework supporting a novel multi-scale spatiotemporal dual-cache data structure for big data visualization and visual analytics. This paper focuses on the use of KOLAM for target tracking in high-resolution, high throughput wide format video also known as wide-area motion imagery (WAMI). It was originally developed for the interactive visualization of extremely large geospatial imagery of high spatial and spectral resolution. KOLAM is platform, operating system and (graphics) hardware independent, and supports embedded datasets scalable from hundreds of gigabytes to feasibly petabytes in size on clusters, workstations, desktops and mobile computers. In addition to rapid roam, zoom and hyper-jump spatial operations, a large number of simultaneously viewable embedded pyramid layers (also referred to as multiscale or sparse imagery), interactive colormap and histogram enhancement, spherical projection and terrain maps are supported. The KOLAM software architecture was extended to support airborne wide-area motion imagery by organizing spatiotemporal tiles in very large format video frames using a temporal cache of tiled pyramid cached data structures. The current version supports WAMI animation, fast intelligent inspection, trajectory visualization and target tracking (digital tagging); the latter by interfacing with external automatic tracking software. One of the critical needs for working with WAMI is a supervised tracking and visualization tool that allows analysts to digitally tag multiple targets, quickly review and correct tracking results and apply geospatial visual analytic tools on the generated trajectories. One-click manual tracking combined with multiple automated tracking algorithms are available to assist the analyst and increase human effectiveness.

  6. Dynamic Isosurface Extraction and Level-of-Detail in Voxel Space

    SciTech Connect

    Lamphere, P.B.; Linebarger, J.M.

    1999-03-01

    A new visualization representation is described, which dramatically improves interactivity for scientific visualizations of structured grid data sets by creating isosurfaces at interactive speeds and with dynamically changeable levels-of-detail (LOD). This representation enables greater interactivity by allowing an analyst to dynamically specify both the desired isosurface threshold and required level-of-detail to be used while rendering the image. A scientist can therefore view very large isosurfaces at interactive speeds (with a low level-of-detail), but has the full data set always available for analysis. The key idea is that various levels-of-detail are represented as differently sized hexahedral virtual voxels, which are stored in a three-dimensional binary tree, or kd-tree; thus the level-of-detail representation is done in voxel space instead of the traditional approach which relies on surface or geometry space decimations. Utilizing the voxel space is an essential step to moving from a post-processing visualization paradigm to a quantitative, real-time paradigm. This algorithm has been implemented as an integral component of the EIGEN/VR project at Sandia National Laboratories, which provides a rich environment for scientists to interactively explore and visualize the results of very large-scale simulations performed on massively parallel supercomputers.

  7. Ray-tracing polymorphic multidomain spectral/hp elements for isosurface rendering.

    PubMed

    Nelson, Blake; Kirby, Robert M

    2006-01-01

    The purpose of this paper is to present a ray-tracing isosurface rendering algorithm for spectral/hp (high-order finite) element methods in which the visualization error is both quantified and minimized. Determination of the ray-isosurface intersection is accomplished by classic polynomial root-finding applied to a polynomial approximation obtained by projecting the finite element solution over element-partitioned segments along the ray. Combining the smoothness properties of spectral/hp elements with classic orthogonal polynomial approximation theory, we devise an adaptive scheme which allows the polynomial approximation along a ray-segment to be arbitrarily close to the true solution. The resulting images converge toward a pixel-exact image at a rate far faster than sampling the spectral/hp element solution and applying classic low-order visualization techniques such as marching cubes.
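
    The core idea above is to approximate the scalar field along a ray segment by a polynomial and then find the ray-isosurface intersection by polynomial root finding. The sketch below substitutes a plain least-squares fit for the paper's adaptive orthogonal projection and error bound, so it only illustrates the projection-then-root-finding structure; the function and parameter names are mine.

```python
import numpy as np

def first_isosurface_hit(field_along_ray, t0, t1, iso, degree=8, samples=64):
    """Approximate the scalar field along one ray segment by a polynomial of the
    given degree (least-squares fit here; the paper uses an orthogonal-polynomial
    projection with a controllable error bound), then return the smallest real
    root of p(t) - iso inside [t0, t1], or None if there is no crossing."""
    ts = np.linspace(t0, t1, samples)
    coeffs = np.polynomial.polynomial.polyfit(ts, [field_along_ray(t) for t in ts], degree)
    shifted = coeffs.copy()
    shifted[0] -= iso                                   # roots of p(t) - iso
    roots = np.polynomial.polynomial.polyroots(shifted)
    real = [r.real for r in roots if abs(r.imag) < 1e-9 and t0 <= r.real <= t1]
    return min(real) if real else None

# Example: a smooth field sampled along a ray; isovalue 0.5.
f = lambda t: np.sin(3.0 * t) * np.exp(-0.2 * t)
print(first_isosurface_hit(f, 0.0, 4.0, 0.5))           # first crossing near t ~ 0.19
```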

  8. Shrink-wrapped isosurface from cross sectional images.

    PubMed

    Choi, Y K; Hahn, J K

    2007-12-01

    This paper addresses a new surface reconstruction scheme for approximating the isosurface from a set of tomographic cross sectional images. Unlike the well-known Marching Cubes (MC) algorithm, our method does not extract the iso-density surface (isosurface) directly from the voxel data but calculates the iso-density point (isopoint) first. After building a coarse initial mesh approximating the ideal isosurface by the cell-boundary representation, it metamorphoses the mesh into the final isosurface by a relaxation scheme, called the shrink-wrapping process. Compared with the MC algorithm, our method is robust and does not produce any cracks on the surface. Furthermore, since it is possible to utilize many additional isopoints during the surface reconstruction process by extending the adjacency definition, the resulting surface can theoretically be better in quality than that of the MC algorithm. In our experiments, the method proved to be very robust and efficient for isosurface reconstruction from cross sectional images.

  9. Isosurface construction in any dimension using Convex Hulls.

    PubMed

    Bhaniramka, Praveen; Wenger, Rephael; Crawfis, Roger

    2004-01-01

    We present an algorithm for constructing isosurfaces in any dimension. The input to the algorithm is a set of scalar values in a d-dimensional regular grid of (topological) hypercubes. The output is a set of (d-1)-dimensional simplices forming a piecewise linear approximation to the isosurface. The algorithm constructs the isosurface piecewise within each hypercube in the grid using the convex hull of an appropriate set of points. We prove that our algorithm correctly produces a triangulation of a (d-1)-manifold with boundary. In dimensions three and four, lookup tables with 2^8 and 2^16 entries, respectively, can be used to speed the algorithm's running time. In three dimensions, this gives the popular Marching Cubes algorithm. We discuss applications of four-dimensional isosurface construction to time-varying isosurfaces, interval volumes, and morphing.

  10. Reinventing the Contingency Wheel: Scalable Visual Analytics of Large Categorical Data.

    PubMed

    Alsallakh, B; Aigner, W; Miksch, S; Gröller, M E

    2012-12-01

    Contingency tables summarize the relations between categorical variables and arise in both scientific and business domains. Asymmetrically large two-way contingency tables pose a problem for common visualization methods. The Contingency Wheel has been recently proposed as an interactive visual method to explore and analyze such tables. However, the scalability and readability of this method are limited when dealing with large and dense tables. In this paper we present Contingency Wheel++, new visual analytics methods that overcome these major shortcomings: (1) regarding automated methods, a measure of association based on Pearson's residuals alleviates the bias of the raw residuals originally used, (2) regarding visualization methods, a frequency-based abstraction of the visual elements eliminates overlapping and makes analyzing both positive and negative associations possible, and (3) regarding the interactive exploration environment, a multi-level overview+detail interface enables exploring individual data items that are aggregated in the visualization or in the table using coordinated views. We illustrate the applicability of these new methods with a use case and show how they enable discovering and analyzing nontrivial patterns and associations in large categorical data.
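
    Pearson residuals compare observed cell counts in a contingency table against the counts expected under independence; the association measure in Contingency Wheel++ is built on them. Below is a minimal sketch of the basic residual computation only; the paper's measure adds further normalization not shown here.

```python
import numpy as np

def pearson_residuals(table):
    """Pearson residuals (observed - expected) / sqrt(expected) for a two-way
    contingency table; large positive values indicate stronger-than-expected
    association between the corresponding row and column categories."""
    table = np.asarray(table, float)
    row = table.sum(axis=1, keepdims=True)
    col = table.sum(axis=0, keepdims=True)
    expected = row @ col / table.sum()       # counts expected under independence
    return (table - expected) / np.sqrt(expected)

counts = np.array([[30,  5, 10],
                   [10, 20,  5]])
print(np.round(pearson_residuals(counts), 2))
```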

  11. Scalable Predictive Analysis in Critically Ill Patients Using a Visual Open Data Analysis Platform.

    PubMed

    Van Poucke, Sven; Zhang, Zhongheng; Schmitz, Martin; Vukicevic, Milan; Laenen, Margot Vander; Celi, Leo Anthony; De Deyne, Cathy

    2016-01-01

    With the accumulation of large amounts of health related data, predictive analytics could stimulate the transformation of reactive medicine towards Predictive, Preventive and Personalized (PPPM) Medicine, ultimately affecting both cost and quality of care. However, the high dimensionality and high complexity of the data involved prevent data-driven methods from being easily translated into clinically relevant models. Additionally, the application of cutting-edge predictive methods and data manipulation requires substantial programming skills, limiting their direct exploitation by medical domain experts. This leaves a gap between potential and actual data usage. In this study, the authors address this problem by focusing on open, visual environments, suited to be applied by the medical community. Moreover, we review code-free applications of big data technologies. As a showcase, a framework was developed for the meaningful use of data from critical care patients by integrating the MIMIC-II database in a data mining environment (RapidMiner) supporting scalable predictive analytics using visual tools (RapidMiner's Radoop extension). Guided by the CRoss-Industry Standard Process for Data Mining (CRISP-DM), the ETL process (Extract, Transform, Load) was initiated by retrieving data from the MIMIC-II tables of interest. As a use case, the correlation of platelet count and ICU survival was quantitatively assessed. Using visual tools for ETL on Hadoop and predictive modeling in RapidMiner, we developed robust processes for automatic building, parameter optimization and evaluation of various predictive models, under different feature selection schemes. Because these processes can be easily adopted in other projects, this environment is attractive for scalable predictive analytics in health research.

  12. Scalable Predictive Analysis in Critically Ill Patients Using a Visual Open Data Analysis Platform

    PubMed Central

    Poucke, Sven Van; Zhang, Zhongheng; Schmitz, Martin; Vukicevic, Milan; Laenen, Margot Vander; Celi, Leo Anthony; Deyne, Cathy De

    2016-01-01

    With the accumulation of large amounts of health related data, predictive analytics could stimulate the transformation of reactive medicine towards Predictive, Preventive and Personalized (PPPM) Medicine, ultimately affecting both cost and quality of care. However, the high dimensionality and high complexity of the data involved prevent data-driven methods from being easily translated into clinically relevant models. Additionally, the application of cutting-edge predictive methods and data manipulation requires substantial programming skills, limiting their direct exploitation by medical domain experts. This leaves a gap between potential and actual data usage. In this study, the authors address this problem by focusing on open, visual environments, suited to be applied by the medical community. Moreover, we review code-free applications of big data technologies. As a showcase, a framework was developed for the meaningful use of data from critical care patients by integrating the MIMIC-II database in a data mining environment (RapidMiner) supporting scalable predictive analytics using visual tools (RapidMiner’s Radoop extension). Guided by the CRoss-Industry Standard Process for Data Mining (CRISP-DM), the ETL process (Extract, Transform, Load) was initiated by retrieving data from the MIMIC-II tables of interest. As a use case, the correlation of platelet count and ICU survival was quantitatively assessed. Using visual tools for ETL on Hadoop and predictive modeling in RapidMiner, we developed robust processes for automatic building, parameter optimization and evaluation of various predictive models, under different feature selection schemes. Because these processes can be easily adopted in other projects, this environment is attractive for scalable predictive analytics in health research. PMID:26731286

  13. Scalable Linear Visual Feature Learning via Online Parallel Nonnegative Matrix Factorization.

    PubMed

    Zhao, Xueyi; Li, Xi; Zhang, Zhongfei; Shen, Chunhua; Zhuang, Yueting; Gao, Lixin; Li, Xuelong

    2016-12-01

    Visual feature learning, which aims to construct an effective feature representation for visual data, has a wide range of applications in computer vision. It is often posed as a problem of nonnegative matrix factorization (NMF), which constructs a linear representation for the data. Although NMF is typically parallelized for efficiency, traditional parallelization methods suffer from either an expensive computation or a high runtime memory usage. To alleviate this problem, we propose a parallel NMF method called alternating least square block decomposition (ALSD), which efficiently solves a set of conditionally independent optimization subproblems based on a highly parallelized fine-grained grid-based blockwise matrix decomposition. By assigning each block optimization subproblem to an individual computing node, ALSD can be effectively implemented in a MapReduce-based Hadoop framework. In order to cope with dynamically varying visual data, we further present an incremental version of ALSD, which is able to incrementally update the NMF solution with a low computational cost. Experimental results demonstrate the efficiency and scalability of the proposed methods as well as their applications to image clustering and image retrieval.
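
    At its core, alternating least squares NMF fixes one factor and solves a least-squares problem for the other, then swaps. The sketch below is a plain single-node version of that alternation with simple clipping to enforce nonnegativity; the blockwise grid decomposition and MapReduce distribution that define ALSD are not reproduced, and the function name is mine.

```python
import numpy as np

def als_nmf(V, rank, iters=200, eps=1e-9):
    """Plain alternating-least-squares NMF, V ~ W @ H with W, H >= 0.
    Each half-step solves a least-squares problem for one factor while the
    other is held fixed, then clips negatives to keep the factors nonnegative."""
    rng = np.random.default_rng(0)
    m, n = V.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(iters):
        H = np.clip(np.linalg.lstsq(W, V, rcond=None)[0], eps, None)
        W = np.clip(np.linalg.lstsq(H.T, V.T, rcond=None)[0].T, eps, None)
    return W, H

V = np.random.default_rng(1).random((20, 12))
W, H = als_nmf(V, rank=3)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))   # relative reconstruction error
```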

  14. JuxtaView - A tool for interactive visualization of large imagery on scalable tiled displays

    USGS Publications Warehouse

    Krishnaprasad, N.K.; Vishwanath, V.; Venkataraman, S.; Rao, A.G.; Renambot, L.; Leigh, J.; Johnson, A.E.; Davis, B.

    2004-01-01

    JuxtaView is a cluster-based application for viewing ultra-high-resolution images on scalable tiled displays. We present in JuxtaView a new parallel computing and distributed memory approach for out-of-core montage visualization, using LambdaRAM, a software-based network-level cache system. The ultimate goal of JuxtaView is to enable a user to interactively roam through potentially terabytes of distributed, spatially referenced image data such as those from electron microscopes, satellites and aerial photographs. In working towards this goal, we describe our first prototype implemented over a local area network, where the image is distributed using LambdaRAM, on the memory of all nodes of a PC cluster driving a tiled display wall. Aggressive pre-fetching schemes employed by LambdaRAM help to reduce the latency involved in remote memory access. We compare LambdaRAM with a more traditional memory-mapped file approach for out-of-core visualization. © 2004 IEEE.

  15. CoreWall: A Scalable Interactive Tool for Visual Core Description, Data Visualization, and Stratigraphic Correlation

    NASA Astrophysics Data System (ADS)

    Rao, A. G.; Rack, F.; Kamp, B.; Fils, D.; Ito, E.; Morin, P.; Higgins, S.; Leigh, J.; Johnson, A.; Renambot, L.

    2005-12-01

    A primary need for studies of sediment, ice and rock cores is an integrated environment for visual core description. CoreWall is a tool that uses digital line-scan images of split-core surfaces as the fundamental template for all sediment descriptive work. Textual and image annotations support description about structures, lithologic variation, macroscopic grain size variation, bioturbation intensity, chemical composition, and micropaleontology at points of interest registered within the core image itself. The integration of core-section images with discrete data streams and nested annotations provide a robust approach to the description of sediment and rock cores. This project provides for the real-time and/or simultaneous display of multiple integrated databases, with all the data rectified (co-registered) to the fundamental template of the core image. This visualization tool enables rapid multidisciplinary interpretation during the Initial Core Description process. A prototype computer environment for working with the high-resolution data is the Personal GeoWall-2, a single computer used to drive six tiled LCD screens. As a wideband display, the Personal GeoWall-2 can show more content than a single display system. This new visualization tool is both scalable and portable from the Personal GeoWall-2 environment down to a single screen driven by a laptop computer. Using the screen resolution, core sections are drawn at a life size scale with both core and downhole wireline logging data drawn alongside. Using standard computer interfaces, individuals can pan through meters of core imagery and data, annotating along the length of the core itself. They can zoom in on a high-resolution core image to see details that appear under the proper lighting in which the images were taken. Using the Internet, CoreWall can retrieve images and data files from remote databases or web portals/services, such as CHRONOS, allowing individuals from ship to shore to look at data and

  16. Interactive View-Dependent Rendering of Large Isosurfaces

    SciTech Connect

    Gregorski, B; Duchaineau, M; Lindstrom, P; Pascucci, V; Joy, K I

    2002-11-19

    We present an algorithm for interactively extracting and rendering isosurfaces of large volume datasets in a view-dependent fashion. A recursive tetrahedral mesh refinement scheme, based on longest edge bisection, is used to hierarchically decompose the data into a multiresolution structure. This data structure allows fast extraction of arbitrary isosurfaces to within user specified view-dependent error bounds. A data layout scheme based on hierarchical space filling curves provides access to the data in a cache coherent manner that follows the data access pattern indicated by the mesh refinement.
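
    Longest edge bisection splits a tetrahedron at the midpoint of its longest edge, producing two children; applied recursively it yields the multiresolution hierarchy this approach uses. The following is a minimal sketch of that single refinement step, with a hypothetical function name; the hierarchical space-filling-curve layout and error-bound tests are not shown.

```python
import numpy as np

def longest_edge_bisect(tet):
    """Split a tetrahedron (4x3 array of vertices) into two children by inserting
    the midpoint of its longest edge - the refinement step applied recursively to
    build the multiresolution tetrahedral hierarchy."""
    tet = np.asarray(tet, float)
    pairs = [(i, j) for i in range(4) for j in range(i + 1, 4)]
    i, j = max(pairs, key=lambda p: np.linalg.norm(tet[p[0]] - tet[p[1]]))
    mid = 0.5 * (tet[i] + tet[j])
    others = [k for k in range(4) if k not in (i, j)]
    child_a = np.array([tet[i], mid, tet[others[0]], tet[others[1]]])
    child_b = np.array([tet[j], mid, tet[others[0]], tet[others[1]]])
    return child_a, child_b

tet = [(0, 0, 0), (2, 0, 0), (0, 1, 0), (0, 0, 1)]
a, b = longest_edge_bisect(tet)
print(a, b, sep="\n")
```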

  17. A Scalable Cloud Library Empowering Big Data Management, Diagnosis, and Visualization of Cloud-Resolving Models

    NASA Astrophysics Data System (ADS)

    Zhou, S.; Tao, W. K.; Li, X.; Matsui, T.; Sun, X. H.; Yang, X.

    2015-12-01

    A cloud-resolving model (CRM) is an atmospheric numerical model that can numerically resolve clouds and cloud systems at 0.25-5 km horizontal grid spacings. The main advantage of the CRM is that it can allow explicit interactive processes between microphysics, radiation, turbulence, surface, and aerosols without subgrid cloud fraction, overlapping and convective parameterization. Because of their fine resolution and complex physical processes, it is challenging for the CRM community to i) visualize/inter-compare CRM simulations, ii) diagnose key processes for cloud-precipitation formation and intensity, and iii) evaluate against NASA's field campaign data and L1/L2 satellite data products, due to large data volume (~10TB) and the complexity of the CRM's physical processes. We have been building the Super Cloud Library (SCL) upon a Hadoop framework, capable of CRM database management, distribution, visualization, subsetting, and evaluation in a scalable way. The current SCL capability includes: (1) an SCL data model that enables various CRM simulation outputs in NetCDF, including those of the NASA-Unified Weather Research and Forecasting (NU-WRF) and Goddard Cumulus Ensemble (GCE) models, to be accessed and processed by Hadoop; (2) a parallel NetCDF-to-CSV converter that supports NU-WRF and GCE model outputs; (3) a technique that visualizes Hadoop-resident data with IDL; (4) a technique that subsets Hadoop-resident data, compliant with the SCL data model, with HIVE or Impala via HUE's Web interface; (5) a prototype that enables a Hadoop MapReduce application to dynamically access and process data residing in a parallel file system, PVFS2 or CephFS, where high performance computing (HPC) simulation outputs such as NU-WRF's and GCE's are located. We are testing Apache Spark to speed up SCL data processing and analysis. With the SCL capabilities, SCL users can conduct large-domain on-demand tasks without downloading voluminous CRM datasets and various observations from NASA Field Campaigns and Satellite data to a

  18. ArrayXPath: mapping and visualizing microarray gene-expression data with integrated biological pathway resources using Scalable Vector Graphics.

    PubMed

    Chung, Hee-Joon; Kim, Mingoo; Park, Chan Hee; Kim, Jihoon; Kim, Ju Han

    2004-07-01

    Biological pathways can provide key information on the organization of biological systems. ArrayXPath (http://www.snubi.org/software/ArrayXPath/) is a web-based service for mapping and visualizing microarray gene-expression data for integrated biological pathway resources using Scalable Vector Graphics (SVG). By integrating major bio-databases and searching pathway resources, ArrayXPath automatically maps different types of identifiers from microarray probes and pathway elements. When one inputs gene-expression clusters, ArrayXPath produces a list of the best matching pathways for each cluster. We applied Fisher's exact test and the false discovery rate (FDR) to evaluate the statistical significance of the association between a cluster and a pathway while correcting the multiple-comparison problem. ArrayXPath produces Javascript-enabled SVGs for web-enabled interactive visualization of pathways integrated with gene-expression profiles.
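
    Per cluster-pathway pair, the association test described above is a 2x2 Fisher's exact test over cluster membership versus pathway membership, with a false discovery rate correction applied across all pairs. Below is a small sketch of both steps, assuming SciPy is available; the function names and gene identifiers are made up, and the FDR shown is the standard Benjamini-Hochberg procedure as one common choice.

```python
from scipy.stats import fisher_exact

def cluster_pathway_enrichment(cluster_genes, pathway_genes, background_genes):
    """One-sided 2x2 Fisher's exact test for over-representation of a pathway
    within one gene-expression cluster, relative to the background gene set."""
    cluster, pathway, background = map(set, (cluster_genes, pathway_genes, background_genes))
    a = len(cluster & pathway)                 # in cluster and pathway
    b = len(cluster - pathway)                 # in cluster only
    c = len((background - cluster) & pathway)  # in pathway only
    d = len(background - cluster - pathway)    # in neither
    return fisher_exact([[a, b], [c, d]], alternative="greater")[1]   # p-value

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg FDR adjustment applied across all cluster-pathway tests."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])
    adjusted = [0.0] * n
    prev = 1.0
    for rank, i in reversed(list(enumerate(order, start=1))):
        prev = min(prev, pvals[i] * n / rank)
        adjusted[i] = prev
    return adjusted

background = [f"g{i}" for i in range(200)]
cluster = background[:30]
pathway = background[:15] + background[100:110]
p = cluster_pathway_enrichment(cluster, pathway, background)
print(p, benjamini_hochberg([p, 0.2, 0.03]))
```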

  19. Scalable Adaptive Graphics Environment (SAGE) Software for the Visualization of Large Data Sets on a Video Wall

    NASA Technical Reports Server (NTRS)

    Jedlovec, Gary; Srikishen, Jayanthi; Edwards, Rita; Cross, David; Welch, Jon; Smith, Matt

    2013-01-01

    The use of collaborative scientific visualization systems for the analysis, visualization, and sharing of "big data" available from new high resolution remote sensing satellite sensors or four-dimensional numerical model simulations is propelling the wider adoption of ultra-resolution tiled display walls interconnected by high speed networks. These systems require a globally connected and well-integrated operating environment that provides persistent visualization and collaboration services. This abstract and subsequent presentation describe a new collaborative visualization system installed for NASA's Short-term Prediction Research and Transition (SPoRT) program at Marshall Space Flight Center and its use for Earth science applications. The system consists of a 3 x 4 array of 1920 x 1080 pixel thin bezel video monitors mounted on a wall in a scientific collaboration lab. The monitors are physically and virtually integrated into a 14' x 7' video display. The display of scientific data on the video wall is controlled by a single Alienware Aurora PC with a 2nd Generation Intel Core 4.1 GHz processor, 32 GB memory, and an AMD Fire Pro W600 video card with 6 mini display port connections. Six mini display-to-dual DVI cables are used to connect the 12 individual video monitors. The open source Scalable Adaptive Graphics Environment (SAGE) windowing and media control framework, running on top of the Ubuntu 12 Linux operating system, allows several users to simultaneously control the display and storage of high resolution still and moving graphics in a variety of formats, on tiled display walls of any size. The Ubuntu operating system supports the open source Scalable Adaptive Graphics Environment (SAGE) software which provides a common environment, or framework, enabling its users to access, display and share a variety of data-intensive information. This information can be digital-cinema animations, high-resolution images, high-definition video

  20. Scalable Adaptive Graphics Environment (SAGE) Software for the Visualization of Large Data Sets on a Video Wall

    NASA Astrophysics Data System (ADS)

    Jedlovec, G.; Srikishen, J.; Edwards, R.; Cross, D.; Welch, J. D.; Smith, M. R.

    2013-12-01

    The use of collaborative scientific visualization systems for the analysis, visualization, and sharing of 'big data' available from new high resolution remote sensing satellite sensors or four-dimensional numerical model simulations is propelling the wider adoption of ultra-resolution tiled display walls interconnected by high speed networks. These systems require a globally connected and well-integrated operating environment that provides persistent visualization and collaboration services. This abstract and subsequent presentation describe a new collaborative visualization system installed for NASA's Short-term Prediction Research and Transition (SPoRT) program at Marshall Space Flight Center and its use for Earth science applications. The system consists of a 3 x 4 array of 1920 x 1080 pixel thin bezel video monitors mounted on a wall in a scientific collaboration lab. The monitors are physically and virtually integrated into a 14' x 7' video display. The display of scientific data on the video wall is controlled by a single Alienware Aurora PC with a 2nd Generation Intel Core 4.1 GHz processor, 32 GB memory, and an AMD Fire Pro W600 video card with 6 mini display port connections. Six mini display-to-dual DVI cables are used to connect the 12 individual video monitors. The open source Scalable Adaptive Graphics Environment (SAGE) windowing and media control framework, running on top of the Ubuntu 12 Linux operating system, allows several users to simultaneously control the display and storage of high resolution still and moving graphics in a variety of formats, on tiled display walls of any size. The Ubuntu operating system supports the open source Scalable Adaptive Graphics Environment (SAGE) software which provides a common environment, or framework, enabling its users to access, display and share a variety of data-intensive information. This information can be digital-cinema animations, high-resolution images, high-definition video

  1. SBIR Phase II Final Report for Scalable Grid Technologies for Visualization Services

    SciTech Connect

    Sebastien Barre; Will Schroeder

    2006-10-15

    This project developed software tools for the automation of grid computing. In particular, the project focused on visualization and imaging tools (VTK, ParaView and ITK); i.e., we developed tools to automatically create Grid services from C++ programs implemented using the open-source VTK visualization and ITK segmentation and registration systems. This approach helps non-Grid experts to create applications using tools with which they are familiar, ultimately producing Grid services for visualization and image analysis by invocation of an automatic process.

  2. Dynamic isosurface extraction and level-of-detail in voxel space

    SciTech Connect

    Linebarger, J.M.; Lamphere, P.B.; Breckenridge, A.R.

    1998-06-01

    A new visualization technique is reported, which dramatically improves interactivity for scientific visualizations by working directly with voxel data and by employing efficient algorithms and data structures. This discussion covers the research software, the file structures, examples of data creation, data search, and triangle rendering codes that allow geometric surfaces to be extracted from volumetric data. Uniquely, these methods enable greater interactivity by allowing an analyst to dynamically specify both the desired isosurface threshold and required level-of-detail to be used while rendering the image. The key idea behind this visualization paradigm is that various levels-of-detail are represented as differently sized hexahedral virtual voxels, which are stored in a three-dimensional kd-tree; thus the level-of-detail representation is done in voxel space instead of the traditional approach which relies on surface or geometry space decimations. This algorithm has been implemented as an integral component in the EIGEN/VR project at Sandia National Laboratories, which provides a rich environment for scientists to interactively explore and visualize the results of very large-scale simulations performed on massively parallel supercomputers.
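
    A minimal sketch of the voxel-space level-of-detail idea follows: coarser "virtual voxels" are built by merging 2x2x2 blocks while keeping value ranges, so an isosurface threshold can be tested at any level. It uses a simple min/max pyramid rather than the kd-tree described above and is not the EIGEN/VR code:

      # Build a min/max pyramid of "virtual voxels": each coarser level merges
      # 2x2x2 blocks of the level below, keeping the value range so we can tell
      # whether an isosurface at a given threshold can pass through a block.
      import numpy as np

      def build_minmax_pyramid(volume, levels):
          mins, maxs = [volume], [volume]
          for _ in range(levels):
              v_min, v_max = mins[-1], maxs[-1]
              # assume dimensions divisible by 2 for brevity
              shape = (v_min.shape[0] // 2, 2,
                       v_min.shape[1] // 2, 2,
                       v_min.shape[2] // 2, 2)
              mins.append(v_min.reshape(shape).min(axis=(1, 3, 5)))
              maxs.append(v_max.reshape(shape).max(axis=(1, 3, 5)))
          return mins, maxs

      def active_blocks(mins, maxs, level, iso):
          """Virtual voxels at `level` that may contain the isosurface."""
          return np.argwhere((mins[level] <= iso) & (maxs[level] >= iso))

      volume = np.random.rand(32, 32, 32)
      mins, maxs = build_minmax_pyramid(volume, levels=3)
      print(len(active_blocks(mins, maxs, level=2, iso=0.5)),
            "active virtual voxels at level 2")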

  3. Scalable Inference and Learning in Very Large Graphical Models Patterned after the Primate Visual Cortex

    DTIC Science & Technology

    2008-04-07

    interest in brain-like computing architectures. In July of 2005, Tom Dean, the principal investigator for this grant, presented a paper at AAAI... Hinton gave his Research Excellence Award lecture entitled "What kind of a graphical model is the brain?" In all three cases, the visual cortex is cast... distinctive features. There are cells in the retina, lateral geniculate and primary visual cortex whose receptive fields span space and time and are

  4. Direct interval volume visualization.

    PubMed

    Ament, Marco; Weiskopf, Daniel; Carr, Hamish

    2010-01-01

    We extend direct volume rendering with a unified model for generalized isosurfaces, also called interval volumes, allowing a wider spectrum of visual classification. We generalize the concept of scale-invariant opacity—typical for isosurface rendering—to semi-transparent interval volumes. Scale-invariant rendering is independent of physical space dimensions and therefore directly facilitates the analysis of data characteristics. Our model represents sharp isosurfaces as limits of interval volumes and combines them with features of direct volume rendering. Our objective is accurate rendering, guaranteeing that all isosurfaces and interval volumes are visualized in a crack-free way with correct spatial ordering. We achieve simultaneous direct and interval volume rendering by extending preintegration and explicit peak finding with data-driven splitting of ray integration and hybrid computation in physical and data domains. Our algorithm is suitable for efficient parallel processing for interactive applications as demonstrated by our CUDA implementation.

  5. A scalable architecture for extracting, aligning, linking, and visualizing multi-Int data

    NASA Astrophysics Data System (ADS)

    Knoblock, Craig A.; Szekely, Pedro

    2015-05-01

    An analyst today has a tremendous amount of data available, but each of the various data sources typically exists in its own silo, so an analyst has limited ability to see an integrated view of the data and has little or no access to contextual information that could help in understanding the data. We have developed the Domain-Insight Graph (DIG) system, an innovative architecture for extracting, aligning, linking, and visualizing massive amounts of domain-specific content from unstructured sources. Under the DARPA Memex program we have already successfully applied this architecture to multiple application domains, including the enormous international problem of human trafficking, where we extracted, aligned and linked data from 50 million online Web pages. DIG builds on our Karma data integration toolkit, which makes it easy to rapidly integrate structured data from a variety of sources, including databases, spreadsheets, XML, JSON, and Web services. The ability to integrate Web services allows Karma to pull in live data from the various social media sites, such as Twitter, Instagram, and OpenStreetMaps. DIG then indexes the integrated data and provides an easy to use interface for query, visualization, and analysis.

  6. A Unified Air-Sea Visualization System: Survey on Gridding Structures

    NASA Technical Reports Server (NTRS)

    Anand, Harsh; Moorhead, Robert

    1995-01-01

    The goal is to develop a Unified Air-Sea Visualization System (UASVS) to enable the rapid fusion of observational, archival, and model data for verification and analysis. To design and develop UASVS, modelers were polled to determine the gridding structures and visualization systems used, and their needs with respect to visual analysis. A basic UASVS requirement is to allow a modeler to explore multiple data sets within a single environment, or to interpolate multiple datasets onto one unified grid. From this survey, the UASVS should be able to visualize 3D scalar/vector fields; render isosurfaces; visualize arbitrary slices of the 3D data; visualize data defined on spectral element grids with the minimum number of interpolation stages; render contours; produce 3D vector plots and streamlines; provide unified visualization of satellite images, observations and model output overlays; display the visualization on a projection of the user's choice; implement functions so the user can derive diagnostic values; animate the data to see the time-evolution; animate ocean and atmosphere at different rates; store the record of cursor movement, smooth the path, and animate a window around the moving path; repeatedly start and stop the visual time-stepping; generate VHS tape animations; work on a variety of workstations; and allow visualization across clusters of workstations and scalable high performance computer systems.

  7. High Scalability Video ISR Exploitation

    DTIC Science & Technology

    2012-10-01

    cloud computing, Hadoop, Map/Reduce, scene understanding, visual saliency, scalability, ISR, and Motion Intelligence (U) ABSTRACT (U) The... problem in large-scale text processing through cloud computing architectures like Apache Hadoop. Hadoop applies a parallel batch-processing paradigm... that reads data from multiple hard disks simultaneously, called Map/Reduce. In contrast to Hadoop, modern CV algorithms assume a sequential data stream

  8. An ISO-surface folding analysis method applied to premature neonatal brain development

    NASA Astrophysics Data System (ADS)

    Rodriguez-Carranza, Claudia E.; Rousseau, Francois; Iordanova, Bistra; Glenn, Orit; Vigneron, Daniel; Barkovich, James; Studholme, Colin

    2006-03-01

    In this paper we describe the application of folding measures to tracking in vivo cortical brain development in premature neonatal brain anatomy. The outer gray matter and the gray-white matter interface surfaces were extracted from semi-interactively segmented high-resolution T1 MRI data. Nine curvature- and geometric descriptor-based folding measures were applied to six premature infants, aged 28-37 weeks, using a direct voxelwise iso-surface representation. We have shown that using such an approach it is feasible to extract meaningful surfaces of adequate quality from typical clinically acquired neonatal MRI data. We have shown that most of the folding measures, including a new proposed measure, are sensitive to changes in age and therefore applicable in developing a model that tracks development in premature infants. For the first time gyrification measures have been computed on the gray-white matter interface and on cases whose age is representative of a period of intense brain development.

  9. Equalizer: a scalable parallel rendering framework.

    PubMed

    Eilemann, Stefan; Makhinya, Maxim; Pajarola, Renato

    2009-01-01

    Continuing improvements in CPU and GPU performances as well as increasing multi-core processor and cluster-based parallelism demand flexible and scalable parallel rendering solutions that can exploit multipipe hardware accelerated graphics. In fact, to achieve interactive visualization, scalable rendering systems are essential to cope with the rapid growth of data sets. However, parallel rendering systems are non-trivial to develop and often only application specific implementations have been proposed. The task of developing a scalable parallel rendering framework is even more difficult if it should be generic to support various types of data and visualization applications, and at the same time work efficiently on a cluster with distributed graphics cards. In this paper we introduce a novel system called Equalizer, a toolkit for scalable parallel rendering based on OpenGL which provides an application programming interface (API) to develop scalable graphics applications for a wide range of systems ranging from large distributed visualization clusters and multi-processor multipipe graphics systems to single-processor single-pipe desktop machines. We describe the system architecture, the basic API, discuss its advantages over previous approaches, present example configurations and usage scenarios as well as scalability results.

  10. Finite Element Results Visualization for Unstructured Grids

    SciTech Connect

    Speck, Douglas E.; Dovey, Donald J.

    1996-07-15

    GRIZ is a general-purpose post-processing application supporting interactive visualization of finite element analysis results on unstructured grids. In addition to basic pseudocolor renderings of state variables over the mesh surface, GRIZ provides modern visualization techniques such as isocontours and isosurfaces, cutting planes, vector field display, and particle traces. GRIZ accepts both command-line and mouse-driven input, and is portable to virtually any UNIX platform which provides Motif and OpenGL libraries.

  11. Efficient visualization of unsteady and huge scalar and vector fields

    NASA Astrophysics Data System (ADS)

    Vetter, Michael; Olbrich, Stephan

    2016-04-01

    and methods, we are developing a stand-alone post-processor, adding further data structures and mapping algorithms, and cooperating with the ICON developers and users. With the implementation of a DSVR-based post-processor, a milestone was achieved. By using the DSVR post-processor, the three processes mentioned are completely separated: the data set is processed in batch mode - e.g. on the same supercomputer on which the data is generated - and the interactive 3D rendering is done afterwards on the scientist's local system. At the current stage of implementation, the DSVR post-processor supports the generation of isosurfaces and colored slicers on volume data set time series based on rectilinear grids, as well as the visualization of pathlines on time-varying flow fields based on either rectilinear grids or prism grids. The software implementation and evaluation are done on the supercomputers at DKRZ, including scalability tests using ICON output files in NetCDF format. The next milestones will be (a) the in-situ integration of the DSVR library in the ICON model and (b) the implementation of an isosurface algorithm for prism grids.

  12. Isosurface extraction and view-dependent filtering from time-varying fields using Persistent Time-Octree (PTOT).

    PubMed

    Wang, Cong; Chiang, Yi-Jen

    2009-01-01

    We develop a new algorithm for isosurface extraction and view-dependent filtering from large time-varying fields, by using a novel Persistent Time-Octree (PTOT) indexing structure. Previously, the Persistent Octree (POT) was proposed to perform isosurface extraction and view-dependent filtering, which combines the advantages of the interval tree (for optimal searches of active cells) and of the Branch-On-Need Octree (BONO, for view-dependent filtering), but it only works for steady-state (i.e., single time step) data. For time-varying fields, a 4D version of POT, 4D-POT, was proposed for 4D isocontour slicing, where slicing on the time domain gives all active cells in the queried timestep and isovalue. However, such slicing is not output sensitive and thus the searching is sub-optimal. Moreover, it was not known how to support view-dependent filtering in addition to time-domain slicing. In this paper, we develop a novel Persistent Time-Octree (PTOT) indexing structure, which has the advantages of POT and performs 4D isocontour slicing on the time domain with an output-sensitive and optimal searching. In addition, when we query the same isovalue q over m consecutive time steps, there is no additional searching overhead (except for reporting the additional active cells) compared to querying just the first time step. Such searching performance for finding active cells is asymptotically optimal, with asymptotically optimal space and preprocessing time as well. Moreover, our PTOT supports view-dependent filtering in addition to time-domain slicing. We propose a simple and effective out-of-core scheme, where we integrate our PTOT with implicit occluders, batched occlusion queries and batched CUDA computing tasks, so that we can greatly reduce the I/O cost as well as increase the amount of data being concurrently computed in GPU. This results in an efficient algorithm for isosurface extraction with view-dependent filtering utilizing a state-of-the-art programmable GPU for time
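
    For orientation, the "active cell" query that PTOT accelerates is shown below in its brute-force form: a cell is active when its value range straddles the isovalue. The data is random and the linear scan stands in for the output-sensitive PTOT search:

      # Brute-force version of the active-cell query: a cell is active for
      # isovalue q at timestep t if min <= q <= max over its eight corner
      # samples at that timestep. Data here is random, for illustration only.
      import numpy as np

      def cell_ranges(field):
          """Per-cell (min, max) from a 3D scalar field at one timestep."""
          corners = [field[i:field.shape[0] - 1 + i,
                           j:field.shape[1] - 1 + j,
                           k:field.shape[2] - 1 + k]
                     for i in (0, 1) for j in (0, 1) for k in (0, 1)]
          stack = np.stack(corners)
          return stack.min(axis=0), stack.max(axis=0)

      def active_cells(series, q, timesteps):
          """Active cell indices for isovalue q over several timesteps."""
          result = {}
          for t in timesteps:
              cmin, cmax = cell_ranges(series[t])
              result[t] = np.argwhere((cmin <= q) & (cmax >= q))
          return result

      series = np.random.rand(4, 16, 16, 16)     # 4 timesteps of a 16^3 field
      active = active_cells(series, q=0.5, timesteps=range(4))
      print({t: len(v) for t, v in active.items()})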

  13. Visualizing the Positive-Negative Interface of Molecular Electrostatic Potentials as an Educational Tool for Assigning Chemical Polarity

    ERIC Educational Resources Information Center

    Schonborn, Konrad; Host, Gunnar; Palmerius, Karljohan

    2010-01-01

    To help in interpreting the polarity of a molecule, charge separation can be visualized by mapping the electrostatic potential at the van der Waals surface using a color gradient or by indicating positive and negative regions of the electrostatic potential using different colored isosurfaces. Although these visualizations capture the molecular…

  14. Scalable coherent interface

    SciTech Connect

    Alnaes, K.; Kristiansen, E.H.; Gustavson, D.B.; James, D.V.

    1990-01-01

    The Scalable Coherent Interface (IEEE P1596) is establishing an interface standard for very high performance multiprocessors, supporting a cache-coherent-memory model scalable to systems with up to 64K nodes. This Scalable Coherent Interface (SCI) will supply a peak bandwidth per node of 1 GigaByte/second. The SCI standard should facilitate assembly of processor, memory, I/O and bus bridge cards from multiple vendors into massively parallel systems with throughput far above what is possible today. The SCI standard encompasses two levels of interface, a physical level and a logical level. The physical level specifies electrical, mechanical and thermal characteristics of connectors and cards that meet the standard. The logical level describes the address space, data transfer protocols, cache coherence mechanisms, synchronization primitives and error recovery. In this paper we address logical level issues such as packet formats, packet transmission, transaction handshake, flow control, and cache coherence. 11 refs., 10 figs.

  15. Sandia Scalable Encryption Software

    SciTech Connect

    Tarman, Thomas D.

    1997-08-13

    Sandia Scalable Encryption Library (SSEL) Version 1.0 is a library of functions that implement Sandia's scalable encryption algorithm. This algorithm is used to encrypt Asynchronous Transfer Mode (ATM) data traffic, and is capable of operating on an arbitrary number of bits at a time (which permits scaling via parallel implementations), while being interoperable with differently scaled versions of this algorithm. The routines in this library implement 8 bit and 32 bit versions of a non-linear mixer which is compatible with Sandia's hardware-based ATM encryptor.

  16. Visualization of High-Order Finite Element Methods

    DTIC Science & Technology

    2013-03-27

    Peters, Valerio Pascucci, Robert M. Kirby and Claudio T. Silva, "Topology Verification for Isosurface Extraction", IEEE Transactions on Visualization... Visualization of High-Order Methods, Professor Robert M. Kirby and Mr. Robert Haimes, University of Utah... Responsible person: Robert Kirby, 801-585-3421; dates covered from 26-Sep-2008

  17. GRIZ. Finite Element Results Visualization for Unstructured Grids

    SciTech Connect

    Dovey, D.; Spelce, T.E.; Christon, M.A.

    1996-03-01

    GRIZ is a general-purpose post-processing application supporting interactive visualization of finite element analysis results on unstructured grids. In addition to basic pseudocolor renderings of state variables over the mesh surface, GRIZ provides modern visualization techniques such as isocontours and isosurfaces, cutting planes, vector field display, and particle traces. GRIZ accepts both command-line and mouse-driven input, and is portable to virtually any UNIX platform which provides Motif and OpenGL libraries.

  18. Scalable Parallel Utopia

    SciTech Connect

    King, D.; Pierson, L.

    1998-10-01

    This contribution proposes a 128 bit wide interface structure clocked at approximately 80 MHz that will operate at 10 Gbps as a strawman for an OC-192c Utopia specification. In addition, the concept of scalable width of data transfers in order to maintain manageably low clock rates is proposed.

  19. Visualizing higher order finite elements. Final report

    SciTech Connect

    Thompson, David C; Pebay, Philippe Pierre

    2005-11-01

    This report contains an algorithm for decomposing higher-order finite elements into regions appropriate for isosurfacing and proves the conditions under which the algorithm will terminate. Finite elements are used to create piecewise polynomial approximants to the solution of partial differential equations for which no analytical solution exists. These polynomials represent fields such as pressure, stress, and momentum. In the past, these polynomials have been linear in each parametric coordinate. Each polynomial coefficient must be uniquely determined by a simulation, and these coefficients are called degrees of freedom. When there are not enough degrees of freedom, simulations will typically fail to produce a valid approximation to the solution. Recent work has shown that increasing the number of degrees of freedom by increasing the order of the polynomial approximation (instead of increasing the number of finite elements, each of which has its own set of coefficients) can allow some types of simulations to produce a valid approximation with many fewer degrees of freedom than increasing the number of finite elements alone. However, once the simulation has determined the values of all the coefficients in a higher-order approximant, tools do not exist for visual inspection of the solution. This report focuses on a technique for the visual inspection of higher-order finite element simulation results based on decomposing each finite element into simplicial regions where existing visualization algorithms such as isosurfacing will work. The requirements of the isosurfacing algorithm are enumerated and related to the places where the partial derivatives of the polynomial become zero. The original isosurfacing algorithm is then applied to each of these regions in turn.
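
    The decomposition criterion, splitting where derivatives of the polynomial vanish so that each piece is monotone, can be illustrated in one dimension with an arbitrary polynomial; this is only an analogue of the report's algorithm, not a reimplementation:

      # 1D analogue of the decomposition idea: split an element's parametric
      # range at the real roots of the derivative, so each sub-interval is
      # monotone and an isovalue crossing can be bracketed reliably.
      import numpy as np

      def monotone_regions(coeffs, lo=0.0, hi=1.0):
          p = np.polynomial.Polynomial(coeffs)
          crit = [r.real for r in p.deriv().roots()
                  if abs(r.imag) < 1e-12 and lo < r.real < hi]
          cuts = [lo] + sorted(crit) + [hi]
          return list(zip(cuts[:-1], cuts[1:])), p

      regions, p = monotone_regions([0.1, 2.0, -4.5, 3.0])   # arbitrary cubic on [0, 1]
      for a, b in regions:
          print(f"[{a:.3f}, {b:.3f}]  values {p(a):.3f} -> {p(b):.3f} (monotone)")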

  20. Scalable Work Stealing

    SciTech Connect

    Dinan, James S.; Larkins, D. B.; Sadayappan, Ponnuswamy; Krishnamoorthy, Sriram; Nieplocha, Jaroslaw

    2009-11-14

    Irregular and dynamic parallel applications pose significant challenges to achieving scalable performance on large-scale multicore clusters. These applications often require ongoing, dynamic load balancing in order to maintain efficiency. While effective at small scale, centralized load balancing schemes quickly become a bottleneck on large-scale clusters. Work stealing is a popular approach to distributed dynamic load balancing; however its performance on large-scale clusters is not well understood. Prior work on work stealing has largely focused on shared memory machines. In this work we investigate the design and scalability of work stealing on modern distributed memory systems. We demonstrate high efficiency and low overhead when scaling to 8,192 processors for three benchmark codes: a producer-consumer benchmark, the unbalanced tree search benchmark, and a multiresolution analysis kernel.

  1. Complexity in scalable computing.

    SciTech Connect

    Rouson, Damian W. I.

    2008-12-01

    The rich history of scalable computing research owes much to a rapid rise in computing platform scale in terms of size and speed. As platforms evolve, so must algorithms and the software expressions of those algorithms. Unbridled growth in scale inevitably leads to complexity. This special issue grapples with two facets of this complexity: scalable execution and scalable development. The former results from efficient programming of novel hardware with increasing numbers of processing units (e.g., cores, processors, threads or processes). The latter results from efficient development of robust, flexible software with increasing numbers of programming units (e.g., procedures, classes, components or developers). The progression in the above two parenthetical lists goes from the lowest levels of abstraction (hardware) to the highest (people). This issue's theme encompasses this entire spectrum. The lead author of each article resides in the Scalable Computing Research and Development Department at Sandia National Laboratories in Livermore, CA. Their co-authors hail from other parts of Sandia, other national laboratories and academia. Their research sponsors include several programs within the Department of Energy's Office of Advanced Scientific Computing Research and its National Nuclear Security Administration, along with Sandia's Laboratory Directed Research and Development program and the Office of Naval Research. The breadth of interests of these authors and their customers reflects in the breadth of applications this issue covers. This article demonstrates how to obtain scalable execution on the increasingly dominant high-performance computing platform: a Linux cluster with multicore chips. The authors describe how deep memory hierarchies necessitate reducing communication overhead by using threads to exploit shared register and cache memory. On a matrix-matrix multiplication problem, they achieve up to 96% parallel efficiency with a three-part strategy: intra

  2. Visualization of a Large Set of Hydrogen Atomic Orbital Contours Using New and Expanded Sets of Parametric Equations

    ERIC Educational Resources Information Center

    Rhile, Ian J.

    2014-01-01

    Atomic orbitals are a theme throughout the undergraduate chemistry curriculum, and visualizing them has been a theme in this journal. Contour plots as isosurfaces or contour lines in a plane are the most familiar representations of the hydrogen wave functions. In these representations, a surface of a fixed value of the wave function ψ is plotted…

  3. A Scalable Analysis Toolkit

    NASA Technical Reports Server (NTRS)

    Aiken, Alexander

    2001-01-01

    The Scalable Analysis Toolkit (SAT) project aimed to demonstrate that it is feasible and useful to statically detect software bugs in very large systems. The technical focus of the project was on a relatively new class of constraint-based techniques for analysis software, where the desired facts about programs (e.g., the presence of a particular bug) are phrased as constraint problems to be solved. At the beginning of this project, the most successful forms of formal software analysis were limited forms of automatic theorem proving (as exemplified by the analyses used in language type systems and optimizing compilers), semi-automatic theorem proving for full verification, and model checking. With a few notable exceptions these approaches had not been demonstrated to scale to software systems of even 50,000 lines of code. Realistic approaches to large-scale software analysis cannot hope to make every conceivable formal method scale. Thus, the SAT approach is to mix different methods in one application by using coarse and fast but still adequate methods at the largest scales, and reserving the use of more precise but also more expensive methods at smaller scales for critical aspects (that is, aspects critical to the analysis problem under consideration) of a software system. The principled method proposed for combining a heterogeneous collection of formal systems with different scalability characteristics is mixed constraints. This idea had been used previously in small-scale applications with encouraging results: using mostly coarse methods and narrowly targeted precise methods, useful information (meaning the discovery of bugs in real programs) was obtained with excellent scalability.

  4. Scalable optical quantum computer

    SciTech Connect

    Manykin, E A; Mel'nichenko, E V

    2014-12-31

    A way of designing a scalable optical quantum computer based on the photon echo effect is proposed. Individual rare earth ions Pr³⁺, regularly located in the lattice of the orthosilicate (Y₂SiO₅) crystal, are suggested to be used as optical qubits. Operations with qubits are performed using coherent and incoherent laser pulses. The operation protocol includes both the method of measurement-based quantum computations and the technique of optical computations. Modern hybrid photon echo protocols, which provide a sufficient quantum efficiency when reading recorded states, are considered as most promising for quantum computations and communications.

  5. Scalable solvers and applications

    SciTech Connect

    Ribbens, C J

    2000-10-27

    The purpose of this report is to summarize research activities carried out under Lawrence Livermore National Laboratory (LLNL) research subcontract B501073. This contract supported the principal investigator (PI), Dr. Calvin Ribbens, during his sabbatical visit to LLNL from August 1999 through June 2000. Results and conclusions from the work are summarized below in two major sections. The first section covers contributions to the Scalable Linear Solvers and hypre projects in the Center for Applied Scientific Computing (CASC). The second section describes results from collaboration with Patrice Turchi of LLNL's Chemistry and Materials Science Directorate (CMS). A list of publications supported by this subcontract appears at the end of the report.

  6. SFT: Scalable Fault Tolerance

    SciTech Connect

    Petrini, Fabrizio; Nieplocha, Jarek; Tipparaju, Vinod

    2006-04-15

    In this paper we will present a new technology that we are currently developing within the SFT: Scalable Fault Tolerance FastOS project which seeks to implement fault tolerance at the operating system level. Major design goals include dynamic reallocation of resources to allow continuing execution in the presence of hardware failures, very high scalability, high efficiency (low overhead), and transparency—requiring no changes to user applications. Our technology is based on a global coordination mechanism, that enforces transparent recovery lines in the system, and TICK, a lightweight, incremental checkpointing software architecture implemented as a Linux kernel module. TICK is completely user-transparent and does not require any changes to user code or system libraries; it is highly responsive: an interrupt, such as a timer interrupt, can trigger a checkpoint in as little as 2.5μs; and it supports incremental and full checkpoints with minimal overhead—less than 6% with full checkpointing to disk performed as frequently as once per minute.

  7. Scientific Visualization for Atmospheric Data Analysis in Collaborative Virtual Environments

    NASA Astrophysics Data System (ADS)

    Engelke, Wito; Flatken, Markus; Garcia, Arturo S.; Bar, Christian; Gerndt, Andreas

    2016-04-01

    terabytes. The combination of different data sources (e.g., MOLA, HRSC, HiRISE) and selection of presented data (e.g., infrared, spectral, imagery) is also supported. Furthermore, the data is presented unchanged and with the highest possible resolution for the target setup (e.g., power-wall, workstation, laptop) and view distance. The visualization techniques for the volumetric data sets can handle VTK [6] based data sets and also support different grid types as well as a time component. In detail, the integrated volume rendering uses a GPU based ray casting algorithm which was adapted to work in spherical coordinate systems. This approach results in interactive frame-rates without compromising visual fidelity. Besides direct visualization via volume rendering the prototype supports interactive slicing, extraction of iso-surfaces and probing. The latter can also be used for side-by-side comparison and on-the-fly diagram generation within the application. Similarly to the surface data, a combination of different data sources is supported as well. For example, the extracted iso-surface of a scalar pressure field can be used for the visualization of the temperature. The software development is supported by the ViSTA VR-toolkit [7] and supports different target systems as well as a wide range of VR-devices. Furthermore, the prototype is scalable to run on laptops, workstations and cluster setups. REFERENCES [1] A. S. Garcia, D. J. Roberts, T. Fernando, C. Bar, R. Wolff, J. Dodiya, W. Engelke, and A. Gerndt, "A collaborative workspace architecture for strengthening collaboration among space scientists," in IEEE Aerospace Conference, (Big Sky, Montana, USA), 7-14 March 2015. [2] W. Engelke, "Mars Cartography VR System 2/3." German Aerospace Center (DLR), 2015. Project Deliverable D4.2. [3] E. Hivon, F. K. Hansen, and A. J. Banday, "The healpix primer," arXiv preprint astro-ph/9905275, 1999. [4] K. M. Gorski, E. Hivon, A. Banday, B. D. Wandelt, F. K. Hansen, M. Reinecke, and M

  8. AstroBlend: Visualization package for use with Blender

    NASA Astrophysics Data System (ADS)

    Naiman, J. P.

    2015-12-01

    AstroBlend is a visualization package for use in the three dimensional animation and modeling software, Blender. It reads data in via a text file or can use pre-fab isosurface files stored as OBJ or Wavefront files. AstroBlend supports a variety of codes such as FLASH (ascl:1010.082), Enzo (ascl:1010.072), and Athena (ascl:1010.014), and combines artistic 3D models with computational astrophysics datasets to create models and animations.

  9. Scalable Domain Decomposed Monte Carlo Particle Transport

    NASA Astrophysics Data System (ADS)

    O'Brien, Matthew Joseph

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation. The main algorithms we consider are: • Domain decomposition of constructive solid geometry: enables extremely large calculations in which the background geometry is too large to fit in the memory of a single computational node. • Load Balancing: keeps the workload per processor as even as possible so the calculation runs efficiently. • Global Particle Find: if particles are on the wrong processor, globally resolve their locations to the correct processor based on particle coordinate and background domain. • Visualizing constructive solid geometry, sourcing particles, deciding that particle streaming communication is completed and spatial redecomposition. These algorithms are some of the most important parallel algorithms required for domain decomposed Monte Carlo particle transport. We demonstrate that our previous algorithms were not scalable, prove that our new algorithms are scalable, and run some of the algorithms up to 2 million MPI processes on the Sequoia supercomputer.
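
    As a toy illustration of the global particle find step, the sketch below assigns particles to owners on a hypothetical 1D slab decomposition; the dissertation's version works on constructive solid geometry domains and resolves ownership with MPI communication:

      # Toy global particle find on a 1D slab decomposition of [0, L): the
      # owning rank of each particle follows directly from its x coordinate.
      # Real domain-decomposed Monte Carlo uses CSG domains and an MPI exchange.
      import numpy as np

      def find_owners(x, domain_edges):
          """domain_edges: sorted slab boundaries, length n_ranks + 1."""
          owners = np.searchsorted(domain_edges, x, side="right") - 1
          return np.clip(owners, 0, len(domain_edges) - 2)

      L, n_ranks = 100.0, 8
      edges = np.linspace(0.0, L, n_ranks + 1)
      particles_x = np.random.uniform(0.0, L, size=10)
      print(find_owners(particles_x, edges))   # rank index owning each particle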

  10. Optimized scalable network switch

    DOEpatents

    Blumrich, Matthias A.; Chen, Dong; Coteus, Paul W.

    2010-02-23

    In a massively parallel computing system having a plurality of nodes configured in m multi-dimensions, each node including a computing device, a method for routing packets towards their destination nodes is provided which includes generating at least one of a 2m plurality of compact bit vectors containing information derived from downstream nodes. A multilevel arbitration process in which downstream information stored in the compact vectors, such as link status information and fullness of downstream buffers, is used to determine a preferred direction and virtual channel for packet transmission. Preferred direction ranges are encoded and virtual channels are selected by examining the plurality of compact bit vectors. This dynamic routing method eliminates the necessity of routing tables, thus enhancing scalability of the switch.

  11. Optimized scalable network switch

    DOEpatents

    Blumrich, Matthias A.; Chen, Dong; Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Heidelberger, Philip; Steinmacher-Burow, Burkhard D.; Takken, Todd E.; Vranas, Pavlos M.

    2007-12-04

    In a massively parallel computing system having a plurality of nodes configured in m multi-dimensions, each node including a computing device, a method for routing packets towards their destination nodes is provided which includes generating at least one of a 2m plurality of compact bit vectors containing information derived from downstream nodes. A multilevel arbitration process in which downstream information stored in the compact vectors, such as link status information and fullness of downstream buffers, is used to determine a preferred direction and virtual channel for packet transmission. Preferred direction ranges are encoded and virtual channels are selected by examining the plurality of compact bit vectors. This dynamic routing method eliminates the necessity of routing tables, thus enhancing scalability of the switch.

  12. PADMA: PArallel Data Mining Agents for scalable text classification

    SciTech Connect

    Kargupta, H.; Hamzaoglu, I.; Stafford, B.

    1997-03-01

    This paper introduces PADMA (PArallel Data Mining Agents), a parallel agent based system for scalable text classification. PADMA contains modules for (1) parallel data accessing operations, (2) parallel hierarchical clustering, and (3) web-based data visualization. This paper introduces the general architecture of PADMA and presents a detailed description of its different modules.

  13. Declarative Visualization Queries

    NASA Astrophysics Data System (ADS)

    Pinheiro da Silva, P.; Del Rio, N.; Leptoukh, G. G.

    2011-12-01

    necessarily entirely exposed to scientists writing visualization queries, facilitates the automated construction of visualization pipelines. VisKo queries have been successfully used in support of visualization scenarios from Earth Science domains including: velocity model isosurfaces, gravity data raster, and contour map renderings. Our synergistic environment provided by our CYBER-ShARE initiative at the University of Texas at El Paso has allowed us to work closely with Earth Science experts that have both provided us our test data as well as validation as to whether the execution of VisKo queries are returning visualizations that can be used for data analysis. Additionally, we have employed VisKo queries to support visualization scenarios associated with Giovanni, an online platform for data analysis developed by NASA GES DISC. VisKo-enhanced visualizations included time series plotting of aerosol data as well as contour and raster map generation of gridded brightness-temperature data.

  14. Scalable large format 3D displays

    NASA Astrophysics Data System (ADS)

    Chang, Nelson L.; Damera-Venkata, Niranjan

    2010-02-01

    We present a general framework for the modeling and optimization of scalable large format 3-D displays using multiple projectors. Based on this framework, we derive algorithms that can robustly optimize the visual quality of an arbitrary combination of projectors (e.g. tiled, superimposed, combinations of the two) without manual adjustment. The framework creates for the first time a new unified paradigm that is agnostic to a particular configuration of projectors yet robustly optimizes for the brightness, contrast, and resolution of that configuration. In addition, we demonstrate that our algorithms support high resolution stereoscopic video at real-time interactive frame rates achieved on commodity graphics hardware. Through complementary polarization, the framework creates high quality multi-projector 3-D displays at low hardware and operational cost for a variety of applications including digital cinema, visualization, and command-and-control walls.

  15. Customer oriented SNR scalability scheme for scalable video coding

    NASA Astrophysics Data System (ADS)

    Li, Z. G.; Rahardja, S.

    2005-07-01

    Let the whole region be the whole bit rate range that customers are interested in, and a sub-region be a specific bit rate range. The weighting factor of each sub-region is determined according to customers' interest. A new type of region of interest (ROI) is defined for SNR scalability such that the gap between the coding efficiency of the SNR scalability scheme and that of state-of-the-art single-layer coding for a sub-region is a monotonically non-increasing function of its weighting factor. This type of ROI is used as a performance index to design a customer oriented SNR scalability scheme. Our scheme can be used to achieve an optimal customer oriented scalable tradeoff (COST). The profit can thus be maximized.

  16. Scalable Nonlinear Compact Schemes

    SciTech Connect

    Ghosh, Debojyoti; Constantinescu, Emil M.; Brown, Jed

    2014-04-01

    In this work, we focus on compact schemes resulting in tridiagonal systems of equations, specifically the fifth-order CRWENO scheme. We propose a scalable implementation of the nonlinear compact schemes by implementing a parallel tridiagonal solver based on the partitioning/substructuring approach. We use an iterative solver for the reduced system of equations; however, we solve this system to machine zero accuracy to ensure that no parallelization errors are introduced. It is possible to achieve machine-zero convergence with few iterations because of the diagonal dominance of the system. The number of iterations is specified a priori instead of a norm-based exit criterion, and collective communications are avoided. The overall algorithm thus involves only point-to-point communication between neighboring processors. Our implementation of the tridiagonal solver differs from and avoids the drawbacks of past efforts in the following ways: it introduces no parallelization-related approximations (multiprocessor solutions are exactly identical to uniprocessor ones), it involves minimal communication, the mathematical complexity is similar to that of the Thomas algorithm on a single processor, and it does not require any communication and computation scheduling.
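
    For reference, the serial Thomas algorithm that each processor's local sweep resembles is sketched below; the partitioning/substructuring and the iterative reduced-system solve described in the abstract are not shown:

      # Serial Thomas algorithm for a tridiagonal system: the building block
      # that the partitioned/substructured parallel solver generalizes.
      # a, b, c are the sub-, main- and super-diagonals; d is the right-hand side.
      import numpy as np

      def thomas(a, b, c, d):
          n = len(d)
          cp, dp = np.zeros(n), np.zeros(n)
          cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
          for i in range(1, n):
              denom = b[i] - a[i] * cp[i - 1]
              cp[i] = c[i] / denom if i < n - 1 else 0.0
              dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
          x = np.zeros(n)
          x[-1] = dp[-1]
          for i in range(n - 2, -1, -1):    # back substitution
              x[i] = dp[i] - cp[i] * x[i + 1]
          return x

      n = 8
      a, b, c = np.full(n, 1.0), np.full(n, 4.0), np.full(n, 1.0)
      a[0], c[-1] = 0.0, 0.0                # unused corners of the diagonals
      d = np.arange(1.0, n + 1)
      print(thomas(a, b, c, d))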

  17. Scalable SCPPM Decoder

    NASA Technical Reports Server (NTRS)

    Quir, Kevin J.; Gin, Jonathan W.; Nguyen, Danh H.; Nguyen, Huy; Nakashima, Michael A.; Moision, Bruce E.

    2012-01-01

    A decoder was developed that decodes a serial concatenated pulse position modulation (SCPPM) encoded information sequence. The decoder takes as input a sequence of four-bit log-likelihood ratios (LLR) for each PPM slot in a codeword via a XAUI 10-Gb/s quad optical fiber interface. If the decoder is unavailable, it passes the LLRs on to the next decoder via a XAUI 10-Gb/s quad optical fiber interface. Otherwise, it decodes the sequence and outputs information bits through a 1-Gb/s Ethernet UDP/IP (User Datagram Protocol/Internet Protocol) interface. The throughput for a single decoder unit is 150 Mb/s at an average of four decoding iterations; by connecting a number of decoder units in series, a decoding rate equal to the aggregate rate is achieved. The unit is controlled through a 1-Gb/s Ethernet UDP/IP interface. This ground station decoder was developed to demonstrate a deep space optical communication link capability, and is unique in its scalable design to achieve real-time SCPPM decoding at the aggregate data rate.

  18. 3D visualization of biomedical CT images based on OpenGL and VRML techniques

    NASA Astrophysics Data System (ADS)

    Yin, Meng; Luo, Qingming; Xia, Fuhua

    2002-04-01

    Current high-performance computers and advanced image processing capabilities have made the application of three-dimensional visualization to biomedical computed tomographic (CT) images greatly facilitate research in biomedical engineering. To keep pace with Internet-based technology, where 3D data are typically stored and processed on powerful servers accessible via TCP/IP, the isosurface results should be broadly applicable to medical visualization. Furthermore, this project is a future part of the PACS system our lab is working on. So in this system we use the 3D file format VRML2.0, which is used through the Web interface for manipulating 3D models. In this program we implemented the generation and modification of triangular isosurface meshes by the marching cubes algorithm. Then we used OpenGL and MFC techniques to render the isosurface and manipulate voxel data. This software is well suited to the visualization of volumetric data. The drawbacks are that 3D image processing on personal computers is rather slow and the set of tools for 3D visualization is limited. However, these limitations have not affected the applicability of this platform for the tasks needed in elementary laboratory experiments or data preprocessing.
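
    The marching cubes step mentioned here is available off the shelf today; as a hedged stand-in for the paper's own OpenGL/MFC pipeline, the sketch below extracts an isosurface from a synthetic volume with scikit-image:

      # Extract a triangular isosurface from a synthetic volume (a sphere)
      # with the marching cubes implementation in scikit-image, standing in
      # for the custom OpenGL/MFC pipeline described above.
      import numpy as np
      from skimage import measure

      z, y, x = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
      volume = np.sqrt(x**2 + y**2 + z**2)       # distance from the center

      verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
      print(f"{len(verts)} vertices, {len(faces)} triangles on the 0.5 isosurface")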

  19. iSIGHT-FD scalability test report.

    SciTech Connect

    Clay, Robert L.; Shneider, Max S.

    2008-07-01

    The engineering analysis community at Sandia National Laboratories uses a number of internal and commercial software codes and tools, including mesh generators, preprocessors, mesh manipulators, simulation codes, post-processors, and visualization packages. We define an analysis workflow as the execution of an ordered, logical sequence of these tools. Various forms of analysis (and in particular, methodologies that use multiple function evaluations or samples) involve executing parameterized variations of these workflows. As part of the DART project, we are evaluating various commercial workflow management systems, including iSIGHT-FD from Engineous. This report documents the results of a scalability test that was driven by DAKOTA and conducted on a parallel computer (Thunderbird). The purpose of this experiment was to examine the suitability and performance of iSIGHT-FD for large-scale, parameterized analysis workflows. As the results indicate, we found iSIGHT-FD to be suitable for this type of application.

  20. Medical visualization based on VRML technology and its application

    NASA Astrophysics Data System (ADS)

    Yin, Meng; Luo, Qingming; Lu, Qiang; Sheng, Rongbing; Liu, Yafeng

    2003-07-01

    Current high-performance computers and advanced image processing capabilities have made the application of three-dimensional visualization to biomedical images greatly facilitate research in biomedical engineering. To keep pace with Internet-based technology, where 3-D data are typically stored and processed on powerful servers accessible via TCP/IP, the isosurface results should be broadly applicable to medical visualization. So in this system we use the 3-D file format VRML2.0, which is used through the Web interface for manipulating 3-D models. In this program we implemented the generation and modification of triangular isosurface meshes by the marching cubes algorithm, using OpenGL and MFC techniques to render the isosurface and manipulate voxel data. This software is well suited to the visualization of volumetric data. The drawbacks are that 3-D image processing on personal computers is rather slow and the set of tools for 3-D visualization is limited. However, these limitations have not affected the applicability of this platform for the tasks needed in elementary laboratory experiments or data preprocessing. With the help of OCT and MPE scanning image systems, applying these techniques to the visualization of the rabbit brain and constructing data sets of hierarchical subdivisions of the cerebral information, we can establish a virtual environment on the World Wide Web for rabbit brain research from its gross anatomy to its tissue and cellular levels of detail, providing graphical modeling and information management of both the outer and the inner space of the rabbit brain.

  1. Scalable, distributed data mining using an agent based architecture

    SciTech Connect

    Kargupta, H.; Hamzaoglu, I.; Stafford, B.

    1997-05-01

    Algorithm scalability and the distributed nature of both data and computation deserve serious attention in the context of data mining. This paper presents PADMA (PArallel Data Mining Agents), a parallel agent based system, that makes an effort to address these issues. PADMA contains modules for (1) parallel data accessing operations, (2) parallel hierarchical clustering, and (3) web-based data visualization. This paper describes the general architecture of PADMA and experimental results.

  2. Scalability study of solid xenon

    SciTech Connect

    Yoo, J.; Cease, H.; Jaskierny, W. F.; Markley, D.; Pahlka, R. B.; Balakishiyeva, D.; Saab, T.; Filipenko, M.

    2015-04-01

    We report a demonstration of the scalability of optically transparent xenon in the solid phase for use as a particle detector above a kilogram scale. We employed a cryostat cooled by liquid nitrogen combined with a xenon purification and chiller system. A modified Bridgman technique reproduces a large scale optically transparent solid xenon.

  3. A Scalable Database Infrastructure

    NASA Astrophysics Data System (ADS)

    Arko, R. A.; Chayes, D. N.

    2001-12-01

    The rapidly increasing volume and complexity of MG&G data, and the growing demand from funding agencies and the user community that it be easily accessible, demand that we improve our approach to data management in order to reach a broader user base and operate more efficiently and effectively. We have chosen an approach based on industry-standard relational database management systems (RDBMS) that use community-wide data specifications, where there is a clear and well-documented external interface that allows use of general purpose as well as customized clients. Rapid prototypes assembled with this approach show significant advantages over the traditional, custom-built data management systems that often use "in-house" legacy file formats, data specifications, and access tools. We have developed an effective database prototype based on a public domain RDBMS (PostgreSQL) and metadata standard (FGDC), and used it as a template for several ongoing MG&G database management projects - including ADGRAV (Antarctic Digital Gravity Synthesis), MARGINS, the Community Review system of the Digital Library for Earth Science Education, multibeam swath bathymetry metadata, and the R/V Maurice Ewing onboard acquisition system. By using standard formats and specifications, and working from a common prototype, we are able to reuse code and deploy rapidly. Rather than spend time on low-level details such as storage and indexing (which are built into the RDBMS), we can focus on high-level details such as documentation and quality control. In addition, because many commercial off-the-shelf (COTS) and public domain data browsers and visualization tools have built-in RDBMS support, we can focus on backend development and leave the choice of a frontend client(s) up to the end user. While our prototype is running under an open source RDBMS on a single processor host, the choice of standard components allows this implementation to scale to commercial RDBMS products and multiprocessor servers as

  4. A Scalable Media Multicasting Scheme

    NASA Astrophysics Data System (ADS)

    Youwei, Zhang

    IP multicast has proved infeasible to deploy; Application Layer Multicast (ALM), based on end-system multicast, is practical and more scalable than IP multicast in the Internet. In this paper, an ALM protocol called Scalable multicast for High Definition streaming media (SHD) is proposed, in which end-to-end transmission capability is fully exploited for HD media transmission without adding much control overhead. Similar to the transmission style of BitTorrent, hosts forward only part of a data piece according to the available bandwidth, which greatly improves bandwidth usage. On the other hand, some novel strategies are adopted to overcome the disadvantages of the BitTorrent protocol in streaming media transmission. Data transmission between hosts is implemented in a many-to-one transmission style in a hierarchical architecture in most circumstances. Simulations implemented on an Internet-like topology indicate that SHD achieves low link stress, low end-to-end latency and stability.

  5. A Scalable Tools Communication Infrastructure

    SciTech Connect

    Buntinas, Darius; Bosilca, George; Graham, Richard L; Vallee, Geoffroy R; Watson, Gregory R.

    2008-01-01

    The Scalable Tools Communication Infrastructure (STCI) is an open source collaborative effort intended to provide high-performance, scalable, resilient, and portable communications and process control services for a wide variety of user and system tools. STCI is aimed specifically at tools for ultrascale computing and uses a component architecture to simplify tailoring the infrastructure to a wide range of scenarios. This paper describes STCI's design philosophy, the various components that will be used to provide an STCI implementation for a range of ultrascale platforms, and a range of tool types. These include tools supporting parallel run-time environments, such as MPI, parallel application correctness tools and performance analysis tools, as well as system monitoring and management tools.

  6. Study on scalable coding algorithm for medical image.

    PubMed

    Hongxin, Chen; Zhengguang, Liu; Hongwei, Zhang

    2005-01-01

    According to the characteristics of medical images and the wavelet transform, a scalable coding algorithm is presented, which can be used in image transmission over a network. The wavelet transform makes up for the weaknesses of the DCT transform and is similar to the human visual system. The second-generation wavelet transform, the lifting scheme, can be computed in integer form: it is divided into several steps, each of which can be realized by integer-to-integer calculations. The lifting scheme can simplify the computing process and increase transform precision. According to the properties of the wavelet sub-bands, wavelet coefficients are organized on the basis of the sequence of their importance, so the code stream is formed progressively and is scalable in resolution. Experimental results show that the algorithm can be used effectively in medical image compression and is suitable for long-distance browsing.
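
    The integer-to-integer lifting idea can be illustrated with its simplest case, a Haar-like lifting step (the S-transform); the paper's coder presumably uses longer lifting filters, so treat this purely as an illustration:

      # One level of an integer-to-integer Haar lifting step (S-transform):
      # predict (difference) then update (average), both exactly invertible in
      # integer arithmetic. Real medical-image coders use longer lifting filters.
      def haar_lift_forward(x):
          even, odd = x[0::2], x[1::2]
          detail = [o - e for e, o in zip(even, odd)]            # predict step
          approx = [e + (d >> 1) for e, d in zip(even, detail)]  # update step
          return approx, detail

      def haar_lift_inverse(approx, detail):
          even = [s - (d >> 1) for s, d in zip(approx, detail)]
          odd = [d + e for e, d in zip(even, detail)]
          out = []
          for e, o in zip(even, odd):
              out += [e, o]
          return out

      signal = [12, 14, 13, 90, 7, 5, 100, 101]
      approx, detail = haar_lift_forward(signal)
      assert haar_lift_inverse(approx, detail) == signal   # perfect reconstruction
      print(approx, detail)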

  7. Perspective: n-type oxide thermoelectrics via visual search strategies

    NASA Astrophysics Data System (ADS)

    Xing, Guangzong; Sun, Jifeng; Ong, Khuong P.; Fan, Xiaofeng; Zheng, Weitao; Singh, David J.

    2016-05-01

    We discuss and present search strategies for finding new thermoelectric compositions based on first principles electronic structure and transport calculations. We illustrate them by application to a search for potential n-type oxide thermoelectric materials. This includes a screen based on visualization of electronic energy isosurfaces. We report compounds that show potential as thermoelectric materials along with detailed properties, including SrTiO3, which is a known thermoelectric, and appropriately doped KNbO3 and rutile TiO2.

  8. Visualization of Turbulence with OpenGL

    NASA Astrophysics Data System (ADS)

    Avril, A.; Makowski, M. A.; Umansky, M.; Kalling, R.; Schissel, D. P.

    2009-11-01

    Turbulence is an all-pervasive phenomenon in plasmas. The edge turbulence is of particular interest for the containment of plasmas during fusion processes. It is simulated with BOUT, a 4D (3 spatial + time coordinates) edge turbulence simulation code that is typical of modern codes in many ways. While predictive, the 4D outputs of these codes are difficult to visualize. In an effort to better understand the macroscopic trends of edge turbulence in toroidal plasmas, we are developing routines to render the BOUT output, using the OpenGL framework in C++. These routines will allow us to follow the evolution of isosurfaces through time, and we anticipate gaining insight into the nonlinear dynamics of turbulence as a result. Additionally, these routines could potentially be used to visualize the output of other modeling codes.
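
    A minimal sketch of the isosurface-extraction step for one time slice of a 4D (t, x, y, z) array, using scikit-image's marching cubes as a stand-in for the C++/OpenGL pipeline described above; the OpenGL rendering of the resulting triangle mesh is not shown.

      import numpy as np
      from skimage import measure   # assumes scikit-image is available

      def extract_isosurface(field_4d, t, level):
          """Return vertices and triangle indices of the `level` isosurface at time index t."""
          volume = np.ascontiguousarray(field_4d[t])
          verts, faces, normals, values = measure.marching_cubes(volume, level=level)
          return verts, faces

      # Synthetic example: a slowly drifting blob whose isosurface evolves in time.
      x, y, z = np.mgrid[-1:1:40j, -1:1:40j, -1:1:40j]
      data = np.stack([np.exp(-((x - 0.02 * t) ** 2 + y ** 2 + z ** 2) * 8) for t in range(5)])
      verts, faces = extract_isosurface(data, t=2, level=0.5)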

  9. Illustrative volume visualization using GPU-based particle systems.

    PubMed

    van Pelt, Roy; Vilanova, Anna; van de Wetering, Huub

    2010-01-01

    Illustrative techniques are generally applied to produce stylized renderings. Various illustrative styles have been applied to volumetric data sets, producing clearer images and effectively conveying visual information. We adopt particle systems to produce user-configurable stylized renderings from the volume data, imitating traditional pen-and-ink drawings. In the following, we present an interactive GPU-based illustrative volume rendering framework, called VolFliesGPU. In this framework, isosurfaces are sampled by evenly distributed particle sets, delineating surface shape by illustrative styles. The appearance of these styles is based on locally measured surface properties. For instance, hatches convey surface shape by orientation, and shape characteristics are enhanced by color, mapped using a curvature-based transfer function. Hidden surfaces are removed to avoid visual clutter, after which a combination of styles is applied per isosurface. Multiple surfaces and styles can be explored interactively, exploiting parallelism in both graphics hardware and particle systems. We achieve real-time interaction and prompt parametrization of the illustrative styles, using an intuitive GPGPU paradigm that delivers the computational power to drive our particle system and visualization algorithms.

  10. Rapid and scalable assembly of firefly luciferase substrates†

    PubMed Central

    McCutcheon, David C.; Porterfield, William B.; Prescher, Jennifer A.

    2015-01-01

    Bioluminescence imaging with luciferase-luciferin pairs is a popular method for visualizing biological processes in vivo. Unfortunately, most luciferins are difficult to access and remain prohibitively expensive for some imaging applications. Here we report cost-effective and efficient syntheses of D-luciferin and 6′-aminoluciferin, two widely used bioluminescent substrates. Our approach employs inexpensive anilines and Appel's salt to generate the luciferin cores in a single pot. Additionally, the syntheses are scalable and can provide multi-gram quantities of both substrates. The streamlined production and improved accessibility of luciferin reagents will bolster in vivo imaging efforts. PMID:25525906

  11. Scripts for Scalable Monitoring of Parallel Filesystem Infrastructure

    SciTech Connect

    Caldwell, Blake

    2014-02-27

    Scripts for scalable monitoring of parallel filesystem infrastructure provide frameworks for monitoring the health of block storage arrays and large InfiniBand fabrics. The block storage framework uses Python multiprocessing so that the number of monitored arrays scales with the number of processors in the system. This enables live monitoring of HPC-scale filesystems with 10-50 storage arrays. For InfiniBand monitoring, scripts are included that check the InfiniBand health of each host, along with visualization tools for mapping complex fabric topologies.
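
    A minimal sketch of the multiprocessing pattern described above: poll many storage arrays in parallel so monitoring scales with the available cores. The check_array function and array names are hypothetical placeholders, not the released scripts.

      import multiprocessing as mp

      def check_array(array_name):
          """Return (array_name, status); a real check would query the array's controller."""
          healthy = True   # placeholder for an SNMP/CLI/REST health query
          return array_name, "OK" if healthy else "DEGRADED"

      def poll_arrays(array_names, workers=None):
          # Pool defaults to one worker per CPU, so throughput scales with core count.
          with mp.Pool(processes=workers) as pool:
              return dict(pool.map(check_array, array_names))

      if __name__ == "__main__":
          status = poll_arrays(["array%02d" % i for i in range(20)])
          print(status)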

  12. Scalable WIM: effective exploration in large-scale astrophysical environments.

    PubMed

    Li, Yinggang; Fu, Chi-Wing; Hanson, Andrew J

    2006-01-01

    Navigating through large-scale virtual environments such as simulations of the astrophysical Universe is difficult. The huge spatial range of astronomical models and the dominance of empty space make it hard for users to travel across cosmological scales effectively, and the problem of wayfinding further impedes the user's ability to acquire reliable spatial knowledge of astronomical contexts. We introduce a new technique called the scalable world-in-miniature (WIM) map as a unifying interface to facilitate travel and wayfinding in a virtual environment spanning gigantic spatial scales: Power-law spatial scaling enables rapid and accurate transitions among widely separated regions; logarithmically mapped miniature spaces offer a global overview mode when the full context is too large; 3D landmarks represented in the WIM are enhanced by scale, positional, and directional cues to augment spatial context awareness; a series of navigation models are incorporated into the scalable WIM to improve the performance of travel tasks posed by the unique characteristics of virtual cosmic exploration. The scalable WIM user interface supports an improved physical navigation experience and assists pragmatic cognitive understanding of a visualization context that incorporates the features of large-scale astronomy.
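
    A minimal sketch (not the authors' formulas) of the two scale mappings named above: a power-law rescaling for travel across widely separated regions and a logarithmic mapping that compresses enormous radial distances into a bounded miniature space.

      import numpy as np

      def power_law_scale(distance, exponent=0.5):
          """Compress travel distances so nearby and far regions are both reachable."""
          return np.sign(distance) * np.abs(distance) ** exponent

      def log_map_radius(r, r0=1.0):
          """Map a world-space radius r into miniature-space units."""
          return np.log10(1.0 + r / r0)

      radii = np.array([1e-3, 1.0, 1e3, 1e6, 1e9])   # planetary to cosmological scales
      print(log_map_radius(radii))                    # spans roughly 0 .. 9 miniature units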

  13. Highly scalable coherent fiber combining

    NASA Astrophysics Data System (ADS)

    Antier, M.; Bourderionnet, J.; Larat, C.; Lallier, E.; Brignon, A.

    2015-10-01

    An architecture for active coherent fiber laser beam combining using an interferometric measurement is demonstrated. This technique allows measuring the exact phase errors of each fiber beam in a single shot. Therefore, this method is a promising candidate for combining very large numbers of fibers. Our experimental system, composed of 16 independent fiber channels, is used to evaluate the achieved phase locking stability in terms of phase shift error and bandwidth. We show that only 8 pixels per fiber on the camera are required for stable closed-loop operation with a residual phase error of λ/20 rms, which demonstrates the scalability of this concept. Furthermore, we propose a beam shaping technique to increase the combining efficiency.

  14. Scalable Performance Measurement and Analysis

    SciTech Connect

    Gamblin, Todd

    2009-01-01

    Concurrency levels in large-scale, distributed-memory supercomputers are rising exponentially. Modern machines may contain 100,000 or more microprocessor cores, and the largest of these, IBM's Blue Gene/L, contains over 200,000 cores. Future systems are expected to support millions of concurrent tasks. In this dissertation, we focus on efficient techniques for measuring and analyzing the performance of applications running on very large parallel machines. Tuning the performance of large-scale applications can be a subtle and time-consuming task because application developers must measure and interpret data from many independent processes. While the volume of the raw data scales linearly with the number of tasks in the running system, the number of tasks is growing exponentially, and data for even small systems quickly becomes unmanageable. Transporting performance data from so many processes over a network can perturb application performance and make measurements inaccurate, and storing such data would require a prohibitive amount of space. Moreover, even if it were stored, analyzing the data would be extremely time-consuming. In this dissertation, we present novel methods for reducing performance data volume. The first draws on multi-scale wavelet techniques from signal processing to compress systemwide, time-varying load-balance data. The second uses statistical sampling to select a small subset of running processes to generate low-volume traces. A third approach combines sampling and wavelet compression to stratify performance data adaptively at run-time and to reduce further the cost of sampled tracing. We have integrated these approaches into Libra, a toolset for scalable load-balance analysis. We present Libra and show how it can be used to analyze data from large scientific applications scalably.
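
    An illustrative sketch of the wavelet idea described above: compress a per-process, time-varying load trace with a single-level Haar transform and keep only the largest detail coefficients. This is a generic NumPy example, not Libra's implementation.

      import numpy as np

      def haar_1d(signal):
          """Single-level Haar decomposition into (averages, details); even-length input."""
          x = np.asarray(signal, dtype=float)
          return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

      def compress(signal, keep_fraction=0.1):
          """Zero out all but the largest-magnitude detail coefficients."""
          approx, detail = haar_1d(signal)
          cutoff = np.quantile(np.abs(detail), 1.0 - keep_fraction)
          detail = np.where(np.abs(detail) >= cutoff, detail, 0.0)
          return approx, detail

      load = 100 + 5 * np.random.randn(1024)          # synthetic per-timestep load values
      approx, sparse_detail = compress(load, keep_fraction=0.05)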

  15. Rate control scheme for consistent video quality in scalable video codec.

    PubMed

    Seo, Chan-Won; Han, Jong-Ki; Nguyen, Truong Q

    2011-08-01

    Multimedia data delivered to mobile devices over wireless channels or the Internet are complicated by bandwidth fluctuation and the variety of mobile devices. Scalable video coding has been developed as an extension of H.264/AVC to solve this problem. Since a scalable video codec provides various scalabilities to adapt the bitstream to channel conditions and terminal types, it is a useful codec for wired and wireless multimedia communication systems, such as IPTV and streaming services. In such scalable multimedia communication systems, video quality fluctuation significantly degrades visual perception. It is important to use the target bits efficiently in order to maintain a consistent video quality or achieve a small distortion variation throughout the whole video sequence. The scheme proposed in this paper provides a useful function to control video quality in applications supporting scalability, whereas conventional schemes have been proposed to control video quality in H.264 and MPEG-4 systems. The proposed algorithm decides the quantization parameter of the enhancement layer to maintain a consistent video quality throughout the entire sequence. The video quality of the enhancement layer is controlled based on a closed-form formula which utilizes the residual data and quantization error of the base layer. The simulation results show that the proposed algorithm controls the frame quality of the enhancement layer in a simple operation, where the parameter decision algorithm is applied to each frame.

  16. Visual Interface for Materials Simulations

    SciTech Connect

    Muller, Richard P.; Dorsey, David M.

    2004-08-01

    VIMES (Visual Interface for Materials Simulations) is a graphical user interface (GUI) for pre- and post-processing atomistic materials science calculations. The code includes tools for building and visualizing simple crystals, supercells, and surfaces, as well as tools for managing and modifying the input to Sandia materials simulation codes such as Quest (Peter Schultz, SNL 9235) and Towhee (Marcus Martin, SNL 9235). It is often useful to have a graphical interface to construct input for materials simulation codes and to analyze the output of these programs. VIMES has been designed not only to build and visualize different materials systems, but also to make several Sandia codes easier to use and analyze. Furthermore, VIMES has been designed to be reasonably easy to extend to new materials programs. We anticipate that users of Sandia materials simulation codes will use VIMES to simplify the submission and analysis of these simulations. VIMES uses standard OpenGL graphics (as implemented in the Python programming language) to display the molecules. The algorithms used to rotate, zoom, and pan molecules are all standard applications of the OpenGL libraries. VIMES uses the Marching Cubes algorithm for isosurfacing 3D data such as molecular orbitals or electron densities around the molecules.

  17. Scalable Video Transcaling for the Wireless Internet

    NASA Astrophysics Data System (ADS)

    Radha, Hayder; van der Schaar, Mihaela; Karande, Shirish

    2004-12-01

    The rapid and unprecedented increase in the heterogeneity of multimedia networks and devices emphasizes the need for scalable and adaptive video solutions both for coding and transmission purposes. However, in general, there is an inherent trade-off between the level of scalability and the quality of scalable video streams. In other words, the higher the bandwidth variation, the lower the overall video quality of the scalable stream that is needed to support the desired bandwidth range. In this paper, we introduce the notion of wireless video transcaling (TS), which is a generalization of (nonscalable) transcoding. With TS, a scalable video stream, that covers a given bandwidth range, is mapped into one or more scalable video streams covering different bandwidth ranges. Our proposed TS framework exploits the fact that the level of heterogeneity changes at different points of the video distribution tree over wireless and mobile Internet networks. This provides the opportunity to improve the video quality by performing the appropriate TS process. We argue that an Internet/wireless network gateway represents a good candidate for performing TS. Moreover, we describe hierarchical TS (HTS), which provides a "transcaler" with the option of choosing among different levels of TS processes with different complexities. We illustrate the benefits of TS by considering the recently developed MPEG-4 fine granularity scalability (FGS) video coding. Extensive simulation results of video TS over bit rate ranges supported by emerging wireless LANs are presented.

  18. Scalable analysis tools for sensitivity analysis and UQ (3160) results.

    SciTech Connect

    Karelitz, David B.; Ice, Lisa G.; Thompson, David C.; Bennett, Janine C.; Fabian, Nathan; Scott, W. Alan; Moreland, Kenneth D.

    2009-09-01

    The 9/30/2009 ASC Level 2 Scalable Analysis Tools for Sensitivity Analysis and UQ (Milestone 3160) contains feature recognition capability required by the user community for certain verification and validation tasks focused around sensitivity analysis and uncertainty quantification (UQ). These feature recognition capabilities include crater detection, characterization, and analysis from CTH simulation data; the ability to call fragment and crater identification code from within a CTH simulation; and the ability to output fragments in a geometric format that includes data values over the fragments. The feature recognition capabilities were tested extensively on sample and actual simulations. In addition, a number of stretch criteria were met including the ability to visualize CTH tracer particles and the ability to visualize output from within an S3D simulation.

  19. Vortex Filaments in Grids for Scalable, Fine Smoke Simulation.

    PubMed

    Meng, Zhang; Weixin, Si; Yinling, Qian; Hanqiu, Sun; Jing, Qin; Heng, Pheng-Ann

    2015-01-01

    Vortex modeling can produce attractive visual effects of dynamic fluids, which are widely applicable to dynamic media, computer games, special effects, and virtual reality systems. However, it is challenging to effectively simulate intensive and finely detailed fluids such as smoke with rapidly increasing numbers of vortex filaments and smoke particles. The authors propose a novel vortex-filaments-in-grids scheme in which uniform grids dynamically bridge the vortex filaments and smoke particles for scalable, fine smoke simulation with macroscopic vortex structures. Using the vortex model, their approach supports a trade-off between simulation speed and scale of detail. After computing the full velocity field, external control can easily be exerted on the embedded grid to guide the vortex-based smoke motion. The experimental results demonstrate the efficiency of using the proposed scheme for a visually plausible smoke simulation with macroscopic vortex structures.

  20. Scalable Computation of Streamlines on Very Large Datasets

    SciTech Connect

    Pugmire, Dave; Garth, Christoph; Childs, Hank; Ahern, Sean; Weber, Gunther H

    2009-01-01

    Understanding vector fields resulting from large scientific simulations is an important and often difficult task. Streamlines, curves that are tangential to a vector field at each point, are a powerful visualization method in this context. Application of streamline-based visualization to very large vector field data represents a significant challenge due to the non-local and data-dependent nature of streamline computation, and requires careful balancing of computational demands placed on I/O, memory, communication, and processors. In this paper we review two parallelization approaches based on established parallelization paradigms (static decomposition and on-demand loading) and present a novel hybrid algorithm for computing streamlines. Our algorithm is aimed at good scalability and performance across the widely varying computational characteristics of streamline-based problems. We perform performance and scalability studies of all three algorithms on a number of prototypical application problems and demonstrate that our hybrid scheme is able to perform well in different settings.
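
    A serial sketch of the core step the paper parallelizes: advect a seed point through a sampled 3D vector field with fixed-step RK4. The parallel hybrid scheduling itself is not reproduced; SciPy's grid interpolator is assumed as a stand-in for the simulation's native field access.

      import numpy as np
      from scipy.interpolate import RegularGridInterpolator

      def trace_streamline(grid_axes, vectors, seed, step=0.05, n_steps=500):
          interp = RegularGridInterpolator(grid_axes, vectors,
                                           bounds_error=False, fill_value=None)
          v = lambda p: interp(p[None, :])[0]
          pts = [np.asarray(seed, dtype=float)]
          for _ in range(n_steps):
              p = pts[-1]
              k1 = v(p)
              k2 = v(p + 0.5 * step * k1)
              k3 = v(p + 0.5 * step * k2)
              k4 = v(p + step * k3)
              pts.append(p + step / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4))
          return np.array(pts)

      # Synthetic solid-body rotation field on a small grid.
      ax = np.linspace(-1, 1, 32)
      X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
      field = np.stack([-Y, X, np.zeros_like(Z)], axis=-1)
      line = trace_streamline((ax, ax, ax), field, seed=(0.5, 0.0, 0.0))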

  1. Fully scalable video coding with packed stream

    NASA Astrophysics Data System (ADS)

    Lopez, Manuel F.; Rodriguez, Sebastian G.; Ortiz, Juan Pablo; Dana, Jose Miguel; Ruiz, Vicente G.; Garcia, Inmaculada

    2005-03-01

    Scalable video coding is a technique which allows a compressed video stream to be decoded in several different ways. This ability allows a user to adaptively recover a specific version of a video depending on their own requirements. Video sequences have temporal, spatial, and quality scalabilities. In this work we introduce a novel fully scalable video codec. It is based on motion-compensated temporal filtering (MCTF) of the video sequences and it uses some of the basic elements of JPEG 2000. This paper describes several specific proposals for video on demand and video-conferencing applications over unreliable packet-switching data networks.

  2. Scalable Multi-Platform Distribution of Spatial 3d Contents

    NASA Astrophysics Data System (ADS)

    Klimke, J.; Hagedorn, B.; Döllner, J.

    2013-09-01

    Virtual 3D city models provide powerful user interfaces for communication of 2D and 3D geoinformation. Providing high-quality visualization of massive 3D geoinformation in a scalable, fast, and cost-efficient manner is still a challenging task. Especially for mobile and web-based system environments, software and hardware configurations of target systems differ significantly. This makes it hard to provide fast, visually appealing renderings of 3D data throughout a variety of platforms and devices. Current mobile or web-based solutions for 3D visualization usually require raw 3D scene data such as triangle meshes together with textures delivered from server to client, which makes them strongly limited in terms of the size and complexity of the models they can handle. In this paper, we introduce a new approach for provisioning of massive, virtual 3D city models on different platforms, namely web browsers, smartphones, or tablets, by means of an interactive map assembled from artificial oblique image tiles. The key concept is to synthesize such images of a virtual 3D city model by a 3D rendering service in a preprocessing step. This service encapsulates model handling and 3D rendering techniques for high-quality visualization of massive 3D models. By generating image tiles using this service, the 3D rendering process is shifted from the client side, which provides major advantages: (a) the complexity of the 3D city model data is decoupled from data transfer complexity, (b) the implementation of client applications is simplified significantly as 3D rendering is encapsulated on the server side, and (c) 3D city models can easily be deployed for and used by a large number of concurrent users, leading to a high degree of scalability of the overall approach. All core 3D rendering techniques are performed on a dedicated 3D rendering server, and thin-client applications can be compactly implemented for various devices and platforms.

  3. Scalable encryption using alpha rooting

    NASA Astrophysics Data System (ADS)

    Wharton, Eric J.; Panetta, Karen A.; Agaian, Sos S.

    2008-04-01

    Full and partial encryption methods are important for subscription based content providers, such as internet and cable TV pay channels. Providers need to be able to protect their products while at the same time being able to provide demonstrations to attract new customers without giving away the full value of the content. If an algorithm were introduced which could provide any level of full or partial encryption in a fast and cost effective manner, the applications to real-time commercial implementation would be numerous. In this paper, we present a novel application of alpha rooting, using it to achieve fast and straightforward scalable encryption with a single algorithm. We further present use of the measure of enhancement, the Logarithmic AME, to select optimal parameters for the partial encryption. When parameters are selected using the measure, the output image achieves a balance between protecting the important data in the image while still containing a good overall representation of the image. We will show results for this encryption method on a number of images, using histograms to evaluate the effectiveness of the encryption.
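
    A minimal sketch of the alpha-rooting transform itself (parameter selection with the Logarithmic AME measure is not reproduced): the image's Fourier magnitudes are raised to the power alpha while the phase is preserved, so values of alpha far from 1 scramble the image strongly and values near 1 barely alter it.

      import numpy as np

      def alpha_root(image, alpha):
          """Apply alpha rooting to a 2D grayscale image and return the real-valued result."""
          F = np.fft.fft2(np.asarray(image, dtype=float))
          magnitude, phase = np.abs(F), np.angle(F)
          return np.real(np.fft.ifft2((magnitude ** alpha) * np.exp(1j * phase)))

      img = np.random.rand(64, 64)              # stand-in for a real image
      lightly_scrambled = alpha_root(img, 0.9)  # partial protection
      heavily_scrambled = alpha_root(img, 0.3)  # strong protection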

  4. Temporally Scalable Visual SLAM using a Reduced Pose Graph

    DTIC Science & Technology

    2012-05-25

    The pose graph optimization approach to SLAM was first introduced by Lu and Milios; this work presents a temporally scalable visual SLAM system based on a reduced pose graph, with illustrations generated from Kinect data. Results are presented for both camera types (a standard camera and an RGB-D sensor such as the Microsoft Kinect), and the approach can additionally incorporate IMU (roll and pitch) measurements. The evaluation data were collected with a camera, a Kinect sensor, and a Microstrain IMU, among other sensors, in a large building over a period of six months.

  5. Efficient entropy coding for scalable video coding

    NASA Astrophysics Data System (ADS)

    Choi, Woong Il; Yang, Jungyoup; Jeon, Byeungwoo

    2005-10-01

    The standardization of the scalable extension of H.264 has called for additional functionality based on the H.264 standard to support combined spatio-temporal and SNR scalability. For the entropy coding of the H.264 scalable extension, the Context-based Adaptive Binary Arithmetic Coding (CABAC) scheme has been considered so far. In this paper, we present a new context modeling scheme that uses inter-layer correlation between the syntax elements. As a result, it improves the coding efficiency of entropy coding in the H.264 scalable extension. In simulation results of applying the proposed scheme to encoding the syntax element mb_type, the improvement in coding efficiency of the proposed method is shown to be up to 16% in terms of bit saving, due to the estimation of a more adequate probability model.

  6. Joint Experimentation on Scalable Parallel Processors (JESPP)

    DTIC Science & Technology

    2006-04-01

    Joint Experimentation on Scalable Parallel Processors (JESPP). Authors: Dan M. Davis, Robert F. Lucas, Ke-Thia Yao, Gene Wagenbreth. The record lists related papers, including Robert J. Graebener, Gregory Rafuse, Robert Miller & Ke-Thia Yao, "The Road to Successful Joint Experimentation Starts at the ..." (2003), and Robert F. Lucas & Dan M. Davis, "Joint Experimentation on Scalable Parallel Processors", Interservice/Industry Training, Simulation, and ...

  7. A Statistical Direct Volume Rendering Framework for Visualization of Uncertain Data.

    PubMed

    Sakhaee, Elham; Entezari, Alireza

    2016-12-08

    With uncertainty present in almost all modalities of data acquisition, reduction, transformation, and representation, there is a growing demand for mathematical analysis of uncertainty propagation in data processing pipelines. In this paper, we present a statistical framework for quantification of uncertainty and its propagation in the main stages of the visualization pipeline. We propose a novel generalization of Irwin-Hall distributions from the statistical viewpoint of splines and box-splines, which enables interpolation of random variables. Moreover, we introduce a probabilistic transfer function classification model that allows for incorporating probability density functions into the volume rendering integral. Our statistical framework allows for incorporating distributions from various sources of uncertainty, which makes it suitable in a wide range of visualization applications. We demonstrate the effectiveness of our approach in visualization of ensemble data, visualizing large datasets at reduced scale, iso-surface extraction, and visualization of noisy data.
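
    For reference, the classical Irwin-Hall density that the paper generalizes is a textbook result (not reproduced from the paper): it is the distribution of a sum of n independent U(0,1) variables and coincides with the uniform B-spline of order n, which is what ties it to the spline and box-spline viewpoint mentioned above.

      f_n(x) \;=\; \frac{1}{(n-1)!}\,\sum_{k=0}^{\lfloor x \rfloor} (-1)^k \binom{n}{k} (x-k)^{n-1},
      \qquad 0 \le x \le n.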

  8. Scalable extensions of HEVC for next generation services

    NASA Astrophysics Data System (ADS)

    Misra, Kiran; Segall, Andrew; Zhao, Jie; Kim, Seung-Hwan

    2013-02-01

    The high efficiency video coding (HEVC) standard being developed by ITU-T VCEG and ISO/IEC MPEG achieves a compression goal of reducing the bitrate by half for the same visual quality when compared with earlier video compression standards such as H.264/AVC. It achieves this goal with the use of several new tools such as quad-tree based partitioning of data, larger block sizes, improved intra prediction, sophisticated prediction of motion information, the inclusion of an in-loop sample adaptive offset process, etc. This paper describes an approach where the HEVC framework is extended to achieve spatial scalability using a multi-loop approach. The enhancement layer inter-predictive coding efficiency is improved by including within the decoded picture buffer multiple up-sampled versions of the decoded base layer picture. This approach has the advantage of achieving significant coding gains with a simple extension of the base layer tools such as inter-prediction, motion information signaling, etc. Coding efficiency of the enhancement layer is further improved using an adaptive loop filter and internal bit-depth increment. The performance of the proposed scalable video coding approach is compared to simulcast transmission of video data using high efficiency model version 6.1 (HM-6.1). The bitrate savings are measured using the Bjontegaard Delta (BD) rate for spatial scalability factors of 2 and 1.5, respectively, when compared with simulcast anchors. It is observed that the proposed approach provides average luma BD-rate gains of 33.7% and 50.5%, respectively.

  9. Wanted: Scalable Tracers for Diffusion Measurements

    PubMed Central

    2015-01-01

    Scalable tracers are potentially a useful tool to examine diffusion mechanisms and to predict diffusion coefficients, particularly for hindered diffusion in complex, heterogeneous, or crowded systems. Scalable tracers are defined as a series of tracers varying in size but with the same shape, structure, surface chemistry, deformability, and diffusion mechanism. Both chemical homology and constant dynamics are required. In particular, branching must not vary with size, and there must be no transition between ordinary diffusion and reptation. Measurements using scalable tracers yield the mean diffusion coefficient as a function of size alone; measurements using nonscalable tracers yield the variation due to differences in the other properties. Candidate scalable tracers are discussed for two-dimensional (2D) diffusion in membranes and three-dimensional diffusion in aqueous solutions. Correlations to predict the mean diffusion coefficient of globular biomolecules from molecular mass are reviewed briefly. Specific suggestions for the 3D case include the use of synthetic dendrimers or random hyperbranched polymers instead of dextran and the use of core–shell quantum dots. Another useful tool would be a series of scalable tracers varying in deformability alone, prepared by varying the density of crosslinking in a polymer to make say “reinforced Ficoll” or “reinforced hyperbranched polyglycerol.” PMID:25319586

  10. Wanted: scalable tracers for diffusion measurements.

    PubMed

    Saxton, Michael J

    2014-11-13

    Scalable tracers are potentially a useful tool to examine diffusion mechanisms and to predict diffusion coefficients, particularly for hindered diffusion in complex, heterogeneous, or crowded systems. Scalable tracers are defined as a series of tracers varying in size but with the same shape, structure, surface chemistry, deformability, and diffusion mechanism. Both chemical homology and constant dynamics are required. In particular, branching must not vary with size, and there must be no transition between ordinary diffusion and reptation. Measurements using scalable tracers yield the mean diffusion coefficient as a function of size alone; measurements using nonscalable tracers yield the variation due to differences in the other properties. Candidate scalable tracers are discussed for two-dimensional (2D) diffusion in membranes and three-dimensional diffusion in aqueous solutions. Correlations to predict the mean diffusion coefficient of globular biomolecules from molecular mass are reviewed briefly. Specific suggestions for the 3D case include the use of synthetic dendrimers or random hyperbranched polymers instead of dextran and the use of core-shell quantum dots. Another useful tool would be a series of scalable tracers varying in deformability alone, prepared by varying the density of crosslinking in a polymer to make say "reinforced Ficoll" or "reinforced hyperbranched polyglycerol."

  11. GPView: A program for wave function analysis and visualization.

    PubMed

    Shi, Tian; Wang, Ping

    2016-11-01

    In this manuscript, we introduce a recently developed program, GPView, which can be used for wave function analysis and visualization. The wave function analysis module can calculate and generate 3D cubes for various types of molecular orbitals and electron densities of electronic excited states, such as natural orbitals, natural transition orbitals, natural difference orbitals, hole-particle density, detachment-attachment density, and transition density. The visualization module of GPView can display molecular and electronic (iso-surface) structures. It is also able to animate single trajectories of molecular dynamics and non-adiabatic excited state molecular dynamics using the data stored in existing files. There are also other utilities to extract and process the output of quantum chemistry calculations. GPView provides a full graphical user interface (GUI), so it is very easy to use. It is available from the website http://life-tp.com/gpview.

  12. Garuda: a scalable tiled display wall using commodity PCs.

    PubMed

    Nirnimesh; Harish, Pawan; Narayanan, P J

    2007-01-01

    Cluster-based tiled display walls can provide cost-effective and scalable displays with high resolution and a large display area. The software to drive them needs to scale too if arbitrarily large displays are to be built. Chromium is a popular software API used to construct such displays. Chromium transparently renders any OpenGL application to a tiled display by partitioning and sending individual OpenGL primitives to each client per frame. Visualization applications often deal with massive geometric data with millions of primitives. Transmitting them every frame results in huge network requirements that adversely affect the scalability of the system. In this paper, we present Garuda, a client-server-based display wall framework that uses off-the-shelf hardware and a standard network. Garuda is scalable to large tile configurations and massive environments. It can transparently render any application built using the Open Scene Graph (OSG) API to a tiled display without any modification by the user. The Garuda server uses an object-based scene structure represented using a scene graph. The server determines the objects visible to each display tile using a novel adaptive algorithm that culls the scene graph to a hierarchy of frustums. Required parts of the scene graph are transmitted to the clients, which cache them to exploit the interframe redundancy. A multicast-based protocol is used to transmit the geometry to exploit the spatial redundancy present in tiled display systems. A geometry push philosophy from the server helps keep the clients in sync with one another. Neither the server nor a client needs to render the entire scene, making the system suitable for interactive rendering of massive models. Transparent rendering is achieved by intercepting the cull, draw, and swap functions of OSG and replacing them with our own. We demonstrate the performance and scalability of the Garuda system for different configurations of display wall. We also show that the

  13. Scalable k-means statistics with Titan.

    SciTech Connect

    Thompson, David C.; Bennett, Janine C.; Pebay, Philippe Pierre

    2009-11-01

    This report summarizes the existing statistical engines in VTK/Titan and presents both the serial and parallel k-means statistics engines. It is a sequel to [PT08], [BPRT09], and [PT09], which studied the parallel descriptive, correlative, multi-correlative, principal component analysis, and contingency engines. The ease of use of the new parallel k-means engine is illustrated by means of C++ code snippets, and algorithm verification is provided. This report justifies the design of the statistics engines with parallel scalability in mind, and provides scalability and speed-up analysis results for the k-means engine.
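
    A generic k-means iteration in NumPy, shown only to illustrate the computation that the parallel engine distributes; the VTK/Titan engine itself is C++ and is not reproduced here.

      import numpy as np

      def kmeans(points, k, n_iter=50, seed=0):
          rng = np.random.default_rng(seed)
          centers = points[rng.choice(len(points), size=k, replace=False)]
          for _ in range(n_iter):
              # Assign each point to its nearest center.
              d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=-1)
              labels = d.argmin(axis=1)
              # Recompute each center as the mean of its assigned points.
              for j in range(k):
                  if np.any(labels == j):
                      centers[j] = points[labels == j].mean(axis=0)
          return centers, labels

      pts = np.vstack([np.random.randn(100, 2), np.random.randn(100, 2) + 5])
      centers, labels = kmeans(pts, k=2)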

  14. Validation of a Scalable Solar Sailcraft

    NASA Technical Reports Server (NTRS)

    Murphy, D. M.

    2006-01-01

    The NASA In-Space Propulsion (ISP) program sponsored intensive solar sail technology and systems design, development, and hardware demonstration activities over the past 3 years. Efforts to validate a scalable solar sail system by functional demonstration in relevant environments, together with test-analysis correlation activities on a scalable solar sail system have recently been successfully completed. A review of the program, with descriptions of the design, results of testing, and analytical model validations of component and assembly functional, strength, stiffness, shape, and dynamic behavior are discussed. The scaled performance of the validated system is projected to demonstrate the applicability to flight demonstration and important NASA road-map missions.

  15. Scalable still image coding based on wavelet

    NASA Astrophysics Data System (ADS)

    Yan, Yang; Zhang, Zhengbing

    2005-02-01

    Scalable image coding is an important objective of future image coding technologies. In this paper, we present a scalable image coding scheme based on the wavelet transform. The method uses the well-known EZW (Embedded Zerotree Wavelet) algorithm: we give a high-quality encoding to the ROI (region of interest) of the original image and a rough encoding to the rest. The method works well under limited memory conditions, since the background region is encoded according to the available memory capacity. In this way, we can easily store the encoded image in limited memory space without losing its main information. Simulation results show that it is effective.

  16. Medusa: A Scalable MR Console Using USB

    PubMed Central

    Stang, Pascal P.; Conolly, Steven M.; Santos, Juan M.; Pauly, John M.; Scott, Greig C.

    2012-01-01

    MRI pulse sequence consoles typically employ closed proprietary hardware, software, and interfaces, making difficult any adaptation for innovative experimental technology. Yet MRI systems research is trending to higher channel count receivers, transmitters, gradient/shims, and unique interfaces for interventional applications. Customized console designs are now feasible for researchers with modern electronic components, but high data rates, synchronization, scalability, and cost present important challenges. Implementing large multi-channel MR systems with efficiency and flexibility requires a scalable modular architecture. With Medusa, we propose an open system architecture using the Universal Serial Bus (USB) for scalability, combined with distributed processing and buffering to address the high data rates and strict synchronization required by multi-channel MRI. Medusa uses a modular design concept based on digital synthesizer, receiver, and gradient blocks, in conjunction with fast programmable logic for sampling and synchronization. Medusa is a form of synthetic instrument, being reconfigurable for a variety of medical/scientific instrumentation needs. The Medusa distributed architecture, scalability, and data bandwidth limits are presented, and its flexibility is demonstrated in a variety of novel MRI applications. PMID:21954200

  17. Medusa: a scalable MR console using USB.

    PubMed

    Stang, Pascal P; Conolly, Steven M; Santos, Juan M; Pauly, John M; Scott, Greig C

    2012-02-01

    Magnetic resonance imaging (MRI) pulse sequence consoles typically employ closed proprietary hardware, software, and interfaces, making difficult any adaptation for innovative experimental technology. Yet MRI systems research is trending to higher channel count receivers, transmitters, gradient/shims, and unique interfaces for interventional applications. Customized console designs are now feasible for researchers with modern electronic components, but high data rates, synchronization, scalability, and cost present important challenges. Implementing large multichannel MR systems with efficiency and flexibility requires a scalable modular architecture. With Medusa, we propose an open system architecture using the universal serial bus (USB) for scalability, combined with distributed processing and buffering to address the high data rates and strict synchronization required by multichannel MRI. Medusa uses a modular design concept based on digital synthesizer, receiver, and gradient blocks, in conjunction with fast programmable logic for sampling and synchronization. Medusa is a form of synthetic instrument, being reconfigurable for a variety of medical/scientific instrumentation needs. The Medusa distributed architecture, scalability, and data bandwidth limits are presented, and its flexibility is demonstrated in a variety of novel MRI applications.

  18. Scalable IP switching based on optical interconnect

    NASA Astrophysics Data System (ADS)

    Luo, Zhixiang; Cao, Mingcui; Liu, Erwu

    2000-10-01

    IP traffic on the Internet and enterprise networks has been growing exponentially in the last several years, and much attention is being focused on the use of IP multicast for real-time multimedia applications. Current software-based, general-purpose CPU routers face great stress since they have high latency and low forwarding speeds. Based on ASICs, Layer 2 switching provides high-speed packet forwarding. Integrating the high speed of Layer 2 switching with the flexibility of Layer 3 routing, Layer 3 switching (IP switching) has been put forward in order to avoid the performance bottleneck associated with Layer 3 forwarding. In this paper, we present a prototype system of scalable IP switching based on a scalable ATM switching fabric and optical interconnect. The IP switching system mainly consists of the input/output interface unit, the scalable ATM switching fabric, and the IP control component. Optical interconnects between the input fan-out stage and the interconnect stage, and also between the interconnect stage and the output concentration stage, provide high-speed data paths. The interconnect stage is composed of 16 X 16 CMOS-SEED ATM switching modules. With 64 ports of OC-12 interface, the maximum throughput of the prototype system is about 20 million packets per second (MPPS) for a 256-byte average packet length, and the packet loss ratio is less than 10^-9. Benefiting from the scalable architecture and the optical interconnect, this IP switching system can easily scale to very large network sizes.

  19. Scalable Domain Decomposed Monte Carlo Particle Transport

    SciTech Connect

    O'Brien, Matthew Joseph

    2013-12-05

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.

  20. Visualizing Higher Order Finite Elements: FY05 Yearly Report.

    SciTech Connect

    Thompson, David; Pebay, Philippe Pierre

    2005-11-01

    This report contains an algorithm for decomposing higher-order finite elements into regions appropriate for isosurfacing and proves the conditions under which the algorithm will terminate. Finite elements are used to create piecewise polynomial approximants to the solution of partial differential equations for which no analytical solution exists. These polynomials represent fields such as pressure, stress, and momentum. In the past, these polynomials have been linear in each parametric coordinate. Each polynomial coefficient must be uniquely determined by a simulation, and these coefficients are called degrees of freedom. When there are not enough degrees of freedom, simulations will typically fail to produce a valid approximation to the solution. Recent work has shown that increasing the number of degrees of freedom by increasing the order of the polynomial approximation (instead of increasing the number of finite elements, each of which has its own set of coefficients) can allow some types of simulations to produce a valid approximation with many fewer degrees of freedom than increasing the number of finite elements alone. However, once the simulation has determined the values of all the coefficients in a higher-order approximant, tools do not exist for visual inspection of the solution. This report focuses on a technique for the visual inspection of higher-order finite element simulation results based on decomposing each finite element into simplicial regions where existing visualization algorithms such as isosurfacing will work. The requirements of the isosurfacing algorithm are enumerated and related to the places where the partial derivatives of the polynomial become zero. The original isosurfacing algorithm is then applied to each of these regions in turn. Acknowledgement: the authors would like to thank David Day and Louis Romero for their insight into polynomial system solvers and the LDRD Senior Council for the opportunity to pursue this research.

  1. Scalable metadata environments (MDE): artistically impelled immersive environments for large-scale data exploration

    NASA Astrophysics Data System (ADS)

    West, Ruth G.; Margolis, Todd; Prudhomme, Andrew; Schulze, Jürgen P.; Mostafavi, Iman; Lewis, J. P.; Gossmann, Joachim; Singh, Rajvikram

    2014-02-01

    Scalable Metadata Environments (MDEs) are an artistic approach for designing immersive environments for large-scale data exploration in which users interact with data by forming multiscale patterns that they alternately disrupt and reform. Developed and prototyped as part of an art-science research collaboration, we define an MDE as a 4D virtual environment structured by quantitative and qualitative metadata describing multidimensional data collections. Entire data sets (e.g., tens of millions of records) can be visualized and sonified at multiple scales and at different levels of detail so they can be explored interactively in real time within MDEs. They are designed to reflect similarities and differences in the underlying data or metadata such that patterns can be visually/aurally sorted in an exploratory fashion by an observer who is not familiar with the details of the mapping from data to visual, auditory, or dynamic attributes. While many approaches for visual and auditory data mining exist, MDEs are distinct in that they utilize qualitative and quantitative data and metadata to construct multiple interrelated conceptual coordinate systems. These "regions" function as conceptual lattices for scalable auditory and visual representations within virtual environments computationally driven by multi-GPU, CUDA-enabled fluid dynamics systems.

  2. A Scalable Distributed Approach to Mobile Robot Vision

    NASA Technical Reports Server (NTRS)

    Kuipers, Benjamin; Browning, Robert L.; Gribble, William S.

    1997-01-01

    This paper documents our progress during the first year of work on our original proposal entitled 'A Scalable Distributed Approach to Mobile Robot Vision'. We are pursuing a strategy for real-time visual identification and tracking of complex objects which does not rely on specialized image-processing hardware. In this system perceptual schemas represent objects as a graph of primitive features. Distributed software agents identify and track these features, using variable-geometry image subwindows of limited size. Active control of imaging parameters and selective processing makes simultaneous real-time tracking of many primitive features tractable. Perceptual schemas operate independently from the tracking of primitive features, so that real-time tracking of a set of image features is not hurt by latency in recognition of the object that those features make up. The architecture allows semantically significant features to be tracked with limited expenditure of computational resources, and allows the visual computation to be distributed across a network of processors. Early experiments are described which demonstrate the usefulness of this formulation, followed by a brief overview of our more recent progress (after the first year).

  3. Scalable coherent interface: Links to the future

    SciTech Connect

    Gustavson, D.B. ); Kristiansen, E. )

    1991-11-01

    Now that the Scalable Coherent Interface (SCI) has solved the bandwidth problem, what can we use it for? SCI was developed to support closely coupled multiprocessors and their caches in a distributed shared-memory environment, but its scalability and the efficient generality of its architecture make it work very well over a wide range of applications. It can replace a local area network for connecting workstations on a campus. It can be a powerful I/O channel for a supercomputer. It can be the processor-cache-memory-I/O connection in a highly parallel computer. It can gather data from enormous particle detectors and distribute it among thousands of processors. It can connect a desktop microprocessor to memory chips a few millimeters away, disk drives a few meters away, and servers a few kilometers away.

  4. Scalable coherent interface: Links to the future

    SciTech Connect

    Gustavson, D.B.; Kristiansen, E.

    1991-11-01

    Now that the Scalable Coherent Interface (SCI) has solved the bandwidth problem, what can we use it for? SCI was developed to support closely coupled multiprocessors and their caches in a distributed shared-memory environment, but its scalability and the efficient generality of its architecture make it work very well over a wide range of applications. It can replace a local area network for connecting workstations on a campus. It can be a powerful I/O channel for a supercomputer. It can be the processor-cache-memory-I/O connection in a highly parallel computer. It can gather data from enormous particle detectors and distribute it among thousands of processors. It can connect a desktop microprocessor to memory chips a few millimeters away, disk drives a few meters away, and servers a few kilometers away.

  5. Scalable descriptive and correlative statistics with Titan.

    SciTech Connect

    Thompson, David C.; Pebay, Philippe Pierre

    2008-12-01

    This report summarizes the existing statistical engines in VTK/Titan and presents the parallel versions thereof which have already been implemented. The ease of use of these parallel engines is illustrated by means of C++ code snippets. Furthermore, this report justifies the design of these engines with parallel scalability in mind; this theoretical property is then verified with test runs that demonstrate optimal parallel speed-up with up to 200 processors.

  6. Scalable Quantum Information Processing and Applications

    DTIC Science & Technology

    2008-01-19

    Final report for the project Scalable Quantum Information Processing and Applications, sponsored by the U.S. Army Research Office. The recoverable content of this record consists of device figure labels (read-out channel depletion gates, source and drain contacts, qubit control gates for quantum teleportation, spin coherent ...) and the subject terms: quantum repeater, quantum computing, quantum information processing.

  7. Dilution Refrigerator Technology for Scalable Quantum Computing

    DTIC Science & Technology

    2014-05-22

    Under contract W911NF-10-C-0004, sponsored by ARO, High Precision Devices, Inc. has successfully designed, built, tested, and delivered a cryogen-free dilution refrigerator for scalable quantum computing. Subject terms: cryogenics, quantum computing.

  8. Pursuing Scalability for hypre's Conceptual Interfaces

    SciTech Connect

    Falgout, R D; Jones, J E; Yang, U M

    2004-07-21

    The software library hypre provides high performance preconditioners and solvers for the solution of large, sparse linear systems on massively parallel computers as well as conceptual interfaces that allow users to access the library in the way they naturally think about their problems. These interfaces include a stencil-based structured interface (Struct); a semi-structured interface (semiStruct), which is appropriate for applications that are mostly structured, e.g. block structured grids, composite grids in structured adaptive mesh refinement applications, and overset grids; a finite element interface (FEI) for unstructured problems, as well as a conventional linear-algebraic interface (IJ). It is extremely important to provide an efficient, scalable implementation of these interfaces in order to support the scalable solvers of the library, especially when using tens of thousands of processors. This paper describes the data structures, parallel implementation and resulting performance of the IJ, Struct and semiStruct interfaces. It investigates their scalability, presents successes as well as pitfalls of some of the approaches and suggests ways of dealing with them.

  9. DISP: Optimizations towards Scalable MPI Startup

    SciTech Connect

    Fu, Huansong; Pophale, Swaroop S; Gorentla Venkata, Manjunath; Yu, Weikuan

    2016-01-01

    Despite the popularity of MPI for high performance computing, the startup of MPI programs faces a scalability challenge as both the execution time and memory consumption increase drastically at scale. We have examined this problem using the collective modules of Cheetah and Tuned in Open MPI as representative implementations. Previous improvements for collectives have focused on algorithmic advances and hardware off-load. In this paper, we examine the startup cost of the collective module within a communicator and explore various techniques to improve its efficiency and scalability. Accordingly, we have developed a new scalable startup scheme with three internal techniques, namely Delayed Initialization, Module Sharing and Prediction-based Topology Setup (DISP). Our DISP scheme greatly benefits the collective initialization of the Cheetah module. At the same time, it helps boost the performance of non-collective initialization in the Tuned module. We evaluate the performance of our implementation on Titan supercomputer at ORNL with up to 4096 processes. The results show that our delayed initialization can speed up the startup of Tuned and Cheetah by an average of 32.0% and 29.2%, respectively, our module sharing can reduce the memory consumption of Tuned and Cheetah by up to 24.1% and 83.5%, respectively, and our prediction-based topology setup can speed up the startup of Cheetah by up to 80%.

  10. ParaText : scalable solutions for processing and searching very large document collections : final LDRD report.

    SciTech Connect

    Crossno, Patricia Joyce; Dunlavy, Daniel M.; Stanton, Eric T.; Shead, Timothy M.

    2010-09-01

    This report is a summary of the accomplishments of the 'Scalable Solutions for Processing and Searching Very Large Document Collections' LDRD, which ran from FY08 through FY10. Our goal was to investigate scalable text analysis; specifically, methods for information retrieval and visualization that could scale to extremely large document collections. Towards that end, we designed, implemented, and demonstrated a scalable framework for text analysis - ParaText - as a major project deliverable. Further, we demonstrated the benefits of using visual analysis in text analysis algorithm development, improved performance of heterogeneous ensemble models in data classification problems, and the advantages of information theoretic methods in user analysis and interpretation in cross language information retrieval. The project involved 5 members of the technical staff and 3 summer interns (including one who worked two summers). It resulted in a total of 14 publications, 3 new software libraries (2 open source and 1 internal to Sandia), several new end-user software applications, and over 20 presentations. Several follow-on projects have already begun or will start in FY11, with additional projects currently in proposal.

  11. Design and implementation of scalable tape archiver

    NASA Technical Reports Server (NTRS)

    Nemoto, Toshihiro; Kitsuregawa, Masaru; Takagi, Mikio

    1996-01-01

    In order to reduce costs, computer manufacturers try to use commodity parts as much as possible. Mainframes using proprietary processors are being replaced by high performance RISC microprocessor-based workstations, which are further being replaced by the commodity microprocessors used in personal computers. Highly reliable disks for mainframes are also being replaced by disk arrays, which are complexes of disk drives. In this paper we try to clarify the feasibility of a large scale tertiary storage system composed of 8-mm tape archivers utilizing robotics. In the near future, the 8-mm tape archiver will be widely used and become a commodity part, since the recent rapid growth of multimedia applications requires much larger storage than disk drives can provide. We designed a scalable tape archiver which connects as many 8-mm tape archivers (element archivers) as possible. In the scalable archiver, robotics can exchange a cassette tape between two adjacent element archivers mechanically. Thus, we can build a large scalable archiver inexpensively. In addition, a sophisticated migration mechanism distributes frequently accessed tapes (hot tapes) evenly among all of the element archivers, which improves the throughput considerably. Even with the failures of some tape drives, the system dynamically redistributes hot tapes to the other element archivers which have live tape drives. Several kinds of specially tailored huge archivers are on the market; however, the 8-mm tape scalable archiver could replace them. To maintain high performance in spite of high access locality when a large number of archivers are attached to the scalable archiver, it is necessary to scatter frequently accessed cassettes among the element archivers and to use the tape drives efficiently. For this purpose, we introduce two cassette migration algorithms, foreground migration and background migration. Background migration transfers cassettes between element archivers to redistribute frequently accessed

  12. An atmospheric visual analysis and exploration system.

    PubMed

    Song, Yuyan; Ye, Jing; Svakhine, Nikolai; Lasher-Trapp, Sonia; Baldwin, Mike; Ebert, David S

    2006-01-01

    Meteorological research involves the analysis of multi-field, multi-scale, and multi-source data sets. In order to better understand these data sets, models and measurements at different resolutions must be analyzed. Unfortunately, traditional atmospheric visualization systems only provide tools to view a limited number of variables and small segments of the data. These tools are often restricted to two-dimensional contour or vector plots or three-dimensional isosurfaces. The meteorologist must mentally synthesize the data from multiple plots to glean the information needed to produce a coherent picture of the weather phenomenon of interest. In order to provide better tools to meteorologists and reduce system limitations, we have designed an integrated atmospheric visual analysis and exploration system for interactive analysis of weather data sets. Our system allows for the integrated visualization of 1D, 2D, and 3D atmospheric data sets in common meteorological grid structures and utilizes a variety of rendering techniques. These tools provide meteorologists with new abilities to analyze their data and answer questions on regions of interest, ranging from physics-based atmospheric rendering to illustrative rendering containing particles and glyphs. In this paper, we will discuss the use and performance of our visual analysis for two important meteorological applications. The first application is warm rain formation in small cumulus clouds. Here, our three-dimensional, interactive visualization of modeled drop trajectories within spatially correlated fields from a cloud simulation has provided researchers with new insight. Our second application is improving and validating severe storm models, specifically the Weather Research and Forecasting (WRF) model. This is done through correlative visualization of WRF model and experimental Doppler storm data.

  13. Scalable resource management in high performance computers.

    SciTech Connect

    Frachtenberg, E.; Petrini, F.; Fernandez Peinador, J.; Coll, S.

    2002-01-01

    Clusters of workstations have emerged as an important platform for building cost-effective, scalable and highly-available computers. Although many hardware solutions are available today, the largest challenge in making large-scale clusters usable lies in the system software. In this paper we present STORM, a resource management tool designed to provide scalability, low overhead and the flexibility necessary to efficiently support and analyze a wide range of job scheduling algorithms. STORM achieves these feats by closely integrating the management daemons with the low-level features that are common in state-of-the-art high-performance system area networks. The architecture of STORM is based on three main technical innovations. First, a sizable part of the scheduler runs in the thread processor located on the network interface. Second, we use hardware collectives that are highly scalable both for implementing control heartbeats and to distribute the binary of a parallel job in near-constant time, irrespective of job and machine sizes. Third, we use an I/O bypass protocol that allows fast data movements from the file system to the communication buffers in the network interface and vice versa. The experimental results show that STORM can launch a job with a binary of 12MB on a 64 processor/32 node cluster in less than 0.25 sec on an empty network, in less than 0.45 sec when all the processors are busy computing other jobs, and in less than 0.65 sec when the network is flooded with background traffic. This paper provides experimental and analytical evidence that these results scale to a much larger number of nodes. To the best of our knowledge, STORM is at least two orders of magnitude faster than existing production schedulers in launching jobs, performing resource management tasks and gang scheduling.

  14. Scalable Unix tools on parallel processors

    SciTech Connect

    Gropp, W.; Lusk, E.

    1994-12-31

    The introduction of parallel processors that run a separate copy of Unix on each processor has introduced new problems in managing the user's environment. This paper discusses some generalizations of common Unix commands for managing files (e.g., ls) and processes (e.g., ps) that are convenient and scalable. These basic tools, just like their Unix counterparts, are text-based. We also discuss a way to use these with a graphical user interface (GUI). Some notes on the implementation are provided. Prototypes of these commands are publicly available.
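
    As a rough, present-day illustration of what such a generalized command can look like, the sketch below (not the authors' implementation; the host names and password-less ssh access are assumptions) fans a ps command out to several nodes and merges the output with host-name prefixes.

    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    def remote(host, command):
        # Run the command on one host over ssh and capture its output.
        out = subprocess.run(["ssh", host, command],
                             capture_output=True, text=True, timeout=30)
        return host, out.stdout

    def parallel_cmd(hosts, command="ps -ef"):
        with ThreadPoolExecutor(max_workers=len(hosts)) as pool:
            for host, text in pool.map(lambda h: remote(h, command), hosts):
                for line in text.splitlines():
                    print(f"{host}: {line}")

    if __name__ == "__main__":
        parallel_cmd(["node01", "node02", "node03"])   # hypothetical host names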

  15. Scalable Synthesis of (−)-Thapsigargin

    PubMed Central

    2016-01-01

    Total syntheses of the complex, highly oxygenated sesquiterpenes thapsigargin (1) and nortrilobolide (2) are presented. Access to analogues of these promising bioactive natural products has been limited to tedious isolation and semisynthetic efforts. Elegant prior total syntheses demonstrated the feasibility of creating these entities in 36–42 step processes. The currently reported route proceeds in a scalable and more concise fashion by utilizing two-phase terpene synthesis logic. Salient features of the work include application of the classic photosantonin rearrangement and precisely choreographed installation of the multiple oxygenations present on the guaianolide skeleton. PMID:28149952

  16. Scalable analog wavefront sensor with subpixel resolution

    NASA Astrophysics Data System (ADS)

    Wilcox, Michael

    2006-06-01

    Standard Shack-Hartmann wavefront sensors use a CCD element to sample position and distortion of a target or guide star. Digital sampling of the element and transfer to a memory space for subsequent computation adds significant temporal delay, thus limiting the spatial frequency and scalability of the system as a wavefront sensor. A new approach to sampling uses information processing principles in an insect compound eye. Analog circuitry eliminates digital sampling and extends the useful range of the system to control a deformable mirror and make a faster, more capable wavefront sensor.

  17. Scalable networks for discrete quantum random walks

    SciTech Connect

    Fujiwara, S.; Osaki, H.; Buluta, I.M.; Hasegawa, S.

    2005-09-15

    Recently, quantum random walks (QRWs) have been thoroughly studied in order to develop new quantum algorithms. In this paper we propose scalable quantum networks for discrete QRWs on circles, lines, and also in higher dimensions. In our method the information about the position of the walker is stored in a quantum register and the network consists of only one-qubit rotation and controlled^n-NOT gates, therefore it is purely computational and independent of the physical implementation. As an example, we describe the experimental realization in an ion-trap system.
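
    For intuition about the walk itself, the following NumPy sketch classically simulates the textbook Hadamard-coin walk on a circle; it is not the gate network proposed in the paper, and the number of sites and steps are arbitrary choices.

    import numpy as np

    def qrw_circle(n_sites=16, steps=10):
        # state[x, c] holds the amplitude at site x with coin state c
        # (c = 0 steps clockwise, c = 1 steps counter-clockwise).
        state = np.zeros((n_sites, 2), dtype=complex)
        state[0, 0] = 1.0
        hadamard = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
        for _ in range(steps):
            state = state @ hadamard.T              # coin rotation on every site
            clockwise = np.roll(state[:, 0], 1)     # coin 0 amplitudes shift forward
            counter = np.roll(state[:, 1], -1)      # coin 1 amplitudes shift backward
            state = np.stack([clockwise, counter], axis=1)
        return (np.abs(state) ** 2).sum(axis=1)     # position probabilities

    if __name__ == "__main__":
        probs = qrw_circle()
        print(probs.round(3), "sum =", round(float(probs.sum()), 6))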

  18. First experience with the scalable coherent interface

    SciTech Connect

    Mueller, H. (ECP Division); RD24 Collaboration

    1994-02-01

    The research project RD24 is studying applications of the Scalable Coherent Interface (IEEE-1596) standard for the large hadron collider (LHC). First SCI node chips from Dolphin were used to demonstrate the use and functioning of SCI's packet protocols and to measure data rates. The authors present results from a first, two-node SCI ringlet at CERN, based on a R3000 RISC processor node and DMA node on a MC68040 processor bus. A diagnostic link analyzer monitors the SCI packet protocols up to full link bandwidth. In its second phase, RD24 will build a first implementation of a multi-ringlet SCI data merger.

  19. Scalable Optical-Fiber Communication Networks

    NASA Technical Reports Server (NTRS)

    Chow, Edward T.; Peterson, John C.

    1993-01-01

    Scalable arbitrary fiber extension network (SAFEnet) is conceptual fiber-optic communication network passing digital signals among variety of computers and input/output devices at rates from 200 Mb/s to more than 100 Gb/s. Intended for use with very-high-speed computers and other data-processing and communication systems in which message-passing delays must be kept short. Inherent flexibility makes it possible to match performance of network to computers by optimizing configuration of interconnections. In addition, interconnections made redundant to provide tolerance to faults.

  20. SCIMITAR: Scalable Stream-Processing for Sensor Information Brokering

    DTIC Science & Technology

    2013-11-01

    paradigms, one might consider using any of the highly scalable batched Map-Reduce technologies as, for example, implemented in Hadoop [10]. Although ... extremely scalable for information processing, this approach cannot provide a scalable, low-latency approach to information. Hadoop needs to register ... information in the Hadoop NameNode service, and then read from disk for any brokering function that could be supported by Hadoop. Whereas successful

  1. Scalable Quantum Networks for Distributed Computing and Sensing

    DTIC Science & Technology

    2016-04-01

    Final report AFRL-AFOSR-UK-TR-2016-0007 (Ian Walmsley, The University of Oxford), covering 01-Sep-2012 to 31-Aug-2015. Abstract: We identified two barriers to the implementation of large-scale photonic quantum networks. First, as scalability requires

  2. An Open Infrastructure for Scalable, Reconfigurable Analysis

    SciTech Connect

    de Supinski, B R; Fowler, R; Gamblin, T; Mueller, F; Ratn, P; Schulz, M

    2008-05-15

    Petascale systems will have hundreds of thousands of processor cores so their applications must be massively parallel. Effective use of petascale systems will require efficient interprocess communication through memory hierarchies and complex network topologies. Tools to collect and analyze detailed data about this communication would facilitate its optimization. However, several factors complicate tool design. First, large-scale runs on petascale systems will be a precious commodity, so scalable tools must have almost no overhead. Second, the volume of performance data from petascale runs could easily overwhelm hand analysis and, thus, tools must collect only data that is relevant to diagnosing performance problems. Analysis must be done in-situ, when available processing power is proportional to the data. We describe a tool framework that overcomes these complications. Our approach allows application developers to combine existing techniques for measurement, analysis, and data aggregation to develop application-specific tools quickly. Dynamic configuration enables application developers to select exactly the measurements needed and generic components support scalable aggregation and analysis of this data with little additional effort.

  3. A scalable and operationally simple radical trifluoromethylation

    PubMed Central

    Beatty, Joel W.; Douglas, James J.; Cole, Kevin P.; Stephenson, Corey R. J.

    2015-01-01

    The large number of reagents that have been developed for the synthesis of trifluoromethylated compounds is a testament to the importance of the CF3 group as well as the associated synthetic challenge. Current state-of-the-art reagents for appending the CF3 functionality directly are highly effective; however, their use on preparative scale has minimal precedent because they require multistep synthesis for their preparation, and/or are prohibitively expensive for large-scale application. For a scalable trifluoromethylation methodology, trifluoroacetic acid and its anhydride represent an attractive solution in terms of cost and availability; however, because of the exceedingly high oxidation potential of trifluoroacetate, previous endeavours to use this material as a CF3 source have required the use of highly forcing conditions. Here we report a strategy for the use of trifluoroacetic anhydride for a scalable and operationally simple trifluoromethylation reaction using pyridine N-oxide and photoredox catalysis to effect a facile decarboxylation to the CF3 radical. PMID:26258541

  4. Towards Scalable Graph Computation on Mobile Devices

    PubMed Central

    Chen, Yiqi; Lin, Zhiyuan; Pienta, Robert; Kahng, Minsuk; Chau, Duen Horng

    2015-01-01

    Mobile devices have become increasingly central to our everyday activities, due to their portability, multi-touch capabilities, and ever-improving computational power. Such attractive features have spurred research interest in leveraging mobile devices for computation. We explore a novel approach that aims to use a single mobile device to perform scalable graph computation on large graphs that do not fit in the device's limited main memory, opening up the possibility of performing on-device analysis of large datasets, without relying on the cloud. Based on the familiar memory mapping capability provided by today's mobile operating systems, our approach to scale up computation is powerful and intentionally kept simple to maximize its applicability across the iOS and Android platforms. Our experiments demonstrate that an iPad mini can perform fast computation on large real graphs with as many as 272 million edges (Google+ social graph), at a speed that is only a few times slower than a 13″ Macbook Pro. Through creating a real world iOS app with this technique, we demonstrate the strong potential application for scalable graph computation on a single mobile device using our approach. PMID:25859564
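
    The memory-mapping idea can be sketched on an ordinary Python stack: map a flat binary edge list with numpy.memmap and stream over it in chunks, so the whole graph never has to be resident in RAM. The file name and the int32 (src, dst) layout below are illustrative assumptions, not the authors' format.

    import numpy as np

    def degree_counts(edge_file, n_nodes, chunk=1_000_000):
        # Map the file into the address space; pages are faulted in on demand.
        edges = np.memmap(edge_file, dtype=np.int32, mode="r").reshape(-1, 2)
        out_deg = np.zeros(n_nodes, dtype=np.int64)
        for start in range(0, len(edges), chunk):
            block = edges[start:start + chunk]
            # Assumes node ids are in [0, n_nodes).
            out_deg += np.bincount(block[:, 0], minlength=n_nodes)
        return out_deg

    if __name__ == "__main__":
        # Create a tiny demo file so the sketch is runnable end to end.
        demo = np.array([[0, 1], [0, 2], [1, 2], [2, 0]], dtype=np.int32)
        demo.tofile("edges.bin")
        print(degree_counts("edges.bin", n_nodes=3))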

  5. Towards Scalable Graph Computation on Mobile Devices.

    PubMed

    Chen, Yiqi; Lin, Zhiyuan; Pienta, Robert; Kahng, Minsuk; Chau, Duen Horng

    2014-10-01

    Mobile devices have become increasingly central to our everyday activities, due to their portability, multi-touch capabilities, and ever-improving computational power. Such attractive features have spurred research interest in leveraging mobile devices for computation. We explore a novel approach that aims to use a single mobile device to perform scalable graph computation on large graphs that do not fit in the device's limited main memory, opening up the possibility of performing on-device analysis of large datasets, without relying on the cloud. Based on the familiar memory mapping capability provided by today's mobile operating systems, our approach to scale up computation is powerful and intentionally kept simple to maximize its applicability across the iOS and Android platforms. Our experiments demonstrate that an iPad mini can perform fast computation on large real graphs with as many as 272 million edges (Google+ social graph), at a speed that is only a few times slower than a 13″ Macbook Pro. Through creating a real world iOS app with this technique, we demonstrate the strong potential application for scalable graph computation on a single mobile device using our approach.

  6. Designing Scalable PGAS Communication Subsystems on Cray Gemini Interconnect

    SciTech Connect

    Vishnu, Abhinav; Daily, Jeffrey A.; Palmer, Bruce J.

    2012-12-26

    The Cray Gemini Interconnect has been recently introduced as a next generation network architecture for building multi-petaflop supercomputers. Cray XE6 systems including LANL Cielo, NERSC Hopper, ORNL Titan and the proposed NCSA BlueWaters leverage the Gemini Interconnect as their primary interconnection network. At the same time, programming models such as the Message Passing Interface (MPI) and Partitioned Global Address Space (PGAS) models such as Unified Parallel C (UPC) and Co-Array Fortran (CAF) have become available on these systems. Global Arrays is a popular PGAS model used in a variety of application domains including hydrodynamics, chemistry and visualization. Global Arrays uses the Aggregate Remote Memory Copy Interface (ARMCI) as the communication runtime system for Remote Memory Access communication. This paper presents a design, implementation and performance evaluation of scalable and high performance communication subsystems on the Cray Gemini Interconnect using ARMCI. The design space is explored, and time-space complexities of communication protocols for one-sided communication primitives such as contiguous and uniformly non-contiguous datatypes, atomic memory operations (AMOs) and memory synchronization are presented. An implementation of the proposed design (referred to as ARMCI-Gemini) demonstrates its efficacy on communication primitives, application kernels such as LU decomposition and full applications such as a Smooth Particle Hydrodynamics (SPH) application.

  7. GRIZ: Finite element analysis results visualization for unstructured grids. User manual

    SciTech Connect

    Dovey, D.J.; Spelce, T.E.

    1993-10-01

    GRIZ supports interactive visualization of finite element analysis results on unstructured grids. GRIZ is a general-purpose post-processing application which is designed to work with a variety of analysis codes. Currently, GRIZ is capable of calculating and displaying derived variables for the DYNA3D, NIKE3D and TOPAZ3D analysis codes. GRIZ reads in data files in the "MDG plotfile" format. GRIZ provides support for modern 3D visualization techniques such as isosurface display, cutting planes and display of vector data. GRIZ also incorporates the ability to animate data over time and to store animation frames to a video disk. GRIZ is designed to utilize the capabilities of modern graphics workstations which provide hardware support for 3D graphics, thereby giving the user as much interactive performance as possible. This should make it easier for analysts to explore and interrogate their analysis results.
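
    GRIZ itself is a compiled workstation application, but the isosurface display it offers can be illustrated generically: the sketch below extracts an isosurface from a synthetic scalar field with scikit-image's marching cubes, which is assumed here as a stand-in and is not GRIZ's internal algorithm.

    import numpy as np
    from skimage.measure import marching_cubes

    # Synthetic scalar field: distance from the centre of a 64^3 grid.
    x, y, z = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
    field = np.sqrt(x**2 + y**2 + z**2)

    # Extract the isosurface at value 0.5 (a sphere of radius 0.5).
    verts, faces, normals, values = marching_cubes(field, level=0.5)
    print(f"{len(verts)} vertices, {len(faces)} triangles")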

  8. Visual Analytics for Power Grid Contingency Analysis

    SciTech Connect

    Wong, Pak C.; Huang, Zhenyu; Chen, Yousu; Mackey, Patrick S.; Jin, Shuangshuang

    2014-01-20

    Contingency analysis is the process of employing different measures to model scenarios, analyze them, and then derive the best response to remove the threats. This application paper focuses on a class of contingency analysis problems found in the power grid management system. A power grid is a geographically distributed interconnected transmission network that transmits and delivers electricity from generators to end users. The power grid contingency analysis problem is increasingly important because of both the growing size of the underlying raw data that need to be analyzed and the urgency to deliver working solutions in an aggressive timeframe. Failure to do so may bring significant financial, economic, and security impacts to all parties involved and the society at large. The paper presents a scalable visual analytics pipeline that transforms about 100 million contingency scenarios to a manageable size and form for grid operators to examine different scenarios and come up with preventive or mitigation strategies to address the problems in a predictive and timely manner. Great attention is given to the computational scalability, information scalability, visual scalability, and display scalability issues surrounding the data analytics pipeline. Most of the large-scale computation requirements of our work are conducted on a Cray XMT multi-threaded parallel computer. The paper demonstrates a number of examples using western North American power grid models and data.

  9. Scalable Feature Matching by Dual Cascaded Scalar Quantization for Image Retrieval.

    PubMed

    Zhou, Wengang; Yang, Ming; Wang, Xiaoyu; Li, Houqiang; Lin, Yuanqing; Tian, Qi

    2016-01-01

    In this paper, we investigate the problem of scalable visual feature matching in large-scale image search and propose a novel cascaded scalar quantization scheme in dual resolution. We formulate the visual feature matching as a range-based neighbor search problem and approach it by identifying hyper-cubes with a dual-resolution scalar quantization strategy. Specifically, for each dimension of the PCA-transformed feature, scalar quantization is performed at both coarse and fine resolutions. The scalar quantization results at the coarse resolution are cascaded over multiple dimensions to index an image database. The scalar quantization results over multiple dimensions at the fine resolution are concatenated into a binary super-vector and stored into the index list for efficient verification. The proposed cascaded scalar quantization (CSQ) method is free of the costly visual codebook training and thus is independent of any image descriptor training set. The index structure of the CSQ is flexible enough to accommodate new image features and scalable enough to index large-scale image databases. We evaluate our approach on the public benchmark datasets for large-scale image retrieval. Experimental results demonstrate the competitive retrieval performance of the proposed method compared with several recent retrieval algorithms on feature quantization.
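
    A toy NumPy sketch of the dual-resolution idea follows; the bin edges, the number of cascaded dimensions and the one-bit fine code are invented for illustration and do not reproduce the paper's parameters.

    import numpy as np

    COARSE_EDGES = np.array([-0.67, 0.0, 0.67])   # rough quartiles of a standard normal (assumption)

    def csq_encode(feature):
        # feature is assumed PCA-transformed, roughly zero-mean and unit-variance.
        coarse = np.digitize(feature, COARSE_EDGES)        # 2-bit coarse code per dimension
        fine = (feature > 0.0).astype(np.uint8)            # 1-bit fine code per dimension
        key = tuple(int(c) for c in coarse[:8])            # cascade the first dimensions into a bucket key
        return key, np.packbits(fine)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        database = rng.normal(size=(1000, 32))
        index = {}
        for i, feat in enumerate(database):
            key, code = csq_encode(feat)
            index.setdefault(key, []).append((i, code))
        query_key, query_code = csq_encode(database[0])
        candidates = index.get(query_key, [])
        # Candidates share the coarse bucket; the fine binary code verifies them by Hamming distance.
        dists = [(i, int(np.unpackbits(code ^ query_code).sum())) for i, code in candidates]
        print(len(candidates), "candidates in the bucket; best match:", min(dists, key=lambda d: d[1]))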

  10. Network selection, Information filtering and Scalable computation

    NASA Astrophysics Data System (ADS)

    Ye, Changqing

    -complete factorizations, possibly with a high percentage of missing values. This promotes additional sparsity beyond rank reduction. Computationally, we design methods based on a "decomposition and combination" strategy, to break large-scale optimization into many small subproblems to solve in a recursive and parallel manner. On this basis, we implement the proposed methods through multi-platform shared-memory parallel programming, and through Mahout, a library for scalable machine learning and data mining, for MapReduce computation. For example, our methods are scalable to a dataset consisting of three billion observations on a single machine with sufficient memory, with good timings. Both theoretical and numerical investigations show that the proposed methods exhibit significant improvement in accuracy over state-of-the-art scalable methods.

  11. A New, Scalable and Low Cost Multi-Channel Monitoring System for Polymer Electrolyte Fuel Cells.

    PubMed

    Calderón, Antonio José; González, Isaías; Calderón, Manuel; Segura, Francisca; Andújar, José Manuel

    2016-03-09

    In this work a new, scalable and low cost multi-channel monitoring system for Polymer Electrolyte Fuel Cells (PEFCs) has been designed, constructed and experimentally validated. This developed monitoring system performs non-intrusive voltage measurement of each individual cell of a PEFC stack and it is scalable, in the sense that it is capable of carrying out measurements in stacks from 1 to 120 cells (from watts to kilowatts). The developed system comprises two main subsystems: hardware devoted to data acquisition (DAQ) and software devoted to real-time monitoring. The DAQ subsystem is based on the low-cost open-source platform Arduino and the real-time monitoring subsystem has been developed using the high-level graphical language NI LabVIEW. Such integration can be considered a novelty in the scientific literature for PEFC monitoring systems. An original amplifying and multiplexing board has been designed to increase the Arduino input port availability. Data storage and real-time monitoring have been performed with an easy-to-use interface. Graphical and numerical visualization allows a continuous tracking of cell voltage. Scalability, flexibility, ease of use, versatility and low cost are the main features of the proposed approach. The system is described and experimental results are presented. These results demonstrate its suitability to monitor the voltage in a PEFC at cell level.
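
    As a hypothetical host-side companion to such a DAQ board, the sketch below reads comma-separated per-cell voltages from a serial port with pyserial; the port name, baud rate and line format are assumptions, and in the actual system this role is played by LabVIEW rather than Python.

    import serial  # pyserial

    def monitor(port="/dev/ttyACM0", baud=9600, n_cells=20):
        # Continuously read one CSV line of cell voltages per sample (assumed format).
        with serial.Serial(port, baud, timeout=2) as link:
            while True:
                line = link.readline().decode(errors="ignore").strip()
                if not line:
                    continue
                volts = [float(v) for v in line.split(",")[:n_cells]]
                print("min {:.3f} V  max {:.3f} V  stack {:.2f} V".format(
                    min(volts), max(volts), sum(volts)))

    if __name__ == "__main__":
        monitor()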

  12. A scalable distributed paradigm for multi-user interaction with tiled rear projection display walls.

    PubMed

    Roman, Pablo; Lazarov, Maxim; Majumder, Aditi

    2010-01-01

    We present the first distributed paradigm for multiple users to interact simultaneously with large tiled rear projection display walls. Unlike earlier works, our paradigm allows easy scalability across different applications, interaction modalities, displays and users. The novelty of the design lies in its distributed nature allowing well-compartmented, application independent, and application specific modules. This enables adapting to different 2D applications and interaction modalities easily by changing a few application specific modules. We present four challenging 2D applications on a nine-projector display to demonstrate the application scalability of our method: map visualization, virtual graffiti, virtual bulletin board and an emergency management system. We demonstrate the scalability of our method to multiple interaction modalities by showing both gesture-based and laser-based user interfaces. Finally, we improve earlier distributed methods to register multiple projectors. Previous works need multiple patterns to identify the neighbors, the configuration of the display and the registration across multiple projectors in logarithmic time with respect to the number of projectors in the display. We propose a new approach that achieves this using a single pattern based on specially augmented QR codes in constant time. Further, previous distributed registration algorithms are prone to large misregistrations. We propose a novel radially cascading geometric registration technique that yields significantly better accuracy. Thus, our improvements allow a significantly more efficient and accurate technique for distributed self-registration of multi-projector display walls.

  13. A New, Scalable and Low Cost Multi-Channel Monitoring System for Polymer Electrolyte Fuel Cells

    PubMed Central

    Calderón, Antonio José; González, Isaías; Calderón, Manuel; Segura, Francisca; Andújar, José Manuel

    2016-01-01

    In this work a new, scalable and low cost multi-channel monitoring system for Polymer Electrolyte Fuel Cells (PEFCs) has been designed, constructed and experimentally validated. This developed monitoring system performs non-intrusive voltage measurement of each individual cell of a PEFC stack and it is scalable, in the sense that it is capable of carrying out measurements in stacks from 1 to 120 cells (from watts to kilowatts). The developed system comprises two main subsystems: hardware devoted to data acquisition (DAQ) and software devoted to real-time monitoring. The DAQ subsystem is based on the low-cost open-source platform Arduino and the real-time monitoring subsystem has been developed using the high-level graphical language NI LabVIEW. Such integration can be considered a novelty in the scientific literature for PEFC monitoring systems. An original amplifying and multiplexing board has been designed to increase the Arduino input port availability. Data storage and real-time monitoring have been performed with an easy-to-use interface. Graphical and numerical visualization allows a continuous tracking of cell voltage. Scalability, flexibility, ease of use, versatility and low cost are the main features of the proposed approach. The system is described and experimental results are presented. These results demonstrate its suitability to monitor the voltage in a PEFC at cell level. PMID:27005630

  14. Scalable problems and memory bounded speedup

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He; Ni, Lionel M.

    1992-01-01

    In this paper three models of parallel speedup are studied. They are fixed-size speedup, fixed-time speedup and memory-bounded speedup. The latter two consider the relationship between speedup and problem scalability. Two sets of speedup formulations are derived for these three models. One set considers uneven workload allocation and communication overhead and gives more accurate estimation. Another set considers a simplified case and provides a clear picture on the impact of the sequential portion of an application on the possible performance gain from parallel processing. The simplified fixed-size speedup is Amdahl's law. The simplified fixed-time speedup is Gustafson's scaled speedup. The simplified memory-bounded speedup contains both Amdahl's law and Gustafson's scaled speedup as special cases. This study leads to a better understanding of parallel processing.
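
    The simplified forms mentioned above can be written out explicitly. With serial fraction f, p processors, and a function g(p) (Sun and Ni's notation) describing how the parallel workload grows with the memory available on p processors, one common statement of the three models is:

    \begin{align*}
      S_{\text{fixed-size}}(p)     &= \frac{1}{f + (1 - f)/p}  &&\text{(Amdahl's law)}\\
      S_{\text{fixed-time}}(p)     &= f + (1 - f)\,p           &&\text{(Gustafson's scaled speedup)}\\
      S_{\text{memory-bounded}}(p) &= \frac{f + (1 - f)\,g(p)}{f + (1 - f)\,g(p)/p}
    \end{align*}

    Setting g(p) = 1 recovers Amdahl's law and g(p) = p recovers Gustafson's scaled speedup, which matches the containment noted in the abstract.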

  15. A versatile scalable PET processing system

    SciTech Connect

    H. Dong, A. Weisenberger, J. McKisson, Xi Wenze, C. Cuevas, J. Wilson, L. Zukerman

    2011-06-01

    Positron Emission Tomography (PET) historically has major clinical and preclinical applications in oncology, neurology, and cardiovascular diseases. Recently, in a new direction, an application specific PET system is being developed at Thomas Jefferson National Accelerator Facility (Jefferson Lab) in collaboration with Duke University, University of Maryland at Baltimore (UMAB), and West Virginia University (WVU) targeted for plant eco-physiology research. The new plant imaging PET system is versatile and scalable such that it could adapt to several plant imaging needs - imaging many important plant organs including leaves, roots, and stems. The mechanical arrangement of the detectors is designed to accommodate the unpredictable and random distribution in space of the plant organs without requiring the plant be disturbed. Prototyping such a system requires a new data acquisition system (DAQ) and data processing system which are adaptable to the requirements of these unique and versatile detectors.

  16. BASSET: Scalable Gateway Finder in Large Graphs

    SciTech Connect

    Tong, H; Papadimitriou, S; Faloutsos, C; Yu, P S; Eliassi-Rad, T

    2010-11-03

    Given a social network, who is the best person to introduce you to, say, Chris Ferguson, the poker champion? Or, given a network of people and skills, who is the best person to help you learn about, say, wavelets? The goal is to find a small group of 'gateways': persons who are close enough to us, as well as close enough to the target (person, or skill) or, in other words, are crucial in connecting us to the target. The main contributions are the following: (a) we show how to formulate this problem precisely; (b) we show that it is sub-modular and thus it can be solved near-optimally; (c) we give fast, scalable algorithms to find such gateways. Experiments on real data sets validate the effectiveness and efficiency of the proposed methods, achieving up to 6,000,000x speedup.
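
    The gateway idea can be illustrated with a toy greedy selection, which is not BASSET's algorithm: score candidate nodes by how many source-target shortest paths they lie on and pick them greedily, the usual heuristic for this kind of submodular coverage objective. The graph and parameters below are arbitrary.

    import networkx as nx

    def greedy_gateways(graph, source, target, k=2):
        # Each path is reduced to its set of intermediate nodes.
        paths = [set(p[1:-1]) for p in nx.all_shortest_paths(graph, source, target)]
        chosen, covered = [], set()
        for _ in range(k):
            best, best_gain = None, 0
            for node in graph.nodes:
                if node in (source, target) or node in chosen:
                    continue
                gain = sum(1 for i, p in enumerate(paths) if i not in covered and node in p)
                if gain > best_gain:
                    best, best_gain = node, gain
            if best is None:
                break
            chosen.append(best)
            covered |= {i for i, p in enumerate(paths) if best in p}
        return chosen

    if __name__ == "__main__":
        g = nx.karate_club_graph()
        print(greedy_gateways(g, 0, 33, k=3))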

  17. Scalable ranked retrieval using document images

    NASA Astrophysics Data System (ADS)

    Jain, Rajiv; Oard, Douglas W.; Doermann, David

    2013-12-01

    Despite the explosion of text on the Internet, hard copy documents that have been scanned as images still play a significant role for some tasks. The best method to perform ranked retrieval on a large corpus of document images, however, remains an open research question. The most common approach has been to perform text retrieval using terms generated by optical character recognition. This paper, by contrast, examines whether a scalable segmentation-free image retrieval algorithm, which matches sub-images containing text or graphical objects, can provide additional benefit in satisfying a user's information needs on a large, real world dataset. Results on 7 million scanned pages from the CDIP v1.0 test collection show that content based image retrieval finds a substantial number of documents that text retrieval misses, and that when used as a basis for relevance feedback can yield improvements in retrieval effectiveness.

  18. A scalable sparse eigensolver for petascale applications

    NASA Astrophysics Data System (ADS)

    Keceli, Murat; Zhang, Hong; Zapol, Peter; Dixon, David; Wagner, Albert

    2015-03-01

    Exploiting locality of chemical interactions and therefore sparsity is necessary to push the limits of quantum simulations beyond petascale. However, sparse numerical algorithms are known to have poor strong scaling. Here, we show that the shift-and-invert parallel spectral transformations (SIPs) method can scale up to two-hundred thousand cores for density functional based tight-binding (DFTB) or semi-empirical molecular orbital (SEMO) applications. We demonstrated the robustness and scalability of the SIPs method on various kinds of systems including metallic carbon nanotubes, diamond crystals and water clusters. We analyzed how the sparsity patterns and eigenvalue spectra of these different types of applications affect the computational performance of SIPs. The SIPs method enables us to perform simulations with more than five hundred thousand basis functions utilizing hundreds of thousands of cores. SIPs has better scaling for memory and computational time than dense eigensolvers, and it does not require fast interconnects.
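
    On a single node, the shift-and-invert spectral transformation that SIPs parallelizes can be tried with SciPy: passing sigma to eigsh targets the eigenvalues closest to that shift, and a set of shifts slices the spectrum. The toy matrix below is a one-dimensional Laplacian, not a DFTB or SEMO Hamiltonian, and the shifts are arbitrary.

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import eigsh

    n = 2000
    lap = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")

    shifts = [0.5, 1.0, 2.0, 3.5]   # in a real SIPs run, each slice would go to a different process group
    for sigma in shifts:
        vals = eigsh(lap, k=6, sigma=sigma, return_eigenvectors=False)
        print(f"eigenvalues near {sigma}: {np.sort(vals).round(4)}")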

  19. Parallel scalability of Hartree–Fock calculations

    SciTech Connect

    Chow, Edmond; Liu, Xing; Smelyanskiy, Mikhail; Hammond, Jeff R.

    2015-03-14

    Quantum chemistry is increasingly performed using large cluster computers consisting of multiple interconnected nodes. For a fixed molecular problem, the efficiency of a calculation usually decreases as more nodes are used, due to the cost of communication between the nodes. This paper empirically investigates the parallel scalability of Hartree–Fock calculations. The construction of the Fock matrix and the density matrix calculation are analyzed separately. For the former, we use a parallelization of Fock matrix construction based on a static partitioning of work followed by a work stealing phase. For the latter, we use density matrix purification from the linear scaling methods literature, but without using sparsity. When using large numbers of nodes for moderately sized problems, density matrix computations are network-bandwidth bound, making purification methods potentially faster than eigendecomposition methods.
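
    One widely used purification scheme of the kind the abstract alludes to is McWeeny's iteration D <- 3D^2 - 2D^3, which drives a matrix with eigenvalues in [0, 1] toward an idempotent projector without eigendecomposition. The sketch below applies it to a random symmetric stand-in for a Fock matrix purely to show the iteration; production codes additionally constrain the trace to the electron count, which is omitted here.

    import numpy as np

    def mcweeny_purify(fock, iterations=40):
        n = fock.shape[0]
        # Gershgorin bounds on the spectrum (no diagonalization needed).
        radii = np.abs(fock).sum(axis=1) - np.abs(np.diag(fock))
        emin = float((np.diag(fock) - radii).min())
        emax = float((np.diag(fock) + radii).max())
        # Linear map of the spectrum into [0, 1], flipped so that low-energy
        # states start closest to 1.
        density = (emax * np.eye(n) - fock) / (emax - emin)
        for _ in range(iterations):
            d2 = density @ density
            density = 3.0 * d2 - 2.0 * density @ d2   # D <- 3 D^2 - 2 D^3
        return density

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        a = rng.normal(size=(50, 50))
        fock = (a + a.T) / 2.0
        d = mcweeny_purify(fock)
        print("idempotency error:", np.linalg.norm(d @ d - d))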

  20. Scalable, extensible, and portable numerical libraries

    SciTech Connect

    Gropp, W.; Smith, B.

    1995-01-01

    Designing a scalable and portable numerical library requires consideration of many factors, including choice of parallel communication technology, data structures, and user interfaces. The PETSc library (Portable Extensible Tools for Scientific computing) makes use of modern software technology to provide a flexible and portable implementation. This talk will discuss the use of a meta-communication layer (allowing the user to choose different transport layers such as MPI, p4, pvm, or vendor-specific libraries) for portability, an aggressive data-structure-neutral implementation that minimizes dependence on particular data structures (even vectors), permitting the library to adapt to the user rather than the other way around, and the separation of implementation language from user-interface language. Examples are presented.

  1. Scalable asynchronous execution of cellular automata

    NASA Astrophysics Data System (ADS)

    Folino, Gianluigi; Giordano, Andrea; Mastroianni, Carlo

    2016-10-01

    The performance and scalability of cellular automata, when executed on parallel/distributed machines, are limited by the necessity of synchronizing all the nodes at each time step, i.e., a node can execute only after the execution of the previous step at all the other nodes. However, these synchronization requirements can be relaxed: a node can execute one step after synchronizing only with the adjacent nodes. In this fashion, different nodes can execute different time steps. This can be a notable advantage in many novel and increasingly popular applications of cellular automata, such as smart city applications, simulation of natural phenomena, etc., in which the execution times can be different and variable, due to the heterogeneity of machines and/or data and/or executed functions. Indeed, a longer execution time at a node does not slow down the execution at all the other nodes but only at the neighboring nodes. This is particularly advantageous when the nodes that act as bottlenecks vary during the application execution. The goal of the paper is to analyze the benefits that can be achieved with the described asynchronous implementation of cellular automata, when compared to the classical all-to-all synchronization pattern. The performance and scalability have been evaluated through a Petri net model, as this model is very useful to represent the synchronization barrier among nodes. We examined the usual case in which the territory is partitioned into a number of regions, and the computation associated with a region is assigned to a computing node. We considered both the cases of mono-dimensional and two-dimensional partitioning. The results show that the advantage obtained through the asynchronous execution, when compared to the all-to-all synchronous approach, is notable, and it can be as large as 90% in terms of speedup.
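
    A minimal sketch of the relaxed rule follows, with a one-dimensional ring of regions and invented work times: a region may begin step t+1 as soon as every neighbour has completed step t, so a deliberately slow region only holds back its immediate neighbours rather than the whole machine.

    import heapq, random

    def simulate(n_regions=8, steps=50, seed=0):
        random.seed(seed)
        done = [0] * n_regions                   # last completed step of each region
        running = [False] * n_regions
        neighbours = lambda i: ((i - 1) % n_regions, (i + 1) % n_regions)
        events, clock = [], 0.0

        def can_start(i):
            return (not running[i] and done[i] < steps
                    and all(done[j] >= done[i] for j in neighbours(i)))

        def start(i, now):
            running[i] = True
            # Region 0 is made artificially slow: only its neighbours have to wait for it.
            heapq.heappush(events, (now + (5.0 if i == 0 else random.uniform(1.0, 2.0)), i))

        for i in range(n_regions):
            start(i, 0.0)
        while events:
            clock, i = heapq.heappop(events)
            running[i] = False
            done[i] += 1
            for j in (i,) + neighbours(i):       # the finished region may unblock itself or a neighbour
                if can_start(j):
                    start(j, clock)
        return clock, done

    if __name__ == "__main__":
        finish, progress = simulate()
        print("finish time:", round(finish, 1), "completed steps per region:", progress)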

  2. Network-aware scalable video monitoring system for emergency situations with operator-managed fidelity control

    NASA Astrophysics Data System (ADS)

    Al Hadhrami, Tawfik; Nightingale, James M.; Wang, Qi; Grecos, Christos

    2014-05-01

    In emergency situations, the ability to remotely monitor unfolding events using high-quality video feeds will significantly improve the incident commander's understanding of the situation and thereby aids effective decision making. This paper presents a novel, adaptive video monitoring system for emergency situations where the normal communications network infrastructure has been severely impaired or is no longer operational. The proposed scheme, operating over a rapidly deployable wireless mesh network, supports real-time video feeds between first responders, forward operating bases and primary command and control centers. Video feeds captured on portable devices carried by first responders and by static visual sensors are encoded in H.264/SVC, the scalable extension to H.264/AVC, allowing efficient, standard-based temporal, spatial, and quality scalability of the video. A three-tier video delivery system is proposed, which balances the need to avoid overuse of mesh nodes with the operational requirements of the emergency management team. In the first tier, the video feeds are delivered at a low spatial and temporal resolution employing only the base layer of the H.264/SVC video stream. Routing in this mode is designed to employ all nodes across the entire mesh network. In the second tier, whenever operational considerations require that commanders or operators focus on a particular video feed, a `fidelity control' mechanism at the monitoring station sends control messages to the routing and scheduling agents in the mesh network, which increase the quality of the received picture using SNR scalability while conserving bandwidth by maintaining a low frame rate. In this mode, routing decisions are based on reliable packet delivery with the most reliable routes being used to deliver the base and lower enhancement layers; as fidelity is increased and more scalable layers are transmitted they will be assigned to routes in descending order of reliability. The third tier

  3. Visual field

    MedlinePlus

    Perimetry; Tangent screen exam; Automated perimetry exam; Goldmann visual field exam; Humphrey visual field exam ... Confrontation visual field exam : This is a quick and basic check of the visual field. The health care provider ...

  4. Visual Impairment

    MedlinePlus

    ... with the brain, making vision impossible. What Is Visual Impairment? Many people have some type of visual ...

  5. Coupling Advanced Modeling and Visualization to Improve High-Impact Tropical Weather Prediction

    NASA Technical Reports Server (NTRS)

    Shen, Bo-Wen; Tao, Wei-Kuo; Green, Bryan

    2009-01-01

    To meet the goals of extreme weather event warning, this approach couples a modeling and visualization system that integrates existing NASA technologies and improves the modeling system's parallel scalability to take advantage of petascale supercomputers. It also streamlines the data flow for fast processing and 3D visualizations, and develops visualization modules to fuse NASA satellite data.

  6. Validity of the Developmental Test of Visual-Motor Integration Supplemental Developmental Test of Visual Perception.

    PubMed

    Brown, Ted; Rodger, Sylvia

    2008-06-01

    Visual perceptual skills of school-age children are often assessed using the Supplemental Developmental Test of Visual Perception of the Developmental Test of Visual-Motor Integration. The study purpose was to consider the construct validity of this test by evaluating its scalability (interval level measurement), unidimensionality, differential item functioning, and hierarchical ordering of its items. Visual perceptual performance scores from a sample of 356 typically developing children (171 boys and 185 girls ages 5 to 11 years) were used to complete a Rasch analysis of the test. Seven items were discarded for poor fit, while none of the items exhibited differential item functioning by sex. The construct validity, scalability, hierarchical ordering, and lack of differential item functioning requirements were met by the final test version. Since 7 test items did not fit the Rasch analysis specifications, the clinical value of the test is questionable and limited.

  7. Trelliscope: A System for Detailed Visualization in Analysis of Large Complex Data

    SciTech Connect

    Hafen, Ryan P.; Gosink, Luke J.; McDermott, Jason E.; Rodland, Karin D.; Kleese-Van Dam, Kerstin; Cleveland, William S.

    2013-12-01

    Visualization plays a critical role in the statistical model building and data analysis process. Data analysts, well-versed in statistical and machine learning methods, visualize data to hypothesize and validate models. These analysts need flexible, scalable visualization tools that are not decoupled from their analysis environment. In this paper we introduce Trelliscope, a visualization framework for statistical analysis of large complex data. Trelliscope extends Trellis, an effective visualization framework that divides data into subsets and applies a plotting method to each subset, arranging the results in rows and columns of panels. Trelliscope provides a way to create, arrange and interactively view panels for very large datasets, enabling flexible detailed visualization for data of any size. Scalability is achieved using distributed computing technologies. We discuss the underlying principles, design, and scalable architecture of Trelliscope, and illustrate its use on three analysis projects in the domains of proteomics, high energy physics, and power systems engineering.

  8. Visual Perception versus Visual Function.

    ERIC Educational Resources Information Center

    Lieberman, Laurence M.

    1984-01-01

    Distinctions are drawn between visual perception and visual function, and four optometrists respond with further analysis of the visual perception-visual function controversy and its implications for children with learning problems. (CL)

  9. Visualization for Molecular Dynamics Simulation of Gas and Metal Surface Interaction

    NASA Astrophysics Data System (ADS)

    Puzyrkov, D.; Polyakov, S.; Podryga, V.

    2016-02-01

    The development of methods, algorithms and applications for visualization of molecular dynamics simulation outputs is discussed. The visual analysis of the results of such calculations is a complex and topical problem, especially in the case of large-scale simulations. To solve this challenging task it is necessary to decide: 1) what data parameters to render, 2) what type of visualization to choose, and 3) what development tools to use. The present work attempts to answer these questions. For visualization we propose drawing particles at their 3D coordinates together with their velocity vectors, trajectories and volume density in the form of isosurfaces or fog. We tested a post-processing and visualization approach based on the Python language with additional libraries. We also developed parallel software that processes large volumes of data in the 3D regions of the examined system; it can produce results in parallel with the ongoing calculations and, at the end, assemble the individual frames into a video file. The software package "Enthought Mayavi2" was used as the visualization tool. It allowed us to study the interaction of a gas with a metal surface and to closely observe the adsorption effect.
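
    In that spirit, a short Mayavi sketch is given below; it uses synthetic particle data, requires the mayavi package and a display, and is not the authors' pipeline. It renders particles, their velocity vectors and a density isosurface, and saves one frame of the kind that could later be assembled into a video.

    import numpy as np
    from mayavi import mlab

    rng = np.random.default_rng(42)
    pos = rng.uniform(0.0, 10.0, size=(500, 3))     # particle coordinates
    vel = rng.normal(0.0, 1.0, size=(500, 3))       # particle velocities

    # Volume density on a coarse grid, shown below as an isosurface.
    density, _ = np.histogramdd(pos, bins=(20, 20, 20), range=[(0, 10)] * 3)
    gx, gy, gz = np.mgrid[0:10:20j, 0:10:20j, 0:10:20j]

    mlab.figure(bgcolor=(0, 0, 0))
    mlab.points3d(pos[:, 0], pos[:, 1], pos[:, 2], scale_factor=0.15)
    mlab.quiver3d(pos[:, 0], pos[:, 1], pos[:, 2],
                  vel[:, 0], vel[:, 1], vel[:, 2], scale_factor=0.3)
    mlab.contour3d(gx, gy, gz, density, contours=3, opacity=0.3)
    mlab.savefig("frame_0000.png")                  # frames like this can be joined into a video file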

  10. Visual Text Analytics for Impromptu Analysts

    SciTech Connect

    Love, Oriana J.; Best, Daniel M.; Bruce, Joseph R.; Dowson, Scott T.; Larmey, Christopher S.

    2011-10-23

    The Scalable Reasoning System (SRS) is a lightweight visual analytics framework that makes analytical capabilities widely accessible to a class of users we have deemed “impromptu analysts.” By focusing on a deployment of SRS, the Lessons Learned Explorer (LLEx), we examine how to develop visualizations around analytical-oriented goals and data availability. We discuss how to help impromptu analysts to explore deeper patterns. Through designing consistent interactions, we arrive at an interdependent view capable of showcasing patterns. With the combination of SRS widget visualizations and interactions around the underlying textual data, we aim to transition the casual, infrequent user into a viable–albeit impromptu–analyst.

  11. Provenance Storage, Querying, and Visualization in PBase

    SciTech Connect

    Kianmajd, Parisa; Ludascher, Bertram; Missier, Paolo; Chirigati, Fernando; Wei, Yaxing; Koop, David; Dey, Saumen

    2015-01-01

    We present PBase, a repository for scientific workflows and their corresponding provenance information that facilitates the sharing of experiments among the scientific community. PBase is interoperable since it uses ProvONE, a standard provenance model for scientific workflows. Workflows and traces are stored in RDF, and with the support of SPARQL and the tree cover encoding, the repository provides a scalable infrastructure for querying the provenance data. Furthermore, through its user interface, it is possible to: visualize workflows and execution traces; visualize reachability relations within these traces; issue SPARQL queries; and visualize query results.
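
    As a minimal illustration of querying provenance triples with SPARQL from Python, the sketch below uses rdflib; the namespace and property names are placeholders rather than the actual ProvONE vocabulary used by PBase.

    from rdflib import Graph, Namespace, Literal

    EX = Namespace("http://example.org/prov/")
    g = Graph()
    g.add((EX.trace1, EX.executedWorkflow, EX.workflowA))
    g.add((EX.trace1, EX.usedInput, Literal("temperature.nc")))
    g.add((EX.trace2, EX.executedWorkflow, EX.workflowA))

    query = """
    PREFIX ex: <http://example.org/prov/>
    SELECT ?trace WHERE { ?trace ex:executedWorkflow ex:workflowA . }
    """
    for row in g.query(query):
        print(row.trace)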

  12. Visualization of CFD Results in Immersive Virtual Environments

    NASA Technical Reports Server (NTRS)

    Wasfy, Tamer M.; Noor, Ahmed K.

    2001-01-01

    An object-oriented event-driven immersive virtual environment (VE) is described for the visualization of computational fluid dynamics (CFD) results. The VE incorporates the following types of primitive software objects: interface objects, support objects, geometric entities, and finite elements. The fluid domain is discretized using either a multi-block structured grid or an unstructured finite element mesh. The VE allows natural 'fly-through' visualization of the model, the CFD grid, and the model's surroundings. In order to help visualize the flow and its effects on the model, the VE incorporates the following objects: stream objects (lines, surface-restricted lines, ribbons, and volumes); colored surfaces; elevation surfaces; surface arrows; global and local iso-surfaces; vortex cores; and separation/attachment surfaces and lines. Most of these objects can be used for dynamically probing the flow. Particles and arrow animations can be displayed on top of stream objects. Primitive response quantities as well as derived quantities can be used. A recursive tree search algorithm is used for real-time point and value search in the CFD grid.

  13. Scalability and interoperability within glideinWMS

    SciTech Connect

    Bradley, D.; Sfiligoi, I.; Padhi, S.; Frey, J.; Tannenbaum, T. (Wisconsin U., Madison)

    2010-01-01

    Physicists have access to thousands of CPUs in grid federations such as OSG and EGEE. With the start-up of the LHC, it is essential for individuals or groups of users to wrap together available resources from multiple sites across multiple grids under a higher user-controlled layer in order to provide a homogeneous pool of available resources. One such system is glideinWMS, which is based on the Condor batch system. A general discussion of glideinWMS can be found elsewhere. Here, we focus on recent advances in extending its reach: scalability and integration of heterogeneous compute elements. We demonstrate that the new developments exceed the design goal of over 10,000 simultaneous running jobs under a single Condor schedd, using strong security protocols across global networks, and sustaining a steady-state job completion rate of a few Hz. We also show interoperability across heterogeneous computing elements achieved using client-side methods. We discuss this technique and the challenges in direct access to NorduGrid and CREAM compute elements, in addition to Globus based systems.

  14. Scalability and interoperability within glideinWMS

    NASA Astrophysics Data System (ADS)

    Bradley, D.; Sfiligoi, I.; Padhi, S.; Frey, J.; Tannenbaum, T.

    2010-04-01

    Physicists have access to thousands of CPUs in grid federations such as OSG and EGEE. With the start-up of the LHC, it is essential for individuals or groups of users to wrap together available resources from multiple sites across multiple grids under a higher user-controlled layer in order to provide a homogeneous pool of available resources. One such system is glideinWMS, which is based on the Condor batch system. A general discussion of glideinWMS can be found elsewhere. Here, we focus on recent advances in extending its reach: scalability and integration of heterogeneous compute elements. We demonstrate that the new developments exceed the design goal of over 10,000 simultaneous running jobs under a single Condor schedd, using strong security protocols across global networks, and sustaining a steady-state job completion rate of a few Hz. We also show interoperability across heterogeneous computing elements achieved using client-side methods. We discuss this technique and the challenges in direct access to NorduGrid and CREAM compute elements, in addition to Globus based systems.

  15. SCTP as scalable video coding transport

    NASA Astrophysics Data System (ADS)

    Ortiz, Jordi; Graciá, Eduardo Martínez; Skarmeta, Antonio F.

    2013-12-01

    This study presents an evaluation of the Stream Control Transmission Protocol (SCTP) for the transport of the scalable video codec (SVC), proposed by MPEG as an extension to H.264/AVC. The two technologies fit together well. On the one hand, SVC makes it easy to split the bitstream into substreams carrying different video layers, each with different importance for the reconstruction of the complete video sequence at the receiver end. On the other hand, SCTP includes features, such as multi-streaming and multi-homing capabilities, that permit the SVC layers to be transported robustly and efficiently. Several transmission strategies supported on baseline SCTP and its concurrent multipath transfer (CMT) extension are compared with the classical solutions based on the Transmission Control Protocol (TCP) and the Real-time Transport Protocol (RTP). Using ns-2 simulations, it is shown that CMT-SCTP outperforms TCP and RTP in error-prone networking environments. The comparison is established according to several performance measurements, including delay, throughput, packet loss, and peak signal-to-noise ratio of the received video.

  16. Developing a scalable inert gas ion thruster

    NASA Technical Reports Server (NTRS)

    James, E.; Ramsey, W.; Steiner, G.

    1982-01-01

    Analytical studies to identify and then design a high performance scalable ion thruster operating with either argon or xenon for use in large space systems are presented. The magnetoelectrostatic containment concept is selected for its efficient ion generation capabilities. The iterative nature of the bounding magnetic fields allows the designer to scale both the diameter and length, so that the thruster can be adapted to spacecraft growth over time. Three different thruster assemblies (conical, hexagonal and hemispherical) are evaluated for a 12 cm diameter thruster and performance mapping of the various thruster configurations shows that conical discharge chambers produce the most efficient discharge operation, achieving argon efficiencies of 50-80% mass utilization at 240-310 eV/ion and xenon efficiencies of 60-97% at 240-280 eV/ion. Preliminary testing of the large 30 cm thruster, using argon propellant, indicates a 35% improvement over the 12 cm thruster in mass utilization efficiency. Since initial performance is found to be better than projected, a larger 50 cm thruster is already in the development stage.

  17. SCAN: A Scalable Model of Attentional Selection.

    PubMed

    Hudson, Patrick T.W.; van den Herik, H Jaap; Postma, Eric O.

    1997-08-01

    This paper describes the SCAN (Signal Channelling Attentional Network) model, a scalable neural network model for attentional scanning. The building block of SCAN is a gating lattice, a sparsely-connected neural network defined as a special case of the Ising lattice from statistical mechanics. The process of spatial selection through covert attention is interpreted as a biological solution to the problem of translation-invariant pattern processing. In SCAN, a sequence of pattern translations combines active selection with translation-invariant processing. Selected patterns are channelled through a gating network, formed by a hierarchical fractal structure of gating lattices, and mapped onto an output window. We show how the incorporation of an expectation-generating classifier network (e.g. Carpenter and Grossberg's ART network) into SCAN allows attentional selection to be driven by expectation. Simulation studies show the SCAN model to be capable of attending and identifying object patterns that are part of a realistically sized natural image. Copyright 1997 Elsevier Science Ltd.

  18. Deep Hashing for Scalable Image Search.

    PubMed

    Lu, Jiwen; Liong, Venice Erin; Zhou, Jie

    2017-03-03

    In this paper, we propose a new deep hashing (DH) approach to learn compact binary codes for scalable image search. Unlike most existing binary codes learning methods which usually seek a single linear projection to map each sample into a binary feature vector, we develop a deep neural network to seek multiple hierarchical non-linear transformations to learn these binary codes, so that the nonlinear relationship of samples can be well exploited. Our model is learned under three constraints at the top layer of the developed deep network: 1) the loss between the compact real-valued code and the learned binary vector is minimized, 2) the binary codes distribute evenly on each bit, and 3) different bits are as independent as possible. To further improve the discriminative power of the learned binary codes, we extend DH into supervised DH (SDH) and multi-label supervised DH (MSDH) by including a discriminative term into the objective function of DH which simultaneously maximizes the inter-class variations and minimizes the intra-class variations of the learned binary codes in the single-label and multi-label settings, respectively. Extensive experimental results on eight widely used image search datasets show that our proposed methods achieve very competitive results compared with the state of the art.
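
    The three top-layer constraints can be written down directly. The NumPy sketch below evaluates them for a batch of real-valued codes (one row per sample); it only illustrates the objective terms and is not the paper's network or training procedure.

    import numpy as np

    def dh_penalties(h):
        b = np.sign(h)                                   # binary codes in {-1, +1}
        quantization = np.mean((h - b) ** 2)             # (1) real-valued codes close to binary
        balance = np.mean(np.mean(h, axis=0) ** 2)       # (2) each bit splits evenly around zero
        corr = h.T @ h / len(h)
        independence = np.mean((corr - np.eye(h.shape[1])) ** 2)  # (3) bits decorrelated
        return quantization, balance, independence

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        codes = np.tanh(rng.normal(size=(256, 48)))      # stand-in for network outputs
        print(dh_penalties(codes))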

  19. Lightweight and scalable secure communication in VANET

    NASA Astrophysics Data System (ADS)

    Zhu, Xiaoling; Lu, Yang; Zhu, Xiaojuan; Qiu, Shuwei

    2015-05-01

    To prevent messages from being tampered with or forged in vehicular ad hoc networks (VANETs), the digital signature method is adopted by IEEE 1609.2. However, the costs of the method are excessively high for large-scale networks. This paper addresses the issue with a secure communication framework that introduces lightweight cryptographic primitives. In our framework, point-to-point and broadcast communications for vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) are studied, mainly based on symmetric cryptography. A new issue incurred is symmetric key management. Thus, we develop key distribution and agreement protocols for two-party keys and group keys under different environments, whether a road side unit (RSU) is deployed or not. The analysis shows that our protocols provide confidentiality, authentication, perfect forward secrecy, forward secrecy and backward secrecy. The proposed group key agreement protocol especially solves the key leak problem caused by members joining or leaving in existing key agreement protocols. Due to aggregated signatures and the substitution of XOR for point addition, the average computation and communication costs do not significantly increase with the increase in the number of vehicles; hence, our framework provides good scalability.

  20. A scalable neuristor built with Mott memristors

    NASA Astrophysics Data System (ADS)

    Pickett, Matthew D.; Medeiros-Ribeiro, Gilberto; Williams, R. Stanley

    2013-02-01

    The Hodgkin-Huxley model for action potential generation in biological axons is central for understanding the computational capability of the nervous system and emulating its functionality. Owing to the historical success of silicon complementary metal-oxide-semiconductors, spike-based computing is primarily confined to software simulations and specialized analogue metal-oxide-semiconductor field-effect transistor circuits. However, there is interest in constructing physical systems that emulate biological functionality more directly, with the goal of improving efficiency and scale. The neuristor was proposed as an electronic device with properties similar to the Hodgkin-Huxley axon, but previous implementations were not scalable. Here we demonstrate a neuristor built using two nanoscale Mott memristors, dynamical devices that exhibit transient memory and negative differential resistance arising from an insulating-to-conducting phase transition driven by Joule heating. This neuristor exhibits the important neural functions of all-or-nothing spiking with signal gain and diverse periodic spiking, using materials and structures that are amenable to extremely high-density integration with or without silicon transistors.

  1. Scalable Combinatorial Tools for Health Disparities Research

    PubMed Central

    Langston, Michael A.; Levine, Robert S.; Kilbourne, Barbara J.; Rogers, Gary L.; Kershenbaum, Anne D.; Baktash, Suzanne H.; Coughlin, Steven S.; Saxton, Arnold M.; Agboto, Vincent K.; Hood, Darryl B.; Litchveld, Maureen Y.; Oyana, Tonny J.; Matthews-Juarez, Patricia; Juarez, Paul D.

    2014-01-01

    Despite staggering investments made in unraveling the human genome, current estimates suggest that as much as 90% of the variance in cancer and chronic diseases can be attributed to factors outside an individual’s genetic endowment, particularly to environmental exposures experienced across his or her life course. New analytical approaches are clearly required as investigators turn to complicated systems theory and ecological, place-based and life-history perspectives in order to understand more clearly the relationships between social determinants, environmental exposures and health disparities. While traditional data analysis techniques remain foundational to health disparities research, they are easily overwhelmed by the ever-increasing size and heterogeneity of available data needed to illuminate latent gene x environment interactions. This has prompted the adaptation and application of scalable combinatorial methods, many from genome science research, to the study of population health. Most of these powerful tools are algorithmically sophisticated, highly automated and mathematically abstract. Their utility motivates the main theme of this paper, which is to describe real applications of innovative transdisciplinary models and analyses in an effort to help move the research community closer toward identifying the causal mechanisms and associated environmental contexts underlying health disparities. The public health exposome is used as a contemporary focus for addressing the complex nature of this subject. PMID:25310540

  2. Scalable cell alignment on optical media substrates.

    PubMed

    Anene-Nzelu, Chukwuemeka G; Choudhury, Deepak; Li, Huipeng; Fraiszudeen, Azmall; Peh, Kah-Yim; Toh, Yi-Chin; Ng, Sum Huan; Leo, Hwa Liang; Yu, Hanry

    2013-07-01

    Cell alignment by underlying topographical cues has been shown to affect important biological processes such as differentiation and functional maturation in vitro. However, the routine use of cell culture substrates with micro- or nano-topographies, such as grooves, is currently hampered by the high cost and specialized facilities required to produce these substrates. Here we present cost-effective, commercially available optical media as substrates for aligning cells in culture. These optical media, including CD-R, DVD-R and optical grating, allow different cell types to attach and grow well on them. The physical dimensions of the grooves in these optical media allow cells to be aligned in confluent cell culture with maximal cell-cell interaction, and this alignment affects the morphology and differentiation of cardiac (H9C2), skeletal muscle (C2C12) and neuronal (PC12) cell lines. The optical media are amenable to various chemical modifications with fibronectin, laminin and gelatin for culturing different cell types. These low-cost, commercially available optical media can serve as scalable substrates for research or for drug safety screening applications at industrial scale.

  3. Memory Scalability and Efficiency Analysis of Parallel Codes

    SciTech Connect

    Janjusic, Tommy; Kartsaklis, Christos

    2015-01-01

    Memory scalability is an enduring problem and bottleneck that plagues many parallel codes. Parallel codes designed for high performance systems are typically designed over the span of several, and in some instances 10+, years. As a result, optimization practices which were appropriate for earlier systems may no longer be valid and thus require careful reconsideration. Specifically, parallel codes whose memory footprint is a function of their scalability must be carefully considered for future exa-scale systems. In this paper we present a methodology and tool to study the memory scalability of parallel codes. Using our methodology we evaluate an application's memory footprint as a function of scalability, which we term memory efficiency, and describe our results. In particular, using our in-house tools we can pinpoint the specific application components which contribute to the application's overall memory footprint (application data structures, libraries, etc.).

  4. TriG: Next Generation Scalable Spaceborne GNSS Receiver

    NASA Technical Reports Server (NTRS)

    Tien, Jeffrey Y.; Okihiro, Brian Bachman; Esterhuizen, Stephan X.; Franklin, Garth W.; Meehan, Thomas K.; Munson, Timothy N.; Robison, David E.; Turbiner, Dmitry; Young, Lawrence E.

    2012-01-01

    TriG is the next-generation NASA scalable space GNSS science receiver. It will track all GNSS and additional signals (i.e., GPS, GLONASS, Galileo, Compass and DORIS). Its scalable 3U architecture is fully software- and firmware-reconfigurable, enabling optimization to meet specific mission requirements. The TriG GNSS EM is currently undergoing testing and is expected to complete full performance testing later this year.

  5. Toward Scalable Ion Traps for Quantum Information Processing

    DTIC Science & Technology

    2010-01-01


  6. Large-Scale Visual Data Analysis

    NASA Astrophysics Data System (ADS)

    Johnson, Chris

    2014-04-01

    Modern high performance computers have speeds measured in petaflops and handle data set sizes measured in terabytes and petabytes. Although these machines offer enormous potential for solving very large-scale realistic computational problems, their effectiveness will hinge upon the ability of human experts to interact with their simulation results and extract useful information. One of the greatest scientific challenges of the 21st century is to effectively understand and make use of the vast amount of information being produced. Visual data analysis will be among our most important tools in helping to understand such large-scale information. Our research at the Scientific Computing and Imaging (SCI) Institute at the University of Utah has focused on innovative, scalable techniques for large-scale 3D visual data analysis. In this talk, I will present state-of-the-art visualization techniques, including scalable visualization algorithms and software, cluster-based visualization methods and innovative visualization techniques applied to problems in computational science, engineering, and medicine. I will conclude with an outline of future high performance visualization research challenges and opportunities.

  7. Visualization of AMR data with multi-level dual-mesh interpolation.

    PubMed

    Moran, Patrick J; Ellsworth, David

    2011-12-01

    We present a new technique for providing interpolation within cell-centered Adaptive Mesh Refinement (AMR) data that achieves C(0) continuity throughout the 3D domain. Our technique improves on earlier work in that it does not require that adjacent patches differ by at most one refinement level. Our approach takes the dual of each mesh patch and generates "stitching cells" on the fly to fill the gaps between dual meshes. We demonstrate applications of our technique with data from Enzo, an AMR cosmological structure formation simulation code. We show ray-cast visualizations that include contributions from particle data (dark matter and stars, also output by Enzo) and gridded hydrodynamic data. We also show results from isosurface studies, including surfaces in regions where adjacent patches differ by more than one refinement level.

  8. Visualization of time-varying MRI data for MS lesion analysis

    NASA Astrophysics Data System (ADS)

    Tory, Melanie K.; Moeller, Torsten; Atkins, M. Stella

    2001-05-01

    Conventional methods to diagnose and follow treatment of multiple sclerosis require radiologists and technicians to compare current images with older images of a particular patient on a slice-by-slice basis. Although there has been progress in creating 3D displays of medical images, little attempt has been made to design visual tools that emphasize change over time. We implemented several ideas that attempt to address this deficiency. In one approach, isosurfaces of segmented lesions at each time step were displayed either on the same image (each time step in a different color), or consecutively in an animation. In a second approach, voxel-wise differences between time steps were calculated and displayed statically using ray casting. Animation was used to show cumulative changes over time. Finally, in a method borrowed from computational fluid dynamics (CFD), glyphs (small arrow-like objects) were rendered with a surface model of the lesions to indicate changes at localized points.

  9. Highly Scalable Matching Pursuit Signal Decomposition Algorithm

    NASA Technical Reports Server (NTRS)

    Christensen, Daniel; Das, Santanu; Srivastava, Ashok N.

    2009-01-01

    Matching Pursuit Decomposition (MPD) is a powerful iterative algorithm for signal decomposition and feature extraction. MPD decomposes any signal into linear combinations of its dictionary elements, or atoms. A best-fit atom from an arbitrarily defined dictionary is determined through cross-correlation. The selected atom is subtracted from the signal and this procedure is repeated on the residual in the subsequent iterations until a stopping criterion is met. The reconstructed signal reveals the waveform structure of the original signal. However, a sufficiently large dictionary is required for an accurate reconstruction; this in turn increases the computational burden of the algorithm, thus limiting its applicability and level of adoption. The purpose of this research is to improve the scalability and performance of the classical MPD algorithm. Correlation thresholds were defined to prune insignificant atoms from the dictionary. The Coarse-Fine Grids and Multiple Atom Extraction techniques were proposed to decrease the computational burden of the algorithm. The Coarse-Fine Grids method enabled the approximation and refinement of the parameters for the best-fit atom. The ability to extract multiple atoms within a single iteration enhanced the effectiveness and efficiency of each iteration. These improvements were implemented to produce an improved Matching Pursuit Decomposition algorithm entitled MPD++. Disparate signal decomposition applications may require a particular emphasis on accuracy or computational efficiency. The prominence of the key signal features required for proper signal classification dictates the level of accuracy necessary in the decomposition. The MPD++ algorithm may be easily adapted to accommodate the imposed requirements. Certain feature extraction applications may require rapid signal decomposition. The full potential of MPD++ may be utilized to produce incredible performance gains while extracting only slightly less energy than the
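
    A compact sketch of the correlate-select-subtract loop described above is given below; it uses simple inner products over a fixed dictionary rather than MPD++'s coarse-fine grids or multiple-atom extraction, and the parameter names are illustrative.

      import numpy as np

      # Simplified matching pursuit: dictionary rows are unit-norm atoms.
      def matching_pursuit(signal, dictionary, max_iters=50, corr_threshold=1e-3):
          residual = signal.astype(float).copy()
          atoms = []                                       # (atom index, coefficient) pairs
          for _ in range(max_iters):
              correlations = dictionary @ residual         # correlate every atom with the residual
              best = int(np.argmax(np.abs(correlations)))
              coef = correlations[best]
              if abs(coef) < corr_threshold:               # stopping criterion / pruning threshold
                  break
              residual -= coef * dictionary[best]          # subtract the best-fit atom
              atoms.append((best, coef))
          return atoms, residual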

  10. Responsive, Flexible and Scalable Broader Impacts (Invited)

    NASA Astrophysics Data System (ADS)

    Decharon, A.; Companion, C.; Steinman, M.

    2010-12-01

    In many educator professional development workshops, scientists present content in a slideshow-type format and field questions afterwards. Drawbacks of this approach include: inability to begin the lecture with content that is responsive to audience needs; lack of flexible access to specific material within the linear presentation; and “Q&A” sessions are not easily scalable to broader audiences. Often this type of traditional interaction provides little direct benefit to the scientists. The Centers for Ocean Sciences Education Excellence - Ocean Systems (COSEE-OS) applies the technique of concept mapping with demonstrated effectiveness in helping scientists and educators “get on the same page” (deCharon et al., 2009). A key aspect is scientist professional development geared towards improving face-to-face and online communication with non-scientists. COSEE-OS promotes scientist-educator collaboration, tests the application of scientist-educator maps in new contexts through webinars, and is piloting the expansion of maps as long-lived resources for the broader community. Collaboration - COSEE-OS has developed and tested a workshop model bringing scientists and educators together in a peer-oriented process, often clarifying common misconceptions. Scientist-educator teams develop online concept maps that are hyperlinked to “assets” (i.e., images, videos, news) and are responsive to the needs of non-scientist audiences. In workshop evaluations, 91% of educators said that the process of concept mapping helped them think through science topics and 89% said that concept mapping helped build a bridge of communication with scientists (n=53). Application - After developing a concept map, with COSEE-OS staff assistance, scientists are invited to give webinar presentations that include live “Q&A” sessions. The webinars extend the reach of scientist-created concept maps to new contexts, both geographically and topically (e.g., oil spill), with a relatively small

  11. Statistical Scalability Analysis of Communication Operations in Distributed Applications

    SciTech Connect

    Vetter, J S; McCracken, M O

    2001-02-27

    Current trends in high performance computing suggest that users will soon have widespread access to clusters of multiprocessors with hundreds, if not thousands, of processors. This unprecedented degree of parallelism will undoubtedly expose scalability limitations in existing applications, where scalability is the ability of a parallel algorithm on a parallel architecture to effectively utilize an increasing number of processors. Users will need precise and automated techniques for detecting the cause of limited scalability. This paper addresses this dilemma. First, we argue that users face numerous challenges in understanding application scalability: managing substantial amounts of experiment data, extracting useful trends from this data, and reconciling performance information with their application's design. Second, we propose a solution to automate this data analysis problem by applying fundamental statistical techniques to scalability experiment data. Finally, we evaluate our operational prototype on several applications, and show that statistical techniques offer an effective strategy for assessing application scalability. In particular, we find that non-parametric correlation of the number of tasks to the ratio of the time for individual communication operations to overall communication time provides a reliable measure for identifying communication operations that scale poorly.
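
    The non-parametric correlation mentioned at the end can be reproduced in a few lines; the sketch below uses SciPy's Spearman rank correlation on made-up timings (the data values are placeholders, not measurements from the paper).

      from scipy.stats import spearmanr

      task_counts = [16, 32, 64, 128, 256]
      op_time     = [1.2, 1.9, 3.4, 6.8, 14.1]        # time in one communication operation (s)
      total_comm  = [10.0, 11.0, 13.5, 17.0, 24.0]    # total communication time (s)
      ratios = [t / c for t, c in zip(op_time, total_comm)]
      rho, p = spearmanr(task_counts, ratios)
      print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")  # a strongly positive rho flags poor scaling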

  12. Developing a personal computer-based data visualization system using public domain software

    NASA Astrophysics Data System (ADS)

    Chen, Philip C.

    1999-03-01

    The current research will investigate the possibility of developing a computing-visualization system using a public domain software system built on a personal computer. The Visualization Toolkit (VTK) is available on UNIX and PC platforms. VTK uses C++ to build an executable. It has abundant programming classes/objects that are contained in the system library. Users can also develop their own classes/objects in addition to those existing in the class library. Users can develop applications with any of the C++, Tcl/Tk, and Java environments. The present research will show how a data visualization system can be developed with VTK running on a personal computer. The topics will include: execution efficiency; visual object quality; availability of the user interface design; and exploring the feasibility of a VTK-based World Wide Web data visualization system. The present research will feature a case study showing how to use VTK to visualize meteorological data with techniques including isosurface extraction, volume rendering, vector display, and composite analysis. The study also shows how the VTK outline, axes, and two-dimensional annotation text and title enhance the data presentation. The present research will also demonstrate how VTK works in an internet environment while accessing an executable with a Java application program in a webpage.
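
    As a minimal illustration of the kind of VTK pipeline the study describes, the sketch below extracts and renders an isosurface with VTK's Python bindings; the synthetic source and isovalue are stand-ins for the meteorological data.

      import vtk

      source = vtk.vtkRTAnalyticSource()            # synthetic volume standing in for real data
      contour = vtk.vtkContourFilter()              # isosurface extraction
      contour.SetInputConnection(source.GetOutputPort())
      contour.SetValue(0, 150.0)                    # arbitrary isovalue for the synthetic field
      mapper = vtk.vtkPolyDataMapper()
      mapper.SetInputConnection(contour.GetOutputPort())
      actor = vtk.vtkActor()
      actor.SetMapper(mapper)
      renderer = vtk.vtkRenderer()
      renderer.AddActor(actor)
      window = vtk.vtkRenderWindow()
      window.AddRenderer(renderer)
      interactor = vtk.vtkRenderWindowInteractor()
      interactor.SetRenderWindow(window)
      window.Render()
      interactor.Start()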

  13. Scalability of Localized Arc Filament Plasma Actuators

    NASA Technical Reports Server (NTRS)

    Brown, Clifford A.

    2008-01-01

    Temporal flow control of a jet has been widely studied in the past to enhance jet mixing or reduce jet noise. Most of this research, however, has been done using small diameter low Reynolds number jets that often have little resemblance to the much larger jets common in real world applications because the flow actuators available lacked either the power or bandwidth to sufficiently impact these larger higher energy jets. The Localized Arc Filament Plasma Actuators (LAFPA), developed at the Ohio State University (OSU), have demonstrated the ability to impact a small high speed jet in experiments conducted at OSU and the power to perturb a larger high Reynolds number jet in experiments conducted at the NASA Glenn Research Center. However, the response measured in the large-scale experiments was significantly reduced for the same number of actuators compared to the jet response found in the small-scale experiments. A computational study has been initiated to simulate the LAFPA system with additional actuators on a large-scale jet to determine the number of actuators required to achieve the same desired response for a given jet diameter. Central to this computational study is a model for the LAFPA that both accurately represents the physics of the actuator and can be implemented into a computational fluid dynamics solver. One possible model, based on pressure waves created by the rapid localized heating that occurs at the actuator, is investigated using simplified axisymmetric simulations. The results of these simulations will be used to determine the validity of the model before more realistic and time consuming three-dimensional simulations are conducted to ultimately determine the scalability of the LAFPA system.

  14. Physical principles for scalable neural recording.

    PubMed

    Marblestone, Adam H; Zamft, Bradley M; Maguire, Yael G; Shapiro, Mikhail G; Cybulski, Thaddeus R; Glaser, Joshua I; Amodei, Dario; Stranges, P Benjamin; Kalhor, Reza; Dalrymple, David A; Seo, Dongjin; Alon, Elad; Maharbiz, Michel M; Carmena, Jose M; Rabaey, Jan M; Boyden, Edward S; Church, George M; Kording, Konrad P

    2013-01-01

    Simultaneously measuring the activities of all neurons in a mammalian brain at millisecond resolution is a challenge beyond the limits of existing techniques in neuroscience. Entirely new approaches may be required, motivating an analysis of the fundamental physical constraints on the problem. We outline the physical principles governing brain activity mapping using optical, electrical, magnetic resonance, and molecular modalities of neural recording. Focusing on the mouse brain, we analyze the scalability of each method, concentrating on the limitations imposed by spatiotemporal resolution, energy dissipation, and volume displacement. Based on this analysis, all existing approaches require orders of magnitude improvement in key parameters. Electrical recording is limited by the low multiplexing capacity of electrodes and their lack of intrinsic spatial resolution, optical methods are constrained by the scattering of visible light in brain tissue, magnetic resonance is hindered by the diffusion and relaxation timescales of water protons, and the implementation of molecular recording is complicated by the stochastic kinetics of enzymes. Understanding the physical limits of brain activity mapping may provide insight into opportunities for novel solutions. For example, unconventional methods for delivering electrodes may enable unprecedented numbers of recording sites, embedded optical devices could allow optical detectors to be placed within a few scattering lengths of the measured neurons, and new classes of molecularly engineered sensors might obviate cumbersome hardware architectures. We also study the physics of powering and communicating with microscale devices embedded in brain tissue and find that, while radio-frequency electromagnetic data transmission suffers from a severe power-bandwidth tradeoff, communication via infrared light or ultrasound may allow high data rates due to the possibility of spatial multiplexing. The use of embedded local recording and

  15. Memory-Scalable GPU Spatial Hierarchy Construction.

    PubMed

    Qiming Hou; Xin Sun; Kun Zhou; Lauterbach, C; Manocha, D

    2011-04-01

    Recent GPU algorithms for constructing spatial hierarchies have achieved promising performance for moderately complex models by using the breadth-first search (BFS) construction order. While being able to exploit the massive parallelism on the GPU, the BFS order also consumes excessive GPU memory, which becomes a serious issue for interactive applications involving very complex models with more than a few million triangles. In this paper, we propose to use the partial breadth-first search (PBFS) construction order to control memory consumption while maximizing performance. We apply the PBFS order to two hierarchy construction algorithms. The first algorithm is for kd-trees that automatically balances between the level of parallelism and intermediate memory usage. With PBFS, peak memory consumption during construction can be efficiently controlled without costly CPU-GPU data transfer. We also develop memory allocation strategies to effectively limit memory fragmentation. The resulting algorithm scales well with GPU memory and constructs kd-trees of models with millions of triangles at interactive rates on GPUs with 1 GB memory. Compared with existing algorithms, our algorithm is an order of magnitude more scalable for a given GPU memory bound. The second algorithm is for out-of-core bounding volume hierarchy (BVH) construction for very large scenes based on the PBFS construction order. At each iteration, all constructed nodes are dumped to the CPU memory, and the GPU memory is freed for the next iteration's use. In this way, the algorithm is able to build trees that are too large to be stored in the GPU memory. Experiments show that our algorithm can construct BVHs for scenes with up to 20 M triangles, several times larger than previous GPU algorithms.

  16. A Robust Scalable Transportation System Concept

    NASA Technical Reports Server (NTRS)

    Hahn, Andrew; DeLaurentis, Daniel

    2006-01-01

    This report documents the 2005 Revolutionary System Concept for Aeronautics (RSCA) study entitled "A Robust, Scalable Transportation System Concept". The objective of the study was to generate, at a high level of abstraction, characteristics of a new concept for the National Airspace System, or the new NAS, under which transportation goals such as increased throughput, delay reduction, and improved robustness could be realized. Since such an objective can be overwhelmingly complex if pursued at the lowest levels of detail, a System-of-Systems (SoS) approach was instead adopted to model alternative air transportation architectures at a high level. The SoS approach allows the consideration of not only the technical aspects of the NAS, but also incorporates policy, socio-economic, and alternative transportation system considerations into one architecture. While the representations of the individual systems are basic, the higher-level approach allows for ways to optimize the SoS at the network level, determining the best topology (i.e. configuration of nodes and links). The final product (concept) is a set of rules of behavior and network structure that not only satisfies national transportation goals, but represents the high-impact rules that accomplish those goals by getting the agents to "do the right thing" naturally. The novel combination of agent-based modeling and network theory provides the core analysis methodology in the System-of-Systems approach. Our method of approach is non-deterministic, which means, fundamentally, that it asks and answers different questions than deterministic models. The non-deterministic method is necessary primarily due to our marriage of human systems with technological ones in a partially unknown set of future worlds. Our goal is to understand and simulate how the SoS, human and technological components combined, evolves.

  17. Parallel Heuristics for Scalable Community Detection

    SciTech Connect

    Lu, Howard; Kalyanaraman, Anantharaman; Halappanavar, Mahantesh; Choudhury, Sutanay

    2014-05-17

    Community detection has become a fundamental operation in numerous graph-theoretic applications. It is used to reveal natural divisions that exist within real world networks without imposing prior size or cardinality constraints on the set of communities. Despite its potential for application, there is only limited support for community detection on large-scale parallel computers, largely owing to the irregular and inherently sequential nature of the underlying heuristics. In this paper, we present parallelization heuristics for fast community detection using the Louvain method as the serial template. The Louvain method is an iterative heuristic for modularity optimization. Originally developed by Blondel et al. in 2008, the method has become increasingly popular owing to its ability to detect high modularity community partitions in a fast and memory-efficient manner. However, the method is also inherently sequential, thereby limiting its scalability to problems that can be solved on desktops. Here, we observe certain key properties of this method that present challenges for its parallelization, and consequently propose multiple heuristics that are designed to break the sequential barrier. Our heuristics are agnostic to the underlying parallel architecture. For evaluation purposes, we implemented our heuristics on shared memory (OpenMP) and distributed memory (MapReduce-MPI) machines, and tested them over real world graphs derived from multiple application domains (internet, biological, natural language processing). Experimental results demonstrate the ability of our heuristics to converge to high modularity solutions comparable to those output by the serial algorithm in nearly the same number of iterations, while also drastically reducing time to solution.
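
    At the heart of the serial template is the local-move step of the Louvain method, which evaluates the modularity gain of moving a node into a neighboring community; a sketch of that gain, following the standard Blondel et al. formula with generic variable names, is shown below.

      # Modularity gain of moving isolated node i into community C (Blondel et al., 2008).
      def modularity_gain(sigma_in, sigma_tot, k_i, k_i_in, m):
          # sigma_in:  total weight of edges inside C
          # sigma_tot: total weight of edges incident to C
          # k_i:       total weight of edges incident to node i
          # k_i_in:    total weight of edges from i into C
          # m:         total edge weight of the graph
          after = (sigma_in + 2 * k_i_in) / (2 * m) - ((sigma_tot + k_i) / (2 * m)) ** 2
          before = sigma_in / (2 * m) - (sigma_tot / (2 * m)) ** 2 - (k_i / (2 * m)) ** 2
          return after - before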

  18. Physical principles for scalable neural recording

    PubMed Central

    Zamft, Bradley M.; Maguire, Yael G.; Shapiro, Mikhail G.; Cybulski, Thaddeus R.; Glaser, Joshua I.; Amodei, Dario; Stranges, P. Benjamin; Kalhor, Reza; Dalrymple, David A.; Seo, Dongjin; Alon, Elad; Maharbiz, Michel M.; Carmena, Jose M.; Rabaey, Jan M.; Boyden, Edward S.; Church, George M.; Kording, Konrad P.

    2013-01-01

    Simultaneously measuring the activities of all neurons in a mammalian brain at millisecond resolution is a challenge beyond the limits of existing techniques in neuroscience. Entirely new approaches may be required, motivating an analysis of the fundamental physical constraints on the problem. We outline the physical principles governing brain activity mapping using optical, electrical, magnetic resonance, and molecular modalities of neural recording. Focusing on the mouse brain, we analyze the scalability of each method, concentrating on the limitations imposed by spatiotemporal resolution, energy dissipation, and volume displacement. Based on this analysis, all existing approaches require orders of magnitude improvement in key parameters. Electrical recording is limited by the low multiplexing capacity of electrodes and their lack of intrinsic spatial resolution, optical methods are constrained by the scattering of visible light in brain tissue, magnetic resonance is hindered by the diffusion and relaxation timescales of water protons, and the implementation of molecular recording is complicated by the stochastic kinetics of enzymes. Understanding the physical limits of brain activity mapping may provide insight into opportunities for novel solutions. For example, unconventional methods for delivering electrodes may enable unprecedented numbers of recording sites, embedded optical devices could allow optical detectors to be placed within a few scattering lengths of the measured neurons, and new classes of molecularly engineered sensors might obviate cumbersome hardware architectures. We also study the physics of powering and communicating with microscale devices embedded in brain tissue and find that, while radio-frequency electromagnetic data transmission suffers from a severe power–bandwidth tradeoff, communication via infrared light or ultrasound may allow high data rates due to the possibility of spatial multiplexing. The use of embedded local recording and

  19. Myria: Scalable Analytics as a Service

    NASA Astrophysics Data System (ADS)

    Howe, B.; Halperin, D.; Whitaker, A.

    2014-12-01

    At the UW eScience Institute, we're working to empower non-experts, especially in the sciences, to write and use data-parallel algorithms. To this end, we are building Myria, a web-based platform for scalable analytics and data-parallel programming. Myria's internal model of computation is the relational algebra extended with iteration, such that every program is inherently data-parallel, just as every query in a database is inherently data-parallel. But unlike databases, iteration is a first-class concept, allowing us to express machine learning tasks, graph traversal tasks, and more. Programs can be expressed in a number of languages and can be executed on a number of execution environments, but we emphasize a particular language called MyriaL that supports both imperative and declarative styles and a particular execution engine called MyriaX that uses an in-memory column-oriented representation and asynchronous iteration. We deliver Myria over the web as a service, providing an editor, performance analysis tools, and catalog browsing features in a single environment. We find that this web-based "delivery vector" is critical in reaching non-experts: they are insulated from the irrelevant technical work associated with installation, configuration, and resource management. The MyriaX backend, one of several execution runtimes we support, is a main-memory, column-oriented, RDBMS-on-the-worker system that supports cyclic data flows as a first-class citizen and has been shown to outperform competitive systems on 100-machine cluster sizes. I will describe the Myria system, give a demo, and present some new results in large-scale oceanographic microbiology.
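
    To make the "relational algebra extended with iteration" concrete, here is a generic sketch (plain Python, not MyriaL) of an iterative query of the kind described: computing graph reachability by repeatedly joining a relation with an edge table until a fixpoint is reached.

      edges = {(1, 2), (2, 3), (3, 4)}              # a tiny edge relation
      reachable = set(edges)
      changed = True
      while changed:                                # iteration as a first-class construct
          joined = {(a, d) for (a, b) in reachable for (c, d) in edges if b == c}
          changed = not joined <= reachable
          reachable |= joined
      print(sorted(reachable))                      # transitive closure of the edge relation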

  20. Computing and Visualizing Reachable Volumes for Maneuvering Satellites

    SciTech Connect

    Jiang, M; de Vries, W H; Pertica, A J; Olivier, S S

    2011-09-11

    Detecting and predicting maneuvering satellites is an important problem for Space Situational Awareness. The spatial envelope of all possible locations within reach of such a maneuvering satellite is known as the Reachable Volume (RV). As soon as custody of a satellite is lost, calculating the RV and its subsequent time evolution is a critical component in the rapid recovery of the satellite. In this paper, we present a Monte Carlo approach to computing the RV for a given object. Essentially, our approach samples all possible trajectories by randomizing thrust vectors, thrust magnitudes and times of burn. At any given instant, the distribution of the 'point cloud' of the virtual particles defines the RV. For short orbital time-scales, the temporal evolution of the point cloud can result in complex, multi-reentrant manifolds. Visualization plays an important role in gaining insight and understanding into this complex and evolving manifold. In the second part of this paper, we focus on how to effectively visualize the large number of virtual trajectories and the computed RV. We present a real-time out-of-core rendering technique for visualizing the large number of virtual trajectories. We also examine different techniques for visualizing the computed volume of probability density distribution, including volume slicing, convex hulls and isosurfacing. We compare and contrast these techniques in terms of computational cost and visualization effectiveness, and describe the main implementation issues encountered during our development process. Finally, we present some of the results from our end-to-end system for computing and visualizing RVs using examples of maneuvering satellites.
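
    The Monte Carlo sampling step described above can be sketched as follows; propagate() is a hypothetical two-body propagator, and the state layout, burn model and parameter values are assumptions for illustration, not the authors' implementation.

      import numpy as np

      # state0: [x, y, z, vx, vy, vz]; returns one candidate position per sampled maneuver.
      def sample_reachable_volume(state0, t_final, n_samples=100000, max_dv=0.2, seed=0):
          rng = np.random.default_rng(seed)
          burn_times = rng.uniform(0.0, t_final, n_samples)
          directions = rng.normal(size=(n_samples, 3))
          directions /= np.linalg.norm(directions, axis=1, keepdims=True)
          magnitudes = rng.uniform(0.0, max_dv, n_samples)
          points = []
          for t_burn, u, dv in zip(burn_times, directions, magnitudes):
              state = propagate(state0, t_burn)          # hypothetical orbit propagator
              state[3:6] += dv * u                       # apply the impulsive delta-v
              points.append(propagate(state, t_final - t_burn)[:3])
          return np.asarray(points)                      # point cloud whose envelope is the RV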

  1. Hierarchical aggregation for information visualization: overview, techniques, and design guidelines.

    PubMed

    Elmqvist, Niklas; Fekete, Jean-Daniel

    2010-01-01

    We present a model for building, visualizing, and interacting with multiscale representations of information visualization techniques using hierarchical aggregation. The motivation for this work is to make visual representations more visually scalable and less cluttered. The model allows for augmenting existing techniques with multiscale functionality, as well as for designing new visualization and interaction techniques that conform to this new class of visual representations. We give some examples of how to use the model for standard information visualization techniques such as scatterplots, parallel coordinates, and node-link diagrams, and discuss existing techniques that are based on hierarchical aggregation. This yields a set of design guidelines for aggregated visualizations. We also present a basic vocabulary of interaction techniques suitable for navigating these multiscale visualizations.

  2. A scalable climate health justice assessment model

    PubMed Central

    McDonald, Yolanda J.; Grineski, Sara E.; Collins, Timothy W.; Kim, Young-An

    2014-01-01

    This paper introduces a scalable “climate health justice” model for assessing and projecting incidence, treatment costs, and sociospatial disparities for diseases with well-documented climate change linkages. The model is designed to employ low-cost secondary data, and it is rooted in a perspective that merges normative environmental justice concerns with theoretical grounding in health inequalities. Since the model employs International Classification of Diseases, Ninth Revision Clinical Modification (ICD-9-CM) disease codes, it is transferable to other contexts, appropriate for use across spatial scales, and suitable for comparative analyses. We demonstrate the utility of the model through analysis of 2008–2010 hospitalization discharge data at state and county levels in Texas (USA). We identified several disease categories (i.e., cardiovascular, gastrointestinal, heat-related, and respiratory) associated with climate change, and then selected corresponding ICD-9 codes with the highest hospitalization counts for further analyses. Selected diseases include ischemic heart disease, diarrhea, heat exhaustion/cramps/stroke/syncope, and asthma. Cardiovascular disease ranked first among the general categories of diseases for age-adjusted hospital admission rate (5286.37 per 100,000). In terms of specific selected diseases (per 100,000 population), asthma ranked first (517.51), followed by ischemic heart disease (195.20), diarrhea (75.35), and heat exhaustion/cramps/stroke/syncope (7.81). Charges associated with the selected diseases over the 3-year period amounted to US$5.6 billion. Blacks were disproportionately burdened by the selected diseases in comparison to non-Hispanic whites, while Hispanics were not. Spatial distributions of the selected disease rates revealed geographic zones of disproportionate risk. Based upon a downscaled regional climate-change projection model, we estimate a >5% increase in the incidence and treatment costs of asthma attributable to
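
    The age-adjusted admission rates quoted above are directly standardized rates; the sketch below shows that computation on made-up counts and weights (the age bands, populations and standard weights are illustrative, not the study's data).

      import pandas as pd

      data = pd.DataFrame({
          "age_band":   ["0-17", "18-44", "45-64", "65+"],
          "admissions": [1200, 3400, 5200, 6100],          # hypothetical counts
          "population": [700000, 1500000, 900000, 400000],
          "std_weight": [0.24, 0.38, 0.25, 0.13],          # standard-population weights (sum to 1)
      })
      crude = data["admissions"] / data["population"] * 100000  # rate per 100,000 in each band
      age_adjusted = (crude * data["std_weight"]).sum()
      print(f"age-adjusted rate: {age_adjusted:.2f} per 100,000")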

  3. Scalable Designs for Planar Ion Trap Arrays

    NASA Astrophysics Data System (ADS)

    Slusher, R. E.

    2007-03-01


  4. A scalable climate health justice assessment model.

    PubMed

    McDonald, Yolanda J; Grineski, Sara E; Collins, Timothy W; Kim, Young-An

    2015-05-01

    This paper introduces a scalable "climate health justice" model for assessing and projecting incidence, treatment costs, and sociospatial disparities for diseases with well-documented climate change linkages. The model is designed to employ low-cost secondary data, and it is rooted in a perspective that merges normative environmental justice concerns with theoretical grounding in health inequalities. Since the model employs International Classification of Diseases, Ninth Revision Clinical Modification (ICD-9-CM) disease codes, it is transferable to other contexts, appropriate for use across spatial scales, and suitable for comparative analyses. We demonstrate the utility of the model through analysis of 2008-2010 hospitalization discharge data at state and county levels in Texas (USA). We identified several disease categories (i.e., cardiovascular, gastrointestinal, heat-related, and respiratory) associated with climate change, and then selected corresponding ICD-9 codes with the highest hospitalization counts for further analyses. Selected diseases include ischemic heart disease, diarrhea, heat exhaustion/cramps/stroke/syncope, and asthma. Cardiovascular disease ranked first among the general categories of diseases for age-adjusted hospital admission rate (5286.37 per 100,000). In terms of specific selected diseases (per 100,000 population), asthma ranked first (517.51), followed by ischemic heart disease (195.20), diarrhea (75.35), and heat exhaustion/cramps/stroke/syncope (7.81). Charges associated with the selected diseases over the 3-year period amounted to US$5.6 billion. Blacks were disproportionately burdened by the selected diseases in comparison to non-Hispanic whites, while Hispanics were not. Spatial distributions of the selected disease rates revealed geographic zones of disproportionate risk. Based upon a downscaled regional climate-change projection model, we estimate a >5% increase in the incidence and treatment costs of asthma attributable to

  5. Laplacian embedded regression for scalable manifold regularization.

    PubMed

    Chen, Lin; Tsang, Ivor W; Xu, Dong

    2012-06-01

    world data sets show the effectiveness and scalability of the proposed framework.

  6. Visual agnosia.

    PubMed

    Álvarez, R; Masjuan, J

    2016-03-01

    Visual agnosia is defined as an impairment of object recognition, in the absence of visual acuity or cognitive dysfunction that would explain this impairment. This condition is caused by lesions in the visual association cortex, sparing primary visual cortex. There are 2 main pathways that process visual information: the ventral stream, tasked with object recognition, and the dorsal stream, in charge of locating objects in space. Visual agnosia can therefore be divided into 2 major groups depending on which of the two streams is damaged. The aim of this article is to conduct a narrative review of the various visual agnosia syndromes, including recent developments in a number of these syndromes.

  7. Advances in 3D visualization of air quality data

    NASA Astrophysics Data System (ADS)

    San José, R.; Pérez, J. L.; González, R. M.

    2012-10-01

    Air quality models produce a considerable amount of data, and raw data can be hard to conceptualize, particularly when the size of the data sets reaches terabytes; to understand the atmospheric processes and consequences of air pollution it is therefore necessary to analyse the results of the air pollution simulations. The basis of the development of the visualization is shaped by the requirements of the different groups of users. We show different possibilities to represent 3D atmospheric data and geographic data. We present several examples developed with the IDV software, which is a generic tool that can be used directly with the simulation results. The rest of the solutions are specific applications developed by the authors which integrate different tools and technologies. In the case of the buildings, it has been necessary to make a 3D model from the building data using the COLLADA standard format. In the case of the Google Earth approach, we use the Ferret software for the atmospheric part. In the case of gvSIG-3D, for the atmospheric visualization we have used the different geometric figures available: "QuadPoints", "Polylines", "Spheres" and isosurfaces. The last one is also displayed following the VRML standard.

  8. WIFIRE: A Scalable Data-Driven Monitoring, Dynamic Prediction and Resilience Cyberinfrastructure for Wildfires

    NASA Astrophysics Data System (ADS)

    Altintas, I.; Block, J.; Braun, H.; de Callafon, R. A.; Gollner, M. J.; Smarr, L.; Trouve, A.

    2013-12-01

    Recent studies confirm that climate change will cause wildfires to increase in frequency and severity in the coming decades, especially for California and much of the North American West. The most critical sustainability issue in the midst of these ever-changing dynamics is how to achieve a new social-ecological equilibrium of this fire ecology. Wildfire wind speeds and directions change in an instant, and first responders can only be effective when they take action as quickly as the conditions change. To deliver the information needed for sustainable policy and management in this dynamically changing fire regime, we must capture these details to understand the environmental processes. We are building an end-to-end cyberinfrastructure (CI), called WIFIRE, for real-time and data-driven simulation, prediction and visualization of wildfire behavior. The WIFIRE integrated CI system supports social-ecological resilience to the changing fire ecology regime in the face of urban dynamics and climate change. Networked observations, e.g., heterogeneous satellite data and real-time remote sensor data, are integrated with computational techniques in signal processing, visualization, modeling and data assimilation to provide a scalable, technological, and educational solution to monitor weather patterns and predict a wildfire's rate of spread. Our collaborative WIFIRE team of scientists, engineers, technologists, government policy managers, private industry, and firefighters architects and implements CI pathways that enable joint innovation for wildfire management. Scientific workflows are used as an integrative distributed programming model and simplify the implementation of engineering modules for data-driven simulation, prediction and visualization while allowing integration with large-scale computing facilities. WIFIRE will be scalable to users with different skill levels via specialized web interfaces and user-specified alerts for environmental events broadcast to receivers before

  9. BactoGeNIE: A large-scale comparative genome visualization for big displays

    DOE PAGES

    Aurisano, Jillian; Reda, Khairi; Johnson, Andrew; ...

    2015-08-13

    The volume of complete bacterial genome sequence data available to comparative genomics researchers is rapidly increasing. However, visualizations in comparative genomics--which aim to enable analysis tasks across collections of genomes--suffer from visual scalability issues. While large, multi-tiled and high-resolution displays have the potential to address scalability issues, new approaches are needed to take advantage of such environments, in order to enable the effective visual analysis of large genomics datasets. In this paper, we present Bacterial Gene Neighborhood Investigation Environment, or BactoGeNIE, a novel and visually scalable design for comparative gene neighborhood analysis on large display environments. We evaluate BactoGeNIE through a case study on close to 700 draft Escherichia coli genomes, and present lessons learned from our design process. In conclusion, BactoGeNIE accommodates comparative tasks over substantially larger collections of neighborhoods than existing tools and explicitly addresses visual scalability. Given current trends in data generation, scalable designs of this type may inform visualization design for large-scale comparative research problems in genomics.

  10. BactoGeNIE: A large-scale comparative genome visualization for big displays

    SciTech Connect

    Aurisano, Jillian; Reda, Khairi; Johnson, Andrew; Marai, Elisabeta G.; Leigh, Jason

    2015-08-13

    The volume of complete bacterial genome sequence data available to comparative genomics researchers is rapidly increasing. However, visualizations in comparative genomics--which aim to enable analysis tasks across collections of genomes--suffer from visual scalability issues. While large, multi-tiled and high-resolution displays have the potential to address scalability issues, new approaches are needed to take advantage of such environments, in order to enable the effective visual analysis of large genomics datasets. In this paper, we present Bacterial Gene Neighborhood Investigation Environment, or BactoGeNIE, a novel and visually scalable design for comparative gene neighborhood analysis on large display environments. We evaluate BactoGeNIE through a case study on close to 700 draft Escherichia coli genomes, and present lessons learned from our design process. In conclusion, BactoGeNIE accommodates comparative tasks over substantially larger collections of neighborhoods than existing tools and explicitly addresses visual scalability. Given current trends in data generation, scalable designs of this type may inform visualization design for large-scale comparative research problems in genomics.

  11. Visualization methods for high-resolution, transient, 3-D, finite element situations

    SciTech Connect

    Christon, M.A.

    1995-01-10

    Scientific visualization is the process whereby numerical data is transformed into a visual form to augment the process of discovery and understanding. Visualizing the data generated by large-scale, transient, three-dimensional finite element simulations poses many challenges due to geometric complexity, the presence of multiple materials and multiple element types, and the inherent unstructured nature of the meshes. In this paper, the direct use of finite element data structures, nodal assembly procedures, and element interpolants for volumetric adaptive surface extraction, surface rendering, vector grids and particle tracing is discussed. A brief description of a "direct-to-disk" animation system is presented, and case studies which demonstrate the use of isosurfaces, vector plots, cutting planes, reference surfaces and particle tracing are then discussed in the context of transient incompressible viscous flow and acoustic fluid-structure interaction simulations. An overview of the implications of massively parallel computers on visualization is presented to highlight the issues in parallel visualization methodology, algorithms, data locality and the ultimate requirements for temporary and archival data storage and network bandwidth.

  12. Scalability enhancement of AODV using local link repairing

    NASA Astrophysics Data System (ADS)

    Jain, Jyoti; Gupta, Roopam; Bandhopadhyay, T. K.

    2014-09-01

    Dynamic change in the topology of an ad hoc network makes it difficult to design an efficient routing protocol. Scalability of an ad hoc network is also one of the important criteria of research in this field. Most research works in ad hoc networks focus on routing and medium access protocols and produce simulation results for limited-size networks. Ad hoc on-demand distance vector (AODV) is one of the best reactive routing protocols. In this article, modified routing protocols based on local link repairing of AODV are proposed. A method of finding alternate routes to the next-to-next node is proposed in case of link failure. These protocols are beacon-less, meaning the periodic hello message is removed from basic AODV to improve scalability. A few control packet formats have been changed to accommodate the suggested modification. The proposed protocols are simulated to investigate scalability performance and compared with the basic AODV protocol. This also proves that the local link repairing of the proposed protocol improves the scalability of the network. From simulation results, it is clear that the scalability performance of the routing protocol is improved because of the link repairing method. We have tested the protocols for different terrain areas with approximately constant node densities and different traffic loads.
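
    A toy illustration of the local link-repair idea (not the authors' protocol) is given below: when the link to the next hop breaks, the node asks its neighbors whether they can reach the next-to-next hop and splices in a detour instead of triggering a full route discovery; the data structures are hypothetical.

      # my_routes: destination -> next hop at this node.
      # neighbor_routes: neighbor -> set of destinations that neighbor reports reachable.
      def local_repair(my_routes, neighbor_routes, dest, next_next_hop):
          for neighbor, reachable in neighbor_routes.items():
              if next_next_hop in reachable:
                  my_routes[dest] = neighbor      # detour around the broken link
                  return True
          return False                            # fall back to source-initiated route discovery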

  13. FMOE-MR: content-driven multiresolution MPEG-4 fine grained scalable layered video encoding

    NASA Astrophysics Data System (ADS)

    Chattopadhyay, S.; Luo, X.; Bhandarkar, S. M.; Li, K.

    2007-01-01

    The MPEG-4 Fine Grained Scalability (FGS) profile aims at scalable layered video encoding, in order to ensure efficient video streaming in networks with fluctuating bandwidths. In this paper, we propose a novel technique, termed FMOE-MR, which delivers significantly improved rate-distortion performance compared to existing MPEG-4 Base Layer encoding techniques. The video frames are re-encoded at high resolution in semantically and visually important regions of the video (termed Features, Motion and Objects) that are defined using a mask (FMO-Mask) and at low resolution in the remaining regions. The multiple-resolution re-rendering step is implemented such that further MPEG-4 compression leads to low bit rate Base Layer video encoding. The Features, Motion and Objects Encoded Multi-Resolution (FMOE-MR) scheme is an integrated approach that requires only encoder-side modifications, and is transparent to the decoder. Further, since the FMOE-MR scheme incorporates "smart" video preprocessing, it requires no change in existing MPEG-4 codecs. As a result, it is straightforward to use the proposed FMOE-MR scheme with any existing MPEG codec, thus allowing great flexibility in implementation. In this paper, we have described and implemented unsupervised and semi-supervised algorithms to create the FMO-Mask from a given video sequence, using state-of-the-art computer vision algorithms.
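
    A minimal sketch of the mask-driven multiresolution re-rendering idea is shown below: pixels inside the FMO mask keep full resolution while the rest of the frame is low-pass filtered so a standard encoder spends fewer bits there. The box blur and the H x W x 3 frame layout are simplifying assumptions, not the paper's actual filtering.

      import numpy as np

      def box_blur(frame, k=8):
          # crude low-resolution rendering: downsample, then replicate pixels
          h, w = frame.shape[:2]
          small = frame[::k, ::k]
          return np.repeat(np.repeat(small, k, axis=0), k, axis=1)[:h, :w]

      def fmoe_mr_preprocess(frame, fmo_mask):
          # frame: H x W x 3 array; fmo_mask: H x W array, 1 inside important regions
          low = box_blur(frame.astype(float))
          mask = fmo_mask[..., None].astype(float)
          return (mask * frame + (1.0 - mask) * low).astype(frame.dtype)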

  14. NEXUS Scalable and Distributed Next-Generation Avionics Bus for Space Missions

    NASA Technical Reports Server (NTRS)

    He, Yutao; Shalom, Eddy; Chau, Savio N.; Some, Raphael R.; Bolotin, Gary S.

    2011-01-01

    A paper discusses NEXUS, a common, next-generation avionics interconnect that is transparently compatible with wired, fiber-optic, and RF physical layers; provides a flexible, scalable, packet switched topology; is fault-tolerant with sub-microsecond detection/recovery latency; has scalable bandwidth from 1 Kbps to 10 Gbps; has guaranteed real-time determinism with sub-microsecond latency/jitter; has built-in testability; features low power consumption (< 100 mW per Gbps); is lightweight with about a 5,000-logic-gate footprint; and is implemented in a small Bus Interface Unit (BIU) with reconfigurable back-end providing interface to legacy subsystems. NEXUS enhances a commercial interconnect standard, Serial RapidIO, to meet avionics interconnect requirements without breaking the standard. This unified interconnect technology can be used to meet performance, power, size, and reliability requirements of all ranges of equipment, sensors, and actuators at chip-to-chip, board-to-board, or box-to-box boundary. Early results from in-house modeling activity of Serial RapidIO using VisualSim indicate that the use of a switched, high-performance avionics network will provide a quantum leap in spacecraft onboard science and autonomy capability for science and exploration missions.

  15. Technical Report: Scalable Parallel Algorithms for High Dimensional Numerical Integration

    SciTech Connect

    Masalma, Yahya; Jiao, Yu

    2010-10-01

    We implemented a scalable parallel quasi-Monte Carlo numerical high-dimensional integration for tera-scale data points. The implemented algorithm uses Sobol's quasi-random sequences to generate random samples. Sobol's sequence was used to avoid clustering effects in the generated random samples and to produce low-discrepancy random samples which cover the entire integration domain. The performance of the algorithm was tested. The obtained results prove the scalability and accuracy of the implemented algorithm. The implemented algorithm could be used in different applications where a huge data volume is generated and numerical integration is required. We suggest using a hybrid MPI and OpenMP programming model to improve the performance of the algorithm. If the mixed model is used, attention should be paid to scalability and accuracy.
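
    A serial sketch of quasi-Monte Carlo integration with a Sobol sequence is given below, using SciPy's generator; the integrand is an arbitrary example and the MPI/OpenMP decomposition discussed in the report is omitted.

      import numpy as np
      from scipy.stats import qmc

      dim, m = 6, 20                                 # 2**20 ~ 1M low-discrepancy points
      sampler = qmc.Sobol(d=dim, scramble=True, seed=0)
      samples = sampler.random_base2(m=m)            # points in the unit hypercube [0, 1)^dim
      values = np.prod(np.sin(np.pi * samples), axis=1)   # example integrand
      print(values.mean())                           # QMC estimate of the integral over the cube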

  16. Freeprocessing: Transparent in situ visualization via data interception

    PubMed Central

    Fogal, Thomas; Proch, Fabian; Schiewe, Alexander; Hasemann, Olaf; Kempf, Andreas; Krüger, Jens

    2014-01-01

    In situ visualization has become a popular method for avoiding the slowest component of many visualization pipelines: reading data from disk. Most previous in situ work has focused on achieving visualization scalability on par with simulation codes, or on the data movement concerns that become prevalent at extreme scales. In this work, we consider in situ analysis with respect to ease of use and programmability. We describe an abstraction that opens up new applications for in situ visualization, and demonstrate that this abstraction and an expanded set of use cases can be realized without a performance cost. PMID:25995996

  17. geoKepler Workflow Module for Computationally Scalable and Reproducible Geoprocessing and Modeling

    NASA Astrophysics Data System (ADS)

    Cowart, C.; Block, J.; Crawl, D.; Graham, J.; Gupta, A.; Nguyen, M.; de Callafon, R.; Smarr, L.; Altintas, I.

    2015-12-01

    The NSF-funded WIFIRE project has developed an open-source, online geospatial workflow platform for unifying geoprocessing tools and models for fire and other geospatially dependent modeling applications. It is a product of WIFIRE's objective to build an end-to-end cyberinfrastructure for real-time and data-driven simulation, prediction and visualization of wildfire behavior. geoKepler includes a set of reusable GIS components, or actors, for the Kepler Scientific Workflow System (https://kepler-project.org). Actors exist for reading and writing GIS data in formats such as Shapefile, GeoJSON and KML, and for using OGC web services such as WFS. The actors also allow for calling geoprocessing tools in other packages such as GDAL and GRASS. Kepler integrates functions from multiple platforms and file formats into one framework, thus enabling optimal GIS interoperability, model coupling, and scalability. Products of the GIS actors can be fed directly to models such as FARSITE and WRF. Kepler's ability to schedule and scale processes using Hadoop and Spark also makes geoprocessing ultimately extensible and computationally scalable. The reusable workflows in geoKepler can be made to run automatically when alerted by real-time environmental conditions. Here, we show breakthroughs in the speed of creating complex data for hazard assessments with this platform. We also demonstrate geoKepler workflows that use data assimilation to ingest real-time weather data into wildfire simulations, and data mining techniques to gain insight into environmental conditions affecting fire behavior. Existing machine learning tools and libraries such as R and MLlib are being leveraged for this purpose in Kepler, as well as Kepler's Distributed Data Parallel (DDP) capability to provide a framework for scalable processing. geoKepler workflows can be executed via an iPython notebook as part of a Jupyter hub at UC San Diego for sharing and reporting of the scientific analysis and results from

  18. Computing and Visualizing Reachable Volumes for Maneuvering Satellites

    DTIC Science & Technology

    2011-09-01

    Computing and Visualizing Reachable Volumes for Maneuvering Satellites Ming Jiang, Willem H. de Vries, Alexander J. Pertica, Scot S. Olivier...Handbook. Elsevier, 2004. 6. M. Jiang, M. Andereck, A. J. Pertica, and S. S. Olivier. A Scalable Visualization System for Improving Space Situational...Jiang, J. Leek, J. L. Levatin, S. Nikolaev, A. J. Pertica, D. W. Phillion, H. K. Springer, and W. H. de Vries. High-Performance Computer Modeling of

  19. An application architecture for large data visualization: a case study.

    SciTech Connect

    Law, C.; Ahrens, James; Henderson, Amy

    2001-01-01

    In this case study we present an open-source visualization application with a novel data-parallel application architecture. The architecture is unique because it uses the Tcl scripting language to synchronize the user interface with the VTK parallel visualization pipeline and parallel-rendering module. The resulting application shows scalable performance, and is easily extendable because of its simple modular architecture. We demonstrate the application with a 9.8 gigabyte structured-grid ocean model.
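
    A minimal single-process VTK pipeline in Python gives a feel for the kind of pipeline the application drives (the paper itself couples such a pipeline to Tcl scripting, data parallelism and parallel rendering); the input file name and isovalue below are placeholders.

      import vtk

      reader = vtk.vtkStructuredGridReader()
      reader.SetFileName("ocean_model.vtk")         # hypothetical legacy VTK file

      contour = vtk.vtkContourFilter()
      contour.SetInputConnection(reader.GetOutputPort())
      contour.SetValue(0, 0.5)                      # extract one isosurface

      mapper = vtk.vtkPolyDataMapper()
      mapper.SetInputConnection(contour.GetOutputPort())

      actor = vtk.vtkActor()
      actor.SetMapper(mapper)

      renderer = vtk.vtkRenderer()
      renderer.AddActor(actor)
      window = vtk.vtkRenderWindow()
      window.AddRenderer(renderer)
      window.Render()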

  20. Current parallel I/O limitations to scalable data analysis.

    SciTech Connect

    Mascarenhas, Ajith Arthur; Pebay, Philippe Pierre

    2011-07-01

    This report describes the limitations to parallel scalability which we have encountered when applying our otherwise optimally scalable parallel statistical analysis tool kit to large data sets distributed across the parallel file system of the current premier DOE computational facility. It describes our study to evaluate the effect of parallel I/O on the overall scalability of a parallel data analysis pipeline using our scalable parallel statistics tool kit [PTBM11]. To this end, we tested it on the Jaguar-pf DOE/ORNL peta-scale platform with large combustion simulation data under a variety of process counts and domain decomposition scenarios. In this report we recall the foundations of the parallel statistical analysis tool kit which we have designed and implemented, with the specific double intent of reproducing typical data analysis workflows and achieving an optimal design for scalable parallel implementations. We briefly review those earlier results and publications which allow us to conclude that we have achieved both goals. However, in this report we further establish that, when used in conjunction with a state-of-the-art parallel I/O system, as can be found on the premier DOE peta-scale platform, the scaling properties of the overall analysis pipeline comprising parallel data access routines degrade rapidly. This finding is problematic and must be addressed if peta-scale data analysis is to be made scalable, or even possible. In order to attempt to address these parallel I/O limitations, we will investigate the use of the Adaptable IO System (ADIOS) [LZL+10] to improve I/O performance, while maintaining flexibility for a variety of IO options, such as MPI IO and POSIX IO. This system is developed at ORNL and other collaborating institutions, and is being tested extensively on Jaguar-pf. Simulation code being developed on these systems will also use ADIOS to output the data thereby making it easier for other systems, such as ours, to
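
    For concreteness, the sketch below shows the kind of collective MPI-IO write whose scaling behavior such a study measures, using mpi4py; the slab size and file name are illustrative and not taken from the report.

      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()

      local = np.full(1_000_000, rank, dtype=np.float64)   # each rank's slab of the domain
      offset = rank * local.nbytes                         # contiguous placement by rank

      fh = MPI.File.Open(comm, "slabs.dat", MPI.MODE_WRONLY | MPI.MODE_CREATE)
      fh.Write_at_all(offset, local)                       # collective write
      fh.Close()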

  1. Generation of pedigree diagrams for web display using scalable vector graphics from a clinical trials database.

    PubMed Central

    Fernando, S. K.; Brandt, C.; Nadkarni, P.

    2001-01-01

    The standard method of studying inherited disease is to observe its pattern of distribution in families, that is, its pattern in a pedigree. For clinical studies focused on inherited disease, a pedigree diagram is a valuable visual tool for the display of inheritance patterns. We describe the creation of a web-based pedigree display module for Trial/DB, a Web accessible database developed at the Yale Center for Medical Informatics (YCMI) to support clinical research studies. The pedigree diagram is generated dynamically from the database. The icons representing each subject in the pedigree are selectable hyperlinks that will display detailed clinical data collected on the subject. Microsoft Active Server Page and Scalable Vector Graphics (SVG) are used to create the interactive pedigree diagrams. PMID:11825175
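
    A toy sketch of the general idea follows: emit an SVG pedigree in which each subject icon is a hyperlink to a detail page. The subject records and the detail-page URL are invented and unrelated to Trial/DB.

      # Build a tiny SVG pedigree with hyperlinked subject icons.
      subjects = [
          {"id": "F1", "sex": "M", "x": 40,  "y": 30},   # father: square
          {"id": "M1", "sex": "F", "x": 120, "y": 30},   # mother: circle
          {"id": "C1", "sex": "F", "x": 80,  "y": 100},  # child
      ]

      shapes = []
      for s in subjects:
          href = f"subject.asp?id={s['id']}"             # hypothetical detail page
          if s["sex"] == "M":
              icon = f'<rect x="{s["x"] - 10}" y="{s["y"] - 10}" width="20" height="20" fill="white" stroke="black"/>'
          else:
              icon = f'<circle cx="{s["x"]}" cy="{s["y"]}" r="10" fill="white" stroke="black"/>'
          shapes.append(f'<a href="{href}">{icon}</a>')

      svg = '<svg xmlns="http://www.w3.org/2000/svg" width="200" height="140">' + "".join(shapes) + "</svg>"
      with open("pedigree.svg", "w") as out:
          out.write(svg)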

  2. Diskless supercomputers: Scalable, reliable I/O for the Tera-Op technology base

    NASA Technical Reports Server (NTRS)

    Katz, Randy H.; Ousterhout, John K.; Patterson, David A.

    1993-01-01

    Computing is seeing an unprecedented improvement in performance; over the last five years there has been an order-of-magnitude improvement in the speeds of workstation CPUs. At least another order of magnitude seems likely in the next five years, to machines with 500 MIPS or more. The goal of the ARPA Teraop program is to realize even larger, more powerful machines, executing as many as a trillion operations per second. Unfortunately, we have seen no comparable breakthroughs in I/O performance; the speeds of I/O devices and the hardware and software architectures for managing them have not changed substantially in many years. We have completed a program of research to demonstrate hardware and software I/O architectures capable of supporting the kinds of internetworked 'visualization' workstations and supercomputers that will appear in the mid 1990s. The project had three overall goals: high performance, high reliability, and a scalable, multipurpose system.

  3. SSEL1.0. Sandia Scalable Encryption Software

    SciTech Connect

    Tarman, T.D.

    1996-08-29

    Sandia Scalable Encryption Library (SSEL) Version 1.0 is a library of functions that implement Sandia's scalable encryption algorithm. This algorithm is used to encrypt Asynchronous Transfer Mode (ATM) data traffic, and is capable of operating on an arbitrary number of bits at a time (which permits scaling via parallel implementations), while being interoperable with differently scaled versions of this algorithm. The routines in this library implement 8 bit and 32 bit versions of a non-linear mixer which is compatible with Sandia's hardware-based ATM encryptor.

  4. Providing scalable system software for high-end simulations

    SciTech Connect

    Greenberg, D.

    1997-12-31

    Detailed, full-system, complex physics simulations have been shown to be feasible on systems containing thousands of processors. In order to manage these computer systems it has been necessary to create scalable system services. In this talk Sandia's research on scalable systems will be described. The key concepts of low overhead data movement through portals and of flexible services through multi-partition architectures will be illustrated in detail. The talk will conclude with a discussion of how these techniques can be applied outside of the standard monolithic MPP system.

  5. Scalable photonic crystal chips for high sensitivity protein detection.

    PubMed

    Liang, Feng; Clarke, Nigel; Patel, Parth; Loncar, Marko; Quan, Qimin

    2013-12-30

    Scalable microfabrication technology has enabled the semiconductor and microelectronics industries, among other fields. Meanwhile, rapid and sensitive bio-molecule detection is increasingly important for drug discovery and biomedical diagnostics. In this work, we designed and demonstrated photonic crystal sensor chips that have high sensitivity for protein detection and can be mass-produced with scalable deep-UV lithography. We demonstrated label-free detection of carcinoembryonic antigen from pg/mL to μg/mL, with high quality factor photonic crystal nanobeam cavities.

  6. Scalable multi-variate analytics of seismic and satellite-based observational data.

    PubMed

    Yuan, Xiaoru; He, Xiao; Guo, Hanqi; Guo, Peihong; Kendall, Wesley; Huang, Jian; Zhang, Yongxian

    2010-01-01

    Over the past few years, large human populations around the world have been affected by an increase in significant seismic activities. For both conducting basic scientific research and for setting critical government policies, it is crucial to be able to explore and understand seismic and geographical information obtained through all scientific instruments. In this work, we present a visual analytics system that enables explorative visualization of seismic data together with satellite-based observational data, and introduce a suite of visual analytical tools. Seismic and satellite data are integrated temporally and spatially. Users can select temporal and spatial ranges to zoom in on specific seismic events, as well as to inspect changes both during and after the events. Tools for designing high dimensional transfer functions have been developed to enable efficient and intuitive comprehension of the multi-modal data. Spread-sheet style comparisons are used for data drill-down as well as presentation. Comparisons between distinct seismic events are also provided for characterizing event-wise differences. Our system has been designed for scalability in terms of data size, complexity (i.e., number of modalities), and varying form factors of display environments.

  7. Pathfinder: Visual Analysis of Paths in Graphs

    PubMed Central

    Partl, C.; Gratzl, S.; Streit, M.; Wassermann, A. M.; Pfister, H.; Schmalstieg, D.; Lex, A.

    2016-01-01

    The analysis of paths in graphs is highly relevant in many domains. Typically, path-related tasks are performed in node-link layouts. Unfortunately, graph layouts often do not scale to the size of many real world networks. Also, many networks are multivariate, i.e., contain rich attribute sets associated with the nodes and edges. These attributes are often critical in judging paths, but directly visualizing attributes in a graph layout exacerbates the scalability problem. In this paper, we present visual analysis solutions dedicated to path-related tasks in large and highly multivariate graphs. We show that by focusing on paths, we can address the scalability problem of multivariate graph visualization, equipping analysts with a powerful tool to explore large graphs. We introduce Pathfinder (Figure 1), a technique that provides visual methods to query paths, while considering various constraints. The resulting set of paths is visualized in both a ranked list and as a node-link diagram. For the paths in the list, we display rich attribute data associated with nodes and edges, and the node-link diagram provides topological context. The paths can be ranked based on topological properties, such as path length or average node degree, and scores derived from attribute data. Pathfinder is designed to scale to graphs with tens of thousands of nodes and edges by employing strategies such as incremental query results. We demonstrate Pathfinder's fitness for use in scenarios with data from a coauthor network and biological pathways. PMID:27942090
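
    The flavor of attribute-aware path querying and ranking can be sketched with networkx; this is not Pathfinder's implementation, and the graph, attributes, and scoring rule below are invented.

      import networkx as nx

      g = nx.Graph()
      g.add_edge("A", "B", weight=1.0)
      g.add_edge("B", "C", weight=2.0)
      g.add_edge("A", "D", weight=1.5)
      g.add_edge("D", "C", weight=1.0)
      for node, score in {"A": 0.5, "B": 0.9, "C": 0.7, "D": 0.4}.items():
          g.nodes[node]["score"] = score        # a node attribute used for ranking

      def rank(path):
          # combine a topological property (length) with an attribute-derived score
          attr = sum(g.nodes[n]["score"] for n in path) / len(path)
          return attr - 0.1 * len(path)

      paths = list(nx.all_simple_paths(g, "A", "C", cutoff=4))
      for p in sorted(paths, key=rank, reverse=True):
          print(p, round(rank(p), 3))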

  8. A scalable portable object-oriented framework for parallel multisensor data-fusion applications in HPC systems

    NASA Astrophysics Data System (ADS)

    Gupta, Pankaj; Prasad, Guru

    2004-04-01

    Multi-sensor Data Fusion is the synergistic integration of multiple data sets. Data fusion includes processes for aligning, associating and combining data and information in estimating and predicting the state of objects, their relationships, and characterizing situations and their significance. The combination of complex data sets and the need for real-time data storage and retrieval compounds the data fusion problem. The systematic development and use of data fusion techniques are particularly critical in applications requiring massive, diverse, ambiguous, and time-critical data. Such conditions are characteristic of new emerging requirements; e.g., network-centric and information-centric warfare, low intensity conflicts such as special operations, counter narcotics, antiterrorism, information operations and CALOW (Conventional Arms, Limited Objectives Warfare), economic and political intelligence. In this paper, Aximetric presents a novel, scalable, object-oriented, metamodel framework for a parallel, cluster-based data-fusion engine on High Performance Computing (HPC) Systems. The data-clustering algorithms provide a fast, scalable technique to sift through massive, complex data sets coming through multiple streams in real-time. The load-balancing algorithm provides the capability to evenly distribute the workload among processors on-the-fly and achieve real-time scalability. The proposed data-fusion engine exploits unique data-structures for fast storage, retrieval and interactive visualization of the multiple data streams.

  9. Scalable Track Initiation for Optical Space Surveillance

    NASA Astrophysics Data System (ADS)

    Schumacher, P.; Wilkins, M. P.

    2012-09-01

    least cubic and commonly quartic or higher. Therefore, practical implementations require attention to the scalability of the algorithms, when one is dealing with the very large number of observations from large surveillance telescopes. We address two broad categories of algorithms. The first category includes and extends the classical methods of Laplace and Gauss, as well as the more modern method of Gooding, in which one solves explicitly for the apparent range to the target in terms of the given data. In particular, recent ideas offered by Mortari and Karimi allow us to construct a family of range-solution methods that can be scaled to many processors efficiently. We find that the orbit solutions (data association hypotheses) can be ranked by means of a concept we call persistence, in which a simple statistical measure of likelihood is based on the frequency of occurrence of combinations of observations in consistent orbit solutions. Of course, range-solution methods can be expected to perform poorly if the orbit solutions of most interest are not well conditioned. The second category of algorithms addresses this difficulty. Instead of solving for range, these methods attach a set of range hypotheses to each measured line of sight. Then all pair-wise combinations of observations are considered and the family of Lambert problems is solved for each pair. These algorithms also have polynomial complexity, though now the complexity is quadratic in the number of observations and also quadratic in the number of range hypotheses. We offer a novel type of admissible-region analysis, constructing partitions of the orbital element space and deriving rigorous upper and lower bounds on the possible values of the range for each partition. This analysis allows us to parallelize with respect to the element partitions and to reduce the number of range hypotheses that have to be considered in each processor simply by making the partitions smaller. Naturally, there are many ways to
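
    The combinatorial growth described above can be made concrete with a small enumeration sketch: every pair of observations combined with every pair of range hypotheses yields one Lambert problem to solve. The observation count and hypothesis grid below are arbitrary placeholders.

      from itertools import combinations, product

      observations = [f"obs_{k}" for k in range(200)]      # measured lines of sight
      range_bins = list(range(10))                         # range hypotheses per observation

      pairs = list(combinations(observations, 2))          # quadratic in the observations
      hypotheses = list(product(range_bins, repeat=2))     # quadratic in the hypotheses
      print(len(pairs) * len(hypotheses), "Lambert problems to solve")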

  10. ElVis: A System for the Accurate and Interactive Visualization of High-Order Finite Element Solutions.

    PubMed

    Nelson, B; Liu, E; Kirby, R M; Haimes, R

    2012-12-01

    This paper presents the Element Visualizer (ElVis), a new, open-source scientific visualization system for use with high-order finite element solutions to PDEs in three dimensions. This system is designed to minimize visualization errors of these types of fields by querying the underlying finite element basis functions (e.g., high-order polynomials) directly, leading to pixel-exact representations of solutions and geometry. The system interacts with simulation data through runtime plugins, which only require users to implement a handful of operations fundamental to finite element solvers. The data in turn can be visualized through the use of cut surfaces, contours, isosurfaces, and volume rendering. These visualization algorithms are implemented using NVIDIA's OptiX GPU-based ray-tracing engine, which provides accelerated ray traversal of the high-order geometry, and CUDA, which allows for effective parallel evaluation of the visualization algorithms. The direct interface between ElVis and the underlying data differentiates it from existing visualization tools. Current tools assume the underlying data is composed of linear primitives; high-order data must be interpolated with linear functions as a result. In this work, examples drawn from aerodynamic simulations (high-order discontinuous Galerkin finite element solutions of aerodynamic flows in particular) will demonstrate the superiority of ElVis' pixel-exact approach when compared with traditional linear-interpolation methods. Such methods can introduce a number of inaccuracies in the resulting visualization, making it unclear if visual artifacts are genuine to the solution data or if these artifacts are the result of interpolation errors. Linear methods additionally cannot properly visualize curved geometries (elements or boundaries) which can greatly inhibit developers' debugging efforts. As we will show, pixel-exact visualization exhibits none of these issues, removing the visualization scheme as a source of
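
    A one-dimensional toy example (not ElVis code) of the error source the paper targets: evaluating a high-order modal expansion directly differs from linearly interpolating between element endpoint values. The coefficients and sample locations are invented.

      import numpy as np
      from numpy.polynomial import legendre

      coeffs = [0.0, 0.2, 0.7, 0.4]                 # a cubic modal expansion on [-1, 1]
      xs = np.linspace(-1.0, 1.0, 9)                # "pixel" sample locations

      exact = legendre.legval(xs, coeffs)           # evaluate the high-order basis directly
      ends = legendre.legval(np.array([-1.0, 1.0]), coeffs)
      linear = np.interp(xs, [-1.0, 1.0], ends)     # what a linear proxy would show

      print(np.max(np.abs(exact - linear)))         # interpolation error, not a real feature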

  11. Scalable Robust Principal Component Analysis using Grassmann Averages.

    PubMed

    Hauberg, Soren; Feragen, Aasa; Enficiaud, Raffi; Black, Michael

    2015-12-23

    In large datasets, manual data verification is impossible, and we must expect the number of outliers to increase with data size. While principal component analysis (PCA) can reduce data size, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results. Unfortunately, state-of-the-art approaches for robust PCA are not scalable. We note that in a zero-mean dataset, each observation spans a one-dimensional subspace, giving a point on the Grassmann manifold. We show that the average subspace corresponds to the leading principal component for Gaussian data. We provide a simple algorithm for computing this Grassmann Average (GA), and show that the subspace estimate is less sensitive to outliers than PCA for general distributions. Because averages can be efficiently computed, we immediately gain scalability. We exploit robust averaging to formulate the Robust Grassmann Average (RGA) as a form of robust PCA. The resulting Trimmed Grassmann Average (TGA) is appropriate for computer vision because it is robust to pixel outliers. The algorithm has linear computational complexity and minimal memory requirements. We demonstrate TGA for background modeling, video restoration, and shadow removal. We show scalability by performing robust PCA on the entire Star Wars IV movie; a task beyond any current method. Source code is available online.

  12. Scalable Robust Principal Component Analysis Using Grassmann Averages.

    PubMed

    Hauberg, Søren; Feragen, Aasa; Enficiaud, Raffi; Black, Michael J

    2016-11-01

    In large datasets, manual data verification is impossible, and we must expect the number of outliers to increase with data size. While principal component analysis (PCA) can reduce data size, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results. Unfortunately, state-of-the-art approaches for robust PCA are not scalable. We note that in a zero-mean dataset, each observation spans a one-dimensional subspace, giving a point on the Grassmann manifold. We show that the average subspace corresponds to the leading principal component for Gaussian data. We provide a simple algorithm for computing this Grassmann Average (GA), and show that the subspace estimate is less sensitive to outliers than PCA for general distributions. Because averages can be efficiently computed, we immediately gain scalability. We exploit robust averaging to formulate the Robust Grassmann Average (RGA) as a form of robust PCA. The resulting Trimmed Grassmann Average (TGA) is appropriate for computer vision because it is robust to pixel outliers. The algorithm has linear computational complexity and minimal memory requirements. We demonstrate TGA for background modeling, video restoration, and shadow removal. We show scalability by performing robust PCA on the entire Star Wars IV movie; a task beyond any current method. Source code is available online.
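
    A compact numpy sketch of the basic (untrimmed) Grassmann Average described above follows; it is an illustrative reimplementation under the stated definitions, not the authors' released source code, and the test data are random.

      import numpy as np

      def grassmann_average(X, iters=100, seed=0):
          """Leading subspace of zero-mean data X (n_samples x n_features)."""
          rng = np.random.default_rng(seed)
          w = np.linalg.norm(X, axis=1)                 # observation weights
          U = X / np.maximum(w[:, None], 1e-12)         # unit observations (subspace representatives)
          q = rng.standard_normal(X.shape[1])
          q /= np.linalg.norm(q)
          for _ in range(iters):
              signs = np.sign(U @ q)
              signs[signs == 0] = 1.0
              q_new = (signs * w) @ U                   # sign-aligned weighted average
              q_new /= np.linalg.norm(q_new)
              converged = abs(q_new @ q) > 1.0 - 1e-12
              q = q_new
              if converged:
                  break
          return q

      X = np.random.default_rng(1).standard_normal((500, 10))
      X -= X.mean(axis=0)
      print(grassmann_average(X))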

  13. μπ: A Scalable and Transparent System for Simulating MPI Programs

    SciTech Connect

    Perumalla, Kalyan S

    2010-01-01

    μπ is a scalable, transparent system for experimenting with the execution of parallel programs on simulated computing platforms. The level of simulated detail can be varied for application behavior as well as for machine characteristics. Unique features of μπ are repeatability of execution, scalability to millions of simulated (virtual) MPI ranks, scalability to hundreds of thousands of host (real) MPI ranks, portability of the system to a variety of host supercomputing platforms, and the ability to experiment with scientific applications whose source code is available. The set of source-code interfaces supported by μπ is being expanded to support a wider set of applications, and MPI-based scientific computing benchmarks are being ported. In proof-of-concept experiments, μπ has been successfully exercised to spawn and sustain very large-scale executions of an MPI test program given in source-code form. Low slowdowns are observed, due to its use of a purely discrete event style of execution, and due to the scalability and efficiency of the underlying parallel discrete event simulation engine, μsik. In the largest runs, μπ has been executed on up to 216,000 cores of a Cray XT5 supercomputer, successfully simulating over 27 million virtual MPI ranks, each virtual rank containing its own thread context, and all ranks fully synchronized by virtual time.

  14. Scalable complexity-distortion model for fast motion estimation

    NASA Astrophysics Data System (ADS)

    Yi, Xiaoquan; Ling, Nam

    2005-07-01

    The recently established international video coding standard H.264/AVC and the upcoming standard on scalable video coding (SVC) provide part of the solution to the high-compression-ratio and heterogeneity requirements. However, these algorithms have prohibitive complexity for real-time encoding. There is therefore an important challenge to reduce encoding complexity, preferably in a scalable manner. Motion estimation and motion compensation techniques provide significant coding gain but are the most time-intensive parts of an encoder system. They present tremendous research challenges in designing a flexible, rate-distortion optimized, yet computationally efficient encoder, especially for various applications. In this paper, we present a scalable motion estimation framework for complexity-distortion consideration. We propose a new progressive initial search (PIS) method to generate an accurate initial search point, followed by a fast search method, which can greatly benefit from the tighter bounds of the PIS. Such an approach offers not only significant speedup but also optimal distortion performance for a given complexity constraint. We analyze the relationship between computational complexity and distortion (C-D) through a probabilistic distance measure extending from complexity and distortion theory. A configurable complexity quantization parameter (Q) is introduced. Simulation results demonstrate that the proposed scalable complexity-distortion framework enables a video encoder to conveniently adjust its complexity while providing the best possible service.

  15. Estimates of the Sampling Distribution of Scalability Coefficient H

    ERIC Educational Resources Information Center

    Van Onna, Marieke J. H.

    2004-01-01

    Coefficient "H" is used as an index of scalability in nonparametric item response theory (NIRT). It indicates the degree to which a set of items rank orders examinees. Theoretical sampling distributions, however, have only been derived asymptotically and only under restrictive conditions. Bootstrap methods offer an alternative possibility to…

  16. Data Intensive Architecture for Scalable Cyber Analytics

    SciTech Connect

    Olsen, Bryan K.; Johnson, John R.; Critchlow, Terence J.

    2011-11-15

    Cyber analysts are tasked with the identification and mitigation of network exploits and threats. These compromises are difficult to identify due to the characteristics of cyber communication, the volume of traffic, and the duration of possible attack. It is necessary to have analytical tools to help analysts identify anomalies that span seconds, days, and weeks. Unfortunately, providing analytical tools effective access to the volumes of underlying data requires novel architectures, which is often overlooked in operational deployments. Our work is focused on a summary record of communication, called a flow. Flow records are intended to summarize a communication session between a source and a destination, providing a level of aggregation from the base data. Despite this aggregation, many enterprise network perimeter sensors store millions of network flow records per day. The volume of data makes analytics difficult, requiring the development of new techniques to efficiently identify temporal patterns and potential threats. The massive volume makes analytics difficult, but there are other characteristics in the data which compound the problem. Within the billions of records of communication that transact, there are millions of distinct IP addresses involved. Characterizing patterns of entity behavior is very difficult with the vast number of entities that exist in the data. Research has struggled to validate a model for typical network behavior with hopes it will enable the identification of atypical behavior. Complicating matters more, typically analysts are only able to visualize and interact with fractions of data and have the potential to miss long term trends and behaviors. Our analysis approach focuses on aggregate views and visualization techniques to enable flexible and efficient data exploration as well as the capability to view trends over long periods of time. Realizing that interactively exploring summary data allowed analysts to effectively identify

  17. Visual signatures in video visualization.

    PubMed

    Chen, Min; Botchen, Ralf P; Hashim, Rudy R; Weiskopf, Daniel; Ertl, Thomas; Thornton, Ian M

    2006-01-01

    Video visualization is a computation process that extracts meaningful information from original video data sets and conveys the extracted information to users in appropriate visual representations. This paper presents a broad treatment of the subject, following a typical research pipeline involving concept formulation, system development, a path-finding user study, and a field trial with real application data. In particular, we have conducted a fundamental study on the visualization of motion events in videos. We have, for the first time, deployed flow visualization techniques in video visualization. We have compared the effectiveness of different abstract visual representations of videos. We have conducted a user study to examine whether users are able to learn to recognize visual signatures of motions, and to assist in the evaluation of different visualization techniques. We have applied our understanding and the developed techniques to a set of application video clips. Our study has demonstrated that video visualization is both technically feasible and cost-effective. It has provided the first set of evidence confirming that ordinary users can be accustomed to the visual features depicted in video visualizations, and can learn to recognize visual signatures of a variety of motion events.

  18. Visual Imagery without Visual Perception?

    ERIC Educational Resources Information Center

    Bertolo, Helder

    2005-01-01

    The question regarding visual imagery and visual perception remains an open issue. Many studies have tried to understand whether the two processes share the same mechanisms or are independent, using different neural substrates. Most research has been directed towards the need for activation of primary visual areas during imagery. Here we review…

  19. Oceanotron, Scalable Server for Marine Observations

    NASA Astrophysics Data System (ADS)

    Loubrieu, T.; Bregent, S.; Blower, J. D.; Griffiths, G.

    2013-12-01

    Ifremer, the French marine institute, is deeply involved in data management for different ocean in-situ observation programs (ARGO, OceanSites, GOSUD, ...) and other European programs aiming at networking ocean in-situ observation data repositories (myOcean, seaDataNet, Emodnet). To capitalize on the effort of implementing advanced data dissemination services (visualization, download with subsetting) for these programs and, more generally, for water-column observation repositories, Ifremer decided in 2010 to develop the oceanotron server. Knowing the diversity of data repository formats (RDBMS, netCDF, ODV, ...) and the temperamental nature of the standard interoperability interface profiles (OGC/WMS, OGC/WFS, OGC/SOS, OpeNDAP, ...), the server is designed to manage plugins: StorageUnits, which read specific data repository formats (netCDF/OceanSites, RDBMS schema, ODV binary format), and FrontDesks, which receive external requests and return results for interoperable protocols (OGC/WMS, OGC/SOS, OpenDAP). In between, a third type of plugin may be inserted: TransformationUnits, which enable ocean-business-related transformation of the features (for example, conversion of vertical coordinates from pressure in dB to meters under the sea surface). The server is released under an open-source license so that partners can develop their own plugins. Within the MyOcean project, the University of Reading has plugged in a WMS implementation as an oceanotron frontdesk. The modules are connected together by sharing the same information model for marine observations (or sampling features: vertical profiles, point series and trajectories), dataset metadata and queries. The shared information model is based on the OGC/Observation & Measurement and Unidata/Common Data Model initiatives. The model is implemented in Java (http://www.ifremer.fr/isi/oceanotron/javadoc/). This inner interoperability level makes it possible to capitalize on ocean business expertise in software development without being indentured to

  20. Scalable and Adaptive Streaming of 3D Mesh to Heterogeneous Devices

    NASA Astrophysics Data System (ADS)

    Abderrahim, Zeineb; Bouhlel, Mohamed Salim

    2016-12-01

    This article presents a web platform for the diffusion and visualization of compressed 3D data on the web. The major goal of this work is to adapt the transfer of three-dimensional data to the available resources (network bandwidth, the type of visualization terminal, display resolution, user preferences, ...) and to provide effective consultation adapted to the user's request (preferences, levels of requested detail, etc.). The platform can adapt the levels of detail to changes in bandwidth and to the rendering time measured when loading the mesh on the client. In addition, the levels of detail are adapted to the distance between the object and the camera. These features minimize latency and make real-time interaction possible. The experiments, as well as the comparison with existing solutions, show promising results in terms of latency, scalability and the quality of the experience offered to users.
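
    A toy sketch of the adaptation rule described above: choose a mesh level of detail from the measured bandwidth and the camera-to-object distance. The level sizes, thresholds, and latency budget are invented for illustration.

      LEVEL_BYTES = [50_000, 400_000, 3_000_000, 20_000_000]   # coarse .. fine mesh levels

      def choose_level(bandwidth_bps, distance, budget_s=1.0):
          # finest level whose transfer fits within the latency budget ...
          affordable = [i for i, b in enumerate(LEVEL_BYTES) if 8 * b / bandwidth_bps <= budget_s]
          level = max(affordable) if affordable else 0
          # ... then coarsen for objects far from the camera
          if distance > 100.0:
              level = min(level, 1)
          return level

      print(choose_level(bandwidth_bps=30_000_000, distance=20.0))    # nearby object, fast link
      print(choose_level(bandwidth_bps=30_000_000, distance=250.0))   # distant object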

  1. Scalable quantum memory in the ultrastrong coupling regime.

    PubMed

    Kyaw, T H; Felicetti, S; Romero, G; Solano, E; Kwek, L-C

    2015-03-02

    Circuit quantum electrodynamics, consisting of superconducting artificial atoms coupled to on-chip resonators, represents a prime candidate to implement the scalable quantum computing architecture because of the presence of good tunability and controllability. Furthermore, recent advances have pushed the technology towards the ultrastrong coupling regime of light-matter interaction, where the qubit-resonator coupling strength reaches a considerable fraction of the resonator frequency. Here, we propose a qubit-resonator system operating in that regime, as a quantum memory device and study the storage and retrieval of quantum information in and from the Z2 parity-protected quantum memory, within experimentally feasible schemes. We are also convinced that our proposal might pave a way to realize a scalable quantum random-access memory due to its fast storage and readout performances.

  2. Scalable parallel distance field construction for large-scale applications

    SciTech Connect

    Yu, Hongfeng; Xie, Jinrong; Ma, Kwan -Liu; Kolla, Hemanth; Chen, Jacqueline H.

    2015-10-01

    Computing distance fields is fundamental to many scientific and engineering applications. Distance fields can be used to direct analysis and reduce data. In this paper, we present a highly scalable method for computing 3D distance fields on massively parallel distributed-memory machines. A new distributed spatial data structure, named parallel distance tree, is introduced to manage the level sets of data and facilitate surface tracking over time, resulting in significantly reduced computation and communication costs for calculating the distance to the surface of interest from any spatial location. Our method supports several data types and distance metrics from real-world applications. We demonstrate its efficiency and scalability on state-of-the-art supercomputers using both large-scale volume datasets and surface models. We also demonstrate in-situ distance field computation on dynamic turbulent flame surfaces for a petascale combustion simulation. In conclusion, our work greatly extends the usability of distance fields for demanding applications.
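
    A single-node sketch (not the paper's parallel distance tree) of computing a signed distance field to an isosurface with a Euclidean distance transform from scipy.ndimage; the scalar field and isovalue are synthetic, and distances are in voxel units.

      import numpy as np
      from scipy.ndimage import distance_transform_edt

      # synthetic scalar field and the isovalue defining the surface of interest
      x, y, z = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
      field = x**2 + y**2 + z**2
      iso = 0.5

      inside = field <= iso
      # signed distance: negative inside the isosurface, positive outside
      dist = distance_transform_edt(~inside) - distance_transform_edt(inside)
      print(dist.shape, dist.min(), dist.max())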

  3. A look at scalable dense linear algebra libraries

    SciTech Connect

    Dongarra, J.J.; van de Geijn, R.; Walker, D.W.

    1992-07-01

    We discuss the essential design features of a library of scalable software for performing dense linear algebra computations on distributed memory concurrent computers. The square block scattered decomposition is proposed as a flexible and general-purpose way of decomposing most, if not all, dense matrix problems. An object-oriented interface to the library permits more portable applications to be written, and is easy to learn and use, since details of the parallel implementation are hidden from the user. Experiments on the Intel Touchstone Delta system with a prototype code that uses the square block scattered decomposition to perform LU factorization are presented and analyzed. It was found that the code was both scalable and efficient, performing at about 14 Gflop/s (double precision) for the largest problem considered.
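
    The bookkeeping behind a square block-scattered (2D block-cyclic) decomposition can be sketched in a few lines; the block size and process grid below are illustrative, not the paper's configuration.

      def owner(i, j, nb=64, pr=4, pc=4):
          """Process grid coordinates owning global entry (i, j) for block size nb."""
          return ((i // nb) % pr, (j // nb) % pc)

      def local_index(i, nb=64, p=4):
          """Local row (or column) index of global index i on its owning process."""
          block, offset = divmod(i, nb)
          return (block // p) * nb + offset

      print(owner(1000, 130))          # which process holds A[1000, 130]
      print(local_index(1000))         # its local row index on that process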

  4. Scalable Synthesis of Cortistatin A and Related Structures

    PubMed Central

    Shi, Jun; Manolikakes, Georg; Yeh, Chien-Hung; Guerrero, Carlos A.; Shenvi, Ryan A.; Shigehisa, Hiroki

    2011-01-01

    Full details are provided for an improved synthesis of cortistatin A and related structures, as well as the underlying logic and evolution of the strategy. The highly functionalized cortistatin A-ring embedded with a key heteroadamantane was synthesized by a simple and scalable 5-step sequence. A chemoselective, tandem geminal dihalogenation of an unactivated methyl group, a reductive fragmentation/trapping/elimination of a bromocyclopropane, and a facile chemoselective etherification reaction afforded the cortistatin A core, dubbed “cortistatinone”. A selective Δ16-alkene reduction with Raney Ni provided cortistatin A. With this scalable and practical route, copious quantities of cortistatinone, of Δ16-cortistatin A (the equipotent direct precursor to cortistatin A), and of related analogs were prepared for further biological studies. PMID:21539314

  5. Scalable fabrication of triboelectric nanogenerators for commercial applications

    NASA Astrophysics Data System (ADS)

    Dhakar, Lokesh; Shan, Xuechuan; Wang, Zhiping; Yang, Bin; Eng Hock Tay, Francis; Heng, Chun-Huat; Lee, Chengkuo

    2015-12-01

    Harvesting mechanical energy from irregular sources is a potential way to charge batteries for devices and sensor nodes. Triboelectric effect has been extensively utilized in energy harvesting devices as a method to convert mechanical energy into electrical energy. As triboelectric nanogenerators have immense potential to be commercialized, it is important to develop scalable fabrication methods to manufacture these devices. This paper presents scalable fabrication steps to realize large scale triboelectric nanogenerators. Roll-to-roll UV embossing and lamination techniques are used to fabricate different components of large scale triboelectric nanogenerators. The device generated a peak-to-peak voltage and current of 486 V and 21.2 μA, respectively at a frequency of 5 Hz.

  6. Scalable digital hardware for a trapped ion quantum computer

    NASA Astrophysics Data System (ADS)

    Mount, Emily; Gaultney, Daniel; Vrijsen, Geert; Adams, Michael; Baek, So-Young; Hudek, Kai; Isabella, Louis; Crain, Stephen; van Rynbach, Andre; Maunz, Peter; Kim, Jungsang

    2016-12-01

    Many of the challenges of scaling quantum computer hardware lie at the interface between the qubits and the classical control signals used to manipulate them. Modular ion trap quantum computer architectures address scalability by constructing individual quantum processors interconnected via a network of quantum communication channels. Successful operation of such quantum hardware requires a fully programmable classical control system capable of frequency stabilizing the continuous wave lasers necessary for loading, cooling, initialization, and detection of the ion qubits, stabilizing the optical frequency combs used to drive logic gate operations on the ion qubits, providing a large number of analog voltage sources to drive the trap electrodes, and a scheme for maintaining phase coherence among all the controllers that manipulate the qubits. In this work, we describe scalable solutions to these hardware development challenges.

  7. Scalable Quantum Photonics with Single Color Centers in Silicon Carbide.

    PubMed

    Radulaski, Marina; Widmann, Matthias; Niethammer, Matthias; Zhang, Jingyuan Linda; Lee, Sang-Yun; Rendler, Torsten; Lagoudakis, Konstantinos G; Son, Nguyen Tien; Janzén, Erik; Ohshima, Takeshi; Wrachtrup, Jörg; Vučković, Jelena

    2017-02-24

    Silicon carbide is a promising platform for single photon sources, quantum bits (qubits), and nanoscale sensors based on individual color centers. Toward this goal, we develop a scalable array of nanopillars incorporating single silicon vacancy centers in 4H-SiC, readily available for efficient interfacing with free-space objectives and lensed fibers. A commercially obtained substrate is irradiated with 2 MeV electron beams to create vacancies. A subsequent lithographic process forms 800 nm tall nanopillars with 400-1400 nm diameters. We obtain high collection efficiency, with optical saturation count rates of up to 22 kcounts/s from a single silicon vacancy center, while preserving the single photon emission and the optically induced electron-spin polarization properties. Our study demonstrates silicon carbide as a readily available platform for a scalable quantum photonics architecture relying on single photon sources and qubits.

  8. Thermally assisted MRAMs: ultimate scalability and logic functionalities

    NASA Astrophysics Data System (ADS)

    Prejbeanu, I. L.; Bandiera, S.; Alvarez-Hérault, J.; Sousa, R. C.; Dieny, B.; Nozières, J.-P.

    2013-02-01

    This paper is focused on thermally assisted magnetic random access memories (TA-MRAMs). It explains how the heating produced by Joule dissipation around the tunnel barrier of magnetic tunnel junctions (MTJs) can be used advantageously to assist writing in MRAMs. The main idea is to apply a heating pulse to the junction simultaneously with a magnetic field (field-induced thermally assisted (TA) switching). Since the heating current also provides a spin-transfer torque (current-induced TA switching), the magnetic field lines can be removed to increase the storage density of TA-MRAMs. Ultimately, thermally induced anisotropy reorientation (TIAR)-assisted spin-transfer torque switching can be used in MTJs with perpendicular magnetic anisotropy to obtain ultimate downsize scalability with reduced power consumption. TA writing allows extending the downsize scalability of MRAMs as it does in hard disk drive technology, but it also allows introducing new functionalities particularly useful for security applications (Match-in-Place™ technology).

  9. Scalable quantum memory in the ultrastrong coupling regime

    PubMed Central

    Kyaw, T. H.; Felicetti, S.; Romero, G.; Solano, E.; Kwek, L.-C.

    2015-01-01

    Circuit quantum electrodynamics, consisting of superconducting artificial atoms coupled to on-chip resonators, represents a prime candidate to implement the scalable quantum computing architecture because of the presence of good tunability and controllability. Furthermore, recent advances have pushed the technology towards the ultrastrong coupling regime of light-matter interaction, where the qubit-resonator coupling strength reaches a considerable fraction of the resonator frequency. Here, we propose a qubit-resonator system operating in that regime, as a quantum memory device and study the storage and retrieval of quantum information in and from the Z2 parity-protected quantum memory, within experimentally feasible schemes. We are also convinced that our proposal might pave a way to realize a scalable quantum random-access memory due to its fast storage and readout performances. PMID:25727251

  10. Development of Scalable Culture Systems for Human Embryonic Stem Cells

    PubMed Central

    Azarin, Samira M.; Palecek, Sean P.

    2009-01-01

    The use of human pluripotent stem cells, including embryonic and induced pluripotent stem cells, in therapeutic applications will require the development of robust, scalable culture technologies for undifferentiated cells. Advances made in large-scale cultures of other mammalian cells will facilitate expansion of undifferentiated human embryonic stem cells (hESCs), but challenges specific to hESCs will also have to be addressed, including development of defined, humanized culture media and substrates, monitoring spontaneous differentiation and heterogeneity in the cultures, and maintaining karyotypic integrity in the cells. This review will describe our current understanding of environmental factors that regulate hESC self-renewal and efforts to provide these cues in various scalable bioreactor culture systems. PMID:20161686

  11. Scalable, full-colour and controllable chromotropic plasmonic printing

    PubMed Central

    Xue, Jiancai; Zhou, Zhang-Kai; Wei, Zhiqiang; Su, Rongbin; Lai, Juan; Li, Juntao; Li, Chao; Zhang, Tengwei; Wang, Xue-Hua

    2015-01-01

    Plasmonic colour printing has drawn wide attention as a promising candidate for the next-generation colour-printing technology. However, an efficient approach to realize full colour and scalable fabrication is still lacking, which prevents plasmonic colour printing from practical applications. Here we present a scalable and full-colour plasmonic printing approach by combining conjugate twin-phase modulation with a plasmonic broadband absorber. More importantly, our approach also demonstrates controllable chromotropic capability, that is, the ability of reversible colour transformations. This chromotropic capability affords enormous potentials in building functionalized prints for anticounterfeiting, special label, and high-density data encryption storage. With such excellent performances in functional colour applications, this colour-printing approach could pave the way for plasmonic colour printing in real-world commercial utilization. PMID:26567803

  12. Scalable cluster administration - Chiba City I approach and lessons learned.

    SciTech Connect

    Navarro, J. P.; Evard, R.; Nurmi, D.; Desai, N.

    2002-07-01

    Systems administrators of large clusters often need to perform the same administrative activity hundreds or thousands of times. Often such activities are time-consuming, especially the tasks of installing and maintaining software. By combining network services such as DHCP, TFTP, FTP, HTTP, and NFS with remote hardware control, cluster administrators can automate all administrative tasks. Scalable cluster administration addresses the following challenge: What systems design techniques can cluster builders use to automate cluster administration on very large clusters? We describe the approach used in the Mathematics and Computer Science Division of Argonne National Laboratory on Chiba City I, a 314-node Linux cluster; and we analyze the scalability, flexibility, and reliability benefits and limitations from that approach.

  13. Scalable graphene coatings for enhanced condensation heat transfer.

    PubMed

    Preston, Daniel J; Mafra, Daniela L; Miljkovic, Nenad; Kong, Jing; Wang, Evelyn N

    2015-05-13

    Water vapor condensation is commonly observed in nature and routinely used as an effective means of transferring heat with dropwise condensation on nonwetting surfaces exhibiting heat transfer improvement compared to filmwise condensation on wetting surfaces. However, state-of-the-art techniques to promote dropwise condensation rely on functional hydrophobic coatings that either have challenges with chemical stability or are so thick that any potential heat transfer improvement is negated due to the added thermal resistance of the coating. In this work, we show the effectiveness of ultrathin scalable chemical vapor deposited (CVD) graphene coatings to promote dropwise condensation while offering robust chemical stability and maintaining low thermal resistance. Heat transfer enhancements of 4× were demonstrated compared to filmwise condensation, and the robustness of these CVD coatings was superior to typical hydrophobic monolayer coatings. Our results indicate that graphene is a promising surface coating to promote dropwise condensation of water in industrial conditions with the potential for scalable application via CVD.

  14. Mathematical Visualization

    ERIC Educational Resources Information Center

    Rogness, Jonathan

    2011-01-01

    Advances in computer graphics have provided mathematicians with the ability to create stunning visualizations, both to gain insight and to help demonstrate the beauty of mathematics to others. As educators these tools can be particularly important as we search for ways to work with students raised with constant visual stimulation, from video games…

  15. Visual Knowledge.

    ERIC Educational Resources Information Center

    Chipman, Susan F.

    Visual knowledge is an enormously important part of our total knowledge. The psychological study of learning and knowledge has focused almost exclusively on verbal materials. Today, the advance of technology is making the use of visual communication increasingly feasible and popular. However, this enthusiasm involves the illusion that visual…

  16. Visual Theorems.

    ERIC Educational Resources Information Center

    Davis, Philip J.

    1993-01-01

    Argues for a mathematics education that interprets the word "theorem" in a sense that is wide enough to include the visual aspects of mathematical intuition and reasoning. Defines the term "visual theorems" and illustrates the concept using the Marigold of Theodorus. (Author/MDH)

  17. Visual Thinking.

    ERIC Educational Resources Information Center

    Arnheim, Rudolf

    Based on the more general principle that all thinking (including reasoning) is basically perceptual in nature, the author proposes that visual perception is not a passive recording of stimulus material but an active concern of the mind. He delineates the task of visually distinguishing changes in size, shape, and position and points out the…

  18. Scalable Real Time Data Management for Smart Grid

    SciTech Connect

    Yin, Jian; Kulkarni, Anand V.; Purohit, Sumit; Gorton, Ian; Akyol, Bora A.

    2011-12-16

    This paper presents GridMW, a scalable and reliable data middleware for smart grids. Smart grids promise to improve the efficiency of power grid systems and reduce greenhouse gas emissions through incorporating power generation from renewable sources and shaping demand to match supply. As a result, power grid systems will become much more dynamic and require constant adjustments, which in turn require analysis and decision-making applications to improve the efficiency and reliability of smart grid systems.

  19. Scalable Power-Component Models for Concept Testing

    DTIC Science & Technology

    2011-08-16

    Outline: Motivation and Scope; Integrated Starter Generator Model; Battery Model ... and systems engineering. Scope: scalable, generic MATLAB/Simulink models in three areas, including electromechanical machines (Integrated Starter Generator). Example components: diesel engines of 150-1000 hp, a 24 Vdc alternator, a bi-directional 150 kW DC-DC converter, a 400 kW AC-to-DC converter, energy storage, and power conversion.

  20. Efficient Byzantine Fault Tolerance for Scalable Storage and Services

    DTIC Science & Technology

    2009-07-01

    Figure 5.5.6: Throughput (kOps/sec) vs. client processes when f = 1, comparing No Redundancy, Zzyzx-noPQ, Zzyzx, Zyzzyva (B=10), and Zyzzyva (B=1). Section 3.4.4: Linearizability and Immediate Recovery. ...need only the minimal number of responsive servers to ensure high throughput, provide single roundtrip latency, and provide scalability through

  1. Space Situational Awareness Data Processing Scalability Utilizing Google Cloud Services

    NASA Astrophysics Data System (ADS)

    Greenly, D.; Duncan, M.; Wysack, J.; Flores, F.

    Space Situational Awareness (SSA) is a fundamental and critical component of current space operations. The term SSA encompasses the awareness, understanding and predictability of all objects in space. As the population of orbital space objects and debris increases, the number of collision avoidance maneuvers grows and prompts the need for accurate and timely process measures. The SSA mission continually evolves toward near real-time assessment and analysis, demanding higher processing capabilities. By conventional methods, meeting these demands requires the integration of new hardware to keep pace with the growing complexity of maneuver planning algorithms. SpaceNav has implemented a highly scalable architecture that tracks satellites and debris by utilizing powerful virtual machines on the Google Cloud Platform. SpaceNav algorithms for processing CDMs outpace conventional means. A robust processing environment for tracking data, collision avoidance maneuvers and various other aspects of SSA can be created and deleted on demand. The migration of SpaceNav tools and algorithms into the Google Cloud Platform will be discussed, along with the trials and tribulations involved. Information will be shared on how and why certain cloud products were used, as well as the integration techniques that were implemented. Key items to be presented are: 1. Scientific algorithms and SpaceNav tools integrated into a scalable architecture: a) Maneuver Planning, b) Parallel Processing, c) Monte Carlo Simulations, d) Optimization Algorithms, e) SW Application Development/Integration into the Google Cloud Platform. 2. Compute Engine Processing: a) Application Engine Automated Processing, b) Performance Testing and Performance Scalability, c) Cloud MySQL Databases and Database Scalability, d) Cloud Data Storage, e) Redundancy and Availability.
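
    As a generic illustration of the embarrassingly parallel Monte Carlo workloads that map well onto on-demand virtual machines, the sketch below estimates a collision probability from sampled miss distances; the geometry, threshold, and probability model are invented and unrelated to SpaceNav's algorithms.

      import numpy as np
      from multiprocessing import Pool

      def miss_distance_trials(args):
          n, seed = args
          rng = np.random.default_rng(seed)
          # sample relative position error (km) around a nominal 1 km miss distance
          miss = np.linalg.norm(rng.normal([1.0, 0.0, 0.0], 0.3, size=(n, 3)), axis=1)
          return np.count_nonzero(miss < 0.2)        # "collision" threshold of 200 m

      if __name__ == "__main__":
          batches = [(250_000, s) for s in range(8)]
          with Pool(4) as pool:                      # stand-in for fanning out across VMs
              hits = sum(pool.map(miss_distance_trials, batches))
          print("estimated collision probability:", hits / 2_000_000)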

  2. Scalable Deployment of Advanced Building Energy Management Systems

    DTIC Science & Technology

    2013-05-01

    January 2011, respectively. These savings were smaller compared with savings opportunities in the cooling season because of the cold weather during the... Final report: Scalable Deployment of Advanced Building Energy Management Systems, ESTCP Project EW-201015, May 2013, Veronica Adetola...

  3. Scalable Advanced Network Services Based on Coordinated Active Components

    DTIC Science & Technology

    2004-02-01

    as a means of customizing both high functionality and scalable communication components to meet the needs of specific services. A service... considering both the service quality for the user and the efficient use of the infrastructure (cost). (4) Finally, the synthesizer needs to configure the...

  4. Performance and Scalability of the NAS Parallel Benchmarks in Java

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael A.; Schultz, Matthew; Jin, Haoqiang; Yan, Jerry; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    Several features make Java an attractive choice for scientific applications. In order to gauge the applicability of Java to Computational Fluid Dynamics (CFD), we have implemented the NAS (NASA Advanced Supercomputing) Parallel Benchmarks in Java. The performance and scalability of the benchmarks point out the areas where improvement in Java compiler technology and in Java thread implementation would position Java closer to Fortran in the competition for scientific applications.

  5. Scalable Anonymous Group Communication in the Anytrust Model

    DTIC Science & Technology

    2012-04-10

    nets messaging phase was high and not a significant improvement over the shuffle alone. Herbivore [31] makes low latency guarantees (100s of...practical anonymity systems such as Tor [16] or Herbivore [31], where a small number of "wrong" choices (e.g., the choice of entry and exit relay in Tor) can...of-service attacks makes them largely impractical. Herbivore [31] attempts to make DC-nets more scalable, but it provides unconditional anonymity only

  6. Scalable Solutions for Interactive Virtual Humans that can Manipulate Objects

    DTIC Science & Technology

    2005-01-01

    A scalable approach is therefore sought for addressing such different requirements in a unified framework. Related Work: Only few animation frameworks... animation of human grasping using forward and inverse kinematics. Computer & Graphics 23:145–154. Baerlocher, P., and Boulic, R. 1998. Task-priority formulations for the kinematic control of highly redundant articulated structures. In Proceedings of IEEE IROS'98, 323–329. Baerlocher, P. 2001

  7. Economical and scalable synthesis of 6-amino-2-cyanobenzothiazole

    PubMed Central

    Hauser, Jacob R; Beard, Hester A; Bayana, Mary E; Jolley, Katherine E; Warriner, Stuart L

    2016-01-01

    Summary: 2-Cyanobenzothiazoles (CBTs) are useful building blocks for: 1) luciferin derivatives for bioluminescent imaging; and 2) handles for bioorthogonal ligations. A particularly versatile CBT is 6-amino-2-cyanobenzothiazole (ACBT), which has an amine handle for straightforward derivatisation. Here we present an economical and scalable synthesis of ACBT based on a cyanation catalysed by 1,4-diazabicyclo[2.2.2]octane (DABCO), and discuss its advantages for scale-up over previously reported routes. PMID:27829906

  8. Amira: Multi-Dimensional Scientific Visualization for the GeoSciences in the 21st Century

    NASA Astrophysics Data System (ADS)

    Bartsch, H.; Erlebacher, G.

    2003-12-01

    amira (www.amiravis.com) is a general purpose framework for 3D scientific visualization that meets the needs of the non-programmer, the script writer, and the advanced programmer alike. Provided modules may be visually assembled in an interactive manner to create complex visual displays. These modules and their associated user interfaces are controlled either through a mouse, or via an interactive scripting mechanism based on Tcl. We provide interactive demonstrations of the various features of Amira and explain how these may be used to enhance the comprehension of datasets in use in the Earth Sciences community. Its features will be illustrated on scalar and vector fields on grid types ranging from Cartesian to fully unstructured. Specialized extension modules developed by some of our collaborators will be illustrated [1]. These include a module to automatically choose values for salient isosurface identification and extraction, and color maps suitable for volume rendering. During the session, we will present several demonstrations of remote networking, processing of very large spatio-temporal datasets, and various other projects that are underway. In particular, we will demonstrate WEB-IS, a java-applet interface to Amira that allows script editing via the web, and selected data analysis [2]. [1] G. Erlebacher, D. A. Yuen, F. Dubuffet, "Case Study: Visualization and Analysis of High Rayleigh Number -- 3D Convection in the Earth's Mantle", Proceedings of Visualization 2002, pp. 529--532. [2] Y. Wang, G. Erlebacher, Z. A. Garbow, D. A. Yuen, "Web-Based Service of a Visualization Package 'amira' for the Geosciences", Visual Geosciences, 2003.

  9. A Systems Approach to Scalable Transportation Network Modeling

    SciTech Connect

    Perumalla, Kalyan S

    2006-01-01

    Emerging needs in transportation network modeling and simulation are raising new challenges with respect to scalability of network size and vehicular traffic intensity, speed of simulation for simulation-based optimization, and fidelity of vehicular behavior for accurate capture of event phenomena. Parallel execution is warranted to sustain the required detail, size and speed. However, few parallel simulators exist for such applications, partly due to the challenges underlying their development. Moreover, many simulators are based on time-stepped models, which can be computationally inefficient for the purposes of modeling evacuation traffic. Here an approach is presented to designing a simulator with memory and speed efficiency as the goals from the outset, and, specifically, scalability via parallel execution. The design makes use of discrete event modeling techniques as well as parallel simulation methods. Our simulator, called SCATTER, is being developed, incorporating such design considerations. Preliminary performance results are presented on benchmark road networks, showing scalability to one million vehicles simulated on one processor.

  10. A scalable micro-mixer for biomedical applications

    NASA Astrophysics Data System (ADS)

    Cortelezzi, Luca; Ferrari, Simone; Dubini, Angelo

    2016-11-01

    Our study presents a geometrically scalable active micro-mixer suitable for biomedical/bioengineering applications and potentially assimilable in a Lab-on-Chip. We designed our micro-mixer with the goal of satisfying the following constraints: small dimensions, because the device must be able to process volumes of fluid in the range of 10⁻⁶ to 10⁻⁹ liters; high mixing speed, because mixing should be obtained in the shortest possible time; constructive simplicity, to facilitate realizability, assimilability and reusability of the micro-mixer; and geometrical scalability, because the micro-mixer should be assimilable to microfluidic systems of different dimensions. We studied numerically the mixing performance of our micro-mixer in both two and three dimensions. We characterize the mixing performance in terms of Reynolds, Strouhal and Péclet numbers in order to establish a practical range of operating conditions for our micro-mixer. Finally, we show that our micro-mixer is geometrically scalable, i.e., micro-mixers of different geometrical dimensions having the same nondimensional specifications produce nearly the same mixing performance.
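
    For reference, the nondimensional groups named above have their conventional definitions; the relations below are standard fluid-mechanics definitions (not quoted from the record itself), with U a characteristic velocity, L a characteristic length, f the actuation frequency, ν the kinematic viscosity, and D the species diffusivity.

```latex
% Conventional definitions of the mixing-relevant nondimensional groups
% (standard definitions, not taken from the paper itself).
\mathrm{Re} = \frac{U L}{\nu}, \qquad
\mathrm{St} = \frac{f L}{U}, \qquad
\mathrm{Pe} = \frac{U L}{D}
```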

  11. Event metadata records as a testbed for scalable data mining

    NASA Astrophysics Data System (ADS)

    van Gemmeren, P.; Malon, D.

    2010-04-01

    At a data rate of 200 hertz, event metadata records ("TAGs," in ATLAS parlance) provide fertile grounds for development and evaluation of tools for scalable data mining. It is easy, of course, to apply HEP-specific selection or classification rules to event records and to label such an exercise "data mining," but our interest is different. Advanced statistical methods and tools such as classification, association rule mining, and cluster analysis are common outside the high energy physics community. These tools can prove useful, not for discovery physics, but for learning about our data, our detector, and our software. A fixed and relatively simple schema makes TAG export to other storage technologies such as HDF5 straightforward. This simplifies the task of exploiting very-large-scale parallel platforms such as Argonne National Laboratory's BlueGene/P, currently the largest supercomputer in the world for open science, in the development of scalable tools for data mining. Using a domain-neutral scientific data format may also enable us to take advantage of existing data mining components from other communities. There is, further, a substantial literature on the topic of one-pass algorithms and stream mining techniques, and such tools may be inserted naturally at various points in the event data processing and distribution chain. This paper describes early experience with event metadata records from ATLAS simulation and commissioning as a testbed for scalable data mining tool development and evaluation.
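
    As a sketch of what a fixed, simple schema makes straightforward, the snippet below writes TAG-like event metadata records to HDF5 with h5py; the field names are hypothetical placeholders, not the actual ATLAS TAG schema.

```python
# Minimal sketch: exporting fixed-schema, TAG-like event metadata to HDF5.
# The field names below are illustrative placeholders, not the ATLAS TAG schema.
import numpy as np
import h5py

tag_dtype = np.dtype([
    ("run_number", np.int64),
    ("event_number", np.int64),
    ("n_tracks", np.int32),
    ("missing_et", np.float32),
])

records = np.zeros(1000, dtype=tag_dtype)
records["run_number"] = 167776
records["event_number"] = np.arange(1000)
records["missing_et"] = np.random.default_rng(0).exponential(20.0, 1000)

with h5py.File("tags.h5", "w") as f:
    f.create_dataset("tags", data=records, compression="gzip")
```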

  12. Scalable fault tolerant image communication and storage grid

    NASA Astrophysics Data System (ADS)

    Slik, David; Seiler, Oliver; Altman, Tym; Montour, Mike; Kermani, Mohammad; Proseilo, Walter; Terry, David; Kawahara, Midori; Leckie, Chris; Muir, Dale

    2003-05-01

    Increasing production and use of digital medical imagery are driving new approaches to information storage and management. Traditional, centralized approaches to image communication, storage and archiving are becoming increasingly expensive to scale and operate with high levels of reliability. Multi-site, geographically-distributed deployments connected by limited-bandwidth networks present further scalability, reliability, and availability challenges. A grid storage architecture built from a distributed network of low cost, off-the-shelf servers (nodes) provides scalable data and metadata storage, processing, and communication without single points of failure. Imaging studies are stored, replicated, cached, managed, and retrieved based on defined rules, and nodes within the grid can acquire studies and respond to queries. Grid nodes transparently load-balance queries, storage/retrieval requests, and replicate data for automated backup and disaster recovery. This approach reduces latency, increases availability, provides near-linear scalability and allows the creation of a geographically distributed medical imaging network infrastructure. This paper presents some key concepts in grid storage and discusses the results of a clinical deployment of a multi-site storage grid for cancer care in the province of British Columbia.

  13. Scalability, Timing, and System Design Issues for Intrinsic Evolvable Hardware

    NASA Technical Reports Server (NTRS)

    Hereford, James; Gwaltney, David

    2004-01-01

    In this paper we address several issues pertinent to intrinsic evolvable hardware (EHW). The first issue is scalability; namely, how the design space scales as the programming string for the programmable device gets longer. We develop a model for population size and the number of generations as a function of the programming string length, L, and show that the number of circuit evaluations is an O(L²) process. We compare our model to several successful intrinsic EHW experiments and discuss the many implications of our model. The second issue that we address is the timing of intrinsic EHW experiments. We show that the processing time is a small part of the overall time to derive or evolve a circuit and that major improvements in processor speed alone will have only a minimal impact on improving the scalability of intrinsic EHW. The third issue we consider is the system-level design of intrinsic EHW experiments. We review what other researchers have done to break the scalability barrier and contend that the type of reconfigurable platform and the evolutionary algorithm are tied together and impose limits on each other.
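
    A back-of-the-envelope version of the quadratic scaling claim (an illustrative sketch under an assumed linear growth, not the authors' exact model): if both the population size and the number of generations grow roughly linearly with the programming-string length L, the total number of circuit evaluations grows as L².

```latex
% Illustrative scaling sketch (assumes linear growth; not the paper's exact model).
P(L) \sim c_1 L, \qquad G(L) \sim c_2 L
\quad\Longrightarrow\quad
N_{\mathrm{eval}}(L) = P(L)\,G(L) \sim c_1 c_2 L^{2} = O(L^{2})
```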

  14. The intergroup protocols: Scalable group communication for the internet

    SciTech Connect

    Berket, Karlo

    2000-12-04

    Reliable group ordered delivery of multicast messages in a distributed system is a useful service that simplifies the programming of distributed applications. Such a service helps to maintain the consistency of replicated information and to coordinate the activities of the various processes. With the increasing popularity of the Internet, there is an increasing interest in scaling the protocols that provide this service to the environment of the Internet. The InterGroup protocol suite, described in this dissertation, provides such a service, and is intended for the environment of the Internet with scalability to large numbers of nodes and high latency links. The InterGroup protocols approach the scalability problem from various directions. They redefine the meaning of group membership, allow voluntary membership changes, add a receiver-oriented selection of delivery guarantees that permits heterogeneity of the receiver set, and provide a scalable reliability service. The InterGroup system comprises several components, executing at various sites within the system. Each component provides part of the services necessary to implement a group communication system for the wide-area. The components can be categorized as: (1) control hierarchy, (2) reliable multicast, (3) message distribution and delivery, and (4) process group membership. We have implemented a prototype of the InterGroup protocols in Java, and have tested the system performance in both local-area and wide-area networks.

  15. Design and Implementation of Ceph: A Scalable Distributed File System

    SciTech Connect

    Weil, S A; Brandt, S A; Miller, E L; Long, D E; Maltzahn, C

    2006-04-19

    File system designers continue to look to new architectures to improve scalability. Object-based storage diverges from server-based (e.g. NFS) and SAN-based storage systems by coupling processors and memory with disk drives, delegating low-level allocation to object storage devices (OSDs) and decoupling I/O (read/write) from metadata (file open/close) operations. Even recent object-based systems inherit decades-old architectural choices going back to early UNIX file systems, however, limiting their ability to effectively scale to hundreds of petabytes. We present Ceph, a distributed file system that provides excellent performance and reliability with unprecedented scalability. Ceph maximizes the separation between data and metadata management by replacing allocation tables with a pseudo-random data distribution function (CRUSH) designed for heterogeneous and dynamic clusters of unreliable OSDs. We leverage OSD intelligence to distribute data replication, failure detection and recovery with semi-autonomous OSDs running a specialized local object storage file system (EBOFS). Finally, Ceph is built around a dynamic distributed metadata management cluster that provides extremely efficient metadata management that seamlessly adapts to a wide range of general purpose and scientific computing file system workloads. We present performance measurements under a variety of workloads that show superior I/O performance and scalable metadata management (more than a quarter million metadata ops/sec).

  16. The Flatworld Simulation Control Architecture (FSCA): A Framework for Scalable Immersive Visualization Systems

    DTIC Science & Technology

    2004-12-01

    handling using the X10 home automation protocol. Each 3D graphics client renders its scene according to an assigned virtual camera position. By having...control protocol. DMX is a versatile and robust framework which overcomes limitations of the X10 home automation protocol which we are currently using

  17. SciDAC Institute for Ultra-Scale Visualization: Activity Recognition for Ultra-Scale Visualization

    SciTech Connect

    Silver, Deborah

    2014-04-30

    Understanding the science behind ultra-scale simulations requires extracting meaning from data sets of hundreds of terabytes or more. Developing scalable parallel visualization algorithms is a key step enabling scientists to interact and visualize their data at this scale. However, at extreme scales, the datasets are so huge, there is not even enough time to view the data, let alone explore it with basic visualization methods. Automated tools are necessary for knowledge discovery -- to help sift through the information and isolate characteristic patterns, thereby enabling the scientist to study local interactions, the origin of features and their evolution in large volumes of data. These tools must be able to operate on data of this scale and work with the visualization process. In this project, we developed a framework for activity detection to allow scientists to model and extract spatio-temporal patterns from time-varying data.

  18. Encryption and authentication for scalable multimedia: current state of the art and challenges

    NASA Astrophysics Data System (ADS)

    Zhu, Bin B.; Swanson, Mitchell D.; Li, Shipeng

    2004-10-01

    Scalable coding is a technology that encodes a multimedia signal in a scalable manner where various representations can be extracted from a single codestream to fit a wide range of applications. Many new scalable coders such as JPEG 2000 and MPEG-4 FGS offer fine granularity scalability to provide near continuous optimal tradeoff between quality and rates in a large range. This fine granularity scalability poses great new challenges to the design of encryption and authentication systems for scalable media in Digital Rights Management (DRM) and other applications. It may be desirable or even mandatory to maintain a certain level of scalability in the encrypted or signed codestream so that no decryption or re-signing is needed when legitimate adaptations are applied. In other words, the encryption and authentication should be scalable, i.e., adaptation friendly. Otherwise secrets have to be shared with every intermediate stage along the content delivery system which performs adaptation manipulations. Sharing secrets with many parties would jeopardize the overall security of a system since the security depends on the weakest component of the system. In this paper, we first describe general requirements and desirable features for an encryption or authentication system for scalable media, especially those not encountered in the non-scalable case. Then we present an overview of the current state of the art of technologies in scalable encryption and authentication. These technologies include full and selective encryption schemes that maintain the original or coarser granularity of scalability offered by an unencrypted scalable codestream, layered access control and block level authentication that reduce the fine granularity of scalability to a block level, among others. Finally, we summarize existing challenges and propose future research directions.

  19. Runtime volume visualization for parallel CFD

    NASA Technical Reports Server (NTRS)

    Ma, Kwan-Liu

    1995-01-01

    This paper discusses some aspects of design of a data distributed, massively parallel volume rendering library for runtime visualization of parallel computational fluid dynamics simulations in a message-passing environment. Unlike the traditional scheme in which visualization is a postprocessing step, the rendering is done in place on each node processor. Computational scientists who run large-scale simulations on a massively parallel computer can thus perform interactive monitoring of their simulations. The current library provides an interface to handle volume data on rectilinear grids. The same design principles can be generalized to handle other types of grids. For demonstration, we run a parallel Navier-Stokes solver making use of this rendering library on the Intel Paragon XP/S. The interactive visual response achieved is found to be very useful. Performance studies show that the parallel rendering process is scalable with the size of the simulation as well as with the parallel computer.

  20. Visual cognition

    PubMed Central

    Cavanagh, Patrick

    2011-01-01

    Visual cognition, high-level vision, mid-level vision and top-down processing all refer to decision-based scene analyses that combine prior knowledge with retinal input to generate representations. The label “visual cognition” is little used at present, but research and experiments on mid- and high-level, inference-based vision have flourished, becoming in the 21st century a significant, if often understated part, of current vision research. How does visual cognition work? What are its moving parts? This paper reviews the origins and architecture of visual cognition and briefly describes some work in the areas of routines, attention, surfaces, objects, and events (motion, causality, and agency). Most vision scientists avoid being too explicit when presenting concepts about visual cognition, having learned that explicit models invite easy criticism. What we see in the literature is ample evidence for visual cognition, but few or only cautious attempts to detail how it might work. This is the great unfinished business of vision research: at some point we will be done with characterizing how the visual system measures the world and we will have to return to the question of how vision constructs models of objects, surfaces, scenes, and events. PMID:21329719

  1. Visual Impairment

    MedlinePlus

    ... or head with a baseball or having an automobile or motorcycle accident. Some babies have congenital blindness, ... how well he or she sees at various distances. Visual field test. Ophthalmologists use this test to ...

  2. Rocinante, a virtual collaborative visualizer

    SciTech Connect

    McDonald, M.J.; Ice, L.G.

    1996-12-31

    With the goal of improving the ability of people around the world to share the development and use of intelligent systems, Sandia National Laboratories' Intelligent Systems and Robotics Center is developing new Virtual Collaborative Engineering (VCE) and Virtual Collaborative Control (VCC) technologies. A key area of VCE and VCC research is in shared visualization of virtual environments. This paper describes a Virtual Collaborative Visualizer (VCV), named Rocinante, that Sandia developed for VCE and VCC applications. Rocinante allows multiple participants to simultaneously view dynamic geometrically-defined environments. Each viewer can exclude extraneous detail or include additional information in the scene as desired. Shared information can be saved and later replayed in a stand-alone mode. Rocinante automatically scales visualization requirements with computer system capabilities. Models with 30,000 polygons and 4 Megabytes of texture display at 12 to 15 frames per second (fps) on an SGI Onyx and at 3 to 8 fps (without texture) on Indigo 2 Extreme computers. In its networked mode, Rocinante synchronizes its local geometric model with remote simulators and sensory systems by monitoring data transmitted through UDP packets. Rocinante's scalability and performance make it an ideal VCC tool. Users throughout the country can monitor robot motions and the thinking behind their motion planners and simulators.

  3. Visual cognition

    SciTech Connect

    Pinker, S.

    1985-01-01

    This book consists of essays covering issues in visual cognition presenting experimental techniques from cognitive psychology, methods of modeling cognitive processes on computers from artificial intelligence, and methods of studying brain organization from neuropsychology. Topics considered include: parts of recognition; visual routines; upward direction; mental rotation, and discrimination of left and right turns in maps; individual differences in mental imagery, computational analysis and the neurological basis of mental imagery: componental analysis.

  4. VPLS: an effective technology for building scalable transparent LAN services

    NASA Astrophysics Data System (ADS)

    Dong, Ximing; Yu, Shaohua

    2005-02-01

    Virtual Private LAN Service (VPLS) is generating considerable interest with enterprises and service providers as it offers multipoint transparent LAN service (TLS) over MPLS networks. This paper describes an effective technology - VPLS, which links virtual switch instances (VSIs) through MPLS to form an emulated Ethernet switch and build scalable transparent LAN services. It first focuses on the architecture of VPLS, with Ethernet bridging at the edge and MPLS at the core, then elucidates the data forwarding mechanism within a VPLS domain, including learning and aging MAC addresses on a per-LSP basis, flooding of unknown frames, and replication of unknown, multicast, and broadcast frames. The loop-avoidance mechanism, known as split horizon forwarding, is also analyzed. Another important aspect of the VPLS service, its basic operation, including autodiscovery and signaling, is discussed as well. From the perspective of efficiency and scalability the paper compares two important signaling mechanisms, BGP and LDP, which are used to set up a PW between the PEs and bind the PWs to a particular VSI. With the extension of VPLS and the growth of the full mesh of PWs between PE devices (n*(n-1)/2 PWs in all, i.e., O(n²) growth), a VPLS instance could have a large number of remote PE associations, resulting in an inefficient use of network bandwidth and system resources as the ingress PE has to replicate each frame and append MPLS labels for each remote PE. So the latter part of this paper focuses on the scalability issue: Hierarchical VPLS. Within the architecture of H-VPLS, this paper addresses two ways to cope with a possibly large number of MAC addresses, which make VPLS operate more efficiently.
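
    To make the quadratic growth concrete, the short calculation below compares pseudowire counts for a flat full mesh against a simplified two-tier hierarchy (hub-and-spoke per region plus a core mesh of hubs); this is an illustrative model, not an exact H-VPLS design.

```python
# Illustrative pseudowire counts: flat full-mesh VPLS vs. a simple
# two-tier hierarchy (hub-and-spoke per region plus a core mesh of hubs).
def full_mesh_pws(n_pe):
    return n_pe * (n_pe - 1) // 2

def hierarchical_pws(n_regions, pes_per_region):
    spoke_pws = n_regions * pes_per_region          # each PE to its regional hub
    core_pws = n_regions * (n_regions - 1) // 2     # full mesh among hubs only
    return spoke_pws + core_pws

n_pe = 100
print(full_mesh_pws(n_pe))          # 4950 pseudowires for a flat full mesh
print(hierarchical_pws(10, 10))     # 100 spoke + 45 core = 145 pseudowires
```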

  5. Focal plane array with modular pixel array components for scalability

    SciTech Connect

    Kay, Randolph R; Campbell, David V; Shinde, Subhash L; Rienstra, Jeffrey L; Serkland, Darwin K; Holmes, Michael L

    2014-12-09

    A modular, scalable focal plane array is provided as an array of integrated circuit dice, wherein each die includes a given amount of modular pixel array circuitry. The array of dice effectively multiplies the amount of modular pixel array circuitry to produce a larger pixel array without increasing die size. Desired pixel pitch across the enlarged pixel array is preserved by forming die stacks with each pixel array circuitry die stacked on a separate die that contains the corresponding signal processing circuitry. Techniques for die stack interconnections and die stack placement are implemented to ensure that the desired pixel pitch is preserved across the enlarged pixel array.

  6. A scalable parallel algorithm for multiple objective linear programs

    NASA Technical Reports Server (NTRS)

    Wiecek, Malgorzata M.; Zhang, Hong

    1994-01-01

    This paper presents an ADBASE-based parallel algorithm for solving multiple objective linear programs (MOLP's). Job balance, speedup and scalability are of primary interest in evaluating efficiency of the new algorithm. Implementation results on Intel iPSC/2 and Paragon multiprocessors show that the algorithm significantly speeds up the process of solving MOLP's, which is understood as generating all or some efficient extreme points and unbounded efficient edges. The algorithm gives especially good results for large and very large problems. Motivation and justification for solving such large MOLP's are also included.

  7. Simplex-stochastic collocation method with improved scalability

    SciTech Connect

    Edeling, W.N.; Dwight, R.P.; Cinnella, P.

    2016-04-01

    The Simplex-Stochastic Collocation (SSC) method is a robust tool used to propagate uncertain input distributions through a computer code. However, it becomes prohibitively expensive for problems with dimensions higher than 5. The main purpose of this paper is to identify bottlenecks, and to improve upon this poor scalability. In order to do so, we propose an alternative interpolation stencil technique based upon the Set-Covering problem, and we integrate the SSC method in the High-Dimensional Model-Reduction framework. In addition, we address the issue of ill-conditioned sample matrices, and we present an analytical map to facilitate uniformly-distributed simplex sampling.

  8. Scalable syntheses of the BET bromodomain inhibitor JQ1.

    PubMed

    Syeda, Shameem Sultana; Jakkaraj, Sudhakar; Georg, Gunda I

    2015-06-03

    We have developed methods involving the use of alternate, safer reagents for the scalable syntheses of the potent BET bromodomain inhibitor JQ1. A one-pot, three-step method, involving the conversion of a benzodiazepine to a thioamide using Lawesson's reagent, followed by amidrazone formation and installation of the triazole moiety, furnished JQ1. This method provides good yields and a facile purification process. For the synthesis of enantiomerically enriched (+)-JQ1, the highly toxic reagent diethyl chlorophosphate, used in a previous synthesis, was replaced with the safer reagent diphenyl chlorophosphate in the three-step one-pot triazole formation without affecting the yield and enantiomeric purity of (+)-JQ1.

  9. Scalable Architecture for Multihop Wireless ad Hoc Networks

    NASA Technical Reports Server (NTRS)

    Arabshahi, Payman; Gray, Andrew; Okino, Clayton; Yan, Tsun-Yee

    2004-01-01

    A scalable architecture for wireless digital data and voice communications via ad hoc networks has been proposed. Although the details of the architecture and of its implementation in hardware and software have yet to be developed, the broad outlines of the architecture are fairly clear: This architecture departs from current commercial wireless communication architectures, which are characterized by low effective bandwidth per user and are not well suited to low-cost, rapid scaling in large metropolitan areas. This architecture is inspired by a vision more akin to that of more than two dozen noncommercial community wireless networking organizations established by volunteers in North America and several European countries.

  10. Scalability and Performance of a Large Linux Cluster

    SciTech Connect

    BRIGHTWELL,RONALD B.; PLIMPTON,STEVEN J.

    2000-01-20

    In this paper the authors present performance results from several parallel benchmarks and applications on a 400-node Linux cluster at Sandia National Laboratories. They compare the results on the Linux cluster to performance obtained on a traditional distributed-memory massively parallel processing machine, the Intel TeraFLOPS. They discuss the characteristics of these machines that influence the performance results and identify the key components of the system software that they feel are important to allow for scalability of commodity-based PC clusters to hundreds and possibly thousands of processors.

  11. Using overlay network architectures for scalable video distribution

    NASA Astrophysics Data System (ADS)

    Patrikakis, Charalampos Z.; Despotopoulos, Yannis; Fafali, Paraskevi; Cha, Jihun; Kim, Kyuheon

    2004-11-01

    Over the last few years, the enormous growth of Internet-based communication as well as the rapid increase of available processing power has led to the widespread use of multimedia streaming as a means to convey information. This work aims at providing an open architecture designed to support scalable streaming to a large number of clients using application layer multicast. The architecture is based on media relay nodes that can be deployed transparently to any existing media distribution scheme, which can support media streamed using the RTP and RTSP protocols. The architecture is based on overlay networks at application level, featuring rate adaptation mechanisms for responding to network congestion.

  12. pcircle - A Suite of Scalable Parallel File System Tools

    SciTech Connect

    WANG, FEIYI

    2015-10-01

    Most file system tools are written for conventional local file systems; they are serialized and cannot take advantage of a large-scale parallel file system. The "pcircle" software builds on MPI, which is ubiquitous in cluster computing environments, and a "work-stealing" pattern to provide a scalable, high-performance suite of file system tools. In particular, it implements parallel data copy and parallel data checksumming, with advanced features such as asynchronous progress reporting, checkpoint and restart, and integrity checking.

  13. Scalable brain network construction on white matter fibers

    NASA Astrophysics Data System (ADS)

    Chung, Moo K.; Adluru, Nagesh; Dalton, Kim M.; Alexander, Andrew L.; Davidson, Richard J.

    2011-03-01

    DTI offers a unique opportunity to characterize the structural connectivity of the human brain non-invasively by tracing white matter fiber tracts. Whole brain tractography studies routinely generate up to half a million tracts per brain, which serve as edges in an extremely large 3D graph with up to half a million edges. Currently there is no agreed-upon method for constructing the brain structural network graphs out of such large numbers of white matter tracts. In this paper, we present a scalable iterative framework called the ɛ-neighbor method for building a network graph and apply it to testing abnormal connectivity in autism.
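
    A heavily simplified sketch of the general ε-neighbor idea, as inferred from the description above (not the authors' exact algorithm): a tract endpoint becomes a new graph node only if no existing node lies within ε of it, otherwise it is merged with the nearest node, and each tract contributes an edge between the nodes of its two endpoints.

```python
# Simplified sketch of an epsilon-neighbor graph construction over
# fiber-tract endpoints; an illustration, not the paper's exact algorithm.
import numpy as np

def build_graph(tracts, eps):
    nodes, edges = [], set()

    def node_for(point):
        # Reuse an existing node if one lies within eps, else create a new one.
        if nodes:
            d = np.linalg.norm(np.asarray(nodes) - point, axis=1)
            i = int(np.argmin(d))
            if d[i] <= eps:
                return i
        nodes.append(point)
        return len(nodes) - 1

    for tract in tracts:                         # tract: array of 3D points
        a = node_for(np.asarray(tract[0], float))
        b = node_for(np.asarray(tract[-1], float))
        if a != b:
            edges.add((min(a, b), max(a, b)))
    return np.asarray(nodes), edges

rng = np.random.default_rng(0)
tracts = [rng.normal(size=(50, 3)) for _ in range(200)]   # synthetic "tracts"
nodes, edges = build_graph(tracts, eps=0.5)
print(len(nodes), len(edges))
```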

  14. Scalable C-H Oxidation with Copper: Synthesis of Polyoxypregnanes.

    PubMed

    See, Yi Yang; Herrmann, Aaron T; Aihara, Yoshinori; Baran, Phil S

    2015-11-04

    Steroids bearing C12 oxidations are widespread in nature, yet only one preparative chemical method addresses this challenge in a low-yielding and not fully understood fashion: Schönecker's Cu-mediated oxidation. This work shines new light onto this powerful C-H oxidation method through mechanistic investigation, optimization, and wider application. Culminating in a scalable, rapid, high-yielding, and operationally simple protocol, this procedure is applied to the first synthesis of several parent polyoxypregnane natural products, representing a gateway to over 100 family members.

  15. Scalable and Robust Randomized Benchmarking of Quantum Processes

    NASA Astrophysics Data System (ADS)

    Magesan, Easwar; Gambetta, J. M.; Emerson, Joseph

    2011-05-01

    In this Letter we propose a fully scalable randomized benchmarking protocol for quantum information processors. We prove that the protocol provides an efficient and reliable estimate of the average error rate for a set of operations (gates) under a very general noise model that allows for both time and gate-dependent errors. In particular we obtain a sequence of fitting models for the observable fidelity decay as a function of a (convergent) perturbative expansion of the gate errors about the mean error. We illustrate the protocol through numerical examples.
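
    For context, the zeroth-order model commonly fitted in randomized benchmarking is a single exponential decay of sequence fidelity, F(m) = A·p^m + B, from which an average error rate is estimated. The snippet below fits this standard model to synthetic data with SciPy; it is illustrative only and does not reproduce the paper's higher-order fitting models.

```python
# Fit the standard zeroth-order RB decay F(m) = A * p**m + B to synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def rb_model(m, A, p, B):
    return A * p**m + B

rng = np.random.default_rng(1)
m = np.arange(1, 201, 10)
data = rb_model(m, 0.49, 0.995, 0.5) + rng.normal(0, 0.005, m.size)

(A, p, B), _ = curve_fit(rb_model, m, data, p0=(0.5, 0.99, 0.5))
avg_error_rate = (1 - p) / 2        # single-qubit case (d = 2): r = (d-1)(1-p)/d
print(p, avg_error_rate)
```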

  16. Scalable Production Method for Graphene Oxide Water Vapor Separation Membranes

    SciTech Connect

    Fifield, Leonard S.; Shin, Yongsoon; Liu, Wei; Gotthold, David W.

    2016-01-01

    Membranes for selective water vapor separation were assembled from graphene oxide suspension using techniques compatible with high volume industrial production. The large-diameter graphene oxide flake suspensions were synthesized from graphite materials via relatively efficient chemical oxidation steps with attention paid to maintaining flake size and achieving high graphene oxide concentrations. Graphene oxide membranes produced using scalable casting methods exhibited water vapor flux and water/nitrogen selectivity performance meeting or exceeding that of membranes produced using vacuum-assisted laboratory techniques. (PNNL-SA-117497)

  17. Visual Prosthesis

    PubMed Central

    Schiller, Peter H.; Tehovnik, Edward J.

    2009-01-01

    There are more than 40 million blind individuals in the world whose plight would be greatly ameliorated by creating a visual prosthetic. We begin by outlining the basic operational characteristics of the visual system as this knowledge is essential for producing a prosthetic device based on electrical stimulation through arrays of implanted electrodes. We then list a series of tenets that we believe need to be followed in this effort. Central among these is our belief that the initial research in this area, which is in its infancy, should first be carried out in animals. We suggest that implantation of area V1 holds high promise as the area is of a large volume and can therefore accommodate extensive electrode arrays. We then proceed to consider coding operations that can effectively convert visual images viewed by a camera to stimulate electrode arrays to yield visual impressions that can provide shape, motion and depth information. We advocate experimental work that mimics electrical stimulation effects non-invasively in sighted human subjects using a camera from which visual images are converted into displays on a monitor akin to those created by electrical stimulation. PMID:19065857

  18. Earthscape, a Multi-Purpose Interactive 3D Globe Viewer for Hybrid Data Visualization and Analysis

    NASA Astrophysics Data System (ADS)

    Sarthou, A.; Mas, S.; Jacquin, M.; Moreno, N.; Salamon, A.

    2015-08-01

    The hybrid visualization and interaction tool EarthScape is presented here. The software is able to simultaneously display LiDAR point clouds, draped videos with moving footprint, volume scientific data (using volume rendering, isosurface and slice plane), raster data such as still satellite images, vector data and 3D models such as buildings or vehicles. The application runs on touch screen devices such as tablets. The software is based on open source libraries, such as OpenSceneGraph, osgEarth and OpenCV, and shader programming is used to implement volume rendering of scientific data. The next goal of EarthScape is to perform data analysis using ENVI Services Engine, a cloud data analysis solution. EarthScape is also designed to be a client of Jagwire, which provides multisource geo-referenced video fluxes. When all these components are included, EarthScape will be a multi-purpose platform that will provide at the same time data analysis, hybrid visualization and complex interactions. The software is available on demand for free at france@exelisvis.com.

  19. In-Situ Visualization Experiments with ParaView Cinema in RAGE

    SciTech Connect

    Kares, Robert John

    2015-10-15

    A previous paper described some numerical experiments performed using the ParaView/Catalyst in-situ visualization infrastructure deployed in the Los Alamos RAGE radiation-hydrodynamics code to produce images from a running large scale 3D ICF simulation. One challenge of the in-situ approach apparent in these experiments was the difficulty of choosing parameters like isosurface values for the visualizations to be produced from the running simulation without the benefit of prior knowledge of the simulation results, and the resultant cost of recomputing in-situ generated images when parameters are chosen suboptimally. A proposed method of addressing this difficulty is to simply render multiple images at runtime with a range of possible parameter values to produce a large database of images and to provide the user with a tool for managing the resulting database of imagery. Recently, ParaView/Catalyst has been extended to include such a capability via the so-called Cinema framework. Here I describe some initial experiments with the first delivery of Cinema and make some recommendations for future extensions of Cinema’s capabilities.

  20. UpSet: Visualization of Intersecting Sets

    PubMed Central

    Lex, Alexander; Gehlenborg, Nils; Strobelt, Hendrik; Vuillemot, Romain; Pfister, Hanspeter

    2016-01-01

    Understanding relationships between sets is an important analysis task that has received widespread attention in the visualization community. The major challenge in this context is the combinatorial explosion of the number of set intersections if the number of sets exceeds a trivial threshold. In this paper we introduce UpSet, a novel visualization technique for the quantitative analysis of sets, their intersections, and aggregates of intersections. UpSet is focused on creating task-driven aggregates, communicating the size and properties of aggregates and intersections, and a duality between the visualization of the elements in a dataset and their set membership. UpSet visualizes set intersections in a matrix layout and introduces aggregates based on groupings and queries. The matrix layout enables the effective representation of associated data, such as the number of elements in the aggregates and intersections, as well as additional summary statistics derived from subset or element attributes. Sorting according to various measures enables a task-driven analysis of relevant intersections and aggregates. The elements represented in the sets and their associated attributes are visualized in a separate view. Queries based on containment in specific intersections, aggregates or driven by attribute filters are propagated between both views. We also introduce several advanced visual encodings and interaction methods to overcome the problems of varying scales and to address scalability. UpSet is web-based and open source. We demonstrate its general utility in multiple use cases from various domains. PMID:26356912
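
    The combinatorial explosion motivating UpSet is easy to see in code: n sets admit 2^n - 1 non-empty set combinations, each with its own exclusive intersection. The toy enumeration below is illustrative and unrelated to the UpSet implementation itself.

```python
# Enumerate exclusive set intersections, the quantity UpSet visualizes.
from itertools import combinations

sets = {
    "A": {1, 2, 3, 4, 5},
    "B": {3, 4, 5, 6},
    "C": {5, 6, 7},
}

names = list(sets)
for r in range(1, len(names) + 1):
    for combo in combinations(names, r):
        members = set.intersection(*(sets[n] for n in combo))
        others = set().union(*(sets[n] for n in names if n not in combo))
        exclusive = members - others          # elements in exactly these sets
        print(combo, len(exclusive))
```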

  1. Visual cognition

    SciTech Connect

    Pinker, S.

    1985-01-01

    This collection of research papers on visual cognition first appeared as a special issue of Cognition: International Journal of Cognitive Science. The study of visual cognition has seen enormous progress in the past decade, bringing important advances in our understanding of shape perception, visual imagery, and mental maps. Many of these discoveries are the result of converging investigations in different areas, such as cognitive and perceptual psychology, artificial intelligence, and neuropsychology. This volume is intended to highlight a sample of work at the cutting edge of this research area for the benefit of students and researchers in a variety of disciplines. The tutorial introduction that begins the volume is designed to help the nonspecialist reader bridge the gap between the contemporary research reported here and earlier textbook introductions or literature reviews.

  2. Neutron generators with size scalability, ease of fabrication and multiple ion source functionalities

    DOEpatents

    Elizondo-Decanini, Juan M

    2014-11-18

    A neutron generator is provided with a flat, rectilinear geometry and surface mounted metallizations. This construction provides scalability and ease of fabrication, and permits multiple ion source functionalities.

  3. Efficient and scalable graph similarity joins in MapReduce.

    PubMed

    Chen, Yifan; Zhao, Xiang; Xiao, Chuan; Zhang, Weiming; Tang, Jiuyang

    2014-01-01

    Along with the emergence of massive graph-modeled data, it is of great importance to investigate graph similarity joins due to their wide applications for multiple purposes, including data cleaning and near-duplicate detection. This paper considers graph similarity joins with edit distance constraints, which return pairs of graphs such that their edit distances are no larger than a given threshold. Leveraging the MapReduce programming model, we propose MGSJoin, a scalable algorithm following the filtering-verification framework for efficient graph similarity joins. It relies on counting overlapping graph signatures for filtering out nonpromising candidates. To address the potential issue of too many key-value pairs in the filtering phase, spectral Bloom filters are introduced to reduce the number of key-value pairs. Furthermore, we integrate the multiway join strategy to boost the verification, where a MapReduce-based method is proposed for GED calculation. The superior efficiency and scalability of the proposed algorithms are demonstrated by extensive experimental results.
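
    A generic, single-machine illustration of the filtering-verification pattern used by such joins: a cheap signature-overlap test prunes pairs before an expensive exact check. The signature, the pruning threshold, and the verification function below are placeholders, not the count bounds or GED computation from the paper.

```python
# Generic filtering-verification sketch for similarity joins.
# Signatures, thresholds and verification are placeholders, not MGSJoin's bounds.
from itertools import combinations

def signature(graph_edges):
    # Toy signature: the set of edge labels.
    return frozenset(graph_edges)

def verify(g1, g2, tau):
    # Stand-in for an exact (expensive) graph edit distance check.
    return len(set(g1) ^ set(g2)) <= tau

graphs = {
    "g1": [("a", "b"), ("b", "c")],
    "g2": [("a", "b"), ("b", "d")],
    "g3": [("x", "y")],
}
tau = 2
results = []
for (n1, e1), (n2, e2) in combinations(graphs.items(), 2):
    overlap = len(signature(e1) & signature(e2))
    if overlap >= len(e1) - tau:          # filtering: prune clearly dissimilar pairs
        if verify(e1, e2, tau):           # verification: exact check on survivors
            results.append((n1, n2))
print(results)
```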

  4. Scalable and Fault Tolerant Failure Detection and Consensus

    SciTech Connect

    Katti, Amogh; Di Fatta, Giuseppe; Naughton III, Thomas J; Engelmann, Christian

    2015-01-01

    Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a fault tolerant failure detection and consensus algorithm. This paper presents and compares two novel failure detection and consensus algorithms. The proposed algorithms are based on Gossip protocols and are inherently fault-tolerant and scalable. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in both algorithms the number of Gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and a perfect synchronization in achieving global consensus.
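
    A toy push-gossip simulation illustrates the logarithmic growth of cycle counts with system size reported above; this is a generic gossip dissemination model, not the papers' failure-detection or consensus algorithms.

```python
# Toy push-gossip simulation: rounds needed to inform all nodes grows ~ log2(N).
import math
import random

def gossip_rounds(n, seed=0):
    rng = random.Random(seed)
    informed = {0}
    rounds = 0
    while len(informed) < n:
        for _ in list(informed):
            informed.add(rng.randrange(n))   # each informed node pushes to a random peer
        rounds += 1
    return rounds

for n in (64, 1024, 16384):
    print(n, gossip_rounds(n), math.ceil(math.log2(n)))
```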

  5. MPE graphics -- Scalable X11 graphics in MPI

    SciTech Connect

    Gropp, W.; Karrels, E.; Lusk, E.

    1994-12-31

    As parallel programs enter the mainstream, they need to provide the same facilities and ease-of-use features expected of uniprocessor programs. For many applications, this means that they need to provide graphical output. This talk discusses a library of routines that provide scalable X Window System graphics. These routines make use of the MPI message-passing standard to provide a safe and reliable system that can be easily used in parallel programs. At the same time they encapsulate commonly-used services to provide a convenient interface to X graphics facilities. The easiest way to provide X11 graphics to a parallel program is to allow each process to draw on the same X11 Window. That is, each process opens a connection to the X11 server and draws directly to it. In one sense, this is as scalable a system as possible, since the single graphics display is an unavoidable point of sequential access. However, in reality, an X server can only accept a relatively small number of connections. In addition, the latency associated with each transmission between a parallel process and the X Window server is relatively high. This talk addresses these issues.

  6. Biosurveillance of emerging biothreats using scalable genotype clustering.

    PubMed

    Gallego, Blanca; Sintchenko, Vitali; Wang, Qinning; Hiley, Lester; Gilbert, Gwendolyn L; Coiera, Enrico

    2009-02-01

    Developments in molecular fingerprinting of pathogens with epidemic potential have offered new opportunities for improving detection and monitoring of biothreats. However, the lack of scalable definitions for infectious disease clustering presents a barrier for effective use and evaluation of new data types for early warning systems. A novel working definition of an outbreak based on temporal and spatial clustering of molecular genotypes is introduced in this paper. It provides an unambiguous way of clustering of causative pathogens and is adjustable to local disease prevalence and availability of public health resources. The performance of this definition in prospective surveillance is assessed in the context of community outbreaks of food-borne salmonellosis. Molecular fingerprinting augmented with the scalable clustering allows the detection of more than 50% of the potential outbreaks before they reach the midpoint of the cluster duration. Clustering in time by imposing restrictions on intervals between collection dates results in a smaller number of outbreaks but does not significantly affect the timeliness of detection. Clustering in space and time by imposing restrictions on the spatial and temporal distance between cases results in a further reduction in the number of outbreaks and decreases the overall efficiency of prospective detection. Innovative bacterial genotyping technologies can enhance early warning systems for public health by aiding the detection of moderate and small epidemics.
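
    A minimal sketch of temporal clustering of cases by molecular genotype, in the spirit of the working definition described above; the genotype labels, the 14-day window, and the case threshold are hypothetical choices for illustration only.

```python
# Sketch: flag a potential outbreak when >= min_cases of the same genotype
# occur within a sliding time window. Field names and thresholds are illustrative.
from collections import defaultdict
from datetime import date, timedelta

cases = [
    ("STm-135", date(2008, 3, 1)),
    ("STm-135", date(2008, 3, 6)),
    ("STm-135", date(2008, 3, 9)),
    ("STm-170", date(2008, 3, 7)),
]

def detect_clusters(cases, window=timedelta(days=14), min_cases=3):
    by_genotype = defaultdict(list)
    for genotype, collected in cases:
        by_genotype[genotype].append(collected)
    clusters = []
    for genotype, dates in by_genotype.items():
        dates.sort()
        for i in range(len(dates)):
            j = i
            while j + 1 < len(dates) and dates[j + 1] - dates[i] <= window:
                j += 1
            if j - i + 1 >= min_cases:
                clusters.append((genotype, dates[i], dates[j]))
                break
    return clusters

print(detect_clusters(cases))
```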

  7. Scalable manufacturing of biomimetic moldable hydrogels for industrial applications

    NASA Astrophysics Data System (ADS)

    Yu, Anthony C.; Chen, Haoxuan; Chan, Doreen; Agmon, Gillie; Stapleton, Lyndsay M.; Sevit, Alex M.; Tibbitt, Mark W.; Acosta, Jesse D.; Zhang, Tony; Franzia, Paul W.; Langer, Robert; Appel, Eric A.

    2016-12-01

    Hydrogels are a class of soft material that is exploited in many, often completely disparate, industrial applications, on account of their unique and tunable properties. Advances in soft material design are yielding next-generation moldable hydrogels that address engineering criteria in several industrial settings such as complex viscosity modifiers, hydraulic or injection fluids, and sprayable carriers. Industrial implementation of these viscoelastic materials requires extreme volumes of material, upwards of several hundred million gallons per year. Here, we demonstrate a paradigm for the scalable fabrication of self-assembled moldable hydrogels using rationally engineered, biomimetic polymer–nanoparticle interactions. Cellulose derivatives are linked together by selective adsorption to silica nanoparticles via dynamic and multivalent interactions. We show that the self-assembly process for gel formation is easily scaled in a linear fashion from 0.5 mL to over 15 L without alteration of the mechanical properties of the resultant materials. The facile and scalable preparation of these materials leveraging self-assembly of inexpensive, renewable, and environmentally benign starting materials, coupled with the tunability of their properties, make them amenable to a range of industrial applications. In particular, we demonstrate their utility as injectable materials for pipeline maintenance and product recovery in industrial food manufacturing as well as their use as sprayable carriers for robust application of fire retardants in preventing wildland fires.

  8. Developing a scalable artificial photosynthesis technology through nanomaterials by design

    NASA Astrophysics Data System (ADS)

    Lewis, Nathan S.

    2016-12-01

    An artificial photosynthetic system that directly produces fuels from sunlight could provide an approach to scalable energy storage and a technology for the carbon-neutral production of high-energy-density transportation fuels. A variety of designs are currently being explored to create a viable artificial photosynthetic system, and the most technologically advanced systems are based on semiconducting photoelectrodes. Here, I discuss the development of an approach that is based on an architecture, first conceived around a decade ago, that combines arrays of semiconducting microwires with flexible polymeric membranes. I highlight the key steps that have been taken towards delivering a fully functional solar fuels generator, which have exploited advances in nanotechnology at all hierarchical levels of device construction, and include the discovery of earth-abundant electrocatalysts for fuel formation and materials for the stabilization of light absorbers. Finally, I consider the remaining scientific and engineering challenges facing the fulfilment of an artificial photosynthetic system that is simultaneously safe, robust, efficient and scalable.

  9. A Highly Scalable Peptide-Based Assay System for Proteomics

    PubMed Central

    Kozlov, Igor A.; Thomsen, Elliot R.; Munchel, Sarah E.; Villegas, Patricia; Capek, Petr; Gower, Austin J.; K. Pond, Stephanie J.; Chudin, Eugene; Chee, Mark S.

    2012-01-01

    We report a scalable and cost-effective technology for generating and screening high-complexity customizable peptide sets. The peptides are made as peptide-cDNA fusions by in vitro transcription/translation from pools of DNA templates generated by microarray-based synthesis. This approach enables large custom sets of peptides to be designed in silico, manufactured cost-effectively in parallel, and assayed efficiently in a multiplexed fashion. The utility of our peptide-cDNA fusion pools was demonstrated in two activity-based assays designed to discover protease and kinase substrates. In the protease assay, cleaved peptide substrates were separated from uncleaved and identified by digital sequencing of their cognate cDNAs. We screened the 3,011 amino acid HCV proteome for susceptibility to cleavage by the HCV NS3/4A protease and identified all 3 known trans cleavage sites with high specificity. In the kinase assay, peptide substrates phosphorylated by tyrosine kinases were captured and identified by sequencing of their cDNAs. We screened a pool of 3,243 peptides against Abl kinase and showed that phosphorylation events detected were specific and consistent with the known substrate preferences of Abl kinase. Our approach is scalable and adaptable to other protein-based assays. PMID:22701568

  10. The Node Monitoring Component of a Scalable Systems Software Environment

    SciTech Connect

    Miller, Samuel James

    2006-01-01

    This research describes Fountain, a suite of programs used to monitor the resources of a cluster. A cluster is a collection of individual computers that are connected via a high speed communication network. They are traditionally used by users who desire more resources, such as processing power and memory, than any single computer can provide. A common drawback to effectively utilizing such a large-scale system is the management infrastructure, which often does not scale well as the system grows. Large-scale parallel systems provide new research challenges in the area of systems software, the programs or tools that manage the system from boot-up to running a parallel job. The approach presented in this thesis utilizes a collection of separate components that communicate with each other to achieve a common goal. While systems software comprises a broad array of components, this thesis focuses on the design choices for a node monitoring component. We will describe Fountain, an implementation of the Scalable Systems Software (SSS) node monitor specification. It is targeted at aggregate node monitoring for clusters, focusing on both scalability and fault tolerance as its design goals. It leverages widely used technologies such as XML and HTTP to present an interface to other components in the SSS environment.

  11. ISMuS: interactive, scalable, multimedia streaming platform

    NASA Astrophysics Data System (ADS)

    Cha, Jihun; Kim, Hyun-Cheol; Jeong, Seyoon; Kim, Kyuheon; Patrikakis, Charalampos; van der Schaar, Mihaela

    2005-08-01

    Technical evolutions in the field of information technology have changed many aspects of industry and the life of human beings. Internet and broadcasting technologies act as core ingredients for this revolution. Various new services that were never possible before are now available to the general public by utilizing these technologies. Multimedia service via IP networks has become one of the most easily accessible services these days. Technical advances in Internet services, the provision of constantly increasing network bandwidth capacity, and the evolution of multimedia technologies have made demand for multimedia streaming services increase explosively. With this increasing demand, the Internet is becoming deluged with multimedia traffic. Although multimedia streaming services have become indispensable, the quality of a multimedia service over the Internet cannot be technically guaranteed. Recently users have been demanding multimedia services whose quality is competitive with traditional TV broadcasting, with additional functionalities. Such additional functionalities include interactivity, scalability, and adaptability. Multimedia that comprises these ancillary functionalities is often called richmedia. In order to satisfy the aforementioned requirements, the Interactive Scalable Multimedia Streaming (ISMuS) platform is designed and developed. In this paper, the architecture, implementation, and additional functionalities of the ISMuS platform are presented. The presented platform is capable of providing user interactions based on MPEG-4 Systems technology [1] and supporting efficient multimedia distribution through an overlay network technology. Loaded with feature-rich technologies, the platform can serve both on-demand and broadcast-like richmedia services.

  12. Resolution scalable image coding with reversible cellular automata.

    PubMed

    Cappellari, Lorenzo; Milani, Simone; Cruz-Reyes, Carlos; Calvagno, Giancarlo

    2011-05-01

    In a resolution scalable image coding algorithm, a multiresolution representation of the data is often obtained using a linear filter bank. Reversible cellular automata have been recently proposed as simpler, nonlinear filter banks that produce a similar representation. The original image is decomposed into four subbands, such that one of them retains most of the features of the original image at a reduced scale. In this paper, we discuss the utilization of reversible cellular automata and arithmetic coding for scalable compression of binary and grayscale images. In the binary case, the proposed algorithm that uses simple local rules compares well with the JBIG compression standard, in particular for images where the foreground is made of a simple connected region. For complex images, more efficient local rules based upon the lifting principle have been designed. They provide compression performances very close to or even better than JBIG, depending upon the image characteristics. In the grayscale case, and in particular for smooth images such as depth maps, the proposed algorithm outperforms both the JBIG and the JPEG2000 standards under most coding conditions.

  13. Scalable tuning of building models to hourly data

    DOE PAGES

    Garrett, Aaron; New, Joshua Ryan

    2015-03-31

    Energy models of existing buildings are unreliable unless calibrated so they correlate well with actual energy usage. Manual tuning requires a skilled professional, is prohibitively expensive for small projects, imperfect, non-repeatable, non-transferable, and not scalable to the dozens of sensor channels that smart meters, smart appliances, and cheap/ubiquitous sensors are beginning to make available today. A scalable, automated methodology is needed to quickly and intelligently calibrate building energy models to all available data, increase the usefulness of those models, and facilitate speed-and-scale penetration of simulation-based capabilities into the marketplace for actualized energy savings. The "Autotune" project is a novel, model-agnostic methodology which leverages supercomputing, large simulation ensembles, and big data mining with multiple machine learning algorithms to allow automatic calibration of simulations that match measured experimental data in a way that is deployable on commodity hardware. This paper shares several methodologies employed to reduce the combinatorial complexity to a computationally tractable search problem for hundreds of input parameters. Furthermore, accuracy metrics are provided which quantify model error to measured data for either monthly or hourly electrical usage from a highly-instrumented, emulated-occupancy research home.
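
    A schematic of the calibration loop being automated (not the Autotune machinery itself): sample candidate input parameters, run the building model, and keep the parameter set that minimizes error against measured hourly usage. The `run_simulation` function below is a hypothetical stand-in for an EnergyPlus-style model evaluation, and the data are synthetic.

```python
# Schematic calibration loop: random search over building-model inputs to
# minimize RMSE against measured hourly data. `run_simulation` is a stand-in.
import numpy as np

rng = np.random.default_rng(42)
measured = rng.normal(1.0, 0.2, size=8760)          # hypothetical hourly kWh data

def run_simulation(params):
    # Stand-in for a building energy simulation returning hourly usage.
    infiltration, setpoint = params
    return 0.8 + 0.5 * infiltration + 0.01 * (setpoint - 21.0) + np.zeros(8760)

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

best_params, best_err = None, np.inf
for _ in range(1000):
    params = (rng.uniform(0.1, 1.0), rng.uniform(18.0, 26.0))
    err = rmse(run_simulation(params), measured)
    if err < best_err:
        best_params, best_err = params, err
print(best_params, best_err)
```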

  14. Scalable parallel distance field construction for large-scale applications

    DOE PAGES

    Yu, Hongfeng; Xie, Jinrong; Ma, Kwan -Liu; ...

    2015-10-01

    Computing distance fields is fundamental to many scientific and engineering applications. Distance fields can be used to direct analysis and reduce data. In this paper, we present a highly scalable method for computing 3D distance fields on massively parallel distributed-memory machines. A new distributed spatial data structure, named parallel distance tree, is introduced to manage the level sets of data and facilitate surface tracking over time, resulting in significantly reduced computation and communication costs for calculating the distance to the surface of interest from any spatial locations. Our method supports several data types and distance metrics from real-world applications. We demonstrate its efficiency and scalability on state-of-the-art supercomputers using both large-scale volume datasets and surface models. We also demonstrate in-situ distance field computation on dynamic turbulent flame surfaces for a petascale combustion simulation. In conclusion, our work greatly extends the usability of distance fields for demanding applications.
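
    On a single node, a voxel-resolution approximation of a distance field to an isosurface can be obtained with a Euclidean distance transform; the serial sketch below uses SciPy and is only a baseline illustration, not the parallel distance-tree method of the paper.

```python
# Serial sketch: approximate signed distance (in voxel units) to an isosurface,
# using SciPy's Euclidean distance transform. Not the paper's parallel method.
import numpy as np
from scipy import ndimage

def signed_distance_field(volume, isovalue):
    inside = volume >= isovalue
    d_out = ndimage.distance_transform_edt(~inside)   # distance for outside voxels
    d_in = ndimage.distance_transform_edt(inside)     # distance for inside voxels
    return d_out - d_in                                # positive outside, negative inside

# Toy volume: a sphere of radius 20 voxels centered in a 64^3 grid.
z, y, x = np.mgrid[0:64, 0:64, 0:64]
volume = 20.0 - np.sqrt((x - 32.0)**2 + (y - 32.0)**2 + (z - 32.0)**2)
sdf = signed_distance_field(volume, isovalue=0.0)
print(sdf.min(), sdf.max())
```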

  15. Scalable tuning of building models to hourly data

    SciTech Connect

    Garrett, Aaron; New, Joshua Ryan

    2015-03-31

    Energy models of existing buildings are unreliable unless calibrated so they correlate well with actual energy usage. Manual tuning requires a skilled professional, is prohibitively expensive for small projects, imperfect, non-repeatable, non-transferable, and not scalable to the dozens of sensor channels that smart meters, smart appliances, and cheap/ubiquitous sensors are beginning to make available today. A scalable, automated methodology is needed to quickly and intelligently calibrate building energy models to all available data, increase the usefulness of those models, and facilitate speed-and-scale penetration of simulation-based capabilities into the marketplace for actualized energy savings. The "Autotune" project is a novel, model-agnostic methodology which leverages supercomputing, large simulation ensembles, and big data mining with multiple machine learning algorithms to allow automatic calibration of simulations that match measured experimental data in a way that is deployable on commodity hardware. This paper shares several methodologies employed to reduce the combinatorial complexity to a computationally tractable search problem for hundreds of input parameters. Furthermore, accuracy metrics are provided which quantify model error to measured data for either monthly or hourly electrical usage from a highly-instrumented, emulated-occupancy research home.

  16. Cheetah: A Framework for Scalable Hierarchical Collective Operations

    SciTech Connect

    Graham, Richard L; Gorentla Venkata, Manjunath; Ladd, Joshua S; Shamis, Pavel; Rabinovitz, Ishai; Filipov, Vasily; Shainer, Gilad

    2011-01-01

    Collective communication operations, used by many scientific applications, tend to limit overall parallel application performance and scalability. Computer systems are becoming more heterogeneous with increasing node and core-per-node counts. Also, a growing number of data-access mechanisms, of varying characteristics, are supported within a single computer system. We describe a new hierarchical collective communication framework that takes advantage of hardware-specific data-access mechanisms. It is flexible, with run-time hierarchy specification, and sharing of collective communication primitives between collective algorithms. Data buffers are shared between levels in the hierarchy, reducing collective communication management overhead. We have implemented several versions of the Message Passing Interface (MPI) collective operations, MPI_Barrier() and MPI_Bcast(), and run experiments using up to 49,152 processes on a Cray XT5 and a small InfiniBand-based cluster. At 49,152 processes our barrier implementation outperforms the optimized native implementation by 75%. 32-byte and one-megabyte broadcasts outperform it by 62% and 11%, respectively, with better scalability characteristics. Improvements relative to the default Open MPI implementation are much larger.
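
    Cheetah itself is hardware-aware and built on lower-level primitives; as a rough illustration of the hierarchy idea only, the mpi4py sketch below performs a two-level broadcast: first across one leader rank per node, then within each node. It assumes mpi4py is available and that world rank 0 holds the data; none of this is Cheetah code.

    ```python
    # Hedged sketch of a two-level hierarchical broadcast with mpi4py.
    from mpi4py import MPI

    def hierarchical_bcast(obj, comm=MPI.COMM_WORLD):
        # Group ranks by shared-memory domain (one communicator per node).
        node_comm = comm.Split_type(MPI.COMM_TYPE_SHARED, key=comm.Get_rank())
        is_leader = node_comm.Get_rank() == 0
        # Node leaders form their own communicator; other ranks get COMM_NULL.
        leader_comm = comm.Split(0 if is_leader else MPI.UNDEFINED, key=comm.Get_rank())
        if is_leader:
            obj = leader_comm.bcast(obj, root=0)   # across nodes
            leader_comm.Free()
        obj = node_comm.bcast(obj, root=0)         # within each node
        node_comm.Free()
        return obj

    if __name__ == "__main__":
        data = {"payload": 42} if MPI.COMM_WORLD.Get_rank() == 0 else None
        data = hierarchical_bcast(data)
    ```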

  17. Scalable Multiprocessor for High-Speed Computing in Space

    NASA Technical Reports Server (NTRS)

    Lux, James; Lang, Minh; Nishimoto, Kouji; Clark, Douglas; Stosic, Dorothy; Bachmann, Alex; Wilkinson, William; Steffke, Richard

    2004-01-01

    A report discusses the continuing development of a scalable multiprocessor computing system for hard real-time applications aboard a spacecraft. "Hard real-time applications" signifies applications, like real-time radar signal processing, in which the data to be processed are generated at hundreds of pulses per second, each pulse requiring millions of arithmetic operations. In these applications, the digital processors must be tightly integrated with analog instrumentation (e.g., radar equipment), and data input/output must be synchronized with analog instrumentation, controlled to within fractions of a microsecond. The scalable multiprocessor is a cluster of identical commercial-off-the-shelf generic DSP (digital-signal-processing) computers plus generic interface circuits, including analog-to-digital converters, all controlled by software. The processors are computers interconnected by high-speed serial links. Performance can be increased by adding hardware modules and correspondingly modifying the software. Work is distributed among the processors in a parallel or pipeline fashion by means of a flexible master/slave control and timing scheme. Each processor operates under its own local clock; synchronization is achieved by broadcasting master time signals to all the processors, which compute offsets between the master clock and their local clocks.

  18. Scalable manufacturing of biomimetic moldable hydrogels for industrial applications.

    PubMed

    Yu, Anthony C; Chen, Haoxuan; Chan, Doreen; Agmon, Gillie; Stapleton, Lyndsay M; Sevit, Alex M; Tibbitt, Mark W; Acosta, Jesse D; Zhang, Tony; Franzia, Paul W; Langer, Robert; Appel, Eric A

    2016-12-13

    Hydrogels are a class of soft material that is exploited in many, often completely disparate, industrial applications, on account of their unique and tunable properties. Advances in soft material design are yielding next-generation moldable hydrogels that address engineering criteria in several industrial settings such as complex viscosity modifiers, hydraulic or injection fluids, and sprayable carriers. Industrial implementation of these viscoelastic materials requires extreme volumes of material, upwards of several hundred million gallons per year. Here, we demonstrate a paradigm for the scalable fabrication of self-assembled moldable hydrogels using rationally engineered, biomimetic polymer-nanoparticle interactions. Cellulose derivatives are linked together by selective adsorption to silica nanoparticles via dynamic and multivalent interactions. We show that the self-assembly process for gel formation is easily scaled in a linear fashion from 0.5 mL to over 15 L without alteration of the mechanical properties of the resultant materials. The facile and scalable preparation of these materials leveraging self-assembly of inexpensive, renewable, and environmentally benign starting materials, coupled with the tunability of their properties, make them amenable to a range of industrial applications. In particular, we demonstrate their utility as injectable materials for pipeline maintenance and product recovery in industrial food manufacturing as well as their use as sprayable carriers for robust application of fire retardants in preventing wildland fires.

  19. Scalable and sustainable electrochemical allylic C-H oxidation

    NASA Astrophysics Data System (ADS)

    Horn, Evan J.; Rosen, Brandon R.; Chen, Yong; Tang, Jiaze; Chen, Ke; Eastgate, Martin D.; Baran, Phil S.

    2016-05-01

    New methods and strategies for the direct functionalization of C-H bonds are beginning to reshape the field of retrosynthetic analysis, affecting the synthesis of natural products, medicines and materials. The oxidation of allylic systems has played a prominent role in this context as possibly the most widely applied C-H functionalization, owing to the utility of enones and allylic alcohols as versatile intermediates, and their prevalence in natural and unnatural materials. Allylic oxidations have featured in hundreds of syntheses, including some natural product syntheses regarded as “classics”. Despite many attempts to improve the efficiency and practicality of this transformation, the majority of conditions still use highly toxic reagents (based around toxic elements such as chromium or selenium) or expensive catalysts (such as palladium or rhodium). These requirements are problematic in industrial settings; currently, no scalable and sustainable solution to allylic oxidation exists. This oxidation strategy is therefore rarely used for large-scale synthetic applications, limiting the adoption of this retrosynthetic strategy by industrial scientists. Here we describe an electrochemical C-H oxidation strategy that exhibits broad substrate scope, operational simplicity and high chemoselectivity. It uses inexpensive and readily available materials, and represents a scalable allylic C-H oxidation (demonstrated on 100 grams), enabling the adoption of this C-H oxidation strategy in large-scale industrial settings without substantial environmental impact.

  20. Scalable and Sustainable Electrochemical Allylic C–H Oxidation

    PubMed Central

    Chen, Yong; Tang, Jiaze; Chen, Ke; Eastgate, Martin D.; Baran, Phil S.

    2016-01-01

    New methods and strategies for the direct functionalization of C–H bonds are beginning to reshape the fabric of retrosynthetic analysis, impacting the synthesis of natural products, medicines, and even materials [1]. The oxidation of allylic systems has played a prominent role in this context as possibly the most widely applied C–H functionalization due to the utility of enones and allylic alcohols as versatile intermediates, along with their prevalence in natural and unnatural materials [2]. Allylic oxidations have been featured in hundreds of syntheses, including some natural product syntheses regarded as “classics” [3]. Despite many attempts to improve the efficiency and practicality of this powerful transformation, the vast majority of conditions still employ highly toxic reagents (based around toxic elements such as chromium, selenium, etc.) or expensive catalysts (palladium, rhodium, etc.) [2]. These requirements are highly problematic in industrial settings; currently, no scalable and sustainable solution to allylic oxidation exists. As such, this oxidation strategy is rarely embraced for large-scale synthetic applications, limiting the adoption of this important retrosynthetic strategy by industrial scientists. In this manuscript, we describe an electrochemical solution to this problem that exhibits broad substrate scope, operational simplicity, and high chemoselectivity. This method employs inexpensive and readily available materials, representing the first example of a scalable allylic C–H oxidation (demonstrated on 100 grams), finally opening the door for the adoption of this C–H oxidation strategy in large-scale industrial settings without significant environmental impact. PMID:27096371

  1. Using Swarming Agents for Scalable Security in Large Network Environments

    SciTech Connect

    Crouse, Michael; White, Jacob L.; Fulp, Errin W.; Berenhaut, Kenneth S.; Fink, Glenn A.; Haack, Jereme N.

    2011-09-23

    The difficulty of securing computer infrastructures increases as they grow in size and complexity. Network-based security solutions such as IDS and firewalls cannot scale because of exponentially increasing computational costs inherent in detecting the rapidly growing number of threat signatures. Host-based solutions like virus scanners and IDS suffer similar issues, and these are compounded when enterprises try to monitor these in a centralized manner. Swarm-based autonomous agent systems like digital ants and artificial immune systems can provide a scalable security solution for large network environments. The digital ants approach offers a biologically inspired design where each ant in the virtual colony can detect atoms of evidence that may help identify a possible threat. By assembling the atomic evidence from different ant types, the colony may detect the threat. This decentralized approach can require, on average, fewer computational resources than traditional centralized solutions; however, there are limits to its scalability. This paper describes how dividing a large infrastructure into smaller managed enclaves allows the digital ant framework to effectively operate in larger environments. Experimental results will show that using smaller enclaves allows for more consistent distribution of agents and results in faster response times.

  2. Scalable orbital-angular-momentum sorting without destroying photon states

    NASA Astrophysics Data System (ADS)

    Wang, Fang-Xiang; Chen, Wei; Yin, Zhen-Qiang; Wang, Shuang; Guo, Guang-Can; Han, Zheng-Fu

    2016-09-01

    Single photons with orbital angular momentum (OAM) have attracted substantial attention from researchers. A single photon can carry infinite OAM values theoretically. Thus, OAM photon states have been widely used in quantum information and fundamental quantum mechanics. Although there have been many methods for sorting quantum states with different OAM values, the nondestructive and efficient sorting of high-dimensional OAM states remains a fundamental challenge. Here, we propose a scalable OAM sorter which can categorize different OAM states simultaneously while preserving both OAM and spin angular momentum. Fundamental elements of the sorter are composed of symmetric multiport beam splitters (BSs) and Dove prisms in a cascading structure, which in principle can be flexibly and effectively combined to sort arbitrarily high-dimensional OAM photons. The scalable structures proposed here greatly reduce the number of BSs required for sorting high-dimensional OAM states. In view of the nondestructive and extensible features, the sorters can be used as fundamental devices not only for high-dimensional quantum information processing, but also for traditional optics.

  3. NOA: A Scalable Multi-Parent Clustering Hierarchy for WSNs

    SciTech Connect

    Cree, Johnathan V.; Delgado-Frias, Jose; Hughes, Michael A.; Burghard, Brion J.; Silvers, Kurt L.

    2012-08-10

    NOA is a multi-parent, N-tiered, hierarchical clustering algorithm that provides a scalable, robust and reliable solution to autonomous configuration of large-scale wireless sensor networks. The novel clustering hierarchy's inherent benefits can be utilized by in-network data processing techniques to provide equally robust, reliable and scalable in-network data processing solutions capable of reducing the amount of data sent to sinks. Utilizing a multi-parent framework, NOA reduces the cost of network setup when compared to hierarchical beaconing solutions by removing the expense of r-hop broadcasting (r is the radius of the cluster) needed to build the network and instead passes network topology information among shared children. NOA2, a two-parent clustering hierarchy solution, and NOA3, the three-parent variant, saw up to an 83% and 72% reduction in overhead, respectively, when compared to performing one round of one-parent hierarchical beaconing, as well as 92% and 88% less overhead when compared to one round of two- and three-parent hierarchical beaconing.

  4. On-line scalable image access for medical remote collaborative meetings

    NASA Astrophysics Data System (ADS)

    Tarando, Sebastian R.; Lucidarme, Olivier; Grenier, Philippe; Fetita, Catalin

    2015-03-01

    The increasing need for remote medical investigation services in the framework of collaborative multidisciplinary meetings (e.g., cancer follow-up) raises the challenge of on-line remote access to large amounts of radiologic data in a limited period of time. This paper proposes a scalable compression framework for DICOM images providing low-latency display over low-speed networks. The developed approach relies on removing useless information from images (i.e., regions not related to the patient body) and on the JPEG2000 standard to achieve progressive quality encoding and access of the data. This mechanism also allows the efficient exploitation of any idle times (corresponding to on-line visual image analysis) to download the remaining data at lossless quality in a way transparent to the user, thus minimizing the perceived latency. Experiments performed in comparison with exchanging uncompressed or JPEG lossless-compressed DICOM data showed the benefit of the proposed approach for collaborative on-line remote diagnosis and follow-up services.

  5. Scalable Approach To Construct Free-Standing and Flexible Carbon Networks for Lithium-Sulfur Battery.

    PubMed

    Li, Mengliu; Wahyudi, Wandi; Kumar, Pushpendra; Wu, Fengyu; Yang, Xiulin; Li, Henan; Li, Lain-Jong; Ming, Jun

    2017-03-08

    Reconstructing carbon nanomaterials (e.g., fullerene, carbon nanotubes (CNTs), and graphene) into multidimensional networks with hierarchical structure is a critical step in exploring their applications. Herein, a sacrificial-template casting method is developed to prepare highly flexible, free-standing carbon films consisting of CNTs, graphene, or both. The scalable size, ultralight and binder-free characteristics, and tunable processing and properties are promising for large-scale applications, such as use as interlayers in lithium-sulfur batteries. The capability of holding polysulfides (i.e., suppressing sulfur diffusion) is pronounced for networks made from CNTs, graphene, or their mixture, with CNTs performing best. The diffusion process of polysulfides can be visualized in a specially designed glass-tube battery. X-ray photoelectron spectroscopy analysis of discharged electrodes was performed to characterize the species in the electrodes. A detailed analysis of the lithium diffusion constant, electrochemical impedance, and elemental distribution of sulfur in the electrodes further illustrates the differences between carbon interlayers for Li-S batteries. The proposed simple and scalable production of carbon-based networks may facilitate their application in the battery industry, even directly as flexible cathodes. The versatile and reconstructive strategy is extendable to preparing other flexible films and/or membranes for wider applications.

  6. A Collection of Visual Thesauri for Browsing Large Collections of Geographic Images.

    ERIC Educational Resources Information Center

    Ramsey, Marshall C.; Chen, Hsinchun; Zhu, Bin; Schatz, Bruce R.

    1999-01-01

    Discusses problems in creating indices and thesauri for digital libraries of geo-spatial multimedia content and proposes a scalable method to automatically generate visual thesauri of large collections of geo-spatial media using fuzzy, unsupervised machine-learning techniques. Uses satellite photographs as examples and discusses texture-based…

  7. Reimagining the microscope in the 21st century using the scalable adaptive graphics environment

    PubMed Central

    Mateevitsi, Victor; Patel, Tushar; Leigh, Jason; Levy, Bruce

    2015-01-01

    Background: Whole-slide imaging (WSI), while technologically mature, remains in the early adopter phase of the technology adoption lifecycle. One reason for this current situation is that current methods of visualizing and using WSI closely follow long-existing workflows for glass slides. We set out to “reimagine” the digital microscope in the era of cloud computing by combining WSI with the rich collaborative environment of the Scalable Adaptive Graphics Environment (SAGE). SAGE is a cross-platform, open-source visualization and collaboration tool that enables users to access, display and share a variety of data-intensive information, in a variety of resolutions and formats, from multiple sources, on display walls of arbitrary size. Methods: A prototype of a WSI viewer app in the SAGE environment was created. While not full featured, it enabled the testing of our hypothesis that these technologies could be blended together to change the essential nature of how microscopic images are utilized for patient care, medical education, and research. Results: Using the newly created WSI viewer app, demonstration scenarios were created for patient care and medical education. This included a live demonstration of a pathology consultation at the International Academy of Digital Pathology meeting in Boston in November 2014. Conclusions: SAGE is well suited to display, manipulate and collaborate using WSIs, along with other images and data, for a variety of purposes. It goes beyond how glass slides and current WSI viewers are being used today, changing the nature of digital pathology in the process. A fully developed WSI viewer app within SAGE has the potential to encourage the wider adoption of WSI throughout pathology. PMID:26110092

  8. City Forensics: Using Visual Elements to Predict Non-Visual City Attributes.

    PubMed

    Arietta, Sean M; Efros, Alexei A; Ramamoorthi, Ravi; Agrawala, Maneesh

    2014-12-01

    We present a method for automatically identifying and validating predictive relationships between the visual appearance of a city and its non-visual attributes (e.g. crime statistics, housing prices, population density etc.). Given a set of street-level images and (location, city-attribute-value) pairs of measurements, we first identify visual elements in the images that are discriminative of the attribute. We then train a predictor by learning a set of weights over these elements using non-linear Support Vector Regression. To perform these operations efficiently, we implement a scalable distributed processing framework that speeds up the main computational bottleneck (extracting visual elements) by an order of magnitude. This speedup allows us to investigate a variety of city attributes across 6 different American cities. We find that indeed there is a predictive relationship between visual elements and a number of city attributes including violent crime rates, theft rates, housing prices, population density, tree presence, graffiti presence, and the perception of danger. We also test human performance for predicting theft based on street-level images and show that our predictor outperforms this baseline with 33% higher accuracy on average. Finally, we present three prototype applications that use our system to (1) define the visual boundary of city neighborhoods, (2) generate walking directions that avoid or seek out exposure to city attributes, and (3) validate user-specified visual elements for prediction.

  9. Visualizing inequality

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo

    2016-07-01

    The study of socioeconomic inequality is of substantial importance, scientific and general alike. The graphic visualization of inequality is commonly conveyed by Lorenz curves. While Lorenz curves are a highly effective statistical tool for quantifying the distribution of wealth in human societies, they are less effective a tool for the visual depiction of socioeconomic inequality. This paper introduces an alternative to Lorenz curves: the hill curves. On the one hand, the hill curves are a potent scientific tool: they provide detailed scans of the rich-poor gaps in human societies under consideration, and are capable of accommodating infinitely many degrees of freedom. On the other hand, the hill curves are a powerful infographic tool: they visualize inequality in a most vivid and tangible way, with no quantitative skills required in order to grasp the visualization. The application of hill curves extends far beyond socioeconomic inequality. Indeed, the hill curves are highly effective 'hyperspectral' measures of statistical variability that are applicable in the context of size distributions at large. This paper establishes the notion of hill curves, analyzes them, and describes their application in the context of general size distributions.

  10. Advanced Visualization and Analysis of Climate Data using DV3D and UV-CDAT

    NASA Astrophysics Data System (ADS)

    Maxwell, T. P.

    2012-12-01

    This paper describes DV3D, a Vistrails package of high-level modules for the Ultra-scale Visualization Climate Data Analysis Tools (UV-CDAT) interactive visual exploration system that enables exploratory analysis of diverse and rich data sets stored in the Earth System Grid Federation (ESGF). DV3D provides user-friendly workflow interfaces for advanced visualization and analysis of climate data at a level appropriate for scientists. The application builds on VTK, an open-source, object-oriented library, for visualization and analysis. DV3D provides the high-level interfaces, tools, and application integrations required to make the analysis and visualization power of VTK readily accessible to users without exposing burdensome details such as actors, cameras, renderers, and transfer functions. It can run as a desktop application or distributed over a set of nodes for hyperwall or distributed visualization applications. DV3D is structured as a set of modules which can be linked to create workflows in Vistrails. Figure 1 displays a typical DV3D workflow as it would appear in the Vistrails workflow builder interface of UV-CDAT and, on the right, the visualization spreadsheet output of the workflow. Each DV3D module encapsulates a complex VTK pipeline with numerous supporting objects. Each visualization module implements a unique interactive 3D display. The integrated Vistrails visualization spreadsheet offers multiple synchronized visualization displays for desktop or hyperwall. The currently available displays include volume renderers, volume slicers, 3D isosurfaces, 3D hovmoller, and various vector plots. The DV3D GUI offers a rich selection of interactive query, browse, navigate, and configure options for all displays. All configuration operations are saved as Vistrails provenance. DV3D's seamless integration with UV-CDAT's climate data management system (CDMS) and other climate data analysis tools provides a wide range of climate data analysis operations, e
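
    DV3D hides the underlying VTK plumbing behind its high-level modules; for readers unfamiliar with what such a module wraps, the sketch below is a plain VTK Python pipeline that reads a structured volume, extracts an isosurface, and renders it. The file name and isovalue are placeholders, and this is not DV3D code.

    ```python
    # Rough sketch of the kind of VTK pipeline a DV3D isosurface module wraps:
    # read a structured volume, extract an isosurface, and render it.
    import vtk

    reader = vtk.vtkXMLImageDataReader()
    reader.SetFileName("temperature.vti")   # hypothetical gridded climate field

    contour = vtk.vtkContourFilter()
    contour.SetInputConnection(reader.GetOutputPort())
    contour.SetValue(0, 280.0)              # illustrative isovalue (Kelvin)

    mapper = vtk.vtkPolyDataMapper()
    mapper.SetInputConnection(contour.GetOutputPort())
    mapper.ScalarVisibilityOff()

    actor = vtk.vtkActor()
    actor.SetMapper(mapper)

    renderer = vtk.vtkRenderer()
    renderer.AddActor(actor)

    window = vtk.vtkRenderWindow()
    window.AddRenderer(renderer)

    interactor = vtk.vtkRenderWindowInteractor()
    interactor.SetRenderWindow(window)
    window.Render()
    interactor.Start()
    ```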

  11. A unified toolkit for information and scientific visualization

    NASA Astrophysics Data System (ADS)

    Wylie, Brian; Baumes, Jeffrey

    2009-01-01

    We present an expansion of the popular open source Visualization Toolkit (VTK) to support the ingestion, processing, and display of informatics data. The result is a flexible, component-based pipeline framework for the integration and deployment of algorithms in the scientific and informatics fields. This project, code named "Titan", is one of the first efforts to address the unification of information and scientific visualization in a systematic fashion. The result includes a wide range of informatics-oriented functionality: database access, graph algorithms, graph layouts, views, charts, UI components and more. Further, the data distribution, parallel processing and client/server capabilities of VTK provide an excellent platform for scalable analysis.

  12. Scalable graphene production: perspectives and challenges of plasma applications

    NASA Astrophysics Data System (ADS)

    Levchenko, Igor; Ostrikov, Kostya (Ken); Zheng, Jie; Li, Xingguo; Keidar, Michael; B. K. Teo, Kenneth

    2016-05-01

    Graphene, a newly discovered and extensively investigated material, has many unique and extraordinary properties which promise major technological advances in fields ranging from electronics to mechanical engineering and food production. Unfortunately, complex techniques and high production costs hinder commonplace applications. Scaling of existing graphene production techniques to the industrial level without compromising its properties is a current challenge. This article focuses on the perspectives and challenges of scalability, equipment, and technological perspectives of the plasma-based techniques which offer many unique possibilities for the synthesis of graphene and graphene-containing products. The plasma-based processes are amenable for scaling and could also be useful to enhance the controllability of the conventional chemical vapour deposition method and some other techniques, and to ensure a good quality of the produced graphene. We examine the unique features of the plasma-enhanced graphene production approaches, including the techniques based on inductively-coupled and arc discharges, in the context of their potential scaling to mass production following the generic scaling approaches applicable to the existing processes and systems. This work analyses a large amount of the recent literature on graphene production by various techniques and summarizes the results in a tabular form to provide a simple and convenient comparison of several available techniques. Our analysis reveals a significant potential of scalability for plasma-based technologies, based on the scaling-related process characteristics. Among other processes, a greater yield of 1 g × h⁻¹ m⁻² was reached for the arc discharge technology, whereas the other plasma-based techniques show process yields comparable to the neutral-gas based methods. Selected plasma-based techniques show lower energy consumption than in thermal CVD processes, and the ability to produce graphene flakes of various

  13. Scalable graphene production: perspectives and challenges of plasma applications.

    PubMed

    Levchenko, Igor; Ostrikov, Kostya Ken; Zheng, Jie; Li, Xingguo; Keidar, Michael; B K Teo, Kenneth

    2016-05-19

    Graphene, a newly discovered and extensively investigated material, has many unique and extraordinary properties which promise major technological advances in fields ranging from electronics to mechanical engineering and food production. Unfortunately, complex techniques and high production costs hinder commonplace applications. Scaling of existing graphene production techniques to the industrial level without compromising its properties is a current challenge. This article focuses on the perspectives and challenges of scalability, equipment, and technological perspectives of the plasma-based techniques which offer many unique possibilities for the synthesis of graphene and graphene-containing products. The plasma-based processes are amenable for scaling and could also be useful to enhance the controllability of the conventional chemical vapour deposition method and some other techniques, and to ensure a good quality of the produced graphene. We examine the unique features of the plasma-enhanced graphene production approaches, including the techniques based on inductively-coupled and arc discharges, in the context of their potential scaling to mass production following the generic scaling approaches applicable to the existing processes and systems. This work analyses a large amount of the recent literature on graphene production by various techniques and summarizes the results in a tabular form to provide a simple and convenient comparison of several available techniques. Our analysis reveals a significant potential of scalability for plasma-based technologies, based on the scaling-related process characteristics. Among other processes, a greater yield of 1 g × h(-1) m(-2) was reached for the arc discharge technology, whereas the other plasma-based techniques show process yields comparable to the neutral-gas based methods. Selected plasma-based techniques show lower energy consumption than in thermal CVD processes, and the ability to produce graphene flakes of

  14. AstroVis: Visualizing astronomical data cubes

    NASA Astrophysics Data System (ADS)

    Finniss, Stephen; Tyler, Robin; Questiaux, Jacques

    2016-08-01

    AstroVis enables rapid visualization of large data files on platforms supporting the OpenGL rendering library. Radio astronomical observations are typically three dimensional and stored as data cubes. AstroVis implements a scalable approach to accessing these files using three components: a File Access Component (FAC) that reduces the impact of reading time, which speeds up access to the data; the Image Processing Component (IPC), which breaks up the data cube into smaller pieces that can be processed locally and gives a representation of the whole file; and Data Visualization, which implements an Overview + Detail approach to reduce the dimensions of the data being worked with and the amount of memory required to store it. The result is a 3D display paired with a 2D detail display that contains a small subsection of the original file in full resolution without reducing the data in any way.

  15. A Principled Way of Assessing Visualization Literacy.

    PubMed

    Boy, Jeremy; Rensink, Ronald A; Bertini, Enrico; Fekete, Jean-Daniel

    2014-12-01

    We describe a method for assessing the visualization literacy (VL) of a user. Assessing how well people understand visualizations has great value for research (e.g., to avoid confounds), for design (e.g., to best determine the capabilities of an audience), for teaching (e.g., to assess the level of new students), and for recruiting (e.g., to assess the level of interviewees). This paper proposes a method for assessing VL based on Item Response Theory. It describes the design and evaluation of two VL tests for line graphs, and presents the extension of the method to bar charts and scatterplots. Finally, it discusses the reimplementation of these tests for fast, effective, and scalable web-based use.
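
    The abstract does not spell out the scoring model; as background only, a common Item Response Theory choice is the two-parameter logistic (2PL) model sketched below, in which the probability of answering an item correctly depends on the respondent's ability and the item's difficulty and discrimination. The parameter values are illustrative, not from the paper.

    ```python
    # Background sketch of the two-parameter logistic (2PL) IRT model that
    # visualization-literacy tests of this kind typically build on.
    import numpy as np

    def p_correct(theta, a, b):
        """Probability that a respondent of ability `theta` answers correctly an
        item with discrimination `a` and difficulty `b`."""
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    # Example: an item of moderate difficulty (b=0.5) and good discrimination (a=1.7).
    abilities = np.linspace(-3, 3, 7)
    print(p_correct(abilities, a=1.7, b=0.5))
    ```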

  16. Vials: Visualizing Alternative Splicing of Genes

    PubMed Central

    Strobelt, Hendrik; Alsallakh, Bilal; Botros, Joseph; Peterson, Brant; Borowsky, Mark; Pfister, Hanspeter; Lex, Alexander

    2016-01-01

    Alternative splicing is a process by which the same DNA sequence is used to assemble different proteins, called protein isoforms. Alternative splicing works by selectively omitting some of the coding regions (exons) typically associated with a gene. Detection of alternative splicing is difficult and uses a combination of advanced data acquisition methods and statistical inference. Knowledge about the abundance of isoforms is important for understanding both normal processes and diseases and to eventually improve treatment through targeted therapies. The data, however, is complex and current visualizations for isoforms are neither perceptually efficient nor scalable. To remedy this, we developed Vials, a novel visual analysis tool that enables analysts to explore the various datasets that scientists use to make judgments about isoforms: the abundance of reads associated with the coding regions of the gene, evidence for junctions, i.e., edges connecting the coding regions, and predictions of isoform frequencies. Vials is scalable as it allows for the simultaneous analysis of many samples in multiple groups. Our tool thus enables experts to (a) identify patterns of isoform abundance in groups of samples and (b) evaluate the quality of the data. We demonstrate the value of our tool in case studies using publicly available datasets. PMID:26529712

  17. Scalable Integrated Region-Based Image Retrieval Using IRM and Statistical Clustering.

    ERIC Educational Resources Information Center

    Wang, James Z.; Du, Yanping

    Statistical clustering is critical in designing scalable image retrieval systems. This paper presents a scalable algorithm for indexing and retrieving images based on region segmentation. The method uses statistical clustering on region features and IRM (Integrated Region Matching), a measure developed to evaluate overall similarity between images…

  18. Final Report: Center for Programming Models for Scalable Parallel Computing

    SciTech Connect

    Mellor-Crummey, John

    2011-09-13

    As part of the Center for Programming Models for Scalable Parallel Computing, Rice University collaborated with project partners in the design, development and deployment of language, compiler, and runtime support for parallel programming models to support application development for the “leadership-class” computer systems at DOE national laboratories. Work over the course of this project has focused on the design, implementation, and evaluation of a second-generation version of Coarray Fortran. Research and development efforts of the project have focused on the CAF 2.0 language, compiler, runtime system, and supporting infrastructure. This has involved working with the teams that provide infrastructure for CAF that we rely on, implementing new language and runtime features, producing an open source compiler that enabled us to evaluate our ideas, and evaluating our design and implementation through the use of benchmarks. The report details the research, development, findings, and conclusions from this work.

  19. Overview of the Scalable Coherent Interface, IEEE STD 1596 (SCI)

    SciTech Connect

    Gustavson, D.B.; James, D.V.; Wiggers, H.A.

    1992-10-01

    The Scalable Coherent Interface standard defines a new generation of interconnection that spans the full range from supercomputer memory "bus" to campus-wide network. SCI provides bus-like services and a shared-memory software model while using an underlying packet protocol on many independent communication links. Initially these links are 1 GByte/s (wires) and 1 GBit/s (fiber), but the protocol scales well to future faster or lower-cost technologies. The interconnect may use switches, meshes, and rings. The SCI distributed-shared-memory model is simple and versatile, enabling for the first time a smooth integration of highly parallel multiprocessors, workstations, personal computers, I/O, networking and data acquisition.

  20. A scalable control plane for optical-packet-switched networks

    NASA Astrophysics Data System (ADS)

    Kang, J.; Reed, M. J.

    2005-02-01

    This paper describes the design considerations and architecture of a Generalized Multi-Protocol Label Switching (GMPLS)-based scalable control plane that we are prototyping for optical packet switched (OPS) networks. Functional components of the control plane include a user network interface (UNI), optical label coding, multi-layer routing/traffic engineering algorithm and integrated signaling protocol. Initial implementation and experimentation has demonstrated the feasibility of our prototype as a testbed for various control schemes for OPS networks. One key element of the architecture proposed is the use of external MPLS labeling controlled by the UNI. This proposal reduces the load on the OPS domain header processing while having little impact on the MPLS domain.

  1. Enzyme-Free Scalable DNA Digital Design Techniques: A Review.

    PubMed

    Konampurath George, Aby; Singh, Harpreet

    2016-12-02

    With the recent developments in DNA nanotechnology, DNA has been used as the basic building block for the design of nanostructures, autonomous molecular motors, various devices, and circuits. DNA is considered as a possible candidate for replacing silicon for designing digital circuits in a near future, especially in implantable medical devices, because of its parallelism, computational powers, small size, light weight, and compatibility with bio-signals. The research in DNA digital design is in early stages of development, and electrical and computer engineers are not much attracted towards this field. In this paper, we give a brief review of the existing enzyme-free scalable DNA digital design techniques which are recently developed. With the developments in DNA circuits, it would be possible to design synthetic molecular systems, therapeutic molecular devices, and other molecular scale devices and instruments. The ultimate aim will be to build complex digital designs using DNA strands which may even be placed inside a human body.

  2. Enzyme-Free Scalable DNA Digital Design Techniques: A Review.

    PubMed

    George, Aby K; Singh, Harpreet

    2016-12-01

    With the recent developments in DNA nanotechnology, DNA has been used as the basic building block for the design of nanostructures, autonomous molecular motors, various devices, and circuits. DNA is considered as a possible candidate for replacing silicon for designing digital circuits in a near future, especially in implantable medical devices, because of its parallelism, computational powers, small size, light weight, and compatibility with bio-signals. The research in DNA digital design is in early stages of development, and electrical and computer engineers are not much attracted towards this field. In this paper, we give a brief review of the existing enzyme-free scalable DNA digital design techniques which are recently developed. With the developments in DNA circuits, it would be possible to design synthetic molecular systems, therapeutic molecular devices, and other molecular scale devices and instruments. The ultimate aim will be to build complex digital designs using DNA strands which may even be placed inside a human body.

  3. Image and geometry processing with Oriented and Scalable Map.

    PubMed

    Hua, Hao

    2016-05-01

    We turn the Self-organizing Map (SOM) into an Oriented and Scalable Map (OS-Map) by generalizing the neighborhood function and the winner selection. The homogeneous Gaussian neighborhood function is replaced with the matrix exponential. Thus we can specify the orientation either in the map space or in the data space. Moreover, we associate the map's global scale with the locality of winner selection. Our model is suited for a number of graphical applications such as texture/image synthesis, surface parameterization, and solid texture synthesis. OS-Map is more generic and versatile than the task-specific algorithms for these applications. Our work reveals the overlooked strength of SOMs in processing images and geometries.

  4. Scalable syntheses of the BET bromodomain inhibitor JQ1

    PubMed Central

    Syeda, Shameem Sultana; Jakkaraj, Sudhakar; Georg, Gunda I.

    2015-01-01

    We have developed methods involving the use of alternate, safer reagents for the scalable syntheses of the potent BET bromodomain inhibitor JQ1. A one-pot, three-step method, involving the conversion of a benzodiazepine to a thioamide using Lawesson’s reagent followed by amidrazone formation and installation of the triazole moiety, furnished JQ1. This method provides good yields and a facile purification process. For the synthesis of enantiomerically enriched (+)-JQ1, the highly toxic reagent diethyl chlorophosphate, used in a previous synthesis, was replaced with the safer reagent diphenyl chlorophosphate in the three-step one-pot triazole formation without affecting the yield and enantiomeric purity of (+)-JQ1. PMID:26034331

  5. Scalable lithography from Natural DNA Patterns via polyacrylamide gel

    PubMed Central

    Qu, JieHao; Hou, XianLiang; Fan, WanChao; Xi, GuangHui; Diao, HongYan; Liu, XiangDon

    2015-01-01

    A facile strategy for fabricating scalable stamps has been developed using cross-linked polyacrylamide gel (PAMG) that controllably and precisely shrinks and swells with water content. Aligned patterns of natural DNA molecules were prepared by evaporative self-assembly on a PMMA substrate, and were transferred to unsaturated polyester resin (UPR) to form a negative replica. The negative was used to pattern the linear structures onto the surface of water-swollen PAMG, and the pattern sizes on the PAMG stamp were customized by adjusting the water content of the PAMG. As a result, consistent reproduction of DNA patterns could be achieved with feature sizes that can be controlled over the range of 40%–200% of the original pattern dimensions. This methodology is novel and may pave a new avenue for manufacturing stamp-based functional nanostructures in a simple and cost-effective manner on a large scale. PMID:26639572

  6. Scalable Lunar Surface Networks and Adaptive Orbit Access

    NASA Technical Reports Server (NTRS)

    Wang, Xudong

    2015-01-01

    Teranovi Technologies, Inc., has developed innovative network architecture, protocols, and algorithms for both lunar surface and orbit access networks. A key component of the overall architecture is a medium access control (MAC) protocol that includes a novel mechanism of overlaying time division multiple access (TDMA) and carrier sense multiple access with collision avoidance (CSMA/CA), ensuring scalable throughput and quality of service. The new MAC protocol is compatible with legacy Institute of Electrical and Electronics Engineers (IEEE) 802.11 networks. Advanced features include efficient power management, adaptive channel width adjustment, and error control capability. A hybrid routing protocol combines the advantages of ad hoc on-demand distance vector (AODV) routing and disruption/delay-tolerant network (DTN) routing. Performance is significantly better than AODV or DTN and will be particularly effective for wireless networks with intermittent links, such as lunar and planetary surface networks and orbit access networks.

  7. A Scalable Implementation of Van der Waals Density Functionals

    NASA Astrophysics Data System (ADS)

    Wu, Jun; Gygi, Francois

    2010-03-01

    Recently developed Van der Waals density functionals [1] offer the promise to account for weak intermolecular interactions that are not described accurately by local exchange-correlation density functionals. In spite of recent progress [2], the computational cost of such calculations remains high. We present a scalable parallel implementation of the functional proposed by Dion et al. [1]. The method is implemented in the Qbox first-principles simulation code (http://eslab.ucdavis.edu/software/qbox). Application to large molecular systems will be presented. [1] M. Dion et al., Phys. Rev. Lett. 92, 246401 (2004). [2] G. Roman-Perez and J. M. Soler, Phys. Rev. Lett. 103, 096102 (2009).

  8. Characterization of scalable ion traps for quantum computation

    NASA Astrophysics Data System (ADS)

    Epstein, R. J.; Bollinger, J. J.; Leibfried, D.; Seidelin, S.; Britton, J.; Wesenberg, J. H.; Shiga, N.; Amini, J. M.; Blakestad, R. B.; Brown, K. R.; Home, J. P.; Itano, W. M.; Jost, J. D.; Langer, C.; Ozeri, R.; Wineland, D. J.

    2007-03-01

    We discuss the experimental characterization of several scalable ion trap architectures for quantum information processing. We have developed an apparatus for testing planar ion trap chips which features: a standardized chip carrier for ease of interchanging traps, a single-laser Raman cooling scheme, and photo-ionization loading of Mg^+ ions. The primary benchmark for a given trap is the heating rate of the ion motional degrees of freedom, which can reduce multi-ion quantum gate fidelities. As the heating rate depends on the ion trap geometry and materials, our testing apparatus allows for efficient iteration and optimization of trap parameters. With the recent ability to fabricate planar traps with sufficiently low heating rates for quantum computation [2], we describe current results on the simulation and fabrication of planar traps with multiple intersecting trapping zones for versatile ion choreography. S. Seidelin et al., Phys. Rev. Lett. 96, 253003 (2006). J. Kim, et al., Quantum Inf. Comput. 5, 515 (2005).

  9. A Scalable P2P Video Streaming Framework

    NASA Astrophysics Data System (ADS)

    Lee, Ivan

    The peer-to-peer (P2P) networking technique represents a vast potential to overcome many constraints in conventional content distribution networks, especially for real-time applications such as P2P streaming. In this chapter, a P2P streaming system is examined; the proposed system combines a multiple-description source coding technique with a scalable streaming infrastructure. The proposed system aims to gradually offload congested traffic from a centralized bottleneck to the under-utilized P2P networks and hence provides seamless transitions from client/server streaming to centralized P2P streaming and to decentralized P2P streaming. The performance of the proposed framework is evaluated in terms of video frame loss rate, which reflects the probability of frozen video frames.

  10. Development of a scalable pharmacogenomic clinical decision support service.

    PubMed

    Fusaro, Vincent A; Brownstein, Catherine; Wolf, Wendy; Clinton, Catherine; Savage, Sarah; Mandl, Kenneth D; Margulies, David; Manzi, Shannon

    2013-01-01

    Advances in sequencing technology are making genomic data more accessible within the healthcare environment. Published pharmacogenetic guidelines attempt to provide a clinical context for specific genomic variants; however, the actual implementation to convert genomic data into a clinical report integrated within an electronic medical record system is a major challenge for any hospital. We created a two-part solution that integrates with the medical record system and converts genetic variant results into an interpreted clinical report based on published guidelines. We successfully developed a scalable infrastructure to support TPMT genetic testing and are currently testing approximately two individuals per week in our production version. We plan to release an online variant to clinical interpretation reporting system in order to facilitate translation of pharmacogenetic information into clinical practice.

  11. Final Report. Center for Scalable Application Development Software

    SciTech Connect

    Mellor-Crummey, John

    2014-10-26

    The Center for Scalable Application Development Software (CScADS) was established as a partnership between Rice University, Argonne National Laboratory, University of California Berkeley, University of Tennessee – Knoxville, and University of Wisconsin – Madison. CScADS pursued an integrated set of activities with the aim of increasing the productivity of DOE computational scientists by catalyzing the development of systems software, libraries, compilers, and tools for leadership computing platforms. Principal Center activities were workshops to engage the research community in the challenges of leadership computing, research and development of open-source software, and work with computational scientists to help them develop codes for leadership computing platforms. This final report summarizes CScADS activities at Rice University in these areas.

  12. A Scalable Framework to Detect Personal Health Mentions on Twitter

    PubMed Central

    Fabbri, Daniel; Rosenbloom, S Trent

    2015-01-01

    Background Biomedical research has traditionally been conducted via surveys and the analysis of medical records. However, these resources are limited in their content, such that non-traditional domains (eg, online forums and social media) have an opportunity to supplement the view of an individual’s health. Objective The objective of this study was to develop a scalable framework to detect personal health status mentions on Twitter and assess the extent to which such information is disclosed. Methods We collected more than 250 million tweets via the Twitter streaming API over a 2-month period in 2014. The corpus was filtered down to approximately 250,000 tweets, stratified across 34 high-impact health issues, based on guidance from the Medical Expenditure Panel Survey. We created a labeled corpus of several thousand tweets via a survey, administered over Amazon Mechanical Turk, that documents when terms correspond to mentions of personal health issues or an alternative (eg, a metaphor). We engineered a scalable classifier for personal health mentions via feature selection and assessed its potential over the health issues. We further investigated the utility of the tweets by determining the extent to which Twitter users disclose personal health status. Results Our investigation yielded several notable findings. First, we find that tweets from a small subset of the health issues can train a scalable classifier to detect health mentions. Specifically, training on 2000 tweets from four health issues (cancer, depression, hypertension, and leukemia) yielded a classifier with precision of 0.77 on all 34 health issues. Second, Twitter users disclosed personal health status for all health issues. Notably, personal health status was disclosed over 50% of the time for 11 out of 34 (33%) investigated health issues. Third, the disclosure rate was dependent on the health issue in a statistically significant manner (P<.001). For instance, more than 80% of the tweets about
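
    The paper's engineered features and labeled corpus are not reproduced here; as a minimal stand-in for the kind of classifier described, the sketch below trains a TF-IDF plus logistic-regression model with scikit-learn. The tiny inline dataset is made up purely so the sketch runs.

    ```python
    # Minimal stand-in for a personal-health-mention classifier: TF-IDF features
    # with logistic regression via scikit-learn. Not the paper's feature set.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    tweets = [
        "just got diagnosed with hypertension, starting meds tomorrow",
        "this traffic is giving me a heart attack lol",
        "my depression has been rough this week",
        "that movie was so depressing",
    ]
    labels = [1, 0, 1, 0]  # 1 = personal health status mention, 0 = figurative/other

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    clf.fit(tweets, labels)
    print(clf.predict(["pretty sure my hypertension is acting up again"]))
    ```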

  13. A scalable correlator for multichannel diffuse correlation spectroscopy

    NASA Astrophysics Data System (ADS)

    Stapels, Christopher J.; Kolodziejski, Noah J.; McAdams, Daniel; Podolsky, Matthew J.; Fernandez, Daniel E.; Farkas, Dana; Christian, James F.

    2016-03-01

    Diffuse correlation spectroscopy (DCS) is a technique which enables powerful and robust non-invasive optical studies of tissue micro-circulation and vascular blood flow. The technique amounts to autocorrelation analysis of coherent photons after their migration through moving scatterers and subsequent collection by single-mode optical fibers. A primary cost driver of DCS instruments is the commercial hardware-based correlator, limiting the proliferation of multi-channel instruments for validation of perfusion analysis as a clinical diagnostic metric. We present the development of a low-cost scalable correlator enabled by microchip-based time-tagging, and a software-based multi-tau data analysis method. We will discuss the capabilities of the instrument as well as the implementation and validation of 2- and 8-channel systems built for live animal and pre-clinical settings.
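
    The instrument's microchip time-tagging and multi-tau binning are not reproduced here; the sketch below only computes the quantity a DCS correlator estimates, the normalized intensity autocorrelation g2(tau), from an evenly binned photon-count series. The synthetic Poisson signal is an illustrative assumption.

    ```python
    # Software sketch of the normalized intensity autocorrelation g2(tau)
    # computed from binned photon counts; not the paper's multi-tau scheme.
    import numpy as np

    def g2(counts, max_lag):
        """Normalized intensity autocorrelation for integer lags 1..max_lag."""
        counts = np.asarray(counts, dtype=float)
        mean_sq = counts.mean() ** 2
        lags = np.arange(1, max_lag + 1)
        return np.array([np.mean(counts[:-k] * counts[k:]) / mean_sq for k in lags])

    # Illustrative use on synthetic Poisson-distributed counts.
    rng = np.random.default_rng(1)
    signal = rng.poisson(lam=5.0, size=100_000)
    print(g2(signal, max_lag=10))
    ```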

  14. Lilith: A scalable secure tool for massively parallel distributed computing

    SciTech Connect

    Armstrong, R.C.; Camp, L.J.; Evensky, D.A.; Gentile, A.C.

    1997-06-01

    Changes in high performance computing have necessitated the ability to utilize and interrogate potentially many thousands of processors. The ASCI (Advanced Strategic Computing Initiative) program conducted by the United States Department of Energy, for example, envisions thousands of distinct operating systems connected by low-latency gigabit-per-second networks. In addition, multiple systems of this kind will be linked via high-capacity networks with latencies as low as the speed of light will allow. Code which spans systems of this sort must be scalable; yet constructing such code, whether for applications, debugging, or maintenance, is an unsolved problem. Lilith is a research software platform that attempts to answer these questions with an eye toward meeting these needs. Presently, Lilith exists as a test-bed, written in Java, for various spanning algorithms and security schemes. The test-bed software has, and enforces, hooks allowing implementation and testing of various security schemes.

  15. Scalable load-balance measurement for SPMD codes

    SciTech Connect

    Gamblin, T; de Supinski, B R; Schulz, M; Fowler, R; Reed, D

    2008-08-05

    Good load balance is crucial on very large parallel systems, but the most sophisticated algorithms introduce dynamic imbalances through adaptation in domain decomposition or use of adaptive solvers. To observe and diagnose imbalance, developers need system-wide, temporally-ordered measurements from full-scale runs. This potentially requires data collection from multiple code regions on all processors over the entire execution. Doing this instrumentation naively can, in combination with the application itself, exceed available I/O bandwidth and storage capacity, and can induce severe behavioral perturbations. We present and evaluate a novel technique for scalable, low-error load balance measurement. This uses a parallel wavelet transform and other parallel encoding methods. We show that our technique collects and reconstructs system-wide measurements with low error. Compression time scales sublinearly with system size and data volume is several orders of magnitude smaller than the raw data. The overhead is low enough for online use in a production environment.
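
    The paper's transform runs in parallel across processes; the sketch below is the single-node analogue: a Haar wavelet decomposition of one load trace with the smallest coefficients zeroed out, illustrating why such measurements compress well. The synthetic trace and threshold are illustrative assumptions, not the paper's encoding.

    ```python
    # Serial sketch of the compression idea: Haar wavelet transform of a load
    # trace, keeping only the largest coefficients.
    import numpy as np

    def haar_forward(x):
        """One full Haar decomposition of a length-2^k signal."""
        x = np.asarray(x, dtype=float)
        coeffs = []
        while len(x) > 1:
            avg = (x[0::2] + x[1::2]) / np.sqrt(2.0)
            diff = (x[0::2] - x[1::2]) / np.sqrt(2.0)
            coeffs.append(diff)
            x = avg
        coeffs.append(x)
        return coeffs

    def compress(coeffs, keep_fraction=0.1):
        """Zero out all but the largest `keep_fraction` of coefficients."""
        flat = np.concatenate(coeffs)
        cutoff = np.quantile(np.abs(flat), 1.0 - keep_fraction)
        return [np.where(np.abs(c) >= cutoff, c, 0.0) for c in coeffs]

    # Illustrative use on a synthetic load trace of 1024 timesteps.
    rng = np.random.default_rng(0)
    load = np.sin(np.linspace(0, 6 * np.pi, 1024)) + 0.05 * rng.standard_normal(1024)
    sparse = compress(haar_forward(load), keep_fraction=0.05)
    ```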

  16. A Practical and Scalable Tool to Find Overlaps between Sequences

    PubMed Central

    Haj Rachid, Maan

    2015-01-01

    The evolution of the next generation sequencing technology increases the demand for efficient solutions, in terms of space and time, for several bioinformatics problems. This paper presents a practical and easy-to-implement solution for one of these problems, namely, the all-pairs suffix-prefix problem, using a compact prefix tree. The paper demonstrates an efficient construction of this time-efficient and space-economical tree data structure. The paper presents techniques for parallel implementations of the proposed solution. Experimental evaluation indicates superior results in terms of space and time over existing solutions. Results also show that the proposed technique is highly scalable in a parallel execution environment. PMID:25961045
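
    The paper's compact prefix tree and parallel construction are not reproduced here; the sketch below solves the same all-pairs suffix-prefix problem with a plain (uncompressed) trie, purely to illustrate the problem statement and the trie idea. The example strings are made up.

    ```python
    # Trie-based sketch of the all-pairs suffix-prefix problem: for every
    # ordered pair (s, t), find the longest suffix of s that is a prefix of t.
    class TrieNode:
        def __init__(self):
            self.children = {}
            self.ends_here = []   # indices of strings whose prefix passes through this node

    def build_trie(strings):
        root = TrieNode()
        for idx, s in enumerate(strings):
            node = root
            for ch in s:
                node = node.children.setdefault(ch, TrieNode())
                node.ends_here.append(idx)
        return root

    def longest_suffix_prefix(strings):
        root = build_trie(strings)
        overlaps = {}                           # (i, j) -> overlap length
        for i, s in enumerate(strings):
            for start in range(len(s)):         # suffixes of s, longest first
                node, ok = root, True
                for ch in s[start:]:
                    node = node.children.get(ch)
                    if node is None:
                        ok = False
                        break
                if not ok:
                    continue
                for j in node.ends_here:        # strings with this suffix as a prefix
                    if j != i and (i, j) not in overlaps:
                        overlaps[(i, j)] = len(s) - start
        return overlaps

    print(longest_suffix_prefix(["ACGT", "GTCC", "CCAA"]))
    ```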

  17. Scalable lithography from Natural DNA Patterns via polyacrylamide gel

    NASA Astrophysics Data System (ADS)

    Qu, Jiehao; Hou, Xianliang; Fan, Wanchao; Xi, Guanghui; Diao, Hongyan; Liu, Xiangdon

    2015-12-01

    A facile strategy for fabricating scalable stamps has been developed using cross-linked polyacrylamide gel (PAMG) that controllably and precisely shrinks and swells with water content. Aligned patterns of natural DNA molecules were prepared by evaporative self-assembly on a PMMA substrate, and were transferred to unsaturated polyester resin (UPR) to form a negative replica. The negative was used to pattern the linear structures onto the surface of water-swollen PAMG, and the pattern sizes on the PAMG stamp were customized by adjusting the water content of the PAMG. As a result, consistent reproduction of DNA patterns could be achieved with feature sizes that can be controlled over the range of 40%-200% of the original pattern dimensions. This methodology is novel and may pave a new avenue for manufacturing stamp-based functional nanostructures in a simple and cost-effective manner on a large scale.

  18. A Scalable Nonuniform Pointer Analysis for Embedded Program

    NASA Technical Reports Server (NTRS)

    Venet, Arnaud

    2004-01-01

    In this paper we present a scalable pointer analysis for embedded applications that is able to distinguish between instances of recursively defined data structures and elements of arrays. The main contribution consists of an efficient yet precise algorithm that can handle multithreaded programs. We first perform an inexpensive flow-sensitive analysis of each function in the program that generates semantic equations describing the effect of the function on the memory graph. These equations bear numerical constraints that describe nonuniform points-to relationships. We then iteratively solve these equations in order to obtain an abstract storage graph that describes the shape of data structures at every point of the program for all possible thread interleavings. We bring experimental evidence that this approach is tractable and precise for real-size embedded applications.

  19. Dual-Matrix Sampling for Scalable Translucent Material Rendering.

    PubMed

    Wu, Yu-Ting; Li, Tzu-Mao; Lin, Yu-Hsun; Chuang, Yung-Yu

    2015-03-01

    This paper introduces a scalable algorithm for rendering translucent materials with complex lighting. We represent the light transport, under a diffusion approximation, with a dual-matrix representation consisting of the Light-to-Surface and Surface-to-Camera matrices. By exploiting the structure within these matrices, the proposed method can locate surface samples with little contribution using only subsampled matrices, and thus avoids wasting computation on those samples. The decoupled estimation of irradiance and diffuse BSSRDFs also allows us to derive a tight error bound, making the adaptive diffusion approximation more efficient and accurate. Experiments show that our method outperforms previous methods for translucent material rendering, especially in large scenes with massive translucent surfaces shaded by complex illumination.

  20. Neuromorphic adaptive plastic scalable electronics: analog learning systems.

    PubMed

    Srinivasa, Narayan; Cruz-Albrecht, Jose

    2012-01-01

    Decades of research to build programmable intelligent machines have demonstrated limited utility in complex, real-world environments. Comparing their performance with biological systems, these machines are less efficient by a factor of one million to one billion in complex, real-world environments. The Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program is a multifaceted Defense Advanced Research Projects Agency (DARPA) project that seeks to break the programmable machine paradigm and define a new path for creating useful, intelligent machines. Since real-world systems exhibit infinite combinatorial complexity, electronic neuromorphic machine technology would be preferable in a host of applications, but useful and practical implementations still do not exist. HRL Laboratories LLC has embarked on addressing these challenges, and, in this article, we provide an overview of our project and progress made thus far.

  1. Scalable sensing electronics towards a motion capture suit

    NASA Astrophysics Data System (ADS)

    Xu, Daniel; Gisby, Todd A.; Xie, Shane; Anderson, Iain A.

    2013-04-01

    Being able to accurately record body motion allows complex movements to be characterised and studied. This is especially important in the film or sport coaching industry. Unfortunately, the human body has over 600 skeletal muscles, giving rise to multiple degrees of freedom. In order to accurately capture motion such as hand gestures, elbow or knee flexion and extension, vast numbers of sensors are required. Dielectric elastomer (DE) sensors are an emerging class of electroactive polymer (EAP) that is soft, lightweight and compliant. These characteristics are ideal for a motion capture suit. One challenge is to design sensing electronics that can simultaneously measure multiple sensors. This paper describes a scalable capacitive sensing device that can measure up to 8 different sensors with an update rate of 20 Hz.

  2. Center for Programming Models for Scalable Parallel Computing

    SciTech Connect

    John Mellor-Crummey

    2008-02-29

    Rice University's achievements as part of the Center for Programming Models for Scalable Parallel Computing include: (1) design and implementation of cafc, the first multi-platform CAF compiler for distributed and shared-memory machines, (2) performance studies of the efficiency of programs written using the CAF and UPC programming models, (3) a novel technique to analyze explicitly-parallel SPMD programs that facilitates optimization, (4) design, implementation, and evaluation of new language features for CAF, including communication topologies, multi-version variables, and distributed multithreading to simplify development of high-performance codes in CAF, and (5) a synchronization strength reduction transformation for automatically replacing barrier-based synchronization with more efficient point-to-point synchronization. The prototype Co-array Fortran compiler cafc developed in this project is available as open source software from http://www.hipersoft.rice.edu/caf.

  3. SCALABLE FUSED LASSO SVM FOR CONNECTOME-BASED DISEASE PREDICTION

    PubMed Central

    Watanabe, Takanori; Scott, Clayton D.; Kessler, Daniel; Angstadt, Michael; Sripada, Chandra S.

    2015-01-01

    There is substantial interest in developing machine-based methods that reliably distinguish patients from healthy controls using high dimensional correlation maps known as functional connectomes (FCs) generated from resting state fMRI. To address the dimensionality of FCs, the current body of work relies on feature selection techniques that are blind to the spatial structure of the data. In this paper, we propose to use the fused Lasso regularized support vector machine to explicitly account for the 6-D structure of the FC (defined by pairs of points in 3-D brain space). In order to solve the resulting nonsmooth and large-scale optimization problem, we introduce a novel and scalable algorithm based on the alternating direction method. Experiments on real resting state scans show that our approach can recover results that are more neuroscientifically informative than previous methods. PMID:25892971
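
    A minimal sketch of the objective described above is given below: hinge loss plus an l1 sparsity penalty and a fused-lasso penalty over neighboring features. The ADMM-based solver from the paper is not reproduced, and the 1-D chain adjacency used here is a stand-in for the 6-D spatial structure of a functional connectome.

      import numpy as np

      def fused_lasso_svm_objective(w, b, X, y, lam1, lam2, neighbors):
          # Hinge loss + l1 sparsity + fusion penalty over adjacent features.
          margins = y * (X @ w + b)
          hinge = np.maximum(0.0, 1.0 - margins).sum()
          sparsity = lam1 * np.abs(w).sum()
          fusion = lam2 * sum(abs(w[i] - w[j]) for i, j in neighbors)
          return hinge + sparsity + fusion

      rng = np.random.default_rng(1)
      X = rng.standard_normal((20, 5))              # 20 subjects, 5 toy features
      y = np.where(rng.standard_normal(20) > 0, 1.0, -1.0)
      w = rng.standard_normal(5)
      chain = [(i, i + 1) for i in range(4)]        # hypothetical 1-D adjacency
      print(fused_lasso_svm_objective(w, 0.0, X, y, lam1=0.1, lam2=0.5, neighbors=chain))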

  4. Scalable in-situ qubit calibration during repetitive error detection

    NASA Astrophysics Data System (ADS)

    Kelly, J.; Barends, R.; Fowler, A.; Mutus, J.; Campbell, B.; Chen, Y.; Chen, Z.; Chiaro, B.; Dunsworth, A.; Jeffrey, E.; Lucero, E.; Megrant, A.; Neeley, M.; Neill, C.; O'Malley, P. J. J.; Roushan, P.; Sank, D.; Quintana, C.; Vainsencher, A.; Wenner, J.; White, T.; Martinis, J. M.

    A quantum computer protects a quantum state from the environment through the careful manipulations of thousands or millions of physical qubits. However, operating such quantities of qubits at the necessary level of precision is an open challenge, as optimal control parameters can vary between qubits and drift in time. We present a method to optimize physical qubit parameters while error detection is running using a nine qubit system performing the bit-flip repetition code. We demonstrate how gate optimization can be parallelized in a large-scale qubit array and show that the presented method can be used to simultaneously compensate for independent or correlated qubit parameter drifts. Our method is O(1) scalable to systems of arbitrary size, providing a path towards controlling the large numbers of qubits needed for a fault-tolerant quantum computer.

  5. Scalable in situ qubit calibration during repetitive error detection

    NASA Astrophysics Data System (ADS)

    Kelly, J.; Barends, R.; Fowler, A. G.; Megrant, A.; Jeffrey, E.; White, T. C.; Sank, D.; Mutus, J. Y.; Campbell, B.; Chen, Yu; Chen, Z.; Chiaro, B.; Dunsworth, A.; Lucero, E.; Neeley, M.; Neill, C.; O'Malley, P. J. J.; Quintana, C.; Roushan, P.; Vainsencher, A.; Wenner, J.; Martinis, John M.

    2016-09-01

    We present a method to optimize qubit control parameters during error detection which is compatible with large-scale qubit arrays. We demonstrate our method to optimize single or two-qubit gates in parallel on a nine-qubit system. Additionally, we show how parameter drift can be compensated for during computation by inserting a frequency drift and using our method to remove it. We remove both drift on a single qubit and independent drifts on all qubits simultaneously. We believe this method will be useful in keeping error rates low on all physical qubits throughout the course of a computation. Our method is O(1) scalable to systems of arbitrary size, providing a path towards controlling the large numbers of qubits needed for a fault-tolerant quantum computer.

  6. Memory bandwidth-scalable motion estimation for mobile video coding

    NASA Astrophysics Data System (ADS)

    Hsieh, Jui-Hung; Tai, Wei-Cheng; Chang, Tian-Sheuan

    2011-12-01

    The heavy memory access of motion estimation (ME) execution consumes significant power and could limit ME execution when the available memory bandwidth (BW) is reduced because of access congestion or changes in the dynamics of the power environment of modern mobile devices. In order to adapt to the changing BW while maintaining the rate-distortion (R-D) performance, this article proposes a novel data BW-scalable algorithm for ME with mobile multimedia chips. The available BW is modeled in a R-D sense and allocated to fit the dynamic contents. The simulation result shows 70% BW savings while keeping equivalent R-D performance compared with H.264 reference software for low-motion CIF-sized video. For high-motion sequences, the result shows our algorithm can better use the available BW to save an average bit rate of up to 13% with up to 0.1-dB PSNR increase for similar BW usage.

  7. Massive graph visualization : LDRD final report.

    SciTech Connect

    Wylie, Brian Neil; Moreland, Kenneth D.

    2007-10-01

    Graphs are a vital way of organizing data with complex correlations. A good visualization of a graph can fundamentally change human understanding of the data. Consequently, there is a rich body of work on graph visualization. Although there are many techniques that are effective on small to medium sized graphs (tens of thousands of nodes), there is a void in the research for visualizing massive graphs containing millions of nodes. Sandia is one of the few entities in the world that has the means and motivation to handle data on such a massive scale. For example, homeland security generates graphs from prolific media sources such as television, telephone, and the Internet. The purpose of this project is to provide the groundwork for visualizing such massive graphs. The research provides for two major feature gaps: a parallel, interactive visualization framework and scalable algorithms to make the framework usable to a practical application. Both the frameworks and algorithms are designed to run on distributed parallel computers, which are already available at Sandia. Some features are integrated into the ThreatView™ application and future work will integrate further parallel algorithms.

  8. Bringing the CMS distributed computing system into scalable operations

    NASA Astrophysics Data System (ADS)

    Belforte, S.; Fanfani, A.; Fisk, I.; Flix, J.; Hernández, J. M.; Kress, T.; Letts, J.; Magini, N.; Miccio, V.; Sciabà, A.

    2010-04-01

    Establishing efficient and scalable operations of the CMS distributed computing system critically relies on the proper integration, commissioning and scale testing of the data and workload management tools, the various computing workflows and the underlying computing infrastructure, located at more than 50 computing centres worldwide and interconnected by the Worldwide LHC Computing Grid. Computing challenges periodically undertaken by CMS in the past years with increasing scale and complexity have revealed the need for a sustained effort on computing integration and commissioning activities. The Processing and Data Access (PADA) Task Force was established at the beginning of 2008 within the CMS Computing Program with the mandate of validating the infrastructure for organized processing and user analysis including the sites and the workload and data management tools, validating the distributed production system by performing functionality, reliability and scale tests, helping sites to commission, configure and optimize the networking and storage through scale testing data transfers and data processing, and improving the efficiency of accessing data across the CMS computing system from global transfers to local access. This contribution reports on the tools and procedures developed by CMS for computing commissioning and scale testing as well as the improvements accomplished towards efficient, reliable and scalable computing operations. The activities include the development and operation of load generators for job submission and data transfers with the aim of stressing the experiment and Grid data management and workload management systems, site commissioning procedures and tools to monitor and improve site availability and reliability, as well as activities targeted to the commissioning of the distributed production, user analysis and monitoring systems.

  9. Scalable desktop visualisation of very large radio astronomy data cubes

    NASA Astrophysics Data System (ADS)

    Perkins, Simon; Questiaux, Jacques; Finniss, Stephen; Tyler, Robin; Blyth, Sarah; Kuttel, Michelle M.

    2014-07-01

    Observation data from radio telescopes is typically stored in three (or higher) dimensional data cubes, the resolution, coverage and size of which continues to grow as ever larger radio telescopes come online. The Square Kilometre Array, tabled to be the largest radio telescope in the world, will generate multi-terabyte data cubes - several orders of magnitude larger than the current norm. Despite this imminent data deluge, scalable approaches to file access in Astronomical visualisation software are rare: most current software packages cannot read astronomical data cubes that do not fit into computer system memory, or else provide access only at a serious performance cost. In addition, there is little support for interactive exploration of 3D data. We describe a scalable, hierarchical approach to 3D visualisation of very large spectral data cubes to enable rapid visualisation of large data files on standard desktop hardware. Our hierarchical approach, embodied in the AstroVis prototype, aims to provide a means of viewing large datasets that do not fit into system memory. The focus is on rapid initial response: our system initially rapidly presents a reduced, coarse-grained 3D view of the data cube selected, which is gradually refined. The user may select sub-regions of the cube to be explored in more detail, or extracted for use in applications that do not support large files. We thus shift the focus from data analysis informed by narrow slices of detailed information, to analysis informed by overview information, with details on demand. Our hierarchical solution to the rendering of large data cubes reduces the overall time to complete file reading, provides user feedback during file processing and is memory efficient. This solution does not require high performance computing hardware and can be implemented on any platform supporting the OpenGL rendering library.
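
    The coarse-first strategy can be illustrated with a short sketch: present a heavily block-averaged version of the cube immediately, then refine toward full resolution. This is only a schematic stand-in for AstroVis's hierarchical, out-of-core file access; the cube here is random data held in memory, though np.memmap could substitute for files too large for RAM.

      import numpy as np

      def downsample(cube, factor):
          # Block-average a 3-D cube by an integer factor along each axis.
          nx, ny, nz = (s // factor * factor for s in cube.shape)
          c = cube[:nx, :ny, :nz]
          c = c.reshape(nx // factor, factor, ny // factor, factor, nz // factor, factor)
          return c.mean(axis=(1, 3, 5))

      # Stand-in for a large spectral cube.
      cube = np.random.default_rng(2).random((256, 256, 128), dtype=np.float32)

      # Coarse-first: show an 8x-reduced version immediately, then refine.
      for factor in (8, 4, 2, 1):
          view = downsample(cube, factor) if factor > 1 else cube
          print(f"1/{factor} resolution: shape {view.shape}, "
                f"{view.nbytes / cube.nbytes:.3%} of the full data")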

  10. ParaText : scalable text modeling and analysis.

    SciTech Connect

    Dunlavy, Daniel M.; Stanton, Eric T.; Shead, Timothy M.

    2010-06-01

    Automated processing, modeling, and analysis of unstructured text (news documents, web content, journal articles, etc.) is a key task in many data analysis and decision making applications. As data sizes grow, scalability is essential for deep analysis. In many cases, documents are modeled as term or feature vectors and latent semantic analysis (LSA) is used to model latent, or hidden, relationships between documents and terms appearing in those documents. LSA supplies conceptual organization and analysis of document collections by modeling high-dimension feature vectors in many fewer dimensions. While past work on the scalability of LSA modeling has focused on the SVD, the goal of our work is to investigate the use of distributed memory architectures for the entire text analysis process, from data ingestion to semantic modeling and analysis. ParaText is a set of software components for distributed processing, modeling, and analysis of unstructured text. The ParaText source code is available under a BSD license, as an integral part of the Titan toolkit. ParaText components are chained together into data-parallel pipelines that are replicated across processes on distributed-memory architectures. Individual components can be replaced or rewired to explore different computational strategies and implement new functionality. ParaText functionality can be embedded in applications on any platform using the native C++ API, Python, or Java. The ParaText MPI Process provides a 'generic' text analysis pipeline in a command-line executable that can be used for many serial and parallel analysis tasks. ParaText can also be deployed as a web service accessible via a RESTful (HTTP) API. In the web service configuration, any client can access the functionality provided by ParaText using commodity protocols, from standard web browsers to custom clients written in any language.
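
    The core LSA step referred to above can be sketched in a few lines: build a term-document matrix, take a truncated SVD, and compare documents in the reduced latent space. The sketch is serial and uses invented toy documents; ParaText's contribution of distributing the whole pipeline across MPI processes is not shown.

      import numpy as np

      docs = ["isosurface extraction on parallel clusters",
              "parallel rendering of large volume data",
              "text modeling with latent semantic analysis",
              "semantic analysis of document collections"]

      # Term-document count matrix.
      vocab = sorted({w for d in docs for w in d.split()})
      index = {w: i for i, w in enumerate(vocab)}
      A = np.zeros((len(vocab), len(docs)))
      for j, d in enumerate(docs):
          for w in d.split():
              A[index[w], j] += 1.0

      # Truncated SVD: keep k latent dimensions, then compare documents there.
      U, s, Vt = np.linalg.svd(A, full_matrices=False)
      k = 2
      doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T
      unit = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
      print(np.round(unit @ unit.T, 2))             # cosine similarities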

  11. Scalable Conjunction Processing using Spatiotemporally Indexed Ephemeris Data

    NASA Astrophysics Data System (ADS)

    Budianto-Ho, I.; Johnson, S.; Sivilli, R.; Alberty, C.; Scarberry, R.

    2014-09-01

    The collision warnings produced by the Joint Space Operations Center (JSpOC) are of critical importance in protecting U.S. and allied spacecraft against destructive collisions and protecting the lives of astronauts during space flight. As the Space Surveillance Network (SSN) improves its sensor capabilities for tracking small and dim space objects, the number of tracked objects increases from thousands to hundreds of thousands of objects, while the number of potential conjunctions increases with the square of the number of tracked objects. Classical filtering techniques such as apogee and perigee filters have proven insufficient. Novel and orders of magnitude faster conjunction analysis algorithms are required to find conjunctions in a timely manner. Stellar Science has developed innovative filtering techniques for satellite conjunction processing using spatiotemporally indexed ephemeris data that efficiently and accurately reduce the number of objects requiring high-fidelity and computationally-intensive conjunction analysis. Two such algorithms, one based on the k-d Tree pioneered in robotics applications and the other based on Spatial Hash Tables used in computer gaming and animation, use, at worst, an initial O(N log N) preprocessing pass (where N is the number of tracked objects) to build large O(N) spatial data structures that substantially reduce the required number of O(N^2) computations, substituting linear memory usage for quadratic processing time. The filters have been implemented as Open Services Gateway initiative (OSGi) plug-ins for the Continuous Anomalous Orbital Situation Discriminator (CAOS-D) conjunction analysis architecture. We have demonstrated the effectiveness, efficiency, and scalability of the techniques using a catalog of 100,000 objects, an analysis window of one day, on a 64-core computer with 1TB shared memory. Each algorithm can process the full catalog in 6 minutes or less, almost a twenty-fold performance improvement over the
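
    A minimal sketch of the spatial-hash filtering idea follows: object positions at a single epoch are bucketed into coarse grid cells, and only objects in the same or adjacent cells become candidate pairs for detailed conjunction analysis. The operational filters described above work on spatiotemporally indexed ephemerides rather than a single snapshot, and the cell size and object count below are arbitrary.

      import numpy as np
      from collections import defaultdict
      from itertools import product

      def candidate_pairs(positions, cell_size):
          # Pairs of objects whose grid cells are identical or adjacent.
          grid = defaultdict(list)
          for idx, p in enumerate(positions):
              grid[tuple((p // cell_size).astype(int))].append(idx)
          pairs = set()
          for cell, members in grid.items():
              for offset in product((-1, 0, 1), repeat=3):   # 27 neighboring cells
                  other = tuple(c + o for c, o in zip(cell, offset))
                  for i in members:
                      for j in grid.get(other, ()):
                          if i < j:
                              pairs.add((i, j))
          return pairs

      rng = np.random.default_rng(2)
      positions = rng.uniform(-7000.0, 7000.0, size=(5000, 3))   # km, toy snapshot
      pairs = candidate_pairs(positions, cell_size=500.0)         # 500 km cells
      print(len(pairs), "candidate pairs instead of", 5000 * 4999 // 2)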

  12. Camouflage Visualization

    DTIC Science & Technology

    1992-04-07

    results of the test are difficult to quantify and to compare with previous observations. The more recent need to develop camouflage measures to defeat ... atmospheric attenuation models, such as LOWTRAN. As a reference, the calculated results are compared to LOWTRAN predictions to show the performance of our

  13. Battlefield Visualization

    DTIC Science & Technology

    2007-11-02

    A study analyzing battlefield visualization (BV) as a component of information dominance and superiority. This study outlines basic requirements for effective BV in terms of terrain data, information systems (synthetic environment; COA development and analysis tools) and BV development management, with a focus on technology insertion strategies. This study also reports on existing BV systems and provides 16 recommendations for Army BV support efforts, including interested organization, funding levels and duration of effort for each recommended action.

  14. Visualizing Progress

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Reality Capture Technologies, Inc. is a spinoff company from Ames Research Center. Offering e-business solutions for optimizing management, design and production processes, RCT uses visual collaboration environments (VCEs) such as those used to prepare the Mars Pathfinder mission. The product, 4-D Reality Framework, allows multiple users from different locations to manage and share data. The insurance industry is one targeted commercial application for this technology.

  15. Flow visualization

    NASA Technical Reports Server (NTRS)

    Weinstein, Leonard M.

    1991-01-01

    Flow visualization techniques are reviewed, with particular attention given to those applicable to liquid helium flows. Three techniques capable of obtaining qualitative and quantitative measurements of complex 3D flow fields are discussed, including focusing schlieren, particle image velocimetry, and holocinematography (HCV). It is concluded that HCV appears to be uniquely capable of obtaining full time-varying, 3D velocity field data, but is limited to the low speeds typical of liquid helium facilities.

  16. Flow visualization

    NASA Astrophysics Data System (ADS)

    Weinstein, Leonard M.

    Flow visualization techniques are reviewed, with particular attention given to those applicable to liquid helium flows. Three techniques capable of obtaining qualitative and quantitative measurements of complex 3D flow fields are discussed, including focusing schlieren, particle image velocimetry, and holocinematography (HCV). It is concluded that HCV appears to be uniquely capable of obtaining full time-varying, 3D velocity field data, but is limited to the low speeds typical of liquid helium facilities.

  17. Mobile Virtual Reality : A Solution for Big Data Visualization

    NASA Astrophysics Data System (ADS)

    Marshall, E.; Seichter, N. D.; D'sa, A.; Werner, L. A.; Yuen, D. A.

    2015-12-01

    Pursuits in the geological sciences and other quantitative fields often require data visualization frameworks that are in continual need of improvement and new ideas. Virtual reality is a visualization medium with a large audience, originally developed for gaming; it can also be delivered in CAVE-like environments, but these are unwieldy and expensive to maintain. Recent efforts by major companies such as Facebook have focused on this larger market, and the Oculus is the first of this new kind of mobile device. The Unity engine makes it possible to convert data files into a mesh of isosurfaces and render them in 3D. A user is immersed inside the virtual reality and is able to move within and around the data using arrow keys and other steering devices, similar to those employed with an Xbox. With the introduction of products like the Oculus Rift and HoloLens, combined with ever-increasing mobile computing power, mobile virtual reality data visualization can be implemented for better analysis of 3D geological and mineralogical data sets. As new products such as the Surface Pro 4 and other powerful yet highly mobile computers reach the market, the RAM and graphics card capacity necessary to run these models becomes more widely available, opening doors to this new reality. The computing requirements needed to run these models are a mere 8 GB of RAM and 2 GHz of CPU speed, which many mobile computers now exceed. Using Unity 3D to create a virtual environment containing a visual representation of the data, any data set converted into FBX or OBJ format can be traversed by wearing the Oculus Rift device. This new method of analysis, in conjunction with 3D scanning, has potential applications in many fields, including the analysis of precious stones or jewelry. Using hologram technology to capture in high resolution the 3D shape, color, and imperfections of minerals and stones, detailed review and

  18. Semantic Integrative Digital Pathology: Insights into Microsemiological Semantics and Image Analysis Scalability.

    PubMed

    Racoceanu, Daniel; Capron, Frédérique

    2016-01-01

    be devoted to morphological microsemiology (microscopic morphology semantics). Besides ensuring the traceability of the results (second opinion) and supporting the orchestration of high-content image analysis modules, the role of semantics will be crucial for the correlation between digital pathology and noninvasive medical imaging modalities. In addition, semantics has an important role in modelling the links between traditional microscopy and recent label-free technologies. The massive amount of visual data is challenging and represents a characteristic intrinsic to digital pathology. The design of an operational integrative microscopy framework needs to focus on scalable multiscale imaging formalism. In this sense, we prospectively consider some of the most recent scalable methodologies adapted to digital pathology as marked point processes for nuclear atypia and point-set mathematical morphology for architecture grading. To orchestrate this scalable framework, semantics-based WSI management (analysis, exploration, indexing, retrieval and report generation support) represents an important means towards approaches to integrating big data into biomedicine. This insight reflects our vision through an instantiation of essential bricks of this type of architecture. The generic approach introduced here is applicable to a number of challenges related to molecular imaging, high-content image management and, more generally, bioinformatics.

  19. Top Ten Interaction Challenges in Extreme-Scale Visual Analytics

    SciTech Connect

    Wong, Pak C.; Shen, Han-Wei; Chen, Chaomei

    2012-05-31

    The chapter presents ten selected user interfaces and interaction challenges in extreme-scale visual analytics. The study of visual analytics is often referred to as 'the science of analytical reasoning facilitated by interactive visual interfaces' in the literature. The discussion focuses on the issues of applying visual analytics technologies to extreme-scale scientific and non-scientific data ranging from petabyte to exabyte in sizes. The ten challenges are: in situ interactive analysis, user-driven data reduction, scalability and multi-level hierarchy, representation of evidence and uncertainty, heterogeneous data fusion, data summarization and triage for interactive query, analytics of temporally evolving features, the human bottleneck, design and engineering development, and the Renaissance of conventional wisdom. The discussion addresses concerns that arise from different areas of hardware, software, computation, algorithms, and human factors. The chapter also evaluates the likelihood of success in meeting these challenges in the near future.

  20. Visualizing confusion matrices for multidimensional signal detection correlational methods

    NASA Astrophysics Data System (ADS)

    Zhou, Yue; Wischgoll, Thomas; Blaha, Leslie M.; Smith, Ross; Vickery, Rhonda J.

    2013-12-01

    Advances in modeling and simulation for General Recognition Theory have produced more data than can be easily visualized using traditional techniques. In this area of psychological modeling, domain experts are struggling to find effective ways to compare large-scale simulation results. This paper describes methods that adapt the web-based D3 visualization framework combined with pre-processing tools to enable domain specialists to more easily interpret their data. The D3 framework utilizes Javascript and scalable vector graphics (SVG) to generate visualizations that can run readily within the web browser for domain specialists. Parallel coordinate plots and heat maps were developed for identification-confusion matrix data, and the results were shown to a GRT expert for an informal evaluation of their utility. There is a clear benefit to model interpretation from these visualizations when researchers need to interpret larger amounts of simulated data.
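
    As a simple illustration of the heat-map view mentioned above (here with matplotlib rather than D3), the sketch below renders a small, invented identification-confusion matrix with stimulus rows, response columns, and annotated proportions; the values do not come from any GRT simulation.

      import numpy as np
      import matplotlib.pyplot as plt

      # Hypothetical identification-confusion matrix: rows are presented stimuli,
      # columns are responses, entries are response proportions.
      labels = ["AA", "AB", "BA", "BB"]
      conf = np.array([[0.82, 0.10, 0.06, 0.02],
                       [0.12, 0.74, 0.04, 0.10],
                       [0.07, 0.03, 0.80, 0.10],
                       [0.02, 0.09, 0.11, 0.78]])

      fig, ax = plt.subplots()
      im = ax.imshow(conf, cmap="viridis", vmin=0.0, vmax=1.0)
      ax.set_xticks(range(len(labels)))
      ax.set_xticklabels(labels)
      ax.set_yticks(range(len(labels)))
      ax.set_yticklabels(labels)
      ax.set_xlabel("response")
      ax.set_ylabel("stimulus")
      for i in range(len(labels)):                  # annotate each cell
          for j in range(len(labels)):
              ax.text(j, i, f"{conf[i, j]:.2f}", ha="center", va="center", color="w")
      fig.colorbar(im, ax=ax, label="proportion")
      fig.savefig("confusion_heatmap.png", dpi=150)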

  1. STRING 3: An Advanced Groundwater Flow Visualization Tool

    NASA Astrophysics Data System (ADS)

    Schröder, Simon; Michel, Isabel; Biedert, Tim; Gräfe, Marius; Seidel, Torsten; König, Christoph

    2016-04-01

    The visualization of 3D groundwater flow is a challenging task. Previous versions of our software STRING [1] solely focused on intuitive visualization of complex flow scenarios for non-professional audiences. STRING, developed by Fraunhofer ITWM (Kaiserslautern, Germany) and delta h Ingenieurgesellschaft mbH (Witten, Germany), provides the necessary means for visualization of both 2D and 3D data on planar and curved surfaces. In this contribution we discuss how to extend this approach to a full 3D tool and its challenges in continuation of Michel et al. [2]. This elevates STRING from a post-production to an exploration tool for experts. In STRING moving pathlets provide an intuition of velocity and direction of both steady-state and transient flows. The visualization concept is based on the Lagrangian view of the flow. To capture every detail of the flow an advanced method for intelligent, time-dependent seeding is used building on the Finite Pointset Method (FPM) developed by Fraunhofer ITWM. Lifting our visualization approach from 2D into 3D provides many new challenges. With the implementation of a seeding strategy for 3D one of the major problems has already been solved (see Schröder et al. [3]). As pathlets only provide an overview of the velocity field other means are required for the visualization of additional flow properties. We suggest the use of Direct Volume Rendering and isosurfaces for scalar features. In this regard we were able to develop an efficient approach for combining the rendering through raytracing of the volume and regular OpenGL geometries. This is achieved through the use of Depth Peeling or A-Buffers for the rendering of transparent geometries. Animation of pathlets requires a strict boundary of the simulation domain. Hence, STRING needs to extract the boundary, even from unstructured data, if it is not provided. In 3D we additionally need a good visualization of the boundary itself. For this the silhouette based on the angle of

  2. Interactive Correlation Analysis and Visualization of Climate Data

    SciTech Connect

    Ma, Kwan-Liu

    2016-09-21

    The relationship between our ability to analyze and extract insights from visualization of climate model output and the capability of the available resources to make those visualizations has reached a crisis point. The large volume of data currently produced by climate models is overwhelming the current, decades-old visualization workflow. The traditional methods for visualizing climate output also have not kept pace with changes in the types of grids used, the number of variables involved, and the number of different simulations performed with a climate model or the feature-richness of high-resolution simulations. This project has developed new and faster methods for visualization in order to get the most knowledge out of the new generation of high-resolution climate models. While traditional climate images will continue to be useful, there is need for new approaches to visualization and analysis of climate data if we are to gain all the insights available in ultra-large data sets produced by high-resolution model output and ensemble integrations of climate models such as those produced for the Coupled Model Intercomparison Project. Towards that end, we have developed new visualization techniques for performing correlation analysis. We have also introduced highly scalable, parallel rendering methods for visualizing large-scale 3D data. This project was done jointly with climate scientists and visualization researchers at Argonne National Laboratory and NCAR.
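
    One of the simplest correlation analyses of this kind can be sketched directly: correlate every grid cell's time series against a reference index series to produce a correlation map. The sketch below uses random stand-in data and plain NumPy; the project's interactive and parallel-rendering techniques are not represented.

      import numpy as np

      rng = np.random.default_rng(3)
      nt, ny, nx = 120, 90, 180                     # months x lat x lon (toy sizes)
      field = rng.standard_normal((nt, ny, nx))     # e.g. monthly anomalies
      reference = field[:, 45, 90] + 0.5 * rng.standard_normal(nt)   # index series

      # Pearson correlation of every grid cell's time series with the reference.
      f = field - field.mean(axis=0)
      r = reference - reference.mean()
      corr = (f * r[:, None, None]).sum(axis=0) / (
          np.sqrt((f ** 2).sum(axis=0)) * np.sqrt((r ** 2).sum()))

      print("correlation map shape:", corr.shape)
      print("strongest cell:", np.unravel_index(np.abs(corr).argmax(), corr.shape))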

  3. Visual bioethics.

    PubMed

    Lauritzen, Paul

    2008-12-01

    Although images are pervasive in public policy debates in bioethics, few who work in the field attend carefully to the way that images function rhetorically. If the use of images is discussed at all, it is usually to dismiss appeals to images as a form of manipulation. Yet it is possible to speak meaningfully of visual arguments. Examining the appeal to images of the embryo and fetus in debates about abortion and stem cell research, I suggest that bioethicists would be well served by attending much more carefully to how images function in public policy debates.

  4. LiveGantt: Interactively Visualizing a Large Manufacturing Schedule.

    PubMed

    Jo, Jaemin; Huh, Jaeseok; Park, Jonghun; Kim, Bohyoung; Seo, Jinwook

    2014-12-01

    In this paper, we introduce LiveGantt as a novel interactive schedule visualization tool that helps users explore highly-concurrent large schedules from various perspectives. Although a Gantt chart is the most common approach to illustrate schedules, currently available Gantt chart visualization tools suffer from limited scalability and lack of interactions. LiveGantt is built with newly designed algorithms and interactions to improve conventional charts with better scalability, explorability, and reschedulability. It employs resource reordering and task aggregation to display the schedules in a scalable way. LiveGantt provides four coordinated views and filtering techniques to help users explore and interact with the schedules in more flexible ways. In addition, LiveGantt is equipped with an efficient rescheduler to allow users to instantaneously modify their schedules based on their scheduling experience in the field. To assess the usefulness of the application of LiveGantt, we conducted a case study on manufacturing schedule data with four industrial engineering researchers. Participants not only grasped an overview of a schedule but also explored the schedule from multiple perspectives to make enhancements.

  5. Novel Visual Sensor Coverage and Deployment in Time Aware PTZ Wireless Visual Sensor Networks

    PubMed Central

    Yap, Florence G. H.; Yen, Hong-Hsu

    2016-01-01

    In this paper, we consider the visual sensor deployment algorithm in Pan-Tilt-Zoom (PTZ) Wireless Visual Sensor Networks (WVSNs). With PTZ capability, a sensor’s visual coverage can be extended to reduce the number of visual sensors that need to be deployed. The coverage zone of a visual sensor in PTZ WVSN is composed of two regions, a Direct Coverage Region (DCR) and a PTZ Coverage Region (PTZCR). In the PTZCR, a visual sensor needs a mechanical pan-tilt-zoom operation to cover an object. This mechanical operation can take seconds, so the sensor might not be able to adjust the camera in time to capture the visual data. In this paper, for the first time, we study this PTZ time-aware PTZ WVSN deployment problem. We formulate this PTZ time-aware PTZ WVSN deployment problem as an optimization problem where the objective is to minimize the total visual sensor deployment cost so that each area is either covered in the DCR or in the PTZCR while considering the PTZ time constraint. The proposed Time Aware Coverage Zone (TACZ) model successfully captures the PTZ visual sensor coverage in terms of camera focal range, angle span zone coverage and camera PTZ time. Then a novel heuristic, called Time Aware Deployment with PTZ camera (TADPTZ) algorithm, is proposed to solve the problem. From our computational experiments, we found that the TACZ model outperforms the existing M coverage model under all network scenarios. In addition, as compared to the optimal solutions, the TACZ model is scalable and adaptable to the different PTZ time requirements when deploying large PTZ WVSNs. PMID:28042829

  6. Novel Visual Sensor Coverage and Deployment in Time Aware PTZ Wireless Visual Sensor Networks.

    PubMed

    Yap, Florence G H; Yen, Hong-Hsu

    2016-12-30

    In this paper, we consider the visual sensor deployment algorithm in Pan-Tilt-Zoom (PTZ) Wireless Visual Sensor Networks (WVSNs). With PTZ capability, a sensor's visual coverage can be extended to reduce the number of visual sensors that need to be deployed. The coverage zone of a visual sensor in PTZ WVSN is composed of two regions, a Direct Coverage Region (DCR) and a PTZ Coverage Region (PTZCR). In the PTZCR, a visual sensor needs a mechanical pan-tilt-zoom operation to cover an object. This mechanical operation can take seconds, so the sensor might not be able to adjust the camera in time to capture the visual data. In this paper, for the first time, we study this PTZ time-aware PTZ WVSN deployment problem. We formulate this PTZ time-aware PTZ WVSN deployment problem as an optimization problem where the objective is to minimize the total visual sensor deployment cost so that each area is either covered in the DCR or in the PTZCR while considering the PTZ time constraint. The proposed Time Aware Coverage Zone (TACZ) model successfully captures the PTZ visual sensor coverage in terms of camera focal range, angle span zone coverage and camera PTZ time. Then a novel heuristic, called Time Aware Deployment with PTZ camera (TADPTZ) algorithm, is proposed to solve the problem. From our computational experiments, we found that the TACZ model outperforms the existing M coverage model under all network scenarios. In addition, as compared to the optimal solutions, the TACZ model is scalable and adaptable to the different PTZ time requirements when deploying large PTZ WVSNs.

  7. GASPRNG: GPU accelerated scalable parallel random number generator library

    NASA Astrophysics Data System (ADS)

    Gao, Shuang; Peterson, Gregory D.

    2013-04-01

    Graphics processors represent a promising technology for accelerating computational science applications. Many computational science applications require fast and scalable random number generation with good statistical properties, so they use the Scalable Parallel Random Number Generators library (SPRNG). We present the GPU Accelerated SPRNG library (GASPRNG) to accelerate SPRNG in GPU-based high performance computing systems. GASPRNG includes code for a host CPU and CUDA code for execution on NVIDIA graphics processing units (GPUs) along with a programming interface to support various usage models for pseudorandom numbers and computational science applications executing on the CPU, GPU, or both. This paper describes the implementation approach used to produce high performance and also describes how to use the programming interface. The programming interface allows a user to be able to use GASPRNG the same way as SPRNG on traditional serial or parallel computers as well as to develop tightly coupled programs executing primarily on the GPU. We also describe how to install GASPRNG and use it. To help illustrate linking with GASPRNG, various demonstration codes are included for the different usage models. GASPRNG on a single GPU shows up to 280x speedup over SPRNG on a single CPU core and is able to scale for larger systems in the same manner as SPRNG. Because GASPRNG generates identical streams of pseudorandom numbers as SPRNG, users can be confident about the quality of GASPRNG for scalable computational science applications. Catalogue identifier: AEOI_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEOI_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: UTK license. No. of lines in distributed program, including test data, etc.: 167900 No. of bytes in distributed program, including test data, etc.: 1422058 Distribution format: tar.gz Programming language: C and CUDA. Computer: Any PC or

  8. High-performance, scalable optical network-on-chip architectures

    NASA Astrophysics Data System (ADS)

    Tan, Xianfang

    The rapid advance of technology enables a large number of processing cores to be integrated into a single chip which is called a Chip Multiprocessor (CMP) or a Multiprocessor System-on-Chip (MPSoC) design. The on-chip interconnection network, which is the communication infrastructure for these processing cores, plays a central role in a many-core system. With the continuously increasing complexity of many-core systems, traditional metallic wired electronic networks-on-chip (NoC) have become a bottleneck because of the unbearable latency in data transmission and extremely high energy consumption on chip. Optical networks-on-chip (ONoC) have been proposed as a promising alternative paradigm for electronic NoC with the benefits of optical signaling communication such as extremely high bandwidth, negligible latency, and low power consumption. This dissertation focuses on the design of high-performance and scalable ONoC architectures and the contributions are highlighted as follows: 1. A micro-ring resonator (MRR)-based Generic Wavelength-routed Optical Router (GWOR) is proposed. A method for developing any sized GWOR is introduced. GWOR is a scalable non-blocking ONoC architecture with simple structure, low cost and high power efficiency compared to existing ONoC designs. 2. To expand the bandwidth and improve the fault tolerance of the GWOR, a redundant GWOR architecture is designed by cascading different types of GWORs into one network. 3. The redundant GWOR built with MRR-based comb switches is proposed. Comb switches can expand the bandwidth while keeping the topology of GWOR unchanged by replacing the general MRRs with comb switches. 4. A butterfly fat tree (BFT)-based hybrid optoelectronic NoC (HONoC) architecture is developed in which GWORs are used for global communication and electronic routers are used for local communication. The proposed HONoC uses fewer electronic routers and links than its counterpart of electronic BFT-based NoC. It takes the advantages of

  9. A Scalable Gaussian Process Analysis Algorithm for Biomass Monitoring

    SciTech Connect

    Chandola, Varun; Vatsavai, Raju

    2011-01-01

    Biomass monitoring is vital for studying the carbon cycle of earth's ecosystem and has several significant implications, especially in the context of understanding climate change and its impacts. Recently, several change detection methods have been proposed to identify land cover changes in temporal profiles (time series) of vegetation collected using remote sensing instruments, but do not satisfy one or both of the two requirements of the biomass monitoring problem, i.e., operating in online mode and handling periodic time series. In this paper, we adapt Gaussian process regression to detect changes in such time series in an online fashion. While Gaussian process (GP) have been widely used as a kernel based learning method for regression and classification, their applicability to massive spatio-temporal data sets, such as remote sensing data, has been limited owing to the high computational costs involved. We focus on addressing the scalability issues associated with the proposed GP based change detection algorithm. This paper makes several significant contributions. First, we propose a GP based online time series change detection algorithm and demonstrate its effectiveness in detecting different types of changes in Normalized Difference Vegetation Index (NDVI) data obtained from a study area in Iowa, USA. Second, we propose an efficient Toeplitz matrix based solution which significantly improves the computational complexity and memory requirements of the proposed GP based method. Specifically, the proposed solution can analyze a time series of length t in O(t^2) time while maintaining an O(t) memory footprint, compared to the O(t^3) time and O(t^2) memory requirement of standard matrix manipulation based methods. Third, we describe a parallel version of the proposed solution which can be used to simultaneously analyze a large number of time series. We study three different parallel implementations: using threads, MPI, and a hybrid
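
    The Toeplitz idea can be illustrated concretely: for a stationary kernel evaluated on evenly spaced times, the kernel matrix is Toeplitz and is fully described by its first column, so the GP linear solve can use a Levinson-type routine such as scipy.linalg.solve_toeplitz. The sketch below only compares that fast solve against a dense solve on toy data; it is not the paper's online change detection method, and the kernel parameters are arbitrary.

      import numpy as np
      from scipy.linalg import solve_toeplitz, toeplitz

      # Evenly spaced times: a stationary kernel matrix is Toeplitz, so the GP
      # solve K^{-1} y needs only the first column of K.
      t = np.arange(512, dtype=float)
      rng = np.random.default_rng(4)
      y = np.sin(2 * np.pi * t / 48.0) + 0.1 * rng.standard_normal(t.size)

      length_scale, signal_var, noise_var = 10.0, 1.0, 0.01
      first_col = signal_var * np.exp(-0.5 * (t / length_scale) ** 2)
      first_col[0] += noise_var                     # observation noise on the diagonal

      # Levinson-type solve: O(t^2) time, O(t) extra memory ...
      alpha_fast = solve_toeplitz(first_col, y)
      # ... versus forming and factorizing the dense matrix: O(t^2) memory, O(t^3) time.
      alpha_dense = np.linalg.solve(toeplitz(first_col), y)
      print("max difference between solves:", float(np.abs(alpha_fast - alpha_dense).max()))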

  10. The Co Design Architecture for Exascale Systems, a Novel Approach for Scalable Designs

    SciTech Connect

    Kagan, Michael; Shainer, Gilad; Poole, Stephen W; Shamis, Pavel; Wilde, Todd; Pak, Lui; Liu, Tong; Dubman, Mike; Shahar, Yiftah; Graham, Richard L

    2012-01-01

    High performance computing (HPC) has begun scaling beyond the Petaflop range towards the Exaflop (1000 Petaflops) mark. One of the major concerns throughout the development toward such performance capability is scalability, both at the system level and at the application layer. In this paper we present a novel design concept, the Co-Design approach, which enables tighter joint development of the application communication libraries and the underlying hardware interconnect solution in order to overcome scalability issues and to enable a more efficient path towards Exascale computing. We have proposed a new application programming interface and have demonstrated a 50x performance improvement along with increased scalability.

  11. Interactive realtime Doppler-ultrasound visualization of the heart.

    PubMed

    Heid, V; Evers, H; Henn, C; Glombitza, G; Meinzer, H P

    2000-01-01

    Heart valve insufficiencies can optimally be assessed using transesophageal, triggered, three-dimensional ultrasound imaging. The dynamic ultrasound data contain morphological as well as functional components which are recorded and displayed simultaneously. It allows the visualization of intracardiac motion which is an important parameter to detect abnormal flow caused by defect valves. A realtime reconstruction is desired to get a spatial impression on the one hand and to interactively clip parts of the volume on the other hand. Therefore, we use the OpenGL Volumizer API. Scalability of the visualization was tested with respect to different workstations and graphics resources using a Multipipe Utility library. The combination of both APIs enables a visualization of volumetric and functional data with frame rates up to 10 frames per second. By using the proposed method, it is possible to visualize the jet in the original color-coding which is employed during a conventional two-dimensional examination for displaying the velocity values. The morphological and the functional data are handled as two independent data channels. A good scalability from low cost up to high end graphic workstations is given by the use of the MPU. The quality of the resulting 3D images allows exact differentiation of heart valve insufficiencies to support the diagnostic procedure.

  12. Implementation of scalable video coding deblocking filter from high-level SystemC description

    NASA Astrophysics Data System (ADS)

    Carballo, Pedro P.; Espino, Omar; Neris, Romén.; Hernández-Fernández, Pedro; Szydzik, Tomasz M.; Núñez, Antonio

    2013-05-01

    This paper describes key concepts in the design and implementation of a deblocking filter (DF) for an H.264/SVC video decoder. The DF supports QCIF and CIF video formats with temporal and spatial scalability. The design flow starts from a SystemC functional model and has been refined using high-level synthesis methodology to RTL microarchitecture. The process is guided with performance measurements (latency, cycle time, power, resource utilization) with the objective of assuring the quality of results of the final system. The functional model of the DF is created in an incremental way from the AVC DF model using OpenSVC source code as reference. The design flow continues with the logic synthesis and the implementation on the FPGA using various strategies. The final implementation is chosen among the implementations that meet the timing constraints. The DF is capable of running at 100 MHz, and macroblocks are processed in 6,500 clock cycles for a throughput of 130 fps for QCIF format and 37 fps for CIF format. The proposed architecture for the complete H.264/SVC decoder is composed of an OMAP 3530 SOC (ARM Cortex-A8 GPP + DSP) and the FPGA Virtex-5 acting as a coprocessor for DF implementation. The DF is connected to the OMAP SOC using the GPMC interface. A validation platform has been developed using the embedded PowerPC processor in the FPGA, composing an SoC that integrates the frame generation and visualization in a TFT screen. The FPGA implements both the DF core and a GPMC slave core. Both cores are connected to the PowerPC440 embedded processor using LocalLink interfaces. The FPGA also contains a local memory capable of storing information necessary to filter a complete frame and to store a decoded picture frame. The complete system is implemented in a Virtex5 FX70T device.

  13. Visualization rhetoric: framing effects in narrative visualization.

    PubMed

    Hullman, Jessica; Diakopoulos, Nicholas

    2011-12-01

    Narrative visualizations combine conventions of communicative and exploratory information visualization to convey an intended story. We demonstrate visualization rhetoric as an analytical framework for understanding how design techniques that prioritize particular interpretations in visualizations that "tell a story" can significantly affect end-user interpretation. We draw a parallel between narrative visualization interpretation and evidence from framing studies in political messaging, decision-making, and literary studies. Devices for understanding the rhetorical nature of narrative information visualizations are presented, informed by the rigorous application of concepts from critical theory, semiotics, journalism, and political theory. We draw attention to how design tactics represent additions or omissions of information at various levels-the data, visual representation, textual annotations, and interactivity-and how visualizations denote and connote phenomena with reference to unstated viewing conventions and codes. Classes of rhetorical techniques identified via a systematic analysis of recent narrative visualizations are presented, and characterized according to their rhetorical contribution to the visualization. We describe how designers and researchers can benefit from the potentially positive aspects of visualization rhetoric in designing engaging, layered narrative visualizations and how our framework can shed light on how a visualization design prioritizes specific interpretations. We identify areas where future inquiry into visualization rhetoric can improve understanding of visualization interpretation.

  14. The Scalable Coherent Interface and related standards projects

    SciTech Connect

    Gustavson, D.B.

    1991-09-01

    The Scalable Coherent Interface (SCI) project (IEEE P1596) found a way to avoid the limits that are inherent in bus technology. SCI provides bus-like services by transmitting packets on a collection of point-to-point unidirectional links. The SCI protocols support cache coherence in a distributed-shared-memory multiprocessor model, message passing, I/O, and local-area-network-like communication over fiber optic or wire links. VLSI circuits that operate parallel links at 1000 MByte/s and serial links at 1000 Mbit/s will be available early in 1992. Several ongoing SCI-related projects are applying the SCI technology to new areas or extending it to more difficult problems. P1596.1 defines the architecture of a bridge between SCI and VME; P1596.2 compatibly extends the cache coherence mechanism for efficient operation with kiloprocessor systems; P1596.3 defines new low-voltage (about 0.25 V) differential signals suitable for low power interfaces for CMOS or GaAs VLSI implementations of SCI; P1596.4 defines a high performance memory chip interface using these signals; P1596.5 defines data transfer formats for efficient interprocessor communication in heterogeneous multiprocessor systems. This paper reports the current status of SCI, related standards, and new projects. 16 refs.

  15. Efficient and scalable Pareto optimization by evolutionary local selection algorithms.

    PubMed

    Menczer, F; Degeratu, M; Street, W N

    2000-01-01

    Local selection is a simple selection scheme in evolutionary computation. Individual fitnesses are accumulated over time and compared to a fixed threshold, rather than to each other, to decide who gets to reproduce. Local selection, coupled with fitness functions stemming from the consumption of finite shared environmental resources, maintains diversity in a way similar to fitness sharing. However, it is more efficient than fitness sharing and lends itself to parallel implementations for distributed tasks. While local selection is not prone to premature convergence, it applies minimal selection pressure to the population. Local selection is, therefore, particularly suited to Pareto optimization or problem classes where diverse solutions must be covered. This paper introduces ELSA, an evolutionary algorithm employing local selection, and outlines three experiments in which ELSA is applied to multiobjective problems: a multimodal graph search problem, and two Pareto optimization problems. In all these experiments, ELSA significantly outperforms other well-known evolutionary algorithms. The paper also discusses scalability, parameter dependence, and the potential distributed applications of the algorithm.
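
    The local selection rule itself is simple enough to sketch: individuals accumulate energy from finite, shared resource bins, pay a fixed living cost, die when energy is exhausted, and reproduce whenever energy crosses a fixed threshold, splitting energy with the offspring. The toy one-dimensional task, bin count, and energy constants below are invented for illustration and do not come from ELSA.

      import random

      random.seed(5)
      THRESHOLD = 1.0                     # energy needed before reproducing
      COST = 0.2                          # energy paid each step to stay alive
      N_BINS = 20                         # finite resource bins shared by all

      def task_bin(x):                    # which shared resource an individual consumes
          return min(int(x * N_BINS), N_BINS - 1)

      population = [{"x": random.random(), "energy": 0.5} for _ in range(30)]

      for step in range(200):
          resource = [1.0] * N_BINS       # refill the shared environment
          random.shuffle(population)
          for ind in population:          # crowded bins yield less per individual
              b = task_bin(ind["x"])
              gain = min(resource[b], 0.3)
              resource[b] -= gain
              ind["energy"] += gain - COST
          survivors, offspring = [], []
          for ind in population:
              if ind["energy"] <= 0.0:    # death: energy exhausted
                  continue
              # Local selection: compare energy to a fixed threshold,
              # never to other individuals.
              if ind["energy"] >= THRESHOLD:
                  ind["energy"] /= 2.0    # parent splits energy with the child
                  child_x = min(max(ind["x"] + random.gauss(0.0, 0.05), 0.0), 1.0)
                  offspring.append({"x": child_x, "energy": ind["energy"]})
              survivors.append(ind)
          population = survivors + offspring

      covered = {task_bin(ind["x"]) for ind in population}
      print("population size:", len(population), "| resource bins covered:", len(covered))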

  16. Scalable Multicore Motion Planning Using Lock-Free Concurrency.

    PubMed

    Ichnowski, Jeffrey; Alterovitz, Ron

    2014-10-01

    We present PRRT (Parallel RRT) and PRRT* (Parallel RRT*), sampling-based methods for feasible and optimal motion planning designed for modern multicore CPUs. We parallelize RRT and RRT* such that all threads concurrently build a single motion planning tree. Parallelization in this manner requires that data structures, such as the nearest neighbor search tree and the motion planning tree, are safely shared across multiple threads. Rather than rely on traditional locks which can result in slowdowns due to lock contention, we introduce algorithms based on lock-free concurrency using atomic operations. We further improve scalability by using partition-based sampling (which shrinks each core's working data set to improve cache efficiency) and parallel work-saving (in reducing the number of rewiring steps performed in PRRT*). Because PRRT and PRRT* are CPU-based, they can be directly integrated with existing libraries. We demonstrate that PRRT and PRRT* scale well as core counts increase, in some cases exhibiting superlinear speedup, for scenarios such as the Alpha Puzzle and Cubicles scenarios and the Aldebaran Nao robot performing a 2-handed task.

  17. Enabling Technologies for Scalable Trapped Ion Quantum Computing

    NASA Astrophysics Data System (ADS)

    Crain, Stephen; Gaultney, Daniel; Mount, Emily; Knoernschild, Caleb; Baek, Soyoung; Maunz, Peter; Kim, Jungsang

    2013-05-01

    Scalability is one of the main challenges of trapped ion based quantum computation, mainly limited by the lack of enabling technologies needed to trap, manipulate and process the increasing number of qubits. Microelectromechanical systems (MEMS) technology allows one to design movable micromirrors to focus laser beams on individual ions in a chain and steer the focal point in two dimensions. Our current MEMS system is designed to steer 355 nm pulsed laser beams to carry out logic gates on a chain of Yb ions with a waist of 1.5 μm across a 20 μm range. In order to read the state of the qubit chain we developed a 32-channel PMT with a custom read-out circuit operating near the thermal noise limit of the readout amplifier which increases state detection fidelity. We also developed a set of digital to analog converters (DACs) used to supply analog DC voltages to the electrodes of an ion trap. We designed asynchronous DACs to avoid added noise injection at the update rate commonly found in synchronous DACs. Effective noise filtering is expected to reduce the heating rate of a surface trap, thus improving multi-qubit logic gate fidelities. Our DAC system features 96 channels and an integrated FPGA that allows the system to be controlled in real time. This work was supported by IARPA/ARO.

  18. Scalable Design of Paired CRISPR Guide RNAs for Genomic Deletion

    PubMed Central

    Polidori, Taisia; Palumbo, Emilio; Guigo, Roderic

    2017-01-01

    CRISPR-Cas9 technology can be used to engineer precise genomic deletions with pairs of single guide RNAs (sgRNAs). This approach has been widely adopted for diverse applications, from disease modelling of individual loci, to parallelized loss-of-function screens of thousands of regulatory elements. However, no solution has been presented for the unique bioinformatic design requirements of CRISPR deletion. We here present CRISPETa, a pipeline for flexible and scalable paired sgRNA design based on an empirical scoring model. Multiple sgRNA pairs are returned for each target, and any number of targets can be analyzed in parallel, making CRISPETa equally useful for focussed or high-throughput studies. Fast run-times are achieved using a pre-computed off-target database. sgRNA pair designs are output in a convenient format for visualisation and oligonucleotide ordering. We present pre-designed, high-coverage library designs for entire classes of protein-coding and non-coding elements in human, mouse, zebrafish, Drosophila melanogaster and Caenorhabditis elegans. In human cells, we reproducibly observe deletion efficiencies of ≥50% for CRISPETa designs targeting an enhancer and exonic fragment of the MALAT1 oncogene. In the latter case, deletion results in production of desired, truncated RNA. CRISPETa will be useful for researchers seeking to harness CRISPR for targeted genomic deletion, in a variety of model organisms, from single-target to high-throughput scales. PMID:28253259
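
    The pairing step can be sketched as follows: among candidate guides flanking the region to be deleted, rank pairs by a combined score and return the top few. The per-guide scores, positions, and flank window below are hypothetical placeholders; CRISPETa's empirical scoring model and precomputed off-target database are not reproduced.

      # Hypothetical candidate guides around a region to delete: (cut_position, score).
      # The scores stand in for an empirical on/off-target model and are not real values.
      upstream_guides   = [(980, 0.55), (1012, 0.81), (1040, 0.62)]
      downstream_guides = [(2490, 0.74), (2515, 0.90), (2560, 0.48)]
      deletion_target   = (1050, 2480)          # region that must be removed entirely

      def best_pairs(upstream, downstream, target, n_pairs=3, max_flank=100):
          # Rank guide pairs that cut within max_flank bases of the target ends,
          # one on each side, by the product of the two per-guide scores.
          start, end = target
          pairs = []
          for u_pos, u_score in upstream:
              for d_pos, d_score in downstream:
                  if start - max_flank <= u_pos < start and end < d_pos <= end + max_flank:
                      pairs.append((u_score * d_score, (u_pos, d_pos)))
          pairs.sort(reverse=True)
          return pairs[:n_pairs]

      print(best_pairs(upstream_guides, downstream_guides, deletion_target))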

  19. Scalable Parameter Estimation for Genome-Scale Biochemical Reaction Networks

    PubMed Central

    Kaltenbacher, Barbara; Hasenauer, Jan

    2017-01-01

    Mechanistic mathematical modeling of biochemical reaction networks using ordinary differential equation (ODE) models has improved our understanding of small- and medium-scale biological processes. While the same should in principle hold for large- and genome-scale processes, the computational methods for the analysis of ODE models which describe hundreds or thousands of biochemical species and reactions are missing so far. While individual simulations are feasible, the inference of the model parameters from experimental data is computationally too intensive. In this manuscript, we evaluate adjoint sensitivity analysis for parameter estimation in large scale biochemical reaction networks. We present the approach for time-discrete measurement and compare it to state-of-the-art methods used in systems and computational biology. Our comparison reveals a significantly improved computational efficiency and a superior scalability of adjoint sensitivity analysis. The computational complexity is effectively independent of the number of parameters, enabling the analysis of large- and genome-scale models. Our study of a comprehensive kinetic model of ErbB signaling shows that parameter estimation using adjoint sensitivity analysis requires a fraction of the computation time of established methods. The proposed method will facilitate mechanistic modeling of genome-scale cellular processes, as required in the age of omics. PMID:28114351
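
    The key property highlighted here, cost effectively independent of the number of parameters, comes from computing the objective gradient with a single backward (adjoint) solve rather than one forward sensitivity solve per parameter. As a hedged sketch of the standard continuous adjoint formulation for time-discrete measurements (generic notation, not necessarily the exact discretization used in the paper, and up to sign conventions):

        \[
        \dot{x} = f(x,\theta), \qquad
        J(\theta) = \tfrac{1}{2}\sum_{k} \lVert y_k - h(x(t_k,\theta)) \rVert^{2},
        \]
        \[
        \dot{p} = -\left(\frac{\partial f}{\partial x}\right)^{\!\top} p
        \quad\text{(integrated backward in time, with a jump }
        \left(\tfrac{\partial h}{\partial x}\right)^{\!\top}\bigl(h(x(t_k)) - y_k\bigr)
        \text{ added at each measurement time } t_k\text{)},
        \]
        \[
        \frac{dJ}{d\theta} = \int_{t_0}^{t_N} p(t)^{\top}\, \frac{\partial f}{\partial \theta}\, dt .
        \]

    One backward solve of the adjoint system yields the full gradient, so the cost scales with the state dimension rather than with the number of parameters, which is what enables genome-scale estimation.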

  20. Photon Pairs for Scalable Quantum Communication with Atomic Ensembles

    NASA Astrophysics Data System (ADS)

    Kuzmich, A.; Bowen, W. P.; Boozer, A. D.; Boca, A.; Chou, C.; Duan, L.-M.; Kimble, H. J.

    2003-05-01

    Quantum information science attempts to exploit capabilities from the quantum realm to accomplish tasks that are otherwise impossible in the classical domain. In this regard, a significant advance is the invention of a protocol by Duan, Lukin, Cirac, and Zoller (DLCZ) for the realization of scalable long-distance quantum communication and the distribution of entanglement over quantum networks [1]. Here we report the first enabling step in the realization of the protocol of DLCZ, namely the observation of quantum correlations for photon pairs generated in the collective emission from an atomic ensemble. An optically thick sample of three-level atoms in a lambda-configuration is exploited to produce correlated photons. The atomic sample for our experiment is provided by Cesium atoms in a magneto-optical trap (MOT). We find a significant violation of the Cauchy-Schwarz inequality, clearly demonstrating the nonclassical character of the correlations between the two photons generated by sequential (write, read) beams. Moreover, the measured coincidence rates clearly demonstrate the cooperative nature of the emission process. These capabilities should help to enable other advances in the field of quantum information, including the implementation of quantum memory and fully controllable single-photon sources, which, taken together, pave the way for the realization of universal quantum computation. [1] L.-M. Duan, M. Lukin, J. I. Cirac, and P. Zoller, Nature 414, 413 (2001).
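
    For readers unfamiliar with the nonclassicality test mentioned here, the Cauchy-Schwarz inequality for classical fields bounds the cross-correlation between the write and read photons by their autocorrelations. The criterion below is the standard form (generic notation, not reproduced from the paper):

        \[
        R \;=\; \frac{\bigl[g_{1,2}\bigr]^{2}}{g_{1,1}\, g_{2,2}} \;\le\; 1
        \qquad \text{for any classical field,}
        \]

    where g_{i,j} are normalized intensity correlation functions between fields i and j; a measured R > 1 therefore demonstrates nonclassical correlations between the two photons.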

  1. Scalable Multicore Motion Planning Using Lock-Free Concurrency

    PubMed Central

    Ichnowski, Jeffrey; Alterovitz, Ron

    2015-01-01

    We present PRRT (Parallel RRT) and PRRT* (Parallel RRT*), sampling-based methods for feasible and optimal motion planning designed for modern multicore CPUs. We parallelize RRT and RRT* such that all threads concurrently build a single motion planning tree. Parallelization in this manner requires that data structures, such as the nearest neighbor search tree and the motion planning tree, are safely shared across multiple threads. Rather than rely on traditional locks which can result in slowdowns due to lock contention, we introduce algorithms based on lock-free concurrency using atomic operations. We further improve scalability by using partition-based sampling (which shrinks each core’s working data set to improve cache efficiency) and parallel work-saving (in reducing the number of rewiring steps performed in PRRT*). Because PRRT and PRRT* are CPU-based, they can be directly integrated with existing libraries. We demonstrate that PRRT and PRRT* scale well as core counts increase, in some cases exhibiting superlinear speedup, for scenarios such as the Alpha Puzzle and Cubicles scenarios and the Aldebaran Nao robot performing a 2-handed task. PMID:26167135
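
    The abstract describes many threads inserting into one shared motion-planning tree without locks, using atomic operations. The C++ sketch below is purely illustrative (it is not the authors' implementation; the Node layout and field names are assumptions) and shows the typical compare-and-swap pattern for lock-free insertion of a new child into a node's child list:

        #include <atomic>

        // Hypothetical tree node: each node keeps a lock-free singly linked
        // list of its children, whose head is stored in an atomic pointer.
        struct Node {
            double config[2];            // robot configuration (dimension assumed)
            Node* parent = nullptr;      // edge back toward the tree root
            Node* next_sibling = nullptr;            // next child of the same parent
            std::atomic<Node*> first_child{nullptr}; // head of this node's child list
        };

        // Lock-free insertion: publish 'child' as a child of 'parent' without
        // taking a lock.  The compare-and-swap retries if another thread
        // prepended a different child concurrently.
        void insert_child(Node* parent, Node* child) {
            child->parent = parent;
            Node* head = parent->first_child.load(std::memory_order_relaxed);
            do {
                child->next_sibling = head;  // link to the current head
            } while (!parent->first_child.compare_exchange_weak(
                         head, child,
                         std::memory_order_release,   // make 'child' visible to readers
                         std::memory_order_relaxed)); // 'head' reloaded on failure
        }

    Readers traversing the child list always see either the old or the new head, so no thread ever blocks; the actual PRRT/PRRT* data structures (including the concurrent nearest-neighbor structure) are considerably more involved.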

  2. A scalable approach for high throughput branch flow filtration.

    PubMed

    Inglis, David W; Herman, Nick

    2013-05-07

    Microfluidic continuous flow filtration methods have the potential for very high size resolution using minimum feature sizes that are larger than the separation size, thereby circumventing the problem of clogging. Branch flow filtration is particularly promising because it has an unlimited dynamic range (ratio of largest passable particle to the smallest separated particle), but it suffers from very poor volume throughput: when many branches are used, they cannot be identical if each is to have the same size cut-off. We describe a new iterative approach to the design of branch filtration devices able to overcome this limitation without large dead volumes. This is demonstrated by numerical modelling, fabrication and testing of devices with 20 branches, with dynamic ranges up to 6.9, and high filtration ratios (14-29%) on beads and fungal spores. The filters have a sharp size cutoff (10× depletion for 12% size difference), with large particle rejection equivalent to a 20th-order Butterworth low-pass filter. The devices are fully scalable, enabling higher throughput and smaller cutoff sizes, and they are compatible with ultra-low-cost fabrication.

  3. Advances in Patch-Based Adaptive Mesh Refinement Scalability

    DOE PAGES

    Gunney, Brian T.N.; Anderson, Robert W.

    2015-12-18

    Patch-based structured adaptive mesh refinement (SAMR) is widely used for high-resolution simulations. Combined with modern supercomputers, it could provide simulations of unprecedented size and resolution. A persistent challenge for this combination has been managing dynamically adaptive meshes on more and more MPI tasks. The distributed mesh management scheme in SAMRAI has made some progress in SAMR scalability, but early algorithms still had trouble scaling past the regime of 10⁵ MPI tasks. This work provides two critical SAMR regridding algorithms, which are integrated into that scheme to ensure efficiency of the whole. The clustering algorithm is an extension of the tile-clustering approach, making it more flexible and efficient in both clustering and parallelism. The partitioner is a new algorithm designed to prevent the network congestion experienced by its predecessor. We evaluated performance using weak- and strong-scaling benchmarks designed to be difficult for dynamic adaptivity. Results show good scaling on up to 1.5M cores and 2M MPI tasks. Detailed timing diagnostics suggest scaling would continue well past that.

  4. Building Scalable PGAS Communication Subsystem on Blue Gene/Q

    SciTech Connect

    Vishnu, Abhinav; Kerbyson, Darren J.; Barker, Kevin J.; van Dam, Hubertus

    2013-05-20

    This paper presents a design of scalable Partitioned Global Address Space (PGAS) communication subsystems on the recently proposed Blue Gene/Q architecture. The proposed design provides in-depth modeling of the communication infrastructure using the Parallel Active Messaging Interface (PAMI). The communication infrastructure is used to design time-space efficient communication protocols for frequently used data-types (contiguous, uniformly non-contiguous) using Remote Direct Memory Access (RDMA) get/put primitives. The proposed design accelerates load balance counters by using asynchronous threads, which are required due to the missing network hardware support for Atomic Memory Operations (AMOs). Under the proposed design, synchronization traffic is reduced by tracking conflicting memory accesses in distributed space, with only a slight increase in space complexity. An evaluation with simple communication benchmarks shows an adjacent-node get latency of 2.89 μs and a peak bandwidth of 1775 MB/s, resulting in ≈99% communication efficiency. The evaluation also shows a reduction in execution time of up to 30% for an NWChem self-consistent field calculation on 4096 processes using the proposed asynchronous-thread-based design.

  5. Scalable Indoor Localization via Mobile Crowdsourcing and Gaussian Process.

    PubMed

    Chang, Qiang; Li, Qun; Shi, Zesen; Chen, Wei; Wang, Weiping

    2016-03-16

    Indoor localization using Received Signal Strength Indication (RSSI) fingerprinting has been extensively studied for decades. The positioning accuracy is highly dependent on the density of the signal database. In areas without calibration data, however, this algorithm breaks down. Building and updating a dense signal database is labor intensive, expensive, and even impossible in some areas. Researchers are continually searching for better algorithms to create and update dense databases more efficiently. In this paper, we propose a scalable indoor positioning algorithm that works both in surveyed and unsurveyed areas. We first propose the Minimum Inverse Distance (MID) algorithm to build a virtual database with uniformly distributed virtual Reference Points (RPs). The area covered by the virtual RPs can be larger than the surveyed area. A Local Gaussian Process (LGP) is then applied to estimate the virtual RPs' RSSI values based on the crowdsourced training data. Finally, we improve the Bayesian algorithm to estimate the user's location using the virtual database. All the parameters are optimized by simulations, and the new algorithm is tested on real-case scenarios. The results show that the new algorithm improves the accuracy by 25.5% in the surveyed area, with an average positioning error below 2.2 m for 80% of the cases. Moreover, the proposed algorithm can localize the users in the neighboring unsurveyed area.
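
    The Local Gaussian Process step estimates RSSI values at virtual reference points from scattered crowdsourced measurements. As a hedged illustration of the underlying regression (these are the standard Gaussian process formulas, not the paper's specific local variant), the predicted mean and variance at a virtual point x* are:

        \[
        \mu(x_*) = k_*^{\top}\bigl(K + \sigma_n^{2} I\bigr)^{-1}\mathbf{y},
        \qquad
        \sigma^{2}(x_*) = k(x_*,x_*) - k_*^{\top}\bigl(K + \sigma_n^{2} I\bigr)^{-1} k_* ,
        \]

    where K is the kernel matrix over the crowdsourced measurement locations, k_* the vector of kernel values between x* and those locations, y the measured RSSI values, and sigma_n^2 the measurement noise variance. The virtual RPs then act as a uniformly spaced, denoised fingerprint database.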

  6. Scalable Design of Paired CRISPR Guide RNAs for Genomic Deletion.

    PubMed

    Pulido-Quetglas, Carlos; Aparicio-Prat, Estel; Arnan, Carme; Polidori, Taisia; Hermoso, Toni; Palumbo, Emilio; Ponomarenko, Julia; Guigo, Roderic; Johnson, Rory

    2017-03-01

    CRISPR-Cas9 technology can be used to engineer precise genomic deletions with pairs of single guide RNAs (sgRNAs). This approach has been widely adopted for diverse applications, from disease modelling of individual loci, to parallelized loss-of-function screens of thousands of regulatory elements. However, no solution has been presented for the unique bioinformatic design requirements of CRISPR deletion. We here present CRISPETa, a pipeline for flexible and scalable paired sgRNA design based on an empirical scoring model. Multiple sgRNA pairs are returned for each target, and any number of targets can be analyzed in parallel, making CRISPETa equally useful for focussed or high-throughput studies. Fast run-times are achieved using a pre-computed off-target database. sgRNA pair designs are output in a convenient format for visualisation and oligonucleotide ordering. We present pre-designed, high-coverage library designs for entire classes of protein-coding and non-coding elements in human, mouse, zebrafish, Drosophila melanogaster and Caenorhabditis elegans. In human cells, we reproducibly observe deletion efficiencies of ≥50% for CRISPETa designs targeting an enhancer and exonic fragment of the MALAT1 oncogene. In the latter case, deletion results in production of desired, truncated RNA. CRISPETa will be useful for researchers seeking to harness CRISPR for targeted genomic deletion, in a variety of model organisms, from single-target to high-throughput scales.

  7. Multi-jagged: A scalable parallel spatial partitioning algorithm

    DOE PAGES

    Deveci, Mehmet; Rajamanickam, Sivasankaran; Devine, Karen D.; ...

    2015-03-18

    Geometric partitioning is fast and effective for load-balancing dynamic applications, particularly those requiring geometric locality of data (particle methods, crash simulations). We present, to our knowledge, the first parallel implementation of a multidimensional-jagged geometric partitioner. In contrast to the traditional recursive coordinate bisection algorithm (RCB), which recursively bisects subdomains perpendicular to their longest dimension until the desired number of parts is obtained, our algorithm does recursive multi-section with a given number of parts in each dimension. By computing multiple cut lines concurrently and intelligently deciding when to migrate data while computing the partition, we minimize data movement compared to efficient implementations of recursive bisection. We demonstrate the algorithm's scalability and quality relative to the RCB implementation in Zoltan on both real and synthetic datasets. Our experiments show that the proposed algorithm performs and scales better than RCB in terms of run-time without degrading the load balance. Lastly, our implementation partitions 24 billion points into 65,536 parts within a few seconds and exhibits near perfect weak scaling up to 6K cores.
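
    The core per-dimension step of a multi-section partitioner is to choose p−1 cut coordinates so that each of the p slabs receives roughly equal weight. The C++ sketch below is a serial, illustrative reduction of that step only (names are assumptions; the actual algorithm computes cuts concurrently and decides when to migrate data):

        #include <algorithm>
        #include <cstddef>
        #include <numeric>
        #include <vector>

        // One-dimensional multi-section: given point coordinates and weights,
        // return p-1 cut positions so each of the p parts holds ~equal weight.
        std::vector<double> multisection_cuts(const std::vector<double>& coords,
                                              const std::vector<double>& weights,
                                              int p) {
            // Sort point indices by coordinate, carrying weights along.
            std::vector<std::size_t> order(coords.size());
            std::iota(order.begin(), order.end(), 0);
            std::sort(order.begin(), order.end(),
                      [&](std::size_t a, std::size_t b) { return coords[a] < coords[b]; });

            double total = std::accumulate(weights.begin(), weights.end(), 0.0);
            std::vector<double> cuts;
            double running = 0.0;
            int next_part = 1;
            for (std::size_t idx : order) {
                running += weights[idx];
                // Emit a cut each time the running weight crosses a target quantile.
                while (next_part < p && running >= next_part * total / p) {
                    cuts.push_back(coords[idx]);
                    ++next_part;
                }
            }
            return cuts;  // p-1 cuts (assuming positive weights)
        }

    Applying this step once per dimension, with a prescribed number of parts in each dimension, yields the jagged decomposition directly instead of the deep recursion used by coordinate bisection.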

  8. Multi-jagged: A scalable parallel spatial partitioning algorithm

    SciTech Connect

    Deveci, Mehmet; Rajamanickam, Sivasankaran; Devine, Karen D.; Catalyurek, Umit V.

    2015-03-18

    Geometric partitioning is fast and effective for load-balancing dynamic applications, particularly those requiring geometric locality of data (particle methods, crash simulations). We present, to our knowledge, the first parallel implementation of a multidimensional-jagged geometric partitioner. In contrast to the traditional recursive coordinate bisection algorithm (RCB), which recursively bisects subdomains perpendicular to their longest dimension until the desired number of parts is obtained, our algorithm does recursive multi-section with a given number of parts in each dimension. By computing multiple cut lines concurrently and intelligently deciding when to migrate data while computing the partition, we minimize data movement compared to efficient implementations of recursive bisection. We demonstrate the algorithm's scalability and quality relative to the RCB implementation in Zoltan on both real and synthetic datasets. Our experiments show that the proposed algorithm performs and scales better than RCB in terms of run-time without degrading the load balance. Lastly, our implementation partitions 24 billion points into 65,536 parts within a few seconds and exhibits near perfect weak scaling up to 6K cores.

  9. A Novel Coarsening Method for Scalable and Efficient Mesh Generation

    SciTech Connect

    Yoo, A; Hysom, D; Gunney, B

    2010-12-02

    matrix-vector multiplication can be performed locally on each processor and hence minimize communication. Furthermore, a good graph partitioning scheme ensures an equal amount of computation is performed on each processor. Graph partitioning is a well known NP-complete problem, and thus the most commonly used graph partitioning algorithms employ some form of heuristics. These algorithms vary in terms of their complexity, partition generation time, and the quality of partitions, and they tend to trade off these factors. A significant challenge we are currently facing at the Lawrence Livermore National Laboratory is how to partition very large meshes on massive-size distributed memory machines like IBM BlueGene/P, where scalability becomes a big issue. For example, we have found that ParMetis, a very popular graph partitioning tool, can only scale to 16K processors. An ideal graph partitioning method in such an environment should be fast and scale to very large meshes, while producing high quality partitions. This is an extremely challenging task, as to scale to that level, the partitioning algorithm should be simple and be able to produce partitions that minimize inter-processor communication and balance the load imposed on the processors. Our goals in this work are two-fold: (1) to develop a new scalable graph partitioning method with good load balancing and communication reduction capability, and (2) to study the performance of the proposed partitioning method on very large parallel machines using actual data sets and compare the performance to that of existing methods. The proposed method achieves the desired scalability by reducing the mesh size. For this, it coarsens an input mesh into a smaller mesh by coalescing the vertices and edges of the original mesh into a set of mega-vertices and mega-edges. A new coarsening method called the brick algorithm is developed in this research. In the brick algorithm, the zones in a given mesh are first grouped into fixed size

  10. Mindfulness and Compassion: An Examination of Mechanism and Scalability

    PubMed Central

    Lim, Daniel; Condon, Paul; DeSteno, David

    2015-01-01

    Emerging evidence suggests that meditation engenders prosocial behaviors meant to benefit others. However, the robustness, underlying mechanisms, and potential scalability of such effects remain open to question. The current experiment employed an ecologically valid situation that exposed participants to a person in visible pain. Following three-week, mobile-app based training courses in mindfulness meditation or cognitive skills (i.e., an active control condition), participants arrived at a lab individually to complete purported measures of cognitive ability. Upon entering a public waiting area outside the lab that contained three chairs, participants seated themselves in the last remaining unoccupied chair; confederates occupied the other two. As the participant sat and waited, a third confederate using crutches and a large walking boot entered the waiting area while displaying discomfort. Compassionate responding was assessed by whether participants gave up their seat to allow the uncomfortable confederate to sit, thereby relieving her pain. Participants’ levels of empathic accuracy were also assessed. As predicted, participants assigned to the mindfulness meditation condition gave up their seats more frequently than did those assigned to the active control group. In addition, empathic accuracy was not increased by mindfulness practice, suggesting that mindfulness-enhanced compassionate behavior does not stem from associated increases in the ability to decode the emotional experiences of others. PMID:25689827

  11. Superconductor digital electronics: Scalability and energy efficiency issues (Review Article)

    NASA Astrophysics Data System (ADS)

    Tolpygo, Sergey K.

    2016-05-01

    Superconductor digital electronics using Josephson junctions as ultrafast switches and magnetic-flux encoding of information was proposed over 30 years ago as a sub-terahertz clock frequency alternative to semiconductor electronics based on complementary metal-oxide-semiconductor (CMOS) transistors. Recently, interest in developing superconductor electronics has been renewed due to a search for energy saving solutions in applications related to high-performance computing. The current state of superconductor electronics and fabrication processes are reviewed in order to evaluate whether this electronics is scalable to a very large scale integration (VLSI) required to achieve computation complexities comparable to CMOS processors. A fully planarized process at MIT Lincoln Laboratory, perhaps the most advanced process developed so far for superconductor electronics, is used as an example. The process has nine superconducting layers: eight Nb wiring layers with the minimum feature size of 350 nm, and a thin superconducting layer for making compact high-kinetic-inductance bias inductors. All circuit layers are fully planarized using chemical mechanical planarization (CMP) of SiO2 interlayer dielectric. The physical limitations imposed on the circuit density by Josephson junctions, circuit inductors, shunt and bias resistors, etc., are discussed. Energy dissipation in superconducting circuits is also reviewed in order to estimate whether this technology, which requires cryogenic refrigeration, can be energy efficient. Fabrication process development required for increasing the density of superconductor digital circuits by a factor of ten and achieving densities above 10⁷ Josephson junctions per cm² is described.

  12. Scalable TCP-friendly Video Distribution for Heterogeneous Clients

    NASA Astrophysics Data System (ADS)

    Zink, Michael; Griwodz, Carsten; Schmitt, Jens; Steinmetz, Ralf

    2003-01-01

    This paper investigates an architecture and implementation for the use of a TCP-friendly protocol in a scalable video distribution system for hierarchically encoded layered video. The design supports a variety of heterogeneous clients, because recent developments have shown that access network and client capabilities differ widely in today's Internet. The distribution system presented here consists of video servers, proxy caches, and clients that make use of a TCP-friendly rate control (TFRC) to perform congestion controlled streaming of layer encoded video. The data transfer protocol of the system is RTP compliant, yet it integrates protocol elements for congestion control with protocol elements for retransmission, which is necessary for lossless transfer of contents into proxy caches. The control protocol RTSP is used to negotiate capabilities, such as support for congestion control or retransmission. By tests performed with our experimental platform in a lab test and over the Internet, we show that congestion controlled streaming of layer encoded video through proxy caches is a valid means of supporting heterogeneous clients. We show that filtering of layers depending on a TFRC-controlled permissible bandwidth allows the preferred delivery of the most relevant layers to end-systems while additional layers can be delivered to the cache server. We experiment with uncontrolled delivery from the proxy cache to the client as well, which may result in random loss and bandwidth waste but also a higher goodput, and compare these two approaches.
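
    The TCP-friendly rate control mentioned here caps the streaming rate at what a conformant TCP flow would achieve under the same loss and delay. The commonly cited TCP throughput equation used by TFRC (per RFC 3448, shown here as background with one packet acknowledged per ACK; it is not a result of this paper) is:

        \[
        X \;=\; \frac{s}{R\sqrt{\tfrac{2p}{3}} \;+\; t_{\mathrm{RTO}}\,\Bigl(3\sqrt{\tfrac{3p}{8}}\Bigr)\, p\,\bigl(1 + 32 p^{2}\bigr)} ,
        \]

    where X is the allowed send rate, s the packet size, R the round-trip time, p the loss event rate, and t_RTO the retransmission timeout. Layers of the hierarchically encoded video are then added or dropped so that the aggregate rate stays below X.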

  13. A scalable population code for time in the striatum.

    PubMed

    Mello, Gustavo B M; Soares, Sofia; Paton, Joseph J

    2015-05-04

    To guide behavior and learn from its consequences, the brain must represent time over many scales. Yet, the neural signals used to encode time in the seconds-to-minute range are not known. The striatum is a major input area of the basal ganglia associated with learning and motor function. Previous studies have also shown that the striatum is necessary for normal timing behavior. To address how striatal signals might be involved in timing, we recorded from striatal neurons in rats performing an interval timing task. We found that neurons fired at delays spanning tens of seconds and that this pattern of responding reflected the interaction between time and the animals' ongoing sensorimotor state. Surprisingly, cells rescaled responses in time when intervals changed, indicating that striatal populations encoded relative time. Moreover, time estimates decoded from activity predicted timing behavior as animals adjusted to new intervals, and disrupting striatal function led to a decrease in timing performance. These results suggest that striatal activity forms a scalable population code for time, providing timing signals that animals use to guide their actions.

  14. Unbalanced multiple description wavelet coding for scalable video transmission

    NASA Astrophysics Data System (ADS)

    Choupani, Roya; Wong, Stephan; Tolun, Mehmet

    2012-10-01

    Scalable video coding and multiple description coding are the two different adaptation schemes for video transmission over heterogeneous and best-effort networks such as the Internet. We propose a new method to encode video for unreliable networks with rate adaptation capability. Our proposed method groups three-dimensional discrete wavelet transform coefficients into different descriptions and applies a modified embedded zerotree data structure for rate adaptation. The proposed method optimizes the bit-rates of the descriptions with respect to the channel bit rates and the maximum acceptable distortion. The experimental results in the presence of one description loss indicate that, on average, videos at a rate of 1000 Kbit/s are reconstructed with a Y-component peak signal-to-noise ratio (Y-PSNR) value of 36.2 dB. The dynamic allocation of descriptions to the network channels is optimized for rate-distortion minimization. The improvement in terms of Y-PSNR achieved by rate-distortion optimization is between 0.7 and 5.3 dB at different bit rates.

  15. A highly scalable, interoperable clinical decision support service

    PubMed Central

    Goldberg, Howard S; Paterno, Marilyn D; Rocha, Beatriz H; Schaeffer, Molly; Wright, Adam; Erickson, Jessica L; Middleton, Blackford

    2014-01-01

    Objective To create a clinical decision support (CDS) system that is shareable across healthcare delivery systems and settings over large geographic regions. Materials and methods The enterprise clinical rules service (ECRS) realizes nine design principles through a series of enterprise java beans and leverages off-the-shelf rules management systems in order to provide consistent, maintainable, and scalable decision support in a variety of settings. Results The ECRS is deployed at Partners HealthCare System (PHS) and is in use for a series of trials by members of the CDS consortium, including internally developed systems at PHS, the Regenstrief Institute, and vendor-based systems deployed at locations in Oregon and New Jersey. Performance measures indicate that the ECRS provides sub-second response time when measured apart from services required to retrieve data and assemble the continuity of care document used as input. Discussion We consider related work, design decisions, comparisons with emerging national standards, and discuss uses and limitations of the ECRS. Conclusions ECRS design, implementation, and use in CDS consortium trials indicate that it provides the flexibility and modularity needed for broad use and performs adequately. Future work will investigate additional CDS patterns, alternative methods of data passing, and further optimizations in ECRS performance. PMID:23828174

  16. Simple and scalable method for peptide inhalable powder production.

    PubMed

    Schoubben, Aurélie; Blasi, Paolo; Giovagnoli, Stefano; Ricci, Maurizio; Rossi, Carlo

    2010-01-31

    The aim of this work was to produce capreomycin dry powder and capreomycin-loaded PLGA microparticles intended for tuberculosis inhalation therapy, using simple and scalable methods. Capreomycin physico-chemical characteristics were modified by hydrophobic ion pairing with oleate. The powder suspension was processed by high pressure homogenization and spray-dried. Spray-drying was also used to prepare capreomycin oleate (CO) loaded PLGA microparticles. CO powder was suspended in the organic phase containing PLGA and the suspension was spray-dried. Particle dimensions were determined using photon correlation spectroscopy and an Accusizer C770. Morphology was investigated by scanning electron microscopy (SEM) and capreomycin content by spectrophotometry. Capreomycin properties were modified to increase polymeric microparticle content and obtain respirable CO powder. High pressure homogenization reduced CO particle dimensions, yielding one population in the micrometric (6.18 μm) range and one in the nanometric (approximately 317 nm) range. SEM images showed somewhat non-spherical particles with a wrinkled surface, generally suitable for inhalation. PLGA particles were characterized by a high encapsulation efficiency (about 90%) and dimensions (approximately 6.69 μm) suitable for inhalation. In conclusion, two different formulations were successfully developed for capreomycin pulmonary delivery. The hydrophobic ion pair strategy led to a noticeable drug content increase.

  17. Scalable Photogrammetric Motion Capture System "mosca": Development and Application

    NASA Astrophysics Data System (ADS)

    Knyaz, V. A.

    2015-05-01

    A wide variety of applications (from industrial to entertainment) need reliable and accurate 3D information about the motion of an object and its parts. Very often the motion is rather fast, as in vehicle movement, sports biomechanics, or the animation of cartoon characters. Motion capture systems based on different physical principles are used for these purposes. Vision-based systems have great potential for high accuracy and a high degree of automation thanks to progress in image processing and analysis. A scalable, inexpensive motion capture system has been developed as a convenient and flexible tool for solving various tasks requiring 3D motion analysis. It is based on photogrammetric techniques of 3D measurement and provides high-speed image acquisition, high accuracy of 3D measurements, and highly automated processing of captured data. Depending on the application, the system can easily be modified for different working areas from 100 mm to 10 m. The developed motion capture system uses 2 to 4 machine vision cameras to acquire video sequences of object motion. All cameras work in synchronized mode at frame rates up to 100 frames per second under the control of a personal computer, providing the possibility of accurate calculation of the 3D coordinates of points of interest. The system was used in a set of different application fields and demonstrated high accuracy and a high level of automation.

  18. Scalable and Axiomatic Ranking of Network Role Similarity

    PubMed Central

    Jin, Ruoming; Lee, Victor E.; Li, Longjie

    2014-01-01

    A key task in analyzing social networks and other complex networks is role analysis: describing and categorizing nodes according to how they interact with other nodes. Two nodes have the same role if they interact with equivalent sets of neighbors. The most fundamental role equivalence is automorphic equivalence. Unfortunately, the fastest algorithms known for graph automorphism are nonpolynomial. Moreover, since exact equivalence is rare, a more meaningful task is measuring the role similarity between any two nodes. This task is closely related to the structural or link-based similarity problem that SimRank addresses. However, SimRank and other existing similarity measures are not sufficient because they are not guaranteed to recognize automorphically or structurally equivalent nodes. This paper makes two contributions. First, we present and justify several axiomatic properties necessary for a role similarity measure or metric. Second, we present RoleSim, a new similarity metric which satisfies these axioms and which can be computed with a simple iterative algorithm. We rigorously prove that RoleSim satisfies all these axiomatic properties. We also introduce Iceberg RoleSim, a scalable algorithm which discovers all pairs with RoleSim scores above a user-defined threshold θ. We demonstrate the interpretative power of RoleSim on both synthetic and real datasets. PMID:25383066

  19. High Performance Storage System Scalability: Architecture, Implementation, and Experience

    SciTech Connect

    Watson, R W

    2005-01-05

    The High Performance Storage System (HPSS) provides scalable hierarchical storage management (HSM), archive, and file system services. Its design, implementation and current dominant use are focused on HSM and archive services. It is also a general-purpose, global, shared, parallel file system, potentially useful in other application domains. When HPSS design and implementation began over a decade ago, scientific computing power and storage capabilities at a site, such as a DOE national laboratory, were measured in a few 10s of gigaops, data archived in HSMs in a few 10s of terabytes at most, data throughput rates to an HSM in a few megabytes/s, and daily throughput with the HSM in a few gigabytes/day. At that time, the DOE national laboratories and IBM HPSS design team recognized that we were headed for a data storage explosion driven by computing power rising to teraops/petaops, requiring data stored in HSMs to rise to petabytes and beyond, data transfer rates with the HSM to rise to gigabytes/s and higher, and daily throughput with an HSM in 10s of terabytes/day. This paper discusses HPSS architectural, implementation and deployment experiences that contributed to its success in meeting the above orders of magnitude scaling targets. We also discuss areas that need additional attention as we continue significant scaling into the future.

  20. Toward SVOPME, a Scalable Virtual Organization Privileges Management Environment

    NASA Astrophysics Data System (ADS)

    Wang, Nanbor; Garzoglio, Gabriele; Ananthan, Balamurali; Timm, Steven; Levshina, Tanya

    Grids enable uniform access to resources by implementing standard interfaces to resource gateways. In the Open Science Grid (OSG), privileges are granted on the basis of the user's membership in a Virtual Organization (VO). However, individual Grid sites are solely responsible for determining and controlling access privileges to resources. While this guarantees that the sites retain full control over access rights, it often leads to heterogeneous VO privileges throughout the Grid and hardly fits with the Grid paradigm of uniform access to resources. To address these challenges, we developed the Scalable Virtual Organization Privileges Management Environment (SVOPME), which provides tools for VOs to define, publish, and verify desired privileges. Moreover, SVOPME provides tools for grid sites to analyze site access policies for various resources, verify compliance with preferred VO policies, and generate directives for site administrators on how the local access policies can be amended to achieve such compliance without taking control of local configurations away from site administrators. This paper describes how SVOPME implements privilege management tools for the OSG and our experiences in deploying and running the tools in a test bed. Finally, we outline our plan to continue to improve SVOPME and have it included as part of the standard Grid software distributions.

  1. Long-range interactions and parallel scalability in molecular simulations

    NASA Astrophysics Data System (ADS)

    Patra, Michael; Hyvönen, Marja T.; Falck, Emma; Sabouri-Ghomi, Mohsen; Vattulainen, Ilpo; Karttunen, Mikko

    2007-01-01

    Typical biomolecular systems such as cellular membranes, DNA, and protein complexes are highly charged. Thus, efficient and accurate treatment of electrostatic interactions is of great importance in computational modeling of such systems. We have employed the GROMACS simulation package to perform extensive benchmarking of different commonly used electrostatic schemes on a range of computer architectures (Pentium-4, IBM Power 4, and Apple/IBM G5) for single processor and parallel performance up to 8 nodes—we have also tested the scalability on four different networks, namely Infiniband, GigaBit Ethernet, Fast Ethernet, and nearly uniform memory architecture, i.e. communication between CPUs is possible by directly reading from or writing to other CPUs' local memory. It turns out that the particle-mesh Ewald method (PME) performs surprisingly well and offers competitive performance unless parallel runs on PC hardware with older network infrastructure are needed. Lipid bilayers of sizes 128, 512 and 2048 lipid molecules were used as the test systems representing typical cases encountered in biomolecular simulations. Our results enable an accurate prediction of computational speed on most current computing systems, both for serial and parallel runs. These results should be helpful in, for example, choosing the most suitable configuration for a small departmental computer cluster.

  2. Detailed Modeling and Evaluation of a Scalable Multilevel Checkpointing System

    SciTech Connect

    Mohror, Kathryn; Moody, Adam; Bronevetsky, Greg; de Supinski, Bronis R.

    2014-09-01

    High-performance computing (HPC) systems are growing more powerful by utilizing more components. As the system mean time before failure correspondingly drops, applications must checkpoint frequently to make progress. But at scale, the cost of checkpointing becomes prohibitive. A solution to this problem is multilevel checkpointing, which employs multiple types of checkpoints in a single run: lightweight checkpoints can handle the most common failure modes, while more expensive checkpoints can handle severe failures. We designed a multilevel checkpointing library, the Scalable Checkpoint/Restart (SCR) library, that writes lightweight checkpoints to node-local storage in addition to the parallel file system. We present probabilistic Markov models of SCR's performance. We show that on future large-scale systems, SCR can lead to a gain in machine efficiency of up to 35 percent, and reduce the load on the parallel file system by a factor of two. In addition, we predict that checkpoint scavenging, or only writing checkpoints to the parallel file system on application termination, can reduce the load on the parallel file system by 20× on today's systems and still maintain high application efficiency.
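
    A useful back-of-the-envelope companion to the Markov models described here is the classic Young/Daly estimate of the optimal checkpoint interval, which shows why cheap node-local checkpoints can be taken far more often than parallel-file-system checkpoints (a first-order approximation shown for context, not the SCR model itself):

        \[
        \tau_{\mathrm{opt}} \;\approx\; \sqrt{2\,\delta\, M},
        \]

    where delta is the time to write one checkpoint and M is the system mean time between failures. For example, with M = 12 h, a 10 s node-local checkpoint gives tau_opt ≈ sqrt(2 · 10 · 43200) ≈ 15 min, whereas a 10 min parallel-file-system checkpoint gives tau_opt ≈ sqrt(2 · 600 · 43200) ≈ 2 h.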

  3. Toward SVOPME, a Scalable Virtual Organization Privileges Management Environment

    NASA Astrophysics Data System (ADS)

    Wang, Nanbor; Garzoglio, Gabriele; Ananthan, Balamurali; Timm, Steven

    2011-12-01

    Grids enable uniform access to resources by implementing standard interfaces to resource gateways. In the Open Science Grid (OSG), privileges are granted on the basis of the user's membership in a Virtual Organization (VO). However, user privilege definitions and enforcements are administered separately by VOs and Grid sites. Such partitioning can potentially introduce inconsistent user privileges throughout the Grid and break the Grid paradigm of uniform access to resources. There is a need for an automated privilege management mechanism for a VO to codify privilege policies granted to its users, to propagate the policies to grid sites, and to identify and suggest remedies for non-supported VO privileges at individual sites. The Scalable Virtual Organization Privileges Management Environment (SVOPME) addresses this challenge in the context of the Open Science Grid (OSG). SVOPME provides tools for VOs to define and publish desired privileges. At a site, SVOPME tools help analyze access policies defined for VO users, verify policy consistency between VOs and sites, and suggest site configuration changes. This paper presents the designs and features of SVOPME tools and the lessons learned in applying SVOPME tools for OSG VOs and sites. Furthermore, we will outline future improvements to SVOPME tools to adapt to a range of different site configurations and new privilege policies.

  4. An open, interoperable, and scalable prehospital information technology network architecture.

    PubMed

    Landman, Adam B; Rokos, Ivan C; Burns, Kevin; Van Gelder, Carin M; Fisher, Roger M; Dunford, James V; Cone, David C; Bogucki, Sandy

    2011-01-01

    Some of the most intractable challenges in prehospital medicine include response time optimization, inefficiencies at the emergency medical services (EMS)-emergency department (ED) interface, and the ability to correlate field interventions with patient outcomes. Information technology (IT) can address these and other concerns by ensuring that system and patient information is received when and where it is needed, is fully integrated with prior and subsequent patient information, and is securely archived. Some EMS agencies have begun adopting information technologies, such as wireless transmission of 12-lead electrocardiograms, but few agencies have developed a comprehensive plan for management of their prehospital information and integration with other electronic medical records. This perspective article highlights the challenges and limitations of integrating IT elements without a strategic plan, and proposes an open, interoperable, and scalable prehospital information technology (PHIT) architecture. The two core components of this PHIT architecture are 1) routers with broadband network connectivity to share data between ambulance devices and EMS system information services and 2) an electronic patient care report to organize and archive all electronic prehospital data. To successfully implement this comprehensive PHIT architecture, data and technology requirements must be based on best available evidence, and the system must adhere to health data standards as well as privacy and security regulations. Recent federal legislation prioritizing health information technology may position federal agencies to help design and fund PHIT architectures.

  5. Scalable video compression using longer motion compensated temporal filters

    NASA Astrophysics Data System (ADS)

    Golwelkar, Abhijeet V.; Woods, John W.

    2003-06-01

    Three-dimensional (3-D) subband/wavelet coding using a motion compensated temporal filter (MCTF) is emerging as a very effective structure for highly scalable video coding. Most previous work has used two-tap Haar filters for the temporal analysis/synthesis. To make better use of the temporal redundancies, we are proposing an MCTF scheme based on longer biorthogonal filters. We show a lifting-based coder capable of subpixel-accurate motion compensation. If we retain the fixed size GOP structure of the Haar filter MCTFs, we need to use symmetric extensions at both ends of the GOP. This gives rise to loss of coding efficiency at the GOP boundaries, resulting in significant PSNR drops there. This performance can be considerably improved by using a 'sliding window' in place of the GOP block. We employ the 5/3 filter, and its non-orthogonality causes PSNR variation, which can be reduced by employing filter-based weighting coefficients. Overall the longer filters have a higher coding gain than the Haar filters and show significant improvement in average PSNR at high bit rates. However, a doubling in the number of motion vectors to be transmitted translates to a drop in PSNR at the lower video bit rates.
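
    The 5/3 biorthogonal filter used here is typically implemented in lifting form, which is what makes it straightforward to insert motion compensation into the predict and update steps. The standard (non-motion-compensated) 5/3 lifting equations are shown below for background; in an MCTF the even/odd samples become motion-aligned frames:

        \[
        h[n] \;=\; x[2n+1] \;-\; \tfrac{1}{2}\bigl(x[2n] + x[2n+2]\bigr)
        \qquad\text{(predict step: high-pass/detail frame)},
        \]
        \[
        l[n] \;=\; x[2n] \;+\; \tfrac{1}{4}\bigl(h[n-1] + h[n]\bigr)
        \qquad\text{(update step: low-pass/approximation frame)}.
        \]

    Because each high-pass frame references two neighboring frames (rather than one, as with Haar), the filter exploits more temporal redundancy but requires roughly twice as many motion vectors, which is the trade-off noted in the abstract.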

  6. Scalable Indoor Localization via Mobile Crowdsourcing and Gaussian Process

    PubMed Central

    Chang, Qiang; Li, Qun; Shi, Zesen; Chen, Wei; Wang, Weiping

    2016-01-01

    Indoor localization using Received Signal Strength Indication (RSSI) fingerprinting has been extensively studied for decades. The positioning accuracy is highly dependent on the density of the signal database. In areas without calibration data, however, this algorithm breaks down. Building and updating a dense signal database is labor intensive, expensive, and even impossible in some areas. Researchers are continually searching for better algorithms to create and update dense databases more efficiently. In this paper, we propose a scalable indoor positioning algorithm that works both in surveyed and unsurveyed areas. We first propose the Minimum Inverse Distance (MID) algorithm to build a virtual database with uniformly distributed virtual Reference Points (RPs). The area covered by the virtual RPs can be larger than the surveyed area. A Local Gaussian Process (LGP) is then applied to estimate the virtual RPs’ RSSI values based on the crowdsourced training data. Finally, we improve the Bayesian algorithm to estimate the user’s location using the virtual database. All the parameters are optimized by simulations, and the new algorithm is tested on real-case scenarios. The results show that the new algorithm improves the accuracy by 25.5% in the surveyed area, with an average positioning error below 2.2 m for 80% of the cases. Moreover, the proposed algorithm can localize the users in the neighboring unsurveyed area. PMID:26999139

  7. Advances in Patch-Based Adaptive Mesh Refinement Scalability

    SciTech Connect

    Gunney, Brian T.N.; Anderson, Robert W.

    2015-12-18

    Patch-based structured adaptive mesh refinement (SAMR) is widely used for high-resolution simulations. Combined with modern supercomputers, it could provide simulations of unprecedented size and resolution. A persistent challenge for this combination has been managing dynamically adaptive meshes on more and more MPI tasks. The distributed mesh management scheme in SAMRAI has made some progress in SAMR scalability, but early algorithms still had trouble scaling past the regime of 10⁵ MPI tasks. This work provides two critical SAMR regridding algorithms, which are integrated into that scheme to ensure efficiency of the whole. The clustering algorithm is an extension of the tile-clustering approach, making it more flexible and efficient in both clustering and parallelism. The partitioner is a new algorithm designed to prevent the network congestion experienced by its predecessor. We evaluated performance using weak- and strong-scaling benchmarks designed to be difficult for dynamic adaptivity. Results show good scaling on up to 1.5M cores and 2M MPI tasks. Detailed timing diagnostics suggest scaling would continue well past that.

  8. Scalable Influence Estimation in Continuous-Time Diffusion Networks

    PubMed Central

    Du, Nan; Song, Le; Gomez-Rodriguez, Manuel; Zha, Hongyuan

    2014-01-01

    If a piece of information is released from a media site, can we predict whether it may spread to one million web pages in a month? This influence estimation problem is very challenging since both the time-sensitive nature of the task and the requirement of scalability need to be addressed simultaneously. In this paper, we propose a randomized algorithm for influence estimation in continuous-time diffusion networks. Our algorithm can estimate the influence of every node in a network with |V| nodes and |E| edges to an accuracy of ε using n = O(1/ε²) randomizations and, up to logarithmic factors, O(n|E| + n|V|) computations. When used as a subroutine in a greedy influence maximization approach, our proposed algorithm is guaranteed to find a set of C nodes with an influence of at least (1 − 1/e) OPT − 2Cε, where OPT is the optimal value. Experiments on both synthetic and real-world data show that the proposed algorithm can easily scale up to networks of millions of nodes while significantly improving over the previous state of the art in terms of the accuracy of the estimated influence and the quality of the selected nodes in maximizing the influence. PMID:26752940
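
    To make the stated complexity concrete (this is a worked reading of the bound quoted above, not an additional result): estimating influence to within ε = 0.1 requires on the order of n = O(1/ε²) = O(100) randomizations, and the total work grows only near-linearly in the graph size,

        \[
        n \;=\; O\!\left(\frac{1}{\varepsilon^{2}}\right), \qquad
        \text{total cost} \;=\; \tilde{O}\bigl(n\,|E| + n\,|V|\bigr),
        \]

    so halving the target error roughly quadruples the number of randomizations while the per-randomization cost remains linear in |V| + |E|.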

  9. A Scalable Framework For Segmenting Magnetic Resonance Images

    PubMed Central

    Hore, Prodip; Goldgof, Dmitry B.; Gu, Yuhua; Maudsley, Andrew A.; Darkazanli, Ammar

    2009-01-01

    A fast, accurate and fully automatic method of segmenting magnetic resonance images of the human brain is introduced. The approach scales well allowing fast segmentations of fine resolution images. The approach is based on modifications of the soft clustering algorithm, fuzzy c-means, that enable it to scale to large data sets. Two types of modifications to create incremental versions of fuzzy c-means are discussed. They are much faster when compared to fuzzy c-means for medium to extremely large data sets because they work on successive subsets of the data. They are comparable in quality to application of fuzzy c-means to all of the data. The clustering algorithms coupled with inhomogeneity correction and smoothing are used to create a framework for automatically segmenting magnetic resonance images of the human brain. The framework is applied to a set of normal human brain volumes acquired from different magnetic resonance scanners using different head coils, acquisition parameters and field strengths. Results are compared to those from two widely used magnetic resonance image segmentation programs, Statistical Parametric Mapping and the FMRIB Software Library (FSL). The results are comparable to FSL while providing significant speed-up and better scalability to larger volumes of data. PMID:20046893
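
    For context on the clustering step being made incremental, the standard fuzzy c-means iteration alternates between the membership and centroid updates below (these are the textbook formulas with fuzzifier m > 1; the incremental variants in the paper apply such updates to successive subsets of the data):

        \[
        u_{ij} \;=\; \left[\sum_{k=1}^{c}\left(\frac{\lVert x_i - c_j\rVert}{\lVert x_i - c_k\rVert}\right)^{\frac{2}{m-1}}\right]^{-1},
        \qquad
        c_j \;=\; \frac{\sum_{i} u_{ij}^{\,m}\, x_i}{\sum_{i} u_{ij}^{\,m}} ,
        \]

    where u_ij is the membership of voxel x_i in cluster j and c_j is the cluster centroid. Processing the data in chunks keeps the memory footprint bounded, which is what allows the framework to scale to large MRI volumes.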

  10. Scalable Library for the Parallel Solution of Sparse Linear Systems

    SciTech Connect

    Jones, Mark; Plassmann, Paul E.

    1993-07-14

    BlockSolve is a scalable parallel software library for the solution of large sparse, symmetric systems of linear equations. It runs on a variety of parallel architectures and can easily be ported to others. BlockSolve is primarily intended for the solution of sparse linear systems that arise from physical problems having multiple degrees of freedom at each node point. For example, when the finite element method is used to solve practical problems in structural engineering, each node will typically have anywhere from 3-6 degrees of freedom associated with it. BlockSolve is written to take advantage of problems of this nature; however, it is still reasonably efficient for problems that have only one degree of freedom associated with each node, such as the three-dimensional Poisson problem. It does not require that the matrices have any particular structure other than being sparse and symmetric. BlockSolve is intended to be used within real application codes. It is designed to work best in the context of our experience, which indicates that most application codes solve the same linear systems with several different right-hand sides and/or repeatedly solve linear systems with the same structure but different matrix values.

  11. Jumping-Droplet-Enhanced Condensation on Scalable Superhydrophobic Nanostructured Surfaces

    SciTech Connect

    Miljkovic, N; Enright, R; Nam, Y; Lopez, K; Dou, N; Sack, J; Wang, E

    2013-01-09

    When droplets coalesce on a superhydrophobic nanostructured surface, the resulting droplet can jump from the surface due to the release of excess surface energy. If designed properly, these superhydrophobic nanostructured surfaces can not only allow for easy droplet removal at micrometric length scales during condensation but also promise to enhance heat transfer performance. However, the rationale for the design of an ideal nanostructured surface as well as heat transfer experiments demonstrating the advantage of this jumping behavior are lacking. Here, we show that silanized copper oxide surfaces created via a simple fabrication method can achieve highly efficient jumping-droplet condensation heat transfer. We experimentally demonstrated a 25% higher overall heat flux and 30% higher condensation heat transfer coefficient compared to state-of-the-art hydrophobic condensing surfaces at low supersaturations (<1.12). This work not only shows significant condensation heat transfer enhancement but also promises a low cost and scalable approach to increase efficiency for applications such as atmospheric water harvesting and dehumidification. Furthermore, the results offer insights and an avenue to achieve high flux superhydrophobic condensation.

  12. Scalable, Low-Noise Architecture for Integrated Terahertz Imagers

    NASA Astrophysics Data System (ADS)

    Gergelyi, Domonkos; Földesy, Péter; Zarándy, Ákos

    2015-06-01

    We propose a scalable, low-noise imager architecture for terahertz recordings that helps to build large-scale integrated arrays from any field-effect transistor (FET)- or HEMT-based terahertz detector. It enhances the signal-to-noise ratio (SNR) by inherently enabling complex sampling schemes. The distinguishing feature of the architecture is its serially connected detectors with electronically controllable photoresponse. We show that this architecture facilitates room-temperature imaging by decreasing the low-noise amplifier (LNA) noise to one-sixteenth that of a non-serial sensor while also reducing the number of multiplexed signals in the same proportion. The serially coupled architecture can be combined with existing read-out circuit organizations to create high-resolution, coarse-grain sensor arrays. Moreover, it adds the capability to suppress overall noise with increasing array size. The theoretical considerations are validated on a 4-by-4 detector array manufactured in a 180 nm feature-size standard CMOS technology. The detector array is integrated with a low-noise AC-coupled amplifier of 40 dB gain and has a resonant peak at 460 GHz with 200 kV/W overall sensitivity.

  13. Developing a scalable modeling architecture for studying survivability technologies

    NASA Astrophysics Data System (ADS)

    Mohammad, Syed; Bounker, Paul; Mason, James; Brister, Jason; Shady, Dan; Tucker, David

    2006-05-01

    To facilitate interoperability of models in a scalable environment, and provide a relevant virtual environment in which Survivability technologies can be evaluated, the US Army Research Development and Engineering Command (RDECOM) Modeling Architecture for Technology Research and Experimentation (MATREX) Science and Technology Objective (STO) program has initiated the Survivability Thread which will seek to address some of the many technical and programmatic challenges associated with the effort. In coordination with different Thread customers, such as the Survivability branches of various Army labs, a collaborative group has been formed to define the requirements for the simulation environment that would in turn provide them a value-added tool for assessing models and gauge system-level performance relevant to Future Combat Systems (FCS) and the Survivability requirements of other burgeoning programs. An initial set of customer requirements has been generated in coordination with the RDECOM Survivability IPT lead, through the Survivability Technology Area at RDECOM Tank-automotive Research Development and Engineering Center (TARDEC, Warren, MI). The results of this project are aimed at a culminating experiment and demonstration scheduled for September, 2006, which will include a multitude of components from within RDECOM and provide the framework for future experiments to support Survivability research. This paper details the components with which the MATREX Survivability Thread was created and executed, and provides insight into the capabilities currently demanded by the Survivability faculty within RDECOM.

  14. Developing highly scalable fluid solvers for enabling multiphysics simulation.

    SciTech Connect

    Clausen, Jonathan R

    2013-03-01

    We performed an investigation into explicit algorithms for the simulation of incompressible flows using methods with a finite, but small, amount of compressibility added. Such methods include the artificial compressibility method and the lattice-Boltzmann method. The impetus for investigating such techniques stems from the increasing use of parallel computation at all levels (processors, clusters, and graphics processing units). Explicit algorithms have the potential to leverage these resources. In our investigation, a new form of artificial compressibility was derived. This method, referred to as the Entropically Damped Artificial Compressibility (EDAC) method, demonstrated superior results to traditional artificial compressibility methods by damping the numerical acoustic waves associated with these methods. Performance nearing that of the lattice-Boltzmann technique was observed, without the requirement of recasting the problem in terms of particle distribution functions; continuum variables may be used. Several example problems were investigated using finite-difference and finite-element discretizations of the EDAC equations. Example problems included lid-driven cavity flow, a convecting Taylor-Green vortex, a doubly periodic shear layer, freely decaying turbulence, and flow over a square cylinder. Additionally, a scalability study was performed using in excess of one million processing cores. Explicit methods were found to have desirable scaling properties; however, some robustness and general applicability issues remained.
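
    Artificial compressibility methods replace the incompressibility constraint with an evolution equation for pressure so that the whole system can be advanced explicitly. A minimal, schematic sketch of this idea is shown below; the first equation is the classical artificial compressibility form, and the entropically damped variant adds a pressure-diffusion term to damp the spurious acoustic waves (the exact equations and coefficients used in the report are not reproduced here):

        \[
        \frac{\partial p}{\partial t} + \rho\, c^{2}\, \nabla\!\cdot\!\mathbf{u} \;=\; 0
        \qquad\text{(classical artificial compressibility)},
        \]
        \[
        \frac{\partial p}{\partial t} + \rho\, c^{2}\, \nabla\!\cdot\!\mathbf{u} \;=\; \nu\, \nabla^{2} p
        \qquad\text{(entropically damped variant, schematically)},
        \]

    where c is an artificial speed of sound chosen large enough that the residual compressibility stays small.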

  15. Scalable Virtual Network Mapping Algorithm for Internet-Scale Networks

    NASA Astrophysics Data System (ADS)

    Yang, Qiang; Wu, Chunming; Zhang, Min

    The proper allocation of network resources from a common physical substrate to a set of virtual networks (VNs) is one of the key technical challenges of network virtualization. While a variety of state-of-the-art algorithms have been proposed in an attempt to address this issue from different facets, the challenge still remains in the context of large-scale networks, as the existing solutions mainly operate in a centralized manner that requires maintaining complete and up-to-date information about the underlying substrate network. This implies restricted scalability and computational efficiency when the network scale becomes large. This paper tackles the virtual network mapping problem and proposes a novel hierarchical algorithm in conjunction with a substrate network decomposition approach. By appropriately transforming the underlying substrate network into a collection of sub-networks, the hierarchical virtual network mapping algorithm can be carried out through a global virtual network mapping algorithm (GVNMA) and a local virtual network mapping algorithm (LVNMA), operated in the network central server and within individual sub-networks, respectively, with their cooperation and coordination as necessary. The proposed algorithm is assessed against the centralized approaches through a set of numerical simulation experiments for a range of network scenarios. The results show that the proposed hierarchical approach can be about 5-20 times faster for VN mapping tasks than conventional centralized approaches, with acceptable communication overhead between the GVNMA and LVNMA for all examined networks, while performing almost as well as the centralized solutions.

  16. Scalable Metropolis Monte Carlo for simulation of hard shapes

    NASA Astrophysics Data System (ADS)

    Anderson, Joshua A.; Eric Irrgang, M.; Glotzer, Sharon C.

    2016-07-01

    We design and implement a scalable hard particle Monte Carlo simulation toolkit (HPMC), and release it open source as part of HOOMD-blue. HPMC runs in parallel on many CPUs and many GPUs using domain decomposition. We employ BVH trees instead of cell lists on the CPU for fast performance, especially with large particle size disparity, and optimize inner loops with SIMD vector intrinsics on the CPU. Our GPU kernel proposes many trial moves in parallel on a checkerboard and uses a block-level queue to redistribute work among threads and avoid divergence. HPMC supports a wide variety of shape classes, including spheres/disks, unions of spheres, convex polygons, convex spheropolygons, concave polygons, ellipsoids/ellipses, convex polyhedra, convex spheropolyhedra, spheres cut by planes, and concave polyhedra. NVT and NPT ensembles can be run in 2D or 3D triclinic boxes. Additional integration schemes permit Frenkel-Ladd free energy computations and implicit depletant simulations. In a benchmark system of a fluid of 4096 pentagons, HPMC performs 10 million sweeps in 10 min on 96 CPU cores on XSEDE Comet. The same simulation would take 7.6 h in serial. HPMC also scales to large system sizes, and the same benchmark with 16.8 million particles runs in 1.4 h on 2048 GPUs on OLCF Titan.
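
    For readers unfamiliar with hard-particle Monte Carlo, the toy sketch below performs one serial Metropolis sweep for hard disks in a periodic box; it is not the HOOMD-blue HPMC code, which instead parallelizes trial moves via domain decomposition, BVH trees, and checkerboard GPU kernels, and the parameter names here are illustrative.

    ```python
    import numpy as np

    def hard_disk_sweep(pos, box, sigma, max_disp, rng):
        """One Metropolis sweep for hard disks (diameter sigma) in a periodic
        square box of side `box`: propose a displacement, accept only if the
        trial position overlaps no other disk.
        """
        n = len(pos)
        for i in rng.permutation(n):
            trial = (pos[i] + rng.uniform(-max_disp, max_disp, size=2)) % box
            delta = pos - trial
            delta -= box * np.round(delta / box)        # minimum-image convention
            dist2 = np.einsum('ij,ij->i', delta, delta)
            dist2[i] = np.inf                           # ignore self-distance
            if np.all(dist2 > sigma**2):                # no overlap -> accept move
                pos[i] = trial
        return pos

    rng = np.random.default_rng(0)
    pos = rng.uniform(0.0, 20.0, size=(256, 2))         # toy initial configuration
    pos = hard_disk_sweep(pos, box=20.0, sigma=0.5, max_disp=0.2, rng=rng)
    ```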

  17. Scalable wavelet-based active network detection of stepping stones

    NASA Astrophysics Data System (ADS)

    Gilbert, Joseph I.; Robinson, David J.; Butts, Jonathan W.; Lacey, Timothy H.

    2012-06-01

    Network intrusions leverage vulnerable hosts as stepping stones to penetrate deeper into a network and mask malicious actions from detection. Identifying stepping stones presents a significant challenge because network sessions appear as legitimate traffic. This research focuses on a novel active watermark technique using discrete wavelet transformations to mark and detect interactive network sessions. This technique is scalable, resilient to network noise, and difficult for attackers to discern that it is in use. Previously captured timestamps from the CAIDA 2009 dataset are sent using live stepping stones in the Amazon Elastic Compute Cloud service. The client system sends watermarked and unmarked packets from California to Virginia using stepping stones in Tokyo, Ireland and Oregon. Five trials are conducted in which the system sends simultaneous watermarked samples and unmarked samples to each target. The live experiment results demonstrate approximately 5% False Positive and 5% False Negative detection rates. Additionally, watermark extraction rates of approximately 92% are identified for a single stepping stone. The live experiment results demonstrate the effectiveness of discerning watermark traffic as applied to identifying stepping stones.
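
    The sketch below conveys the general flavor of timing-based watermarking with a discrete wavelet transform; it is not the authors' scheme, and the wavelet, decomposition level, coefficient offset, and bit mapping are all assumptions made for illustration (the transform itself comes from the PyWavelets package).

    ```python
    import numpy as np
    import pywt  # PyWavelets

    def embed_timing_watermark(ipds, bits, level=3, delta=0.005):
        """Nudge low-frequency DWT coefficients of the inter-packet delays
        (seconds) up or down by `delta` to encode watermark bits, then invert
        the transform to obtain the watermarked timing sequence.
        Illustrative only; not the paper's embedding scheme or parameters.
        """
        coeffs = pywt.wavedec(np.asarray(ipds, dtype=float), 'haar', level=level)
        approx = coeffs[0]
        for k, bit in enumerate(bits[:len(approx)]):
            approx[k] += delta if bit else -delta
        coeffs[0] = approx
        return pywt.waverec(coeffs, 'haar')[:len(ipds)]
    ```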

  18. Trench Visualization

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This image shows oblique views of NASA's Phoenix Mars Lander's trench visualized using the NASA Ames Viz software package that allows interactive movement around terrain and measurement of features. The Surface Stereo Imager images are used to create a digital elevation model of the terrain. The trench is 1.5 inches deep. The top image was taken on the seventh Martian day of the mission, or Sol 7 (June 1, 2008). The bottom image was taken on the ninth Martian day of the mission, or Sol 9 (June 3, 2008).

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  19. Scalability issues in evolutionary synthesis of electronic circuits: lessons learned and challenges ahead

    NASA Technical Reports Server (NTRS)

    Stoica, A.; Keymeulen, D.; Zebulum, R. S.; Ferguson, M. I.

    2003-01-01

    This paper describes scalability issues of evolutionary-driven automatic synthesis of electronic circuits. The article begins by reviewing the concepts of circuit evolution and discussing the limitations of this technique when trying to achieve more complex systems.

  20. SuperLU_DIST: A scalable distributed-memory sparse direct solver for unsymmetric linear systems

    SciTech Connect

    Li, Xiaoye S.; Demmel, James W.

    2002-03-27

    In this paper, we present the main algorithmic features in the software package SuperLU_DIST, a distributed-memory sparse direct solver for large sets of linear equations. We give in detail our parallelization strategies, with focus on scalability issues, and demonstrate the parallel performance and scalability on current machines. The solver is based on sparse Gaussian elimination, with an innovative static pivoting strategy proposed earlier by the authors. The main advantage of static pivoting over classical partial pivoting is that it permits a priori determination of data structures and communication pattern for sparse Gaussian elimination, which makes it more scalable on distributed memory machines. Based on this a priori knowledge, we designed highly parallel and scalable algorithms for both LU decomposition and triangular solve, and we show that they are suitable for large-scale distributed memory machines.
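
    SuperLU_DIST itself is a C/MPI library, but the flavor of a sparse direct solve can be shown on a single node through SciPy, whose splu routine wraps the related serial SuperLU; the matrix below is a randomly generated stand-in used only for illustration.

    ```python
    import numpy as np
    from scipy.sparse import random as sprandom, identity
    from scipy.sparse.linalg import splu

    # Serial illustration only: splu wraps the serial SuperLU library, a relative
    # of the distributed SuperLU_DIST solver described in the record above.
    n = 2000
    A = (sprandom(n, n, density=1e-3, format='csc', random_state=0)
         + 10.0 * identity(n, format='csc'))      # shift keeps the system well conditioned
    b = np.ones(n)

    lu = splu(A)                 # sparse LU factorization
    x = lu.solve(b)              # triangular solves with the computed factors
    print(np.linalg.norm(A @ x - b))
    ```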

  1. Visualization and analysis of single-cell RNA-seq data by kernel-based similarity learning.

    PubMed

    Wang, Bo; Zhu, Junjie; Pierson, Emma; Ramazzotti, Daniele; Batzoglou, Serafim

    2017-04-01

    We present single-cell interpretation via multikernel learning (SIMLR), an analytic framework and software which learns a similarity measure from single-cell RNA-seq data in order to perform dimension reduction, clustering and visualization. On seven published data sets, we benchmark SIMLR against state-of-the-art methods. We show that SIMLR is scalable and greatly enhances clustering performance while improving the visualization and interpretability of single-cell sequencing data.
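
    The toy sketch below conveys the multikernel idea in simplified form: several RBF kernels are averaged into one similarity matrix, which is then embedded and clustered. SIMLR itself learns the kernel weights and imposes a rank constraint, so the fixed gammas and the scikit-learn components used here are illustrative assumptions only.

    ```python
    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel
    from sklearn.manifold import SpectralEmbedding
    from sklearn.cluster import KMeans

    def multikernel_embed(X, gammas=(0.1, 1.0, 10.0), n_clusters=5):
        """Average multiple RBF kernels into one similarity matrix, embed it,
        and cluster the embedding (a stand-in for learned kernel weights)."""
        S = sum(rbf_kernel(X, gamma=g) for g in gammas) / len(gammas)
        emb = SpectralEmbedding(n_components=2, affinity='precomputed').fit_transform(S)
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(emb)
        return emb, labels
    ```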

  2. Demonstration of a Scalable, Multiplexed Ion Trap for Quantum Information Processing

    DTIC Science & Technology

    2009-07-09

  3. Multi-Purpose, Application-Centric, Scalable I/O Proxy Application

    SciTech Connect

    Miller, M. C.

    2015-06-15

    MACSio is a Multi-purpose, Application-Centric, Scalable I/O proxy application. It is designed to support a number of goals with respect to parallel I/O performance testing and benchmarking including the ability to test and compare various I/O libraries and I/O paradigms, to predict scalable performance of real applications and to help identify where improvements in I/O performance can be made within the HPC I/O software stack.

  4. Data Fusion of Geographically Dispersed Information: Experience With the Scalable Data Grid

    DTIC Science & Technology

    2011-03-01

    … the 2008 terabyte sort challenge, Yahoo won by using Hadoop to sort 1 terabyte of data in 209 seconds (O’Malley 2008). That cluster consisted of 910 … data grid approach would seem applicable. Key words: aggregation and summarization; Apache Hadoop Distributed File System; data collation; data mining … the scalable data grid approach, the ISI team found that Hadoop provided a scalable, but conceptually simple, distributed computation paradigm that is …

  5. XGet: a highly scalable and efficient file transfer tool for clusters

    SciTech Connect

    Greenberg, Hugh; Ionkov, Latchesar; Minnich, Ronald

    2008-01-01

    As clusters rapidly grow in size, transferring files between nodes can no longer be handled by traditional transfer utilities due to their inherent lack of scalability. In this paper, we describe a new file transfer utility called XGet, which was designed to address the scalability problem of standard tools. We compared XGet against four transfer tools: Bittorrent, Rsync, TFTP, and Udpcast, and our results show that XGet's performance is superior to these utilities in many cases.

  6. Compressing Test and Evaluation by Using Flow Data for Scalable Network Traffic Analysis

    DTIC Science & Technology

    2014-10-01

    For example, low quality of service may be caused by many factors including high traffic volume (and associated congestion), proximity of sender … Defense ARJ, October 2014, Vol. 21 No. 4: 788–802.

  7. Why Teach Visual Culture?

    ERIC Educational Resources Information Center

    Passmore, Kaye

    2007-01-01

    Visual culture is a hot topic in art education right now as some teachers are dedicated to teaching it and others are adamant that it has no place in a traditional art class. Visual culture, the author asserts, can include just about anything that is visually represented. Although people often think of visual culture as contemporary visuals such…

  8. Simple, Scalable, Script-Based Science Processor (S4P)

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher; Vollmer, Bruce; Berrick, Stephen; Mack, Robert; Pham, Long; Zhou, Bryan; Wharton, Stephen W. (Technical Monitor)

    2001-01-01

    The development and deployment of data processing systems to process Earth Observing System (EOS) data has proven to be costly and prone to technical and schedule risk. Integration of science algorithms into a robust operational system has been difficult. The core processing system, based on commercial tools, has demonstrated limitations at the rates needed to produce the several terabytes per day for EOS, primarily due to job management overhead. This has motivated an evolution in the EOS Data Information System toward a more distributed one incorporating Science Investigator-led Processing Systems (SIPS). As part of this evolution, the Goddard Earth Sciences Distributed Active Archive Center (GES DAAC) has developed a simplified processing system to accommodate the increased load expected with the advent of reprocessing and launch of a second satellite. This system, the Simple, Scalable, Script-based Science Processor (S4P), may also serve as a resource for future SIPS. The current EOSDIS Core System was designed to be general, resulting in a large, complex mix of commercial and custom software. In contrast, many simpler systems, such as the EROS Data Center AVHRR IKM system, rely on a simple directory structure to drive processing, with directories representing different stages of production. The system passes input data to a directory, and the output data is placed in a "downstream" directory. The GES DAAC's Simple Scalable Script-based Science Processing System is based on the latter concept, but with modifications to allow varied science algorithms and improve portability. It uses a factory assembly-line paradigm: when work orders arrive at a station, an executable is run, and output work orders are sent to downstream stations. The stations are implemented as UNIX directories, while work orders are simple ASCII files. The core S4P infrastructure consists of a Perl program called stationmaster, which detects newly arrived work orders and forks a job to run the
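
    The directory-as-station idea can be sketched in a few lines; the loop below is an illustrative stand-in for the Perl stationmaster (the work-order naming pattern and the executable's command-line interface are assumptions made for the example), watching a station directory and handing output work orders to a downstream directory.

    ```python
    import subprocess
    import time
    from pathlib import Path

    def stationmaster(station_dir, executable, downstream_dir, poll=5.0):
        """Poll a station directory for new ASCII work orders, run the station
        executable on each, and place the output work order in the downstream
        station directory. Sketch only, not the actual S4P Perl code."""
        station, downstream = Path(station_dir), Path(downstream_dir)
        seen = set()
        while True:
            for order in sorted(station.glob("DO.*.wo")):   # hypothetical naming convention
                if order in seen:
                    continue
                out = downstream / order.name
                # Assumed interface: <executable> <input work order> <output work order>
                subprocess.run([executable, str(order), str(out)], check=True)
                seen.add(order)
            time.sleep(poll)
    ```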

  9. Recurrent, Robust and Scalable Patterns Underlie Human Approach and Avoidance

    PubMed Central

    Kennedy, David N.; Lehár, Joseph; Lee, Myung Joo; Blood, Anne J.; Lee, Sang; Perlis, Roy H.; Smoller, Jordan W.; Morris, Robert; Fava, Maurizio

    2010-01-01

    Background: Approach and avoidance behavior provide a means for assessing the rewarding or aversive value of stimuli, and can be quantified by a keypress procedure whereby subjects work to increase (approach), decrease (avoid), or do nothing about time of exposure to a rewarding/aversive stimulus. To investigate whether approach/avoidance behavior might be governed by quantitative principles that meet engineering criteria for lawfulness and that encode known features of reward/aversion function, we evaluated whether keypress responses toward pictures with potential motivational value produced any regular patterns, such as a trade-off between approach and avoidance, or recurrent lawful patterns as observed with prospect theory. Methodology/Principal Findings: Three sets of experiments employed this task with beautiful face images, a standardized set of affective photographs, and pictures of food during controlled states of hunger and satiety. An iterative modeling approach to data identified multiple law-like patterns, based on variables grounded in the individual. These patterns were consistent across stimulus types, robust to noise, describable by a simple power law, and scalable between individuals and groups. Patterns included: (i) a preference trade-off counterbalancing approach and avoidance, (ii) a value function linking preference intensity to uncertainty about preference, and (iii) a saturation function linking preference intensity to its standard deviation, thereby setting limits to both. Conclusions/Significance: These law-like patterns were compatible with critical features of prospect theory, the matching law, and alliesthesia. Furthermore, they appeared consistent with both mean-variance and expected utility approaches to the assessment of risk. Ordering of responses across categories of stimuli demonstrated three properties thought to be relevant for preference-based choice, suggesting these patterns might be grouped together as a relative preference

  10. Error-resilient compression and transmission of scalable video

    NASA Astrophysics Data System (ADS)

    Cho, Sungdae; Pearlman, William A.

    2000-12-01

    Compressed video bitstreams require protection from channel errors in a wireless channel and protection from packet loss in a wired ATM channel. The three-dimensional (3-D) SPIHT coder has proved its efficiency and its real-time capability in compression of video. A forward-error-correcting (FEC) channel (RCPC) code combined with a single ARQ (automatic-repeat-request) proved to be an effective means for protecting the bitstream. There were two problems with this scheme: the noiseless reverse channel ARQ may not be feasible in practice; and, in the absence of channel coding and ARQ, the decoded sequence was hopelessly corrupted even for relatively clean channels. In this paper, we first show how to make the 3-D SPIHT bitstream more robust to channel errors by breaking the wavelet transform into a number of spatio-temporal tree blocks which can be encoded and decoded independently. This procedure brings the added benefit of parallelization of the compression and decompression algorithms. Then we demonstrate the packetization of the bit stream and the reorganization of these packets to achieve scalability in bit rate and/or resolution in addition to robustness. Then we encode each packet with a channel code. Not only does this protect the integrity of the packets in most cases, but it also allows detection of packet decoding failures, so that only the cleanly recovered packets are reconstructed. This procedure obviates ARQ, because the performance is only about 1 dB worse than normal 3-D SPIHT with FEC and ARQ. Furthermore, the parallelization makes possible real-time implementation in hardware and software.
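
    A minimal sketch of the packet-level error containment described above is shown below (it omits the RCPC channel code and the actual SPIHT bit-plane layout, and the header format is an assumption): each packet carries a sequence number and a CRC, so the receiver keeps only cleanly recovered packets when reassembling the stream.

    ```python
    import struct
    import zlib

    def packetize(bitstream: bytes, payload_size: int = 512):
        """Split an embedded bitstream into fixed-size payloads, each prefixed
        with a CRC32 and a sequence number (6-byte header, illustrative)."""
        packets = []
        for seq, off in enumerate(range(0, len(bitstream), payload_size)):
            payload = bitstream[off:off + payload_size]
            packets.append(struct.pack(">IH", zlib.crc32(payload), seq) + payload)
        return packets

    def depacketize(packets):
        """Reassemble only the packets whose CRC still matches (corrupted or
        undecodable packets are simply dropped, as in the scheme above)."""
        good = []
        for pkt in packets:
            crc, seq = struct.unpack(">IH", pkt[:6])
            payload = pkt[6:]
            if zlib.crc32(payload) == crc:
                good.append((seq, payload))
        return b"".join(p for _, p in sorted(good))
    ```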

  11. A scalable memetic algorithm for simultaneous instance and feature selection.

    PubMed

    García-Pedrajas, Nicolás; de Haro-García, Aida; Pérez-Rodríguez, Javier

    2014-01-01

    Instance selection is becoming increasingly relevant due to the huge amount of data that is constantly produced in many fields of research. At the same time, most of the recent pattern recognition problems involve highly complex datasets with a large number of possible explanatory variables. For many reasons, this abundance of variables significantly harms classification or recognition tasks. There are efficiency issues, too, because the speed of many classification algorithms is largely improved when the complexity of the data is reduced. One of the approaches to address problems that have too many features or instances is feature or instance selection, respectively. Although most methods address instance and feature selection separately, both problems are interwoven, and benefits are expected from facing these two tasks jointly. This paper proposes a new memetic algorithm for dealing with many instances and many features simultaneously by performing joint instance and feature selection. The proposed method performs four different local search procedures with the aim of obtaining the most relevant subsets of instances and features to perform an accurate classification. A new fitness function is also proposed that enforces instance selection but avoids putting too much pressure on removing features. We prove experimentally that this fitness function improves the results in terms of testing error. Regarding the scalability of the method, an extension of the stratification approach is developed for simultaneous instance and feature selection. This extension allows the application of the proposed algorithm to large datasets. An extensive comparison using 55 medium to large datasets from the UCI Machine Learning Repository shows the usefulness of our method. Additionally, the method is applied to 30 large problems, with very good results. The accuracy of the method for class-imbalanced problems in a set of 40 datasets is shown. The usefulness of the method is also
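
    A hedged sketch of the kind of fitness function described above follows; the weights, the 1-NN wrapper classifier, and the evaluation protocol are illustrative assumptions rather than the paper's exact formulation, but they show how accuracy can be combined with strong instance-reduction pressure and only light feature-reduction pressure.

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def fitness(inst_mask, feat_mask, X, y, w_inst=0.3, w_feat=0.05):
        """Train a 1-NN classifier on the selected instances/features, score it
        on the full data restricted to the selected features, and blend accuracy
        with instance- and feature-reduction rewards (weights are illustrative)."""
        if inst_mask.sum() < 2 or feat_mask.sum() == 0:
            return 0.0
        Xs = X[np.ix_(inst_mask, feat_mask)]
        clf = KNeighborsClassifier(n_neighbors=1).fit(Xs, y[inst_mask])
        acc = clf.score(X[:, feat_mask], y)
        inst_red = 1.0 - inst_mask.mean()     # fraction of instances removed
        feat_red = 1.0 - feat_mask.mean()     # fraction of features removed
        return (1.0 - w_inst - w_feat) * acc + w_inst * inst_red + w_feat * feat_red
    ```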

  12. High Performance Multivariate Visual Data Exploration for Extremely Large Data

    SciTech Connect

    Rubel, Oliver; Wu, Kesheng; Childs, Hank; Meredith, Jeremy; Geddes, Cameron G.R.; Cormier-Michel, Estelle; Ahern, Sean; Weber, Gunther H.; Messmer, Peter; Hagen, Hans; Hamann, Bernd; Bethel, E. Wes; Prabhat,

    2008-08-22

    One of the central challenges in modern science is the need to quickly derive knowledge and understanding from large, complex collections of data. We present a new approach that deals with this challenge by combining and extending techniques from high performance visual data analysis and scientific data management. This approach is demonstrated within the context of gaining insight from complex, time-varying datasets produced by a laser wakefield accelerator simulation. Our approach leverages histogram-based parallel coordinates for both visual information display as well as a vehicle for guiding a data mining operation. Data extraction and subsetting are implemented with state-of-the-art index/query technology. This approach, while applied here to accelerator science, is generally applicable to a broad set of science applications, and is implemented in a production-quality visual data analysis infrastructure. We conduct a detailed performance analysis and demonstrate good scalability on a distributed memory Cray XT4 system.

  13. Exploring the connectome: petascale volume visualization of microscopy data streams.

    PubMed

    Beyer, Johanna; Hadwiger, Markus; Al-Awami, Ali; Jeong, Won-Ki; Kasthuri, Narayanan; Lichtman, Jeff W; Pfister, Hanspeter

    2013-01-01

    Recent advances in high-resolution microscopy let neuroscientists acquire neural-tissue volume data of extremely large sizes. However, the tremendous resolution and the high complexity of neural structures present big challenges to storage, processing, and visualization at interactive rates. A proposed system provides interactive exploration of petascale (petavoxel) volumes resulting from high-throughput electron microscopy data streams. The system can concurrently handle multiple volumes and can support the simultaneous visualization of high-resolution voxel segmentation data. Its visualization-driven design restricts most computations to a small subset of the data. It employs a multiresolution virtual-memory architecture for better scalability than previous approaches and for handling incomplete data. Researchers have employed it for a 1-teravoxel mouse cortex volume, of which several hundred axons and dendrites as well as synapses have been segmented and labeled.

  14. Cortical Visual Impairment

    MedlinePlus

    What is cortical visual impairment? Cortical visual impairment (CVI) is a decreased ...

  15. Hardware-accelerated interactive data visualization for neuroscience in Python.

    PubMed

    Rossant, Cyrille; Harris, Kenneth D

    2013-01-01

    Large datasets are becoming more and more common in science, particularly in neuroscience where experimental techniques are rapidly evolving. Obtaining interpretable results from raw data can sometimes be done automatically; however, there are numerous situations where there is a need, at all processing stages, to visualize the data in an interactive way. This enables the scientist to gain intuition, discover unexpected patterns, and find guidance about subsequent analysis steps. Existing visualization tools mostly focus on static publication-quality figures and do not support interactive visualization of large datasets. While working on Python software for visualization of neurophysiological data, we developed techniques to leverage the computational power of modern graphics cards for high-performance interactive data visualization. We were able to achieve very high performance despite the interpreted and dynamic nature of Python, by using state-of-the-art, fast libraries such as NumPy, PyOpenGL, and PyTables. We present applications of these methods to visualization of neurophysiological data. We believe our tools will be useful in a broad range of domains, in neuroscience and beyond, where there is an increasing need for scalable and fast interactive visualization.

  16. Learning Visualizations by Analogy: Promoting Visual Literacy through Visualization Morphing.

    PubMed

    Ruchikachorn, Puripant; Mueller, Klaus

    2015-09-01

    We propose the concept of teaching (and learning) unfamiliar visualizations by analogy, that is, demonstrating an unfamiliar visualization method by linking it to another, more familiar one, where the in-betweens are designed to bridge the gap between these two visualizations and explain the difference in a gradual manner. As opposed to a textual description, our morphing explains an unfamiliar visualization through purely visual means. We demonstrate our idea by way of four visualization pair examples: data table and parallel coordinates, scatterplot matrix and hyperbox, linear chart and spiral chart, and hierarchical pie chart and treemap. The analogy is commutative, i.e., either member of the pair can be the unfamiliar visualization. A series of studies showed that this new paradigm can be an effective teaching tool. The participants could understand the unfamiliar visualization methods in all of the four pairs either fully or at least significantly better after they observed or interacted with the transitions from the familiar counterpart. The four examples suggest how helpful visualization pairings can be identified, and they will hopefully inspire other visualization morphings and associated transition strategies to be identified.

  17. Beyond Visual Communication Technology.

    ERIC Educational Resources Information Center

    Bell, Thomas P.

    1993-01-01

    Discusses various aspects of visual communication--light, semiotics, codes, photography, typography, and visual literacy--within the context of the communications technology area of technology education. (SK)

  18. Multivariate volume visualization through dynamic projections

    SciTech Connect

    Liu, Shusen; Wang, Bei; Thiagarajan, Jayaraman J.; Bremer, Peer -Timo; Pascucci, Valerio

    2014-11-01

    We propose a multivariate volume visualization framework that tightly couples dynamic projections with a high-dimensional transfer function design for interactive volume visualization. We assume that the complex, high-dimensional data in the attribute space can be well-represented through a collection of low-dimensional linear subspaces, and embed the data points in a variety of 2D views created as projections onto these subspaces. Through dynamic projections, we present animated transitions between different views to help the user navigate and explore the attribute space for effective transfer function design. Our framework not only provides a more intuitive understanding of the attribute space but also allows the design of the transfer function under multiple dynamic views, which is more flexible than being restricted to a single static view of the data. For large volumetric datasets, we maintain interactivity during the transfer function design via intelligent sampling and scalable clustering. As a result, using examples in combustion and climate simulations, we demonstrate how our framework can be used to visualize interesting structures in the volumetric space.
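
    As a rough sketch of a dynamic-projection transition (not the paper's implementation, which couples such views with transfer-function design), the snippet below blends two 2D projection bases, re-orthonormalizes each intermediate basis, and projects the attribute-space points frame by frame; the frame count and the naive linear blend are assumptions made for illustration.

    ```python
    import numpy as np

    def animate_projection(X, A, B, n_frames=30):
        """Interpolate between two d-by-2 projection bases A and B, keeping each
        intermediate basis orthonormal, and return the projected 2D coordinates
        of the attribute-space points X for every frame."""
        frames = []
        for t in np.linspace(0.0, 1.0, n_frames):
            M = (1.0 - t) * A + t * B        # naive linear blend of the two bases
            Q, _ = np.linalg.qr(M)           # re-orthonormalize the blended view
            frames.append(X @ Q)             # n_points x 2 coordinates for this frame
        return frames
    ```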

  19. Data Visualization Using Immersive Virtual Reality Tools

    NASA Astrophysics Data System (ADS)

    Cioc, Alexandru; Djorgovski, S. G.; Donalek, C.; Lawler, E.; Sauer, F.; Longo, G.

    2013-01-01

    The growing complexity of scientific data poses serious challenges for an effective visualization. Data sets, e.g., catalogs of objects detected in sky surveys, can have a very high dimensionality, ~ 100 - 1000. Visualizing such hyper-dimensional data parameter spaces is essentially impossible, but there are ways of visualizing up to ~ 10 dimensions in a pseudo-3D display. We have been experimenting with the emerging technologies of immersive virtual reality (VR) as a platform for a scientific, interactive, collaborative data visualization. Our initial experiments used the virtual world of Second Life, and more recently VR worlds based on its open source code, OpenSimulator. There we can visualize up to ~ 100,000 data points in ~ 7 - 8 dimensions (3 spatial and others encoded as shapes, colors, sizes, etc.), in an immersive virtual space where scientists can interact with their data and with each other. We are now developing a more scalable visualization environment using the popular (practically an emerging standard) Unity 3D Game Engine, coded using C#, JavaScript, and the Unity Scripting Language. This visualization tool can be used through a standard web browser, or a standalone browser of its own. Rather than merely plotting data points, the application creates interactive three-dimensional objects of various shapes, colors, and sizes, and of course the XYZ positions, encoding various dimensions of the parameter space, that can be associated interactively. Multiple users can navigate through this data space simultaneously, either with their own, independent vantage points, or with a shared view. At this stage ~ 100,000 data points can be easily visualized within seconds on a simple laptop. The displayed data points can contain linked information; e.g., upon a clicking on a data point, a webpage with additional information can be rendered within the 3D world. A range of functionalities has been already deployed, and more are being added. We expect to make this

  20. Scalable and Environmentally Benign Process for Smart Textile Nanofinishing.

    PubMed

    Feng, Jicheng; Hontañón, Esther; Blanes, Maria; Meyer, Jörg; Guo, Xiaoai; Santos, Laura; Paltrinieri, Laura; Ramlawi, Nabil; Smet, Louis C P M de; Nirschl, Hermann; Kruis, Frank Einar; Schmidt-Ott, Andreas; Biskos, George

    2016-06-15

    A major challenge in nanotechnology is that of determining how to introduce green and sustainable principles when assembling individual nanoscale elements to create working devices. For instance, textile nanofinishing is restricted by the many constraints of traditional pad-dry-cure processes, such as the use of costly chemical precursors to produce nanoparticles (NPs), the high liquid and energy consumption, the production of harmful liquid wastes, and multistep batch operations. By integrating low-cost, scalable, and environmentally benign aerosol processes of the type proposed here into textile nanofinishing, these constraints can be circumvented while leading to a new class of fabrics. The proposed one-step textile nanofinishing process relies on the diffusional deposition of aerosol NPs onto textile fibers. As proof of this concept, we deposit Ag NPs onto a range of textiles and assess their antimicrobial properties for two strains of bacteria (i.e., Staphylococcus aureus and Klebsiella pneumoniae). The measurements show that the logarithmic reduction in bacterial count can get as high as ca. 5.5 (corresponding to a reduction efficiency of 99.96%) when the Ag loading is 1 order of magnitude less (10 ppm; i.e., 10 mg Ag NPs per kg of textile) than that of textiles treated by traditional wet routes. The antimicrobial activity does not increase in proportion to the Ag content above 10 ppm as a consequence of a "saturation" effect. Such low NP loadings on antimicrobial textiles minimize the risk to human health (during textile use) and to the ecosystem (after textile disposal), and also reduce potential changes in color and texture of the resulting textile products. After three washes, the release of Ag is in the order of 1 wt %, which is comparable to textiles nanofinished with wet routes using binders. Interestingly, the washed textiles exhibit almost no reduction in antimicrobial activity, much like the as-deposited samples. Considering that a realm

  1. Scalable quantum information processing with photons and atoms

    NASA Astrophysics Data System (ADS)

    Pan, Jian-Wei

    Over the past three decades, the promises of super-fast quantum computing and secure quantum cryptography have spurred a world-wide interest in quantum information, generating fascinating quantum technologies for coherent manipulation of individual quantum systems. However, the distance of fiber-based quantum communications is limited due to intrinsic fiber loss and the degradation of entanglement quality. Moreover, probabilistic single-photon and entanglement sources demand exponentially increased overheads for scalable quantum information processing. To overcome these problems, we are taking two paths in parallel: quantum repeaters and satellite links. We used the decoy-state QKD protocol to close the loophole of imperfect photon sources, and used the measurement-device-independent QKD protocol to close the loophole of imperfect photon detectors--the two main loopholes in quantum cryptography. Based on these techniques, we are now building the world's largest quantum secure communication backbone, from Beijing to Shanghai, with a distance exceeding 2000 km. Meanwhile, we are developing practically useful quantum repeaters that combine entanglement swapping, entanglement purification, and quantum memory for ultra-long-distance quantum communication. The second line is satellite-based global quantum communication, taking advantage of the negligible photon loss and decoherence in the atmosphere. We realized teleportation and entanglement distribution over 100 km, and later on a rapidly moving platform. We are also making efforts toward the generation of multiphoton entanglement and its use in teleportation of multiple properties of a single quantum particle, topological error correction, quantum algorithms for solving systems of linear equations and machine learning. Finally, I will talk about our recent experiments on quantum simulations on ultracold atoms. On the one hand, by applying an optical Raman lattice technique, we realized a two-dimensional spin-orbit (SO

  2. Scalable, massively parallel approaches to upstream drainage area computation

    NASA Astrophysics Data System (ADS)

    Richardson, A.; Hill, C. N.; Perron, T.

    2011-12-01

    Accumulated drainage area maps of large regions are required for several applications. Among these are assessments of regional patterns of flow and sediment routing, high-resolution landscape evolution models in which drainage basin geometry evolves with time, and surveys of the characteristics of river basins that drain to continental margins. The computation of accumulated drainage areas is accomplished by inferring the vector field of drainage flow directions from a two-dimensional digital elevation map, and then computing the area that drains to each tile. From this map of elevations we can compute the integrated, upstream area that drains to each tile of the map. Generally this last step is done with a recursive algorithm that accumulates upstream areas sequentially. The inherently serial nature of this restricts the number of tiles that can be included, thereby limiting the resolution of continental-size domains. This is because of the requirements of both memory, which will rise proportionally to the number of tiles, N, and computing time, which is O(N²). The fundamental sequential property of this approach prohibits effective use of large scale parallelism. An alternate method of calculating accumulated drainage area from drainage direction data can be arrived at by reformulating the problem as the solution of a system of simultaneous linear equations. The equations define the relation that the total upslope area of a particular tile is the sum of all the upslope areas for tiles immediately adjacent to that tile that drain to it, and the tile's own area. Solving these equations amounts to finding the solution of a sparse, nine-diagonal matrix operating on a vector for a right-hand-side that is simply the individual tile areas and where the diagonals of the matrix are determined by the landscape geometry. We show how an iterative method, Bi-CGSTAB, can be used to solve this problem in a scalable, massively parallel manner. However, this introduces
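
    The linear-system formulation lends itself to a compact serial sketch: with receiver[i] giving the tile that tile i drains to, the accumulated area a satisfies (I - D) a = area, where D[i, j] = 1 when tile j drains into tile i. The snippet below solves this with SciPy's Bi-CGSTAB on one process; the abstract's contribution is distributing essentially the same system over many processes, and the `receiver` encoding here is an assumption made for the example.

    ```python
    import numpy as np
    from scipy.sparse import identity, csr_matrix
    from scipy.sparse.linalg import bicgstab

    def accumulated_area(receiver, cell_area):
        """Solve (I - D) a = cell_area for the accumulated drainage area a.
        receiver[i] is the index of the tile that tile i drains to (-1 at an
        outlet); D[receiver[i], i] = 1 encodes "tile i drains into its receiver"."""
        n = len(receiver)
        rows = [r for r in receiver if r >= 0]
        cols = [i for i, r in enumerate(receiver) if r >= 0]
        D = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))
        A = identity(n, format='csr') - D
        a, info = bicgstab(A, cell_area, atol=1e-10)   # iterative Krylov solve
        return a
    ```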

  3. Scalable analysis of nonlinear systems using convex optimization

    NASA Astrophysics Data System (ADS)

    Papachristodoulou, Antonis

    In this thesis, we investigate how convex optimization can be used to analyze different classes of nonlinear systems at various scales algorithmically. The methodology is based on the construction of appropriate Lyapunov-type certificates using sum of squares techniques. After a brief introduction on the mathematical tools that we will be using, we turn our attention to robust stability and performance analysis of systems described by Ordinary Differential Equations. A general framework for constrained systems analysis is developed, under which stability of systems with polynomial and non-polynomial vector fields and switching systems, as well as estimation of the region of attraction and the L2 gain, can be treated in a unified manner. We apply our results to examples from biology and aerospace. We then consider systems described by Functional Differential Equations (FDEs), i.e., time-delay systems. Their main characteristic is that they are infinite dimensional, which complicates their analysis. We first show how the complete Lyapunov-Krasovskii functional can be constructed algorithmically for linear time-delay systems. Then, we concentrate on delay-independent and delay-dependent stability analysis of nonlinear FDEs using sum of squares techniques. An example from ecology is given. The scalable stability analysis of congestion control algorithms for the Internet is investigated next. The models we use result in an arbitrary interconnection of FDE subsystems, for which we require that stability holds for arbitrary delays, network topologies and link capacities. Through a constructive proof, we develop a Lyapunov functional for FAST---a recently developed network congestion control scheme---so that the Lyapunov stability properties scale with the system size. We also show how other network congestion control schemes can be analyzed in the same way. Finally, we concentrate on systems described by Partial Differential Equations. We show that axially constant perturbations of

  4. Fault tolerant, reliable and scalable scientific ballooning control software

    NASA Astrophysics Data System (ADS)

    Stewart, Michael F.; Ellison, Steven B.; Isbert, Joachim; Granger, Doug; Guzik, T. Gregory; Wefel, John P.

    The Universal Balloon Control Software package (UBCS) was first designed and developed for the ATIC experiment in 1997 and has evolved over the years into a highly reliable and adaptable control system. The system has logged thousands of hours of operation time on ATIC with few reboots and has been adapted for the HASP balloon payload, which has had two successful flights in 2006 and 2007. The goal was to develop a UBCS that was fault tolerant and auto-recoverable while at the same time extremely reliable and scalable. In order to meet these goals, we designed a modular software system where each process was able to run in parallel with other processes on the same or different CPUs. These modular processes needed to be relatively independent, so that one process didn't rely on another in order to function. We chose QNX 4.25 as the operating system because of its multi-tasking abilities and the level of abstraction offered in communication between processes. Another key component in the UBCS, called the Buffer Process Group (BPG), was developed to de-couple processes from one another, allowing each to operate independently. The BPG is a client/server process data port with a standardized interface allowing any given server to load records for access by an independent client at any given time. The BPG is capable of handling many data servers and clients simultaneously. Examples of data servers are the data acquisition process and housekeeping processes; examples of data clients are the archive process, the downlink telemetry processes, and the ground display processes. Together, the BPG process and the QNX 4.25 OS allow the UBCS to meet all of its design goals. In particular they allow the system to be highly fault tolerant and recoverable. A monitoring process is able to restart failed processes and reboot the computers on which they reside, if necessary. This allows the UBCS to recover from software errors or bugs as well as hardware glitches such as temporary

  5. Scalable Data Mining and Archiving for the Square Kilometre Array

    NASA Astrophysics Data System (ADS)

    Jones, D. L.; Mattmann, C. A.; Hart, A. F.; Lazio, J.; Bennett, T.; Wagstaff, K. L.; Thompson, D. R.; Preston, R.

    2011-12-01

    As the technologies for remote observation improve, the rapid increase in the frequency and fidelity of those observations translates into an avalanche of data that is already beginning to eclipse the resources, both human and technical, of the institutions and facilities charged with managing the information. Common data management tasks like cataloging both the data itself and contextual metadata, creating and maintaining a scalable permanent archive, and making data available on-demand for research present significant software engineering challenges when considered at the scales of modern multi-national scientific enterprises such as the upcoming Square Kilometre Array project. The NASA Jet Propulsion Laboratory (JPL), leveraging internal research and technology development funding, has begun to explore ways to address the data archiving and distribution challenges with a number of parallel activities involving collaborations with the EVLA and ALMA teams at the National Radio Astronomy Observatory (NRAO), and members of the Square Kilometre Array South Africa team. To date, we have leveraged the Apache OODT Process Control System framework and its catalog and archive service components that provide file management, workflow management, and resource management as core web services. A client crawler framework ingests upstream data (e.g., EVLA raw directory output), identifies its MIME type, and automatically extracts relevant metadata including temporal bounds and job-relevant processing information. A remote content acquisition (pushpull) service is responsible for staging remote content and handing it off to the crawler framework. A science algorithm wrapper (called CAS-PGE) wraps underlying code including CASApy programs for the EVLA, such as Continuum Imaging and Spectral Line Cube generation, executes the algorithm, and ingests its output (along with relevant extracted metadata). In addition to processing, the Process Control System has been leveraged to provide data

  6. Scalable synthesis and energy applications of defect engineered nanomaterials

    NASA Astrophysics Data System (ADS)

    Karakaya, Mehmet

    Nanomaterials and nanotechnologies have attracted a great deal of attention over the past few decades due to novel physical characteristics such as high aspect ratio, surface morphology, and impurities, which lead to unique chemical, optical, and electronic properties. Awareness of the importance of nanomaterials has motivated researchers to develop growth techniques that further control nanostructure properties such as size and surface morphology, which may alter their fundamental behavior. Carbon nanotubes (CNTs), with their rigidity, strength, elasticity, and electrical conductivity, are among the most promising materials for future applications. Despite the excellent properties explored in abundant research, introducing them into the macroscopic world for practical applications remains a major challenge. This thesis first gives a brief overview of CNTs; it then covers the mechanical and oil-absorption properties of macro-scale CNT assemblies, followed by CNT energy storage applications, and finally fundamental studies of defect-introduced graphene systems. Chapter Two focuses on helically coiled carbon nanotube (HCNT) foams in compression. Similarly to other foams, HCNT foams exhibit preconditioning effects in response to cyclic loading; however, their fundamental deformation mechanisms are unique. Bulk HCNT foams exhibit super-compressibility and recover more than 90% of large compressive strains (up to 80%). When subjected to striker impacts, HCNT foams mitigate impact stresses more effectively than other CNT foams comprised of non-helical CNTs (~50% improvement). The unique mechanical properties we revealed demonstrate that HCNT foams are ideally suited for applications in packaging, impact protection, and vibration mitigation. The third chapter describes a simple method for the scalable synthesis of three-dimensional, elastic, and recyclable multi-walled carbon nanotube (MWCNT) based lightweight bucky-aerogels (BAGs) that are

  7. Snowflake Visualization

    NASA Astrophysics Data System (ADS)

    Bliven, L. F.; Kucera, P. A.; Rodriguez, P.

    2010-12-01

    NASA Snowflake Video Imagers (SVIs) enable snowflake visualization at diverse field sites. The natural variability of frozen precipitation is a complicating factor for remote sensing retrievals in high latitude regions. Particle classification is important for understanding snow/ice physics, remote sensing polarimetry, bulk radiative properties, surface emissivity, and ultimately, precipitation rates and accumulations. Yet intermittent storms, low temperatures, high winds, remote locations and complex terrain can impede us from observing falling snow in situ. SVI hardware and software have some special features. The standard camera and optics yield 8-bit gray-scale images with resolution of 0.05 x 0.1 mm, at 60 frames per second. Gray-scale images are highly desirable because they display contrast that aids particle classification. Black and white (1-bit) systems display no contrast, so there is less information to recognize particle types, which is particularly burdensome for aggregates. Data are analyzed at one-minute intervals using NASA's Precipitation Link Software that produces (a) Particle Catalogs and (b) Particle Size Distributions (PSDs). SVIs can operate nearly continuously for long periods (e.g., an entire winter season), so natural variability can be documented. Let’s summarize results from field studies this past winter and review some recent SVI enhancements. During the winter of 2009-2010, SVIs were deployed at two sites. One SVI supported weather observations during the 2010 Winter Olympics and Paralympics. It was located close to the summit (Roundhouse) of Whistler Mountain, near the town of Whistler, British Columbia, Canada. In addition, two SVIs were located at the King City Weather Radar Station (WKR) near Toronto, Ontario, Canada. Access was prohibited to the SVI on Whistler Mountain during the Olympics due to security concerns. So to meet the schedule for daily data products, we operated the SVI by remote control. We also upgraded the

  8. Automatic Generation of Remote Visualization Tools with WATT

    NASA Astrophysics Data System (ADS)

    Jensen, P. A.; Bollig, E. F.; Yuen, D. A.; Erlebacher, G.; Momsen, A. R.

    2006-12-01

    The ever-increasing size and complexity of geophysical and other scientific datasets have forced developers to turn to more powerful alternatives for visualizing results of computations and experiments. These alternatives need to be faster, scalable, more efficient, and able to be run on large machines. At the same time, advances in scripting languages and visualization libraries have significantly decreased the development time of smaller, desktop visualization tools. Ideally, programmers would be able to develop visualization tools in a high-level, local, scripted environment and then automatically convert their programs into compiled, remote visualization tools for integration into larger computation environments. The Web Automation and Translation Toolkit (WATT) [1] converts a Tcl script for the Visualization Toolkit (VTK) [2] into a standards-compliant web service. We will demonstrate the use of WATT for the automated conversion of a desktop visualization application (written in Tcl for VTK) into a remote visualization service of interest to geoscientists. The resulting service will allow real-time access to a large dataset through the Internet, and will be easily integrated into the existing architecture of the Virtual Laboratory for Earth and Planetary Materials (VLab) [3]. [1] Jensen, P.A., Yuen, D.A., Erlebacher, G., Bollig, E.F., Kigelman, D.G., Shukh, E.A., Automated Generation of Web Services for Visualization Toolkits, Eos Trans. AGU, 86(52), Fall Meet. Suppl., Abstract IN42A-06, 2005. [2] The Visualization Toolkit, http://www.vtk.org [3] The Virtual Laboratory for Earth and Planetary Materials, http://vlab.msi.umn.edu

  9. An Interactive Learning Framework for Scalable Classification of Pathology Images.

    PubMed

    Nalisnik, Michael; Gutman, David A; Kong, Jun; Cooper, Lee Ad

    2015-01-01

    Recent advances in microscopy imaging and genomics have created an explosion of patient data in the pathology domain. Whole-slide images (WSIs) of tissues can now capture disease processes as they unfold in high resolution, recording the visual cues that have been the basis of pathologic diagnosis for over a century. Each WSI contains billions of pixels and up to a million or more microanatomic objects whose appearances hold important prognostic information. Computational image analysis enables the mining of massive WSI datasets to extract quantitative morphologic features describing the visual qualities of patient tissues. When combined with genomic and clinical variables, this quantitative information provides scientists and clinicians with insights into disease biology and patient outcomes. To facilitate interaction with this rich resource, we have developed a web-based machine-learning framework that enables users to rapidly build classifiers using an intuitive active learning process that minimizes data labeling effort. In this paper we describe the architecture and design of this system, and demonstrate its effectiveness through quantification of glioma brain tumors.

  10. An Interactive Learning Framework for Scalable Classification of Pathology Images

    PubMed Central

    Nalisnik, Michael; Gutman, David A; Kong, Jun; Cooper, Lee AD

    2016-01-01

    Recent advances in microscopy imaging and genomics have created an explosion of patient data in the pathology domain. Whole-slide images (WSIs) of tissues can now capture disease processes as they unfold in high resolution, recording the visual cues that have been the basis of pathologic diagnosis for over a century. Each WSI contains billions of pixels and up to a million or more microanatomic objects whose appearances hold important prognostic information. Computational image analysis enables the mining of massive WSI datasets to extract quantitative morphologic features describing the visual qualities of patient tissues. When combined with genomic and clinical variables, this quantitative information provides scientists and clinicians with insights into disease biology and patient outcomes. To facilitate interaction with this rich resource, we have developed a web-based machine-learning framework that enables users to rapidly build classifiers using an intuitive active learning process that minimizes data labeling effort. In this paper we describe the architecture and design of this system, and demonstrate its effectiveness through quantification of glioma brain tumors. PMID:27796014

  11. Scalable Analysis Techniques for Microprocessor Performance Counter Metrics

    SciTech Connect

    Ahn, D H; Vetter, J S

    2002-07-24

    Contemporary microprocessors provide a rich set of integrated performance counters that allow application developers and system architects alike the opportunity to gather important information about workload behaviors. These counters can capture instruction, memory, and operating system behaviors. Current techniques for analyzing data produced from these counters use raw counts, ratios, and visualization techniques to help users make decisions about their application source code. While these techniques are appropriate for analyzing data from one process, they do not scale easily to new levels demanded by contemporary computing systems. Indeed, the amount of data generated by these experiments is on the order of tens of thousands of data points. Furthermore, if users execute multiple experiments, then we add yet another dimension to this already knotty picture. This flood of multidimensional data can swamp efforts to harvest important ideas from these valuable counters. Very simply, this paper addresses these concerns by evaluating several multivariate statistical techniques on these datasets. We find that several techniques, such as statistical clustering, can automatically extract important features from this data. These derived results can, in turn, be fed directly back to an application developer, or used as input to a more comprehensive performance analysis environment, such as a visualization or an expert system.
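
    A minimal sketch of the statistical-clustering step is shown below (the paper evaluates several multivariate techniques; k-means on standardized features is just one illustrative choice): rows are processes or code regions, and columns are counter-derived metrics.

    ```python
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import KMeans

    def cluster_counters(counts, n_clusters=4):
        """Standardize a (processes x counter-metrics) matrix and group rows
        into a small number of representative behavior clusters."""
        Z = StandardScaler().fit_transform(counts)
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(Z)
        return km.labels_, km.cluster_centers_

    labels, centers = cluster_counters(np.random.rand(10000, 12))  # toy data
    ```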

  12. Spelling: A Visual Skill.

    ERIC Educational Resources Information Center

    Hendrickson, Homer

    1988-01-01

    Spelling problems arise due to problems with form discrimination and inadequate visualization. A child's sequence of visual development involves learning motor control and coordination, with vision directing and monitoring the movements; learning visual comparison of size, shape, directionality, and solidity; developing visual memory or recall;…

  13. A scalable messaging system for accelerating discovery from large scale scientific simulations

    SciTech Connect

    Jin, Tong; Zhang, Fan; Parashar, Manish; Klasky, Scott A; Podhorszki, Norbert; Abbasi, Hasan

    2012-01-01

    Emerging scientific and engineering simulations running at scale on leadership-class High End Computing (HEC) environments are producing large volumes of data, which has to be transported and analyzed before any insights can result from these simulations. The complexity and cost (in terms of time and energy) associated with managing and analyzing this data have become significant challenges, and are limiting the impact of these simulations. Recently, data-staging approaches along with in-situ and in-transit analytics have been proposed to address these challenges by offloading I/O and/or moving data processing closer to the data. However, scientists continue to be overwhelmed by the large data volumes and data rates. In this paper we address this latter challenge. Specifically, we propose a highly scalable and low-overhead associative messaging framework that runs on the data staging resources within the HEC platform, and builds on the staging-based online in-situ/in-transit analytics to provide publish/subscribe/notification-type messaging patterns to the scientist. Rather than having to ingest and inspect the data volumes, this messaging system allows scientists to (1) dynamically subscribe to data events of interest, e.g., simple data values or a complex function or simple reduction (max()/min()/avg()) of the data values in a certain region of the application domain is greater/less than a threshold value, or certain spatial/temporal data features or data patterns are detected; (2) define customized in-situ/in-transit actions that are triggered based on the events, such as data visualization or transformation; and (3) get notified when these events occur. The key contribution of this paper is a design and implementation that can support such a messaging abstraction at scale on high-end computing (HEC) systems with minimal overheads. We have implemented and deployed the messaging system on the Jaguar Cray XK6 machines at Oak Ridge National Laboratory and the
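
    The subscription pattern can be sketched with a toy in-memory event bus (an illustration of the abstraction only, not the staging-based implementation): subscribers register a region of the domain and a reduction predicate, and an action fires whenever newly published data satisfies it; all class and parameter names here are assumptions made for the example.

    ```python
    import numpy as np

    class RegionEventBus:
        """Toy publish/subscribe bus: each subscription pairs a region of the
        field with a predicate over that region and an action to trigger."""
        def __init__(self):
            self.subs = []

        def subscribe(self, region, predicate, action):
            # region: tuple of slices; predicate: f(block) -> bool; action: f(block)
            self.subs.append((region, predicate, action))

        def publish(self, field):
            for region, predicate, action in self.subs:
                block = field[region]
                if predicate(block):
                    action(block)

    # Example: notify when the mean of a sub-domain exceeds a threshold.
    bus = RegionEventBus()
    bus.subscribe((slice(0, 64), slice(0, 64)),
                  lambda b: b.mean() > 0.9,
                  lambda b: print("threshold crossed, max =", b.max()))
    bus.publish(np.random.rand(256, 256))
    ```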

  14. Supporting the Process of Exploring and Interpreting Space–Time Multivariate Patterns: The Visual Inquiry Toolkit

    PubMed Central

    Chen, Jin; MacEachren, Alan M.; Guo, Diansheng

    2009-01-01

    While many data sets carry geographic and temporal references, our ability to analyze these datasets lags behind our ability to collect them because of the challenges posed by both data complexity and tool scalability issues. This study develops a visual analytics approach that leverages human expertise with visual, computational, and cartographic methods to support the application of visual analytics to relatively large spatio-temporal, multivariate data sets. We develop and apply a variety of methods for data clustering, pattern searching, information visualization, and synthesis. By combining both human and machine strengths, this approach has a better chance to discover novel, relevant, and potentially useful information that is difficult to detect by any of the methods used in isolation. We demonstrate the effectiveness of the approach by applying the Visual Inquiry Toolkit we developed to analyze a data set containing geographically referenced, time-varying and multivariate data for U.S. technology industries. PMID:19960096

  15. GAViT: Genome Assembly Visualization Tool for Short Read Data

    SciTech Connect

    Syed, Aijazuddin; Shapiro, Harris; Tu, Hank; Pangilinan, Jasmyn; Trong, Stephan

    2008-03-14

    It is a challenging job for genome analysts to accurately debug, troubleshoot, and validate genome assembly results. Genome analysts rely on visualization tools to help validate and troubleshoot assembly results, including such problems as mis-assemblies, low-quality regions, and repeats. Short read data adds further complexity and makes it extremely challenging for the visualization tools to scale and to view all needed assembly information. As a result, there is a need for a visualization tool that can scale to display assembly data from the new sequencing technologies. We present Genome Assembly Visualization Tool (GAViT), a highly scalable and interactive assembly visualization tool developed at the DOE Joint Genome Institute (JGI).

  16. VisTrails: enabling interactive multiple-view visualizations.

    SciTech Connect

    Scheidegger, Carlos E.; Vo, Huy T.; Crossno, Patricia Joyce; Callahan, Steven P.; Bavoil, Louis; Freire, Juliana.; Silva, Claudio

    2005-04-01

    VisTrails is a new system that enables interactive multiple-view visualizations by simplifying the creation and maintenance of visualization pipelines, and by optimizing their execution. It provides a general infrastructure that can be combined with existing visualization systems and libraries. A key component of VisTrails is the visualization trail (vistrail), a formal specification of a pipeline. Unlike existing dataflow-based systems, in VisTrails there is a clear separation between the specification of a pipeline and its execution instances. This separation enables powerful scripting capabilities and provides a scalable mechanism for generating a large number of visualizations. VisTrails also leverages the vistrail specification to identify and avoid redundant operations. This optimization is especially useful while exploring multiple visualizations. When variations of the same pipeline need to be executed, substantial speedups can be obtained by caching the results of overlapping subsequences of the pipelines. In this paper, we describe the design and implementation of VisTrails, and show its effectiveness in different application scenarios.
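
    The caching idea can be illustrated with a small sketch (not VisTrails itself): a pipeline is a sequence of (name, function, parameters) stages, and results are memoized per prefix, so variations of a pipeline reuse the shared upstream work; the stage encoding is an assumption made for the example.

    ```python
    def run_pipeline(stages, data, cache):
        """Execute a pipeline while memoizing the result of every prefix of
        stages, keyed by (stage name, sorted parameters); repeated or
        overlapping pipelines reuse cached upstream results."""
        key, result = (), data
        for name, func, params in stages:
            key = key + ((name, tuple(sorted(params.items()))),)
            if key not in cache:
                cache[key] = func(result, **params)
            result = cache[key]
        return result

    cache = {}
    stages = [("square", lambda d: [x * x for x in d], {}),
              ("total", lambda d: sum(d), {})]
    print(run_pipeline(stages, [1, 2, 3], cache))   # 14; prefix results are now cached
    ```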

  17. Scalable shear-exfoliation of high-quality phosphorene nanoflakes with reliable electrochemical cycleability in nano batteries

    NASA Astrophysics Data System (ADS)

    Xu, Feng; Ge, Binghui; Chen, Jing; Nathan, Arokia; Xin, Linhuo L.; Ma, Hongyu; Min, Huihua; Zhu, Chongyang; Xia, Weiwei; Li, Zhengrui; Li, Shengli; Yu, Kaihao; Wu, Lijun; Cui, Yiping; Sun, Litao; Zhu, Yimei

    2016-06-01

    Atomically thin black phosphorus (called phosphorene) holds great promise as an alternative to graphene and other two-dimensional transition-metal dichalcogenides as an anode material for lithium-ion batteries (LIBs). However, bulk black phosphorus (BP) suffers from rapid capacity fading and poor rechargeable performance. This work reports for the first time the use of in situ transmission electron microscopy (TEM) to construct nanoscale phosphorene LIBs. This enables direct visualization of the mechanisms underlying capacity fading in thick multilayer phosphorene through real-time capture of delithiation-induced structural decomposition, which serves to reduce electrical conductivity thus causing irreversibility of the lithiated phases. We further demonstrate that few-layer-thick phosphorene successfully circumvents the structural decomposition and holds superior structural restorability, even when subject to multi-cycle lithiation/delithiation processes and concomitant huge volume expansion. This finding provides breakthrough insights into thickness-dependent lithium diffusion kinetics in phosphorene. More importantly, a scalable liquid-phase shear exfoliation route has been developed to produce high-quality ultrathin phosphorene using simple means such as a high-speed shear mixer or even a household kitchen blender with the shear rate threshold of ~1.25 × 10^4 s^-1. The results reported here will pave the way for industrial-scale applications of rechargeable phosphorene LIBs.

  18. Scalable shear-exfoliation of high-quality phosphorene nanoflakes with reliable electrochemical cycleability in nano batteries

    DOE PAGES

    Xu, Feng; Ge, Binghui; Chen, Jing; ...

    2016-03-30

    Atomically thin black phosphorus (called phosphorene) holds great promise as an alternative to graphene and other two-dimensional transition-metal dichalcogenides as an anode material for lithium-ion batteries (LIBs). But, bulk black phosphorus (BP) suffers from rapid capacity fading and poor rechargeable performance. This work reports for the first time the use of in situ transmission electron microscopy (TEM) to construct nanoscale phosphorene LIBs. This enables direct visualization of the mechanisms underlying capacity fading in thick multilayer phosphorene through real-time capture of delithiation-induced structural decomposition, which serves to reduce electrical conductivity thus causing irreversibility of the lithiated phases. Furthermore, we demonstrate that few-layer-thick phosphorene successfully circumvents the structural decomposition and holds superior structural restorability, even when subject to multi-cycle lithiation/delithiation processes and concomitant huge volume expansion. This finding provides breakthrough insights into thickness-dependent lithium diffusion kinetics in phosphorene. More importantly, a scalable liquid-phase shear exfoliation route has been developed to produce high-quality ultrathin phosphorene using simple means such as a high-speed shear mixer or even a household kitchen blender with the shear rate threshold of ~1.25 × 10^4 s^-1. Our results reported here will pave the way for industrial-scale applications of rechargeable phosphorene LIBs.

  19. Scalable shear-exfoliation of high-quality phosphorene nanoflakes with reliable electrochemical cycleability in nano batteries

    SciTech Connect

    Xu, Feng; Ge, Binghui; Chen, Jing; Nathan, Arokia; Xin, Linhuo L.; Ma, Hongyu; Zhu, Chongyang; Xia, Weiwei; Li, Zhengrui; Li, Shengli; Yu, Kaihao; Wu, Lijun; Cui, Yiping; Sun, Litao; Zhu, Yimei

    2016-03-30

    Atomically thin black phosphorus (called phosphorene) holds great promise as an alternative to graphene and other two-dimensional transition-metal dichalcogenides as an anode material for lithium-ion batteries (LIBs). But, bulk black phosphorus (BP) suffers from rapid capacity fading and poor rechargeable performance. This work reports for the first time the use of in situ transmission electron microscopy (TEM) to construct nanoscale phosphorene LIBs. This enables direct visualization of the mechanisms underlying capacity fading in thick multilayer phosphorene through real-time capture of delithiation-induced structural decomposition, which serves to reduce electrical conductivity thus causing irreversibility of the lithiated phases. Furthermore, we demonstrate that few-layer-thick phosphorene successfully circumvents the structural decomposition and holds superior structural restorability, even when subject to multi-cycle lithiation/delithiation processes and concomitant huge volume expansion. This finding provides breakthrough insights into thickness-dependent lithium diffusion kinetics in phosphorene. More importantly, a scalable liquid-phase shear exfoliation route has been developed to produce high-quality ultrathin phosphorene using simple means such as a high-speed shear mixer or even a household kitchen blender with the shear rate threshold of ~1.25 × 10^4 s^-1. Our results reported here will pave the way for industrial-scale applications of rechargeable phosphorene LIBs.

  20. The OpenEarth Framework (OEF) for the 3D Visualization of Integrated Earth Science Data

    NASA Astrophysics Data System (ADS)

    Nadeau, David; Moreland, John; Baru, Chaitan; Crosby, Chris

    2010-05-01

    seismic tomography may be sliced by multiple oriented cutting planes and isosurfaced to create 3D skins that trace feature boundaries within the data. Topography may be overlaid with satellite imagery, maps, and data such as gravity and magnetics measurements. Multiple data sets may be visualized simultaneously using overlapping layers within a common 3D coordinate space. Data management within the OEF handles and hides the inevitable quirks of differing file formats, web protocols, storage structures, coordinate spaces, and metadata representations. Heuristics are used to extract the metadata needed to guide data and visual operations. Derived data representations are computed to better support fluid interaction and visualization while the original data is left unchanged in its original form. Data is cached for better memory and network efficiency, and all visualization makes use of 3D graphics hardware support found on today's computers. The OpenEarth Framework project is currently prototyping the software for use in the visualization and integration of continental-scale geophysical data being produced by EarthScope-related research in the Western US. The OEF is providing researchers with new ways to display and interrogate their data and is anticipated to be a valuable tool for future EarthScope-related research.
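
    As a rough illustration of the isosurfacing operation mentioned above (not the OEF's own implementation), the sketch below extracts a 3D "skin" from a gridded scalar volume using scikit-image's marching cubes; the synthetic volume stands in for a tomography grid.

    ```python
    # Illustrative only: triangulate the surface where a gridded scalar field
    # equals a chosen value, the kind of "skin" described for tomography data.
    import numpy as np
    from skimage import measure

    # Stand-in for a tomography volume: a smooth radial field on a 64^3 grid.
    z, y, x = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
    volume = np.sqrt(x**2 + y**2 + z**2)

    # Extract the isosurface at value 0.5 (a sphere for this synthetic field).
    verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
    print(f"{len(verts)} vertices, {len(faces)} triangles on the 0.5 isosurface")
    ```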

  1. Scalable architecture for a room temperature solid-state quantum information processor.

    PubMed

    Yao, N Y; Jiang, L; Gorshkov, A V; Maurer, P C; Giedke, G; Cirac, J I; Lukin, M D

    2012-04-24

    The realization of a scalable quantum information processor has emerged over the past decade as one of the central challenges at the interface of fundamental science and engineering. Here we propose and analyse an architecture for a scalable, solid-state quantum information processor capable of operating at room temperature. Our approach is based on recent experimental advances involving nitrogen-vacancy colour centres in diamond. In particular, we demonstrate that the multiple challenges associated with operation at ambient temperature, individual addressing at the nanoscale, strong qubit coupling, robustness against disorder and low decoherence rates can be simultaneously achieved under realistic, experimentally relevant conditions. The architecture uses a novel approach to quantum information transfer and includes a hierarchy of control at successive length scales. Moreover, it alleviates the stringent constraints currently limiting the realization of scalable quantum processors and will provide fundamental insights into the physics of non-equilibrium many-body quantum systems.

  2. SMG: Fast scalable greedy algorithm for influence maximization in social networks

    NASA Astrophysics Data System (ADS)

    Heidari, Mehdi; Asadpour, Masoud; Faili, Hesham

    2015-02-01

    Influence maximization is the problem of finding the k most influential nodes in a social network. Much work has been done in two categories: greedy approaches and heuristic approaches. The greedy approaches achieve better influence spread but have lower scalability on large networks. The heuristic approaches are scalable and fast, but not for all types of networks. Improving the scalability of the greedy approach is still an open issue. In this work we present a fast greedy algorithm called State Machine Greedy that improves on existing algorithms by reducing calculations in two parts: (1) counting the traversing nodes in the estimate-propagation procedure, and (2) Monte Carlo graph construction in the simulation of diffusion. The results show that our method yields a substantial speed improvement over the existing greedy approaches.
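
    For context, the sketch below shows the generic greedy baseline that algorithms such as SMG accelerate: repeatedly add the node with the largest Monte Carlo-estimated marginal spread under the independent cascade model. It illustrates the computation being optimized, not SMG's state-machine bookkeeping itself; the graph and parameters are toy values.

    ```python
    # Generic greedy influence maximization with Monte Carlo spread estimation
    # under the independent cascade model (illustrative baseline only).
    import random

    def simulate_spread(graph, seeds, p=0.1):
        """graph: dict node -> list of neighbours. Returns #activated nodes."""
        active, frontier = set(seeds), list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v in graph.get(u, []):
                    if v not in active and random.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        return len(active)

    def greedy_seeds(graph, k, p=0.1, trials=200):
        seeds = []
        for _ in range(k):
            best, best_gain = None, -1.0
            for v in graph:
                if v in seeds:
                    continue
                # Average spread of seeds + v over many Monte Carlo trials.
                gain = sum(simulate_spread(graph, seeds + [v], p)
                           for _ in range(trials)) / trials
                if gain > best_gain:
                    best, best_gain = v, gain
            seeds.append(best)
        return seeds

    toy = {0: [1, 2], 1: [2, 3], 2: [3], 3: [4], 4: []}
    print(greedy_seeds(toy, k=2))
    ```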

  3. Asynchronous Checkpoint Migration with MRNet in the Scalable Checkpoint / Restart Library

    SciTech Connect

    Mohror, K; Moody, A; de Supinski, B R

    2012-03-20

    Applications running on today's supercomputers tolerate failures by periodically saving their state in checkpoint files on stable storage, such as a parallel file system. Although this approach is simple, the overhead of writing the checkpoints can be prohibitive, especially for large-scale jobs. In this paper, we present initial results of an enhancement to our Scalable Checkpoint/Restart Library (SCR). We employ MRNet, a tree-based overlay network library, to transfer checkpoints from the compute nodes to the parallel file system asynchronously. This enhancement increases application efficiency by removing the need for an application to block while checkpoints are transferred to the parallel file system. We show that the integration of SCR with MRNet can reduce the time spent in I/O operations by as much as 15x. However, our experiments exposed new scalability issues with our initial implementation. We discuss the sources of the scalability problems and our plans to address them.
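
    The asynchronous transfer described above can be sketched conceptually: the application writes each checkpoint to fast node-local storage and continues computing, while a background thread drains the files to the slower parallel file system. This is an illustrative Python model, not SCR or MRNet code; names such as `drainer` and `checkpoint` are hypothetical.

    ```python
    # Conceptual model of asynchronous checkpoint migration: the slow transfer
    # to the parallel file system happens off the application's critical path.
    import pathlib, queue, shutil, threading

    pending = queue.Queue()

    def drainer(pfs_dir):
        """Background thread: copy queued checkpoints to the parallel FS."""
        while True:
            local_path = pending.get()
            if local_path is None:            # sentinel: shut down
                break
            shutil.copy(local_path, pfs_dir)  # slow transfer, app not blocked

    def checkpoint(state_bytes, step, local_dir):
        path = pathlib.Path(local_dir) / f"ckpt_{step}.bin"
        path.write_bytes(state_bytes)         # fast local write; only blocking step
        pending.put(path)                     # hand off to the background drainer
        return path

    # Usage sketch (paths are hypothetical):
    # t = threading.Thread(target=drainer, args=("/parallel/fs/job123",), daemon=True)
    # t.start()
    # checkpoint(b"application state", step=42, local_dir="/tmp")
    # pending.put(None)  # at shutdown
    ```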

  4. Scalable Implementation of Finite Elements by NASA - Implicit (ScIFEi)

    NASA Technical Reports Server (NTRS)

    Warner, James E.; Bomarito, Geoffrey F.; Heber, Gerd; Hochhalter, Jacob D.

    2016-01-01

    Scalable Implementation of Finite Elements by NASA (ScIFEN) is a parallel finite element analysis code written in C++. ScIFEN is designed to provide scalable solutions to computational mechanics problems. It supports a variety of finite element types, nonlinear material models, and boundary conditions. This report provides an overview of ScIFEi ("Sci-Fi"), the implicit solid mechanics driver within ScIFEN. A description of ScIFEi's capabilities is provided, including an overview of the tools and features that accompany the software as well as a description of the input and output file formats. Results from several problems are included, demonstrating the efficiency and scalability of ScIFEi by comparison with finite element analysis using a commercial code.

  5. SeqPig: simple and scalable scripting for large sequencing data sets in Hadoop

    PubMed Central

    Schumacher, André; Pireddu, Luca; Niemenmaa, Matti; Kallio, Aleksi; Korpelainen, Eija; Zanetti, Gianluigi; Heljanko, Keijo

    2014-01-01

    Summary: Hadoop MapReduce-based approaches have become increasingly popular due to their scalability in processing large sequencing datasets. However, as these methods typically require in-depth expertise in Hadoop and Java, they are still out of reach of many bioinformaticians. To solve this problem, we have created SeqPig, a library and a collection of tools to manipulate, analyze and query sequencing datasets in a scalable and simple manner. SeqPig scripts use the Hadoop-based distributed scripting engine Apache Pig, which automatically parallelizes and distributes data processing tasks. We demonstrate SeqPig’s scalability over many computing nodes and illustrate its use with example scripts. Availability and Implementation: Available under the open source MIT license at http://sourceforge.net/projects/seqpig/ Contact: andre.schumacher@yahoo.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24149054

  6. Vertical nanowire electrode array: a highly scalable platform for intracellular interfacing to neuronal circuits

    NASA Astrophysics Data System (ADS)

    Jorgolli, Marsela; Robinson, Jacob; Shalek, Alex; Yoon, Myung-Han; Gertner, Rona; Park, Hongkun

    2012-02-01

    Interrogation of complex neuronal networks requires new experimental tools that are sensitive enough to quantify the strengths of synaptic connections, yet scalable enough to couple to a large number of neurons simultaneously. Here, we present a new, highly scalable intracellular electrode platform based on vertical nanowires that affords parallel interfacing to multiple mammalian neurons. Specifically, we show that our vertical nanowire electrode arrays can intracellularly record and stimulate neuronal activity in dissociated cultures of rat cortical neurons and be used to map multiple individual synaptic connections. This platform's scalability and full compatibility with silicon nanofabrication techniques provide a clear path toward simultaneous high-fidelity interfacing with hundreds of individual neurons, opening up exciting new avenues for neuronal circuit studies and prosthetics.

  7. A scalable approach to modeling groundwater flow on massively parallel computers

    SciTech Connect

    Ashby, S.F.; Falgout, R.D.; Tompson, A.F.B.

    1995-12-01

    We describe a fully scalable approach to the simulation of groundwater flow on a hierarchy of computing platforms, ranging from workstations to massively parallel computers. Specifically, we advocate the use of scalable conceptual models in which the subsurface model is defined independently of the computational grid on which the simulation takes place. We also describe a scalable multigrid algorithm for computing the groundwater flow velocities. We are thus able to leverage both the engineer's time spent developing the conceptual model and the computing resources used in the numerical simulation. We have successfully employed this approach at the LLNL site, where we have run simulations ranging in size from just a few thousand spatial zones (on workstations) to more than eight million spatial zones (on the CRAY T3D), all using the same conceptual model.
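
    The multigrid solver is only named in the abstract, so the sketch below shows a textbook two-grid cycle for a 1-D Poisson problem purely to illustrate the class of method; it is not the LLNL groundwater code, and all names and parameters are illustrative.

    ```python
    # Textbook two-grid cycle for -u'' = f on [0,1] with zero Dirichlet BCs:
    # smooth on the fine grid, solve the restricted residual equation on a
    # coarse grid, prolong the correction back, then smooth again.
    import numpy as np

    def jacobi(u, f, h, sweeps=3, w=2/3):
        for _ in range(sweeps):
            u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h*h*f[1:-1])
        return u

    def residual(u, f, h):
        r = np.zeros_like(u)
        r[1:-1] = f[1:-1] - (2*u[1:-1] - u[:-2] - u[2:]) / (h*h)
        return r

    def two_grid(u, f, h):
        u = jacobi(u, f, h)                          # pre-smooth
        rc = residual(u, f, h)[::2].copy()           # restrict residual (injection)
        nc = len(rc) - 2
        A = (np.diag(2*np.ones(nc)) - np.diag(np.ones(nc-1), 1)
             - np.diag(np.ones(nc-1), -1)) / (2*h)**2
        ec = np.zeros_like(rc)
        ec[1:-1] = np.linalg.solve(A, rc[1:-1])      # exact coarse-grid solve
        e = np.interp(np.arange(len(u)), np.arange(len(u))[::2], ec)  # prolong
        return jacobi(u + e, f, h)                   # correct and post-smooth

    n, h = 65, 1.0 / 64
    x = np.linspace(0, 1, n)
    f = np.pi**2 * np.sin(np.pi * x)                 # exact solution: sin(pi x)
    u = np.zeros(n)
    for _ in range(10):
        u = two_grid(u, f, h)
    print("max error:", np.max(np.abs(u - np.sin(np.pi * x))))
    ```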

  8. Scalable, Shape-specific, Top-down Fabrication Methods for the Synthesis of Engineered Colloidal Particles

    PubMed Central

    Merkel, Timothy J.; Herlihy, Kevin P.; Nunes, Janine; Orgel, Ryan M.; Rolland, Jason P.; DeSimone, Joseph M.

    2010-01-01

    The search for a method to fabricate non-spherical colloidal particles from a variety of materials is of growing interest. As the commercialization of nanotechnology continues to expand, the ability to translate particle fabrication methods from a laboratory to an industrial scale is of increasing significance. In this article, we examine several of the most readily scalable top-down methods for the fabrication of such shape specific particles and compare their capabilities with respect to particle composition, size, shape and complexity as well as the scalability of the method. We offer an extensive examination of Particle Replication In Non-wetting Templates (PRINT®) with regards to the versatility and scalability of this technique. We also detail the specific methods used in PRINT particle fabrication, including harvesting, purification and surface modification techniques, with examination of both past and current methods. PMID:20000620

  9. Agroinfiltration as an Effective and Scalable Strategy of Gene Delivery for Production of Pharmaceutical Proteins.

    PubMed

    Chen, Qiang; Lai, Huafang; Hurtado, Jonathan; Stahnke, Jake; Leuzinger, Kahlin; Dent, Matthew

    2013-06-01

    Current human biologics are most commonly produced by mammalian cell culture-based fermentation technologies. However, its limited scalability and high cost prevent this platform from meeting the ever increasing global demand. Plants offer a novel alternative system for the production of pharmaceutical proteins that is more scalable, cost-effective, and safer than current expression paradigms. The recent development of deconstructed virus-based vectors has allowed rapid and high-level transient expression of recombinant proteins, and in turn, provided a preferred plant based production platform. One of the remaining challenges for the commercial application of this platform was the lack of a scalable technology to deliver the transgene into plant cells. Therefore, this review focuses on the development of an effective and scalable technology for gene delivery in plants. Direct and indirect gene delivery strategies for plant cells are first presented, and the two major gene delivery technologies based on agroinfiltration are subsequently discussed. Furthermore, the advantages of syringe and vacuum infiltration as gene delivery methodologies are extensively discussed, in context of their applications and scalability for commercial production of human pharmaceutical proteins in plants. The important steps and critical parameters for the successful implementation of these strategies are also detailed in the review. Overall, agroinfiltration based on syringe and vacuum infiltration provides an efficient, robust and scalable gene-delivery technology for the transient expression of recombinant proteins in plants. The development of this technology will greatly facilitate the realization of plant transient expression systems as a premier platform for commercial production of pharmaceutical proteins.

  10. Extreme Performance Scalable Operating Systems Final Progress Report (July 1, 2008 - October 31, 2011)

    SciTech Connect

    Malony, Allen D; Shende, Sameer

    2011-10-31

    This is the final progress report for the FastOS (Phase 2) (FastOS-2) project with Argonne National Laboratory and the University of Oregon (UO). The project started at UO on July 1, 2008 and ran until April 30, 2010, at which time a six-month no-cost extension began. The FastOS-2 work at UO delivered excellent results in all research work areas:
    * scalable parallel monitoring
    * kernel-level performance measurement
    * parallel I/O system measurement
    * large-scale and hybrid application performance measurement
    * online scalable performance data reduction and analysis
    * binary instrumentation

  11. Nanodiamonds in Fabry-Perot cavities: a route to scalable quantum computing

    NASA Astrophysics Data System (ADS)

    Greentree, Andrew D.

    2016-02-01

    The negatively-charged nitrogen-vacancy colour centre in diamond has long been identified as a platform for quantum computation. However, despite beautiful proof of concept experiments, a pathway to true scalability has proven elusive. Now a group from Oxford and Grenoble-Alpes have shown coupling between nitrogen-vacancy centres and open Fabry-Perot cavities in a way that proves a clear route to scalable quantum computing (Johnson et al 2015 New J. Phys. 17 122003). And all at the relatively balmy temperature of 77 K.

  12. Scalable load balancing for massively parallel distributed Monte Carlo particle transport

    SciTech Connect

    O'Brien, M. J.; Brantley, P. S.; Joy, K. I.

    2013-07-01

    In order to run computer simulations efficiently on massively parallel computers with hundreds of thousands or millions of processors, care must be taken that the calculation is load balanced across the processors. Examining the workload of every processor leads to an unscalable algorithm, with run time at least as large as O(N), where N is the number of processors. We present a scalable load balancing algorithm, with run time O(log(N)), that involves iterated processor-pair-wise balancing steps, ultimately leading to a globally balanced workload. We demonstrate scalability of the algorithm up to 2 million processors on the Sequoia supercomputer at Lawrence Livermore National Laboratory.
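
    The iterated processor-pair-wise balancing idea can be modeled in a few lines: in round k, rank i pairs with rank i XOR 2^k and the two even out their work counts, so after log2(N) rounds every rank holds close to the global average without anyone scanning all N workloads. The sketch below is a simplified serial model of that pattern, not the authors' Monte Carlo transport code.

    ```python
    # Serial model of hypercube-style pairwise load balancing: log2(N) rounds
    # of pair averaging drive every rank toward the global mean workload.
    import random

    def pairwise_balance(work):
        n = len(work)                      # assume n is a power of two
        rounds = n.bit_length() - 1        # log2(n)
        for k in range(rounds):
            for i in range(n):
                j = i ^ (1 << k)           # partner rank in this round
                if i < j:
                    total = work[i] + work[j]
                    work[i], work[j] = total // 2, total - total // 2
        return work

    work = [random.randint(0, 1000) for _ in range(16)]
    balanced = pairwise_balance(work[:])
    print("imbalance before:", max(work) - min(work),
          "after:", max(balanced) - min(balanced))
    ```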

  13. Limits of size scalability of diffusion and growth: Atoms versus molecules versus colloids

    NASA Astrophysics Data System (ADS)

    Kleppmann, N.; Schreiber, F.; Klapp, S. H. L.

    2017-02-01

    Understanding fundamental growth processes is key to the control of nonequilibrium structure formation for a wide range of materials on all length scales, from atomic to molecular and even colloidal systems. While atomic systems are relatively well studied, molecular and colloidal growth are currently moving more into the focus. This poses the question to what extent growth laws are size scalable between different material systems. We study this question by analyzing the potential energy landscape and performing kinetic Monte Carlo simulations for three representative systems. While submonolayer (island) growth is found to be essentially scalable, we find marked differences when moving into the third (vertical) dimension.
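
    For readers unfamiliar with the method, the sketch below shows a minimal rejection-free kinetic Monte Carlo step of the kind such growth simulations rely on: pick the next event with probability proportional to its rate and advance time by an exponentially distributed increment. The two-event "rate catalogue" is a toy assumption, not the paper's model of atoms, molecules, or colloids.

    ```python
    # Minimal rejection-free kinetic Monte Carlo step (illustrative only).
    import math, random

    def kmc_step(events):
        """events: list of (rate, callback). Executes one event, returns dt."""
        total = sum(rate for rate, _ in events)
        r = random.random() * total
        acc = 0.0
        for rate, callback in events:
            acc += rate
            if r < acc:
                callback()                  # execute the chosen event
                break
        # Time advance drawn from Exp(total rate); 1 - random() avoids log(0).
        return -math.log(1.0 - random.random()) / total

    # Toy usage: two competing processes, deposition and surface hopping.
    counts = {"deposit": 0, "hop": 0}
    events = [(1.0, lambda: counts.__setitem__("deposit", counts["deposit"] + 1)),
              (5.0, lambda: counts.__setitem__("hop", counts["hop"] + 1))]
    t = 0.0
    for _ in range(1000):
        t += kmc_step(events)
    print(counts, "simulated time:", round(t, 2))
    ```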

  14. A Scalable Software Architecture for Booting and Configuring Nodes in the Whitney Commodity Computing Testbed

    NASA Technical Reports Server (NTRS)

    Fineberg, Samuel A.; Kutler, Paul (Technical Monitor)

    1997-01-01

    The Whitney project is integrating commodity off-the-shelf PC hardware and software technology to build a parallel supercomputer with hundreds to thousands of nodes. To build such a system, one must have a scalable software model, and the installation and maintenance of the system software must be completely automated. We describe the design of an architecture for booting, installing, and configuring nodes in such a system with particular consideration given to scalability and ease of maintenance. This system has been implemented on a 40-node prototype of Whitney and is to be used on the 500 processor Whitney system to be built in 1998.

  15. Performance and scalability aspects of directory-based cache coherence in shared-memory multiprocessors

    SciTech Connect

    Picano, S.; Meyer, D.G.; Brooks, E.D. III; Hoag, J.E.

    1993-05-01

    We present a study that accentuates the performance and scalability aspects of directory-based cache coherence in multiprocessor systems. On a multiprocessor with a software-based coherence scheme, efficient implementations rely heavily on the programmer's ability to explicitly manage the memory system, which is typically handled by hardware support on other bus-based, shared memory multiprocessors. We describe a scalable, shared memory, cache coherent multiprocessor and present simulation results obtained on three parallel programs. This multiprocessor configuration exhibits high performance at no additional parallel programming cost.

  16. Limits of size scalability of diffusion and growth: Atoms versus molecules versus colloids.

    PubMed

    Kleppmann, N; Schreiber, F; Klapp, S H L

    2017-02-01

    Understanding fundamental growth processes is key to the control of nonequilibrium structure formation for a wide range of materials on all length scales, from atomic to molecular and even colloidal systems. While atomic systems are relatively well studied, molecular and colloidal growth are currently moving more into the focus. This poses the question to what extent growth laws are size scalable between different material systems. We study this question by analyzing the potential energy landscape and performing kinetic Monte Carlo simulations for three representative systems. While submonolayer (island) growth is found to be essentially scalable, we find marked differences when moving into the third (vertical) dimension.

  17. Visualizing Human Migration Through Space and Time

    NASA Astrophysics Data System (ADS)

    Zambotti, G.; Guan, W.; Gest, J.

    2015-07-01

    Human migration has been an important activity in human societies since antiquity. Since 1890, approximately three percent of the world's population has lived outside of their country of origin. As globalization intensifies in the modern era, human migration persists even as governments seek to more stringently regulate flows. Understanding this phenomenon, its causes, processes and impacts often starts from measuring and visualizing its spatiotemporal patterns. This study builds a generic online platform for users to interactively visualize human migration through space and time. This entails quickly ingesting human migration data in plain text or tabular format; matching the records with pre-established geographic features such as administrative polygons; symbolizing the migration flow by circular arcs of varying color and weight based on the flow attributes; connecting the centroids of the origin and destination polygons; and allowing the user to select either an origin or a destination feature to display all flows in or out of that feature through time. The method was first developed using ArcGIS Server for world-wide cross-country migration, and later applied to visualizing domestic migration patterns within China between provinces, and between states in the United States, all through multiple years. The technical challenges of this study include simplifying the shapes of features to enhance user interaction, rendering performance and application scalability; enabling the temporal renderers to provide time-based rendering of features and the flow among them; and developing a responsive web design (RWD) application to provide an optimal viewing experience. The platform is available online for the public to use, and the methodology is easily adoptable to visualizing any flow, not only human migration but also the flow of goods, capital, disease, ideology, etc., between multiple origins and destinations across space and time.
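
    A generic sketch of the flow-rendering step described above: connect origin and destination centroids with curved arcs whose width encodes flow volume. It stands in for the ArcGIS Server renderer the platform actually uses; the coordinates, counts, and output filename below are made up.

    ```python
    # Illustrative flow map: draw a quadratic Bezier arc between each
    # origin/destination centroid pair, with line weight encoding flow volume.
    import matplotlib.pyplot as plt

    def arc_points(p0, p1, bulge=0.2, steps=32):
        """Quadratic Bezier from p0 to p1, bowed sideways by `bulge` of the span."""
        mx, my = (p0[0] + p1[0]) / 2, (p0[1] + p1[1]) / 2
        dx, dy = p1[0] - p0[0], p1[1] - p0[1]
        ctrl = (mx - dy * bulge, my + dx * bulge)   # offset perpendicular to the chord
        pts = []
        for i in range(steps + 1):
            t = i / steps
            x = (1 - t)**2 * p0[0] + 2 * (1 - t) * t * ctrl[0] + t**2 * p1[0]
            y = (1 - t)**2 * p0[1] + 2 * (1 - t) * t * ctrl[1] + t**2 * p1[1]
            pts.append((x, y))
        return pts

    # (origin centroid, destination centroid, migrant count) -- hypothetical data.
    flows = [((-100, 40), (-75, 43), 5000),
             ((-100, 40), (-122, 47), 1500)]
    for origin, dest, count in flows:
        xs, ys = zip(*arc_points(origin, dest))
        plt.plot(xs, ys, linewidth=1 + count / 1000)  # weight encodes flow volume
    plt.savefig("flows.png")
    ```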

  18. Data Fusion and Visualization with the OpenEarth Framework (OEF)

    NASA Astrophysics Data System (ADS)

    Nadeau, D. R.; Baru, C.; Fouch, M. J.; Crosby, C. J.

    2010-12-01

    sliced by multiple oriented cutting planes and isosurfaced to create 3D skins that trace feature boundaries within the data. Topography may be overlaid with satellite imagery along with data such as gravity and magnetics measurements. Multiple data sets may be visualized simultaneously using overlapping layers and a common 3D+time coordinate space. Data management within the OEF handles and hides the quirks of differing file formats, web protocols, storage structures, coordinate spaces, and metadata representations. Derived data are computed automatically to support interaction and visualization while the original data is left unchanged in its original form. Data is cached for better memory and network efficiency, and all visualization is accelerated by 3D graphics hardware found on today's computers. The OpenEarth Framework project is currently prototyping the software for use in the visualization and integration of continental-scale geophysical data being produced by EarthScope-related research in the Western US. The OEF is providing researchers with new ways to display and interrogate their data and is anticipated to be a valuable tool for future EarthScope-related research.

  19. The Visual Analysis of Visual Metaphor.

    ERIC Educational Resources Information Center

    Dake, Dennis M.; Roberts, Brian

    This paper presents an approach to understanding visual metaphor which uses metaphoric analysis and comprehension by graphic and pictorial means. The perceptible qualities of shape, line, form, color, and texture, that make up the visual structure characteristic of any particular shape, configuration, or scene, are called physiognomic properties;…

  20. Visualizer cognitive style enhances visual creativity.

    PubMed

    Palmiero, Massimiliano; Nori, Raffaella; Piccardi, Laura

    2016-02-26

    In the last two decades, interest in creativity has increased significantly since it was recognized as a skill and a cognitive reserve, and it is now used ever more frequently in ageing training. Here, the relationships between visual creativity and the Visualization-Verbalization cognitive style were investigated. Fifty college students were administered the Creative Synthesis Task, aimed at measuring the ability to construct creative objects, and the Visualization-Verbalization Questionnaire (VVQ), aimed at measuring the attitude to preferentially use either an imagery or a verbal strategy while processing information. Analyses showed that only the originality score of inventions was positively predicted by the VVQ score: a higher VVQ score (indicating a preference for imagery) predicted originality of inventions. These results show that the visualization strategy is involved especially in the originality dimension of creative object production. In light of neuroimaging results, the possibility that different strategies, such as those that involve motor processes, affect visual creativity is also discussed.