Science.gov

Sample records for scalable isosurface visualization

  1. Direct isosurface visualization of hex-based high-order geometry and attribute representations.

    PubMed

    Martin, Tobias; Cohen, Elaine; Kirby, Robert M

    2012-05-01

    In this paper, we present a novel isosurface visualization technique that guarantees the accurate visualization of isosurfaces with complex attribute data defined on (un)structured (curvi)linear hexahedral grids. Isosurfaces of high-order hexahedral-based finite element solutions on both uniform grids (including MRI and CT scans) and more complex geometry representing a domain of interest can be rendered using our algorithm. Additionally, our technique can be used to directly visualize solutions and attributes in isogeometric analysis, an area based on trivariate high-order NURBS (Non-Uniform Rational B-splines) geometry and attribute representations for the analysis. Furthermore, our technique can be used to visualize isosurfaces of algebraic functions. Our approach combines subdivision and numerical root finding to form a robust and efficient isosurface visualization algorithm that does not miss surface features, while finding all intersections between a view frustum and desired isosurfaces. This allows the use of view-independent transparency in the rendering process. We demonstrate our technique through a straightforward CPU implementation on both complex structured and unstructured geometries with high-order simulation solutions, isosurfaces of medical data sets, and isosurfaces of algebraic functions. PMID:22442127
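
    A minimal Python sketch (not the authors' implementation) of the subdivide-then-root-find idea along a single ray: the scalar field f, isovalue iso, and parameter interval [t0, t1] below are hypothetical, and plain sign-change tests are used, which, unlike the paper's guaranteed method, can miss tangential intersections.

      import numpy as np

      def ray_isosurface_hits(f, origin, direction, iso,
                              t0=0.0, t1=10.0, depth=12, tol=1e-8):
          """All t with f(origin + t*direction) == iso, found by recursive
          subdivision of [t0, t1] plus bisection where the sign changes."""
          g = lambda t: f(origin + t * direction) - iso
          hits = []

          def subdivide(a, b, d):
              if d == 0:
                  ga, gb = g(a), g(b)
                  if ga * gb <= 0.0:          # sign change: bisect the root
                      while b - a > tol:
                          m = 0.5 * (a + b)
                          if ga * g(m) <= 0.0:
                              b = m
                          else:
                              a, ga = m, g(m)
                      hits.append(0.5 * (a + b))
                  return
              m = 0.5 * (a + b)
              subdivide(a, m, d - 1)
              subdivide(m, b, d - 1)

          subdivide(t0, t1, depth)
          return hits

      # Example: both hits of a ray against the unit sphere |p|^2 = 1.
      f = lambda p: np.dot(p, p)
      print(ray_isosurface_hits(f, np.array([-2.0, 0.1, 0.0]),
                                np.array([1.0, 0.0, 0.0]), iso=1.0))

    Finding all intersections along each ray, rather than only the first, is what makes view-independent transparency possible in the paper's renderer.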

  2. Bicubic Subdivision-Surface Wavelets for Large-Scale Isosurface Representation and Visualization

    SciTech Connect

    Bertram, M.; Duchaineau, M.A.; Hamann, B.; Joy, K.I.

    2000-01-05

    We introduce a new subdivision-surface wavelet transform for arbitrary two-manifolds with boundary that is the first to use simple lifting-style filtering operations with bicubic precision. We also describe a conversion process for re-mapping large-scale isosurfaces to have subdivision connectivity and fair parameterizations so that the new wavelet transform can be used for compression and visualization. The main idea enabling our wavelet transform is the circular symmetrization of the filters in irregular neighborhoods, which replaces the traditional separation of filters into two 1-D passes. Our wavelet transform uses polygonal base meshes to represent surface topology, from which a Catmull-Clark-style subdivision hierarchy is generated. The details between these levels of resolution are quickly computed and compactly stored as wavelet coefficients. The isosurface conversion process begins with a contour triangulation computed using conventional techniques, which we subsequently simplify with a variant edge-collapse procedure, followed by an edge-removal process. This provides a coarse initial base mesh, which is subsequently refined, relaxed and attracted in phases to converge to the contour. The conversion is designed to produce smooth, untangled and minimally-skewed parameterizations, which improves the subsequent compression after applying the transform. We have demonstrated our conversion and transform for an isosurface obtained from a high-resolution turbulent-mixing hydrodynamics simulation, showing the potential for compression and level-of-detail visualization.
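
    The mesh-based bicubic transform itself is involved, but the lifting idea it builds on can be shown in one dimension. A sketch, assuming a periodic even-length signal, of a linear predict/update lifting wavelet (not the paper's circularly symmetrized bicubic filters):

      import numpy as np

      def lift_forward(x):
          """One level of a 1-D lifting wavelet: predict odd samples from
          their even neighbors, then update evens to preserve the mean."""
          even, odd = x[0::2].astype(float), x[1::2].astype(float)
          detail = odd - 0.5 * (even + np.roll(even, -1))       # predict
          coarse = even + 0.25 * (detail + np.roll(detail, 1))  # update
          return coarse, detail

      def lift_inverse(coarse, detail):
          even = coarse - 0.25 * (detail + np.roll(detail, 1))
          odd = detail + 0.5 * (even + np.roll(even, -1))
          out = np.empty(even.size + odd.size)
          out[0::2], out[1::2] = even, odd
          return out

      x = np.sin(np.linspace(0, 2 * np.pi, 16, endpoint=False))
      coarse, detail = lift_forward(x)
      assert np.allclose(lift_inverse(coarse, detail), x)  # perfect reconstruction

    As in the paper, smooth regions yield near-zero detail coefficients, which is what makes the representation compress well.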

  3. Seamless multiresolution isosurfaces using wavelets

    SciTech Connect

    Udeshi, T.; Hudson, R.; Papka, M. E.

    2000-04-11

    Data sets that are being produced by today's simulations, such as the ones generated by DOE's ASCI program, are too large for real-time exploration and visualization. Therefore, new methods of visualizing these data sets need to be investigated. The authors present a method that combines isosurface representations of different resolutions into a seamless solution, virtually free of cracks and overlaps. The method combines existing isosurface generation algorithms with wavelet theory to produce multiple-resolution isosurfaces in real time.

  4. Case study of isosurface extraction algorithm performance

    SciTech Connect

    Sutton, P M; Hansen, C D; Shen, H; Schikore, D

    1999-12-14

    Isosurface extraction is an important and useful visualization method. Over the past ten years, the field has seen numerous isosurface techniques published, leaving the user in a quandary about which one should be used. Some papers have published complexity analyses of the techniques, yet empirical evidence comparing different methods is lacking. This case study presents a comparative study of several representative isosurface extraction algorithms. It reports and analyzes empirical measurements of execution times and memory behavior for each algorithm. The results show that asymptotically optimal techniques may not be the best choice when implemented on modern computer architectures.
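
    A small, hypothetical timing harness in the spirit of the case study: it times two implementations of the active-cell search (the step all extraction algorithms share) to show how the same asymptotic work can differ greatly on real hardware. The volume, isovalue, and sizes are arbitrary.

      import time
      import numpy as np

      def active_cells_loop(vol, iso):
          """Naive scan: test each cell's corner min/max in Python loops."""
          nz, ny, nx = vol.shape
          count = 0
          for k in range(nz - 1):
              for j in range(ny - 1):
                  for i in range(nx - 1):
                      cell = vol[k:k+2, j:j+2, i:i+2]
                      if cell.min() <= iso <= cell.max():
                          count += 1
          return count

      def active_cells_vec(vol, iso):
          """Vectorized test over the 8 corners of every cell at once."""
          c = np.stack([vol[dz:vol.shape[0]-1+dz, dy:vol.shape[1]-1+dy,
                            dx:vol.shape[2]-1+dx]
                        for dz in (0, 1) for dy in (0, 1) for dx in (0, 1)])
          return int(np.count_nonzero((c.min(0) <= iso) & (iso <= c.max(0))))

      vol = np.random.default_rng(0).random((40, 40, 40))
      for fn in (active_cells_loop, active_cells_vec):
          start = time.perf_counter()
          n = fn(vol, 0.5)
          print(f"{fn.__name__}: {n} cells, {time.perf_counter() - start:.3f}s")

    Even this toy run illustrates the study's point: constant factors tied to memory access and implementation, not asymptotic complexity, dominate observed performance.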

  5. Scalable low complexity image coder for remote volume visualization

    NASA Astrophysics Data System (ADS)

    Lalgudi, Hariharan G.; Marcellin, Michael W.; Bilgin, Ali; Nadar, Mariappan S.

    2008-08-01

    Remote visualization of volumetric data has gained importance over the past few years in order to realize the full potential of tele-radiology. Volume rendering is a computationally intensive process, often requiring hardware acceleration to achieve real-time visualization. Hence, a remote visualization model well suited to high-speed networks is one in which the server (with dedicated hardware) transmits rendered images based on viewpoint requests from clients. In this regard, a compression scheme for the rendered images is vital for efficient utilization of the server-client bandwidth. Also, the complexity of the decompressor should be considered so that a low-end client workstation can decode images at the desired frame rate. We present a scalable low-complexity image coder that has good compression efficiency and high throughput.
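
    A toy illustration, not the authors' coder, of quality scalability via bit-plane transmission: the client reconstructs a usable image from whatever prefix of the stream has arrived, and each further plane reduces the quantization error.

      import numpy as np

      def bitplanes(img, nbits=8):
          """Split an 8-bit image into bit planes, most significant first."""
          return [((img >> b) & 1) for b in range(nbits - 1, -1, -1)]

      def reconstruct(planes, nbits=8):
          """Rebuild from however many planes have arrived so far."""
          img = np.zeros(planes[0].shape, dtype=np.uint16)
          for k, plane in enumerate(planes):
              img |= plane.astype(np.uint16) << (nbits - 1 - k)
          return img.astype(np.uint8)

      img = (np.random.default_rng(1).random((64, 64)) * 255).astype(np.uint8)
      planes = bitplanes(img)
      for received in (2, 4, 8):            # more planes -> lower distortion
          mse = np.mean((img - reconstruct(planes[:received]).astype(float)) ** 2)
          print(f"{received} planes: MSE = {mse:.1f}")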

  6. The Scalable Reasoning System: Lightweight Visualization for Distributed Analytics

    SciTech Connect

    Pike, William A.; Bruce, Joseph R.; Baddeley, Robert L.; Best, Daniel M.; Franklin, Lyndsey; May, Richard A.; Rice, Douglas M.; Riensche, Roderick M.; Younkin, Katarina

    2008-11-01

    A central challenge in visual analytics is the creation of accessible, widely distributable analysis applications that bring the benefits of visual discovery to as broad a user base as possible. Moreover, to support the role of visualization in the knowledge creation process, it is advantageous to allow users to describe the reasoning strategies they employ while interacting with analytic environments. We introduce an application suite called the Scalable Reasoning System (SRS), which provides web-based and mobile interfaces for visual analysis. The service-oriented analytic framework that underlies SRS provides a platform for deploying pervasive visual analytic environments across an enterprise. SRS represents a “lightweight” approach to visual analytics whereby thin client analytic applications can be rapidly deployed in a platform-agnostic fashion. Client applications support multiple coordinated views while giving analysts the ability to record evidence, assumptions, hypotheses and other reasoning artifacts. We describe the capabilities of SRS in the context of a real-world deployment at a regional law enforcement organization.

  7. ViSUS: Visualization Streams for Ultimate Scalability

    SciTech Connect

    Pascucci, V

    2005-02-14

    In this project we developed a suite of progressive visualization algorithms and a data-streaming infrastructure that enable interactive exploration of scientific datasets of unprecedented size. The methodology aims to globally optimize the data flow in a pipeline of processing modules. Each module reads a multi-resolution representation of the input while producing a multi-resolution representation of the output. The use of multi-resolution representations provides the necessary flexibility to trade speed for accuracy in the visualization process. Maximum coherency and minimum delay in the data-flow is achieved by extensive use of progressive algorithms that continuously map local geometric updates of the input stream into immediate updates of the output stream. We implemented a prototype software infrastructure that demonstrated the flexibility and scalability of this approach by allowing large data visualization on single desktop computers, on PC clusters, and on heterogeneous computing resources distributed over a wide area network. When processing terabytes of scientific data, we have achieved an effective increase in visualization performance of several orders of magnitude in two major settings: (i) interactive visualization on desktop workstations of large datasets that cannot be stored locally; (ii) real-time monitoring of a large scientific simulation with negligible impact on the computing resources available. The ViSUS streaming infrastructure enabled the real-time execution and visualization of the two LLNL simulation codes (Miranda and Raptor) run at Supercomputing 2004 on Blue Gene/L at its presentation as the fastest supercomputer in the world. In addition to SC04, we have run live demonstrations at the IEEE VIS conference and at invited talks at the DOE MICS office, DOE computer graphics forum, UC Riverside, and the University of Maryland. In all cases we have shown the capability to stream and visualize interactively data stored remotely at the San
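
    The flavor of the streaming model can be sketched with strided subsampling standing in for ViSUS's hierarchy (the real system uses cache-coherent multi-resolution indexing and progressive streams, not simple strides):

      import numpy as np

      def progressive_levels(volume, levels=4):
          """Yield coarse-to-fine versions of a volume; a downstream
          module can render each level as soon as it arrives and refine
          the picture while finer levels are still streaming."""
          for lvl in range(levels - 1, -1, -1):
              step = 2 ** lvl
              yield lvl, volume[::step, ::step, ::step]

      volume = np.random.default_rng(2).random((64, 64, 64))
      for lvl, v in progressive_levels(volume):
          print(f"level {lvl}: {v.shape}, {v.nbytes} bytes")

    The point of the design is visible even here: the coarsest level is a tiny fraction of the data, so a consumer can show something immediately and trade accuracy for speed.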

  8. Interactive high-resolution isosurface ray casting on multicore processors.

    PubMed

    Wang, Qin; JaJa, Joseph

    2008-01-01

    We present a new method for the interactive rendering of isosurfaces using ray casting on multi-core processors. This method consists of a combination of an object-order traversal that coarsely identifies possible candidate 3D data blocks for each small set of contiguous pixels, and an isosurface ray casting strategy tailored for the resulting limited-size lists of candidate 3D data blocks. While static screen partitioning is widely used in the literature, our scheme performs dynamic allocation of groups of ray casting tasks to ensure almost equal loads among the different threads running on multi-cores while maintaining spatial locality. We also make careful use of the memory management environment commonly present in multi-core processors. We test our system on a two-processor Clovertown platform, each processor a quad-core 1.86 GHz Intel Xeon, for a number of widely different benchmarks. The detailed experimental results show that our system is efficient and scalable, and achieves high cache performance and excellent load balancing, resulting in an overall performance that is superior to any of the previous algorithms. In fact, we achieve interactive isosurface rendering on a 1024 x 1024 screen for all the datasets tested up to the maximum size of the main memory of our platform. PMID:18369267
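
    The load-balancing idea can be sketched with a thread pool whose shared queue hands small pixel tiles to whichever worker is free; the system's candidate-block lists, cache tuning, and actual ray casting are beyond a sketch, so the tile "renderer" below is a stand-in with deliberately uneven cost.

      import numpy as np
      from concurrent.futures import ThreadPoolExecutor

      TILE, H, W = 32, 256, 256

      def render_tile(origin):
          """Stand-in for ray casting one tile; cost varies per tile."""
          y0, x0 = origin
          rng = np.random.default_rng(y0 * W + x0)
          work = ((y0 + x0) // TILE) % 7 + 1    # simulated scene complexity
          tile = sum(rng.random((TILE, TILE)) for _ in range(work)) / work
          return origin, tile

      image = np.zeros((H, W))
      tiles = [(y, x) for y in range(0, H, TILE) for x in range(0, W, TILE)]

      # Dynamic allocation: idle workers pull the next tile, so expensive
      # tiles do not stall a statically assigned screen region.
      with ThreadPoolExecutor(max_workers=4) as pool:
          for (y, x), tile in pool.map(render_tile, tiles):
              image[y:y+TILE, x:x+TILE] = tile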

  9. Direct Isosurface Ray Casting of NURBS-Based Isogeometric Analysis.

    PubMed

    Schollmeyer, Andre; Froehlich, Bernd

    2014-09-01

    In NURBS-based isogeometric analysis, the basis functions of a 3D model's geometric description also form the basis for the solution space of variational formulations of partial differential equations. In order to visualize the results of a NURBS-based isogeometric analysis, we developed a novel GPU-based multi-pass isosurface visualization technique which operates directly on an equivalent rational Bézier representation without the need for discretization or approximation. Our approach utilizes rasterization to generate a list of intervals along the ray that each potentially contain boundary or isosurface intersections. Depth-sorting this list for each ray allows us to proceed in front-to-back order and enables early ray termination. We detect multiple intersections of a ray with the higher-order surface of the model using a sampling-based root-isolation method. The model's surfaces and the isosurfaces always appear smooth, independent of the zoom level, due to our pixel-precise processing scheme. Our adaptive sampling strategy minimizes costs for point evaluations and intersection computations. The implementation shows that the proposed approach interactively visualizes volume meshes containing hundreds of thousands of Bézier elements on current graphics hardware. A comparison to a GPU-based ray casting implementation using spatial data structures indicates that our approach generally performs significantly faster while being more accurate. PMID:26357373

  10. Scalable and portable visualization of large atomistic datasets

    NASA Astrophysics Data System (ADS)

    Sharma, Ashish; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya

    2004-10-01

    A scalable and portable code named Atomsviewer has been developed to interactively visualize a large atomistic dataset consisting of up to a billion atoms. The code uses a hierarchical view frustum-culling algorithm based on the octree data structure to efficiently remove atoms outside of the user's field-of-view. Probabilistic and depth-based occlusion-culling algorithms then select atoms, which have a high probability of being visible. Finally a multiresolution algorithm is used to render the selected subset of visible atoms at varying levels of detail. Atomsviewer is written in C++ and OpenGL, and it has been tested on a number of architectures including Windows, Macintosh, and SGI. Atomsviewer has been used to visualize tens of millions of atoms on a standard desktop computer and, in its parallel version, up to a billion atoms.
    Program summary:
    Title of program: Atomsviewer
    Catalogue identifier: ADUM
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADUM
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Computer for which the program is designed and others on which it has been tested: 2.4 GHz Pentium 4/Xeon processor, professional graphics card; Apple G4 (867 MHz)/G5, professional graphics card
    Operating systems under which the program has been tested: Windows 2000/XP, Mac OS 10.2/10.3, SGI IRIX 6.5
    Programming languages used: C++, C and OpenGL
    Memory required to execute with typical data: 1 gigabyte of RAM
    High speed storage required: 60 gigabytes
    No. of lines in the distributed program including test data, etc.: 550 241
    No. of bytes in the distributed program including test data, etc.: 6 258 245
    Number of bits in a word: Arbitrary
    Number of processors used: 1
    Has the code been vectorized or parallelized: No
    Distribution format: tar gzip file
    Nature of physical problem: Scientific visualization of atomic systems
    Method of solution: Rendering of atoms using computer graphic techniques, culling algorithms for data
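
    The hierarchical view-frustum culling step can be sketched as follows, assuming half-space frustum planes of the form n·p + d >= 0 (a single clipping plane stands in for the six of a real frustum); the octree here is implicit, built by recursive splitting.

      import numpy as np

      def box_outside(lo, hi, n, d):
          """AABB entirely outside plane n.p + d >= 0 ('positive vertex')."""
          p = np.where(n > 0, hi, lo)        # corner furthest along normal
          return np.dot(n, p) + d < 0.0

      def cull(points, lo, hi, planes, depth=4):
          """Keep points inside all planes, rejecting whole octants early."""
          if any(box_outside(lo, hi, n, d) for n, d in planes):
              return np.empty((0, 3))        # node and all its atoms culled
          if depth == 0 or len(points) <= 64:
              keep = np.all([points @ n + d >= 0 for n, d in planes], axis=0)
              return points[keep]
          mid, out = 0.5 * (lo + hi), []
          for octant in range(8):
              sel = np.ones(len(points), bool)
              nlo, nhi = lo.copy(), hi.copy()
              for axis in range(3):
                  if (octant >> axis) & 1:
                      nlo[axis] = mid[axis]; sel &= points[:, axis] >= mid[axis]
                  else:
                      nhi[axis] = mid[axis]; sel &= points[:, axis] < mid[axis]
              out.append(cull(points[sel], nlo, nhi, planes, depth - 1))
          return np.concatenate(out)

      atoms = np.random.default_rng(3).random((100_000, 3))
      planes = [(np.array([1.0, 0.0, 0.0]), -0.5)]   # keep x >= 0.5
      print(len(cull(atoms, np.zeros(3), np.ones(3), planes)))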

  11. Time Critical Isosurface Refinement and Smoothing

    SciTech Connect

    Pascucci, V.; Bajaj, C.L.

    2000-07-10

    Multi-resolution data structures and algorithms are key in visualization to achieve real-time interaction with large data sets. Research has been primarily focused on the off-line construction of such representations, mostly using decimation schemes. Drawbacks of this class of approaches include: (i) the inability to maintain interactivity when the displayed surface changes frequently, (ii) the inability to control the global geometry of the embedding (no self-intersections) of any approximated level of detail of the output surface. In this paper we introduce a technique for on-line construction and smoothing of progressive isosurfaces. Our hybrid approach combines the flexibility of a progressive multi-resolution representation with the advantages of a recursive subdivision scheme. Our main contributions are: (i) a progressive algorithm that builds a multi-resolution surface by successive refinements, so that a coarse representation of the output is generated as soon as a coarse representation of the input is provided, (ii) application of the same scheme to smooth the surface by means of a 3D recursive subdivision rule, (iii) a multi-resolution representation where any adaptively selected level-of-detail surface is guaranteed to be free of self-intersections.

  12. ParaText: scalable text analysis and visualization.

    SciTech Connect

    Dunlavy, Daniel M.; Stanton, Eric T.; Shead, Timothy M.

    2010-07-01

    Automated analysis of unstructured text documents (e.g., web pages, newswire articles, research publications, business reports) is a key capability for solving important problems in areas including decision making, risk assessment, social network analysis, intelligence analysis, scholarly research and others. However, as data sizes continue to grow in these areas, scalable processing, modeling, and semantic analysis of text collections becomes essential. In this paper, we present the ParaText text analysis engine, a distributed memory software framework for processing, modeling, and analyzing collections of unstructured text documents. Results on several document collections using hundreds of processors are presented to illustrate the flexibility, extensibility, and scalability of the entire process of text modeling from raw data ingestion to application analysis.

  13. Infrastructure for Scalable and Interoperable Visualization and Analysis Software Technology

    SciTech Connect

    Bethel, E. Wes

    2004-08-01

    This document describes the LBNL vision for issues to be considered when assembling a large, multi-institution visualization and analysis effort. It was drafted at the request of the PNNL National Visual Analytics Center in July 2004.

  14. Visual Communications for Heterogeneous Networks/Visually Optimized Scalable Image Compression. Final Report for September 1, 1995 - February 28, 2002

    SciTech Connect

    Hemami, S. S.

    2003-06-03

    The authors developed image and video compression algorithms that provide scalability, reconstructibility, and network adaptivity, and developed compression and quantization strategies that are visually optimal at all bit rates. The goal of this research is to enable reliable "universal access" to visual communications over the National Information Infrastructure (NII). All users, regardless of their individual network connection bandwidths, qualities-of-service, or terminal capabilities, should have the ability to access still images, video clips, and multimedia information services, and to use interactive visual communications services. To do so requires special capabilities for image and video compression algorithms: scalability, reconstructibility, and network adaptivity. Scalability allows an information service to provide visual information at many rates, without requiring additional compression or storage after the stream has been compressed the first time. Reconstructibility allows reliable visual communications over an imperfect network. Network adaptivity permits real-time modification of compression parameters to adjust to changing network conditions. Furthermore, to optimize the efficiency of the compression algorithms, they should be visually optimal, where each bit expended reduces the visual distortion. Visual optimality is achieved through first extensive experimentation to quantify human sensitivity to supra-threshold compression artifacts and then incorporation of these experimental results into quantization strategies and compression algorithms.

  15. View-independent Contour Culling of 3D Density Maps for Far-field Viewing of Iso-surfaces

    PubMed Central

    Feng, Powei; Ju, Tao; Warren, Joe

    2011-01-01

    In many applications, iso-surfaces are the primary method for visualizing the structure of 3D density maps. We consider a common scenario where the user views the iso-surfaces from a distance and varies the level associated with the iso-surface as well as the view direction to gain a sense of the general 3D structure of the density map. For many types of density data, the iso-surfaces associated with a particular threshold may be nested and never visible during this type of viewing. In this paper, we discuss a simple, conservative culling method that avoids the generation of interior portions of iso-surfaces at the contouring stage. Unlike existing methods that perform culling based on the current view direction, our culling is performed once for all views and requires no additional computation as the view changes. By pre-computing a single visibility map, culling is done at any iso-value with little overhead in contouring. We demonstrate the effectiveness of the algorithm on a range of bio-medical data and discuss a practical application in online visualization. PMID:21673830
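
    One conservative variant of the idea can be sketched directly: an iso-surface component can only be seen from afar if it borders the region of below-isovalue voxels connected to the volume boundary, so contouring may skip cells that touch only enclosed pockets. A simplified sketch (the paper instead precomputes a single visibility map valid for all isovalues):

      from collections import deque
      import numpy as np

      def exterior_mask(vol, iso):
          """Flood-fill below-iso voxels reachable from the boundary."""
          below = vol < iso
          seen = np.zeros(vol.shape, bool)
          queue = deque()
          for idx in zip(*np.nonzero(below)):
              if 0 in idx or any(i == s - 1 for i, s in zip(idx, vol.shape)):
                  seen[idx] = True
                  queue.append(idx)
          while queue:
              z, y, x = queue.popleft()
              for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                 (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                  n = (z + dz, y + dy, x + dx)
                  if all(0 <= i < s for i, s in zip(n, vol.shape)) \
                          and below[n] and not seen[n]:
                      seen[n] = True
                      queue.append(n)
          return seen

      # A spherical shell of high density: its isosurface at 0.5 has an
      # outer sphere (visible) and an inner sphere (never visible).
      z, y, x = np.mgrid[-1:1:32j, -1:1:32j, -1:1:32j]
      r = np.sqrt(x**2 + y**2 + z**2)
      vol = np.exp(-((r - 0.6) / 0.1) ** 2)
      ext = exterior_mask(vol, 0.5)
      below = vol < 0.5
      print("below-iso voxels:", int(below.sum()),
            "| reachable from outside:", int(ext.sum()))

    The difference between the two counts is the enclosed pocket; only surface cells adjacent to the reachable region need to be contoured.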

  16. Dynamic Isosurface Extraction and Level-of-Detail in Voxel Space

    SciTech Connect

    Lamphere, P.B.; Linebarger, J.M.

    1999-03-01

    A new visualization representation is described, which dramatically improves interactivity for scientific visualizations of structured grid data sets by creating isosurfaces at interactive speeds and with dynamically changeable levels-of-detail (LOD). This representation enables greater interactivity by allowing an analyst to dynamically specify both the desired isosurface threshold and required level-of-detail to be used while rendering the image. A scientist can therefore view very large isosurfaces at interactive speeds (with a low level-of-detail), but has the full data set always available for analysis. The key idea is that various levels-of-detail are represented as differently sized hexahedral virtual voxels, which are stored in a three-dimensional binary tree, or kd-tree; thus the level-of-detail representation is done in voxel space instead of the traditional approach which relies on surface or geometry space decimations. Utilizing the voxel space is an essential step to moving from a post-processing visualization paradigm to a quantitative, real-time paradigm. This algorithm has been implemented as an integral component of the EIGEN/VR project at Sandia National Laboratories, which provides a rich environment for scientists to interactively explore and visualize the results of very large-scale simulations performed on massively parallel supercomputers.
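
    A sketch of the voxel-space LOD idea under simplifying assumptions: cell (min, max) ranges are coarsened into ever-larger "virtual voxels" by block reduction, and at any level the cells straddling the isovalue can be listed directly (the EIGEN/VR implementation stores these in a kd-tree rather than a pyramid).

      import numpy as np

      def cell_minmax(vol):
          """(min, max) over the 8 corners of every grid cell."""
          c = np.stack([vol[dz:vol.shape[0]-1+dz, dy:vol.shape[1]-1+dy,
                            dx:vol.shape[2]-1+dx]
                        for dz in (0, 1) for dy in (0, 1) for dx in (0, 1)])
          return c.min(0), c.max(0)

      def minmax_pyramid(vol, levels=3):
          """Coarsen cell ranges into virtual voxels of growing size."""
          lo, hi = cell_minmax(vol)
          pyramid = [(lo, hi)]
          for _ in range(levels):
              nz, ny, nx = (s // 2 for s in lo.shape)
              lo = lo[:2*nz, :2*ny, :2*nx].reshape(nz, 2, ny, 2, nx, 2).min((1, 3, 5))
              hi = hi[:2*nz, :2*ny, :2*nx].reshape(nz, 2, ny, 2, nx, 2).max((1, 3, 5))
              pyramid.append((lo, hi))
          return pyramid

      z, y, x = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
      vol = np.sqrt(x**2 + y**2 + z**2)      # isosurface at 0.5 is a sphere
      for lvl, (lo, hi) in enumerate(minmax_pyramid(vol)):
          active = np.count_nonzero((lo <= 0.5) & (0.5 <= hi))
          print(f"level {lvl}: {active} of {lo.size} virtual voxels active")

    Analysts get the trade-off the abstract describes: render a few large virtual voxels interactively, with the full-resolution data always available underneath.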

  17. AggreSet: Rich and Scalable Set Exploration using Visualizations of Element Aggregations.

    PubMed

    Yalçin, M Adil; Elmqvist, Niklas; Bederson, Benjamin B

    2016-01-01

    Datasets commonly include multi-value (set-typed) attributes that describe set memberships over elements, such as genres per movie or courses taken per student. Set-typed attributes describe rich relations across elements, sets, and the set intersections. Increasing the number of sets results in a combinatorial growth of relations and creates scalability challenges. Exploratory tasks (e.g. selection, comparison) have commonly been designed in separation for set-typed attributes, which reduces interface consistency. To improve on scalability and to support rich, contextual exploration of set-typed data, we present AggreSet. AggreSet creates aggregations for each data dimension: sets, set-degrees, set-pair intersections, and other attributes. It visualizes the element count per aggregate using a matrix plot for set-pair intersections, and histograms for set lists, set-degrees and other attributes. Its non-overlapping visual design is scalable to numerous and large sets. AggreSet supports selection, filtering, and comparison as core exploratory tasks. It allows analysis of set relations including subsets, disjoint sets and set intersection strength, and also features perceptual set ordering for detecting patterns in set matrices. Its interaction is designed for rich and rapid data exploration. We demonstrate results on a wide range of datasets from different domains with varying characteristics, and report on expert reviews and a case study using student enrollment and degree data with assistant deans at a major public university. PMID:26390465
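
    The core aggregations are easy to state in code. A toy version with hypothetical movie-genre data, computing the element count per set-pair intersection (AggreSet's matrix plot) and the set-degree histogram:

      from itertools import combinations
      from collections import Counter

      movies = {                  # multi-value attribute: genres per movie
          "M1": {"Drama", "Romance"},
          "M2": {"Drama", "Crime"},
          "M3": {"Comedy", "Romance", "Drama"},
          "M4": {"Comedy"},
      }

      pair_counts = Counter()     # matrix plot: set-pair intersections
      degree_hist = Counter()     # histogram: elements per set-degree
      for genres in movies.values():
          degree_hist[len(genres)] += 1
          for a, b in combinations(sorted(genres), 2):
              pair_counts[(a, b)] += 1

      print(dict(pair_counts))    # ('Drama', 'Romance'): 2, ...
      print(dict(degree_hist))    # {2: 2, 3: 1, 1: 1}

    Because only aggregate counts are drawn, the display grows with the number of sets rather than the number of elements, which is the source of the design's scalability.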

  18. Decomposable decoding and display structure for scalable media visualization over advanced collaborative environments

    NASA Astrophysics Data System (ADS)

    Kim, JaeYoun; Kim, JongWon

    2005-10-01

    In this paper, we propose a scalable visualization system to offer high-resolution visualization in multiparty collaborative environments. The proposed system provides a coordination technique for employing large-scale high-resolution display systems and for displaying multiple high-quality videos effectively on systems with limited resources. To this end, the system organizes the distributed visualization application under a generic structure that supports high-resolution video formats, such as DV (digital video) and HDV (high-definition video) streaming, and under a decomposable decoding and display structure that assigns the separated visualization tasks (decoding and display) to different system resources. The system is built on a high-performance local area network, which is utilized as the system bus to transfer the decoded large pixel data between decoding and display tasks. The main focus of this paper is the technique for decoupling decoding from display over a high-performance network to handle multiple high-resolution videos effectively. We explore the feasibility of the proposed system by implementing a prototype and evaluating it over a high-performance network. Finally, the experimental results verify the improved scalability of the display system under the proposed structure.

  19. Interactive Querying over Large Network Data: Scalability, Visualization, and Interaction Design

    PubMed Central

    Pienta, Robert; Tamersoy, Acar; Tong, Hanghang; Endert, Alex; Chau, Duen Horng

    2015-01-01

    Given the explosive growth of modern graph data, new methods are needed that allow for the querying of complex graph structures without the need for complicated query languages; in short, interactive graph querying is desirable. We describe our work towards achieving our overall research goal of designing and developing an interactive querying system for large network data. We focus on three critical aspects: scalable data mining algorithms, graph visualization, and interaction design. We have already completed an approximate subgraph matching system called MAGE in our previous work that fulfills the algorithmic foundation allowing us to query a graph with hundreds of millions of edges. Our preliminary work on visual graph querying, Graphite, was the first step in the process to making an interactive graph querying system. We are in the process of designing the graph visualization and robust interaction needed to make truly interactive graph querying a reality. PMID:25859567
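
    What a graph query returns can be shown at toy scale with NetworkX's exact subgraph matcher (MAGE itself does approximate matching on graphs with hundreds of millions of edges, far beyond this):

      import networkx as nx
      from networkx.algorithms import isomorphism

      G = nx.Graph()                       # data graph
      G.add_edges_from([("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")])

      Q = nx.Graph()                       # query pattern: a triangle
      Q.add_edges_from([(0, 1), (1, 2), (2, 0)])

      matcher = isomorphism.GraphMatcher(G, Q)
      for mapping in matcher.subgraph_isomorphisms_iter():
          print(mapping)                   # G-nodes playing each pattern role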

  1. Decoupling illumination from isosurface generation using 4D light transport.

    PubMed

    Banks, David C; Beason, Kevin M

    2009-01-01

    One way to provide global illumination for the scientist who performs an interactive sweep through a 3D scalar dataset is to pre-compute global illumination, resample the radiance onto a 3D grid, then use it as a 3D texture. The basic approach of repeatedly extracting isosurfaces, illuminating them, and then building a 3D illumination grid suffers from the non-uniform sampling that arises from coupling the sampling of radiance with the sampling of isosurfaces. We demonstrate how the illumination step can be decoupled from the isosurface extraction step by illuminating the entire 3D scalar function as a 3-manifold in 4-dimensional space. By reformulating light transport in a higher dimension, one can sample a 3D volume without requiring the radiance samples to aggregate along individual isosurfaces in the pre-computed illumination grid. PMID:19834238
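
    Once radiance has been resampled onto the grid, shading any isosurface point reduces to one interpolated lookup, which is what makes the interactive sweep cheap. A sketch with a random stand-in grid (computing the 4D light transport itself is the paper's contribution and is not attempted here):

      import numpy as np

      def trilinear(grid, p):
          """Sample a 3-D illumination grid at continuous voxel coords p,
          exactly as a GPU samples a 3-D texture."""
          p0 = np.clip(np.floor(p).astype(int), 0, np.array(grid.shape) - 2)
          f = p - p0
          z, y, x = p0
          c = grid[z:z+2, y:y+2, x:x+2]
          c = c[0] * (1 - f[0]) + c[1] * f[0]      # lerp along z
          c = c[0] * (1 - f[1]) + c[1] * f[1]      # lerp along y
          return c[0] * (1 - f[2]) + c[1] * f[2]   # lerp along x

      radiance = np.random.default_rng(5).random((32, 32, 32))
      # Shade an isosurface vertex at an arbitrary position: one fetch,
      # no re-illumination, whatever isovalue the user sweeps to.
      print(trilinear(radiance, np.array([10.3, 4.8, 20.1])))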

  2. Topology, accuracy, and quality of isosurface meshes using dynamic particles.

    PubMed

    Meyer, Miriah; Kirby, Robert M; Whitaker, Ross

    2007-01-01

    This paper describes a method for constructing isosurface triangulations of sampled, volumetric, three-dimensional scalar fields. The resulting meshes consist of triangles that are of consistently high quality, making them well suited for accurate interpolation of scalar and vector-valued quantities, as required for numerous applications in visualization and numerical simulation. The proposed method does not rely on a local construction or adjustment of triangles as is done, for instance, in advancing wavefront or adaptive refinement methods. Instead, a system of dynamic particles optimally samples an implicit function such that the particles' relative positions can produce a topologically correct Delaunay triangulation. Thus, the proposed method relies on a global placement of triangle vertices. The main contributions of the paper are the integration of dynamic particles systems with surface sampling theory and PDE-based methods for controlling the local variability of particle densities, as well as detailing a practical method that accommodates Delaunay sampling requirements to generate sparse sets of points for the production of high-quality tessellations. PMID:17968128
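
    Two of the ingredients, projection onto the implicit surface and inter-particle repulsion, fit in a short sketch on a unit sphere (the paper's PDE-controlled density adaptation and the Delaunay step are omitted):

      import numpy as np

      F = lambda p: np.sum(p**2, axis=-1) - 1.0     # implicit unit sphere
      gradF = lambda p: 2.0 * p

      pts = np.random.default_rng(6).normal(size=(200, 3))
      for _ in range(50):
          # Repulsion: push each particle away from its near neighbors.
          diff = pts[:, None, :] - pts[None, :, :]
          dist = np.maximum(np.linalg.norm(diff, axis=-1), 0.05)
          pts += 0.001 * (diff / dist[..., None] ** 3).sum(axis=1)
          # Projection: one Newton step along the gradient back to F = 0.
          g = gradF(pts)
          pts -= (F(pts) / np.sum(g * g, axis=-1))[:, None] * g

      print("max |F| after optimization:", float(np.abs(F(pts)).max()))

    The evenly spread samples are what let the subsequent Delaunay triangulation produce consistently well-shaped triangles.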

  3. Scalable Predictive Analysis in Critically Ill Patients Using a Visual Open Data Analysis Platform

    PubMed Central

    Poucke, Sven Van; Zhang, Zhongheng; Schmitz, Martin; Vukicevic, Milan; Laenen, Margot Vander; Celi, Leo Anthony; Deyne, Cathy De

    2016-01-01

    With the accumulation of large amounts of health-related data, predictive analytics could stimulate the transformation of reactive medicine towards Predictive, Preventive and Personalized (PPPM) Medicine, ultimately affecting both cost and quality of care. However, the high dimensionality and high complexity of the data involved prevent data-driven methods from easy translation into clinically relevant models. Additionally, the application of cutting-edge predictive methods and data manipulation require substantial programming skills, limiting their direct exploitation by medical domain experts. This leaves a gap between potential and actual data usage. In this study, the authors address this problem by focusing on open, visual environments, suited to be applied by the medical community. Moreover, we review code-free applications of big data technologies. As a showcase, a framework was developed for the meaningful use of data from critical care patients by integrating the MIMIC-II database in a data mining environment (RapidMiner) supporting scalable predictive analytics using visual tools (RapidMiner's Radoop extension). Guided by the CRoss-Industry Standard Process for Data Mining (CRISP-DM), the ETL process (Extract, Transform, Load) was initiated by retrieving data from the MIMIC-II tables of interest. As a use case, the correlation of platelet count and ICU survival was quantitatively assessed. Using visual tools for ETL on Hadoop and predictive modeling in RapidMiner, we developed robust processes for automatic building, parameter optimization and evaluation of various predictive models, under different feature selection schemes. Because these processes can be easily adopted in other projects, this environment is attractive for scalable predictive analytics in health research. PMID:26731286

  4. Scalable and interactive segmentation and visualization of neural processes in EM datasets.

    PubMed

    Jeong, Won-Ki; Beyer, Johanna; Hadwiger, Markus; Vazquez, Amelio; Pfister, Hanspeter; Whitaker, Ross T

    2009-01-01

    Recent advances in scanning technology provide high resolution EM (Electron Microscopy) datasets that allow neuroscientists to reconstruct complex neural connections in a nervous system. However, due to the enormous size and complexity of the resulting data, segmentation and visualization of neural processes in EM data is usually a difficult and very time-consuming task. In this paper, we present NeuroTrace, a novel EM volume segmentation and visualization system that consists of two parts: a semi-automatic multiphase level set segmentation with 3D tracking for reconstruction of neural processes, and a specialized volume rendering approach for visualization of EM volumes. It employs view-dependent on-demand filtering and evaluation of a local histogram edge metric, as well as on-the-fly interpolation and ray-casting of implicit surfaces for segmented neural structures. Both methods are implemented on the GPU for interactive performance. NeuroTrace is designed to be scalable to large datasets and data-parallel hardware architectures. A comparison of NeuroTrace with a commonly used manual EM segmentation tool shows that our interactive workflow is faster and easier to use for the reconstruction of complex neural processes. PMID:19834227

  5. JuxtaView - A tool for interactive visualization of large imagery on scalable tiled displays

    USGS Publications Warehouse

    Krishnaprasad, N.K.; Vishwanath, V.; Venkataraman, S.; Rao, A.G.; Renambot, L.; Leigh, J.; Johnson, A.E.; Davis, B.

    2004-01-01

    JuxtaView is a cluster-based application for viewing ultra-high-resolution images on scalable tiled displays. We present in JuxtaView a new parallel computing and distributed memory approach for out-of-core montage visualization, using LambdaRAM, a software-based network-level cache system. The ultimate goal of JuxtaView is to enable a user to interactively roam through potentially terabytes of distributed, spatially referenced image data such as those from electron microscopes, satellites and aerial photographs. In working towards this goal, we describe our first prototype implemented over a local area network, where the image is distributed using LambdaRAM, on the memory of all nodes of a PC cluster driving a tiled display wall. Aggressive pre-fetching schemes employed by LambdaRAM help to reduce the latency involved in remote memory access. We compare LambdaRAM with a more traditional memory-mapped file approach for out-of-core visualization. © 2004 IEEE.

  6. Interactive View-Dependent Rendering of Large Isosurfaces

    SciTech Connect

    Gregorski, B; Duchaineau, M; Lindstrom, P; Pascucci, V; Joy, K I

    2002-11-19

    We present an algorithm for interactively extracting and rendering isosurfaces of large volume datasets in a view-dependent fashion. A recursive tetrahedral mesh refinement scheme, based on longest edge bisection, is used to hierarchically decompose the data into a multiresolution structure. This data structure allows fast extraction of arbitrary isosurfaces to within user specified view-dependent error bounds. A data layout scheme based on hierarchical space filling curves provides access to the data in a cache coherent manner that follows the data access pattern indicated by the mesh refinement.

  7. A Scalable Cloud Library Empowering Big Data Management, Diagnosis, and Visualization of Cloud-Resolving Models

    NASA Astrophysics Data System (ADS)

    Zhou, S.; Tao, W. K.; Li, X.; Matsui, T.; Sun, X. H.; Yang, X.

    2015-12-01

    A cloud-resolving model (CRM) is an atmospheric numerical model that can numerically resolve clouds and cloud systems at 0.25-5 km horizontal grid spacings. The main advantage of the CRM is that it can allow explicit interactive processes between microphysics, radiation, turbulence, surface, and aerosols without subgrid cloud fraction, overlapping and convective parameterization. Because of their fine resolution and complex physical processes, it is challenging for the CRM community to i) visualize/inter-compare CRM simulations, ii) diagnose key processes for cloud-precipitation formation and intensity, and iii) evaluate against NASA's field campaign data and L1/L2 satellite data products due to large data volume (~10TB) and complexity of CRM's physical processes. We have been building the Super Cloud Library (SCL) upon a Hadoop framework, capable of CRM database management, distribution, visualization, subsetting, and evaluation in a scalable way. The current SCL capability includes (1) an SCL data model that enables various CRM simulation outputs in NetCDF, including the NASA-Unified Weather Research and Forecasting (NU-WRF) and Goddard Cumulus Ensemble (GCE) model, to be accessed and processed by Hadoop, (2) a parallel NetCDF-to-CSV converter that supports NU-WRF and GCE model outputs, (3) a technique that visualizes Hadoop-resident data with IDL, (4) a technique that subsets Hadoop-resident data, compliant to the SCL data model, with HIVE or Impala via HUE's Web interface, (5) a prototype that enables a Hadoop MapReduce application to dynamically access and process data residing in a parallel file system, PVFS2 or CephFS, where high performance computing (HPC) simulation outputs such as NU-WRF's and GCE's are located. We are testing Apache Spark to speed up SCL data processing and analysis. With the SCL capabilities, SCL users can conduct large-domain on-demand tasks without downloading voluminous CRM datasets and various observations from NASA Field Campaigns and Satellite data to a

  8. Adaptively synchronous scalable spread spectrum (A4S) data-hiding strategy for three-dimensional visualization

    NASA Astrophysics Data System (ADS)

    Hayat, Khizar; Puech, William; Gesquière, Gilles

    2010-04-01

    We propose an adaptively synchronous scalable spread spectrum (A4S) data-hiding strategy to integrate disparate data, needed for a typical 3-D visualization, into a single JPEG2000 format file. JPEG2000 encoding provides a standard format on one hand and the needed multiresolution for scalability on the other. The method has the potential of being imperceptible and robust at the same time. While spread spectrum (SS) methods are known for the high robustness they offer, our data-hiding strategy is removable at the same time, which ensures the highest possible visualization quality. The SS embedding of the discrete wavelet transform (DWT)-domain depth map is carried out in transform-domain YCrCb components from the JPEG2000 coding stream just after the DWT stage. To maintain synchronization, the embedding is carried out while taking into account the correspondence of subbands. Since security is not the immediate concern, we are at liberty with the strength of embedding. This permits us to increase the robustness and to make our method reversible. To estimate the maximum tolerable error in the depth map according to a given viewpoint, a human visual system (HVS)-based psychovisual analysis is also presented.

  9. Dynamic isosurface extraction and level-of-detail in voxel space

    SciTech Connect

    Linebarger, J.M.; Lamphere, P.B.; Breckenridge, A.R.

    1998-06-01

    A new visualization technique is reported, which dramatically improves interactivity for scientific visualizations by working directly with voxel data and by employing efficient algorithms and data structures. This discussion covers the research software, the file structures, examples of data creation, data search, and triangle rendering codes that allow geometric surfaces to be extracted from volumetric data. Uniquely, these methods enable greater interactivity by allowing an analyst to dynamically specify both the desired isosurface threshold and required level-of-detail to be used while rendering the image. The key idea behind this visualization paradigm is that various levels-of-detail are represented as differently sized hexahedral virtual voxels, which are stored in a three-dimensional kd-tree; thus the level-of-detail representation is done in voxel space instead of the traditional approach which relies on surface or geometry space decimations. This algorithm has been implemented as an integral component in the EIGEN/VR project at Sandia National Laboratories, which provides a rich environment for scientists to interactively explore and visualize the results of very large-scale simulations performed on massively parallel supercomputers.

  10. Scalable Adaptive Graphics Environment (SAGE) Software for the Visualization of Large Data Sets on a Video Wall

    NASA Astrophysics Data System (ADS)

    Jedlovec, G.; Srikishen, J.; Edwards, R.; Cross, D.; Welch, J. D.; Smith, M. R.

    2013-12-01

    The use of collaborative scientific visualization systems for the analysis, visualization, and sharing of 'big data' available from new high resolution remote sensing satellite sensors or four-dimensional numerical model simulations is propelling the wider adoption of ultra-resolution tiled display walls interconnected by high speed networks. These systems require a globally connected and well-integrated operating environment that provides persistent visualization and collaboration services. This abstract and subsequent presentation describes a new collaborative visualization system installed for NASA's Short-term Prediction Research and Transition (SPoRT) program at Marshall Space Flight Center and its use for Earth science applications. The system consists of a 3 x 4 array of 1920 x 1080 pixel thin bezel video monitors mounted on a wall in a scientific collaboration lab. The monitors are physically and virtually integrated into a 14' x 7' video display. The display of scientific data on the video wall is controlled by a single Alienware Aurora PC with a 2nd Generation Intel Core 4.1 GHz processor, 32 GB memory, and an AMD Fire Pro W600 video card with 6 mini display port connections. Six mini display-to-dual DVI cables are used to connect the 12 individual video monitors. The open source Scalable Adaptive Graphics Environment (SAGE) windowing and media control framework, running on top of the Ubuntu 12 Linux operating system, allows several users to simultaneously control the display and storage of high resolution still and moving graphics in a variety of formats, on tiled display walls of any size. The Ubuntu operating system supports the open source Scalable Adaptive Graphics Environment (SAGE) software which provides a common environment, or framework, enabling its users to access, display and share a variety of data-intensive information. This information can be digital-cinema animations, high-resolution images, high-definition video

  11. SBIR Phase II Final Report for Scalable Grid Technologies for Visualization Services

    SciTech Connect

    Sebastien Barre; Will Schroeder

    2006-10-15

    This project developed software tools for the automation of grid computing. In particular, the project focused on visualization and imaging tools (VTK, ParaView and ITK); i.e., we developed tools to automatically create Grid services from C++ programs implemented using the open-source VTK visualization and ITK segmentation and registration systems. This approach helps non-Grid experts to create applications using tools with which they are familiar, ultimately producing Grid services for visualization and image analysis by invocation of an automatic process.

  12. A scalable architecture for extracting, aligning, linking, and visualizing multi-Int data

    NASA Astrophysics Data System (ADS)

    Knoblock, Craig A.; Szekely, Pedro

    2015-05-01

    An analyst today has a tremendous amount of data available, but each of the various data sources typically exists in its own silo, so an analyst has limited ability to see an integrated view of the data and has little or no access to contextual information that could help in understanding the data. We have developed the Domain-Insight Graph (DIG) system, an innovative architecture for extracting, aligning, linking, and visualizing massive amounts of domain-specific content from unstructured sources. Under the DARPA Memex program we have already successfully applied this architecture to multiple application domains, including the enormous international problem of human trafficking, where we extracted, aligned and linked data from 50 million online Web pages. DIG builds on our Karma data integration toolkit, which makes it easy to rapidly integrate structured data from a variety of sources, including databases, spreadsheets, XML, JSON, and Web services. The ability to integrate Web services allows Karma to pull in live data from the various social media sites, such as Twitter, Instagram, and OpenStreetMaps. DIG then indexes the integrated data and provides an easy to use interface for query, visualization, and analysis.

  13. Isosurface Extraction in Time-Varying Fields Using a Temporal Hierarchical Index Tree

    NASA Technical Reports Server (NTRS)

    Shen, Han-Wei; Gerald-Yamasaki, Michael (Technical Monitor)

    1998-01-01

    Many high-performance isosurface extraction algorithms have been proposed in the past several years as a result of intensive research efforts. When applying these algorithms to large-scale time-varying fields, the storage overhead incurred from storing the search index often becomes overwhelming. This paper proposes an algorithm for locating isosurface cells in time-varying fields. We devise a new data structure, called the Temporal Hierarchical Index Tree, which utilizes the temporal coherence that exists in a time-varying field and adaptively coalesces the cells' extreme values over time; the resulting extreme values are then used to create the isosurface cell search index. For a typical time-varying scalar data set, not only does this temporal hierarchical index tree require much less storage space, but the amount of I/O required to access the indices from disk at different time steps is also substantially reduced. We illustrate the utility and speed of our algorithm with data from several large-scale time-varying CFD simulations. Our algorithm can achieve more than 80% disk-space savings when compared with existing techniques, while the isosurface extraction time is nearly optimal.
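
    A simplified sketch of the temporal coalescing idea: per-cell (min, max) ranges are merged over a time interval whenever the merged range barely over-approximates the per-step ranges, so one node serves many time steps, and queries descend only where the field actually varies. The data layout and threshold below are hypothetical.

      import numpy as np

      class Node:
          def __init__(self, lo, hi, t0, t1, children=()):
              self.lo, self.hi = lo, hi
              self.t0, self.t1, self.children = t0, t1, children

      def build(series, t0, t1, tol=0.5):
          """series[t] = (min_per_cell, max_per_cell) at time step t."""
          lo = np.minimum.reduce([series[t][0] for t in range(t0, t1 + 1)])
          hi = np.maximum.reduce([series[t][1] for t in range(t0, t1 + 1)])
          per_step = np.mean([np.mean(series[t][1] - series[t][0])
                              for t in range(t0, t1 + 1)])
          if t0 == t1 or np.mean(hi - lo) <= per_step * (1 + tol):
              return Node(lo, hi, t0, t1)        # coalesce whole interval
          m = (t0 + t1) // 2
          return Node(lo, hi, t0, t1,
                      (build(series, t0, m, tol), build(series, m + 1, t1, tol)))

      def candidates(node, t, iso):
          """Cells possibly containing the isosurface at time step t."""
          for child in node.children:
              if child.t0 <= t <= child.t1:
                  return candidates(child, t, iso)
          return np.argwhere((node.lo <= iso) & (iso <= node.hi))

      rng = np.random.default_rng(7)
      base = rng.random((20, 20))
      series = [(base + 0.002 * t, base + 0.002 * t + 0.05) for t in range(16)]
      root = build(series, 0, 15)
      print(len(candidates(root, 7, 0.5)), "candidate cells at t = 7")

    As in the paper, the coalesced ranges are conservative: they may admit a few extra candidate cells but never miss an isosurface cell.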

  14. Isosurface Computation Made Simple: Hardware Acceleration, Adaptive Refinement and Tetrahedral Stripping

    SciTech Connect

    Pascucci, V

    2004-02-18

    This paper presents a simple approach for rendering isosurfaces of a scalar field. Using the vertex programming capability of commodity graphics cards, we transfer the cost of computing an isosurface from the Central Processing Unit (CPU), running the main application, to the Graphics Processing Unit (GPU), rendering the images. We consider a tetrahedral decomposition of the domain and draw one quadrangle (quad) primitive per tetrahedron. A vertex program transforms the quad into the piece of isosurface within the tetrahedron (see Figure 2). In this way, the main application is only devoted to streaming the vertices of the tetrahedra from main memory to the graphics card. For adaptively refined rectilinear grids, the optimization of this streaming process leads to the definition of a new 3D space-filling curve, which generalizes the 2D Sierpinski curve used for efficient rendering of triangulated terrains. We maintain the simplicity of the scheme when constructing view-dependent adaptive refinements of the domain mesh. In particular, we guarantee the absence of T-junctions by satisfying local bounds in our nested error basis. The expensive stage of fixing cracks in the mesh is completely avoided. We discuss practical tradeoffs in the distribution of the workload between the application and the graphics hardware. With current GPUs it is convenient to perform certain computations on the main CPU. Beyond the performance considerations that will change with new generations of GPUs, this approach has the major advantage of completely avoiding the storage in memory of the isosurface vertices and triangles.
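
    The per-tetrahedron computation that the vertex program performs can be written out on the CPU: classify the four corner values against the isovalue and interpolate along each sign-crossing edge, yielding a triangle or a quad. A sketch (the quad's vertex ordering is left unpolished):

      import numpy as np

      def iso_polygon(verts, vals, iso):
          """Isosurface polygon inside one tetrahedron.
          verts: (4,3) corner positions; vals: (4,) scalar values.
          Returns 0, 3, or 4 interpolated points (triangle or quad)."""
          pts = []
          for i in range(4):
              for j in range(i + 1, 4):
                  a, b = vals[i], vals[j]
                  if (a - iso) * (b - iso) < 0:       # edge crosses iso
                      t = (iso - a) / (b - a)
                      pts.append(verts[i] + t * (verts[j] - verts[i]))
          return np.array(pts)

      tet = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
      print(iso_polygon(tet, np.array([0., 1., 1., 1.]), iso=0.5))

    Streaming one such quad per tetrahedron and letting the GPU collapse it onto the isosurface is what removes isosurface geometry from main memory entirely.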

  15. A Unified Air-Sea Visualization System: Survey on Gridding Structures

    NASA Technical Reports Server (NTRS)

    Anand, Harsh; Moorhead, Robert

    1995-01-01

    The goal is to develop a Unified Air-Sea Visualization System (UASVS) to enable the rapid fusion of observational, archival, and model data for verification and analysis. To design and develop UASVS, modelers were polled to determine the gridding structures and visualization systems used, and their needs with respect to visual analysis. A basic UASVS requirement is to allow a modeler to explore multiple data sets within a single environment, or to interpolate multiple datasets onto one unified grid. From this survey, the UASVS should be able to visualize 3D scalar/vector fields; render isosurfaces; visualize arbitrary slices of the 3D data; visualize data defined on spectral element grids with the minimum number of interpolation stages; render contours; produce 3D vector plots and streamlines; provide unified visualization of satellite images, observations and model output overlays; display the visualization on a projection of the user's choice; implement functions so the user can derive diagnostic values; animate the data to see the time-evolution; animate ocean and atmosphere at different rates; store the record of cursor movement, smooth the path, and animate a window around the moving path; repeatedly start and stop the visual time-stepping; generate VHS tape animations; work on a variety of workstations; and allow visualization across clusters of workstations and scalable high performance computer systems.

  16. Order-of-magnitude faster isosurface rendering in software on a PC than using dedicated general-purpose rendering hardware

    NASA Astrophysics Data System (ADS)

    Grevera, George J.; Udupa, Jayaram K.; Odhner, Dewey

    1999-05-01

    The purpose of this work is to compare the speed of isosurface rendering in software with that using dedicated hardware. Input data consists of 10 different objects from various parts of the body and various modalities with a variety of surface sizes and shapes. The software rendering technique consists of a particular method of voxel-based surface rendering, called shell rendering. The hardware method is OpenGL-based and uses the surfaces constructed from our implementation of the 'Marching Cubes' algorithm. The hardware environment consists of a variety of platforms including a Sun Ultra I with a Creator3D graphics card and a Silicon Graphics Reality Engine II, both with polygon rendering hardware, and a 300 MHz Pentium PC. The results indicate that the software method was 18 to 31 times faster than any of the hardware rendering methods. This work demonstrates that a software implementation of a particular rendering algorithm can outperform dedicated hardware. We conclude that for medical surface visualization, expensive dedicated hardware engines are not required. More importantly, available software algorithms on a 300 MHz Pentium PC outperform the speed of rendering via hardware engines by a factor of 18 to 31.

  20. Evaluation of a Scalable In-Situ Visualization System Approach in a Parallelized Computational Fluid Dynamics Application

    NASA Astrophysics Data System (ADS)

    Manten, Sebastian; Vetter, Michael; Olbrich, Stephan

    Current parallel supercomputers provide sufficient performance to simulate unsteady three-dimensional fluid dynamics in high resolution. However, the visualization of the huge amounts of result data cannot be handled by traditional methods, where post-processing modules are usually coupled to the raw data source, either by files or by data flow. To avoid significant bottlenecks in the storage and communication resources, efficient techniques for data extraction and preprocessing at the source have been realized in the parallel, network-distributed chain of our Distributed Simulation and Virtual Reality Environment (DSVR). Here the 3D data extraction is implemented as a parallel library (libDVRP) and can be done in situ during the numerical simulations, which avoids storing raw data for visualization altogether.

  1. A scalable visualization environment for the correlation of radiological and histopathological data at multiple levels of resolution.

    PubMed

    Annese, Jacopo; Weber, Philip

    2009-01-01

    Until the introduction of non-invasive imaging techniques, the representation of anatomy and pathology relied solely on gross dissection and histological staining. Computerized Tomography (CT) and Magnetic Resonance Imaging (MRI) protocols allow for the clinical evaluation of anatomical images derived from complementary modalities, thereby increasing reliability of the diagnosis and the prognosis of disease. Despite the significant improvements in image contrast and resolution of MRI, autopsy and classical histopathological analysis are still indispensable for the correct diagnosis of specific disease. It is therefore important to be able to correlate multiple images from different modalities, in vivo and postmortem, in order to validate non-invasive imaging markers of disease. To that effect, we have developed a methodological pipeline and a visualization environment that allow for the concurrent observation of both macroscopic and microscopic image data relative to the same patient. We describe these applications and sample data relative to the study of the anatomy and disease of the Central Nervous System (CNS). The brain is approached as an organ with a complex 3-dimensional (3-D) architecture that can only be effectively studied combining observation and analysis at the system level as well as at the cellular level. Our computational and visualization environment allows seamless navigation through multiple layers of neurological data that are accessible quickly and simultaneously. PMID:19377104

  2. On the kinematics of scalar iso-surfaces in turbulent flow

    NASA Astrophysics Data System (ADS)

    Wang, Weirong; Riley, James J.; Kramlich, John C.

    2012-11-01

    The behavior of scalar iso-surfaces in turbulent flows is of fundamental interest and also of importance in certain applications, e.g., the stoichiometric surface in nonpremixed, turbulent reacting flows. Of particular interest is the average area per unit volume of the surface, Σ. We report on the use of direct numerical simulations to directly compute Σ and to model its evolution in time for the case of isotropic turbulence. Using both a direct measurement technique and Corrsin's (1955) suggestion of surface-crossing counts, we find the iso-surface in space and measure Σ as the surface evolves in time. This allows us to follow the growth of the surface due to local surface stretching and its ultimate decrease due to molecular destruction. We are also able to measure the principal terms in the evolution equation for Σ, including the surface stretching term S and the molecular destruction term M. For example, for the scalar Z we find that its spatial derivative quantities are approximately statistically independent of Z itself, so that S and M are approximately statistically independent of Z as well. Finally, a model is proposed which fairly accurately predicts the evolution of Σ. Supported by NSF Grant No. OCI-0749200.
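
    Corrsin's surface-crossing idea can be illustrated with a short sketch: for a statistically isotropic field, stereology gives the area per unit volume as Σ = 2 P_L, where P_L is the mean number of iso-crossings per unit length of a straight test line. The Python below estimates Σ this way on a synthetic field; it is a hedged illustration of the crossing-count estimator, not the authors' code.

      import numpy as np

      def sigma_from_crossings(field, iso, dx):
          """Estimate surface area per unit volume (Sigma = 2 * P_L) by
          counting iso-crossings along all grid lines in three directions."""
          crossings, total_length = 0, 0.0
          for axis in range(3):
              f = np.moveaxis(field, axis, 0) - iso
              # sign changes between consecutive samples along this axis
              crossings += np.count_nonzero(np.signbit(f[:-1]) != np.signbit(f[1:]))
              n_lines = f.shape[1] * f.shape[2]
              total_length += n_lines * (f.shape[0] - 1) * dx
          return 2.0 * crossings / total_length

      x = np.linspace(0.0, 2.0 * np.pi, 64)
      X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
      field = np.sin(X) * np.sin(Y) * np.sin(Z)
      print(sigma_from_crossings(field, iso=0.0, dx=x[1] - x[0]))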

  3. Kd-Jump: a path-preserving stackless traversal for faster isosurface raytracing on GPUs.

    PubMed

    Hughes, David M; Lim, Ik Soo

    2009-01-01

    Stackless traversal techniques are often used to circumvent memory bottlenecks by avoiding a stack and replacing return traversal with extra computation. This paper addresses whether stackless traversal approaches remain useful on newer hardware and technology (such as CUDA). To this end, we present a novel stackless approach for implicit kd-trees, which exploits the benefits of index-based node traversal without incurring extra node visitation. This approach, which we term Kd-Jump, enables the traversal to immediately return to the next valid node, like a stack, without the extra node visitation that kd-restart incurs. Also, Kd-Jump does not require global memory (stack) at all and only requires a small matrix in fast constant memory. We report that Kd-Jump outperforms a stack by 10 to 20% and kd-restart by 100%. We also present a Hybrid Kd-Jump, which utilizes a volume stepper for leaf testing and a run-time depth threshold to define where kd-tree traversal stops and volume stepping occurs. By using both methods, we gain the benefits of empty-space removal, fast texture caching, and the real-time ability to determine the best threshold for the current isosurface and view direction. PMID:19834233
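
    The index arithmetic behind a stackless "jump" can be pictured with a toy: in an implicit (pointerless) kd-tree stored as an array, node i has children 2i+1 and 2i+2, and after finishing a subtree the next node of a depth-first walk can be recovered from the index alone, with no stack. The sketch below only illustrates that principle; it is not the paper's GPU ray traversal, and all names are ours.

      def next_after(i):
          """After finishing node i, find the next node of a depth-first
          traversal using only index arithmetic (no stack)."""
          while i > 0:
              parent = (i - 1) // 2
              if i == 2 * parent + 1:   # i was a left child: go to right sibling
                  return i + 1
              i = parent                # i was a right child: keep climbing
          return None                   # traversal finished

      # Stack-free preorder walk of a complete implicit tree with 15 nodes.
      i, n, order = 0, 15, []
      while i is not None:
          order.append(i)
          left = 2 * i + 1
          i = left if left < n else next_after(i)
      print(order)   # 0, 1, 3, 7, 8, 4, 9, 10, 2, ...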

  4. 3D motion tracking of the heart using Harmonic Phase (HARP) isosurfaces

    NASA Astrophysics Data System (ADS)

    Soliman, Abraam S.; Osman, Nael F.

    2010-03-01

    Tags are non-invasive features induced in the heart muscle that enable the tracking of heart motion. Each tag line, in fact, corresponds to a 3D tag surface that deforms with the heart muscle during the cardiac cycle. Tracking tag surface deformation is useful for the analysis of left ventricular motion. Cardiac material markers (Kerwin et al, MIA, 1997) can be obtained from the intersections of orthogonal surfaces, which can be reconstructed from short- and long-axis tagged images. The proposed method uses the Harmonic Phase (HARP) method to track tag lines corresponding to a specific harmonic phase value; the reconstruction of grid tag surfaces is then achieved by a Delaunay triangulation-based interpolation of the sparse tag points. Having three different tag orientations from short- and long-axis images, the proposed method showed the deformation of 3D tag surfaces during the cardiac cycle. Previous work on tag surface reconstruction was restricted to the "dark" tag lines; however, the use of HARP as proposed enables the reconstruction of isosurfaces based on their harmonic phase values. The use of HARP also provides a fast and accurate way to identify and track tag lines, and hence to generate the surfaces.
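
    The Delaunay-based interpolation step can be pictured with off-the-shelf tools: scipy's LinearNDInterpolator triangulates scattered points (via Qhull) and interpolates linearly within each simplex. The data below are synthetic stand-ins for tracked tag points; this is a sketch, not the authors' implementation.

      import numpy as np
      from scipy.interpolate import LinearNDInterpolator

      # Sparse tracked tag points (x, y) with a tag-surface height z at each.
      rng = np.random.default_rng(1)
      xy = rng.uniform(0.0, 10.0, size=(200, 2))
      z = np.sin(xy[:, 0]) + 0.1 * xy[:, 1]      # synthetic surface values

      # Delaunay-triangulation-based linear interpolation onto a regular grid.
      interp = LinearNDInterpolator(xy, z)
      gx, gy = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
      surface = interp(gx, gy)                   # NaN outside the convex hull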

  5. An ISO-surface folding analysis method applied to premature neonatal brain development

    NASA Astrophysics Data System (ADS)

    Rodriguez-Carranza, Claudia E.; Rousseau, Francois; Iordanova, Bistra; Glenn, Orit; Vigneron, Daniel; Barkovich, James; Studholme, Colin

    2006-03-01

    In this paper we describe the application of folding measures to tracking in vivo cortical brain development in premature neonatal brain anatomy. The outer gray matter and the gray-white matter interface surfaces were extracted from semi-interactively segmented high-resolution T1 MRI data. Nine curvature- and geometric descriptor-based folding measures were applied to six premature infants, aged 28-37 weeks, using a direct voxelwise iso-surface representation. We have shown that using such an approach it is feasible to extract meaningful surfaces of adequate quality from typical clinically acquired neonatal MRI data. We have shown that most of the folding measures, including a new proposed measure, are sensitive to changes in age and therefore applicable in developing a model that tracks development in premature infants. For the first time gyrification measures have been computed on the gray-white matter interface and on cases whose age is representative of a period of intense brain development.

  6. Scientific Visualization for Atmospheric Data Analysis in Collaborative Virtual Environments

    NASA Astrophysics Data System (ADS)

    Engelke, Wito; Flatken, Markus; Garcia, Arturo S.; Bar, Christian; Gerndt, Andreas

    2016-04-01

    terabytes. The combination of different data sources (e.g., MOLA, HRSC, HiRISE) and selection of presented data (e.g., infrared, spectral, imagery) is also supported. Furthermore, the data is presented unchanged and with the highest possible resolution for the target setup (e.g., power-wall, workstation, laptop) and view distance. The visualization techniques for the volumetric data sets can handle VTK-based [6] data sets and also support different grid types as well as a time component. In detail, the integrated volume rendering uses a GPU-based ray-casting algorithm which was adapted to work in spherical coordinate systems. This approach results in interactive frame-rates without compromising visual fidelity. Besides direct visualization via volume rendering, the prototype supports interactive slicing, extraction of iso-surfaces and probing. The latter can also be used for side-by-side comparison and on-the-fly diagram generation within the application. Similarly to the surface data, a combination of different data sources is supported as well. For example, the extracted iso-surface of a scalar pressure field can be used for the visualization of the temperature. The software development is supported by the ViSTA VR-toolkit [7] and supports different target systems as well as a wide range of VR-devices. Furthermore, the prototype is scalable to run on laptops, workstations and cluster setups. REFERENCES [1] A. S. Garcia, D. J. Roberts, T. Fernando, C. Bar, R. Wolff, J. Dodiya, W. Engelke, and A. Gerndt, "A collaborative workspace architecture for strengthening collaboration among space scientists," in IEEE Aerospace Conference, (Big Sky, Montana, USA), 7-14 March 2015. [2] W. Engelke, "Mars Cartography VR System 2/3." German Aerospace Center (DLR), 2015. Project Deliverable D4.2. [3] E. Hivon, F. K. Hansen, and A. J. Banday, "The healpix primer," arXiv preprint astro-ph/9905275, 1999. [4] K. M. Gorski, E. Hivon, A. Banday, B. D. Wandelt, F. K. Hansen, M. Reinecke, and M

  7. Visualization of gridded scalar data with uncertainty in geosciences

    NASA Astrophysics Data System (ADS)

    Zehner, Björn; Watanabe, Norihiro; Kolditz, Olaf

    2010-10-01

    Characterization of the earth's subsurface involves the construction of 3D models from sparse data and so leads to simulation results that involve some degree of uncertainty. This uncertainty is often neglected in the subsequent visualization, due to the fact that no established methods or available software exist. We describe a visualization method to render scalar fields with a probability density function at each data point. We render these data as isosurfaces and make use of a colour scheme, which intuitively gives the viewer an idea of which parts of the surface are more reliable than others. We further show how to extract an envelope that indicates within which volume the isosurface will lie with a certain confidence, and augment the isosurfaces with additional geometry in order to show this information. The resulting visualization is easy and intuitive to understand and is suitable for rendering multiple distinguishable isosurfaces at a time. It can moreover be easily used together with other visualized objects, such as the geological context. Finally we show how we have integrated this into a visualization pipeline that is based on the Visualization Toolkit (VTK) and the open source scenegraph OpenSG, allowing us to render the results on a desktop and in different kinds of virtual environments.
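
    The envelope idea can be sketched with scikit-image's marching cubes: under a Gaussian model, the isosurfaces of mean - k*std and mean + k*std bound the volume in which the true isosurface lies with roughly the corresponding confidence. The function and the demo field below are illustrative assumptions, not the authors' pipeline.

      import numpy as np
      from skimage import measure

      def isosurface_with_envelope(mean, std, iso, k=2.0):
          """Extract the isosurface of the mean field plus an envelope pair
          bounding where the surface lies with ~95% confidence (k = 2)."""
          surf = measure.marching_cubes(mean, level=iso)
          inner = measure.marching_cubes(mean - k * std, level=iso)
          outer = measure.marching_cubes(mean + k * std, level=iso)
          return surf, inner, outer   # each: (verts, faces, normals, values)

      # Demo: a spherical level set with uniform uncertainty.
      g = np.mgrid[0:32, 0:32, 0:32]
      mean = np.sqrt(((g - 15.5) ** 2).sum(axis=0))
      std = np.full_like(mean, 0.5)
      surf, inner, outer = isosurface_with_envelope(mean, std, iso=10.0)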

  8. Finite Element Results Visualization for Unstructured Grids

    SciTech Connect

    Speck, Douglas E.; Dovey, Donald J.

    1996-07-15

    GRIZ is a general-purpose post-processing application supporting interactive visualization of finite element analysis results on unstructured grids. In addition to basic pseudocolor renderings of state variables over the mesh surface, GRIZ provides modern visualization techniques such as isocontours and isosurfaces, cutting planes, vector field display, and particle traces. GRIZ accepts both command-line and mouse-driven input, and is portable to virtually any UNIX platform which provides Motif and OpenGL libraries.

  9. Parallel Visualization Co-Processing of Overnight CFD Propulsion Applications

    NASA Technical Reports Server (NTRS)

    Edwards, David E.; Haimes, Robert

    1999-01-01

    An interactive visualization system pV3 is being developed for the investigation of advanced computational methodologies employing visualization and parallel processing for the extraction of information contained in large-scale transient engineering simulations. Visual techniques for extracting information from the data in terms of cutting planes, iso-surfaces, particle tracing and vector fields are included in this system. This paper discusses improvements to the pV3 system developed under NASA's Affordable High Performance Computing project.

  10. Efficient visualization of unsteady and huge scalar and vector fields

    NASA Astrophysics Data System (ADS)

    Vetter, Michael; Olbrich, Stephan

    2016-04-01

    and methods, we are developing a stand-alone post-processor, adding further data structures and mapping algorithms, and cooperating with the ICON developers and users. With the implementation of a DSVR-based post-processor, a milestone was achieved. By using the DSVR post-processor, the three processes mentioned are completely separated: the data set is processed in batch mode - e.g. on the same supercomputer on which the data is generated - and the interactive 3D rendering is done afterwards on the scientist's local system. At the current state of implementation, the DSVR post-processor supports the generation of isosurfaces and colored slicers on volume data set time series based on rectilinear grids, as well as the visualization of pathlines on time-varying flow fields based on either rectilinear grids or prism grids. The software implementation and evaluation is done on the supercomputers at DKRZ, including scalability tests using ICON output files in NetCDF format. The next milestones will be (a) the in-situ integration of the DSVR library in the ICON model and (b) the implementation of an isosurface algorithm for prism grids.
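
    For intuition, a pathline on an unsteady rectilinear field is simply the integral curve of a time-varying velocity; the sketch below uses linear grid interpolation and a fourth-order Runge-Kutta step. The toy field and all names are ours, not the DSVR API.

      import numpy as np
      from scipy.interpolate import RegularGridInterpolator

      # Toy unsteady 2D velocity field sampled on a rectilinear grid.
      t_ax = np.linspace(0.0, 1.0, 11)
      x_ax = y_ax = np.linspace(0.0, 1.0, 33)
      T, X, Y = np.meshgrid(t_ax, x_ax, y_ax, indexing="ij")
      u = -np.sin(np.pi * X) * np.cos(np.pi * Y) * (1.0 + T)
      v = np.cos(np.pi * X) * np.sin(np.pi * Y) * (1.0 + T)
      interp_u = RegularGridInterpolator((t_ax, x_ax, y_ax), u)
      interp_v = RegularGridInterpolator((t_ax, x_ax, y_ax), v)

      def velocity(t, p):
          q = (t, p[0], p[1])
          return np.array([interp_u(q)[0], interp_v(q)[0]])

      def pathline(p0, t0, t1, n=100):
          """Integrate one pathline through the unsteady field with RK4."""
          h = (t1 - t0) / n
          p, t, pts = np.asarray(p0, dtype=float), t0, []
          for _ in range(n):
              k1 = velocity(t, p)
              k2 = velocity(t + h / 2, p + h / 2 * k1)
              k3 = velocity(t + h / 2, p + h / 2 * k2)
              k4 = velocity(t + h, p + h * k3)
              p = p + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
              t += h
              pts.append(p)
          return np.array(pts)

      line = pathline((0.3, 0.4), 0.0, 0.9)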

  11. Muster: Massively Scalable Clustering

    Energy Science and Technology Software Center (ESTSC)

    2010-05-20

    Muster is a framework for scalable cluster analysis. It includes implementations of classic K-Medoids partitioning algorithms, as well as infrastructure for making these algorithms run scalably on very large systems. In particular, Muster contains algorithms such as CAPEK (described in reference 1) that are capable of clustering highly distributed data sets in-place on a hundred thousand or more processes.
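
    For intuition, the K-Medoids objective can be shown in a few lines: alternate between assigning points to the nearest medoid and re-picking each cluster's medoid as the member minimizing total in-cluster distance. This serial toy (PAM-style) carries none of CAPEK's distributed machinery; names are illustrative.

      import numpy as np

      def k_medoids(points, k, iters=10, seed=0):
          """Toy K-Medoids: nearest-medoid assignment alternated with
          medoid re-selection inside each cluster."""
          rng = np.random.default_rng(seed)
          dist = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
          medoids = rng.choice(len(points), size=k, replace=False)
          for _ in range(iters):
              labels = np.argmin(dist[:, medoids], axis=1)
              for c in range(k):
                  members = np.flatnonzero(labels == c)
                  if members.size:
                      within = dist[np.ix_(members, members)].sum(axis=1)
                      medoids[c] = members[np.argmin(within)]
          return medoids, labels

      points = np.random.default_rng(3).random((200, 2))
      medoids, labels = k_medoids(points, k=3)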

  12. Scalable rendering on PC clusters

    SciTech Connect

    WYLIE,BRIAN N.; LEWIS,VASILY; SHIRLEY,DAVID NOYES; PAVLAKOS,CONSTANTINE

    2000-04-25

    This case study presents initial results from research targeted at the development of cost-effective scalable visualization and rendering technologies. The implementations of two 3D graphics libraries based on the popular sort-last and sort-middle parallel rendering techniques are discussed. An important goal of these implementations is to provide scalable rendering capability for extremely large datasets (>> 5 million polygons). Applications can use these libraries for either run-time visualization, by linking to an existing parallel simulation, or for traditional post-processing by linking to an interactive display program. The use of parallel, hardware-accelerated rendering on commodity hardware is leveraged to achieve high performance. Current performance results show that, using current hardware (a small 16-node cluster), they can utilize up to 85% of the aggregate graphics performance and achieve rendering rates in excess of 20 million polygons/second using OpenGL® with lighting, Gouraud shading, and individually specified triangles (not t-stripped).
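
    In sort-last rendering, each node rasterizes its share of the data into a full-size image and the partial images are then merged by depth; the merge is a per-pixel z-test, as in this numpy sketch (an illustration of the compositing step only, not the libraries discussed above).

      import numpy as np

      def z_composite(color_a, depth_a, color_b, depth_b):
          """Sort-last compositing: keep, per pixel, the fragment that is
          closer to the viewer (smaller depth value)."""
          mask = depth_b < depth_a                       # where image B wins
          color = np.where(mask[..., None], color_b, color_a)
          depth = np.where(mask, depth_b, depth_a)
          return color, depth

      # Two nodes' partial renderings of one 480x640 frame.
      rng = np.random.default_rng(2)
      ca, cb = rng.random((480, 640, 3)), rng.random((480, 640, 3))
      da, db = rng.random((480, 640)), rng.random((480, 640))
      color, depth = z_composite(ca, da, cb, db)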

  13. Visualizing the Positive-Negative Interface of Molecular Electrostatic Potentials as an Educational Tool for Assigning Chemical Polarity

    ERIC Educational Resources Information Center

    Schonborn, Konrad; Host, Gunnar; Palmerius, Karljohan

    2010-01-01

    To help in interpreting the polarity of a molecule, charge separation can be visualized by mapping the electrostatic potential at the van der Waals surface using a color gradient or by indicating positive and negative regions of the electrostatic potential using different colored isosurfaces. Although these visualizations capture the molecular…

  14. Visualizing higher order finite elements. Final report

    SciTech Connect

    Thompson, David C; Pebay, Philippe Pierre

    2005-11-01

    This report contains an algorithm for decomposing higher-order finite elements into regions appropriate for isosurfacing and proves the conditions under which the algorithm will terminate. Finite elements are used to create piecewise polynomial approximants to the solution of partial differential equations for which no analytical solution exists. These polynomials represent fields such as pressure, stress, and momentum. In the past, these polynomials have been linear in each parametric coordinate. Each polynomial coefficient must be uniquely determined by a simulation, and these coefficients are called degrees of freedom. When there are not enough degrees of freedom, simulations will typically fail to produce a valid approximation to the solution. Recent work has shown that increasing the number of degrees of freedom by increasing the order of the polynomial approximation (instead of increasing the number of finite elements, each of which has its own set of coefficients) can allow some types of simulations to produce a valid approximation with many fewer degrees of freedom than increasing the number of finite elements alone. However, once the simulation has determined the values of all the coefficients in a higher-order approximant, tools do not exist for visual inspection of the solution. This report focuses on a technique for the visual inspection of higher-order finite element simulation results based on decomposing each finite element into simplicial regions where existing visualization algorithms such as isosurfacing will work. The requirements of the isosurfacing algorithm are enumerated and related to the places where the partial derivatives of the polynomial become zero. The original isosurfacing algorithm is then applied to each of these regions in turn.

  15. Scalable coherent interface

    SciTech Connect

    Alnaes, K.; Kristiansen, E.H. ); Gustavson, D.B. ); James, D.V. )

    1990-01-01

    The Scalable Coherent Interface (IEEE P1596) is establishing an interface standard for very high performance multiprocessors, supporting a cache-coherent-memory model scalable to systems with up to 64K nodes. This Scalable Coherent Interface (SCI) will supply a peak bandwidth per node of 1 GigaByte/second. The SCI standard should facilitate assembly of processor, memory, I/O and bus bridge cards from multiple vendors into massively parallel systems with throughput far above what is possible today. The SCI standard encompasses two levels of interface, a physical level and a logical level. The physical level specifies electrical, mechanical and thermal characteristics of connectors and cards that meet the standard. The logical level describes the address space, data transfer protocols, cache coherence mechanisms, synchronization primitives and error recovery. In this paper we address logical level issues such as packet formats, packet transmission, transaction handshake, flow control, and cache coherence. 11 refs., 10 figs.

  16. Sandia Scalable Encryption Software

    SciTech Connect

    Tarman, Thomas D.

    1997-08-13

    Sandia Scalable Encryption Library (SSEL) Version 1.0 is a library of functions that implement Sandia's scalable encryption algorithm. This algorithm is used to encrypt Asynchronous Transfer Mode (ATM) data traffic, and is capable of operating on an arbitrary number of bits at a time (which permits scaling via parallel implementations), while being interoperable with differently scaled versions of this algorithm. The routines in this library implement 8-bit and 32-bit versions of a non-linear mixer which is compatible with Sandia's hardware-based ATM encryptor.

  17. Sandia Scalable Encryption Software

    Energy Science and Technology Software Center (ESTSC)

    1997-08-13

    Sandia Scalable Encryption Library (SSEL) Version 1.0 is a library of functions that implement Sandia's scalable encryption algorithm. This algorithm is used to encrypt Asynchronous Transfer Mode (ATM) data traffic, and is capable of operating on an arbitrary number of bits at a time (which permits scaling via parallel implementations), while being interoperable with differently scaled versions of this algorithm. The routines in this library implement 8-bit and 32-bit versions of a non-linear mixer which is compatible with Sandia's hardware-based ATM encryptor.

  18. Visualization of a Large Set of Hydrogen Atomic Orbital Contours Using New and Expanded Sets of Parametric Equations

    ERIC Educational Resources Information Center

    Rhile, Ian J.

    2014-01-01

    Atomic orbitals are a theme throughout the undergraduate chemistry curriculum, and visualizing them has been a theme in this journal. Contour plots as isosurfaces or contour lines in a plane are the most familiar representations of the hydrogen wave functions. In these representations, a surface of a fixed value of the wave function ψ is plotted…
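
    As a concrete toy, the hydrogen 2p_z wave function in atomic units is ψ = z·exp(-r/2)/(4·sqrt(2π)), and drawing a few fixed-value contour lines of ψ in the xz-plane reproduces the familiar two-lobed picture. The sketch below is our illustration, not the article's parametric-equation approach.

      import numpy as np
      import matplotlib.pyplot as plt

      # Hydrogen 2p_z wave function in atomic units (a0 = 1).
      x = np.linspace(-12, 12, 400)
      z = np.linspace(-12, 12, 400)
      X, Z = np.meshgrid(x, z)
      r = np.hypot(X, Z)
      psi = Z * np.exp(-r / 2) / (4 * np.sqrt(2 * np.pi))

      # Contour lines of fixed psi in the xz-plane: the 2D analogue of
      # the familiar two-lobed 2p isosurface.
      levels = [-0.02, -0.01, -0.005, 0.005, 0.01, 0.02]
      plt.contour(X, Z, psi, levels=levels)
      plt.gca().set_aspect("equal")
      plt.xlabel("x (bohr)")
      plt.ylabel("z (bohr)")
      plt.show()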

  19. Scalable filter banks

    NASA Astrophysics Data System (ADS)

    Hur, Youngmi; Okoudjou, Kasso A.

    2015-08-01

    A finite frame is said to be scalable if its vectors can be rescaled so that the resulting set of vectors is a tight frame. The theory of scalable frames has been extended to the setting of Laplacian pyramids, which are based on (rectangular) paraunitary matrices whose column vectors are Laurent polynomial vectors. This is equivalent to scaling the polyphase matrices of the associated filter banks. Consequently, tight wavelet frames can be constructed by appropriately scaling the columns of these paraunitary matrices by diagonal matrices whose diagonal entries are squared magnitudes of Laurent polynomials. In this paper we present examples of tight wavelet frames constructed in this manner and discuss some of their properties in comparison to the (non-tight) wavelet frames they arise from.
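
    In the notation common in frame theory, the underlying definition can be written compactly (a sketch, not necessarily the paper's exact statement):

      % A finite frame {f_k}, k = 1..N, in R^n is scalable if nonnegative
      % weights c_k turn it into a tight frame:
      \[
        \exists\, c_k \ge 0 \ \text{such that} \quad
        \sum_{k=1}^{N} c_k^{2} \,\langle x, f_k \rangle\, f_k \;=\; A\,x
        \qquad \text{for all } x \in \mathbb{R}^{n},
      \]
      % for some constant A > 0.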

  20. Scalable Work Stealing

    SciTech Connect

    Dinan, James S.; Larkins, D. B.; Sadayappan, Ponnuswamy; Krishnamoorthy, Sriram; Nieplocha, Jaroslaw

    2009-11-14

    Irregular and dynamic parallel applications pose significant challenges to achieving scalable performance on large-scale multicore clusters. These applications often require ongoing, dynamic load balancing in order to maintain efficiency. While effective at small scale, centralized load balancing schemes quickly become a bottleneck on large-scale clusters. Work stealing is a popular approach to distributed dynamic load balancing; however, its performance on large-scale clusters is not well understood. Prior work on work stealing has largely focused on shared memory machines. In this work we investigate the design and scalability of work stealing on modern distributed memory systems. We demonstrate high efficiency and low overhead when scaling to 8,192 processors for three benchmark codes: a producer-consumer benchmark, the unbalanced tree search benchmark, and a multiresolution analysis kernel.
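
    The core data structure behind work stealing is a per-worker double-ended queue: the owner pushes and pops work at one end while idle thieves steal from the opposite end, keeping contention low. The toy, lock-based Python below illustrates only that discipline; production runtimes (and the distributed-memory setting of this paper) use far more careful lock-free designs.

      import threading
      from collections import deque

      class WorkStealingDeque:
          """Owner works on the 'bottom' end; thieves steal from the 'top'."""
          def __init__(self):
              self._dq = deque()
              self._lock = threading.Lock()

          def push(self, task):        # owner adds newly spawned work
              with self._lock:
                  self._dq.append(task)

          def pop(self):               # owner takes its most recent task
              with self._lock:
                  return self._dq.pop() if self._dq else None

          def steal(self):             # a thief takes the oldest task
              with self._lock:
                  return self._dq.popleft() if self._dq else None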

  1. Complexity in scalable computing.

    SciTech Connect

    Rouson, Damian W. I.

    2008-12-01

    The rich history of scalable computing research owes much to a rapid rise in computing platform scale in terms of size and speed. As platforms evolve, so must algorithms and the software expressions of those algorithms. Unbridled growth in scale inevitably leads to complexity. This special issue grapples with two facets of this complexity: scalable execution and scalable development. The former results from efficient programming of novel hardware with increasing numbers of processing units (e.g., cores, processors, threads or processes). The latter results from efficient development of robust, flexible software with increasing numbers of programming units (e.g., procedures, classes, components or developers). The progression in the above two parenthetical lists goes from the lowest levels of abstraction (hardware) to the highest (people). This issue's theme encompasses this entire spectrum. The lead author of each article resides in the Scalable Computing Research and Development Department at Sandia National Laboratories in Livermore, CA. Their co-authors hail from other parts of Sandia, other national laboratories and academia. Their research sponsors include several programs within the Department of Energy's Office of Advanced Scientific Computing Research and its National Nuclear Security Administration, along with Sandia's Laboratory Directed Research and Development program and the Office of Naval Research. The breadth of interests of these authors and their customers reflects in the breadth of applications this issue covers. This article demonstrates how to obtain scalable execution on the increasingly dominant high-performance computing platform: a Linux cluster with multicore chips. The authors describe how deep memory hierarchies necessitate reducing communication overhead by using threads to exploit shared register and cache memory. On a matrix-matrix multiplication problem, they achieve up to 96% parallel efficiency with a three-part strategy: intra

  3. A Scalable Analysis Toolkit

    NASA Technical Reports Server (NTRS)

    Aiken, Alexander

    2001-01-01

    The Scalable Analysis Toolkit (SAT) project aimed to demonstrate that it is feasible and useful to statically detect software bugs in very large systems. The technical focus of the project was on a relatively new class of constraint-based techniques for software analysis, in which the desired facts about programs (e.g., the presence of a particular bug) are phrased as constraint problems to be solved. At the beginning of this project, the most successful forms of formal software analysis were limited forms of automatic theorem proving (as exemplified by the analyses used in language type systems and optimizing compilers), semi-automatic theorem proving for full verification, and model checking. With a few notable exceptions these approaches had not been demonstrated to scale to software systems of even 50,000 lines of code. Realistic approaches to large-scale software analysis cannot hope to make every conceivable formal method scale. Thus, the SAT approach is to mix different methods in one application by using coarse and fast but still adequate methods at the largest scales, and reserving the use of more precise but also more expensive methods at smaller scales for critical aspects (that is, aspects critical to the analysis problem under consideration) of a software system. The principled method proposed for combining a heterogeneous collection of formal systems with different scalability characteristics is mixed constraints. This idea had been used previously in small-scale applications with encouraging results: using mostly coarse methods and narrowly targeted precise methods, useful information (meaning the discovery of bugs in real programs) was obtained with excellent scalability.

  4. Scalable optical quantum computer

    SciTech Connect

    Manykin, E A; Mel'nichenko, E V

    2014-12-31

    A way of designing a scalable optical quantum computer based on the photon echo effect is proposed. Individual rare earth ions Pr³⁺, regularly located in the lattice of the orthosilicate (Y₂SiO₅) crystal, are suggested to be used as optical qubits. Operations with qubits are performed using coherent and incoherent laser pulses. The operation protocol includes both the method of measurement-based quantum computations and the technique of optical computations. Modern hybrid photon echo protocols, which provide a sufficient quantum efficiency when reading recorded states, are considered as most promising for quantum computations and communications. (quantum computer)

  5. Declarative Visualization Queries

    NASA Astrophysics Data System (ADS)

    Pinheiro da Silva, P.; Del Rio, N.; Leptoukh, G. G.

    2011-12-01

    necessarily entirely exposed to scientists writing visualization queries, facilitates the automated construction of visualization pipelines. VisKo queries have been successfully used in support of visualization scenarios from Earth Science domains, including velocity model isosurfaces, gravity data rasters, and contour map renderings. The synergistic environment provided by our CYBER-ShARE initiative at the University of Texas at El Paso has allowed us to work closely with Earth Science experts who have provided both our test data and validation of whether executing VisKo queries returns visualizations that can be used for data analysis. Additionally, we have employed VisKo queries to support visualization scenarios associated with Giovanni, an online platform for data analysis developed by NASA GES DISC. VisKo-enhanced visualizations included time series plotting of aerosol data as well as contour and raster map generation of gridded brightness-temperature data.

  6. SFT: Scalable Fault Tolerance

    SciTech Connect

    Petrini, Fabrizio; Nieplocha, Jarek; Tipparaju, Vinod

    2006-04-15

    In this paper we will present a new technology that we are currently developing within the SFT: Scalable Fault Tolerance FastOS project which seeks to implement fault tolerance at the operating system level. Major design goals include dynamic reallocation of resources to allow continuing execution in the presence of hardware failures, very high scalability, high efficiency (low overhead), and transparency—requiring no changes to user applications. Our technology is based on a global coordination mechanism, that enforces transparent recovery lines in the system, and TICK, a lightweight, incremental checkpointing software architecture implemented as a Linux kernel module. TICK is completely user-transparent and does not require any changes to user code or system libraries; it is highly responsive: an interrupt, such as a timer interrupt, can trigger a checkpoint in as little as 2.5μs; and it supports incremental and full checkpoints with minimal overhead—less than 6% with full checkpointing to disk performed as frequently as once per minute.

  7. Scalable Domain Decomposed Monte Carlo Particle Transport

    NASA Astrophysics Data System (ADS)

    O'Brien, Matthew Joseph

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation. The main algorithms we consider are: • Domain decomposition of constructive solid geometry: enables extremely large calculations in which the background geometry is too large to fit in the memory of a single computational node. • Load Balancing: keeps the workload per processor as even as possible so the calculation runs efficiently. • Global Particle Find: if particles are on the wrong processor, globally resolve their locations to the correct processor based on particle coordinate and background domain. • Supporting algorithms: visualizing constructive solid geometry, sourcing particles, deciding when particle streaming communication is complete, and spatial redecomposition. These algorithms are some of the most important parallel algorithms required for domain decomposed Monte Carlo particle transport. We demonstrate that our previous algorithms were not scalable, prove that our new algorithms are scalable, and run some of the algorithms up to 2 million MPI processes on the Sequoia supercomputer.
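
    For a regular brick decomposition, the "Global Particle Find" step reduces to integer arithmetic that maps a coordinate to the rank owning the enclosing subdomain. The sketch below illustrates that idea under this stated assumption; it is not the dissertation's algorithm, and all names are ours.

      import numpy as np

      def owning_rank(coords, lo, hi, blocks):
          """Map particle coordinates to the rank owning the enclosing
          subdomain, for a uniform blocks=(bx, by, bz) decomposition of
          the box [lo, hi)."""
          coords = np.atleast_2d(coords)
          lo, hi, blocks = (np.asarray(a) for a in (lo, hi, blocks))
          ijk = ((coords - lo) / (hi - lo) * blocks).astype(int)
          ijk = np.clip(ijk, 0, blocks - 1)      # guard the upper boundary
          # linearize (i, j, k) into a rank id
          return (ijk[:, 0] * blocks[1] + ijk[:, 1]) * blocks[2] + ijk[:, 2]

      ranks = owning_rank([[0.9, 0.1, 0.5]], lo=(0, 0, 0), hi=(1, 1, 1),
                          blocks=(4, 4, 4))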

  8. PADMA: PArallel Data Mining Agents for scalable text classification

    SciTech Connect

    Kargupta, H.; Hamzaoglu, I.; Stafford, B.

    1997-03-01

    This paper introduces PADMA (PArallel Data Mining Agents), a parallel agent based system for scalable text classification. PADMA contains modules for (1) parallel data accessing operations, (2) parallel hierarchical clustering, and (3) web-based data visualization. This paper introduces the general architecture of PADMA and presents a detailed description of its different modules.

  9. Optimized scalable network switch

    DOEpatents

    Blumrich, Matthias A.; Chen, Dong; Coteus, Paul W.

    2010-02-23

    In a massively parallel computing system having a plurality of nodes configured in m multi-dimensions, each node including a computing device, a method for routing packets towards their destination nodes is provided which includes generating at least one of a 2m plurality of compact bit vectors containing information derived from downstream nodes. A multilevel arbitration process in which downstream information stored in the compact vectors, such as link status information and fullness of downstream buffers, is used to determine a preferred direction and virtual channel for packet transmission. Preferred direction ranges are encoded and virtual channels are selected by examining the plurality of compact bit vectors. This dynamic routing method eliminates the necessity of routing tables, thus enhancing scalability of the switch.

  10. Optimized scalable network switch

    DOEpatents

    Blumrich, Matthias A.; Chen, Dong; Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Heidelberger, Philip; Steinmacher-Burow, Burkhard D.; Takken, Todd E.; Vranas, Pavlos M.

    2007-12-04

    In a massively parallel computing system having a plurality of nodes configured in m multi-dimensions, each node including a computing device, a method for routing packets towards their destination nodes is provided which includes generating at least one of a 2m plurality of compact bit vectors containing information derived from downstream nodes. A multilevel arbitration process in which downstream information stored in the compact vectors, such as link status information and fullness of downstream buffers, is used to determine a preferred direction and virtual channel for packet transmission. Preferred direction ranges are encoded and virtual channels are selected by examining the plurality of compact bit vectors. This dynamic routing method eliminates the necessity of routing tables, thus enhancing scalability of the switch.

  11. Engineering scalable biological systems

    PubMed Central

    2010-01-01

    Synthetic biology is focused on engineering biological organisms to study natural systems and to provide new solutions for pressing medical, industrial and environmental problems. At the core of engineered organisms are synthetic biological circuits that execute the tasks of sensing inputs, processing logic and performing output functions. In the last decade, significant progress has been made in developing basic designs for a wide range of biological circuits in bacteria, yeast and mammalian systems. However, significant challenges in the construction, probing, modulation and debugging of synthetic biological systems must be addressed in order to achieve scalable higher-complexity biological circuits. Furthermore, concomitant efforts to evaluate the safety and biocontainment of engineered organisms and address public and regulatory concerns will be necessary to ensure that technological advances are translated into real-world solutions. PMID:21468204

  12. Scalable Node Monitoring

    SciTech Connect

    Drotar, Alexander P.; Quinn, Erin E.; Sutherland, Landon D.

    2012-07-30

    The project description is: (1) build a high-performance computer; and (2) create a tool to monitor node applications in the Component Based Tool Framework (CBTF) using code from the Lightweight Data Metric Service (LDMS). The importance of this project is that: (1) there is a need for a scalable, parallel tool to monitor nodes on clusters; and (2) new LDMS plugins need to be easily added to the tool. CBTF stands for Component Based Tool Framework. It is scalable and adjusts to different topologies automatically. It uses the MRNet (Multicast/Reduction Network) mechanism for information transport. CBTF is flexible and general enough to be used for any tool that needs to do a task on many nodes. Its components are reusable and easily added to a new tool. There are three levels of CBTF: (1) the frontend node, which interacts with users; (2) filter nodes, which filter or concatenate information from backend nodes; and (3) backend nodes, where the actual work of the tool is done. LDMS stands for Lightweight Data Metric Services. It is a tool used for monitoring nodes. Ltool is the name of the tool we derived from LDMS. It is dynamically linked and includes the following components: Vmstat, Meminfo, Procinterrupts, and more. It works as follows: the Ltool command is run on the frontend node; Ltool collects information from the backend nodes; the backend nodes send information to the filter nodes; and the filter nodes concatenate the information and send it to a database on the frontend node. Ltool is useful for monitoring nodes on a cluster because the overhead involved in running it is not particularly high and it automatically scales to any size of cluster.

  13. Scalable SCPPM Decoder

    NASA Technical Reports Server (NTRS)

    Quir, Kevin J.; Gin, Jonathan W.; Nguyen, Danh H.; Nguyen, Huy; Nakashima, Michael A.; Moision, Bruce E.

    2012-01-01

    A decoder was developed that decodes a serial concatenated pulse position modulation (SCPPM) encoded information sequence. The decoder takes as input a sequence of four-bit log-likelihood ratios (LLRs) for each PPM slot in a codeword via a XAUI 10-Gb/s quad optical fiber interface. If the decoder is unavailable, it passes the LLRs on to the next decoder via a XAUI 10-Gb/s quad optical fiber interface. Otherwise, it decodes the sequence and outputs information bits through a 1-Gb/s Ethernet UDP/IP (User Datagram Protocol/Internet Protocol) interface. The throughput for a single decoder unit is 150 Mb/s at an average of four decoding iterations; by connecting a number of decoder units in series, a decoding rate equal to the aggregate rate is achieved. The unit is controlled through a 1-Gb/s Ethernet UDP/IP interface. This ground station decoder was developed to demonstrate a deep space optical communication link capability, and is unique in its scalable design, which achieves real-time SCPPM decoding at the aggregate data rate.

  14. 3D visualization of biomedical CT images based on OpenGL and VRML techniques

    NASA Astrophysics Data System (ADS)

    Yin, Meng; Luo, Qingming; Xia, Fuhua

    2002-04-01

    Current high-performance computers and advanced image processing capabilities have made it possible to apply three-dimensional visualization to biomedical computed tomography (CT) images, greatly facilitating research in biomedical engineering. To keep pace with Internet technology, where 3D data are typically stored and processed on powerful servers accessed via TCP/IP, the isosurface results should be applicable to medical visualization in general. Furthermore, this project is a future part of the PACS system our lab is working on. In this system we therefore use the 3D file format VRML2.0, which is accessed through a Web interface for manipulating 3D models. The program generates and modifies triangular isosurface meshes with the marching cubes algorithm, and uses OpenGL and MFC techniques to render the isosurfaces and manipulate the voxel data. This software is well suited to the visualization of volumetric data. Its drawbacks are that 3D image processing on personal computers is rather slow and that the set of tools for 3D visualization is limited. However, these limitations do not affect the applicability of the platform to the tasks needed in elementary laboratory experiments or data preprocessing.

  15. Medical visualization based on VRML technology and its application

    NASA Astrophysics Data System (ADS)

    Yin, Meng; Luo, Qingming; Lu, Qiang; Sheng, Rongbing; Liu, Yafeng

    2003-07-01

    Current high-performance computers and advanced image processing capabilities have made it possible to apply three-dimensional visualization to biomedical images, greatly facilitating research in biomedical engineering. To keep pace with Internet technology, where 3-D data are typically stored and processed on powerful servers accessed via TCP/IP, the isosurface results should be applicable to medical visualization in general. In this system we use the 3-D file format VRML2.0, which is accessed through a Web interface for manipulating 3-D models. The program generates and modifies triangular isosurface meshes with the marching cubes algorithm, using OpenGL and MFC techniques to render the isosurfaces and manipulate the voxel data. This software is well suited to the visualization of volumetric data. Its drawbacks are that 3-D image processing on personal computers is rather slow and that the set of tools for 3-D visualization is limited. However, these limitations do not affect the applicability of the platform to the tasks needed in elementary laboratory experiments or data preprocessing. With the help of OCT and MPE scanning image systems, applying these techniques to the visualization of the rabbit brain and constructing data sets of hierarchical subdivisions of the cerebral information, we can establish a virtual environment on the World Wide Web for rabbit brain research, from gross anatomy down to tissue and cellular levels of detail, providing graphical modeling and information management of both the outer and the inner space of the rabbit brain.

  16. Scalable hybrid unstructured and structured grid raycasting.

    PubMed

    Muigg, Philipp; Hadwiger, Markus; Doleisch, Helmut; Hauser, Helwig

    2007-01-01

    This paper presents a scalable framework for real-time raycasting of large unstructured volumes that employs a hybrid bricking approach. It adaptively combines original unstructured bricks in important (focus) regions with structured bricks that are resampled on demand in less important (context) regions. The basis of this focus+context approach is interactive specification of a scalar degree-of-interest (DOI) function. Thus, rendering always considers two volumes simultaneously: a scalar data volume and the current DOI volume. The crucial problem of visibility sorting is solved by raycasting individual bricks and compositing in visibility order from front to back. In order to minimize visual errors at the grid boundary, the boundary is always rendered accurately, even for resampled bricks. A variety of different rendering modes can be combined, including contour enhancement. A very important property of our approach is that it supports a variety of cell types natively, i.e., it is not constrained to tetrahedral grids, even when interpolation within cells is used. Moreover, our framework can handle multi-variate data, e.g., multiple scalar channels such as temperature or pressure, as well as time-dependent data. The combination of unstructured and structured bricks with different quality characteristics, such as the type of interpolation or resampling resolution, in conjunction with custom texture memory management yields a very scalable system. PMID:17968114
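
    The role of the DOI function can be pictured as a per-brick focus/context decision: bricks whose average interest exceeds a threshold keep the original unstructured data, the rest fall back to resampled structured bricks. The sketch below is schematic, assumes a regular brick grid with dimensions divisible by the brick size, and is not the paper's renderer.

      import numpy as np

      def classify_bricks(doi, brick=16, threshold=0.5):
          """Per-brick focus/context decision from a degree-of-interest
          volume: True -> keep unstructured brick, False -> resample."""
          nz, ny, nx = (s // brick for s in doi.shape)
          focus = np.empty((nz, ny, nx), dtype=bool)
          for k in range(nz):
              for j in range(ny):
                  for i in range(nx):
                      cell = doi[k*brick:(k+1)*brick,
                                 j*brick:(j+1)*brick,
                                 i*brick:(i+1)*brick]
                      focus[k, j, i] = cell.mean() > threshold
          return focus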

  17. iSIGHT-FD scalability test report.

    SciTech Connect

    Clay, Robert L.; Shneider, Max S.

    2008-07-01

    The engineering analysis community at Sandia National Laboratories uses a number of internal and commercial software codes and tools, including mesh generators, preprocessors, mesh manipulators, simulation codes, post-processors, and visualization packages. We define an analysis workflow as the execution of an ordered, logical sequence of these tools. Various forms of analysis (and in particular, methodologies that use multiple function evaluations or samples) involve executing parameterized variations of these workflows. As part of the DART project, we are evaluating various commercial workflow management systems, including iSIGHT-FD from Engineous. This report documents the results of a scalability test that was driven by DAKOTA and conducted on a parallel computer (Thunderbird). The purpose of this experiment was to examine the suitability and performance of iSIGHT-FD for large-scale, parameterized analysis workflows. As the results indicate, we found iSIGHT-FD to be suitable for this type of application.

  18. Scalable Computation of Streamlines on Very Large Datasets

    SciTech Connect

    Pugmire, David; Childs, Hank; Garth, Christoph; Ahern, Sean; Weber, Gunther H.

    2009-09-01

    Understanding vector fields resulting from large scientific simulations is an important and often difficult task. Streamlines, curves that are tangential to a vector field at each point, are a powerful visualization method in this context. Application of streamline-based visualization to very large vector field data represents a significant challenge due to the non-local and data-dependent nature of streamline computation, and requires careful balancing of computational demands placed on I/O, memory, communication, and processors. In this paper we review two parallelization approaches based on established parallelization paradigms (static decomposition and on-demand loading) and present a novel hybrid algorithm for computing streamlines. Our algorithm is aimed at good scalability and performance across the widely varying computational characteristics of streamline-based problems. We perform performance and scalability studies of all three algorithms on a number of prototypical application problems and demonstrate that our hybrid scheme is able to perform well in different settings.

  19. Libra: Scalable Load Balance Analysis

    SciTech Connect

    2009-09-16

    Libra is a tool for scalable analysis of load balance data from all processes in a parallel application. Libra contains an instrumentation module that collects model data from parallel applications and a parallel compression mechanism that uses distributed wavelet transforms to gather load balance model data in a scalable fashion. Data is output to files, and these files can be viewed in a GUI tool by Libra users. The GUI tool associates particular load balance data with regions of code, enabling users to view the load balance properties of distributed "slices" of their application code.

  20. Libra: Scalable Load Balance Analysis

    Energy Science and Technology Software Center (ESTSC)

    2009-09-16

    Libra is a tool for scalable analysis of load balance data from all processes in a parallel application. Libra contains an instrumentation module that collects model data from parallel applications and a parallel compression mechanism that uses distributed wavelet transforms to gather load balance model data in a scalable fashion. Data is output to files, and these files can be viewed in a GUI tool by Libra users. The GUI tool associates particular load balance data with regions of code, enabling users to view the load balance properties of distributed "slices" of their application code.

  1. Scalability study of solid xenon

    SciTech Connect

    Yoo, J.; Cease, H.; Jaskierny, W. F.; Markley, D.; Pahlka, R. B.; Balakishiyeva, D.; Saab, T.; Filipenko, M.

    2015-04-01

    We report a demonstration of the scalability of optically transparent xenon in the solid phase for use as a particle detector above a kilogram scale. We employed a cryostat cooled by liquid nitrogen combined with a xenon purification and chiller system. A modified Bridgman technique reproduces large-scale optically transparent solid xenon.

  2. The relation of scalability and execution time

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He

    1995-01-01

    Scalability has been used extensively as a de facto performance criterion for evaluating parallel algorithms and architectures. However, for many, scalability has theoretical interest only, since it does not reveal execution time. In this paper, the relation between scalability and execution time is carefully studied. Results show that the isospeed scalability well characterizes the variation of execution time: smaller scalability leads to larger execution time, the same scalability leads to the same execution time, etc. Three algorithms from scientific computing are implemented on an Intel Paragon and an IBM SP2 parallel computer. Experimental and theoretical results show that scalability is an important, distinct metric for parallel and distributed systems, and may be as important as execution time in a scalable parallel and distributed environment.
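
    For intuition, the isospeed idea can be written compactly; the following is a hedged sketch of the usual formulation, and the paper should be consulted for the precise definition. With work W solved in time T on p processors, the average unit speed is a = W/(pT); isospeed scaling holds a fixed as the system grows to p' processors, which requires scaled work W', and the scalability is then

      \[
        a = \frac{W}{p\,T}, \qquad
        \psi(p, p') = \frac{p'\,W}{p\,W'} ,
      \]

    so that under the isospeed condition the scaled execution time is T' = T/ψ: smaller scalability means proportionally larger execution time, which is exactly the relation described above.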

  3. A Scalable Database Infrastructure

    NASA Astrophysics Data System (ADS)

    Arko, R. A.; Chayes, D. N.

    2001-12-01

    The rapidly increasing volume and complexity of MG&G data, and the growing demand from funding agencies and the user community that it be easily accessible, demand that we improve our approach to data management in order to reach a broader user base and operate more efficiently and effectively. We have chosen an approach based on industry-standard relational database management systems (RDBMS) that use community-wide data specifications, where there is a clear and well-documented external interface that allows use of general purpose as well as customized clients. Rapid prototypes assembled with this approach show significant advantages over the traditional, custom-built data management systems that often use "in-house" legacy file formats, data specifications, and access tools. We have developed an effective database prototype based on a public-domain RDBMS (PostgreSQL) and metadata standard (FGDC), and used it as a template for several ongoing MG&G database management projects - including ADGRAV (Antarctic Digital Gravity Synthesis), MARGINS, the Community Review system of the Digital Library for Earth Science Education, multibeam swath bathymetry metadata, and the R/V Maurice Ewing onboard acquisition system. By using standard formats and specifications, and working from a common prototype, we are able to reuse code and deploy rapidly. Rather than spend time on low-level details such as storage and indexing (which are built into the RDBMS), we can focus on high-level details such as documentation and quality control. In addition, because many commercial off-the-shelf (COTS) and public domain data browsers and visualization tools have built-in RDBMS support, we can focus on backend development and leave the choice of a frontend client(s) up to the end user. While our prototype is running under an open source RDBMS on a single processor host, the choice of standard components allows this implementation to scale to commercial RDBMS products and multiprocessor servers as

  4. Perspective: n-type oxide thermoelectrics via visual search strategies

    NASA Astrophysics Data System (ADS)

    Xing, Guangzong; Sun, Jifeng; Ong, Khuong P.; Fan, Xiaofeng; Zheng, Weitao; Singh, David J.

    2016-05-01

    We discuss and present search strategies for finding new thermoelectric compositions based on first principles electronic structure and transport calculations. We illustrate them by application to a search for potential n-type oxide thermoelectric materials. This includes a screen based on visualization of electronic energy isosurfaces. We report compounds that show potential as thermoelectric materials along with detailed properties, including SrTiO3, which is a known thermoelectric, and appropriately doped KNbO3 and rutile TiO2.

  5. Stereoscopic video compression using temporal scalability

    NASA Astrophysics Data System (ADS)

    Puri, Atul; Kollarits, Richard V.; Haskell, Barry G.

    1995-04-01

    Despite the fact that human ability to perceive a high degree of realism is directly related to our ability to perceive depth accurately in a scene, most of the commonly used imaging and display technologies are able to provide only a 2D rendering of the 3D real world. Many current as well as emerging applications in areas of entertainment, remote operations, industrial and medicine can benefit from the depth perception offered by stereoscopic video systems which employ two views of a scene imaged under the constraints imposed by human visual system. Among the many challenges to be overcome for practical realization and widespread use of 3D/stereoscopic systems are efficient techniques for digital compression of enormous amounts of data while maintaining compatibility with normal video decoding and display systems. After a brief discussion on the relationship of digital stereoscopic 3DTV with digital TV and HDTV, we present an overview of tools in the MPEG-2 video standard that are relevant to our discussion on compression of stereoscopic video, which is the main topic of this paper. Next, we determine ways in which temporal scalability concepts can be applied to exploit redundancies inherent between the two views of a scene comprising stereoscopic video. Due consideration is given to masking properties of stereoscopic vision to determine bandwidth partitioning between the two views to realize an efficient coding scheme while providing sufficient quality. Simulations are performed on stereoscopic video of normal TV resolution to compare the performance of the two temporal scalability configurations with each other and with the simulcast solution. Preliminary results are quite promising and indicate that the configuration that exploits motion and disparity compensation significantly outperforms the one that exploits disparity compensation alone. Compression of both views of stereo video of normal TV resolution appears feasible in a total of 8 or 9 Mbit/s. Finally

  6. A Scalable Media Multicasting Scheme

    NASA Astrophysics Data System (ADS)

    Youwei, Zhang

    IP multicast has proved infeasible for wide deployment; Application Layer Multicast (ALM), based on end-system multicast, is practical and more scalable than IP multicast on the Internet. In this paper, an ALM protocol called Scalable multicast for High Definition streaming media (SHD) is proposed, in which end-to-end transmission capability is fully exploited for HD media transmission without adding much control overhead. Similar to the transmission style of BitTorrent, hosts forward only part of a data piece according to the available bandwidth, which greatly improves bandwidth usage. On the other hand, some novel strategies are adopted to overcome the disadvantages of the BitTorrent protocol in streaming media transmission. Data transmission between hosts is implemented in a many-to-one style within a hierarchical architecture in most circumstances. Simulations on an Internet-like topology indicate that SHD achieves low link stress, low end-to-end latency, and good stability.

  7. A Scalable Tools Communication Infrastructure

    SciTech Connect

    Buntinas, Darius; Bosilca, George; Graham, Richard L; Vallee, Geoffroy R; Watson, Gregory R.

    2008-01-01

    The Scalable Tools Communication Infrastructure (STCI) is an open source collaborative effort intended to provide high-performance, scalable, resilient, and portable communications and process control services for a wide variety of user and system tools. STCI is aimed specifically at tools for ultrascale computing and uses a component architecture to simplify tailoring the infrastructure to a wide range of scenarios. This paper describes STCI's design philosophy, the various components that will be used to provide an STCI implementation for a range of ultrascale platforms, and a range of tool types. These include tools supporting parallel run-time environments, such as MPI, parallel application correctness tools and performance analysis tools, as well as system monitoring and management tools.

  8. Evaluation of angiogram visualization methods for fast and reliable aneurysm diagnosis

    NASA Astrophysics Data System (ADS)

    Lesar, Žiga; Bohak, Ciril; Marolt, Matija

    2015-03-01

    In this paper we present the results of an evaluation of different visualization methods for angiogram volumetric data: ray casting, marching cubes, and multi-level partition of unity implicits. There are several options available with ray casting: isosurface extraction, maximum intensity projection, and alpha compositing, each producing fundamentally different results. Different visualization methods are suitable for different needs, so this choice is crucial in diagnosis and decision-making processes. We also evaluate visual effects such as ambient occlusion, screen-space ambient occlusion, and depth of field. Some visualization methods include transparency, so we address the question of the relevance of this additional visual information. We employ transfer functions to map data values to color and transparency, allowing us to view or hide particular tissues. All the methods presented in this paper were developed using OpenCL, striving for real-time rendering and quality interaction. An evaluation was conducted to assess the suitability of the visualization methods. Results show the superiority of isosurface extraction with ambient occlusion effects. Visual effects may positively or negatively affect perception of depth, motion, and relative positions in space.
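
    For readers unfamiliar with the ray-casting options named above, the toy NumPy sketch below (illustrative only, not the evaluated OpenCL renderers) contrasts maximum intensity projection with front-to-back alpha compositing on a synthetic volume.

    ```python
    # Toy comparison of two ray-casting options on a stand-in volume:
    # MIP keeps the brightest sample per ray; compositing accumulates
    # color weighted by remaining transmittance, front to back.
    import numpy as np

    vol = np.random.rand(64, 64, 64).astype(np.float32)  # stand-in volume

    # Maximum intensity projection (rays run along axis 0).
    mip = vol.max(axis=0)

    # Front-to-back alpha compositing with a trivial transfer function.
    alpha = np.clip(vol * 0.1, 0.0, 1.0)
    color = np.zeros((64, 64), dtype=np.float32)
    transmittance = np.ones_like(color)
    for z in range(vol.shape[0]):
        color += transmittance * alpha[z] * vol[z]
        transmittance *= (1.0 - alpha[z])
    ```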

  9. Scalable chemical oxygen - iodine laser

    SciTech Connect

    Adamenkov, A A; Bakshin, V V; Vyskubenko, B A; Efremov, V I; Il'in, S P; Ilyushin, Yurii N; Kolobyanin, Yu V; Kudryashov, E A; Troshkin, M V

    2011-12-31

    The problem of scaling chemical oxygen - iodine lasers (COILs) is discussed. The results of an experimental study of a twisted-aerosol singlet oxygen generator meeting the COIL scalability requirements are presented. The energy characteristics of a supersonic COIL with singlet oxygen and iodine mixing in parallel flows are also experimentally studied. An output power of ~7.5 kW, corresponding to a specific power of 230 W cm⁻², is achieved. The maximum chemical efficiency of the COIL is ~30%.

  10. NWChem: scalable parallel computational chemistry

    SciTech Connect

    van Dam, Hubertus JJ; De Jong, Wibe A.; Bylaska, Eric J.; Govind, Niranjan; Kowalski, Karol; Straatsma, TP; Valiev, Marat

    2011-11-01

    NWChem is a general purpose computational chemistry code specifically designed to run on distributed memory parallel computers. The core functionality of the code focuses on molecular dynamics, Hartree-Fock and density functional theory methods for both plane-wave and Gaussian basis sets, tensor contraction engine based coupled cluster capabilities, and combined quantum mechanics/molecular mechanics descriptions. It was realized from the beginning that scalable implementations of these methods required a programming paradigm inherently different from what message passing approaches could offer. In response, a global address space library, the Global Array Toolkit, was developed. The programming model it offers is based on using predominantly one-sided communication. This model underpins most of the functionality in NWChem, and its power is exemplified by the fact that the code scales to tens of thousands of processors. In this paper the core capabilities of NWChem are described, as well as their implementation to achieve an efficient computational chemistry code with high parallel scalability. NWChem is a modern, open source, computational chemistry code specifically designed for large-scale parallel applications. To meet the challenges of developing efficient, scalable, and portable programs of this nature, a particular code design was adopted. This code design involved two main features. First of all, the code is built up in a modular fashion so that a large variety of functionality can be integrated easily. Secondly, to facilitate writing complex parallel algorithms, the Global Array Toolkit was developed. This toolkit allows one to write parallel applications in a shared-memory-like approach, but offers additional mechanisms to exploit data locality to lower communication overheads. This framework has proven to be very successful in computational chemistry but is applicable to any engineering domain.
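
    The one-sided style that the Global Array Toolkit builds on can be sketched with MPI RMA; the snippet below is a schematic analogue using mpi4py, not NWChem or Global Arrays code.

    ```python
    # Schematic analogue of one-sided, global-address-space access: any rank
    # reads a slice of another rank's array without the target participating
    # in the call (run under mpirun with two or more ranks).
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    local = np.full(100, comm.rank, dtype="d")   # this rank's chunk of a "global array"
    win = MPI.Win.Create(local, comm=comm)       # expose it for remote access

    target = (comm.rank + 1) % comm.size
    buf = np.empty(10, dtype="d")
    win.Lock(target, MPI.LOCK_SHARED)
    win.Get(buf, target_rank=target)             # one-sided read of remote data
    win.Unlock(target)
    win.Free()
    ```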

  11. Visual Debugging of Visualization Software: A Case Study for Particle Systems

    SciTech Connect

    Angel, Edward; Crossno, Patricia

    1999-07-12

    Visualization systems are complex dynamic software systems. Debugging such systems is difficult using conventional debuggers because the programmer must try to imagine the three-dimensional geometry based on a list of positions and attributes. In addition, the programmer must be able to mentally animate changes in those positions and attributes to grasp dynamic behaviors within the algorithm. In this paper we shall show that representing geometry, attributes, and relationships graphically permits visual pattern recognition skills to be applied to the debugging problem. The particular application is a particle system used for isosurface extraction from volumetric data. Coloring particles based on individual attributes is especially helpful when these colorings are viewed as animations over successive iterations in the program. Although we describe a particular application, the types of tools that we discuss can be applied to a variety of problems.
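
    A minimal sketch of the idea, assuming stand-in particle data (this is not the authors' tool): plot the particles themselves, colored by a per-particle attribute, so anomalies appear as visible patterns rather than rows of numbers.

    ```python
    # Illustrative visual-debugging aid: render particles colored by an
    # attribute; outliers and clustering defects become visually obvious.
    import numpy as np
    import matplotlib.pyplot as plt

    n = 2000
    pos = np.random.rand(n, 3)    # stand-in particle positions
    energy = np.random.rand(n)    # stand-in per-particle attribute

    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")
    sc = ax.scatter(pos[:, 0], pos[:, 1], pos[:, 2], c=energy, cmap="viridis", s=4)
    fig.colorbar(sc, label="repulsion energy (stand-in attribute)")
    plt.show()
    ```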

  12. Scalable, enantioselective taxane total synthesis

    PubMed Central

    Mendoza, Abraham; Ishihara, Yoshihiro; Baran, Phil S.

    2011-01-01

    Taxanes are a large family of terpenes comprising over 350 members, the most famous of which is Taxol (paclitaxel) — a billion-dollar anticancer drug. Here, we describe the first practical and scalable synthetic entry to these natural products via a concise preparation of (+)-taxa-4(5),11(12)-dien-2-one, which possesses a suitable functional handle to access more oxidised members of its family. This route enabled a gram-scale preparation of the “parent” taxane, taxadiene, representing the largest quantity of this naturally occurring terpene ever isolated or prepared in pure form. The taxane family’s characteristic 6-8-6 tricyclic system containing a bridgehead alkene is forged via a vicinal difunctionalisation/Diels–Alder strategy. Asymmetry is introduced by means of an enantioselective conjugate addition that forms an all-carbon quaternary centre, from which all other stereocentres are fixed via substrate control. This study lays a critical foundation for a planned access to minimally oxidised taxane analogs and a scalable laboratory preparation of Taxol itself. PMID:22169867

  13. Visual Interface for Materials Simulations

    Energy Science and Technology Software Center (ESTSC)

    2004-08-01

    VIMES (Visual Interface for Materials Simulations) is a graphical user interface (GUI) for pre- and post-processing atomistic materials science calculations. The code includes tools for building and visualizing simple crystals, supercells, and surfaces, as well as tools for managing and modifying the input to Sandia materials simulation codes such as Quest (Peter Schultz, SNL 9235) and Towhee (Marcus Martin, SNL 9235). It is often useful to have a graphical interface to construct input for materials simulation codes and to analyze the output of these programs. VIMES has been designed not only to build and visualize different materials systems, but also to make several Sandia codes easier to use and analyze. Furthermore, VIMES has been designed to be reasonably easy to extend to new materials programs. We anticipate that users of Sandia materials simulation codes will use VIMES to simplify the submission and analysis of these simulations. VIMES uses standard OpenGL graphics (as implemented in the Python programming language) to display the molecules. The algorithms used to rotate, zoom, and pan molecules are all standard applications of the OpenGL libraries. VIMES uses the Marching Cubes algorithm for isosurfacing 3D data such as molecular orbitals or electron densities around the molecules.

  14. Scripts for Scalable Monitoring of Parallel Filesystem Infrastructure

    Energy Science and Technology Software Center (ESTSC)

    2014-02-27

    Scripts for scalable monitoring of parallel filesystem infrastructure provide frameworks for monitoring the health of block storage arrays and large InfiniBand fabrics. The block storage framework uses Python multiprocessing so that the number of monitored arrays scales with the number of processors in the system. This enables live monitoring of HPC-scale filesystems with 10-50 storage arrays. For InfiniBand monitoring, scripts are included that monitor the InfiniBand health of each host, along with visualization tools for mapping complex fabric topologies.

  15. Scripts for Scalable Monitoring of Parallel Filesystem Infrastructure

    SciTech Connect

    Caldwell, Blake

    2014-02-27

    Scripts for scalable monitoring of parallel filesystem infrastructure provide frameworks for monitoring the health of block storage arrays and large InfiniBand fabrics. The block storage framework uses Python multiprocessing so that the number of monitored arrays scales with the number of processors in the system. This enables live monitoring of HPC-scale filesystems with 10-50 storage arrays. For InfiniBand monitoring, scripts are included that monitor the InfiniBand health of each host, along with visualization tools for mapping complex fabric topologies.
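
    A minimal sketch of the multiprocessing pattern described above, with hypothetical array hostnames and a ping as a stand-in for a real health probe:

    ```python
    # Poll many storage arrays in parallel, one worker per array, with the
    # worker pool scaled to the processor count. Hostnames are hypothetical.
    import multiprocessing as mp
    import subprocess

    ARRAYS = [f"array{i:02d}" for i in range(40)]

    def check_health(name):
        """Ping one array and report reachability (stand-in for a real probe)."""
        rc = subprocess.call(["ping", "-c", "1", "-W", "1", name],
                             stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        return name, ("OK" if rc == 0 else "UNREACHABLE")

    if __name__ == "__main__":
        with mp.Pool(processes=mp.cpu_count()) as pool:
            for name, status in pool.imap_unordered(check_health, ARRAYS):
                print(f"{name}: {status}")
    ```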

  16. Highly scalable coherent fiber combining

    NASA Astrophysics Data System (ADS)

    Antier, M.; Bourderionnet, J.; Larat, C.; Lallier, E.; Brignon, A.

    2015-10-01

    An architecture for active coherent fiber laser beam combining using an interferometric measurement is demonstrated. This technique allows measuring the exact phase errors of each fiber beam in a single shot. Therefore, this method is a promising candidate for combining a very large number of fibers. Our experimental system, composed of 16 independent fiber channels, is used to evaluate the achieved phase-locking stability in terms of phase-shift error and bandwidth. We show that only 8 pixels per fiber on the camera are required for stable closed-loop operation with a residual phase error of λ/20 rms, which demonstrates the scalability of this concept. Furthermore, we propose a beam shaping technique to increase the combining efficiency.

  17. Scalable Performance Measurement and Analysis

    SciTech Connect

    Gamblin, Todd

    2009-01-01

    Concurrency levels in large-scale, distributed-memory supercomputers are rising exponentially. Modern machines may contain 100,000 or more microprocessor cores, and the largest of these, IBM's Blue Gene/L, contains over 200,000 cores. Future systems are expected to support millions of concurrent tasks. In this dissertation, we focus on efficient techniques for measuring and analyzing the performance of applications running on very large parallel machines. Tuning the performance of large-scale applications can be a subtle and time-consuming task because application developers must measure and interpret data from many independent processes. While the volume of the raw data scales linearly with the number of tasks in the running system, the number of tasks is growing exponentially, and data for even small systems quickly becomes unmanageable. Transporting performance data from so many processes over a network can perturb application performance and make measurements inaccurate, and storing such data would require a prohibitive amount of space. Moreover, even if it were stored, analyzing the data would be extremely time-consuming. In this dissertation, we present novel methods for reducing performance data volume. The first draws on multi-scale wavelet techniques from signal processing to compress systemwide, time-varying load-balance data. The second uses statistical sampling to select a small subset of running processes to generate low-volume traces. A third approach combines sampling and wavelet compression to stratify performance data adaptively at run-time and to reduce further the cost of sampled tracing. We have integrated these approaches into Libra, a toolset for scalable load-balance analysis. We present Libra and show how it can be used to analyze data from large scientific applications scalably.
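
    The wavelet-compression idea can be sketched as follows (illustrative, not the Libra implementation): decompose a per-process load signal, discard small coefficients, and reconstruct an approximation.

    ```python
    # Compress a load-balance signal by multi-scale wavelet decomposition,
    # hard-thresholding the coefficients, and reconstructing.
    import numpy as np
    import pywt

    load = np.random.rand(1024)                  # stand-in load-balance signal
    coeffs = pywt.wavedec(load, "db4", level=5)  # multi-scale decomposition

    # Zero all but the largest 5% of coefficients (simple hard threshold).
    flat = np.concatenate(coeffs)
    cutoff = np.quantile(np.abs(flat), 0.95)
    coeffs = [np.where(np.abs(c) >= cutoff, c, 0.0) for c in coeffs]

    approx = pywt.waverec(coeffs, "db4")
    print("max reconstruction error:", np.abs(approx - load).max())
    ```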

  18. Rate control scheme for consistent video quality in scalable video codec.

    PubMed

    Seo, Chan-Won; Han, Jong-Ki; Nguyen, Truong Q

    2011-08-01

    Multimedia data delivered to mobile devices over wireless channels or the Internet are complicated by bandwidth fluctuation and the variety of mobile devices. Scalable video coding has been developed as an extension of H.264/AVC to solve this problem. Since a scalable video codec provides various scalabilities for adapting the bitstream to channel conditions and terminal types, it is a useful codec for wired and wireless multimedia communication systems, such as IPTV and streaming services. In such scalable multimedia communication systems, video quality fluctuation significantly degrades visual perception. It is important to use the target bits efficiently in order to maintain consistent video quality or achieve a small distortion variation throughout the whole video sequence. The scheme proposed in this paper provides a useful function to control video quality in applications supporting scalability, whereas conventional schemes have been proposed to control video quality in H.264 and MPEG-4 systems. The proposed algorithm decides the quantization parameter of the enhancement layer to maintain consistent video quality throughout the entire sequence. The video quality of the enhancement layer is controlled based on a closed-form formula which utilizes the residual data and quantization error of the base layer. The simulation results show that the proposed algorithm controls the frame quality of the enhancement layer in a simple operation, where the parameter decision algorithm is applied to each frame. PMID:21411408

  19. Vortex Filaments in Grids for Scalable, Fine Smoke Simulation.

    PubMed

    Meng, Zhang; Weixin, Si; Yinling, Qian; Hanqiu, Sun; Jing, Qin; Heng, Pheng-Ann

    2015-01-01

    Vortex modeling can produce attractive visual effects of dynamic fluids, which are widely applicable to dynamic media, computer games, special effects, and virtual reality systems. However, it is challenging to efficiently simulate intense, finely detailed fluids such as smoke, where the numbers of vortex filaments and smoke particles grow rapidly. The authors propose a novel vortex-filaments-in-grids scheme, in which uniform grids dynamically bridge the vortex filaments and smoke particles, for scalable, fine smoke simulation with macroscopic vortex structures. Using the vortex model, their approach supports a trade-off between simulation speed and the scale of detail. After the full velocity field is computed, external control can easily be exerted on the embedded grid to guide the vortex-based smoke motion. The experimental results demonstrate the efficiency of the proposed scheme for visually plausible smoke simulation with macroscopic vortex structures. PMID:25594961

  20. Scalable analysis tools for sensitivity analysis and UQ (3160) results.

    SciTech Connect

    Karelitz, David B.; Ice, Lisa G.; Thompson, David C.; Bennett, Janine C.; Fabian, Nathan; Scott, W. Alan; Moreland, Kenneth D.

    2009-09-01

    The 9/30/2009 ASC Level 2 Scalable Analysis Tools for Sensitivity Analysis and UQ (Milestone 3160) contains feature recognition capability required by the user community for certain verification and validation tasks focused around sensitivity analysis and uncertainty quantification (UQ). These feature recognition capabilities include crater detection, characterization, and analysis from CTH simulation data; the ability to call fragment and crater identification code from within a CTH simulation; and the ability to output fragments in a geometric format that includes data values over the fragments. The feature recognition capabilities were tested extensively on sample and actual simulations. In addition, a number of stretch criteria were met including the ability to visualize CTH tracer particles and the ability to visualize output from within an S3D simulation.

  1. Scalable Multi-Platform Distribution of Spatial 3D Contents

    NASA Astrophysics Data System (ADS)

    Klimke, J.; Hagedorn, B.; Döllner, J.

    2013-09-01

    Virtual 3D city models provide powerful user interfaces for communication of 2D and 3D geoinformation. Providing high-quality visualization of massive 3D geoinformation in a scalable, fast, and cost-efficient manner is still a challenging task. Especially for mobile and web-based system environments, software and hardware configurations of target systems differ significantly. This makes it hard to provide fast, visually appealing renderings of 3D data across a variety of platforms and devices. Current mobile or web-based solutions for 3D visualization usually require raw 3D scene data, such as triangle meshes together with textures, to be delivered from server to client, which strongly limits the size and complexity of the models they can handle. In this paper, we introduce a new approach for provisioning of massive virtual 3D city models on different platforms, namely web browsers, smartphones, and tablets, by means of an interactive map assembled from artificial oblique image tiles. The key concept is to synthesize such images of a virtual 3D city model with a 3D rendering service in a preprocessing step. This service encapsulates model handling and 3D rendering techniques for high-quality visualization of massive 3D models. By generating image tiles using this service, the 3D rendering process is shifted from the client side, which provides major advantages: (a) the complexity of the 3D city model data is decoupled from data transfer complexity; (b) the implementation of client applications is simplified significantly, as 3D rendering is encapsulated on the server side; (c) 3D city models can easily be deployed for and used by a large number of concurrent users, leading to a high degree of scalability of the overall approach. All core 3D rendering techniques are performed on a dedicated 3D rendering server, and thin-client applications can be compactly implemented for various devices and platforms.

  2. A scalable 2-D parallel sparse solver

    SciTech Connect

    Kothari, S.C.; Mitra, S.

    1995-12-01

    Scalability beyond a small number of processors, typically 32 or fewer, is known to be a problem for existing parallel general sparse (PGS) direct solvers. This paper presents a PGS direct solver for general sparse linear systems on distributed-memory machines. The algorithm is based on the well-known sequential sparse algorithm Y12M. To achieve efficient parallelization, a 2-D scattered decomposition of the sparse matrix is used. The proposed algorithm is more scalable than existing parallel sparse direct solvers. Its scalability is evaluated on a 256-processor nCUBE2s machine using Boeing/Harwell benchmark matrices.
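
    The 2-D scattered (cyclic) decomposition can be stated in a few lines; the sketch below is a toy illustration rather than the paper's implementation, mapping each matrix entry to its owning process on a pr x pc grid.

    ```python
    # 2-D scattered (cyclic) decomposition: entry (i, j) is owned by process
    # (i mod pr, j mod pc), which spreads irregular sparsity evenly across
    # the process grid and avoids load imbalance from clustered nonzeros.
    def owner(i, j, pr=4, pc=4):
        """Process-grid coordinates owning matrix entry (i, j)."""
        return (i % pr, j % pc)

    # Example: which process holds nonzero (10, 7) on a 4x4 grid?
    print(owner(10, 7))   # -> (2, 3)
    ```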

  3. Scalable encryption using alpha rooting

    NASA Astrophysics Data System (ADS)

    Wharton, Eric J.; Panetta, Karen A.; Agaian, Sos S.

    2008-04-01

    Full and partial encryption methods are important for subscription based content providers, such as internet and cable TV pay channels. Providers need to be able to protect their products while at the same time being able to provide demonstrations to attract new customers without giving away the full value of the content. If an algorithm were introduced which could provide any level of full or partial encryption in a fast and cost effective manner, the applications to real-time commercial implementation would be numerous. In this paper, we present a novel application of alpha rooting, using it to achieve fast and straightforward scalable encryption with a single algorithm. We further present use of the measure of enhancement, the Logarithmic AME, to select optimal parameters for the partial encryption. When parameters are selected using the measure, the output image achieves a balance between protecting the important data in the image while still containing a good overall representation of the image. We will show results for this encryption method on a number of images, using histograms to evaluate the effectiveness of the encryption.
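
    A minimal sketch of alpha rooting used as a scalable cipher (illustrative; the paper's parameter selection via the Logarithmic AME is not shown): preserve each Fourier phase and raise each magnitude to the power alpha, so alpha controls the degree of distortion.

    ```python
    # Alpha rooting in the Fourier domain: keep phases, raise magnitudes to
    # the power alpha. Smaller alpha distorts more, giving scalable protection.
    import numpy as np

    def alpha_root(image, alpha):
        spectrum = np.fft.fft2(image)
        magnitude, phase = np.abs(spectrum), np.angle(spectrum)
        rooted = magnitude ** alpha * np.exp(1j * phase)
        return np.real(np.fft.ifft2(rooted))

    img = np.random.rand(256, 256)     # stand-in image
    partial = alpha_root(img, 0.7)     # mild distortion (demonstration quality)
    strong = alpha_root(img, 0.1)      # heavy distortion (full protection)
    ```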

  4. Scalable Equation of State Capability

    SciTech Connect

    Epperly, T W; Fritsch, F N; Norquist, P D; Sanford, L A

    2007-12-03

    The purpose of this techbase project was to investigate the use of parallel array data types to reduce the memory footprint of the Livermore Equation Of State (LEOS) library. Addressing the memory scalability of LEOS is necessary to run large scientific simulations on IBM BG/L and future architectures with low memory per processing core. We considered using normal MPI, one-sided MPI, and Global Arrays to manage the distributed array, and we chose Global Arrays because it was the only communication library that provided the level of asynchronous access required. To reduce the runtime overhead of using a parallel array data structure, a least-recently-used (LRU) caching algorithm was used to provide a local cache of commonly used parts of the parallel array. The approach was initially implemented in an isolated copy of LEOS and was later integrated into the main trunk of the LEOS Subversion repository. The approach was tested using a simple test. Testing indicated that the approach was feasible, and the simple LRU caching had an 86% hit rate.
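
    The LRU idea maps directly onto a memoized fetch; the sketch below is a toy stand-in for the LEOS/Global Arrays implementation, with a synthetic local table in place of remote data.

    ```python
    # LRU caching of table blocks: repeated lookups within a hot working set
    # hit the local cache instead of triggering a (simulated) remote fetch.
    from functools import lru_cache
    import numpy as np

    TABLE = np.random.rand(256, 64)   # stand-in EOS table: 256 blocks of 64 values
    FETCHES = {"count": 0}

    @lru_cache(maxsize=32)
    def fetch_block(block_id):
        """Stand-in for a one-sided fetch of a remote table block."""
        FETCHES["count"] += 1
        return TABLE[block_id]

    for _ in range(10_000):
        fetch_block(int(np.random.randint(0, 8)))   # hot working set of 8 blocks
    print("remote fetches:", FETCHES["count"])      # ~8, despite 10,000 lookups
    ```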

  5. CMP: A memory-constrained scalability metric

    SciTech Connect

    Fienup, M.; Kothari, S.C.

    1995-12-01

    A scalability metric, called constant-memory-per-processor (CMP), is described for parallel architecture-algorithm pairs. Its purpose is to predict the behavior of a specific algorithm on a distributed-memory machine as the number of processors is increased but the memory per processor remains constant. While the CMP scalability metric predicts the asymptotic behavior, we show how to use it to predict expected performance on actual parallel machines, specifically the MasPar MP-1 and MP-2.

  6. Scalable Systems Software Enabling Technology Center

    SciTech Connect

    Michael T. Showerman

    2009-04-06

    NCSA’s role in the SCIDAC Scalable Systems Software (SSS) project was to develop interfaces and communication mechanisms for systems monitoring, and to implement a prototype demonstrating those standards. The Scalable Systems Monitoring component of the SSS suite was designed to provide a large volume of both static and dynamic systems data to the components within the SSS infrastructure as well as external data consumers.

  7. Complexity scalable motion-compensated temporal filtering

    NASA Astrophysics Data System (ADS)

    Clerckx, Tom; Verdicchio, Fabio; Munteanu, Adrian; Andreopoulos, Yiannis; Devos, Harald; Eeckhaut, Hendrik; Christiaens, Mark; Stroobandt, Dirk; Verkest, Diederik; Schelkens, Peter

    2004-11-01

    Computer networks and the internet have taken an important role in modern society. Together with their development, the need for digital video transmission over these networks has grown. To cope with the user demands and limitations of the network, compression of the video material has become an important issue. Additionally, many video-applications require flexibility in terms of scalability and complexity (e.g. HD/SD-TV, video-surveillance). Current ITU-T and ISO/IEC video compression standards (MPEG-x, H.26-x) lack efficient support for these types of scalability. Wavelet-based compression techniques have been proposed to tackle this problem, of which the Motion Compensated Temporal Filtering (MCTF)-based architectures couple state-of-the-art performance with full (quality, resolution, and frame-rate) scalability. However, a significant drawback of these architectures is their high complexity. The computational and memory complexity of both spatial domain (SD) MCTF and in-band (IB) MCTF video codec instantiations are examined in this study. Comparisons in terms of complexity versus performance are presented for both types of codecs. The paper indicates how complexity scalability can be achieved in such video-codecs, and analyses some of the trade-offs between complexity and coding performance. Finally, guidelines on how to implement a fully scalable video-codec that incorporates quality, temporal, resolution and complexity scalability are proposed.

  8. A tool for intraoperative visualization of registration results

    NASA Astrophysics Data System (ADS)

    King, Franklin; Lasso, Andras; Pinter, Csaba; Fichtinger, Gabor

    2014-03-01

    PURPOSE: Validation of image registration algorithms is frequently accomplished by visual inspection of the resulting linear or deformable transformation, due to the lack of ground-truth information. Visualization of transformations produced by image registration algorithms during image-guided interventions allows a clinician to evaluate the accuracy of the resulting transformation. Software packages that visualize transformations exist, but they are not part of a clinically usable software application. We present a tool that visualizes both linear and deformable transformations and is integrated in an open-source software application framework suited for intraoperative use and general evaluation of registration algorithms. METHODS: A choice of six different modes is available for visualization of a transform. Glyph visualization mode uses oriented and scaled glyphs, such as arrows, to represent the displacement field in 3D, whereas glyph slice visualization mode creates arrows that can be seen as a 2D vector field. Grid visualization mode creates deformed grids shown in 3D, whereas grid slice visualization mode creates a series of 2D grids. Block visualization mode creates a deformed bounding box of the warped volume. Finally, contour visualization mode creates isosurfaces and isolines that visualize the magnitude of displacement across a volume. The application 3D Slicer was chosen as the platform for the transform visualizer tool. 3D Slicer is a comprehensive open-source application framework developed for medical image computing and used for intraoperative registration. RESULTS: The transform visualizer tool fulfilled the requirements for quick evaluation of intraoperative image registrations. Visualizations were generated in 3D Slicer with little computation time on realistic datasets. It is freely available as an extension for 3D Slicer. CONCLUSION: A tool for the visualization of displacement fields was created and integrated into 3D Slicer.
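
    The contour mode can be approximated in a few lines; the sketch below (illustrative, not the 3D Slicer extension) draws isolines of displacement magnitude for a synthetic 2D field.

    ```python
    # Isolines of displacement magnitude for a synthetic deformation field,
    # analogous to the "contour" visualization mode described above.
    import numpy as np
    import matplotlib.pyplot as plt

    y, x = np.mgrid[0:128, 0:128]
    dx = 3.0 * np.exp(-((x - 64) ** 2 + (y - 64) ** 2) / 800.0)  # localized motion
    dy = 0.5 * dx
    magnitude = np.hypot(dx, dy)

    cs = plt.contour(magnitude, levels=8)   # isolines of displacement
    plt.clabel(cs, inline=True, fontsize=7)
    plt.title("Displacement magnitude (synthetic field)")
    plt.show()
    ```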

  9. Scalable extensions of HEVC for next generation services

    NASA Astrophysics Data System (ADS)

    Misra, Kiran; Segall, Andrew; Zhao, Jie; Kim, Seung-Hwan

    2013-02-01

    The high efficiency video coding (HEVC) standard being developed by ITU-T VCEG and ISO/IEC MPEG achieves a compression goal of reducing the bitrate by half for the same visual quality when compared with earlier video compression standards such as H.264/AVC. It achieves this goal with the use of several new tools such as quad-tree based partitioning of data, larger block sizes, improved intra prediction, the use of sophisticated prediction of motion information, inclusion of an in-loop sample adaptive offset process etc. This paper describes an approach where the HEVC framework is extended to achieve spatial scalability using a multi-loop approach. The enhancement layer inter-predictive coding efficiency is improved by including within the decoded picture buffer multiple up-sampled versions of the decoded base layer picture. This approach has the advantage of achieving significant coding gains with a simple extension of the base layer tools such as inter-prediction, motion information signaling etc. Coding efficiency of the enhancement layer is further improved using adaptive loop filter and internal bit-depth increment. The performance of the proposed scalable video coding approach is compared to simulcast transmission of video data using high efficiency model version 6.1 (HM-6.1). The bitrate savings are measured using Bjontegaard Delta (BD) rate for a spatial scalability factor of 2 and 1.5 respectively when compared with simulcast anchors. It is observed that the proposed approach provides an average luma BD rate gains of 33.7% and 50.5% respectively.

  10. Performance-scalable volumetric data classification for online industrial inspection

    NASA Astrophysics Data System (ADS)

    Abraham, Aby J.; Sadki, Mustapha; Lea, R. M.

    2002-03-01

    Non-intrusive inspection and non-destructive testing of manufactured objects with complex internal structures typically requires the enhancement, analysis and visualization of high-resolution volumetric data. Given the increasing availability of fast 3D scanning technology (e.g. cone-beam CT), enabling on-line detection and accurate discrimination of components or sub-structures, the inherent complexity of classification algorithms inevitably leads to throughput bottlenecks. Indeed, whereas typical inspection throughput requirements range from 1 to 1000 volumes per hour, depending on density and resolution, current computational capability is one to two orders-of-magnitude less. Accordingly, speeding up classification algorithms requires both reduction of algorithm complexity and acceleration of computer performance. A shape-based classification algorithm, offering algorithm complexity reduction, by using ellipses as generic descriptors of solids-of-revolution, and supporting performance-scalability, by exploiting the inherent parallelism of volumetric data, is presented. A two-stage variant of the classical Hough transform is used for ellipse detection and correlation of the detected ellipses facilitates position-, scale- and orientation-invariant component classification. Performance-scalability is achieved cost-effectively by accelerating a PC host with one or more COTS (Commercial-Off-The-Shelf) PCI multiprocessor cards. Experimental results are reported to demonstrate the feasibility and cost-effectiveness of the data-parallel classification algorithm for on-line industrial inspection applications.
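
    A Hough-based ellipse detection stage can be prototyped with off-the-shelf tools; the sketch below uses scikit-image's hough_ellipse on a synthetic slice and is illustrative rather than the paper's accelerated two-stage variant.

    ```python
    # Detect an elliptical cross-section in a synthetic binary slice; each
    # result row is (accumulator, yc, xc, a, b, orientation).
    import numpy as np
    from skimage.draw import ellipse_perimeter
    from skimage.transform import hough_ellipse

    img = np.zeros((64, 64), dtype=np.uint8)
    rr, cc = ellipse_perimeter(32, 32, 12, 20)   # one stand-in ellipse
    img[rr, cc] = 1

    result = hough_ellipse(img, accuracy=10, threshold=20, min_size=10)
    result.sort(order="accumulator")             # best candidates last
    if len(result):
        best = result[-1]
        print("center=({:.0f},{:.0f}) axes=({:.1f},{:.1f})".format(
            best["yc"], best["xc"], best["a"], best["b"]))
    ```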

  11. Scalable mobile image retrieval by exploring contextual saliency.

    PubMed

    Yang, Xiyu; Qian, Xueming; Xue, Yao

    2015-06-01

    Nowadays, it is very convenient to capture photos by a smart phone. As using, the smart phone is a convenient way to share what users experienced anytime and anywhere through social networks, it is very possible that we capture multiple photos to make sure the content is well photographed. In this paper, an effective scalable mobile image retrieval approach is proposed by exploring contextual salient information for the input query image. Our goal is to explore the high-level semantic information of an image by finding the contextual saliency from multiple relevant photos rather than solely using the input image. Thus, the proposed mobile image retrieval approach first determines the relevant photos according to visual similarity, then mines salient features by exploring contextual saliency from multiple relevant images, and finally determines contributions of salient features for scalable retrieval. Compared with the existing mobile-based image retrieval approaches, our approach requires less bandwidth and has better retrieval performance. We can carry out retrieval with <200-B data, which is <5% of existing approaches. Most importantly, when the bandwidth is limited, we can rank the transmitted features according to their contributions to retrieval. Experimental results show the effectiveness of the proposed approach. PMID:25775488

  12. Scientific visualization of landscapes and landforms

    NASA Astrophysics Data System (ADS)

    Mitasova, Helena; Harmon, Russell S.; Weaver, Katherine J.; Lyons, Nathan J.; Overton, Margery F.

    2012-01-01

    Scientific visualization of geospatial data provides highly effective tools for analysis and communication of information about the land surface and its features, properties, and temporal evolution. Whereas single-surface visualization of landscapes is now routinely used in presentation of Earth surface data, interactive 3D visualization based upon multiple elevation surfaces and cutting planes is gaining recognition as a powerful tool for analyzing landscape structure based on multiple return Light Detection and Ranging (LiDAR) data. This approach also provides valuable insights into land surface changes captured by multi-temporal elevation models. Thus, animations using 2D images and 3D views are becoming essential for communicating results of landscape monitoring and computer simulations of Earth processes. Multiple surfaces and 3D animations are also used to introduce novel concepts for visual analysis of terrain models derived from time-series of LiDAR data using multi-year core and envelope surfaces. Analysis of terrain evolution using voxel models and visualization of contour evolution using isosurfaces has potential for unique insights into geometric properties of rapidly evolving coastal landscapes. In addition to visualization on desktop computers, the coupling of GIS with new types of graphics hardware systems provides opportunities for cutting-edge applications of visualization for geomorphological research. These systems include tangible environments that facilitate intuitive 3D perception, interaction and collaboration. Application of the presented visualization techniques as supporting tools for analyses of landform evolution using airborne LiDAR data and open source geospatial software is illustrated by two case studies from North Carolina, USA.
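
    The core and envelope surfaces mentioned above reduce to per-cell extrema over a DEM time series; a minimal sketch, assuming a stand-in stack of co-registered elevation grids:

    ```python
    # Multi-year core/envelope surfaces: per-cell minimum and maximum
    # elevations bound the zone within which the terrain has varied.
    import numpy as np

    dems = np.random.rand(10, 512, 512)   # stand-in stack: 10 years of elevations
    core = dems.min(axis=0)               # lowest elevation observed per cell
    envelope = dems.max(axis=0)           # highest elevation observed per cell
    dynamics = envelope - core            # thickness of the zone of change
    ```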

  13. Wanted: Scalable Tracers for Diffusion Measurements

    PubMed Central

    2015-01-01

    Scalable tracers are potentially a useful tool to examine diffusion mechanisms and to predict diffusion coefficients, particularly for hindered diffusion in complex, heterogeneous, or crowded systems. Scalable tracers are defined as a series of tracers varying in size but with the same shape, structure, surface chemistry, deformability, and diffusion mechanism. Both chemical homology and constant dynamics are required. In particular, branching must not vary with size, and there must be no transition between ordinary diffusion and reptation. Measurements using scalable tracers yield the mean diffusion coefficient as a function of size alone; measurements using nonscalable tracers yield the variation due to differences in the other properties. Candidate scalable tracers are discussed for two-dimensional (2D) diffusion in membranes and three-dimensional diffusion in aqueous solutions. Correlations to predict the mean diffusion coefficient of globular biomolecules from molecular mass are reviewed briefly. Specific suggestions for the 3D case include the use of synthetic dendrimers or random hyperbranched polymers instead of dextran and the use of core–shell quantum dots. Another useful tool would be a series of scalable tracers varying in deformability alone, prepared by varying the density of crosslinking in a polymer to make say “reinforced Ficoll” or “reinforced hyperbranched polyglycerol.” PMID:25319586
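
    As added context (not part of the abstract), the familiar baseline for predicting the mean diffusion coefficient of a spherical tracer from its size alone is the Stokes-Einstein relation:

    ```latex
    % Stokes-Einstein: diffusion coefficient of a spherical tracer of
    % hydrodynamic radius r in a fluid of viscosity \eta at temperature T.
    D = \frac{k_{B} T}{6 \pi \eta r}
    ```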

  14. Visualizing Higher Order Finite Elements: FY05 Yearly Report.

    SciTech Connect

    Thompson, David; Pebay, Philippe Pierre

    2005-11-01

    This report contains an algorithm for decomposing higher-order finite elements into regions appropriate for isosurfacing and proves the conditions under which the algorithm will terminate. Finite elements are used to create piecewise polynomial approximants to the solution of partial differential equations for which no analytical solution exists. These polynomials represent fields such as pressure, stress, and momentum. In the past, these polynomials have been linear in each parametric coordinate. Each polynomial coefficient must be uniquely determined by a simulation, and these coefficients are called degrees of freedom. When there are not enough degrees of freedom, simulations will typically fail to produce a valid approximation to the solution. Recent work has shown that increasing the number of degrees of freedom by increasing the order of the polynomial approximation (instead of increasing the number of finite elements, each of which has its own set of coefficients) can allow some types of simulations to produce a valid approximation with many fewer degrees of freedom than increasing the number of finite elements alone. However, once the simulation has determined the values of all the coefficients in a higher-order approximant, tools do not exist for visual inspection of the solution. This report focuses on a technique for the visual inspection of higher-order finite element simulation results based on decomposing each finite element into simplicial regions where existing visualization algorithms such as isosurfacing will work. The requirements of the isosurfacing algorithm are enumerated and related to the places where the partial derivatives of the polynomial become zero. The original isosurfacing algorithm is then applied to each of these regions in turn. The authors would like to thank David Day and Louis Romero for their insight into polynomial system solvers and the LDRD Senior Council for the opportunity to pursue this research.

  15. An overview on scalable encryption for wireless multimedia access

    NASA Astrophysics Data System (ADS)

    Yu, Hong Heather

    2003-08-01

    Wireless environments present many challenges for secure multimedia access, especially streaming media. The availability of varying network bandwidths and diverse receiver device processing powers and storage spaces demands scalable and flexible approaches that are capable of adapting to changing network conditions as well as device capabilities. To meet these requirements, scalable and fine granularity scalable (FGS) compression algorithms were proposed and widely adopted to provide scalable access to multimedia with interoperability between different services and flexible support for receivers with different device capabilities. Encryption is one of the most important security tools for protecting content from unauthorized use. If a media data stream is encrypted using non-scalable cryptographic algorithms, decryption at an arbitrary bit rate to provide scalable services can hardly be accomplished. If a medium compressed using scalable coding needs to be protected and non-scalable cryptographic algorithms are used, the advantages of scalable coding may be lost. Therefore scalable encryption techniques are needed to provide scalability, or to preserve the FGS adaptation capability (if the media stream is FGS coded) and enable intermediate processing of encrypted data without unnecessary decryption. In this paper, we give an overview of scalable encryption schemes and present a fine-grained scalable encryption algorithm. One desirable feature is its simplicity and flexibility in supporting scalable multimedia communication and multimedia content access control in wireless environments.

  16. The Scalable Checkpoint/Restart Library

    SciTech Connect

    Moody, A.

    2009-02-23

    The Scalable Checkpoint/Restart (SCR) library provides an interface that codes may use to write out and read in application-level checkpoints in a scalable fashion. In the current implementation, checkpoint files are cached in local storage (hard disk or RAM disk) on the compute nodes. This technique provides scalable aggregate bandwidth and uses storage resources that are fully dedicated to the job. This approach addresses the two common drawbacks of checkpointing a large-scale application to a shared parallel file system, namely, limited bandwidth and file system contention. In fact, on current platforms, SCR scales linearly with the number of compute nodes. It has been benchmarked as high as 720 GB/s on 1094 nodes of Atlas, which is nearly two orders of magnitude faster than the parallel file system.

  17. Area scalable optically induced photorefractive photonic microstructures

    NASA Astrophysics Data System (ADS)

    Jin, Wentao; Xue, Yan Ling; Jiang, Dongdong

    2016-07-01

    A convenient approach to fabricate area scalable two-dimensional photonic microstructures was experimentally demonstrated by multi-face optical wedges. The approach is quite compact and stable without complex optical alignment equipment. Large-area square lattice microstructures are optically induced inside an iron-doped lithium niobate photorefractive crystal. The induced large-area microstructures are analyzed and verified by plane wave guiding, Brillouin-zone spectroscopy, angle-dependent transmission spectrum, and lateral Bragg reflection patterns. The method can be easily extended to generate other more complex area scalable photonic microstructures, such as quasicrystal lattices, by designing the multi-face optical wedge appropriately. The induced area scalable photonic microstructures can be fixed or erased even re-recorded in the photorefractive crystal, which suggests potential applications in micro-nano photonic devices.

  18. The Scalable Checkpoint/Restart Library

    Energy Science and Technology Software Center (ESTSC)

    2009-02-23

    The Scalable Checkpoint/Restart (SCR) library provides an interface that codes may use to write out and read in application-level checkpoints in a scalable fashion. In the current implementation, checkpoint files are cached in local storage (hard disk or RAM disk) on the compute nodes. This technique provides scalable aggregate bandwidth and uses storage resources that are fully dedicated to the job. This approach addresses the two common drawbacks of checkpointing a large-scale application to a shared parallel file system, namely, limited bandwidth and file system contention. In fact, on current platforms, SCR scales linearly with the number of compute nodes. It has been benchmarked as high as 720 GB/s on 1094 nodes of Atlas, which is nearly two orders of magnitude faster than the parallel file system.

  19. Planetary subsurface investigation by 3D visualization model.

    NASA Astrophysics Data System (ADS)

    Seu, R.; Catallo, C.; Tragni, M.; Abbattista, C.; Cinquepalmi, L.

    Subsurface data analysis and visualization represents one of the main aspects of Planetary Observation (e.g. the search for water or geological characterization). The data are collected by subsurface sounding radars carried as instruments on board deep-space missions. These data are generally represented as 2D radargrams in the perspective of the along-track and z axes (perpendicular to the subsurface), but without direct correlation to other data acquisitions or knowledge of the planet. In many cases there is plenty of data from other sensors of the same mission, or of other ones, with high continuity in time and in space, especially around the scientific sites of interest (e.g. candidate landing areas or sites of particular scientific interest). The 2D perspective is good for analysing single acquisitions and performing detailed analysis of the returned echo, but it is of little use for comparing the very large datasets now available for many planets and moons of the solar system. The best way is to approach the analysis through a 3D visualization model generated from the entire stack of data. First of all, this approach allows one to navigate the subsurface in all directions and to analyse different sections and slices, or to navigate the isosurfaces with respect to a value (or interval). The latter allows one to isolate one or more isosurfaces and remove, in the visualization, other data not relevant to the analysis; finally, it helps to identify underground 3D bodies. Another aspect is the need to link on-ground data, such as imaging, to the underground data by geographical position and contextual field of view.

  20. Enhancing Scalability of Sparse Direct Methods

    SciTech Connect

    Li, Xiaoye S.; Demmel, James; Grigori, Laura; Gu, Ming; Xia,Jianlin; Jardin, Steve; Sovinec, Carl; Lee, Lie-Quan

    2007-07-23

    TOPS is providing high-performance, scalable sparse direct solvers, which have had significant impacts on the SciDAC applications, including fusion simulation (CEMM) and accelerator modeling (COMPASS), as well as many other mission-critical applications in DOE and elsewhere. Our recent developments have focused on new techniques to overcome the scalability bottlenecks of direct methods, in both time and memory. These include parallelizing the symbolic analysis phase and developing linear-complexity sparse factorization methods. The new techniques will make sparse direct methods more widely usable in large 3D simulations on highly parallel petascale computers.

  1. Validation of a Scalable Solar Sailcraft

    NASA Technical Reports Server (NTRS)

    Murphy, D. M.

    2006-01-01

    The NASA In-Space Propulsion (ISP) program sponsored intensive solar sail technology and systems design, development, and hardware demonstration activities over the past 3 years. Efforts to validate a scalable solar sail system by functional demonstration in relevant environments, together with test-analysis correlation activities, have recently been completed successfully. A review of the program is given, with descriptions of the design, results of testing, and analytical model validations of component and assembly functional, strength, stiffness, shape, and dynamic behavior. The scaled performance of the validated system is projected to demonstrate applicability to flight demonstration and important NASA roadmap missions.

  2. Scalable k-means statistics with Titan.

    SciTech Connect

    Thompson, David C.; Bennett, Janine C.; Pebay, Philippe Pierre

    2009-11-01

    This report summarizes existing statistical engines in VTK/Titan and presents both the serial and parallel k-means statistics engines. It is a sequel to [PT08], [BPRT09], and [PT09], which studied the parallel descriptive, correlative, multi-correlative, principal component analysis, and contingency engines. The ease of use of the new parallel k-means engine is illustrated by means of C++ code snippets, and algorithm verification is provided. This report justifies the design of the statistics engines with parallel scalability in mind, and provides scalability and speed-up analysis results for the k-means engine.
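
    For orientation, the underlying algorithm can be stated compactly; the sketch below is a minimal serial k-means in Python, not the parallel C++ Titan engine.

    ```python
    # Minimal serial k-means: alternate between assigning points to their
    # nearest center and recomputing each center as the mean of its members.
    import numpy as np

    def kmeans(points, k, iters=100, seed=0):
        rng = np.random.default_rng(seed)
        centers = points[rng.choice(len(points), k, replace=False)]
        for _ in range(iters):
            labels = np.argmin(((points[:, None] - centers) ** 2).sum(-1), axis=1)
            new = np.array([points[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]   # keep empty clusters
                            for j in range(k)])
            if np.allclose(new, centers):
                break
            centers = new
        return centers, labels

    pts = np.random.rand(500, 2)
    centers, labels = kmeans(pts, k=3)
    ```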

  3. Medusa: a scalable MR console using USB.

    PubMed

    Stang, Pascal P; Conolly, Steven M; Santos, Juan M; Pauly, John M; Scott, Greig C

    2012-02-01

    Magnetic resonance imaging (MRI) pulse sequence consoles typically employ closed proprietary hardware, software, and interfaces, making difficult any adaptation for innovative experimental technology. Yet MRI systems research is trending to higher channel count receivers, transmitters, gradient/shims, and unique interfaces for interventional applications. Customized console designs are now feasible for researchers with modern electronic components, but high data rates, synchronization, scalability, and cost present important challenges. Implementing large multichannel MR systems with efficiency and flexibility requires a scalable modular architecture. With Medusa, we propose an open system architecture using the universal serial bus (USB) for scalability, combined with distributed processing and buffering to address the high data rates and strict synchronization required by multichannel MRI. Medusa uses a modular design concept based on digital synthesizer, receiver, and gradient blocks, in conjunction with fast programmable logic for sampling and synchronization. Medusa is a form of synthetic instrument, being reconfigurable for a variety of medical/scientific instrumentation needs. The Medusa distributed architecture, scalability, and data bandwidth limits are presented, and its flexibility is demonstrated in a variety of novel MRI applications. PMID:21954200

  4. Scalable Domain Decomposed Monte Carlo Particle Transport

    SciTech Connect

    O'Brien, Matthew Joseph

    2013-12-05

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.

  5. Scalable microreactors and methods for using same

    DOEpatents

    Lawal, Adeniyi; Qian, Dongying

    2010-03-02

    The present invention provides a scalable microreactor comprising a multilayered reaction block having alternating reaction plates and heat exchanger plates that have a plurality of microchannels; a multilaminated reactor input manifold, a collecting reactor output manifold, a heat exchange input manifold and a heat exchange output manifold. The present invention also provides methods of using the microreactor for multiphase chemical reactions.

  6. Medusa: A Scalable MR Console Using USB

    PubMed Central

    Stang, Pascal P.; Conolly, Steven M.; Santos, Juan M.; Pauly, John M.; Scott, Greig C.

    2012-01-01

    MRI pulse sequence consoles typically employ closed proprietary hardware, software, and interfaces, making difficult any adaptation for innovative experimental technology. Yet MRI systems research is trending to higher channel count receivers, transmitters, gradient/shims, and unique interfaces for interventional applications. Customized console designs are now feasible for researchers with modern electronic components, but high data rates, synchronization, scalability, and cost present important challenges. Implementing large multi-channel MR systems with efficiency and flexibility requires a scalable modular architecture. With Medusa, we propose an open system architecture using the Universal Serial Bus (USB) for scalability, combined with distributed processing and buffering to address the high data rates and strict synchronization required by multi-channel MRI. Medusa uses a modular design concept based on digital synthesizer, receiver, and gradient blocks, in conjunction with fast programmable logic for sampling and synchronization. Medusa is a form of synthetic instrument, being reconfigurable for a variety of medical/scientific instrumentation needs. The Medusa distributed architecture, scalability, and data bandwidth limits are presented, and its flexibility is demonstrated in a variety of novel MRI applications. PMID:21954200

  7. Scalable metadata environments (MDE): artistically impelled immersive environments for large-scale data exploration

    NASA Astrophysics Data System (ADS)

    West, Ruth G.; Margolis, Todd; Prudhomme, Andrew; Schulze, Jürgen P.; Mostafavi, Iman; Lewis, J. P.; Gossmann, Joachim; Singh, Rajvikram

    2014-02-01

    Scalable Metadata Environments (MDEs) are an artistic approach for designing immersive environments for large-scale data exploration in which users interact with data by forming multiscale patterns that they alternately disrupt and reform. Developed and prototyped as part of an art-science research collaboration, we define an MDE as a 4D virtual environment structured by quantitative and qualitative metadata describing multidimensional data collections. Entire data sets (e.g., tens of millions of records) can be visualized and sonified at multiple scales and at different levels of detail so they can be explored interactively in real time within MDEs. They are designed to reflect similarities and differences in the underlying data or metadata such that patterns can be visually/aurally sorted in an exploratory fashion by an observer who is not familiar with the details of the mapping from data to visual, auditory, or dynamic attributes. While many approaches for visual and auditory data mining exist, MDEs are distinct in that they utilize qualitative and quantitative data and metadata to construct multiple interrelated conceptual coordinate systems. These "regions" function as conceptual lattices for scalable auditory and visual representations within virtual environments computationally driven by multi-GPU CUDA-enabled fluid dynamics systems.

  8. A Scalable Distributed Approach to Mobile Robot Vision

    NASA Technical Reports Server (NTRS)

    Kuipers, Benjamin; Browning, Robert L.; Gribble, William S.

    1997-01-01

    This paper documents our progress during the first year of work on our original proposal entitled 'A Scalable Distributed Approach to Mobile Robot Vision'. We are pursuing a strategy for real-time visual identification and tracking of complex objects which does not rely on specialized image-processing hardware. In this system perceptual schemas represent objects as a graph of primitive features. Distributed software agents identify and track these features, using variable-geometry image subwindows of limited size. Active control of imaging parameters and selective processing makes simultaneous real-time tracking of many primitive features tractable. Perceptual schemas operate independently from the tracking of primitive features, so that real-time tracking of a set of image features is not hurt by latency in recognition of the object that those features make up. The architecture allows semantically significant features to be tracked with limited expenditure of computational resources, and allows the visual computation to be distributed across a network of processors. Early experiments are described which demonstrate the usefulness of this formulation, followed by a brief overview of our more recent progress (after the first year).

  9. Visualization of cosmological particle-based datasets.

    PubMed

    Navratil, Paul; Johnson, Jarrett; Bromm, Volker

    2007-01-01

    We describe our visualization process for a particle-based simulation of the formation of the first stars and their impact on cosmic history. The dataset consists of several hundred time-steps of point simulation data, with each time-step containing approximately two million point particles. For each time-step, we interpolate the point data onto a regular grid using a method taken from the radiance estimate of photon mapping. We import the resulting regular grid representation into ParaView, with which we extract isosurfaces across multiple variables. Our images provide insights into the evolution of the early universe, tracing the cosmic transition from an initially homogeneous state to one of increasing complexity. Specifically, our visualizations capture the build-up of regions of ionized gas around the first stars, their evolution, and their complex interactions with the surrounding matter. These observations will guide the upcoming James Webb Space Telescope, the key astronomy mission of the next decade. PMID:17968129
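
    The gridding step can be sketched with a k-nearest-neighbor density estimate in the spirit of photon mapping's radiance estimate; the snippet below is illustrative, with synthetic particles in place of the simulation data.

    ```python
    # Interpolate point particles onto a regular grid: for each grid node,
    # gather the k nearest particles and divide their total mass by the
    # volume of the enclosing sphere, as in a photon-map radiance estimate.
    import numpy as np
    from scipy.spatial import cKDTree

    particles = np.random.rand(200_000, 3)   # stand-in particle positions
    mass = np.full(len(particles), 1.0)      # stand-in particle masses
    tree = cKDTree(particles)

    n = 32                                   # grid resolution per axis
    axes = np.linspace(0.0, 1.0, n)
    nodes = np.stack(np.meshgrid(axes, axes, axes, indexing="ij"), -1).reshape(-1, 3)

    k = 32
    dist, idx = tree.query(nodes, k=k)       # k nearest particles per node
    r = dist[:, -1]                          # radius enclosing the k particles
    density = mass[idx].sum(axis=1) / (4.0 / 3.0 * np.pi * r ** 3)
    density = density.reshape(n, n, n)       # ready for isosurface extraction
    ```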

  10. Scalable video coding in frequency domain

    NASA Astrophysics Data System (ADS)

    Civanlar, Mehmet R.; Puri, Atul

    1992-11-01

    Scalable video coding is important in a number of applications where video needs to be decoded and displayed at a variety of resolution scales. It is more efficient than simulcasting, in which all desired resolution scales are coded totally independently of one another within the constraint of a fixed available bandwidth. In this paper, we focus on scalability using the frequency domain approach. We employ the framework proposed for the ongoing second phase of the Motion Picture Experts Group (MPEG-2) standard to study the performance of one such scheme and investigate improvements aimed at increasing its efficiency. Practical issues related to multiplexing of encoded data of various resolution scales to facilitate decoding are considered. Simulations are performed to investigate the potential of a chosen frequency domain scheme. Various prospects and limitations are also discussed.

  11. A Scalability Model for ECS's Data Server

    NASA Technical Reports Server (NTRS)

    Menasce, Daniel A.; Singhal, Mukesh

    1998-01-01

    This report presents, in four chapters, a model for the scalability analysis of the Data Server subsystem of the Earth Observing System Data and Information System (EOSDIS) Core System (ECS). The model analyzes whether the planned architecture of the Data Server will support an increase in the workload with the possible upgrade and/or addition of processors, storage subsystems, and networks. The report includes a summary of the architecture of ECS's Data Server as well as a high-level description of the Ingest and Retrieval operations as they relate to ECS's Data Server. This description forms the basis for the development of the scalability model of the Data Server and the methodology used to solve it.

  12. Scalable coherent interface: Links to the future

    SciTech Connect

    Gustavson, D.B.; Kristiansen, E.

    1991-11-01

    Now that the Scalable Coherent Interface (SCI) has solved the bandwidth problem, what can we use it for? SCI was developed to support closely coupled multiprocessors and their caches in a distributed shared-memory environment, but its scalability and the efficient generality of its architecture make it work very well over a wide range of applications. It can replace a local area network for connecting workstations on a campus. It can be a powerful I/O channel for a supercomputer. It can be the processor-cache-memory-I/O connection in a highly parallel computer. It can gather data from enormous particle detectors and distribute it among thousands of processors. It can connect a desktop microprocessor to memory chips a few millimeters away, disk drives a few meters away, and servers a few kilometers away.

  13. Scalable Petascale Storage for HEP using Lustre

    NASA Astrophysics Data System (ADS)

    Walker, C. J.; Traynor, D. P.; Martin, A. J.

    2012-12-01

    We have deployed a 1 PB clustered filesystem for High Energy Physics. The use of commodity storage arrays and bonded ethernet interconnects makes the array cost effective, whilst providing high bandwidth to the storage. The filesystem is a POSIX filesystem, presented to the Grid using the StoRM Storage Resource Manager (SRM). We describe an upgrade to 10 Gbit/s networking and present benchmarks demonstrating the performance and scalability of the filesystem.

  14. Scalable descriptive and correlative statistics with Titan.

    SciTech Connect

    Thompson, David C.; Pebay, Philippe Pierre

    2008-12-01

    This report summarizes the existing statistical engines in VTK/Titan and presents the parallel versions thereof which have already been implemented. The ease of use of these parallel engines is illustrated by means of C++ code snippets. Furthermore, this report justifies the design of these engines with parallel scalability in mind; this theoretical property is then verified with test runs that demonstrate optimal parallel speed-up with up to 200 processors.
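
    The scalability claimed here typically rests on summaries that can be merged exactly across processes. A minimal sketch of that idea in Python (Welford/pairwise moment updates; this mirrors the mathematics, not the VTK/Titan C++ API):

    ```python
    from dataclasses import dataclass

    @dataclass
    class Moments:
        """Combinable summary: count, mean, sum of squared deviations."""
        n: int = 0
        mean: float = 0.0
        m2: float = 0.0

        def push(self, x: float) -> None:
            # Welford's online update for a single observation.
            self.n += 1
            d = x - self.mean
            self.mean += d / self.n
            self.m2 += d * (x - self.mean)

        def merge(self, other: "Moments") -> "Moments":
            # Pairwise (parallel) update: exact, so partial results can
            # be combined in any order across processors.
            n = self.n + other.n
            if n == 0:
                return Moments()
            delta = other.mean - self.mean
            mean = self.mean + delta * other.n / n
            m2 = self.m2 + other.m2 + delta * delta * self.n * other.n / n
            return Moments(n, mean, m2)

        @property
        def variance(self) -> float:
            return self.m2 / (self.n - 1) if self.n > 1 else 0.0

    # Two "processors" summarize their partitions, then the results merge.
    a, b = Moments(), Moments()
    for x in [1.0, 2.0, 3.0]: a.push(x)
    for x in [4.0, 5.0]:      b.push(x)
    total = a.merge(b)
    print(total.mean, total.variance)   # 3.0 2.5
    ```

    Because merge() is exact and order-independent, partitions can be reduced in any tree shape, which is what allows near-optimal parallel speed-up.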

  15. Scalable Computer Performance and Analysis (Hierarchical INTegration)

    Energy Science and Technology Software Center (ESTSC)

    1999-09-02

    HINT is a program for measuring the performance of a wide variety of scalable computer systems. It is capable of demonstrating the benefits of using more memory or processing power, and of improving communications within the system. HINT can be used for measurement of an existing system, while the associated program ANALYTIC HINT can be used to explain the measurements or as a design tool for proposed systems.

  16. Pursuing Scalability for hypre's Conceptual Interfaces

    SciTech Connect

    Falgout, R D; Jones, J E; Yang, U M

    2004-07-21

    The software library hypre provides high performance preconditioners and solvers for the solution of large, sparse linear systems on massively parallel computers as well as conceptual interfaces that allow users to access the library in the way they naturally think about their problems. These interfaces include a stencil-based structured interface (Struct); a semi-structured interface (semiStruct), which is appropriate for applications that are mostly structured, e.g. block structured grids, composite grids in structured adaptive mesh refinement applications, and overset grids; a finite element interface (FEI) for unstructured problems, as well as a conventional linear-algebraic interface (IJ). It is extremely important to provide an efficient, scalable implementation of these interfaces in order to support the scalable solvers of the library, especially when using tens of thousands of processors. This paper describes the data structures, parallel implementation and resulting performance of the IJ, Struct and semiStruct interfaces. It investigates their scalability, presents successes as well as pitfalls of some of the approaches and suggests ways of dealing with them.
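
    For readers unfamiliar with the Struct idea, a stencil-based structured problem is fully described by a box of grid indices plus one stencil that applies at every point. The Python sketch below illustrates that abstraction only; it is not hypre's C API:

    ```python
    import numpy as np

    # Illustration only: offsets and coefficients defining the standard
    # 5-point Laplacian stencil, shared by every point of the box.
    stencil = {(0, 0): 4.0, (-1, 0): -1.0, (1, 0): -1.0,
               (0, -1): -1.0, (0, 1): -1.0}

    def apply_stencil(u, stencil):
        """Apply the stencil to the interior of a 2D grid; the zero
        frame plays the role of a homogeneous Dirichlet boundary."""
        v = np.zeros_like(u)
        n, m = u.shape
        for (di, dj), c in stencil.items():
            v[1:-1, 1:-1] += c * u[1 + di:n - 1 + di, 1 + dj:m - 1 + dj]
        return v

    u = np.zeros((6, 6))
    u[3, 3] = 1.0
    print(apply_stencil(u, stencil))   # 4 at the center, -1 at neighbors
    ```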

  17. ParaText: scalable solutions for processing and searching very large document collections: final LDRD report.

    SciTech Connect

    Crossno, Patricia Joyce; Dunlavy, Daniel M.; Stanton, Eric T.; Shead, Timothy M.

    2010-09-01

    This report is a summary of the accomplishments of the 'Scalable Solutions for Processing and Searching Very Large Document Collections' LDRD, which ran from FY08 through FY10. Our goal was to investigate scalable text analysis; specifically, methods for information retrieval and visualization that could scale to extremely large document collections. Towards that end, we designed, implemented, and demonstrated a scalable framework for text analysis - ParaText - as a major project deliverable. Further, we demonstrated the benefits of using visual analysis in text analysis algorithm development, improved performance of heterogeneous ensemble models in data classification problems, and the advantages of information theoretic methods in user analysis and interpretation in cross language information retrieval. The project involved 5 members of the technical staff and 3 summer interns (including one who worked two summers). It resulted in a total of 14 publications, 3 new software libraries (2 open source and 1 internal to Sandia), several new end-user software applications, and over 20 presentations. Several follow-on projects have already begun or will start in FY11, with additional projects currently in the proposal stage.

  18. Design and implementation of scalable tape archiver

    NASA Technical Reports Server (NTRS)

    Nemoto, Toshihiro; Kitsuregawa, Masaru; Takagi, Mikio

    1996-01-01

    In order to reduce costs, computer manufacturers try to use commodity parts as much as possible. Mainframes using proprietary processors are being replaced by high performance RISC microprocessor-based workstations, which are further being replaced by the commodity microprocessors used in personal computers. Highly reliable disks for mainframes are also being replaced by disk arrays, which are complexes of disk drives. In this paper we try to clarify the feasibility of a large scale tertiary storage system composed of 8-mm tape archivers utilizing robotics. In the near future, the 8-mm tape archiver will be widely used and become a commodity part, since the recent rapid growth of multimedia applications requires much larger storage than disk drives can provide. We designed a scalable tape archiver which connects as many 8-mm tape archivers (element archivers) as possible. In the scalable archiver, robotics can exchange a cassette tape between two adjacent element archivers mechanically. Thus, we can build a large scalable archiver inexpensively. In addition, a sophisticated migration mechanism distributes frequently accessed tapes (hot tapes) evenly among all of the element archivers, which improves the throughput considerably. Even with the failures of some tape drives, the system dynamically redistributes hot tapes to the other element archivers which have live tape drives. Several kinds of specially tailored huge archivers are on the market; however, the 8-mm tape scalable archiver could replace them. To maintain high performance in spite of high access locality when a large number of archivers are attached to the scalable archiver, it is necessary to scatter frequently accessed cassettes among the element archivers and to use the tape drives efficiently. For this purpose, we introduce two cassette migration algorithms, foreground migration and background migration. Background migration transfers cassettes between element archivers to redistribute frequently accessed cassettes.
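
    As a toy illustration of the background-migration idea (assumed data structures, not the authors' implementation): cassettes whose access counts exceed a threshold are spread round-robin across element archivers that still have live drives:

    ```python
    def background_migration(archivers, hot_threshold):
        """Spread 'hot' cassettes (access count above a threshold) evenly
        across element archivers with live tape drives.  `archivers` maps
        an archiver id to {"live_drives": int,
                           "cassettes": {cassette_id: access_count}}."""
        live = [a for a, s in archivers.items() if s["live_drives"] > 0]
        hot = [(c, n, a) for a, s in archivers.items()
               for c, n in s["cassettes"].items() if n >= hot_threshold]
        hot.sort(key=lambda t: -t[1])           # hottest cassettes first
        moves = []
        for i, (c, n, src) in enumerate(hot):
            dst = live[i % len(live)]           # round-robin over live archivers
            if dst != src:
                archivers[dst]["cassettes"][c] = \
                    archivers[src]["cassettes"].pop(c)
                moves.append((c, src, dst))
        return moves

    archivers = {
        "A": {"live_drives": 2, "cassettes": {"t1": 90, "t2": 80, "t3": 5}},
        "B": {"live_drives": 1, "cassettes": {"t4": 70}},
        "C": {"live_drives": 0, "cassettes": {"t5": 95}},  # drives failed
    }
    print(background_migration(archivers, hot_threshold=50))
    ```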

  19. Integration of an intelligent systems behavior simulator and a scalable soldier-machine interface

    NASA Astrophysics Data System (ADS)

    Johnson, Tony; Manteuffel, Chris; Brewster, Benjamin; Tierney, Terry

    2007-04-01

    As the Army's Future Combat Systems (FCS) introduce emerging technologies and new force structures to the battlefield, soldiers will increasingly face new challenges in workload management. The next generation warfighter will be responsible for effectively managing robotic assets in addition to performing other missions. Studies of future battlefield operational scenarios involving the use of automation, including the specification of existing and proposed technologies, will provide significant insight into potential problem areas regarding soldier workload. The US Army Tank Automotive Research, Development, and Engineering Center (TARDEC) is currently executing an Army technology objective program to analyze and evaluate the effect of automated technologies and their associated control devices with respect to soldier workload. The Human-Robotic Interface (HRI) Intelligent Systems Behavior Simulator (ISBS) is a human performance measurement simulation system that allows modelers to develop constructive simulations of military scenarios with various deployments of interface technologies in order to evaluate operator effectiveness. One such interface is TARDEC's Scalable Soldier-Machine Interface (SMI). The scalable SMI provides a configurable machine interface application that is capable of adapting to several hardware platforms by recognizing the physical space limitations of the display device. This paper describes the integration of the ISBS and Scalable SMI applications, which will ultimately benefit both systems. The ISBS will be able to use the Scalable SMI to visualize the behaviors of virtual soldiers performing HRI tasks, such as route planning, and the scalable SMI will benefit from stimuli provided by the ISBS simulation environment. The paper describes the background of each system and details of the system integration approach.

  1. Improved volume rendering for the visualization of living cells examined with confocal microscopy

    NASA Astrophysics Data System (ADS)

    Enloe, L. Charity; Griffing, Lawrence R.

    2000-02-01

    This research applies recent advances in 3D isosurface reconstruction to images of test spheres and plant cells growing in suspension culture. Isosurfaces that represent object boundaries are constructed with a Marching Cubes algorithm applied to simple data sets, i.e., fluorescent test beads, and complex data sets, i.e., fluorescent plant cells, acquired with a Zeiss Confocal Laser Scanning Microscope (LSM). The Marching Cubes algorithm treats each pixel or voxel of the image as a separate entity when performing computations. To test the spatial accuracy of the reconstruction, control data representing the volume of a 25 micrometer test sphere was obtained with the LSM. This volume was then judged on the basis of uniformity and smoothness. Using polygon decimation and smoothing algorithms available through the Visualization Toolkit, 'voxellated' test spheres and cells were smoothed using several different smoothing algorithms after unessential polygons were eliminated. With these improvements, the shape of subcellular organelles could be modeled at various levels of accuracy. However, in order to accurately reconstruct these complex structures of interest to us, the subcellular organelles of the endosomal system or the endoplasmic reticulum of plant cells, measurements of the accuracy of connectedness of structures need to be developed.

  2. Visual Learning.

    ERIC Educational Resources Information Center

    Kirrane, Diane E.

    1992-01-01

    An increasingly visual culture is affecting work and training. Achievement of visual literacy means acquiring competence in critical analysis of visual images and in communicating through visual media. (SK)

  3. Visual field

    MedlinePlus

    Perimetry; Tangent screen exam; Automated perimetry exam; Goldmann visual field exam; Humphrey visual field exam ... Confrontation visual field exam : This is a quick and basic check of the visual field. The health care provider sits directly in front ...

  4. Visual Analytics for Power Grid Contingency Analysis

    SciTech Connect

    Wong, Pak C.; Huang, Zhenyu; Chen, Yousu; Mackey, Patrick S.; Jin, Shuangshuang

    2014-01-20

    Contingency analysis is the process of employing different measures to model scenarios, analyze them, and then derive the best response to remove the threats. This application paper focuses on a class of contingency analysis problems found in the power grid management system. A power grid is a geographically distributed interconnected transmission network that transmits and delivers electricity from generators to end users. The power grid contingency analysis problem is increasingly important because of both the growing size of the underlying raw data that need to be analyzed and the urgency to deliver working solutions in an aggressive timeframe. Failure to do so may bring significant financial, economic, and security impacts to all parties involved and the society at large. The paper presents a scalable visual analytics pipeline that transforms about 100 million contingency scenarios to a manageable size and form for grid operators to examine different scenarios and come up with preventive or mitigation strategies to address the problems in a predictive and timely manner. Great attention is given to the computational scalability, information scalability, visual scalability, and display scalability issues surrounding the data analytics pipeline. Most of the large-scale computation requirements of our work are conducted on a Cray XMT multi-threaded parallel computer. The paper demonstrates a number of examples using western North American power grid models and data.

  5. Scalable resource management in high performance computers.

    SciTech Connect

    Frachtenberg, E.; Petrini, F.; Fernandez Peinador, J.; Coll, S.

    2002-01-01

    Clusters of workstations have emerged as an important platform for building cost-effective, scalable and highly-available computers. Although many hardware solutions are available today, the largest challenge in making large-scale clusters usable lies in the system software. In this paper we present STORM, a resource management tool designed to provide scalability, low overhead and the flexibility necessary to efficiently support and analyze a wide range of job scheduling algorithms. STORM achieves these feats by closely integrating the management daemons with the low-level features that are common in state-of-the-art high-performance system area networks. The architecture of STORM is based on three main technical innovations. First, a sizable part of the scheduler runs in the thread processor located on the network interface. Second, we use hardware collectives that are highly scalable both for implementing control heartbeats and to distribute the binary of a parallel job in near-constant time, irrespective of job and machine sizes. Third, we use an I/O bypass protocol that allows fast data movements from the file system to the communication buffers in the network interface and vice versa. The experimental results show that STORM can launch a job with a binary of 12MB on a 64-processor/32-node cluster in less than 0.25 sec on an empty network, in less than 0.45 sec when all the processors are busy computing other jobs, and in less than 0.65 sec when the network is flooded with background traffic. This paper provides experimental and analytical evidence that these results scale to a much larger number of nodes. To the best of our knowledge, STORM is at least two orders of magnitude faster than existing production schedulers in launching jobs, performing resource management tasks and gang scheduling.

  6. First experience with the scalable coherent interface

    SciTech Connect

    Mueller, H. (ECP Division); RD24 Collaboration

    1994-02-01

    The research project RD24 is studying applications of the Scalable Coherent Interface (IEEE-1596) standard for the Large Hadron Collider (LHC). First SCI node chips from Dolphin were used to demonstrate the use and functioning of SCI's packet protocols and to measure data rates. The authors present results from a first, two-node SCI ringlet at CERN, based on an R3000 RISC processor node and a DMA node on an MC68040 processor bus. A diagnostic link analyzer monitors the SCI packet protocols up to full link bandwidth. In its second phase, RD24 will build a first implementation of a multi-ringlet SCI data merger.

  7. Scalable analog wavefront sensor with subpixel resolution

    NASA Astrophysics Data System (ADS)

    Wilcox, Michael

    2006-06-01

    Standard Shack-Hartmann wavefront sensors use a CCD element to sample the position and distortion of a target or guide star. Digital sampling of the element and transfer to a memory space for subsequent computation add significant temporal delay, thus limiting the spatial frequency and scalability of the system as a wavefront sensor. A new approach to sampling uses information processing principles found in an insect compound eye. Analog circuitry eliminates digital sampling and extends the useful range of the system to control a deformable mirror and make a faster, more capable wavefront sensor.

  8. Overcoming Scalability Challenges for Tool Daemon Launching

    SciTech Connect

    Ahn, D H; Arnold, D C; de Supinski, B R; Lee, G L; Miller, B P; Schulz, M

    2008-02-15

    Many tools that target parallel and distributed environments must co-locate a set of daemons with the distributed processes of the target application. However, efficient and portable deployment of these daemons on large scale systems is an unsolved problem. We close this gap with LaunchMON, a scalable, robust, portable, secure, and general purpose infrastructure for launching tool daemons. Its API allows tool builders to identify all processes of a target job, launch daemons on the relevant nodes and control daemon interaction. Our results show that LaunchMON scales to very large daemon counts and substantially enhances performance over existing ad hoc mechanisms.

  9. SPRNG Scalable Parallel Random Number Generator Library

    Energy Science and Technology Software Center (ESTSC)

    2010-03-16

    This revision corrects some errors in SPRNG 1. Users of newer SPRNG versions can obtain the corrected files and build their version with them. This version also improves the scalability of some of the application-based tests in the SPRNG test suite. It also includes an interface to a parallel Mersenne Twister, so that if users install the Mersenne Twister, they can test this generator with the SPRNG test suite and also use some SPRNG features with that generator.

  10. Scalable Unix tools on parallel processors

    SciTech Connect

    Gropp, W.; Lusk, E.

    1994-12-31

    The introduction of parallel processors that run a separate copy of Unix on each processor has introduced new problems in managing the user's environment. This paper discusses some generalizations of common Unix commands for managing files (e.g., ls) and processes (e.g., ps) that are convenient and scalable. These basic tools, just like their Unix counterparts, are text-based. We also discuss a way to use these with a graphical user interface (GUI). Some notes on the implementation are provided. Prototypes of these commands are publicly available.
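
    The paper's tools run on the parallel machine itself; as a loose modern approximation of the same fan-out/merge pattern, the sketch below runs ps across nodes concurrently (password-less ssh and the node names are assumptions):

    ```python
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    def parallel_ps(nodes, pattern, fanout=32):
        """Run `ps` on many nodes concurrently and merge the text output,
        roughly what a scalable counterpart of `ps` must do."""
        def one(node):
            out = subprocess.run(
                ["ssh", node, "ps", "-e", "-o", "pid,user,comm"],
                capture_output=True, text=True, timeout=30)
            lines = out.stdout.splitlines()[1:]      # drop the header row
            return [f"{node}: {l}" for l in lines if pattern in l]

        with ThreadPoolExecutor(max_workers=fanout) as pool:
            for lines in pool.map(one, nodes):
                for line in lines:
                    print(line)

    parallel_ps(["node01", "node02", "node03"], pattern="python")
    ```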

  11. Dynamic superhydrophobic behavior in scalable random textured polymeric surfaces

    NASA Astrophysics Data System (ADS)

    Moreira, David; Park, Sung-hoon; Lee, Sangeui; Verma, Neil; Bandaru, Prabhakar R.

    2016-03-01

    Superhydrophobic (SH) surfaces, created from hydrophobic materials with micro- or nano-roughness, trap air pockets in the interstices of the roughness, leading, in fluid flow conditions, to shear-free regions with finite interfacial fluid velocity and reduced resistance to flow. Significant attention has been given to SH conditions on ordered, periodic surfaces. However, in practical terms, random surfaces are more applicable due to their relative ease of fabrication. We investigate SH behavior on a novel durable polymeric rough surface, created through a scalable roll-coating process with varying micro-scale roughness, via velocity and pressure drop measurements. We introduce a new method to construct the velocity profile over SH surfaces with significant roughness in microchannels. Slip length was measured as a function of differing roughness and interstitial air conditions, with roughness and air fraction parameters obtained through direct visualization. The slip length was matched to scaling laws with good agreement. Roughness at high air fractions led to a reduced pressure drop and higher velocities, demonstrating the effectiveness of the considered surface in terms of reduced resistance to flow. We conclude that the observed air fraction under flow conditions is the primary factor determining the response in fluid flow. Such behavior correlated well with the hydrophobic or superhydrophobic response, indicating significant potential for practical use in enhancing fluid flow efficiency.

  12. Designing Scalable PGAS Communication Subsystems on Cray Gemini Interconnect

    SciTech Connect

    Vishnu, Abhinav; Daily, Jeffrey A.; Palmer, Bruce J.

    2012-12-26

    The Cray Gemini Interconnect has been recently introduced as a next generation network architecture for building multi-petaflop supercomputers. Cray XE6 systems including LANL Cielo, NERSC Hopper, ORNL Titan and the proposed NCSA BlueWaters leverage the Gemini Interconnect as their primary interconnection network. At the same time, programming models such as the Message Passing Interface (MPI) and Partitioned Global Address Space (PGAS) models such as Unified Parallel C (UPC) and Co-Array Fortran (CAF) have become available on these systems. Global Arrays is a popular PGAS model used in a variety of application domains including hydrodynamics, chemistry and visualization. Global Arrays uses the Aggregate Remote Memory Copy Interface (ARMCI) as the communication runtime system for Remote Memory Access communication. This paper presents a design, implementation and performance evaluation of scalable and high performance communication subsystems on the Cray Gemini Interconnect using ARMCI. The design space is explored, and the time-space complexities of communication protocols for one-sided communication primitives such as contiguous and uniformly non-contiguous datatypes, atomic memory operations (AMOs) and memory synchronization are presented. An implementation of the proposed design (referred to as ARMCI-Gemini) demonstrates its efficacy on communication primitives, application kernels such as LU decomposition and full applications such as a Smooth Particle Hydrodynamics (SPH) application.

  13. Computational scalability of large size image dissemination

    NASA Astrophysics Data System (ADS)

    Kooper, Rob; Bajcsy, Peter

    2011-01-01

    We have investigated the computational scalability of image pyramid building needed for dissemination of very large image data. The sources of large images include high resolution microscopes and telescopes, remote sensing and airborne imaging, and high resolution scanners. The term 'large' is understood from a user perspective, meaning either larger than a display size or larger than the memory/disk available to hold the image data. The application drivers for our work are digitization projects such as the Lincoln Papers project (each image scan is about 100-150MB, or about 5000x8000 pixels, with the total number to be around 200,000) and the UIUC library scanning project for historical maps from the 17th and 18th century (a smaller number of larger images). The goal of our work is to understand the computational scalability of web-based dissemination using image pyramids for these large image scans, as well as the preservation aspects of the data. We report our computational benchmarks for (a) building image pyramids to be disseminated using the Microsoft Seadragon library, (b) a computation execution approach using hyper-threading to generate image pyramids and to utilize the underlying hardware, and (c) an image pyramid preservation approach using various hard drive configurations of Redundant Array of Independent Disks (RAID) drives for input/output operations. The benchmarks are obtained with a map (334.61 MB, JPEG format, 17591x15014 pixels). The discussion combines the speed and preservation objectives.
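
    For reference, the core of pyramid building is just repeated halving. A minimal sketch with Pillow (the file name is a placeholder; tile slicing and the Seadragon XML descriptor are omitted):

    ```python
    from PIL import Image

    def build_pyramid(path, tile=256):
        """Build a power-of-two image pyramid by repeated halving, the
        level structure consumed by Seadragon-style viewers."""
        im = Image.open(path)
        levels = [im]
        while max(im.size) > tile:
            im = im.resize((max(1, im.width // 2), max(1, im.height // 2)),
                           Image.LANCZOS)
            levels.append(im)
        return levels  # levels[0] is full resolution; levels[-1] fits a tile

    # Hypothetical scan name for illustration.
    for i, im in enumerate(build_pyramid("map_scan.jpg")):
        print(f"level {i}: {im.width}x{im.height}")
    ```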

  14. An Open Infrastructure for Scalable, Reconfigurable Analysis

    SciTech Connect

    de Supinski, B R; Fowler, R; Gamblin, T; Mueller, F; Ratn, P; Schulz, M

    2008-05-15

    Petascale systems will have hundreds of thousands of processor cores so their applications must be massively parallel. Effective use of petascale systems will require efficient interprocess communication through memory hierarchies and complex network topologies. Tools to collect and analyze detailed data about this communication would facilitate its optimization. However, several factors complicate tool design. First, large-scale runs on petascale systems will be a precious commodity, so scalable tools must have almost no overhead. Second, the volume of performance data from petascale runs could easily overwhelm hand analysis and, thus, tools must collect only data that is relevant to diagnosing performance problems. Analysis must be done in-situ, when available processing power is proportional to the data. We describe a tool framework that overcomes these complications. Our approach allows application developers to combine existing techniques for measurement, analysis, and data aggregation to develop application-specific tools quickly. Dynamic configuration enables application developers to select exactly the measurements needed and generic components support scalable aggregation and analysis of this data with little additional effort.

  15. A scalable and operationally simple radical trifluoromethylation

    PubMed Central

    Beatty, Joel W.; Douglas, James J.; Cole, Kevin P.; Stephenson, Corey R. J.

    2015-01-01

    The large number of reagents that have been developed for the synthesis of trifluoromethylated compounds is a testament to the importance of the CF3 group as well as the associated synthetic challenge. Current state-of-the-art reagents for appending the CF3 functionality directly are highly effective; however, their use on preparative scale has minimal precedent because they require multistep synthesis for their preparation, and/or are prohibitively expensive for large-scale application. For a scalable trifluoromethylation methodology, trifluoroacetic acid and its anhydride represent an attractive solution in terms of cost and availability; however, because of the exceedingly high oxidation potential of trifluoroacetate, previous endeavours to use this material as a CF3 source have required the use of highly forcing conditions. Here we report a strategy for the use of trifluoroacetic anhydride for a scalable and operationally simple trifluoromethylation reaction using pyridine N-oxide and photoredox catalysis to effect a facile decarboxylation to the CF3 radical. PMID:26258541

  16. Using the scalable nonlinear equations solvers package

    SciTech Connect

    Gropp, W.D.; McInnes, L.C.; Smith, B.F.

    1995-02-01

    SNES (Scalable Nonlinear Equations Solvers) is a software package for the numerical solution of large-scale systems of nonlinear equations on both uniprocessors and parallel architectures. SNES also contains a component for the solution of unconstrained minimization problems, called SUMS (Scalable Unconstrained Minimization Solvers). Newton-like methods, which are known for their efficiency and robustness, constitute the core of the package. As part of the multilevel PETSc library, SNES incorporates many features and options from other parts of PETSc. In keeping with the spirit of the PETSc library, the nonlinear solution routines are data-structure-neutral, making them flexible and easily extensible. This user's guide contains a detailed description of uniprocessor usage of SNES, with some added comments regarding multiprocessor usage. At this time the parallel version is undergoing refinement and extension, as we work toward a common interface for the uniprocessor and parallel cases. Thus, forthcoming versions of the software will contain additional features, and changes to the parallel interface may occur at any time. The new parallel version will employ the MPI (Message Passing Interface) standard for interprocessor communication. Since most of these details will be hidden, users will need to perform only minimal message-passing programming.
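
    As background on the "Newton-like" core, here is a minimal damped Newton iteration for F(x) = 0; it illustrates the method only and is not the SNES interface:

    ```python
    import numpy as np

    def newton(F, J, x0, tol=1e-10, max_it=50):
        """Damped Newton iteration: solve J(x) dx = -F(x), then backtrack
        on the step length until the residual norm decreases."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_it):
            f = F(x)
            if np.linalg.norm(f) < tol:
                return x
            dx = np.linalg.solve(J(x), -f)   # Newton step
            t = 1.0
            while (np.linalg.norm(F(x + t * dx)) >= np.linalg.norm(f)
                   and t > 1e-4):
                t /= 2                       # simple backtracking line search
            x = x + t * dx
        raise RuntimeError("Newton did not converge")

    # Solve x0^2 + x1^2 = 1 together with x0 - x1 = 0.
    F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
    J = lambda x: np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])
    print(newton(F, J, [1.0, 0.2]))          # ~ [0.7071, 0.7071]
    ```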

  17. Unequal erasure protection technique for scalable multistreams.

    PubMed

    Dumitrescu, Sorina; Rivers, Geoffrey; Shirani, Shahram

    2010-02-01

    This paper presents a novel unequal erasure protection (UEP) strategy for the transmission of scalable data, formed by interleaving independently decodable and scalable streams, over packet erasure networks. The technique, termed multistream UEP (M-UEP), differs from the traditional UEP strategy by: 1) placing separate streams in separate packets to establish independence and 2) using permuted systematic Reed-Solomon codes to enhance the distribution of message symbols amongst the packets. M-UEP improves upon UEP by ensuring that all received source symbols are decoded. The R-D optimal redundancy allocation problem for M-UEP is formulated and its globally optimal solution is shown to have a time complexity of O(2^N N(L+1)^(N+1)), where N is the number of packets and L is the packet length. To address the high complexity of the globally optimal solution, an efficient suboptimal algorithm is proposed which runs in O(N^2 L^2) time. The proposed M-UEP algorithm is applied on SPIHT coded images in conjunction with an appropriate grouping of wavelet coefficients into streams. The experimental results reveal that M-UEP consistently outperforms the traditional UEP, reaching peak improvements of 0.6 dB. Moreover, our tests show that M-UEP is more robust than UEP in adverse channel conditions. PMID:19783503

  18. Scalable hardbody and plume optical signatures

    NASA Astrophysics Data System (ADS)

    Crow, Dennis R.; Hawes, Fred; Braunstein, Matthew; Coker, Charles F.; Smith, Thomas, Jr.

    2004-08-01

    The Fast Line-of-sight Imagery for Target and Exhaust Signatures (FLITES) is a High Performance Computing (HPC-CHSSI) and Missile Defense Agency (MDA) funded effort that provides a scalable program to compute highly resolved temporal, spatial, and spectral hardbody and plume optical signatures. Distributed processing capabilities are included to allow complex, high fidelity solutions to be generated quickly. The distributed processing logic includes automated load balancing algorithms to facilitate scalability using large numbers of processors. To enhance exhaust plume optical signature capabilities, FLITES employs two different radiance transport algorithms. The first algorithm is the traditional Curtis-Godson bandmodel approach and is provided to support comparisons to historical results and high-frame-rate production requirements. The second algorithm is the Quasi Bandmodel Line-by-line (QBL) approach, which uses randomly placed 'cloned' spectral lines to yield highly resolved radiation spectra for increased accuracy while maintaining tractable runtimes. This capability will provide a significant advancement over the traditional SPURC/SIRRM radiance transport methodology.

  1. Towards Scalable Graph Computation on Mobile Devices

    PubMed Central

    Chen, Yiqi; Lin, Zhiyuan; Pienta, Robert; Kahng, Minsuk; Chau, Duen Horng

    2015-01-01

    Mobile devices have become increasingly central to our everyday activities, due to their portability, multi-touch capabilities, and ever-improving computational power. Such attractive features have spurred research interest in leveraging mobile devices for computation. We explore a novel approach that aims to use a single mobile device to perform scalable graph computation on large graphs that do not fit in the device's limited main memory, opening up the possibility of performing on-device analysis of large datasets, without relying on the cloud. Based on the familiar memory mapping capability provided by today's mobile operating systems, our approach to scale up computation is powerful and intentionally kept simple to maximize its applicability across the iOS and Android platforms. Our experiments demonstrate that an iPad mini can perform fast computation on large real graphs with as many as 272 million edges (Google+ social graph), at a speed that is only a few times slower than a 13″ Macbook Pro. Through creating a real-world iOS app with this technique, we demonstrate the strong potential of our approach for scalable graph computation on a single mobile device. PMID:25859564
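
    To make the memory-mapping pattern concrete, here is a desktop Python sketch of the same idea: the edge list stays in a file, the OS pages it in on demand, and the computation streams over it in chunks (file name and binary layout are assumptions):

    ```python
    import numpy as np

    def out_degrees(edge_file, n_nodes, chunk=1_000_000):
        """Compute out-degrees of a graph stored as a flat binary edge
        list (int32 pairs src,dst) without loading it into RAM; the OS
        pages the mapped file in and out as needed.  Assumes all node
        ids are < n_nodes."""
        edges = np.memmap(edge_file, dtype=np.int32, mode="r").reshape(-1, 2)
        deg = np.zeros(n_nodes, dtype=np.int64)
        for start in range(0, len(edges), chunk):
            src = edges[start:start + chunk, 0]
            deg += np.bincount(src, minlength=n_nodes)
        return deg

    # Hypothetical file name for illustration.
    deg = out_degrees("edges.bin", n_nodes=1_000_000)
    print(deg.max())
    ```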

  2. Scalable enantioselective total synthesis of taxanes

    NASA Astrophysics Data System (ADS)

    Mendoza, Abraham; Ishihara, Yoshihiro; Baran, Phil S.

    2012-01-01

    Taxanes form a large family of terpenes comprising over 350 members, the most famous of which is Taxol (paclitaxel), a billion-dollar anticancer drug. Here, we describe the first practical and scalable synthetic entry to these natural products via a concise preparation of (+)-taxa-4(5),11(12)-dien-2-one, which has a suitable functional handle with which to access more oxidized members of its family. This route enables a gram-scale preparation of the ‘parent’ taxane—taxadiene—which is the largest quantity of this naturally occurring terpene ever isolated or prepared in pure form. The characteristic 6-8-6 tricyclic system of the taxane family, containing a bridgehead alkene, is forged via a vicinal difunctionalization/Diels-Alder strategy. Asymmetry is introduced by means of an enantioselective conjugate addition that forms an all-carbon quaternary centre, from which all other stereocentres are fixed through substrate control. This study lays a critical foundation for a planned access to minimally oxidized taxane analogues and a scalable laboratory preparation of Taxol itself.

  3. Scalable Quantum Computing Over the Rainbow

    NASA Astrophysics Data System (ADS)

    Pfister, Olivier; Menicucci, Nicolas C.; Flammia, Steven T.

    2011-03-01

    The physical implementation of nontrivial quantum computing is an experimental challenge due to decoherence and the need for scalability. Recently we proposed a novel theoretical scheme for realizing a scalable quantum register of very large size, entangled in a cluster state, in the optical frequency comb (OFC) defined by the eigenmodes of a single optical parametric oscillator (OPO). The classical OFC is well known as implemented by the femtosecond, carrier-envelope-phase- and mode-locked lasers which have redefined frequency metrology in recent years. The quantum OFC is a set of harmonic oscillators, or Qmodes, whose amplitude and phase quadratures are continuous variables, the manipulation of which is a mature field for one or two Qmodes. We have shown that the nonlinear optical medium of a single OPO can be engineered, in a sophisticated but already demonstrated manner, so as to entangle in constant time the OPO's OFC into a finitely squeezed, Gaussian cluster state suitable for universal quantum computing over continuous variables. Here we summarize our theoretical result and survey the ongoing experimental efforts in this direction.

  4. Scalable Feature Matching by Dual Cascaded Scalar Quantization for Image Retrieval.

    PubMed

    Zhou, Wengang; Yang, Ming; Wang, Xiaoyu; Li, Houqiang; Lin, Yuanqing; Tian, Qi

    2016-01-01

    In this paper, we investigate the problem of scalable visual feature matching in large-scale image search and propose a novel cascaded scalar quantization scheme in dual resolution. We formulate the visual feature matching as a range-based neighbor search problem and approach it by identifying hyper-cubes with a dual-resolution scalar quantization strategy. Specifically, for each dimension of the PCA-transformed feature, scalar quantization is performed at both coarse and fine resolutions. The scalar quantization results at the coarse resolution are cascaded over multiple dimensions to index an image database. The scalar quantization results over multiple dimensions at the fine resolution are concatenated into a binary super-vector and stored in the index list for efficient verification. The proposed cascaded scalar quantization (CSQ) method is free of the costly visual codebook training and thus is independent of any image descriptor training set. The index structure of the CSQ is flexible enough to accommodate new image features and scalable enough to index large-scale image databases. We evaluate our approach on public benchmark datasets for large-scale image retrieval. Experimental results demonstrate the competitive retrieval performance of the proposed method compared with several recent retrieval algorithms on feature quantization. PMID:26656584
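
    A minimal sketch of the dual-resolution idea (bin counts, the dimension split and the squashing function are illustrative assumptions, not the paper's trained quantizers):

    ```python
    import numpy as np

    def csq_signature(vec, coarse_bins=4, fine_bins=256, key_dims=8):
        """Dual-resolution scalar quantization of a PCA-transformed
        feature: coarse codes over the leading dimensions cascade into
        one index key; fine codes over all dimensions concatenate into
        a byte signature kept in the index list for verification."""
        v = np.asarray(vec, dtype=float)
        # Squash each component into [0, 1); a real system would use
        # learned per-dimension quantization boundaries instead.
        u = 0.5 * (1.0 + v / (1.0 + np.abs(v)))
        coarse = np.minimum((u * coarse_bins).astype(int), coarse_bins - 1)
        fine = np.minimum((u * fine_bins).astype(int), fine_bins - 1)
        key = 0
        for c in coarse[:key_dims]:      # cascade coarse codes into a key
            key = key * coarse_bins + int(c)
        return key, fine.astype(np.uint8).tobytes()

    rng = np.random.default_rng(1)
    key, sig = csq_signature(rng.standard_normal(32))
    print(key, len(sig))                 # bucket id and 32-byte signature
    ```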

  5. Trelliscope: A System for Detailed Visualization in Analysis of Large Complex Data

    SciTech Connect

    Hafen, Ryan P.; Gosink, Luke J.; McDermott, Jason E.; Rodland, Karin D.; Kleese-Van Dam, Kerstin; Cleveland, William S.

    2013-12-01

    Visualization plays a critical role in the statistical model building and data analysis process. Data analysts, well-versed in statistical and machine learning methods, visualize data to hypothesize and validate models. These analysts need flexible, scalable visualization tools that are not decoupled from their analysis environment. In this paper we introduce Trelliscope, a visualization framework for statistical analysis of large complex data. Trelliscope extends Trellis, an effective visualization framework that divides data into subsets and applies a plotting method to each subset, arranging the results in rows and columns of panels. Trelliscope provides a way to create, arrange and interactively view panels for very large datasets, enabling flexible detailed visualization for data of any size. Scalability is achieved using distributed computing technologies coupled with R. We discuss the underlying principles, design, and scalable architecture of Trelliscope, and illustrate its use on three analysis projects in the domains of proteomics, high energy physics, and power systems engineering.

  6. Resource-constrained complexity-scalable video decoding via adaptive B-residual computation

    NASA Astrophysics Data System (ADS)

    Peng, Sharon S.; Zhong, Zhun

    2002-01-01

    As media processing gradually migrates from hardware to software programmable platforms, the number of media processing functions added to the media processor grows even faster than the ever-increasing processor power can support. Computational complexity scalable algorithms become powerful vehicles for implementing many time-critical yet complexity-constrained applications, such as MPEG2 video decoding. In this paper, we present an adaptive resource-constrained complexity-scalable MPEG2 video decoding scheme that makes a good trade-off between decoding complexity and output quality. Based on the available computational resources and the energy level of B-frame residuals, the scalable decoding algorithm selectively decodes B-residual blocks to significantly reduce system complexity. Furthermore, we describe an iterative procedure designed to dynamically adjust the complexity levels in order to achieve the best possible output quality under a given resource constraint. Experimental results show that up to 20% of total computational complexity reduction can be obtained with satisfactory output visual quality.

  7. A New, Scalable and Low Cost Multi-Channel Monitoring System for Polymer Electrolyte Fuel Cells.

    PubMed

    Calderón, Antonio José; González, Isaías; Calderón, Manuel; Segura, Francisca; Andújar, José Manuel

    2016-01-01

    In this work a new, scalable and low cost multi-channel monitoring system for Polymer Electrolyte Fuel Cells (PEFCs) has been designed, constructed and experimentally validated. The developed monitoring system performs non-intrusive voltage measurement of each individual cell of a PEFC stack and is scalable in the sense that it is capable of carrying out measurements in stacks from 1 to 120 cells (from watts to kilowatts). The developed system comprises two main subsystems: hardware devoted to data acquisition (DAQ) and software devoted to real-time monitoring. The DAQ subsystem is based on the low-cost open-source platform Arduino and the real-time monitoring subsystem has been developed using the high-level graphical language NI LabVIEW. Such integration can be considered a novelty in the scientific literature on PEFC monitoring systems. An original amplifying and multiplexing board has been designed to increase the Arduino input port availability. Data storage and real-time monitoring have been performed with an easy-to-use interface. Graphical and numerical visualization allows continuous tracking of cell voltage. Scalability, flexibility, ease of use, versatility and low cost are the main features of the proposed approach. The system is described and experimental results are presented. These results demonstrate its suitability to monitor the voltage in a PEFC at cell level. PMID:27005630
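
    The authors' front end is built in NI LabVIEW; purely to illustrate the data path, here is a hypothetical Python reader using pyserial for a board that streams `cell_index,voltage` lines (the port name and message format are assumptions, not the paper's protocol):

    ```python
    import csv
    import serial  # pyserial

    def log_cell_voltages(port="/dev/ttyACM0", baud=115200, out="cells.csv"):
        """Read per-cell voltages streamed by a DAQ board and append
        them to a CSV log.  The `cell_index,voltage` line format is a
        hypothetical stand-in for the actual firmware output."""
        with serial.Serial(port, baud, timeout=2) as link, \
             open(out, "a", newline="") as f:
            writer = csv.writer(f)
            while True:
                line = link.readline().decode("ascii",
                                               errors="ignore").strip()
                if not line:
                    continue
                try:
                    cell, volts = line.split(",")
                    writer.writerow([int(cell), float(volts)])
                except ValueError:
                    continue  # skip malformed lines

    log_cell_voltages()
    ```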

  8. A scalable distributed paradigm for multi-user interaction with tiled rear projection display walls.

    PubMed

    Roman, Pablo; Lazarov, Maxim; Majumder, Aditi

    2010-01-01

    We present the first distributed paradigm for multiple users to interact simultaneously with large tiled rear projection display walls. Unlike earlier works, our paradigm allows easy scalability across different applications, interaction modalities, displays and users. The novelty of the design lies in its distributed nature allowing well-compartmented, application independent, and application specific modules. This enables adapting to different 2D applications and interaction modalities easily by changing a few application specific modules. We demonstrate four challenging 2D applications on a nine projector display to demonstrate the application scalability of our method: map visualization, virtual graffiti, virtual bulletin board and an emergency management system. We demonstrate the scalability of our method to multiple interaction modalities by showing both gesture-based and laser-based user interfaces. Finally, we improve earlier distributed methods to register multiple projectors. Previous works need multiple patterns to identify the neighbors, the configuration of the display and the registration across multiple projectors in logarithmic time with respect to the number of projectors in the display. We propose a new approach that achieves this using a single pattern based on specially augmented QR codes in constant time. Further, previous distributed registration algorithms are prone to large misregistrations. We propose a novel radially cascading geometric registration technique that yields significantly better accuracy. Thus, our improvements allow a significantly more efficient and accurate technique for distributed self-registration of multi-projector display walls. PMID:20975205

  9. Fundamental research on scalable DNA molecular computation

    NASA Astrophysics Data System (ADS)

    Wang, Sixue

    Beginning with the ground-breaking work on DNA computation by Adleman in 1994 [2], the idea of using DNA molecules to perform computations has been explored extensively. In this thesis, a computation based on a scalable DNA neural network was discussed and a neuron model was partially implemented using DNA molecules. In order to understand the behavior of short DNA strands in a polyacrylamide gel, we have measured the mobilities of various short single-stranded DNA (ssDNA) and double-stranded DNA (dsDNA) shorter than 100 bases. We found that sufficiently short lengths of ssDNA had a higher mobility than same lengths of dsDNA, with a crossover length Lx at which the mobilities are equal. The crossover length decreases approximately linearly with polyacrylamide gel acrylamide concentration. At the same time, the influence of DNA structure on its mobility was studied and the effect of single-stranded overhangs on dsDNA was discussed. The idea to make a scalable DNA neural network was discussed. To prepare our basis vector DNA oligomers, a 90 base DNA template with a 50 base random strand in the middle and two 20 base primers on the ends was designed and purchased. By a series of dilutions, we obtained several aliquots, containing only 30 random sequence molecules each. These were amplified to roughly 5 pico mole quantities by 38 cycles of PCR with hot start DNA polymerase. We then used asymmetric PCR followed by polyacrylamide gel purification to get the necessary single-stranded basis vectors (ssDNA) and their complements. We tested the suitability of this scheme by adding two vectors formed from different linear combinations of the basis vectors. The full scheme for DNA neural network computation was tested using two determinate ssDNA strands. We successfully transformed an input DNA oligomer to a different output oligomer using the polymerase reaction required by the proposed DNA neural network algorithm. Isothermal linear amplification was used to obtain a sufficient quantity of the output oligomer.

  10. Network selection, Information filtering and Scalable computation

    NASA Astrophysics Data System (ADS)

    Ye, Changqing

    -complete factorizations, possibly with a high percentage of missing values. This promotes additional sparsity beyond rank reduction. Computationally, we design methods based on a "decomposition and combination" strategy, to break large-scale optimization into many small subproblems to solve in a recursive and parallel manner. On this basis, we implement the proposed methods through multi-platform shared-memory parallel programming, and through Mahout, a library for scalable machine learning and data mining, for MapReduce computation. For example, our methods are scalable to a dataset consisting of three billion observations on a single machine with sufficient memory, with good timings. Both theoretical and numerical investigations show that the proposed methods exhibit significant improvement in accuracy over state-of-the-art scalable methods.

  11. Visualization for Molecular Dynamics Simulation of Gas and Metal Surface Interaction

    NASA Astrophysics Data System (ADS)

    Puzyrkov, D.; Polyakov, S.; Podryga, V.

    2016-02-01

    The development of methods, algorithms and applications for visualization of molecular dynamics simulation outputs is discussed. The visual analysis of the results of such calculations is a complex and pressing problem, especially in the case of large scale simulations. To solve this challenging task it is necessary to decide: 1) what data parameters to render, 2) what type of visualization to choose, 3) what development tools to use. In the present work an attempt to answer these questions was made. For visualization we propose drawing the particles at their 3D coordinates together with their velocity vectors, trajectories and volume density in the form of isosurfaces or fog. We tested a post-processing and visualization approach based on the Python language with additional libraries. Parallel software was also developed that allows processing large volumes of data in the 3D regions of the examined system. This software makes it possible to obtain results in parallel with the calculations and, at the end, to collect the received discrete frames into a video file. The software package "Enthought Mayavi2" was used as the tool for visualization. This visualization application gave us the opportunity to study the interaction of a gas with a metal surface and to closely observe the adsorption effect.
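
    A minimal sketch of the rendering choices listed above (particles as points, velocity vectors, a density isosurface) with the same Mayavi2 package, using synthetic stand-in arrays:

    ```python
    import numpy as np
    from mayavi import mlab  # Enthought Mayavi2

    # Synthetic stand-ins for one simulation frame: particle coordinates,
    # velocities, and a volume density sampled on a regular grid.
    rng = np.random.default_rng(2)
    x, y, z = rng.random((3, 500))
    u, v, w = rng.standard_normal((3, 500)) * 0.05
    gx, gy, gz = np.mgrid[0:1:32j, 0:1:32j, 0:1:32j]
    density = np.exp(-((gx - .5)**2 + (gy - .5)**2 + (gz - .5)**2) / 0.02)

    mlab.points3d(x, y, z, scale_factor=0.01)        # particles
    mlab.quiver3d(x, y, z, u, v, w)                  # velocity vectors
    mlab.contour3d(gx, gy, gz, density, contours=4,  # density isosurfaces
                   opacity=0.3)
    mlab.show()
    ```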

  12. Visual Text Analytics for Impromptu Analysts

    SciTech Connect

    Love, Oriana J.; Best, Daniel M.; Bruce, Joseph R.; Dowson, Scott T.; Larmey, Christopher S.

    2011-10-23

    The Scalable Reasoning System (SRS) is a lightweight visual analytics framework that makes analytical capabilities widely accessible to a class of users we have deemed “impromptu analysts.” By focusing on a deployment of SRS, the Lessons Learned Explorer (LLEx), we examine how to develop visualizations around analytics-oriented goals and data availability. We discuss how to help impromptu analysts explore deeper patterns. Through designing consistent interactions, we arrive at an interdependent view capable of showcasing patterns. With the combination of SRS widget visualizations and interactions around the underlying textual data, we aim to transition the casual, infrequent user into a viable, albeit impromptu, analyst.

  13. Network-aware scalable video monitoring system for emergency situations with operator-managed fidelity control

    NASA Astrophysics Data System (ADS)

    Al Hadhrami, Tawfik; Nightingale, James M.; Wang, Qi; Grecos, Christos

    2014-05-01

    In emergency situations, the ability to remotely monitor unfolding events using high-quality video feeds will significantly improve the incident commander's understanding of the situation and thereby aids effective decision making. This paper presents a novel, adaptive video monitoring system for emergency situations where the normal communications network infrastructure has been severely impaired or is no longer operational. The proposed scheme, operating over a rapidly deployable wireless mesh network, supports real-time video feeds between first responders, forward operating bases and primary command and control centers. Video feeds captured on portable devices carried by first responders and by static visual sensors are encoded in H.264/SVC, the scalable extension to H.264/AVC, allowing efficient, standard-based temporal, spatial, and quality scalability of the video. A three-tier video delivery system is proposed, which balances the need to avoid overuse of mesh nodes with the operational requirements of the emergency management team. In the first tier, the video feeds are delivered at a low spatial and temporal resolution employing only the base layer of the H.264/SVC video stream. Routing in this mode is designed to employ all nodes across the entire mesh network. In the second tier, whenever operational considerations require that commanders or operators focus on a particular video feed, a `fidelity control' mechanism at the monitoring station sends control messages to the routing and scheduling agents in the mesh network, which increase the quality of the received picture using SNR scalability while conserving bandwidth by maintaining a low frame rate. In this mode, routing decisions are based on reliable packet delivery with the most reliable routes being used to deliver the base and lower enhancement layers; as fidelity is increased and more scalable layers are transmitted they will be assigned to routes in descending order of reliability. The third tier

  15. Porphyrins as Catalysts in Scalable Organic Reactions.

    PubMed

    Barona-Castaño, Juan C; Carmona-Vargas, Christian C; Brocksom, Timothy J; de Oliveira, Kleber T

    2016-01-01

    Catalysis has been a topic of continuous interest since its discovery in chemistry centuries ago. Aiming at the advance of reactions for efficient processes, a number of approaches have been developed over the last 180 years, and more recently, porphyrins have come to occupy an important role in this field. Porphyrins and metalloporphyrins are fascinating compounds which are involved in a number of synthetic transformations of great interest to industry and academia. The aim of this review is to cover the most recent progress in reactions catalysed by porphyrins in scalable procedures, thus presenting the state of the art in reactions of epoxidation, sulfoxidation, oxidation of alcohols to carbonyl compounds, and C-H functionalization. In addition, the use of porphyrins as photocatalysts in continuous flow processes is covered. PMID:27005601

  16. Efficient scalable solid-state neutron detector

    SciTech Connect

    Moses, Daniel

    2015-06-15

    We report on a scalable solid-state neutron detector system that is specifically designed to yield high thermal neutron detection sensitivity. The basic detector unit in this system is made of a ⁶Li foil coupled to two crystalline silicon diodes. The theoretical intrinsic efficiency of a detector-unit is 23.8% and that of a detector element comprising a stack of five detector-units is 60%. Based on the measured performance of this detector-unit, the performance of a detector system comprising a planar array of detector elements, scaled to encompass an effective area of 0.43 m², is estimated to yield the minimum absolute efficiency required of radiological portal monitors used in homeland security.

  17. BASSET: Scalable Gateway Finder in Large Graphs

    SciTech Connect

    Tong, H; Papadimitriou, S; Faloutsos, C; Yu, P S; Eliassi-Rad, T

    2010-11-03

    Given a social network, who is the best person to introduce you to, say, Chris Ferguson, the poker champion? Or, given a network of people and skills, who is the best person to help you learn about, say, wavelets? The goal is to find a small group of 'gateways': persons who are close enough to us, as well as close enough to the target (person or skill); in other words, persons who are crucial in connecting us to the target. The main contributions are the following: (a) we show how to formulate this problem precisely; (b) we show that it is submodular and thus can be solved near-optimally; (c) we give fast, scalable algorithms to find such gateways. Experiments on real data sets validate the effectiveness and efficiency of the proposed methods, achieving up to 6,000,000x speedup.
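
    Because the objective is monotone submodular, a simple greedy selection achieves a near-optimal gateway set (within the classic 1 - 1/e factor). The sketch below shows that selection loop; the probabilistic-coverage scoring function is an invented stand-in for illustration, not the paper's actual objective, and path_prob is an assumed per-candidate connection probability.

      def greedy_gateways(candidates, score, k):
          """Pick k gateways greedily, maximizing a monotone submodular score(set)."""
          chosen = set()
          for _ in range(k):
              best = max((c for c in candidates if c not in chosen),
                         key=lambda c: score(chosen | {c}) - score(chosen))
              chosen.add(best)
          return chosen

      def make_coverage_score(path_prob):
          """Stand-in objective: probability that at least one chosen gateway
          connects us to the target (monotone and submodular)."""
          def score(S):
              miss = 1.0
              for c in S:
                  miss *= (1.0 - path_prob[c])
              return 1.0 - miss
          return score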

  18. Stability and scalability of piezoelectric flag

    NASA Astrophysics Data System (ADS)

    Wang, Xiaolin; Alben, Silas; Li, Chenyang; Young, Yin Lu

    2015-11-01

    Piezoelectric material (PZT) has drawn enormous attention in the past decades due to its ability to convert mechanical deformation energy into electrical potential energy, and vice versa, and has been applied to energy harvesting and vibration control. In this work, we consider the effect of PZT on the stability of a flexible flag using an inviscid vortex-sheet model. We find that the critical flutter speed is increased due to the extra damping effect of the PZT, and can also be altered by tuning the output inductance-resistance circuit. Optimal resistance and inductance are found that either maximize or minimize the flutter speed. The former application is useful for vibration control, while the latter is important for energy harvesting. We also discuss the scalability of the above system to actual applications in air and water.

  19. Scalable problems and memory bounded speedup

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He; Ni, Lionel M.

    1992-01-01

    In this paper three models of parallel speedup are studied: fixed-size speedup, fixed-time speedup, and memory-bounded speedup. The latter two consider the relationship between speedup and problem scalability. Two sets of speedup formulations are derived for these three models. One set considers uneven workload allocation and communication overhead and gives a more accurate estimation. The other set considers a simplified case and provides a clear picture of the impact of the sequential portion of an application on the possible performance gain from parallel processing. The simplified fixed-size speedup is Amdahl's law. The simplified fixed-time speedup is Gustafson's scaled speedup. The simplified memory-bounded speedup contains both Amdahl's law and Gustafson's scaled speedup as special cases. This study leads to a better understanding of parallel processing.
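
    For reference, the simplified forms of the three models can be written as follows. This is a reconstruction from the standard literature, with f the sequential fraction, p the number of processors, and G(p) the factor by which the workload grows when memory scales with p; the paper's own notation may differ.

      S_{\text{fixed-size}}(p)      = \frac{1}{f + (1-f)/p}                       % Amdahl's law
      S_{\text{fixed-time}}(p)      = f + (1-f)\,p                               % Gustafson's scaled speedup
      S_{\text{memory-bounded}}(p)  = \frac{f + (1-f)\,G(p)}{f + (1-f)\,G(p)/p}  % Sun-Ni

    Setting G(p) = 1 recovers Amdahl's law and G(p) = p recovers Gustafson's scaled speedup, which is the sense in which the memory-bounded model contains both as special cases.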

  20. A versatile scalable PET processing system

    SciTech Connect

    Dong, H.; Weisenberger, A.; McKisson, J.; Xi, Wenze; Cuevas, C.; Wilson, J.; Zukerman, L.

    2011-06-01

    Positron Emission Tomography (PET) historically has major clinical and preclinical applications in oncology, neurology, and cardiovascular disease. Recently, in a new direction, an application-specific PET system is being developed at Thomas Jefferson National Accelerator Facility (Jefferson Lab) in collaboration with Duke University, the University of Maryland at Baltimore (UMAB), and West Virginia University (WVU), targeted at plant eco-physiology research. The new plant imaging PET system is versatile and scalable such that it can adapt to several plant imaging needs, imaging many important plant organs including leaves, roots, and stems. The mechanical arrangement of the detectors is designed to accommodate the unpredictable and random spatial distribution of the plant organs without requiring that the plant be disturbed. Prototyping such a system requires a new data acquisition system (DAQ) and data processing system which are adaptable to the requirements of these unique and versatile detectors.

  1. Scalable computer architecture for digital vascular systems

    NASA Astrophysics Data System (ADS)

    Goddard, Iain; Chao, Hui; Skalabrin, Mark

    1998-06-01

    Digital vascular computer systems are used for radiology and fluoroscopy (R/F), angiography, and cardiac applications. In the United States alone, about 26 million procedures of these types are performed annually: about 81% R/F, 11% cardiac, and 8% angiography. Digital vascular systems have a very wide range of performance requirements, especially in terms of data rates. In addition, new features are added over time as they are shown to be clinically efficacious. Application-specific processing modes such as roadmapping, peak opacification, and bolus chasing are particular to some vascular systems. New algorithms continue to be developed and proven, such as Cox and deJager's precise registration methods for masks and live images in digital subtraction angiography. A computer architecture must have high scalability and reconfigurability to meet the needs of this modality. Ideally, the architecture could also serve as the basis for a nonvascular R/F system.

  2. Efficient scalable solid-state neutron detector

    NASA Astrophysics Data System (ADS)

    Moses, Daniel

    2015-06-01

    We report on a scalable solid-state neutron detector system that is specifically designed to yield high thermal neutron detection sensitivity. The basic detector unit in this system is made of a ⁶Li foil coupled to two crystalline silicon diodes. The theoretical intrinsic efficiency of a detector-unit is 23.8% and that of a detector element comprising a stack of five detector-units is 60%. Based on the measured performance of this detector-unit, the performance of a detector system comprising a planar array of detector elements, scaled to encompass an effective area of 0.43 m², is estimated to yield the minimum absolute efficiency required of radiological portal monitors used in homeland security.

  3. Efficient scalable solid-state neutron detector.

    PubMed

    Moses, Daniel

    2015-06-01

    We report on a scalable solid-state neutron detector system that is specifically designed to yield high thermal neutron detection sensitivity. The basic detector unit in this system is made of a ⁶Li foil coupled to two crystalline silicon diodes. The theoretical intrinsic efficiency of a detector-unit is 23.8% and that of a detector element comprising a stack of five detector-units is 60%. Based on the measured performance of this detector-unit, the performance of a detector system comprising a planar array of detector elements, scaled to encompass an effective area of 0.43 m², is estimated to yield the minimum absolute efficiency required of radiological portal monitors used in homeland security. PMID:26133869

  4. Parallel scalability of Hartree–Fock calculations

    SciTech Connect

    Chow, Edmond; Liu, Xing; Smelyanskiy, Mikhail; Hammond, Jeff R.

    2015-03-14

    Quantum chemistry is increasingly performed using large cluster computers consisting of multiple interconnected nodes. For a fixed molecular problem, the efficiency of a calculation usually decreases as more nodes are used, due to the cost of communication between the nodes. This paper empirically investigates the parallel scalability of Hartree–Fock calculations. The construction of the Fock matrix and the density matrix calculation are analyzed separately. For the former, we use a parallelization of Fock matrix construction based on a static partitioning of work followed by a work stealing phase. For the latter, we use density matrix purification from the linear scaling methods literature, but without using sparsity. When using large numbers of nodes for moderately sized problems, density matrix computations are network-bandwidth bound, making purification methods potentially faster than eigendecomposition methods.
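
    The density matrix step above can be made concrete with a small sketch. Below is a minimal serial NumPy version of McWeeny purification, one standard purification scheme from the linear-scaling literature; the paper's parallel, dense implementation is not reproduced here, and the initial D is assumed to already have eigenvalues in [0, 1].

      import numpy as np

      def mcweeny_purify(D, tol=1e-10, max_iter=100):
          """Iterate D <- 3 D^2 - 2 D^3 until D is idempotent (D^2 = D).

          Assumes the initial guess D has eigenvalues in [0, 1], e.g. a
          suitably shifted and scaled function of the Fock matrix.
          """
          for _ in range(max_iter):
              D2 = D @ D
              if np.linalg.norm(D2 - D) < tol:   # idempotency reached
                  break
              D = 3.0 * D2 - 2.0 * (D2 @ D)
          return D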

  5. Scalable ranked retrieval using document images

    NASA Astrophysics Data System (ADS)

    Jain, Rajiv; Oard, Douglas W.; Doermann, David

    2013-12-01

    Despite the explosion of text on the Internet, hard-copy documents that have been scanned as images still play a significant role for some tasks. The best method to perform ranked retrieval on a large corpus of document images, however, remains an open research question. The most common approach has been to perform text retrieval using terms generated by optical character recognition. This paper, by contrast, examines whether a scalable segmentation-free image retrieval algorithm, which matches sub-images containing text or graphical objects, can provide additional benefit in satisfying a user's information needs on a large, real-world dataset. Results on 7 million scanned pages from the CDIP v1.0 test collection show that content-based image retrieval finds a substantial number of documents that text retrieval misses, and that, when used as a basis for relevance feedback, it can yield improvements in retrieval effectiveness.

  6. Toward Scalable Benchmarks for Mass Storage Systems

    NASA Technical Reports Server (NTRS)

    Miller, Ethan L.

    1996-01-01

    This paper presents guidelines for the design of a mass storage system benchmark suite, along with preliminary suggestions for programs to be included. The benchmarks will measure both peak and sustained performance of the system as well as predicting both short- and long-term behavior. These benchmarks should be both portable and scalable so they may be used on storage systems from tens of gigabytes to petabytes or more. By developing a standard set of benchmarks that reflect real user workload, we hope to encourage system designers and users to publish performance figures that can be compared with those of other systems. This will allow users to choose the system that best meets their needs and give designers a tool with which they can measure the performance effects of improvements to their systems.

  7. VRML and Collaborative Environments: New Tools for Networked Visualization

    NASA Astrophysics Data System (ADS)

    Crutcher, R. M.; Plante, R. L.; Rajlich, P.

    We present two new applications that engage the network as a tool for astronomical research and/or education. The first is a VRML server which allows users over the Web to interactively create three-dimensional visualizations of FITS images contained in the NCSA Astronomy Digital Image Library (ADIL). The server's Web interface allows users to select images from the ADIL, fill in processing parameters, and create renderings featuring isosurfaces, slices, contours, and annotations; the often extensive computations are carried out on an NCSA SGI supercomputer server without the user having an individual account on the system. The user can then download the 3D visualizations as VRML files, which may be rotated and manipulated locally on virtually any class of computer. The second application is the ADILBrowser, a part of the NCSA Horizon Image Data Browser Java package. ADILBrowser allows a group of participants to browse images from the ADIL within a collaborative session. The collaborative environment is provided by the NCSA Habanero package which includes text and audio chat tools and a white board. The ADILBrowser is just an example of a collaborative tool that can be built with the Horizon and Habanero packages. The classes provided by these packages can be assembled to create custom collaborative applications that visualize data either from local disk or from anywhere on the network.

  8. Clip art rendering of smooth isosurfaces.

    PubMed

    Stroila, Matei; Eisemann, Elmar; Hart, John

    2008-01-01

    Clip art is a simplified illustration form consisting of layered filled polygons or closed curves used to convey 3D shape information in a 2D vector graphics format. This paper focuses on the problem of direct conversion of smooth surfaces, ranging from the free-form shapes of art and design to the mathematical structures of geometry and topology, into a clip art form suitable for illustration use in books, papers and presentations. We show how to represent silhouette, shadow, gleam and other surface feature curves as the intersection of implicit surfaces, and derive equations for their efficient interrogation via particle chains. We further describe how to sort, orient, identify and fill the closed regions that overlay to form clip art. We demonstrate the results with numerous renderings used to illustrate the paper itself. PMID:17993708

  9. Fully scalable video transmission using the SSM adaptation framework

    NASA Astrophysics Data System (ADS)

    Mukherjee, Debargha; Chen, Peisong; Hsiang, Shih-Ta; Woods, John W.; Said, Amir

    2003-06-01

    Recently a methodology for representation and adaptation of arbitrary scalable bit-streams in a fully content non-specific manner has been proposed on the basis of a universal model for all scalable bit-streams called Scalable Structured Meta-formats (SSM). According to this model, elementary scalable bit-streams are naturally organized in a symmetric multi-dimensional logical structure. The model parameters for a specific bit-stream along with information guiding decision-making among possible adaptation choices are represented in a binary or XML descriptor to accompany the bit-stream flowing downstream. The capabilities and preferences of receiving terminals flow upstream and are also specified in binary or XML form to represent constraints that guide adaptation. By interpreting the descriptor and the constraint specifications, a universal adaptation engine sitting on a network node can adapt the content appropriately to suit the specified needs and preferences of recipients, without knowledge of the specifics of the content, its encoding and/or encryption. In this framework, different adaptation infrastructures are no longer needed for different types of scalable media. In this work, we show how this framework can be used to adapt fully scalable video bit-streams, specifically ones obtained by the fully scalable MC-EZBC video coding system. MC-EZBC uses a 3-D subband/wavelet transform that exploits correlation by filtering along motion trajectories, to obtain a 3-dimensional scalable bit-stream combining temporal, spatial and SNR scalability in a compact bit-stream. Several adaptation use cases are presented to demonstrate the flexibility and advantages of a fully scalable video bit-stream when used in conjunction with a network adaptation engine for transmission.
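
    A toy sketch can make the adaptation idea tangible: if the elementary units of a scalable bit-stream are indexed along the model's logical dimensions (say temporal, spatial, and SNR), a content-agnostic engine only needs to compare those indices against the constraints a terminal declares. Everything below (field names, axes, the call shape) is invented for illustration and is not the SSM format itself.

      def adapt(units, constraints):
          """units: list of dicts with 'temporal', 'spatial', 'snr' layer
          indices plus a 'payload'; constraints: maximum admissible layer
          index per axis, as declared by the receiving terminal."""
          return [u for u in units
                  if all(u[axis] <= constraints[axis]
                         for axis in ('temporal', 'spatial', 'snr'))]

      # e.g. a terminal limited to the base spatial layer at half frame rate:
      # adapted = adapt(stream_units, {'temporal': 1, 'spatial': 0, 'snr': 2})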

  10. An efficient scalable intra coding algorithm for spatial scalability in enhancement layer

    NASA Astrophysics Data System (ADS)

    Wang, Zhang; Lu, Lijun

    2011-05-01

    Scalable video coding (SVC) is attractive due to the capability of reconstructing lower-resolution or lower-quality signals from partial bit streams, which allows for simple solutions adapted to network and terminal capabilities. This article addresses the spatial scalability of SVC and proposes an efficient H.264-based scalable intra coding algorithm. In comparison with the previous single-layer intra prediction (SLIP) method, the proposed algorithm aims to improve the intra coding performance of the enhancement layer through a new inter-layer intra prediction (ILIP) method. The main idea of ILIP is that up-sampled and reconstructed pixels of the base layer are very useful for predicting and encoding the pixels of the enhancement layer, especially when neighbouring pixels are not available. Experimental results show that the peak signal-to-noise ratio (PSNR) of the luminance component of encoded frames is improved, while bit-rates and computational complexity are maintained very well. For the sequence Football, the average PSNR increase is up to 0.21 dB, while for Foreman and Bus it is 0.14 dB and 0.17 dB, respectively.
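
    The core of ILIP can be sketched in a few lines: the reconstructed base-layer block is up-sampled to the enhancement-layer resolution and used as the predictor, so only the residual needs to be coded. The sketch below uses plain pixel replication for the dyadic up-sampling step purely for brevity; SVC specifies proper interpolation filters, and the function names are invented.

      import numpy as np

      def ilip_residual(enh_block, base_block_reco):
          """Residual to encode: enhancement block minus up-sampled base block."""
          pred = np.kron(base_block_reco, np.ones((2, 2)))   # 2x pixel replication
          return enh_block - pred

      def ilip_reconstruct(residual, base_block_reco):
          """Decoder side: add the decoded residual back onto the predictor."""
          pred = np.kron(base_block_reco, np.ones((2, 2)))
          return residual + pred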

  11. Developing a personal computer-based data visualization system using public domain software

    NASA Astrophysics Data System (ADS)

    Chen, Philip C.

    1999-03-01

    The current research will investigate the possibility of developing a computing-visualization system using a public domain software system built on a personal computer. The Visualization Toolkit (VTK) is available on UNIX and PC platforms. VTK uses C++ to build an executable and has abundant programming classes/objects contained in its system library. Users can also develop their own classes/objects in addition to those existing in the class library, and can develop applications with any of the C++, Tcl/Tk, and JAVA environments. The present research will show how a data visualization system can be developed with VTK running on a personal computer. The topics will include: execution efficiency; visual object quality; availability of the user interface design; and exploring the feasibility of a VTK-based World Wide Web data visualization system. The present research will feature a case study showing how to use VTK to visualize meteorological data with techniques including isosurfaces, volume rendering, vector display, and composite analysis. The study also shows how the VTK outline, axes, and two-dimensional annotation text and title enhance the data presentation. The present research will also demonstrate how VTK works in an internet environment when accessing an executable from a JAVA application program in a webpage.
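
    For a flavor of what such a VTK-based system looks like in practice, here is a minimal isosurface pipeline using VTK's Python bindings. The file name and isovalue are placeholders, and the original work predates parts of today's API, so treat this as a modern sketch rather than the author's code.

      import vtk

      # Read a structured-points dataset (e.g., a gridded meteorological field).
      reader = vtk.vtkStructuredPointsReader()
      reader.SetFileName('field.vtk')          # placeholder input file

      # Extract an isosurface at a chosen scalar value.
      contour = vtk.vtkContourFilter()
      contour.SetInputConnection(reader.GetOutputPort())
      contour.SetValue(0, 0.5)                 # placeholder isovalue

      # Standard mapper/actor/renderer chain to put the surface on screen.
      mapper = vtk.vtkPolyDataMapper()
      mapper.SetInputConnection(contour.GetOutputPort())
      actor = vtk.vtkActor()
      actor.SetMapper(mapper)

      renderer = vtk.vtkRenderer()
      renderer.AddActor(actor)
      window = vtk.vtkRenderWindow()
      window.AddRenderer(renderer)
      interactor = vtk.vtkRenderWindowInteractor()
      interactor.SetRenderWindow(window)

      window.Render()
      interactor.Start()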

  12. Visualization of flaws within heavy section ultrasonic test blocks using high energy computed tomography

    SciTech Connect

    House, M.B.; Ross, D.M.; Janucik, F.X.; Friedman, W.D.; Yancey, R.N.

    1996-05-01

    The feasibility of using high-energy (9 MeV) computed tomography to detect volumetric and planar discontinuities in large pressure vessel mock-up blocks was studied. The data supplied by the manufacturer of the test blocks on the intended flaw geometry were compared to manual contact ultrasonic test data and computed tomography test data. Subsequently, a visualization program was used to construct fully three-dimensional morphological information, enabling interactive data analysis of the detected flaws. Density isosurfaces show the relative shape and location of the volumetric defects within the mock-up blocks. Such a technique may be used to qualify personnel or newly developed ultrasonic test methods without the associated high cost of destructive evaluation. Data are presented showing the capability of the volumetric data analysis program to overlay the computed tomography and destructive evaluation (serial metallography) data for a direct, three-dimensional comparison.

  13. Visualization of time-varying MRI data for MS lesion analysis

    NASA Astrophysics Data System (ADS)

    Tory, Melanie K.; Moeller, Torsten; Atkins, M. Stella

    2001-05-01

    Conventional methods to diagnose and follow treatment of Multiple Sclerosis require radiologists and technicians to compare current images with older images of a particular patient on a slice-by-slice basis. Although there has been progress in creating 3D displays of medical images, little attempt has been made to design visual tools that emphasize change over time. We implemented several ideas that attempt to address this deficiency. In one approach, isosurfaces of segmented lesions at each time step were displayed either on the same image (each time step in a different color) or consecutively in an animation. In a second approach, voxel-wise differences between time steps were calculated and displayed statically using ray casting. Animation was used to show cumulative changes over time. Finally, in a method borrowed from computational fluid dynamics (CFD), glyphs (small arrow-like objects) were rendered with a surface model of the lesions to indicate changes at localized points.
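
    The voxel-wise second approach reduces to a very small computation before rendering. A minimal sketch, assuming the two volumes are already co-registered and intensity-normalized (which is the hard part in practice), with an invented noise-floor parameter:

      import numpy as np

      def lesion_change(vol_t0, vol_t1, noise_floor=0.0):
          """Signed per-voxel change between two co-registered scans."""
          diff = vol_t1.astype(np.float32) - vol_t0.astype(np.float32)
          diff[np.abs(diff) <= noise_floor] = 0.0   # suppress noise-level change
          return diff   # positive voxels: lesion growth; negative: shrinkage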

  14. Visual agnosia.

    PubMed

    Álvarez, R; Masjuan, J

    2016-03-01

    Visual agnosia is defined as an impairment of object recognition in the absence of the visual acuity loss or cognitive dysfunction that would otherwise explain the impairment. This condition is caused by lesions in the visual association cortex, sparing primary visual cortex. There are 2 main pathways that process visual information: the ventral stream, tasked with object recognition, and the dorsal stream, in charge of locating objects in space. Visual agnosia can therefore be divided into 2 major groups depending on which of the two streams is damaged. The aim of this article is to conduct a narrative review of the various visual agnosia syndromes, including recent developments in a number of these syndromes. PMID:26358494

  15. Computing and Visualizing Reachable Volumes for Maneuvering Satellites

    SciTech Connect

    Jiang, M; de Vries, W H; Pertica, A J; Olivier, S S

    2011-09-11

    Detecting and predicting maneuvering satellites is an important problem for Space Situational Awareness. The spatial envelope of all possible locations within reach of such a maneuvering satellite is known as the Reachable Volume (RV). As soon as custody of a satellite is lost, calculating the RV and its subsequent time evolution is a critical component in the rapid recovery of the satellite. In this paper, we present a Monte Carlo approach to computing the RV for a given object. Essentially, our approach samples all possible trajectories by randomizing thrust-vectors, thrust magnitudes and time of burn. At any given instant, the distribution of the 'point-cloud' of the virtual particles defines the RV. For short orbital time-scales, the temporal evolution of the point-cloud can result in complex, multi-reentrant manifolds. Visualization plays an important role in gaining insight and understanding into this complex and evolving manifold. In the second part of this paper, we focus on how to effectively visualize the large number of virtual trajectories and the computed RV. We present a real-time out-of-core rendering technique for visualizing the large number of virtual trajectories. We also examine different techniques for visualizing the computed volume of probability density distribution, including volume slicing, convex hull and isosurfacing. We compare and contrast these techniques in terms of computational cost and visualization effectiveness, and describe the main implementation issues encountered during our development process. Finally, we present some of the results from our end-to-end system for computing and visualizing RVs using examples of maneuvering satellites.
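
    The sampling step lends itself to a compact sketch. The toy version below randomizes thrust direction, magnitude, and time of burn and propagates each virtual particle with straight-line coasting, purely to illustrate the structure of the Monte Carlo loop; the real system propagates proper orbital dynamics, and all parameter names here are invented.

      import numpy as np

      def sample_reachable_volume(r0, v0, t_final, n=100_000, dv_max=0.1, rng=None):
          """Return an (n, 3) point cloud approximating the RV at t_final."""
          if rng is None:
              rng = np.random.default_rng(0)
          u = rng.normal(size=(n, 3))
          u /= np.linalg.norm(u, axis=1, keepdims=True)      # random thrust directions
          dv = rng.uniform(0.0, dv_max, size=(n, 1)) * u     # random delta-v magnitudes
          t_burn = rng.uniform(0.0, t_final, size=(n, 1))    # random times of burn
          # Coast to the burn, apply the impulse, then coast to t_final.
          return r0 + v0 * t_burn + (v0 + dv) * (t_final - t_burn)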

  16. Scalability and interoperability within glideinWMS

    SciTech Connect

    Bradley, D.; Sfiligoi, I.; Padhi, S.; Frey, J.; Tannenbaum, T.; /Wisconsin U., Madison

    2010-01-01

    Physicists have access to thousands of CPUs in grid federations such as OSG and EGEE. With the start-up of the LHC, it is essential for individuals or groups of users to wrap together available resources from multiple sites across multiple grids under a higher user-controlled layer in order to provide a homogeneous pool of available resources. One such system is glideinWMS, which is based on the Condor batch system. A general discussion of glideinWMS can be found elsewhere. Here, we focus on recent advances in extending its reach: scalability and integration of heterogeneous compute elements. We demonstrate that the new developments exceed the design goal of over 10,000 simultaneous running jobs under a single Condor schedd, using strong security protocols across global networks, and sustaining a steady-state job completion rate of a few Hz. We also show interoperability across heterogeneous computing elements achieved using client-side methods. We discuss this technique and the challenges in direct access to NorduGrid and CREAM compute elements, in addition to Globus based systems.

  17. Scalable multichannel MRI data acquisition system.

    PubMed

    Bodurka, Jerzy; Ledden, Patrick J; van Gelderen, Peter; Chu, Renxin; de Zwart, Jacco A; Morris, Doug; Duyn, Jeff H

    2004-01-01

    A scalable multichannel digital MRI receiver system was designed to achieve high bandwidth echo-planar imaging (EPI) acquisitions for applications such as BOLD-fMRI. The modular system design allows for easy extension to an arbitrary number of channels. A 16-channel receiver was developed and integrated with a General Electric (GE) Signa 3T VH/3 clinical scanner. Receiver performance was evaluated on phantoms and human volunteers using a custom-built 16-element receive-only brain surface coil array. At an output bandwidth of 1 MHz, a 100% acquisition duty cycle was achieved. Overall system noise figure and dynamic range were better than 0.85 dB and 84 dB, respectively. During repetitive EPI scanning on phantoms, the relative temporal standard deviation of the image intensity time-course was below 0.2%. As compared to the product birdcage head coil, 16-channel reception with the custom array yielded a nearly 6-fold SNR gain in the cerebral cortex and a 1.8-fold SNR gain in the center of the brain. The excellent system stability combined with the increased sensitivity and SENSE capabilities of 16-channel coils are expected to significantly benefit and enhance fMRI applications. PMID:14705057

  18. Scalable Combinatorial Tools for Health Disparities Research

    PubMed Central

    Langston, Michael A.; Levine, Robert S.; Kilbourne, Barbara J.; Rogers, Gary L.; Kershenbaum, Anne D.; Baktash, Suzanne H.; Coughlin, Steven S.; Saxton, Arnold M.; Agboto, Vincent K.; Hood, Darryl B.; Litchveld, Maureen Y.; Oyana, Tonny J.; Matthews-Juarez, Patricia; Juarez, Paul D.

    2014-01-01

    Despite staggering investments made in unraveling the human genome, current estimates suggest that as much as 90% of the variance in cancer and chronic diseases can be attributed to factors outside an individual’s genetic endowment, particularly to environmental exposures experienced across his or her life course. New analytical approaches are clearly required as investigators turn to complicated systems theory and ecological, place-based and life-history perspectives in order to understand more clearly the relationships between social determinants, environmental exposures and health disparities. While traditional data analysis techniques remain foundational to health disparities research, they are easily overwhelmed by the ever-increasing size and heterogeneity of available data needed to illuminate latent gene x environment interactions. This has prompted the adaptation and application of scalable combinatorial methods, many from genome science research, to the study of population health. Most of these powerful tools are algorithmically sophisticated, highly automated and mathematically abstract. Their utility motivates the main theme of this paper, which is to describe real applications of innovative transdisciplinary models and analyses in an effort to help move the research community closer toward identifying the causal mechanisms and associated environmental contexts underlying health disparities. The public health exposome is used as a contemporary focus for addressing the complex nature of this subject. PMID:25310540

  19. Lightweight and scalable secure communication in VANET

    NASA Astrophysics Data System (ADS)

    Zhu, Xiaoling; Lu, Yang; Zhu, Xiaojuan; Qiu, Shuwei

    2015-05-01

    To prevent messages from being tampered with or forged in a vehicular ad hoc network (VANET), the digital signature method is adopted by IEEE 1609.2. However, the costs of this method are excessively high for large-scale networks. This paper copes with the issue through a secure communication framework that introduces some lightweight cryptographic primitives. In our framework, point-to-point and broadcast communications for vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) are studied, based mainly on symmetric cryptography. A new issue this incurs is symmetric key management. Thus, we develop key distribution and agreement protocols for two-party keys and group keys under different environments, whether a road side unit (RSU) is deployed or not. The analysis shows that our protocols provide confidentiality, authentication, perfect forward secrecy, forward secrecy and backward secrecy. The proposed group key agreement protocol in particular solves the key leak problem caused by members joining or leaving in existing key agreement protocols. Due to aggregated signatures and the substitution of XOR for point addition, the average computation and communication costs do not increase significantly with the number of vehicles; hence, our framework provides good scalability.

  20. SCTP as scalable video coding transport

    NASA Astrophysics Data System (ADS)

    Ortiz, Jordi; Graciá, Eduardo Martínez; Skarmeta, Antonio F.

    2013-12-01

    This study presents an evaluation of the Stream Control Transmission Protocol (SCTP) for the transport of the scalable video codec (SVC), proposed by MPEG as an extension to H.264/AVC. The two technologies fit together well. On the one hand, SVC permits the bitstream to be split easily into substreams carrying different video layers, each with different importance for the reconstruction of the complete video sequence at the receiver end. On the other hand, SCTP includes features, such as multi-streaming and multi-homing capabilities, that permit robust and efficient transport of the SVC layers. Several transmission strategies supported on baseline SCTP and its concurrent multipath transfer (CMT) extension are compared with the classical solutions based on the Transmission Control Protocol (TCP) and the Real-time Transport Protocol (RTP). Using ns-2 simulations, it is shown that CMT-SCTP outperforms TCP and RTP in error-prone networking environments. The comparison is established according to several performance measurements, including delay, throughput, packet loss, and peak signal-to-noise ratio of the received video.

  1. Dynamically scalable dual-core pipelined processor

    NASA Astrophysics Data System (ADS)

    Kumar, Nishant; Aggrawal, Ekta; Rajawat, Arvind

    2015-10-01

    This article proposes the design and architecture of a dynamically scalable dual-core pipelined processor. The design methodology is core fusion of two processors: two independent cores can dynamically morph into a larger processing unit, or they can be used as distinct processing elements to achieve high sequential performance and high parallel performance. The processor provides two execution modes. Mode 1 is a multiprogramming mode for executing instruction streams of lower data width, i.e., each core performs 16-bit operations individually. Performance is improved in this mode due to the parallel execution of instructions in both cores, at the cost of area. In mode 2, both processing cores are coupled and behave like a single processing unit of higher data width, i.e., they can perform 32-bit operations. Additional core-to-core communication is needed to realise this mode. The mode can switch dynamically; therefore, this processor provides multiple functions within a single design. Design and verification of the processor have been completed successfully using Verilog on the Xilinx 14.1 platform. The processor has been verified in both simulation and synthesis with the help of test programs. The design is aimed at implementation on a Xilinx Spartan 3E XC3S500E FPGA.

  2. Scalable histopathological image analysis via active learning.

    PubMed

    Zhu, Yan; Zhang, Shaoting; Liu, Wei; Metaxas, Dimitris N

    2014-01-01

    Training an effective and scalable system for medical image analysis usually requires a large amount of labeled data, which incurs a tremendous annotation burden for pathologists. Recent progress in active learning can alleviate this issue, leading to a great reduction in labeling cost without sacrificing much prediction accuracy. However, most existing active learning methods disregard the "structured information" that may exist in medical images (e.g., data from individual patients), and make the simplifying assumption that unlabeled data are independently and identically distributed. Neither may be suitable for real-world medical images. In this paper, we propose a novel batch-mode active learning method which explores and leverages such structured information in annotations of medical images to enforce diversity among the selected data, thereby maximizing the information gain. We formulate the active learning problem as an adaptive submodular function maximization problem subject to a partition matroid constraint, and further present an efficient greedy algorithm that achieves a good solution with a theoretically proven bound. We demonstrate the efficacy of our algorithm on thousands of histopathological images of breast microscopic tissues. PMID:25320821
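
    A condensed sketch of greedy selection under a partition matroid, the constraint structure the paper uses to enforce diversity: the unlabeled images are partitioned (e.g., by patient), and at most `cap` images may be selected from each part. The gain function here is an invented stand-in for the paper's informativeness objective.

      def greedy_batch(groups, gain, budget, cap):
          """groups: dict part_id -> list of items; select `budget` items
          total, at most `cap` per part, greedily by marginal gain."""
          selected = []
          used = {g: 0 for g in groups}
          while len(selected) < budget:
              feasible = [(x, g) for g, xs in groups.items() if used[g] < cap
                          for x in xs if x not in selected]
              if not feasible:
                  break                              # matroid constraint exhausted
              x, g = max(feasible, key=lambda p: gain(selected, p[0]))
              selected.append(x)
              used[g] += 1
          return selected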

  3. A scalable neuristor built with Mott memristors

    NASA Astrophysics Data System (ADS)

    Pickett, Matthew D.; Medeiros-Ribeiro, Gilberto; Williams, R. Stanley

    2013-02-01

    The Hodgkin-Huxley model for action potential generation in biological axons is central for understanding the computational capability of the nervous system and emulating its functionality. Owing to the historical success of silicon complementary metal-oxide-semiconductors, spike-based computing is primarily confined to software simulations and specialized analogue metal-oxide-semiconductor field-effect transistor circuits. However, there is interest in constructing physical systems that emulate biological functionality more directly, with the goal of improving efficiency and scale. The neuristor was proposed as an electronic device with properties similar to the Hodgkin-Huxley axon, but previous implementations were not scalable. Here we demonstrate a neuristor built using two nanoscale Mott memristors, dynamical devices that exhibit transient memory and negative differential resistance arising from an insulating-to-conducting phase transition driven by Joule heating. This neuristor exhibits the important neural functions of all-or-nothing spiking with signal gain and diverse periodic spiking, using materials and structures that are amenable to extremely high-density integration with or without silicon transistors.

  4. A scalable neuristor built with Mott memristors.

    PubMed

    Pickett, Matthew D; Medeiros-Ribeiro, Gilberto; Williams, R Stanley

    2013-02-01

    The Hodgkin-Huxley model for action potential generation in biological axons is central for understanding the computational capability of the nervous system and emulating its functionality. Owing to the historical success of silicon complementary metal-oxide-semiconductors, spike-based computing is primarily confined to software simulations and specialized analogue metal-oxide-semiconductor field-effect transistor circuits. However, there is interest in constructing physical systems that emulate biological functionality more directly, with the goal of improving efficiency and scale. The neuristor was proposed as an electronic device with properties similar to the Hodgkin-Huxley axon, but previous implementations were not scalable. Here we demonstrate a neuristor built using two nanoscale Mott memristors, dynamical devices that exhibit transient memory and negative differential resistance arising from an insulating-to-conducting phase transition driven by Joule heating. This neuristor exhibits the important neural functions of all-or-nothing spiking with signal gain and diverse periodic spiking, using materials and structures that are amenable to extremely high-density integration with or without silicon transistors. PMID:23241533

  5. On the scalability of parallel genetic algorithms.

    PubMed

    Cantú-Paz, E; Goldberg, D E

    1999-01-01

    This paper examines the scalability of several types of parallel genetic algorithms (GAs). The objective is to determine the optimal number of processors that can be used by each type to minimize the execution time. The first part of the paper considers algorithms with a single population. The investigation focuses on an implementation where the population is distributed to several processors, but the results are applicable to more common master-slave implementations, where the population is entirely stored in a master processor and multiple slaves are used to evaluate the fitness. The second part of the paper deals with parallel GAs with multiple populations. It first considers a bounding case where the connectivity, the migration rate, and the frequency of migrations are set to their maximal values. Then, arbitrary regular topologies with lower migration rates are considered and the frequency of migrations is set to its lowest value. The investigation is mainly theoretical, but experimental evidence with an additively-decomposable function is included to illustrate the accuracy of the theory. In all cases, the calculations show that the optimal number of processors that minimizes the execution time is directly proportional to the square root of the population size and the fitness evaluation time. Since these two factors usually increase as the domain becomes more difficult, the results of the paper suggest that parallel GAs can integrate large numbers of processors and significantly reduce the execution time of many practical applications. PMID:10578030

  6. Scalable Silicon Nanostructuring for Thermoelectric Applications

    NASA Astrophysics Data System (ADS)

    Koukharenko, E.; Boden, S. A.; Platzek, D.; Bagnall, D. M.; White, N. M.

    2013-07-01

    The current limitations of commercially available thermoelectric (TE) generators include their incompatibility with human-body applications, due to the toxicity of commonly used alloys, and the possible future shortage of raw materials (Bi-Sb-Te and Se). In this respect, exploiting silicon as an environmentally friendly candidate for thermoelectric applications is a promising alternative, since it is an abundant, ecofriendly semiconductor for which there already exists an infrastructure for low-cost and high-yield processing. Contrary to existing approaches, where n/p-legs were either heavily doped to an optimal carrier concentration of 10¹⁹ cm⁻³ or morphologically modified by increasing their roughness, in this work improved thermoelectric performance was achieved in smooth silicon nanostructures with a low doping concentration (1.5 × 10¹⁵ cm⁻³). Scalable, highly reproducible e-beam lithography, which is compatible with nanoimprint and followed by deep reactive-ion etching (DRIE), was employed to produce arrays of regularly spaced nanopillars of 400 nm height with diameters varying from 140 nm to 300 nm. A potential Seebeck microprobe (PSM) was used to measure the Seebeck coefficients of these nanostructures, yielding values ranging from -75 μV/K to -120 μV/K for n-type and 100 μV/K to 140 μV/K for p-type, which are significant improvements over previously reported data.

  7. Quantum Information Processing using Scalable Techniques

    NASA Astrophysics Data System (ADS)

    Hanneke, D.; Bowler, R.; Jost, J. D.; Home, J. P.; Lin, Y.; Tan, T.-R.; Leibfried, D.; Wineland, D. J.

    2011-05-01

    We report progress towards improving our previous demonstrations that combined all the fundamental building blocks required for scalable quantum information processing using trapped atomic ions. Included elements are long-lived qubits; a laser-induced universal gate set; state initialization and readout; and information transport, including co-trapping a second ion species to reinitialize motion without qubit decoherence. Recent efforts have focused on reducing experimental overhead and increasing gate fidelity. Most of the experimental duty cycle was previously used for transport, separation, and recombination of ion chains as well as re-cooling of motional excitation. We have addressed these issues by developing and implementing an arbitrary waveform generator with an update rate far above the ions' motional frequencies. To reduce gate errors, we actively stabilize the position of several UV (313 nm) laser beams. We have also switched the two-qubit entangling gate to one that acts directly on 9Be+ hyperfine qubit states whose energy separation is magnetic-fluctuation insensitive. This work is supported by DARPA, NSA, ONR, IARPA, Sandia, and the NIST Quantum Information Program.

  8. Scalable office-based health care

    PubMed Central

    Koepp, Gabriel A.; Manohar, Chinmay U.; McCrady-Spitzer, Shelly K.; Levine, James A.

    2014-01-01

    The goal of healthcare is to provide high quality care at an affordable cost for its patients. However, the population it serves has changed dramatically since the popularization of hospital-based healthcare. With available new technology, alternative healthcare delivery methods can be designed and tested. This study examines Scalable Office Based Healthcare for Small Business, where healthcare is delivered to the office floor. This delivery was tested in 18 individuals at a small business in Minneapolis, Minnesota. The goal was to deliver modular healthcare and mitigate conditions such as diabetes, hyperlipidemia, obesity, sedentariness, and metabolic disease. The modular healthcare system was welcomed by employees – 70% of those eligible enrolled. The findings showed that the modular healthcare deliverable was feasible and effective. The data demonstrated significant improvements in weight loss, fat loss, and blood variables for at risk participants. This study leaves room for improvement and further innovation. Expansion to include offerings such as physicals, diabetes management, smoking cessation, and pre-natal treatment would improve its utility. Future studies could include testing the adaptability of delivery method, as it should adapt to reach rural and underserved populations. PMID:21471576

  9. Scalable Production of Molybdenum Disulfide Based Biosensors.

    PubMed

    Naylor, Carl H; Kybert, Nicholas J; Schneier, Camilla; Xi, Jin; Romero, Gabriela; Saven, Jeffery G; Liu, Renyu; Johnson, A T Charlie

    2016-06-28

    We demonstrate arrays of opioid biosensors based on chemical vapor deposition grown molybdenum disulfide (MoS2) field effect transistors (FETs) coupled to a computationally redesigned, water-soluble variant of the μ-opioid receptor (MOR). By transferring dense films of monolayer MoS2 crystals onto prefabricated electrode arrays, we obtain high-quality FETs with clean surfaces that allow for reproducible protein attachment. The fabrication yield of MoS2 FETs and biosensors exceeds 95%, with an average mobility of 2.0 cm² V⁻¹ s⁻¹ (36 cm² V⁻¹ s⁻¹) at room temperature under ambient (in vacuo). An atomic length nickel-mediated linker chemistry enables target binding events that occur very close to the MoS2 surface to maximize sensitivity. The biosensor response calibration curve for a synthetic opioid peptide known to bind to the wild-type MOR indicates binding affinity that matches values determined using traditional techniques and a limit of detection ∼3 nM (1.5 ng/mL). The combination of scalable array fabrication and rapid, precise binding readout enabled by the MoS2 transistor offers the prospect of a solid-state drug testing platform for rapid readout of the interactions between novel drugs and their intended protein targets. PMID:27227361

  10. Towards Scalable Optimal Sequence Homology Detection

    SciTech Connect

    Daily, Jeffrey A.; Krishnamoorthy, Sriram; Kalyanaraman, Anantharaman

    2012-12-26

    The field of bioinformatics and computational biology is experiencing a data revolution: experimental techniques to procure data have increased in throughput, improved in accuracy and reduced in costs. This has spurred an array of high-profile sequencing and data generation projects. While the data repositories represent untapped reservoirs of rich information critical for scientific breakthroughs, the analytical software tools that are needed to analyze large volumes of such sequence data have significantly lagged behind in their capacity to scale. In this paper, we address homology detection, which is a fundamental problem in large-scale sequence analysis with numerous applications. We present a scalable framework to conduct large-scale optimal homology detection on massively parallel supercomputing platforms. Our approach employs distributed-memory work stealing to effectively parallelize optimal pairwise alignment computation tasks. Results on 120,000 cores of the Hopper Cray XE6 supercomputer demonstrate strong scaling and up to 2.42 × 10⁷ optimal pairwise sequence alignments computed per second (PSAPS), the highest reported in the literature.

  11. Towards scalable electronic structure calculations for alloys

    SciTech Connect

    Stocks, G.M.; Nicholson, D.M.C.; Wang, Y.; Shelton, W.A.; Szotek, Z.; Temmermann, W.M.

    1994-06-01

    A new approach to calculating the properties of large systems within the local density approximation (LDA), offering the promise of scalability on massively parallel supercomputers, is outlined. The electronic structure problem is formulated in real space using multiple scattering theory. The standard LDA algorithm is divided into two parts: first, finding the self-consistent field (SCF) electron density; second, calculating the energy corresponding to the SCF density. We show, at least for metals and alloys, that the former problem is easily solved using real-space methods. For the second we take advantage of the variational properties of a generalized Harris-Foulkes free energy functional, a new conduction-band Fermi function, and a fictitious finite electron temperature that again allow us to use real-space methods. Using a compute-node → atom equivalence, the new method is naturally highly parallel and leads to O(N) scaling, where N is the number of atoms making up the system. We show scaling data gathered on the Intel XP/S 35 Paragon for systems of up to 512 atoms per simulation cell. To demonstrate that we can achieve metallurgical precision, we apply the new method to the calculation of the energies of disordered Cu₀.₅Zn₀.₅ alloys using a large random sample.

  12. Visual exploration of nasal airflow.

    PubMed

    Zachow, Stefan; Muigg, Philipp; Hildebrandt, Thomas; Doleisch, Helmut; Hege, Hans-Christian

    2009-01-01

    Rhinologists are often faced with the challenge of assessing nasal breathing from a functional point of view to derive effective therapeutic interventions. While the complex nasal anatomy can be revealed by visual inspection and medical imaging, only vague information is available regarding the nasal airflow itself: Rhinomanometry delivers rather unspecific integral information on the pressure gradient as well as on total flow and nasal flow resistance. In this article we demonstrate how the understanding of physiological nasal breathing can be improved by simulating and visually analyzing nasal airflow, based on an anatomically correct model of the upper human respiratory tract. In particular we demonstrate how various Information Visualization (InfoVis) techniques, such as a highly scalable implementation of parallel coordinates, time series visualizations, as well as unstructured grid multi-volume rendering, all integrated within a multiple linked views framework, can be utilized to gain a deeper understanding of nasal breathing. Evaluation is accomplished by visual exploration of spatio-temporal airflow characteristics that include not only information on flow features but also on accompanying quantities such as temperature and humidity. To our knowledge, this is the first in-depth visual exploration of the physiological function of the nose over several simulated breathing cycles under consideration of a complete model of the nasal airways, realistic boundary conditions, and all physically relevant time-varying quantities. PMID:19834215

  13. Memory Scalability and Efficiency Analysis of Parallel Codes

    SciTech Connect

    Janjusic, Tommy; Kartsaklis, Christos

    2015-01-01

    Memory scalability is an enduring problem and bottleneck that plagues many parallel codes. Parallel codes designed for High Performance Systems are typically designed over the span of several, and in some instances 10+, years. As a result, optimization practices which were appropriate for earlier systems may no longer be valid and thus require careful optimization consideration. Specifically, parallel codes whose memory footprint is a function of their scalability must be carefully considered for future exa-scale systems. In this paper we present a methodology and tool to study the memory scalability of parallel codes. Using our methodology we evaluate an application's memory footprint as a function of scalability, which we coined memory efficiency, and describe our results. In particular, using our in-house tools we can pinpoint the specific application components which contribute to the application's overall memory footprint (application data structures, libraries, etc.).

  14. Improving the Performance Scalability of the Community Atmosphere Model

    SciTech Connect

    Mirin, Arthur; Worley, Patrick H

    2012-01-01

    The Community Atmosphere Model (CAM), which serves as the atmosphere component of the Community Climate System Model (CCSM), is the most computationally expensive CCSM component in typical configurations. On current and next-generation leadership class computing systems, the performance of CAM is tied to its parallel scalability. Improving performance scalability in CAM has been a challenge, due largely to algorithmic restrictions necessitated by the polar singularities in its latitude-longitude computational grid. Nevertheless, through a combination of exploiting additional parallelism, implementing improved communication protocols, and eliminating scalability bottlenecks, we have been able to more than double the maximum throughput rate of CAM on production platforms. We describe these improvements and present results on the Cray XT5 and IBM BG/P. The approaches taken are not specific to CAM and may inform similar scalability enhancement activities for other codes.

  15. Three-dimensional Visualization of Cosmological and Galaxy Formation Simulations

    NASA Astrophysics Data System (ADS)

    Thooris, Bruno; Pomarède, Daniel

    2011-12-01

    Our understanding of the structuring of the Universe from large-scale cosmological structures down to the formation of galaxies now largely benefits from numerical simulations. The RAMSES code, relying on the Adaptive Mesh Refinement technique, is used to perform massively parallel simulations at multiple scales. The interactive, immersive, three-dimensional visualization of such complex simulations is a challenge that is addressed using the SDvision software package. Several rendering techniques are available, including ray-casting and isosurface reconstruction, to explore the simulated volumes at various resolution levels and construct temporal sequences. These techniques are illustrated in the context of different classes of simulations. We first report on the visualization of the HORIZON Galaxy Formation Simulation at MareNostrum, a cosmological simulation with detailed physics at work in the galaxy formation process. We then carry on in the context of an intermediate zoom simulation leading to the formation of a Milky-Way like galaxy. Finally, we present a variety of simulations of interacting galaxies, including a case-study of the Antennae Galaxies interaction.

  16. TriG: Next Generation Scalable Spaceborne GNSS Receiver

    NASA Technical Reports Server (NTRS)

    Tien, Jeffrey Y.; Okihiro, Brian Bachman; Esterhuizen, Stephan X.; Franklin, Garth W.; Meehan, Thomas K.; Munson, Timothy N.; Robison, David E.; Turbiner, Dmitry; Young, Lawrence E.

    2012-01-01

    TriG is the next-generation NASA scalable space GNSS science receiver. It will track all GNSS and additional signals (i.e., GPS, GLONASS, Galileo, Compass, and DORIS). Its scalable 3U architecture is fully software- and firmware-reconfigurable, enabling optimization to meet specific mission requirements. The TriG GNSS EM is currently undergoing testing and is expected to complete full performance testing later this year.

  17. BactoGeNIE: a large-scale comparative genome visualization for big displays

    PubMed Central

    2015-01-01

    Background: The volume of complete bacterial genome sequence data available to comparative genomics researchers is rapidly increasing. However, visualizations in comparative genomics--which aim to enable analysis tasks across collections of genomes--suffer from visual scalability issues. While large, multi-tiled and high-resolution displays have the potential to address scalability issues, new approaches are needed to take advantage of such environments, in order to enable the effective visual analysis of large genomics datasets. Results: In this paper, we present the Bacterial Gene Neighborhood Investigation Environment, or BactoGeNIE, a novel and visually scalable design for comparative gene neighborhood analysis on large display environments. We evaluate BactoGeNIE through a case study on close to 700 draft Escherichia coli genomes, and present lessons learned from our design process. Conclusions: BactoGeNIE accommodates comparative tasks over substantially larger collections of neighborhoods than existing tools and explicitly addresses visual scalability. Given current trends in data generation, scalable designs of this type may inform visualization design for large-scale comparative research problems in genomics. PMID:26329021

  18. BactoGeNIE: A large-scale comparative genome visualization for big displays

    DOE PAGES Beta

    Aurisano, Jillian; Reda, Khairi; Johnson, Andrew; Marai, Elisabeta G.; Leigh, Jason

    2015-08-13

    The volume of complete bacterial genome sequence data available to comparative genomics researchers is rapidly increasing. However, visualizations in comparative genomics--which aim to enable analysis tasks across collections of genomes--suffer from visual scalability issues. While large, multi-tiled and high-resolution displays have the potential to address scalability issues, new approaches are needed to take advantage of such environments, in order to enable the effective visual analysis of large genomics datasets. In this paper, we present Bacterial Gene Neighborhood Investigation Environment, or BactoGeNIE, a novel and visually scalable design for comparative gene neighborhood analysis on large display environments. We evaluate BactoGeNIE through a case study on close to 700 draft Escherichia coli genomes, and present lessons learned from our design process. In conclusion, BactoGeNIE accommodates comparative tasks over substantially larger collections of neighborhoods than existing tools and explicitly addresses visual scalability. Given current trends in data generation, scalable designs of this type may inform visualization design for large-scale comparative research problems in genomics.

  19. BactoGeNIE: A large-scale comparative genome visualization for big displays

    SciTech Connect

    Aurisano, Jillian; Reda, Khairi; Johnson, Andrew; Marai, Elisabeta G.; Leigh, Jason

    2015-08-13

    The volume of complete bacterial genome sequence data available to comparative genomics researchers is rapidly increasing. However, visualizations in comparative genomics--which aim to enable analysis tasks across collections of genomes--suffer from visual scalability issues. While large, multi-tiled and high-resolution displays have the potential to address scalability issues, new approaches are needed to take advantage of such environments, in order to enable the effective visual analysis of large genomics datasets. In this paper, we present Bacterial Gene Neighborhood Investigation Environment, or BactoGeNIE, a novel and visually scalable design for comparative gene neighborhood analysis on large display environments. We evaluate BactoGeNIE through a case study on close to 700 draft Escherichia coli genomes, and present lessons learned from our design process. In conclusion, BactoGeNIE accommodates comparative tasks over substantially larger collections of neighborhoods than existing tools and explicitly addresses visual scalability. Given current trends in data generation, scalable designs of this type may inform visualization design for large-scale comparative research problems in genomics.

  20. Highly Scalable Matching Pursuit Signal Decomposition Algorithm

    NASA Technical Reports Server (NTRS)

    Christensen, Daniel; Das, Santanu; Srivastava, Ashok N.

    2009-01-01

    Matching Pursuit Decomposition (MPD) is a powerful iterative algorithm for signal decomposition and feature extraction. MPD decomposes any signal into linear combinations of its dictionary elements, or atoms. A best-fit atom from an arbitrarily defined dictionary is determined through cross-correlation. The selected atom is subtracted from the signal and this procedure is repeated on the residual in subsequent iterations until a stopping criterion is met. The reconstructed signal reveals the waveform structure of the original signal. However, a sufficiently large dictionary is required for an accurate reconstruction; this in turn increases the computational burden of the algorithm, thus limiting its applicability and level of adoption. The purpose of this research is to improve the scalability and performance of the classical MPD algorithm. Correlation thresholds were defined to prune insignificant atoms from the dictionary. The Coarse-Fine Grids and Multiple Atom Extraction techniques were proposed to decrease the computational burden of the algorithm. The Coarse-Fine Grids method enabled the approximation and refinement of the parameters of the best-fit atom. The ability to extract multiple atoms within a single iteration enhanced the effectiveness and efficiency of each iteration. These improvements were implemented to produce an improved Matching Pursuit Decomposition algorithm named MPD++. Disparate signal decomposition applications may place particular emphasis on accuracy or on computational efficiency. The prominence of the key signal features required for proper signal classification dictates the level of accuracy necessary in the decomposition. The MPD++ algorithm may be easily adapted to accommodate the imposed requirements. Certain feature extraction applications may require rapid signal decomposition. The full potential of MPD++ may be utilized to produce substantial performance gains while extracting only slightly less energy than the
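
    The greedy loop described above is compact enough to sketch directly. The following Python/NumPy fragment is a minimal illustration of the classical MPD iteration, with a simple correlation threshold standing in for the dictionary-pruning idea; it assumes unit-norm atoms stored as dictionary columns and is a sketch, not the MPD++ implementation.

        import numpy as np

        def matching_pursuit(x, D, max_iter=100, tol=1e-6, corr_threshold=0.0):
            # Greedy MPD: decompose x over the unit-norm columns (atoms) of D.
            residual = x.astype(float).copy()
            coefficients = np.zeros(D.shape[1])
            for _ in range(max_iter):
                correlations = D.T @ residual                # cross-correlate every atom
                best = int(np.argmax(np.abs(correlations)))  # best-fit atom
                c = correlations[best]
                if abs(c) < corr_threshold or np.linalg.norm(residual) < tol:
                    break                                    # stopping criterion met
                coefficients[best] += c                      # accumulate the atom's weight
                residual -= c * D[:, best]                   # subtract, iterate on residual
            return coefficients, residual

        # Toy usage: random unit-norm dictionary, sparse synthetic signal.
        rng = np.random.default_rng(0)
        D = rng.standard_normal((256, 512))
        D /= np.linalg.norm(D, axis=0)
        x = 2.0 * D[:, 3] - 1.5 * D[:, 40]
        coeffs, res = matching_pursuit(x, D, corr_threshold=1e-3)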

  1. Responsive, Flexible and Scalable Broader Impacts (Invited)

    NASA Astrophysics Data System (ADS)

    Decharon, A.; Companion, C.; Steinman, M.

    2010-12-01

    In many educator professional development workshops, scientists present content in a slideshow-type format and field questions afterwards. Drawbacks of this approach include: the inability to begin the lecture with content that is responsive to audience needs; the lack of flexible access to specific material within the linear presentation; and the difficulty of scaling “Q&A” sessions to broader audiences. Often this type of traditional interaction provides little direct benefit to the scientists. The Centers for Ocean Sciences Education Excellence - Ocean Systems (COSEE-OS) applies the technique of concept mapping, with demonstrated effectiveness in helping scientists and educators “get on the same page” (deCharon et al., 2009). A key aspect is scientist professional development geared towards improving face-to-face and online communication with non-scientists. COSEE-OS promotes scientist-educator collaboration, tests the application of scientist-educator maps in new contexts through webinars, and is piloting the expansion of maps as long-lived resources for the broader community. Collaboration - COSEE-OS has developed and tested a workshop model bringing scientists and educators together in a peer-oriented process, often clarifying common misconceptions. Scientist-educator teams develop online concept maps that are hyperlinked to “assets” (i.e., images, videos, news) and are responsive to the needs of non-scientist audiences. In workshop evaluations, 91% of educators said that the process of concept mapping helped them think through science topics and 89% said that concept mapping helped build a bridge of communication with scientists (n=53). Application - After developing a concept map, with COSEE-OS staff assistance, scientists are invited to give webinar presentations that include live “Q&A” sessions. The webinars extend the reach of scientist-created concept maps to new contexts, both geographically and topically (e.g., oil spill), with a relatively small

  2. An HEVC extension for spatial and quality scalable video coding

    NASA Astrophysics Data System (ADS)

    Hinz, Tobias; Helle, Philipp; Lakshman, Haricharan; Siekmann, Mischa; Stegemann, Jan; Schwarz, Heiko; Marpe, Detlev; Wiegand, Thomas

    2013-02-01

    This paper describes an extension of the upcoming High Efficiency Video Coding (HEVC) standard for supporting spatial and quality scalable video coding. Besides scalable coding tools known from scalable profiles of prior video coding standards such as H.262/MPEG-2 Video and H.264/MPEG-4 AVC, the proposed scalable HEVC extension includes new coding tools that further improve the coding efficiency of the enhancement layer. In particular, new coding modes, by which base and enhancement layer signals are combined to form an improved enhancement layer prediction signal, have been added. All scalable coding tools have been integrated in such a way that the low-level syntax and decoding process of HEVC remain unchanged to a large extent. Simulation results for typical application scenarios demonstrate the effectiveness of the proposed design. For spatial and quality scalable coding with two layers, bit-rate savings of about 20-30% have been measured relative to simulcasting the layers, which corresponds to a bit-rate overhead of about 5-15% relative to single-layer coding of the enhancement layer.
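
    The two figures quoted above are linked by simple rate arithmetic. For illustration only, assume a base layer costing half the rate of single-layer coding of the enhancement layer, $R_B = 0.5\,R_E$; then

        $$R_{\mathrm{simulcast}} = R_B + R_E = 1.5\,R_E, \qquad R_{\mathrm{scalable}} = 0.75\,R_{\mathrm{simulcast}} = 1.125\,R_E,$$

    so a 25% saving relative to simulcast corresponds to an overhead of $R_{\mathrm{scalable}}/R_E - 1 = 12.5\%$ relative to single-layer coding of the enhancement layer, which falls inside the quoted 5-15% range.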

  3. Scalable Machine Learning for Massive Astronomical Datasets

    NASA Astrophysics Data System (ADS)

    Ball, Nicholas M.; Astronomy Data Centre, Canadian

    2014-01-01

    We present the ability to perform data mining and machine learning operations on a catalog of half a billion astronomical objects. This is the result of the combination of robust, highly accurate machine learning algorithms with linear scalability that renders the application of these algorithms to massive astronomical data tractable. We demonstrate the core algorithms: kernel density estimation, K-means clustering, linear regression, nearest neighbors, random forest and gradient-boosted decision tree, singular value decomposition, support vector machine, and the two-point correlation function. Each of these is relevant for astronomical applications such as finding novel astrophysical objects, characterizing artifacts in data, object classification (including for rare objects), object distances, finding the important features describing objects, density estimation of distributions, probabilistic quantities, and exploring the unknown structure of new data. The software, Skytree Server, runs on any UNIX-based machine, a virtual machine, or cloud-based and distributed systems including Hadoop. We have integrated it on the cloud computing system of the Canadian Astronomical Data Centre, the Canadian Advanced Network for Astronomical Research (CANFAR), creating the world's first cloud computing data mining system for astronomy. We demonstrate results showing the scaling of each of our major algorithms on large astronomical datasets, including the full 470,992,970 objects of the 2 Micron All-Sky Survey (2MASS) Point Source Catalog. We demonstrate the ability to find outliers in the full 2MASS dataset utilizing multiple methods, e.g., nearest neighbors and the local outlier factor. 2MASS is used as a proof-of-concept dataset due to its convenience and availability. These results are of interest to any astronomical project with large and/or complex datasets that wishes to extract the full scientific value from its data.
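
    Skytree Server itself is proprietary, so as a generic stand-in the outlier-detection step can be sketched with scikit-learn's LocalOutlierFactor on synthetic two-color photometric features; the column choices and values below are assumptions for illustration, not 2MASS specifics.

        import numpy as np
        from sklearn.neighbors import LocalOutlierFactor

        # Stand-in catalog: synthetic (J-H, H-K) colors for 10,000 sources.
        rng = np.random.default_rng(1)
        colors = rng.normal(loc=[0.5, 0.15], scale=0.1, size=(10_000, 2))
        colors[:5] += 1.5                        # inject a few anomalous sources

        lof = LocalOutlierFactor(n_neighbors=20)
        labels = lof.fit_predict(colors)         # -1 flags outliers, +1 inliers
        scores = lof.negative_outlier_factor_    # more negative = more anomalous
        most_anomalous = np.argsort(scores)[:5]  # indices of the top outliers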

  4. Scalable Machine Learning for Massive Astronomical Datasets

    NASA Astrophysics Data System (ADS)

    Ball, Nicholas M.; Gray, A.

    2014-04-01

    We present the ability to perform data mining and machine learning operations on a catalog of half a billion astronomical objects. This is the result of the combination of robust, highly accurate machine learning algorithms with linear scalability that renders the application of these algorithms to massive astronomical data tractable. We demonstrate the core algorithms: kernel density estimation, K-means clustering, linear regression, nearest neighbors, random forest and gradient-boosted decision tree, singular value decomposition, support vector machine, and the two-point correlation function. Each of these is relevant for astronomical applications such as finding novel astrophysical objects, characterizing artifacts in data, object classification (including for rare objects), object distances, finding the important features describing objects, density estimation of distributions, probabilistic quantities, and exploring the unknown structure of new data. The software, Skytree Server, runs on any UNIX-based machine, a virtual machine, or cloud-based and distributed systems including Hadoop. We have integrated it on the cloud computing system of the Canadian Astronomical Data Centre, the Canadian Advanced Network for Astronomical Research (CANFAR), creating the world's first cloud computing data mining system for astronomy. We demonstrate results showing the scaling of each of our major algorithms on large astronomical datasets, including the full 470,992,970 objects of the 2 Micron All-Sky Survey (2MASS) Point Source Catalog. We demonstrate the ability to find outliers in the full 2MASS dataset utilizing multiple methods, e.g., nearest neighbors. This is likely of particular interest to the radio astronomy community given, for example, that survey projects contain groups dedicated to this topic. 2MASS is used as a proof-of-concept dataset due to its convenience and availability. These results are of interest to any astronomical project with large and/or complex

  5. A Robust Scalable Transportation System Concept

    NASA Technical Reports Server (NTRS)

    Hahn, Andrew; DeLaurentis, Daniel

    2006-01-01

    This report documents the 2005 Revolutionary System Concept for Aeronautics (RSCA) study entitled "A Robust, Scalable Transportation System Concept". The objective of the study was to generate, at a high level of abstraction, characteristics of a new concept for the National Airspace System, or the new NAS, under which transportation goals such as increased throughput, delay reduction, and improved robustness could be realized. Since such an objective can be overwhelmingly complex if pursued at the lowest levels of detail, a System-of-Systems (SoS) approach was instead adopted to model alternative air transportation architectures at a high level. The SoS approach allows the consideration of not only the technical aspects of the NAS, but also incorporates policy, socio-economic, and alternative transportation system considerations into one architecture. While the representations of the individual systems are basic, the higher-level approach allows for ways to optimize the SoS at the network level, determining the best topology (i.e., configuration of nodes and links). The final product (concept) is a set of rules of behavior and network structure that not only satisfies national transportation goals, but represents the high-impact rules that accomplish those goals by getting the agents to "do the right thing" naturally. The novel combination of Agent-Based Modeling and Network Theory provides the core analysis methodology in the System-of-Systems approach. Our approach is non-deterministic, which means, fundamentally, that it asks and answers different questions than deterministic models. The non-deterministic method is necessary primarily due to our marriage of human systems with technological ones in a partially unknown set of future worlds. Our goal is to understand and simulate how the SoS, human and technological components combined, evolves.

  6. Parallel Heuristics for Scalable Community Detection

    SciTech Connect

    Lu, Howard; Kalyanaraman, Anantharaman; Halappanavar, Mahantesh; Choudhury, Sutanay

    2014-05-17

    Community detection has become a fundamental operation in numerous graph-theoretic applications. It is used to reveal natural divisions that exist within real world networks without imposing prior size or cardinality constraints on the set of communities. Despite its potential for application, there is only limited support for community detection on large-scale parallel computers, largely owing to the irregular and inherently sequential nature of the underlying heuristics. In this paper, we present parallelization heuristics for fast community detection using the Louvain method as the serial template. The Louvain method is an iterative heuristic for modularity optimization. Originally developed by Blondel et al. in 2008, the method has become increasingly popular owing to its ability to detect high modularity community partitions in a fast and memory-efficient manner. However, the method is also inherently sequential, thereby limiting its scalability to problems that can be solved on desktops. Here, we observe certain key properties of this method that present challenges for its parallelization, and consequently propose multiple heuristics that are designed to break the sequential barrier. Our heuristics are agnostic to the underlying parallel architecture. For evaluation purposes, we implemented our heuristics on shared memory (OpenMP) and distributed memory (MapReduce-MPI) machines, and tested them over real world graphs derived from multiple application domains (internet, biological, natural language processing). Experimental results demonstrate the ability of our heuristics to converge to high modularity solutions comparable to those output by the serial algorithm in nearly the same number of iterations, while also drastically reducing time to solution.
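
    For reference, the quantity the Louvain method optimizes is the Newman-Girvan modularity, and its local-move phase repeatedly evaluates the standard gain formula from Blondel et al. (2008):

        $$Q = \frac{1}{2m}\sum_{i,j}\left[A_{ij} - \frac{k_i k_j}{2m}\right]\delta(c_i, c_j),$$

        $$\Delta Q = \left[\frac{\Sigma_{\mathrm{in}} + k_{i,\mathrm{in}}}{2m} - \left(\frac{\Sigma_{\mathrm{tot}} + k_i}{2m}\right)^{2}\right] - \left[\frac{\Sigma_{\mathrm{in}}}{2m} - \left(\frac{\Sigma_{\mathrm{tot}}}{2m}\right)^{2} - \left(\frac{k_i}{2m}\right)^{2}\right],$$

    where $m$ is the total edge weight, $k_i$ the degree of node $i$, $\Sigma_{\mathrm{in}}$ and $\Sigma_{\mathrm{tot}}$ the internal and incident edge weights of the candidate community, and $k_{i,\mathrm{in}}$ the weight of links from $i$ into it. The parallelization challenge stems from these gains being evaluated against a community assignment that other threads may be mutating concurrently.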

  7. Scalable tensor factorizations with missing data.

    SciTech Connect

    Morup, Morten; Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson

    2010-04-01

    The problem of missing data is ubiquitous in domains such as biomedical signal processing, network traffic analysis, bibliometrics, social network analysis, chemometrics, computer vision, and communication networks--all domains in which data collection is subject to occasional errors. Moreover, these data sets can be quite large and have more than two axes of variation, e.g., sender, receiver, time. Many applications in those domains aim to capture the underlying latent structure of the data; in other words, they need to factorize data sets with missing entries. If we cannot address the problem of missing data, many important data sets will be discarded or improperly analyzed. Therefore, we need a robust and scalable approach for factorizing multi-way arrays (i.e., tensors) in the presence of missing data. We focus on one of the most well-known tensor factorizations, CANDECOMP/PARAFAC (CP), and formulate the CP model as a weighted least squares problem that models only the known entries. We develop an algorithm called CP-WOPT (CP Weighted OPTimization) using a first-order optimization approach to solve the weighted least squares problem. Based on extensive numerical experiments, our algorithm is shown to successfully factor tensors with noise and up to 70% missing data. Moreover, our approach is significantly faster than the leading alternative and scales to larger problems. To show the real-world usefulness of CP-WOPT, we illustrate its applicability on a novel EEG (electroencephalogram) application where missing data is frequently encountered due to disconnections of electrodes.
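
    Concretely, CP-WOPT minimizes $f(\mathbf{A},\mathbf{B},\mathbf{C}) = \|\mathcal{W} \ast (\mathcal{X} - [\![\mathbf{A},\mathbf{B},\mathbf{C}]\!])\|^2$, where $\mathcal{W}$ is a binary indicator of the known entries. The NumPy sketch below fits a third-order CP model with plain gradient descent standing in for the paper's first-order solver; the step size and initialization are illustrative assumptions.

        import numpy as np

        def khatri_rao(U, V):
            # Column-wise Kronecker product: (m*n, R) from (m, R) and (n, R).
            return np.einsum('ir,jr->ijr', U, V).reshape(-1, U.shape[1])

        def cp_wopt(X, W, rank, n_iter=500, lr=1e-3, seed=0):
            # Fit a rank-R CP model to the observed entries only (W == 1).
            I, J, K = X.shape
            rng = np.random.default_rng(seed)
            A, B, C = (0.1 * rng.standard_normal((n, rank)) for n in (I, J, K))
            Xf = np.where(W, X, 0.0)                     # zero-fill missing entries
            for _ in range(n_iter):
                M = np.einsum('ir,jr,kr->ijk', A, B, C)  # current reconstruction
                E = W * (M - Xf)                         # error on known entries only
                A -= lr * E.reshape(I, -1) @ khatri_rao(B, C)
                B -= lr * E.transpose(1, 0, 2).reshape(J, -1) @ khatri_rao(A, C)
                C -= lr * E.transpose(2, 0, 1).reshape(K, -1) @ khatri_rao(A, B)
            return A, B, C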

  8. Physical principles for scalable neural recording.

    PubMed

    Marblestone, Adam H; Zamft, Bradley M; Maguire, Yael G; Shapiro, Mikhail G; Cybulski, Thaddeus R; Glaser, Joshua I; Amodei, Dario; Stranges, P Benjamin; Kalhor, Reza; Dalrymple, David A; Seo, Dongjin; Alon, Elad; Maharbiz, Michel M; Carmena, Jose M; Rabaey, Jan M; Boyden, Edward S; Church, George M; Kording, Konrad P

    2013-01-01

    Simultaneously measuring the activities of all neurons in a mammalian brain at millisecond resolution is a challenge beyond the limits of existing techniques in neuroscience. Entirely new approaches may be required, motivating an analysis of the fundamental physical constraints on the problem. We outline the physical principles governing brain activity mapping using optical, electrical, magnetic resonance, and molecular modalities of neural recording. Focusing on the mouse brain, we analyze the scalability of each method, concentrating on the limitations imposed by spatiotemporal resolution, energy dissipation, and volume displacement. Based on this analysis, all existing approaches require orders of magnitude improvement in key parameters. Electrical recording is limited by the low multiplexing capacity of electrodes and their lack of intrinsic spatial resolution, optical methods are constrained by the scattering of visible light in brain tissue, magnetic resonance is hindered by the diffusion and relaxation timescales of water protons, and the implementation of molecular recording is complicated by the stochastic kinetics of enzymes. Understanding the physical limits of brain activity mapping may provide insight into opportunities for novel solutions. For example, unconventional methods for delivering electrodes may enable unprecedented numbers of recording sites, embedded optical devices could allow optical detectors to be placed within a few scattering lengths of the measured neurons, and new classes of molecularly engineered sensors might obviate cumbersome hardware architectures. We also study the physics of powering and communicating with microscale devices embedded in brain tissue and find that, while radio-frequency electromagnetic data transmission suffers from a severe power-bandwidth tradeoff, communication via infrared light or ultrasound may allow high data rates due to the possibility of spatial multiplexing. The use of embedded local recording and
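
    The power-bandwidth tradeoff cited above for radio-frequency telemetry follows directly from the Shannon capacity of a noise-limited channel,

        $$C = B \log_2\!\left(1 + \frac{P}{N_0 B}\right) \;\longrightarrow\; \frac{P}{N_0 \ln 2} \quad \text{as } B \to \infty,$$

    so at a fixed transmit power $P$ and noise density $N_0$, widening the bandwidth $B$ yields diminishing returns against a hard ceiling; spatial multiplexing over many independent infrared or ultrasonic channels is attractive precisely because it multiplies this per-channel limit.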

  9. Microscopic Characterization of Scalable Coherent Rydberg Superatoms

    NASA Astrophysics Data System (ADS)

    Zeiher, Johannes; Schauß, Peter; Hild, Sebastian; Macrì, Tommaso; Bloch, Immanuel; Gross, Christian

    2015-07-01

    Strong interactions can amplify quantum effects such that they become important on macroscopic scales. Controlling these coherently on a single-particle level is essential for the tailored preparation of strongly correlated quantum systems and opens up new prospects for quantum technologies. Rydberg atoms offer such strong interactions, which lead to extreme nonlinearities in laser-coupled atomic ensembles. As a result, multiple excitation of a micrometer-sized cloud can be blocked while the light-matter coupling becomes collectively enhanced. The resulting two-level system, often called a "superatom," is a valuable resource for quantum information, providing a collective qubit. Here, we report on the preparation of superatoms scalable over 2 orders of magnitude, utilizing the large interaction strength provided by Rydberg atoms combined with precise control of an ensemble of ultracold atoms in an optical lattice. The latter is achieved with sub-shot-noise precision by local manipulation of a two-dimensional Mott insulator. We microscopically confirm the superatom picture by in situ detection of the Rydberg excitations and observe the characteristic square-root scaling of the optical coupling with the number of atoms. Enabled by the full control over the atomic sample, including the motional degrees of freedom, we infer the overlap of the produced many-body state with a W state from the observed Rabi oscillations and deduce the presence of entanglement. Finally, we investigate the breakdown of the superatom picture when two Rydberg excitations are present in the system, which leads to dephasing and a loss of coherence.
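
    The square-root scaling observed in the experiment is the collective enhancement of the coupling between the light field and the symmetric singly excited (W) state of the $N$-atom ensemble:

        $$\Omega_N = \sqrt{N}\,\Omega_0, \qquad |W\rangle = \frac{1}{\sqrt{N}} \sum_{i=1}^{N} |g_1 \dots r_i \dots g_N\rangle,$$

    where $\Omega_0$ is the single-atom Rabi frequency and $r_i$ marks the $i$-th atom in the Rydberg state.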

  10. Myria: Scalable Analytics as a Service

    NASA Astrophysics Data System (ADS)

    Howe, B.; Halperin, D.; Whitaker, A.

    2014-12-01

    At the UW eScience Institute, we're working to empower non-experts, especially in the sciences, to write and use data-parallel algorithms. To this end, we are building Myria, a web-based platform for scalable analytics and data-parallel programming. Myria's internal model of computation is the relational algebra extended with iteration, such that every program is inherently data-parallel, just as every query in a database is inherently data-parallel. But unlike databases, iteration is a first class concept, allowing us to express machine learning tasks, graph traversal tasks, and more. Programs can be expressed in a number of languages and can be executed on a number of execution environments, but we emphasize a particular language called MyriaL that supports both imperative and declarative styles and a particular execution engine called MyriaX that uses an in-memory column-oriented representation and asynchronous iteration. We deliver Myria over the web as a service, providing an editor, performance analysis tools, and catalog browsing features in a single environment. We find that this web-based "delivery vector" is critical in reaching non-experts: they are insulated from irrelevant technical work associated with installation, configuration, and resource management. The MyriaX backend, one of several execution runtimes we support, is a main-memory, column-oriented, RDBMS-on-the-worker system that supports cyclic data flows as a first-class citizen and has been shown to outperform competitive systems on 100-machine cluster sizes. I will describe the Myria system, give a demo, and present some new results in large-scale oceanographic microbiology.

  11. Physical principles for scalable neural recording

    PubMed Central

    Zamft, Bradley M.; Maguire, Yael G.; Shapiro, Mikhail G.; Cybulski, Thaddeus R.; Glaser, Joshua I.; Amodei, Dario; Stranges, P. Benjamin; Kalhor, Reza; Dalrymple, David A.; Seo, Dongjin; Alon, Elad; Maharbiz, Michel M.; Carmena, Jose M.; Rabaey, Jan M.; Boyden, Edward S.; Church, George M.; Kording, Konrad P.

    2013-01-01

    Simultaneously measuring the activities of all neurons in a mammalian brain at millisecond resolution is a challenge beyond the limits of existing techniques in neuroscience. Entirely new approaches may be required, motivating an analysis of the fundamental physical constraints on the problem. We outline the physical principles governing brain activity mapping using optical, electrical, magnetic resonance, and molecular modalities of neural recording. Focusing on the mouse brain, we analyze the scalability of each method, concentrating on the limitations imposed by spatiotemporal resolution, energy dissipation, and volume displacement. Based on this analysis, all existing approaches require orders of magnitude improvement in key parameters. Electrical recording is limited by the low multiplexing capacity of electrodes and their lack of intrinsic spatial resolution, optical methods are constrained by the scattering of visible light in brain tissue, magnetic resonance is hindered by the diffusion and relaxation timescales of water protons, and the implementation of molecular recording is complicated by the stochastic kinetics of enzymes. Understanding the physical limits of brain activity mapping may provide insight into opportunities for novel solutions. For example, unconventional methods for delivering electrodes may enable unprecedented numbers of recording sites, embedded optical devices could allow optical detectors to be placed within a few scattering lengths of the measured neurons, and new classes of molecularly engineered sensors might obviate cumbersome hardware architectures. We also study the physics of powering and communicating with microscale devices embedded in brain tissue and find that, while radio-frequency electromagnetic data transmission suffers from a severe power–bandwidth tradeoff, communication via infrared light or ultrasound may allow high data rates due to the possibility of spatial multiplexing. The use of embedded local recording and

  12. Scalability of Localized Arc Filament Plasma Actuators

    NASA Technical Reports Server (NTRS)

    Brown, Clifford A.

    2008-01-01

    Temporal flow control of a jet has been widely studied in the past to enhance jet mixing or reduce jet noise. Most of this research, however, has been done using small-diameter, low-Reynolds-number jets that often have little resemblance to the much larger jets common in real-world applications, because the flow actuators available lacked either the power or bandwidth to sufficiently impact these larger, higher-energy jets. The Localized Arc Filament Plasma Actuators (LAFPA), developed at the Ohio State University (OSU), have demonstrated the ability to impact a small high-speed jet in experiments conducted at OSU and the power to perturb a larger high-Reynolds-number jet in experiments conducted at the NASA Glenn Research Center. However, the response measured in the large-scale experiments was significantly reduced for the same number of actuators compared to the jet response found in the small-scale experiments. A computational study has been initiated to simulate the LAFPA system with additional actuators on a large-scale jet to determine the number of actuators required to achieve the same desired response for a given jet diameter. Central to this computational study is a model for the LAFPA that both accurately represents the physics of the actuator and can be implemented into a computational fluid dynamics solver. One possible model, based on pressure waves created by the rapid localized heating that occurs at the actuator, is investigated using simplified axisymmetric simulations. The results of these simulations will be used to determine the validity of the model before more realistic and time-consuming three-dimensional simulations are conducted to ultimately determine the scalability of the LAFPA system.

  13. Visual Scripting.

    ERIC Educational Resources Information Center

    Halas, John

    Visual scripting is the coordination of words with pictures in sequence. This book presents the methods and viewpoints on visual scripting of fourteen film makers, from nine countries, who are involved in animated cinema; it contains concise examples of how a storybook and preproduction script can be prepared in visual terms; and it includes a…

  14. Freeprocessing: Transparent in situ visualization via data interception

    PubMed Central

    Fogal, Thomas; Proch, Fabian; Schiewe, Alexander; Hasemann, Olaf; Kempf, Andreas; Krüger, Jens

    2014-01-01

    In situ visualization has become a popular method for avoiding the slowest component of many visualization pipelines: reading data from disk. Most previous in situ work has focused on achieving visualization scalability on par with simulation codes, or on the data movement concerns that become prevalent at extreme scales. In this work, we consider in situ analysis with respect to ease of use and programmability. We describe an abstraction that opens up new applications for in situ visualization, and demonstrate that this abstraction and an expanded set of use cases can be realized without a performance cost. PMID:25995996
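
    The interception abstraction can be conveyed with a toy Python analogue: wrap the simulation's existing dump routine so every buffer headed to disk is also handed to a visualization callback, with no change to the simulation's own code path. Freeprocessing itself interposes on native I/O calls; the decorator and names below are purely illustrative.

        import functools
        import numpy as np

        def freeprocess(visualize):
            # Wrap a writer so the in situ hook sees each buffer before it
            # reaches disk -- a toy stand-in for I/O-call interception.
            def wrap(dump):
                @functools.wraps(dump)
                def intercepted(filename, data):
                    visualize(data)               # in situ analysis first
                    return dump(filename, data)   # then the original write
                return intercepted
            return wrap

        def render_stats(data):
            print("in situ: field min/max =", data.min(), data.max())

        @freeprocess(render_stats)
        def dump(filename, data):
            np.save(filename, data)               # the simulation's writer

        dump("step0042.npy", np.random.rand(64, 64, 64))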

  15. A scalable climate health justice assessment model

    PubMed Central

    McDonald, Yolanda J.; Grineski, Sara E.; Collins, Timothy W.; Kim, Young-An

    2014-01-01

    This paper introduces a scalable “climate health justice” model for assessing and projecting incidence, treatment costs, and sociospatial disparities for diseases with well-documented climate change linkages. The model is designed to employ low-cost secondary data, and it is rooted in a perspective that merges normative environmental justice concerns with theoretical grounding in health inequalities. Since the model employs International Classification of Diseases, Ninth Revision Clinical Modification (ICD-9-CM) disease codes, it is transferable to other contexts, appropriate for use across spatial scales, and suitable for comparative analyses. We demonstrate the utility of the model through analysis of 2008–2010 hospitalization discharge data at state and county levels in Texas (USA). We identified several disease categories (i.e., cardiovascular, gastrointestinal, heat-related, and respiratory) associated with climate change, and then selected corresponding ICD-9 codes with the highest hospitalization counts for further analyses. Selected diseases include ischemic heart disease, diarrhea, heat exhaustion/cramps/stroke/syncope, and asthma. Cardiovascular disease ranked first among the general categories of diseases for age-adjusted hospital admission rate (5286.37 per 100,000). In terms of specific selected diseases (per 100,000 population), asthma ranked first (517.51), followed by ischemic heart disease (195.20), diarrhea (75.35), and heat exhaustion/cramps/stroke/syncope (7.81). Charges associated with the selected diseases over the 3-year period amounted to US$5.6 billion. Blacks were disproportionately burdened by the selected diseases in comparison to non-Hispanic whites, while Hispanics were not. Spatial distributions of the selected disease rates revealed geographic zones of disproportionate risk. Based upon a downscaled regional climate-change projection model, we estimate a >5% increase in the incidence and treatment costs of asthma attributable to
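
    Because the model is driven by ICD-9-CM codes and census denominators, its core computation reduces to grouped hospitalization counts normalized per 100,000 population. A schematic pandas sketch follows; the column names, code selection, and populations are illustrative assumptions, not the study's actual schema or data.

        import pandas as pd

        # Hypothetical discharge records: one row per hospitalization.
        discharges = pd.DataFrame({
            "county": ["El Paso", "El Paso", "Harris", "Harris", "Harris"],
            "icd9":   ["493",     "410",     "493",    "558",    "992"],
        })
        population = pd.Series({"El Paso": 800_000, "Harris": 4_100_000})

        # Illustrative climate-linked ICD-9 categories (asthma, ischemic
        # heart disease, gastroenteritis, heat illness).
        selected = {"493", "410", "558", "992"}
        counts = (discharges[discharges["icd9"].isin(selected)]
                  .groupby(["county", "icd9"]).size().unstack(fill_value=0))
        rates_per_100k = counts.div(population, axis=0) * 100_000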

  16. WIFIRE: A Scalable Data-Driven Monitoring, Dynamic Prediction and Resilience Cyberinfrastructure for Wildfires

    NASA Astrophysics Data System (ADS)

    Altintas, I.; Block, J.; Braun, H.; de Callafon, R. A.; Gollner, M. J.; Smarr, L.; Trouve, A.

    2013-12-01

    Recent studies confirm that climate change will cause wildfires to increase in frequency and severity in the coming decades, especially in California and much of the North American West. The most critical sustainability issue in the midst of these ever-changing dynamics is how to achieve a new social-ecological equilibrium of this fire ecology. Wildfire wind speeds and directions change in an instant, and first responders can only be effective when they take action as quickly as the conditions change. To deliver information needed for sustainable policy and management in this dynamically changing fire regime, we must capture these details to understand the environmental processes. We are building an end-to-end cyberinfrastructure (CI), called WIFIRE, for real-time and data-driven simulation, prediction and visualization of wildfire behavior. The WIFIRE integrated CI system supports social-ecological resilience to the changing fire ecology regime in the face of urban dynamics and climate change. Networked observations, e.g., heterogeneous satellite data and real-time remote sensor data, are integrated with computational techniques in signal processing, visualization, modeling and data assimilation to provide a scalable, technological, and educational solution to monitor weather patterns to predict a wildfire's Rate of Spread. Our collaborative WIFIRE team of scientists, engineers, technologists, government policy managers, private industry, and firefighters architects and implements CI pathways that enable joint innovation for wildfire management. Scientific workflows are used as an integrative distributed programming model and simplify the implementation of engineering modules for data-driven simulation, prediction and visualization while allowing integration with large-scale computing facilities. WIFIRE will be scalable to users with different skill levels via specialized web interfaces and user-specified alerts for environmental events broadcast to receivers before

  17. Visual Imagery without Visual Perception?

    ERIC Educational Resources Information Center

    Bertolo, Helder

    2005-01-01

    The question regarding the relationship between visual imagery and visual perception remains an open issue. Many studies have tried to understand whether the two processes share the same mechanisms or whether they are independent, using different neural substrates. Most research has been directed towards the need for activation of primary visual areas during imagery. Here we review…

  18. FMOE-MR: content-driven multiresolution MPEG-4 fine grained scalable layered video encoding

    NASA Astrophysics Data System (ADS)

    Chattopadhyay, S.; Luo, X.; Bhandarkar, S. M.; Li, K.

    2007-01-01

    The MPEG-4 Fine Grained Scalability (FGS) profile aims at scalable layered video encoding, in order to ensure efficient video streaming in networks with fluctuating bandwidths. In this paper, we propose a novel technique, termed FMOE-MR, which delivers significantly improved rate-distortion performance compared to existing MPEG-4 Base Layer encoding techniques. The video frames are re-encoded at high resolution in semantically and visually important regions of the video (termed Features, Motion and Objects), which are defined using a mask (the FMO-Mask), and at low resolution in the remaining regions. The multiple-resolution re-rendering step is implemented such that further MPEG-4 compression leads to low-bit-rate Base Layer video encoding. The Features, Motion and Objects Encoded-Multi-Resolution (FMOE-MR) scheme is an integrated approach that requires only encoder-side modifications, and is transparent to the decoder. Further, since the FMOE-MR scheme incorporates "smart" video preprocessing, it requires no change in existing MPEG-4 codecs. As a result, it is straightforward to use the proposed FMOE-MR scheme with any existing MPEG codec, thus allowing great flexibility in implementation. In this paper, we describe and implement unsupervised and semi-supervised algorithms to create the FMO-Mask from a given video sequence, using state-of-the-art computer vision algorithms.
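
    Because FMOE-MR is pure encoder-side preprocessing, the multi-resolution re-rendering step can be sketched independently of any codec: keep full resolution inside the FMO-Mask and re-render everything else at low resolution before handing frames to a standard encoder. The OpenCV/NumPy fragment below is a hedged illustration; the downsampling factor and the mask itself are assumptions.

        import cv2
        import numpy as np

        def fmoe_preprocess(frame, mask, factor=4):
            # Degrade regions outside the binary FMO mask by a round trip
            # through lower resolution, so the encoder spends fewer bits there.
            h, w = frame.shape[:2]
            low = cv2.resize(frame, (w // factor, h // factor),
                             interpolation=cv2.INTER_AREA)
            low = cv2.resize(low, (w, h), interpolation=cv2.INTER_LINEAR)
            keep = mask[..., None] > 0           # broadcast mask over channels
            return np.where(keep, frame, low)    # full res inside, low res outside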

  19. A scalable multi-DLP pico-projector system for virtual reality

    NASA Astrophysics Data System (ADS)

    Teubl, F.; Kurashima, C.; Cabral, M.; Fels, S.; Lopes, R.; Zuffo, M.

    2014-03-01

    Virtual Reality (VR) environments can offer immersion, interaction and realistic images to users. A VR system is usually expensive and requires special equipment in a complex setup. One approach is to use Commodity-Off-The-Shelf (COTS) desktop multi-projectors, calibrated manually or by camera, to reduce the cost of VR systems without a significant decrease in visual experience. Additionally, for non-planar screen shapes, special optics such as lenses and mirrors are required, thus increasing costs. We propose a low-cost, scalable, flexible and mobile solution that allows building complex VR systems that project images onto a variety of arbitrary surfaces such as planar, cylindrical and spherical surfaces. This approach combines three key aspects: 1) clusters of DLP pico-projectors to provide homogeneous and continuous pixel density upon arbitrary surfaces without additional optics; 2) LED lighting technology for energy efficiency and light control; 3) a smaller physical footprint for flexibility purposes. Therefore, the proposed system is scalable in terms of pixel density, energy and physical space. To achieve these goals, we developed a multi-projector software library called FastFusion that calibrates all projectors into a uniform image that is presented to viewers. FastFusion uses a camera to automatically calibrate the geometric and photometric correction of projected images from ad-hoc positioned projectors; the only requirement is a few pixels overlapping amongst them. We present results with eight pico-projectors, each with 7 lumens (LED) and a DLP 0.17 HVGA chipset.

  20. NEXUS Scalable and Distributed Next-Generation Avionics Bus for Space Missions

    NASA Technical Reports Server (NTRS)

    He, Yutao; Shalom, Eddy; Chau, Savio N.; Some, Raphael R.; Bolotin, Gary S.

    2011-01-01

    A paper discusses NEXUS, a common, next-generation avionics interconnect that is transparently compatible with wired, fiber-optic, and RF physical layers; provides a flexible, scalable, packet switched topology; is fault-tolerant with sub-microsecond detection/recovery latency; has scalable bandwidth from 1 Kbps to 10 Gbps; has guaranteed real-time determinism with sub-microsecond latency/jitter; has built-in testability; features low power consumption (< 100 mW per Gbps); is lightweight with about a 5,000-logic-gate footprint; and is implemented in a small Bus Interface Unit (BIU) with reconfigurable back-end providing interface to legacy subsystems. NEXUS enhances a commercial interconnect standard, Serial RapidIO, to meet avionics interconnect requirements without breaking the standard. This unified interconnect technology can be used to meet performance, power, size, and reliability requirements of all ranges of equipment, sensors, and actuators at chip-to-chip, board-to-board, or box-to-box boundary. Early results from in-house modeling activity of Serial RapidIO using VisualSim indicate that the use of a switched, high-performance avionics network will provide a quantum leap in spacecraft onboard science and autonomy capability for science and exploration missions.

  1. Scalability enhancement of AODV using local link repairing

    NASA Astrophysics Data System (ADS)

    Jain, Jyoti; Gupta, Roopam; Bandhopadhyay, T. K.

    2014-09-01

    Dynamic changes in the topology of an ad hoc network make it difficult to design an efficient routing protocol. Scalability of an ad hoc network is also one of the important criteria of research in this field. Most research on ad hoc networks focuses on routing and medium access protocols and produces simulation results for limited-size networks. Ad hoc on-demand distance vector (AODV) is one of the best reactive routing protocols. In this article, modified routing protocols based on local link repairing of AODV are proposed. A method of finding alternate routes to the next-to-next node in case of link failure is proposed. These protocols are beacon-less, meaning the periodic hello message is removed from basic AODV to improve scalability. A few control packet formats have been changed to accommodate the suggested modifications. The proposed protocols are simulated to investigate scalability performance and compared with the basic AODV protocol. Simulation results show that the scalability of the network improves because of the local link repairing method. We tested the protocols over different terrain areas with approximately constant node densities and different traffic loads.
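
    At the level of route tables, the proposed repair replaces a broken hop with a locally discovered detour to the next-to-next node rather than triggering a source-initiated rediscovery. The networkx sketch below is a simplified graph-level model under assumed names; the packet-level behaviour of AODV (RREQ/RREP handling, sequence numbers) is omitted.

        import networkx as nx

        def local_link_repair(G, route, failed_idx):
            # The link route[failed_idx] -> route[failed_idx + 1] has broken;
            # search for a detour from the upstream node to the next-to-next
            # node and splice it into the existing route.
            u, v, w = route[failed_idx], route[failed_idx + 1], route[failed_idx + 2]
            G.remove_edge(u, v)                      # model the broken link
            try:
                detour = nx.shortest_path(G, u, w)   # local alternate path
            except nx.NetworkXNoPath:
                return None                          # fall back to full rediscovery
            return route[:failed_idx] + detour + route[failed_idx + 3:]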

  2. A Prototype for a Distributed and Automatic Visualization Pipeline for Oceanographic Datasets.

    NASA Astrophysics Data System (ADS)

    Nayak, A.; Weber, P.; Arrott, M.; Schulze, J.; Orcutt, J.; Chao, Y.; Li, P.

    2007-12-01

    The Laboratory for the Ocean Observatory Knowledge INtegration Grid (LOOKING) is an NSF research project focused on the identification, synthesis and assembly of existing and emerging concepts and technologies into a coherent, viable cyberinfrastructure design for ocean observatories. One of the goals of the project is to prototype an automated pipeline for continuously generating visualization products (time-variant geometric representations and rendered image sequences) from streaming and regenerating data sources. Current work involves remote visualization of NASA JPL's Our Ocean Data Assimilation of the Central California region on a continuous basis. The prototype uses OPeNDAP as the data retrieval mechanism to fetch netCDF-formatted data for specific variables or time steps. A geometry conversion engine transforms these data into 3D geometric models (e.g., isosurfaces for scalar data like temperature and salinity, or streamlines for ocean currents) using the Visualization Toolkit (VTK) and delivers a 3D scene graph that can be imported into the end user's choice of visualization software for viewing the scene. Our currently preferred 3D viewer is ossimPlanet (a 3D geospatial viewer built using OpenSceneGraph, libwms and OSSIM) embedded inside the COVISE framework for interactive exploration in a georeferenced framework.
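
    The geometry-conversion stage maps naturally onto VTK's Python API. The sketch below extracts a scalar isosurface from a gridded field and writes it out as polygonal geometry for a 3D viewer; a synthetic volume stands in for the OPeNDAP-fetched netCDF slab, and the file name and isovalue are assumptions.

        import numpy as np
        import vtk
        from vtk.util import numpy_support

        # Synthetic stand-in for a (lon, lat, depth) temperature slab.
        temp = np.random.rand(64, 64, 32).astype(np.float32)

        image = vtk.vtkImageData()
        image.SetDimensions(64, 64, 32)
        scalars = numpy_support.numpy_to_vtk(temp.ravel(order="F"), deep=True)
        scalars.SetName("temperature")
        image.GetPointData().SetScalars(scalars)

        contour = vtk.vtkContourFilter()        # isosurface extraction
        contour.SetInputData(image)
        contour.SetValue(0, 0.5)                # isovalue of interest
        contour.Update()

        writer = vtk.vtkXMLPolyDataWriter()     # geometry for the 3D viewer
        writer.SetFileName("isosurface.vtp")
        writer.SetInputConnection(contour.GetOutputPort())
        writer.Write()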

  3. Techniques for the visualization of topological defect behavior in nematic liquid crystals.

    PubMed

    Slavin, Vadim A; Pelcovits, Robert A; Loriot, George; Callan-Jones, Andrew; Laidlaw, David H

    2006-01-01

    We present visualization tools for analyzing molecular simulations of liquid crystal (LC) behavior. The simulation data consists of terabytes of data describing the position and orientation of every molecule in the simulated system over time. Condensed matter physicists study the evolution of topological defects in these data, and our visualization tools focus on that goal. We first convert the discrete simulation data to a sampled version of a continuous second-order tensor field and then use combinations of visualization methods to simultaneously display combinations of contractions of the tensor data, providing an interactive environment for exploring these complicated data. The system, built using AVS, employs colored cutting planes, colored isosurfaces, and colored integral curves to display fields of tensor contractions including Westin's scalar cl, cp, and cs metrics and the principal eigenvector. Our approach has been in active use in the physics lab for over a year. It correctly displays structures already known; it displays the data in a spatially and temporally smoother way than earlier approaches, avoiding confusing grid effects and facilitating the study of multiple time steps; it extends the use of tools developed for visualizing diffusion tensor data, re-interpreting them in the context of molecular simulations; and it has answered long-standing questions regarding the orientation of molecules around defects and the conformational changes of the defects. PMID:17080868
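
    The Westin metrics mentioned above are simple functions of the ordered eigenvalues of the local second-order tensor. A NumPy sketch using one common normalization ($c_l + c_p + c_s = 1$) follows; it assumes a positive-semidefinite tensor, as in the diffusion-tensor setting these measures were borrowed from.

        import numpy as np

        def westin_metrics(T):
            # T: symmetric tensor field of shape (..., 3, 3).
            evals = np.linalg.eigvalsh(T)          # ascending eigenvalues
            l3, l2, l1 = evals[..., 0], evals[..., 1], evals[..., 2]
            total = l1 + l2 + l3
            cl = (l1 - l2) / total                 # linear (rod-like) measure
            cp = 2.0 * (l2 - l3) / total           # planar measure
            cs = 3.0 * l3 / total                  # spherical (isotropic) measure
            return cl, cp, cs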

  4. Scalable Track Initiation for Optical Space Surveillance

    NASA Astrophysics Data System (ADS)

    Schumacher, P.; Wilkins, M. P.

    2012-09-01

    least cubic and commonly quartic or higher. Therefore, practical implementations require attention to the scalability of the algorithms, when one is dealing with the very large number of observations from large surveillance telescopes. We address two broad categories of algorithms. The first category includes and extends the classical methods of Laplace and Gauss, as well as the more modern method of Gooding, in which one solves explicitly for the apparent range to the target in terms of the given data. In particular, recent ideas offered by Mortari and Karimi allow us to construct a family of range-solution methods that can be scaled to many processors efficiently. We find that the orbit solutions (data association hypotheses) can be ranked by means of a concept we call persistence, in which a simple statistical measure of likelihood is based on the frequency of occurrence of combinations of observations in consistent orbit solutions. Of course, range-solution methods can be expected to perform poorly if the orbit solutions of most interest are not well conditioned. The second category of algorithms addresses this difficulty. Instead of solving for range, these methods attach a set of range hypotheses to each measured line of sight. Then all pair-wise combinations of observations are considered and the family of Lambert problems is solved for each pair. These algorithms also have polynomial complexity, though now the complexity is quadratic in the number of observations and also quadratic in the number of range hypotheses. We offer a novel type of admissible-region analysis, constructing partitions of the orbital element space and deriving rigorous upper and lower bounds on the possible values of the range for each partition. This analysis allows us to parallelize with respect to the element partitions and to reduce the number of range hypotheses that have to be considered in each processor simply by making the partitions smaller. Naturally, there are many ways to
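
    The cost of this second category is easy to make concrete: with $n$ observed lines of sight and $m$ range hypotheses attached to each, the number of Lambert problems to be solved is

        $$\binom{n}{2}\, m^2 \;\approx\; \tfrac{1}{2}\, n^2 m^2,$$

    which is why the partition-wise bounds on the admissible range matter: shrinking the partitions reduces $m$ within each processor and cuts the dominant $m^2$ factor directly.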

  5. geoKepler Workflow Module for Computationally Scalable and Reproducible Geoprocessing and Modeling

    NASA Astrophysics Data System (ADS)

    Cowart, C.; Block, J.; Crawl, D.; Graham, J.; Gupta, A.; Nguyen, M.; de Callafon, R.; Smarr, L.; Altintas, I.

    2015-12-01

    The NSF-funded WIFIRE project has developed an open-source, online geospatial workflow platform for unifying geoprocessing tools and models for for fire and other geospatially dependent modeling applications. It is a product of WIFIRE's objective to build an end-to-end cyberinfrastructure for real-time and data-driven simulation, prediction and visualization of wildfire behavior. geoKepler includes a set of reusable GIS components, or actors, for the Kepler Scientific Workflow System (https://kepler-project.org). Actors exist for reading and writing GIS data in formats such as Shapefile, GeoJSON, KML, and using OGC web services such as WFS. The actors also allow for calling geoprocessing tools in other packages such as GDAL and GRASS. Kepler integrates functions from multiple platforms and file formats into one framework, thus enabling optimal GIS interoperability, model coupling, and scalability. Products of the GIS actors can be fed directly to models such as FARSITE and WRF. Kepler's ability to schedule and scale processes using Hadoop and Spark also makes geoprocessing ultimately extensible and computationally scalable. The reusable workflows in geoKepler can be made to run automatically when alerted by real-time environmental conditions. Here, we show breakthroughs in the speed of creating complex data for hazard assessments with this platform. We also demonstrate geoKepler workflows that use Data Assimilation to ingest real-time weather data into wildfire simulations, and for data mining techniques to gain insight into environmental conditions affecting fire behavior. Existing machine learning tools and libraries such as R and MLlib are being leveraged for this purpose in Kepler, as well as Kepler's Distributed Data Parallel (DDP) capability to provide a framework for scalable processing. geoKepler workflows can be executed via an iPython notebook as a part of a Jupyter hub at UC San Diego for sharing and reporting of the scientific analysis and results from

  6. Technical Report: Scalable Parallel Algorithms for High Dimensional Numerical Integration

    SciTech Connect

    Masalma, Yahya; Jiao, Yu

    2010-10-01

    We implemented a scalable parallel quasi-Monte Carlo algorithm for high-dimensional numerical integration over tera-scale data points. The implemented algorithm uses Sobol quasi-random sequences to generate samples. The Sobol sequence was used to avoid clustering effects in the generated random samples and to produce low-discrepancy samples that cover the entire integration domain. The performance of the algorithm was tested. The obtained results demonstrate the scalability and accuracy of the implemented algorithms. The implemented algorithm could be used in different applications where a huge data volume is generated and numerical integration is required. We suggest using a hybrid MPI and OpenMP programming model to improve the performance of the algorithms. If the mixed model is used, attention should be paid to scalability and accuracy.
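
    A single-node sketch of the sampling kernel, using SciPy's scrambled Sobol generator, is shown below; the report's implementation distributes such sample batches across MPI ranks (and, as suggested, OpenMP threads), and the function names here are assumptions.

        import numpy as np
        from scipy.stats import qmc

        def qmc_integrate(f, dim, m=16, seed=0):
            # Estimate the integral of f over the unit hypercube [0, 1]^dim
            # from 2**m points of a scrambled Sobol low-discrepancy sequence.
            sampler = qmc.Sobol(d=dim, scramble=True, seed=seed)
            points = sampler.random_base2(m=m)
            return float(np.mean(f(points)))       # hypercube volume is 1

        # Example: a 6-dimensional product integrand with known value (1/2)**6.
        estimate = qmc_integrate(lambda x: np.prod(x, axis=1), dim=6)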

  7. Current parallel I/O limitations to scalable data analysis.

    SciTech Connect

    Mascarenhas, Ajith Arthur; Pebay, Philippe Pierre

    2011-07-01

    This report describes the limitations to parallel scalability which we have encountered when applying our otherwise optimally scalable parallel statistical analysis tool kit to large data sets distributed across the parallel file system of the current premier DOE computational facility. It describes our study to evaluate the effect of parallel I/O on the overall scalability of a parallel data analysis pipeline using our scalable parallel statistics tool kit [PTBM11]. To this end, we tested it using the Jaguar-pf DOE/ORNL peta-scale platform on large combustion simulation data under a variety of process counts and domain decomposition scenarios. In this report we have recalled the foundations of the parallel statistical analysis tool kit which we have designed and implemented, with the specific double intent of reproducing typical data analysis workflows and achieving an optimal design for scalable parallel implementations. We have briefly reviewed those earlier results and publications which allow us to conclude that we have achieved both goals. However, in this report we have further established that, when used in conjunction with a state-of-the-art parallel I/O system, as can be found on the premier DOE peta-scale platform, the scaling properties of the overall analysis pipeline comprising parallel data access routines degrade rapidly. This finding is problematic and must be addressed if peta-scale data analysis is to be made scalable, or even possible. In order to address these parallel I/O limitations, we will investigate the use of the Adaptable IO System (ADIOS) [LZL+10] to improve I/O performance, while maintaining flexibility for a variety of IO options, such as MPI IO and POSIX IO. This system is developed at ORNL and other collaborating institutions, and is being tested extensively on Jaguar-pf. Simulation code being developed on these systems will also use ADIOS to output the data, thereby making it easier for other systems, such as ours, to

  8. Scalable broadband OPCPA in Lithium Niobate with signal angular dispersion

    NASA Astrophysics Data System (ADS)

    Tóth, György; Pálfalvi, László; Tokodi, Levente; Hebling, János; Fülöp, József András

    2016-07-01

    Angular dispersion of the signal beam is proposed for efficient, scalable high-power few-cycle pulse generation in LiNbO3 by optical parametric chirped-pulse amplification (OPCPA) in the 1.4 to 2.1 μm wavelength range. An optimized double-grating setup can provide the required angular dispersion. Calculations predict 16.8 fs (3 cycles) pulses with 13 TW peak power. Further scalability of the scheme towards the 100-TW power level is feasible by using efficient, cost-effective, compact diode-pumped solid-state lasers for pumping directly at 1 μm, without second-harmonic generation.

  9. Comparison of scalable fast methods for long-range interactions.

    PubMed

    Arnold, Axel; Fahrenberger, Florian; Holm, Christian; Lenz, Olaf; Bolten, Matthias; Dachsel, Holger; Halver, Rene; Kabadshow, Ivo; Gähler, Franz; Heber, Frederik; Iseringhausen, Julian; Hofmann, Michael; Pippig, Michael; Potts, Daniel; Sutmann, Godehard

    2013-12-01

    Based on a parallel scalable library for Coulomb interactions in particle systems, a comparison between the fast multipole method (FMM), multigrid-based methods, fast Fourier transform (FFT)-based methods, and a Maxwell solver is provided for the case of three-dimensional periodic boundary conditions. These methods are directly compared with respect to complexity, scalability, performance, and accuracy. To ensure comparable conditions for all methods and to cover typical applications, we tested all methods on the same set of computers using identical benchmark systems. Our findings suggest that, depending on system size and desired accuracy, the FMM- and FFT-based methods are most efficient in performance and stability. PMID:24483585

  10. SSEL1.0. Sandia Scalable Encryption Software

    SciTech Connect

    Tarman, T.D.

    1996-08-29

    Sandia Scalable Encryption Library (SSEL) Version 1.0 is a library of functions that implement Sandia's scalable encryption algorithm. This algorithm is used to encrypt Asynchronous Transfer Mode (ATM) data traffic, and is capable of operating on an arbitrary number of bits at a time (which permits scaling via parallel implementations), while being interoperable with differently scaled versions of this algorithm. The routines in this library implement 8-bit and 32-bit versions of a non-linear mixer which is compatible with Sandia's hardware-based ATM encryptor.