Science.gov

Sample records for scalable isosurface visualization

  1. Visualization of multiple anatomical structures with explicit isosurface manipulation.

    PubMed

    Wan, Xiaonan; Yang, Fei; Yang, Feng; Li, Xiuli; Xu, Min; Tian, Jie

    2015-01-01

    In medical image analysis and surgical planning, visualizing and differentiating multiple anatomical structures is an essential task. Traditional approaches require expensive 3D segmentation during the pre-processing stage, which defeats the purpose of real-time interaction with the data. In this paper, we propose an interactive method for the visualization of multiple anatomical structures. Our results show that the new method is a promising technique for visual analysis of medical datasets and a helpful tool for surgical planning, and that it is efficient for a wide range of visualization and analysis tasks. PMID:26737229

  2. Large Scale Isosurface Bicubic Subdivision-Surface Wavelets for Representation and Visualization

    SciTech Connect

    Bertram, M.; Duchaineau, M.A.; Hamann, B.; Joy, K.I.

    2000-01-05

    We introduce a new subdivision-surface wavelet transform for arbitrary two-manifolds with boundary that is the first to use simple lifting-style filtering operations with bicubic precision. We also describe a conversion process for re-mapping large-scale isosurfaces to have subdivision connectivity and fair parameterizations so that the new wavelet transform can be used for compression and visualization. The main idea enabling our wavelet transform is the circular symmetrization of the filters in irregular neighborhoods, which replaces the traditional separation of filters into two 1-D passes. Our wavelet transform uses polygonal base meshes to represent surface topology, from which a Catmull-Clark-style subdivision hierarchy is generated. The details between these levels of resolution are quickly computed and compactly stored as wavelet coefficients. The isosurface conversion process begins with a contour triangulation computed using conventional techniques, which we subsequently simplify with a variant edge-collapse procedure, followed by an edge-removal process. This provides a coarse initial base mesh, which is subsequently refined, relaxed and attracted in phases to converge to the contour. The conversion is designed to produce smooth, untangled and minimally-skewed parameterizations, which improves the subsequent compression after applying the transform. We have demonstrated our conversion and transform for an isosurface obtained from a high-resolution turbulent-mixing hydrodynamics simulation, showing the potential for compression and level-of-detail visualization.

  3. Interactive isosurface ray tracing of time-varying tetrahedral volumes.

    PubMed

    Wald, Ingo; Friedrich, Heiko; Knoll, Aaron; Hansen, Charles D

    2007-01-01

    We describe a system for interactively rendering isosurfaces of tetrahedral finite-element scalar fields using coherent ray tracing techniques on the CPU. By employing state-of-the-art methods in polygonal ray tracing, namely aggressive packet/frustum traversal of a bounding volume hierarchy, we can accommodate large and time-varying unstructured data. In conjunction with this efficiency structure, we introduce a novel technique for intersecting ray packets with tetrahedral primitives. Ray tracing is flexible, allowing for dynamic changes in isovalue and time step, visualization of multiple isosurfaces, shadows, and depth-peeling transparency effects. The resulting system offers the intuitive simplicity of isosurfacing, guaranteed-correct visual results, and ultimately a scalable, dynamic and consistently interactive solution for visualizing unstructured volumes.
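
    Because the scalar field is linear inside each tetrahedron, the isosurface restricted to one tet is a plane, which is what makes per-tet ray intersection cheap. Below is a minimal single-ray C++ sketch of that observation, not the paper's packet/frustum code; all names and the Cramer's-rule gradient solve are illustrative.

    ```cpp
    // Minimal sketch: ray/isosurface intersection inside one tetrahedron.
    // The field is linear in the tet, so {f = iso} is a plane.
    #include <cstdio>

    struct V3 { double x, y, z; };
    static V3 sub(V3 a, V3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static double dot(V3 a, V3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static V3 cross(V3 a, V3 b) {
        return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
    }

    // Returns the ray parameter t of the hit if it lies inside the tet, else -1.
    double hitTet(V3 o, V3 d, const V3 p[4], const double v[4], double iso) {
        V3 e1 = sub(p[1], p[0]), e2 = sub(p[2], p[0]), e3 = sub(p[3], p[0]);
        V3 c1 = cross(e2, e3), c2 = cross(e3, e1), c3 = cross(e1, e2);
        double det = dot(e1, c1);
        // Gradient g of the linear field solves g.e_i = v_i - v_0 (Cramer's rule).
        V3 g = {((v[1]-v[0])*c1.x + (v[2]-v[0])*c2.x + (v[3]-v[0])*c3.x) / det,
                ((v[1]-v[0])*c1.y + (v[2]-v[0])*c2.y + (v[3]-v[0])*c3.y) / det,
                ((v[1]-v[0])*c1.z + (v[2]-v[0])*c2.z + (v[3]-v[0])*c3.z) / det};
        double denom = dot(g, d);
        if (denom == 0.0) return -1.0;                 // ray parallel to the isoplane
        double t = (iso - v[0] - dot(g, sub(o, p[0]))) / denom;
        if (t < 0.0) return -1.0;
        V3 x = sub({o.x + t*d.x, o.y + t*d.y, o.z + t*d.z}, p[0]);
        double b1 = dot(x, c1) / det, b2 = dot(x, c2) / det, b3 = dot(x, c3) / det;
        bool inside = b1 >= 0 && b2 >= 0 && b3 >= 0 && b1 + b2 + b3 <= 1;
        return inside ? t : -1.0;                      // barycentric containment test
    }

    int main() {
        V3 p[4] = {{0,0,0}, {1,0,0}, {0,1,0}, {0,0,1}};
        double v[4] = {0, 1, 1, 1};                    // field f(x,y,z) = x + y + z
        std::printf("hit at t = %g\n", hitTet({0.1,0.1,-1}, {0,0,1}, p, v, 0.5));
    }
    ```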

  4. Volume Haptics with Topology-Consistent Isosurfaces.

    PubMed

    Corenthy, Loïc; Otaduy, Miguel A; Pastor, Luis; Garcia, Marcos

    2015-01-01

    Haptic interfaces offer an intuitive way to interact with and manipulate 3D datasets, and may simplify the interpretation of visual information. This work proposes an algorithm to provide haptic feedback directly from volumetric datasets, as an aid to regular visualization. The haptic rendering algorithm lets users perceive isosurfaces in volumetric datasets, and it relies on several design features that ensure a robust and efficient rendering. A marching tetrahedra approach enables the dynamic extraction of a piecewise linear continuous isosurface. Robustness is achieved using a continuous collision detection step coupled with state-of-the-art proxy-based rendering methods over the extracted isosurface. The introduced marching tetrahedra approach guarantees that the extracted isosurface will match the topology of an equivalent isosurface computed using trilinear interpolation. The proposed haptic rendering algorithm improves the consistency between haptic and visual cues by computing a second proxy on the isosurface displayed on screen. Our experiments demonstrate the improvements in the isosurface extraction stage as well as the robustness and the efficiency of the complete algorithm.
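
    A tetrahedral cell admits only three isosurface configurations (no crossing, a lone corner cut off by one triangle, or a 2-2 split producing a quad), which is why marching tetrahedra yields a continuous piecewise-linear surface without the ambiguous cases of marching cubes. A minimal C++ sketch of the per-cell triangulation, with illustrative names and without the paper's topology-matching refinements:

    ```cpp
    // Minimal sketch: marching-tetrahedra triangulation of a single cell.
    // Only three cases exist: no crossing, one triangle, or a quad (2 triangles).
    #include <array>
    #include <cstdio>
    #include <vector>

    struct P3 { double x, y, z; };

    static P3 edgePoint(P3 a, P3 b, double va, double vb, double iso) {
        double t = (iso - va) / (vb - va);             // linear edge crossing
        return {a.x + t*(b.x-a.x), a.y + t*(b.y-a.y), a.z + t*(b.z-a.z)};
    }

    // Appends the triangle vertices (in groups of 3) for one tetrahedron.
    void marchTet(const std::array<P3,4>& p, const std::array<double,4>& v,
                  double iso, std::vector<P3>& tris) {
        std::vector<int> in, out;                      // corners above / below iso
        for (int i = 0; i < 4; ++i) (v[i] > iso ? in : out).push_back(i);
        if (in.empty() || out.empty()) return;         // surface misses the tet
        if (in.size() == 1 || out.size() == 1) {       // lone corner: one triangle
            int a = in.size() == 1 ? in[0] : out[0];
            const std::vector<int>& rest = in.size() == 1 ? out : in;
            for (int r : rest)
                tris.push_back(edgePoint(p[a], p[r], v[a], v[r], iso));
        } else {                                       // 2-2 split: quad as 2 triangles
            int a0 = in[0], a1 = in[1], b0 = out[0], b1 = out[1];
            P3 q0 = edgePoint(p[a0], p[b0], v[a0], v[b0], iso);
            P3 q1 = edgePoint(p[a0], p[b1], v[a0], v[b1], iso);
            P3 q2 = edgePoint(p[a1], p[b1], v[a1], v[b1], iso);
            P3 q3 = edgePoint(p[a1], p[b0], v[a1], v[b0], iso);
            tris.insert(tris.end(), {q0, q1, q2, q0, q2, q3});
        }
    }

    int main() {
        std::array<P3,4> p = {{{0,0,0}, {1,0,0}, {0,1,0}, {0,0,1}}};
        std::array<double,4> v = {0, 1, 1, 1};
        std::vector<P3> tris;
        marchTet(p, v, 0.5, tris);
        std::printf("emitted %zu triangle(s)\n", tris.size() / 3);
    }
    ```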

  5. Case study of isosurface extraction algorithm performance

    SciTech Connect

    Sutton, P M; Hansen, C D; Shen, H; Schikore, D

    1999-12-14

    Isosurface extraction is an important and useful visualization method. Over the past ten years, the field has seen numerous isosurface techniques published, leaving the user in a quandary about which one should be used. Some papers have published complexity analyses of the techniques, yet empirical evidence comparing different methods is lacking. This case study presents a comparative study of several representative isosurface extraction algorithms. It reports and analyzes empirical measurements of execution times and memory behavior for each algorithm. The results show that asymptotically optimal techniques may not be the best choice when implemented on modern computer architectures.
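
    The kind of empirical comparison the study advocates needs little more than a uniform timing harness around each extraction routine. A minimal C++ sketch (the extraction kernels are placeholders, not the algorithms benchmarked in the paper):

    ```cpp
    // Minimal sketch: a uniform timing harness for comparing extraction methods.
    #include <chrono>
    #include <cstdio>
    #include <functional>
    #include <string>
    #include <vector>

    struct Candidate { std::string name; std::function<void()> extract; };

    int main() {
        // Placeholder kernels; real ones would build triangles for a fixed isovalue.
        std::vector<Candidate> algos = {
            {"marching-cubes-baseline", [] { /* ... */ }},
            {"span-space-search",       [] { /* ... */ }},
        };
        const int reps = 10;                           // average out timer noise
        for (const Candidate& a : algos) {
            auto t0 = std::chrono::steady_clock::now();
            for (int i = 0; i < reps; ++i) a.extract();
            auto t1 = std::chrono::steady_clock::now();
            double ms = std::chrono::duration<double, std::milli>(t1 - t0).count() / reps;
            std::printf("%-24s %9.3f ms per extraction\n", a.name.c_str(), ms);
        }
    }
    ```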

  6. A graph algebra for scalable visual analytics.

    PubMed

    Shaverdian, Anna A; Zhou, Hao; Michailidis, George; Jagadish, Hosagrahar V

    2012-01-01

    Visual analytics (VA), which combines analytical techniques with advanced visualization features, is fast becoming a standard tool for extracting information from graph data. Researchers have developed many tools for this purpose, suggesting a need for formal methods to guide these tools' creation. Increased data demands on computing require redesigning VA tools to consider performance and reliability in the context of analysis of exascale datasets. Furthermore, visual analysts need a way to document their analyses for reuse and results justification. A VA graph framework encapsulated in a graph algebra helps address these needs. Its atomic operators include selection and aggregation. The framework employs a visual operator and supports dynamic attributes of data to enable scalable visual exploration of data. PMID:24806630

  7. A graph algebra for scalable visual analytics.

    PubMed

    Shaverdian, Anna A; Zhou, Hao; Michailidis, George; Jagadish, Hosagrahar V

    2012-01-01

    Visual analytics (VA), which combines analytical techniques with advanced visualization features, is fast becoming a standard tool for extracting information from graph data. Researchers have developed many tools for this purpose, suggesting a need for formal methods to guide these tools' creation. Increased data demands on computing require redesigning VA tools to consider performance and reliability in the context of analysis of exascale datasets. Furthermore, visual analysts need a way to document their analyses for reuse and results justification. A VA graph framework encapsulated in a graph algebra helps address these needs. Its atomic operators include selection and aggregation. The framework employs a visual operator and supports dynamic attributes of data to enable scalable visual exploration of data.

  8. Interactive high-resolution isosurface ray casting on multicore processors.

    PubMed

    Wang, Qin; JaJa, Joseph

    2008-01-01

    We present a new method for the interactive rendering of isosurfaces using ray casting on multi-core processors. This method consists of a combination of an object-order traversal that coarsely identifies possible candidate 3D data blocks for each small set of contiguous pixels, and an isosurface ray casting strategy tailored for the resulting limited-size lists of candidate 3D data blocks. While static screen partitioning is widely used in the literature, our scheme performs dynamic allocation of groups of ray casting tasks to ensure almost equal loads among the different threads running on multi-cores while maintaining spatial locality. We also make careful use of the memory management environment commonly present in multi-core processors. We test our system on a two-processor Clovertown platform, each processor a quad-core 1.86-GHz Intel Xeon, for a number of widely different benchmarks. The detailed experimental results show that our system is efficient and scalable, and achieves high cache performance and excellent load balancing, resulting in an overall performance that is superior to any of the previous algorithms. In fact, we achieve interactive isosurface rendering on a 1024² screen for all the datasets tested, up to the maximum size of the main memory of our platform.
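
    The dynamic-allocation idea reduces to a shared atomic counter from which threads pull small screen tiles, so faster threads simply take more tiles. This is a generic self-scheduling sketch in C++, not the authors' system; the tile size and the placeholder ray-casting kernel are illustrative.

    ```cpp
    // Minimal sketch: dynamic tile scheduling for isosurface ray casting.
    // Threads pull contiguous pixel tiles from one atomic counter, so load
    // balances automatically while each tile preserves spatial locality.
    #include <atomic>
    #include <cstdio>
    #include <thread>
    #include <vector>

    constexpr int W = 1024, H = 1024, TILE = 32;
    constexpr int TX = W / TILE, TY = H / TILE;

    void castTile(int, int) { /* trace one TILE x TILE packet of rays (placeholder) */ }

    int main() {
        std::atomic<int> next{0};                      // shared work counter
        unsigned n = std::thread::hardware_concurrency();
        if (n == 0) n = 4;
        std::vector<std::thread> pool;
        for (unsigned i = 0; i < n; ++i)
            pool.emplace_back([&next] {
                for (int t; (t = next.fetch_add(1)) < TX * TY; )
                    castTile(t % TX, t / TX);          // grab the next unclaimed tile
            });
        for (std::thread& th : pool) th.join();
        std::printf("rendered %d tiles on %u threads\n", TX * TY, n);
    }
    ```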

  9. ViSUS: Visualization Streams for Ultimate Scalability

    SciTech Connect

    Pascucci, V

    2005-02-14

    In this project we developed a suite of progressive visualization algorithms and a data-streaming infrastructure that enable interactive exploration of scientific datasets of unprecedented size. The methodology aims to globally optimize the data flow in a pipeline of processing modules. Each module reads a multi-resolution representation of the input while producing a multi-resolution representation of the output. The use of multi-resolution representations provides the necessary flexibility to trade speed for accuracy in the visualization process. Maximum coherency and minimum delay in the data-flow is achieved by extensive use of progressive algorithms that continuously map local geometric updates of the input stream into immediate updates of the output stream. We implemented a prototype software infrastructure that demonstrated the flexibility and scalability of this approach by allowing large data visualization on single desktop computers, on PC clusters, and on heterogeneous computing resources distributed over a wide area network. When processing terabytes of scientific data, we have achieved an effective increase in visualization performance of several orders of magnitude in two major settings: (i) interactive visualization on desktop workstations of large datasets that cannot be stored locally; (ii) real-time monitoring of a large scientific simulation with negligible impact on the computing resources available. The ViSUS streaming infrastructure enabled the real-time execution and visualization of the two LLNL simulation codes (Miranda and Raptor) run at Supercomputing 2004 on Blue Gene/L at its presentation as the fastest supercomputer in the world. In addition to SC04, we have run live demonstrations at the IEEE VIS conference and at invited talks at the DOE MICS office, DOE computer graphics forum, UC Riverside, and the University of Maryland. In all cases we have shown the capability to stream and visualize interactively data stored remotely at the San

  10. Scalable and portable visualization of large atomistic datasets

    NASA Astrophysics Data System (ADS)

    Sharma, Ashish; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya

    2004-10-01

    A scalable and portable code named Atomsviewer has been developed to interactively visualize a large atomistic dataset consisting of up to a billion atoms. The code uses a hierarchical view frustum-culling algorithm based on the octree data structure to efficiently remove atoms outside of the user's field-of-view. Probabilistic and depth-based occlusion-culling algorithms then select atoms that have a high probability of being visible. Finally, a multiresolution algorithm is used to render the selected subset of visible atoms at varying levels of detail. Atomsviewer is written in C++ and OpenGL, and it has been tested on a number of architectures including Windows, Macintosh, and SGI. Atomsviewer has been used to visualize tens of millions of atoms on a standard desktop computer and, in its parallel version, up to a billion atoms.

    Program summary:
    Title of program: Atomsviewer
    Catalogue identifier: ADUM
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADUM
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Computer for which the program is designed and others on which it has been tested: 2.4 GHz Pentium 4/Xeon processor, professional graphics card; Apple G4 (867 MHz)/G5, professional graphics card
    Operating systems under which the program has been tested: Windows 2000/XP, Mac OS 10.2/10.3, SGI IRIX 6.5
    Programming languages used: C++, C and OpenGL
    Memory required to execute with typical data: 1 gigabyte of RAM
    High speed storage required: 60 gigabytes
    No. of lines in the distributed program including test data, etc.: 550 241
    No. of bytes in the distributed program including test data, etc.: 6 258 245
    Number of bits in a word: Arbitrary
    Number of processors used: 1
    Has the code been vectorized or parallelized: No
    Distribution format: tar gzip file
    Nature of physical problem: Scientific visualization of atomic systems
    Method of solution: Rendering of atoms using computer graphic techniques, culling algorithms for data
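
    A minimal C++ sketch of the first of the three stages described, hierarchical view-frustum culling over an octree: a whole subtree is discarded as soon as its cell lies entirely outside any frustum plane. The plane/box test and all names are illustrative, not Atomsviewer code.

    ```cpp
    // Minimal sketch: hierarchical view-frustum culling over an octree of atoms.
    #include <array>
    #include <cmath>
    #include <cstdio>
    #include <memory>
    #include <vector>

    struct Plane { double a, b, c, d; };               // inside if a*x+b*y+c*z+d >= 0

    struct Node {
        double cx, cy, cz, half;                       // cubic cell: center, half-width
        std::vector<int> atoms;                        // atom ids stored at a leaf
        std::array<std::unique_ptr<Node>, 8> child;
    };

    // Conservative test: is the whole cell on the outside of this plane?
    static bool fullyOutside(const Node& n, const Plane& p) {
        double r = n.half * (std::abs(p.a) + std::abs(p.b) + std::abs(p.c));
        return p.a*n.cx + p.b*n.cy + p.c*n.cz + p.d < -r;
    }

    static void collectVisible(const Node& n, const std::vector<Plane>& frustum,
                               std::vector<int>& visible) {
        for (const Plane& p : frustum)
            if (fullyOutside(n, p)) return;            // cull the entire subtree
        bool leaf = true;
        for (const std::unique_ptr<Node>& c : n.child)
            if (c) { leaf = false; collectVisible(*c, frustum, visible); }
        if (leaf) visible.insert(visible.end(), n.atoms.begin(), n.atoms.end());
    }

    int main() {
        Node root{0, 0, 0, 10.0, {1, 2, 3}, {}};
        std::vector<Plane> frustum = {{0, 0, 1, 5}};   // keep everything with z >= -5
        std::vector<int> visible;
        collectVisible(root, frustum, visible);
        std::printf("%zu atoms survive culling\n", visible.size());
    }
    ```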

  11. Scalable nanohelices for predictive studies and enhanced 3D visualization.

    PubMed

    Meagher, Kwyn A; Doblack, Benjamin N; Ramirez, Mercedes; Davila, Lilian P

    2014-11-12

    Spring-like materials are ubiquitous in nature and of interest in nanotechnology for energy harvesting, hydrogen storage, and biological sensing applications. For predictive simulations, it has become increasingly important to be able to model the structure of nanohelices accurately. To study the effect of local structure on the properties of these complex geometries, one must develop realistic models. To date, software packages are rather limited in creating atomistic helical models. This work focuses on producing atomistic models of silica glass (SiO₂) nanoribbons and nanosprings for molecular dynamics (MD) simulations. Using an MD model of "bulk" silica glass, two computational procedures to precisely create the shape of nanoribbons and nanosprings are presented. The first method employs the AWK programming language and open-source software to effectively carve various shapes of silica nanoribbons from the initial bulk model, using desired dimensions and parametric equations to define a helix. With this method, accurate atomistic silica nanoribbons can be generated for a range of pitch values and dimensions. The second method involves a more robust code which allows flexibility in modeling nanohelical structures. This approach utilizes a C++ code particularly written to implement pre-screening methods as well as the mathematical equations for a helix, resulting in greater precision and efficiency when creating nanospring models. Using these codes, well-defined and scalable nanoribbons and nanosprings suited for atomistic simulations can be effectively created. An added value in both open-source codes is that they can be adapted to reproduce different helical structures, independent of material. In addition, a MATLAB graphical user interface (GUI) is used to let a general user explore the atomistic helical structures, enhancing learning through visualization and interaction. One application of these methods is the recent study of nanohelices via MD simulations for
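
    The carving step reduces to a point-to-curve distance test: keep a bulk atom only if it lies within a cutoff of the parametric helix (R cos t, R sin t, pitch*t/2π). A minimal C++ sketch of that test; the papers' tools use AWK and C++, and the sampling resolution, names, and parameters here are illustrative.

    ```cpp
    // Minimal sketch: carve a nanospring from a bulk atom set by keeping atoms
    // within wireRadius of the helix (R cos t, R sin t, pitch * t / (2 pi)).
    #include <cmath>
    #include <cstdio>
    #include <vector>

    constexpr double kPi = 3.14159265358979323846;

    struct Atom { double x, y, z; };

    // Distance from an atom to the helix, estimated by sampling the parameter.
    static double distToHelix(const Atom& a, double R, double pitch, double tMax) {
        const int samples = 4000;                      // fine enough for a demo
        double best = 1e300;
        for (int i = 0; i <= samples; ++i) {
            double t = tMax * i / samples;
            double hx = R * std::cos(t), hy = R * std::sin(t);
            double hz = pitch * t / (2 * kPi);
            double d2 = (a.x-hx)*(a.x-hx) + (a.y-hy)*(a.y-hy) + (a.z-hz)*(a.z-hz);
            if (d2 < best) best = d2;
        }
        return std::sqrt(best);
    }

    std::vector<Atom> carve(const std::vector<Atom>& bulk, double R, double pitch,
                            double turns, double wireRadius) {
        std::vector<Atom> kept;
        for (const Atom& a : bulk)
            if (distToHelix(a, R, pitch, 2 * kPi * turns) <= wireRadius)
                kept.push_back(a);                     // atom lies inside the spring wire
        return kept;
    }

    int main() {
        std::vector<Atom> bulk;                        // toy "bulk": a column of atoms
        for (double z = 0; z < 20; z += 0.5) bulk.push_back({10.0, 0.0, z});
        std::printf("kept %zu of %zu atoms\n",
                    carve(bulk, 10.0, 20.0, 1.0, 1.0).size(), bulk.size());
    }
    ```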

  12. Scalable nanohelices for predictive studies and enhanced 3D visualization.

    PubMed

    Meagher, Kwyn A; Doblack, Benjamin N; Ramirez, Mercedes; Davila, Lilian P

    2014-01-01

    Spring-like materials are ubiquitous in nature and of interest in nanotechnology for energy harvesting, hydrogen storage, and biological sensing applications. For predictive simulations, it has become increasingly important to be able to model the structure of nanohelices accurately. To study the effect of local structure on the properties of these complex geometries, one must develop realistic models. To date, software packages are rather limited in creating atomistic helical models. This work focuses on producing atomistic models of silica glass (SiO₂) nanoribbons and nanosprings for molecular dynamics (MD) simulations. Using an MD model of "bulk" silica glass, two computational procedures to precisely create the shape of nanoribbons and nanosprings are presented. The first method employs the AWK programming language and open-source software to effectively carve various shapes of silica nanoribbons from the initial bulk model, using desired dimensions and parametric equations to define a helix. With this method, accurate atomistic silica nanoribbons can be generated for a range of pitch values and dimensions. The second method involves a more robust code which allows flexibility in modeling nanohelical structures. This approach utilizes a C++ code particularly written to implement pre-screening methods as well as the mathematical equations for a helix, resulting in greater precision and efficiency when creating nanospring models. Using these codes, well-defined and scalable nanoribbons and nanosprings suited for atomistic simulations can be effectively created. An added value in both open-source codes is that they can be adapted to reproduce different helical structures, independent of material. In addition, a MATLAB graphical user interface (GUI) is used to let a general user explore the atomistic helical structures, enhancing learning through visualization and interaction. One application of these methods is the recent study of nanohelices via MD simulations for

  13. SciVis: Domain Customized, Scalable Visualization Software for Space Sciences

    NASA Astrophysics Data System (ADS)

    Loring, B.; Karimabadi, H.

    2009-12-01

    The primary visualization issues with large data are remote interactivity, visualization scalability, and the domain applicability of available algorithms. Large data necessitates scalability: without scalable components in the visualization pipeline, interactive exploration is not possible. Scalable components are not the end of the story, though. As the data to be visualized are often too large to move, they must be visualized in place at a remote site, which necessitates a highly optimized client-server delivery layer. Another issue is that most visualization software has only generic functionality, and significant customization is often required on the part of the user. Many scientists do not have the time or resources for such customization. The purpose of the SciVis toolkit is to address these issues within the context of the open source ParaView framework. We are currently collecting visualization requirements from members of the community and working with individuals to get them up and running with their visualization needs. In this presentation, we will demonstrate some of the unique capabilities of SciVis, including a scalability study of ParaView's IO subsystem that compares the performance of the existing IO layer to our new parallel-file-system-optimized layer, and a case study showing how remote interaction can degrade quickly even when the pipeline scales, and how we re-factored ParaView to boost interactivity during remote visualization over networks ranging from Gigabit to consumer-grade broadband. We also show an example of domain customization by demonstrating our new magnetic field topology visualization tool and its application to the analysis of global MHD simulations.

  14. ParaText : scalable text analysis and visualization.

    SciTech Connect

    Dunlavy, Daniel M.; Stanton, Eric T.; Shead, Timothy M.

    2010-07-01

    Automated analysis of unstructured text documents (e.g., web pages, newswire articles, research publications, business reports) is a key capability for solving important problems in areas including decision making, risk assessment, social network analysis, intelligence analysis, scholarly research and others. However, as data sizes continue to grow in these areas, scalable processing, modeling, and semantic analysis of text collections becomes essential. In this paper, we present the ParaText text analysis engine, a distributed memory software framework for processing, modeling, and analyzing collections of unstructured text documents. Results on several document collections using hundreds of processors are presented to illustrate the flexibility, extensibility, and scalability of the entire process of text modeling, from raw data ingestion to application analysis.

  15. Infrastructure for Scalable and Interoperable Visualization and Analysis Software Technology

    SciTech Connect

    Bethel, E. Wes

    2004-08-01

    This document describes the LBNL vision for issues to be considered when assembling a large, multi-institution visualization and analysis effort. It was drafted at the request of the PNNL National Visual Analytics Center in July 2004.

  16. Visual Communications for Heterogeneous Networks/Visually Optimized Scalable Image Compression. Final Report for September 1, 1995 - February 28, 2002

    SciTech Connect

    Hemami, S. S.

    2003-06-03

    The authors developed image and video compression algorithms that provide scalability, reconstructibility, and network adaptivity, and developed compression and quantization strategies that are visually optimal at all bit rates. The goal of this research is to enable reliable "universal access" to visual communications over the National Information Infrastructure (NII). All users, regardless of their individual network connection bandwidths, qualities-of-service, or terminal capabilities, should have the ability to access still images, video clips, and multimedia information services, and to use interactive visual communications services. To do so requires special capabilities for image and video compression algorithms: scalability, reconstructibility, and network adaptivity. Scalability allows an information service to provide visual information at many rates, without requiring additional compression or storage after the stream has been compressed the first time. Reconstructibility allows reliable visual communications over an imperfect network. Network adaptivity permits real-time modification of compression parameters to adjust to changing network conditions. Furthermore, to optimize the efficiency of the compression algorithms, they should be visually optimal, where each bit expended reduces the visual distortion. Visual optimality is achieved first through extensive experimentation to quantify human sensitivity to supra-threshold compression artifacts, and then through incorporation of these experimental results into quantization strategies and compression algorithms.

  17. A transparently scalable visualization architecture for exploring the universe.

    PubMed

    Fu, Chi-Wing; Hanson, Andrew J

    2007-01-01

    Modern astronomical instruments produce enormous amounts of three-dimensional data describing the physical Universe. The currently available data sets range from the solar system to nearby stars and portions of the Milky Way Galaxy, including the interstellar medium and some extrasolar planets, and extend out to include galaxies billions of light years away. Because of its gigantic scale and the fact that it is dominated by empty space, modeling and rendering the Universe is very different from modeling and rendering ordinary three-dimensional virtual worlds at human scales. Our purpose is to introduce a comprehensive approach to an architecture solving this visualization problem that encompasses the entire Universe while seeking to be as scale-neutral as possible. One key element is the representation of model-rendering procedures using power scaled coordinates (PSC), along with various PSC-based techniques that we have devised to generalize and optimize the conventional graphics framework to the scale domains of astronomical visualization. Employing this architecture, we have developed an assortment of scale-independent modeling and rendering methods for a large variety of astronomical models, and have demonstrated scale-insensitive interactive visualizations of the physical Universe covering scales ranging from human scale to the Earth, to the solar system, to the Milky Way Galaxy, and to the entire observable Universe.

  18. A transparently scalable visualization architecture for exploring the universe.

    PubMed

    Fu, Chi-Wing; Hanson, Andrew J

    2007-01-01

    Modern astronomical instruments produce enormous amounts of three-dimensional data describing the physical Universe. The currently available data sets range from the solar system to nearby stars and portions of the Milky Way Galaxy, including the interstellar medium and some extrasolar planets, and extend out to include galaxies billions of light years away. Because of its gigantic scale and the fact that it is dominated by empty space, modeling and rendering the Universe is very different from modeling and rendering ordinary three-dimensional virtual worlds at human scales. Our purpose is to introduce a comprehensive approach to an architecture solving this visualization problem that encompasses the entire Universe while seeking to be as scale-neutral as possible. One key element is the representation of model-rendering procedures using power scaled coordinates (PSC), along with various PSC-based techniques that we have devised to generalize and optimize the conventional graphics framework to the scale domains of astronomical visualization. Employing this architecture, we have developed an assortment of scale-independent modeling and rendering methods for a large variety of astronomical models, and have demonstrated scale-insensitive interactive visualizations of the physical Universe covering scales ranging from human scale to the Earth, to the solar system, to the Milky Way Galaxy, and to the entire observable Universe. PMID:17093340

  19. Dynamic Isosurface Extraction and Level-of-Detail in Voxel Space

    SciTech Connect

    Lamphere, P.B.; Linebarger, J.M.

    1999-03-01

    A new visualization representation is described, which dramatically improves interactivity for scientific visualizations of structured grid data sets by creating isosurfaces at interactive speeds and with dynamically changeable levels-of-detail (LOD). This representation enables greater interactivity by allowing an analyst to dynamically specify both the desired isosurface threshold and required level-of-detail to be used while rendering the image. A scientist can therefore view very large isosurfaces at interactive speeds (with a low level-of-detail), but has the full data set always available for analysis. The key idea is that various levels-of-detail are represented as differently sized hexahedral virtual voxels, which are stored in a three-dimensional binary tree, or kd-tree; thus the level-of-detail representation is done in voxel space instead of the traditional approach which relies on surface or geometry space decimations. Utilizing the voxel space is an essential step to moving from a post-processing visualization paradigm to a quantitative, real-time paradigm. This algorithm has been implemented as an integral component of the EIGEN/VR project at Sandia National Laboratories, which provides a rich environment for scientists to interactively explore and visualize the results of very large-scale simulations performed on massively parallel supercomputers.
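
    The virtual-voxel idea can be sketched compactly: each kd-tree node carries the min/max of the scalar values beneath it, and rendering at a coarse level-of-detail simply stops the traversal early, emitting a node's cell as one "virtual voxel" whenever its range straddles the isovalue. A minimal C++ sketch with illustrative names, not the EIGEN/VR implementation:

    ```cpp
    // Minimal sketch: level-of-detail selection in voxel space.
    // Coarser LOD = stop the kd-tree traversal at larger "virtual voxels".
    #include <cstdio>
    #include <memory>
    #include <vector>

    struct KdNode {
        float vmin, vmax;                              // scalar range of the subtree
        int size;                                      // cell edge length in voxels
        std::unique_ptr<KdNode> lo, hi;
    };

    void emitVirtualVoxels(const KdNode& n, float iso, int lodSize,
                           std::vector<const KdNode*>& out) {
        if (iso < n.vmin || iso > n.vmax) return;      // isosurface cannot pass here
        bool stopHere = (!n.lo && !n.hi) || n.size <= lodSize;
        if (stopHere) { out.push_back(&n); return; }   // draw this cell as one voxel
        if (n.lo) emitVirtualVoxels(*n.lo, iso, lodSize, out);
        if (n.hi) emitVirtualVoxels(*n.hi, iso, lodSize, out);
    }

    int main() {
        KdNode root{0.0f, 1.0f, 64, nullptr, nullptr};
        std::vector<const KdNode*> cells;
        emitVirtualVoxels(root, 0.5f, 16, cells);      // coarse pass over a stub tree
        std::printf("%zu virtual voxel(s) straddle the isovalue\n", cells.size());
    }
    ```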

  20. Decoupling Illumination from Isosurface Generation Using 4D Light Transport

    PubMed Central

    Banks, David C.; Beason, Kevin M.

    2014-01-01

    One way to provide global illumination for the scientist who performs an interactive sweep through a 3D scalar dataset is to pre-compute global illumination, resample the radiance onto a 3D grid, then use it as a 3D texture. The basic approach of repeatedly extracting isosurfaces, illuminating them, and then building a 3D illumination grid suffers from the non-uniform sampling that arises from coupling the sampling of radiance with the sampling of isosurfaces. We demonstrate how the illumination step can be decoupled from the isosurface extraction step by illuminating the entire 3D scalar function as a 3-manifold in 4-dimensional space. By reformulating light transport in a higher dimension, one can sample a 3D volume without requiring the radiance samples to aggregate along individual isosurfaces in the pre-computed illumination grid. PMID:19834238

  1. KOLAM: a cross-platform architecture for scalable visualization and tracking in wide-area imagery

    NASA Astrophysics Data System (ADS)

    Fraser, Joshua; Haridas, Anoop; Seetharaman, Guna; Rao, Raghuveer M.; Palaniappan, Kannappan

    2013-05-01

    KOLAM is an open, cross-platform, interoperable, scalable and extensible framework supporting a novel multi-scale spatiotemporal dual-cache data structure for big data visualization and visual analytics. This paper focuses on the use of KOLAM for target tracking in high-resolution, high throughput wide format video also known as wide-area motion imagery (WAMI). It was originally developed for the interactive visualization of extremely large geospatial imagery of high spatial and spectral resolution. KOLAM is platform, operating system and (graphics) hardware independent, and supports embedded datasets scalable from hundreds of gigabytes to feasibly petabytes in size on clusters, workstations, desktops and mobile computers. In addition to rapid roam, zoom and hyper-jump spatial operations, a large number of simultaneously viewable embedded pyramid layers (also referred to as multiscale or sparse imagery), interactive colormap and histogram enhancement, spherical projection and terrain maps are supported. The KOLAM software architecture was extended to support airborne wide-area motion imagery by organizing spatiotemporal tiles in very large format video frames using a temporal cache of tiled pyramid cached data structures. The current version supports WAMI animation, fast intelligent inspection, trajectory visualization and target tracking (digital tagging); the latter by interfacing with external automatic tracking software. One of the critical needs for working with WAMI is a supervised tracking and visualization tool that allows analysts to digitally tag multiple targets, quickly review and correct tracking results and apply geospatial visual analytic tools on the generated trajectories. One-click manual tracking combined with multiple automated tracking algorithms are available to assist the analyst and increase human effectiveness.
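
    At the core of any such tiled-pyramid viewer is the mapping from the current zoom factor to a pyramid level, with each level halving the image resolution per axis. A minimal C++ sketch of that level selection (not KOLAM code; the clamping and naming are illustrative):

    ```cpp
    // Minimal sketch: pick the pyramid level for the current zoom factor.
    // Level 0 is full resolution; level L is downsampled by 2^L per axis.
    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    int levelFor(double zoom, int maxLevel) {          // zoom < 1 means zoomed out
        int lvl = (int)std::floor(std::log2(1.0 / std::max(zoom, 1e-9)));
        return std::clamp(lvl, 0, maxLevel);
    }

    int main() {
        for (double z : {1.0, 0.5, 0.25, 0.1})
            std::printf("zoom %.2f -> pyramid level %d\n", z, levelFor(z, 8));
    }
    ```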

  2. Reinventing the Contingency Wheel: Scalable Visual Analytics of Large Categorical Data.

    PubMed

    Alsallakh, B; Aigner, W; Miksch, S; Gröller, M E

    2012-12-01

    Contingency tables summarize the relations between categorical variables and arise in both scientific and business domains. Asymmetrically large two-way contingency tables pose a problem for common visualization methods. The Contingency Wheel has been recently proposed as an interactive visual method to explore and analyze such tables. However, the scalability and readability of this method are limited when dealing with large and dense tables. In this paper we present Contingency Wheel++, new visual analytics methods that overcome these major shortcomings: (1) regarding automated methods, a measure of association based on Pearson's residuals alleviates the bias of the raw residuals originally used, (2) regarding visualization methods, a frequency-based abstraction of the visual elements eliminates overlapping and makes analyzing both positive and negative associations possible, and (3) regarding the interactive exploration environment, a multi-level overview+detail interface enables exploring individual data items that are aggregated in the visualization or in the table using coordinated views. We illustrate the applicability of these new methods with a use case and show how they enable discovering and analyzing nontrivial patterns and associations in large categorical data.
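
    The association measure referred to is the standardized Pearson residual, r_ij = (n_ij - e_ij) / sqrt(e_ij), with expected count e_ij = (row_i * col_j) / N under independence. A minimal C++ sketch on a toy 2x2 table (illustrative, not the Contingency Wheel++ code):

    ```cpp
    // Minimal sketch: standardized Pearson residuals for a contingency table.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    int main() {
        std::vector<std::vector<double>> n = {{30, 10}, {20, 40}};  // toy 2x2 counts
        std::vector<double> row(n.size(), 0.0), col(n[0].size(), 0.0);
        double N = 0.0;
        for (size_t i = 0; i < n.size(); ++i)
            for (size_t j = 0; j < n[i].size(); ++j) {
                row[i] += n[i][j]; col[j] += n[i][j]; N += n[i][j];
            }
        for (size_t i = 0; i < n.size(); ++i)
            for (size_t j = 0; j < n[i].size(); ++j) {
                double e = row[i] * col[j] / N;        // expected under independence
                std::printf("r[%zu][%zu] = %+.3f\n", i, j, (n[i][j] - e) / std::sqrt(e));
            }
    }
    ```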

  3. Scalable Predictive Analysis in Critically Ill Patients Using a Visual Open Data Analysis Platform.

    PubMed

    Van Poucke, Sven; Zhang, Zhongheng; Schmitz, Martin; Vukicevic, Milan; Laenen, Margot Vander; Celi, Leo Anthony; De Deyne, Cathy

    2016-01-01

    With the accumulation of large amounts of health-related data, predictive analytics could stimulate the transformation of reactive medicine towards Predictive, Preventive and Personalized (PPPM) Medicine, ultimately affecting both cost and quality of care. However, the high dimensionality and high complexity of the data involved prevent data-driven methods from easy translation into clinically relevant models. Additionally, the application of cutting-edge predictive methods and data manipulation requires substantial programming skills, limiting their direct exploitation by medical domain experts. This leaves a gap between potential and actual data usage. In this study, the authors address this problem by focusing on open, visual environments suited to be applied by the medical community. Moreover, we review code-free applications of big data technologies. As a showcase, a framework was developed for the meaningful use of data from critical care patients by integrating the MIMIC-II database in a data mining environment (RapidMiner) supporting scalable predictive analytics using visual tools (RapidMiner's Radoop extension). Guided by the CRoss-Industry Standard Process for Data Mining (CRISP-DM), the ETL (Extract, Transform, Load) process was initiated by retrieving data from the MIMIC-II tables of interest. As a use case, the correlation of platelet count and ICU survival was quantitatively assessed. Using visual tools for ETL on Hadoop and predictive modeling in RapidMiner, we developed robust processes for the automatic building, parameter optimization and evaluation of various predictive models under different feature selection schemes. Because these processes can be easily adopted in other projects, this environment is attractive for scalable predictive analytics in health research. PMID:26731286

  4. Scalable Predictive Analysis in Critically Ill Patients Using a Visual Open Data Analysis Platform.

    PubMed

    Van Poucke, Sven; Zhang, Zhongheng; Schmitz, Martin; Vukicevic, Milan; Laenen, Margot Vander; Celi, Leo Anthony; De Deyne, Cathy

    2016-01-01

    With the accumulation of large amounts of health-related data, predictive analytics could stimulate the transformation of reactive medicine towards Predictive, Preventive and Personalized (PPPM) Medicine, ultimately affecting both cost and quality of care. However, the high dimensionality and high complexity of the data involved prevent data-driven methods from easy translation into clinically relevant models. Additionally, the application of cutting-edge predictive methods and data manipulation requires substantial programming skills, limiting their direct exploitation by medical domain experts. This leaves a gap between potential and actual data usage. In this study, the authors address this problem by focusing on open, visual environments suited to be applied by the medical community. Moreover, we review code-free applications of big data technologies. As a showcase, a framework was developed for the meaningful use of data from critical care patients by integrating the MIMIC-II database in a data mining environment (RapidMiner) supporting scalable predictive analytics using visual tools (RapidMiner's Radoop extension). Guided by the CRoss-Industry Standard Process for Data Mining (CRISP-DM), the ETL (Extract, Transform, Load) process was initiated by retrieving data from the MIMIC-II tables of interest. As a use case, the correlation of platelet count and ICU survival was quantitatively assessed. Using visual tools for ETL on Hadoop and predictive modeling in RapidMiner, we developed robust processes for the automatic building, parameter optimization and evaluation of various predictive models under different feature selection schemes. Because these processes can be easily adopted in other projects, this environment is attractive for scalable predictive analytics in health research.

  5. Scalable Predictive Analysis in Critically Ill Patients Using a Visual Open Data Analysis Platform

    PubMed Central

    Poucke, Sven Van; Zhang, Zhongheng; Schmitz, Martin; Vukicevic, Milan; Laenen, Margot Vander; Celi, Leo Anthony; Deyne, Cathy De

    2016-01-01

    With the accumulation of large amounts of health-related data, predictive analytics could stimulate the transformation of reactive medicine towards Predictive, Preventive and Personalized (PPPM) Medicine, ultimately affecting both cost and quality of care. However, the high dimensionality and high complexity of the data involved prevent data-driven methods from easy translation into clinically relevant models. Additionally, the application of cutting-edge predictive methods and data manipulation requires substantial programming skills, limiting their direct exploitation by medical domain experts. This leaves a gap between potential and actual data usage. In this study, the authors address this problem by focusing on open, visual environments suited to be applied by the medical community. Moreover, we review code-free applications of big data technologies. As a showcase, a framework was developed for the meaningful use of data from critical care patients by integrating the MIMIC-II database in a data mining environment (RapidMiner) supporting scalable predictive analytics using visual tools (RapidMiner's Radoop extension). Guided by the CRoss-Industry Standard Process for Data Mining (CRISP-DM), the ETL (Extract, Transform, Load) process was initiated by retrieving data from the MIMIC-II tables of interest. As a use case, the correlation of platelet count and ICU survival was quantitatively assessed. Using visual tools for ETL on Hadoop and predictive modeling in RapidMiner, we developed robust processes for the automatic building, parameter optimization and evaluation of various predictive models under different feature selection schemes. Because these processes can be easily adopted in other projects, this environment is attractive for scalable predictive analytics in health research. PMID:26731286

  6. Interactive View-Dependent Rendering of Large Iso-Surfaces

    SciTech Connect

    Gregorski, B.; Duchaineau, M.A.; Lindstrom, P.; Pascucci, V.; Joy, K.J.

    2002-01-11

    The authors present an algorithm for interactively extracting and rendering iso-surfaces of large volume datasets in a view-dependent optimal fashion. A recursive tetrahedral mesh subdivision scheme, based on longest edge bisection, is used to hierarchically decompose the data into a multi-resolution structure. This data structure allows fast extraction of arbitrary iso-surfaces to within user specified view-dependent error bounds. A compact encoding of the mesh subdivision optimizes memory usage and processor performance necessary for large datasets. A data layout scheme based on hierarchical space filling curves provides optimal access to the data in a cache coherent manner.

  7. Interactive View-Dependent Rendering of Large Isosurfaces

    SciTech Connect

    Gregorski, B; Duchaineau, M; Lindstrom, P; Pascucci, V; Joy, K I

    2002-11-19

    We present an algorithm for interactively extracting and rendering isosurfaces of large volume datasets in a view-dependent fashion. A recursive tetrahedral mesh refinement scheme, based on longest edge bisection, is used to hierarchically decompose the data into a multiresolution structure. This data structure allows fast extraction of arbitrary isosurfaces to within user specified view-dependent error bounds. A data layout scheme based on hierarchical space filling curves provides access to the data in a cache coherent manner that follows the data access pattern indicated by the mesh refinement.
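
    One refinement step of longest-edge bisection is simple to state: find the tet's longest edge, split it at its midpoint, and replace the tet by the two children that each keep one endpoint. A minimal C++ sketch (illustrative names; the papers additionally maintain view-dependent error bounds and a compact hierarchy encoding):

    ```cpp
    // Minimal sketch: one longest-edge-bisection step on a tetrahedron.
    #include <array>
    #include <cstdio>
    #include <utility>

    struct V3 { double x, y, z; };
    using Tet = std::array<V3, 4>;

    static double len2(V3 a, V3 b) {
        return (a.x-b.x)*(a.x-b.x) + (a.y-b.y)*(a.y-b.y) + (a.z-b.z)*(a.z-b.z);
    }

    std::pair<Tet, Tet> bisect(const Tet& t) {
        int bi = 0, bj = 1;
        double best = -1.0;
        for (int i = 0; i < 4; ++i)                    // find the longest edge
            for (int j = i + 1; j < 4; ++j)
                if (len2(t[i], t[j]) > best) { best = len2(t[i], t[j]); bi = i; bj = j; }
        V3 m = {(t[bi].x + t[bj].x) / 2, (t[bi].y + t[bj].y) / 2, (t[bi].z + t[bj].z) / 2};
        Tet a = t, b = t;
        a[bj] = m;                                     // child A keeps endpoint bi
        b[bi] = m;                                     // child B keeps endpoint bj
        return {a, b};
    }

    int main() {
        Tet t = {{{0,0,0}, {1,0,0}, {0,1,0}, {0,0,1}}};
        auto children = bisect(t);                     // splits edge (1,0,0)-(0,1,0)
        V3 m = children.first[2];                      // the shared midpoint vertex
        std::printf("split at (%g, %g, %g)\n", m.x, m.y, m.z);
    }
    ```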

  8. JuxtaView - A tool for interactive visualization of large imagery on scalable tiled displays

    USGS Publications Warehouse

    Krishnaprasad, N.K.; Vishwanath, V.; Venkataraman, S.; Rao, A.G.; Renambot, L.; Leigh, J.; Johnson, A.E.; Davis, B.

    2004-01-01

    JuxtaView is a cluster-based application for viewing ultra-high-resolution images on scalable tiled displays. In JuxtaView, we present a new parallel computing and distributed memory approach for out-of-core montage visualization, using LambdaRAM, a software-based network-level cache system. The ultimate goal of JuxtaView is to enable a user to interactively roam through potentially terabytes of distributed, spatially referenced image data such as those from electron microscopes, satellites and aerial photographs. In working towards this goal, we describe our first prototype implemented over a local area network, where the image is distributed using LambdaRAM, on the memory of all nodes of a PC cluster driving a tiled display wall. Aggressive pre-fetching schemes employed by LambdaRAM help to reduce the latency involved in remote memory access. We compare LambdaRAM with a more traditional memory-mapped file approach for out-of-core visualization. © 2004 IEEE.
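
    The "traditional memory-mapped file approach" used as the baseline lets the operating system page image data in on demand, with no prefetching across the network. A minimal POSIX C++ sketch of that baseline (illustrative, not JuxtaView or LambdaRAM code):

    ```cpp
    // Minimal sketch: the memory-mapped baseline for out-of-core image access.
    #include <cstdio>
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(int argc, char** argv) {
        if (argc < 2) { std::fprintf(stderr, "usage: %s image.raw\n", argv[0]); return 1; }
        int fd = open(argv[1], O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }
        struct stat st;
        if (fstat(fd, &st) != 0) { perror("fstat"); return 1; }
        void* m = mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (m == MAP_FAILED) { perror("mmap"); return 1; }
        const unsigned char* data = static_cast<const unsigned char*>(m);
        long long sum = 0;                             // touch one "tile" of the image:
        off_t tile = st.st_size < 65536 ? st.st_size : 65536;
        for (off_t i = 0; i < tile; ++i) sum += data[i];  // kernel pages in on demand
        std::printf("checksum of first tile: %lld\n", sum);
        munmap(m, st.st_size);
        close(fd);
    }
    ```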

  9. A Scalable Cloud Library Empowering Big Data Management, Diagnosis, and Visualization of Cloud-Resolving Models

    NASA Astrophysics Data System (ADS)

    Zhou, S.; Tao, W. K.; Li, X.; Matsui, T.; Sun, X. H.; Yang, X.

    2015-12-01

    A cloud-resolving model (CRM) is an atmospheric numerical model that can numerically resolve clouds and cloud systems at 0.25-5 km horizontal grid spacings. The main advantage of the CRM is that it can allow explicit interactive processes between microphysics, radiation, turbulence, surface, and aerosols without subgrid cloud fraction, overlapping and convective parameterization. Because of their fine resolution and complex physical processes, it is challenging for the CRM community to i) visualize/inter-compare CRM simulations, ii) diagnose key processes for cloud-precipitation formation and intensity, and iii) evaluate against NASA's field campaign data and L1/L2 satellite data products, due to the large data volume (~10 TB) and the complexity of the CRM's physical processes. We have been building the Super Cloud Library (SCL) upon a Hadoop framework, capable of CRM database management, distribution, visualization, subsetting, and evaluation in a scalable way. The current SCL capability includes: (1) an SCL data model that enables various CRM simulation outputs in NetCDF, including the NASA-Unified Weather Research and Forecasting (NU-WRF) and Goddard Cumulus Ensemble (GCE) models, to be accessed and processed by Hadoop; (2) a parallel NetCDF-to-CSV converter that supports NU-WRF and GCE model outputs; (3) a technique that visualizes Hadoop-resident data with IDL; (4) a technique that subsets Hadoop-resident data, compliant with the SCL data model, with HIVE or Impala via HUE's Web interface; (5) a prototype that enables a Hadoop MapReduce application to dynamically access and process data residing in a parallel file system, PVFS2 or CephFS, where high performance computing (HPC) simulation outputs such as NU-WRF's and GCE's are located. We are testing Apache Spark to speed up SCL data processing and analysis. With the SCL capabilities, SCL users can conduct large-domain on-demand tasks without downloading voluminous CRM datasets and various observations from NASA Field Campaigns and Satellite data to a

  10. Dynamic isosurface extraction and level-of-detail in voxel space

    SciTech Connect

    Linebarger, J.M.; Lamphere, P.B.; Breckenridge, A.R.

    1998-06-01

    A new visualization technique is reported, which dramatically improves interactivity for scientific visualizations by working directly with voxel data and by employing efficient algorithms and data structures. This discussion covers the research software, the file structures, examples of data creation, data search, and triangle rendering codes that allow geometric surfaces to be extracted from volumetric data. Uniquely, these methods enable greater interactivity by allowing an analyst to dynamically specify both the desired isosurface threshold and required level-of-detail to be used while rendering the image. The key idea behind this visualization paradigm is that various levels-of-detail are represented as differently sized hexahedral virtual voxels, which are stored in a three-dimensional kd-tree; thus the level-of-detail representation is done in voxel space instead of the traditional approach which relies on surface or geometry space decimations. This algorithm has been implemented as an integral component in the EIGEN/VR project at Sandia National Laboratories, which provides a rich environment for scientists to interactively explore and visualize the results of very large-scale simulations performed on massively parallel supercomputers.

  11. Scalable Adaptive Graphics Environment (SAGE) Software for the Visualization of Large Data Sets on a Video Wall

    NASA Technical Reports Server (NTRS)

    Jedlovec, Gary; Srikishen, Jayanthi; Edwards, Rita; Cross, David; Welch, Jon; Smith, Matt

    2013-01-01

    The use of collaborative scientific visualization systems for the analysis, visualization, and sharing of "big data" available from new high resolution remote sensing satellite sensors or four-dimensional numerical model simulations is propelling the wider adoption of ultra-resolution tiled display walls interconnected by high speed networks. These systems require a globally connected and well-integrated operating environment that provides persistent visualization and collaboration services. This abstract and subsequent presentation describe a new collaborative visualization system installed for NASA's Short-term Prediction Research and Transition (SPoRT) program at Marshall Space Flight Center and its use for Earth science applications. The system consists of a 3 x 4 array of 1920 x 1080 pixel thin bezel video monitors mounted on a wall in a scientific collaboration lab. The monitors are physically and virtually integrated into a 14' x 7' video display. The display of scientific data on the video wall is controlled by a single Alienware Aurora PC with a 2nd Generation Intel Core 4.1 GHz processor, 32 GB memory, and an AMD FirePro W600 video card with 6 mini display port connections. Six mini display-to-dual DVI cables are used to connect the 12 individual video monitors. The open source Scalable Adaptive Graphics Environment (SAGE) windowing and media control framework, running on top of the Ubuntu 12 Linux operating system, allows several users to simultaneously control the display and storage of high resolution still and moving graphics in a variety of formats, on tiled display walls of any size. The Ubuntu operating system supports the open source Scalable Adaptive Graphics Environment (SAGE) software which provides a common environment, or framework, enabling its users to access, display and share a variety of data-intensive information. This information can be digital-cinema animations, high-resolution images, high-definition video

  12. Scalable Adaptive Graphics Environment (SAGE) Software for the Visualization of Large Data Sets on a Video Wall

    NASA Astrophysics Data System (ADS)

    Jedlovec, G.; Srikishen, J.; Edwards, R.; Cross, D.; Welch, J. D.; Smith, M. R.

    2013-12-01

    The use of collaborative scientific visualization systems for the analysis, visualization, and sharing of 'big data' available from new high resolution remote sensing satellite sensors or four-dimensional numerical model simulations is propelling the wider adoption of ultra-resolution tiled display walls interconnected by high speed networks. These systems require a globally connected and well-integrated operating environment that provides persistent visualization and collaboration services. This abstract and subsequent presentation describe a new collaborative visualization system installed for NASA's Short-term Prediction Research and Transition (SPoRT) program at Marshall Space Flight Center and its use for Earth science applications. The system consists of a 3 x 4 array of 1920 x 1080 pixel thin bezel video monitors mounted on a wall in a scientific collaboration lab. The monitors are physically and virtually integrated into a 14' x 7' video display. The display of scientific data on the video wall is controlled by a single Alienware Aurora PC with a 2nd Generation Intel Core 4.1 GHz processor, 32 GB memory, and an AMD FirePro W600 video card with 6 mini display port connections. Six mini display-to-dual DVI cables are used to connect the 12 individual video monitors. The open source Scalable Adaptive Graphics Environment (SAGE) windowing and media control framework, running on top of the Ubuntu 12 Linux operating system, allows several users to simultaneously control the display and storage of high resolution still and moving graphics in a variety of formats, on tiled display walls of any size. The Ubuntu operating system supports the open source Scalable Adaptive Graphics Environment (SAGE) software which provides a common environment, or framework, enabling its users to access, display and share a variety of data-intensive information. This information can be digital-cinema animations, high-resolution images, high-definition video

  13. SBIR Phase II Final Report for Scalable Grid Technologies for Visualization Services

    SciTech Connect

    Sebastien Barre; Will Schroeder

    2006-10-15

    This project developed software tools for the automation of grid computing. In particular, the project focused on visualization and imaging tools (VTK, ParaView and ITK); i.e., we developed tools to automatically create Grid services from C++ programs implemented using the open-source VTK visualization and ITK segmentation and registration systems. This approach helps non-Grid experts to create applications using tools with which they are familiar, ultimately producing Grid services for visualization and image analysis by invocation of an automatic process.

  14. Complex absorbing potentials with Voronoi isosurfaces wrapping perfectly around molecules.

    PubMed

    Sommerfeld, Thomas; Ehara, Masahiro

    2015-10-13

    Complex absorbing potentials (CAPs) are imaginary potentials that are added to a Hamiltonian to change the boundary conditions of the problem from scattering to square-integrable. In other words, with a CAP, standard bound-state methods can be used in problems involving unbound states, such as identifying resonance states and predicting their energies and lifetimes. Although many CAP forms are used in wave packet dynamics, in electronic structure theory the so-called box-CAP is used almost exclusively, because of the ease of evaluating its integrals in a Gaussian basis set. However, the box-CAP does have certain disadvantages. First, it will, e.g., break the symmetry of C_nv molecules if n is odd and the main axis is placed along the z-axis by the "standard orientation" of the electronic structure code. Second, it provides a CAP starting at the smallest box around the entire molecular system. For larger molecules or clusters, which do not fill space efficiently, that implies that much "dead space" will be left within the molecule, where there is neither a CAP nor a sufficient description with basis functions. Here, two new CAP forms are introduced and systematically explored: first, a Voronoi-CAP (that is, a CAP defined in each atom's Voronoi cell), and second, a smooth Voronoi-CAP (which is similar to the Voronoi-CAP, but with the non-continuously-differentiable behavior at the surfaces between the Voronoi cells smoothed out). Both have isosurfaces that are similar to the cavities used in solvation modeling. An obvious disadvantage of these two CAPs is that the integrals cannot be obtained analytically, but must be computed numerically. However, Voronoi-CAPs share the advantage of having the same symmetry as the molecular system and, more importantly, considerably facilitate the treatment of larger molecules with asymmetric side chains and of molecular clusters. PMID:26574253
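
    A minimal C++ sketch of the Voronoi-CAP idea as described: the absorbing strength at a point is governed by the nearest atom (the owner of the point's Voronoi cell) and switches on beyond a cutoff, so its isosurfaces wrap around the molecule. The quadratic onset, cutoff, and strength parameter below are illustrative assumptions, not the paper's exact functional form.

    ```cpp
    // Minimal sketch: a Voronoi-cell absorbing potential evaluated at a point.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct Atom { double x, y, z; };

    // -Im W(r): zero within rCut of the nearest atom, quadratic beyond it.
    // The nearest atom is exactly the owner of r's Voronoi cell.
    double capStrength(double px, double py, double pz,
                       const std::vector<Atom>& atoms, double rCut, double eta) {
        double dmin = 1e300;
        for (const Atom& a : atoms) {
            double d2 = (px-a.x)*(px-a.x) + (py-a.y)*(py-a.y) + (pz-a.z)*(pz-a.z);
            if (d2 < dmin) dmin = d2;
        }
        double over = std::sqrt(dmin) - rCut;
        return over > 0.0 ? eta * over * over : 0.0;   // onset wraps around the molecule
    }

    int main() {
        std::vector<Atom> mol = {{0, 0, 0}, {1.4, 0, 0}};  // toy diatomic
        for (double x = 0.0; x <= 6.0; x += 1.5)
            std::printf("x = %.1f  -Im W = %.3f\n", x,
                        capStrength(x, 0, 0, mol, 2.0, 0.5));
    }
    ```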

  15. A scalable architecture for extracting, aligning, linking, and visualizing multi-Int data

    NASA Astrophysics Data System (ADS)

    Knoblock, Craig A.; Szekely, Pedro

    2015-05-01

    An analyst today has a tremendous amount of data available, but each of the various data sources typically exists in its own silo, so an analyst has limited ability to see an integrated view of the data and has little or no access to contextual information that could help in understanding the data. We have developed the Domain-Insight Graph (DIG) system, an innovative architecture for extracting, aligning, linking, and visualizing massive amounts of domain-specific content from unstructured sources. Under the DARPA Memex program we have already successfully applied this architecture to multiple application domains, including the enormous international problem of human trafficking, where we extracted, aligned and linked data from 50 million online Web pages. DIG builds on our Karma data integration toolkit, which makes it easy to rapidly integrate structured data from a variety of sources, including databases, spreadsheets, XML, JSON, and Web services. The ability to integrate Web services allows Karma to pull in live data from various social media sites, such as Twitter, Instagram, and OpenStreetMaps. DIG then indexes the integrated data and provides an easy-to-use interface for query, visualization, and analysis.

  16. Isosurface Computation Made Simple: Hardware Acceleration, Adaptive Refinement and Tetrahedral Stripping

    SciTech Connect

    Pascucci, V

    2004-02-18

    This paper presents a simple approach for rendering isosurfaces of a scalar field. Using the vertex programming capability of commodity graphics cards, we transfer the cost of computing an isosurface from the Central Processing Unit (CPU), running the main application, to the Graphics Processing Unit (GPU), rendering the images. We consider a tetrahedral decomposition of the domain and draw one quadrangle (quad) primitive per tetrahedron. A vertex program transforms the quad into the piece of isosurface within the tetrahedron (see Figure 2). In this way, the main application is only devoted to streaming the vertices of the tetrahedra from main memory to the graphics card. For adaptively refined rectilinear grids, the optimization of this streaming process leads to the definition of a new 3D space-filling curve, which generalizes the 2D Sierpinski curve used for efficient rendering of triangulated terrains. We maintain the simplicity of the scheme when constructing view-dependent adaptive refinements of the domain mesh. In particular, we guarantee the absence of T-junctions by satisfying local bounds in our nested error basis. The expensive stage of fixing cracks in the mesh is completely avoided. We discuss practical tradeoffs in the distribution of the workload between the application and the graphics hardware. With current GPUs it is convenient to perform certain computations on the main CPU. Beyond performance considerations, which will change with new generations of GPUs, this approach has the major advantage of completely avoiding the storage in memory of the isosurface vertices and triangles.

  17. Isosurface Extraction in Time-Varying Fields Using a Temporal Hierarchical Index Tree

    NASA Technical Reports Server (NTRS)

    Shen, Han-Wei; Gerald-Yamasaki, Michael (Technical Monitor)

    1998-01-01

    Many high-performance isosurface extraction algorithms have been proposed in the past several years as a result of intensive research efforts. When applying these algorithms to large-scale time-varying fields, the storage overhead incurred from storing the search index often becomes overwhelming. This paper proposes an algorithm for locating isosurface cells in time-varying fields. We devise a new data structure, called the Temporal Hierarchical Index Tree, which utilizes the temporal coherence that exists in a time-varying field and adaptively coalesces the cells' extreme values over time; the resulting extreme values are then used to create the isosurface cell search index. For a typical time-varying scalar data set, not only does this temporal hierarchical index tree require much less storage space, but the amount of I/O required to access the indices from disk at different time steps is also substantially reduced. We illustrate the utility and speed of our algorithm with data from several large-scale time-varying CFD simulations. Our algorithm can achieve more than 80% disk-space savings when compared with existing techniques, while the isosurface extraction time is nearly optimal.
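
    The storage saving comes from coalescing per-time-step (min, max) extremes up a binary tree over time: a subtree is dropped when the union range at the parent costs little extra precision. A minimal single-cell C++ sketch of that trade-off (the tolerance rule and names are illustrative, not the paper's exact criterion):

    ```cpp
    // Minimal sketch: coalescing per-time-step (min, max) extremes of one cell
    // into a binary tree over time; a subtree collapses when the union range
    // stays within a tolerance of every step it covers.
    #include <algorithm>
    #include <cstdio>
    #include <memory>
    #include <utility>
    #include <vector>

    struct TNode {
        int t0, t1;                                    // time-step span [t0, t1]
        float vmin, vmax;                              // coalesced extremes
        std::unique_ptr<TNode> left, right;
    };

    std::unique_ptr<TNode> build(const std::vector<std::pair<float,float>>& mm,
                                 int t0, int t1, float tol) {
        auto n = std::make_unique<TNode>();
        n->t0 = t0; n->t1 = t1;
        n->vmin = mm[t0].first; n->vmax = mm[t0].second;
        for (int t = t0 + 1; t <= t1; ++t) {
            n->vmin = std::min(n->vmin, mm[t].first);
            n->vmax = std::max(n->vmax, mm[t].second);
        }
        bool tight = true;                             // does coalescing lose little?
        for (int t = t0; t <= t1 && tight; ++t)
            tight = mm[t].first - n->vmin <= tol && n->vmax - mm[t].second <= tol;
        if (!tight && t0 < t1) {                       // keep children only if needed
            int mid = (t0 + t1) / 2;
            n->left  = build(mm, t0, mid, tol);
            n->right = build(mm, mid + 1, t1, tol);
        }
        return n;
    }

    int main() {
        std::vector<std::pair<float,float>> mm = {     // two regimes over 8 steps
            {0.0f,1.0f}, {0.1f,0.9f}, {0.0f,1.1f}, {0.2f,1.0f},
            {5.0f,6.0f}, {5.1f,6.2f}, {5.0f,6.1f}, {4.9f,6.0f}};
        auto root = build(mm, 0, (int)mm.size() - 1, 0.5f);
        std::printf("root coalesced: %s\n", root->left ? "no" : "yes");
    }
    ```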

  18. Order-of-magnitude faster isosurface rendering in software on a PC than using dedicated general-purpose rendering hardware

    NASA Astrophysics Data System (ADS)

    Grevera, George J.; Udupa, Jayaram K.; Odhner, Dewey

    1999-05-01

    The purpose of this work is to compare the speed of isosurface rendering in software with that using dedicated hardware. Input data consists of 10 different objects from various parts of the body and various modalities with a variety of surface sizes and shapes. The software rendering technique consists of a particular method of voxel-based surface rendering, called shell rendering. The hardware method is OpenGL-based and uses the surfaces constructed from our implementation of the 'Marching Cubes' algorithm. The hardware environment consists of a variety of platforms, including a Sun Ultra I with a Creator3D graphics card and a Silicon Graphics Reality Engine II, both with polygon rendering hardware, and a 300 MHz Pentium PC. The results indicate that the software method was 18 to 31 times faster than any of the hardware rendering methods. This work demonstrates that a software implementation of a particular rendering algorithm can outperform dedicated hardware. We conclude that for medical surface visualization, expensive dedicated hardware engines are not required. More importantly, available software algorithms on a 300 MHz Pentium PC outperform the speed of rendering via hardware engines by a factor of 18 to 31.

  19. A Unified Air-Sea Visualization System: Survey on Gridding Structures

    NASA Technical Reports Server (NTRS)

    Anand, Harsh; Moorhead, Robert

    1995-01-01

    The goal is to develop a Unified Air-Sea Visualization System (UASVS) to enable the rapid fusion of observational, archival, and model data for verification and analysis. To design and develop UASVS, modelers were polled to determine the gridding structures and visualization systems used, and their needs with respect to visual analysis. A basic UASVS requirement is to allow a modeler to explore multiple data sets within a single environment, or to interpolate multiple datasets onto one unified grid. From this survey, the UASVS should be able to visualize 3D scalar/vector fields; render isosurfaces; visualize arbitrary slices of the 3D data; visualize data defined on spectral element grids with the minimum number of interpolation stages; render contours; produce 3D vector plots and streamlines; provide unified visualization of satellite images, observations and model output overlays; display the visualization on a projection of the user's choice; implement functions so the user can derive diagnostic values; animate the data to see the time-evolution; animate ocean and atmosphere at different rates; store the record of cursor movement, smooth the path, and animate a window around the moving path; repeatedly start and stop the visual time-stepping; generate VHS tape animations; work on a variety of workstations; and allow visualization across clusters of workstations and scalable high performance computer systems.

  20. Grid-based Visualization Framework

    NASA Astrophysics Data System (ADS)

    Thiebaux, M.; Tangmunarunkit, H.; Kesselman, C.

    2003-12-01

    Advances in science and engineering have put high demands on tools for high-performance large-scale visual data exploration and analysis. For example, earthquake scientists can now study earthquake phenomena from first-principle physics-based simulations. These simulations can generate large amounts of data, with possibly high spatial resolution and long time series. Single-system visualization software running on commodity machines cannot scale up to the large amounts of data generated by these simulations. To address this problem, we propose a flexible and extensible Grid-based visualization framework for time-critical, interactively controlled visual browsing of spatially and temporally large datasets in a Grid environment. Our framework leverages Grid resources for scalable computation and data storage to maintain performance and interactivity with large visualization jobs. Our framework utilizes Globus Toolkit 2.4 components for security (i.e., GSI), resource allocation and management (i.e., DUROC, GRAM) and communication (i.e., Globus-IO) to couple commodity desktops with remote, scalable storage and computational resources in a Grid for interactive data exploration. There are two major components in this framework---Grid Data Transport (GDT) and the Grid Visualization Utility (GVU). GDT provides libraries for performing parallel data filtering and parallel data exchange among Grid resources. GDT allows arbitrary data filtering to be integrated into the system. It also facilitates multi-tiered pipeline topology construction of compute resources and displays. In addition to scientific visualization applications, GDT can be used to support other applications that require parallel processing and parallel transfer of partially ordered independent files, such as file-set transfer. On top of GDT, we have developed the Grid Visualization Utility (GVU), which is designed to assist visualization dataset management, including file formatting, data transport and automatic

  1. A scalable visualization environment for the correlation of radiological and histopathological data at multiple levels of resolution.

    PubMed

    Annese, Jacopo; Weber, Philip

    2009-01-01

    Until the introduction of non-invasive imaging techniques, the representation of anatomy and pathology relied solely on gross dissection and histological staining. Computerized Tomography (CT) and Magnetic Resonance Imaging (MRI) protocols allow for the clinical evaluation of anatomical images derived from complementary modalities, thereby increasing the reliability of the diagnosis and the prognosis of disease. Despite the significant improvements in image contrast and resolution of MRI, autopsy and classical histopathological analysis are still indispensable for the correct diagnosis of specific diseases. It is therefore important to be able to correlate multiple images from different modalities, in vivo and postmortem, in order to validate non-invasive imaging markers of disease. To that effect, we have developed a methodological pipeline and a visualization environment that allow for the concurrent observation of both macroscopic and microscopic image data relating to the same patient. We describe these applications and sample data relating to the study of the anatomy and disease of the Central Nervous System (CNS). The brain is approached as an organ with a complex 3-dimensional (3-D) architecture that can only be effectively studied by combining observation and analysis at the system level as well as at the cellular level. Our computational and visualization environment allows seamless navigation through multiple layers of neurological data that are accessible quickly and simultaneously. PMID:19377104

  2. An ISO-surface folding analysis method applied to premature neonatal brain development

    NASA Astrophysics Data System (ADS)

    Rodriguez-Carranza, Claudia E.; Rousseau, Francois; Iordanova, Bistra; Glenn, Orit; Vigneron, Daniel; Barkovich, James; Studholme, Colin

    2006-03-01

    In this paper we describe the application of folding measures to tracking in vivo cortical brain development in premature neonatal brain anatomy. The outer gray matter and the gray-white matter interface surfaces were extracted from semi-interactively segmented high-resolution T1 MRI data. Nine curvature- and geometric descriptor-based folding measures were applied to six premature infants, aged 28-37 weeks, using a direct voxelwise iso-surface representation. We have shown that using such an approach it is feasible to extract meaningful surfaces of adequate quality from typical clinically acquired neonatal MRI data. We have shown that most of the folding measures, including a new proposed measure, are sensitive to changes in age and therefore applicable in developing a model that tracks development in premature infants. For the first time gyrification measures have been computed on the gray-white matter interface and on cases whose age is representative of a period of intense brain development.

  3. Kd-Jump: a path-preserving stackless traversal for faster isosurface raytracing on GPUs.

    PubMed

    Hughes, David M; Lim, Ik Soo

    2009-01-01

    Stackless traversal techniques are often used to circumvent memory bottlenecks by avoiding a stack and replacing return traversal with extra computation. This paper addresses whether stackless traversal approaches are useful on newer hardware and technology (such as CUDA). To this end, we present a novel stackless approach for implicit kd-trees, which exploits the benefits of index-based node traversal without incurring extra node visitation. This approach, which we term Kd-Jump, enables the traversal to immediately return to the next valid node, as a stack does, without the extra node visitation that kd-restart incurs. Moreover, Kd-Jump does not require global memory (stack) at all, only a small matrix in fast constant memory. We report that Kd-Jump outperforms a stack by 10 to 20% and kd-restart by 100%. We also present a Hybrid Kd-Jump, which utilizes a volume stepper for leaf testing and a run-time depth threshold to define where kd-tree traversal stops and volume stepping occurs. By using both methods, we gain the benefits of empty-space removal, fast texture caching, and the real-time ability to determine the best threshold for the current isosurface and view direction. PMID:19834233
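
    A toy illustration of the index arithmetic that enables this kind of stackless traversal in an implicit, heap-indexed kd-tree (root 1, children 2i and 2i+1): a bitmask of depths with a pending far child replaces the stack, and "returning" is a shift plus a sibling flip. This only sketches the idea on a complete binary tree; it is not the authors' GPU implementation:

      def dfs_jump(depth_max, near_first):
          """Yield heap indices in depth-first order without a stack.
          near_first(node) -> 0/1, which child to descend into first
          (in a real ray traversal this depends on the ray direction)."""
          node, depth, pending = 1, 0, 0
          while True:
              yield node
              if depth < depth_max:                  # descend into the near child
                  b = near_first(node)
                  pending |= 1 << (depth + 1)        # far sibling still to visit
                  node, depth = 2 * node + b, depth + 1
              elif pending:                          # "jump" to deepest pending node
                  k = pending.bit_length() - 1       # deepest depth with a far child
                  pending &= ~(1 << k)
                  node, depth = (node >> (depth - k)) ^ 1, k
              else:
                  return

      order = list(dfs_jump(depth_max=3, near_first=lambda n: n % 2))
      assert len(order) == 2 ** 4 - 1 and len(set(order)) == len(order)
      print(order)   # every node visited exactly once, near child always first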

  4. 3D motion tracking of the heart using Harmonic Phase (HARP) isosurfaces

    NASA Astrophysics Data System (ADS)

    Soliman, Abraam S.; Osman, Nael F.

    2010-03-01

    Tags are non-invasive features induced in the heart muscle that enable the tracking of heart motion. Each tag line, in fact, corresponds to a 3D tag surface that deforms with the heart muscle during the cardiac cycle. Tracking tag surface deformation is useful for the analysis of left ventricular motion. Cardiac material markers (Kerwin et al, MIA, 1997) can be obtained from the intersections of orthogonal surfaces, which can be reconstructed from short- and long-axis tagged images. The proposed method uses the Harmonic Phase (HARP) method for tracking tag lines corresponding to a specific harmonic phase value; the reconstruction of grid tag surfaces is then achieved by a Delaunay triangulation-based interpolation of the sparse tag points. Having three different tag orientations from short- and long-axis images, the proposed method showed the deformation of 3D tag surfaces during the cardiac cycle. Previous work on tag surface reconstruction was restricted to the "dark" tag lines; the use of HARP as proposed, however, enables the reconstruction of isosurfaces based on their harmonic phase values. The use of HARP also provides a fast and accurate way to identify and track tag lines, and hence to generate the surfaces.
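
    A rough sketch of the reconstruction step under simplifying assumptions (tag heights over a plane rather than true 3D surfaces, and SciPy available); SciPy's 'linear' griddata interpolant is itself built on a Delaunay triangulation of the scattered points:

      import numpy as np
      from scipy.interpolate import griddata

      rng = np.random.default_rng(0)
      xy = rng.uniform(0, 10, size=(200, 2))     # sparse tracked tag points
      z = np.sin(xy[:, 0]) + 0.1 * xy[:, 1]      # toy surface height per point

      gx, gy = np.mgrid[0:10:64j, 0:10:64j]      # regular reconstruction grid
      surface = griddata(xy, z, (gx, gy), method="linear")
      print(np.nanmin(surface), np.nanmax(surface))   # NaN outside the hull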

  5. Scientific Visualization for Atmospheric Data Analysis in Collaborative Virtual Environments

    NASA Astrophysics Data System (ADS)

    Engelke, Wito; Flatken, Markus; Garcia, Arturo S.; Bar, Christian; Gerndt, Andreas

    2016-04-01

    terabytes. The combination of different data sources (e.g., MOLA, HRSC, HiRISE) and selection of presented data (e.g., infrared, spectral, imagery) is also supported. Furthermore, the data is presented unchanged and with the highest possible resolution for the target setup (e.g., power-wall, workstation, laptop) and view distance. The visualization techniques for the volumetric data sets can handle VTK [6] based data sets and also support different grid types as well as a time component. In detail, the integrated volume rendering uses a GPU based ray casting algorithm which was adapted to work in spherical coordinate systems. This approach results in interactive frame-rates without compromising visual fidelity. Besides direct visualization via volume rendering the prototype supports interactive slicing, extraction of iso-surfaces and probing. The latter can also be used for side-by-side comparison and on-the-fly diagram generation within the application. Similarly to the surface data, a combination of different data sources is supported as well. For example, the extracted iso-surface of a scalar pressure field can be used for the visualization of the temperature. The software development is supported by the ViSTA VR-toolkit [7] and supports different target systems as well as a wide range of VR-devices. Furthermore, the prototype is scalable to run on laptops, workstations and cluster setups. REFERENCES [1] A. S. Garcia, D. J. Roberts, T. Fernando, C. Bar, R. Wolff, J. Dodiya, W. Engelke, and A. Gerndt, "A collaborative workspace architecture for strengthening collaboration among space scientists," in IEEE Aerospace Conference, (Big Sky, Montana, USA), 7-14 March 2015. [2] W. Engelke, "Mars Cartography VR System 2/3." German Aerospace Center (DLR), 2015. Project Deliverable D4.2. [3] E. Hivon, F. K. Hansen, and A. J. Banday, "The healpix primer," arXiv preprint astro-ph/9905275, 1999. [4] K. M. Gorski, E. Hivon, A. Banday, B. D. Wandelt, F. K. Hansen, M. Reinecke, and M

  6. Nicer-slicer-dicer: an interactive volume visualization tool

    NASA Astrophysics Data System (ADS)

    Moran, Patrick J.

    1994-04-01

    Nicer-Slicer-Dicer (NSD) is a new, interactive tool for volume visualization. In addition to single scalar volumes, NSD can visualize multivariate and time series data. NSD achieves high rendering rates by compositing slice planes back-to-front while blending pixels through the use of alpha buffering hardware. This technique introduces some small artifacts, but it works remarkably well with large data sets where voxels represent small areas in image space. The plane compositing technique also allows one to generate an isosurface and volume rendering simultaneously by interleaving planes and isosurface polygons.
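
    A minimal CPU sketch of the back-to-front slice compositing described above, using the standard "over" operator in plain NumPy rather than alpha-buffering hardware:

      import numpy as np

      def composite(slices_rgba):
          """slices_rgba: list of (H, W, 4) float arrays, ordered back to front."""
          out = np.zeros_like(slices_rgba[0][..., :3])
          for s in slices_rgba:                  # "over" operator, back to front
              rgb, a = s[..., :3], s[..., 3:4]
              out = rgb * a + out * (1.0 - a)
          return out

      slices = [np.random.rand(4, 4, 4) for _ in range(8)]
      print(composite(slices).shape)             # (4, 4, 3) final image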

  7. Finite Element Results Visualization for Unstructured Grids

    SciTech Connect

    Speck, Douglas E.; Dovey, Donald J.

    1996-07-15

    GRIZ is a general-purpose post-processing application supporting interactive visualization of finite element analysis results on unstructured grids. In addition to basic pseudocolor renderings of state variables over the mesh surface, GRIZ provides modern visualization techniques such as isocontours and isosurfaces, cutting planes, vector field display, and particle traces. GRIZ accepts both command-line and mouse-driven input, and is portable to virtually any UNIX platform which provides Motif and OpenGL libraries.

  8. Parallel Visualization Co-Processing of Overnight CFD Propulsion Applications

    NASA Technical Reports Server (NTRS)

    Edwards, David E.; Haimes, Robert

    1999-01-01

    An interactive visualization system pV3 is being developed for the investigation of advanced computational methodologies employing visualization and parallel processing for the extraction of information contained in large-scale transient engineering simulations. Visual techniques for extracting information from the data in terms of cutting planes, iso-surfaces, particle tracing and vector fields are included in this system. This paper discusses improvements to the pV3 system developed under NASA's Affordable High Performance Computing project.

  9. Efficient visualization of unsteady and huge scalar and vector fields

    NASA Astrophysics Data System (ADS)

    Vetter, Michael; Olbrich, Stephan

    2016-04-01

    and methods, we are developing a stand-alone post-processor, adding further data structures and mapping algorithms, and cooperating with the ICON developers and users. With the implementation of a DSVR-based post-processor, a milestone was achieved. By using the DSVR post-processor, the three processes mentioned above are completely separated: the data set is processed in batch mode - e.g., on the same supercomputer on which the data is generated - and the interactive 3D rendering is done afterwards on the scientist's local system. At the current state of implementation, the DSVR post-processor supports the generation of isosurfaces and colored slicers on volume data set time series based on rectilinear grids, as well as the visualization of pathlines on time-varying flow fields based on either rectilinear grids or prism grids. The software implementation and evaluation is done on the supercomputers at DKRZ, including scalability tests using ICON output files in NetCDF format. The next milestones will be (a) the in-situ integration of the DSVR library in the ICON model and (b) the implementation of an isosurface algorithm for prism grids.

  10. Muster: Massively Scalable Clustering

    2010-05-20

    Muster is a framework for scalable cluster analysis. It includes implementations of classic K-Medoids partitioning algorithms, as well as infrastructure for making these algorithms run scalably on very large systems. In particular, Muster contains algorithms such as CAPEK (described in reference 1) that are capable of clustering highly distributed data sets in-place on a hundred thousand or more processes.
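
    For orientation, a tiny serial K-Medoids loop (a PAM-style swap search) from the algorithm family Muster parallelizes; CAPEK's distributed formulation is not reproduced here, and this sketch assumes no cluster ever empties:

      import numpy as np

      def kmedoids(points, k, iters=20, seed=0):
          rng = np.random.default_rng(seed)
          dist = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
          medoids = rng.choice(len(points), k, replace=False)
          for _ in range(iters):
              labels = np.argmin(dist[:, medoids], axis=1)   # assignment step
              new = np.array([                               # update step: best
                  members[np.argmin(dist[np.ix_(members, members)].sum(0))]
                  for members in (np.nonzero(labels == c)[0] for c in range(k))
              ])                                             # medoid per cluster
              if np.array_equal(new, medoids):
                  break
              medoids = new
          return medoids, labels

      pts = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
      print(kmedoids(pts, k=2)[0])               # indices of the two medoids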

  11. Scalable rendering on PC clusters

    SciTech Connect

    Wylie, Brian N.; Lewis, Vasily; Shirley, David Noyes; Pavlakos, Constantine

    2000-04-25

    This case study presents initial results from research targeted at the development of cost-effective scalable visualization and rendering technologies. The implementations of two 3D graphics libraries based on the popular sort-last and sort-middle parallel rendering techniques are discussed. An important goal of these implementations is to provide scalable rendering capability for extremely large datasets (>> 5 million polygons). Applications can use these libraries for either run-time visualization, by linking to an existing parallel simulation, or for traditional post-processing by linking to an interactive display program. The use of parallel, hardware-accelerated rendering on commodity hardware is leveraged to achieve high performance. Current performance results show that, using current hardware (a small 16-node cluster), they can utilize up to 85% of the aggregate graphics performance and achieve rendering rates in excess of 20 million polygons/second using OpenGL® with lighting, Gouraud shading, and individually specified triangles (not t-stripped).

  12. Visualizing the Positive-Negative Interface of Molecular Electrostatic Potentials as an Educational Tool for Assigning Chemical Polarity

    ERIC Educational Resources Information Center

    Schonborn, Konrad; Host, Gunnar; Palmerius, Karljohan

    2010-01-01

    To help in interpreting the polarity of a molecule, charge separation can be visualized by mapping the electrostatic potential at the van der Waals surface using a color gradient or by indicating positive and negative regions of the electrostatic potential using different colored isosurfaces. Although these visualizations capture the molecular…

  13. Scalable coherent interface

    SciTech Connect

    Alnaes, K.; Kristiansen, E.H.; Gustavson, D.B.; James, D.V.

    1990-01-01

    The Scalable Coherent Interface (IEEE P1596) is establishing an interface standard for very high performance multiprocessors, supporting a cache-coherent-memory model scalable to systems with up to 64K nodes. This Scalable Coherent Interface (SCI) will supply a peak bandwidth per node of 1 GigaByte/second. The SCI standard should facilitate assembly of processor, memory, I/O and bus bridge cards from multiple vendors into massively parallel systems with throughput far above what is possible today. The SCI standard encompasses two levels of interface, a physical level and a logical level. The physical level specifies electrical, mechanical and thermal characteristics of connectors and cards that meet the standard. The logical level describes the address space, data transfer protocols, cache coherence mechanisms, synchronization primitives and error recovery. In this paper we address logical level issues such as packet formats, packet transmission, transaction handshake, flow control, and cache coherence. 11 refs., 10 figs.

  14. Sandia Scalable Encryption Software

    1997-08-13

    Sandia Scalable Encryption Library (SSEL) Version 1.0 is a library of functions that implement Sandia's scalable encryption algorithm. This algorithm is used to encrypt Asynchronous Transfer Mode (ATM) data traffic, and is capable of operating on an arbitrary number of bits at a time (which permits scaling via parallel implementations), while being interoperable with differently scaled versions of this algorithm. The routines in this library implement 8-bit and 32-bit versions of a non-linear mixer which is compatible with Sandia's hardware-based ATM encryptor.

  15. Scalable Parallel Utopia

    SciTech Connect

    King, D.; Pierson, L.

    1998-10-01

    This contribution proposes a 128 bit wide interface structure clocked at approximately 80 MHz that will operate at 10 Gbps as a strawman for an OC-192c Utopia Specification. In addition, the concept of scalable width of data transfers in order to maintain manageably low clock rates is proposed.
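
    A quick arithmetic check (ours, not part of the record) confirms that the proposed interface width and clock rate land at the stated throughput:

      $128\ \text{bits} \times 80\ \text{MHz} = 10\,240\ \text{Mb/s} \approx 10\ \text{Gbps},$

    which is consistent with the roughly 9.95 Gbps OC-192c line rate.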

  16. Performance of scalable coding in depth domain

    NASA Astrophysics Data System (ADS)

    Sjöström, Mårten; Karlsson, Linda S.

    2010-02-01

    Common autostereoscopic 3D displays are based on multi-view projection. The diversity of resolutions and number of views of such displays implies a necessary flexibility of 3D content formats in order to make broadcasting efficient. Furthermore, distribution of content over a heterogeneous network should adapt to an available network capacity. Present scalable video coding provides the ability to adapt to network conditions; it allows for quality, temporal and spatial scaling of 2D video. Scalability for 3D data extends this list to the depth and the view domains. We have introduced scalability with respect to depth information. Our proposed scheme is based on the multi-view-plus-depth format where the center view data are preserved, and side views are extracted in enhancement layers depending on depth values. We investigate the performance of various layer assignment strategies: number of layers, and distribution of layers in depth, either based on equal number of pixels or histogram characteristics. We further consider the variation in distortion caused by encoder parameters. The results are evaluated considering overall distortion versus bit rate, distortion per enhancement layer, as well as visual quality appearance. Scalability with respect to depth (and views) allows for an increased number of quality steps; the cost is a slight increase of required capacity for the whole sequence. The main advantage is, however, an improved quality for objects close to the viewer, even if overall quality is worse.

  17. Visualization of a Large Set of Hydrogen Atomic Orbital Contours Using New and Expanded Sets of Parametric Equations

    ERIC Educational Resources Information Center

    Rhile, Ian J.

    2014-01-01

    Atomic orbitals are a theme throughout the undergraduate chemistry curriculum, and visualizing them has been a theme in this journal. Contour plots as isosurfaces or contour lines in a plane are the most familiar representations of the hydrogen wave functions. In these representations, a surface of a fixed value of the wave function ψ is plotted…

  18. Measuring Video Quality on Full Scalability of H.264/AVC Scalable Video Coding

    NASA Astrophysics Data System (ADS)

    Kim, Cheon Seog; Jin, Sung Ho; Seo, Doug Jun; Ro, Yong Man

    In heterogeneous network environments, it is mandatory to measure the grade of the video quality in order to guarantee the optimal quality of the video streaming service. Quality of Service (QoS) has become a key issue for service acceptability and user satisfaction. Although there have been many recent works regarding video quality, most of them have been limited to measuring quality within temporal and Signal-to-Noise Ratio (SNR) scalability. H.264/AVC Scalable Video Coding (SVC) has emerged and has been developed to support full scalability. This includes spatial, temporal, and SNR scalability, each of which shows different visual effects. The aim of this paper is to define and develop a novel video quality metric allowing full scalability. It focuses on the effect of frame rate, SNR, the change of spatial resolution, and motion characteristics using subjective quality assessment. Experimental results show the proposed quality metric has a high correlation to subjective quality and that it is useful in determining the video quality of SVC.

  19. AstroBlend: Visualization package for use with Blender

    NASA Astrophysics Data System (ADS)

    Naiman, J. P.

    2015-12-01

    AstroBlend is a visualization package for use in the three-dimensional animation and modeling software Blender. It reads data in via a text file or can use pre-fabricated isosurface files stored as Wavefront OBJ files. AstroBlend supports a variety of codes such as FLASH (ascl:1010.082), Enzo (ascl:1010.072), and Athena (ascl:1010.014), and combines artistic 3D models with computational astrophysics datasets to create models and animations.

  1. A Scalable Analysis Toolkit

    NASA Technical Reports Server (NTRS)

    Aiken, Alexander

    2001-01-01

    The Scalable Analysis Toolkit (SAT) project aimed to demonstrate that it is feasible and useful to statically detect software bugs in very large systems. The technical focus of the project was on a relatively new class of constraint-based techniques for software analysis, where the desired facts about programs (e.g., the presence of a particular bug) are phrased as constraint problems to be solved. At the beginning of this project, the most successful forms of formal software analysis were limited forms of automatic theorem proving (as exemplified by the analyses used in language type systems and optimizing compilers), semi-automatic theorem proving for full verification, and model checking. With a few notable exceptions these approaches had not been demonstrated to scale to software systems of even 50,000 lines of code. Realistic approaches to large-scale software analysis cannot hope to make every conceivable formal method scale. Thus, the SAT approach is to mix different methods in one application by using coarse and fast but still adequate methods at the largest scales, and reserving the use of more precise but also more expensive methods at smaller scales for critical aspects (that is, aspects critical to the analysis problem under consideration) of a software system. The principled method proposed for combining a heterogeneous collection of formal systems with different scalability characteristics is mixed constraints. This idea had been used previously in small-scale applications with encouraging results: using mostly coarse methods and narrowly targeted precise methods, useful information (meaning the discovery of bugs in real programs) was obtained with excellent scalability.

  2. Scalable solvers and applications

    SciTech Connect

    Ribbens, C J

    2000-10-27

    The purpose of this report is to summarize research activities carried out under Lawrence Livermore National Laboratory (LLNL) research subcontract B501073. This contract supported the principal investigator (PI), Dr. Calvin Ribbens, during his sabbatical visit to LLNL from August 1999 through June 2000. Results and conclusions from the work are summarized below in two major sections. The first section covers contributions to the Scalable Linear Solvers and hypre projects in the Center for Applied Scientific Computing (CASC). The second section describes results from collaboration with Patrice Turchi of LLNL's Chemistry and Materials Science Directorate (CMS). A list of publications supported by this subcontract appears at the end of the report.

  3. Scalable optical quantum computer

    SciTech Connect

    Manykin, E A; Mel'nichenko, E V

    2014-12-31

    A way of designing a scalable optical quantum computer based on the photon echo effect is proposed. Individual rare earth ions Pr³⁺, regularly located in the lattice of the orthosilicate (Y₂SiO₅) crystal, are suggested to be used as optical qubits. Operations with qubits are performed using coherent and incoherent laser pulses. The operation protocol includes both the method of measurement-based quantum computations and the technique of optical computations. Modern hybrid photon echo protocols, which provide a sufficient quantum efficiency when reading recorded states, are considered as most promising for quantum computations and communications.

  4. Declarative Visualization Queries

    NASA Astrophysics Data System (ADS)

    Pinheiro da Silva, P.; Del Rio, N.; Leptoukh, G. G.

    2011-12-01

    necessarily entirely exposed to scientists writing visualization queries, facilitates the automated construction of visualization pipelines. VisKo queries have been successfully used in support of visualization scenarios from Earth Science domains, including velocity model isosurfaces, gravity data rasters, and contour map renderings. The synergistic environment provided by our CYBER-ShARE initiative at the University of Texas at El Paso has allowed us to work closely with Earth Science experts, who have provided both our test data and validation that executing VisKo queries returns visualizations usable for data analysis. Additionally, we have employed VisKo queries to support visualization scenarios associated with Giovanni, an online platform for data analysis developed by NASA GES DISC. VisKo-enhanced visualizations included time-series plotting of aerosol data as well as contour and raster map generation of gridded brightness-temperature data.

  5. SFT: Scalable Fault Tolerance

    SciTech Connect

    Petrini, Fabrizio; Nieplocha, Jarek; Tipparaju, Vinod

    2006-04-15

    In this paper we will present a new technology that we are currently developing within the SFT: Scalable Fault Tolerance FastOS project which seeks to implement fault tolerance at the operating system level. Major design goals include dynamic reallocation of resources to allow continuing execution in the presence of hardware failures, very high scalability, high efficiency (low overhead), and transparency—requiring no changes to user applications. Our technology is based on a global coordination mechanism, that enforces transparent recovery lines in the system, and TICK, a lightweight, incremental checkpointing software architecture implemented as a Linux kernel module. TICK is completely user-transparent and does not require any changes to user code or system libraries; it is highly responsive: an interrupt, such as a timer interrupt, can trigger a checkpoint in as little as 2.5μs; and it supports incremental and full checkpoints with minimal overhead—less than 6% with full checkpointing to disk performed as frequently as once per minute.

  6. Scalable hierarchical video summary and search

    NASA Astrophysics Data System (ADS)

    Sull, Sanghoon; Kim, Jung-Rim; Kim, Yunam; Chang, Hyun S.; Lee, Sang U.

    2000-12-01

    Recently, the huge amount of video data available in digital form has given users more ubiquitous access to visual information than ever. To efficiently manage such huge amounts of video data, we need tools such as video summarization and search. In this paper, we propose a novel scheme allowing for both scalable hierarchical video summary and efficient retrieval by introducing a notion of fidelity. The notion of fidelity in the tree-structured key frame hierarchy describes how well the key frames at one level are represented by the parent key frame, relative to the other children of the parent. The experimental results demonstrate the feasibility of our scheme.
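
    One plausible formalization of the fidelity notion, sketched for illustration only (the paper's exact definition may differ): a parent key frame represents its children well when its worst-case visual distance to any child is small:

      import numpy as np

      def fidelity(parent, children, dist):
          """Score in (0, 1]; 1 means the parent is a perfect representative."""
          worst = max(dist(parent, c) for c in children)
          return 1.0 / (1.0 + worst)

      # Toy key frames as feature vectors (e.g., color histograms), L2 distance.
      frames = {name: np.random.rand(16) for name in ("p", "c1", "c2", "c3")}
      l2 = lambda a, b: float(np.linalg.norm(a - b))
      print(fidelity(frames["p"], [frames["c1"], frames["c2"], frames["c3"]], l2))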

  8. Scalable Domain Decomposed Monte Carlo Particle Transport

    NASA Astrophysics Data System (ADS)

    O'Brien, Matthew Joseph

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation. The main algorithms we consider are: • Domain decomposition of constructive solid geometry: enables extremely large calculations in which the background geometry is too large to fit in the memory of a single computational node. • Load balancing: keeps the workload per processor as even as possible so the calculation runs efficiently. • Global particle find: if particles are on the wrong processor, globally resolve their locations to the correct processor based on particle coordinate and background domain. • Other supporting algorithms: visualizing constructive solid geometry, sourcing particles, deciding when particle streaming communication is complete, and spatial redecomposition. These algorithms are some of the most important parallel algorithms required for domain decomposed Monte Carlo particle transport. We demonstrate that our previous algorithms were not scalable, prove that our new algorithms are scalable, and run some of the algorithms up to 2 million MPI processes on the Sequoia supercomputer.
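
    A serial toy sketch of the coordinate lookup at the heart of a global particle find: each rank owns an axis-aligned box, and a particle that leaves its rank's box is matched to its owner by coordinate. A real implementation resolves owners with MPI communication, which is omitted here; the domain layout is invented for the example:

      import numpy as np

      domains = [((0, 0), (5, 10)), ((5, 0), (10, 10))]   # (lo, hi) box per rank

      def owner(p):
          """Return the rank whose domain box contains particle coordinate p."""
          for rank, (lo, hi) in enumerate(domains):
              if all(l <= c < h for c, l, h in zip(p, lo, hi)):
                  return rank
          raise ValueError(f"particle {p} escaped the global domain")

      particles = np.array([[1.0, 2.0], [7.5, 3.0], [4.9, 9.9]])
      print([owner(p) for p in particles])                # -> [0, 1, 0]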

  9. PADMA: PArallel Data Mining Agents for scalable text classification

    SciTech Connect

    Kargupta, H.; Hamzaoglu, I.; Stafford, B.

    1997-03-01

    This paper introduces PADMA (PArallel Data Mining Agents), a parallel agent based system for scalable text classification. PADMA contains modules for (1) parallel data accessing operations, (2) parallel hierarchical clustering, and (3) web-based data visualization. This paper introduces the general architecture of PADMA and presents a detailed description of its different modules.

  10. Foveation scalable video coding with automatic fixation selection.

    PubMed

    Wang, Zhou; Lu, Ligang; Bovik, Alan Conrad

    2003-01-01

    Image and video coding is an optimization problem. A successful image and video coding algorithm delivers a good tradeoff between visual quality and other coding performance measures, such as compression, complexity, scalability, robustness, and security. In this paper, we follow two recent trends in image and video coding research. One is to incorporate human visual system (HVS) models to improve the current state-of-the-art of image and video coding algorithms by better exploiting the properties of the intended receiver. The other is to design rate scalable image and video codecs, which allow the extraction of coded visual information at continuously varying bit rates from a single compressed bitstream. Specifically, we propose a foveation scalable video coding (FSVC) algorithm which supplies good quality-compression performance as well as effective rate scalability. The key idea is to organize the encoded bitstream to provide the best decoded video at an arbitrary bit rate in terms of foveated visual quality measurement. A foveation-based HVS model plays an important role in the algorithm. The algorithm is adaptable to different applications, such as knowledge-based video coding and video communications over time-varying, multiuser and interactive networks. PMID:18237905

  11. Optimized scalable network switch

    DOEpatents

    Blumrich, Matthias A.; Chen, Dong; Coteus, Paul W.

    2010-02-23

    In a massively parallel computing system having a plurality of nodes configured in m multi-dimensions, each node including a computing device, a method for routing packets towards their destination nodes is provided which includes generating at least one of a 2m plurality of compact bit vectors containing information derived from downstream nodes. A multilevel arbitration process in which downstream information stored in the compact vectors, such as link status information and fullness of downstream buffers, is used to determine a preferred direction and virtual channel for packet transmission. Preferred direction ranges are encoded and virtual channels are selected by examining the plurality of compact bit vectors. This dynamic routing method eliminates the necessity of routing tables, thus enhancing scalability of the switch.

  12. Optimized scalable network switch

    DOEpatents

    Blumrich, Matthias A.; Chen, Dong; Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Heidelberger, Philip; Steinmacher-Burow, Burkhard D.; Takken, Todd E.; Vranas, Pavlos M.

    2007-12-04

    In a massively parallel computing system having a plurality of nodes configured in m multi-dimensions, each node including a computing device, a method for routing packets towards their destination nodes is provided which includes generating at least one of a 2m plurality of compact bit vectors containing information derived from downstream nodes. A multilevel arbitration process in which downstream information stored in the compact vectors, such as link status information and fullness of downstream buffers, is used to determine a preferred direction and virtual channel for packet transmission. Preferred direction ranges are encoded and virtual channels are selected by examining the plurality of compact bit vectors. This dynamic routing method eliminates the necessity of routing tables, thus enhancing scalability of the switch.

  13. Engineering scalable biological systems

    PubMed Central

    2010-01-01

    Synthetic biology is focused on engineering biological organisms to study natural systems and to provide new solutions for pressing medical, industrial and environmental problems. At the core of engineered organisms are synthetic biological circuits that execute the tasks of sensing inputs, processing logic and performing output functions. In the last decade, significant progress has been made in developing basic designs for a wide range of biological circuits in bacteria, yeast and mammalian systems. However, significant challenges in the construction, probing, modulation and debugging of synthetic biological systems must be addressed in order to achieve scalable higher-complexity biological circuits. Furthermore, concomitant efforts to evaluate the safety and biocontainment of engineered organisms and address public and regulatory concerns will be necessary to ensure that technological advances are translated into real-world solutions. PMID:21468204

  14. Oceanographic visualization interactive research tool (OVIRT)

    NASA Astrophysics Data System (ADS)

    Moorhead, Robert J., II; Hamann, Bernd; Everitt, Cass; Jones, Stephen C.; McAllister, Joel; Barlow, Jonathan

    1994-04-01

    The Oceanographic Visualization Interactive Research Tool (OVIRT) was developed to explore the utility of scalar field volume rendering in visualizing environmental ocean data and to extend some of the classical 2D oceanographic displays into a 3D visualization environment. It has five major visualization tools: cutting planes, minicubes, isosurfaces, sonic surfaces, and direct volume rendering. The cutting planes tool provides three orthogonal cutting planes which can be interactively moved through the volume. The minicubes routine renders small cubes whose faces are shaded according to function value. The isosurface tool is conceptually similar to the well-known marching cubes technique. The sonic surfaces are an extension of the 2D surfaces which have been classically used to display acoustic propagation paths and inflection lines within the ocean. The surfaces delineate the extent and axis of the shallow and deep sound channels. The direct volume rendering (DVR) techniques give a global view of the data. Other features include the ability to overlay the shoreline and inlay the bathymetry. There are multiple colormaps, an automatic histogramming feature, a macro and scripting capability, a picking function, and the ability to display animations of DVR imagery. There is a network feature to allow computationally expensive functions to be executed on remote machines.

  15. Customer oriented SNR scalability scheme for scalable video coding

    NASA Astrophysics Data System (ADS)

    Li, Z. G.; Rahardja, S.

    2005-07-01

    Let the whole region be the whole bit rate range that customers are interested in, and a sub-region be a specific bit rate range. The weighting factor of each sub-region is determined according to customers' interest. A new type of region of interest (ROI) is defined for SNR scalability such that the gap between the coding efficiency of the SNR scalability scheme and that of state-of-the-art single-layer coding for a sub-region is a monotonically non-increasing function of its weighting factor. This type of ROI is used as a performance index to design a customer-oriented SNR scalability scheme. Our scheme can be used to achieve an optimal customer-oriented scalable tradeoff (COST). The profit can thus be maximized.

  16. Scalable SCPPM Decoder

    NASA Technical Reports Server (NTRS)

    Quir, Kevin J.; Gin, Jonathan W.; Nguyen, Danh H.; Nguyen, Huy; Nakashima, Michael A.; Moision, Bruce E.

    2012-01-01

    A decoder was developed that decodes a serial concatenated pulse position modulation (SCPPM) encoded information sequence. The decoder takes as input a sequence of four-bit log-likelihood ratios (LLR) for each PPM slot in a codeword via a XAUI 10-Gb/s quad optical fiber interface. If the decoder is unavailable, it passes the LLRs on to the next decoder via a XAUI 10-Gb/s quad optical fiber interface. Otherwise, it decodes the sequence and outputs information bits through a 1-Gb/s Ethernet UDP/IP (User Datagram Protocol/Internet Protocol) interface. The throughput for a single decoder unit is 150 Mb/s at an average of four decoding iterations; by connecting a number of decoder units in series, a decoding rate equal to the aggregate rate is achieved. The unit is controlled through a 1-Gb/s Ethernet UDP/IP interface. This ground station decoder was developed to demonstrate a deep space optical communication link capability, and is unique in its scalable design to achieve real-time SCPPM decoding at the aggregate data rate.

  17. Nexa: a scalable neural simulator with integrated analysis.

    PubMed

    Benjaminsson, Simon; Lansner, Anders

    2012-01-01

    Large-scale neural simulations encompass challenges in simulator design, data handling and understanding of simulation output. As the computational power of supercomputers and the size of network models increase, these challenges become even more pronounced. Here we introduce the experimental scalable neural simulator Nexa, for parallel simulation of large-scale neural network models at a high level of biological abstraction and for exploration of the simulation methods involved. It includes firing-rate models and capabilities to build networks using machine-learning-inspired methods, e.g., for self-organization of network architecture and for structural plasticity. We show scalability up to the size of the largest machines currently available for a number of model scenarios. We further demonstrate simulator integration with online analysis and real-time visualization as scalable solutions for the data handling challenges. PMID:23116128

  18. Scalable hybrid unstructured and structured grid raycasting.

    PubMed

    Muigg, Philipp; Hadwiger, Markus; Doleisch, Helmut; Hauser, Helwig

    2007-01-01

    This paper presents a scalable framework for real-time raycasting of large unstructured volumes that employs a hybrid bricking approach. It adaptively combines original unstructured bricks in important (focus) regions, with structured bricks that are resampled on demand in less important (context) regions. The basis of this focus+context approach is interactive specification of a scalar degree of interest (DOI) function. Thus, rendering always considers two volumes simultaneously: a scalar data volume, and the current DOI volume. The crucial problem of visibility sorting is solved by raycasting individual bricks and compositing in visibility order from front to back. In order to minimize visual errors at the grid boundary, it is always rendered accurately, even for resampled bricks. A variety of different rendering modes can be combined, including contour enhancement. A very important property of our approach is that it supports a variety of cell types natively, i.e., it is not constrained to tetrahedral grids, even when interpolation within cells is used. Moreover, our framework can handle multi-variate data, e.g., multiple scalar channels such as temperature or pressure, as well as time-dependent data. The combination of unstructured and structured bricks with different quality characteristics such as the type of interpolation or resampling resolution in conjunction with custom texture memory management yields a very scalable system. PMID:17968114

  19. iSIGHT-FD scalability test report.

    SciTech Connect

    Clay, Robert L.; Shneider, Max S.

    2008-07-01

    The engineering analysis community at Sandia National Laboratories uses a number of internal and commercial software codes and tools, including mesh generators, preprocessors, mesh manipulators, simulation codes, post-processors, and visualization packages. We define an analysis workflow as the execution of an ordered, logical sequence of these tools. Various forms of analysis (and in particular, methodologies that use multiple function evaluations or samples) involve executing parameterized variations of these workflows. As part of the DART project, we are evaluating various commercial workflow management systems, including iSIGHT-FD from Engineous. This report documents the results of a scalability test that was driven by DAKOTA and conducted on a parallel computer (Thunderbird). The purpose of this experiment was to examine the suitability and performance of iSIGHT-FD for large-scale, parameterized analysis workflows. As the results indicate, we found iSIGHT-FD to be suitable for this type of application.

  20. Scalable Computation of Streamlines on Very Large Datasets

    SciTech Connect

    Pugmire, David; Childs, Hank; Garth, Christoph; Ahern, Sean; Weber, Gunther H.

    2009-09-01

    Understanding vector fields resulting from large scientific simulations is an important and often difficult task. Streamlines, curves that are tangential to a vector field at each point, are a powerful visualization method in this context. Application of streamline-based visualization to very large vector field data represents a significant challenge due to the non-local and data-dependent nature of streamline computation, and requires careful balancing of computational demands placed on I/O, memory, communication, and processors. In this paper we review two parallelization approaches based on established parallelization paradigms (static decomposition and on-demand loading) and present a novel hybrid algorithm for computing streamlines. Our algorithm is aimed at good scalability and performance across the widely varying computational characteristics of streamline-based problems. We conduct performance and scalability studies of all three algorithms on a number of prototypical application problems and demonstrate that our hybrid scheme is able to perform well in different settings.
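
    For reference, the numerical kernel that all of these parallelization schemes must schedule is a simple integrator. A minimal fixed-step fourth-order Runge-Kutta sketch, with an analytic field standing in for the interpolated (and possibly distributed) vector-field lookup:

      import numpy as np

      def streamline(v, seed, h=0.05, steps=200):
          """Integrate p' = v(p) from seed with fixed-step RK4."""
          p = np.asarray(seed, dtype=float)
          pts = [p]
          for _ in range(steps):
              k1 = v(p)
              k2 = v(p + 0.5 * h * k1)
              k3 = v(p + 0.5 * h * k2)
              k4 = v(p + h * k3)
              p = p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
              pts.append(p)
          return np.array(pts)

      swirl = lambda p: np.array([-p[1], p[0]])      # analytic stand-in field
      print(streamline(swirl, seed=[1.0, 0.0])[-1])  # stays near the unit circle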

  1. Scalable cloud without dedicated storage

    NASA Astrophysics Data System (ADS)

    Batkovich, D. V.; Kompaniets, M. V.; Zarochentsev, A. K.

    2015-05-01

    We present a prototype of a scalable computing cloud. It is intended to be deployed on the basis of a cluster without the separate dedicated storage. The dedicated storage is replaced by the distributed software storage. In addition, all cluster nodes are used both as computing nodes and as storage nodes. This solution increases utilization of the cluster resources as well as improves fault tolerance and performance of the distributed storage. Another advantage of this solution is high scalability with a relatively low initial and maintenance cost. The solution is built on the basis of the open source components like OpenStack, CEPH, etc.

  2. Scalability study of solid xenon

    SciTech Connect

    Yoo, J.; Cease, H.; Jaskierny, W. F.; Markley, D.; Pahlka, R. B.; Balakishiyeva, D.; Saab, T.; Filipenko, M.

    2015-04-01

    We report a demonstration of the scalability of optically transparent xenon in the solid phase for use as a particle detector above a kilogram scale. We employed a cryostat cooled by liquid nitrogen combined with a xenon purification and chiller system. A modified Bridgeman's technique reproduces a large scale optically transparent solid xenon.

  4. A Scalable Media Multicasting Scheme

    NASA Astrophysics Data System (ADS)

    Youwei, Zhang

    IP multicast has proved infeasible to deploy widely; Application Layer Multicast (ALM), based on end-host multicast, is practical and more scalable than IP multicast in the Internet. In this paper, an ALM protocol called Scalable multicast for High Definition streaming media (SHD) is proposed, in which end-to-end transmission capability is fully exploited for HD media transmission without much additional control overhead. Similar to the transmission style of BitTorrent, hosts forward only part of each data piece according to the available bandwidth, which greatly improves bandwidth usage. On the other hand, some novel strategies are adopted to overcome the disadvantages of the BitTorrent protocol for streaming media transmission. Data transmission between hosts is implemented in a many-to-one style within a hierarchical architecture in most circumstances. Simulations on an Internet-like topology indicate that SHD achieves low link stress, low end-to-end latency, and stability.

  5. A Scalable Tools Communication Infrastructure

    SciTech Connect

    Buntinas, Darius; Bosilca, George; Graham, Richard L; Vallee, Geoffroy R; Watson, Gregory R.

    2008-01-01

    The Scalable Tools Communication Infrastructure (STCI) is an open source collaborative effort intended to provide high-performance, scalable, resilient, and portable communications and process control services for a wide variety of user and system tools. STCI is aimed specifically at tools for ultrascale computing and uses a component architecture to simplify tailoring the infrastructure to a wide range of scenarios. This paper describes STCI's design philosophy, the various components that will be used to provide an STCI implementation for a range of ultrascale platforms, and a range of tool types. These include tools supporting parallel run-time environments, such as MPI, parallel application correctness tools and performance analysis tools, as well as system monitoring and management tools.

  6. Stereoscopic video compression using temporal scalability

    NASA Astrophysics Data System (ADS)

    Puri, Atul; Kollarits, Richard V.; Haskell, Barry G.

    1995-04-01

    Despite the fact that human ability to perceive a high degree of realism is directly related to our ability to perceive depth accurately in a scene, most of the commonly used imaging and display technologies are able to provide only a 2D rendering of the 3D real world. Many current as well as emerging applications in the areas of entertainment, remote operations, industry and medicine can benefit from the depth perception offered by stereoscopic video systems, which employ two views of a scene imaged under the constraints imposed by the human visual system. Among the many challenges to be overcome for practical realization and widespread use of 3D/stereoscopic systems are efficient techniques for digital compression of enormous amounts of data while maintaining compatibility with normal video decoding and display systems. After a brief discussion on the relationship of digital stereoscopic 3DTV with digital TV and HDTV, we present an overview of tools in the MPEG-2 video standard that are relevant to our discussion on compression of stereoscopic video, which is the main topic of this paper. Next, we determine ways in which temporal scalability concepts can be applied to exploit redundancies inherent between the two views of a scene comprising stereoscopic video. Due consideration is given to masking properties of stereoscopic vision to determine bandwidth partitioning between the two views to realize an efficient coding scheme while providing sufficient quality. Simulations are performed on stereoscopic video of normal TV resolution to compare the performance of the two temporal scalability configurations with each other and with the simulcast solution. Preliminary results are quite promising and indicate that the configuration that exploits motion and disparity compensation significantly outperforms the one that exploits disparity compensation alone. Compression of both views of stereo video of normal TV resolution appears feasible in a total of 8 or 9 Mbit/s. Finally

  7. Scalable computations in penetration mechanics

    SciTech Connect

    Kimsey, K.D.; Schraml, S.J.; Hertel, E.S.

    1998-01-01

    This paper presents an overview of an explicit message passing paradigm for an Eulerian finite volume method for modeling solid dynamics problems involving shock wave propagation, multiple materials, and large deformations. Three-dimensional simulations of high-velocity impact were conducted on the IBM SP2, the SGI Power Challenge Array, and the SGI Origin 2000. The scalability of the message-passing code on distributed-memory and symmetric multiprocessor architectures is presented and compared to the ideal linear performance.

  8. Visualization of cosmological particle-based datasets.

    PubMed

    Navratil, Paul; Johnson, Jarrett; Bromm, Volker

    2007-01-01

    We describe our visualization process for a particle-based simulation of the formation of the first stars and their impact on cosmic history. The dataset consists of several hundred time-steps of point simulation data, with each time-step containing approximately two million point particles. For each time-step, we interpolate the point data onto a regular grid using a method taken from the radiance estimate of photon mapping. We import the resulting regular grid representation into ParaView, with which we extract isosurfaces across multiple variables. Our images provide insights into the evolution of the early universe, tracing the cosmic transition from an initially homogeneous state to one of increasing complexity. Specifically, our visualizations capture the build-up of regions of ionized gas around the first stars, their evolution, and their complex interactions with the surrounding matter. These observations will guide the upcoming James Webb Space Telescope, the key astronomy mission of the next decade.
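
    The gridding step described above is essentially a k-nearest-neighbour kernel estimate, analogous to the radiance estimate of photon mapping. A minimal numpy/scipy sketch of that idea follows; the cone kernel, neighbour count, and unit-cube normalization are assumptions for illustration, not the authors' code.

      # Sketch: scatter-to-grid interpolation with a k-nearest-neighbour kernel.
      import numpy as np
      from scipy.spatial import cKDTree

      def grid_particles(positions, values, shape, k=32):
          """positions: (n, 3) in the unit cube; values: (n,) scalars."""
          tree = cKDTree(positions)
          axes = [np.linspace(0.0, 1.0, n) for n in shape]
          nodes = np.stack(np.meshgrid(*axes, indexing="ij"), -1).reshape(-1, 3)
          dist, idx = tree.query(nodes, k=k)
          r = dist[:, -1:]                     # radius enclosing the k particles
          w = np.maximum(1.0 - dist / r, 0.0)  # simple cone kernel
          est = (w * values[idx]).sum(1) / np.maximum(w.sum(1), 1e-12)
          return est.reshape(shape)

      rng = np.random.default_rng(0)
      pos, val = rng.random((100000, 3)), rng.random(100000)
      vol = grid_particles(pos, val, (32, 32, 32))   # ready for isosurfacing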

  9. Evaluation of angiogram visualization methods for fast and reliable aneurysm diagnosis

    NASA Astrophysics Data System (ADS)

    Lesar, Žiga; Bohak, Ciril; Marolt, Matija

    2015-03-01

    In this paper we present the results of an evaluation of different visualization methods for angiogram volumetric data: ray casting, marching cubes, and multi-level partition of unity implicits. Several options are available with ray casting: isosurface extraction, maximum intensity projection, and alpha compositing, each producing fundamentally different results. Different visualization methods are suitable for different needs, so this choice is crucial in diagnosis and decision-making processes. We also evaluate visual effects such as ambient occlusion, screen space ambient occlusion, and depth of field. Some visualization methods include transparency, so we address the question of the relevance of this additional visual information. We employ transfer functions to map data values to color and transparency, allowing us to view or hide particular tissues. All the methods presented in this paper were developed using OpenCL, striving for real-time rendering and quality interaction. An evaluation was conducted to assess the suitability of the visualization methods. Results show the superiority of isosurface extraction with ambient occlusion effects. Visual effects may positively or negatively affect perception of depth, motion, and relative positions in space.
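
    The ray-casting options compared above differ only in how the samples taken along each ray are reduced to a pixel value. Below is a minimal sketch of the three reductions for a single ray; the isovalue and opacity scaling are hypothetical choices, not those of the evaluated system.

      # Sketch: three ways to reduce the samples along one ray to a value.
      import numpy as np

      def shade_ray(samples, mode, iso=0.5, alpha_scale=0.05):
          s = np.asarray(samples, dtype=float)
          if mode == "mip":                    # maximum intensity projection
              return s.max()
          if mode == "iso":                    # first crossing of the isovalue
              hits = np.nonzero(s >= iso)[0]
              return s[hits[0]] if hits.size else 0.0
          if mode == "composite":              # front-to-back alpha compositing
              color, transmittance = 0.0, 1.0
              for v in s:
                  a = min(v * alpha_scale, 1.0)
                  color += transmittance * a * v
                  transmittance *= 1.0 - a
              return color
          raise ValueError(mode)

      ray = np.exp(-np.linspace(-2, 2, 128) ** 2)  # a bright blob on the ray
      print({m: round(shade_ray(ray, m), 3) for m in ("mip", "iso", "composite")})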

  10. Visual Debugging of Visualization Software: A Case Study for Particle Systems

    SciTech Connect

    Angel, Edward; Crossno, Patricia

    1999-07-12

    Visualization systems are complex dynamic software systems. Debugging such systems is difficult using conventional debuggers because the programmer must try to imagine the three-dimensional geometry based on a list of positions and attributes. In addition, the programmer must be able to mentally animate changes in those positions and attributes to grasp dynamic behaviors within the algorithm. In this paper we shall show that representing geometry, attributes, and relationships graphically permits visual pattern recognition skills to be applied to the debugging problem. The particular application is a particle system used for isosurface extraction from volumetric data. Coloring particles based on individual attributes is especially helpful when these colorings are viewed as animations over successive iterations in the program. Although we describe a particular application, the types of tools that we discuss can be applied to a variety of problems.

  11. Scalable, enantioselective taxane total synthesis

    PubMed Central

    Mendoza, Abraham; Ishihara, Yoshihiro; Baran, Phil S.

    2011-01-01

    Taxanes are a large family of terpenes comprising over 350 members, the most famous of which is Taxol (paclitaxel), a billion-dollar anticancer drug. Here, we describe the first practical and scalable synthetic entry to these natural products via a concise preparation of (+)-taxa-4(5),11(12)-dien-2-one, which possesses a suitable functional handle to access more oxidised members of its family. This route enabled a gram-scale preparation of the "parent" taxane, taxadiene, representing the largest quantity of this naturally occurring terpene ever isolated or prepared in pure form. The taxane family's characteristic 6-8-6 tricyclic system containing a bridgehead alkene is forged via a vicinal difunctionalisation/Diels-Alder strategy. Asymmetry is introduced by means of an enantioselective conjugate addition that forms an all-carbon quaternary centre, from which all other stereocentres are fixed via substrate control. This study lays a critical foundation for planned access to minimally oxidised taxane analogs and a scalable laboratory preparation of Taxol itself. PMID:22169867

  12. Scalable Performance Environments for Parallel Systems

    NASA Technical Reports Server (NTRS)

    Reed, Daniel A.; Olson, Robert D.; Aydt, Ruth A.; Madhyastha, Tara M.; Birkett, Thomas; Jensen, David W.; Nazief, Bobby A. A.; Totty, Brian K.

    1991-01-01

    As parallel systems expand in size and complexity, the absence of performance tools for these parallel systems exacerbates the already difficult problems of application program and system software performance tuning. Moreover, given the pace of technological change, we can no longer afford to develop ad hoc, one-of-a-kind performance instrumentation software; we need scalable, portable performance analysis tools. We describe an environment prototype based on the lessons learned from two previous generations of performance data analysis software. Our environment prototype contains a set of performance data transformation modules that can be interconnected in user-specified ways. It is the responsibility of the environment infrastructure to hide details of module interconnection and data sharing. The environment is written in C++ with the graphical displays based on X windows and the Motif toolkit. It allows users to interconnect and configure modules graphically to form an acyclic, directed data analysis graph. Performance trace data are represented in a self-documenting stream format that includes internal definitions of data types, sizes, and names. The environment prototype supports the use of head-mounted displays and sonic data presentation in addition to the traditional use of visual techniques.

  13. Visual Interface for Materials Simulations

    2004-08-01

    VIMES (Visual Interface for Materials Simulations) is a graphical user interface (GUI) for pre- and post-processing atomistic materials science calculations. The code includes tools for building and visualizing simple crystals, supercells, and surfaces, as well as tools for managing and modifying the input to Sandia materials simulation codes such as Quest (Peter Schultz, SNL 9235) and Towhee (Marcus Martin, SNL 9235). It is often useful to have a graphical interface to construct input for materials simulation codes and to analyze the output of these programs. VIMES has been designed not only to build and visualize different materials systems, but also to make several Sandia codes easier to use and analyze. Furthermore, VIMES has been designed to be reasonably easy to extend to new materials programs. We anticipate that users of Sandia materials simulation codes will use VIMES to simplify the submission and analysis of these simulations. VIMES uses standard OpenGL graphics (as implemented in the Python programming language) to display the molecules. The algorithms used to rotate, zoom, and pan molecules are all standard applications of the OpenGL libraries. VIMES uses the Marching Cubes algorithm for isosurfacing 3D data such as molecular orbitals or electron densities around the molecules.
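
    For readers unfamiliar with the isosurfacing step mentioned last, the sketch below extracts an isosurface from a synthetic density with marching cubes; it uses scikit-image purely for illustration and is not VIMES code.

      # Sketch: marching cubes on a synthetic "electron density".
      import numpy as np
      from skimage.measure import marching_cubes

      x, y, z = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
      density = np.exp(-8 * (x**2 + y**2 + z**2))

      verts, faces, normals, values = marching_cubes(density, level=0.5)
      print(verts.shape, faces.shape)  # triangle mesh approximating the isosurface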

  14. Scripts for Scalable Monitoring of Parallel Filesystem Infrastructure

    2014-02-27

    Scripts for scalable monitoring of parallel filesystem infrastructure provide frameworks for monitoring the health of block storage arrays and large InfiniBand fabrics. The block storage framework uses Python multiprocessing to scale the number of monitored arrays with the number of processors in the system. This enables live monitoring of HPC-scale filesystems with 10-50 storage arrays. For InfiniBand monitoring, scripts are included that monitor the InfiniBand health of each host, along with visualization tools for mapping complex fabric topologies.
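
    A minimal sketch of the multiprocessing pattern described above; the array names and the health probe are placeholders, not the actual scripts.

      # Sketch: poll many storage arrays in parallel, one worker per CPU.
      import multiprocessing as mp

      ARRAYS = [f"array{i:02d}" for i in range(40)]   # hypothetical array names

      def check_health(array_name):
          # Placeholder for a real probe (SNMP query, vendor CLI call, etc.).
          return array_name, "OK"

      if __name__ == "__main__":
          with mp.Pool(processes=mp.cpu_count()) as pool:
              for name, status in pool.imap_unordered(check_health, ARRAYS):
                  print(f"{name}: {status}")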

  15. Rapid and scalable assembly of firefly luciferase substrates.

    PubMed

    McCutcheon, David C; Porterfield, William B; Prescher, Jennifer A

    2015-02-21

    Bioluminescence imaging with luciferase-luciferin pairs is a popular method for visualizing biological processes in vivo. Unfortunately, most luciferins are difficult to access and remain prohibitively expensive for some imaging applications. Here we report cost-effective and efficient syntheses of d-luciferin and 6'-aminoluciferin, two widely used bioluminescent substrates. Our approach employs inexpensive anilines and Appel's salt to generate the luciferin cores in a single pot. Additionally, the syntheses are scalable and can provide multi-gram quantities of both substrates. The streamlined production and improved accessibility of luciferin reagents will bolster in vivo imaging efforts.

  16. Scripts for Scalable Monitoring of Parallel Filesystem Infrastructure

    SciTech Connect

    Caldwell, Blake

    2014-02-27

    Scripts for scalable monitoring of parallel filesystem infrastructure provide frameworks for monitoring the health of block storage arrays and large InfiniBand fabrics. The block storage framework uses Python multiprocessing to scale the number of monitored arrays with the number of processors in the system. This enables live monitoring of HPC-scale filesystems with 10-50 storage arrays. For InfiniBand monitoring, scripts are included that monitor the InfiniBand health of each host, along with visualization tools for mapping complex fabric topologies.

  17. Rapid and scalable assembly of firefly luciferase substrates†

    PubMed Central

    McCutcheon, David C.; Porterfield, William B.; Prescher, Jennifer A.

    2015-01-01

    Bioluminescence imaging with luciferase-luciferin pairs is a popular method for visualizing biological processes in vivo. Unfortunately, most luciferins are difficult to access and remain prohibitively expensive for some imaging applications. Here we report cost-effective and efficient syntheses of D-luciferin and 6′-aminoluciferin, two widely used bioluminescent substrates. Our approach employs inexpensive anilines and Appel's salt to generate the luciferin cores in a single pot. Additionally, the syntheses are scalable and can provide multi-gram quantities of both substrates. The streamlined production and improved accessibility of luciferin reagents will bolster in vivo imaging efforts. PMID:25525906

  18. Scalable WIM: effective exploration in large-scale astrophysical environments.

    PubMed

    Li, Yinggang; Fu, Chi-Wing; Hanson, Andrew J

    2006-01-01

    Navigating through large-scale virtual environments such as simulations of the astrophysical Universe is difficult. The huge spatial range of astronomical models and the dominance of empty space make it hard for users to travel across cosmological scales effectively, and the problem of wayfinding further impedes the user's ability to acquire reliable spatial knowledge of astronomical contexts. We introduce a new technique called the scalable world-in-miniature (WIM) map as a unifying interface to facilitate travel and wayfinding in a virtual environment spanning gigantic spatial scales: Power-law spatial scaling enables rapid and accurate transitions among widely separated regions; logarithmically mapped miniature spaces offer a global overview mode when the full context is too large; 3D landmarks represented in the WIM are enhanced by scale, positional, and directional cues to augment spatial context awareness; a series of navigation models are incorporated into the scalable WIM to improve the performance of travel tasks posed by the unique characteristics of virtual cosmic exploration. The scalable WIM user interface supports an improved physical navigation experience and assists pragmatic cognitive understanding of a visualization context that incorporates the features of large-scale astronomy.
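
    The two scale mappings named above are simple to state in one dimension. The toy sketch below assumes a radial distance coordinate and a square-root travel exponent; both are illustrative stand-ins for the paper's tuned mappings.

      # Sketch: power-law travel scaling and the logarithmic miniature mapping.
      import numpy as np

      def power_law_scale(r, exponent=0.5):
          # Compresses a huge radial range so near and far regions are reachable.
          return r ** exponent

      def log_miniature(r, r_min=1.0):
          # Logarithmic mapping for a global overview when the context is too large.
          return np.log10(np.maximum(r, r_min))

      r = np.array([1e0, 1e3, 1e6, 1e9])   # e.g. distances in astronomical units
      print(power_law_scale(r), log_miniature(r))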

  19. Scalable Performance Measurement and Analysis

    SciTech Connect

    Gamblin, Todd

    2009-01-01

    Concurrency levels in large-scale, distributed-memory supercomputers are rising exponentially. Modern machines may contain 100,000 or more microprocessor cores, and the largest of these, IBM's Blue Gene/L, contains over 200,000 cores. Future systems are expected to support millions of concurrent tasks. In this dissertation, we focus on efficient techniques for measuring and analyzing the performance of applications running on very large parallel machines. Tuning the performance of large-scale applications can be a subtle and time-consuming task because application developers must measure and interpret data from many independent processes. While the volume of the raw data scales linearly with the number of tasks in the running system, the number of tasks is growing exponentially, and data for even small systems quickly becomes unmanageable. Transporting performance data from so many processes over a network can perturb application performance and make measurements inaccurate, and storing such data would require a prohibitive amount of space. Moreover, even if it were stored, analyzing the data would be extremely time-consuming. In this dissertation, we present novel methods for reducing performance data volume. The first draws on multi-scale wavelet techniques from signal processing to compress systemwide, time-varying load-balance data. The second uses statistical sampling to select a small subset of running processes to generate low-volume traces. A third approach combines sampling and wavelet compression to stratify performance data adaptively at run-time and to reduce further the cost of sampled tracing. We have integrated these approaches into Libra, a toolset for scalable load-balance analysis. We present Libra and show how it can be used to analyze data from large scientific applications scalably.
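
    The first reduction method can be illustrated with an off-the-shelf wavelet library. The sketch below uses PyWavelets on a synthetic load trace with an arbitrary hard threshold; it shows the idea of discarding small detail coefficients, not Libra's implementation.

      # Sketch: wavelet compression of a time-varying load-balance trace.
      import numpy as np
      import pywt

      rng = np.random.default_rng(1)
      trace = np.sin(np.linspace(0, 20, 1024)) + 0.1 * rng.standard_normal(1024)

      coeffs = pywt.wavedec(trace, "db4", level=5)
      kept = [coeffs[0]] + [pywt.threshold(c, 0.2, mode="hard") for c in coeffs[1:]]
      recon = pywt.waverec(kept, "db4")[: trace.size]

      nonzero = sum(int(np.count_nonzero(c)) for c in kept)
      print(f"kept {nonzero}/{trace.size} coefficients, "
            f"max error {np.abs(recon - trace).max():.3f}")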

  20. Rate control scheme for consistent video quality in scalable video codec.

    PubMed

    Seo, Chan-Won; Han, Jong-Ki; Nguyen, Truong Q

    2011-08-01

    Multimedia data delivered to mobile devices over wireless channels or the Internet are complicated by bandwidth fluctuation and the variety of mobile devices. Scalable video coding has been developed as an extension of H.264/AVC to solve this problem. Since scalable video codec provides various scalabilities to adapt the bitstream for the channel conditions and terminal types, scalable codec is one of the useful codecs for wired or wireless multimedia communication systems, such as IPTV and streaming services. In such scalable multimedia communication systems, video quality fluctuation degrades the visual perception significantly. It is important to efficiently use the target bits in order to maintain a consistent video quality or achieve a small distortion variation throughout the whole video sequence. The scheme proposed in this paper provides a useful function to control video quality in applications supporting scalability, whereas conventional schemes have been proposed to control video quality in the H.264 and MPEG-4 systems. The proposed algorithm decides the quantization parameter of the enhancement layer to maintain a consistent video quality throughout the entire sequence. The video quality of the enhancement layer is controlled based on a closed-form formula which utilizes the residual data and quantization error of the base layer. The simulation results show that the proposed algorithm controls the frame quality of the enhancement layer in a simple operation, where the parameter decision algorithm is applied to each frame.
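
    The paper's closed-form QP rule uses base-layer residuals and quantization error and is not reproduced here. The sketch below shows only the generic feedback idea of steering the enhancement-layer quantization parameter toward a constant quality target; the gain and PSNR numbers are hypothetical.

      # Sketch: nudge the QP so per-frame quality stays near a target.
      def next_qp(qp, measured_psnr, target_psnr, gain=0.5):
          # Quality above target -> raise QP to save bits, and vice versa.
          qp += gain * (measured_psnr - target_psnr)
          return int(min(max(round(qp), 0), 51))   # H.264/HEVC QP range

      qp = 30
      for psnr in (38.2, 36.9, 35.5, 37.4):        # hypothetical frame PSNRs
          qp = next_qp(qp, psnr, target_psnr=37.0)
          print(qp)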

  1. Scalable analysis tools for sensitivity analysis and UQ (3160) results.

    SciTech Connect

    Karelitz, David B.; Ice, Lisa G.; Thompson, David C.; Bennett, Janine C.; Fabian, Nathan; Scott, W. Alan; Moreland, Kenneth D.

    2009-09-01

    The 9/30/2009 ASC Level 2 Scalable Analysis Tools for Sensitivity Analysis and UQ (Milestone 3160) contains feature recognition capability required by the user community for certain verification and validation tasks focused around sensitivity analysis and uncertainty quantification (UQ). These feature recognition capabilities include crater detection, characterization, and analysis from CTH simulation data; the ability to call fragment and crater identification code from within a CTH simulation; and the ability to output fragments in a geometric format that includes data values over the fragments. The feature recognition capabilities were tested extensively on sample and actual simulations. In addition, a number of stretch criteria were met including the ability to visualize CTH tracer particles and the ability to visualize output from within an S3D simulation.

  2. Vortex Filaments in Grids for Scalable, Fine Smoke Simulation.

    PubMed

    Meng, Zhang; Weixin, Si; Yinling, Qian; Hanqiu, Sun; Jing, Qin; Heng, Pheng-Ann

    2015-01-01

    Vortex modeling can produce attractive visual effects of dynamic fluids, which are widely applicable to dynamic media, computer games, special effects, and virtual reality systems. However, it is challenging to efficiently simulate intense, finely detailed fluids such as smoke, in which vortex filaments and smoke particles increase rapidly in number. The authors propose a novel vortex-filaments-in-grids scheme in which uniform grids dynamically bridge the vortex filaments and smoke particles for scalable, fine smoke simulation with macroscopic vortex structures. Using the vortex model, their approach supports a trade-off between simulation speed and scale of detail. After the full velocity field is computed, external control can easily be exerted on the embedded grid to guide the vortex-based smoke motion. The experimental results demonstrate the efficiency of the proposed scheme for visually plausible smoke simulation with macroscopic vortex structures. PMID:25594961

  3. Vortex Filaments in Grids for Scalable, Fine Smoke Simulation.

    PubMed

    Meng, Zhang; Weixin, Si; Yinling, Qian; Hanqiu, Sun; Jing, Qin; Heng, Pheng-Ann

    2015-01-01

    Vortex modeling can produce attractive visual effects of dynamic fluids, which are widely applicable to dynamic media, computer games, special effects, and virtual reality systems. However, it is challenging to efficiently simulate intense, finely detailed fluids such as smoke, in which vortex filaments and smoke particles increase rapidly in number. The authors propose a novel vortex-filaments-in-grids scheme in which uniform grids dynamically bridge the vortex filaments and smoke particles for scalable, fine smoke simulation with macroscopic vortex structures. Using the vortex model, their approach supports a trade-off between simulation speed and scale of detail. After the full velocity field is computed, external control can easily be exerted on the embedded grid to guide the vortex-based smoke motion. The experimental results demonstrate the efficiency of the proposed scheme for visually plausible smoke simulation with macroscopic vortex structures.

  4. Dynamics-based scalability of complex networks.

    PubMed

    Huang, Liang; Lai, Ying-Cheng; Gatenby, Robert A

    2008-10-01

    We address the fundamental issue of network scalability in terms of dynamics and topology. In particular, we consider different network topologies and investigate, for every given topology, the dependence of certain dynamical properties on the network size. By focusing on network synchronizability, we find both analytically and numerically that globally coupled networks and random networks are scalable, but locally coupled regular networks are not. Scale-free networks are scalable for certain types of node dynamics. We expect our findings to provide insights into the ubiquity and workings of networks arising in nature and to be potentially useful for designing technological networks as well. PMID:18999478
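
    A standard way to make the topology comparison concrete is the Laplacian eigenratio lambda_N / lambda_2, where smaller values mean easier synchronization. The short numeric check below is illustrative and is not the paper's analysis.

      # Sketch: synchronizability proxy for three topologies of equal size.
      import networkx as nx
      import numpy as np

      def eigenratio(g):
          lap = nx.laplacian_matrix(g).toarray().astype(float)
          lam = np.sort(np.linalg.eigvalsh(lap))
          return lam[-1] / lam[1]

      n = 200
      print("global :", eigenratio(nx.complete_graph(n)))               # scalable
      print("random :", eigenratio(nx.erdos_renyi_graph(n, 0.1, seed=2)))
      print("ring   :", eigenratio(nx.watts_strogatz_graph(n, 4, 0.0)))  # not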

  5. 3D Visualization of Global Ocean Circulation

    NASA Astrophysics Data System (ADS)

    Nelson, V. G.; Sharma, R.; Zhang, E.; Schmittner, A.; Jenny, B.

    2015-12-01

    Advanced 3D visualization techniques are seldom used to explore the dynamic behavior of ocean circulation. Streamlines are an effective method for visualization of flow, and they can be designed to clearly show the dynamic behavior of a fluidic system. We employ vector field editing and extraction software to examine the topology of velocity vector fields generated by a 3D global circulation model coupled to a one-layer atmosphere model simulating preindustrial and last glacial maximum (LGM) conditions. This results in a streamline-based visualization along multiple density isosurfaces on which we visualize points of vertical exchange and the distribution of properties such as temperature and biogeochemical tracers. Previous work involving this model examined the change in the energetics driving overturning circulation and mixing between simulations of LGM and preindustrial conditions. This visualization elucidates the relationship between locations of vertical exchange and mixing, as well as demonstrates the effects of circulation and mixing on the distribution of tracers such as carbon isotopes.

  6. Scalable Systems Software Enabling Technology Center

    SciTech Connect

    Michael T. Showerman

    2009-04-06

    NCSA’s role in the SCIDAC Scalable Systems Software (SSS) project was to develop interfaces and communication mechanisms for systems monitoring, and to implement a prototype demonstrating those standards. The Scalable Systems Monitoring component of the SSS suite was designed to provide a large volume of both static and dynamic systems data to the components within the SSS infrastructure as well as external data consumers.

  7. Visualization of confocal microscopic biomolecular data

    NASA Astrophysics Data System (ADS)

    Liu, Zhanping; Moorhead, Robert J., II

    2005-04-01

    Biomolecular visualization facilitates insightful interpretation of molecular structures and the complex mechanisms underlying biochemical processes. Effective visualization techniques are required to deal with confocal microscopic biomolecular data, in which intricate structures, fine features, and obscure patterns might be overlooked without sophisticated data processing and image synthesis. This paper presents major challenges in visualizing confocal microscopic biomolecular data, followed by a survey of related work. We then introduce a case study conducted to investigate the interaction between two proteins in the budding yeast Saccharomyces cerevisiae by embedding custom modules in Amira. The multi-channel confocal microscopic volume data was first processed using an exponential operator to correct z-drop artifacts introduced during data acquisition. Channel correlation was then exploited to extract the overlap between the proteins as a new channel representing the interaction, while a statistical method was employed to compute the intensity of interaction and locate hot spots. To take advantage of the crisp surface representation of region boundaries by isosurfaces and the visually pleasing translucent delineation of dense volumes by volume rendering, we adopted hybrid rendering that incorporates these two methods to display clear-cut protein boundaries, amorphous interior materials, and the scattered interaction in the same view volume, with suppressed and highlighted parts selected by the user. The highlighted overlap helped biologists learn where the interaction happens and how it spreads, particularly when the volume was investigated in an immersive Cave Automatic Virtual Environment (CAVE) for intuitive comprehension of the data.
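
    The two preprocessing steps translate directly into array operations. The numpy sketch below uses a hypothetical exponential decay constant for the z-drop correction and a per-voxel minimum as a crude stand-in for the paper's correlation-based overlap channel.

      # Sketch: z-drop correction and an overlap channel for a two-channel stack.
      import numpy as np

      def correct_z_drop(stack, tau=40.0):
          z = np.arange(stack.shape[0], dtype=float).reshape(-1, 1, 1)
          return stack * np.exp(z / tau)        # boost deeper, dimmer slices

      rng = np.random.default_rng(3)
      ch1, ch2 = rng.random((2, 32, 128, 128))  # (z, y, x) volumes
      ch1, ch2 = correct_z_drop(ch1), correct_z_drop(ch2)
      norm = lambda c: c / c.max()
      overlap = np.minimum(norm(ch1), norm(ch2))  # high only where both are high
      print("hottest interaction voxel:", overlap.max())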

  8. Scalable extensions of HEVC for next generation services

    NASA Astrophysics Data System (ADS)

    Misra, Kiran; Segall, Andrew; Zhao, Jie; Kim, Seung-Hwan

    2013-02-01

    The high efficiency video coding (HEVC) standard being developed by ITU-T VCEG and ISO/IEC MPEG achieves a compression goal of reducing the bitrate by half for the same visual quality when compared with earlier video compression standards such as H.264/AVC. It achieves this goal with the use of several new tools such as quad-tree based partitioning of data, larger block sizes, improved intra prediction, the use of sophisticated prediction of motion information, inclusion of an in-loop sample adaptive offset process etc. This paper describes an approach where the HEVC framework is extended to achieve spatial scalability using a multi-loop approach. The enhancement layer inter-predictive coding efficiency is improved by including within the decoded picture buffer multiple up-sampled versions of the decoded base layer picture. This approach has the advantage of achieving significant coding gains with a simple extension of the base layer tools such as inter-prediction, motion information signaling etc. Coding efficiency of the enhancement layer is further improved using adaptive loop filter and internal bit-depth increment. The performance of the proposed scalable video coding approach is compared to simulcast transmission of video data using high efficiency model version 6.1 (HM-6.1). The bitrate savings are measured using Bjontegaard Delta (BD) rate for a spatial scalability factor of 2 and 1.5 respectively when compared with simulcast anchors. It is observed that the proposed approach provides an average luma BD rate gains of 33.7% and 50.5% respectively.

  9. Wanted: Scalable Tracers for Diffusion Measurements

    PubMed Central

    2015-01-01

    Scalable tracers are potentially a useful tool to examine diffusion mechanisms and to predict diffusion coefficients, particularly for hindered diffusion in complex, heterogeneous, or crowded systems. Scalable tracers are defined as a series of tracers varying in size but with the same shape, structure, surface chemistry, deformability, and diffusion mechanism. Both chemical homology and constant dynamics are required. In particular, branching must not vary with size, and there must be no transition between ordinary diffusion and reptation. Measurements using scalable tracers yield the mean diffusion coefficient as a function of size alone; measurements using nonscalable tracers yield the variation due to differences in the other properties. Candidate scalable tracers are discussed for two-dimensional (2D) diffusion in membranes and three-dimensional diffusion in aqueous solutions. Correlations to predict the mean diffusion coefficient of globular biomolecules from molecular mass are reviewed briefly. Specific suggestions for the 3D case include the use of synthetic dendrimers or random hyperbranched polymers instead of dextran and the use of core–shell quantum dots. Another useful tool would be a series of scalable tracers varying in deformability alone, prepared by varying the density of crosslinking in a polymer to make say “reinforced Ficoll” or “reinforced hyperbranched polyglycerol.” PMID:25319586

  10. Scalable L-infinite coding of meshes.

    PubMed

    Munteanu, Adrian; Cernea, Dan C; Alecu, Alin; Cornelis, Jan; Schelkens, Peter

    2010-01-01

    The paper investigates the novel concept of local-error control in mesh geometry encoding. In contrast to traditional mesh-coding systems that use the mean-square error as target distortion metric, this paper proposes a new L-infinite mesh-coding approach, for which the target distortion metric is the L-infinite distortion. In this context, a novel wavelet-based L-infinite-constrained coding approach for meshes is proposed, which ensures that the maximum error between the vertex positions in the original and decoded meshes is lower than a given upper bound. Furthermore, the proposed system achieves scalability in L-infinite sense, that is, any decoding of the input stream will correspond to a perfectly predictable L-infinite distortion upper bound. An instantiation of the proposed L-infinite-coding approach is demonstrated for MESHGRID, which is a scalable 3D object encoding system, part of MPEG-4 AFX. In this context, the advantages of scalable L-infinite coding over L-2-oriented coding are experimentally demonstrated. One concludes that the proposed L-infinite mesh-coding approach guarantees an upper bound on the local error in the decoded mesh, it enables a fast real-time implementation of the rate allocation, and it preserves all the scalability features and animation capabilities of the employed scalable mesh codec. PMID:20224144
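
    Operationally, the guarantee is a bound on the largest per-coordinate vertex error. A minimal numpy check of that L-infinite distortion on synthetic data:

      # Sketch: the L-infinite distortion that the codec bounds.
      import numpy as np

      def l_infinite_distortion(verts_orig, verts_decoded):
          return np.abs(verts_orig - verts_decoded).max()

      rng = np.random.default_rng(4)
      v = rng.random((1000, 3))
      v_hat = v + rng.uniform(-1e-3, 1e-3, v.shape)   # decode error within 1e-3
      print(l_infinite_distortion(v, v_hat) <= 1e-3)  # the upper bound holds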

  11. Visualizing Higher Order Finite Elements: FY05 Yearly Report.

    SciTech Connect

    Thompson, David; Pebay, Philippe Pierre

    2005-11-01

    This report contains an algorithm for decomposing higher-order finite elements into regions appropriate for isosurfacing and proves the conditions under which the algorithm will terminate. Finite elements are used to create piecewise polynomial approximants to the solution of partial differential equations for which no analytical solution exists. These polynomials represent fields such as pressure, stress, and momentum. In the past, these polynomials have been linear in each parametric coordinate. Each polynomial coefficient must be uniquely determined by a simulation, and these coefficients are called degrees of freedom. When there are not enough degrees of freedom, simulations will typically fail to produce a valid approximation to the solution. Recent work has shown that increasing the number of degrees of freedom by increasing the order of the polynomial approximation (instead of increasing the number of finite elements, each of which has its own set of coefficients) can allow some types of simulations to produce a valid approximation with many fewer degrees of freedom than increasing the number of finite elements alone. However, once the simulation has determined the values of all the coefficients in a higher-order approximant, tools do not exist for visual inspection of the solution. This report focuses on a technique for the visual inspection of higher-order finite element simulation results based on decomposing each finite element into simplicial regions where existing visualization algorithms such as isosurfacing will work. The requirements of the isosurfacing algorithm are enumerated and related to the places where the partial derivatives of the polynomial become zero. The original isosurfacing algorithm is then applied to each of these regions in turn. Acknowledgement: The authors would like to thank David Day and Louis Romero for their insight into polynomial system solvers and the LDRD Senior Council for the opportunity to pursue this research.
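
    The pivotal step, locating where the partial derivatives of the element's polynomial vanish, can be sketched symbolically. The toy biquadratic field below is an illustration only, not the report's algorithm; its five critical points are where the decomposition into simplicial regions would be seeded.

      # Sketch: critical points of a toy biquadratic field on [-1, 1]^2.
      import sympy as sp

      r, s = sp.symbols("r s")
      f = (r**2 - 1) * (s**2 - 1)
      critical = sp.solve([sp.diff(f, r), sp.diff(f, s)], [r, s], dict=True)
      print(critical)   # (0, 0) and the four corners (+/-1, +/-1)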

  12. Area scalable optically induced photorefractive photonic microstructures

    NASA Astrophysics Data System (ADS)

    Jin, Wentao; Xue, Yan Ling; Jiang, Dongdong

    2016-07-01

    A convenient approach to fabricating area-scalable two-dimensional photonic microstructures was experimentally demonstrated using multi-face optical wedges. The approach is quite compact and stable, without complex optical alignment equipment. Large-area square-lattice microstructures are optically induced inside an iron-doped lithium niobate photorefractive crystal. The induced large-area microstructures are analyzed and verified by plane-wave guiding, Brillouin-zone spectroscopy, angle-dependent transmission spectra, and lateral Bragg reflection patterns. The method can easily be extended to generate other, more complex area-scalable photonic microstructures, such as quasicrystal lattices, by designing the multi-face optical wedge appropriately. The induced area-scalable photonic microstructures can be fixed, erased, or even re-recorded in the photorefractive crystal, which suggests potential applications in micro-nano photonic devices.

  13. Realization of a scalable Shor algorithm.

    PubMed

    Monz, Thomas; Nigg, Daniel; Martinez, Esteban A; Brandl, Matthias F; Schindler, Philipp; Rines, Richard; Wang, Shannon X; Chuang, Isaac L; Blatt, Rainer

    2016-03-01

    Certain algorithms for quantum computers are able to outperform their classical counterparts. In 1994, Peter Shor came up with a quantum algorithm that calculates the prime factors of a large number vastly more efficiently than a classical computer. For general scalability of such algorithms, hardware, quantum error correction, and the algorithmic realization itself need to be extensible. Here we present the realization of a scalable Shor algorithm, as proposed by Kitaev. We factor the number 15 by effectively employing and controlling seven qubits and four "cache qubits" and by implementing generalized arithmetic operations, known as modular multipliers. This algorithm has been realized scalably within an ion-trap quantum computer and returns the correct factors with a confidence level exceeding 99%. PMID:26941315

  14. The Scalable Checkpoint/Restart Library

    2009-02-23

    The Scalable Checkpoint/Restart (SCR) library provides an interface that codes may use to write out and read in application-level checkpoints in a scalable fashion. In the current implementation, checkpoint files are cached in local storage (hard disk or RAM disk) on the compute nodes. This technique provides scalable aggregate bandwidth and uses storage resources that are fully dedicated to the job. This approach addresses the two common drawbacks of checkpointing a large-scale application to a shared parallel file system, namely, limited bandwidth and file system contention. In fact, on current platforms, SCR scales linearly with the number of compute nodes. It has been benchmarked as high as 720 GB/s on 1094 nodes of Atlas, which is nearly two orders of magnitude faster than the parallel file system.

  15. The Scalable Checkpoint/Restart Library

    SciTech Connect

    Moody, A.

    2009-02-23

    The Scalable Checkpoint/Restart (SCR) library provides an interface that codes may use to write out and read in application-level checkpoints in a scalable fashion. In the current implementation, checkpoint files are cached in local storage (hard disk or RAM disk) on the compute nodes. This technique provides scalable aggregate bandwidth and uses storage resources that are fully dedicated to the job. This approach addresses the two common drawbacks of checkpointing a large-scale application to a shared parallel file system, namely, limited bandwidth and file system contention. In fact, on current platforms, SCR scales linearly with the number of compute nodes. It has been benchmarked as high as 720 GB/s on 1094 nodes of Atlas, which is nearly two orders of magnitude faster than the parallel file system.

  16. Scalable illumination robust face identification using harmonic representation

    NASA Astrophysics Data System (ADS)

    Xia, Cong; Chen, Jiansheng; Yang, Chang; Wang, Jing; Liu, Jing; Su, Guangda; Zhang, Gang

    2013-07-01

    Evaluations of both academic face recognition algorithms and commercial systems have shown that recognition performance degrades significantly under variation of illumination. Previous methods for illumination-robust face recognition usually involve computationally expensive 3D model transformations or optimization-based reconstruction using multiple gallery face images, making them infeasible in practical large-scale face identification applications. In this paper, we propose an alternative face identification framework, in which one image per person is used for enrollment, as is commonly practiced in real-life applications. Several probe images captured under different illumination conditions are synthesized to imitate the illumination condition of the enrolled gallery face image. We assume Lambertian reflectance of human faces and use harmonic representations of lighting. We demonstrate satisfactory performance on the Yale B database, both visually and quantitatively. The proposed method is of very low complexity when linear facial features are used, and is therefore scalable to large-scale applications.

  17. Planetary subsurface investigation by 3D visualization model .

    NASA Astrophysics Data System (ADS)

    Seu, R.; Catallo, C.; Tragni, M.; Abbattista, C.; Cinquepalmi, L.

    Subsurface data analysis and visualization represent one of the main aspects of planetary observation (e.g., the search for water or geological characterization). The data are collected by subsurface sounding radars carried as instruments on board deep space missions. These data are generally represented as 2D radargrams in the perspective of the spacecraft track and the z axis (perpendicular to the surface), but without direct correlation to other data acquisitions or knowledge about the planet. In many cases there is plenty of data from other sensors of the same mission, or from other missions, with high continuity in time and space, especially around the scientific sites of interest (e.g., candidate landing areas or other scientifically interesting sites). The 2D perspective is good for analysing single acquisitions and performing detailed analysis of the returned echo, but it is of little use for comparing the very large datasets now available for many planets and moons of the solar system. A better approach is to build a 3D visualization model from the entire stack of data. First of all, this approach allows one to navigate the subsurface in all directions and to analyse different sections and slices, or to navigate the isosurfaces with respect to a value (or interval). The latter makes it possible to isolate one or more isosurfaces and remove, in the visualization, other data not relevant to the analysis; finally, it helps to identify underground 3D bodies. Another aspect is the need to link on-ground data, such as imaging, to the underground data by geography and shared field of view.

  18. Scalable k-means statistics with Titan.

    SciTech Connect

    Thompson, David C.; Bennett, Janine C.; Pebay, Philippe Pierre

    2009-11-01

    This report summarizes existing statistical engines in VTK/Titan and presents both the serial and parallel k-means statistics engines. It is a sequel to [PT08], [BPRT09], and [PT09] which studied the parallel descriptive, correlative, multi-correlative, principal component analysis, and contingency engines. The ease of use of the new parallel k-means engine is illustrated by the means of C++ code snippets and algorithm verification is provided. This report justifies the design of the statistics engines with parallel scalability in mind, and provides scalability and speed-up analysis results for the k-means engine.
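
    The serial core that such engines parallelize fits in a few lines. The plain numpy sketch below omits empty-cluster handling and all distribution logic, both of which the report's engine addresses.

      # Sketch: the basic k-means iteration (assign, then recenter).
      import numpy as np

      def kmeans(points, k, iters=50, seed=0):
          rng = np.random.default_rng(seed)
          centers = points[rng.choice(len(points), k, replace=False)]
          for _ in range(iters):
              d2 = ((points[:, None] - centers) ** 2).sum(-1)
              labels = d2.argmin(axis=1)
              centers = np.stack([points[labels == j].mean(0) for j in range(k)])
          return centers, labels

      rng = np.random.default_rng(1)
      pts = np.concatenate([rng.standard_normal((500, 2)) + c
                            for c in ([0.0, 0.0], [6.0, 6.0])])
      centers, _ = kmeans(pts, 2)
      print(centers.round(2))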

  19. Enhancing Scalability of Sparse Direct Methods

    SciTech Connect

    Li, Xiaoye S.; Demmel, James; Grigori, Laura; Gu, Ming; Xia,Jianlin; Jardin, Steve; Sovinec, Carl; Lee, Lie-Quan

    2007-07-23

    TOPS is providing high-performance, scalable sparse direct solvers, which have had significant impacts on the SciDAC applications, including fusion simulation (CEMM), accelerator modeling (COMPASS), as well as many other mission-critical applications in DOE and elsewhere. Our recent developments have been focusing on new techniques to overcome scalability bottleneck of direct methods, in both time and memory. These include parallelizing symbolic analysis phase and developing linear-complexity sparse factorization methods. The new techniques will make sparse direct methods more widely usable in large 3D simulations on highly-parallel petascale computers.

  20. Validation of a Scalable Solar Sailcraft

    NASA Technical Reports Server (NTRS)

    Murphy, D. M.

    2006-01-01

    The NASA In-Space Propulsion (ISP) program sponsored intensive solar sail technology and systems design, development, and hardware demonstration activities over the past 3 years. Efforts to validate a scalable solar sail system by functional demonstration in relevant environments, together with test-analysis correlation activities on a scalable solar sail system have recently been successfully completed. A review of the program, with descriptions of the design, results of testing, and analytical model validations of component and assembly functional, strength, stiffness, shape, and dynamic behavior are discussed. The scaled performance of the validated system is projected to demonstrate the applicability to flight demonstration and important NASA road-map missions.

  1. VEEVVIE: Visual Explorer for Empirical Visualization, VR and Interaction Experiments.

    PubMed

    Papadopoulos, C; Gutenko, I; Kaufman, A E

    2016-01-01

    Empirical, hypothesis-driven, experimentation is at the heart of the scientific discovery process and has become commonplace in human-factors related fields. To enable the integration of visual analytics in such experiments, we introduce VEEVVIE, the Visual Explorer for Empirical Visualization, VR and Interaction Experiments. VEEVVIE is comprised of a back-end ontology which can model several experimental designs encountered in these fields. This formalization allows VEEVVIE to capture experimental data in a query-able form and makes it accessible through a front-end interface. This front-end offers several multi-dimensional visualization widgets with built-in filtering and highlighting functionality. VEEVVIE is also expandable to support custom experimental measurements and data types through a plug-in visualization widget architecture. We demonstrate VEEVVIE through several case studies of visual analysis, performed on the design and data collected during an experiment on the scalability of high-resolution, immersive, tiled-display walls.

  2. Scalable Domain Decomposed Monte Carlo Particle Transport

    SciTech Connect

    O'Brien, Matthew Joseph

    2013-12-05

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.

  3. Medusa: A Scalable MR Console Using USB

    PubMed Central

    Stang, Pascal P.; Conolly, Steven M.; Santos, Juan M.; Pauly, John M.; Scott, Greig C.

    2012-01-01

    MRI pulse sequence consoles typically employ closed proprietary hardware, software, and interfaces, making difficult any adaptation for innovative experimental technology. Yet MRI systems research is trending to higher channel count receivers, transmitters, gradient/shims, and unique interfaces for interventional applications. Customized console designs are now feasible for researchers with modern electronic components, but high data rates, synchronization, scalability, and cost present important challenges. Implementing large multi-channel MR systems with efficiency and flexibility requires a scalable modular architecture. With Medusa, we propose an open system architecture using the Universal Serial Bus (USB) for scalability, combined with distributed processing and buffering to address the high data rates and strict synchronization required by multi-channel MRI. Medusa uses a modular design concept based on digital synthesizer, receiver, and gradient blocks, in conjunction with fast programmable logic for sampling and synchronization. Medusa is a form of synthetic instrument, being reconfigurable for a variety of medical/scientific instrumentation needs. The Medusa distributed architecture, scalability, and data bandwidth limits are presented, and its flexibility is demonstrated in a variety of novel MRI applications. PMID:21954200

  4. An atmospheric visual analysis and exploration system.

    PubMed

    Song, Yuyan; Ye, Jing; Svakhine, Nikolai; Lasher-Trapp, Sonia; Baldwin, Mike; Ebert, David S

    2006-01-01

    Meteorological research involves the analysis of multi-field, multi-scale, and multi-source data sets. In order to better understand these data sets, models and measurements at different resolutions must be analyzed. Unfortunately, traditional atmospheric visualization systems only provide tools to view a limited number of variables and small segments of the data. These tools are often restricted to two-dimensional contour or vector plots or three-dimensional isosurfaces. The meteorologist must mentally synthesize the data from multiple plots to glean the information needed to produce a coherent picture of the weather phenomenon of interest. In order to provide better tools to meteorologists and reduce system limitations, we have designed an integrated atmospheric visual analysis and exploration system for interactive analysis of weather data sets. Our system allows for the integrated visualization of 1D, 2D, and 3D atmospheric data sets in common meteorological grid structures and utilizes a variety of rendering techniques. These tools provide meteorologists with new abilities to analyze their data and answer questions on regions of interest, ranging from physics-based atmospheric rendering to illustrative rendering containing particles and glyphs. In this paper, we will discuss the use and performance of our visual analysis for two important meteorological applications. The first application is warm rain formation in small cumulus clouds. Here, our three-dimensional, interactive visualization of modeled drop trajectories within spatially correlated fields from a cloud simulation has provided researchers with new insight. Our second application is improving and validating severe storm models, specifically the Weather Research and Forecasting (WRF) model. This is done through correlative visualization of WRF model and experimental Doppler storm data.

  5. Proto-object based rate control for JPEG2000: an approach to content-based scalability.

    PubMed

    Xue, Jianru; Li, Ce; Zheng, Nanning

    2011-04-01

    The JPEG2000 system provides scalability with respect to quality, resolution, and color component in the transfer of images. However, scalability with respect to semantic content is still lacking. We propose a biologically plausible salient-region-based bit allocation mechanism within the JPEG2000 codec for the purpose of augmenting scalability with respect to semantic content. First, an input image is segmented into several salient proto-objects (regions that possibly contain a semantically meaningful physical object) and background regions (regions that contain no object of interest) by modeling visual focus of attention on salient proto-objects. Then, a novel rate control scheme distributes a target bit rate to each individual region according to its saliency and constructs quality layers for proto-objects, enabling truncation more precise than the original quality layers in the standard. Empirical results show that the suggested approach adds content-based scalability to the JPEG2000 system, as well as the functionality of selectively encoding, decoding, and manipulating each individual proto-object in the image, with only minor modifications to the JPEG2000 standard. Furthermore, the proposed rate control approach efficiently reduces computational complexity and memory usage, while maintaining image quality at a level comparable to the conventional post-compression rate distortion (PCRD) optimal truncation algorithm for JPEG2000.

  6. A Scalable Distributed Approach to Mobile Robot Vision

    NASA Technical Reports Server (NTRS)

    Kuipers, Benjamin; Browning, Robert L.; Gribble, William S.

    1997-01-01

    This paper documents our progress during the first year of work on our original proposal entitled 'A Scalable Distributed Approach to Mobile Robot Vision'. We are pursuing a strategy for real-time visual identification and tracking of complex objects which does not rely on specialized image-processing hardware. In this system perceptual schemas represent objects as a graph of primitive features. Distributed software agents identify and track these features, using variable-geometry image subwindows of limited size. Active control of imaging parameters and selective processing makes simultaneous real-time tracking of many primitive features tractable. Perceptual schemas operate independently from the tracking of primitive features, so that real-time tracking of a set of image features is not hurt by latency in recognition of the object that those features make up. The architecture allows semantically significant features to be tracked with limited expenditure of computational resources, and allows the visual computation to be distributed across a network of processors. Early experiments are described which demonstrate the usefulness of this formulation, followed by a brief overview of our more recent progress (after the first year).

  7. Scalable noise estimation with random unitary operators

    NASA Astrophysics Data System (ADS)

    Emerson, Joseph; Alicki, Robert; Życzkowski, Karol

    2005-10-01

    We describe a scalable stochastic method for the experimental measurement of generalized fidelities characterizing the accuracy of the implementation of a coherent quantum transformation. The method is based on the motion reversal of random unitary operators. In the simplest case our method enables direct estimation of the average gate fidelity. The more general fidelities are characterized by a universal exponential rate of fidelity loss. In all cases the measurable fidelity decrease is directly related to the strength of the noise affecting the implementation, quantified by the trace of the superoperator describing the non-unitary dynamics. While the scalability of our stochastic protocol makes it most relevant in large Hilbert spaces (when quantum process tomography is infeasible), our method should be immediately useful for evaluating the degree of control that is achievable in any prototype quantum processing device. By varying over different experimental arrangements and error-correction strategies, additional information about the noise can be determined.
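
    The protocol's core loop (randomize, let the noise act, reverse the motion, record survival) can be mimicked for a single qubit. The density-matrix sketch below substitutes a depolarizing channel for the unknown noise; it is an illustration of the idea, not the experimental protocol.

      # Sketch: motion reversal of random unitaries under depolarizing noise.
      import numpy as np

      def random_unitary(dim, rng):
          # Haar-distributed via QR of a complex Gaussian matrix.
          m = rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
          q, rmat = np.linalg.qr(m)
          ph = np.diag(rmat) / np.abs(np.diag(rmat))
          return q * ph

      def depolarize(rho, p):
          return (1 - p) * rho + p * np.eye(2) / 2

      rng = np.random.default_rng(5)
      p, rho0 = 0.1, np.diag([1.0, 0.0]).astype(complex)
      surv = []
      for _ in range(2000):
          u = random_unitary(2, rng)
          rho = u.conj().T @ depolarize(u @ rho0 @ u.conj().T, p) @ u
          surv.append(np.real(rho[0, 0]))
      print("mean survival:", round(float(np.mean(surv)), 4))  # ~ 1 - p/2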

  8. Controlled, scalable embryonic stem cell differentiation culture.

    PubMed

    Dang, Stephen M; Gerecht-Nir, Sharon; Chen, Jinny; Itskovitz-Eldor, Joseph; Zandstra, Peter W

    2004-01-01

    Embryonic stem (ES) cells are of significant interest as a renewable source of therapeutically useful cells. ES cell aggregation is important for both human and mouse embryoid body (EB) formation and the subsequent generation of ES cell derivatives. Aggregation between EBs (agglomeration), however, inhibits cell growth and differentiation in stirred or high-cell-density static cultures. We demonstrate that the agglomeration of two EBs is initiated by E-cadherin-mediated cell attachment and followed by active cell migration. We report the development of a technology capable of controlling cell-cell interactions in scalable culture by the mass encapsulation of ES cells in size-specified agarose capsules. When placed in stirred-suspension bioreactors, encapsulated ES cells can be used to produce scalable quantities of hematopoietic progenitor cells in a controlled environment.

  9. Scalable coherent interface: Links to the future

    SciTech Connect

    Gustavson, D.B. ); Kristiansen, E. )

    1991-11-01

    Now that the Scalable Coherent Interface (SCI) has solved the bandwidth problem, what can we use it for? SCI was developed to support closely coupled multiprocessors and their caches in a distributed shared-memory environment, but its scalability and the efficient generality of its architecture make it work very well over a wide range of applications. It can replace a local area network for connecting workstations on a campus. It can be a powerful I/O channel for a supercomputer. It can be the processor-cache-memory-I/O connection in a highly parallel computer. It can gather data from enormous particle detectors and distribute it among thousands of processors. It can connect a desktop microprocessor to memory chips a few millimeters away, disk drives a few meters away, and servers a few kilometers away.

  10. Scalable coherent interface: Links to the future

    SciTech Connect

    Gustavson, D.B.; Kristiansen, E.

    1991-11-01

    Now that the Scalable Coherent Interface (SCI) has solved the bandwidth problem, what can we use it for? SCI was developed to support closely coupled multiprocessors and their caches in a distributed shared-memory environment, but its scalability and the efficient generality of its architecture make it work very well over a wide range of applications. It can replace a local area network for connecting workstations on a campus. It can be a powerful I/O channel for a supercomputer. It can be the processor-cache-memory-I/O connection in a highly parallel computer. It can gather data from enormous particle detectors and distribute it among thousands of processors. It can connect a desktop microprocessor to memory chips a few millimeters away, disk drives a few meters away, and servers a few kilometers away.

  11. Visualization Dot Com

    SciTech Connect

    Bethel, Wes

    2000-02-09

    In this article, we explore the seemingly well-worn subject of distance-based, or remote, visualization. Current practices in remote visualization tend to fall into two broad categories. One approach, which we'll call render-remote, is to render an image remotely, then transmit it to the user. Another option, render-local, transfers raw data to the user, where it is then visualized and rendered on the local workstation. With advances in networking and graphics technology, we can explore a class of approaches from a new, third category. With this third category, which we'll call shared, or "dot com", visualization, we stand to reap the best of both worlds: minimized data transfers and workstation-accelerated rendering. We will describe a prototype system called Visapult, currently under development at Lawrence Berkeley National Laboratory (LBNL), that strikes such a balance, achieving a blended, scalable visualization tool. "Dot com" visualization means that remote and local resources collaborate and negotiate, combining capabilities to produce a final product.

  12. SPRNG Scalable Parallel Random Number Generator Library

    SciTech Connect

    Srinivasan, Ashok

    2010-03-16

    This revision corrects some errors in SPRNG 1. Users of newer SPRNG versions can obtain the corrected files and build their version with it. This version also improves the scalability of some of the application-based tests in the SPRNG test suite. It also includes an interface to a parallel Mersenne Twister, so that if users install the Mersenne Twister, then they can test this generator with the SPRNG test suite and also use some SPRNG features with that generator.

  13. Scalable Computer Performance and Analysis (Hierarchical INTegration)

    1999-09-02

    HINT is a program for measuring the performance of a wide variety of scalable computer systems. It is capable of demonstrating the benefits of using more memory or processing power, and of improving communications within the system. HINT can be used to measure an existing system, while the associated program ANALYTIC HINT can be used to explain the measurements or as a design tool for proposed systems.

  14. Scalable descriptive and correlative statistics with Titan.

    SciTech Connect

    Thompson, David C.; Pebay, Philippe Pierre

    2008-12-01

    This report summarizes the existing statistical engines in VTK/Titan and presents the parallel versions thereof which have already been implemented. The ease of use of these parallel engines is illustrated by means of C++ code snippets. Furthermore, this report justifies the design of these engines with parallel scalability in mind; this theoretical property is then verified with test runs that demonstrate optimal parallel speed-up with up to 200 processors.
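
    The pattern such engines rely on, aggregating sufficient statistics across ranks and then deriving the final quantities, is easy to sketch. A minimal illustration with mpi4py (our own variable names and synthetic data; this is not Titan's C++ API):

        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD

        # Each rank holds one slice of the full data set (synthetic here).
        local = np.random.default_rng(comm.rank).normal(size=100_000)

        # "Learn" phase: aggregate sufficient statistics across all ranks.
        n = comm.allreduce(local.size, op=MPI.SUM)
        s1 = comm.allreduce(float(local.sum()), op=MPI.SUM)
        s2 = comm.allreduce(float((local ** 2).sum()), op=MPI.SUM)

        # "Derive" phase: compute descriptive statistics from the aggregates.
        mean = s1 / n
        var = s2 / n - mean ** 2        # population variance
        if comm.rank == 0:
            print(f"n={n}  mean={mean:.4f}  var={var:.4f}")

    Because only three scalars cross the network per rank, the reduction cost grows logarithmically with the number of processors, which is what makes this formulation scalable.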

  15. Interactive Visualization of Astrophysical Plasma Simulations with SDvision

    NASA Astrophysics Data System (ADS)

    Pomarède, D.; Fidaali, Y.; Audit, E.; Brun, A. S.; Masset, F.; Teyssier, R.

    2008-04-01

    SDvision is a graphical interface developed in the framework of IDL Object Graphics and designed for the interactive and immersive visualization of astrophysical plasma simulations. Three-dimensional scalar and vector fields distributed over regular mesh grids or more complex structures, such as adaptive mesh refinement data or multiple embedded grids, as well as N-body systems, can be visualized in a number of different, complementary ways. Various visualizations of the data are proposed simultaneously, such as 3D isosurfaces, volume projections, hedgehog and streamline displays, surfaces and images of 2D subsets, profile plots, and particle clouds. The SDvision widget allows the user to visualize complex, composite scenes both from outside and from within the simulated volume. This tool is used to visualize data from RAMSES, a hybrid N-body and hydrodynamical code which solves the interplay of dark matter and the baryon gas in the study of cosmological structure formation; from HERACLES, a radiation hydrodynamics code used in particular to study turbulence in interstellar molecular clouds; from the ASH code, dedicated to the study of stellar magnetohydrodynamics; and from the JUPITER multi-resolution code used in the study of protoplanetary disk formation.

  16. Pursuing Scalability for hypre's Conceptual Interfaces

    SciTech Connect

    Falgout, R D; Jones, J E; Yang, U M

    2004-07-21

    The software library hypre provides high performance preconditioners and solvers for the solution of large, sparse linear systems on massively parallel computers as well as conceptual interfaces that allow users to access the library in the way they naturally think about their problems. These interfaces include a stencil-based structured interface (Struct); a semi-structured interface (semiStruct), which is appropriate for applications that are mostly structured, e.g. block structured grids, composite grids in structured adaptive mesh refinement applications, and overset grids; a finite element interface (FEI) for unstructured problems, as well as a conventional linear-algebraic interface (IJ). It is extremely important to provide an efficient, scalable implementation of these interfaces in order to support the scalable solvers of the library, especially when using tens of thousands of processors. This paper describes the data structures, parallel implementation and resulting performance of the IJ, Struct and semiStruct interfaces. It investigates their scalability, presents successes as well as pitfalls of some of the approaches and suggests ways of dealing with them.

  17. ParaText : scalable solutions for processing and searching very large document collections : final LDRD report.

    SciTech Connect

    Crossno, Patricia Joyce; Dunlavy, Daniel M.; Stanton, Eric T.; Shead, Timothy M.

    2010-09-01

    This report is a summary of the accomplishments of the 'Scalable Solutions for Processing and Searching Very Large Document Collections' LDRD, which ran from FY08 through FY10. Our goal was to investigate scalable text analysis; specifically, methods for information retrieval and visualization that could scale to extremely large document collections. Towards that end, we designed, implemented, and demonstrated a scalable framework for text analysis - ParaText - as a major project deliverable. Further, we demonstrated the benefits of using visual analysis in text analysis algorithm development, improved performance of heterogeneous ensemble models in data classification problems, and the advantages of information theoretic methods in user analysis and interpretation in cross language information retrieval. The project involved 5 members of the technical staff and 3 summer interns (including one who worked two summers). It resulted in a total of 14 publications, 3 new software libraries (2 open source and 1 internal to Sandia), several new end-user software applications, and over 20 presentations. Several follow-on projects have already begun or will start in FY11, with additional projects currently in proposal.

  18. Improved volume rendering for the visualization of living cells examined with confocal microscopy

    NASA Astrophysics Data System (ADS)

    Enloe, L. Charity; Griffing, Lawrence R.

    2000-02-01

    This research applies recent advances in 3D isosurface reconstruction to images of test spheres and plant cells growing in suspension culture. Isosurfaces that represent object boundaries are constructed with a Marching Cubes algorithm applied to simple data sets, i.e., fluorescent test beads, and complex data sets, i.e., fluorescent plant cells, acquired with a Zeiss Confocal Laser Scanning Microscope (LSM). The Marching Cubes algorithm treats each pixel or voxel of the image as a separate entity when performing computations. To test the spatial accuracy of the reconstruction, control data representing the volume of a 25 micrometer test sphere were obtained with the LSM. This volume was then judged on the basis of uniformity and smoothness. Using polygon decimation and smoothing algorithms available through the Visualization Toolkit (VTK), 'voxellated' test spheres and cells were smoothed using several different smoothing algorithms after unessential polygons were eliminated. With these improvements, the shape of subcellular organelles could be modeled at various levels of accuracy. However, in order to accurately reconstruct the complex structures of interest to us, the subcellular organelles of the endosomal system or the endoplasmic reticulum of plant cells, measures of the accuracy of connectedness of structures need to be developed.
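
    The pipeline described here, isosurface extraction followed by polygon decimation and smoothing, maps directly onto standard VTK filters. A minimal sketch in Python, using a sampled implicit sphere in place of the confocal volume:

        import vtk

        # Synthetic "test bead": sample an implicit sphere on a regular grid.
        sphere = vtk.vtkSphere()
        sphere.SetRadius(0.8)
        sampler = vtk.vtkSampleFunction()
        sampler.SetImplicitFunction(sphere)
        sampler.SetSampleDimensions(64, 64, 64)
        sampler.SetModelBounds(-1, 1, -1, 1, -1, 1)

        # Extract the isosurface with Marching Cubes (isovalue 0 = surface).
        mc = vtk.vtkMarchingCubes()
        mc.SetInputConnection(sampler.GetOutputPort())
        mc.SetValue(0, 0.0)

        # Eliminate unessential polygons, then smooth the voxellated surface.
        decimate = vtk.vtkDecimatePro()
        decimate.SetInputConnection(mc.GetOutputPort())
        decimate.SetTargetReduction(0.5)
        decimate.PreserveTopologyOn()

        smooth = vtk.vtkSmoothPolyDataFilter()
        smooth.SetInputConnection(decimate.GetOutputPort())
        smooth.SetNumberOfIterations(30)
        smooth.SetRelaxationFactor(0.1)
        smooth.Update()

        print(smooth.GetOutput().GetNumberOfPolys(), "polygons after smoothing")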

  19. GRIZ: Finite element analysis results visualization for unstructured grids. User manual

    SciTech Connect

    Dovey, D.J.; Spelce, T.E.

    1993-10-01

    GRIZ supports interactive visualization of finite element analysis results on unstructured grids. GRIZ is a general-purpose post-processing application which is designed to work with a variety of analysis codes. Currently, GRIZ is capable of calculating and displaying derived variables for the DYNA3D, NIKE3D and TOPAZ3D analysis codes. GRIZ reads in data files in the ``MDG plotfile`` format. GRIZ provides support for modern 3D visualization techniques such as isosurface display, cutting planes and display of vector data. GRIZ also incorporates the ability to animate data over time and to store animation frames to a video disk. GRIZ is designed to utilize the capabilities of modern graphics workstations which provide hardware support for 3D graphics, thereby giving the user as much interactive performance as possible. This should make it easier for analysts to explore and interrogate their analysis results.

  20. Visual Analytics for Power Grid Contingency Analysis

    SciTech Connect

    Wong, Pak C.; Huang, Zhenyu; Chen, Yousu; Mackey, Patrick S.; Jin, Shuangshuang

    2014-01-20

    Contingency analysis is the process of employing different measures to model scenarios, analyze them, and then derive the best response to remove the threats. This application paper focuses on a class of contingency analysis problems found in the power grid management system. A power grid is a geographically distributed interconnected transmission network that transmits and delivers electricity from generators to end users. The power grid contingency analysis problem is increasingly important because of both the growing size of the underlying raw data that need to be analyzed and the urgency to deliver working solutions in an aggressive timeframe. Failure to do so may bring significant financial, economic, and security impacts to all parties involved and the society at large. The paper presents a scalable visual analytics pipeline that transforms about 100 million contingency scenarios to a manageable size and form for grid operators to examine different scenarios and come up with preventive or mitigation strategies to address the problems in a predictive and timely manner. Great attention is given to the computational scalability, information scalability, visual scalability, and display scalability issues surrounding the data analytics pipeline. Most of the large-scale computation requirements of our work are conducted on a Cray XMT multi-threaded parallel computer. The paper demonstrates a number of examples using western North American power grid models and data.

  1. Visualizing 3D Turbulence On Temporally Adaptive Wavelet Collocation Grids

    NASA Astrophysics Data System (ADS)

    Goldstein, D. E.; Kadlec, B. J.; Yuen, D. A.; Erlebacher, G.

    2005-12-01

    Today there is an explosion in data from high-resolution computations of nonlinear phenomena in many fields, including the geo- and environmental sciences. The efficient storage and subsequent visualization of these large data sets is a trade-off between storage costs and data quality. New dynamically adaptive simulation methodologies promise significant computational cost savings and have the added benefit of producing results on adapted grids that significantly reduce storage and data manipulation costs. Yet, with these adaptive simulation methodologies come new challenges in the visualization of temporally adaptive data sets. In this work, turbulence data sets from Stochastic Coherent Adaptive Large Eddy Simulations (SCALES) are visualized with the open-source tool ParaView, as a challenging case study. SCALES simulations use a temporally adaptive collocation grid defined by wavelet threshold filtering to resolve the most energetic coherent structures in a turbulence field. A subgrid-scale model is used to account for the effect of unresolved subgrid-scale modes. The results from the SCALES simulations are saved on a thresholded dyadic wavelet collocation grid, which by its nature does not include cell information. ParaView is an open-source visualization package developed by Kitware that is based on the widely used VTK graphics toolkit. The efficient generation of cell information, required with current ParaView data formats, is explored using custom algorithms and VTK toolkit routines. Adaptive 3D visualizations using isosurfaces and volume visualizations are compared with non-adaptive visualizations. To explore the localized multiscale structures in the turbulent data sets, the wavelet coefficients are also visualized, allowing visualization of the energy contained in local physical regions as well as in local wavenumber space.

  2. Scalable resource management in high performance computers.

    SciTech Connect

    Frachtenberg, E.; Petrini, F.; Fernandez Peinador, J.; Coll, S.

    2002-01-01

    Clusters of workstations have emerged as an important platform for building cost-effective, scalable and highly available computers. Although many hardware solutions are available today, the largest challenge in making large-scale clusters usable lies in the system software. In this paper we present STORM, a resource management tool designed to provide scalability, low overhead and the flexibility necessary to efficiently support and analyze a wide range of job scheduling algorithms. STORM achieves these feats by closely integrating the management daemons with the low-level features that are common in state-of-the-art high-performance system area networks. The architecture of STORM is based on three main technical innovations. First, a sizable part of the scheduler runs in the thread processor located on the network interface. Second, we use hardware collectives that are highly scalable both for implementing control heartbeats and for distributing the binary of a parallel job in near-constant time, irrespective of job and machine sizes. Third, we use an I/O bypass protocol that allows fast data movement from the file system to the communication buffers in the network interface and vice versa. The experimental results show that STORM can launch a job with a binary of 12 MB on a 64-processor/32-node cluster in less than 0.25 sec on an empty network, in less than 0.45 sec when all the processors are busy computing other jobs, and in less than 0.65 sec when the network is flooded with background traffic. This paper provides experimental and analytical evidence that these results scale to a much larger number of nodes. To the best of our knowledge, STORM is at least two orders of magnitude faster than existing production schedulers in launching jobs, performing resource management tasks and gang scheduling.

  3. First experience with the scalable coherent interface

    SciTech Connect

    Mueller, H. . ECP Division); RD24 Collaboration

    1994-02-01

    The research project RD24 is studying applications of the Scalable Coherent Interface (IEEE-1596) standard for the Large Hadron Collider (LHC). First SCI node chips from Dolphin were used to demonstrate the use and functioning of SCI's packet protocols and to measure data rates. The authors present results from a first, two-node SCI ringlet at CERN, based on an R3000 RISC processor node and a DMA node on an MC68040 processor bus. A diagnostic link analyzer monitors the SCI packet protocols up to full link bandwidth. In its second phase, RD24 will build a first implementation of a multi-ringlet SCI data merger.

  4. SPRNG Scalable Parallel Random Number Generator Library

    2010-03-16

    This revision corrects some errors in SPRNG 1. Users of newer SPRNG versions can obtain the corrected files and build their version with it. This version also improves the scalability of some of the application-based tests in the SPRNG test suite. It also includes an interface to a parallel Mersenne Twister, so that if users install the Mersenne Twister, then they can test this generator with the SPRNG test suite and also use some SPRNG features with that generator.

  5. Overcoming Scalability Challenges for Tool Daemon Launching

    SciTech Connect

    Ahn, D H; Arnold, D C; de Supinski, B R; Lee, G L; Miller, B P; Schulz, M

    2008-02-15

    Many tools that target parallel and distributed environments must co-locate a set of daemons with the distributed processes of the target application. However, efficient and portable deployment of these daemons on large-scale systems is an unsolved problem. We close this gap with LaunchMON, a scalable, robust, portable, secure, and general-purpose infrastructure for launching tool daemons. Its API allows tool builders to identify all processes of a target job, launch daemons on the relevant nodes and control daemon interaction. Our results show that LaunchMON scales to very large daemon counts and substantially enhances performance over existing ad hoc mechanisms.

  6. Scalable Unix tools on parallel processors

    SciTech Connect

    Gropp, W.; Lusk, E.

    1994-12-31

    The introduction of parallel processors that run a separate copy of Unix on each processor has introduced new problems in managing the user's environment. This paper discusses some generalizations of common Unix commands for managing files (e.g., ls) and processes (e.g., ps) that are convenient and scalable. These basic tools, just like their Unix counterparts, are text-based. We also discuss a way to use these with a graphical user interface (GUI). Some notes on the implementation are provided. Prototypes of these commands are publicly available.
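
    The flavor of such tools is easy to reproduce: run an ordinary Unix command on every processor and collect the output in rank order. A toy sketch with mpi4py (reporting hostnames rather than processes, and not the paper's actual implementation):

        import subprocess
        from mpi4py import MPI

        comm = MPI.COMM_WORLD

        # Run the command locally on each processor, as a parallel "ps" would.
        out = subprocess.run(["uname", "-n"], capture_output=True,
                             text=True).stdout.strip()

        # Gather one line per rank and print them once, in rank order.
        lines = comm.gather(f"{comm.rank}: {out}", root=0)
        if comm.rank == 0:
            print("\n".join(lines))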

  7. Scalable analog wavefront sensor with subpixel resolution

    NASA Astrophysics Data System (ADS)

    Wilcox, Michael

    2006-06-01

    Standard Shack-Hartmann wavefront sensors use a CCD element to sample the position and distortion of a target or guide star. Digital sampling of the element and transfer to a memory space for subsequent computation adds significant temporal delay, thus limiting the spatial frequency and scalability of the system as a wavefront sensor. A new approach to sampling uses information-processing principles found in an insect compound eye. Analog circuitry eliminates digital sampling and extends the useful range of the system to control a deformable mirror and make a faster, more capable wavefront sensor.

  8. Scalable networks for discrete quantum random walks

    SciTech Connect

    Fujiwara, S.; Osaki, H.; Buluta, I.M.; Hasegawa, S.

    2005-09-15

    Recently, quantum random walks (QRWs) have been thoroughly studied in order to develop new quantum algorithms. In this paper we propose scalable quantum networks for discrete QRWs on circles, lines, and also in higher dimensions. In our method the information about the position of the walker is stored in a quantum register, and the network consists of only one-qubit rotation and (controlled)ⁿ-NOT gates; therefore it is purely computational and independent of the physical implementation. As an example, we describe the experimental realization in an ion-trap system.
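
    The walk such a network implements is easy to simulate classically for small sizes. A state-vector sketch of a Hadamard-coined discrete walk on a circle (this mirrors what the proposed gate network computes; it is not the gate decomposition itself):

        import numpy as np

        N, steps = 16, 40                      # sites on the circle, walk length
        psi = np.zeros((N, 2), dtype=complex)  # amplitude[position, coin]
        psi[0, 0] = 1.0                        # walker starts at site 0, coin |0>

        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard coin

        for _ in range(steps):
            psi = psi @ H.T                    # coin toss on every site
            # Conditional shift: coin |0> moves left, coin |1> moves right.
            psi[:, 0] = np.roll(psi[:, 0], -1)
            psi[:, 1] = np.roll(psi[:, 1], +1)

        prob = (abs(psi) ** 2).sum(axis=1)     # position distribution
        print(np.round(prob, 3))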

  9. Scalable Optical-Fiber Communication Networks

    NASA Technical Reports Server (NTRS)

    Chow, Edward T.; Peterson, John C.

    1993-01-01

    Scalable arbitrary fiber extension network (SAFEnet) is a conceptual fiber-optic communication network passing digital signals among a variety of computers and input/output devices at rates from 200 Mb/s to more than 100 Gb/s. It is intended for use with very-high-speed computers and other data-processing and communication systems in which message-passing delays must be kept short. Its inherent flexibility makes it possible to match the performance of the network to the computers by optimizing the configuration of interconnections. In addition, interconnections are made redundant to provide tolerance to faults.

  10. Visual field

    MedlinePlus

    Perimetry; Tangent screen exam; Automated perimetry exam; Goldmann visual field exam; Humphrey visual field exam ... Confrontation visual field exam: This is a quick and basic check of the visual field. The health care provider ...

  11. Scalable enantioselective total synthesis of taxanes.

    PubMed

    Mendoza, Abraham; Ishihara, Yoshihiro; Baran, Phil S

    2011-11-06

    Taxanes form a large family of terpenes comprising over 350 members, the most famous of which is Taxol (paclitaxel), a billion-dollar anticancer drug. Here, we describe the first practical and scalable synthetic entry to these natural products via a concise preparation of (+)-taxa-4(5),11(12)-dien-2-one, which has a suitable functional handle with which to access more oxidized members of its family. This route enables a gram-scale preparation of the 'parent' taxane--taxadiene--which is the largest quantity of this naturally occurring terpene ever isolated or prepared in pure form. The characteristic 6-8-6 tricyclic system of the taxane family, containing a bridgehead alkene, is forged via a vicinal difunctionalization/Diels-Alder strategy. Asymmetry is introduced by means of an enantioselective conjugate addition that forms an all-carbon quaternary centre, from which all other stereocentres are fixed through substrate control. This study lays a critical foundation for a planned access to minimally oxidized taxane analogues and a scalable laboratory preparation of Taxol itself.

  12. Using the scalable nonlinear equations solvers package

    SciTech Connect

    Gropp, W.D.; McInnes, L.C.; Smith, B.F.

    1995-02-01

    SNES (Scalable Nonlinear Equations Solvers) is a software package for the numerical solution of large-scale systems of nonlinear equations on both uniprocessors and parallel architectures. SNES also contains a component for the solution of unconstrained minimization problems, called SUMS (Scalable Unconstrained Minimization Solvers). Newton-like methods, which are known for their efficiency and robustness, constitute the core of the package. As part of the multilevel PETSc library, SNES incorporates many features and options from other parts of PETSc. In keeping with the spirit of the PETSc library, the nonlinear solution routines are data-structure-neutral, making them flexible and easily extensible. This users guide contains a detailed description of uniprocessor usage of SNES, with some added comments regarding multiprocessor usage. At this time the parallel version is undergoing refinement and extension, as we work toward a common interface for the uniprocessor and parallel cases. Thus, forthcoming versions of the software will contain additional features, and changes to the parallel interface may result at any time. The new parallel version will employ the MPI (Message Passing Interface) standard for interprocessor communication. Since most of these details will be hidden, users will need to perform only minimal message-passing programming.
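
    The modern descendant of this interface is available through petsc4py, the Python bindings to PETSc. A minimal uniprocessor sketch in that spirit (the tiny test problem and its hand-coded Jacobian are our own illustration, not an example from the users guide):

        import numpy as np
        from petsc4py import PETSc

        n = 2   # solve x_i^2 = i + 1, i.e. F(x) = x^2 - (1, 2) = 0

        def form_function(snes, x, f):
            xx = x.getArray(readonly=True)
            f.setArray(xx ** 2 - np.arange(1.0, n + 1.0))

        def form_jacobian(snes, x, J, P):
            xx = x.getArray(readonly=True)
            P.zeroEntries()
            for i in range(n):
                P.setValue(i, i, 2.0 * xx[i])   # dF_i/dx_i = 2 x_i
            P.assemble()

        x = PETSc.Vec().createSeq(n)
        x.set(1.0)                              # initial guess
        r = x.duplicate()
        J = PETSc.Mat().createDense([n, n])
        J.setUp()

        snes = PETSc.SNES().create()
        snes.setFunction(form_function, r)
        snes.setJacobian(form_jacobian, J)
        snes.setFromOptions()
        snes.solve(None, x)
        print(x.getArray())                     # approx. [1.0, 1.4142]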

  13. Scalable enantioselective total synthesis of taxanes

    NASA Astrophysics Data System (ADS)

    Mendoza, Abraham; Ishihara, Yoshihiro; Baran, Phil S.

    2012-01-01

    Taxanes form a large family of terpenes comprising over 350 members, the most famous of which is Taxol (paclitaxel), a billion-dollar anticancer drug. Here, we describe the first practical and scalable synthetic entry to these natural products via a concise preparation of (+)-taxa-4(5),11(12)-dien-2-one, which has a suitable functional handle with which to access more oxidized members of its family. This route enables a gram-scale preparation of the ‘parent’ taxane—taxadiene—which is the largest quantity of this naturally occurring terpene ever isolated or prepared in pure form. The characteristic 6-8-6 tricyclic system of the taxane family, containing a bridgehead alkene, is forged via a vicinal difunctionalization/Diels-Alder strategy. Asymmetry is introduced by means of an enantioselective conjugate addition that forms an all-carbon quaternary centre, from which all other stereocentres are fixed through substrate control. This study lays a critical foundation for a planned access to minimally oxidized taxane analogues and a scalable laboratory preparation of Taxol itself.

  14. An Open Infrastructure for Scalable, Reconfigurable Analysis

    SciTech Connect

    de Supinski, B R; Fowler, R; Gamblin, T; Mueller, F; Ratn, P; Schulz, M

    2008-05-15

    Petascale systems will have hundreds of thousands of processor cores, so their applications must be massively parallel. Effective use of petascale systems will require efficient interprocess communication through memory hierarchies and complex network topologies. Tools to collect and analyze detailed data about this communication would facilitate its optimization. However, several factors complicate tool design. First, large-scale runs on petascale systems will be a precious commodity, so scalable tools must have almost no overhead. Second, the volume of performance data from petascale runs could easily overwhelm hand analysis, and thus tools must collect only data that are relevant to diagnosing performance problems. Analysis must be done in situ, where the available processing power is proportional to the data. We describe a tool framework that overcomes these complications. Our approach allows application developers to combine existing techniques for measurement, analysis, and data aggregation to develop application-specific tools quickly. Dynamic configuration enables application developers to select exactly the measurements needed, and generic components support scalable aggregation and analysis of this data with little additional effort.

  15. A scalable and operationally simple radical trifluoromethylation

    PubMed Central

    Beatty, Joel W.; Douglas, James J.; Cole, Kevin P.; Stephenson, Corey R. J.

    2015-01-01

    The large number of reagents that have been developed for the synthesis of trifluoromethylated compounds is a testament to the importance of the CF3 group as well as the associated synthetic challenge. Current state-of-the-art reagents for appending the CF3 functionality directly are highly effective; however, their use on preparative scale has minimal precedent because they require multistep synthesis for their preparation, and/or are prohibitively expensive for large-scale application. For a scalable trifluoromethylation methodology, trifluoroacetic acid and its anhydride represent an attractive solution in terms of cost and availability; however, because of the exceedingly high oxidation potential of trifluoroacetate, previous endeavours to use this material as a CF3 source have required the use of highly forcing conditions. Here we report a strategy for the use of trifluoroacetic anhydride for a scalable and operationally simple trifluoromethylation reaction using pyridine N-oxide and photoredox catalysis to effect a facile decarboxylation to the CF3 radical. PMID:26258541

  16. A scalable and operationally simple radical trifluoromethylation.

    PubMed

    Beatty, Joel W; Douglas, James J; Cole, Kevin P; Stephenson, Corey R J

    2015-01-01

    The large number of reagents that have been developed for the synthesis of trifluoromethylated compounds is a testament to the importance of the CF3 group as well as the associated synthetic challenge. Current state-of-the-art reagents for appending the CF3 functionality directly are highly effective; however, their use on preparative scale has minimal precedent because they require multistep synthesis for their preparation, and/or are prohibitively expensive for large-scale application. For a scalable trifluoromethylation methodology, trifluoroacetic acid and its anhydride represent an attractive solution in terms of cost and availability; however, because of the exceedingly high oxidation potential of trifluoroacetate, previous endeavours to use this material as a CF3 source have required the use of highly forcing conditions. Here we report a strategy for the use of trifluoroacetic anhydride for a scalable and operationally simple trifluoromethylation reaction using pyridine N-oxide and photoredox catalysis to effect a facile decarboxylation to the CF3 radical. PMID:26258541

  17. Towards Scalable Graph Computation on Mobile Devices

    PubMed Central

    Chen, Yiqi; Lin, Zhiyuan; Pienta, Robert; Kahng, Minsuk; Chau, Duen Horng

    2015-01-01

    Mobile devices have become increasingly central to our everyday activities, due to their portability, multi-touch capabilities, and ever-improving computational power. Such attractive features have spurred research interest in leveraging mobile devices for computation. We explore a novel approach that aims to use a single mobile device to perform scalable graph computation on large graphs that do not fit in the device's limited main memory, opening up the possibility of performing on-device analysis of large datasets, without relying on the cloud. Based on the familiar memory mapping capability provided by today's mobile operating systems, our approach to scale up computation is powerful and intentionally kept simple to maximize its applicability across the iOS and Android platforms. Our experiments demonstrate that an iPad mini can perform fast computation on large real graphs with as many as 272 million edges (Google+ social graph), at a speed that is only a few times slower than a 13″ MacBook Pro. Through creating a real world iOS app with this technique, we demonstrate the strong potential application for scalable graph computation on a single mobile device using our approach. PMID:25859564
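
    The memory-mapping idea is simple to demonstrate: store the edge list in a binary file, map it into the address space, and let the operating system page data in on demand. A small sketch with numpy.memmap (the binary edge-list format here is our assumption for illustration, not the paper's):

        import numpy as np

        # Build a tiny binary edge list: pairs of int32 (src, dst).
        edges = np.array([[0, 1], [0, 2], [1, 2], [2, 0]], dtype=np.int32)
        edges.tofile("graph.bin")

        # Memory-map the file: the OS pages edges in and out on demand,
        # so the array can be far larger than physical memory.
        emap = np.memmap("graph.bin", dtype=np.int32, mode="r").reshape(-1, 2)

        # One streaming pass, e.g. out-degree counting (a PageRank building block).
        num_nodes = int(emap.max()) + 1
        deg = np.zeros(num_nodes, dtype=np.int64)
        np.add.at(deg, emap[:, 0], 1)
        print(deg)   # -> [2 1 1]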

  18. Scalable Quantum Computing Over the Rainbow

    NASA Astrophysics Data System (ADS)

    Pfister, Olivier; Menicucci, Nicolas C.; Flammia, Steven T.

    2011-03-01

    The physical implementation of nontrivial quantum computing is an experimental challenge due to decoherence and the need for scalability. Recently we proposed a novel theoretical scheme for realizing a scalable quantum register of very large size, entangled in a cluster state, in the optical frequency comb (OFC) defined by the eigenmodes of a single optical parametric oscillator (OPO). The classical OFC is well known as implemented by the femtosecond, carrier-envelope-phase- and mode-locked lasers which have redefined frequency metrology in recent years. The quantum OFC is a set of harmonic oscillators, or Qmodes, whose amplitude and phase quadratures are continuous variables, the manipulation of which is a mature field for one or two Qmodes. We have shown that the nonlinear optical medium of a single OPO can be engineered, in a sophisticated but already demonstrated manner, so as to entangle in constant time the OPO's OFC into a finitely squeezed, Gaussian cluster state suitable for universal quantum computing over continuous variables. Here we summarize our theoretical result and survey the ongoing experimental efforts in this direction.

  19. Dynamic superhydrophobic behavior in scalable random textured polymeric surfaces

    NASA Astrophysics Data System (ADS)

    Moreira, David; Park, Sung-hoon; Lee, Sangeui; Verma, Neil; Bandaru, Prabhakar R.

    2016-03-01

    Superhydrophobic (SH) surfaces, created from hydrophobic materials with micro- or nano- roughness, trap air pockets in the interstices of the roughness, leading, in fluid flow conditions, to shear-free regions with finite interfacial fluid velocity and reduced resistance to flow. Significant attention has been given to SH conditions on ordered, periodic surfaces. However, in practical terms, random surfaces are more applicable due to their relative ease of fabrication. We investigate SH behavior on a novel durable polymeric rough surface created through a scalable roll-coating process with varying micro-scale roughness through velocity and pressure drop measurements. We introduce a new method to construct the velocity profile over SH surfaces with significant roughness in microchannels. Slip length was measured as a function of differing roughness and interstitial air conditions, with roughness and air fraction parameters obtained through direct visualization. The slip length was matched to scaling laws with good agreement. Roughness at high air fractions led to a reduced pressure drop and higher velocities, demonstrating the effectiveness of the considered surface in terms of reduced resistance to flow. We conclude that the observed air fraction under flow conditions is the primary factor determining the response in fluid flow. Such behavior correlated well with the hydrophobic or superhydrophobic response, indicating significant potential for practical use in enhancing fluid flow efficiency.

  20. Designing Scalable PGAS Communication Subsystems on Cray Gemini Interconnect

    SciTech Connect

    Vishnu, Abhinav; Daily, Jeffrey A.; Palmer, Bruce J.

    2012-12-26

    The Cray Gemini Interconnect has been recently introduced as a next-generation network architecture for building multi-petaflop supercomputers. Cray XE6 systems including LANL Cielo, NERSC Hopper, ORNL Titan and the proposed NCSA Blue Waters leverage the Gemini Interconnect as their primary interconnection network. At the same time, programming models such as the Message Passing Interface (MPI) and Partitioned Global Address Space (PGAS) models such as Unified Parallel C (UPC) and Co-Array Fortran (CAF) have become available on these systems. Global Arrays is a popular PGAS model used in a variety of application domains including hydrodynamics, chemistry and visualization. Global Arrays uses the Aggregate Remote Memory Copy Interface (ARMCI) as the communication runtime system for Remote Memory Access communication. This paper presents a design, implementation and performance evaluation of scalable and high-performance communication subsystems on the Cray Gemini Interconnect using ARMCI. The design space is explored, and the time-space complexities of communication protocols for one-sided communication primitives such as contiguous and uniformly non-contiguous datatypes, atomic memory operations (AMOs) and memory synchronization are presented. An implementation of the proposed design (referred to as ARMCI-Gemini) demonstrates its efficacy on communication primitives, application kernels such as LU decomposition and full applications such as the Smooth Particle Hydrodynamics (SPH) application.

  1. Coupling Advanced Modeling and Visualization to Improve High-Impact Tropical Weather Prediction

    NASA Technical Reports Server (NTRS)

    Shen, Bo-Wen; Tao, Wei-Kuo; Green, Bryan

    2009-01-01

    To meet the goals of extreme weather event warning, this approach couples a modeling and visualization system that integrates existing NASA technologies and improves the modeling system's parallel scalability to take advantage of petascale supercomputers. It also streamlines the data flow for fast processing and 3D visualizations, and develops visualization modules to fuse NASA satellite data.

  2. A New, Scalable and Low Cost Multi-Channel Monitoring System for Polymer Electrolyte Fuel Cells.

    PubMed

    Calderón, Antonio José; González, Isaías; Calderón, Manuel; Segura, Francisca; Andújar, José Manuel

    2016-01-01

    In this work a new, scalable and low-cost multi-channel monitoring system for Polymer Electrolyte Fuel Cells (PEFCs) has been designed, constructed and experimentally validated. The developed monitoring system performs non-intrusive voltage measurement of each individual cell of a PEFC stack, and it is scalable in the sense that it can carry out measurements on stacks of 1 to 120 cells (from watts to kilowatts). The developed system comprises two main subsystems: hardware devoted to data acquisition (DAQ) and software devoted to real-time monitoring. The DAQ subsystem is based on the low-cost open-source platform Arduino, and the real-time monitoring subsystem has been developed using the high-level graphical language NI LabVIEW. Such integration can be considered a novelty in the scientific literature on PEFC monitoring systems. An original amplifying and multiplexing board has been designed to increase the Arduino input port availability. Data storage and real-time monitoring are performed through an easy-to-use interface. Graphical and numerical visualization allows continuous tracking of cell voltage. Scalability, flexibility, ease of use, versatility and low cost are the main features of the proposed approach. The system is described and experimental results are presented. These results demonstrate its suitability to monitor the voltage in a PEFC at cell level. PMID:27005630
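
    On the host side, reading multi-channel voltages from an Arduino-class DAQ board reduces to consuming lines from a serial port. A hedged sketch with pyserial, in which the port name, baud rate and comma-separated frame format are our assumptions for illustration, not the paper's protocol:

        import serial   # pyserial

        PORT, BAUD, NUM_CELLS = "/dev/ttyACM0", 115200, 24   # assumed settings

        with serial.Serial(PORT, BAUD, timeout=2) as ser:
            for _ in range(10):                  # read ten frames
                line = ser.readline().decode("ascii", errors="ignore").strip()
                if not line:
                    continue                     # timeout or empty frame
                # One comma-separated voltage per cell (assumed format).
                volts = [float(v) for v in line.split(",")][:NUM_CELLS]
                print(f"min={min(volts):.3f} V  max={max(volts):.3f} V")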

  3. A New, Scalable and Low Cost Multi-Channel Monitoring System for Polymer Electrolyte Fuel Cells

    PubMed Central

    Calderón, Antonio José; González, Isaías; Calderón, Manuel; Segura, Francisca; Andújar, José Manuel

    2016-01-01

    In this work a new, scalable and low-cost multi-channel monitoring system for Polymer Electrolyte Fuel Cells (PEFCs) has been designed, constructed and experimentally validated. The developed monitoring system performs non-intrusive voltage measurement of each individual cell of a PEFC stack, and it is scalable in the sense that it can carry out measurements on stacks of 1 to 120 cells (from watts to kilowatts). The developed system comprises two main subsystems: hardware devoted to data acquisition (DAQ) and software devoted to real-time monitoring. The DAQ subsystem is based on the low-cost open-source platform Arduino, and the real-time monitoring subsystem has been developed using the high-level graphical language NI LabVIEW. Such integration can be considered a novelty in the scientific literature on PEFC monitoring systems. An original amplifying and multiplexing board has been designed to increase the Arduino input port availability. Data storage and real-time monitoring are performed through an easy-to-use interface. Graphical and numerical visualization allows continuous tracking of cell voltage. Scalability, flexibility, ease of use, versatility and low cost are the main features of the proposed approach. The system is described and experimental results are presented. These results demonstrate its suitability to monitor the voltage in a PEFC at cell level. PMID:27005630

  4. A scalable distributed paradigm for multi-user interaction with tiled rear projection display walls.

    PubMed

    Roman, Pablo; Lazarov, Maxim; Majumder, Aditi

    2010-01-01

    We present the first distributed paradigm for multiple users to interact simultaneously with large tiled rear-projection display walls. Unlike earlier works, our paradigm allows easy scalability across different applications, interaction modalities, displays and users. The novelty of the design lies in its distributed nature, allowing well-compartmented, application-independent and application-specific modules. This enables adapting to different 2D applications and interaction modalities easily by changing a few application-specific modules. We demonstrate four challenging 2D applications on a nine-projector display to demonstrate the application scalability of our method: map visualization, virtual graffiti, a virtual bulletin board and an emergency management system. We demonstrate the scalability of our method to multiple interaction modalities by showing both gesture-based and laser-based user interfaces. Finally, we improve earlier distributed methods to register multiple projectors. Previous works need multiple patterns to identify the neighbors, the configuration of the display and the registration across multiple projectors, in logarithmic time with respect to the number of projectors in the display. We propose a new approach that achieves this using a single pattern based on specially augmented QR codes, in constant time. Further, previous distributed registration algorithms are prone to large misregistrations. We propose a novel radially cascading geometric registration technique that yields significantly better accuracy. Thus, our improvements allow a significantly more efficient and accurate technique for distributed self-registration of multi-projector display walls.

  5. A New, Scalable and Low Cost Multi-Channel Monitoring System for Polymer Electrolyte Fuel Cells.

    PubMed

    Calderón, Antonio José; González, Isaías; Calderón, Manuel; Segura, Francisca; Andújar, José Manuel

    2016-03-09

    In this work a new, scalable and low-cost multi-channel monitoring system for Polymer Electrolyte Fuel Cells (PEFCs) has been designed, constructed and experimentally validated. The developed monitoring system performs non-intrusive voltage measurement of each individual cell of a PEFC stack, and it is scalable in the sense that it can carry out measurements on stacks of 1 to 120 cells (from watts to kilowatts). The developed system comprises two main subsystems: hardware devoted to data acquisition (DAQ) and software devoted to real-time monitoring. The DAQ subsystem is based on the low-cost open-source platform Arduino, and the real-time monitoring subsystem has been developed using the high-level graphical language NI LabVIEW. Such integration can be considered a novelty in the scientific literature on PEFC monitoring systems. An original amplifying and multiplexing board has been designed to increase the Arduino input port availability. Data storage and real-time monitoring are performed through an easy-to-use interface. Graphical and numerical visualization allows continuous tracking of cell voltage. Scalability, flexibility, ease of use, versatility and low cost are the main features of the proposed approach. The system is described and experimental results are presented. These results demonstrate its suitability to monitor the voltage in a PEFC at cell level.

  6. Trelliscope: A System for Detailed Visualization in Analysis of Large Complex Data

    SciTech Connect

    Hafen, Ryan P.; Gosink, Luke J.; McDermott, Jason E.; Rodland, Karin D.; Kleese-Van Dam, Kerstin; Cleveland, William S.

    2013-12-01

    Visualization plays a critical role in the statistical model building and data analysis process. Data analysts, well-versed in statistical and machine learning methods, visualize data to hypothesize and validate models. These analysts need flexible, scalable visualization tools that are not decoupled from their analysis environment. In this paper we introduce Trelliscope, a visualization framework for statistical analysis of large complex data. Trelliscope extends Trellis, an effective visualization framework that divides data into subsets and applies a plotting method to each subset, arranging the results in rows and columns of panels. Trelliscope provides a way to create, arrange and interactively view panels for very large datasets, enabling flexible detailed visualization for data of any size. Scalability is achieved using distributed computing technologies. We discuss the underlying principles, design, and scalable architecture of Trelliscope, and illustrate its use on three analysis projects in the domains of proteomics, high energy physics, and power systems engineering.
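
    The Trellis idea that Trelliscope scales up, splitting the data into subsets, applying one plotting method per subset, and laying the panels out in a grid, can be sketched in a few lines. Trelliscope itself is an R framework; the following only illustrates the underlying display concept in Python:

        import pandas as pd
        import matplotlib.pyplot as plt

        # Toy data: one time series per device.
        df = pd.DataFrame({
            "device": ["a"] * 50 + ["b"] * 50 + ["c"] * 50 + ["d"] * 50,
            "t": list(range(50)) * 4,
            "y": pd.Series(range(200)).sample(frac=1, random_state=0).values,
        })

        # Divide into subsets, apply the same plotting method to each subset,
        # and arrange the resulting panels in rows and columns.
        groups = list(df.groupby("device"))
        fig, axes = plt.subplots(2, 2, sharex=True, sharey=True)
        for ax, (name, sub) in zip(axes.flat, groups):
            ax.plot(sub["t"], sub["y"], lw=0.8)
            ax.set_title(name)
        fig.tight_layout()
        plt.savefig("trellis.png")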

  7. Network selection, Information filtering and Scalable computation

    NASA Astrophysics Data System (ADS)

    Ye, Changqing

    -complete factorizations, possibly with a high percentage of missing values. This promotes additional sparsity beyond rank reduction. Computationally, we design methods based on a ``decomposition and combination'' strategy, to break large-scale optimization into many small subproblems to solve in a recursive and parallel manner. On this basis, we implement the proposed methods through multi-platform shared-memory parallel programming, and through Mahout, a library for scalable machine learning and data mining, for MapReduce computation. For example, our methods are scalable to a dataset consisting of three billion observations on a single machine with sufficient memory, with good timings. Both theoretical and numerical investigations show that the proposed methods exhibit significant improvement in accuracy over state-of-the-art scalable methods.

  8. Visualization for Molecular Dynamics Simulation of Gas and Metal Surface Interaction

    NASA Astrophysics Data System (ADS)

    Puzyrkov, D.; Polyakov, S.; Podryga, V.

    2016-02-01

    The development of methods, algorithms and applications for the visualization of molecular dynamics simulation outputs is discussed. The visual analysis of the results of such calculations is a complex and pressing problem, especially in the case of large-scale simulations. To solve this challenging task it is necessary to decide: 1) what data parameters to render, 2) what type of visualization to choose, and 3) what development tools to use. In the present work an attempt to answer these questions was made. For visualization we propose drawing the particles at their 3D coordinates, together with their velocity vectors, trajectories, and volume density in the form of isosurfaces or fog. We tested a post-processing and visualization approach based on the Python language with additional libraries. We also developed parallel software that can process large volumes of data in the 3D regions of the examined system. This software produces visualization frames in parallel with the calculations and, at the end, collects the individual frames into a video file. The software package "Enthought Mayavi2" was used as the visualization tool. This application gave us the opportunity to study the interaction of a gas with a metal surface and to closely observe the adsorption effect.
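
    A condensed sketch of the rendering choices described above, using the same Mayavi2 package through its mlab interface (the synthetic particle data stand in for real MD output):

        import numpy as np
        from mayavi import mlab

        rng = np.random.default_rng(0)
        pos = rng.random((500, 3))          # particle coordinates
        vel = rng.normal(size=(500, 3))     # particle velocities

        # Particles at their 3D coordinates, plus velocity vectors.
        mlab.points3d(pos[:, 0], pos[:, 1], pos[:, 2], scale_factor=0.02)
        mlab.quiver3d(pos[:, 0], pos[:, 1], pos[:, 2],
                      vel[:, 0], vel[:, 1], vel[:, 2], scale_factor=0.02)

        # Volume density binned onto a grid, shown as isosurfaces.
        density, _ = np.histogramdd(pos, bins=(16, 16, 16))
        mlab.contour3d(density, contours=4, opacity=0.3)

        mlab.savefig("frame_0000.png")      # frames later assembled into a video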

  9. Network-aware scalable video monitoring system for emergency situations with operator-managed fidelity control

    NASA Astrophysics Data System (ADS)

    Al Hadhrami, Tawfik; Nightingale, James M.; Wang, Qi; Grecos, Christos

    2014-05-01

    In emergency situations, the ability to remotely monitor unfolding events using high-quality video feeds will significantly improve the incident commander's understanding of the situation and thereby aid effective decision making. This paper presents a novel, adaptive video monitoring system for emergency situations where the normal communications network infrastructure has been severely impaired or is no longer operational. The proposed scheme, operating over a rapidly deployable wireless mesh network, supports real-time video feeds between first responders, forward operating bases and primary command and control centers. Video feeds captured on portable devices carried by first responders and by static visual sensors are encoded in H.264/SVC, the scalable extension to H.264/AVC, allowing efficient, standards-based temporal, spatial, and quality scalability of the video. A three-tier video delivery system is proposed, which balances the need to avoid overuse of mesh nodes with the operational requirements of the emergency management team. In the first tier, the video feeds are delivered at a low spatial and temporal resolution, employing only the base layer of the H.264/SVC video stream. Routing in this mode is designed to employ all nodes across the entire mesh network. In the second tier, whenever operational considerations require that commanders or operators focus on a particular video feed, a `fidelity control' mechanism at the monitoring station sends control messages to the routing and scheduling agents in the mesh network, which increase the quality of the received picture using SNR scalability while conserving bandwidth by maintaining a low frame rate. In this mode, routing decisions are based on reliable packet delivery, with the most reliable routes being used to deliver the base and lower enhancement layers; as fidelity is increased and more scalable layers are transmitted, they are assigned to routes in descending order of reliability. The third tier

  10. Scalable computer architecture for digital vascular systems

    NASA Astrophysics Data System (ADS)

    Goddard, Iain; Chao, Hui; Skalabrin, Mark

    1998-06-01

    Digital vascular computer systems are used for radiology and fluoroscopy (R/F), angiography, and cardiac applications. In the United States alone, about 26 million procedures of these types are performed annually: about 81% R/F, 11% cardiac, and 8% angiography. Digital vascular systems have a very wide range of performance requirements, especially in terms of data rates. In addition, new features are added over time as they are shown to be clinically efficacious. Application-specific processing modes such as roadmapping, peak opacification, and bolus chasing are particular to some vascular systems. New algorithms continue to be developed and proven, such as Cox and deJager's precise registration methods for masks and live images in digital subtraction angiography. A computer architecture must have high scalability and reconfigurability to meet the needs of this modality. Ideally, the architecture could also serve as the basis for a nonvascular R/F system.

  11. BASSET: Scalable Gateway Finder in Large Graphs

    SciTech Connect

    Tong, H; Papadimitriou, S; Faloutsos, C; Yu, P S; Eliassi-Rad, T

    2010-11-03

    Given a social network, who is the best person to introduce you to, say, Chris Ferguson, the poker champion? Or, given a network of people and skills, who is the best person to help you learn about, say, wavelets? The goal is to find a small group of 'gateways': persons who are close enough to us, as well as close enough to the target (person, or skill) or, in other words, are crucial in connecting us to the target. The main contributions are the following: (a) we show how to formulate this problem precisely; (b) we show that it is submodular and thus can be solved near-optimally; (c) we give fast, scalable algorithms to find such gateways. Experiments on real data sets validate the effectiveness and efficiency of the proposed methods, achieving up to 6,000,000x speedup.
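
    To make the problem concrete, here is a toy greedy selection over a small graph. The additive distance score is a stand-in of ours for illustration; BASSET's actual proximity measure and near-optimal submodular machinery are more sophisticated:

        import networkx as nx

        def gateways(G, source, target, k=3):
            """Pick k nodes that are close to both endpoints (toy scoring)."""
            ds = nx.single_source_shortest_path_length(G, source)
            dt = nx.single_source_shortest_path_length(G, target)
            # Candidates reachable from both ends, excluding the endpoints.
            candidates = (set(ds) & set(dt)) - {source, target}
            return sorted(candidates, key=lambda v: ds[v] + dt[v])[:k]

        G = nx.karate_club_graph()
        print(gateways(G, 0, 33))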

  12. A versatile scalable PET processing system

    SciTech Connect

    H. Dong, A. Weisenberger, J. McKisson, Xi Wenze, C. Cuevas, J. Wilson, L. Zukerman

    2011-06-01

    Positron Emission Tomography (PET) historically has major clinical and preclinical applications in oncology, neurology, and cardiovascular disease. Recently, in a new direction, an application-specific PET system is being developed at Thomas Jefferson National Accelerator Facility (Jefferson Lab), in collaboration with Duke University, the University of Maryland at Baltimore (UMAB), and West Virginia University (WVU), targeted at plant eco-physiology research. The new plant imaging PET system is versatile and scalable such that it can adapt to several plant imaging needs, imaging many important plant organs including leaves, roots, and stems. The mechanical arrangement of the detectors is designed to accommodate the unpredictable and random distribution in space of the plant organs without requiring that the plant be disturbed. Prototyping such a system requires a new data acquisition system (DAQ) and data processing system which are adaptable to the requirements of these unique and versatile detectors.

  13. Efficient scalable solid-state neutron detector.

    PubMed

    Moses, Daniel

    2015-06-01

    We report on a scalable solid-state neutron detector system that is specifically designed to yield high thermal neutron detection sensitivity. The basic detector unit in this system is made of a (6)Li foil coupled to two crystalline silicon diodes. The theoretical intrinsic efficiency of a detector-unit is 23.8% and that of a detector element comprising a stack of five detector-units is 60%. Based on the measured performance of this detector-unit, the performance of a detector system comprising a planar array of detector elements, scaled to encompass an effective area of 0.43 m(2), is estimated to yield the minimum absolute efficiency required of radiological portal monitors used in homeland security. PMID:26133869

  14. Efficient scalable solid-state neutron detector

    SciTech Connect

    Moses, Daniel

    2015-06-15

    We report on a scalable solid-state neutron detector system that is specifically designed to yield high thermal neutron detection sensitivity. The basic detector unit in this system is made of a ⁶Li foil coupled to two crystalline silicon diodes. The theoretical intrinsic efficiency of a detector-unit is 23.8% and that of a detector element comprising a stack of five detector-units is 60%. Based on the measured performance of this detector-unit, the performance of a detector system comprising a planar array of detector elements, scaled to encompass an effective area of 0.43 m², is estimated to yield the minimum absolute efficiency required of radiological portal monitors used in homeland security.

  15. Efficient scalable solid-state neutron detector

    NASA Astrophysics Data System (ADS)

    Moses, Daniel

    2015-06-01

    We report on a scalable solid-state neutron detector system that is specifically designed to yield high thermal neutron detection sensitivity. The basic detector unit in this system is made of a ⁶Li foil coupled to two crystalline silicon diodes. The theoretical intrinsic efficiency of a detector-unit is 23.8% and that of a detector element comprising a stack of five detector-units is 60%. Based on the measured performance of this detector-unit, the performance of a detector system comprising a planar array of detector elements, scaled to encompass an effective area of 0.43 m², is estimated to yield the minimum absolute efficiency required of radiological portal monitors used in homeland security.

  16. Toward Scalable Benchmarks for Mass Storage Systems

    NASA Technical Reports Server (NTRS)

    Miller, Ethan L.

    1996-01-01

    This paper presents guidelines for the design of a mass storage system benchmark suite, along with preliminary suggestions for programs to be included. The benchmarks will measure both peak and sustained performance of the system, as well as predict both short- and long-term behavior. These benchmarks should be both portable and scalable so they may be used on storage systems from tens of gigabytes to petabytes or more. By developing a standard set of benchmarks that reflect real user workloads, we hope to encourage system designers and users to publish performance figures that can be compared with those of other systems. This will allow users to choose the system that best meets their needs and give designers a tool with which they can measure the performance effects of improvements to their systems.

  17. Scalable problems and memory bounded speedup

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He; Ni, Lionel M.

    1992-01-01

    In this paper three models of parallel speedup are studied. They are fixed-size speedup, fixed-time speedup and memory-bounded speedup. The latter two consider the relationship between speedup and problem scalability. Two sets of speedup formulations are derived for these three models. One set considers uneven workload allocation and communication overhead and gives more accurate estimation. Another set considers a simplified case and provides a clear picture of the impact of the sequential portion of an application on the possible performance gain from parallel processing. The simplified fixed-size speedup is Amdahl's law. The simplified fixed-time speedup is Gustafson's scaled speedup. The simplified memory-bounded speedup contains both Amdahl's law and Gustafson's scaled speedup as special cases. This study leads to a better understanding of parallel processing.
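
    In the notation of a serial fraction f and p processors (our notation, summarizing the standard simplified forms rather than quoting the paper), the three simplified speedups can be written as:

        \[
        S_{\mathrm{fixed\text{-}size}}(p) = \frac{1}{f + (1-f)/p}, \qquad
        S_{\mathrm{fixed\text{-}time}}(p) = f + (1-f)\,p,
        \]
        \[
        S_{\mathrm{memory\text{-}bound}}(p) = \frac{f + (1-f)\,G(p)}{f + (1-f)\,G(p)/p},
        \]

    where G(p) captures how the parallel workload grows as memory scales with p; setting G(p) = 1 recovers Amdahl's law, and setting G(p) = p recovers Gustafson's scaled speedup.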

  18. Parallel scalability of Hartree–Fock calculations

    SciTech Connect

    Chow, Edmond Liu, Xing; Smelyanskiy, Mikhail; Hammond, Jeff R.

    2015-03-14

    Quantum chemistry is increasingly performed using large cluster computers consisting of multiple interconnected nodes. For a fixed molecular problem, the efficiency of a calculation usually decreases as more nodes are used, due to the cost of communication between the nodes. This paper empirically investigates the parallel scalability of Hartree–Fock calculations. The construction of the Fock matrix and the density matrix calculation are analyzed separately. For the former, we use a parallelization of Fock matrix construction based on a static partitioning of work followed by a work stealing phase. For the latter, we use density matrix purification from the linear scaling methods literature, but without using sparsity. When using large numbers of nodes for moderately sized problems, density matrix computations are network-bandwidth bound, making purification methods potentially faster than eigendecomposition methods.
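
    For the density matrix step, purification replaces diagonalization with a short matrix-polynomial iteration built on dense multiplies, which is why it becomes network-bandwidth bound at scale. A serial NumPy sketch of one classic scheme, McWeeny purification (the paper does not specify which purification variant it uses):

        import numpy as np

        def mcweeny_purify(D, tol=1e-12, max_iter=100):
            """Drive a trial density matrix to idempotency (D @ D == D).

            McWeeny's iteration D <- 3 D^2 - 2 D^3 converges when the
            eigenvalues of the initial D lie in [0, 1].
            """
            for _ in range(max_iter):
                D2 = D @ D
                if np.linalg.norm(D2 - D) < tol:
                    break
                D = 3.0 * D2 - 2.0 * (D2 @ D)
            return D

        # Trial matrix with eigenvalues in [0, 1]: occupied states near 1,
        # virtual states near 0, roughly what an SCF guess would provide.
        rng = np.random.default_rng(1)
        Q, _ = np.linalg.qr(rng.normal(size=(6, 6)))
        D0 = Q @ np.diag([0.98, 0.9, 0.85, 0.1, 0.05, 0.02]) @ Q.T
        D = mcweeny_purify(D0)
        print("idempotency error:", np.linalg.norm(D @ D - D))
        print("electron count (trace):", round(np.trace(D), 6))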

  19. 3-D Flow Visualization of a Turbulent Boundary Layer

    NASA Astrophysics Data System (ADS)

    Thurow, Brian; Williams, Steven; Lynch, Kyle

    2009-11-01

    A recently developed 3-D flow visualization technique is used to visualize large-scale structures in a turbulent boundary layer. The technique is based on scanning a laser light sheet through the flow field, similar to that of Delo and Smits (1997). High speeds are possible using a recently developed MHz-rate pulse burst laser system, an ultra-high-speed camera capable of 500,000 fps and a galvanometric scanning mirror, yielding a total acquisition time of 136 microseconds for a 220 x 220 x 68 voxel image. In these experiments, smoke is seeded into the boundary layer formed on the wall of a low-speed wind tunnel. The boundary layer is approximately 1.5'' thick at the imaging location with a free stream velocity of 24 ft/s, yielding a Reynolds number of 18,000 based on boundary layer thickness. The 3-D image volume is approximately 4'' x 4'' x 4''. Preliminary results using 3-D iso-surface visualizations show a collection of elongated large-scale structures inclined in the streamwise direction. The spanwise width of the structures, which are located in the outer region, is on the order of 25 -- 50% of the boundary layer thickness.

  20. Visualization of CFD Results in Immersive Virtual Environments

    NASA Technical Reports Server (NTRS)

    Wasfy, Tamer M.; Noor, Ahmed K.

    2001-01-01

    An object-oriented event-driven immersive virtual environment (VE) is described for the visualization of computational fluid dynamics (CFD) results. The VE incorporates the following types of primitive software objects: interface objects, support objects, geometric entities, and finite elements. The fluid domain is discretized using either a multi-block structured grid or an unstructured finite element mesh. The VE allows natural 'fly-through' visualization of the model, the CFD grid, and the model's surroundings. In order to help visualize the flow and its effects on the model, the VE incorporates the following objects: stream objects (lines, surface-restricted lines, ribbons, and volumes); colored surfaces; elevation surfaces; surface arrows; global and local iso-surfaces; vortex cores; and separation/attachment surfaces and lines. Most of these objects can be used for dynamically probing the flow. Particle and arrow animations can be displayed on top of stream objects. Primitive response quantities as well as derived quantities can be used. A recursive tree search algorithm is used for real-time point and value search in the CFD grid.

  1. Visual Text Analytics for Impromptu Analysts

    SciTech Connect

    Love, Oriana J.; Best, Daniel M.; Bruce, Joseph R.; Dowson, Scott T.; Larmey, Christopher S.

    2011-10-23

    The Scalable Reasoning System (SRS) is a lightweight visual analytics framework that makes analytical capabilities widely accessible to a class of users we have deemed “impromptu analysts.” By focusing on a deployment of SRS, the Lessons Learned Explorer (LLEx), we examine how to develop visualizations around analytically oriented goals and data availability. We discuss how to help impromptu analysts explore deeper patterns. Through consistent interaction design, we arrive at interdependent views capable of showcasing patterns. By combining SRS widget visualizations with interactions around the underlying textual data, we aim to transition the casual, infrequent user into a viable, albeit impromptu, analyst.

  2. Coalescent: an open-source and scalable framework for exact calculations in coalescent theory

    PubMed Central

    2012-01-01

    Background Currently, there is no open-source, cross-platform and scalable framework for coalescent analysis in population genetics. There is no scalable GUI-based user application either. Such a framework and application would not only drive the creation of more complex and realistic models but also make them truly accessible. Results As a first attempt, we built a framework and user application for the domain of exact calculations in coalescent analysis. The framework provides an API with the concepts of model, data, statistic, phylogeny, gene tree and recursion. Infinite-alleles and infinite-sites models are considered. It defines pluggable computations such as counting and listing all the ancestral configurations and genealogies and computing the exact probability of data. It can visualize a gene tree, trace and visualize the internals of the recursion algorithm for further improvement, and dynamically attach a number of output processors. The user application defines jobs in a plug-in-like manner so that they can be activated, deactivated, installed or uninstalled on demand. Multiple jobs can be run and their inputs edited. Job inputs are persisted across restarts and running jobs can be cancelled where applicable. Conclusions Coalescent theory plays an increasingly important role in analysing molecular population genetic data. The models involved are mathematically difficult and computationally challenging. An open-source, scalable framework that lets users immediately take advantage of the progress made by others will enable exploration of yet more difficult and realistic models. As models become more complex and mathematically less tractable, the need for an integrated computational approach is obvious. Object-oriented designs, though they have upfront costs, are practical now and can provide such an integrated approach. PMID:23033878
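
    For the infinite-alleles model mentioned above, the simplest exact probability has a closed form, the Ewens sampling formula; the snippet below evaluates it directly. It is an illustrative stand-in for the framework's recursive computations, not its actual API.

```python
from math import factorial, prod

def ewens_probability(a, theta):
    """Ewens sampling formula: probability of an allelic configuration
    under the infinite-alleles model. a[j] is the number of allele types
    observed exactly j+1 times; theta is the scaled mutation rate."""
    n = sum((j + 1) * aj for j, aj in enumerate(a))          # sample size
    rising = prod(theta + k for k in range(n))               # rising factorial
    weight = prod((theta / (j + 1)) ** aj / factorial(aj)
                  for j, aj in enumerate(a))
    return factorial(n) / rising * weight

# Sample of size 4: one allele seen twice plus two singletons.
print(ewens_probability([2, 1, 0, 0], theta=1.0))            # 0.25
```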

  3. Visualization of AMR data with multi-level dual-mesh interpolation.

    PubMed

    Moran, Patrick J; Ellsworth, David

    2011-12-01

    We present a new technique for providing interpolation within cell-centered Adaptive Mesh Refinement (AMR) data that achieves C(0) continuity throughout the 3D domain. Our technique improves on earlier work in that it does not require that adjacent patches differ by at most one refinement level. Our approach takes the dual of each mesh patch and generates "stitching cells" on the fly to fill the gaps between dual meshes. We demonstrate applications of our technique with data from Enzo, an AMR cosmological structure formation simulation code. We show ray-cast visualizations that include contributions from particle data (dark matter and stars, also output by Enzo) and gridded hydrodynamic data. We also show results from isosurface studies, including surfaces in regions where adjacent patches differ by more than one refinement level.

  4. Visualization of time-varying MRI data for MS lesion analysis

    NASA Astrophysics Data System (ADS)

    Tory, Melanie K.; Moeller, Torsten; Atkins, M. Stella

    2001-05-01

    Conventional methods to diagnose and follow treatment of Multiple Sclerosis require radiologists and technicians to compare current images with older images of a particular patient on a slice-by-slice basis. Although there has been progress in creating 3D displays of medical images, little attempt has been made to design visual tools that emphasize change over time. We implemented several ideas that attempt to address this deficiency. In one approach, isosurfaces of segmented lesions at each time step were displayed either on the same image (each time step in a different color) or consecutively in an animation. In a second approach, voxel-wise differences between time steps were calculated and displayed statically using ray casting. Animation was used to show cumulative changes over time. Finally, in a method borrowed from computational fluid dynamics (CFD), glyphs (small arrow-like objects) were rendered with a surface model of the lesions to indicate changes at localized points.
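
    The voxel-wise differencing step is straightforward to express; a minimal sketch, assuming co-registered volumes stored as numpy arrays (the function name and data layout are illustrative):

```python
import numpy as np

def voxelwise_changes(volumes):
    """Per-step and cumulative voxel-wise differences for a series of
    co-registered MRI volumes (one 3-D array per time step). Assumes the
    scans are already aligned and intensity-normalized."""
    stack = np.stack(volumes)            # shape (T, X, Y, Z)
    step_diffs = np.diff(stack, axis=0)  # volume[t+1] - volume[t]
    cumulative = stack[-1] - stack[0]    # net change over the series
    return step_diffs, cumulative
```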

  5. Developing a personal computer-based data visualization system using public domain software

    NASA Astrophysics Data System (ADS)

    Chen, Philip C.

    1999-03-01

    The current research investigates the possibility of developing a computing-visualization system using public domain software on a personal computer. The Visualization Toolkit (VTK) is available on UNIX and PC platforms. VTK uses C++ to build an executable and has abundant programming classes/objects contained in its system library. Users can also develop their own classes/objects in addition to those existing in the class library, and can develop applications in any of the C++, Tcl/Tk, and JAVA environments. The present research shows how a data visualization system can be developed with VTK running on a personal computer. The topics include: execution efficiency; visual object quality; availability of the user interface design; and the feasibility of a VTK-based World Wide Web data visualization system. The research features a case study showing how to use VTK to visualize meteorological data with techniques including iso-surface extraction, volume rendering, vector display, and composite analysis. The study also shows how the VTK outline, axes, and two-dimensional annotation text and title enhance the data presentation. Finally, it demonstrates how VTK works in an internet environment while accessing an executable with a JAVA application program in a webpage.
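
    A minimal modern equivalent of such a pipeline, using VTK's Python bindings (the file name and isovalue are placeholders; the era's Tcl/C++ pipelines follow the same reader-contour-mapper-actor structure):

```python
import vtk

reader = vtk.vtkStructuredPointsReader()
reader.SetFileName("meteo_field.vtk")        # hypothetical dataset

contour = vtk.vtkContourFilter()             # marching-cubes-style isosurface
contour.SetInputConnection(reader.GetOutputPort())
contour.SetValue(0, 0.5)                     # placeholder isovalue

mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(contour.GetOutputPort())
mapper.ScalarVisibilityOff()

actor = vtk.vtkActor()
actor.SetMapper(mapper)

renderer = vtk.vtkRenderer()
renderer.AddActor(actor)
window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)
window.Render()
interactor.Start()
```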

  6. Large-Scale Visual Data Analysis

    NASA Astrophysics Data System (ADS)

    Johnson, Chris

    2014-04-01

    Modern high performance computers have speeds measured in petaflops and handle data set sizes measured in terabytes and petabytes. Although these machines offer enormous potential for solving very large-scale realistic computational problems, their effectiveness will hinge upon the ability of human experts to interact with their simulation results and extract useful information. One of the greatest scientific challenges of the 21st century is to effectively understand and make use of the vast amount of information being produced. Visual data analysis will be among our most important tools in helping to understand such large-scale information. Our research at the Scientific Computing and Imaging (SCI) Institute at the University of Utah has focused on innovative, scalable techniques for large-scale 3D visual data analysis. In this talk, I will present state-of-the-art visualization techniques, including scalable visualization algorithms and software, cluster-based visualization methods, and innovative visualization techniques applied to problems in computational science, engineering, and medicine. I will conclude with an outline of future high performance visualization research challenges and opportunities.

  7. Computing and Visualizing Reachable Volumes for Maneuvering Satellites

    SciTech Connect

    Jiang, M; de Vries, W H; Pertica, A J; Olivier, S S

    2011-09-11

    Detecting and predicting maneuvering satellites is an important problem for Space Situational Awareness. The spatial envelope of all possible locations within reach of such a maneuvering satellite is known as the Reachable Volume (RV). As soon as custody of a satellite is lost, calculating the RV and its subsequent time evolution is a critical component in the rapid recovery of the satellite. In this paper, we present a Monte Carlo approach to computing the RV for a given object. Essentially, our approach samples all possible trajectories by randomizing thrust vectors, thrust magnitudes and times of burn. At any given instant, the distribution of the 'point cloud' of the virtual particles defines the RV. For short orbital time-scales, the temporal evolution of the point cloud can result in complex, multi-reentrant manifolds. Visualization plays an important role in gaining insight into this complex and evolving manifold. In the second part of this paper, we focus on how to effectively visualize the large number of virtual trajectories and the computed RV. We present a real-time out-of-core rendering technique for visualizing the large number of virtual trajectories. We also examine different techniques for visualizing the computed volume of probability density distribution, including volume slicing, convex hulls and isosurfacing. We compare and contrast these techniques in terms of computational cost and visualization effectiveness, and describe the main implementation issues encountered during our development process. Finally, we present some of the results from our end-to-end system for computing and visualizing RVs using examples of maneuvering satellites.
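
    A deliberately simplified sketch of the sampling idea, assuming a single impulsive burn per trajectory and a crude fixed-step integrator; the paper's sampling is over thrust vectors, magnitudes and burn times more generally, and all names and parameter values here are illustrative:

```python
import numpy as np

MU = 3.986e14  # Earth's gravitational parameter, m^3/s^2

def accel(r):
    return -MU * r / np.linalg.norm(r) ** 3

def propagate(r, v, t, dt=10.0):
    """Crude fixed-step leapfrog propagation under point-mass gravity."""
    for _ in range(int(t / dt)):
        v = v + 0.5 * dt * accel(r)
        r = r + dt * v
        v = v + 0.5 * dt * accel(r)
    return r, v

def reachable_cloud(r0, v0, horizon, n_samples=10000, dv_max=50.0, rng=None):
    """Sample trajectories with one randomized impulsive burn (random
    direction, magnitude and epoch); the resulting point cloud
    approximates the reachable volume at the horizon."""
    rng = rng or np.random.default_rng(0)
    points = np.empty((n_samples, 3))
    for i in range(n_samples):
        t_burn = rng.uniform(0.0, horizon)
        u = rng.normal(size=3)
        dv = rng.uniform(0.0, dv_max) * u / np.linalg.norm(u)
        r, v = propagate(np.array(r0, float), np.array(v0, float), t_burn)
        r, _ = propagate(r, v + dv, horizon - t_burn)
        points[i] = r
    return points
```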

  8. A scalable neuristor built with Mott memristors.

    PubMed

    Pickett, Matthew D; Medeiros-Ribeiro, Gilberto; Williams, R Stanley

    2013-02-01

    The Hodgkin-Huxley model for action potential generation in biological axons is central for understanding the computational capability of the nervous system and emulating its functionality. Owing to the historical success of silicon complementary metal-oxide-semiconductors, spike-based computing is primarily confined to software simulations and specialized analogue metal-oxide-semiconductor field-effect transistor circuits. However, there is interest in constructing physical systems that emulate biological functionality more directly, with the goal of improving efficiency and scale. The neuristor was proposed as an electronic device with properties similar to the Hodgkin-Huxley axon, but previous implementations were not scalable. Here we demonstrate a neuristor built using two nanoscale Mott memristors, dynamical devices that exhibit transient memory and negative differential resistance arising from an insulating-to-conducting phase transition driven by Joule heating. This neuristor exhibits the important neural functions of all-or-nothing spiking with signal gain and diverse periodic spiking, using materials and structures that are amenable to extremely high-density integration with or without silicon transistors. PMID:23241533

  9. Scalable Production of Molybdenum Disulfide Based Biosensors.

    PubMed

    Naylor, Carl H; Kybert, Nicholas J; Schneier, Camilla; Xi, Jin; Romero, Gabriela; Saven, Jeffery G; Liu, Renyu; Johnson, A T Charlie

    2016-06-28

    We demonstrate arrays of opioid biosensors based on chemical vapor deposition grown molybdenum disulfide (MoS2) field effect transistors (FETs) coupled to a computationally redesigned, water-soluble variant of the μ-opioid receptor (MOR). By transferring dense films of monolayer MoS2 crystals onto prefabricated electrode arrays, we obtain high-quality FETs with clean surfaces that allow for reproducible protein attachment. The fabrication yield of MoS2 FETs and biosensors exceeds 95%, with an average mobility of 2.0 cm^2 V^-1 s^-1 at room temperature under ambient conditions (36 cm^2 V^-1 s^-1 in vacuo). An atomic-length, nickel-mediated linker chemistry enables target binding events that occur very close to the MoS2 surface to maximize sensitivity. The biosensor response calibration curve for a synthetic opioid peptide known to bind to the wild-type MOR indicates a binding affinity that matches values determined using traditional techniques and a limit of detection of ∼3 nM (1.5 ng/mL). The combination of scalable array fabrication and rapid, precise binding readout enabled by the MoS2 transistor offers the prospect of a solid-state drug testing platform for rapid readout of the interactions between novel drugs and their intended protein targets.

  10. Scalability and interoperability within glideinWMS

    NASA Astrophysics Data System (ADS)

    Bradley, D.; Sfiligoi, I.; Padhi, S.; Frey, J.; Tannenbaum, T.

    2010-04-01

    Physicists have access to thousands of CPUs in grid federations such as OSG and EGEE. With the start-up of the LHC, it is essential for individuals or groups of users to wrap together available resources from multiple sites across multiple grids under a higher user-controlled layer in order to provide a homogeneous pool of available resources. One such system is glideinWMS, which is based on the Condor batch system. A general discussion of glideinWMS can be found elsewhere. Here, we focus on recent advances in extending its reach: scalability and integration of heterogeneous compute elements. We demonstrate that the new developments exceed the design goal of over 10,000 simultaneous running jobs under a single Condor schedd, using strong security protocols across global networks, and sustaining a steady-state job completion rate of a few Hz. We also show interoperability across heterogeneous computing elements achieved using client-side methods. We discuss this technique and the challenges in direct access to NorduGrid and CREAM compute elements, in addition to Globus based systems.

  11. Scalability and interoperability within glideinWMS

    SciTech Connect

    Bradley, D.; Sfiligoi, I.; Padhi, S.; Frey, J.; Tannenbaum, T. (Wisconsin U., Madison)

    2010-01-01

    Physicists have access to thousands of CPUs in grid federations such as OSG and EGEE. With the start-up of the LHC, it is essential for individuals or groups of users to wrap together available resources from multiple sites across multiple grids under a higher user-controlled layer in order to provide a homogeneous pool of available resources. One such system is glideinWMS, which is based on the Condor batch system. A general discussion of glideinWMS can be found elsewhere. Here, we focus on recent advances in extending its reach: scalability and integration of heterogeneous compute elements. We demonstrate that the new developments exceed the design goal of over 10,000 simultaneous running jobs under a single Condor schedd, using strong security protocols across global networks, and sustaining a steady-state job completion rate of a few Hz. We also show interoperability across heterogeneous computing elements achieved using client-side methods. We discuss this technique and the challenges in direct access to NorduGrid and CREAM compute elements, in addition to Globus based systems.

  12. Dynamically scalable dual-core pipelined processor

    NASA Astrophysics Data System (ADS)

    Kumar, Nishant; Aggrawal, Ekta; Rajawat, Arvind

    2015-10-01

    This article proposes the design and architecture of a dynamically scalable dual-core pipelined processor. The design methodology is core fusion: two independent cores can dynamically morph into a larger processing unit, or be used as distinct processing elements, to achieve high sequential performance and high parallel performance. The processor provides two execution modes. Mode 1 is a multiprogramming mode for executing instruction streams of lower data width, i.e., each core performs 16-bit operations individually. Performance improves in this mode due to the parallel execution of instructions in both cores, at the cost of area. In mode 2, the two processing cores are coupled and behave like a single processing unit of higher data width, i.e., they can perform 32-bit operations. Additional core-to-core communication is needed to realise this mode. The mode can switch dynamically; therefore, this processor provides multiple functions in a single design. Design and verification of the processor were carried out using Verilog on the Xilinx 14.1 platform. The processor was verified in both simulation and synthesis with the help of test programs. The design is targeted for implementation on a Xilinx Spartan 3E XC3S500E FPGA.

  13. Quantum Information Processing using Scalable Techniques

    NASA Astrophysics Data System (ADS)

    Hanneke, D.; Bowler, R.; Jost, J. D.; Home, J. P.; Lin, Y.; Tan, T.-R.; Leibfried, D.; Wineland, D. J.

    2011-05-01

    We report progress towards improving our previous demonstrations that combined all the fundamental building blocks required for scalable quantum information processing using trapped atomic ions. Included elements are long-lived qubits; a laser-induced universal gate set; state initialization and readout; and information transport, including co-trapping a second ion species to reinitialize motion without qubit decoherence. Recent efforts have focused on reducing experimental overhead and increasing gate fidelity. Most of the experimental duty cycle was previously used for transport, separation, and recombination of ion chains as well as re-cooling of motional excitation. We have addressed these issues by developing and implementing an arbitrary waveform generator with an update rate far above the ions' motional frequencies. To reduce gate errors, we actively stabilize the position of several UV (313 nm) laser beams. We have also switched the two-qubit entangling gate to one that acts directly on 9Be+ hyperfine qubit states whose energy separation is magnetic-fluctuation insensitive. This work is supported by DARPA, NSA, ONR, IARPA, Sandia, and the NIST Quantum Information Program.

  14. Scalable combinatorial tools for health disparities research.

    PubMed

    Langston, Michael A; Levine, Robert S; Kilbourne, Barbara J; Rogers, Gary L; Kershenbaum, Anne D; Baktash, Suzanne H; Coughlin, Steven S; Saxton, Arnold M; Agboto, Vincent K; Hood, Darryl B; Litchveld, Maureen Y; Oyana, Tonny J; Matthews-Juarez, Patricia; Juarez, Paul D

    2014-01-01

    Despite staggering investments made in unraveling the human genome, current estimates suggest that as much as 90% of the variance in cancer and chronic diseases can be attributed to factors outside an individual's genetic endowment, particularly to environmental exposures experienced across his or her life course. New analytical approaches are clearly required as investigators turn to complicated systems theory and ecological, place-based and life-history perspectives in order to understand more clearly the relationships between social determinants, environmental exposures and health disparities. While traditional data analysis techniques remain foundational to health disparities research, they are easily overwhelmed by the ever-increasing size and heterogeneity of available data needed to illuminate latent gene x environment interactions. This has prompted the adaptation and application of scalable combinatorial methods, many from genome science research, to the study of population health. Most of these powerful tools are algorithmically sophisticated, highly automated and mathematically abstract. Their utility motivates the main theme of this paper, which is to describe real applications of innovative transdisciplinary models and analyses in an effort to help move the research community closer toward identifying the causal mechanisms and associated environmental contexts underlying health disparities. The public health exposome is used as a contemporary focus for addressing the complex nature of this subject. PMID:25310540

  15. Scalable office-based health care

    PubMed Central

    Koepp, Gabriel A.; Manohar, Chinmay U.; McCrady-Spitzer, Shelly K.; Levine, James A.

    2014-01-01

    The goal of healthcare is to provide high quality care at an affordable cost for its patients. However, the population it serves has changed dramatically since the popularization of hospital-based healthcare. With available new technology, alternative healthcare delivery methods can be designed and tested. This study examines Scalable Office Based Healthcare for Small Business, where healthcare is delivered to the office floor. This delivery was tested in 18 individuals at a small business in Minneapolis, Minnesota. The goal was to deliver modular healthcare and mitigate conditions such as diabetes, hyperlipidemia, obesity, sedentariness, and metabolic disease. The modular healthcare system was welcomed by employees – 70% of those eligible enrolled. The findings showed that the modular healthcare deliverable was feasible and effective. The data demonstrated significant improvements in weight loss, fat loss, and blood variables for at risk participants. This study leaves room for improvement and further innovation. Expansion to include offerings such as physicals, diabetes management, smoking cessation, and pre-natal treatment would improve its utility. Future studies could include testing the adaptability of delivery method, as it should adapt to reach rural and underserved populations. PMID:21471576

  16. Scalable Production of Molybdenum Disulfide Based Biosensors.

    PubMed

    Naylor, Carl H; Kybert, Nicholas J; Schneier, Camilla; Xi, Jin; Romero, Gabriela; Saven, Jeffery G; Liu, Renyu; Johnson, A T Charlie

    2016-06-28

    We demonstrate arrays of opioid biosensors based on chemical vapor deposition grown molybdenum disulfide (MoS2) field effect transistors (FETs) coupled to a computationally redesigned, water-soluble variant of the μ-opioid receptor (MOR). By transferring dense films of monolayer MoS2 crystals onto prefabricated electrode arrays, we obtain high-quality FETs with clean surfaces that allow for reproducible protein attachment. The fabrication yield of MoS2 FETs and biosensors exceeds 95%, with an average mobility of 2.0 cm^2 V^-1 s^-1 at room temperature under ambient conditions (36 cm^2 V^-1 s^-1 in vacuo). An atomic-length, nickel-mediated linker chemistry enables target binding events that occur very close to the MoS2 surface to maximize sensitivity. The biosensor response calibration curve for a synthetic opioid peptide known to bind to the wild-type MOR indicates a binding affinity that matches values determined using traditional techniques and a limit of detection of ∼3 nM (1.5 ng/mL). The combination of scalable array fabrication and rapid, precise binding readout enabled by the MoS2 transistor offers the prospect of a solid-state drug testing platform for rapid readout of the interactions between novel drugs and their intended protein targets. PMID:27227361

  17. Towards Scalable Optimal Sequence Homology Detection

    SciTech Connect

    Daily, Jeffrey A.; Krishnamoorthy, Sriram; Kalyanaraman, Anantharaman

    2012-12-26

    The field of bioinformatics and computational biology is experiencing a data revolution: experimental techniques to procure data have increased in throughput, improved in accuracy and reduced in costs. This has spurred an array of high-profile sequencing and data generation projects. While the data repositories represent untapped reservoirs of rich information critical for scientific breakthroughs, the analytical software tools that are needed to analyze large volumes of such sequence data have significantly lagged behind in their capacity to scale. In this paper, we address homology detection, which is a fundamental problem in large-scale sequence analysis with numerous applications. We present a scalable framework to conduct large-scale optimal homology detection on massively parallel supercomputing platforms. Our approach employs distributed-memory work stealing to effectively parallelize optimal pairwise alignment computation tasks. Results on 120,000 cores of the Hopper Cray XE6 supercomputer demonstrate strong scaling and up to 2.42 × 10^7 optimal pairwise sequence alignments computed per second (PSAPS), the highest reported in the literature.
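
    The per-pair kernel being parallelized is optimal alignment by dynamic programming; a minimal score-only sketch (Needleman-Wunsch with linear gap costs and illustrative scoring parameters, standing in for whatever alignment variant the framework actually uses):

```python
def nw_score(a, b, match=1, mismatch=-1, gap=-1):
    """Optimal global alignment score by Needleman-Wunsch dynamic
    programming: the per-pair task that a work-stealing scheduler
    farms out across cores. O(len(a) * len(b)) time, linear space."""
    prev = [j * gap for j in range(len(b) + 1)]
    for i, ca in enumerate(a, 1):
        curr = [i * gap]
        for j, cb in enumerate(b, 1):
            diag = prev[j - 1] + (match if ca == cb else mismatch)
            curr.append(max(diag, prev[j] + gap, curr[j - 1] + gap))
        prev = curr
    return prev[-1]

print(nw_score("GATTACA", "GCATGCU"))   # 0 for these parameters
```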

  18. Scalable histopathological image analysis via active learning.

    PubMed

    Zhu, Yan; Zhang, Shaoting; Liu, Wei; Metaxas, Dimitris N

    2014-01-01

    Training an effective and scalable system for medical image analysis usually requires a large amount of labeled data, which incurs a tremendous annotation burden for pathologists. Recent progress in active learning can alleviate this issue, leading to a great reduction in labeling cost without sacrificing too much predictive accuracy. However, most existing active learning methods disregard the "structured information" that may exist in medical images (e.g., data from individual patients), and make the simplifying assumption that unlabeled data are independently and identically distributed. Both may be unsuitable for real-world medical images. In this paper, we propose a novel batch-mode active learning method which explores and leverages such structured information in annotations of medical images to enforce diversity among the selected data, therefore maximizing the information gain. We formulate the active learning problem as an adaptive submodular function maximization problem subject to a partition matroid constraint, and further present an efficient greedy algorithm to achieve a good solution with a theoretically proven bound. We demonstrate the efficacy of our algorithm on thousands of histopathological images of breast microscopic tissues. PMID:25320821
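
    The greedy scheme for submodular maximization under a partition matroid is compact enough to sketch. The names and the gain callback below are hypothetical; the paper's actual objective, grouping and bound are specific to its formulation.

```python
def greedy_batch_selection(candidates, groups, gain, budget):
    """Greedy maximization of a submodular gain under a partition matroid:
    at most budget[g] picks from group g (e.g., each patient's images form
    a group, enforcing diversity in the selected batch). gain(S, x) is the
    marginal benefit of adding candidate x to the current selection S."""
    selected = []
    used = {g: 0 for g in budget}
    while True:
        feasible = [x for x in candidates
                    if x not in selected
                    and used[groups[x]] < budget[groups[x]]]
        if not feasible:
            break
        best = max(feasible, key=lambda x: gain(selected, x))
        if gain(selected, best) <= 0:
            break                    # no candidate still adds information
        selected.append(best)
        used[groups[best]] += 1
    return selected
```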

  19. Scalable Silicon Nanostructuring for Thermoelectric Applications

    NASA Astrophysics Data System (ADS)

    Koukharenko, E.; Boden, S. A.; Platzek, D.; Bagnall, D. M.; White, N. M.

    2013-07-01

    The current limitations of commercially available thermoelectric (TE) generators include their incompatibility with human body applications, due to the toxicity of commonly used alloys, and a possible future shortage of raw materials (Bi-Sb-Te and Se). In this respect, exploiting silicon as an environmentally friendly candidate for thermoelectric applications is a promising alternative, since it is an abundant, ecofriendly semiconductor for which there already exists an infrastructure for low-cost, high-yield processing. Contrary to existing approaches, where n/p-legs were either heavily doped to an optimal carrier concentration of 10^19 cm^-3 or morphologically modified by increasing their roughness, in this work improved thermoelectric performance was achieved in smooth silicon nanostructures with a low doping concentration (1.5 × 10^15 cm^-3). Scalable, highly reproducible e-beam lithography, compatible with nanoimprint and followed by deep reactive-ion etching (DRIE), was employed to produce arrays of regularly spaced nanopillars of 400 nm height with diameters varying from 140 nm to 300 nm. A potential Seebeck microprobe (PSM) was used to measure the Seebeck coefficients of these nanostructures, yielding values ranging from -75 μV/K to -120 μV/K for n-type and 100 μV/K to 140 μV/K for p-type material, significant improvements over previously reported data.

  20. CODA: A scalable, distributed data acquisition system

    SciTech Connect

    Watson, W.A. III; Chen, J.; Heyes, G.; Jastrzembski, E.; Quarrie, D.

    1994-02-01

    A new data acquisition system has been designed for physics experiments scheduled to run at CEBAF starting in the summer of 1994. This system runs on Unix workstations connected via ethernet, FDDI, or other network hardware to multiple intelligent front end crates -- VME, CAMAC or FASTBUS. CAMAC crates may either contain intelligent processors, or may be interfaced to VME. The system is modular and scalable, from a single front end crate and one workstation linked by ethernet, to as many as 32 clusters of front end crates ultimately connected via a high speed network to a set of analysis workstations. The system includes an extensible, device independent slow controls package with drivers for CAMAC, VME, and high voltage crates, as well as a link to CEBAF accelerator controls. All distributed processes are managed by standard remote procedure calls propagating change-of-state requests, or reading and writing program variables. Custom components may be easily integrated. The system is portable to any front end processor running the VxWorks real-time kernel, and to most workstations supplying a few standard facilities such as rsh and X-windows, and Motif and socket libraries. Sample implementations exist for 2 Unix workstation families connected via ethernet or FDDI to VME (with interfaces to FASTBUS or CAMAC), and via ethernet to FASTBUS or CAMAC.

  1. CODA: a scalable, distributed data acquisition system

    SciTech Connect

    David Quarrie; Edward Jastrzembski; William Heyes; Jie Chen; William Watson

    1994-02-01

    A new data acquisition system has been designed for physics experiments scheduled to run at CEBAF starting in the summer of 1994. This system runs on Unix workstations connected via ethernet, FDDI, or other network hardware to multiple intelligent front end crates-VME, CAMAC or FASTBUS. CAMAC crates may either contain intelligent processors, or may be interfaced to VME. The system is modular and scalable, from a single front end crate and one workstation linked by ethernet, to as many as 32 clusters of front end crates ultimately connected via a high speed network to a set of analysis workstations. The system includes an extensible, device independent slow controls package with drivers for CAMAC, VME, and high voltage crates, as well as a link to CEBAF accelerator controls. All distributed processes are managed by standard remote procedure calls propagating change-of-state requests, or reading and writing program variables. Custom components may be easily integrated. The system is portable to any front end processor running the VxWorks real-time kernel, and to most workstations supplying a few standard facilities such as rsh and X-windows, and Motif and socket libraries. Sample implementations exist for 2 Unix workstation families connected via ethernet or FDDI to VME (with interfaces to FASTBUS or CAMAC), and via ethernet to FASTBUS or CAMAC.

  2. Scalable Combinatorial Tools for Health Disparities Research

    PubMed Central

    Langston, Michael A.; Levine, Robert S.; Kilbourne, Barbara J.; Rogers, Gary L.; Kershenbaum, Anne D.; Baktash, Suzanne H.; Coughlin, Steven S.; Saxton, Arnold M.; Agboto, Vincent K.; Hood, Darryl B.; Litchveld, Maureen Y.; Oyana, Tonny J.; Matthews-Juarez, Patricia; Juarez, Paul D.

    2014-01-01

    Despite staggering investments made in unraveling the human genome, current estimates suggest that as much as 90% of the variance in cancer and chronic diseases can be attributed to factors outside an individual’s genetic endowment, particularly to environmental exposures experienced across his or her life course. New analytical approaches are clearly required as investigators turn to complicated systems theory and ecological, place-based and life-history perspectives in order to understand more clearly the relationships between social determinants, environmental exposures and health disparities. While traditional data analysis techniques remain foundational to health disparities research, they are easily overwhelmed by the ever-increasing size and heterogeneity of available data needed to illuminate latent gene x environment interactions. This has prompted the adaptation and application of scalable combinatorial methods, many from genome science research, to the study of population health. Most of these powerful tools are algorithmically sophisticated, highly automated and mathematically abstract. Their utility motivates the main theme of this paper, which is to describe real applications of innovative transdisciplinary models and analyses in an effort to help move the research community closer toward identifying the causal mechanisms and associated environmental contexts underlying health disparities. The public health exposome is used as a contemporary focus for addressing the complex nature of this subject. PMID:25310540

  3. A scalable neuristor built with Mott memristors

    NASA Astrophysics Data System (ADS)

    Pickett, Matthew D.; Medeiros-Ribeiro, Gilberto; Williams, R. Stanley

    2013-02-01

    The Hodgkin-Huxley model for action potential generation in biological axons is central for understanding the computational capability of the nervous system and emulating its functionality. Owing to the historical success of silicon complementary metal-oxide-semiconductors, spike-based computing is primarily confined to software simulations and specialized analogue metal-oxide-semiconductor field-effect transistor circuits. However, there is interest in constructing physical systems that emulate biological functionality more directly, with the goal of improving efficiency and scale. The neuristor was proposed as an electronic device with properties similar to the Hodgkin-Huxley axon, but previous implementations were not scalable. Here we demonstrate a neuristor built using two nanoscale Mott memristors, dynamical devices that exhibit transient memory and negative differential resistance arising from an insulating-to-conducting phase transition driven by Joule heating. This neuristor exhibits the important neural functions of all-or-nothing spiking with signal gain and diverse periodic spiking, using materials and structures that are amenable to extremely high-density integration with or without silicon transistors.

  4. Towards scalable electronic structure calculations for alloys

    SciTech Connect

    Stocks, G.M.; Nicholson, D.M.C.; Wang, Y.; Shelton, W.A.; Szotek, Z.; Temmermann, W.M.

    1994-06-01

    A new approach to calculating the properties of large systems within the local density approximation (LDA), offering the promise of scalability on massively parallel supercomputers, is outlined. The electronic structure problem is formulated in real space using multiple scattering theory. The standard LDA algorithm is divided into two parts: first, finding the self-consistent field (SCF) electron density; second, calculating the energy corresponding to the SCF density. We show, at least for metals and alloys, that the former problem is easily solved using real-space methods. For the latter we take advantage of the variational properties of a generalized Harris-Foulkes free energy functional, a new conduction-band Fermi function, and a fictitious finite electron temperature that again allow us to use real-space methods. Using a compute-node → atom equivalence, the new method is naturally highly parallel and leads to O(N) scaling, where N is the number of atoms making up the system. We show scaling data gathered on the Intel XP/S 35 Paragon for systems of up to 512 atoms per simulation cell. To demonstrate that we can achieve metallurgical precision, we apply the new method to calculating the energies of disordered Cu0.5Zn0.5 alloys using a large random sample.

  5. Visual agnosia.

    PubMed

    Álvarez, R; Masjuan, J

    2016-03-01

    Visual agnosia is defined as an impairment of object recognition in the absence of the visual acuity loss or cognitive dysfunction that would otherwise explain the impairment. The condition is caused by lesions in the visual association cortex that spare the primary visual cortex. There are two main pathways that process visual information: the ventral stream, tasked with object recognition, and the dorsal stream, in charge of locating objects in space. Visual agnosia can therefore be divided into two major groups depending on which of the two streams is damaged. The aim of this article is to conduct a narrative review of the various visual agnosia syndromes, including recent developments in a number of them.

  6. Memory Scalability and Efficiency Analysis of Parallel Codes

    SciTech Connect

    Janjusic, Tommy; Kartsaklis, Christos

    2015-01-01

    Memory scalability is an enduring problem and bottleneck that plagues many parallel codes. Parallel codes designed for high performance systems are typically developed over the span of several, and in some instances 10+, years. As a result, optimization practices that were appropriate for earlier systems may no longer be valid and thus require careful reconsideration. Specifically, parallel codes whose memory footprint is a function of their scalability must be carefully considered for future exa-scale systems. In this paper we present a methodology and tool to study the memory scalability of parallel codes. Using our methodology we evaluate an application's memory footprint as a function of scalability, which we term memory efficiency, and describe our results. In particular, using our in-house tools we can pinpoint the specific application components which contribute to the application's overall memory footprint (application data structures, libraries, etc.).

  7. Improving the Performance Scalability of the Community Atmosphere Model

    SciTech Connect

    Mirin, Arthur; Worley, Patrick H

    2012-01-01

    The Community Atmosphere Model (CAM), which serves as the atmosphere component of the Community Climate System Model (CCSM), is the most computationally expensive CCSM component in typical configurations. On current and next-generation leadership class computing systems, the performance of CAM is tied to its parallel scalability. Improving performance scalability in CAM has been a challenge, due largely to algorithmic restrictions necessitated by the polar singularities in its latitude-longitude computational grid. Nevertheless, through a combination of exploiting additional parallelism, implementing improved communication protocols, and eliminating scalability bottlenecks, we have been able to more than double the maximum throughput rate of CAM on production platforms. We describe these improvements and present results on the Cray XT5 and IBM BG/P. The approaches taken are not specific to CAM and may inform similar scalability enhancement activities for other codes.

  8. TriG: Next Generation Scalable Spaceborne GNSS Receiver

    NASA Technical Reports Server (NTRS)

    Tien, Jeffrey Y.; Okihiro, Brian Bachman; Esterhuizen, Stephan X.; Franklin, Garth W.; Meehan, Thomas K.; Munson, Timothy N.; Robison, David E.; Turbiner, Dmitry; Young, Lawrence E.

    2012-01-01

    TriG is the next-generation NASA scalable space GNSS Science Receiver. It will track all GNSS and additional signals (i.e., GPS, GLONASS, Galileo, Compass and DORIS). Its scalable 3U architecture is fully software- and firmware-reconfigurable, enabling optimization to meet specific mission requirements. The TriG GNSS EM is currently undergoing testing and is expected to complete full performance testing later this year.

  9. Visual exploration of nasal airflow.

    PubMed

    Zachow, Stefan; Muigg, Philipp; Hildebrandt, Thomas; Doleisch, Helmut; Hege, Hans-Christian

    2009-01-01

    Rhinologists are often faced with the challenge of assessing nasal breathing from a functional point of view to derive effective therapeutic interventions. While the complex nasal anatomy can be revealed by visual inspection and medical imaging, only vague information is available regarding the nasal airflow itself: Rhinomanometry delivers rather unspecific integral information on the pressure gradient as well as on total flow and nasal flow resistance. In this article we demonstrate how the understanding of physiological nasal breathing can be improved by simulating and visually analyzing nasal airflow, based on an anatomically correct model of the upper human respiratory tract. In particular we demonstrate how various Information Visualization (InfoVis) techniques, such as a highly scalable implementation of parallel coordinates, time series visualizations, as well as unstructured grid multi-volume rendering, all integrated within a multiple linked views framework, can be utilized to gain a deeper understanding of nasal breathing. Evaluation is accomplished by visual exploration of spatio-temporal airflow characteristics that include not only information on flow features but also on accompanying quantities such as temperature and humidity. To our knowledge, this is the first in-depth visual exploration of the physiological function of the nose over several simulated breathing cycles under consideration of a complete model of the nasal airways, realistic boundary conditions, and all physically relevant time-varying quantities. PMID:19834215

  10. Visualization methods for high-resolution, transient, 3-D, finite element situations

    SciTech Connect

    Christon, M.A.

    1995-01-10

    Scientific visualization is the process whereby numerical data are transformed into a visual form to augment the process of discovery and understanding. Visualizing the data generated by large-scale, transient, three-dimensional finite element simulations poses many challenges due to geometric complexity, the presence of multiple materials and multiple element types, and the inherent unstructured nature of the meshes. In this paper, the direct use of finite element data structures, nodal assembly procedures, and element interpolants for volumetric adaptive surface extraction, surface rendering, vector grids and particle tracing is discussed. A brief description of a "direct-to-disk" animation system is presented, and the use of isosurfaces, vector plots, cutting planes, reference surfaces and particle tracing is then demonstrated in case studies of transient incompressible viscous flow and acoustic fluid-structure interaction simulations. An overview of the implications of massively parallel computers on visualization is presented to highlight the issues in parallel visualization methodology, algorithms, data locality and the ultimate requirements for temporary and archival data storage and network bandwidth.
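
    For regular grids, the adaptive surface extraction step has a familiar analogue in marching cubes; a small sketch on a synthetic field (the paper's method works directly on unstructured finite element meshes via element interpolants, which scikit-image does not handle):

```python
import numpy as np
from skimage import measure

# Synthetic scalar field on a regular grid; the 0.7-isosurface of the
# radial distance field is a sphere.
x, y, z = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
field = np.sqrt(x**2 + y**2 + z**2)
verts, faces, normals, values = measure.marching_cubes(field, level=0.7)
print(verts.shape, faces.shape)   # triangle mesh of the extracted isosurface
```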

  11. Responsive, Flexible and Scalable Broader Impacts (Invited)

    NASA Astrophysics Data System (ADS)

    Decharon, A.; Companion, C.; Steinman, M.

    2010-12-01

    In many educator professional development workshops, scientists present content in a slideshow-type format and field questions afterwards. Drawbacks of this approach include: inability to begin the lecture with content that is responsive to audience needs; lack of flexible access to specific material within the linear presentation; and “Q&A” sessions are not easily scalable to broader audiences. Often this type of traditional interaction provides little direct benefit to the scientists. The Centers for Ocean Sciences Education Excellence - Ocean Systems (COSEE-OS) applies the technique of concept mapping with demonstrated effectiveness in helping scientists and educators “get on the same page” (deCharon et al., 2009). A key aspect is scientist professional development geared towards improving face-to-face and online communication with non-scientists. COSEE-OS promotes scientist-educator collaboration, tests the application of scientist-educator maps in new contexts through webinars, and is piloting the expansion of maps as long-lived resources for the broader community. Collaboration - COSEE-OS has developed and tested a workshop model bringing scientists and educators together in a peer-oriented process, often clarifying common misconceptions. Scientist-educator teams develop online concept maps that are hyperlinked to “assets” (i.e., images, videos, news) and are responsive to the needs of non-scientist audiences. In workshop evaluations, 91% of educators said that the process of concept mapping helped them think through science topics and 89% said that concept mapping helped build a bridge of communication with scientists (n=53). Application - After developing a concept map, with COSEE-OS staff assistance, scientists are invited to give webinar presentations that include live “Q&A” sessions. The webinars extend the reach of scientist-created concept maps to new contexts, both geographically and topically (e.g., oil spill), with a relatively small

  12. Highly Scalable Matching Pursuit Signal Decomposition Algorithm

    NASA Technical Reports Server (NTRS)

    Christensen, Daniel; Das, Santanu; Srivastava, Ashok N.

    2009-01-01

    Matching Pursuit Decomposition (MPD) is a powerful iterative algorithm for signal decomposition and feature extraction. MPD decomposes any signal into linear combinations of its dictionary elements, or atoms. A best-fit atom from an arbitrarily defined dictionary is determined through cross-correlation. The selected atom is subtracted from the signal and this procedure is repeated on the residual in subsequent iterations until a stopping criterion is met. The reconstructed signal reveals the waveform structure of the original signal. However, a sufficiently large dictionary is required for an accurate reconstruction; this in turn increases the computational burden of the algorithm, thus limiting its applicability and level of adoption. The purpose of this research is to improve the scalability and performance of the classical MPD algorithm. Correlation thresholds were defined to prune insignificant atoms from the dictionary. The Coarse-Fine Grids and Multiple Atom Extraction techniques were proposed to decrease the computational burden of the algorithm. The Coarse-Fine Grids method enabled the approximation and refinement of the parameters for the best-fit atom. The ability to extract multiple atoms within a single iteration enhanced the effectiveness and efficiency of each iteration. These improvements were implemented to produce an improved Matching Pursuit Decomposition algorithm entitled MPD++. Disparate signal decomposition applications may require a particular emphasis on accuracy or computational efficiency. The prominence of the key signal features required for proper signal classification dictates the level of accuracy necessary in the decomposition. The MPD++ algorithm may be easily adapted to accommodate the imposed requirements. Certain feature extraction applications may require rapid signal decomposition. The full potential of MPD++ may be utilized to produce incredible performance gains while extracting only slightly less energy than the
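
    The classical MPD loop being accelerated is compact; a minimal sketch assuming a unit-norm dictionary (MPD++'s pruning thresholds, coarse-fine grids and multi-atom extraction are omitted for clarity):

```python
import numpy as np

def matching_pursuit(x, D, max_atoms=50, corr_threshold=1e-3):
    """Classical matching pursuit: pick the dictionary atom with the
    largest cross-correlation to the residual, subtract its contribution,
    and repeat until a stopping criterion is met. Columns of D are
    assumed to have unit norm."""
    residual = x.astype(float).copy()
    atoms, coefs = [], []
    for _ in range(max_atoms):
        corr = D.T @ residual              # cross-correlation with all atoms
        k = int(np.argmax(np.abs(corr)))   # best-fit atom
        if np.abs(corr[k]) < corr_threshold:
            break                          # stopping criterion
        atoms.append(k)
        coefs.append(corr[k])
        residual -= corr[k] * D[:, k]      # subtract the selected atom
    return atoms, coefs, residual
```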

  13. Statistical Scalability Analysis of Communication Operations in Distributed Applications

    SciTech Connect

    Vetter, J S; McCracken, M O

    2001-02-27

    Current trends in high performance computing suggest that users will soon have widespread access to clusters of multiprocessors with hundreds, if not thousands, of processors. This unprecedented degree of parallelism will undoubtedly expose scalability limitations in existing applications, where scalability is the ability of a parallel algorithm on a parallel architecture to effectively utilize an increasing number of processors. Users will need precise and automated techniques for detecting the cause of limited scalability. This paper addresses this dilemma. First, we argue that users face numerous challenges in understanding application scalability: managing substantial amounts of experiment data, extracting useful trends from this data, and reconciling performance information with their application's design. Second, we propose a solution to automate this data analysis problem by applying fundamental statistical techniques to scalability experiment data. Finally, we evaluate our operational prototype on several applications, and show that statistical techniques offer an effective strategy for assessing application scalability. In particular, we find that non-parametric correlation of the number of tasks to the ratio of the time for individual communication operations to overall communication time provides a reliable measure for identifying communication operations that scale poorly.
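
    The proposed statistical approach is simple to reproduce in miniature; a sketch using rank correlation between task count and an operation's share of total communication time (the numbers below are hypothetical):

```python
import numpy as np
from scipy.stats import spearmanr

# Correlate task count with one operation's share of communication time;
# a strong positive rank correlation flags an operation that scales poorly.
tasks = np.array([16, 32, 64, 128, 256])
allreduce_share = np.array([0.10, 0.14, 0.22, 0.35, 0.51])  # made-up data
rho, p = spearmanr(tasks, allreduce_share)
print(f"rho={rho:.2f}, p={p:.3f}")  # rho near +1 -> poor-scaling suspect
```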

  14. Scalable tensor factorizations with missing data.

    SciTech Connect

    Morup, Morten; Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson

    2010-04-01

    The problem of missing data is ubiquitous in domains such as biomedical signal processing, network traffic analysis, bibliometrics, social network analysis, chemometrics, computer vision, and communication networks, all domains in which data collection is subject to occasional errors. Moreover, these data sets can be quite large and have more than two axes of variation, e.g., sender, receiver, time. Many applications in those domains aim to capture the underlying latent structure of the data; in other words, they need to factorize data sets with missing entries. If we cannot address the problem of missing data, many important data sets will be discarded or improperly analyzed. Therefore, we need a robust and scalable approach for factorizing multi-way arrays (i.e., tensors) in the presence of missing data. We focus on one of the most well-known tensor factorizations, CANDECOMP/PARAFAC (CP), and formulate the CP model as a weighted least squares problem that models only the known entries. We develop an algorithm called CP-WOPT (CP Weighted OPTimization) using a first-order optimization approach to solve the weighted least squares problem. Based on extensive numerical experiments, our algorithm is shown to successfully factor tensors with noise and up to 70% missing data. Moreover, our approach is significantly faster than the leading alternative and scales to larger problems. To show the real-world usefulness of CP-WOPT, we illustrate its applicability on a novel EEG (electroencephalogram) application where missing data is frequently encountered due to disconnections of electrodes.
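
    The weighted least squares formulation is easy to state in code; a sketch of the objective and gradients for a third-order tensor in dense numpy (illustrative only; a scalable implementation would exploit the sparsity of the known-entry pattern):

```python
import numpy as np

def cp_wopt_objective(X, W, A, B, C):
    """Objective and gradients of the weighted least squares CP model:
    f = 0.5 * || W * (X - [[A, B, C]]) ||_F^2, with W = 1 on known
    entries and 0 on missing ones. Suitable for any first-order
    optimizer."""
    Xhat = np.einsum('ir,jr,kr->ijk', A, B, C)   # rank-R CP reconstruction
    R = W * (X - Xhat)                           # residual on known entries
    f = 0.5 * np.sum(R ** 2)
    gA = -np.einsum('ijk,jr,kr->ir', R, B, C)
    gB = -np.einsum('ijk,ir,kr->jr', R, A, C)
    gC = -np.einsum('ijk,ir,jr->kr', R, A, B)
    return f, (gA, gB, gC)
```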

  15. Physical principles for scalable neural recording.

    PubMed

    Marblestone, Adam H; Zamft, Bradley M; Maguire, Yael G; Shapiro, Mikhail G; Cybulski, Thaddeus R; Glaser, Joshua I; Amodei, Dario; Stranges, P Benjamin; Kalhor, Reza; Dalrymple, David A; Seo, Dongjin; Alon, Elad; Maharbiz, Michel M; Carmena, Jose M; Rabaey, Jan M; Boyden, Edward S; Church, George M; Kording, Konrad P

    2013-01-01

    Simultaneously measuring the activities of all neurons in a mammalian brain at millisecond resolution is a challenge beyond the limits of existing techniques in neuroscience. Entirely new approaches may be required, motivating an analysis of the fundamental physical constraints on the problem. We outline the physical principles governing brain activity mapping using optical, electrical, magnetic resonance, and molecular modalities of neural recording. Focusing on the mouse brain, we analyze the scalability of each method, concentrating on the limitations imposed by spatiotemporal resolution, energy dissipation, and volume displacement. Based on this analysis, all existing approaches require orders of magnitude improvement in key parameters. Electrical recording is limited by the low multiplexing capacity of electrodes and their lack of intrinsic spatial resolution, optical methods are constrained by the scattering of visible light in brain tissue, magnetic resonance is hindered by the diffusion and relaxation timescales of water protons, and the implementation of molecular recording is complicated by the stochastic kinetics of enzymes. Understanding the physical limits of brain activity mapping may provide insight into opportunities for novel solutions. For example, unconventional methods for delivering electrodes may enable unprecedented numbers of recording sites, embedded optical devices could allow optical detectors to be placed within a few scattering lengths of the measured neurons, and new classes of molecularly engineered sensors might obviate cumbersome hardware architectures. We also study the physics of powering and communicating with microscale devices embedded in brain tissue and find that, while radio-frequency electromagnetic data transmission suffers from a severe power-bandwidth tradeoff, communication via infrared light or ultrasound may allow high data rates due to the possibility of spatial multiplexing. The use of embedded local recording and

  16. Parallel Heuristics for Scalable Community Detection

    SciTech Connect

    Lu, Howard; Kalyanaraman, Anantharaman; Halappanavar, Mahantesh; Choudhury, Sutanay

    2014-05-17

    Community detection has become a fundamental operation in numerous graph-theoretic applications. It is used to reveal natural divisions that exist within real world networks without imposing prior size or cardinality constraints on the set of communities. Despite its potential for application, there is only limited support for community detection on large-scale parallel computers, largely owing to the irregular and inherently sequential nature of the underlying heuristics. In this paper, we present parallelization heuristics for fast community detection using the Louvain method as the serial template. The Louvain method is an iterative heuristic for modularity optimization. Originally developed by Blondel et al. in 2008, the method has become increasingly popular owing to its ability to detect high modularity community partitions in a fast and memory-efficient manner. However, the method is also inherently sequential, thereby limiting its scalability to problems that can be solved on desktops. Here, we observe certain key properties of this method that present challenges for its parallelization, and consequently propose multiple heuristics that are designed to break the sequential barrier. Our heuristics are agnostic to the underlying parallel architecture. For evaluation purposes, we implemented our heuristics on shared memory (OpenMP) and distributed memory (MapReduce-MPI) machines, and tested them over real world graphs derived from multiple application domains (internet, biological, natural language processing). Experimental results demonstrate the ability of our heuristics to converge to high modularity solutions comparable to those output by the serial algorithm in nearly the same number of iterations, while also drastically reducing time to solution.
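
    The quantity the Louvain heuristic greedily optimizes is Newman-Girvan modularity; a small dense-matrix sketch (illustrative only; real implementations, serial or parallel, work on sparse adjacency structures):

```python
import numpy as np

def modularity(A, communities):
    """Newman-Girvan modularity Q of a partition. A is a symmetric
    adjacency matrix; communities[i] is node i's community label."""
    m2 = A.sum()                        # 2m (each edge counted twice)
    k = A.sum(axis=1)                   # node degrees
    labels = np.asarray(communities)
    same = labels[:, None] == labels[None, :]
    return ((A - np.outer(k, k) / m2) * same).sum() / m2

# Two triangles joined by one edge, partitioned into the two triangles.
A = np.zeros((6, 6))
for i, j in [(0,1),(1,2),(0,2),(3,4),(4,5),(3,5),(2,3)]:
    A[i, j] = A[j, i] = 1
print(modularity(A, [0, 0, 0, 1, 1, 1]))   # ~0.357: well above random
```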

  17. Myria: Scalable Analytics as a Service

    NASA Astrophysics Data System (ADS)

    Howe, B.; Halperin, D.; Whitaker, A.

    2014-12-01

    At the UW eScience Institute, we're working to empower non-experts, especially in the sciences, to write and use data-parallel algorithms. To this end, we are building Myria, a web-based platform for scalable analytics and data-parallel programming. Myria's internal model of computation is the relational algebra extended with iteration, such that every program is inherently data-parallel, just as every query in a database is inherently data-parallel. But unlike databases, iteration is a first-class concept, allowing us to express machine learning tasks, graph traversal tasks, and more. Programs can be expressed in a number of languages and can be executed on a number of execution environments, but we emphasize a particular language called MyriaL that supports both imperative and declarative styles and a particular execution engine called MyriaX that uses an in-memory column-oriented representation and asynchronous iteration. We deliver Myria over the web as a service, providing an editor, performance analysis tools, and catalog browsing features in a single environment. We find that this web-based "delivery vector" is critical in reaching non-experts: they are insulated from the irrelevant technical work associated with installation, configuration, and resource management. The MyriaX backend, one of several execution runtimes we support, is a main-memory, column-oriented, RDBMS-on-the-worker system that supports cyclic data flows as a first-class citizen and has been shown to outperform competitive systems on 100-machine cluster sizes. I will describe the Myria system, give a demo, and present some new results in large-scale oceanographic microbiology.

  18. Physical principles for scalable neural recording

    PubMed Central

    Zamft, Bradley M.; Maguire, Yael G.; Shapiro, Mikhail G.; Cybulski, Thaddeus R.; Glaser, Joshua I.; Amodei, Dario; Stranges, P. Benjamin; Kalhor, Reza; Dalrymple, David A.; Seo, Dongjin; Alon, Elad; Maharbiz, Michel M.; Carmena, Jose M.; Rabaey, Jan M.; Boyden, Edward S.; Church, George M.; Kording, Konrad P.

    2013-01-01

    Simultaneously measuring the activities of all neurons in a mammalian brain at millisecond resolution is a challenge beyond the limits of existing techniques in neuroscience. Entirely new approaches may be required, motivating an analysis of the fundamental physical constraints on the problem. We outline the physical principles governing brain activity mapping using optical, electrical, magnetic resonance, and molecular modalities of neural recording. Focusing on the mouse brain, we analyze the scalability of each method, concentrating on the limitations imposed by spatiotemporal resolution, energy dissipation, and volume displacement. Based on this analysis, all existing approaches require orders of magnitude improvement in key parameters. Electrical recording is limited by the low multiplexing capacity of electrodes and their lack of intrinsic spatial resolution, optical methods are constrained by the scattering of visible light in brain tissue, magnetic resonance is hindered by the diffusion and relaxation timescales of water protons, and the implementation of molecular recording is complicated by the stochastic kinetics of enzymes. Understanding the physical limits of brain activity mapping may provide insight into opportunities for novel solutions. For example, unconventional methods for delivering electrodes may enable unprecedented numbers of recording sites, embedded optical devices could allow optical detectors to be placed within a few scattering lengths of the measured neurons, and new classes of molecularly engineered sensors might obviate cumbersome hardware architectures. We also study the physics of powering and communicating with microscale devices embedded in brain tissue and find that, while radio-frequency electromagnetic data transmission suffers from a severe power–bandwidth tradeoff, communication via infrared light or ultrasound may allow high data rates due to the possibility of spatial multiplexing. The use of embedded local recording and
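
    A back-of-the-envelope calculation shows the scale of the data-rate problem behind the power-bandwidth tradeoff mentioned above; every number below is an order-of-magnitude assumption for the sketch, not a figure from the paper:

        # Illustrative order-of-magnitude estimate; all parameters are assumptions.
        n_neurons = 75e6           # roughly the neuron count of a mouse brain
        sample_rate = 1e3          # 1 kHz, i.e. millisecond resolution
        bits_per_sample = 10       # assumed ADC precision

        data_rate = n_neurons * sample_rate * bits_per_sample   # bits per second
        print(f"aggregate data rate ~ {data_rate / 1e12:.2f} Tbit/s")
        # ~0.75 Tbit/s, far beyond a single RF telemetry link -- which is why the
        # abstract points to spatially multiplexed infrared or ultrasonic channels.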

  19. Scalable Machine Learning for Massive Astronomical Datasets

    NASA Astrophysics Data System (ADS)

    Ball, Nicholas M.; Astronomy Data Centre, Canadian

    2014-01-01

    We present the ability to perform data mining and machine learning operations on a catalog of half a billion astronomical objects. This is the result of the combination of robust, highly accurate machine learning algorithms with linear scalability that renders the applications of these algorithms to massive astronomical data tractable. We demonstrate the core algorithms: kernel density estimation, K-means clustering, linear regression, nearest neighbors, random forest and gradient-boosted decision tree, singular value decomposition, support vector machine, and two-point correlation function. Each of these is relevant for astronomical applications such as finding novel astrophysical objects, characterizing artifacts in data, object classification (including for rare objects), object distances, finding the important features describing objects, density estimation of distributions, probabilistic quantities, and exploring the unknown structure of new data. The software, Skytree Server, runs on any UNIX-based machine, a virtual machine, or cloud-based and distributed systems including Hadoop. We have integrated it on the cloud computing system of the Canadian Astronomy Data Centre, the Canadian Advanced Network for Astronomical Research (CANFAR), creating the world's first cloud computing data mining system for astronomy. We demonstrate results showing the scaling of each of our major algorithms on large astronomical datasets, including the full 470,992,970 objects of the 2 Micron All-Sky Survey (2MASS) Point Source Catalog. We demonstrate the ability to find outliers in the full 2MASS dataset utilizing multiple methods, e.g., nearest neighbors and the local outlier factor. 2MASS is used as a proof-of-concept dataset due to its convenience and availability. These results are of interest to any astronomical project with large and/or complex datasets that wishes to extract the full scientific value from its data.
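
    As a flavor of two of the operations listed above, here is an illustrative scikit-learn sketch of density-based outlier finding (via kernel density estimation) and K-means clustering on synthetic stand-in data; it is a sketch of the named techniques, not the Skytree Server API:

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.neighbors import KernelDensity

        rng = np.random.default_rng(0)
        colors = rng.normal(size=(10_000, 2))        # stand-in for color-color measurements

        kde = KernelDensity(bandwidth=0.2).fit(colors)
        log_density = kde.score_samples(colors)      # log-density at every object
        outliers = colors[log_density < np.percentile(log_density, 0.1)]

        labels = KMeans(n_clusters=5, n_init=10).fit_predict(colors)
        print(len(outliers), "low-density outliers;", np.bincount(labels), "cluster sizes")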

  20. Scalable Machine Learning for Massive Astronomical Datasets

    NASA Astrophysics Data System (ADS)

    Ball, Nicholas M.; Gray, A.

    2014-04-01

    We present the ability to perform data mining and machine learning operations on a catalog of half a billion astronomical objects. This is the result of the combination of robust, highly accurate machine learning algorithms with linear scalability that renders the applications of these algorithms to massive astronomical data tractable. We demonstrate the core algorithms: kernel density estimation, K-means clustering, linear regression, nearest neighbors, random forest and gradient-boosted decision tree, singular value decomposition, support vector machine, and two-point correlation function. Each of these is relevant for astronomical applications such as finding novel astrophysical objects, characterizing artifacts in data, object classification (including for rare objects), object distances, finding the important features describing objects, density estimation of distributions, probabilistic quantities, and exploring the unknown structure of new data. The software, Skytree Server, runs on any UNIX-based machine, a virtual machine, or cloud-based and distributed systems including Hadoop. We have integrated it on the cloud computing system of the Canadian Astronomy Data Centre, the Canadian Advanced Network for Astronomical Research (CANFAR), creating the world's first cloud computing data mining system for astronomy. We demonstrate results showing the scaling of each of our major algorithms on large astronomical datasets, including the full 470,992,970 objects of the 2 Micron All-Sky Survey (2MASS) Point Source Catalog. We demonstrate the ability to find outliers in the full 2MASS dataset utilizing multiple methods, e.g., nearest neighbors. This is likely of particular interest to the radio astronomy community given, for example, that survey projects contain groups dedicated to this topic. 2MASS is used as a proof-of-concept dataset due to its convenience and availability. These results are of interest to any astronomical project with large and/or complex

  1. Scalability of Localized Arc Filament Plasma Actuators

    NASA Technical Reports Server (NTRS)

    Brown, Clifford A.

    2008-01-01

    Temporal flow control of a jet has been widely studied in the past to enhance jet mixing or reduce jet noise. Most of this research, however, has been done using small-diameter, low-Reynolds-number jets that often have little resemblance to the much larger jets common in real-world applications, because the flow actuators available lacked either the power or bandwidth to sufficiently impact these larger, higher-energy jets. The Localized Arc Filament Plasma Actuators (LAFPA), developed at the Ohio State University (OSU), have demonstrated the ability to impact a small high-speed jet in experiments conducted at OSU and the power to perturb a larger, high-Reynolds-number jet in experiments conducted at the NASA Glenn Research Center. However, the response measured in the large-scale experiments was significantly reduced for the same number of actuators compared to the jet response found in the small-scale experiments. A computational study has been initiated to simulate the LAFPA system with additional actuators on a large-scale jet to determine the number of actuators required to achieve the same desired response for a given jet diameter. Central to this computational study is a model for the LAFPA that both accurately represents the physics of the actuator and can be implemented into a computational fluid dynamics solver. One possible model, based on pressure waves created by the rapid localized heating that occurs at the actuator, is investigated using simplified axisymmetric simulations. The results of these simulations will be used to determine the validity of the model before more realistic and time-consuming three-dimensional simulations are conducted to ultimately determine the scalability of the LAFPA system.

  2. A Robust Scalable Transportation System Concept

    NASA Technical Reports Server (NTRS)

    Hahn, Andrew; DeLaurentis, Daniel

    2006-01-01

    This report documents the 2005 Revolutionary System Concept for Aeronautics (RSCA) study entitled "A Robust, Scalable Transportation System Concept". The objective of the study was to generate, at a high level of abstraction, characteristics of a new concept for the National Airspace System, or the new NAS, under which transportation goals such as increased throughput, delay reduction, and improved robustness could be realized. Since such an objective can be overwhelmingly complex if pursued at the lowest levels of detail, a System-of-Systems (SoS) approach was instead adopted to model alternative air transportation architectures at a high level. The SoS approach allows the consideration of not only the technical aspects of the NAS, but also incorporates policy, socio-economic, and alternative transportation system considerations into one architecture. While the representations of the individual systems are basic, the higher-level approach allows for ways to optimize the SoS at the network level, determining the best topology (i.e., configuration of nodes and links). The final product (concept) is a set of rules of behavior and network structure that not only satisfies national transportation goals, but represents the high-impact rules that accomplish those goals by getting the agents to "do the right thing" naturally. The novel combination of agent-based modeling and network theory provides the core analysis methodology in the System-of-Systems approach. Our method of approach is non-deterministic, which means, fundamentally, that it asks and answers different questions than deterministic models. The non-deterministic method is necessary primarily due to our marriage of human systems with technological ones in a partially unknown set of future worlds. Our goal is to understand and simulate how the SoS, human and technological components combined, evolves.

  3. Memory-Scalable GPU Spatial Hierarchy Construction.

    PubMed

    Qiming Hou; Xin Sun; Kun Zhou; Lauterbach, C; Manocha, D

    2011-04-01

    Recent GPU algorithms for constructing spatial hierarchies have achieved promising performance for moderately complex models by using the breadth-first search (BFS) construction order. While being able to exploit the massive parallelism on the GPU, the BFS order also consumes excessive GPU memory, which becomes a serious issue for interactive applications involving very complex models with more than a few million triangles. In this paper, we propose to use the partial breadth-first search (PBFS) construction order to control memory consumption while maximizing performance. We apply the PBFS order to two hierarchy construction algorithms. The first algorithm is for kd-trees that automatically balances between the level of parallelism and intermediate memory usage. With PBFS, peak memory consumption during construction can be efficiently controlled without costly CPU-GPU data transfer. We also develop memory allocation strategies to effectively limit memory fragmentation. The resulting algorithm scales well with GPU memory and constructs kd-trees of models with millions of triangles at interactive rates on GPUs with 1 GB memory. Compared with existing algorithms, our algorithm is an order of magnitude more scalable for a given GPU memory bound. The second algorithm is for out-of-core bounding volume hierarchy (BVH) construction for very large scenes based on the PBFS construction order. At each iteration, all constructed nodes are dumped to the CPU memory, and the GPU memory is freed for the next iteration's use. In this way, the algorithm is able to build trees that are too large to be stored in the GPU memory. Experiments show that our algorithm can construct BVHs for scenes with up to 20 M triangles, several times larger than previous GPU algorithms.
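
    The PBFS idea can be sketched independently of the GPU details: expand nodes in breadth-first order, but cap the active frontier so intermediate memory stays within a budget, parking overflow nodes in a deferred queue until a later pass. A CPU-side Python sketch under assumed interfaces:

        from collections import deque

        def build_hierarchy(root, split, max_frontier):
            """split(node) -> list of children ([] for a leaf)."""
            frontier, deferred, done = deque([root]), deque(), []
            while frontier or deferred:
                if not frontier:                     # previous pass done: admit deferred work
                    while deferred and len(frontier) < max_frontier:
                        frontier.append(deferred.popleft())
                node = frontier.popleft()
                done.append(node)
                for child in split(node):            # one parallel pass in the GPU version
                    # children enter the frontier only while it fits the memory budget
                    (frontier if len(frontier) < max_frontier else deferred).append(child)
            return done

        def split(node):                             # toy splitter: a binary tree of depth 3
            depth, ident = node
            return [(depth + 1, 2 * ident + i) for i in range(2)] if depth < 3 else []

        print(len(build_hierarchy((0, 0), split, max_frontier=4)))   # 15 nodes built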

  4. BactoGeNIE: A large-scale comparative genome visualization for big displays

    SciTech Connect

    Aurisano, Jillian; Reda, Khairi; Johnson, Andrew; Marai, Elisabeta G.; Leigh, Jason

    2015-08-13

    The volume of complete bacterial genome sequence data available to comparative genomics researchers is rapidly increasing. However, visualizations in comparative genomics--which aim to enable analysis tasks across collections of genomes--suffer from visual scalability issues. While large, multi-tiled and high-resolution displays have the potential to address scalability issues, new approaches are needed to take advantage of such environments, in order to enable the effective visual analysis of large genomics datasets. In this paper, we present Bacterial Gene Neighborhood Investigation Environment, or BactoGeNIE, a novel and visually scalable design for comparative gene neighborhood analysis on large display environments. We evaluate BactoGeNIE through a case study on close to 700 draft Escherichia coli genomes, and present lessons learned from our design process. In conclusion, BactoGeNIE accommodates comparative tasks over substantially larger collections of neighborhoods than existing tools and explicitly addresses visual scalability. Given current trends in data generation, scalable designs of this type may inform visualization design for large-scale comparative research problems in genomics.

  5. BactoGeNIE: a large-scale comparative genome visualization for big displays

    PubMed Central

    2015-01-01

    Background: The volume of complete bacterial genome sequence data available to comparative genomics researchers is rapidly increasing. However, visualizations in comparative genomics--which aim to enable analysis tasks across collections of genomes--suffer from visual scalability issues. While large, multi-tiled and high-resolution displays have the potential to address scalability issues, new approaches are needed to take advantage of such environments, in order to enable the effective visual analysis of large genomics datasets. Results: In this paper, we present Bacterial Gene Neighborhood Investigation Environment, or BactoGeNIE, a novel and visually scalable design for comparative gene neighborhood analysis on large display environments. We evaluate BactoGeNIE through a case study on close to 700 draft Escherichia coli genomes, and present lessons learned from our design process. Conclusions: BactoGeNIE accommodates comparative tasks over substantially larger collections of neighborhoods than existing tools and explicitly addresses visual scalability. Given current trends in data generation, scalable designs of this type may inform visualization design for large-scale comparative research problems in genomics. PMID:26329021

  6. BactoGeNIE: A large-scale comparative genome visualization for big displays

    DOE PAGESBeta

    Aurisano, Jillian; Reda, Khairi; Johnson, Andrew; Marai, Elisabeta G.; Leigh, Jason

    2015-08-13

    The volume of complete bacterial genome sequence data available to comparative genomics researchers is rapidly increasing. However, visualizations in comparative genomics--which aim to enable analysis tasks across collections of genomes--suffer from visual scalability issues. While large, multi-tiled and high-resolution displays have the potential to address scalability issues, new approaches are needed to take advantage of such environments, in order to enable the effective visual analysis of large genomics datasets. In this paper, we present Bacterial Gene Neighborhood Investigation Environment, or BactoGeNIE, a novel and visually scalable design for comparative gene neighborhood analysis on large display environments. We evaluate BactoGeNIE through a case study on close to 700 draft Escherichia coli genomes, and present lessons learned from our design process. In conclusion, BactoGeNIE accommodates comparative tasks over substantially larger collections of neighborhoods than existing tools and explicitly addresses visual scalability. Given current trends in data generation, scalable designs of this type may inform visualization design for large-scale comparative research problems in genomics.

  7. Freeprocessing: Transparent in situ visualization via data interception

    PubMed Central

    Fogal, Thomas; Proch, Fabian; Schiewe, Alexander; Hasemann, Olaf; Kempf, Andreas; Krüger, Jens

    2014-01-01

    In situ visualization has become a popular method for avoiding the slowest component of many visualization pipelines: reading data from disk. Most previous in situ work has focused on achieving visualization scalability on par with simulation codes, or on the data movement concerns that become prevalent at extreme scales. In this work, we consider in situ analysis with respect to ease of use and programmability. We describe an abstraction that opens up new applications for in situ visualization, and demonstrate that this abstraction and an expanded set of use cases can be realized without a performance cost. PMID:25995996

  8. A scalable climate health justice assessment model.

    PubMed

    McDonald, Yolanda J; Grineski, Sara E; Collins, Timothy W; Kim, Young-An

    2015-05-01

    This paper introduces a scalable "climate health justice" model for assessing and projecting incidence, treatment costs, and sociospatial disparities for diseases with well-documented climate change linkages. The model is designed to employ low-cost secondary data, and it is rooted in a perspective that merges normative environmental justice concerns with theoretical grounding in health inequalities. Since the model employs International Classification of Diseases, Ninth Revision Clinical Modification (ICD-9-CM) disease codes, it is transferable to other contexts, appropriate for use across spatial scales, and suitable for comparative analyses. We demonstrate the utility of the model through analysis of 2008-2010 hospitalization discharge data at state and county levels in Texas (USA). We identified several disease categories (i.e., cardiovascular, gastrointestinal, heat-related, and respiratory) associated with climate change, and then selected corresponding ICD-9 codes with the highest hospitalization counts for further analyses. Selected diseases include ischemic heart disease, diarrhea, heat exhaustion/cramps/stroke/syncope, and asthma. Cardiovascular disease ranked first among the general categories of diseases for age-adjusted hospital admission rate (5286.37 per 100,000). In terms of specific selected diseases (per 100,000 population), asthma ranked first (517.51), followed by ischemic heart disease (195.20), diarrhea (75.35), and heat exhaustion/cramps/stroke/syncope (7.81). Charges associated with the selected diseases over the 3-year period amounted to US$5.6 billion. Blacks were disproportionately burdened by the selected diseases in comparison to non-Hispanic whites, while Hispanics were not. Spatial distributions of the selected disease rates revealed geographic zones of disproportionate risk. Based upon a downscaled regional climate-change projection model, we estimate a >5% increase in the incidence and treatment costs of asthma attributable to
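
    The model's core bookkeeping, counting discharges per ICD-9 code and converting them to rates per 100,000 population, is simple to sketch. The pandas fragment below uses hypothetical records and column names, and computes crude rather than age-adjusted rates:

        import pandas as pd

        # hypothetical discharge records: one row per hospitalization
        discharges = pd.DataFrame({
            "icd9":   ["493", "414", "009", "992", "493"],  # asthma, IHD, diarrhea, heat
            "county": ["A", "A", "B", "B", "B"],
            "charges": [8e3, 3e4, 5e3, 2e3, 9e3],
        })
        population = {"A": 250_000, "B": 400_000}           # assumed county populations

        counts = discharges.groupby(["county", "icd9"]).size().rename("n").reset_index()
        counts["rate_per_100k"] = counts.apply(
            lambda r: 1e5 * r["n"] / population[r["county"]], axis=1)
        print(counts)
        print(f"total charges: ${discharges['charges'].sum():,.0f}")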

  9. A scalable climate health justice assessment model

    PubMed Central

    McDonald, Yolanda J.; Grineski, Sara E.; Collins, Timothy W.; Kim, Young-An

    2014-01-01

    This paper introduces a scalable “climate health justice” model for assessing and projecting incidence, treatment costs, and sociospatial disparities for diseases with well-documented climate change linkages. The model is designed to employ low-cost secondary data, and it is rooted in a perspective that merges normative environmental justice concerns with theoretical grounding in health inequalities. Since the model employs International Classification of Diseases, Ninth Revision Clinical Modification (ICD-9-CM) disease codes, it is transferable to other contexts, appropriate for use across spatial scales, and suitable for comparative analyses. We demonstrate the utility of the model through analysis of 2008–2010 hospitalization discharge data at state and county levels in Texas (USA). We identified several disease categories (i.e., cardiovascular, gastrointestinal, heat-related, and respiratory) associated with climate change, and then selected corresponding ICD-9 codes with the highest hospitalization counts for further analyses. Selected diseases include ischemic heart disease, diarrhea, heat exhaustion/cramps/stroke/syncope, and asthma. Cardiovascular disease ranked first among the general categories of diseases for age-adjusted hospital admission rate (5286.37 per 100,000). In terms of specific selected diseases (per 100,000 population), asthma ranked first (517.51), followed by ischemic heart disease (195.20), diarrhea (75.35), and heat exhaustion/cramps/stroke/syncope (7.81). Charges associated with the selected diseases over the 3-year period amounted to US$5.6 billion. Blacks were disproportionately burdened by the selected diseases in comparison to non-Hispanic whites, while Hispanics were not. Spatial distributions of the selected disease rates revealed geographic zones of disproportionate risk. Based upon a downscaled regional climate-change projection model, we estimate a >5% increase in the incidence and treatment costs of asthma attributable to

  10. Scalable Designs for Planar Ion Trap Arrays

    NASA Astrophysics Data System (ADS)

    Slusher, R. E.

    2007-03-01

    , ``Architecture for a large-scale ion-trap quantum computer,'' Nature, Vol. 417, pp. 709-711 (2002). S. Seidelin, J. Chiaverini, R. Reichle, J. J. Bollinger, D. Leibfried, J. Britton, J. H. Wesenberg, R. B. Blakestad, R. J. Epstein, D. B. Hume, J. D. Jost, C. Langer, R. Ozeri, N. Shiga, and D. J. Wineland, ``A microfabricated surface-electrode ion trap for scalable quantum information processing,'' quant-ph/0601173 (2006). J. Kim, S. Pau, Z. Ma, H. R. McLellan, J. V. Gates, A. Kornblit, and R. E. Slusher, ``System design for large-scale ion trap quantum information processor,'' Quantum Inf. Comput., Vol. 5, pp. 515-537 (2005).

  11. Visual Scripting.

    ERIC Educational Resources Information Center

    Halas, John

    Visual scripting is the coordination of words with pictures in sequence. This book presents the methods and viewpoints on visual scripting of fourteen film makers, from nine countries, who are involved in animated cinema; it contains concise examples of how a storyboard and preproduction script can be prepared in visual terms; and it includes a…

  12. WIFIRE: A Scalable Data-Driven Monitoring, Dynamic Prediction and Resilience Cyberinfrastructure for Wildfires

    NASA Astrophysics Data System (ADS)

    Altintas, I.; Block, J.; Braun, H.; de Callafon, R. A.; Gollner, M. J.; Smarr, L.; Trouve, A.

    2013-12-01

    Recent studies confirm that climate change will cause wildfires to increase in frequency and severity in the coming decades, especially for California and much of the North American West. The most critical sustainability issue in the midst of these ever-changing dynamics is how to achieve a new social-ecological equilibrium of this fire ecology. Wildfire wind speeds and directions change in an instant, and first responders can only be effective when they take action as quickly as the conditions change. To deliver the information needed for sustainable policy and management in this dynamically changing fire regime, we must capture these details to understand the environmental processes. We are building an end-to-end cyberinfrastructure (CI), called WIFIRE, for real-time and data-driven simulation, prediction and visualization of wildfire behavior. The WIFIRE integrated CI system supports social-ecological resilience to the changing fire ecology regime in the face of urban dynamics and climate change. Networked observations, e.g., heterogeneous satellite data and real-time remote sensor data, are integrated with computational techniques in signal processing, visualization, modeling and data assimilation to provide a scalable, technological, and educational solution to monitor weather patterns and predict a wildfire's rate of spread. Our collaborative WIFIRE team of scientists, engineers, technologists, government policy managers, private industry, and firefighters architects and implements CI pathways that enable joint innovation for wildfire management. Scientific workflows are used as an integrative distributed programming model and simplify the implementation of engineering modules for data-driven simulation, prediction and visualization while allowing integration with large-scale computing facilities. WIFIRE will be scalable to users with different skill levels via specialized web interfaces and user-specified alerts for environmental events broadcasted to receivers before

  13. A scalable multi-DLP pico-projector system for virtual reality

    NASA Astrophysics Data System (ADS)

    Teubl, F.; Kurashima, C.; Cabral, M.; Fels, S.; Lopes, R.; Zuffo, M.

    2014-03-01

    Virtual Reality (VR) environments can offer immersion, interaction and realistic images to users. A VR system is usually expensive and requires special equipment in a complex setup. One approach is to use Commodity-Off-The-Shelf (COTS) desktop multi-projectors, calibrated manually or with a camera, to reduce the cost of VR systems without a significant decrease in the visual experience. Additionally, for non-planar screen shapes, special optics such as lenses and mirrors are required, increasing costs. We propose a low-cost, scalable, flexible and mobile solution that allows building complex VR systems that project images onto a variety of arbitrary surfaces such as planar, cylindrical and spherical surfaces. This approach combines three key aspects: 1) clusters of DLP pico-projectors to provide homogeneous and continuous pixel density upon arbitrary surfaces without additional optics; 2) LED lighting technology for energy efficiency and light control; 3) a smaller physical footprint for flexibility purposes. Therefore, the proposed system is scalable in terms of pixel density, energy and physical space. To achieve these goals, we developed a multi-projector software library called FastFusion that calibrates all projectors into a uniform image that is presented to viewers. FastFusion uses a camera to automatically calibrate geometric and photometric correction of projected images from ad-hoc positioned projectors; the only requirement is a few pixels of overlap among them. We present results with eight pico-projectors, each with 7 lumens (LED) and a DLP 0.17 HVGA chipset.

  14. NEXUS Scalable and Distributed Next-Generation Avionics Bus for Space Missions

    NASA Technical Reports Server (NTRS)

    He, Yutao; Shalom, Eddy; Chau, Savio N.; Some, Raphael R.; Bolotin, Gary S.

    2011-01-01

    A paper discusses NEXUS, a common, next-generation avionics interconnect that is transparently compatible with wired, fiber-optic, and RF physical layers; provides a flexible, scalable, packet-switched topology; is fault-tolerant with sub-microsecond detection/recovery latency; has scalable bandwidth from 1 Kbps to 10 Gbps; has guaranteed real-time determinism with sub-microsecond latency/jitter; has built-in testability; features low power consumption (< 100 mW per Gbps); is lightweight with about a 5,000-logic-gate footprint; and is implemented in a small Bus Interface Unit (BIU) with a reconfigurable back-end providing an interface to legacy subsystems. NEXUS enhances a commercial interconnect standard, Serial RapidIO, to meet avionics interconnect requirements without breaking the standard. This unified interconnect technology can be used to meet performance, power, size, and reliability requirements of all ranges of equipment, sensors, and actuators at chip-to-chip, board-to-board, or box-to-box boundaries. Early results from an in-house modeling activity of Serial RapidIO using VisualSim indicate that the use of a switched, high-performance avionics network will provide a quantum leap in spacecraft onboard science and autonomy capability for science and exploration missions.

  15. Scalability enhancement of AODV using local link repairing

    NASA Astrophysics Data System (ADS)

    Jain, Jyoti; Gupta, Roopam; Bandhopadhyay, T. K.

    2014-09-01

    Dynamic change in the topology of an ad hoc network makes it difficult to design an efficient routing protocol. Scalability of an ad hoc network is also one of the important criteria of research in this field. Most research on ad hoc networks focuses on routing and medium-access protocols and produces simulation results for limited-size networks. Ad hoc on-demand distance vector (AODV) is one of the best reactive routing protocols. In this article, modified routing protocols based on local link repairing of AODV are proposed. A method of finding alternate routes to the next-to-next node is proposed in case of link failure. These protocols are beacon-less, meaning the periodic hello message is removed from the basic AODV to improve scalability. A few control packet formats have been changed to accommodate the suggested modification. The proposed protocols are simulated to investigate scalability performance and compared with the basic AODV protocol, and the results show that the local link repairing improves the scalability of the network. From the simulation results, it is clear that the scalability performance of the routing protocol is improved by the link-repairing method. We have tested the protocols for different terrain areas with approximately constant node densities and different traffic loads.
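
    The repair step itself is easy to picture: when the link to the next hop breaks, the upstream node looks for a short detour to the next-to-next node instead of triggering a full source-initiated route rediscovery. A minimal Python sketch under assumed data structures (not the paper's packet-level protocol):

        from collections import deque

        def local_repair(adj, upstream, next_hop, next_next):
            """BFS from `upstream` to `next_next`, avoiding the broken link."""
            broken = {(upstream, next_hop), (next_hop, upstream)}
            queue, seen = deque([[upstream]]), {upstream}
            while queue:
                path = queue.popleft()
                u = path[-1]
                if u == next_next:
                    return path                      # patched sub-route
                for v in adj.get(u, ()):
                    if v not in seen and (u, v) not in broken:
                        seen.add(v)
                        queue.append(path + [v])
            return None                              # fall back to full route discovery

        adj = {1: [2, 4], 2: [1, 3], 3: [2, 5], 4: [1, 5], 5: [3, 4]}
        print(local_repair(adj, upstream=1, next_hop=2, next_next=3))   # [1, 4, 5, 3]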

  16. Visual Imagery without Visual Perception?

    ERIC Educational Resources Information Center

    Bertolo, Helder

    2005-01-01

    The question regarding visual imagery and visual perception remains an open issue. Many studies have tried to understand if the two processes share the same mechanisms or if they are independent, using different neural substrates. Most research has been directed towards the need for activation of primary visual areas during imagery. Here we review…

  17. Visual Literacy and Visual Culture.

    ERIC Educational Resources Information Center

    Messaris, Paul

    Familiarity with specific images or sets of images plays a role in a culture's visual heritage. Two questions can be asked about this type of visual literacy: Is this a type of knowledge that is worth building into the formal educational curriculum of our schools? What are the educational implications of visual literacy? There is a three-part…

  18. Scalable Track Initiation for Optical Space Surveillance

    NASA Astrophysics Data System (ADS)

    Schumacher, P.; Wilkins, M. P.

    2012-09-01

    least cubic and commonly quartic or higher. Therefore, practical implementations require attention to the scalability of the algorithms, when one is dealing with the very large number of observations from large surveillance telescopes. We address two broad categories of algorithms. The first category includes and extends the classical methods of Laplace and Gauss, as well as the more modern method of Gooding, in which one solves explicitly for the apparent range to the target in terms of the given data. In particular, recent ideas offered by Mortari and Karimi allow us to construct a family of range-solution methods that can be scaled to many processors efficiently. We find that the orbit solutions (data association hypotheses) can be ranked by means of a concept we call persistence, in which a simple statistical measure of likelihood is based on the frequency of occurrence of combinations of observations in consistent orbit solutions. Of course, range-solution methods can be expected to perform poorly if the orbit solutions of most interest are not well conditioned. The second category of algorithms addresses this difficulty. Instead of solving for range, these methods attach a set of range hypotheses to each measured line of sight. Then all pair-wise combinations of observations are considered and the family of Lambert problems is solved for each pair. These algorithms also have polynomial complexity, though now the complexity is quadratic in the number of observations and also quadratic in the number of range hypotheses. We offer a novel type of admissible-region analysis, constructing partitions of the orbital element space and deriving rigorous upper and lower bounds on the possible values of the range for each partition. This analysis allows us to parallelize with respect to the element partitions and to reduce the number of range hypotheses that have to be considered in each processor simply by making the partitions smaller. Naturally, there are many ways to
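
    Structurally, the second category of algorithms is a double enumeration: every pair of observations is combined with every pair of range hypotheses, and each combination feeds a Lambert solver. The Python skeleton below makes the quadratic-in-observations, quadratic-in-hypotheses complexity visible; solve_lambert is a stub standing in for any standard two-point orbit solver, and the field layout is an assumption:

        from itertools import combinations, product

        def candidate_orbits(observations, range_grid, solve_lambert):
            """observations: list of (time, unit_vector, site_position) tuples;
            solve_lambert(p1, t1, p2, t2) -> orbit or None (stub for a real solver)."""
            hypotheses = []
            for o1, o2 in combinations(observations, 2):        # quadratic in observations
                for r1, r2 in product(range_grid, repeat=2):    # quadratic in hypotheses
                    p1 = o1[2] + r1 * o1[1]                     # hypothesized positions
                    p2 = o2[2] + r2 * o2[1]
                    orbit = solve_lambert(p1, o1[0], p2, o2[0])
                    if orbit is not None:                       # admissible-region pruning
                        hypotheses.append(orbit)
            return hypotheses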

  19. geoKepler Workflow Module for Computationally Scalable and Reproducible Geoprocessing and Modeling

    NASA Astrophysics Data System (ADS)

    Cowart, C.; Block, J.; Crawl, D.; Graham, J.; Gupta, A.; Nguyen, M.; de Callafon, R.; Smarr, L.; Altintas, I.

    2015-12-01

    The NSF-funded WIFIRE project has developed an open-source, online geospatial workflow platform for unifying geoprocessing tools and models for fire and other geospatially dependent modeling applications. It is a product of WIFIRE's objective to build an end-to-end cyberinfrastructure for real-time and data-driven simulation, prediction and visualization of wildfire behavior. geoKepler includes a set of reusable GIS components, or actors, for the Kepler Scientific Workflow System (https://kepler-project.org). Actors exist for reading and writing GIS data in formats such as Shapefile, GeoJSON, and KML, and for using OGC web services such as WFS. The actors also allow for calling geoprocessing tools in other packages such as GDAL and GRASS. Kepler integrates functions from multiple platforms and file formats into one framework, thus enabling optimal GIS interoperability, model coupling, and scalability. Products of the GIS actors can be fed directly to models such as FARSITE and WRF. Kepler's ability to schedule and scale processes using Hadoop and Spark also makes geoprocessing ultimately extensible and computationally scalable. The reusable workflows in geoKepler can be made to run automatically when alerted by real-time environmental conditions. Here, we show breakthroughs in the speed of creating complex data for hazard assessments with this platform. We also demonstrate geoKepler workflows that use data assimilation to ingest real-time weather data into wildfire simulations, and data mining techniques to gain insight into environmental conditions affecting fire behavior. Existing machine learning tools and libraries such as R and MLlib are being leveraged for this purpose in Kepler, as well as Kepler's Distributed Data Parallel (DDP) capability to provide a framework for scalable processing. geoKepler workflows can be executed via an iPython notebook as part of a Jupyter hub at UC San Diego for sharing and reporting of the scientific analysis and results from

  20. Diskless supercomputers: Scalable, reliable I/O for the Tera-Op technology base

    NASA Technical Reports Server (NTRS)

    Katz, Randy H.; Ousterhout, John K.; Patterson, David A.

    1993-01-01

    Computing is seeing an unprecedented improvement in performance; over the last five years there has been an order-of-magnitude improvement in the speeds of workstation CPUs. At least another order of magnitude seems likely in the next five years, to machines with 500 MIPS or more. The goal of the ARPA Teraop program is to realize even larger, more powerful machines, executing as many as a trillion operations per second. Unfortunately, we have seen no comparable breakthroughs in I/O performance; the speeds of I/O devices and the hardware and software architectures for managing them have not changed substantially in many years. We have completed a program of research to demonstrate hardware and software I/O architectures capable of supporting the kinds of internetworked 'visualization' workstations and supercomputers that will appear in the mid 1990s. The project had three overall goals: high performance, high reliability, and a scalable, multipurpose system.

  1. Scalable analysis of movement data for extracting and exploring significant places.

    PubMed

    Andrienko, Gennady; Andrienko, Natalia; Hurter, Christophe; Rinzivillo, Salvatore; Wrobel, Stefan

    2013-07-01

    Place-oriented analysis of movement data, i.e., recorded tracks of moving objects, includes finding places of interest in which certain types of movement events occur repeatedly and investigating the temporal distribution of event occurrences in these places and, possibly, other characteristics of the places and links between them. For this class of problems, we propose a visual analytics procedure consisting of four major steps: 1) event extraction from trajectories; 2) extraction of relevant places based on event clustering; 3) spatiotemporal aggregation of events or trajectories; 4) analysis of the aggregated data. All steps can be fulfilled in a scalable way with respect to the amount of data under analysis; therefore, the procedure is not limited by the size of the computer's RAM and can be applied to very large data sets. We demonstrate the use of the procedure on two real-world problems requiring analysis at different spatial scales.
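
    Steps 1 and 2 of the procedure can be sketched compactly: extract low-speed "stop" events from a trajectory, then cluster the event locations into candidate places. The Python sketch below uses DBSCAN; the thresholds, field layout and clustering choice are illustrative assumptions, not the paper's exact method:

        import numpy as np
        from sklearn.cluster import DBSCAN

        def stop_events(track, speed_threshold=0.5):
            """track: (n, 3) array of (t, x, y) rows; returns (x, y) of low-speed samples."""
            dt = np.diff(track[:, 0])
            step = np.linalg.norm(np.diff(track[:, 1:3], axis=0), axis=1)
            slow = step / dt < speed_threshold
            return track[1:][slow, 1:3]

        rng = np.random.default_rng(1)
        track = np.column_stack([np.arange(200.0),              # timestamps
                                 np.cumsum(rng.normal(0, 1, 200)),
                                 np.cumsum(rng.normal(0, 1, 200))])
        events = stop_events(track)
        labels = DBSCAN(eps=2.0, min_samples=5).fit_predict(events)   # -1 marks noise
        print(f"{len(events)} events, {labels.max() + 1} candidate places")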

  2. A framework for evaluating wavelet based watermarking for scalable coded digital item adaptation attacks

    NASA Astrophysics Data System (ADS)

    Bhowmik, Deepayan; Abhayaratne, Charith

    2009-02-01

    A framework for evaluating wavelet-based watermarking schemes against scalable coded visual media content adaptation attacks is presented. The framework, Watermark Evaluation Bench for Content Adaptation Modes (WEBCAM), aims to facilitate controlled evaluation of wavelet-based watermarking schemes under MPEG-21 part-7 digital item adaptations (DIA). WEBCAM accommodates all major wavelet-based watermarking schemes in a single generalised framework by considering a global parameter space, from which the optimum parameters for a specific algorithm may be chosen. WEBCAM considers the traversing of media content along various links and the required content adaptations at various nodes of media supply chains. In this paper, the content adaptation is emulated by JPEG2000 coded bit-stream extraction for various spatial resolutions and quality levels of the content. The proposed framework is beneficial not only as an evaluation tool but also as a design tool for new wavelet-based watermarking algorithms, by picking and mixing available tools and finding the optimum design parameters.
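
    The class of schemes WEBCAM generalises can be illustrated with a minimal additive embedding in one detail subband of a 2-D discrete wavelet transform. The sketch below uses PyWavelets; the subband choice and strength parameter alpha are illustrative assumptions, not WEBCAM settings:

        import numpy as np
        import pywt

        def embed(image, watermark_bits, alpha=2.0):
            cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), "haar")
            flat = cH.reshape(-1)                       # a view: edits land in cH
            marks = np.where(watermark_bits[: flat.size] > 0, alpha, -alpha)
            flat[: marks.size] += marks                 # additive marks in one subband
            return pywt.idwt2((cA, (cH, cV, cD)), "haar")

        rng = np.random.default_rng(0)
        img = rng.integers(0, 256, (64, 64)).astype(float)
        bits = rng.integers(0, 2, 512)
        marked = embed(img, bits)
        print("mean-squared embedding distortion:", float(np.mean((marked - img) ** 2)))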

  3. Current parallel I/O limitations to scalable data analysis.

    SciTech Connect

    Mascarenhas, Ajith Arthur; Pebay, Philippe Pierre

    2011-07-01

    This report describes the limitations to parallel scalability that we have encountered when applying our otherwise optimally scalable parallel statistical analysis toolkit to large data sets distributed across the parallel file system of the current premier DOE computational facility. It describes our study to evaluate the effect of parallel I/O on the overall scalability of a parallel data analysis pipeline using our scalable parallel statistics toolkit [PTBM11]. To this end, we tested it using the Jaguar-pf DOE/ORNL peta-scale platform on large combustion simulation data under a variety of process counts and domain decomposition scenarios. In this report we have recalled the foundations of the parallel statistical analysis toolkit which we have designed and implemented, with the specific double intent of reproducing typical data analysis workflows and achieving an optimal design for scalable parallel implementations. We have briefly reviewed those earlier results and publications which allow us to conclude that we have achieved both goals. However, in this report we have further established that, when used in conjunction with a state-of-the-art parallel I/O system, as can be found on the premier DOE peta-scale platform, the scaling properties of the overall analysis pipeline comprising parallel data access routines degrade rapidly. This finding is problematic and must be addressed if peta-scale data analysis is to be made scalable, or even possible. In order to attempt to address these parallel I/O limitations, we will investigate the use of the Adaptable IO System (ADIOS) [LZL+10] to improve I/O performance while maintaining flexibility for a variety of IO options, such as MPI IO and POSIX IO. This system is developed at ORNL and other collaborating institutions, and is being tested extensively on Jaguar-pf. Simulation codes being developed on these systems will also use ADIOS to output the data, thereby making it easier for other systems, such as ours, to

  4. Scalable multi-variate analytics of seismic and satellite-based observational data.

    PubMed

    Yuan, Xiaoru; He, Xiao; Guo, Hanqi; Guo, Peihong; Kendall, Wesley; Huang, Jian; Zhang, Yongxian

    2010-01-01

    Over the past few years, large human populations around the world have been affected by an increase in significant seismic activities. For both conducting basic scientific research and for setting critical government policies, it is crucial to be able to explore and understand seismic and geographical information obtained through all scientific instruments. In this work, we present a visual analytics system that enables explorative visualization of seismic data together with satellite-based observational data, and introduce a suite of visual analytical tools. Seismic and satellite data are integrated temporally and spatially. Users can select temporal and spatial ranges to zoom in on specific seismic events, as well as to inspect changes both during and after the events. Tools for designing high-dimensional transfer functions have been developed to enable efficient and intuitive comprehension of the multi-modal data. Spreadsheet-style comparisons are used for data drill-down as well as presentation. Comparisons between distinct seismic events are also provided for characterizing event-wise differences. Our system has been designed for scalability in terms of data size, complexity (i.e. number of modalities), and varying form factors of display environments.

  5. Scalable Video Broadcasting with Distributed Node Selection in Wireless Networks

    NASA Astrophysics Data System (ADS)

    Lee, Yonghun; Lee, Kyujin; Lee, Kyesan; Suh, Doug Young

    We propose a distributed node selection (DNS) scheme that guarantees quality of service (QoS) of the scalable video broadcasting system over wireless channels. The proposed DNS scheme chooses the destination node based on the SVC layer information, and it selects the best relay from a set of competing candidate nodes by considering two factors: 1) wireless channel conditions between destination and relay candidates and 2) scalable video's layer information. In simulations, the performance of the proposed scheme in terms of quality gains, complexity (overhead) and applicability was examined.
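
    A toy version of the two-factor selection makes the idea concrete: each SVC layer carries a QoS requirement, the base layer gets the strongest adequate relay, and enhancement layers are served more frugally. All thresholds and the tie-breaking policy below are illustrative assumptions, not the paper's algorithm:

        def select_relay(candidates, svc_layer):
            """candidates: list of (node_id, snr_dB) toward the destination; layer 0 = base."""
            required = {0: 15.0, 1: 10.0, 2: 6.0}.get(svc_layer, 6.0)   # per-layer QoS floor
            adequate = [(snr, node) for node, snr in candidates if snr >= required]
            if not adequate:
                return None                   # drop the enhancement layer, or retry later
            if svc_layer == 0:
                return max(adequate)[1]       # protect the base layer with the best relay
            return min(adequate)[1]           # spare strong relays for base-layer traffic

        candidates = [("relay-a", 12.0), ("relay-b", 18.5), ("relay-c", 9.3)]
        print(select_relay(candidates, svc_layer=0))    # -> relay-b
        print(select_relay(candidates, svc_layer=1))    # -> relay-a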

  6. Providing scalable system software for high-end simulations

    SciTech Connect

    Greenberg, D.

    1997-12-31

    Detailed, full-system, complex physics simulations have been shown to be feasible on systems containing thousands of processors. In order to manage these computer systems it has been necessary to create scalable system services. In this talk Sandia's research on scalable systems will be described. The key concepts of low-overhead data movement through portals and of flexible services through multi-partition architectures will be illustrated in detail. The talk will conclude with a discussion of how these techniques can be applied outside of the standard monolithic MPP system.

  7. A scalable portable object-oriented framework for parallel multisensor data-fusion applications in HPC systems

    NASA Astrophysics Data System (ADS)

    Gupta, Pankaj; Prasad, Guru

    2004-04-01

    Multi-sensor data fusion is the synergistic integration of multiple data sets. Data fusion includes processes for aligning, associating and combining data and information in estimating and predicting the state of objects, their relationships, and characterizing situations and their significance. The combination of complex data sets and the need for real-time data storage and retrieval compounds the data fusion problem. The systematic development and use of data fusion techniques are particularly critical in applications requiring massive, diverse, ambiguous, and time-critical data. Such conditions are characteristic of new emerging requirements, e.g., network-centric and information-centric warfare, low-intensity conflicts such as special operations, counter-narcotics, antiterrorism, information operations and CALOW (Conventional Arms, Limited Objectives Warfare), and economic and political intelligence. In this paper, Aximetric presents a novel, scalable, object-oriented, metamodel framework for a parallel, cluster-based data-fusion engine on High Performance Computing (HPC) systems. The data-clustering algorithms provide a fast, scalable technique to sift through massive, complex data sets arriving through multiple streams in real time. The load-balancing algorithm provides the capability to evenly distribute the workload among processors on the fly and achieve real-time scalability. The proposed data-fusion engine exploits unique data structures for fast storage, retrieval and interactive visualization of the multiple data streams.

  8. Scalable, divergent synthesis of meroterpenoids via "borono-sclareolide".

    PubMed

    Dixon, Darryl D; Lockner, Jonathan W; Zhou, Qianghui; Baran, Phil S

    2012-05-23

    A scalable, divergent synthesis of bioactive meroterpenoids has been developed. A key component of this work is the invention of "borono-sclareolide", a terpenyl radical precursor that enables gram-scale preparation of (+)-chromazonarol. Subsequent synthetic operations on this key intermediate permit rapid access to a variety of related meroterpenoids, many of which possess important biological activity.

  9. Scalable hologram video coding for adaptive transmitting service.

    PubMed

    Seo, Young-Ho; Lee, Yoon-Hyuk; Yoo, Ji-Sang; Kim, Dong-Wook

    2013-01-01

    This paper discusses processing techniques for an adaptive digital holographic video service in various reconstruction environments, and proposes two new scalable coding schemes. The proposed schemes are constructed according to the hologram generation or acquisition schemes: hologram-based resolution-scalable coding (HRS) and light source-based signal-to-noise ratio scalable coding (LSS). HRS is applied for holograms that are already acquired or generated, while LSS is applied to the light sources before generating digital holograms. In the LSS scheme, the light source information is lossless coded because it is too important to lose, while the HRS scheme adopts a lossy coding method. In an experiment, we provide eight stages of an HRS scheme whose data compression ratios range from 1:1 to 100:1 for each layered data. For LSS, four layers and 16 layers of scalable coding schemes are provided. We experimentally show that the proposed techniques make it possible to service a digital hologram video adaptively to the various displays with different resolutions, computation capabilities of the receiver side, or bandwidths of the network.
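
    In the spirit of the HRS scheme, resolution scalability can be sketched as a residual pyramid: a small base layer plus enhancement layers that each restore the next resolution. A numpy-only illustration under simplifying assumptions (the actual HRS coder is lossy and hologram-specific; this sketch is neither):

        import numpy as np

        def encode_layers(holo, levels=3):
            layers, current = [], holo.astype(float)
            for _ in range(levels):
                low = current[::2, ::2]                      # next coarser layer
                up = np.repeat(np.repeat(low, 2, 0), 2, 1)   # naive upsampling
                layers.append(current - up)                  # residual enhancement layer
                current = low
            layers.append(current)                           # base layer, kept lossless
            return layers[::-1]                              # base first

        def decode(layers, upto):
            img = layers[0]
            for residual in layers[1:upto + 1]:
                img = np.repeat(np.repeat(img, 2, 0), 2, 1) + residual
            return img

        holo = np.random.default_rng(0).normal(size=(64, 64))
        layers = encode_layers(holo)
        print(np.allclose(decode(layers, 3), holo))          # full-quality reconstruction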

  10. A Scalable Platform for Functional Nanomaterials via Bubble-Bursting.

    PubMed

    Feng, Jie; Nunes, Janine K; Shin, Sangwoo; Yan, Jing; Kong, Yong Lin; Prud'homme, Robert K; Arnaudov, Luben N; Stoyanov, Simeon D; Stone, Howard A

    2016-06-01

    A continuous and scalable bubbling system to generate functional nanodroplets dispersed in a continuous phase is proposed. Scaling up of this system can be achieved by simply tuning the bubbling parameters. This new and versatile system is capable of encapsulating various functional nanomaterials to form functional nanoemulsions and nanoparticles in one step. PMID:27007617

  11. Factor-Analytic Procedures for Assessing Response Pattern Scalability

    ERIC Educational Resources Information Center

    Ferrando, Pere J.

    2007-01-01

    This paper proposes procedures for assessing the fit of a psychometric model at the level of the individual respondent. The procedures are intended for personality measures made up of Likert-type items, which, in applied research, are usually analyzed by means of factor analysis. Two scalability indices are proposed, which can be considered as…

  12. A Scalable Set of ESL Reading Comprehension Items.

    ERIC Educational Resources Information Center

    Perkins, Kyle

    Guttman implicational scaling techniques were used to identify a unidimensional set of English as a Second Language reading comprehension items. Data were analyzed from 202 students who sat for an institutional administration of the Test of English as a Foreign Language (TOEFL). The examinees who contributed to the scalable set had significantly…

  13. Data Intensive Architecture for Scalable Cyber Analytics

    SciTech Connect

    Olsen, Bryan K.; Johnson, John R.; Critchlow, Terence J.

    2011-11-15

    Cyber analysts are tasked with the identification and mitigation of network exploits and threats. These compromises are difficult to identify due to the characteristics of cyber communication, the volume of traffic, and the duration of possible attack. It is necessary to have analytical tools to help analysts identify anomalies that span seconds, days, and weeks. Unfortunately, providing analytical tools effective access to the volumes of underlying data requires novel architectures, which is often overlooked in operational deployments. Our work is focused on a summary record of communication, called a flow. Flow records are intended to summarize a communication session between a source and a destination, providing a level of aggregation from the base data. Despite this aggregation, many enterprise network perimeter sensors store millions of network flow records per day. This volume makes analytics difficult, requiring the development of new techniques to efficiently identify temporal patterns and potential threats, and other characteristics of the data compound the problem. Within the billions of communication records that transit the network, there are millions of distinct IP addresses involved, and characterizing patterns of entity behavior is very difficult with the vast number of entities that exist in the data. Research has struggled to validate a model for typical network behavior with hopes it will enable the identification of atypical behavior. Complicating matters more, analysts are typically only able to visualize and interact with fractions of the data and have the potential to miss long-term trends and behaviors. Our analysis approach focuses on aggregate views and visualization techniques to enable flexible and efficient data exploration as well as the capability to view trends over long periods of time. Realizing that interactively exploring summary data allowed analysts to effectively identify
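
    The aggregate-view approach reduces to rolling flows up into coarse buckets that fit in memory across long time spans. A minimal Python sketch with an assumed record layout:

        from collections import defaultdict

        def aggregate_flows(flows, bucket_seconds=3600):
            """flows: iterable of (timestamp, src_ip, dst_ip, bytes) tuples."""
            buckets = defaultdict(lambda: [0, 0])     # (src, hour) -> [flow count, bytes]
            for ts, src, _dst, nbytes in flows:
                key = (src, int(ts // bucket_seconds))
                buckets[key][0] += 1
                buckets[key][1] += nbytes
            return buckets

        flows = [(10, "10.0.0.1", "10.0.0.9", 1200),
                 (3700, "10.0.0.1", "10.0.0.9", 800),
                 (3900, "10.0.0.2", "10.0.0.9", 500)]
        for (src, hour), (n, total) in sorted(aggregate_flows(flows).items()):
            print(f"{src} hour={hour}: {n} flows, {total} bytes")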

  14. Visual Theorems.

    ERIC Educational Resources Information Center

    Davis, Philip J.

    1993-01-01

    Argues for a mathematics education that interprets the word "theorem" in a sense that is wide enough to include the visual aspects of mathematical intuition and reasoning. Defines the term "visual theorems" and illustrates the concept using the Marigold of Theodorus. (Author/MDH)

  15. Mathematical Visualization

    ERIC Educational Resources Information Center

    Rogness, Jonathan

    2011-01-01

    Advances in computer graphics have provided mathematicians with the ability to create stunning visualizations, both to gain insight and to help demonstrate the beauty of mathematics to others. As educators these tools can be particularly important as we search for ways to work with students raised with constant visual stimulation, from video games…

  16. Visual Closure.

    ERIC Educational Resources Information Center

    Groffman, Sidney

    An experimental test of visual closure based on an information-theory concept of perception was devised to test the ability to discriminate visual stimuli with reduced cues. The test is to be administered in a timed individual situation in which the subject is presented with sets of incomplete drawings of simple objects that he is required to name…

  17. Visual Thinking.

    ERIC Educational Resources Information Center

    Arnheim, Rudolf

    Based on the more general principle that all thinking (including reasoning) is basically perceptual in nature, the author proposes that visual perception is not a passive recording of stimulus material but an active concern of the mind. He delineates the task of visually distinguishing changes in size, shape, and position and points out the…

  18. Visual agnosia.

    PubMed

    Biran, I; Coslett, H B

    2003-11-01

    The visual agnosias are an intriguing class of clinical phenomena that have important implications for current theories of high-level vision. Visual agnosia is defined as impaired object recognition that cannot be attributed to visual loss, language impairment, or a general mental decline. At least in some instances, agnosic patients generate an adequate internal representation of the stimulus but fail to recognize it. In this review, we begin by describing the classic works related to the visual agnosias, followed by a description of the major clinical variants and their occurrence in degenerative disorders. In keeping with the theme of this issue, we then discuss recent contributions to this domain. Finally, we present evidence from functional imaging studies to support the clinical distinction between the various types of visual agnosia.

  19. Three-dimensional visualization system as an aid for facial surgical planning

    NASA Astrophysics Data System (ADS)

    Barre, Sebastien; Fernandez-Maloigne, Christine; Paume, Patricia; Subrenat, Gilles

    2001-05-01

    We present an aid for the treatment of facial deformities. We designed a system for surgical planning and prediction of human facial aspect after maxillo-facial surgery. We study the 3D reconstruction process of the tissues involved in the simulation, starting from CT acquisitions. 3D isosurface meshes of soft tissues and bone structures are built. A sparse set of still photographs is used to reconstruct a 360-degree texture of the facial surface and increase its visual realism. Reconstructed objects are inserted into an object-oriented, portable and scriptable visualization software environment allowing the practitioner to manipulate and visualize them interactively. Several LOD (Level-Of-Detail) techniques are used to ensure usability. Bone structures are separated and moved by means of cut planes matching orthognathic surgery procedures. We simulate soft-tissue deformations by creating a physically based spring model between both tissues. The new static state of the facial model is computed by minimizing the energy of the spring system to achieve equilibrium. This process is optimized by transferring information such as participation hints at the vertex level between a warped generic model and the facial mesh.
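
    The soft-tissue step is a classic mass-spring relaxation: vertices pinned to the repositioned bone stay fixed while free vertices move to reduce Hooke's-law energy. A small numpy sketch with an illustrative mesh, stiffness and step size (a simple gradient-descent stand-in for whatever minimizer the system actually uses):

        import numpy as np

        def relax(vertices, springs, fixed, iters=500, k=1.0, step=0.05):
            """vertices: (n, 3) array; springs: list of (i, j, rest_length);
            fixed: indices pinned to the (moved) bone surface."""
            v = vertices.copy()
            for _ in range(iters):
                force = np.zeros_like(v)
                for i, j, rest in springs:
                    d = v[j] - v[i]
                    length = np.linalg.norm(d)
                    f = k * (length - rest) * d / (length + 1e-12)   # Hooke's law
                    force[i] += f
                    force[j] -= f
                force[fixed] = 0.0                   # pinned vertices do not move
                v += step * force                    # descend toward equilibrium
            return v

        verts = np.array([[0.0, 0, 0], [1.0, 0, 0], [2.5, 0, 0]])   # stretched chain
        springs = [(0, 1, 1.0), (1, 2, 1.0)]
        print(relax(verts, springs, fixed=[0, 2]))   # vertex 1 settles near x = 1.25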

  20. Massively parallel neural encoding and decoding of visual stimuli.

    PubMed

    Lazar, Aurel A; Zhou, Yiyin

    2012-08-01

    The massively parallel nature of video Time Encoding Machines (TEMs) calls for scalable, massively parallel decoders that are implemented with neural components. The current generation of decoding algorithms is based on computing the pseudo-inverse of a matrix and does not satisfy these requirements. Here we consider video TEMs with an architecture built using Gabor receptive fields and a population of Integrate-and-Fire neurons. We show how to build a scalable architecture for video Time Decoding Machines using recurrent neural networks. Furthermore, we extend our architecture to handle the reconstruction of visual stimuli encoded with massively parallel video TEMs having neurons with random thresholds. Finally, we discuss in detail our algorithms and demonstrate their scalability and performance on a large scale GPU cluster. PMID:22397951

  1. Oceanotron, Scalable Server for Marine Observations

    NASA Astrophysics Data System (ADS)

    Loubrieu, T.; Bregent, S.; Blower, J. D.; Griffiths, G.

    2013-12-01

    Ifremer, the French marine institute, is deeply involved in data management for different ocean in-situ observation programs (ARGO, OceanSites, GOSUD, ...) and other European programs aiming at networking ocean in-situ observation data repositories (myOcean, seaDataNet, Emodnet). To capitalize on the effort of implementing advanced data dissemination services (visualization, download with subsetting) for these programs and, more generally, for water-column observation repositories, Ifremer decided to develop the oceanotron server (2010). Given the diversity of data repository formats (RDBMS, netCDF, ODV, ...) and the volatile nature of the standard interoperability interface profiles (OGC/WMS, OGC/WFS, OGC/SOS, OPeNDAP, ...), the server is designed to manage plugins: StorageUnits, which read specific data repository formats (netCDF/OceanSites, RDBMS schema, ODV binary format), and FrontDesks, which receive external requests and send results for interoperable protocols (OGC/WMS, OGC/SOS, OPeNDAP). In between, a third type of plugin may be inserted: TransformationUnits, which perform ocean-business-related transformations of the features (for example, conversion of vertical coordinates from pressure in decibars to meters below the sea surface). The server is released under an open-source license so that partners can develop their own plugins. Within the MyOcean project, the University of Reading has plugged in a WMS implementation as an oceanotron FrontDesk. The modules are connected by sharing the same information model for marine observations (or sampling features: vertical profiles, point series and trajectories), dataset metadata and queries. The shared information model is based on the OGC Observations & Measurements and Unidata Common Data Model initiatives. The model is implemented in Java (http://www.ifremer.fr/isi/oceanotron/javadoc/). This inner interoperability level makes it possible to capitalize on ocean-business expertise in software development without being indentured to
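
    The plugin pipeline described above (StorageUnits, TransformationUnits, FrontDesks sharing one feature model) can be sketched abstractly. The server itself is written in Java; the following is a hypothetical Python sketch of the three plugin roles and their composition, not oceanotron's real API.

        from abc import ABC, abstractmethod

        class Feature:
            """Shared information model: a sampling feature (vertical profile,
            point series, or trajectory), reduced here to a kind plus raw data."""
            def __init__(self, kind, data):
                self.kind, self.data = kind, data

        class StorageUnit(ABC):          # reads a specific repository format
            @abstractmethod
            def read(self, query): ...

        class TransformationUnit(ABC):   # ocean-business transformation of features
            @abstractmethod
            def transform(self, feature): ...

        class FrontDesk(ABC):            # serves an interoperable protocol
            @abstractmethod
            def serve(self, request, features): ...

        def handle(request, storage, transforms, desk):
            """The pipeline the plugins compose into: storage ->
            optional transformations -> front-desk response."""
            features = storage.read(request)
            for t in transforms:
                features = [t.transform(f) for f in features]
            return desk.serve(request, features)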

  2. Customizing computational methods for visual analytics with big data.

    PubMed

    Choo, Jaegul; Park, Haesun

    2013-01-01

    The volume of available data has been growing exponentially, increasing the complexity and obscurity of data problems. In response, visual analytics (VA) has gained attention, yet its solutions haven't scaled well for big data. Computational methods can improve VA's scalability by giving users compact, meaningful information about the input data. However, the significant computation time these methods require hinders real-time interactive visualization of big data. By addressing crucial discrepancies between these methods and VA regarding precision and convergence, researchers have proposed ways to customize them for VA. These approaches, which include low-precision computation and iteration-level interactive visualization, ensure real-time interactive VA for big data.
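
    A minimal sketch of the iteration-level interactive visualization idea: an iterative computation (Lloyd's k-means here, chosen only for illustration) yields an intermediate result after every iteration, so the display can update long before convergence. The names are hypothetical; this is not code from the paper.

        import numpy as np

        def kmeans_iterative(X, k, iters=50, seed=0):
            """Lloyd's k-means that yields centroids and labels after every
            iteration, letting a front end redraw the clustering as it
            converges instead of blocking until the final result."""
            rng = np.random.default_rng(seed)
            centroids = X[rng.choice(len(X), k, replace=False)]
            for _ in range(iters):
                # Assign each point to its nearest centroid.
                labels = np.argmin(
                    ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1), axis=1)
                # Recompute centroids; keep the old one if a cluster emptied.
                centroids = np.array([X[labels == j].mean(0) if np.any(labels == j)
                                      else centroids[j] for j in range(k)])
                yield centroids, labels   # iteration-level result for the display

        # A visualization loop would consume partial results as they arrive:
        # for centroids, labels in kmeans_iterative(data, k=5):
        #     redraw_scatter(data, labels)   # hypothetical drawing routine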

  3. Visual cognition

    PubMed Central

    Cavanagh, Patrick

    2011-01-01

    Visual cognition, high-level vision, mid-level vision and top-down processing all refer to decision-based scene analyses that combine prior knowledge with retinal input to generate representations. The label “visual cognition” is little used at present, but research and experiments on mid- and high-level, inference-based vision have flourished, becoming in the 21st century a significant, if often understated, part of current vision research. How does visual cognition work? What are its moving parts? This paper reviews the origins and architecture of visual cognition and briefly describes some work in the areas of routines, attention, surfaces, objects, and events (motion, causality, and agency). Most vision scientists avoid being too explicit when presenting concepts about visual cognition, having learned that explicit models invite easy criticism. What we see in the literature is ample evidence for visual cognition, but few or only cautious attempts to detail how it might work. This is the great unfinished business of vision research: at some point we will be done with characterizing how the visual system measures the world and we will have to return to the question of how vision constructs models of objects, surfaces, scenes, and events. PMID:21329719

  4. SciDAC Institute for Ultra-Scale Visualization: Activity Recognition for Ultra-Scale Visualization

    SciTech Connect

    Silver, Deborah

    2014-04-30

    Understanding the science behind ultra-scale simulations requires extracting meaning from data sets of hundreds of terabytes or more. Developing scalable parallel visualization algorithms is a key step enabling scientists to interact and visualize their data at this scale. However, at extreme scales, the datasets are so huge, there is not even enough time to view the data, let alone explore it with basic visualization methods. Automated tools are necessary for knowledge discovery -- to help sift through the information and isolate characteristic patterns, thereby enabling the scientist to study local interactions, the origin of features and their evolution in large volumes of data. These tools must be able to operate on data of this scale and work with the visualization process. In this project, we developed a framework for activity detection to allow scientists to model and extract spatio-temporal patterns from time-varying data.

  5. Visual impairment.

    PubMed

    Ellenberger, Carl

    2016-01-01

    This chapter can guide the use of imaging in the evaluation of common visual syndromes: transient visual disturbance, including migraine and amaurosis fugax; acute optic neuropathy complicating multiple sclerosis, neuromyelitis optica spectrum disorder, Leber hereditary optic neuropathy, and Susac syndrome; papilledema and pseudotumor cerebri syndrome; cerebral disturbances of vision, including posterior cerebral arterial occlusion, posterior reversible encephalopathy, hemianopia after anterior temporal lobe resection, posterior cortical atrophy, and conversion blindness. Finally, practical efforts in visual rehabilitation by sensory substitution for blind patients can improve their lives and disclose new information about the brain. PMID:27430448

  6. Visual cognition

    SciTech Connect

    Pinker, S.

    1985-01-01

    This book consists of essays covering issues in visual cognition, presenting experimental techniques from cognitive psychology, methods of modeling cognitive processes on computers from artificial intelligence, and methods of studying brain organization from neuropsychology. Topics considered include: parts of recognition; visual routines; the upward direction; mental rotation and the discrimination of left and right turns in maps; individual differences in mental imagery; computational analysis and the neurological basis of mental imagery; componential analysis.

  7. Visual search.

    PubMed

    Chan, Louis K H; Hayward, William G

    2013-07-01

    Visual search is the act of looking for a predefined target among other objects. This task has been widely used as an experimental paradigm to study visual attention, and because of its influence has also become a subject of research itself. When used as a paradigm, visual search studies address questions including the nature, function, and limits of preattentive processing and focused attention. As a subject of research, visual search studies address the role of memory in search, the procedures involved in search, and factors that affect search performance. In this article, we review major theories of visual search, the ways in which preattentive information is used to guide attentional allocation, the role of memory, and the processes and decisions involved in its successful completion. We conclude by summarizing the current state of knowledge about visual search and highlight some unresolved issues. WIREs Cogn Sci 2013, 4:415-429. doi: 10.1002/wcs.1235

  8. Rocinante, a virtual collaborative visualizer

    SciTech Connect

    McDonald, M.J.; Ice, L.G.

    1996-12-31

    With the goal of improving the ability of people around the world to share the development and use of intelligent systems, Sandia National Laboratories' Intelligent Systems and Robotics Center is developing new Virtual Collaborative Engineering (VCE) and Virtual Collaborative Control (VCC) technologies. A key area of VCE and VCC research is in shared visualization of virtual environments. This paper describes a Virtual Collaborative Visualizer (VCV), named Rocinante, that Sandia developed for VCE and VCC applications. Rocinante allows multiple participants to simultaneously view dynamic geometrically-defined environments. Each viewer can exclude extraneous detail or include additional information in the scene as desired. Shared information can be saved and later replayed in a stand-alone mode. Rocinante automatically scales visualization requirements with computer system capabilities. Models with 30,000 polygons and 4 Megabytes of texture display at 12 to 15 frames per second (fps) on an SGI Onyx and at 3 to 8 fps (without texture) on Indigo 2 Extreme computers. In its networked mode, Rocinante synchronizes its local geometric model with remote simulators and sensory systems by monitoring data transmitted through UDP packets. Rocinante's scalability and performance make it an ideal VCC tool. Users throughout the country can monitor robot motions and the thinking behind their motion planners and simulators.

  9. Scalable quantum memory in the ultrastrong coupling regime.

    PubMed

    Kyaw, T H; Felicetti, S; Romero, G; Solano, E; Kwek, L-C

    2015-01-01

    Circuit quantum electrodynamics, consisting of superconducting artificial atoms coupled to on-chip resonators, represents a prime candidate for implementing a scalable quantum computing architecture because of its good tunability and controllability. Furthermore, recent advances have pushed the technology towards the ultrastrong coupling regime of light-matter interaction, where the qubit-resonator coupling strength reaches a considerable fraction of the resonator frequency. Here, we propose a qubit-resonator system operating in that regime as a quantum memory device and study the storage and retrieval of quantum information in and from the Z2 parity-protected quantum memory within experimentally feasible schemes. We are also convinced that our proposal might pave the way to realizing a scalable quantum random-access memory due to its fast storage and readout performance. PMID:25727251

  10. Scalable graphene coatings for enhanced condensation heat transfer.

    PubMed

    Preston, Daniel J; Mafra, Daniela L; Miljkovic, Nenad; Kong, Jing; Wang, Evelyn N

    2015-05-13

    Water vapor condensation is commonly observed in nature and routinely used as an effective means of transferring heat, with dropwise condensation on nonwetting surfaces exhibiting improved heat transfer compared to filmwise condensation on wetting surfaces. However, state-of-the-art techniques to promote dropwise condensation rely on functional hydrophobic coatings that either have challenges with chemical stability or are so thick that any potential heat transfer improvement is negated by the added thermal resistance of the coating. In this work, we show the effectiveness of ultrathin, scalable, chemical-vapor-deposited (CVD) graphene coatings in promoting dropwise condensation while offering robust chemical stability and maintaining low thermal resistance. Heat transfer enhancements of 4× were demonstrated compared to filmwise condensation, and the robustness of these CVD coatings was superior to that of typical hydrophobic monolayer coatings. Our results indicate that graphene is a promising surface coating for promoting dropwise condensation of water in industrial conditions, with the potential for scalable application via CVD.

  11. Scalable fabrication of triboelectric nanogenerators for commercial applications

    NASA Astrophysics Data System (ADS)

    Dhakar, Lokesh; Shan, Xuechuan; Wang, Zhiping; Yang, Bin; Eng Hock Tay, Francis; Heng, Chun-Huat; Lee, Chengkuo

    2015-12-01

    Harvesting mechanical energy from irregular sources is a potential way to charge batteries for devices and sensor nodes. Triboelectric effect has been extensively utilized in energy harvesting devices as a method to convert mechanical energy into electrical energy. As triboelectric nanogenerators have immense potential to be commercialized, it is important to develop scalable fabrication methods to manufacture these devices. This paper presents scalable fabrication steps to realize large scale triboelectric nanogenerators. Roll-to-roll UV embossing and lamination techniques are used to fabricate different components of large scale triboelectric nanogenerators. The device generated a peak-to-peak voltage and current of 486 V and 21.2 μA, respectively at a frequency of 5 Hz.

  12. Scalable parallel distance field construction for large-scale applications

    SciTech Connect

    Yu, Hongfeng; Xie, Jinrong; Ma, Kwan -Liu; Kolla, Hemanth; Chen, Jacqueline H.

    2015-10-01

    Computing distance fields is fundamental to many scientific and engineering applications. Distance fields can be used to direct analysis and reduce data. In this paper, we present a highly scalable method for computing 3D distance fields on massively parallel distributed-memory machines. A new distributed spatial data structure, named parallel distance tree, is introduced to manage the level sets of data and facilitate surface tracking over time, resulting in significantly reduced computation and communication costs for calculating the distance to the surface of interest from any spatial location. Our method supports several data types and distance metrics from real-world applications. We demonstrate its efficiency and scalability on state-of-the-art supercomputers using both large-scale volume datasets and surface models. We also demonstrate in-situ distance field computation on dynamic turbulent flame surfaces for a petascale combustion simulation. In conclusion, our work greatly extends the usability of distance fields for demanding applications.
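
    For intuition, here is a serial, single-node stand-in for a distance-field computation on a regular grid, using SciPy's Euclidean distance transform; the paper's parallel distance tree and distributed algorithm go far beyond this sketch.

        import numpy as np
        from scipy import ndimage

        def distance_field(volume, isovalue):
            """Approximate distance (in voxel units) from every grid point to
            the isosurface {x : volume(x) = isovalue}.

            distance_transform_edt measures distance to the nearest zero
            element, so we mark surface voxels as the zeros of a mask.
            """
            inside = volume >= isovalue
            # Surface voxels: inside voxels with at least one outside neighbor.
            surface = inside & ~ndimage.binary_erosion(inside)
            return ndimage.distance_transform_edt(~surface)

        vol = np.random.default_rng(0).random((64, 64, 64))
        d = distance_field(vol, isovalue=0.5)   # one distance per voxel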

  13. NPTool: Towards Scalability and Reliability of Business Process Management

    NASA Astrophysics Data System (ADS)

    Braghetto, Kelly Rosa; Ferreira, João Eduardo; Pu, Calton

    Currently, one important challenge in business process management is to provide scalability and reliability of business process executions at the same time. This difficulty becomes more pronounced when the execution control must handle countless complex business processes. This work presents NavigationPlanTool (NPTool), a tool to control the execution of business processes. NPTool is supported by the Navigation Plan Definition Language (NPDL), a language for business process specification that uses process algebra as its formal foundation. NPTool implements the NPDL language as a SQL extension. The main contribution of this paper is a description of NPTool showing how process algebra features combined with a relational database model can be used to provide scalable and reliable control of the execution of business processes. The next steps for NPTool include reuse of control-flow patterns and support for data-flow management.

  14. Scalable Parallel Distance Field Construction for Large-Scale Applications.

    PubMed

    Yu, Hongfeng; Xie, Jinrong; Ma, Kwan-Liu; Kolla, Hemanth; Chen, Jacqueline H

    2015-10-01

    Computing distance fields is fundamental to many scientific and engineering applications. Distance fields can be used to direct analysis and reduce data. In this paper, we present a highly scalable method for computing 3D distance fields on massively parallel distributed-memory machines. A new distributed spatial data structure, named parallel distance tree, is introduced to manage the level sets of data and facilitate surface tracking over time, resulting in significantly reduced computation and communication costs for calculating the distance to the surface of interest from any spatial location. Our method supports several data types and distance metrics from real-world applications. We demonstrate its efficiency and scalability on state-of-the-art supercomputers using both large-scale volume datasets and surface models. We also demonstrate in-situ distance field computation on dynamic turbulent flame surfaces for a petascale combustion simulation. Our work greatly extends the usability of distance fields for demanding applications. PMID:26357251

  15. A look at scalable dense linear algebra libraries

    SciTech Connect

    Dongarra, J.J.; van de Geijn, R.; Walker, D.W.

    1992-07-01

    We discuss the essential design features of a library of scalable software for performing dense linear algebra computations on distributed memory concurrent computers. The square block scattered decomposition is proposed as a flexible and general-purpose way of decomposing most, if not all, dense matrix problems. An object-oriented interface to the library permits more portable applications to be written, and is easy to learn and use, since details of the parallel implementation are hidden from the user. Experiments on the Intel Touchstone Delta system with a prototype code that uses the square block scattered decomposition to perform LU factorization are presented and analyzed. It was found that the code was both scalable and efficient, performing at about 14 Gflop/s (double precision) for the largest problem considered.
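
    The square block scattered (2D block-cyclic) decomposition assigns each nb x nb block of the matrix to a process cyclically over a 2D process grid, so load stays balanced as the factorization sweeps across the matrix. A minimal sketch of the owner mapping, with hypothetical names:

        def block_owner(i, j, nb, pgrid_rows, pgrid_cols):
            """Map global matrix element (i, j) to its owning process in a
            2-D block-cyclic decomposition with nb x nb blocks over a
            pgrid_rows x pgrid_cols process grid."""
            return ((i // nb) % pgrid_rows, (j // nb) % pgrid_cols)

        # Example: with 64x64 blocks on a 4x8 grid, element (1000, 70) sits
        # in block (15, 1), owned by process (15 % 4, 1 % 8) = (3, 1).
        print(block_owner(1000, 70, 64, 4, 8))   # -> (3, 1)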

  16. Scalable, full-colour and controllable chromotropic plasmonic printing.

    PubMed

    Xue, Jiancai; Zhou, Zhang-Kai; Wei, Zhiqiang; Su, Rongbin; Lai, Juan; Li, Juntao; Li, Chao; Zhang, Tengwei; Wang, Xue-Hua

    2015-11-16

    Plasmonic colour printing has drawn wide attention as a promising candidate for the next-generation colour-printing technology. However, an efficient approach to realize full colour and scalable fabrication is still lacking, which prevents plasmonic colour printing from practical applications. Here we present a scalable and full-colour plasmonic printing approach by combining conjugate twin-phase modulation with a plasmonic broadband absorber. More importantly, our approach also demonstrates controllable chromotropic capability, that is, the ability of reversible colour transformations. This chromotropic capability affords enormous potential in building functionalized prints for anticounterfeiting, special labels, and high-density data-encryption storage. With such excellent performance in functional colour applications, this colour-printing approach could pave the way for plasmonic colour printing in real-world commercial utilization.

  17. Scalable, full-colour and controllable chromotropic plasmonic printing

    NASA Astrophysics Data System (ADS)

    Xue, Jiancai; Zhou, Zhang-Kai; Wei, Zhiqiang; Su, Rongbin; Lai, Juan; Li, Juntao; Li, Chao; Zhang, Tengwei; Wang, Xue-Hua

    2015-11-01

    Plasmonic colour printing has drawn wide attention as a promising candidate for the next-generation colour-printing technology. However, an efficient approach to realize full colour and scalable fabrication is still lacking, which prevents plasmonic colour printing from practical applications. Here we present a scalable and full-colour plasmonic printing approach by combining conjugate twin-phase modulation with a plasmonic broadband absorber. More importantly, our approach also demonstrates controllable chromotropic capability, that is, the ability of reversible colour transformations. This chromotropic capability affords enormous potential in building functionalized prints for anticounterfeiting, special labels, and high-density data-encryption storage. With such excellent performance in functional colour applications, this colour-printing approach could pave the way for plasmonic colour printing in real-world commercial utilization.

  18. Development of Scalable Culture Systems for Human Embryonic Stem Cells

    PubMed Central

    Azarin, Samira M.; Palecek, Sean P.

    2009-01-01

    The use of human pluripotent stem cells, including embryonic and induced pluripotent stem cells, in therapeutic applications will require the development of robust, scalable culture technologies for undifferentiated cells. Advances made in large-scale cultures of other mammalian cells will facilitate expansion of undifferentiated human embryonic stem cells (hESCs), but challenges specific to hESCs will also have to be addressed, including development of defined, humanized culture media and substrates, monitoring spontaneous differentiation and heterogeneity in the cultures, and maintaining karyotypic integrity in the cells. This review will describe our current understanding of environmental factors that regulate hESC self-renewal and efforts to provide these cues in various scalable bioreactor culture systems. PMID:20161686

  19. Scalable quantum memory in the ultrastrong coupling regime.

    PubMed

    Kyaw, T H; Felicetti, S; Romero, G; Solano, E; Kwek, L-C

    2015-03-02

    Circuit quantum electrodynamics, consisting of superconducting artificial atoms coupled to on-chip resonators, represents a prime candidate for implementing a scalable quantum computing architecture because of its good tunability and controllability. Furthermore, recent advances have pushed the technology towards the ultrastrong coupling regime of light-matter interaction, where the qubit-resonator coupling strength reaches a considerable fraction of the resonator frequency. Here, we propose a qubit-resonator system operating in that regime as a quantum memory device and study the storage and retrieval of quantum information in and from the Z2 parity-protected quantum memory within experimentally feasible schemes. We are also convinced that our proposal might pave the way to realizing a scalable quantum random-access memory due to its fast storage and readout performance.

  20. Scalable, full-colour and controllable chromotropic plasmonic printing

    PubMed Central

    Xue, Jiancai; Zhou, Zhang-Kai; Wei, Zhiqiang; Su, Rongbin; Lai, Juan; Li, Juntao; Li, Chao; Zhang, Tengwei; Wang, Xue-Hua

    2015-01-01

    Plasmonic colour printing has drawn wide attention as a promising candidate for the next-generation colour-printing technology. However, an efficient approach to realize full colour and scalable fabrication is still lacking, which prevents plasmonic colour printing from practical applications. Here we present a scalable and full-colour plasmonic printing approach by combining conjugate twin-phase modulation with a plasmonic broadband absorber. More importantly, our approach also demonstrates controllable chromotropic capability, that is, the ability of reversible colour transformations. This chromotropic capability affords enormous potential in building functionalized prints for anticounterfeiting, special labels, and high-density data-encryption storage. With such excellent performance in functional colour applications, this colour-printing approach could pave the way for plasmonic colour printing in real-world commercial utilization. PMID:26567803

  1. Scalable digital hardware for a trapped ion quantum computer

    NASA Astrophysics Data System (ADS)

    Mount, Emily; Gaultney, Daniel; Vrijsen, Geert; Adams, Michael; Baek, So-Young; Hudek, Kai; Isabella, Louis; Crain, Stephen; van Rynbach, Andre; Maunz, Peter; Kim, Jungsang

    2015-09-01

    Many of the challenges of scaling quantum computer hardware lie at the interface between the qubits and the classical control signals used to manipulate them. Modular ion trap quantum computer architectures address scalability by constructing individual quantum processors interconnected via a network of quantum communication channels. Successful operation of such quantum hardware requires a fully programmable classical control system capable of frequency-stabilizing the continuous-wave lasers necessary for loading, cooling, initialization, and detection of the ion qubits; stabilizing the optical frequency combs used to drive logic gate operations on the ion qubits; providing a large number of analog voltage sources to drive the trap electrodes; and maintaining phase coherence among all the controllers that manipulate the qubits. In this work, we describe scalable solutions to these hardware development challenges.

  2. A scalable quantum architecture using efficient non-local gates

    NASA Astrophysics Data System (ADS)

    Brennen, Gavin

    2003-03-01

    Many protocols for quantum information processing use a control sequence or circuit of interactions between qubits and control fields wherein arbitrary qubits can be made to interact with one another. The primary problem with many "physically scalable" architectures is that the qubits are restricted to nearest-neighbor interactions and quantum wires between distant qubits do not exist. Because of errors, nearest-neighbor interactions often present difficulty with scalability. We describe a protocol that efficiently performs non-local gates between elements of separated static logical qubits using a bus of dynamic qubits as a refreshable entanglement resource. Imperfect resource preparation due to error propagation from noisy gates and measurement errors can be purified within the bus channel. Because of the inherent parallelism of entanglement swapping, communication latency within the quantum computer can be significantly reduced.

  3. Scalable Parallel Distance Field Construction for Large-Scale Applications.

    PubMed

    Yu, Hongfeng; Xie, Jinrong; Ma, Kwan-Liu; Kolla, Hemanth; Chen, Jacqueline H

    2015-10-01

    Computing distance fields is fundamental to many scientific and engineering applications. Distance fields can be used to direct analysis and reduce data. In this paper, we present a highly scalable method for computing 3D distance fields on massively parallel distributed-memory machines. A new distributed spatial data structure, named parallel distance tree, is introduced to manage the level sets of data and facilitate surface tracking over time, resulting in significantly reduced computation and communication costs for calculating the distance to the surface of interest from any spatial locations. Our method supports several data types and distance metrics from real-world applications. We demonstrate its efficiency and scalability on state-of-the-art supercomputers using both large-scale volume datasets and surface models. We also demonstrate in-situ distance field computation on dynamic turbulent flame surfaces for a petascale combustion simulation. Our work greatly extends the usability of distance fields for demanding applications.

  4. Space Situational Awareness Data Processing Scalability Utilizing Google Cloud Services

    NASA Astrophysics Data System (ADS)

    Greenly, D.; Duncan, M.; Wysack, J.; Flores, F.

    Space Situational Awareness (SSA) is a fundamental and critical component of current space operations. The term SSA encompasses the awareness, understanding and predictability of all objects in space. As the population of orbital space objects and debris increases, the number of collision avoidance maneuvers grows and prompts the need for accurate and timely process measures. The SSA mission continually evolves toward near real-time assessment and analysis, demanding higher processing capabilities. By conventional methods, meeting these demands requires the integration of new hardware to keep pace with the growing complexity of maneuver planning algorithms. SpaceNav has implemented a highly scalable architecture that tracks satellites and debris by utilizing powerful virtual machines on the Google Cloud Platform. SpaceNav algorithms for processing CDMs outpace conventional means. A robust processing environment for tracking data, collision avoidance maneuvers and various other aspects of SSA can be created and deleted on demand. We discuss the migration of SpaceNav tools and algorithms onto the Google Cloud Platform, the trials and tribulations involved, how and why certain cloud products were used, and the integration techniques that were implemented. Key items presented are: 1. Scientific algorithms and SpaceNav tools integrated into a scalable architecture: a) maneuver planning; b) parallel processing; c) Monte Carlo simulations; d) optimization algorithms; e) software application development and integration into the Google Cloud Platform. 2. Compute Engine processing: a) Application Engine automated processing; b) performance testing and performance scalability; c) Cloud MySQL databases and database scalability; d) cloud data storage; e) redundancy and availability.

  5. Scalable Real Time Data Management for Smart Grid

    SciTech Connect

    Yin, Jian; Kulkarni, Anand V.; Purohit, Sumit; Gorton, Ian; Akyol, Bora A.

    2011-12-16

    This paper presents GridMW, a scalable and reliable data middleware for smart grids. Smart grids promise to improve the efficiency of power grid systems and reduce greenhouse gas emissions by incorporating power generation from renewable sources and shaping demand to match supply. As a result, power grid systems will become much more dynamic and require constant adjustments, which in turn requires analysis and decision-making applications to improve the efficiency and reliability of smart grid systems.

  6. Performance and Scalability of the NAS Parallel Benchmarks in Java

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael A.; Schultz, Matthew; Jin, Haoqiang; Yan, Jerry; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    Several features make Java an attractive choice for scientific applications. In order to gauge the applicability of Java to Computational Fluid Dynamics (CFD), we have implemented the NAS (NASA Advanced Supercomputing) Parallel Benchmarks in Java. The performance and scalability of the benchmarks point out the areas where improvement in Java compiler technology and in Java thread implementation would position Java closer to Fortran in the competition for scientific applications.

  7. Superlinearly scalable noise robustness of redundant coupled dynamical systems.

    PubMed

    Kohar, Vivek; Kia, Behnam; Lindner, John F; Ditto, William L

    2016-03-01

    We illustrate through theory and numerical simulations that redundant coupled dynamical systems can be extremely robust against local noise in comparison to uncoupled dynamical systems evolving in the same noisy environment. Previous studies have shown that the noise robustness of redundant coupled dynamical systems is linearly scalable and deviations due to noise can be minimized by increasing the number of coupled units. Here, we demonstrate that the noise robustness can actually be scaled superlinearly if some conditions are met and very high noise robustness can be realized with very few coupled units. We discuss these conditions and show that this superlinear scalability depends on the nonlinearity of the individual dynamical units. The phenomenon is demonstrated in discrete as well as continuous dynamical systems. This superlinear scalability not only provides us an opportunity to exploit the nonlinearity of physical systems without being bogged down by noise but may also help us in understanding the functional role of coupled redundancy found in many biological systems. Moreover, engineers can exploit superlinear noise suppression by starting a coupled system near (not necessarily at) the appropriate initial condition.
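
    A toy numerical sketch of the baseline effect described above: n redundant logistic maps coupled by state averaging, with independent noise per unit, where averaging shrinks the per-step noise amplitude roughly as 1/sqrt(n) (variance as 1/n, the linearly scalable regime). The superlinear regime reported in the paper depends on conditions (nonlinearity, initial conditions) this sketch does not reproduce; all names are hypothetical.

        import numpy as np

        def mean_deviation(n_units, steps=5000, noise=1e-2, r=2.8, seed=1):
            """n_units redundant logistic maps, coupled by replacing every
            state with the group mean each step; independent noise per unit.
            With r = 2.8 the noise-free map has a stable fixed point at
            1 - 1/r, so deviations stay bounded and are easy to measure."""
            rng = np.random.default_rng(seed)
            x = np.full(n_units, 0.5)
            devs = []
            for _ in range(steps):
                x = r * x * (1.0 - x) + noise * rng.standard_normal(n_units)
                x[:] = x.mean()                       # coupling step
                devs.append(abs(x[0] - (1.0 - 1.0 / r)))
            return float(np.mean(devs[100:]))         # discard the transient

        # Deviation shrinks roughly as 1/sqrt(n) as units are added:
        for n in (1, 4, 16, 64):
            print(n, mean_deviation(n))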

  8. Superlinearly scalable noise robustness of redundant coupled dynamical systems

    NASA Astrophysics Data System (ADS)

    Kohar, Vivek; Kia, Behnam; Lindner, John F.; Ditto, William L.

    2016-03-01

    We illustrate through theory and numerical simulations that redundant coupled dynamical systems can be extremely robust against local noise in comparison to uncoupled dynamical systems evolving in the same noisy environment. Previous studies have shown that the noise robustness of redundant coupled dynamical systems is linearly scalable and deviations due to noise can be minimized by increasing the number of coupled units. Here, we demonstrate that the noise robustness can actually be scaled superlinearly if some conditions are met and very high noise robustness can be realized with very few coupled units. We discuss these conditions and show that this superlinear scalability depends on the nonlinearity of the individual dynamical units. The phenomenon is demonstrated in discrete as well as continuous dynamical systems. This superlinear scalability not only provides us an opportunity to exploit the nonlinearity of physical systems without being bogged down by noise but may also help us in understanding the functional role of coupled redundancy found in many biological systems. Moreover, engineers can exploit superlinear noise suppression by starting a coupled system near (not necessarily at) the appropriate initial condition.

  9. Design and Implementation of Ceph: A Scalable Distributed File System

    SciTech Connect

    Weil, S A; Brandt, S A; Miller, E L; Long, D E; Maltzahn, C

    2006-04-19

    File system designers continue to look to new architectures to improve scalability. Object-based storage diverges from server-based (e.g. NFS) and SAN-based storage systems by coupling processors and memory with disk drives, delegating low-level allocation to object storage devices (OSDs) and decoupling I/O (read/write) from metadata (file open/close) operations. Even recent object-based systems, however, inherit decades-old architectural choices going back to early UNIX file systems, limiting their ability to effectively scale to hundreds of petabytes. We present Ceph, a distributed file system that provides excellent performance and reliability with unprecedented scalability. Ceph maximizes the separation between data and metadata management by replacing allocation tables with a pseudo-random data distribution function (CRUSH) designed for heterogeneous and dynamic clusters of unreliable OSDs. We leverage OSD intelligence to distribute data replication, failure detection and recovery with semi-autonomous OSDs running a specialized local object storage file system (EBOFS). Finally, Ceph is built around a dynamic distributed metadata management cluster that provides extremely efficient metadata management and seamlessly adapts to a wide range of general-purpose and scientific computing file system workloads. We present performance measurements under a variety of workloads that show superior I/O performance and scalable metadata management (more than a quarter million metadata ops/sec).
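
    To see why a pseudo-random placement function removes the need for allocation tables, consider this simplified stand-in (rendezvous hashing, not the actual CRUSH algorithm): any client can recompute an object's replica locations from its name alone, with no central lookup.

        import hashlib

        def place_object(obj_id, osds, replicas=3):
            """Deterministic, pseudo-random replica placement: every client
            computes the same OSD list from the object name, so placement
            needs no allocation table (a rendezvous-hashing sketch in the
            spirit of CRUSH, not Ceph's real algorithm)."""
            def score(osd):
                h = hashlib.sha256(f"{obj_id}/{osd}".encode()).hexdigest()
                return int(h, 16)
            # Highest score wins; take the top `replicas` devices.
            return sorted(osds, key=score, reverse=True)[:replicas]

        print(place_object("volume-42/chunk-7", [f"osd.{i}" for i in range(8)]))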

  10. A Systems Approach to Scalable Transportation Network Modeling

    SciTech Connect

    Perumalla, Kalyan S

    2006-01-01

    Emerging needs in transportation network modeling and simulation are raising new challenges with respect to scalability of network size and vehicular traffic intensity, speed of simulation for simulation-based optimization, and fidelity of vehicular behavior for accurate capture of event phenomena. Parallel execution is warranted to sustain the required detail, size and speed. However, few parallel simulators exist for such applications, partly due to the challenges underlying their development. Moreover, many simulators are based on time-stepped models, which can be computationally inefficient for the purposes of modeling evacuation traffic. Here an approach is presented to designing a simulator with memory and speed efficiency as the goals from the outset, and, specifically, scalability via parallel execution. The design makes use of discrete event modeling techniques as well as parallel simulation methods. Our simulator, called SCATTER, is being developed, incorporating such design considerations. Preliminary performance results are presented on benchmark road networks, showing scalability to one million vehicles simulated on one processor.
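
    The time-stepped versus discrete-event distinction drawn above can be made concrete with a minimal event-queue engine: vehicles are touched only when an event occurs rather than at every global tick. All names here are hypothetical, not SCATTER's API.

        import heapq
        import itertools

        def run_discrete_event(initial_events, horizon):
            """Minimal discrete-event engine: pop the earliest event, run its
            handler, and push whatever follow-up events the handler schedules.
            Nothing is computed at times when nothing happens."""
            seq = itertools.count()                 # tie-breaker for equal times
            queue = [(t, next(seq), h) for t, h in initial_events]
            heapq.heapify(queue)
            while queue:
                t, _, handler = heapq.heappop(queue)
                if t > horizon:
                    break
                for t_next, h_next in handler(t):
                    heapq.heappush(queue, (t_next, next(seq), h_next))

        # Hypothetical example: a vehicle entering a road link schedules its exit.
        def enter_link(t):
            print(f"{t:7.1f}s: vehicle enters link")
            return [(t + 30.0, exit_link)]          # link traversal takes 30 s

        def exit_link(t):
            print(f"{t:7.1f}s: vehicle exits link")
            return []

        run_discrete_event([(0.0, enter_link)], horizon=3600.0)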

  11. The Scalable HeterOgeneous Computing (SHOC) Benchmark Suite

    SciTech Connect

    Danalis, Antonios; Marin, Gabriel; McCurdy, Collin B; Meredith, Jeremy S; Roth, Philip C; Spafford, Kyle L; Tipparaju, Vinod; Vetter, Jeffrey S

    2010-01-01

    Scalable heterogeneous computing systems, which are composed of a mix of compute devices, such as commodity multicore processors, graphics processors, reconfigurable processors, and others, are gaining attention as one approach to continuing performance improvement while managing the new challenge of energy efficiency. As these systems become more common, it is important to be able to compare and contrast architectural designs and programming systems in a fair and open forum. To this end, we have designed the Scalable HeterOgeneous Computing benchmark suite (SHOC). SHOC's initial focus is on systems containing graphics processing units (GPUs) and multi-core processors, and on the new OpenCL programming standard. SHOC is a spectrum of programs that test the performance and stability of these scalable heterogeneous computing systems. At the lowest level, SHOC uses microbenchmarks to assess architectural features of the system. At higher levels, SHOC uses application kernels to determine system-wide performance including many system features such as intranode and internode communication among devices. SHOC includes benchmark implementations in both OpenCL and CUDA in order to provide a comparison of these programming models.

  12. Scalability, Timing, and System Design Issues for Intrinsic Evolvable Hardware

    NASA Technical Reports Server (NTRS)

    Hereford, James; Gwaltney, David

    2004-01-01

    In this paper we address several issues pertinent to intrinsic evolvable hardware (EHW). The first issue is scalability; namely, how the design space scales as the programming string for the programmable device gets longer. We develop a model for population size and the number of generations as a function of the programming string length, L, and show that the number of circuit evaluations is an O(L^2) process. We compare our model to several successful intrinsic EHW experiments and discuss the many implications of our model. The second issue that we address is the timing of intrinsic EHW experiments. We show that the processing time is a small part of the overall time to derive or evolve a circuit and that major improvements in processor speed alone will have only a minimal impact on improving the scalability of intrinsic EHW. The third issue we consider is the system-level design of intrinsic EHW experiments. We review what other researchers have done to break the scalability barrier and contend that the type of reconfigurable platform and the evolutionary algorithm are tied together and impose limits on each other.

  13. Scalable fault tolerant image communication and storage grid

    NASA Astrophysics Data System (ADS)

    Slik, David; Seiler, Oliver; Altman, Tym; Montour, Mike; Kermani, Mohammad; Proseilo, Walter; Terry, David; Kawahara, Midori; Leckie, Chris; Muir, Dale

    2003-05-01

    Increasing production and use of digital medical imagery are driving new approaches to information storage and management. Traditional, centralized approaches to image communication, storage and archiving are becoming increasingly expensive to scale and operate with high levels of reliability. Multi-site, geographically-distributed deployments connected by limited-bandwidth networks present further scalability, reliability, and availability challenges. A grid storage architecture built from a distributed network of low cost, off-the-shelf servers (nodes) provides scalable data and metadata storage, processing, and communication without single points of failure. Imaging studies are stored, replicated, cached, managed, and retrieved based on defined rules, and nodes within the grid can acquire studies and respond to queries. Grid nodes transparently load-balance queries, storage/retrieval requests, and replicate data for automated backup and disaster recovery. This approach reduces latency, increases availability, provides near-linear scalability and allows the creation of a geographically distributed medical imaging network infrastructure. This paper presents some key concepts in grid storage and discusses the results of a clinical deployment of a multi-site storage grid for cancer care in the province of British Columbia.

  14. S2HAT: Scalable Spherical Harmonic Transform Library

    NASA Astrophysics Data System (ADS)

    Stompor, Radek

    2011-10-01

    Many problems in astronomy and astrophysics require the computation of spherical harmonic transforms. This is in particular the case whenever data to be analyzed are distributed over the sphere or a set of corresponding mock data sets has to be generated. In many of those contexts, the rapidly improving resolution of both the data and the simulations puts an increasingly big emphasis on our ability to calculate the transforms quickly and reliably. The scalable spherical harmonic transform library S2HAT consists of a set of flexible, massively parallel, and scalable routines for calculating diverse (scalar, spin-weighted, etc.) spherical harmonic transforms for a class of isolatitude sky grids or pixelizations. The library routines implement the standard algorithm, with complexity O(n^3/2), where n is the number of pixels/grid points on the sphere; however, owing to their efficient parallelization and advanced numerical implementation, they achieve very competitive performance and near-perfect scalability. S2HAT is written in Fortran 90 with a C interface. This software is a derivative of the spherical harmonic transforms included in the HEALPix package and is based on both the serial and MPI routines of its version 2.01; since version 2.5, however, the software is fully autonomous of HEALPix and can be compiled and run without the HEALPix library.
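
    For readers unfamiliar with the transforms in question, a short usage sketch with the healpy package (the Python HEALPix bindings, assumed installed; S2HAT itself is a Fortran/C library with a different interface):

        import numpy as np
        import healpy as hp   # Python HEALPix bindings, not S2HAT itself

        nside = 256                            # isolatitude grid resolution
        npix = hp.nside2npix(nside)            # n = 12 * nside^2 sphere pixels
        ell = np.arange(3 * nside)
        cl = 1.0 / (ell + 1.0) ** 2            # toy angular power spectrum

        sky = hp.synfast(cl, nside)            # synthesize a mock sky map
        alm = hp.map2alm(sky)                  # forward scalar transform, O(n^3/2)
        recovered = hp.alm2map(alm, nside)     # inverse transform back to a map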

  15. Community health workers in global health: scale and scalability.

    PubMed

    Liu, Anne; Sullivan, Sarah; Khan, Mohammed; Sachs, Sonia; Singh, Prabhjot

    2011-01-01

    Community health worker programs have emerged as one of the most effective strategies to address human resources for health shortages while improving access to and quality of primary healthcare. Many developing countries have succeeded in deploying community health worker programs in recognition of the potential of community health workers to identify, refer, and in many cases treat illnesses at the household level. However, challenges in program design and sustainability are amplified when such programs are expanded at scale, particularly with regard to systems management and integration with primary health facilities. Several nongovernmental organizations provide cases of innovation in the management of community health worker programs that could support a sustainable system capable of being expanded without its functionality or effectiveness being strained, therefore providing for stronger scalability. This paper explores community health worker programs that have been deployed at national scale, as well as scalable innovations found in successful nongovernmental organization-run community health worker programs. In exploring strategies to ensure sustainable community health worker programs at scale, we reconcile scaling constraints and scalable innovations by mapping the strengths of nongovernmental organizations' community health worker programs to the challenges faced by programs currently deployed at national scale. PMID:21598268

  16. The intergroup protocols: Scalable group communication for the internet

    SciTech Connect

    Berket, K.

    2000-11-01

    Reliable group ordered delivery of multicast messages in a distributed system is a useful service that simplifies the programming of distributed applications. Such a service helps to maintain the consistency of replicated information and to coordinate the activities of the various processes. With the increasing popularity of the Internet, there is an increasing interest in scaling the protocols that provide this service to the environment of the Internet. The InterGroup protocol suite, described in this dissertation, provides such a service, and is intended for the environment of the Internet with scalability to large numbers of nodes and high latency links. The InterGroup protocols approach the scalability problem from various directions. They redefine the meaning of group membership, allow voluntary membership changes, add a receiver-oriented selection of delivery guarantees that permits heterogeneity of the receiver set, and provide a scalable reliability service. The InterGroup system comprises several components, executing at various sites within the system. Each component provides part of the services necessary to implement a group communication system for the wide-area. The components can be categorized as: (1) control hierarchy, (2) reliable multicast, (3) message distribution and delivery, and (4) process group membership. We have implemented a prototype of the InterGroup protocols in Java, and have tested the system performance in both local-area and wide-area networks.

  17. Encryption and authentication for scalable multimedia: current state of the art and challenges

    NASA Astrophysics Data System (ADS)

    Zhu, Bin B.; Swanson, Mitchell D.; Li, Shipeng

    2004-10-01

    Scalable coding is a technology that encodes a multimedia signal in a scalable manner where various representations can be extracted from a single codestream to fit a wide range of applications. Many new scalable coders such as JPEG 2000 and MPEG-4 FGS offer fine granularity scalability to provide near-continuous optimal tradeoff between quality and rates over a large range. This fine granularity scalability poses great new challenges to the design of encryption and authentication systems for scalable media in Digital Rights Management (DRM) and other applications. It may be desirable or even mandatory to maintain a certain level of scalability in the encrypted or signed codestream so that no decryption or re-signing is needed when legitimate adaptations are applied. In other words, the encryption and authentication should be scalable, i.e., adaptation friendly. Otherwise, secrets have to be shared with every intermediate stage along the content delivery system that performs adaptation manipulations. Sharing secrets with many parties would jeopardize the overall security of a system, since the security depends on the weakest component of the system. In this paper, we first describe general requirements and desirable features for an encryption or authentication system for scalable media, especially those not encountered in the non-scalable case. Then we present an overview of the current state of the art of technologies in scalable encryption and authentication. These technologies include full and selective encryption schemes that maintain the original or a coarser granularity of scalability offered by an unencrypted scalable codestream, layered access control and block-level authentication that reduce the fine granularity of scalability to a block level, among others. Finally, we summarize existing challenges and propose future research directions.
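
    One standard construction for the layered access control mentioned above is a hash-chain key schedule: each quality layer is encrypted under its own key, and the key for a given layer lets a subscriber derive the keys for all coarser layers but none of the finer ones. A sketch of the key derivation (a generic construction, not a scheme from this paper; names are hypothetical):

        import hashlib

        def layer_keys(master_key, n_layers):
            """Derive one key per quality layer so that the key for layer i
            is the hash of the key for layer i+1: holding a finer layer's
            key lets you hash forward to every coarser layer's key, but the
            one-way hash blocks derivation in the other direction."""
            keys = [hashlib.sha256(master_key).digest()]   # finest layer
            for _ in range(n_layers - 1):
                keys.append(hashlib.sha256(keys[-1]).digest())
            return list(reversed(keys))  # keys[0] = coarsest (base) layer

        keys = layer_keys(b"content-master-secret", 4)
        # Handing out keys[2] lets a subscriber derive keys[1] and keys[0]
        # (decrypt layers 0..2) while layer 3 stays inaccessible.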

  18. Visual Prosthesis

    PubMed Central

    Schiller, Peter H.; Tehovnik, Edward J.

    2009-01-01

    There are more than 40 million blind individuals in the world whose plight would be greatly ameliorated by creating a visual prosthetic. We begin by outlining the basic operational characteristics of the visual system as this knowledge is essential for producing a prosthetic device based on electrical stimulation through arrays of implanted electrodes. We then list a series of tenets that we believe need to be followed in this effort. Central among these is our belief that the initial research in this area, which is in its infancy, should first be carried out in animals. We suggest that implantation of area V1 holds high promise as the area is of a large volume and can therefore accommodate extensive electrode arrays. We then proceed to consider coding operations that can effectively convert visual images viewed by a camera to stimulate electrode arrays to yield visual impressions that can provide shape, motion and depth information. We advocate experimental work that mimics electrical stimulation effects non-invasively in sighted human subjects using a camera from which visual images are converted into displays on a monitor akin to those created by electrical stimulation. PMID:19065857

  19. Visual cognition

    SciTech Connect

    Pinker, S.

    1985-01-01

    This collection of research papers on visual cognition first appeared as a special issue of Cognition: International Journal of Cognitive Science. The study of visual cognition has seen enormous progress in the past decade, bringing important advances in our understanding of shape perception, visual imagery, and mental maps. Many of these discoveries are the result of converging investigations in different areas, such as cognitive and perceptual psychology, artificial intelligence, and neuropsychology. This volume is intended to highlight a sample of work at the cutting edge of this research area for the benefit of students and researchers in a variety of disciplines. The tutorial introduction that begins the volume is designed to help the nonspecialist reader bridge the gap between the contemporary research reported here and earlier textbook introductions or literature reviews.

  20. Visualizing thought.

    PubMed

    Tversky, Barbara

    2011-07-01

    Depictive expressions of thought predate written language by thousands of years. They have evolved in communities through a kind of informal user testing that has refined them. Analyzing common visual communications reveals consistencies that illuminate how people think as well as guide design; the process can be brought into the laboratory and accelerated. Like language, visual communications abstract and schematize; unlike language, they use properties of the page (e.g., proximity and place: center, horizontal/left-right, vertical/up-down) and the marks on it (e.g., dots, lines, arrows, boxes, blobs, likenesses, symbols) to convey meanings. The visual expressions of these meanings (e.g., individual, category, order, relation, correspondence, continuum, hierarchy) have analogs in language, gesture, and especially in the patterns that are created when people design the world around them, arranging things into piles and rows and hierarchies and arrays; these spatial-abstraction-action interconnections are termed spractions. The designed world is a diagram.

  1. In-Situ Visualization Experiments with ParaView Cinema in RAGE

    SciTech Connect

    Kares, Robert John

    2015-10-15

    A previous paper described some numerical experiments performed using the ParaView/Catalyst in-situ visualization infrastructure deployed in the Los Alamos RAGE radiation-hydrodynamics code to produce images from a running large-scale 3D ICF simulation. One challenge of the in-situ approach apparent in these experiments was the difficulty of choosing parameters like isosurface values for the visualizations to be produced from the running simulation without the benefit of prior knowledge of the simulation results, and the resultant cost of recomputing in-situ generated images when parameters are chosen suboptimally. A proposed method of addressing this difficulty is to simply render multiple images at runtime with a range of possible parameter values to produce a large database of images and to provide the user with a tool for managing the resulting database of imagery. Recently, ParaView/Catalyst has been extended to include such a capability via the so-called Cinema framework. Here I describe some initial experiments with the first delivery of Cinema and make some recommendations for future extensions of Cinema’s capabilities.
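
    The render-many-parameters-up-front idea can be sketched without ParaView: sweep a range of isovalues, extract a result per value, and write an index that a browsing tool can consume. This stand-in uses scikit-image's marching cubes and records mesh statistics in place of rendered images; the names and file layout are hypothetical, not Cinema's actual database format.

        import json
        import numpy as np
        from skimage import measure   # marching cubes for isosurface extraction

        def sweep_isovalues(volume, isovalues, out="cinema_index.json"):
            """Extract one isosurface per candidate isovalue and record the
            results in a small index, mimicking the render-everything-then-
            browse idea (Cinema itself renders and stores images)."""
            index = []
            for iso in isovalues:
                verts, faces, _, _ = measure.marching_cubes(volume, level=iso)
                index.append({"isovalue": float(iso),
                              "n_vertices": len(verts),
                              "n_triangles": len(faces)})
                # A real pipeline would render and store an image per isovalue.
            with open(out, "w") as f:
                json.dump(index, f, indent=2)

        vol = np.random.default_rng(0).random((64, 64, 64))
        sweep_isovalues(vol, np.linspace(0.2, 0.8, 7))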

  2. Earthscape, a Multi-Purpose Interactive 3d Globe Viewer for Hybrid Data Visualization and Analysis

    NASA Astrophysics Data System (ADS)

    Sarthou, A.; Mas, S.; Jacquin, M.; Moreno, N.; Salamon, A.

    2015-08-01

    The hybrid visualization and interaction tool EarthScape is presented here. The software is able to display simultaneously LiDAR point clouds, draped videos with moving footprints, volumetric scientific data (using volume rendering, isosurfaces and slice planes), raster data such as still satellite images, vector data, and 3D models such as buildings or vehicles. The application runs on touch-screen devices such as tablets. The software is based on open-source libraries such as OpenSceneGraph, osgEarth and OpenCV, and shader programming is used to implement volume rendering of scientific data. The next goal of EarthScape is to perform data analysis using ENVI Services Engine, a cloud data analysis solution. EarthScape is also designed to be a client of Jagwire, which provides multisource geo-referenced video streams. When all these components are included, EarthScape will be a multi-purpose platform that provides data analysis, hybrid visualization and complex interaction at the same time. The software is available on demand for free at france@exelisvis.com.

  3. Predictor-corrector schemes for visualization of smoothed particle hydrodynamics data.

    PubMed

    Schindler, Benjamin; Fuchs, Raphael; Biddiscombe, John; Peikert, Ronald

    2009-01-01

    In this paper we present a method for vortex core line extraction which operates directly on the smoothed particle hydrodynamics (SPH) representation and thereby generates smoother and more (spatially and temporally) coherent results in an efficient way. The underlying predictor-corrector scheme is general enough to be applied to other line-type features, and it is extendable to the extraction of surfaces such as isosurfaces or Lagrangian coherent structures. The proposed method exploits temporal coherence to speed up computation for subsequent time steps. We show how the predictor-corrector formulation can be specialized for several variants of vortex core line definitions, including two recent unsteady extensions, and we contribute a theoretical and practical comparison of these. In particular, we reveal a close relation between the unsteady extensions of Fuchs et al. and Weinkauf et al., and we give a proof of the Galilean invariance of the latter. When visualizing SPH data, it is possible to use the same interpolation method for visualization as was used for the simulation. This is different from the case of finite-volume simulation results, where it is not possible to recover from the results the spatial interpolation that was used during the simulation. Such data are typically interpolated using the basic trilinear interpolant, and if smoothness is required, some artificial processing is added. In SPH data, however, the smoothing kernels are specified by the simulation, and they provide an exact and smooth interpolation of data or gradients at arbitrary points in the domain.
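
    The predictor-corrector pattern itself is compact: predict a step along the feature line's tangent, then apply a few Newton corrector steps back onto the feature condition. A minimal sketch for the simplest line-type feature, a planar isoline f(x) = c (hypothetical names; the paper's vortex-core-line criteria and SPH kernels are not reproduced):

        import numpy as np

        def trace_isoline(f, grad, x0, c, step=0.05, n_steps=200, corrector_iters=3):
            """Predictor: advance along the curve tangent (perpendicular to
            the gradient). Corrector: Newton steps along the gradient back
            onto the curve f(x) = c."""
            pts = [np.asarray(x0, float)]
            for _ in range(n_steps):
                x = pts[-1]
                g = grad(x)
                tangent = np.array([-g[1], g[0]])              # rotate gradient 90 deg
                x = x + step * tangent / np.linalg.norm(tangent)   # predictor
                for _ in range(corrector_iters):               # corrector
                    g = grad(x)
                    x = x - (f(x) - c) * g / g.dot(g)          # Newton step to f = c
                pts.append(x)
            return np.array(pts)

        # Example: trace the unit circle x^2 + y^2 = 1.
        circle = trace_isoline(lambda p: p @ p, lambda p: 2 * p,
                               x0=[1.0, 0.0], c=1.0)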

  4. Visual geography

    USGS Publications Warehouse

    ,; ,; ,

    1991-01-01

    Maps are, among other things, a way of making geography visual. They are world views, ways of thinking, and ways of communicating. They depict our world and guide us through it. Visual Geography probes the essence of maps and mapmaking. It follows the story of cartography through the millennia, across the globe, and beyond the solar system. It includes some of the world's most beautiful and enduring maps, some of its most historic - a map in Columbus' hand, the map that was carried to the Moon, the first map to show America - and it examines the urge to map, to measure our world, and to record it graphically.

  5. UpSet: Visualization of Intersecting Sets

    PubMed Central

    Lex, Alexander; Gehlenborg, Nils; Strobelt, Hendrik; Vuillemot, Romain; Pfister, Hanspeter

    2016-01-01

    Understanding relationships between sets is an important analysis task that has received widespread attention in the visualization community. The major challenge in this context is the combinatorial explosion of the number of set intersections if the number of sets exceeds a trivial threshold. In this paper we introduce UpSet, a novel visualization technique for the quantitative analysis of sets, their intersections, and aggregates of intersections. UpSet is focused on creating task-driven aggregates, communicating the size and properties of aggregates and intersections, and providing a duality between the visualization of the elements in a dataset and their set membership. UpSet visualizes set intersections in a matrix layout and introduces aggregates based on groupings and queries. The matrix layout enables the effective representation of associated data, such as the number of elements in the aggregates and intersections, as well as additional summary statistics derived from subset or element attributes. Sorting according to various measures enables a task-driven analysis of relevant intersections and aggregates. The elements represented in the sets and their associated attributes are visualized in a separate view. Queries based on containment in specific intersections, aggregates or driven by attribute filters are propagated between both views. We also introduce several advanced visual encodings and interaction methods to overcome the problems of varying scales and to address scalability. UpSet is web-based and open source. We demonstrate its general utility in multiple use cases from various domains. PMID:26356912
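
    The quantity underlying an UpSet-style matrix is the size of each exclusive intersection, i.e. how many elements share exactly a given membership combination. A minimal sketch of that computation (hypothetical names, not UpSet's implementation):

        def intersection_sizes(named_sets):
            """Exclusive intersection sizes: every element is counted once,
            under its exact set-membership combination."""
            names = list(named_sets)
            universe = set().union(*named_sets.values())
            sizes = {}
            for elem in universe:
                combo = frozenset(n for n in names if elem in named_sets[n])
                sizes[combo] = sizes.get(combo, 0) + 1
            return sizes

        sets = {"A": {1, 2, 3, 4}, "B": {3, 4, 5}, "C": {4, 5, 6}}
        for combo, size in sorted(intersection_sizes(sets).items(),
                                  key=lambda kv: -kv[1]):
            print(sorted(combo), size)   # e.g. ['A'] 2, ['A', 'B'] 1, ...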

  6. Cascade category-aware visual search.

    PubMed

    Zhang, Shiliang; Tian, Qi; Huang, Qingming; Gao, Wen; Rui, Yong

    2014-06-01

    Incorporating image classification into an image retrieval system brings many attractive advantages. For instance, the search space can be narrowed down by rejecting images in categories irrelevant to the query. The retrieved images can be more consistent in semantics by indexing and returning images in the relevant categories together. However, due to their different goals on recognition accuracy and retrieval scalability, it is hard to efficiently incorporate most image classification works into large-scale image search. To study this problem, we propose cascade category-aware visual search, which utilizes weak category clues to achieve better retrieval accuracy, efficiency, and memory consumption. To capture the category and visual clues of an image, we first learn category-visual words, which are discriminative and repeatable local features labeled with categories. By identifying category-visual words in database images, we are able to discard noisy local features and extract image visual and category clues, which are then recorded in a hierarchical index structure. Our retrieval system narrows down the search space by: 1) filtering the noisy local features in the query; 2) rejecting irrelevant categories in the database; and 3) performing discriminative visual search in relevant categories. The proposed algorithm is tested on object search, landmark search, and large-scale similar image search on the large-scale LSVRC10 data set. Although the category clue introduced is weak, our algorithm still shows substantial advantages in retrieval accuracy, efficiency, and memory consumption over the state-of-the-art.

  7. Visual Hallucinations

    PubMed Central

    Cummings, Jeffrey L.; Miller, Bruce L.

    1987-01-01

    Visual hallucinations occur in diverse clinical circumstances including ophthalmologic diseases, neurologic disorders, toxic and metabolic disorders and idiopathic psychiatric illnesses. Their content, duration and timing relate to their cause and provide useful differential diagnostic information. Hallucinations must be distinguished from delusions and confabulation. A systematic approach to differentiating among hallucinatory syndromes may improve diagnostic accuracy. PMID:3825109

  8. Visualizing inequality

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo

    2016-07-01

    The study of socioeconomic inequality is of substantial importance, scientific and general alike. The graphic visualization of inequality is commonly conveyed by Lorenz curves. While Lorenz curves are a highly effective statistical tool for quantifying the distribution of wealth in human societies, they are a less effective tool for the visual depiction of socioeconomic inequality. This paper introduces an alternative to Lorenz curves: the hill curves. On the one hand, the hill curves are a potent scientific tool: they provide detailed scans of the rich-poor gaps in human societies under consideration, and are capable of accommodating infinitely many degrees of freedom. On the other hand, the hill curves are a powerful infographic tool: they visualize inequality in a most vivid and tangible way, requiring no quantitative skills of the viewer. The application of hill curves extends far beyond socioeconomic inequality. Indeed, the hill curves are highly effective 'hyperspectral' measures of statistical variability that are applicable in the context of size distributions at large. This paper establishes the notion of hill curves, analyzes them, and describes their application in the context of general size distributions.
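
    Hill curves are specific to this paper, but the Lorenz baseline it builds on is standard: sort wealth values, accumulate, and normalize. A small sketch, with the Gini index computed as twice the area between the diagonal and the curve:

      import numpy as np

      def lorenz_curve(wealth):
          # Return (population share p, cumulative wealth share L(p)).
          w = np.sort(np.asarray(wealth, dtype=float))
          p = np.arange(1, w.size + 1) / w.size
          L = np.cumsum(w) / w.sum()
          return np.insert(p, 0, 0.0), np.insert(L, 0, 0.0)

      def gini(wealth):
          p, L = lorenz_curve(wealth)
          area = np.sum((L[1:] + L[:-1]) * np.diff(p)) / 2.0  # trapezoid rule
          return 1.0 - 2.0 * area

      print(gini([1, 1, 1, 1]))             # 0.0: perfect equality
      print(round(gini([0, 0, 0, 10]), 2))  # 0.75: extreme inequality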

  9. City Forensics: Using Visual Elements to Predict Non-Visual City Attributes.

    PubMed

    Arietta, Sean M; Efros, Alexei A; Ramamoorthi, Ravi; Agrawala, Maneesh

    2014-12-01

    We present a method for automatically identifying and validating predictive relationships between the visual appearance of a city and its non-visual attributes (e.g. crime statistics, housing prices, population density etc.). Given a set of street-level images and (location, city-attribute-value) pairs of measurements, we first identify visual elements in the images that are discriminative of the attribute. We then train a predictor by learning a set of weights over these elements using non-linear Support Vector Regression. To perform these operations efficiently, we implement a scalable distributed processing framework that speeds up the main computational bottleneck (extracting visual elements) by an order of magnitude. This speedup allows us to investigate a variety of city attributes across 6 different American cities. We find that indeed there is a predictive relationship between visual elements and a number of city attributes including violent crime rates, theft rates, housing prices, population density, tree presence, graffiti presence, and the perception of danger. We also test human performance for predicting theft based on street-level images and show that our predictor outperforms this baseline with 33% higher accuracy on average. Finally, we present three prototype applications that use our system to (1) define the visual boundary of city neighborhoods, (2) generate walking directions that avoid or seek out exposure to city attributes, and (3) validate user-specified visual elements for prediction.
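
    The regression step can be sketched with off-the-shelf tools. The following stand-in fits an RBF Support Vector Regression on synthetic "visual element" scores; it illustrates the model class only, not the authors' distributed pipeline or data:

      import numpy as np
      from sklearn.svm import SVR
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      X = rng.random((500, 40))                 # rows: images; cols: element scores
      y = X @ rng.random(40) + 0.1 * rng.standard_normal(500)  # city attribute

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      model = SVR(kernel="rbf", C=10.0).fit(X_tr, y_tr)
      print("R^2 on held-out images:", round(model.score(X_te, y_te), 3))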

  10. Tunable electrophoretic separations using a scalable, fabric-based platform.

    PubMed

    Narahari, Tanya; Dendukuri, Dhananjaya; Murthy, Shashi K

    2015-02-17

    There is a rising need for low-cost and scalable platforms for sensitive medical diagnostic testing. Fabric weaving is a mature, scalable manufacturing technology and can be used as a platform to manufacture microfluidic diagnostic tests with controlled, tunable flow. Given its scalability, low manufacturing cost (<$0.25 per device), and potential for patterning multiplexed channel geometries, fabric is a viable platform for the development of analytical devices. In this paper, we describe a fabric-based electrophoretic platform for protein separation. Appropriate yarns were selected for each region of the device and woven into straight-channel electrophoretic chips in a single step. Analyte molecules spanning a wide dynamic range, from small-molecule dyes (<1 kDa) to macromolecular proteins (67-150 kDa), were separated in the device. Individual yarns behave as a chromatographic medium for electrophoresis. We therefore explored the effect of yarn and fabric parameters on separation resolution. Separation speed and resolution were enhanced by increasing the number of yarns per unit area of fabric and decreasing yarn hydrophilicity. However, for protein analytes that often require hydrophilic, passivated surfaces, these effects need to be properly tuned to achieve well-resolved separations. A fabric device tuned for protein separations was built and demonstrated. As an analytical output parameter for this device, the electrophoretic mobility of a sedimentation marker, Naphthol Blue Black bovine albumin in glycine-NaOH buffer at pH 8.58, was estimated to be -2.7 × 10⁻⁸ m² V⁻¹ s⁻¹. The ability to tune separation may be used to predefine regions in the fabric for successive preconcentrations and separations. The device may then be applied for the multiplexed detection of low-abundance proteins from complex biological samples such as serum and cell lysate.
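
    For readers unfamiliar with the reported unit, electrophoretic mobility is migration velocity divided by field strength. The worked example below uses invented geometry and timing values purely to show the arithmetic:

      # mu = v / E, with v the migration velocity and E the applied field.
      distance_m = 0.02     # migration distance (illustrative)
      time_s     = 600.0    # migration time (illustrative)
      voltage_V  = 200.0    # applied voltage (illustrative)
      channel_m  = 0.05     # channel length (illustrative)

      velocity = distance_m / time_s       # m/s
      field    = voltage_V / channel_m     # V/m
      print(f"mu = {velocity / field:.2e} m^2/(V s)")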

  11. VPLS: an effective technology for building scalable transparent LAN services

    NASA Astrophysics Data System (ADS)

    Dong, Ximing; Yu, Shaohua

    2005-02-01

    Virtual Private LAN Service (VPLS) is generating considerable interest with enterprises and service providers, as it offers multipoint transparent LAN service (TLS) over MPLS networks. This paper describes VPLS, an effective technology that links virtual switch instances (VSIs) through MPLS to form an emulated Ethernet switch and build scalable transparent LAN services. It first focuses on the architecture of VPLS, with Ethernet bridging at the edge and MPLS at the core; it then elucidates the data forwarding mechanism within a VPLS domain, including learning and aging of MAC addresses on a per-LSP basis, flooding of unknown frames, and replication of unknown, multicast, and broadcast frames. The loop-avoidance mechanism, known as split-horizon forwarding, is also analyzed. Another important aspect of VPLS, its basic operation, including autodiscovery and signaling, is discussed as well. From the perspective of efficiency and scalability, the paper compares two important signaling mechanisms, BGP and LDP, which are used to set up a PW between the PEs and bind the PWs to a particular VSI. As VPLS deployments grow, the full mesh of PWs between PE devices (n(n-1)/2 PWs in total, an O(n²) problem) means a VPLS instance can have a large number of remote PE associations, resulting in inefficient use of network bandwidth and system resources, since the ingress PE has to replicate each frame and append MPLS labels for every remote PE. The latter part of this paper therefore focuses on the scalability issue: Hierarchical VPLS (H-VPLS). Within the H-VPLS architecture, the paper addresses two ways to cope with a possibly large number of MAC addresses, which make VPLS operate more efficiently.
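
    The full-mesh growth that motivates H-VPLS is easy to quantify: n PE devices need n(n-1)/2 pseudowires. A quick check:

      def full_mesh_pws(n):
          # Pseudowires for a full mesh of n PE devices: n choose 2.
          return n * (n - 1) // 2

      for n in (10, 50, 200):
          print(n, "PEs ->", full_mesh_pws(n), "PWs")
      # 10 -> 45, 50 -> 1225, 200 -> 19900: this quadratic growth is why
      # H-VPLS introduces hierarchy instead of meshing every PE pair.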

  12. NeuroLines: A Subway Map Metaphor for Visualizing Nanoscale Neuronal Connectivity.

    PubMed

    Al-Awami, Ali K; Beyer, Johanna; Strobelt, Hendrik; Kasthuri, Narayanan; Lichtman, Jeff W; Pfister, Hanspeter; Hadwiger, Markus

    2014-12-01

    We present NeuroLines, a novel visualization technique designed for scalable detailed analysis of neuronal connectivity at the nanoscale level. The topology of 3D brain tissue data is abstracted into a multi-scale, relative distance-preserving subway map visualization that allows domain scientists to conduct an interactive analysis of neurons and their connectivity. Nanoscale connectomics aims at reverse-engineering the wiring of the brain. Reconstructing and analyzing the detailed connectivity of neurons and neurites (axons, dendrites) will be crucial for understanding the brain and its development and diseases. However, the enormous scale and complexity of nanoscale neuronal connectivity pose big challenges to existing visualization techniques in terms of scalability. NeuroLines offers a scalable visualization framework that can interactively render thousands of neurites, and that supports the detailed analysis of neuronal structures and their connectivity. We describe and analyze the design of NeuroLines based on two real-world use-cases of our collaborators in developmental neuroscience, and investigate its scalability to large-scale neuronal connectivity data. PMID:26356951

  13. A Collection of Visual Thesauri for Browsing Large Collections of Geographic Images.

    ERIC Educational Resources Information Center

    Ramsey, Marshall C.; Chen, Hsinchun; Zhu, Bin; Schatz, Bruce R.

    1999-01-01

    Discusses problems in creating indices and thesauri for digital libraries of geo-spatial multimedia content and proposes a scalable method to automatically generate visual thesauri of large collections of geo-spatial media using fuzzy, unsupervised machine-learning techniques. Uses satellite photographs as examples and discusses texture-based…

  14. Scalable synthesis of a prostaglandin EP4 receptor antagonist.

    PubMed

    Gauvreau, Danny; Dolman, Sarah J; Hughes, Greg; O'Shea, Paul D; Davies, Ian W

    2010-06-18

    The evolution of scalable, economically viable synthetic approaches to the potent and selective prostaglandin EP4 antagonist 1 is presented. The chromatography-free synthesis of multikilogram quantities of 1 using a seven-step sequence (six in the longest linear sequence) is described. This approach has been further modified in an effort to identify a long-term manufacturing route. Our final synthesis involves no step requiring cryogenic (< -25 degrees C) conditions; comprises a total of four steps, only three of which are in the longest linear synthesis; and features the use of two consecutive iron-catalyzed Friedel-Crafts substitutions.

  15. Robust and scalable optical one-way quantum computation

    SciTech Connect

    Wang Hefeng; Yang Chuiping; Nori, Franco

    2010-05-15

    We propose an efficient approach for deterministically generating scalable cluster states with photons. This approach involves unitary transformations performed on atoms coupled to optical cavities. Its operation cost scales linearly with the number of qubits in the cluster state, and photon qubits are encoded such that single-qubit operations can be easily implemented by using linear optics. Robust optical one-way quantum computation can be performed since cluster states can be stored in atoms and then transferred to photons that can be easily operated and measured. Therefore, this proposal could help in performing robust large-scale optical one-way quantum computation.

  16. A scalable parallel algorithm for multiple objective linear programs

    NASA Technical Reports Server (NTRS)

    Wiecek, Malgorzata M.; Zhang, Hong

    1994-01-01

    This paper presents an ADBASE-based parallel algorithm for solving multiple objective linear programs (MOLP's). Job balance, speedup and scalability are of primary interest in evaluating the efficiency of the new algorithm. Implementation results on Intel iPSC/2 and Paragon multiprocessors show that the algorithm significantly speeds up the process of solving MOLP's, which is understood as generating all or some efficient extreme points and unbounded efficient edges. The algorithm gives especially good results for large and very large problems. Motivation and justification for solving such large MOLP's are also included.

  17. Focal plane array with modular pixel array components for scalability

    DOEpatents

    Kay, Randolph R; Campbell, David V; Shinde, Subhash L; Rienstra, Jeffrey L; Serkland, Darwin K; Holmes, Michael L

    2014-12-09

    A modular, scalable focal plane array is provided as an array of integrated circuit dice, wherein each die includes a given amount of modular pixel array circuitry. The array of dice effectively multiplies the amount of modular pixel array circuitry to produce a larger pixel array without increasing die size. Desired pixel pitch across the enlarged pixel array is preserved by forming die stacks with each pixel array circuitry die stacked on a separate die that contains the corresponding signal processing circuitry. Techniques for die stack interconnections and die stack placement are implemented to ensure that the desired pixel pitch is preserved across the enlarged pixel array.

  18. A Scalable and Practical Authentication Protocol in Mobile IP

    NASA Astrophysics Data System (ADS)

    Lee, Yong; Lee, Goo-Yeon; Kim, Hwa-Long

    Due to the proliferation of mobile devices connected to the Internet, implementing a secure and practical Mobile IP has become an important goal. Mobile IP cannot work properly without authentication between the mobile node (MN), the home agent (HA) and the foreign agent (FA). In this paper, we propose a practical Mobile IP authentication protocol that uses public key cryptography only during the initial authentication. The proposed scheme is compatible with the conventional Mobile IP protocol and scales with the number of MNs. We also show that the proposed protocol offers secure operation.

  19. Simplex-stochastic collocation method with improved scalability

    NASA Astrophysics Data System (ADS)

    Edeling, W. N.; Dwight, R. P.; Cinnella, P.

    2016-04-01

    The Simplex-Stochastic Collocation (SSC) method is a robust tool used to propagate uncertain input distributions through a computer code. However, it becomes prohibitively expensive for problems with dimensions higher than 5. The main purpose of this paper is to identify bottlenecks and to improve upon this poor scalability. To do so, we propose an alternative interpolation stencil technique based upon the Set-Covering problem, and we integrate the SSC method into the High-Dimensional Model-Reduction framework. In addition, we address the issue of ill-conditioned sample matrices, and we present an analytical map to facilitate uniformly-distributed simplex sampling.

  20. Scalable brain network construction on white matter fibers

    NASA Astrophysics Data System (ADS)

    Chung, Moo K.; Adluru, Nagesh; Dalton, Kim M.; Alexander, Andrew L.; Davidson, Richard J.

    2011-03-01

    DTI offers a unique opportunity to characterize the structural connectivity of the human brain non-invasively by tracing white matter fiber tracts. Whole brain tractography studies routinely generate up to half a million tracts per brain, which serve as edges in an extremely large 3D graph with up to half a million edges. Currently there is no agreed-upon method for constructing brain structural network graphs out of such large numbers of white matter tracts. In this paper, we present a scalable iterative framework called the ɛ-neighbor method for building a network graph and apply it to testing abnormal connectivity in autism.
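
    The abstract does not spell out the construction, so the sketch below is only a rough reading of the general idea: merge a tract endpoint into an existing node when it lies within ɛ, otherwise create a new node, and add one edge per tract. The naive O(n²) nearest-node search and the exact merging rule are assumptions for illustration:

      import numpy as np

      def epsilon_neighbor_graph(tracts, eps):
          nodes, edges = [], set()

          def node_for(p):
              # Reuse the first existing node within eps, else create a new one.
              for i, q in enumerate(nodes):
                  if np.linalg.norm(p - q) <= eps:
                      return i
              nodes.append(p)
              return len(nodes) - 1

          for tract in tracts:              # each tract: array of 3D points
              a, b = node_for(tract[0]), node_for(tract[-1])
              if a != b:
                  edges.add((min(a, b), max(a, b)))
          return np.array(nodes), edges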

  1. pcircle - A Suite of Scalable Parallel File System Tools

    SciTech Connect

    WANG, FEIYI

    2015-10-01

    Most file system software is written for conventional local file systems; it is serialized and cannot take advantage of a large-scale parallel file system. The "pcircle" software builds on top of ubiquitous MPI in a cluster computing environment and the "work-stealing" pattern to provide a scalable, high-performance suite of file system tools. In particular, it implements parallel data copy and parallel data checksumming, with advanced features such as asynchronous progress reporting, checkpoint and restart, as well as integrity checking.

  2. Flexible quantum circuits using scalable continuous-variable cluster states

    NASA Astrophysics Data System (ADS)

    Alexander, Rafael N.; Menicucci, Nicolas C.

    2016-06-01

    We show that measurement-based quantum computation on scalable continuous-variable (CV) cluster states admits more quantum-circuit flexibility and compactness than similar protocols for standard square-lattice CV cluster states. This advantage is a direct result of the macronode structure of these states—that is, a lattice structure in which each graph node actually consists of several physical modes. These extra modes provide additional measurement degrees of freedom at each graph location, which can be used to manipulate the flow and processing of quantum information more robustly and with additional flexibility that is not available on an ordinary lattice.

  3. Scalable web services for the PSIPRED Protein Analysis Workbench.

    PubMed

    Buchan, Daniel W A; Minneci, Federico; Nugent, Tim C O; Bryson, Kevin; Jones, David T

    2013-07-01

    Here, we present the new UCL Bioinformatics Group's PSIPRED Protein Analysis Workbench. The Workbench unites all of our previously available analysis methods into a single web-based framework. The new web portal provides a greatly streamlined user interface with a number of new features to allow users to better explore their results. We offer a number of additional services to enable computationally scalable execution of our prediction methods; these include SOAP and XML-RPC web server access and new HADOOP packages. All software and services are available via the UCL Bioinformatics Group website at http://bioinf.cs.ucl.ac.uk/. PMID:23748958

  4. 2-D scalable optical controlled phased-array antenna system

    NASA Astrophysics Data System (ADS)

    Chen, Maggie Yihong; Howley, Brie; Wang, Xiaolong; Basile, Panoutsopoulos; Chen, Ray T.

    2006-02-01

    A novel optoelectronically-controlled wideband 2-D phased-array antenna system is demonstrated. The inclusion of WDM devices makes the system structure highly scalable: only (M+N) delay lines are required to control an M×N array. The optical true-time delay lines are a combination of polymer waveguides and optical switches, built on a single polymeric platform and monolithically integrated on a single substrate. The 16 time delays generated by the device are measured to range from 0 to 175 ps in 11.6-ps increments. Far-field patterns at different steering angles in X-band are measured.

  6. Scalability and Performance of a Large Linux Cluster

    SciTech Connect

    BRIGHTWELL,RONALD B.; PLIMPTON,STEVEN J.

    2000-01-20

    In this paper the authors present performance results from several parallel benchmarks and applications on a 400-node Linux cluster at Sandia National Laboratories. They compare the results on the Linux cluster to performance obtained on a traditional distributed-memory massively parallel processing machine, the Intel TeraFLOPS. They discuss the characteristics of these machines that influence the performance results and identify the key components of the system software that they feel are important to allow for scalability of commodity-based PC clusters to hundreds and possibly thousands of processors.

  7. Scalable Architecture for Multihop Wireless ad Hoc Networks

    NASA Technical Reports Server (NTRS)

    Arabshahi, Payman; Gray, Andrew; Okino, Clayton; Yan, Tsun-Yee

    2004-01-01

    A scalable architecture for wireless digital data and voice communications via ad hoc networks has been proposed. Although the details of the architecture and of its implementation in hardware and software have yet to be developed, the broad outlines of the architecture are fairly clear: This architecture departs from current commercial wireless communication architectures, which are characterized by low effective bandwidth per user and are not well suited to low-cost, rapid scaling in large metropolitan areas. This architecture is inspired by a vision more akin to that of more than two dozen noncommercial community wireless networking organizations established by volunteers in North America and several European countries.

  8. Scalable anti-Markovnikov hydrobromination of aliphatic and aromatic olefins.

    PubMed

    Galli, Marzia; Fletcher, Catherine J; Del Pozo, Marc; Goldup, Stephen M

    2016-06-28

    To improve access to a key synthetic intermediate we targeted a direct hydrobromination-Negishi route. Unsurprisingly, the anti-Markovnikov addition of HBr to estragole in the presence of AIBN proved successful. However, even in the absence of an added initiator, anti-Markovnikov addition was observed. Re-examination of early reports revealed that selective Markovnikov addition, often simply termed "normal" addition, is not always observed with HBr unless air is excluded, leading to the rediscovery of a reproducible and scalable initiator-free protocol. PMID:27185636

  9. Fast layout processing methodologies for scalable distributed computing applications

    NASA Astrophysics Data System (ADS)

    Kang, Chang-woo; Shin, Jae-pil; Durvasula, Bhardwaj; Seo, Sang-won; Jung, Dae-hyun; Lee, Jong-bae; Park, Young-kwan

    2012-06-01

    As the feature size shrinks to sub-20nm, more advanced OPC technologies such as ILT and the new lithographic resolution offered by EUV become the key solutions for device fabrication. These technologies lead to file-size explosion, with GDSII and OASIS files of up to hundreds of gigabytes, mainly due to the addition of complicated scattering bars and the flattening of the design to compensate for long-range effects. Splitting and merging layout files have typically been done sequentially in distributed computing layout applications. This portion becomes the bottleneck, and scalability suffers. According to Amdahl's law, minimizing the sequential portion is the key to maximum speedup. In this paper, we present scalable layout dividing and merging methodologies: skeleton-file-based querying and direct OASIS file merging. These methods not only use a very small memory footprint but also achieve remarkable speed improvements. The skeleton file concept is novel for a distributed application requiring geometrical processing, as it allows almost pseudo-random access into the input GDSII or OASIS file. Client machines can make use of the random access and perform fast query operations. The skeleton concept also works very well for flat input layouts, which is often the case for post-OPC data. Our OASIS file merging scheme is effectively a binary file concatenation scheme: it concatenates shape information in binary format with only basic interpretation of bits and very low memory usage. We have observed that the skeleton file concept achieved a 13.5-times speed improvement and used only 3.78% of the memory on the master, compared with the conventional approach of converting into an internal format. The merging speed is also very fast, 28 MB/sec, 44.5 times faster than the conventional method. On top of the fast merging speed, it is very scalable, since the merging time grows linearly.

  10. An Efficient, Scalable Content-Based Messaging System

    SciTech Connect

    Gorton, Ian; Almquist, Justin P.; Cramer, Nick O.; Haack, Jereme N.; Hoza, Mark

    2003-09-16

    Large-scale information processing environments must rapidly search through massive streams of raw data to locate useful information. These data streams contain textual and numeric data items, and may be highly structured or mostly freeform text. This project aims to create a high performance and scalable engine for locating relevant content in data streams. Based on the J2EE Java Messaging Service (JMS), the content-based messaging (CBM) engine provides highly efficient message formatting and filtering. This paper describes the design of the CBM engine, and presents empirical results that compare the performance with a standard JMS to demonstrate the performance improvements that are achieved.

  11. The scalable implementation of quantum walks using classical light

    NASA Astrophysics Data System (ADS)

    Goyal, Sandeep K.; Roux, F. S.; Forbes, Andrew; Konrad, Thomas

    2014-02-01

    A quantum walk is the quantum analog of the classical random walk. Despite their simple structure, quantum walks form a universal platform for implementing any quantum computation algorithm. However, it is very hard to realize quantum walks with a sufficient number of iterations in quantum systems due to their sensitivity to environmental influences and the resulting loss of coherence. Here we present a scalable implementation scheme for one-dimensional quantum walks for an arbitrary number of steps using the orbital angular momentum modes of classical light beams. Furthermore, we show that the same setup, with a minor adjustment, can also realize electric quantum walks.
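
    Independent of the optical implementation, the underlying discrete-time walk is easy to simulate classically for small step counts: apply a coin operation, then shift the position conditioned on the coin. A minimal numpy sketch with a Hadamard coin:

      import numpy as np

      def hadamard_walk(steps):
          # Amplitudes psi[position, coin] on a line of 2*steps + 1 sites.
          psi = np.zeros((2 * steps + 1, 2), dtype=complex)
          psi[steps] = [1 / np.sqrt(2), 1j / np.sqrt(2)]   # symmetric start
          H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard coin
          for _ in range(steps):
              psi = psi @ H.T                              # coin toss
              shifted = np.zeros_like(psi)
              shifted[1:, 0] = psi[:-1, 0]                 # coin 0 steps right
              shifted[:-1, 1] = psi[1:, 1]                 # coin 1 steps left
              psi = shifted
          return (np.abs(psi) ** 2).sum(axis=1)            # position distribution

      prob = hadamard_walk(50)
      print(prob.sum())   # ~1.0: the evolution is unitary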

  12. Scalable Implementation of Boson Sampling with Trapped Ions

    NASA Astrophysics Data System (ADS)

    Shen, C.; Zhang, Z.; Duan, L.-M.

    2014-02-01

    Boson sampling solves a classically intractable problem by sampling from a probability distribution given by matrix permanents. We propose a scalable implementation of boson sampling using local transverse phonon modes of trapped ions to encode the bosons. The proposed scheme allows deterministic preparation and high-efficiency readout of the bosons in the Fock states and universal mode mixing. With the state-of-the-art trapped ion technology, it is feasible to realize boson sampling with tens of bosons by this scheme, which would outperform the most powerful classical computers and constitute an effective disproof of the famous extended Church-Turing thesis.
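
    The classical hardness rests on the permanent: the best known exact algorithms, such as Ryser's formula below, still take exponential time. A direct, unoptimized transcription:

      from itertools import combinations

      def permanent(A):
          # Ryser's formula: O(2^n * n^2), still exponential, hence the hardness.
          n = len(A)
          total = 0.0
          for k in range(1, n + 1):
              for cols in combinations(range(n), k):
                  prod = 1.0
                  for row in A:
                      prod *= sum(row[c] for c in cols)
                  total += (-1) ** k * prod
          return (-1) ** n * total

      print(permanent([[1, 2], [3, 4]]))  # 1*4 + 2*3 = 10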

  13. Large scale visualization on the Cray XT3 using ParaView.

    SciTech Connect

    Rogers, David; Geveci, Berk; Eschenbert, Kent; Neundorf, Alexander; Marion, Patrick; Moreland, Kenneth D.; Greenfield, John

    2008-05-01

    Post-processing and visualization are key components to understanding any simulation. Porting ParaView, a scalable visualization tool, to the Cray XT3 allows our analysts to leverage the same supercomputer they use for simulation to perform post-processing. Visualization tools traditionally rely on a variety of rendering, scripting, and networking resources; the challenge of running ParaView on the Lightweight Kernel is to provide and use the visualization and post-processing features in the absence of many OS resources. We have successfully accomplished this at Sandia National Laboratories and the Pittsburgh Supercomputing Center.

  14. Visualizing Progress

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Reality Capture Technologies, Inc. is a spinoff company from Ames Research Center. Offering e-business solutions for optimizing management, design and production processes, RCT uses visual collaboration environments (VCEs) such as those used to prepare the Mars Pathfinder mission. The product, 4-D Reality Framework, allows multiple users from different locations to manage and share data. The insurance industry is one targeted commercial application for this technology.

  15. A unified toolkit for information and scientific visualization

    NASA Astrophysics Data System (ADS)

    Wylie, Brian; Baumes, Jeffrey

    2009-01-01

    We present an expansion of the popular open source Visualization Toolkit (VTK) to support the ingestion, processing, and display of informatics data. The result is a flexible, component-based pipeline framework for the integration and deployment of algorithms in the scientific and informatics fields. This project, code named "Titan", is one of the first efforts to address the unification of information and scientific visualization in a systematic fashion. The result includes a wide range of informatics-oriented functionality: database access, graph algorithms, graph layouts, views, charts, UI components and more. Further, the data distribution, parallel processing and client/server capabilities of VTK provide an excellent platform for scalable analysis.

  16. AstroVis: Visualizing astronomical data cubes

    NASA Astrophysics Data System (ADS)

    Finniss, Stephen; Tyler, Robin; Questiaux, Jacques

    2016-08-01

    AstroVis enables rapid visualization of large data files on platforms supporting the OpenGL rendering library. Radio astronomical observations are typically three dimensional and stored as data cubes. AstroVis implements a scalable approach to accessing these files using three components: a File Access Component (FAC) that reduces the impact of reading time, which speeds up access to the data; the Image Processing Component (IPC), which breaks up the data cube into smaller pieces that can be processed locally and gives a representation of the whole file; and Data Visualization, which implements an Overview + Detail approach to reduce the dimensions of the data being worked with and the amount of memory required to store it. The result is a 3D display paired with a 2D detail display that contains a small subsection of the original file in full resolution without reducing the data in any way.
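
    The Overview + Detail idea can be sketched independently of AstroVis's file-access machinery: block-average the cube for the overview and slice a full-resolution subcube for the detail view. All names and parameters below are illustrative:

      import numpy as np

      def overview_and_detail(cube, factor, center, half):
          # Overview: block-averaged cube; detail: full-resolution subcube.
          nz, ny, nx = (s - s % factor for s in cube.shape)
          ov = cube[:nz, :ny, :nx].reshape(nz // factor, factor,
                                           ny // factor, factor,
                                           nx // factor, factor).mean(axis=(1, 3, 5))
          sl = tuple(slice(max(c - half, 0), c + half) for c in center)
          return ov, cube[sl]

      cube = np.random.rand(128, 128, 128).astype(np.float32)
      ov, detail = overview_and_detail(cube, 8, center=(64, 64, 64), half=16)
      print(ov.shape, detail.shape)   # (16, 16, 16) (32, 32, 32)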

  17. A Principled Way of Assessing Visualization Literacy.

    PubMed

    Boy, Jeremy; Rensink, Ronald A; Bertini, Enrico; Fekete, Jean-Daniel

    2014-12-01

    We describe a method for assessing the visualization literacy (VL) of a user. Assessing how well people understand visualizations has great value for research (e.g., to avoid confounds), for design (e.g., to best determine the capabilities of an audience), for teaching (e.g., to assess the level of new students), and for recruiting (e.g., to assess the level of interviewees). This paper proposes a method for assessing VL based on Item Response Theory. It describes the design and evaluation of two VL tests for line graphs, and presents the extension of the method to bar charts and scatterplots. Finally, it discusses the reimplementation of these tests for fast, effective, and scalable web-based use. PMID:26356910
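
    Item Response Theory models the probability of a correct answer as a function of respondent ability and item parameters. Under the common two-parameter logistic (2PL) model (the paper's exact variant may differ):

      import numpy as np

      def p_correct(theta, a, b):
          # 2PL: ability theta, item discrimination a, item difficulty b.
          return 1.0 / (1.0 + np.exp(-a * (theta - b)))

      # An able reader (theta = 1.0) facing a hard item (b = 1.5, a = 1.2):
      print(round(p_correct(1.0, a=1.2, b=1.5), 3))   # ~0.354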

  19. Vials: Visualizing Alternative Splicing of Genes

    PubMed Central

    Strobelt, Hendrik; Alsallakh, Bilal; Botros, Joseph; Peterson, Brant; Borowsky, Mark; Pfister, Hanspeter; Lex, Alexander

    2016-01-01

    Alternative splicing is a process by which the same DNA sequence is used to assemble different proteins, called protein isoforms. Alternative splicing works by selectively omitting some of the coding regions (exons) typically associated with a gene. Detection of alternative splicing is difficult and uses a combination of advanced data acquisition methods and statistical inference. Knowledge about the abundance of isoforms is important for understanding both normal processes and diseases and to eventually improve treatment through targeted therapies. The data, however, is complex and current visualizations for isoforms are neither perceptually efficient nor scalable. To remedy this, we developed Vials, a novel visual analysis tool that enables analysts to explore the various datasets that scientists use to make judgments about isoforms: the abundance of reads associated with the coding regions of the gene, evidence for junctions, i.e., edges connecting the coding regions, and predictions of isoform frequencies. Vials is scalable as it allows for the simultaneous analysis of many samples in multiple groups. Our tool thus enables experts to (a) identify patterns of isoform abundance in groups of samples and (b) evaluate the quality of the data. We demonstrate the value of our tool in case studies using publicly available datasets. PMID:26529712

  20. Neutron generators with size scalability, ease of fabrication and multiple ion source functionalities

    DOEpatents

    Elizondo-Decanini, Juan M

    2014-11-18

    A neutron generator is provided with a flat, rectilinear geometry and surface mounted metallizations. This construction provides scalability and ease of fabrication, and permits multiple ion source functionalities.

  1. Massive graph visualization : LDRD final report.

    SciTech Connect

    Wylie, Brian Neil; Moreland, Kenneth D.

    2007-10-01

    Graphs are a vital way of organizing data with complex correlations. A good visualization of a graph can fundamentally change human understanding of the data. Consequently, there is a rich body of work on graph visualization. Although there are many techniques that are effective on small to medium sized graphs (tens of thousands of nodes), there is a void in the research for visualizing massive graphs containing millions of nodes. Sandia is one of the few entities in the world that has the means and motivation to handle data on such a massive scale. For example, homeland security generates graphs from prolific media sources such as television, telephone, and the Internet. The purpose of this project is to provide the groundwork for visualizing such massive graphs. The research provides for two major feature gaps: a parallel, interactive visualization framework and scalable algorithms to make the framework usable to a practical application. Both the frameworks and algorithms are designed to run on distributed parallel computers, which are already available at Sandia. Some features are integrated into the ThreatView™ application and future work will integrate further parallel algorithms.

  2. Declarative language design for interactive visualization.

    PubMed

    Heer, Jeffrey; Bostock, Michael

    2010-01-01

    We investigate the design of declarative, domain-specific languages for constructing interactive visualizations. By separating specification from execution, declarative languages can simplify development, enable unobtrusive optimization, and support retargeting across platforms. We describe the design of the Protovis specification language and its implementation within an object-oriented, statically-typed programming language (Java). We demonstrate how to support rich visualizations without requiring a toolkit-specific data model and extend Protovis to enable declarative specification of animated transitions. To support cross-platform deployment, we introduce rendering and event-handling infrastructures decoupled from the runtime platform, letting designers retarget visualization specifications (e.g., from desktop to mobile phone) with reduced effort. We also explore optimizations such as runtime compilation of visualization specifications, parallelized execution, and hardware-accelerated rendering. We present benchmark studies measuring the performance gains provided by these optimizations and compare performance to existing Java-based visualization tools, demonstrating scalability improvements exceeding an order of magnitude. PMID:20975153

  3. Efficient and Scalable Graph Similarity Joins in MapReduce

    PubMed Central

    Chen, Yifan; Zhang, Weiming; Tang, Jiuyang

    2014-01-01

    Along with the emergence of massive graph-modeled data, it is of great importance to investigate graph similarity joins due to their wide applications for multiple purposes, including data cleaning and near-duplicate detection. This paper considers graph similarity joins with edit distance constraints, which return pairs of graphs such that their edit distances are no larger than a given threshold. Leveraging the MapReduce programming model, we propose MGSJoin, a scalable algorithm following the filtering-verification framework for efficient graph similarity joins. It relies on counting overlapping graph signatures for filtering out nonpromising candidates. Since the filtering phase can produce too many key-value pairs, spectral Bloom filters are introduced to reduce their number. Furthermore, we integrate the multiway join strategy to boost the verification, where a MapReduce-based method is proposed for graph edit distance (GED) calculation. The superior efficiency and scalability of the proposed algorithms are demonstrated by extensive experimental results. PMID:25121135

  4. Scalable and sustainable electrochemical allylic C-H oxidation.

    PubMed

    Horn, Evan J; Rosen, Brandon R; Chen, Yong; Tang, Jiaze; Chen, Ke; Eastgate, Martin D; Baran, Phil S

    2016-05-01

    New methods and strategies for the direct functionalization of C-H bonds are beginning to reshape the field of retrosynthetic analysis, affecting the synthesis of natural products, medicines and materials. The oxidation of allylic systems has played a prominent role in this context as possibly the most widely applied C-H functionalization, owing to the utility of enones and allylic alcohols as versatile intermediates, and their prevalence in natural and unnatural materials. Allylic oxidations have featured in hundreds of syntheses, including some natural product syntheses regarded as "classics". Despite many attempts to improve the efficiency and practicality of this transformation, the majority of conditions still use highly toxic reagents (based around toxic elements such as chromium or selenium) or expensive catalysts (such as palladium or rhodium). These requirements are problematic in industrial settings; currently, no scalable and sustainable solution to allylic oxidation exists. This oxidation strategy is therefore rarely used for large-scale synthetic applications, limiting the adoption of this retrosynthetic strategy by industrial scientists. Here we describe an electrochemical C-H oxidation strategy that exhibits broad substrate scope, operational simplicity and high chemoselectivity. It uses inexpensive and readily available materials, and represents a scalable allylic C-H oxidation (demonstrated on 100 grams), enabling the adoption of this C-H oxidation strategy in large-scale industrial settings without substantial environmental impact.

  5. Scalable parallel distance field construction for large-scale applications

    DOE PAGESBeta

    Yu, Hongfeng; Xie, Jinrong; Ma, Kwan -Liu; Kolla, Hemanth; Chen, Jacqueline H.

    2015-10-01

    Computing distance fields is fundamental to many scientific and engineering applications. Distance fields can be used to direct analysis and reduce data. In this paper, we present a highly scalable method for computing 3D distance fields on massively parallel distributed-memory machines. A new distributed spatial data structure, named the parallel distance tree, is introduced to manage the level sets of data and facilitate surface tracking over time, resulting in significantly reduced computation and communication costs for calculating the distance to the surface of interest from any spatial location. Our method supports several data types and distance metrics from real-world applications. We demonstrate its efficiency and scalability on state-of-the-art supercomputers using both large-scale volume datasets and surface models. We also demonstrate in-situ distance field computation on dynamic turbulent flame surfaces for a petascale combustion simulation. In conclusion, our work greatly extends the usability of distance fields for demanding applications.
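
    On a single node the same quantity can be computed with standard tools, which clarifies what the paper parallelizes. A serial sketch of a signed distance field for a spherical isosurface using SciPy:

      import numpy as np
      from scipy import ndimage

      z, y, x = np.ogrid[:128, :128, :128]
      inside = (x - 64)**2 + (y - 64)**2 + (z - 64)**2 < 20**2  # sphere, r = 20

      # Euclidean distance to the surface, signed negative inside.
      signed = (ndimage.distance_transform_edt(~inside)
                - ndimage.distance_transform_edt(inside))
      print(signed[64, 64, 64], signed[64, 64, 0])  # ~-20 at center, ~+44 at edge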

  6. NOA: A Scalable Multi-Parent Clustering Hierarchy for WSNs

    SciTech Connect

    Cree, Johnathan V.; Delgado-Frias, Jose; Hughes, Michael A.; Burghard, Brion J.; Silvers, Kurt L.

    2012-08-10

    NOA is a multi-parent, N-tiered, hierarchical clustering algorithm that provides a scalable, robust and reliable solution to autonomous configuration of large-scale wireless sensor networks. The novel clustering hierarchy's inherent benefits can be utilized by in-network data processing techniques to provide equally robust, reliable and scalable in-network data processing solutions capable of reducing the amount of data sent to sinks. Utilizing a multi-parent framework, NOA reduces the cost of network setup when compared to hierarchical beaconing solutions by removing the expense of r-hop broadcasting (r is the radius of the cluster) needed to build the network and instead passes network topology information among shared children. NOA2, a two-parent clustering hierarchy solution, and NOA3, the three-parent variant, saw up to an 83% and 72% reduction in overhead, respectively, when compared to performing one round of a one-parent hierarchical beaconing, as well as 92% and 88% less overhead when compared to one round of two- and three-parent hierarchical beaconing hierarchy.

  7. Scalable Multiprocessor for High-Speed Computing in Space

    NASA Technical Reports Server (NTRS)

    Lux, James; Lang, Minh; Nishimoto, Kouji; Clark, Douglas; Stosic, Dorothy; Bachmann, Alex; Wilkinson, William; Steffke, Richard

    2004-01-01

    A report discusses the continuing development of a scalable multiprocessor computing system for hard real-time applications aboard a spacecraft. "Hard realtime applications" signifies applications, like real-time radar signal processing, in which the data to be processed are generated at "hundreds" of pulses per second, each pulse "requiring" millions of arithmetic operations. In these applications, the digital processors must be tightly integrated with analog instrumentation (e.g., radar equipment), and data input/output must be synchronized with analog instrumentation, controlled to within fractions of a microsecond. The scalable multiprocessor is a cluster of identical commercial-off-the-shelf generic DSP (digital-signal-processing) computers plus generic interface circuits, including analog-to-digital converters, all controlled by software. The processors are computers interconnected by high-speed serial links. Performance can be increased by adding hardware modules and correspondingly modifying the software. Work is distributed among the processors in a parallel or pipeline fashion by means of a flexible master/slave control and timing scheme. Each processor operates under its own local clock; synchronization is achieved by broadcasting master time signals to all the processors, which compute offsets between the master clock and their local clocks.

  10. On Eliminating Synchronous Communication in Molecular Simulations to Improve Scalability

    SciTech Connect

    Straatsma, TP; Chavarría-Miranda, Daniel

    2013-12-01

    Molecular dynamics simulation, as a complementary tool to experimentation, has become an important methodology for the understanding and design of molecular systems as it provides access to properties that are difficult, impossible or prohibitively expensive to obtain experimentally. Many of the available software packages have been parallelized to take advantage of modern massively concurrent processing resources. The challenge in achieving parallel efficiency is commonly attributed to the fact that molecular dynamics algorithms are communication intensive. This paper illustrates how an appropriately chosen data distribution and asynchronous one-sided communication approach can be used to effectively deal with the data movement within the Global Arrays/ARMCI programming model framework. A new put_notify capability is presented here, allowing the implementation of the molecular dynamics algorithm without any explicit global or local synchronization or global data reduction operations. In addition, this push-data model is shown to very effectively allow hiding data communication behind computation. Rather than data movement or explicit global reductions, the implicit synchronization of the algorithm becomes the primary challenge for scalability. Without any explicit synchronous operations, the scalability of molecular simulations is shown to depend only on the ability to evenly balance computational load.

  12. Scalable orbital-angular-momentum sorting without destroying photon states

    NASA Astrophysics Data System (ADS)

    Wang, Fang-Xiang; Chen, Wei; Yin, Zhen-Qiang; Wang, Shuang; Guo, Guang-Can; Han, Zheng-Fu

    2016-09-01

    Single photons with orbital angular momentum (OAM) have attracted substantial attention from researchers. A single photon can, in theory, carry infinitely many OAM values. Thus, OAM photon states have been widely used in quantum information and fundamental quantum mechanics. Although there are many methods for sorting quantum states with different OAM values, a nondestructive and efficient sorter for high-dimensional OAM remains a fundamental challenge. Here, we propose a scalable OAM sorter that can categorize different OAM states simultaneously while preserving both OAM and spin angular momentum. The fundamental elements of the sorter are symmetric multiport beam splitters (BSs) and Dove prisms in a cascading structure, which in principle can be flexibly and effectively combined to sort arbitrarily high-dimensional OAM photons. The scalable structures proposed here greatly reduce the number of BSs required for sorting high-dimensional OAM states. In view of their nondestructive and extensible features, the sorters can be used as fundamental devices not only for high-dimensional quantum information processing, but also for traditional optics.

  13. Cheetah: A Framework for Scalable Hierarchical Collective Operations

    SciTech Connect

    Graham, Richard L; Gorentla Venkata, Manjunath; Ladd, Joshua S; Shamis, Pavel; Rabinovitz, Ishai; Filipov, Vasily; Shainer, Gilad

    2011-01-01

    Collective communication operations, used by many scientific applications, tend to limit overall parallel application performance and scalability. Computer systems are becoming more heterogeneous with increasing node and core-per-node counts. Also, a growing number of data-access mechanisms, of varying characteristics, are supported within a single computer system. We describe a new hierarchical collective communication framework that takes advantage of hardware-specific data-access mechanisms. It is flexible, with run-time hierarchy specification and sharing of collective communication primitives between collective algorithms. Data buffers are shared between levels in the hierarchy, reducing collective communication management overhead. We have implemented several versions of the Message Passing Interface (MPI) collective operations, MPI_Barrier() and MPI_Bcast(), and run experiments using up to 49,152 processes on a Cray XT5 and a small InfiniBand-based cluster. At 49,152 processes our barrier implementation outperforms the optimized native implementation by 75%. 32-byte and one-megabyte broadcasts outperform it by 62% and 11%, respectively, with better scalability characteristics. Improvements relative to the default Open MPI implementation are much larger.
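
    Machine-specific details aside, the hierarchical pattern can be sketched with mpi4py: split ranks into per-node communicators plus a leader communicator, broadcast across leaders, then within each node. This is a minimal sketch of the pattern, not Cheetah's implementation:

      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      # One communicator per shared-memory node.
      node = comm.Split_type(MPI.COMM_TYPE_SHARED, key=comm.rank)
      # One "leader" (node rank 0) per node.
      leaders = comm.Split(0 if node.rank == 0 else MPI.UNDEFINED, comm.rank)

      def hierarchical_bcast(obj):
          # Stage 1: across node leaders (assumes global rank 0 is a leader).
          if leaders != MPI.COMM_NULL:
              obj = leaders.bcast(obj, root=0)
          # Stage 2: within each node.
          return node.bcast(obj, root=0)

      data = {"step": 42} if comm.rank == 0 else None
      print(comm.rank, hierarchical_bcast(data))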

  14. Scalable tuning of building models to hourly data

    DOE PAGESBeta

    Garrett, Aaron; New, Joshua Ryan

    2015-03-31

    Energy models of existing buildings are unreliable unless calibrated so they correlate well with actual energy usage. Manual tuning requires a skilled professional, is prohibitively expensive for small projects, imperfect, non-repeatable, non-transferable, and not scalable to the dozens of sensor channels that smart meters, smart appliances, and cheap/ubiquitous sensors are beginning to make available today. A scalable, automated methodology is needed to quickly and intelligently calibrate building energy models to all available data, increase the usefulness of those models, and facilitate speed-and-scale penetration of simulation-based capabilities into the marketplace for actualized energy savings. The "Autotune" project is a novel, model-agnostic methodology which leverages supercomputing, large simulation ensembles, and big data mining with multiple machine learning algorithms to allow automatic calibration of simulations that match measured experimental data in a way that is deployable on commodity hardware. This paper shares several methodologies employed to reduce the combinatorial complexity to a computationally tractable search problem for hundreds of input parameters. Furthermore, accuracy metrics are provided which quantify model error to measured data for either monthly or hourly electrical usage from a highly-instrumented, emulated-occupancy research home.
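
    One common accuracy metric for such calibration is the coefficient of variation of the RMSE between simulated and measured usage (as in ASHRAE Guideline 14); whether Autotune reports exactly this variant is an assumption here. A compact sketch with illustrative numbers:

      import numpy as np

      def cvrmse(measured, simulated):
          # CV(RMSE) in percent: RMSE normalized by the mean measured value.
          measured, simulated = np.asarray(measured), np.asarray(simulated)
          rmse = np.sqrt(np.mean((simulated - measured) ** 2))
          return 100.0 * rmse / measured.mean()

      meter = np.array([1.2, 0.9, 1.4, 2.0, 1.7])   # hourly kWh, illustrative
      model = np.array([1.1, 1.0, 1.5, 1.8, 1.9])
      print(round(cvrmse(meter, model), 1), "%")    # ~10.3 %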

  15. Scalable tuning of building models to hourly data

    SciTech Connect

    Garrett, Aaron; New, Joshua Ryan

    2015-03-31

    Energy models of existing buildings are unreliable unless calibrated so they correlate well with actual energy usage. Manual tuning requires a skilled professional, is prohibitively expensive for small projects, imperfect, non-repeatable, non-transferable, and not scalable to the dozens of sensor channels that smart meters, smart appliances, and cheap/ubiquitous sensors are beginning to make available today. A scalable, automated methodology is needed to quickly and intelligently calibrate building energy models to all available data, increase the usefulness of those models, and facilitate speed-and-scale penetration of simulation-based capabilities into the marketplace for actualized energy savings. The "Autotune" project is a novel, model-agnostic methodology which leverages supercomputing, large simulation ensembles, and big data mining with multiple machine learning algorithms to allow automatic calibration of simulations that match measured experimental data in a way that is deployable on commodity hardware. This paper shares several methodologies employed to reduce the combinatorial complexity to a computationally tractable search problem for hundreds of input parameters. Furthermore, accuracy metrics are provided which quantify model error to measured data for either monthly or hourly electrical usage from a highly-instrumented, emulated-occupancy research home.
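
    The flavor of such automated calibration can be sketched in a few lines: search the model's input-parameter space for the set whose simulated hourly output best matches measurements. A minimal sketch follows; it is not the Autotune project's code, and the simulate() stub and parameter names are invented stand-ins for an EnergyPlus-style building model.

        # Illustrative random-search calibration against hourly measurements.
        import math
        import random

        def rmse(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

        def calibrate(simulate, measured, param_ranges, n_trials=1000, seed=0):
            """Random search over model inputs, keeping the best-matching set."""
            rng = random.Random(seed)
            best, best_err = None, float("inf")
            for _ in range(n_trials):
                params = {k: rng.uniform(lo, hi) for k, (lo, hi) in param_ranges.items()}
                err = rmse(simulate(params), measured)
                if err < best_err:
                    best, best_err = params, err
            return best, best_err

        def toy_model(p):
            # One week of hourly usage: base load plus a daily HVAC cycle.
            return [p["base"] + p["gain"] * math.sin(2 * math.pi * (h % 24) / 24)
                    for h in range(168)]

        measured = toy_model({"base": 1.2, "gain": 0.8})
        print(calibrate(toy_model, measured, {"base": (0, 5), "gain": (0, 2)}))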

  16. The Node Monitoring Component of a Scalable Systems Software Environment

    SciTech Connect

    Miller, Samuel James

    2006-01-01

    This research describes Fountain, a suite of programs used to monitor the resources of a cluster. A cluster is a collection of individual computers that are connected via a high-speed communication network. They are traditionally used by users who desire more resources, such as processing power and memory, than any single computer can provide. A common drawback to effectively utilizing such a large-scale system is the management infrastructure, which often does not scale well as the system grows. Large-scale parallel systems provide new research challenges in the area of systems software, the programs or tools that manage the system from boot-up to running a parallel job. The approach presented in this thesis utilizes a collection of separate components that communicate with each other to achieve a common goal. While systems software comprises a broad array of components, this thesis focuses on the design choices for a node monitoring component. We will describe Fountain, an implementation of the Scalable Systems Software (SSS) node monitor specification. It is targeted at aggregate node monitoring for clusters, focusing on both scalability and fault tolerance as its design goals. It leverages widely used technologies such as XML and HTTP to present an interface to other components in the SSS environment.

  17. Facilitating Image Search With a Scalable and Compact Semantic Mapping.

    PubMed

    Wang, Meng; Li, Weisheng; Liu, Dong; Ni, Bingbing; Shen, Jialie; Yan, Shuicheng

    2015-08-01

    This paper introduces a novel approach to facilitating image search based on a compact semantic embedding. A novel method is developed to explicitly map concepts and image contents into a unified latent semantic space for the representation of semantic concept prototypes. Then, a linear embedding matrix is learned that maps images into the semantic space, such that each image is closer to its relevant concept prototype than to other prototypes. In our approach, query keywords are equated with semantic concepts, and the images mapped into the vicinity of the corresponding prototype are retrieved. In addition, a computationally efficient method is introduced to incorporate new semantic concept prototypes into the semantic space by updating the embedding matrix. This improves the scalability of the method and allows it to be applied to dynamic image repositories. Therefore, the proposed approach not only narrows the semantic gap but also supports an efficient image search process. We have carried out extensive experiments on various cross-modality image search tasks over three widely-used benchmark image datasets. Results demonstrate the superior effectiveness, efficiency, and scalability of our proposed approach.
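
    A minimal sketch of retrieval in such a shared semantic space follows. It assumes the linear embedding W has already been learned; the paper's training objective is not reproduced here, and all names are illustrative.

        # Sketch of prototype-based retrieval in a shared semantic space.
        import numpy as np

        def retrieve(image_features, W, prototype, k=5):
            """Return indices of the k images nearest the concept prototype."""
            emb = image_features @ W                            # (n, d_sem)
            emb /= np.linalg.norm(emb, axis=1, keepdims=True)   # for cosine sim
            proto = prototype / np.linalg.norm(prototype)
            return np.argsort(-(emb @ proto))[:k]

        rng = np.random.default_rng(0)
        X = rng.normal(size=(100, 64))   # 100 images, 64-d visual features
        W = rng.normal(size=(64, 16))    # learned map into a 16-d semantic space
        print(retrieve(X, W, rng.normal(size=16)))

    Adding a new concept then amounts to inserting its prototype and updating W, which is what makes the scheme amenable to dynamic repositories.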

  18. Rate distortion analysis for spatially scalable video coding.

    PubMed

    Zhang, Rong; Comer, Mary L

    2010-11-01

    In this paper, we derive rate distortion lower bounds for spatially scalable video coding techniques. The methods we evaluate are subband and pyramid motion compensation, where temporal redundancies in the same spatial layer as well as interlayer spatial redundancies are exploited in the enhancement layer encoding. The rate distortion bounds are derived from rate distortion theory for stationary Gaussian signals, with mean square error as the distortion criterion. Assuming that the base layer is encoded by a non-scalable video coder, we derive the rate distortion functions for the enhancement layer, which depend on the power spectral density of the input signal, the motion prediction error probability density function, and the base layer encoding performance. We show that pyramid and subband methods are expected to outperform independent encoding of the enhancement layer using motion-compensated prediction, in terms of rate distortion efficiency, when the base layer is encoded at relatively high quality or when displacement estimation in the enhancement layer is less accurate. PMID:20519155
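
    For context, these are the classical results such bounds build on (standard rate distortion theory, not the paper's derived enhancement-layer bounds): for a memoryless Gaussian source of variance σ² under mean square error,

        R(D) \;=\; \max\!\Big(0,\; \tfrac{1}{2}\log_2\frac{\sigma^2}{D}\Big),

    and for a stationary Gaussian source with power spectral density S(ω), the reverse-water-filling parametric form

        D_\theta \;=\; \frac{1}{2\pi}\int_{-\pi}^{\pi}\min\big(\theta,\,S(\omega)\big)\,d\omega,
        \qquad
        R_\theta \;=\; \frac{1}{2\pi}\int_{-\pi}^{\pi}\max\!\Big(0,\;\tfrac{1}{2}\log_2\frac{S(\omega)}{\theta}\Big)\,d\omega.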

  19. Scalable and sustainable electrochemical allylic C-H oxidation

    NASA Astrophysics Data System (ADS)

    Horn, Evan J.; Rosen, Brandon R.; Chen, Yong; Tang, Jiaze; Chen, Ke; Eastgate, Martin D.; Baran, Phil S.

    2016-05-01

    New methods and strategies for the direct functionalization of C-H bonds are beginning to reshape the field of retrosynthetic analysis, affecting the synthesis of natural products, medicines and materials. The oxidation of allylic systems has played a prominent role in this context as possibly the most widely applied C-H functionalization, owing to the utility of enones and allylic alcohols as versatile intermediates, and their prevalence in natural and unnatural materials. Allylic oxidations have featured in hundreds of syntheses, including some natural product syntheses regarded as “classics”. Despite many attempts to improve the efficiency and practicality of this transformation, the majority of conditions still use highly toxic reagents (based around toxic elements such as chromium or selenium) or expensive catalysts (such as palladium or rhodium). These requirements are problematic in industrial settings; currently, no scalable and sustainable solution to allylic oxidation exists. This oxidation strategy is therefore rarely used for large-scale synthetic applications, limiting the adoption of this retrosynthetic strategy by industrial scientists. Here we describe an electrochemical C-H oxidation strategy that exhibits broad substrate scope, operational simplicity and high chemoselectivity. It uses inexpensive and readily available materials, and represents a scalable allylic C-H oxidation (demonstrated on 100 grams), enabling the adoption of this C-H oxidation strategy in large-scale industrial settings without substantial environmental impact.

  20. Scalable and Fault Tolerant Failure Detection and Consensus

    SciTech Connect

    Katti, Amogh; Di Fatta, Giuseppe; Naughton III, Thomas J; Engelmann, Christian

    2015-01-01

    Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, that lets the surviving processes agree on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a fault tolerant failure detection and consensus algorithm. This paper presents and compares two novel failure detection and consensus algorithms. The proposed algorithms are based on Gossip protocols and are inherently fault-tolerant and scalable. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in both algorithms the number of Gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and perfect synchronization in achieving global consensus.
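
    The logarithmic scaling can be seen in miniature with a toy push-gossip simulation: count the cycles until every surviving process holds the full failed set. This sketch is ours and is neither of the paper's two algorithms nor the Extreme-scale Simulator.

        # Toy push-gossip dissemination of a failed-process list.
        import random

        def gossip_cycles(n_procs, failed, fanout=2, seed=0):
            rng = random.Random(seed)
            alive = [p for p in range(n_procs) if p not in failed]
            views = {p: set() for p in alive}
            views[alive[0]] = set(failed)   # one process detects the failures
            cycles = 0
            while any(views[p] != failed for p in alive):
                cycles += 1
                for p in alive:
                    for q in rng.sample(alive, fanout):
                        views[q] |= views[p]   # push this view to random peers
            return cycles

        for n in (64, 256, 1024, 4096):
            print(n, gossip_cycles(n, failed={7}))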

  1. ISMuS: interactive, scalable, multimedia streaming platform

    NASA Astrophysics Data System (ADS)

    Cha, Jihun; Kim, Hyun-Cheol; Jeong, Seyoon; Kim, Kyuheon; Patrikakis, Charalampos; van der Schaar, Mihaela

    2005-08-01

    Technical evolution in the field of information technology has changed many aspects of industry and daily life, with Internet and broadcasting technologies acting as core ingredients of this revolution. Various new services that were previously impossible are now available to the general public. Multimedia service over IP networks has become one of the most easily accessible services. Advances in Internet services, constantly increasing network bandwidth, and the evolution of multimedia technologies have caused demand for multimedia streaming services to grow explosively, and the Internet has become deluged with multimedia traffic. Although multimedia streaming services are now indispensable, the quality of a multimedia service over the Internet cannot be technically guaranteed. Users increasingly demand multimedia service whose quality is competitive with traditional TV broadcasting and that offers additional functionalities, including interactivity, scalability, and adaptability. Multimedia comprising these ancillary functionalities is often called rich media. To satisfy the aforementioned requirements, the Interactive Scalable Multimedia Streaming (ISMuS) platform was designed and developed. In this paper, the architecture, implementation, and additional functionalities of the ISMuS platform are presented. The platform is capable of providing user interaction based on MPEG-4 Systems technology [1] and supporting efficient multimedia distribution through an overlay network technology. Loaded with feature-rich technologies, the platform can serve both on-demand and broadcast-like rich-media services.

  2. Complexity scalable motion estimation for H.264/AVC

    NASA Astrophysics Data System (ADS)

    Kim, Changsung; Xin, Jun; Vetro, Anthony; Kuo, C.-C. Jay

    2006-01-01

    A new complexity-scalable framework for motion estimation is proposed to efficiently reduce the motion complexity of the encoding process, focusing in this work on the long-term-memory motion-compensated prediction of the H.264 video coding standard. The objective is to provide a complexity-scalable scheme for a given motion estimation algorithm such that it reduces the encoding complexity to the desired level with an insignificant penalty in rate-distortion performance. In principle, the proposed algorithm adaptively allocates the available motion-complexity budget to each macroblock based on its estimated impact on overall rate-distortion (RD) performance, subject to the given encoding time limit. To estimate the macroblock-wise tradeoff between RD coding gain (J) and motion complexity (C), the correlation of the J-C curve between the current macroblock and the collocated macroblock in the previous frame is exploited to predict the initial motion-complexity budget of the current macroblock. The initial budget is assigned adaptively to each block size and block partition in succession, and the motion-complexity budget is updated at the end of every encoding unit for the remaining ones. Experiments show that the proposed J-C slope-based allocation outperforms a uniform motion-complexity allocation scheme in terms of the RDC tradeoff. Experimental results demonstrate that the proposed algorithm can reduce H.264 motion estimation complexity to the desired level with little degradation in rate-distortion performance.
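
    The allocation idea reduces to a familiar greedy rule: spend complexity where the estimated gain-per-complexity (J-C) slope is steepest. A toy sketch under that reading (the names and flat per-block slopes are our simplification, not the paper's encoder):

        # Toy slope-based complexity allocation across macroblocks.
        def allocate_budget(macroblocks, total_budget):
            """macroblocks: list of (mb_id, jc_slope, useful_max_complexity)."""
            grants = {mb_id: 0 for mb_id, _, _ in macroblocks}
            budget = total_budget
            for mb_id, slope, cap in sorted(macroblocks, key=lambda m: -m[1]):
                step = min(cap, budget)   # steepest slopes get budget first
                grants[mb_id] = step
                budget -= step
                if budget == 0:
                    break
            return grants

        print(allocate_budget([("mb0", 5.0, 40), ("mb1", 1.0, 40), ("mb2", 3.0, 40)], 60))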

  3. Mobile Virtual Reality : A Solution for Big Data Visualization

    NASA Astrophysics Data System (ADS)

    Marshall, E.; Seichter, N. D.; D'sa, A.; Werner, L. A.; Yuen, D. A.

    2015-12-01

    Pursuits in the geological sciences and other branches of quantitative science often require data visualization frameworks that are in continual need of improvement and new ideas. Virtual reality is a visualization medium with a large audience, originally designed for gaming. Virtual reality can be delivered in CAVE-like environments, but these are unwieldy and expensive to maintain. Recent efforts by major companies such as Facebook have focused on a larger market; the Oculus Rift is the first of this new class of mobile devices. The Unity engine makes it possible to convert data files into a mesh of isosurfaces rendered in 3D. A user is immersed inside the virtual reality and is able to move within and around the data using arrow keys and other steering devices, similar to those employed on an Xbox. With the introduction of products like the Oculus Rift and HoloLens, combined with ever-increasing mobile computing strength, mobile virtual reality data visualization can be implemented for better analysis of 3D geological and mineralogical data sets. As new products like the Surface Pro 4 and other high-powered yet very mobile computers are introduced to the market, the RAM and graphics card capacity necessary to run these models is increasingly available, opening doors to this new reality. The computing requirements needed to run these models are a mere 8 GB of RAM and a 2 GHz CPU, which many mobile computers now exceed. Using the Unity 3D software to create a virtual environment containing a visual representation of the data, any data set converted into FBX or OBJ format can be traversed by wearing the Oculus Rift device. This new method for analysis, in conjunction with 3D scanning, has potential applications in many fields, including the analysis of precious stones or jewelry. Using hologram technology to capture in high resolution the 3D shape, color, and imperfections of minerals and stones, detailed review and

  4. Reimagining the microscope in the 21st century using the scalable adaptive graphics environment

    PubMed Central

    Mateevitsi, Victor; Patel, Tushar; Leigh, Jason; Levy, Bruce

    2015-01-01

    Background: Whole-slide imaging (WSI), while technologically mature, remains in the early adopter phase of the technology adoption lifecycle. One reason for this current situation is that current methods of visualizing and using WSI closely follow long-existing workflows for glass slides. We set out to “reimagine” the digital microscope in the era of cloud computing by combining WSI with the rich collaborative environment of the Scalable Adaptive Graphics Environment (SAGE). SAGE is a cross-platform, open-source visualization and collaboration tool that enables users to access, display and share a variety of data-intensive information, in a variety of resolutions and formats, from multiple sources, on display walls of arbitrary size. Methods: A prototype of a WSI viewer app in the SAGE environment was created. While not full featured, it enabled the testing of our hypothesis that these technologies could be blended together to change the essential nature of how microscopic images are utilized for patient care, medical education, and research. Results: Using the newly created WSI viewer app, demonstration scenarios were created for patient care and medical education. These included a live demonstration of a pathology consultation at the International Academy of Digital Pathology meeting in Boston in November 2014. Conclusions: SAGE is well suited to display, manipulate and collaborate using WSIs, along with other images and data, for a variety of purposes. It goes beyond how glass slides and current WSI viewers are being used today, changing the nature of digital pathology in the process. A fully developed WSI viewer app within SAGE has the potential to encourage the wider adoption of WSI throughout pathology. PMID:26110092

  5. STRING 3: An Advanced Groundwater Flow Visualization Tool

    NASA Astrophysics Data System (ADS)

    Schröder, Simon; Michel, Isabel; Biedert, Tim; Gräfe, Marius; Seidel, Torsten; König, Christoph

    2016-04-01

    The visualization of 3D groundwater flow is a challenging task. Previous versions of our software STRING [1] solely focused on intuitive visualization of complex flow scenarios for non-professional audiences. STRING, developed by Fraunhofer ITWM (Kaiserslautern, Germany) and delta h Ingenieurgesellschaft mbH (Witten, Germany), provides the necessary means for visualization of both 2D and 3D data on planar and curved surfaces. In this contribution we discuss how to extend this approach to a full 3D tool, and the challenges that entails, in continuation of Michel et al. [2]. This elevates STRING from a post-production tool to an exploration tool for experts. In STRING, moving pathlets provide an intuition of the velocity and direction of both steady-state and transient flows. The visualization concept is based on the Lagrangian view of the flow. To capture every detail of the flow, an advanced method for intelligent, time-dependent seeding is used, building on the Finite Pointset Method (FPM) developed by Fraunhofer ITWM. Lifting our visualization approach from 2D into 3D raises many new challenges. With the implementation of a seeding strategy for 3D, one of the major problems has already been solved (see Schröder et al. [3]). As pathlets only provide an overview of the velocity field, other means are required for the visualization of additional flow properties. We suggest the use of Direct Volume Rendering and isosurfaces for scalar features. In this regard we were able to develop an efficient approach for combining the raytraced rendering of the volume with regular OpenGL geometries. This is achieved through the use of Depth Peeling or A-Buffers for the rendering of transparent geometries. Animation of pathlets requires a strict boundary of the simulation domain. Hence, STRING needs to extract the boundary, even from unstructured data, if it is not provided. In 3D we additionally need a good visualization of the boundary itself. For this, the silhouette based on the angle of

  6. Visual bioethics.

    PubMed

    Lauritzen, Paul

    2008-12-01

    Although images are pervasive in public policy debates in bioethics, few who work in the field attend carefully to the way that images function rhetorically. If the use of images is discussed at all, it is usually to dismiss appeals to images as a form of manipulation. Yet it is possible to speak meaningfully of visual arguments. Examining the appeal to images of the embryo and fetus in debates about abortion and stem cell research, I suggest that bioethicists would be well served by attending much more carefully to how images function in public policy debates. PMID:19085479

  7. Top Ten Interaction Challenges in Extreme-Scale Visual Analytics

    SciTech Connect

    Wong, Pak C.; Shen, Han-Wei; Chen, Chaomei

    2012-05-31

    The chapter presents ten selected user-interface and interaction challenges in extreme-scale visual analytics. The study of visual analytics is often referred to in the literature as 'the science of analytical reasoning facilitated by interactive visual interfaces'. The discussion focuses on the issues of applying visual analytics technologies to extreme-scale scientific and non-scientific data ranging from petabytes to exabytes in size. The ten challenges are: in situ interactive analysis, user-driven data reduction, scalability and multi-level hierarchy, representation of evidence and uncertainty, heterogeneous data fusion, data summarization and triage for interactive query, analytics of temporally evolving features, the human bottleneck, design and engineering development, and the renaissance of conventional wisdom. The discussion addresses concerns that arise from different areas of hardware, software, computation, algorithms, and human factors. The chapter also evaluates the likelihood of success in meeting these challenges in the near future.

  8. Visualizing confusion matrices for multidimensional signal detection correlational methods

    NASA Astrophysics Data System (ADS)

    Zhou, Yue; Wischgoll, Thomas; Blaha, Leslie M.; Smith, Ross; Vickery, Rhonda J.

    2013-12-01

    Advances in modeling and simulation for General Recognition Theory have produced more data than can be easily visualized using traditional techniques. In this area of psychological modeling, domain experts are struggling to find effective ways to compare large-scale simulation results. This paper describes methods that adapt the web-based D3 visualization framework combined with pre-processing tools to enable domain specialists to more easily interpret their data. The D3 framework utilizes Javascript and scalable vector graphics (SVG) to generate visualizations that can run readily within the web browser for domain specialists. Parallel coordinate plots and heat maps were developed for identification-confusion matrix data, and the results were shown to a GRT expert for an informal evaluation of their utility. There is a clear benefit to model interpretation from these visualizations when researchers need to interpret larger amounts of simulated data.
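
    The paper builds its views with D3 in the browser; as a rough stand-in for readers who want to reproduce the idea, the same identification-confusion matrix can be rendered as a heat map in a few lines of Python (the class labels and counts below are invented):

        # A confusion-matrix heat map, analogous to the paper's D3 view.
        import numpy as np
        import matplotlib.pyplot as plt

        labels = ["A", "B", "C", "D"]
        conf = np.array([[50, 3, 1, 0],
                         [4, 44, 5, 2],
                         [2, 6, 47, 3],
                         [0, 1, 4, 52]])   # rows: stimulus, columns: response

        fig, ax = plt.subplots()
        im = ax.imshow(conf, cmap="viridis")
        ax.set_xticks(range(len(labels)))
        ax.set_xticklabels(labels)
        ax.set_yticks(range(len(labels)))
        ax.set_yticklabels(labels)
        ax.set_xlabel("Response")
        ax.set_ylabel("Stimulus")
        fig.colorbar(im, ax=ax, label="Count")
        plt.show()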

  9. Scalable graphene production: perspectives and challenges of plasma applications.

    PubMed

    Levchenko, Igor; Ostrikov, Kostya Ken; Zheng, Jie; Li, Xingguo; Keidar, Michael; B K Teo, Kenneth

    2016-05-19

    Graphene, a newly discovered and extensively investigated material, has many unique and extraordinary properties which promise major technological advances in fields ranging from electronics to mechanical engineering and food production. Unfortunately, complex techniques and high production costs hinder commonplace applications. Scaling of existing graphene production techniques to the industrial level without compromising its properties is a current challenge. This article focuses on the perspectives and challenges of scalability, equipment, and technological perspectives of the plasma-based techniques which offer many unique possibilities for the synthesis of graphene and graphene-containing products. The plasma-based processes are amenable for scaling and could also be useful to enhance the controllability of the conventional chemical vapour deposition method and some other techniques, and to ensure a good quality of the produced graphene. We examine the unique features of the plasma-enhanced graphene production approaches, including the techniques based on inductively-coupled and arc discharges, in the context of their potential scaling to mass production following the generic scaling approaches applicable to the existing processes and systems. This work analyses a large amount of the recent literature on graphene production by various techniques and summarizes the results in a tabular form to provide a simple and convenient comparison of several available techniques. Our analysis reveals a significant potential of scalability for plasma-based technologies, based on the scaling-related process characteristics. Among other processes, a greater yield of 1 g × h⁻¹ m⁻² was reached for the arc discharge technology, whereas the other plasma-based techniques show process yields comparable to the neutral-gas based methods. Selected plasma-based techniques show lower energy consumption than in thermal CVD processes, and the ability to produce graphene flakes of

  10. Scalable graphene production: perspectives and challenges of plasma applications.

    PubMed

    Levchenko, Igor; Ostrikov, Kostya Ken; Zheng, Jie; Li, Xingguo; Keidar, Michael; B K Teo, Kenneth

    2016-05-19

    Graphene, a newly discovered and extensively investigated material, has many unique and extraordinary properties which promise major technological advances in fields ranging from electronics to mechanical engineering and food production. Unfortunately, complex techniques and high production costs hinder commonplace applications. Scaling of existing graphene production techniques to the industrial level without compromising its properties is a current challenge. This article focuses on the perspectives and challenges of scalability, equipment, and technological perspectives of the plasma-based techniques which offer many unique possibilities for the synthesis of graphene and graphene-containing products. The plasma-based processes are amenable for scaling and could also be useful to enhance the controllability of the conventional chemical vapour deposition method and some other techniques, and to ensure a good quality of the produced graphene. We examine the unique features of the plasma-enhanced graphene production approaches, including the techniques based on inductively-coupled and arc discharges, in the context of their potential scaling to mass production following the generic scaling approaches applicable to the existing processes and systems. This work analyses a large amount of the recent literature on graphene production by various techniques and summarizes the results in a tabular form to provide a simple and convenient comparison of several available techniques. Our analysis reveals a significant potential of scalability for plasma-based technologies, based on the scaling-related process characteristics. Among other processes, a greater yield of 1 g × h⁻¹ m⁻² was reached for the arc discharge technology, whereas the other plasma-based techniques show process yields comparable to the neutral-gas based methods. Selected plasma-based techniques show lower energy consumption than in thermal CVD processes, and the ability to produce graphene flakes of

  11. Scalable graphene production: perspectives and challenges of plasma applications

    NASA Astrophysics Data System (ADS)

    Levchenko, Igor; Ostrikov, Kostya (Ken); Zheng, Jie; Li, Xingguo; Keidar, Michael; B. K. Teo, Kenneth

    2016-05-01

    Graphene, a newly discovered and extensively investigated material, has many unique and extraordinary properties which promise major technological advances in fields ranging from electronics to mechanical engineering and food production. Unfortunately, complex techniques and high production costs hinder commonplace applications. Scaling of existing graphene production techniques to the industrial level without compromising its properties is a current challenge. This article focuses on the perspectives and challenges of scalability, equipment, and technological perspectives of the plasma-based techniques which offer many unique possibilities for the synthesis of graphene and graphene-containing products. The plasma-based processes are amenable for scaling and could also be useful to enhance the controllability of the conventional chemical vapour deposition method and some other techniques, and to ensure a good quality of the produced graphene. We examine the unique features of the plasma-enhanced graphene production approaches, including the techniques based on inductively-coupled and arc discharges, in the context of their potential scaling to mass production following the generic scaling approaches applicable to the existing processes and systems. This work analyses a large amount of the recent literature on graphene production by various techniques and summarizes the results in a tabular form to provide a simple and convenient comparison of several available techniques. Our analysis reveals a significant potential of scalability for plasma-based technologies, based on the scaling-related process characteristics. Among other processes, a greater yield of 1 g × h⁻¹ m⁻² was reached for the arc discharge technology, whereas the other plasma-based techniques show process yields comparable to the neutral-gas based methods. Selected plasma-based techniques show lower energy consumption than in thermal CVD processes, and the ability to produce graphene flakes of various

  12. Towards scalable quantum communication and computation: Novel approaches and realizations

    NASA Astrophysics Data System (ADS)

    Jiang, Liang

    Quantum information science involves exploration of fundamental laws of quantum mechanics for information processing tasks. This thesis presents several new approaches towards scalable quantum information processing. First, we consider a hybrid approach to scalable quantum computation, based on an optically connected network of few-qubit quantum registers. Specifically, we develop a novel scheme for scalable quantum computation that is robust against various imperfections. To justify that nitrogen-vacancy (NV) color centers in diamond can be a promising realization of the few-qubit quantum register, we show how to isolate a few proximal nuclear spins from the rest of the environment and use them for the quantum register. We also demonstrate experimentally that the nuclear spin coherence is only weakly perturbed under optical illumination, which allows us to implement quantum logical operations that use the nuclear spins to assist the repetitive-readout of the electronic spin. Using this technique, we demonstrate more than two-fold improvement in signal-to-noise ratio. Apart from direct application to enhance the sensitivity of the NV-based nano-magnetometer, this experiment represents an important step towards the realization of robust quantum information processors using electronic and nuclear spin qubits. We then study realizations of quantum repeaters for long distance quantum communication. Specifically, we develop an efficient scheme for quantum repeaters based on atomic ensembles. We use dynamic programming to optimize various quantum repeater protocols. In addition, we propose a new protocol of quantum repeater with encoding, which efficiently uses local resources (about 100 qubits) to identify and correct errors, to achieve fast one-way quantum communication over long distances. Finally, we explore quantum systems with topological order. Such systems can exhibit remarkable phenomena such as quasiparticles with anyonic statistics and have been proposed as

  13. How information visualization novices construct visualizations.

    PubMed

    Grammel, Lars; Tory, Melanie; Storey, Margaret-Anne

    2010-01-01

    It remains challenging for information visualization novices to rapidly construct visualizations during exploratory data analysis. We conducted an exploratory laboratory study in which information visualization novices explored fictitious sales data by communicating visualization specifications to a human mediator, who rapidly constructed the visualizations using commercial visualization software. We found that three activities were central to the iterative visualization construction process: data attribute selection, visual template selection, and visual mapping specification. The major barriers faced by the participants were translating questions into data attributes, designing visual mappings, and interpreting the visualizations. Partial specification was common, and the participants used simple heuristics and preferred visualizations they were already familiar with, such as bar, line and pie charts. We derived abstract models from our observations that describe barriers in the data exploration process and uncovered how information visualization novices think about visualization specifications. Our findings support the need for tools that suggest potential visualizations and support iterative refinement, that provide explanations and help with learning, and that are tightly integrated into tool support for the overall visual analytics process.

  14. Scalable Integrated Region-Based Image Retrieval Using IRM and Statistical Clustering.

    ERIC Educational Resources Information Center

    Wang, James Z.; Du, Yanping

    Statistical clustering is critical in designing scalable image retrieval systems. This paper presents a scalable algorithm for indexing and retrieving images based on region segmentation. The method uses statistical clustering on region features and IRM (Integrated Region Matching), a measure developed to evaluate overall similarity between images…

  15. LVFS: A Scalable Petabyte/Exabyte Data Storage System

    NASA Astrophysics Data System (ADS)

    Golpayegani, N.; Halem, M.; Masuoka, E. J.; Ye, G.; Devine, N. K.

    2013-12-01

    Managing petabytes of data with hundreds of millions of files is the first step necessary towards an effective big data computing and collaboration environment in a distributed system. We describe here the MODAPS LAADS Virtual File System (LVFS), a new storage architecture which replaces the previous MODAPS operational Level 1 Land Atmosphere Archive Distribution System (LAADS) NFS based approach to storing and distributing datasets from several instruments, such as MODIS, MERIS, and VIIRS. LAADS is responsible for the distribution of over 4 petabytes of data and over 300 million files across more than 500 disks. We present here the first LVFS big data comparative performance results and new capabilities not previously possible with the LAADS system. We consider two aspects in addressing inefficiencies of massive scales of data. First, is dealing in a reliable and resilient manner with the volume and quantity of files in such a dataset, and, second, minimizing the discovery and lookup times for accessing files in such large datasets. There are several popular file systems that successfully deal with the first aspect of the problem. Their solution, in general, is through distribution, replication, and parallelism of the storage architecture. The Hadoop Distributed File System (HDFS), Parallel Virtual File System (PVFS), and Lustre are examples of such file systems that deal with petabyte data volumes. The second aspect deals with data discovery among billions of files, the largest bottleneck in reducing access time. The metadata of a file, generally represented in a directory layout, is stored in ways that are not readily scalable. This is true for HDFS, PVFS, and Lustre as well. Recent experimental file systems, such as Spyglass or Pantheon, have attempted to address this problem through redesign of the metadata directory architecture. LVFS takes a radically different architectural approach by eliminating the need for a separate directory within the file system

  16. Scalable Interactive Middleware Components for Ubiquitous Fashionable Computers

    NASA Astrophysics Data System (ADS)

    Shim, Gyudong; Park, Kyu Ho

    The middleware for location-based interactive applications requires scalability in large-scale spaces. As the number of users and target services increases, the server has to process massive numbers of spatial queries and event-handling requests efficiently. Our middleware components are developed to extend the U-interactive system for large-scale environments. The system manages the location information for a large number of users and target objects. In addition, the system handles events caused by user commands. We developed an efficient tuple indexing and query mechanism based on composite keys. As a new spatial query, Fan search is introduced to provide efficient target selection by distance and angle. We optimized the query processing by efficient node traversal and data-aware interval skipping. The tuple matching process is performed in bounded time for up to 100,000 objects. In our experiments, Fan search with the C-Curve outperforms the Z-Curve for high-density nodes.
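
    Geometrically, a fan query selects the targets inside a circular sector around the user. A minimal self-contained sketch of that selection (plain brute-force geometry; the middleware's indexed C-Curve/Z-Curve traversal is not reproduced here, and the names are ours):

        # Brute-force "fan" (sector) query: within radius AND angular window.
        import math

        def fan_search(objects, user_xy, heading_rad, radius, half_angle_rad):
            """objects: iterable of (obj_id, x, y). Returns ids inside the fan."""
            ux, uy = user_xy
            hits = []
            for obj_id, x, y in objects:
                dx, dy = x - ux, y - uy
                if math.hypot(dx, dy) > radius:
                    continue
                # Smallest signed difference between target bearing and heading.
                bearing = math.atan2(dy, dx)
                diff = (bearing - heading_rad + math.pi) % (2 * math.pi) - math.pi
                if abs(diff) <= half_angle_rad:
                    hits.append(obj_id)
            return hits

        objs = [("lamp", 1, 1), ("door", -2, 0), ("screen", 3, 0.5)]
        print(fan_search(objs, (0, 0), heading_rad=0.0, radius=5,
                         half_angle_rad=math.pi / 6))   # -> ['screen']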

  17. Scalable, printable, surfactant-free graphene ink directly from graphite.

    PubMed

    Han, X; Chen, Y; Zhu, H; Preston, C; Wan, J; Fang, Z; Hu, L

    2013-05-24

    In this manuscript, we develop a printable graphene ink through a solvent-exchange method. Producing printable graphene ink in ethanol and water, free of any surfactant, depends on matching the surface tension of the cross-solvent with the graphene surface energy. Percolative transport behavior is observed for films made of this printable ink. Optical conductivity is then calculated based on sheet resistance, optical transmittance, and thickness. Upon analyzing the ratio of dc to optical conductivity versus flake size and layer number, we report that our dc/optical conductivity ratio is among the highest of films based on directly deposited graphene ink. This is the first demonstration of a scalable, printable, surfactant-free graphene ink derived directly from graphite. PMID:23609377

  18. Scalable, printable, surfactant-free graphene ink directly from graphite

    NASA Astrophysics Data System (ADS)

    Han, X.; Chen, Y.; Zhu, H.; Preston, C.; Wan, J.; Fang, Z.; Hu, L.

    2013-05-01

    In this manuscript, we develop a printable graphene ink through a solvent-exchange method. Producing printable graphene ink in ethanol and water, free of any surfactant, depends on matching the surface tension of the cross-solvent with the graphene surface energy. Percolative transport behavior is observed for films made of this printable ink. Optical conductivity is then calculated based on sheet resistance, optical transmittance, and thickness. Upon analyzing the ratio of dc to optical conductivity versus flake size and layer number, we report that our dc/optical conductivity ratio is among the highest of films based on directly deposited graphene ink. This is the first demonstration of a scalable, printable, surfactant-free graphene ink derived directly from graphite.

  19. Lilith: A scalable secure tool for massively parallel distributed computing

    SciTech Connect

    Armstrong, R.C.; Camp, L.J.; Evensky, D.A.; Gentile, A.C.

    1997-06-01

    Changes in high performance computing have necessitated the ability to utilize and interrogate potentially many thousands of processors. The ASCI (Advanced Strategic Computing Initiative) program conducted by the United States Department of Energy, for example, envisions thousands of distinct operating systems connected by low-latency gigabit-per-second networks. In addition, multiple systems of this kind will be linked via high-capacity networks with latencies as low as the speed of light will allow. Code which spans systems of this sort must be scalable; yet constructing such code, whether for applications, debugging, or maintenance, is an unsolved problem. Lilith is a research software platform that attempts to answer these questions with an eye toward meeting these needs. Presently, Lilith exists as a test-bed, written in Java, for various spanning algorithms and security schemes. The test-bed software has, and enforces, hooks allowing implementation and testing of various security schemes.

  20. Scalable Transfer of Suspended Two-Dimensional Single Crystals.

    PubMed

    Li, Bo; He, Yongmin; Lei, Sidong; Najmaei, Sina; Gong, Yongji; Wang, Xin; Zhang, Jing; Ma, Lulu; Yang, Yingchao; Hong, Sanghyun; Hao, Ji; Shi, Gang; George, Antony; Keyshar, Kunttal; Zhang, Xiang; Dong, Pei; Ge, Liehui; Vajtai, Robert; Lou, Jun; Jung, Yung Joon; Ajayan, Pulickel M

    2015-08-12

    Large-scale suspended architectures of various two-dimensional (2D) materials (MoS2, MoSe2, WS2, and graphene) are demonstrated on nanoscale patterned substrates with different physical and chemical surface properties, such as flexible polymer substrates (polydimethylsiloxane), rigid Si substrates, and rigid metal substrates (Au/Ag). This transfer method represents a generic, fast, clean, and scalable technique to suspend 2D atomic layers. The underlying principle behind this approach, which employs a capillary-force-free wet-contact printing method, was studied by characterizing the nanoscale solid-liquid-vapor interface of 2D layers with respect to different substrates. As a proof-of-concept, a photodetector of suspended MoS2 has been demonstrated with significantly improved photosensitivity. This strategy could be extended to several other 2D material systems and open the pathway toward better optoelectronic and nanoelectromechanical systems.

  1. Neuromorphic adaptive plastic scalable electronics: analog learning systems.

    PubMed

    Srinivasa, Narayan; Cruz-Albrecht, Jose

    2012-01-01

    Decades of research to build programmable intelligent machines have demonstrated limited utility in complex, real-world environments. Comparing their performance with biological systems, these machines are less efficient by a factor of 1 million to 1 billion in complex, real-world environments. The Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program is a multifaceted Defense Advanced Research Projects Agency (DARPA) project that seeks to break the programmable machine paradigm and define a new path for creating useful, intelligent machines. Since real-world systems exhibit infinite combinatorial complexity, electronic neuromorphic machine technology would be preferable in a host of applications, but useful and practical implementations still do not exist. HRL Laboratories LLC has embarked on addressing these challenges, and, in this article, we provide an overview of our project and progress made thus far.

  2. Scalable distributed consensus to support MPI fault tolerance.

    SciTech Connect

    Buntinas, D.

    2011-01-01

    As system sizes increase, the amount of time in which an application can run without experiencing a failure decreases. Exascale applications will need to address fault tolerance. In order to support algorithm-based fault tolerance, communication libraries will need to provide fault-tolerance features to the application. One important fault-tolerance operation is distributed consensus. This is used, for example, to collectively decide on a set of failed processes. This paper describes a scalable, distributed consensus algorithm that is used to support new MPI fault-tolerance features proposed by the MPI 3 Forum's fault-tolerance working group. The algorithm was implemented and evaluated on a 4,096-core Blue Gene/P. The implementation was able to perform a full-scale distributed consensus in 305 μs and scaled logarithmically.

  3. Final Report. Center for Scalable Application Development Software

    SciTech Connect

    Mellor-Crummey, John

    2014-10-26

    The Center for Scalable Application Development Software (CScADS) was established as a partnership between Rice University, Argonne National Laboratory, University of California Berkeley, University of Tennessee – Knoxville, and University of Wisconsin – Madison. CScADS pursued an integrated set of activities with the aim of increasing the productivity of DOE computational scientists by catalyzing the development of systems software, libraries, compilers, and tools for leadership computing platforms. Principal Center activities were workshops to engage the research community in the challenges of leadership computing, research and development of open-source software, and work with computational scientists to help them develop codes for leadership computing platforms. This final report summarizes CScADS activities at Rice University in these areas.

  4. Overview of the Scalable Coherent Interface, IEEE STD 1596 (SCI)

    SciTech Connect

    Gustavson, D.B.; James, D.V.; Wiggers, H.A.

    1992-10-01

    The Scalable Coherent Interface standard defines a new generation of interconnection that spans the full range from supercomputer memory "bus" to campus-wide network. SCI provides bus-like services and a shared-memory software model while using an underlying packet protocol on many independent communication links. Initially these links are 1 GByte/s (wires) and 1 GBit/s (fiber), but the protocol scales well to future faster or lower-cost technologies. The interconnect may use switches, meshes, and rings. The SCI distributed-shared-memory model is simple and versatile, enabling for the first time a smooth integration of highly parallel multiprocessors, workstations, personal computers, I/O, networking and data acquisition.

  5. Scalable high-density peptide arrays for comprehensive health monitoring.

    PubMed

    Legutki, Joseph Barten; Zhao, Zhan-Gong; Greving, Matt; Woodbury, Neal; Johnston, Stephen Albert; Stafford, Phillip

    2014-01-01

    There is an increasing awareness that health care must move from post-symptomatic treatment to presymptomatic intervention. An ideal system would allow regular inexpensive monitoring of health status using circulating antibodies to report on health fluctuations. Recently, we demonstrated that peptide microarrays can do this through antibody signatures (immunosignatures). Unfortunately, printed microarrays are not scalable. Here we demonstrate a platform based on fabricating microarrays (~10 M peptides per slide, 330,000 peptides per assay) on silicon wafers using equipment common to semiconductor manufacturing. The potential of these microarrays for comprehensive health monitoring is verified through the simultaneous detection and classification of six different infectious diseases and six different cancers. Besides diagnostics, these high-density peptide chips have numerous other applications both in health care and elsewhere. PMID:25183057

  6. Scalable Collaborative Infrastructure for a Learning Healthcare System (SCILHS): architecture.

    PubMed

    Mandl, Kenneth D; Kohane, Isaac S; McFadden, Douglas; Weber, Griffin M; Natter, Marc; Mandel, Joshua; Schneeweiss, Sebastian; Weiler, Sarah; Klann, Jeffrey G; Bickel, Jonathan; Adams, William G; Ge, Yaorong; Zhou, Xiaobo; Perkins, James; Marsolo, Keith; Bernstam, Elmer; Showalter, John; Quarshie, Alexander; Ofili, Elizabeth; Hripcsak, George; Murphy, Shawn N

    2014-01-01

    We describe the architecture of the Patient Centered Outcomes Research Institute (PCORI) funded Scalable Collaborative Infrastructure for a Learning Healthcare System (SCILHS, http://www.SCILHS.org) clinical data research network, which leverages the $48 billion federal investment in health information technology (IT) to enable a queryable semantic data model across 10 health systems covering more than 8 million patients, plugging universally into the point of care, generating evidence and discovery, and thereby enabling clinician and patient participation in research during the patient encounter. Central to the success of SCILHS is the development of innovative 'apps' to improve PCOR research methods and enable point-of-care functions such as consent, enrollment, randomization, and outreach for patient-reported outcomes. SCILHS adapts and extends an existing national research network formed on an advanced IT infrastructure built with open source, free, modular components.

  7. The zebrafish: scalable in vivo modeling for systems biology

    PubMed Central

    Deo, Rahul C.; MacRae, Calum A.

    2011-01-01

    The zebrafish offers a scalable vertebrate model for many areas of biologic investigation. There is substantial conservation of genetic and genomic features and, at a higher order, conservation of intermolecular networks, as well as physiologic systems and phenotypes. We highlight recent work demonstrating the extent of this homology, and efforts to develop high-throughput phenotyping strategies suited to genetic or chemical screening on a scale compatible with in vivo validation for systems biology. We discuss the implications of these approaches for functional annotation of the genome, elucidation of multicellular processes in vivo, and mechanistic exploration of hypotheses generated by a broad range of ‘unbiased’ ‘omic technologies such as expression profiling and genome-wide association. Finally, we outline potential strategies for the application of the zebrafish to the systematic study of phenotypic architecture, disease heterogeneity and drug responses. PMID:20882534

  8. a Scalable Hybrid Fpga/gpu FX Correlator

    NASA Astrophysics Data System (ADS)

    Kocz, J.; Greenhill, L. J.; Barsdell, B. R.; Bernardi, G.; Jameson, A.; Clark, M. A.; Craig, J.; Price, D.; Taylor, G. B.; Schinzel, F.; Werthimer, D.

    Radio astronomical imaging arrays comprising large numbers of antennas, O(10²-10³), have posed a signal processing challenge because of the required O(N²) cross correlation of signals from each antenna and the requisite signal routing. This motivated the implementation of a Packetized Correlator architecture that applies Field Programmable Gate Arrays (FPGAs) to the O(N) "F-stage" transforming time domain to frequency domain data, and Graphics Processing Units (GPUs) to the O(N²) "X-stage" performing an outer product among spectra for each antenna. The design is readily scalable to at least O(10³) antennas. Fringes, visibility amplitudes and sky image results obtained during field testing are presented.
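
    The F/X split is easy to see in a minimal numpy sketch: an O(N) F-stage channelizes each antenna's time series, then an O(N²) X-stage forms visibilities as a per-channel outer product. This is illustrative only; the paper's pipeline runs the F-stage on FPGAs and the X-stage on GPUs.

        # Toy FX correlator: FFT per antenna, then cross-correlate per channel.
        import numpy as np

        def fx_correlate(voltages, n_chan):
            """voltages: complex array of shape (n_ant, n_samples)."""
            n_ant, n_samp = voltages.shape
            n_spectra = n_samp // n_chan
            # F-stage: split each stream into blocks and FFT each block.
            spectra = np.fft.fft(
                voltages[:, : n_spectra * n_chan].reshape(n_ant, n_spectra, n_chan),
                axis=2)
            # X-stage: vis[i, j, c] = time-average of S_i(c) * conj(S_j(c)).
            return np.einsum("itc,jtc->ijc", spectra, spectra.conj()) / n_spectra

        rng = np.random.default_rng(1)
        v = rng.normal(size=(4, 4096)) + 1j * rng.normal(size=(4, 4096))
        print(fx_correlate(v, n_chan=256).shape)   # (4, 4, 256)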

  9. Fast and fully-scalable synthesis of reduced graphene oxide

    NASA Astrophysics Data System (ADS)

    Abdolhosseinzadeh, Sina; Asgharzadeh, Hamed; Seop Kim, Hyoung

    2015-05-01

    Exfoliation of graphite is a promising approach for large-scale production of graphene. Oxidation of graphite effectively facilitates the exfoliation process, yet necessitates several lengthy washing and reduction processes to convert the exfoliated graphite oxide (graphene oxide, GO) to reduced graphene oxide (RGO). Although filtration, centrifugation and dialysis have been frequently used in the washing stage, none of them is favorable for large-scale production. Here, we report the synthesis of RGO by sonication-assisted oxidation of graphite in a solution of potassium permanganate and concentrated sulfuric acid followed by reduction with ascorbic acid prior to any washing processes. GO loses its hydrophilicity during the reduction stage, which facilitates the washing step and reduces the time required for production of RGO. Furthermore, simultaneous oxidation and exfoliation significantly enhance the yield of few-layer GO. We hope this one-pot and fully-scalable protocol paves the way toward out-of-lab applications of graphene.

  10. A Scalable Framework to Detect Personal Health Mentions on Twitter

    PubMed Central

    Fabbri, Daniel; Rosenbloom, S Trent

    2015-01-01

    Background: Biomedical research has traditionally been conducted via surveys and the analysis of medical records. However, these resources are limited in their content, such that non-traditional domains (eg, online forums and social media) have an opportunity to supplement the view of an individual’s health. Objective: The objective of this study was to develop a scalable framework to detect personal health status mentions on Twitter and assess the extent to which such information is disclosed. Methods: We collected more than 250 million tweets via the Twitter streaming API over a 2-month period in 2014. The corpus was filtered down to approximately 250,000 tweets, stratified across 34 high-impact health issues, based on guidance from the Medical Expenditure Panel Survey. We created a labeled corpus of several thousand tweets via a survey, administered over Amazon Mechanical Turk, that documents when terms correspond to mentions of personal health issues or an alternative (eg, a metaphor). We engineered a scalable classifier for personal health mentions via feature selection and assessed its potential over the health issues. We further investigated the utility of the tweets by determining the extent to which Twitter users disclose personal health status. Results: Our investigation yielded several notable findings. First, we find that tweets from a small subset of the health issues can train a scalable classifier to detect health mentions. Specifically, training on 2000 tweets from four health issues (cancer, depression, hypertension, and leukemia) yielded a classifier with precision of 0.77 on all 34 health issues. Second, Twitter users disclosed personal health status for all health issues. Notably, personal health status was disclosed over 50% of the time for 11 out of 34 (33%) investigated health issues. Third, the disclosure rate was dependent on the health issue in a statistically significant manner (P<.001). For instance, more than 80% of the tweets about
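
    The kind of classifier described reduces, in outline, to bag-of-words features plus a linear model. A hedged sketch follows; the tweets and labels here are invented placeholders, not the study's Mechanical Turk corpus, and the feature choices are ours.

        # Toy personal-health-mention classifier (illustrative, not the study's).
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        tweets = [
            "just got diagnosed with hypertension, starting meds today",  # personal
            "hypertension rates are rising nationwide, says new report",  # not
            "my depression has been rough this week",                     # personal
            "that movie was so depressing lol",                           # not
        ]
        labels = [1, 0, 1, 0]   # 1 = personal health status mention

        clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
        clf.fit(tweets, labels)
        print(clf.predict(["i think my treatment starts monday"]))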

  11. A Practical and Scalable Tool to Find Overlaps between Sequences.

    PubMed

    Rachid, Maan Haj; Malluhi, Qutaibah

    2015-01-01

    The evolution of the next generation sequencing technology increases the demand for efficient solutions, in terms of space and time, for several bioinformatics problems. This paper presents a practical and easy-to-implement solution for one of these problems, namely, the all-pairs suffix-prefix problem, using a compact prefix tree. The paper demonstrates an efficient construction of this time-efficient and space-economical tree data structure. The paper presents techniques for parallel implementations of the proposed solution. Experimental evaluation indicates superior results in terms of space and time over existing solutions. Results also show that the proposed technique is highly scalable in a parallel execution environment. PMID:25961045
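
    For readers new to the problem, a naive quadratic reference implementation pins down what is being computed: for every ordered pair of strings, the longest suffix of one that is a prefix of the other. The paper's contribution is solving this at scale with a compact prefix tree, which this sketch does not attempt.

        # Naive all-pairs suffix-prefix overlaps (reference semantics only).
        def all_pairs_suffix_prefix(seqs):
            overlaps = {}
            for i, a in enumerate(seqs):
                for j, b in enumerate(seqs):
                    if i == j:
                        continue
                    best = 0
                    for k in range(min(len(a), len(b)), 0, -1):
                        if a.endswith(b[:k]):   # suffix of a == prefix of b
                            best = k
                            break
                    overlaps[(i, j)] = best
            return overlaps

        reads = ["ACGTAC", "TACGGA", "GGATTT"]
        print(all_pairs_suffix_prefix(reads))   # e.g. (0, 1) -> 3, via "TAC"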

  12. Scalable lithography from Natural DNA Patterns via polyacrylamide gel

    PubMed Central

    Qu, JieHao; Hou, XianLiang; Fan, WanChao; Xi, GuangHui; Diao, HongYan; Liu, XiangDon

    2015-01-01

    A facile strategy for fabricating scalable stamps has been developed using cross-linked polyacrylamide gel (PAMG) that controllably and precisely shrinks and swells with water content. Aligned patterns of natural DNA molecules were prepared by evaporative self-assembly on a PMMA substrate, and were transferred to unsaturated polyester resin (UPR) to form a negative replica. The negative was used to pattern the linear structures onto the surface of water-swollen PAMG, and the pattern sizes on the PAMG stamp were customized by adjusting the water content of the PAMG. As a result, consistent reproduction of DNA patterns could be achieved with feature sizes that can be controlled over the range of 40%–200% of the original pattern dimensions. This methodology is novel and may pave a new avenue for manufacturing stamp-based functional nanostructures in a simple and cost-effective manner on a large scale. PMID:26639572

  13. Scalable lithography from Natural DNA Patterns via polyacrylamide gel.

    PubMed

    Qu, JieHao; Hou, XianLiang; Fan, WanChao; Xi, GuangHui; Diao, HongYan; Liu, XiangDon

    2015-12-07

    A facile strategy for fabricating scalable stamps has been developed using cross-linked polyacrylamide gel (PAMG) that controllably and precisely shrinks and swells with water content. Aligned patterns of natural DNA molecules were prepared by evaporative self-assembly on a PMMA substrate, and were transferred to unsaturated polyester resin (UPR) to form a negative replica. The negative was used to pattern the linear structures onto the surface of water-swollen PAMG, and the pattern sizes on the PAMG stamp were customized by adjusting the water content of the PAMG. As a result, consistent reproduction of DNA patterns could be achieved with feature sizes that can be controlled over the range of 40%-200% of the original pattern dimensions. This methodology is novel and may pave a new avenue for manufacturing stamp-based functional nanostructures in a simple and cost-effective manner on a large scale.

  14. Scalable lithography from Natural DNA Patterns via polyacrylamide gel.

    PubMed

    Qu, JieHao; Hou, XianLiang; Fan, WanChao; Xi, GuangHui; Diao, HongYan; Liu, XiangDon

    2015-01-01

    A facile strategy for fabricating scalable stamps has been developed using cross-linked polyacrylamide gel (PAMG) that controllably and precisely shrinks and swells with water content. Aligned patterns of natural DNA molecules were prepared by evaporative self-assembly on a PMMA substrate, and were transferred to unsaturated polyester resin (UPR) to form a negative replica. The negative was used to pattern the linear structures onto the surface of water-swollen PAMG, and the pattern sizes on the PAMG stamp were customized by adjusting the water content of the PAMG. As a result, consistent reproduction of DNA patterns could be achieved with feature sizes that can be controlled over the range of 40%-200% of the original pattern dimensions. This methodology is novel and may pave a new avenue for manufacturing stamp-based functional nanostructures in a simple and cost-effective manner on a large scale. PMID:26639572

  15. Scalable Lunar Surface Networks and Adaptive Orbit Access

    NASA Technical Reports Server (NTRS)

    Wang, Xudong

    2015-01-01

    Teranovi Technologies, Inc., has developed innovative network architecture, protocols, and algorithms for both lunar surface and orbit access networks. A key component of the overall architecture is a medium access control (MAC) protocol that includes a novel mechanism of overlaying time division multiple access (TDMA) and carrier sense multiple access with collision avoidance (CSMA/CA), ensuring scalable throughput and quality of service. The new MAC protocol is compatible with legacy Institute of Electrical and Electronics Engineers (IEEE) 802.11 networks. Advanced features include efficient power management, adaptive channel width adjustment, and error control capability. A hybrid routing protocol combines the advantages of ad hoc on-demand distance vector (AODV) routing and disruption/delay-tolerant network (DTN) routing. Performance is significantly better than AODV or DTN and will be particularly effective for wireless networks with intermittent links, such as lunar and planetary surface networks and orbit access networks.
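
    A toy discrete-time sketch of the overlay idea: nodes that own the current slot transmit contention-free (TDMA), while unowned slots are resolved by random backoff (CSMA/CA-style). This illustrates the mechanism only; it is not Teranovi's protocol, and all parameters are invented for the example.

        import random

        def simulate(num_nodes=4, frame=4, tdma={0: 0, 1: 1},
                     num_frames=250, p_traffic=0.6, backoff_window=8):
            """tdma maps slot position in the frame -> owning node; all other
            positions are contention slots (a backoff tie is a collision)."""
            delivered = collisions = 0
            for slot in range(frame * num_frames):
                pos = slot % frame
                backlogged = [n for n in range(num_nodes)
                              if random.random() < p_traffic]
                if pos in tdma:                  # TDMA slot: owner sends alone
                    delivered += tdma[pos] in backlogged
                elif backlogged:                 # contention slot: random backoff
                    draws = {n: random.randrange(backoff_window) for n in backlogged}
                    winners = [n for n, d in draws.items() if d == min(draws.values())]
                    if len(winners) == 1:
                        delivered += 1
                    else:
                        collisions += 1
            return delivered, collisions

        if __name__ == "__main__":
            print(simulate())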

  16. Scalable load-balance measurement for SPMD codes

    SciTech Connect

    Gamblin, T; de Supinski, B R; Schulz, M; Fowler, R; Reed, D

    2008-08-05

    Good load balance is crucial on very large parallel systems, but the most sophisticated algorithms introduce dynamic imbalances through adaptation in domain decomposition or use of adaptive solvers. To observe and diagnose imbalance, developers need system-wide, temporally-ordered measurements from full-scale runs. This potentially requires data collection from multiple code regions on all processors over the entire execution. Doing this instrumentation naively can, in combination with the application itself, exceed available I/O bandwidth and storage capacity, and can induce severe behavioral perturbations. We present and evaluate a novel technique for scalable, low-error load balance measurement. This uses a parallel wavelet transform and other parallel encoding methods. We show that our technique collects and reconstructs system-wide measurements with low error. Compression time scales sublinearly with system size and data volume is several orders of magnitude smaller than the raw data. The overhead is low enough for online use in a production environment.
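
    The compression idea can be illustrated with a serial Haar transform: wavelet-transform a per-process load trace, keep only the largest coefficients, and reconstruct with low error. The paper's system performs the transform in parallel across processes; this numpy sketch (power-of-two trace length assumed) only shows the compress/reconstruct round trip.

        import numpy as np

        def haar_forward(x):
            x = x.astype(float)
            coeffs = []
            while len(x) > 1:
                coeffs.append((x[0::2] - x[1::2]) / np.sqrt(2))   # detail
                x = (x[0::2] + x[1::2]) / np.sqrt(2)              # approximation
            coeffs.append(x)
            return coeffs

        def haar_inverse(coeffs):
            x = coeffs[-1]
            for det in reversed(coeffs[:-1]):
                up = np.empty(2 * len(det))
                up[0::2] = (x + det) / np.sqrt(2)
                up[1::2] = (x - det) / np.sqrt(2)
                x = up
            return x

        if __name__ == "__main__":
            trace = np.sin(np.linspace(0, 8, 1024)) + 0.01 * np.random.randn(1024)
            coeffs = haar_forward(trace)
            cutoff = np.quantile(np.abs(np.concatenate(coeffs)), 0.95)  # keep ~5%
            kept = [np.where(np.abs(c) >= cutoff, c, 0.0) for c in coeffs]
            err = np.max(np.abs(haar_inverse(kept) - trace))
            print(f"max reconstruction error: {err:.4f}")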

  17. Center for Programming Models for Scalable Parallel Computing

    SciTech Connect

    John Mellor-Crummey

    2008-02-29

    Rice University's achievements as part of the Center for Programming Models for Scalable Parallel Computing include: (1) design and implementation of cafc, the first multi-platform CAF compiler for distributed and shared-memory machines, (2) performance studies of the efficiency of programs written using the CAF and UPC programming models, (3) a novel technique to analyze explicitly-parallel SPMD programs that facilitates optimization, (4) design, implementation, and evaluation of new language features for CAF, including communication topologies, multi-version variables, and distributed multithreading to simplify development of high-performance codes in CAF, and (5) a synchronization strength reduction transformation for automatically replacing barrier-based synchronization with more efficient point-to-point synchronization. The prototype Co-array Fortran compiler cafc developed in this project is available as open source software from http://www.hipersoft.rice.edu/caf.

  18. Scalable lithography from Natural DNA Patterns via polyacrylamide gel

    NASA Astrophysics Data System (ADS)

    Qu, Jiehao; Hou, Xianliang; Fan, Wanchao; Xi, Guanghui; Diao, Hongyan; Liu, Xiangdon

    2015-12-01

    A facile strategy for fabricating scalable stamps has been developed using cross-linked polyacrylamide gel (PAMG) that controllably and precisely shrinks and swells with water content. Aligned patterns of natural DNA molecules were prepared by evaporative self-assembly on a PMMA substrate, and were transferred to unsaturated polyester resin (UPR) to form a negative replica. The negative was used to pattern the linear structures onto the surface of water-swollen PAMG, and the pattern sizes on the PAMG stamp were customized by adjusting the water content of the PAMG. As a result, consistent reproduction of DNA patterns could be achieved with feature sizes that can be controlled over the range of 40%-200% of the original pattern dimensions. This methodology is novel and may pave a new avenue for manufacturing stamp-based functional nanostructures in a simple and cost-effective manner on a large scale.

  19. Scalable antifouling reverse osmosis membranes utilizing perfluorophenyl azide photochemistry.

    PubMed

    McVerry, Brian T; Wong, Mavis C Y; Marsh, Kristofer L; Temple, James A T; Marambio-Jones, Catalina; Hoek, Eric M V; Kaner, Richard B

    2014-09-01

    We present a method to produce anti-fouling reverse osmosis (RO) membranes that maintains the process and scalability of current RO membrane manufacturing. Utilizing perfluorophenyl azide (PFPA) photochemistry, commercial reverse osmosis membranes were dipped into an aqueous solution containing PFPA-terminated poly(ethylene glycol) species and then exposed to ultraviolet light under ambient conditions, a procedure that can easily be adapted to a roll-to-roll process. Successful covalent modification of commercial reverse osmosis membranes was confirmed with attenuated total reflectance infrared spectroscopy and contact angle measurements. By employing X-ray photoelectron spectroscopy, it was determined that PFPAs undergo UV-generated nitrene addition and bind to the membrane through an aziridine linkage. After modification with the PFPA-PEG derivatives, the reverse osmosis membranes exhibit high fouling-resistance.

  20. Scalable sensing electronics towards a motion capture suit

    NASA Astrophysics Data System (ADS)

    Xu, Daniel; Gisby, Todd A.; Xie, Shane; Anderson, Iain A.

    2013-04-01

    Being able to accurately record body motion allows complex movements to be characterised and studied. This is especially important in the film or sports coaching industry. Unfortunately, the human body has over 600 skeletal muscles, giving rise to multiple degrees of freedom. In order to accurately capture motion such as hand gestures, elbow or knee flexion and extension, vast numbers of sensors are required. Dielectric elastomer (DE) sensors are an emerging class of electroactive polymer (EAP) that is soft, lightweight and compliant. These characteristics are ideal for a motion capture suit. One challenge is to design sensing electronics that can simultaneously measure multiple sensors. This paper describes a scalable capacitive sensing device that can measure up to 8 different sensors with an update rate of 20 Hz.

  1. Final Report: Center for Programming Models for Scalable Parallel Computing

    SciTech Connect

    Mellor-Crummey, John

    2011-09-13

    As part of the Center for Programming Models for Scalable Parallel Computing, Rice University collaborated with project partners in the design, development and deployment of language, compiler, and runtime support for parallel programming models to support application development for the “leadership-class” computer systems at DOE national laboratories. Work over the course of this project has focused on the design, implementation, and evaluation of a second-generation version of Coarray Fortran. Research and development efforts of the project have focused on the CAF 2.0 language, compiler, runtime system, and supporting infrastructure. This has involved working with the teams that provide the infrastructure CAF relies on, implementing new language and runtime features, producing an open source compiler that enabled us to evaluate our ideas, and evaluating our design and implementation through the use of benchmarks. The report details the research, development, findings, and conclusions from this work.

  2. A scalable correlator for multichannel diffuse correlation spectroscopy

    NASA Astrophysics Data System (ADS)

    Stapels, Christopher J.; Kolodziejski, Noah J.; McAdams, Daniel; Podolsky, Matthew J.; Fernandez, Daniel E.; Farkas, Dana; Christian, James F.

    2016-03-01

    Diffuse correlation spectroscopy (DCS) is a technique which enables powerful and robust non-invasive optical studies of tissue micro-circulation and vascular blood flow. The technique amounts to autocorrelation analysis of coherent photons after their migration through moving scatterers and subsequent collection by single-mode optical fibers. A primary cost driver of DCS instruments is the commercial hardware-based correlator, limiting the proliferation of multi-channel instruments for validation of perfusion analysis as a clinical diagnostic metric. We present the development of a low-cost scalable correlator enabled by microchip-based time-tagging, and a software-based multi-tau data analysis method. We will discuss the capabilities of the instrument as well as the implementation and validation of 2- and 8-channel systems built for live animal and pre-clinical settings.
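
    The software analysis step can be sketched as a multi-tau autocorrelator: correlate at a few linear lags, then halve the time resolution by binning sample pairs and repeat, covering decades of lag with few channels. This numpy version works on a binned intensity trace rather than the photon time-tags the instrument records, and the parameters are illustrative.

        import numpy as np

        def multitau_autocorr(intensity, m=8, stages=6):
            x = np.asarray(intensity, dtype=float)
            lags, g2, dt = [], [], 1
            for _ in range(stages):
                mean = x.mean()
                for k in range(1, m + 1):
                    lags.append(k * dt)
                    g2.append(np.mean(x[:-k] * x[k:]) / mean ** 2)  # normalized g2
                n = 2 * (len(x) // 2)
                x = 0.5 * (x[:n:2] + x[1:n:2])     # halve the time resolution
                dt *= 2
                if len(x) <= m + 1:
                    break
            return np.array(lags), np.array(g2)

        if __name__ == "__main__":
            counts = np.random.default_rng(0).poisson(20, size=1 << 14)
            taus, g2 = multitau_autocorr(counts)
            print(taus[:4], g2[:4])    # g2 near 1 for an uncorrelated trace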

  3. Image and geometry processing with Oriented and Scalable Map.

    PubMed

    Hua, Hao

    2016-05-01

    We turn the Self-organizing Map (SOM) into an Oriented and Scalable Map (OS-Map) by generalizing the neighborhood function and the winner selection. The homogeneous Gaussian neighborhood function is replaced with the matrix exponential. Thus we can specify the orientation either in the map space or in the data space. Moreover, we associate the map's global scale with the locality of winner selection. Our model is suited for a number of graphical applications such as texture/image synthesis, surface parameterization, and solid texture synthesis. OS-Map is more generic and versatile than the task-specific algorithms for these applications. Our work reveals the overlooked strength of SOMs in processing images and geometries.
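
    To make the idea of an oriented neighborhood concrete, here is a basic SOM training loop whose grid neighborhood is an anisotropic Gaussian built from a 2x2 precision matrix, so updates spread farther along one map axis than the other. Note this stand-in is not OS-Map's matrix-exponential formulation or its modified winner selection; it only conveys the directional control the abstract describes.

        import numpy as np

        def train_som(data, rows=12, cols=12, iters=2000, lr=0.3,
                      precision=((0.10, 0.0), (0.0, 0.02))):
            rng = np.random.default_rng(1)
            weights = rng.random((rows, cols, data.shape[1]))
            grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                        indexing="ij"), axis=-1)
            P = np.asarray(precision)   # larger entry -> tighter in that direction
            for t in range(iters):
                x = data[rng.integers(len(data))]
                winner = np.unravel_index(
                    np.argmin(((weights - x) ** 2).sum(-1)), (rows, cols))
                d = grid - np.array(winner)               # offsets from the winner
                h = np.exp(-np.einsum("rci,ij,rcj->rc", d, P, d))  # oriented kernel
                weights += lr * (1 - t / iters) * h[..., None] * (x - weights)
            return weights

        if __name__ == "__main__":
            pts = np.random.default_rng(0).random((500, 2))
            print(train_som(pts).shape)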

  4. Scalable in situ qubit calibration during repetitive error detection

    NASA Astrophysics Data System (ADS)

    Kelly, J.; Barends, R.; Fowler, A. G.; Megrant, A.; Jeffrey, E.; White, T. C.; Sank, D.; Mutus, J. Y.; Campbell, B.; Chen, Yu; Chen, Z.; Chiaro, B.; Dunsworth, A.; Lucero, E.; Neeley, M.; Neill, C.; O'Malley, P. J. J.; Quintana, C.; Roushan, P.; Vainsencher, A.; Wenner, J.; Martinis, John M.

    2016-09-01

    We present a method to optimize qubit control parameters during error detection which is compatible with large-scale qubit arrays. We demonstrate our method by optimizing single- or two-qubit gates in parallel on a nine-qubit system. Additionally, we show how parameter drift can be compensated for during computation by inserting a frequency drift and using our method to remove it. We remove both drift on a single qubit and independent drifts on all qubits simultaneously. We believe this method will be useful in keeping error rates low on all physical qubits throughout the course of a computation. Our method is O(1) scalable to systems of arbitrary size, providing a path towards controlling the large numbers of qubits needed for a fault-tolerant quantum computer.

  5. A general and scalable synthesis approach to porous graphene.

    PubMed

    Zhou, Ding; Cui, Yi; Xiao, Pei-Wen; Jiang, Mei-Yang; Han, Bao-Hang

    2014-09-02

    Porous graphene, which features nano-scaled pores on the sheets, is mostly investigated by computational studies. The pores on the graphene sheets may contribute to improved mass transfer and may show potential applications in many fields. To date, the preparation of porous graphene includes chemical bottom-up approaches via aryl-aryl coupling reactions and physical preparation by high-energy techniques, and is generally conducted on substrates with limited yields. Here we show a general and scalable synthesis method for porous graphene that is developed through the carbothermal reaction between graphene and metal oxide nanoparticles produced from oxometalates or polyoxometalates. The pore formation process is observed in situ with the assistance of an electron beam. Pore engineering on graphene is conducted by controlling the pore size and/or the nitrogen doping on the porous graphene sheets by varying the amount of the oxometalates or polyoxometalates, or using ammonium-containing oxometalates or polyoxometalates.

  6. Implementation of the Timepix ASIC in the Scalable Readout System

    NASA Astrophysics Data System (ADS)

    Lupberger, M.; Desch, K.; Kaminski, J.

    2016-09-01

    We report on the development of electronics hardware, FPGA firmware and software to provide a flexible multi-chip readout of the Timepix ASIC within the framework of the Scalable Readout System (SRS). The system features FPGA-based zero-suppression and the possibility to read out up to 4×8 chips with a single Front End Concentrator (FEC). By operating several FECs in parallel, in principle an arbitrary number of chips can be read out, exploiting the scaling features of SRS. Specifically, we tested the system with a setup consisting of 160 Timepix ASICs, operated as GridPix devices in a large TPC field cage in a 1 T magnetic field at a DESY test beam facility providing an electron beam of up to 6 GeV. We discuss the design choices, the dedicated hardware components, the FPGA firmware as well as the performance of the system in the test beam.

  7. Scalable Implementation of Boson Sampling with Trapped Ions

    NASA Astrophysics Data System (ADS)

    Shen, Chao; Zhang, Zhen; Duan, Luming

    2014-03-01

    Boson sampling solves a classically intractable problem by sampling from a probability distribution given by matrix permanents. We propose a scalable implementation of Boson sampling using local transverse phonon modes of trapped ions to encode the Bosons. The proposed scheme allows deterministic preparation and high-efficiency readout of the Bosons in the Fock states and universal mode mixing. With the state-of-the-art trapped ion technology, it is feasible to realize Boson sampling with tens of Bosons by this scheme, which would outperform the most powerful classical computers and constitute an effective disproof of the famous extended Church-Turing thesis. This work was supported by the NBRPC (973 Program) 2011CBA00300 (2011CBA00302), the IARPA MUSIQC program, the ARO and the AFOSR MURI programs, and the DARPA OLE program.
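
    The classically hard quantity here is the matrix permanent: each boson-sampling output probability is proportional to |perm(A)|² for a submatrix A of the mode-mixing unitary. Ryser's O(2^n·n) formula, sketched below for small matrices, makes the object concrete and shows why brute force stops scaling quickly.

        from itertools import combinations
        import numpy as np

        def permanent(A):
            # Ryser's formula: perm(A) = sum over nonempty column subsets S of
            # (-1)^(n-|S|) * prod_i sum_{j in S} A[i, j].
            n = A.shape[0]
            total = 0.0
            for r in range(1, n + 1):
                for cols in combinations(range(n), r):
                    total += (-1) ** (n - r) * np.prod(A[:, list(cols)].sum(axis=1))
            return total

        if __name__ == "__main__":
            print(permanent(np.ones((4, 4))))   # all-ones 4x4 permanent is 4! = 24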

  8. A Scalable Nonuniform Pointer Analysis for Embedded Programs

    NASA Technical Reports Server (NTRS)

    Venet, Arnaud

    2004-01-01

    In this paper we present a scalable pointer analysis for embedded applications that is able to distinguish between instances of recursively defined data structures and elements of arrays. The main contribution consists of an efficient yet precise algorithm that can handle multithreaded programs. We first perform an inexpensive flow-sensitive analysis of each function in the program that generates semantic equations describing the effect of the function on the memory graph. These equations bear numerical constraints that describe nonuniform points-to relationships. We then iteratively solve these equations in order to obtain an abstract storage graph that describes the shape of data structures at every point of the program for all possible thread interleavings. We provide experimental evidence that this approach is tractable and precise for real-size embedded applications.

  9. Fermilab's multi-petabyte scalable mass storage system

    SciTech Connect

    Oleynik, Gene; Alcorn, Bonnie; Baisley, Wayne; Bakken, Jon; Berg, David; Berman, Eileen; Huang, Chih-Hao; Jones, Terry; Kennedy, Robert D.; Kulyavtsev, Alexander; Moibenko, Alexander; Perelmutov, Timur; Petravick, Don; Podstavkov, Vladimir; Szmuksta, George; Zalokar, Michael

    2005-01-01

    Fermilab provides a multi-Petabyte scale mass storage system for High Energy Physics (HEP) Experiments and other scientific endeavors. We describe the scalability aspects of the hardware and software architecture that were designed into the Mass Storage System to permit us to scale to multiple petabytes of storage capacity, manage tens of terabytes per day in data transfers, support hundreds of users, and maintain data integrity. We discuss in detail how we scale the system over time to meet the ever-increasing needs of the scientific community, and relate our experiences with many of the technical and economic issues related to scaling the system. Since the 2003 MSST conference, the experiments at Fermilab have generated more than 1.9 PB of additional data. We present results on how this system has scaled and performed for the Fermilab CDF and D0 Run II experiments as well as other HEP experiments and scientific endeavors.

  10. Scalable Collaborative Infrastructure for a Learning Healthcare System (SCILHS): Architecture

    PubMed Central

    Mandl, Kenneth D; Kohane, Isaac S; McFadden, Douglas; Weber, Griffin M; Natter, Marc; Mandel, Joshua; Schneeweiss, Sebastian; Weiler, Sarah; Klann, Jeffrey G; Bickel, Jonathan; Adams, William G; Ge, Yaorong; Zhou, Xiaobo; Perkins, James; Marsolo, Keith; Bernstam, Elmer; Showalter, John; Quarshie, Alexander; Ofili, Elizabeth; Hripcsak, George; Murphy, Shawn N

    2014-01-01

    We describe the architecture of the Patient Centered Outcomes Research Institute (PCORI) funded Scalable Collaborative Infrastructure for a Learning Healthcare System (SCILHS, http://www.SCILHS.org) clinical data research network, which leverages the $48 billion federal investment in health information technology (IT) to enable a queryable semantic data model across 10 health systems covering more than 8 million patients, plugging universally into the point of care, generating evidence and discovery, and thereby enabling clinician and patient participation in research during the patient encounter. Central to the success of SCILHS is development of innovative ‘apps’ to improve PCOR research methods and capacitate point of care functions such as consent, enrollment, randomization, and outreach for patient-reported outcomes. SCILHS adapts and extends an existing national research network formed on an advanced IT infrastructure built with open source, free, modular components. PMID:24821734

  11. Memory bandwidth-scalable motion estimation for mobile video coding

    NASA Astrophysics Data System (ADS)

    Hsieh, Jui-Hung; Tai, Wei-Cheng; Chang, Tian-Sheuan

    2011-12-01

    The heavy memory access of motion estimation (ME) consumes significant power and can limit ME execution when the available memory bandwidth (BW) is reduced because of access congestion or changes in the power environment of modern mobile devices. In order to adapt to the changing BW while maintaining the rate-distortion (R-D) performance, this article proposes a novel data-BW-scalable algorithm for ME in mobile multimedia chips. The available BW is modeled in an R-D sense and allocated to fit the dynamic contents. The simulation results show 70% BW savings while keeping equivalent R-D performance compared with the H.264 reference software for low-motion CIF-sized video. For high-motion sequences, the results show our algorithm can better use the available BW to save an average bit rate of up to 13% with up to a 0.1-dB PSNR increase for similar BW usage.
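
    A toy block-matching search with a sample budget conveys the flavor of BW-scalable ME: stop reading reference-frame samples once a cap is hit and return the best match so far. The budgeting heuristic is invented for illustration; the article's R-D-based BW allocation is more sophisticated.

        import numpy as np

        def motion_search(cur, ref, bx, by, block=16, radius=8, sample_budget=80000):
            cur_blk = cur[by:by + block, bx:bx + block]
            best, best_cost, spent = (0, 0), np.inf, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > ref.shape[0] \
                            or x + block > ref.shape[1]:
                        continue
                    if spent + block * block > sample_budget:
                        return best, best_cost    # budget exhausted: best so far
                    spent += block * block
                    cost = np.abs(ref[y:y + block, x:x + block].astype(int)
                                  - cur_blk.astype(int)).sum()   # SAD cost
                    if cost < best_cost:
                        best, best_cost = (dy, dx), cost
            return best, best_cost

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            ref = rng.integers(0, 256, (64, 64))
            cur = np.roll(ref, (2, -3), axis=(0, 1))      # true motion (-2, +3)
            print(motion_search(cur, ref, bx=24, by=24))  # full budget finds (-2, 3)
            print(motion_search(cur, ref, bx=24, by=24, sample_budget=10000))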

  12. Fast and fully-scalable synthesis of reduced graphene oxide

    PubMed Central

    Abdolhosseinzadeh, Sina; Asgharzadeh, Hamed; Seop Kim, Hyoung

    2015-01-01

    Exfoliation of graphite is a promising approach for large-scale production of graphene. Oxidation of graphite effectively facilitates the exfoliation process, yet necessitates several lengthy washing and reduction processes to convert the exfoliated graphite oxide (graphene oxide, GO) to reduced graphene oxide (RGO). Although filtration, centrifugation and dialysis have been frequently used in the washing stage, none of them is favorable for large-scale production. Here we report the synthesis of RGO by sonication-assisted oxidation of graphite in a solution of potassium permanganate and concentrated sulfuric acid followed by reduction with ascorbic acid prior to any washing processes. GO loses its hydrophilicity during the reduction stage, which facilitates the washing step and reduces the time required for production of RGO. Furthermore, simultaneous oxidation and exfoliation significantly enhance the yield of few-layer GO. We hope this one-pot and fully-scalable protocol paves the way toward out-of-lab applications of graphene. PMID:25976732

  13. Scalable high-density peptide arrays for comprehensive health monitoring.

    PubMed

    Legutki, Joseph Barten; Zhao, Zhan-Gong; Greving, Matt; Woodbury, Neal; Johnston, Stephen Albert; Stafford, Phillip

    2014-09-03

    There is an increasing awareness that health care must move from post-symptomatic treatment to presymptomatic intervention. An ideal system would allow regular inexpensive monitoring of health status using circulating antibodies to report on health fluctuations. Recently, we demonstrated that peptide microarrays can do this through antibody signatures (immunosignatures). Unfortunately, printed microarrays are not scalable. Here we demonstrate a platform based on fabricating microarrays (~10 M peptides per slide, 330,000 peptides per assay) on silicon wafers using equipment common to semiconductor manufacturing. The potential of these microarrays for comprehensive health monitoring is verified through the simultaneous detection and classification of six different infectious diseases and six different cancers. Besides diagnostics, these high-density peptide chips have numerous other applications both in health care and elsewhere.

  14. Feature and Region Selection for Visual Learning.

    PubMed

    Zhao, Ji; Wang, Liantao; Cabral, Ricardo; De la Torre, Fernando

    2016-03-01

    Visual learning problems, such as object classification and action recognition, are typically approached using extensions of the popular bag-of-words (BoW) model. Despite its great success, it is unclear what visual features the BoW model is learning. Which regions in the image or video are used to discriminate among classes? Which are the most discriminative visual words? Answering these questions is fundamental for understanding existing BoW models and inspiring better models for visual recognition. To answer these questions, this paper presents a method for feature selection and region selection in the visual BoW model. This allows for an intermediate visualization of the features and regions that are important for visual learning. The main idea is to assign latent weights to the features or regions, and jointly optimize these latent variables with the parameters of a classifier (e.g., support vector machine). There are four main benefits of our approach: 1) our approach accommodates non-linear additive kernels, such as the popular χ² and intersection kernels; 2) our approach is able to handle both regions in images and spatio-temporal regions in videos in a unified way; 3) the feature selection problem is convex, and both problems can be solved using a scalable reduced gradient method; and 4) we point out strong connections with multiple kernel learning and multiple instance learning approaches. Experimental results on the PASCAL VOC 2007, MSR Action Dataset II and YouTube datasets illustrate the benefits of our approach. PMID:26742135

  15. VS: a surface-based system for topological analysis, quantization and visualization of voxel data.

    PubMed

    Cohen, Itay; Gordon, Dan

    2009-04-01

    VS is a simple system consisting of several techniques for various volumetric problems. Based on the marching cubes algorithm, it operates with one space sweep through the voxels and extracts all topological information: detection of all isosurfaces, partitioning the data into connected components on the basis of surface connectivity, and association of surfaces with any internal surfaces to arbitrary levels of nesting. VS extends Baker's "Weaving Wall" method by associating topological cavities with their outer surface, and by using efficient data structures for the voxel traversal and for the connected component detection. Its runtime, on average, is only about 2% more than the speeded-up marching cubes algorithm. VS operates on the original voxels without using the contour tree required by other approaches. Using linear-time preprocessing, VS constructs a data structure which can be utilized for any isovalue or interval of isovalues. An accurate estimate of each component's volume is based on the volume enclosed by the outer surface, minus the volume of any internal cavities. VS enables noise reduction by eliminating components (and cavities) with a small volume. Different components can be rendered with different visual attributes, and cutaway views by arbitrary cut-planes are displayed as if the objects were solid, without adding any new surface patches to match the intersection. PMID:19083261
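
    The surface-connectivity step can be sketched with a union-find over vertex indices: triangles that share a vertex join the same component, giving near-linear grouping of an isosurface triangle soup. VS interleaves this with its single space sweep and adds cavity and nesting association; the standalone pass below shows only the grouping.

        def find(parent, x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x

        def surface_components(triangles, num_vertices):
            parent = list(range(num_vertices))
            for a, b, c in triangles:           # union the three corners
                for u, v in ((a, b), (b, c)):
                    parent[find(parent, u)] = find(parent, v)
            comps = {}
            for t, (a, _, _) in enumerate(triangles):
                comps.setdefault(find(parent, a), []).append(t)
            return list(comps.values())

        if __name__ == "__main__":
            tris = [(0, 1, 2), (2, 3, 4), (5, 6, 7)]
            print(surface_components(tris, 8))  # [[0, 1], [2]]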

  16. LiveGantt: Interactively Visualizing a Large Manufacturing Schedule.

    PubMed

    Jo, Jaemin; Huh, Jaeseok; Park, Jonghun; Kim, Bohyoung; Seo, Jinwook

    2014-12-01

    In this paper, we introduce LiveGantt as a novel interactive schedule visualization tool that helps users explore highly-concurrent large schedules from various perspectives. Although a Gantt chart is the most common approach to illustrate schedules, currently available Gantt chart visualization tools suffer from limited scalability and lack of interactions. LiveGantt is built with newly designed algorithms and interactions to improve conventional charts with better scalability, explorability, and reschedulability. It employs resource reordering and task aggregation to display the schedules in a scalable way. LiveGantt provides four coordinated views and filtering techniques to help users explore and interact with the schedules in more flexible ways. In addition, LiveGantt is equipped with an efficient rescheduler to allow users to instantaneously modify their schedules based on their scheduling experience in the fields. To assess the usefulness of the application of LiveGantt, we conducted a case study on manufacturing schedule data with four industrial engineering researchers. Participants not only grasped an overview of a schedule but also explored the schedule from multiple perspectives to make enhancements.

  17. Scalability-performance tradeoffs in MPLS and IP routing

    NASA Astrophysics Data System (ADS)

    Yilmaz, Selma; Matta, Ibrahim

    2002-07-01

    MPLS (Multi-Protocol Label Switching) has recently emerged to facilitate the engineering of network traffic. This can be achieved by directing packet flows over paths that satisfy multiple requirements. MPLS has been regarded as an enhancement to traditional IP routing, which has the following problems: (1) all packets with the same IP destination address have to follow the same path through the network; and (2) paths have often been computed based on static, single-link metrics. These problems may cause traffic concentration, and thus degradation in quality of service. In this paper, we investigate by simulations a range of routing solutions and examine the tradeoff between scalability and performance. At one extreme, IP packet routing using dynamic link metrics provides a stateless solution but may lead to routing oscillations. At the other extreme, we consider a recently proposed Profile-based Routing (PBR), which uses knowledge of potential ingress-egress pairs as well as the traffic profile among them. Minimum Interference Routing (MIRA) is another recently proposed MPLS-based scheme, which only exploits knowledge of potential ingress-egress pairs but not their traffic profile. MIRA and the more conventional widest-shortest path (WSP) routing represent alternative MPLS-based approaches on the spectrum of routing solutions. We compare these solutions in terms of utility, bandwidth acceptance ratio as well as their scalability (routing state and computational overhead) and load balancing capability. While WSP is the simplest of the per-flow algorithms we consider, its performance is close to that of dynamic per-packet routing, without the potential instabilities of dynamic routing.
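
    WSP is easy to state as a Dijkstra variant: order candidate paths by the key (hop count, -bottleneck bandwidth), so among minimum-hop paths the one whose narrowest link is widest wins. A sketch with an illustrative residual-bandwidth graph:

        import heapq

        def widest_shortest_path(graph, src, dst):
            # graph: {u: [(v, residual_bandwidth), ...]}; returns (path, bottleneck).
            heap = [(0, -float("inf"), src, [src])]  # (hops, -bottleneck, node, path)
            settled = {}
            while heap:
                hops, negbw, u, path = heapq.heappop(heap)
                if u == dst:
                    return path, -negbw
                if u in settled and settled[u] <= (hops, negbw):
                    continue
                settled[u] = (hops, negbw)
                for v, bw in graph.get(u, []):
                    heapq.heappush(heap, (hops + 1, -min(-negbw, bw), v, path + [v]))
            return None, 0.0

        if __name__ == "__main__":
            g = {"A": [("B", 10), ("C", 4)], "B": [("D", 2)], "C": [("D", 8)]}
            # Two 2-hop paths to D; A-C-D wins (bottleneck 4 vs 2).
            print(widest_shortest_path(g, "A", "D"))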

  18. Bringing the CMS distributed computing system into scalable operations

    NASA Astrophysics Data System (ADS)

    Belforte, S.; Fanfani, A.; Fisk, I.; Flix, J.; Hernández, J. M.; Kress, T.; Letts, J.; Magini, N.; Miccio, V.; Sciabà, A.

    2010-04-01

    Establishing efficient and scalable operations of the CMS distributed computing system critically relies on the proper integration, commissioning and scale testing of the data and workload management tools, the various computing workflows and the underlying computing infrastructure, located at more than 50 computing centres worldwide and interconnected by the Worldwide LHC Computing Grid. Computing challenges periodically undertaken by CMS in the past years with increasing scale and complexity have revealed the need for a sustained effort on computing integration and commissioning activities. The Processing and Data Access (PADA) Task Force was established at the beginning of 2008 within the CMS Computing Program with the mandate of validating the infrastructure for organized processing and user analysis including the sites and the workload and data management tools, validating the distributed production system by performing functionality, reliability and scale tests, helping sites to commission, configure and optimize the networking and storage through scale testing data transfers and data processing, and improving the efficiency of accessing data across the CMS computing system from global transfers to local access. This contribution reports on the tools and procedures developed by CMS for computing commissioning and scale testing as well as the improvements accomplished towards efficient, reliable and scalable computing operations. The activities include the development and operation of load generators for job submission and data transfers with the aim of stressing the experiment and Grid data management and workload management systems, site commissioning procedures and tools to monitor and improve site availability and reliability, as well as activities targeted to the commissioning of the distributed production, user analysis and monitoring systems.

  19. ParaText : scalable text modeling and analysis.

    SciTech Connect

    Dunlavy, Daniel M.; Stanton, Eric T.; Shead, Timothy M.

    2010-06-01

    Automated processing, modeling, and analysis of unstructured text (news documents, web content, journal articles, etc.) is a key task in many data analysis and decision making applications. As data sizes grow, scalability is essential for deep analysis. In many cases, documents are modeled as term or feature vectors and latent semantic analysis (LSA) is used to model latent, or hidden, relationships between documents and terms appearing in those documents. LSA supplies conceptual organization and analysis of document collections by modeling high-dimension feature vectors in many fewer dimensions. While past work on the scalability of LSA modeling has focused on the SVD, the goal of our work is to investigate the use of distributed memory architectures for the entire text analysis process, from data ingestion to semantic modeling and analysis. ParaText is a set of software components for distributed processing, modeling, and analysis of unstructured text. The ParaText source code is available under a BSD license, as an integral part of the Titan toolkit. ParaText components are chained together into data-parallel pipelines that are replicated across processes on distributed-memory architectures. Individual components can be replaced or rewired to explore different computational strategies and implement new functionality. ParaText functionality can be embedded in applications on any platform using the native C++ API, Python, or Java. The ParaText MPI Process provides a 'generic' text analysis pipeline in a command-line executable that can be used for many serial and parallel analysis tasks. ParaText can also be deployed as a web service accessible via a RESTful (HTTP) API. In the web service configuration, any client can access the functionality provided by ParaText using commodity protocols, from standard web browsers to custom clients written in any language.
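
    A serial, single-node sketch of the LSA pipeline that ParaText distributes, with scikit-learn standing in for its components: documents become term-weighted feature vectors, and a truncated SVD maps them into a low-dimensional concept space.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.decomposition import TruncatedSVD

        docs = [
            "parallel text analysis on distributed memory architectures",
            "latent semantic analysis models hidden document term relationships",
            "web service clients access the text pipeline over http",
        ]
        X = TfidfVectorizer().fit_transform(docs)   # documents x terms (sparse)
        lsa = TruncatedSVD(n_components=2).fit(X)
        print(lsa.transform(X))                     # documents in concept space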

  20. Scalable Conjunction Processing using Spatiotemporally Indexed Ephemeris Data

    NASA Astrophysics Data System (ADS)

    Budianto-Ho, I.; Johnson, S.; Sivilli, R.; Alberty, C.; Scarberry, R.

    2014-09-01

    The collision warnings produced by the Joint Space Operations Center (JSpOC) are of critical importance in protecting U.S. and allied spacecraft against destructive collisions and protecting the lives of astronauts during space flight. As the Space Surveillance Network (SSN) improves its sensor capabilities for tracking small and dim space objects, the number of tracked objects increases from thousands to hundreds of thousands of objects, while the number of potential conjunctions increases with the square of the number of tracked objects. Classical filtering techniques such as apogee and perigee filters have proven insufficient. Novel and orders of magnitude faster conjunction analysis algorithms are required to find conjunctions in a timely manner. Stellar Science has developed innovative filtering techniques for satellite conjunction processing using spatiotemporally indexed ephemeris data that efficiently and accurately reduces the number of objects requiring high-fidelity and computationally-intensive conjunction analysis. Two such algorithms, one based on the k-d Tree pioneered in robotics applications and the other based on Spatial Hash Tables used in computer gaming and animation, use, at worst, an initial O(N log N) preprocessing pass (where N is the number of tracked objects) to build large O(N) spatial data structures that substantially reduce the required number of O(N^2) computations, substituting linear memory usage for quadratic processing time. The filters have been implemented as Open Services Gateway initiative (OSGi) plug-ins for the Continuous Anomalous Orbital Situation Discriminator (CAOS-D) conjunction analysis architecture. We have demonstrated the effectiveness, efficiency, and scalability of the techniques using a catalog of 100,000 objects, an analysis window of one day, on a 64-core computer with 1TB shared memory. Each algorithm can process the full catalog in 6 minutes or less, almost a twenty-fold performance improvement over the
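
    The spatial-hash filter is simple to sketch: bucket objects by the integer grid cell of their position at a time step, then pair-test only objects in the same or adjacent cells, replacing the O(N^2) all-pairs test with O(N) hashing plus local comparisons. Cell size and units below are illustrative, not the CAOS-D settings.

        from collections import defaultdict
        from itertools import product
        import numpy as np

        def candidate_pairs(positions, cell=10.0):
            buckets = defaultdict(list)
            for i, p in enumerate(positions):
                buckets[tuple((p // cell).astype(int))].append(i)
            pairs = set()
            for key, members in buckets.items():
                for off in product((-1, 0, 1), repeat=3):   # same + adjacent cells
                    for j in buckets.get(tuple(np.add(key, off)), []):
                        for i in members:
                            if i < j:
                                pairs.add((i, j))
            return pairs

        if __name__ == "__main__":
            pos = np.random.default_rng(0).uniform(0, 1000, size=(2000, 3))
            print(len(candidate_pairs(pos)), "candidates vs", 2000 * 1999 // 2)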

  1. Visualization rhetoric: framing effects in narrative visualization.

    PubMed

    Hullman, Jessica; Diakopoulos, Nicholas

    2011-12-01

    Narrative visualizations combine conventions of communicative and exploratory information visualization to convey an intended story. We demonstrate visualization rhetoric as an analytical framework for understanding how design techniques that prioritize particular interpretations in visualizations that "tell a story" can significantly affect end-user interpretation. We draw a parallel between narrative visualization interpretation and evidence from framing studies in political messaging, decision-making, and literary studies. Devices for understanding the rhetorical nature of narrative information visualizations are presented, informed by the rigorous application of concepts from critical theory, semiotics, journalism, and political theory. We draw attention to how design tactics represent additions or omissions of information at various levels (the data, visual representation, textual annotations, and interactivity) and how visualizations denote and connote phenomena with reference to unstated viewing conventions and codes. Classes of rhetorical techniques identified via a systematic analysis of recent narrative visualizations are presented, and characterized according to their rhetorical contribution to the visualization. We describe how designers and researchers can benefit from the potentially positive aspects of visualization rhetoric in designing engaging, layered narrative visualizations and how our framework can shed light on how a visualization design prioritizes specific interpretations. We identify areas where future inquiry into visualization rhetoric can improve understanding of visualization interpretation.

  2. Semantic Integrative Digital Pathology: Insights into Microsemiological Semantics and Image Analysis Scalability.

    PubMed

    Racoceanu, Daniel; Capron, Frédérique

    2016-01-01

    be devoted to morphological microsemiology (microscopic morphology semantics). Besides insuring the traceability of the results (second opinion) and supporting the orchestration of high-content image analysis modules, the role of semantics will be crucial for the correlation between digital pathology and noninvasive medical imaging modalities. In addition, semantics has an important role in modelling the links between traditional microscopy and recent label-free technologies. The massive amount of visual data is challenging and represents a characteristic intrinsic to digital pathology. The design of an operational integrative microscopy framework needs to focus on scalable multiscale imaging formalism. In this sense, we prospectively consider some of the most recent scalable methodologies adapted to digital pathology as marked point processes for nuclear atypia and point-set mathematical morphology for architecture grading. To orchestrate this scalable framework, semantics-based WSI management (analysis, exploration, indexing, retrieval and report generation support) represents an important means towards approaches to integrating big data into biomedicine. This insight reflects our vision through an instantiation of essential bricks of this type of architecture. The generic approach introduced here is applicable to a number of challenges related to molecular imaging, high-content image management and, more generally, bioinformatics. PMID:27100713

  3. Semantic Integrative Digital Pathology: Insights into Microsemiological Semantics and Image Analysis Scalability.

    PubMed

    Racoceanu, Daniel; Capron, Frédérique

    2016-01-01

    be devoted to morphological microsemiology (microscopic morphology semantics). Besides insuring the traceability of the results (second opinion) and supporting the orchestration of high-content image analysis modules, the role of semantics will be crucial for the correlation between digital pathology and noninvasive medical imaging modalities. In addition, semantics has an important role in modelling the links between traditional microscopy and recent label-free technologies. The massive amount of visual data is challenging and represents a characteristic intrinsic to digital pathology. The design of an operational integrative microscopy framework needs to focus on scalable multiscale imaging formalism. In this sense, we prospectively consider some of the most recent scalable methodologies adapted to digital pathology as marked point processes for nuclear atypia and point-set mathematical morphology for architecture grading. To orchestrate this scalable framework, semantics-based WSI management (analysis, exploration, indexing, retrieval and report generation support) represents an important means towards approaches to integrating big data into biomedicine. This insight reflects our vision through an instantiation of essential bricks of this type of architecture. The generic approach introduced here is applicable to a number of challenges related to molecular imaging, high-content image management and, more generally, bioinformatics.

  4. A Lightweight Remote Parallel Visualization Platform for Interactive Massive Time-varying Climate Data Analysis

    NASA Astrophysics Data System (ADS)

    Li, J.; Zhang, T.; Huang, Q.; Liu, Q.

    2014-12-01

    Today's climate datasets feature large volumes, a high degree of spatiotemporal complexity, and rapid evolution over time. Because visualizing large, distributed climate datasets is computationally intensive, traditional desktop-based visualization applications cannot handle the load. Recently, scientists have developed remote visualization techniques to address the computational issue. Remote visualization techniques usually leverage server-side parallel computing capabilities to perform visualization tasks and deliver visualization results to clients through the network. In this research, we aim to build a remote parallel visualization platform for visualizing and analyzing massive climate data. Our visualization platform was built based on ParaView, which is one of the most popular open source remote visualization and analysis applications. To further enhance the scalability and stability of the platform, we have employed cloud computing techniques to support its deployment. In this platform, all climate datasets are regular grid data stored in NetCDF format. Three types of data access methods are supported in the platform: accessing remote datasets provided by OpenDAP servers, accessing datasets hosted on the web visualization server and accessing local datasets. Regardless of the data access method, all visualization tasks are completed at the server side to reduce the workload of clients. As a proof of concept, we have implemented a set of scientific visualization methods to show the feasibility of the platform. Preliminary results indicate that the framework can address the computational limitations of desktop-based visualization applications.

  5. Visual place recognition with repetitive structures.

    PubMed

    Torii, Akihiko; Sivic, Josef; Okutomi, Masatoshi; Pajdla, Tomas

    2015-11-01

    Repeated structures such as building facades, fences or road markings often represent a significant challenge for place recognition. Repeated structures are notoriously hard for establishing correspondences using multi-view geometry. They violate the feature independence assumed in the bag-of-visual-words representation which often leads to over-counting evidence and significant degradation of retrieval performance. In this work we show that repeated structures are not a nuisance but, when appropriately represented, they form an important distinguishing feature for many places. We describe a representation of repeated structures suitable for scalable retrieval and geometric verification. The retrieval is based on robust detection of repeated image structures and a suitable modification of weights in the bag-of-visual-word model. We also demonstrate that the explicit detection of repeated patterns is beneficial for robust visual word matching for geometric verification. Place recognition results are shown on datasets of street-level imagery from Pittsburgh and San Francisco demonstrating significant gains in recognition performance compared to the standard bag-of-visual-words baseline as well as the more recently proposed burstiness weighting and Fisher vector encoding. PMID:26440272
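
    The baseline the paper improves on is easy to sketch: in a bag-of-visual-words vector, repeated structures inflate some word counts, so burstiness weighting takes the square root of counts (with idf) before comparing images. The paper's contribution goes further by detecting repeated patterns explicitly; this is only the standard weighting.

        import numpy as np

        def bow_similarity(counts_a, counts_b, doc_freq, n_images):
            idf = np.log(n_images / np.maximum(doc_freq, 1))
            a = np.sqrt(counts_a) * idf          # sqrt tempers bursty words
            b = np.sqrt(counts_b) * idf
            return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

        if __name__ == "__main__":
            df = np.array([50, 500, 5])          # document frequency per word
            a = np.array([1, 40, 0])             # word 1 is bursty in image a
            b = np.array([2, 35, 1])
            print(bow_similarity(a, b, df, n_images=1000))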

  7. RUNTHRU6.0. Translation, Enhancement, Filtering, and Visualization of Large 3D Triangle Mesh

    SciTech Connect

    Janucik, F.X.; Ross, D.M.; Sischo, K.F.

    1997-01-01

    The runthru system consists of five programs: workcell filter, just do it, transl8g, decim8, and runthru. The workcell filter program is useful if the source of your 3D triangle mesh model is IGRIP. It will traverse a directory structure of Deneb IGRIP files and filter out any IGRIP part files that are not referenced by an accompanying IGRIP work cell file. The just do it program automates translating and/or filtering of large numbers of parts that are organized in hierarchical directory structures. The transl8g program facilitates the interchange, topology generation, error checking, and enhancement of large 3D triangle meshes. Such data is frequently used to represent conceptual designs, scientific visualization volume modeling, or discretely sampled data. Interchange is provided between several popular commercial and de facto standard geometry formats. Error checking is included to identify duplicate and zero-area triangles. Model enhancement features include common vertex joining, consistent triangle vertex ordering, vertex normal vector averaging, and triangle strip generation. Many of the traditional O(n²) algorithms required to provide the above features have been recast and are O(n log n), which supports large mesh sizes. The decim8 program is based on a data filter algorithm that significantly reduces the number of triangles required to represent 3D models of geometry, scientific visualization results, and discretely sampled data. It eliminates local patches of triangles whose geometries are not appreciably different and replaces them with fewer, larger triangles. The algorithm has been used to reduce triangles in large conceptual design models to facilitate virtual walk-throughs and to enable interactive viewing of large 3D iso-surface volume visualizations. The runthru program provides high performance interactive display and manipulation of 3D triangle mesh models.

  8. Translation, Enhancement, Filtering, and Visualization of Large 3D Triangle Mesh

    SciTech Connect

    1997-04-21

    The runthru system consists of five programs: workcell filter, just do it, transl8g, decim8, and runthru. The workcell filter program is useful if the source of your 3D triangle mesh model is IGRIP. It will traverse a directory structure of Deneb IGRIP files and filter out any IGRIP part files that are not referenced by an accompanying IGRIP work cell file. The just do it program automates translating and/or filtering of large numbers of parts that are organized in hierarchical directory structures. The transl8g program facilitates the interchange, topology generation, error checking, and enhancement of large 3D triangle meshes. Such data is frequently used to represent conceptual designs, scientific visualization volume modeling, or discretely sampled data. Interchange is provided between several popular commercial and de facto standard geometry formats. Error checking is included to identify duplicate and zero-area triangles. Model enhancement features include common vertex joining, consistent triangle vertex ordering, vertex normal vector averaging, and triangle strip generation. Many of the traditional O(n²) algorithms required to provide the above features have been recast and are O(n log n), which supports large mesh sizes. The decim8 program is based on a data filter algorithm that significantly reduces the number of triangles required to represent 3D models of geometry, scientific visualization results, and discretely sampled data. It eliminates local patches of triangles whose geometries are not appreciably different and replaces them with fewer, larger triangles. The algorithm has been used to reduce triangles in large conceptual design models to facilitate virtual walk-throughs and to enable interactive viewing of large 3D iso-surface volume visualizations. The runthru program provides high performance interactive display and manipulation of 3D triangle mesh models.
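
    Two of the recast O(n log n) steps are easy to sketch: common vertex joining by quantizing coordinates to grid keys and deduplicating them (instead of all-pairs comparison), and zero-area triangle checking via cross products. Tolerances and names are illustrative, not transl8g's actual parameters.

        import numpy as np

        def join_vertices(triangles, tol=1e-6):
            """triangles: (n, 3, 3) per-corner xyz -> (verts, faces) indexed mesh."""
            corners = triangles.reshape(-1, 3)
            keys = np.round(corners / tol).astype(np.int64)   # quantized grid keys
            uniq, inverse = np.unique(keys, axis=0, return_inverse=True)
            verts = np.zeros((len(uniq), 3))
            verts[inverse] = corners            # one representative per grid cell
            return verts, inverse.reshape(-1, 3)

        def nonzero_area(verts, faces, eps=1e-12):
            a, b, c = (verts[faces[:, k]] for k in range(3))
            areas = 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1)
            return faces[areas > eps]           # drop degenerate triangles

        if __name__ == "__main__":
            tris = np.array([[[0, 0, 0], [1, 0, 0], [0, 1, 0]],
                             [[1, 0, 0], [0, 1, 0], [1, 1, 0]]], dtype=float)
            v, f = join_vertices(tris)
            print(len(v), "unique vertices;", nonzero_area(v, f).tolist())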

  9. Why Teach Visual Culture?

    ERIC Educational Resources Information Center

    Passmore, Kaye

    2007-01-01

    Visual culture is a hot topic in art education right now as some teachers are dedicated to teaching it and others are adamant that it has no place in a traditional art class. Visual culture, the author asserts, can include just about anything that is visually represented. Although people often think of visual culture as contemporary visuals such…

  10. GASPRNG: GPU accelerated scalable parallel random number generator library

    NASA Astrophysics Data System (ADS)

    Gao, Shuang; Peterson, Gregory D.

    2013-04-01

    Graphics processors represent a promising technology for accelerating computational science applications. Many computational science applications require fast and scalable random number generation with good statistical properties, so they use the Scalable Parallel Random Number Generators library (SPRNG). We present the GPU Accelerated SPRNG library (GASPRNG) to accelerate SPRNG in GPU-based high performance computing systems. GASPRNG includes code for a host CPU and CUDA code for execution on NVIDIA graphics processing units (GPUs), along with a programming interface to support various usage models for pseudorandom numbers and computational science applications executing on the CPU, GPU, or both. This paper describes the implementation approach used to produce high performance and how to use the programming interface. The programming interface allows a user to use GASPRNG the same way as SPRNG on traditional serial or parallel computers, as well as to develop tightly coupled programs executing primarily on the GPU. We also describe how to install and use GASPRNG. To help illustrate linking with GASPRNG, demonstration codes are included for the different usage models. GASPRNG on a single GPU shows up to 280x speedup over SPRNG on a single CPU core and scales for larger systems in the same manner as SPRNG. Because GASPRNG generates identical streams of pseudorandom numbers as SPRNG, users can be confident about the quality of GASPRNG for scalable computational science applications.
    Catalogue identifier: AEOI_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOI_v1_0.html
    Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland
    Licensing provisions: UTK license.
    No. of lines in distributed program, including test data, etc.: 167900
    No. of bytes in distributed program, including test data, etc.: 1422058
    Distribution format: tar.gz
    Programming language: C and CUDA.
    Computer: Any PC or
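
    The core contract such a library provides is many independent, reproducible streams, one per worker. A numpy illustration of that contract (this is not the SPRNG/GASPRNG API): spawn one child seed per rank; each rank's stream is then identical across runs however ranks map to CPUs or GPUs.

        import numpy as np

        root = np.random.SeedSequence(12345)
        streams = [np.random.Generator(np.random.Philox(s)) for s in root.spawn(4)]
        for rank, gen in enumerate(streams):    # e.g., one stream per MPI rank/GPU
            print(rank, gen.random(3))          # reproducible for the fixed root seed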

  11. A Scalable Gaussian Process Analysis Algorithm for Biomass Monitoring

    SciTech Connect

    Chandola, Varun; Vatsavai, Raju

    2011-01-01

    Biomass monitoring is vital for studying the carbon cycle of earth's ecosystem and has several significant implications, especially in the context of understanding climate change and its impacts. Recently, several change detection methods have been proposed to identify land cover changes in temporal profiles (time series) of vegetation collected using remote sensing instruments, but they do not satisfy one or both of the two requirements of the biomass monitoring problem, i.e., operating in online mode and handling periodic time series. In this paper, we adapt Gaussian process regression to detect changes in such time series in an online fashion. While Gaussian processes (GP) have been widely used as a kernel-based learning method for regression and classification, their applicability to massive spatio-temporal data sets, such as remote sensing data, has been limited owing to the high computational costs involved. We focus on addressing the scalability issues associated with the proposed GP based change detection algorithm. This paper makes several significant contributions. First, we propose a GP based online time series change detection algorithm and demonstrate its effectiveness in detecting different types of changes in Normalized Difference Vegetation Index (NDVI) data obtained from a study area in Iowa, USA. Second, we propose an efficient Toeplitz matrix based solution which significantly improves the computational complexity and memory requirements of the proposed GP based method. Specifically, the proposed solution can analyze a time series of length t in O(t²) time while maintaining an O(t) memory footprint, compared to the O(t³) time and O(t²) memory requirement of standard matrix manipulation based methods. Third, we describe a parallel version of the proposed solution which can be used to simultaneously analyze a large number of time series. We study three different parallel implementations: using threads, MPI, and a hybrid
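
    The Toeplitz trick is directly available in scipy: for a regularly sampled series and a stationary kernel, the covariance matrix is constant along diagonals, so the GP solve K⁻¹y runs via Levinson recursion from just the first column. The kernel shape and noise level below are illustrative, not the paper's settings.

        import numpy as np
        from scipy.linalg import solve_toeplitz

        t = np.arange(512)
        y = (np.sin(2 * np.pi * t / 46.0)
             + 0.1 * np.random.default_rng(0).standard_normal(512))

        # First column of the Toeplitz covariance K = stationary RBF kernel + noise.
        col = np.exp(-0.5 * (t / 25.0) ** 2)
        col[0] += 0.1 ** 2
        alpha = solve_toeplitz(col, y)     # K^{-1} y without forming K explicitly
        print(alpha[:4])                   # the predictive mean is K_* @ alpha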

  12. High-performance, scalable optical network-on-chip architectures

    NASA Astrophysics Data System (ADS)

    Tan, Xianfang

    The rapid advance of technology enables a large number of processing cores to be integrated into a single chip, which is called a Chip Multiprocessor (CMP) or a Multiprocessor System-on-Chip (MPSoC) design. The on-chip interconnection network, which is the communication infrastructure for these processing cores, plays a central role in a many-core system. With the continuously increasing complexity of many-core systems, traditional metallic wired electronic networks-on-chip (NoC) have become a bottleneck because of the unbearable latency in data transmission and extremely high energy consumption on chip. Optical networks-on-chip (ONoC) have been proposed as a promising alternative paradigm for electronic NoC, with the benefits of optical signaling communication such as extremely high bandwidth, negligible latency, and low power consumption. This dissertation focuses on the design of high-performance and scalable ONoC architectures, and the contributions are highlighted as follows: 1. A micro-ring resonator (MRR)-based Generic Wavelength-routed Optical Router (GWOR) is proposed. A method for developing any sized GWOR is introduced. GWOR is a scalable non-blocking ONoC architecture with simple structure, low cost and high power efficiency compared to existing ONoC designs. 2. To expand the bandwidth and improve the fault tolerance of the GWOR, a redundant GWOR architecture is designed by cascading different types of GWORs into one network. 3. The redundant GWOR built with MRR-based comb switches is proposed. Comb switches can expand the bandwidth while keeping the topology of GWOR unchanged by replacing the general MRRs with comb switches. 4. A butterfly fat tree (BFT)-based hybrid optoelectronic NoC (HONoC) architecture is developed in which GWORs are used for global communication and electronic routers are used for local communication. The proposed HONoC uses fewer electronic routers and links than its electronic BFT-based NoC counterpart. It takes the advantages of

  13. Bubble pump: scalable strategy for in-plane liquid routing.

    PubMed

    Oskooei, Ali; Günther, Axel

    2015-07-01

    We present an on-chip liquid routing technique intended for application in well-based microfluidic systems that require long-term active pumping at low to medium flowrates. Our technique requires only one fluidic feature layer, one pneumatic control line and does not rely on flexible membranes and mechanical or moving parts. The presented bubble pump is therefore compatible with both elastomeric and rigid substrate materials and the associated scalable manufacturing processes. Directed liquid flow was achieved in a microchannel by an in-series configuration of two previously described "bubble gates", i.e., by gas-bubble enabled miniature gate valves. Only one time-dependent pressure signal is required and initiates at the upstream (active) bubble gate a reciprocating bubble motion. At the downstream (passive) gate a time-constant gas pressure level is applied. In its rest state, the passive gate remains closed and only temporarily opens while the liquid pressure rises due to the active gate's reciprocating bubble motion. We have designed, fabricated and consistently operated our bubble pump with a variety of working liquids for >72 hours. Flow rates of 0-5.5 μl min⁻¹ were obtained and depended on the selected geometric dimensions, working fluids and actuation frequencies. The maximum operational pressure was 2.9-9.1 kPa and depended on the interfacial tension of the working fluids. Attainable flow rates compared favorably with those of available micropumps. We achieved flow rate enhancements of 30-100% by operating two bubble pumps in tandem and demonstrated scalability of the concept in a multi-well format with 12 individually and uniformly perfused microchannels (variation in flow rate <7%). We envision the demonstrated concept to allow for the consistent on-chip delivery of a wide range of different liquids that may even include highly reactive or moisture sensitive solutions. The presented bubble pump may provide active flow control for

  14. The Co Design Architecture for Exascale Systems, a Novel Approach for Scalable Designs

    SciTech Connect

    Kagan, Michael; Shainer, Gilad; Poole, Stephen W; Shamis, Pavel; Wilde, Todd; Pak, Lui; Liu, Tong; Dubman, Mike; Shahar, Yiftah; Graham, Richard L

    2012-01-01

    High performance computing (HPC) has begun scaling beyond the Petaflop range towards the Exaflop (1000 Petaflops) mark. One of the major concerns throughout the development toward such performance capability is scalability, both at the system level and the application layer. In this paper we present a novel design concept, the Co Design approach, which enables tighter co-development of the application communication libraries and the underlying hardware interconnect solution in order to overcome scalability issues and to enable a more efficient design approach towards Exascale computing. We have suggested a new application programming interface and have demonstrated a 50x improvement in performance and scalability.

  15. Energy-absorption capability and scalability of square cross section composite tube specimens

    NASA Technical Reports Server (NTRS)

    Farley, Gary L.

    1987-01-01

    Static crushing tests were conducted on graphite/epoxy and Kevlar/epoxy square cross section tubes to study the influence of specimen geometry on the energy-absorption capability and scalability of composite materials. The tube inside width-to-wall thickness (W/t) ratio was determined to significantly affect the energy-absorption capability of composite materials. As W/t ratio decreases, the energy-absorption capability increases nonlinearly. The energy-absorption capability of Kevlar/epoxy tubes was found to be geometrically scalable, but the energy-absorption capability of graphite/epoxy tubes was not geometrically scalable.

  16. Learning Visualizations by Analogy: Promoting Visual Literacy through Visualization Morphing.

    PubMed

    Ruchikachorn, Puripant; Mueller, Klaus

    2015-09-01

    We propose the concept of teaching (and learning) unfamiliar visualizations by analogy, that is, demonstrating an unfamiliar visualization method by linking it to another, more familiar one, where the in-betweens are designed to bridge the gap between the two visualizations and explain the difference in a gradual manner. As opposed to a textual description, our morphing explains an unfamiliar visualization through purely visual means. We demonstrate our idea by way of four visualization pair examples: data table and parallel coordinates, scatterplot matrix and hyperbox, linear chart and spiral chart, and hierarchical pie chart and treemap. The analogy is commutative, i.e., either member of the pair can be the unfamiliar visualization. A series of studies showed that this new paradigm can be an effective teaching tool. The participants could understand the unfamiliar visualization methods in all four pairs either fully or at least significantly better after they observed or interacted with the transitions from the familiar counterpart. The four examples suggest how helpful visualization pairings can be identified, and they will hopefully inspire the identification of other visualization morphings and associated transition strategies.

  17. High Performance Multivariate Visual Data Exploration for Extremely Large Data

    SciTech Connect

    Rubel, Oliver; Wu, Kesheng; Childs, Hank; Meredith, Jeremy; Geddes, Cameron G.R.; Cormier-Michel, Estelle; Ahern, Sean; Weber, Gunther H.; Messmer, Peter; Hagen, Hans; Hamann, Bernd; Bethel, E. Wes; Prabhat,

    2008-08-22

    One of the central challenges in modern science is the need to quickly derive knowledge and understanding from large, complex collections of data. We present a new approach that addresses this challenge by combining and extending techniques from high performance visual data analysis and scientific data management. This approach is demonstrated within the context of gaining insight from complex, time-varying datasets produced by a laser wakefield accelerator simulation. Our approach leverages histogram-based parallel coordinates both for visual information display and as a vehicle for guiding a data mining operation. Data extraction and subsetting are implemented with state-of-the-art index/query technology. This approach, while applied here to accelerator science, is generally applicable to a broad set of science applications, and is implemented in a production-quality visual data analysis infrastructure. We conduct a detailed performance analysis and demonstrate good scalability on a distributed memory Cray XT4 system.

  18. SVOPME: A scalable virtual organization privileges management environment

    SciTech Connect

    Garzoglio, Gabriele; Wang, Nanbor; Sfiligoi, Igor; Levshina, Tanya; Anathan, Balamurali; /Tech-X, Boulder

    2009-05-01

    Grids enable uniform access to resources by implementing standard interfaces to resource gateways. In the Open Science Grid (OSG), privileges are granted on the basis of the user's membership in a Virtual Organization (VO). However, Grid sites are solely responsible for determining and controlling access privileges to resources on the basis of users' identities and personal attributes, which are available through Grid credentials. While this guarantees the sites full control over access rights, it makes VO privileges heterogeneous throughout the Grid and hardly fits the Grid paradigm of uniform access to resources. To address these challenges, we are developing the Scalable Virtual Organization Privileges Management Environment (SVOPME), which provides tools for VOs to define and publish desired privileges and assists sites in providing the appropriate access policies. Moreover, SVOPME provides tools for Grid sites to analyze site access policies for various resources, verify compliance with preferred VO policies, and generate directives for site administrators on how the local access policies can be amended to achieve such compliance without taking control of local configurations away from site administrators. This paper discusses which access policies are of interest to the OSG community and how SVOPME implements privilege management for OSG.

  19. Jumping-Droplet-Enhanced Condensation on Scalable Superhydrophobic Nanostructured Surfaces

    SciTech Connect

    Miljkovic, N; Enright, R; Nam, Y; Lopez, K; Dou, N; Sack, J; Wang, E

    2013-01-09

    When droplets coalesce on a superhydrophobic nanostructured surface, the resulting droplet can jump from the surface due to the release of excess surface energy. If designed properly, these superhydrophobic nanostructured surfaces can not only allow for easy droplet removal at micrometric length scales during condensation but also promise to enhance heat transfer performance. However, the rationale for the design of an ideal nanostructured surface as well as heat transfer experiments demonstrating the advantage of this jumping behavior are lacking. Here, we show that silanized copper oxide surfaces created via a simple fabrication method can achieve highly efficient jumping-droplet condensation heat transfer. We experimentally demonstrated a 25% higher overall heat flux and 30% higher condensation heat transfer coefficient compared to state-of-the-art hydrophobic condensing surfaces at low supersaturations (<1.12). This work not only shows significant condensation heat transfer enhancement but also promises a low cost and scalable approach to increase efficiency for applications such as atmospheric water harvesting and dehumidification. Furthermore, the results offer insights and an avenue to achieve high flux superhydrophobic condensation.

  20. The Simulation of Real-time Scalable Coherent Interface

    NASA Technical Reports Server (NTRS)

    Li, Qiang; Grant, Terry; Grover, Radhika S.

    1997-01-01

    Scalable Coherent Interface (SCI, IEEE/ANSI Std 1596-1992) (SCI1, SCI2) is a high performance interconnect for shared memory multiprocessor systems. In this project we investigate an SCI Real Time Protocol (RTSCI1) using Directed Flow Control Symbols. We studied the issues of efficient generation of control symbols and created a simulation model of the protocol on a ring-based SCI system. This report presents the results of the study. The project has been implemented using SES/Workbench. The details that follow encompass aspects of both SCI and Flow Control Protocols, as well as the effect of realistic client/server processing delay. The report is organized as follows. Section 2 provides a description of the simulation model. Section 3 describes the protocol implementation details. The next three sections of the report elaborate on the workload, results and conclusions. Appended to the report is a description of SES/Workbench, the tool used in our simulation, and internal details of our implementation of the protocol.

  1. Mindfulness and compassion: an examination of mechanism and scalability.

    PubMed

    Lim, Daniel; Condon, Paul; DeSteno, David

    2015-01-01

    Emerging evidence suggests that meditation engenders prosocial behaviors meant to benefit others. However, the robustness, underlying mechanisms, and potential scalability of such effects remain open to question. The current experiment employed an ecologically valid situation that exposed participants to a person in visible pain. Following three-week, mobile-app-based training courses in mindfulness meditation or cognitive skills (i.e., an active control condition), participants arrived at a lab individually to complete purported measures of cognitive ability. Upon entering a public waiting area outside the lab that contained three chairs, participants seated themselves in the last remaining unoccupied chair; confederates occupied the other two. As the participant sat and waited, a third confederate using crutches and a large walking boot entered the waiting area while displaying discomfort. Compassionate responding was assessed by whether participants gave up their seat to allow the uncomfortable confederate to sit, thereby relieving her pain. Participants' levels of empathic accuracy were also assessed. As predicted, participants assigned to the mindfulness meditation condition gave up their seats more frequently than did those assigned to the active control group. In addition, empathic accuracy was not increased by mindfulness practice, suggesting that mindfulness-enhanced compassionate behavior does not stem from associated increases in the ability to decode the emotional experiences of others. PMID:25689827

  2. Scalable Indoor Localization via Mobile Crowdsourcing and Gaussian Process

    PubMed Central

    Chang, Qiang; Li, Qun; Shi, Zesen; Chen, Wei; Wang, Weiping

    2016-01-01

    Indoor localization using Received Signal Strength Indication (RSSI) fingerprinting has been extensively studied for decades. The positioning accuracy is highly dependent on the density of the signal database. In areas without calibration data, however, this algorithm breaks down. Building and updating a dense signal database is labor intensive, expensive, and even impossible in some areas. Researchers are continually searching for better algorithms to create and update dense databases more efficiently. In this paper, we propose a scalable indoor positioning algorithm that works both in surveyed and unsurveyed areas. We first propose the Minimum Inverse Distance (MID) algorithm to build a virtual database with uniformly distributed virtual Reference Points (RPs). The area covered by the virtual RPs can be larger than the surveyed area. A Local Gaussian Process (LGP) is then applied to estimate the virtual RPs’ RSSI values based on the crowdsourced training data. Finally, we improve the Bayesian algorithm to estimate the user’s location using the virtual database. All the parameters are optimized by simulations, and the new algorithm is tested on real-case scenarios. The results show that the new algorithm improves the accuracy by 25.5% in the surveyed area, with an average positioning error below 2.2 m for 80% of the cases. Moreover, the proposed algorithm can localize the users in the neighboring unsurveyed area. PMID:26999139
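
    To make the database-construction idea concrete, the following is a minimal sketch of Gaussian-process regression of RSSI over 2-D locations, using a squared-exponential kernel and a single global GP rather than the paper's Local Gaussian Process; all function names, kernel parameters and the toy data are illustrative assumptions, not the authors' code.

        import numpy as np

        def rbf_kernel(A, B, length_scale=5.0, sigma_f=4.0):
            """Squared-exponential kernel between two sets of 2-D points."""
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return sigma_f ** 2 * np.exp(-0.5 * d2 / length_scale ** 2)

        def gp_predict_rssi(X_train, y_train, X_virtual, noise=2.0):
            """Posterior-mean RSSI at virtual reference points, given
            crowdsourced (location, RSSI) samples; one global GP here,
            where the paper uses local GPs for scalability."""
            K = rbf_kernel(X_train, X_train) + noise ** 2 * np.eye(len(X_train))
            K_s = rbf_kernel(X_virtual, X_train)
            alpha = np.linalg.solve(K, y_train)
            return K_s @ alpha

        # Hypothetical usage: 50 crowdsourced samples, a 10x10 grid of virtual RPs
        rng = np.random.default_rng(1)
        X_train = rng.uniform(0.0, 20.0, (50, 2))                      # metres
        y_train = -40.0 - 2.5 * np.linalg.norm(X_train - 10.0, axis=1)  # toy RSSI, dBm
        gx, gy = np.meshgrid(np.linspace(0, 20, 10), np.linspace(0, 20, 10))
        X_virtual = np.column_stack([gx.ravel(), gy.ravel()])
        rssi_virtual = gp_predict_rssi(X_train, y_train, X_virtual)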

  3. A scalable population code for time in the striatum.

    PubMed

    Mello, Gustavo B M; Soares, Sofia; Paton, Joseph J

    2015-05-01

    To guide behavior and learn from its consequences, the brain must represent time over many scales. Yet, the neural signals used to encode time in the seconds-to-minute range are not known. The striatum is a major input area of the basal ganglia associated with learning and motor function. Previous studies have also shown that the striatum is necessary for normal timing behavior. To address how striatal signals might be involved in timing, we recorded from striatal neurons in rats performing an interval timing task. We found that neurons fired at delays spanning tens of seconds and that this pattern of responding reflected the interaction between time and the animals' ongoing sensorimotor state. Surprisingly, cells rescaled responses in time when intervals changed, indicating that striatal populations encoded relative time. Moreover, time estimates decoded from activity predicted timing behavior as animals adjusted to new intervals, and disrupting striatal function led to a decrease in timing performance. These results suggest that striatal activity forms a scalable population code for time, providing timing signals that animals use to guide their actions.

  4. Scalable Influence Estimation in Continuous-Time Diffusion Networks

    PubMed Central

    Du, Nan; Song, Le; Gomez-Rodriguez, Manuel; Zha, Hongyuan

    2014-01-01

    If a piece of information is released from a media site, can we predict whether it may spread to one million web pages in a month? This influence estimation problem is very challenging since both the time-sensitive nature of the task and the requirement of scalability need to be addressed simultaneously. In this paper, we propose a randomized algorithm for influence estimation in continuous-time diffusion networks. Our algorithm can estimate the influence of every node in a network with |V| nodes and |E| edges to an accuracy of ε using n = O(1/ε²) randomizations and, up to logarithmic factors, O(n|E| + n|V|) computations. When used as a subroutine in a greedy influence maximization approach, our proposed algorithm is guaranteed to find a set of C nodes with an influence of at least (1 − 1/e) OPT − 2Cε, where OPT is the optimal value. Experiments on both synthetic and real-world data show that the proposed algorithm can easily scale up to networks of millions of nodes while improving significantly over the previous state of the art in terms of the accuracy of the estimated influence and the quality of the selected nodes in maximizing the influence. PMID:26752940
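
    For reference, what such an estimator computes can be sketched with a naive Monte Carlo baseline (not the paper's randomized neighborhood-size estimator): sample one realization of random edge delays, run Dijkstra from the source, and count nodes reached within the window T. The adjacency-dict format and the exponential delay model are illustrative assumptions.

        import heapq
        import random

        def influence_mc(graph, source, T, n_samples=1000, rate=1.0):
            """Naive Monte Carlo estimate of influence sigma(source, T):
            the expected number of nodes reached within time T. graph maps
            node -> list of neighbors; each edge delay is drawn
            Exponential(rate) independently per realization."""
            total = 0
            for _ in range(n_samples):
                dist = {source: 0.0}
                done = set()
                pq = [(0.0, source)]
                while pq:
                    d, u = heapq.heappop(pq)
                    if u in done:
                        continue
                    done.add(u)
                    for v in graph.get(u, []):
                        nd = d + random.expovariate(rate)   # sampled edge delay
                        if nd <= T and nd < dist.get(v, float("inf")):
                            dist[v] = nd
                            heapq.heappush(pq, (nd, v))
                total += len(done)   # nodes infected within T in this sample
            return total / n_samples

        # Hypothetical 4-node network, observation window T = 1.0
        g = {0: [1, 2], 1: [3], 2: [3]}
        print(influence_mc(g, source=0, T=1.0))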

  5. Scalable PGAS Metadata Management on Extreme Scale Systems

    SciTech Connect

    Chavarría-Miranda, Daniel; Agarwal, Khushbu; Straatsma, TP

    2013-05-16

    Programming models intended to run on exascale systems have a number of challenges to overcome, especially the sheer size of the system as measured by the number of concurrent software entities created and managed by the underlying runtime. It is clear from the size of these systems that any state maintained by the programming model has to be strictly sub-linear in size, in order not to overwhelm memory usage with pure overhead. A principal feature of Partitioned Global Address Space (PGAS) models is providing easy access to global-view distributed data structures. In order to provide efficient access to these distributed data structures, PGAS models must keep track of metadata such as where array sections are located with respect to processes/threads running on the HPC system. As PGAS models and applications become ubiquitous on very large trans-petascale systems, a key component of their performance and scalability will be efficient and judicious use of memory for model overhead (metadata) compared to application data. We present an evaluation of several strategies to manage PGAS metadata that exhibit different space/time tradeoffs. We use two real-world PGAS applications to capture metadata usage patterns and gain insight into their communication behavior.
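
    The space/time tradeoff at stake can be illustrated with two toy metadata strategies: for a regular block distribution, the owner of any array element is computed in O(1) from a few integers, while irregular distributions need a directory whose entries can themselves be hashed across processes to stay sub-linear per process. Both functions below are illustrative sketches, not the strategies evaluated in the paper.

        def block_owner(global_index, n_elements, n_procs):
            """O(1) metadata for a regular block distribution: the owner of
            any element is computed, not stored, so per-array metadata is a
            constant few integers regardless of system size."""
            block = -(-n_elements // n_procs)      # ceil(n_elements / n_procs)
            return global_index // block

        def directory_home(array_id, section_id, n_procs):
            """Toy hashed directory for irregular distributions: spreading
            directory entries over processes keeps per-process metadata
            sub-linear, at the cost of one extra lookup hop."""
            return hash((array_id, section_id)) % n_procs

        print(block_owner(123456, n_elements=10**9, n_procs=4096))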

  6. Superconductor digital electronics: Scalability and energy efficiency issues (Review Article)

    NASA Astrophysics Data System (ADS)

    Tolpygo, Sergey K.

    2016-05-01

    Superconductor digital electronics using Josephson junctions as ultrafast switches and magnetic-flux encoding of information was proposed over 30 years ago as a sub-terahertz clock frequency alternative to semiconductor electronics based on complementary metal-oxide-semiconductor (CMOS) transistors. Recently, interest in developing superconductor electronics has been renewed due to a search for energy-saving solutions in applications related to high-performance computing. The current state of superconductor electronics and fabrication processes is reviewed in order to evaluate whether this electronics is scalable to the very large scale integration (VLSI) required to achieve computation complexities comparable to CMOS processors. A fully planarized process at MIT Lincoln Laboratory, perhaps the most advanced process developed so far for superconductor electronics, is used as an example. The process has nine superconducting layers: eight Nb wiring layers with a minimum feature size of 350 nm, and a thin superconducting layer for making compact high-kinetic-inductance bias inductors. All circuit layers are fully planarized using chemical mechanical planarization (CMP) of the SiO2 interlayer dielectric. The physical limitations imposed on circuit density by Josephson junctions, circuit inductors, shunt and bias resistors, etc., are discussed. Energy dissipation in superconducting circuits is also reviewed in order to estimate whether this technology, which requires cryogenic refrigeration, can be energy efficient. Fabrication process development required for increasing the density of superconductor digital circuits by a factor of ten and achieving densities above 10^7 Josephson junctions per cm^2 is described.

  7. Developing a scalable modeling architecture for studying survivability technologies

    NASA Astrophysics Data System (ADS)

    Mohammad, Syed; Bounker, Paul; Mason, James; Brister, Jason; Shady, Dan; Tucker, David

    2006-05-01

    To facilitate interoperability of models in a scalable environment, and provide a relevant virtual environment in which Survivability technologies can be evaluated, the US Army Research Development and Engineering Command (RDECOM) Modeling Architecture for Technology Research and Experimentation (MATREX) Science and Technology Objective (STO) program has initiated the Survivability Thread, which seeks to address some of the many technical and programmatic challenges associated with the effort. In coordination with different Thread customers, such as the Survivability branches of various Army labs, a collaborative group has been formed to define the requirements for a simulation environment that would in turn provide them a value-added tool for assessing models and gauging system-level performance relevant to Future Combat Systems (FCS) and the Survivability requirements of other burgeoning programs. An initial set of customer requirements has been generated in coordination with the RDECOM Survivability IPT lead, through the Survivability Technology Area at the RDECOM Tank-automotive Research Development and Engineering Center (TARDEC, Warren, MI). The results of this project are aimed at a culminating experiment and demonstration scheduled for September 2006, which will include a multitude of components from within RDECOM and provide the framework for future experiments to support Survivability research. This paper details the components with which the MATREX Survivability Thread was created and executed, and provides insight into the capabilities currently demanded by the Survivability community within RDECOM.

  8. The Scalable Coherent Interface and related standards projects

    SciTech Connect

    Gustavson, D.B.

    1991-09-01

    The Scalable Coherent Interface (SCI) project (IEEE P1596) found a way to avoid the limits that are inherent in bus technology. SCI provides bus-like services by transmitting packets on a collection of point-to-point unidirectional links. The SCI protocols support cache coherence in a distributed-shared-memory multiprocessor model, message passing, I/O, and local-area-network-like communication over fiber optic or wire links. VLSI circuits that operate parallel links at 1000 MByte/s and serial links at 1000 Mbit/s will be available early in 1992. Several ongoing SCI-related projects are applying the SCI technology to new areas or extending it to more difficult problems. P1596.1 defines the architecture of a bridge between SCI and VME; P1596.2 compatibly extends the cache coherence mechanism for efficient operation with kiloprocessor systems; P1596.3 defines new low-voltage (about 0.25 V) differential signals suitable for low power interfaces for CMOS or GaAs VLSI implementations of SCI; P1596.4 defines a high performance memory chip interface using these signals; P1596.5 defines data transfer formats for efficient interprocessor communication in heterogeneous multiprocessor systems. This paper reports the current status of SCI, related standards, and new projects. 16 refs.

  9. Building Scalable PGAS Communication Subsystem on Blue Gene/Q

    SciTech Connect

    Vishnu, Abhinav; Kerbyson, Darren J.; Barker, Kevin J.; van Dam, Hubertus

    2013-05-20

    This paper presents a design of scalable Partitioned Global Address Space (PGAS) communication subsystems on the recently proposed Blue Gene/Q architecture. The proposed design provides an in-depth modeling of the communication infrastructure using the Parallel Active Messaging Interface (PAMI). The communication infrastructure is used to design time- and space-efficient communication protocols for frequently used data types (contiguous, uniformly non-contiguous) using Remote Direct Memory Access (RDMA) get/put primitives. The proposed design accelerates load-balance counters by using asynchronous threads, which are required due to the missing network hardware support for Atomic Memory Operations (AMOs). Under the proposed design, synchronization traffic is reduced by tracking conflicting memory accesses in distributed space, with a slight increase in space complexity. An evaluation with simple communication benchmarks shows an adjacent-node get latency of 2.89 μs and a peak bandwidth of 1775 MB/s, resulting in approximately 99% communication efficiency. The evaluation also shows a reduction in execution time of up to 30% for the NWChem self-consistent field calculation on 4096 processes using the proposed asynchronous-thread-based design.

  10. Phonon-based scalable quantum computing and sensing (Presentation Video)

    NASA Astrophysics Data System (ADS)

    El-Kady, Ihab

    2015-04-01

    Quantum computing fundamentally depends on the ability to concurrently entangle and individually address/control a large number of qubits. In general, the primary inhibitors of large scale entanglement are qubit dependent; for example inhomogeneity in quantum dots, spectral crowding brought about by proximity-based entanglement in ions, weak interactions of neutral atoms, and the fabrication tolerances in the case of Si-vacancies or SQUIDs. We propose an inherently scalable solid-state qubit system with individually addressable qubits based on the coupling of a phonon with an acceptor impurity in a high-Q Phononic Crystal resonant cavity. Due to their unique nonlinear properties, phonons enable new opportunities for quantum devices and physics. We present a phononic crystal-based platform for observing the phonon analogy of cavity quantum electrodynamics, called phonodynamics, in a solid-state system. Practical schemes involve selective placement of a single acceptor atom in the peak of the strain field in a high-Q phononic crystal cavity that enables strong coupling of the phonon modes to the energy levels of the atom. A qubit is then created by entangling a phonon at the resonance frequency of the cavity with the atomic acceptor states. We show theoretical optimization of the cavity design and excitation waveguides, along with estimated performance figures of the phoniton system. Qubits based on this half-sound, half-matter quasi-particle, may outcompete other quantum architectures in terms of combined emission rate, coherence lifetime, and fabrication demands.

  11. Scalable Multicore Motion Planning Using Lock-Free Concurrency

    PubMed Central

    Ichnowski, Jeffrey; Alterovitz, Ron

    2015-01-01

    We present PRRT (Parallel RRT) and PRRT* (Parallel RRT*), sampling-based methods for feasible and optimal motion planning designed for modern multicore CPUs. We parallelize RRT and RRT* such that all threads concurrently build a single motion planning tree. Parallelization in this manner requires that data structures, such as the nearest neighbor search tree and the motion planning tree, are safely shared across multiple threads. Rather than relying on traditional locks, which can result in slowdowns due to lock contention, we introduce algorithms based on lock-free concurrency using atomic operations. We further improve scalability by using partition-based sampling (which shrinks each core’s working data set to improve cache efficiency) and parallel work-saving (which reduces the number of rewiring steps performed in PRRT*). Because PRRT and PRRT* are CPU-based, they can be directly integrated with existing libraries. We demonstrate that PRRT and PRRT* scale well as core counts increase, in some cases exhibiting superlinear speedup, in scenarios such as the Alpha Puzzle and Cubicles and the Aldebaran Nao robot performing a 2-handed task. PMID:26167135
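
    The lock-free pattern relied on here can be sketched as a compare-and-swap (CAS) retry loop. CPython exposes no hardware CAS, so the AtomicRef below emulates one (its internal lock stands in for the single atomic instruction); the point is the retry structure that lets threads insert into a shared structure without blocking. Class and method names are illustrative, not the paper's API.

        import threading

        class AtomicRef:
            """Emulated atomic reference. Real lock-free code uses a hardware
            compare-and-swap instruction; the lock here only stands in for
            that one instruction so the retry pattern can be shown."""
            def __init__(self, value=None):
                self._value = value
                self._lock = threading.Lock()
            def get(self):
                return self._value
            def cas(self, expected, new):
                with self._lock:
                    if self._value is expected:
                        self._value = new
                        return True
                    return False

        class Node:
            def __init__(self, config, nxt=None):
                self.config, self.next = config, nxt

        def push(head, config):
            """Insert without blocking: read the current head, link to it,
            and CAS it in; on failure (another thread won the race), retry."""
            node = Node(config)
            while True:
                old = head.get()
                node.next = old
                if head.cas(old, node):
                    return

        head = AtomicRef()        # shared list, e.g. a tree node's children
        push(head, (0.1, 0.2))    # hypothetical robot configuration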

  12. A Novel Coarsening Method for Scalable and Efficient Mesh Generation

    SciTech Connect

    Yoo, A; Hysom, D; Gunney, B

    2010-12-02

    matrix-vector multiplication can be performed locally on each processor and hence to minimize communication. Furthermore, a good graph partitioning scheme ensures an equal amount of computation performed on each processor. Graph partitioning is a well-known NP-complete problem, and thus the most commonly used graph partitioning algorithms employ some form of heuristics. These algorithms vary in terms of their complexity, partition generation time, and the quality of partitions, and they tend to trade off these factors. A significant challenge we are currently facing at the Lawrence Livermore National Laboratory is how to partition very large meshes on massive-size distributed memory machines like IBM BlueGene/P, where scalability becomes a big issue. For example, we have found that ParMetis, a very popular graph partitioning tool, can only scale to 16K processors. An ideal graph partitioning method in such an environment should be fast and scale to very large meshes, while producing high quality partitions. This is an extremely challenging task, as to scale to that level, the partitioning algorithm should be simple and be able to produce partitions that minimize inter-processor communication and balance the load imposed on the processors. Our goals in this work are two-fold: (1) to develop a new scalable graph partitioning method with good load balancing and communication reduction capability; (2) to study the performance of the proposed partitioning method on very large parallel machines using actual data sets and compare the performance to that of existing methods. The proposed method achieves the desired scalability by reducing the mesh size. For this, it coarsens an input mesh into a smaller mesh by coalescing the vertices and edges of the original mesh into a set of mega-vertices and mega-edges. A new coarsening method called the brick algorithm is developed in this research. In the brick algorithm, the zones in a given mesh are first grouped into fixed size
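
    The coalescing step can be illustrated for a structured 2-D mesh: zones are grouped into fixed-size bricks and each brick becomes one mega-vertex, shrinking the graph handed to the partitioner. The sketch below is an illustrative simplification under a structured-grid assumption, not the paper's general brick algorithm.

        import numpy as np

        def coarsen_grid(nx, ny, bx, by):
            """Map each zone of an nx-by-ny structured mesh to a mega-vertex
            by grouping zones into fixed-size bx-by-by bricks; returns an
            (nx*ny,) array of mega-vertex ids."""
            zone = np.arange(nx * ny)
            ix, iy = zone % nx, zone // nx
            bricks_x = -(-nx // bx)               # ceil(nx / bx)
            return (iy // by) * bricks_x + (ix // bx)

        # A partitioner then runs on the much smaller mega-vertex graph, and
        # each zone inherits the part assigned to its mega-vertex.
        mega = coarsen_grid(nx=1024, ny=1024, bx=8, by=8)
        print(mega.max() + 1)    # 1M zones collapsed to 16384 mega-vertices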

  13. Scalable Metropolis Monte Carlo for simulation of hard shapes

    NASA Astrophysics Data System (ADS)

    Anderson, Joshua A.; Eric Irrgang, M.; Glotzer, Sharon C.

    2016-07-01

    We design and implement a scalable hard particle Monte Carlo simulation toolkit (HPMC), and release it open source as part of HOOMD-blue. HPMC runs in parallel on many CPUs and many GPUs using domain decomposition. We employ BVH trees instead of cell lists on the CPU for fast performance, especially with large particle size disparity, and optimize inner loops with SIMD vector intrinsics on the CPU. Our GPU kernel proposes many trial moves in parallel on a checkerboard and uses a block-level queue to redistribute work among threads and avoid divergence. HPMC supports a wide variety of shape classes, including spheres/disks, unions of spheres, convex polygons, convex spheropolygons, concave polygons, ellipsoids/ellipses, convex polyhedra, convex spheropolyhedra, spheres cut by planes, and concave polyhedra. NVT and NPT ensembles can be run in 2D or 3D triclinic boxes. Additional integration schemes permit Frenkel-Ladd free energy computations and implicit depletant simulations. In a benchmark system of a fluid of 4096 pentagons, HPMC performs 10 million sweeps in 10 min on 96 CPU cores on XSEDE Comet. The same simulation would take 7.6 h in serial. HPMC also scales to large system sizes, and the same benchmark with 16.8 million particles runs in 1.4 h on 2048 GPUs on OLCF Titan.
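
    For hard shapes the Metropolis acceptance rule reduces to overlap rejection; the sketch below shows one serial sweep for 2-D hard disks in a periodic box. HPMC's contribution is parallelizing such trial moves over checkerboard cells on many CPUs and GPUs, none of which is shown here; all parameters are illustrative.

        import numpy as np

        def hard_disk_sweep(pos, box, radius, d_max, rng):
            """One serial Metropolis sweep for 2-D hard disks: propose a
            random displacement per particle and accept only moves that
            create no overlap (the hard-shape acceptance rule)."""
            for i in rng.permutation(len(pos)):
                trial = (pos[i] + rng.uniform(-d_max, d_max, 2)) % box
                delta = (pos - trial + box / 2) % box - box / 2   # minimum image
                dist2 = (delta ** 2).sum(1)
                dist2[i] = np.inf                                 # ignore self
                if dist2.min() >= (2 * radius) ** 2:              # no overlap
                    pos[i] = trial
            return pos

        rng = np.random.default_rng(7)
        pos = rng.uniform(0.0, 20.0, (64, 2))   # 64 disks in a 20x20 periodic box
        pos = hard_disk_sweep(pos, box=20.0, radius=0.4, d_max=0.3, rng=rng)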

  14. Multi-jagged: A scalable parallel spatial partitioning algorithm

    SciTech Connect

    Deveci, Mehmet; Rajamanickam, Sivasankaran; Devine, Karen D.; Catalyurek, Umit V.

    2015-03-18

    Geometric partitioning is fast and effective for load-balancing dynamic applications, particularly those requiring geometric locality of data (particle methods, crash simulations). We present, to our knowledge, the first parallel implementation of a multidimensional-jagged geometric partitioner. In contrast to the traditional recursive coordinate bisection algorithm (RCB), which recursively bisects subdomains perpendicular to their longest dimension until the desired number of parts is obtained, our algorithm does recursive multi-section with a given number of parts in each dimension. By computing multiple cut lines concurrently and intelligently deciding when to migrate data while computing the partition, we minimize data movement compared to efficient implementations of recursive bisection. We demonstrate the algorithm's scalability and quality relative to the RCB implementation in Zoltan on both real and synthetic datasets. Our experiments show that the proposed algorithm performs and scales better than RCB in terms of run-time without degrading the load balance. Lastly, our implementation partitions 24 billion points into 65,536 parts within a few seconds and exhibits near perfect weak scaling up to 6K cores.
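
    The multisection idea can be sketched in two dimensions: cut the x axis into slabs at quantiles, then cut each slab independently along y, so the y cuts are "jagged" rather than forming a uniform grid. The serial, unweighted sketch below is illustrative only; the paper's contribution is the parallel, migration-aware implementation.

        import numpy as np

        def multi_jagged_2d(points, parts_x, parts_y):
            """Two-level multisection: quantile cuts along x define slabs,
            then each slab gets its own quantile cuts along y, yielding
            parts_x * parts_y balanced parts."""
            x_cuts = np.quantile(points[:, 0], np.linspace(0, 1, parts_x + 1)[1:-1])
            slab = np.searchsorted(x_cuts, points[:, 0])
            part = np.empty(len(points), dtype=int)
            for s in range(parts_x):
                idx = np.where(slab == s)[0]
                y_cuts = np.quantile(points[idx, 1], np.linspace(0, 1, parts_y + 1)[1:-1])
                part[idx] = s * parts_y + np.searchsorted(y_cuts, points[idx, 1])
            return part

        pts = np.random.default_rng(3).random((100000, 2))
        labels = multi_jagged_2d(pts, parts_x=4, parts_y=4)   # 16 balanced parts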

  15. Highly scalable parallel processing of extracellular recordings of Multielectrode Arrays.

    PubMed

    Gehring, Tiago V; Vasilaki, Eleni; Giugliano, Michele

    2015-01-01

    Technological advances in Multielectrode Arrays (MEAs) used for multisite, parallel electrophysiological recordings lead to an ever-increasing amount of raw data being generated. Arrays with hundreds to a few thousand electrodes are slowly seeing widespread use, and the expectation is that more sophisticated arrays will become available in the near future. In order to process the large data volumes resulting from MEA recordings there is a pressing need for new software tools able to process many data channels in parallel. Here we present a new tool for processing MEA data recordings that makes use of new programming paradigms and recent technology developments to unleash the power of modern highly parallel hardware, such as multi-core CPUs with vector instruction sets or GPGPUs. Our tool builds on and complements existing MEA data analysis packages. It shows high scalability and can be used to speed up some performance-critical pre-processing steps such as data filtering and spike detection, helping to make the analysis of larger data sets tractable. PMID:26737215
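
    One of the performance-critical steps named above, spike detection, is embarrassingly parallel across channels. A minimal single-channel sketch using the common median-based noise estimate follows; the threshold and refractory window are illustrative defaults, not the tool's actual pipeline.

        import numpy as np

        def detect_spikes(trace, fs, k=5.0, refractory_ms=1.0):
            """Threshold spike detection on one MEA channel: negative
            crossings of k times a robust noise estimate (median/0.6745),
            separated by at least a refractory gap. Channels are
            independent, which makes per-channel parallelism trivial."""
            sigma = np.median(np.abs(trace)) / 0.6745
            candidates = np.where(trace < -k * sigma)[0]
            gap = int(refractory_ms * 1e-3 * fs)
            spikes, last = [], -gap - 1
            for s in candidates:
                if s - last > gap:
                    spikes.append(s)
                    last = s
            return np.asarray(spikes, dtype=int)

        # 10 s of synthetic noise at 20 kHz with three injected spikes
        fs = 20000
        rng = np.random.default_rng(0)
        trace = rng.normal(0.0, 1.0, 10 * fs)
        trace[[12345, 67890, 123456]] -= 15.0
        print(detect_spikes(trace, fs))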

  16. The design of a scalable, fixed-time computer benchmark

    SciTech Connect

    Gustafson, J.; Rover, D.; Elbert, S.; Carter, M.

    1990-10-01

    By using the principle of fixed-time benchmarking, it is possible to compare a very wide range of computers, from a small personal computer to the most powerful parallel supercomputer, on a single scale. Fixed-time benchmarks promise far greater longevity than those based on a particular problem size, and are more appropriate for "grand challenge" capability comparison. We present the design of a benchmark, SLALOM{trademark}, that scales automatically to the computing power available, and corrects several deficiencies in various existing benchmarks: it is highly scalable, it solves a real problem, it includes input and output times, and it can be run on parallel machines of all kinds, using any convenient language. The benchmark provides a reasonable estimate of the size of problem solvable on scientific computers. Results are presented that span six orders of magnitude for contemporary computers of various architectures. The benchmark can also be used to demonstrate a new source of superlinear speedup in parallel computers. 15 refs., 14 figs., 3 tabs.
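
    The fixed-time principle fits in a few lines: rather than timing a fixed problem, grow the problem size until a run exceeds the time budget and report the largest size completed within it. The driver below is an illustrative sketch, not the SLALOM kernel; run_problem is a placeholder for a complete solve at size n.

        import time

        def fixed_time_benchmark(run_problem, budget_s=60.0):
            """Fixed-time benchmarking: report the largest problem size
            solved within the time budget, instead of the time taken on a
            fixed problem size."""
            n, best = 1, 0
            while True:
                start = time.perf_counter()
                run_problem(n)                 # solve a complete problem of size n
                if time.perf_counter() - start > budget_s:
                    return best                # largest size solved in budget
                best, n = n, n * 2             # geometric ramp-up; a real harness
                                               # would bisect between best and n

        # Illustrative stand-in workload: an O(n^2) "solver"
        print(fixed_time_benchmark(lambda n: sum(i * j for i in range(n) for j in range(n)),
                                   budget_s=1.0))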

  17. High Performance Storage System Scalability: Architecture, Implementation, and Experience

    SciTech Connect

    Watson, R W

    2005-01-05

    The High Performance Storage System (HPSS) provides scalable hierarchical storage management (HSM), archive, and file system services. Its design, implementation and current dominant use are focused on HSM and archive services. It is also a general-purpose, global, shared, parallel file system, potentially useful in other application domains. When HPSS design and implementation began over a decade ago, scientific computing power and storage capabilities at a site, such as a DOE national laboratory, was measured in a few 10s of gigaops, data archived in HSMs in a few 10s of terabytes at most, data throughput rates to an HSM in a few megabytes/s, and daily throughput with the HSM in a few gigabytes/day. At that time, the DOE national laboratories and IBM HPSS design team recognized that we were headed for a data storage explosion driven by computing power rising to teraops/petaops requiring data stored in HSMs to rise to petabytes and beyond, data transfer rates with the HSM to rise to gigabytes/s and higher, and daily throughput with a HSM in 10s of terabytes/day. This paper discusses HPSS architectural, implementation and deployment experiences that contributed to its success in meeting the above orders of magnitude scaling targets. We also discuss areas that need additional attention as we continue significant scaling into the future.

  18. Developing highly scalable fluid solvers for enabling multiphysics simulation.

    SciTech Connect

    Clausen, Jonathan R

    2013-03-01

    We performed an investigation into explicit algorithms for the simulation of incompressible flows using methods with a finite, but small, amount of compressibility added. Such methods include the artificial compressibility method and the lattice-Boltzmann method. The impetus for investigating such techniques stems from the increasing use of parallel computation at all levels (processors, clusters, and graphics processing units). Explicit algorithms have the potential to leverage these resources. In our investigation, a new form of artificial compressibility was derived. This method, referred to as the Entropically Damped Artificial Compressibility (EDAC) method, demonstrated superior results to traditional artificial compressibility methods by damping the numerical acoustic waves associated with these methods. Performance nearing that of the lattice-Boltzmann technique was observed, without the requirement of recasting the problem in terms of particle distribution functions; continuum variables may be used. Several example problems were investigated using finite-difference and finite-element discretizations of the EDAC equations. Example problems included lid-driven cavity flow, a convecting Taylor-Green vortex, a doubly periodic shear layer, freely decaying turbulence, and flow over a square cylinder. Additionally, a scalability study was performed using in excess of one million processing cores. Explicit methods were found to have desirable scaling properties; however, some robustness and general applicability issues remained.

  19. Abacus switch: a new scalable multicast ATM switch

    NASA Astrophysics Data System (ADS)

    Chao, H. Jonathan; Park, Jin-Soo; Choe, Byeong-Seog

    1995-10-01

    This paper describes a new architecture for a scalable multicast ATM switch that scales from a few tens to thousands of input ports. The switch, called the Abacus switch, has a nonblocking memoryless switch fabric followed by small switch modules at the output ports; the switch has input and output buffers. Cell replication, cell routing, output contention resolution, and cell addressing are all performed in a distributed manner in the Abacus switch so that it can be scaled up to thousands of input and output ports. A novel algorithm has been proposed to resolve output port contention while achieving input buffer sharing, fairness among the input ports, and multicast call splitting. The channel grouping concept is also adopted in the switch to reduce hardware complexity and improve the switch's throughput. The Abacus switch has a regular structure and thus has the advantages of: 1) easy expansion, 2) relaxed synchronization for data and clock signals, and 3) building the switch fabric using existing CMOS technology.

  20. GenePING: secure, scalable management of personal genomic data

    PubMed Central

    Adida, Ben; Kohane, Isaac S

    2006-01-01

    Background Patient genomic data are rapidly becoming part of clinical decision making. Within a few years, full genome expression profiling and genotyping will be affordable enough to perform on every individual. The management of such sizeable, yet fine-grained, data in compliance with privacy laws and best practices presents significant security and scalability challenges. Results We present the design and implementation of GenePING, an extension to the PING personal health record system that supports secure storage of large, genome-sized datasets, as well as efficient sharing and retrieval of individual datapoints (e.g. SNPs, rare mutations, gene expression levels). Even with full access to the raw GenePING storage, an attacker cannot discover any stored genomic datapoint on any single patient. Given a large enough number of patient records, an attacker cannot discover which data correspond to which patient, or even the size of a given patient's record. The computational overhead of GenePING's security features is a small constant, making the system usable, even in emergency care, on today's hardware. Conclusion GenePING is the first personal health record management system to support the efficient and secure storage and sharing of large genomic datasets. GenePING is available online, licensed under the LGPL. PMID:16638151

  2. Advances in Patch-Based Adaptive Mesh Refinement Scalability

    SciTech Connect

    Gunney, Brian T.N.; Anderson, Robert W.

    2015-12-18

    Patch-based structured adaptive mesh refinement (SAMR) is widely used for high-resolution simulations. Combined with modern supercomputers, it could provide simulations of unprecedented size and resolution. A persistent challenge for this combination has been managing dynamically adaptive meshes on more and more MPI tasks. The distributed mesh management scheme in SAMRAI has made some progress toward SAMR scalability, but early algorithms still had trouble scaling past the regime of 10^5 MPI tasks. This work provides two critical SAMR regridding algorithms, which are integrated into that scheme to ensure efficiency of the whole. The clustering algorithm is an extension of the tile-clustering approach, making it more flexible and efficient in both clustering and parallelism. The partitioner is a new algorithm designed to prevent the network congestion experienced by its predecessor. We evaluated performance using weak- and strong-scaling benchmarks designed to be difficult for dynamic adaptivity. Results show good scaling on up to 1.5M cores and 2M MPI tasks. Detailed timing diagnostics suggest scaling would continue well past that.

  4. A novel algorithm for scalable and accurate Bayesian network learning.

    PubMed

    Brown, Laura E; Tsamardinos, Ioannis; Aliferis, Constantin F

    2004-01-01

    Bayesian Networks (BN) is a knowledge representation formalism that has been proven to be valuable in biomedicine for constructing decision support systems and for generating causal hypotheses from data. Given the emergence of datasets in medicine and biology with thousands of variables, and given that current algorithms do not scale beyond a few hundred variables in practical domains, new efficient and accurate algorithms are needed to learn high quality BNs from data. We present a new algorithm called Max-Min Hill-Climbing (MMHC) that builds upon and improves the Sparse Candidate (SC) algorithm, a state-of-the-art algorithm that scales up to datasets involving hundreds of variables provided the generating networks are sparse. Compared to the SC, on a number of datasets from medicine and biology, (a) MMHC discovers BNs that are structurally closer to the data-generating BN, (b) the discovered networks are more probable given the data, (c) MMHC is computationally more efficient and scalable than SC, and (d) the generating networks are not required to be uniformly sparse, nor is the user of MMHC required to guess correctly the network connectivity

  5. Scalable and responsive event processing in the cloud.

    PubMed

    Suresh, Visalakshmi; Ezhilchelvan, Paul; Watson, Paul

    2013-01-28

    Event processing involves continuous evaluation of queries over streams of events. Response-time optimization is traditionally done over a fixed set of nodes and/or by using metrics measured at the query-operator level. Cloud computing makes it easy to acquire and release computing nodes as required. Leveraging this flexibility, we propose a novel, queueing-theory-based approach for meeting specified response-time targets against fluctuating event arrival rates by drawing only the necessary amount of computing resources from a cloud platform. In the proposed approach, the entire processing engine of a distinct query is modelled as an atomic unit for predicting response times. Several such units hosted on a single node are modelled as a multi-class M/G/1 system. These aspects eliminate intrusive, low-level performance measurements at run-time, and also offer portability and scalability. Using model-based predictions, cloud resources are efficiently used to meet response-time targets. The efficacy of the approach is demonstrated through cloud-based experiments.
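
    The kind of prediction involved can be illustrated with the Pollaczek-Khinchine formula for a single-class M/G/1 queue (the paper's multi-class model is more involved). The provisioning rule below, which splits the stream evenly and adds nodes until the predicted response time meets the target, is an illustrative assumption rather than the authors' controller.

        def mg1_response_time(lam, es, es2):
            """Mean response time of an M/G/1 queue via Pollaczek-Khinchine:
            T = E[S] + lam * E[S^2] / (2 * (1 - rho)), with rho = lam * E[S]."""
            rho = lam * es
            if rho >= 1.0:
                return float("inf")            # unstable: need more nodes
            return es + lam * es2 / (2.0 * (1.0 - rho))

        def nodes_needed(total_rate, es, es2, target_s, max_nodes=1024):
            """Smallest node count meeting a response-time target when the
            event stream is split evenly across nodes."""
            for n in range(1, max_nodes + 1):
                if mg1_response_time(total_rate / n, es, es2) <= target_s:
                    return n
            return None

        # e.g. 900 events/s, E[S] = 5 ms, E[S^2] = 5e-5 s^2, target 20 ms
        print(nodes_needed(900.0, 0.005, 5e-5, 0.020))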

  6. PCCM2: A GCM adapted for scalable parallel computers

    SciTech Connect

    Drake, J.; Semeraro, B.D.; Worley, P.; Foster, I.; Michalakes, J.; Toonen, B.; Hack, J.J.; Williamson, D.L.

    1994-01-01

    The Computer Hardware, Advanced Mathematics and Model Physics (CHAMMP) program seeks to provide climate researchers with an advanced modeling capability for the study of global change issues. One of the more ambitious projects being undertaken in the CHAMMP program is the development of PCCM2, an adaptation of the Community Climate Model (CCM2) for scalable parallel computers. PCCM2 uses a message-passing, domain-decomposition approach, in which each processor is allocated responsibility for computation on one part of the computational grid, and messages are generated to communicate data between processors. Much of the research effort associated with development of a parallel code of this sort is concerned with identifying efficient decomposition and communication strategies. In PCCM2, this task is complicated by the need to support both semi-Lagrangian transport and spectral transport. Load balancing and parallel I/O techniques are also required. In this paper, the authors review the various parallel algorithms used in PCCM2 and the work done to arrive at a validated model.

  7. Scalable Library for the Parallel Solution of Sparse Linear Systems

    1993-07-14

    BlockSolve is a scalable parallel software library for the solution of large sparse, symmetric systems of linear equations. It runs on a variety of parallel architectures and can easily be ported to others. BlockSolve is primarily intended for the solution of sparse linear systems that arise from physical problems having multiple degrees of freedom at each node point. For example, when the finite element method is used to solve practical problems in structural engineering, each node will typically have anywhere from 3-6 degrees of freedom associated with it. BlockSolve is written to take advantage of problems of this nature; however, it is still reasonably efficient for problems that have only one degree of freedom associated with each node, such as the three-dimensional Poisson problem. It does not require that the matrices have any particular structure other than being sparse and symmetric. BlockSolve is intended to be used within real application codes. It is designed to work best in the context of our experience, which indicated that most application codes solve the same linear systems with several different right-hand sides and/or linear systems with the same structure, but different matrix values, multiple times.

  8. Enabling Technologies for Scalable Trapped Ion Quantum Computing

    NASA Astrophysics Data System (ADS)

    Crain, Stephen; Gaultney, Daniel; Mount, Emily; Knoernschild, Caleb; Baek, Soyoung; Maunz, Peter; Kim, Jungsang

    2013-05-01

    Scalability is one of the main challenges of trapped ion based quantum computation, mainly limited by the lack of enabling technologies needed to trap, manipulate and process the increasing number of qubits. Microelectromechanical systems (MEMS) technology allows one to design movable micromirrors to focus laser beams on individual ions in a chain and steer the focal point in two dimensions. Our current MEMS system is designed to steer 355 nm pulsed laser beams to carry out logic gates on a chain of Yb ions with a waist of 1.5 μm across a 20 μm range. In order to read the state of the qubit chain we developed a 32-channel PMT with a custom read-out circuit operating near the thermal noise limit of the readout amplifier which increases state detection fidelity. We also developed a set of digital to analog converters (DACs) used to supply analog DC voltages to the electrodes of an ion trap. We designed asynchronous DACs to avoid added noise injection at the update rate commonly found in synchronous DACs. Effective noise filtering is expected to reduce the heating rate of a surface trap, thus improving multi-qubit logic gate fidelities. Our DAC system features 96 channels and an integrated FPGA that allows the system to be controlled in real time. This work was supported by IARPA/ARO.

  9. Novel scalable and monolithically integrated extracorporeal gas exchange device.

    PubMed

    Rieper, Tina; Müller, Claas; Reinecke, Holger

    2015-10-01

    A novel extracorporeal gas exchange device (EGED) is developed, implemented and characterized. The aim hereby is to overcome drawbacks of state-of-the-art devices and of state-of-science approaches, namely their labor-intensive fabrication and their low volume density of the gas exchange area, respectively. As a consequence of the stacked setup of alternating layers of the blood compartment and the ventilation gas compartment, the developed EGED allows for double-sided gas exchange. Furthermore, it enables adaptation to the diversity of medical requirements by scaling the number of layers. The developed fabrication chain is used to fabricate leakage-free evaluation models and allows for a transition to automated fabrication. The EGED is completely fabricated in polydimethylsiloxane (PDMS) and features diffusion membranes, which separate the compartments, with a mean thickness of 90 μm. With the evaluation models and oxygen as ventilation gas, an oxygen transfer of 60 ml/l blood (25 ml/(min·m^2)) and a carbon dioxide transfer of 70 ml/l blood (30 ml/(min·m^2)) are achieved. The linear scalability of the concept as well as the functionality of the EGED with air as ventilation gas is shown. PMID:26249794

  10. Scalable prediction of compound-protein interactions using minwise hashing.

    PubMed

    Tabei, Yasuo; Yamanishi, Yoshihiro

    2013-01-01

    The identification of compound-protein interactions plays key roles in drug development toward the discovery of new drug leads and new therapeutic protein targets. There is therefore a strong incentive to develop new efficient methods for predicting compound-protein interactions on a genome-wide scale. In this paper we develop a novel chemogenomic method to make a scalable prediction of compound-protein interactions from heterogeneous biological data using minwise hashing. The proposed method mainly consists of two steps: 1) construction of new compact fingerprints for compound-protein pairs by an improved minwise hashing algorithm, and 2) application of a sparsity-induced classifier to the compact fingerprints. We test the proposed method on its ability to make a large-scale prediction of compound-protein interactions from compound substructure fingerprints and protein domain fingerprints, and show superior performance of the proposed method compared with the previous chemogenomic methods in terms of prediction accuracy, computational efficiency, and interpretability of the predictive model. None of the previously developed methods is computationally feasible for the full dataset consisting of about 200 million compound-protein pairs. The proposed method is expected to be useful for virtual screening of a huge number of compounds against many protein targets.
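
    The fingerprinting step can be sketched as generic minhash: for each of k salted hash functions, keep the minimum hash value over a pair's feature set; two signatures then agree in any slot with probability equal to the Jaccard similarity of the underlying sets. This is a textbook minhash sketch with hypothetical feature names, not the paper's improved variant.

        import random

        def minhash_signature(features, n_hashes=64, seed=17):
            """Minwise-hashing fingerprint: one minimum per salted hash
            function. Note: Python's built-in string hashing is salted per
            process; a real system would use a stable hash function."""
            rnd = random.Random(seed)
            salts = [rnd.getrandbits(64) for _ in range(n_hashes)]
            return [min(hash((s, f)) for f in features) for s in salts]

        def estimated_jaccard(sig_a, sig_b):
            """Fraction of agreeing slots estimates Jaccard similarity."""
            return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

        # Hypothetical compound-protein pairs: unions of compound
        # substructure and protein domain features (names are made up).
        pair_a = {"sub:benzene", "sub:amide", "dom:PF00069"}
        pair_b = {"sub:benzene", "sub:amide", "dom:PF07714"}
        print(estimated_jaccard(minhash_signature(pair_a), minhash_signature(pair_b)))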

  11. Efficient and scalable scaffolding using optical restriction maps

    PubMed Central

    2014-01-01

    Next-generation sequencing techniques produce millions of short reads from a genomic sequence in a single run. The chance of low read coverage in some regions of the sequence is very high. The reads are short and very large in number, and due to erroneous base calling there can be errors in the reads. As a consequence, sequence assemblers often fail to sequence an entire DNA molecule and instead output a set of overlapping segments that together represent a consensus region of the DNA. These overlapping segments are collectively called contigs in the literature. The final step of the sequencing process, called scaffolding, is to assemble the contigs into the correct order. Scaffolding techniques typically exploit additional information such as mate-pairs, pair-ends, or optical restriction maps. In this paper we introduce a series of novel algorithms for scaffolding that exploit optical restriction maps (ORMs). Simulation results show that our algorithms are indeed reliable, scalable, and efficient compared to the best known algorithms in the literature. PMID:25081913

  12. Scalability of Schottky barrier metal-oxide-semiconductor transistors

    NASA Astrophysics Data System (ADS)

    Jang, Moongyu

    2016-05-01

    In this paper, the general characteristics and the scalability of Schottky barrier metal-oxide-semiconductor field effect transistors (SB-MOSFETs) are introduced and reviewed. The most important factors, i.e., the interface-trap density, lifetime and Schottky barrier height of an erbium-silicided Schottky diode, are estimated using an equivalent circuit method. The extracted interface-trap density, lifetime and Schottky barrier height for holes are estimated as 1.5 × 10^13 traps/cm^2, 3.75 ms and 0.76 eV, respectively. The interface traps are efficiently cured by N2 annealing. Based on the diode characteristics, various sizes of erbium-silicided/platinum-silicided n/p-type SB-MOSFETs are manufactured and analyzed. The manufactured SB-MOSFETs show enhanced drain-induced barrier lowering (DIBL) characteristics due to the existence of a Schottky barrier between source and channel. The DIBL and subthreshold swing characteristics are comparable with the ultimate scaling limit of double-gate MOSFETs, which shows the possible application of SB-MOSFETs in the nanoscale regime.

  13. A highly scalable, interoperable clinical decision support service

    PubMed Central

    Goldberg, Howard S; Paterno, Marilyn D; Rocha, Beatriz H; Schaeffer, Molly; Wright, Adam; Erickson, Jessica L; Middleton, Blackford

    2014-01-01

    Objective To create a clinical decision support (CDS) system that is shareable across healthcare delivery systems and settings over large geographic regions. Materials and methods The enterprise clinical rules service (ECRS) realizes nine design principles through a series of Enterprise JavaBeans and leverages off-the-shelf rules management systems in order to provide consistent, maintainable, and scalable decision support in a variety of settings. Results The ECRS is deployed at Partners HealthCare System (PHS) and is in use for a series of trials by members of the CDS consortium, including internally developed systems at PHS, the Regenstrief Institute, and vendor-based systems deployed at locations in Oregon and New Jersey. Performance measures indicate that the ECRS provides sub-second response time when measured apart from services required to retrieve data and assemble the continuity of care document used as input. Discussion We consider related work, design decisions, comparisons with emerging national standards, and discuss uses and limitations of the ECRS. Conclusions ECRS design, implementation, and use in CDS consortium trials indicate that it provides the flexibility and modularity needed for broad use and performs adequately. Future work will investigate additional CDS patterns, alternative methods of data passing, and further optimizations in ECRS performance. PMID:23828174

  14. Detailed Modeling and Evaluation of a Scalable Multilevel Checkpointing System

    SciTech Connect

    Mohror, Kathryn; Moody, Adam; Bronevetsky, Greg; de Supinski, Bronis R.

    2014-09-01

    High-performance computing (HPC) systems are growing more powerful by utilizing more components. As the system mean time before failure correspondingly drops, applications must checkpoint frequently to make progress. But, at scale, the cost of checkpointing becomes prohibitive. A solution to this problem is multilevel checkpointing, which employs multiple types of checkpoints in a single run: lightweight checkpoints handle the most common failure modes, while more expensive checkpoints handle severe failures. We designed a multilevel checkpointing library, the Scalable Checkpoint/Restart (SCR) library, that writes lightweight checkpoints to node-local storage in addition to the parallel file system. We present probabilistic Markov models of SCR's performance. We show that on future large-scale systems, SCR can lead to a gain in machine efficiency of up to 35 percent and reduce the load on the parallel file system by a factor of two. In addition, we predict that checkpoint scavenging, or writing checkpoints to the parallel file system only on application termination, can reduce the load on the parallel file system by 20 × on today's systems and still maintain high application efficiency.
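
    The trade-off the SCR models capture can be illustrated with the classic Young/Daly approximation (a back-of-envelope sketch, not the paper's Markov models; the checkpoint costs and MTBF below are hypothetical): cheap node-local checkpoints permit short intervals and high efficiency, while expensive parallel-file-system checkpoints force long intervals.

```python
# Back-of-envelope sketch, not the paper's Markov model: the classic
# Young/Daly approximation for the optimal checkpoint interval and the
# resulting machine efficiency, for checkpoints of different costs.
import math

def optimal_interval(checkpoint_cost, mtbf):
    """Young's approximation: t_opt = sqrt(2 * C * MTBF)."""
    return math.sqrt(2.0 * checkpoint_cost * mtbf)

def efficiency(checkpoint_cost, mtbf):
    """Fraction of time doing useful work at the optimal interval,
    ignoring restart cost: overhead ~ C/t + t/(2*MTBF)."""
    t = optimal_interval(checkpoint_cost, mtbf)
    return 1.0 - (checkpoint_cost / t + t / (2.0 * mtbf))

mtbf = 4 * 3600.0  # assumed system MTBF: 4 hours, in seconds
for name, cost in [("node-local (lightweight)", 15.0),
                   ("parallel file system", 600.0)]:
    print(f"{name}: interval {optimal_interval(cost, mtbf)/60:.1f} min, "
          f"efficiency {efficiency(cost, mtbf):.1%}")
```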

  15. Scalable Indoor Localization via Mobile Crowdsourcing and Gaussian Process.

    PubMed

    Chang, Qiang; Li, Qun; Shi, Zesen; Chen, Wei; Wang, Weiping

    2016-01-01

    Indoor localization using Received Signal Strength Indication (RSSI) fingerprinting has been extensively studied for decades. The positioning accuracy is highly dependent on the density of the signal database, and in areas without calibration data the algorithm breaks down. Building and updating a dense signal database is labor intensive, expensive, and even impossible in some areas. Researchers are continually searching for better algorithms to create and update dense databases more efficiently. In this paper, we propose a scalable indoor positioning algorithm that works in both surveyed and unsurveyed areas. We first propose the Minimum Inverse Distance (MID) algorithm to build a virtual database with uniformly distributed virtual Reference Points (RPs). The area covered by the virtual RPs can be larger than the surveyed area. A Local Gaussian Process (LGP) is then applied to estimate the virtual RPs' RSSI values based on the crowdsourced training data. Finally, we improve the Bayesian algorithm to estimate the user's location using the virtual database. All parameters are optimized by simulations, and the new algorithm is tested in real-world scenarios. The results show that the new algorithm improves accuracy by 25.5% in the surveyed area, with an average positioning error below 2.2 m for 80% of cases. Moreover, the proposed algorithm can localize users in the neighboring unsurveyed area. PMID:26999139
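
    A minimal sketch of the final estimation step as described, assuming Gaussian RSSI noise, a uniform prior, and an already-built virtual database; the MID and LGP steps that construct the database are not reproduced here.

```python
# Minimal sketch of a Bayesian fingerprint location estimate over a
# virtual reference-point database (toy data; Gaussian noise assumed).
import numpy as np

def locate(rp_xy, rp_rssi, observed, sigma=4.0):
    """Posterior-weighted location estimate over virtual reference points.

    rp_xy    : (N, 2) coordinates of virtual reference points
    rp_rssi  : (N, M) expected RSSI of M access points at each RP
    observed : (M,) measured RSSI vector
    """
    # Gaussian log-likelihood of the observation at each reference point
    loglik = -np.sum((rp_rssi - observed) ** 2, axis=1) / (2 * sigma ** 2)
    w = np.exp(loglik - loglik.max())       # avoid underflow
    w /= w.sum()                            # normalize to a posterior
    return w @ rp_xy                        # posterior mean position

# Hypothetical 3-AP, 4-RP database (RSSI in dBm)
rp_xy = np.array([[0, 0], [0, 5], [5, 0], [5, 5]], float)
rp_rssi = np.array([[-40, -70, -70], [-70, -40, -80],
                    [-70, -80, -40], [-80, -60, -60]], float)
print(locate(rp_xy, rp_rssi, np.array([-45, -68, -67.0])))  # near (0, 0)
```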

  16. A Scalable proxy cache for Grid Data Access

    NASA Astrophysics Data System (ADS)

    Cristian Cirstea, Traian; Just Keijser, Jan; Koeroo, Oscar Arthur; Starink, Ronald; Templon, Jeffrey Alan

    2012-12-01

    We describe a prototype grid proxy cache system developed at Nikhef, motivated by a desire to construct the first building block of a future HTTPS-based Content Delivery Network for grid infrastructures. Two goals drove the project: firstly, to provide a “native view” of the grid for desktop-type users, and secondly, to improve performance for physics-analysis use cases, where multiple passes are made over the same set of data (residing on the grid). We further constrained the design by requiring that the system be made of standard components wherever possible. The prototype that emerged from this exercise is a horizontally scalable, cooperating system of web server / cache nodes, fronted by a customized WebDAV server. The WebDAV server is custom only in the sense that it supports HTTP redirects (providing horizontal scaling) and that its authentication module has, as a back end, a proxy delegation chain that the cache nodes can use to retrieve files from the grid. The prototype was deployed at Nikhef and tested at a scale of several terabytes of data and approximately one hundred fast cores of computing. Both small and large files were tested, in a number of scenarios and with various numbers of cache nodes, in order to understand the scaling properties of the system. For properly-dimensioned cache-node hardware, the system showed speedups of several integer factors for the analysis-type use cases. These results and others are presented and discussed.
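
    The redirect mechanism that provides the horizontal scaling can be sketched as follows (a toy stand-in, not the Nikhef code; the node URLs and the hash-based node choice are assumptions, and the real system adds WebDAV and grid authentication).

```python
# Toy sketch of a redirect front end: requests are spread over cache
# nodes via HTTP 302 redirects, with a deterministic hash so that the
# same file always hits the same cache node.
from http.server import BaseHTTPRequestHandler, HTTPServer
from hashlib import md5

CACHE_NODES = ["http://cache1:8080", "http://cache2:8080"]  # assumed nodes

class RedirectFrontEnd(BaseHTTPRequestHandler):
    def do_GET(self):
        # Hash the request path to pick a cache node deterministically.
        node = CACHE_NODES[int(md5(self.path.encode()).hexdigest(), 16)
                           % len(CACHE_NODES)]
        self.send_response(302)
        self.send_header("Location", node + self.path)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8000), RedirectFrontEnd).serve_forever()
```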

  17. Toward SVOPME, a Scalable Virtual Organization Privileges Management Environment

    NASA Astrophysics Data System (ADS)

    Wang, Nanbor; Garzoglio, Gabriele; Ananthan, Balamurali; Timm, Steven; Levshina, Tanya

    Grids enable uniform access to resources by implementing standard interfaces to resource gateways. In the Open Science Grid (OSG), privileges are granted on the basis of the user's membership in a Virtual Organization (VO). However, individual Grid sites are solely responsible for determining and controlling access privileges to their resources. While this guarantees that sites retain full control over access rights, it often leads to heterogeneous VO privileges throughout the Grid and hardly fits the Grid paradigm of uniform access to resources. To address these challenges, we developed the Scalable Virtual Organization Privileges Management Environment (SVOPME), which provides tools for VOs to define, publish, and verify desired privileges. Moreover, SVOPME provides tools for grid sites to analyze site access policies for various resources, verify compliance with preferred VO policies, and generate directives for site administrators on how local access policies can be amended to achieve such compliance without taking control of local configurations away from site administrators. This paper describes how SVOPME implements privilege management tools for the OSG and our experiences in deploying and running the tools in a test bed. Finally, we outline our plan to continue improving SVOPME and to have it included in standard Grid software distributions.

  18. Scalable Photogrammetric Motion Capture System "mosca": Development and Application

    NASA Astrophysics Data System (ADS)

    Knyaz, V. A.

    2015-05-01

    A wide variety of applications, from industrial to entertainment, needs reliable and accurate 3D information about the motion of an object and its parts. The motion of interest is often fast, as in vehicle movement, sports biomechanics, or the animation of cartoon characters. Motion capture systems based on different physical principles are used for these purposes, and vision-based systems have great potential for high accuracy and a high degree of automation thanks to progress in image processing and analysis. A scalable, inexpensive motion capture system was developed as a convenient and flexible tool for solving various tasks requiring 3D motion analysis. It is based on photogrammetric techniques of 3D measurement and provides high-speed image acquisition, high accuracy of 3D measurements, and highly automated processing of captured data. Depending on the application, the system can easily be adapted to working areas from 100 mm to 10 m. The system uses two to four machine-vision cameras to acquire video sequences of object motion. All cameras operate synchronously at frame rates of up to 100 frames per second under the control of a personal computer, enabling accurate calculation of the 3D coordinates of points of interest. The system was used in a range of application fields and demonstrated high accuracy and a high level of automation.
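
    The photogrammetric core of any such system is the triangulation of a 3D point from synchronized 2D views. Below is a minimal linear (DLT) sketch with hypothetical camera matrices; calibration, synchronization, and point tracking are omitted.

```python
# Minimal sketch: linear (DLT) triangulation of a 3D point from its pixel
# coordinates in two calibrated cameras (toy camera matrices).
import numpy as np

def triangulate(P1, P2, x1, x2):
    """P1, P2: 3x4 camera projection matrices; x1, x2: 2D image points."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)   # least-squares null vector of A
    X = vt[-1]
    return X[:3] / X[3]           # dehomogenize

# Two toy cameras: identity pose and a 1-unit translation along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])
X = np.array([0.2, 0.1, 5.0])                   # ground-truth point
project = lambda P, X: (P @ np.append(X, 1))[:2] / (P @ np.append(X, 1))[2]
print(triangulate(P1, P2, project(P1, X), project(P2, X)))  # ~[0.2 0.1 5.0]
```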

  19. Highly scalable parallel processing of extracellular recordings of Multielectrode Arrays.

    PubMed

    Gehring, Tiago V; Vasilaki, Eleni; Giugliano, Michele

    2015-01-01

    Technological advances in Multielectrode Arrays (MEAs) used for multisite, parallel electrophysiological recordings lead to an ever-increasing amount of raw data being generated. Arrays with hundreds up to a few thousand electrodes are slowly seeing widespread use, and the expectation is that more sophisticated arrays will become available in the near future. In order to process the large data volumes resulting from MEA recordings, there is a pressing need for new software tools able to process many data channels in parallel. Here we present a new tool for processing MEA data recordings that makes use of new programming paradigms and recent technology developments to unleash the power of modern highly parallel hardware, such as multi-core CPUs with vector instruction sets or GPGPUs. Our tool builds on and complements existing MEA data analysis packages. It shows high scalability and can be used to speed up performance-critical pre-processing steps such as data filtering and spike detection, helping to make the analysis of larger data sets tractable.
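
    One of the performance-critical steps named above, spike detection, parallelizes naturally across channels. A generic sketch (not the tool's implementation) using a robust MAD-based threshold and the standard library:

```python
# Generic sketch: threshold-based spike detection with a MAD noise
# estimate, parallelized across MEA channels via multiprocessing.
import numpy as np
from multiprocessing import Pool

def detect_spikes(channel, k=6.0):
    """Return indices where the channel crosses -k * (robust noise sigma)."""
    sigma = np.median(np.abs(channel)) / 0.6745   # MAD-based noise estimate
    crossing = channel < -k * sigma               # negative-going spikes
    return np.flatnonzero(crossing[1:] & ~crossing[:-1]) + 1

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.normal(0, 10, size=(64, 100_000))  # 64 channels of noise
    data[3, 5000] = -200.0                        # one injected "spike"
    with Pool() as pool:                          # one channel per task
        spikes = pool.map(detect_spikes, data)
    print(spikes[3])                              # -> [5000]
```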

  20. Design and performance of a scalable, parallel statistics toolkit.

    SciTech Connect

    Thompson, David C.; Bennett, Janine Camille; Pebay, Philippe Pierre

    2010-11-01

    Most statistical software packages implement a broad range of techniques but do so in an ad hoc fashion, leaving users who do not have a broad knowledge of statistics at a disadvantage, since they may not understand all the implications of a given analysis or how to test the validity of results. These packages are also largely serial in nature, or target multicore architectures instead of distributed-memory systems, or provide only a small number of statistics in parallel. This paper surveys a collection of parallel implementations of statistics algorithms developed as part of a common framework over the last 3 years. The framework strategically groups modeling techniques with associated verification and validation techniques to make the underlying assumptions of the statistics more clear. Furthermore, it employs a design pattern specifically targeted at distributed-memory parallelism, where architectural advances in large-scale high-performance computing have been focused. Moment-based statistics (which include descriptive, correlative, and multicorrelative statistics, principal component analysis (PCA), and k-means statistics) scale nearly linearly with the data set size and number of processes. Entropy-based statistics (which include order and contingency statistics) do not scale well when the data in question is continuous or quasi-diffuse but do scale well when the data is discrete and compact. We confirm and extend our earlier results by establishing near-optimal scalability with up to 10,000 processes.
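
    Moment-based statistics scale nearly linearly because per-process partial results merge exactly via pairwise update formulas. A sketch of that design pattern (illustrative; not the toolkit's API), using Chan et al.'s combination formulas for mean and variance:

```python
# Sketch of distributed moment aggregation: each "process" computes a
# partial (count, mean, M2); partials merge pairwise and exactly.
from functools import reduce
import numpy as np

def partial_moments(x):
    return len(x), x.mean(), ((x - x.mean()) ** 2).sum()

def merge(a, b):
    """Combine two (count, mean, M2) partials with Chan et al.'s formulas."""
    n1, m1, s1 = a
    n2, m2, s2 = b
    n = n1 + n2
    delta = m2 - m1
    mean = m1 + delta * n2 / n
    M2 = s1 + s2 + delta ** 2 * n1 * n2 / n
    return n, mean, M2

rng = np.random.default_rng(1)
data = rng.normal(3.0, 2.0, 1_000_000)
chunks = np.array_split(data, 16)          # stand-ins for 16 processes
n, mean, M2 = reduce(merge, map(partial_moments, chunks))
print(mean, np.sqrt(M2 / (n - 1)))         # matches data.mean(), data.std(ddof=1)
```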

  1. Scalable Virtual Network Mapping Algorithm for Internet-Scale Networks

    NASA Astrophysics Data System (ADS)

    Yang, Qiang; Wu, Chunming; Zhang, Min

    The proper allocation of network resources from a common physical substrate to a set of virtual networks (VNs) is one of the key technical challenges of network virtualization. While a variety of state-of-the-art algorithms have been proposed to address this issue from different facets, the challenge still remains in the context of large-scale networks, as the existing solutions mainly operate in a centralized manner that requires maintaining complete and up-to-date information about the underlying substrate network. This restricts their scalability and computational efficiency as the network scale grows. This paper tackles the virtual network mapping problem and proposes a novel hierarchical algorithm in conjunction with a substrate-network decomposition approach. By appropriately transforming the underlying substrate network into a collection of sub-networks, the hierarchical virtual network mapping algorithm can be carried out through a global virtual network mapping algorithm (GVNMA) and a local virtual network mapping algorithm (LVNMA), operated in the network central server and within individual sub-networks respectively, with cooperation and coordination as necessary. The proposed algorithm is assessed against centralized approaches through a set of numerical simulation experiments for a range of network scenarios. The results show that the proposed hierarchical approach is about 5-20 times faster for VN mapping tasks than conventional centralized approaches, with acceptable communication overhead between the GVNMA and LVNMA for all examined networks, while performing almost as well as the centralized solutions.

  2. Hardware-accelerated interactive data visualization for neuroscience in Python.

    PubMed

    Rossant, Cyrille; Harris, Kenneth D

    2013-01-01

    Large datasets are becoming more and more common in science, particularly in neuroscience where experimental techniques are rapidly evolving. Obtaining interpretable results from raw data can sometimes be done automatically; however, there are numerous situations where there is a need, at all processing stages, to visualize the data in an interactive way. This enables the scientist to gain intuition, discover unexpected patterns, and find guidance about subsequent analysis steps. Existing visualization tools mostly focus on static publication-quality figures and do not support interactive visualization of large datasets. While working on Python software for visualization of neurophysiological data, we developed techniques to leverage the computational power of modern graphics cards for high-performance interactive data visualization. We were able to achieve very high performance despite the interpreted and dynamic nature of Python, by using state-of-the-art, fast libraries such as NumPy, PyOpenGL, and PyTables. We present applications of these methods to visualization of neurophysiological data. We believe our tools will be useful in a broad range of domains, in neuroscience and beyond, where there is an increasing need for scalable and fast interactive visualization.
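
    A representative technique from this setting (a generic sketch, not necessarily the authors' implementation) is min-max decimation, which keeps plots of multi-million-sample traces interactive by drawing only per-bin extrema so that brief spikes remain visible:

```python
# Min-max decimation: reduce a long 1-D signal to a small number of
# points that preserve its visual envelope for interactive plotting.
import numpy as np

def minmax_decimate(signal, n_bins):
    """Reduce a 1-D signal to 2*n_bins points preserving its envelope."""
    trimmed = signal[: len(signal) // n_bins * n_bins]
    bins = trimmed.reshape(n_bins, -1)
    out = np.empty(2 * n_bins, dtype=signal.dtype)
    out[0::2] = bins.min(axis=1)   # interleave minima...
    out[1::2] = bins.max(axis=1)   # ...and maxima per bin
    return out

rng = np.random.default_rng(0)
trace = rng.normal(size=10_000_000)
trace[1_234_567] = 40.0                    # a brief spike
small = minmax_decimate(trace, 2000)       # only 4000 points to draw
print(len(small), small.max())             # the spike survives decimation
```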

  3. Hardware-accelerated interactive data visualization for neuroscience in Python

    PubMed Central

    Rossant, Cyrille; Harris, Kenneth D.

    2013-01-01

    Large datasets are becoming more and more common in science, particularly in neuroscience where experimental techniques are rapidly evolving. Obtaining interpretable results from raw data can sometimes be done automatically; however, there are numerous situations where there is a need, at all processing stages, to visualize the data in an interactive way. This enables the scientist to gain intuition, discover unexpected patterns, and find guidance about subsequent analysis steps. Existing visualization tools mostly focus on static publication-quality figures and do not support interactive visualization of large datasets. While working on Python software for visualization of neurophysiological data, we developed techniques to leverage the computational power of modern graphics cards for high-performance interactive data visualization. We were able to achieve very high performance despite the interpreted and dynamic nature of Python, by using state-of-the-art, fast libraries such as NumPy, PyOpenGL, and PyTables. We present applications of these methods to visualization of neurophysiological data. We believe our tools will be useful in a broad range of domains, in neuroscience and beyond, where there is an increasing need for scalable and fast interactive visualization. PMID:24391582

  4. Data Visualization Using Immersive Virtual Reality Tools

    NASA Astrophysics Data System (ADS)

    Cioc, Alexandru; Djorgovski, S. G.; Donalek, C.; Lawler, E.; Sauer, F.; Longo, G.

    2013-01-01

    The growing complexity of scientific data poses serious challenges for effective visualization. Data sets, e.g., catalogs of objects detected in sky surveys, can have a very high dimensionality, ~ 100 - 1000. Visualizing such hyper-dimensional data parameter spaces is essentially impossible, but there are ways of visualizing up to ~ 10 dimensions in a pseudo-3D display. We have been experimenting with the emerging technologies of immersive virtual reality (VR) as a platform for scientific, interactive, collaborative data visualization. Our initial experiments used the virtual world of Second Life, and more recently VR worlds based on its open source code, OpenSimulator. There we can visualize up to ~ 100,000 data points in ~ 7 - 8 dimensions (3 spatial and others encoded as shapes, colors, sizes, etc.), in an immersive virtual space where scientists can interact with their data and with each other. We are now developing a more scalable visualization environment using the popular (practically an emerging standard) Unity 3D Game Engine, coded using C#, JavaScript, and the Unity Scripting Language. This visualization tool can be used through a standard web browser, or a standalone browser of its own. Rather than merely plotting data points, the application creates interactive three-dimensional objects whose shapes, colors, sizes, and XYZ positions encode various dimensions of the parameter space and can be assigned interactively. Multiple users can navigate through this data space simultaneously, either with their own, independent vantage points, or with a shared view. At this stage ~ 100,000 data points can be easily visualized within seconds on a simple laptop. The displayed data points can contain linked information; e.g., upon clicking on a data point, a webpage with additional information can be rendered within the 3D world. A range of functionalities has already been deployed, and more are being added. We expect to make this

  5. Lilith: A software framework for the rapid development of scalable tools for distributed computing

    SciTech Connect

    Gentile, A.C.; Evensky, D.A.; Armstrong, R.C.

    1998-03-01

    Lilith is a general purpose framework, written in Java, that provides a highly scalable distribution of user code across a heterogeneous computing platform. By creation of suitable user code, the Lilith framework can be used for tool development. The scalable performance provided by Lilith is crucial to the development of effective tools for large distributed systems. Furthermore, since Lilith handles the details of code distribution and communication, the user code can focus primarily on the tool functionality, thus greatly decreasing the time required for tool development. In this paper, the authors concentrate on the use of the Lilith framework to develop scalable tools. The authors review the functionality of Lilith and introduce a typical tool capitalizing on the features of the framework. They present new Objects directly involved in tool creation, explain details of development, and illustrate with an example. They present timing results demonstrating scalability.

  6. The scalability in the mechanochemical syntheses of edge functionalized graphene materials and biomass-derived chemicals.

    PubMed

    Blair, Richard G; Chagoya, Katerina; Biltek, Scott; Jackson, Steven; Sinclair, Ashlyn; Taraboletti, Alexandra; Restrepo, David T

    2014-01-01

    Mechanochemical approaches to chemical synthesis offer the promise of improved yields, new reaction pathways, and greener syntheses. Scaling these syntheses is a crucial step toward realizing a commercially viable process. Although much work has been performed on laboratory-scale investigations, little has been done to move these approaches toward industrially relevant scales. Moving reactions from shaker-type and planetary-type mills to scalable solutions can present a challenge. We have investigated scalability through discrete element models, thermal monitoring, and reactor design. We have found that impact forces and macroscopic mixing are important factors in implementing a truly scalable process. These observations have allowed us to scale reactions from a few grams to several hundred grams, and we have successfully implemented scalable solutions for the mechanocatalytic conversion of cellulose to value-added compounds and the synthesis of edge-functionalized graphene. PMID:25407922

  7. SuperLU_DIST: A scalable distributed-memory sparse direct solver for unsymmetric linear systems

    SciTech Connect

    Li, Xiaoye S.; Demmel, James W.

    2002-03-27

    In this paper, we present the main algorithmic features in the software package SuperLU_DIST, a distributed-memory sparse direct solver for large sets of linear equations. We give in detail our parallelization strategies, with a focus on scalability issues, and demonstrate the parallel performance and scalability on current machines. The solver is based on sparse Gaussian elimination, with an innovative static pivoting strategy proposed earlier by the authors. The main advantage of static pivoting over classical partial pivoting is that it permits a priori determination of data structures and communication pattern for sparse Gaussian elimination, which makes it more scalable on distributed-memory machines. Based on this a priori knowledge, we designed highly parallel and scalable algorithms for both LU decomposition and triangular solution, and we show that they are suitable for large-scale distributed-memory machines.
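
    The caller-side workflow looks like the following sketch, which uses SciPy's binding to the serial SuperLU library (not the distributed SuperLU_DIST described here) to factor once and reuse the factorization for triangular solves:

```python
# Hedged usage sketch: SciPy wraps the serial SuperLU, but the
# factor-once / solve-many workflow looks the same from the caller's side.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# A small unsymmetric sparse system: 1-D convection-diffusion stencil.
n = 1000
A = sp.diags([-1.2, 2.0, -0.8], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

lu = splu(A)            # sparse LU factorization (with pivoting)
x = lu.solve(b)         # triangular solves reuse the factorization
print(np.linalg.norm(A @ x - b))   # residual ~ machine precision
```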

  8. Scalability issues in evolutionary synthesis of electronic circuits: lessons learned and challenges ahead

    NASA Technical Reports Server (NTRS)

    Stoica, A.; Keymeulen, D.; Zebulum, R. S.; Ferguson, M. I.

    2003-01-01

    This paper describes scalability issues of evolutionary-driven automatic synthesis of electronic circuits. The article begins by reviewing the concepts of circuit evolution and discussing the limitations of this technique when trying to achieve more complex systems.

  9. The scalability in the mechanochemical syntheses of edge functionalized graphene materials and biomass-derived chemicals.

    PubMed

    Blair, Richard G; Chagoya, Katerina; Biltek, Scott; Jackson, Steven; Sinclair, Ashlyn; Taraboletti, Alexandra; Restrepo, David T

    2014-01-01

    Mechanochemical approaches to chemical synthesis offer the promise of improved yields, new reaction pathways, and greener syntheses. Scaling these syntheses is a crucial step toward realizing a commercially viable process. Although much work has been performed on laboratory-scale investigations, little has been done to move these approaches toward industrially relevant scales. Moving reactions from shaker-type and planetary-type mills to scalable solutions can present a challenge. We have investigated scalability through discrete element models, thermal monitoring, and reactor design. We have found that impact forces and macroscopic mixing are important factors in implementing a truly scalable process. These observations have allowed us to scale reactions from a few grams to several hundred grams, and we have successfully implemented scalable solutions for the mechanocatalytic conversion of cellulose to value-added compounds and the synthesis of edge-functionalized graphene.

  10. Visualizing the spinal neuronal dynamics of locomotion

    NASA Astrophysics Data System (ADS)

    Subramanian, Kalpathi R.; Bashor, D. P.; Miller, M. T.; Foster, J. A.

    2004-06-01

    Modern imaging and simulation techniques have enhanced system-level understanding of neural function. In this article, we present an application of interactive visualization to understanding neuronal dynamics causing locomotion of a single hip joint, based on pattern generator output of the spinal cord. Our earlier work visualized cell-level responses of multiple neuronal populations. However, the spatial relationships were abstract, making communication with colleagues difficult. We propose two approaches to overcome this: (1) building a 3D anatomical model of the spinal cord with neurons distributed inside, animated by the simulation and (2) adding limb movements predicted by neuronal activity. The new system was tested using a cat walking central pattern generator driving a pair of opposed spinal motoneuron pools. Output of opposing motoneuron pools was combined into a single metric, called "Net Neural Drive", which generated angular limb movement in proportion to its magnitude. Net neural drive constitutes a new description of limb movement control. The combination of spatial and temporal information in the visualizations elegantly conveys the neural activity of the output elements (motoneurons), as well as the resulting movement. The new system encompasses five biological levels of organization from ion channels to observed behavior. The system is easily scalable, and provides an efficient interactive platform for rapid hypothesis testing.
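
    One reading of the net-neural-drive description (an illustrative sketch; the firing rates, gain, and the velocity-proportional interpretation below are assumptions, not the paper's model) is that the difference between opposing motoneuron-pool outputs drives the joint's angular velocity:

```python
# Illustrative sketch of "Net Neural Drive": opposing pool outputs are
# combined into one signal whose magnitude drives angular movement.
import numpy as np

dt, k = 0.001, 0.5                       # time step (s), drive-to-angle gain
t = np.arange(0, 2, dt)
flexor = 40 * (1 + np.sin(2 * np.pi * t)) / 2          # pool rates (Hz)
extensor = 40 * (1 + np.sin(2 * np.pi * t + np.pi)) / 2

net_drive = flexor - extensor            # the combined metric
angle = np.cumsum(k * net_drive * dt)    # angle integrates the drive
print(angle.min(), angle.max())          # the joint oscillates each cycle
```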

  11. Multivariate volume visualization through dynamic projections

    SciTech Connect

    Liu, Shusen; Wang, Bei; Thiagarajan, Jayaraman J.; Bremer, Peer -Timo; Pascucci, Valerio

    2014-11-01

    We propose a multivariate volume visualization framework that tightly couples dynamic projections with a high-dimensional transfer function design for interactive volume visualization. We assume that the complex, high-dimensional data in the attribute space can be well represented by a collection of low-dimensional linear subspaces, and embed the data points in a variety of 2D views created as projections onto these subspaces. Through dynamic projections, we present animated transitions between different views to help the user navigate and explore the attribute space for effective transfer function design. Our framework not only provides a more intuitive understanding of the attribute space but also allows the design of the transfer function under multiple dynamic views, which is more flexible than being restricted to a single static view of the data. For large volumetric datasets, we maintain interactivity during the transfer function design via intelligent sampling and scalable clustering. Finally, using examples from combustion and climate simulations, we demonstrate how our framework can be used to visualize interesting structures in the volumetric space.
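
    The core mechanism, animated transitions between low-dimensional linear views, can be sketched under simple assumptions (PCA planes as the subspaces and naive basis interpolation; the paper's transfer-function machinery is not reproduced):

```python
# Sketch of dynamic projections: interpolate between two 2-D projection
# bases, re-orthonormalizing so every frame is itself a valid projection.
import numpy as np

def orthonormalize(B):
    q, _ = np.linalg.qr(B)      # columns -> orthonormal basis
    return q

def animate(X, B0, B1, n_frames=10):
    """Yield 2-D embeddings of X while moving from basis B0 to basis B1."""
    for a in np.linspace(0, 1, n_frames):
        B = orthonormalize((1 - a) * B0 + a * B1)   # blend, then fix up
        yield X @ B                                  # project onto the plane

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 8))                      # points in attribute space
_, _, Vt = np.linalg.svd(X - X.mean(0), full_matrices=False)
B0, B1 = Vt[:2].T, Vt[2:4].T                        # PCA planes 1-2 and 3-4
frames = list(animate(X, B0, B1))                   # hand each to a renderer
print(len(frames), frames[0].shape)                 # 10 frames of (5000, 2)
```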

  12. Snowflake Visualization

    NASA Astrophysics Data System (ADS)

    Bliven, L. F.; Kucera, P. A.; Rodriguez, P.

    2010-12-01

    NASA Snowflake Video Imagers (SVIs) enable snowflake visualization at diverse field sites. The natural variability of frozen precipitation is a complicating factor for remote sensing retrievals in high latitude regions. Particle classification is important for understanding snow/ice physics, remote sensing polarimetry, bulk radiative properties, surface emissivity, and ultimately, precipitation rates and accumulations. Yet intermittent storms, low temperatures, high winds, remote locations, and complex terrain can impede us from observing falling snow in situ. SVI hardware and software have some special features. The standard camera and optics yield 8-bit gray-scale images with a resolution of 0.05 x 0.1 mm, at 60 frames per second. Gray-scale images are highly desirable because they display contrast that aids particle classification. Black and white (1-bit) systems display no contrast, so there is less information for recognizing particle types, which is particularly burdensome for aggregates. Data are analyzed at one-minute intervals using NASA's Precipitation Link Software, which produces (a) Particle Catalogs and (b) Particle Size Distributions (PSDs). SVIs can operate nearly continuously for long periods (e.g., an entire winter season), so natural variability can be documented. Here we summarize results from field studies this past winter and review some recent SVI enhancements. During the winter of 2009-2010, SVIs were deployed at two sites. One SVI supported weather observations during the 2010 Winter Olympics and Paralympics; it was located close to the summit (Roundhouse) of Whistler Mountain, near the town of Whistler, British Columbia, Canada. In addition, two SVIs were located at the King City Weather Radar Station (WKR) near Toronto, Ontario, Canada. Access to the SVI on Whistler Mountain was prohibited during the Olympics due to security concerns, so to meet the schedule for daily data products, we operated the SVI by remote control. We also upgraded the

  13. Efficient Test and Visualization of Multi-Set Intersections

    PubMed Central

    Wang, Minghui; Zhao, Yongzhong; Zhang, Bin

    2015-01-01

    Identification of sets of objects with shared features is a common operation in all disciplines. Analysis of intersections among multiple sets is fundamental for in-depth understanding of their complex relationships. However, so far no method has been developed to assess statistical significance of intersections among three or more sets. Moreover, the state-of-the-art approaches for visualization of multi-set intersections are not scalable. Here, we first developed a theoretical framework for computing the statistical distributions of multi-set intersections based upon combinatorial theory, and then accordingly designed a procedure to efficiently calculate the exact probabilities of multi-set intersections. We further developed multiple efficient and scalable techniques to visualize multi-set intersections and the corresponding intersection statistics. We implemented both the theoretical framework and the visualization techniques in a unified R software package, SuperExactTest. We demonstrated the utility of SuperExactTest through an intensive simulation study and a comprehensive analysis of seven independently curated cancer gene sets as well as six disease or trait associated gene sets identified by genome-wide association studies. We expect SuperExactTest developed by this study will have a broad range of applications in scientific data analysis in many disciplines. PMID:26603754
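
    The kind of statistic the package computes exactly can be cross-checked by simulation. Below is a Monte Carlo sketch (hypothetical set sizes; the package's exact combinatorial computation is not reproduced) of the probability that several random sets share at least the observed number of elements:

```python
# Monte Carlo estimate of multi-set intersection significance: how often
# do k random sets of given sizes, drawn from a common background, share
# at least `observed` elements? (Coarse; SuperExactTest computes exactly.)
import numpy as np

def intersection_pvalue(set_sizes, observed, background,
                        trials=2_000, seed=0):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        common = None
        for size in set_sizes:
            s = set(rng.choice(background, size, replace=False).tolist())
            common = s if common is None else common & s
        hits += len(common) >= observed
    return hits / trials   # resolution limited to 1/trials

# Hypothetical example: three gene sets of 300, 400, and 500 genes from a
# 20,000-gene background sharing 5 genes; expected overlap is ~0.15, so
# the estimated p-value is near zero.
print(intersection_pvalue([300, 400, 500], observed=5, background=20_000))
```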

  14. Efficient Test and Visualization of Multi-Set Intersections

    NASA Astrophysics Data System (ADS)

    Wang, Minghui; Zhao, Yongzhong; Zhang, Bin

    2015-11-01

    Identification of sets of objects with shared features is a common operation in all disciplines. Analysis of intersections among multiple sets is fundamental for in-depth understanding of their complex relationships. However, so far no method has been developed to assess statistical significance of intersections among three or more sets. Moreover, the state-of-the-art approaches for visualization of multi-set intersections are not scalable. Here, we first developed a theoretical framework for computing the statistical distributions of multi-set intersections based upon combinatorial theory, and then accordingly designed a procedure to efficiently calculate the exact probabilities of multi-set intersections. We further developed multiple efficient and scalable techniques to visualize multi-set intersections and the corresponding intersection statistics. We implemented both the theoretical framework and the visualization techniques in a unified R software package, SuperExactTest. We demonstrated the utility of SuperExactTest through an intensive simulation study and a comprehensive analysis of seven independently curated cancer gene sets as well as six disease or trait associated gene sets identified by genome-wide association studies. We expect SuperExactTest developed by this study will have a broad range of applications in scientific data analysis in many disciplines.

  15. Multi-Purpose, Application-Centric, Scalable I/O Proxy Application

    2015-06-15

    MACSio is a Multi-purpose, Application-Centric, Scalable I/O proxy application. It is designed to support a number of goals with respect to parallel I/O performance testing and benchmarking, including the ability to test and compare various I/O libraries and I/O paradigms, to predict the scalable performance of real applications, and to help identify where improvements in I/O performance can be made within the HPC I/O software stack.

  16. XGet: a highly scalable and efficient file transfer tool for clusters

    SciTech Connect

    Greenberg, Hugh; Ionkov, Latchesar; Minnich, Ronald

    2008-01-01

    As clusters rapidly grow in size, transferring files between nodes can no longer be solved by the traditional transfer utilities due to their inherent lack of scalability. In this paper, we describe a new file transfer utility called XGet, which was designed to address the scalability problem of standard tools. We compared XGet against four transfer tools: Bittorrent, Rsync, TFTP, and Udpcast, and our results show that XGet's performance is superior to these utilities in many cases.

  17. Scalable synthesis and energy applications of defect-engineered nanomaterials

    NASA Astrophysics Data System (ADS)

    Karakaya, Mehmet

    Nanomaterials and nanotechnologies have attracted a great deal of attention over the past few decades due to novel physical features such as high aspect ratio, surface morphology, and impurities, which lead to unique chemical, optical, and electronic properties. Awareness of the importance of nanomaterials has motivated researchers to develop growth techniques that further control nanostructure properties such as size and surface morphology, which may alter their fundamental behavior. Carbon nanotubes (CNTs) are among the most promising materials, with rigidity, strength, elasticity, and electrical conductivity suited to future applications. Despite the excellent properties explored by abundant research, a big challenge remains in introducing them into the macroscopic world for practical applications. This thesis first gives a brief overview of CNTs; it then examines the mechanical and oil-absorption properties of macro-scale CNT assemblies, followed by CNT energy-storage applications and, finally, fundamental studies of defect-introduced graphene systems. Chapter Two focuses on helically coiled carbon nanotube (HCNT) foams in compression. Similarly to other foams, HCNT foams exhibit preconditioning effects in response to cyclic loading; however, their fundamental deformation mechanisms are unique. Bulk HCNT foams exhibit super-compressibility and recover more than 90% of large compressive strains (up to 80%). When subjected to striker impacts, HCNT foams mitigate impact stresses more effectively than other CNT foams comprised of non-helical CNTs (~50% improvement). The unique mechanical properties we revealed demonstrate that HCNT foams are ideally suited for applications in packaging, impact protection, and vibration mitigation. The third chapter describes a simple method for the scalable synthesis of three-dimensional, elastic, and recyclable multi-walled carbon nanotube (MWCNT) based lightweight bucky-aerogels (BAGs) that are

  18. Simple, Scalable, Script-Based Science Processor (S4P)

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher; Vollmer, Bruce; Berrick, Stephen; Mack, Robert; Pham, Long; Zhou, Bryan; Wharton, Stephen W. (Technical Monitor)

    2001-01-01

    The development and deployment of data processing systems to process Earth Observing System (EOS) data has proven to be costly and prone to technical and schedule risk. Integration of science algorithms into a robust operational system has been difficult. The core processing system, based on commercial tools, has demonstrated limitations at the rates needed to produce the several terabytes per day for EOS, primarily due to job management overhead. This has motivated an evolution of the EOS Data Information System toward a more distributed one incorporating Science Investigator-led Processing Systems (SIPS). As part of this evolution, the Goddard Earth Sciences Distributed Active Archive Center (GES DAAC) has developed a simplified processing system to accommodate the increased load expected with the advent of reprocessing and the launch of a second satellite. This system, the Simple, Scalable, Script-based Science Processor (S4P), may also serve as a resource for future SIPS. The current EOSDIS Core System was designed to be general, resulting in a large, complex mix of commercial and custom software. In contrast, many simpler systems, such as the EROS Data Center AVHRR IKM system, rely on a simple directory structure to drive processing, with directories representing different stages of production. The system passes input data to a directory, and the output data is placed in a "downstream" directory. The GES DAAC's S4P is based on the latter concept, but with modifications to allow varied science algorithms and improve portability. It uses a factory assembly-line paradigm: when work orders arrive at a station, an executable is run, and output work orders are sent to downstream stations. The stations are implemented as UNIX directories, while work orders are simple ASCII files. The core S4P infrastructure consists of a Perl program called stationmaster, which detects newly arrived work orders and forks a job to run the
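
    A toy sketch of the factory assembly-line paradigm just described (in Python rather than Perl; the paths, polling interval, and handler are assumptions, not the real stationmaster): a station watches its UNIX directory for work-order files, processes each one, and forwards an output work order downstream.

```python
# Toy station in the S4P style: poll a directory for ASCII work orders,
# run a handler, and send the resulting work order downstream.
import os, time

STATION, DOWNSTREAM = "stations/subset", "stations/archive"  # assumed paths

def handle(work_order_path):
    """Process one ASCII work order; return the text of the output order."""
    with open(work_order_path) as f:
        return "PROCESSED\n" + f.read()

def stationmaster(poll_seconds=5):
    os.makedirs(STATION, exist_ok=True)
    os.makedirs(DOWNSTREAM, exist_ok=True)
    while True:
        for name in os.listdir(STATION):          # newly arrived work orders
            src = os.path.join(STATION, name)
            out = handle(src)
            with open(os.path.join(DOWNSTREAM, name), "w") as f:
                f.write(out)                      # forward downstream
            os.remove(src)                        # mark this order as done
        time.sleep(poll_seconds)

if __name__ == "__main__":
    stationmaster()
```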

  19. Improving Rural Geriatric Care Through Education: A Scalable, Collaborative Project.

    PubMed

    Buck, Harleah G; Kolanowski, Ann; Fick, Donna; Baronner, Lawrence

    2016-07-01

    HOW TO OBTAIN CONTACT HOURS BY READING THIS ISSUE Instructions: 1.2 contact hours will be awarded by Villanova University College of Nursing upon successful completion of this activity. A contact hour is a unit of measurement that denotes 60 minutes of an organized learning activity. This is a learner-based activity. Villanova University College of Nursing does not require submission of your answers to the quiz. A contact hour certificate will be awarded after you register, pay the registration fee, and complete the evaluation form online at http://goo.gl/gMfXaf. In order to obtain contact hours you must: 1. Read the article, "Improving Rural Geriatric Care Through Education: A Scalable, Collaborative Project," found on pages 306-313, carefully noting any tables and other illustrative materials that are included to enhance your knowledge and understanding of the content. Be sure to keep track of the amount of time (number of minutes) you spend reading the article and completing the quiz. 2. Read and answer each question on the quiz. After completing all of the questions, compare your answers to those provided within this issue. If you have incorrect answers, return to the article for further study. 3. Go to the Villanova website to register for contact hour credit. You will be asked to provide your name, contact information, and a VISA, MasterCard, or Discover card number for payment of the $20.00 fee. Once you complete the online evaluation, a certificate will be automatically generated. This activity is valid for continuing education credit until June 30, 2019. CONTACT HOURS This activity is co-provided by Villanova University College of Nursing and SLACK Incorporated. Villanova University College of Nursing is accredited as a provider of continuing nursing education by the American Nurses Credentialing Center's Commission on Accreditation. OBJECTIVES Describe the unique nursing challenges that occur in caring for older adults in rural areas. Discuss the

  20. Improving Rural Geriatric Care Through Education: A Scalable, Collaborative Project.

    PubMed

    Buck, Harleah G; Kolanowski, Ann; Fick, Donna; Baronner, Lawrence

    2016-07-01

    HOW TO OBTAIN CONTACT HOURS BY READING THIS ISSUE Instructions: 1.2 contact hours will be awarded by Villanova University College of Nursing upon successful completion of this activity. A contact hour is a unit of measurement that denotes 60 minutes of an organized learning activity. This is a learner-based activity. Villanova University College of Nursing does not require submission of your answers to the quiz. A contact hour certificate will be awarded after you register, pay the registration fee, and complete the evaluation form online at http://goo.gl/gMfXaf. In order to obtain contact hours you must: 1. Read the article, "Improving Rural Geriatric Care Through Education: A Scalable, Collaborative Project," found on pages 306-313, carefully noting any tables and other illustrative materials that are included to enhance your knowledge and understanding of the content. Be sure to keep track of the amount of time (number of minutes) you spend reading the article and completing the quiz. 2. Read and answer each question on the quiz. After completing all of the questions, compare your answers to those provided within this issue. If you have incorrect answers, return to the article for further study. 3. Go to the Villanova website to register for contact hour credit. You will be asked to provide your name, contact information, and a VISA, MasterCard, or Discover card number for payment of the $20.00 fee. Once you complete the online evaluation, a certificate will be automatically generated. This activity is valid for continuing education credit until June 30, 2019. CONTACT HOURS This activity is co-provided by Villanova University College of Nursing and SLACK Incorporated. Villanova University College of Nursing is accredited as a provider of continuing nursing education by the American Nurses Credentialing Center's Commission on Accreditation. OBJECTIVES Describe the unique nursing challenges that occur in caring for older adults in rural areas. Discuss the