Sample records for modern graphics processing

  1. The Visualization Toolkit (VTK): Rewriting the rendering code for modern graphics cards

    NASA Astrophysics Data System (ADS)

    Hanwell, Marcus D.; Martin, Kenneth M.; Chaudhary, Aashish; Avila, Lisa S.

    2015-09-01

    The Visualization Toolkit (VTK) is an open source, permissively licensed, cross-platform toolkit for scientific data processing, visualization, and data analysis. It is over two decades old, originally developed for a very different graphics card architecture. Modern graphics cards feature fully programmable, highly parallelized architectures with large core counts. VTK's rendering code was rewritten to take advantage of modern graphics cards, maintaining most of the toolkit's programming interfaces. This offers the opportunity to compare the performance of old and new rendering code on the same systems/cards. Significant improvements in rendering speeds and memory footprints mean that scientific data can be visualized in greater detail than ever before. The widespread use of VTK means that these improvements will reap significant benefits.

  2. Industrial Technology Modernization Program. Project 32. Factory Vision. Phase 2

    DTIC Science & Technology

    1988-04-01

    …instructions for the PWAs, generating the numerical control (NC) program instructions for factory assembly equipment, and controlling the production process instructions and NC… For Assembly Operations, the "Create Production Process Program" will automatically generate a sequence of graphics pages (in paper mode) or graphics screens…

  3. Circumventing Graphical User Interfaces in Chemical Engineering Plant Design

    ERIC Educational Resources Information Center

    Romey, Noel; Schwartz, Rachel M.; Behrend, Douglas; Miao, Peter; Cheung, H. Michael; Beitle, Robert

    2007-01-01

    Graphical User Interfaces (GUIs) are pervasive elements of most modern technical software and represent a convenient tool for student instruction. For example, GUIs are used for [chemical] process design software (e.g., CHEMCAD, PRO/II and ASPEN) typically encountered in the senior capstone course. Drag and drop aspects of GUIs are challenging for…

  4. Evaluating Mobile Graphics Processing Units (GPUs) for Real-Time Resource Constrained Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meredith, J; Conger, J; Liu, Y

    2005-11-11

    Modern graphics processing units (GPUs) can provide tremendous performance boosts for some applications beyond what a single CPU can accomplish, and their performance is growing at a rate faster than CPUs as well. Mobile GPUs available for laptops have the small form factor and low power requirements suitable for use in embedded processing. We evaluated several desktop and mobile GPUs and CPUs on traditional and non-traditional graphics tasks, as well as on the most time consuming pieces of a full hyperspectral imaging application. Accuracy remained high despite small differences in arithmetic operations like rounding. Performance improvements are summarized here relative to a desktop Pentium 4 CPU.

  5. Orthorectification by Using Gpgpu Method

    NASA Astrophysics Data System (ADS)

    Sahin, H.; Kulur, S.

    2012-07-01

    Thanks to the nature of graphics processing, newly released graphics products offer highly parallel processing units with high memory bandwidth and computational power exceeding a teraflop. Modern GPUs are not only powerful graphics engines but also highly parallel programmable processors, with much faster computation and higher memory bandwidth than central processing units (CPUs). Data-parallel computation can be described briefly as mapping data elements to parallel processing threads. The rapid growth in GPU programmability and capability has attracted the attention of researchers dealing with complex problems that need large amounts of computation, and this interest gave rise to the concepts of "General Purpose Computation on Graphics Processing Units (GPGPU)" and "stream processing". Because graphics processors are powerful yet inexpensive and affordable, they have become an alternative to conventional processors: graphics chips that were once fixed-function hardware have been transformed into modern, powerful, programmable processors. The biggest problem is that graphics processing units use programming models that differ from current programming methods. Efficient GPU programming therefore requires re-coding the existing algorithm with the limitations and structure of the graphics hardware in mind, since many-core processors cannot be programmed effectively with traditional, event-driven procedural techniques. GPUs are especially effective when the same computing steps must be repeated for many data elements and high accuracy is needed, so the computation proceeds more quickly and accurately; by comparison, a CPU that performs one computation at a time under flow control is slower for such workloads. This study covers how general-purpose parallel programming and the computational power of GPUs can be used in photogrammetric applications, especially direct georeferencing. The direct georeferencing algorithm was coded using the GPGPU method and the CUDA (Compute Unified Device Architecture) programming language, and the results were compared with a traditional CPU implementation. In a second application, projective rectification was coded using the GPGPU method and CUDA, and sample images of various sizes were evaluated against the results of the CPU program. The GPGPU method is particularly useful for repeating the same computations on highly dense data, so solutions are found quickly.
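
    As a rough illustration of the data-parallel mapping described above (not the authors' code), the sketch below rectifies an image with one CUDA thread per output pixel; the kernel name, the row-major homography layout and the nearest-neighbour resampling are assumptions made for the example.

    ```cuda
    #include <cuda_runtime.h>

    // Hypothetical sketch: projective rectification with one thread per output pixel.
    // H is a 3x3 homography (row-major) mapping rectified coordinates to source coordinates.
    __global__ void rectifyKernel(const unsigned char* src, int srcW, int srcH,
                                  unsigned char* dst, int dstW, int dstH,
                                  const float* H)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= dstW || y >= dstH) return;

        // Apply the projective transform and normalise by the homogeneous coordinate.
        float w = H[6] * x + H[7] * y + H[8];
        float u = (H[0] * x + H[1] * y + H[2]) / w;
        float v = (H[3] * x + H[4] * y + H[5]) / w;

        int iu = (int)(u + 0.5f);   // nearest-neighbour resampling
        int iv = (int)(v + 0.5f);

        dst[y * dstW + x] = (iu >= 0 && iu < srcW && iv >= 0 && iv < srcH)
                                ? src[iv * srcW + iu] : 0;
    }
    ```

    A typical launch would use 16x16 thread blocks covering the output image, so every output pixel is computed independently and in parallel.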

  6. Porting of the transfer-matrix method for multilayer thin-film computations on graphics processing units

    NASA Astrophysics Data System (ADS)

    Limmer, Steffen; Fey, Dietmar

    2013-07-01

    Thin-film computations are often a time-consuming task during optical design. An efficient way to accelerate these computations with the help of graphics processing units (GPUs) is described. It turned out that significant speed-ups can be achieved. We investigate the circumstances under which the best speed-up values can be expected. Therefore we compare different GPUs among themselves and with a modern CPU. Furthermore, the effect of thickness modulation on the speed-up and the runtime behavior depending on the input data is examined.
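
    A minimal sketch of how such thin-film computations map onto a GPU, assuming normal incidence and non-absorbing layers: one thread per wavelength accumulates the product of the 2x2 characteristic matrices of all layers. The kernel name and data layout are hypothetical and do not reproduce the authors' implementation.

    ```cuda
    #include <cuComplex.h>

    // Hypothetical sketch: one thread per wavelength multiplies the per-layer
    // characteristic matrices (normal incidence, real refractive indices assumed).
    __global__ void tmmKernel(const float* lambda, int nLambda,
                              const float* n, const float* d, int nLayers,
                              cuFloatComplex* M /* 4 entries per wavelength, row-major */)
    {
        int k = blockIdx.x * blockDim.x + threadIdx.x;
        if (k >= nLambda) return;

        // Start from the identity matrix.
        cuFloatComplex m00 = make_cuFloatComplex(1.f, 0.f), m01 = make_cuFloatComplex(0.f, 0.f);
        cuFloatComplex m10 = make_cuFloatComplex(0.f, 0.f), m11 = make_cuFloatComplex(1.f, 0.f);

        for (int j = 0; j < nLayers; ++j) {
            float delta = 2.0f * 3.14159265f * n[j] * d[j] / lambda[k];   // phase thickness
            cuFloatComplex a = make_cuFloatComplex(cosf(delta), 0.f);
            cuFloatComplex b = make_cuFloatComplex(0.f, sinf(delta) / n[j]);
            cuFloatComplex c = make_cuFloatComplex(0.f, n[j] * sinf(delta));
            // Multiply the running product by the layer matrix [[a, b], [c, a]].
            cuFloatComplex t00 = cuCaddf(cuCmulf(m00, a), cuCmulf(m01, c));
            cuFloatComplex t01 = cuCaddf(cuCmulf(m00, b), cuCmulf(m01, a));
            cuFloatComplex t10 = cuCaddf(cuCmulf(m10, a), cuCmulf(m11, c));
            cuFloatComplex t11 = cuCaddf(cuCmulf(m10, b), cuCmulf(m11, a));
            m00 = t00; m01 = t01; m10 = t10; m11 = t11;
        }
        M[4 * k + 0] = m00; M[4 * k + 1] = m01;
        M[4 * k + 2] = m10; M[4 * k + 3] = m11;
    }
    ```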

  7. 3D graphics hardware accelerator programming methods for real-time visualization systems

    NASA Astrophysics Data System (ADS)

    Souetov, Andrew E.

    2001-02-01

    The paper deals with new approaches in software design for creating real-time applications that use modern graphics acceleration hardware. The growing complexity of this type of software compels programmers to use different types of CASE systems in the design and development process. The subject under discussion is the integration of such systems into a development process, their effective use, and the combination of these new methods with the need to produce optimal code. A method of integrating simulation and modeling tools into the real-time software development cycle is described.

  8. 3D graphics hardware accelerator programming methods for real-time visualization systems

    NASA Astrophysics Data System (ADS)

    Souetov, Andrew E.

    2000-02-01

    The paper deals with new approaches in software design for creating real-time applications that use modern graphics acceleration hardware. The growing complexity of this type of software compels programmers to use different types of CASE systems in the design and development process. The subject under discussion is the integration of such systems into a development process, their effective use, and the combination of these new methods with the need to produce optimal code. A method of integrating simulation and modeling tools into the real-time software development cycle is described.

  9. GPU applications for data processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vladymyrov, Mykhailo, E-mail: mykhailo.vladymyrov@cern.ch; Aleksandrov, Andrey; INFN sezione di Napoli, I-80125 Napoli

    2015-12-31

    Modern experiments that use nuclear photoemulsion require fast and efficient data acquisition from the emulsion. New approaches to developing scanning systems require real-time processing of large amounts of data. Methods that use Graphical Processing Unit (GPU) computing power for emulsion data processing are presented here. It is shown how GPU-accelerated emulsion processing helped us to raise the scanning speed by a factor of nine.

  10. New Challenge for Graphic Arts: Modernize Now!

    ERIC Educational Resources Information Center

    Sundeen, Earl I.

    1974-01-01

    The Kodak Graphic Arts Manpower Study obtained information from over 1000 graphic arts companies as to the educational needs of today in graphic arts. Vocational educators may have to stop thinking in terms of graphic arts education and begin working on curriculums for career education in the communication field. (Author/DS)

  11. Hyperspectral processing in graphical processing units

    NASA Astrophysics Data System (ADS)

    Winter, Michael E.; Winter, Edwin M.

    2011-06-01

    With the advent of the commercial 3D video card in the mid 1990s, we have seen an order of magnitude performance increase with each generation of new video cards. While these cards were designed primarily for visualization and video games, it became apparent after a short while that they could be used for scientific purposes. These Graphical Processing Units (GPUs) are rapidly being incorporated into data processing tasks usually reserved for general purpose computers. It has been found that many image processing problems scale well to modern GPU systems. We have implemented four popular hyperspectral processing algorithms (N-FINDR, linear unmixing, Principal Components, and the RX anomaly detection algorithm). These algorithms show an across the board speedup of at least a factor of 10, with some special cases showing extreme speedups of a hundred times or more.
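
    To show why per-pixel hyperspectral operations scale so well on GPUs (a simplified sketch, not the authors' implementations of N-FINDR, linear unmixing or RX), the kernel below assigns one thread per pixel and correlates that pixel's spectrum with a reference signature, a basic building block of unmixing and target detection; the names and data layout are assumptions.

    ```cuda
    // Hypothetical sketch: one thread per pixel computes the dot product of that pixel's
    // spectrum with a reference signature (a building block of unmixing and detection).
    __global__ void spectralDotKernel(const float* cube,       // nPixels x nBands, pixel-interleaved
                                      const float* signature,  // nBands reference spectrum
                                      float* score, int nPixels, int nBands)
    {
        int p = blockIdx.x * blockDim.x + threadIdx.x;
        if (p >= nPixels) return;

        float s = 0.0f;
        for (int b = 0; b < nBands; ++b)
            s += cube[p * nBands + b] * signature[b];
        score[p] = s;
    }
    ```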

  12. Processing-in-Memory Enabled Graphics Processors for 3D Rendering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Chenhao; Song, Shuaiwen; Wang, Jing

    2017-02-06

    The performance of 3D rendering on a Graphics Processing Unit, which converts a 3D vector stream into a 2D frame with 3D image effects, significantly impacts users' gaming experience on modern computer systems. Due to the high texture throughput in 3D rendering, main memory bandwidth becomes a critical obstacle to improving overall rendering performance. 3D stacked memory systems such as the Hybrid Memory Cube (HMC) provide opportunities to significantly overcome the memory wall by directly connecting logic controllers to DRAM dies. Based on the observation that texel fetches significantly impact off-chip memory traffic, we propose two architectural designs to enable Processing-In-Memory-based GPUs for efficient 3D rendering.

  13. Graphic Communications--Preparatory Area. Book I--Typography and Modern Typesetting. Teacher's Manual.

    ERIC Educational Resources Information Center

    Hertz, Andrew

    Intended for use with a companion student manual, this teacher's guide lists procedures and teaching tips for each unit of a secondary or postsecondary course of study in typography and modern typesetting. Course objectives are listed for developing student skills in the following preparatory functions of the graphic communications industry: copy…

  14. The Changing Business Environment: Implications for Vocational Curricula. State-of-the-Art Paper.

    ERIC Educational Resources Information Center

    Smith, E. Ray; Stallard, John J.

    The widespread use of the micro/personal computer and related technological advancements are having important impacts on information management in the modern electronic office. Some of the most common software applications include word processing, spread sheet analysis, data management, graphics, and communications. Ancillary hardware/software…

  15. Discovering epistasis in large scale genetic association studies by exploiting graphics cards.

    PubMed

    Chen, Gary K; Guo, Yunfei

    2013-12-03

    Despite the enormous investments made in collecting DNA samples and generating germline variation data across thousands of individuals in modern genome-wide association studies (GWAS), progress has been frustratingly slow in explaining much of the heritability in common disease. Today's paradigm of testing independent hypotheses on each single nucleotide polymorphism (SNP) marker is unlikely to adequately reflect the complex biological processes in disease risk. Alternatively, modeling risk as an ensemble of SNPs that act in concert in a pathway, and/or interact non-additively on log risk for example, may be a more sensible way to approach gene mapping in modern studies. Implementing such analyses genome-wide can quickly become intractable, since even modest-sized SNP panels on modern genotype arrays (500k markers) pose a combinatorial nightmare, requiring tens of billions of models to be tested for evidence of interaction. In this article, we provide an in-depth analysis of programs that have been developed to explicitly overcome these enormous computational barriers through the use of processors on graphics cards known as Graphics Processing Units (GPU). We include tutorials on GPU technology, which convey why they are growing in appeal among today's numerical scientists. One obvious advantage is the impressive density of microprocessor cores that are available on just a single GPU. Whereas high-end servers feature up to 24 Intel or AMD CPU cores, the latest GPU offerings from nVidia feature over 2600 cores. Each compute node may be outfitted with up to 4 GPU devices. Success on GPUs varies across problems. However, epistasis screens fare well due to the high degree of parallelism exposed in these problems. Papers that we review routinely report GPU speedups of over two orders of magnitude (>100x) over standard CPU implementations.
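
    To convey the degree of parallelism exposed by a pairwise screen (a toy sketch, not one of the reviewed packages), the kernel below assigns one thread to each SNP pair and scores it with a simple genotype-product-by-phenotype statistic; the statistic, names and data layout are illustrative assumptions only.

    ```cuda
    // Hypothetical sketch of a pairwise epistasis screen: thread (i, j) scores one SNP pair
    // by correlating the genotype product g_i * g_j with a mean-centred phenotype.
    __global__ void pairScoreKernel(const float* geno,   // nSnps x nSubjects genotype dosages
                                    const float* pheno,  // nSubjects mean-centred phenotypes
                                    float* score, int nSnps, int nSubjects)
    {
        int i = blockIdx.y * blockDim.y + threadIdx.y;
        int j = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= nSnps || j >= nSnps || i >= j) return;   // keep only pairs with i < j

        float s = 0.0f;
        for (int k = 0; k < nSubjects; ++k)
            s += geno[i * nSubjects + k] * geno[j * nSubjects + k] * pheno[k];

        score[i * nSnps + j] = s / nSubjects;
    }
    ```

    Because every pair is independent, billions of such evaluations can be spread across thousands of GPU threads, which is the source of the >100x speedups reported above.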

  16. Discovering epistasis in large scale genetic association studies by exploiting graphics cards

    PubMed Central

    Chen, Gary K.; Guo, Yunfei

    2013-01-01

    Despite the enormous investments made in collecting DNA samples and generating germline variation data across thousands of individuals in modern genome-wide association studies (GWAS), progress has been frustratingly slow in explaining much of the heritability in common disease. Today's paradigm of testing independent hypotheses on each single nucleotide polymorphism (SNP) marker is unlikely to adequately reflect the complex biological processes in disease risk. Alternatively, modeling risk as an ensemble of SNPs that act in concert in a pathway, and/or interact non-additively on log risk for example, may be a more sensible way to approach gene mapping in modern studies. Implementing such analyses genome-wide can quickly become intractable, since even modest-sized SNP panels on modern genotype arrays (500k markers) pose a combinatorial nightmare, requiring tens of billions of models to be tested for evidence of interaction. In this article, we provide an in-depth analysis of programs that have been developed to explicitly overcome these enormous computational barriers through the use of processors on graphics cards known as Graphics Processing Units (GPU). We include tutorials on GPU technology, which convey why they are growing in appeal among today's numerical scientists. One obvious advantage is the impressive density of microprocessor cores that are available on just a single GPU. Whereas high-end servers feature up to 24 Intel or AMD CPU cores, the latest GPU offerings from nVidia feature over 2600 cores. Each compute node may be outfitted with up to 4 GPU devices. Success on GPUs varies across problems. However, epistasis screens fare well due to the high degree of parallelism exposed in these problems. Papers that we review routinely report GPU speedups of over two orders of magnitude (>100x) over standard CPU implementations. PMID:24348518

  17. Three-Dimensional Computer Simulation as an Important Competence Based Aspect of a Modern Mining Professional

    NASA Astrophysics Data System (ADS)

    Aksenova, Olesya; Pachkina, Anna

    2017-11-01

    The article deals with the need to transform the educational process to meet the requirements of the modern mining industry, including the cooperative development of new educational programs and their implementation with modern manufacturing capabilities in mind. The paper argues for introducing into the training of mining professionals the study of three-dimensional models of the surface technological complex, ore reserves and the underground digging complex, as well as the creation of these models in different graphics editors and work with the information analysis model obtained on the basis of these three-dimensional models. It also covers the technological process of manless coal mining at the Polysaevskaya mine, which is controlled by information analysis models built from three-dimensional models of individual objects and of the technological process as a whole, and which therefore requires staff able to use programs for three-dimensional positioning of miners and equipment in a global frame of reference.

  18. A real-time GNSS-R system based on software-defined radio and graphics processing units

    NASA Astrophysics Data System (ADS)

    Hobiger, Thomas; Amagai, Jun; Aida, Masanori; Narita, Hideki

    2012-04-01

    Reflected signals of the Global Navigation Satellite System (GNSS) from the sea or land surface can be utilized to deduce and monitor physical and geophysical parameters of the reflecting area. Unlike most other remote sensing techniques, GNSS-Reflectometry (GNSS-R) operates as a passive radar that takes advantage of the increasing number of navigation satellites that broadcast their L-band signals. To date, most GNSS-R receiver architectures have been based on dedicated hardware solutions. Software-defined radio (SDR) technology has advanced in recent years and enabled signal processing in real time, which makes it an ideal candidate for the realization of a flexible GNSS-R system. Additionally, modern commodity graphics cards, which offer massive parallel computing performance, make it possible to handle the whole signal processing chain without interfering with the PC's CPU. Thus, this paper describes a GNSS-R system which has been developed on the principles of software-defined radio supported by General Purpose Graphics Processing Units (GPGPUs), and presents results from initial field tests which confirm the anticipated capability of the system.
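
    As a hedged sketch of the kind of per-lag work such a GPGPU-based processor parallelizes (real-valued and much simplified compared with a full complex I/Q correlation chain), the kernel below assigns one thread to each delay lag of the code correlation; the names and layout are assumptions, not the authors' design.

    ```cuda
    // Hypothetical sketch: one thread per delay lag correlates the received samples with a
    // local replica of the GNSS spreading code, the core operation of a software receiver.
    __global__ void correlateKernel(const float* rx, const float* replica,
                                    float* corr, int nSamples, int nLags)
    {
        int lag = blockIdx.x * blockDim.x + threadIdx.x;
        if (lag >= nLags) return;

        float acc = 0.0f;
        for (int n = 0; n < nSamples - lag; ++n)
            acc += rx[n + lag] * replica[n];
        corr[lag] = acc;
    }
    ```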

  19. Incorporating client-server database architecture and graphical user interface into outpatient medical records.

    PubMed Central

    Fiacco, P. A.; Rice, W. H.

    1991-01-01

    Computerized medical record systems require structured database architectures for information processing. However, the data must be able to be transferred across heterogeneous platforms and software systems. Client-server architecture allows for distributed processing of information among networked computers and provides the flexibility needed to link diverse systems together effectively. We have incorporated this client-server model with a graphical user interface into an outpatient medical record system, known as SuperChart, for the Department of Family Medicine at SUNY Health Science Center at Syracuse. SuperChart was developed using SuperCard and Oracle. SuperCard uses modern object-oriented programming to support a hypermedia environment. Oracle is a powerful relational database management system that incorporates a client-server architecture. This provides both a distributed database and distributed processing, which improves performance. PMID:1807732

  20. How Computer Graphics Work.

    ERIC Educational Resources Information Center

    Prosise, Jeff

    This document presents the principles behind modern computer graphics without straying into the arcane languages of mathematics and computer science. Illustrations accompany the clear, step-by-step explanations that describe how computers draw pictures. The 22 chapters of the book are organized into 5 sections. "Part 1: Computer Graphics in…

  1. Exploiting current-generation graphics hardware for synthetic-scene generation

    NASA Astrophysics Data System (ADS)

    Tanner, Michael A.; Keen, Wayne A.

    2010-04-01

    Increasing seeker frame rate and pixel count, as well as the demand for higher levels of scene fidelity, have driven scene generation software for hardware-in-the-loop (HWIL) and software-in-the-loop (SWIL) testing to higher levels of parallelization. Because modern PC graphics cards provide multiple computational cores (e.g., 240 shader cores on current NVIDIA GeForce and Quadro cards), implementation of phenomenology codes on graphics processing units (GPUs) offers significant potential for simultaneous enhancement of simulation frame rate and fidelity. To take advantage of this potential requires algorithm implementation that is structured to minimize data transfers between the central processing unit (CPU) and the GPU. In this paper, preliminary methodologies developed at the Kinetic Hardware In-The-Loop Simulator (KHILS) will be presented. Included in this paper will be various language tradeoffs between conventional shader programming, Compute Unified Device Architecture (CUDA) and Open Computing Language (OpenCL), including performance trades and possible pathways for future tool development.

  2. MGUPGMA: A Fast UPGMA Algorithm With Multiple Graphics Processing Units Using NCCL

    PubMed Central

    Hua, Guan-Jie; Hung, Che-Lun; Lin, Chun-Yuan; Wu, Fu-Che; Chan, Yu-Wei; Tang, Chuan Yi

    2017-01-01

    A phylogenetic tree is a visual diagram of the relationships among a set of biological species, which scientists use to analyze many characteristics of the species. Distance-matrix methods, such as the Unweighted Pair Group Method with Arithmetic Mean (UPGMA) and Neighbor Joining, construct a phylogenetic tree by calculating pairwise genetic distances between taxa, but these methods have computational performance issues. Although several new methods using high-performance hardware and frameworks have been proposed, the issue remains. In this work, a novel parallel UPGMA approach on multiple Graphics Processing Units is proposed to construct a phylogenetic tree from an extremely large set of sequences. The experimental results show that the proposed approach on a DGX-1 server with 8 NVIDIA P100 graphics cards achieves approximately 3-fold to 7-fold speedups over UPGMA implementations on a modern CPU and a single GPU, respectively. PMID:29051701
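
    The dominant cost in each UPGMA iteration is locating the smallest entry of the distance matrix. The sketch below shows a single-GPU, block-level CUDA reduction for that step, assuming 256 threads per block and a flattened matrix; the host (or NCCL, in a multi-GPU setting) would combine the per-block results. It is an illustrative assumption, not the MGUPGMA code.

    ```cuda
    // Hypothetical sketch: block-level reduction that finds the smallest entry (and its
    // index) of a flattened pairwise distance matrix. Assumes blockDim.x == 256.
    __global__ void minDistKernel(const float* dist, int n, float* blockMin, int* blockArg)
    {
        __shared__ float sval[256];
        __shared__ int   sidx[256];

        int tid = threadIdx.x;
        int i   = blockIdx.x * blockDim.x + tid;

        sval[tid] = (i < n) ? dist[i] : 3.4e38f;   // pad out-of-range threads with "infinity"
        sidx[tid] = i;
        __syncthreads();

        for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
            if (tid < stride && sval[tid + stride] < sval[tid]) {
                sval[tid] = sval[tid + stride];
                sidx[tid] = sidx[tid + stride];
            }
            __syncthreads();
        }
        if (tid == 0) { blockMin[blockIdx.x] = sval[0]; blockArg[blockIdx.x] = sidx[0]; }
    }
    ```

    A short host-side pass over the blockMin/blockArg arrays then yields the global minimum and hence the pair of taxa to merge in that iteration.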

  3. MGUPGMA: A Fast UPGMA Algorithm With Multiple Graphics Processing Units Using NCCL.

    PubMed

    Hua, Guan-Jie; Hung, Che-Lun; Lin, Chun-Yuan; Wu, Fu-Che; Chan, Yu-Wei; Tang, Chuan Yi

    2017-01-01

    A phylogenetic tree is a visual diagram of the relationships among a set of biological species, which scientists use to analyze many characteristics of the species. Distance-matrix methods, such as the Unweighted Pair Group Method with Arithmetic Mean (UPGMA) and Neighbor Joining, construct a phylogenetic tree by calculating pairwise genetic distances between taxa, but these methods have computational performance issues. Although several new methods using high-performance hardware and frameworks have been proposed, the issue remains. In this work, a novel parallel UPGMA approach on multiple Graphics Processing Units is proposed to construct a phylogenetic tree from an extremely large set of sequences. The experimental results show that the proposed approach on a DGX-1 server with 8 NVIDIA P100 graphics cards achieves approximately 3-fold to 7-fold speedups over UPGMA implementations on a modern CPU and a single GPU, respectively.

  4. Graphical User Interface Programming in Introductory Computer Science.

    ERIC Educational Resources Information Center

    Skolnick, Michael M.; Spooner, David L.

    Modern computing systems exploit graphical user interfaces for interaction with users; as a result, introductory computer science courses must begin to teach the principles underlying such interfaces. This paper presents an approach to graphical user interface (GUI) implementation that is simple enough for beginning students to understand, yet…

  5. Ancient Memory Arts and Modern Graphics.

    ERIC Educational Resources Information Center

    McNair, John R.

    1991-01-01

    Sketches the art of memory in the classical period, medieval times, and the sixteenth century. Maintains that in classrooms, workshops, and seminars the old memory art can illuminate the role of graphics in technical communication and can promote the creation of fresh, mnemonically powerful graphics for publications and presentation. (SR)

  6. Modernization of the graphics post-processors of the Hamburg German Climate Computer Center Carbon Cycle Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevens, E.J.; McNeilly, G.S.

    The existing National Center for Atmospheric Research (NCAR) code in the Hamburg Oceanic Carbon Cycle Circulation Model and the Hamburg Large-Scale Geostrophic Ocean General Circulation Model was modernized and reduced in size while still producing an equivalent end result. A reduction in the size of the existing code from more than 50,000 lines to approximately 7,500 lines in the new code has made the new code much easier to maintain. The existing code in the Hamburg models uses legacy NCAR graphics (even including emulated CALCOMP subroutines) to display graphical output. The new code uses only current (version 3.1) NCAR subroutines.

  7. Application of enhanced modern structured analysis techniques to Space Station Freedom electric power system requirements

    NASA Technical Reports Server (NTRS)

    Biernacki, John; Juhasz, John; Sadler, Gerald

    1991-01-01

    A team of Space Station Freedom (SSF) system engineers is conducting an extensive analysis of the SSF requirements, particularly those pertaining to the electrical power system (EPS). The objective of this analysis is the development of a comprehensive, computer-based requirements model, using an enhanced modern structured analysis methodology (EMSA). Such a model provides a detailed and consistent representation of the system's requirements. The process outlined in the EMSA methodology is unique in that it allows the graphical modeling of real-time system state transitions, as well as functional requirements and data relationships, to be implemented using modern computer-based tools. These tools permit flexible updating and continuous maintenance of the models. Initial findings resulting from the application of EMSA to the EPS have benefited the space station program by linking requirements to design, providing traceability of requirements, identifying discrepancies, and fostering an understanding of the EPS.

  8. Mendel-GPU: haplotyping and genotype imputation on graphics processing units

    PubMed Central

    Chen, Gary K.; Wang, Kai; Stram, Alex H.; Sobel, Eric M.; Lange, Kenneth

    2012-01-01

    Motivation: In modern sequencing studies, one can improve the confidence of genotype calls by phasing haplotypes using information from an external reference panel of fully typed unrelated individuals. However, the computational demands are so high that they prohibit researchers with limited computational resources from haplotyping large-scale sequence data. Results: Our graphics processing unit based software delivers haplotyping and imputation accuracies comparable to competing programs at a fraction of the computational cost and peak memory demand. Availability: Mendel-GPU, our OpenCL software, runs on Linux platforms and is portable across AMD and nVidia GPUs. Users can download both code and documentation at http://code.google.com/p/mendel-gpu/. Contact: gary.k.chen@usc.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:22954633

  9. SIMD Optimization of Linear Expressions for Programmable Graphics Hardware

    PubMed Central

    Bajaj, Chandrajit; Ihm, Insung; Min, Jungki; Oh, Jinsang

    2009-01-01

    The increased programmability of graphics hardware allows efficient graphical processing unit (GPU) implementations of a wide range of general computations on commodity PCs. An important factor in such implementations is how to fully exploit the SIMD computing capacities offered by modern graphics processors. Linear expressions in the form of ȳ = Ax̄ + b̄, where A is a matrix, and x̄, ȳ and b̄ are vectors, constitute one of the most basic operations in many scientific computations. In this paper, we propose a SIMD code optimization technique that enables efficient shader codes to be generated for evaluating linear expressions. It is shown that performance can be improved considerably by efficiently packing arithmetic operations into four-wide SIMD instructions through reordering of the operations in linear expressions. We demonstrate that the presented technique can be used effectively for programming both vertex and pixel shaders for a variety of mathematical applications, including integrating differential equations and solving a sparse linear system of equations using iterative methods. PMID:19946569
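
    To illustrate the four-wide packing idea in a CUDA setting (the paper targets vertex and pixel shaders, so this is an analogy rather than the authors' shader code), the kernel below evaluates y = Ax + b with four consecutive rows of A packed into each float4, so every multiply-add advances four output components at once; the names and storage layout are assumptions, and the row count is assumed to be a multiple of four.

    ```cuda
    // Hypothetical sketch: y = A*x + b with rows processed four at a time as float4 vectors,
    // mimicking four-wide SIMD packing. A4[r*cols + j] holds column j of rows 4r..4r+3.
    __global__ void linexprKernel(const float4* A4, const float* x, const float4* b4,
                                  float4* y4, int rowGroups, int cols)
    {
        int r = blockIdx.x * blockDim.x + threadIdx.x;
        if (r >= rowGroups) return;

        float4 acc = b4[r];                       // start from the constant term
        for (int j = 0; j < cols; ++j) {
            float4 a  = A4[r * cols + j];
            float  xj = x[j];
            acc.x += a.x * xj;  acc.y += a.y * xj;   // one scalar of x updates
            acc.z += a.z * xj;  acc.w += a.w * xj;   // four output rows at once
        }
        y4[r] = acc;
    }
    ```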

  10. Living Color Frame System: PC graphics tool for data visualization

    NASA Technical Reports Server (NTRS)

    Truong, Long V.

    1993-01-01

    Living Color Frame System (LCFS) is a personal computer software tool for generating real-time graphics applications. It is highly applicable for a wide range of data visualization in virtual environment applications. Engineers often use computer graphics to enhance the interpretation of data under observation. These graphics become more complicated when 'run time' animations are required, such as found in many typical modern artificial intelligence and expert systems. Living Color Frame System solves many of these real-time graphics problems.

  11. The Experience of Teaching of Descriptive Geometry and Engineering Graphics in Russian Language as a Foreign Language

    ERIC Educational Resources Information Center

    Voronina, Marianna V.; Tretyakova, Zlata O.

    2017-01-01

    The article considers the peculiarities of teaching foreign students the subject "Descriptive Geometry and Engineering Graphics" at a modern engineering university in Russia. The relevance of the problem is conditioned by the fact that there are virtually no special studies of teaching Descriptive Geometry and Engineering Graphics in Russian…

  12. A modern approach to storing of 3D geometry of objects in machine engineering industry

    NASA Astrophysics Data System (ADS)

    Sokolova, E. A.; Aslanov, G. A.; Sokolov, A. A.

    2017-02-01

    3D graphics is a branch of computer graphics that has absorbed much from vector and raster computer graphics. It is used in interior design projects, architectural projects, advertising, the creation of educational computer programs and movies, visual images of parts and products in engineering, etc. 3D computer graphics allows one to create 3D scenes, simulate lighting conditions and set up viewpoints.

  13. Development of a Turbofan Engine Simulation in a Graphical Simulation Environment

    NASA Technical Reports Server (NTRS)

    Parker, Khary I.; Guo, Ten-Heui

    2003-01-01

    This paper presents the development of a generic component level model of a turbofan engine simulation with a digital controller, in an advanced graphical simulation environment. The goal of this effort is to develop and demonstrate a flexible simulation platform for future research in propulsion system control and diagnostic technology. A previously validated FORTRAN-based model of a modern, high-performance, military-type turbofan engine is being used to validate the platform development. The implementation process required the development of various innovative procedures, which are discussed in the paper. Open-loop and closed-loop comparisons are made between the two simulations. Future enhancements that are to be made to the modular engine simulation are summarized.

  14. A High Performance VLSI Computer Architecture For Computer Graphics

    NASA Astrophysics Data System (ADS)

    Chin, Chi-Yuan; Lin, Wen-Tai

    1988-10-01

    A VLSI computer architecture, consisting of multiple processors, is presented in this paper to satisfy modern computer graphics demands, e.g., high resolution, realistic animation, and real-time display. All processors share a global memory which is partitioned into multiple banks. Through a crossbar network, data from one memory bank can be broadcast to many processors. Processors are physically interconnected through a hyper-crossbar network (a crossbar-like network). By programming the network, the topology of communication links among processors can be reconfigured to satisfy specific dataflows of different applications. Each processor consists of a controller, arithmetic operators, local memory, a local crossbar network, and I/O ports to communicate with other processors, memory banks, and a system controller. Operations in each processor are characterized into two modes, i.e. object domain and space domain, to fully utilize the data-independency characteristics of graphics processing. Special graphics features such as 3D-to-2D conversion, shadow generation, texturing, and reflection can be easily handled. With the current high density interconnection (MI) technology, it is feasible to implement a 64-processor system to achieve 2.5 billion operations per second, a performance needed in most advanced graphics applications.

  15. Opacity and Transparency in Phonological Change

    ERIC Educational Resources Information Center

    Gress-Wright, Jonathan

    2010-01-01

    Final obstruent devoicing is attested in both Middle and Modern High German, and the modern rule is usually assumed to have been directly inherited from the medieval rule without any chronological break (Reichmann & Wegera 1993), despite the fact that the graphic representation of final devoicing ceased in the Early Modern period. However, an…

  16. An introduction to real-time graphical techniques for analyzing multivariate data

    NASA Astrophysics Data System (ADS)

    Friedman, Jerome H.; McDonald, John Alan; Stuetzle, Werner

    1987-08-01

    Orion I is a graphics system used to study applications of computer graphics - especially interactive motion graphics - in statistics. Orion I is the newest of a family of "Prim" systems, whose most striking common feature is the use of real-time motion graphics to display three dimensional scatterplots. Orion I differs from earlier Prim systems through the use of modern and relatively inexpensive raster graphics and microprocessor technology. It also delivers more computing power to its user; Orion I can perform more sophisticated real-time computations than were possible on previous such systems. We demonstrate some of Orion I's capabilities in our film: "Exploring data with Orion I".

  17. Computer simulations and real-time control of ELT AO systems using graphical processing units

    NASA Astrophysics Data System (ADS)

    Wang, Lianqi; Ellerbroek, Brent

    2012-07-01

    The adaptive optics (AO) simulations at the Thirty Meter Telescope (TMT) have been carried out using the efficient, C-based multi-threaded adaptive optics simulator (MAOS, http://github.com/lianqiw/maos). By porting time-critical parts of MAOS to graphics processing units (GPU) using NVIDIA CUDA technology, we achieved a 10-fold speedup for each GTX 580 GPU used compared to a modern quad-core CPU. Each time step of a full-scale, end-to-end simulation for the TMT narrow field infrared AO system (NFIRAOS) takes only 0.11 seconds on a desktop with two GTX 580s. We also demonstrate that the TMT minimum variance reconstructor can be assembled in matrix vector multiply (MVM) format in 8 seconds with 8 GTX 580 GPUs, meeting the TMT requirement for updating the reconstructor. Analysis shows that it is also possible to apply the MVM using 8 GTX 580s within the required latency.

  18. Massively Parallel Signal Processing using the Graphics Processing Unit for Real-Time Brain-Computer Interface Feature Extraction.

    PubMed

    Wilson, J Adam; Williams, Justin C

    2009-01-01

    The clock speeds of modern computer processors have nearly plateaued in the past 5 years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card [graphics processing unit (GPU)] was developed for real-time neural signal processing of a brain-computer interface (BCI). The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix-matrix multiplication (i.e., a spatial filter), followed by calculating the power spectral density on every channel using an auto-regressive method, and finally classifying appropriate features for control. In this study, the first two computationally intensive steps were implemented on the GPU, and the speed was compared to both the current implementation and a central processing unit-based implementation that uses multi-threading. Significant performance gains were obtained with GPU processing: the current implementation processed 1000 channels of 250 ms in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times.
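
    A hedged host-side sketch of offloading the first step of that chain, the spatial-filter matrix multiply, to the GPU with cuBLAS (the study used its own CUDA processing; this is only an illustration of the idea). The helper name, matrix shapes and column-major layout are assumptions; PSD estimation and classification are not shown.

    ```cuda
    #include <cublas_v2.h>
    #include <cuda_runtime.h>

    // Applies filtered = W * data, where W is (nOut x nChan) and data is (nChan x nSamples),
    // all stored column-major on the device, as cuBLAS expects.
    void spatialFilterGPU(cublasHandle_t handle,
                          const float* dW, const float* dData, float* dFiltered,
                          int nOut, int nChan, int nSamples)
    {
        const float alpha = 1.0f, beta = 0.0f;
        // C(m x n) = alpha * A(m x k) * B(k x n) + beta * C
        cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                    nOut, nSamples, nChan,
                    &alpha, dW, nOut, dData, nChan,
                    &beta, dFiltered, nOut);
    }
    ```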

  19. CCP4i2: the new graphical user interface to the CCP4 program suite.

    PubMed

    Potterton, Liz; Agirre, Jon; Ballard, Charles; Cowtan, Kevin; Dodson, Eleanor; Evans, Phil R; Jenkins, Huw T; Keegan, Ronan; Krissinel, Eugene; Stevenson, Kyle; Lebedev, Andrey; McNicholas, Stuart J; Nicholls, Robert A; Noble, Martin; Pannu, Navraj S; Roth, Christian; Sheldrick, George; Skubak, Pavol; Turkenburg, Johan; Uski, Ville; von Delft, Frank; Waterman, David; Wilson, Keith; Winn, Martyn; Wojdyr, Marcin

    2018-02-01

    The CCP4 (Collaborative Computational Project, Number 4) software suite for macromolecular structure determination by X-ray crystallography groups brings together many programs and libraries that, by means of well established conventions, interoperate effectively without adhering to strict design guidelines. Because of this inherent flexibility, users are often presented with diverse, even divergent, choices for solving every type of problem. Recently, CCP4 introduced CCP4i2, a modern graphical interface designed to help structural biologists to navigate the process of structure determination, with an emphasis on pipelining and the streamlined presentation of results. In addition, CCP4i2 provides a framework for writing structure-solution scripts that can be built up incrementally to create increasingly automatic procedures.

  20. Real-time graphic display utility for nuclear safety applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, S.; Huang, X.; Taylor, J.

    2006-07-01

    With increasing interest in nuclear energy, new nuclear power plants will be constructed and licensed, and older-generation plants will be upgraded to assure continuing operation. The tendency to adopt the latest proven technology, together with older parts becoming obsolete, has made such upgrades imperative. One area for upgrades is the replacement of older CRT displays by modern graphics displays running under a modern real-time operating system (RTOS) on a safety-graded computer. HFC has developed a graphic display utility (GDU) under the QNX RTOS. A standard off-the-shelf software product with a long history of performance in industrial applications, the QNX RTOS used for safety applications has been examined via a commercial dedication process consistent with the regulatory guidelines. Through a commercial survey, a design life-cycle and operating-history evaluation, and the necessary tests dictated by the dedication plan, it was reasonably confirmed that the QNX RTOS is essentially equivalent to what would be expected in the nuclear industry. The developed GDU operates and communicates with existing equipment through a dedicated serial channel of a flat panel controller (FPC) module. The FPC module drives a flat panel display (FPD) monitor, and a touch screen mounted on the FPD serves as the normal operator interface with the FPC/FPD monitor system. The GDU can be used not only to replace older CRTs but also in new applications. Replacement of the older CRT does not disturb the function of the existing equipment; it not only provides a modern, proven technology upgrade but also improves human ergonomics. The FPC, which can be used as a standalone controller running the GDU, is an integrated hardware and software module. It operates as a single-board computer within a control system and applies primarily to the graphics display, targeting, keyboard and mouse. During normal system operation, the GDU has two sources of data input: a serial interface with field equipment and a serial input from the FPD touch screen. Data collection from the field equipment consists of the regular exchange of data update request messages and target commands sent to the equipment and the update messages returned to the FPC. The data updates from field equipment control the displays presented on the graphic pages. Touch screen contacts are decoded to identify the physical position that was contacted; if that position corresponds to one of the buttons on the graphic page, the software uses that input to initiate the function defined for that button. In this paper, the FPC is illustrated both as a standalone system and as a module in a dedicated control system. The GDU design concepts and design flow are demonstrated, the dedication process of the QNX RTOS needed for the GDU is highlighted, and finally the GDU is presented with a specific application example from one of the nuclear power plants. (authors)

  1. GPU-BSM: A GPU-Based Tool to Map Bisulfite-Treated Reads

    PubMed Central

    Manconi, Andrea; Orro, Alessandro; Manca, Emanuele; Armano, Giuliano; Milanesi, Luciano

    2014-01-01

    Cytosine DNA methylation is an epigenetic mark implicated in several biological processes. Bisulfite treatment of DNA is acknowledged as the gold standard technique to study methylation. This technique introduces changes in the genomic DNA by converting cytosines to uracils while 5-methylcytosines remain nonreactive. During PCR amplification 5-methylcytosines are amplified as cytosine, whereas uracils and thymines as thymine. To detect the methylation levels, reads treated with the bisulfite must be aligned against a reference genome. Mapping these reads to a reference genome represents a significant computational challenge mainly due to the increased search space and the loss of information introduced by the treatment. To deal with this computational challenge we devised GPU-BSM, a tool based on modern Graphics Processing Units. Graphics Processing Units are hardware accelerators that are increasingly being used successfully to accelerate general-purpose scientific applications. GPU-BSM is a tool able to map bisulfite-treated reads from whole genome bisulfite sequencing and reduced representation bisulfite sequencing, and to estimate methylation levels, with the goal of detecting methylation. Due to the massive parallelization obtained by exploiting graphics cards, GPU-BSM aligns bisulfite-treated reads faster than other cutting-edge solutions, while outperforming most of them in terms of unique mapped reads. PMID:24842718
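
    One small, easily parallelized piece of bisulfite-aware mapping is the in-silico conversion of reads before alignment. The sketch below (an assumption for illustration, not GPU-BSM's code) rewrites cytosines as thymines with one thread per base.

    ```cuda
    // Hypothetical sketch of the in-silico bisulfite conversion step: each thread rewrites
    // one base so that C becomes T, matching a C-to-T-converted reference genome.
    __global__ void c2tKernel(char* reads, size_t nBases)
    {
        size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= nBases) return;
        if (reads[i] == 'C') reads[i] = 'T';
    }
    ```

    The seed-and-extend alignment itself is far more involved, but this conversion step illustrates the per-element independence that makes the workload a good fit for graphics cards.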

  2. The Representational Value of Hats

    ERIC Educational Resources Information Center

    Watson, Jane M.; Fitzallen, Noleine E.; Wilson, Karen G.; Creed, Julie F.

    2008-01-01

    The literature that is available on the topic of representations in mathematics is vast. One commonly discussed item is graphical representations. From the history of mathematics to modern uses of technology, a variety of graphical forms are available for middle school students to use to represent mathematical ideas. The ideas range from algebraic…

  3. Information-computational platform for collaborative multidisciplinary investigations of regional climatic changes and their impacts

    NASA Astrophysics Data System (ADS)

    Gordov, Evgeny; Lykosov, Vasily; Krupchatnikov, Vladimir; Okladnikov, Igor; Titov, Alexander; Shulgina, Tamara

    2013-04-01

    Analysis of the growing volume of climate-change-related data from sensors and model outputs requires collaborative multidisciplinary efforts by researchers. To do this in a timely and reliable way, a modern information-computational infrastructure supporting integrated studies in the environmental sciences is needed. The recently developed experimental software and hardware platform Climate (http://climate.scert.ru/) provides the required environment for investigations of regional climate change. The platform combines a modern Web 2.0 approach, GIS functionality and the capability to run climate and meteorological models, process large geophysical datasets and support the relevant analysis. It also supports joint software development by distributed research groups and the organization of thematic education for students and post-graduate students. In particular, the platform software includes dedicated modules for numerical processing of regional and global modeling results for subsequent analysis and visualization. Runs of the WRF and «Planet Simulator» models integrated into the platform, along with preprocessing and visualization of the modeling results, are also provided. All functions of the platform are accessible through a web portal using a common graphical web browser, in the form of an interactive graphical user interface that provides, in particular, selection of a geographical region of interest (pan and zoom), manipulation of data layers (order, enable/disable, feature extraction) and visualization of results. The platform gives users the ability to analyze heterogeneous geophysical data, including high-resolution data, and to discover tendencies in climatic and ecosystem changes in the framework of various multidisciplinary studies. Using it, even an unskilled user without specific knowledge can perform reliable computational processing and visualization of large meteorological, climatic and satellite monitoring datasets through a unified graphical web interface. Partial support by RF Ministry of Education and Science grant 8345, SB RAS Program VIII.80.2 and Projects 69, 131 and 140, and the APN CBA2012-16NSY project is acknowledged.

  4. Improved compliance by BPM-driven workflow automation.

    PubMed

    Holzmüller-Laue, Silke; Göde, Bernd; Fleischer, Heidi; Thurow, Kerstin

    2014-12-01

    Using methods and technologies of business process management (BPM) for laboratory automation has important benefits (i.e., the agility of high-level automation processes, rapid interdisciplinary prototyping and implementation of laboratory tasks and procedures, and efficient real-time process documentation). A principal goal of the model-driven development is the improved transparency of processes and the alignment of process diagrams and technical code. First experiences of using the business process model and notation (BPMN) show that easy-to-read graphical process models can achieve and provide standardization of laboratory workflows. The model-based development allows processes to be changed quickly and adapted easily to changing requirements. The process models are able to host work procedures and their scheduling in compliance with predefined guidelines and policies. Finally, the process-controlled documentation of complex workflow results addresses modern laboratory needs of quality assurance. BPMN 2.0, as an automation language to control every kind of activity or subprocess, is directed at complete workflows in end-to-end relationships. BPMN is applicable as a system-independent and cross-disciplinary graphical language to document all methods in laboratories (i.e., screening procedures or analytical processes). That means that, with the BPM standard, a method of sharing process knowledge among laboratories is also available. © 2014 Society for Laboratory Automation and Screening.

  5. CCP4i2: the new graphical user interface to the CCP4 program suite

    PubMed Central

    Potterton, Liz; Ballard, Charles; Dodson, Eleanor; Evans, Phil R.; Keegan, Ronan; Krissinel, Eugene; Stevenson, Kyle; Lebedev, Andrey; McNicholas, Stuart J.; Noble, Martin; Pannu, Navraj S.; Roth, Christian; Sheldrick, George; Skubak, Pavol; Uski, Ville

    2018-01-01

    The CCP4 (Collaborative Computational Project, Number 4) software suite for macromolecular structure determination by X-ray crystallography groups brings together many programs and libraries that, by means of well established conventions, interoperate effectively without adhering to strict design guidelines. Because of this inherent flexibility, users are often presented with diverse, even divergent, choices for solving every type of problem. Recently, CCP4 introduced CCP4i2, a modern graphical interface designed to help structural biologists to navigate the process of structure determination, with an emphasis on pipelining and the streamlined presentation of results. In addition, CCP4i2 provides a framework for writing structure-solution scripts that can be built up incrementally to create increasingly automatic procedures. PMID:29533233

  6. Digital Waveguide Architectures for Virtual Musical Instruments

    NASA Astrophysics Data System (ADS)

    Smith, Julius O.

    Digital sound synthesis has become a standard staple of modern music studios, videogames, personal computers, and hand-held devices. As processing power has increased over the years, sound synthesis implementations have evolved from dedicated chip sets, to single-chip solutions, and ultimately to software implementations within processors used primarily for other tasks (such as for graphics or general purpose computing). With the cost of implementation dropping closer and closer to zero, there is increasing room for higher quality algorithms.

  7. Graphic Communications--Preparatory Area. Book I--Typography and Modern Typesetting. Student Manual.

    ERIC Educational Resources Information Center

    Hertz, Andrew

    Designed to develop in the student skills in all of the preparatory functions of the graphic communications industry, this student guide covers copy preparation, art preparation, typography, camera, stripping, production management, and forms design, preparation, and analysis. In addition to the skills areas, material is included on the history of…

  8. User's Guide for Flight Simulation Data Visualization Workstation

    NASA Technical Reports Server (NTRS)

    Kaplan, Joseph A.; Chen, Ronnie; Kenney, Patrick S.; Koval, Christopher M.; Hutchinson, Brian K.

    1996-01-01

    Today's modern flight simulation research produces vast amounts of time sensitive data. The meaning of this data can be difficult to assess while in its raw format. Therefore, a method of breaking the data down and presenting it to the user in a graphical format is necessary. Simulation Graphics (SimGraph) is intended as a data visualization software package that will incorporate simulation data into a variety of animated graphical displays for easy interpretation by the simulation researcher. This document is intended as an end user's guide.

  9. Graphics-processing-unit-accelerated finite-difference time-domain simulation of the interaction between ultrashort laser pulses and metal nanoparticles

    NASA Astrophysics Data System (ADS)

    Nikolskiy, V. P.; Stegailov, V. V.

    2018-01-01

    Metal nanoparticles (NPs) serve as important tools for many modern technologies. However, the proper microscopic models of the interaction between ultrashort laser pulses and metal NPs are currently not very well developed in many cases. One part of the problem is the description of the warm dense matter that is formed in NPs after intense irradiation. Another part of the problem is the description of the electromagnetic waves around NPs. Description of wave propagation requires the solution of Maxwell’s equations and the finite-difference time-domain (FDTD) method is the classic approach for solving them. There are many commercial and free implementations of FDTD, including the open source software that supports graphics processing unit (GPU) acceleration. In this report we present the results on the FDTD calculations for different cases of the interaction between ultrashort laser pulses and metal nanoparticles. Following our previous results, we analyze the efficiency of the GPU acceleration of the FDTD algorithm.
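
    As a minimal sketch of how an FDTD update maps onto the GPU (1D, vacuum, normalized units with an assumed Courant factor of 0.5; production solvers such as the GPU-enabled open-source codes mentioned above work in 3D with material models), one thread updates one grid cell and the host alternates the two kernels each time step. The names and units are assumptions, not the authors' setup.

    ```cuda
    // Hypothetical sketch: one 1D FDTD (Yee) time step in normalized units, one thread per cell.
    __global__ void updateH(float* Hy, const float* Ez, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n - 1)
            Hy[i] += 0.5f * (Ez[i + 1] - Ez[i]);   // Courant factor 0.5 assumed
    }

    __global__ void updateE(float* Ez, const float* Hy, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i > 0 && i < n)
            Ez[i] += 0.5f * (Hy[i] - Hy[i - 1]);
    }
    ```

    Because each cell depends only on its immediate neighbours from the previous half-step, the update is embarrassingly parallel, which is why FDTD benefits so strongly from GPU acceleration.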

  10. Computer generated hologram from point cloud using graphics processor.

    PubMed

    Chen, Rick H-Y; Wilkinson, Timothy D

    2009-12-20

    Computer generated holography is an extremely demanding and complex task when it comes to providing realistic reconstructions with full parallax, occlusion, and shadowing. We present an algorithm designed for data-parallel computing on modern graphics processing units to alleviate the computational burden. We apply Gaussian interpolation to create a continuous surface representation from discrete input object points. The algorithm maintains a potential occluder list for each individual hologram plane sample to keep the number of visibility tests to a minimum. We experimented with two approximations that simplify and accelerate occlusion computation. It is observed that letting several neighboring hologram plane samples share visibility information on object points leads to significantly faster computation without causing noticeable artifacts in the reconstructed images. Computing a reduced sample set via nonuniform sampling is also found to be an effective acceleration technique.
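
    A much-simplified sketch of the point-cloud hologram computation (occlusion handling, Gaussian interpolation and the acceleration techniques described above are omitted): each thread evaluates one hologram-plane sample by summing real-valued spherical-wave contributions from every object point. The kernel name, geometry and sampling are assumptions made for the example.

    ```cuda
    // Hypothetical sketch: one thread per hologram-plane sample (plane at z = 0) accumulates
    // the real part of the spherical wave emitted by each object point.
    __global__ void cghKernel(const float3* pts, const float* amp, int nPts,
                              float* holo, int width, int height,
                              float pitch, float k /* 2*pi/lambda */)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= width || y >= height) return;

        float sx = (x - 0.5f * width)  * pitch;   // sample position on the hologram plane
        float sy = (y - 0.5f * height) * pitch;

        float field = 0.0f;
        for (int p = 0; p < nPts; ++p) {
            float dx = sx - pts[p].x, dy = sy - pts[p].y, dz = pts[p].z;
            float r = sqrtf(dx * dx + dy * dy + dz * dz);
            field += amp[p] * cosf(k * r) / r;    // real part of the spherical wave
        }
        holo[y * width + x] = field;
    }
    ```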

  11. A Framework for the Design of Effective Graphics for Scientific Visualization

    NASA Technical Reports Server (NTRS)

    Miceli, Kristina D.

    1992-01-01

    This proposal presents a visualization framework, based on a data model, that supports the production of effective graphics for scientific visualization. Visual representations are effective only if they augment comprehension of the increasing amounts of data being generated by modern computer simulations. These representations are created by taking into account the goals and capabilities of the scientist, the type of data to be displayed, and software and hardware considerations. This framework is embodied in an assistant-based visualization system to guide the scientist in the visualization process. This will improve the quality of the visualizations and decrease the time the scientist is required to spend in generating the visualizations. I intend to prove that such a framework will create a more productive environment for the analysis and interpretation of large, complex data sets.

  12. Web-GIS platform for monitoring and forecasting of regional climate and ecological changes

    NASA Astrophysics Data System (ADS)

    Gordov, E. P.; Krupchatnikov, V. N.; Lykosov, V. N.; Okladnikov, I.; Titov, A. G.; Shulgina, T. M.

    2012-12-01

    The growing volume of environmental data from sensors and model outputs makes the development of a software infrastructure, based on modern information and telecommunication technologies, for the support of integrated research in the Earth sciences an urgent and important task (Gordov et al., 2012; van der Wel, 2005). The inherent heterogeneity of datasets obtained from different sources and institutions not only hampers the interchange of data and analysis results but also complicates their intercomparison, reducing the reliability of the results. Modern geophysical data processing techniques, however, allow different technological solutions to be combined when organizing such information resources. It is now generally accepted that an information-computational infrastructure should rely on the combined use of web and GIS technologies for building applied information-computational web systems (Titov et al., 2009; Gordov et al., 2010; Gordov, Okladnikov and Titov, 2011). Using these approaches to develop internet-accessible thematic information-computational systems, and arranging data and knowledge interchange between them, is a promising way to create a distributed information-computational environment supporting multidisciplinary regional and global research in the Earth sciences, including the analysis of climate changes and their impact on the spatial-temporal distribution and state of vegetation. An experimental software and hardware platform is presented that supports the operation of a web-oriented production and research center for regional climate change investigations; it combines a modern Web 2.0 approach, GIS functionality, and capabilities for running climate and meteorological models, processing large geophysical datasets, visualization, joint software development by distributed research groups, scientific analysis, and the education of students and post-graduate students. The platform software (Shulgina et al., 2012; Okladnikov et al., 2012) includes dedicated modules for the numerical processing of regional and global modeling results for subsequent analysis and visualization. Data preprocessing, execution, and visualization of results for the WRF and "Planet Simulator" models integrated into the platform are also provided. All functions of the center are accessible through a web portal using a common graphical web browser, via an interactive graphical user interface that provides, in particular, visualization of processing results, selection of a geographical region of interest (pan and zoom), and manipulation of data layers (order, enable/disable, feature extraction). The platform provides users with capabilities for the analysis of heterogeneous geophysical data, including high-resolution data, and for discovering tendencies in climatic and ecosystem changes within different multidisciplinary research projects (Shulgina et al., 2011). Using it, even an unskilled user without specific knowledge can perform computational processing and visualization of large meteorological, climatological, and satellite monitoring datasets through a unified graphical web interface.

  13. Modern Teaching Methods in Physics with the Aid of Original Computer Codes and Graphical Representations

    ERIC Educational Resources Information Center

    Ivanov, Anisoara; Neacsu, Andrei

    2011-01-01

    This study describes the possibility and advantages of utilizing simple computer codes to complement the teaching techniques for high school physics. The authors have begun working on a collection of open source programs which allow students to compare the results and graphics from classroom exercises with the correct solutions and further more to…

  14. Real-time colouring and filtering with graphics shaders

    NASA Astrophysics Data System (ADS)

    Vohl, D.; Fluke, C. J.; Barnes, D. G.; Hassan, A. H.

    2017-11-01

    Despite the popularity of the Graphics Processing Unit (GPU) for general purpose computing, one should not forget about the practicality of the GPU for fast scientific visualization. As astronomers have increasing access to three-dimensional (3D) data from instruments and facilities like integral field units and radio interferometers, visualization techniques such as volume rendering offer means to quickly explore spectral cubes as a whole. As most 3D visualization techniques have been developed in fields of research like medical imaging and fluid dynamics, many transfer functions are not optimal for astronomical data. We demonstrate how transfer functions and graphics shaders can be exploited to provide new astronomy-specific explorative colouring methods. We present 12 shaders, including four novel transfer functions specifically designed to produce intuitive and informative 3D visualizations of spectral cube data. We compare their utility to classic colour mapping. The remaining shaders highlight how common computation like filtering, smoothing and line ratio algorithms can be integrated as part of the graphics pipeline. We discuss how this can be achieved by utilizing the parallelism of modern GPUs along with a shading language, letting astronomers apply these new techniques at interactive frame rates. All shaders investigated in this work are included in the open source software shwirl (Vohl 2017).
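
    The shaders themselves are written in a GPU shading language, but the core idea of a transfer function, mapping a voxel value to colour and opacity, can be sketched with a CUDA kernel as below. The linear cool-to-warm ramp and quadratic opacity law are illustrative assumptions, not the shaders shipped with shwirl.

      // Toy transfer function: map normalized voxel values to RGBA colours on the GPU.
      #include <cstdio>
      #include <cuda_runtime.h>

      __global__ void transfer_function(const float* voxels, uchar4* rgba, int n,
                                        float vmin, float vmax) {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i >= n) return;
          float t = (voxels[i] - vmin) / (vmax - vmin);     // normalize to [0,1]
          t = fminf(fmaxf(t, 0.f), 1.f);
          unsigned char r = (unsigned char)(255.f * t);            // warm channel grows with value
          unsigned char b = (unsigned char)(255.f * (1.f - t));    // cool channel fades
          unsigned char a = (unsigned char)(255.f * t * t);        // faint voxels stay transparent
          rgba[i] = make_uchar4(r, 0, b, a);
      }

      int main() {
          const int n = 8;
          float host[n] = {0.f, 0.1f, 0.2f, 0.4f, 0.6f, 0.8f, 0.9f, 1.f};
          float* d_v; uchar4* d_c;
          cudaMalloc(&d_v, n * sizeof(float));
          cudaMalloc(&d_c, n * sizeof(uchar4));
          cudaMemcpy(d_v, host, n * sizeof(float), cudaMemcpyHostToDevice);
          transfer_function<<<1, 32>>>(d_v, d_c, n, 0.f, 1.f);
          uchar4 out[n];
          cudaMemcpy(out, d_c, n * sizeof(uchar4), cudaMemcpyDeviceToHost);
          for (int i = 0; i < n; ++i)
              printf("value %.2f -> RGBA(%d, %d, %d, %d)\n", host[i], out[i].x, out[i].y, out[i].z, out[i].w);
          cudaFree(d_v); cudaFree(d_c);
          return 0;
      }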

  15. Data display and analysis with μView

    NASA Astrophysics Data System (ADS)

    Tucakov, Ivan; Cosman, Jacob; Brewer, Jess H.

    2006-03-01

    The μView utility is a new Java applet version of the old db program, extended to include direct access to MUD data files, from which it can construct a variety of spectrum types, including complex and RRF-transformed spectra. By using graphics features built into all modern Web browsers, it provides full graphical display capabilities consistently across all platforms. It has the full command-line functionality of db as well as a more intuitive graphical user interface and extensive documentation, and can read and write db, csv and XML format files.

  16. Modernization of the NASA scientific and technical information program

    NASA Technical Reports Server (NTRS)

    Cotter, Gladys A.; Hunter, Judy F.; Ostergaard, K.

    1993-01-01

    The NASA Scientific and Technical Information Program utilizes a technology infrastructure assembled in the mid 1960s to late 1970s to process and disseminate its information products. When this infrastructure was developed it placed NASA as a leader in processing STI. The retrieval engine for the STI database was the first of its kind and was used as the basis for developing commercial, other U.S., and foreign government agency retrieval systems. Due to the combination of changes in user requirements and the tremendous increase in technological capabilities readily available in the marketplace, this infrastructure is no longer the most cost-effective or efficient methodology available. Consequently, the NASA STI Program is pursuing a modernization effort that applies new technology to current processes to provide near-term benefits to the user. In conjunction with this activity, we are developing a long-term modernization strategy designed to transition the Program to a multimedia, global 'library without walls.' Critical pieces of the long-term strategy include streamlining access to sources of STI by using advances in computer networking and graphical user interfaces; creating and disseminating technical information in various electronic media including optical disks, video, and full text; and establishing a Technology Focus Group to maintain a current awareness of emerging technology and to plan for the future.

  17. Using Graphical Processing Units to Accelerate Orthorectification, Atmospheric Correction and Transformations for Big Data

    NASA Astrophysics Data System (ADS)

    O'Connor, A. S.; Justice, B.; Harris, A. T.

    2013-12-01

    Graphics Processing Units (GPUs) are high-performance multiple-core processors capable of very high computational speeds and large data throughput. Modern GPUs are inexpensive and widely available commercially. These are general-purpose parallel processors with support for a variety of programming interfaces, including industry standard languages such as C. GPU implementations of algorithms that are well suited for parallel processing can often achieve speedups of several orders of magnitude over optimized CPU codes. Significant improvements in speed for imagery orthorectification, atmospheric correction, target detection and image transformations like Independent Components Analysis (ICA) have been achieved using GPU-based implementations. Additional optimizations, when factored in with GPU processing capabilities, can provide a 50x - 100x reduction in the time required to process large imagery. Exelis Visual Information Solutions (VIS) has implemented a CUDA-based GPU processing framework for accelerating ENVI and IDL processes that can best take advantage of parallelization. Testing Exelis VIS has performed shows that orthorectification can take as long as two hours for a WorldView-1 35,000 x 35,000 pixel image. With GPU orthorectification, the same process takes three minutes. By speeding up image processing, imagery can successfully be used by first responders and by scientists making rapid discoveries with near-real-time data, and an operational component is provided to data centers needing to quickly process and disseminate data.
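
    Production orthorectification uses a sensor model (e.g. RPCs) and a DEM, but the GPU-friendly structure is one thread per output pixel, each inverse-mapping its ground coordinate into the raw image and resampling. The sketch below (not Exelis VIS code) substitutes an affine mapping and bilinear resampling as illustrative stand-ins.

      // Simplified GPU orthorectification: inverse mapping plus bilinear resampling, one thread per output pixel.
      #include <cstdio>
      #include <vector>
      #include <cuda_runtime.h>

      __global__ void inverse_map(const float* src, int sw, int sh,
                                  float* dst, int dw, int dh, const float* a /* 6-term affine */) {
          int x = blockIdx.x * blockDim.x + threadIdx.x;
          int y = blockIdx.y * blockDim.y + threadIdx.y;
          if (x >= dw || y >= dh) return;
          float u = a[0] + a[1] * x + a[2] * y;             // ground -> raw image column
          float v = a[3] + a[4] * x + a[5] * y;             // ground -> raw image row
          int u0 = (int)floorf(u), v0 = (int)floorf(v);
          float du = u - u0, dv = v - v0;
          float val = 0.f;
          if (u0 >= 0 && v0 >= 0 && u0 + 1 < sw && v0 + 1 < sh) {   // bilinear resampling
              val = (1 - du) * (1 - dv) * src[v0 * sw + u0]
                  +      du  * (1 - dv) * src[v0 * sw + u0 + 1]
                  + (1 - du) *      dv  * src[(v0 + 1) * sw + u0]
                  +      du  *      dv  * src[(v0 + 1) * sw + u0 + 1];
          }
          dst[y * dw + x] = val;
      }

      int main() {
          const int sw = 256, sh = 256, dw = 256, dh = 256;
          std::vector<float> img(sw * sh);
          for (int i = 0; i < sw * sh; ++i) img[i] = (float)(i % sw);   // simple ramp test image
          float affine[6] = {5.f, 0.98f, 0.02f, -3.f, -0.01f, 1.01f};   // illustrative mapping
          float *d_src, *d_dst, *d_a;
          cudaMalloc(&d_src, img.size() * sizeof(float));
          cudaMalloc(&d_dst, dw * dh * sizeof(float));
          cudaMalloc(&d_a, 6 * sizeof(float));
          cudaMemcpy(d_src, img.data(), img.size() * sizeof(float), cudaMemcpyHostToDevice);
          cudaMemcpy(d_a, affine, 6 * sizeof(float), cudaMemcpyHostToDevice);
          dim3 block(16, 16), grid((dw + 15) / 16, (dh + 15) / 16);
          inverse_map<<<grid, block>>>(d_src, sw, sh, d_dst, dw, dh, d_a);
          float probe;
          cudaMemcpy(&probe, d_dst + 128 * dw + 128, sizeof(float), cudaMemcpyDeviceToHost);
          printf("orthorectified centre pixel: %f\n", probe);
          cudaFree(d_src); cudaFree(d_dst); cudaFree(d_a);
          return 0;
      }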

  18. Application of web-GIS approach for climate change study

    NASA Astrophysics Data System (ADS)

    Okladnikov, Igor; Gordov, Evgeny; Titov, Alexander; Bogomolov, Vasily; Martynova, Yuliya; Shulgina, Tamara

    2013-04-01

    Georeferenced datasets are now actively used in numerous applications, including modeling, interpretation and forecasting of climatic and ecosystem changes at various spatial and temporal scales. Because of the inherent heterogeneity of environmental datasets, as well as their huge size (up to tens of terabytes for a single dataset), current studies of climate and environmental change require special software support. A dedicated web-GIS information-computational system for the analysis of georeferenced climatological and meteorological data has been created. It is based on OGC standards and employs many modern solutions, such as an object-oriented programming model, modular composition, and JavaScript libraries built on the GeoExt library, the ExtJS framework and OpenLayers software. The main advantage of the system is the possibility to perform mathematical and statistical data analysis and graphical visualization of results with GIS functionality, and to prepare binary output files, with nothing more than a modern graphical web browser installed on a common desktop computer connected to the Internet. Several geophysical datasets are available for processing by the system, including two editions of the NCEP/NCAR Reanalysis, the JMA/CRIEPI JRA-25 Reanalysis, the ECMWF ERA-40 Reanalysis, the ECMWF ERA Interim Reanalysis, the MRI/JMA APHRODITE's Water Resources Project Reanalysis, the DWD Global Precipitation Climatology Centre's data, the GMAO Modern Era-Retrospective analysis for Research and Applications, meteorological observational data for the territory of the former USSR for the 20th century, results of modeling by global and regional climatological models, and others; this list continues to grow. Functionality to run the WRF and "Planet Simulator" models has also been implemented in the system. Owing to many preset parameters and the limited temporal and spatial ranges set in the system, these models have low computational power requirements and can be used in educational workflows for a better understanding of basic climatological and meteorological processes. The web-GIS information-computational system for geophysical data analysis provides specialists involved in multidisciplinary research projects with reliable and practical instruments for the complex analysis of climate and ecosystem changes on global and regional scales. Using it, even an unskilled user without specific knowledge can perform computational processing and visualization of large meteorological, climatological and satellite monitoring datasets through a unified web interface in a common graphical web browser. This work is partially supported by the Ministry of Education and Science of the Russian Federation (contract #8345), SB RAS project VIII.80.2.1, RFBR grant #11-05-01190a, and integrated project SB RAS #131.

  19. A Novel Technique for Running the NASA Legacy Code LAPIN Synchronously With Simulations Developed Using Simulink

    NASA Technical Reports Server (NTRS)

    Vrnak, Daniel R.; Stueber, Thomas J.; Le, Dzu K.

    2012-01-01

    This report presents a method for running a dynamic legacy inlet simulation in concert with another dynamic simulation that uses a graphical interface. The legacy code, NASA's LArge Perturbation INlet (LAPIN) model, was coded in the FORTRAN 77 (The Portland Group, Lake Oswego, OR) programming language to run in a command shell, similar to other applications that used the Microsoft Disk Operating System (MS-DOS) (Microsoft Corporation, Redmond, WA). Simulink (MathWorks, Natick, MA) is a dynamic simulation environment that runs on a modern graphical operating system. The product of this work has both simulations, LAPIN and Simulink, running synchronously on the same computer with periodic data exchanges. Implementing the method described in this paper avoided extensive changes to the legacy code and preserved its basic operating procedure. This paper presents a novel method that promotes inter-task data communication between the synchronously running processes.

  20. A Survey Of Techniques for Managing and Leveraging Caches in GPUs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mittal, Sparsh

    2014-09-01

    Initially introduced as special-purpose accelerators for graphics applications, graphics processing units (GPUs) have now emerged as general purpose computing platforms for a wide range of applications. To address the requirements of these applications, modern GPUs include sizable hardware-managed caches. However, several factors, such as the unique architecture of GPUs, the rise of CPU–GPU heterogeneous computing, etc., demand effective management of caches to achieve high performance and energy efficiency. Recently, several techniques have been proposed for this purpose. In this paper, we survey several architectural and system-level techniques proposed for managing and leveraging GPU caches. We also discuss the importance and challenges of cache management in GPUs. The aim of this paper is to provide readers with insights into cache management techniques for GPUs and motivate them to propose even better techniques for leveraging the full potential of caches in the GPUs of tomorrow.

  1. Integrating macromolecular X-ray diffraction data with the graphical user interface iMOSFLM

    PubMed Central

    Powell, Harold R; Battye, T Geoff G; Kontogiannis, Luke; Johnson, Owen; Leslie, Andrew GW

    2017-01-01

    X-ray crystallography is the overwhelmingly dominant source of structural information for biological macromolecules, providing fundamental insights into biological function. Collection of X-ray diffraction data underlies the technique, and robust and user-friendly software to process the diffraction images makes the technique accessible to a wider range of scientists. iMosflm/MOSFLM (www.mrc-lmb.cam.ac.uk/harry/imosflm) is a software package designed to achieve this goal. The graphical user interface (GUI) version of MOSFLM (called iMosflm) is designed to guide inexperienced users through the steps of data integration, while retaining powerful features for more experienced users. Images from almost all commercially available X-ray detectors can be handled. Although the program only utilizes two-dimensional profile fitting, it can readily integrate data collected in “fine phi-slicing” mode (where the rotation angle per image is less than the crystal mosaic spread by a factor of at least 2) that is commonly employed with modern very fast readout detectors. The graphical user interface provides real-time feedback on the success of the indexing step and the progress of data processing. This feedback includes the ability to monitor detector and crystal parameter refinement and to display the average spot shape in different regions of the detector. Data scaling and merging tasks can be initiated directly from the interface. Using this protocol, a dataset of 360 images with ~2000 reflections per image can be processed in approximately four minutes. PMID:28569763

  2. Smartphones as image processing systems for prosthetic vision.

    PubMed

    Zapf, Marc P; Matteucci, Paul B; Lovell, Nigel H; Suaning, Gregg J

    2013-01-01

    The feasibility of implants for prosthetic vision has been demonstrated by research and commercial organizations. In most devices, an essential forerunner to the internal stimulation circuit is an external electronics solution for capturing, processing and relaying image information as well as extracting useful features from the scene surrounding the patient. The capabilities and multitude of image processing algorithms that can be performed by the device in real time play a major part in the final quality of the prosthetic vision. It is therefore desirable to use powerful hardware while avoiding bulky, cumbersome solutions. Recent publications have reported on portable single-board computers fast enough for computationally intensive image processing. Following the rapid evolution of commercial, ultra-portable ARM (Advanced RISC Machine) mobile devices, the authors investigated the feasibility of modern smartphones running complex face detection as external processing devices for vision implants. The role of dedicated graphics processors in speeding up computation was evaluated while performing a demanding noise reduction algorithm (image denoising). The time required for face detection was found to decrease by 95% from 2.5-year-old to recent devices. In denoising, graphics acceleration played a major role, speeding up denoising by a factor of 18. These results demonstrate that the technology has matured sufficiently to be considered a valid external electronics platform for visual prosthetic research.

  3. The End of the Rainbow? Color Schemes for Improved Data Graphics

    NASA Astrophysics Data System (ADS)

    Light, Adam; Bartlein, Patrick J.

    2004-10-01

    Modern computer displays and printers enable the widespread use of color in scientific communication, but the expertise for designing effective graphics has not kept pace with the technology for producing them. Historically, even the most prestigious publications have tolerated high defect rates in figures and illustrations, and technological advances that make creating and reproducing graphics easier do not appear to have decreased the frequency of errors. Flawed graphics consequently beget more flawed graphics as authors emulate published examples. Color has the potential to enhance communication, but design mistakes can result in color figures that are less effective than gray scale displays of the same data. Empirical research on human subjects can build a fundamental understanding of visual perception and scientific methods can be used to evaluate existing designs, but creating effective data graphics is a design task and not fundamentally a scientific pursuit. Like writing well, creating good data graphics requires a combination of formal knowledge and artistic sensibility tempered by experience: a combination of "substance, statistics, and design".

  4. Global magnetohydrodynamic simulations on multiple GPUs

    NASA Astrophysics Data System (ADS)

    Wong, Un-Hong; Wong, Hon-Cheng; Ma, Yonghui

    2014-01-01

    Global magnetohydrodynamic (MHD) models play the major role in investigating the solar wind-magnetosphere interaction. However, the huge computation requirement in global MHD simulations is also the main problem that needs to be solved. With the recent development of modern graphics processing units (GPUs) and the Compute Unified Device Architecture (CUDA), it is possible to perform global MHD simulations in a more efficient manner. In this paper, we present a global magnetohydrodynamic (MHD) simulator on multiple GPUs using CUDA 4.0 with GPUDirect 2.0. Our implementation is based on the modified leapfrog scheme, which is a combination of the leapfrog scheme and the two-step Lax-Wendroff scheme. GPUDirect 2.0 is used in our implementation to drive multiple GPUs. All data transferring and kernel processing are managed with CUDA 4.0 API instead of using MPI or OpenMP. Performance measurements are made on a multi-GPU system with eight NVIDIA Tesla M2050 (Fermi architecture) graphics cards. These measurements show that our multi-GPU implementation achieves a peak performance of 97.36 GFLOPS in double precision.
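
    As a much smaller, single-GPU illustration of the kernel structure behind such schemes (not the authors' multi-GPU MHD code), the sketch below advances a 1-D advection equation with the two-step Lax-Wendroff method; the grid size, Courant number and initial condition are illustrative assumptions.

      // Two-step Lax-Wendroff update for 1-D advection, one CUDA thread per grid cell.
      #include <cstdio>
      #include <cmath>
      #include <algorithm>
      #include <vector>
      #include <cuda_runtime.h>

      __global__ void half_step(const float* u, float* uh, int n, float c) {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i >= n) return;
          int ip = (i + 1) % n;                                       // periodic boundary
          uh[i] = 0.5f * (u[i] + u[ip]) - 0.5f * c * (u[ip] - u[i]);  // value at i+1/2, t+1/2
      }

      __global__ void full_step(float* u, const float* uh, int n, float c) {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i >= n) return;
          int im = (i - 1 + n) % n;
          u[i] -= c * (uh[i] - uh[im]);                               // complete the Lax-Wendroff update
      }

      int main() {
          const int n = 1024, steps = 1000;
          const float c = 0.5f;                                       // Courant number
          std::vector<float> host(n);
          for (int i = 0; i < n; ++i) host[i] = expf(-0.001f * (i - n / 2) * (i - n / 2));
          float *u, *uh;
          cudaMalloc(&u, n * sizeof(float));
          cudaMalloc(&uh, n * sizeof(float));
          cudaMemcpy(u, host.data(), n * sizeof(float), cudaMemcpyHostToDevice);
          dim3 block(256), grid((n + 255) / 256);
          for (int t = 0; t < steps; ++t) {
              half_step<<<grid, block>>>(u, uh, n, c);
              full_step<<<grid, block>>>(u, uh, n, c);
          }
          cudaMemcpy(host.data(), u, n * sizeof(float), cudaMemcpyDeviceToHost);
          printf("peak after %d steps: %f\n", steps, *std::max_element(host.begin(), host.end()));
          cudaFree(u); cudaFree(uh);
          return 0;
      }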

  5. Implementing WebGL and HTML5 in Macromolecular Visualization and Modern Computer-Aided Drug Design.

    PubMed

    Yuan, Shuguang; Chan, H C Stephen; Hu, Zhenquan

    2017-06-01

    Web browsers have long been recognized as potential platforms for remote macromolecule visualization. However, the difficulty of transferring large-scale data to clients and the lack of native support for hardware-accelerated applications in the local browser have undermined the feasibility of such utilities. With the introduction of WebGL and HTML5 technologies in recent years, it is now possible to exploit the power of a graphics processing unit (GPU) from a browser without any third-party plugin. Many new tools have been developed for biological molecule visualization and modern drug discovery. In contrast to traditional offline tools, WebGL- and HTML5-based tools offer real-time computing, interactive data analysis and cross-platform operation, facilitating biological research in a more efficient and user-friendly way. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Neuroimaging Study Designs, Computational Analyses and Data Provenance Using the LONI Pipeline

    PubMed Central

    Dinov, Ivo; Lozev, Kamen; Petrosyan, Petros; Liu, Zhizhong; Eggert, Paul; Pierce, Jonathan; Zamanyan, Alen; Chakrapani, Shruthi; Van Horn, John; Parker, D. Stott; Magsipoc, Rico; Leung, Kelvin; Gutman, Boris; Woods, Roger; Toga, Arthur

    2010-01-01

    Modern computational neuroscience employs diverse software tools and multidisciplinary expertise to analyze heterogeneous brain data. The classical problems of gathering meaningful data, fitting specific models, and discovering appropriate analysis and visualization tools give way to a new class of computational challenges—management of large and incongruous data, integration and interoperability of computational resources, and data provenance. We designed, implemented and validated a new paradigm for addressing these challenges in the neuroimaging field. Our solution is based on the LONI Pipeline environment [3], [4], a graphical workflow environment for constructing and executing complex data processing protocols. We developed study-design, database and visual language programming functionalities within the LONI Pipeline that enable the construction of complete, elaborate and robust graphical workflows for analyzing neuroimaging and other data. These workflows facilitate open sharing and communication of data and metadata, concrete processing protocols, result validation, and study replication among different investigators and research groups. The LONI Pipeline features include distributed grid-enabled infrastructure, virtualized execution environment, efficient integration, data provenance, validation and distribution of new computational tools, automated data format conversion, and an intuitive graphical user interface. We demonstrate the new LONI Pipeline features using large scale neuroimaging studies based on data from the International Consortium for Brain Mapping [5] and the Alzheimer's Disease Neuroimaging Initiative [6]. User guides, forums, instructions and downloads of the LONI Pipeline environment are available at http://pipeline.loni.ucla.edu. PMID:20927408

  7. Porting and redesign of Geotool software system to Qt

    NASA Astrophysics Data System (ADS)

    Miljanovic Tamarit, V.; Carneiro, L.; Henson, I. H.; Tomuta, E.

    2016-12-01

    Geotool is a software system that allows a user to interactively display and process seismoacoustic data from International Monitoring System (IMS) stations. Geotool can be used to perform a number of analysis and review tasks, including data I/O, waveform filtering, quality control, component rotation, amplitude and arrival measurement and review, array beamforming, correlation, Fourier analysis, FK analysis, event review and location, particle motion visualization, polarization analysis, instrument response convolution/deconvolution, real-time display, signal-to-noise measurement, spectrogram computation, and travel time model display. The Geotool program was originally written in C using the X11/Xt/Motif libraries for graphics. It was later ported to C++. Now the program is being ported to the Qt graphics system to be more compatible with the other software in the International Data Centre (IDC). Along with this port, a redesign of the architecture is underway to achieve a separation between user interface, control, and data model elements, in line with design patterns such as Model-View-Controller. Qt is a cross-platform application framework that will allow Geotool to run easily on Linux, Mac, and Windows. The Qt environment includes modern libraries and user interfaces for standard utilities such as file and database access, printing, and inter-process communications. The Qt Widgets for Technical Applications library (Qwt) provides tools for displaying standard data analysis graphics.

  8. Graphic Poetry: How to Help Students Get the Most out of Pictures

    ERIC Educational Resources Information Center

    Chiang, River Ya-ling

    2013-01-01

    This paper attempts to give an account of some innovative work in paintings and modern poetry and to show how modern poets, such as Jane Flanders and Anne Sexton, the two American poets in particular, express and develop radically new conventions for their respective arts. Also elaborated are how such changes in artistic techniques are related to…

  9. The Helicopter Antenna Radiation Prediction Code (HARP)

    NASA Technical Reports Server (NTRS)

    Klevenow, F. T.; Lynch, B. G.; Newman, E. H.; Rojas, R. G.; Scheick, J. T.; Shamansky, H. T.; Sze, K. Y.

    1990-01-01

    The first nine months effort in the development of a user oriented computer code, referred to as the HARP code, for analyzing the radiation from helicopter antennas is described. The HARP code uses modern computer graphics to aid in the description and display of the helicopter geometry. At low frequencies the helicopter is modeled by polygonal plates, and the method of moments is used to compute the desired patterns. At high frequencies the helicopter is modeled by a composite ellipsoid and flat plates, and computations are made using the geometrical theory of diffraction. The HARP code will provide a user friendly interface, employing modern computer graphics, to aid the user to describe the helicopter geometry, select the method of computation, construct the desired high or low frequency model, and display the results.

  10. High-performance image reconstruction in fluorescence tomography on desktop computers and graphics hardware.

    PubMed

    Freiberger, Manuel; Egger, Herbert; Liebmann, Manfred; Scharfetter, Hermann

    2011-11-01

    Image reconstruction in fluorescence optical tomography is a three-dimensional nonlinear ill-posed problem governed by a system of partial differential equations. In this paper we demonstrate that a combination of state-of-the-art numerical algorithms and a careful, hardware-optimized implementation makes it possible to solve this large-scale inverse problem in a few seconds on standard desktop PCs with modern graphics hardware. In particular, we present methods to solve not only the forward but also the non-linear inverse problem by massively parallel programming on graphics processors. A comparison of optimized CPU and GPU implementations shows that the reconstruction can be accelerated by factors of about 15 through the use of the graphics hardware without compromising the accuracy of the reconstructed images.
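
    A central GPU building block in PDE-based forward and inverse solvers of this kind is the sparse matrix-vector product inside an iterative method. The CSR kernel below is a generic illustration of that building block under assumed data, not the authors' implementation.

      // Generic CSR sparse matrix-vector product, one CUDA thread per matrix row.
      #include <cstdio>
      #include <vector>
      #include <cuda_runtime.h>

      __global__ void spmv_csr(const int* rowptr, const int* col, const float* val,
                               const float* x, float* y, int nrows) {
          int r = blockIdx.x * blockDim.x + threadIdx.x;
          if (r >= nrows) return;
          float sum = 0.f;
          for (int j = rowptr[r]; j < rowptr[r + 1]; ++j)   // accumulate the non-zeros of row r
              sum += val[j] * x[col[j]];
          y[r] = sum;
      }

      int main() {
          // tiny 3x3 example: a discrete-Laplacian-like matrix in CSR form
          std::vector<int> rowptr = {0, 2, 5, 7};
          std::vector<int> col    = {0, 1, 0, 1, 2, 1, 2};
          std::vector<float> val  = {2, -1, -1, 2, -1, -1, 2};
          std::vector<float> x = {1.f, 2.f, 3.f}, y(3);
          int *d_rp, *d_c; float *d_v, *d_x, *d_y;
          cudaMalloc(&d_rp, rowptr.size() * sizeof(int));
          cudaMalloc(&d_c, col.size() * sizeof(int));
          cudaMalloc(&d_v, val.size() * sizeof(float));
          cudaMalloc(&d_x, 3 * sizeof(float));
          cudaMalloc(&d_y, 3 * sizeof(float));
          cudaMemcpy(d_rp, rowptr.data(), rowptr.size() * sizeof(int), cudaMemcpyHostToDevice);
          cudaMemcpy(d_c, col.data(), col.size() * sizeof(int), cudaMemcpyHostToDevice);
          cudaMemcpy(d_v, val.data(), val.size() * sizeof(float), cudaMemcpyHostToDevice);
          cudaMemcpy(d_x, x.data(), 3 * sizeof(float), cudaMemcpyHostToDevice);
          spmv_csr<<<1, 32>>>(d_rp, d_c, d_v, d_x, d_y, 3);
          cudaMemcpy(y.data(), d_y, 3 * sizeof(float), cudaMemcpyDeviceToHost);
          printf("y = (%g, %g, %g)\n", y[0], y[1], y[2]);
          cudaFree(d_rp); cudaFree(d_c); cudaFree(d_v); cudaFree(d_x); cudaFree(d_y);
          return 0;
      }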

  11. Distributed GPU Computing in GIScience

    NASA Astrophysics Data System (ADS)

    Jiang, Y.; Yang, C.; Huang, Q.; Li, J.; Sun, M.

    2013-12-01

    Geoscientists strive to discover the principles and patterns hidden inside ever-growing Big Data in order to make scientific discoveries. To achieve this objective, more capable computing resources are required to process, analyze and visualize Big Data (Ferreira et al., 2003; Li et al., 2013). Current CPU-based computing techniques cannot promptly meet the computing challenges posed by the increasing amount of datasets from different domains, such as social media, earth observation and environmental sensing (Li et al., 2013). Meanwhile, CPU-based computing resources structured as clusters or supercomputers are costly. In the past several years, as GPU-based technology has matured in both capability and performance, GPU-based computing has emerged as a new computing paradigm. Compared to traditional microprocessors, the modern GPU is a compelling alternative that offers outstanding parallel processing capability with cost-effectiveness and efficiency (Owens et al., 2008), although it was initially designed for graphical rendering in the visualization pipeline. This presentation reports a distributed GPU computing framework for integrating GPU-based computing within a distributed environment. Within this framework, 1) for each single computer, both GPU-based and CPU-based computing resources can be fully utilized to improve the performance of visualizing and processing Big Data; 2) within a network environment, a variety of computers can be used to build a virtual supercomputer to support CPU-based and GPU-based computing in a distributed computing environment; and 3) GPUs, as graphics-targeted devices, are used to greatly improve rendering efficiency in distributed geo-visualization, especially for 3D/4D visualization. Key words: Geovisualization, GIScience, Spatiotemporal Studies. References: 1. Ferreira de Oliveira, M. C., & Levkowitz, H. (2003). From visual data exploration to visual data mining: A survey. IEEE Transactions on Visualization and Computer Graphics, 9(3), 378-394. 2. Li, J., Jiang, Y., Yang, C., Huang, Q., & Rice, M. (2013). Visualizing 3D/4D Environmental Data Using Many-core Graphics Processing Units (GPUs) and Multi-core Central Processing Units (CPUs). Computers & Geosciences, 59(9), 78-89. 3. Owens, J. D., Houston, M., Luebke, D., Green, S., Stone, J. E., & Phillips, J. C. (2008). GPU computing. Proceedings of the IEEE, 96(5), 879-899.

  12. Performance Evaluation of UHF RFID Technologies for Real-Time Bus Recognition in the Taipei Bus Station

    PubMed Central

    Own, Chung-Ming; Lee, Da-Sheng; Wang, Ti-Ho; Wang, De-Jun; Ting, Yu-Lun

    2013-01-01

    Transport stations such as airports, ports, and railways have adopted blocked-type pathway management to process and control travel systems in a one-directional manner. However, this excludes highway transportation where large buses have great variability and mobility; thus, an instant influx of numerous buses increases risks and complicates station management. Focusing on Taipei Bus Station, this study employed RFID technology to develop a system platform integrated with modern information technology that has numerous characteristics. This modern information technology comprised the following systems: ultra-high frequency (UHF) radio-frequency identification (RFID), ultrasound and license number identification, and backstage graphic controls. In conclusion, the system enabled management, bus companies, and passengers to experience the national bus station's new generation technology, which provides diverse information and synchronization functions. Furthermore, this technology reached a new milestone in the energy-saving and efficiency-increasing performance of Taiwan's buses. PMID:23778192

  13. Modernization and multiscale databases at the U.S. geological survey

    USGS Publications Warehouse

    Morrison, J.L.

    1992-01-01

    The U.S. Geological Survey (USGS) has begun a digital cartographic modernization program. Keys to that program are the creation of a multiscale database, a feature-based file structure that is derived from a spatial data model, and a series of "templates" or rules that specify the relationships between instances of entities in reality and features in the database. The database will initially hold data collected from the USGS standard map products at scales of 1:24,000, 1:100,000, and 1:2,000,000. The spatial data model is called the digital line graph-enhanced model, and the comprehensive rule set consists of collection rules, product generation rules, and conflict resolution rules. This modernization program will affect the USGS mapmaking process because both digital and graphic products will be created from the database. In addition, non-USGS map users will have more flexibility in uses of the databases. These remarks are those of the session discussant made in response to the six papers and the keynote address given in the session. ?? 1992.

  14. Modernization (Selected Articles),

    DTIC Science & Technology

    1986-09-18

    newly developed science such as control theory, artificial intelligence, model identification, computer and microelectronics technology, graphic...five "top guns" from around the country specializing in intelligence, mechanics, software and hardware as our technical advisors. In addition

  15. Romanian traditional motif - element of modernity in clothing

    NASA Astrophysics Data System (ADS)

    Doble, L.; Stan, O.; Suteu, M. D.; Albu, A.; Bohm, G.; Tsatsarou-Michalaki, A.; Gialinou, E.

    2017-10-01

    This paper presents the phases of improving, from an aesthetic point of view, a clothing item, namely a straight-cut women's jacket, using pattern-design software, computerised graphics and various modern textile technologies, including industrial embroidery, digital printing and sublimation. In the first phase, documentation was carried out at the Ethnographic Museum of Transylvania in Cluj-Napoca, where several traditional motifs specific to the Transylvanian ethnographic region were selected, reinterpreted and stylized while preserving the symbolism and color range specific to the area. For the styling phase, the CorelDraw vector graphics program was used, which allows the shape, size and color of the drawings to be changed without affecting the identity of the pattern. In the pattern design phase, Gemini CAD software was used, and Optitex software was used for modeling and model development. The decoration of the model was produced using embroidery machine software, reproducing the stylized motif exactly. To obtain a significantly improved aesthetic look and added artistic value, the motif chosen for the jacket was realized using a combination of modern textile technologies. This allowed a distinctive texture to be created on the surface of the designed product, demonstrating that traditional patterns can be reinterpreted in modern clothing.

  16. LOSITAN: a workbench to detect molecular adaptation based on a Fst-outlier method.

    PubMed

    Antao, Tiago; Lopes, Ana; Lopes, Ricardo J; Beja-Pereira, Albano; Luikart, Gordon

    2008-07-28

    Testing for selection is becoming one of the most important steps in the analysis of multilocus population genetics data sets. Existing applications are difficult to use, leaving many non-trivial, error-prone tasks to the user. Here we present LOSITAN, a selection detection workbench based on a well evaluated Fst-outlier detection method. LOSITAN greatly facilitates correct approximation of model parameters (e.g., genome-wide average, neutral Fst), and provides data import and export functions, iterative contour smoothing and generation of graphics in an easy-to-use graphical user interface. LOSITAN is able to use modern multi-core processor architectures by locally parallelizing fdist, reducing computation time by half on current dual-core machines, with almost linear performance gains on machines with more cores. LOSITAN makes selection detection feasible for a much wider range of users, even for large population genomic datasets, by providing both an easy-to-use interface and the essential functionality to complete the whole selection detection process.

  17. Fast ray-tracing of human eye optics on Graphics Processing Units.

    PubMed

    Wei, Qi; Patkar, Saket; Pai, Dinesh K

    2014-05-01

    We present a new technique for simulating retinal image formation by tracing a large number of rays from objects in three dimensions as they pass through the optic apparatus of the eye. Simulating human optics is useful for understanding basic questions of vision science and for studying vision defects and their corrections. Because of the complexity of computing such simulations accurately, most previous efforts used simplified analytical models of the normal eye. This makes them less effective in modeling vision disorders associated with abnormal shapes of the ocular structures, which are hard to represent precisely with analytical surfaces. We have developed a computer simulator that can simulate ocular structures of arbitrary shapes, for instance represented by polygon meshes. Topographic and geometric measurements of the cornea, lens, and retina from keratometer or medical imaging data can be integrated for individualized examination. We utilize parallel processing on modern Graphics Processing Units (GPUs) to efficiently compute retinal images by tracing millions of rays. A stable retinal image can be generated within minutes. We simulated depth-of-field, accommodation, chromatic aberrations, as well as astigmatism and its correction. We also show application of the technique to patient-specific vision correction by incorporating geometric models of the orbit reconstructed from clinical medical images. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
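
    The per-ray work in such a simulator amounts to intersecting each ray with an ocular surface and refracting it by Snell's law. The sketch below (not the authors' code) refracts a bundle of parallel rays at a single spherical "cornea" with one CUDA thread per ray; the radius, refractive indices and ray bundle are illustrative assumptions.

      // Refract parallel rays at one spherical surface using Snell's law, one thread per ray.
      #include <cstdio>
      #include <cuda_runtime.h>

      __device__ float3 normalize3(float3 v) {
          float l = sqrtf(v.x * v.x + v.y * v.y + v.z * v.z);
          return make_float3(v.x / l, v.y / l, v.z / l);
      }

      __global__ void refract_at_sphere(const float3* org, float3* dir, int n,
                                        float3 centre, float radius, float n1, float n2) {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i >= n) return;
          float3 o = org[i], d = dir[i];
          // ray-sphere intersection: |o + t d - centre|^2 = radius^2
          float3 oc = make_float3(o.x - centre.x, o.y - centre.y, o.z - centre.z);
          float b = oc.x * d.x + oc.y * d.y + oc.z * d.z;
          float cterm = oc.x * oc.x + oc.y * oc.y + oc.z * oc.z - radius * radius;
          float disc = b * b - cterm;
          if (disc < 0.f) return;                            // ray misses the surface
          float t = -b - sqrtf(disc);
          float3 p = make_float3(o.x + t * d.x, o.y + t * d.y, o.z + t * d.z);
          float3 nrm = normalize3(make_float3(p.x - centre.x, p.y - centre.y, p.z - centre.z));
          // Snell's law in vector form
          float eta = n1 / n2;
          float cosi = -(nrm.x * d.x + nrm.y * d.y + nrm.z * d.z);
          float k = 1.f - eta * eta * (1.f - cosi * cosi);
          if (k < 0.f) return;                               // total internal reflection: leave ray unchanged
          float s = eta * cosi - sqrtf(k);
          dir[i] = normalize3(make_float3(eta * d.x + s * nrm.x,
                                          eta * d.y + s * nrm.y,
                                          eta * d.z + s * nrm.z));
      }

      int main() {
          const int n = 4;
          float3 h_org[n], h_dir[n];
          for (int i = 0; i < n; ++i) {                      // parallel rays along +z, offset in x
              h_org[i] = make_float3(0.5f * i, 0.f, -20.f);
              h_dir[i] = make_float3(0.f, 0.f, 1.f);
          }
          float3 *d_org, *d_dir;
          cudaMalloc(&d_org, n * sizeof(float3));
          cudaMalloc(&d_dir, n * sizeof(float3));
          cudaMemcpy(d_org, h_org, n * sizeof(float3), cudaMemcpyHostToDevice);
          cudaMemcpy(d_dir, h_dir, n * sizeof(float3), cudaMemcpyHostToDevice);
          // "cornea" modelled as a sphere of radius 7.8 mm with its apex at z = 0
          refract_at_sphere<<<1, 32>>>(d_org, d_dir, n, make_float3(0.f, 0.f, 7.8f), 7.8f, 1.000f, 1.376f);
          cudaMemcpy(h_dir, d_dir, n * sizeof(float3), cudaMemcpyDeviceToHost);
          for (int i = 0; i < n; ++i)
              printf("ray %d refracted direction: (%f, %f, %f)\n", i, h_dir[i].x, h_dir[i].y, h_dir[i].z);
          cudaFree(d_org); cudaFree(d_dir);
          return 0;
      }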

  18. GRAPEVINE: Grids about anything by Poisson's equation in a visually interactive networking environment

    NASA Technical Reports Server (NTRS)

    Sorenson, Reese L.; Mccann, Karen

    1992-01-01

    A proven 3-D multiple-block elliptic grid generator, designed to run in 'batch mode' on a supercomputer, is improved by the creation of a modern graphical user interface (GUI) running on a workstation. The two parts are connected in real time by a network. The resultant system offers a significant speedup in the process of preparing and formatting input data and the ability to watch the grid solution converge by replotting the grid at each iteration step. The result is a reduction in user time and CPU time required to generate the grid and an enhanced understanding of the elliptic solution process. This software system, called GRAPEVINE, is described, and certain observations are made concerning the creation of such software.

  19. Analysis of impact of general-purpose graphics processor units in supersonic flow modeling

    NASA Astrophysics Data System (ADS)

    Emelyanov, V. N.; Karpenko, A. G.; Kozelkov, A. S.; Teterina, I. V.; Volkov, K. N.; Yalozo, A. V.

    2017-06-01

    Computational methods are widely used in the prediction of complex flowfields associated with off-normal situations in aerospace engineering. Modern graphics processing units (GPUs) provide architectures and new programming models that make it possible to harness their large processing power and to design computational fluid dynamics (CFD) simulations at both high performance and low cost. The possibilities of using GPUs for the simulation of external and internal flows on unstructured meshes are discussed. The finite volume method is applied to solve the three-dimensional unsteady compressible Euler and Navier-Stokes equations on unstructured meshes with high-resolution numerical schemes. CUDA technology is used for the programming implementation of the parallel computational algorithms. Solutions of some benchmark test cases on GPUs are reported, and the computed results are compared with experimental and computational data. Approaches to optimization of the CFD code related to the use of different types of memory are considered. The speedup of the solution on GPUs with respect to the solution on a central processing unit (CPU) is compared. Performance measurements show that the numerical schemes developed achieve a speedup of 20-50 on GPU hardware compared to the CPU reference implementation. The results obtained provide a promising perspective for designing a GPU-based software framework for applications in CFD.

  20. Parallel algorithm for solving Kepler’s equation on Graphics Processing Units: Application to analysis of Doppler exoplanet searches

    NASA Astrophysics Data System (ADS)

    Ford, Eric B.

    2009-05-01

    We present the results of a highly parallel Kepler equation solver using the Graphics Processing Unit (GPU) on a commercial nVidia GeForce 280GTX and the "Compute Unified Device Architecture" (CUDA) programming environment. We apply this to evaluate a goodness-of-fit statistic (e.g., χ2) for Doppler observations of stars potentially harboring multiple planetary companions (assuming negligible planet-planet interactions). Given the high dimensionality of the model parameter space (at least five dimensions per planet), a global search is extremely computationally demanding. We expect that the underlying Kepler solver and model evaluator will be combined with a wide variety of more sophisticated algorithms to provide efficient global search, parameter estimation, model comparison, and adaptive experimental design for radial velocity and/or astrometric planet searches. We tested multiple implementations using single precision, double precision, pairs of single precision, and mixed precision arithmetic. We find that the vast majority of computations can be performed using single precision arithmetic, with selective use of compensated summation for increased precision. However, standard single precision is not adequate for calculating the mean anomaly from the time of observation and orbital period when evaluating the goodness-of-fit for real planetary systems and observational data sets. Using all double precision, our GPU code outperforms a similar code using a modern CPU by a factor of over 60. Using mixed precision, our GPU code provides a speed-up factor of over 600 when evaluating nsys > 1024 model planetary systems, each containing npl = 4 planets, and assuming nobs = 256 observations of each system. We conclude that modern GPUs also offer a powerful tool for repeatedly evaluating Kepler's equation and a goodness-of-fit statistic for orbital models when presented with a large parameter space.
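
    The paper's solver uses carefully tuned precision strategies; the minimal sketch below only shows the per-thread structure, with each CUDA thread solving Kepler's equation E - e sin E = M for one (M, e) pair by Newton iteration. The tolerance, iteration cap and test data are illustrative assumptions.

      // Per-thread Newton iteration for Kepler's equation E - e*sin(E) = M.
      #include <cstdio>
      #include <vector>
      #include <cuda_runtime.h>

      __global__ void solve_kepler(const float* M, const float* e, float* E, int n) {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i >= n) return;
          float m = M[i], ecc = e[i];
          float x = (ecc < 0.8f) ? m : 3.14159265f;         // standard starting guesses
          for (int it = 0; it < 20; ++it) {                 // Newton-Raphson on f(E) = E - e sin E - M
              float f  = x - ecc * sinf(x) - m;
              float fp = 1.f - ecc * cosf(x);
              float dx = f / fp;
              x -= dx;
              if (fabsf(dx) < 1e-6f) break;
          }
          E[i] = x;
      }

      int main() {
          const int n = 1 << 20;                            // one million (M, e) pairs
          std::vector<float> M(n), e(n), E(n);
          for (int i = 0; i < n; ++i) { M[i] = 6.2831853f * i / n; e[i] = 0.3f; }
          float *dM, *de, *dE;
          cudaMalloc(&dM, n * sizeof(float));
          cudaMalloc(&de, n * sizeof(float));
          cudaMalloc(&dE, n * sizeof(float));
          cudaMemcpy(dM, M.data(), n * sizeof(float), cudaMemcpyHostToDevice);
          cudaMemcpy(de, e.data(), n * sizeof(float), cudaMemcpyHostToDevice);
          solve_kepler<<<(n + 255) / 256, 256>>>(dM, de, dE, n);
          cudaMemcpy(E.data(), dE, n * sizeof(float), cudaMemcpyDeviceToHost);
          printf("E(M = pi/2, e = 0.3) ~= %f rad\n", E[n / 4]);
          cudaFree(dM); cudaFree(de); cudaFree(dE);
          return 0;
      }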

  1. Using parallel computing for the display and simulation of the space debris environment

    NASA Astrophysics Data System (ADS)

    Möckel, M.; Wiedemann, C.; Flegel, S.; Gelhaus, J.; Vörsmann, P.; Klinkrad, H.; Krag, H.

    2011-07-01

    Parallelism is becoming the leading paradigm in today's computer architectures. In order to take full advantage of this development, new algorithms have to be specifically designed for parallel execution while many old ones have to be upgraded accordingly. One field in which parallel computing has been firmly established for many years is computer graphics. Calculating and displaying three-dimensional computer generated imagery in real time requires complex numerical operations to be performed at high speed on a large number of objects. Since most of these objects can be processed independently, parallel computing is applicable in this field. Modern graphics processing units (GPUs) have become capable of performing millions of matrix and vector operations per second on multiple objects simultaneously. As a side project, a software tool is currently being developed at the Institute of Aerospace Systems that provides an animated, three-dimensional visualization of both actual and simulated space debris objects. Due to the nature of these objects it is possible to process them individually and independently from each other. Therefore, an analytical orbit propagation algorithm has been implemented to run on a GPU. By taking advantage of all its processing power a huge performance increase, compared to its CPU-based counterpart, could be achieved. For several years efforts have been made to harness this computing power for applications other than computer graphics. Software tools for the simulation of space debris are among those that could profit from embracing parallelism. With recently emerged software development tools such as OpenCL it is possible to transfer the new algorithms used in the visualization outside the field of computer graphics and implement them, for example, into the space debris simulation environment. This way they can make use of parallel hardware such as GPUs and Multi-Core-CPUs for faster computation. In this paper the visualization software will be introduced, including a comparison between the serial and the parallel method of orbit propagation. Ways of how to use the benefits of the latter method for space debris simulation will be discussed. An introduction to OpenCL will be given as well as an exemplary algorithm from the field of space debris simulation.

  2. Using parallel computing for the display and simulation of the space debris environment

    NASA Astrophysics Data System (ADS)

    Moeckel, Marek; Wiedemann, Carsten; Flegel, Sven Kevin; Gelhaus, Johannes; Klinkrad, Heiner; Krag, Holger; Voersmann, Peter

    Parallelism is becoming the leading paradigm in today's computer architectures. In order to take full advantage of this development, new algorithms have to be specifically designed for parallel execution while many old ones have to be upgraded accordingly. One field in which parallel computing has been firmly established for many years is computer graphics. Calculating and displaying three-dimensional computer generated imagery in real time requires complex numerical operations to be performed at high speed on a large number of objects. Since most of these objects can be processed independently, parallel computing is applicable in this field. Modern graphics processing units (GPUs) have become capable of performing millions of matrix and vector operations per second on multiple objects simultaneously. As a side project, a software tool is currently being developed at the Institute of Aerospace Systems that provides an animated, three-dimensional visualization of both actual and simulated space debris objects. Due to the nature of these objects it is possible to process them individually and independently from each other. Therefore, an analytical orbit propagation algorithm has been implemented to run on a GPU. By taking advantage of all its processing power a huge performance increase, compared to its CPU-based counterpart, could be achieved. For several years efforts have been made to harness this computing power for applications other than computer graphics. Software tools for the simulation of space debris are among those that could profit from embracing parallelism. With recently emerged software development tools such as OpenCL it is possible to transfer the new algorithms used in the visualization outside the field of computer graphics and implement them, for example, into the space debris simulation environment. This way they can make use of parallel hardware such as GPUs and Multi-Core-CPUs for faster computation. In this paper the visualization software will be introduced, including a comparison between the serial and the parallel method of orbit propagation. Ways of how to use the benefits of the latter method for space debris simulation will be discussed. An introduction to OpenCL will be given as well as an exemplary algorithm from the field of space debris simulation.

  3. [The abuse of radiological diagnostic tests as a metaphor of the post-modern, new-media and consumerism society].

    PubMed

    Dimonte, Mariano

    2008-03-01

    The aim of this paper is to offer some points of reflection on sociological aspects of the emerging phenomenon of the overuse of imaging tests, interpreting this issue in the light of the general dynamics running through present-day post-modern society, which is so strongly characterized by consumerism and by the dominance of information and communication technologies as vectors of messages transmitted mainly in graphic format.

  4. Regional seismic lines reprocessed using post-stack processing techniques; National Petroleum Reserve, Alaska

    USGS Publications Warehouse

    Miller, John J.; Agena, W.F.; Lee, M.W.; Zihlman, F.N.; Grow, J.A.; Taylor, D.J.; Killgore, Michele; Oliver, H.L.

    2000-01-01

    This CD-ROM contains stacked, migrated, 2-dimensional seismic reflection data and associated support information for 22 regional seismic lines (3,470 line-miles) recorded in the National Petroleum Reserve - Alaska (NPRA) from 1974 through 1981. Together, these lines constitute about one-quarter of the seismic data collected as part of the Federal Government's program to evaluate the petroleum potential of the Reserve. The regional lines, which form a grid covering the entire NPRA, were created by combining various individual lines recorded in different years using different recording parameters. These data were reprocessed by the USGS using modern, post-stack processing techniques to create a data set suitable for interpretation on interactive seismic interpretation computer workstations. Reprocessing was done in support of ongoing petroleum resource studies by the USGS Energy Program. The CD-ROM contains the following files: 1) 22 files containing the digital seismic data in standard SEG-Y format; 2) 1 file containing navigation data for the 22 lines in standard SEG-P1 format; 3) 22 small-scale graphic images of each seismic line in Adobe Acrobat PDF format; 4) a graphic image of the location map, generated from the navigation file, with hyperlinks to the graphic images of the seismic lines; 5) an ASCII text file with cross-reference information for relating the sequential trace numbers on each regional line to the line number and shotpoint number of the original component lines; and 6) an explanation of the processing used to create the final seismic sections (this document). The SEG-Y format seismic files and SEG-P1 format navigation file contain all the information necessary for loading the data onto a seismic interpretation workstation.

  5. A survey of GPU-based acceleration techniques in MRI reconstructions

    PubMed Central

    Wang, Haifeng; Peng, Hanchuan; Chang, Yuchou

    2018-01-01

    Image reconstruction in magnetic resonance imaging (MRI) clinical applications has become increasingly complicated. However, diagnosis and treatment require very fast computational procedures. Modern, competitive graphics processing unit (GPU) platforms have been used to make high-performance parallel computation available, and attractive to common consumers, for solving massively parallel reconstruction problems at commodity prices. GPUs have also become more and more important for reconstruction computations, especially as deep learning starts to be applied to MRI reconstruction. The motivation of this survey is to review the image reconstruction schemes of GPU computing for MRI applications and to provide a summary reference for researchers in the MRI community. PMID:29675361

  6. A survey of GPU-based acceleration techniques in MRI reconstructions.

    PubMed

    Wang, Haifeng; Peng, Hanchuan; Chang, Yuchou; Liang, Dong

    2018-03-01

    Image reconstruction in magnetic resonance imaging (MRI) clinical applications has become increasingly complicated. However, diagnosis and treatment require very fast computational procedures. Modern, competitive graphics processing unit (GPU) platforms have been used to make high-performance parallel computation available, and attractive to common consumers, for solving massively parallel reconstruction problems at commodity prices. GPUs have also become more and more important for reconstruction computations, especially as deep learning starts to be applied to MRI reconstruction. The motivation of this survey is to review the image reconstruction schemes of GPU computing for MRI applications and to provide a summary reference for researchers in the MRI community.

  7. Fast Photon Monte Carlo for Water Cherenkov Detectors

    NASA Astrophysics Data System (ADS)

    Latorre, Anthony; Seibert, Stanley

    2012-03-01

    We present Chroma, a high performance optical photon simulation for large particle physics detectors, such as the water Cerenkov far detector option for LBNE. This software takes advantage of the CUDA parallel computing platform to propagate photons using modern graphics processing units. In a computer model of a 200 kiloton water Cerenkov detector with 29,000 photomultiplier tubes, Chroma can propagate 2.5 million photons per second, around 200 times faster than the same simulation with Geant4. Chroma uses a surface based approach to modeling geometry which offers many benefits over a solid based modelling approach which is used in other simulations like Geant4.
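
    Chroma traces photons against a triangle-mesh geometry with a bounding-volume hierarchy, which is well beyond a short example; the sketch below only illustrates the one-thread-per-photon structure, with each photon sampling an exponential free path and either advancing or terminating. The absorption length, geometry bound and initial photon state are illustrative assumptions.

      // One propagation step for a large batch of photons, one CUDA thread per photon.
      #include <cstdio>
      #include <vector>
      #include <cuda_runtime.h>
      #include <curand_kernel.h>

      struct Photon { float3 pos, dir; int alive; };

      __global__ void propagate(Photon* ph, int n, float abs_len, unsigned long long seed) {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i >= n || !ph[i].alive) return;
          curandState rng;
          curand_init(seed, i, 0, &rng);
          float step = -abs_len * logf(curand_uniform(&rng));   // exponential free path
          if (step > 10.f) {                                     // illustrative 10 m geometry bound
              ph[i].alive = 0;                                   // photon reaches the wall and is terminated
              return;
          }
          ph[i].pos.x += step * ph[i].dir.x;                     // advance photon to its next interaction
          ph[i].pos.y += step * ph[i].dir.y;
          ph[i].pos.z += step * ph[i].dir.z;
      }

      int main() {
          const int n = 1 << 20;
          Photon init = {make_float3(0.f, 0.f, 0.f), make_float3(0.f, 0.f, 1.f), 1};
          std::vector<Photon> host(n, init);                     // all photons start at the origin along +z
          Photon* d_ph;
          cudaMalloc(&d_ph, n * sizeof(Photon));
          cudaMemcpy(d_ph, host.data(), n * sizeof(Photon), cudaMemcpyHostToDevice);
          propagate<<<(n + 255) / 256, 256>>>(d_ph, n, 20.f, 1234ULL);
          cudaMemcpy(host.data(), d_ph, n * sizeof(Photon), cudaMemcpyDeviceToHost);
          int alive = 0;
          for (const Photon& p : host) alive += p.alive;
          printf("%d of %d photons still propagating after one step\n", alive, n);
          cudaFree(d_ph);
          return 0;
      }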

  8. Color graphics, interactive processing, and the supercomputer

    NASA Technical Reports Server (NTRS)

    Smith-Taylor, Rudeen

    1987-01-01

    The development of a common graphics environment for the NASA Langley Research Center user community and the integration of a supercomputer into this environment is examined. The initial computer hardware, the software graphics packages, and their configurations are described. The addition of improved computer graphics capability to the supercomputer, and the utilization of the graphic software and hardware are discussed. Consideration is given to the interactive processing system which supports the computer in an interactive debugging, processing, and graphics environment.

  9. Software complex for geophysical data visualization

    NASA Astrophysics Data System (ADS)

    Kryukov, Ilya A.; Tyugin, Dmitry Y.; Kurkin, Andrey A.; Kurkina, Oxana E.

    2013-04-01

    The effectiveness of current research in geophysics is largely determined by the degree to which data processing and visualization procedures are implemented with modern information technology. Realistic and informative visualization of the results of three-dimensional modeling of geophysical processes contributes significantly to the naturalness of physical modeling and to a detailed view of the phenomena. The main difficulty lies in interpreting the results of the calculations: it is necessary to be able to observe the various parameters of the three-dimensional models, build sections on different planes to evaluate certain characteristics, and make rapid assessments. Programs for the interpretation and visualization of simulations are in use all over the world, for example software systems such as ParaView, Golden Software Surfer, Voxler, Flow Vision and others. However, it is not always possible to solve a visualization problem with a single software package. Preprocessing, data transfer between packages and setting up a uniform visualization style can turn into long and routine work. In addition, special display modes are sometimes required for specific data, and existing products tend to offer general-purpose features that are not always fully applicable to such special cases. Rendering of dynamic data may require scripting languages, which does not relieve the user from writing code. The task was therefore to develop a new and original software complex for the visualization of simulation results. Let us briefly list the primary features that were developed. The software complex is a graphical application with a convenient and simple user interface that displays the results of the simulation. The complex can also manipulate the image interactively, resize the image without loss of quality, apply two-dimensional and three-dimensional regular grids, set coordinate axes with data labels, and take slices of the data. A distinctive feature of geophysical data is their size: the detailed maps used in the simulations are large, so real-time rendering can be a difficult task even for powerful modern computers. The performance of the software complex is therefore an important aspect of this work. The complex is based on the latest version of the Microsoft DirectX 11 graphics API, which reduces overhead and harnesses the power of modern hardware. Each geophysical calculation is an adjustment of the mathematical model to a particular case, so the architecture of the visualization complex was created with scalability and the ability to customize visualization objects in mind, for better visibility and comfort. In the present study, the software complex 'GeoVisual' was developed. One of the main features of this research is the use of bleeding-edge computer graphics techniques in scientific visualization. The research was supported by the Ministry of Education and Science of the Russian Federation, project 14.B37.21.0642.

  10. Advantages of GPU technology in DFT calculations of intercalated graphene

    NASA Astrophysics Data System (ADS)

    Pešić, J.; Gajić, R.

    2014-09-01

    Over the past few years, the expansion of general-purpose graphics-processing unit (GPGPU) technology has had a great impact on computational science. GPGPU is the utilization of a graphics-processing unit (GPU) to perform calculations in applications usually handled by the central processing unit (CPU). The use of GPGPUs as a way to increase computational power in the material sciences has significantly decreased computational costs in already highly demanding calculations. The level of acceleration and parallelization depends on the problem itself. Some problems can benefit from GPU acceleration and parallelization, such as the finite-difference time-domain algorithm (FDTD) and density-functional theory (DFT), while others cannot take advantage of these modern technologies. A number of GPU-supported applications have emerged in the past several years (www.nvidia.com/object/gpu-applications.html). Quantum Espresso (QE) is an integrated suite of open-source computer codes for electronic-structure calculations and materials modeling at the nano-scale. It is based on DFT, a plane-wave basis, and a pseudopotential approach. Since QE version 5.0, GPU support has been implemented as a plug-in component for the standard QE packages, allowing the capabilities of Nvidia GPU graphics cards to be exploited (www.qe-forge.org/gf/proj). In this study, we have examined the impact of GPU acceleration and parallelization on the numerical performance of DFT calculations. Graphene has been attracting attention worldwide and has already shown some remarkable properties. We have studied intercalated graphene using the QE package PHonon, which employs the GPU. The term ‘intercalation’ refers to a process whereby foreign adatoms are inserted onto a graphene lattice. In addition, by intercalating different atoms between graphene layers, it is possible to tune their physical properties. Our experiments show clear benefits from using GPUs: we reached an acceleration of several times compared to standard CPU calculations.

  11. Spatial Visualization by Isometric View

    ERIC Educational Resources Information Center

    Yue, Jianping

    2007-01-01

    Spatial visualization is a fundamental skill in technical graphics and engineering designs. From conventional multiview drawing to modern solid modeling using computer-aided design, visualization skills have always been essential for representing three-dimensional objects and assemblies. Researchers have developed various types of tests to measure…

  12. A Theoretical Analysis of Learning with Graphics--Implications for Computer Graphics Design.

    ERIC Educational Resources Information Center

    ChanLin, Lih-Juan

    This paper reviews the literature pertinent to learning with graphics. The dual coding theory provides explanation about how graphics are stored and precessed in semantic memory. The level of processing theory suggests how graphics can be employed in learning to encourage deeper processing. In addition to dual coding theory and level of processing…

  13. Cloud-based Monte Carlo modelling of BSSRDF for the rendering of human skin appearance (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Doronin, Alexander; Rushmeier, Holly E.; Meglinski, Igor; Bykov, Alexander V.

    2016-03-01

    We present a new Monte Carlo based approach for the modelling of the Bidirectional Scattering-Surface Reflectance Distribution Function (BSSRDF) for accurate rendering of human skin appearance. Variations of both skin tissue structure and the major chromophores are taken into account, corresponding to different ethnic and age groups. The computational solution utilizes HTML5, accelerated by graphics processing units (GPUs), and is therefore convenient for practical use on most modern computer-based devices and operating systems. Results of simulated human skin reflectance spectra, the corresponding skin colours, and examples of 3D face rendering are presented and compared with the results of phantom studies.
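
    For context, the core Monte Carlo step such simulations rely on can be sketched in a few lines: sampling free path lengths from the Beer-Lambert law and scattering angle cosines from the Henyey-Greenstein phase function. The Python sketch below is a simplified CPU illustration, not the authors' GPU/HTML5 code; the optical coefficients and the crude one-dimensional depth bookkeeping are assumptions made purely for illustration.

        # Simplified CPU sketch of photon-step sampling for a skin-like medium.
        # mu_a, mu_s [1/mm] and anisotropy g are illustrative values, not fitted skin data.
        import numpy as np

        rng = np.random.default_rng(1)
        mu_a, mu_s, g = 0.2, 20.0, 0.85
        mu_t = mu_a + mu_s

        def free_path():
            """Sample a free path length from the Beer-Lambert (exponential) distribution."""
            return -np.log(rng.random()) / mu_t

        def scatter_cos_theta():
            """Sample the scattering angle cosine from the Henyey-Greenstein phase function."""
            xi = rng.random()
            if g == 0.0:
                return 2.0 * xi - 1.0
            tmp = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
            return (1.0 + g * g - tmp * tmp) / (2.0 * g)

        # One photon: crude 1D depth bookkeeping with survival (albedo) weighting.
        weight, depth, steps = 1.0, 0.0, 0
        while weight > 1e-4 and steps < 10_000:
            depth += free_path() * abs(scatter_cos_theta())
            weight *= mu_s / mu_t
            steps += 1
        print(f"photon terminated after {steps} steps at depth {depth:.2f} mm")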

  14. ADFStealthViewer User Manual

    DTIC Science & Technology

    2013-05-01

    Environmental objects are used to immerse the user in a 3D visualisation of the simulated war game. ADFStealthViewer has several ADF-produced 3D models ... OpenGL, audio, and networking devices. Some advanced functionality of the engine relies on modern graphics pixel and vertex shaders.

  15. [Strategies of medical self-authorization in early modern medicine: the example of Volcher Coiter (1534-1576)].

    PubMed

    Gross, Dominik; Steinmetzer, Jan

    2005-01-01

    Based on the example of Volcher Coiter--a town physician at Nuremberg and one of the leading anatomists in early modern medicine--this essay points out that the authoritative status of contemporary physicians was mainly predicated on an interplay of self-fashioning and outside perception. It provides ample evidence that Coiter made use of several characteristic rhetorical and discourse-related strategies of self-authorisation, such as participation in social networks, a highly convincing technique of self-fashioning through the emphasis of particular erudition, the presentation of academic medicine as a science authorised by God, and the concurrent devaluation of non-academic healers. Furthermore, graphic and visual strategies of self-authorisation could be ascertained: Coiter took care to ensure a premium typography for his books. He also used his talent as a graphic artist to visualise his medical concepts in his books. Moreover, the so-called 'Nuremberg Portrait' of Coiter served to illustrate his outstanding authority.

  16. Generating Billion-Edge Scale-Free Networks in Seconds: Performance Study of a Novel GPU-based Preferential Attachment Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S.; Alam, Maksudul

    A novel parallel algorithm is presented for generating random scale-free networks using the preferential-attachment model. The algorithm, named cuPPA, is custom-designed for the single instruction, multiple data (SIMD) style of parallel processing supported by modern processors such as graphics processing units (GPUs). To the best of our knowledge, our algorithm is the first to exploit GPUs, and is also the fastest implementation available today, for generating scale-free networks using the preferential attachment model. A detailed performance study is presented to understand the scalability and runtime characteristics of the cuPPA algorithm. In one of the best cases, when executed on an NVidia GeForce 1080 GPU, cuPPA generates a scale-free network of a billion edges in less than 2 seconds.
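
    A serial reference version of the preferential-attachment process that cuPPA parallelizes can be written compactly. The Python sketch below is that serial baseline with illustrative values of n (vertices) and m (edges per new vertex); it does not attempt the SIMD/GPU formulation of the paper.

        # Serial preferential attachment (Barabasi-Albert style) reference sketch.
        import random

        def preferential_attachment(n=10_000, m=4, seed=0):
            random.seed(seed)
            # Start from a small clique of m + 1 vertices.
            edges = [(i, j) for i in range(m + 1) for j in range(i)]
            targets = [v for e in edges for v in e]   # each vertex repeated once per incident edge
            for v in range(m + 1, n):
                # Uniform sampling from 'targets' picks neighbours with probability
                # proportional to their current degree (the preferential-attachment rule).
                chosen = set()
                while len(chosen) < m:
                    chosen.add(random.choice(targets))
                for u in chosen:
                    edges.append((v, u))
                    targets.extend((v, u))
            return edges

        print(len(preferential_attachment()))   # 10 + 4 * (10_000 - 5) = 39_990 edges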

  17. CalcHEP 3.4 for collider physics within and beyond the Standard Model

    NASA Astrophysics Data System (ADS)

    Belyaev, Alexander; Christensen, Neil D.; Pukhov, Alexander

    2013-07-01

    We present version 3.4 of the CalcHEP software package, which is designed for effective evaluation and simulation of high energy physics collider processes at parton level. The main features of CalcHEP are the computation of Feynman diagrams, integration over multi-particle phase space, and event simulation at parton level. The principal attractive features along these lines are that it has: (a) an easy startup and usage even for those who are not familiar with CalcHEP and programming; (b) a friendly and convenient graphical user interface (GUI); (c) the option for the user to easily modify a model or introduce a new model by either using the graphical interface or by using an external package, with the possibility of cross-checking the results in different gauges; (d) a batch interface which allows the user to perform very complicated and tedious calculations connecting production and decay modes for processes with many particles in the final state. With this feature set, CalcHEP can efficiently perform calculations with a high level of automation from a theory in the form of a Lagrangian down to phenomenology in the form of cross sections, parton-level event simulation, and various kinematical distributions. In this paper we report on the new features of CalcHEP 3.4 which improve the power of our package as an effective tool for the study of modern collider phenomenology.
    Program summary:
    Program title: CalcHEP
    Catalogue identifier: AEOV_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOV_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 78535
    No. of bytes in distributed program, including test data, etc.: 818061
    Distribution format: tar.gz
    Programming language: C
    Computer: PC, MAC, Unix workstations
    Operating system: Unix
    RAM: Depends on process under study
    Classification: 4.4, 5
    External routines: X11
    Nature of problem: Implement new models of particle interactions. Generate Feynman diagrams for a physical process in any implemented theoretical model. Integrate over phase space for Feynman diagrams to obtain cross sections or particle widths, taking into account kinematical cuts. Simulate collisions at modern colliders and generate respective unweighted events. Mix events for different subprocesses and connect them with the decays of unstable particles.
    Solution method: Symbolic calculations; squared Feynman diagram approach; Vegas Monte Carlo algorithm.
    Restrictions: Up to 2→4 production (1→5 decay) processes are realistic on typical computers. Higher multiplicities are sometimes possible for specific 2→5 and 2→6 processes.
    Unusual features: Graphical user interface, symbolic algebra calculation of the squared matrix element, parallelization on a PBS cluster.
    Running time: Depends strongly on the process. For a typical 2→2 process it takes seconds. For 2→3 processes the typical running time is of the order of minutes. For higher multiplicities it could take much longer.

  18. SimGraph: A Flight Simulation Data Visualization Workstation

    NASA Technical Reports Server (NTRS)

    Kaplan, Joseph A.; Kenney, Patrick S.

    1997-01-01

    Today's modern flight simulation research produces vast amounts of time-sensitive data, making a qualitative analysis of the data difficult while it remains in a numerical representation. Therefore, a method of merging related data together and presenting it to the user in a more comprehensible format is necessary. Simulation Graphics (SimGraph) is an object-oriented data visualization software package that presents simulation data in animated graphical displays for easy interpretation. Data produced from a flight simulation is presented by SimGraph in several different formats, including: 3-Dimensional Views, Cockpit Control Views, Heads-Up Displays, Strip Charts, and Status Indicators. SimGraph can accommodate the addition of new graphical displays to allow the software to be customized to each user's particular environment. A new display can be developed and added to SimGraph without having to design a new application, allowing the graphics programmer to focus on the development of the graphical display. The SimGraph framework can be reused for a wide variety of visualization tasks. Although it was created for the flight simulation facilities at NASA Langley Research Center, SimGraph can be reconfigured to almost any data visualization environment. This paper describes the capabilities and operations of SimGraph.

  19. galario: Gpu Accelerated Library for Analyzing Radio Interferometer Observations

    NASA Astrophysics Data System (ADS)

    Tazzari, Marco; Beaujean, Frederik; Testi, Leonardo

    2017-10-01

    The galario library exploits the computing power of modern graphic cards (GPUs) to accelerate the comparison of model predictions to radio interferometer observations. It speeds up the computation of the synthetic visibilities given a model image (or an axisymmetric brightness profile) and their comparison to the observations.
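
    The underlying computation, Fourier-transforming a model image and sampling it at the observed (u, v) points, can be sketched with plain NumPy. The sketch below is conceptual: the grid size, pixel scale, and nearest-grid-point sampling are simplifying assumptions and do not reflect galario's actual interface or interpolation scheme.

        # Conceptual sketch: synthetic visibilities from a model image via 2D FFT.
        import numpy as np

        n, dxy = 256, 1e-7                                # image size [pixels], pixel size [rad]
        yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2] * dxy
        image = np.exp(-(xx**2 + yy**2) / (2 * (20 * dxy) ** 2))   # Gaussian model sky

        # The FFT of the image gives the visibility plane on a regular (u, v) grid.
        vis_grid = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(image))) * dxy**2
        u_axis = np.fft.fftshift(np.fft.fftfreq(n, d=dxy))          # spatial frequencies [1/rad]

        def sample_visibility(u, v):
            """Nearest-grid-point lookup of the synthetic visibility at (u, v)."""
            iu = np.argmin(np.abs(u_axis - u))
            iv = np.argmin(np.abs(u_axis - v))
            return vis_grid[iv, iu]

        print(sample_visibility(0.0, 0.0))    # approximately the total flux of the model image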

  20. Online Resources for Teaching Units on: Ecological Footprint of Human Food

    ERIC Educational Resources Information Center

    Marrocco, Aldo T.

    2011-01-01

    The modern food system involves high consumption of natural resources and other forms of environmental degradation. This paper is a presentation of internet resources such as scientific contributions, graphics, tables, images, animations and interactive atlases that can help to teach this subject. The discussion contains some subjects considered…

  1. The Language Laboratory and Modern Language Teaching. Revised Edition.

    ERIC Educational Resources Information Center

    Stack, Edward M.

    Since the audiolingual forms of a foreign language (hearing and speaking) must be controlled before the graphic skills (reading and writing) are taught, exercises in a language laboratory, which affords students intensive, active, individual drill, ought to precede written exercises on the same material. The three major forms of language…

  2. Visual Narrative Structure

    ERIC Educational Resources Information Center

    Cohn, Neil

    2013-01-01

    Narratives are an integral part of human expression. In the graphic form, they range from cave paintings to Egyptian hieroglyphics, from the Bayeux Tapestry to modern day comic books (Kunzle, 1973; McCloud, 1993). Yet not much research has addressed the structure and comprehension of narrative images, for example, how do people create meaning out…

  3. Integrating macromolecular X-ray diffraction data with the graphical user interface iMosflm.

    PubMed

    Powell, Harold R; Battye, T Geoff G; Kontogiannis, Luke; Johnson, Owen; Leslie, Andrew G W

    2017-07-01

    X-ray crystallography is the predominant source of structural information for biological macromolecules, providing fundamental insights into biological function. The availability of robust and user-friendly software to process the collected X-ray diffraction images makes the technique accessible to a wider range of scientists. iMosflm/MOSFLM (http://www.mrc-lmb.cam.ac.uk/harry/imosflm) is a software package designed to achieve this goal. The graphical user interface (GUI) version of MOSFLM (called iMosflm) is designed to guide inexperienced users through the steps of data integration, while retaining powerful features for more experienced users. Images from almost all commercially available X-ray detectors can be handled using this software. Although the program uses only 2D profile fitting, it can readily integrate data collected in the 'fine phi-slicing' mode (in which the rotation angle per image is less than the crystal mosaic spread by a factor of at least 2), which is commonly used with modern very fast readout detectors. The GUI provides real-time feedback on the success of the indexing step and the progress of data processing. This feedback includes the ability to monitor detector and crystal parameter refinement and to display the average spot shape in different regions of the detector. Data scaling and merging tasks can be initiated directly from the interface. Using this protocol, a data set of 360 images with ∼2,000 reflections per image can be processed in ∼4 min.

  4. Student Thinking Processes While Constructing Graphic Representations of Textbook Content: What Insights Do Think-Alouds Provide?

    ERIC Educational Resources Information Center

    Scott, D. Beth; Dreher, Mariam Jean

    2016-01-01

    This study examined the thinking processes students engage in while constructing graphic representations of textbook content. Twenty-eight students who either used graphic representations in a routine manner during social studies instruction or learned to construct graphic representations based on the rhetorical patterns used to organize textbook…

  5. Graphic Design in Libraries: A Conceptual Process

    ERIC Educational Resources Information Center

    Ruiz, Miguel

    2014-01-01

    Providing successful library services requires efficient and effective communication with users; therefore, it is important that content creators who develop visual materials understand key components of design and, specifically, develop a holistic graphic design process. Graphic design, as a form of visual communication, is the process of…

  6. Adaptive-optics optical coherence tomography processing using a graphics processing unit.

    PubMed

    Shafer, Brandon A; Kriske, Jeffery E; Kocaoglu, Omer P; Turner, Timothy L; Liu, Zhuolin; Lee, John Jaehwan; Miller, Donald T

    2014-01-01

    Graphics processing units are increasingly being used for scientific computing because of their powerful parallel processing abilities and their moderate price compared to supercomputers and computing grids. In this paper we use a general-purpose graphics processing unit to process adaptive-optics optical coherence tomography (AOOCT) images in real time. Increasing the processing speed of AOOCT is an essential step in moving this super-high-resolution technology closer to clinical viability.

  7. Fast approach for toner saving

    NASA Astrophysics Data System (ADS)

    Safonov, Ilia V.; Kurilin, Ilya V.; Rychagov, Michael N.; Lee, Hokeun; Kim, Sangho; Choi, Donchul

    2011-01-01

    Reducing toner consumption is an important task in modern printing devices and has a significant positive ecological impact. Existing toner-saving approaches have two main drawbacks: the appearance of hardcopies in toner-saving mode is worse than in normal mode, and processing the whole rendered page bitmap incurs significant computational cost. We propose to add small holes of various shapes and sizes at random places inside the character bitmaps stored in the font cache. This random perforation scheme is integrated into the RIP processing pipeline of the standard printer languages PostScript and PCL. Processing only text characters, and moreover processing each character for a given font and size only once, is an extremely fast procedure. The approach does not degrade halftoned bitmaps or business graphics and provides toner savings of up to 15-20% for typical office documents. The toner-saving rate is adjustable. The alteration of the resulting characters' appearance is almost indistinguishable from solid black text because of the random placement of small holes inside the character regions. The suggested method automatically skips small fonts to preserve their quality. Readability of text processed by the proposed method remains good, and OCR programs also process the scanned hardcopies successfully.
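
    The perforation idea itself is easy to sketch: punch a few small random holes into a cached glyph bitmap while skipping glyphs below a minimum size. The Python sketch below is illustrative only; the hole count, hole radius, and minimum glyph height are assumed parameters, not values from the printer pipeline.

        # Illustrative random perforation of a binary glyph bitmap.
        import numpy as np

        rng = np.random.default_rng(42)

        def perforate(glyph, hole_radius=1, holes=12, min_height=16):
            """Return a copy of a binary glyph bitmap with small random holes punched out."""
            h, w = glyph.shape
            if h < min_height:                 # preserve the quality of small fonts
                return glyph.copy()
            out = glyph.copy()
            ys, xs = np.nonzero(glyph)         # candidate positions: set (black) pixels
            if len(ys) == 0:
                return out
            for idx in rng.choice(len(ys), size=min(holes, len(ys)), replace=False):
                y, x = ys[idx], xs[idx]
                out[max(0, y - hole_radius):y + hole_radius + 1,
                    max(0, x - hole_radius):x + hole_radius + 1] = 0
            return out

        glyph = np.ones((24, 16), dtype=np.uint8)          # stand-in for a cached glyph bitmap
        saved = 1 - perforate(glyph).sum() / glyph.sum()
        print(f"toner reduction on this glyph: {saved:.1%}")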

  8. SPIDERplan: A tool to support decision-making in radiation therapy treatment plan assessment.

    PubMed

    Ventura, Tiago; Lopes, Maria do Carmo; Ferreira, Brigida Costa; Khouri, Leila

    2016-01-01

    In this work, a graphical method for radiotherapy treatment plan assessment and comparison, named SPIDERplan, is proposed. It aims to support plan approval by allowing independent and consistent comparisons of different treatment techniques, algorithms, or treatment planning systems. Optimized plans from modern radiotherapy are not easy to evaluate and compare because of their inherently multicriterial nature, and the clinical decision on the best treatment plan is largely subjective. SPIDERplan combines a graphical analysis with a scoring index. Customized radar plots are generated based on the categorization of structures into groups and on the determination of individual structure scores. Each group and structure is assigned an angular amplitude expressing the clinical importance defined by the radiation oncologist. Completing the graphical evaluation, a global plan score based on the structure scores and their clinical weights is determined. After a necessary clinical validation of the group weights, the efficacy of SPIDERplan in comparing and ranking different plans was tested through a planning exercise in which plans were generated for a nasal cavity case using different treatment planning systems. SPIDERplan was applied to the dose metrics achieved by the nasal cavity test plans. The generated diagrams and scores successfully ranked the plans according to the prescribed dose objectives and constraints and the radiation oncologist's priorities. SPIDERplan enables a fast and consistent evaluation of plan quality considering all targets and organs at risk.
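
    The global plan score described above is essentially a weighted combination of per-structure scores. A minimal Python sketch follows, with invented structure names, scores, and weights; the actual SPIDERplan scoring and validation procedure is more involved.

        # Minimal weighted plan-score sketch (structure names, scores, and weights invented).
        def global_plan_score(structure_scores, weights):
            """Weighted average of per-structure scores; weights need not be normalized."""
            total_weight = sum(weights.values())
            return sum(structure_scores[name] * w for name, w in weights.items()) / total_weight

        scores  = {"PTV": 0.92, "spinal_cord": 0.80, "optic_nerves": 0.70, "brain": 0.95}
        weights = {"PTV": 0.40, "spinal_cord": 0.25, "optic_nerves": 0.20, "brain": 0.15}
        print(f"global plan score: {global_plan_score(scores, weights):.3f}")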

  9. GPU accelerated FDTD solver and its application in MRI.

    PubMed

    Chi, J; Liu, F; Jin, J; Mason, D G; Crozier, S

    2010-01-01

    The finite difference time domain (FDTD) method is a popular technique for computational electromagnetics (CEM). The large computational power often required, however, has been a limiting factor for its applications. In this paper, we present a graphics processing unit (GPU)-based parallel FDTD solver and its successful application to the investigation of a novel B1 shimming scheme for high-field magnetic resonance imaging (MRI). The optimized shimming scheme exhibits considerably improved transmit B1 profiles. The GPU implementation dramatically shortened the runtime of the FDTD simulation of the electromagnetic field compared with its CPU counterpart. This acceleration in runtime has made such an investigation possible and will pave the way for other studies of large-scale computational electromagnetic problems in modern MRI which were previously impractical.
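
    As a reminder of what an FDTD solver iterates, the sketch below shows a one-dimensional Yee update loop in normalized units. It is a conceptual CPU illustration only; the solver in the paper is a full three-dimensional GPU implementation with realistic material models and boundary conditions.

        # Conceptual 1D FDTD (Yee) update loop in normalized units.
        import numpy as np

        nz, nt = 400, 1000
        ez = np.zeros(nz)          # electric field
        hy = np.zeros(nz - 1)      # magnetic field, staggered half a cell
        courant = 0.5              # normalized time step (stability requires <= 1 in 1D)

        for t in range(nt):
            hy += courant * (ez[1:] - ez[:-1])            # update H from the curl of E
            ez[1:-1] += courant * (hy[1:] - hy[:-1])      # update E from the curl of H
            ez[nz // 4] += np.exp(-((t - 30) / 10) ** 2)  # soft Gaussian source

        print(f"peak |Ez| after {nt} steps: {np.abs(ez).max():.3f}")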

  10. Identification of high-level functional/system requirements for future civil transports

    NASA Technical Reports Server (NTRS)

    Swink, Jay R.; Goins, Richard T.

    1992-01-01

    In order to accommodate the rapid growth in commercial aviation throughout the remainder of this century, the Federal Aviation Administration (FAA) is faced with a formidable challenge to upgrade and/or modernize the National Airspace System (NAS) without compromising safety or efficiency. A recurring theme in both the Aviation System Capital Investment Plan (CIP), which has replaced the NAS Plan, and the new FAA Plan for Research, Engineering, and Development (RE&D) is a reliance on the application of new technologies and a greater use of automation. Identifying the high-level functional and system impacts of such modernization efforts on future civil transport operational requirements, particularly in terms of cockpit functionality and information transfer, was the primary objective of this project. The FAA planning documents for the NAS of the 2005 era and beyond were surveyed; major aircraft functional capabilities and system components required for such an operating environment were identified. A hierarchical structured analysis of the information processing and flows emanating from such functional/system components was conducted and the results documented in graphical form depicting the relationships between functions and systems.

  11. SpecPad: device-independent NMR data visualization and processing based on the novel DART programming language and Html5 Web technology.

    PubMed

    Guigas, Bruno

    2017-09-01

    SpecPad is a new device-independent software program for the visualization and processing of one-dimensional and two-dimensional nuclear magnetic resonance (NMR) time domain (FID) and frequency domain (spectrum) data. It is the result of a project to investigate whether the novel programming language DART, in combination with Html5 Web technology, forms a suitable base to write an NMR data evaluation software which runs on modern computing devices such as Android, iOS, and Windows tablets as well as on Windows, Linux, and Mac OS X desktop PCs and notebooks. Another topic of interest is whether this technique also effectively supports the required sophisticated graphical and computational algorithms. SpecPad is device-independent because DART's compiled executable code is JavaScript and can, therefore, be run by the browsers of PCs and tablets. Because of Html5 browser cache technology, SpecPad may be operated off-line. Network access is only required during data import or export, e.g. via a Cloud service, or for software updates. A professional and easy to use graphical user interface consistent across all hardware platforms supports touch screen features on mobile devices for zooming and panning and for NMR-related interactive operations such as phasing, integration, peak picking, or atom assignment. Copyright © 2017 John Wiley & Sons, Ltd. Copyright © 2017 John Wiley & Sons, Ltd.

  12. Multiprocessor graphics computation and display using transputers

    NASA Technical Reports Server (NTRS)

    Ellis, Graham K.

    1988-01-01

    A package of two-dimensional graphics routines was developed to run on a transputer-based parallel processing system. These routines were designed to enable applications programmers to easily generate and display results from the transputer network in a graphic format. The graphics procedures were designed for the lowest possible network communication overhead for increased performance. The routines were designed for ease of use and to present an intuitive approach to generating graphics on the transputer parallel processing system.

  13. Aerodynamic optimization of supersonic compressor cascade using differential evolution on GPU

    NASA Astrophysics Data System (ADS)

    Aissa, Mohamed Hasanine; Verstraete, Tom; Vuik, Cornelis

    2016-06-01

    Differential Evolution (DE) is a powerful stochastic optimization method. Compared to gradient-based algorithms, DE is able to avoid local minima but at the same time requires more function evaluations. In turbomachinery applications, function evaluations are performed with time-consuming CFD simulations, which results in a long, unaffordable design cycle. Modern High Performance Computing systems, especially Graphics Processing Units (GPUs), are able to alleviate this inconvenience by accelerating the design evaluation itself. In this work we present a validated CFD solver running on GPUs, able to accelerate the design evaluation and thus the entire design process. An achieved speedup of 20x to 30x enabled the DE algorithm to run on a high-end computer instead of a costly large cluster. The GPU-enhanced DE was used to optimize the aerodynamics of a supersonic compressor cascade, achieving an aerodynamic loss reduction of 20%.
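
    The DE/rand/1/bin loop that the accelerated evaluations feed can be sketched on a cheap analytic objective. In the Python sketch below the sphere function stands in for the expensive CFD evaluation, and the population size, F, and CR values are illustrative choices, not those used in the study.

        # Generic DE/rand/1/bin sketch on an analytic test objective.
        import numpy as np

        rng = np.random.default_rng(0)

        def objective(x):                      # stand-in for the expensive CFD loss
            return np.sum(x**2)

        dim, pop_size, F, CR, generations = 5, 20, 0.8, 0.9, 200
        pop = rng.uniform(-5, 5, (pop_size, dim))
        cost = np.array([objective(x) for x in pop])

        for _ in range(generations):
            for i in range(pop_size):
                idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
                a, b, c = pop[idx]
                mutant = a + F * (b - c)                        # differential mutation
                cross = rng.random(dim) < CR
                cross[rng.integers(dim)] = True                 # ensure at least one mutant gene
                trial = np.where(cross, mutant, pop[i])
                trial_cost = objective(trial)
                if trial_cost <= cost[i]:                       # greedy selection
                    pop[i], cost[i] = trial, trial_cost

        print(f"best objective after DE: {cost.min():.2e}")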

  14. j5 DNA assembly design automation.

    PubMed

    Hillson, Nathan J

    2014-01-01

    Modern standardized methodologies, described in detail in the previous chapters of this book, have enabled the software-automated design of optimized DNA construction protocols. This chapter describes how to design (combinatorial) scar-less DNA assembly protocols using the web-based software j5. j5 assists biomedical and biotechnological researchers construct DNA by automating the design of optimized protocols for flanking homology sequence as well as type IIS endonuclease-mediated DNA assembly methodologies. Unlike any other software tool available today, j5 designs scar-less combinatorial DNA assembly protocols, performs a cost-benefit analysis to identify which portions of an assembly process would be less expensive to outsource to a DNA synthesis service provider, and designs hierarchical DNA assembly strategies to mitigate anticipated poor assembly junction sequence performance. Software integrated with j5 add significant value to the j5 design process through graphical user-interface enhancement and downstream liquid-handling robotic laboratory automation.

  15. Aerodynamic optimization of supersonic compressor cascade using differential evolution on GPU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aissa, Mohamed Hasanine; Verstraete, Tom; Vuik, Cornelis

    Differential Evolution (DE) is a powerful stochastic optimization method. Compared to gradient-based algorithms, DE is able to avoid local minima but at the same time requires more function evaluations. In turbomachinery applications, function evaluations are performed with time-consuming CFD simulations, which results in a long, unaffordable design cycle. Modern High Performance Computing systems, especially Graphics Processing Units (GPUs), are able to alleviate this inconvenience by accelerating the design evaluation itself. In this work we present a validated CFD solver running on GPUs, able to accelerate the design evaluation and thus the entire design process. An achieved speedup of 20x to 30x enabled the DE algorithm to run on a high-end computer instead of a costly large cluster. The GPU-enhanced DE was used to optimize the aerodynamics of a supersonic compressor cascade, achieving an aerodynamic loss reduction of 20%.

  16. Rapid Parallel Calculation of shell Element Based On GPU

    NASA Astrophysics Data System (ADS)

    Wang, Jian Hua; Li, Guang Yao; Li, Sheng; Li, Guang Yao

    2010-06-01

    Long computing times have bottlenecked the application of the finite element method. In this paper, an effective method to speed up FEM calculations using modern graphics processing units and programmable shader-based rendering is put forward. The method represents element information in accordance with the features of the GPU, converts all element calculations into rendering passes, performs the internal-force calculations for all elements in this way, and overcomes the low level of parallelism previously achievable when running on a single computer. Studies show that this method greatly improves efficiency and shortens calculation times. Simulation results for an elasticity problem with a large number of shell elements in sheet metal show that GPU-parallel calculation is faster than the CPU equivalent. This approach is useful and efficient for solving practical engineering problems.

  17. Commercial Off-The-Shelf (COTS) Graphics Processing Board (GPB) Radiation Test Evaluation Report

    NASA Technical Reports Server (NTRS)

    Salazar, George A.; Steele, Glen F.

    2013-01-01

    Large round trip communications latency for deep space missions will require more onboard computational capabilities to enable the space vehicle to undertake many tasks that have traditionally been ground-based mission control responsibilities. As a result, visual display graphics will be required to provide simpler vehicle situational awareness through graphical representations, as well as provide capabilities never before used in a space mission, such as augmented reality for in-flight maintenance or Telepresence activities. These capabilities will require graphics processors and associated support electronic components for high computational graphics processing. In an effort to understand the performance of commercial graphics card electronics operating in the expected radiation environment, a preliminary test was performed on five commercial off-the-shelf (COTS) graphics cards. This paper discusses the preliminary evaluation test results of five COTS graphics processing cards tested to the International Space Station (ISS) low earth orbit radiation environment. Three of the five graphics cards were tested to a total dose of 6000 rads (Si). The test articles, test configuration, preliminary results, and recommendations are discussed.

  18. Drawing Invisible Wounds: War Comics and the Treatment of Trauma.

    PubMed

    Leone, Joshua M

    2017-04-08

    Since the Vietnam War, graphic novels about war have shifted from simply representing it to portraying avenues for survivors to establish psychological wellness in their lives following traumatic events. While modern diagnostic medicine often looks to science, technology, and medications to treat the psychosomatic damage produced by trauma, my article examines the therapeutic potential of the comics medium with close attention to war comics. Graphic novels draw trauma in a different light: because of the medium's particular combination of words and images in sequence, war comics represent that which is typically unrepresentable, and these books serve as useful tools to promote healing among the psychologically wounded. Graphic narratives, both fictional and non-fictional, illuminate the ways that the unseen wounds of traumatic experience affect public health by compromising the ability of communities, individuals, and survivors to create and maintain meaningful relationships with others.

  19. The relevance of error analysis in graphical symbols evaluation.

    PubMed

    Piamonte, D P

    1999-01-01

    In an increasing number of modern tools and devices, small graphical symbols appear simultaneously in sets as parts of the human-machine interface. The presence of each symbol can influence the others' recognizability and correct association with their intended referents. Thus, aside from correct associations, it is equally important to perform error analysis of the wrong answers, misses, confusions, and even lack of answers. This research aimed to show how such error analyses can be valuable in evaluating graphical symbols, especially across potentially different user groups. The study tested 3 sets of icons representing 7 videophone functions. The methods involved parameters such as hits, confusions, missing values, and misses. The association tests showed similar hit rates for most symbols across the majority of the participant groups. However, exploring the error patterns helped detect differences in the graphical symbols' performance between participant groups which otherwise seemed to have similar levels of recognition. These findings are very valuable not only in determining which symbols should be retained, replaced, or redesigned, but also in formulating instructions and other aids for learning to use new products faster and more satisfactorily.
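
    The kind of error bookkeeping described above, tallying hits, confusions, and misses per intended referent, can be sketched briefly. The answer data in the Python sketch below are invented for illustration.

        # Tally hits, confusions, and misses from (intended, chosen) answer pairs.
        from collections import Counter, defaultdict

        # (intended_function, participant_answer); None means no answer was given.
        answers = [("mute", "mute"), ("mute", "hang_up"), ("hold", "hold"),
                   ("hold", None), ("hang_up", "hang_up"), ("hang_up", "mute")]

        hits, misses = Counter(), Counter()
        confusions = defaultdict(Counter)
        for intended, chosen in answers:
            if chosen is None:
                misses[intended] += 1
            elif chosen == intended:
                hits[intended] += 1
            else:
                confusions[intended][chosen] += 1   # which wrong symbol attracted the answer

        for symbol in sorted({a[0] for a in answers}):
            total = hits[symbol] + misses[symbol] + sum(confusions[symbol].values())
            print(symbol, f"hit rate {hits[symbol] / total:.0%}", dict(confusions[symbol]))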

  20. SPINS: standardized protein NMR storage. A data dictionary and object-oriented relational database for archiving protein NMR spectra.

    PubMed

    Baran, Michael C; Moseley, Hunter N B; Sahota, Gurmukh; Montelione, Gaetano T

    2002-10-01

    Modern protein NMR spectroscopy laboratories have a rapidly growing need for an easily queried local archival system of raw experimental NMR datasets. SPINS (Standardized ProteIn Nmr Storage) is an object-oriented relational database that provides facilities for high-volume NMR data archival, organization of analyses, and dissemination of results to the public domain by automatic preparation of the header files required for submission of data to the BioMagResBank (BMRB). The current version of SPINS coordinates the process from data collection to BMRB deposition of raw NMR data by standardizing and integrating the storage and retrieval of these data in a local laboratory file system. Additional facilities include a data mining query tool, graphical database administration tools, and an NMRStar v2.1.1 file generator. SPINS also includes a user-friendly internet-based graphical user interface, which is optionally integrated with Varian VNMR NMR data collection software. This paper provides an overview of the data model underlying the SPINS database system, a description of its implementation in Oracle, and an outline of future plans for the SPINS project.

  1. Design Theory and the Military’s Understanding of Our Complex World

    DTIC Science & Technology

    2011-08-07

    Antoine Bousquet, The Scientific Way of Warfare: Order and Chaos on the Battlefields of Modernity (New York: Columbia University Press, 2009). While each logic system represents a combination of many unique factors, the graphic below attempts to frame one way of depicting a generic ...

  2. The Effectiveness of Screencasts and Cognitive Tools as Scaffolding for Novice Object-Oriented Programmers

    ERIC Educational Resources Information Center

    Lee, Mark J. W.; Pradhan, Sunam; Dalgarno, Barney

    2008-01-01

    Modern information technology and computer science curricula employ a variety of graphical tools and development environments to facilitate student learning of introductory programming concepts and techniques. While the provision of interactive features and the use of visualization can enhance students' understanding and assist them in grasping…

  3. Top Ten Reasons To Use InDesign for Scholastic Media.

    ERIC Educational Resources Information Center

    Communication: Journalism Education Today, 2003

    2003-01-01

    Explains that Adobe InDesign 2.0 moves desktop to new possibilities because it combines the best of modern graphics techniques. Provides explanations of the following aspects of InDesign: drop shadow; align objects; define styles; type on a path; grids; accessible patterns; gradients; create outlines; indexing; and shows missing point. (PM)

  4. Accelerating epistasis analysis in human genetics with consumer graphics hardware.

    PubMed

    Sinnott-Armstrong, Nicholas A; Greene, Casey S; Cancare, Fabio; Moore, Jason H

    2009-07-24

    Human geneticists are now capable of measuring more than one million DNA sequence variations from across the human genome. The new challenge is to develop computationally feasible methods capable of analyzing these data for associations with common human disease, particularly in the context of epistasis. Epistasis describes the situation where multiple genes interact in a complex non-linear manner to determine an individual's disease risk and is thought to be ubiquitous for common diseases. Multifactor Dimensionality Reduction (MDR) is an algorithm capable of detecting epistasis. An exhaustive analysis with MDR is often computationally expensive, particularly for high order interactions. This challenge has previously been met with parallel computation and expensive hardware. The option we examine here exploits commodity hardware designed for computer graphics. In modern computers Graphics Processing Units (GPUs) have more memory bandwidth and computational capability than Central Processing Units (CPUs) and are well suited to this problem. Advances in the video game industry have led to an economy of scale creating a situation where these powerful components are readily available at very low cost. Here we implement and evaluate the performance of the MDR algorithm on GPUs. Of primary interest are the time required for an epistasis analysis and the price to performance ratio of available solutions. We found that using MDR on GPUs consistently increased performance per machine over both a feature rich Java software package and a C++ cluster implementation. The performance of a GPU workstation running a GPU implementation reduces computation time by a factor of 160 compared to an 8-core workstation running the Java implementation on CPUs. This GPU workstation performs similarly to 150 cores running an optimized C++ implementation on a Beowulf cluster. Furthermore this GPU system provides extremely cost effective performance while leaving the CPU available for other tasks. The GPU workstation containing three GPUs costs $2000 while obtaining similar performance on a Beowulf cluster requires 150 CPU cores which, including the added infrastructure and support cost of the cluster system, cost approximately $82,500. Graphics hardware based computing provides a cost effective means to perform genetic analysis of epistasis using MDR on large datasets without the infrastructure of a computing cluster.
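
    The core MDR step for a single SNP pair, labelling each two-locus genotype cell high-risk when its case/control ratio exceeds the overall ratio and scoring the resulting classification, can be sketched on the CPU. The sketch below uses random genotype data and is only a conceptual illustration of the procedure that the GPU implementation applies exhaustively to all pairs (without the cross-validation used in practice).

        # Conceptual single-pair MDR-style risk labelling and classification accuracy.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 1000
        snp_a = rng.integers(0, 3, n)          # genotypes coded 0/1/2
        snp_b = rng.integers(0, 3, n)
        case = rng.integers(0, 2, n)           # 1 = case, 0 = control

        def mdr_accuracy(a, b, y):
            cases = np.zeros((3, 3))
            controls = np.zeros((3, 3))
            np.add.at(cases, (a[y == 1], b[y == 1]), 1)
            np.add.at(controls, (a[y == 0], b[y == 0]), 1)
            threshold = y.sum() / (len(y) - y.sum())      # overall case/control ratio
            high_risk = cases > threshold * controls      # per-cell risk label
            predicted = high_risk[a, b].astype(int)
            return (predicted == y).mean()

        print(f"classification accuracy for this SNP pair: {mdr_accuracy(snp_a, snp_b, case):.3f}")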

  5. The Performance Improvement of the Lagrangian Particle Dispersion Model (LPDM) Using Graphics Processing Unit (GPU) Computing

    DTIC Science & Technology

    2017-08-01

    CUDA provides access to the GPU for general-purpose processing. CUDA is designed to work easily with multiple programming languages, including Fortran.

  6. A Relational Reasoning Approach to Text-Graphic Processing

    ERIC Educational Resources Information Center

    Danielson, Robert W.; Sinatra, Gale M.

    2017-01-01

    We propose that research on text-graphic processing could be strengthened by the inclusion of relational reasoning perspectives. We briefly outline four aspects of relational reasoning: "analogies," "anomalies," "antinomies", and "antitheses". Next, we illustrate how text-graphic researchers have been…

  7. Printing--Graphic Arts--Graphic Communications

    ERIC Educational Resources Information Center

    Hauenstein, A. Dean

    1975-01-01

    Recently, "graphic arts" has shifted from printing skills to a conceptual approach of production processes. "Graphic communications" must embrace the total system of communication through graphic media, to serve broad career education purposes; students taught concepts and principles can be flexible and adaptive. The author…

  8. Measuring Cognitive Load in Test Items: Static Graphics versus Animated Graphics

    ERIC Educational Resources Information Center

    Dindar, M.; Kabakçi Yurdakul, I.; Inan Dönmez, F.

    2015-01-01

    The majority of multimedia learning studies focus on the use of graphics in learning process but very few of them examine the role of graphics in testing students' knowledge. This study investigates the use of static graphics versus animated graphics in a computer-based English achievement test from a cognitive load theory perspective. Three…

  9. Cots Correlator Platform

    NASA Astrophysics Data System (ADS)

    Schaaf, Kjeld; Overeem, Ruud

    2004-06-01

    Moore's law is best exploited by using consumer-market hardware. In particular, the gaming industry pushes the limit of processor performance, thus reducing the cost per raw flop even faster than Moore's law predicts. Next to the cost benefits of Commercial-Off-The-Shelf (COTS) processing resources, there is a rapidly growing pool of experience in cluster-based processing. Typical Beowulf clusters of PCs are well known as supercomputers. Multiple examples exist of specialised cluster computers based on more advanced server nodes or even gaming stations. All these cluster machines build upon the same knowledge about cluster software management, scheduling, middleware libraries, and mathematical libraries. In this study, we have integrated COTS processing resources and cluster nodes into a very high performance processing platform suitable for streaming data applications, in particular to implement a correlator. The required processing power for the correlator in modern radio telescopes is in the range of the larger supercomputers, which motivates the usage of supercomputer technology. Raw processing power is provided by graphical processors and is combined with an Infiniband host bus adapter with integrated data stream handling logic. With this processing platform a scalable correlator can be built with continuously growing processing power at consumer-market prices.
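
    A toy FX correlator, channelizing each antenna's time stream with an FFT and cross-multiplying channel by channel, illustrates the streaming computation such a platform must sustain. The antenna count, channel count, and random test signals in the Python sketch below are assumptions for illustration only.

        # Toy FX correlator: F step (channelize) then X step (cross-multiply and average).
        import numpy as np

        rng = np.random.default_rng(0)
        n_ant, n_chan, n_spectra = 4, 64, 100
        common = rng.normal(size=(n_spectra, n_chan))                          # correlated sky signal
        streams = common + 0.5 * rng.normal(size=(n_ant, n_spectra, n_chan))  # plus per-antenna noise

        spectra = np.fft.rfft(streams, axis=-1)                               # F step: channelize
        vis = np.zeros((n_ant, n_ant, spectra.shape[-1]), dtype=complex)
        for i in range(n_ant):
            for j in range(i, n_ant):                                         # X step
                vis[i, j] = (spectra[i] * np.conj(spectra[j])).mean(axis=0)

        print(f"auto power {abs(vis[0, 0]).mean():.1f}, cross power {abs(vis[0, 1]).mean():.1f}")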

  10. Process and representation in graphical displays

    NASA Technical Reports Server (NTRS)

    Gillan, Douglas J.; Lewis, Robert; Rudisill, Marianne

    1990-01-01

    How people comprehend graphics is examined. Graphical comprehension involves the cognitive representation of information from a graphic display and the processing strategies that people apply to answer questions about graphics. Research on representation has examined both the features present in a graphic display and the cognitive representation of the graphic. The key features include the physical components of a graph, the relation between the figure and its axes, and the information in the graph. Tests of people's memory for graphs indicate that both the physical and informational aspect of a graph are important in the cognitive representation of a graph. However, the physical (or perceptual) features overshadow the information to a large degree. Processing strategies also involve a perception-information distinction. In order to answer simple questions (e.g., determining the value of a variable, comparing several variables, and determining the mean of a set of variables), people switch between two information processing strategies: (1) an arithmetic, look-up strategy in which they use a graph much like a table, looking up values and performing arithmetic calculations; and (2) a perceptual strategy in which they use the spatial characteristics of the graph to make comparisons and estimations. The user's choice of strategies depends on the task and the characteristics of the graph. A theory of graphic comprehension is presented.

  11. Software Graphics Processing Unit (sGPU) for Deep Space Applications

    NASA Technical Reports Server (NTRS)

    McCabe, Mary; Salazar, George; Steele, Glen

    2015-01-01

    A graphics processing capability will be required for deep space missions and must include a range of applications, from safety-critical vehicle health status to telemedicine for crew health. However, preliminary radiation testing of commercial graphics processing cards suggests they cannot operate in the deep space radiation environment. Investigation into a Software Graphics Processing Unit (sGPU) comprised of commercial-equivalent radiation hardened/tolerant single board computers, field programmable gate arrays, and safety-critical display software shows promising results. Preliminary performance of approximately 30 frames per second (FPS) has been achieved. Use of multi-core processors may provide a significant increase in performance.

  12. From Geocentrism to Allocentrism: Teaching the Phases of the Moon in a Digital Full-Dome Planetarium

    ERIC Educational Resources Information Center

    Chastenay, Pierre

    2016-01-01

    An increasing number of planetariums worldwide are turning digital, using ultra-fast computers, powerful graphic cards, and high-resolution video projectors to create highly realistic astronomical imagery in real time. This modern technology makes it so that the audience can observe astronomical phenomena from a geocentric as well as an…

  13. The Graphics Tablet--A Valuable Tool for the Digital STEM Teacher

    ERIC Educational Resources Information Center

    Stephens, Jeff

    2018-01-01

    I am inspired to write this article after coming across some publications in "The Physics Teacher" that all hit on topics of personal interest and experience. Similarly to Christensen my goal in writing this is to encourage other physics educators to take advantage of modern technology in delivering content to students and to feel…

  14. Historical Analyses of Disordered Handwriting: Perspectives on Early 20th-Century Material From a German Psychiatric Hospital

    ERIC Educational Resources Information Center

    Schiegg, Markus; Thorpe, Deborah

    2017-01-01

    Handwritten texts carry significant information, extending beyond the meaning of their words. Modern neurology, for example, benefits from the interpretation of the graphic features of writing and drawing for the diagnosis and monitoring of diseases and disorders. This article examines how handwriting analysis can be used, and has been used…

  15. Zolotopia: A New Classic for Design

    ERIC Educational Resources Information Center

    Payne, Janet

    2007-01-01

    While working on a graphic design job at FAO Schwartz, entrepreneurs Sandra Higashi and Byron Glaser recognized a need for something new in toys. The result was the birth of Zolo, an innovative, interactive toy, designed and produced by Higashi and Glaser and distributed by the Museum of Modern Art (MoMA) in New York. The initial idea for Zolo…

  16. Using Graphic Organizers to Improve Reading Comprehension Skills for the Middle School ESL Students

    ERIC Educational Resources Information Center

    Praveen, Sam D.; Rajan, Premalatha

    2013-01-01

    "A picture is worth a thousand words." In a modern-day classroom, students are surrounded by visual imagery through textbooks, notice boards, television, videos, or computers. Many middle school classrooms are filled with colorful pictures and photographs. However, it is unclear how--or if --these images impact the middle school ESL…

  17. Efficient implementation of constant pH molecular dynamics on modern graphics processors.

    PubMed

    Arthur, Evan J; Brooks, Charles L

    2016-09-15

    The treatment of pH sensitive ionization states for titratable residues in proteins is often omitted from molecular dynamics (MD) simulations. While static charge models can answer many questions regarding protein conformational equilibrium and protein-ligand interactions, pH-sensitive phenomena such as acid-activated chaperones and amyloidogenic protein aggregation are inaccessible to such models. Constant pH molecular dynamics (CPHMD) coupled with the Generalized Born with a Simple sWitching function (GBSW) implicit solvent model provides an accurate framework for simulating pH sensitive processes in biological systems. Although this combination has demonstrated success in predicting pKa values of protein structures, and in exploring dynamics of ionizable side-chains, its speed has been an impediment to routine application. The recent availability of low-cost graphics processing unit (GPU) chipsets with thousands of processing cores, together with the implementation of the accurate GBSW implicit solvent model on those chipsets (Arthur and Brooks, J. Comput. Chem. 2016, 37, 927), provides an opportunity to improve the speed of CPHMD and ionization modeling greatly. Here, we present a first implementation of GPU-enabled CPHMD within the CHARMM-OpenMM simulation package interface. Depending on the system size and nonbonded force cutoff parameters, we find speed increases of between one and three orders of magnitude. Additionally, the algorithm scales better with system size than the CPU-based algorithm, thus allowing for larger systems to be modeled in a cost effective manner. We anticipate that the improved performance of this methodology will open the door for widespread application of CPHMD in modeling pH-mediated biological processes. © 2016 Wiley Periodicals, Inc.

  18. Explicit integration with GPU acceleration for large kinetic networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brock, Benjamin; Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37830; Belt, Andrew

    2015-12-01

    We demonstrate the first implementation of recently developed fast explicit kinetic integration algorithms on modern graphics processing unit (GPU) accelerators. Taking as a generic test case a Type Ia supernova explosion with an extremely stiff thermonuclear network having 150 isotopic species and 1604 reactions coupled to hydrodynamics using operator splitting, we demonstrate the capability to solve on the order of 100 realistic kinetic networks in parallel in the same time that standard implicit methods can solve a single such network on a CPU. This orders-of-magnitude decrease in computation time for solving systems of realistic kinetic networks implies that important coupled, multiphysics problems in various scientific and technical fields that were intractable, or could be simulated only with highly schematic kinetic networks, are now computationally feasible.
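
    A forward-Euler update on a toy three-species chain shows what explicit kinetic integration means in its simplest form; the algorithms in the paper are considerably more sophisticated stabilized explicit schemes applied to ~150-species networks on GPUs. The rate constants and time step below are illustrative.

        # Forward-Euler integration of a toy reaction chain A -> B -> C.
        import numpy as np

        k1, k2 = 5.0, 1.0                      # rates for A->B and B->C (illustrative)
        y = np.array([1.0, 0.0, 0.0])          # abundances of A, B, C
        dt, t_end = 1e-3, 5.0

        def rhs(y):
            a, b, c = y
            return np.array([-k1 * a, k1 * a - k2 * b, k2 * b])

        t = 0.0
        while t < t_end:
            y = y + dt * rhs(y)                # explicit Euler update (stable for dt << 1/k)
            t += dt

        print("final abundances A, B, C:", np.round(y, 4))   # the three abundances still sum to 1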

  19. Effect of Bearing Housings on Centrifugal Pump Rotor Dynamics

    NASA Astrophysics Data System (ADS)

    Yashchenko, A. S.; Rudenko, A. A.; Simonovskiy, V. I.; Kozlov, O. M.

    2017-08-01

    The article deals with the effect of the bearing housing on the rotor dynamics of a barrel-casing centrifugal boiler feed pump. The calculation of the rotor model including the bearing housing has been performed by the method of initial parameters. The calculation of a rotor solid model including the bearing housing has been performed by the finite element method. Results of both calculations highlight the need to include bearing housings in dynamic analyses of the pump rotor. Calculation with modern software packages is more time-consuming, but it is preferred because a graphical editor is employed for creating the numerical model. When it is necessary to examine many variants of design parameters, programs for beam modeling should be used.

  20. KSC-99pp0808

    NASA Image and Video Library

    1999-07-08

    KENNEDY SPACE CENTER, FLA. -- In the cockpit of the orbiter Atlantis, which is in the Orbiter Processing Facility, U.S. Rep. Dave Weldon looks at the newly installed Multifunction Electronic Display Subsystem (MEDS), known as the "glass cockpit." Weldon is on the House Science Committee and vice chairman of the Space and Aeronautics Subcommittee. He was in Palmdale, Calif., when Atlantis underwent the modification and he wanted to see the final product. The full-color, flat-panel MEDS upgrade improves crew/orbiter interaction with easy-to-read, graphic portrayals of key flight indicators like attitude display and mach speed. The installation makes Atlantis the most modern orbiter in the fleet and equals the systems on current commercial jet airliners and military aircraft. Atlantis is scheduled to fly on mission STS-101 in early December

  1. Process and representation in graphical displays

    NASA Technical Reports Server (NTRS)

    Gillan, Douglas J.; Lewis, Robert; Rudisill, Marianne

    1993-01-01

    Our initial model of graphic comprehension has focused on statistical graphs. Like other models of human-computer interaction, models of graphical comprehension can be used by human-computer interface designers and developers to create interfaces that present information in an efficient and usable manner. Our investigation of graph comprehension addresses two primary questions: how do people represent the information contained in a data graph?; and how do they process information from the graph? The topics of focus for graphic representation concern the features into which people decompose a graph and the representations of the graph in memory. The issue of processing can be further analyzed as two questions: what overall processing strategies do people use?; and what are the specific processing skills required?

  2. Integration of rocket turbine design and analysis through computer graphics

    NASA Technical Reports Server (NTRS)

    Hsu, Wayne; Boynton, Jim

    1988-01-01

    An interactive approach with engineering computer graphics is used to integrate the design and analysis processes of a rocket engine turbine into a progressive and iterative design procedure. The processes are interconnected through pre- and postprocessors. The graphics are used to generate the blade profiles, their stacking, finite element generation, and analysis presentation through color graphics. Steps of the design process discussed include pitch-line design, axisymmetric hub-to-tip meridional design, and quasi-three-dimensional analysis. The viscous two- and three-dimensional analysis codes are executed after acceptable designs are achieved and estimates of initial losses are confirmed.

  3. Use of a horror film in psychotherapy.

    PubMed

    Turley, J M; Derdeyn, A P

    1990-11-01

    Modern improvements in the technology of cinematic special effects have ushered in a new genre of vivid and graphic horror film. The numerous sequels of these films attest to their popularity among adolescents and young adults. Considerable concern has arisen on the part of parents, professionals, and policymakers regarding adverse effects of these films upon children. The authors discuss the meaning of a horror film to a troubled 13-year-old boy and describe the use of the film in his psychotherapy. The modern horror film serves many of the same functions for the adolescent that the traditional fairy tale serves for the younger child.

  4. Graphic Design Is Not a Medium.

    ERIC Educational Resources Information Center

    Gruber, John Edward, Jr.

    2001-01-01

    Discusses graphic design and reviews its development from analog processes to a digital tool with the use of computers. Topics include graphical user interfaces; the need for visual communication concepts; transmedia as opposed to repurposing; and graphic design instruction in higher education. (LRW)

  5. Motion compensation in digital subtraction angiography using graphics hardware.

    PubMed

    Deuerling-Zheng, Yu; Lell, Michael; Galant, Adam; Hornegger, Joachim

    2006-07-01

    An inherent disadvantage of digital subtraction angiography (DSA) is its sensitivity to patient motion, which causes artifacts in the subtraction images. These artifacts often reduce the diagnostic value of this technique, so automated, fast and accurate motion compensation is required. To cope with this requirement, we first examine a method explicitly designed to detect local motions in DSA. Then, we implement a motion compensation algorithm by means of block matching on modern graphics hardware. Both methods search for maximal local similarity by evaluating a histogram-based measure. In this context, we are the first to map an optimizing search strategy onto graphics hardware while parallelizing block matching. Moreover, we provide an innovative method for creating histograms on graphics hardware with vertex texturing and frame buffer blending. It turns out that both methods can effectively correct the artifacts in most cases, while the hardware implementation of block matching performs much faster: the displacements of two 1024 x 1024 images can be calculated at 3 frames/s with integer precision or 2 frames/s with sub-pixel precision. Preliminary clinical evaluation indicates that the computation with integer precision could already be sufficient.
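
    The core of the hardware method above is an exhaustive block-matching search evaluated in parallel on the GPU. As a rough illustration of that pattern only (the paper's implementation uses a histogram-based similarity measure built with vertex texturing and frame-buffer blending, not the sum-of-absolute-differences criterion or the CUDA code sketched here), a minimal kernel that scores every candidate displacement of one 16 x 16 block with one thread per displacement might look like this:

        // Minimal CUDA sketch of exhaustive block matching (illustrative only; not the
        // paper's histogram-based shader implementation).  One thread scores one
        // candidate displacement of a 16 x 16 block; the best score is picked afterwards.
        #include <cuda_runtime.h>

        #define BS 16   // block size in pixels
        #define R   8   // search radius in pixels

        __global__ void blockMatchSAD(const unsigned char* mask, const unsigned char* live,
                                      int width, int height, int bx, int by, float* sad)
        {
            int dx = (int)threadIdx.x - R;          // candidate displacement in x
            int dy = (int)threadIdx.y - R;          // candidate displacement in y
            float sum = 0.0f;
            for (int y = 0; y < BS; ++y)
                for (int x = 0; x < BS; ++x) {
                    int mx = bx + x,      my = by + y;       // pixel in the mask image
                    int lx = bx + x + dx, ly = by + y + dy;  // shifted pixel in the live image
                    if (lx < 0 || ly < 0 || lx >= width || ly >= height) { sum += 255.0f; continue; }
                    sum += fabsf((float)mask[my * width + mx] - (float)live[ly * width + lx]);
                }
            sad[threadIdx.y * (2 * R + 1) + threadIdx.x] = sum;  // one score per displacement
        }

        // launch for one image block at (bx, by):
        // blockMatchSAD<<<1, dim3(2 * R + 1, 2 * R + 1)>>>(d_mask, d_live, w, h, bx, by, d_sad);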

  6. An interactive graphics program for manipulation and display of panel method geometry

    NASA Technical Reports Server (NTRS)

    Hall, J. F.; Neuhart, D. H.; Walkley, K. B.

    1983-01-01

    Modern aerodynamic panel methods that handle large, complex geometries have made evident the need to interactively manipulate, modify, and view such configurations. With this purpose in mind, the GEOM program was developed. It is a menu driven, interactive program that uses the Tektronix PLOT 10 graphics software to display geometry configurations which are characterized by an abutting set of networks. These networks are composed of quadrilateral panels which are described by the coordinates of their corners. GEOM is divided into fourteen executive controlled functions. These functions are used to build configurations, scale and rotate networks, transpose networks defining M and N lines, graphically display selected networks, join and split networks, create wake networks, produce symmetric images of networks, repanel and rename networks, display configuration cross sections, and output network geometry in two formats. A data base management system is used to facilitate data transfers in this program. A sample session illustrating various capabilities of the code is included as a guide to program operation.

  7. [Computer graphic display of retinal examination results. Software improving the quality of documenting fundus changes].

    PubMed

    Jürgens, Clemens; Grossjohann, Rico; Czepita, Damian; Tost, Frank

    2009-01-01

    Graphic documentation of retinal examination results in clinical ophthalmological practice is often depicted in pictures or in handwritten form. Popular software products used to describe changes in the fundus do not differ much from simple graphics programs that allow the user to insert, scale and edit basic graphic elements such as a circle, rectangle, arrow or text. Displaying the results of retinal examinations in a unified way is therefore difficult to achieve, so we devised and implemented modern software tools for this purpose. A computer program was created that enables quick and intuitive creation of fundus sketches that can be digitally archived or printed. Especially for the needs of ophthalmological clinics, a set of standard digital symbols used to document the results of retinal examinations was developed and installed in a library of graphic symbols. These symbols are divided into the following categories: preoperative, postoperative, neovascularization, and retinopathy of prematurity. The appropriate symbol can be selected with a click of the mouse and dragged and dropped onto the canvas of the fundus. Current forms of documenting the results of retinal examinations are unsatisfactory because they are time consuming and imprecise, and unequivocal interpretation is difficult or in some cases impossible. Using the developed computer program, a sketch of the fundus can be created much more quickly than by hand drawing. Additionally, the quality of the medical documentation is enhanced by a system of well described and standardized symbols. (1) Graphic symbols used to document the results of retinal examinations are a part of everyday clinical practice. (2) The designed computer program allows quick and intuitive graphical creation of fundus sketches that can be either digitally archived or printed.

  8. Forecasting Workload for Defense Logistics Agency Distribution

    DTIC Science & Technology

    2014-12-01

    Monthly Defense Depot (DD) sales for the four primary supply chains (Aviation, Land, Maritime, and Industrial Hardware) are plotted and modeled to forecast Defense Logistics Agency Distribution workload; Figure 2 of the report shows the graphical output of the linear regression fitted to these monthly sales. Abbreviations used include AVN (Aviation), BSM (Business Systems Modernization), CIT (consumable items transfer), and C&E (Construction and Equipment).

  9. Learning Projectile Motion with the Computer Game "Scorched 3D"

    ERIC Educational Resources Information Center

    Jurcevic, John S.

    2008-01-01

    For most of our students, video games are a normal part of their lives. We should take advantage of this medium to teach physics in a manner that is engrossing for our students. In particular, modern video games incorporate accurate physics in their game engines, and they allow us to visualize the physics through flashy and captivating graphics. I…

  10. ChalkBoard: Mapping Functions to Polygons

    NASA Astrophysics Data System (ADS)

    Matlage, Kevin; Gill, Andy

    ChalkBoard is a domain specific language for describing images. The ChalkBoard language is uncompromisingly functional and encourages the use of modern functional idioms. ChalkBoard uses off-the-shelf graphics cards to speed up rendering of functional descriptions. In this paper, we describe the design of the core ChalkBoard language, and the architecture of our static image generation accelerator.

  11. Modern Display Technologies for Airborne Applications.

    DTIC Science & Technology

    1983-04-01

    In the case of LED head-down direct view displays, this requires that special attention be paid to the optical filtering and the electrical drive/address... effectively attenuates the LED specular reflectance component, the colour and neutral density filtering attenuate the diffuse component and the... filter techniques are planned for use with video, multi-colour and advanced versions of numeric, alphanumeric and graphic displays; this technique

  12. Time-dependent density-functional theory in massively parallel computer architectures: the octopus project

    NASA Astrophysics Data System (ADS)

    Andrade, Xavier; Alberdi-Rodriguez, Joseba; Strubbe, David A.; Oliveira, Micael J. T.; Nogueira, Fernando; Castro, Alberto; Muguerza, Javier; Arruabarrena, Agustin; Louie, Steven G.; Aspuru-Guzik, Alán; Rubio, Angel; Marques, Miguel A. L.

    2012-06-01

    Octopus is a general-purpose density-functional theory (DFT) code, with a particular emphasis on the time-dependent version of DFT (TDDFT). In this paper we present the ongoing efforts to achieve the parallelization of octopus. We focus on the real-time variant of TDDFT, where the time-dependent Kohn-Sham equations are directly propagated in time. This approach has great potential for execution in massively parallel systems such as modern supercomputers with thousands of processors and graphics processing units (GPUs). For harvesting the potential of conventional supercomputers, the main strategy is a multi-level parallelization scheme that combines the inherent scalability of real-time TDDFT with a real-space grid domain-partitioning approach. A scalable Poisson solver is critical for the efficiency of this scheme. For GPUs, we show how using blocks of Kohn-Sham states provides the required level of data parallelism and that this strategy is also applicable for code optimization on standard processors. Our results show that real-time TDDFT, as implemented in octopus, can be the method of choice for studying the excited states of large molecular systems in modern parallel architectures.
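
    The remark that blocks of Kohn-Sham states supply the needed data parallelism can be illustrated with a small sketch. The kernel below is an illustrative toy under assumed storage conventions, not octopus code: it applies a local potential v(r) to a state-contiguous block of states, so that neighbouring threads touch neighbouring states and memory accesses coalesce.

        // Illustrative CUDA kernel (not from octopus): apply a local potential v(r) to a
        // block of nst Kohn-Sham states stored state-contiguous, psi[ip * nst + ist].
        // Parallelism runs over grid points *and* states, keeping threads busy even for
        // modest real-space grids.
        #include <cuda_runtime.h>

        __global__ void applyLocalPotential(const double* v, const double* psi_in,
                                            double* psi_out, int npoints, int nst)
        {
            int idx = blockIdx.x * blockDim.x + threadIdx.x;   // flattened (point, state) index
            if (idx >= npoints * nst) return;
            int ip = idx / nst;                                // real-space grid point
            psi_out[idx] = v[ip] * psi_in[idx];                // same potential for every state
        }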

  13. Time-dependent density-functional theory in massively parallel computer architectures: the OCTOPUS project.

    PubMed

    Andrade, Xavier; Alberdi-Rodriguez, Joseba; Strubbe, David A; Oliveira, Micael J T; Nogueira, Fernando; Castro, Alberto; Muguerza, Javier; Arruabarrena, Agustin; Louie, Steven G; Aspuru-Guzik, Alán; Rubio, Angel; Marques, Miguel A L

    2012-06-13

    Octopus is a general-purpose density-functional theory (DFT) code, with a particular emphasis on the time-dependent version of DFT (TDDFT). In this paper we present the ongoing efforts to achieve the parallelization of octopus. We focus on the real-time variant of TDDFT, where the time-dependent Kohn-Sham equations are directly propagated in time. This approach has great potential for execution in massively parallel systems such as modern supercomputers with thousands of processors and graphics processing units (GPUs). For harvesting the potential of conventional supercomputers, the main strategy is a multi-level parallelization scheme that combines the inherent scalability of real-time TDDFT with a real-space grid domain-partitioning approach. A scalable Poisson solver is critical for the efficiency of this scheme. For GPUs, we show how using blocks of Kohn-Sham states provides the required level of data parallelism and that this strategy is also applicable for code optimization on standard processors. Our results show that real-time TDDFT, as implemented in octopus, can be the method of choice for studying the excited states of large molecular systems in modern parallel architectures.

  14. Grace: A cross-platform micromagnetic simulator on graphics processing units

    NASA Astrophysics Data System (ADS)

    Zhu, Ru

    2015-12-01

    A micromagnetic simulator running on graphics processing units (GPUs) is presented. Unlike the GPU implementations of other research groups, which predominantly run on NVidia's CUDA platform, this simulator is developed with C++ Accelerated Massive Parallelism (C++ AMP) and is hardware platform independent. It runs on GPUs from vendors including NVidia, AMD and Intel, and achieves a significant performance boost of up to two orders of magnitude compared to previous central processing unit (CPU) simulators. The simulator paves the way for running large micromagnetic simulations on both high-end workstations with dedicated graphics cards and low-end personal computers with integrated graphics cards, and is freely available to download.
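
    Grace itself is written in C++ AMP; purely to illustrate the kind of data-parallel stencil update at the heart of such micromagnetic codes, the following CUDA sketch (assumed 2D grid, free boundaries, and a lumped exchange coefficient, none of it taken from Grace) evaluates a nearest-neighbour exchange field:

        // Illustrative CUDA kernel (not Grace's C++ AMP code): exchange field of a 2D
        // magnetization grid m[3 * (y * nx + x) + component] using a 5-point Laplacian
        // with free (clamped) boundaries.
        #include <cuda_runtime.h>

        __global__ void exchangeField(const float* m, float* h, int nx, int ny, float coeff)
        {
            int x = blockIdx.x * blockDim.x + threadIdx.x;
            int y = blockIdx.y * blockDim.y + threadIdx.y;
            if (x >= nx || y >= ny) return;

            int c = 3 * (y * nx + x);
            for (int k = 0; k < 3; ++k) {                 // loop over mx, my, mz
                float lap = -4.0f * m[c + k];
                lap += m[3 * (y * nx + max(x - 1, 0))      + k];
                lap += m[3 * (y * nx + min(x + 1, nx - 1)) + k];
                lap += m[3 * (max(y - 1, 0) * nx + x)      + k];
                lap += m[3 * (min(y + 1, ny - 1) * nx + x) + k];
                h[c + k] = coeff * lap;                   // coeff lumps 2*A_ex/(mu0*Ms*dx^2)
            }
        }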

  15. The Effects of Interactive Graphics Analogies on Recall of Concepts in Science

    DTIC Science & Technology

    1976-08-01

    processing, in the Craik and Lockhart sense, were induced by this postlesson condition. 3. The fact that students were able to deal with both... higher scores on a graphics posttest in Experiment III. These results suggest that both shallow and deep processing, in the Craik and Lockhart sense, were induced by

  16. Graphic Arts: Process Camera, Stripping, and Platemaking. Fourth Edition. Teacher Edition [and] Student Edition.

    ERIC Educational Resources Information Center

    Multistate Academic and Vocational Curriculum Consortium, Stillwater, OK.

    This publication contains both a teacher edition and a student edition of materials for a course in graphic arts that covers the process camera, stripping, and platemaking. The course introduces basic concepts and skills necessary for entry-level employment in a graphic communication occupation. The contents of the materials are tied to measurable…

  17. Advances in the TRIDEC Cloud

    NASA Astrophysics Data System (ADS)

    Hammitzsch, Martin; Spazier, Johannes; Reißland, Sven

    2016-04-01

    The TRIDEC Cloud is a platform that merges several complementary cloud-based services for instant tsunami propagation calculations and automated background computation with graphics processing units (GPU), for web-mapping of hazard specific geospatial data, and for serving relevant functionality to handle, share, and communicate threat specific information in a collaborative and distributed environment. The platform offers a modern web-based graphical user interface so that operators in warning centres and stakeholders of other involved parties (e.g. CPAs, ministries) just need a standard web browser to access a full-fledged early warning and information system with unique interactive features such as Cloud Messages and Shared Maps. Furthermore, the TRIDEC Cloud can be accessed in different modes, e.g. the monitoring mode, which provides important functionality required to act in a real event, and the exercise-and-training mode, which enables training and exercises with virtual scenarios re-played by a scenario player. The software system architecture and open interfaces facilitate global coverage so that the system is applicable for any region in the world and allow the integration of different sensor systems as well as the integration of other hazard types and use cases different to tsunami early warning. Current advances of the TRIDEC Cloud platform will be summarized in this presentation.

  18. HMI conventions for process control graphics.

    PubMed

    Pikaar, Ruud N

    2012-01-01

    Process operators supervise and control complex processes. To enable the operator to do an adequate job, instrumentation and process control engineers need to address several related topics, such as console design, information design, navigation, and alarm management. In process control upgrade projects, usually a 1:1 conversion of existing graphics is proposed. This paper suggests another approach, which efficiently leads to a reduced number of new, powerful process graphics supported by permanent process overview displays. In addition, a road map for structuring content (process information) and conventions for the presentation of objects, symbols, and so on have been developed. The impact of the human factors engineering approach on process control upgrade projects is illustrated by several cases.

  19. LCFM - LIVING COLOR FRAME MAKER: PC GRAPHICS GENERATION AND MANAGEMENT TOOL FOR REAL-TIME APPLICATIONS

    NASA Technical Reports Server (NTRS)

    Truong, L. V.

    1994-01-01

    Computer graphics are often applied for better understanding and interpretation of data under observation. These graphics become more complicated when animation is required during "run-time", as found in many typical modern artificial intelligence and expert systems. Living Color Frame Maker is a solution to many of these real-time graphics problems. Living Color Frame Maker (LCFM) is a graphics generation and management tool for IBM or IBM compatible personal computers. To eliminate graphics programming, the graphic designer can use LCFM to generate computer graphics frames. The graphical frames are then saved as text files, in a readable and disclosed format, which can be easily accessed and manipulated by user programs for a wide range of "real-time" visual information applications. For example, LCFM can be implemented in a frame-based expert system for visual aids in management of systems. For monitoring, diagnosis, and/or controlling purposes, circuit or systems diagrams can be brought to "life" by using designated video colors and intensities to symbolize the status of hardware components (via real-time feedback from sensors). Thus status of the system itself can be displayed. The Living Color Frame Maker is user friendly with graphical interfaces, and provides on-line help instructions. All options are executed using mouse commands and are displayed on a single menu for fast and easy operation. LCFM is written in C++ using the Borland C++ 2.0 compiler for IBM PC series computers and compatible computers running MS-DOS. The program requires a mouse and an EGA/VGA display. A minimum of 77K of RAM is also required for execution. The documentation is provided in electronic form on the distribution medium in WordPerfect format. A sample MS-DOS executable is provided on the distribution medium. The standard distribution medium for this program is one 5.25 inch 360K MS-DOS format diskette. The contents of the diskette are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. The Living Color Frame Maker tool was developed in 1992.

  20. An unsupervised learning approach to find ovarian cancer genes through integration of biological data

    PubMed Central

    2015-01-01

    Cancer is a disease characterized largely by the accumulation of out-of-control somatic mutations during the lifetime of a patient. Distinguishing driver mutations from passenger mutations has posed a challenge in modern cancer research. With the advanced development of microarray experiments and clinical studies, a large number of candidate cancer genes have been extracted, and distinguishing informative genes among them is essential. Accordingly, we propose a framework to find informative cancer genes using mutation data from ovarian cancers. In our model we utilize the patient gene mutation profiles, gene expression data and a gene-gene interaction network to construct a graphical representation of genes and patients. Markov processes for mutations and patients are triggered separately. After this process, cancer genes are prioritized automatically by examining their scores in the stationary distribution given by the leading eigenvector. Extensive experiments demonstrate that the integration of heterogeneous sources of information is essential in finding important cancer genes. PMID:26328548

  1. High resolution ultrasonic spectroscopy system for nondestructive evaluation

    NASA Technical Reports Server (NTRS)

    Chen, C. H.

    1991-01-01

    With increased demand for high resolution ultrasonic evaluation, computer based systems or work stations become essential. The ultrasonic spectroscopy method of nondestructive evaluation (NDE) was used to develop a high resolution ultrasonic inspection system supported by modern signal processing, pattern recognition, and neural network technologies. The basic system which was completed consists of a 386/20 MHz PC (IBM AT compatible), a pulser/receiver, a digital oscilloscope with serial and parallel communications to the computer, an immersion tank with motor control of X-Y axis movement, and the supporting software package, IUNDE, for interactive ultrasonic evaluation. Although the hardware components are commercially available, the software development is entirely original. By integrating signal processing, pattern recognition, maximum entropy spectral analysis, and artificial neural network functions into the system, many NDE tasks can be performed. The high resolution graphics capability provides visualization of complex NDE problems. The phase 3 efforts involve intensive marketing of the software package and collaborative work with industrial sectors.

  2. INSPECT: A graphical user interface software package for IDARC-2D

    NASA Astrophysics Data System (ADS)

    AlHamaydeh, Mohammad; Najib, Mohamad; Alawnah, Sameer

    Modern day Performance-Based Earthquake Engineering (PBEE) pivots about nonlinear analysis and its feasibility. IDARC-2D is a widely used and accepted software package for nonlinear analysis; it possesses many attractive features and capabilities. However, it is operated from the command prompt on DOS/Unix systems and requires the user to create elaborate text-based input files. To complement and facilitate the use of IDARC-2D, a pre-processing GUI software package (INSPECT) is introduced herein. INSPECT is created in the C# environment and utilizes the .NET libraries and an SQLite database. Extensive testing and verification demonstrated successful and high-fidelity re-creation of several existing IDARC-2D input files. Its design and built-in features aim at expediting, simplifying and assisting in the modeling process. Moreover, this practical aid enhances the reliability of the results and improves accuracy by reducing and/or eliminating many potential and common input mistakes. Such benefits would be appreciated by novice and veteran IDARC-2D users alike.

  3. Neural image analysis in the process of quality assessment: domestic pig oocytes

    NASA Astrophysics Data System (ADS)

    Boniecki, P.; Przybył, J.; Kuzimska, T.; Mueller, W.; Raba, B.; Lewicki, A.; Przybył, K.; Zaborowicz, M.; Koszela, K.

    2014-04-01

    The questions related to quality classification of animal oocytes are explored by numerous scientific and research centres. This research is important, particularly in the context of improving the breeding value of farm animals. The methods leading to the stimulation of normal development of a larger number of fertilised animal oocytes in extracorporeal conditions are of special importance. Growing interest in the techniques of supported reproduction resulted in searching for new, increasingly effective methods for quality assessment of mammalian gametes and embryos. Progress in the production of in vitro animal embryos in fact depends on proper classification of obtained oocytes. The aim of this paper was the development of an original method for quality assessment of oocytes, performed on the basis of their graphical presentation in the form of microscopic digital images. The classification process was implemented on the basis of the information coded in the form of microphotographic pictures of the oocytes of domestic pig, using the modern methods of neural image analysis.

  4. Acceleration of integral imaging based incoherent Fourier hologram capture using graphic processing unit.

    PubMed

    Jeong, Kyeong-Min; Kim, Hee-Seung; Hong, Sung-In; Lee, Sung-Keun; Jo, Na-Young; Kim, Yong-Soo; Lim, Hong-Gi; Park, Jae-Hyeung

    2012-10-08

    Speed enhancement of integral imaging based incoherent Fourier hologram capture using a graphics processing unit is reported. The integral imaging based method enables exact hologram capture of real, physically existing three-dimensional objects under regular incoherent illumination. In our implementation, we apply a parallel computation scheme using the graphics processing unit, accelerating the processing speed. Using the enhanced speed of hologram capture, we also implement a pseudo real-time hologram capture and optical reconstruction system. The overall operation speed is measured to be 1 frame per second.
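
    Pipelines of this kind are typically dominated by large two-dimensional Fourier transforms of the synthesized field. The snippet below is only a guess at that step, not the authors' implementation: it uses cuFFT to run a 1024 x 1024 complex-to-complex transform on the GPU.

        // Minimal cuFFT sketch (illustrative, not the authors' code): 2D FFT of a
        // 1024 x 1024 complex field on the GPU, the kind of transform at the core of
        // Fourier-hologram synthesis.  Build with: nvcc fft.cu -lcufft
        #include <cufft.h>
        #include <cuda_runtime.h>
        #include <cstdio>

        int main()
        {
            const int N = 1024;
            cufftComplex* d_field;
            cudaMalloc(&d_field, sizeof(cufftComplex) * N * N);
            cudaMemset(d_field, 0, sizeof(cufftComplex) * N * N);   // placeholder input field

            cufftHandle plan;
            cufftPlan2d(&plan, N, N, CUFFT_C2C);                     // 1024 x 1024 complex-to-complex
            cufftExecC2C(plan, d_field, d_field, CUFFT_FORWARD);     // in-place forward transform
            cudaDeviceSynchronize();

            cufftDestroy(plan);
            cudaFree(d_field);
            printf("2D FFT done\n");
            return 0;
        }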

  5. Write Is Right: Using Graphic Organizers to Improve Student Mathematical Problem Solving

    ERIC Educational Resources Information Center

    Zollman, Alan

    2012-01-01

    Teachers have used graphic organizers successfully in teaching the writing process. This paper describes graphic organizers and their potential mathematics benefits for both students and teachers, elucidates a specific graphic organizer adaptation for mathematical problem solving, and discusses results using the "four-corners-and-a-diamond"…

  6. Building Regression Models: The Importance of Graphics.

    ERIC Educational Resources Information Center

    Dunn, Richard

    1989-01-01

    Points out reasons for using graphical methods to teach simple and multiple regression analysis. Argues that a graphically oriented approach has considerable pedagogic advantages in the exposition of simple and multiple regression. Shows that graphical methods may play a central role in the process of building regression models. (Author/LS)

  7. Mathematical Creative Activity and the Graphic Calculator

    ERIC Educational Resources Information Center

    Duda, Janina

    2011-01-01

    Teaching mathematics using graphic calculators has been an issue of didactic discussions for years. Finding ways in which graphic calculators can enrich the development process of creative activity in mathematically gifted students between the ages of 16-17 is the focus of this article. Research was conducted using graphic calculators with…

  8. Defining Identities through Multiliteracies: EL Teens Narrate Their Immigration Experiences as Graphic Stories

    ERIC Educational Resources Information Center

    Danzak, Robin L.

    2011-01-01

    Based on a framework of identity-as-narrative and multiliteracies, this article describes "Graphic Journeys," a multimedia literacy project in which English learners (ELs) in middle school created graphic stories that expressed their families' immigration experiences. The process involved reading graphic novels, journaling, interviewing, and…

  9. Services for domain specific developments in the Cloud

    NASA Astrophysics Data System (ADS)

    Schwichtenberg, Horst; Gemuend, André

    2015-04-01

    We will discuss and demonstrate the possibilities of new Cloud services in which the complete development cycle, from programming to testing, takes place in the Cloud. This can also be combined with dedicated research domain specific services, hiding the burden of accessing the available infrastructures. As an example, we will show a service that is intended to complement the services of the VERCE project's infrastructure, a service that utilizes Cloud resources to offer simplified execution of data pre- and post-processing scripts. It offers users access to the ObsPy seismological toolbox for processing data with the Python programming language, executed on virtual Cloud resources in a secured sandbox. The solution encompasses a frontend with a modern graphical user interface, a messaging infrastructure as well as Python worker nodes for background processing. All components are deployable in the Cloud and have been tested on different environments based on OpenStack and OpenNebula. Deployments on commercial, public Clouds will be tested in the future.

  10. Robot graphic simulation testbed

    NASA Technical Reports Server (NTRS)

    Cook, George E.; Sztipanovits, Janos; Biegl, Csaba; Karsai, Gabor; Springfield, James F.

    1991-01-01

    The objective of this research was twofold. First, the basic capabilities of ROBOSIM (graphical simulation system) were improved and extended by taking advantage of advanced graphic workstation technology and artificial intelligence programming techniques. Second, the scope of the graphic simulation testbed was extended to include general problems of Space Station automation. Hardware support for 3-D graphics and high processing performance make high resolution solid modeling, collision detection, and simulation of structural dynamics computationally feasible. The Space Station is a complex system with many interacting subsystems. Design and testing of automation concepts demand modeling of the affected processes, their interactions, and that of the proposed control systems. The automation testbed was designed to facilitate studies in Space Station automation concepts.

  11. Explicit integration with GPU acceleration for large kinetic networks

    DOE PAGES

    Brock, Benjamin; Belt, Andrew; Billings, Jay Jay; ...

    2015-09-15

    In this study, we demonstrate the first implementation of recently-developed fast explicit kinetic integration algorithms on modern graphics processing unit (GPU) accelerators. Taking as a generic test case a Type Ia supernova explosion with an extremely stiff thermonuclear network having 150 isotopic species and 1604 reactions coupled to hydrodynamics using operator splitting, we demonstrate the capability to solve of order 100 realistic kinetic networks in parallel in the same time that standard implicit methods can solve a single such network on a CPU. In addition, this orders-of-magnitude decrease in computation time for solving systems of realistic kinetic networks implies that important coupled, multiphysics problems in various scientific and technical fields that were intractable, or could be simulated only with highly schematic kinetic networks, are now computationally feasible.
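
    The speedup rests on solving many independent networks concurrently. The kernel below is a deliberately simplified sketch (plain explicit Euler with a precomputed right-hand side, not the stabilized explicit algorithms of the paper) of a one-network-per-block, one-species-per-thread layout:

        // Illustrative CUDA kernel (not the authors' code): advance many independent
        // kinetic networks with a simple explicit Euler step.  One thread block per
        // network, one thread per isotopic species; dydt is assumed precomputed.
        #include <cuda_runtime.h>

        __global__ void explicitEulerStep(double* y, const double* dydt,
                                          int nSpecies, double dt)
        {
            int net = blockIdx.x;                       // which network
            int sp  = threadIdx.x;                      // which species within the network
            if (sp < nSpecies) {
                int i = net * nSpecies + sp;
                y[i] += dt * dydt[i];                   // real codes use stabilized/QSS updates
            }
        }

        // launch: explicitEulerStep<<<nNetworks, nSpecies>>>(d_y, d_dydt, 150, dt);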

  12. KSC-99pp0807

    NASA Image and Video Library

    1999-07-08

    KENNEDY SPACE CENTER, FLA. -- In the cockpit of the orbiter Atlantis, which is in the Orbiter Processing Facility, U.S. Rep. Dave Weldon (right) looks at the newly installed Multifunction Electronic Display Subsystem (MEDS), known as the "glass cockpit." At left is Laural Patrick, a systems engineer with MEDS. Weldon is on the House Science Committee and vice chairman of the Space and Aeronautics Subcommittee. He was in Palmdale, Calif., when Atlantis underwent the modification and he wanted to see the final product. The full-color, flat-panel MEDS upgrade improves crew/orbiter interaction with easy-to-read, graphic portrayals of key flight indicators like attitude display and Mach speed. The installation makes Atlantis the most modern orbiter in the fleet and equals the systems on current commercial jet airliners and military aircraft. Atlantis is scheduled to fly on mission STS-101 in early December

  13. CUDA-Accelerated Geodesic Ray-Tracing for Fiber Tracking

    PubMed Central

    van Aart, Evert; Sepasian, Neda; Jalba, Andrei; Vilanova, Anna

    2011-01-01

    Diffusion Tensor Imaging (DTI) allows the diffusion of water in fibrous tissue to be measured noninvasively. By reconstructing the fibers from DTI data using a fiber-tracking algorithm, we can deduce the structure of the tissue. In this paper, we outline an approach to accelerating such a fiber-tracking algorithm using a Graphics Processing Unit (GPU). This algorithm, which is based on the calculation of geodesics, has shown promising results for both synthetic and real data, but is limited in its applicability by its high computational requirements. We present a solution which uses the parallelism offered by modern GPUs, in combination with the CUDA platform by NVIDIA, to significantly reduce the execution time of the fiber-tracking algorithm. Compared to a multithreaded CPU implementation of the same algorithm, our GPU mapping achieves a speedup factor of up to 40 times. PMID:21941525
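
    The GPU mapping assigns one thread to each seed point and lets it integrate its own fiber. The sketch below simplifies drastically (Euler steps along a precomputed principal-direction field with nearest-voxel lookup, rather than the geodesic ray-tracing of the paper) but shows the thread-per-seed structure:

        // Illustrative CUDA kernel: each thread propagates one fiber seed through a
        // precomputed principal-direction field (simplified; the paper integrates the
        // full geodesic equations of the DTI-derived metric).
        #include <cuda_runtime.h>

        __global__ void traceFibers(const float3* seeds, const float3* dirField,
                                    float3* paths, int nSeeds, int dimX, int dimY, int dimZ,
                                    int nSteps, float stepSize)
        {
            int s = blockIdx.x * blockDim.x + threadIdx.x;
            if (s >= nSeeds) return;

            float3 p = seeds[s];
            for (int t = 0; t < nSteps; ++t) {
                int ix = (int)p.x, iy = (int)p.y, iz = (int)p.z;      // nearest-voxel lookup
                if (ix < 0 || iy < 0 || iz < 0 || ix >= dimX || iy >= dimY || iz >= dimZ) break;
                float3 d = dirField[(iz * dimY + iy) * dimX + ix];
                p.x += stepSize * d.x;  p.y += stepSize * d.y;  p.z += stepSize * d.z;
                paths[s * nSteps + t] = p;                            // record the trajectory
            }
        }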

  14. A new approach to configurable primary data collection.

    PubMed

    Stanek, J; Babkin, E; Zubov, M

    2016-09-01

    The formats, semantics and operational rules of data processing tasks in genomics (and health in general) are highly divergent and can change rapidly. In such an environment, the problem of consistent transformation and loading of heterogeneous input data to various target repositories becomes a critical success factor. The objective of the project was to design a new conceptual approach to configurable data transformation, de-identification, and submission of health and genomic data sets. The main motivation was to facilitate automated or human-driven data uploading, as well as consolidation of heterogeneous sources in large genomic or health projects. Modern methods of on-demand specialization of generic software components were applied. For specification of input-output data and required data collection activities, we propose a simple data model of flat tables as well as a domain-oriented graphical interface and a portable representation of transformations in XML. Using such methods, a prototype of the Configurable Data Collection System (CDCS) was implemented in the Java programming language with Swing graphical interfaces. The core logic of transformations was implemented as a library of reusable plugins. The solution is implemented as a software prototype of a configurable service-oriented system for semi-automatic data collection, transformation, sanitization and safe uploading to heterogeneous data repositories (CDCS). To address the dynamic nature of data schemas and data collection processes, the CDCS prototype facilitates interactive, user-driven configuration of the data collection process and extends basic functionality with a wide range of third-party plugins. Notably, our solution also allows for the reduction of manual data entry for data originally missing in the output data sets. First experiments and feedback from domain experts confirm that the prototype is flexible, configurable and extensible; runs well on data owners' systems; and is not dependent on vendor standards.

  15. Parallel processor-based raster graphics system architecture

    DOEpatents

    Littlefield, Richard J.

    1990-01-01

    An apparatus for generating raster graphics images from the graphics command stream includes a plurality of graphics processors connected in parallel, each adapted to receive any part of the graphics command stream for processing the command stream part into pixel data. The apparatus also includes a frame buffer for mapping the pixel data to pixel locations and an interconnection network for interconnecting the graphics processors to the frame buffer. Through the interconnection network, each graphics processor may access any part of the frame buffer concurrently with another graphics processor accessing any other part of the frame buffer. The plurality of graphics processors can thereby transmit concurrently pixel data to pixel locations in the frame buffer.
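
    The patented architecture is hardware, not software, but the idea of several processors rasterizing parts of a command stream and writing concurrently into one shared frame buffer has a direct present-day analogue. The CUDA sketch below is only that analogue, with a made-up RectCmd command format: each thread block rasterizes one rectangle command into a device-memory frame buffer, with atomics resolving overlapping writes.

        // Hedged CUDA analogue of the patent's idea (not its hardware): one thread block
        // rasterizes one axis-aligned rectangle from a "command stream" into a shared
        // frame buffer; atomicExch makes concurrent writes to the same pixel well defined.
        #include <cuda_runtime.h>

        struct RectCmd { int x0, y0, x1, y1; unsigned int color; };

        __global__ void rasterizeRects(const RectCmd* cmds, unsigned int* framebuffer,
                                       int width, int height)
        {
            RectCmd c = cmds[blockIdx.x];                          // one command per block
            for (int y = c.y0 + threadIdx.y; y < c.y1; y += blockDim.y)
                for (int x = c.x0 + threadIdx.x; x < c.x1; x += blockDim.x)
                    if (x >= 0 && y >= 0 && x < width && y < height)
                        atomicExch(&framebuffer[y * width + x], c.color);  // last writer wins
        }

        // launch: rasterizeRects<<<nCommands, dim3(16, 16)>>>(d_cmds, d_fb, width, height);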

  16. Scientific Visualization: The Modern Oscilloscope for "Seeing the Unseeable" (LBNL Summer Lecture Series)

    ScienceCinema

    Bethel, E. Wes [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division and Scientific Visualization Group

    2018-05-07

    Summer Lecture Series 2008: Scientific visualization transforms abstract data into readily comprehensible images, provides a vehicle for "seeing the unseeable," and plays a central role in both experimental and computational sciences. Wes Bethel, who heads the Scientific Visualization Group in the Computational Research Division, presents an overview of visualization and computer graphics, current research challenges, and future directions for the field.

  17. The Transition from Paper to Digital: Lessons for Medical Specialty Societies

    PubMed Central

    Miller, Donald W.

    2008-01-01

    Medical specialty societies often serve their membership by publishing paper forms that may simultaneously include practice guidelines, dataset specifications, and suggested layouts. Many times these forms become de facto standards for the specialty but transform poorly to the logic, structure, preciseness, and flexibility needed in modern electronic medical records. This paper analyzes one such form - a prenatal record published by the American College of Obstetricians and Gynecologists - with the intent to elucidate lessons for other specialty societies who might craft their recommendations to be effectively incorporated within modern electronic medical records. Lessons learned include separating datasets from guidelines/recommendations, specifying, codifying, and qualifying atomic data elements, and leaving graphic design to professionals. PMID:18998856

  18. cudaMap: a GPU accelerated program for gene expression connectivity mapping.

    PubMed

    McArt, Darragh G; Bankhead, Peter; Dunne, Philip D; Salto-Tellez, Manuel; Hamilton, Peter; Zhang, Shu-Dong

    2013-10-11

    Modern cancer research often involves large datasets and the use of sophisticated statistical techniques. Together these add a heavy computational load to the analysis, which is often coupled with issues surrounding data accessibility. Connectivity mapping is an advanced bioinformatic and computational technique dedicated to therapeutics discovery and drug re-purposing around differential gene expression analysis. On a normal desktop PC, it is common for the connectivity mapping task with a single gene signature to take > 2h to complete using sscMap, a popular Java application that runs on standard CPUs (Central Processing Units). Here, we describe new software, cudaMap, which has been implemented using CUDA C/C++ to harness the computational power of NVIDIA GPUs (Graphics Processing Units) to greatly reduce processing times for connectivity mapping. cudaMap can identify candidate therapeutics from the same signature in just over thirty seconds when using an NVIDIA Tesla C2050 GPU. Results from the analysis of multiple gene signatures, which would previously have taken several days, can now be obtained in as little as 10 minutes, greatly facilitating candidate therapeutics discovery with high throughput. We are able to demonstrate dramatic speed differentials between GPU assisted performance and CPU executions as the computational load increases for high accuracy evaluation of statistical significance. Emerging 'omics' technologies are constantly increasing the volume of data and information to be processed in all areas of biomedical research. Embracing the multicore functionality of GPUs represents a major avenue of local accelerated computing. cudaMap will make a strong contribution in the discovery of candidate therapeutics by enabling speedy execution of heavy duty connectivity mapping tasks, which are increasingly required in modern cancer research. cudaMap is open source and can be freely downloaded from http://purl.oclc.org/NET/cudaMap.
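
    Connectivity mapping boils down to scoring one query signature against a large bank of reference expression profiles, which is what the GPU parallelizes here. The kernel below is a much-simplified stand-in (a signed-rank dot product per reference profile, with invented parameter names), not cudaMap's actual statistic or source code:

        // Hedged sketch of a connectivity-style scoring kernel (not cudaMap's code): one
        // thread scores the query signature against one reference profile by summing the
        // signed ranks of the signature genes in that profile.
        #include <cuda_runtime.h>

        __global__ void connectionScores(const int* sigGene, const int* sigSign, int sigLen,
                                         const float* refRanks, int nGenes, int nRefs,
                                         float* score)
        {
            int r = blockIdx.x * blockDim.x + threadIdx.x;     // reference profile index
            if (r >= nRefs) return;

            float s = 0.0f;
            for (int g = 0; g < sigLen; ++g)                   // genes in the query signature
                s += sigSign[g] * refRanks[r * nGenes + sigGene[g]];
            score[r] = s;                                      // significance is assessed afterwards
        }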

  19. Viewpoints: A High-Performance High-Dimensional Exploratory Data Analysis Tool

    NASA Astrophysics Data System (ADS)

    Gazis, P. R.; Levit, C.; Way, M. J.

    2010-12-01

    Scientific data sets continue to increase in both size and complexity. In the past, dedicated graphics systems at supercomputing centers were required to visualize large data sets, but as the price of commodity graphics hardware has dropped and its capability has increased, it is now possible, in principle, to view large complex data sets on a single workstation. To do this in practice, an investigator will need software that is written to take advantage of the relevant graphics hardware. The Viewpoints visualization package described herein is an example of such software. Viewpoints is an interactive tool for exploratory visual analysis of large high-dimensional (multivariate) data. It leverages the capabilities of modern graphics boards (GPUs) to run on a single workstation or laptop. Viewpoints is minimalist: it attempts to do a small set of useful things very well (or at least very quickly) in comparison with similar packages today. Its basic feature set includes linked scatter plots with brushing, dynamic histograms, normalization, and outlier detection/removal. Viewpoints was originally designed for astrophysicists, but it has since been used in a variety of fields that range from astronomy, quantum chemistry, fluid dynamics, machine learning, bioinformatics, and finance to information technology server log mining. In this article, we describe the Viewpoints package and show examples of its usage.

  20. Engineering visualization utilizing advanced animation

    NASA Technical Reports Server (NTRS)

    Sabionski, Gunter R.; Robinson, Thomas L., Jr.

    1989-01-01

    Engineering visualization is the use of computer graphics to depict engineering analysis and simulation in visual form from project planning through documentation. Graphics displays let engineers see data represented dynamically which permits the quick evaluation of results. The current state of graphics hardware and software generally allows the creation of two types of 3D graphics. The use of animated video as an engineering visualization tool is presented. The engineering, animation, and videography aspects of animated video production are each discussed. Specific issues include the integration of staffing expertise, hardware, software, and the various production processes. A detailed explanation of the animation process reveals the capabilities of this unique engineering visualization method. Automation of animation and video production processes are covered and future directions are proposed.

  1. Target Information Processing: A Joint Decision and Estimation Approach

    DTIC Science & Technology

    2012-03-29

    ground targets (track-before-detect) using a computer cluster and graphics processing unit. Estimation and filtering theory is one of the most important

  2. NGL Viewer: a web application for molecular visualization

    PubMed Central

    Rose, Alexander S.; Hildebrand, Peter W.

    2015-01-01

    The NGL Viewer (http://proteinformatics.charite.de/ngl) is a web application for the visualization of macromolecular structures. By fully adopting capabilities of modern web browsers, such as WebGL, for molecular graphics, the viewer can interactively display large molecular complexes and is also unaffected by the retirement of third-party plug-ins like Flash and Java Applets. Generally, the web application offers comprehensive molecular visualization through a graphical user interface so that life scientists can easily access and profit from available structural data. It supports common structural file-formats (e.g. PDB, mmCIF) and a variety of molecular representations (e.g. ‘cartoon, spacefill, licorice’). Moreover, the viewer can be embedded in other web sites to provide specialized visualizations of entries in structural databases or results of structure-related calculations. PMID:25925569

  3. Graphical Man/Machine Communications

    DTIC Science & Technology

    Progress is reported concerning the use of computer controlled graphical displays in the areas of radiation diffusion and hydrodynamics, general... ventricular dynamics. Progress is continuing on the use of computer graphics in architecture. Some progress in halftone graphics is reported with no basic... developments presented. Colored halftone perspective pictures are being used to represent multivariable situations. Nonlinear waveform processing is

  4. EMG amplifier with wireless data transmission

    NASA Astrophysics Data System (ADS)

    Kowalski, Grzegorz; Wildner, Krzysztof

    2017-08-01

    Wireless medical diagnostics is a trend in modern technology used in medicine. This paper presents the concept, hardware architecture, and software implementation of an electromyography (EMG) signal amplifier with wireless data transmission. The amplifier consists of three components: an analogue bioelectric signal processing module, a micro-controller circuit, and an application enabling data acquisition on a personal computer. The analogue bioelectric signal processing circuit receives electromyography signals from the skin surface, performs initial analogue processing, and prepares the signals for further digital processing. The second module is a micro-controller circuit designed to wirelessly transmit the electromyography signals from the analogue signal converter to a personal computer; its purpose is to eliminate the need for wired connections between the patient and the data logging device. The third block is a computer application designed to display the transmitted electromyography signals and to capture and analyse the data, providing a graphical representation of the collected signals. The entire device has been thoroughly tested to ensure proper functioning. In use, the device displayed the captured electromyography signal from the arm of the patient. Amplitude-frequency characteristics were measured in order to investigate the bandwidth and the overall gain of the device.

  5. Weather information network including graphical display

    NASA Technical Reports Server (NTRS)

    Leger, Daniel R. (Inventor); Burdon, David (Inventor); Son, Robert S. (Inventor); Martin, Kevin D. (Inventor); Harrison, John (Inventor); Hughes, Keith R. (Inventor)

    2006-01-01

    An apparatus for providing weather information onboard an aircraft includes a processor unit and a graphical user interface. The processor unit processes weather information after it is received onboard the aircraft from a ground-based source, and the graphical user interface provides a graphical presentation of the weather information to a user onboard the aircraft. Preferably, the graphical user interface includes one or more user-selectable options for graphically displaying at least one of convection information, turbulence information, icing information, weather satellite information, SIGMET information, significant weather prognosis information, and winds aloft information.

  6. Study of the Usability of Spaced Retrieval Exercise Using Mobile Devices for Alzheimer’s Disease Rehabilitation

    PubMed Central

    Mowafi, Yaser; Mashal, Ehab

    2014-01-01

    Background Alzheimer's disease (AD) is an irreversible brain disease that slowly destroys memory and thinking skills, and eventually the ability to carry out the simplest daily tasks. Recent studies showed that people with AD might actually benefit from physical exercises and rehabilitation processes. Studies show that rehabilitation would also add value in making the day for an individual with AD a little less foggy, frustrating, isolated, and stressful for as long as possible. Objective The focus of our work was to explore the use of modern mobile technology to enable people with AD to improve their abilities to perform activities of daily living, and hence to promote independence and participation in social activities. Our work also aimed at reducing the burden on caregivers by increasing the AD patients’ sense of competence and ability to handle behavior problems. Methods We developed ADcope, an integrated app that includes several modules that targeted individuals with AD, using mobile devices. We have developed two different user interfaces: text-based and graphic-based. To evaluate the usability of the app, 10 participants with early stages of AD were asked to run the two user interfaces of the spaced retrieval memory exercise using a tablet mobile device. Results We selected 10 participants with early stages of AD (average age: 75 years; 6/10, 60% males, 4/10, 40% females). The average elapsed time per question between the text-based task (14.04 seconds) and the graphic-based task (12.89 seconds) was significantly different (P=.047). There was also a significant difference (P<.001) between the average correct answer score between the text-based task (7.60/10) and the graphic-based task (8.30/10), and between the text-based task (31.50/100) and the graphic-based task (27.20/100; P<.001). Correlation analysis for the graphic-based task showed that the average elapsed time per question and the workload score were negatively correlated (−.93, and −.79, respectively) to the participants’ performance (P<.001 and P=.006, respectively). Conclusions We found that people with early stages of AD used mobile devices successfully without any prior experience in using such devices. Participants’ measured workload scores were low and posttask satisfaction in fulfilling the required task was conceivable. Results indicate better performance, less workload, and better response time for the graphic-based task compared with the text-based task. PMID:25124077

  7. Real-time dynamic display of registered 4D cardiac MR and ultrasound images using a GPU

    NASA Astrophysics Data System (ADS)

    Zhang, Q.; Huang, X.; Eagleson, R.; Guiraudon, G.; Peters, T. M.

    2007-03-01

    In minimally invasive image-guided surgical interventions, different imaging modalities, such as magnetic resonance imaging (MRI), computed tomography (CT), and real-time three-dimensional (3D) ultrasound (US), can provide complementary, multi-spectral image information. Multimodality dynamic image registration is a well-established approach that permits real-time diagnostic information to be enhanced by placing lower-quality real-time images within a high quality anatomical context. For the guidance of cardiac procedures, it would be valuable to register dynamic MRI or CT with intraoperative US. However, in practice, either the high computational cost prohibits such real-time visualization of volumetric multimodal images in a real-world medical environment, or else the resulting image quality is not satisfactory for accurate guidance during the intervention. Modern graphics processing units (GPUs) provide the programmability, parallelism and increased computational precision to begin to address this problem. In this work, we first outline our research on dynamic 3D cardiac MR and US image acquisition, real-time dual-modality registration and US tracking. Then we describe image processing and optimization techniques for 4D (3D + time) cardiac image real-time rendering. We also present our multimodality 4D medical image visualization engine, which directly runs on a GPU in real-time by exploiting the advantages of the graphics hardware. In addition, techniques such as multiple transfer functions for different imaging modalities, dynamic texture binding, advanced texture sampling and multimodality image compositing are employed to facilitate the real-time display and manipulation of the registered dual-modality dynamic 3D MR and US cardiac datasets.
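
    One of the listed techniques, a separate transfer function per modality followed by compositing of the registered volumes, can be sketched per sample as follows. This is an illustration with assumed normalized inputs and 256-entry lookup tables, not the authors' rendering engine:

        // Hedged sketch of dual-modality compositing (not the authors' renderer): for each
        // pixel, one sample from the MR volume and one from the registered US volume are
        // pushed through their own 1D transfer functions and alpha-blended (MR over US).
        #include <cuda_runtime.h>

        __global__ void compositeDualModality(const float* mrSample, const float* usSample,
                                              const float4* mrTF, const float4* usTF,
                                              uchar4* out, int nPixels)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= nPixels) return;

            int ia = min(max((int)(mrSample[i] * 255.0f), 0), 255);   // MR lookup index
            int ib = min(max((int)(usSample[i] * 255.0f), 0), 255);   // US lookup index
            float4 a = mrTF[ia];                                      // MR colour + opacity
            float4 b = usTF[ib];                                      // US colour + opacity

            float alpha = a.w + b.w * (1.0f - a.w);                   // MR-over-US blend
            float inv   = 1.0f / fmaxf(alpha, 1e-6f);
            float r  = (a.x * a.w + b.x * b.w * (1.0f - a.w)) * inv;
            float g  = (a.y * a.w + b.y * b.w * (1.0f - a.w)) * inv;
            float bl = (a.z * a.w + b.z * b.w * (1.0f - a.w)) * inv;
            out[i] = make_uchar4((unsigned char)(r * 255.0f), (unsigned char)(g * 255.0f),
                                 (unsigned char)(bl * 255.0f), (unsigned char)(alpha * 255.0f));
        }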

  8. GPU COMPUTING FOR PARTICLE TRACKING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nishimura, Hiroshi; Song, Kai; Muriki, Krishna

    2011-03-25

    This is a feasibility study of using a modern Graphics Processing Unit (GPU) to parallelize an accelerator particle tracking code. To demonstrate the massive parallelization features provided by GPU computing, a simplified TracyGPU program is developed for dynamic aperture calculation. Performance, issues, and challenges of introducing GPUs are also discussed. General Purpose Computation on Graphics Processing Units (GPGPU) brings massive parallel computing capabilities to numerical calculation. However, the unique architecture of the GPU requires a comprehensive understanding of the hardware and programming model in order to optimize existing applications well. In the field of accelerator physics, the dynamic aperture calculation of a storage ring, which is often the most time consuming part of accelerator modeling and simulation, can benefit from GPUs due to its embarrassingly parallel nature, which fits well with the GPU programming model. In this paper, we use the Tesla C2050 GPU, which consists of 14 multi-processors (MPs) with 32 cores each, for a total of 448 cores, to host thousands of threads dynamically. A thread is a logical execution unit of the program on the GPU. In the GPU programming model, threads are grouped into a collection of blocks. Within each block, multiple threads share the same code and up to 48 KB of shared memory. Multiple thread blocks form a grid, which is executed as a GPU kernel. A simplified code that is a subset of Tracy++ [2] is developed to demonstrate the possibility of using the GPU to speed up the dynamic aperture calculation by having each thread track a particle.
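
    The thread-per-particle layout described above maps directly onto a kernel in which every thread pushes its own particle around the ring and records when it is lost. The sketch below replaces the real lattice with a linear one-turn rotation plus a thin sextupole kick, so it illustrates the parallel structure rather than TracyGPU itself:

        // Hedged sketch of thread-per-particle tracking (not TracyGPU): each thread applies
        // a linear one-turn map plus a thin sextupole kick and flags particles that leave
        // the aperture, the basic ingredient of a dynamic-aperture scan.
        #include <cuda_runtime.h>

        __global__ void trackParticles(float2* xp, int* lost, int nParticles, int nTurns,
                                       float cosMu, float sinMu, float k2, float aperture)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= nParticles) return;

            float x = xp[i].x, p = xp[i].y;
            for (int t = 0; t < nTurns; ++t) {
                float xn =  cosMu * x + sinMu * p;          // linear one-turn rotation
                float pn = -sinMu * x + cosMu * p;
                pn -= k2 * xn * xn;                         // thin sextupole kick
                x = xn;  p = pn;
                if (fabsf(x) > aperture) { lost[i] = t + 1; break; }   // record loss turn
            }
            xp[i] = make_float2(x, p);
        }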

  9. SVGenes: a library for rendering genomic features in scalable vector graphic format.

    PubMed

    Etherington, Graham J; MacLean, Daniel

    2013-08-01

    Drawing genomic features in attractive and informative ways is a key task in visualization of genomics data. The Scalable Vector Graphics (SVG) format is a modern and flexible open standard that provides advanced features including modular graphic design, advanced web interactivity and animation within a suitable client. SVGs do not suffer from loss of image quality on re-scaling and allow individual elements of a graphic to be edited at the object level, independent of the whole image. These features make SVG a potentially useful format for the preparation of publication quality figures including genomic objects such as genes or sequencing coverage, and for web applications that require rich user interaction with the graphical elements. SVGenes is a Ruby-language library that uses SVG primitives to render typical genomic glyphs through a simple and flexible Ruby interface. The library implements a simple Page object that spaces and contains horizontal Track objects, which in turn style, colour and position the features within them. Tracks are the level at which visual information is supplied, providing the full styling capability of the SVG standard. Genomic entities like genes, transcripts and histograms are modelled as Glyph objects that are attached to a track and take advantage of SVG primitives to render the genomic features in a track as any of a selection of defined glyphs. The feature model within SVGenes is simple but flexible and not dependent on particular existing gene feature formats, meaning graphics for existing datasets can easily be created without the need for conversion. The library is provided as a Ruby Gem from https://rubygems.org/gems/bio-svgenes under the MIT license, and open source code is available at https://github.com/danmaclean/bioruby-svgenes, also under the MIT License. dan.maclean@tsl.ac.uk.

  10. Model for mapping settlements

    DOEpatents

    Vatsavai, Ranga Raju; Graesser, Jordan B.; Bhaduri, Budhendra L.

    2016-07-05

    A programmable media includes a graphical processing unit in communication with a memory element. The graphical processing unit is configured to detect one or more settlement regions from a high resolution remote sensed image based on the execution of programming code. The graphical processing unit identifies one or more settlements through the execution of the programming code that executes a multi-instance learning algorithm that models portions of the high resolution remote sensed image. The identification is based on spectral bands transmitted by a satellite and on selected designations of the image patches.

  11. Use of a graphics processing unit (GPU) to facilitate real-time 3D graphic presentation of the patient skin-dose distribution during fluoroscopic interventional procedures

    PubMed Central

    Rana, Vijay; Rudin, Stephen; Bednarek, Daniel R.

    2012-01-01

    We have developed a dose-tracking system (DTS) that calculates the radiation dose to the patient’s skin in real-time by acquiring exposure parameters and imaging-system-geometry from the digital bus on a Toshiba Infinix C-arm unit. The cumulative dose values are then displayed as a color map on an OpenGL-based 3D graphic of the patient for immediate feedback to the interventionalist. Determination of those elements on the surface of the patient 3D-graphic that intersect the beam and calculation of the dose for these elements in real time demands fast computation. Reducing the size of the elements results in more computation load on the computer processor and therefore a tradeoff occurs between the resolution of the patient graphic and the real-time performance of the DTS. The speed of the DTS for calculating dose to the skin is limited by the central processing unit (CPU) and can be improved by using the parallel processing power of a graphics processing unit (GPU). Here, we compare the performance speed of GPU-based DTS software to that of the current CPU-based software as a function of the resolution of the patient graphics. Results show a tremendous improvement in speed using the GPU. While an increase in the spatial resolution of the patient graphics resulted in slowing down the computational speed of the DTS on the CPU, the speed of the GPU-based DTS was hardly affected. This GPU-based DTS can be a powerful tool for providing accurate, real-time feedback about patient skin-dose to physicians while performing interventional procedures. PMID:24027616
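
    The GPU gains come from testing every element of the patient graphic against the X-ray beam in parallel and accumulating dose where the beam intersects the skin. The kernel below is a schematic stand-in for that step (a simple circular-cone containment test with inverse-square weighting, with invented parameters), not the DTS source:

        // Hedged sketch of the per-vertex beam-intersection and dose-accumulation step
        // (not the authors' DTS code): one thread per skin-graphic vertex, testing whether
        // the vertex falls inside a circular beam cone from the focal spot and adding dose.
        #include <cuda_runtime.h>

        __global__ void accumulateSkinDose(const float3* verts, float* dose, int nVerts,
                                           float3 focalSpot, float3 beamAxis,   // beamAxis is unit length
                                           float halfAngleTan, float doseRate, float dt)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= nVerts) return;

            float3 d = make_float3(verts[i].x - focalSpot.x,
                                   verts[i].y - focalSpot.y,
                                   verts[i].z - focalSpot.z);
            float along = d.x * beamAxis.x + d.y * beamAxis.y + d.z * beamAxis.z; // distance along axis
            float perp2 = d.x * d.x + d.y * d.y + d.z * d.z - along * along;      // squared radial distance
            if (along > 0.0f && perp2 < along * along * halfAngleTan * halfAngleTan)
                dose[i] += doseRate * dt / (along * along);   // simplified inverse-square falloff
        }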

  12. Use of a graphics processing unit (GPU) to facilitate real-time 3D graphic presentation of the patient skin-dose distribution during fluoroscopic interventional procedures.

    PubMed

    Rana, Vijay; Rudin, Stephen; Bednarek, Daniel R

    2012-02-23

    We have developed a dose-tracking system (DTS) that calculates the radiation dose to the patient's skin in real-time by acquiring exposure parameters and imaging-system-geometry from the digital bus on a Toshiba Infinix C-arm unit. The cumulative dose values are then displayed as a color map on an OpenGL-based 3D graphic of the patient for immediate feedback to the interventionalist. Determination of those elements on the surface of the patient 3D-graphic that intersect the beam and calculation of the dose for these elements in real time demands fast computation. Reducing the size of the elements results in more computation load on the computer processor and therefore a tradeoff occurs between the resolution of the patient graphic and the real-time performance of the DTS. The speed of the DTS for calculating dose to the skin is limited by the central processing unit (CPU) and can be improved by using the parallel processing power of a graphics processing unit (GPU). Here, we compare the performance speed of GPU-based DTS software to that of the current CPU-based software as a function of the resolution of the patient graphics. Results show a tremendous improvement in speed using the GPU. While an increase in the spatial resolution of the patient graphics resulted in slowing down the computational speed of the DTS on the CPU, the speed of the GPU-based DTS was hardly affected. This GPU-based DTS can be a powerful tool for providing accurate, real-time feedback about patient skin-dose to physicians while performing interventional procedures.

  13. Web-Based Computational Chemistry Education with CHARMMing I: Lessons and Tutorial

    PubMed Central

    Miller, Benjamin T.; Singh, Rishi P.; Schalk, Vinushka; Pevzner, Yuri; Sun, Jingjun; Miller, Carrie S.; Boresch, Stefan; Ichiye, Toshiko; Brooks, Bernard R.; Woodcock, H. Lee

    2014-01-01

    This article describes the development, implementation, and use of web-based “lessons” to introduce students and other newcomers to computer simulations of biological macromolecules. These lessons, i.e., interactive step-by-step instructions for performing common molecular simulation tasks, are integrated into the collaboratively developed CHARMM INterface and Graphics (CHARMMing) web user interface (http://www.charmming.org). Several lessons have already been developed with new ones easily added via a provided Python script. In addition to CHARMMing's new lessons functionality, web-based graphical capabilities have been overhauled and are fully compatible with modern mobile web browsers (e.g., phones and tablets), allowing easy integration of these advanced simulation techniques into coursework. Finally, one of the primary objections to web-based systems like CHARMMing has been that “point and click” simulation set-up does little to teach the user about the underlying physics, biology, and computational methods being applied. In response to this criticism, we have developed a freely available tutorial to bridge the gap between graphical simulation setup and the technical knowledge necessary to perform simulations without user interface assistance. PMID:25057988

  14. Web-based computational chemistry education with CHARMMing I: Lessons and tutorial.

    PubMed

    Miller, Benjamin T; Singh, Rishi P; Schalk, Vinushka; Pevzner, Yuri; Sun, Jingjun; Miller, Carrie S; Boresch, Stefan; Ichiye, Toshiko; Brooks, Bernard R; Woodcock, H Lee

    2014-07-01

    This article describes the development, implementation, and use of web-based "lessons" to introduce students and other newcomers to computer simulations of biological macromolecules. These lessons, i.e., interactive step-by-step instructions for performing common molecular simulation tasks, are integrated into the collaboratively developed CHARMM INterface and Graphics (CHARMMing) web user interface (http://www.charmming.org). Several lessons have already been developed with new ones easily added via a provided Python script. In addition to CHARMMing's new lessons functionality, web-based graphical capabilities have been overhauled and are fully compatible with modern mobile web browsers (e.g., phones and tablets), allowing easy integration of these advanced simulation techniques into coursework. Finally, one of the primary objections to web-based systems like CHARMMing has been that "point and click" simulation set-up does little to teach the user about the underlying physics, biology, and computational methods being applied. In response to this criticism, we have developed a freely available tutorial to bridge the gap between graphical simulation setup and the technical knowledge necessary to perform simulations without user interface assistance.

  15. [Influence of the recording interval and a graphic organizer on the writing process/product and on other psychological variables].

    PubMed

    García Sánchez, Jesús N; Rodríguez Pérez, Celestino

    2007-05-01

    An experimental study of the influence of the recording interval and a graphic organizer on the processes of writing composition and on the final product is presented. We studied 326 participants, age 10 to 16 years old, by means of a nested design. Two groups were compared: one group was aided in the writing process with a graphic organizer and the other was not. Each group was subdivided into two further groups: one with a mean recording interval of 45 seconds and the other with approximately 90 seconds recording interval in a writing log. The results showed that the group aided by a graphic organizer obtained better results both in processes and writing product, and that the groups assessed with an average interval of 45 seconds obtained worse results. Implications for educational practice are discussed, and limitations and future perspectives are commented on.

  16. Chromium: A Stream-Processing Framework for Interactive Rendering on Clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humphreys, G.; Houston, M.; Ng, Y.-R.

    2002-01-11

    We describe Chromium, a system for manipulating streams of graphics API commands on clusters of workstations. Chromium's stream filters can be arranged to create sort-first and sort-last parallel graphics architectures that, in many cases, support the same applications while using only commodity graphics accelerators. In addition, these stream filters can be extended programmatically, allowing the user to customize the stream transformations performed by nodes in a cluster. Because our stream processing mechanism is completely general, any cluster-parallel rendering algorithm can be either implemented on top of or embedded in Chromium. In this paper, we give examples of real-world applications that use Chromium to achieve good scalability on clusters of workstations, and describe other potential uses of this stream processing technology. By completely abstracting the underlying graphics architecture, network topology, and API command processing semantics, we allow a variety of applications to run in different environments.

  17. Accelerating Monte Carlo simulations with an NVIDIA ® graphics processor

    NASA Astrophysics Data System (ADS)

    Martinsen, Paul; Blaschke, Johannes; Künnemeyer, Rainer; Jordan, Robert

    2009-10-01

    Modern graphics cards, commonly used in desktop computers, have evolved beyond a simple interface between processor and display to incorporate sophisticated calculation engines that can be applied to general purpose computing. The Monte Carlo algorithm for modelling photon transport in turbid media has been implemented on an NVIDIA® 8800 GT graphics card using the CUDA toolkit. The Monte Carlo method relies on following the trajectory of millions of photons through the sample, often taking hours or days to complete. The graphics-processor implementation, processing roughly 110 million scattering events per second, was found to run more than 70 times faster than a similar, single-threaded implementation on a 2.67 GHz desktop computer.
    Program summary
    Program title: Phoogle-C/Phoogle-G
    Catalogue identifier: AEEB_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEB_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 51 264
    No. of bytes in distributed program, including test data, etc.: 2 238 805
    Distribution format: tar.gz
    Programming language: C++
    Computer: Designed for Intel PCs. Phoogle-G requires an NVIDIA graphics card with support for CUDA 1.1
    Operating system: Windows XP
    Has the code been vectorised or parallelized?: Phoogle-G is written for SIMD architectures
    RAM: 1 GB
    Classification: 21.1
    External routines: Charles Karney random number library; Microsoft Foundation Class library; NVIDIA CUDA library [1]
    Nature of problem: The Monte Carlo technique is an effective algorithm for exploring the propagation of light in turbid media. However, accurate results require tracing the path of many photons within the media. The independence of photons naturally lends the Monte Carlo technique to implementation on parallel architectures. Generally, parallel computing can be expensive, but recent advances in consumer grade graphics cards have opened the possibility of high-performance desktop parallel-computing.
    Solution method: In this pair of programmes we have implemented the Monte Carlo algorithm described by Prahl et al. [2] for photon transport in infinite scattering media to compare the performance of two readily accessible architectures: a standard desktop PC and a consumer grade graphics card from NVIDIA.
    Restrictions: The graphics card implementation uses single precision floating point numbers for all calculations. Only photon transport from an isotropic point-source is supported. The graphics-card version has no user interface. The simulation parameters must be set in the source code. The desktop version has a simple user interface; however some properties can only be accessed through an ActiveX client (such as Matlab).
    Additional comments: The random number library used has an LGPL (http://www.gnu.org/copyleft/lesser.html) licence.
    Running time: Runtime can range from minutes to months depending on the number of photons simulated and the optical properties of the medium.
    References: [1] http://www.nvidia.com/object/cuda_home.html. [2] S.L. Prahl, M. Keijzer, S.L. Jacques, A. Welch, SPIE Institute Series 5 (1989) 102.
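
    As a rough illustration of why photon transport maps so well onto the GPU, the sketch below assigns one CUDA thread per photon and follows it through an infinite homogeneous medium with absorption and isotropic scattering. The medium parameters, the isotropic phase function, and all names are simplifying assumptions; this is not the Phoogle-G code.

    ```cuda
    // Illustrative sketch (not Phoogle-G): each thread follows one photon through
    // an infinite homogeneous medium with absorption and isotropic scattering,
    // recording the depth at which the photon is absorbed.
    #include <cuda_runtime.h>
    #include <curand_kernel.h>
    #include <cstdio>

    __global__ void photons(float* absorbedDepth, int n,
                            float mu_s, float mu_a, unsigned long long seed)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;

        curandState rng;
        curand_init(seed, i, 0, &rng);

        float x = 0, y = 0, z = 0;          // position
        float ux = 0, uy = 0, uz = 1;       // direction
        float mu_t = mu_s + mu_a;

        for (int step = 0; step < 10000; ++step) {
            float s = -logf(curand_uniform(&rng)) / mu_t;   // free path length
            x += s * ux; y += s * uy; z += s * uz;
            if (curand_uniform(&rng) < mu_a / mu_t) break;  // absorbed
            // isotropic scattering: pick a new random direction
            float cost = 2.f * curand_uniform(&rng) - 1.f;
            float sint = sqrtf(1.f - cost * cost);
            float phi  = 6.2831853f * curand_uniform(&rng);
            ux = sint * cosf(phi); uy = sint * sinf(phi); uz = cost;
        }
        absorbedDepth[i] = z;
    }

    int main()
    {
        const int n = 1 << 20;
        float* depth;
        cudaMallocManaged(&depth, n * sizeof(float));
        photons<<<(n + 255) / 256, 256>>>(depth, n, 10.f, 0.1f, 1234ULL);
        cudaDeviceSynchronize();
        double mean = 0; for (int i = 0; i < n; ++i) mean += depth[i];
        printf("mean absorption depth = %f\n", mean / n);
        cudaFree(depth);
        return 0;
    }
    ```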

  18. Task-Analytic Design of Graphic Presentations

    DTIC Science & Technology

    1990-05-18

    important premise of Larkin and Simon’s work is that, when comparing alternative presentations, it is fruitful to characterize graphic-based problem solving...using the same information-processing models used to help understand problem solving using other representations [Newell and Simon, 1972]...during execution of graphic presentation-based problem-solving procedures. Chapter 2 reviews other work related to the problem of designing graphic

  19. Interactive computer graphics system for structural sizing and analysis of aircraft structures

    NASA Technical Reports Server (NTRS)

    Bendavid, D.; Pipano, A.; Raibstein, A.; Somekh, E.

    1975-01-01

    A computerized system for preliminary sizing and analysis of aircraft wing and fuselage structures was described. The system is based upon repeated application of analytical program modules, which are interactively interfaced and sequence-controlled during the iterative design process with the aid of design-oriented graphics software modules. The entire process is initiated and controlled via low-cost interactive graphics terminals driven by a remote computer in a time-sharing mode.

  20. Integrated Approach to Industrial Packaging Design

    NASA Astrophysics Data System (ADS)

    Vorobeva, O.

    2017-11-01

    The article reviews studies in the field of industrial packaging design. The major factors which influence technological, ergonomic, economic and ecological features of packaging are established. The main modern trends in packaging design are defined, the principles of marketing communications and their influence on consumers’ consciousness are indicated, and the function of packaging as a transmitter of brand values is specified. Peculiarities of packaging technology and printing techniques in modern printing industry are considered. The role of designers in the stage-by-stage development of the construction, form and graphic design concept of packaging is defined. The examples of authentic packaging are given and the mention of the tetrahedron packaging history is made. At the end of the article, conclusions on the key research aspects are made.

  1. U.S. Rep. Dave Weldon looks at the U.S. Lab Destiny in the SSPF.

    NASA Technical Reports Server (NTRS)

    1999-01-01

    In the cockpit of the orbiter Atlantis, which is in the Orbiter Processing Facility, Laural Patrick (left), a systems engineer with MEDS, points out a feature of the newly installed Multifunction Electronic Display Subsystem (MEDS), known as the 'glass cockpit,' to U.S. Rep. Dave Weldon. The congressman is on the House Science Committee and vice chairman of the Space and Aeronautics Subcommittee. He was in Palmdale, Calif., when Atlantis underwent the modification and he wanted to see the final product. The full-color, flat-panel MEDS upgrade improves crew/orbiter interaction with easy-to-read, graphic portrayals of key flight indicators like attitude display and mach speed. The installation makes Atlantis the most modern orbiter in the fleet and equals the systems on current commercial jet airliners and military aircraft. Atlantis is scheduled to fly on mission STS-101 in early December.

  2. GPU-Accelerated Molecular Modeling Coming Of Age

    PubMed Central

    Stone, John E.; Hardy, David J.; Ufimtsev, Ivan S.

    2010-01-01

    Graphics processing units (GPUs) have traditionally been used in molecular modeling solely for visualization of molecular structures and animation of trajectories resulting from molecular dynamics simulations. Modern GPUs have evolved into fully programmable, massively parallel co-processors that can now be exploited to accelerate many scientific computations, typically providing about one order of magnitude speedup over CPU code and in special cases providing speedups of two orders of magnitude. This paper surveys the development of molecular modeling algorithms that leverage GPU computing, the advances already made and remaining issues to be resolved, and the continuing evolution of GPU technology that promises to become even more useful to molecular modeling. Hardware acceleration with commodity GPUs is expected to benefit the overall computational biology community by bringing teraflops performance to desktop workstations and in some cases potentially changing what were formerly batch-mode computational jobs into interactive tasks. PMID:20675161

  3. C-State: an interactive web app for simultaneous multi-gene visualization and comparative epigenetic pattern search.

    PubMed

    Sowpati, Divya Tej; Srivastava, Surabhi; Dhawan, Jyotsna; Mishra, Rakesh K

    2017-09-13

    Comparative epigenomic analysis across multiple genes presents a bottleneck for bench biologists working with NGS data. Despite the development of standardized peak analysis algorithms, the identification of novel epigenetic patterns and their visualization across gene subsets remains a challenge. We developed a fast and interactive web app, C-State (Chromatin-State), to query and plot chromatin landscapes across multiple loci and cell types. C-State has an interactive, JavaScript-based graphical user interface and runs locally in modern web browsers that are pre-installed on all computers, thus eliminating the need for cumbersome data transfer, pre-processing and prior programming knowledge. C-State is unique in its ability to extract and analyze multi-gene epigenetic information. It allows for powerful GUI-based pattern searching and visualization. We include a case study to demonstrate its potential for identifying user-defined epigenetic trends in context of gene expression profiles.

  4. Is Accessibility an Issue in the Knowledge Society? Modern Web Applications in the Light of Accessibility

    NASA Astrophysics Data System (ADS)

    Bártek, Luděk; Ošlejšek, Radek; Pitner, Tomáš

    Recent development in Web shows a significant trend towards more user participation, massive use of new devices including portables, and high interactivity. The user participation goes hand in hand with inclusion of all potential user groups - also those with special needs. However, we claim that despite all the effort towards accessibility, it has not yet found an appropriate reflection among stakeholders of the "Top Web Applications" nor their users. This leads to undesired consequences - the business-driven Web without full user participation is not a really democratic medium and, actually, does not comply with the original characteristics of Web 2.0. The paper tries to identify perspectives of further development including standardization processes and the technical obstacles behind them. It also shows ways and techniques to cope with the challenge based on our own research and development in accessible graphics and dialog-based systems.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peratt, A.L.; Mostrom, M.A.

    With the availability of 80--125 MHz microprocessors, the methodology developed for the simulation of problems in pulsed power and plasma physics on modern day supercomputers is now amenable to application on a wide range of platforms including laptops and workstations. While execution speeds with these processors do not match those of large scale computing machines, resources such as computer-aided-design (CAD) and graphical analysis codes are available to automate simulation setup and process data. This paper reports on the adaptation of IVORY, a three-dimensional, fully-electromagnetic, particle-in-cell simulation code, to this platform independent CAD environment. The primary purpose of this talk is to demonstrate how rapidly a pulsed power/plasma problem can be scoped out by an experimenter on a dedicated workstation. Demonstrations include a magnetically insulated transmission line, power flow in a graded insulator stack, a relativistic klystron oscillator, and the dynamics of a coaxial thruster for space applications.

  6. A Bitslice Implementation of Anderson's Attack on A5/1

    NASA Astrophysics Data System (ADS)

    Bulavintsev, Vadim; Semenov, Alexander; Zaikin, Oleg; Kochemazov, Stepan

    2018-03-01

    The A5/1 keystream generator is a part of the Global System for Mobile Communications (GSM) protocol, employed in cellular networks all over the world. Its cryptographic resistance was extensively analyzed in dozens of papers. However, almost all corresponding methods either employ specific hardware or require an extensive preprocessing stage and significant amounts of memory. In the present study, a bitslice variant of Anderson's Attack on A5/1 is implemented. It requires very little computer memory and no preprocessing. Moreover, the attack can be made even more efficient by harnessing the computing power of modern Graphics Processing Units (GPUs). As a result, using commonly available GPUs this method can quite efficiently recover the secret key using only 64 bits of keystream. To test the performance of the implementation, a volunteer computing project was launched. 10 instances of A5/1 cryptanalysis have been successfully solved in this project in a single week.
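
    The core idea of bitslicing is to pack many independent cipher instances into the bit positions of machine words so that one bitwise instruction advances all of them at once. The fragment below sketches that idea in CUDA for one A5/1-style register: the register is stored one word per bit position, the majority of the three clocking bits is computed with bitwise operations, and only the instances that agree with the majority are shifted. It is a simplified illustration, not the authors' implementation, and the states of the other two registers are stand-in constants.

    ```cuda
    // Bitslicing sketch (not the authors' code): 64 independent A5/1-style register
    // instances are packed one-per-bit into uint64_t words, so a single bitwise
    // operation advances all 64 instances at once. Shown here: majority-controlled
    // clocking of one 19-bit LFSR.
    #include <cuda_runtime.h>
    #include <cstdio>
    #include <cstdint>

    #define R1_LEN 19

    __host__ __device__ inline uint64_t majority(uint64_t a, uint64_t b, uint64_t c)
    {
        return (a & b) | (a & c) | (b & c);     // per-bit majority of three clock bits
    }

    // Clock register r (R1_LEN words, one word per register bit) only in those
    // instances where the mask bit is 1.
    __host__ __device__ void clockR1(uint64_t r[R1_LEN], uint64_t mask)
    {
        // A5/1 R1 feedback taps: bits 18, 17, 16, 13
        uint64_t fb = r[18] ^ r[17] ^ r[16] ^ r[13];
        for (int i = R1_LEN - 1; i > 0; --i)
            r[i] = (r[i] & ~mask) | (r[i - 1] & mask);   // shift only masked instances
        r[0] = (r[0] & ~mask) | (fb & mask);
    }

    __global__ void demo(uint64_t* out)
    {
        uint64_t r1[R1_LEN];
        for (int i = 0; i < R1_LEN; ++i) r1[i] = 0x0123456789ABCDEFULL ^ (uint64_t)i;

        // Clocking bits of the three registers (R1 uses bit 8); the other two
        // registers are stand-in constants for this fragment.
        uint64_t c1 = r1[8], c2 = 0xF0F0F0F0F0F0F0F0ULL, c3 = 0xCCCCCCCCCCCCCCCCULL;
        uint64_t maj = majority(c1, c2, c3);
        clockR1(r1, ~(c1 ^ maj));               // clock instances where c1 == majority

        *out = r1[0];                           // one word of the new state
    }

    int main()
    {
        uint64_t* out;
        cudaMallocManaged(&out, sizeof(uint64_t));
        demo<<<1, 1>>>(out);
        cudaDeviceSynchronize();
        printf("bitsliced state word: %016llx\n", (unsigned long long)*out);
        cudaFree(out);
        return 0;
    }
    ```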

  7. A sample implementation for parallelizing Divide-and-Conquer algorithms on the GPU.

    PubMed

    Mei, Gang; Zhang, Jiayin; Xu, Nengxiong; Zhao, Kunyang

    2018-01-01

    The strategy of Divide-and-Conquer (D&C) is one of the frequently used programming patterns to design efficient algorithms in computer science, which has been parallelized on shared memory systems and distributed memory systems. Tzeng and Owens specifically developed a generic paradigm for parallelizing D&C algorithms on modern Graphics Processing Units (GPUs). In this paper, by following the generic paradigm proposed by Tzeng and Owens, we provide a new and publicly available GPU implementation of the famous D&C algorithm, QuickHull, to give a sample and guide for parallelizing D&C algorithms on the GPU. The experimental results demonstrate the practicality of our sample GPU implementation. Our research objective in this paper is to present a sample GPU implementation of a classical D&C algorithm to help interested readers to develop their own efficient GPU implementations with fewer efforts.
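
    The simplest GPU divide-and-conquer instance is a tree reduction, sketched below in CUDA: each block combines pairs of elements in shared memory, halving the problem at every step, and a second pass combines the per-block results. This is only meant to illustrate the recursive halving pattern; it is not the paper's QuickHull implementation nor Tzeng and Owens' work-queue paradigm.

    ```cuda
    // Minimal divide-and-conquer example on the GPU (not the paper's QuickHull):
    // a tree reduction. Each block "conquers" its tile in shared memory by
    // repeatedly combining pairs, and a second pass combines the block results.
    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void reduceSum(const float* in, float* out, int n)
    {
        extern __shared__ float s[];
        int tid = threadIdx.x;
        int i   = blockIdx.x * blockDim.x * 2 + tid;

        float v = 0.f;
        if (i < n)              v  = in[i];
        if (i + blockDim.x < n) v += in[i + blockDim.x];
        s[tid] = v;
        __syncthreads();

        for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {  // halve each step
            if (tid < stride) s[tid] += s[tid + stride];               // combine pairs
            __syncthreads();
        }
        if (tid == 0) out[blockIdx.x] = s[0];
    }

    int main()
    {
        const int n = 1 << 18;
        float *in, *partial;
        cudaMallocManaged(&in, n * sizeof(float));
        for (int i = 0; i < n; ++i) in[i] = 1.0f;

        int threads = 256, blocks = (n + threads * 2 - 1) / (threads * 2);  // 512 blocks
        cudaMallocManaged(&partial, blocks * sizeof(float));

        reduceSum<<<blocks, threads, threads * sizeof(float)>>>(in, partial, n);
        reduceSum<<<1, threads, threads * sizeof(float)>>>(partial, partial, blocks);
        cudaDeviceSynchronize();

        printf("sum = %f (expected %d)\n", partial[0], n);
        cudaFree(in); cudaFree(partial);
        return 0;
    }
    ```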

  8. GPU-accelerated molecular modeling coming of age.

    PubMed

    Stone, John E; Hardy, David J; Ufimtsev, Ivan S; Schulten, Klaus

    2010-09-01

    Graphics processing units (GPUs) have traditionally been used in molecular modeling solely for visualization of molecular structures and animation of trajectories resulting from molecular dynamics simulations. Modern GPUs have evolved into fully programmable, massively parallel co-processors that can now be exploited to accelerate many scientific computations, typically providing about one order of magnitude speedup over CPU code and in special cases providing speedups of two orders of magnitude. This paper surveys the development of molecular modeling algorithms that leverage GPU computing, the advances already made and remaining issues to be resolved, and the continuing evolution of GPU technology that promises to become even more useful to molecular modeling. Hardware acceleration with commodity GPUs is expected to benefit the overall computational biology community by bringing teraflops performance to desktop workstations and in some cases potentially changing what were formerly batch-mode computational jobs into interactive tasks. (c) 2010 Elsevier Inc. All rights reserved.

  9. Accelerating gravitational microlensing simulations using the Xeon Phi coprocessor

    NASA Astrophysics Data System (ADS)

    Chen, B.; Kantowski, R.; Dai, X.; Baron, E.; Van der Mark, P.

    2017-04-01

    Recently Graphics Processing Units (GPUs) have been used to speed up very CPU-intensive gravitational microlensing simulations. In this work, we use the Xeon Phi coprocessor to accelerate such simulations and compare its performance on a microlensing code with that of NVIDIA's GPUs. For the selected set of parameters evaluated in our experiment, we find that the speedup by Intel's Knights Corner coprocessor is comparable to that by NVIDIA's Fermi family of GPUs with compute capability 2.0, but less significant than GPUs with higher compute capabilities such as the Kepler. However, the very recently released second generation Xeon Phi, Knights Landing, is about 5.8 times faster than the Knights Corner, and about 2.9 times faster than the Kepler GPU used in our simulations. We conclude that the Xeon Phi is a very promising alternative to GPUs for modern high performance microlensing simulations.
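
    Microlensing magnification maps are commonly built by inverse ray shooting, which is embarrassingly parallel: the sketch below assigns one CUDA thread per ray, sums the point-mass deflections, and deposits the ray into a source-plane pixel with an atomic add. The lens configuration, units, and all names are illustrative assumptions and do not reproduce the authors' code.

    ```cuda
    // Illustrative inverse-ray-shooting sketch (not the paper's code): each thread
    // shoots one ray from the image plane, deflects it by the summed point-mass
    // lens contributions, and deposits it into a source-plane magnification map.
    #include <cuda_runtime.h>
    #include <cstdio>

    #define NLENS 256
    #define MAP   512

    __constant__ float lensX[NLENS], lensY[NLENS], lensM[NLENS];

    __global__ void shootRays(unsigned int* map, int raysPerSide, float halfWidth)
    {
        int ix = blockIdx.x * blockDim.x + threadIdx.x;
        int iy = blockIdx.y * blockDim.y + threadIdx.y;
        if (ix >= raysPerSide || iy >= raysPerSide) return;

        // ray position in the image plane
        float x = (2.f * ix / raysPerSide - 1.f) * halfWidth;
        float y = (2.f * iy / raysPerSide - 1.f) * halfWidth;

        // lens equation: beta = theta - sum_i m_i (theta - x_i) / |theta - x_i|^2
        float ax = 0.f, ay = 0.f;
        for (int i = 0; i < NLENS; ++i) {
            float dx = x - lensX[i], dy = y - lensY[i];
            float r2 = dx * dx + dy * dy + 1e-12f;
            ax += lensM[i] * dx / r2;
            ay += lensM[i] * dy / r2;
        }
        float sx = x - ax, sy = y - ay;

        // deposit into the source-plane pixel grid
        int px = (int)((sx / halfWidth + 1.f) * 0.5f * MAP);
        int py = (int)((sy / halfWidth + 1.f) * 0.5f * MAP);
        if (px >= 0 && px < MAP && py >= 0 && py < MAP)
            atomicAdd(&map[py * MAP + px], 1u);
    }

    int main()
    {
        float hx[NLENS], hy[NLENS], hm[NLENS];
        for (int i = 0; i < NLENS; ++i) { hx[i] = 0.01f * (i % 16) - 0.08f;
                                          hy[i] = 0.01f * (i / 16) - 0.08f;
                                          hm[i] = 1.f / NLENS; }
        cudaMemcpyToSymbol(lensX, hx, sizeof(hx));
        cudaMemcpyToSymbol(lensY, hy, sizeof(hy));
        cudaMemcpyToSymbol(lensM, hm, sizeof(hm));

        unsigned int* map;
        cudaMallocManaged(&map, MAP * MAP * sizeof(unsigned int));
        cudaMemset(map, 0, MAP * MAP * sizeof(unsigned int));

        dim3 block(16, 16), grid((2048 + 15) / 16, (2048 + 15) / 16);
        shootRays<<<grid, block>>>(map, 2048, 1.f);
        cudaDeviceSynchronize();
        printf("rays in central pixel: %u\n", map[(MAP / 2) * MAP + MAP / 2]);
        cudaFree(map);
        return 0;
    }
    ```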

  10. GPU accelerated implementation of NCI calculations using promolecular density.

    PubMed

    Rubez, Gaëtan; Etancelin, Jean-Matthieu; Vigouroux, Xavier; Krajecki, Michael; Boisson, Jean-Charles; Hénon, Eric

    2017-05-30

    The NCI approach is a modern tool to reveal chemical noncovalent interactions. It is particularly attractive to describe ligand-protein binding. A custom implementation for NCI using promolecular density is presented. It is designed to leverage the computational power of NVIDIA graphics processing unit (GPU) accelerators through the CUDA programming model. The code performances of three versions are examined on a test set of 144 systems. NCI calculations are particularly well suited to the GPU architecture, which reduces drastically the computational time. On a single compute node, the dual-GPU version leads to a 39-fold improvement for the biggest instance compared to the optimal OpenMP parallel run (C code, icc compiler) with 16 CPU cores. Energy consumption measurements carried out on both CPU and GPU NCI tests show that the GPU approach provides substantial energy savings. © 2017 Wiley Periodicals, Inc.
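
    A rough CUDA sketch of the per-grid-point work the NCI approach parallelizes: each thread evaluates a promolecular density (modelled here as a single exponential per atom) and the reduced density gradient s = |grad rho| / (2 (3 pi^2)^(1/3) rho^(4/3)) at one grid point. The atomic density model and all names are simplifying assumptions, not the published implementation.

    ```cuda
    // Illustrative sketch (not the published code): one thread per grid point
    // builds a promolecular density rho = sum_i c_i * exp(-|r - R_i| / zeta_i)
    // and the reduced density gradient s, the quantities the NCI analysis uses.
    #include <cuda_runtime.h>
    #include <cstdio>

    struct Atom { float x, y, z, c, zeta; };   // single-exponential atomic density

    __global__ void nciGrid(const Atom* atoms, int natoms,
                            float* s, int nx, float h)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= nx * nx * nx) return;
        float gx = (i % nx) * h, gy = (i / nx % nx) * h, gz = (i / (nx * nx)) * h;

        float rho = 0.f, dx = 0.f, dy = 0.f, dz = 0.f;
        for (int a = 0; a < natoms; ++a) {
            float rx = gx - atoms[a].x, ry = gy - atoms[a].y, rz = gz - atoms[a].z;
            float r  = sqrtf(rx * rx + ry * ry + rz * rz) + 1e-8f;
            float e  = atoms[a].c * expf(-r / atoms[a].zeta);
            rho += e;
            float g = -e / (atoms[a].zeta * r);     // (d rho_a / dr) / r
            dx += g * rx; dy += g * ry; dz += g * rz;
        }
        float grad = sqrtf(dx * dx + dy * dy + dz * dz);
        const float k = 2.f * powf(3.f * 9.8696044f, 1.f / 3.f);   // 2 (3 pi^2)^(1/3)
        s[i] = grad / (k * powf(rho, 4.f / 3.f));
    }

    int main()
    {
        const int nx = 64, npts = nx * nx * nx;
        Atom host[2] = { {1.5f, 1.5f, 1.5f, 1.f, 0.5f},
                         {2.3f, 1.5f, 1.5f, 1.f, 0.5f} };
        Atom* atoms; float* s;
        cudaMallocManaged(&atoms, sizeof(host));
        cudaMallocManaged(&s, npts * sizeof(float));
        cudaMemcpy(atoms, host, sizeof(host), cudaMemcpyHostToDevice);

        nciGrid<<<(npts + 255) / 256, 256>>>(atoms, 2, s, nx, 3.f / nx);
        cudaDeviceSynchronize();
        printf("s at grid point 0: %f\n", s[0]);
        cudaFree(atoms); cudaFree(s);
        return 0;
    }
    ```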

  11. Using Phun to Study ``Perpetual Motion'' Machines

    NASA Astrophysics Data System (ADS)

    Koreš, Jaroslav

    2012-05-01

    The concept of "perpetual motion" has a long history. The Indian astronomer and mathematician Bhaskara II (12th century) was the first person to describe a perpetual motion (PM) machine. An example of a 13th-century PM machine is shown in Fig. 1. Although the law of conservation of energy clearly implies the impossibility of PM construction, over the centuries numerous proposals for PM have been made, involving ever more elements of modern science in their construction. It is possible to test a variety of PM machines in the classroom using a program called Phun or its commercial version Algodoo. The programs are designed to simulate physical processes and we can easily simulate mechanical machines using them. They provide an intuitive graphical environment controlled with a mouse; a programming language is not needed. This paper describes simulations of four different (supposed) PM machines.

  12. CrossTalk. The Journal of Defense Software Engineering. Volume 25, Number 3

    DTIC Science & Technology

    2012-06-01

    OMG) standard Business Process Modeling and Notation (BPMN) [6] graphical notation. I will address each of these: identify and document steps...to a value stream map using BPMN and textual process narratives. The resulting process narratives or process metadata includes key information...objectives. Once the processes are identified we can graphically document them capturing the process using BPMN (see Figure 1). The BPMN models

  13. VSDMIP 1.5: an automated structure- and ligand-based virtual screening platform with a PyMOL graphical user interface.

    PubMed

    Cabrera, Álvaro Cortés; Gil-Redondo, Rubén; Perona, Almudena; Gago, Federico; Morreale, Antonio

    2011-09-01

    A graphical user interface (GUI) for our previously published virtual screening (VS) and data management platform VSDMIP (Gil-Redondo et al. J Comput Aided Mol Design, 23:171-184, 2009) that has been developed as a plugin for the popular molecular visualization program PyMOL is presented. In addition, a ligand-based VS module (LBVS) has been implemented that complements the already existing structure-based VS (SBVS) module and can be used in those cases where the receptor's 3D structure is not known or for pre-filtering purposes. This updated version of VSDMIP is placed in the context of similar available software and its LBVS and SBVS capabilities are tested here on a reduced set of the Directory of Useful Decoys database. Comparison of results from both approaches confirms the trend found in previous studies that LBVS outperforms SBVS. We also show that by combining LBVS and SBVS, and using a cluster of ~100 modern processors, it is possible to perform complete VS studies of several million molecules in less than a month. As the main processes in VSDMIP are 100% scalable, more powerful processors and larger clusters would notably decrease this time span. The plugin is distributed under an academic license upon request from the authors. © Springer Science+Business Media B.V. 2011

  14. A new tool for supervised classification of satellite images available on web servers: Google Maps as a case study

    NASA Astrophysics Data System (ADS)

    García-Flores, Agustín.; Paz-Gallardo, Abel; Plaza, Antonio; Li, Jun

    2016-10-01

    This paper describes a new web platform dedicated to the classification of satellite images called Hypergim. The current implementation of this platform enables users to perform classification of satellite images from any part of the world thanks to the worldwide maps provided by Google Maps. To perform this classification, Hypergim uses unsupervised algorithms like Isodata and K-means. Here, we present an extension of the original platform in which we adapt Hypergim in order to use supervised algorithms to improve the classification results. This involves a significant modification of the user interface, providing the user with a way to obtain samples of classes present in the images to use in the training phase of the classification process. Another main goal of this development is to improve the runtime of the image classification process. To achieve this goal, we use a parallel implementation of the Random Forest classification algorithm. This implementation is a modification of the well-known CURFIL software package. The use of this type of algorithm to perform image classification is widespread today thanks to its precision and ease of training. The actual implementation of Random Forest was developed using the CUDA platform, which enables us to exploit the potential of several models of NVIDIA graphics processing units, using them to execute general purpose computing tasks such as image classification algorithms. As well as CUDA, we use other parallel libraries such as Intel Boost, taking advantage of the multithreading capabilities of modern CPUs. To ensure the best possible results, the platform is deployed in a cluster of commodity graphics processing units (GPUs), so that multiple users can use the tool in a concurrent way. The experimental results indicate that this new algorithm widely outperforms the previous unsupervised algorithms implemented in Hypergim, both in runtime and in the precision of the actual classification of the images.
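
    Random-forest prediction parallelizes naturally over pixels, which is what makes the GPU attractive here: the sketch below flattens each toy decision tree into an array of nodes and lets one CUDA thread classify one pixel by walking every tree and taking a majority vote. The node layout and all names are illustrative assumptions; this is not the CURFIL or Hypergim code.

    ```cuda
    // Illustrative sketch (not CURFIL/Hypergim code): random-forest prediction on
    // the GPU. Trees are flattened into arrays; each thread classifies one pixel
    // by walking every tree and voting for the predicted class.
    #include <cuda_runtime.h>
    #include <cstdio>

    #define NTREES  4
    #define NNODES  7      // nodes per (toy) tree
    #define NFEAT   3
    #define NCLASS  2

    struct Node { int feat; float thr; int left, right; int label; }; // label >= 0 -> leaf

    __global__ void predict(const Node* forest, const float* feats,
                            int* out, int npix)
    {
        int p = blockIdx.x * blockDim.x + threadIdx.x;
        if (p >= npix) return;

        int votes[NCLASS] = {0};
        for (int t = 0; t < NTREES; ++t) {
            const Node* tree = forest + t * NNODES;
            int n = 0;
            while (tree[n].label < 0)                       // descend to a leaf
                n = (feats[p * NFEAT + tree[n].feat] < tree[n].thr)
                        ? tree[n].left : tree[n].right;
            ++votes[tree[n].label];
        }
        int best = 0;
        for (int c = 1; c < NCLASS; ++c) if (votes[c] > votes[best]) best = c;
        out[p] = best;                                      // majority vote
    }

    int main()
    {
        const int npix = 1 << 16;
        Node* forest; float* feats; int* out;
        cudaMallocManaged(&forest, NTREES * NNODES * sizeof(Node));
        cudaMallocManaged(&feats,  npix * NFEAT * sizeof(float));
        cudaMallocManaged(&out,    npix * sizeof(int));

        // toy trees: split on feature 0 at 0.5, leaves labelled 0 / 1
        for (int t = 0; t < NTREES; ++t) {
            Node* tr = forest + t * NNODES;
            tr[0] = {0, 0.5f, 1, 2, -1};
            tr[1] = {0, 0.f, 0, 0, 0};
            tr[2] = {0, 0.f, 0, 0, 1};
            for (int k = 3; k < NNODES; ++k) tr[k] = tr[2];
        }
        for (int i = 0; i < npix * NFEAT; ++i) feats[i] = (i % 100) / 100.f;

        predict<<<(npix + 255) / 256, 256>>>(forest, feats, out, npix);
        cudaDeviceSynchronize();
        printf("class of pixel 0: %d\n", out[0]);
        cudaFree(forest); cudaFree(feats); cudaFree(out);
        return 0;
    }
    ```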

  15. Potential of Laboratory Execution Systems (LESs) to Simplify the Application of Business Process Management Systems (BPMSs) in Laboratory Automation.

    PubMed

    Neubert, Sebastian; Göde, Bernd; Gu, Xiangyu; Stoll, Norbert; Thurow, Kerstin

    2017-04-01

    Modern business process management (BPM) is increasingly interesting for laboratory automation. End-to-end workflow automation and improved top-level systems integration for information technology (IT) and automation systems are especially prominent objectives. With the ISO Standard Business Process Model and Notation (BPMN) 2.X, a system-independent and interdisciplinarily accepted graphical process control notation is provided, allowing process analysis while also being executable. The transfer of BPM solutions to structured laboratory automation places novel demands, for example, concerning the real-time-critical process and systems integration. The article discusses the potential of laboratory execution systems (LESs) for an easier implementation of the business process management system (BPMS) in hierarchical laboratory automation. In particular, complex application scenarios, including long process chains based on, for example, several distributed automation islands and mobile laboratory robots for material transport, are difficult to handle in BPMSs. The presented approach deals with the displacement of workflow control tasks into life-science-specialized LESs, the reduction of numerous different interfaces between BPMSs and subsystems, and the simplification of complex process models. Thus, the integration effort for complex laboratory workflows can be significantly reduced for strictly structured automation solutions. An example application, consisting of a mixture of manual and automated subprocesses, is demonstrated by the presented BPMS-LES approach.

  16. Graphic Arts: Book Two. Process Camera, Stripping, and Platemaking.

    ERIC Educational Resources Information Center

    Farajollahi, Karim; And Others

    The second of a three-volume set of instructional materials for a course in graphic arts, this manual consists of 10 instructional units dealing with the process camera, stripping, and platemaking. Covered in the individual units are the process camera and darkroom photography, line photography, half-tone photography, other darkroom techniques,…

  17. Graphic Arts: Process Camera, Stripping, and Platemaking. Third Edition.

    ERIC Educational Resources Information Center

    Crummett, Dan

    This document contains teacher and student materials for a course in graphic arts concentrating on camera work, stripping, and plate making in the printing process. Eight units of instruction cover the following topics: (1) the process camera and darkroom equipment; (2) line photography; (3) halftone photography; (4) other darkroom techniques; (5)…

  18. Mechanical properties of bovine cortical bone based on the automated ball indentation technique and graphics processing method.

    PubMed

    Zhang, Airong; Zhang, Song; Bian, Cuirong

    2018-02-01

    Cortical bone provides the main form of support in humans and other vertebrates against various forces. Thus, capturing its mechanical properties is important. In this study, the mechanical properties of cortical bone were investigated by using automated ball indentation and graphics processing at both the macroscopic and microstructural levels under dry conditions. First, all polished samples were photographed under a metallographic microscope, and the area ratio of the circumferential lamellae and osteons was calculated through the graphics processing method. Second, fully-computer-controlled automated ball indentation (ABI) tests were performed to explore the micro-mechanical properties of the cortical bone at room temperature and a constant indenter speed. The indentation defects were examined with a scanning electron microscope. Finally, the macroscopic mechanical properties of the cortical bone were estimated with the graphics processing method and mixture rule. Combining ABI and graphics processing proved to be an effective tool for obtaining the mechanical properties of the cortical bone, and the indenter size had a significant effect on the measurement. The methods presented in this paper provide an innovative approach to acquiring the macroscopic mechanical properties of cortical bone in a nondestructive manner. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Pre and post processing using the IBM 3277 display station graphics attachment (RPQ7H0284)

    NASA Technical Reports Server (NTRS)

    Burroughs, S. H.; Lawlor, M. B.; Miller, I. M.

    1978-01-01

    A graphical interactive procedure operating under TSO and utilizing two CRT display terminals is shown to be an effective means of accomplishing mesh generation, establishing boundary conditions, and reviewing graphic output for finite element analysis activity.

  20. Getting Graphic at the School Library.

    ERIC Educational Resources Information Center

    Kan, Kat

    2003-01-01

    Provides information for school libraries interested in acquiring graphic novels. Discusses theft prevention; processing and cataloging; maintaining the collection; what to choose, with two Web sites for more information on graphic novels for libraries; collection development decisions; and Japanese comics called Manga. Includes an annotated list…

  1. A graphically oriented specification language for automatic code generation. GRASP/Ada: A Graphical Representation of Algorithms, Structure, and Processes for Ada, phase 1

    NASA Technical Reports Server (NTRS)

    Cross, James H., II; Morrison, Kelly I.; May, Charles H., Jr.; Waddel, Kathryn C.

    1989-01-01

    The first phase of a three-phase effort to develop a new graphically oriented specification language which will facilitate the reverse engineering of Ada source code into graphical representations (GRs) as well as the automatic generation of Ada source code is described. A simplified view of the three phases of Graphical Representations for Algorithms, Structure, and Processes for Ada (GRASP/Ada) with respect to three basic classes of GRs is presented. Phase 1 concentrated on the derivation of an algorithmic diagram, the control structure diagram (CSD) (CRO88a) from Ada source code or Ada PDL. Phase 2 includes the generation of architectural and system level diagrams such as structure charts and data flow diagrams and should result in a requirements specification for a graphically oriented language able to support automatic code generation. Phase 3 will concentrate on the development of a prototype to demonstrate the feasibility of this new specification language.

  2. Guide to making time-lapse graphics using the facilities of the National Magnetic Fusion Energy Computing Center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Munro, J.K. Jr.

    1980-05-01

    The advent of large, fast computers has opened the way to modeling more complex physical processes and to handling very large quantities of experimental data. The amount of information that can be processed in a short period of time is so great that use of graphical displays assumes greater importance as a means of displaying this information. Information from dynamical processes can be displayed conveniently by use of animated graphics. This guide presents the basic techniques for generating black and white animated graphics, with consideration of aesthetic, mechanical, and computational problems. The guide is intended for use by someone who wants to make movies on the National Magnetic Fusion Energy Computing Center (NMFECC) CDC-7600. Problems encountered by a geographically remote user are given particular attention. Detailed information is given that will allow a remote user to do some file checking and diagnosis before giving graphics files to the system for processing into film in order to spot problems without having to wait for film to be delivered. Source listings of some useful software are given in appendices along with descriptions of how to use it. 3 figures, 5 tables.

  3. Learning Projectile Motion with the Computer Game ``Scorched 3D``

    NASA Astrophysics Data System (ADS)

    Jurcevic, John S.

    2008-01-01

    For most of our students, video games are a normal part of their lives. We should take advantage of this medium to teach physics in a manner that is engrossing for our students. In particular, modern video games incorporate accurate physics in their game engines, and they allow us to visualize the physics through flashy and captivating graphics. I recently used the game "Scorched 3D" to help my students understand projectile motion.

  4. Energy Supply Options for Modernizing Army Heating Systems

    DTIC Science & Technology

    1999-01-01

    Army Regulation (AR) 420-49, Heating, Energy Selection and Fuel Storage, Distribution, and Dispensing Systems and Technical Manual (TM) 5-650...analysis. HEATMAP uses the AutoLISP program in AutoCAD to take the graphical input to populate a Microsoft® Access database in...of 1992, Subtitle F, Federal Agency Energy Management. Technical Manual (TM) 5-650, Repairs and Utilities: Central Boiler Plants (HQDA, 13 October

  5. CSciBox: An Intelligent Assistant for Dating Ice and Sediment Cores

    NASA Astrophysics Data System (ADS)

    Finlinson, K.; Bradley, E.; White, J. W. C.; Anderson, K. A.; Marchitto, T. M., Jr.; de Vesine, L. R.; Jones, T. R.; Lindsay, C. M.; Israelsen, B.

    2015-12-01

    CSciBox is an integrated software system for the construction and evaluation of age models of paleo-environmental archives. It incorporates a number of data-processing and visualization facilities, ranging from simple interpolation to reservoir-age correction and 14C calibration via the Calib algorithm, as well as a number of firn and ice-flow models. It employs modern database technology to store paleoclimate proxy data and analysis results in an easily accessible and searchable form, and offers the user access to those data and computational elements via a modern graphical user interface (GUI). In the case of truly large data or computations, CSciBox is parallelizable across modern multi-core processors, or clusters, or even the cloud. The code is open source and freely available on github, as are one-click installers for various versions of Windows and Mac OSX. The system's architecture allows users to incorporate their own software in the form of computational components that can be built smoothly into CSciBox workflows, taking advantage of CSciBox's GUI, data importing facilities, and plotting capabilities. To date, BACON and StratiCounter have been integrated into CSciBox as embedded components. The user can manipulate and compose all of these tools and facilities as she sees fit. Alternatively, she can employ CSciBox's automated reasoning engine, which uses artificial intelligence techniques to explore the gamut of age models and cross-dating scenarios automatically. The automated reasoning engine captures the knowledge of expert geoscientists, and can output a description of its reasoning.

  6. Graphic Warning Labels Elicit Affective and Thoughtful Responses from Smokers: Results of a Randomized Clinical Trial.

    PubMed

    Evans, Abigail T; Peters, Ellen; Strasser, Andrew A; Emery, Lydia F; Sheerin, Kaitlin M; Romer, Daniel

    2015-01-01

    Observational research suggests that placing graphic images on cigarette warning labels can reduce smoking rates, but field studies lack experimental control. Our primary objective was to determine the psychological processes set in motion by naturalistic exposure to graphic vs. text-only warnings in a randomized clinical trial involving exposure to modified cigarette packs over a 4-week period. Theories of graphic-warning impact were tested by examining affect toward smoking, credibility of warning information, risk perceptions, quit intentions, warning label memory, and smoking risk knowledge. Adults who smoked between 5 and 40 cigarettes daily (N = 293; mean age = 33.7), did not have a contra-indicated medical condition, and did not intend to quit were recruited from Philadelphia, PA and Columbus, OH. Smokers were randomly assigned to receive their own brand of cigarettes for four weeks in one of three warning conditions: text only, graphic images plus text, or graphic images with elaborated text. Data from 244 participants who completed the trial were analyzed in structural-equation models. The presence of graphic images (compared to text-only) caused more negative affect toward smoking, a process that indirectly influenced risk perceptions and quit intentions (e.g., image->negative affect->risk perception->quit intention). Negative affect from graphic images also enhanced warning credibility including through increased scrutiny of the warnings, a process that also indirectly affected risk perceptions and quit intentions (e.g., image->negative affect->risk scrutiny->warning credibility->risk perception->quit intention). Unexpectedly, elaborated text reduced warning credibility. Finally, graphic warnings increased warning-information recall and indirectly increased smoking-risk knowledge at the end of the trial and one month later. In the first naturalistic clinical trial conducted, graphic warning labels are more effective than text-only warnings in encouraging smokers to consider quitting and in educating them about smoking's risks. Negative affective reactions to smoking, thinking about risks, and perceptions of credibility are mediators of their impact. Clinicaltrials.gov NCT01782053.

  7. A GPU-Based Wide-Band Radio Spectrometer

    NASA Astrophysics Data System (ADS)

    Chennamangalam, Jayanth; Scott, Simon; Jones, Glenn; Chen, Hong; Ford, John; Kepley, Amanda; Lorimer, D. R.; Nie, Jun; Prestage, Richard; Roshi, D. Anish; Wagner, Mark; Werthimer, Dan

    2014-12-01

    The graphics processing unit has become an integral part of astronomical instrumentation, enabling high-performance online data reduction and accelerated online signal processing. In this paper, we describe a wide-band reconfigurable spectrometer built using an off-the-shelf graphics processing unit card. This spectrometer, when configured as a polyphase filter bank, supports a dual-polarisation bandwidth of up to 1.1 GHz (or a single-polarisation bandwidth of up to 2.2 GHz) on the latest generation of graphics processing units. On the other hand, when configured as a direct fast Fourier transform, the spectrometer supports a dual-polarisation bandwidth of up to 1.4 GHz (or a single-polarisation bandwidth of up to 2.8 GHz).
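
    In its direct-FFT mode, such a spectrometer essentially reduces to batched FFTs of the sampled voltage stream followed by accumulation of channel powers. The sketch below shows that pipeline with cuFFT and a small accumulation kernel; the channel count, accumulation length, and fake input data are illustrative assumptions, not the instrument's actual processing chain.

    ```cuda
    // Illustrative sketch (not the instrument's pipeline): the direct-FFT mode of a
    // GPU spectrometer reduces to batched FFTs of the sampled voltage stream plus
    // accumulation of channel powers. Shown with cuFFT and a small power kernel.
    #include <cuda_runtime.h>
    #include <cufft.h>
    #include <cstdio>

    #define NCHAN 1024     // spectral channels
    #define NSPEC 256      // spectra accumulated per dump

    __global__ void accumulatePower(const cufftComplex* spectra, float* power)
    {
        int ch = blockIdx.x * blockDim.x + threadIdx.x;
        if (ch >= NCHAN) return;
        float acc = 0.f;
        for (int s = 0; s < NSPEC; ++s) {
            cufftComplex v = spectra[s * NCHAN + ch];
            acc += v.x * v.x + v.y * v.y;       // |X_k|^2
        }
        power[ch] = acc / NSPEC;
    }

    int main()
    {
        cufftComplex* data; float* power;
        cudaMallocManaged(&data,  NSPEC * NCHAN * sizeof(cufftComplex));
        cudaMallocManaged(&power, NCHAN * sizeof(float));
        for (int i = 0; i < NSPEC * NCHAN; ++i)    // fake sampled voltages
            data[i] = { (float)((i * 37) % 7) - 3.f, 0.f };

        cufftHandle plan;
        cufftPlan1d(&plan, NCHAN, CUFFT_C2C, NSPEC);         // batched FFT
        cufftExecC2C(plan, data, data, CUFFT_FORWARD);       // in-place transform

        accumulatePower<<<(NCHAN + 255) / 256, 256>>>(data, power);
        cudaDeviceSynchronize();
        printf("power in channel 0: %f\n", power[0]);

        cufftDestroy(plan);
        cudaFree(data); cudaFree(power);
        return 0;
    }
    ```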

  8. Vital Signs for Instructional Design

    ERIC Educational Resources Information Center

    Ley, Kathryn; Gannon-Cook, Ruth

    2014-01-01

    The purpose of this study was to investigate the relationship between a collaborative design process for selecting instructional graphics and online learner perceptions of graphic appropriateness. At the end of their online graduate course, 9 students ranked how appropriately each of 25 graphics represented 1 of 8 human performance technology…

  9. GPU-computing in econophysics and statistical physics

    NASA Astrophysics Data System (ADS)

    Preis, T.

    2011-03-01

    A recent trend in computer science and related fields is general purpose computing on graphics processing units (GPUs), which can yield impressive performance. With multiple cores connected by high memory bandwidth, today's GPUs offer resources for non-graphics parallel processing. This article provides a brief introduction into the field of GPU computing and includes examples. In particular computationally expensive analyses employed in financial market context are coded on a graphics card architecture which leads to a significant reduction of computing time. In order to demonstrate the wide range of possible applications, a standard model in statistical physics - the Ising model - is ported to a graphics card architecture as well, resulting in large speedup values.
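
    A standard way to port the Ising model to a GPU is a checkerboard Metropolis update: sites are split into two sub-lattices so that all sites of one colour can be updated simultaneously without touching each other's neighbours. The sketch below shows that scheme in CUDA with one thread per site; the lattice size, temperature, and random-number handling are illustrative choices, not the article's benchmark code.

    ```cuda
    // Illustrative 2D Ising model with checkerboard Metropolis updates, one thread
    // per lattice site. Sites of one colour are updated together so no neighbour
    // is touched by two threads at once.
    #include <cuda_runtime.h>
    #include <curand_kernel.h>
    #include <cstdio>

    #define L 256          // lattice is L x L with periodic boundaries

    __global__ void metropolis(int* spin, float beta, int colour,
                               unsigned long long seed, int sweep)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= L || y >= L || ((x + y) & 1) != colour) return;

        int s  = spin[y * L + x];
        int nb = spin[y * L + (x + 1) % L] + spin[y * L + (x + L - 1) % L]
               + spin[((y + 1) % L) * L + x] + spin[((y + L - 1) % L) * L + x];
        float dE = 2.f * s * nb;                       // energy change if s flips

        curandState rng;
        curand_init(seed, y * L + x, 2ULL * sweep, &rng);
        if (dE <= 0.f || curand_uniform(&rng) < expf(-beta * dE))
            spin[y * L + x] = -s;                      // accept the flip
    }

    int main()
    {
        int* spin;
        cudaMallocManaged(&spin, L * L * sizeof(int));
        for (int i = 0; i < L * L; ++i) spin[i] = 1;   // cold start

        dim3 block(16, 16), grid(L / 16, L / 16);
        for (int sweep = 0; sweep < 1000; ++sweep) {   // alternate sub-lattices
            metropolis<<<grid, block>>>(spin, 0.44f, 0, 42ULL, sweep);
            metropolis<<<grid, block>>>(spin, 0.44f, 1, 42ULL, sweep);
        }
        cudaDeviceSynchronize();

        long m = 0; for (int i = 0; i < L * L; ++i) m += spin[i];
        printf("magnetisation per site: %f\n", (double)m / (L * L));
        cudaFree(spin);
        return 0;
    }
    ```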

  10. Graphical Representation of Parallel Algorithmic Processes

    DTIC Science & Technology

    1990-12-01

    interface with the AAARF main process. The source code for the AAARF class-common library is in the common subdirectory and consists of the following files... The goal of this study is to develop an algorithm animation facility for parallel processes executing on different architectures, from multiprocessor

  11. Finite Element Optimization for Nondestructive Evaluation on a Graphics Processing Unit for Ground Vehicle Hull Inspection

    DTIC Science & Technology

    2013-08-22

    4 cores, where the code may simultaneously run on the multiple cores or the graphics processing unit (or GPU – to be more specific on an NVIDIA ...allowed to get accurate crack shapes. DISCLAIMER Reference herein to any specific commercial company, product, process, or service by trade name

  12. On the Road to Graphicacy: The Learning of Graphical Representation Systems

    ERIC Educational Resources Information Center

    Postigo, Yolanda; Pozo, Juan Ignacio

    2004-01-01

    This article examines the learning of different types of graphic information by subjects with different levels of education and knowledge of the content represented. Three levels of graphic information learning were distinguished (explicit, implicit, and conceptual information processing) and two experiments were conducted, looking at graph and…

  13. Conceptual Learning with Multiple Graphical Representations: Intelligent Tutoring Systems Support for Sense-Making and Fluency-Building Processes

    ERIC Educational Resources Information Center

    Rau, Martina A.

    2013-01-01

    Most learning environments in the STEM disciplines use multiple graphical representations along with textual descriptions and symbolic representations. Multiple graphical representations are powerful learning tools because they can emphasize complementary aspects of complex learning contents. However, to benefit from multiple graphical…

  14. GPU-powered Shotgun Stochastic Search for Dirichlet process mixtures of Gaussian Graphical Models

    PubMed Central

    Mukherjee, Chiranjit; Rodriguez, Abel

    2016-01-01

    Gaussian graphical models are popular for modeling high-dimensional multivariate data with sparse conditional dependencies. A mixture of Gaussian graphical models extends this model to the more realistic scenario where observations come from a heterogenous population composed of a small number of homogeneous sub-groups. In this paper we present a novel stochastic search algorithm for finding the posterior mode of high-dimensional Dirichlet process mixtures of decomposable Gaussian graphical models. Further, we investigate how to harness the massive thread-parallelization capabilities of graphical processing units to accelerate computation. The computational advantages of our algorithms are demonstrated with various simulated data examples in which we compare our stochastic search with a Markov chain Monte Carlo algorithm in moderate dimensional data examples. These experiments show that our stochastic search largely outperforms the Markov chain Monte Carlo algorithm in terms of computing-times and in terms of the quality of the posterior mode discovered. Finally, we analyze a gene expression dataset in which Markov chain Monte Carlo algorithms are too slow to be practically useful. PMID:28626348

  15. GPU-powered Shotgun Stochastic Search for Dirichlet process mixtures of Gaussian Graphical Models.

    PubMed

    Mukherjee, Chiranjit; Rodriguez, Abel

    2016-01-01

    Gaussian graphical models are popular for modeling high-dimensional multivariate data with sparse conditional dependencies. A mixture of Gaussian graphical models extends this model to the more realistic scenario where observations come from a heterogenous population composed of a small number of homogeneous sub-groups. In this paper we present a novel stochastic search algorithm for finding the posterior mode of high-dimensional Dirichlet process mixtures of decomposable Gaussian graphical models. Further, we investigate how to harness the massive thread-parallelization capabilities of graphical processing units to accelerate computation. The computational advantages of our algorithms are demonstrated with various simulated data examples in which we compare our stochastic search with a Markov chain Monte Carlo algorithm in moderate dimensional data examples. These experiments show that our stochastic search largely outperforms the Markov chain Monte Carlo algorithm in terms of computing-times and in terms of the quality of the posterior mode discovered. Finally, we analyze a gene expression dataset in which Markov chain Monte Carlo algorithms are too slow to be practically useful.

  16. GPURFSCREEN: a GPU based virtual screening tool using random forest classifier.

    PubMed

    Jayaraj, P B; Ajay, Mathias K; Nufail, M; Gopakumar, G; Jaleel, U C A

    2016-01-01

    In-silico methods are an integral part of the modern drug discovery paradigm. Virtual screening, an in-silico method, is used to refine data models and reduce the chemical space on which wet lab experiments need to be performed. Virtual screening of a ligand data model requires large-scale computations, making it a highly time consuming task. This process can be speeded up by implementing parallelized algorithms on a Graphical Processing Unit (GPU). Random Forest is a robust classification algorithm that can be employed in the virtual screening. A ligand based virtual screening tool (GPURFSCREEN) that uses random forests on GPU systems has been proposed and evaluated in this paper. This tool produces optimized results at a lower execution time for large bioassay data sets. The quality of results produced by our tool on the GPU is the same as that in a regular serial environment. Considering the magnitude of data to be screened, the parallelized virtual screening has a significantly lower running time at high throughput. The proposed parallel tool outperforms its serial counterpart by successfully screening billions of molecules in training and prediction phases.

  17. Evaluation of a Multicore-Optimized Implementation for Tomographic Reconstruction

    PubMed Central

    Agulleiro, Jose-Ignacio; Fernández, José Jesús

    2012-01-01

    Tomography allows elucidation of the three-dimensional structure of an object from a set of projection images. In life sciences, electron microscope tomography is providing invaluable information about the cell structure at a resolution of a few nanometres. Here, large images are required to combine wide fields of view with high resolution requirements. The computational complexity of the algorithms along with the large image size then turns tomographic reconstruction into a computationally demanding problem. Traditionally, high-performance computing techniques have been applied to cope with such demands on supercomputers, distributed systems and computer clusters. In the last few years, the trend has turned towards graphics processing units (GPUs). Here we present a detailed description and a thorough evaluation of an alternative approach that relies on exploitation of the power available in modern multicore computers. The combination of single-core code optimization, vector processing, multithreading and efficient disk I/O operations succeeds in providing fast tomographic reconstructions on standard computers. The approach turns out to be competitive with the fastest GPU-based solutions thus far. PMID:23139768

  18. SNAVA-A real-time multi-FPGA multi-model spiking neural network simulation architecture.

    PubMed

    Sripad, Athul; Sanchez, Giovanny; Zapata, Mireya; Pirrone, Vito; Dorta, Taho; Cambria, Salvatore; Marti, Albert; Krishnamourthy, Karthikeyan; Madrenas, Jordi

    2018-01-01

    The Spiking Neural Networks (SNN) for Versatile Applications (SNAVA) simulation platform is a scalable and programmable parallel architecture that supports real-time, large-scale, multi-model SNN computation. This parallel architecture is implemented in modern Field-Programmable Gate Array (FPGA) devices to provide high-performance execution and the flexibility to support large-scale SNN models. Flexibility is defined in terms of programmability, which allows easy synapse and neuron implementation. This has been achieved by using special-purpose Processing Elements (PEs) for computing SNNs, and by analyzing and customizing the instruction set according to the processing needs to achieve maximum performance with minimum resources. The parallel architecture is interfaced with customized Graphical User Interfaces (GUIs) to configure the SNN's connectivity, to compile the neuron-synapse model and to monitor the SNN's activity. Our contribution intends to provide a tool that allows prototyping SNNs faster than on CPU/GPU architectures but significantly cheaper than fabricating a customized neuromorphic chip. This could be potentially valuable to the computational neuroscience and neuromorphic engineering communities. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. The value of animations in biology teaching: a study of long-term memory retention.

    PubMed

    O'Day, Danton H

    2007-01-01

    Previous work has established that a narrated animation is more effective at communicating a complex biological process (signal transduction) than the equivalent graphic with figure legend. To my knowledge, no study has been done in any subject area on the effectiveness of animations versus graphics in the long-term retention of information, a primary and critical issue in studies of teaching and learning. In this study, involving 393 student responses, three different animations and two graphics-one with and one lacking a legend-were used to determine the long-term retention of information. The results show that students retain more information 21 d after viewing an animation without narration compared with an equivalent graphic whether or not that graphic had a legend. Students' comments provide additional insight into the value of animations in the pedagogical process, and suggestions for future work are proposed.

  20. Web mapping system for complex processing and visualization of environmental geospatial datasets

    NASA Astrophysics Data System (ADS)

    Titov, Alexander; Gordov, Evgeny; Okladnikov, Igor

    2016-04-01

    Environmental geospatial datasets (meteorological observations, modeling and reanalysis results, etc.) are used in numerous research applications. Due to a number of objective reasons, such as the inherent heterogeneity of environmental datasets, large dataset volumes, the complexity of the data models used, and syntactic and semantic differences that complicate the creation and use of unified terminology, the development of environmental geodata access, processing and visualization services as well as client applications turns out to be quite a sophisticated task. According to general INSPIRE requirements for data visualization, geoportal web applications have to provide such standard functionality as data overview, image navigation, scrolling, scaling and graphical overlay, displaying map legends and corresponding metadata information. It should be noted that modern web mapping systems, as integrated geoportal applications, are developed based on the SOA and might be considered as complexes of interconnected software tools for working with geospatial data. In the report a complex web mapping system including a GIS web client and corresponding OGC services for working with a geospatial (NetCDF, PostGIS) dataset archive is presented. There are three basic tiers of the GIS web client in it:
    1. Tier of geospatial metadata retrieved from a central MySQL repository and represented in JSON format.
    2. Tier of JavaScript objects implementing methods handling NetCDF metadata, the Task XML object for configuring user calculations, input and output formats, and the OGC WMS/WFS cartographical services.
    3. Graphical user interface (GUI) tier representing JavaScript objects realizing the web application business logic.
    The metadata tier consists of a number of JSON objects containing technical information describing the geospatial datasets (such as spatio-temporal resolution, meteorological parameters, valid processing methods, etc.). The middleware tier of JavaScript objects, implementing methods for handling geospatial metadata, the task XML object, and the WMS/WFS cartographical services, interconnects the metadata and GUI tiers. The methods include such procedures as JSON metadata downloading and update, launching and tracking of the calculation task running on the remote servers, as well as working with WMS/WFS cartographical services, including: obtaining the list of available layers, visualizing layers on the map, and exporting layers in graphical (PNG, JPG, GeoTIFF), vector (KML, GML, Shape) and digital (NetCDF) formats. The graphical user interface tier is based on a bundle of JavaScript libraries (OpenLayers, GeoExt and ExtJS) and represents a set of software components implementing the web mapping application business logic (complex menus, toolbars, wizards, event handlers, etc.). The GUI provides two basic capabilities for the end user: configuring the task XML object and visualizing cartographical information. The web interface developed is similar to the interface of such popular desktop GIS applications as uDIG, QuantumGIS, etc. The web mapping system developed has shown its effectiveness in the process of solving real climate change research problems and disseminating investigation results in cartographical form. The work is supported by SB RAS Basic Program Projects VIII.80.2.1 and IV.38.1.7.

  1. Surface and deep structures in graphics comprehension.

    PubMed

    Schnotz, Wolfgang; Baadte, Christiane

    2015-05-01

    Comprehension of graphics can be considered a process of schema-mediated structure mapping from external graphics onto internal mental models. Two experiments were conducted to test the hypothesis that graphics possess a perceptible surface structure as well as a semantic deep structure, both of which affect mental model construction. The same content was presented to different groups of learners by graphics from different perspectives with different surface structures but the same deep structure. Deep structures were complementary: major features of the learning content in one experiment became minor features in the other experiment, and vice versa. Text was held constant. Participants were asked to read, understand, and memorize the learning material. Furthermore, they were either instructed to process the material from the perspective supported by the graphic or from an alternative perspective, or they received no further instruction. After learning, they were asked to recall the learning content from different perspectives by completing graphs of different formats as accurately as possible. Learners' recall was more accurate if the format of recall was the same as the learning format, indicating an influence of the surface structure. However, participants also showed more accurate recall when they remembered the content from a perspective emphasizing the deep structure, regardless of the graphics format presented before. This included better recall of what they had not seen than of what they had actually seen before. That is, deep structure effects overrode surface effects. Depending on context conditions, stimulation of additional cognitive processing by instruction had partially positive and partially negative effects.

  2. A State Articulated Instructional Objectives Guide for Occupational Education Programs. State Pilot Model for Drafting (Graphic Communications). Part I--Basic. Part II--Specialty Programs. Section A (Mechanical Drafting and Design). Section B (Architectural Drafting and Design).

    ERIC Educational Resources Information Center

    North Carolina State Dept. of Community Colleges, Raleigh.

    A two-part articulation instructional objective guide for drafting (graphic communications) is provided. Part I contains summary information on seven blocks (courses) of instruction. They are as follows: introduction; basic technical drafting; problem solving in graphics; reproduction processes; freehand drawing and sketching; graphics composition;…

  3. Processing Information in Graphical Form.

    ERIC Educational Resources Information Center

    Curcio, Frances R.; Smith-Burke, M. Trika

    The purpose of this exploratory, descriptive study was to examine how children process different tasks of comprehension presented in graphical form. During the spring of 1981, 8 fourth graders and 9 seventh graders were interviewed. The children were presented with graphs accompanied by six questions reflecting three levels of comprehension:…

  4. Concept Learning through Image Processing.

    ERIC Educational Resources Information Center

    Cifuentes, Lauren; Yi-Chuan, Jane Hsieh

    This study explored computer-based image processing as a study strategy for middle school students' science concept learning. Specifically, the research examined the effects of computer graphics generation on science concept learning and the impact of using computer graphics to show interrelationships among concepts during study time. The 87…

  5. Accelerating Wright–Fisher Forward Simulations on the Graphics Processing Unit

    PubMed Central

    Lawrie, David S.

    2017-01-01

    Forward Wright–Fisher simulations are powerful in their ability to model complex demography and selection scenarios, but suffer from slow execution on the Central Processor Unit (CPU), thus limiting their usefulness. However, the single-locus Wright–Fisher forward algorithm is exceedingly parallelizable, with many steps that are so-called “embarrassingly parallel,” consisting of a vast number of individual computations that are all independent of each other and thus capable of being performed concurrently. The rise of modern Graphics Processing Units (GPUs) and programming languages designed to leverage the inherent parallel nature of these processors have allowed researchers to dramatically speed up many programs that have such high arithmetic intensity and intrinsic concurrency. The presented GPU Optimized Wright–Fisher simulation, or “GO Fish” for short, can be used to simulate arbitrary selection and demographic scenarios while running over 250-fold faster than its serial counterpart on the CPU. Even modest GPU hardware can achieve an impressive speedup of over two orders of magnitude. With simulations so accelerated, one can not only do quick parametric bootstrapping of previously estimated parameters, but also use simulated results to calculate the likelihoods and summary statistics of demographic and selection models against real polymorphism data, all without restricting the demographic and selection scenarios that can be modeled or requiring approximations to the single-locus forward algorithm for efficiency. Further, as many of the parallel programming techniques used in this simulation can be applied to other computationally intensive algorithms important in population genetics, GO Fish serves as an exciting template for future research into accelerating computation in evolution. GO Fish is part of the Parallel PopGen Package available at: http://dl42.github.io/ParallelPopGen/. PMID:28768689
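
    The "embarrassingly parallel" structure described above maps naturally onto CUDA: each simulated locus evolves independently, so one GPU thread can carry one site through all of its generations. The sketch below is not GO Fish code; it is a minimal illustration of that idea, with an invented kernel name, simple genic selection, and a normal approximation to binomial drift chosen purely to keep the example short.

    ```cuda
    // Minimal CUDA sketch of per-site Wright-Fisher forward simulation (not GO Fish itself).
    // Each thread owns one independent locus and advances it through all generations.
    #include <cstdio>
    #include <cuda_runtime.h>
    #include <curand_kernel.h>

    __global__ void wright_fisher(float* freq, int n_sites, int n_generations,
                                  int pop_size, float s, unsigned long long seed)
    {
        int site = blockIdx.x * blockDim.x + threadIdx.x;
        if (site >= n_sites) return;

        curandState rng;
        curand_init(seed, site, 0, &rng);            // independent random stream per site

        float p = freq[site];
        for (int g = 0; g < n_generations; ++g) {
            // Deterministic change in allele frequency due to (genic) selection.
            float p_sel = p * (1.0f + s) / (1.0f + s * p);
            // Genetic drift: Binomial(2N, p_sel)/2N, approximated here by a normal
            // distribution for brevity (reasonable when 2N * p_sel is large).
            float n2 = 2.0f * pop_size;
            float sd = sqrtf(p_sel * (1.0f - p_sel) / n2);
            p = fminf(fmaxf(p_sel + sd * curand_normal(&rng), 0.0f), 1.0f);
        }
        freq[site] = p;
    }

    int main()
    {
        const int n_sites = 1 << 20;
        float* h_freq = new float[n_sites];
        for (int i = 0; i < n_sites; ++i) h_freq[i] = 0.5f;   // illustrative initial frequency

        float* d_freq;
        cudaMalloc(&d_freq, n_sites * sizeof(float));
        cudaMemcpy(d_freq, h_freq, n_sites * sizeof(float), cudaMemcpyHostToDevice);

        wright_fisher<<<(n_sites + 255) / 256, 256>>>(d_freq, n_sites,
                                                      1000 /*generations*/, 10000 /*N*/,
                                                      0.001f /*s*/, 42ULL /*seed*/);
        cudaMemcpy(h_freq, d_freq, n_sites * sizeof(float), cudaMemcpyDeviceToHost);
        printf("site 0 final allele frequency: %f\n", h_freq[0]);

        cudaFree(d_freq);
        delete[] h_freq;
        return 0;
    }
    ```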

  6. Integrating Communication Best Practices in the Third National Climate Assessment

    NASA Astrophysics Data System (ADS)

    Hassol, S. J.

    2014-12-01

    Modern climate science assessments now have a history of nearly a quarter-century. This experience, together with important advances in relevant social sciences, has greatly improved our ability to communicate climate science effectively. As a result, the Third National Climate Assessment (NCA) was designed to be truly accessible and useful to all its intended audiences, while still being comprehensive and scientifically accurate. At a time when meeting the challenge of climate change is increasingly recognized as an urgent national and global priority, the NCA is proving to be valuable to decision-makers, the media, and the public. In producing this latest NCA, a communication perspective was an important part of the process from the beginning, rather than an afterthought as has often been the case with scientific reports. Lessons learned from past projects and science communications research fed into developing the communication strategy for the Third NCA. A team of editors and graphic designers worked closely with the authors on language, graphics, and photographs throughout the development of the report, Highlights document, and other products. A web design team helped bring the report to life online. There were also innovations in outreach, including a network of organizations intended to extend the reach of the assessment by engaging stakeholders throughout the process. Professional slide set development and media training were part of the preparation for the report's release. The launch of the NCA in May 2014 saw widespread and ongoing media coverage, continued references to the NCA by decision-makers, and praise from many quarters for its excellence in making complex science clear and accessible. This NCA is a professionally crafted report that exemplifies best practices in 21st century communications.

  7. cuSwift --- a suite of numerical integration methods for modelling planetary systems implemented in C/CUDA

    NASA Astrophysics Data System (ADS)

    Hellmich, S.; Mottola, S.; Hahn, G.; Kührt, E.; Hlawitschka, M.

    2014-07-01

    Simulations of dynamical processes in planetary systems represent an important tool for studying the orbital evolution of the systems [1--3]. Using modern numerical integration methods, it is possible to model systems containing many thousands of objects over timescales of several hundred million years. However, in general, supercomputers are needed to get reasonable simulation results in acceptable execution times [3]. To exploit the ever-growing computation power of Graphics Processing Units (GPUs) in modern desktop computers, we implemented cuSwift, a library of numerical integration methods for studying long-term dynamical processes in planetary systems. cuSwift can be seen as a re-implementation of the famous SWIFT integrator package written by Hal Levison and Martin Duncan. cuSwift is written in C/CUDA and contains different integration methods for various purposes. So far, we have implemented three algorithms: a 15th-order Radau integrator [4], the Wisdom-Holman Mapping (WHM) integrator [5], and the Regularized Mixed Variable Symplectic (RMVS) Method [6]. These algorithms treat only the planets as mutually gravitationally interacting bodies whereas asteroids and comets (or other minor bodies of interest) are treated as massless test particles which are gravitationally influenced by the massive bodies but do not affect each other or the massive bodies. The main focus of this work is on the symplectic methods (WHM and RMVS) which use a larger time step and thus are capable of integrating many particles over a large time span. As an additional feature, we implemented the non-gravitational Yarkovsky effect as described by M. Brož [7]. With cuSwift, we show that the use of modern GPUs makes it possible to speed up these methods by more than one order of magnitude compared to the single-core CPU implementation, thereby enabling modest workstation computers to perform long-term dynamical simulations. We use these methods to study the influence of the Yarkovsky effect on resonant asteroids. We present first results and compare them with integrations done with the original algorithms implemented in SWIFT in order to assess the numerical precision of cuSwift and to demonstrate the speed-up we achieved using the GPU.
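
    The split described above, in which a handful of massive bodies influence many massless test particles that influence nothing in return, is what makes the problem GPU-friendly: each thread can integrate one test particle against a small shared planet list. The fragment below is not cuSwift code; it is a hedged sketch of an acceleration ("kick") step under that assumption, with invented names, host setup omitted, and the symplectic drift/Kepler substeps of WHM/RMVS not shown.

    ```cuda
    // Illustrative CUDA "kick" step for massless test particles moving in the field of a
    // few massive bodies (not cuSwift; drift/Kepler substeps and host setup are omitted).
    #include <cuda_runtime.h>

    struct Body { double x, y, z, gm; };   // position and G*mass of one massive body

    __global__ void kick_test_particles(double3* pos, double3* vel, int n_particles,
                                         const Body* planets, int n_planets, double dt)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n_particles) return;

        double ax = 0.0, ay = 0.0, az = 0.0;
        for (int j = 0; j < n_planets; ++j) {      // planets act on the particle ...
            double dx = planets[j].x - pos[i].x;   // ... but the particle acts on nothing
            double dy = planets[j].y - pos[i].y;
            double dz = planets[j].z - pos[i].z;
            double r2 = dx * dx + dy * dy + dz * dz;
            double inv_r3 = rsqrt(r2) / r2;        // 1 / r^3
            ax += planets[j].gm * dx * inv_r3;
            ay += planets[j].gm * dy * inv_r3;
            az += planets[j].gm * dz * inv_r3;
        }
        vel[i].x += ax * dt;                       // velocity "kick"; positions advance
        vel[i].y += ay * dt;                       // in a separate drift/Kepler step
        vel[i].z += az * dt;
    }
    // Typical launch:
    // kick_test_particles<<<(n + 255) / 256, 256>>>(d_pos, d_vel, n, d_planets, n_planets, dt);
    ```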

  8. StePS: Stereographically Projected Cosmological Simulations

    NASA Astrophysics Data System (ADS)

    Rácz, Gábor; Szapudi, István; Csabai, István; Dobos, László

    2018-05-01

    StePS (Stereographically Projected Cosmological Simulations) compactifies the infinite spatial extent of the Universe into a finite sphere with isotropic boundary conditions to simulate the evolution of the large-scale structure. This eliminates the need for periodic boundary conditions, which are a numerical convenience unsupported by observation and which modify the law of force on large scales in an unrealistic fashion. StePS uses stereographic projection for space compactification and a naive O(N²) force calculation; this arrives at a correlation function of the same quality more quickly than standard (tree or P³M) algorithms with similar spatial and mass resolution. The O(N²) force calculation is easy to adapt to modern graphics cards, hence StePS can function as a high-speed prediction tool for modern large-scale surveys.
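
    A direct O(N²) force sum is indeed straightforward to map onto a GPU. The kernel below is a generic CUDA all-pairs acceleration sketch with shared-memory tiling and Plummer softening; it is not StePS code (which additionally works in the compactified, stereographically projected geometry), and all names are illustrative. It assumes a strictly positive softening and a block size equal to TILE.

    ```cuda
    // Generic CUDA all-pairs (O(N^2)) gravitational acceleration with shared-memory tiling.
    // Illustrative only; StePS itself works in stereographically projected coordinates,
    // which are not shown here. Launch with blockDim.x == TILE.
    #include <cuda_runtime.h>

    #define TILE 256

    __global__ void accel_all_pairs(const float4* __restrict__ pos,  // xyz = position, w = mass
                                    float4* acc, int n, float eps2)  // eps2 = softening^2 (> 0)
    {
        __shared__ float4 tile[TILE];
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        float4 pi = (i < n) ? pos[i] : make_float4(0.f, 0.f, 0.f, 0.f);
        float ax = 0.f, ay = 0.f, az = 0.f;

        for (int base = 0; base < n; base += TILE) {
            int j = base + threadIdx.x;
            tile[threadIdx.x] = (j < n) ? pos[j] : make_float4(0.f, 0.f, 0.f, 0.f);
            __syncthreads();

            int count = min(TILE, n - base);
            for (int k = 0; k < count; ++k) {                      // sum over the cached tile
                float dx = tile[k].x - pi.x;
                float dy = tile[k].y - pi.y;
                float dz = tile[k].z - pi.z;
                float r2 = dx * dx + dy * dy + dz * dz + eps2;     // Plummer softening
                float inv_r = rsqrtf(r2);
                float w = tile[k].w * inv_r * inv_r * inv_r;       // m_j / r^3
                ax += w * dx;  ay += w * dy;  az += w * dz;
            }
            __syncthreads();
        }
        if (i < n) acc[i] = make_float4(ax, ay, az, 0.f);
    }
    // Typical launch: accel_all_pairs<<<(n + TILE - 1) / TILE, TILE>>>(d_pos, d_acc, n, eps2);
    ```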

  9. A browser-based tool for conversion between Fortran NAMELIST and XML/HTML

    NASA Astrophysics Data System (ADS)

    Naito, O.

    A browser-based tool for conversion between Fortran NAMELIST and XML/HTML is presented. It runs on an HTML5 compliant browser and generates reusable XML files to aid interoperability. It also provides a graphical interface for editing and annotating variables in NAMELIST, hence serves as a primitive code documentation environment. Although the tool is not comprehensive, it could be viewed as a test bed for integrating legacy codes into modern systems.

  10. Using Computers in Fluids Engineering Education

    NASA Technical Reports Server (NTRS)

    Benson, Thomas J.

    1998-01-01

    Three approaches for using computers to improve basic fluids engineering education are presented. The use of computational fluid dynamics solutions to fundamental flow problems is discussed. The use of interactive, highly graphical software which operates on either a modern workstation or personal computer is highlighted. And finally, the development of 'textbooks' and teaching aids which are used and distributed on the World Wide Web is described. Arguments for and against this technology as applied to undergraduate education are also discussed.

  11. Use of graphics in the design office at the Military Aircraft Division of the British Aircraft Corporation

    NASA Technical Reports Server (NTRS)

    Coles, W. A.

    1975-01-01

    The CAD/CAM interactive computer graphics system was described; uses to which it has been put were shown, and current developments of the system were outlined. The system supports batch, time sharing, and fully interactive graphic processing. Engineers using the system may switch between these methods of data processing and problem solving to make the best use of the available resources. It is concluded that the introduction of on-line computing in the form of teletypes, storage tubes, and fully interactive graphics has resulted in large increases in productivity and reduced timescales in the geometric computing, numerical lofting and part programming areas, together with a greater utilization of the system in the technical departments.

  12. KAGLVis - On-line 3D Visualisation of Earth-observing-satellite Data

    NASA Astrophysics Data System (ADS)

    Szuba, Marek; Ameri, Parinaz; Grabowski, Udo; Maatouki, Ahmad; Meyer, Jörg

    2015-04-01

    One of the goals of the Large-Scale Data Management and Analysis project is to provide a high-performance framework facilitating management of data acquired by Earth-observing satellites such as Envisat. On the client-facing facet of this framework, we strive to provide a visualisation and basic analysis tool which can be used by scientists with minimal to no knowledge of the underlying infrastructure. Our tool, KAGLVis, is a JavaScript client-server Web application which leverages modern Web technologies to provide three-dimensional visualisation of satellite observables on a wide range of client systems. It takes advantage of the WebGL API to employ locally available GPU power for 3D rendering; this approach has been demonstrated to perform well even on relatively weak hardware such as integrated graphics chipsets found in modern laptop computers and, with some user-interface tuning, could even be usable on embedded devices such as smartphones or tablets. Data is fetched from the database back-end using a ReST API and cached locally, both in memory and using HTML5 Web Storage, to minimise network use. Computations, such as the calculation of cloud altitude from cloud-index measurements, can, depending on configuration, be performed on either the client or the server side. Keywords: satellite data, Envisat, visualisation, 3D graphics, Web application, WebGL, MEAN stack.

  13. Distributed computation of graphics primitives on a transputer network

    NASA Technical Reports Server (NTRS)

    Ellis, Graham K.

    1988-01-01

    A method is developed for distributing the computation of graphics primitives on a parallel processing network. Off-the-shelf transputer boards are used to perform the graphics transformations and scan-conversion tasks that would normally be assigned to a single transputer based display processor. Each node in the network performs a single graphics primitive computation. Frequently requested tasks can be duplicated on several nodes. The results indicate that the current distribution of commands on the graphics network shows a performance degradation when compared to the graphics display board alone. A change to more computation per node for every communication (perform more complex tasks on each node) may cause the desired increase in throughput.

  14. A Strategy for Making Content Reading Successful: Grades 4-6.

    ERIC Educational Resources Information Center

    Alvermann, Donna E.; Boothby, Paula R.

    A graphic organizer is a tree diagram that consists of vocabulary related to one particular concept. A modified version of a graphic organizer contains empty slots that represent missing information and actively involves students during the reading process as opposed to before or after. This modified graphic organizer can provide both the…

  15. Role of Graphics Tools in the Learning Design Process

    ERIC Educational Resources Information Center

    Laisney, Patrice; Brandt-Pomares, Pascale

    2015-01-01

    This paper discusses the design activities of students in secondary school in France. Graphics tools are now part of the capacity of design professionals. It is therefore apt to reflect on their integration into technological education. Has the use of intermediate graphical tools changed students' performance, and if so in what direction, in…

  16. Graphic Arts: Process Camera, Stripping, and Platemaking. Teacher Guide.

    ERIC Educational Resources Information Center

    Feasley, Sue C., Ed.

    This curriculum guide is the second in a three-volume series of instructional materials for competency-based graphic arts instruction. Each publication is designed to include the technical content and tasks necessary for a student to be employed in an entry-level graphic arts occupation. Introductory materials include an instructional/task…

  17. Critique and Process: Signature Pedagogies in the Graphic Design Classroom

    ERIC Educational Resources Information Center

    Motley, Phillip

    2017-01-01

    Like many disciplines in design and the visual fine arts, critique is a signature pedagogy in the graphic design classroom. It serves as both a formative and summative assessment while also giving students the opportunity to practice the habits of graphic design. Critiques help students become keen observers of relevant disciplinary criteria;…

  18. InteractiveROSETTA: a graphical user interface for the PyRosetta protein modeling suite.

    PubMed

    Schenkelberg, Christian D; Bystroff, Christopher

    2015-12-15

    Modern biotechnical research is becoming increasingly reliant on computational structural modeling programs to develop novel solutions to scientific questions. Rosetta is one such protein modeling suite that has already demonstrated wide applicability to a number of diverse research projects. Unfortunately, Rosetta is largely a command-line-driven software package which restricts its use among non-computational researchers. Some graphical interfaces for Rosetta exist, but typically are not as sophisticated as commercial software. Here, we present InteractiveROSETTA, a graphical interface for the PyRosetta framework that presents easy-to-use controls for several of the most widely used Rosetta protocols alongside a sophisticated selection system utilizing PyMOL as a visualizer. InteractiveROSETTA is also capable of interacting with remote Rosetta servers, facilitating sophisticated protocols that are not accessible in PyRosetta or which require greater computational resources. InteractiveROSETTA is freely available at https://github.com/schenc3/InteractiveROSETTA/releases and relies upon a separate download of PyRosetta which is available at http://www.pyrosetta.org after obtaining a license (free for academic use).

  19. OASYS (OrAnge SYnchrotron Suite): an open-source graphical environment for x-ray virtual experiments

    NASA Astrophysics Data System (ADS)

    Rebuffi, Luca; Sanchez del Rio, Manuel

    2017-08-01

    The evolution of hardware platforms, the modernization of software tools, the access of a large number of young people to the codes, and the popularization of open source software for scientific applications drove us to design OASYS (OrAnge SYnchrotron Suite), a completely new graphical environment for modelling X-ray experiments. The implemented software architecture provides not only an intuitive and very easy-to-use graphical interface, but also high flexibility and rapidity for interactive simulations, making it possible to change configurations and quickly compare multiple beamline configurations. Its purpose is to integrate in a synergetic way the most powerful calculation engines available. OASYS integrates different simulation strategies via the implementation of adequate simulation tools for X-ray optics (e.g. ray tracing and wave optics packages). It provides a language that makes them communicate by sending and receiving encapsulated data. Python has been chosen as the main programming language because of its universality and popularity in scientific computing. The software Orange, developed at the University of Ljubljana (SLO), is the high-level workflow engine that provides the interaction with the user and the communication mechanisms.

  20. A DDC Bibliography on Optical or Graphic Information Processing (Information Sciences Series). Volume I.

    ERIC Educational Resources Information Center

    Defense Documentation Center, Alexandria, VA.

    This unclassified-unlimited bibliography contains 183 references, with abstracts, dealing specifically with optical or graphic information processing. Citations are grouped under three headings: display devices and theory, character recognition, and pattern recognition. Within each group, they are arranged in accession number (AD-number) sequence.…

  1. An Interactive Graphics Program for Investigating Digital Signal Processing.

    ERIC Educational Resources Information Center

    Miller, Billy K.; And Others

    1983-01-01

    Describes development of an interactive computer graphics program for use in teaching digital signal processing. The program allows students to interactively configure digital systems on a monitor display and observe their system's performance by means of digital plots on the system's outputs. A sample program run is included. (JN)

  2. Performance Testing of GPU-Based Approximate Matching Algorithm on Network Traffic

    DTIC Science & Technology

    2015-03-01


  3. Parallelized CCHE2D flow model with CUDA Fortran on Graphics Process Units

    USDA-ARS?s Scientific Manuscript database

    This paper presents the CCHE2D implicit flow model parallelized using CUDA Fortran programming technique on Graphics Processing Units (GPUs). A parallelized implicit Alternating Direction Implicit (ADI) solver using Parallel Cyclic Reduction (PCR) algorithm on GPU is developed and tested. This solve...

  4. Graphic Arts: Book Three. The Press and Related Processes.

    ERIC Educational Resources Information Center

    Farajollahi, Karim; And Others

    The third of a three-volume set of instructional materials for a graphic arts course, this manual consists of nine instructional units dealing with presses and related processes. Covered in the units are basic press fundamentals, offset press systems, offset press operating procedures, offset inks and dampening chemistry, preventive maintenance…

  5. The Use of Computer Graphics in the Design Process.

    ERIC Educational Resources Information Center

    Palazzi, Maria

    This master's thesis examines applications of computer technology to the field of industrial design and ways in which technology can transform the traditional process. Following a statement of the problem, the history and applications of the fields of computer graphics and industrial design are reviewed. The traditional industrial design process…

  6. A Prototype Lisp-Based Soft Real-Time Object-Oriented Graphical User Interface for Control System Development

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan; Wong, Edmond; Simon, Donald L.

    1994-01-01

    A prototype Lisp-based soft real-time object-oriented Graphical User Interface for control system development is presented. The Graphical User Interface executes alongside a test system in laboratory conditions to permit observation of the closed loop operation through animation, graphics, and text. Since it must perform interactive graphics while updating the screen in real time, techniques are discussed which allow quick, efficient data processing and animation. Examples from an implementation are included to demonstrate some typical functionalities which allow the user to follow the control system's operation.

  7. A graphical language for reliability model generation

    NASA Technical Reports Server (NTRS)

    Howell, Sandra V.; Bavuso, Salvatore J.; Haley, Pamela J.

    1990-01-01

    A graphical interface capability of the hybrid automated reliability predictor (HARP) is described. The graphics-oriented (GO) module provides the user with a graphical language for modeling system failure modes through the selection of various fault tree gates, including sequence dependency gates, or by a Markov chain. With this graphical input language, a fault tree becomes a convenient notation for describing a system. In accounting for any sequence dependencies, HARP converts the fault-tree notation to a complex stochastic process that is reduced to a Markov chain which it can then solve for system reliability. The graphics capability is available for use on an IBM-compatible PC, a Sun, and a VAX workstation. The GO module is written in the C programming language and uses the Graphical Kernel System (GKS) standard for graphics implementation. The PC, VAX, and Sun versions of the HARP GO module are currently in beta-testing.

  8. GRASP/Ada: Graphical Representations of Algorithms, Structures, and Processes for Ada. The development of a program analysis environment for Ada: Reverse engineering tools for Ada, task 2, phase 3

    NASA Technical Reports Server (NTRS)

    Cross, James H., II

    1991-01-01

    The main objective is the investigation, formulation, and generation of graphical representations of algorithms, structures, and processes for Ada (GRASP/Ada). The presented task, in which various graphical representations that can be extracted or generated from source code are described and categorized, is focused on reverse engineering. The following subject areas are covered: the system model; control structure diagram generator; object oriented design diagram generator; user interface; and the GRASP library.

  9. Valorisation of Como Historical Cadastral Maps Through Modern Web Geoservices

    NASA Astrophysics Data System (ADS)

    Brovelli, M. A.; Minghini, M.; Zamboni, G.

    2012-07-01

    Cartographic cultural heritage preserved in worldwide archives is often stored in the original paper version only, thus restricting both the chances of utilization and the range of possible users. The Web C.A.R.T.E. system addressed this issue with regard to the precious cadastral maps preserved at the State Archive of Como. The aim of the project was to improve the visibility and accessibility of this heritage using the latest free and open source tools for processing, cataloguing and web publishing the maps. The resulting architecture should therefore assist the State Archive of Como in managing its cartographic contents. After a pre-processing stage consisting of digitization and georeferencing steps, maps were provided with metadata, compiled according to the current Italian standards and managed through an ad hoc version of the GeoNetwork Opensource geocatalog software. A dedicated MapFish-based webGIS client, with an optimized version also for mobile platforms, was built for map publication and 2D navigation. A module for 3D visualization of the cadastral maps was finally developed using the NASA World Wind Virtual Globe. Thanks to a temporal slidebar, time was also included in the system, producing a 4D graphical user interface. The overall architecture was built entirely with free and open source software and allows a direct and intuitive consultation of the historical maps. Besides the notable advantage of keeping the original paper maps intact, the system greatly simplifies the work of the State Archive of Como's common users and at the same time widens the range of users thanks to the modernization of map consultation tools.

  10. Graphical Language for Data Processing

    NASA Technical Reports Server (NTRS)

    Alphonso, Keith

    2011-01-01

    A graphical language for processing data allows processing elements to be connected with virtual wires that represent data flows between processing modules. The processing of complex data, such as lidar data, requires many different algorithms to be applied. The purpose of this innovation is to automate the processing of complex data, such as lidar data, without the need for complex scripting and programming languages. The system consists of a set of user-interface components that allow the user to drag and drop various algorithmic and processing components onto a process graph. By working graphically, the user can completely visualize the process flow and create complex diagrams. This innovation supports the nesting of graphs, such that a graph can be included in another graph as a single processing step. In addition to the user interface components, the system includes a set of .NET classes that represent the graph internally; these classes provide the internal system representation of the graph built in the graphical user interface. The system includes a graph execution component that reads the internal representation of the graph (as described above) and executes that graph. The execution of the graph follows the interpreted model of execution in that each node is traversed and executed from the original internal representation. In addition, there are components that allow external code elements, such as algorithms, to be easily integrated into the system, thus making the system highly expandable.
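
    As a rough illustration of the interpreted execution model described above, the following hypothetical sketch (plain C++, since the document's other examples are C/C++/CUDA) evaluates a tiny process graph by traversing each node and pulling data along its input "wires". All class and function names are invented for illustration; the actual system is a .NET implementation driven by a drag-and-drop user interface and supports features (nested graphs, external algorithm plug-ins) not shown here.

    ```cpp
    // Minimal interpreted data-flow graph, sketched in C++ (illustrative names only).
    #include <functional>
    #include <iostream>
    #include <memory>
    #include <string>
    #include <vector>

    struct Node {
        std::string name;
        std::vector<std::shared_ptr<Node>> inputs;                 // "virtual wires" into this node
        std::function<double(const std::vector<double>&)> op;      // the processing step
        bool done = false;
        double value = 0.0;
    };

    // Interpreted execution: traverse each node, evaluating its inputs first.
    double execute(const std::shared_ptr<Node>& node) {
        if (node->done) return node->value;                        // already evaluated
        std::vector<double> args;
        for (auto& in : node->inputs) args.push_back(execute(in)); // pull data along the wires
        node->value = node->op(args);
        node->done = true;
        std::cout << node->name << " -> " << node->value << "\n";
        return node->value;
    }

    int main() {
        auto source = std::make_shared<Node>();
        source->name = "source";  source->op = [](const std::vector<double>&) { return 10.0; };

        auto scale = std::make_shared<Node>();
        scale->name = "scale";    scale->inputs = {source};
        scale->op = [](const std::vector<double>& a) { return 2.5 * a[0]; };

        auto offset = std::make_shared<Node>();
        offset->name = "offset";  offset->inputs = {scale};
        offset->op = [](const std::vector<double>& a) { return a[0] + 1.0; };

        execute(offset);                                           // run the graph from its sink
        return 0;
    }
    ```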

  11. Accelerating Fibre Orientation Estimation from Diffusion Weighted Magnetic Resonance Imaging Using GPUs

    PubMed Central

    Hernández, Moisés; Guerrero, Ginés D.; Cecilia, José M.; García, José M.; Inuggi, Alberto; Jbabdi, Saad; Behrens, Timothy E. J.; Sotiropoulos, Stamatios N.

    2013-01-01

    With the performance of central processing units (CPUs) having effectively reached a limit, parallel processing offers an alternative for applications with high computational demands. Modern graphics processing units (GPUs) are massively parallel processors that can execute simultaneously thousands of light-weight processes. In this study, we propose and implement a parallel GPU-based design of a popular method that is used for the analysis of brain magnetic resonance imaging (MRI). More specifically, we are concerned with a model-based approach for extracting tissue structural information from diffusion-weighted (DW) MRI data. DW-MRI offers, through tractography approaches, the only way to study brain structural connectivity, non-invasively and in-vivo. We parallelise the Bayesian inference framework for the ball & stick model, as it is implemented in the tractography toolbox of the popular FSL software package (University of Oxford). For our implementation, we utilise the Compute Unified Device Architecture (CUDA) programming model. We show that the parameter estimation, performed through Markov Chain Monte Carlo (MCMC), is accelerated by at least two orders of magnitude, when comparing a single GPU with the respective sequential single-core CPU version. We also illustrate similar speed-up factors (up to 120x) when comparing a multi-GPU with a multi-CPU implementation. PMID:23658616
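
    The acceleration reported above comes from running many independent Markov chains at once, roughly one per voxel. The sketch below illustrates only that pattern: each CUDA thread runs its own random-walk Metropolis sampler. The ball & stick likelihood of the actual FSL implementation is replaced here by a trivial one-dimensional Gaussian posterior so the example stays self-contained, and all names are invented.

    ```cuda
    // Illustrative CUDA sketch: one independent Metropolis-Hastings chain per thread
    // (one "voxel" each). Not the FSL/ball & stick code; the likelihood is a stand-in.
    #include <cuda_runtime.h>
    #include <curand_kernel.h>

    __device__ float log_post(float theta, float y)
    {
        float d = theta - y;                 // Gaussian likelihood, flat prior
        return -0.5f * d * d;
    }

    __global__ void mcmc_per_voxel(const float* y, float* posterior_mean,
                                   int n_voxels, int n_iter, int burn_in,   // n_iter > burn_in
                                   float step, unsigned long long seed)
    {
        int v = blockIdx.x * blockDim.x + threadIdx.x;
        if (v >= n_voxels) return;

        curandState rng;
        curand_init(seed, v, 0, &rng);

        float theta = 0.0f, lp = log_post(theta, y[v]);
        float sum = 0.0f;
        int kept = 0;
        for (int t = 0; t < n_iter; ++t) {
            float prop = theta + step * curand_normal(&rng);      // random-walk proposal
            float lp_prop = log_post(prop, y[v]);
            if (logf(curand_uniform(&rng)) < lp_prop - lp) {      // accept / reject
                theta = prop; lp = lp_prop;
            }
            if (t >= burn_in) { sum += theta; ++kept; }
        }
        posterior_mean[v] = sum / kept;
    }
    // Typical launch:
    // mcmc_per_voxel<<<(n_voxels + 255) / 256, 256>>>(d_y, d_mean, n_voxels, 5000, 1000, 0.5f, 1234ULL);
    ```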

  12. NGL Viewer: a web application for molecular visualization.

    PubMed

    Rose, Alexander S; Hildebrand, Peter W

    2015-07-01

    The NGL Viewer (http://proteinformatics.charite.de/ngl) is a web application for the visualization of macromolecular structures. By fully adopting capabilities of modern web browsers, such as WebGL, for molecular graphics, the viewer can interactively display large molecular complexes and is also unaffected by the retirement of third-party plug-ins like Flash and Java Applets. Generally, the web application offers comprehensive molecular visualization through a graphical user interface so that life scientists can easily access and profit from available structural data. It supports common structural file-formats (e.g. PDB, mmCIF) and a variety of molecular representations (e.g. 'cartoon, spacefill, licorice'). Moreover, the viewer can be embedded in other web sites to provide specialized visualizations of entries in structural databases or results of structure-related calculations.

  13. The Graphics Tablet - A Valuable Tool for the Digital STEM Teacher

    NASA Astrophysics Data System (ADS)

    Stephens, Jeff

    2018-04-01

    I am inspired to write this article after coming across some publications in The Physics Teacher that all hit on topics of personal interest and experience. Similar to Christensen, my goal in writing this is to encourage other physics educators to take advantage of modern technology in delivering content to students and to feel comfortable doing so. There are numerous ways in which to create screencasts and lecture videos, some of which have been addressed in other articles. I invite those interested in learning how to create these videos to contact their educational technology staff or perform some internet searches on the topic. I will focus this article on the technology that enhanced the content I was delivering to my students. I will share a bit of my journey towards creating video materials and introduce a vital piece of technology, the graphics tablet, which changed the way I communicate with my students.

  14. Fast Acceleration of 2D Wave Propagation Simulations Using Modern Computational Accelerators

    PubMed Central

    Wang, Wei; Xu, Lifan; Cavazos, John; Huang, Howie H.; Kay, Matthew

    2014-01-01

    Recent developments in modern computational accelerators like Graphics Processing Units (GPUs) and coprocessors provide great opportunities for making scientific applications run faster than ever before. However, efficient parallelization of scientific code using new programming tools like CUDA requires a high level of expertise that is not available to many scientists. This, plus the fact that parallelized code is usually not portable to different architectures, creates major challenges for exploiting the full capabilities of modern computational accelerators. In this work, we sought to overcome these challenges by studying how to achieve both automated parallelization using OpenACC and enhanced portability using OpenCL. We applied our parallelization schemes using GPUs as well as Intel Many Integrated Core (MIC) coprocessor to reduce the run time of wave propagation simulations. We used a well-established 2D cardiac action potential model as a specific case-study. To the best of our knowledge, we are the first to study auto-parallelization of 2D cardiac wave propagation simulations using OpenACC. Our results identify several approaches that provide substantial speedups. The OpenACC-generated GPU code achieved more than speedup above the sequential implementation and required the addition of only a few OpenACC pragmas to the code. An OpenCL implementation provided speedups on GPUs of at least faster than the sequential implementation and faster than a parallelized OpenMP implementation. An implementation of OpenMP on Intel MIC coprocessor provided speedups of with only a few code changes to the sequential implementation. We highlight that OpenACC provides an automatic, efficient, and portable approach to achieve parallelization of 2D cardiac wave simulations on GPUs. Our approach of using OpenACC, OpenCL, and OpenMP to parallelize this particular model on modern computational accelerators should be applicable to other computational models of wave propagation in multi-dimensional media. PMID:24497950
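
    The simulations in question are explicit finite-difference updates over a 2D grid, which is exactly the kind of loop that OpenACC pragmas or an OpenCL/CUDA kernel parallelise well. As a hedged illustration (not the cardiac action potential model used in the study, and not OpenACC output), the kernel below advances a generic 2D wave equation by one time step; every interior grid point is updated by an independent thread.

    ```cuda
    // Illustrative CUDA stencil for one explicit time step of a 2D wave equation.
    // The study itself parallelises a cardiac model via OpenACC/OpenCL/OpenMP; this
    // kernel only shows the kind of per-grid-point update being accelerated.
    #include <cuda_runtime.h>

    __global__ void wave_step(const float* __restrict__ u_prev,   // u at t - dt
                              const float* __restrict__ u_curr,   // u at t
                              float* u_next,                       // u at t + dt (output)
                              int nx, int ny, float c2_dt2_h2)     // (c * dt / h)^2
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        int j = blockIdx.y * blockDim.y + threadIdx.y;
        if (i <= 0 || j <= 0 || i >= nx - 1 || j >= ny - 1) return;   // boundary held fixed

        int idx = j * nx + i;
        float lap = u_curr[idx - 1] + u_curr[idx + 1]
                  + u_curr[idx - nx] + u_curr[idx + nx]
                  - 4.0f * u_curr[idx];                               // 5-point Laplacian
        u_next[idx] = 2.0f * u_curr[idx] - u_prev[idx] + c2_dt2_h2 * lap;
    }
    // Typical launch:
    // dim3 block(16, 16), grid((nx + 15) / 16, (ny + 15) / 16);
    // wave_step<<<grid, block>>>(d_prev, d_curr, d_next, nx, ny, 0.25f);
    ```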

  15. Graphical tools for TV weather presentation

    NASA Astrophysics Data System (ADS)

    Najman, M.

    2010-09-01

    Contemporary meteorology and its media presentation face, in my opinion, the following key tasks: delivering the meteorological information to the end user/spectator in an understandable and modern fashion that follows the industry standard of video output (HD, 16:9); showing, besides weather icons, the outputs of numerical weather prediction models, climatological data, satellite and radar images, and observed weather that is as current as possible; not compromising the accuracy of the presented data; the ability to prepare and adjust the weather show according to the actual synoptic situation; the ability to refocus and completely adjust the weather show to current extreme weather events; a ground-map resolution for weather data presentation of at least 20 m/pixel, to be able to follow the resolution of the numerical weather prediction models; and the ability to switch between different numerical weather prediction models each day, each show, or even in the middle of one weather show. The graphical weather software needs to be flexible and fast; graphical changes need to be implementable and airable within minutes before the show, or even live. These tasks are so demanding that the usual approach of custom graphics cannot deal with them: it was not able to change the show every day, and the shows were static and identical day after day. Changing the content of the weather show daily was costly and most of the time impossible with the usual approach. Development in this area is fast, though, and there are several options for weather-predicting organisations, such as national meteorological offices and private meteorological companies, to solve this problem. What are the ways to solve it? What are the limitations and advantages of contemporary graphical tools for meteorologists? All these questions will be answered.

  16. TRIIG - Time-lapse reproduction of images through interactive graphics. [digital processing of quality hard copy

    NASA Technical Reports Server (NTRS)

    Buckner, J. D.; Council, H. W.; Edwards, T. R.

    1974-01-01

    Description of the hardware and software implementing the system of time-lapse reproduction of images through interactive graphics (TRIIG). The system produces a quality hard copy of processed images in a fast and inexpensive manner. This capability allows for optimal development of processing software through the rapid viewing of many image frames in an interactive mode. Three critical optical devices are used to reproduce an image: an Optronics photo reader/writer, the Adage Graphics Terminal, and Polaroid Type 57 high speed film. Typical sources of digitized images are observation satellites, such as ERTS or Mariner, computer coupled electron microscopes for high-magnification studies, or computer coupled X-ray devices for medical research.

  17. Graphic Warning Labels Elicit Affective and Thoughtful Responses from Smokers: Results of a Randomized Clinical Trial

    PubMed Central

    Evans, Abigail T.; Peters, Ellen; Strasser, Andrew A.; Emery, Lydia F.; Sheerin, Kaitlin M.; Romer, Daniel

    2015-01-01

    Objective Observational research suggests that placing graphic images on cigarette warning labels can reduce smoking rates, but field studies lack experimental control. Our primary objective was to determine the psychological processes set in motion by naturalistic exposure to graphic vs. text-only warnings in a randomized clinical trial involving exposure to modified cigarette packs over a 4-week period. Theories of graphic-warning impact were tested by examining affect toward smoking, credibility of warning information, risk perceptions, quit intentions, warning label memory, and smoking risk knowledge. Methods Adults who smoked between 5 and 40 cigarettes daily (N = 293; mean age = 33.7), did not have a contra-indicated medical condition, and did not intend to quit were recruited from Philadelphia, PA and Columbus, OH. Smokers were randomly assigned to receive their own brand of cigarettes for four weeks in one of three warning conditions: text only, graphic images plus text, or graphic images with elaborated text. Results Data from 244 participants who completed the trial were analyzed in structural-equation models. The presence of graphic images (compared to text-only) caused more negative affect toward smoking, a process that indirectly influenced risk perceptions and quit intentions (e.g., image->negative affect->risk perception->quit intention). Negative affect from graphic images also enhanced warning credibility including through increased scrutiny of the warnings, a process that also indirectly affected risk perceptions and quit intentions (e.g., image->negative affect->risk scrutiny->warning credibility->risk perception->quit intention). Unexpectedly, elaborated text reduced warning credibility. Finally, graphic warnings increased warning-information recall and indirectly increased smoking-risk knowledge at the end of the trial and one month later. Conclusions In the first naturalistic clinical trial conducted, graphic warning labels are more effective than text-only warnings in encouraging smokers to consider quitting and in educating them about smoking’s risks. Negative affective reactions to smoking, thinking about risks, and perceptions of credibility are mediators of their impact. Trial Registration Clinicaltrials.gov NCT01782053 PMID:26672982

  18. The Study of Graphic Sense and Its Effects on the Acquisition of Literacy. Final Report.

    ERIC Educational Resources Information Center

    Hernandez-Chavez, Eduardo; Curtis, Jan

    This report describes a study on the development of children's conceptualizations of written language, that is, their graphic sense. The study investigated three issues: (1) whether acquisition of literacy is a developmental process common to all normal children, (2) whether the levels of graphic sense tend to be associated with particular…

  19. Groundwater Resources Assessment under the Pressures of Humanity and Climate Changes

    Treesearch

    Bret Bruce; Diana Allen; Henrique Chaves; Gordon Grant; Gualbert Oude Essink; Henk Kooi; Ian White; Jason Gurdak; Jay Famiglietti; Jose Luis Martin-Bordes; Kevin Hiscock; Matthew Rodell; Neno Kukuric; Peter B. McMahon; Richard Taylor; Timothy Green; Yoseph Yechieli

    2008-01-01

    Given the vision and mission statements for GRAPHIC above, this document provides an updated framework for the GRAPHIC program. The approach to addressing global issues under the GRAPHIC umbrella involves case studies designed to cover a broad range of the identified Subjects, Methods, and Regions. Interdependencies of factors and processes affecting subsurface water...

  20. Interplay of Computer and Paper-Based Sketching in Graphic Design

    ERIC Educational Resources Information Center

    Pan, Rui; Kuo, Shih-Ping; Strobel, Johannes

    2013-01-01

    The purpose of this study is to investigate student designers' attitude and choices towards the use of computers and paper sketches when involved in a graphic design process. 65 computer graphic technology undergraduates participated in this research. A mixed method study with survey and in-depth interviews was applied to answer the research…

  1. Pre-Service Science Teachers' Construction and Interpretation of Graphs

    ERIC Educational Resources Information Center

    Ergül, N. Remziye

    2018-01-01

    Data and graph analysis and interpretation are important parts of science process skills and the science curriculum; they refer to the visual display of data using relevant graphical representations. Graphics are one of the tools used in science courses to explain the relationships among concepts, and therefore it is important to know data…

  2. Improving aircraft conceptual design - A PHIGS interactive graphics interface for ACSYNT

    NASA Technical Reports Server (NTRS)

    Wampler, S. G.; Myklebust, A.; Jayaram, S.; Gelhausen, P.

    1988-01-01

    A CAD interface has been created for the 'ACSYNT' aircraft conceptual design code that permits the execution and control of the design process via interactive graphics menus. This CAD interface was coded entirely with the new three-dimensional graphics standard, the Programmer's Hierarchical Interactive Graphics System. The CAD/ACSYNT system is designed for use by state-of-the-art high-speed imaging work stations. Attention is given to the approaches employed in modeling, data storage, and rendering.

  3. A graphical weather system design for the NASA transport systems research vehicle B-737

    NASA Technical Reports Server (NTRS)

    Scanlon, Charles H.

    1992-01-01

    A graphical weather system was designed for testing in the NASA Transport Systems Research Vehicle B-737 airplane and simulator. The purpose of these tests was to measure the impact of graphical weather products on aircrew decision processes, weather situation awareness, reroute clearances, workload, and weather monitoring. The flight crew graphical weather interface is described along with integration of the weather system with the flight navigation system, and data link transmission methods for sending weather data to the airplane.

  4. SU-F-J-72: A Clinical Usable Integrated Contouring Quality Evaluation Software for Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, S; Dolly, S; Cai, B

    Purpose: To introduce the Auto Contour Evaluation (ACE) software, a clinically usable, user-friendly, efficient, all-in-one toolbox for automatically identifying common contouring errors in radiotherapy treatment planning using supervised machine learning techniques. Methods: ACE is developed in C# using the Microsoft .NET Framework and Windows Presentation Foundation (WPF) for elegant GUI design and smooth GUI transition animations, through the integration of graphics engines and high dots-per-inch (DPI) settings on modern high-resolution monitors. The industry-standard software design pattern, the Model-View-ViewModel (MVVM) pattern, is chosen as the major architecture of ACE for a neat coding structure, deep modularization, easy maintainability, and seamless communication with other clinical software. ACE consists of 1) a patient data importing module integrated with the clinical patient database server, 2) a module that simultaneously displays 2D DICOM images and RT structures, 3) a 3D RT structure visualization module using the Visualization Toolkit (VTK) library, and 4) a contour evaluation module using supervised pattern recognition algorithms to detect contouring errors and display detection results. ACE relies on supervised learning algorithms to handle all image processing and data processing jobs. Implementations of the related algorithms are powered by the Accord.NET scientific computing library for better efficiency and effectiveness. Results: ACE can take a patient's CT images and RT structures from commercial treatment planning software via direct user input or from the patient database. All functionalities, including 2D and 3D image visualization and RT contour error detection, have been demonstrated with real clinical patient cases. Conclusion: ACE implements supervised learning algorithms and combines image processing and graphical visualization modules for RT contour verification. ACE has great potential for automated radiotherapy contouring quality verification. Structured with the MVVM pattern, it is highly maintainable and extensible, and supports smooth connections with other clinical software tools.

  5. GRASP - A Prototype Interactive Graphic Sawing Program - (Forest Products Journal)

    Treesearch

    Luis G. Occeña; Daniel L. Schmoldt

    1996-01-01

    A versatile microcomputer-based interactive graphics sawing program has been developed as a tool for modeling various hardwood processes, from bucking and topping to log sawing, lumber edging, secondary processing, and even veneering. The microcomputer platform makes the tool affordable and accessible. A solid modeling basis provides the tool with a sound geometrical...

  6. Digital-Computer Processing of Graphical Data. Final Report.

    ERIC Educational Resources Information Center

    Freeman, Herbert

    The final report of a two-year study concerned with the digital-computer processing of graphical data. Five separate investigations carried out under this study are described briefly, and a detailed bibliography, complete with abstracts, is included in which are listed the technical papers and reports published during the period of this program.…

  7. GRASP - A Prototype Interactive Graphic Sawing Program - (MU-IE Technical Report)

    Treesearch

    Luis G. Occeña; Daniel L. Schmoldt

    1995-01-01

    A versatile microcomputer-based interactive graphics program has been developed as a tool for modeling various hardwood processes, from bucking and topping to log sawing, lumber edging, secondary processing, even veneering. The microcomputer platform makes the tool affordable and accessible.A solid modeling basis provides the tool with a sound geometrical and...

  8. How to Apply for Protection Time Graphic

    EPA Pesticide Factsheets

    We will review insect repellent products that voluntarily apply to use the repellency awareness graphic to ensure that their scientific data meet current testing protocols and standard evaluation processes.

  9. Graphical representations of data improve student understanding of measurement and uncertainty: An eye-tracking study

    NASA Astrophysics Data System (ADS)

    Susac, Ana; Bubic, Andreja; Martinjak, Petra; Planinic, Maja; Palmovic, Marijan

    2017-12-01

    Developing a better understanding of the measurement process and measurement uncertainty is one of the main goals of university physics laboratory courses. This study investigated the influence of graphical representation of data on student understanding and interpreting of measurement results. A sample of 101 undergraduate students (48 first year students and 53 third and fifth year students) from the Department of Physics, University of Zagreb were tested with a paper-and-pencil test consisting of eight multiple-choice test items about measurement uncertainties. One version of the test items included graphical representations of the measurement data. About half of the students solved that version of the test while the remaining students solved the same test without graphical representations. The results have shown that the students who had the graphical representation of data scored higher than their colleagues without graphical representation. In the second part of the study, measurements of eye movements were carried out on a sample of thirty undergraduate students from the Department of Physics, University of Zagreb while students were solving the same test on a computer screen. The results revealed that students who had the graphical representation of data spent considerably less time viewing the numerical data than the other group of students. These results indicate that graphical representation may be beneficial for data processing and data comparison. Graphical representation helps with visualization of data and therefore reduces the cognitive load on students while performing measurement data analysis, so students should be encouraged to use it.

  10. Analyzing women's roles through graphic representation of narratives.

    PubMed

    Hall, Joanne M

    2003-08-01

    A 1992 triangulated international nursing study of women's health was reported. The researchers used the perspectives of feminism and symbolic interactionism, specifically role theory. A narrative analysis was done to clarify the concept of role integration. The narrative analysis was reported in 1992, but graphic/visual techniques used in the team dialogue process of narrative analysis were not reported due to space limitations. These techniques have not been reported elsewhere and thus remain innovative. Specific steps in the method are outlined here in detail as an audit trail. The process would be useful to other qualitative researchers as an exemplar of one novel way that verbal data can be abstracted visually/graphically. Suggestions are included for aspects of narrative, in addition to roles, that could be depicted graphically in qualitative research.

  11. Large-scale neural circuit mapping data analysis accelerated with the graphical processing unit (GPU).

    PubMed

    Shi, Yulin; Veidenbaum, Alexander V; Nicolau, Alex; Xu, Xiangmin

    2015-01-15

    Modern neuroscience research demands computing power. Neural circuit mapping studies such as those using laser scanning photostimulation (LSPS) produce large amounts of data and require intensive computation for post hoc processing and analysis. Here we report on the design and implementation of a cost-effective desktop computer system for accelerated experimental data processing with recent GPU computing technology. A new version of Matlab software with GPU enabled functions is used to develop programs that run on Nvidia GPUs to harness their parallel computing power. We evaluated both the central processing unit (CPU) and GPU-enabled computational performance of our system in benchmark testing and practical applications. The experimental results show that the GPU-CPU co-processing of simulated data and actual LSPS experimental data clearly outperformed the multi-core CPU with up to a 22× speedup, depending on computational tasks. Further, we present a comparison of numerical accuracy between GPU and CPU computation to verify the precision of GPU computation. In addition, we show how GPUs can be effectively adapted to improve the performance of commercial image processing software such as Adobe Photoshop. To our best knowledge, this is the first demonstration of GPU application in neural circuit mapping and electrophysiology-based data processing. Together, GPU enabled computation enhances our ability to process large-scale data sets derived from neural circuit mapping studies, allowing for increased processing speeds while retaining data precision.

  12. Large scale neural circuit mapping data analysis accelerated with the graphical processing unit (GPU)

    PubMed Central

    Shi, Yulin; Veidenbaum, Alexander V.; Nicolau, Alex; Xu, Xiangmin

    2014-01-01

    Background Modern neuroscience research demands computing power. Neural circuit mapping studies such as those using laser scanning photostimulation (LSPS) produce large amounts of data and require intensive computation for post-hoc processing and analysis. New Method Here we report on the design and implementation of a cost-effective desktop computer system for accelerated experimental data processing with recent GPU computing technology. A new version of Matlab software with GPU enabled functions is used to develop programs that run on Nvidia GPUs to harness their parallel computing power. Results We evaluated both the central processing unit (CPU) and GPU-enabled computational performance of our system in benchmark testing and practical applications. The experimental results show that the GPU-CPU co-processing of simulated data and actual LSPS experimental data clearly outperformed the multi-core CPU with up to a 22x speedup, depending on computational tasks. Further, we present a comparison of numerical accuracy between GPU and CPU computation to verify the precision of GPU computation. In addition, we show how GPUs can be effectively adapted to improve the performance of commercial image processing software such as Adobe Photoshop. Comparison with Existing Method(s) To our best knowledge, this is the first demonstration of GPU application in neural circuit mapping and electrophysiology-based data processing. Conclusions Together, GPU enabled computation enhances our ability to process large-scale data sets derived from neural circuit mapping studies, allowing for increased processing speeds while retaining data precision. PMID:25277633

  13. Spatial 3D infrastructure: display-independent software framework, high-speed rendering electronics, and several new displays

    NASA Astrophysics Data System (ADS)

    Chun, Won-Suk; Napoli, Joshua; Cossairt, Oliver S.; Dorval, Rick K.; Hall, Deirdre M.; Purtell, Thomas J., II; Schooler, James F.; Banker, Yigal; Favalora, Gregg E.

    2005-03-01

    We present a software and hardware foundation to enable the rapid adoption of 3-D displays. Different 3-D displays - such as multiplanar, multiview, and electroholographic displays - naturally require different rendering methods. The adoption of these displays in the marketplace will be accelerated by a common software framework. The authors designed the SpatialGL API, a new rendering framework that unifies these display methods under one interface. SpatialGL enables complementary visualization assets to coexist through a uniform infrastructure. Also, SpatialGL supports legacy interfaces such as the OpenGL API. The authors' first implementation of SpatialGL uses multiview and multislice rendering algorithms to exploit the performance of modern graphics processing units (GPUs) to enable real-time visualization of 3-D graphics from medical imaging, oil & gas exploration, and homeland security. At the time of writing, SpatialGL runs on COTS workstations (both Windows and Linux) and on Actuality's high-performance embedded computational engine that couples an NVIDIA GeForce 6800 Ultra GPU, an AMD Athlon 64 processor, and a proprietary, high-speed, programmable volumetric frame buffer that interfaces to a 1024 x 768 x 3 digital projector. Progress is illustrated using an off-the-shelf multiview display, Actuality's multiplanar Perspecta Spatial 3D System, and an experimental multiview display. The experimental display is a quasi-holographic view-sequential system that generates aerial imagery measuring 30 mm x 25 mm x 25 mm, providing 198 horizontal views.

  14. Parallel Calculations in LS-DYNA

    NASA Astrophysics Data System (ADS)

    Vartanovich Mkrtychev, Oleg; Aleksandrovich Reshetov, Andrey

    2017-11-01

    Nowadays, structural mechanics exhibits a trend towards numerical solutions of increasingly extensive and detailed problems, which requires that the capacity of computing systems be enhanced. Such enhancement can be achieved by different means. For example, if the computing system is a single workstation, its components can be replaced or extended (CPU, memory, etc.). In essence, such modification eventually entails replacement of the entire workstation, since replacement of certain components necessitates exchange of others (faster CPUs and memory devices require buses with higher throughput, etc.). Special consideration must be given to the capabilities of modern video cards, which constitute powerful computing systems capable of processing data in parallel. Interestingly, tools originally designed to render high-performance graphics can be applied to problems not immediately related to graphics (CUDA, OpenCL, shaders, etc.). However, not all software suites exploit the capacity of video cards. Another way to increase the capacity of a computing system is to implement a cluster architecture: to add cluster nodes (workstations) and to increase the network communication speed between the nodes. The advantage of this approach is extensive growth: a quite powerful system can be obtained by combining nodes that are individually not particularly powerful, and separate nodes may have different capacities. This paper considers the use of a clustered computing system for solving problems of structural mechanics with LS-DYNA software. A mere 2-node cluster proved sufficient to establish the range of dependencies studied.

  15. The VLBI Data Analysis Software νSolve: Development Progress and Plans for the Future

    NASA Astrophysics Data System (ADS)

    Bolotin, S.; Baver, K.; Gipson, J.; Gordon, D.; MacMillan, D.

    2014-12-01

    The program νSolve is a part of the CALC/SOLVE VLBI data analysis system. It is a replacement for interactive SOLVE, the part of CALC/SOLVE that is used for preliminary data analysis of new VLBI sessions. νSolve is completely new software. It is written in C++ and has a modern graphical user interface. In this article we present the capabilities of the software, its current status, and our plans for future development.

  16. Remote Sensing and Characterization of Oil on Water Using Coherent Fringe Projection and Holographic in-Line Interferometry

    DTIC Science & Technology

    2013-03-01

    [Fragmented record: only snippets of the report form and cited references were captured. The recoverable abstract text states that light in the green-blue region can also degrade oil, indicating that properly structured laser clean-up can be an alternative method of decontamination. Cited fragments include work on holographic recording on photo-thermo-plastic structures (J. Modern Opt. 57(10), 854-858, 2010) and on dynamic holography by N. Kukhtarev and T. Kukhtareva. Journal article, 21-10-2013.]

  17. Software-Based Visual Loan Calculator For Banking Industry

    NASA Astrophysics Data System (ADS)

    Isizoh, A. N.; Anazia, A. E.; Okide, S. O. 3; Onyeyili, T. I.; Okwaraoka, C. A. P.

    2012-03-01

    Loan calculation software is essential in the modern banking industry, where many design techniques are applied for security reasons. This paper presents the software-based design and implementation of a visual loan calculator for the banking industry using Visual Basic .Net (VB.Net). The fundamental approach is to develop a Graphical User Interface (GUI) using VB.Net tools and then to develop a working program that calculates the interest on any loan obtained. The VB.Net program was written and implemented, and the software proved satisfactory.

  18. Optimal service using Matlab - simulink controlled Queuing system at call centers

    NASA Astrophysics Data System (ADS)

    Balaji, N.; Siva, E. P.; Chandrasekaran, A. D.; Tamilazhagan, V.

    2018-04-01

    This paper presents graphical, integrated, model-based research on telephone call centres and introduces an important feature of queueing systems: impatient customers and abandonment. The modern call centre is a complex socio-technical system, and queuing theory has become a suitable tool in the telecom industry for providing better online services. Matlab-Simulink multi-queue structured models provide better solutions for complex situations at call centres, and service performance measures are analysed at an optimal level through the Simulink queuing model.
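
    To make the abandonment mechanism concrete, here is a minimal multi-server queue simulation with impatient callers, written in plain Python rather than Matlab/Simulink. The arrival, service, and patience parameters, and the rule that a caller abandons when the offered wait exceeds their patience, are illustrative assumptions rather than values from the paper.

        # Minimal sketch of a multi-server call-centre queue with abandonment.
        import heapq
        import random

        def simulate(n_calls=100_000, servers=5, arrival_rate=4.0,
                     service_rate=1.0, mean_patience=2.0, seed=1):
            random.seed(seed)
            free_at = [0.0] * servers          # time at which each server is next free
            heapq.heapify(free_at)
            t, served, abandoned, total_wait = 0.0, 0, 0, 0.0
            for _ in range(n_calls):
                t += random.expovariate(arrival_rate)        # Poisson arrivals
                patience = random.expovariate(1.0 / mean_patience)
                earliest = free_at[0]
                wait = max(0.0, earliest - t)
                if wait > patience:                          # caller hangs up
                    abandoned += 1
                    continue
                start = max(t, earliest)
                finish = start + random.expovariate(service_rate)
                heapq.heapreplace(free_at, finish)           # that server is busy until 'finish'
                served += 1
                total_wait += wait
            return served, abandoned, total_wait / max(served, 1)

        if __name__ == "__main__":
            served, abandoned, avg_wait = simulate()
            print(f"served={served} abandoned={abandoned} mean wait={avg_wait:.3f}")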

  19. KSC-99pp0441

    NASA Image and Video Library

    1999-04-26

    In this broad view, the new full-color, flat panel Multifunction Electronic Display Subsystem (MEDS) is shown in the cockpit of the orbiter Atlantis. It is often called the "glass cockpit." The recently installed MEDS upgrade improves crew/orbiter interaction with easy-to-read, graphic portrayals of key flight indicators like attitude display and Mach speed. The installation makes Atlantis the most modern orbiter in the fleet and equals the systems on current commercial jet airliners and military aircraft. Atlantis is scheduled to fly on mission STS-101 in early December.

  20. The Role of Western Germany in West European Defense

    DTIC Science & Technology

    1966-04-08

    [Fragmented record: the captured text consists of bibliography entries from the report rather than an abstract, including works on modern German history (E. P. Dutton & Co., 1964), This Germany (New York Graphic Society Publishers, 1954), Heidenheimer's The Government..., and K. H. Lauder's A Brief Review of Science and Technology in Western Germany (London, 1955).]

  1. UI Review Results and NARAC Response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fisher, J.; Eme, B.; Kim, S.

    2017-03-08

    This report describes the results of an inter-program design review completed February 16th, 2017, during the second year of a FY16-FY18 NA-84 Technology Integration (TI) project to modernize the core software system used in DOE/NNSA's National Atmospheric Release Advisory Center (NARAC, narac.llnl.gov). This review focused on the graphical user interface (GUI) frameworks. Reviewers (described in Appendix 2) were selected from multiple areas of the LLNL Computation directorate, based on their expertise in GUI and Web technologies.

  2. DspaceOgre 3D Graphics Visualization Tool

    NASA Technical Reports Server (NTRS)

    Jain, Abhinandan; Myin, Steven; Pomerantz, Marc I.

    2011-01-01

    This general-purpose 3D graphics visualization C++ tool is designed for visualization of simulation and analysis data for articulated mechanisms. Examples of such systems are vehicles, robotic arms, biomechanics models, and biomolecular structures. DspaceOgre builds upon the open-source Ogre3D graphics visualization library. It provides additional classes to support the management of complex scenes involving multiple viewpoints and different scene groups, and can be used as a remote graphics server. This software provides improved support for adding programs at the graphics processing unit (GPU) level to boost performance. It also improves upon the messaging interface it exposes for use as a visualization server.

  3. ESO Reflex: a graphical workflow engine for data reduction

    NASA Astrophysics Data System (ADS)

    Hook, Richard; Ullgrén, Marko; Romaniello, Martino; Maisala, Sami; Oittinen, Tero; Solin, Otto; Savolainen, Ville; Järveläinen, Pekka; Tyynelä, Jani; Péron, Michèle; Ballester, Pascal; Gabasch, Armin; Izzo, Carlo

    ESO Reflex is a prototype software tool that provides a novel approach to astronomical data reduction by integrating a modern graphical workflow system (Taverna) with existing legacy data reduction algorithms. Most of the raw data produced by instruments at the ESO Very Large Telescope (VLT) in Chile are reduced using recipes. These are compiled C applications following an ESO standard and utilising routines provided by the Common Pipeline Library (CPL). Currently these are run in batch mode as part of the data flow system to generate the input to the ESO/VLT quality control process and are also exported for use offline. ESO Reflex can invoke CPL-based recipes in a flexible way through a general purpose graphical interface. ESO Reflex is based on the Taverna system that was originally developed within the UK life-sciences community. Workflows have been created so far for three VLT/VLTI instruments, and the GUI allows the user to make changes to these or create workflows of their own. Python scripts or IDL procedures can be easily brought into workflows and a variety of visualisation and display options, including custom product inspection and validation steps, are available. Taverna is intended for use with web services and experiments using ESO Reflex to access Virtual Observatory web services have been successfully performed. ESO Reflex is the main product developed by Sampo, a project led by ESO and conducted by a software development team from Finland as an in-kind contribution to joining ESO. The goal was to look into the needs of the ESO community in the area of data reduction environments and to create pilot software products that illustrate critical steps along the road to a new system. Sampo concluded early in 2008. This contribution will describe ESO Reflex and show several examples of its use both locally and using Virtual Observatory remote web services. ESO Reflex is expected to be released to the community in early 2009.

  4. Design Application Translates 2-D Graphics to 3-D Surfaces

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Fabric Images Inc., specializing in the printing and manufacturing of fabric tension architecture for the retail, museum, and exhibit/tradeshow communities, designed software to translate 2-D graphics for 3-D surfaces prior to print production. Fabric Images' fabric-flattening design process models a 3-D surface based on computer-aided design (CAD) specifications. The surface geometry of the model is used to form a 2-D template, similar to a flattening process developed by NASA's Glenn Research Center. This template or pattern is then applied in the development of a 2-D graphic layout. Benefits of this process include 11.5 percent time savings per project, less material wasted, and the ability to improve upon graphic techniques and offer new design services. Partners include Exhibitgroup/Giltspur (end-user client: TAC Air, a division of Truman Arnold Companies Inc.), Jack Morton Worldwide (end-user client: Nickelodeon), as well as 3D Exhibits Inc., and MG Design Associates Corp.

  5. cudaMap: a GPU accelerated program for gene expression connectivity mapping

    PubMed Central

    2013-01-01

    Background Modern cancer research often involves large datasets and the use of sophisticated statistical techniques. Together these add a heavy computational load to the analysis, which is often coupled with issues surrounding data accessibility. Connectivity mapping is an advanced bioinformatic and computational technique dedicated to therapeutics discovery and drug re-purposing around differential gene expression analysis. On a normal desktop PC, it is common for the connectivity mapping task with a single gene signature to take > 2h to complete using sscMap, a popular Java application that runs on standard CPUs (Central Processing Units). Here, we describe new software, cudaMap, which has been implemented using CUDA C/C++ to harness the computational power of NVIDIA GPUs (Graphics Processing Units) to greatly reduce processing times for connectivity mapping. Results cudaMap can identify candidate therapeutics from the same signature in just over thirty seconds when using an NVIDIA Tesla C2050 GPU. Results from the analysis of multiple gene signatures, which would previously have taken several days, can now be obtained in as little as 10 minutes, greatly facilitating candidate therapeutics discovery with high throughput. We are able to demonstrate dramatic speed differentials between GPU assisted performance and CPU executions as the computational load increases for high accuracy evaluation of statistical significance. Conclusion Emerging ‘omics’ technologies are constantly increasing the volume of data and information to be processed in all areas of biomedical research. Embracing the multicore functionality of GPUs represents a major avenue of local accelerated computing. cudaMap will make a strong contribution in the discovery of candidate therapeutics by enabling speedy execution of heavy duty connectivity mapping tasks, which are increasingly required in modern cancer research. cudaMap is open source and can be freely downloaded from http://purl.oclc.org/NET/cudaMap. PMID:24112435

  6. Curriculum Design of Computer Graphics Programs: A Survey of Art/Design Programs at the University Level.

    ERIC Educational Resources Information Center

    McKee, Richard Lee

    This master's thesis reports the results of a survey submitted to over 30 colleges and universities that currently offer computer graphics courses or are in the planning stage of curriculum design. Intended to provide a profile of the computer graphics programs and insight into the process of curriculum design, the survey gathered data on program…

  7. Graphic Arts Technology: Industrial Arts Curriculum Guide. Grades 9-12. Bulletin No. 1334 (Tentative).

    ERIC Educational Resources Information Center

    Louisiana State Dept. of Education, Baton Rouge.

    The tentative guide in graphic arts technology for senior high schools is part of a series of industrial arts curriculum materials developed by the State of Louisiana. The course is designed to provide "hands-on" experience with tools and materials along with a study of the industrial processes in graphic arts technology. In addition,…

  8. Computer graphics application in the engineering design integration system

    NASA Technical Reports Server (NTRS)

    Glatt, C. R.; Abel, R. W.; Hirsch, G. N.; Alford, G. E.; Colquitt, W. N.; Stewart, W. A.

    1975-01-01

    The computer graphics aspect of the Engineering Design Integration (EDIN) system and its application to design problems were discussed. Three basic types of computer graphics may be used with the EDIN system for the evaluation of aerospace vehicle preliminary designs: offline graphics systems using vellum-inking or photographic processes, online graphics systems characterized by direct coupled low cost storage tube terminals with limited interactive capabilities, and a minicomputer based refresh terminal offering highly interactive capabilities. The offline systems are characterized by high quality (resolution better than 0.254 mm) and slow turnaround (one to four days). The online systems are characterized by low cost, instant visualization of the computer results, slow line speed (300 BAUD), poor hard copy, and the early limitations on vector graphic input capabilities. The recent acquisition of the Adage 330 Graphic Display system has greatly enhanced the potential for interactive computer aided design.

  9. Heterogeneous real-time computing in radio astronomy

    NASA Astrophysics Data System (ADS)

    Ford, John M.; Demorest, Paul; Ransom, Scott

    2010-07-01

    Modern computer architectures suited for general purpose computing are often not the best choice for either I/O-bound or compute-bound problems. Sometimes the best choice is not to choose a single architecture, but to take advantage of the best characteristics of different computer architectures to solve the problem at hand. This paper examines the tradeoffs between using computer systems based on the ubiquitous X86 Central Processing Units (CPUs), Field Programmable Gate Array (FPGA) based signal processors, and Graphical Processing Units (GPUs). We will show how a heterogeneous system can be produced that blends the best of each of these technologies into a real-time signal processing system. FPGAs tightly coupled to analog-to-digital converters connect the instrument to the telescope and supply the first level of computing to the system. These FPGAs are coupled to other FPGAs to continue to provide highly efficient processing power. Data is then packaged up and shipped over fast networks to a cluster of general purpose computers equipped with GPUs, which are used for floating-point intensive computation. Finally, the data is handled by the CPU and written to disk, or further processed. Each of the elements in the system has been chosen for its specific characteristics and the role it can play in creating a system that does the most for the least, in terms of power, space, and money.
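
    The staged division of labour described above (FPGA front end, GPU number-crunching, CPU final handling) can be pictured as a producer-consumer pipeline. The toy Python sketch below uses threads and bounded queues only to illustrate the hand-offs; the stage contents and block counts are placeholders, not the observatory's actual processing.

        # Toy staged pipeline mirroring an FPGA -> GPU -> CPU division of labour.
        import queue
        import threading

        raw = queue.Queue(maxsize=8)       # FPGA -> GPU hand-off
        reduced = queue.Queue(maxsize=8)   # GPU -> CPU hand-off
        results = []

        def digitizer_stage():             # stands in for the FPGA front end
            for block in range(20):
                raw.put(block)
            raw.put(None)                  # end-of-stream marker

        def gpu_stage():                   # floating-point intensive reduction
            while (block := raw.get()) is not None:
                reduced.put(block * 2)     # placeholder for real number-crunching
            reduced.put(None)

        def cpu_stage():                   # final handling / write to disk
            while (block := reduced.get()) is not None:
                results.append(block)

        threads = [threading.Thread(target=f)
                   for f in (digitizer_stage, gpu_stage, cpu_stage)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        print("processed", len(results), "blocks")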

  10. Interactive Computing and Graphics in Undergraduate Digital Signal Processing. Microcomputing Working Paper Series F 84-9.

    ERIC Educational Resources Information Center

    Onaral, Banu; And Others

    This report describes the development of a Drexel University electrical and computer engineering course on digital filter design that used interactive computing and graphics, and was one of three courses in a senior-level sequence on digital signal processing (DSP). Interactive and digital analysis/design routines and the interconnection of these…

  11. Malleable Thought: The Role of Craft Thinking in Practice-Led Graphic Design

    ERIC Educational Resources Information Center

    Ings, Welby

    2015-01-01

    This article considers the potential of craft processes as creative engagements in graphic design research. It initially discusses the uneasy history of craft within the discipline, then draws upon case studies undertaken by three established designers who, in their postgraduate theses, engaged with craft as a process of thinking. In doing so, the…

  12. Graphic model of the processes involved in the production of casegood furniture

    Treesearch

    Kristen G. Hoff; Subhash C. Sarin; R. Bruce Anderson; R. Bruce Anderson

    1992-01-01

    Imports from foreign furniture manufacturers are on the rise, and American manufacturers must take advantage of recent technological advances to regain their lost market share. To facilitate the implementation of these technologies for improving productivity and quality, a graphic model of the wood furniture production process is presented using the IDEF modeling...

  13. iMOSFLM: a new graphical interface for diffraction-image processing with MOSFLM

    PubMed Central

    Battye, T. Geoff G.; Kontogiannis, Luke; Johnson, Owen; Powell, Harold R.; Leslie, Andrew G. W.

    2011-01-01

    iMOSFLM is a graphical user interface to the diffraction data-integration program MOSFLM. It is designed to simplify data processing by dividing the process into a series of steps, which are normally carried out sequentially. Each step has its own display pane, allowing control over parameters that influence that step and providing graphical feedback to the user. Suitable values for integration parameters are set automatically, but additional menus provide a detailed level of control for experienced users. The image display and the interfaces to the different tasks (indexing, strategy calculation, cell refinement, integration and history) are described. The most important parameters for each step and the best way of assessing success or failure are discussed. PMID:21460445

  14. Redefining the Data Pipeline Using GPUs

    NASA Astrophysics Data System (ADS)

    Warner, C.; Eikenberry, S. S.; Gonzalez, A. H.; Packham, C.

    2013-10-01

    There are two major challenges facing the next generation of data processing pipelines: 1) handling an ever increasing volume of data as array sizes continue to increase and 2) the desire to process data in near real-time to maximize observing efficiency by providing rapid feedback on data quality. Combining the power of modern graphics processing units (GPUs), relational database management systems (RDBMSs), and extensible markup language (XML) to re-imagine traditional data pipelines will allow us to meet these challenges. Modern GPUs contain hundreds of processing cores, each of which can process hundreds of threads concurrently. Technologies such as Nvidia's Compute Unified Device Architecture (CUDA) platform and the PyCUDA (http://mathema.tician.de/software/pycuda) module for Python allow us to write parallel algorithms and easily link GPU-optimized code into existing data pipeline frameworks. This approach has produced speed gains of over a factor of 100 compared to CPU implementations for individual algorithms and overall pipeline speed gains of a factor of 10-25 compared to traditionally built data pipelines for both imaging and spectroscopy (Warner et al., 2011). However, there are still many bottlenecks inherent in the design of traditional data pipelines. For instance, file input/output of intermediate steps is now a significant portion of the overall processing time. In addition, most traditional pipelines are not designed to be able to process data on-the-fly in real time. We present a model for a next-generation data pipeline that has the flexibility to process data in near real-time at the observatory as well as to automatically process huge archives of past data by using a simple XML configuration file. XML is ideal for describing both the dataset and the processes that will be applied to the data. Meta-data for the datasets would be stored using an RDBMS (such as mysql or PostgreSQL) which could be easily and rapidly queried and file I/O would be kept at a minimum. We believe this redefined data pipeline will be able to process data at the telescope, concurrent with continuing observations, thus maximizing precious observing time and optimizing the observational process in general. We also believe that using this design, it is possible to obtain a speed gain of a factor of 30-40 over traditional data pipelines when processing large archives of data.
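
    A minimal sketch of the XML-driven pipeline idea is shown below: an XML document names the dataset and the ordered processing steps, and a dispatcher applies each step in turn. The element names, attributes, and step functions are invented for illustration and are not the authors' schema; in a real pipeline each step would call GPU-optimised code and query an RDBMS for metadata.

        # Sketch of an XML-configured pipeline (schema and step names are invented).
        import xml.etree.ElementTree as ET

        CONFIG = """
        <pipeline dataset="night_2013_10_01">
          <step name="bias_subtract"/>
          <step name="flat_field" flat="masterflat.fits"/>
          <step name="extract_spectra" aperture="7"/>
        </pipeline>
        """

        def bias_subtract(frame, **kw):    # placeholder processing stages; a real
            return frame                   # pipeline would dispatch GPU kernels here
        def flat_field(frame, **kw):
            return frame
        def extract_spectra(frame, **kw):
            return frame

        STEPS = {"bias_subtract": bias_subtract,
                 "flat_field": flat_field,
                 "extract_spectra": extract_spectra}

        def run(xml_text, frame):
            root = ET.fromstring(xml_text)
            print("processing dataset", root.get("dataset"))
            for step in root.findall("step"):
                func = STEPS[step.get("name")]
                args = {k: v for k, v in step.attrib.items() if k != "name"}
                frame = func(frame, **args)
                print("  ran", step.get("name"))
            return frame

        if __name__ == "__main__":
            run(CONFIG, frame=None)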

  15. Geometric Processing and Its Relational Graphics

    DTIC Science & Technology

    1976-10-01

    [Fragmented record: the captured text mixes report-form fields with the abstract. Key words: Graphics, GIFT. The recoverable abstract text states that such representations are typified by defining an object as a series of adjacent triangular or rectangular patches or surfaces (ruled surfaces may also be used); the GIFT code embodies the Patch code concept in one of its solids, the ARS, but a many-faceted GIFT solid takes longer to process than its...]

  16. Status of parallel Python-based implementation of UEDGE

    NASA Astrophysics Data System (ADS)

    Umansky, M. V.; Pankin, A. Y.; Rognlien, T. D.; Dimits, A. M.; Friedman, A.; Joseph, I.

    2017-10-01

    The tokamak edge transport code UEDGE has long used the code-development and run-time framework Basis. However, with the support for Basis expected to terminate in the coming years, and with the advent of the modern numerical language Python, it has become desirable to move UEDGE to Python, to ensure its long-term viability. Our new Python-based UEDGE implementation takes advantage of the portable build system developed for FACETS. The new implementation gives access to Python's graphical libraries and numerical packages for pre- and post-processing, and support of HDF5 simplifies exchanging data. The older serial version of UEDGE has used for time-stepping the Newton-Krylov solver NKSOL. The renovated implementation uses backward Euler discretization with nonlinear solvers from PETSc, which has the promise to significantly improve the UEDGE parallel performance. We will report on assessment of some of the extended UEDGE capabilities emerging in the new implementation, and will discuss the future directions. Work performed for U.S. DOE by LLNL under contract DE-AC52-07NA27344.

  17. On-line surface inspection using cylindrical lens-based spectral domain low-coherence interferometry.

    PubMed

    Tang, Dawei; Gao, Feng; Jiang, X

    2014-08-20

    We present a spectral domain low-coherence interferometry (SD-LCI) method that is effective for applications in on-line surface inspection because it can obtain a surface profile in a single shot. It has an advantage over existing spectral interferometry techniques by using cylindrical lenses as the objective lenses in a Michelson interferometric configuration to enable the measurement of long profiles. Combined with a modern high-speed CCD camera, a general-purpose graphics processing unit, and multi-core processor computing technology, fast measurement can be achieved. By translating the tested sample during the measurement procedure, real-time surface inspection was implemented, as demonstrated by the large-scale 3D surface measurement in this paper. ZEMAX software is used to simulate the SD-LCI system and analyze the alignment errors. Two step-height surfaces were measured, and the captured interferograms were analyzed using a fast Fourier transform algorithm. Both 2D profile results and 3D surface maps closely align with the calibrated specifications given by the manufacturer.
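
    The core of the fringe analysis is the Fourier transform of the spectral interferogram: the modulation frequency along the wavenumber axis encodes the optical path difference. The toy Python sketch below generates an idealised single-point interferogram and recovers the height from the FFT peak; the wavenumber range and the test height are invented values, and a real system would repeat this per pixel of the line camera.

        # Toy single-point illustration of FFT-based spectral-interferogram analysis.
        import numpy as np

        n = 2048
        k = np.linspace(8.0e6, 9.0e6, n)            # sampled wavenumbers, rad/m (assumed)
        dk = k[1] - k[0]
        z_true = 42e-6                               # optical path difference, m (assumed)
        signal = 1.0 + np.cos(2.0 * k * z_true)      # idealised spectral interferogram

        npad = 8 * n                                 # zero-padding refines the peak location
        spectrum = np.abs(np.fft.rfft(signal - signal.mean(), n=npad))
        depth_axis = np.fft.rfftfreq(npad, d=dk) * np.pi   # cycles per (rad/m) -> metres
        z_est = depth_axis[np.argmax(spectrum)]
        print(f"true z = {z_true * 1e6:.1f} um, estimated z = {z_est * 1e6:.1f} um")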

  18. Visualization for Hyper-Heuristics: Back-End Processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simon, Luke

    Modern society is faced with increasingly complex problems, many of which can be formulated as generate-and-test optimization problems. Yet, general-purpose optimization algorithms may sometimes require too much computational time. In these instances, hyper-heuristics may be used. Hyper-heuristics automate the design of algorithms to create a custom algorithm for a particular scenario, finding the solution significantly faster than its predecessor. However, it may be difficult to understand exactly how a design was derived and why it should be trusted. This project aims to address these issues by creating an easy-to-use graphical user interface (GUI) for hyper-heuristics and an easy-to-understand scientific visualization for the produced solutions. To support the development of this GUI, my portion of the research involved developing algorithms that would allow for parsing of the data produced by the hyper-heuristics. This data would then be sent to the front-end, where it would be displayed to the end user.

  19. AUSPEX: a graphical tool for X-ray diffraction data analysis.

    PubMed

    Thorn, Andrea; Parkhurst, James; Emsley, Paul; Nicholls, Robert A; Vollmar, Melanie; Evans, Gwyndaf; Murshudov, Garib N

    2017-09-01

    In this paper, AUSPEX, a new software tool for experimental X-ray data analysis, is presented. Exploring the behaviour of diffraction intensities and the associated estimated uncertainties facilitates the discovery of underlying problems and can help users to improve their data acquisition and processing in order to obtain better structural models. The program enables users to inspect the distribution of observed intensities (or amplitudes) against resolution as well as the associated estimated uncertainties (sigmas). It is demonstrated how AUSPEX can be used to visually and automatically detect ice-ring artefacts in integrated X-ray diffraction data. Such artefacts can hamper structure determination, but may be difficult to identify from the raw diffraction images produced by modern pixel detectors. The analysis suggests that a significant portion of the data sets deposited in the PDB contain ice-ring artefacts. Furthermore, it is demonstrated how other problems in experimental X-ray data caused, for example, by scaling and data-conversion procedures can be detected by AUSPEX.

  20. Fast live-cell conventional fluorophore nanoscopy with ImageJ through super-resolution radial fluctuations

    PubMed Central

    Gustafsson, Nils; Culley, Siân; Ashdown, George; Owen, Dylan M.; Pereira, Pedro Matos; Henriques, Ricardo

    2016-01-01

    Despite significant progress, high-speed live-cell super-resolution studies remain limited to specialized optical setups, generally requiring intense phototoxic illumination. Here, we describe a new analytical approach, super-resolution radial fluctuations (SRRF), provided as a fast graphics processing unit-enabled ImageJ plugin. In the most challenging data sets for super-resolution, such as those obtained in low-illumination live-cell imaging with GFP, we show that SRRF is generally capable of achieving resolutions better than 150 nm. Meanwhile, for data sets similar to those obtained in PALM or STORM imaging, SRRF achieves resolutions approaching those of standard single-molecule localization analysis. The broad applicability of SRRF and its performance at low signal-to-noise ratios allows super-resolution using modern widefield, confocal or TIRF microscopes with illumination orders of magnitude lower than methods such as PALM, STORM or STED. We demonstrate this by super-resolution live-cell imaging over timescales ranging from minutes to hours. PMID:27514992

  1. ANNarchy: a code generation approach to neural simulations on parallel hardware

    PubMed Central

    Vitay, Julien; Dinkelbach, Helge Ü.; Hamker, Fred H.

    2015-01-01

    Many modern neural simulators focus on the simulation of networks of spiking neurons on parallel hardware. Another important framework in computational neuroscience, rate-coded neural networks, is mostly difficult or impossible to implement using these simulators. We present here the ANNarchy (Artificial Neural Networks architect) neural simulator, which makes it easy to define and simulate rate-coded and spiking networks, as well as combinations of both. The interface in Python has been designed to be close to the PyNN interface, while the definition of neuron and synapse models can be specified using an equation-oriented mathematical description similar to the Brian neural simulator. This information is used to generate C++ code that will efficiently perform the simulation on the chosen parallel hardware (multi-core system or graphical processing unit). Several numerical methods are available to transform ordinary differential equations into efficient C++ code. We compare the parallel performance of the simulator to existing solutions. PMID:26283957

  2. Toward Microscopic Equations of State for Core-Collapse Supernovae from Chiral Effective Field Theory

    NASA Astrophysics Data System (ADS)

    Aboona, Bassam; Holt, Jeremy

    2017-09-01

    Chiral effective field theory provides a modern framework for understanding the structure and dynamics of nuclear many-body systems. Recent works have had much success in applying the theory to describe the ground- and excited-state properties of light and medium-mass atomic nuclei when combined with ab initio numerical techniques. Our aim is to extend the application of chiral effective field theory to describe the nuclear equation of state required for supercomputer simulations of core-collapse supernovae. Given the large range of densities, temperatures, and proton fractions probed during stellar core collapse, microscopic calculations of the equation of state require large computational resources on the order of one million CPU hours. We investigate the use of graphics processing units (GPUs) to significantly reduce the computational cost of these calculations, which will enable a more accurate and precise description of this important input to numerical astrophysical simulations. Cyclotron Institute at Texas A&M, NSF Grant: PHY 1659847, DOE Grant: DE-FG02-93ER40773.

  3. Delivery of laboratory data with World Wide Web technology.

    PubMed

    Hahn, A W; Leon, M A; Klein-Leon, S; Allen, G K; Boon, G D; Patrick, T B; Klimczak, J C

    1997-01-01

    We have developed an experimental World Wide Web (WWW) based system to deliver laboratory results to clinicians in our Veterinary Medical Teaching Hospital. Laboratory results are generated by the clinical pathology section of our Veterinary Medical Diagnostic Laboratory and stored in a legacy information system. This system does not interface directly to the hospital information system, and it cannot be accessed directly by clinicians. Our "meta" system first parses routine print reports and then instantiates the data into a modern, open-architecture relational database using a data model constructed with currently accepted international standards for data representation and communication. The system does not affect either of the existing legacy systems. Location-independent delivery of patient data is via a secure WWW based system which maximizes usability and allows "value-added" graphic representations. The data can be viewed with any web browser. Future extensibility and intra- and inter-institutional compatibility served as key design criteria. The system is in the process of being evaluated using accepted methods of assessment of information technologies.
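
    The parse-then-load step described above can be sketched with the Python standard library alone: fixed-format report lines are split into fields and inserted into a relational table that a web layer can then query per patient. The report layout, field names, and table schema below are invented for illustration, not the hospital's actual formats.

        # Sketch: parse a legacy print report and load it into a relational database.
        import sqlite3

        REPORT = """\
        PATIENT 00123 GLUCOSE 98 mg/dL
        PATIENT 00123 BUN 14 mg/dL
        PATIENT 00456 GLUCOSE 112 mg/dL
        """

        def parse(text):
            for line in text.splitlines():
                line = line.strip()
                if not line:
                    continue
                _, patient_id, analyte, value, units = line.split()
                yield patient_id, analyte, float(value), units

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE result (patient_id TEXT, analyte TEXT, value REAL, units TEXT)")
        conn.executemany("INSERT INTO result VALUES (?,?,?,?)", parse(REPORT))

        # A web layer would then query per patient and render the rows (or a graph):
        for row in conn.execute("SELECT analyte, value, units FROM result WHERE patient_id=?", ("00123",)):
            print(row)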

  4. Common Graphics Library (CGL). Volume 1: LEZ user's guide

    NASA Technical Reports Server (NTRS)

    Taylor, Nancy L.; Hammond, Dana P.; Hofler, Alicia S.; Miner, David L.

    1988-01-01

    Users are introduced to and instructed in the use of the Langley Easy (LEZ) routines of the Common Graphics Library (CGL). The LEZ routines form an application independent graphics package which enables the user community to view data quickly and easily, while providing a means of generating scientific charts conforming to the publication and/or viewgraph process. A distinct advantage for using the LEZ routines is that the underlying graphics package may be replaced or modified without requiring the users to change their application programs. The library is written in ANSI FORTRAN 77, and currently uses a CORE-based underlying graphics package, and is therefore machine independent, providing support for centralized and/or distributed computer systems.

  5. Network, system, and status software enhancements for the autonomously managed electrical power system breadboard. Volume 4: Graphical status display

    NASA Technical Reports Server (NTRS)

    Mckee, James W.

    1990-01-01

    This volume (4 of 4) contains the description, structured flow charts, prints of the graphical displays, and source code to generate the displays for the AMPS graphical status system. The function of these displays is to present to the manager of the AMPS system a graphical status display with the hot boxes that allow the manager to get more detailed status on selected portions of the AMPS system. The development of the graphical displays is divided into two processes; the creation of the screen images and storage of them in files on the computer, and the running of the status program which uses the screen images.

  6. The role of word order in the interpretation of canonical and non-canonical graphic symbol utterances: A developmental study.

    PubMed

    Trudeau, Natacha; Morford, Jill P; Sutton, Ann

    2010-06-01

    Graphic symbols are often used to represent words in Augmentative and Alternative Communication systems. Previous findings suggest that different processes operate when using graphic symbols and when using speech. This study assessed the ability of native speakers of French with no communication disorders from four age groups to interpret graphic-symbol sequences of varying length and canonicity. Results reveal that, as they get older, participants show an increase in their capacity to interpret graphic-symbol sequences. Constituent order played an important role in the interpretation of the sequences. However, the specific word-order strategies used varied depending on the age group and the type of sequence presented.

  7. Using pedagogical discipline representations (PDRs) to enable Astro 101 students to reason about modern astrophysics

    NASA Astrophysics Data System (ADS)

    Wallace, Colin Scott; Prather, Edward E.; Chambers, Timothy G.; Kamenetzky, Julia R.; Hornstein, Seth D.

    2017-01-01

    Instructors of introductory, college-level, general education astronomy (Astro 101) often want to include topics from the cutting-edge of modern astrophysics in their course. Unfortunately, the teaching of these cutting-edge topics is typically confined to advanced undergraduate or graduate classes, using representations (graphical, mathematical, etc.) that are inaccessible to the vast majority of Astro 101 students. Consequently, many Astro 101 instructors feel that they have no choice but to cover these modern topics at a superficial level. Pedagogical discipline representations (PDRs) are one solution to this problem. Pedagogical discipline representations are representations that are explicitly designed to enhance the teaching and learning of a topic, even though these representations may not typically be found in traditional textbooks or used by experts in the discipline who are engaged in topic-specific discourse. In some cases, PDRs are significantly simplified or altered versions of typical discipline representations (graphs, data tables, etc.); in others they may be novel and highly contextualized representations with unique features that purposefully engage novice learners’ pre-existing mental models and reasoning difficulties, facilitating critical discourse. In this talk, I will discuss important lessons that my colleagues and I have learned while developing PDRs and describe how PDRs can enable students to reason about complex modern astrophysical topics.

  8. Graphics processing unit-assisted lossless decompression

    DOEpatents

    Loughry, Thomas A.

    2016-04-12

    Systems and methods for decompressing compressed data that has been compressed by way of a lossless compression algorithm are described herein. In a general embodiment, a graphics processing unit (GPU) is programmed to receive compressed data packets and decompress such packets in parallel. The compressed data packets are compressed representations of an image, and the lossless compression algorithm is a Rice compression algorithm.
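
    For orientation, Rice (Golomb power-of-two) coding encodes each value as a unary quotient plus a k-bit remainder. The Python sketch below is a simple serial encode/decode pair under an assumed bit convention (quotient as 1-bits terminated by a 0); the patent's contribution is decoding such packets in parallel on a GPU, which this CPU reference does not attempt.

        # Serial reference sketch of Rice coding (bit convention is an assumption).
        def rice_encode(values, k):
            bits = []
            for v in values:
                q, r = v >> k, v & ((1 << k) - 1)
                bits.extend([1] * q + [0])                               # unary quotient
                bits.extend((r >> i) & 1 for i in reversed(range(k)))    # k-bit remainder
            return bits

        def rice_decode(bits, k, count):
            out, i = [], 0
            for _ in range(count):
                q = 0
                while bits[i] == 1:          # read unary quotient
                    q += 1
                    i += 1
                i += 1                        # skip the terminating 0
                r = 0
                for _ in range(k):            # read the k-bit remainder
                    r = (r << 1) | bits[i]
                    i += 1
                out.append((q << k) | r)
            return out

        data = [3, 17, 0, 42, 8]
        encoded = rice_encode(data, k=3)
        assert rice_decode(encoded, k=3, count=len(data)) == data
        print(len(encoded), "bits for", data)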

  9. 76 FR 70490 - Certain Electronic Devices With Graphics Data Processing Systems, Components Thereof, and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-14

    ... Graphics Data Processing Systems, Components Thereof, and Associated Software; Institution of Investigation... associated software by reason of infringement of certain claims of U.S. Patent No. 5,945,997 ("the '997... software that infringe one or more of claims 1, 3-5, 9, and 16 of the '997 patent; claims 1, 5, and 9 of...

  10. Graphic facilitation as a novel approach to practice development.

    PubMed

    Leonard, Angela; Bonaconsa, Candice; Ssenyonga, Lydia; Coetzee, Minette

    2017-10-10

    The active participation of staff from the outset of any health service or practice improvement process ensures they are more likely to become engaged in the implementation phases that follow initial service analyses. Graphic facilitation is a way of getting participants to develop an understanding of complex systems and articulate solutions from within them. This article describes how a graphic facilitation process enabled the members of a multidisciplinary team at a specialist paediatric neurosurgery hospital in Uganda to understand how their system worked. The large graphic representation the team created helped each team member to visualise their day-to-day practice, understand each person's contribution, celebrate their triumphs and highlight opportunities for service improvement. The process highlighted three features of their practice: an authentic interdisciplinary team approach to care, admission of the primary carer with the child, and previously unrecognised delays in patient flow through the outpatients department. The team's active participation and ownership of the process resulted in sustainable improvements to clinical practice. ©2012 RCN Publishing Company Ltd. All rights reserved. Not to be copied, transmitted or recorded in any way, in whole or part, without prior permission of the publishers.

  11. Efficient particle-in-cell simulation of auroral plasma phenomena using a CUDA enabled graphics processing unit

    NASA Astrophysics Data System (ADS)

    Sewell, Stephen

    This thesis introduces a software framework that effectively utilizes low-cost commercially available Graphic Processing Units (GPUs) to simulate complex scientific plasma phenomena that are modeled using the Particle-In-Cell (PIC) paradigm. The software framework that was developed conforms to the Compute Unified Device Architecture (CUDA), a standard for general purpose graphic processing that was introduced by NVIDIA Corporation. This framework has been verified for correctness and applied to advance the state of understanding of the electromagnetic aspects of the development of the Aurora Borealis and Aurora Australis. For each phase of the PIC methodology, this research has identified one or more methods to exploit the problem's natural parallelism and effectively map it for execution on the graphic processing unit and its host processor. The sources of overhead that can reduce the effectiveness of parallelization for each of these methods have also been identified. One of the novel aspects of this research was the utilization of particle sorting during the grid interpolation phase. The final representation resulted in simulations that executed about 38 times faster than simulations that were run on a single-core general-purpose processing system. The scalability of this framework to larger problem sizes and future generation systems has also been investigated.
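
    The grid-interpolation (charge deposition) phase mentioned above can be sketched in vectorised NumPy: particles are sorted so that those in the same cell are adjacent, then their charge is shared between neighbouring grid points with cloud-in-cell weights. The 1-D geometry, grid size, and units are illustrative assumptions and not the thesis implementation.

        # NumPy sketch of 1-D particle-in-cell charge deposition with a particle sort.
        import numpy as np

        def deposit_charge(x, q, n_cells, length):
            order = np.argsort(x)                 # sorting groups particles by cell,
            x, q = x[order], q[order]             # improving memory locality on a GPU
            dx = length / n_cells
            cell = np.floor(x / dx).astype(int)
            frac = x / dx - cell                  # cloud-in-cell weights
            rho = np.zeros(n_cells)
            np.add.at(rho, cell % n_cells, q * (1.0 - frac))
            np.add.at(rho, (cell + 1) % n_cells, q * frac)   # periodic wrap
            return rho / dx

        rng = np.random.default_rng(0)
        x = rng.uniform(0.0, 1.0, size=100_000)
        q = np.full(x.shape, 1.0 / x.size)
        rho = deposit_charge(x, q, n_cells=64, length=1.0)
        print("total charge:", rho.sum() * (1.0 / 64))       # should be ~1.0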

  12. UWGSP4: an imaging and graphics superworkstation and its medical applications

    NASA Astrophysics Data System (ADS)

    Jong, Jing-Ming; Park, Hyun Wook; Eo, Kilsu; Kim, Min-Hwan; Zhang, Peng; Kim, Yongmin

    1992-05-01

    UWGSP4 is configured with a parallel architecture for image processing and a pipelined architecture for computer graphics. The system's peak performance is 1,280 MFLOPS for image processing and over 200,000 Gouraud shaded 3-D polygons per second for graphics. The simulated sustained performance is about 50% of the peak performance in general image processing. Most of the 2-D image processing functions are efficiently vectorized and parallelized in UWGSP4. A performance of 770 MFLOPS in convolution and 440 MFLOPS in FFT is achieved. The real-time cine display, up to 32 frames of 1280 X 1024 pixels per second, is supported. In 3-D imaging, the update rate for the surface rendering is 10 frames of 20,000 polygons per second; the update rate for the volume rendering is 6 frames of 128 X 128 X 128 voxels per second. The system provides 1280 X 1024 X 32-bit double frame buffers and one 1280 X 1024 X 8-bit overlay buffer for supporting realistic animation, 24-bit true color, and text annotation. A 1280 X 1024- pixel, 66-Hz noninterlaced display screen with 1:1 aspect ratio can be windowed into the frame buffer for the display of any portion of the processed image or graphics.

  13. Apparatus and method for implementing power saving techniques when processing floating point values

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Young Moon; Park, Sang Phill

    An apparatus and method are described for reducing power when reading and writing graphics data. For example, one embodiment of an apparatus comprises: a graphics processor unit (GPU) to process graphics data including floating point data; a set of registers, at least one of the registers of the set partitioned to store the floating point data; and encode/decode logic to reduce a number of binary 1 values being read from the at least one register by causing a specified set of bit positions within the floating point data to be read out as 0s rather than 1s.

  14. Hyper-Spectral Synthesis of Active OB Stars Using GLaDoS

    NASA Astrophysics Data System (ADS)

    Hill, N. R.; Townsend, R. H. D.

    2016-11-01

    In recent years there has been considerable interest in using graphics processing units (GPUs) to perform scientific computations that have traditionally been handled by central processing units (CPUs). However, there is one area where the scientific potential of GPUs has been overlooked - computer graphics, the task they were originally designed for. Here we introduce GLaDoS, a hyper-spectral code which leverages the graphics capabilities of GPUs to synthesize spatially and spectrally resolved images of complex stellar systems. We demonstrate how GLaDoS can be applied to calculate observables for various classes of stars, including systems with inhomogeneous surface temperatures and contact binaries.

  15. The role of graphics super-workstations in a supercomputing environment

    NASA Technical Reports Server (NTRS)

    Levin, E.

    1989-01-01

    A new class of very powerful workstations has recently become available which integrate near supercomputer computational performance with very powerful and high quality graphics capability. These graphics super-workstations are expected to play an increasingly important role in providing an enhanced environment for supercomputer users. Their potential uses include: off-loading the supercomputer (by serving as stand-alone processors, by post-processing of the output of supercomputer calculations, and by distributed or shared processing), scientific visualization (understanding of results, communication of results), and by real time interaction with the supercomputer (to steer an iterative computation, to abort a bad run, or to explore and develop new algorithms).

  16. Creating Interactive Graphical Overlays in the Advanced Weather Interactive Processing System (AWIPS) Using Shapefiles and DGM Files

    NASA Technical Reports Server (NTRS)

    Barrett, Joe H., III; Lafosse, Richard; Hood, Doris; Hoeth, Brian

    2007-01-01

    Graphical overlays can be created in real-time in the Advanced Weather Interactive Processing System (AWIPS) using shapefiles or DARE Graphics Metafile (DGM) files. This presentation describes how to create graphical overlays on-the-fly for AWIPS, by using two examples of AWIPS applications that were created by the Applied Meteorology Unit (AMU). The first example is the Anvil Threat Corridor Forecast Tool, which produces a shapefile that depicts a graphical threat corridor of the forecast movement of thunderstorm anvil clouds, based on the observed or forecast upper-level winds. This tool is used by the Spaceflight Meteorology Group (SMG) and 45th Weather Squadron (45 WS) to analyze the threat of natural or space vehicle-triggered lightning over a location. The second example is a launch and landing trajectory tool that produces a DGM file that plots the ground track of space vehicles during launch or landing. The trajectory tool can be used by SMG and the 45 WS forecasters to analyze weather radar imagery along a launch or landing trajectory. Advantages of both file types will be listed.

  17. Efficient parallel implementation of active appearance model fitting algorithm on GPU.

    PubMed

    Wang, Jinwei; Ma, Xirong; Zhu, Yuanping; Sun, Jizhou

    2014-01-01

    The active appearance model (AAM) is one of the most powerful model-based object detecting and tracking methods which has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs) that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine-grained parallelism in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on Nvidia's GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different dimensional textures. The experiment results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures.

  18. Efficient Parallel Implementation of Active Appearance Model Fitting Algorithm on GPU

    PubMed Central

    Wang, Jinwei; Ma, Xirong; Zhu, Yuanping; Sun, Jizhou

    2014-01-01

    The active appearance model (AAM) is one of the most powerful model-based object detecting and tracking methods which has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs) that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine-grained parallelism in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on Nvidia's GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different dimensional textures. The experiment results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures. PMID:24723812
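
    The quantity being distributed across GPU threads is the per-pixel texture residual between the image sampled under the current warp and the texture synthesised from the model parameters. The NumPy sketch below evaluates that residual and a least-squares parameter update for all pixels at once; the model dimensions and random data are placeholders, not the paper's trained AAM.

        # Sketch of the per-pixel texture residual at the core of AAM fitting.
        import numpy as np

        rng = np.random.default_rng(0)
        n_pixels, n_modes = 50_000, 40
        mean_texture = rng.random(n_pixels)
        modes = rng.standard_normal((n_pixels, n_modes)) * 0.01    # texture eigenvectors
        params = rng.standard_normal(n_modes)

        # Stand-in for the image texture sampled under the current shape/warp:
        sampled = mean_texture + modes @ params + rng.normal(0, 0.01, n_pixels)

        model_texture = mean_texture + modes @ params    # synthesised appearance
        residual = sampled - model_texture               # one independent value per pixel
        delta = np.linalg.lstsq(modes, residual, rcond=None)[0]   # Gauss-Newton-like update
        print("residual norm:", np.linalg.norm(residual), "update norm:", np.linalg.norm(delta))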

  19. Peer-to-peer Monte Carlo simulation of photon migration in topical applications of biomedical optics

    NASA Astrophysics Data System (ADS)

    Doronin, Alexander; Meglinski, Igor

    2012-09-01

    In the framework of further development of the unified approach of photon migration in complex turbid media, such as biological tissues, we present a peer-to-peer (P2P) Monte Carlo (MC) code. Object-oriented programming is used for generalization of the MC model for multipurpose use in various applications of biomedical optics. The online user interface providing multiuser access is developed using modern web technologies, such as Microsoft Silverlight, ASP.NET. The emerging P2P network utilizing computers with different types of compute unified device architecture-capable graphics processing units (GPUs) is applied for acceleration and to overcome the limitations imposed by multiuser access in the online MC computational tool. The developed P2P MC was validated by comparing the results of simulation of diffuse reflectance and fluence rate distribution for a semi-infinite scattering medium with known analytical results, results of the adding-doubling method, and with other GPU-based MC techniques developed in the past. The best speedup of processing multiuser requests in a range of 4 to 35 s was achieved using single-precision computing, and the double-precision computing for floating-point arithmetic operations provides higher accuracy.

  20. Peer-to-peer Monte Carlo simulation of photon migration in topical applications of biomedical optics.

    PubMed

    Doronin, Alexander; Meglinski, Igor

    2012-09-01

    In the framework of further development of the unified approach of photon migration in complex turbid media, such as biological tissues, we present a peer-to-peer (P2P) Monte Carlo (MC) code. Object-oriented programming is used for generalization of the MC model for multipurpose use in various applications of biomedical optics. The online user interface providing multiuser access is developed using modern web technologies, such as Microsoft Silverlight, ASP.NET. The emerging P2P network utilizing computers with different types of compute unified device architecture-capable graphics processing units (GPUs) is applied for acceleration and to overcome the limitations imposed by multiuser access in the online MC computational tool. The developed P2P MC was validated by comparing the results of simulation of diffuse reflectance and fluence rate distribution for a semi-infinite scattering medium with known analytical results, results of the adding-doubling method, and with other GPU-based MC techniques developed in the past. The best speedup of processing multiuser requests in a range of 4 to 35 s was achieved using single-precision computing, and the double-precision computing for floating-point arithmetic operations provides higher accuracy.
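
    The underlying photon-migration Monte Carlo can be sketched as a weighted random walk: each photon takes exponentially distributed steps, loses weight to absorption, scatters into a new direction, and contributes to diffuse reflectance if it crosses the surface. The serial Python sketch below assumes isotropic scattering and illustrative optical properties, and omits Fresnel boundaries, anisotropy, roulette, and the GPU parallelism of the actual code.

        # Serial sketch of photon migration in a semi-infinite scattering medium.
        import math
        import random

        def run_photons(n=5_000, mu_a=0.5, mu_s=10.0, seed=2):   # coefficients in 1/mm (assumed)
            random.seed(seed)
            mu_t = mu_a + mu_s
            albedo = mu_s / mu_t
            reflected_weight = 0.0
            for _ in range(n):
                x = y = z = 0.0
                ux, uy, uz = 0.0, 0.0, 1.0           # launched straight into the medium
                w = 1.0
                while w > 1e-3:
                    step = -math.log(1.0 - random.random()) / mu_t
                    x, y, z = x + ux * step, y + uy * step, z + uz * step
                    if z < 0.0:                      # escaped through the surface
                        reflected_weight += w
                        break
                    w *= albedo                      # weight lost to absorption
                    ct = 2.0 * random.random() - 1.0 # isotropic scattering direction
                    st = math.sqrt(1.0 - ct * ct)
                    phi = 2.0 * math.pi * random.random()
                    ux, uy, uz = st * math.cos(phi), st * math.sin(phi), ct
            return reflected_weight / n

        print("diffuse reflectance ~", run_photons())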

  1. Computer-Aided Parallelizer and Optimizer

    NASA Technical Reports Server (NTRS)

    Jin, Haoqiang

    2011-01-01

    The Computer-Aided Parallelizer and Optimizer (CAPO) automates the insertion of compiler directives (see figure) to facilitate parallel processing on Shared Memory Parallel (SMP) machines. While CAPO currently is integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), CAPO was independently developed at Ames Research Center as one of the components for the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data dependence analysis, and generates OpenMP directives. Due to the widely supported OpenMP standard, the generated OpenMP codes have the potential to run on a wide range of SMP machines. CAPO relies on accurate interprocedural data dependence information currently provided by CAPTools. Compiler directives are generated through identification of parallel loops in the outermost level, construction of parallel regions around parallel loops and optimization of parallel regions, and insertion of directives with automatic identification of private, reduction, induction, and shared variables. Attempts also have been made to identify potential pipeline parallelism (implemented with point-to-point synchronization). Although directives are generated automatically, user interaction with the tool is still important for producing good parallel codes. A comprehensive graphical user interface is included for users to interact with the parallelization process.

  2. Graphics Processing Unit Assisted Thermographic Compositing

    NASA Technical Reports Server (NTRS)

    Ragasa, Scott; Russell, Samuel S.

    2012-01-01

    Objective: Develop a software application utilizing high performance computing techniques, including general purpose graphics processing units (GPGPUs), for the analysis and visualization of large thermographic data sets. Over the past several years, an increasing effort among scientists and engineers to utilize graphics processing units (GPUs) in a more general purpose fashion is allowing for previously unobtainable levels of computation by individual workstations. As data sets grow, the methods used to work with them grow at an equal, and often greater, pace. Certain common computations can take advantage of the massively parallel and optimized hardware constructs of the GPU, which yield significant increases in performance. These common computations have high degrees of data parallelism, that is, they are the same computation applied to a large set of data where the result does not depend on other data elements. Image processing is one area where GPUs are being used to greatly increase the performance of certain analysis and visualization techniques.
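
    The data-parallel pattern described above, the same per-pixel computation applied independently across a large stack of frames, is straightforward to express with vectorised NumPy and maps naturally to one pixel per thread on a GPU. The sketch below computes a peak-contrast image from a synthetic thermographic stack; the array sizes and the pre-flash baseline window are arbitrary illustrations, not the project's actual processing.

        # Data-parallel sketch: per-pixel peak contrast over a thermographic stack.
        import numpy as np

        rng = np.random.default_rng(1)
        stack = rng.random((200, 256, 256), dtype=np.float32)   # frames x rows x cols (synthetic)

        baseline = stack[:10].mean(axis=0)       # pre-flash reference, one value per pixel
        contrast = stack - baseline              # identical operation for every pixel
        peak = contrast.max(axis=0)              # peak-contrast image
        peak_time = contrast.argmax(axis=0)      # frame index of the peak, per pixel
        print(peak.shape, peak_time.dtype)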

  3. Graphical Environment Tools for Application to Gamma-Ray Energy Tracking Arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Todd, Richard A.; Radford, David C.

    2013-12-30

    Highly segmented, position-sensitive germanium detector systems are being developed for nuclear physics research where traditional electronic signal processing with mixed analog and digital function blocks would be enormously complex and costly. Future systems will be constructed using pipelined processing of high-speed digitized signals as is done in the telecommunications industry. Techniques which provide rapid algorithm and system development for future systems are desirable. This project has used digital signal processing concepts and existing graphical system design tools to develop a set of re-usable modular functions and libraries targeted for the nuclear physics community. Researchers working with complex nuclear detector arraysmore » such as the Gamma-Ray Energy Tracking Array (GRETA) have been able to construct advanced data processing algorithms for implementation in field programmable gate arrays (FPGAs) through application of these library functions using intuitive graphical interfaces.« less

  4. Flexible Environmental Modeling with Python and Open-GIS

    NASA Astrophysics Data System (ADS)

    Pryet, Alexandre; Atteia, Olivier; Delottier, Hugo; Cousquer, Yohann

    2015-04-01

    Numerical modeling now represents a prominent task of environmental studies. During the last decades, numerous commercial programs have been made available to environmental modelers. These software applications offer user-friendly graphical user interfaces that allow an efficient management of many case studies. However, they suffer from a lack of flexibility, and closed-source policies impede source code reviewing and enhancement for original studies. Advanced modeling studies require flexible tools capable of managing thousands of model runs for parameter optimization, uncertainty and sensitivity analysis. In addition, there is a growing need for the coupling of various numerical models associating, for instance, groundwater flow modeling to multi-species geochemical reactions. Researchers have produced hundreds of powerful open-source command line programs. However, there is a need for a flexible graphical user interface allowing an efficient processing of the geospatial data that accompanies any environmental study. Here, we present the advantages of using the free and open-source QGIS platform and the Python scripting language for conducting environmental modeling studies. The interactive graphical user interface is first used for the visualization and pre-processing of input geospatial datasets. The Python scripting language is then employed for further input data processing, calls to one or several models, and post-processing of model outputs. Model results are eventually sent back to the GIS program, processed and visualized. This approach combines the advantages of interactive graphical interfaces and the flexibility of the Python scripting language for data processing and model calls. The numerous Python modules available facilitate geospatial data processing and numerical analysis of model outputs. Once input data have been prepared with the graphical user interface, models may be run thousands of times from the command line with sequential or parallel calls. We illustrate this approach with several case studies in groundwater hydrology and geochemistry and provide links to several Python libraries that facilitate pre- and post-processing operations.
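
    The "prepare inputs in the GUI, then drive many model runs from scripts" workflow described above can be sketched as follows. This is a hedged, generic example: the model executable is stood in for by echo, and the parameter names are hypothetical, but the sequential-versus-parallel call pattern is the one the abstract refers to.

        # Scripted model runs, sequential or parallel; the model binary and its
        # arguments are placeholders for whatever code a study actually calls.
        import subprocess
        from concurrent.futures import ProcessPoolExecutor

        def run_model(params):
            k, recharge = params
            # A real study would write input files here and parse model outputs.
            result = subprocess.run(
                ["echo", f"model run: K={k} R={recharge}"],  # stand-in for the model
                capture_output=True, text=True, check=True)
            return result.stdout.strip()

        if __name__ == "__main__":
            param_sets = [(k, r) for k in (1e-5, 1e-4, 1e-3) for r in (100, 200)]
            # Sequential calls...
            outputs = [run_model(p) for p in param_sets]
            # ...or the same runs distributed across local cores.
            with ProcessPoolExecutor() as pool:
                outputs = list(pool.map(run_model, param_sets))
            print(len(outputs), "runs completed")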

  5. GeauxDock: Accelerating Structure-Based Virtual Screening with Heterogeneous Computing

    PubMed Central

    Fang, Ye; Ding, Yun; Feinstein, Wei P.; Koppelman, David M.; Moreno, Juana; Jarrell, Mark; Ramanujam, J.; Brylinski, Michal

    2016-01-01

    Computational modeling of drug binding to proteins is an integral component of direct drug design. Particularly, structure-based virtual screening is often used to perform large-scale modeling of putative associations between small organic molecules and their pharmacologically relevant protein targets. Because of a large number of drug candidates to be evaluated, an accurate and fast docking engine is a critical element of virtual screening. Consequently, highly optimized docking codes are of paramount importance for the effectiveness of virtual screening methods. In this communication, we describe the implementation, tuning and performance characteristics of GeauxDock, a recently developed molecular docking program. GeauxDock is built upon the Monte Carlo algorithm and features a novel scoring function combining physics-based energy terms with statistical and knowledge-based potentials. Developed specifically for heterogeneous computing platforms, the current version of GeauxDock can be deployed on modern, multi-core Central Processing Units (CPUs) as well as massively parallel accelerators, Intel Xeon Phi and NVIDIA Graphics Processing Unit (GPU). First, we carried out a thorough performance tuning of the high-level framework and the docking kernel to produce a fast serial code, which was then ported to shared-memory multi-core CPUs yielding a near-ideal scaling. Further, using Xeon Phi gives 1.9× performance improvement over a dual 10-core Xeon CPU, whereas the best GPU accelerator, GeForce GTX 980, achieves a speedup as high as 3.5×. On that account, GeauxDock can take advantage of modern heterogeneous architectures to considerably accelerate structure-based virtual screening applications. GeauxDock is open-sourced and publicly available at www.brylinski.org/geauxdock and https://figshare.com/articles/geauxdock_tar_gz/3205249. PMID:27420300

  6. GeauxDock: Accelerating Structure-Based Virtual Screening with Heterogeneous Computing.

    PubMed

    Fang, Ye; Ding, Yun; Feinstein, Wei P; Koppelman, David M; Moreno, Juana; Jarrell, Mark; Ramanujam, J; Brylinski, Michal

    2016-01-01

    Computational modeling of drug binding to proteins is an integral component of direct drug design. Particularly, structure-based virtual screening is often used to perform large-scale modeling of putative associations between small organic molecules and their pharmacologically relevant protein targets. Because of a large number of drug candidates to be evaluated, an accurate and fast docking engine is a critical element of virtual screening. Consequently, highly optimized docking codes are of paramount importance for the effectiveness of virtual screening methods. In this communication, we describe the implementation, tuning and performance characteristics of GeauxDock, a recently developed molecular docking program. GeauxDock is built upon the Monte Carlo algorithm and features a novel scoring function combining physics-based energy terms with statistical and knowledge-based potentials. Developed specifically for heterogeneous computing platforms, the current version of GeauxDock can be deployed on modern, multi-core Central Processing Units (CPUs) as well as massively parallel accelerators, Intel Xeon Phi and NVIDIA Graphics Processing Unit (GPU). First, we carried out a thorough performance tuning of the high-level framework and the docking kernel to produce a fast serial code, which was then ported to shared-memory multi-core CPUs yielding a near-ideal scaling. Further, using Xeon Phi gives 1.9× performance improvement over a dual 10-core Xeon CPU, whereas the best GPU accelerator, GeForce GTX 980, achieves a speedup as high as 3.5×. On that account, GeauxDock can take advantage of modern heterogeneous architectures to considerably accelerate structure-based virtual screening applications. GeauxDock is open-sourced and publicly available at www.brylinski.org/geauxdock and https://figshare.com/articles/geauxdock_tar_gz/3205249.
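
    The two records above describe a docking engine built on the Monte Carlo algorithm. The sketch below shows, in generic Python, the Metropolis accept/reject loop such engines are built around; the quadratic scoring function, move size, and temperature are placeholders, not GeauxDock's actual energy model.

        # Metropolis Monte Carlo over a 6-parameter pose; illustrative only.
        import numpy as np

        rng = np.random.default_rng(1)

        def score(pose):
            # Hypothetical smooth energy surface standing in for a docking score.
            return np.sum((pose - 1.0) ** 2)

        pose = np.zeros(6)                 # x, y, z translation + 3 rotation angles
        energy = score(pose)
        kT = 0.5
        for step in range(10_000):
            trial = pose + rng.normal(scale=0.05, size=pose.shape)
            e_trial = score(trial)
            # Accept downhill moves always, uphill moves with Boltzmann probability.
            if e_trial < energy or rng.random() < np.exp(-(e_trial - energy) / kT):
                pose, energy = trial, e_trial
        print("final score:", round(float(energy), 3))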

  7. The Value of Animations in Biology Teaching: A Study of Long-Term Memory Retention

    PubMed Central

    2007-01-01

    Previous work has established that a narrated animation is more effective at communicating a complex biological process (signal transduction) than the equivalent graphic with figure legend. To my knowledge, no study has been done in any subject area on the effectiveness of animations versus graphics in the long-term retention of information, a primary and critical issue in studies of teaching and learning. In this study, involving 393 student responses, three different animations and two graphics—one with and one lacking a legend—were used to determine the long-term retention of information. The results show that students retain more information 21 d after viewing an animation without narration compared with an equivalent graphic whether or not that graphic had a legend. Students' comments provide additional insight into the value of animations in the pedagogical process, and suggestions for future work are proposed. PMID:17785404

  8. Are Graphic Novels Always "Cool"? Parent and Student Perspectives on Elementary Mathematics and Science Graphic Novels: The Need for Action Research by School Leaders

    ERIC Educational Resources Information Center

    Nesmith, Suzanne; Cooper, Sandi; Schwarz, Gretchen; Walker, Amanda

    2016-01-01

    Often the stakeholders most affected by curriculum change are uninvolved in the change process, leading to curriculum reforms that fail. Thus, a group of university researchers conducted a small-scale study to explore the thoughts and opinions of parents and elementary students on the use of mathematics and science graphic novels to support the…

  9. The Integrated Mode Management Interface

    NASA Technical Reports Server (NTRS)

    Hutchins, Edwin

    1996-01-01

    Mode management is the process of understanding the character and consequences of autoflight modes, planning and selecting the engagement, disengagement and transitions between modes, and anticipating automatic mode transitions made by the autoflight system itself. The state of the art is represented by the latest designs produced by each of the major airframe manufacturers, the Boeing 747-400, the Boeing 777, the McDonnell Douglas MD-11, and the Airbus A320/A340 family of airplanes. In these airplanes autoflight modes are selected by manipulating switches on the control panel. The state of the autoflight system is displayed on the flight mode annunciators. The integrated mode management interface (IMMI) is a graphical interface to autoflight mode management systems for aircraft equipped with flight management computer systems (FMCS). The interface consists of a vertical mode manager and a lateral mode manager. Autoflight modes are depicted by icons on a graphical display. Mode selection is accomplished by touching (or mousing) the appropriate icon. The IMMI provides flight crews with an integrated interface to autoflight systems for aircraft equipped with flight management computer systems (FMCS). The current version is modeled on the Boeing glass-cockpit airplanes (747-400, 757/767). It runs on the SGI Indigo workstation. A working prototype of this graphics-based crew interface to the autoflight mode management tasks of glass cockpit airplanes has been installed in the Advanced Concepts Flight Simulator of the CSSRF of NASA Ames Research Center. This IMMI replaces the devices in FMCS equipped airplanes currently known as mode control panel (Boeing), flight guidance control panel (McDonnell Douglas), and flight control unit (Airbus). It also augments the functions of the flight mode annunciators. All glass cockpit airplanes are sufficiently similar that the IMMI could be tailored to the mode management system of any modern cockpit. The IMMI does not replace the functions of the FMCS control and display unit. The purpose of the IMMI is to provide flight crews with a shared medium in which they can assess the state of the autoflight system, take control actions on it, reason about its behavior, and communicate with each other about its behavior. The design is intended to increase mode awareness and provide a better interface to autoflight mode management. This report describes the IMMI, the methods that were used in designing and developing it, and the theory underlying the design and development processes.

  10. Animation graphic interface for the space shuttle onboard computer

    NASA Technical Reports Server (NTRS)

    Wike, Jeffrey; Griffith, Paul

    1989-01-01

    Graphics interfaces designed to operate on space qualified hardware challenge software designers to display complex information under processing power and physical size constraints. Under contract to Johnson Space Center, MICROEXPERT Systems is currently constructing an intelligent interface for the LASER DOCKING SENSOR (LDS) flight experiment. Part of this interface is a graphic animation display for Rendezvous and Proximity Operations. The displays have been designed in consultation with Shuttle astronauts. The displays show multiple views of a satellite relative to the shuttle, coupled with numeric attitude information. The graphics are generated using position data received by the Shuttle Payload and General Support Computer (PGSC) from the Laser Docking Sensor. Some of the design considerations include crew member preferences in graphic data representation, single versus multiple window displays, mission tailoring of graphic displays, realistic 3D images versus generic icon representations of real objects, the physical relationship of the observers to the graphic display, how numeric or textual information should interface with graphic data, in what frame of reference objects should be portrayed, recognizing conditions of display information-overload, and screen format and placement consistency.

  11. Graphical workstation capability for reliability modeling

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.; Koppen, Sandra V.; Haley, Pamela J.

    1992-01-01

    In addition to computational capabilities, software tools for estimating the reliability of fault-tolerant digital computer systems must also provide a means of interfacing with the user. Described here is the new graphical interface capability of the hybrid automated reliability predictor (HARP), a software package that implements advanced reliability modeling techniques. The graphics oriented (GO) module provides the user with a graphical language for modeling system failure modes through the selection of various fault-tree gates, including sequence-dependency gates, or by a Markov chain. By using this graphical input language, a fault tree becomes a convenient notation for describing a system. In accounting for any sequence dependencies, HARP converts the fault-tree notation to a complex stochastic process that is reduced to a Markov chain, which it can then solve for system reliability. The graphics capability is available for use on an IBM-compatible PC, a Sun, and a VAX workstation. The GO module is written in the C programming language and uses the graphical kernel system (GKS) standard for graphics implementation. The PC, VAX, and Sun versions of the HARP GO module are currently in beta-testing stages.
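
    As a toy illustration of the fault-tree-to-Markov-chain reduction mentioned above, the sketch below solves a two-unit parallel system as a continuous-time Markov chain with scipy; the failure rates and mission time are made-up example values, and HARP's actual solver handles far more general models, including sequence dependencies.

        # Two-unit parallel system as a continuous-time Markov chain.
        import numpy as np
        from scipy.linalg import expm

        lam = 1e-4                      # per-hour failure rate of each unit (example)
        # Generator matrix over states [both up, one up, system failed]
        Q = np.array([[-2*lam,  2*lam,   0.0],
                      [   0.0,   -lam,   lam],
                      [   0.0,    0.0,   0.0]])
        p0 = np.array([1.0, 0.0, 0.0])  # start with both units working
        t = 1000.0                      # mission time in hours
        p_t = p0 @ expm(Q * t)          # state probabilities at time t
        print("system reliability:", 1.0 - p_t[2])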

  12. Transputer parallel processing at NASA Lewis Research Center

    NASA Technical Reports Server (NTRS)

    Ellis, Graham K.

    1989-01-01

    The transputer parallel processing lab at NASA Lewis Research Center (LeRC) consists of 69 processors (transputers) that can be connected into various networks for use in general purpose concurrent processing applications. The main goal of the lab is to develop concurrent scientific and engineering application programs that will take advantage of the computational speed increases available on a parallel processor over the traditional sequential processor. Current research involves the development of basic programming tools. These tools will help standardize program interfaces to specific hardware by providing a set of common libraries for applications programmers. The thrust of the current effort is in developing a set of tools for graphics rendering/animation. The applications programmer currently has two options for on-screen plotting. One option can be used for static graphics displays and the other can be used for animated motion. The option for static display involves the use of 2-D graphics primitives that can be called from within an application program. These routines perform the standard 2-D geometric graphics operations in real-coordinate space as well as allowing multiple windows on a single screen.

  13. Assessing the impact of graphical quality on automatic text recognition in digital maps

    NASA Astrophysics Data System (ADS)

    Chiang, Yao-Yi; Leyk, Stefan; Honarvar Nazari, Narges; Moghaddam, Sima; Tan, Tian Xiang

    2016-08-01

    Converting geographic features (e.g., place names) in map images into a vector format is the first step for incorporating cartographic information into a geographic information system (GIS). With the advancement in computational power and algorithm design, map processing systems have been considerably improved over the last decade. However, the fundamental map processing techniques such as color image segmentation, (map) layer separation, and object recognition are sensitive to minor variations in graphical properties of the input image (e.g., scanning resolution). As a result, most map processing results would not meet user expectations if the user does not "properly" scan the map of interest, pre-process the map image (e.g., using compression or not), and train the processing system, accordingly. These issues could slow down the further advancement of map processing techniques as such unsuccessful attempts create a discouraged user community, and less sophisticated tools would be perceived as more viable solutions. Thus, it is important to understand what kinds of maps are suitable for automatic map processing and what types of results and process-related errors can be expected. In this paper, we shed light on these questions by using a typical map processing task, text recognition, to discuss a number of map instances that vary in suitability for automatic processing. We also present an extensive experiment on a diverse set of scanned historical maps to provide measures of baseline performance of a standard text recognition tool under varying map conditions (graphical quality) and text representations (that can vary even within the same map sheet). Our experimental results help the user understand what to expect when a fully or semi-automatic map processing system is used to process a scanned map with certain (varying) graphical properties and complexities in map content.

  14. Span graphics display utilities handbook, first edition

    NASA Technical Reports Server (NTRS)

    Gallagher, D. L.; Green, J. L.; Newman, R.

    1985-01-01

    The Space Physics Analysis Network (SPAN) is a computer network connecting scientific institutions throughout the United States. This network provides an avenue for timely, correlative research between investigators, in a multidisciplinary approach to space physics studies. An objective in the development of SPAN is to make available direct and simplified procedures that scientists can use, without specialized training, to exchange information over the network. Information exchanges include raw and processed data, analysis programs, correspondence, documents, and graphic images. This handbook details procedures that can be used to exchange graphic images over SPAN. The intent is to periodically update this handbook to reflect the constantly changing facilities available on SPAN. The utilities described within reflect an earnest attempt to provide useful descriptions of working utilities that can be used to transfer graphic images across the network. Whether the graphic images represent satellite observations or theoretical modeling, and whether they are of device-dependent or device-independent type, the SPAN graphics display utilities handbook is intended as the user's guide to graphic image exchange.

  15. Common Graphics Library (CGL). Volume 2: Low-level user's guide

    NASA Technical Reports Server (NTRS)

    Taylor, Nancy L.; Hammond, Dana P.; Theophilos, Pauline M.

    1989-01-01

    The intent is to instruct the users of the Low-Level routines of the Common Graphics Library (CGL). The Low-Level routines form an application-independent graphics package enabling the user community to construct and design scientific charts conforming to the publication and/or viewgraph process. The Low-Level routines allow the user to design unique or unusual report-quality charts from a set of graphics utilities. The features of these routines can be used stand-alone or in conjunction with other packages to enhance or augment their capabilities. This library is written in ANSI FORTRAN 77, and currently uses a CORE-based underlying graphics package, and is therefore machine-independent, providing support for centralized and/or distributed computer systems.

  16. Note: Quasi-real-time analysis of dynamic near field scattering data using a graphics processing unit

    NASA Astrophysics Data System (ADS)

    Cerchiari, G.; Croccolo, F.; Cardinaux, F.; Scheffold, F.

    2012-10-01

    We present an implementation of the analysis of dynamic near field scattering (NFS) data using a graphics processing unit. We introduce an optimized data management scheme thereby limiting the number of operations required. Overall, we reduce the processing time from hours to minutes, for typical experimental conditions. Previously the limiting step in such experiments, the processing time is now comparable to the data acquisition time. Our approach is applicable to various dynamic NFS methods, including shadowgraph, Schlieren and differential dynamic microscopy.
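
    The analysis described above centers on Fourier power spectra of image differences. The following numpy sketch computes that quantity on the CPU for a synthetic image stack, which is the step such implementations move to the GPU; the stack dimensions are arbitrary and the code is not the authors' implementation.

        # Average Fourier power spectrum of image differences at a given lag.
        import numpy as np

        frames = np.random.rand(64, 128, 128).astype(np.float32)   # synthetic stack

        def structure_function(frames, lag):
            """Mean 2-D power spectrum of all frame differences at one time lag."""
            diffs = frames[lag:] - frames[:-lag]
            spectra = np.abs(np.fft.fft2(diffs)) ** 2
            return spectra.mean(axis=0)

        d_lag5 = structure_function(frames, 5)
        print(d_lag5.shape)   # one 2-D spectrum per lag; repeat over lags to build D(q, t)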

  17. Mathematics in science progress report, June 1, 1973-May 31, 1974

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bellman, R.

    1974-01-01

    The purpose of the mathematical biosciences group is to use the conceptual, analytic and computational methods of modern mathematics to treat biomedical and environmental problems. We are employing a systems approach to research, beginning with experiment and medical practice at one end of the scale, continuing through the intermediary of mathematical models and computer techniques, and culminating in clinical applications working closely with teams of doctors. We are pursuing the application of biostatistical methods to a number of medical questions as well as a thoroughgoing use of operations research and systems analysis to hospital practice. The overall objective is to make the Medicare program operational, effective as well as cheap. Pattern recognition and other aspects of artificial intelligence are important here for patient screening. Major efforts are devoted to nuclear medicine, radiotherapy and neurophysiology using the mathematical theory of control and decision processes (dynamic programming and invariant imbedding). Important savings have been made in the time required for tumor scanning using techniques of nuclear medicine. Major mathematical breakthroughs have been made in the treatment of large scale systems, and parameter identification processes. In the field of mental health, we have developed and extended the computerized simulation processes using graphics which are versatile tools for research and training in human interaction processes, particularly in the initial psychotherapy interview.

  18. Ecological determinants of divorce: a structural approach to the explanation of Japanese divorce.

    PubMed

    Fukurai, H; Alston, J P

    1992-01-01

    This paper examines the ecological determinants of contemporary Japanese divorce rates on the prefectural level. LISREL and computer-generated graphics are the analytic methods used. The aggregate level of analysis demands the use of the ecological model which posits that demographic changes, economic activities, migration patterns, and the level of urbanization are significant predictors of divorce rate. Our analysis demonstrates that sex ratio, female labor force participation, female in-migration patterns, population increase, and net household income all play a significant role in affecting the divorce rate. Our findings also confirm the well-supported hypothesis that both population density and modernization positively influence modern Japan's divorce rates. The residual analysis also points out that in order to account for the large proportion of the unexplained variance of Japanese divorce, behavioral-related variables and island- or prefecture-specific dimensions need to be included in the ecological model of divorce.

  19. Aircraft attitude measurement using a vector magnetometer

    NASA Technical Reports Server (NTRS)

    Peitila, R.; Dunn, W. R., Jr.

    1977-01-01

    The feasibility of a vector magnetometer system was investigated by developing a technique to determine attitude given magnetic field components. Sample calculations are then made using the earth's magnetic field data acquired during actual flight conditions. Results of these calculations are compared graphically with measured attitude data acquired simultaneously with the magnetic data. The role and possible implementation of various reference angles are discussed along with other pertinent considerations. Finally, it is concluded that the earth's magnetic field as measured by modern vector magnetometers can play a significant role in attitude control systems.

  20. On the tidal evolution and tails formation of disc galaxies

    NASA Astrophysics Data System (ADS)

    Alavi, M.; Razmi, H.

    2015-11-01

    In this paper, we study the tidal effect of an external perturber upon a disc galaxy based on a generalization of the commonly used Keplerian potential. The generalization of the simple ideal Keplerian potential includes an orbital centripetal term and an overall finite-range controlling correction. Considering the generalized form of the interaction potential, the velocity impulse expressions resulting from tidal forces are computed; then, using typical real values already known from modern observational data, the evolution of the disc, including tidal tail formation, is graphically investigated.

  1. Graphical Interface for the Study of Gas-Phase Reaction Kinetics: Cyclopentene Vapor Pyrolysis

    NASA Astrophysics Data System (ADS)

    Marcotte, Ronald E.; Wilson, Lenore D.

    2001-06-01

    The undergraduate laboratory experiment on the pyrolysis of gaseous cyclopentene has been modernized to improve safety, speed, and precision and to better reflect the current practice of physical chemistry. It now utilizes virtual instrument techniques to create a graphical computer interface for the collection and display of experimental data. An electronic pressure gauge has replaced the mercury manometer formerly needed in proximity to the 500 °C pyrolysis oven. Students have much better real-time information available to them and no longer require multiple lab periods to get rate constants and acceptable Arrhenius parameters. The time saved on manual data collection is used to give the students a tour of the computer interfacing hardware and software and a hands-on introduction to gas-phase reagent preparation using a research-grade high-vacuum system. This includes loading the sample, degassing it by the freeze-pump-thaw technique, handling liquid nitrogen and working through the logic necessary for each reconfiguration of the diffusion pump section and the submanifolds.
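
    The data reduction the modernized experiment automates, extracting Arrhenius parameters from rate constants measured at several temperatures, can be sketched as below; the k(T) values are synthetic placeholders rather than measured cyclopentene data, and the fit is a simple linearized least squares of ln k against 1/T.

        # Arrhenius fit: ln k = ln A - Ea/(R T), linear in 1/T.
        import numpy as np

        R = 8.314                                        # J mol^-1 K^-1
        T = np.array([740.0, 760.0, 780.0, 800.0])       # K
        k = np.array([2.1e-4, 6.8e-4, 2.0e-3, 5.6e-3])   # s^-1 (illustrative values)

        slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
        Ea = -slope * R / 1000.0                          # activation energy, kJ/mol
        A = np.exp(intercept)                             # pre-exponential factor, s^-1
        print(f"Ea = {Ea:.0f} kJ/mol, A = {A:.2e} s^-1")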

  2. Visualization for Hyper-Heuristics. Front-End Graphical User Interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kroenung, Lauren

    Modern society is faced with ever more complex problems, many of which can be formulated as generate-and-test optimization problems. General-purpose optimization algorithms are not well suited for real-world scenarios where many instances of the same problem class need to be repeatedly and efficiently solved because they are not targeted to a particular scenario. Hyper-heuristics automate the design of algorithms to create a custom algorithm for a particular scenario. While such automated design has great advantages, it can often be difficult to understand exactly how a design was derived and why it should be trusted. This project aims to address these issues of usability by creating an easy-to-use graphical user interface (GUI) for hyper-heuristics to support practitioners, as well as scientific visualization of the produced automated designs. My contributions to this project are exhibited in the user-facing portion of the developed system and the detailed scientific visualizations created from back-end data.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Von Dreele, Robert

    One of the goals in developing GSAS-II was to expand from the capabilities of the original General Structure Analysis System (GSAS) which largely encompassed just structure refinement and post refinement analysis. GSAS-II has been written almost entirely in Python loaded with graphics, GUI and mathematical packages (matplotlib, pyOpenGL, wxpython, numpy and scipy). Thus, GSAS-II has a fully developed modern GUI as well as extensive graphical display of data and results. However, the structure and operation of Python has required new approaches to many of the algorithms used in crystal structure analysis. The extensions beyond GSAS include image calibration/integration as well as peak fitting and unit cell indexing for powder data which are precursors for structure solution. Structure solution within GSAS-II begins with either Pawley or LeBail extracted structure factors from powder data or those measured in a single crystal experiment. Both charge flipping and Monte Carlo-Simulated Annealing techniques are available; the former can be applied to (3+1) incommensurate structures as well as conventional 3D structures.

  4. Accelerated Adaptive MGS Phase Retrieval

    NASA Technical Reports Server (NTRS)

    Lam, Raymond K.; Ohara, Catherine M.; Green, Joseph J.; Bikkannavar, Siddarayappa A.; Basinger, Scott A.; Redding, David C.; Shi, Fang

    2011-01-01

    The Modified Gerchberg-Saxton (MGS) algorithm is an image-based wavefront-sensing method that can turn any science instrument focal plane into a wavefront sensor. MGS characterizes optical systems by estimating the wavefront errors in the exit pupil using only intensity images of a star or other point source of light. This innovative implementation of MGS significantly accelerates the MGS phase retrieval algorithm by using stream-processing hardware on conventional graphics cards. Stream processing is a relatively new, yet powerful, paradigm to allow parallel processing of certain applications that apply single instructions to multiple data (SIMD). These stream processors are designed specifically to support large-scale parallel computing on a single graphics chip. Computationally intensive algorithms, such as the Fast Fourier Transform (FFT), are particularly well suited for this computing environment. This high-speed version of MGS exploits commercially available hardware to accomplish the same objective in a fraction of the original time. This is accomplished by performing the matrix calculations on NVIDIA graphics cards. The graphics processing unit (GPU) is hardware that is specialized for computationally intensive, highly parallel computation. From the software perspective, a parallel programming model is used, called CUDA, to transparently scale multicore parallelism in hardware. This technology gives computationally intensive applications access to the processing power of the NVIDIA GPUs through a C/C++ programming interface. The AAMGS (Accelerated Adaptive MGS) software takes advantage of these advanced technologies to accelerate the optical phase error characterization. With a single PC that contains four NVIDIA GTX-280 graphics cards, the new implementation can process four images simultaneously to produce a JWST (James Webb Space Telescope) wavefront measurement 60 times faster than the previous code.
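
    For orientation, the sketch below runs a textbook Gerchberg-Saxton iteration with numpy FFTs, alternately enforcing a known pupil support and a measured focal-plane amplitude. It is a simplified stand-in for the modified algorithm described above, not the MGS or AAMGS code; the pupil geometry, phase screen, and iteration count are arbitrary.

        # Textbook Gerchberg-Saxton phase retrieval between pupil and focal planes.
        import numpy as np

        n = 64
        yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
        pupil = (xx**2 + yy**2) < (n // 4)**2            # circular pupil support

        true_phase = 0.3 * xx / n                        # known phase screen (demo only)
        measured_amp = np.abs(np.fft.fft2(pupil * np.exp(1j * true_phase)))

        phase = np.zeros((n, n))
        for _ in range(200):
            field = pupil * np.exp(1j * phase)                    # impose pupil support
            focal = np.fft.fft2(field)
            focal = measured_amp * np.exp(1j * np.angle(focal))   # impose measured amplitude
            phase = np.angle(np.fft.ifft2(focal))                 # back to the pupil plane

        inside = phase[pupil]
        print("retrieved phase spread inside the pupil:", float(inside.max() - inside.min()))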

  5. A Modernized Approach to Meet Diversified Earth Observing System (EOS) AM-1 Mission Requirements

    NASA Technical Reports Server (NTRS)

    Newman, Lauri Kraft; Hametz, Mark E.; Conway, Darrel J.

    1998-01-01

    From a flight dynamics perspective, the EOS AM-1 mission design and maneuver operations present a number of interesting challenges. The mission design itself is relatively complex for a low Earth mission, requiring a frozen, Sun-synchronous, polar orbit with a repeating ground track. Beyond the need to design an orbit that meets these requirements, the recent focus on low-cost, "lights out" operations has encouraged a shift to more automated ground support. Flight dynamics activities previously performed in special facilities created solely for that purpose and staffed by personnel with years of design experience are now being shifted to the mission operations centers (MOCs) staffed by flight operations team (FOT) operators. These operators' responsibilities include flight dynamics as a small subset of their work; therefore, FOT personnel often do not have the experience to make critical maneuver design decisions. Thus, streamlining the analysis and planning work required for such a complicated orbit design and preparing FOT personnel to take on the routine operation of such a spacecraft both necessitated increasing the automation level of the flight dynamics functionality. The FreeFlyer(trademark) software developed by AI Solutions provides a means to achieve both of these goals. The graphic interface enables users to interactively perform analyses that previously required many parametric studies and much data reduction to achieve the same result. In addition, the fuzzy logic engine enables the simultaneous evaluation of multiple conflicting constraints, removing the analyst from the loop and allowing the FOT to perform more of the operations without much background in orbit design. Modernized techniques were implemented for EOS AM-1 flight dynamics support in several areas, including launch window determination, orbit maintenance maneuver control strategies, and maneuver design and calibration automation. The benefits of implementing these techniques include increased fuel available for on-orbit maneuvering, a simplified orbit maintenance process to minimize science data downtime, and an automated routine maneuver planning process. This paper provides an examination of the modernized techniques implemented for EOS AM-1 to achieve these benefits.

  6. A modernized approach to meet diversified earth observing system (EOS) AM-1 mission requirements

    NASA Technical Reports Server (NTRS)

    Newman, Lauri Kraft; Hametz, Mark E.; Conway, Darrel J.

    1998-01-01

    From a flight dynamics perspective, the EOS AM-1 mission design and maneuver operations present a number of interesting challenges. The mission design itself is relatively complex for a low Earth mission, requiring a frozen, Sun-synchronous, polar orbit with a repeating ground track. Beyond the need to design an orbit that meets these requirements, the recent focus on low-cost, 'lights out' operations has encouraged a shift to more automated ground support. Flight dynamics activities previously performed in special facilities created solely for that purpose and staffed by personnel with years of design experience are now being shifted to the mission operations centers (MOCs) staffed by flight operations team (FOT) operators. These operators' responsibilities include flight dynamics as a small subset of their work; therefore, FOT personnel often do not have the experience to make critical maneuver design decisions. Thus, streamlining the analysis and planning work required for such a complicated orbit design and preparing FOT personnel to take on the routine operation of such a spacecraft both necessitated increasing the automation level of the flight dynamics functionality. The FreeFlyer(TM) software developed by AI Solutions provides a means to achieve both of these goals. The graphic interface enables users to interactively perform analyses that previously required many parametric studies and much data reduction to achieve the same result. In addition, the fuzzy logic engine enables the simultaneous evaluation of multiple conflicting constraints, removing the analyst from the loop and allowing the FOT to perform more of the operations without much background in orbit design. Modernized techniques were implemented for EOS AM-1 flight dynamics support in several areas, including launch window determination, orbit maintenance maneuver control strategies, and maneuver design and calibration automation. The benefits of implementing these techniques include increased fuel available for on-orbit maneuvering, a simplified orbit maintenance process to minimize science data downtime, and an automated routine maneuver planning process. This paper provides an examination of the modernized techniques implemented for EOS AM-1 to achieve these benefits.

  7. Image reproduction with interactive graphics

    NASA Technical Reports Server (NTRS)

    Buckner, J. D.; Council, H. W.; Edwards, T. R.

    1974-01-01

    Software application or development in optical image digital data processing requires a fast, good quality, yet inexpensive hard copy of processed images. To achieve this, a Cambo camera with an f 2.8/150-mm Xenotar lens in a Copal shutter having a Graflok back for 4 x 5 Polaroid type 57 pack-film has been interfaced to an existing Adage, AGT-30/Electro-Mechanical Research, EMR 6050 graphic computer system. Time-lapse photography in conjunction with a log to linear voltage transformation has resulted in an interactive system capable of producing a hard copy in 54 sec. The interactive aspect of the system lies in a Tektronix 4002 graphic computer terminal and its associated hard copy unit.

  8. Performance evaluation of throughput computing workloads using multi-core processors and graphics processors

    NASA Astrophysics Data System (ADS)

    Dave, Gaurav P.; Sureshkumar, N.; Blessy Trencia Lincy, S. S.

    2017-11-01

    The current trend in processor manufacturing focuses on multi-core architectures rather than increasing the clock speed for performance improvement. Graphics processors have become commodity hardware for providing fast co-processing in computer systems. Developments in IoT, social networking web applications, and big data have created huge demand for data processing activities, and such throughput-intensive applications inherently contain data-level parallelism, which is well suited to SIMD-architecture-based GPUs. This paper reviews the architectural aspects of multi/many-core processors and graphics processors. Different case studies are taken to compare the performance of throughput computing applications using shared-memory programming in OpenMP and CUDA API based programming.

  9. Real-time radar signal processing using GPGPU (general-purpose graphic processing unit)

    NASA Astrophysics Data System (ADS)

    Kong, Fanxing; Zhang, Yan Rockee; Cai, Jingxiao; Palmer, Robert D.

    2016-05-01

    This study introduces a practical approach to developing a real-time signal processing chain for a general phased array radar on NVIDIA GPUs (Graphics Processing Units) using CUDA (Compute Unified Device Architecture) libraries such as cuBLAS and cuFFT, which are adopted from open source libraries and optimized for NVIDIA GPUs. The processed results are rigorously verified against those from the CPUs. Performance, benchmarked as computation time for various input data cube sizes, is compared across GPUs and CPUs. Through the analysis, it is demonstrated that GPGPU (general-purpose GPU) real-time processing of the array radar data is possible with relatively low-cost commercial GPUs.
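
    One representative stage of such a processing chain, FFT-based pulse compression (matched filtering) of a linear-FM chirp, is sketched below on the CPU with numpy; a GPU implementation like the one described would perform the same transforms with cuFFT, batched over many range lines. The sampling rate, pulse parameters, and target position are arbitrary.

        # Pulse compression by FFT-based matched filtering of a linear-FM chirp.
        import numpy as np

        fs, pulse_len, bandwidth = 1e6, 100e-6, 0.5e6
        t = np.arange(0, pulse_len, 1 / fs)
        chirp = np.exp(1j * np.pi * (bandwidth / pulse_len) * t**2)   # transmitted pulse

        echo = np.zeros(4096, dtype=complex)
        echo[800:800 + chirp.size] = 0.5 * chirp                      # target at bin 800
        echo += 0.01 * (np.random.randn(echo.size) + 1j * np.random.randn(echo.size))

        n = echo.size
        compressed = np.fft.ifft(np.fft.fft(echo, n) * np.conj(np.fft.fft(chirp, n)))
        print("detected range bin:", int(np.argmax(np.abs(compressed))))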

  10. U.S. Tsunami Information Technology (TIM) Modernization: Developing a Maintainable and Extensible Open Source Earthquake and Tsunami Warning System

    NASA Astrophysics Data System (ADS)

    Hellman, S. B.; Lisowski, S.; Baker, B.; Hagerty, M.; Lomax, A.; Leifer, J. M.; Thies, D. A.; Schnackenberg, A.; Barrows, J.

    2015-12-01

    Tsunami Information technology Modernization (TIM) is a National Oceanic and Atmospheric Administration (NOAA) project to update and standardize the earthquake and tsunami monitoring systems currently employed at the U.S. Tsunami Warning Centers in Ewa Beach, Hawaii (PTWC) and Palmer, Alaska (NTWC). While this project was funded by NOAA to solve a specific problem, the requirements that the delivered system be both open source and easily maintainable have resulted in the creation of a variety of open source (OS) software packages. The open source software is now complete and this is a presentation of the OS Software that has been funded by NOAA for benefit of the entire seismic community. The design architecture comprises three distinct components: (1) The user interface, (2) The real-time data acquisition and processing system and (3) The scientific algorithm library. The system follows a modular design with loose coupling between components. We now identify the major project constituents. The user interface, CAVE, is written in Java and is compatible with the existing National Weather Service (NWS) open source graphical system AWIPS. The selected real-time seismic acquisition and processing system is open source SeisComp3 (sc3). The seismic library (libseismic) contains numerous custom written and wrapped open source seismic algorithms (e.g., ML/mb/Ms/Mwp, mantle magnitude (Mm), w-phase moment tensor, bodywave moment tensor, finite-fault inversion, array processing). The seismic library is organized in a way (function naming and usage) that will be familiar to users of Matlab. The seismic library extends sc3 so that it can be called by the real-time system, but it can also be driven and tested outside of sc3, for example, by ObsPy or Earthworm. To unify the three principal components we have developed a flexible and lightweight communication layer called SeismoEdex.

  11. An Imaging And Graphics Workstation For Image Sequence Analysis

    NASA Astrophysics Data System (ADS)

    Mostafavi, Hassan

    1990-01-01

    This paper describes an application-specific engineering workstation designed and developed to analyze imagery sequences from a variety of sources. The system combines the software and hardware environment of the modern graphic-oriented workstations with the digital image acquisition, processing and display techniques. The objective is to achieve automation and high throughput for many data reduction tasks involving metric studies of image sequences. The applications of such an automated data reduction tool include analysis of the trajectory and attitude of aircraft, missile, stores and other flying objects in various flight regimes including launch and separation as well as regular flight maneuvers. The workstation can also be used in an on-line or off-line mode to study three-dimensional motion of aircraft models in simulated flight conditions such as wind tunnels. The system's key features are: 1) Acquisition and storage of image sequences by digitizing real-time video or frames from a film strip; 2) computer-controlled movie loop playback, slow motion and freeze frame display combined with digital image sharpening, noise reduction, contrast enhancement and interactive image magnification; 3) multiple leading edge tracking in addition to object centroids at up to 60 fields per second from both live input video or a stored image sequence; 4) automatic and manual field-of-view and spatial calibration; 5) image sequence data base generation and management, including the measurement data products; 6) off-line analysis software for trajectory plotting and statistical analysis; 7) model-based estimation and tracking of object attitude angles; and 8) interface to a variety of video players and film transport sub-systems.

  12. Cooperative processing user interfaces for AdaNET

    NASA Technical Reports Server (NTRS)

    Gutzmann, Kurt M.

    1991-01-01

    A cooperative processing user interface (CUI) system shares the task of graphical display generation and presentation between the user's computer and a remote host. The communications link between the two computers is typically a modem or Ethernet. The two main purposes of a CUI are reduction of the amount of data transmitted between user and host machines, and provision of a graphical user interface system to make the system easier to use.

  13. HeatmapGenerator: high performance RNAseq and microarray visualization software suite to examine differential gene expression levels using an R and C++ hybrid computational pipeline.

    PubMed

    Khomtchouk, Bohdan B; Van Booven, Derek J; Wahlestedt, Claes

    2014-01-01

    The graphical visualization of gene expression data using heatmaps has become an integral component of modern-day medical research. Heatmaps are used extensively to plot quantitative differences in gene expression levels, such as those measured with RNAseq and microarray experiments, to provide qualitative large-scale views of the transcriptomic landscape. Creating high-quality heatmaps is a computationally intensive task, often requiring considerable programming experience, particularly for customizing features to a specific dataset at hand. Software to create publication-quality heatmaps is developed with the R programming language, C++ programming language, and OpenGL application programming interface (API) to create industry-grade high performance graphics. We create a graphical user interface (GUI) software package called HeatmapGenerator for Windows OS and Mac OS X as an intuitive, user-friendly alternative for researchers with minimal prior coding experience to allow them to create publication-quality heatmaps using R graphics without sacrificing their desired level of customization. The simplicity of HeatmapGenerator is that it only requires the user to upload a preformatted input file and download the publicly available R software language, among a few other operating system-specific requirements. Advanced features such as color, text labels, scaling, legend construction, and even database storage can be easily customized with no prior programming knowledge. We provide an intuitive and user-friendly software package, HeatmapGenerator, to create high-quality, customizable heatmaps generated using the high-resolution color graphics capabilities of R. The software is available for Microsoft Windows and Apple Mac OS X. HeatmapGenerator is released under the GNU General Public License and publicly available at: http://sourceforge.net/projects/heatmapgenerator/. The Mac OS X direct download is available at: http://sourceforge.net/projects/heatmapgenerator/files/HeatmapGenerator_MAC_OSX.tar.gz/download. The Windows OS direct download is available at: http://sourceforge.net/projects/heatmapgenerator/files/HeatmapGenerator_WINDOWS.zip/download.

  14. A Java-Enabled Interactive Graphical Gas Turbine Propulsion System Simulator

    NASA Technical Reports Server (NTRS)

    Reed, John A.; Afjeh, Abdollah A.

    1997-01-01

    This paper describes a gas turbine simulation system which utilizes the newly developed Java language environment software system. The system provides an interactive graphical environment which allows the quick and efficient construction and analysis of arbitrary gas turbine propulsion systems. The simulation system couples a graphical user interface, developed using the Java Abstract Window Toolkit, and a transient, space- averaged, aero-thermodynamic gas turbine analysis method, both entirely coded in the Java language. The combined package provides analytical, graphical and data management tools which allow the user to construct and control engine simulations by manipulating graphical objects on the computer display screen. Distributed simulations, including parallel processing and distributed database access across the Internet and World-Wide Web (WWW), are made possible through services provided by the Java environment.

  15. Experimental Methodology for Evaluating the Characteristics of Avionics Graphics Platforms

    NASA Astrophysics Data System (ADS)

    Legault, Vincent

    Within a context where the aviation industry intensifies the development of new visually appealing features and where time-to-market must be as short as possible, rapid graphics processing benchmarking in a certified avionics environment becomes an important issue. With this work we intend to demonstrate that it is possible to deploy a high-performance graphics application on an avionics platform that uses certified graphical COTS components. Moreover, we would like to bring to the avionics community a methodology which will allow developers to identify the needed elements for graphics system optimisation and provide them with tools that can measure the complexity of this type of application and measure the amount of resources needed to properly scale a graphics system according to their needs. As far as we know, no graphics performance profiling tool dedicated to critical embedded architectures has been proposed. We thus had the idea of implementing a specialized benchmarking tool that would be an appropriate and effective solution to this problem. Our solution resides in extracting the key graphics specifications from an inherited application and using them afterwards in a 3D image-generation application.

  16. Preschool-aged children have difficulty constructing and interpreting simple utterances composed of graphic symbols.

    PubMed

    Sutton, Ann; Trudeau, Natacha; Morford, Jill; Rios, Monica; Poirier, Marie-Andrée

    2010-01-01

    Children who require augmentative and alternative communication (AAC) systems while they are in the process of acquiring language face unique challenges because they use graphic symbols for communication. In contrast to the situation of typically developing children, they use different modalities for comprehension (auditory) and expression (visual). This study explored the ability of three- and four-year-old children without disabilities to perform tasks involving sequences of graphic symbols. Thirty participants were asked to transpose spoken simple sentences into graphic symbols by selecting individual symbols corresponding to the spoken words, and to interpret graphic symbol utterances by selecting one of four photographs corresponding to a sequence of three graphic symbols. The results showed that these were not simple tasks for the participants, and few of them performed in the expected manner - only one in transposition, and only one-third of participants in interpretation. Individual response strategies in some cases lead to contrasting response patterns. Children at this age level have not yet developed the skills required to deal with graphic symbols even though they have mastered the corresponding spoken language structures.

  17. Combined Ozone Retrieval From METOP Sensors Using META-Training Of Deep Neural Networks

    NASA Astrophysics Data System (ADS)

    Felder, Martin; Sehnke, Frank; Kaifel, Anton

    2013-12-01

    The newest installment of our well-proven Neural Network Ozone Retrieval System (NNORSY) combines the METOP sensors GOME-2 and IASI with cloud information from AVHRR. Through the use of advanced meta-learning techniques like automatic feature selection and automatic architecture search applied to a set of deep neural networks, having at least two or three hidden layers, we have been able to avoid many technical issues normally encountered during the construction of such a joint retrieval system. This has been made possible by harnessing the processing power of modern consumer graphics cards with high-performance graphics processors (GPUs), which decreases training times by about two orders of magnitude. The system was trained on data from 2009 and 2010, including target ozone profiles from ozone sondes, ACE-FTS and MLS-AURA. To make maximum use of tropospheric information in the spectra, the data were partitioned into several sets of different cloud fraction ranges within the GOME-2 FOV, on which specialized retrieval networks are being trained. For the final ozone retrieval processing, the different specialized networks are combined. The resulting retrieval system is very stable and does not show any systematic dependence on solar zenith angle, scan angle or sensor degradation. We present several sensitivity studies with regard to cloud fraction and target sensor type, as well as the performance in several latitude bands and with respect to independent validation stations. A visual cross-comparison against high-resolution ozone profiles from the KNMI EUMETSAT Ozone SAF product has also been performed and shows some distinctive features which we will briefly discuss. Overall, we demonstrate that a complex retrieval system can now be constructed with a minimum of machine learning knowledge, using automated algorithms for many design decisions previously requiring expert knowledge. Provided sufficient training data and computational power of GPUs are available, the method can be applied to almost any kind of retrieval or, more generally, regression problem.

  18. Parallelizing flow-accumulation calculations on graphics processing units—From iterative DEM preprocessing algorithm to recursive multiple-flow-direction algorithm

    NASA Astrophysics Data System (ADS)

    Qin, Cheng-Zhi; Zhan, Lijun

    2012-06-01

    As one of the important tasks in digital terrain analysis, the calculation of flow accumulations from gridded digital elevation models (DEMs) usually involves two steps in a real application: (1) using an iterative DEM preprocessing algorithm to remove the depressions and flat areas commonly contained in real DEMs, and (2) using a recursive flow-direction algorithm to calculate the flow accumulation for every cell in the DEM. Because both algorithms are computationally intensive, quick calculation of the flow accumulations from a DEM (especially for a large area) presents a practical challenge to personal computer (PC) users. In recent years, rapid increases in hardware capacity of the graphics processing units (GPUs) provided in modern PCs have made it possible to meet this challenge in a PC environment. Parallel computing on GPUs using a compute-unified-device-architecture (CUDA) programming model has been explored to speed up the execution of the single-flow-direction algorithm (SFD). However, the parallel implementation on a GPU of the multiple-flow-direction (MFD) algorithm, which generally performs better than the SFD algorithm, has not been reported. Moreover, GPU-based parallelization of the DEM preprocessing step in the flow-accumulation calculations has not been addressed. This paper proposes a parallel approach to calculate flow accumulations (including both iterative DEM preprocessing and a recursive MFD algorithm) on a CUDA-compatible GPU. For the parallelization of an MFD algorithm (MFD-md), two different parallelization strategies using a GPU are explored. The first parallelization strategy, which has been used in the existing parallel SFD algorithm on GPU, has the problem of computing redundancy. Therefore, we designed a parallelization strategy based on graph theory. The application results show that the proposed parallel approach to calculate flow accumulations on a GPU performs much faster than either sequential algorithms or other parallel GPU-based algorithms based on existing parallelization strategies.
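
    For reference, the sketch below computes flow accumulation sequentially for a tiny depression-free DEM with a single-flow-direction (D8) rule, processing cells from highest to lowest elevation so that every upslope contribution arrives before it is passed on; this is the serial computation that the paper parallelizes on a GPU and generalizes to multiple flow directions. The DEM values are arbitrary.

        # Sequential D8 flow accumulation on a tiny, depression-free DEM.
        import numpy as np

        dem = np.array([[9., 8., 7.],
                        [8., 6., 5.],
                        [7., 5., 3.]])
        acc = np.ones_like(dem)                       # each cell contributes itself
        rows, cols = dem.shape
        flat = np.argsort(dem, axis=None)[::-1]       # cells from highest to lowest
        order = np.column_stack(np.unravel_index(flat, dem.shape))

        for r, c in order:
            best, target = 0.0, None
            for dr in (-1, 0, 1):                     # pick the steepest downslope neighbor
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if (dr or dc) and 0 <= rr < rows and 0 <= cc < cols:
                        drop = (dem[r, c] - dem[rr, cc]) / np.hypot(dr, dc)
                        if drop > best:
                            best, target = drop, (rr, cc)
            if target is not None:
                acc[target] += acc[r, c]              # pass accumulated area downslope
        print(acc)                                    # outlet cell collects all 9 cells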

  19. Graphic Communications--Commercial Photography. Ohio's Competency Analysis Profile.

    ERIC Educational Resources Information Center

    Ohio State Univ., Columbus. Vocational Instructional Materials Lab.

    This Ohio Competency Analysis Profile (OCAP), derived from a modified Developing a Curriculum (DACUM) process, is a current comprehensive and verified employer competency program list for graphic communications--commercial photography. Each unit (with or without subunits) contains competencies and competency builders that identify the…

  20. Design and Implementation of a Tool for Teaching Programming.

    ERIC Educational Resources Information Center

    Goktepe, Mesut; And Others

    1989-01-01

    Discussion of the use of computers in education focuses on a graphics-based system for teaching the Pascal programing language for problem solving. Topics discussed include user interface; notification based systems; communication processes; object oriented programing; workstations; graphics architecture; and flowcharts. (18 references) (LRW)

  1. Display system for imaging scientific telemetric information

    NASA Technical Reports Server (NTRS)

    Zabiyakin, G. I.; Rykovanov, S. N.

    1979-01-01

    A system for imaging scientific telemetric information, based on the M-6000 minicomputer and the SIGD graphic display, is described. The system provides two-dimensional graphic display of telemetric information and interaction with the computer during the analysis and processing of the telemetric parameters displayed on the screen. The method used to output running-parameter information is presented. User capabilities in the analysis and processing of telemetric information imaged on the display screen, and the user language, are discussed and illustrated.

  2. Accelerating Molecular Dynamic Simulation on Graphics Processing Units

    PubMed Central

    Friedrichs, Mark S.; Eastman, Peter; Vaidyanathan, Vishal; Houston, Mike; Legrand, Scott; Beberg, Adam L.; Ensign, Daniel L.; Bruns, Christopher M.; Pande, Vijay S.

    2009-01-01

    We describe a complete implementation of all-atom protein molecular dynamics running entirely on a graphics processing unit (GPU), including all standard force field terms, integration, constraints, and implicit solvent. We discuss the design of our algorithms and important optimizations needed to fully take advantage of a GPU. We evaluate its performance, and show that it can be more than 700 times faster than a conventional implementation running on a single CPU core. PMID:19191337
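    For readers unfamiliar with the computational core being accelerated, the sketch below shows a minimal CPU/NumPy version of the molecular dynamics loop: pairwise Lennard-Jones forces plus velocity-Verlet integration. The paper's GPU implementation covers full force-field terms, constraints, and implicit solvent, none of which are reproduced here; all parameters are toy values.

```python
import numpy as np

# Minimal CPU/NumPy sketch of the molecular-dynamics core: pairwise
# Lennard-Jones forces plus velocity-Verlet integration.

def lj_forces(pos, eps=1.0, sigma=1.0):
    disp = pos[:, None, :] - pos[None, :, :]            # r_i - r_j for every pair
    r2 = np.sum(disp ** 2, axis=-1)
    np.fill_diagonal(r2, np.inf)                        # exclude self-interaction
    inv6 = (sigma ** 2 / r2) ** 3
    fmag = 24.0 * eps * (2.0 * inv6 ** 2 - inv6) / r2   # |F| / r for each pair
    return np.sum(fmag[:, :, None] * disp, axis=1)

def velocity_verlet(pos, vel, mass, dt, steps):
    f = lj_forces(pos)
    for _ in range(steps):
        vel += 0.5 * dt * f / mass
        pos += dt * vel
        f = lj_forces(pos)
        vel += 0.5 * dt * f / mass
    return pos, vel

grid = np.arange(3) * 1.5                               # small cubic lattice of atoms
pos = np.array(np.meshgrid(grid, grid, grid)).reshape(3, -1).T.copy()
vel = np.zeros_like(pos)
pos, vel = velocity_verlet(pos, vel, mass=1.0, dt=1e-3, steps=100)
print(pos.shape, vel.shape)
```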

  3. AVE-SESAME program for the REEDA System

    NASA Technical Reports Server (NTRS)

    Hickey, J. S.

    1981-01-01

    The REEDA system software was modified and improved to process the AVE-SESAME severe storm data. A random access file system for the AVE storm data was designed, tested, and implemented. The AVE/SESAME software was modified to incorporate the random access file input and to interface with new graphics hardware/software now available on the REEDA system. Software was developed to graphically display the AVE/SESAME data in the convention normally used by severe storm researchers. Software was converted to the AVE/SESAME software systems and interfaced with the existing graphics hardware/software available on the REEDA System. Software documentation was provided for existing AVE/SESAME programs, outlining functional flow charts and interactive questions. All AVE/SESAME data sets in random access format were processed to allow the developed software to access the entire AVE/SESAME database. The existing software was modified to allow for processing of different AVE/SESAME data set types, including satellite, surface, and radar data.

  4. SISYPHUS: A high performance seismic inversion factory

    NASA Astrophysics Data System (ADS)

    Gokhberg, Alexey; Simutė, Saulė; Boehm, Christian; Fichtner, Andreas

    2016-04-01

    In recent years, massively parallel high-performance computers have become the standard instruments for solving forward and inverse problems in seismology. The software packages dedicated to forward and inverse waveform modelling and specially designed for such computers (SPECFEM3D, SES3D) have become mature and widely available. These packages achieve significant computational performance and provide researchers with an opportunity to solve larger problems at higher resolution within a shorter time. However, a typical seismic inversion process contains various activities that are beyond the common solver functionality. They include management of information on seismic events and stations, 3D models, observed and synthetic seismograms, pre-processing of the observed signals, computation of misfits and adjoint sources, minimization of misfits, and process workflow management. These activities are time-consuming, seldom sufficiently automated, and therefore represent a bottleneck that can substantially offset the performance benefits provided by even the most powerful modern supercomputers. Furthermore, the typical system architecture of modern supercomputing platforms is oriented towards maximum computational performance and provides limited standard facilities for automation of the supporting activities. We present a prototype solution that automates all aspects of the seismic inversion process and is tuned for modern massively parallel high-performance computing systems. We address several major aspects of the solution architecture, which include (1) design of an inversion state database for tracing all relevant aspects of the entire solution process, (2) design of an extensible workflow management framework, (3) integration with wave propagation solvers, (4) integration with optimization packages, (5) computation of misfits and adjoint sources, and (6) process monitoring. The inversion state database represents a hierarchical structure with branches for the static process setup, inversion iterations, and solver runs, each branch specifying information at the event, station and channel levels. The workflow management framework is based on an embedded scripting engine that allows definition of various workflow scenarios using a high-level scripting language and provides access to all available inversion components represented as standard library functions. At present the SES3D wave propagation solver is integrated into the solution; work is in progress on interfacing with SPECFEM3D. A separate framework is designed for interoperability with an optimization module; the workflow manager and optimization process run in parallel and cooperate by exchanging messages according to a specially designed protocol. A library of high-performance modules implementing signal pre-processing, misfit and adjoint computations according to established good practices is included. Monitoring is based on information stored in the inversion state database and at present implements a command-line interface; design of a graphical user interface is in progress. The software design fits well into the common massively parallel system architecture featuring a large number of computational nodes running distributed applications under control of batch-oriented resource managers. The solution prototype has been implemented on the "Piz Daint" supercomputer provided by the Swiss Supercomputing Centre (CSCS).

  5. GPU Acceleration of DSP for Communication Receivers.

    PubMed

    Gunther, Jake; Gunther, Hyrum; Moon, Todd

    2017-09-01

    Graphics processing unit (GPU) implementations of signal processing algorithms can outperform CPU-based implementations. This paper describes the GPU implementation of several algorithms encountered in a wide range of high-data-rate communication receivers, including filters, multirate filters, numerically controlled oscillators, and multi-stage digital down converters. These structures are tested by processing the 20 MHz wide FM radio band (88-108 MHz). Two receiver structures are explored: a single-channel receiver and a filter bank channelizer. Both run in real time on an NVIDIA GeForce GTX 1080 graphics card.
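    The digital down-converter stage mentioned above can be sketched in a few lines of NumPy: mix with a numerically controlled oscillator, low-pass filter, and decimate. The filter length, rates, and windowed-sinc design below are illustrative assumptions, not the GPU kernels described in the paper.

```python
import numpy as np

# NumPy sketch of one digital down-converter stage: NCO mixing, low-pass FIR
# filtering, and decimation.  All rates and filter choices are illustrative.

fs = 20e6                          # input sample rate (Hz)
f_station = 3.1e6                  # offset of the desired channel within the band
decim = 100                        # decimation factor -> 200 kHz output rate

rng = np.random.default_rng(0)
t = np.arange(100_000) / fs
rf_in = np.cos(2 * np.pi * f_station * t) + 0.1 * rng.standard_normal(t.size)

# Numerically controlled oscillator: complex exponential that shifts the
# desired channel down to baseband.
nco = np.exp(-2j * np.pi * f_station * t)
mixed = rf_in * nco

# Windowed-sinc low-pass FIR with cutoff at the post-decimation Nyquist rate.
cutoff = fs / (2 * decim)          # 100 kHz
taps = 257
n = np.arange(taps) - (taps - 1) / 2
h = np.sinc(2 * cutoff / fs * n) * np.hamming(taps)
h /= h.sum()

baseband = np.convolve(mixed, h, mode="same")[::decim]
print(baseband.shape)              # (1000,) complex samples at 200 kHz
```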

  6. Caititu: a tool to graphically represent peptide sequence coverage and domain distribution.

    PubMed

    Carvalho, Paulo C; Junqueira, Magno; Valente, Richard H; Domont, Gilberto B

    2008-10-07

    Here we present Caititu, an easy-to-use proteomics software tool that graphically represents peptide sequence coverage and domain distribution for different correlated samples (e.g., originating from 2D gel spots) relative to the full sequence of the known protein to which they are related. Although Caititu has broad applicability, we exemplify its usefulness in Toxinology using snake venom as a model. For example, proteolytic processing may lead to inactivation or loss of domains. Therefore, our proposed graphic representation for peptides identified by two-dimensional electrophoresis followed by mass spectrometric identification of excised spots can aid in inferring what kind of processing, if any, happened to the toxins. Caititu is freely available to download at: http://pcarvalho.com/things/caititu.
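    The bookkeeping behind such a coverage plot is straightforward and can be illustrated independently of Caititu: map each identified peptide onto the reference protein and mark the residues it covers. The protein sequence and peptide strings below are hypothetical.

```python
# Simple illustration of the underlying bookkeeping (not Caititu itself):
# mark which residues of a reference protein are covered by identified
# peptides and report the percent coverage.

protein = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQ"
peptides = ["QRQISFVK", "LGLIEVQAP", "GDGTQDNLSG"]   # hypothetical identifications

covered = [False] * len(protein)
for pep in peptides:
    start = protein.find(pep)
    while start != -1:                      # mark every occurrence of the peptide
        for i in range(start, start + len(pep)):
            covered[i] = True
        start = protein.find(pep, start + 1)

coverage = 100.0 * sum(covered) / len(protein)
track = "".join("#" if c else "." for c in covered)
print(f"coverage: {coverage:.1f}%")
print(track)                                # crude text 'graphic' of coverage
```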

  7. Techniques for efficient, real-time, 3D visualization of multi-modality cardiac data using consumer graphics hardware.

    PubMed

    Levin, David; Aladl, Usaf; Germano, Guido; Slomka, Piotr

    2005-09-01

    We exploit consumer graphics hardware to perform real-time processing and visualization of high-resolution, 4D cardiac data. We have implemented real-time, realistic volume rendering, interactive 4D motion segmentation of cardiac data, visualization of multi-modality cardiac data and 3D display of multiple series cardiac MRI. We show that an ATI Radeon 9700 Pro can render a 512x512x128 cardiac Computed Tomography (CT) study at 0.9 to 60 frames per second (fps) depending on rendering parameters and that 4D motion based segmentation can be performed in real-time. We conclude that real-time rendering and processing of cardiac data can be implemented on consumer graphics cards.

  8. Architectures for single-chip image computing

    NASA Astrophysics Data System (ADS)

    Gove, Robert J.

    1992-04-01

    This paper will focus on the architectures of VLSI programmable processing components for image computing applications. TI, the maker of industry-leading RISC, DSP, and graphics components, has developed an architecture for a new-generation of image processors capable of implementing a plurality of image, graphics, video, and audio computing functions. We will show that the use of a single-chip heterogeneous MIMD parallel architecture best suits this class of processors--those which will dominate the desktop multimedia, document imaging, computer graphics, and visualization systems of this decade.

  9. A study of methods to predict and measure the transmission of sound through the walls of light aircraft. A survey of techniques for visualization of noise fields

    NASA Technical Reports Server (NTRS)

    Marshall, S. E.; Bernhard, R.

    1984-01-01

    A survey of the most widely used methods for visualizing acoustic phenomena is presented. Emphasis is placed on acoustic processes in the audible frequencies. Many visual problems are analyzed on computer graphics systems. A brief description of the current technology in computer graphics is included. The visualization technique survey will serve as a basis for recommending an optimum scheme for displaying acoustic fields on computer graphics systems.

  10. Evaluation of three electronic report processing systems for preparing hydrologic reports of the U.S. Geological Survey, Water Resources Division

    USGS Publications Warehouse

    Stiltner, G.J.

    1990-01-01

    In 1987, the Water Resources Division of the U.S. Geological Survey undertook three pilot projects to evaluate electronic report processing systems as a means to improve the quality and timeliness of reports pertaining to water resources investigations. The three projects selected for study included the use of the following configuration of software and hardware: Ventura Publisher software on an IBM model AT personal computer, PageMaker software on a Macintosh computer, and FrameMaker software on a Sun Microsystems workstation. The following assessment criteria were to be addressed in the pilot studies: The combined use of text, tables, and graphics; analysis of time; ease of learning; compatibility with the existing minicomputer system; and technical limitations. It was considered essential that the camera-ready copy produced be in a format suitable for publication. Visual improvement alone was not a consideration. This report consolidates and summarizes the findings of the electronic report processing pilot projects. Text and table files originating on the existing minicomputer system were successfully transformed to the electronic report processing systems in American Standard Code for Information Interchange (ASCII) format. Graphics prepared using a proprietary graphics software package were transferred to all the electronic report processing software through the use of Computer Graphic Metafiles. Graphics from other sources were entered into the systems by scanning paper images. Comparative analysis of time needed to process text and tables by the electronic report processing systems and by conventional methods indicated that, although more time is invested in creating the original page composition for an electronically processed report , substantial time is saved in producing subsequent reports because the format can be stored and re-used by electronic means as a template. Because of the more compact page layouts, costs of printing the reports were 15% to 25% less than costs of printing the reports prepared by conventional methods. Because the largest report workload in the offices conducting water resources investigations is preparation of Water-Resources Investigations Reports, Open-File Reports, and annual State Data Reports, the pilot studies only involved these projects. (USGS)

  11. On the typography of flight-deck documentation

    NASA Technical Reports Server (NTRS)

    Degani, Asaf

    1992-01-01

    Many types of paper documentation are employed on the flight-deck. They range from a simple checklist card to a bulky Aircraft Flight Manual (AFM). Some of these documents have typographical and graphical deficiencies; yet many cockpit tasks, such as conducting checklists, way-point entry, limitations and performance calculations, and many more, require the use of these documents. Moreover, during emergency and abnormal situations, the flight crews' effectiveness in combating the situation is highly dependent on such documentation; accessing and reading procedures has a significant impact on flight safety. Although flight-deck documentation is an important (and sometimes critical) form of display in the modern cockpit, there is a dearth of information on how to design these displays effectively. The object of this report is to provide a summary of the available literature regarding the design and typographical aspects of printed matter. The report attempts to bridge the gap between basic research about typography and the kind of information needed by designers of flight-deck documentation. The report focuses on typographical factors such as typefaces, character height, use of lower- and upper-case characters, line length, and spacing. Some graphical aspects such as layout, color coding, fonts, and character contrast are also discussed. In addition, several aspects of cockpit reading conditions such as glare, angular alignment, and paper quality are addressed. Finally, a list of recommendations for the graphical design of flight-deck documentation is provided.

  12. VACTIV: A graphical dialog based program for an automatic processing of line and band spectra

    NASA Astrophysics Data System (ADS)

    Zlokazov, V. B.

    2013-05-01

    The program VACTIV (Visual ACTIV) has been developed for the automatic analysis of spectrum-like distributions, in particular gamma-ray and alpha spectra, and is a standard graphical-dialog-based Windows XX application, driven by menu, mouse and keyboard. On the one hand, it was a conversion of the existing Fortran program ACTIV [1] to the DELPHI language; on the other hand, it is a transformation of the sequential syntax of Fortran programming to a new object-oriented style, based on the organization of event interactions. New features implemented in the algorithms of both versions are the following: as peak model, both an analytical function and a graphical curve can be used; the peak search algorithm recognizes not only Gaussian peaks but also peaks of irregular form, both narrow peaks (2-4 channels) and broad ones (50-100 channels); and the regularization technique in the fitting guarantees a stable solution in the most complicated cases of strongly overlapping or weak peaks. The graphical dialog interface of VACTIV is much more convenient than the batch mode of ACTIV. [1] V.B. Zlokazov, Computer Physics Communications, 28 (1982) 27-37.
    NEW VERSION PROGRAM SUMMARY
    Program Title: VACTIV
    Catalogue identifier: ABAC_v2_0
    Licensing provisions: no
    Programming language: DELPHI 5-7 Pascal
    Computer: IBM PC series
    Operating system: Windows XX
    RAM: 1 MB
    Keywords: Nuclear physics, spectrum decomposition, least squares analysis, graphical dialog, object-oriented programming
    Classification: 17.6
    Catalogue identifier of previous version: ABAC_v1_0
    Journal reference of previous version: Comput. Phys. Commun. 28 (1982) 27
    Does the new version supersede the previous version?: Yes
    Nature of problem: VACTIV is intended for the precise analysis of arbitrary spectrum-like distributions, e.g. gamma-ray and X-ray spectra, and allows the user to carry out the full cycle of automatic processing of such spectra, i.e. calibration, automatic peak search and estimation of the parameters of interest. VACTIV can run on any standard modern laptop.
    Reasons for the new version: At the time of its creation (1999), VACTIV was seemingly the first attempt to apply the newest programming languages and styles to spectrum-analysis systems. Its goal was both to obtain a convenient and efficient technique for data processing and to elaborate the formalism of spectrum analysis in terms of the classes, properties, methods and events of an object-oriented programming language.
    Summary of revisions: Compared with ACTIV, VACTIV preserves all the mathematical algorithms but provides the user with all the benefits of an interface based on a graphical dialog. It allows the user to intervene quickly in the operation of the program, in particular to control the fitting process on-line: depending on the intermediate results, and using the visual form of data representation, the conditions for the fitting can be changed to achieve optimum performance with the optimum strategy. To find the best conditions for the fitting, one can compress the spectrum, delete blunders from it, smooth it using a high-frequency spline filter and build the background using a low-frequency spline filter; automatic methods for blunder deletion, peak search, peak-model forming and calibration can be complemented by manual mouse clicks on the spectrum graph.
    Restrictions: To enhance the reliability and portability of the program, the majority of the most important arrays have a static allocation; all arrays are allocated with a surplus, and the total pool of the program is restricted only by the size of the computer's virtual memory. A spectrum has a static size of 32 K real words. The maximum size of the least-squares matrix is 314 (the maximum number of fitted parameters per analyzed spectrum interval, not for the whole spectrum), from which it follows that the maximum number of peaks in one spectrum interval is 154. The maximum total number of peaks in the spectrum is not restricted.
    Running time: The calculation time is negligibly small compared with the time for the dialog; using ini-files, the program can be partly used in a semi-dialog mode.
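    The core fitting step that both ACTIV and VACTIV automate can be illustrated generically: Gaussian peaks on a smooth background fitted by least squares. The sketch below uses SciPy's curve_fit on synthetic counting data; it does not reproduce VACTIV's regularization, graphical peak models, or dialog-driven workflow.

```python
import numpy as np
from scipy.optimize import curve_fit

# Generic illustration of the peak-fitting step: two Gaussian peaks on a
# linear background, fitted by least squares to synthetic Poisson counts.

def two_peaks(x, a1, c1, w1, a2, c2, w2, b0, b1):
    g = lambda a, c, w: a * np.exp(-0.5 * ((x - c) / w) ** 2)
    return g(a1, c1, w1) + g(a2, c2, w2) + b0 + b1 * x

channels = np.arange(200, dtype=float)
rng = np.random.default_rng(2)
truth = two_peaks(channels, 120, 60, 4, 80, 75, 5, 10, 0.02)
spectrum = rng.poisson(truth).astype(float)          # counting statistics

p0 = [100, 58, 3, 70, 77, 4, 5, 0.0]                 # rough initial guesses
popt, pcov = curve_fit(two_peaks, channels, spectrum, p0=p0)
areas = popt[0] * popt[2] * np.sqrt(2 * np.pi), popt[3] * popt[5] * np.sqrt(2 * np.pi)
print("fitted centres:", popt[1], popt[4])
print("peak areas:", areas)
```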

  13. Do graphic health warning labels have an impact on adolescents' smoking-related beliefs and behaviours?

    PubMed

    White, Victoria; Webster, Bernice; Wakefield, Melanie

    2008-09-01

    To assess the impact of the introduction of graphic health warning labels on cigarette packets on adolescents at different smoking uptake stages. School-based surveys conducted in the year prior to (2005) and approximately 6 months after (2006) the introduction of the graphic health warnings. The 2006 survey was conducted after a TV advertising campaign promoting two new health warnings. Secondary schools in greater metropolitan Melbourne, Australia. Students in year levels 8-12: 2432 students in 2005, and 2050 in 2006, participated. Smoking uptake stage, intention to smoke, reported exposure to cigarette packs, knowledge of health effects of smoking, cognitive processing of warning labels and perceptions of cigarette pack image. At baseline, 72% of students had seen cigarette packs in the previous 6 months, while at follow-up 77% had seen packs and 88% of these had seen the new warning labels. Cognitive processing of warning labels increased, with students more frequently reading, attending to, thinking and talking about warning labels at follow-up. Experimental and established smokers thought about quitting and forgoing cigarettes more at follow-up. At follow-up intention to smoke was lower among those students who had talked about the warning labels and had forgone cigarettes. Graphic warning labels on cigarette packs are noticed by the majority of adolescents, increase adolescents' cognitive processing of these messages and have the potential to lower smoking intentions. Our findings suggest that the introduction of graphic warning labels may help to reduce smoking among adolescents.

  14. Computer-aided light sheet flow visualization using photogrammetry

    NASA Technical Reports Server (NTRS)

    Stacy, Kathryn; Severance, Kurt; Childers, Brooks A.

    1994-01-01

    A computer-aided flow visualization process has been developed to analyze video images acquired from rotating and translating light sheet visualization systems. The computer process integrates a mathematical model for image reconstruction, advanced computer graphics concepts, and digital image processing to provide a quantitative and a visual analysis capability. The image reconstruction model, based on photogrammetry, uses knowledge of the camera and light sheet locations and orientations to project two-dimensional light sheet video images into three-dimensional space. A sophisticated computer visualization package, commonly used to analyze computational fluid dynamics (CFD) results, was chosen to interactively display the reconstructed light sheet images with the numerical surface geometry for the model or aircraft under study. The photogrammetric reconstruction technique and the image processing and computer graphics techniques and equipment are described. Results of the computer-aided process applied to both a wind tunnel translating light sheet experiment and an in-flight rotating light sheet experiment are presented. The capability to compare reconstructed experimental light sheet images with CFD solutions in the same graphics environment is also demonstrated.
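    The geometric heart of the reconstruction, projecting a two-dimensional image point onto the known light-sheet plane, can be sketched with a simple pinhole-camera model. The camera pose, focal length, and plane definition below are hypothetical placeholders rather than values from the experiments.

```python
import numpy as np

# Simplified pinhole-camera sketch of the reconstruction idea: cast a ray from
# the camera through an image point and intersect it with the known light-sheet
# plane to obtain a 3-D location.  All parameters are hypothetical.

cam_pos = np.array([0.0, -2.0, 0.5])          # camera position in the tunnel frame
cam_R = np.eye(3)                             # camera orientation (rotation matrix)
focal = 0.05                                  # focal length in metres

plane_point = np.array([0.0, 0.0, 0.0])       # a point on the light-sheet plane
plane_normal = np.array([0.0, 1.0, 0.0])      # light-sheet normal

def image_to_world(u, v):
    """Map image-plane coordinates (metres) to the 3-D point on the light sheet."""
    ray_cam = np.array([u, focal, v])         # ray direction in camera coordinates
    ray = cam_R @ ray_cam                     # rotate into the world frame
    t = np.dot(plane_normal, plane_point - cam_pos) / np.dot(plane_normal, ray)
    return cam_pos + t * ray

print(image_to_world(0.003, -0.001))
```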

  15. Computer-Aided Light Sheet Flow Visualization

    NASA Technical Reports Server (NTRS)

    Stacy, Kathryn; Severance, Kurt; Childers, Brooks A.

    1993-01-01

    A computer-aided flow visualization process has been developed to analyze video images acquired from rotating and translating light sheet visualization systems. The computer process integrates a mathematical model for image reconstruction, advanced computer graphics concepts, and digital image processing to provide a quantitative and visual analysis capability. The image reconstruction model, based on photogrammetry, uses knowledge of the camera and light sheet locations and orientations to project two-dimensional light sheet video images into three-dimensional space. A sophisticated computer visualization package, commonly used to analyze computational fluid dynamics (CFD) data sets, was chosen to interactively display the reconstructed light sheet images, along with the numerical surface geometry for the model or aircraft under study. A description is provided of the photogrammetric reconstruction technique, and the image processing and computer graphics techniques and equipment. Results of the computer aided process applied to both a wind tunnel translating light sheet experiment and an in-flight rotating light sheet experiment are presented. The capability to compare reconstructed experimental light sheet images and CFD solutions in the same graphics environment is also demonstrated.

  16. Computer-aided light sheet flow visualization

    NASA Technical Reports Server (NTRS)

    Stacy, Kathryn; Severance, Kurt; Childers, Brooks A.

    1993-01-01

    A computer-aided flow visualization process has been developed to analyze video images acquired from rotating and translating light sheet visualization systems. The computer process integrates a mathematical model for image reconstruction, advanced computer graphics concepts, and digital image processing to provide a quantitative and visual analysis capability. The image reconstruction model, based on photogrammetry, uses knowledge of the camera and light sheet locations and orientations to project two-dimensional light sheet video images into three-dimensional space. A sophisticated computer visualization package, commonly used to analyze computational fluid dynamics (CFD) data sets, was chosen to interactively display the reconstructed light sheet images, along with the numerical surface geometry for the model or aircraft under study. A description is provided of the photogrammetric reconstruction technique, and the image processing and computer graphics techniques and equipment. Results of the computer aided process applied to both a wind tunnel translating light sheet experiment and an in-flight rotating light sheet experiment are presented. The capability to compare reconstructed experimental light sheet images and CFD solutions in the same graphics environment is also demonstrated.

  17. Data Analysis with Graphical Models: Software Tools

    NASA Technical Reports Server (NTRS)

    Buntine, Wray L.

    1994-01-01

    Probabilistic graphical models (directed and undirected Markov fields, and combined in chain graphs) are used widely in expert systems, image processing and other areas as a framework for representing and reasoning with probabilities. They come with corresponding algorithms for performing probabilistic inference. This paper discusses an extension to these models by Spiegelhalter and Gilks, plates, used to graphically model the notion of a sample. This offers a graphical specification language for representing data analysis problems. When combined with general methods for statistical inference, this also offers a unifying framework for prototyping and/or generating data analysis algorithms from graphical specifications. This paper outlines the framework and then presents some basic tools for the task: a graphical version of the Pitman-Koopman Theorem for the exponential family, problem decomposition, and the calculation of exact Bayes factors. Other tools already developed, such as automatic differentiation, Gibbs sampling, and use of the EM algorithm, make this a broad basis for the generation of data analysis software.

  18. Graphic Design in Educational Television.

    ERIC Educational Resources Information Center

    Clarke, Beverley

    To help educational television (ETV) practitioners achieve maximum clarity, economy and purposiveness, the range of techniques of television graphics is explained. Closed-circuit and broadcast ETV are compared. The design process is discussed in terms of aspect ratio, line structure, cut off, screen size, tone scales, studio apparatus, and…

  19. Collaboration between Writers and Graphic Designers in Documentation Projects.

    ERIC Educational Resources Information Center

    Mirel, Barbara; And Others

    1995-01-01

    Analyzes collaborations between software manual writers and graphic designers to discover how their processes of collaboration directly affect the form of a finished manual. Identifies three models of collaboration: assembly line (linear drafting), swap meet (iterative drafting and joint problem solving), and symphony (codevelopment in every…

  20. The NOD3 software package: A graphical user interface-supported reduction package for single-dish radio continuum and polarisation observations

    NASA Astrophysics Data System (ADS)

    Müller, Peter; Krause, Marita; Beck, Rainer; Schmidt, Philip

    2017-10-01

    Context. The venerable NOD2 data reduction software package for single-dish radio continuum observations, which was developed for use at the 100-m Effelsberg radio telescope, has been successfully applied over many decades. Modern computing facilities, however, call for a new design. Aims: We aim to develop an interactive software tool with a graphical user interface for the reduction of single-dish radio continuum maps. We make a special effort to reduce the distortions along the scanning direction (scanning effects) by combining maps scanned in orthogonal directions or dual- or multiple-horn observations that need to be processed in a restoration procedure. The package should also process polarisation data and offer the possibility to include special tasks written by the individual user. Methods: Based on the ideas of the NOD2 package we developed NOD3, which includes all necessary tasks from the raw maps to the final maps in total intensity and linear polarisation. Furthermore, plot routines and several methods for map analysis are available. The NOD3 package is written in Python, which allows the extension of the package via additional tasks. The required data format for the input maps is FITS. Results: The NOD3 package is a sophisticated tool to process and analyse maps from single-dish observations that are affected by scanning effects from clouds, receiver instabilities, or radio-frequency interference. The "basket-weaving" tool combines orthogonally scanned maps into a final map that is almost free of scanning effects. The new restoration tool for dual-beam observations reduces the noise by a factor of about two compared to the NOD2 version. Combining single-dish with interferometer data in the map plane ensures the full recovery of the total flux density. Conclusions: This software package is available under the open source license GPL for free use at other single-dish radio telescopes of the astronomical community. The NOD3 package is designed to be extendable to multi-channel data represented by data cubes in Stokes I, Q, and U.

  1. SraTailor: graphical user interface software for processing and visualizing ChIP-seq data.

    PubMed

    Oki, Shinya; Maehara, Kazumitsu; Ohkawa, Yasuyuki; Meno, Chikara

    2014-12-01

    Raw data from ChIP-seq (chromatin immunoprecipitation combined with massively parallel DNA sequencing) experiments are deposited in public databases as SRAs (Sequence Read Archives) that are publicly available to all researchers. However, to graphically visualize ChIP-seq data of interest, the corresponding SRAs must be downloaded and converted into BigWig format, a process that involves complicated command-line processing. This task requires users to possess skill with script languages and sequence data processing, a requirement that prevents a wide range of biologists from exploiting SRAs. To address these challenges, we developed SraTailor, a GUI (Graphical User Interface) software package that automatically converts an SRA into a BigWig-formatted file. Simplicity of use is one of the most notable features of SraTailor: entering an accession number of an SRA and clicking the mouse are the only steps required to obtain BigWig-formatted files and to graphically visualize the extents of reads at given loci. SraTailor is also able to make peak calls, generate files of other formats, process users' own data, and accept various command-line-like options. Therefore, this software makes ChIP-seq data fully exploitable by a wide range of biologists. SraTailor is freely available at http://www.devbio.med.kyushu-u.ac.jp/sra_tailor/, and runs on both Mac and Windows machines. © 2014 The Authors Genes to Cells © 2014 by the Molecular Biology Society of Japan and Wiley Publishing Asia Pty Ltd.

  2. Acceleration of GPU-based Krylov solvers via data transfer reduction

    DOE PAGES

    Anzt, Hartwig; Tomov, Stanimire; Luszczek, Piotr; ...

    2015-04-08

    Krylov subspace iterative solvers are often the method of choice when solving large sparse linear systems. At the same time, hardware accelerators such as graphics processing units continue to offer significant floating point performance gains for matrix and vector computations through easy-to-use libraries of computational kernels. However, as these libraries are usually composed of a well optimized but limited set of linear algebra operations, applications that use them often fail to reduce certain data communications, and hence fail to leverage the full potential of the accelerator. In this study, we target the acceleration of Krylov subspace iterative methods for graphics processing units, and in particular the Biconjugate Gradient Stabilized solver. We show that significant improvement can be achieved by reformulating the method to reduce data communications through application-specific kernels instead of using the generic BLAS kernels, e.g. as provided by NVIDIA's cuBLAS library, and by designing a sparse matrix-vector product kernel specific to graphics processing units that is able to use the accelerator's computing power more efficiently. Furthermore, we derive a model estimating the performance improvement, and use experimental data to validate the expected runtime savings. Finally, considering that the derived implementation achieves significantly higher performance, we assert that similar optimizations addressing algorithm structure, as well as sparse matrix-vector products, are crucial for the subsequent development of high-performance Krylov subspace iterative methods accelerated by graphics processing units.
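    To make the discussion concrete, the following plain NumPy sketch spells out the unpreconditioned BiCGSTAB iteration, exposing the two matrix-vector products and several dot products per iteration whose data movement the paper reduces with fused, application-specific GPU kernels. The sketch performs no such fusion and uses a small dense test matrix.

```python
import numpy as np

# Plain NumPy sketch of unpreconditioned BiCGSTAB, showing the matrix-vector
# products and dot products that the paper fuses into GPU-specific kernels.

def bicgstab(A, b, tol=1e-10, maxiter=200):
    x = np.zeros_like(b)
    r = b - A @ x
    r_hat = r.copy()
    rho = alpha = omega = 1.0
    v = np.zeros_like(b)
    p = np.zeros_like(b)
    for _ in range(maxiter):
        rho_new = r_hat @ r
        beta = (rho_new / rho) * (alpha / omega)
        rho = rho_new
        p = r + beta * (p - omega * v)
        v = A @ p                        # first mat-vec per iteration
        alpha = rho / (r_hat @ v)
        s = r - alpha * v
        t = A @ s                        # second mat-vec per iteration
        omega = (t @ s) / (t @ t)
        x = x + alpha * p + omega * s
        r = s - omega * t
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
    return x

n = 200                                  # small dense tridiagonal test system
A = np.diag(4.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)
b = np.ones(n)
x = bicgstab(A, b)
print(np.linalg.norm(A @ x - b))
```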

  3. Mobile high-performance computing (HPC) for synthetic aperture radar signal processing

    NASA Astrophysics Data System (ADS)

    Misko, Joshua; Kim, Youngsoo; Qi, Chenchen; Sirkeci, Birsen

    2018-04-01

    The importance of mobile high-performance computing has emerged in numerous battlespace applications at the tactical edge in hostile environments. Energy-efficient computing power is a key enabler for diverse areas ranging from real-time big data analytics and atmospheric science to network science. However, the design of tactical mobile data centers is dominated by power, thermal, and physical constraints. At present, the required processing power is unlikely to be achieved simply by aggregating emerging heterogeneous many-core processing platforms consisting of CPU, Field Programmable Gate Array, and graphics processor cores under power and performance constraints. To address these challenges, we performed a Synthetic Aperture Radar case study for Automatic Target Recognition (ATR) using Deep Neural Networks (DNNs). However, these DNN models are typically trained using GPUs with gigabytes of external memory and make heavy use of 32-bit floating-point operations. As a result, DNNs do not run efficiently on hardware appropriate for low-power or mobile applications. To address this limitation, we proposed a framework for compressing DNN models for ATR suited to deployment on resource-constrained hardware. The proposed compression framework utilizes promising DNN compression techniques, including pruning and weight quantization, while also focusing on processor features common to modern low-power devices. Following this methodology as a guideline produced a DNN for ATR tuned to maximize classification throughput, minimize power consumption, and minimize memory footprint on a low-power device.
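    The two compression steps named above, magnitude pruning and weight quantization, can be illustrated on a single weight matrix. The sparsity target and 8-bit scheme below are generic choices for illustration, not the settings used in the paper's ATR networks.

```python
import numpy as np

# Toy sketch of two common DNN compression steps applied to one weight matrix:
# magnitude pruning to a target sparsity, then uniform symmetric 8-bit
# quantization.  Thresholds and bit widths are illustrative only.

rng = np.random.default_rng(3)
weights = rng.normal(scale=0.1, size=(256, 128)).astype(np.float32)

# 1. Magnitude pruning: zero out the 80% of weights with the smallest magnitude.
sparsity = 0.8
threshold = np.quantile(np.abs(weights), sparsity)
pruned = np.where(np.abs(weights) >= threshold, weights, 0.0)

# 2. Uniform symmetric quantization of the surviving weights to int8.
scale = np.abs(pruned).max() / 127.0
q = np.clip(np.round(pruned / scale), -127, 127).astype(np.int8)
dequant = q.astype(np.float32) * scale        # what inference would compute with

print("kept weights:", np.count_nonzero(pruned), "of", weights.size)
print("max quantization error:", float(np.abs(dequant - pruned).max()))
```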

  4. An Overview of Psycholinguistic Reading Theory.

    ERIC Educational Resources Information Center

    Hayes, Christopher G.

    In the most adequate psycholinguistic model of the reading process the proficient silent reader decodes directly from graphic surface structure into deep structure, with no decoding into oral surface structure. Three cue systems used by all proficient readers include graphic cues (letters and words), syntactic cues (the grammatical arrangement of…

  5. Interactive Learning for Graphic Design Foundations

    ERIC Educational Resources Information Center

    Chu, Sauman; Ramirez, German Mauricio Mejia

    2012-01-01

    One of the biggest problems for students majoring in pre-graphic design is students' inability to apply their knowledge to different design solutions. The purpose of this study is to examine the effectiveness of interactive learning modules in facilitating knowledge acquisition during the learning process and to create interactive learning modules…

  6. Computer Instructional Aids for Undergraduate Control Education.

    ERIC Educational Resources Information Center

    Volz, Richard A.; And Others

    Engineering is coming to rely more and more heavily upon the computer for computations, analyses, and graphic displays which aid the design process. A general purpose simulation system, the Time-shared Automatic Control Laboratory (TACL), and a set of computer-aided design programs, Control Oriented Interactive Graphic Analysis and Design…

  7. Arrows: A Special Case of Graphic Communication.

    ERIC Educational Resources Information Center

    Hardin, Pris

    The purpose of this paper is to examine arrow design in relation to the type of pointing, connecting, or processing involved. Three possible approaches to the investigation of arrows as graphic communication include research: by arrow function, relating message structure to arrow design, and linking user expectations to arrow design. The following…

  8. The View from Here: Emergence of Graphical Literacy

    ERIC Educational Resources Information Center

    Roberts, Kathryn L.; Brugar, Kristy A.

    2017-01-01

    The purpose of this study is to describe upper elementary students' understandings of four graphical devices that frequently occur in social studies texts: captioned images, maps, tables, and timelines. Using verbal protocol data collection procedures, we collected information on students' metacognitive processes when they were explicitly asked to…

  9. Teaching Heat Exchanger Network Synthesis Using Interactive Microcomputer Graphics.

    ERIC Educational Resources Information Center

    Dixon, Anthony G.

    1987-01-01

    Describes the Heat Exchanger Network Synthesis (HENS) program used at Worcester Polytechnic Institute (Massachusetts) as an aid to teaching the energy integration step in process design. Focuses on the benefits of the computer graphics used in the program to increase the speed of generating and changing networks. (TW)

  10. Animation as a Distractor to Learning.

    ERIC Educational Resources Information Center

    Rieber, Lloyd P.

    1996-01-01

    A study of 364 fifth graders investigated distractibility of animated graphics in a computer-based tutorial about Newton's Laws of Motion. Found no difference in post-test performance for those with high, medium, or no distraction graphics. Students in the two distraction conditions took less time to process instructional frames than students in…

  11. Visual Invention and the Composition of Scientific Research Graphics: A Topological Approach

    ERIC Educational Resources Information Center

    Walsh, Lynda

    2018-01-01

    This report details the second phase of an ongoing research project investigating the visual invention and composition processes of scientific researchers. In this phase, four academic researchers completed think-aloud protocols as they composed graphics for research presentations; they also answered follow-up questions about their visual…

  12. Science Learning with Information Technologies as a Tool for "Scientific Thinking" in Engineering Education

    ERIC Educational Resources Information Center

    Smirnov, Eugeny; Bogun, Vitali

    2011-01-01

    New methodologies in science (or mathematics) learning process and scientific thinking in the classroom activity of engineer students with ICT (information and communication technology), including graphic calculator are presented: visual modelling with ICT, action research with graphic calculator, insight in classroom and communications and…

  13. Small Interactive Image Processing System (SMIPS) system description

    NASA Technical Reports Server (NTRS)

    Moik, J. G.

    1973-01-01

    The Small Interactive Image Processing System (SMIPS) operates under control of the IBM-OS/MVT operating system and uses an IBM-2250 model 1 display unit as an interactive graphic device. The input language, in the form of character strings or attentions from keys and the light pen, is interpreted and causes processing of built-in image processing functions as well as execution of a variable number of application programs kept on a private disk file. A description of design considerations is given, and the characteristics, structure, and logic flow of SMIPS are summarized. Data management and graphic programming techniques used for the interactive manipulation and display of digital pictures are also discussed.

  14. Accelerating image recognition on mobile devices using GPGPU

    NASA Astrophysics Data System (ADS)

    Bordallo López, Miguel; Nykänen, Henri; Hannuksela, Jari; Silvén, Olli; Vehviläinen, Markku

    2011-01-01

    The future multi-modal user interfaces of battery-powered mobile devices are expected to require computationally costly image analysis techniques. Graphics Processing Units are very well suited to parallel processing, and the addition of programmable stages and high-precision arithmetic provides opportunities to implement complete, energy-efficient algorithms. The first mobile graphics accelerators with programmable pipelines are now available, enabling the GPGPU implementation of several image processing algorithms. In this context, we consider a face tracking approach that uses efficient gray-scale invariant texture features and boosting. The solution is based on Local Binary Pattern (LBP) features and makes use of the GPU in the pre-processing and feature extraction phases. We have implemented a series of image processing techniques in the shader language of OpenGL ES 2.0, compiled them for a mobile graphics processing unit, and performed tests on a mobile application processor platform (OMAP3530). In our contribution, we describe the challenges of designing on a mobile platform, present the performance achieved, and provide measurement results for the actual power consumption in comparison to using the CPU (ARM) on the same platform.
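    As a reference for what the shaders compute, the following NumPy sketch evaluates the basic 8-neighbour Local Binary Pattern operator on a grayscale frame and builds the histogram used as a texture descriptor. It is a CPU stand-in for the OpenGL ES 2.0 implementation described in the paper, with an arbitrary random frame as input.

```python
import numpy as np

# NumPy sketch of the basic 8-neighbour Local Binary Pattern (LBP) operator
# and its histogram; a CPU stand-in for a fragment-shader implementation.

def lbp8(img):
    """Return the 3x3 LBP code (0..255) for every interior pixel."""
    c = img[1:-1, 1:-1].astype(np.int32)
    neighbours = [img[0:-2, 0:-2], img[0:-2, 1:-1], img[0:-2, 2:],
                  img[1:-1, 2:],   img[2:,   2:],   img[2:,   1:-1],
                  img[2:,   0:-2], img[1:-1, 0:-2]]
    code = np.zeros_like(c)
    for bit, n in enumerate(neighbours):
        code += (n.astype(np.int32) >= c).astype(np.int32) << bit
    return code.astype(np.uint8)

rng = np.random.default_rng(4)
frame = rng.integers(0, 256, size=(240, 320), dtype=np.uint8)
codes = lbp8(frame)
hist, _ = np.histogram(codes, bins=256, range=(0, 256))  # texture descriptor
print(codes.shape, int(hist.sum()))
```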

  15. 75 FR 24505 - Modernization of OSHA's Injury and Illness Data Collection Process

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-05

    ... data collected by an improved and modernized OSHA recordkeeping system and made public under the Open.... OSHA-2010-0024] Modernization of OSHA's Injury and Illness Data Collection Process AGENCY: Occupational... modernization of OSHA's injury and illness data collection system. OSHA encourages stakeholders who cannot...

  16. Massively parallel GPU-accelerated minimization of classical density functional theory

    NASA Astrophysics Data System (ADS)

    Stopper, Daniel; Roth, Roland

    2017-08-01

    In this paper, we discuss the ability to numerically minimize the grand potential of hard disks in two-dimensional and of hard spheres in three-dimensional space within the framework of classical density functional and fundamental measure theory on modern graphics cards. Our main finding is that a massively parallel minimization leads to an enormous performance gain in comparison to standard sequential minimization schemes. Furthermore, the results indicate that in complex multi-dimensional situations, a heavy parallel minimization of the grand potential seems to be mandatory in order to reach a reasonable balance between accuracy and computational cost.

  17. Modeling biochemical transformation processes and information processing with Narrator.

    PubMed

    Mandel, Johannes J; Fuss, Hendrik; Palfreyman, Niall M; Dubitzky, Werner

    2007-03-27

    Software tools that model and simulate the dynamics of biological processes and systems are becoming increasingly important. Some of these tools offer sophisticated graphical user interfaces (GUIs), which greatly enhance their acceptance by users. Such GUIs are based on symbolic or graphical notations used to describe, interact and communicate the developed models. Typically, these graphical notations are geared towards conventional biochemical pathway diagrams. They permit the user to represent the transport and transformation of chemical species and to define inhibitory and stimulatory dependencies. A critical weakness of existing tools is their lack of supporting an integrative representation of transport, transformation as well as biological information processing. Narrator is a software tool facilitating the development and simulation of biological systems as Co-dependence models. The Co-dependence Methodology complements the representation of species transport and transformation together with an explicit mechanism to express biological information processing. Thus, Co-dependence models explicitly capture, for instance, signal processing structures and the influence of exogenous factors or events affecting certain parts of a biological system or process. This combined set of features provides the system biologist with a powerful tool to describe and explore the dynamics of life phenomena. Narrator's GUI is based on an expressive graphical notation which forms an integral part of the Co-dependence Methodology. Behind the user-friendly GUI, Narrator hides a flexible feature which makes it relatively easy to map models defined via the graphical notation to mathematical formalisms and languages such as ordinary differential equations, the Systems Biology Markup Language or Gillespie's direct method. This powerful feature facilitates reuse, interoperability and conceptual model development. Narrator is a flexible and intuitive systems biology tool. It is specifically intended for users aiming to construct and simulate dynamic models of biology without recourse to extensive mathematical detail. Its design facilitates mappings to different formal languages and frameworks. The combined set of features makes Narrator unique among tools of its kind. Narrator is implemented as Java software program and available as open-source from http://www.narrator-tool.org.

  18. Modeling biochemical transformation processes and information processing with Narrator

    PubMed Central

    Mandel, Johannes J; Fuß, Hendrik; Palfreyman, Niall M; Dubitzky, Werner

    2007-01-01

    Background Software tools that model and simulate the dynamics of biological processes and systems are becoming increasingly important. Some of these tools offer sophisticated graphical user interfaces (GUIs), which greatly enhance their acceptance by users. Such GUIs are based on symbolic or graphical notations used to describe, interact and communicate the developed models. Typically, these graphical notations are geared towards conventional biochemical pathway diagrams. They permit the user to represent the transport and transformation of chemical species and to define inhibitory and stimulatory dependencies. A critical weakness of existing tools is their lack of supporting an integrative representation of transport, transformation as well as biological information processing. Results Narrator is a software tool facilitating the development and simulation of biological systems as Co-dependence models. The Co-dependence Methodology complements the representation of species transport and transformation together with an explicit mechanism to express biological information processing. Thus, Co-dependence models explicitly capture, for instance, signal processing structures and the influence of exogenous factors or events affecting certain parts of a biological system or process. This combined set of features provides the system biologist with a powerful tool to describe and explore the dynamics of life phenomena. Narrator's GUI is based on an expressive graphical notation which forms an integral part of the Co-dependence Methodology. Behind the user-friendly GUI, Narrator hides a flexible feature which makes it relatively easy to map models defined via the graphical notation to mathematical formalisms and languages such as ordinary differential equations, the Systems Biology Markup Language or Gillespie's direct method. This powerful feature facilitates reuse, interoperability and conceptual model development. Conclusion Narrator is a flexible and intuitive systems biology tool. It is specifically intended for users aiming to construct and simulate dynamic models of biology without recourse to extensive mathematical detail. Its design facilitates mappings to different formal languages and frameworks. The combined set of features makes Narrator unique among tools of its kind. Narrator is implemented as Java software program and available as open-source from . PMID:17389034

  19. What Can Causal Networks Tell Us about Metabolic Pathways?

    PubMed Central

    Blair, Rachael Hageman; Kliebenstein, Daniel J.; Churchill, Gary A.

    2012-01-01

    Graphical models describe the linear correlation structure of data and have been used to establish causal relationships among phenotypes in genetic mapping populations. Data are typically collected at a single point in time. Biological processes on the other hand are often non-linear and display time varying dynamics. The extent to which graphical models can recapitulate the architecture of an underlying biological processes is not well understood. We consider metabolic networks with known stoichiometry to address the fundamental question: “What can causal networks tell us about metabolic pathways?”. Using data from an Arabidopsis BaySha population and simulated data from dynamic models of pathway motifs, we assess our ability to reconstruct metabolic pathways using graphical models. Our results highlight the necessity of non-genetic residual biological variation for reliable inference. Recovery of the ordering within a pathway is possible, but should not be expected. Causal inference is sensitive to subtle patterns in the correlation structure that may be driven by a variety of factors, which may not emphasize the substrate-product relationship. We illustrate the effects of metabolic pathway architecture, epistasis and stochastic variation on correlation structure and graphical model-derived networks. We conclude that graphical models should be interpreted cautiously, especially if the implied causal relationships are to be used in the design of intervention strategies. PMID:22496633

  20. A graphics subsystem retrofit design for the bladed-disk data acquisition system. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Carney, R. R.

    1983-01-01

    A graphics subsystem retrofit design for the turbojet blade vibration data acquisition system is presented. The graphics subsystem will operate in two modes, permitting the system operator to view blade vibrations on an oscilloscope type of display. The first mode is a real-time mode that displays only gross blade characteristics, such as maximum deflections and standing waves. This mode is used to aid the operator in determining when to collect detailed blade vibration data. The second mode of operation is a post-processing mode that will animate the actual blade vibrations using the detailed data collected on an earlier data collection run. The operator can vary the rate of playback to view differing characteristics of blade vibrations. The heart of the graphics subsystem is a modified version of AMD's "super sixteen" computer, called the graphics preprocessor computer (GPC). This computer is based on AMD's 2900 series of bit-slice components.

  1. Pairwise Sequence Alignment Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeff Daily, PNNL

    2015-05-20

    Vector extensions, such as SSE, have been part of the x86 CPU since the 1990s, with applications in graphics, signal processing, and scientific computing. Although many algorithms and applications can naturally benefit from automatic vectorization techniques, there are still many that are difficult to vectorize due to their dependence on irregular data structures, dense branch operations, or data dependencies. Sequence alignment, one of the most widely used operations in bioinformatics workflows, has a computational footprint that features complex data dependencies. The trend of widening vector registers adversely affects the state-of-the-art sequence alignment algorithm based on striped data layouts. Therefore, a novel SIMD implementation of a parallel scan-based sequence alignment algorithm that can better exploit wider SIMD units was implemented as part of the Parallel Sequence Alignment Library (parasail). Parasail features: reference implementations of all known vectorized sequence alignment approaches; implementations of the Smith-Waterman (SW), semi-global (SG), and Needleman-Wunsch (NW) sequence alignment algorithms; implementations across all modern CPU instruction sets, including AVX2 and KNC; and language interfaces for C/C++ and Python.
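    The dynamic-programming recurrence that parasail vectorizes can be written down in scalar form in a few lines. The sketch below scores a Needleman-Wunsch global alignment with arbitrary match/mismatch/gap values; parasail's SIMD implementations compute the same recurrence, but striped or scan-parallel across vector lanes.

```python
import numpy as np

# Scalar reference implementation of Needleman-Wunsch global alignment scoring
# (match/mismatch/gap values are arbitrary illustrative choices).

def needleman_wunsch(a, b, match=2, mismatch=-1, gap=-2):
    n, m = len(a), len(b)
    H = np.zeros((n + 1, m + 1), dtype=int)
    H[:, 0] = gap * np.arange(n + 1)
    H[0, :] = gap * np.arange(m + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i, j] = max(H[i - 1, j - 1] + s,    # substitution
                          H[i - 1, j] + gap,      # gap in b
                          H[i, j - 1] + gap)      # gap in a
    return H[n, m]

print(needleman_wunsch("GATTACA", "GCATGCU"))
```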

  2. Multimodal system for the planning and guidance of bronchoscopy

    NASA Astrophysics Data System (ADS)

    Higgins, William E.; Cheirsilp, Ronnarit; Zang, Xiaonan; Byrnes, Patrick

    2015-03-01

    Many technical innovations in multimodal radiologic imaging and bronchoscopy have emerged recently in the effort against lung cancer. Modern X-ray computed-tomography (CT) scanners provide three-dimensional (3D) high-resolution chest images, positron emission tomography (PET) scanners give complementary molecular imaging data, and new integrated PET/CT scanners combine the strengths of both modalities. State-of-the-art bronchoscopes permit minimally invasive tissue sampling, with vivid endobronchial video enabling navigation deep into the airway-tree periphery, while complementary endobronchial ultrasound (EBUS) reveals local views of anatomical structures outside the airways. In addition, image-guided intervention (IGI) systems have proven their utility for CT-based planning and guidance of bronchoscopy. Unfortunately, no IGI system exists that integrates all sources effectively through the complete lung-cancer staging work flow. This paper presents a prototype of a computer-based multimodal IGI system that strives to fill this need. The system combines a wide range of automatic and semi-automatic image-processing tools for multimodal data fusion and procedure planning. It also provides a flexible graphical user interface for follow-on guidance of bronchoscopy/EBUS. Human-study results demonstrate the system's potential.

  3. The Genome Portal of the Department of Energy Joint Genome Institute

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nordberg, Henrik; Cantor, Michael; Dushekyo, Serge

    2014-03-14

    The JGI Genome Portal (http://genome.jgi.doe.gov) provides unified access to all JGI genomic databases and analytical tools. A user can search, download and explore multiple data sets available for all DOE JGI sequencing projects, including their status, assemblies, and annotations of sequenced genomes. The Genome Portal has been significantly updated over the past two years, with a specific emphasis on efficient handling of the rapidly growing amount of diverse genomic data accumulated in JGI. A critical aspect of handling big data in genomics is the development of visualization and analysis tools that allow scientists to derive meaning from what are otherwise terabases of inert sequence. An interactive visualization tool developed in the group allows us to explore contigs resulting from a single metagenome assembly. Implemented with modern web technologies that take advantage of the power of the computer's graphics processing unit (GPU), the tool allows the user to easily navigate over 100,000 data points in multiple dimensions, among many biologically meaningful parameters of a dataset such as relative abundance, contig length, and G+C content.

  4. E-Control: First Public Release of Remote Control Software for VLBI Telescopes

    NASA Technical Reports Server (NTRS)

    Neidhardt, Alexander; Ettl, Martin; Rottmann, Helge; Ploetz, Christian; Muehlbauer, Matthias; Hase, Hayo; Alef, Walter; Sobarzo, Sergio; Herrera, Cristian; Himwich, Ed

    2010-01-01

    Automating and remotely controlling observations are important for future operations in a Global Geodetic Observing System (GGOS). At the Geodetic Observatory Wettzell, in cooperation with the Max-Planck-Institute for Radio Astronomy in Bonn, a software extension to the existing NASA Field System has been developed for remote control. It uses the principle of a remotely accessible, autonomous process cell as a server extension for the Field System. The communication is realized for low transfer rates using Remote Procedure Calls (RPC). It uses generative programming with the interface software generator idl2rpc.pl developed at Wettzell. The user interacts with this system over a modern graphical user interface created with wxWidgets. For security reasons the communication is automatically tunneled through a Secure Shell (SSH) session to the telescope. There are already successful test observations with the telescopes at O Higgins, Concepcion, and Wettzell. At Wettzell the software is already used routinely for weekend observations. Therefore the first public release of the software is now available, which will also be useful for other telescopes.

  5. Solving systems of linear equations by GPU-based matrix factorization in a Science Ground Segment

    NASA Astrophysics Data System (ADS)

    Legendre, Maxime; Schmidt, Albrecht; Moussaoui, Saïd; Lammers, Uwe

    2013-11-01

    Recently, Graphics Cards have been used to offload scientific computations from traditional CPUs for greater efficiency. This paper investigates the adaptation of a real-world linear system solver, which plays a central role in the data processing of the Science Ground Segment of ESA's astrometric Gaia mission. The paper quantifies the resource trade-offs between traditional CPU implementations and modern CUDA based GPU implementations. It also analyses the impact on the pipeline architecture and system development. The investigation starts from both a selected baseline algorithm with a reference implementation and a traditional linear system solver and then explores various modifications to control flow and data layout to achieve higher resource efficiency. It turns out that with the current state of the art, the modifications impact non-technical system attributes. For example, the control flow of the original modified Cholesky transform is modified so that locality of the code and verifiability deteriorate. The maintainability of the system is affected as well. On the system level, users will have to deal with more complex configuration control and testing procedures.
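
    As a point of reference for the baseline operation discussed above, the following NumPy sketch solves a symmetric positive-definite system via Cholesky factorization on the CPU; the Gaia pipeline's modified Cholesky transform and its CUDA port are not reproduced here, and all sizes are hypothetical.

        # CPU sketch of the baseline operation: solve A x = b for a symmetric
        # positive-definite A via Cholesky factorization (the GPU port discussed
        # above restructures control flow and data layout, not shown here).
        import numpy as np

        rng = np.random.default_rng(0)
        m = rng.standard_normal((500, 500))
        A = m @ m.T + 500 * np.eye(500)      # make A symmetric positive definite
        b = rng.standard_normal(500)

        L = np.linalg.cholesky(A)            # A = L L^T
        y = np.linalg.solve(L, b)            # forward substitution (dense solve here)
        x = np.linalg.solve(L.T, y)          # back substitution

        print("residual:", np.linalg.norm(A @ x - b))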

  6. Computer programming for generating visual stimuli.

    PubMed

    Bukhari, Farhan; Kurylo, Daniel D

    2008-02-01

    Critical to vision research is the generation of visual displays with precise control over stimulus metrics. Generating stimuli often requires adapting commercial software or developing specialized software for specific research applications. In order to facilitate this process, we give here an overview that allows nonexpert users to generate and customize stimuli for vision research. We first give a review of relevant hardware and software considerations, to allow the selection of display hardware, operating system, programming language, and graphics packages most appropriate for specific research applications. We then describe the framework of a generic computer program that can be adapted for use with a broad range of experimental applications. Stimuli are generated in the context of trial events, allowing the display of text messages, the monitoring of subject responses and reaction times, and the inclusion of contingency algorithms. This approach allows direct control and management of computer-generated visual stimuli while utilizing the full capabilities of modern hardware and software systems. The flowchart and source code for the stimulus-generating program may be downloaded from www.psychonomic.org/archive.
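
    The trial structure described above (present a stimulus, collect a response, record the reaction time) can be sketched in a few lines of Python; the terminal-only example below is illustrative and does not use the graphics packages or the downloadable program referred to in the abstract.

        # Minimal, terminal-only sketch of the trial structure described above:
        # present a stimulus, wait for a response, record the reaction time.
        # A real implementation would draw to the display with a graphics package.
        import time

        trials = ["X left", "X right", "X left"]      # hypothetical stimulus list
        results = []

        for stim in trials:
            print(f"Stimulus: {stim}")
            t0 = time.perf_counter()
            response = input("Type l/r and press Enter: ")
            rt = time.perf_counter() - t0
            results.append((stim, response, rt))

        for stim, response, rt in results:
            print(f"{stim}: response={response!r}, RT={rt*1000:.0f} ms")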

  7. A Fluid Mechanics Hypercourse

    NASA Astrophysics Data System (ADS)

    Fay, James A.; Sonwalkar, Nishikant

    1996-05-01

    This CD-ROM is designed to accompany James Fay's Introduction to Fluid Mechanics. An enhanced hypermedia version of the textbook, it offers a number of ways to explore the fluid mechanics domain. These include a complete hypertext version of the original book, physical-experiment video clips, excerpts from external references, audio annotations, colored graphics, review questions, and progressive hints for solving problems. Throughout, the authors provide expert guidance in navigating the typed links so that students do not get lost in the learning process. System requirements: Macintosh with 68030 or greater processor and with at least 16 Mb of RAM. Operating System 6.0.4 or later for 680x0 processor and System 7.1.2 or later for Power-PC. CD-ROM drive with 256- color capability. Preferred display 14 inches or above (SuperVGA with 1 megabyte of VRAM). Additional system font software: Computer Modern postscript fonts (CM/PS Screen Fonts, CMBSY10, and CMTT10) and Adobe Type Manager (ATM 3.0 or later). James A. Fay is Professor Emeritus and Senior Lecturer in the Department of Mechanical Engineering at MIT.

  8. A GPU-Based Architecture for Real-Time Data Assessment at Synchrotron Experiments

    NASA Astrophysics Data System (ADS)

    Chilingaryan, Suren; Mirone, Alessandro; Hammersley, Andrew; Ferrero, Claudio; Helfen, Lukas; Kopmann, Andreas; Rolo, Tomy dos Santos; Vagovic, Patrik

    2011-08-01

    Advances in digital detector technology are presently leading to rapidly increasing data rates in imaging experiments. Using fast two-dimensional detectors in computed tomography, the data acquisition can be much faster than the reconstruction if no adequate measures are taken, especially when a high photon flux at synchrotron sources is used. We have optimized the reconstruction software employed at the micro-tomography beamlines of our synchrotron facilities to use the computational power of modern graphics cards. The main paradigm of our approach is the full utilization of all system resources. We use a pipelined architecture, where the GPUs are used as compute coprocessors to reconstruct slices, while the CPUs are preparing the next ones. Special attention is devoted to minimizing data transfers between the host and GPU memory and to executing memory transfers in parallel with the computations. We were able to reduce the reconstruction time by a factor of 30 and process a typical data set of 20 GB in 40 seconds. The time needed for the first evaluation of the reconstructed sample is reduced significantly and quasi-real-time visualization is now possible.
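
    The pipelining idea, CPUs preparing the next slices while the compute stage reconstructs the current ones, can be illustrated with a small producer/consumer sketch in Python; the thread-and-queue example below is only a stand-in for the GPU kernels and overlapped host/device transfers used in the actual system.

        # Sketch of the pipelining idea only: one thread "prepares" slices (stand-in
        # for CPU preprocessing) while another "reconstructs" them (stand-in for the
        # GPU stage).  Real overlapped host/GPU transfers are not modeled here.
        import queue
        import threading
        import time

        work = queue.Queue(maxsize=2)     # small queue keeps both stages busy

        def prepare(n_slices):
            for i in range(n_slices):
                time.sleep(0.01)          # pretend to read/filter projection data
                work.put(i)
            work.put(None)                # sentinel: no more slices

        def reconstruct():
            while True:
                item = work.get()
                if item is None:
                    break
                time.sleep(0.02)          # pretend to back-project one slice
                print(f"reconstructed slice {item}")

        t1 = threading.Thread(target=prepare, args=(5,))
        t2 = threading.Thread(target=reconstruct)
        t1.start(); t2.start()
        t1.join(); t2.join()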

  9. Bio-inspired color sketch for eco-friendly printing

    NASA Astrophysics Data System (ADS)

    Safonov, Ilia V.; Tolstaya, Ekaterina V.; Rychagov, Michael N.; Lee, Hokeun; Kim, Sang Ho; Choi, Donchul

    2012-01-01

    Reducing toner/ink consumption is an important task for modern printing devices, with a positive ecological and social impact. We propose a technique for converting print-job pictures into recognizable and pleasant color sketches. Drawing a "pencil sketch" from a photo belongs to a special area of image processing and computer graphics: non-photorealistic rendering. We describe a new approach for automatic sketch generation that creates well-recognizable sketches while partly preserving the colors of the initial picture. Our sketches contain significantly fewer color dots than the initial images, which helps to save toner/ink. The bio-inspired approach is based on an edge-detection technique for creating a mask and on multiplying a contrast-increased version of the source image by this mask. To construct the mask we use DoG edge detection, obtained by blending the initial image with its blurred copy through an alpha channel created from a saliency map according to a pre-attentive human vision model. Measurements of the percentage of toner saved and a user study demonstrate the effectiveness of the proposed technique for toner saving in an eco-friendly printing mode.
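
    A rough NumPy/SciPy sketch of the two core steps, a difference-of-Gaussians (DoG) edge mask and multiplication of a contrast-boosted image by that mask, is given below; the saliency-map alpha blending and toner accounting described above are omitted, and all thresholds are made up for illustration.

        # Rough sketch of the core steps: a difference-of-Gaussians (DoG) edge mask,
        # then masking a contrast-boosted copy of the image.  The saliency-map
        # alpha blending described in the abstract is omitted.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(1)
        img = rng.random((64, 64))                        # stand-in grayscale image

        dog = gaussian_filter(img, 1.0) - gaussian_filter(img, 3.0)
        mask = (np.abs(dog) > 0.02).astype(float)         # keep strong edges only

        boosted = np.clip((img - 0.5) * 1.5 + 0.5, 0, 1)  # simple contrast increase
        sketch = boosted * mask                           # dots only where edges are

        print("fraction of non-blank pixels:", mask.mean())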

  10. Policy Process Editor for P3BM Software

    NASA Technical Reports Server (NTRS)

    James, Mark; Chang, Hsin-Ping; Chow, Edward T.; Crichton, Gerald A.

    2010-01-01

    A computer program enables generation, in the form of graphical representations of process flows with embedded natural-language policy statements, input to a suite of policy-, process-, and performance-based management (P3BM) software. This program (1) serves as an interface between users and the Hunter software, which translates the input into machine-readable form; and (2) enables users to initialize and monitor the policy-implementation process. This program provides an intuitive graphical interface for incorporating natural-language policy statements into business-process flow diagrams. Thus, the program enables users who dictate policies to intuitively embed their intended process flows as they state the policies, reducing the likelihood of errors and reducing the time between declaration and execution of policy.

  11. Fast Occlusion and Shadow Detection for High Resolution Remote Sensing Image Combined with LIDAR Point Cloud

    NASA Astrophysics Data System (ADS)

    Hu, X.; Li, X.

    2012-08-01

    The orthophoto is an important component of a GIS database and has been applied in many fields, but occlusion and shadow cause the loss of feature information, which greatly affects image quality. One of the critical steps in true orthophoto generation is the detection of occlusion and shadow. LiDAR can now provide the digital surface model (DSM) directly; combined with this technology, image occlusion and shadow can be detected automatically. In this paper, the Z-buffer is applied to occlusion detection. Shadow detection can be regarded as the same problem as occlusion detection, considering the angle between the sun and the camera. However, the Z-buffer algorithm is computationally expensive, and the volume of scanned data and remote sensing imagery is very large, so an efficient algorithm is another challenge. A modern graphics processing unit (GPU) is much more powerful than a central processing unit (CPU) for this task; we use it to speed up the Z-buffer algorithm and obtain a 7-fold increase in speed compared with the CPU. The experimental results demonstrate that the Z-buffer algorithm performs well for occlusion and shadow detection when combined with a high-density point cloud, and that the GPU speeds up the computation significantly.
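
    The Z-buffer test itself is simple, for every image pixel keep only the surface point closest to the camera, as the small Python sketch below illustrates with a handful of hypothetical projected points; the GPU implementation parallelizes this test over pixels.

        # Small sketch of the Z-buffer test: for every image pixel keep only the
        # surface point closest to the camera; anything farther is occluded.
        import numpy as np

        # Hypothetical points: (pixel_row, pixel_col, depth_from_camera).
        # Two of them fall on the same pixel; the farther one is occluded.
        points = [(0, 0, 42.0), (0, 1, 37.5), (0, 0, 55.0), (1, 1, 20.0)]

        h, w = 2, 2
        zbuffer = np.full((h, w), np.inf)

        for r, c, depth in points:
            if depth < zbuffer[r, c]:      # Z-buffer test: keep the nearest surface
                zbuffer[r, c] = depth

        for r, c, depth in points:
            occluded = depth > zbuffer[r, c]
            print(f"point at pixel ({r},{c}), depth {depth}: "
                  f"{'occluded' if occluded else 'visible'}")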

  12. Comparison of Campylobacter contamination levels on chicken carcasses between modern and traditional types of slaughtering facilities in Malaysia.

    PubMed

    Rejab, Saira Banu Mohamed; Zessin, Karl-Hans; Fries, Reinhard; Patchanee, Prapas

    2012-01-01

    A total of 360 samples, including fresh fecal droppings, neck skins, and swab samples, were collected from 24 broiler flocks and processed by 12 modern processing plants in 6 states in Malaysia. Ninety samples from 10 traditional wet markets located in the same states as the modern processing plants were also collected. Microbiological isolation for Campylobacter was performed following ISO 10272-1:2006 (E). The overall rate of contamination for Campylobacter in modern processing plants and in traditional wet markets was 61.1% (220/360) and 85.6% (77/90), respectively. Campylobacter jejuni was the predominant species, accounting for approximately 70% of isolates in both facility types. In the modern processing plants, the contamination rate for Campylobacter gradually declined from 80.6% before the inside-outside washing to 62.5% after inside-outside washing and to 38.9% after the post-chilling step. The contamination rate for Campylobacter from processed chicken neck skin in traditional wet markets (93.3%) was significantly (P<0.01) higher than in modern processing plants (38.9%).

  13. Applying graphics user interface to group technology classification and coding at the Boeing Aerospace Company

    NASA Astrophysics Data System (ADS)

    Ness, P. H.; Jacobson, H.

    1984-10-01

    The thrust of 'group technology' is toward the exploitation of similarities in component design and manufacturing process plans to achieve assembly line flow cost efficiencies for small batch production. The systematic method devised for the identification of similarities in component geometry and processing steps is a coding and classification scheme implemented by interactive CAD/CAM systems. This coding and classification scheme takes advantage of significant increases in computer processing power, allowing rapid searches and retrievals on the basis of a 30-digit code together with user-friendly computer graphics.

  14. Application of graphics processing units to search pipelines for gravitational waves from coalescing binaries of compact objects

    NASA Astrophysics Data System (ADS)

    Chung, Shin Kee; Wen, Linqing; Blair, David; Cannon, Kipp; Datta, Amitava

    2010-07-01

    We report a novel application of a graphics processing unit (GPU) for the purpose of accelerating the search pipelines for gravitational waves from coalescing binaries of compact objects. A speed-up of 16-fold in total has been achieved with an NVIDIA GeForce 8800 Ultra GPU card compared with one core of a 2.5 GHz Intel Q9300 central processing unit (CPU). We show that substantial improvements are possible and discuss the reduction in CPU count required for the detection of inspiral sources afforded by the use of GPUs.
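
    The core operation offloaded to the GPU is matched filtering, i.e., correlating detector data against a template bank; the NumPy sketch below shows a frequency-domain correlation against a single toy template, with detector whitening and SNR normalization omitted.

        # Sketch of FFT-based matched filtering, the core operation such a pipeline
        # offloads to the GPU: correlate data with a template in the frequency
        # domain and look for a peak.  Whitening and normalization are omitted.
        import numpy as np

        fs = 1024                                    # Hz, hypothetical sampling rate
        t = np.arange(0, 4, 1 / fs)
        template = np.sin(2 * np.pi * 60 * t[:fs])   # toy 1-second template

        rng = np.random.default_rng(2)
        data = rng.standard_normal(t.size)
        data[2 * fs:3 * fs] += 0.5 * template        # bury the signal at t = 2 s

        n = data.size
        corr = np.fft.irfft(np.fft.rfft(data) * np.conj(np.fft.rfft(template, n)), n)
        peak = np.argmax(np.abs(corr))
        print(f"loudest correlation at t = {peak / fs:.2f} s")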

  15. [Data collection in anesthesia. Experiences with the inauguration of a new information system].

    PubMed

    Zbinden, A M; Rothenbühler, H; Häberli, B

    1997-06-01

    In many institutions information systems are used to process off-line anaesthesia data for invoices, statistical purposes, and quality assurance. Information systems are also increasingly being used to improve process control in order to reduce costs. Most of today's systems were created when information technology and working processes in anaesthesia were very different from those in use today. Thus, many institutions must now replace their computer systems but are probably not aware of how complex this change will be. Modern information systems mostly use client-server architecture and relational databases. Substituting an old system with a new one is frequently a greater task than designing a system from scratch. This article gives the conclusions drawn from the experience obtained when a large departmental computer system was redesigned in a university hospital. The new system was based on a client-server architecture and was developed by an external company without preceding conceptual analysis. Modules for patient, anaesthesia, surgical, and pain-service data were included. Data were analysed using a separate statistical package (RS/1 from Bolt Beranek), taking advantage of its powerful precompiled procedures. Development and introduction of the new system took much more time and effort than expected despite the use of modern software tools. Introduction of the new program required intensive user training despite the choice of modern graphic screen layouts. Automatic data-reading systems could not be used, as too many faults occurred and the effort for the user was too high. However, after the initial problems were solved the system turned out to be a powerful tool for quality control (both process and outcome quality), billing, and scheduling. The statistical analysis of the data resulted in meaningful and relevant conclusions. Before creating a new information system, the working processes have to be analysed and, if possible, made more efficient; a detailed programme specification must then be made. A servicing and maintenance contract should be drawn up before the order is given to a company. Time periods of equal duration have to be scheduled for defining, writing, testing and introducing the program. Modern client-server systems with relational databases are by no means simpler to establish and maintain than previous mainframe systems with hierarchical databases, and thus, experienced computer specialists need to be close at hand. We recommend collecting data only once for both statistics and quality control. To verify data quality, a system of random spot-sampling has to be established. Despite the large investments needed to build up such a system, we consider it a powerful tool for helping to solve the difficult daily problems of managing a surgical and anaesthesia unit.

  16. Three-dimensional structural analysis using interactive graphics

    NASA Technical Reports Server (NTRS)

    Biffle, J.; Sumlin, H. A.

    1975-01-01

    The application of computer interactive graphics to three-dimensional structural analysis was described, with emphasis on the following aspects: (1) structural analysis, and (2) generation and checking of input data and examination of the large volume of output data (stresses, displacements, velocities, accelerations). Handling of three-dimensional input processing with a special MESH3D computer program was explained. Similarly, a special code PLTZ may be used to perform all the needed tasks for output processing from a finite element code. Examples were illustrated.

  17. Application of computer generated color graphic techniques to the processing and display of three dimensional fluid dynamic data

    NASA Technical Reports Server (NTRS)

    Anderson, B. H.; Putt, C. W.; Giamati, C. C.

    1981-01-01

    Color coding techniques used in the processing of remote sensing imagery were adapted and applied to the fluid dynamics problems associated with turbofan mixer nozzles. The computer generated color graphics were found to be useful in reconstructing the measured flow field from low resolution experimental data to give more physical meaning to this information and in scanning and interpreting the large volume of computer generated data from the three dimensional viscous computer code used in the analysis.

  18. Real-Time Visualization of an HPF-based CFD Simulation

    NASA Technical Reports Server (NTRS)

    Kremenetsky, Mark; Vaziri, Arsi; Haimes, Robert; Chancellor, Marisa K. (Technical Monitor)

    1996-01-01

    Current time-dependent CFD simulations produce very large multi-dimensional data sets at each time step. The visual analysis of computational results are traditionally performed by post processing the static data on graphics workstations. We present results from an alternate approach in which we analyze the simulation data in situ on each processing node at the time of simulation. The locally analyzed results, usually more economical and in a reduced form, are then combined and sent back for visualization on a graphics workstation.

  19. General purpose molecular dynamics simulations fully implemented on graphics processing units

    NASA Astrophysics Data System (ADS)

    Anderson, Joshua A.; Lorenz, Chris D.; Travesset, A.

    2008-05-01

    Graphics processing units (GPUs), originally developed for rendering real-time effects in computer games, now provide unprecedented computational power for scientific applications. In this paper, we develop a general purpose molecular dynamics code that runs entirely on a single GPU. It is shown that our GPU implementation provides a performance equivalent to that of a fast 30-processor-core distributed-memory cluster. Our results show that GPUs already provide an inexpensive alternative to such clusters, and we discuss the implications for the future.
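
    For orientation, the inner loop that such a code moves onto the GPU looks roughly like the CPU sketch below: Lennard-Jones pair forces and one velocity-Verlet step. Neighbor lists, cell lists, and periodic boundaries, which a real implementation needs, are omitted, and all parameters are made up.

        # Tiny CPU sketch of one velocity-Verlet step with Lennard-Jones forces,
        # the kind of inner loop a GPU molecular dynamics code parallelizes.
        import numpy as np

        def lj_forces(pos, eps=1.0, sigma=1.0):
            f = np.zeros_like(pos)
            n = len(pos)
            for i in range(n):
                for j in range(i + 1, n):
                    r = pos[i] - pos[j]
                    d2 = np.dot(r, r)
                    inv6 = (sigma * sigma / d2) ** 3
                    fmag = 24 * eps * (2 * inv6 * inv6 - inv6) / d2
                    f[i] += fmag * r
                    f[j] -= fmag * r
            return f

        grid = np.linspace(0.0, 1.5, 2)   # 8 particles on a small 2x2x2 lattice
        pos = np.array([[x, y, z] for x in grid for y in grid for z in grid])
        vel = np.zeros_like(pos)
        dt, mass = 0.001, 1.0

        f = lj_forces(pos)                # one velocity-Verlet step
        vel += 0.5 * dt * f / mass
        pos += dt * vel
        f = lj_forces(pos)
        vel += 0.5 * dt * f / mass
        print("mean speed after one step:", np.linalg.norm(vel, axis=1).mean())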

  20. Quantum optimal control with automatic differentiation using graphics processors

    NASA Astrophysics Data System (ADS)

    Leung, Nelson; Abdelhafez, Mohamed; Chakram, Srivatsan; Naik, Ravi; Groszkowski, Peter; Koch, Jens; Schuster, David

    We implement quantum optimal control based on automatic differentiation and harness the acceleration afforded by graphics processing units (GPUs). Automatic differentiation allows us to specify advanced optimization criteria and incorporate them into the optimization process with ease. We will describe efficient techniques to optimally control weakly anharmonic systems that are commonly encountered in circuit QED, including coupled superconducting transmon qubits and multi-cavity circuit QED systems. These systems allow for a rich variety of control schemes that quantum optimal control is well suited to explore.
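
    A minimal sketch of the kind of objective being optimized, the fidelity of a piecewise-constant control pulse on a two-level system, is shown below; a finite-difference gradient stands in for the automatic differentiation (and GPU acceleration) used in the actual work, the drift and anharmonicity terms of real circuit-QED systems are omitted, and all parameters are made up.

        # Sketch of gradient-based pulse optimization on a driven two-level system.
        # Finite differences stand in for automatic differentiation here.
        import numpy as np

        sx = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli-X control operator

        def final_state(pulse, dt=0.1):
            """Propagate |0> under a piecewise-constant control pulse."""
            psi = np.array([1, 0], dtype=complex)
            for amp in pulse:
                # exact propagator of H = amp * sx over one time step
                U = np.cos(amp * dt) * np.eye(2) - 1j * np.sin(amp * dt) * sx
                psi = U @ psi
            return psi

        def infidelity(pulse, target=np.array([0, 1], dtype=complex)):
            return 1.0 - abs(np.vdot(target, final_state(pulse))) ** 2

        rng = np.random.default_rng(0)
        pulse = 0.1 * rng.standard_normal(20)            # 20 control amplitudes
        print("initial infidelity:", round(infidelity(pulse), 4))

        for _ in range(300):                             # plain gradient descent
            grad = np.zeros_like(pulse)
            for k in range(pulse.size):
                d = np.zeros_like(pulse)
                d[k] = 1e-4
                grad[k] = (infidelity(pulse + d) - infidelity(pulse - d)) / 2e-4
            pulse -= 1.0 * grad

        print("final infidelity:", round(infidelity(pulse), 4))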

  1. Structured Analysis of the Logistics Support Analysis (LSA) Task, and Integrated Logistic Support (ILS) Element, ’Standardization and Interoperability (S and I)’.

    DTIC Science & Technology

    1988-11-01

    system, using graphic techniques which enable users, analysts, and designers to get a clear and common picture of the system and how its parts fit... boxes into hierarchies suitable for computer implementation. ... Structured Design uses tools, especially graphic ones, to render systems readily... Keywords: LSA, processes, data flows, data stores, external entities, overall systems design process.

  2. Improvements in recall and food choices using a graphical method to deliver information of select nutrients.

    PubMed

    Pratt, Nathan S; Ellison, Brenna D; Benjamin, Aaron S; Nakamura, Manabu T

    2016-01-01

    Consumers have difficulty using nutrition information. We hypothesized that graphically delivering information of select nutrients relative to a target would allow individuals to process information in time-constrained settings more effectively than numerical information. Objectives of the study were to determine the efficacy of the graphical method in (1) improving memory of nutrient information and (2) improving consumer purchasing behavior in a restaurant. Values of fiber and protein per calorie were 2-dimensionally plotted alongside a target box. First, a randomized cued recall experiment was conducted (n=63). Recall accuracy of nutrition information improved by up to 43% when shown graphically instead of numerically. Second, the impact of graphical nutrition signposting on diner choices was tested in a cafeteria. Saturated fat and sodium information was also presented using color coding. Nutrient content of meals (n=362) was compared between 3 signposting phases: graphical, nutrition facts panels (NFP), or no nutrition label. Graphical signposting improved nutrient content of purchases in the intended direction, whereas NFP had no effect compared with the baseline. Calories ordered from total meals, entrées, and sides were significantly less during graphical signposting than no-label and NFP periods. For total meal and entrées, protein per calorie purchased was significantly higher and saturated fat significantly lower during graphical signposting than the other phases. Graphical signposting remained a predictor of calories and protein per calorie purchased in regression modeling. These findings demonstrate that graphically presenting nutrition information makes that information more available for decision making and influences behavior change in a realistic setting. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Surgical pathology report in the era of desktop publishing.

    PubMed

    Pillarisetti, S G

    1993-01-01

    Since it is believed that "a picture is worth a thousand words," computer-generated line art was incorporated as an adjunct to gross description in surgical pathology reporting in selected cases. The lack of an integrated software program was overcome by using commercially available graphic and word processing software. A library of drawings was developed over the last few years. Most time-consuming is the development of templates and the graphic library. With some effort it is possible to integrate graphics of high quality into surgical pathology reports.

  4. Advanced graphical user interface for multi-physics simulations using AMST

    NASA Astrophysics Data System (ADS)

    Hoffmann, Florian; Vogel, Frank

    2017-07-01

    Numerical modelling of particulate matter has gained much popularity in recent decades. Advanced Multi-physics Simulation Technology (AMST) is a state-of-the-art three dimensional numerical modelling technique combining the eX-tended Discrete Element Method (XDEM) with Computational Fluid Dynamics (CFD) and Finite Element Analysis (FEA) [1]. One major limitation of this code is the lack of a graphical user interface (GUI) meaning that all pre-processing has to be made directly in a HDF5-file. This contribution presents the first graphical pre-processor developed for AMST.

  5. Interactive computer graphics - Why's, wherefore's and examples

    NASA Technical Reports Server (NTRS)

    Gregory, T. J.; Carmichael, R. L.

    1983-01-01

    The benefits of using computer graphics in design are briefly reviewed. It is shown that computer graphics substantially aids productivity by permitting errors in design to be found immediately and by greatly reducing the cost of fixing the errors and the cost of redoing the process. The possibilities offered by computer-generated displays in terms of information content are emphasized, along with the form in which the information is transferred. The human being is ideally and naturally suited to dealing with information in picture format, and the content rate in communication with pictures is several orders of magnitude greater than with words or even graphs. Since science and engineering involve communicating ideas, concepts, and information, the benefits of computer graphics cannot be overestimated.

  6. Graphic Arts: Orientation, Composition, and Paste-Up. Third Edition.

    ERIC Educational Resources Information Center

    Crummett, Dan

    This document contains teacher and student materials for a course in graphic arts. Ten units of instruction cover the following topics: (1) orientation; (2) shop safety; (3) shop organization; (4) printing processes; (5) paper; (6) typography; (7) typesetting; (8) design principles; (9) paste-up principles and procedures; and (10) proof procedures…

  7. A Comparison of the Use of Text Summaries, Plain Thumbnails, and Enhanced Thumbnails for Web Search Tasks.

    ERIC Educational Resources Information Center

    Woodruff, Allison; Rosenholtz, Ruth; Morrison, Julie B.; Faulring, Andrew; Pirolli, Peter

    2002-01-01

    Discussion of Web search strategies focuses on a comparative study of textual and graphical summarization mechanisms applied to search engine results. Suggests that thumbnail images (graphical summaries) can increase efficiency in processing results, and that enhanced thumbnails (augmented with readable textual elements) had more consistent…

  8. Getting the Bigger Picture: Children's Utilization of Graphics and Text

    ERIC Educational Resources Information Center

    Norman, Rebecca R.; Roberts, Kathryn L.

    2015-01-01

    This study examined 30 second graders' patterns of attention to graphics (e.g., maps, diagrams, photographs, illustrations) and their illustration extensions (e.g., captions, labels) in two informational texts, and how students processed these items (e.g., creating narrative or evaluating). Results indicate that students do tend to study different…

  9. Preschool-Aged Children Have Difficulty Constructing and Interpreting Simple Utterances Composed of Graphic Symbols

    ERIC Educational Resources Information Center

    Sutton, Ann; Trudeau, Natacha; Morford, Jill; Rios, Monica; Poirier, Marie-Andree

    2010-01-01

    Children who require augmentative and alternative communication (AAC) systems while they are in the process of acquiring language face unique challenges because they use graphic symbols for communication. In contrast to the situation of typically developing children, they use different modalities for comprehension (auditory) and expression…

  10. Lessons from a doctoral thesis.

    PubMed

    Peiris, A N; Mueller, R A; Sheridan, D P

    1990-01-01

    The production of a doctoral thesis is a time-consuming affair that until recently was done in conjunction with professional publishing services. Advances in computer technology have made many sophisticated desktop publishing techniques available to the microcomputer user. We describe the computer method used, the problems encountered, and the solutions improvised in the production of a doctoral thesis by computer. The Apple Macintosh was selected for its ease of use and intrinsic graphics capabilities. A scanner was used to incorporate text from published papers into a word processing program. The body of the text was updated and supplemented with new sections. Scanned graphics from the published papers were less suitable for publication, and the original data were replotted and modified with a graphics-drawing program. Graphics were imported and incorporated in the text. Final hard copy was produced by a laser printer and bound with both conventional and rapid new binding techniques. Microcomputer-based desktop processing methods provide a rapid and cost-effective means of communicating the written word. We anticipate that this evolving technology will have increased use by physicians in both the private and academic sectors.

  11. Computer graphics testbed to simulate and test vision systems for space applications

    NASA Technical Reports Server (NTRS)

    Cheatham, John B.; Wu, Chris K.; Lin, Y. H.

    1991-01-01

    A system was developed for displaying computer graphics images of space objects and the use of the system was demonstrated as a testbed for evaluating vision systems for space applications. In order to evaluate vision systems, it is desirable to be able to control all factors involved in creating the images used for processing by the vision system. Considerable time and expense is involved in building accurate physical models of space objects. Also, precise location of the model relative to the viewer and accurate location of the light source require additional effort. As part of this project, graphics models of space objects such as the Solarmax satellite are created in which the user can control the light direction and the relative position of the object and the viewer. The work is also aimed at providing control of hue, shading, noise and shadows for use in demonstrating and testing image processing techniques. The simulated camera data can provide XYZ coordinates, pitch, yaw, and roll for the models. A physical model is also being used to provide comparison of camera images with the graphics images.

  12. Pycortex: an interactive surface visualizer for fMRI

    PubMed Central

    Gao, James S.; Huth, Alexander G.; Lescroart, Mark D.; Gallant, Jack L.

    2015-01-01

    Surface visualizations of fMRI provide a comprehensive view of cortical activity. However, surface visualizations are difficult to generate and most common visualization techniques rely on unnecessary interpolation which limits the fidelity of the resulting maps. Furthermore, it is difficult to understand the relationship between flattened cortical surfaces and the underlying 3D anatomy using tools available currently. To address these problems we have developed pycortex, a Python toolbox for interactive surface mapping and visualization. Pycortex exploits the power of modern graphics cards to sample volumetric data on a per-pixel basis, allowing dense and accurate mapping of the voxel grid across the surface. Anatomical and functional information can be projected onto the cortical surface. The surface can be inflated and flattened interactively, aiding interpretation of the correspondence between the anatomical surface and the flattened cortical sheet. The output of pycortex can be viewed using WebGL, a technology compatible with modern web browsers. This allows complex fMRI surface maps to be distributed broadly online without requiring installation of complex software. PMID:26483666
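
    The underlying sampling operation, looking up volumetric data at arbitrary surface coordinates, can be sketched on the CPU with SciPy's trilinear interpolation as below; pycortex performs the analogous lookup per screen pixel on the GPU, and the volume and vertex coordinates here are purely illustrative.

        # Sketch of the underlying sampling operation: look up volumetric (voxel)
        # data at arbitrary surface-vertex coordinates with trilinear interpolation.
        import numpy as np
        from scipy.ndimage import map_coordinates

        rng = np.random.default_rng(4)
        volume = rng.random((32, 32, 32))            # stand-in fMRI volume

        # Hypothetical vertex coordinates in voxel space (n_vertices x 3).
        vertices = np.array([[10.2, 15.7, 8.9],
                             [20.0, 20.5, 30.1],
                             [3.3, 31.0, 12.4]])

        values = map_coordinates(volume, vertices.T, order=1)   # trilinear sampling
        for v, val in zip(vertices, values):
            print(v, "->", round(float(val), 3))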

  13. Stochastic and Deterministic Crystal Structure Solution Methods in GSAS-II: Monte Carlo/Simulated Annealing Versus Charge Flipping

    DOE PAGES

    Von Dreele, Robert

    2017-08-29

    One of the goals in developing GSAS-II was to expand from the capabilities of the original General Structure Analysis System (GSAS) which largely encompassed just structure refinement and post refinement analysis. GSAS-II has been written almost entirely in Python loaded with graphics, GUI and mathematical packages (matplotlib, pyOpenGL, wxpython, numpy and scipy). Thus, GSAS-II has a fully developed modern GUI as well as extensive graphical display of data and results. However, the structure and operation of Python has required new approaches to many of the algorithms used in crystal structure analysis. The extensions beyond GSAS include image calibration/integration as well as peak fitting and unit cell indexing for powder data which are precursors for structure solution. Structure solution within GSAS-II begins with either Pawley or LeBail extracted structure factors from powder data or those measured in a single crystal experiment. Both charge flipping and Monte Carlo-Simulated Annealing techniques are available; the former can be applied to (3+1) incommensurate structures as well as conventional 3D structures.

  14. Ionizing radiation measurements using low cost instruments for teaching in college or high-school in Brazil

    NASA Astrophysics Data System (ADS)

    Silva, M. C.; Vilela, D. C.; Migoto, V. G.; Gomes, M. P.; Martin, I. M.; Germano, J. S. E.

    2017-11-01

    Ionizing radiation, one of the experimental topics of modern physics teaching in colleges and high schools, can easily be introduced today thanks to the low cost of detectors, electronic circuits, and data acquisition interfaces. It is interesting first to show young students what ionizing radiation is and where it comes from near ground level, how it can be measured, and how its intensity varies between day and night and between dry and wet periods at the same school. To increase interest and motivation, students can also learn how to plot the ionizing radiation data and present it in real time over the Web. Further activities, such as calibrating the detector with low-intensity radioactive sources and comparing and discussing measurements, are possible between groups of students from several schools in a region of Brazil. This paper presents the experimental procedures, including the detectors and associated electronics, data acquisition, graphics generation, and Web procedures for discussing and exchanging measurement data among several schools.

  15. Impact of memory bottleneck on the performance of graphics processing units

    NASA Astrophysics Data System (ADS)

    Son, Dong Oh; Choi, Hong Jun; Kim, Jong Myon; Kim, Cheol Hong

    2015-12-01

    Recent graphics processing units (GPUs) can process general-purpose applications as well as graphics applications with the help of various user-friendly application programming interfaces (APIs) supported by GPU vendors. Unfortunately, utilizing the hardware resources in the GPU efficiently is a challenging problem, since the GPU architecture is totally different from the traditional CPU architecture. To solve this problem, many studies have focused on techniques for improving system performance using GPUs. In this work, we analyze GPU performance while varying GPU parameters such as the number of cores and the clock frequency. According to our simulations, GPU performance can be improved by 125.8% and 16.2% on average as the number of cores and the clock frequency increase, respectively. However, the performance saturates when memory bottleneck problems occur due to the huge volume of data requests to the memory. The performance of GPUs can be improved as the memory bottleneck is reduced by changing GPU parameters dynamically.
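
    A back-of-the-envelope way to see when a kernel becomes memory-bound is the roofline comparison of arithmetic intensity against the ratio of peak compute to memory bandwidth, sketched below; the hardware numbers are hypothetical and are not taken from the paper's simulations.

        # Roofline-style check of when a kernel becomes memory-bound: compare its
        # arithmetic intensity (FLOPs per byte moved) with the ratio of peak compute
        # to memory bandwidth.  All numbers below are hypothetical.
        peak_flops = 4.0e12          # 4 TFLOP/s, hypothetical GPU
        peak_bandwidth = 200.0e9     # 200 GB/s, hypothetical memory system

        def attainable(flops_per_byte):
            """Roofline: performance is capped by compute or by memory traffic."""
            return min(peak_flops, flops_per_byte * peak_bandwidth)

        for intensity in (0.5, 2.0, 8.0, 32.0, 128.0):
            perf = attainable(intensity)
            bound = "memory-bound" if perf < peak_flops else "compute-bound"
            print(f"intensity {intensity:6.1f} FLOP/B -> {perf/1e12:.2f} TFLOP/s ({bound})")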

  16. Development of web-GIS system for analysis of georeferenced geophysical data

    NASA Astrophysics Data System (ADS)

    Okladnikov, I.; Gordov, E. P.; Titov, A. G.; Bogomolov, V. Y.; Genina, E.; Martynova, Y.; Shulgina, T. M.

    2012-12-01

    Georeferenced datasets (meteorological databases, modeling and reanalysis results, remote sensing products, etc.) are actively used in numerous applications, including modeling, interpretation, and forecasting of climatic and ecosystem changes on various spatial and temporal scales. Due to the inherent heterogeneity of environmental datasets, as well as their huge size, which may reach tens of terabytes for a single dataset, present-day studies of climate and environmental change require special software support. A dedicated web-GIS information-computational system for the analysis of georeferenced climatological and meteorological data has been created. It consists of four basic parts: a computational kernel developed using GNU Data Language (GDL); a set of PHP controllers run within a specialized web portal; JavaScript class libraries for developing typical components of the web-mapping application's graphical user interface (GUI) based on AJAX technology; and an archive of geophysical datasets. The computational kernel comprises a number of dedicated modules for querying and extracting data, mathematical and statistical data analysis, visualization, and preparing output files containing processing results in GeoTIFF and netCDF formats. The specialized web portal consists of an Apache web server, OGC-compliant GeoServer software used as the basis for presenting cartographical information over the Web, and a set of PHP controllers implementing the web-mapping application logic and governing the computational kernel. The JavaScript libraries for graphical user interface development are based on the GeoExt library, combining the ExtJS framework and OpenLayers software. The archive of geophysical data consists of a number of structured environmental datasets represented by data files in netCDF, HDF, GRIB, and ESRI Shapefile formats. Datasets available for processing by the system include two editions of the NCEP/NCAR Reanalysis, the JMA/CRIEPI JRA-25 Reanalysis, the ECMWF ERA-40 Reanalysis, the ECMWF ERA-Interim Reanalysis, the MRI/JMA APHRODITE Water Resources Project Reanalysis, DWD Global Precipitation Climatology Centre data, the GMAO Modern Era-Retrospective Analysis for Research and Applications, meteorological observational data for the territory of the former USSR for the 20th century, results of modeling by global and regional climatological models, and others. The system is already being used in scientific research; in particular, it was recently used successfully for the analysis of climate change in Siberia and its regional impact. The web-GIS information-computational system for geophysical data analysis provides specialists involved in multidisciplinary research projects with reliable and practical instruments for the complex analysis of climate and ecosystem changes on global and regional scales. Using it, even an unskilled user without specific knowledge can perform computational processing and visualization of large meteorological, climatological, and satellite monitoring datasets through a unified web interface in a common graphical web browser. This work is partially supported by the Ministry of Education and Science of the Russian Federation (contract #07.514.114044), projects IV.31.1.5 and IV.31.2.7, RFBR grants #10-07-00547a and #11-05-01190a, and integrated project SB RAS #131.

  17. Systems Biology Graphical Notation: Process Description language Level 1 Version 1.3.

    PubMed

    Moodie, Stuart; Le Novère, Nicolas; Demir, Emek; Mi, Huaiyu; Villéger, Alice

    2015-09-04

    The Systems Biology Graphical Notation (SBGN) is an international community effort for standardized graphical representations of biological pathways and networks. The goal of SBGN is to provide unambiguous pathway and network maps for readers with different scientific backgrounds as well as to support efficient and accurate exchange of biological knowledge between different research communities, industry, and other players in systems biology. Three SBGN languages, Process Description (PD), Entity Relationship (ER) and Activity Flow (AF), allow for the representation of different aspects of biological and biochemical systems at different levels of detail. The SBGN Process Description language represents biological entities and processes between these entities within a network. SBGN PD focuses on the mechanistic description and temporal dependencies of biological interactions and transformations. The nodes (elements) are split into entity nodes describing, e.g., metabolites, proteins, genes and complexes, and process nodes describing, e.g., reactions and associations. The edges (connections) provide descriptions of relationships (or influences) between the nodes, such as consumption, production, stimulation and inhibition. Among all three languages of SBGN, PD is the closest to metabolic and regulatory pathways in biological literature and textbooks, but its well-defined semantics offer a superior precision in expressing biological knowledge.

  18. Literacy learning in users of AAC: A neurocognitive perspective.

    PubMed

    Van Balkom, Hans; Verhoeven, Ludo

    2010-09-01

    The understanding of written or printed text or discourse - depicted either in orthographical, graphic-visual or tactile symbols - calls upon both bottom-up word recognition processes and top-down comprehension processes. Different architectures have been proposed to account for literacy processes. Research has shown that the first steps in perceiving, processing and deriving conceptual meaning from words, graphic symbols, manual signs, and co-speech gestures or tactile manual signing and tangible symbols can be seen as identical and collectively (sub)activated. Results from recent brain research and neurolinguistics have revealed new insights in the reading process of typical and atypical readers and may provide verifiable evidence for improved literacy assessment and the validation of early intervention programs for AAC users.

  19. ImageJS: Personalized, participated, pervasive, and reproducible image bioinformatics in the web browser

    PubMed Central

    Almeida, Jonas S.; Iriabho, Egiebade E.; Gorrepati, Vijaya L.; Wilkinson, Sean R.; Grüneberg, Alexander; Robbins, David E.; Hackney, James R.

    2012-01-01

    Background: Image bioinformatics infrastructure typically relies on a combination of server-side high-performance computing and client desktop applications tailored for graphic rendering. On the server side, matrix manipulation environments are often used as the back-end where deployment of specialized analytical workflows takes place. However, neither the server-side nor the client-side desktop solution, by themselves or combined, is conducive to the emergence of open, collaborative, computational ecosystems for image analysis that are both self-sustained and user driven. Materials and Methods: ImageJS was developed as a browser-based webApp, untethered from a server-side backend, by making use of recent advances in the modern web browser such as a very efficient compiler, high-end graphical rendering capabilities, and I/O tailored for code migration. Results: Multiple versioned code hosting services were used to develop distinct ImageJS modules to illustrate its amenability to collaborative deployment without compromise of reproducibility or provenance. The illustrative examples include modules for image segmentation, feature extraction, and filtering. The deployment of image analysis by code migration is in sharp contrast with the more conventional, heavier, and less safe reliance on data transfer. Accordingly, code and data are loaded into the browser by exactly the same script tag loading mechanism, which offers a number of interesting applications that would be hard to attain with more conventional platforms, such as NIH's popular ImageJ application. Conclusions: The modern web browser was found to be advantageous for image bioinformatics in both the research and clinical environments. This conclusion reflects advantages in deployment scalability and analysis reproducibility, as well as the critical ability to deliver advanced computational statistical procedures to machines where access to sensitive data is controlled, that is, without local “download and installation”. PMID:22934238

  20. ImageJS: Personalized, participated, pervasive, and reproducible image bioinformatics in the web browser.

    PubMed

    Almeida, Jonas S; Iriabho, Egiebade E; Gorrepati, Vijaya L; Wilkinson, Sean R; Grüneberg, Alexander; Robbins, David E; Hackney, James R

    2012-01-01

    Image bioinformatics infrastructure typically relies on a combination of server-side high-performance computing and client desktop applications tailored for graphic rendering. On the server side, matrix manipulation environments are often used as the back-end where deployment of specialized analytical workflows takes place. However, neither the server-side nor the client-side desktop solution, by themselves or combined, is conducive to the emergence of open, collaborative, computational ecosystems for image analysis that are both self-sustained and user driven. ImageJS was developed as a browser-based webApp, untethered from a server-side backend, by making use of recent advances in the modern web browser such as a very efficient compiler, high-end graphical rendering capabilities, and I/O tailored for code migration. Multiple versioned code hosting services were used to develop distinct ImageJS modules to illustrate its amenability to collaborative deployment without compromise of reproducibility or provenance. The illustrative examples include modules for image segmentation, feature extraction, and filtering. The deployment of image analysis by code migration is in sharp contrast with the more conventional, heavier, and less safe reliance on data transfer. Accordingly, code and data are loaded into the browser by exactly the same script tag loading mechanism, which offers a number of interesting applications that would be hard to attain with more conventional platforms, such as NIH's popular ImageJ application. The modern web browser was found to be advantageous for image bioinformatics in both the research and clinical environments. This conclusion reflects advantages in deployment scalability and analysis reproducibility, as well as the critical ability to deliver advanced computational statistical procedures to machines where access to sensitive data is controlled, that is, without local "download and installation".

  1. Heterogeneous computing architecture for fast detection of SNP-SNP interactions.

    PubMed

    Sluga, Davor; Curk, Tomaz; Zupan, Blaz; Lotric, Uros

    2014-06-25

    The extent of data in a typical genome-wide association study (GWAS) poses considerable computational challenges to software tools for gene-gene interaction discovery. Exhaustive evaluation of all interactions among hundreds of thousands to millions of single nucleotide polymorphisms (SNPs) may require weeks or even months of computation. Massively parallel hardware within a modern Graphic Processing Unit (GPU) and Many Integrated Core (MIC) coprocessors can shorten the run time considerably. While the utility of GPU-based implementations in bioinformatics has been well studied, MIC architecture has been introduced only recently and may provide a number of comparative advantages that have yet to be explored and tested. We have developed a heterogeneous, GPU and Intel MIC-accelerated software module for SNP-SNP interaction discovery to replace the previously single-threaded computational core in the interactive web-based data exploration program SNPsyn. We report on differences between these two modern massively parallel architectures and their software environments. Their utility resulted in an order of magnitude shorter execution times when compared to the single-threaded CPU implementation. The GPU implementation on a single Nvidia Tesla K20 runs twice as fast as that for the MIC architecture-based Xeon Phi P5110 coprocessor, but also requires considerably more programming effort. General purpose GPUs are a mature platform with large amounts of computing power capable of tackling inherently parallel problems, but can prove demanding for the programmer. On the other hand, the new MIC architecture, albeit lacking in raw performance, reduces the programming effort and makes up for it with a more general architecture suitable for a wider range of problems.
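
    To make the workload concrete, the sketch below scores a single SNP pair with a chi-square-style statistic over the 3x3x2 genotype-by-phenotype contingency table; an exhaustive scan applies such a score to every pair, which is the part the GPU/MIC module parallelizes. This is not SNPsyn's actual scoring code, and the data are random.

        # Sketch of scoring one SNP pair: build the 3x3x2 genotype-by-phenotype
        # contingency table and compute a chi-square-style statistic.
        import numpy as np

        rng = np.random.default_rng(5)
        n = 1000
        snp_a = rng.integers(0, 3, n)       # genotypes coded 0/1/2
        snp_b = rng.integers(0, 3, n)
        phenotype = rng.integers(0, 2, n)   # case/control

        def pair_score(a, b, y):
            table = np.zeros((3, 3, 2))
            np.add.at(table, (a, b, y), 1)                 # observed counts
            expected = (table.sum(axis=2, keepdims=True)
                        * table.sum(axis=(0, 1), keepdims=True) / len(y))
            with np.errstate(divide="ignore", invalid="ignore"):
                chi2 = np.where(expected > 0, (table - expected) ** 2 / expected, 0)
            return chi2.sum()

        print("interaction score:", round(pair_score(snp_a, snp_b, phenotype), 2))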

  2. Heterogeneous computing architecture for fast detection of SNP-SNP interactions

    PubMed Central

    2014-01-01

    Background The extent of data in a typical genome-wide association study (GWAS) poses considerable computational challenges to software tools for gene-gene interaction discovery. Exhaustive evaluation of all interactions among hundreds of thousands to millions of single nucleotide polymorphisms (SNPs) may require weeks or even months of computation. Massively parallel hardware within a modern Graphic Processing Unit (GPU) and Many Integrated Core (MIC) coprocessors can shorten the run time considerably. While the utility of GPU-based implementations in bioinformatics has been well studied, MIC architecture has been introduced only recently and may provide a number of comparative advantages that have yet to be explored and tested. Results We have developed a heterogeneous, GPU and Intel MIC-accelerated software module for SNP-SNP interaction discovery to replace the previously single-threaded computational core in the interactive web-based data exploration program SNPsyn. We report on differences between these two modern massively parallel architectures and their software environments. Their utility resulted in an order of magnitude shorter execution times when compared to the single-threaded CPU implementation. The GPU implementation on a single Nvidia Tesla K20 runs twice as fast as that for the MIC architecture-based Xeon Phi P5110 coprocessor, but also requires considerably more programming effort. Conclusions General purpose GPUs are a mature platform with large amounts of computing power capable of tackling inherently parallel problems, but can prove demanding for the programmer. On the other hand, the new MIC architecture, albeit lacking in raw performance, reduces the programming effort and makes up for it with a more general architecture suitable for a wider range of problems. PMID:24964802

  3. Explorationists and dinosaurs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    French, W.S.

    1993-02-01

    The exploration industry is changing, exploration technology is changing and the explorationist's job is changing. Resource companies are diversifying internationally and their central organizations are providing advisors rather than services. As a result, the relationship between the resource company and the contractor is changing. Resource companies are promoting standards so that all contract services in all parts of the world will look the same to their advisors. Contractors, for competitive reasons, want to look "different" from other contractors. The resource companies must encourage competition between contractors to insure the availability of new technology but must also resist the current trend of burdening the contractor with more and more of the risk involved in exploration. It is becoming more and more obvious that geophysical expenditures represent the best "value added" expenditures in exploration and development budgets. As a result, seismic-related contractors represent the growth component of our industry. The predominant growth is in 3-D seismic technology, and this growth is being further propelled by the computational power of the new generation of massively parallel computers and by recent advances in computer graphic techniques. Interpretation of seismic data involves the analysis of wavelet shapes and amplitudes prior to stacking the data. Thus, modern interpretation involves understanding compressional waves, shear waves, and propagating modes which create noise and interference. Modern interpretation and processing are carried out simultaneously, iteratively, and interactively and involve many physics-related concepts. These concepts are not merely tools for the interpretation, they are the interpretation. Explorationists who do not recognize this fact are going the way of the dinosaurs.

  4. Laser marking as a result of applying reverse engineering

    NASA Astrophysics Data System (ADS)

    Mihalache, Andrei; Nagîţ, Gheorghe; Rîpanu, Marius Ionuţ; Slǎtineanu, Laurenţiu; Dodun, Oana; Coteaţǎ, Margareta

    2018-05-01

    Developing a modern manufacturing technology requires a certain amount of information about the part to be produced. When the technology must be developed for an existing object, that information can be obtained by applying the principles of reverse engineering: the analysis of the surfaces and other characteristics of the part must provide enough information to develop the part's manufacturing technology. Laser marking, in turn, is a processing method capable of transferring various inscriptions or drawings onto a part. In some cases, laser marking can be based on the analysis of an existing object, whose image can be used to generate the same object or an improved one. Many groups of factors can affect the results of the laser marking process. A theoretical analysis was proposed to show that the heights of triangles produced on CNC marking equipment depend on the width of the line generated by the laser spot on the workpiece surface. An experimental study was designed and carried out to highlight the influence exerted by the line width and the angle of line intersections on the accuracy of the marking process. By mathematical processing of the experimental results, empirical models were determined; the power-type model and the graphical representation derived from it illustrate the influence of the considered input factors on marking accuracy.

  5. Three-dimensional photoacoustic tomography based on graphics-processing-unit-accelerated finite element method.

    PubMed

    Peng, Kuan; He, Ling; Zhu, Ziqiang; Tang, Jingtian; Xiao, Jiaying

    2013-12-01

    Compared with commonly used analytical reconstruction methods, the frequency-domain finite element method (FEM) based approach has proven to be an accurate and flexible algorithm for photoacoustic tomography. However, the FEM-based algorithm is computationally demanding, especially for three-dimensional cases. To enhance the algorithm's efficiency, in this work a parallel computational strategy is implemented in the framework of the FEM-based reconstruction algorithm using a graphics-processing-unit parallel framework, the compute unified device architecture (CUDA). A series of simulation experiments is carried out to test the accuracy and accelerating effect of the improved method. The results obtained indicate that the parallel calculation does not change the accuracy of the reconstruction algorithm, while its computational cost is significantly reduced, by a factor of 38.9 with a GTX 580 graphics card, using the improved method.

  6. A software architecture for automating operations processes

    NASA Technical Reports Server (NTRS)

    Miller, Kevin J.

    1994-01-01

    The Operations Engineering Lab (OEL) at JPL has developed a software architecture based on an integrated toolkit approach for simplifying and automating mission operations tasks. The toolkit approach is based on building adaptable, reusable graphical tools that are integrated through a combination of libraries, scripts, and system-level user interface shells. The graphical interface shells are designed to integrate and visually guide a user through the complex steps in an operations process. They provide a user with an integrated system-level picture of an overall process, defining the required inputs and possible output through interactive on-screen graphics. The OEL has developed the software for building these process-oriented graphical user interface (GUI) shells. The OEL Shell development system (OEL Shell) is an extension of JPL's Widget Creation Library (WCL). The OEL Shell system can be used to easily build user interfaces for running complex processes, applications with extensive command-line interfaces, and tool-integration tasks. The interface shells display a logical process flow using arrows and box graphics. They also allow a user to select which output products are desired and which input sources are needed, eliminating the need to know which program and its associated command-line parameters must be executed in each case. The shells have also proved valuable for use as operations training tools because of the OEL Shell hypertext help environment. The OEL toolkit approach is guided by several principles, including the use of ASCII text file interfaces with a multimission format, Perl scripts for mission-specific adaptation code, and programs that include a simple command-line interface for batch mode processing. Projects can adapt the interface shells by simple changes to the resources configuration file. This approach has allowed the development of sophisticated, automated software systems that are easy, cheap, and fast to build. This paper will discuss our toolkit approach and the OEL Shell interface builder in the context of a real operations process example. The paper will discuss the design and implementation of a Ulysses toolkit for generating the mission sequence of events. The Sequence of Events Generation (SEG) system provides an adaptable multimission toolkit for producing a time-ordered listing and timeline display of spacecraft commands, state changes, and required ground activities.

  7. The DFVLR main department for central data processing, 1976 - 1983

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Data processing, equipment and systems operation, operative and user systems, user services, computer networks and communications, text processing, computer graphics, and high power computers are discussed.

  8. Graphical Representations of Data Improve Student Understanding of Measurement and Uncertainty: An Eye-Tracking Study

    ERIC Educational Resources Information Center

    Susac, Ana; Bubic, Andreja; Martinjak, Petra; Planinic, Maja; Palmovic, Marijan

    2017-01-01

    Developing a better understanding of the measurement process and measurement uncertainty is one of the main goals of university physics laboratory courses. This study investigated the influence of graphical representation of data on student understanding and interpreting of measurement results. A sample of 101 undergraduate students (48 first year…

  9. Expanding Students' Analytical Frameworks through the Study of Graphic Novels

    ERIC Educational Resources Information Center

    Connors, Sean P.

    2015-01-01

    When teachers work with students to construct a metalanguage that they can draw on to describe and analyze graphic novels, and then invite students to apply that metalanguage in the service of composing multimodal texts of their own, teachers broaden students' analytical frameworks. In the process of doing so, teachers empower students. In this…

  10. GUIDON-WATCH: A Graphic Interface for Viewing a Knowledge-Based System. Technical Report #14.

    ERIC Educational Resources Information Center

    Richer, Mark H.; Clancey, William J.

    This paper describes GUIDON-WATCH, a graphic interface that uses multiple windows and a mouse to allow a student to browse a knowledge base and view reasoning processes during diagnostic problem solving. The GUIDON project at Stanford University is investigating how knowledge-based systems can provide the basis for teaching programs, and this…

  11. Color-Coded Graphic Organizers for Teaching Writing to Students with Learning Disabilities

    ERIC Educational Resources Information Center

    Ewoldt, Kathy B.; Morgan, Joseph John

    2017-01-01

    A commonly used method for supporting the writing of students with learning disabilities (LD), graphic organizers have been shown to effectively support instruction for students with LD in a variety of content areas (Dexter & Hughes, 2011). Students with LD often struggle with the process of developing their ideas into organized sentences; the…

  12. Installation to Production of a Large-Scale General Purpose Graphics Processing Unit (GPGPU) Cluster at the U.S. Army Research Laboratory: Thufir

    DTIC Science & Technology

    2014-09-01

    semiempirical and ray-optical models. For example, the semiempirical COST-Walfisch-Ikegami model (3) estimates the received power predominantly on the... Books: Philadelphia, PA, 1965. 2. Rick, T.; Mathur, R. Fast Edge-Diffraction-Based Radio Wave Propagation Model for Graphics Hardware. Proceedings of

  13. Integrating Surface Modeling into the Engineering Design Graphics Curriculum

    ERIC Educational Resources Information Center

    Hartman, Nathan W.

    2006-01-01

    It has been suggested there is a knowledge base that surrounds the use of 3D modeling within the engineering design process and correspondingly within engineering design graphics education. While solid modeling receives a great deal of attention and discussion relative to curriculum efforts, and rightly so, surface modeling is an equally viable 3D…

  14. Mediating Emotive Empathy With Informational Text: Three Students' Think-Aloud Protocols of "Gettysburg: The Graphic Novel"

    ERIC Educational Resources Information Center

    Chisholm, James S.; Shelton, Ashley L.; Sheffield, Caroline C.

    2017-01-01

    Although the popularity and use of graphic novels in literacy instruction has increased in the last decade, few sustained analyses have examined adolescents' reading processes with informational texts in social studies classrooms. Recent research that has foregrounded visual, emotional, and embodied textual responses situates this qualitative…

  15. Standardization of a Graphic Symbol System as an Alternative Communication Tool for Turkish

    ERIC Educational Resources Information Center

    Karal, Yasemin; Karal, Hasan; Silbir, Lokman; Altun, Taner

    2016-01-01

    Graphic symbols are commonly used across countries in order to support individuals with communicative deficiency. The literature review revealed the absence of such a system for Turkish socio-cultural context. In this study, the aim was to develop a symbol system appropriate for the Turkish socio-cultural context. The process began with studies…

  16. Perception in statistical graphics

    NASA Astrophysics Data System (ADS)

    VanderPlas, Susan Ruth

    There has been quite a bit of research on statistical graphics and visualization, generally focused on new types of graphics, new software to create graphics, interactivity, and usability studies. Our ability to interpret and use statistical graphics hinges on the interface between the graph itself and the brain that perceives and interprets it, yet there is substantially less research on the interplay between graph, eye, brain, and mind, and what exists is not sufficient to understand the nature of these relationships. The goal of the work presented here is to further explore the interplay between a static graph, the translation of that graph from paper to mental representation (the journey from eye to brain), and the mental processes that operate on that graph once it is transferred into memory (mind). Understanding the perception of statistical graphics should allow researchers to create more effective graphs which produce fewer distortions and viewer errors while reducing the cognitive load necessary to understand the information presented in the graph. Taken together, these experiments should lay a foundation for exploring the perception of statistical graphics. There has been considerable research into the accuracy of numerical judgments viewers make from graphs, and these studies are useful, but it is more effective to understand how errors in these judgments occur so that the root cause of the error can be addressed directly. Understanding how visual reasoning relates to the ability to make judgments from graphs allows us to tailor graphics to particular target audiences. In addition, understanding the hierarchy of salient features in statistical graphics allows us to clearly communicate the important message from data or statistical models by constructing graphics which are designed specifically for the perceptual system.

  17. EAGLEView: A surface and grid generation program and its data management

    NASA Technical Reports Server (NTRS)

    Remotigue, M. G.; Hart, E. T.; Stokes, M. L.

    1992-01-01

    An old and proven grid generation code, the EAGLE grid generation package, is given an added dimension: a graphical interface and a real-time database manager. The Numerical Aerodynamic Simulation (NAS) Panel Library is used for the graphical user interface. Through the panels, EAGLEView constructs the EAGLE script command and sends it to EAGLE to be processed. After the object is created, the script is saved in a mini-buffer which can be edited and/or saved and reinterpreted. The graphical objects are set up in a linked list and can be selected or queried by pointing and clicking with the mouse. The added graphical enhancement to the EAGLE system emphasizes the unique capability to construct field points around complex geometry and visualize the construction every step of the way.

  18. Graphics processing unit based computation for NDE applications

    NASA Astrophysics Data System (ADS)

    Nahas, C. A.; Rajagopal, Prabhu; Balasubramaniam, Krishnan; Krishnamurthy, C. V.

    2012-05-01

    Advances in parallel processing in recent years are helping to reduce the cost of numerical simulation. Breakthroughs in Graphics Processing Unit (GPU) based computation now offer the prospect of further drastic improvements. The introduction of the 'compute unified device architecture' (CUDA) by NVIDIA (the global technology company based in Santa Clara, California, USA) has made programming GPUs for general-purpose computing accessible to the average programmer. Here we use CUDA to develop parallel finite difference schemes applicable to two problems of interest to the NDE community, namely heat diffusion and elastic wave propagation. The implementations are two-dimensional. The performance improvement of the GPU implementation over a serial CPU implementation is then discussed.
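
    As a concrete illustration of the kind of scheme being parallelized, the sketch below advances the two-dimensional heat equation by one explicit finite-difference step in plain NumPy; in a CUDA version like the one described in the abstract, each interior grid point would instead be updated by its own GPU thread. This is an illustrative reference implementation, not the authors' code.

        import numpy as np

        def heat_step(u, alpha, dx, dt):
            """One explicit finite-difference update of the 2-D heat equation."""
            un = u.copy()
            lap = (un[2:, 1:-1] + un[:-2, 1:-1] + un[1:-1, 2:] + un[1:-1, :-2]
                   - 4.0 * un[1:-1, 1:-1]) / dx**2          # 5-point Laplacian stencil
            u[1:-1, 1:-1] = un[1:-1, 1:-1] + alpha * dt * lap
            return u

        # explicit stability limit: dt <= dx**2 / (4 * alpha)
        u = np.zeros((256, 256))
        u[120:136, 120:136] = 1.0                            # hot square as initial condition
        for _ in range(100):
            u = heat_step(u, alpha=1.0, dx=1.0, dt=0.2)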

  19. Social complexity, modernity and suicide: an assessment of Durkheim's suicide from the perspective of a non-linear analysis of complex social systems.

    PubMed

    Condorelli, Rosalia

    2016-01-01

    Can we still share today the vision of modernity that Durkheim left us through his analysis of suicide, or can society 'surprise us'? The answer can be informed by several studies which found that, beginning in the second half of the twentieth century, suicides in the most industrialized and modernized western countries did not increase in the constant, linear way that Durkheim's theory would lead us to predict as modernization and social fragmentation progressed. Despite continued modernization, these studies found stabilizing or falling overall suicide rates. A gradual process of adaptation to the stress of modernization associated with low levels of social integration therefore seems to be at work in modern society. Assuming this perspective, the paper shows how this tendency may be understood in light of the new concept of social systems as complex adaptive systems: systems which are able to adapt to environmental perturbations and generate, as a whole, surprising emergent effects due to nonlinear interactions among their components. Within the framework of nonlinear dynamical system modeling, we formalize the logic of the suicide decision-making process responsible for aggregate-level changes in suicide growth rates as a nonlinear differential equation of logistic form. In doing so, we attempt to capture the mechanism underlying changes in the suicide growth rate and to test the hypothesis that the system's dynamics exhibit a restrained increase, an expression of adaptation to the liquidity of social ties in modern society. In particular, a nonlinear logistic map is applied to suicide data for a modern society, Italy, from 1875 to 2010. The analytic results, which seem to confirm the activation of an adaptation process to the liquidity of social ties, provide an opportunity for a more general reflection on the current configuration of modern society, relating Durkheimian theory to Halbwachs' theory and to more recent visions of modernity such as Bauman's. Complexity completes the interpretative framework by rooting the generating mechanism of the adaptation process in the precondition of a new general theory of systems, one that makes nonlinearity of interactions, and surprise, the functioning and evolution rule of social systems.
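
    For readers unfamiliar with the model class, the sketch below simply iterates the discrete logistic map referred to in the abstract; the parameter values are illustrative and are not estimates from the Italian series.

        def logistic_map(x0, r, n):
            """Iterate x_{t+1} = r * x_t * (1 - x_t) for n steps."""
            xs = [x0]
            for _ in range(n - 1):
                xs.append(r * xs[-1] * (1.0 - xs[-1]))
            return xs

        # For 1 < r <= 3 the trajectory rises and then levels off -- the
        # "restrained increase" pattern the author tests for.
        trajectory = logistic_map(x0=0.05, r=2.5, n=136)   # 136 yearly steps, 1875-2010
        print(round(trajectory[-1], 3))                    # approaches the fixed point 1 - 1/r = 0.6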

  20. Transportable Applications Environment (TAE) Plus: A NASA tool for building and managing graphical user interfaces

    NASA Technical Reports Server (NTRS)

    Szczur, Martha R.

    1991-01-01

    The Transportable Applications Environment (TAE) Plus, developed at GSFC, is an advanced portable user interface development environment which simplifies the process of creating and managing complex application graphical user interfaces (GUIs), supports prototyping, allows applications to be ported easily between different platforms, and encourages appropriate levels of user interface consistency between applications. The following topics are discussed: the capabilities of the TAE Plus tool; how the implementation has utilized state-of-the-art technologies within graphic workstations; and how it has been used both within and outside of NASA.

  1. The Capabilities of the Graphical Observation Scheduling System (GROSS) as Used by the Astro-2 Spacelab Mission

    NASA Technical Reports Server (NTRS)

    Phillips, Shaun

    1996-01-01

    The functionality and editing capabilities of the Graphical Observation Scheduling System (GROSS) are reported. The GROSS system was developed as a replacement for a suite of existing programs and associated processes, with the aim of providing a software tool that combines the functionality of several of those programs and provides a Graphical User Interface (GUI) giving greater data visibility and editing capability. The improved editing capability provided by this approach is considered to have enhanced the efficiency of mission planning for the second astronomical Spacelab mission (ASTRO-2).

  2. Standardized languages and notations for graphical modelling of patient care processes: a systematic review.

    PubMed

    Mincarone, Pierpaolo; Leo, Carlo Giacomo; Trujillo-Martín, Maria Del Mar; Manson, Jan; Guarino, Roberto; Ponzini, Giuseppe; Sabina, Saverio

    2018-04-01

    The importance of working toward quality improvement in healthcare implies an increasing interest in analysing, understanding and optimizing process logic and the sequences of activities embedded in healthcare processes. Their graphical representation promotes faster learning, higher retention and better compliance. The study identifies standardized graphical languages and notations applied to patient care processes and investigates their usefulness in the healthcare setting. The sources were peer-reviewed literature up to 19 May 2016, complemented by a questionnaire sent to the authors of selected studies. The systematic review was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement. Five authors extracted the results of selected studies. Ten articles met the inclusion criteria. One notation and one language for healthcare process modelling were identified with applications to patient care processes: Business Process Model and Notation (BPMN) and the Unified Modeling Language™ (UML). One of the authors of every selected study completed the questionnaire. Comprehensibility for users and facilitation of inter-professional analysis of processes were recognized, in the completed questionnaires, as major strengths of process modelling in healthcare. Both the notation and the language could increase the clarity of presentation thanks to their visual properties, the capacity to easily manage macro and micro scenarios, and the possibility of clearly and precisely representing the process logic. Both could increase the applicability of guidelines/pathways by representing complex scenarios through charts and algorithms, hence contributing to reducing unjustified practice variations, which negatively impact quality of care and patient safety.

  3. People detection method using graphics processing units for a mobile robot with an omnidirectional camera

    NASA Astrophysics Data System (ADS)

    Kang, Sungil; Roh, Annah; Nam, Bodam; Hong, Hyunki

    2011-12-01

    This paper presents a novel vision system for people detection using an omnidirectional camera mounted on a mobile robot. In order to determine regions of interest (ROI), we compute a dense optical flow map using graphics processing units, which enable us to examine compliance with the ego-motion of the robot in a dynamic environment. Shape-based classification algorithms are employed to sort ROIs into human beings and nonhumans. The experimental results show that the proposed system detects people more precisely than previous methods.
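
    A rough idea of the ROI stage can be conveyed with the Python sketch below, which computes a dense Farneback optical-flow field with OpenCV on the CPU and thresholds its magnitude into candidate regions. This is only an illustrative sketch: the actual system computes the dense flow map on the GPU and additionally compensates for the robot's ego-motion, which is omitted here.

        import cv2
        import numpy as np

        def motion_rois(prev_gray, curr_gray, mag_thresh=2.0):
            """Dense optical flow -> bounding boxes of strongly moving regions."""
            flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            mag = np.linalg.norm(flow, axis=2)                    # per-pixel flow magnitude
            mask = (mag > mag_thresh).astype(np.uint8) * 255
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)   # OpenCV 4.x API
            return [cv2.boundingRect(c) for c in contours]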

  4. L3 Interactive Data Language

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hohn, Michael; Adams, Paul

    2006-09-05

    The L3 system is a computational steering environment for image processing and scientific computing. It consists of an interactive graphical language and interface. Its purpose is to help advanced users in controlling their computational software and to assist in the management of data accumulated during numerical experiments. L3 provides a combination of features not found in other environments; these are: textual and graphical construction of programs; persistence of programs and associated data; direct mapping between the scripts, the parameters, and the produced data; implicit hierarchical data organization; full programmability, including conditionals and functions; and incremental execution of programs. The software includes the l3 language and the graphical environment. The language is a single-assignment functional language; the implementation consists of a lexer, parser, interpreter, storage handler, and editing support. The graphical environment is an event-driven nested list viewer/editor providing graphical elements corresponding to the language. These elements are both the representation of a user's program and active interfaces to the values computed by that program.

  5. Color postprocessing for 3-dimensional finite element mesh quality evaluation and evolving graphical workstation

    NASA Technical Reports Server (NTRS)

    Panthaki, Malcolm J.

    1987-01-01

    Three general tasks in general-purpose, interactive color graphics postprocessing for three-dimensional computational mechanics were accomplished. First, the existing program (POSTPRO3D) was ported to a high-resolution device; in the course of this transfer, numerous enhancements were implemented in the program. The performance of the hardware was evaluated from the point of view of engineering postprocessing, and the characteristics of future hardware were discussed. Second, interactive graphical tools were implemented to facilitate qualitative mesh evaluation from a single analysis. The literature was surveyed and a bibliography compiled. Qualitative mesh sensors were examined, and the use of two-dimensional plots of unaveraged responses on the surface of three-dimensional continua was emphasized in an interactive color raster graphics environment. Finally, a postprocessing environment was designed for state-of-the-art workstation technology. Modularity, personalization of the environment, integration of the engineering design processes, and the development and use of high-level graphics tools are some of the features of the intended environment.

  6. Integration of Modelling and Graphics to Create an Infrared Signal Processing Test Bed

    NASA Astrophysics Data System (ADS)

    Sethi, H. R.; Ralph, John E.

    1989-03-01

    The work reported in this paper was carried out as part of a contract with MoD (PE) UK. It considers the problems associated with realistic modelling of a passive infrared system in an operational environment. Ideally all aspects of the system and environment should be integrated into a complete end-to-end simulation, but in the past limited computing power has prevented this. Recent developments in workstation technology and the increasing availability of parallel processing techniques make end-to-end simulation possible. However, the complexity and speed of such simulations make it difficult for the operator to control the software and understand the results. These difficulties can be greatly reduced by providing an extremely user-friendly interface and a very flexible, high-power, high-resolution colour graphics capability. Most system modelling is based on separate software simulation of the individual components of the system itself and its environment. These component models may have their own characteristic inbuilt assumptions and approximations, may be written in the language favoured by the originator, and may have a wide variety of input and output conventions and requirements. The models and their limitations need to be matched to the range of conditions appropriate to the operational scenario. A comprehensive set of databases needs to be generated by the component models, and these databases must be made readily available to the investigator. Performance measures need to be defined and displayed in some convenient graphical form. Some options are presented for combining available hardware and software to create an environment within which the models can be integrated, and which provides the required man-machine interface, graphics and computing power. The impact of massively parallel processing and artificial intelligence is discussed. Parallel processing will make real-time end-to-end simulation possible and will greatly improve the graphical visualisation of the model output data. Artificial intelligence should help to enhance the man-machine interface.

  7. The Information System at CeSAM

    NASA Astrophysics Data System (ADS)

    Agneray, F.; Gimenez, S.; Moreau, C.; Roehlly, Y.

    2012-09-01

    Modern large observational programmes produce large amounts of data from various origins and need high-level quality control, fast data access via easy-to-use graphical interfaces, and the possibility to cross-correlate information coming from different observations. The Centre de donnéeS Astrophysique de Marseille (CeSAM) offers web access to VO-compliant information systems for the data of different projects (VVDS, HeDAM, EXODAT, HST-COSMOS,…), including ancillary data obtained outside Laboratoire d'Astrophysique de Marseille (LAM) control. The CeSAM information systems provide catalogue downloads and additional services such as searching, extracting and displaying imaging and spectroscopic data through multi-criteria and Cone Search interfaces.

  8. Pre- and postprocessing techniques for determining goodness of computational meshes

    NASA Technical Reports Server (NTRS)

    Oden, J. Tinsley; Westermann, T.; Bass, J. M.

    1993-01-01

    Research in error estimation, mesh conditioning, and solution enhancement for finite element, finite difference, and finite volume methods has been incorporated into AUDITOR, a modern, user-friendly code, which operates on 2D and 3D unstructured neutral files to improve the accuracy and reliability of computational results. Residual error estimation capabilities provide local and global estimates of solution error in the energy norm. Higher order results for derived quantities may be extracted from initial solutions. Within the X-MOTIF graphical user interface, extensive visualization capabilities support critical evaluation of results in linear elasticity, steady state heat transfer, and both compressible and incompressible fluid dynamics.

  9. Computational Physics in a Nutshell

    NASA Astrophysics Data System (ADS)

    Schillaci, Michael

    2001-11-01

    Too often students of science are expected to "pick up" what they need to know about the Art of Science. A description of the two-semester Computational Physics course being taught by the author offers a remedy to this situation. The course teaches students the three pillars of modern scientific research: Problem Solving, Programming, and Presentation. Using FORTRAN, LaTeX2e, MAPLE V, HTML, and JAVA, students learn the fundamentals of algorithm development, how to implement classes and packages written by others, how to produce publication-quality graphics and documents, and how to publish them on the World Wide Web. The course content is outlined and project examples are offered.

  10. Astronomical Simulations Using Visual Python

    NASA Astrophysics Data System (ADS)

    Cobb, Michael L.

    2007-05-01

    The Physics and Engineering Physics Department at Southeast Missouri State University has adopted the “Matter and Interactions I: Modern Mechanics” text by Chabay and Sherwood for our calculus-based introductory physics course. We have fully integrated the use of modeling and simulations by using the Visual Python language, also known as VPython. This powerful, high-level, object-oriented language with full three-dimensional, stereo graphics has stimulated both my students and myself to find wider applications for our newfound skills. We have successfully modeled gravitational resonances in planetary rings, galaxy collisions, and planetary orbits around binary star systems. This talk will provide a quick overview of VPython and demonstrate the various simulations.
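
    A minimal VPython example in the spirit of those simulations is sketched below: a single planet in a Newtonian orbit around a fixed star, advanced with an Euler-Cromer update. The numbers are illustrative, and this is not one of the course's actual models.

        from vpython import sphere, vector, color, rate, mag

        G = 6.674e-11
        star = sphere(pos=vector(0, 0, 0), radius=7e9, color=color.yellow)
        star.m = 2e30
        planet = sphere(pos=vector(1.5e11, 0, 0), radius=4e9,
                        color=color.cyan, make_trail=True)
        planet.m = 6e24
        planet.v = vector(0, 2.7e4, 0)

        dt = 3600.0                                   # one-hour time step
        while True:
            rate(200)                                 # cap the animation rate
            r = planet.pos - star.pos
            a = -G * star.m * r / mag(r)**3           # gravitational acceleration
            planet.v = planet.v + a * dt              # Euler-Cromer velocity update
            planet.pos = planet.pos + planet.v * dt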

  11. General Purpose Graphics Processing Unit Based High-Rate Rice Decompression and Reed-Solomon Decoding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loughry, Thomas A.

    As the volume of data acquired by space-based sensors increases, mission data compression/decompression and forward error correction code processing performance must likewise scale. This competency development effort explored using the General Purpose Graphics Processing Unit (GPGPU) to accomplish high-rate Rice Decompression and high-rate Reed-Solomon (RS) decoding at the satellite mission ground station. Each algorithm was implemented and benchmarked on a single GPGPU. Distributed processing across one to four GPGPUs was also investigated. The results show that the GPGPU has considerable potential for performing satellite communication Data Signal Processing, with three times or better performance improvements and up to ten times reduction in cost over custom hardware, at least in the case of Rice Decompression and Reed-Solomon Decoding.

  12. Graphics processing unit accelerated intensity-based optical coherence tomography angiography using differential frames with real-time motion correction.

    PubMed

    Watanabe, Yuuki; Takahashi, Yuhei; Numazawa, Hiroshi

    2014-02-01

    We demonstrate intensity-based optical coherence tomography (OCT) angiography using the squared difference of two sequential frames with bulk-tissue-motion (BTM) correction. This motion correction was performed by minimization of the sum of the pixel values using axial- and lateral-pixel-shifted structural OCT images. We extract the BTM-corrected image from a total of 25 calculated OCT angiographic images. Image processing was accelerated by a graphics processing unit (GPU) with many stream processors to optimize the parallel processing procedure. The GPU processing rate was faster than that of a line scan camera (46.9 kHz). Our OCT system provides the means of displaying structural OCT images and BTM-corrected OCT angiographic images in real time.
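
    The core computation can be sketched in a few lines of NumPy: form the squared difference of two structural frames for each of 25 axial/lateral pixel shifts and keep the one whose summed intensity is smallest. The paper evaluates these candidates in parallel on the GPU; the sketch below is an illustrative CPU version, not the authors' code.

        import numpy as np

        def btm_corrected_angiogram(frame1, frame2, max_shift=2):
            """Squared-difference angiogram with a brute-force bulk-tissue-motion
            search over (2*max_shift+1)**2 = 25 axial/lateral pixel shifts."""
            best, best_sum = None, np.inf
            for dz in range(-max_shift, max_shift + 1):        # axial shift
                for dx in range(-max_shift, max_shift + 1):    # lateral shift
                    shifted = np.roll(frame2, shift=(dz, dx), axis=(0, 1))
                    diff = (frame1 - shifted) ** 2
                    s = diff.sum()
                    if s < best_sum:                           # keep the minimal-sum candidate
                        best_sum, best = s, diff
            return best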

  13. GPU acceleration of Runge Kutta-Fehlberg and its comparison with Dormand-Prince method

    NASA Astrophysics Data System (ADS)

    Seen, Wo Mei; Gobithaasan, R. U.; Miura, Kenjiro T.

    2014-07-01

    There is a significant reduction of processing time and a speedup of performance in computer graphics with the emergence of Graphics Processing Units (GPUs). GPUs have been developed to surpass the Central Processing Unit (CPU) in terms of performance and processing speed. This evolution has opened up a new area of computing and research where highly parallel GPUs are used for non-graphical algorithms. Physical or phenomenal simulations and modelling can be accelerated through General Purpose Graphics Processing Unit (GPGPU) and Compute Unified Device Architecture (CUDA) implementations. These phenomena can be represented with mathematical models in the form of Ordinary Differential Equations (ODEs), which capture the rate of change between independent and dependent variables. ODEs are numerically integrated over time in order to simulate these behaviours. The classical Runge-Kutta (RK) scheme is the common method used to numerically solve ODEs. The Runge-Kutta-Fehlberg (RKF) scheme has been specially developed to provide an estimate of the principal local truncation error at each step, known as the embedded estimate technique. This paper delves into the implementation of the RKF scheme for GPU devices and compares its results with the Dormand-Prince method. A pseudo code is developed to show the implementation in detail. Hence, practitioners will be able to understand the data allocation on the GPU, the formation of RKF kernels, and the flow of data to/from the GPU and CPU upon RKF kernel evaluation. The pseudo code is then written in the C language, and two ODE models are executed to show the achievable speedup compared to a CPU implementation. The accuracy and efficiency of the proposed implementation method are discussed in the final section of this paper.
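
    The embedded error estimate that distinguishes RKF from the classical scheme is easy to see in code: one step produces both a fifth-order and a fourth-order solution from the same six stage evaluations, and their difference estimates the local truncation error. The sketch below uses the standard Fehlberg coefficients and is an illustrative CPU version in Python, not the paper's GPU pseudo code.

        import numpy as np

        def rkf45_step(f, t, y, h):
            """One Runge-Kutta-Fehlberg step: returns the 5th-order solution
            and an estimate of the local truncation error."""
            k1 = f(t, y)
            k2 = f(t + h/4,     y + h*k1/4)
            k3 = f(t + 3*h/8,   y + h*(3*k1/32 + 9*k2/32))
            k4 = f(t + 12*h/13, y + h*(1932*k1 - 7200*k2 + 7296*k3)/2197)
            k5 = f(t + h,       y + h*(439*k1/216 - 8*k2 + 3680*k3/513 - 845*k4/4104))
            k6 = f(t + h/2,     y + h*(-8*k1/27 + 2*k2 - 3544*k3/2565 + 1859*k4/4104 - 11*k5/40))
            y4 = y + h*(25*k1/216 + 1408*k3/2565 + 2197*k4/4104 - k5/5)                 # embedded 4th order
            y5 = y + h*(16*k1/135 + 6656*k3/12825 + 28561*k4/56430 - 9*k5/50 + 2*k6/55) # 5th order
            return y5, np.abs(y5 - y4)

        # exponential decay y' = -y, exact solution exp(-t)
        y_next, err = rkf45_step(lambda t, y: -y, 0.0, np.array([1.0]), 0.1)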

  14. Graphic Strategies for Analyzing and Interpreting Curricular Mapping Data

    PubMed Central

    Leonard, Sean T.

    2010-01-01

    Objective To describe curricular mapping strategies used in analyzing and interpreting curricular mapping data and present findings on how these strategies were used to facilitate curricular development. Design Nova Southeastern University's doctor of pharmacy curriculum was mapped to the college's educational outcomes. The mapping process included development of educational outcomes followed by analysis of course material and semi-structured interviews with course faculty members. Data collected per course outcome included learning opportunities and assessment measures used. Assessment Nearly 1,000 variables and 10,000 discrete rows of curricular data were collected. Graphic representations of curricular data were created using bar charts and stacked area graphs relating the learning opportunities to the educational outcomes. Graphs were used in the curricular evaluation and development processes to facilitate the identification of curricular holes, sequencing misalignments, learning opportunities, and assessment measures. Conclusion Mapping strategies that use graphic representations of curricular data serve as effective diagnostic and curricular development tools. PMID:20798804

  15. Graphic strategies for analyzing and interpreting curricular mapping data.

    PubMed

    Armayor, Graciela M; Leonard, Sean T

    2010-06-15

    To describe curricular mapping strategies used in analyzing and interpreting curricular mapping data and present findings on how these strategies were used to facilitate curricular development. Nova Southeastern University's doctor of pharmacy curriculum was mapped to the college's educational outcomes. The mapping process included development of educational outcomes followed by analysis of course material and semi-structured interviews with course faculty members. Data collected per course outcome included learning opportunities and assessment measures used. Nearly 1,000 variables and 10,000 discrete rows of curricular data were collected. Graphic representations of curricular data were created using bar charts and stacked area graphs relating the learning opportunities to the educational outcomes. Graphs were used in the curricular evaluation and development processes to facilitate the identification of curricular holes, sequencing misalignments, learning opportunities, and assessment measures. Mapping strategies that use graphic representations of curricular data serve as effective diagnostic and curricular development tools.

  16. Potential Application of a Graphical Processing Unit to Parallel Computations in the NUBEAM Code

    NASA Astrophysics Data System (ADS)

    Payne, J.; McCune, D.; Prater, R.

    2010-11-01

    NUBEAM is a comprehensive computational Monte Carlo based model for neutral beam injection (NBI) in tokamaks. NUBEAM computes NBI-relevant profiles in tokamak plasmas by tracking the deposition and the slowing of fast ions. At the core of NUBEAM are vector calculations used to track fast ions. These calculations have recently been parallelized to run on MPI clusters. However, cost and interlink bandwidth limit the ability to fully parallelize NUBEAM on an MPI cluster. Recent implementation of double precision capabilities for Graphical Processing Units (GPUs) presents a cost effective and high performance alternative or complement to MPI computation. Commercially available graphics cards can achieve up to 672 GFLOPS double precision and can handle hundreds of thousands of threads. The ability to execute at least one thread per particle simultaneously could significantly reduce the execution time and the statistical noise of NUBEAM. Progress on implementation on a GPU will be presented.
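
    The "one thread per particle" idea can be sketched with GPU arrays in Python: each fast ion's state occupies one row of a device array, and every elementwise update launches a kernel that touches all particles in parallel. The drag factor below is a generic placeholder, not NUBEAM's slowing-down physics, and the whole fragment is only an illustrative sketch.

        import math
        import cupy as cp   # CUDA-backed arrays; requires an NVIDIA GPU

        n_particles = 200_000
        pos = cp.random.rand(n_particles, 3).astype(cp.float32)               # particle positions
        vel = cp.random.standard_normal((n_particles, 3)).astype(cp.float32)  # particle velocities

        dt, nu = 1.0e-6, 5.0e3            # time step and illustrative drag rate
        decay = math.exp(-nu * dt)
        for _ in range(1000):
            pos += vel * dt               # elementwise kernel: all particles advance in parallel
            vel *= decay                  # generic slowing-down placeholder
        cp.cuda.Stream.null.synchronize() # wait for the asynchronous GPU work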

  17. A human performance evaluation of graphic symbol-design features.

    PubMed

    Samet, M G; Geiselman, R E; Landee, B M

    1982-06-01

    16 subjects learned each of two tactical display symbol sets (conventional symbols and iconic symbols) in turn and were then shown a series of graphic displays containing various symbol configurations. For each display, the subject was asked questions corresponding to different behavioral processes relating to symbol use (identification, search, comparison, pattern recognition). The results indicated that: (a) conventional symbols yielded faster pattern-recognition performance than iconic symbols, and iconic symbols did not yield faster identification than conventional symbols, and (b) the portrayal of additional feature information (through the use of perimeter density or vector projection coding) slowed processing of the core symbol information in four tasks, but certain symbol-design features created less perceptual interference and had greater correspondence with the portrayal of specific tactical concepts than others. The results were discussed in terms of the complexities involved in the selection of symbol design features for use in graphic tactical displays.

  18. Evaluation of graphic cardiovascular display in a high-fidelity simulator.

    PubMed

    Agutter, James; Drews, Frank; Syroid, Noah; Westneskow, Dwayne; Albert, Rob; Strayer, David; Bermudez, Julio; Weinger, Matthew B

    2003-11-01

    "Human error" in anesthesia can be attributed to misleading information from patient monitors or to the physician's failure to recognize a pattern. A graphic representation of monitored data may provide better support for detection, diagnosis, and treatment. We designed a graphic display to show hemodynamic variables. Twenty anesthesiologists were asked to assume care of a simulated patient. Half the participants used the graphic cardiovascular display; the other half used a Datex As/3 monitor. One scenario was a total hip replacement with a transfusion reaction to mismatched blood. The second scenario was a radical prostatectomy with 1.5 L of blood loss and myocardial ischemia. Subjects who used the graphic display detected myocardial ischemia 2 min sooner than those who did not use the display. Treatment was initiated sooner (2.5 versus 4.9 min). There were no significant differences between groups in the hip replacement scenario. Systolic blood pressure deviated less from baseline, central venous pressure was closer to its baseline, and arterial oxygen saturation was higher at the end of the case when the graphic display was used. The study lends some support for the hypothesis that providing clinical information graphically in a display designed with emergent features and functional relationships can improve clinicians' ability to detect, diagnose, manage, and treat critical cardiovascular events in a simulated environment. A graphic representation of monitored data may provide better support for detection, diagnosis, and treatment. A user-centered design process led to a novel object-oriented graphic display of hemodynamic variables containing emergent features and functional relationships. In a simulated environment, this display appeared to support clinicians' ability to diagnose, manage, and treat a critical cardiovascular event in a simulated environment. We designed a graphic display to show hemodynamic variables. The study provides some support for the hypothesis that providing clinical information graphically in a display designed with emergent features and functional relationships can improve clinicians' ability to detect, diagnosis, mange, and treat critical cardiovascular events in a simulated environment.

  19. Graphic gambling warnings: how they affect emotions, cognitive responses and attitude change.

    PubMed

    Muñoz, Yaromir; Chebat, Jean-Charles; Borges, Adilson

    2013-09-01

    The present study focuses on the effects of graphic warnings related to excessive gambling. It is based upon a theoretical model derived from both the Protection Motivation Theory (PMT) and the Elaboration Likelihood Model (ELM). We focus on video lottery terminals (VLTs), one of the most hazardous formats in the gaming industry. Our cohort consisted of 103 actual gamblers who reported regular previous gambling activity on VLTs. We assess the effectiveness of graphic warnings vs. text-only warnings and the effectiveness of two major arguments (i.e., family vs. financial disruption). A 2 × 2 factorial design was used to test the direct and combined effects of two variables (i.e., warning content and presence vs. absence of a graphic). It was found that the presence of a graphic enhances both cognitive appraisal and fear, and has positive effects on the depth of information processing. In addition, graphic content combined with the family-disruption argument is more effective for changing attitudes and promoting compliance with the warning than other combinations of the manipulated variables. It is proposed that ELM and PMT complement each other in explaining the effects of warnings. Theoretical and practical implications are discussed.

  20. How To... Get Creative with WordArt

    ERIC Educational Resources Information Center

    Lindroth, Linda

    2004-01-01

    WordArt is a wizard feature in MS Word that changes text into a graphic object. It is located in the MS Word menu bar: Insert, Picture, WordArt. Text can be edited to create a multitude of special effects--all with very little, if any, graphic arts training. WordArt is perfect for word processing writing, allowing even primary students to create…
