Science.gov

Sample records for mobile graphics processing

  1. Evaluating Mobile Graphics Processing Units (GPUs) for Real-Time Resource Constrained Applications

    SciTech Connect

    Meredith, J; Conger, J; Liu, Y; Johnson, J

    2005-11-11

    Modern graphics processing units (GPUs) can provide tremendous performance boosts for some applications beyond what a single CPU can accomplish, and their performance is growing faster than that of CPUs as well. Mobile GPUs available for laptops have the small form factor and low power requirements suitable for use in embedded processing. We evaluated several desktop and mobile GPUs and CPUs on traditional and non-traditional graphics tasks, as well as on the most time-consuming pieces of a full hyperspectral imaging application. Accuracy remained high despite small differences in arithmetic operations like rounding. Performance improvements are summarized here relative to a desktop Pentium 4 CPU.

  2. People detection method using graphics processing units for a mobile robot with an omnidirectional camera

    NASA Astrophysics Data System (ADS)

    Kang, Sungil; Roh, Annah; Nam, Bodam; Hong, Hyunki

    2011-12-01

    This paper presents a novel vision system for people detection using an omnidirectional camera mounted on a mobile robot. In order to determine regions of interest (ROI), we compute a dense optical flow map using graphics processing units, which enable us to examine compliance with the ego-motion of the robot in a dynamic environment. Shape-based classification algorithms are employed to sort ROIs into human beings and nonhumans. The experimental results show that the proposed system detects people more precisely than previous methods.

  3. Graphical Language for Data Processing

    NASA Technical Reports Server (NTRS)

    Alphonso, Keith

    2011-01-01

    A graphical language for processing data allows processing elements to be connected with virtual wires that represent data flows between processing modules. The processing of complex data, such as lidar data, requires many different algorithms to be applied. The purpose of this innovation is to automate the processing of complex data, such as lidar data, without the need for complex scripting and programming languages. The system consists of a set of user-interface components that allow the user to drag and drop various algorithmic and processing components onto a process graph. By working graphically, the user can completely visualize the process flow and create complex diagrams. This innovation supports the nesting of graphs, such that a graph can be included in another graph as a single step for processing. In addition to the user-interface components, the system includes a set of .NET classes that represent the graph internally. These classes provide the internal system representation of the graphical user interface. The system includes a graph execution component that reads the internal representation of the graph (as described above) and executes that graph. The execution of the graph follows the interpreted model of execution in that each node is traversed and executed from the original internal representation. In addition, there are components that allow external code elements, such as algorithms, to be easily integrated into the system, thus making the system infinitely expandable.

  4. Graphic Design in Libraries: A Conceptual Process

    ERIC Educational Resources Information Center

    Ruiz, Miguel

    2014-01-01

    Providing successful library services requires efficient and effective communication with users; therefore, it is important that content creators who develop visual materials understand key components of design and, specifically, develop a holistic graphic design process. Graphic design, as a form of visual communication, is the process of…

  5. Graphics hardware accelerated panorama builder for mobile phones

    NASA Astrophysics Data System (ADS)

    Bordallo López, Miguel; Hannuksela, Jari; Silvén, Olli; Vehviläinen, Markku

    2009-02-01

    Modern mobile communication devices frequently contain built-in cameras that allow users to capture high-resolution still images, but at the same time the imaging applications face both usability and throughput bottlenecks. The difficulties of taking ad hoc pictures of printed paper documents with multi-megapixel cellular phone cameras, a common business use case, illustrate these problems for anyone. The result can be examined only after several seconds and is often blurry, so a new picture is needed even though the viewfinder image had looked good. The process can be frustrating, with long waits and no way for the user to predict the quality beforehand. The problems can be traced to the mismatch between processor speed and camera resolution, and to the interactivity demands of the application. In this context we analyze building mosaic images of printed documents from frames selected from VGA-resolution (640x480 pixel) video. High interactivity is achieved by providing real-time feedback on quality while simultaneously guiding the user's actions. The graphics processing unit of the mobile device can be used to speed up the reconstruction computations. To demonstrate the viability of the concept, we present an interactive document scanning application implemented on a Nokia N95 mobile phone.

  6. Cockpit weather graphics using mobile satellite communications

    NASA Technical Reports Server (NTRS)

    Seth, Shashi

    1993-01-01

    Many new companies are pushing state-of-the-art technology to bring a revolution to the cockpits of General Aviation (GA) aircraft. The vision, according to Dr. Bruce Holmes, the Assistant Director for Aeronautics at the National Aeronautics and Space Administration's (NASA) Langley Research Center, is to provide such an advanced flight control system that the motor and cognitive skills you use to drive a car would be very similar to the ones you would use to fly an airplane. We at ViGYAN, Inc., are currently developing a system called the Pilot Weather Advisor (PWxA), which would be a part of such an advanced-technology flight management system. The PWxA provides graphical depictions of weather information in the cockpit of aircraft in near real time, through the use of broadcast satellite communications. The purpose of this system is to improve the safety and utility of GA aircraft operations. Considerable effort is being expended on research in the design of graphical weather systems, notably the works of Scanlon and Dash. The concept of providing pilots with graphical depictions of weather conditions, overlaid on geographical and navigational maps, is extremely powerful.

  7. Optimization Techniques for 3D Graphics Deployment on Mobile Devices

    NASA Astrophysics Data System (ADS)

    Koskela, Timo; Vatjus-Anttila, Jarkko

    2015-03-01

    3D Internet technologies are becoming essential enablers in many application areas including games, education, collaboration, navigation and social networking. The use of 3D Internet applications with mobile devices provides location-independent access and richer use context, but also performance issues. Therefore, one of the important challenges facing 3D Internet applications is the deployment of 3D graphics on mobile devices. In this article, we present an extensive survey on optimization techniques for 3D graphics deployment on mobile devices and qualitatively analyze the applicability of each technique from the standpoints of visual quality, performance and energy consumption. The analysis focuses on optimization techniques related to data-driven 3D graphics deployment, because it supports off-line use, multi-user interaction, user-created 3D graphics and creation of arbitrary 3D graphics. The outcome of the analysis facilitates the development and deployment of 3D Internet applications on mobile devices and provides guidelines for future research.

  8. Process and representation in graphical displays

    NASA Technical Reports Server (NTRS)

    Gillan, Douglas J.; Lewis, Robert; Rudisill, Marianne

    1990-01-01

    How people comprehend graphics is examined. Graphical comprehension involves the cognitive representation of information from a graphic display and the processing strategies that people apply to answer questions about graphics. Research on representation has examined both the features present in a graphic display and the cognitive representation of the graphic. The key features include the physical components of a graph, the relation between the figure and its axes, and the information in the graph. Tests of people's memory for graphs indicate that both the physical and informational aspect of a graph are important in the cognitive representation of a graph. However, the physical (or perceptual) features overshadow the information to a large degree. Processing strategies also involve a perception-information distinction. In order to answer simple questions (e.g., determining the value of a variable, comparing several variables, and determining the mean of a set of variables), people switch between two information processing strategies: (1) an arithmetic, look-up strategy in which they use a graph much like a table, looking up values and performing arithmetic calculations; and (2) a perceptual strategy in which they use the spatial characteristics of the graph to make comparisons and estimations. The user's choice of strategies depends on the task and the characteristics of the graph. A theory of graphic comprehension is presented.

  9. Process and representation in graphical displays

    NASA Technical Reports Server (NTRS)

    Gillan, Douglas J.; Lewis, Robert; Rudisill, Marianne

    1993-01-01

    Our initial model of graphic comprehension has focused on statistical graphs. Like other models of human-computer interaction, models of graphical comprehension can be used by human-computer interface designers and developers to create interfaces that present information in an efficient and usable manner. Our investigation of graph comprehension addresses two primary questions: how do people represent the information contained in a data graph, and how do they process information from the graph? The topics of focus for graphic representation concern the features into which people decompose a graph and the representations of the graph in memory. The issue of processing can be further analyzed as two questions: what overall processing strategies do people use, and what are the specific processing skills required?

  10. Graphics processing unit-assisted lossless decompression

    DOEpatents

    Loughry, Thomas A.

    2016-04-12

    Systems and methods for decompressing compressed data that has been compressed by way of a lossless compression algorithm are described herein. In a general embodiment, a graphics processing unit (GPU) is programmed to receive compressed data packets and decompress such packets in parallel. The compressed data packets are compressed representations of an image, and the lossless compression algorithm is a Rice compression algorithm.
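
    The patent abstract gives no source, but the packet-level parallelism it claims is straightforward to picture. Below is a minimal, hypothetical CUDA sketch of Golomb-Rice decoding with one thread per compressed packet; the packet framing, the bit convention (unary-coded quotient as ones terminated by a zero, followed by k remainder bits), and the fixed parameter k are illustrative assumptions, not the patented method.

    ```cuda
    #include <cuda_runtime.h>

    // One thread decodes one packet, so packets decompress in parallel.
    // bits: concatenated packet payloads; pktOffset: byte offset of each
    // packet; outOffset/count: where each packet's samples go and how many.
    __global__ void riceDecode(const unsigned char *bits, const int *pktOffset,
                               int *out, const int *outOffset,
                               const int *count, int nPackets, int k) {
        int p = blockIdx.x * blockDim.x + threadIdx.x;
        if (p >= nPackets) return;
        long bitPos = (long)pktOffset[p] * 8;          // packet start, in bits
        for (int s = 0; s < count[p]; ++s) {
            int q = 0;                                 // unary quotient: 1...10
            while ((bits[bitPos >> 3] >> (7 - (bitPos & 7))) & 1) {
                ++q; ++bitPos;
            }
            ++bitPos;                                  // consume the 0 terminator
            int r = 0;                                 // k-bit remainder
            for (int b = 0; b < k; ++b) {
                r = (r << 1) | ((bits[bitPos >> 3] >> (7 - (bitPos & 7))) & 1);
                ++bitPos;
            }
            out[outOffset[p] + s] = (q << k) | r;      // sample = q * 2^k + r
        }
    }
    ```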

  11. Graphics Processing Unit Assisted Thermographic Compositing

    NASA Technical Reports Server (NTRS)

    Ragasa, Scott; Russell, Samuel S.

    2012-01-01

    Objective: Develop a software application utilizing high-performance computing techniques, including general-purpose graphics processing units (GPGPUs), for the analysis and visualization of large thermographic data sets. Over the past several years, an increasing effort among scientists and engineers to utilize graphics processing units (GPUs) in a more general-purpose fashion has been allowing for previously unobtainable levels of computation by individual workstations. As data sets grow, the methods to work them grow at an equal, and often greater, pace. Certain common computations can take advantage of the massively parallel and optimized hardware constructs of the GPU, which yield significant increases in performance. These common computations have high degrees of data parallelism; that is, they are the same computation applied to a large set of data where the result does not depend on other data elements. Image processing is one area where GPUs are being used to greatly increase the performance of certain analysis and visualization techniques.

  12. Graphical analysis of power systems for mobile robotics

    NASA Astrophysics Data System (ADS)

    Raade, Justin William

    The field of mobile robotics places stringent demands on the power system. Energetic autonomy, or the ability to function for a useful operation time independent of any tether, refueling, or recharging, is a driving force in a robot designed for a field application. The focus of this dissertation is the development of two graphical analysis tools, namely Ragone plots and optimal hybridization plots, for the design of human scale mobile robotic power systems. These tools contribute to the intuitive understanding of the performance of a power system and expand the toolbox of the design engineer. Ragone plots are useful for graphically comparing the merits of different power systems for a wide range of operation times. They plot the specific power versus the specific energy of a system on logarithmic scales. The driving equations in the creation of a Ragone plot are derived in terms of several important system parameters. Trends at extreme operation times (both very short and very long) are examined. Ragone plot analysis is applied to the design of several power systems for high-power human exoskeletons. Power systems examined include a monopropellant-powered free piston hydraulic pump, a gasoline-powered internal combustion engine with hydraulic actuators, and a fuel cell with electric actuators. Hybrid power systems consist of two or more distinct energy sources that are used together to meet a single load. They can often outperform non-hybrid power systems in low duty-cycle applications or those with widely varying load profiles and long operation times. Two types of energy sources are defined: engine-like and capacitive. The hybridization rules for different combinations of energy sources are derived using graphical plots of hybrid power system mass versus the primary system power. Optimal hybridization analysis is applied to several power systems for low-power human exoskeletons. Hybrid power systems examined include a fuel cell and a solar panel coupled with…
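
    The record stops before the driving equations, but the core Ragone relationship follows directly from the quantities named above. As a brief sketch (notation mine, not necessarily the dissertation's): writing E for deliverable energy, P for deliverable power, and m for system mass, the specific energy and specific power are ε = E/m and π = P/m, and the operation time satisfies

    ```latex
    t_{\mathrm{op}} \;=\; \frac{E}{P} \;=\; \frac{\varepsilon}{\pi}
    \qquad\Longrightarrow\qquad
    \log \pi \;=\; \log \varepsilon \;-\; \log t_{\mathrm{op}}
    ```

    so on the log-log Ragone axes every constant operation time is a unit-slope diagonal, and comparing candidate power systems reduces to seeing whose curve lies furthest toward the upper right at the operation time of interest.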

  13. Graphics Processing Unit Assisted Thermographic Compositing

    NASA Technical Reports Server (NTRS)

    Ragasa, Scott; McDougal, Matthew; Russell, Sam

    2013-01-01

    Objective: To develop a software application utilizing general-purpose graphics processing units (GPUs) for the analysis of large sets of thermographic data. Background: Over the past few years, an increasing effort among scientists and engineers to utilize the GPU in a more general-purpose fashion has been allowing for supercomputer-level results at individual workstations. As data sets grow, the methods to work them grow at an equal, and often greater, pace. Certain common computations can take advantage of the massively parallel and optimized hardware constructs of the GPU to allow for throughput that was previously reserved for compute clusters. These common computations have high degrees of data parallelism; that is, they are the same computation applied to a large set of data where the result does not depend on other data elements. Signal (image) processing is one area where GPUs are being used to greatly increase the performance of certain algorithms and analysis techniques.

  14. Process control graphics for petrochemical plants

    SciTech Connect

    Lieber, R.E.

    1982-12-01

    Describes many specialized features of a computer control system, schematic/graphics in particular, which are vital to effectively run today's complex refineries and chemical plants. Illustrates such control systems as a full-graphic control house panel of the 60s, a European refinery control house of the early 70s, and the Ingolstadt refinery control house. Presents a diagram showing a shape library. Implementation of state-of-the-art control theory, distributed control, dual hi-way digital instrument systems, and many other person-machine interface developments have been prime factors in process control. Further developments in person-machine interfaces are in progress, including voice input/output, touch screens, and other entry devices. Color usage, angle of projection, control house lighting, and pattern recognition are all being studied by vendors, users, and academics. These studies involve psychologists concerned with "quality of life" factors, employee relations personnel concerned with labor contracts or restrictions, as well as operations personnel concerned with just getting the plant to run better.

  15. Kernel density estimation using graphical processing unit

    NASA Astrophysics Data System (ADS)

    Sunarko; Su'ud, Zaki

    2015-09-01

    Kernel density estimation for particles distributed over a 2-dimensional space is calculated using a single graphical processing unit (GTX 660Ti GPU) and the CUDA-C language. Parallel calculations are done for particles having a bivariate normal distribution, by assigning the calculations for equally-spaced node points to each scalar processor in the GPU. The numbers of particles, blocks, and threads are varied to identify a favorable configuration. Comparisons are obtained by performing the same calculation using 1, 2 and 4 processors on a 3.0 GHz CPU using MPICH 2.0 routines. Speedups attained with the GPU are in the range of 88 to 349 times compared to the multiprocessor CPU. Blocks of 128 threads are found to be the optimum configuration for this case.
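
    As a rough illustration of the computation described above (a sketch, not the paper's code; the Gaussian kernel form, normalization, and parameter names are assumptions), a CUDA-C kernel can assign one node point to each thread and sum a bivariate Gaussian kernel over all particles:

    ```cuda
    #include <cuda_runtime.h>

    // One thread per node point: sum an isotropic bivariate Gaussian kernel
    // with bandwidth h over all np particles. nodes/particles are (x, y) pairs.
    __global__ void kde2d(const float2 *particles, int np,
                          const float2 *nodes, float *density,
                          int nn, float h) {
        const float PI = 3.14159265f;
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= nn) return;
        float inv2h2 = 1.0f / (2.0f * h * h);
        float norm = 1.0f / (2.0f * PI * h * h * (float)np);
        float sum = 0.0f;
        for (int j = 0; j < np; ++j) {        // every thread scans all particles
            float dx = nodes[i].x - particles[j].x;
            float dy = nodes[i].y - particles[j].y;
            sum += expf(-(dx * dx + dy * dy) * inv2h2);
        }
        density[i] = norm * sum;
    }
    ```

    A launch such as kde2d<<<(nn + 127) / 128, 128>>>(...) mirrors the 128-thread blocks the paper found optimal, though the best configuration is device-dependent.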

  16. Graphics Processing Units for HEP trigger systems

    NASA Astrophysics Data System (ADS)

    Ammendola, R.; Bauce, M.; Biagioni, A.; Chiozzi, S.; Cotta Ramusino, A.; Fantechi, R.; Fiorini, M.; Giagu, S.; Gianoli, A.; Lamanna, G.; Lonardo, A.; Messina, A.; Neri, I.; Paolucci, P. S.; Piandani, R.; Pontisso, L.; Rescigno, M.; Simula, F.; Sozzi, M.; Vicini, P.

    2016-07-01

    General-purpose computing on GPUs (Graphics Processing Units) is emerging as a new paradigm in several fields of science, although so far applications have been tailored to the specific strengths of such devices as accelerators in offline computation. With the steady reduction of GPU latencies and the increase in link and memory throughput, the use of such devices for real-time applications in high-energy physics data acquisition and trigger systems is becoming ripe. We discuss the use of online parallel computing on GPUs for synchronous low-level triggers, focusing on the CERN NA62 experiment trigger system. The use of GPUs in higher-level trigger systems is also briefly considered.

  17. Identification of Learning Processes by Means of Computer Graphics.

    ERIC Educational Resources Information Center

    Sorensen, Birgitte Holm

    1993-01-01

    Describes a development project for the use of computer graphics and video in connection with an inservice training course for primary education teachers in Denmark. Topics addressed include research approaches to computers; computer graphics in learning processes; activities relating to computer graphics; the role of the teacher; and student…

  18. Graphics Processing Unit Assisted Thermographic Compositing

    NASA Technical Reports Server (NTRS)

    Ragasa, Scott; McDougal, Matthew; Russell, Sam

    2012-01-01

    Objective: To develop a software application utilizing general-purpose graphics processing units (GPUs) for the analysis of large sets of thermographic data. Background: Over the past few years, an increasing effort among scientists and engineers to utilize the GPU in a more general-purpose fashion has been allowing for supercomputer-level results at individual workstations. As data sets grow, the methods to work them grow at an equal, and often greater, pace. Certain common computations can take advantage of the massively parallel and optimized hardware constructs of the GPU to allow for throughput that was previously reserved for compute clusters. These common computations have high degrees of data parallelism; that is, they are the same computation applied to a large set of data where the result does not depend on other data elements. Signal (image) processing is one area where GPUs are being used to greatly increase the performance of certain algorithms and analysis techniques. Technical Methodology/Approach: Apply massively parallel algorithms and data structures to the specific analysis requirements presented when working with thermographic data sets.

  19. A Relational Reasoning Approach to Text-Graphic Processing

    ERIC Educational Resources Information Center

    Danielson, Robert W.; Sinatra, Gale M.

    2017-01-01

    We propose that research on text-graphic processing could be strengthened by the inclusion of relational reasoning perspectives. We briefly outline four aspects of relational reasoning: "analogies," "anomalies," "antinomies", and "antitheses". Next, we illustrate how text-graphic researchers have been…

  20. Graphics

    ERIC Educational Resources Information Center

    Post, Susan

    1975-01-01

    An art teacher described an elective course in graphics which was designed to enlarge a student's knowledge of value, color, shape within a shape, transparency, line and texture. This course utilized the technique of working a multi-colored print from a single block that was first introduced by Picasso. (Author/RK)

  1. Graphic Arts: Process Camera, Stripping, and Platemaking. Third Edition.

    ERIC Educational Resources Information Center

    Crummett, Dan

    This document contains teacher and student materials for a course in graphic arts concentrating on camera work, stripping, and plate making in the printing process. Eight units of instruction cover the following topics: (1) the process camera and darkroom equipment; (2) line photography; (3) halftone photography; (4) other darkroom techniques; (5)…

  2. Graphic Arts: Book Two. Process Camera, Stripping, and Platemaking.

    ERIC Educational Resources Information Center

    Farajollahi, Karim; And Others

    The second of a three-volume set of instructional materials for a course in graphic arts, this manual consists of 10 instructional units dealing with the process camera, stripping, and platemaking. Covered in the individual units are the process camera and darkroom photography, line photography, half-tone photography, other darkroom techniques,…

  3. The Graphical Representation of Algorithmic Processes. Volume 1

    DTIC Science & Technology

    1989-12-01

    Fragments indexed from this report classify graphical representation systems as static or dynamic, among them PegaSys [18], BALSA [6], Booch diagrams [4], PV [5], process/data flow diagrams, and structure charts, and survey techniques that attempt to show data or control flow in a static graphical representation of an algorithm ("The PegaSys system straddles the fields of…"). Cited works include "Through PegaSys," Computer, 18(8):72-85 (August 1985), and Myers, Brad A., "Incense: A System for Displaying Data Structures," Computer Graphics, 17(3):115.

  4. Graphic Arts: Book Three. The Press and Related Processes.

    ERIC Educational Resources Information Center

    Farajollahi, Karim; And Others

    The third of a three-volume set of instructional materials for a graphic arts course, this manual consists of nine instructional units dealing with presses and related processes. Covered in the units are basic press fundamentals, offset press systems, offset press operating procedures, offset inks and dampening chemistry, preventive maintenance…

  5. Graphic Arts: The Press and Finishing Processes. Third Edition.

    ERIC Educational Resources Information Center

    Crummett, Dan

    This document contains teacher and student materials for a course in graphic arts concentrating on printing presses and the finishing process for publications. Seven units of instruction cover the following topics: (1) offset press systems; (2) offset inks and dampening chemistry; (3) offset press operating procedures; (4) preventive maintenance…

  6. The Use of Computer Graphics in the Design Process.

    ERIC Educational Resources Information Center

    Palazzi, Maria

    This master's thesis examines applications of computer technology to the field of industrial design and ways in which technology can transform the traditional process. Following a statement of the problem, the history and applications of the fields of computer graphics and industrial design are reviewed. The traditional industrial design process…

  7. Digital-Computer Processing of Graphical Data. Final Report.

    ERIC Educational Resources Information Center

    Freeman, Herbert

    The final report of a two-year study concerned with the digital-computer processing of graphical data. Five separate investigations carried out under this study are described briefly, and a detailed bibliography, complete with abstracts, is included in which are listed the technical papers and reports published during the period of this program.…

  8. An Interactive Graphics Program for Investigating Digital Signal Processing.

    ERIC Educational Resources Information Center

    Miller, Billy K.; And Others

    1983-01-01

    Describes development of an interactive computer graphics program for use in teaching digital signal processing. The program allows students to interactively configure digital systems on a monitor display and observe their system's performance by means of digital plots on the system's outputs. A sample program run is included. (JN)

  9. Obliterable of graphics and correction of skew using Hough transform for mobile captured documents

    NASA Astrophysics Data System (ADS)

    Chethan, H. K.; Kumar, G. Hemantha

    2011-10-01

    Camera-based document analysis (CBDA) is an emerging field in computer vision and pattern recognition. Cameras are now incorporated into many kinds of electronic equipment, and handheld imaging devices such as digital cameras, mobile phones, and camera-equipped gaming devices are increasingly taking over the role of the scanner. The goal of this work is to remove graphics from a document, a step that plays a vital role in recognizing characters in mobile-captured documents. In this paper we propose a novel method for separating and removing graphics other than text, such as logos and animations, from the document, along with a method to reduce noise; finally, the skew of the textual content is estimated and corrected using the Hough transform. The experimental results show the efficacy of the approach compared with the results of well-known existing methods.

  10. Efficient magnetohydrodynamic simulations on graphics processing units with CUDA

    NASA Astrophysics Data System (ADS)

    Wong, Hon-Cheng; Wong, Un-Hong; Feng, Xueshang; Tang, Zesheng

    2011-10-01

    Magnetohydrodynamic (MHD) simulations based on the ideal MHD equations have become a powerful tool for modeling phenomena in a wide range of applications including laboratory, astrophysical, and space plasmas. In general, high-resolution methods for solving the ideal MHD equations are computationally expensive, and Beowulf clusters or even supercomputers are often used to run codes that implement these methods. With the advent of the Compute Unified Device Architecture (CUDA), modern graphics processing units (GPUs) provide an alternative approach to parallel computing for scientific simulations. In this paper we present, to the best of the authors' knowledge, the first implementation of MHD simulations entirely on GPUs with CUDA, named GPU-MHD, to accelerate the simulation process. GPU-MHD supports both single and double precision computations. A series of numerical tests have been performed to validate the correctness of our code. An accuracy evaluation comparing single and double precision computation results is also given. Performance measurements of both single and double precision are conducted on both the NVIDIA GeForce GTX 295 (GT200 architecture) and GTX 480 (Fermi architecture) graphics cards. These measurements show that our GPU-based implementation achieves between one and two orders of magnitude of improvement depending on the graphics card used, the problem size, and the precision when compared to the original serial CPU MHD implementation. In addition, we extend GPU-MHD to support the visualization of the simulation results, so that the whole MHD simulation and visualization process can be performed entirely on GPUs.
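
    GPU-MHD itself advances the full ideal-MHD state, but the per-cell finite-volume update pattern such codes map onto CUDA threads can be sketched in miniature. The stand-in below uses first-order upwind scalar advection in place of the MHD flux computation; everything about it (names, scheme) is illustrative, not the GPU-MHD code.

    ```cuda
    #include <cuda_runtime.h>

    // Much-reduced finite-volume step: one thread per cell, upwind flux for
    // u_t + a u_x = 0 with a > 0. A real MHD code would evaluate HLL/Roe
    // fluxes of the conserved state vector at each interface instead.
    __global__ void fvStep(const float *u, float *uNew, int n,
                           float a, float dtOverDx) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i == 0 || i >= n - 1) return;       // leave boundary cells fixed
        float fluxL = a * u[i - 1];             // upwind interface fluxes
        float fluxR = a * u[i];
        uNew[i] = u[i] - dtOverDx * (fluxR - fluxL);
    }
    ```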

  11. Adaptive-optics optical coherence tomography processing using a graphics processing unit.

    PubMed

    Shafer, Brandon A; Kriske, Jeffery E; Kocaoglu, Omer P; Turner, Timothy L; Liu, Zhuolin; Lee, John Jaehwan; Miller, Donald T

    2014-01-01

    Graphics processing units are increasingly being used for scientific computing for their powerful parallel processing abilities and moderate price compared to supercomputers and computing grids. In this paper we have used a general-purpose graphics processing unit to process adaptive-optics optical coherence tomography (AOOCT) images in real time. Increasing the processing speed of AOOCT is an essential step in moving this super-high-resolution technology closer to clinical viability.

  12. Graphics processing unit acceleration of computational electromagnetic methods

    NASA Astrophysics Data System (ADS)

    Inman, Matthew

    The use of graphics processing units (GPUs) for scientific applications has been evolving and expanding for the past decade. GPUs provide an alternative to the CPU in the creation and execution of the numerical codes that are often relied upon to perform simulations in computational electromagnetics. While originally designed purely to display graphics on the user's monitor, GPUs today are essentially powerful floating-point co-processors that can be programmed not only to render complex graphics but also to perform the complex mathematical calculations often encountered in scientific computing. The GPUs being produced today often contain hundreds of separate cores able to access large amounts of high-speed dedicated memory. By utilizing the power offered by such a specialized processor, it is possible to drastically speed up the calculations required in computational electromagnetics. This increase in speed allows GPU-based simulations to be used in situations where computational time has heretofore been a limiting factor, such as educational courses. Teaching electromagnetics often relies on simple example problems because of the simulation times needed to analyze more complex ones. Adapting the methods to run on the GPU will be shown to allow demonstrations of more advanced problems than previously possible. Modules will be developed for a wide variety of teaching situations, utilizing the speed of the GPU to demonstrate techniques and ideas that were previously unrealizable.
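
    To make the teaching angle concrete, here is the kind of minimal kernel pair such course modules might start from: a 1-D finite-difference time-domain (Yee) update in normalized units. This is a generic textbook scheme written as a sketch, not code from the dissertation.

    ```cuda
    #include <cuda_runtime.h>

    // Leapfrog FDTD updates on staggered 1-D grids: H advances from the curl
    // of E, then E advances from the curl of H. c is the Courant factor.
    __global__ void updateH(float *hy, const float *ez, int n, float c) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n - 1) hy[i] += c * (ez[i + 1] - ez[i]);
    }

    __global__ void updateE(float *ez, const float *hy, int n, float c) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i > 0 && i < n) ez[i] += c * (hy[i] - hy[i - 1]);
    }
    ```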

  13. Porting a Hall MHD Code to a Graphic Processing Unit

    NASA Technical Reports Server (NTRS)

    Dorelli, John C.

    2011-01-01

    We present our experience porting a Hall MHD code to a Graphics Processing Unit (GPU). The code is a 2nd-order accurate MUSCL-Hancock scheme which makes use of an HLL Riemann solver to compute numerical fluxes and second-order finite differences to compute the Hall contribution to the electric field. The divergence of the magnetic field is controlled with Dedner's hyperbolic divergence cleaning method. Preliminary benchmark tests indicate a speedup (relative to a single Nehalem core) of 58x for a double precision calculation. We discuss scaling issues which arise when distributing work across multiple GPUs in a CPU-GPU cluster.

  14. Graphics Processing Unit Accelerated Hirsch-Fye Quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Moore, Conrad; Abu Asal, Sameer; Rajagoplan, Kaushik; Poliakoff, David; Caprino, Joseph; Tomko, Karen; Thakur, Bhupender; Yang, Shuxiang; Moreno, Juana; Jarrell, Mark

    2012-02-01

    In Dynamical Mean Field Theory and its cluster extensions, such as the Dynamic Cluster Algorithm, the bottleneck of the algorithm is solving the self-consistency equations with an impurity solver. Hirsch-Fye Quantum Monte Carlo is one of the most commonly used impurity and cluster solvers. This work implements optimizations of the algorithm, such as enabling large data re-use, suitable for the Graphics Processing Unit (GPU) architecture. The GPU's sheer number of concurrent parallel computations and large bandwidth to many shared memories takes advantage of the inherent parallelism in the Green function update and measurement routines, and can substantially improve the efficiency of the Hirsch-Fye impurity solver.

  15. Optimized Laplacian image sharpening algorithm based on graphic processing unit

    NASA Astrophysics Data System (ADS)

    Ma, Tinghuai; Li, Lu; Ji, Sai; Wang, Xin; Tian, Yuan; Al-Dhelaan, Abdullah; Al-Rodhaan, Mznah

    2014-12-01

    In classical Laplacian image sharpening, all pixels are processed one by one, which leads to a large amount of computation. Traditional Laplacian sharpening on the CPU is considerably time-consuming, especially for large pictures. In this paper, we propose a parallel implementation of Laplacian sharpening based on the Compute Unified Device Architecture (CUDA), a computing platform for graphics processing units (GPUs), and analyze the impact of picture size on performance as well as the relationship between data transfer time and parallel computing time. Further, according to the features of the different GPU memories, an improved scheme of our method is developed, which exploits shared memory instead of global memory and further increases the efficiency. Experimental results prove that the two novel algorithms outperform the traditional sequential method based on OpenCV in terms of computing speed.
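
    For reference, the baseline (global-memory) variant of the operation described above is a few lines of CUDA. This sketch assumes an 8-bit grayscale image and a 4-neighbour Laplacian; the paper's improved scheme would additionally stage each tile in shared memory.

    ```cuda
    #include <cuda_runtime.h>

    // Laplacian sharpening, one thread per interior pixel:
    // out = in - laplacian(in), clamped to the 8-bit range.
    __global__ void laplacianSharpen(const unsigned char *in, unsigned char *out,
                                     int w, int h) {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x < 1 || y < 1 || x >= w - 1 || y >= h - 1) return;
        int idx = y * w + x;
        int lap = in[idx - 1] + in[idx + 1] + in[idx - w] + in[idx + w]
                - 4 * in[idx];
        int v = (int)in[idx] - lap;                    // sharpen: subtract Laplacian
        out[idx] = (unsigned char)min(255, max(0, v)); // clamp
    }
    ```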

  16. Exploiting graphics processing units for computational biology and bioinformatics.

    PubMed

    Payne, Joshua L; Sinnott-Armstrong, Nicholas A; Moore, Jason H

    2010-09-01

    Advances in the video gaming industry have led to the production of low-cost, high-performance graphics processing units (GPUs) that possess more memory bandwidth and computational capability than central processing units (CPUs), the standard workhorses of scientific computing. With the recent release of general-purpose GPUs and NVIDIA's GPU programming language, CUDA, graphics engines are being adopted widely in scientific computing applications, particularly in the fields of computational biology and bioinformatics. The goal of this article is to concisely present an introduction to GPU hardware and programming, aimed at the computational biologist or bioinformaticist. To this end, we discuss the primary differences between GPU and CPU architecture, introduce the basics of the CUDA programming language, and discuss important CUDA programming practices, such as the proper use of coalesced reads, data types, and memory hierarchies. We highlight each of these topics in the context of computing the all-pairs distance between instances in a dataset, a common procedure in numerous disciplines of scientific computing. We conclude with a runtime analysis of the GPU and CPU implementations of the all-pairs distance calculation. We show our final GPU implementation to outperform the CPU implementation by a factor of 1700.
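
    The article's running example translates naturally into a kernel; the sketch below (an illustration with row-major layout and names assumed, not the article's code) assigns one thread per (i, j) pair. A tuned version would tile rows through shared memory so global reads coalesce, which is exactly the kind of practice the article covers.

    ```cuda
    #include <cuda_runtime.h>

    // All-pairs Euclidean distance: data is n instances x d features,
    // row-major; dist is the n x n result. One thread per output entry.
    __global__ void allPairs(const float *data, float *dist, int n, int d) {
        int i = blockIdx.y * blockDim.y + threadIdx.y;
        int j = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n || j >= n) return;
        float s = 0.0f;
        for (int k = 0; k < d; ++k) {
            float diff = data[i * d + k] - data[j * d + k];
            s += diff * diff;
        }
        dist[i * n + j] = sqrtf(s);
    }
    ```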

  17. Solar physics applications of computer graphics and image processing

    NASA Technical Reports Server (NTRS)

    Altschuler, M. D.

    1985-01-01

    Computer graphics devices coupled with computers and carefully developed software provide new opportunities to achieve insight into the geometry and time evolution of scalar, vector, and tensor fields and to extract more information quickly and cheaply from the same image data. Two or more different fields which overlay in space can be calculated from the data (and the physics), then displayed from any perspective, and compared visually. The maximum regions of one field can be compared with the gradients of another. Time changing fields can also be compared. Images can be added, subtracted, transformed, noise filtered, frequency filtered, contrast enhanced, color coded, enlarged, compressed, parameterized, and histogrammed, in whole or section by section. Today it is possible to process multiple digital images to reveal spatial and temporal correlations and cross correlations. Data from different observatories taken at different times can be processed, interpolated, and transformed to a common coordinate system.

  18. Implementing wide baseline matching algorithms on a graphics processing unit.

    SciTech Connect

    Rothganger, Fredrick H.; Larson, Kurt W.; Gonzales, Antonio Ignacio; Myers, Daniel S.

    2007-10-01

    Wide baseline matching is the state of the art for object recognition and image registration problems in computer vision. Though effective, the computational expense of these algorithms limits their application to many real-world problems. The performance of wide baseline matching algorithms may be improved by using a graphical processing unit as a fast multithreaded co-processor. In this paper, we present an implementation of the difference of Gaussian feature extractor, based on the CUDA system of GPU programming developed by NVIDIA, and implemented on their hardware. For a 2000x2000 pixel image, the GPU-based method executes nearly thirteen times faster than a comparable CPU-based method, with no significant loss of accuracy.
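
    The feature extractor's main building block is Gaussian blurring (a difference-of-Gaussians image is then just blur(img, sigma1) - blur(img, sigma2), pixelwise). A naive, unoptimized CUDA version is sketched below for orientation; it is not the paper's implementation, which would use separable filters and on-chip memory.

    ```cuda
    #include <cuda_runtime.h>

    // Naive 2-D Gaussian convolution, one thread per pixel. g holds the
    // (2r+1)^2 precomputed kernel weights; borders are clamped.
    __global__ void gaussBlur(const float *in, float *out, int w, int h,
                              const float *g, int r) {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= w || y >= h) return;
        float s = 0.0f;
        for (int dy = -r; dy <= r; ++dy)
            for (int dx = -r; dx <= r; ++dx) {
                int xx = min(max(x + dx, 0), w - 1);   // clamp at image borders
                int yy = min(max(y + dy, 0), h - 1);
                s += g[(dy + r) * (2 * r + 1) + (dx + r)] * in[yy * w + xx];
            }
        out[y * w + x] = s;
    }
    ```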

  19. Graphics Processing Units and High-Dimensional Optimization

    PubMed Central

    Zhou, Hua; Lange, Kenneth; Suchard, Marc A.

    2011-01-01

    This paper discusses the potential of graphics processing units (GPUs) in high-dimensional optimization problems. A single GPU card with hundreds of arithmetic cores can be inserted in a personal computer and dramatically accelerates many statistical algorithms. To exploit these devices fully, optimization algorithms should reduce to multiple parallel tasks, each accessing a limited amount of data. These criteria favor EM and MM algorithms that separate parameters and data. To a lesser extent, block relaxation and coordinate descent and ascent also qualify. We demonstrate the utility of GPUs in nonnegative matrix factorization, PET image reconstruction, and multidimensional scaling. Speedups of 100-fold can easily be attained. Over the next decade, GPUs will fundamentally alter the landscape of computational statistics. It is time for more statisticians to get on board. PMID:21847315
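
    The "separate parameters and data" point is what makes these algorithms GPU-friendly: the expensive matrix products aside, each parameter updates independently. For the nonnegative matrix factorization example, the multiplicative update reduces to an elementwise kernel like the hedged sketch below (num = W'V and den = W'WH are assumed to come from preceding matrix multiplies; names are illustrative).

    ```cuda
    #include <cuda_runtime.h>

    // Elementwise multiplicative NMF update: H <- H .* num ./ den.
    // Every entry is independent, so one thread per entry suffices.
    __global__ void nmfUpdate(float *H, const float *num, const float *den,
                              int sz, float eps) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < sz) H[i] *= num[i] / (den[i] + eps);  // eps guards divide-by-zero
    }
    ```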

  20. Graphics processing units accelerated semiclassical initial value representation molecular dynamics

    SciTech Connect

    Tamascelli, Dario; Dambrosio, Francesco Saverio; Conte, Riccardo; Ceotto, Michele

    2014-05-07

    This paper presents a Graphics Processing Units (GPUs) implementation of the Semiclassical Initial Value Representation (SC-IVR) propagator for vibrational molecular spectroscopy calculations. The time-averaging formulation of the SC-IVR for power spectrum calculations is employed. Details about the GPU implementation of the semiclassical code are provided. Four molecules with an increasing number of atoms are considered and the GPU-calculated vibrational frequencies perfectly match the benchmark values. The computational time scaling of two GPUs (NVIDIA Tesla C2075 and Kepler K20), respectively, versus two CPUs (Intel Core i5 and Intel Xeon E5-2687W) and the critical issues related to the GPU implementation are discussed. The resulting reduction in computational time and power consumption is significant and semiclassical GPU calculations are shown to be environment friendly.

  1. Graphic Arts: The Press and Finishing Processes. Teacher Guide.

    ERIC Educational Resources Information Center

    Feasley, Sue C., Ed.

    This curriculum guide is the third in a three-volume series of instructional materials for competency-based graphic arts instruction. Each publication is designed to include the technical content and tasks necessary for a student to be employed in an entry-level graphic arts occupation. Introductory materials include an instructional/task analysis…

  2. Graphic Arts: Process Camera, Stripping, and Platemaking. Teacher Guide.

    ERIC Educational Resources Information Center

    Feasley, Sue C., Ed.

    This curriculum guide is the second in a three-volume series of instructional materials for competency-based graphic arts instruction. Each publication is designed to include the technical content and tasks necessary for a student to be employed in an entry-level graphic arts occupation. Introductory materials include an instructional/task…

  3. Role of Graphics Tools in the Learning Design Process

    ERIC Educational Resources Information Center

    Laisney, Patrice; Brandt-Pomares, Pascale

    2015-01-01

    This paper discusses the design activities of students in secondary school in France. Graphics tools are now part of the capacity of design professionals. It is therefore apt to reflect on their integration into the technological education. Has the use of intermediate graphical tools changed students' performance, and if so in what direction, in…

  4. Simplification of 3D Graphics for Mobile Devices: Exploring the Trade-off Between Energy Savings and User Perceptions of Visual Quality

    NASA Astrophysics Data System (ADS)

    Vatjus-Anttila, Jarkko; Koskela, Timo; Lappalainen, Tuomas; Häkkilä, Jonna

    2017-03-01

    3D graphics have quickly become a popular form of media that can also be accessed with today's mobile devices. However, the use of 3D applications with mobile devices is typically a very energy-consuming task due to the processing complexity and the large file size of 3D graphics. As a result, their use may lead to rapid depletion of the limited battery life. In this paper, we investigate how much energy savings can be gained in the transmission and rendering of 3D graphics by simplifying geometry data. In this connection, we also examine users' perceptions on the visual quality of the simplified 3D models. The results of this paper provide new knowledge on the energy savings that can be gained through geometry simplification, as well as on how much the geometry can be simplified before the visual quality of 3D models becomes unacceptable for the mobile users. Based on the results, it can be concluded that geometry simplification can provide significant energy savings for mobile devices without disturbing the users. When geometry simplification is combined with distance based adjustment of detail, up to 52% energy savings were gained in our experiments compared to using only a single high quality 3D model.

  5. Accelerated Searches of Gravitational Waves Using Graphics Processing Units

    NASA Astrophysics Data System (ADS)

    Chung, Shin Kee; Wen, Linqing; Blair, David; Cannon, Kipp

    2010-06-01

    The existence of gravitational waves was predicted by Albert Einstein. Black hole and neutron star binary systems will produce strong gravitational waves through their inspiral and eventual merger. The analysis of the gravitational wave data is computationally intensive, requiring matched filtering of terabytes of data against a bank of at least 3000 numerical templates that represent predicted waveforms. We need to complete the analysis in real time (within the duration of the signal) in order to enable follow-up observations with conventional optical or radio telescopes. We report a novel application of graphics processing units (GPUs) to accelerating the search pipelines for gravitational waves from coalescing binary systems of compact objects. An overall speed-up of 16-fold has been achieved with an NVIDIA GeForce 8800 Ultra GPU card compared with a standard central processing unit (CPU). We show that further improvements are possible and discuss the reduction in the number of CPUs required for the detection of inspiral sources afforded by the use of GPUs.
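
    The pipeline itself is not reproduced in the record; the sketch below shows only the core frequency-domain matched-filter step under simplifying assumptions (single template, in-place transforms, normalization and plan reuse omitted), using the standard cuFFT API.

    ```cuda
    #include <cufft.h>
    #include <cuda_runtime.h>

    // Multiply the data spectrum by the conjugate of the template spectrum;
    // the inverse FFT of the product is the correlation (matched-filter) output.
    __global__ void corrSpectrum(const cufftComplex *data,
                                 const cufftComplex *tmpl,
                                 cufftComplex *out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        cufftComplex a = data[i], b = tmpl[i];
        out[i].x = a.x * b.x + a.y * b.y;   // Re(a * conj(b))
        out[i].y = a.y * b.x - a.x * b.y;   // Im(a * conj(b))
    }

    // Host side (error checks omitted): transform data and template,
    // cross-multiply, inverse-transform to the filter output time series.
    void matchedFilter(cufftComplex *dData, cufftComplex *dTmpl,
                       cufftComplex *dOut, int n) {
        cufftHandle plan;
        cufftPlan1d(&plan, n, CUFFT_C2C, 1);
        cufftExecC2C(plan, dData, dData, CUFFT_FORWARD);
        cufftExecC2C(plan, dTmpl, dTmpl, CUFFT_FORWARD);
        corrSpectrum<<<(n + 255) / 256, 256>>>(dData, dTmpl, dOut, n);
        cufftExecC2C(plan, dOut, dOut, CUFFT_INVERSE);
        cufftDestroy(plan);
    }
    ```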

  6. Accelerating sino-atrium computer simulations with graphic processing units.

    PubMed

    Zhang, Hong; Xiao, Zheng; Lin, Shien-fong

    2015-01-01

    Sino-atrial node cells (SANCs) play a significant role in rhythmic firing. To investigate their role in arrhythmia and interactions with the atrium, computer simulations based on cellular dynamic mathematical models are generally used. However, the large-scale computation usually makes research difficult, given the limited computational power of Central Processing Units (CPUs). In this paper, an accelerating approach with Graphic Processing Units (GPUs) is proposed in a simulation consisting of the SAN tissue and the adjoining atrium. By using the operator splitting method, the computational task was made parallel. Three parallelization strategies were then put forward. The strategy with the shortest running time was further optimized by considering block size, data transfer and partition. The results showed that for a simulation with 500 SANCs and 30 atrial cells, the execution time taken by the non-optimized program decreased 62% with respect to a serial program running on CPU. The execution time decreased by 80% after the program was optimized. The larger the tissue was, the more significant the acceleration became. The results demonstrated the effectiveness of the proposed GPU-accelerating methods and their promising applications in more complicated biological simulations.
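
    The operator-splitting structure mentioned above is what exposes the parallelism: each step first advances every cell's local reaction dynamics independently, then applies diffusive coupling between neighbours. The miniature sketch below uses a generic FitzHugh-Nagumo-style reaction term as a placeholder; it is not the paper's SANC model.

    ```cuda
    #include <cuda_runtime.h>

    // Split step 1: local (reaction) dynamics, fully independent per cell.
    __global__ void reactionStep(float *v, float *w, int n, float dt) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        float dv = v[i] - v[i] * v[i] * v[i] / 3.0f - w[i];
        float dw = 0.08f * (v[i] + 0.7f - 0.8f * w[i]);
        v[i] += dt * dv;
        w[i] += dt * dw;
    }

    // Split step 2: diffusive coupling along a 1-D fiber of cells.
    __global__ void diffusionStep(const float *v, float *vNew, int n,
                                  float dCoeff, float dt) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i == 0 || i >= n - 1) return;
        vNew[i] = v[i] + dt * dCoeff * (v[i - 1] - 2.0f * v[i] + v[i + 1]);
    }
    ```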

  7. Graphics processing units in bioinformatics, computational biology and systems biology.

    PubMed

    Nobile, Marco S; Cazzaniga, Paolo; Tangherloni, Andrea; Besozzi, Daniela

    2016-07-08

    Several studies in Bioinformatics, Computational Biology and Systems Biology rely on the definition of physico-chemical or mathematical models of biological systems at different scales and levels of complexity, ranging from the interaction of atoms in single molecules up to genome-wide interaction networks. Traditional computational methods and software tools developed in these research fields share a common trait: they can be computationally demanding on Central Processing Units (CPUs), therefore limiting their applicability in many circumstances. To overcome this issue, general-purpose Graphics Processing Units (GPUs) are gaining increasing attention from the scientific community, as they can considerably reduce the running time required by standard CPU-based software and allow more intensive investigations of biological systems. In this review, we present a collection of GPU tools recently developed to perform computational analyses in life science disciplines, emphasizing the advantages and the drawbacks in the use of these parallel architectures. The complete list of GPU-powered tools here reviewed is available at http://bit.ly/gputools.

  8. Accelerating radio astronomy cross-correlation with graphics processing units

    NASA Astrophysics Data System (ADS)

    Clark, M. A.; LaPlante, P. C.; Greenhill, L. J.

    2013-05-01

    We present a highly parallel implementation of the cross-correlation of time-series data using graphics processing units (GPUs), which is scalable to hundreds of independent inputs and suitable for the processing of signals from 'large-N' arrays of many radio antennas. The computational part of the algorithm, the X-engine, is implemented efficiently on NVIDIA's Fermi architecture, sustaining up to 79% of the peak single-precision floating-point throughput. We compare performance obtained for hardware- and software-managed caches, observing significantly better performance for the latter. The high performance reported involves use of a multi-level data tiling strategy in memory and use of a pipelined algorithm with simultaneous computation and transfer of data from host to device memory. The speed of code development, flexibility, and low cost of the GPU implementations compared with application-specific integrated circuit (ASIC) and field-programmable gate array (FPGA) implementations have the potential to greatly shorten the cycle of correlator development and deployment, for cases where some power-consumption penalty can be tolerated.
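
    The shape of the X-engine is worth seeing even in toy form: for every antenna pair, accumulate x_i(t) * conj(x_j(t)) over a block of samples. The sketch below (layout and names assumed; one thread per baseline) omits the multi-level tiling and host-device pipelining that the paper credits for its fraction of peak throughput.

    ```cuda
    #include <cuda_runtime.h>

    // Toy X-engine: samples[t * nAnt + a] holds the complex voltage of
    // antenna a at time t; vis accumulates the lower-triangle visibilities.
    __global__ void xEngine(const float2 *samples, float2 *vis,
                            int nAnt, int nTime) {
        int i = blockIdx.y * blockDim.y + threadIdx.y;
        int j = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= nAnt || j >= nAnt || j > i) return;  // lower triangle only
        float2 acc = make_float2(0.0f, 0.0f);
        for (int t = 0; t < nTime; ++t) {
            float2 a = samples[t * nAnt + i];
            float2 b = samples[t * nAnt + j];
            acc.x += a.x * b.x + a.y * b.y;           // Re(a * conj(b))
            acc.y += a.y * b.x - a.x * b.y;           // Im(a * conj(b))
        }
        vis[i * nAnt + j] = acc;
    }
    ```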

  9. Parallel Latent Semantic Analysis using a Graphics Processing Unit

    SciTech Connect

    Cui, Xiaohui; Potok, Thomas E; Cavanagh, Joseph M

    2009-01-01

    Latent Semantic Analysis (LSA) can be used to reduce the dimensions of large term-document datasets using Singular Value Decomposition. However, with the ever-expanding size of data sets, current implementations are not fast enough to quickly and easily compute the results on a standard PC. The Graphics Processing Unit (GPU) can solve some highly parallel problems much faster than the traditional sequential processor (CPU). Thus, a deployable system using a GPU to speed up large-scale LSA processes would be a much more effective choice (in terms of cost/performance ratio) than using a computer cluster. In this paper, we present a parallel LSA implementation on the GPU, using NVIDIA Compute Unified Device Architecture (CUDA) and Compute Unified Basic Linear Algebra Subprograms (CUBLAS). The performance of this implementation is compared to a traditional LSA implementation on the CPU using an optimized Basic Linear Algebra Subprograms library. For large matrices that have dimensions divisible by 16, the GPU algorithm ran five to six times faster than the CPU version.
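
    The dense products at the heart of such a pipeline are exactly what CUBLAS provides; a minimal sketch of the call (column-major device matrices, names assumed) looks like this. The "divisible by 16" sensitivity reported above likely reflects the library's internal tile sizes, which keep memory transactions coalesced when dimensions align.

    ```cuda
    #include <cublas_v2.h>
    #include <cuda_runtime.h>

    // C = A * B on the device, with A (m x k), B (k x n), C (m x n),
    // all column-major as CUBLAS expects. Error checks omitted.
    void gemm(cublasHandle_t h, const float *dA, const float *dB, float *dC,
              int m, int k, int n) {
        const float alpha = 1.0f, beta = 0.0f;
        cublasSgemm(h, CUBLAS_OP_N, CUBLAS_OP_N,
                    m, n, k,
                    &alpha, dA, m,   // lda = m
                    dB, k,           // ldb = k
                    &beta, dC, m);   // ldc = m
    }
    ```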

  10. Handling geophysical flows: Numerical modelling using Graphical Processing Units

    NASA Astrophysics Data System (ADS)

    Garcia-Navarro, Pilar; Lacasta, Asier; Juez, Carmelo; Morales-Hernandez, Mario

    2016-04-01

    Computational tools may help engineers in the assessment of sediment transport during decision-making processes. The main requirements are that the numerical results be accurate and that the simulation models be fast. The present work is based on the 2D shallow water equations in combination with the 2D Exner equation [1]. The accuracy of the resulting numerical model was discussed in previous work. Regarding the speed of the computation, the Exner equation slows down the already costly 2D shallow water model, as the number of variables to solve for is increased and the numerical stability is more restrictive. On the other hand, the movement of poorly sorted material over steep areas constitutes a hazardous environmental problem, and computational tools help in the prediction of such landslides [2]. In order to overcome this problem, this work proposes the use of Graphical Processing Units (GPUs) to decrease the simulation time significantly [3, 4]. The numerical scheme implemented on the GPU is based on a finite volume scheme. The mathematical model and the numerical implementation are compared against experimental and field data. In addition, the computational times obtained with the graphical hardware technology are compared against single-core (sequential) and multi-core (parallel) CPU implementations. References: [Juez et al. (2014)] Juez, C., Murillo, J., & García-Navarro, P. (2014). A 2D weakly-coupled and efficient numerical model for transient shallow flow and movable bed. Advances in Water Resources, 71, 93-109. [Juez et al. (2013)] Juez, C., Murillo, J., & García-Navarro, P. (2013). 2D simulation of granular flow over irregular steep slopes using global and local coordinates. Journal of Computational Physics, 225, 166-204. [Lacasta et al. (2014)] Lacasta, A., Morales-Hernández, M., Murillo, J., & García-Navarro, P. (2014). An optimized GPU implementation of a 2D free surface simulation model on unstructured meshes. Advances in Engineering Software, 78, 1-15. [Lacasta

  11. Efficient graphics processing unit-based voxel carving for surveillance

    NASA Astrophysics Data System (ADS)

    Ober-Gecks, Antje; Zwicker, Marius; Henrich, Dominik

    2016-07-01

    A graphics processing unit (GPU)-based implementation of a space carving method for the reconstruction of the photo hull is presented. In particular, the generalized voxel coloring with item buffer approach is transferred to the GPU. The fast computation on the GPU is realized by an incrementally calculated standard deviation within the likelihood ratio test, which is applied as color consistency criterion. A fast and efficient computation of complete voxel-pixel projections is provided using volume rendering methods. This generates a speedup of the iterative carving procedure while considering all given pixel color information. Different volume rendering methods, such as texture mapping and raycasting, are examined. The termination of the voxel carving procedure is controlled through an anytime concept. The photo hull algorithm is examined for its applicability to real-world surveillance scenarios as an online reconstruction method. For this reason, a GPU-based redesign of a visual hull algorithm is provided that utilizes geometric knowledge about known static occluders of the scene in order to create a conservative and complete visual hull that includes all given objects. This visual hull approximation serves as input for the photo hull algorithm.

  12. Graphics processing unit-based alignment of protein interaction networks.

    PubMed

    Xie, Jiang; Zhou, Zhonghua; Ma, Jin; Xiang, Chaojuan; Nie, Qing; Zhang, Wu

    2015-08-01

    Network alignment is an important bridge to understanding human protein-protein interactions (PPIs) and functions through model organisms. However, the underlying subgraph isomorphism problem complicates and increases the time required to align protein interaction networks (PINs). Parallel computing technology is an effective solution to the challenge of aligning large-scale networks via sequential computing. In this study, the typical Hungarian-Greedy Algorithm (HGA) is used as an example for PIN alignment. The authors propose a HGA with 2-nearest neighbours (HGA-2N) and implement its graphics processing unit (GPU) acceleration. Numerical experiments demonstrate that HGA-2N can find alignments that are close to those found by HGA while dramatically reducing computing time. The GPU implementation of HGA-2N optimises the parallel pattern, computing mode and storage mode and it improves the computing time ratio between the CPU and GPU compared with HGA when large-scale networks are considered. By using HGA-2N in GPUs, conserved PPIs can be observed, and potential PPIs can be predicted. Among the predictions based on 25 common Gene Ontology terms, 42.8% can be found in the Human Protein Reference Database. Furthermore, a new method of reconstructing phylogenetic trees is introduced, which shows the same relationships among five herpes viruses that are obtained using other methods.

  13. Multilevel Summation of Electrostatic Potentials Using Graphics Processing Units*

    PubMed Central

    Hardy, David J.; Stone, John E.; Schulten, Klaus

    2009-01-01

    Physical and engineering practicalities involved in microprocessor design have resulted in flat performance growth for traditional single-core microprocessors. The urgent need for continuing increases in the performance of scientific applications requires the use of many-core processors and accelerators such as graphics processing units (GPUs). This paper discusses GPU acceleration of the multilevel summation method for computing electrostatic potentials and forces for a system of charged atoms, which is a problem of paramount importance in biomolecular modeling applications. We present and test a new GPU algorithm for the long-range part of the potentials that computes a cutoff pair potential between lattice points, essentially convolving a fixed 3-D lattice of “weights” over all sub-cubes of a much larger lattice. The implementation exploits the different memory subsystems provided on the GPU to stream optimally sized data sets through the multiprocessors. We demonstrate for the full multilevel summation calculation speedups of up to 26 using a single GPU and 46 using multiple GPUs, enabling the computation of a high-resolution map of the electrostatic potential for a system of 1.5 million atoms in under 12 seconds. PMID:20161132
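
    The long-range step described above ("convolving a fixed 3-D lattice of weights over all sub-cubes of a much larger lattice") has a simple naive form, shown here as a hedged sketch with one thread per output lattice point; the paper's version additionally streams optimally sized sub-cubes through the GPU's on-chip memories.

    ```cuda
    #include <cuda_runtime.h>

    // Convolve a small (2r+1)^3 weight lattice wt over an nx x ny x nz
    // charge lattice q, writing the potential contribution to phi.
    __global__ void latticeCutoff(const float *q, float *phi,
                                  int nx, int ny, int nz,
                                  const float *wt, int r) {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        int z = blockIdx.z;                       // one grid layer per z-plane
        if (x >= nx || y >= ny || z >= nz) return;
        int side = 2 * r + 1;
        float s = 0.0f;
        for (int dz = -r; dz <= r; ++dz)
            for (int dy = -r; dy <= r; ++dy)
                for (int dx = -r; dx <= r; ++dx) {
                    int xx = x + dx, yy = y + dy, zz = z + dz;
                    if (xx < 0 || yy < 0 || zz < 0 ||
                        xx >= nx || yy >= ny || zz >= nz) continue;
                    s += wt[((dz + r) * side + (dy + r)) * side + (dx + r)]
                       * q[(zz * ny + yy) * nx + xx];
                }
        phi[(z * ny + y) * nx + x] = s;
    }
    ```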

  14. Use of general purpose graphics processing units with MODFLOW

    USGS Publications Warehouse

    Hughes, Joseph D.; White, Jeremy T.

    2013-01-01

    To evaluate the use of general-purpose graphics processing units (GPGPUs) to improve the performance of MODFLOW, an unstructured preconditioned conjugate gradient (UPCG) solver has been developed. The UPCG solver uses a compressed sparse row storage scheme and includes Jacobi, zero fill-in incomplete, and modified-incomplete lower-upper (LU) factorization, and generalized least-squares polynomial preconditioners. The UPCG solver also includes options for sequential and parallel solution on the central processing unit (CPU) using OpenMP. For simulations utilizing the GPGPU, all basic linear algebra operations are performed on the GPGPU; memory copies between the CPU and GPGPU occur prior to the first iteration of the UPCG solver and after satisfying head and flow criteria or exceeding a maximum number of iterations. The efficiency of the UPCG solver for GPGPU and CPU solutions is benchmarked using simulations of a synthetic, heterogeneous unconfined aquifer with tens of thousands to millions of active grid cells. Testing indicates GPGPU speedups on the order of 2 to 8, relative to the standard MODFLOW preconditioned conjugate gradient (PCG) solver, can be achieved when (1) memory copies between the CPU and GPGPU are optimized, (2) the percentage of time performing memory copies between the CPU and GPGPU is small relative to the calculation time, (3) high-performance GPGPU cards are utilized, and (4) CPU-GPGPU combinations are used to execute sequential operations that are difficult to parallelize. Furthermore, UPCG solver testing indicates GPGPU speedups exceed parallel CPU speedups achieved using OpenMP on multicore CPUs for preconditioners that can be easily parallelized.
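
    The compressed sparse row layout named above pairs with the basic GPGPU operation underneath all of the listed preconditioned iterations: the sparse matrix-vector product. The simplest mapping, one thread per matrix row, is sketched below (names assumed; production solvers use warp-per-row or library variants for better coalescing).

    ```cuda
    #include <cuda_runtime.h>

    // CSR sparse matrix-vector product y = A * x, one thread per row.
    // rowPtr has nRows+1 entries; colIdx/val hold the nonzeros.
    __global__ void csrSpmv(const int *rowPtr, const int *colIdx,
                            const double *val, const double *x,
                            double *y, int nRows) {
        int row = blockIdx.x * blockDim.x + threadIdx.x;
        if (row >= nRows) return;
        double sum = 0.0;
        for (int k = rowPtr[row]; k < rowPtr[row + 1]; ++k)
            sum += val[k] * x[colIdx[k]];
        y[row] = sum;
    }
    ```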

  16. Massively Parallel Latent Semantic Analyses Using a Graphics Processing Unit

    SciTech Connect

    Cavanagh, J.; Cui, S.

    2009-01-01

    Latent Semantic Analysis (LSA) aims to reduce the dimensions of large term-document datasets using Singular Value Decomposition. However, with the ever-expanding size of datasets, current implementations are not fast enough to quickly and easily compute the results on a standard PC. A graphics processing unit (GPU) can solve some highly parallel problems much faster than a traditional sequential processor or central processing unit (CPU). Thus, a deployable system using a GPU to speed up large-scale LSA processes would be a much more effective choice (in terms of cost/performance ratio) than using a PC cluster. Due to the GPU’s application-specific architecture, harnessing the GPU’s computational prowess for LSA is a great challenge. We presented a parallel LSA implementation on the GPU, using NVIDIA® Compute Unified Device Architecture and Compute Unified Basic Linear Algebra Subprograms software. The performance of this implementation is compared to traditional LSA implementation on a CPU using an optimized Basic Linear Algebra Subprograms library. After implementation, we discovered that the GPU version of the algorithm was twice as fast for large matrices (1000x1000 and above) that had dimensions not divisible by 16. For large matrices that did have dimensions divisible by 16, the GPU algorithm ran five to six times faster than the CPU version. The large variation is due to architectural benefits of the GPU for matrices divisible by 16. It should be noted that the overall speeds for the CPU version did not vary from relative normal when the matrix dimensions were divisible by 16. Further research is needed in order to produce a fully implementable version of LSA. With that in mind, the research we presented shows that the GPU is a viable option for increasing the speed of LSA, in terms of cost/performance ratio.

  17. Accelerating chemical database searching using graphics processing units.

    PubMed

    Liu, Pu; Agrafiotis, Dimitris K; Rassokhin, Dmitrii N; Yang, Eric

    2011-08-22

    The utility of chemoinformatics systems depends on the accurate computer representation and efficient manipulation of chemical compounds. In such systems, a small molecule is often digitized as a large fingerprint vector, where each element indicates the presence/absence or the number of occurrences of a particular structural feature. Since in theory the number of unique features can be exceedingly large, these fingerprint vectors are usually folded into much shorter ones using hashing and modulo operations, allowing fast "in-memory" manipulation and comparison of molecules. There is increasing evidence that lossless fingerprints can substantially improve retrieval performance in chemical database searching (substructure or similarity), which has led to the development of several lossless fingerprint compression algorithms. However, any gains in storage and retrieval afforded by compression need to be weighed against the extra computational burden required for decompression before these fingerprints can be compared. Here we demonstrate that graphics processing units (GPUs) can greatly alleviate this problem, enabling the practical application of lossless fingerprints on large databases. More specifically, we show that, with the help of a ~$500 ordinary video card, the entire PubChem database of ~32 million compounds can be searched in ~0.2-2 s on average, which is 2 orders of magnitude faster than a conventional CPU. If multiple query patterns are processed in batch, the speedup is even more dramatic (less than 0.02-0.2 s/query for 1000 queries). In the present study, we use the Elias gamma compression algorithm, which results in a compression ratio as high as 0.097.
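
    For folded binary fingerprints, the similarity computation reduces to population counts over packed machine words; the CUDA sketch below scores one query against a database of bit-packed fingerprints and deliberately omits the decompression stage that the paper's lossless fingerprints require (all names are illustrative).

      // Tanimoto similarity of one query fingerprint against a database
      // of bit-packed fingerprints (nWords 32-bit words each), one thread
      // per database molecule: T = c / (a + b - c).
      __global__ void tanimotoKernel(const unsigned int* db,    // nMol*nWords
                                     const unsigned int* query, // nWords
                                     float* out, int nMol, int nWords)
      {
          int m = blockIdx.x * blockDim.x + threadIdx.x;
          if (m >= nMol) return;

          int c = 0, a = 0, b = 0;
          for (int w = 0; w < nWords; ++w) {
              unsigned int dw = db[m * nWords + w];
              unsigned int qw = query[w];
              c += __popc(dw & qw);   // bits set in both fingerprints
              a += __popc(dw);        // bits set in the database molecule
              b += __popc(qw);        // bits set in the query
          }
          out[m] = (a + b - c) > 0 ? (float)c / (float)(a + b - c) : 0.0f;
      }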

  18. Massively Parallel Latent Semantic Analyses using a Graphics Processing Unit

    SciTech Connect

    Cavanagh, Joseph M; Cui, Xiaohui

    2009-01-01

    Latent Semantic Analysis (LSA) aims to reduce the dimensions of large term-document datasets using Singular Value Decomposition. However, with the ever-expanding size of data sets, current implementations are not fast enough to quickly and easily compute the results on a standard PC. The Graphics Processing Unit (GPU) can solve some highly parallel problems much faster than the traditional sequential processor (CPU). Thus, a deployable system using a GPU to speed up large-scale LSA processes would be a much more effective choice (in terms of cost/performance ratio) than using a computer cluster. Due to the GPU's application-specific architecture, harnessing the GPU's computational prowess for LSA is a great challenge. We present a parallel LSA implementation on the GPU, using NVIDIA Compute Unified Device Architecture and Compute Unified Basic Linear Algebra Subprograms. The performance of this implementation is compared to a traditional LSA implementation on the CPU using an optimized Basic Linear Algebra Subprograms library. After implementation, we discovered that the GPU version of the algorithm was twice as fast for large matrices (1000x1000 and above) that had dimensions not divisible by 16. For large matrices that did have dimensions divisible by 16, the GPU algorithm ran five to six times faster than the CPU version. The large variation is due to architectural benefits the GPU has for matrices divisible by 16. It should be noted that the overall speeds for the CPU version did not vary from relative normal when the matrix dimensions were divisible by 16. Further research is needed in order to produce a fully implementable version of LSA. With that in mind, the research we presented shows that the GPU is a viable option for increasing the speed of LSA, in terms of cost/performance ratio.

  19. Area-delay trade-offs of texture decompressors for a graphics processing unit

    NASA Astrophysics Data System (ADS)

    Novoa Súñer, Emilio; Ituero, Pablo; López-Vallejo, Marisa

    2011-05-01

    Graphics Processing Units have become a booster for the microelectronics industry. However, due to intellectual property issues, there is a serious lack of information on implementation details of the hardware architecture behind GPUs. For instance, the way texture is handled and decompressed in a GPU to reduce bandwidth usage has never been dealt with in depth from a hardware point of view. This work addresses a comparative study on the hardware implementation of different texture decompression algorithms for both conventional (PCs and video game consoles) and mobile platforms. Circuit synthesis is performed targeting both a reconfigurable hardware platform and a 90 nm standard cell library. Area-delay trade-offs have been extensively analyzed, which allows us to compare the complexity of the decompressors and thus determine the suitability of each algorithm for systems with limited hardware resources.

  20. Flocking-based Document Clustering on the Graphics Processing Unit

    SciTech Connect

    Cui, Xiaohui; Potok, Thomas E; Patton, Robert M; ST Charles, Jesse Lee

    2008-01-01

    Analyzing and grouping documents by content is a complex problem. One explored method of solving this problem borrows from nature, imitating the flocking behavior of birds. Each bird represents a single document and flies toward other documents that are similar to it. One limitation of this method of document clustering is its complexity O(n²). As the number of documents grows, it becomes increasingly difficult to receive results in a reasonable amount of time. However, flocking behavior, along with most naturally inspired algorithms such as ant colony optimization and particle swarm optimization, is highly parallel and has found increased performance on expensive cluster computers. In the last few years, the graphics processing unit (GPU) has received attention for its ability to solve highly-parallel and semi-parallel problems much faster than the traditional sequential processor. Some applications see a huge increase in performance on this new platform. The cost of these high-performance devices is also marginal when compared with the price of cluster machines. In this paper, we have conducted research to exploit this architecture and apply its strengths to the document flocking problem. Our results highlight the potential benefit the GPU brings to all naturally inspired algorithms. Using the CUDA platform from NVIDIA, we developed a document flocking implementation to be run on the NVIDIA GeForce 8800. Additionally, we developed a similar but sequential implementation of the same algorithm to be run on a desktop CPU. We tested the performance of each on groups of news articles ranging in size from 200 to 3000 documents. The results of these tests were very significant. Performance gains ranged from three to nearly five times improvement of the GPU over the CPU implementation. This dramatic improvement in runtime makes the GPU a potentially revolutionary platform for document clustering algorithms.
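
    A minimal CUDA sketch of one O(n²) flocking step, assuming a precomputed document-similarity matrix and double-buffered 2-D positions (all names and the similarity threshold are illustrative assumptions, not the paper's implementation):

      // Each thread moves one document toward documents it is similar to
      // and weakly away from dissimilar ones; positions are double-buffered
      // to avoid read/write races between threads.
      __global__ void flockStep(const float* sim,     // n x n similarities
                                const float2* pos,    // current positions
                                float2* posOut,       // updated positions
                                int n, float dt, float simThresh)
      {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i >= n) return;

          float vx = 0.0f, vy = 0.0f;
          for (int j = 0; j < n; ++j) {
              if (j == i) continue;
              float sign = (sim[i * n + j] > simThresh) ? 1.0f : -0.1f;
              vx += sign * (pos[j].x - pos[i].x);   // attract to similar docs
              vy += sign * (pos[j].y - pos[i].y);   // weakly repel the rest
          }
          posOut[i].x = pos[i].x + dt * vx / n;     // normalize by flock size
          posOut[i].y = pos[i].y + dt * vy / n;
      }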

  1. A Block-Asynchronous Relaxation Method for Graphics Processing Units

    SciTech Connect

    Anzt, Hartwig; Tomov, Stanimire; Dongarra, Jack; Heuveline, Vincent

    2011-11-30

    In this paper, we analyze the potential of asynchronous relaxation methods on Graphics Processing Units (GPUs). For this purpose, we developed a set of asynchronous iteration algorithms in CUDA and compared them with a parallel implementation of synchronous relaxation methods on CPU-based systems. For a set of test matrices taken from the University of Florida Matrix Collection we monitor the convergence behavior, the average iteration time and the total time-to-solution. Analyzing the results, we observe that even for our most basic asynchronous relaxation scheme, despite its lower convergence rate compared to the Gauss-Seidel relaxation (which we expected), the asynchronous iteration running on GPUs is still able to provide solution approximations of certain accuracy in considerably shorter time than Gauss-Seidel running on CPUs. Hence, it overcompensates for the slower convergence by exploiting the scalability and the good fit of the asynchronous schemes for the highly parallel GPU architectures. Further, enhancing the most basic asynchronous approach with hybrid schemes – using multiple iterations within the "subdomain" handled by a GPU thread block and Jacobi-like asynchronous updates across the "boundaries", subject to tuning various parameters – we manage to not only recover the loss of global convergence but often accelerate convergence by up to a factor of two (compared to the effective but difficult-to-parallelize Gauss-Seidel type of schemes), while keeping the execution time of a global iteration practically the same. This shows the high potential of the asynchronous methods not only as a stand-alone numerical solver for linear systems of equations fulfilling certain convergence conditions but, more importantly, as a smoother in multigrid methods. Due to the explosion of parallelism in today's architecture designs, the significance of and need for asynchronous methods such as the ones described in this work is expected to grow.
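
    The defining feature of the basic scheme is that updates proceed without global synchronization; a minimal CUDA sketch of such an unsynchronized Jacobi-type sweep over a dense system (a simplification of the authors' block-asynchronous method) is:

      // Each thread repeatedly relaxes one unknown using whatever values
      // of x are currently visible in global memory -- deliberately with
      // no synchronization between updates, the defining property of
      // asynchronous iteration. Dense storage is used for brevity.
      __global__ void asyncRelax(const double* A, const double* b,
                                 double* x, int n, int localIters)
      {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i >= n) return;

          for (int it = 0; it < localIters; ++it) {
              double sigma = 0.0;
              for (int j = 0; j < n; ++j)
                  if (j != i) sigma += A[(size_t)i * n + j] * x[j]; // stale reads allowed
              x[i] = (b[i] - sigma) / A[(size_t)i * n + i];         // unsynchronized write
          }
      }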

  2. Viscoelastic Finite Difference Modeling Using Graphics Processing Units

    NASA Astrophysics Data System (ADS)

    Fabien-Ouellet, G.; Gloaguen, E.; Giroux, B.

    2014-12-01

    Full waveform seismic modeling requires a huge amount of computing power that still challenges today's technology. This limits the applicability of powerful processing approaches in seismic exploration like full-waveform inversion. This paper explores the use of Graphics Processing Units (GPU) to compute a time-based finite-difference solution to the viscoelastic wave equation. The aim is to investigate whether the adoption of GPU technology can significantly reduce the computing time of simulations. The code presented herein is based on the freely accessible 2D software of Bohlen (2002), provided under the GNU General Public License. This implementation is based on a second-order centred difference scheme to approximate time derivatives and staggered-grid schemes with centred differences of order 2, 4, 6, 8, and 12 for spatial derivatives. The code is fully parallel and is written using the Message Passing Interface (MPI), and it thus supports simulations of vast seismic models on a cluster of CPUs. To port the code of Bohlen (2002) to GPUs, the OpenCL framework was chosen for its ability to work on both CPUs and GPUs and its adoption by most GPU manufacturers. In our implementation, OpenCL works in conjunction with MPI, which allows computations on a cluster of GPUs for large-scale model simulations. We tested our code for model sizes between 100² and 6000² elements. Comparison shows a decrease in computation time of more than two orders of magnitude between the GPU implementation run on an AMD Radeon HD 7950 and the CPU implementation run on a 2.26 GHz Intel Xeon Quad-Core. The speed-up varies depending on the order of the finite difference approximation and generally increases for higher orders. Increasing speed-ups are also obtained for increasing model size, which can be explained by kernel overheads and delays introduced by memory transfers to and from the GPU through the PCI-E bus. Those tests indicate that the GPU memory size
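
    As a structural illustration only (the paper treats the full viscoelastic case on staggered grids and uses OpenCL), a CUDA sketch of the second-order-in-time update that such finite-difference codes share:

      // Scalar-wave simplification of a second-order time-stepping update
      // on an (nx x nz) grid: c2dt2 holds (c*dt)^2 per cell, h is the
      // grid spacing; pPrev/pCur/pNext are rotated by the host each step.
      __global__ void waveStep(const float* pPrev, const float* pCur,
                               float* pNext, const float* c2dt2,
                               int nx, int nz, float h)
      {
          int ix = blockIdx.x * blockDim.x + threadIdx.x;
          int iz = blockIdx.y * blockDim.y + threadIdx.y;
          if (ix < 1 || ix >= nx - 1 || iz < 1 || iz >= nz - 1) return;

          int id = iz * nx + ix;
          // second-order centred Laplacian in space
          float lap = (pCur[id - 1] + pCur[id + 1] +
                       pCur[id - nx] + pCur[id + nx] - 4.0f * pCur[id]) / (h * h);
          // second-order centred difference in time
          pNext[id] = 2.0f * pCur[id] - pPrev[id] + c2dt2[id] * lap;
      }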

  3. Graphic Arts: Process Camera, Stripping, and Platemaking. Fourth Edition. Teacher Edition [and] Student Edition.

    ERIC Educational Resources Information Center

    Multistate Academic and Vocational Curriculum Consortium, Stillwater, OK.

    This publication contains both a teacher edition and a student edition of materials for a course in graphic arts that covers the process camera, stripping, and platemaking. The course introduces basic concepts and skills necessary for entry-level employment in a graphic communication occupation. The contents of the materials are tied to measurable…

  4. Real-time radar signal processing using GPGPU (general-purpose graphic processing unit)

    NASA Astrophysics Data System (ADS)

    Kong, Fanxing; Zhang, Yan Rockee; Cai, Jingxiao; Palmer, Robert D.

    2016-05-01

    This study introduces a practical approach to developing a real-time signal processing chain for general phased array radar on NVIDIA GPUs (graphics processing units) using CUDA (Compute Unified Device Architecture) libraries such as cuBLAS and cuFFT, which are adopted from open source libraries and optimized for NVIDIA GPUs. The processed results are rigorously verified against those from the CPUs. Performance, benchmarked in computation time with various input data cube sizes, is compared across GPUs and CPUs. Through the analysis, it is demonstrated that GPGPU (general-purpose GPU) real-time processing of array radar data is possible with relatively low-cost commercial GPUs.
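
    A chain of this kind typically begins with batched transforms across pulses; a minimal host-side cuFFT sketch (the function and parameter names here are illustrative assumptions) is:

      #include <cufft.h>
      #include <cuda_runtime.h>

      // Batched FFT across radar pulses: one 1-D complex-to-complex
      // transform of length nRange, repeated nPulse times, in place.
      void batchedPulseFFT(cufftComplex* d_data, int nRange, int nPulse)
      {
          cufftHandle plan;
          cufftPlan1d(&plan, nRange, CUFFT_C2C, nPulse);
          cufftExecC2C(plan, d_data, d_data, CUFFT_FORWARD);
          cudaDeviceSynchronize();   // wait for the transforms to finish
          cufftDestroy(plan);
      }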

  5. Software Graphics Processing Unit (sGPU) for Deep Space Applications

    NASA Technical Reports Server (NTRS)

    McCabe, Mary; Salazar, George; Steele, Glen

    2015-01-01

    A graphics processing capability will be required for deep space missions and must include a range of applications, from safety-critical vehicle health status to telemedicine for crew health. However, preliminary radiation testing of commercial graphics processing cards suggests they cannot operate in the deep space radiation environment. Investigation into a Software Graphics Processing Unit (sGPU) composed of commercial-equivalent radiation hardened/tolerant single board computers, field programmable gate arrays, and safety-critical display software shows promising results. Preliminary performance of approximately 30 frames per second (FPS) has been achieved. Use of multi-core processors may provide a significant increase in performance.

  6. Commercial Off-The-Shelf (COTS) Graphics Processing Board (GPB) Radiation Test Evaluation Report

    NASA Technical Reports Server (NTRS)

    Salazar, George A.; Steele, Glen F.

    2013-01-01

    Large round trip communications latency for deep space missions will require more onboard computational capabilities to enable the space vehicle to undertake many tasks that have traditionally been ground-based, mission control responsibilities. As a result, visual display graphics will be required to provide simpler vehicle situational awareness through graphical representations, as well as provide capabilities never before done in a space mission, such as augmented reality for in-flight maintenance or telepresence activities. These capabilities will require graphics processors and associated support electronic components for high computational graphics processing. In an effort to understand the performance of commercial graphics card electronics operating in the expected radiation environment, a preliminary test was performed on five commercial off-the-shelf (COTS) graphics cards. This paper discusses the preliminary evaluation test results of five COTS graphics processing cards tested to the International Space Station (ISS) low earth orbit radiation environment. Three of the five graphics cards were tested to a total dose of 6000 rads (Si). The test articles, test configuration, preliminary results, and recommendations are discussed.

  7. Grace: A cross-platform micromagnetic simulator on graphics processing units

    NASA Astrophysics Data System (ADS)

    Zhu, Ru

    2015-12-01

    A micromagnetic simulator running on graphics processing units (GPUs) is presented. Different from GPU implementations of other research groups, which are predominantly running on NVidia's CUDA platform, this simulator is developed with C++ Accelerated Massive Parallelism (C++ AMP) and is hardware platform independent. It runs on GPUs from vendors including NVidia, AMD and Intel, and achieves significant performance boosts compared to previous central processing unit (CPU) simulators, up to two orders of magnitude. The simulator paves the way for running large micromagnetic simulations on both high-end workstations with dedicated graphics cards and low-end personal computers with integrated graphics cards, and is freely available to download.

  8. Development of a Flow Solver with Complex Kinetics on the Graphic Processing Units

    DTIC Science & Technology

    2011-09-22

    Physics 109, 11 (2011), 113308. [9] Klockner, A., Warburton, T., Bridge, J., and Hesthaven, J. Nodal Discontinuous Galerkin Methods on Graphics... Graphic Processing Units (GPU) to model reactive gas mixture with detailed chemical kinetics. The solver incorporates high-order finite volume methods... method. We explored different approaches in implementing a fast kinetics solver on the GPU. The detail of the implementation is discussed in the

  9. Mobile Devices and GPU Parallelism in Ionospheric Data Processing

    NASA Astrophysics Data System (ADS)

    Mascharka, D.; Pankratius, V.

    2015-12-01

    Scientific data acquisition in the field is often constrained by data transfer backchannels to analysis environments. Geoscientists are therefore facing practical bottlenecks with increasing sensor density and variety. Mobile devices, such as smartphones and tablets, offer promising solutions to key problems in scientific data acquisition, pre-processing, and validation by providing advanced capabilities in the field. This is due to affordable network connectivity options and the increasing mobile computational power. This contribution exemplifies a scenario faced by scientists in the field and presents the "Mahali TEC Processing App" developed in the context of the NSF-funded Mahali project. Aimed at atmospheric science and the study of ionospheric Total Electron Content (TEC), this app is able to gather data from various dual-frequency GPS receivers. It demonstrates parsing of full-day RINEX files on mobile devices and on-the-fly computation of vertical TEC values based on satellite ephemeris models that are obtained from NASA. Our experiments show how parallel computing on the mobile device GPU enables fast processing and visualization of up to 2 million datapoints in real-time using OpenGL. GPS receiver bias is estimated through minimum TEC approximations that can be interactively adjusted by scientists in the graphical user interface. Scientists can also perform approximate computations for "quickviews" to reduce CPU processing time and memory consumption. In the final stage of our mobile processing pipeline, scientists can upload data to the cloud for further processing. Acknowledgements: The Mahali project (http://mahali.mit.edu) is funded by the NSF INSPIRE grant no. AGS-1343967 (PI: V. Pankratius). We would like to acknowledge our collaborators at Boston College, Virginia Tech, Johns Hopkins University, Colorado State University, as well as the support of UNAVCO for loans of dual-frequency GPS receivers for use in this project, and Intel for loans of
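
    The TEC values such a pipeline computes start from the standard dual-frequency (geometry-free) combination of GPS observables; a minimal sketch, ignoring the receiver and satellite biases that the app estimates separately:

      // Slant TEC from dual-frequency GPS pseudoranges P1, P2 (meters).
      // Result is in TEC units (1 TECU = 1e16 electrons/m^2); receiver
      // and satellite biases are deliberately ignored in this sketch.
      double slantTEC(double P1, double P2)
      {
          const double f1 = 1575.42e6;   // GPS L1 frequency, Hz
          const double f2 = 1227.60e6;   // GPS L2 frequency, Hz
          double k = (f1 * f1 * f2 * f2) / (40.3 * (f1 * f1 - f2 * f2));
          return k * (P2 - P1) / 1.0e16; // electrons/m^2 -> TECU
      }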

  10. Student Thinking Processes While Constructing Graphic Representations of Textbook Content: What Insights Do Think-Alouds Provide?

    ERIC Educational Resources Information Center

    Scott, D. Beth; Dreher, Mariam Jean

    2016-01-01

    This study examined the thinking processes students engage in while constructing graphic representations of textbook content. Twenty-eight students who either used graphic representations in a routine manner during social studies instruction or learned to construct graphic representations based on the rhetorical patterns used to organize textbook…

  11. Acceleration of integral imaging based incoherent Fourier hologram capture using graphic processing unit.

    PubMed

    Jeong, Kyeong-Min; Kim, Hee-Seung; Hong, Sung-In; Lee, Sung-Keun; Jo, Na-Young; Kim, Yong-Soo; Lim, Hong-Gi; Park, Jae-Hyeung

    2012-10-08

    Speed enhancement of integral imaging based incoherent Fourier hologram capture using a graphic processing unit is reported. The integral imaging based method enables exact hologram capture of real-existing three-dimensional objects under regular incoherent illumination. In our implementation, we apply a parallel computation scheme using the graphic processing unit, accelerating the processing speed. Using the enhanced speed of hologram capture, we also implement a pseudo real-time hologram capture and optical reconstruction system. The overall operation speed is measured to be 1 frame per second.

  12. High Throughput Transmission Optical Projection Tomography Using Low Cost Graphics Processing Unit

    PubMed Central

    Vinegoni, Claudio; Fexon, Lyuba; Feruglio, Paolo Fumene; Pivovarov, Misha; Figueiredo, Jose-Luiz; Nahrendorf, Matthias; Pozzo, Antonio; Sbarbati, Andrea; Weissleder, Ralph

    2009-01-01

    We implement the use of a graphics processing unit (GPU) in order to achieve real-time data processing for high-throughput transmission optical projection tomography imaging. By implementing the GPU we have obtained a 300-fold performance enhancement in comparison to a CPU workstation implementation. This enables on-the-fly reconstructions, allowing for high-throughput imaging. PMID:20052155

  13. Graphical user interface for image acquisition and processing

    DOEpatents

    Goldberg, Kenneth A.

    2002-01-01

    An event-driven GUI-based image acquisition interface for the IDL programming environment designed for CCD camera control and image acquisition directly into the IDL environment, where image manipulation and data analysis can be performed, together with a toolbox of real-time analysis applications. Running the image acquisition hardware directly from IDL removes the necessity of first saving images in one program and then importing the data into IDL for analysis in a second step. Bringing the data directly into IDL creates an opportunity for the implementation of IDL image processing and display functions in real-time. The program allows control over the available charge-coupled device (CCD) detector parameters, data acquisition, file saving and loading, and image manipulation and processing, all from within IDL. The program is built using IDL's widget libraries to control the on-screen display and user interface.

  14. Accelerating Malware Detection via a Graphics Processing Unit

    DTIC Science & Technology

    2010-09-01

    Processing Unit... PE Portable Executable... COFF Common Object File Format... operating systems for the future [Szo05]. The PE format is an updated version of the common object file format (COFF) [Mic06]. Microsoft released a new... pro.mspx, Accessed July 2010, 2001. Mic06. Microsoft. Common object file format (COFF). MSDN, November 2006. Revision 4.1. Mic07a. Microsoft

  15. Parallel computing with graphics processing units for high-speed Monte Carlo simulation of photon migration.

    PubMed

    Alerstam, Erik; Svensson, Tomas; Andersson-Engels, Stefan

    2008-01-01

    General-purpose computing on graphics processing units (GPGPU) is shown to dramatically increase the speed of Monte Carlo simulations of photon migration. In a standard simulation of time-resolved photon migration in a semi-infinite geometry, the proposed methodology executed on a low-cost graphics processing unit (GPU) is a factor of 1000 faster than simulation performed on a single standard processor. In addition, we address important technical aspects of GPU-based simulations of photon migration. The technique is expected to become a standard method in Monte Carlo simulations of photon migration.
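
    A heavily simplified sketch of the photon random walk underlying such simulations, written with the cuRAND device API (boundaries, scattering anisotropy and time gating, all essential in the real code, are omitted here):

      #include <curand_kernel.h>

      // Each thread propagates one photon along the depth axis of an
      // infinite homogeneous medium, depositing absorbed weight into a
      // depth-resolved tally (nBins bins of width dz).
      __global__ void photonWalk(float* absorbedByDepth, int nBins, float dz,
                                 int nPhotons, float mua, float mus,
                                 unsigned long long seed)
      {
          int id = blockIdx.x * blockDim.x + threadIdx.x;
          if (id >= nPhotons) return;

          curandState rng;
          curand_init(seed, id, 0, &rng);

          float mut = mua + mus;   // total interaction coefficient
          float w = 1.0f;          // photon weight
          float z = 0.0f;          // depth coordinate
          while (w > 1e-4f) {
              float s = -logf(curand_uniform(&rng)) / mut;   // exponential step
              z += s * (2.0f * curand_uniform(&rng) - 1.0f); // random depth move
              float dep = w * (mua / mut);                   // absorbed fraction
              int bin = (int)fabsf(z / dz);
              if (bin < nBins) atomicAdd(&absorbedByDepth[bin], dep);
              w -= dep;                                      // remainder scatters
          }
      }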

  16. ReaDDyMM: Fast Interacting Particle Reaction-Diffusion Simulations Using Graphical Processing Units

    PubMed Central

    Biedermann, Johann; Ullrich, Alexander; Schöneberg, Johannes; Noé, Frank

    2015-01-01

    ReaDDy is a modular particle simulation package combining off-lattice reaction kinetics with arbitrary particle interaction forces. Here we present a graphical processing unit implementation of ReaDDy that employs the fast multiplatform molecular dynamics package OpenMM. A speedup of up to two orders of magnitude is demonstrated, giving us access to timescales of multiple seconds on single graphical processing units. This opens up the possibility of simulating cellular signal transduction events while resolving all protein copies. PMID:25650912

  17. Text and Illustration Processing System (TIPS) User’s Manual. Volume II. Graphics Processing System.

    DTIC Science & Technology

    1981-08-01

    ... eliminating unwanted edge data, it permits any rectangular area of a graphic to be extracted into a separate graphic file. Any graphic file may be input to

  18. A graphically oriented specification language for automatic code generation. GRASP/Ada: A Graphical Representation of Algorithms, Structure, and Processes for Ada, phase 1

    NASA Technical Reports Server (NTRS)

    Cross, James H., II; Morrison, Kelly I.; May, Charles H., Jr.; Waddel, Kathryn C.

    1989-01-01

    The first phase of a three-phase effort to develop a new graphically oriented specification language which will facilitate the reverse engineering of Ada source code into graphical representations (GRs) as well as the automatic generation of Ada source code is described. A simplified view of the three phases of Graphical Representations for Algorithms, Structure, and Processes for Ada (GRASP/Ada) with respect to three basic classes of GRs is presented. Phase 1 concentrated on the derivation of an algorithmic diagram, the control structure diagram (CSD) (CRO88a) from Ada source code or Ada PDL. Phase 2 includes the generation of architectural and system level diagrams such as structure charts and data flow diagrams and should result in a requirements specification for a graphically oriented language able to support automatic code generation. Phase 3 will concentrate on the development of a prototype to demonstrate the feasibility of this new specification language.

  19. Validation of Mobility of Pedestrians with Low Vision Using Graphic Floor Signs and Voice Guides.

    PubMed

    Omori, Kiyohiro; Yanagihara, Takao; Kitagawa, Hiroshi; Ikeda, Norihiro

    2015-01-01

    Some people with low vision, as well as many elderly persons, tend to watch the floor nearby while walking; as a result, they often overlook suspended signs or find them hard to read. In this study, we propose two kinds of voice guides, and an experiment was conducted with low-vision participants using these voice guides together with graphic floor signs in order to investigate the effectiveness of the combinations. In the clock position method (CP), the direction of each nearby facility is described using the analogy of a 12-hour clock. In the numbering method (NU), nearby facilities are numbered in clockwise order, but their directions are illustrated only on a crossing sign. The experiment showed that both voice guides are effective for pedestrians with low vision. NU serves as a complement to graphic floor signs, whereas CP can be used independently of them; however, CP carries a risk in environments where pedestrians may mistake the reference direction defined by the sounding speaker.

  20. [Mobile phone-computer wireless interactive graphics transmission technology and its medical application].

    PubMed

    Huang, Shuo; Liu, Jing

    2010-05-01

    Application of clinical digital medical imaging has raised many tough issues to tackle, such as data storage, management, and information sharing. Here we investigated a mobile phone based medical image management system capable of personal medical imaging information storage, management, and comprehensive health information analysis. The technologies related to the management system were discussed, spanning wireless transmission technology, the technical capabilities of the phone in mobile health care, and the management of a mobile medical database. Taking medical infrared image transmission between phone and computer as an example, the working principle of the present system was demonstrated.

  1. A DDC Bibliography on Optical or Graphic Information Processing (Information Sciences Series). Volume I.

    ERIC Educational Resources Information Center

    Defense Documentation Center, Alexandria, VA.

    This unclassified-unlimited bibliography contains 183 references, with abstracts, dealing specifically with optical or graphic information processing. Citations are grouped under three headings: display devices and theory, character recognition, and pattern recognition. Within each group, they are arranged in accession number (AD-number) sequence.…

  2. Parallelized CCHE2D flow model with CUDA Fortran on Graphics Processing Units

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper presents the CCHE2D implicit flow model parallelized using the CUDA Fortran programming technique on Graphics Processing Units (GPUs). A parallelized Alternating Direction Implicit (ADI) solver using the Parallel Cyclic Reduction (PCR) algorithm on the GPU is developed and tested. This solve...

  3. Mesh-particle interpolations on graphics processing units and multicore central processing units.

    PubMed

    Rossinelli, Diego; Conti, Christian; Koumoutsakos, Petros

    2011-06-13

    Particle-mesh interpolations are fundamental operations for particle-in-cell codes, as implemented in vortex methods, plasma dynamics and electrostatics simulations. In these simulations, the mesh is used to solve the field equations and the gradients of the fields are used in order to advance the particles. The time integration of particle trajectories is performed through an extensive resampling of the flow field at the particle locations. The computational performance of this resampling turns out to be limited by the memory bandwidth of the underlying computer architecture. We investigate how mesh-particle interpolation can be efficiently performed on graphics processing units (GPUs) and multicore central processing units (CPUs), and we present two implementation techniques. The single-precision results for the multicore CPU implementation show an acceleration of 45-70×, depending on system size, and an acceleration of 85-155× for the GPU implementation over an efficient single-threaded C++ implementation. In double precision, we observe a performance improvement of 30-40× for the multicore CPU implementation and 20-45× for the GPU implementation. With respect to the 16-threaded standard C++ implementation, the present CPU technique leads to a performance increase of roughly 2.8-3.7× in single precision and 1.7-2.4× in double precision, whereas the GPU technique leads to an improvement of 9× in single precision and 2.2-2.8× in double precision.
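
    A minimal CUDA sketch of the mesh-to-particle direction of such interpolations, with bilinear weights on a 2-D mesh (the particle-to-mesh scatter, which needs atomics or mesh colouring, is omitted; names are illustrative):

      // Each thread resamples the mesh field at one particle location
      // using bilinear interpolation; h is the mesh spacing.
      __global__ void m2pBilinear(const float* field,  // nx x ny mesh values
                                  const float2* p,     // particle positions
                                  float* fp,           // interpolated output
                                  int nPart, int nx, int ny, float h)
      {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i >= nPart) return;

          float gx = p[i].x / h, gy = p[i].y / h;      // grid-unit position
          int ix = (int)floorf(gx), iy = (int)floorf(gy);
          ix = max(0, min(ix, nx - 2));                // clamp to interior cells
          iy = max(0, min(iy, ny - 2));
          float fx = gx - ix, fy = gy - iy;            // fractional offsets

          float v00 = field[iy * nx + ix];
          float v10 = field[iy * nx + ix + 1];
          float v01 = field[(iy + 1) * nx + ix];
          float v11 = field[(iy + 1) * nx + ix + 1];
          fp[i] = (1.0f - fx) * (1.0f - fy) * v00 + fx * (1.0f - fy) * v10
                + (1.0f - fx) * fy * v01 + fx * fy * v11;
      }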

  4. Graphics to H.264 video encoding for 3D scene representation and interaction on mobile devices using region of interest

    NASA Astrophysics Data System (ADS)

    Le, Minh Tuan; Nguyen, Congdu; Yoon, Dae-Il; Jung, Eun Ku; Jia, Jie; Kim, Hae-Kwang

    2007-12-01

    In this paper, we propose a method of 3D graphics to video encoding and streaming that is embedded into a remote interactive 3D visualization system for rapidly representing a 3D scene on mobile devices without having to download it from the server. In particular, a 3D graphics to video framework is presented that increases the visual quality of regions of interest (ROI) of the video by allocating more bits to the ROI during H.264 video encoding. The ROI are identified by projecting 3D objects onto a 2D plane during rasterization. The system allows users to navigate the 3D scene and interact with objects of interest to query their descriptions. We developed an adaptive media streaming server that can provide an adaptive video stream, in terms of object-based quality, to the client according to the user's preferences and the variation of network bandwidth. Results show that with ROI mode selection, the overall PSNR of the test samples changes only slightly, while the visual quality of the objects of interest increases markedly.
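
    A hypothetical sketch of the bit-allocation idea: macroblocks flagged as ROI during rasterization receive a lower quantization parameter (QP), and hence more bits, than the background (qpBase and qpDelta are assumed parameters, not values from the paper):

      // Assign a per-macroblock QP from an ROI mask; a lower QP means
      // finer quantization and more bits. 0..51 is the H.264 QP range.
      __global__ void assignRoiQP(const unsigned char* roiMask, // 1 = ROI MB
                                  int* qp, int nMB,
                                  int qpBase, int qpDelta)
      {
          int mb = blockIdx.x * blockDim.x + threadIdx.x;
          if (mb >= nMB) return;
          int q = roiMask[mb] ? qpBase - qpDelta : qpBase + qpDelta;
          qp[mb] = min(51, max(0, q));   // clamp to the valid QP range
      }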

  5. Graphical enhancement to support PCA-based process monitoring and fault diagnosis.

    PubMed

    Ralston, Patricia; DePuy, Gail; Graham, James H

    2004-10-01

    Principal component analysis (PCA) for process modeling and multivariate statistical techniques for monitoring, fault detection, and diagnosis are becoming more common in published research, but are still underutilized in practice. This paper summarizes an in-depth case study on a chemical process with 20 monitored process variables, one of which reflects product quality. The analysis is performed using the PLS_Toolbox 2.01 with MATLAB, augmented with software which automates the analysis and implements a statistical enhancement that uses confidence limits on the residuals of each variable for fault detection, rather than just confidence limits on an overall residual. The newly developed graphical interface identifies and displays each variable's contribution to the faulty behavior of the process, and it aids greatly in analyzing results. The case study shows that using the statistical enhancement can reduce the fault detection time, and the automated graphical interface implements the enhancement easily.

  6. Use of a graphics processing unit (GPU) to facilitate real-time 3D graphic presentation of the patient skin-dose distribution during fluoroscopic interventional procedures

    NASA Astrophysics Data System (ADS)

    Rana, Vijay; Rudin, Stephen; Bednarek, Daniel R.

    2012-03-01

    We have developed a dose-tracking system (DTS) that calculates the radiation dose to the patient's skin in realtime by acquiring exposure parameters and imaging-system-geometry from the digital bus on a Toshiba Infinix C-arm unit. The cumulative dose values are then displayed as a color map on an OpenGL-based 3D graphic of the patient for immediate feedback to the interventionalist. Determination of those elements on the surface of the patient 3D-graphic that intersect the beam and calculation of the dose for these elements in real time demands fast computation. Reducing the size of the elements results in more computation load on the computer processor and therefore a tradeoff occurs between the resolution of the patient graphic and the real-time performance of the DTS. The speed of the DTS for calculating dose to the skin is limited by the central processing unit (CPU) and can be improved by using the parallel processing power of a graphics processing unit (GPU). Here, we compare the performance speed of GPU-based DTS software to that of the current CPU-based software as a function of the resolution of the patient graphics. Results show a tremendous improvement in speed using the GPU. While an increase in the spatial resolution of the patient graphics resulted in slowing down the computational speed of the DTS on the CPU, the speed of the GPU-based DTS was hardly affected. This GPU-based DTS can be a powerful tool for providing accurate, real-time feedback about patient skin-dose to physicians while performing interventional procedures.

  8. Parallel computing for simultaneous iterative tomographic imaging by graphics processing units

    NASA Astrophysics Data System (ADS)

    Bello-Maldonado, Pedro D.; López, Ricardo; Rogers, Colleen; Jin, Yuanwei; Lu, Enyue

    2016-05-01

    In this paper, we address the problem of accelerating inversion algorithms for nonlinear acoustic tomographic imaging by parallel computing on graphics processing units (GPUs). Nonlinear inversion algorithms for tomographic imaging often rely on iterative algorithms for solving an inverse problem and are thus computationally intensive. We study the simultaneous iterative reconstruction technique (SIRT) for the multiple-input-multiple-output (MIMO) tomography algorithm, which enables parallel computation over the grid points as well as parallel execution of multiple source excitations. Using graphics processing units (GPUs) and the Compute Unified Device Architecture (CUDA) programming model, an overall improvement of 26.33x was achieved when combining both approaches, compared with sequential algorithms. Furthermore, we propose an adaptive iterative relaxation factor and the use of non-uniform weights to improve the overall convergence of the algorithm. Using these techniques, fast computations can be performed in parallel without loss of image quality during the reconstruction process.
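
    The per-cell update at the heart of such a scheme is simple; a sketch assuming the backprojected, row/column-normalized residual has already been computed into corr (names hypothetical), with the paper's adaptive relaxation factor lambda and non-uniform per-cell weights wgt:

      // One relaxed SIRT update per grid point:
      //   x <- x + lambda * wgt .* corr,  corr ~ normalized A^T (b - A x)
      __global__ void sirtUpdate(float* x, const float* corr,
                                 const float* wgt, float lambda, int n)
      {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i >= n) return;
          x[i] += lambda * wgt[i] * corr[i];
      }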

  9. Graphics Processing Unit-Based Bioheat Simulation to Facilitate Rapid Decision Making Associated with Cryosurgery Training.

    PubMed

    Keelan, Robert; Zhang, Hong; Shimada, Kenji; Rabin, Yoed

    2016-04-01

    This study focuses on the implementation of an efficient numerical technique for cryosurgery simulations on a graphics processing unit as an alternative means of shortening runtime. This study is part of an ongoing effort to develop computerized training tools for cryosurgery, with prostate cryosurgery as a developmental model. The ability to perform rapid simulations of various test cases is critical to facilitate sound decision making associated with medical training. Consistent with clinical practice, the training tool aims at correlating the frozen region contour and the corresponding temperature field with the target region shape. The current study focuses on the feasibility of graphics processing unit-based computation using C++ Accelerated Massive Parallelism (C++ AMP) as one possible implementation. Benchmark results on a variety of computation platforms display between 3-fold acceleration (laptop) and 13-fold acceleration (gaming computer) of cryosurgery simulation, in comparison with the more common implementation on a multicore central processing unit. While the general concept of graphics processing unit-based simulations is not new, its application to phase-change problems, combined with the unique requirements of cryosurgery optimization, represents the core contribution of the current study.

  10. Impact of memory bottleneck on the performance of graphics processing units

    NASA Astrophysics Data System (ADS)

    Son, Dong Oh; Choi, Hong Jun; Kim, Jong Myon; Kim, Cheol Hong

    2015-12-01

    Recent graphics processing units (GPUs) can process general-purpose applications as well as graphics applications with the help of various user-friendly application programming interfaces (APIs) supported by GPU vendors. Unfortunately, utilizing the hardware resources of the GPU efficiently is a challenging problem, since the GPU architecture is totally different from the traditional CPU architecture. To solve this problem, many studies have focused on techniques for improving system performance using GPUs. In this work, we analyze GPU performance while varying GPU parameters such as the number of cores and clock frequency. According to our simulations, GPU performance can be improved by 125.8% and 16.2% on average as the number of cores and clock frequency increase, respectively. However, the performance saturates when memory bottleneck problems occur due to huge data requests to the memory. The performance of GPUs can be improved as the memory bottleneck is reduced by changing GPU parameters dynamically.

  11. iMOSFLM: a new graphical interface for diffraction-image processing with MOSFLM

    PubMed Central

    Battye, T. Geoff G.; Kontogiannis, Luke; Johnson, Owen; Powell, Harold R.; Leslie, Andrew G. W.

    2011-01-01

    iMOSFLM is a graphical user interface to the diffraction data-integration program MOSFLM. It is designed to simplify data processing by dividing the process into a series of steps, which are normally carried out sequentially. Each step has its own display pane, allowing control over parameters that influence that step and providing graphical feedback to the user. Suitable values for integration parameters are set automatically, but additional menus provide a detailed level of control for experienced users. The image display and the interfaces to the different tasks (indexing, strategy calculation, cell refinement, integration and history) are described. The most important parameters for each step and the best way of assessing success or failure are discussed. PMID:21460445

  12. TRIIG - Time-lapse reproduction of images through interactive graphics. [digital processing of quality hard copy

    NASA Technical Reports Server (NTRS)

    Buckner, J. D.; Council, H. W.; Edwards, T. R.

    1974-01-01

    Description of the hardware and software implementing the system of time-lapse reproduction of images through interactive graphics (TRIIG). The system produces a quality hard copy of processed images in a fast and inexpensive manner. This capability allows for optimal development of processing software through the rapid viewing of many image frames in an interactive mode. Three critical optical devices are used to reproduce an image: an Optronics photo reader/writer, the Adage Graphics Terminal, and Polaroid Type 57 high speed film. Typical sources of digitized images are observation satellites, such as ERTS or Mariner, computer coupled electron microscopes for high-magnification studies, or computer coupled X-ray devices for medical research.

  13. Finite Element Optimization for Nondestructive Evaluation on a Graphics Processing Unit for Ground Vehicle Hull Inspection

    DTIC Science & Technology

    2013-08-22

    University of Toledo, Toledo, OH; US Army TARDEC, Warren, MI. UNCLASSIFIED: Distribution Statement A, Approved for Public Release. This is a reprint of the paper presented under the same title in the... Finite Element Optimization for Nondestructive Evaluation on a Graphics Processing Unit for Ground Vehicle Hull Inspection. Victor U. Karthik, ECE Department, Michigan State University, East Lansing, MI 48824

  14. Signal processing for ION mobility spectrometers

    NASA Technical Reports Server (NTRS)

    Taylor, S.; Hinton, M.; Turner, R.

    1995-01-01

    Signal processing techniques for systems based upon Ion Mobility Spectrometry will be discussed in the light of 10 years of experience in the design of real-time IMS. Among the topics to be covered are compensation techniques for variations in the number density of the gas - the use of an internal standard (a reference peak) or pressure and temperature sensors. Sources of noise and methods for noise reduction will be discussed together with resolution limitations and the ability of deconvolution techniques to improve resolving power. The use of neural networks (either by themselves or as a component part of a processing system) will be reviewed.

  15. Fast high-resolution computer-generated hologram computation using multiple graphics processing unit cluster system.

    PubMed

    Takada, Naoki; Shimobaba, Tomoyoshi; Nakayama, Hirotaka; Shiraki, Atsushi; Okada, Naohisa; Oikawa, Minoru; Masuda, Nobuyuki; Ito, Tomoyoshi

    2012-10-20

    To overcome the computational complexity of a computer-generated hologram (CGH), we implement an optimized CGH computation in our multi-graphics processing unit cluster system. Our system can calculate a CGH of 6,400×3,072 pixels from a three-dimensional (3D) object composed of 2,048 points in 55 ms. Furthermore, in the case of a 3D object composed of 4096 points, our system is 553 times faster than a conventional central processing unit (using eight threads).
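
    The textbook point-source accumulation that such CGH systems parallelize assigns one thread per hologram pixel; a minimal single-GPU sketch (the paper's cluster implementation additionally splits the hologram across GPUs):

      // Each thread sums the real part of the spherical waves emitted by
      // all object points at one hologram pixel; pitch is the pixel size
      // and lambda the wavelength.
      __global__ void cghKernel(const float4* pts,  // x, y, z, amplitude
                                float* holo, int nPts,
                                int width, int height,
                                float pitch, float lambda)
      {
          int px = blockIdx.x * blockDim.x + threadIdx.x;
          int py = blockIdx.y * blockDim.y + threadIdx.y;
          if (px >= width || py >= height) return;

          float xh = (px - 0.5f * width) * pitch;   // pixel position on hologram
          float yh = (py - 0.5f * height) * pitch;
          float k = 2.0f * 3.14159265f / lambda;    // wavenumber

          float acc = 0.0f;
          for (int n = 0; n < nPts; ++n) {
              float dx = xh - pts[n].x;
              float dy = yh - pts[n].y;
              float r = sqrtf(dx * dx + dy * dy + pts[n].z * pts[n].z);
              acc += pts[n].w * cosf(k * r);        // point-source wave
          }
          holo[py * width + px] = acc;
      }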

  16. Graphics processing unit-based quantitative second-harmonic generation imaging

    NASA Astrophysics Data System (ADS)

    Kabir, Mohammad Mahfuzul; Jonayat, ASM; Patel, Sanjay; Toussaint, Kimani C., Jr.

    2014-09-01

    We adapt a graphics processing unit (GPU) to dynamic quantitative second-harmonic generation imaging. We demonstrate the temporal advantage of the GPU-based approach by computing the number of frames analyzed per second from SHG image videos showing varying fiber orientations. In comparison to our previously reported CPU-based approach, our GPU-based image analysis results in a ~10× improvement in computational time. This work can be adapted to other quantitative, nonlinear imaging techniques and provides a significant step toward obtaining quantitative information from fast in vivo biological processes.

  17. Using a commercial graphical processing unit and the CUDA programming language to accelerate scientific image processing applications

    NASA Astrophysics Data System (ADS)

    Broussard, Randy P.; Ives, Robert W.

    2011-01-01

    In the past two years the processing power of video graphics cards has quadrupled and is approaching supercomputer levels. State-of-the-art graphical processing units (GPU) boast theoretical computational performance in the range of 1.5 trillion floating point operations per second (1.5 teraflops). This processing power is readily accessible to the scientific community at a relatively small cost. High-level programming languages are now available that give access to the internal architecture of the graphics card, allowing greater algorithm optimization. This research takes memory-access-expensive portions of an image-based iris identification algorithm and hosts them on a GPU using the C++ compatible CUDA language. The selected segmentation algorithm uses basic image processing techniques such as image inversion, value squaring, thresholding, dilation, erosion and memory/computationally intensive calculations such as the circular Hough transform. Portions of the iris segmentation algorithm were accelerated by a factor of 77 over the 2008 GPU results. Some parts of the algorithm ran at speeds that were over 1600 times faster than their CPU counterparts. Strengths and limitations of the GPU Single Instruction Multiple Data architecture are discussed. Memory access times, instruction execution times, programming details and code samples are presented as part of the research.

  18. Systems Biology Graphical Notation: Process Description language Level 1 Version 1.3.

    PubMed

    Moodie, Stuart; Le Novère, Nicolas; Demir, Emek; Mi, Huaiyu; Villéger, Alice

    2015-09-04

    The Systems Biology Graphical Notation (SBGN) is an international community effort for standardized graphical representations of biological pathways and networks. The goal of SBGN is to provide unambiguous pathway and network maps for readers with different scientific backgrounds as well as to support efficient and accurate exchange of biological knowledge between different research communities, industry, and other players in systems biology. Three SBGN languages, Process Description (PD), Entity Relationship (ER) and Activity Flow (AF), allow for the representation of different aspects of biological and biochemical systems at different levels of detail. The SBGN Process Description language represents biological entities and processes between these entities within a network. SBGN PD focuses on the mechanistic description and temporal dependencies of biological interactions and transformations. The nodes (elements) are split into entity nodes describing, e.g., metabolites, proteins, genes and complexes, and process nodes describing, e.g., reactions and associations. The edges (connections) provide descriptions of relationships (or influences) between the nodes, such as consumption, production, stimulation and inhibition. Among all three languages of SBGN, PD is the closest to metabolic and regulatory pathways in biological literature and textbooks, but its well-defined semantics offer a superior precision in expressing biological knowledge.

  19. Creating Interactive Graphical Overlays in the Advanced Weather Interactive Processing System Using Shapefiles and DGM Files

    NASA Technical Reports Server (NTRS)

    Barrett, Joe H., III; Lafosse, Richard; Hood, Doris; Hoeth, Brian

    2007-01-01

    Graphical overlays can be created in real-time in the Advanced Weather Interactive Processing System (AWIPS) using shapefiles or Denver AWIPS Risk Reduction and Requirements Evaluation (DARE) Graphics Metafile (DGM) files. This presentation describes how to create graphical overlays on-the-fly for AWIPS, by using two examples of AWIPS applications that were created by the Applied Meteorology Unit (AMU) located at Cape Canaveral Air Force Station (CCAFS), Florida. The first example is the Anvil Threat Corridor Forecast Tool, which produces a shapefile that depicts a graphical threat corridor of the forecast movement of thunderstorm anvil clouds, based on the observed or forecast upper-level winds. This tool is used by the Spaceflight Meteorology Group (SMG) at Johnson Space Center, Texas and 45th Weather Squadron (45 WS) at CCAFS to analyze the threat of natural or space vehicle-triggered lightning over a location. The second example is a launch and landing trajectory tool that produces a DGM file that plots the ground track of space vehicles during launch or landing. The trajectory tool can be used by SMG and the 45 WS forecasters to analyze weather radar imagery along a launch or landing trajectory. The presentation will list the advantages and disadvantages of both file types for creating interactive graphical overlays in future AWIPS applications. Shapefiles are a popular format used extensively in Geographical Information Systems. They are usually used in AWIPS to depict static map backgrounds. A shapefile stores the geometry and attribute information of spatial features in a dataset (ESRI 1998). Shapefiles can contain point, line, and polygon features. Each shapefile contains a main file, index file, and a dBASE table. The main file contains a record for each spatial feature, which describes the feature with a list of its vertices. The index file contains the offset of each record from the beginning of the main file. The dBASE table contains records for each

  20. Graphics processing unit implementation of lattice Boltzmann models for flowing soft systems.

    PubMed

    Bernaschi, Massimo; Rossi, Ludovico; Benzi, Roberto; Sbragaglia, Mauro; Succi, Sauro

    2009-12-01

    A graphics processing unit (GPU) implementation of the multicomponent lattice Boltzmann equation with multirange interactions for soft-glassy materials ["glassy" lattice Boltzmann (LB)] is presented. Performance measurements for flows under shear indicate a GPU/CPU speed-up in excess of 10 for 1024² grids. Such a significant speed-up makes it possible to carry out multimillion time-step simulations of 1024² grids within tens of hours of GPU time, thereby considerably expanding the scope of the glassy LB toward the investigation of long-time relaxation properties of soft-flowing glassy materials.

  1. Monte Carlo Simulations of Random Frustrated Systems on Graphics Processing Units

    NASA Astrophysics Data System (ADS)

    Feng, Sheng; Fang, Ye; Hall, Sean; Papke, Ariane; Thomasson, Cade; Tam, Ka-Ming; Moreno, Juana; Jarrell, Mark

    2012-02-01

    We study the implementation of classical Monte Carlo simulation for random frustrated models using the multithreaded computing environment provided by the Compute Unified Device Architecture (CUDA) on modern Graphics Processing Units (GPUs) with hundreds of cores and high memory bandwidth. The key to optimizing GPU performance lies in the proper handling of the data structures. Utilizing multi-spin coding, we obtain an efficient GPU implementation of the parallel tempering Monte Carlo simulation for the Edwards-Anderson spin glass model. In typical simulations, we find a speed-up of over two thousand times compared to the single-threaded CPU implementation.
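
    For illustration, the replica-exchange (parallel tempering) swap move at the heart of such simulations can be sketched in a few lines of Python; in the GPU code the Metropolis sweeps of all replicas run in parallel, while swaps between neighboring temperatures follow the standard acceptance rule below.

      import numpy as np

      rng = np.random.default_rng(0)

      def attempt_swaps(betas, energies, configs):
          """Try to exchange configurations (a list) between adjacent temperatures."""
          for k in range(len(betas) - 1):
              # acceptance: min(1, exp[(beta_k - beta_{k+1}) (E_k - E_{k+1})])
              delta = (betas[k] - betas[k + 1]) * (energies[k] - energies[k + 1])
              if delta >= 0 or rng.random() < np.exp(delta):
                  configs[k], configs[k + 1] = configs[k + 1], configs[k]
                  energies[k], energies[k + 1] = energies[k + 1], energies[k]
          return configs, energies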

  2. Advanced colour processing for mobile devices

    NASA Astrophysics Data System (ADS)

    Gillich, Eugen; Dörksen, Helene; Lohweg, Volker

    2015-02-01

    Mobile devices such as smartphones are going to play an important role in professional image-processing tasks. However, mobile systems were not designed for such applications, especially in terms of image-processing requirements like stability and robustness. One major drawback is the automatic white balance that comes with the devices. It is necessary for many applications, but of no use when applied to shiny surfaces. Such an issue appears when image acquisition takes place under differently coloured illumination caused by different environments. This results in inhomogeneous appearances of the same subject. In our paper we show a new approach for handling the complex task of generating a low-noise and sharp image without spatial filtering. Our method is based on an analysis of the spectral and saturation distributions of the channels. Furthermore, the RGB space is transformed into a more convenient space, a particular HSI space. We generate the greyscale image by a control procedure that takes the colour channels into account. This leads to an adaptive colour-mixing model with reduced noise. The optimized images are used to show how, e.g., image classification benefits from our colour-adaptation approach.

  3. General Purpose Graphics Processing Unit Based High-Rate Rice Decompression and Reed-Solomon Decoding

    SciTech Connect

    Loughry, Thomas A.

    2015-02-01

    As the volume of data acquired by space-based sensors increases, mission data compression/decompression and forward error correction code processing performance must likewise scale. This competency development effort explored using general-purpose graphics processing units (GPGPUs) to accomplish high-rate Rice decompression and high-rate Reed-Solomon (RS) decoding at the satellite mission ground station. Each algorithm was implemented and benchmarked on a single GPGPU, and distributed processing across one to four GPGPUs was also investigated. The results show that the GPGPU has considerable potential for performing satellite communication data signal processing, with three times or better performance improvements and up to ten times reduction in cost over custom hardware, at least in the case of Rice decompression and Reed-Solomon decoding.

  4. Modified graphical autocatalytic set model of combustion process in circulating fluidized bed boiler

    NASA Astrophysics Data System (ADS)

    Yusof, Nurul Syazwani; Bakar, Sumarni Abu; Ismail, Razidah

    2014-07-01

    A Circulating Fluidized Bed Boiler (CFB) is a device for generating steam by burning fossil fuels in a furnace operating under a special hydrodynamic condition. An autocatalytic set has previously provided a graphical model of the chemical reactions that occur during the combustion process in a CFB. Eight important chemical substances, known as species, are represented as nodes, and catalytic relationships between nodes are represented by the edges of the graph. In this paper, the model is extended and modified by considering other relevant chemical reactions that also occur during the process. The catalytic relationships among the species in the model are discussed. The result reveals that the modified model is able to give a fuller explanation of the relationships among the species at initial time t.

  5. Rapid learning-based video stereolization using graphic processing unit acceleration

    NASA Astrophysics Data System (ADS)

    Sun, Tian; Jung, Cheolkon; Wang, Lei; Kim, Joongkyu

    2016-09-01

    Video stereolization has received much attention in recent years due to the lack of stereoscopic three-dimensional (3-D) contents. Although video stereolization can enrich stereoscopic 3-D contents, it is hard to achieve automatic two-dimensional-to-3-D conversion at low computational cost. We propose rapid learning-based video stereolization using graphics processing unit (GPU) acceleration. We first generate an initial depth map based on learning from examples. Then, we refine the depth map using saliency and cross-bilateral filtering to make object boundaries clear. Finally, we perform depth-image-based rendering to generate stereoscopic 3-D views. To accelerate the computation of video stereolization, we provide a parallelizable hybrid GPU-central processing unit (CPU) solution suitable for running on the GPU. Experimental results demonstrate that the proposed method is nearly 180 times faster than CPU-based processing and achieves performance comparable to state-of-the-art methods.

  6. BarraCUDA - a fast short read sequence aligner using graphics processing units

    PubMed Central

    2012-01-01

    Background With the maturation of next-generation DNA sequencing (NGS) technologies, the throughput of DNA sequencing reads has soared to over 600 gigabases from a single instrument run. General-purpose computing on graphics processing units (GPGPU) extracts computing power from the hundreds of parallel stream processors within graphics processing cores and provides a cost-effective and energy-efficient alternative to traditional high-performance computing (HPC) clusters. In this article, we describe the implementation of BarraCUDA, a GPGPU sequence alignment software based on BWA, to accelerate the alignment of sequencing reads generated by these instruments to a reference DNA sequence. Findings Using the NVIDIA Compute Unified Device Architecture (CUDA) software development environment, we ported the most computationally intensive alignment component of BWA to the GPU to take advantage of its massive parallelism. As a result, BarraCUDA offers an order-of-magnitude boost in alignment throughput compared to a CPU core while delivering the same level of alignment fidelity. The software is also capable of supporting multiple CUDA devices in parallel to further accelerate the alignment throughput. Conclusions BarraCUDA is designed to take advantage of the parallelism of GPUs to accelerate the alignment of millions of sequencing reads generated by NGS instruments. By doing this, we could, at least in part, streamline the current bioinformatics pipeline such that the wider scientific community could benefit from the sequencing technology. BarraCUDA is currently available from http://seqbarracuda.sf.net PMID:22244497

  7. Fast direct reconstruction strategy of dynamic fluorescence molecular tomography using graphics processing units

    NASA Astrophysics Data System (ADS)

    Chen, Maomao; Zhang, Jiulou; Cai, Chuangjian; Gao, Yang; Luo, Jianwen

    2016-06-01

    Dynamic fluorescence molecular tomography (DFMT) is a valuable method to evaluate the metabolic process of contrast agents in different organs in vivo, and direct reconstruction methods can improve the temporal resolution of DFMT. However, challenges remain due to the large time consumption of the direct reconstruction methods. An acceleration strategy using graphics processing units (GPUs) is presented. The conjugate gradient optimization procedure in the direct reconstruction method is programmed using the compute unified device architecture and then accelerated on the GPU. Numerical simulations and in vivo experiments are performed to validate the feasibility of the strategy. The results demonstrate that, compared with the traditional method, the proposed strategy can reduce the time consumption by ~90% without degradation of quality.
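
    As a reference for the step being accelerated, a minimal conjugate gradient loop (here in NumPy, for a symmetric positive definite system Ax = b) looks as follows; a GPU version parallelizes the matrix-vector products and vector updates.

      import numpy as np

      def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
          x = np.zeros_like(b)
          r = b - A @ x          # residual
          p = r.copy()           # search direction
          rs = r @ r
          for _ in range(max_iter):
              Ap = A @ p
              alpha = rs / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              rs_new = r @ r
              if np.sqrt(rs_new) < tol:
                  break
              p = r + (rs_new / rs) * p
              rs = rs_new
          return x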

  8. Denoising NMR time-domain signal by singular-value decomposition accelerated by graphics processing units.

    PubMed

    Man, Pascal P; Bonhomme, Christian; Babonneau, Florence

    2014-01-01

    We present a post-processing method that decreases NMR spectrum noise without line-shape distortion, thereby increasing the signal-to-noise (S/N) ratio of a spectrum. This method, called the Cadzow enhancement procedure, is based on the singular-value decomposition of the time-domain signal. We also provide software whose execution takes a few seconds for typical data when run on a modern graphics processing unit. We tested this procedure not only on the low-sensitivity nucleus ²⁹Si in hybrid materials but also on the low-gyromagnetic-ratio quadrupolar nucleus ⁸⁷Sr in the reference sample Sr(NO₃)₂. Improving the spectrum S/N ratio facilitates the determination of the T/Q ratio of hybrid materials. The method is also applicable to simulated spectra, resulting in shorter simulation durations for powder averaging. An estimate of the number of singular values needed for denoising is also provided.
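
    A minimal sketch of one such iteration, assuming the standard Hankel-matrix formulation of Cadzow denoising: embed the time-domain signal in a Hankel matrix, truncate its singular-value decomposition to the k leading components, and average the anti-diagonals back into a signal.

      import numpy as np

      def cadzow_once(signal, k, rows=None):
          n = len(signal)
          rows = rows or n // 2
          cols = n - rows + 1
          H = np.array([signal[i:i + cols] for i in range(rows)])  # Hankel embedding
          U, s, Vt = np.linalg.svd(H, full_matrices=False)
          s[k:] = 0.0                                              # rank-k truncation
          Hk = (U * s) @ Vt
          out = np.zeros(n, dtype=signal.dtype)
          counts = np.zeros(n)
          for i in range(rows):                                    # anti-diagonal average
              out[i:i + cols] += Hk[i]
              counts[i:i + cols] += 1
          return out / counts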

  9. Real-time digital holographic microscopy using the graphic processing unit.

    PubMed

    Shimobaba, Tomoyoshi; Sato, Yoshikuni; Miura, Junya; Takenouchi, Mai; Ito, Tomoyoshi

    2008-08-04

    Digital holographic microscopy (DHM) is a well-known powerful method allowing both the amplitude and phase of a specimen to be observed simultaneously. In order to obtain a reconstructed image from a hologram, numerous calculations of the Fresnel diffraction are required. The Fresnel diffraction can be accelerated by the fast Fourier transform (FFT) algorithm. However, real-time reconstruction from a hologram is difficult even if a recent central processing unit (CPU) is used to calculate the Fresnel diffraction by the FFT algorithm. In this paper, we describe a real-time DHM system using a graphic processing unit (GPU) with many stream processors, which allows it to be used as a highly parallel processor. The computational speed of the Fresnel diffraction on the GPU is faster than that of recent CPUs. The real-time DHM system can obtain reconstructed images from holograms of 512 × 512 grid points at 24 frames per second.
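
    For reference, the FFT-based Fresnel propagation that dominates the computation can be sketched as follows (transfer-function form, in NumPy); a GPU implementation evaluates the same FFTs and pointwise products in parallel.

      import numpy as np

      def fresnel_propagate(u0, wavelength, z, dx):
          """Propagate complex field u0 (square N x N grid, pitch dx) a distance z."""
          n = u0.shape[0]
          fx = np.fft.fftfreq(n, d=dx)
          FX, FY = np.meshgrid(fx, fx)
          # Fresnel transfer function H(fx, fy) = exp(ikz) exp(-i pi lambda z (fx^2 + fy^2))
          H = np.exp(1j * 2 * np.pi / wavelength * z) * \
              np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
          return np.fft.ifft2(np.fft.fft2(u0) * H)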

  10. SU-E-P-59: A Graphical Interface for XCAT Phantom Configuration, Generation and Processing

    SciTech Connect

    Myronakis, M; Cai, W; Dhou, S; Cifter, F; Lewis, J; Hurwitz, M

    2015-06-15

    Purpose: To design a comprehensive open-source, publicly available, graphical user interface (GUI) to facilitate the configuration, generation, processing and use of the 4D Extended Cardiac-Torso (XCAT) phantom. Methods: The XCAT phantom includes over 9000 anatomical objects as well as respiratory, cardiac and tumor motion. It is widely used for research studies in medical imaging and radiotherapy. The phantom generation process involves the configuration of a text script to parameterize the geometry, motion, and composition of the whole body and objects within it, and to generate simulated PET or CT images. To avoid the need for manual editing or script writing, our MATLAB-based GUI uses slider controls, drop-down lists, buttons and graphical text input to parameterize and process the phantom. Results: Our GUI can be used to: a) generate parameter files; b) generate the voxelized phantom; c) combine the phantom with a lesion; d) display the phantom; e) produce average and maximum intensity images from the phantom output files; f) incorporate irregular patient breathing patterns; and g) generate DICOM files containing phantom images. The GUI provides local help information using tool-tip strings on the currently selected phantom, minimizing the need for external documentation. The DICOM generation feature is intended to simplify the process of importing the phantom images into radiotherapy treatment planning systems or other clinical software. Conclusion: The GUI simplifies and automates the use of the XCAT phantom for imaging-based research projects in medical imaging or radiotherapy. This has the potential to accelerate research conducted with the XCAT phantom, or to ease the learning curve for new users. This tool does not include the XCAT phantom software itself. We would like to acknowledge funding from MRA, Varian Medical Systems Inc.

  11. Massively Parallel Signal Processing using the Graphics Processing Unit for Real-Time Brain-Computer Interface Feature Extraction.

    PubMed

    Wilson, J Adam; Williams, Justin C

    2009-01-01

    The clock speeds of modern computer processors have nearly plateaued in the past 5 years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card [graphics processing unit (GPU)] was developed for real-time neural signal processing of a brain-computer interface (BCI). The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix-matrix multiplication (i.e., a spatial filter), followed by calculating the power spectral density on every channel using an autoregressive method, and finally classifying appropriate features for control. In this study, the first two computationally intensive steps were implemented on the GPU, and the speed was compared to both the current implementation and a central processing unit-based implementation that uses multi-threading. Significant performance gains were obtained with GPU processing: the current implementation processed 250 ms of 1000-channel data in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times.
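
    A sketch of the first stage of this chain follows: a spatial filter expressed as a matrix-matrix multiplication, followed by a per-channel power spectrum. Note that the paper uses an autoregressive spectral estimator; a plain FFT periodogram stands in for it here to keep the sketch short.

      import numpy as np

      def feature_chain(X, W, fs):
          """X: channels x samples of raw data; W: spatial-filter matrix; fs: sample rate."""
          Y = W @ X                                    # spatial filtering (a GEMM on the GPU)
          spectrum = np.abs(np.fft.rfft(Y, axis=1))**2 / Y.shape[1]  # stand-in PSD estimate
          freqs = np.fft.rfftfreq(Y.shape[1], d=1.0 / fs)
          return freqs, spectrum                       # band powers become classifier features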

  12. FAST CALCULATION OF THE LOMB-SCARGLE PERIODOGRAM USING GRAPHICS PROCESSING UNITS

    SciTech Connect

    Townsend, R. H. D.

    2010-12-15

    I introduce a new code for fast calculation of the Lomb-Scargle periodogram that leverages the computing power of graphics processing units (GPUs). After establishing a background to the newly emergent field of GPU computing, I discuss the code design and narrate key parts of its source. Benchmarking calculations indicate no significant differences in accuracy compared to an equivalent CPU-based code. However, the differences in performance are pronounced; running on a low-end GPU, the code can match eight CPU cores, and on a high-end GPU it is faster by a factor approaching 30. Applications of the code include analysis of long photometric time series obtained by ongoing satellite missions and upcoming ground-based monitoring facilities, and Monte Carlo simulation of periodogram statistical properties.
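
    For reference, the classical Lomb-Scargle periodogram evaluated by such a code can be written per frequency as below; GPU implementations of this kind typically assign one thread per frequency.

      import numpy as np

      def lomb_scargle(t, x, omegas):
          """t: sample times; x: values; omegas: angular frequencies to evaluate."""
          x = x - x.mean()
          P = np.empty(len(omegas))
          for i, w in enumerate(omegas):
              # tan(2 w tau) = sum sin(2 w t) / sum cos(2 w t)
              tau = np.arctan2(np.sum(np.sin(2 * w * t)),
                               np.sum(np.cos(2 * w * t))) / (2 * w)
              c = np.cos(w * (t - tau))
              s = np.sin(w * (t - tau))
              P[i] = 0.5 * ((x @ c)**2 / (c @ c) + (x @ s)**2 / (s @ s))
          return P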

  13. ASAMgpu V1.0 - a moist fully compressible atmospheric model using graphics processing units (GPUs)

    NASA Astrophysics Data System (ADS)

    Horn, S.

    2012-03-01

    In this work the three-dimensional compressible moist atmospheric model ASAMgpu is presented. The calculations are done using graphics processing units (GPUs). To ensure platform independence, OpenGL and GLSL are used, so the model runs on any hardware supporting fragment shaders. The MPICH2 library enables interprocess communication, allowing the use of more than one GPU through domain decomposition. Time integration is done with an explicit three-step Runge-Kutta scheme with a time-splitting algorithm for the acoustic waves. The results for four test cases are shown in this paper: a rising dry heat bubble, a cold-bubble-induced density flow, a rising moist heat bubble in a saturated environment, and a DYCOMS-II case.

  14. ASAMgpu V1.0 - a moist fully compressible atmospheric model using graphics processing units (GPUs)

    NASA Astrophysics Data System (ADS)

    Horn, S.

    2011-10-01

    In this work the three-dimensional compressible moist atmospheric model ASAMgpu is presented. The calculations are done using graphics processing units (GPUs). To ensure platform independence, OpenGL and GLSL are used, so the model runs on any hardware supporting fragment shaders. The MPICH2 library enables interprocess communication, allowing the use of more than one GPU through domain decomposition. Time integration is done with an explicit three-step Runge-Kutta scheme with a time-splitting algorithm for the acoustic waves. The results for four test cases are shown in this paper: a rising dry heat bubble, a cold-bubble-induced density flow, a rising moist heat bubble in a saturated environment, and a DYCOMS-II case.

  15. Efficient implementation of effective core potential integrals and gradients on graphical processing units

    NASA Astrophysics Data System (ADS)

    Song, Chenchen; Wang, Lee-Ping; Sachse, Torsten; Preiß, Julia; Presselt, Martin; Martínez, Todd J.

    2015-07-01

    Effective core potential integral and gradient evaluations are accelerated via implementation on graphical processing units (GPUs). Two simple formulas are proposed to estimate the upper bounds of the integrals, and these are used for screening. A sorting strategy is designed to balance the workload between GPU threads properly. Significant improvements in performance and reduced scaling with system size are observed when combining the screening and sorting methods, and the calculations are highly efficient for systems containing up to 10 000 basis functions. The GPU implementation preserves the precision of the calculation; the ground state Hartree-Fock energy achieves good accuracy for CdSe and ZnTe nanocrystals, and energy is well conserved in ab initio molecular dynamics simulations.

  16. Real-time spatiotemporal division multiplexing electroholography with a single graphics processing unit utilizing movie features.

    PubMed

    Niwase, Hiroaki; Takada, Naoki; Araki, Hiromitsu; Nakayama, Hirotaka; Sugiyama, Atsushi; Kakue, Takashi; Shimobaba, Tomoyoshi; Ito, Tomoyoshi

    2014-11-17

    We propose a real-time spatiotemporal division multiplexing method for electroholography that utilizes the features of movies. The proposed method spatially divides a 3-D object into plural parts and periodically selects one divided part in each frame, thereby reconstructing a three-dimensional (3-D) movie of the original object. Computer-generated holograms of the selected part are calculated by a single graphics processing unit and sequentially displayed on a spatial light modulator; visual continuity then yields a reconstructed movie of the original 3-D object. The proposed method realized a real-time reconstructed movie of a 3-D object composed of 11,646 points at over 30 frames per second (fps). We also displayed a reconstructed movie of a 3-D object composed of 44,647 points at about 10 fps.

  17. Stochastic proximity embedding on graphics processing units: taking multidimensional scaling to a new scale.

    PubMed

    Yang, Eric; Liu, Pu; Rassokhin, Dimitrii N; Agrafiotis, Dimitris K

    2011-11-28

    Stochastic proximity embedding (SPE) was developed as a method for efficiently calculating lower dimensional embeddings of high-dimensional data sets. Rather than using a global minimization scheme, SPE relies upon updating the distances of randomly selected points in an iterative fashion. This was found to generate embeddings of comparable quality to those obtained using classical multidimensional scaling algorithms. However, SPE is able to obtain these results in O(n) rather than O(n²) time and thus is much better suited to large data sets. In an effort both to speed up SPE and utilize it for even larger problems, we have created a multithreaded implementation which takes advantage of the growing general computing power of graphics processing units (GPUs). The use of GPUs allows the embedding of data sets containing millions of data points in interactive time scales.
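
    The core SPE update is simple enough to sketch directly (following the published update rule; the parameter names here are illustrative): pick a random pair of points and nudge their embedded positions toward the target distance.

      import numpy as np

      rng = np.random.default_rng(1)

      def spe(R, dim=2, n_steps=100000, lam=1.0, eps=1e-10):
          """R: n x n matrix of target (input-space) distances; returns n x dim embedding."""
          n = R.shape[0]
          X = rng.random((n, dim))
          for step in range(n_steps):
              i, j = rng.integers(n), rng.integers(n)
              if i == j:
                  continue
              d = np.linalg.norm(X[i] - X[j])
              # move each point half of the proportional correction toward R[i, j]
              delta = lam * 0.5 * (R[i, j] - d) / (d + eps) * (X[i] - X[j])
              X[i] += delta
              X[j] -= delta
              lam -= 1.0 / n_steps     # anneal the learning rate toward zero
          return X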

  18. A graphical method to evaluate predominant geochemical processes occurring in groundwater systems for radiocarbon dating

    USGS Publications Warehouse

    Han, Liang-Feng; Plummer, L. Niel; Aggarwal, Pradeep

    2012-01-01

    A graphical method is described for identifying geochemical reactions needed in the interpretation of radiocarbon age in groundwater systems. Graphs are constructed by plotting the measured 14C, δ13C, and concentration of dissolved inorganic carbon and are interpreted according to specific criteria to recognize water samples that are consistent with a wide range of processes, including geochemical reactions, carbon isotopic exchange, 14C decay, and mixing of waters. The graphs are used to provide a qualitative estimate of radiocarbon age, to deduce the hydrochemical complexity of a groundwater system, and to compare samples from different groundwater systems. Graphs of chemical and isotopic data from a series of previously-published groundwater studies are used to demonstrate the utility of the approach. Ultimately, the information derived from the graphs is used to improve geochemical models for adjustment of radiocarbon ages in groundwater systems.

  19. Automated Code Engine for Graphical Processing Units: Application to the Effective Core Potential Integrals and Gradients.

    PubMed

    Song, Chenchen; Wang, Lee-Ping; Martínez, Todd J

    2016-01-12

    We present an automated code engine (ACE) that automatically generates optimized kernels for computing integrals in electronic structure theory on a given graphical processing unit (GPU) computing platform. The code generator in ACE creates multiple code variants with different memory and floating point operation trade-offs. A graph representation is created as the foundation of the code generation, which allows the code generator to be extended to various types of integrals. The code optimizer in ACE determines the optimal code variant and GPU configurations for a given GPU computing platform by scanning over all possible code candidates and then choosing the best-performing code candidate for each kernel. We apply ACE to the optimization of effective core potential integrals and gradients. It is observed that the best code candidate varies with differing angular momentum, floating point precision, and type of GPU being used, which shows that the ACE may be a powerful tool in adapting to fast evolving GPU architectures.

  20. On the use of graphics processing units (GPUs) for molecular dynamics simulation of spherical particles

    NASA Astrophysics Data System (ADS)

    Hidalgo, R. C.; Kanzaki, T.; Alonso-Marroquin, F.; Luding, S.

    2013-06-01

    General-purpose computation on Graphics Processing Units (GPUs) on personal computers has recently become an attractive alternative to parallel computing on clusters and supercomputers. We present the GPU implementation of an accurate molecular dynamics algorithm for a system of spheres. The new hybrid CPU-GPU implementation takes into account all the degrees of freedom, including the quaternion representation of 3D rotations. For additional versatility, the contact interaction between particles is defined using a force law of enhanced generality, which accounts for the elastic and dissipative interactions, and the hard-sphere interaction parameters are translated to the soft-sphere parameter set. We prove that the algorithm complies with the laws of statistical mechanics by examining the homogeneous cooling of a granular gas with rotation. The results are in excellent agreement with well-established mean-field theories for low-density hard-sphere systems. This GPU technique dramatically reduces user waiting time compared with a traditional CPU implementation.
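
    A minimal sketch of a linear spring-dashpot normal contact force of the general kind described (an elastic term plus a dissipative term; kn and gn stand in for the soft-sphere stiffness and damping parameters, which are assumptions of this sketch):

      import numpy as np

      def normal_contact_force(xi, xj, vi, vj, ri, rj, kn, gn):
          """Force on sphere i from sphere j (positions, velocities, radii)."""
          rij = xi - xj
          dist = np.linalg.norm(rij)
          overlap = ri + rj - dist
          if overlap <= 0.0:
              return np.zeros(3)             # spheres not in contact
          n = rij / dist                     # unit normal, pointing j -> i
          vn = np.dot(vi - vj, n) * n        # normal relative velocity
          return kn * overlap * n - gn * vn  # elastic + dissipative parts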

  1. Parallel multigrid solver of radiative transfer equation for photon transport via graphics processing unit.

    PubMed

    Gao, Hao; Phan, Lan; Lin, Yuting

    2012-09-01

    A graphics processing unit-based parallel multigrid solver for a radiative transfer equation with vacuum boundary condition or reflection boundary condition is presented for heterogeneous media with complex geometry based on two-dimensional triangular meshes or three-dimensional tetrahedral meshes. The computational complexity of this parallel solver is linearly proportional to the degrees of freedom in both angular and spatial variables, while the full multigrid method is utilized to minimize the number of iterations. The overall gain of speed is roughly 30 to 300 fold with respect to our prior multigrid solver, which depends on the underlying regime and the parallelization. The numerical validations are presented with the MATLAB codes at https://sites.google.com/site/rtefastsolver/.

  2. Acceleration of the GAMESS-UK electronic structure package on graphical processing units.

    PubMed

    Wilkinson, Karl A; Sherwood, Paul; Guest, Martyn F; Naidoo, Kevin J

    2011-07-30

    The approach used by many electronic structure packages, including the Generalized Atomic and Molecular Electronic Structure System-UK (GAMESS-UK), to calculate the two-electron integrals was designed for CPU-based compute units. We redesigned the two-electron compute algorithm for acceleration on a graphical processing unit (GPU). We report the acceleration strategy and illustrate it on the (ss|ss)-type integrals. This strategy is general for Fortran-based codes and uses the Accelerator compiler from Portland Group International and GPU-based accelerators from Nvidia. The evaluation of (ss|ss)-type integrals within calculations using Hartree-Fock ab initio methods and density functional theory is accelerated by single- and quad-GPU hardware systems by factors of 43 and 153, respectively. The overall speedup for a single self-consistent field cycle is at least a factor of eight on a single GPU compared with a single CPU.

  3. Using graphics processing units to accelerate perturbation Monte Carlo simulation in a turbid medium

    NASA Astrophysics Data System (ADS)

    Cai, Fuhong; He, Sailing

    2012-04-01

    We report a fast perturbation Monte Carlo (PMC) algorithm accelerated by graphics processing units (GPUs). The two-step PMC simulation [Opt. Lett. 36, 2095 (2011)] is performed by storing the seeds instead of the photon trajectories, so the computer random-access memory (RAM) requirement becomes minimal. The two-step PMC is thus extremely well suited to implementation on a GPU. In a standard simulation of spatially resolved photon migration in a turbid medium, the acceleration ratio between the GPU and a conventional CPU is about 1000. Furthermore, since the two-step PMC algorithm records the effective seeds, i.e., those associated with photons that reach a region of interest, and then re-runs the MC simulation based on the recorded effective seeds, the radiative transfer equation (RTE) can be solved by two-step PMC not only with an arbitrary change in the absorption coefficient but also with a large change in the scattering coefficient.

  4. Using graphics processing units to accelerate perturbation Monte Carlo simulation in a turbid medium.

    PubMed

    Cai, Fuhong; He, Sailing

    2012-04-01

    We report a fast perturbation Monte Carlo (PMC) algorithm accelerated by graphics processing units (GPUs). The two-step PMC simulation [Opt. Lett. 36, 2095 (2011)] is performed by storing the seeds instead of the photon trajectories, so the computer random-access memory (RAM) requirement becomes minimal. The two-step PMC is thus extremely well suited to implementation on a GPU. In a standard simulation of spatially resolved photon migration in a turbid medium, the acceleration ratio between the GPU and a conventional CPU is about 1000. Furthermore, since the two-step PMC algorithm records the effective seeds, i.e., those associated with photons that reach a region of interest, and then re-runs the MC simulation based on the recorded effective seeds, the radiative transfer equation (RTE) can be solved by two-step PMC not only with an arbitrary change in the absorption coefficient but also with a large change in the scattering coefficient.

  5. Particle-In-Cell simulations of high pressure plasmas using graphics processing units

    NASA Astrophysics Data System (ADS)

    Gebhardt, Markus; Atteln, Frank; Brinkmann, Ralf Peter; Mussenbrock, Thomas; Mertmann, Philipp; Awakowicz, Peter

    2009-10-01

    Particle-In-Cell (PIC) simulations are widely used to understand the fundamental phenomena in low-temperature plasmas; in particular, plasmas at very low gas pressures are studied using PIC methods. The inherent drawback of these methods is that they are very time consuming, since certain stability conditions have to be satisfied. This holds even more for the PIC simulation of high-pressure plasmas due to the very high collision rates. The simulations take a very long time to run on standard computers and require the help of computer clusters or supercomputers. Recent advances in the field of graphics processing units (GPUs) provide every personal computer with a highly parallel multiprocessor architecture for very little money. This architecture is freely programmable and can be used to implement a wide class of problems. In this paper we present the concepts of a fully parallel PIC simulation of high-pressure plasmas using the benefits of GPU programming.

  6. Parallel multigrid solver of radiative transfer equation for photon transport via graphics processing unit

    PubMed Central

    Phan, Lan; Lin, Yuting

    2012-01-01

    A graphics processing unit-based parallel multigrid solver for a radiative transfer equation with vacuum boundary condition or reflection boundary condition is presented for heterogeneous media with complex geometry based on two-dimensional triangular meshes or three-dimensional tetrahedral meshes. The computational complexity of this parallel solver is linearly proportional to the degrees of freedom in both angular and spatial variables, while the full multigrid method is utilized to minimize the number of iterations. The overall gain of speed is roughly 30 to 300 fold with respect to our prior multigrid solver, which depends on the underlying regime and the parallelization. The numerical validations are presented with the MATLAB codes at https://sites.google.com/site/rtefastsolver/. PMID:23085905

  7. Applying a visual language for image processing as a graphical teaching tool in medical imaging

    NASA Astrophysics Data System (ADS)

    Birchman, James J.; Tanimoto, Steven L.; Rowberg, Alan H.; Choi, Hyung-Sik; Kim, Yongmin

    1992-05-01

    Typical user interaction in image processing is with command-line entries, pull-down menus, or text menu selections from a list, and as such is not generally graphical in nature. Although applying these interactive methods to construct more sophisticated algorithms from a series of simple image-processing steps may be clear to engineers and programmers, it may not be clear to clinicians. A solution to this problem is to implement a visual programming language that uses visual representations to express image-processing algorithms. Visual representations promote a more natural and rapid understanding of image-processing algorithms by providing more visual insight into what the algorithms do than the interactive methods mentioned above can provide. Individuals accustomed to dealing with images will be more likely to understand an algorithm that is represented visually. This is especially true of referring physicians, such as surgeons in an intensive care unit. With the increasing acceptance of picture archiving and communications system (PACS) workstations and the trend toward increasing clinical use of image processing, referring physicians will need to learn more sophisticated concepts than simply image access and display. If the procedures that they perform commonly, such as window width and window level adjustment and image enhancement using unsharp masking, are depicted visually in an interactive environment, it will be easier for them to learn and apply these concepts. The software described in this paper is a visual programming language for image processing which has been implemented on the NeXT computer using NeXTstep user interface development tools and other tools in an object-oriented environment. The concept is based upon the description of a visual language titled `Visualization of Vision Algorithms' (VIVA). Iconic representations of simple image-processing steps are placed into a workbench screen and connected together into a dataflow path by the user. As

  8. Real time 3D structural and Doppler OCT imaging on graphics processing units

    NASA Astrophysics Data System (ADS)

    Sylwestrzak, Marcin; Szlag, Daniel; Szkulmowski, Maciej; Gorczyńska, Iwona; Bukowska, Danuta; Wojtkowski, Maciej; Targowski, Piotr

    2013-03-01

    In this report the application of graphics processing unit (GPU) programming to real-time 3D Fourier domain Optical Coherence Tomography (FdOCT) imaging, with implementation of Doppler algorithms for visualization of flows in capillary vessels, is presented. Generally, the time needed to process FdOCT data on the main processor of the computer (CPU) constitutes the main limitation for real-time imaging. Employing additional algorithms, such as Doppler OCT analysis, makes this processing even more time consuming. Recently developed GPUs, which offer very high computational power, provide a solution to this problem. Taking advantage of them for massively parallel data processing allows for real-time imaging in FdOCT. The presented software for structural and Doppler OCT allows for complete processing and visualization of 2D data consisting of 2000 A-scans generated from 2048-pixel spectra at a frame rate of about 120 fps. The 3D imaging in the same mode, for volume data built of 220 × 100 A-scans, is performed at a rate of about 8 frames per second. In this paper the software architecture, organization of threads, and optimizations applied are described. For illustration, screen shots recorded during real-time imaging of a phantom (homogeneous water solution of Intralipid in a glass capillary) and of the human eye in vivo are presented.

  9. Implementation and optimization of ultrasound signal processing algorithms on mobile GPU

    NASA Astrophysics Data System (ADS)

    Kong, Woo Kyu; Lee, Wooyoul; Kim, Kyu Cheol; Yoo, Yangmo; Song, Tai-Kyong

    2014-03-01

    A general-purpose graphics processing unit (GPGPU) has been used to improve computing power in medical ultrasound imaging systems. Recently, mobile GPUs have become powerful enough to handle 3D games and videos at high frame rates on HD and Full HD displays. This paper proposes a method to implement ultrasound signal processing on a mobile GPU available in a high-end smartphone (Galaxy S4, Samsung Electronics, Seoul, Korea) with programmable shaders on the OpenGL ES 2.0 platform. To maximize the performance of the mobile GPU, the shader design was optimized and the load was shared between the vertex and fragment shaders. The beamformed data were captured from a tissue-mimicking phantom (Model 539 Multipurpose Phantom, ATS Laboratories, Inc., Bridgeport, CT, USA) using a commercial ultrasound imaging system equipped with a research package (Ultrasonix Touch, Ultrasonix, Richmond, BC, Canada). The real-time performance was evaluated by frame rates while varying the range of signal processing blocks. The implementation of ultrasound signal processing on OpenGL ES 2.0 was verified by analyzing PSNR against a MATLAB gold standard with the same signal path; CNR was also analyzed to verify the method. From these evaluations, the proposed mobile GPU-based processing method shows no significant difference from the MATLAB processing (PSNR<52.51 dB), and comparable CNR results (11.31) were obtained from both methods. With the mobile GPU implementation, a frame rate of 57.6 Hz was achieved. The total execution time was 17.4 ms, faster than the acquisition time of 34.4 ms. These results indicate that the mobile GPU-based processing method can support real-time ultrasound B-mode processing on a smartphone.

  10. Real-time blood flow visualization using the graphics processing unit

    NASA Astrophysics Data System (ADS)

    Yang, Owen; Cuccia, David; Choi, Bernard

    2011-01-01

    Laser speckle imaging (LSI) is a technique in which coherent light incident on a surface produces a reflected speckle pattern that is related to the underlying movement of optical scatterers, such as red blood cells, indicating blood flow. Image-processing algorithms can be applied to produce speckle flow index (SFI) maps of relative blood flow. We present a novel algorithm that employs the NVIDIA Compute Unified Device Architecture (CUDA) platform to perform laser speckle image processing on the graphics processing unit. Software written in C was combined with CUDA and integrated into a LabVIEW Virtual Instrument (VI) that is interfaced with a monochrome CCD camera able to acquire high-resolution raw speckle images at nearly 10 fps. With the CUDA code integrated into the LabVIEW VI, the processing and display of SFI images were also performed at ~10 fps. We present three video examples depicting real-time flow imaging during a reactive hyperemia maneuver, with fluid flow through an in vitro phantom, and a demonstration of real-time LSI during laser surgery of a port wine stain birthmark.
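
    A sketch of the per-pixel computation behind an SFI map follows: local speckle contrast K = sigma/mean over a sliding window, converted to a flow index that, in one common convention, scales as 1/(2TK²) with exposure time T. The exact definitions used by the authors may differ; this is an illustration of the technique, not their code.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def speckle_flow_index(raw, window=7, exposure=0.01):
          """raw: 2-D speckle image; window: local-statistics size; exposure: T in seconds."""
          img = raw.astype(float)
          mean = uniform_filter(img, window)
          mean_sq = uniform_filter(img**2, window)
          std = np.sqrt(np.maximum(mean_sq - mean**2, 0.0))
          K = std / np.maximum(mean, 1e-12)                  # local speckle contrast
          return 1.0 / (2.0 * exposure * np.maximum(K, 1e-12)**2)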

  11. [Influence of the recording interval and a graphic organizer on the writing process/product and on other psychological variables].

    PubMed

    García Sánchez, Jesús N; Rodríguez Pérez, Celestino

    2007-05-01

    An experimental study of the influence of the recording interval and a graphic organizer on the processes of written composition and on the final product is presented. We studied 326 participants, aged 10 to 16 years, by means of a nested design. Two groups were compared: one group was aided in the writing process with a graphic organizer and the other was not. Each group was subdivided into two further groups: one with a mean recording interval of 45 seconds and the other with a recording interval of approximately 90 seconds in a writing log. The results showed that the group aided by a graphic organizer obtained better results in both the processes and the written product, and that the groups assessed with an average interval of 45 seconds obtained worse results. Implications for educational practice are discussed, and limitations and future perspectives are commented on.

  12. Visual displays that directly interface and provide read-outs of molecular states via molecular graphics processing units.

    PubMed

    Poje, Julia E; Kastratovic, Tamara; Macdonald, Andrew R; Guillermo, Ana C; Troetti, Steven E; Jabado, Omar J; Fanning, M Leigh; Stefanovic, Darko; Macdonald, Joanne

    2014-08-25

    The monitoring of molecular systems usually requires sophisticated technologies to interpret nanoscale events into electronic-decipherable signals. We demonstrate a new method for obtaining read-outs of molecular states that uses graphics processing units made from molecular circuits. Because they are made from molecules, the units are able to directly interact with molecular systems. We developed deoxyribozyme-based graphics processing units able to monitor nucleic acids and output alphanumerical read-outs via a fluorescent display. Using this design we created a molecular 7-segment display, a molecular calculator able to add and multiply small numbers, and a molecular automaton able to diagnose Ebola and Marburg virus sequences. These molecular graphics processing units provide insight for the construction of autonomous biosensing devices, and are essential components for the development of molecular computing platforms devoid of electronics.

  13. Performance and scalability of Fourier domain optical coherence tomography acceleration using graphics processing units.

    PubMed

    Li, Jian; Bloch, Pavel; Xu, Jing; Sarunic, Marinko V; Shannon, Lesley

    2011-05-01

    Fourier domain optical coherence tomography (FD-OCT) provides faster line rates, better resolution, and higher sensitivity for noninvasive, in vivo biomedical imaging compared to traditional time domain OCT (TD-OCT). However, because the signal processing for FD-OCT is computationally intensive, real-time FD-OCT applications demand powerful computing platforms to deliver acceptable performance. Graphics processing units (GPUs) have been used as coprocessors to accelerate FD-OCT by leveraging their relatively simple programming model to exploit thread-level parallelism. Unfortunately, GPUs do not "share" memory with their host processors, requiring additional data transfers between the GPU and CPU. In this paper, we implement a complete FD-OCT accelerator on a consumer grade GPU/CPU platform. Our data acquisition system uses spectrometer-based detection and a dual-arm interferometer topology with numerical dispersion compensation for retinal imaging. We demonstrate that the maximum line rate is dictated by the memory transfer time and not the processing time due to the GPU platform's memory model. Finally, we discuss how the performance trends of GPU-based accelerators compare to the expected future requirements of FD-OCT data rates.

  14. Parallel particle swarm optimization on a graphics processing unit with application to trajectory optimization

    NASA Astrophysics Data System (ADS)

    Wu, Q.; Xiong, F.; Wang, F.; Xiong, Y.

    2016-10-01

    In order to reduce the computational time, a fully parallel implementation of the particle swarm optimization (PSO) algorithm on a graphics processing unit (GPU) is presented. Instead of being executed on the central processing unit (CPU) sequentially, PSO is executed in parallel via the GPU on the compute unified device architecture (CUDA) platform. The fitness evaluation and the updating of the velocity and position of all particles are parallelized and described in detail. Comparative studies on the optimization of four benchmark functions and a trajectory optimization problem are conducted by running PSO on the GPU (GPU-PSO) and CPU (CPU-PSO). The impact of the design dimension, the number of particles, the size of the thread block in the GPU, and their interactions on the computational time is investigated. The results show that the computational time of the developed GPU-PSO is much shorter than that of CPU-PSO, with comparable accuracy, which demonstrates the remarkable speed-up capability of GPU-PSO.
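
    A CPU/NumPy reference version of the PSO iteration being parallelized is sketched below; the fitness evaluations and per-particle velocity/position updates are independent across particles and therefore map directly onto GPU threads. The hyperparameter values are illustrative defaults, not the paper's settings.

      import numpy as np

      rng = np.random.default_rng(2)

      def pso(f, bounds, n_particles=64, n_iters=200, w=0.7, c1=1.5, c2=1.5):
          lo, hi = bounds                         # arrays of per-dimension limits
          dim = len(lo)
          x = rng.uniform(lo, hi, (n_particles, dim))
          v = np.zeros_like(x)
          pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
          g = pbest[pbest_val.argmin()].copy()    # global best
          for _ in range(n_iters):
              r1, r2 = rng.random((2, n_particles, dim))
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = np.clip(x + v, lo, hi)
              val = np.apply_along_axis(f, 1, x)
              improved = val < pbest_val
              pbest[improved], pbest_val[improved] = x[improved], val[improved]
              g = pbest[pbest_val.argmin()].copy()
          return g, pbest_val.min()

      # Example: minimize the sphere function in 2-D.
      best, best_val = pso(lambda p: (p**2).sum(),
                           (np.array([-5.0, -5.0]), np.array([5.0, 5.0])))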

  15. Developing extensible lattice-Boltzmann simulators for general-purpose graphics-processing units

    SciTech Connect

    Walsh, S C; Saar, M O

    2011-12-21

    Lattice-Boltzmann methods are versatile numerical modeling techniques capable of reproducing a wide variety of fluid-mechanical behavior. These methods are well suited to parallel implementation, particularly in the single-instruction multiple-data (SIMD) parallel processing environments found in computer graphics processing units (GPUs). Although more recent programming tools dramatically improve the ease with which GPU programs can be written, the programming environment still lacks the flexibility available to more traditional CPU programs. In particular, it may be difficult to develop modular and extensible programs that require variable on-device functionality with current GPU architectures. This paper describes a process of automatic code generation that overcomes these difficulties for lattice-Boltzmann simulations. It details the development of GPU-based modules for an extensible lattice-Boltzmann simulation package, LBHydra. The performance of the automatically generated code is compared to that of equivalent purpose-written codes for single-phase, multiple-phase, and multiple-component flows. The flexibility of the new method is demonstrated by simulating a rising, dissolving droplet in a porous medium with user-generated lattice-Boltzmann models and subroutines.
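
    For concreteness, the D2Q9 BGK collision step, representative of the kernel family such a code generator targets on the GPU, can be written in NumPy as follows (f holds the nine distribution functions on an ny x nx grid); this is a textbook sketch, not LBHydra's generated code.

      import numpy as np

      # D2Q9 lattice velocities and weights
      e = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
      w = np.array([4/9] + [1/9]*4 + [1/36]*4)

      def bgk_collide(f, tau):
          rho = f.sum(axis=0)                                   # density
          ux = (f * e[:, 0, None, None]).sum(axis=0) / rho      # velocity components
          uy = (f * e[:, 1, None, None]).sum(axis=0) / rho
          usq = ux**2 + uy**2
          feq = np.empty_like(f)
          for i in range(9):                                    # equilibrium distributions
              eu = e[i, 0] * ux + e[i, 1] * uy
              feq[i] = w[i] * rho * (1 + 3*eu + 4.5*eu**2 - 1.5*usq)
          return f - (f - feq) / tau                            # BGK relaxation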

  16. Graphics processing unit-based dispersion encoded full-range frequency-domain optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Wang, Ling; Hofer, Bernd; Guggenheim, Jeremy A.; Považay, Boris

    2012-07-01

    Dispersion encoded full-range (DEFR) frequency-domain optical coherence tomography (FD-OCT) and its enhanced version, fast DEFR, utilize the dispersion mismatch between sample and reference arm to eliminate the ambiguity in OCT signals caused by non-complex-valued spectral measurement, thereby numerically doubling the usable information content. By iteratively suppressing asymmetrically dispersed complex-conjugate artifacts of OCT signal pulses, the complex-valued signal can be recovered without additional measurements, thus doubling the spatial signal range to cover the full positive and negative sampling range. Previously, the computational complexity and low processing speed limited the application of DEFR to smaller amounts of data and did not allow for interactive operation at high resolution. We report a graphics processing unit (GPU)-based implementation of fast DEFR, which improves reconstruction speed by a factor of more than 90 with respect to CPU-based processing and thereby overcomes these limitations. Implemented on a commercial low-cost GPU, a display line rate of ~21,000 depth scans/s for 2048 samples/depth scan using 10 iterations of the fast DEFR algorithm has been achieved, sufficient for real-time visualization in situ.

  17. Adiabatic/nonadiabatic state-to-state reactive scattering dynamics implemented on graphics processing units.

    PubMed

    Zhang, Pei-Yu; Han, Ke-Li

    2013-09-12

    An efficient graphics processing unit (GPU) version of a time-dependent wavepacket code is developed for atom-diatom state-to-state reactive scattering processes. The propagation of the wavepacket is calculated entirely on GPUs employing the split-operator method, after preparation of the initial wavepacket on the central processing unit (CPU). An additional split-operator step is introduced in the rotational part of the Hamiltonian to decrease communication between GPUs without losing accuracy of the state-to-state information. The code is tested by calculating the differential cross sections of the H + H2 reaction and state-resolved reaction probabilities of nonadiabatic triplet-singlet transitions of O(³P,¹D) + H2 for total angular momentum J = 0. Global speedups of 22.11, 38.80, and 44.80 are found when comparing parallel computation on one GPU, on two GPUs with the exact rotational operator, and on two GPUs with an approximate rotational operator, respectively, against serial computation on the CPU.
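
    One time step of the one-dimensional split-operator propagator, the scheme named above, looks as follows in NumPy (atomic units with hbar = m = 1); the production code applies the same pattern to multidimensional wavepackets on the GPU.

      import numpy as np

      def split_operator_step(psi, V, dx, dt):
          """Advance wavefunction psi one step dt on a uniform grid with potential V."""
          n = len(psi)
          k = 2 * np.pi * np.fft.fftfreq(n, d=dx)       # momentum grid
          psi = np.exp(-0.5j * V * dt) * psi            # half kick: exp(-i V dt/2)
          psi = np.fft.ifft(np.exp(-0.5j * k**2 * dt) * np.fft.fft(psi))  # kinetic drift
          return np.exp(-0.5j * V * dt) * psi           # half kick: exp(-i V dt/2)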

  18. Optical diagnostics of a single evaporating droplet using fast parallel computing on graphics processing units

    NASA Astrophysics Data System (ADS)

    Jakubczyk, D.; Migacz, S.; Derkachov, G.; Woźniak, M.; Archer, J.; Kolwas, K.

    2016-09-01

    We report on the first application of graphics processing unit (GPU)-accelerated computing to improve the performance of numerical methods used for the optical characterization of evaporating microdroplets. Single microdroplets of various liquids with different volatility and molecular weight (glycerine, glycols, water, etc.), as well as mixtures of liquids and diverse suspensions, evaporate inside an electrodynamic trap under a chosen temperature and composition of the atmosphere. The series of scattering patterns recorded from the evaporating microdroplets are processed by fitting complete Mie theory predictions with a gradientless lookup-table method. We show that computations on GPUs can be effectively applied to inverse scattering problems. In particular, our technique accelerated calculations of Mie scattering theory by over 800 times relative to a single-core processor in a MATLAB environment, and by almost 100 times relative to the corresponding code in C. Additionally, we overcame the problem of time-consuming data post-processing when some of the parameters (particularly the refractive index) of an investigated liquid are uncertain. Our program allows us to track the parameters characterizing the evaporating droplet nearly simultaneously with the progress of evaporation.

  19. Atmospheric process evaluation of mobile source emissions

    SciTech Connect

    1995-07-01

    During the past two decades there has been a considerable effort in the US to develop and introduce an alternative to the use of gasoline and conventional diesel fuel for transportation. The primary motives for this effort have been twofold: energy security and improvement in air quality, most notably ozone, or smog. The anticipated improvement in air quality is associated with a decrease in the atmospheric reactivity, and sometimes a decrease in the mass emission rate, of the organic gas and NOₓ emissions from alternative fuels when compared to conventional transportation fuels. Quantification of these air quality impacts is a prerequisite to decisions on adopting alternative fuels. The purpose of this report is to present a critical review of the procedures and data base used to assess the impact on ambient air quality of mobile source emissions from alternative and conventional transportation fuels and to make recommendations as to how this process can be improved. Alternative transportation fuels are defined as methanol, ethanol, CNG, LPG, and reformulated gasoline. Most of the discussion centers on light-duty AFVs operating on these fuels. Other advanced transportation technologies and fuels such as hydrogen, electric vehicles, and fuel cells, will not be discussed. However, the issues raised herein can also be applied to these technologies and other classes of vehicles, such as heavy-duty diesels (HDDs). An evaluation of the overall impact of AFVs on society requires consideration of a number of complex issues. It involves the development of new vehicle technology associated with engines, fuel systems, and emission control technology; the implementation of the necessary fuel infrastructure; and an appropriate understanding of the economic, health, safety, and environmental impacts associated with the use of these fuels. This report addresses the steps necessary to properly evaluate the impact of AFVs on ozone air quality.

  20. Density-fitted singles and doubles coupled cluster on graphics processing units

    NASA Astrophysics Data System (ADS)

    DePrince, A. Eugene, III; Kennedy, Matthew R.; Sumpter, Bobby G.; Sherrill, C. David

    2014-03-01

    We adapt an algorithm for singles and doubles coupled cluster (CCSD) that uses density fitting or Cholesky decomposition (CD) in the construction and contraction of all electron repulsion integrals (ERIs) for use on heterogeneous compute nodes consisting of a multicore central processing unit (CPU) and at least one graphics processing unit (GPU). The use of approximate three-index ERIs ameliorates two of the major difficulties in designing scientific algorithms for GPUs: (1) the extremely limited global memory on the devices and (2) the overhead associated with data motion across the bus. For the benzene trimer described by an aug-cc-pVDZ basis set, the use of a single NVIDIA Tesla C2070 (Fermi) GPU accelerates a CD-CCSD computation by a factor of 2.1, relative to the multicore CPU-only algorithm that uses six highly efficient Intel Core i7-3930K CPU cores. The use of two Fermi GPUs provides an acceleration of 2.89, which is comparable to that observed when using a single NVIDIA Kepler K20c GPU (2.73).

  1. Spatial resolution recovery utilizing multi-ray tracing and graphic processing unit in PET image reconstruction.

    PubMed

    Liang, Yicheng; Peng, Hao

    2015-02-07

    Depth-of-interaction (DOI) poses a major challenge for a PET system to achieve uniform spatial resolution across the field-of-view, particularly for small animal and organ-dedicated PET systems. In this work, we implemented an analytical method to model system matrix for resolution recovery, which was then incorporated in PET image reconstruction on a graphical processing unit platform, due to its parallel processing capacity. The method utilizes the concepts of virtual DOI layers and multi-ray tracing to calculate the coincidence detection response function for a given line-of-response. The accuracy of the proposed method was validated for a small-bore PET insert to be used for simultaneous PET/MR breast imaging. In addition, the performance comparisons were studied among the following three cases: 1) no physical DOI and no resolution modeling; 2) two physical DOI layers and no resolution modeling; and 3) no physical DOI design but with a different number of virtual DOI layers. The image quality was quantitatively evaluated in terms of spatial resolution (full-width-half-maximum and position offset), contrast recovery coefficient and noise. The results indicate that the proposed method has the potential to be used as an alternative to other physical DOI designs and achieve comparable imaging performances, while reducing detector/system design cost and complexity.

  2. A performance comparison of different graphics processing units running direct N-body simulations

    NASA Astrophysics Data System (ADS)

    Capuzzo-Dolcetta, R.; Spera, M.

    2013-11-01

    Hybrid computational architectures based on the joint power of Central Processing Units (CPUs) and Graphics Processing Units (GPUs) are becoming popular and powerful hardware tools for a wide range of simulations in biology, chemistry, engineering, physics, etc. In this paper we present a performance comparison of various GPUs available on the market when applied to the numerical integration of the classic, gravitational N-body problem. To do this, we developed an OpenCL version of the parallel code HiGPUs for these tests, because this portable version is the only one able to work on GPUs of different makes. The main general result is that we confirm the reliability, speed, and cheapness of GPUs when applied to the examined kind of problems (i.e., when the forces to be evaluated depend on the mutual distances, as happens in gravitational physics and molecular dynamics). More specifically, we find that even cheap GPUs built for gaming applications are very fast in terms of computing speed in scientific applications as well and, although with some limitations concerning on-board memory, can be a good choice for building a cheap and efficient machine for scientific applications.
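
    The kernel being benchmarked is the O(N²) pairwise force evaluation; a NumPy reference version (G = 1, with Plummer-type softening) is sketched below, where each GPU work-item would compute one body's row of the pair sum.

      import numpy as np

      def accelerations(pos, mass, soft=1e-4):
          """pos: N x 3 positions; mass: length-N array; returns N x 3 accelerations."""
          d = pos[None, :, :] - pos[:, None, :]          # r_j - r_i for every pair
          r2 = (d**2).sum(-1) + soft**2                  # softened squared distances
          inv_r3 = r2**-1.5
          np.fill_diagonal(inv_r3, 0.0)                  # exclude self-interaction
          return (d * (mass[None, :] * inv_r3)[..., None]).sum(axis=1)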

  3. Real-time lossy compression of hyperspectral images using iterative error analysis on graphics processing units

    NASA Astrophysics Data System (ADS)

    Sánchez, Sergio; Plaza, Antonio

    2012-06-01

    Hyperspectral image compression is an important task in remotely sensed Earth observation as the dimensionality of this kind of image data is ever increasing. This requires on-board compression in order to optimize the downlink connection when sending the data to Earth. A successful algorithm for lossy compression of remotely sensed hyperspectral data is the iterative error analysis (IEA) algorithm, which applies an iterative process that allows the amount of information loss and the compression ratio to be controlled through the number of iterations. This algorithm, which is based on spectral unmixing concepts, can be computationally expensive for hyperspectral images with high dimensionality. In this paper, we develop a new parallel implementation of the IEA algorithm for hyperspectral image compression on graphics processing units (GPUs). The proposed implementation is tested on several different GPUs from NVidia, and is shown to exhibit real-time performance in the analysis of Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data sets collected over different locations. The proposed algorithm and its parallel GPU implementation represent a significant advance towards real-time onboard (lossy) compression of hyperspectral data, where the quality of the compression can also be adjusted in real time.

  4. Fast ray-tracing of human eye optics on Graphics Processing Units.

    PubMed

    Wei, Qi; Patkar, Saket; Pai, Dinesh K

    2014-05-01

    We present a new technique for simulating retinal image formation by tracing a large number of rays from objects in three dimensions as they pass through the optic apparatus of the eye to the retina. Simulating human optics is useful for understanding basic questions of vision science and for studying vision defects and their corrections. Because of the complexity of computing such simulations accurately, most previous efforts used simplified analytical models of the normal eye. This makes them less effective in modeling vision disorders associated with abnormal shapes of the ocular structures, which are hard to represent precisely with analytical surfaces. We have developed a computer simulator that can simulate ocular structures of arbitrary shapes, for instance represented by polygon meshes. Topographic and geometric measurements of the cornea, lens, and retina from keratometer or medical imaging data can be integrated for individualized examination. We utilize parallel processing using modern Graphics Processing Units (GPUs) to efficiently compute retinal images by tracing millions of rays. A stable retinal image can be generated within minutes. We simulated depth-of-field, accommodation, chromatic aberrations, as well as astigmatism and its correction. We also show application of the technique in patient-specific vision correction by incorporating geometric models of the orbit reconstructed from clinical medical images.
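
    The step repeated millions of times in such a tracer is refraction at each ocular interface. As a hedged illustration (a minimal CUDA C++ device function using the standard vector form of Snell's law; the simulator's own code is not given in the abstract), the helper below bends a unit ray direction d at a surface with unit normal nrm separating media of refractive indices n1 and n2:

        // Vector-form Snell refraction at one surface intersection. d and nrm
        // are unit vectors, with nrm pointing against the incident ray.
        // Returns false on total internal reflection.
        __device__ bool refract(float3 d, float3 nrm, float n1, float n2,
                                float3* t)
        {
            float eta = n1 / n2;
            float cos_i = -(d.x*nrm.x + d.y*nrm.y + d.z*nrm.z);
            float k = 1.f - eta*eta*(1.f - cos_i*cos_i);
            if (k < 0.f) return false;           // total internal reflection
            float c = eta*cos_i - sqrtf(k);
            t->x = eta*d.x + c*nrm.x;
            t->y = eta*d.y + c*nrm.y;
            t->z = eta*d.z + c*nrm.z;
            return true;
        }

    With mesh-based surfaces, the normal comes from the intersected triangle rather than from an analytical formula, which is what lets the method handle irregularly shaped corneas and lenses.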

  5. Accelerated rescaling of single Monte Carlo simulation runs with the Graphics Processing Unit (GPU).

    PubMed

    Yang, Owen; Choi, Bernard

    2013-01-01

    To interpret fiber-based and camera-based measurements of remitted light from biological tissues, researchers typically use analytical models, such as the diffusion approximation to light transport theory, or stochastic models, such as Monte Carlo modeling. To achieve rapid (ideally real-time) measurement of tissue optical properties, especially in clinical situations, there is a critical need to accelerate Monte Carlo simulation runs. In this manuscript, we report on our approach using the Graphics Processing Unit (GPU) to accelerate rescaling of single Monte Carlo runs to rapidly calculate diffuse reflectance values for different sets of tissue optical properties. We selected MATLAB to enable non-specialists in C and CUDA-based programming to use the generated open-source code. We developed a software package with four abstraction layers. To calculate a set of diffuse reflectance values from a simulated tissue with homogeneous optical properties, our rescaling GPU-based approach achieves a reduction in computation time of several orders of magnitude as compared to other GPU-based approaches. Specifically, our GPU-based approach generated a diffuse reflectance value in 0.08 ms. The transfer time from CPU to GPU memory is currently a limiting factor with GPU-based calculations. However, for calculation of multiple diffuse reflectance values, our GPU-based approach can still lead to processing that is ~3400 times faster than other GPU-based approaches.
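
    The rescaling idea itself is simple: run one baseline Monte Carlo simulation, store each detected photon's accumulated path length, and then evaluate the diffuse reflectance for any absorption coefficient by Beer-Lambert reweighting instead of re-simulating. A hedged CUDA C++ sketch of that reweighting pass (illustrative only; the authors' package is a four-layer MATLAB/CUDA design whose internals the abstract does not give):

        // Rescale one stored Monte Carlo run to a new absorption coefficient
        // mu_a: reflectance = sum_i w_i * exp(-mu_a * L_i), normalized on the
        // host by the number of launched photons.
        __global__ void rescale_reflectance(int n_photons,
                                            const float* path_len,
                                            const float* weight, float mu_a,
                                            float* reflectance)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= n_photons) return;
            atomicAdd(reflectance, weight[i] * expf(-mu_a * path_len[i]));
        }

    Because each new optical-property set costs only one pass over stored photon records, sweeping an entire lookup table of properties becomes feasible in near real time.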

  6. Graphics Processing Unit (GPU) Acceleration of the Goddard Earth Observing System Atmospheric Model

    NASA Technical Reports Server (NTRS)

    Putnam, William

    2011-01-01

    The Goddard Earth Observing System 5 (GEOS-5) is the atmospheric model used by the Global Modeling and Assimilation Office (GMAO) for a variety of applications, from long-term climate prediction at relatively coarse resolution, to data assimilation and numerical weather prediction, to very high-resolution cloud-resolving simulations. GEOS-5 is being ported to a graphics processing unit (GPU) cluster at the NASA Center for Climate Simulation (NCCS). By utilizing GPU co-processor technology, we expect to increase the throughput of GEOS-5 by at least an order of magnitude, and accelerate the process of scientific exploration across all scales of global modeling, including: 1) the large-scale, high-end application of non-hydrostatic, global, cloud-resolving modeling at 10- to 1-kilometer (km) global resolutions; 2) intermediate-resolution seasonal climate and weather prediction at 50- to 25-km on small clusters of GPUs; and 3) long-range, coarse-resolution climate modeling, enabled on a small box of GPUs for the individual researcher. After being ported to the GPU cluster, the primary physics components and the dynamical core of GEOS-5 have demonstrated a potential speedup of 15-40 times over conventional processor cores. Performance improvements of this magnitude reduce the required scalability of 1-km, global, cloud-resolving models from an unfathomable 6 million cores to an attainable 200,000 GPU-enabled cores.

  7. The feasibility of genome-scale biological network inference using Graphics Processing Units.

    PubMed

    Thiagarajan, Raghuram; Alavi, Amir; Podichetty, Jagdeep T; Bazil, Jason N; Beard, Daniel A

    2017-01-01

    Systems research spanning fields from biology to finance involves the identification of models to represent the underpinnings of complex systems. Formal approaches for data-driven identification of network interactions include statistical inference-based approaches and methods to identify dynamical systems models that are capable of fitting multivariate data. Availability of large data sets and so-called 'big data' applications in biology present great opportunities as well as major challenges for systems identification/reverse engineering applications. For example, both inverse identification and forward simulations of genome-scale gene regulatory network models pose compute-intensive problems. This issue is addressed here by combining the processing power of Graphics Processing Units (GPUs) and a parallel reverse engineering algorithm for inference of regulatory networks. It is shown that, given an appropriate data set, information on genome-scale networks (systems of 1000 or more state variables) can be inferred using a reverse-engineering algorithm in a matter of days on a small-scale modern GPU cluster.

  8. A full graphics processing unit implementation of uncertainty-aware drainage basin delineation

    NASA Astrophysics Data System (ADS)

    Eränen, David; Oksanen, Juha; Westerholm, Jan; Sarjakoski, Tapani

    2014-12-01

    Terrain analysis based on modern, high-resolution Digital Elevation Models (DEMs) has become quite time-consuming because of the large amounts of data involved. Additionally, when the propagation of uncertainties during the analysis process is investigated using the Monte Carlo method, the run time of the algorithm can increase by a factor of between 100 and 1000, depending on the desired accuracy of the result. This increase in run time constitutes a large barrier as the use of uncertainty-aware terrain analysis becomes more general. In this paper, we evaluate the use of Graphics Processing Units (GPUs) in uncertainty-aware drainage basin delineation. All computations are run on a GPU, including the creation of the realization of a stationary DEM uncertainty model, stream burning, pit filling, flow direction calculation, and the actual delineation of the drainage basins. On average, our GPU version is approximately 11 times faster than a sequential, one-core CPU version performing the same task.
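
    Of the pipeline stages listed, flow direction calculation maps onto the GPU most directly because every cell can be treated independently. As a hedged illustration (a minimal CUDA C++ sketch of the common D8 scheme; the paper does not state which flow-routing variant it uses), each thread assigns its cell of one Monte Carlo DEM realization to the steepest-descent neighbor:

        // D8 flow direction: each thread inspects the eight neighbors of one
        // interior cell and records the index (0..7) of the steepest downhill
        // neighbor, or -1 for a pit. cell is the grid spacing.
        __global__ void d8_flow_direction(const float* dem, int* dir,
                                          int w, int h, float cell)
        {
            int x = blockIdx.x * blockDim.x + threadIdx.x;
            int y = blockIdx.y * blockDim.y + threadIdx.y;
            if (x <= 0 || y <= 0 || x >= w - 1 || y >= h - 1) return;
            const int dx[8] = {1, 1, 0, -1, -1, -1, 0, 1};
            const int dy[8] = {0, 1, 1, 1, 0, -1, -1, -1};
            float z = dem[y*w + x], best = 0.f;
            int bestk = -1;
            for (int k = 0; k < 8; ++k) {
                float dist = (dx[k] && dy[k]) ? 1.41421356f*cell : cell;
                float s = (z - dem[(y + dy[k])*w + (x + dx[k])]) / dist;
                if (s > best) { best = s; bestk = k; }   // steepest descent
            }
            dir[y*w + x] = bestk;
        }

    In the Monte Carlo setting this kernel is simply re-run per DEM realization, which is why the whole uncertainty analysis benefits from the roughly 11x device-side speedup.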

  9. VACTIV: A graphical dialog based program for an automatic processing of line and band spectra

    NASA Astrophysics Data System (ADS)

    Zlokazov, V. B.

    2013-05-01

    The program VACTIV (Visual ACTIV) has been developed for the automatic analysis of spectrum-like distributions, in particular gamma-ray or alpha spectra, and is a standard graphical dialog based Windows XX application, driven by menu, mouse and keyboard. On the one hand, it is a conversion of the existing Fortran program ACTIV [1] to the DELPHI language; on the other hand, it is a transformation of the sequential syntax of Fortran programming to a new object-oriented style based on the organization of event interactions. The new features implemented in the algorithms of both versions are the following: as the peak model, both an analytical function and a graphical curve can be used; the peak search algorithm is able to recognize not only Gaussian peaks but also peaks with an irregular form, both narrow (2-4 channels) and broad (50-100 channels); and the regularization technique in the fitting guarantees a stable solution in the most complicated cases of strongly overlapping or weak peaks. The graphical dialog interface of VACTIV is much more convenient than the batch mode of ACTIV. [1] V.B. Zlokazov, Computer Physics Communications 28 (1982) 27-37. NEW VERSION PROGRAM SUMMARY. Program title: VACTIV. Catalogue identifier: ABAC_v2_0. Licensing provisions: none. Programming language: DELPHI 5-7 Pascal. Computer: IBM PC series. Operating system: Windows XX. RAM: 1 MB. Keywords: nuclear physics, spectrum decomposition, least squares analysis, graphical dialog, object-oriented programming. Classification: 17.6. Catalogue identifier of previous version: ABAC_v1_0. Journal reference of previous version: Comput. Phys. Commun. 28 (1982) 27. Does the new version supersede the previous version?: Yes. Nature of problem: VACTIV is intended for precise analysis of arbitrary spectrum-like distributions, e.g. gamma-ray and X-ray spectra, and allows the user to carry out the full cycle of automatic processing of such spectra, i.e. calibration, automatic peak search

  10. Graphical Technique to Support the Teaching/Learning Process of Software Process Reference Models

    NASA Astrophysics Data System (ADS)

    Espinosa-Curiel, Ismael Edrein; Rodríguez-Jacobo, Josefina; Fernández-Zepeda, José Alberto

    In this paper, we propose a set of diagrams to visualize software process reference models (PRMs). The diagrams, called dimods, are a combination of visual and process modeling techniques such as rich pictures, mind maps, IDEF and RAD diagrams. We show the use of this technique by designing a set of dimods for the Mexican Software Industry Process Model (MoProSoft). Additionally, we perform an evaluation of the usefulness of dimods. The result of the evaluation shows that dimods may be a support tool that facilitates the understanding, memorization, and learning of software PRMs in both software development organizations and universities. The results also show that dimods may have advantages over the traditional description methods for these types of models.

  11. Large-scale analytical Fourier transform of photomask layouts using graphics processing units

    NASA Astrophysics Data System (ADS)

    Sakamoto, Julia A.

    2015-10-01

    Compensation of lens-heating effects during the exposure scan in an optical lithographic system requires knowledge of the heating profile in the pupil of the projection lens. A necessary component in the accurate estimation of this profile is the total integrated distribution of light, relying on the squared modulus of the Fourier transform (FT) of the photomask layout for individual process layers. Requiring a layout representation in pixelated image format, the most common approach is to compute the FT numerically via the fast Fourier transform (FFT). However, the file size for a standard 26-mm × 33-mm mask with 5-nm pixels is an overwhelming 137 TB in single precision; the data importing process alone, prior to FFT computation, can render this method highly impractical. A more feasible solution is to handle layout data in a highly compact format with vertex locations of mask features (polygons), which correspond to elements in an integrated circuit, as well as pattern symmetries and repetitions (e.g., GDSII format). Provided the polygons can decompose into shapes for which analytical FT expressions are possible, the analytical approach dramatically reduces computation time and alleviates the burden of importing extensive mask data. Algorithms have been developed for importing and interpreting hierarchical layout data and computing the analytical FT on a graphics processing unit (GPU) for rapid parallel processing, not assuming incoherent imaging. Testing was performed on the active layer of a 392-μm × 297-μm virtual chip test structure with 43 substructures distributed over six hierarchical levels. The factor of improvement in the analytical versus numerical approach for importing layout data, performing CPU-GPU memory transfers, and executing the FT on a single NVIDIA Tesla K20X GPU was 1.6×10^4, 4.9×10^3, and 3.8×10^3, respectively. Various ideas for algorithm enhancements will be discussed.
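
    For axis-aligned rectangles, the closed form is elementary: a rectangle of widths (a, b) centered at (x0, y0) contributes a·b·sinc(a fx)·sinc(b fy)·exp(-2 pi i (fx x0 + fy y0)) to the layout spectrum. A hedged CUDA C++ sketch of the evaluation kernel follows (illustrative data layout; the published algorithm additionally exploits GDSII hierarchy and repetitions):

        #include <cuComplex.h>

        __device__ float sinc_pi(float x)    // sinc(x) = sin(pi x)/(pi x)
        {
            const float PI = 3.14159265358979f;
            return (fabsf(x) < 1e-7f) ? 1.f : sinf(PI*x)/(PI*x);
        }

        // One thread per frequency sample: sum the closed-form spectra of all
        // rectangles (center x0,y0 and widths a,b packed in a float4), so no
        // pixelated mask image is ever formed.
        __global__ void layout_ft(int n_rect, const float4* rect,
                                  const float2* freq, int n_freq,
                                  cuFloatComplex* F)
        {
            const float PI = 3.14159265358979f;
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= n_freq) return;
            float fx = freq[i].x, fy = freq[i].y;
            cuFloatComplex s = make_cuFloatComplex(0.f, 0.f);
            for (int r = 0; r < n_rect; ++r) {
                float4 q = rect[r];
                float mag = q.z*q.w*sinc_pi(q.z*fx)*sinc_pi(q.w*fy);
                float ph  = -2.f*PI*(fx*q.x + fy*q.y);
                s = cuCaddf(s, make_cuFloatComplex(mag*cosf(ph),
                                                   mag*sinf(ph)));
            }
            F[i] = s;
        }

    Squaring the modulus of F then gives the integrated light distribution needed for the heating estimate.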

  12. Image Processing for Multiple-Target Tracking on a Graphics Processing Unit

    DTIC Science & Technology

    2009-03-01

    The MATLAB implementation can only process HD images at about two frames per second (FPS). The GPU implementation is 2.87 times faster than the original MATLAB implementation, an improvement from about 8 FPS to 23 FPS.

  13. Interactive Computing and Graphics in Undergraduate Digital Signal Processing. Microcomputing Working Paper Series F 84-9.

    ERIC Educational Resources Information Center

    Onaral, Banu; And Others

    This report describes the development of a Drexel University electrical and computer engineering course on digital filter design that used interactive computing and graphics, and was one of three courses in a senior-level sequence on digital signal processing (DSP). Interactive and digital analysis/design routines and the interconnection of these…

  14. Accounting for Students' Schemes in the Development of a Graphical Process for Solving Polynomial Inequalities in Instrumented Activity

    ERIC Educational Resources Information Center

    Rivera, Ferdinand D.

    2007-01-01

    This paper provides an instrumental account of precalculus students' graphical process for solving polynomial inequalities. It is carried out in terms of the students' instrumental schemes as mediated by handheld graphing calculators and in cooperation with their classmates in a classroom setting. The ethnographic narrative relays an instrumental…

  15. Mobile Monitoring Data Processing & Analysis Strategies

    EPA Science Inventory

    The development of portable, high-time resolution instruments for measuring the concentrations of a variety of air pollutants has made it possible to collect data while in motion. This strategy, known as mobile monitoring, involves mounting air sensors on a variety of different pla...

  16. Mobile Monitoring Data Processing and Analysis Strategies

    EPA Science Inventory

    The development of portable, high-time resolution instruments for measuring the concentrations of a variety of air pollutants has made it possible to collect data while in motion. This strategy, known as mobile monitoring, involves mounting air sensors on a variety of different pla...

  17. TMSEEG: A MATLAB-Based Graphical User Interface for Processing Electrophysiological Signals during Transcranial Magnetic Stimulation.

    PubMed

    Atluri, Sravya; Frehlich, Matthew; Mei, Ye; Garcia Dominguez, Luis; Rogasch, Nigel C; Wong, Willy; Daskalakis, Zafiris J; Farzan, Faranak

    2016-01-01

    Concurrent recording of electroencephalography (EEG) during transcranial magnetic stimulation (TMS) is an emerging and powerful tool for studying brain health and function. Despite a growing interest in adaptation of TMS-EEG across neuroscience disciplines, its widespread utility is limited by signal processing challenges. These challenges arise due to the nature of TMS and the sensitivity of EEG to artifacts that often mask TMS-evoked potentials (TEPs). With an increase in the complexity of data processing methods and a growing interest in multi-site data integration, analysis of TMS-EEG data requires the development of a standardized method to recover TEPs from various sources of artifacts. This article introduces TMSEEG, an open-source MATLAB application comprising multiple algorithms organized to facilitate a step-by-step procedure for TMS-EEG signal processing. Using a modular design and an interactive graphical user interface (GUI), this toolbox aims to streamline TMS-EEG signal processing for both novice and experienced users. Specifically, TMSEEG provides: (i) targeted removal of TMS-induced and general EEG artifacts; (ii) a step-by-step modular workflow with flexibility to modify existing algorithms and add customized algorithms; (iii) a comprehensive display and quantification of artifacts; (iv) quality control check points with visual feedback of TEPs throughout the data processing workflow; and (v) capability to label and store a database of artifacts. In addition to these features, the software architecture of TMSEEG ensures minimal user effort in initial setup and configuration of parameters for each processing step. This is partly accomplished through a close integration with EEGLAB, a widely used open-source toolbox for EEG signal processing. In this article, we introduce TMSEEG, validate its features and demonstrate its application in extracting TEPs across several single- and multi-pulse TMS protocols. As the first open-source GUI-based pipeline

  18. TMSEEG: A MATLAB-Based Graphical User Interface for Processing Electrophysiological Signals during Transcranial Magnetic Stimulation

    PubMed Central

    Atluri, Sravya; Frehlich, Matthew; Mei, Ye; Garcia Dominguez, Luis; Rogasch, Nigel C.; Wong, Willy; Daskalakis, Zafiris J.; Farzan, Faranak

    2016-01-01

    Concurrent recording of electroencephalography (EEG) during transcranial magnetic stimulation (TMS) is an emerging and powerful tool for studying brain health and function. Despite a growing interest in adaptation of TMS-EEG across neuroscience disciplines, its widespread utility is limited by signal processing challenges. These challenges arise due to the nature of TMS and the sensitivity of EEG to artifacts that often mask TMS-evoked potentials (TEPs). With an increase in the complexity of data processing methods and a growing interest in multi-site data integration, analysis of TMS-EEG data requires the development of a standardized method to recover TEPs from various sources of artifacts. This article introduces TMSEEG, an open-source MATLAB application comprising multiple algorithms organized to facilitate a step-by-step procedure for TMS-EEG signal processing. Using a modular design and an interactive graphical user interface (GUI), this toolbox aims to streamline TMS-EEG signal processing for both novice and experienced users. Specifically, TMSEEG provides: (i) targeted removal of TMS-induced and general EEG artifacts; (ii) a step-by-step modular workflow with flexibility to modify existing algorithms and add customized algorithms; (iii) a comprehensive display and quantification of artifacts; (iv) quality control check points with visual feedback of TEPs throughout the data processing workflow; and (v) capability to label and store a database of artifacts. In addition to these features, the software architecture of TMSEEG ensures minimal user effort in initial setup and configuration of parameters for each processing step. This is partly accomplished through a close integration with EEGLAB, a widely used open-source toolbox for EEG signal processing. In this article, we introduce TMSEEG, validate its features and demonstrate its application in extracting TEPs across several single- and multi-pulse TMS protocols. As the first open-source GUI-based pipeline

  19. Fast, multi-channel real-time processing of signals with microsecond latency using graphics processing units

    NASA Astrophysics Data System (ADS)

    Rath, N.; Kato, S.; Levesque, J. P.; Mauel, M. E.; Navratil, G. A.; Peng, Q.

    2014-04-01

    Fast, digital signal processing (DSP) has many applications. Typical hardware options for performing DSP are field-programmable gate arrays (FPGAs), application-specific integrated DSP chips, or general purpose personal computer systems. This paper presents a novel DSP platform that has been developed for feedback control on the HBT-EP tokamak device. The system runs all signal processing exclusively on a Graphics Processing Unit (GPU) to achieve real-time performance with latencies below 8 μs. Signals are transferred into and out of the GPU using PCI Express peer-to-peer direct-memory-access transfers without involvement of the central processing unit or host memory. Tests were performed on the feedback control system of the HBT-EP tokamak using forty 16-bit floating point inputs and outputs each and a sampling rate of up to 250 kHz. Signals were digitized by a D-TACQ ACQ196 module, processing done on an NVIDIA GTX 580 GPU programmed in CUDA, and analog output was generated by D-TACQ AO32CPCI modules.

  20. Fast, multi-channel real-time processing of signals with microsecond latency using graphics processing units

    SciTech Connect

    Rath, N. Levesque, J. P.; Mauel, M. E.; Navratil, G. A.; Peng, Q.; Kato, S.

    2014-04-15

    Fast, digital signal processing (DSP) has many applications. Typical hardware options for performing DSP are field-programmable gate arrays (FPGAs), application-specific integrated DSP chips, or general purpose personal computer systems. This paper presents a novel DSP platform that has been developed for feedback control on the HBT-EP tokamak device. The system runs all signal processing exclusively on a Graphics Processing Unit (GPU) to achieve real-time performance with latencies below 8 μs. Signals are transferred into and out of the GPU using PCI Express peer-to-peer direct-memory-access transfers without involvement of the central processing unit or host memory. Tests were performed on the feedback control system of the HBT-EP tokamak using forty 16-bit floating point inputs and outputs each and a sampling rate of up to 250 kHz. Signals were digitized by a D-TACQ ACQ196 module, processing done on an NVIDIA GTX 580 GPU programmed in CUDA, and analog output was generated by D-TACQ AO32CPCI modules.

  1. Fast, multi-channel real-time processing of signals with microsecond latency using graphics processing units.

    PubMed

    Rath, N; Kato, S; Levesque, J P; Mauel, M E; Navratil, G A; Peng, Q

    2014-04-01

    Fast, digital signal processing (DSP) has many applications. Typical hardware options for performing DSP are field-programmable gate arrays (FPGAs), application-specific integrated DSP chips, or general purpose personal computer systems. This paper presents a novel DSP platform that has been developed for feedback control on the HBT-EP tokamak device. The system runs all signal processing exclusively on a Graphics Processing Unit (GPU) to achieve real-time performance with latencies below 8 μs. Signals are transferred into and out of the GPU using PCI Express peer-to-peer direct-memory-access transfers without involvement of the central processing unit or host memory. Tests were performed on the feedback control system of the HBT-EP tokamak using forty 16-bit floating point inputs and outputs each and a sampling rate of up to 250 kHz. Signals were digitized by a D-TACQ ACQ196 module, processing done on an NVIDIA GTX 580 GPU programmed in CUDA, and analog output was generated by D-TACQ AO32CPCI modules.

  2. Accelerating image reconstruction in three-dimensional optoacoustic tomography on graphics processing units

    PubMed Central

    Wang, Kun; Huang, Chao; Kao, Yu-Jiun; Chou, Cheng-Ying; Oraevsky, Alexander A.; Anastasio, Mark A.

    2013-01-01

    Purpose: Optoacoustic tomography (OAT) is inherently a three-dimensional (3D) inverse problem. However, most studies of OAT image reconstruction still employ two-dimensional imaging models. One important reason is because 3D image reconstruction is computationally burdensome. The aim of this work is to accelerate existing image reconstruction algorithms for 3D OAT by use of parallel programming techniques. Methods: Parallelization strategies are proposed to accelerate a filtered backprojection (FBP) algorithm and two different pairs of projection/backprojection operations that correspond to two different numerical imaging models. The algorithms are designed to fully exploit the parallel computing power of graphics processing units (GPUs). In order to evaluate the parallelization strategies for the projection/backprojection pairs, an iterative image reconstruction algorithm is implemented. Computer simulation and experimental studies are conducted to investigate the computational efficiency and numerical accuracy of the developed algorithms. Results: The GPU implementations improve the computational efficiency by factors of 1000, 125, and 250 for the FBP algorithm and the two pairs of projection/backprojection operators, respectively. Accurate images are reconstructed by use of the FBP and iterative image reconstruction algorithms from both computer-simulated and experimental data. Conclusions: Parallelization strategies for 3D OAT image reconstruction are proposed for the first time. These GPU-based implementations significantly reduce the computational time for 3D image reconstruction, complementing our earlier work on 3D OAT iterative image reconstruction. PMID:23387778

  3. Developing a multiscale, multi-resolution agent-based brain tumor model by graphics processing units

    PubMed Central

    2011-01-01

    Multiscale agent-based modeling (MABM) has been widely used to simulate Glioblastoma Multiforme (GBM) and its progression. At the intracellular level, the MABM approach employs a system of ordinary differential equations to describe quantitatively specific intracellular molecular pathways that determine phenotypic switches among cells (e.g. from migration to proliferation and vice versa). At the intercellular level, MABM describes cell-cell interactions by a discrete module. At the tissue level, partial differential equations are employed to model the diffusion of chemoattractants, which are the input factors of the intracellular molecular pathway. Moreover, multiscale analysis makes it possible to explore the molecules that play important roles in determining the cellular phenotypic switches that in turn drive the whole GBM expansion. However, owing to limited computational resources, MABM is currently a theoretical biological model that uses relatively coarse grids to simulate a few cancer cells in a small slice of brain cancer tissue. In order to improve this theoretical model to simulate and predict actual GBM cancer progression in real time, a graphics processing unit (GPU)-based parallel computing algorithm was developed and combined with the multi-resolution design to speed up the MABM. The simulated results demonstrated that the GPU-based, multi-resolution and multiscale approach can accelerate the previous MABM around 30-fold with relatively fine grids in a large extracellular matrix. Therefore, the new model has great potential for simulating and predicting real-time GBM progression, if real experimental data are incorporated. PMID:22176732

  4. Accelerating electrostatic surface potential calculation with multi-scale approximation on graphics processing units.

    PubMed

    Anandakrishnan, Ramu; Scogland, Tom R W; Fenley, Andrew T; Gordon, John C; Feng, Wu-chun; Onufriev, Alexey V

    2010-06-01

    Tools that compute and visualize biomolecular electrostatic surface potential have been used extensively for studying biomolecular function. However, determining the surface potential for large biomolecules on a typical desktop computer can take days or longer using currently available tools and methods. Two commonly used techniques to speed up these types of electrostatic computations are approximations based on multi-scale coarse-graining and parallelization across multiple processors. This paper demonstrates that for the computation of electrostatic surface potential, these two techniques can be combined to deliver significantly greater speed-up than either one separately, something that is in general not always possible. Specifically, the electrostatic potential computation, using an analytical linearized Poisson-Boltzmann (ALPB) method, is approximated using the hierarchical charge partitioning (HCP) multi-scale method, and parallelized on an ATI Radeon 4870 graphical processing unit (GPU). The implementation delivers a combined 934-fold speed-up for a 476,040 atom viral capsid, compared to an equivalent non-parallel implementation on an Intel E6550 CPU without the approximation. This speed-up is significantly greater than the 42-fold speed-up for the HCP approximation alone or the 182-fold speed-up for the GPU alone.

  5. High-Throughput Characterization of Porous Materials Using Graphics Processing Units

    SciTech Connect

    Kim, Jihan; Martin, Richard L.; Rübel, Oliver; Haranczyk, Maciej; Smit, Berend

    2012-05-08

    We have developed a high-throughput graphics processing unit (GPU) code that can characterize a large database of crystalline porous materials. In our algorithm, the GPU is utilized to accelerate energy grid calculations where the grid values represent interactions (i.e., Lennard-Jones + Coulomb potentials) between gas molecules (i.e., CH4 and CO2) and the material's framework atoms. Using a parallel flood fill CPU algorithm, inaccessible regions inside the framework structures are identified and blocked based on their energy profiles. Finally, we compute the Henry coefficients and heats of adsorption through statistical Widom insertion Monte Carlo moves in the domain restricted to the accessible space. The code offers significant speedup over a single core CPU code and allows us to characterize a set of porous materials at least an order of magnitude larger than ones considered in earlier studies. For structures selected from such a prescreening algorithm, full adsorption isotherms can be calculated by conducting multiple grand canonical Monte Carlo simulations concurrently within the GPU.
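
    The energy grid itself is an embarrassingly parallel sum: each grid point's value is the probe-framework interaction energy, independent of every other point. A hedged CUDA C++ sketch (minimal and non-periodic; a production code must also apply minimum-image or Ewald treatment of the periodic framework):

        // One thread per grid point: sum 12-6 Lennard-Jones plus Coulomb
        // interactions between a probe of charge q_probe and all framework
        // atoms. frame packs (x, y, z, charge); lj packs the mixed
        // (sigma, epsilon) per framework atom.
        __global__ void energy_grid(int n_atoms, const float4* frame,
                                    const float2* lj, float q_probe,
                                    const float3* grid_pts, int n_pts,
                                    float* U)
        {
            int g = blockIdx.x * blockDim.x + threadIdx.x;
            if (g >= n_pts) return;
            const float kc = 138.935458f;   // Coulomb constant, kJ/mol nm e^-2
            float3 p = grid_pts[g];
            float u = 0.f;
            for (int a = 0; a < n_atoms; ++a) {
                float4 f = frame[a];
                float dx = p.x - f.x, dy = p.y - f.y, dz = p.z - f.z;
                float r2 = dx*dx + dy*dy + dz*dz;
                float s2 = lj[a].x*lj[a].x / r2;
                float s6 = s2*s2*s2;
                u += 4.f*lj[a].y*(s6*s6 - s6)          // Lennard-Jones 12-6
                   + kc*q_probe*f.w*rsqrtf(r2);        // Coulomb, q_a/r
            }
            U[g] = u;
        }

    Grid points whose energies exceed a cutoff are later blocked by the flood-fill accessibility pass described above.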

  6. Space Object Collision Probability via Monte Carlo on the Graphics Processing Unit

    NASA Astrophysics Data System (ADS)

    Vittaldev, Vivek; Russell, Ryan P.

    2017-03-01

    Fast and accurate collision probability computations are essential for protecting space assets. Monte Carlo (MC) simulation is the most accurate but computationally intensive method. A Graphics Processing Unit (GPU) is used to parallelize the computation and reduce the overall runtime. Using MC techniques to compute the collision probability is common in the literature as the benchmark. An optimized implementation on the GPU, however, is a challenging problem and is the main focus of the current work. The MC simulation takes samples from the uncertainty distributions of the Resident Space Objects (RSOs) at any time during a time window of interest and outputs the separations at closest approach. Therefore, any uncertainty propagation method may be used and the collision probability is automatically computed as a function of the RSO collision radii. Integration using a fixed time step and a quartic interpolation after every Runge-Kutta step ensures that no close approaches are missed. Speedups of two orders of magnitude over a serial CPU implementation are shown, and speedups improve moderately with higher fidelity dynamics. The tool makes the MC approach tractable on a single workstation, and can be used as a final product, or for verifying surrogate and analytical collision probability methods.
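
    At its core, the MC estimate is a hit count: sample the relative state from the objects' uncertainty distributions, propagate, and count samples whose closest-approach separation falls inside the combined collision radius. The hedged CUDA C++ sketch below collapses the propagation step and samples the relative position at closest approach directly from a diagonal-covariance Gaussian, a deliberate simplification of the paper's full time-window propagation:

        #include <curand_kernel.h>

        // Each thread draws trials_per_thread Gaussian samples of the
        // relative position (mean mu, per-axis sigma) and counts those
        // inside the combined collision radius R. P = hits / total trials.
        __global__ void mc_collision(unsigned long long seed,
                                     int trials_per_thread,
                                     float3 mu, float3 sigma, float R,
                                     unsigned long long* hits)
        {
            int tid = blockIdx.x * blockDim.x + threadIdx.x;
            curandState st;
            curand_init(seed, tid, 0, &st);
            unsigned long long local = 0;
            for (int t = 0; t < trials_per_thread; ++t) {
                float x = mu.x + sigma.x * curand_normal(&st);
                float y = mu.y + sigma.y * curand_normal(&st);
                float z = mu.z + sigma.z * curand_normal(&st);
                if (x*x + y*y + z*z < R*R) ++local;
            }
            atomicAdd(hits, local);
        }

    Because the hit indicator is evaluated against R only at the end, one set of samples yields the probability for any choice of collision radius, as the abstract notes.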

  7. Graphics processing unit accelerated one-dimensional blood flow computation in the human arterial tree.

    PubMed

    Itu, Lucian; Sharma, Puneet; Kamen, Ali; Suciu, Constantin; Comaniciu, Dorin

    2013-12-01

    One-dimensional blood flow models have been used extensively for computing pressure and flow waveforms in the human arterial circulation. We propose an improved numerical implementation based on a graphics processing unit (GPU) for accelerating the execution of the one-dimensional model. A novel parallel hybrid CPU-GPU algorithm with compact copy operations (PHCGCC) and a parallel GPU-only (PGO) algorithm are developed, which are compared against previously introduced PHCG versions, a single-threaded CPU-only algorithm and a multi-threaded CPU-only algorithm. Different second-order numerical schemes (Lax-Wendroff and Taylor series) are evaluated for the numerical solution of the one-dimensional model, and the computational setups include physiologically motivated non-periodic (Windkessel) and periodic (structured tree) boundary conditions (BCs) and elastic and viscoelastic wall laws. Both the PHCGCC and the PGO implementations improved the execution time significantly. The speed-up values over the single-threaded CPU-only implementation range from 5.26× to 8.10×, whereas the speed-up values over the multi-threaded CPU-only implementation range from 1.84× to 4.02×. The PHCGCC algorithm performs best for an elastic wall law with non-periodic BCs and for viscoelastic wall laws, whereas the PGO algorithm performs best for an elastic wall law with periodic BCs.
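
    To make the stencil structure of such a scheme concrete, the hedged CUDA C++ sketch below applies one Lax-Wendroff step to a scalar advection equation u_t + c u_x = 0; the actual blood flow solver advances the coupled cross-sectional area/flow system per vessel segment with the same three-point pattern:

        // One Lax-Wendroff update for u_t + c u_x = 0 on interior nodes.
        // nu = c*dt/dx is the Courant number; stability requires |nu| <= 1.
        __global__ void lax_wendroff_step(const float* u, float* u_new,
                                          int n, float c, float dt, float dx)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i <= 0 || i >= n - 1) return;  // boundaries handled separately
            float nu = c * dt / dx;
            u_new[i] = u[i] - 0.5f*nu*(u[i+1] - u[i-1])
                            + 0.5f*nu*nu*(u[i+1] - 2.f*u[i] + u[i-1]);
        }

    The boundary nodes, where the Windkessel or structured-tree conditions couple vessel segments, are exactly the serial bottleneck that motivates the hybrid CPU-GPU variants compared in the paper.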

  8. Graphics processing unit (GPU)-based computation of heat conduction in thermally anisotropic solids

    NASA Astrophysics Data System (ADS)

    Nahas, C. A.; Balasubramaniam, Krishnan; Rajagopal, Prabhu

    2013-01-01

    Numerical modeling of anisotropic media is a computationally intensive task, since the physical properties differ in different directions and so bring additional complexity to the field problem. Largely used in the aerospace industry because of their lightweight nature, composite materials are a very good example of thermally anisotropic media. With advancements in video gaming technology, parallel processors are much cheaper today and accessibility to higher-end graphical processing devices has increased dramatically over the past couple of years. Since these massively parallel GPUs are very good at handling floating point arithmetic, they provide a new platform for engineers and scientists to accelerate their numerical models using commodity hardware. In this paper we implement a parallel finite difference model of thermal diffusion through anisotropic media using NVIDIA CUDA (Compute Unified Device Architecture). We use the NVIDIA GeForce GTX 560 Ti as our primary computing device, which consists of 384 CUDA cores clocked at 1645 MHz, with a standard desktop PC as the host platform. We compare the results against a standard CPU implementation for accuracy and speed and draw implications for simulation using the GPU paradigm.
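
    A hedged CUDA C++ sketch of the explicit update follows (assuming a diagonal diffusivity tensor with distinct kx and ky; a fully general anisotropic model adds a mixed-derivative term, and the paper's exact discretization is not given in the abstract):

        // One explicit finite-difference time step for 2-D anisotropic heat
        // conduction: T_t = kx T_xx + ky T_yy. One thread per interior node;
        // dt must satisfy the usual explicit stability limit.
        __global__ void heat_step(const float* T, float* T_new, int w, int h,
                                  float kx, float ky, float dt,
                                  float dx, float dy)
        {
            int x = blockIdx.x * blockDim.x + threadIdx.x;
            int y = blockIdx.y * blockDim.y + threadIdx.y;
            if (x <= 0 || y <= 0 || x >= w - 1 || y >= h - 1) return;
            int i = y*w + x;
            float d2x = (T[i+1] - 2.f*T[i] + T[i-1]) / (dx*dx);
            float d2y = (T[i+w] - 2.f*T[i] + T[i-w]) / (dy*dy);
            T_new[i] = T[i] + dt*(kx*d2x + ky*d2y);
        }

    Swapping the T and T_new buffers between launches advances the solution in time without any device-host traffic.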

  9. Parallel design of JPEG-LS encoder on graphics processing units

    NASA Astrophysics Data System (ADS)

    Duan, Hao; Fang, Yong; Huang, Bormin

    2012-01-01

    With recent technical advances in graphics processing units (GPUs), GPUs have outperformed CPUs in terms of compute capability and memory bandwidth. Many successful GPU applications to high performance computing have been reported. JPEG-LS is an ISO/IEC standard for lossless image compression which utilizes adaptive context modeling and run-length coding to improve the compression ratio. However, adaptive context modeling causes data dependency among adjacent pixels and the run-length coding has to be performed in a sequential way. Hence, using JPEG-LS to compress large-volume hyperspectral image data is quite time-consuming. We implement an efficient parallel JPEG-LS encoder for lossless hyperspectral compression on an NVIDIA GPU using the compute unified device architecture (CUDA) programming technology. We use the block parallel strategy, as well as such CUDA techniques as coalesced global memory access, parallel prefix sum, and asynchronous data transfer. We also show the relation between GPU speedup and AVIRIS block size, as well as the relation between compression ratio and AVIRIS block size. When AVIRIS images are divided into blocks, each with 64×64 pixels, we obtain the best GPU performance, with a 26.3x speedup over the original CPU code.

  10. Graphics processing unit (GPU)-accelerated particle filter framework for positron emission tomography image reconstruction.

    PubMed

    Yu, Fengchao; Liu, Huafeng; Hu, Zhenghui; Shi, Pengcheng

    2012-04-01

    As a consequence of the random nature of photon emissions and detections, the data collected by a positron emission tomography (PET) imaging system can be shown to be Poisson distributed. Meanwhile, there have been considerable efforts within the tracer kinetic modeling communities aimed at establishing the relationship between the PET data and physiological parameters that affect the uptake and metabolism of the tracer. Both statistical and physiological models are important to PET reconstruction. The majority of previous efforts are based on simplified, nonphysical mathematical expressions, such as Poisson modeling of the measured data, which is, on the whole, completed without consideration of the underlying physiology. In this paper, we propose a graphics processing unit (GPU)-accelerated reconstruction strategy that can take both the statistical model and the physiological model into consideration with the aid of state-space evolution equations. The proposed strategy formulates the organ activity distribution through tracer kinetics models and the photon-counting measurements through observation equations, thus making it possible to unify these two constraints into a general framework. In order to accelerate reconstruction, GPU-based parallel computing is introduced. Experiments on Zubal-thorax-phantom data, Monte Carlo simulated phantom data, and real phantom data show the power of the method. Furthermore, thanks to the computing power of the GPU, the reconstruction time is practical for clinical application.

  11. Accelerating the Gillespie Exact Stochastic Simulation Algorithm using hybrid parallel execution on graphics processing units.

    PubMed

    Komarov, Ivan; D'Souza, Roshan M

    2012-01-01

    The Gillespie Stochastic Simulation Algorithm (GSSA) and its variants are cornerstone techniques to simulate reaction kinetics in situations where the concentration of the reactant is too low to allow deterministic techniques such as differential equations. The inherent limitations of the GSSA include the time required for executing a single run and the need for multiple runs for parameter sweep exercises due to the stochastic nature of the simulation. Even with very efficient variants of the GSSA, computing many runs for parameter sweeps is prohibitively expensive. Here we present a novel variant of the exact GSSA that is amenable to acceleration by using graphics processing units (GPUs). We parallelize the execution of a single realization across threads in a warp (fine-grained parallelism). A warp is a collection of threads that are executed synchronously on a single multi-processor. Warps executing in parallel on different multi-processors (coarse-grained parallelism) simultaneously generate multiple trajectories. Novel data-structures and algorithms reduce memory traffic, which is the bottleneck in computing the GSSA. Our benchmarks show an 8×-120× performance gain over various state-of-the-art serial algorithms when simulating different types of models.

  12. The application of projected conjugate gradient solvers on graphical processing units

    SciTech Connect

    Lin, Youzuo; Renaut, Rosemary

    2011-01-26

    Graphical processing units introduce the capability for large-scale computation at the desktop. The numerical results presented verify that the efficiencies and accuracies of basic linear algebra subroutines of all levels are comparable when implemented in CUDA and Jacket, but experimental results demonstrate that the level-three subroutines offer the greatest potential for improving the efficiency of basic numerical algorithms. We consider the solution of sets of linear equations with multiple right hand sides using Krylov subspace-based solvers. For the multiple right hand side case, it is more efficient to make use of a block implementation of the conjugate gradient algorithm rather than to solve each system independently. Jacket is used for the implementation. Furthermore, including projection from one system to another improves efficiency. A relevant example, for which simulated results are provided, is the reconstruction of a three-dimensional medical image volume acquired from a positron emission tomography scanner. Efficiency of the reconstruction is improved by using projection across nearby slices.

  13. Accelerating Electrostatic Surface Potential Calculation with Multiscale Approximation on Graphics Processing Units

    PubMed Central

    Anandakrishnan, Ramu; Scogland, Tom R. W.; Fenley, Andrew T.; Gordon, John C.; Feng, Wu-chun; Onufriev, Alexey V.

    2010-01-01

    Tools that compute and visualize biomolecular electrostatic surface potential have been used extensively for studying biomolecular function. However, determining the surface potential for large biomolecules on a typical desktop computer can take days or longer using currently available tools and methods. Two commonly used techniques to speed up these types of electrostatic computations are approximations based on multi-scale coarse-graining and parallelization across multiple processors. This paper demonstrates that for the computation of electrostatic surface potential, these two techniques can be combined to deliver significantly greater speed-up than either one separately, something that is in general not always possible. Specifically, the electrostatic potential computation, using an analytical linearized Poisson Boltzmann (ALPB) method, is approximated using the hierarchical charge partitioning (HCP) multiscale method, and parallelized on an ATI Radeon 4870 graphical processing unit (GPU). The implementation delivers a combined 934-fold speed-up for a 476,040 atom viral capsid, compared to an equivalent non-parallel implementation on an Intel E6550 CPU without the approximation. This speed-up is significantly greater than the 42-fold speed-up for the HCP approximation alone or the 182-fold speed-up for the GPU alone. PMID:20452792

  14. An Optimized Multicolor Point-Implicit Solver for Unstructured Grid Applications on Graphics Processing Units

    NASA Technical Reports Server (NTRS)

    Zubair, Mohammad; Nielsen, Eric; Luitjens, Justin; Hammond, Dana

    2016-01-01

    In the field of computational fluid dynamics, the Navier-Stokes equations are often solved using an unstructured-grid approach to accommodate geometric complexity. Implicit solution methodologies for such spatial discretizations generally require frequent solution of large tightly-coupled systems of block-sparse linear equations. The multicolor point-implicit solver used in the current work typically requires a significant fraction of the overall application run time. In this work, an efficient implementation of the solver for graphics processing units is proposed. Several factors present unique challenges to achieving an efficient implementation in this environment. These include the variable amount of parallelism available in different kernel calls, indirect memory access patterns, low arithmetic intensity, and the requirement to support variable block sizes. In this work, the solver is reformulated to use standard sparse and dense Basic Linear Algebra Subprograms (BLAS) functions. However, numerical experiments show that the performance of the BLAS functions available in existing CUDA libraries is suboptimal for matrices representative of those encountered in actual simulations. Instead, optimized versions of these functions are developed. Depending on block size, the new implementations show performance gains of up to 7x over the existing CUDA library functions.

  15. GAMER: A GRAPHIC PROCESSING UNIT ACCELERATED ADAPTIVE-MESH-REFINEMENT CODE FOR ASTROPHYSICS

    SciTech Connect

    Schive, H.-Y.; Tsai, Y.-C.; Chiueh Tzihong

    2010-02-01

    We present the newly developed code, GPU-accelerated Adaptive-MEsh-Refinement code (GAMER), which adopts a novel approach in improving the performance of adaptive-mesh-refinement (AMR) astrophysical simulations by a large factor with the use of the graphic processing unit (GPU). The AMR implementation is based on a hierarchy of grid patches with an oct-tree data structure. We adopt a three-dimensional relaxing total variation diminishing scheme for the hydrodynamic solver and a multi-level relaxation scheme for the Poisson solver. Both solvers have been implemented in GPU, by which hundreds of patches can be advanced in parallel. The computational overhead associated with the data transfer between the CPU and GPU is carefully reduced by utilizing the capability of asynchronous memory copies in GPU, and the computing time of the ghost-zone values for each patch is diminished by overlapping it with the GPU computations. We demonstrate the accuracy of the code by performing several standard test problems in astrophysics. GAMER is a parallel code that can be run in a multi-GPU cluster system. We measure the performance of the code by performing purely baryonic cosmological simulations in different hardware implementations, in which detailed timing analyses provide comparison between the computations with and without GPU(s) acceleration. Maximum speed-up factors of 12.19 and 10.47 are demonstrated using one GPU with 4096^3 effective resolution and 16 GPUs with 8192^3 effective resolution, respectively.

  16. Accelerated Molecular Dynamics Simulations with the AMOEBA Polarizable Force Field on Graphics Processing Units.

    PubMed

    Lindert, Steffen; Bucher, Denis; Eastman, Peter; Pande, Vijay; McCammon, J Andrew

    2013-11-12

    The accelerated molecular dynamics (aMD) method has recently been shown to enhance the sampling of biomolecules in molecular dynamics (MD) simulations, often by several orders of magnitude. Here, we describe an implementation of the aMD method for the OpenMM application layer that takes full advantage of graphics processing unit (GPU) computing. The aMD method is shown to work in combination with the AMOEBA polarizable force field (AMOEBA-aMD), allowing the simulation of long time-scale events with a polarizable force field. Benchmarks are provided to show that the AMOEBA-aMD method is efficiently implemented and produces accurate results in its standard parametrization. For the BPTI protein, we demonstrate that the protein structure described with AMOEBA remains stable even on the extended time scales accessed at high levels of acceleration. For the DNA repair metalloenzyme endonuclease IV, we show that the use of the AMOEBA force field is a significant improvement over fixed-charge models for describing the enzyme active site. The new AMOEBA-aMD method is publicly available (http://wiki.simtk.org/openmm/VirtualRepository) and promises to be interesting for studying complex systems that can benefit from both the use of a polarizable force field and enhanced sampling.
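
    The aMD modification itself is a small amount of per-step arithmetic: whenever the instantaneous potential energy V falls below a threshold E, the potential is raised by dV = (E - V)^2 / (alpha + E - V), which works out to scaling every force by alpha^2 / (alpha + E - V)^2. A hedged CUDA C++ sketch of that force pass (a minimal illustration; OpenMM's actual implementation applies the boost through its own integrator machinery):

        // Scale forces by the aMD factor d(V + dV)/dV = alpha^2/(alpha+E-V)^2
        // when V < E. V, E and alpha are in consistent energy units; one
        // thread per atom.
        __global__ void amd_scale_forces(float3* force, int n_atoms,
                                         float V, float E, float alpha)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= n_atoms || V >= E) return;    // no boost above threshold
            float s = E - V;
            float scale = alpha*alpha / ((alpha + s)*(alpha + s));
            force[i].x *= scale;
            force[i].y *= scale;
            force[i].z *= scale;
        }

    Observables are afterwards reweighted by exp(beta * dV) per frame to recover canonical averages from the boosted trajectory.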

  17. Density-fitted singles and doubles coupled cluster on graphics processing units

    SciTech Connect

    Sherrill, David; Sumpter, Bobby G; DePrince, III, A. Eugene

    2014-01-01

    We adapt an algorithm for singles and doubles coupled cluster (CCSD) that uses density fitting (DF) or Cholesky decomposition (CD) in the construction and contraction of all electron repulsion integrals (ERIs) for use on heterogeneous compute nodes consisting of a multicore CPU and at least one graphics processing unit (GPU). The use of approximate 3-index ERIs ameliorates two of the major difficulties in designing scientific algorithms for GPUs: (i) the extremely limited global memory on the devices and (ii) the overhead associated with data motion across the PCI bus. For the benzene trimer described by an aug-cc-pVDZ basis set, the use of a single NVIDIA Tesla C2070 (Fermi) GPU accelerates a CD-CCSD computation by a factor of 2.1, relative to the multicore CPU-only algorithm that uses 6 highly efficient Intel Core i7-3930K CPU cores. The use of two Fermis provides an acceleration of 2.89, which is comparable to that observed when using a single NVIDIA Kepler K20c GPU (2.73).

  18. Graphic processing unit accelerated real-time partially coherent beam generator

    NASA Astrophysics Data System (ADS)

    Ni, Xiaolong; Liu, Zhi; Chen, Chunyi; Jiang, Huilin; Fang, Hanhan; Song, Lujun; Zhang, Su

    2016-07-01

    A method of using liquid crystals (LCs) to generate a partially coherent beam in real time is described. An expression for generating a partially coherent beam is given and calculated using a graphics processing unit (GPU), i.e., the GeForce GTX 680. A liquid-crystal on silicon (LCOS) device with 256 × 256 pixels is used as the partially coherent beam generator (PCBG). An optimizing method with partition convolution is used to improve the generating speed of our LC PCBG. The total time needed to generate a random phase map with a coherence width range from 0.015 mm to 1.5 mm is less than 2.4 ms for calculation and readout with the GPU; adding the time needed for the CPU to read and send to the LCOS, together with the response time of the LC PCBG, the real-time partially coherent beam (PCB) generation frequency of our LC PCBG is up to 312 Hz. To our knowledge, it is the first real-time partially coherent beam generator. A series of experiments based on double pinhole interference was performed. The results show that, to generate a laser beam with a coherence width of 0.9 mm or 1.5 mm with a mean error of approximately 1%, the required RMS values were 0.021306 and 0.020883 and the required PV values were 0.073576 and 0.072998, respectively.

  19. Multidisciplinary Simulation Acceleration using Multiple Shared-Memory Graphical Processing Units

    NASA Astrophysics Data System (ADS)

    Kemal, Jonathan Yashar

    For purposes of optimizing and analyzing turbomachinery and other designs, the unsteady Favre-averaged flow-field differential equations for an ideal compressible gas can be solved in conjunction with the heat conduction equation. We solve all equations using the finite-volume multiple-grid numerical technique, with the dual time-step scheme used for unsteady simulations. Our numerical solver code targets CUDA-capable Graphical Processing Units (GPUs) produced by NVIDIA. Making use of MPI, our solver can run across networked compute nodes, where each MPI process can use either a GPU or a Central Processing Unit (CPU) core for primary solver calculations. We use NVIDIA Tesla C2050/C2070 GPUs based on the Fermi architecture, and compare our resulting performance against Intel Xeon X5690 CPUs. Solver routines converted to CUDA typically run about 10 times faster on a GPU for sufficiently dense computational grids. We used a conjugate cylinder computational grid and ran a turbulent steady flow simulation using 4 increasingly dense computational grids. Our densest computational grid is divided into 13 blocks each containing 1033x1033 grid points, for a total of 13.87 million grid points or 1.07 million grid points per domain block. To obtain overall speedups, we compare the execution time of the solver's iteration loop, including all resource-intensive GPU-related memory copies. Comparing the performance of 8 GPUs to that of 8 CPUs, we obtain an overall speedup of about 6.0 when using our densest computational grid. This amounts to an 8-GPU simulation running about 39.5 times faster than a single-CPU simulation.

  20. On the Social Psychology of Social Mobility Processes.

    ERIC Educational Resources Information Center

    Kerckhoff, Alan C.

    1989-01-01

    Discusses two types of research--the "new structuralism" approach and "work and personality" studies--on the occupational attainment aspect of social mobility. Suggests that a life course approach to social mobility processes may provide a basis for integrating the structural and social psychological perspectives. Contains 25…

  1. Mobile Ultrasound Plane Wave Beamforming on iPhone or iPad using Metal- based GPU Processing

    NASA Astrophysics Data System (ADS)

    Hewener, Holger J.; Tretbar, Steffen H.

    Mobile and cost-effective ultrasound devices are being used in point-of-care scenarios or the trauma room. To reduce the costs of such devices, we have already presented the possibilities of consumer devices like the Apple iPad for full signal processing of raw data for ultrasound image generation. Using technologies like plane wave imaging to generate a full image with only one excitation/reception event, the acquisition times and power consumption of ultrasound imaging can be reduced for low-power mobile devices based on consumer electronics, realizing the transition from FPGA- or ASIC-based beamforming to more flexible software beamforming. The massively parallel beamforming processing can be done with the Apple framework "Metal" for advanced graphics and general-purpose GPU processing on the iOS platform. We were able to integrate the beamforming reconstruction into our mobile ultrasound processing application with imaging rates of up to 70 Hz on iPad Air 2 hardware.
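
    The beamforming arithmetic in question is delay-and-sum: for each pixel, look up every channel's sample at the plane-wave round-trip delay and add. For concreteness, here is a hedged sketch in CUDA C++ rather than Metal (the per-pixel arithmetic is the same; the array names and zero-degree transmit assumption are ours, not the authors'):

        // Plane-wave delay-and-sum: one thread per pixel. rf holds channel
        // data as n_elem rows of n_samp samples; elem_x are lateral element
        // positions; fs is the sampling rate and c the speed of sound.
        __global__ void das_plane_wave(const float* rf, int n_elem,
                                       int n_samp, const float* elem_x,
                                       float fs, float c,
                                       const float* px, const float* pz,
                                       int n_pix, float* image)
        {
            int p = blockIdx.x * blockDim.x + threadIdx.x;
            if (p >= n_pix) return;
            float x = px[p], z = pz[p], sum = 0.f;
            for (int e = 0; e < n_elem; ++e) {
                float dx = x - elem_x[e];
                // transmit path z (0-degree plane wave) + receive path
                float delay = (z + sqrtf(z*z + dx*dx)) / c;
                int s = (int)(delay * fs + 0.5f);
                if (s < n_samp) sum += rf[e*n_samp + s];
            }
            image[p] = sum;  // envelope detection and log compression follow
        }

    Because every pixel is independent, a single plane-wave acquisition keeps the mobile GPU fully occupied, which is what makes tens-of-hertz frame rates achievable on tablet hardware.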

  2. In-Situ Statistical Analysis of Autotune Simulation Data using Graphical Processing Units

    SciTech Connect

    Ranjan, Niloo; Sanyal, Jibonananda; New, Joshua Ryan

    2013-08-01

    Developing accurate building energy simulation models to assist energy efficiency at speed and scale is one of the research goals of the Whole-Building and Community Integration group, which is a part of the Building Technologies Research and Integration Center (BTRIC) at Oak Ridge National Laboratory (ORNL). The aim of the Autotune project is to speed up the automated calibration of building energy models to match measured utility or sensor data. The workflow of this project takes input parameters and runs EnergyPlus simulations on Oak Ridge Leadership Computing Facility's (OLCF) computing resources such as Titan, the world's second fastest supercomputer. Multiple simulations run in parallel on nodes having 16 processors each and a Graphics Processing Unit (GPU). Each node produces a 5.7 GB output file comprising 256 files from 64 simulations. Four types of output data, covering monthly, daily, hourly, and 15-minute time steps for each annual simulation, are produced. A total of 270+ TB of data has been produced. In this project, the simulation data is statistically analyzed in-situ using GPUs while annual simulations are being computed on the traditional processors. Titan, with its recent addition of 18,688 Compute Unified Device Architecture (CUDA) capable NVIDIA GPUs, has greatly extended its capability for massively parallel data processing. CUDA is used along with C/MPI to calculate statistical metrics such as sum, mean, variance, and standard deviation, leveraging GPU acceleration. The workflow developed in this project produces statistical summaries of the data, which reduces by multiple orders of magnitude the time and amount of data that needs to be stored. These statistical capabilities are anticipated to be useful for sensitivity analysis of EnergyPlus simulations.
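
    The metrics named reduce to two accumulations per variable, a sum and a sum of squares, from which mean, variance, and standard deviation follow on the host. A hedged CUDA C++ sketch of that reduction (a minimal grid-stride version; a production kernel would add shared-memory block reductions before the atomics):

        // Accumulate S1 = sum(x) and S2 = sum(x^2) over one output variable.
        // Host then computes mean = S1/n and variance = S2/n - mean*mean.
        __global__ void sum_and_sumsq(const float* data, int n,
                                      float* S1, float* S2)
        {
            float s1 = 0.f, s2 = 0.f;
            for (int i = blockIdx.x * blockDim.x + threadIdx.x;
                 i < n; i += gridDim.x * blockDim.x) {
                float v = data[i];
                s1 += v;
                s2 += v*v;
            }
            atomicAdd(S1, s1);
            atomicAdd(S2, s2);
        }

    Storing only these summaries instead of raw time-series output is what yields the orders-of-magnitude reduction in archived data noted above.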

  3. Computing the Density Matrix in Electronic Structure Theory on Graphics Processing Units.

    PubMed

    Cawkwell, M J; Sanville, E J; Mniszewski, S M; Niklasson, Anders M N

    2012-11-13

    The self-consistent solution of a Schrödinger-like equation for the density matrix is a critical and computationally demanding step in quantum-based models of interatomic bonding. This step was tackled historically via the diagonalization of the Hamiltonian. We have investigated the performance and accuracy of the second-order spectral projection (SP2) algorithm for the computation of the density matrix via a recursive expansion of the Fermi operator in a series of generalized matrix-matrix multiplications. We demonstrate that owing to its simplicity, the SP2 algorithm [Niklasson, A. M. N. Phys. Rev. B 2002, 66, 155115] is exceptionally well suited to implementation on graphics processing units (GPUs). The performance in double and single precision arithmetic of a hybrid GPU/central processing unit (CPU) and a full GPU implementation of the SP2 algorithm exceeds that of a CPU-only implementation of the SP2 algorithm and traditional matrix diagonalization when the dimensions of the matrices exceed about 2000 × 2000. Padding schemes for arrays allocated in the GPU memory that optimize the performance of the CUBLAS implementations of the level 3 BLAS DGEMM and SGEMM subroutines for generalized matrix-matrix multiplications are described in detail. The analysis of the relative performance of the hybrid CPU/GPU and full GPU implementations indicates that the transfer of arrays between the GPU and CPU constitutes only a small fraction of the total computation time. The errors measured in the self-consistent density matrices computed using the SP2 algorithm are generally smaller than those measured in matrices computed via diagonalization. Furthermore, the errors in the density matrices computed using the SP2 algorithm do not exhibit any dependence on system size, whereas the errors increase linearly with the number of orbitals when diagonalization is employed.
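
    The SP2 recursion is short enough to sketch: map the Hamiltonian's spectrum into [0, 1] via X = (eps_max*I - H)/(eps_max - eps_min), then repeatedly replace X by X^2 or 2X - X^2, choosing whichever drives trace(X) toward the occupation number, until X converges to the density matrix. A hedged host-side CUDA/cuBLAS sketch (device_trace is an assumed small helper kernel, and convergence testing is omitted):

        #include <cublas_v2.h>

        // SP2 iteration on n x n device buffers X (input/output) and X2
        // (workspace). n_occ is the number of occupied orbitals.
        void sp2_density_matrix(cublasHandle_t h, float* X, float* X2,
                                int n, float n_occ, int max_iter)
        {
            const float one = 1.f, zero = 0.f, two = 2.f, neg1 = -1.f;
            for (int it = 0; it < max_iter; ++it) {
                // X2 = X * X  -- the GEMM that dominates the cost
                cublasSgemm(h, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                            &one, X, n, X, n, &zero, X2, n);
                float tr = device_trace(X, n);   // assumed helper kernel
                if (tr > n_occ) {
                    cublasScopy(h, n*n, X2, 1, X, 1);        // X <- X^2
                } else {
                    cublasSscal(h, n*n, &two, X, 1);         // X <- 2X
                    cublasSaxpy(h, n*n, &neg1, X2, 1, X, 1); // X <- 2X - X^2
                }
            }
        }

    Since each iteration is one GEMM plus vector-scale operations, the padding schemes described above for CUBLAS SGEMM/DGEMM translate directly into overall speedup.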

  4. Simulating data processing for an Advanced Ion Mobility Mass Spectrometer

    SciTech Connect

    Chavarría-Miranda, Daniel; Clowers, Brian H.; Anderson, Gordon A.; Belov, Mikhail E.

    2007-11-03

    We have designed and implemented a Cray XD-1-based simulation of data capture and signal processing for an advanced Ion Mobility mass spectrometer (Hadamard transform Ion Mobility). Our simulation is a hybrid application that uses both an FPGA component and a CPU-based software component to simulate Ion Mobility mass spectrometry data processing. The FPGA component includes data capture and accumulation, as well as a more sophisticated deconvolution algorithm based on a PNNL-developed enhancement to standard Hadamard transform Ion Mobility spectrometry. The software portion is in charge of streaming data to the FPGA and collecting results. We expect the computational and memory addressing logic of the FPGA component to be portable to an instrument-attached FPGA board that can be interfaced with a Hadamard transform Ion Mobility mass spectrometer.
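
    Standard Hadamard deconvolution reduces to a fast Walsh-Hadamard transform (FWHT) applied to the accumulated signal. The CUDA sketch below shows only the generic in-place FWHT; it is not the PNNL-enhanced algorithm or the FPGA logic described above, and the names are illustrative.

```cuda
// Iterative in-place FWHT: one kernel launch per butterfly stage.
// n must be a power of two; the transform is unnormalized.
__global__ void fwhtStage(float *x, int n, int len) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // butterfly index
    if (i >= n / 2) return;
    int lo = (i / len) * 2 * len + (i % len);       // first element of pair
    int hi = lo + len;                              // its partner
    float a = x[lo], b = x[hi];
    x[lo] = a + b;
    x[hi] = a - b;
}

void fwht(float *dX, int n) {                        // n = 2^k, device buffer
    int threads = 256, blocks = (n / 2 + threads - 1) / threads;
    for (int len = 1; len < n; len <<= 1)
        fwhtStage<<<blocks, threads>>>(dX, n, len);
}
```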

  5. Biogeochemical Processes Regulating the Mobility of Uranium in Sediments

    SciTech Connect

    Belli, Keaton M.; Taillefert, Martial

    2016-07-01

    This book chapters reviews the latest knowledge on the biogeochemical processes regulating the mobility of uranium in sediments. It contains both data from the literature and new data from the authors.

  6. Graphics Processing Unit Acceleration and Parallelization of GENESIS for Large-Scale Molecular Dynamics Simulations.

    PubMed

    Jung, Jaewoon; Naurse, Akira; Kobayashi, Chigusa; Sugita, Yuji

    2016-10-11

    The graphics processing unit (GPU) has become a popular computational platform for molecular dynamics (MD) simulations of biomolecules. A significant speedup in the simulations of small- or medium-size systems using only a few computer nodes with a single or multiple GPUs has been reported. Because of GPU memory limitation and slow communication between GPUs on different computer nodes, it is not straightforward to accelerate MD simulations of large biological systems that contain a few million or more atoms on massively parallel supercomputers with GPUs. In this study, we develop a new scheme in our MD software, GENESIS, to reduce the total computational time on such computers. Computationally intensive real-space nonbonded interactions are computed mainly on GPUs in the scheme, while less intensive bonded interactions and communication-intensive reciprocal-space interactions are performed on CPUs. On the basis of the midpoint cell method as a domain decomposition scheme, we introduce the single-particle interaction list for reducing the GPU memory usage. Since total computational time is limited by the reciprocal-space computation, we utilize the RESPA multiple time-step integration and reduce the CPU resting time by assigning a subset of nonbonded interactions to CPUs as well as to GPUs when the reciprocal-space computation is skipped. We validated our GPU implementations in GENESIS on BPTI and a membrane protein, porin, by MD simulations, and on an alanine tripeptide by REMD simulations. Benchmark calculations on the TSUBAME supercomputer showed that an MD simulation of a million-atom system was scalable up to 256 computer nodes with GPUs.
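
    To illustrate the kind of real-space nonbonded work offloaded to the GPU in such a scheme, the kernel below computes naive all-pairs Lennard-Jones forces. It is only a sketch under simplifying assumptions; GENESIS itself uses the midpoint cell method and interaction lists, which are not reproduced here.

```cuda
// Naive all-pairs Lennard-Jones forces, one thread per atom.
// pos packs (x, y, z) in a float4; eps and sigma2 (= sigma^2) are LJ
// parameters; pairs beyond cutoff2 are skipped.
__global__ void ljForces(const float4 *pos, float4 *force, int n,
                         float eps, float sigma2, float cutoff2) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float4 pi = pos[i];
    float fx = 0.f, fy = 0.f, fz = 0.f;
    for (int j = 0; j < n; ++j) {
        if (j == i) continue;
        float dx = pi.x - pos[j].x, dy = pi.y - pos[j].y, dz = pi.z - pos[j].z;
        float r2 = dx*dx + dy*dy + dz*dz;
        if (r2 > cutoff2) continue;
        float s2 = sigma2 / r2, s6 = s2 * s2 * s2;
        float f = 24.f * eps * s6 * (2.f * s6 - 1.f) / r2;  // (-dU/dr)/r
        fx += f * dx; fy += f * dy; fz += f * dz;
    }
    force[i] = make_float4(fx, fy, fz, 0.f);
}
```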

  7. Porting ONETEP to graphical processing unit-based coprocessors. 1. FFT box operations.

    PubMed

    Wilkinson, Karl; Skylaris, Chris-Kriton

    2013-10-30

    We present the first graphical processing unit (GPU) coprocessor-enabled version of the Order-N Electronic Total Energy Package (ONETEP) code for linear-scaling first principles quantum mechanical calculations on materials. This work focuses on porting to the GPU the parts of the code that involve atom-localized fast Fourier transform (FFT) operations. These are among the most computationally intensive parts of the code and are used in core algorithms such as the calculation of the charge density, the local potential integrals, the kinetic energy integrals, and the nonorthogonal generalized Wannier function gradient. We have found that direct porting of the isolated FFT operations did not provide any benefit. Instead, it was necessary to tailor the port to each of the aforementioned algorithms to optimize data transfer to and from the GPU. A detailed discussion of the methods used and tests of the resulting performance are presented, which show that individual steps in the relevant algorithms are accelerated by a significant amount. However, the transfer of data between the GPU and host machine is a significant bottleneck in the reported version of the code. In addition, an initial investigation into a dynamic precision scheme for the ONETEP energy calculation has been performed to take advantage of the enhanced single precision capabilities of GPUs. The methods used here result in no disruption to the existing code base. Furthermore, as the developments reported here concern the core algorithms, they will benefit the full range of ONETEP functionality. Our use of a directive-based programming model ensures portability to other forms of coprocessors and will allow this work to form the basis of future developments to the code designed to support emerging high-performance computing platforms.
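
    As a generic sketch of what batching many small atom-localized transforms can look like on a GPU, the fragment below creates a batched 3-D plan with cuFFT so that one call transforms all boxes at once, amortizing launch overhead. ONETEP's actual port is tailored to each algorithm and its transfer patterns, none of which is reproduced here; all names are illustrative.

```cuda
// Batched 3-D FFTs over nBoxes contiguous "FFT boxes" with cuFFT.
#include <cufft.h>

cufftHandle makeBoxPlan(int nx, int ny, int nz, int nBoxes) {
    cufftHandle plan;
    int n[3] = {nx, ny, nz};
    // Contiguous boxes: unit stride, distance nx*ny*nz between boxes.
    cufftPlanMany(&plan, 3, n,
                  nullptr, 1, nx * ny * nz,   // input layout
                  nullptr, 1, nx * ny * nz,   // output layout
                  CUFFT_Z2Z, nBoxes);
    return plan;
}

// Usage: a single call transforms every box in the batch, in place:
// cufftExecZ2Z(plan, d_boxes, d_boxes, CUFFT_FORWARD);
```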

  8. Fast analysis of molecular dynamics trajectories with graphics processing units-Radial distribution function histogramming

    SciTech Connect

    Levine, Benjamin G.; Stone, John E.; Kohlmeyer, Axel

    2011-05-01

    The calculation of radial distribution functions (RDFs) from molecular dynamics trajectory data is a common and computationally expensive analysis task. The rate limiting step in the calculation of the RDF is building a histogram of the distance between atom pairs in each trajectory frame. Here we present an implementation of this histogramming scheme for multiple graphics processing units (GPUs). The algorithm features a tiling scheme to maximize the reuse of data at the fastest levels of the GPU's memory hierarchy and dynamic load balancing to allow high performance on heterogeneous configurations of GPUs. Several versions of the RDF algorithm are presented, utilizing the specific hardware features found on different generations of GPUs. We take advantage of larger shared memory and atomic memory operations available on state-of-the-art GPUs to accelerate the code significantly. The use of atomic memory operations allows the fast, limited-capacity on-chip memory to be used much more efficiently, resulting in a fivefold increase in performance compared to the version of the algorithm without atomic operations. The ultimate version of the algorithm running in parallel on four NVIDIA GeForce GTX 480 (Fermi) GPUs was found to be 92 times faster than a multithreaded implementation running on an Intel Xeon 5550 CPU. On this multi-GPU hardware, the RDF between two selections of 1,000,000 atoms each can be calculated in 26.9 s per frame. The multi-GPU RDF algorithms described here are implemented in VMD, a widely used and freely available software package for molecular dynamics visualization and analysis.
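
    The shared-memory atomic idea described above can be shown in a few lines: each thread block accumulates a private histogram in fast on-chip memory and merges it into the global histogram once. The sketch below is a bare-bones version without the tiling and multi-GPU load balancing of the VMD kernels; the bin count and names are illustrative.

```cuda
// Pair-distance histogram with block-private shared-memory bins.
#define NBINS 1024

__global__ void rdfHistogram(const float4 *selA, int nA,
                             const float4 *selB, int nB,
                             unsigned int *hist, float binWidth) {
    __shared__ unsigned int local[NBINS];
    for (int b = threadIdx.x; b < NBINS; b += blockDim.x) local[b] = 0;
    __syncthreads();

    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < nA;
         i += gridDim.x * blockDim.x) {
        float4 a = selA[i];
        for (int j = 0; j < nB; ++j) {
            float dx = a.x - selB[j].x, dy = a.y - selB[j].y,
                  dz = a.z - selB[j].z;
            int bin = (int)(sqrtf(dx*dx + dy*dy + dz*dz) / binWidth);
            if (bin < NBINS) atomicAdd(&local[bin], 1u);  // on-chip atomic
        }
    }
    __syncthreads();
    for (int b = threadIdx.x; b < NBINS; b += blockDim.x)
        atomicAdd(&hist[b], local[b]);                    // one global merge
}
```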

  9. FLOCKING-BASED DOCUMENT CLUSTERING ON THE GRAPHICS PROCESSING UNIT [Book Chapter

    SciTech Connect

    Charles, J S; Patton, R M; Potok, T E; Cui, X

    2008-01-01

    Analyzing and grouping documents by content is a complex problem. One explored method of solving this problem borrows from nature, imitating the flocking behavior of birds. Each bird represents a single document and flies toward other documents that are similar to it. One limitation of this method of document clustering is its complexity O(n²). As the number of documents grows, it becomes increasingly difficult to receive results in a reasonable amount of time. However, flocking behavior, along with most naturally inspired algorithms such as ant colony optimization and particle swarm optimization, are highly parallel and have experienced improved performance on expensive cluster computers. In the last few years, the graphics processing unit (GPU) has received attention for its ability to solve highly-parallel and semi-parallel problems much faster than the traditional sequential processor. Some applications see a huge increase in performance on this new platform. The cost of these high-performance devices is also marginal when compared with the price of cluster machines. In this paper, we have conducted research to exploit this architecture and apply its strengths to the document flocking problem. Our results highlight the potential benefit the GPU brings to all naturally inspired algorithms. Using the CUDA platform from NVIDIA®, we developed a document flocking implementation to be run on the NVIDIA® GEFORCE 8800. Additionally, we developed a similar but sequential implementation of the same algorithm to be run on a desktop CPU. We tested the performance of each on groups of news articles ranging in size from 200 to 3,000 documents. The results of these tests were very significant. Performance gains ranged from three to nearly five times improvement of the GPU over the CPU implementation. This dramatic improvement in runtime makes the GPU a potentially revolutionary platform for document clustering algorithms.
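
    A minimal sketch of one data-parallel flocking step is given below: one thread per document-bird, each steering toward nearby similar documents and away from dissimilar ones. The similarity matrix, parameters, and update rule are invented for illustration and do not reproduce the paper's implementation.

```cuda
// Toy document-flocking step; O(n^2) like the method described above,
// but each bird's update is independent and maps to one thread.
__global__ void flockStep(float2 *pos, float2 *vel, const float *simMatrix,
                          int n, float simThreshold, float dt) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float2 steer = make_float2(0.f, 0.f);
    for (int j = 0; j < n; ++j) {
        if (j == i) continue;
        float dx = pos[j].x - pos[i].x, dy = pos[j].y - pos[i].y;
        float d2 = dx*dx + dy*dy + 1e-6f;
        if (d2 > 100.f) continue;                   // only nearby birds matter
        float s = simMatrix[i * n + j];             // precomputed similarity
        float w = (s > simThreshold) ? 1.f : -1.f;  // attract or repel
        steer.x += w * dx / d2;
        steer.y += w * dy / d2;
    }
    vel[i].x += dt * steer.x;  vel[i].y += dt * steer.y;
    pos[i].x += dt * vel[i].x; pos[i].y += dt * vel[i].y;
}
```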

  10. Fast Analysis of Molecular Dynamics Trajectories with Graphics Processing Units-Radial Distribution Function Histogramming.

    PubMed

    Levine, Benjamin G; Stone, John E; Kohlmeyer, Axel

    2011-05-01

    The calculation of radial distribution functions (RDFs) from molecular dynamics trajectory data is a common and computationally expensive analysis task. The rate limiting step in the calculation of the RDF is building a histogram of the distance between atom pairs in each trajectory frame. Here we present an implementation of this histogramming scheme for multiple graphics processing units (GPUs). The algorithm features a tiling scheme to maximize the reuse of data at the fastest levels of the GPU's memory hierarchy and dynamic load balancing to allow high performance on heterogeneous configurations of GPUs. Several versions of the RDF algorithm are presented, utilizing the specific hardware features found on different generations of GPUs. We take advantage of larger shared memory and atomic memory operations available on state-of-the-art GPUs to accelerate the code significantly. The use of atomic memory operations allows the fast, limited-capacity on-chip memory to be used much more efficiently, resulting in a fivefold increase in performance compared to the version of the algorithm without atomic operations. The ultimate version of the algorithm running in parallel on four NVIDIA GeForce GTX 480 (Fermi) GPUs was found to be 92 times faster than a multithreaded implementation running on an Intel Xeon 5550 CPU. On this multi-GPU hardware, the RDF between two selections of 1,000,000 atoms each can be calculated in 26.9 seconds per frame. The multi-GPU RDF algorithms described here are implemented in VMD, a widely used and freely available software package for molecular dynamics visualization and analysis.

  11. Real-time Graphics Processing Unit Based Fourier Domain Optical Coherence Tomography and Surgical Applications

    NASA Astrophysics Data System (ADS)

    Zhang, Kang

    2011-12-01

    In this dissertation, real-time Fourier domain optical coherence tomography (FD-OCT) capable of multi-dimensional micrometer-resolution imaging, targeted specifically at microsurgical intervention applications, was developed and studied. As a part of this work, several ultra-high speed real-time FD-OCT imaging and sensing systems were proposed and developed. A real-time 4D (3D+time) OCT system platform using the graphics processing unit (GPU) to accelerate OCT signal processing, image reconstruction, visualization, and volume rendering was developed. Several GPU-based algorithms such as non-uniform fast Fourier transform (NUFFT), numerical dispersion compensation, and multi-GPU implementation were developed to improve the impulse response, SNR roll-off and stability of the system. Full-range complex-conjugate-free FD-OCT was also implemented on the GPU architecture to achieve doubled image range and improved SNR. These technologies overcome the image reconstruction and visualization bottlenecks that widely exist in current ultra-high speed FD-OCT systems and open the way to interventional OCT imaging for applications in guided microsurgery. A hand-held common-path optical coherence tomography (CP-OCT) distance-sensor based microsurgical tool was developed and validated. Through real-time signal processing, edge detection and feedback control, the tool was shown to be capable of tracking the target surface and compensating for motion. A micro-incision test on a phantom was performed using the CP-OCT-sensor integrated hand-held tool, which showed an incision error of less than +/-5 microns, compared to errors of >100 microns for free-hand incision. The CP-OCT distance sensor has also been utilized to enhance the accuracy and safety of optical nerve stimulation. Finally, several experiments were conducted to validate the system for surgical applications. One of them involved 4D OCT guided micro-manipulation using a phantom. Multiple volume renderings of one 3D data set were

  12. Large eddy simulations of turbulent flows on graphics processing units: Application to film-cooling flows

    NASA Astrophysics Data System (ADS)

    Shinn, Aaron F.

    Computational Fluid Dynamics (CFD) simulations can be very computationally expensive, especially for Large Eddy Simulations (LES) and Direct Numerical Simulations (DNS) of turbulent flows. In LES the large, energy containing eddies are resolved by the computational mesh, but the smaller (sub-grid) scales are modeled. In DNS, all scales of turbulence are resolved, including the smallest dissipative (Kolmogorov) scales. Clusters of CPUs have been the standard approach for such simulations, but an emerging approach is the use of Graphics Processing Units (GPUs), which deliver impressive computing performance compared to CPUs. Recently there has been great interest in the scientific computing community to use GPUs for general-purpose computation (such as the numerical solution of PDEs) rather than graphics rendering. To explore the use of GPUs for CFD simulations, an incompressible Navier-Stokes solver was developed for a GPU. This solver is capable of simulating unsteady laminar flows or performing a LES or DNS of turbulent flows. The Navier-Stokes equations are solved via a fractional-step method and are spatially discretized using the finite volume method on a Cartesian mesh. An immersed boundary method based on a ghost cell treatment was developed to handle flow past complex geometries. The implementation of these numerical methods had to suit the architecture of the GPU, which is designed for massive multithreading. The details of this implementation will be described, along with strategies for performance optimization. Validation of the GPU-based solver was performed for fundamental bench-mark problems, and a performance assessment indicated that the solver was over an order-of-magnitude faster compared to a CPU. The GPU-based Navier-Stokes solver was used to study film-cooling flows via Large Eddy Simulation. In modern gas turbine engines, the film-cooling method is used to protect turbine blades from hot combustion gases. Therefore, understanding the physics of
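
    Within a fractional-step method, much of the cost sits in the pressure Poisson solve, which maps naturally onto one-thread-per-cell kernels. The sketch below shows a single Jacobi sweep on a uniform 2-D grid; it is a generic kernel under simplifying assumptions, not the dissertation's solver or its immersed boundary treatment.

```cuda
// One Jacobi sweep of the pressure Poisson equation, laplacian(p) = div,
// on a uniform nx-by-ny Cartesian grid with spacing dx (dx2 = dx*dx).
// Interior points only; boundaries are assumed handled elsewhere.
__global__ void jacobiPressure(const float *p, float *pNew, const float *div,
                               int nx, int ny, float dx2) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int j = blockIdx.y * blockDim.y + threadIdx.y;
    if (i <= 0 || j <= 0 || i >= nx - 1 || j >= ny - 1) return;
    int id = j * nx + i;
    pNew[id] = 0.25f * (p[id - 1] + p[id + 1] +
                        p[id - nx] + p[id + nx] - dx2 * div[id]);
}
```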

  13. Creating Interactive Graphical Overlays in the Advanced Weather Interactive Processing System (AWIPS) Using Shapefiles and DGM Files

    NASA Technical Reports Server (NTRS)

    Barrett, Joe H., III; Lafosse, Richard; Hood, Doris; Hoeth, Brian

    2007-01-01

    Graphical overlays can be created in real-time in the Advanced Weather Interactive Processing System (AWIPS) using shapefiles or DARE Graphics Metafile (DGM) files. This presentation describes how to create graphical overlays on-the-fly for AWIPS, by using two examples of AWIPS applications that were created by the Applied Meteorology Unit (AMU). The first example is the Anvil Threat Corridor Forecast Tool, which produces a shapefile that depicts a graphical threat corridor of the forecast movement of thunderstorm anvil clouds, based on the observed or forecast upper-level winds. This tool is used by the Spaceflight Meteorology Group (SMG) and 45th Weather Squadron (45 WS) to analyze the threat of natural or space vehicle-triggered lightning over a location. The second example is a launch and landing trajectory tool that produces a DGM file that plots the ground track of space vehicles during launch or landing. The trajectory tool can be used by SMG and the 45 WS forecasters to analyze weather radar imagery along a launch or landing trajectory. Advantages of both file types will be listed.

  14. Graphic Arts: The Press and Finishing Processes. Fourth Edition. Teacher Edition [and] Student Edition.

    ERIC Educational Resources Information Center

    Farajollahi, Karim; Ogle, Gary; Reed, William; Woodcock, Kenneth

    Part of a series of instructional materials for courses on graphic communication, this packet contains both teacher and student materials for seven units that cover the following topics: (1) offset press systems; (2) offset inks and dampening chemistry; (3) offset press operating procedures; (4) preventive maintenance and troubleshooting; (5) job…

  15. CAI and Imagery: Interactive Computer Graphics for Teaching About Invisible Process. Technical Report No. 74.

    ERIC Educational Resources Information Center

    Rigney, Joseph W.; Lutz, Kathy A.

    In preparation for a study of using interactive computer graphics for training, some current theorizing about internal and external, digital and analog representational systems is reviewed. The possibility is considered that there are two overlapping, internal, analog representational systems, one for organismic states and the other for external…

  16. Efficient particle-in-cell simulation of auroral plasma phenomena using a CUDA enabled graphics processing unit

    NASA Astrophysics Data System (ADS)

    Sewell, Stephen

    This thesis introduces a software framework that effectively utilizes low-cost commercially available Graphic Processing Units (GPUs) to simulate complex scientific plasma phenomena that are modeled using the Particle-In-Cell (PIC) paradigm. The software framework that was developed conforms to the Compute Unified Device Architecture (CUDA), a standard for general purpose graphic processing that was introduced by NVIDIA Corporation. This framework has been verified for correctness and applied to advance the state of understanding of the electromagnetic aspects of the development of the Aurora Borealis and Aurora Australis. For each phase of the PIC methodology, this research has identified one or more methods to exploit the problem's natural parallelism and effectively map it for execution on the graphic processing unit and its host processor. The sources of overhead that can reduce the effectiveness of parallelization for each of these methods have also been identified. One of the novel aspects of this research was the utilization of particle sorting during the grid interpolation phase. The final representation resulted in simulations that executed about 38 times faster than simulations that were run on a single-core general-purpose processing system. The scalability of this framework to larger problem sizes and future generation systems has also been investigated.
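
    The grid-interpolation phase mentioned above is where PIC codes scatter particle charge to grid points, and where write conflicts arise. The sketch below shows a simplified 1-D cloud-in-cell deposition using plain atomics, standing in for the sorted, conflict-reduced scheme developed in the thesis; all names are illustrative.

```cuda
// 1-D cloud-in-cell charge deposition, one thread per particle.
// xp holds particle positions, dx is the cell size, qWeight the charge
// carried by each macro-particle; rho accumulates charge density.
__global__ void deposit(const float *xp, float qWeight, int nP,
                        float *rho, float dx, int nCells) {
    int p = blockIdx.x * blockDim.x + threadIdx.x;
    if (p >= nP) return;
    float xi = xp[p] / dx;            // position in cell units
    int   c  = (int)xi;               // left grid point
    float w  = xi - c;                // linear weight toward the right point
    if (c >= 0 && c + 1 < nCells) {
        atomicAdd(&rho[c],     qWeight * (1.f - w));
        atomicAdd(&rho[c + 1], qWeight * w);
    }
}
```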

  17. Computer Graphics.

    ERIC Educational Resources Information Center

    Halpern, Jeanne W.

    1970-01-01

    Computer graphics have been called the most exciting development in computer technology. At the University of Michigan, three kinds of graphics output equipment are now being used: symbolic printers, line plotters or drafting devices, and cathode-ray tubes (CRT). Six examples are given that demonstrate the range of graphics use at the University.…

  18. Application of computer generated color graphic techniques to the processing and display of three dimensional fluid dynamic data

    NASA Technical Reports Server (NTRS)

    Anderson, B. H.; Putt, C. W.; Giamati, C. C.

    1981-01-01

    Color coding techniques used in the processing of remote sensing imagery were adapted and applied to the fluid dynamics problems associated with turbofan mixer nozzles. The computer generated color graphics were found to be useful in reconstructing the measured flow field from low resolution experimental data to give more physical meaning to this information and in scanning and interpreting the large volume of computer generated data from the three dimensional viscous computer code used in the analysis.

  19. Discontinuous Galerkin method with Gaussian artificial viscosity on graphical processing units for nonlinear acoustics

    NASA Astrophysics Data System (ADS)

    Tripathi, Bharat B.; Marchiano, Régis; Baskar, Sambandam; Coulouvrat, François

    2015-10-01

    Propagation of acoustical shock waves in complex geometry is a topic of interest in the field of nonlinear acoustics. For instance, simulation of Buzz Saw Noise requires the treatment of shock waves generated by the turbofan in aeroplane engines with complex geometries and wall liners. Nevertheless, from a numerical point of view it remains a challenge. The two main hurdles are to take into account the complex geometry of the domain and to deal with the spurious oscillations (Gibbs phenomenon) near the discontinuities. In this work, first we derive the conservative hyperbolic system of nonlinear acoustics (up to quadratic nonlinear terms) using the fundamental equations of fluid dynamics. Then, we propose to adapt the classical nodal discontinuous Galerkin method to develop a high fidelity solver for nonlinear acoustics. The discontinuous Galerkin method is a hybrid of finite element and finite volume methods and is very versatile in handling complex geometry. In order to obtain better performance, the method is parallelized on Graphical Processing Units. Like other numerical methods, the discontinuous Galerkin method suffers from the Gibbs phenomenon near the shock, which is a numerical artifact. Among the various ways to manage these spurious oscillations, we choose the method of parabolic regularization. Although the introduction of artificial viscosity into the system is a popular way of managing shocks, we propose a new approach of introducing smooth artificial viscosity locally in each element, wherever needed. Firstly, a shock sensor using the linear coefficients of the spectral solution is used to locate the position of the discontinuities. Then, a viscosity coefficient depending on the shock sensor is introduced into the hyperbolic system of equations, only in the elements near the shock. The viscosity is applied as a two-dimensional Gaussian patch with its shape parameters depending on the element dimensions, referred here as Element

  20. Compressed sensing reconstruction for whole-heart imaging with 3D radial trajectories: a graphics processing unit implementation.

    PubMed

    Nam, Seunghoon; Akçakaya, Mehmet; Basha, Tamer; Stehning, Christian; Manning, Warren J; Tarokh, Vahid; Nezafat, Reza

    2013-01-01

    A disadvantage of three-dimensional (3D) isotropic acquisition in whole-heart coronary MRI is the prolonged data acquisition time. Isotropic 3D radial trajectories allow undersampling of k-space data in all three spatial dimensions, enabling accelerated acquisition of the volumetric data. Compressed sensing (CS) reconstruction can provide further acceleration in the acquisition by removing the incoherent artifacts due to undersampling and improving the image quality. However, the heavy computational overhead of the CS reconstruction has been a limiting factor for its application. In this article, a parallelized implementation of an iterative CS reconstruction method for 3D radial acquisitions using a commercial graphics processing unit is presented. The execution time of the graphics processing unit-implemented CS reconstruction was compared with that of the C++ implementation, and the efficacy of the undersampled 3D radial acquisition with CS reconstruction was investigated in both phantom and whole-heart coronary data sets. Subsequently, the efficacy of CS in suppressing streaking artifacts in 3D whole-heart coronary MRI with 3D radial imaging and its convergence properties were studied. The CS reconstruction provides improved image quality (in terms of vessel sharpness and suppression of noise-like artifacts) compared with the conventional 3D gridding algorithm, and the graphics processing unit implementation greatly reduces the execution time of CS reconstruction yielding 34-54 times speed-up compared with C++ implementation.
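
    Iterative CS reconstructions typically alternate a data-consistency step with a sparsity-promoting proximal step, and the latter is embarrassingly parallel. The kernel below sketches element-wise soft-thresholding on complex image data as a generic example of that step; the paper's exact reconstruction formulation is not reproduced here.

```cuda
// Element-wise soft-thresholding (proximal operator of the l1 norm) on
// complex-valued image data, a typical sparsity step inside CS loops.
#include <cuComplex.h>

__global__ void softThreshold(cuFloatComplex *x, int n, float lambda) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float mag = cuCabsf(x[i]);
    float scale = (mag > lambda) ? (mag - lambda) / mag : 0.f;
    x[i] = make_cuFloatComplex(scale * cuCrealf(x[i]),
                               scale * cuCimagf(x[i]));
}
```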

  1. Accelerating quantum chemistry calculations with graphical processing units - toward in high-density (HD) silico drug discovery.

    PubMed

    Hagiwara, Yohsuke; Ohno, Kazuki; Orita, Masaya; Koga, Ryota; Endo, Toshio; Akiyama, Yutaka; Sekijima, Masakazu

    2013-09-01

    The growing power of central processing units (CPU) has made it possible to use quantum mechanical (QM) calculations for in silico drug discovery. However, limited CPU power makes large-scale in silico screening such as virtual screening with QM calculations a challenge. Recently, general-purpose computing on graphics processing units (GPGPU) has offered an alternative, because of its significantly accelerated computational time over CPU. Here, we review a GPGPU-based supercomputer, TSUBAME2.0, and its promise for next generation in silico drug discovery, in high-density (HD) silico drug discovery.

  2. Ab initio nonadiabatic dynamics of multichromophore complexes: a scalable graphical-processing-unit-accelerated exciton framework.

    PubMed

    Sisto, Aaron; Glowacki, David R; Martinez, Todd J

    2014-09-16

    ("fragmenting") a molecular system and then stitching it back together. In this Account, we address both of these problems, the first by using graphical processing units (GPUs) and electronic structure algorithms tuned for these architectures and the second by using an exciton model as a framework in which to stitch together the solutions of the smaller problems. The multitiered parallel framework outlined here is aimed at nonadiabatic dynamics simulations on large supramolecular multichromophoric complexes in full atomistic detail. In this framework, the lowest tier of parallelism involves GPU-accelerated electronic structure theory calculations, for which we summarize recent progress in parallelizing the computation and use of electron repulsion integrals (ERIs), which are the major computational bottleneck in both density functional theory (DFT) and time-dependent density functional theory (TDDFT). The topmost tier of parallelism relies on a distributed memory framework, in which we build an exciton model that couples chromophoric units. Combining these multiple levels of parallelism allows access to ground and excited state dynamics for large multichromophoric assemblies. The parallel excitonic framework is in good agreement with much more computationally demanding TDDFT calculations of the full assembly.

  3. FamSeq: a variant calling program for family-based sequencing data using graphics processing units.

    PubMed

    Peng, Gang; Fan, Yu; Wang, Wenyi

    2014-10-01

    Various algorithms have been developed for variant calling using next-generation sequencing data, and various methods have been applied to reduce the associated false positive and false negative rates. Few variant calling programs, however, utilize the pedigree information when the family-based sequencing data are available. Here, we present a program, FamSeq, which reduces both false positive and false negative rates by incorporating the pedigree information from the Mendelian genetic model into variant calling. To accommodate variations in data complexity, FamSeq consists of four distinct implementations of the Mendelian genetic model: the Bayesian network algorithm, a graphics processing unit version of the Bayesian network algorithm, the Elston-Stewart algorithm and the Markov chain Monte Carlo algorithm. To make the software efficient and applicable to large families, we parallelized the Bayesian network algorithm that copes with pedigrees with inbreeding loops without losing calculation precision on an NVIDIA graphics processing unit. In order to compare the difference in the four methods, we applied FamSeq to pedigree sequencing data with family sizes that varied from 7 to 12. When there is no inbreeding loop in the pedigree, the Elston-Stewart algorithm gives analytical results in a short time. If there are inbreeding loops in the pedigree, we recommend the Bayesian network method, which provides exact answers. To improve the computing speed of the Bayesian network method, we parallelized the computation on a graphics processing unit. This allowed the Bayesian network method to process the whole genome sequencing data of a family of 12 individuals within two days, which was a 10-fold time reduction compared to the time required for this computation on a central processing unit.

  4. Interpretation of Medical Imaging Data with a Mobile Application: A Mobile Digital Imaging Processing Environment

    PubMed Central

    Lin, Meng Kuan; Nicolini, Oliver; Waxenegger, Harald; Galloway, Graham J.; Ullmann, Jeremy F. P.; Janke, Andrew L.

    2013-01-01

    Digital Imaging Processing (DIP) requires data extraction and output from a visualization tool to be consistent. Data handling and transmission between the server and a user is a systematic process in service interpretation. The use of integrated medical services for management and viewing of imaging data in combination with a mobile visualization tool can be greatly facilitated by data analysis and interpretation. This paper presents an integrated mobile application and DIP service, called M-DIP. The objective of the system is to (1) automate the direct data tiling, conversion, pre-tiling of brain images from Medical Imaging NetCDF (MINC), Neuroimaging Informatics Technology Initiative (NIFTI) to RAW formats; (2) speed up querying of imaging measurement; and (3) display high-level of images with three dimensions in real world coordinates. In addition, M-DIP provides the ability to work on a mobile or tablet device without any software installation using web-based protocols. M-DIP implements three levels of architecture with a relational middle-layer database, a stand-alone DIP server, and a mobile application logic middle level realizing user interpretation for direct querying and communication. This imaging software has the ability to display biological imaging data at multiple zoom levels and to increase its quality to meet users’ expectations. Interpretation of bioimaging data is facilitated by an interface analogous to online mapping services using real world coordinate browsing. This allows mobile devices to display multiple datasets simultaneously from a remote site. M-DIP can be used as a measurement repository that can be accessed by any network environment, such as a portable mobile or tablet device. In addition, this system and combination with mobile applications are establishing a virtualization tool in the neuroinformatics field to speed interpretation services. PMID:23847587

  5. NATURAL graphics

    NASA Technical Reports Server (NTRS)

    Jones, R. H.

    1984-01-01

    The hardware and software developments in computer graphics are discussed. Major topics include: system capabilities, hardware design, system compatibility, and software interface with the data base management system.

  6. High mobility, printable, and solution-processed graphene electronics.

    PubMed

    Wang, Shuai; Ang, Priscilla Kailian; Wang, Ziqian; Tang, Ai Ling Lena; Thong, John T L; Loh, Kian Ping

    2010-01-01

    The ability to print graphene sheets onto large scale, flexible substrates holds promise for large scale, transparent electronics on flexible substrates. Solution processable graphene sheets derived from graphite can form stable dispersions in solutions and are amenable to bulk scale processing and ink jet printing. However, the electrical conductivity and carrier mobilities of this material are usually reported to be orders of magnitude poorer than those of the mechanically cleaved counterpart due to its higher density of defects, which restricts its use in electronics. Here, we show that by optimizing several key factors in processing, we are able to fabricate high mobility graphene films derived from large sized graphene oxide sheets, which paves the way for all-carbon post-CMOS electronics. All-carbon source-drain channel electronics fabricated from such films exhibit significantly improved transport characteristics, with carrier mobilities of 365 cm²/(V·s) for holes and 281 cm²/(V·s) for electrons, measured in air at room temperature. In particular, intrinsic mobility as high as 5000 cm²/(V·s) can be obtained from such solution-processed graphene films when ionic screening is applied to nullify the Coulombic scattering by charged impurities.

  7. Calculation method for computer-generated holograms with cylindrical basic object light by using a graphics processing unit.

    PubMed

    Sakata, Hironobu; Hosoyachi, Kouhei; Yang, Chan-Young; Sakamoto, Yuji

    2011-12-01

    It takes an enormous amount of time to calculate a computer-generated hologram (CGH). A fast calculation method for a CGH using precalculated object light has been proposed in which the light waves of an arbitrary object are calculated using transform calculations of the precalculated object light. However, this method requires a huge amount of memory. This paper proposes the use of a method that uses a cylindrical basic object light to reduce the memory requirement. Furthermore, it is accelerated by using a graphics processing unit (GPU). Experimental results show that the calculation speed on a GPU is about 65 times faster than that on a CPU.

  8. An atomic orbital-based formulation of the complete active space self-consistent field method on graphical processing units

    SciTech Connect

    Hohenstein, Edward G.; Luehr, Nathan; Ufimtsev, Ivan S.; Martínez, Todd J.

    2015-06-14

    Despite its importance, state-of-the-art algorithms for performing complete active space self-consistent field (CASSCF) computations have lagged far behind those for single reference methods. We develop an algorithm for the CASSCF orbital optimization that uses sparsity in the atomic orbital (AO) basis set to increase the applicability of CASSCF. Our implementation of this algorithm uses graphical processing units (GPUs) and has allowed us to perform CASSCF computations on molecular systems containing more than one thousand atoms. Additionally, we have implemented analytic gradients of the CASSCF energy; the gradients also benefit from GPU acceleration as well as sparsity in the AO basis.

  9. Visualization of complex processes in lipid systems using computer simulations and molecular graphics.

    PubMed

    Telenius, Jelena; Vattulainen, Ilpo; Monticelli, Luca

    2009-01-01

    Computer simulation has become an increasingly popular tool in the study of lipid membranes, complementing experimental techniques by providing information on structure and dynamics at high spatial and temporal resolution. Molecular visualization is the most powerful way to represent the results of molecular simulations, and can be used to illustrate complex transformations of lipid aggregates more easily and more effectively than written text. In this chapter, we review some basic aspects of simulation methodologies commonly employed in the study of lipid membranes and we describe a few examples of complex phenomena that have been recently investigated using molecular simulations. We then explain how molecular visualization provides added value to computational work in the field of biological membranes, and we conclude by listing a few molecular graphics packages widely used in scientific publications.

  10. Business Graphics

    NASA Technical Reports Server (NTRS)

    1987-01-01

    Genigraphics Corporation's Masterpiece 8770 FilmRecorder is an advanced high resolution system designed to improve and expand a company's in-house graphics production. GRAFTIME/software package was designed to allow office personnel with minimal training to produce professional level graphics for business communications and presentations. Products are no longer being manufactured.

  11. Graphic Storytelling

    ERIC Educational Resources Information Center

    Thompson, John

    2009-01-01

    Graphic storytelling is a medium that allows students to make and share stories, while developing their art communication skills. American comics today are more varied in genre, approach, and audience than ever before. When considering the impact of Japanese manga on the youth, graphic storytelling emerges as a powerful player in pop culture. In…

  12. Business process analysis of a foodborne outbreak investigation mobile system

    NASA Astrophysics Data System (ADS)

    Nowicki, T.; Waszkowski, R.; Saniuk, A.

    2016-08-01

    Epidemiological investigation during an outbreak of food-borne disease requires a number of activities carried out in the field. This results in restricted access to current data about the epidemic and reduces the possibility of transferring information from the field to headquarters. This problem can be solved by using an appropriate system of mobile devices. The purpose of this paper is to present an IT solution based on a central repository for epidemiological investigations and mobile devices designed for use in the field. Based on such a solution, business processes can be properly rebuilt in a way that achieves better results in the activities of health inspectors.

  13. Graphics processing unit accelerated intensity-based optical coherence tomography angiography using differential frames with real-time motion correction.

    PubMed

    Watanabe, Yuuki; Takahashi, Yuhei; Numazawa, Hiroshi

    2014-02-01

    We demonstrate intensity-based optical coherence tomography (OCT) angiography using the squared difference of two sequential frames with bulk-tissue-motion (BTM) correction. This motion correction was performed by minimization of the sum of the pixel values using axial- and lateral-pixel-shifted structural OCT images. We extract the BTM-corrected image from a total of 25 calculated OCT angiographic images. Image processing was accelerated by a graphics processing unit (GPU) with many stream processors to optimize the parallel processing procedure. The GPU processing rate was faster than that of a line scan camera (46.9 kHz). Our OCT system provides the means of displaying structural OCT images and BTM-corrected OCT angiographic images in real time.
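
    The core of the intensity-based angiography computation described above is an element-wise squared difference of two sequential structural frames. A minimal CUDA version is sketched below, under the assumption that the bulk-tissue-motion shift search has already been applied; the names are illustrative.

```cuda
// Squared difference of two sequential (BTM-corrected) structural frames:
// static tissue largely cancels, while flowing blood produces signal.
__global__ void frameDiffSq(const float *frameA, const float *frameB,
                            float *angio, int nPix) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= nPix) return;
    float d = frameA[i] - frameB[i];
    angio[i] = d * d;
}
```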

  14. A real-time GNSS-R system based on software-defined radio and graphics processing units

    NASA Astrophysics Data System (ADS)

    Hobiger, Thomas; Amagai, Jun; Aida, Masanori; Narita, Hideki

    2012-04-01

    Reflected signals of the Global Navigation Satellite System (GNSS) from the sea or land surface can be utilized to deduce and monitor physical and geophysical parameters of the reflecting area. Unlike most other remote sensing techniques, GNSS-Reflectometry (GNSS-R) operates as a passive radar that takes advantage of the increasing number of navigation satellites that broadcast their L-band signals. So far, most GNSS-R receiver architectures have been based on dedicated hardware solutions. Software-defined radio (SDR) technology has advanced in recent years and enabled signal processing in real-time, which makes it an ideal candidate for the realization of a flexible GNSS-R system. Additionally, modern commodity graphic cards, which offer massive parallel computing performance, allow the whole signal processing chain to be handled without interfering with the PC's CPU. Thus, this paper describes a GNSS-R system which has been developed on the principles of software-defined radio supported by General Purpose Graphics Processing Units (GPGPUs), and presents results from initial field tests which confirm the anticipated capability of the system.

  15. [Dynamic Pulse Signal Processing and Analyzing in Mobile System].

    PubMed

    Chou, Yongxin; Zhang, Aihua; Ou, Jiqing; Qi, Yusheng

    2015-09-01

    In order to derive dynamic pulse rate variability (DPRV) signal from dynamic pulse signal in real time, a method for extracting DPRV signal was proposed and a portable mobile monitoring system was designed. The system consists of a front end for collecting and wireless sending pulse signal and a mobile terminal. The proposed method is employed to extract DPRV from dynamic pulse signal in mobile terminal, and the DPRV signal is analyzed both in the time domain and the frequency domain and also with non-linear method in real time. The results show that the proposed method can accurately derive DPRV signal in real time, the system can be used for processing and analyzing DPRV signal in real time.

  16. Image processing for navigation on a mobile embedded platform: design of an autonomous mobile robot

    NASA Astrophysics Data System (ADS)

    Loose, Harald; Lemke, Christiane; Papazov, Chavdar

    2006-02-01

    This paper deals with intelligent mobile platforms connected to a camera controlled by a small hardware-platform called RCUBE. This platform is able to provide features of a typical actuator-sensor board with various inputs and outputs as well as computing power and image recognition capabilities. Several intelligent autonomous RCUBE devices can be equipped and programmed to participate in the BOSPORUS network. These components form an intelligent network for gathering sensor and image data, sensor data fusion, navigation and control of mobile platforms. The RCUBE platform provides a standalone solution for image processing, which will be explained and presented. It plays a major role for several components in a reference implementation of the BOSPORUS system. On the one hand, intelligent cameras will be positioned in the environment, analyzing the events from a fixed point of view and sharing their perceptions with other components in the system. On the other hand, image processing results will contribute to a reliable navigation of a mobile system, which is crucially important. Fixed landmarks and other objects appropriate for determining the position of a mobile system can be recognized. For navigation other methods are added, i.e. GPS calculations and odometers.

  17. Graphic pathogeographies.

    PubMed

    Donovan, Courtney

    2014-09-01

    This paper focuses on the graphic pathogeographies in David B.'s Epileptic and David Small's Stitches: A Memoir to highlight the significance of geographic concepts in graphic novels of health and disease. Despite its importance in such works, few scholars have examined the role of geography in their narrative and structure. I examine the role of place in Epileptic and Stitches to extend the academic discussion on graphic novels of health and disease and identify how such works bring attention to the role of geography in the individual's engagement with health, disease, and related settings.

  18. Graphical Man/Machine Communications

    DTIC Science & Technology

    Progress is reported concerning the use of computer-controlled graphical displays in the areas of radiation diffusion and hydrodynamics, general...ventricular dynamics. Progress is continuing on the use of computer graphics in architecture. Some progress in halftone graphics is reported with no basic...developments presented. Colored halftone perspective pictures are being used to represent multivariable situations. Nonlinear waveform processing is

  19. GPU MrBayes V3.1: MrBayes on Graphics Processing Units for Protein Sequence Data.

    PubMed

    Pang, Shuai; Stones, Rebecca J; Ren, Ming-Ming; Liu, Xiao-Guang; Wang, Gang; Xia, Hong-ju; Wu, Hao-Yang; Liu, Yang; Xie, Qiang

    2015-09-01

    We present a modified GPU (graphics processing unit) version of MrBayes, called ta(MC)(3) (GPU MrBayes V3.1), for Bayesian phylogenetic inference on protein data sets. Our main contributions are 1) utilizing 64-bit variables, thereby enabling ta(MC)(3) to process larger data sets than MrBayes; and 2) to use Kahan summation to improve accuracy, convergence rates, and consequently runtime. Versus the current fastest software, we achieve a speedup of up to around 2.5 (and up to around 90 vs. serial MrBayes), and more on multi-GPU hardware. GPU MrBayes V3.1 is available from http://sourceforge.net/projects/mrbayes-gpu/.
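
    Kahan (compensated) summation, one of the two contributions named above, can be stated in a few lines. The device function below is a generic sketch of the technique, not the actual GPU MrBayes likelihood kernel.

```cuda
// Kahan (compensated) summation: the compensation term c recovers the
// low-order bits lost in each addition, improving accuracy over a plain
// running sum without resorting to higher precision.
__device__ double kahanSum(const double *x, int n) {
    double sum = 0.0, c = 0.0;
    for (int i = 0; i < n; ++i) {
        double y = x[i] - c;
        double t = sum + y;
        c = (t - sum) - y;   // rounding error of the addition above
        sum = t;
    }
    return sum;
}
```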

  20. The Metropolis Monte Carlo method with CUDA enabled Graphic Processing Units

    SciTech Connect

    Hall, Clifford; Ji, Weixiao; Blaisten-Barojas, Estela

    2014-02-01

    We present a CPU–GPU system for runtime acceleration of large molecular simulations using GPU computation and memory swaps. The memory architecture of the GPU can be used both as container for simulation data stored on the graphics card and as floating-point code target, providing an effective means for the manipulation of atomistic or molecular data on the GPU. To fully take advantage of this mechanism, efficient GPU realizations of algorithms used to perform atomistic and molecular simulations are essential. Our system implements a versatile molecular engine, including inter-molecule interactions and orientational variables for performing the Metropolis Monte Carlo (MMC) algorithm, which is one type of Markov chain Monte Carlo. By combining memory objects with floating-point code fragments we have implemented an MMC parallel engine that entirely avoids the communication time of molecular data at runtime. Our runtime acceleration system is a forerunner of a new class of CPU–GPU algorithms exploiting memory concepts combined with threading for avoiding bus bandwidth and communication. The testbed molecular system used here is a condensed phase system of oligopyrrole chains. A benchmark shows a size scaling speedup of 60 for systems with 210,000 pyrrole monomers. Our implementation can easily be combined with MPI to connect in parallel several CPU–GPU duets. -- Highlights: •We parallelize the Metropolis Monte Carlo (MMC) algorithm on one CPU–GPU duet. •The Adaptive Tempering Monte Carlo employs MMC and profits from this CPU–GPU implementation. •Our benchmark shows a size scaling-up speedup of 62 for systems with 225,000 particles. •The testbed involves a polymeric system of oligopyrroles in the condensed phase. •The CPU–GPU parallelization includes dipole–dipole and Mie–Jones classic potentials.
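
    For orientation, the acceptance test at the heart of any MMC engine is shown below as a bare CUDA sketch with cuRAND, one chain per thread and a toy harmonic energy. The paper's molecular engine, with its inter-molecule interactions, orientational moves, and on-GPU data residency, is far richer than this.

```cuda
// Bare Metropolis step: propose a move, accept downhill moves always and
// uphill moves with probability exp(-dE/kT). One independent chain/thread.
#include <curand_kernel.h>

__device__ float energy(float x) { return x * x; }   // toy harmonic model

__global__ void mmcStep(float *state, curandState *rng, int n, float kT) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    curandState r = rng[i];
    float trial = state[i] + 0.1f * (curand_uniform(&r) - 0.5f);  // propose
    float dE = energy(trial) - energy(state[i]);
    if (dE <= 0.f || curand_uniform(&r) < expf(-dE / kT))         // accept?
        state[i] = trial;
    rng[i] = r;   // persist the RNG state for the next sweep
}
```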

  1. Parallelized multi-graphics processing unit framework for high-speed Gabor-domain optical coherence microscopy.

    PubMed

    Tankam, Patrice; Santhanam, Anand P; Lee, Kye-Sung; Won, Jungeun; Canavesi, Cristina; Rolland, Jannick P

    2014-07-01

    Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000×1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1×1×0.6  mm3 skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real-time to the examiner using parallelized GPU processing.
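
    The per-GPU task assignment described above can be skeletonized as follows: query the device count, give each GPU a contiguous batch of A-scans, and synchronize at the end. Function and variable names are illustrative, not from the GD-OCM code, and the per-batch kernel launches are elided.

```cuda
// Distribute nAScans A-scans of length scanLen across all visible GPUs.
#include <cuda_runtime.h>
#include <vector>

void processAScans(const float *host, float *hostOut,
                   int nAScans, int scanLen) {
    int nGpu = 0;
    cudaGetDeviceCount(&nGpu);
    int per = (nAScans + nGpu - 1) / nGpu;            // A-scans per GPU
    std::vector<float*> dBuf(nGpu, nullptr);
    for (int g = 0; g < nGpu; ++g) {
        int first = g * per;
        int count = (first + per <= nAScans) ? per : nAScans - first;
        if (count <= 0) break;
        cudaSetDevice(g);                              // switch context
        size_t bytes = (size_t)count * scanLen * sizeof(float);
        cudaMalloc(&dBuf[g], bytes);
        cudaMemcpyAsync(dBuf[g], host + (size_t)first * scanLen, bytes,
                        cudaMemcpyHostToDevice);
        // ... launch this GPU's processing kernel(s) on its batch here ...
        cudaMemcpyAsync(hostOut + (size_t)first * scanLen, dBuf[g], bytes,
                        cudaMemcpyDeviceToHost);
    }
    for (int g = 0; g < nGpu; ++g) {                   // wait, then release
        if (!dBuf[g]) continue;
        cudaSetDevice(g);
        cudaDeviceSynchronize();
        cudaFree(dBuf[g]);
    }
}
```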

  2. Parallelized multi–graphics processing unit framework for high-speed Gabor-domain optical coherence microscopy

    PubMed Central

    Tankam, Patrice; Santhanam, Anand P.; Lee, Kye-Sung; Won, Jungeun; Canavesi, Cristina; Rolland, Jannick P.

    2014-01-01

    Abstract. Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000×1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1×1×0.6  mm3 skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real-time to the examiner using parallelized GPU processing. PMID:24695868

  3. Robot graphic simulation testbed

    NASA Technical Reports Server (NTRS)

    Cook, George E.; Sztipanovits, Janos; Biegl, Csaba; Karsai, Gabor; Springfield, James F.

    1991-01-01

    The objective of this research was twofold. First, the basic capabilities of ROBOSIM (graphical simulation system) were improved and extended by taking advantage of advanced graphic workstation technology and artificial intelligence programming techniques. Second, the scope of the graphic simulation testbed was extended to include general problems of Space Station automation. Hardware support for 3-D graphics and high processing performance make high resolution solid modeling, collision detection, and simulation of structural dynamics computationally feasible. The Space Station is a complex system with many interacting subsystems. Design and testing of automation concepts demand modeling of the affected processes, their interactions, and that of the proposed control systems. The automation testbed was designed to facilitate studies in Space Station automation concepts.

  4. Large scale neural circuit mapping data analysis accelerated with the graphical processing unit (GPU)

    PubMed Central

    Shi, Yulin; Veidenbaum, Alexander V.; Nicolau, Alex; Xu, Xiangmin

    2014-01-01

    Background Modern neuroscience research demands computing power. Neural circuit mapping studies such as those using laser scanning photostimulation (LSPS) produce large amounts of data and require intensive computation for post-hoc processing and analysis. New Method Here we report on the design and implementation of a cost-effective desktop computer system for accelerated experimental data processing with recent GPU computing technology. A new version of Matlab software with GPU enabled functions is used to develop programs that run on Nvidia GPUs to harness their parallel computing power. Results We evaluated both the central processing unit (CPU) and GPU-enabled computational performance of our system in benchmark testing and practical applications. The experimental results show that the GPU-CPU co-processing of simulated data and actual LSPS experimental data clearly outperformed the multi-core CPU with up to a 22x speedup, depending on computational tasks. Further, we present a comparison of numerical accuracy between GPU and CPU computation to verify the precision of GPU computation. In addition, we show how GPUs can be effectively adapted to improve the performance of commercial image processing software such as Adobe Photoshop. Comparison with Existing Method(s) To our best knowledge, this is the first demonstration of GPU application in neural circuit mapping and electrophysiology-based data processing. Conclusions Together, GPU enabled computation enhances our ability to process large-scale data sets derived from neural circuit mapping studies, allowing for increased processing speeds while retaining data precision. PMID:25277633

  5. [Communicative process in the mobile emergency service (SAMU/192)].

    PubMed

    dos Santos, Maria Claudia; Bernardes, Andrea; Gabriel, Carmen Silvia; Evora, Yolanda Dora Martinez; Rocha, Fernanda Ludmilla Rossi

    2012-03-01

    This study aims to characterize the communication process among nursing assistants who work in vehicles of the basic life support of the mobile emergency service, in the coordination of this service, and in the unified medical regulation service in a city of the state of São Paulo, Brazil. This descriptive and qualitative research used thematic content analysis for data analysis. Semi-structured interviews were used for the data collection, which was held in January, 2010. Results show difficulties in communication with both the medical regulation service and the coordination. Among the most highlighted aspects are failures during radio transmission, lack of qualified radio operators, difficult access to the coordination and lack of supervision by nurses. However, it was possible to detect solutions that aim to improve the communication and, consequently, the service offered by the mobile emergency service.

  6. Enabling customer self service through image processing on mobile devices

    NASA Astrophysics Data System (ADS)

    Kliche, Ingmar; Hellmann, Sascha; Kreutel, Jörn

    2013-03-01

    Our paper will outline the results of a research project that employs image processing for the automatic diagnosis of technical devices whose internal state is communicated through visual displays. In particular, we developed a method for detecting exceptional states of retail wireless routers, analysing the state and blinking behaviour of the LEDs that make up most routers' user interface. The method was made configurable by means of abstracting away from a particular device's display properties, thus being able to analyse a whole range of different devices whose displays are covered by our abstraction. The method of analysis and its configuration mechanism were implemented as a native mobile application for the Android Platform. It employs the local camera of mobile devices for capturing a router's state, and uses overlaid visual hints for guiding the user toward that perspective from where an analysis is possible.

  7. A graphical simulation model of the entire DNA process associated with the analysis of short tandem repeat loci.

    PubMed

    Gill, Peter; Curran, James; Elliot, Keith

    2005-01-01

    The use of expert systems to interpret short tandem repeat DNA profiles in forensic, medical and ancient DNA applications is becoming increasingly prevalent as high-throughput analytical systems generate large amounts of data that are time-consuming to process. With special reference to low copy number (LCN) applications, we use a graphical model to simulate stochastic variation associated with the entire DNA process starting with extraction of sample, followed by the processing associated with the preparation of a PCR reaction mixture and PCR itself. Each part of the process is modelled with input efficiency parameters. Then, the key output parameters that define the characteristics of a DNA profile are derived, namely heterozygote balance (Hb) and the probability of allelic drop-out p(D). The model can be used to estimate the unknown efficiency parameters, such as pi(extraction). 'What-if' scenarios can be used to improve and optimize the entire process, e.g. by increasing the aliquot forwarded to PCR, the improvement expected to a given DNA profile can be reliably predicted. We demonstrate that Hb and drop-out are mainly a function of stochastic effect of pre-PCR molecular selection. Whole genome amplification is unlikely to give any benefit over conventional PCR for LCN.

  8. A graphical simulation model of the entire DNA process associated with the analysis of short tandem repeat loci

    PubMed Central

    Gill, Peter; Curran, James; Elliot, Keith

    2005-01-01

    The use of expert systems to interpret short tandem repeat DNA profiles in forensic, medical and ancient DNA applications is becoming increasingly prevalent as high-throughput analytical systems generate large amounts of data that are time-consuming to process. With special reference to low copy number (LCN) applications, we use a graphical model to simulate stochastic variation associated with the entire DNA process starting with extraction of sample, followed by the processing associated with the preparation of a PCR reaction mixture and PCR itself. Each part of the process is modelled with input efficiency parameters. Then, the key output parameters that define the characteristics of a DNA profile are derived, namely heterozygote balance (Hb) and the probability of allelic drop-out p(D). The model can be used to estimate the unknown efficiency parameters, such as πextraction. ‘What-if’ scenarios can be used to improve and optimize the entire process, e.g. by increasing the aliquot forwarded to PCR, the improvement expected to a given DNA profile can be reliably predicted. We demonstrate that Hb and drop-out are mainly a function of stochastic effect of pre-PCR molecular selection. Whole genome amplification is unlikely to give any benefit over conventional PCR for LCN. PMID:15681615

  9. Perception in statistical graphics

    NASA Astrophysics Data System (ADS)

    VanderPlas, Susan Ruth

    There has been quite a bit of research on statistical graphics and visualization, generally focused on new types of graphics, new software to create graphics, interactivity, and usability studies. Our ability to interpret and use statistical graphics hinges on the interface between the graph itself and the brain that perceives and interprets it, and there is substantially less research on the interplay between graph, eye, brain, and mind than is sufficient to understand the nature of these relationships. The goal of the work presented here is to further explore the interplay between a static graph, the translation of that graph from paper to mental representation (the journey from eye to brain), and the mental processes that operate on that graph once it is transferred into memory (mind). Understanding the perception of statistical graphics should allow researchers to create more effective graphs which produce fewer distortions and viewer errors while reducing the cognitive load necessary to understand the information presented in the graph. Taken together, these experiments should lay a foundation for exploring the perception of statistical graphics. There has been considerable research into the accuracy of numerical judgments viewers make from graphs, and these studies are useful, but it is more effective to understand how errors in these judgments occur so that the root cause of the error can be addressed directly. Understanding how visual reasoning relates to the ability to make judgments from graphs allows us to tailor graphics to particular target audiences. In addition, understanding the hierarchy of salient features in statistical graphics allows us to clearly communicate the important message from data or statistical models by constructing graphics which are designed specifically for the perceptual system.

  10. Distributed cooperating processes in a mobile robot control system

    NASA Technical Reports Server (NTRS)

    Skillman, Thomas L., Jr.

    1988-01-01

    A mobile inspection robot has been proposed for the NASA Space Station. It will be a free flying autonomous vehicle that will leave a berthing unit to accomplish a variety of inspection tasks around the Space Station, and then return to its berth to recharge, refuel, and transfer information. The Flying Eye robot will receive voice communication to change its attitude, move at a constant velocity, and move to a predefined location along a self-generated path. This mobile robot control system requires integration of traditional command and control techniques with a number of AI technologies. Speech recognition, natural language understanding, task and path planning, sensory abstraction and pattern recognition are all required for successful implementation. The interface between the traditional numeric control techniques and the symbolic processing of the AI technologies must be developed, and a distributed computing approach will be needed to meet the real time computing requirements. To study the integration of the elements of this project, a novel mobile robot control architecture and simulation based on the blackboard architecture was developed. The control system operation and structure is discussed.

  11. Programmer's Guide for FFORM. Physical Processes in Terrestrial and Aquatic Ecosystems, Computer Programs and Graphics Capabilities.

    ERIC Educational Resources Information Center

    Anderson, Lougenia; Gales, Larry

    This module is part of a series designed to be used by life science students for instruction in the application of physical theory to ecosystem operation. Most modules contain computer programs which are built around a particular application of a physical process. FFORM is a portable format-free input subroutine package written in ANSI Fortran IV…

  12. Graphic Novels, Web Comics, and Creator Blogs: Examining Product and Process

    ERIC Educational Resources Information Center

    Carter, James Bucky

    2011-01-01

    Young adult literature (YAL) of the late 20th and early 21st century is exploring hybrid forms with growing regularity by embracing textual conventions from sequential art, video games, film, and more. As well, Web-based technologies have given those who consume YAL more immediate access to authors, their metacognitive creative processes, and…

  13. Conceptual Learning with Multiple Graphical Representations: Intelligent Tutoring Systems Support for Sense-Making and Fluency-Building Processes

    ERIC Educational Resources Information Center

    Rau, Martina A.

    2013-01-01

    Most learning environments in the STEM disciplines use multiple graphical representations along with textual descriptions and symbolic representations. Multiple graphical representations are powerful learning tools because they can emphasize complementary aspects of complex learning contents. However, to benefit from multiple graphical…

  14. Graphic Arts.

    ERIC Educational Resources Information Center

    Towler, Alan L.

    This guide to teaching graphic arts, one in a series of instructional materials for junior high industrial arts education, is designed to assist teachers as they plan and implement new courses of study and as they make revisions and improvements in existing courses in order to integrate classroom learning with real-life experiences. This graphic…

  15. The LHEA PDP 11/70 graphics processing facility users guide

    NASA Technical Reports Server (NTRS)

    1978-01-01

    A compilation of all necessary and useful information needed to allow the inexperienced user to program on the PDP 11/70. Information regarding the use of editing and file manipulation utilities, as well as operational procedures, is included. The inexperienced user is taken through the process of creating, editing, compiling, task building, and debugging his/her FORTRAN program. Documentation on additional software is also included.

  16. Analysis and simulation of industrial distillation processes using a graphical system design model

    NASA Astrophysics Data System (ADS)

    Boca, Maria Loredana; Dobra, Remus; Dragos, Pasculescu; Ahmad, Mohammad Ayaz

    2016-12-01

    The separation column used for experimentation can be configured in two ways: first, as a cascade of two columns of different diameters, one placed in the extension of the other; and second, as a single column with a set diameter [1], [2]. The column separates carbon isotopes by cryogenic distillation of pure carbon monoxide, which is fed at a constant flow rate as a gas through the feeding system [1], [2]. Based on numerical control systems used in virtual instrumentation, simulations of the distillation process were performed in order to obtain the isotope 13C at high concentrations. It is proposed that this installation be controlled using a data acquisition tool and professional software that processes information from the isotopic column with a dedicated logic algorithm. The classical isotopic column will be controlled automatically, and information about the main parameters will be monitored and properly displayed by a single program. Taking into consideration the very low operating temperature, an efficient vacuum-jacket thermal insulation is necessary. Since the "elementary separation ratio" [2] is very close to unity, a permanent countercurrent of the liquid and gaseous phases of the carbon monoxide must be created in order to raise the 13C isotope concentration to the desired level; this countercurrent is produced by the main elements of the equipment: the boiler at the bottom of the column and the condenser at the top.

  17. Computer graphics and the graphic artist

    NASA Technical Reports Server (NTRS)

    Taylor, N. L.; Fedors, E. G.; Pinelli, T. E.

    1985-01-01

    A centralized computer graphics system is being developed at the NASA Langley Research Center. This system was required to satisfy multiuser needs, ranging from presentation quality graphics prepared by a graphic artist to 16-mm movie simulations generated by engineers and scientists. While the major thrust of the central graphics system was directed toward engineering and scientific applications, hardware and software capabilities to support the graphic artists were integrated into the design. This paper briefly discusses the importance of computer graphics in research; the central graphics system in terms of systems, software, and hardware requirements; the application of computer graphics to graphic arts, discussed in terms of the requirements for a graphic arts workstation; and the problems encountered in applying computer graphics to the graphic arts. The paper concludes by presenting the status of the central graphics system.

  18. Optimization of Parallel Legendre Transform using Graphics Processing Unit (GPU) for a Geodynamo Code

    NASA Astrophysics Data System (ADS)

    Lokavarapu, H. V.; Matsui, H.

    2015-12-01

    Convection and the magnetic field of the Earth's outer core are expected to have vast length scales. To resolve these flows, geodynamo simulations using the spherical harmonic transform (SHT) require high performance computing; a significant portion of the execution time is spent on the Legendre transform. Calypso is a geodynamo code designed to model the magnetohydrodynamics of a Boussinesq fluid in a rotating spherical shell, such as the outer core of the Earth. The code has been shown to scale well on computer clusters on the order of 10⁵ cores using Message Passing Interface (MPI) and Open Multi-Processing (OpenMP) parallelization for CPUs. To optimize further, we investigate three different algorithms for the SHT using GPUs. The first is to precompute the Legendre polynomials on the CPU before executing the SHT on the GPU within the time integration loop. In the second approach, both the Legendre polynomials and the SHT are computed on the GPU. In the third approach, we partition the radial grid for the forward transform and the harmonic order for the backward transform between the CPU and GPU; thereafter, the partitioned work is computed simultaneously within the time integration loop. We examine the trade-offs between space and time, memory bandwidth, and GPU computations on Maverick, a Texas Advanced Computing Center (TACC) supercomputer. We have observed improved performance using a GPU-enabled Legendre transform. Furthermore, we compare and contrast the different algorithms in the context of GPUs.
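
    At its core, the Legendre stage of an SHT is a dense matrix product between precomputed Legendre polynomial values at Gauss-Legendre latitudes and the spectral coefficients, which is why it maps naturally onto GPU matrix hardware. A schematic NumPy sketch of the forward/backward transform for the plain (m = 0) Legendre case, standing in for the GPU kernels rather than reproducing Calypso's code:

        import numpy as np
        from numpy.polynomial.legendre import leggauss, legvander

        L = 63                                  # spectral truncation (illustrative)
        x, w = leggauss(L + 1)                  # Gauss-Legendre nodes and weights
        P = legvander(x, L)                     # P[j, l] = P_l(x_j), precomputable

        def backward(coeffs):
            # Spectral -> grid: f(x_j) = sum_l c_l P_l(x_j). One GEMM on a GPU.
            return P @ coeffs

        def forward(f_grid):
            # Grid -> spectral via Gauss-Legendre quadrature, also a GEMM.
            norm = (2 * np.arange(L + 1) + 1) / 2   # 1 / integral of P_l^2 on [-1, 1]
            return norm * (P.T @ (w * f_grid))

        c = np.zeros(L + 1)
        c[3] = 1.0                               # a pure P_3 mode
        assert np.allclose(forward(backward(c)), c)   # round trip recovers it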

  19. Graphics Processing Unit-Accelerated Code for Computing Second-Order Wiener Kernels and Spike-Triggered Covariance

    PubMed Central

    Mano, Omer

    2017-01-01

    Sensory neuroscience seeks to understand and predict how sensory neurons respond to stimuli. Nonlinear components of neural responses are frequently characterized by the second-order Wiener kernel and the closely-related spike-triggered covariance (STC). Recent advances in data acquisition have made it increasingly common and computationally intensive to compute second-order Wiener kernels/STC matrices. In order to speed up this sort of analysis, we developed a graphics processing unit (GPU)-accelerated module that computes the second-order Wiener kernel of a system’s response to a stimulus. The generated kernel can be easily transformed for use in standard STC analyses. Our code speeds up such analyses by factors of over 100 relative to current methods that utilize central processing units (CPUs). It works on any modern GPU and may be integrated into many data analysis workflows. This module accelerates data analysis so that more time can be spent exploring parameter space and interpreting data. PMID:28068420
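
    The underlying computation is compact: the STC is the spike-weighted covariance of the stimulus histories preceding spikes minus the covariance of all stimulus histories, and the expensive part is the outer-product accumulation that a GPU parallelizes. A hedged NumPy sketch of a CPU reference version (array shapes and normalization are illustrative choices, not the module's API):

        import numpy as np

        def spike_triggered_covariance(stimulus, spikes, window):
            # stimulus: (T,) samples; spikes: (T,) spike counts; window: history length
            T = len(stimulus)
            # Rows are stimulus histories strictly preceding each spike bin
            segs = np.lib.stride_tricks.sliding_window_view(stimulus, window)[:T - window]
            counts = spikes[window:T][:, None]     # spikes aligned to segment ends
            n = counts.sum()
            sta = (counts * segs).sum(axis=0) / n              # spike-triggered average
            centered = segs - sta
            stc = (counts * centered).T @ centered / (n - 1)   # spike-weighted covariance
            return stc - np.cov(segs, rowvar=False)            # subtract prior covariance

        rng = np.random.default_rng(0)
        s = rng.standard_normal(10000)
        sp = rng.poisson(0.1, size=10000)
        print(spike_triggered_covariance(s, sp, 20).shape)     # (20, 20)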

  20. Performance evaluation for volumetric segmentation of multiple sclerosis lesions using MATLAB and computing engine in the graphical processing unit (GPU)

    NASA Astrophysics Data System (ADS)

    Le, Anh H.; Park, Young W.; Ma, Kevin; Jacobs, Colin; Liu, Brent J.

    2010-03-01

    Multiple Sclerosis (MS) is a progressive neurological disease affecting myelin pathways in the brain. Multiple lesions in the white matter can cause paralysis and severe motor disabilities in the affected patient. To address the inconsistency and user-dependency of manual lesion measurement in MRI, we have proposed a 3-D automated lesion quantification algorithm to enable objective and efficient lesion volume tracking. The computer-aided detection (CAD) of MS, written in MATLAB, utilizes the K-Nearest Neighbors (KNN) method to compute the probability of lesions on a per-voxel basis. Despite the highly optimized image-processing algorithm used in CAD development, MS CAD integration and evaluation in the clinical workflow is technically challenging due to the high computation rates and memory bandwidth required by the recursive nature of the algorithm. In this paper, we present the development and evaluation of a computing engine in the graphical processing unit (GPU) with MATLAB for segmentation of MS lesions. The paper investigates the utilization of a high-end GPU for parallel computing of KNN in the MATLAB environment to improve algorithm performance. The integration is accomplished using NVIDIA's CUDA development toolkit for MATLAB. The results of this study will validate the practicality and effectiveness of the prototype MS CAD in a clinical setting. The GPU method may allow MS CAD to be rapidly integrated into an electronic patient record or any disease-centric health care system.
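
    The per-voxel KNN step that the GPU accelerates amounts to finding, for each voxel's feature vector, the k nearest labelled training vectors and taking the fraction labelled "lesion" as the probability. A compact NumPy sketch of that kernel; the feature vectors and the value of k are illustrative assumptions:

        import numpy as np

        def knn_lesion_probability(voxels, train_x, train_y, k=15):
            # voxels: (N, F) feature vectors (e.g., multi-sequence MRI intensities)
            # train_x: (M, F) labelled training voxels; train_y: (M,) 0/1 lesion labels
            # Pairwise squared distances -- the data-parallel part a GPU accelerates
            d2 = ((voxels ** 2).sum(1)[:, None] + (train_x ** 2).sum(1)[None, :]
                  - 2.0 * voxels @ train_x.T)
            nearest = np.argpartition(d2, k, axis=1)[:, :k]   # k smallest, unordered
            return train_y[nearest].mean(axis=1)              # fraction labelled lesion

        rng = np.random.default_rng(2)
        probs = knn_lesion_probability(rng.random((1000, 3)),
                                       rng.random((5000, 3)),
                                       rng.integers(0, 2, 5000), k=15)
        print(probs.shape, probs.min(), probs.max())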

  1. Graphics processing unit accelerated three-dimensional model for the simulation of pulsed low-temperature plasmas

    SciTech Connect

    Fierro, Andrew; Dickens, James; Neuber, Andreas

    2014-12-15

    A 3-dimensional particle-in-cell/Monte Carlo collision simulation that is fully implemented on a graphics processing unit (GPU) is described and used to determine low-temperature plasma characteristics at high reduced electric field, E/n, in nitrogen gas. Details of the implementation on the GPU using the NVIDIA Compute Unified Device Architecture framework are discussed with respect to efficient code execution. The software is capable of tracking around 10 × 10⁶ particles with dynamic weighting and a total mesh size larger than 10⁸ cells. Verification of the simulation is performed by comparing the electron energy distribution function and plasma transport parameters to known Boltzmann equation (BE) solvers. Under the assumption of a uniform electric field and neglecting the build-up of positive ion space charge, the simulation agrees well with the BE solvers. The model is utilized to calculate plasma characteristics of a pulsed, parallel-plate discharge. A photoionization model provides the simulation with additional electrons after the initial seeded electron density has drifted towards the anode. A comparison of the performance benefits of the GPU implementation versus a CPU implementation is given: a speed-up factor of 13 is obtained for a 3D relaxation Poisson solver, and a factor of 60 speed-up is realized for the parallelization of the electron processes.
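
    As a very rough sketch of the particle-in-cell/Monte Carlo collision idea (a drastic simplification, not the authors' CUDA code), the core loop pushes particles in the field and tests each one against a per-step collision probability; the uniform field, constant collision frequency, and velocity-reset collision below are toy assumptions:

        import numpy as np

        rng = np.random.default_rng(3)
        Q_M = -1.76e11      # electron charge-to-mass ratio [C/kg]
        E_FIELD = 1.0e5     # uniform electric field [V/m] -- toy assumption
        NU = 1.0e10         # constant collision frequency [1/s] -- toy assumption
        DT = 1.0e-12        # time step [s]

        v = np.zeros(50000)                    # one velocity component per electron
        p_coll = 1.0 - np.exp(-NU * DT)        # collision probability per step

        for _ in range(1000):
            v += Q_M * E_FIELD * DT            # push every particle in the field
            hit = rng.random(v.size) < p_coll  # Monte Carlo collision test
            v[hit] = 0.0                       # crude velocity-reset collision

        # Drift velocity settles near Q_M * E_FIELD / NU (about -1.8e6 m/s here)
        print(f"mean drift velocity: {v.mean():.2e} m/s")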

  2. Grid-based algorithm to search critical points, in the electron density, accelerated by graphics processing units.

    PubMed

    Hernández-Esparza, Raymundo; Mejía-Chica, Sol-Milena; Zapata-Escobar, Andy D; Guevara-García, Alfredo; Martínez-Melchor, Apolinar; Hernández-Pérez, Julio-M; Vargas, Rubicelia; Garza, Jorge

    2014-12-05

    Using a grid-based method to search for the critical points in the electron density, we show how to accelerate such a method with graphics processing units (GPUs). When the GPU implementation is contrasted with that used on central processing units (CPUs), we found a large difference in the time elapsed by the two implementations, with the smallest time observed when GPUs are used. We tested two GPUs, one intended for video games and the other for high-performance computing (HPC). On the CPU side, two processors were tested, one used in common personal computers and the other used for HPC, both of the latest generation. Although our parallel algorithm scales quite well on CPUs, the same implementation on GPUs runs around 10× faster than 16 CPUs, for any of the tested GPUs and CPUs. We found that a GPU intended for video games can be used for our application without any problem, delivering remarkable performance; in fact, this GPU competes with the HPC GPU, in particular when single precision is used.
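
    In its simplest form, a grid-based critical-point search is an independent test at every grid cell, which is what makes it GPU-friendly. A minimal NumPy sketch of the local-extremum variant of that test (a simplification of the paper's method, which locates all critical-point types):

        import numpy as np

        def grid_extrema(rho):
            # Flag interior points of a 3-D scalar field that are strictly
            # greater (maxima) or smaller (minima) than all 6 face neighbours.
            c = rho[1:-1, 1:-1, 1:-1]
            neighbours = np.stack([rho[2:, 1:-1, 1:-1], rho[:-2, 1:-1, 1:-1],
                                   rho[1:-1, 2:, 1:-1], rho[1:-1, :-2, 1:-1],
                                   rho[1:-1, 1:-1, 2:], rho[1:-1, 1:-1, :-2]])
            maxima = (c > neighbours).all(axis=0)   # one independent test per voxel
            minima = (c < neighbours).all(axis=0)
            return maxima, minima

        # Two Gaussian 'atoms' on a grid: expect two density maxima
        g = np.linspace(-3, 3, 61)
        X, Y, Z = np.meshgrid(g, g, g, indexing="ij")
        rho = np.exp(-((X - 1)**2 + Y**2 + Z**2)) + np.exp(-((X + 1)**2 + Y**2 + Z**2))
        mx, mn = grid_extrema(rho)
        print("maxima found:", int(mx.sum()))       # -> 2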

  3. Structural Determination of (Al2O3)(n) (n = 1-15) Clusters Based on Graphic Processing Unit.

    PubMed

    Zhang, Qiyao; Cheng, Longjiu

    2015-05-26

    Global optimization algorithms have been widely used in the field of chemistry to search for the global minimum structures of molecular and atomic clusters, which becomes an NP-hard problem as cluster sizes increase. Considering that the computational ability of a graphics processing unit (GPU) is much better than that of a central processing unit (CPU), we developed a GPU-based genetic algorithm for structural prediction of clusters and achieved a high acceleration ratio compared to a CPU. For the one-dimensional (1D) operation of a GPU, taking (Al2O3)n clusters as test cases, the peak acceleration ratio of the GPU over a CPU is about 220 in single precision and 103 in double precision for the calculation of the analytical interatomic potential. The peak acceleration ratio is about 240 and 107 for the block operation, and about 77 and 35 for the 2D operation, in single and double precision, respectively. The peak acceleration ratio of the whole genetic algorithm program is about 35 relative to a CPU in double precision. Structures of (Al2O3)n clusters at n = 1-10 reported in previous works are successfully located, and their low-lying structures at n = 11-15 are predicted.

  4. A Physics-Based Modeling and Real-Time Simulation of Biomechanical Diffusion Process Through Optical Imaged Alveolar Tissues on Graphical Processing Units

    NASA Astrophysics Data System (ADS)

    Kaya, Ilhan; Santhanam, Anand P.; Lee, Kye-Sung; Meemon, Panomsak; Papp, Nicolene; Rolland, Jannick P.

    Tissue engineering has broad applications, from creating the much-needed engineered tissue and organ structures for regenerative medicine to providing in vitro testbeds for drug testing. In the latter application domain, creating alveolar lung tissue and simulating the diffusion process of oxygen and other possible agents from the air into the blood stream, as well as modeling the removal of carbon dioxide and other possible entities from the blood stream, are of critical importance to simulating lung functions in various environments. In this chapter, we propose a physics-based model to simulate the alveolar gas exchange and the alveolar diffusion process. Tissue engineers, for the first time, may utilize these simulation results to better understand the underlying gas exchange process and properly adjust the tissue growing cycles. In this work, alveolar tissues are imaged by means of an optical coherence microscopy (OCM) system developed in our laboratory. As a consequence, 3D alveoli tissue data with its inherent complex boundary is taken as input to the simulation system, which is based on computational fluid mechanics in simulating the alveolar gas exchange. The visualization and the simulation of diffusion of the air into the blood through the alveoli tissue are performed using a state-of-the-art graphics processing unit (GPU). Results show the real-time simulation of the gas exchange through the 2D alveoli tissue.

  5. Integrative Processing of Verbal and Graphical Information during Re-Reading Predicts Learning from Illustrated Text: An Eye-Movement Study

    ERIC Educational Resources Information Center

    Mason, Lucia; Tornatora, Maria Caterina; Pluchino, Patrik

    2015-01-01

    Printed or digital textbooks contain texts accompanied by various kinds of visualisation. Successful comprehension of these materials requires integrating verbal and graphical information. This study investigates the time course of processing an illustrated text through eye-tracking methodology in the school context. The aims were to identify…

  6. Communication: A reduced scaling J-engine based reformulation of SOS-MP2 using graphics processing units.

    PubMed

    Maurer, S A; Kussmann, J; Ochsenfeld, C

    2014-08-07

    We present a low-prefactor, cubically scaling scaled-opposite-spin second-order Møller-Plesset perturbation theory (SOS-MP2) method which is highly suitable for massively parallel architectures like graphics processing units (GPUs). The scaling is reduced from O(N⁵) to O(N³) by a reformulation of the MP2 expression in the atomic orbital basis via Laplace transformation and the resolution-of-the-identity (RI) approximation of the integrals, in combination with efficient sparse algebra for the 3-center integral transformation. In contrast to previous works that employ GPUs for post-Hartree-Fock calculations, we do not simply employ GPU-based linear algebra libraries to accelerate the conventional algorithm. Instead, our reformulation allows us to replace the rate-determining contraction step with a modified J-engine algorithm, which has been proven to be highly efficient on GPUs. Thus, our SOS-MP2 scheme enables us to treat large molecular systems in an accurate and efficient manner on a single GPU server.

  7. GPUDePiCt: A Parallel Implementation of a Clustering Algorithm for Computing Degenerate Primers on Graphics Processing Units.

    PubMed

    Cickovski, Trevor; Flor, Tiffany; Irving-Sachs, Galen; Novikov, Philip; Parda, James; Narasimhan, Giri

    2015-01-01

    In order to make multiple copies of a target sequence in the laboratory, the technique of Polymerase Chain Reaction (PCR) requires the design of "primers", which are short fragments of nucleotides complementary to the flanking regions of the target sequence. If the same primer is to amplify multiple closely related target sequences, then it is necessary to make the primers "degenerate", which would allow it to hybridize to target sequences with a limited amount of variability that may have been caused by mutations. However, the PCR technique can only allow a limited amount of degeneracy, and therefore the design of degenerate primers requires the identification of reasonably well-conserved regions in the input sequences. We take an existing algorithm for designing degenerate primers that is based on clustering and parallelize it in a web-accessible software package GPUDePiCt, using a shared memory model and the computing power of Graphics Processing Units (GPUs). We test our implementation on large sets of aligned sequences from the human genome and show a multi-fold speedup for clustering using our hybrid GPU/CPU implementation over a pure CPU approach for these sequences, which consist of more than 7,500 nucleotides. We also demonstrate that this speedup is consistent over larger numbers and longer lengths of aligned sequences.
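
    For context, a degenerate primer is written in IUPAC codes, and its degeneracy (the number of concrete primers it represents) is the product of the option counts at each position. A small Python sketch that builds the minimal IUPAC consensus over aligned candidate sites and reports its degeneracy; this illustrates the primer representation only, not GPUDePiCt's clustering:

        from math import prod

        # IUPAC nucleotide codes keyed by the set of bases they allow
        IUPAC = {frozenset("A"): "A", frozenset("C"): "C", frozenset("G"): "G",
                 frozenset("T"): "T", frozenset("AG"): "R", frozenset("CT"): "Y",
                 frozenset("GC"): "S", frozenset("AT"): "W", frozenset("GT"): "K",
                 frozenset("AC"): "M", frozenset("CGT"): "B", frozenset("AGT"): "D",
                 frozenset("ACT"): "H", frozenset("ACG"): "V", frozenset("ACGT"): "N"}

        def degenerate_primer(aligned_sites):
            # Smallest IUPAC string covering every (gap-free) aligned sequence
            cols = [frozenset(col) for col in zip(*aligned_sites)]
            primer = "".join(IUPAC[c] for c in cols)
            degeneracy = prod(len(c) for c in cols)   # concrete primers represented
            return primer, degeneracy

        primer, deg = degenerate_primer(["ACGTTG", "ACATTG", "ACGTCG"])
        print(primer, deg)    # ACRTYG 4 -> hybridizes to all three targets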

  8. Monte Carlo-based fluorescence molecular tomography reconstruction method accelerated by a cluster of graphic processing units.

    PubMed

    Quan, Guotao; Gong, Hui; Deng, Yong; Fu, Jianwei; Luo, Qingming

    2011-02-01

    High-speed fluorescence molecular tomography (FMT) reconstruction for 3-D heterogeneous media is still one of the most challenging problems in diffusive optical fluorescence imaging. In this paper, we propose a fast FMT reconstruction method that is based on Monte Carlo (MC) simulation and accelerated by a cluster of graphics processing units (GPUs). Based on the Message Passing Interface standard, we modified the MC code for fast FMT reconstruction, and different Green's functions representing the flux distribution in media are calculated simultaneously by different GPUs in the cluster. A load-balancing method was also developed to increase the computational efficiency. By applying the Fréchet derivative, a Jacobian matrix is formed to reconstruct the distribution of the fluorochromes using the calculated Green's functions. Phantom experiments have shown that only 10 min are required to get reconstruction results with a cluster of 6 GPUs, rather than 6 h with a cluster of multiple dual-Opteron CPU nodes. Because of the advantages of high accuracy and suitability for 3-D heterogeneous media with refractive-index-unmatched boundaries from the MC simulation, the GPU cluster-accelerated method provides a reliable approach to high-speed reconstruction for FMT imaging.

  9. Monte Carlo-based fluorescence molecular tomography reconstruction method accelerated by a cluster of graphic processing units

    NASA Astrophysics Data System (ADS)

    Quan, Guotao; Gong, Hui; Deng, Yong; Fu, Jianwei; Luo, Qingming

    2011-02-01

    High-speed fluorescence molecular tomography (FMT) reconstruction for 3-D heterogeneous media is still one of the most challenging problems in diffusive optical fluorescence imaging. In this paper, we propose a fast FMT reconstruction method that is based on Monte Carlo (MC) simulation and accelerated by a cluster of graphics processing units (GPUs). Based on the Message Passing Interface standard, we modified the MC code for fast FMT reconstruction, and different Green's functions representing the flux distribution in media are calculated simultaneously by different GPUs in the cluster. A load-balancing method was also developed to increase the computational efficiency. By applying the Fréchet derivative, a Jacobian matrix is formed to reconstruct the distribution of the fluorochromes using the calculated Green's functions. Phantom experiments have shown that only 10 min are required to get reconstruction results with a cluster of 6 GPUs, rather than 6 h with a cluster of multiple dual-Opteron CPU nodes. Because of the advantages of high accuracy and suitability for 3-D heterogeneous media with refractive-index-unmatched boundaries from the MC simulation, the GPU cluster-accelerated method provides a reliable approach to high-speed reconstruction for FMT imaging.

  10. Performance of heterogeneous computing with graphics processing unit and many integrated core for hartree potential calculations on a numerical grid.

    PubMed

    Choi, Sunghwan; Kwon, Oh-Kyoung; Kim, Jaewook; Kim, Woo Youn

    2016-09-15

    We investigated the performance of heterogeneous computing with graphics processing units (GPUs) and many integrated core (MIC) with 20 CPU cores (20×CPU). As a practical example toward large scale electronic structure calculations using grid-based methods, we evaluated the Hartree potentials of silver nanoparticles with various sizes (3.1, 3.7, 4.9, 6.1, and 6.9 nm) via a direct integral method supported by the sinc basis set. The so-called work-stealing scheduler was used for efficient heterogeneous computing via the balanced dynamic distribution of workloads between all processors on a given architecture without any prior information on their individual performances. 20×CPU + 1GPU was up to ∼1.5 and ∼3.1 times faster than 1GPU and 20×CPU, respectively. 20×CPU + 2GPU was ∼4.3 times faster than 20×CPU. The performance enhancement by CPU + MIC was considerably lower than expected because of the large initialization overhead of MIC, although its theoretical performance is similar to that of CPU + GPU. © 2016 Wiley Periodicals, Inc.

  11. A graphics processing unit accelerated motion correction algorithm and modular system for real-time fMRI.

    PubMed

    Scheinost, Dustin; Hampson, Michelle; Qiu, Maolin; Bhawnani, Jitendra; Constable, R Todd; Papademetris, Xenophon

    2013-07-01

    Real-time functional magnetic resonance imaging (rt-fMRI) has recently gained interest as a possible means to facilitate the learning of certain behaviors. However, rt-fMRI is limited by processing speed and available software, and continued development is needed for rt-fMRI to progress further and become feasible for clinical use. In this work, we present an open-source rt-fMRI system for biofeedback powered by a novel Graphics Processing Unit (GPU) accelerated motion correction strategy as part of the BioImage Suite project ( www.bioimagesuite.org ). Our system contributes to the development of rt-fMRI by presenting a motion correction algorithm that provides an estimate of motion with essentially no processing delay as well as a modular rt-fMRI system design. Using empirical data from rt-fMRI scans, we assessed the quality of motion correction in this new system. The present algorithm performed comparably to standard (non real-time) offline methods and outperformed other real-time methods based on zero order interpolation of motion parameters. The modular approach to the rt-fMRI system allows the system to be flexible to the experiment and feedback design, a valuable feature for many applications. We illustrate the flexibility of the system by describing several of our ongoing studies. Our hope is that continuing development of open-source rt-fMRI algorithms and software will make this new technology more accessible and adaptable, and will thereby accelerate its application in the clinical and cognitive neurosciences.

  12. PO*WW*ER mobile treatment unit process hazards analysis

    SciTech Connect

    Richardson, R.B.

    1996-06-01

    The objective of this report is to demonstrate that a thorough assessment of the risks associated with the operation of the Rust Geotech patented PO*WW*ER mobile treatment unit (MTU) has been performed and documented. The MTU was developed to treat aqueous mixed wastes at the US Department of Energy (DOE) Albuquerque Operations Office sites. The MTU uses evaporation to separate organics and water from radionuclides and solids, and catalytic oxidation to convert the hazardous constituents into byproducts. This process hazards analysis evaluated a number of accident scenarios not directly related to the operation of the MTU, such as natural phenomena damage and mishandling of chemical containers. Worst-case accident scenarios were further evaluated to determine the risk potential to the MTU and to workers, the public, and the environment. The overall risk to any group from operation of the MTU was determined to be very low; the MTU is classified as a Radiological Facility with low hazards.

  13. Optimization of image processing algorithms on mobile platforms

    NASA Astrophysics Data System (ADS)

    Poudel, Pramod; Shirvaikar, Mukul

    2011-03-01

    This work presents a technique to optimize popular image processing algorithms on mobile platforms such as cell phones, netbooks, and personal digital assistants (PDAs). The increasing demand for video applications like context-aware computing on mobile embedded systems requires the use of computationally intensive image processing algorithms. The system engineer has a mandate to optimize them so as to meet real-time deadlines. A methodology to take advantage of the asymmetric dual-core processor, which includes an ARM and a DSP core supported by shared memory, is presented with implementation details. The target platform chosen is the popular OMAP 3530 processor for embedded media systems. It has an asymmetric dual-core architecture with an ARM Cortex-A8 and a TMS320C64x Digital Signal Processor (DSP). The development platform was the BeagleBoard with 256 MB of NAND RAM and 256 MB of SDRAM memory. The basic image correlation algorithm is chosen for benchmarking, as it finds widespread application in various template matching tasks such as face recognition. The basic algorithm prototypes conform to OpenCV, a popular computer vision library. OpenCV algorithms can be easily ported to the ARM core, which runs a popular operating system such as Linux or Windows CE. However, the DSP is architecturally more efficient at handling DFT algorithms. The algorithms are tested on a variety of images, and performance results are presented measuring the speedup obtained due to the dual-core implementation. A major advantage of this approach is that it allows the ARM processor to perform important real-time tasks while the DSP addresses performance-hungry algorithms.
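
    The reason a DSP (or GPU) wins on correlation is that template matching can be moved to the frequency domain, turning a sliding-window sum into two FFTs and a pointwise multiply. A NumPy sketch of that equivalence, illustrative of the math rather than of the OpenCV/DSP port:

        import numpy as np

        def correlate_fft(image, template):
            # Cross-correlation of template against image via the DFT.
            # The argmax of the returned map is the best top-left offset.
            H, W = image.shape
            h, w = template.shape
            # Zero-pad to the full output size, multiply spectra, invert
            F_img = np.fft.rfft2(image, s=(H + h - 1, W + w - 1))
            F_tpl = np.fft.rfft2(template[::-1, ::-1], s=(H + h - 1, W + w - 1))
            corr = np.fft.irfft2(F_img * F_tpl, s=(H + h - 1, W + w - 1))
            return corr[h - 1:H, w - 1:W]          # valid offsets only

        rng = np.random.default_rng(4)
        img = rng.random((240, 320))
        tpl = img[100:116, 200:216].copy()          # a known 16x16 patch
        y, x = np.unravel_index(correlate_fft(img, tpl).argmax(), (225, 305))
        print(y, x)                                  # -> 100 200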

  14. Monte Carlo standardless approach for laser induced breakdown spectroscopy based on massive parallel graphic processing unit computing

    NASA Astrophysics Data System (ADS)

    Demidov, A.; Eschlböck-Fuchs, S.; Kazakov, A. Ya.; Gornushkin, I. B.; Kolmhofer, P. J.; Pedarnig, J. D.; Huber, N.; Heitz, J.; Schmid, T.; Rössler, R.; Panne, U.

    2016-11-01

    An improved Monte Carlo (MC) method for standardless analysis in laser-induced breakdown spectroscopy (LIBS) is presented. Concentrations in MC LIBS are found by fitting model-generated synthetic spectra to experimental spectra. The current version of MC LIBS is based on graphics processing unit (GPU) computation and reduces the analysis time to several seconds per spectrum/sample. The previous version of MC LIBS, which was based on central processing unit (CPU) computation, required unacceptably long analysis times of tens of minutes per spectrum/sample. The reduction of the computational time is achieved through massively parallel computing on the GPU, which embeds thousands of co-processors. It is shown that the number of iterations on the GPU exceeds that on the CPU by a factor > 1000 for the 5-dimensional parameter space and yet requires a > 10-fold shorter computational time. The improved GPU MC LIBS outperforms the CPU MC LIBS in terms of accuracy, precision, and analysis time. The performance is tested on LIBS spectra obtained from pelletized powders of metal oxides consisting of CaO, Fe2O3, MgO, and TiO2 that simulate by-products of the steel industry, steel slags. It is demonstrated that GPU-based MC LIBS is capable of rapid multi-element analysis with relative errors between 1 and a few tens of percent, which is sufficient for industrial applications (e.g. steel slag analysis). The results of the improved GPU-based MC LIBS compare favorably with those of the CPU-based MC LIBS as well as with the results of standard calibration-free (CF) LIBS based on the Boltzmann plot method.

  15. Designing and Implementing an OVERFLOW Reader for ParaView and Comparing Performance Between Central Processing Units and Graphical Processing Units

    NASA Technical Reports Server (NTRS)

    Chawner, David M.; Gomez, Ray J.

    2010-01-01

    In the Applied Aerosciences and CFD Branch at Johnson Space Center, computational simulations are run that face many challenges, two of which are the ability to customize software for specialized needs and the need to run simulations as fast as possible. Many different tools are used for running these simulations, and each one has its own pros and cons. Once these simulations are run, software is needed that can visualize the results in an appealing manner. Some of this software is open source, meaning that anyone can edit the source code, make modifications, and distribute it to other users in a future release. This is very useful, especially in this branch, where many different tools are in use. File readers can be written to load any file format into a program, easing the bridge from one tool to another. Programming such a reader requires knowledge of the file format being read as well as the equations necessary to obtain the derived values after loading. These CFD simulations load extremely large files and compute many derived values, and they usually take a few hours to complete, even on the fastest machines. Graphics processing units (GPUs) have traditionally been used to render graphics; however, in recent years GPUs have been applied to more generic workloads because of their speed. Applications run on GPUs have been known to run up to forty times faster than they would on normal central processing units (CPUs). If these CFD programs were extended to run on GPUs, the amount of time they require would be much less, allowing more simulations to be run in the same amount of time and possibly enabling more complex computations.

  16. Graphic Communications--Graphic Arts. Ohio's Competency Analysis Profile.

    ERIC Educational Resources Information Center

    Ohio State Univ., Columbus. Vocational Instructional Materials Lab.

    This Ohio Competency Analysis Profile (OCAP), derived from a modified Developing a Curriculum (DACUM) process, is a current comprehensive and verified employer competency program list for graphic communications--graphic arts. Each unit (with or without subunits) contains competencies and competency builders that identify the occupational,…

  17. Computer Graphics Verification

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Video processing creates technical animation sequences using studio quality equipment to realistically represent fluid flow over space shuttle surfaces, helicopter rotors, and turbine blades. Computer Systems Co-op Tim Weatherford, performing computer graphics verification. Part of Co-op brochure.

  18. Compute-unified device architecture implementation of a block-matching algorithm for multiple graphical processing unit cards

    NASA Astrophysics Data System (ADS)

    Massanes, Francesc; Cadennes, Marie; Brankov, Jovan G.

    2011-07-01

    We describe and evaluate a fast implementation of a classical block-matching motion estimation algorithm for multiple graphical processing units (GPUs) using the compute unified device architecture computing engine. The implemented block-matching algorithm uses the summed absolute difference error criterion and full grid search (FS) for finding the optimal block displacement. In this evaluation, we compared the execution time of a GPU and CPU implementation for images of various sizes, using integer and noninteger search grids. The results show that use of a GPU card can shorten computation time by a factor of 200 times for an integer and 1000 times for a noninteger search grid. The additional speedup for a noninteger search grid comes from the fact that the GPU has built-in hardware for image interpolation. Further, when using multiple GPU cards, the presented evaluation shows the importance of the data splitting method across multiple cards, but an almost linear speedup with the number of cards is achievable. In addition, we compared the execution time of the proposed FS GPU implementation with two existing, highly optimized nonfull grid search CPU-based motion estimation methods, namely the implementation of the Pyramidal Lucas Kanade Optical flow algorithm in OpenCV and the simplified unsymmetrical multi-hexagon search in the H.264/AVC standard. In these comparisons, the FS GPU implementation still showed modest improvement even though the computational complexity of the FS GPU implementation is substantially higher than the non-FS CPU implementations. We also demonstrated that for an image sequence of 720 × 480 pixels in resolution, commonly used in video surveillance, the proposed GPU implementation is sufficiently fast for real-time motion estimation at 30 frames-per-second using two NVIDIA C1060 Tesla GPU cards.
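
    For reference, the full-search SAD criterion evaluates every candidate displacement in a window for every block, which is the combinatorial cost that makes GPUs attractive. A straightforward NumPy version of the per-block search (a sketch, not the CUDA implementation; block size and search radius are illustrative):

        import numpy as np

        def best_displacement(ref, cur, top, left, bsize=16, radius=8):
            # Full-search SAD motion estimation for one block of `cur`
            # located at (top, left), searched over +/- radius pixels in `ref`.
            block = cur[top:top + bsize, left:left + bsize]
            best = (np.inf, (0, 0))
            for dy in range(-radius, radius + 1):          # every integer candidate
                for dx in range(-radius, radius + 1):
                    y, x = top + dy, left + dx
                    if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                        continue                            # candidate leaves the frame
                    cand = ref[y:y + bsize, x:x + bsize]
                    sad = np.abs(block.astype(int) - cand.astype(int)).sum()
                    if sad < best[0]:
                        best = (sad, (dy, dx))
            return best[1]                                  # displacement with minimum SAD

        rng = np.random.default_rng(5)
        ref = rng.integers(0, 256, (480, 720), dtype=np.uint8)
        cur = np.roll(ref, (3, -2), axis=(0, 1))            # global motion of (3, -2)
        print(best_displacement(ref, cur, 64, 64))          # -> (-3, 2)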

  19. Compute-unified device architecture implementation of a block-matching algorithm for multiple graphical processing unit cards.

    PubMed

    Massanes, Francesc; Cadennes, Marie; Brankov, Jovan G

    2011-07-01

    In this paper we describe and evaluate a fast implementation of a classical block matching motion estimation algorithm for multiple Graphical Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) computing engine. The implemented block matching algorithm (BMA) uses summed absolute difference (SAD) error criterion and full grid search (FS) for finding optimal block displacement. In this evaluation we compared the execution time of a GPU and CPU implementation for images of various sizes, using integer and non-integer search grids. The results show that use of a GPU card can shorten computation time by a factor of 200 times for integer and 1000 times for a non-integer search grid. The additional speedup for non-integer search grid comes from the fact that GPU has built-in hardware for image interpolation. Further, when using multiple GPU cards, the presented evaluation shows the importance of the data splitting method across multiple cards, but an almost linear speedup with the number of cards is achievable. In addition we compared execution time of the proposed FS GPU implementation with two existing, highly optimized non-full grid search CPU based motion estimation methods, namely implementation of the Pyramidal Lucas Kanade Optical flow algorithm in OpenCV and Simplified Unsymmetrical multi-Hexagon search in H.264/AVC standard. In these comparisons, FS GPU implementation still showed modest improvement even though the computational complexity of FS GPU implementation is substantially higher than non-FS CPU implementation. We also demonstrated that for an image sequence of 720×480 pixels in resolution, commonly used in video surveillance, the proposed GPU implementation is sufficiently fast for real-time motion estimation at 30 frames-per-second using two NVIDIA C1060 Tesla GPU cards.

  20. A Physics-Based Modeling and Real-Time Simulation of Biomechanical Diffusion Process Through Optical Imaged Alveolar Tissues on Graphical Processing Units

    NASA Astrophysics Data System (ADS)

    Kaya, Ilhan; Santhanam, Anand P.; Lee, Kye-Sung; Meemon, Panomsak; Papp, Nicolene; Rolland, Jannick P.

    Tissue engineering has broad applications, from creating the much-needed engineered tissue and organ structures for regenerative medicine to providing in vitro testbeds for drug testing. In the latter application domain, creating alveolar lung tissue and simulating the diffusion process of oxygen and other possible agents from the air into the blood stream, as well as modeling the removal of carbon dioxide and other possible entities from the blood stream, are of critical importance to simulating lung functions in various environments. In this chapter, we propose a physics-based model to simulate the alveolar gas exchange and the alveolar diffusion process. Tissue engineers, for the first time, may utilize these simulation results to better understand the underlying gas exchange process and properly adjust the tissue growing cycles. In this work, alveolar tissues are imaged by means of an optical coherence microscopy (OCM) system developed in our laboratory. As a consequence, 3D alveoli tissue data with its inherent complex boundary is taken as input to the simulation system, which is based on computational fluid mechanics in simulating the alveolar gas exchange. The visualization and the simulation of diffusion of the air into the blood through the alveoli tissue are performed using a state-of-the-art graphics processing unit (GPU). Results show the real-time simulation of the gas exchange through the 2D alveoli tissue.

  1. Measuring Cognitive Load in Test Items: Static Graphics versus Animated Graphics

    ERIC Educational Resources Information Center

    Dindar, M.; Kabakçi Yurdakul, I.; Inan Dönmez, F.

    2015-01-01

    The majority of multimedia learning studies focus on the use of graphics in learning process but very few of them examine the role of graphics in testing students' knowledge. This study investigates the use of static graphics versus animated graphics in a computer-based English achievement test from a cognitive load theory perspective. Three…

  2. Computer-Aided Process and Tools for Mobile Software Acquisition

    DTIC Science & Technology

    2013-07-30

    The automatically generated statechart code is verified against the execution trace of the mobile apps using logfile-based runtime verification. A case study of formally specifying, validating, and verifying a…

  3. Developing Online Multimodal Verbal Communication to Enhance the Writing Process in an Audio-Graphic Conferencing Environment

    ERIC Educational Resources Information Center

    Ciekanski, Maud; Chanier, Thierry

    2008-01-01

    Over the last decade, most studies in Computer-Mediated Communication (CMC) have highlighted how online synchronous learning environments implement a new literacy related to multimodal communication. The environment used in our experiment is based on a synchronous audio-graphic conferencing tool. This study concerns false beginners in an English…

  4. Design Graphics

    NASA Technical Reports Server (NTRS)

    1990-01-01

    A mathematician, David R. Hedgley, Jr., developed a computer program that considers whether a line in a graphic model of a three-dimensional object should or should not be visible. Known as the Hidden Line Computer Code, the program automatically removes superfluous lines and displays an object from a specific viewpoint, just as the human eye would see it. An example of how one company uses the program is the experience of Birdair, which specializes in production of fabric skylights and stadium covers. The fabric, called SHEERFILL, is a Teflon-coated fiberglass material developed in cooperation with the DuPont Company. SHEERFILL glazed structures are either tension structures or air-supported tension structures. Both are formed by patterned fabric sheets supported by a steel or aluminum frame or cable network. Birdair uses the Hidden Line Computer Code to illustrate a prospective structure to an architect or owner. The program generates a three-dimensional perspective with the hidden lines removed. This program is still used by Birdair and continues to be commercially available to the public.

  5. Graphic engine resource management

    NASA Astrophysics Data System (ADS)

    Bautin, Mikhail; Dwarakinath, Ashok; Chiueh, Tzi-cker

    2008-01-01

    Modern consumer-grade 3D graphic cards boast a computation/memory resource that can easily rival or even exceed that of standard desktop PCs. Although these cards are mainly designed for 3D gaming applications, their enormous computational power has attracted developers to port an increasing number of scientific computation programs to these cards, including matrix computation, collision detection, cryptography, database sorting, etc. As more and more applications run on 3D graphic cards, there is a need to allocate the computation/memory resource on these cards among the sharing applications more fairly and efficiently. In this paper, we describe the design, implementation and evaluation of a Graphic Processing Unit (GPU) scheduler based on Deficit Round Robin scheduling that successfully allocates to every process an equal share of the GPU time regardless of their demand. This scheduler, called GERM, estimates the execution time of each GPU command group based on dynamically collected statistics, and controls each process's GPU command production rate through its CPU scheduling priority. Measurements on the first GERM prototype show that this approach can keep the maximal GPU time consumption difference among concurrent GPU processes consistently below 5% for a variety of application mixes.
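
    Deficit Round Robin can be sketched independently of the GPU specifics: each process keeps a deficit counter of GPU time, receives a fixed quantum per round, and may dispatch queued command groups only while their estimated cost fits within the counter. A toy Python version of that loop; the quantum and cost values are placeholders, not GERM's estimates:

        from collections import deque

        def drr_schedule(queues, quantum_us=500, rounds=100):
            # queues: {pid: deque of estimated command-group costs in microseconds}
            # Dispatches groups so each process gets an equal share of GPU time.
            deficit = {pid: 0 for pid in queues}
            gpu_time = {pid: 0 for pid in queues}
            for _ in range(rounds):
                for pid, q in queues.items():
                    if not q:
                        continue
                    deficit[pid] += quantum_us              # per-round allowance
                    while q and q[0] <= deficit[pid]:       # dispatch while it fits
                        cost = q.popleft()
                        deficit[pid] -= cost
                        gpu_time[pid] += cost               # 'execute' on the GPU
                    if not q:
                        deficit[pid] = 0                    # idle queues keep no credit
            return gpu_time

        # A heavy process (big command groups) vs. a light one: shares stay equal
        heavy = deque([400] * 200)
        light = deque([50] * 1600)
        print(drr_schedule({"heavy": heavy, "light": light}))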

  6. Interoperability framework for communication between processes running on different mobile operating systems

    NASA Astrophysics Data System (ADS)

    Gal, A.; Filip, I.; Dragan, F.

    2016-02-01

    In an era where mobile communication is all around us, the need to communicate among the wide variety of devices we have available becomes ever more pressing. The major impediment to achieving communication between these devices is the incompatibility of the operating systems running on them. In the present paper we propose a framework that makes interoperation possible between processes running on different mobile operating systems. The interoperability process makes use of any communication environment made available by the mobile devices on which the processes are installed. The communication environment is chosen so that data transfer between the mobile devices is optimal. The paper defines the architecture of the framework, expanding on the functionality of and the interrelations between the modules that make it up. As proof of concept, we use three different mobile operating systems installed on three different types of mobile devices. Depending on various factors related to the structure of the mobile devices and the type of data to be transferred, the framework establishes the data transfer protocol to be used. The framework automates the interoperability process, with user intervention limited to a simple selection from the options that the framework suggests based on a full analysis of the structural and functional elements of the mobile devices involved in the process.

  7. Space Spurred Computer Graphics

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Dicomed Corporation was asked by NASA in the early 1970s to develop processing capabilities for recording images sent from Mars by Viking spacecraft. The company produced a film recorder which increased the intensity levels and the capability for color recording. This development led to a strong technology base resulting in sophisticated computer graphics equipment. Dicomed systems are used to record CAD (computer aided design) and CAM (computer aided manufacturing) equipment, to update maps and produce computer generated animation.

  8. Graphic Design Is Not a Medium.

    ERIC Educational Resources Information Center

    Gruber, John Edward, Jr.

    2001-01-01

    Discusses graphic design and reviews its development from analog processes to a digital tool with the use of computers. Topics include graphical user interfaces; the need for visual communication concepts; transmedia as opposed to repurposing; and graphic design instruction in higher education. (LRW)

  9. Mathematical Creative Activity and the Graphic Calculator

    ERIC Educational Resources Information Center

    Duda, Janina

    2011-01-01

    Teaching mathematics using graphic calculators has been an issue of didactic discussions for years. Finding ways in which graphic calculators can enrich the development process of creative activity in mathematically gifted students between the ages of 16-17 is the focus of this article. Research was conducted using graphic calculators with…

  10. Space-Time Processing for Tactical Mobile Ad Hoc Networks

    DTIC Science & Technology

    2010-05-01

    Sagnik Ghosh, Bhaskar D. Rao, and James R. Zeidler; "Outage-Efficient Strategies for Multiuser MIMO Networks with Channel Distribution Information… Distributed cooperative routing and hybrid ARQ in MIMO-BLAST ad hoc networks", submitted to IEEE Transactions on Communications, 2009. Davide…mobile ad hoc networks using polling techniques for MIMO nodes. Centralized and distributed topology control algorithms have been developed to

  11. Dynamic stepping information process method in mobile bio-sensing computing environments.

    PubMed

    Lee, Tae-Gyu; Lee, Seong-Hoon

    2014-01-01

    Recently, interest in human longevity free from disease has converged into a single system framework along with the development of mobile computing environments, the diversification of remote medical systems, and the aging society. Such a converged system enables the implementation of a bioinformatics system that provides various supplementary information services by sensing and gathering the health conditions and bio-information of mobile users to build medical information. An existing bio-information system performs a static, unchanging process once the bio-information process defined at initial system configuration has been executed. However, such a static process is ineffective for a mobile bio-information system performing mobile computing. In particular, changing the process configuration or method carries the inconvenient duty of redefining and re-executing the process from initialization. This study proposes a dynamic process design and execution method to overcome such ineffectiveness.

  12. Mobilization

    DTIC Science & Technology

    1987-01-01

    istic and romantic emotionalism that typifies this genre. Longino, James C., et al. "A Study of World War Procurement and Industrial Mobilization…States. Harrisburg, PA: Military Service Publishing Co., 1941. CARL 355.22 J72b. Written in rough prose, this World War II era document explains the

  13. The Longitudinal Impact of Cognitive Speed of Processing Training on Driving Mobility

    ERIC Educational Resources Information Center

    Edwards, Jerri D.; Myers, Charlsie; Ross, Lesley A.; Roenker, Daniel L.; Cissell, Gayla M.; McLaughlin, Alexis M.; Ball, Karlene K.

    2009-01-01

    Purpose: To examine how cognitive speed of processing training affects driving mobility across a 3-year period among older drivers. Design and Methods: Older drivers with poor Useful Field of View (UFOV) test performance (indicating greater risk for subsequent at-fault crashes and mobility declines) were randomly assigned to either a speed of…

  14. Mobile Technology and CAD Technology Integration in Teaching Architectural Design Process for Producing Creative Product

    ERIC Educational Resources Information Center

    Bin Hassan, Isham Shah; Ismail, Mohd Arif; Mustafa, Ramlee

    2011-01-01

    The purpose of this research is to examine the effect of integrating mobile and CAD technology in teaching the architectural design process to Malaysian polytechnic architectural students in producing a creative product. The website is set up based on Carroll's minimalist theory, while mobile and CAD technology integration is based on Brown and…

  15. Forming Professional Mobility in the Process of Future Master Philologists' Training in Ukraine and Abroad

    ERIC Educational Resources Information Center

    Semenog, Olena

    2016-01-01

    On the basis of scientific research, the experience of higher education institutions in Ukraine and abroad (the USA, the Swiss Confederation) concerning the forming of future philologists' professional mobility in the process of Master's training has been generalized. The overview shows that professional mobility is an essential indicator of…

  16. Computer-Aided Process and Tools for Mobile Software Acquisition

    DTIC Science & Technology

    2013-04-01

    file-based runtime verification. A case study of formally specifying, validating, and verifying a set of requirements for an iPhone application that ... Center for Strategic and International Studies; The Making of a DoD Acquisition Lead System Integrator (LSI); Paul Montgomery, Ron Carlson, and John ... code against the execution trace of the mobile apps using log file-based runtime verification. A case study of formally specifying, validating, and ...

  17. Fast point-based method of a computer-generated hologram for a triangle-patch model by using a graphics processing unit.

    PubMed

    Sugawara, Takuya; Ogihara, Yuki; Sakamoto, Yuji

    2016-01-20

    The point-based method and the fast-Fourier-transform-based method are commonly used to calculate computer-generated holograms. This paper proposes a novel fast calculation method for a patch model that uses the point-based method. The method's calculation time is proportional to the number of patches rather than to the number of point light sources, which makes it suitable for quickly calculating a wide area covered by patches. Experiments using a graphics processing unit indicated that the proposed method is about 8 times or more faster than the ordinary point-based method.
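
    The ordinary point-based baseline against which the patch method is benchmarked can be summarized in a few lines: every hologram pixel accumulates the spherical-wave contribution of every point light source, giving a cost proportional to pixels times points. The sketch below illustrates that baseline only; the function name, parameters, and example values are illustrative assumptions, and the paper's patch-based acceleration is not reproduced here.

        import numpy as np

        def point_based_hologram(points, amplitudes, wavelength, pitch, nx, ny):
            # Ordinary point-based method: O(pixels x points). Each pixel
            # accumulates the complex spherical wave from every point source.
            k = 2.0 * np.pi / wavelength
            x = (np.arange(nx) - nx / 2.0) * pitch
            y = (np.arange(ny) - ny / 2.0) * pitch
            X, Y = np.meshgrid(x, y)
            field = np.zeros((ny, nx), dtype=complex)
            for (px, py, pz), a in zip(points, amplitudes):
                r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
                field += a * np.exp(1j * k * r) / r
            return np.angle(field)  # phase-only (kinoform) hologram

        # Three object points 0.1 m behind a 1024 x 1024, 8-um-pitch hologram.
        pts = [(0.0, 0.0, 0.1), (1e-3, 0.0, 0.1), (0.0, 1e-3, 0.1)]
        hologram = point_based_hologram(pts, [1.0, 1.0, 1.0], 633e-9, 8e-6, 1024, 1024)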

  18. Finite difference calculation of acoustic streaming including the boundary layer phenomena in an ultrasonic air pump on graphics processing unit array

    NASA Astrophysics Data System (ADS)

    Wada, Yuji; Koyama, Daisuke; Nakamura, Kentaro

    2012-09-01

    Direct finite difference fluid simulation of acoustic streaming on a fine-meshed three-dimensional model using a graphics processing unit (GPU)-oriented calculation array is discussed. Airflow due to an acoustic traveling wave is induced when an intense sound field is generated in the gap between a bending transducer and a reflector. The calculated pressure distribution showed good agreement with measurements. In addition, several flow vortices were observed near the boundaries of the reflector and the transducer; such vortices have often been discussed for acoustic tubes near a boundary but had not previously been observed in calculations for an ultrasonic air pump of this type.

  19. Weather information network including graphical display

    NASA Technical Reports Server (NTRS)

    Leger, Daniel R. (Inventor); Burdon, David (Inventor); Son, Robert S. (Inventor); Martin, Kevin D. (Inventor); Harrison, John (Inventor); Hughes, Keith R. (Inventor)

    2006-01-01

    An apparatus for providing weather information onboard an aircraft includes a processor unit and a graphical user interface. The processor unit processes weather information after it is received onboard the aircraft from a ground-based source, and the graphical user interface provides a graphical presentation of the weather information to a user onboard the aircraft. Preferably, the graphical user interface includes one or more user-selectable options for graphically displaying at least one of convection information, turbulence information, icing information, weather satellite information, SIGMET information, significant weather prognosis information, and winds aloft information.

  20. Graphical fiber shaping control interface

    NASA Astrophysics Data System (ADS)

    Basso, Eric T.; Ninomiya, Yasuyuki

    2016-03-01

    In this paper, we present an improved graphical user interface for defining single-pass novel shaping techniques on glass processing machines that allows for streamlined process development. This approach offers unique modularity and debugging capability to researchers during the process development phase not usually afforded with similar scripting languages.

  1. Gasoline from coal in the state of Illinois: feasibility study. Volume I. Design. [KBW gasification process, ICI low-pressure methanol process and Mobil M-gasoline process

    SciTech Connect

    Not Available

    1980-01-01

    Volume 1 describes the proposed plant: the KBW gasification process, the ICI low-pressure methanol process, and the Mobil M-gasoline process, together with ancillary processes such as the oxygen plant, shift process, RECTISOL purification process, sulfur recovery equipment, and pollution control equipment. Numerous engineering diagrams are included. (LTN)

  2. Mobile Phone Service Process Hiccups at Cellular Inc.

    ERIC Educational Resources Information Center

    Edgington, Theresa M.

    2010-01-01

    This teaching case documents an actual case of process execution and failure. The case is useful in MIS introductory courses seeking to demonstrate the interdependencies within a business process, and the concept of cascading failure at the process level. This case demonstrates benefits and potential problems with information technology systems,…

  3. A graphical language for reliability model generation

    NASA Technical Reports Server (NTRS)

    Howell, Sandra V.; Bavuso, Salvatore J.; Haley, Pamela J.

    1990-01-01

    A graphical interface capability of the hybrid automated reliability predictor (HARP) is described. The graphics-oriented (GO) module provides the user with a graphical language for modeling system failure modes through the selection of various fault tree gates, including sequence dependency gates, or by a Markov chain. With this graphical input language, a fault tree becomes a convenient notation for describing a system. In accounting for any sequence dependencies, HARP converts the fault-tree notation to a complex stochastic process that is reduced to a Markov chain which it can then solve for system reliability. The graphics capability is available for use on an IBM-compatible PC, a Sun, and a VAX workstation. The GO module is written in the C programming language and uses the Graphical Kernel System (GKS) standard for graphics implementation. The PC, VAX, and Sun versions of the HARP GO module are currently in beta-testing.

  4. GRASP/Ada: Graphical Representations of Algorithms, Structures, and Processes for Ada. The development of a program analysis environment for Ada: Reverse engineering tools for Ada, task 2, phase 3

    NASA Technical Reports Server (NTRS)

    Cross, James H., II

    1991-01-01

    The main objective is the investigation, formulation, and generation of graphical representations of algorithms, structures, and processes for Ada (GRASP/Ada). The presented task, in which various graphical representations that can be extracted or generated from source code are described and categorized, is focused on reverse engineering. The following subject areas are covered: the system model; control structure diagram generator; object oriented design diagram generator; user interface; and the GRASP library.

  5. MPE graphics -- Scalable X11 graphics in MPI

    SciTech Connect

    Gropp, W.; Karrels, E.; Lusk, E.

    1994-12-31

    As parallel programs enter the mainstream, they need to provide the same facilities and ease-of-use features expected of uniprocessor programs. For many applications, this means that they need to provide graphical output. This talk discusses a library of routines that provide scalable X Window System graphics. These routines make use of the MPI message-passing standard to provide a safe and reliable system that can be easily used in parallel programs. At the same time they encapsulate commonly-used services to provide a convenient interface to X graphics facilities. The easiest way to provide X11 graphics to a parallel program is to allow each process to draw on the same X11 Window. That is, each process opens a connection to the X11 server and draws directly to it. In one sense, this is as scalable a system as possible, since the single graphics display is an unavoidable point of sequential access. However, in reality, an X server can only accept a relatively small number of connections. In addition, the latency associated with each transmission between a parallel process and the X Window server is relatively high. This talk addresses these issues.

  6. The use of mobile computational technology in the nursing process: a new challenge for Brazilian nurses.

    PubMed

    Sperandio, Dircelene Jussara; Evora, Yolanda Dora Martinez

    2009-01-01

    The purpose of this study was to analyze the use of a handheld mobile device with an integrated wireless network interface to help nurses document the nursing process. The system is structured in five modules that allow nurses to access and document data related to vital signs, hydroelectrolytic balance, assessment, and nursing prescription at the point of care, with transmission of data in real time. The results demonstrated that the mobile computer technology provided mobility for nurses and facilitated communication and documentation of care. In addition, real-time documentation proved more efficient than a manual documentation system.

  7. Repellency Awareness Graphic

    EPA Pesticide Factsheets

    Companies can apply to use the voluntary new graphic on product labels of skin-applied insect repellents. This graphic is intended to help consumers easily identify the protection time for mosquitoes and ticks and select appropriately.

  8. Chemical Effects in the Separation Process of a Differential Mobility / Mass Spectrometer System

    PubMed Central

    Schneider, Bradley B.; Covey, Thomas R.; Coy, Stephen L.; Krylov, Evgeny V.; Nazarov, Erkinjon G.

    2013-01-01

    In differential mobility spectrometry (DMS, also referred to as high field asymmetric waveform ion mobility spectrometry, FAIMS), ions are separated on the basis of the difference in their mobility under high and low electric fields. The addition of polar modifiers to the gas transporting the ions through a DMS enhances the formation of clusters in a field-dependent way and thus amplifies the high- and low-field mobility difference, resulting in increased peak capacity and separation power. Observations of the increase in mobility field dependence are consistent with a cluster formation model, also referred to as the dynamic cluster-decluster model. The uniqueness of chemical interactions that occur between an ion and cluster-forming neutrals increases the selectivity of the separation, and the depression of low-field mobility relative to high-field mobility increases the compensation voltage and peak capacity. The effect of polar modifiers on the peak capacity across a broad range of chemicals has been investigated. We discuss the theoretical underpinnings which explain the observed effects. In contrast to the result from polar modifiers, we find that using mixtures of inert gases as the transport gas improves resolution by reducing peak width but has very little effect on peak capacity or selectivity. Inert gases do not cluster and thus do not reduce low-field mobility relative to high-field mobility. The observed changes in the differential mobility α parameter exhibited by different classes of compounds when the transport gas contains polar modifiers or has a significant fraction of inert gas can be explained on the basis of the physical mechanisms involved in the separation processes. PMID:20121077
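
    For background, the differential mobility α parameter mentioned above is conventionally defined in the DMS literature through the field dependence of the ion mobility coefficient; one standard form (stated here as general background, not as this paper's derivation) is

        \[
        K\!\left(\frac{E}{N}\right) = K(0)\left[1 + \alpha\!\left(\frac{E}{N}\right)\right],
        \qquad
        \alpha\!\left(\frac{E}{N}\right) = \alpha_2 \left(\frac{E}{N}\right)^{2} + \alpha_4 \left(\frac{E}{N}\right)^{4} + \cdots
        \]

    In this picture, clustering polar modifiers make α more strongly field dependent, which is why they raise compensation voltage and peak capacity, whereas non-clustering inert gases leave α largely unchanged.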

  9. Graphics processing unit (GPU) implementation of image processing algorithms to improve system performance of the control acquisition, processing, and image display system (CAPIDS) of the micro-angiographic fluoroscope (MAF)

    NASA Astrophysics Data System (ADS)

    Swetadri Vasan, S. N.; Ionita, Ciprian N.; Titus, A. H.; Cartwright, A. N.; Bednarek, D. R.; Rudin, S.

    2012-03-01

    We present the image processing upgrades implemented on a Graphics Processing Unit (GPU) in the Control, Acquisition, Processing, and Image Display System (CAPIDS) for the custom Micro-Angiographic Fluoroscope (MAF) detector. Most of the image processing currently implemented in the CAPIDS system is pixel independent; that is, the operation on each pixel is the same and the operation on one does not depend upon the result from the operation on the other, allowing the entire image to be processed in parallel. GPU hardware was developed for this kind of massive parallel processing implementation. Thus for an algorithm which has a high amount of parallelism, a GPU implementation is much faster than a CPU implementation. The image processing algorithm upgrades implemented on the CAPIDS system include flat field correction, temporal filtering, image subtraction, roadmap mask generation and display window and leveling. A comparison between the previous and the upgraded version of CAPIDS has been presented, to demonstrate how the improvement is achieved. By performing the image processing on a GPU, significant improvements (with respect to timing or frame rate) have been achieved, including stable operation of the system at 30 fps during a fluoroscopy run, a DSA run, a roadmap procedure and automatic image windowing and leveling during each frame.

  10. Graphics Processing Unit (GPU) implementation of image processing algorithms to improve system performance of the Control, Acquisition, Processing, and Image Display System (CAPIDS) of the Micro-Angiographic Fluoroscope (MAF).

    PubMed

    Vasan, S N Swetadri; Ionita, Ciprian N; Titus, A H; Cartwright, A N; Bednarek, D R; Rudin, S

    2012-02-23

    We present the image processing upgrades implemented on a Graphics Processing Unit (GPU) in the Control, Acquisition, Processing, and Image Display System (CAPIDS) for the custom Micro-Angiographic Fluoroscope (MAF) detector. Most of the image processing currently implemented in the CAPIDS system is pixel independent; that is, the operation on each pixel is the same and the operation on one does not depend upon the result from the operation on the other, allowing the entire image to be processed in parallel. GPU hardware was developed for this kind of massive parallel processing implementation. Thus for an algorithm which has a high amount of parallelism, a GPU implementation is much faster than a CPU implementation. The image processing algorithm upgrades implemented on the CAPIDS system include flat field correction, temporal filtering, image subtraction, roadmap mask generation and display window and leveling. A comparison between the previous and the upgraded version of CAPIDS has been presented, to demonstrate how the improvement is achieved. By performing the image processing on a GPU, significant improvements (with respect to timing or frame rate) have been achieved, including stable operation of the system at 30 fps during a fluoroscopy run, a DSA run, a roadmap procedure and automatic image windowing and leveling during each frame.
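
    The pixel-independent operations listed in this abstract are exactly the kind that map onto one GPU thread per pixel. The sketch below expresses three of them with NumPy array operations standing in for the GPU kernels; the function names, constants, and toy frame sizes are illustrative assumptions, not the CAPIDS implementation.

        import numpy as np

        def flat_field_correct(frame, dark, flat):
            # Per-pixel gain/offset correction: each output pixel depends only
            # on the same pixel of the inputs, i.e., one GPU thread per pixel.
            return (frame - dark) / np.maximum(flat - dark, 1e-6)

        def temporal_filter(prev_filtered, frame, alpha=0.25):
            # Recursive (exponential) temporal filter, also pixel-independent.
            return alpha * frame + (1.0 - alpha) * prev_filtered

        def subtract_mask(frame, mask):
            # Image subtraction, as used for DSA and roadmap display.
            return frame - mask

        # Toy 1024 x 1024 frame pipeline.
        rng = np.random.default_rng(0)
        dark = rng.normal(10.0, 1.0, (1024, 1024))
        flat = rng.normal(200.0, 5.0, (1024, 1024))
        frame = rng.normal(120.0, 8.0, (1024, 1024))
        corrected = flat_field_correct(frame, dark, flat)
        filtered = temporal_filter(corrected, corrected)
        roadmap = subtract_mask(filtered, corrected)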

  11. Space-Time Processing for Tactical Mobile Ad Hoc Networks

    DTIC Science & Technology

    2008-08-01

    Excerpt from the report's table of contents: ... Maximization in Multi-User, MIMO Channels with Linear Processing ... 58; 2.9 Using Feedback in Ad Hoc Networks ... 65; 2.10 Feedback MIMO ... in MIMO Ad Hoc Interference Networks ... 75; 2.12 ...

  12. Shuttle Systems 3-D Applications: Application of 3-D Graphics in Engineering Training for Shuttle Ground Processing

    NASA Technical Reports Server (NTRS)

    Godfrey, Gary S.

    2003-01-01

    This project illustrates an animation of the orbiter mate to the external tank, an animation of the OMS POD installation to the orbiter, and a simulation of the landing gear mechanism at the Kennedy Space Center. A detailed storyboard was created to reflect each animation or simulation. Solid models were collected and translated into Pro/Engineer's prt and asm formats. These solid models included computer files of the: orbiter, external tank, solid rocket booster, mobile launch platform, transporter, vehicle assembly building, OMS POD fixture, and landing gear. A depository of the above solid models was established. These solid models were translated into several formats. This depository contained the following files: stl for stereolithography, stp for neutral file work, shrinkwrap for compression, tiff for photoshop work, jpeg for Internet use, and prt and asm for Pro/Engineer use. Solid models were created of the material handling sling, bay 3 platforms, and orbiter contact points. Animations were developed using mechanisms to reflect each storyboard. Every effort was made to build all models technically correct for engineering use. The result was an animated routine that could be used by NASA for training material handlers and uncovering engineering safety issues.

  13. Twitter Micro-Blogging Based Mobile Learning Approach to Enhance the Agriculture Education Process

    ERIC Educational Resources Information Center

    Dissanayeke, Uvasara; Hewagamage, K. P.; Ramberg, Robert; Wikramanayake, G. N.

    2013-01-01

    The study examines how to introduce mobile learning within the domain of agriculture so as to enhance the agriculture education process. We propose to use Activity Theory together with other methodologies, such as participatory methods, to design, implement, and evaluate mLearning activities. The study explores the process of introducing…

  14. Developing a Mobile Application "Educational Process Remote Management System" on the Android Operating System

    ERIC Educational Resources Information Center

    Abildinova, Gulmira M.; Alzhanov, Aitugan K.; Ospanova, Nazira N.; Taybaldieva, Zhymatay; Baigojanova, Dametken S.; Pashovkin, Nikita O.

    2016-01-01

    Nowadays, when there is a need to introduce various innovations into the educational process, most efforts are aimed at simplifying the learning process. To that end, electronic textbooks, testing systems and other software is being developed. Most of them are intended to run on personal computers with limited mobility. Smart education is…

  15. Effects of Mobile Instant Messaging on Collaborative Learning Processes and Outcomes: The Case of South Korea

    ERIC Educational Resources Information Center

    Kim, Hyewon; Lee, MiYoung; Kim, Minjeong

    2014-01-01

    The purpose of this paper was to investigate the effects of mobile instant messaging on collaborative learning processes and outcomes. The collaborative processes were measured in terms of different types of interactions. We measured the outcomes of the collaborations through both the students' taskwork and their teamwork. The collaborative…

  16. ICT and mobile health to improve clinical process delivery. a research project for therapy management process innovation.

    PubMed

    Locatelli, Paolo; Montefusco, Vittorio; Sini, Elena; Restifo, Nicola; Facchini, Roberta; Torresani, Michele

    2013-01-01

    The volume and complexity of clinical and administrative information make Information and Communication Technologies (ICTs) essential for running and innovating healthcare. This paper describes a project aimed at designing, developing, and implementing a set of organizational models, acknowledged procedures, and ICT tools (Mobile & Wireless solutions and Automatic Identification and Data Capture technologies) to improve the support, safety, reliability, and traceability of a specific therapy management process (stem cells). The value of the project lies in designing a solution based on mobile and identification technology in tight collaboration with physicians and the actors involved in the process, to ensure usability and effectiveness in process management.

  17. Real-time simulation of a spiking neural network model of the basal ganglia circuitry using general purpose computing on graphics processing units.

    PubMed

    Igarashi, Jun; Shouno, Osamu; Fukai, Tomoki; Tsujino, Hiroshi

    2011-11-01

    Real-time simulation of a biologically realistic spiking neural network is necessary for evaluation of its capacity to interact with real environments. However, the real-time simulation of such a neural network is difficult due to its high computational costs that arise from two factors: (1) vast network size and (2) the complicated dynamics of biologically realistic neurons. In order to address these problems, mainly the latter, we chose to use general purpose computing on graphics processing units (GPGPUs) for simulation of such a neural network, taking advantage of the powerful computational capability of a graphics processing unit (GPU). As a target for real-time simulation, we used a model of the basal ganglia that has been developed according to electrophysiological and anatomical knowledge. The model consists of heterogeneous populations of 370 spiking model neurons, including computationally heavy conductance-based models, connected by 11,002 synapses. Simulation of the model has not yet been performed in real-time using a general computing server. By parallelization of the model on the NVIDIA Geforce GTX 280 GPU in data-parallel and task-parallel fashion, faster-than-real-time simulation was robustly realized with only one-third of the GPU's total computational resources. Furthermore, we used the GPU's full computational resources to perform faster-than-real-time simulation of three instances of the basal ganglia model; these instances consisted of 1100 neurons and 33,006 synapses and were synchronized at each calculation step. Finally, we developed software for simultaneous visualization of faster-than-real-time simulation output. These results suggest the potential power of GPGPU techniques in real-time simulation of realistic neural networks.
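
    The data-parallel structure that makes GPGPU attractive here is that every neuron executes the same state-update arithmetic at each time step. As a simplified stand-in for the conductance-based models in the abstract, the sketch below advances a vectorized leaky integrate-and-fire population; all names, constants, and the connectivity are illustrative assumptions, not the authors' basal ganglia model.

        import numpy as np

        def lif_step(v, i_syn, spikes, w, i_ext, dt=1e-4, tau=0.02,
                     v_rest=-0.07, v_th=-0.05, v_reset=-0.07):
            # One data-parallel update: identical arithmetic for every neuron,
            # which is exactly what maps well onto GPU threads.
            i_syn = 0.9 * i_syn + w @ spikes            # decaying synaptic input
            v = v + dt / tau * (v_rest - v) + (i_syn + i_ext) * dt
            fired = v >= v_th
            return np.where(fired, v_reset, v), i_syn, fired.astype(float)

        n = 370  # population size of the basal ganglia model in the abstract
        rng = np.random.default_rng(1)
        w = rng.normal(0.0, 0.5, (n, n)) * (rng.random((n, n)) < 0.08)
        v, i_syn, spikes = np.full(n, -0.07), np.zeros(n), np.zeros(n)
        for _ in range(10000):  # 1 s of simulated time at dt = 0.1 ms
            v, i_syn, spikes = lif_step(v, i_syn, spikes, w,
                                        i_ext=rng.normal(1.2, 0.4, n))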

  18. Emergency healthcare process automation using mobile computing and cloud services.

    PubMed

    Poulymenopoulou, M; Malamateniou, F; Vassilacopoulos, G

    2012-10-01

    Emergency care is basically concerned with the provision of pre-hospital and in-hospital medical and/or paramedical services, and it typically involves a wide variety of interdependent and distributed activities that can be interconnected to form emergency care processes within and between Emergency Medical Service (EMS) agencies and hospitals. Hence, in developing an information system for emergency care processes, it is essential to support individual process activities and to satisfy collaboration and coordination needs by providing ready access to patient and operational information regardless of location and time. Filling this information gap by enabling the provision of the right information, to the right people, at the right time raises new challenges, including the specification of a common information format, interoperability among heterogeneous institutional information systems, and the development of new, ubiquitous trans-institutional systems. This paper is concerned with the development of integrated computer support for emergency care processes by evolving and cross-linking institutional healthcare systems. To this end, an integrated EMS cloud-based architecture has been developed that allows authorized users to access emergency case information in standardized document form, as proposed by the Integrating the Healthcare Enterprise (IHE) profile; uses the Organization for the Advancement of Structured Information Standards (OASIS) standard Emergency Data Exchange Language (EDXL) Hospital Availability Exchange (HAVE) for exchanging operational data with hospitals; and incorporates an intelligent module that supports triaging and selecting the most appropriate ambulances and hospitals for each case.

  19. A service protocol for post-processing of medical images on the mobile device

    NASA Astrophysics Data System (ADS)

    He, Longjun; Ming, Xing; Xu, Lang; Liu, Qian

    2014-03-01

    With growing computing capability and display size, mobile devices have come into use as tools to help clinicians view patient information and medical images anywhere and anytime. Transferring medical images with large data sizes from a picture archiving and communication system to a mobile client is difficult and time-consuming, since the wireless network is unstable and limited in bandwidth. Moreover, limited by computing capability, memory, and battery endurance, it is hard to provide a satisfactory quality of experience for radiologists performing complex post-processing of medical images on a mobile device, such as real-time direct interactive three-dimensional visualization. In this work, remote rendering technology is employed to implement the post-processing of medical images instead of local rendering, and a service protocol is developed to standardize the communication between the render server and the mobile client. So that mobile devices with different platforms can access post-processing of medical images, the protocol is described in the Extensible Markup Language and contains four main parts: user authentication, medical image query/retrieval, 2D post-processing (e.g., window leveling, pixel-value readout) and 3D post-processing (e.g., maximum intensity projection, multi-planar reconstruction, curved planar reformation and direct volume rendering). An instance was then implemented to verify the protocol; it lets the mobile device access post-processing services on the render server via a client application or a web page.
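
    Because the protocol is described in XML, a client request can be assembled with any standard XML library. The fragment below sketches what a 2D window-leveling request might look like; the element names, attributes, and values are purely hypothetical, since the paper does not publish its schema.

        import xml.etree.ElementTree as ET

        # Build a hypothetical 2D post-processing request. Every tag and
        # value below is an illustrative assumption, not the paper's format.
        req = ET.Element("PostProcessingRequest", version="1.0")
        auth = ET.SubElement(req, "UserAuthentication")
        ET.SubElement(auth, "Token").text = "session-token-placeholder"
        img = ET.SubElement(req, "ImageReference")
        ET.SubElement(img, "StudyUID").text = "1.2.840.113619.2.55.0"
        op = ET.SubElement(req, "Operation", type="WindowLevel")
        ET.SubElement(op, "WindowCenter").text = "40"
        ET.SubElement(op, "WindowWidth").text = "400"
        print(ET.tostring(req, encoding="unicode"))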

  20. Adaptive sampling for learning gaussian processes using mobile sensor networks.

    PubMed

    Xu, Yunfei; Choi, Jongeun

    2011-01-01

    This paper presents a novel class of self-organizing sensing agents that adaptively learn an anisotropic, spatio-temporal Gaussian process using noisy measurements and move in order to improve the quality of the estimated covariance function. This approach is based on a class of anisotropic covariance functions of Gaussian processes introduced to model a broad range of spatio-temporal physical phenomena. The covariance function is assumed to be unknown a priori. Hence, it is estimated by the maximum a posteriori probability (MAP) estimator. The prediction of the field of interest is then obtained based on the MAP estimate of the covariance function. An optimal sampling strategy is proposed to minimize the information-theoretic cost function of the Fisher Information Matrix. Simulation results demonstrate the effectiveness and the adaptability of the proposed scheme.

  1. Adaptive Sampling for Learning Gaussian Processes Using Mobile Sensor Networks

    PubMed Central

    Xu, Yunfei; Choi, Jongeun

    2011-01-01

    This paper presents a novel class of self-organizing sensing agents that adaptively learn an anisotropic, spatio-temporal Gaussian process using noisy measurements and move in order to improve the quality of the estimated covariance function. This approach is based on a class of anisotropic covariance functions of Gaussian processes introduced to model a broad range of spatio-temporal physical phenomena. The covariance function is assumed to be unknown a priori. Hence, it is estimated by the maximum a posteriori probability (MAP) estimator. The prediction of the field of interest is then obtained based on the MAP estimate of the covariance function. An optimal sampling strategy is proposed to minimize the information-theoretic cost function of the Fisher Information Matrix. Simulation results demonstrate the effectiveness and the adaptability of the proposed scheme. PMID:22163785
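
    As background for the MAP step described above, the sketch below fits the hyperparameters of an anisotropic squared-exponential covariance to noisy samples by minimizing a negative log-posterior, then predicts the field with the estimated covariance. The covariance family, the weak prior, and all constants are assumptions for illustration; the paper's covariance class and estimator are more general.

        import numpy as np
        from scipy.optimize import minimize

        def aniso_cov(X1, X2, sigma_f, lx, ly):
            # Anisotropic squared-exponential covariance with a separate
            # length scale per spatial axis.
            d = (X1[:, None, :] - X2[None, :, :]) / np.array([lx, ly])
            return sigma_f ** 2 * np.exp(-0.5 * np.sum(d ** 2, axis=-1))

        def neg_log_posterior(theta, X, y, noise=0.1):
            sigma_f, lx, ly = np.exp(theta)  # log-parameterization stays positive
            K = aniso_cov(X, X, sigma_f, lx, ly) + noise ** 2 * np.eye(len(y))
            L = np.linalg.cholesky(K)
            a = np.linalg.solve(L.T, np.linalg.solve(L, y))
            # Gaussian log-likelihood plus a weak Gaussian prior on log-parameters.
            return 0.5 * y @ a + np.log(np.diag(L)).sum() + 0.5 * theta @ theta

        rng = np.random.default_rng(2)
        X = rng.uniform(0.0, 10.0, (40, 2))          # mobile-sensor sample positions
        y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)
        res = minimize(neg_log_posterior, np.zeros(3), args=(X, y))
        sigma_f, lx, ly = np.exp(res.x)              # MAP-style covariance estimate

        # Predict the field at a new location using the estimated covariance.
        Xs = np.array([[5.0, 5.0]])
        K = aniso_cov(X, X, sigma_f, lx, ly) + 0.01 * np.eye(len(y))
        mean_pred = aniso_cov(Xs, X, sigma_f, lx, ly) @ np.linalg.solve(K, y)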

  2. Intra-Theater Mobilization: Guidance, Process, and Tools

    DTIC Science & Technology

    1992-09-01

    ... theorist Eliyahu Goldratt put this need into perspective in The Haystack Syndrome: Sifting Information out of the Data Ocean: every organization is ... judged not in isolation, but on what it contributes to the global purpose. The developer of TOC, Dr. Eliyahu Goldratt, expands on this approach: how to ... the process. Goldratt refers to this as a shipping buffer, since it buffers against throughput lost when "shipments" of the product are not ready to ...

  3. A process: development of a model multiculturalism curriculum designed for mobility across geographic borders.

    PubMed

    Frels, L; Scott, J; Schramm, M A

    1997-08-01

    The Council on Accreditation Project, Nurse Anesthesia Educational Requirements and Mobility Between North American Free Trade Agreement (NAFTA) Countries, has as one of its outcomes the development of a model curriculum that would minimize educational barriers for mobility of nurse anesthetists across NAFTA geographical borders with a focus on the blending of professional and technical expertise with issues of human diversity and/or cultural differences. The overall long-term outcome of the project is to test a process. The manuscript discusses the process used in year III of the project to integrate cultural concepts into a nurse anesthesia model curriculum.

  4. Characterizing Information Processing With a Mobile Device: Measurement of Simple and Choice Reaction Time.

    PubMed

    Burke, Daniel; Linder, Susan; Hirsch, Joshua; Dey, Tanujit; Kana, Daniel; Ringenbach, Shannon; Schindler, David; Alberts, Jay

    2016-03-01

    Information processing is typically evaluated using simple reaction time (SRT) and choice reaction time (CRT) paradigms in which a specific response is initiated following a given stimulus. The measurement of reaction time (RT) has evolved from monitoring the timing of mechanical switches to computerized paradigms. The proliferation of mobile devices with touch screens makes them a natural next technological platform for assessing information processing. The aims of this study were to determine the validity and reliability of using a mobile device (Apple iPad or iTouch) to accurately measure RT. Sixty healthy young adults completed SRT and CRT tasks using a traditional test platform and mobile platforms on two occasions. The SRT was similar across test modalities: 300, 287, and 280 milliseconds (ms) for the traditional platform, iPad, and iTouch, respectively. The CRT was similar across mobile devices, though slightly faster on the traditional platform: 359, 408, and 384 ms for the traditional platform, iPad, and iTouch, respectively. Intraclass correlation coefficients ranged from 0.79 to 0.85 for SRT and from 0.75 to 0.83 for CRT. The similarity and reliability of SRT across platforms and the consistency of SRT and CRT across test conditions indicate that mobile devices provide the next generation of assessment platforms for information processing.

  5. NMR data visualization, processing, and analysis on mobile devices.

    PubMed

    Cobas, Carlos; Iglesias, Isaac; Seoane, Felipe

    2015-08-01

    Touch-screen computers are emerging as a popular platform for many applications, including those in chemistry and the analytical sciences. In this work, we present our implementation of a new NMR 'app' designed for hand-held and portable touch-controlled devices, such as smartphones and tablets. It features a flexible architecture formed by a powerful NMR processing and analysis kernel and an intuitive user interface that makes full use of the smart device's haptic capabilities. Routine 1D and 2D NMR spectra acquired on most NMR instruments can be processed in a fully unattended way. More advanced experiments such as non-uniformly sampled NMR spectra are also supported through a very efficient parallelized Modified Iterative Soft Thresholding algorithm. Specific technical development features, as well as the overall feasibility of using NMR software apps, are also discussed. All aspects considered, the functionality of the app allows it to work as a stand-alone tool or as a 'companion' to more advanced desktop applications such as Mnova NMR.

  6. Design Standards for Instructional Computer Programs. Physical Processes in Terrestrial and Aquatic Ecosystems, Computer Programs and Graphics Capabilities.

    ERIC Educational Resources Information Center

    Gales, Larry

    These materials were designed to be used by life science students for instruction in the application of physical theory to ecosystem operation. Most modules contain computer programs which are built around a particular application of a physical process. The report describes design standards for the computer programs. They are designed to be…

  7. User's Guide for Subroutine FFORM. Physical Processes in Terrestrial and Aquatic Ecosystems, Computer Programs and Graphics Capabilities.

    ERIC Educational Resources Information Center

    Gales, Larry; Anderson, Lougenia

    These materials were designed to be used by life science students for instruction in the application of physical theory to ecosystem operation. Most modules contain computer programs which are built around a particular application of a physical process. FFORM is a portable format-free input subroutine package which simplifies the input of values…

  8. User's Guide for Subroutine PLOT3D. Physical Processes in Terrestrial and Aquatic Ecosystems, Computer Programs and Graphics Capabilities.

    ERIC Educational Resources Information Center

    Gales, Larry

    This module is part of a series designed to be used by life science students for instruction in the application of physical theory to ecosystem operation. Most modules contain computer programs which are built around a particular application of a physical process. PLOT3D is a subroutine package which generates a variety of three dimensional hidden…

  9. Programmer's Guide for Subroutine PLOT3D. Physical Processes in Terrestrial and Aquatic Ecosystems, Computer Programs and Graphics Capabilities.

    ERIC Educational Resources Information Center

    Gales, Larry

    This module is part of a series designed to be used by life science students for instruction in the application of physical theory to ecosystem operation. Most modules contain computer programs which are built around a particular application of a physical process. PLOT3D is a subroutine package which generates a variety of three-dimensional hidden…

  10. User's Guide for Subroutine PRNT3D. Physical Processes in Terrestrial and Aquatic Ecosystems, Computer Programs and Graphics Capabilities.

    ERIC Educational Resources Information Center

    Gales, Larry

    These materials were designed to be used by life science students for instruction in the application of physical theory to ecosystem operation. Most modules contain computer programs which are built around a particular application of a physical process. PRNT3D is a subroutine package which generates a variety of printer plot displays. The displays…

  11. Programmer's Guide for Subroutine PRNT3D. Physical Processes in Terrestrial and Aquatic Ecosystems, Computer Programs and Graphics Capabilities.

    ERIC Educational Resources Information Center

    Gales, Larry

    These materials were designed to be used by life science students for instruction in the application of physical theory to ecosystem operation. Most modules contain computer programs which are built around a particular application of a physical process. PRNT3D is a subroutine package which generates a variety of printed plot displays. The displays…

  12. Graphic Design in Educational Television.

    ERIC Educational Resources Information Center

    Clarke, Beverley

    To help educational television (ETV) practitioners achieve maximum clarity, economy and purposiveness, the range of techniques of television graphics is explained. Closed-circuit and broadcast ETV are compared. The design process is discussed in terms of aspect ratio, line structure, cut off, screen size, tone scales, studio apparatus, and…

  13. Using compute unified device architecture-enabled graphic processing unit to accelerate fast Fourier transform-based regression Kriging interpolation on a MODIS land surface temperature image

    NASA Astrophysics Data System (ADS)

    Hu, Hongda; Shu, Hong; Hu, Zhiyong; Xu, Jianhui

    2016-04-01

    Kriging interpolation provides the best linear unbiased estimation for unobserved locations, but its heavy computation limits the manageable problem size in practice. To address this issue, an efficient interpolation procedure incorporating the fast Fourier transform (FFT) was developed. Extending this efficient approach, we propose an FFT-based parallel algorithm to accelerate regression Kriging interpolation on an NVIDIA® compute unified device architecture (CUDA)-enabled graphic processing unit (GPU). A high-performance cuFFT library in the CUDA toolkit was introduced to execute computation-intensive FFTs on the GPU, and three time-consuming processes were redesigned as kernel functions and executed on the CUDA cores. A MODIS land surface temperature 8-day image tile at a resolution of 1 km was resampled to create experimental datasets at eight different output resolutions. These datasets were used as the interpolation grids with different sizes in a comparative experiment. Experimental results show that speedup of the FFT-based regression Kriging interpolation accelerated by GPU can exceed 1000 when processing datasets with large grid sizes, as compared to the traditional Kriging interpolation running on the CPU. These results demonstrate that the combination of FFT methods and GPU-based parallel computing techniques greatly improves the computational performance without loss of precision.
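
    The core trick is that, on a regular grid with a stationary covariance, the expensive step reduces to a convolution, which the FFT turns into an elementwise product. Below is a minimal CPU sketch of that pattern; on a CUDA GPU the same two FFT lines would run on cuFFT, e.g., by substituting a GPU array library such as CuPy for NumPy (that substitution is an assumption for illustration, not the authors' implementation).

        import numpy as np

        def fft_convolve2d(field, kernel):
            # Circular convolution via the FFT: O(N log N) instead of O(N^2).
            return np.real(np.fft.ifft2(np.fft.fft2(field) * np.fft.fft2(kernel)))

        # Smooth the residual surface of a regression-Kriging step with an
        # exponential covariance kernel on a 1024 x 1024 grid.
        n = 1024
        yy, xx = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        dist = np.sqrt(np.minimum(xx, n - xx) ** 2 + np.minimum(yy, n - yy) ** 2)
        kernel = np.exp(-dist / 50.0)
        kernel /= kernel.sum()                       # normalize the weights
        residuals = np.random.default_rng(3).normal(size=(n, n))
        smoothed = fft_convolve2d(residuals, kernel)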

  14. Using analytic network process for evaluating mobile text entry methods.

    PubMed

    Ocampo, Lanndon A; Seva, Rosemary R

    2016-01-01

    This paper highlights a preference evaluation methodology for text entry methods on a touch-keyboard smartphone using the analytic network process (ANP). Evaluations of text entry methods in the literature mainly consider speed and accuracy; this study presents an alternative means of selecting a text entry method that considers user preference. A case study was carried out with a group of experts who were asked to develop a selection decision model for five text entry methods. The decision problem is flexible enough to reflect the interdependencies of decision elements that are necessary to describe real-life conditions. Results showed that the QWERTY method is preferred over the other text entry methods, while the arrangement of keys is the most important criterion in characterizing a sound method. Sensitivity analysis, using simulation of normally distributed random numbers under fairly large perturbation, showed the foregoing results to be reliable enough to reflect robust judgment. The main contribution of this paper is the introduction of a multi-criteria decision approach to the preference evaluation of text entry methods.
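
    The building block of ANP (shared with AHP) is the priority vector obtained as the principal eigenvector of a pairwise-comparison matrix. A short sketch follows; the comparison matrix values for three text entry methods are made up for illustration.

        import numpy as np

        def priority_vector(A, iters=100):
            # Principal eigenvector of a pairwise-comparison matrix via
            # power iteration, normalized so the priorities sum to 1.
            w = np.ones(A.shape[0]) / A.shape[0]
            for _ in range(iters):
                w = A @ w
                w /= w.sum()
            return w

        # Hypothetical Saaty-scale comparison of three text entry methods on
        # one criterion: A[i, j] says how strongly method i is preferred to j.
        A = np.array([[1.0,   3.0,   5.0],
                      [1/3.0, 1.0,   2.0],
                      [1/5.0, 1/2.0, 1.0]])
        print(priority_vector(A))   # the first (QWERTY-like) method dominates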

  15. Scalable Indoor Localization via Mobile Crowdsourcing and Gaussian Process.

    PubMed

    Chang, Qiang; Li, Qun; Shi, Zesen; Chen, Wei; Wang, Weiping

    2016-03-16

    Indoor localization using Received Signal Strength Indication (RSSI) fingerprinting has been extensively studied for decades. The positioning accuracy is highly dependent on the density of the signal database. In areas without calibration data, however, this algorithm breaks down. Building and updating a dense signal database is labor intensive, expensive, and even impossible in some areas. Researchers are continually searching for better algorithms to create and update dense databases more efficiently. In this paper, we propose a scalable indoor positioning algorithm that works both in surveyed and unsurveyed areas. We first propose Minimum Inverse Distance (MID) algorithm to build a virtual database with uniformly distributed virtual Reference Points (RP). The area covered by the virtual RPs can be larger than the surveyed area. A Local Gaussian Process (LGP) is then applied to estimate the virtual RPs' RSSI values based on the crowdsourced training data. Finally, we improve the Bayesian algorithm to estimate the user's location using the virtual database. All the parameters are optimized by simulations, and the new algorithm is tested on real-case scenarios. The results show that the new algorithm improves the accuracy by 25.5% in the surveyed area, with an average positioning error below 2.2 m for 80% of the cases. Moreover, the proposed algorithm can localize the users in the neighboring unsurveyed area.

  16. Scalable Indoor Localization via Mobile Crowdsourcing and Gaussian Process

    PubMed Central

    Chang, Qiang; Li, Qun; Shi, Zesen; Chen, Wei; Wang, Weiping

    2016-01-01

    Indoor localization using Received Signal Strength Indication (RSSI) fingerprinting has been extensively studied for decades. The positioning accuracy is highly dependent on the density of the signal database. In areas without calibration data, however, this algorithm breaks down. Building and updating a dense signal database is labor intensive, expensive, and even impossible in some areas. Researchers are continually searching for better algorithms to create and update dense databases more efficiently. In this paper, we propose a scalable indoor positioning algorithm that works both in surveyed and unsurveyed areas. We first propose Minimum Inverse Distance (MID) algorithm to build a virtual database with uniformly distributed virtual Reference Points (RP). The area covered by the virtual RPs can be larger than the surveyed area. A Local Gaussian Process (LGP) is then applied to estimate the virtual RPs’ RSSI values based on the crowdsourced training data. Finally, we improve the Bayesian algorithm to estimate the user’s location using the virtual database. All the parameters are optimized by simulations, and the new algorithm is tested on real-case scenarios. The results show that the new algorithm improves the accuracy by 25.5% in the surveyed area, with an average positioning error below 2.2 m for 80% of the cases. Moreover, the proposed algorithm can localize the users in the neighboring unsurveyed area. PMID:26999139
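
    The abstract does not spell out the MID estimator, but its flavor can be conveyed by a generic inverse-distance-weighted estimate of a virtual reference point's RSSI from crowdsourced samples. Everything below, including the toy path-loss data, is an illustrative assumption standing in for the paper's MID/LGP estimators.

        import numpy as np

        def idw_rssi(virtual_rp, sample_pos, sample_rssi, eps=1e-6):
            # Inverse-distance-weighted RSSI estimate at one virtual
            # reference point from crowdsourced samples.
            d = np.linalg.norm(sample_pos - virtual_rp, axis=1) + eps
            w = 1.0 / d
            return np.sum(w * sample_rssi) / np.sum(w)

        rng = np.random.default_rng(4)
        pos = rng.uniform(0.0, 20.0, (200, 2))       # crowdsourced positions (m)
        # Toy log-distance path-loss signal from an access point at (5, 5).
        rssi = -40.0 - 20.0 * np.log10(np.linalg.norm(pos - [5.0, 5.0], axis=1) + 1.0)
        grid = np.array([[x, y] for x in range(21) for y in range(21)], dtype=float)
        virtual_db = np.array([idw_rssi(g, pos, rssi) for g in grid])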

  17. High Electron Mobility Transistor Structures on Sapphire Substrates Using CMOS Compatible Processing Techniques

    NASA Technical Reports Server (NTRS)

    Mueller, Carl; Alterovitz, Samuel; Croke, Edward; Ponchak, George

    2004-01-01

    System-on-a-chip (SOC) processes are under intense development for high-speed, high-frequency transceiver circuitry. As frequencies, data rates, and circuit complexity increase, the need for substrates that enable high-speed analog operation, low-power digital circuitry, and excellent isolation between devices becomes increasingly critical. SiGe/Si modulation-doped field-effect transistors (MODFETs) with high carrier mobilities are currently under development to meet the active RF device needs. However, as the substrate normally used is Si, its low-to-modest resistivity causes large losses in the passive elements required for a complete high-frequency circuit. These losses are projected to become increasingly troublesome as device frequencies progress to the Ku-band (12-18 GHz) and beyond. Sapphire is an excellent substrate for high-frequency SOC designs because it supports excellent active and passive RF device performance as well as low-power digital operation. We are developing high-electron-mobility SiGe/Si transistor structures on r-plane sapphire, using either in-situ grown n-MODFET structures or ion-implanted high electron mobility transistor (HEMT) structures. Advantages of the MODFET structures include high electron mobilities at all temperatures (relative to ion-implanted HEMT structures), with mobility continuously improving down to cryogenic temperatures. We have measured electron mobilities over 1,200 and 13,000 sq cm/V-sec at room temperature and 0.25 K, respectively, in MODFET structures. The electron carrier densities were 1.6 and 1.33 x 10(exp 12)/sq cm at room and liquid helium temperature, respectively, denoting excellent carrier confinement. Using the ion-implantation technique, we have observed electron mobilities as high as 900 sq cm/V-sec at room temperature at a carrier density of 1.3 x 10(exp 12)/sq cm. The temperature dependence of mobility for both the MODFET and HEMT structures provides insights into the mechanisms that allow for enhanced mobility.

  18. IMAT graphics manual

    NASA Technical Reports Server (NTRS)

    Stockwell, Alan E.; Cooper, Paul A.

    1991-01-01

    The Integrated Multidisciplinary Analysis Tool (IMAT) consists of a menu driven executive system coupled with a relational database which links commercial structures, structural dynamics and control codes. The IMAT graphics system, a key element of the software, provides a common interface for storing, retrieving, and displaying graphical information. The IMAT Graphics Manual shows users of commercial analysis codes (MATRIXx, MSC/NASTRAN and I-DEAS) how to use the IMAT graphics system to obtain high quality graphical output using familiar plotting procedures. The manual explains the key features of the IMAT graphics system, illustrates their use with simple step-by-step examples, and provides a reference for users who wish to take advantage of the flexibility of the software to customize their own applications.

  19. VoxelMages: a general-purpose graphical interface for designing geometries and processing DICOM images for PENELOPE.

    PubMed

    Giménez-Alventosa, V; Ballester, F; Vijande, J

    2016-12-01

    The design and construction of geometries for Monte Carlo calculations is an error-prone, time-consuming, and complex step in simulations describing particle interactions and transport in the field of medical physics. The software VoxelMages has been developed to help the user in this task. It allows the user to design complex geometries and to process DICOM image files for simulations with the general-purpose Monte Carlo code PENELOPE in an easy and straightforward way. VoxelMages can also import DICOM-RT structure contour information as delivered by a treatment planning system. Its main characteristics, usage, and performance benchmarking are described in detail.

  20. Performance Tradeoff Considerations in a Graphics Processing Unit (GPU) Implementation of a Low Detectable Aircraft Sensor System

    DTIC Science & Technology

    2013-01-01

    CUDA ... optimal employment of GPU memory ... the GPU using the stream construct within CUDA. Using this technique, a small amount of input tile data is sent to the GPU initially. Then, while the CUDA kernels process ...

  1. A Multi-Objective Compounded Local Mobile Cloud Architecture Using Priority Queues to Process Multiple Jobs.

    PubMed

    Wei, Xiaohui; Sun, Bingyi; Cui, Jiaxu; Xu, Gaochao

    2016-01-01

    As a result of the greatly increased use of mobile devices, the disadvantages of portable devices have gradually begun to emerge. To address these problems, mobile cloud computing assisted by cloud data centers has been proposed; however, cloud data centers are usually very far from the mobile requesters. In this paper, we propose an improved multi-objective local mobile cloud model: the Compounded Local Mobile Cloud Architecture with Dynamic Priority Queues (LMCpri). This architecture briefly stores jobs that arrive simultaneously at the cloudlet in different priority positions according to the result of auction processing, and then executes partitioned tasks on capable helpers. In the Scheduling Module, NSGA-II is employed as the scheduling algorithm to shorten processing time and decrease requester cost relative to PSO and sequential scheduling. Simulation results show that setting the number of iterations to 30 is the best choice for the system. In addition, compared with LMCque, LMCpri effectively accommodates a requester who would like his job executed in advance, and it shortens execution time. Finally, a comparison between LMCpri and a cloud-assisting architecture reveals that LMCpri offers a clear performance advantage.

  2. A Multi-Objective Compounded Local Mobile Cloud Architecture Using Priority Queues to Process Multiple Jobs

    PubMed Central

    Wei, Xiaohui; Sun, Bingyi; Cui, Jiaxu; Xu, Gaochao

    2016-01-01

    As a result of the greatly increased use of mobile devices, the disadvantages of portable devices have gradually begun to emerge. To address these problems, mobile cloud computing assisted by cloud data centers has been proposed; however, cloud data centers are usually very far from the mobile requesters. In this paper, we propose an improved multi-objective local mobile cloud model: the Compounded Local Mobile Cloud Architecture with Dynamic Priority Queues (LMCpri). This architecture briefly stores jobs that arrive simultaneously at the cloudlet in different priority positions according to the result of auction processing, and then executes partitioned tasks on capable helpers. In the Scheduling Module, NSGA-II is employed as the scheduling algorithm to shorten processing time and decrease requester cost relative to PSO and sequential scheduling. Simulation results show that setting the number of iterations to 30 is the best choice for the system. In addition, compared with LMCque, LMCpri effectively accommodates a requester who would like his job executed in advance, and it shortens execution time. Finally, a comparison between LMCpri and a cloud-assisting architecture reveals that LMCpri offers a clear performance advantage. PMID:27419854
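
    The dynamic priority queue at the heart of LMCpri can be pictured as a standard binary heap in which an auction-derived bid determines service order and a counter preserves arrival order among equal bids. The minimal sketch below is a generic stand-in; the class, method, and job names are hypothetical.

        import heapq
        import itertools

        class PriorityJobQueue:
            # Jobs are served in descending bid order; the counter breaks
            # ties among simultaneous arrivals so insertion order is kept.
            def __init__(self):
                self._heap = []
                self._counter = itertools.count()

            def submit(self, job, bid):
                # A higher bid buys a smaller heap key, i.e., earlier service.
                heapq.heappush(self._heap, (-bid, next(self._counter), job))

            def next_job(self):
                return heapq.heappop(self._heap)[2]

        q = PriorityJobQueue()
        q.submit("render-task", bid=5)
        q.submit("sensor-fusion", bid=9)   # urgent requester pays more
        q.submit("log-upload", bid=1)
        print(q.next_job())                # -> 'sensor-fusion'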

  3. 77 FR 38597 - Multistakeholder Process To Develop Consumer Data Privacy Code of Conduct Concerning Mobile...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-28

    ... Data Privacy Code of Conduct Concerning Mobile Application Transparency. AGENCY: National ... develop legally enforceable codes of conduct that specify how the Consumer Privacy Bill of Rights applies ... The goal of the multistakeholder process is to develop a code of conduct to provide transparency in how companies ...

  4. A Real-Time High Performance Computation Architecture for Multiple Moving Target Tracking Based on Wide-Area Motion Imagery via Cloud and Graphic Processing Units.

    PubMed

    Liu, Kui; Wei, Sixiao; Chen, Zhijiang; Jia, Bin; Chen, Genshe; Ling, Haibin; Sheaff, Carolyn; Blasch, Erik

    2017-02-12

    This paper presents the first attempt at combining Cloud with Graphic Processing Units (GPUs) in a complementary manner within the framework of a real-time high performance computation architecture for the application of detecting and tracking multiple moving targets based on Wide Area Motion Imagery (WAMI). More specifically, the GPU and Cloud Moving Target Tracking (GC-MTT) system applied a front-end web based server to perform the interaction with Hadoop and highly parallelized computation functions based on the Compute Unified Device Architecture (CUDA©). The introduced multiple moving target detection and tracking method can be extended to other applications such as pedestrian tracking, group tracking, and Patterns of Life (PoL) analysis. Cloud- and GPU-based computing provides an efficient real-time target recognition and tracking approach compared to workflows that use only central processing units (CPUs). The simultaneous tracking and recognition results demonstrate that a GC-MTT based approach provides drastically improved tracking at low frame rates under realistic conditions.

  5. Improvement of MS (multiple sclerosis) CAD (computer aided diagnosis) performance using C/C++ and computing engine in the graphical processing unit (GPU)

    NASA Astrophysics Data System (ADS)

    Suh, Joohyung; Ma, Kevin; Le, Anh

    2011-03-01

    Multiple Sclerosis (MS) is a disease caused by damage to the myelin around axons of the brain and spinal cord. Currently, MR imaging is used for diagnosis, but assessment is highly variable and time-consuming because lesion detection and the estimation of lesion volume are performed manually. For this reason, we developed a CAD (Computer Aided Diagnosis) system to assist segmentation of MS lesions and facilitate the physician's diagnosis. The MS CAD system uses the k-NN (k-nearest-neighbor) algorithm to detect and segment lesion volume on a per-voxel basis. The prototype MS CAD system was developed in the MATLAB environment and currently consumes a large amount of time to process data. In this paper we present the development of a second version of the MS CAD system, converted to C/C++ in order to take advantage of the parallel computation offered by the GPU (Graphical Processing Unit). With the C/C++ realization and the GPU, we expect to cut running time drastically. The paper investigates the conversion from MATLAB to C/C++ and the use of a high-end GPU for parallel data processing to improve the algorithm performance of the MS CAD.
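
    The per-voxel independence of k-NN classification is what makes the GPU port worthwhile: each voxel's distance computations and vote are independent of every other voxel's. The vectorized sketch below shows that baseline; the features, labels, and decision rule are illustrative assumptions, since the abstract does not describe the CAD system's actual features.

        import numpy as np

        def knn_classify(train_feat, train_label, voxel_feat, k=5):
            # Majority vote over the k nearest training samples. Every voxel
            # is independent of every other voxel, so each distance row maps
            # naturally onto one GPU thread (or thread block) per voxel.
            d = np.linalg.norm(voxel_feat[:, None, :] - train_feat[None, :, :], axis=-1)
            nearest = np.argpartition(d, k, axis=1)[:, :k]
            votes = train_label[nearest]
            return (votes.mean(axis=1) >= 0.5).astype(int)   # 1 = lesion

        rng = np.random.default_rng(5)
        train = rng.normal(0.0, 1.0, (500, 3))       # per-voxel feature vectors
        labels = (train[:, 0] > 0.5).astype(int)     # toy lesion labels
        voxels = rng.normal(0.0, 1.0, (2000, 3))
        segmentation = knn_classify(train, labels, voxels)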

  6. A Real-Time High Performance Computation Architecture for Multiple Moving Target Tracking Based on Wide-Area Motion Imagery via Cloud and Graphic Processing Units

    PubMed Central

    Liu, Kui; Wei, Sixiao; Chen, Zhijiang; Jia, Bin; Chen, Genshe; Ling, Haibin; Sheaff, Carolyn; Blasch, Erik

    2017-01-01

    This paper presents the first attempt at combining Cloud with Graphic Processing Units (GPUs) in a complementary manner within the framework of a real-time high performance computation architecture for the application of detecting and tracking multiple moving targets based on Wide Area Motion Imagery (WAMI). More specifically, the GPU and Cloud Moving Target Tracking (GC-MTT) system applied a front-end web based server to perform the interaction with Hadoop and highly parallelized computation functions based on the Compute Unified Device Architecture (CUDA©). The introduced multiple moving target detection and tracking method can be extended to other applications such as pedestrian tracking, group tracking, and Patterns of Life (PoL) analysis. Cloud- and GPU-based computing provides an efficient real-time target recognition and tracking approach compared to workflows that use only central processing units (CPUs). The simultaneous tracking and recognition results demonstrate that a GC-MTT based approach provides drastically improved tracking at low frame rates under realistic conditions. PMID:28208684

  7. Performance improvement for solution-processed high-mobility ZnO thin-film transistors

    NASA Astrophysics Data System (ADS)

    Sha Li, Chen; Li, Yu Ning; Wu, Yi Liang; Ong, Beng S.; Loutfy, Rafik O.

    2008-06-01

    The fabrication of stable, non-toxic, transparent, high-performance zinc oxide (ZnO) thin-film semiconductors via a solution process was investigated. Two methods were compared: annealing a spin-coated precursor solution and annealing a drop-coated precursor solution. The prepared ZnO thin-film transistors have well-controlled, preferential crystal orientation and exhibit superior field-effect performance characteristics. However, the ZnO thin-film transistor (TFT) fabricated by annealing a drop-coated precursor solution has a distinctly elevated linear mobility, which more closely approaches the saturation mobility, compared with that fabricated by annealing a spin-coated precursor solution. The performance of the solution-processed ZnO TFT was further improved by substituting the drop-coating process for the spin-coating process.

  8. Band-like temperature dependence of mobility in a solution-processed organic semiconductor

    NASA Astrophysics Data System (ADS)

    Sakanoue, Tomo; Sirringhaus, Henning

    2010-09-01

    The mobility μ of solution-processed organic semiconductors has improved markedly, to room-temperature values of 1-5 sq cm/V-sec. In spite of their growing technological importance, the fundamental question remains open whether charges are localized on individual molecules or exhibit extended-state band conduction like that in inorganic semiconductors. The high bulk mobility of some molecular single crystals, 100 sq cm/V-sec at 10 K, provides clear evidence that extended-state conduction is possible in van-der-Waals-bonded solids at low temperatures. However, the nature of conduction at room temperature, with mobilities close to the Ioffe-Regel limit, remains controversial. Here we investigate the origin of an apparent 'band-like', negative temperature coefficient of the mobility (dμ/dT < 0) in spin-coated films of 6,13-bis(triisopropylsilylethynyl)pentacene. We use optical spectroscopy of gate-induced charge carriers to show that, at low temperature and small lateral electric field, charges become localized on individual molecules in shallow trap states, but that a moderate lateral electric field is able to detrap them, resulting in highly nonlinear low-temperature transport. The negative temperature coefficient of the mobility at high fields is not due to extended-state conduction but to localized transport limited by thermal lattice fluctuations.
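
    For context, the sign of dμ/dT is the textbook discriminator between the two transport pictures contrasted in this abstract (shown here as general background, not as the authors' model):

        \[
        \mu_{\mathrm{band}}(T) \propto T^{-n}\ (n > 0) \;\Rightarrow\; \frac{d\mu}{dT} < 0,
        \qquad
        \mu_{\mathrm{hop}}(T) \propto \exp\!\left(-\frac{E_a}{k_B T}\right) \;\Rightarrow\; \frac{d\mu}{dT} > 0 .
        \]

    The abstract's point is that an observed dμ/dT < 0 alone does not prove band transport; here it arises instead from field-assisted detrapping combined with localized transport limited by thermal lattice fluctuations.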

  9. Visual to tactile conversion of vector graphics.

    PubMed

    Krufka, Stephen E; Barner, Kenneth E; Aysal, Tuncer Can

    2007-06-01

    Methods to automatically convert graphics into raised-line images have recently been investigated. In this paper, concepts from previous research are extended to the vector graphics case, producing tactile pictures in which important features are emphasized. The proposed algorithm extracts object boundaries and employs a classification process, based on a graphic's hierarchical structure, to determine critical outlines. A single parameter is introduced into the classification process, enabling users to tailor graphics to their own preferences. The resulting outlines are printed using a Braille printer to produce tactile output. Critical outlines are embossed with raised dots at the greatest height, while other lines and details are embossed at a lower height. Psychophysical experiments including discrimination, identification, and comprehension tasks are used to evaluate and compare the proposed algorithm. Results indicate that the proposed method outperforms other methods in all three tasks. The results also show that emphasizing important features significantly increases comprehension of tactile graphics, validating the proposed method's effectiveness in conveying visual information.

  10. Graphics mini manual

    NASA Technical Reports Server (NTRS)

    Taylor, Nancy L.; Randall, Donald P.; Bowen, John T.; Johnson, Mary M.; Roland, Vincent R.; Matthews, Christine G.; Gates, Raymond L.; Skeens, Kristi M.; Nolf, Scott R.; Hammond, Dana P.

    1990-01-01

    The computer graphics capabilities available at the Center are introduced and their use is explained. More specifically, the manual identifies and describes the various graphics software and hardware components, details the interfaces between these components, and provides information concerning the use of these components at LaRC.

  11. Real-time electroholography using a multiple-graphics processing unit cluster system with a single spatial light modulator and the InfiniBand network

    NASA Astrophysics Data System (ADS)

    Niwase, Hiroaki; Takada, Naoki; Araki, Hiromitsu; Maeda, Yuki; Fujiwara, Masato; Nakayama, Hirotaka; Kakue, Takashi; Shimobaba, Tomoyoshi; Ito, Tomoyoshi

    2016-09-01

    Parallel calculations of large-pixel-count computer-generated holograms (CGHs) are suitable for multiple-graphics processing unit (multi-GPU) cluster systems. However, it is not easy for a multi-GPU cluster system to accomplish fast CGH calculations when CGH transfers between PCs are required. In these cases, the CGH transfer between the PCs becomes a bottleneck. Usually, this problem occurs only in multi-GPU cluster systems with a single spatial light modulator. To overcome this problem, we propose a simple method using the InfiniBand network. The computational speed of the proposed method using 13 GPUs (NVIDIA GeForce GTX TITAN X) was more than 3000 times faster than that of a CPU (Intel Core i7 4770) when the number of three-dimensional (3-D) object points exceeded 20,480. In practice, we achieved ~40 tera floating point operations per second (TFLOPS) when the number of 3-D object points exceeded 40,960. Our proposed method was able to reconstruct a real-time movie of a 3-D object comprising 95,949 points.
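
    The inner loop that such clusters divide among GPUs is the point-source CGH summation, in which every hologram pixel accumulates a Fresnel-zone contribution from every 3-D object point. The following minimal CUDA sketch of that O(pixels × points) computation is illustrative only, not the authors' code; all names and parameter values are assumptions:

      #include <cuda_runtime.h>
      #include <math_constants.h>   // CUDART_PI_F
      #include <cstdio>

      struct Point { float x, y, z, a; };      // 3-D object point + amplitude

      // One thread per hologram pixel; each sums contributions from all points.
      __global__ void cgh(const Point* pts, int np, float* holo,
                          int w, int h, float pitch, float lambda)
      {
          int px = blockIdx.x * blockDim.x + threadIdx.x;
          int py = blockIdx.y * blockDim.y + threadIdx.y;
          if (px >= w || py >= h) return;

          float xa = (px - w / 2) * pitch;     // pixel position on the SLM plane
          float ya = (py - h / 2) * pitch;
          float acc = 0.0f;
          for (int j = 0; j < np; ++j) {
              float dx = xa - pts[j].x, dy = ya - pts[j].y;
              acc += pts[j].a * __cosf(CUDART_PI_F / (lambda * pts[j].z)
                                       * (dx * dx + dy * dy));
          }
          holo[py * w + px] = acc;
      }

      int main()
      {
          const int W = 1024, H = 1024, NP = 256;
          Point* pts; float* holo;
          cudaMallocManaged(&pts, NP * sizeof(Point));
          cudaMallocManaged(&holo, W * H * sizeof(float));
          for (int j = 0; j < NP; ++j)         // toy object: a small point cloud
              pts[j] = { (j % 16 - 8) * 1e-4f, (j / 16 - 8) * 1e-4f,
                         0.10f + 1e-4f * j, 1.0f };

          dim3 block(16, 16), grid((W + 15) / 16, (H + 15) / 16);
          cgh<<<grid, block>>>(pts, NP, holo, W, H, 8e-6f, 532e-9f);
          cudaDeviceSynchronize();
          printf("holo[0] = %f\n", holo[0]);
          cudaFree(pts); cudaFree(holo);
          return 0;
      }

    Because each pixel is independent, the frame splits naturally across GPUs; the paper's contribution is keeping the subsequent CGH transfer to the single-SLM PC from becoming the bottleneck.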

  12. Acceleration of split-field finite difference time-domain method for anisotropic media by means of graphics processing unit computing

    NASA Astrophysics Data System (ADS)

    Francés, Jorge; Bleda, Sergio; Álvarez, Mariela Lázara; Martínez, Francisco Javier; Márquez, Andres; Neipp, Cristian; Beléndez, Augusto

    2014-01-01

    The implementation in graphics processing units (GPUs) of the split-field finite-difference time-domain (SF-FDTD) method, applied to light-wave propagation through periodic media with arbitrary anisotropy, is described. The SF-FDTD technique and the periodic boundary condition allow a single period of the structure to be considered, reducing the simulation grid. Nevertheless, the analysis of anisotropic media implies considering all the electromagnetic field components and the use of complex notation. These aspects reduce the computational efficiency of the numerical method compared with the isotropic and nonperiodic implementation. Specifically, the implementation of the SF-FDTD on the Kepler family of NVIDIA GPUs is presented. An analysis of the performance of this implementation is given, and several applications have been considered in order to estimate the possibilities provided by both the formalism and the GPU implementation: binary phase gratings and twisted-nematic liquid crystal cells. Regarding the analysis of binary phase gratings, the validity of scalar diffraction theory is evaluated by comparison with the diffraction efficiencies predicted by SF-FDTD. The analysis is extended to the second order of diffraction, which is taken as a reference for the transmittance obtained by the SF-FDTD scheme for periodic media.
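
    The full SF-FDTD scheme adds split fields, complex notation, and periodic boundaries, but the property that makes FDTD GPU-friendly is already visible in the plain 1-D Yee updates: each field sample depends only on its immediate neighbors, so one thread per cell needs no synchronization within a half-step. A minimal normalized sketch follows (illustrative only, not the paper's implementation):

      #include <cuda_runtime.h>
      #include <cmath>
      #include <cstdio>

      // 1-D Yee updates in normalized units at the Courant limit (c*dt/dx = 1).
      __global__ void updateH(float* hy, const float* ez, int n)
      {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i < n - 1) hy[i] += ez[i + 1] - ez[i];
      }

      __global__ void updateE(float* ez, const float* hy, int n)
      {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i > 0 && i < n) ez[i] += hy[i] - hy[i - 1];
      }

      int main()
      {
          const int n = 4096, steps = 600, src = n / 2;
          float *ez, *hy;
          cudaMallocManaged(&ez, n * sizeof(float));
          cudaMallocManaged(&hy, n * sizeof(float));
          cudaMemset(ez, 0, n * sizeof(float));
          cudaMemset(hy, 0, n * sizeof(float));

          for (int t = 0; t < steps; ++t) {
              updateH<<<(n + 255) / 256, 256>>>(hy, ez, n);
              updateE<<<(n + 255) / 256, 256>>>(ez, hy, n);
              cudaDeviceSynchronize();
              // soft Gaussian source injected at the grid center
              ez[src] += std::exp(-(t - 30.0f) * (t - 30.0f) / 100.0f);
          }
          printf("ez at probe: %f\n", ez[src + 500]);
          cudaFree(ez); cudaFree(hy);
          return 0;
      }

    The anisotropic, complex-valued SF-FDTD stencils cost more memory traffic per cell than this scalar case, which is exactly the overhead the paper quantifies on Kepler GPUs.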

  13. Graphics processing unit-accelerated non-rigid registration of MR images to CT images during CT-guided percutaneous liver tumor ablations

    PubMed Central

    Tokuda, Junichi; Plishker, William; Torabi, Meysam; Olubiyi, Olutayo I; Zaki, George; Tatli, Servet; Silverman, Stuart G.; Shekhar, Raj; Hata, Nobuhiko

    2015-01-01

    Rationale and Objectives: Accuracy and speed are essential for intraprocedural nonrigid MR-to-CT image registration in the assessment of tumor margins during CT-guided liver tumor ablations. While both accuracy and speed can be improved by limiting the registration to a region of interest (ROI), manual contouring of the ROI prolongs the registration process substantially. To achieve accurate and fast registration without the use of an ROI, we combined a nonrigid registration technique based on volume subdivision with hardware acceleration using a graphics processing unit (GPU). We compared the registration accuracy and processing time of the GPU-accelerated volume subdivision-based nonrigid registration technique with those of the conventional nonrigid B-spline registration technique. Materials and Methods: Fourteen image data sets of preprocedural MR and intraprocedural CT images for percutaneous CT-guided liver tumor ablations were obtained. Each set of images was registered using the GPU-accelerated volume subdivision technique and the B-spline technique. Manual contouring of the ROI was used only for the B-spline technique. Registration accuracies (Dice Similarity Coefficient (DSC) and 95% Hausdorff Distance (HD)) and total processing time, including contouring of ROIs and computation, were compared using a paired Student's t-test. Results: Accuracies of the GPU-accelerated registrations and B-spline registrations were 88.3 ± 3.7% vs. 89.3 ± 4.9% (p = 0.41) for DSC and 13.1 ± 5.2 mm vs. 11.4 ± 6.3 mm (p = 0.15) for HD, respectively. Total processing times of the GPU-accelerated registration and B-spline registration techniques were 88 ± 14 s vs. 557 ± 116 s (p < 0.000000002), respectively; there was no significant difference in computation time despite the difference in the complexity of the algorithms (p = 0.71). Conclusion: The GPU-accelerated volume subdivision technique was as accurate as the B-spline technique and required significantly less processing time.
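
    For reference, the accuracy metrics compared above are standardly defined as follows (textbook definitions, not quoted from the paper):

        \mathrm{DSC}(A, B) = \frac{2\,|A \cap B|}{|A| + |B|}

    where A and B are the registered and reference segmentations; the 95% Hausdorff distance is the 95th percentile of the surface-to-surface distances between A and B, which is less sensitive to outliers than the maximum distance.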

  14. Introduction to LBL graphics

    SciTech Connect

    Not Available

    1984-08-01

    The Computing Services Department supports a number of graphics software packages on the VAX machines, primarily on the IGM VAX. These packages drive a large variety of graphical devices: terminals (including various Tektronix terminals, the AED 512 color raster terminal, the IMLAC Series II vector list processor terminal, and others), various styles of plotters, and the DICOMED D48 film recorder. The following graphics software packages are presented: Tell-A-Graf, Cuechart, Tell-A-Plan, Data Connection, DI-3000, Contouring, Grafmaker (including Grafeasy), Grafmaster, Movie.BYU, Grafpac, IDDS, UGS/HPLOT/HBOOK, and SDL/SGL.

  15. Graphic arts standards update: 1996

    NASA Astrophysics Data System (ADS)

    McDowell, David Q.

    1996-03-01

    Color definition and data exchange continue to be dominant themes in both the US and international graphic arts standards activity. However, there is a growing understanding of the role that metrology and printing process definition play in helping define stable process conditions to which color characterization data can be related. Standards have already been published that define the requirements for color measurement and computation, scanner input characterization targets, four-color output characterization data sets, and graphic arts applications of densitometry. Work continues on standards relating to ink testing and ink color specifications. The numerical specifications of SWOP proof printing have been captured in ANSI standard CGATS.6-1995. Work has been completed, and Technical Report ANSI CGATS TR 001-1995 has been published, that relates the colorimetry of the printed sheet to the CMYK input for press proofing meeting SWOP and CGATS.6 specifications. Work is ongoing to provide similar data for other printing processes. Such color characterization data is key to the development of color profiles for standard printing conditions. Specifications for color profiles, to allow color definitions to be moved between color management systems, are being developed by the International Color Consortium. The existing graphic arts data exchange, process control, and color related standards are summarized and the current status of work in progress is reviewed. In addition, the interaction of the formal standards programs and other industry-driven specification activity is discussed.

  16. A biphasic process of resistance among suspects: The mobilization and decline of self-regulatory resources.

    PubMed

    Madon, Stephanie; Guyll, Max; Yang, Yueran; Smalarz, Laura; Marschall, Justin; Lannin, Daniel G

    2017-04-01

    We conducted two experiments to test whether police interrogation elicits a biphasic process of resistance from suspects. According to this process, the initial threat of police interrogation mobilizes suspects to resist interrogative influence in a manner akin to a fight-or-flight response, but suspects' protracted self-regulation of their behavior during subsequent questioning increases their susceptibility to interrogative influence in the long run. In Experiment 1 (N = 316), participants who were threatened by an accusation of misconduct exhibited responses indicative of mobilization and more strongly resisted social pressure to acquiesce to suggestive questioning than did participants who were not accused. In Experiment 2 (N = 160), self-regulatory decline induced during questioning about misconduct undermined participants' ability to resist suggestive questioning. These findings support a theoretical account of the dynamic and temporal nature of suspects' responses to police interrogation over the course of questioning.

  17. How to Apply for Protection Time Graphic

    EPA Pesticide Factsheets

    We will review insect repellent products that voluntarily apply to use the repellency awareness graphic to ensure that their scientific data meet current testing protocols and standard evaluation processes.

  18. A High Speed Mobile Courier Data Access System That Processes Database Queries in Real-Time

    NASA Astrophysics Data System (ADS)

    Gatsheni, Barnabas Ndlovu; Mabizela, Zwelakhe

    A secure, high-speed, query-processing mobile courier data access (MCDA) system for a courier company has been developed. The system uses wireless networks in combination with wired networks so that an offsite worker (the courier) can update a live database at the courier centre in real time. The system is protected by a VPN based on IPsec. To our knowledge, no existing system performs the courier task as proposed in this paper.

  19. Image reproduction with interactive graphics

    NASA Technical Reports Server (NTRS)

    Buckner, J. D.; Council, H. W.; Edwards, T. R.

    1974-01-01

    Software application or development in optical image digital data processing requires a fast, good quality, yet inexpensive hard copy of processed images. To achieve this, a Cambo camera with an f 2.8/150-mm Xenotar lens in a Copal shutter having a Graflok back for 4 x 5 Polaroid type 57 pack-film has been interfaced to an existing Adage, AGT-30/Electro-Mechanical Research, EMR 6050 graphic computer system. Time-lapse photography in conjunction with a log to linear voltage transformation has resulted in an interactive system capable of producing a hard copy in 54 sec. The interactive aspect of the system lies in a Tektronix 4002 graphic computer terminal and its associated hard copy unit.

  20. Self-Authored Graphic Design: A Strategy for Integrative Studies

    ERIC Educational Resources Information Center

    McCarthy, Steven; De Almeida, Cristina Melibeu

    2002-01-01

    The purpose of this essay is to introduce the concepts of self-authorship in graphic design education as part of an integrative pedagogy. The enhanced potential of harnessing graphic design's dual modalities--the integrative processes inherent in design thinking and doing, and the ability of graphic design to engage other disciplines by giving…

  1. Flowfield computer graphics

    NASA Technical Reports Server (NTRS)

    Desautel, Richard

    1993-01-01

    The objectives of this research include supporting the Aerothermodynamics Branch's research by developing graphical visualization tools for both the branch's adaptive grid code and flow field ray tracing code. The completed research for the reporting period includes development of a graphical user interface (GUI) and its implementation into the NAS Flowfield Analysis Software Tool kit (FAST), for both the adaptive grid code (SAGE) and the flow field ray tracing code (CISS).

  2. GRAPS: Graphical Plotting System.

    DTIC Science & Technology

    1985-07-01

    ...a stand-alone unit, but could (with a few modifications) be incorporated into a larger system, e.g., the IGUANA system (ref 3). The GRAPS system... Incorporating the GRAPS software into another system (e.g., the IGUANA) could possibly introduce additional requirements. I have simply listed those things... References include: Graphics Utility For Army NEC Automation (IGUANA), May 1985; and Hewlett-Packard Corp., Interfacing and Programming Manual - HP 7470A Graphics Plotter.

  3. Development of a prototype chest digital tomosynthesis (CDT) R/F system with fast image reconstruction using graphics processing unit (GPU) programming

    NASA Astrophysics Data System (ADS)

    Choi, Sunghoon; Lee, Seungwan; Lee, Haenghwa; Lee, Donghoon; Choi, Seungyeon; Shin, Jungwook; Seo, Chang-Woo; Kim, Hee-Joung

    2017-03-01

    Digital tomosynthesis offers the advantage of low radiation doses compared to conventional computed tomography (CT) by utilizing a small number of projections (~80) acquired over a limited angular range. It produces 3D volumetric data, although there are artifacts due to incomplete sampling. Based upon these characteristics, we developed a prototype digital tomosynthesis R/F system for applications in chest imaging. Our prototype chest digital tomosynthesis (CDT) R/F system contains an X-ray tube with a high-power R/F pulse generator, a flat-panel detector, an R/F table, electromechanical radiographic subsystems including a precise motor controller, and a reconstruction server. For image reconstruction, users select between analytic and iterative reconstruction methods. Our reconstructed images of the Catphan700 and LUNGMAN phantoms clearly and rapidly depicted the internal structures of the phantoms using graphics processing unit (GPU) programming. Contrast-to-noise ratio (CNR) values for the CTP682 module of the Catphan700 were higher in images reconstructed with a simultaneous algebraic reconstruction technique (SART) than in those using filtered back-projection (FBP) for all materials, by factors of 2.60, 3.78, 5.50, 2.30, 3.70, and 2.52 for air, lung foam, low-density polyethylene (LDPE), Delrin® (acetal homopolymer resin), bone 50% (hydroxyapatite), and Teflon, respectively. Total elapsed times for producing a 3D volume were 2.92 s and 86.29 s on average for FBP and SART (20 iterations), respectively. The times required for reconstruction were clinically feasible. Moreover, the total radiation dose from our system (5.68 mGy) was lower than that of a conventional chest CT scan. Consequently, our prototype tomosynthesis R/F system represents an important advance in digital tomosynthesis applications.
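
    The contrast-to-noise ratio used to compare SART and FBP is conventionally defined as (standard definition, not quoted from the paper):

        \mathrm{CNR} = \frac{\left| \bar{S}_\mathrm{material} - \bar{S}_\mathrm{background} \right|}{\sigma_\mathrm{background}}

    where \bar{S} denotes the mean reconstructed signal in a region of interest and \sigma_\mathrm{background} the standard deviation of the background; the factors of 2.30-5.50 quoted above are then per-material ratios of the CNR under SART to the CNR under FBP.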

  4. Modelling multi-phase liquid-sediment scour and resuspension induced by rapid flows using Smoothed Particle Hydrodynamics (SPH) accelerated with a Graphics Processing Unit (GPU)

    NASA Astrophysics Data System (ADS)

    Fourtakas, G.; Rogers, B. D.

    2016-06-01

    A two-phase numerical model using Smoothed Particle Hydrodynamics (SPH) is applied to two-phase liquid-sediment flows. The absence of a mesh in SPH is ideal for interfacial and highly non-linear flows with changing fragmentation of the interface, mixing, and resuspension. The rheology of sediment under rapid flows passes through several states which are only partially described by previous research in SPH. This paper attempts to bridge the gap between geotechnics and non-Newtonian and Newtonian flows by proposing a model that combines the yielding, shear, and suspension layers needed to predict the global erosion phenomena accurately from a hydrodynamics perspective. The numerical SPH scheme is based on the explicit treatment of both phases using Newtonian and non-Newtonian Bingham-type Herschel-Bulkley-Papanastasiou constitutive models. This is supplemented by the Drucker-Prager yield criterion, to predict the onset of yielding of the sediment surface, and a concentration suspension model. The multi-phase model has been compared with experimental and 2-D reference numerical models for scour following a dry-bed dam break, yielding satisfactory results and improvements over well-known SPH multi-phase models. With 3-D simulations requiring a large number of particles, the code is accelerated with a graphics processing unit (GPU) in the open-source DualSPHysics code. The implementation and optimisation of the code achieved a speed-up of 58x over an optimised single-thread serial code. A 3-D dam break over a non-cohesive erodible bed simulation with over 4 million particles yields close agreement with experimental scour and water surface profiles.
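
    The Herschel-Bulkley-Papanastasiou constitutive model named above is commonly written as an effective viscosity (standard form, added here for reference):

        \eta_\mathrm{eff}(\dot\gamma) = k \, \dot\gamma^{\,n-1} + \frac{\tau_y}{\dot\gamma} \left( 1 - e^{-m \dot\gamma} \right), \qquad \tau = \eta_\mathrm{eff} \, \dot\gamma

    with consistency k, flow index n, yield stress \tau_y, and Papanastasiou regularization parameter m; the exponential term removes the singularity at zero shear rate, so a single expression holds below and above yield.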

  5. Parallelizing flow-accumulation calculations on graphics processing units—From iterative DEM preprocessing algorithm to recursive multiple-flow-direction algorithm

    NASA Astrophysics Data System (ADS)

    Qin, Cheng-Zhi; Zhan, Lijun

    2012-06-01

    As one of the important tasks in digital terrain analysis, the calculation of flow accumulations from gridded digital elevation models (DEMs) usually involves two steps in a real application: (1) using an iterative DEM preprocessing algorithm to remove the depressions and flat areas commonly contained in real DEMs, and (2) using a recursive flow-direction algorithm to calculate the flow accumulation for every cell in the DEM. Because both algorithms are computationally intensive, quick calculation of the flow accumulations from a DEM (especially for a large area) presents a practical challenge to personal computer (PC) users. In recent years, rapid increases in the hardware capacity of the graphics processing units (GPUs) provided in modern PCs have made it possible to meet this challenge in a PC environment. Parallel computing on GPUs using the Compute Unified Device Architecture (CUDA) programming model has been explored to speed up the execution of the single-flow-direction (SFD) algorithm. However, a parallel GPU implementation of the multiple-flow-direction (MFD) algorithm, which generally performs better than the SFD algorithm, has not been reported. Moreover, GPU-based parallelization of the DEM preprocessing step in the flow-accumulation calculation has not been addressed. This paper proposes a parallel approach to calculate flow accumulations (including both iterative DEM preprocessing and a recursive MFD algorithm) on a CUDA-compatible GPU. For the parallelization of the MFD algorithm (MFD-md), two different GPU parallelization strategies are explored. The first strategy, which has been used in the existing parallel SFD algorithm on the GPU, suffers from computational redundancy. Therefore, we designed a parallelization strategy based on graph theory. The application results show that the proposed parallel approach to calculating flow accumulations on a GPU performs much faster than either sequential algorithms or other parallel GPU-based approaches.
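
    The paper parallelizes both DEM preprocessing and a multiple-flow-direction (MFD) algorithm; as a much-simplified illustration of the iterative GPU pattern involved, the CUDA sketch below computes single-flow-direction (D8) flow accumulation by relaunching a kernel until every cell whose upstream donors are all resolved has finalized its value. The brute-force donor scan and all names are invented for the example:

      #include <cuda_runtime.h>
      #include <cstdio>
      #include <utility>

      // dir[j] is the index of the cell that cell j drains into (-1 = outlet).
      // A cell finalizes its accumulation only once all of its donors have;
      // the kernel is relaunched until a pass makes no progress.
      __global__ void accumulate(const int* dir, const float* accIn, float* accOut,
                                 const int* doneIn, int* doneOut, int n, int* changed)
      {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i >= n) return;
          accOut[i] = accIn[i];
          doneOut[i] = doneIn[i];
          if (doneIn[i]) return;

          float a = 1.0f;                      // the cell contributes itself
          for (int j = 0; j < n; ++j) {        // O(n) donor scan: sketch only
              if (dir[j] == i) {
                  if (!doneIn[j]) return;      // an upstream donor is unresolved
                  a += accIn[j];
              }
          }
          accOut[i] = a;
          doneOut[i] = 1;
          *changed = 1;
      }

      int main()
      {
          const int n = 64;                    // toy DEM: a single flow chain
          int *dir, *d0, *d1, *changed;
          float *a0, *a1;
          cudaMallocManaged(&dir, n * sizeof(int));
          cudaMallocManaged(&d0, n * sizeof(int));
          cudaMallocManaged(&d1, n * sizeof(int));
          cudaMallocManaged(&a0, n * sizeof(float));
          cudaMallocManaged(&a1, n * sizeof(float));
          cudaMallocManaged(&changed, sizeof(int));
          for (int i = 0; i < n; ++i) { dir[i] = (i + 1 < n) ? i + 1 : -1; d0[i] = 0; a0[i] = 1.0f; }

          do {
              *changed = 0;
              accumulate<<<(n + 255) / 256, 256>>>(dir, a0, a1, d0, d1, n, changed);
              cudaDeviceSynchronize();
              std::swap(a0, a1);
              std::swap(d0, d1);
          } while (*changed);

          printf("accumulation at outlet: %.0f (expected %d)\n", a0[n - 1], n);
          return 0;
      }

    Double-buffering the done/accumulation arrays keeps each pass race-free; a real implementation would replace the O(n) donor scan with the eight-neighbor lookup of the grid, or with the graph-theory-based strategy the paper proposes.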

  6. Enhancing mobile phones for people with visual impairments through haptic icons: the effect of learning processes.

    PubMed

    Galdón, Pedro Maria; Madrid, R Ignacio; De La Rubia-Cuestas, Ernesto J; Diaz-Estrella, Antonio; Gonzalez, Lourdes

    2013-01-01

    We report the results of a study on the learnability of haptic icons used in a system for incoming-call identification in mobile phones. The aim was to explore the feasibility of using haptic icons to create new assistive technologies for people with visual impairments. We compared the performance and satisfaction of users with different visual capacities (visually impaired vs. sighted) using different learning processes (unimodal vs. multimodal). A better recognition rate and user experience were observed for the visually impaired than for sighted users, and for multimodal rather than unimodal learning processes.

  7. A Photo Storm Report Mobile Application, Processing/Distribution System, and AWIPS-II Display Concept

    NASA Astrophysics Data System (ADS)

    Longmore, S. P.; Bikos, D.; Szoke, E.; Miller, S. D.; Brummer, R.; Lindsey, D. T.; Hillger, D.

    2014-12-01

    The increasing use of mobile phones equipped with digital cameras, and the ability to post images and information to the Internet in real time, has significantly improved the ability to report events almost instantaneously. In the context of severe weather reports, a representative digital image conveys significantly more information than a simple text or phone-relayed report to a weather forecaster issuing severe weather warnings. It also allows the forecaster to reasonably discern the validity and quality of a storm report. Posting geolocated, time-stamped storm report photographs to NWS social media weather forecast office pages via a mobile phone application has generated positive feedback from forecasters. Building upon this feedback, this discussion advances the concept, development, and implementation of a formalized Photo Storm Report (PSR) mobile application, a processing and distribution system, and Advanced Weather Interactive Processing System II (AWIPS-II) plug-in display software. The PSR system would be composed of three core components: i) a mobile phone application, ii) processing and distribution software and hardware, and iii) AWIPS-II data, exchange, and visualization plug-in software. i) The mobile phone application would allow web-registered users to send geolocation, view direction, and time-stamped PSRs, along with severe weather type and comments, to the processing and distribution servers. ii) The servers would receive PSRs, convert images and information to NWS network-bandwidth-manageable sizes in an AWIPS-II data format, distribute them on the NWS data communications network, and archive the original PSRs for possible future research datasets. iii) The AWIPS-II data and exchange plug-ins would archive PSRs, and the visualization plug-in would display PSR locations, times, and directions by hour, similar to surface observations. Hovering over individual PSRs would reveal photo thumbnails, and clicking on them would display the full photographs.

  8. The Role of Mobile Technologies in Health Care Processes: The Case of Cancer Supportive Care

    PubMed Central

    Cucciniello, Maria; Guerrazzi, Claudia

    2015-01-01

    Background: Health care systems are gradually moving toward new models of care based on integrated care processes shared by different caregivers and on an empowered role for the patient. Mobile technologies are assuming an emerging role in this scenario. This is particularly true in care processes where the patient has a particularly enhanced role, as is the case in cancer supportive care. Objective: This paper aims to review existing studies on the actual role and use of mobile technology during the different stages of care processes, with particular reference to cancer supportive care. Methods: We carried out a literature review with the aim of identifying studies related to the use of mHealth in cancer care and cancer supportive care. The final sample consists of 106 records. Results: There is scant literature concerning the use of mHealth in cancer supportive care. Looking more generally at cancer care, we found that mHealth is mainly used for self-management activities carried out by patients. The main tools used are mobile devices such as mobile phones and tablets, but remote monitoring devices also play an important role. Text messaging technologies (short message service, SMS) have a minor role, except in middle-income countries, where text messaging plays a major role. Telehealth technologies are still rarely used in cancer care processes. Looking at the different stages of health care processes, mHealth is mainly used during the treatment of patients, especially for self-management activities. It is also used for prevention and diagnosis, although to a lesser extent, whereas it appears rarely used for decision-making and follow-up activities. Conclusions: Since mHealth seems to be employed only for limited uses and during limited phases of the care process, it is unlikely that it can really contribute to the creation of new care models. This under-utilization may depend on many issues, including the need for it to be embedded

  9. Parallel processor-based raster graphics system architecture

    DOEpatents

    Littlefield, Richard J.

    1990-01-01

    An apparatus for generating raster graphics images from a graphics command stream includes a plurality of graphics processors connected in parallel, each adapted to receive any part of the graphics command stream and process that part into pixel data. The apparatus also includes a frame buffer for mapping the pixel data to pixel locations and an interconnection network for interconnecting the graphics processors to the frame buffer. Through the interconnection network, each graphics processor may access any part of the frame buffer concurrently with another graphics processor accessing any other part of the frame buffer. The plurality of graphics processors can thereby concurrently transmit pixel data to pixel locations in the frame buffer.

  10. HIN9/468: The Last Mile - Secure and mobile data processing in healthcare

    PubMed Central

    Bludau, HB; Vocke, A; Herzog, W

    1999-01-01

    Motivation: According to the Federal Ministry, the avowed target of modern medicine is to administer the best medical care, the newest scientific insights, and the knowledge of experienced specialists on affordable conditions to every patient, no matter whether he is located in a rural area or in a teaching hospital. One way of administering information is via mobile tools. To find out more about the influence of mobile computers on the physician-patient relationship, the acceptance of these tools as well as the prerequisites for new security and data-processing concepts were investigated in a simulation study. Methods: Based on a personal digital assistant, a prototype was developed. The Apple Newton was used because it appeared suitable for easy data input and retrieval by means of a touch screen with handwriting recognition. The device was coupled with a conventional cellular phone for voice and data transfer. The prototype provided several functions for information processing: access to a patient database, access to medical knowledge, documentation of diagnoses, electronic request forms for investigations, and tools for personal organization. A prototype of an accessibility and safety manager was also integrated. This software enables users to control telephone accessibility individually; situational adjustments and a complex set of rules configure the way arriving calls are dealt with. Moreover, the software contains a component for sending and receiving text messages. The Simulation Study: In simulation studies, test users are observed while working with prototypical technology in a close-to-reality environment. The aim is to test an early prototype in its avowed environment to obtain design proposals for the technology from its future users. Within the Ladenburger group "Security in communications technology" of the Gottlieb-Daimler und Karl-Benz-Stiftung, an investigation at the Heidelberg University Medical Centre was conducted under organisational management

  11. A Graphical Physics Course

    NASA Astrophysics Data System (ADS)

    Wood, Roy C.

    2001-11-01

    There has been a desire in recent years to introduce physics to students at the middle school or freshman high school level. However, traditional physics courses involve a great deal of mathematics, and this makes physics unattractive to many of these students. In the last few decades, courses have been developed with a focus that is more conceptual than mathematical, generally referred to as conceptual physics. These two types of courses emphasize two of the methods that physicists use to solve physics problems. However, there is a third, graphical method that is also useful and complements mathematical and verbal reasoning. A course emphasizing graphical methods would deal with quantitative graphical diagrams as well as qualitative diagrams. Examples of quantitative graphical diagrams are scaled force diagrams and scaled optical ray-tracing diagrams. A course based on this type of approach would involve measurements and uncertainties, and would involve active (hands-on) student participation suitable for younger students. This talk will discuss a graphical physics course and its benefits to younger students.

  12. Control of Chemical Effects in the Separation Process of a Differential Mobility / Mass Spectrometer System

    PubMed Central

    Schneider, Bradley B.; Coy, Stephen L.; Krylov, Evgeny V.; Nazarov, Erkinjon G.

    2013-01-01

    Differential mobility spectrometry (DMS) separates ions on the basis of the difference in their migration rates under high versus low electric fields. Several models describing the physical nature of this field dependence of mobility have been proposed, but the clusterization model, sometimes referred to as the dynamic cluster-decluster model, has emerged as describing the dominant effect. DMS resolution and peak capacity are strongly influenced by the addition of modifiers, which results in the formation and dissociation of clusters. This process increases selectivity due to the unique chemical interactions that occur between an ion and neutral gas-phase molecules. It is thus imperative to bring the parameters influencing these chemical interactions under control and to find ways to exploit them in order to improve the analytical utility of the device. In this paper we describe three important areas that need consideration in order to stabilize and capitalize on the chemical processes that dominate a DMS separation. The first involves means of controlling the dynamic equilibrium of the clustering reactions with high concentrations of specific reagents. The second involves a means of dealing with the unwanted heterogeneous cluster-ion populations emitted from the electrospray ionization process, which degrade resolution and sensitivity. The third involves fine control of the parameters that affect the fundamental collision processes: temperature and pressure. PMID:20065515
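
    The field dependence that DMS exploits is customarily parameterized through the alpha function (the standard form in the DMS literature, added here for context):

        K(E/N) = K(0) \left[ 1 + \alpha(E/N) \right], \qquad \alpha(E/N) = \alpha_2 (E/N)^2 + \alpha_4 (E/N)^4 + \cdots

    where E/N is the reduced electric field; modifier-driven clustering and declustering reshape \alpha, which is what the chemical-control strategies described above manipulate.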

  13. Graphical functions in parametric space

    NASA Astrophysics Data System (ADS)

    Golz, Marcel; Panzer, Erik; Schnetz, Oliver

    2016-12-01

    Graphical functions are positive functions on the punctured complex plane C \ {0, 1} which arise in quantum field theory. We generalize a parametric integral representation for graphical functions due to Lam, Lebrun, and Nakanishi, which implies the real analyticity of graphical functions. Moreover, we prove a formula that relates graphical functions of planar dual graphs.

  14. mHealth Quality: A Process to Seal the Qualified Mobile Health Apps.

    PubMed

    Yasini, Mobin; Beranger, Jérôme; Desmarais, Pierre; Perez, Lucas; Marchand, Guillaume

    2016-01-01

    A large number of mobile health applications (apps) are currently available, with a variety of functionalities. User ratings in the app stores do not seem to be a reliable way to determine the quality of the apps, and traditional methods of evaluation are not suitable for the fast-paced nature of mobile technology. In this study, we propose a collaborative multidimensional scale to assess the quality of mHealth apps. In our process, app quality is assessed in various respects, including medical reliability, legal consistency, ethical consistency, usability, personal data privacy, and IT security. A hypothetico-deductive approach was used in various working groups to define the audit criteria based on the various use cases that an app could provide. These criteria were then implemented in web-based, self-administered questionnaires, and the generation of automatic reports was considered. This method is, on the one hand, specific to each app, because it assesses each health app according to the functionalities it offers. On the other hand, it is automatic, transferable to all apps, and adapted to the dynamic nature of mobile technology.

  15. Graphical workstation capability for reliability modeling

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.; Koppen, Sandra V.; Haley, Pamela J.

    1992-01-01

    In addition to computational capabilities, software tools for estimating the reliability of fault-tolerant digital computer systems must also provide a means of interfacing with the user. Described here is the new graphical interface capability of the hybrid automated reliability predictor (HARP), a software package that implements advanced reliability modeling techniques. The graphics-oriented (GO) module provides the user with a graphical language for modeling system failure modes through the selection of various fault-tree gates, including sequence-dependency gates, or by a Markov chain. By using this graphical input language, a fault tree becomes a convenient notation for describing a system. In accounting for any sequence dependencies, HARP converts the fault-tree notation to a complex stochastic process that is reduced to a Markov chain, which it can then solve for system reliability. The graphics capability is available for use on an IBM-compatible PC, a Sun, and a VAX workstation. The GO module is written in the C programming language and uses the Graphical Kernel System (GKS) standard for graphics implementation. The PC, VAX, and Sun versions of the HARP GO module are currently in beta-testing stages.

  16. Near Real-Time Assessment of Anatomic and Dosimetric Variations for Head and Neck Radiation Therapy via Graphics Processing Unit–based Dose Deformation Framework

    SciTech Connect

    Qi, X. Sharon; Santhanam, Anand; Neylon, John; Min, Yugang; Armstrong, Tess; Sheng, Ke; Staton, Robert J.; Pukala, Jason; Pham, Andrew; Low, Daniel A.; Lee, Steve P.; Steinberg, Michael; Manon, Rafael; Chen, Allen M.; Kupelian, Patrick

    2015-06-01

    Purpose: The purpose of this study was to systematically monitor anatomic variations and their dosimetric consequences during intensity modulated radiation therapy (IMRT) for head and neck (H&N) cancer by using a graphics processing unit (GPU)-based deformable image registration (DIR) framework. Methods and Materials: Eleven H&N cancer patients undergoing IMRT with daily megavoltage computed tomography (CT) and weekly kilovoltage CT (kVCT) scans were included in this analysis. Pretreatment kVCTs were automatically registered with their corresponding planning CTs through a GPU-based DIR framework. The deformation of each contoured structure in the H&N region was computed to account for nonrigid changes in the patient setup. The Jacobian determinant of the planning target volumes and the surrounding critical structures was used to quantify anatomical volume changes. The actual delivered dose was calculated accounting for the organ deformation. The dose distribution uncertainties due to registration errors were estimated using a landmark-based gamma evaluation. Results: Dramatic interfractional anatomic changes were observed. During the treatment course of 6 to 7 weeks, the parotid gland volumes changed by up to 34.7%, and the center-of-mass displacement of the 2 parotid glands varied in the range of 0.9 to 8.8 mm. For the primary treatment volume, the cumulative minimum, mean, and equivalent uniform doses assessed by the weekly kVCTs were lower than the planned doses by up to 14.9% (P=.14), 2% (P=.39), and 7.3% (P=.05), respectively. The cumulative mean doses were significantly higher than the planned dose for the left parotid (P=.03) and right parotid glands (P=.006). The computation, including DIR and dose accumulation, was ultrafast (~45 seconds), with registration accuracy at the subvoxel level. Conclusions: A systematic analysis of anatomic variations in the H&N region and their dosimetric consequences is critical to improving treatment efficacy. Near real-time assessment is feasible with the GPU-based framework.
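
    The Jacobian-determinant measure of volume change used above is defined from the deformation map \varphi recovered by the DIR (standard usage, not quoted from the paper):

        J(\mathbf{x}) = \det\!\left( \nabla \varphi(\mathbf{x}) \right)

    with J > 1 indicating local expansion and J < 1 local shrinkage; integrating J over a contoured structure gives its deformed volume, so a 34.7% volume change in a parotid gland corresponds to a mean |J - 1| of about 0.35 over that structure.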

  17. High-Mobility Ambipolar Organic Thin-Film Transistor Processed From a Nonchlorinated Solvent.

    PubMed

    Sonar, Prashant; Chang, Jingjing; Kim, Jae H; Ong, Kok-Haw; Gann, Eliot; Manzhos, Sergei; Wu, Jishan; McNeill, Christopher R

    2016-09-21

    The polymer semiconductor PDPPF-DFT, which combines furan-substituted diketopyrrolopyrrole (DPP) and a 3,4-difluorothiophene unit, has been designed and synthesized. A PDPPF-DFT thin film processed from nonchlorinated hexane is used as the active layer in thin-film transistors. Balanced hole and electron mobilities of 0.26 and 0.12 cm²/(V·s), respectively, are achieved for PDPPF-DFT. This is the first report of a nonchlorinated hexane solvent being used to fabricate high-performance ambipolar thin-film transistor devices.

  18. Doping suppression and mobility enhancement of graphene transistors fabricated using an adhesion promoting dry transfer process

    SciTech Connect

    Cheol Shin, Woo; Hun Mun, Jeong; Yong Kim, Taek; Choi, Sung-Yool; Jin Cho, Byung; Yoon, Taeshik; Kim, Taek-Soo

    2013-12-09

    We present the facile dry transfer of graphene synthesized via chemical vapor deposition on copper film to a functional device substrate. High quality uniform dry transfer of graphene to oxidized silicon substrate was achieved by exploiting the beneficial features of a poly(4-vinylphenol) adhesive layer involving a strong adhesion energy to graphene and negligible influence on the electronic and structural properties of graphene. The graphene field effect transistors (FETs) fabricated using the dry transfer process exhibit excellent electrical performance in terms of high FET mobility and low intrinsic doping level, which proves the feasibility of our approach in graphene-based nanoelectronics.

  19. Recovery of critical and value metals from mobile electronics enabled by electrochemical processing

    SciTech Connect

    Tedd E. Lister; Peiming Wang; Andre Anderko

    2014-10-01

    Electrochemistry-based schemes were investigated as a means to recover critical and value metals from scrap mobile electronics. Mobile electronics offer a growing feedstock for replenishing value and critical metals and for reducing the need to exhaust primary sources. The electrorecycling process generates oxidizing agents at an anode to dissolve metals from the scrap matrix while reducing dissolved metals at the cathode. The process uses a single cell to maximize energy efficiency. E vs. pH diagrams and metal dissolution experiments were used to assess the effectiveness of various solution chemistries. Following this work, a flow chart was developed in which two stages of electrorecycling were proposed: 1) initial dissolution of Cu, Sn, Ag, and magnet materials using Fe³⁺ generated in acidic sulfate, and 2) final dissolution of Pd and Au using Cl₂ generated in an HCl solution. Experiments were performed using a simulated metal mixture equivalent to 5 cell phones. Both Cu and Ag were recovered at ~97% using Fe³⁺ while leaving Au and Pd intact. A strategy for extraction of rare earth elements (REEs) from the dissolved streams is discussed, as well as future directions in process development.
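
    The single-cell idea can be made concrete with the iron-mediated copper stage (the reactions below are the standard couples involved; the chlorine-mediated Au/Pd stage is analogous):

        anode:    Fe²⁺ → Fe³⁺ + e⁻
        leach:    Cu(s) + 2 Fe³⁺ → Cu²⁺ + 2 Fe²⁺
        cathode:  Cu²⁺ + 2 e⁻ → Cu(s)

    The oxidant is continuously regenerated at the anode while recovered metal plates out at the cathode, which is why a single cell suffices and why the approach is energy efficient.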

  20. Data processing and quality evaluation of a boat-based mobile laser scanning system.

    PubMed

    Vaaja, Matti; Kukko, Antero; Kaartinen, Harri; Kurkela, Matti; Kasvi, Elina; Flener, Claude; Hyyppä, Hannu; Hyyppä, Juha; Järvelä, Juha; Alho, Petteri

    2013-09-17

    Mobile mapping systems (MMSs) are used for mapping topographic and urban features which are difficult and time-consuming to measure with other instruments. The benefits of MMSs include efficient data collection and versatile usability. This paper investigates the data processing steps and data quality of a boat-based mobile mapping system (BoMMS) for generating terrain and vegetation points in a river environment. Our aims in data processing were to filter noise points, detect shorelines as well as points below the water surface, and conduct ground point classification. Previous studies of BoMMS have investigated elevation accuracies and usability in the detection of fluvial erosion and deposition areas. The new findings concerning BoMMS data are that the improved data processing approach allows for identification of multipath reflections and shoreline delineation. We demonstrate the possibility of measuring bathymetry data in shallow (0-1 m) and clear water. Furthermore, we evaluate for the first time the accuracy of the BoMMS ground point classification compared to manually classified data. We also demonstrate the spatial variations of the ground point density and assess the elevation and vertical accuracies of the BoMMS data.

  2. Quantifying Visual Similarity in Clinical Iconic Graphics

    PubMed Central

    Payne, Philip R.O.; Starren, Justin B.

    2005-01-01

    Objective: The use of icons and other graphical components in user interfaces has become nearly ubiquitous. The interpretation of such icons is based on the assumption that different users perceive the shapes similarly. At the most basic level, different users must agree on which shapes are similar and which are different. If this similarity can be measured, it may be usable as the basis to design better icons. Design: The purpose of this study was to evaluate a novel method for categorizing the visual similarity of graphical primitives, called Presentation Discovery, in the domain of mammography. Six domain experts were given 50 common textual mammography findings and asked to draw how they would represent those findings graphically. Nondomain experts sorted the resulting graphics into groups based on their visual characteristics. The resulting groups were then analyzed using traditional statistics and hypothesis discovery tools. Strength of agreement was evaluated using computational simulations of sorting behavior. Measurements: Sorter agreement was measured at both the individual graphical and concept-group levels using a novel simulation-based method. “Consensus clusters” of graphics were derived using a hierarchical clustering algorithm. Results: The multiple sorters were able to reliably group graphics into similar groups that strongly correlated with underlying domain concepts. Visual inspection of the resulting consensus clusters indicated that graphical primitives that could be informative in the design of icons were present. Conclusion: The method described provides a rigorous alternative to intuitive design processes frequently employed in the design of icons and other graphical interface components. PMID:15684136

  3. Techno-economic assessment of the Mobil Two-Stage Slurry Fischer-Tropsch/ZSM-5 process

    SciTech Connect

    El Sawy, A.; Gray, D.; Neuworth, M.; Tomlinson, G.

    1984-11-01

    A techno-economic assessment of the Mobil Two-Stage Slurry Fischer-Tropsch reactor system was carried out. Mobil bench-scale data were evaluated and scaled to a commercial plant design that produced specification high-octane gasoline and high-cetane diesel fuel. Comparisons were made with three reference plants - a SASOL (US) plant using dry ash Lurgi gasifiers and Synthol synthesis units, a modified SASOL plant with a British Gas Corporation slagging Lurgi gasifier (BGC/Synthol) and a BGC/slurry-phase process based on scaled data from the Koelbel Rheinpreussen-Koppers plant. A conceptual commercial version of the Mobil two-stage process shows a higher process efficiency than a SASOL (US) and a BGC/Synthol plant. The Mobil plant gave lower gasoline costs than obtained from the SASOL (US) and BGC/Synthol versions. Comparison with published data from a slurry-phase Fischer-Tropsch (Koelbel) unit indicated that product costs from the Mobil process were within 6% of the Koelbel values. A high-wax version of the Mobil process combined with wax hydrocracking could produce gasoline and diesel fuel at comparable cost to the lowest values achieved from prior published slurry-phase results. 27 references, 18 figures, 49 tables.

  4. Software For Animated Graphics

    NASA Technical Reports Server (NTRS)

    Merritt, F.; Bancroft, G.; Kelaita, P.

    1992-01-01

    Graphics Animation System (GAS) software package serves as easy-to-use, menu-driven program providing fast, simple viewing capabilities as well as more-complex features for rendering and animation in computational fluid dynamics (CFD). Displays two- and three-dimensional objects along with computed data and records animation sequences on video digital disk, videotape, and 16-mm film. Written in C.

  5. Graphic Life Map.

    ERIC Educational Resources Information Center

    Schulze, Patricia

    This is a prewriting activity for personal memoir or autobiographical writing. Grade 6-8 students brainstorm for important memories, create graphics or symbols for their most important memories, and construct a life map on tag board or construction paper, connecting drawings and captions of high and low points with a highway. During four 50-minute…

  6. Comics & Graphic Novels

    ERIC Educational Resources Information Center

    Cleaver, Samantha

    2008-01-01

    Not so many years ago, comic books in school were considered the enemy. Students caught sneaking comics between the pages of bulky--and less engaging--textbooks were likely sent to the principal. Today, however, comics, including classics such as "Superman" but also their generally more complex, nuanced cousins, graphic novels, are not only…

  7. Computing Graphical Confidence Bounds

    NASA Technical Reports Server (NTRS)

    Mezzacappa, M. A.

    1983-01-01

    An approximation for graphical confidence bounds is simple enough to run on a programmable calculator. The approximation is used in lieu of numerical tables, which are not always available, and exact calculations, which often require rather sizable computer resources. The approximation has been verified for collections of up to 50 data points. The method was used to analyze tile-strength data on the Space Shuttle thermal-protection system.

  8. Graphic Novels: A Roundup.

    ERIC Educational Resources Information Center

    Kan, Katherine L.

    1994-01-01

    Reviews graphic novels for young adults, including five titles from "The Adventures of Tintin," a French series that often uses ethnic and racial stereotypes which reflect the time in which they were published, and "Wolverine," a Marvel comic character adventure. (Contains six references.) (LRW)

  9. Evaluation of food processing wastewater loading characteristics on metal mobilization within the soil.

    PubMed

    Julien, Ryan; Safferman, Steven

    2015-01-01

    Wastewater generated during food processing is commonly treated using land-application systems, which rely primarily on soil microbes to transform nutrients and organic compounds into benign byproducts. Naturally occurring metals in the soil may be chemically reduced via microbially mediated oxidation-reduction reactions as oxygen becomes depleted. Some metals, such as manganese and iron, become water soluble when chemically reduced, leading to groundwater contamination. Alternatively, metals within the wastewater may not become assimilated into the soil, and may leach into the groundwater, if the environment is not sufficiently oxidizing. A lab-scale column study was conducted to investigate the impacts of wastewater loading values on metal mobilization within the soil. Oxygen content and volumetric water data were collected via soil sensors for the duration of the study. The pH, chemical oxygen demand, manganese, and iron concentrations in the influent and effluent water from each column were measured. Using Spearman's rank correlation coefficient, average organic loading and organic loading per dose were shown to have statistically significant impacts on effluent water quality. The hydraulic resting period qualitatively appeared to affect effluent water quality. This study verifies that excessive organic loading of land-application systems mobilizes naturally occurring metals and prevents those added in the wastewater from becoming immobilized, resulting in ineffective wastewater treatment. The results also indicate the need to consider the organic dose load and hydraulic resting period in treatment system design. Findings from this study demonstrate that waste application twice daily may encourage soil aeration and allow for increased organic loading while limiting the mobilization of metals already in the soil and those being applied.
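
    For reference, the Spearman rank correlation used for the significance tests is (the standard formula for untied ranks, not taken from the paper):

        \rho = 1 - \frac{6 \sum_i d_i^2}{n (n^2 - 1)}

    where d_i is the difference between the ranks of observation i in the two variables (for example, organic loading versus effluent metal concentration) and n is the number of paired observations.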

  10. Graphics database creation and manipulation: HyperCard Graphics Database Toolkit and Apple Graphics Source

    NASA Astrophysics Data System (ADS)

    Herman, Jeffrey; Fry, David

    1990-08-01

    Because graphic files can be stored in a number of different file formats, it has traditionally been difficult to create a graphics database from which users can open, copy, and print graphic files, where each file in the database may be in one of several different formats. The HyperCard Graphics Database Toolkit has been designed and written by Apple Computer to enable software developers to facilitate the creation of customized graphics databases. Using a database developed with the toolkit, users can open, copy, or print a graphic transparently, without having to know or understand the complexities of file formats. In addition, the toolkit includes a graphical user interface, graphic design, online help, and search algorithms that enable users to locate specific graphics quickly. Currently, the toolkit handles graphics in the MPNT, PICT, and EPSF formats, and is open to supporting other formats as well. Developers can use the toolkit to alter the code, the interface, and the graphic design in order to customize their database for the needs of their users. This paper discusses the structure of the toolkit and one implementation, Apple Graphics Source (AGS). AGS contains over 2,000 graphics used in Apple's books and manuals. AGS enables users to find existing graphics of Apple products and use them for presentations, new publications, papers, and software projects.

  11. Perception of tactile graphics: embossings versus cutouts.

    PubMed

    Kalia, Amy; Hopkins, Rose; Jin, David; Yazzolino, Lindsay; Verma, Svena; Merabet, Lotfi; Phillips, Flip; Sinha, Pawan

    2014-01-01

    Graphical information, such as illustrations, graphs, and diagrams, are an essential complement to text for conveying knowledge about the world. Although graphics can be communicated well via the visual modality, conveying this information via touch has proven to be challenging. The lack of easily comprehensible tactile graphics poses a problem for the blind. In this paper, we advance a hypothesis for the limited effectiveness of tactile graphics. The hypothesis contends that conventional graphics that rely upon embossings on two-dimensional surfaces do not allow the deployment of tactile exploratory procedures that are crucial for assessing global shape. Besides potentially accounting for some of the shortcomings of current approaches, this hypothesis also serves a prescriptive purpose by suggesting a different strategy for conveying graphical information via touch, one based on cutouts. We describe experiments demonstrating the greater effectiveness of this approach for conveying shape and identity information. These results hold the potential for creating more comprehensible tactile drawings for the visually impaired while also providing insights into shape estimation processes in the tactile modality.

  12. Span graphics display utilities handbook, first edition

    NASA Technical Reports Server (NTRS)

    Gallagher, D. L.; Green, J. L.; Newman, R.

    1985-01-01

    The Space Physics Analysis Network (SPAN) is a computer network connecting scientific institutions throughout the United States. This network provides an avenue for timely, correlative research between investigators in a multidisciplinary approach to space physics studies. An objective in the development of SPAN is to make available direct and simplified procedures that scientists can use, without specialized training, to exchange information over the network. Information exchanges include raw and processed data, analysis programs, correspondence, documents, and graphic images. This handbook details procedures that can be used to exchange graphic images over SPAN. The intent is to periodically update the handbook to reflect the constantly changing facilities available on SPAN. The utilities described within reflect an earnest attempt to provide useful descriptions of working utilities that can be used to transfer graphic images across the network. Whether graphic images represent satellite observations or theoretical modeling, and whether they are device dependent or device independent, the SPAN graphics display utilities handbook is the user's guide to graphic image exchange.

  13. Hydrothermal Processes and Mobile Element Transport in Martian Impact Craters - Evidence from Terrestrial Analogue Craters

    NASA Technical Reports Server (NTRS)

    Newsom, H. E.; Nelson, M. J.; Shearer, C. K.; Dressler, B. L.

    2005-01-01

    Hydrothermal alteration and chemical transport involving impact craters probably occurred on Mars throughout its history. Our studies of alteration products and mobile element transport in ejecta blanket and drill core samples from impact craters show that these processes may have contributed to the surface composition of Mars. Recent work on the Chicxulub Yaxcopoil-1 drill core has provided important information on the relative mobility of many elements that may be relevant to Mars. The Chicxulub impact structure in the Yucatan Peninsula of Mexico and offshore in the Gulf of Mexico is one of the largest impact craters identified on the Earth, has a diameter of 180-200 km, and is associated with the mass extinctions at the K/T boundary. The Yax-1 hole was drilled in 2001 and 2002 on the Yaxcopoil hacienda near Merida on the Yucatan Peninsula. Yax-1 is located just outside of the transient cavity, which explains some of the unusual characteristics of the core stratigraphy. No typical impact melt sheet was encountered in the hole and most of the Yax-1 impactites are breccias. In particular, the impact melt and breccias are only 100 m thick which is surprising taking into account the considerably thicker breccia accumulations towards the center of the structure and farther outside the transient crater encountered by other drill holes.

  14. Career Opportunities in Computer Graphics.

    ERIC Educational Resources Information Center

    Langer, Victor

    1983-01-01

    Reviews the impact of computer graphics on industrial productivity. Details the computer graphics technician curriculum at Milwaukee Area Technical College and the cooperative efforts of business and industry to fund and equip the program. (SK)

  15. Graphic Grown Up

    ERIC Educational Resources Information Center

    Kim, Ann

    2009-01-01

    It's no secret that children and YAs are clued in to graphic novels (GNs) and that comics-loving adults are positively giddy that this format is getting the recognition it deserves. Still, there is a whole swath of library card-carrying grown-up readers out there with no idea where to start. Splashy movies such as "300" and "Spider-Man" and their…

  16. Graphical Contingency Analysis Tool

    SciTech Connect

    2010-03-02

    GCA is a visual analytic tool for power grid contingency analysis that provides decision support for power grid operations. GCA allows power grid operators to quickly gain situational awareness of the power grid by converting large amounts of operational data to the graphic domain with a color-contoured map; identify system trends and foresee and discern emergencies by performing trending analysis; identify the relationships between system configurations and affected assets by conducting clustering analysis; and identify the best action by interactively evaluating candidate actions.

  17. John Herschel's Graphical Method

    NASA Astrophysics Data System (ADS)

    Hankins, Thomas L.

    2011-01-01

    In 1833 John Herschel published an account of his graphical method for determining the orbits of double stars. He had hoped to be the first to determine such orbits, but Felix Savary in France and Johann Franz Encke in Germany beat him to the punch using analytical methods. Herschel was convinced, however, that his graphical method was much superior to analytical methods, because it used the judgment of the hand and eye to correct the inevitable errors of observation. Line graphs of the kind used by Herschel became common only in the 1830s, so Herschel was introducing a new method. He also found computation fatiguing and devised a "wheeled machine" to help him out. Encke was skeptical of Herschel's methods. He said that he lived for calculation and that the English would be better astronomers if they calculated more. It is difficult to believe that the entire Scientific Revolution of the 17th century took place without graphs and that only a few examples appeared in the 18th century. Herschel promoted the use of graphs, not only in astronomy, but also in the study of meteorology and terrestrial magnetism. Because he was the most prominent scientist in England, Herschel's advocacy greatly advanced graphical methods.

  18. Timeseries Signal Processing for Enhancing Mobile Surveys: Learning from Field Studies

    NASA Astrophysics Data System (ADS)

    Risk, D. A.; Lavoie, M.; Marshall, A. D.; Baillie, J.; Atherton, E. E.; Laybolt, W. D.

    2015-12-01

    Vehicle-based surveys using laser and other analyzers are now commonplace in research and industry. In many cases when these studies target biologically relevant gases like methane and carbon dioxide, the minimum detection limits are often coarse (ppm) relative to the analyzer's capabilities (ppb) because of the inherent variability in ambient background concentrations across the landscape, which creates noise and uncertainty. This variation arises from localized biological sinks and sources, but also from atmospheric turbulence, air pooling, and other factors. Computational processing routines are widely used in many fields to increase the resolution of a target signal in temporally dense data, and offer promise for enhancing mobile surveying techniques. Signal processing routines can help identify anomalies at very low levels, or can be used inversely to remove localized industrially emitted anomalies from ecological data. This presentation integrates learnings from various studies in which simple signal processing routines were used successfully to isolate different temporally varying components of 1 Hz timeseries measured with laser- and UV-fluorescence-based analyzers. As illustrative datasets, we present results from industrial fugitive emission studies across Canada's western provinces and other locations, and also an ecological study that aimed to model near-surface concentration variability across different biomes within eastern Canada. In these cases, signal processing algorithms contributed significantly to the clarity of both industrial and ecological processes. In some instances, signal processing was too computationally intensive for real-time in-vehicle processing, but we identified workarounds for analyzer-embedded software that contributed to an improvement in real-time resolution of small anomalies. Signal processing is a natural accompaniment to these datasets, and many avenues are open to researchers who wish to enhance existing, and future…
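
    As a concrete illustration of the kind of routine this abstract describes, the sketch below separates a slowly varying ambient background from short-lived anomalies in a synthetic 1 Hz record using a rolling median. It is our own minimal Python example; the window length, gas, and magnitudes are assumptions, not values from the study.

        import numpy as np
        import pandas as pd

        def split_background(series_ppm, window_s=300):
            """Rolling-median baseline (background) and residual (anomalies)."""
            s = pd.Series(series_ppm)
            background = s.rolling(window_s, center=True, min_periods=1).median()
            return background, s - background

        t = np.arange(3600)                            # one hour at 1 Hz
        ch4 = 2.0 + 0.1 * np.sin(t / 900) + np.random.normal(0, 0.01, t.size)
        ch4[1800:1815] += 0.5                          # 15 s fugitive-emission spike
        bg, anomalies = split_background(ch4)
        print(anomalies.max())                         # spike stands out over ~0.01 ppm noise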

  19. Approximate Counting of Graphical Realizations.

    PubMed

    Erdős, Péter L; Kiss, Sándor Z; Miklós, István; Soukup, Lajos

    2015-01-01

    In 1999 Kannan, Tetali and Vempala proposed an MCMC method to uniformly sample all possible realizations of a given graphical degree sequence and conjectured its rapidly mixing nature. Recently their conjecture was proved in the affirmative for regular graphs (by Cooper, Dyer and Greenhill, 2007), for regular directed graphs (by Greenhill, 2011) and for half-regular bipartite graphs (by Miklós, Erdős and Soukup, 2013). Several heuristics on counting the number of possible realizations exist (via sampling processes), and while they work well in practice, so far no approximation guarantees exist for such an approach. This paper is the first to develop a method for counting realizations with a provable approximation guarantee. In fact, we solve a slightly more general problem; besides the graphical degree sequence, a small set of forbidden edges is also given. We show that for the general problem (which contains the Greenhill problem and the Miklós, Erdős and Soukup problem as special cases) the derived MCMC process is rapidly mixing. Further, we show that this new problem is self-reducible; therefore, it provides a fully polynomial randomized approximation scheme (a.k.a. FPRAS) for counting all realizations.

  20. Approximate Counting of Graphical Realizations

    PubMed Central

    2015-01-01

    In 1999 Kannan, Tetali and Vempala proposed an MCMC method to uniformly sample all possible realizations of a given graphical degree sequence and conjectured its rapidly mixing nature. Recently their conjecture was proved in the affirmative for regular graphs (by Cooper, Dyer and Greenhill, 2007), for regular directed graphs (by Greenhill, 2011) and for half-regular bipartite graphs (by Miklós, Erdős and Soukup, 2013). Several heuristics on counting the number of possible realizations exist (via sampling processes), and while they work well in practice, so far no approximation guarantees exist for such an approach. This paper is the first to develop a method for counting realizations with a provable approximation guarantee. In fact, we solve a slightly more general problem; besides the graphical degree sequence, a small set of forbidden edges is also given. We show that for the general problem (which contains the Greenhill problem and the Miklós, Erdős and Soukup problem as special cases) the derived MCMC process is rapidly mixing. Further, we show that this new problem is self-reducible; therefore, it provides a fully polynomial randomized approximation scheme (a.k.a. FPRAS) for counting all realizations. PMID:26161994
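
    The two records above do not give pseudocode, but the degree-preserving double edge swap underlying such MCMC samplers is easy to sketch. The following Python fragment (a schematic illustration, not the authors' published procedure) performs one chain step, including the forbidden-edge check from the generalized problem:

        import random
        import networkx as nx

        def double_edge_swap_step(G, forbidden=frozenset()):
            """Propose (a,b),(c,d) -> (a,d),(c,b); all vertex degrees are preserved."""
            (a, b), (c, d) = random.sample(list(G.edges()), 2)
            new1, new2 = (a, d), (c, b)
            ok = (len({a, b, c, d}) == 4                  # no self-loops
                  and not G.has_edge(*new1) and not G.has_edge(*new2)
                  and frozenset(new1) not in forbidden
                  and frozenset(new2) not in forbidden)
            if ok:                                        # rejected proposals leave the
                G.remove_edges_from([(a, b), (c, d)])     # chain at its current state
                G.add_edges_from([new1, new2])
            return G

        G = nx.gnm_random_graph(20, 40, seed=1)
        degrees = sorted(d for _, d in G.degree())
        for _ in range(1000):
            double_edge_swap_step(G)
        assert degrees == sorted(d for _, d in G.degree())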

  1. Low Cost Graphics. Second Edition.

    ERIC Educational Resources Information Center

    Tinker, Robert F.

    This manual describes the CALM TV graphics interface, a low-cost means of producing quality graphics on an ordinary TV. The system permits the output of data in graphic as well as alphanumeric form and the input of data from the face of the TV using a light pen. The integrated circuits required in the interface can be obtained from standard…

  2. Selecting Mangas and Graphic Novels

    ERIC Educational Resources Information Center

    Nylund, Carol

    2007-01-01

    The decision to add graphic novels, and particularly the Japanese styled called manga, was one the author has debated for a long time. In this article, the author shares her experience when she purchased graphic novels and mangas to add to her library collection. She shares how graphic novels and mangas have revitalized the library.

  3. Computer graphics in aerodynamic analysis

    NASA Technical Reports Server (NTRS)

    Cozzolongo, J. V.

    1984-01-01

    The use of computer graphics and its application to aerodynamic analyses on a routine basis is outlined. The mathematical modelling of the aircraft geometries and the shading technique implemented are discussed. Examples of computer graphics used to display aerodynamic flow field data and aircraft geometries are shown. A future need in computer graphics for aerodynamic analyses is addressed.

  4. Learning with Interactive Graphical Representations.

    ERIC Educational Resources Information Center

    Saljo, Roger, Ed.

    1999-01-01

    The seven articles of this theme issue deal with the use of computer-based interactive graphical representations. Studying their use will bring answers to users of static graphics in traditional paper-based media and those who plan instruction using graphical representations that allow semantically direct manipulation. (SLD)

  5. A stable solution-processed polymer semiconductor with record high-mobility for printed transistors.

    PubMed

    Li, Jun; Zhao, Yan; Tan, Huei Shuan; Guo, Yunlong; Di, Chong-An; Yu, Gui; Liu, Yunqi; Lin, Ming; Lim, Suo Hon; Zhou, Yuhua; Su, Haibin; Ong, Beng S

    2012-01-01

    Microelectronic circuits/arrays produced via high-speed printing instead of traditional photolithographic processes offer an appealing approach to creating the long-sought-after, low-cost, large-area flexible electronics. Foremost among critical enablers to propel this paradigm shift in manufacturing is a stable, solution-processable, high-performance semiconductor for printing functionally capable thin-film transistors - fundamental building blocks of microelectronics. We report herein the processing and optimisation of solution-processable polymer semiconductors for thin-film transistors, demonstrating very high field-effect mobility, high on/off ratio, and excellent shelf-life and operating stabilities under ambient conditions. Exceptionally high-gain inverters and functional ring oscillator devices on flexible substrates have been demonstrated. This optimised polymer semiconductor represents significant progress in semiconductor development, dispelling prevalent skepticism surrounding practical usability of organic semiconductors for high-performance microelectronic devices, opening up application opportunities hitherto functionally or economically inaccessible with silicon technologies, and providing an excellent structural framework for fundamental studies of charge transport in organic systems.

  6. A stable solution-processed polymer semiconductor with record high-mobility for printed transistors

    PubMed Central

    Li, Jun; Zhao, Yan; Tan, Huei Shuan; Guo, Yunlong; Di, Chong-An; Yu, Gui; Liu, Yunqi; Lin, Ming; Lim, Suo Hon; Zhou, Yuhua; Su, Haibin; Ong, Beng S.

    2012-01-01

    Microelectronic circuits/arrays produced via high-speed printing instead of traditional photolithographic processes offer an appealing approach to creating the long-sought-after, low-cost, large-area flexible electronics. Foremost among critical enablers to propel this paradigm shift in manufacturing is a stable, solution-processable, high-performance semiconductor for printing functionally capable thin-film transistors — fundamental building blocks of microelectronics. We report herein the processing and optimisation of solution-processable polymer semiconductors for thin-film transistors, demonstrating very high field-effect mobility, high on/off ratio, and excellent shelf-life and operating stabilities under ambient conditions. Exceptionally high-gain inverters and functional ring oscillator devices on flexible substrates have been demonstrated. This optimised polymer semiconductor represents significant progress in semiconductor development, dispelling prevalent skepticism surrounding practical usability of organic semiconductors for high-performance microelectronic devices, opening up application opportunities hitherto functionally or economically inaccessible with silicon technologies, and providing an excellent structural framework for fundamental studies of charge transport in organic systems. PMID:23082244
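
    The field-effect mobilities quoted in the two records above are conventionally extracted from saturation-regime transistor characteristics. The standard textbook relations (not formulas taken from the paper itself) are, in LaTeX notation:

        I_D = \frac{W}{2L}\,\mu\,C_i\,(V_G - V_T)^2,
        \qquad
        \mu = \frac{2L}{W\,C_i}\left(\frac{\partial\sqrt{I_D}}{\partial V_G}\right)^{2}

    where W and L are the channel width and length, C_i the gate-dielectric capacitance per unit area, and V_T the threshold voltage.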

  7. Using mobile devices to improve the safety of medication administration processes.

    PubMed

    Navas, H; Graffi Moltrasio, L; Ares, F; Strumia, G; Dourado, E; Alvarez, M

    2015-01-01

    Within preventable medical errors, those related to medications are frequent in every stage of the prescribing cycle. Nursing is responsible for maintaining each patient's safety and care quality. Moreover, nurses are the last people who can detect an error in medication before its administration. Medication administration is one of the riskiest tasks in nursing. The use of information and communication technologies is related to a decrease in these errors. Including mobile devices that read 2D codes on patients and medications will decrease the possibility of error when nurses prepare and administer medication. A cross-platform application (iOS and Android) was developed to ensure the five rights of the medication administration process (patient, medication, dose, route and schedule). Deployment in November showed 39% use.
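
    A minimal sketch of the "five rights" cross-check implied by this record is given below. Names, fields, and tolerances are hypothetical, not the deployed system's schema; the point is only that two 2D-code scans plus context can be verified against the order before administration.

        from dataclasses import dataclass

        @dataclass
        class Order:
            patient_id: str
            drug: str
            dose_mg: float
            route: str
            due_hour: int          # 0-23

        def five_rights_ok(scanned_patient, scanned_drug, dose_mg, route, hour, order):
            """Compare scanned patient/medication codes and context against the order."""
            return (scanned_patient == order.patient_id      # right patient
                    and scanned_drug == order.drug           # right medication
                    and abs(dose_mg - order.dose_mg) < 1e-6  # right dose
                    and route == order.route                 # right route
                    and abs(hour - order.due_hour) <= 1)     # right schedule (+/- 1 h)

        order = Order("P-001", "amoxicillin", 500.0, "oral", 8)
        print(five_rights_ok("P-001", "amoxicillin", 500.0, "oral", 9, order))  # True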

  8. Analysis of mobile fronthaul bandwidth and wireless transmission performance in split-PHY processing architecture.

    PubMed

    Miyamoto, Kenji; Kuwano, Shigeru; Terada, Jun; Otaka, Akihiro

    2016-01-25

    We analyze the mobile fronthaul (MFH) bandwidth and the wireless transmission performance in the split-PHY processing (SPP) architecture, which redefines the functional split of centralized/cloud RAN (C-RAN) while preserving high wireless coordinated multi-point (CoMP) transmission/reception performance. The SPP architecture splits the base station (BS) functions between wireless channel coding/decoding and wireless modulation/demodulation, and employs its own CoMP joint transmission and reception schemes. Simulation results show that the SPP architecture reduces the MFH bandwidth by up to 97% from conventional C-RAN while matching the wireless bit error rate (BER) performance of conventional C-RAN in uplink joint reception with only a 2-dB signal-to-noise ratio (SNR) penalty.
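
    A back-of-envelope calculation shows why a PHY-level split shrinks fronthaul traffic so dramatically: conventional C-RAN ships raw I/Q samples per antenna, while a split-PHY link carries roughly the coded-bit payload. All numbers below are illustrative assumptions, not the paper's parameters.

        def cpri_like_rate(bw_hz=20e6, oversample=1.536, bits_per_sample=15, antennas=2):
            """Conventional C-RAN fronthaul: raw I/Q samples for every antenna."""
            fs = bw_hz * oversample                      # ~30.72 Msps for 20 MHz LTE
            return fs * 2 * bits_per_sample * antennas   # I and Q components

        def split_phy_rate(bw_hz=20e6, bits_per_hz=6, spectral_eff=0.8):
            """Split-PHY fronthaul: roughly the coded-bit payload instead of I/Q."""
            return bw_hz * bits_per_hz * spectral_eff

        print(f"reduction ~ {1 - split_phy_rate() / cpri_like_rate():.0%}")
        # ~95% with these assumptions, the same order as the reported 97%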

  9. Linking geochemical processes in mud volcanoes with arsenic mobilization driven by organic matter.

    PubMed

    Liu, Chia-Chuan; Kar, Sandeep; Jean, Jiin-Shuh; Wang, Chung-Ho; Lee, Yao-Chang; Sracek, Ondra; Li, Zhaohui; Bundschuh, Jochen; Yang, Huai-Jen; Chen, Chien-Yen

    2013-11-15

    The present study deals with geochemical characterization of mud fluids and sediments collected from Kunshuiping (KSP), Liyushan (LYS), Wushanting (WST), Sinyangnyuhu (SYNH), Hsiaokunshui (HKS) and Yenshuikeng (YSK) mud volcanoes in southwestern Taiwan. Chemical constituents (cations, anions, trace elements, organic carbon, humic acid, and stable isotopes) in both fluids and mud were analyzed to investigate the geochemical processes and spatial variability among the mud volcanoes under consideration. Analytical results suggested that the anoxic mud volcanic fluids are highly saline, implying connate water as the probable source. The isotopic signature indicated that δ(18)O-rich fluids may be associated with silicate and carbonate mineral released through water-rock interaction, along with dehydration of clay minerals. Considerable amounts of arsenic in mud irrespective of fluid composition suggested possible release through biogeochemical processes in the subsurface environment. Sequential extraction of As from the mud indicated that As was mostly present in organic and sulphidic phases, and adsorbed on amorphous Mn oxyhydroxides. Volcanic mud and fluids are rich in organic matter (in terms of organic carbon), and the presence of humic acid in mud has implications for the binding of arsenic. Functional groups of humic acid also showed variable sources of organic matter among the mud volcanoes being examined. Because arsenate concentration in the mud fluids was found to be independent from geochemical factors, it was considered that organic matter may induce arsenic mobilization through an adsorption/desorption mechanism with humic substances under reducing conditions. Organic matter therefore plays a significant role in the mobility of arsenic in mud volcanoes.

  10. Graphics performance in rich Internet applications.

    PubMed

    Hoetzlein, Rama C

    2012-01-01

    Rendering performance for rich Internet applications (RIAs) has recently focused on the debate between using Flash and HTML5 for streaming video and gaming on mobile devices. A key area not widely explored, however, is the scalability of raw bitmap graphics performance for RIAs. Does Flash render animated sprites faster than HTML5? How much faster is WebGL than Flash? Answers to these questions are essential for developing large-scale data visualizations, online games, and truly dynamic websites. A new test methodology analyzes graphics performance across RIA frameworks and browsers, revealing specific performance outliers in existing frameworks. The results point toward a future in which all online experiences might be GPU accelerated.

  11. Evaluation of a Mobile Hot Cell Technology for Processing Idaho National Laboratory Remote-Handled Wastes

    SciTech Connect

    B.J. Orchard; L.A. Harvego; R.P. Miklos; F. Yapuncich; L. Care

    2009-03-01

    The Idaho National Laboratory (INL) currently does not have the necessary capabilities to process all remote-handled wastes resulting from the Laboratory’s nuclear-related missions. Over the years, various U.S. Department of Energy (DOE)-sponsored programs undertaken at the INL have produced radioactive wastes and other materials that are categorized as remote-handled (contact radiological dose rate > 200 mR/hr). These materials include Spent Nuclear Fuel (SNF), transuranic (TRU) waste, waste requiring geological disposal, low-level waste (LLW), mixed waste (both radioactive and hazardous per the Resource Conservation and Recovery Act [RCRA]), and activated and/or radioactively-contaminated reactor components. The waste consists primarily of uranium, plutonium, other TRU isotopes, and shorter-lived isotopes such as cesium and cobalt with radiological dose rates up to 20,000 R/hr. The hazardous constituents in the waste consist primarily of reactive metals (i.e., sodium and sodium-potassium alloy [NaK]), which are reactive and ignitable per RCRA, making the waste difficult to handle and treat. A smaller portion of the waste is contaminated with other hazardous components (i.e., RCRA toxicity characteristic metals). Several analyses of alternatives to provide the required remote-handling and treatment capability to manage INL’s remote-handled waste have been conducted over the years and have included various options ranging from modification of existing hot cells to construction of new hot cells. Previous analyses have identified a mobile processing unit as an alternative for providing the required remote-handled waste processing capability; however, it was summarily dismissed as a potentially viable alternative based on limitations of a specific design considered. In 2008 INL solicited expressions of interest from vendors who could provide existing, demonstrated technology that could be applied to the retrieval, sorting, treatment (as required), and…

  12. Graphical Model Theory for Wireless Sensor Networks

    SciTech Connect

    Davis, William B.

    2002-12-08

    Information processing in sensor networks, with many small processors, demands a theory of computation that allows the minimization of processing effort, and the distribution of this effort throughout the network. Graphical model theory provides a probabilistic theory of computation that explicitly addresses complexity and decentralization for optimizing network computation. The junction tree algorithm, for decentralized inference on graphical probability models, can be instantiated in a variety of applications useful for wireless sensor networks, including: sensor validation and fusion; data compression and channel coding; expert systems, with decentralized data structures, and efficient local queries; pattern classification, and machine learning. Graphical models for these applications are sketched, and a model of dynamic sensor validation and fusion is presented in more depth, to illustrate the junction tree algorithm.
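
    As a toy illustration of the decentralized-inference idea sketched in this record (not the author's junction tree implementation), the fragment below fuses two sensors' evidence about a shared discrete variable by exchanging marginal "messages" rather than raw data; factors are plain dictionaries over states.

        def multiply(f, g):
            return {x: f[x] * g[x] for x in f}

        def normalize(f):
            z = sum(f.values())
            return {x: v / z for x, v in f.items()}

        prior = {"cold": 0.5, "hot": 0.5}          # P(T)
        sensor1 = {"cold": 0.2, "hot": 0.8}        # P(z1 | T): saw a high reading
        sensor2 = {"cold": 0.3, "hot": 0.7}        # P(z2 | T): agrees, more weakly

        # Each node transmits only its likelihood message; fusion is a local product.
        posterior = normalize(multiply(multiply(prior, sensor1), sensor2))
        print(posterior)                           # {'cold': ~0.097, 'hot': ~0.903}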

  13. Graphical timeline editing

    NASA Technical Reports Server (NTRS)

    Meyer, Patrick E.; Jaap, John P.

    1994-01-01

    NASA's Experiment Scheduling Program (ESP), which has been used for approximately 12 Spacelab missions, is being enhanced with the addition of a Graphical Timeline Editor. The GTE Clipboard, as it is called, was developed to demonstrate new technology which will lead the development of International Space Station Alpha's Payload Planning System and support the remaining Spacelab missions. ESP's GTE Clipboard is developed in C using MIT's X Windows System X11R5 and follows OSF/Motif Style Guide Revision 1.2.

  14. Mobilization of Cr(VI) from chromite ore processing residue through acid treatment.

    PubMed

    Tinjum, James M; Benson, Craig H; Edil, Tuncer B

    2008-02-25

    Batch leaching studies on chromite ore processing residue (COPR) were performed using acids to investigate leaching of hexavalent chromium, Cr(VI), with respect to particle size, reaction time, and type of acid (HNO(3) and H(2)SO(4)). Aqueous Cr(VI) is maximized at approximately 0.04 mol Cr(VI) per kg of dry COPR at pH 7.6-8.1. Cr(VI) mobilized more slowly for larger particles, and the pH increased with time and increased more rapidly for smaller particles, suggesting that rate limitations occur in the solid phase. With H(2)SO(4), the pH stabilized at a higher value (8.8 for H(2)SO(4) vs. 8.0 for HNO(3)) and more rapidly (16 h vs. 30 h), and the differences in pH for different particle sizes were smaller. The acid neutralization capacity (ANC) of COPR is very large (8 mol HNO(3) per kg of dry COPR for a stable eluate pH of 7.5). Changes to the elemental and mineralogical composition and distribution in COPR particles after mixing with acid indicate that Cr(VI)-bearing solids dissolved. However, concentrations of Cr(VI) >2800 mg kg(-1) (>50% of the pre-treatment concentration) were still found after mixing with acid, regardless of the particle size, reaction time, or type of acid used. The residual Cr(VI) appears to be partially associated with poorly-ordered Fe and Al oxyhydroxides that precipitated in the interstitial areas of COPR particles. Remediation strategies that use HNO(3) or H(2)SO(4) to neutralize COPR or to maximize Cr(VI) in solution are likely to require extensive amounts of acid, may not mobilize all of the Cr(VI), and may require extended contact time, even under well-mixed conditions.

  15. Real-Time Radio Wave Propagation for Mobile Ad-Hoc Network Emulation and Simulation Using General Purpose Graphics Processing Units (GPGPUs)

    DTIC Science & Technology

    2014-05-01

    …required substantial reformulation. An example of this reformulation was the replacement of nested conditional control flow inside inner loops. Second, … the Longley-Rice algorithm focused on the ITM algorithm LRPROP routine. LRPROP was implemented in both the Brook+ and CUDA languages. Subsequently, … the kernels that make up LRPROP were also implemented in the architecture-specific Brook+ and CUDA languages. The kernels are executed in succession…
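
    The "replacement of nested conditional control flow" mentioned in this excerpt is the standard trick of turning branches into arithmetic so GPU threads do not diverge. The report's kernels were written in Brook+ and CUDA; the sketch below illustrates the same reformulation with NumPy masks standing in for GPU lanes, using an invented toy attenuation function rather than the real ITM equations.

        import numpy as np

        def attenuation_branchy(d_km):             # scalar branches: fine on a CPU,
            if d_km < 1.0:                         # divergent in a GPU inner loop
                return 0.0
            if d_km < 50.0:
                return 20.0 * np.log10(d_km)
            return 20.0 * np.log10(d_km) + 0.1 * (d_km - 50.0)

        def attenuation_vectorized(d_km):          # branch-free equivalent
            base = 20.0 * np.log10(np.maximum(d_km, 1.0))
            extra = 0.1 * np.maximum(d_km - 50.0, 0.0)
            return np.where(d_km < 1.0, 0.0, base + extra)

        d = np.array([0.5, 10.0, 80.0])
        print(attenuation_vectorized(d))           # [ 0.  20.  41.06...]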

  16. Design and Certification of the Extravehicular Activity Mobility Unit (EMU) Water Processing Jumper

    NASA Technical Reports Server (NTRS)

    Peterson, Laurie J.; Neumeyer, Derek J.; Lewis, John F.

    2006-01-01

    The Extravehicular Mobility Units (EMUs) onboard the International Space Station (ISS) experienced a failure due to cooling water contamination from biomass and corrosion byproducts forming solids around the EMU pump rotor. The coolant had no biocide and a low pH, which induced biofilm growth and corrosion precipitates, respectively. NASA JSC was tasked with building hardware to clean the ionic, organic, and particulate load from the EMU coolant loop before and after extravehicular activities (EVAs). Based on a return sample of the EMU coolant loop, the chemical load was well understood, but there was not sufficient volume of the returned sample to analyze particulates. Through work with EMU specialists, chemists, EVA Mission Operations Directorate (MOD) representation, safety and mission assurance, astronaut crew, and team engineers, requirements were developed for the EMU Water Processing hardware (sometimes referred to as the Airlock Coolant Loop Recovery [A/L CLR] system). Those requirements ranged from the operable level of ionic, organic, and particulate load, interfaces to the EMU, maximum cycle time, operating pressure drop, flow rate, and temperature, leakage rates, and biocide levels for storage. Design work began in February 2005 and certification was completed in April 2005 to support a return-to-flight launch date of May 12, 2005. This paper will discuss the details of the design and certification of the EMU Water Processing hardware and its components.

  17. Data Analysis with Graphical Models: Software Tools

    NASA Technical Reports Server (NTRS)

    Buntine, Wray L.

    1994-01-01

    Probabilistic graphical models (directed and undirected Markov fields, and combined in chain graphs) are used widely in expert systems, image processing and other areas as a framework for representing and reasoning with probabilities. They come with corresponding algorithms for performing probabilistic inference. This paper discusses an extension to these models by Spiegelhalter and Gilks, plates, used to graphically model the notion of a sample. This offers a graphical specification language for representing data analysis problems. When combined with general methods for statistical inference, this also offers a unifying framework for prototyping and/or generating data analysis algorithms from graphical specifications. This paper outlines the framework and then presents some basic tools for the task: a graphical version of the Pitman-Koopman Theorem for the exponential family, problem decomposition, and the calculation of exact Bayes factors. Other tools already developed, such as automatic differentiation, Gibbs sampling, and use of the EM algorithm, make this a broad basis for the generation of data analysis software.

  18. Software Package For Real-Time Graphics

    NASA Technical Reports Server (NTRS)

    Malone, Jacqueline C.; Moore, Archie L.

    1991-01-01

    Software package for master graphics interactive console (MAGIC) at Western Aeronautical Test Range (WATR) of NASA Ames Research Center provides general-purpose graphical display system for real-time and post-real-time analysis of data. Written in C language and intended for use on workstation of interactive raster imaging system (IRIS) equipped with level-V Unix operating system. Enables flight researchers to create their own displays on basis of individual requirements. Applicable to monitoring of complicated processes in chemical industry.

  19. DspaceOgre 3D Graphics Visualization Tool

    NASA Technical Reports Server (NTRS)

    Jain, Abhinandan; Myin, Steven; Pomerantz, Marc I.

    2011-01-01

    This general-purpose 3D graphics visualization C++ tool is designed for visualization of simulation and analysis data for articulated mechanisms. Examples of such systems are vehicles, robotic arms, biomechanics models, and biomolecular structures. DspaceOgre builds upon the open-source Ogre3D graphics visualization library. It provides additional classes to support the management of complex scenes involving multiple viewpoints and different scene groups, and can be used as a remote graphics server. This software provides improved support for adding programs at the graphics processing unit (GPU) level for improved performance. It also improves upon the messaging interface it exposes for use as a visualization server.

  20. Interactive Learning for Graphic Design Foundations

    ERIC Educational Resources Information Center

    Chu, Sauman; Ramirez, German Mauricio Mejia

    2012-01-01

    One of the biggest problems for students majoring in pre-graphic design is students' inability to apply their knowledge to different design solutions. The purpose of this study is to examine the effectiveness of interactive learning modules in facilitating knowledge acquisition during the learning process and to create interactive learning modules…

  1. Big system: Interactive graphics for the engineer

    NASA Technical Reports Server (NTRS)

    Quenneville, C. E.

    1975-01-01

    The BCS Interactive Graphics System (BIG System) approach to graphics was presented, along with several significant engineering applications. The BIG System precompiler, the graphics support library, and the function requirements of graphics applications are discussed. It was concluded that graphics standardization and a device independent code can be developed to assure maximum graphic terminal transferability.

  2. Interactive computer graphics: the arms race

    SciTech Connect

    Hafemeister, D.W.

    1983-01-01

    By using interactive computer graphics (ICG) it is possible to discuss the numerical aspects of some arms race issues with more specificity and in a visual way. The number of variables involved in these issues can be quite large; computers operated in the interactive, graphical mode, can allow exploration of the variables, leading to a greater understanding of the issues. This paper will examine some examples of interactive computer graphics: (1) the relationship between silo hardening and the accuracy, yield, and reliability of ICBMs; (2) target vulnerability (Minuteman, Dense Pack); (3) counterforce vs. countervalue weapons; (4) civil defense; (5) gravitational bias error; (6) MIRV; (7) national vulnerability to a preemptive first strike; (8) radioactive fallout; (9) digital-image processing with charge-coupled devices. 17 references, 11 figures, 1 table.

  3. Finding the Sweet Spot: Network Structures and Processes for Increased Knowledge Mobilization

    ERIC Educational Resources Information Center

    Briscoe, Patricia; Pollock, Katina; Campbell, Carol; Carr-Harris, Shasta

    2015-01-01

    The use of networks in public education is one of many knowledge mobilization (KMb) strategies utilized to promote evidence-based research into practice. However, challenges exist in the ability to mobilize knowledge through networks. The purpose of this paper is to explore how networks work. Data were collected from virtual discussions for an…

  4. The Effects of Image-Based Concept Mapping on the Learning Outcomes and Cognitive Processes of Mobile Learners

    ERIC Educational Resources Information Center

    Yen, Jung-Chuan; Lee, Chun-Yi; Chen, I-Jung

    2012-01-01

    The purpose of this study was to investigate the effects of different teaching strategies (text-based concept mapping vs. image-based concept mapping) on the learning outcomes and cognitive processes of mobile learners. Eighty-six college freshmen enrolled in the "Local Area Network Planning and Implementation" course taught by the first author…

  5. Making Higher Education More European through Student Mobility? Revisiting EU Initiatives in the Context of the Bologna Process

    ERIC Educational Resources Information Center

    Papatsiba, Vassiliki

    2006-01-01

    This paper focuses on the analysis of student mobility in the EU as a means to stimulate convergence of diverse higher education systems. The argument is based on official texts and other texts of political communication of the European Commission. The following discussion is placed within the current context of the Bologna process and its aim to…

  6. Ash iron mobilization through physicochemical processing in volcanic eruption plumes: a numerical modeling approach

    NASA Astrophysics Data System (ADS)

    Hoshyaripour, G. A.; Hort, M.; Langmann, B.

    2015-08-01

    It has been shown that volcanic ash fertilizes Fe-limited areas of the surface ocean by releasing soluble iron. As ash iron is mostly insoluble upon eruption, it is hypothesized that heterogeneous in-plume and in-cloud processing of the ash promotes iron solubilization. Direct evidence concerning such processes is, however, lacking. In this study, a 1-D numerical model is developed to simulate the physicochemical interactions of the gas-ash-aerosol in volcanic eruption plumes, focusing on the iron mobilization processes at temperatures between 600 and 0 °C. Results show that sulfuric acid and water vapor condense at ~ 150 and ~ 50 °C on the ash surface, respectively. This liquid phase then efficiently scavenges the surrounding gases (> 95 % of HCl, 3-20 % of SO2 and 12-62 % of HF), forming an extremely acidic coating at the ash surface. The low pH conditions of the aqueous film promote acid-mediated dissolution of the Fe-bearing phases present in the ash material. We estimate that 0.1-33 % of the total iron available at the ash surface is dissolved in the aqueous phase before the freezing point is reached. The efficiency of dissolution is controlled by the halogen content of the erupted gas as well as the mineralogy of the iron at the ash surface: elevated halogen concentrations and the presence of Fe2+-carrying phases lead to the highest dissolution efficiency. Findings of this study are in agreement with the data obtained through leaching experiments.
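
    The acid-mediated dissolution step invoked here is commonly parameterized by a proton-promoted rate law; a generic form (our notation, not necessarily the exact law used in this model) is

        R_{\mathrm{diss}} = k\, a_{\mathrm{H^+}}^{\,n}\, A_{\mathrm{surf}}

    where k is a mineral-specific rate constant, a_{H+} is the proton activity (so the extremely acidic surface film accelerates iron release), n is an empirical reaction order, and A_surf is the reactive surface area of the Fe-bearing phases.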

  7. Fast DRR splat rendering using common consumer graphics hardware.

    PubMed

    Spoerk, Jakob; Bergmann, Helmar; Wanschitz, Felix; Dong, Shuo; Birkfellner, Wolfgang

    2007-11-01

    Digitally rendered radiographs (DRR) are a vital part of various medical image processing applications such as 2D/3D registration for patient pose determination in image-guided radiotherapy procedures. This paper presents a technique to accelerate DRR creation by using conventional graphics hardware for the rendering process. DRR computation itself is done by an efficient volume rendering method named wobbled splatting. For programming the graphics hardware, NVIDIA's C for Graphics (Cg) is used. The description of an algorithm used for rendering DRRs on the graphics hardware is presented, together with a benchmark comparing this technique to a CPU-based wobbled splatting program. Results show a reduction of rendering time by about 70%-90% depending on the amount of data. For instance, rendering a volume of 2 x 10(6) voxels is feasible at an update rate of 38 Hz compared to 6 Hz on a common Intel-based PC using the graphics processing unit (GPU) of a conventional graphics adapter. In addition, wobbled splatting using graphics hardware for DRR computation provides higher resolution DRRs with comparable image quality due to special processing characteristics of the GPU. We conclude that DRR generation on common graphics hardware using the freely available Cg environment is a major step toward 2D/3D registration in clinical routine.

  8. Fast DRR splat rendering using common consumer graphics hardware

    SciTech Connect

    Spoerk, Jakob; Bergmann, Helmar; Wanschitz, Felix; Dong, Shuo; Birkfellner, Wolfgang

    2007-11-15

    Digitally rendered radiographs (DRR) are a vital part of various medical image processing applications such as 2D/3D registration for patient pose determination in image-guided radiotherapy procedures. This paper presents a technique to accelerate DRR creation by using conventional graphics hardware for the rendering process. DRR computation itself is done by an efficient volume rendering method named wobbled splatting. For programming the graphics hardware, NVIDIA's C for Graphics (Cg) is used. The description of an algorithm used for rendering DRRs on the graphics hardware is presented, together with a benchmark comparing this technique to a CPU-based wobbled splatting program. Results show a reduction of rendering time by about 70%-90% depending on the amount of data. For instance, rendering a volume of 2 x 10(6) voxels is feasible at an update rate of 38 Hz compared to 6 Hz on a common Intel-based PC using the graphics processing unit (GPU) of a conventional graphics adapter. In addition, wobbled splatting using graphics hardware for DRR computation provides higher resolution DRRs with comparable image quality due to special processing characteristics of the GPU. We conclude that DRR generation on common graphics hardware using the freely available Cg environment is a major step toward 2D/3D registration in clinical routine.
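
    To make "splatting" concrete, the fragment below renders a minimal orthographic DRR on the CPU by letting each voxel deposit its attenuation onto the image plane through a small bilinear footprint. This is our own simplification; the papers' GPU implementation in Cg adds the wobble, perspective geometry, and performance machinery.

        import numpy as np

        def drr_splat(volume, spacing=1.0):
            """Orthographic DRR: every voxel 'splats' onto a 2x2 pixel footprint."""
            nx, ny, nz = volume.shape
            image = np.zeros((nx + 1, ny + 1))
            for x, y, z in zip(*np.nonzero(volume)):
                image[x:x + 2, y:y + 2] += 0.25 * volume[x, y, z] * spacing
            return image

        vol = np.zeros((64, 64, 64))
        vol[24:40, 24:40, 16:48] = 1.0      # a dense block standing in for bone
        print(drr_splat(vol).max())         # longest ray path accumulates ~32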

  9. Dietary Assessment on a Mobile Phone Using Image Processing and Pattern Recognition Techniques: Algorithm Design and System Prototyping

    PubMed Central

    Probst, Yasmine; Nguyen, Duc Thanh; Tran, Minh Khoi; Li, Wanqing

    2015-01-01

    Dietary assessment, while traditionally based on pen-and-paper, is rapidly moving towards automatic approaches. This study describes an Australian automatic food record method and its prototype for dietary assessment via the use of a mobile phone and techniques of image processing and pattern recognition. Common visual features including scale invariant feature transformation (SIFT), local binary patterns (LBP), and colour are used for describing food images. The popular bag-of-words (BoW) model is employed for recognizing the images taken by a mobile phone for dietary assessment. Technical details are provided together with discussions on the issues and future work. PMID:26225994

  10. Dietary Assessment on a Mobile Phone Using Image Processing and Pattern Recognition Techniques: Algorithm Design and System Prototyping.

    PubMed

    Probst, Yasmine; Nguyen, Duc Thanh; Tran, Minh Khoi; Li, Wanqing

    2015-07-27

    Dietary assessment, while traditionally based on pen-and-paper, is rapidly moving towards automatic approaches. This study describes an Australian automatic food record method and its prototype for dietary assessment via the use of a mobile phone and techniques of image processing and pattern recognition. Common visual features including scale invariant feature transformation (SIFT), local binary patterns (LBP), and colour are used for describing food images. The popular bag-of-words (BoW) model is employed for recognizing the images taken by a mobile phone for dietary assessment. Technical details are provided together with discussions on the issues and future work.
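
    A condensed sketch of the bag-of-words pipeline described in the two records above follows; it substitutes ORB features for SIFT (an assumption made for a dependency-light example) and stops at the visual-word histograms that would be fed to a classifier.

        import numpy as np
        import cv2
        from sklearn.cluster import KMeans

        def bow_histograms(image_paths, k=50):
            orb = cv2.ORB_create()
            per_image = []
            for p in image_paths:
                img = cv2.imread(p, cv2.IMREAD_GRAYSCALE)
                _, desc = orb.detectAndCompute(img, None)   # local descriptors
                per_image.append(desc.astype(np.float64))   # assumes each image yields some
            vocab = KMeans(n_clusters=k, n_init=10).fit(np.vstack(per_image))
            hists = []
            for desc in per_image:
                words = vocab.predict(desc)                 # quantize to visual words
                h, _ = np.histogram(words, bins=k, range=(0, k))
                hists.append(h / h.sum())                   # normalized BoW histogram
            return np.array(hists)                          # input to e.g. an SVM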

  11. High-mobility solution-processed copper phthalocyanine-based organic field-effect transistors.

    PubMed

    Chaure, Nandu B; Cammidge, Andrew N; Chambrier, Isabelle; Cook, Michael J; Cain, Markys G; Murphy, Craig E; Pal, Chandana; Ray, Asim K

    2011-04-01

    Solution-processed films of 1,4,8,11,15,18,22,25-octakis(hexyl) copper phthalocyanine (CuPc6) were utilized as an active semiconducting layer in the fabrication of organic field-effect transistors (OFETs) in the bottom-gate configurations using chemical vapour deposited silicon dioxide (SiO2) as gate dielectrics. The surface treatment of the gate dielectric with a self-assembled monolayer of octadecyltrichlorosilane (OTS) resulted in values of 4×10(-2) cm(2) V(-1) s(-1) and 10(6) for saturation mobility and on/off current ratio, respectively. This improvement was accompanied by a shift in the threshold voltage from 3 V for untreated devices to -2 V for OTS treated devices. The trap density at the interface between the gate dielectric and semiconductor decreased by about one order of magnitude after the surface treatment. The transistors with the OTS treated gate dielectrics were more stable over a 30-day period in air than untreated ones.

  12. Omega-3 chicken egg detection system using a mobile-based image processing segmentation method

    NASA Astrophysics Data System (ADS)

    Nurhayati, Oky Dwi; Kurniawan Teguh, M.; Cintya Amalia, P.

    2017-02-01

    An Omega-3 chicken egg is a chicken egg produced through food engineering technology. It is produced by hens fed a diet high in omega-3 fatty acids, so it has fifteen times the omega-3 content of a Leghorn egg. Visually, its shell has the same shape and colour as a Leghorn's. The eggs can be distinguished by breaking the shell and testing the yolk's nutrient content in a laboratory, but those methods are neither effective nor efficient. Observing this problem, the purpose of this research is to make an application that detects omega-3 chicken eggs using mobile-based computer vision. This application was built on the OpenCV computer vision library for the Android operating system. The experiment required chicken egg images taken using an egg candling box; we used 60 omega-3 chicken and Leghorn eggs as samples. Egg images were then acquired with an Android smartphone. After that, we applied several image processing steps: GrabCut, RGB-to-eight-bit-grayscale conversion, median filtering, P-tile segmentation, and morphological operations. Feature values (mean, variance, skewness, and kurtosis) were then extracted from each image. Finally, using these digital image measurements, the chicken egg images were classified. The results showed that omega-3 and Leghorn eggs had different feature values, and the system achieved an accuracy of around 91%.
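
    The processing chain named in this abstract can be reconstructed roughly as follows (GrabCut is omitted and the P-tile fraction is an illustrative guess, not the study's value):

        import numpy as np
        import cv2
        from scipy import stats

        def egg_features(bgr_image, p_tile=0.30):
            gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)   # RGB -> 8-bit gray
            gray = cv2.medianBlur(gray, 5)                       # median filter
            # P-tile segmentation: keep the brightest p_tile fraction of pixels
            thresh = np.percentile(gray, 100 * (1 - p_tile))
            mask = (gray >= thresh).astype(np.uint8)
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,        # morphology cleanup
                                    np.ones((5, 5), np.uint8))
            vals = gray[mask > 0].astype(np.float64)
            return {"mean": vals.mean(), "variance": vals.var(),
                    "skewness": stats.skew(vals), "kurtosis": stats.kurtosis(vals)}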

  13. Slow electrons impinging on dielectric solids. II. Implantation profiles, electron mobility, and recombination processes

    SciTech Connect

    Miotello, A.; Dapor, M.

    1997-07-01

    When an insulator is subject to electron irradiation, a fraction of the electrons is absorbed while the rest are backscattered. The injected electrons cannot be definitively trapped; they must instead recombine with positive charges left near the irradiated surface when secondary electrons are emitted. This is justified on the basis that dielectric breakdown is not observed during specific experiments of electron irradiation of insulators. The dynamics of the absorbed electrons depend on a number of parameters: the number of trapped electrons, the space-charge distribution, the mobility, and the number of secondary electrons emitted from the region near the surface of the dielectric. The time evolution of the surface electric field has been studied by integration of the continuity equation for the relevant transport processes of the injected charge, adopting as the charge source term the distribution of the absorbed electrons obtained by a Monte Carlo simulation. The image charge has also been introduced in the calculation in order to take into account the change in the dielectric constant when passing from the material to the vacuum. Selected computational results are reported to illustrate the role of the relevant parameters which control the charging effects in electron-irradiated insulators.
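
    A generic 1-D drift-diffusion continuity equation of the kind integrated here (our notation; the paper's exact source and recombination terms may differ) reads

        \frac{\partial n(x,t)}{\partial t}
          = S(x)
          + \frac{\partial}{\partial x}\!\left(\mu\, n\, E + D\,\frac{\partial n}{\partial x}\right)
          - R(n, p)

    where S(x) is the Monte Carlo implantation profile used as the charge source term, \mu the electron mobility, E the internal field (including the image-charge contribution), D the diffusivity, and R(n, p) the recombination rate with the positive surface charge.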

  14. Graphical programming of telerobotic tasks

    SciTech Connect

    Small, D.E.; McDonald, M.J.

    1996-11-01

    With a goal of producing faster, safer, and cheaper technologies for nuclear waste cleanup, Sandia is actively developing and extending intelligent systems technologies through the US Department of Energy Office of Technology Development (DOE OTD) Robotic Technology Development Program (RTDP). Graphical programming is a key technology for robotic waste cleanup that Sandia is developing for this goal. Graphical programming uses simulation such as TELEGRIP 'on-line' to program and control robots. Characterized by its model-based control architecture, integrated simulation, 'point-and-click' graphical user interfaces, task and path planning software, and network communications, Sandia's Graphical Programming systems allow operators to focus on high-level robotic tasks rather than the low-level details. Use of scripted tasks, rather than customized programs, minimizes the necessity of recompiling supervisory control systems and enhances flexibility. Rapid world-modelling technologies allow Graphical Programming to be used in dynamic and unpredictable environments, including digging and pipe-cutting. This paper describes Sancho, Sandia's most advanced graphical programming supervisory software. Sancho, now operational on several robot systems, incorporates all of Sandia's recent advances in supervisory control. Graphical programming uses 3-D graphics models as intuitive operator interfaces to program and control complex robotic systems. The goal of the paper is to help the reader understand how Sandia implements graphical programming systems and which key features in Sancho have proven to be most effective.

  15. Solution-Processed Transistors Using Colloidal Nanocrystals with Composition-Matched Molecular "Solders": Approaching Single Crystal Mobility.

    PubMed

    Jang, Jaeyoung; Dolzhnikov, Dmitriy S; Liu, Wenyong; Nam, Sooji; Shim, Moonsub; Talapin, Dmitri V

    2015-10-14

    Crystalline silicon-based complementary metal-oxide-semiconductor transistors have become a dominant platform for today's electronics. For such devices, expensive and complicated vacuum processes are used in the preparation of active layers. This increases cost and restricts the scope of applications. Here, we demonstrate high-performance solution-processed CdSe nanocrystal (NC) field-effect transistors (FETs) that exhibit very high carrier mobilities (over 400 cm(2)/(V s)). This is comparable to the carrier mobilities of crystalline silicon-based transistors. Furthermore, our NC FETs exhibit high operational stability and MHz switching speeds. These NC FETs are prepared by spin coating colloidal solutions of CdSe NCs capped with molecular solders [Cd2Se3](2-) onto various oxide gate dielectrics followed by thermal annealing. We show that the nature of gate dielectrics plays an important role in soldered CdSe NC FETs. The capacitance of dielectrics and the NC electronic structure near gate dielectric affect the distribution of localized traps and trap filling, determining carrier mobility and operational stability of the NC FETs. We expand the application of the NC soldering process to core-shell NCs consisting of a III-V InAs core and a CdSe shell with composition-matched [Cd2Se3](2-) molecular solders. Soldering CdSe shells forms nanoheterostructured material that combines high electron mobility and near-IR photoresponse.

  16. Evaluating Texts for Graphical Literacy Instruction: The Graphic Rating Tool

    ERIC Educational Resources Information Center

    Roberts, Kathryn L.; Brugar, Kristy A.; Norman, Rebecca R.

    2015-01-01

    In this article, we present the Graphical Rating Tool (GRT), which is designed to evaluate the graphical devices that are commonly found in content-area, non-fiction texts, in order to identify books that are well suited for teaching about those devices. We also present a "best of" list of science and social studies books, which includes…

  17. All-digital multicarrier demodulators for on-board processing satellites in mobile communication systems

    NASA Astrophysics Data System (ADS)

    Yim, Wan Hung

    Economical operation of future satellite systems for mobile communications can only be fulfilled by using dedicated on-board processing satellites, which would allow both cheap earth terminals and lower space segment costs. With on-board modems and codecs, the up-link and down-link can be optimized separately. An attractive scheme is to use frequency-division multiple access/single channel per carrier (FDMA/SCPC) on the up-link and time division multiplexing (TDM) on the down-link. This scheme allows mobile terminals to transmit a narrow-band, low-power signal, resulting in smaller dishes and high power amplifiers (HPAs) with lower output power. On the up-link, there are hundreds to thousands of FDM channels to be demodulated on board. The most promising approach is the use of all-digital multicarrier demodulators (MCDs), where analog and digital hardware are efficiently shared among channels, and digital signal processing (DSP) is used at an early stage to take advantage of very large scale integration (VLSI) implementation. An MCD consists of a channellizer for separation of frequency division multiplexing (FDM) channels, followed by individual demodulators for each channel. Major research areas in MCDs are multirate DSP and optimal estimation for synchronization, which form the basis of the thesis. Complex signal theories are central to the development of structured approaches for the sampling and processing of bandpass signals, which are the foundations of both channellizer and demodulator design. In multirate DSP, polyphase theories replace many ad-hoc, tedious and error-prone design procedures. For example, a polyphase-matrix DFT channellizer includes all efficient filter bank techniques as special cases. Also, a polyphase-lattice filter is derived, not only for sampling rate conversion, but also capable of sampling phase variation, which is required for symbol timing adjustment in all…
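
    The polyphase-matrix DFT channellizer mentioned in this abstract shares one prototype low-pass filter across all M channels and separates them with a single M-point (inverse) DFT per input frame. The Python sketch below conveys the structure only; commutator phasing and exact channel ordering, which matter in a real design, are glossed over.

        import numpy as np
        from scipy.signal import firwin

        def polyphase_channelize(x, M, taps_per_branch=8):
            h = firwin(M * taps_per_branch, 1.0 / M)   # shared prototype low-pass
            L = (len(x) // M) * M
            xb = x[:L].reshape(-1, M).T                # branch m holds x[m::M]
            hb = h.reshape(-1, M).T                    # polyphase components h[m::M]
            y = np.array([np.convolve(xb[m], hb[m]) for m in range(M)])
            return np.fft.ifft(y, axis=0) * M          # DFT across branches -> channels

        fs, M = 8000.0, 4
        t = np.arange(4096) / fs
        x = np.exp(2j * np.pi * 2000.0 * t)            # tone inside one FDM channel
        channels = polyphase_channelize(x, M)
        print([round(float(np.abs(c).mean()), 3) for c in channels])
        # one channel clearly dominates: the one holding the 2 kHz tone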

  18. Solution-Processed Organic-Inorganic Perovskite Field-Effect Transistors with High Hole Mobilities.

    PubMed

    Matsushima, Toshinori; Hwang, Sunbin; Sandanayaka, Atula S D; Qin, Chuanjiang; Terakawa, Shinobu; Fujihara, Takashi; Yahiro, Masayuki; Adachi, Chihaya

    2016-12-01

    A very high hole mobility of 15 cm(2) V(-1) s(-1) along with negligible hysteresis is demonstrated in transistors with an organic-inorganic perovskite semiconductor. This high mobility results from the well-developed perovskite crystallites, improved conversion to perovskite, reduced hole trap density, and improved hole injection achieved by employing a top-contact/top-gate structure with surface treatment and MoOx hole-injection layers.

  19. REQUIREMENTS FOR GRAPHIC TEACHING MACHINES.

    ERIC Educational Resources Information Center

    HICKEY, ALBERT; AND OTHERS

    AN EXPERIMENT WAS REPORTED WHICH DEMONSTRATES THAT GRAPHICS ARE MORE EFFECTIVE THAN SYMBOLS IN ACQUIRING ALGEBRA CONCEPTS. THE SECOND PHASE OF THE STUDY DEMONSTRATED THAT GRAPHICS IN HIGH SCHOOL TEXTBOOKS WERE RELIABLY CLASSIFIED IN A MATRIX OF 480 FUNCTIONAL STIMULUS-RESPONSE CATEGORIES. SUGGESTIONS WERE MADE FOR EXTENDING THE CLASSIFICATION…

  20. Graphic Interfaces and Online Information.

    ERIC Educational Resources Information Center

    Percival, J. Mark

    1990-01-01

    Discusses the growing importance of the use of Graphic User Interfaces (GUIs) with microcomputers and online services. Highlights include the development of graphics interfacing with microcomputers; CD-ROM databases; an evaluation of HyperCard as a potential interface to electronic mail and online commercial databases; and future possibilities.…

  1. Graphics Display of Foreign Scripts.

    ERIC Educational Resources Information Center

    Abercrombie, John R.

    1987-01-01

    Describes Graphics Project for Foreign Language Learning at the University of Pennsylvania, which has developed ways of displaying foreign scripts on microcomputers. Character design on computer screens is explained; software for graphics, printing, and language instruction is discussed; and a text editor is described that corrects optically…

  2. Computer Graphics Evolution: A Survey.

    ERIC Educational Resources Information Center

    Gartel, Laurence M.

    1985-01-01

    The history of the field of computer graphics is discussed. In 1976 there were no institutions that offered any kind of study of computer graphics. Today electronic image-making is seen as a viable, legitimate art form, and courses are offered by many universities and colleges. (RM)

  3. Interpreting Association from Graphical Displays

    ERIC Educational Resources Information Center

    Fitzallen, Noleine

    2016-01-01

    Research that has explored students' interpretations of graphical representations has not extended to include how students apply understanding of particular statistical concepts related to one graphical representation to interpret different representations. This paper reports on the way in which students' understanding of covariation, evidenced…

  4. Integration of rocket turbine design and analysis through computer graphics

    NASA Technical Reports Server (NTRS)

    Hsu, Wayne; Boynton, Jim

    1988-01-01

    An interactive approach with engineering computer graphics is used to integrate the design and analysis processes of a rocket engine turbine into a progressive and iterative design procedure. The processes are interconnected through pre- and postprocessors. The graphics are used to generate the blade profiles, their stacking, finite element generation, and analysis presentation through color graphics. Steps of the design process discussed include pitch-line design, axisymmetric hub-to-tip meridional design, and quasi-three-dimensional analysis. The viscous two- and three-dimensional analysis codes are executed after acceptable designs are achieved and estimates of initial losses are confirmed.

  5. Automatic Palette Identification of Colored Graphics

    NASA Astrophysics Data System (ADS)

    Lacroix, Vinciane

    The median-shift, a new clustering algorithm, is proposed to automatically identify the palette of colored graphics, a prerequisite for graphics vectorization. The median-shift is an iterative process which shifts each data point to the "median" point of its neighborhood, defined by a distance measure and a maximum radius, the only parameter of the method. The process is viewed as a graph transformation which converges to a set of clusters made of one or several connected vertices. As palette identification depends on color perception, the clustering is performed in the L*a*b* feature space. As pixels located on edges are made of mixed colors not expected to be part of the palette, they are removed from the initial data set by an automatic pre-processing step. Results are shown on scanned maps and on the Macbeth color chart and compared to well-established methods.
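
    From the description above, a minimal version of the iteration is easy to state in Python. We assume the "median" of a neighborhood is the componentwise median (the paper may use a different definition), and r is the method's single radius parameter.

        import numpy as np

        def median_shift(points, r, iters=20):
            pts = points.astype(float).copy()
            for _ in range(iters):
                shifted = np.empty_like(pts)
                for i, p in enumerate(pts):
                    nbrs = pts[np.linalg.norm(pts - p, axis=1) <= r]
                    shifted[i] = np.median(nbrs, axis=0)   # shift to neighborhood median
                if np.allclose(shifted, pts):              # converged
                    break
                pts = shifted
            return pts   # nearby points collapse together, yielding palette clusters

        colors = np.array([[0, 0, 0], [2, 1, 0], [90, 50, 40], [91, 52, 41]])
        print(median_shift(colors, r=5.0))   # two clusters in (L*, a*, b*) space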

  6. ElectroEncephaloGraphics: Making waves in computer graphics research.

    PubMed

    Mustafa, Maryam; Magnor, Marcus

    2014-01-01

    Electroencephalography (EEG) is a novel modality for investigating perceptual graphics problems. Until recently, EEG has predominantly been used for clinical diagnosis, in psychology, and by the brain-computer-interface community. Researchers are extending it to help understand the perception of visual output from graphics applications and to create approaches based on direct neural feedback. Researchers have applied EEG to graphics to determine perceived image and video quality by detecting typical rendering artifacts, to evaluate visualization effectiveness by calculating the cognitive load, and to automatically optimize rendering parameters for images and videos on the basis of implicit neural feedback.

  7. Graphic arts techniques and equipment: A compilation

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Technology utilization of NASA sponsored projects involving graphic arts techniques and equipment is discussed. The subjects considered are: (1) modifications to graphics tools, (2) new graphics tools, (3) visual aids for graphics, and (4) graphic arts shop hints. Photographs and diagrams are included to support the written material.

  8. High Mobility Flexible Amorphous IGZO Thin-Film Transistors with a Low Thermal Budget Ultra-Violet Pulsed Light Process.

    PubMed

    Benwadih, M; Coppard, R; Bonrad, K; Klyszcz, A; Vuillaume, D

    2016-12-21

    Amorphous, sol-gel-processed indium gallium zinc oxide (IGZO) transistors on a plastic substrate with a printable gate dielectric and an electron mobility of 4.5 cm²/(V s), as well as a mobility of 7 cm²/(V s) on a solid substrate (Si/SiO2), are reported. These performances are obtained using a low-temperature pulsed-light annealing technique. Ultraviolet (UV) pulsed light is an innovative alternative to the conventional (furnace or hot-plate) annealing process that we successfully implemented on sol-gel IGZO thin-film transistors (TFTs) made on a plastic substrate. The photonic annealing treatment has been optimized to obtain IGZO TFTs with significant electrical properties, and the organic gate dielectric layers deposited on these pulsed-UV-annealed films have also been optimized. This technique is very promising for the development of amorphous IGZO TFTs on plastic substrates.

  9. Graphical presentation of diagnostic information

    PubMed Central

    Whiting, Penny F; Sterne, Jonathan AC; Westwood, Marie E; Bachmann, Lucas M; Harbord, Roger; Egger, Matthias; Deeks, Jonathan J

    2008-01-01

    Background Graphical displays of results allow researchers to summarise and communicate the key findings of their study. Diagnostic information should be presented in an easily interpretable way, which conveys both test characteristics (diagnostic accuracy) and the potential for use in clinical practice (predictive value). Methods We discuss the types of graphical display commonly encountered in primary diagnostic accuracy studies and systematic reviews of such studies, and systematically review the use of graphical displays in recent diagnostic primary studies and systematic reviews. Results We identified 57 primary studies and 49 systematic reviews. Fifty-six percent of primary studies and 53% of systematic reviews used graphical displays to present results. Dot-plot or box-and-whisker plots were the most commonly used graph in primary studies and were included in 22 (39%) studies. ROC plots were the most common type of plot included in systematic reviews and were included in 22 (45%) reviews. One primary study and five systematic reviews included a probability-modifying plot. Conclusion Graphical displays are currently underused in primary diagnostic accuracy studies and systematic reviews of such studies. Diagnostic accuracy studies need to include multiple types of graphics in order to provide both a detailed overview of the results (diagnostic accuracy) and to communicate information that can be used to inform clinical practice (predictive value). Work is required to improve graphical displays, to better communicate the utility of a test in clinical practice and the implications of test results for individual patients. PMID:18405357
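
    For context, the "predictive value" that a probability-modifying plot conveys is Bayes' rule in odds form; the numbers below (sensitivity 0.9, specificity 0.8, pre-test probability 0.10) are assumed for illustration and do not come from the review:

        \mathrm{LR}^{+} = \frac{\text{sensitivity}}{1-\text{specificity}} = \frac{0.9}{0.2} = 4.5, \qquad
        \frac{p_{\text{post}}}{1-p_{\text{post}}} = \mathrm{LR}^{+}\cdot\frac{0.10}{0.90} = 0.5
        \;\Rightarrow\; p_{\text{post}} = \tfrac{1}{3} \approx 0.33

    A probability-modifying plot simply draws this mapping from pre-test to post-test probability for a given test result.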

  10. 78 FR 4438 - Investigations: Terminations, Modifications and Rulings: Certain Electronic Devices With Graphics...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-22

    ... Data Processing Systems, Components Thereof, and Associated Software AGENCY: U.S. International Trade... graphics data processing systems, components thereof, and associated software, by reason of infringement...

  11. Graphic arts color standards update: 1997

    NASA Astrophysics Data System (ADS)

    McDowell, David Q.

    1997-04-01

    Standards relating to color data definition continue to be a dominant theme in both the US and international graphic arts standards activity. There is a growing understanding of the role that metrology and printing process definition play in helping define stable conditions to which color characterization data can be related. Standards have been published to define color measurement and computation requirements, scanner input characterization targets, four-color output characterization, and graphic arts applications for both transmission and reflection densitometry. Work continues on standards relating to ink testing, reference ink color specifications, and printing process definition. In addition, efforts are underway to document, in ANSI and ISO Technical Reports, colorimetric characterization data for those printing processes having broad-based usage. These include various applications of offset, gravure, and flexographic printing processes. Such data is key to the success of color profiles developed in accordance with the specifications being developed by the International Color Consortium. The published graphic arts imaging and color-related standards and technical reports are summarized and the current status of the work in progress is reviewed. In addition, the interaction of the formal standards programs and other industry-driven color activities is discussed.

  12. Raster graphics extensions to the core system

    NASA Technical Reports Server (NTRS)

    Foley, J. D.

    1984-01-01

    A conceptual model of raster graphics systems was developed. The model integrates core-like graphics package concepts with contemporary raster display architectures. The conceptual model of raster graphics introduces multiple pixel matrices with associated index tables.

  13. Factors That Influence the Diffusion Process of Mobile Devices in Higher Education in Botswana and Namibia

    ERIC Educational Resources Information Center

    Asino, Tutaleni I.

    2015-01-01

    This comparative study uses the Diffusion of Innovation (DoI) theoretical framework to explore factors that influence diffusion of mobile devices in higher education in Botswana and Namibia. The five attributes (Relative Advantage, Compatibility, Complexity, Trialability, and Observability) of the persuasion stage, which have been found in previous…

  14. A Comparative Analysis of the Processes of Social Mobility in the USSR and in Today's Russia

    ERIC Educational Resources Information Center

    Shkaratan, O. I.; Iastrebov, G. A.

    2012-01-01

    When it comes to analyzing problems of mobility, most studies of the post-Soviet era have cited random and unconnected data with respect to the Soviet era, on the principle of comparing "the old" and "the new." The authors have deemed it possible (although based on material that is not fully comparable) to examine the late…

  15. Mobile Air Monitoring Data Processing Strategies and Effects on Spatial Air Pollution Trends

    EPA Science Inventory

    The collection of real-time air quality measurements while in motion (i.e., mobile monitoring) is currently conducted worldwide to evaluate in situ emissions, local air quality trends, and air pollutant exposure. This measurement strategy pushes the limits of traditional data an...

  16. An efficient process for producing economical and eco-friendly cotton textile composites for mobile industry

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The mobile industry comprised of airplanes, automotives, and ships uses enormous quantities of various types of textiles. Just a few decades ago, most of these textile products and composites were made with woven or knitted fabrics that were mostly made with the then only available natural fibers, i...

  17. Managing facts and concepts: computer graphics and information graphics from a graphic designer's perspective

    SciTech Connect

    Marcus, A.

    1983-01-01

    This book emphasizes the importance of graphic design for an information-oriented society. In an environment in which many new graphic communication technologies are emerging, it raises some issues which graphic designers and managers of graphic design production should consider in using the new technology effectively. In its final sections, it gives an example of the steps taken in designing a visual narrative as a prototype for responsible information-oriented graphic design. The management of complex facts and concepts, of complex systems of ideas and issues, presented in a visual as well as verbal narrative or dialogue and conveyed through new technology, will challenge the graphic design community in the coming decades. This shift to visual-verbal communication has repercussions in the educational system and the political/governance systems that go beyond the scope of this book. If there is a single goal for this book, it is to stimulate readers and then to provide references that will help them learn more about graphic design in an era of communication when know business is show business.

  18. Defining Identities through Multiliteracies: EL Teens Narrate Their Immigration Experiences as Graphic Stories

    ERIC Educational Resources Information Center

    Danzak, Robin L.

    2011-01-01

    Based on a framework of identity-as-narrative and multiliteracies, this article describes "Graphic Journeys," a multimedia literacy project in which English learners (ELs) in middle school created graphic stories that expressed their families' immigration experiences. The process involved reading graphic novels, journaling, interviewing, and…

  19. Write Is Right: Using Graphic Organizers to Improve Student Mathematical Problem Solving

    ERIC Educational Resources Information Center

    Zollman, Alan

    2012-01-01

    Teachers have used graphic organizers successfully in teaching the writing process. This paper describes graphic organizers and their potential mathematics benefits for both students and teachers, elucidates a specific graphic organizer adaptation for mathematical problem solving, and discusses results using the "four-corners-and-a-diamond"…

  20. Correlation between the shape of the ion mobility signals and the stepwise folding process of polylactide ions.

    PubMed

    Duez, Q; Josse, T; Lemaur, V; Chirot, F; Choi, C M; Dubois, P; Dugourd, P; Cornil, J; Gerbaux, P; De Winter, J

    2017-03-01

    In the field of polymer characterization, the use of ion mobility mass spectrometry (IMMS) remains mainly devoted to the temporal separation of cationized oligomers according to their charge states, molecular masses and macromolecular architectures in order to probe the presence of different structures. When analyzing multiply charged polymer ions by IMMS, the most striking feature is the observation of breaking points in the evolution of the average collision cross sections with the number of monomer units. Those breaking points are associated with the folding of the polymer chain around the cationizing agents. Here, we scrutinize the shape of the arrival time distribution (ATD) of polylactide ions and associate the broadening as well as the loss of symmetry of the ATD signals with the coexistence of different populations of ions attributed to the transition from open to folded stable structures. The observation of distinct distributions reveals the absence of folded/extended structure interconversion on the ion mobility time scale (1-10 ms) and hence over the lifetime of the ions within the mass spectrometer at room temperature. In order to obtain information on the possible interconversion between the different observed populations upon ion activation, we performed IM-IM-MS experiments (tandem ion mobility measurements). To do so, mobility-selected ions were activated by collisions before a second mobility measurement. Interestingly, the conversion by collisional activation from a globular structure into a (partially) extended structure, i.e. the gas-phase unfolding of the ions, was not observed in the energetic regime available with the experimental setup used. The absence of folded/extended interconversion, even upon collisional activation, points to the fact that the polylactide ions are 'frozen' in their specific 3D structure during the desolvation/ionization electrospray processes. Copyright © 2017 John Wiley & Sons, Ltd.

  1. EPA Communications Stylebook: Graphics Guide

    EPA Pesticide Factsheets

    Includes standards and guidance for graphics typography, layout, composition, color scheme, appropriate use of charts and graphs, logos and related symbols, and consistency with the message of accompanied content.

  2. Calculators and Computers: Graphical Addition.

    ERIC Educational Resources Information Center

    Spero, Samuel W.

    1978-01-01

    A computer program is presented that generates problem sets involving sketching graphs of trigonometric functions using graphical addition. The students use calculators to sketch the graphs and a computer solution is used to check it. (MP)

  3. Graphical Representation of Complex Functions.

    ERIC Educational Resources Information Center

    Renka, Robert J.

    1988-01-01

    Describes methods and software for the graphical representation of a complex function of a complex variable. Includes an application of a graphical interpretation of the complex zeros of the cubic and their properties. (PK)

  4. Parallel Debugging Using Graphical Views

    DTIC Science & Technology

    1988-03-01

    Voyeur, a prototype system for creating graphical views of parallel programs, provides a cost-effective way to construct such views for any parallel programming system. We illustrate Voyeur by discussing four views created for debugging Poker programs. One is a general trace facility for any Poker...Graphical views are essential for debugging parallel programs because of the large quantity of state information contained in parallel programs. Voyeur

  5. Hydraulics Graphics Package. Users Manual

    DTIC Science & Technology

    1985-11-01

    Engineering Center, Corps of Engineers, Department of the Army, as the origin of the program(s). ...HYDRAULICS GRAPHICS...Davis, California 95616, (916) 551-1748 (FTS) 460-1748. HGP Hydraulics Graphics Package Users Manual. Table of contents (fragment): 1 Introduction; 2.4 Use of Disk Files; 3 HGP Free Format User Input; 3.1 Command Language Syntax; 3.2

  6. Planetary Photojournal Home Page Graphic

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This image is an unannotated version of the Planetary Photojournal Home Page graphic. This digital collage contains a highly stylized rendition of our solar system and points beyond. As this graphic was intended to be used as a navigation aid in searching for data within the Photojournal, certain artistic embellishments have been added (color, location, etc.). Several data sets from various planetary and astronomy missions were combined to create this image.

  7. Photojournal Home Page Graphic 2007

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This image is an unannotated version of the Photojournal Home Page graphic released in October 2007. This digital collage contains a highly stylized rendition of our solar system and points beyond. As this graphic was intended to be used as a navigation aid in searching for data within the Photojournal, certain artistic embellishments have been added (color, location, etc.). Several data sets from various planetary and astronomy missions were combined to create this image.

  8. The Effects of Integrating Mobile and CAD Technology in Teaching Design Process for Malaysian Polytechnic Architecture Student in Producing Creative Product

    ERIC Educational Resources Information Center

    Hassan, Isham Shah; Ismail, Mohd Arif; Mustapha, Ramlee

    2010-01-01

    The purpose of this research is to examine the effect of integrating digital media such as mobile and CAD technology on the designing process of Malaysian polytechnic architecture students in producing a creative product. A website is developed based on Carroll's minimal theory, while mobile and CAD technology integration is based on Brown and…

  9. Split CV mobility at low temperature operation of Ge pFinFETs fabricated with STI first and last processes

    NASA Astrophysics Data System (ADS)

    Oliveira, A. V.; Simoen, E.; Agopian, P. G. D.; Martino, J. A.; Mitard, J.; Witters, L.; Langer, R.; Collaert, N.; Thean, A.; Claeys, C.

    2016-11-01

    The effective hole mobility of long strained Ge pFinFETs, fabricated with shallow trench isolation (STI) first and last approaches, is systematically evaluated from room temperature down to 77 K, from planar-like (100 nm) to narrow (20 nm) devices. The goal is to identify the dominant scattering mechanism. Here, the split capacitance-voltage (CV) technique has been applied, based on combined current-voltage (I-V) and CV measurements. It is shown that even at 77 K, the phonon scattering mechanism dominates the STI last process, while the Coulomb scattering strongly affects the STI first approach. On the other hand, the latter shows slightly higher hole mobility compared to the STI last counterpart.

  10. Space-Time Processing for Tactical Mobile Ad-Hoc Networks

    DTIC Science & Technology

    2005-08-01

    electrical engineering of antenna array design and electronics; computer science of networking; and the mathematics of information and control theory...MIMO networks at this time. The COST 259 model does provide improvements over the GSM model since it adds direction of arrival statistics to the...an open research issue especially because the channel statistics are not yet known for mobile MIMO networks. Physical and network diversity could be

  11. Building a Mobile HIV Prevention App for Men Who Have Sex With Men: An Iterative and Community-Driven Process

    PubMed Central

    McDougal, Sarah J; Sullivan, Patrick S; Stekler, Joanne D; Stephenson, Rob

    2015-01-01

    Background Gay, bisexual, and other men who have sex with men (MSM) account for a disproportionate burden of new HIV infections in the United States. Mobile technology presents an opportunity for innovative interventions for HIV prevention. Some HIV prevention apps currently exist; however, it is challenging to encourage users to download these apps and use them regularly. An iterative research process that centers on the community’s needs and preferences may increase the uptake, adherence, and ultimate effectiveness of mobile apps for HIV prevention. Objective The aim of this paper is to provide a case study to illustrate how an iterative community approach to a mobile HIV prevention app can lead to changes in app content to appropriately address the needs and the desires of the target community. Methods In this three-phase study, we conducted focus group discussions (FGDs) with MSM and HIV testing counselors in Atlanta, Seattle, and US rural regions to learn preferences for building a mobile HIV prevention app. We used data from these groups to build a beta version of the app and theater tested it in additional FGDs. A thematic data analysis examined how this approach addressed preferences and concerns expressed by the participants. Results Participants expressed greater willingness to use the app during theater testing than during the first phase of FGDs. Many concerns that were identified in phase one (eg, disagreements about reminders for HIV testing, concerns about app privacy) were considered in building the beta version. Participants perceived these features as strengths during theater testing. However, some disagreements were still present, especially regarding the tone and language of the app. Conclusions These findings highlight the benefits of using an interactive and community-driven process to collect data on app preferences when building a mobile HIV prevention app. Through this process, we learned how to be inclusive of the larger MSM population without

  12. Trends in Continuity and Interpolation for Computer Graphics.

    PubMed

    Gonzalez Garcia, Francisco

    2015-01-01

    In every computer graphics oriented application today, it is a common practice to texture 3D models as a way to obtain realistic material. As part of this process, mesh texturing, deformation, and visualization are all key parts of the computer graphics field. This PhD dissertation was completed in the context of these three important and related fields in computer graphics. The article presents techniques that improve on existing state-of-the-art approaches related to continuity and interpolation in texture space (texturing), object space (deformation), and screen space (rendering).

  13. Interactive computer graphics: the arms race

    NASA Astrophysics Data System (ADS)

    Hafemeister, D. W.

    1983-10-01

    By using interactive computer graphics (ICG) it is possible to discuss the numerical aspects of some arms race issues with more specificity and in a visual way. The number of variables involved in these issues can be quite large; computers operated in the interactive, graphical mode can allow exploration of the variables, leading to a greater understanding of the issues. This paper will examine some examples of interactive computer graphics: (1) the relationship between silo hardening and the accuracy, yield, and reliability of ICBMs; (2) target vulnerability (Minuteman, Dense Pack); (3) counterforce vs. countervalue weapons; (4) civil defense; (5) gravitational bias error; (6) MIRV; (7) national vulnerability to a preemptive first strike; (8) radioactive fallout; (9) digital image processing with charge-coupled devices.

  14. Computer graphics in architecture and engineering

    NASA Technical Reports Server (NTRS)

    Greenberg, D. P.

    1975-01-01

    The present status of the application of computer graphics to the building profession or architecture and its relationship to other scientific and technical areas were discussed. It was explained that, due to the fragmented nature of architecture and building activities (in contrast to the aerospace industry), a comprehensive, economic utilization of computer graphics in this area is not practical and its true potential cannot now be realized due to the present inability of architects and structural, mechanical, and site engineers to rely on a common data base. Future emphasis will therefore have to be placed on a vertical integration of the construction process and effective use of a three-dimensional data base, rather than on waiting for any technological breakthrough in interactive computing.

  15. On the possibility of producing true real-time retinal cross-sectional images using a graphics processing unit enhanced master-slave optical coherence tomography system

    NASA Astrophysics Data System (ADS)

    Bradu, Adrian; Kapinchev, Konstantin; Barnes, Frederick; Podoleanu, Adrian

    2015-07-01

    In a previous report, we demonstrated master-slave optical coherence tomography (MS-OCT), an OCT method that does not need resampling of data and can be used to deliver en face images from several depths simultaneously. In a separate report, we have also demonstrated MS-OCT's capability of producing cross-sectional images of a quality similar to those provided by the traditional Fourier domain (FD) OCT technique, but at a much slower rate. Here, we demonstrate that by taking advantage of the parallel processing capabilities offered by the MS-OCT method, cross-sectional OCT images of the human retina can be produced in real time. We analyze the conditions that ensure a true real-time B-scan imaging operation and demonstrate in vivo real-time images from human fovea and the optic nerve, with resolution and sensitivity comparable to those produced using the traditional FD-based method, however, without the need of data resampling.

  16. RHENIUM SOLUBILITY IN BOROSILICATE NUCLEAR WASTE GLASS IMPLICATIONS FOR THE PROCESSING AND IMMOBILIZATION OF TECHNETIUM-99 (AND SUPPORTING INFORMATION WITH GRAPHICAL ABSTRACT)

    SciTech Connect

    AA KRUGER; A GOEL; CP RODRIGUEZ; JS MCCLOY; MJ SCHWEIGER; WW LUKENS; JR, BJ RILEY; D KIM; M LIEZERS; P HRMA

    2012-08-13

    The immobilization of 99Tc in a suitable host matrix has proved a challenging task for researchers in the nuclear waste community around the world. At the Hanford site in Washington State in the U.S., the total amount of 99Tc in low-activity waste (LAW) is ~1,300 kg and the current strategy is to immobilize the 99Tc in borosilicate glass with vitrification. In this context, the present article reports on the solubility and retention of rhenium, a nonradioactive surrogate for 99Tc, in a LAW sodium borosilicate glass. Due to the radioactive nature of technetium, rhenium was chosen as a simulant because of previously established similarities in ionic radii and other chemical aspects. The glasses containing target Re concentrations varying from 0 to 10,000 ppm by mass were synthesized in vacuum-sealed quartz ampoules to minimize the loss of Re by volatilization during melting at 1000 °C. The rhenium was found to be present predominantly as Re7+ in all the glasses as observed by X-ray absorption near-edge structure (XANES). The solubility of Re in borosilicate glasses was determined to be ~3,000 ppm (by mass) using inductively coupled plasma-optical emission spectroscopy (ICP-OES). At higher rhenium concentrations, some additional material was retained in the glasses in the form of alkali perrhenate crystalline inclusions detected by X-ray diffraction (XRD) and laser ablation-ICP mass spectrometry (LA-ICP-MS). Assuming justifiably substantial similarities between Re7+ and Tc7+ behavior in this glass system, these results implied that the processing and immobilization of 99Tc from radioactive wastes should not be limited by the solubility of 99Tc in borosilicate LAW glasses.

  17. Cartooning History: Canada's Stories in Graphic Novels

    ERIC Educational Resources Information Center

    King, Alyson E.

    2012-01-01

    In recent years, historical events, issues, and characters have been portrayed in an increasing number of non-fiction graphic texts. Similar to comics and graphic novels, graphic texts are defined as fully developed, non-fiction narratives told through panels of sequential art. Such non-fiction graphic texts are being used to teach history in…

  18. Comprehending, Composing, and Celebrating Graphic Poetry

    ERIC Educational Resources Information Center

    Calo, Kristine M.

    2011-01-01

    The use of graphic poetry in classrooms is encouraged as a way to engage students and motivate them to read and write poetry. This article discusses how graphic poetry can help students with their comprehension of poetry while tapping into popular culture. It is organized around three main sections--reading graphic poetry, writing graphic poetry,…

  19. Antinomies of Semiotics in Graphic Design

    ERIC Educational Resources Information Center

    Storkerson, Peter

    2010-01-01

    The following paper assesses the roles played by semiotics in graphic design and in graphic design education, which both reflects and shapes practice. It identifies a series of factors; graphic design education methods and culture; semiotic theories themselves and their application to graphic design; the two wings of Peircian semiotics and…

  20. Computer Graphics. Curriculum Guide for Technology Education.

    ERIC Educational Resources Information Center

    Craft, Clyde O.

    This curriculum guide for a 1-quarter or 1-semester course in computer graphics is designed to be used with Apple II computers. Some of the topics covered include the following: computer graphics terminology and applications, operating Apple computers, graphics programming in BASIC using various programs and commands, computer graphics painting,…

  1. Animation graphic interface for the space shuttle onboard computer

    NASA Technical Reports Server (NTRS)

    Wike, Jeffrey; Griffith, Paul

    1989-01-01

    Graphics interfaces designed to operate on space qualified hardware challenge software designers to display complex information under processing power and physical size constraints. Under contract to Johnson Space Center, MICROEXPERT Systems is currently constructing an intelligent interface for the LASER DOCKING SENSOR (LDS) flight experiment. Part of this interface is a graphic animation display for Rendezvous and Proximity Operations. The displays have been designed in consultation with Shuttle astronauts. The displays show multiple views of a satellite relative to the shuttle, coupled with numeric attitude information. The graphics are generated using position data received by the Shuttle Payload and General Support Computer (PGSC) from the Laser Docking Sensor. Some of the design considerations include crew member preferences in graphic data representation, single versus multiple window displays, mission tailoring of graphic displays, realistic 3D images versus generic icon representations of real objects, the physical relationship of the observers to the graphic display, how numeric or textual information should interface with graphic data, in what frame of reference objects should be portrayed, recognizing conditions of display information-overload, and screen format and placement consistency.

  2. Graphics for Stereo Visualization Theater for Supercomputing 1998

    NASA Technical Reports Server (NTRS)

    Antipuesto, Joel; Reid, Lisa (Technical Monitor)

    1998-01-01

    The Stereo Visualization Theater is a high-resolution graphics demonstration that provides a review of current research being performed at NASA. Using a stereoscopic projection, multiple participants can explore scientific data in new ways. The pre-processed audio and video are played back in real time from a workstation. A stereo graphics filter for the projector and passive polarized glasses worn by audience members are used to create the stereo effect.

  3. Actors and intentions in the development process of a mobile phone platform for self-management of hypertension.

    PubMed

    Ranerup, Agneta; Hallberg, Inger

    2014-06-24

    Aim: The aim of this study was to enhance the knowledge regarding actors and intentions in the development process of a mobile phone platform for self-management of hypertension. Methods: Our research approach was a 14-month longitudinal "real-time ethnography" method of description and analysis. Data were collected through focus groups with patients and providers, patient interviews, and design meetings with researchers and experts. The analysis was informed by the concepts of actors and inscriptions in actor-network theory (ANT). Results: Our study showed that laypersons, scientific actors, as well as technology itself, might influence development processes of support for self-management of hypertension. The intentions were inscribed into the technology design as well as the models of learning and treatment. Conclusions: The study highlighted important aspects of how actors and intentions feature in the development of the mobile phone platform to support self-management of hypertension. The study indicated the multifacetedness of the participating actors, including the prominent role of technology. The concrete results of such processes included questions in the self-report system, learning and treatment models.

  4. Graphical mass factorization

    NASA Astrophysics Data System (ADS)

    Humpert, B.; van Neerven, W. L.

    1981-07-01

    We point to the close analogy between (multiplicative) BPHZ renormalization and mass factorization. Adaptation of the forest formula to mass-singular graphs allows an alternative proof of mass factorization. A diagrammatic method is developed to carry out diagram-by-diagram mass factorization, with the mass singularities being subtracted by counter terms which build up the operator matrix element. The reasoning is presented for deep-inelastic (DI) scattering and for the Drell-Yan (DY) process.

  5. Graphical programming of telerobotic tasks

    SciTech Connect

    Small, D.E.; McDonald, M.J.

    1997-02-01

    With a goal of producing faster, safer, and cheaper technologies for nuclear waste cleanup, Sandia is actively developing and extending intelligent systems technologies. Graphical Programming is a key technology for robotic waste cleanup that Sandia is developing toward this goal. This paper describes Sancho, Sandia's most advanced Graphical Programming supervisory software. Sancho, now operational on several robot systems, incorporates all of Sandia's recent advances in supervisory control. Developed to rapidly apply Graphical Programming on a diverse set of robot systems, Sancho uses a general set of tools to implement task and operational behavior. Sancho can be rapidly reconfigured for new tasks and operations without modifying the supervisory code. Other innovations include task-based interfaces, event-based sequencing, and sophisticated GUI design. These innovations have resulted in robot control programs and approaches that are easier and safer to use than teleoperation, off-line programming, or full automation.

  6. Intellectual system of identification of Arabic graphics

    NASA Astrophysics Data System (ADS)

    Abdoullayeva, Gulchin G.; Aliyev, Telman A.; Gurbanova, Nazakat G.

    2001-08-01

    The studies made in the domain of graphic images have allowed the creation of artificial intelligence facilities for recognizing letters, letter combinations, etc. in various scripts and prints. The work proposes a system for the recognition and identification of symbols of the Arabic script, which has its own specificity compared to Latin and Cyrillic ones. The starting stage of recognition and identification is coding, followed by entry of the information into a computer; the entry step is one of the essential problems. To enter a large volume of information per unit of time, a scanner is usually employed. Along with the scanner, the authors present their own hardware for efficient input and coding of the information. For refinement of symbols not identified by the scanner, mostly for small volumes of information, the developed coding devices are used directly in the process of writing. The functional design of the software is based on a heuristic model of the creative activity of a researcher and of experts in describing and estimating the states of weakly formalizable systems, drawing on methods of identification and of selection of geometric features.

  7. Trend Monitoring System (TMS) graphics software

    NASA Technical Reports Server (NTRS)

    Brown, J. S.

    1979-01-01

    A prototype bus communications system, which is being used to support the Trend Monitoring System (TMS) and to evaluate the bus concept, is considered. A set of FORTRAN-callable graphics subroutines for the host MODCOMP computer, and an approach to splitting graphics work between the host and the system's intelligent graphics terminals, are described. The graphics software in the MODCOMP and the operating software package written for the graphics terminals are included.

  8. Collection Of Software For Computer Graphics

    NASA Technical Reports Server (NTRS)

    Hibbard, Eric A.; Makatura, George

    1990-01-01

    Ames Research Graphics System (ARCGRAPH) collection of software libraries and software utilities assisting researchers in generating, manipulating, and visualizing graphical data. Defines metafile format containing device-independent graphical data. File format used with various computer-graphics-manipulation and -animation software packages at Ames, including SURF (COSMIC Program ARC-12381) and GAS (COSMIC Program ARC-12379). Consists of two-stage "pipeline" used to put out graphical primitives. ARCGRAPH libraries developed on VAX computer running VMS.

  9. Graphic Journeys: Graphic Novels' Representations of Immigrant Experiences

    ERIC Educational Resources Information Center

    Boatright, Michael D.

    2010-01-01

    This article explores how immigrant experiences are represented in the narratives of three graphic novels published in the last decade: Tan's (2007) "The Arrival," Kiyama's (1931/1999) "The Four Immigrants Manga: A Japanese Experience in San Francisco, 1904-1924," and Yang's (2006) "American Born Chinese." Through a theoretical lens informed by…

  10. Real-time pulmonary graphics.

    PubMed

    Mammel, Mark C; Donn, Steven M

    2015-06-01

    Real-time pulmonary graphics now enable clinicians to view lung mechanics and patient-ventilator interactions on a breath-to-breath basis. Displays of pressure, volume, and flow waveforms, pressure-volume and flow-volume loops, and trend screens enable clinicians to customize ventilator settings based on the underlying pathophysiology and responses of the individual patient. This article reviews the basic concepts of pulmonary graphics and demonstrates how they contribute to our understanding of respiratory physiology and the management of neonatal respiratory failure.

  11. Investigation of characteristics and transformation processes of megacity emission plumes using a mobile laboratory in the Paris metropolitan area

    NASA Astrophysics Data System (ADS)

    von der Weiden-Reinmüller, S.-L.; Drewnick, F.; Zhang, Q.; Meleux, F.; Beekmann, M.; Borrmann, S.

    2012-04-01

    A growing fraction of the world's population is living in urban agglomerations of increasing size. Currently, 20 cities worldwide qualify as so-called megacities, having more than 10 million inhabitants. These intense pollution hot-spots raise a number of scientific questions concerning their influence on local and regional air quality, which is connected with human health, flora and fauna. In the framework of the European Union FP7 MEGAPOLI project (Megacities: Emissions, urban, regional and Global Atmospheric POLlution and climate effects, and Integrated tools for assessment and mitigation) two major field campaigns were carried out in the greater Paris region in July 2009 and January/February 2010. This work presents results from mobile particulate and gas phase measurements with a focus on the characteristics of the Paris emission plume, its impact on regional air quality, and aerosol transformation processes within the plume as it travels away from its source. In addition, differences between summer and winter conditions are discussed. The mobile laboratory was equipped with high-time-resolution instrumentation to measure particle number concentrations (dP > 2.5 nm), size distributions (dP ~ 5 nm - 32 μm), sub-micron chemical composition (non-refractory species using an Aerodyne HR-ToF-AMS, PAH and black carbon) as well as major trace gases (CO2, SO2, O3, NOx) and standard meteorological parameters. An on-board webcam and GPS allow detailed monitoring of the traffic situation and vehicle track. In a total of 29 mobile and 25 stationary measurements with the mobile laboratory, the Paris emission plume as well as the atmospheric background was characterized under various meteorological conditions. This makes it possible to investigate the influence of external factors such as temperature, solar radiation or precipitation on the plume characteristics. Three measurement strategies were applied to investigate the emission plume. First, circular mobile measurements around Paris

  12. Common Graphics Library (CGL). Volume 1: LEZ user's guide

    NASA Technical Reports Server (NTRS)

    Taylor, Nancy L.; Hammond, Dana P.; Hofler, Alicia S.; Miner, David L.

    1988-01-01

    Users are introduced to and instructed in the use of the Langley Easy (LEZ) routines of the Common Graphics Library (CGL). The LEZ routines form an application independent graphics package which enables the user community to view data quickly and easily, while providing a means of generating scientific charts conforming to the publication and/or viewgraph process. A distinct advantage for using the LEZ routines is that the underlying graphics package may be replaced or modified without requiring the users to change their application programs. The library is written in ANSI FORTRAN 77, and currently uses a CORE-based underlying graphics package, and is therefore machine independent, providing support for centralized and/or distributed computer systems.

  13. Graphic Communications. Career Education Guide.

    ERIC Educational Resources Information Center

    Dependents Schools (DOD), Washington, DC. European Area.

    The curriculum guide is designed to provide students with realistic training in graphic communications theory and practice within the secondary educational framework and to prepare them for entry into an occupation or continuing postsecondary education. The program modules outlined in the guide have been grouped into four areas: printing,…

  14. Recorded Music and Graphic Design.

    ERIC Educational Resources Information Center

    Osterer, Irv

    1998-01-01

    Reviews the history of art as an element of music-recording packaging. Describes a project in which students design a jacket for either cassette or CD using a combination of computerized and traditional rendering techniques. Reports that students have been inspired to look into careers in graphic design. (DSK)

  15. [Graphic reconstruction of anatomic surfaces].

    PubMed

    Ciobanu, O

    2004-01-01

    The paper deals with the graphic reconstruction of anatomic surfaces in a virtual 3D setting. Scanning technologies and software provide greater flexibility in the digitization of surfaces as well as higher resolution and accuracy. An alternative, inexpensive method for the reconstruction of 3D anatomic surfaces is presented in connection with studies and international projects developed by the Medical Design research team.

  16. Overview of Graphical User Interfaces.

    ERIC Educational Resources Information Center

    Hulser, Richard P.

    1993-01-01

    Discussion of graphical user interfaces for online public access catalogs (OPACs) covers the history of OPACs; OPAC front-end design, including examples from Indiana University and the University of Illinois; and planning and implementation of a user interface. (10 references) (EA)

  17. Graphics Design Technology Curriculum Guide.

    ERIC Educational Resources Information Center

    Idaho State Dept. of Education, Boise. Div. of Vocational Education.

    This Idaho secondary education curriculum guide provides lists of tasks, performance objectives, and enabling objectives for instruction intended to impart entry-level employment skills in graphics design technology. The first list states all tasks for 11 areas; separate lists for each area follow. Each task on the lists is accompanied by a…

  18. Graphic Novels in the Classroom

    ERIC Educational Resources Information Center

    Martin, Adam

    2009-01-01

    Today many authors and artists adapt works of classic literature into a medium more "user friendly" to the increasingly visual student population. Stefan Petrucha and Kody Chamberlain's version of "Beowulf" is one example. The graphic novel captures the entire epic in arresting images and contrasts the darkness of the setting and characters with…

  19. Graphic Arts/Offset Lithography.

    ERIC Educational Resources Information Center

    Hoisington, James; Metcalf, Joseph

    This revised curriculum for graphic arts is designed to provide secondary and postsecondary students with entry-level skills and an understanding of current printing technology. It contains lesson plans based on entry-level competencies for offset lithography as identified by educators and industry representatives. The guide is divided into 15…

  20. Multi Platform Graphics Subroutine Library

    SciTech Connect

    Brand, Hal

    1992-02-21

    DIGLIB is a collection of general graphics subroutines. It was designed to be small, reasonably fast, device-independent, and compatible with DEC-supplied operating systems for VAXes, PDP-11s, and LSI-11s, and the DOS operating system for IBM PCs and IBM-compatible machines. The software is readily usable by casual programmers for two-dimensional plotting.

  1. Interactive Computer Graphics

    NASA Technical Reports Server (NTRS)

    Kenwright, David

    2000-01-01

    Aerospace data analysis tools that significantly reduce the time and effort needed to analyze large-scale computational fluid dynamics simulations have emerged this year. The current approach for most postprocessing and visualization work is to explore the 3D flow simulations with one of a dozen or so interactive tools. While effective for analyzing small data sets, this approach becomes extremely time consuming when working with data sets larger than one gigabyte. An active area of research this year has been the development of data mining tools that automatically search through gigabyte data sets and extract the salient features with little or no human intervention. With these so-called feature extraction tools, engineers are spared the tedious task of manually exploring huge amounts of data to find the important flow phenomena. The software tools identify features such as vortex cores, shocks, separation and attachment lines, recirculation bubbles, and boundary layers. Some of these features can be extracted in a few seconds; others take minutes to hours on extremely large data sets. The analysis can be performed off-line in a batch process, either during or following the supercomputer simulations. These computations have to be performed only once, because the feature extraction programs search the entire data set and find every occurrence of the phenomena being sought. Because the important questions about the data are being answered automatically, interactivity is less critical than it is with traditional approaches.

  2. Forced gradient infiltration experiments: effect on the release processes of mobile particles and organic contaminants

    NASA Astrophysics Data System (ADS)

    Pagels, B.; Reichel, K.; Totsche, K. U.

    2009-04-01

    Mobile colloidal and suspended matter is likely to affect the mobility of polycyclic aromatic hydrocarbons (PAHs) in the unsaturated soil zone at contaminated sites. We studied the release of mobile (organic) particles (MOPs), which include among others dissolved and colloidal organic matter, in response to forced sprinkling infiltration and multiple flow interrupts, using undisturbed zero-tension lysimeters. The aim was to assess the effect of these MOPs on the export of PAHs and other contaminants in floodplain soils. Seepage water samples were analyzed for dissolved and colloidal organic carbon (DOC), PAH, suspended particles, pH, electrical conductivity, turbidity, zeta potential and surface tension in the fraction smaller than 0.7 µm. In addition, selected PAH were analysed in the size fraction > 0.7 µm. Bromide was used as a conservative tracer to determine the flow regime. First arrival of bromide was detected 3.8 hours after the start of irrigation. The concentration gradually increased and reached a level of C/C0 = 0.1 just before the flow interrupt (FI). After flow was resumed, the effluent bromide concentration was equal to the concentration before the FI. Ongoing irrigation caused a breakthrough wave, which continuously increased until the bromide concentration reached ~100% of the input concentration. A high-intensity rain event of 4 L m⁻² h⁻¹ upon summer-dried lysimeters resulted in a release of particles in the size range of 250-400 nm. In addition, it appears that surface-active agents are released with the initially exported seepage water, as indicated by the decrease of the surface tension to 60 mN m⁻¹ (pure water: 72 mN m⁻¹). The turbidity values range from 8-14 FAU. The concentration of DOC is about 30-40 mg L⁻¹ in the initial effluent fractions and equilibrates to 15 mg L⁻¹ with ongoing percolation. The PAHs in the fraction < 0.7 µm amount to 0.02 µg L⁻¹, and 0.05 µg L⁻¹ in the fraction > 0.7 µm. After establishing steady state flow conditions, first arrival of bromide was detected

  3. Experiences with distributing graphic software between processors

    SciTech Connect

    Hamlin, G.; George, J.E.

    1982-01-01

    Software to aid the distribution and coordination of tasks between different processors was developed to distribute applications written in Fortran. This development led to the discovery of problems unique to Fortran and to interesting practical solutions. Two graphical applications were distributed to a variety of machines and machine pairs: CDC 7600-LSI/11, Cray-LSI/11, VAX 11/780, and Apollo. These distributions pointed out several parameters such as the use of Fortran Common, communication parameters, and processing capabilities that can affect the successful distribution of applications.

  4. Teaching graphics in technical communication classes

    NASA Technical Reports Server (NTRS)

    Spurgeon, K. C.

    1981-01-01

    Graphic aids convey and clarify information more efficiently and accurately than words alone; therefore, most technical writing includes the use of graphics. Ways of accumulating and presenting graphic illustrations on a shoestring budget are suggested. These include collecting graphics from companies' annual reports and laminating them for workshop use or putting them on a flip chart for classroom presentation, creating overhead transparencies to demonstrate different levels of effectiveness of graphic aids, and bringing in graphic artists for question/answer periods or in-class workshops. Also included are an extensive handout as an introduction to graphics, sample assignments, and a selected and annotated bibliography.

  5. Ammonia mobility in chabazite: insight into the diffusion component of the NH3-SCR process.

    PubMed

    O'Malley, Alexander J; Hitchcock, Iain; Sarwar, Misbah; Silverwood, Ian P; Hindocha, Sheena; Catlow, C Richard A; York, Andrew P E; Collier, P J

    2016-06-29

    The diffusion of ammonia in commercial NH3-SCR catalyst Cu-CHA was measured and compared with H-CHA using quasielastic neutron scattering (QENS) and molecular dynamics (MD) simulations to assess the effect of counterion presence on NH3 mobility in automotive emission control relevant zeolite catalysts. QENS experiments observed jump diffusion with a jump distance of 3 Å, giving similar self-diffusion coefficient measurements for both Cu- and H-CHA samples, in the range of ca. 5-10 × 10⁻¹⁰ m² s⁻¹ over the measured temperature range. Self-diffusivities calculated by MD were within a factor of 6 of those measured experimentally at each temperature. The activation energies of diffusion were also similar for both studied systems: 3.7 and 4.4 kJ mol⁻¹ for the H- and Cu-chabazite respectively, suggesting that counterion presence has little impact on ammonia diffusivity on the timescale of the QENS experiment. An explanation is given by the MD simulations, which showed the strong coordination of NH3 with Cu²⁺ counterions in the centre of the chabazite cage, shielding other molecules from interaction with the ion, and allowing for intercage diffusion through the 8-ring windows (consistent with the experimentally observed jump length) to carry on unhindered.
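
    For orientation, the quoted jump distance and self-diffusivity can be related by the standard isotropic jump-diffusion expression; the abstract does not name a specific jump model, so the residence-time estimate below is illustrative only:

        D_s = \frac{\langle \ell^{2} \rangle}{6\tau}
        \;\Rightarrow\;
        \tau \approx \frac{(3\times 10^{-10}\,\mathrm{m})^{2}}{6 \times 5\times 10^{-10}\,\mathrm{m^{2}\,s^{-1}}} \approx 3\times 10^{-11}\,\mathrm{s},
        \qquad
        D_s(T) = D_0\,e^{-E_a/RT}

    i.e. a mean residence time between jumps of roughly 30 ps, with the Arrhenius form carrying the quoted activation energies of 3.7-4.4 kJ mol⁻¹.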

  6. Applicability of different onboard routing and processing techniques to mobile satellite systems

    NASA Technical Reports Server (NTRS)

    Craig, A. D.; Marston, P. C.; Bakken, P. M.; Vernucci, A.; Benedicto, J.

    1993-01-01

    The paper summarizes a study contract recently undertaken for ESA. The study compared the effectiveness of several processing architectures applied to multiple beam, geostationary global and European regional missions. The paper discusses architectures based on transparent SS-FDMA analog, transparent DSP and regenerative processing. Quantitative comparisons are presented and general conclusions are given with respect to suitability of the architectures to different mission requirements.

  7. GnuForPlot Graphics

    SciTech Connect

    2015-11-04

    Gnuforplot Graphics is a Fortran90 program designed to generate two- and three-dimensional plots of data on a personal computer. The program uses calls to the open-source code Gnuplot to generate the plots. Two Fortran90 programs have been written to use the Gnuplot graphics capabilities. The first program, named Plotsetup.f90, reads data from output files created by either the Stadium or LeachXS/Orchestra modeling codes and saves the data in arrays for plotting. It then calls Gnuforplot, which takes the data array along with user-specified parameters for the plot specifications, and issues the Gnuplot commands that generate the screen plots. The user can view the plots and optionally save copies in JPEG format.
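
    As an illustration of the call pattern only (not the Fortran90 source), driving Gnuplot from a host program amounts to streaming Gnuplot commands to its standard input; this hypothetical Python sketch assumes a gnuplot executable on the PATH and a data file named data.dat:

        import subprocess

        # open gnuplot and pipe plotting commands to it
        gp = subprocess.Popen(["gnuplot"], stdin=subprocess.PIPE, text=True)
        gp.stdin.write("set terminal jpeg\n")        # save a JPEG copy of the plot
        gp.stdin.write("set output 'plot.jpg'\n")
        gp.stdin.write("plot 'data.dat' using 1:2 with lines title 'series'\n")
        gp.stdin.close()
        gp.wait()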

  8. Graphical Models for Ordinal Data

    PubMed Central

    Guo, Jian; Levina, Elizaveta; Michailidis, George; Zhu, Ji

    2014-01-01

    A graphical model for ordinal variables is considered, where it is assumed that the data are generated by discretizing the marginal distributions of a latent multivariate Gaussian distribution. The relationships between these ordinal variables are then described by the underlying Gaussian graphical model and can be inferred by estimating the corresponding concentration matrix. Direct estimation of the model is computationally expensive, but an approximate EM-like algorithm is developed to provide an accurate estimate of the parameters at a fraction of the computational cost. Numerical evidence based on simulation studies shows the strong performance of the algorithm, which is also illustrated on data sets on movie ratings and an educational survey. PMID:26120267
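
    The generative model described above can be stated concretely; in this sketch the covariance, thresholds, and sample size are illustrative assumptions, not values from the paper:

        import numpy as np

        rng = np.random.default_rng(1)
        Sigma = np.array([[1.0, 0.6],
                          [0.6, 1.0]])   # latent covariance; the graph lives in its inverse (concentration matrix)
        Z = rng.multivariate_normal(np.zeros(2), Sigma, size=500)  # latent Gaussian
        cuts = [-0.5, 0.5]               # discretization thresholds
        X = np.digitize(Z, cuts)         # ordinal levels 0, 1, 2 per variable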

  9. Representing Learning With Graphical Models

    NASA Technical Reports Server (NTRS)

    Buntine, Wray L.; Lum, Henry, Jr. (Technical Monitor)

    1994-01-01

    Probabilistic graphical models are being used widely in artificial intelligence, for instance, in diagnosis and expert systems, as a unified qualitative and quantitative framework for representing and reasoning with probabilities and independencies. Their development and use spans several fields including artificial intelligence, decision theory and statistics, and provides an important bridge between these communities. This paper shows by way of example that these models can be extended to machine learning, neural networks and knowledge discovery by representing the notion of a sample on the graphical model. Not only does this allow a flexible variety of learning problems to be represented, it also provides the means for representing the goal of learning and opens the way for the automatic development of learning algorithms from specifications.

  10. Computer Graphics for Multimedia and Hypermedia Development.

    ERIC Educational Resources Information Center

    Mohler, James L.

    1998-01-01

    Discusses several theoretical and technical aspects of computer-graphics development that are useful for creating hypermedia and multimedia materials. Topics addressed include primary bitmap attributes in computer graphics, the jigsaw principle, and raster layering. (MSE)

  11. Computer Graphics and Administrative Decision-Making.

    ERIC Educational Resources Information Center

    Yost, Michael

    1984-01-01

    Reduction in prices now makes it possible for almost any institution to use computer graphics for administrative decision making and research. Current and potential uses of computer graphics in these two areas are discussed. (JN)

  12. Microcomputer Simulated CAD for Engineering Graphics.

    ERIC Educational Resources Information Center

    Huggins, David L.; Myers, Roy E.

    1983-01-01

    Describes a simulated computer-aided-graphics (CAD) program at The Pennsylvania State University. Rationale for the program, facilities, microcomputer equipment (Apple) used, and development of a software package for simulating applied engineering graphics are considered. (JN)

  13. Freedom System Text and Graphics System (TAGS)

    NASA Technical Reports Server (NTRS)

    1989-01-01

    The Text and Graphics System (TAGS) is a high-resolution facsimile system that scans text or graphics material and converts the analog SCAN data into serial digital data. This video shows the TAGS in operation.

  14. Spatio-temporal modelling of electrical supply systems to optimize the site planning process for the "power to mobility" technology

    NASA Astrophysics Data System (ADS)

    Karl, Florian; Zink, Roland

    2016-04-01

    The transformation of the energy sector towards decentralized renewable energies (RE) also requires storage systems to ensure security of supply. The new "Power to Mobility" (PtM) technology is one potential way to use electrical overproduction to produce methane, e.g., for gas vehicles. Motivated by this, the paper presents a methodology for GIS-based temporal modelling of the power grid to optimize the site planning process for the new PtM technology. The modelling approach combines the software QuantumGIS, for the geographical and topological structure of the energy supply, with OpenDSS, for the network modelling. The parameters of the model are quantified for a case study (work in progress) of the city of Straubing (Lower Bavaria). The presentation will discuss the methodology as well as first results, with a view to application on a regional scale.
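
    As a loose illustration of the siting step only (not of the QuantumGIS/OpenDSS model itself), candidate PtM sites can be ranked by how much local electrical overproduction a plant of given capacity could absorb over a year; all data and parameters in this Python sketch are hypothetical:

      import numpy as np

      # Hypothetical hourly feed-in surplus (MW) at three candidate grid nodes.
      rng = np.random.default_rng(2)
      surplus = np.clip(rng.normal([1.0, 0.3, 0.6], 0.8, size=(8760, 3)), 0.0, None)

      # Usable surplus energy per site, capped by an assumed plant capacity.
      cap_mw = 0.5
      usable = np.minimum(surplus, cap_mw).sum(axis=0)   # MWh per year
      for node, mwh in sorted(enumerate(usable), key=lambda t: -t[1]):
          print(f"node {node}: {mwh:,.0f} MWh/a usable surplus")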

  15. Learning Graphical Models With Hubs.

    PubMed

    Tan, Kean Ming; London, Palma; Mohan, Karthik; Lee, Su-In; Fazel, Maryam; Witten, Daniela

    2014-10-01

    We consider the problem of learning a high-dimensional graphical model in which a few hub nodes are densely connected to many other nodes. Many authors have studied the use of an ℓ1 penalty to learn a sparse graph in the high-dimensional setting. However, the ℓ1 penalty implicitly assumes that each edge is equally likely and independent of all other edges. We propose a general framework to accommodate more realistic networks with hub nodes, using a convex formulation that involves a row-column overlap norm penalty. We apply this general framework to three widely used probabilistic graphical models: the Gaussian graphical model, the covariance graph model, and the binary Ising model. An alternating direction method of multipliers algorithm is used to solve the corresponding convex optimization problems. On synthetic data, we demonstrate that our proposed framework outperforms competitors that do not explicitly model hub nodes. We illustrate our proposal on a webpage data set and a gene expression data set.
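
    The structure of the penalty can be sketched directly. Under the paper's decomposition Theta = Z + V + V^T, the row-column overlap norm charges an ℓ1 cost on the off-diagonal entries of Z (sparse non-hub edges) and an ℓ1 plus column-wise ℓ2 cost on V (a few dense hub columns). A minimal Python sketch that evaluates this objective for a given decomposition (it does not solve the ADMM problem; tuning-parameter names and values are illustrative):

      import numpy as np

      def hub_penalty(Z, V, lam1=1.0, lam2=1.0, lam3=1.0):
          # Off-diagonal mask: diagonal entries are unpenalized.
          off = ~np.eye(Z.shape[0], dtype=bool)
          V_off = np.where(off, V, 0.0)
          return (lam1 * np.abs(Z[off]).sum()
                  + lam2 * np.abs(V_off).sum()
                  + lam3 * np.linalg.norm(V_off, axis=0).sum())

      p = 5
      Z = np.zeros((p, p))
      V = np.zeros((p, p))
      V[:, 0] = 0.4                  # node 0 acts as a hub
      print(hub_penalty(Z, V))

    The column-wise ℓ2 term lets a whole hub column enter at once, while the ℓ1 terms keep the rest of the graph sparse.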

  16. An Approach to Realizing Process Control for Underground Mining Operations of Mobile Machines.

    PubMed

    Song, Zhen; Schunnesson, Håkan; Rinne, Mikael; Sturgul, John

    2015-01-01

    Excavation and production in underground mines are complicated processes consisting of many different operations, and underground mining is considerably constrained by the geometry and geology of the mine. The various mining operations are normally performed in series at each working face, so the delay of a single operation has a domino effect, pushing back the start of the next operation and the completion time of the entire process. This paper presents a new approach to process control for underground mining operations such as drilling, bolting, and mucking. By improving the existing PERT (Program Evaluation and Review Technique) and CPM (Critical Path Method), the approach estimates the working time and its probability for each operation more efficiently and objectively. If a delay in a critical operation (one on the critical path) inevitably affects the productivity of mined ore, the approach can rapidly assign mucking machines new jobs, using a new mucking algorithm under external constraints, to keep ore output at a maximum. PMID:26062092
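
    The scheduling logic rests on a standard critical-path computation, which the paper extends with working-time probabilities and a mucking re-assignment algorithm. A toy CPM forward pass in Python, with hypothetical operations and durations, shows why one delayed critical operation shifts the whole makespan:

      # Each operation lists its predecessors at the same working face.
      ops = {"drilling": [], "charging": ["drilling"],
             "bolting": ["drilling"], "mucking": ["charging", "bolting"]}
      dur = {"drilling": 3.0, "charging": 1.5, "bolting": 2.0, "mucking": 2.5}

      finish = {}
      def earliest_finish(op):
          # Earliest finish = own duration + latest finish among predecessors.
          if op not in finish:
              finish[op] = dur[op] + max(
                  (earliest_finish(p) for p in ops[op]), default=0.0)
          return finish[op]

      print("makespan (h):", max(earliest_finish(o) for o in ops))
      # Lengthening "drilling" or "bolting" (on the critical path) raises the
      # makespan; lengthening "charging" by up to 0.5 h does not.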

  18. GRAPHICS MANAGER (GFXMGR): An interactive graphics software program for the Advanced Electronics Design (AED) graphics controller, Model 767

    SciTech Connect

    Faculjak, D.A.

    1988-03-01

    Graphics Manager (GFXMGR) is menu-driven, user-friendly software designed to interactively create, edit, and delete graphics displays on the Advanced Electronics Design (AED) graphics controller, Model 767. The software runs on the VAX family of computers and has been used successfully in security applications to create and change site layouts (maps) of specific facilities. GFXMGR greatly benefits graphics development by minimizing display-development time, reducing tedium on the part of the user, and improving system performance. It is anticipated that GFXMGR can be used to create graphics displays for many types of applications. 8 figs., 2 tabs.

  19. Minard's Graphic of Napoleon in Russia.

    ERIC Educational Resources Information Center

    Hardy, Charles

    1992-01-01

    Describes the use of Charles Minard's graphic of Napoleon's 1812 Russian Campaign as an instructional tool in history classes. Maintains that the graphic, created in 1861, can be analyzed by students to determine six historical and geographical factors involved in Napoleon's defeat. Includes a copy of Minard's graphic. (CFR)

  20. Narrative Problems of Graphic Design History.

    ERIC Educational Resources Information Center

    Margolin, Victor

    1994-01-01

    Discusses three major accounts (by Philip Meggs, Enric Satue and Richard Hollis) of graphic design history. Notes that these texts address the history of graphic design, but each raises questions about what material to include, as well as how graphic design is both related to and distinct from other visual practices such as typography, art…