Science.gov

Sample records for mobile graphics processing

  1. Evaluating Mobile Graphics Processing Units (GPUs) for Real-Time Resource Constrained Applications

    SciTech Connect

    Meredith, J; Conger, J; Liu, Y; Johnson, J

    2005-11-11

    Modern graphics processing units (GPUs) can provide tremendous performance boosts for some applications beyond what a single CPU can accomplish, and their performance is growing at a faster rate than that of CPUs. Mobile GPUs available for laptops have the small form factor and low power requirements suitable for use in embedded processing. We evaluated several desktop and mobile GPUs and CPUs on traditional and non-traditional graphics tasks, as well as on the most time-consuming pieces of a full hyperspectral imaging application. Accuracy remained high despite small differences in arithmetic operations such as rounding. Performance improvements are summarized here relative to a desktop Pentium 4 CPU.

  2. Oklahoma's Mobile Computer Graphics Laboratory.

    ERIC Educational Resources Information Center

    McClain, Gerald R.

    This Computer Graphics Laboratory houses an IBM 1130 computer, U.C.C. plotter, printer, card reader, two key punch machines, and seminar-type classroom furniture. A "General Drafting Graphics System" (GDGS) is used, based on repetitive use of basic coordinate and plot generating commands. The system is used by 12 institutions of higher education…

  3. Graphical Language for Data Processing

    NASA Technical Reports Server (NTRS)

    Alphonso, Keith

    2011-01-01

    A graphical language for processing data allows processing elements to be connected with virtual wires that represent data flows between processing modules. The processing of complex data, such as lidar data, requires many different algorithms to be applied. The purpose of this innovation is to automate the processing of such complex data without the need for complex scripting and programming languages. The system consists of a set of user-interface components that allow the user to drag and drop various algorithmic and processing components onto a process graph. By working graphically, the user can completely visualize the process flow and create complex diagrams. This innovation supports the nesting of graphs, such that a graph can be included in another graph as a single processing step. In addition to the user-interface components, the system includes a set of .NET classes that represent the graph internally; these classes provide the internal system representation of the graphical user interface. The system includes a graph execution component that reads the internal representation of the graph and executes it. Execution follows an interpreted model, in which each node is traversed and executed from the original internal representation. In addition, components that allow external code elements, such as algorithms, to be integrated easily make the system highly extensible.
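
    A hedged C++ sketch of the execution model this record describes: an interpreted traversal of a process graph in which a nested graph acts as a single node. The names (Packet, ProcessNode, Graph) are illustrative stand-ins, not the system's actual .NET classes.

      #include <map>
      #include <vector>

      // Illustrative stand-ins for the system's internal graph classes.
      struct Packet { std::vector<double> values; };     // data on a virtual wire

      struct ProcessNode {
          std::vector<ProcessNode*> inputs;              // virtual wires into this node
          virtual Packet execute(const std::vector<Packet>& args) = 0;
          virtual ~ProcessNode() {}
      };

      // A graph is itself a node, so nesting a graph inside another graph
      // is just one more processing step.
      struct Graph : ProcessNode {
          std::vector<ProcessNode*> order;               // nodes in topological order
          Packet execute(const std::vector<Packet>&) override {
              std::map<ProcessNode*, Packet> results;    // each node's output
              Packet last;
              for (ProcessNode* n : order) {             // interpreted traversal
                  std::vector<Packet> args;
                  for (ProcessNode* src : n->inputs)
                      args.push_back(results[src]);
                  last = results[n] = n->execute(args);
              }
              return last;                               // the graph's own output
          }
      };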

  4. Color graphics, interactive processing, and the supercomputer

    NASA Technical Reports Server (NTRS)

    Smith-Taylor, Rudeen

    1987-01-01

    The development of a common graphics environment for the NASA Langley Research Center user community and the integration of a supercomputer into this environment is examined. The initial computer hardware, the software graphics packages, and their configurations are described. The addition of improved computer graphics capability to the supercomputer, and the utilization of the graphic software and hardware are discussed. Consideration is given to the interactive processing system which supports the computer in an interactive debugging, processing, and graphics environment.

  5. Cockpit weather graphics using mobile satellite communications

    NASA Technical Reports Server (NTRS)

    Seth, Shashi

    1993-01-01

    Many new companies are pushing state-of-the-art technology to bring a revolution to the cockpits of General Aviation (GA) aircraft. The vision, according to Dr. Bruce Holmes, the Assistant Director for Aeronautics at the National Aeronautics and Space Administration's (NASA) Langley Research Center, is to provide a flight control system so advanced that the motor and cognitive skills used to drive a car would be very similar to those used to fly an airplane. We at ViGYAN, Inc., are currently developing a system called the Pilot Weather Advisor (PWxA), which would be part of such an advanced-technology flight management system. The PWxA provides graphical depictions of weather information in the cockpit of aircraft in near real time, through the use of broadcast satellite communications. The purpose of this system is to improve the safety and utility of GA aircraft operations. Considerable effort is being expended on research in the design of graphical weather systems, notably the work of Scanlon and Dash. The concept of providing pilots with graphical depictions of weather conditions, overlaid on geographical and navigational maps, is extremely powerful.

  6. Graphic Design in Libraries: A Conceptual Process

    ERIC Educational Resources Information Center

    Ruiz, Miguel

    2014-01-01

    Providing successful library services requires efficient and effective communication with users; therefore, it is important that content creators who develop visual materials understand key components of design and, specifically, develop a holistic graphic design process. Graphic design, as a form of visual communication, is the process of…

  7. Optimization Techniques for 3D Graphics Deployment on Mobile Devices

    NASA Astrophysics Data System (ADS)

    Koskela, Timo; Vatjus-Anttila, Jarkko

    2015-03-01

    3D Internet technologies are becoming essential enablers in many application areas, including games, education, collaboration, navigation and social networking. Using 3D Internet applications on mobile devices provides location-independent access and a richer use context, but also introduces performance issues. Therefore, one of the important challenges facing 3D Internet applications is the deployment of 3D graphics on mobile devices. In this article, we present an extensive survey of optimization techniques for 3D graphics deployment on mobile devices and qualitatively analyze the applicability of each technique from the standpoints of visual quality, performance and energy consumption. The analysis focuses on optimization techniques related to data-driven 3D graphics deployment, because it supports off-line use, multi-user interaction, user-created 3D graphics and creation of arbitrary 3D graphics. The outcome of the analysis facilitates the development and deployment of 3D Internet applications on mobile devices and provides guidelines for future research.

  8. Graphics applications utilizing parallel processing

    NASA Technical Reports Server (NTRS)

    Rice, John R.

    1990-01-01

    The results are presented of research conducted to develop a parallel graphic application algorithm to depict the numerical solution of the 1-D wave equation, the vibrating string. The research was conducted on a Flexible Flex/32 multiprocessor and a Sequent Balance 21000 multiprocessor. The wave equation is implemented using the finite difference method. The synchronization issues that arose from the parallel implementation and the strategies used to alleviate the effects of the synchronization overhead are discussed.

  9. Process and representation in graphical displays

    NASA Technical Reports Server (NTRS)

    Gillan, Douglas J.; Lewis, Robert; Rudisill, Marianne

    1990-01-01

    How people comprehend graphics is examined. Graphical comprehension involves the cognitive representation of information from a graphic display and the processing strategies that people apply to answer questions about graphics. Research on representation has examined both the features present in a graphic display and the cognitive representation of the graphic. The key features include the physical components of a graph, the relation between the figure and its axes, and the information in the graph. Tests of people's memory for graphs indicate that both the physical and informational aspects of a graph are important in its cognitive representation; however, the physical (or perceptual) features overshadow the information to a large degree. Processing strategies also involve a perception-information distinction. To answer simple questions (e.g., determining the value of a variable, comparing several variables, or determining the mean of a set of variables), people switch between two information processing strategies: (1) an arithmetic, look-up strategy, in which they use a graph much like a table, looking up values and performing arithmetic calculations; and (2) a perceptual strategy, in which they use the spatial characteristics of the graph to make comparisons and estimations. The choice of strategy depends on the task and the characteristics of the graph. A theory of graphic comprehension is presented.

  10. Process and representation in graphical displays

    NASA Technical Reports Server (NTRS)

    Gillan, Douglas J.; Lewis, Robert; Rudisill, Marianne

    1993-01-01

    Our initial model of graphic comprehension has focused on statistical graphs. Like other models of human-computer interaction, models of graphical comprehension can be used by human-computer interface designers and developers to create interfaces that present information in an efficient and usable manner. Our investigation of graph comprehension addresses two primary questions: how do people represent the information contained in a data graph, and how do they process information from the graph? The topics of focus for graphic representation concern the features into which people decompose a graph and the representation of the graph in memory. The issue of processing can be further analyzed as two questions: what overall processing strategies do people use, and what specific processing skills are required?

  11. HMI conventions for process control graphics.

    PubMed

    Pikaar, Ruud N

    2012-01-01

    Process operators supervise and control complex processes. To enable the operator to do an adequate job, instrumentation and process control engineers need to address several related topics, such as console design, information design, navigation, and alarm management. In process control upgrade projects, a 1:1 conversion of existing graphics is usually proposed. This paper suggests another approach, which leads efficiently to a reduced number of new, powerful process graphics supported by permanent process overview displays. In addition, a road map for structuring content (process information) and conventions for the presentation of objects, symbols, and so on have been developed. The impact of the human factors engineering approach on process control upgrade projects is illustrated by several cases.

  12. Graphics processing unit-assisted lossless decompression

    DOEpatents

    Loughry, Thomas A.

    2016-04-12

    Systems and methods for decompressing compressed data that has been compressed by way of a lossless compression algorithm are described herein. In a general embodiment, a graphics processing unit (GPU) is programmed to receive compressed data packets and decompress such packets in parallel. The compressed data packets are compressed representations of an image, and the lossless compression algorithm is a Rice compression algorithm.
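
    A rough CUDA sketch of the parallel scheme, with one thread decoding one independently compressed packet of a simplified Rice code (unary quotient, then k remainder bits per sample). The packet layout, offsets, and parameters are assumptions for illustration, not the patented format.

      __device__ unsigned getBit(const unsigned char* buf, unsigned bit) {
          return (buf[bit >> 3] >> (7 - (bit & 7))) & 1u;    // MSB-first bit fetch
      }

      // One thread per packet: each sample decodes as (quotient << k) | remainder.
      __global__ void riceDecode(const unsigned char* comp, const unsigned* byteOffset,
                                 unsigned* out, int samplesPerPacket, int k, int nPackets) {
          int p = blockIdx.x * blockDim.x + threadIdx.x;
          if (p >= nPackets) return;
          unsigned bit = byteOffset[p] * 8u;                 // packets assumed byte-aligned
          for (int s = 0; s < samplesPerPacket; ++s) {
              unsigned q = 0;
              while (getBit(comp, bit++)) ++q;               // unary quotient, 0-terminated
              unsigned r = 0;
              for (int i = 0; i < k; ++i)
                  r = (r << 1) | getBit(comp, bit++);        // k-bit remainder
              out[p * samplesPerPacket + s] = (q << k) | r;
          }
      }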

  13. Grid fill algorithm for vector graphics render on mobile devices

    NASA Astrophysics Data System (ADS)

    Zhang, Jixian; Yue, Kun; Yuan, Guowu; Zhang, Binbin

    2015-12-01

    The performance of vector graphics rendering has always been one of the key elements in mobile devices, and the most important step in improving it is to enhance the efficiency of polygon fill algorithms. In this paper, we propose a new and more efficient polygon fill algorithm, the Grid Fill Algorithm (GFA), based on the scan-line algorithm. First, we elaborate the GFA through solid fill. Second, we describe the techniques for implementing antialiasing and self-intersecting polygon fill with GFA. Then, we discuss the implementation of GFA for gradient fill. Compared to other fill algorithms, GFA generally achieves faster fill speed, which is particularly well suited to the inherent characteristics of mobile devices. Experimental results show that better fill effects can be achieved by using GFA.
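
    The GFA itself is not reproduced in the abstract, but the scan-line baseline it builds on is standard. A compact even-odd scan-line fill in C++ might look like this (the 8-bit framebuffer and names are illustrative):

      #include <algorithm>
      #include <vector>

      struct Pt { float x, y; };

      // Classic scan-line polygon fill, even-odd rule: for each scan line, collect
      // the x-intersections of crossing edges, sort them, and fill between pairs.
      void scanlineFill(const std::vector<Pt>& poly, unsigned char* fb, int w, int h) {
          size_t n = poly.size();
          for (int y = 0; y < h; ++y) {
              float sy = y + 0.5f;                       // sample at pixel centers
              std::vector<float> xs;
              for (size_t i = 0; i < n; ++i) {
                  const Pt& a = poly[i];
                  const Pt& b = poly[(i + 1) % n];
                  if ((a.y <= sy) != (b.y <= sy))        // edge crosses this scan line
                      xs.push_back(a.x + (sy - a.y) * (b.x - a.x) / (b.y - a.y));
              }
              std::sort(xs.begin(), xs.end());
              for (size_t k = 0; k + 1 < xs.size(); k += 2)
                  for (int x = std::max(0, (int)xs[k]); x < std::min(w, (int)xs[k + 1]); ++x)
                      fb[y * w + x] = 255;               // solid fill between pairs
          }
      }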

  14. Graphical analysis of power systems for mobile robotics

    NASA Astrophysics Data System (ADS)

    Raade, Justin William

    The field of mobile robotics places stringent demands on the power system. Energetic autonomy, or the ability to function for a useful operation time independent of any tether, refueling, or recharging, is a driving force in a robot designed for a field application. The focus of this dissertation is the development of two graphical analysis tools, namely Ragone plots and optimal hybridization plots, for the design of human scale mobile robotic power systems. These tools contribute to the intuitive understanding of the performance of a power system and expand the toolbox of the design engineer. Ragone plots are useful for graphically comparing the merits of different power systems for a wide range of operation times. They plot the specific power versus the specific energy of a system on logarithmic scales. The driving equations in the creation of a Ragone plot are derived in terms of several important system parameters. Trends at extreme operation times (both very short and very long) are examined. Ragone plot analysis is applied to the design of several power systems for high-power human exoskeletons. Power systems examined include a monopropellant-powered free piston hydraulic pump, a gasoline-powered internal combustion engine with hydraulic actuators, and a fuel cell with electric actuators. Hybrid power systems consist of two or more distinct energy sources that are used together to meet a single load. They can often outperform non-hybrid power systems in low duty-cycle applications or those with widely varying load profiles and long operation times. Two types of energy sources are defined: engine-like and capacitive. The hybridization rules for different combinations of energy sources are derived using graphical plots of hybrid power system mass versus the primary system power. Optimal hybridization analysis is applied to several power systems for low-power human exoskeletons. Hybrid power systems examined include a fuel cell and a solar panel coupled with

  15. Relativistic hydrodynamics on graphics processing units

    NASA Astrophysics Data System (ADS)

    Sikorski, Jan; Cygert, Sebastian; Porter-Sobieraj, Joanna; Słodkowski, Marcin; Krzyżanowski, Piotr; Ksiażek, Natalia; Duda, Przemysław

    2014-05-01

    Hydrodynamics calculations have been used successfully in studies of the bulk properties of the Quark-Gluon Plasma, particularly of elliptic flow and shear viscosity. However, there are areas (for instance, event-by-event simulations for flow fluctuations and higher-order flow harmonics studies) where further advancement is hampered by the lack of an efficient and precise 3+1D program. This problem can be solved by using Graphics Processing Unit (GPU) computing, which offers an unprecedented increase in computing power compared to standard CPU simulations. In this work, we present an implementation of 3+1D ideal hydrodynamics simulations on the Graphics Processing Unit using the Nvidia CUDA framework. MUSTA-FORCE (MUlti STAge, First ORder CEntral, with a slope limiter and MUSCL reconstruction) and WENO (Weighted Essentially Non-Oscillating) schemes are employed in the simulations, delivering second (MUSTA-FORCE), fifth and seventh (WENO) order of accuracy. A third-order Runge-Kutta scheme was used for integration in the time domain. Our implementation improves the performance by about two orders of magnitude compared to a single-threaded program. Algorithm tests of a 1+1D shock tube and 3+1D simulations with ellipsoidal and Hubble-like expansion are presented.
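
    The MUSTA-FORCE and WENO schemes are too long to reproduce here, but the GPU pattern they share, one thread per cell applying a stencil update each time step, can be sketched with a first-order central (Lax-Friedrichs) step for linear advection. This is a stand-in for the paper's higher-order fluxes, not the actual scheme:

      // One thread per cell: Lax-Friedrichs update for du/dt + a du/dx = 0.
      __global__ void laxFriedrichsStep(const float* u, float* uNew,
                                        int n, float a, float dtOverDx) {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i <= 0 || i >= n - 1) return;              // boundaries handled separately
          uNew[i] = 0.5f * (u[i - 1] + u[i + 1])
                  - 0.5f * a * dtOverDx * (u[i + 1] - u[i - 1]);
      }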

  16. Graphics Processing Unit Assisted Thermographic Compositing

    NASA Technical Reports Server (NTRS)

    Ragasa, Scott; McDougal, Matthew; Russell, Sam

    2013-01-01

    Objective: To develop a software application utilizing general-purpose graphics processing units (GPUs) for the analysis of large sets of thermographic data. Background: Over the past few years, an increasing effort among scientists and engineers to utilize the GPU in a more general-purpose fashion is allowing for supercomputer-level results at individual workstations. As data sets grow, the methods used to work with them must grow at an equal, and often greater, pace. Certain common computations can take advantage of the massively parallel and optimized hardware constructs of the GPU to allow for throughput that was previously reserved for compute clusters. These common computations have high degrees of data parallelism; that is, they are the same computation applied to a large set of data where the result does not depend on other data elements. Signal (image) processing is one area where GPUs are being used to greatly increase the performance of certain algorithms and analysis techniques.

  17. Process control graphics for petrochemical plants

    SciTech Connect

    Lieber, R.E.

    1982-12-01

    Describes many specialized features of a computer control system, schematics/graphics in particular, which are vital to effectively running today's complex refineries and chemical plants. Illustrates such control systems as a full-graphic control house panel of the 1960s, a European refinery control house of the early 1970s, and the Ingolstadt refinery control house. Presents a diagram showing a shape library. Implementation of state-of-the-art control theory, distributed control, dual hi-way digital instrument systems, and many other person-machine interface developments have been prime factors in process control. Further developments in person-machine interfaces are in progress, including voice input/output, touch screens, and other entry devices. Color usage, angle of projection, control house lighting, and pattern recognition are all being studied by vendors, users, and academics. These studies involve psychologists concerned with "quality of life" factors, employee relations personnel concerned with labor contracts or restrictions, as well as operations personnel concerned with just getting the plant to run better.

  18. Graphics Processing Units for HEP trigger systems

    NASA Astrophysics Data System (ADS)

    Ammendola, R.; Bauce, M.; Biagioni, A.; Chiozzi, S.; Cotta Ramusino, A.; Fantechi, R.; Fiorini, M.; Giagu, S.; Gianoli, A.; Lamanna, G.; Lonardo, A.; Messina, A.; Neri, I.; Paolucci, P. S.; Piandani, R.; Pontisso, L.; Rescigno, M.; Simula, F.; Sozzi, M.; Vicini, P.

    2016-07-01

    General-purpose computing on GPUs (Graphics Processing Units) is emerging as a new paradigm in several fields of science, although so far applications have been tailored to the specific strengths of such devices as accelerators in offline computation. With the steady reduction of GPU latencies and the increase in link and memory throughput, the use of such devices for real-time applications in high-energy physics data acquisition and trigger systems is becoming ripe. We discuss the use of online parallel computing on GPUs for a synchronous low-level trigger, focusing on the CERN NA62 experiment trigger system. The use of GPUs in higher-level trigger systems is also briefly considered.

  19. Kernel density estimation using graphical processing unit

    NASA Astrophysics Data System (ADS)

    Sunarko, Su'ud, Zaki

    2015-09-01

    Kernel density estimation for particles distributed over a 2-dimensional space is calculated using a single graphical processing unit (GTX 660Ti GPU) and the CUDA-C language. Parallel calculations are done for particles having a bivariate normal distribution, with the calculations for equally-spaced node points assigned to each scalar processor in the GPU. The numbers of particles, blocks and threads are varied to identify a favorable configuration. Comparisons are obtained by performing the same calculation using 1, 2 and 4 processors on a 3.0 GHz CPU using MPICH 2.0 routines. Speedups attained with the GPU are in the range of 88 to 349 times compared to the multiprocessor CPU. Blocks of 128 threads are found to be the optimum configuration for this case.
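
    The record's scheme maps naturally onto CUDA: one thread per node point, each summing a Gaussian kernel over all particles. A minimal sketch, assuming a common bandwidth h (the names and normalization convention are illustrative):

      // One thread per node point: bivariate normal (Gaussian) kernel density.
      __global__ void kde2d(const float* px, const float* py, int nParticles,
                            const float* nx, const float* ny, float* density,
                            int nNodes, float h) {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i >= nNodes) return;
          float inv2h2 = 1.0f / (2.0f * h * h);
          float sum = 0.0f;
          for (int p = 0; p < nParticles; ++p) {
              float dx = nx[i] - px[p], dy = ny[i] - py[p];
              sum += expf(-(dx * dx + dy * dy) * inv2h2);
          }
          // 2D Gaussian kernel normalization: 1 / (n * 2 * pi * h^2).
          density[i] = sum / (nParticles * 2.0f * 3.14159265f * h * h);
      }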

  1. Identification of Learning Processes by Means of Computer Graphics.

    ERIC Educational Resources Information Center

    Sorensen, Birgitte Holm

    1993-01-01

    Describes a development project for the use of computer graphics and video in connection with an inservice training course for primary education teachers in Denmark. Topics addressed include research approaches to computers; computer graphics in learning processes; activities relating to computer graphics; the role of the teacher; and student…

  2. Semantic Context and Graphic Processing in the Acquisition of Reading.

    ERIC Educational Resources Information Center

    Thompson, G. B.

    1981-01-01

    Two experiments provided tests of predictions about children's use of semantic contextual information in reading, under conditions of minimal experience with graphic processes. Subjects, aged 6 1/2, 8, and 11, orally read passages of continuous text with normal and with low semantic constraints under various graphic conditions, including cursive…

  3. Reading the Graphics: What Is the Relationship between Graphical Reading Processes and Student Comprehension?

    ERIC Educational Resources Information Center

    Norman, Rebecca R.

    2012-01-01

    Research on comprehension of written text and reading processes suggests a greater use of reading processes is associated with higher scores on comprehension measures of those same texts. Although researchers have suggested that the graphics in text convey important meaning, little research exists on the relationship between children's processes…

  4. The New Digital Engineering Design and Graphics Process.

    ERIC Educational Resources Information Center

    Barr, R. E.; Krueger, T. J.; Aanstoos, T. A.

    2002-01-01

    Summarizes the digital engineering design process using software widely available for the educational setting. Points out that newer technology used in the field is not used in engineering graphics education. (DDR)

  5. Graphics processing, video digitizing, and presentation of geologic information

    SciTech Connect

    Sanchez, J.D. )

    1990-02-01

    Computer users have unparalleled opportunities to use powerful desktop computers to generate, manipulate, analyze and use graphic information for better communication. Processing graphic geologic information on a personal computer like the Amiga used for the projects discussed here enables geoscientists to create and manipulate ideas in ways once available only to those with access to large budgets and large mainframe computers. Desktop video applications such as video digitizing and powerful graphic processing application programs add a new dimension to the creation and manipulation of geologic information. Videotape slide shows and animated geology give geoscientists new tools to examine and present information. Telecommunication programs such as ATalk III, which can be used as an all-purpose telecommunications program or can emulate a Tektronix 4014 terminal, allow the user to access Sun and Prime minicomputers and manipulate graphic geologic information stored there. Graphics information displayed on the monitor screen can be captured and saved in the standard Amiga IFF graphic format. These IFF files can be processed using image processing programs such as Butcher. Butcher offers edge mapping, resolution conversion, color separation, false colors, toning, positive-negative reversals, etc. Multitasking and easy expansion that includes IBM-XT and AT co-processing offer unique capabilities for graphic processing and file transfer between Amiga-DOS and MS-DOS. Digital images produced by satellites and airborne scanners can be analyzed on the Amiga using the A-Image processing system developed by the CSIRO Division of Mathematics and Statistics and the School of Mathematics and Computing at Curtin University, Australia.

  6. Graphic Arts: Process Camera, Stripping, and Platemaking. Third Edition.

    ERIC Educational Resources Information Center

    Crummett, Dan

    This document contains teacher and student materials for a course in graphic arts concentrating on camera work, stripping, and plate making in the printing process. Eight units of instruction cover the following topics: (1) the process camera and darkroom equipment; (2) line photography; (3) halftone photography; (4) other darkroom techniques; (5)…

  7. Graphic Arts: Book Two. Process Camera, Stripping, and Platemaking.

    ERIC Educational Resources Information Center

    Farajollahi, Karim; And Others

    The second of a three-volume set of instructional materials for a course in graphic arts, this manual consists of 10 instructional units dealing with the process camera, stripping, and platemaking. Covered in the individual units are the process camera and darkroom photography, line photography, half-tone photography, other darkroom techniques,…

  8. Graphic Arts: The Press and Finishing Processes. Third Edition.

    ERIC Educational Resources Information Center

    Crummett, Dan

    This document contains teacher and student materials for a course in graphic arts concentrating on printing presses and the finishing process for publications. Seven units of instruction cover the following topics: (1) offset press systems; (2) offset inks and dampening chemistry; (3) offset press operating procedures; (4) preventive maintenance…

  9. Graphic Arts: Book Three. The Press and Related Processes.

    ERIC Educational Resources Information Center

    Farajollahi, Karim; And Others

    The third of a three-volume set of instructional materials for a graphic arts course, this manual consists of nine instructional units dealing with presses and related processes. Covered in the units are basic press fundamentals, offset press systems, offset press operating procedures, offset inks and dampening chemistry, preventive maintenance…

  10. An Interactive Graphics Program for Investigating Digital Signal Processing.

    ERIC Educational Resources Information Center

    Miller, Billy K.; And Others

    1983-01-01

    Describes development of an interactive computer graphics program for use in teaching digital signal processing. The program allows students to interactively configure digital systems on a monitor display and observe their system's performance by means of digital plots on the system's outputs. A sample program run is included. (JN)

  11. Digital-Computer Processing of Graphical Data. Final Report.

    ERIC Educational Resources Information Center

    Freeman, Herbert

    The final report of a two-year study concerned with the digital-computer processing of graphical data. Five separate investigations carried out under this study are described briefly, and a detailed bibliography, complete with abstracts, is included in which are listed the technical papers and reports published during the period of this program.…

  12. The Use of Computer Graphics in the Design Process.

    ERIC Educational Resources Information Center

    Palazzi, Maria

    This master's thesis examines applications of computer technology to the field of industrial design and ways in which technology can transform the traditional process. Following a statement of the problem, the history and applications of the fields of computer graphics and industrial design are reviewed. The traditional industrial design process…

  13. Beam line error analysis, position correction, and graphic processing

    NASA Astrophysics Data System (ADS)

    Wang, Fuhua; Mao, Naifeng

    1993-12-01

    A beam transport line error analysis and beam position correction code called "EAC" has been developed, together with a graphics and data post-processing package for TRANSPORT. Based on the linear optics design using TRANSPORT or other general optics codes, EAC independently analyzes the effects of magnet misalignments, systematic and statistical errors of magnetic fields, and initial beam positions on the central trajectory and on transverse beam emittance dilution. EAC also provides an efficient way to develop beam line trajectory correcting schemes. The post-processing package generates various types of graphics, such as the beam line geometrical layout, plots of the Twiss parameters, beam envelopes, etc. It also generates an EAC input file, thus connecting EAC with general optics codes. EAC and the post-processing package are small codes that are easy to access and use. They have become useful tools for the design of transport lines at SSCL.

  14. Efficient magnetohydrodynamic simulations on graphics processing units with CUDA

    NASA Astrophysics Data System (ADS)

    Wong, Hon-Cheng; Wong, Un-Hong; Feng, Xueshang; Tang, Zesheng

    2011-10-01

    Magnetohydrodynamic (MHD) simulations based on the ideal MHD equations have become a powerful tool for modeling phenomena in a wide range of applications including laboratory, astrophysical, and space plasmas. In general, high-resolution methods for solving the ideal MHD equations are computationally expensive, and Beowulf clusters or even supercomputers are often used to run the codes that implement these methods. With the advent of the Compute Unified Device Architecture (CUDA), modern graphics processing units (GPUs) provide an alternative approach to parallel computing for scientific simulations. In this paper we present, to the best of the authors' knowledge, the first implementation of MHD simulations entirely on GPUs with CUDA, named GPU-MHD, to accelerate the simulation process. GPU-MHD supports both single and double precision computations. A series of numerical tests have been performed to validate the correctness of our code. An accuracy evaluation comparing single and double precision computation results is also given. Performance measurements of both single and double precision are conducted on both the NVIDIA GeForce GTX 295 (GT200 architecture) and GTX 480 (Fermi architecture) graphics cards. These measurements show that our GPU-based implementation achieves between one and two orders of magnitude of improvement, depending on the graphics card used, the problem size, and the precision, when compared to the original serial CPU MHD implementation. In addition, we extend GPU-MHD to support visualization of the simulation results, so that the whole MHD simulation and visualization process can be performed entirely on GPUs.

  15. Graphics processing unit based computation for NDE applications

    NASA Astrophysics Data System (ADS)

    Nahas, C. A.; Rajagopal, Prabhu; Balasubramaniam, Krishnan; Krishnamurthy, C. V.

    2012-05-01

    Advances in parallel processing in recent years are helping to improve the cost of numerical simulation. Breakthroughs in Graphical Processing Unit (GPU) based computation now offer the prospect of further drastic improvements. The introduction of 'compute unified device architecture' (CUDA) by NVIDIA (the global technology company based in Santa Clara, California, USA) has made programming GPUs for general-purpose computing accessible to the average programmer. Here we use CUDA to develop parallel finite difference schemes as applicable to two problems of interest to the NDE community, namely heat diffusion and elastic wave propagation. The implementations are two-dimensional. Performance improvement of the GPU implementation over a serial CPU implementation is then discussed.
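
    For the heat-diffusion half of the problem, the finite-difference kernel is short enough to sketch. A minimal version, assuming an explicit (FTCS) scheme on a uniform grid with dx == dy; the paper's actual schemes and parameters are not reproduced here:

      // Explicit FTCS step for 2D heat diffusion, one thread per interior point.
      // r = alpha * dt / dx^2 must satisfy the usual stability limit (r <= 0.25).
      __global__ void heatStep(const float* u, float* uNew, int nx, int ny, float r) {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          int j = blockIdx.y * blockDim.y + threadIdx.y;
          if (i <= 0 || i >= nx - 1 || j <= 0 || j >= ny - 1) return;
          int c = j * nx + i;
          uNew[c] = u[c] + r * (u[c - 1] + u[c + 1] + u[c - nx] + u[c + nx] - 4.0f * u[c]);
      }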

  16. Parallelizing the Cellular Potts Model on graphics processing units

    NASA Astrophysics Data System (ADS)

    Tapia, José Juan; D'Souza, Roshan M.

    2011-04-01

    The Cellular Potts Model (CPM) is a lattice-based modeling technique used for simulating cellular structures in computational biology. The computational complexity of the model means that current serial implementations restrict the size of simulation to a level well below biological relevance. Parallelization on computing clusters enables scaling the size of the simulation but only marginally addresses computational speed, due to the limited memory bandwidth between nodes. In this paper we present new data-parallel algorithms and data structures for simulating the Cellular Potts Model on graphics processing units. Our implementations handle most terms in the Hamiltonian, including the cell-cell adhesion constraint, cell volume constraint, cell surface area constraint, and cell haptotaxis. We use fine-level checkerboards with lock mechanisms based on atomic operations to enable consistent updates while maintaining a high level of parallelism. A new data-parallel memory allocation algorithm has been developed to handle cell division. Tests show that our implementation enables simulations of more than 10⁶ cells with lattice sizes of up to 256³ on a single graphics card. Benchmarks show that our implementation runs about 80 times faster than serial implementations, and about 5 times faster than previous parallel implementations on computing clusters consisting of 25 nodes. The wide availability and economy of graphics cards mean that our techniques will enable simulation of realistically sized models at a fraction of the time and cost of previous implementations and are expected to greatly broaden the scope of CPM applications.
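
    A heavily simplified sketch of the checkerboard idea: each pass updates only lattice sites of one parity, so no two concurrently updated sites are neighbors. Only an adhesion-like mismatch term is scored, a small hash stands in for a real RNG, and the volume/surface constraints, locks, and cell bookkeeping of the actual implementation are omitted:

      // xorshift-style hash in [0, 1); a stand-in for a proper RNG library.
      __device__ float hashRand(unsigned s) {
          s ^= s << 13; s ^= s >> 17; s ^= s << 5;
          return (s & 0xFFFFFFu) / 16777216.0f;
      }

      // Metropolis update restricted to one checkerboard sub-lattice.
      __global__ void checkerboardStep(int* label, int nx, int ny,
                                       int parity, unsigned seed, float beta) {
          int x = blockIdx.x * blockDim.x + threadIdx.x;
          int y = blockIdx.y * blockDim.y + threadIdx.y;
          if (x <= 0 || x >= nx - 1 || y <= 0 || y >= ny - 1) return;
          if (((x + y) & 1) != parity) return;           // skip the other parity
          int c = y * nx + x;
          int nbr[4] = { c - 1, c + 1, c - nx, c + nx };
          int pick = (int)(hashRand(seed ^ (unsigned)c) * 4.0f) & 3;
          int trial = label[nbr[pick]];                  // propose copying a neighbor
          int dE = 0;                                    // adhesion mismatch change
          for (int k = 0; k < 4; ++k)
              dE += (trial != label[nbr[k]]) - (label[c] != label[nbr[k]]);
          if (dE <= 0 || hashRand(seed ^ ((unsigned)c * 2654435761u)) < expf(-beta * dE))
              label[c] = trial;
      }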

  17. A graphical technique for wastewater minimisation in batch processes.

    PubMed

    Majozi, Thokozani; Brouckaert, C J; Buckley, C A

    2006-03-01

    Presented in this paper is a graphical technique for freshwater and wastewater minimisation in completely batch operations. Water minimisation is achieved through the exploitation of inter- and intra-process water reuse and recycle opportunities. In the context of this paper, a completely batch operation is one in which water reuse or recycle can only be effected either at the start or the end of the process; during the course of the operation, water reuse and recycle opportunities are completely nullified. The intrinsically two-dimensionally constrained nature of batch processes is taken into consideration. In the first instance, the time dimension is taken as the primary constraint and concentration as a secondary constraint. Subsequently, the priority of constraints is reversed so as to demonstrate the effect of the targeting procedure on the final design. Attention is brought to the fact that first and cyclic-state targeting are essential in completely batch operations. Moreover, the exploration and use of inherent storage in batch processes is demonstrated using a real-life case study. Like most graphical techniques, the presented methodology is limited to single contaminants.

  18. Off-line graphics processing: a case study

    SciTech Connect

    Harris, D.D.

    1983-09-01

    The Drafting Systems organization at Bendix, Kansas City Division, is responsible for the creation of computer-readable media used for producing photoplots, phototools, and production traveler illustrations. From 1977 when the organization acquired its first Applicon system, until 1982 when the off-line graphics processing system was added, the production of Gerber photoplotter tapes and APPLE files presented an ever increasing load on the Applicon systems. This paper describes how the organization is now using a VAX to offload this work from the Applicon systems and presents the techniques used to automate the flow of data from the Applicon sources to the final users.

  19. Line-by-line spectroscopic simulations on graphics processing units

    NASA Astrophysics Data System (ADS)

    Collange, Sylvain; Daumas, Marc; Defour, David

    2008-01-01

    We report here on software that performs line-by-line spectroscopic simulations on gases. Elaborate models (such as narrow band and correlated-K) are accurate and efficient for bands where various components are not simultaneously and significantly active. Line-by-line is probably the most accurate model in the infrared for blends of gases that contain high proportions of H₂O and CO₂, as was the case for our prototype simulation. Our implementation on graphics processing units sustains a speedup close to 330 on computation-intensive tasks and 12 on memory-intensive tasks compared to implementations on one core of high-end processors. This speedup is due to data parallelism, efficient memory access for specific patterns, and some dedicated hardware operators only available in graphics processing units. It is obtained leaving most processor resources available, and it would scale linearly with the number of graphics processing units in parallel machines. Line-by-line simulation coupled with simulation of fluid dynamics was long believed to be economically intractable, but our work shows that it could be done with affordable additional resources compared to what is necessary to perform simulations of fluid dynamics alone.

    Program summary

    Program title: GPU4RE

    Catalogue identifier: ADZY_v1_0

    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZY_v1_0.html

    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland

    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html

    No. of lines in distributed program, including test data, etc.: 62 776

    No. of bytes in distributed program, including test data, etc.: 1 513 247

    Distribution format: tar.gz

    Programming language: C++

    Computer: x86 PC

    Operating system: Linux, Microsoft Windows. Compilation requires either gcc/g++ under Linux or Visual C++ 2003/2005 and Cygwin under Windows. It has been tested using gcc 4.1.2 under Ubuntu Linux 7.04 and using Visual C

  20. Optimized Laplacian image sharpening algorithm based on graphic processing unit

    NASA Astrophysics Data System (ADS)

    Ma, Tinghuai; Li, Lu; Ji, Sai; Wang, Xin; Tian, Yuan; Al-Dhelaan, Abdullah; Al-Rodhaan, Mznah

    2014-12-01

    In classical Laplacian image sharpening, all pixels are processed one by one, which leads to a large amount of computation. Traditional Laplacian sharpening on a CPU is considerably time-consuming, especially for large pictures. In this paper, we propose a parallel implementation of Laplacian sharpening based on the Compute Unified Device Architecture (CUDA), a computing platform for Graphic Processing Units (GPUs), and analyze the impact of picture size on performance as well as the relationship between data transfer time and parallel computing time. Further, exploiting the different features of different memories, we develop an improved scheme that uses shared memory in the GPU instead of global memory, further increasing efficiency. Experimental results prove that the two novel algorithms outperform the traditional sequential method based on OpenCV in terms of computing speed.
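
    The shared-memory idea is the classic tiled stencil: each block stages its pixels plus a one-pixel halo in shared memory, then applies the Laplacian from the fast tile instead of global memory. A minimal sketch (tile size and names are illustrative, not the paper's exact kernel):

      #define TILE 16

      // Laplacian sharpening: out = in + Laplacian(in) = 5*c - n - s - e - w.
      __global__ void laplacianSharpen(const unsigned char* in, unsigned char* out,
                                       int w, int h) {
          __shared__ unsigned char tile[TILE + 2][TILE + 2];
          int x = blockIdx.x * TILE + threadIdx.x;
          int y = blockIdx.y * TILE + threadIdx.y;
          // Cooperatively load the tile plus a 1-pixel halo, clamped at borders.
          for (int dy = threadIdx.y; dy < TILE + 2; dy += TILE)
              for (int dx = threadIdx.x; dx < TILE + 2; dx += TILE) {
                  int gx = min(max((int)(blockIdx.x * TILE) + dx - 1, 0), w - 1);
                  int gy = min(max((int)(blockIdx.y * TILE) + dy - 1, 0), h - 1);
                  tile[dy][dx] = in[gy * w + gx];
              }
          __syncthreads();
          if (x >= w || y >= h) return;
          int lx = threadIdx.x + 1, ly = threadIdx.y + 1;
          int v = 5 * tile[ly][lx] - tile[ly][lx - 1] - tile[ly][lx + 1]
                - tile[ly - 1][lx] - tile[ly + 1][lx];
          out[y * w + x] = (unsigned char)min(max(v, 0), 255);
      }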

  1. Exploiting graphics processing units for computational biology and bioinformatics.

    PubMed

    Payne, Joshua L; Sinnott-Armstrong, Nicholas A; Moore, Jason H

    2010-09-01

    Advances in the video gaming industry have led to the production of low-cost, high-performance graphics processing units (GPUs) that possess more memory bandwidth and computational capability than central processing units (CPUs), the standard workhorses of scientific computing. With the recent release of general-purpose GPUs and NVIDIA's GPU programming language, CUDA, graphics engines are being adopted widely in scientific computing applications, particularly in the fields of computational biology and bioinformatics. The goal of this article is to concisely present an introduction to GPU hardware and programming, aimed at the computational biologist or bioinformaticist. To this end, we discuss the primary differences between GPU and CPU architecture, introduce the basics of the CUDA programming language, and discuss important CUDA programming practices, such as the proper use of coalesced reads, data types, and memory hierarchies. We highlight each of these topics in the context of computing the all-pairs distance between instances in a dataset, a common procedure in numerous disciplines of scientific computing. We conclude with a runtime analysis of the GPU and CPU implementations of the all-pairs distance calculation. We show our final GPU implementation to outperform the CPU implementation by a factor of 1700.
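
    The all-pairs distance example maps onto one thread per pair. A minimal sketch of the coalescing point the article makes: store the data feature-major, so threads in a warp (same i, consecutive j) read consecutive addresses:

      // Euclidean distance between every pair of instances.
      // data is feature-major: data[f * n + i] is feature f of instance i.
      __global__ void allPairsDistance(const float* data, float* dist,
                                       int n, int nFeatures) {
          int i = blockIdx.y;                              // one row of pairs per block row
          int j = blockIdx.x * blockDim.x + threadIdx.x;
          if (j >= n) return;
          float sum = 0.0f;
          for (int f = 0; f < nFeatures; ++f) {
              float d = data[f * n + i] - data[f * n + j]; // coalesced reads over j
              sum += d * d;
          }
          dist[i * n + j] = sqrtf(sum);
      }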

  2. Accelerated space object tracking via graphic processing unit

    NASA Astrophysics Data System (ADS)

    Jia, Bin; Liu, Kui; Pham, Khanh; Blasch, Erik; Chen, Genshe

    2016-05-01

    In this paper, a hybrid Monte Carlo Gauss mixture Kalman filter is proposed for the continuous orbit estimation problem. Specifically, the graphic processing unit (GPU) aided Monte Carlo method is used to propagate the uncertainty of the estimation when the observation is not available, and the Gauss mixture Kalman filter is used to update the estimation when the observation sequences are available. A typical space object tracking problem using ground radar is used to test the performance of the proposed algorithm, which is compared with the popular cubature Kalman filter (CKF). The simulation results show that the ordinary CKF diverges in 5 observation periods. In contrast, the proposed hybrid Monte Carlo Gauss mixture Kalman filter achieves satisfactory performance in all observation periods. In addition, by using the GPU, the computation time is more than 100 times shorter than with a conventional central processing unit (CPU).

  3. Graphics processing unit accelerated computation of digital holograms.

    PubMed

    Kang, Hoonjong; Yaraş, Fahri; Onural, Levent

    2009-12-01

    An approximation for fast digital hologram generation is implemented on a central processing unit (CPU), a graphics processing unit (GPU), and a multi-GPU computational platform. The computational performance of the method on each platform is measured and compared. The computational speed on the GPU platform is much faster than on a CPU, and the algorithm could be further accelerated on a multi-GPU platform. In addition, the accuracy of the algorithm for single- and double-precision arithmetic is evaluated. The quality of the reconstruction from the algorithm using single-precision arithmetic is comparable with the quality from the double-precision arithmetic, and thus the implementation using single-precision arithmetic on a multi-GPU platform can be used for holographic video displays.

  4. Smoldyn on graphics processing units: massively parallel Brownian dynamics simulations.

    PubMed

    Dematté, Lorenzo

    2012-01-01

    Space is a very important aspect in the simulation of biochemical systems; recently, the need for simulation algorithms able to cope with space has become more and more compelling. Complex and detailed models of biochemical systems need to deal with the movement of single molecules and particles, taking into consideration localized fluctuations, transportation phenomena, and diffusion. A common drawback of spatial models lies in their complexity: models can become very large, and their simulation can be time-consuming, especially if we want to capture the system's behavior in a reliable way using stochastic methods in conjunction with a high spatial resolution. In order to deliver on the promise made by systems biology of understanding a system as a whole, we need to scale up the size of the models we are able to simulate, moving from sequential to parallel simulation algorithms. In this paper, we analyze Smoldyn, a widely used algorithm for stochastic simulation of chemical reactions with spatial resolution and single-molecule detail, and we propose an alternative, innovative implementation that exploits the parallelism of Graphics Processing Units (GPUs). The implementation executes the most computationally demanding steps (computation of diffusion, unimolecular, and bimolecular reactions, as well as the most common cases of molecule-surface interaction) on the GPU, computing them in parallel for each molecule of the system. The implementation offers good speed-ups and real-time, high-quality graphics output.
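
    Of the stages the paper moves to the GPU, the diffusion step is the simplest to sketch: one thread per molecule, each adding a Gaussian displacement with per-axis standard deviation sigma = sqrt(2 * D * dt). Reactions and surface interactions are omitted, and all names are illustrative:

      #include <curand_kernel.h>

      __global__ void initRng(curandState* state, unsigned long long seed, int n) {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i < n) curand_init(seed, i, 0, &state[i]);
      }

      // Brownian (diffusion) step, one thread per molecule.
      __global__ void diffuseStep(float3* pos, curandState* state, float sigma, int n) {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i >= n) return;
          curandState local = state[i];
          pos[i].x += sigma * curand_normal(&local);
          pos[i].y += sigma * curand_normal(&local);
          pos[i].z += sigma * curand_normal(&local);
          state[i] = local;                    // persist RNG state for the next step
      }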

  5. Graphics Processing Units and High-Dimensional Optimization

    PubMed Central

    Zhou, Hua; Lange, Kenneth; Suchard, Marc A.

    2011-01-01

    This paper discusses the potential of graphics processing units (GPUs) in high-dimensional optimization problems. A single GPU card with hundreds of arithmetic cores can be inserted in a personal computer and dramatically accelerates many statistical algorithms. To exploit these devices fully, optimization algorithms should reduce to multiple parallel tasks, each accessing a limited amount of data. These criteria favor EM and MM algorithms that separate parameters and data. To a lesser extent, block relaxation and coordinate descent and ascent also qualify. We demonstrate the utility of GPUs in nonnegative matrix factorization, PET image reconstruction, and multidimensional scaling. Speedups of 100-fold can easily be attained. Over the next decade, GPUs will fundamentally alter the landscape of computational statistics. It is time for more statisticians to get on board. PMID:21847315

  6. Implementing wide baseline matching algorithms on a graphics processing unit.

    SciTech Connect

    Rothganger, Fredrick H.; Larson, Kurt W.; Gonzales, Antonio Ignacio; Myers, Daniel S.

    2007-10-01

    Wide baseline matching is the state of the art for object recognition and image registration problems in computer vision. Though effective, the computational expense of these algorithms limits their application to many real-world problems. The performance of wide baseline matching algorithms may be improved by using a graphics processing unit as a fast multithreaded co-processor. In this paper, we present an implementation of the difference-of-Gaussians feature extractor, based on the CUDA system of GPU programming developed by NVIDIA and implemented on their hardware. For a 2000×2000 pixel image, the GPU-based method executes nearly thirteen times faster than a comparable CPU-based method, with no significant loss of accuracy.

  7. Graphics processing units accelerated semiclassical initial value representation molecular dynamics

    SciTech Connect

    Tamascelli, Dario; Dambrosio, Francesco Saverio; Conte, Riccardo; Ceotto, Michele

    2014-05-07

    This paper presents a Graphics Processing Units (GPUs) implementation of the Semiclassical Initial Value Representation (SC-IVR) propagator for vibrational molecular spectroscopy calculations. The time-averaging formulation of the SC-IVR for power spectrum calculations is employed. Details about the GPU implementation of the semiclassical code are provided. Four molecules with an increasing number of atoms are considered and the GPU-calculated vibrational frequencies perfectly match the benchmark values. The computational time scaling of two GPUs (NVIDIA Tesla C2075 and Kepler K20), respectively, versus two CPUs (Intel Core i5 and Intel Xeon E5-2687W) and the critical issues related to the GPU implementation are discussed. The resulting reduction in computational time and power consumption is significant and semiclassical GPU calculations are shown to be environment friendly.

  8. Graphic Arts: Process Camera, Stripping, and Platemaking. Teacher Guide.

    ERIC Educational Resources Information Center

    Feasley, Sue C., Ed.

    This curriculum guide is the second in a three-volume series of instructional materials for competency-based graphic arts instruction. Each publication is designed to include the technical content and tasks necessary for a student to be employed in an entry-level graphic arts occupation. Introductory materials include an instructional/task…

  9. Graphic Arts: The Press and Finishing Processes. Teacher Guide.

    ERIC Educational Resources Information Center

    Feasley, Sue C., Ed.

    This curriculum guide is the third in a three-volume series of instructional materials for competency-based graphic arts instruction. Each publication is designed to include the technical content and tasks necessary for a student to be employed in an entry-level graphic arts occupation. Introductory materials include an instructional/task analysis…

  10. Role of Graphics Tools in the Learning Design Process

    ERIC Educational Resources Information Center

    Laisney, Patrice; Brandt-Pomares, Pascale

    2015-01-01

    This paper discusses the design activities of students in secondary school in France. Graphics tools are now part of the capacity of design professionals. It is therefore apt to reflect on their integration into the technological education. Has the use of intermediate graphical tools changed students' performance, and if so in what direction,…

  11. Accelerating sparse linear algebra using graphics processing units

    NASA Astrophysics Data System (ADS)

    Spagnoli, Kyle E.; Humphrey, John R.; Price, Daniel K.; Kelmelis, Eric J.

    2011-06-01

    The modern graphics processing unit (GPU) found in many standard personal computers is a highly parallel math processor capable of over 1 TFLOPS of peak computational throughput at a cost similar to a high-end CPU, with an excellent FLOPS-to-watt ratio. High-level sparse linear algebra operations are computationally intense, often requiring large amounts of parallel operations, and would seem a natural fit for the processing power of the GPU. Our work is a GPU-accelerated implementation of sparse linear algebra routines. We present results from both direct and iterative sparse system solvers. The GPU execution model featured by NVIDIA GPUs based on CUDA demands very strong parallelism, requiring between hundreds and thousands of simultaneous operations to achieve high performance. Some constructs from linear algebra map extremely well to the GPU and others map poorly. CPUs, on the other hand, do well at smaller-order parallelism and perform acceptably during low-parallelism code segments. Our work addresses this via a hybrid processing model, in which the CPU and GPU work simultaneously to produce results. In many cases, this is accomplished by allowing each platform to do the work it performs most naturally. For example, the CPU is responsible for the graph theory portion of the direct solvers while the GPU simultaneously performs the low-level linear algebra routines.
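
    At the bottom of such a solver sits a sparse matrix-vector product; a simple CSR kernel with one thread per row shows the shape of the work handed to the GPU (production kernels are considerably more tuned, e.g. warp-per-row variants):

      // y = A * x for A in CSR format (rowPtr, colIdx, val), one thread per row.
      __global__ void csrSpMV(const int* rowPtr, const int* colIdx, const float* val,
                              const float* x, float* y, int nRows) {
          int row = blockIdx.x * blockDim.x + threadIdx.x;
          if (row >= nRows) return;
          float sum = 0.0f;
          for (int k = rowPtr[row]; k < rowPtr[row + 1]; ++k)
              sum += val[k] * x[colIdx[k]];
          y[row] = sum;
      }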

  12. Matrix decomposition graphics processing unit solver for Poisson image editing

    NASA Astrophysics Data System (ADS)

    Lei, Zhao; Wei, Li

    2012-10-01

    In recent years, gradient-domain methods have been widely discussed in the image processing field, including seamless cloning and image stitching. These algorithms are commonly carried out by solving a large sparse linear system: the Poisson equation. However, solving the Poisson equation is a computation- and memory-intensive task, which makes it unsuitable for real-time image editing. A new matrix decomposition graphics processing unit (GPU) solver (MDGS) is proposed to settle the problem. A matrix decomposition method is used to distribute the work among GPU threads, so that MDGS takes full advantage of the computing power of current GPUs. Additionally, MDGS is a hybrid solver (combining both direct and iterative techniques) with a two-level architecture. These properties enable MDGS to generate solutions identical to those of the common Poisson methods and to achieve a high convergence rate in most cases. This approach is advantageous in terms of parallelizability, enabling real-time image processing, with low memory consumption and extensive applications.

  13. Parallel Latent Semantic Analysis using a Graphics Processing Unit

    SciTech Connect

    Cui, Xiaohui; Potok, Thomas E; Cavanagh, Joseph M

    2009-01-01

    Latent Semantic Analysis (LSA) can be used to reduce the dimensions of large term-document datasets using Singular Value Decomposition. However, with the ever-expanding size of data sets, current implementations are not fast enough to quickly and easily compute the results on a standard PC. The Graphics Processing Unit (GPU) can solve some highly parallel problems much faster than the traditional sequential processor (CPU). Thus, a deployable system using a GPU to speed up large-scale LSA processes would be a much more effective choice (in terms of cost/performance ratio) than using a computer cluster. In this paper, we present a parallel LSA implementation on the GPU, using NVIDIA's Compute Unified Device Architecture (CUDA) and Compute Unified Basic Linear Algebra Subprograms (CUBLAS). The performance of this implementation is compared to a traditional LSA implementation on the CPU using an optimized Basic Linear Algebra Subprograms library. For large matrices that have dimensions divisible by 16, the GPU algorithm runs five to six times faster than the CPU version.
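
    The heavy lifting in a CUBLAS-based LSA pipeline reduces largely to dense matrix products. A minimal host-side sketch of one GEMM call (column-major data already resident on the device; the paper's full pipeline, including the SVD, is not shown):

      #include <cublas_v2.h>

      // C = A * B on the GPU: A is m x k, B is k x n, all column-major device arrays.
      void gemmOnGpu(const float* dA, const float* dB, float* dC, int m, int n, int k) {
          cublasHandle_t handle;
          cublasCreate(&handle);
          const float alpha = 1.0f, beta = 0.0f;
          cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                      m, n, k, &alpha, dA, m, dB, k, &beta, dC, m);
          cublasDestroy(handle);
      }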

  16. Accelerating radio astronomy cross-correlation with graphics processing units

    NASA Astrophysics Data System (ADS)

    Clark, M. A.; LaPlante, P. C.; Greenhill, L. J.

    2013-05-01

    We present a highly parallel implementation of the cross-correlation of time-series data using graphics processing units (GPUs), which is scalable to hundreds of independent inputs and suitable for the processing of signals from 'large-N' arrays of many radio antennas. The computational part of the algorithm, the X-engine, is implemented efficiently on NVIDIA's Fermi architecture, sustaining up to 79% of the peak single-precision floating-point throughput. We compare performance obtained for hardware- and software-managed caches, observing significantly better performance for the latter. The high performance reported involves use of a multi-level data tiling strategy in memory and use of a pipelined algorithm with simultaneous computation and transfer of data from host to device memory. The speed of code development, flexibility, and low cost of the GPU implementations compared with application-specific integrated circuit (ASIC) and field-programmable gate array (FPGA) implementations have the potential to greatly shorten the cycle of correlator development and deployment, for cases where some power-consumption penalty can be tolerated.
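
    The X-engine itself is a conjugate multiply-accumulate over all antenna pairs per frequency channel. A minimal single-channel sketch (assumed layout: one block per antenna i, one thread per antenna j, valid for modest antenna counts; this is not the authors' tiled Fermi kernel):

        #include <cuComplex.h>

        // Accumulate V_i * conj(V_j) over nTime samples for every baseline
        // (i, j) with j <= i; v holds nTime x nAnt complex voltages.
        __global__ void xengine(int nAnt, int nTime, const cuFloatComplex *v,
                                cuFloatComplex *acc)
        {
            int i = blockIdx.x, j = threadIdx.x;
            if (i >= nAnt || j > i) return;
            cuFloatComplex s = make_cuFloatComplex(0.0f, 0.0f);
            for (int t = 0; t < nTime; ++t)
                s = cuCaddf(s, cuCmulf(v[t * nAnt + i],
                                       cuConjf(v[t * nAnt + j])));
            acc[i * nAnt + j] = s;
        }

    The multi-level tiling and software-managed caching the record describes exist precisely to feed this inner product from fast memory rather than from DRAM.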

  17. Accelerating sino-atrium computer simulations with graphic processing units.

    PubMed

    Zhang, Hong; Xiao, Zheng; Lin, Shien-fong

    2015-01-01

    Sino-atrial node cells (SANCs) play a significant role in rhythmic firing. To investigate their role in arrhythmia and interactions with the atrium, computer simulations based on cellular dynamic mathematical models are generally used. However, the large-scale computation usually makes research difficult, given the limited computational power of Central Processing Units (CPUs). In this paper, an accelerating approach with Graphic Processing Units (GPUs) is proposed in a simulation consisting of the SAN tissue and the adjoining atrium. By using the operator splitting method, the computational task was made parallel. Three parallelization strategies were then put forward. The strategy with the shortest running time was further optimized by considering block size, data transfer, and partitioning. The results showed that for a simulation with 500 SANCs and 30 atrial cells, the execution time taken by the non-optimized program decreased by 62% with respect to a serial program running on a CPU. The execution time decreased by 80% after the program was optimized. The larger the tissue was, the more significant the acceleration became. The results demonstrated the effectiveness of the proposed GPU-accelerating methods and their promising applications in more complicated biological simulations. PMID:26406070
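
    Operator splitting makes the ionic (ODE) part embarrassingly parallel: each cell's state advances independently, one thread per cell, with the diffusion coupling applied in a separate step. A hedged sketch with placeholder FitzHugh-Nagumo-style kinetics (the paper's SANC and atrial models are far more detailed):

        // Explicit Euler step of a toy two-variable membrane model;
        // V = membrane potential, w = recovery variable, one thread per cell.
        __global__ void ode_step(int nCells, float dt, float *V, float *w)
        {
            int c = blockIdx.x * blockDim.x + threadIdx.x;
            if (c < nCells) {
                float v = V[c];
                float dv = v - v * v * v / 3.0f - w[c];
                float dw = 0.08f * (v + 0.7f - 0.8f * w[c]);
                V[c] = v + dt * dv;
                w[c] = w[c] + dt * dw;
            }
        }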

  19. Handling geophysical flows: Numerical modelling using Graphical Processing Units

    NASA Astrophysics Data System (ADS)

    Garcia-Navarro, Pilar; Lacasta, Asier; Juez, Carmelo; Morales-Hernandez, Mario

    2016-04-01

    Computational tools may help engineers in the assessment of sediment transport during decision-making processes. The main requirements are that the numerical results be accurate and that the simulation models be fast. The present work is based on the 2D shallow water equations in combination with the 2D Exner equation [1]. The accuracy of the resulting numerical model was already discussed in previous work. Regarding the speed of the computation, the Exner equation slows down the already costly 2D shallow water model, as the number of variables to solve increases and the numerical stability is more restrictive. Moreover, the movement of poorly sorted material over steep areas constitutes a hazardous environmental problem, and computational tools help in the prediction of such landslides [2]. In order to overcome this problem, this work proposes the use of Graphical Processing Units (GPUs) to decrease the simulation time significantly [3, 4]. The numerical scheme implemented on the GPU is based on a finite volume scheme. The mathematical model and the numerical implementation are compared against experimental and field data. In addition, the computational times obtained with the graphical hardware technology are compared against Single-Core (sequential) and Multi-Core (parallel) CPU implementations. References [Juez et al.(2014)] Juez, C., Murillo, J., & García-Navarro, P. (2014) A 2D weakly-coupled and efficient numerical model for transient shallow flow and movable bed. Advances in Water Resources. 71 93-109. [Juez et al.(2013)] Juez, C., Murillo, J., & García-Navarro, P. (2013) 2D simulation of granular flow over irregular steep slopes using global and local coordinates. Journal of Computational Physics. 225 166-204. [Lacasta et al.(2014)] Lacasta, A., Morales-Hernández, M., Murillo, J., & García-Navarro, P. (2014) An optimized GPU implementation of a 2D free surface simulation model on unstructured meshes. Advances in Engineering Software. 78 1-15. [Lacasta

  20. Kinematic modelling of disc galaxies using graphics processing units

    NASA Astrophysics Data System (ADS)

    Bekiaris, G.; Glazebrook, K.; Fluke, C. J.; Abraham, R.

    2016-01-01

    With large-scale integral field spectroscopy (IFS) surveys of thousands of galaxies currently underway or planned, the astronomical community is in need of methods, techniques and tools that will allow the analysis of huge amounts of data. We focus on the kinematic modelling of disc galaxies and investigate the potential use of massively parallel architectures, such as the graphics processing unit (GPU), as an accelerator for the computationally expensive model-fitting procedure. We review the algorithms involved in model-fitting and evaluate their suitability for GPU implementation. We employ different optimization techniques, including the Levenberg-Marquardt and nested sampling algorithms, but also a naive brute-force approach based on nested grids. We find that the GPU can accelerate the model-fitting procedure by up to a factor of ˜100 when compared to a single-threaded CPU, and by up to a factor of ˜10 when compared to a multithreaded dual-CPU configuration. Our method's accuracy, precision and robustness are assessed by successfully recovering the kinematic properties of simulated data, and also by verifying the kinematic modelling results of galaxies from the GHASP and DYNAMO surveys as found in the literature. The resulting GBKFIT code is available for download from: http://supercomputing.swin.edu.au/gbkfit.

  1. Multilevel Summation of Electrostatic Potentials Using Graphics Processing Units*

    PubMed Central

    Hardy, David J.; Stone, John E.; Schulten, Klaus

    2009-01-01

    Physical and engineering practicalities involved in microprocessor design have resulted in flat performance growth for traditional single-core microprocessors. The urgent need for continuing increases in the performance of scientific applications requires the use of many-core processors and accelerators such as graphics processing units (GPUs). This paper discusses GPU acceleration of the multilevel summation method for computing electrostatic potentials and forces for a system of charged atoms, which is a problem of paramount importance in biomolecular modeling applications. We present and test a new GPU algorithm for the long-range part of the potentials that computes a cutoff pair potential between lattice points, essentially convolving a fixed 3-D lattice of “weights” over all sub-cubes of a much larger lattice. The implementation exploits the different memory subsystems provided on the GPU to stream optimally sized data sets through the multiprocessors. We demonstrate for the full multilevel summation calculation speedups of up to 26 using a single GPU and 46 using multiple GPUs, enabling the computation of a high-resolution map of the electrostatic potential for a system of 1.5 million atoms in under 12 seconds. PMID:20161132

  2. Use of general purpose graphics processing units with MODFLOW

    USGS Publications Warehouse

    Hughes, Joseph D.; White, Jeremy T.

    2013-01-01

    To evaluate the use of general-purpose graphics processing units (GPGPUs) to improve the performance of MODFLOW, an unstructured preconditioned conjugate gradient (UPCG) solver has been developed. The UPCG solver uses a compressed sparse row storage scheme and includes Jacobi, zero fill-in incomplete, and modified-incomplete lower-upper (LU) factorization, and generalized least-squares polynomial preconditioners. The UPCG solver also includes options for sequential and parallel solution on the central processing unit (CPU) using OpenMP. For simulations utilizing the GPGPU, all basic linear algebra operations are performed on the GPGPU; memory copies between the CPU and GPGPU occur prior to the first iteration of the UPCG solver and after satisfying head and flow criteria or exceeding a maximum number of iterations. The efficiency of the UPCG solver for GPGPU and CPU solutions is benchmarked using simulations of a synthetic, heterogeneous unconfined aquifer with tens of thousands to millions of active grid cells. Testing indicates GPGPU speedups on the order of 2 to 8, relative to the standard MODFLOW preconditioned conjugate gradient (PCG) solver, can be achieved when (1) memory copies between the CPU and GPGPU are optimized, (2) the percentage of time performing memory copies between the CPU and GPGPU is small relative to the calculation time, (3) high-performance GPGPU cards are utilized, and (4) CPU-GPGPU combinations are used to execute sequential operations that are difficult to parallelize. Furthermore, UPCG solver testing indicates GPGPU speedups exceed parallel CPU speedups achieved using OpenMP on multicore CPUs for preconditioners that can be easily parallelized.
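
    Of the preconditioners listed, the Jacobi option is the simplest to parallelize, which is consistent with the finding that easily parallelized preconditioners favor the GPGPU. As a hedged sketch (names and layout assumed, not the UPCG source):

        // Jacobi (diagonal) preconditioner application z = D^-1 * r,
        // one thread per active grid cell.
        __global__ void jacobi_apply(int n, const float *diag,
                                     const float *r, float *z)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) z[i] = r[i] / diag[i];
        }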

  3. Use of general purpose graphics processing units with MODFLOW.

    PubMed

    Hughes, Joseph D; White, Jeremy T

    2013-01-01

    To evaluate the use of general-purpose graphics processing units (GPGPUs) to improve the performance of MODFLOW, an unstructured preconditioned conjugate gradient (UPCG) solver has been developed. The UPCG solver uses a compressed sparse row storage scheme and includes Jacobi, zero fill-in incomplete, and modified-incomplete lower-upper (LU) factorization, and generalized least-squares polynomial preconditioners. The UPCG solver also includes options for sequential and parallel solution on the central processing unit (CPU) using OpenMP. For simulations utilizing the GPGPU, all basic linear algebra operations are performed on the GPGPU; memory copies between the CPU and GPGPU occur prior to the first iteration of the UPCG solver and after satisfying head and flow criteria or exceeding a maximum number of iterations. The efficiency of the UPCG solver for GPGPU and CPU solutions is benchmarked using simulations of a synthetic, heterogeneous unconfined aquifer with tens of thousands to millions of active grid cells. Testing indicates GPGPU speedups on the order of 2 to 8, relative to the standard MODFLOW preconditioned conjugate gradient (PCG) solver, can be achieved when (1) memory copies between the CPU and GPGPU are optimized, (2) the percentage of time performing memory copies between the CPU and GPGPU is small relative to the calculation time, (3) high-performance GPGPU cards are utilized, and (4) CPU-GPGPU combinations are used to execute sequential operations that are difficult to parallelize. Furthermore, UPCG solver testing indicates GPGPU speedups exceed parallel CPU speedups achieved using OpenMP on multicore CPUs for preconditioners that can be easily parallelized. PMID:23281733

  4. MASSIVELY PARALLEL LATENT SEMANTIC ANALYSES USING A GRAPHICS PROCESSING UNIT

    SciTech Connect

    Cavanagh, J.; Cui, S.

    2009-01-01

    Latent Semantic Analysis (LSA) aims to reduce the dimensions of large term-document datasets using Singular Value Decomposition. However, with the ever-expanding size of datasets, current implementations are not fast enough to quickly and easily compute the results on a standard PC. A graphics processing unit (GPU) can solve some highly parallel problems much faster than a traditional sequential processor or central processing unit (CPU). Thus, a deployable system using a GPU to speed up large-scale LSA processes would be a much more effective choice (in terms of cost/performance ratio) than using a PC cluster. Due to the GPU’s application-specific architecture, harnessing the GPU’s computational prowess for LSA is a great challenge. We presented a parallel LSA implementation on the GPU, using NVIDIA® Compute Unified Device Architecture and Compute Unified Basic Linear Algebra Subprograms software. The performance of this implementation is compared to traditional LSA implementation on a CPU using an optimized Basic Linear Algebra Subprograms library. After implementation, we discovered that the GPU version of the algorithm was twice as fast for large matrices (1000x1000 and above) that had dimensions not divisible by 16. For large matrices that did have dimensions divisible by 16, the GPU algorithm ran five to six times faster than the CPU version. The large variation is due to architectural benefits of the GPU for matrices divisible by 16. It should be noted that the overall speeds for the CPU version did not vary from relative normal when the matrix dimensions were divisible by 16. Further research is needed in order to produce a fully implementable version of LSA. With that in mind, the research we presented shows that the GPU is a viable option for increasing the speed of LSA, in terms of cost/performance ratio.

  6. Real-time massively parallel processing of spectral optical coherence tomography data on graphics processing units

    NASA Astrophysics Data System (ADS)

    Sylwestrzak, Marcin; Szlag, Daniel; Szkulmowski, Maciej; Targowski, Piotr

    2011-06-01

    In this contribution we describe a specialised data processing system for Spectral Optical Coherence Tomography (SOCT) biomedical imaging which utilises massively parallel data processing on a low-cost Graphics Processing Unit (GPU). One of the most significant limitations of SOCT is the data processing time on the main processor of the computer (CPU), which is generally longer than the data acquisition. Therefore, real-time imaging with acceptable quality is limited to a small number of tomogram lines (A-scans). Recent progress in graphics card technology offers a promising solution to this problem. The newest graphics processing units allow not only very high speed three-dimensional (3D) rendering, but also general-purpose parallel numerical calculation with efficiency higher than that provided by the CPU. The presented system utilises a CUDA™ graphics card and allows for very effective real-time SOCT imaging. The total imaging speed for 2D data consisting of 1200 A-scans is higher than the refresh rate of a 120 Hz monitor. 3D rendering of volume data built of 10,000 A-scans is performed at a frame rate of about 9 frames per second. These frame rates include data transfer from the frame grabber to the GPU, data processing, and 3D rendering to the screen. The software description includes data flow, parallel processing, and the organization of threads. For illustration we show real-time high-resolution SOCT imaging of human skin and eye.
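
    SOCT processing is dominated by one Fourier transform per A-scan, which maps naturally onto a single batched cuFFT call. A minimal sketch under assumed sizes (not the authors' pipeline, which also includes resampling and rendering):

        #include <cufft.h>

        // Transform all A-scans of one frame in place as a batched 1-D FFT.
        void fft_frame(cufftComplex *d_data, int fftSize, int nAScans)
        {
            cufftHandle plan;
            cufftPlan1d(&plan, fftSize, CUFFT_C2C, nAScans);
            cufftExecC2C(plan, d_data, d_data, CUFFT_FORWARD);
            cufftDestroy(plan);
        }

    In a real-time system the plan would be created once and reused across frames rather than rebuilt per call.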

  7. Massively Parallel Latent Semantic Analyses using a Graphics Processing Unit

    SciTech Connect

    Cavanagh, Joseph M; Cui, Xiaohui

    2009-01-01

    Latent Semantic Analysis (LSA) aims to reduce the dimensions of large Term-Document datasets using Singular Value Decomposition. However, with the ever expanding size of data sets, current implementations are not fast enough to quickly and easily compute the results on a standard PC. The Graphics Processing Unit (GPU) can solve some highly parallel problems much faster than the traditional sequential processor (CPU). Thus, a deployable system using a GPU to speed up large-scale LSA processes would be a much more effective choice (in terms of cost/performance ratio) than using a computer cluster. Due to the GPU's application-specific architecture, harnessing the GPU's computational prowess for LSA is a great challenge. We present a parallel LSA implementation on the GPU, using NVIDIA Compute Unified Device Architecture and Compute Unified Basic Linear Algebra Subprograms. The performance of this implementation is compared to a traditional LSA implementation on the CPU using an optimized Basic Linear Algebra Subprograms library. After implementation, we discovered that the GPU version of the algorithm was twice as fast for large matrices (1000x1000 and above) that had dimensions not divisible by 16. For large matrices that did have dimensions divisible by 16, the GPU algorithm ran five to six times faster than the CPU version. The large variation is due to architectural benefits the GPU has for matrices divisible by 16. It should be noted that the overall speeds for the CPU version did not vary from relative normal when the matrix dimensions were divisible by 16. Further research is needed in order to produce a fully implementable version of LSA. With that in mind, the research we presented shows that the GPU is a viable option for increasing the speed of LSA, in terms of cost/performance ratio.

  8. Area-delay trade-offs of texture decompressors for a graphics processing unit

    NASA Astrophysics Data System (ADS)

    Novoa Súñer, Emilio; Ituero, Pablo; López-Vallejo, Marisa

    2011-05-01

    Graphics Processing Units have become a booster for the microelectronics industry. However, due to intellectual property issues, there is a serious lack of information on the implementation details of the hardware architecture behind GPUs. For instance, the way texture is handled and decompressed in a GPU to reduce bandwidth usage has never been dealt with in depth from a hardware point of view. This work presents a comparative study of the hardware implementation of different texture decompression algorithms for both conventional (PCs and video game consoles) and mobile platforms. Circuit synthesis is performed targeting both a reconfigurable hardware platform and a 90 nm standard cell library. Area-delay trade-offs have been extensively analyzed, which allows us to compare the complexity of the decompressors and thus determine the suitability of the algorithms for systems with limited hardware resources.

  9. Fast evaluation of Helmholtz potential on graphics processing units (GPUs)

    NASA Astrophysics Data System (ADS)

    Li, Shaojing; Livshitz, Boris; Lomakin, Vitaliy

    2010-11-01

    This paper presents a parallel algorithm implemented on graphics processing units (GPUs) for rapidly evaluating spatial convolutions between the Helmholtz potential and a large-scale source distribution. The algorithm implements a non-uniform grid interpolation method (NGIM), which uses amplitude and phase compensation and spatial interpolation from a sparse grid to compute the field outside a source domain. NGIM reduces the computational time cost of the direct field evaluation at N observers due to N co-located sources from O(N²) to O(N) in the static and low-frequency regimes, to O(N log N) in the high-frequency regime, and between these costs in the mixed-frequency regime. Memory requirements scale as O(N) in all frequency regimes. Several important differences between the CPU and GPU implementations of the NGIM are required to achieve optimal performance on the respective platforms. In particular, in the CPU implementations all operations, where possible, are pre-computed and stored in memory in a preprocessing stage. This reduces the computational time but significantly increases the memory consumption. In the GPU implementations, where memory handling often is a critical bottleneck, several special memory handling techniques are used to accelerate the computations. The significant latency of GPU global memory access is hidden by implementing coalesced reading, which requires arranging many array elements in contiguous parts of memory. Contrary to the CPU version, most of the steps in the GPU implementations are executed on the fly and only the necessary arrays are kept in memory. This results in significantly reduced memory consumption, an increased problem size N that can be handled, and reduced computational time on GPUs. The obtained GPU-CPU speed-up ratios are from 150 to 400 depending on the required accuracy and problem size. The presented method and its CPU and GPU implementations can find important applications in various fields of physics and engineering.
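
    The coalesced-reading point generalizes: when consecutive threads touch consecutive addresses, a warp's loads collapse into a few memory transactions. A toy illustration (not from the paper):

        // Coalesced access: thread i reads element i.
        __global__ void scale(int n, float a, const float *in, float *out)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;  // contiguous
            // A strided index such as i * 32 would break coalescing
            // and multiply the number of memory transactions.
            if (i < n) out[i] = a * in[i];
        }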

  10. Accelerating Cardiac Bidomain Simulations Using Graphics Processing Units

    PubMed Central

    Neic, Aurel; Liebmann, Manfred; Hoetzl, Elena; Mitchell, Lawrence; Vigmond, Edward J.; Haase, Gundolf

    2013-01-01

    Anatomically realistic and biophysically detailed multiscale computer models of the heart are playing an increasingly important role in advancing our understanding of integrated cardiac function in health and disease. Such detailed simulations, however, are computationally very demanding, which is a limiting factor for a wider adoption of in-silico modeling. While current trends in high-performance computing (HPC) hardware promise to alleviate this problem, exploiting the potential of such architectures remains challenging since strongly scalable algorithms are required to reduce execution times. Alternatively, acceleration technologies such as graphics processing units (GPUs) are being considered. While the potential of GPUs has been demonstrated in various applications, benefits in the context of bidomain simulations, where large sparse linear systems have to be solved in parallel with advanced numerical techniques, are less clear. In this study, the feasibility of multi-GPU bidomain simulations is demonstrated by running strong scalability benchmarks using a state-of-the-art model of rabbit ventricles. The model is spatially discretized using the finite element method (FEM) on fully unstructured grids. The GPU code is directly derived from a large pre-existing code, the Cardiac Arrhythmia Research Package (CARP), with very minor perturbation of the code base. Overall, bidomain simulations were sped up by a factor of 11.8 to 16.3 in benchmarks running on 6–20 GPUs compared to the same number of CPU cores. To match the fastest GPU simulation, which engaged 20 GPUs, 476 CPU cores were required on a national supercomputing facility. PMID:22692867

  11. Flocking-based Document Clustering on the Graphics Processing Unit

    SciTech Connect

    Cui, Xiaohui; Potok, Thomas E; Patton, Robert M; ST Charles, Jesse Lee

    2008-01-01

    Analyzing and grouping documents by content is a complex problem. One explored method of solving this problem borrows from nature, imitating the flocking behavior of birds. Each bird represents a single document and flies toward other documents that are similar to it. One limitation of this method of document clustering is its complexity of O(n²). As the number of documents grows, it becomes increasingly difficult to receive results in a reasonable amount of time. However, flocking behavior, along with most naturally inspired algorithms such as ant colony optimization and particle swarm optimization, is highly parallel, and such algorithms have found increased performance on expensive cluster computers. In the last few years, the graphics processing unit (GPU) has received attention for its ability to solve highly parallel and semi-parallel problems much faster than the traditional sequential processor. Some applications see a huge increase in performance on this new platform. The cost of these high-performance devices is also marginal when compared with the price of cluster machines. In this paper, we have conducted research to exploit this architecture and apply its strengths to the document flocking problem. Our results highlight the potential benefit the GPU brings to all naturally inspired algorithms. Using the CUDA platform from NVIDIA®, we developed a document flocking implementation to be run on the NVIDIA® GeForce 8800. Additionally, we developed a similar but sequential implementation of the same algorithm to be run on a desktop CPU. We tested the performance of each on groups of news articles ranging in size from 200 to 3000 documents. The results of these tests were very significant: performance gains ranged from three to nearly five times improvement of the GPU over the CPU implementation. This dramatic improvement in runtime makes the GPU a potentially revolutionary platform for document clustering algorithms.

  12. Viscoelastic Finite Difference Modeling Using Graphics Processing Units

    NASA Astrophysics Data System (ADS)

    Fabien-Ouellet, G.; Gloaguen, E.; Giroux, B.

    2014-12-01

    Full waveform seismic modeling requires a huge amount of computing power that still challenges today's technology. This limits the applicability of powerful processing approaches in seismic exploration like full-waveform inversion. This paper explores the use of Graphics Processing Units (GPUs) to compute a time-based finite-difference solution to the viscoelastic wave equation. The aim is to investigate whether the adoption of GPU technology can significantly reduce the computing time of simulations. The code presented herein is based on the freely accessible 2D software of Bohlen (2002), provided under the GNU General Public License. This implementation is based on a second-order centred difference scheme to approximate time derivatives and staggered-grid schemes with centred differences of order 2, 4, 6, 8, and 12 for spatial derivatives. The code is fully parallel and is written using the Message Passing Interface (MPI), and it thus supports simulations of vast seismic models on a cluster of CPUs. To port the code from Bohlen (2002) to GPUs, the OpenCL framework was chosen for its ability to work on both CPUs and GPUs and its adoption by most GPU manufacturers. In our implementation, OpenCL works in conjunction with MPI, which allows computations on a cluster of GPUs for large-scale model simulations. We tested our code for model sizes between 100² and 6000² elements. Comparison shows a decrease in computation time of more than two orders of magnitude between the GPU implementation run on an AMD Radeon HD 7950 and the CPU implementation run on a 2.26 GHz Intel Xeon Quad-Core. The speed-up varies depending on the order of the finite difference approximation and generally increases for higher orders. Increasing speed-ups are also obtained for increasing model size, which can be explained by kernel overheads and delays introduced by memory transfers to and from the GPU through the PCI-E bus. Those tests indicate that the GPU memory size
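
    For orientation, the core of any such time-domain scheme is a stencil update per grid point per time step. A hedged acoustic (not viscoelastic) second-order sketch in CUDA rather than the authors' OpenCL, with all names assumed:

        // Advance a 2-D wavefield one step: p2 = 2*p1 - p0 + c * Lap(p1),
        // where c folds velocity, dt and grid spacing; one thread per point.
        __global__ void fd_step(int nx, int nz, float c, const float *p0,
                                const float *p1, float *p2)
        {
            int x = blockIdx.x * blockDim.x + threadIdx.x;
            int z = blockIdx.y * blockDim.y + threadIdx.y;
            if (x > 0 && x < nx - 1 && z > 0 && z < nz - 1) {
                int i = z * nx + x;
                float lap = p1[i - 1] + p1[i + 1] + p1[i - nx] + p1[i + nx]
                          - 4.0f * p1[i];
                p2[i] = 2.0f * p1[i] - p0[i] + c * lap;
            }
        }

    The viscoelastic scheme updates velocities and stresses with memory variables, but its GPU mapping follows the same one-thread-per-cell pattern.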

  13. Megahertz processing rate for Fourier domain optical coherence tomography using a graphics processing unit

    NASA Astrophysics Data System (ADS)

    Watanabe, Yuuki; Kamiyama, Dai

    2012-01-01

    We developed ultrahigh-speed processing of FD-OCT images using a low-cost graphics processing unit (GPU) with many stream processors to realize highly parallel processing. The processing line rates of half-range and full-range FD-OCT were 1.34 MHz and 0.70 MHz, respectively, for a spectral interference image of 1024 FFT size x 2048 lateral A-scans. A display rate of 22.5 frames per second for processed full-range images was achieved in our OCT system using an InGaAs line scan camera operated at 47 kHz.

  14. Graphic Arts: Process Camera, Stripping, and Platemaking. Fourth Edition. Teacher Edition [and] Student Edition.

    ERIC Educational Resources Information Center

    Multistate Academic and Vocational Curriculum Consortium, Stillwater, OK.

    This publication contains both a teacher edition and a student edition of materials for a course in graphic arts that covers the process camera, stripping, and platemaking. The course introduces basic concepts and skills necessary for entry-level employment in a graphic communication occupation. The contents of the materials are tied to measurable…

  15. Real-time radar signal processing using GPGPU (general-purpose graphic processing unit)

    NASA Astrophysics Data System (ADS)

    Kong, Fanxing; Zhang, Yan Rockee; Cai, Jingxiao; Palmer, Robert D.

    2016-05-01

    This study introduces a practical approach to developing a real-time signal processing chain for a general phased array radar on NVIDIA GPUs (Graphics Processing Units) using CUDA (Compute Unified Device Architecture) libraries such as cuBLAS and cuFFT, which are adopted from open source libraries and optimized for NVIDIA GPUs. The processed results are rigorously verified against those from the CPUs. Performance, benchmarked as computation time for various input data cube sizes, is compared across GPUs and CPUs. Through the analysis, it will be demonstrated that GPGPU (general-purpose GPU) real-time processing of array radar data is possible with relatively low-cost commercial GPUs.

  16. High Speed Data Processing for Imaging MS-Based Molecular Histology Using Graphical Processing Units

    NASA Astrophysics Data System (ADS)

    Jones, Emrys A.; van Zeijl, René J. M.; Andrén, Per E.; Deelder, André M.; Wolters, Lex; McDonnell, Liam A.

    2012-04-01

    Imaging MS enables the distributions of hundreds of biomolecular ions to be determined directly from tissue samples. The application of multivariate methods to identify pixels possessing correlated MS profiles is referred to as molecular histology, as tissues can be annotated on the basis of the MS profiles. The application of imaging MS-based molecular histology to larger tissue series, for clinical applications, requires significantly increased computational capacity in order to efficiently analyze the very large, highly dimensional datasets. Such datasets are highly suited to processing using graphical processor units, a very cost-effective solution for high-speed processing. Here we demonstrate up to 13× speed improvements for imaging MS-based molecular histology using off-the-shelf components, and demonstrate equivalence with CPU-based calculations. We then discuss how imaging MS investigations may be designed to fully exploit the high speed of graphical processor units.

  17. Software Graphics Processing Unit (sGPU) for Deep Space Applications

    NASA Technical Reports Server (NTRS)

    McCabe, Mary; Salazar, George; Steele, Glen

    2015-01-01

    A graphics processing capability will be required for deep space missions and must include a range of applications, from safety-critical vehicle health status to telemedicine for crew health. However, preliminary radiation testing of commercial graphics processing cards suggests they cannot operate in the deep space radiation environment. Investigation into a Software Graphics Processing Unit (sGPU) comprised of commercial-equivalent radiation-hardened/tolerant single board computers, field programmable gate arrays, and safety-critical display software shows promising results. Preliminary performance of approximately 30 frames per second (FPS) has been achieved. Use of multi-core processors may provide a significant increase in performance.

  18. Abstract User Interfaces for Mobile Processes

    NASA Astrophysics Data System (ADS)

    Zaplata, Sonja; Vilenica, Ante; Bade, Dirk; Kunze, Christian P.

    An important focus of recent business process management systems is on the distributed, self-contained and even disconnected execution of processes involving mobile devices. Such an execution context leads to the class of mobile processes, which are able to migrate between mobile and stationary devices in order to share functionalities and resources provided by the entire (mobile) environment. However, both the description and the execution of tasks which involve interactions of mobile users still require the executing device and its context to be known in advance in order to come up with a suitable user interface. Since this seems inappropriate for such decentralized and highly dynamic mobile processes, this work focuses on the integration of manual tasks and the corresponding ad-hoc creation of user interfaces at runtime. As an important prerequisite for that, this paper first presents an abstract and modality-independent interaction model to support the development and execution of user-centric mobile processes. Furthermore, the paper describes a prototype implementation of a corresponding system infrastructure component based on a service-oriented execution module and, finally, shows its integration into the DEMAC (Distributed Environment for Mobility-Aware Computing) middleware.

  19. Grace: A cross-platform micromagnetic simulator on graphics processing units

    NASA Astrophysics Data System (ADS)

    Zhu, Ru

    2015-12-01

    A micromagnetic simulator running on graphics processing units (GPUs) is presented. Unlike the GPU implementations of other research groups, which predominantly run on NVIDIA's CUDA platform, this simulator is developed with C++ Accelerated Massive Parallelism (C++ AMP) and is hardware-platform independent. It runs on GPUs from vendors including NVIDIA, AMD, and Intel, and achieves a significant performance boost compared to previous central processing unit (CPU) simulators, of up to two orders of magnitude. The simulator paves the way for running large micromagnetic simulations on both high-end workstations with dedicated graphics cards and low-end personal computers with integrated graphics cards, and is freely available to download.

  20. Mobile Devices and GPU Parallelism in Ionospheric Data Processing

    NASA Astrophysics Data System (ADS)

    Mascharka, D.; Pankratius, V.

    2015-12-01

    Scientific data acquisition in the field is often constrained by data transfer backchannels to analysis environments. Geoscientists are therefore facing practical bottlenecks with increasing sensor density and variety. Mobile devices, such as smartphones and tablets, offer promising solutions to key problems in scientific data acquisition, pre-processing, and validation by providing advanced capabilities in the field. This is due to affordable network connectivity options and the increasing mobile computational power. This contribution exemplifies a scenario faced by scientists in the field and presents the "Mahali TEC Processing App" developed in the context of the NSF-funded Mahali project. Aimed at atmospheric science and the study of ionospheric Total Electron Content (TEC), this app is able to gather data from various dual-frequency GPS receivers. It demonstrates parsing of full-day RINEX files on mobile devices and on-the-fly computation of vertical TEC values based on satellite ephemeris models that are obtained from NASA. Our experiments show how parallel computing on the mobile device GPU enables fast processing and visualization of up to 2 million datapoints in real-time using OpenGL. GPS receiver bias is estimated through minimum TEC approximations that can be interactively adjusted by scientists in the graphical user interface. Scientists can also perform approximate computations for "quickviews" to reduce CPU processing time and memory consumption. In the final stage of our mobile processing pipeline, scientists can upload data to the cloud for further processing. Acknowledgements: The Mahali project (http://mahali.mit.edu) is funded by the NSF INSPIRE grant no. AGS-1343967 (PI: V. Pankratius). We would like to acknowledge our collaborators at Boston College, Virginia Tech, Johns Hopkins University, Colorado State University, as well as the support of UNAVCO for loans of dual-frequency GPS receivers for use in this project, and Intel for loans of
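
    The arithmetic behind the TEC step parallelizes trivially across observation epochs. A hedged CUDA sketch of the standard geometry-free dual-frequency combination (the app itself computes on the mobile GPU via OpenGL, and receiver/satellite biases are handled separately):

        // Slant TEC in TEC units from dual-frequency pseudoranges (metres):
        // TEC = f1^2 * f2^2 / (40.3 * (f1^2 - f2^2)) * (P2 - P1).
        __global__ void slant_tec(int n, const double *P1, const double *P2,
                                  double *tecu)
        {
            const double f1 = 1575.42e6, f2 = 1227.60e6;   // GPS L1, L2 (Hz)
            const double k = f1 * f1 * f2 * f2
                           / (40.3 * (f1 * f1 - f2 * f2));
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) tecu[i] = k * (P2[i] - P1[i]) / 1.0e16;  // 1 TECU = 1e16 el/m^2
        }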

  1. The Employment of an Iterative Design Process to Develop a Pulmonary Graphical Display

    PubMed Central

    Wachter, S. Blake; Agutter, Jim; Syroid, Noah; Drews, Frank; Weinger, Matthew B.; Westenskow, Dwayne

    2003-01-01

    Objective: Data representations on today's medical monitors need to be improved to advance clinical awareness and prevent data vigilance errors. Simply building graphical displays does not ensure an improvement in clinical performance because displays have to be consistent with the user's clinical processes and mental models. In this report, the development of an original pulmonary graphical display for anesthesia is used as an example to show an iterative design process with built-in usability testing. Design: The process reported here is rapid, inexpensive, and requires a minimal number of subjects per development cycle. Three paper-based tests evaluated the anatomic, variable mapping, and graphical diagnostic meaning of the pulmonary display. Measurements: A confusion matrix compared the designer's intended answer with the subject's chosen answer. Considering deviations off the diagonal of the confusion matrix as design weaknesses, the pulmonary display was modified and retested. The iterative cycle continued until the anatomic and variable mapping cumulative test scores for a chosen design scored above 90% and the graphical diagnostic meaning test scored above 75%. Results: The iterative development test resulted in five design iterations. The final graphical pulmonary display improved the overall intuitiveness by 18%. The display was tested in three categories: anatomic features, variable mapping, and diagnostic accuracy. The anatomic intuitiveness increased by 25%, variable mapping intuitiveness increased by 34%, and diagnostic accuracy decreased slightly by 4%. Conclusion: With this rapid iterative development process, an intuitive graphical display can be developed inexpensively prior to formal testing in an experimental setting. PMID:12668693

  2. Acceleration of integral imaging based incoherent Fourier hologram capture using graphic processing unit.

    PubMed

    Jeong, Kyeong-Min; Kim, Hee-Seung; Hong, Sung-In; Lee, Sung-Keun; Jo, Na-Young; Kim, Yong-Soo; Lim, Hong-Gi; Park, Jae-Hyeung

    2012-10-01

    Speed enhancement of integral-imaging-based incoherent Fourier hologram capture using a graphics processing unit is reported. The integral-imaging-based method enables exact hologram capture of real-existing three-dimensional objects under regular incoherent illumination. In our implementation, we apply a parallel computation scheme using the graphics processing unit, accelerating the processing speed. Using the enhanced speed of hologram capture, we also implement a pseudo real-time hologram capture and optical reconstruction system. The overall operation speed is measured to be 1 frame per second.

  3. Student Thinking Processes While Constructing Graphic Representations of Textbook Content: What Insights Do Think-Alouds Provide?

    ERIC Educational Resources Information Center

    Scott, D. Beth; Dreher, Mariam Jean

    2016-01-01

    This study examined the thinking processes students engage in while constructing graphic representations of textbook content. Twenty-eight students who either used graphic representations in a routine manner during social studies instruction or learned to construct graphic representations based on the rhetorical patterns used to organize textbook…

  4. Graphical user interface for image acquisition and processing

    DOEpatents

    Goldberg, Kenneth A.

    2002-01-01

    An event-driven, GUI-based image acquisition interface for the IDL programming environment, designed for CCD camera control and image acquisition directly into the IDL environment, where image manipulation and data analysis can be performed, together with a toolbox of real-time analysis applications. Running the image acquisition hardware directly from IDL removes the necessity of first saving images in one program and then importing the data into IDL for analysis in a second step. Bringing the data directly into IDL creates an opportunity for the implementation of IDL image processing and display functions in real time. The program allows control over the available charge-coupled device (CCD) detector parameters, data acquisition, file saving and loading, and image manipulation and processing, all from within IDL. The program is built using IDL's widget libraries to control the on-screen display and user interface.

  5. A graphically oriented specification language for automatic code generation. GRASP/Ada: A Graphical Representation of Algorithms, Structure, and Processes for Ada, phase 1

    NASA Technical Reports Server (NTRS)

    Cross, James H., II; Morrison, Kelly I.; May, Charles H., Jr.; Waddel, Kathryn C.

    1989-01-01

    The first phase of a three-phase effort to develop a new graphically oriented specification language which will facilitate the reverse engineering of Ada source code into graphical representations (GRs) as well as the automatic generation of Ada source code is described. A simplified view of the three phases of Graphical Representations for Algorithms, Structure, and Processes for Ada (GRASP/Ada) with respect to three basic classes of GRs is presented. Phase 1 concentrated on the derivation of an algorithmic diagram, the control structure diagram (CSD) (CRO88a) from Ada source code or Ada PDL. Phase 2 includes the generation of architectural and system level diagrams such as structure charts and data flow diagrams and should result in a requirements specification for a graphically oriented language able to support automatic code generation. Phase 3 will concentrate on the development of a prototype to demonstrate the feasibility of this new specification language.

  6. Multiparallel decompression simultaneously using multicore central processing unit and graphic processing unit

    NASA Astrophysics Data System (ADS)

    Petta, Andrea; Serra, Luigi; De Nino, Maurizio

    2013-01-01

    The discrete wavelet transform (DWT)-based compression algorithm is widely used in many image compression systems. The time-consuming computation of the 9/7 discrete wavelet decomposition and the bit-plane decoding is usually the bottleneck of these systems. In order to perform real-time decompression of the massive bit stream of compressed images continuously down-linked from a satellite, we propose a graphics processing unit (GPU)-accelerated decoding system in which the GPU and multiple central processing unit (CPU) threads run in parallel. To obtain maximum throughput from the pipeline structure when processing continuous satellite images, a workload-balancing algorithm has been implemented to distribute jobs between the CPU and GPU parts so that both proceed at approximately the same processing speed. Through this pipelined CPU and GPU heterogeneous computing, the entire decoding system approaches a speedup of 15× compared to its single-threaded CPU counterpart. The proposed channel and source decoding system is able to decompress 1024×1024 satellite images at a speed of 20 frames/s.

  7. Parallelized CCHE2D flow model with CUDA Fortran on Graphics Process Units

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper presents the CCHE2D implicit flow model parallelized using CUDA Fortran programming technique on Graphics Processing Units (GPUs). A parallelized implicit Alternating Direction Implicit (ADI) solver using Parallel Cyclic Reduction (PCR) algorithm on GPU is developed and tested. This solve...

  8. Graphics processing simulation and trade-off study for cockpit applications

    NASA Astrophysics Data System (ADS)

    Groat, Jeff; Hancock, William R.; Johnson, Michael J.; Shackleton, John; Spaanenburg, Henk; Steeves, Todd; Bishop, Richard G.; Peterson, Gregory D.; Read, Britton C., III

    1996-05-01

    Under the sponsorship of Wright Laboratory (contract F33615-92-C-3802), Honeywell has been involved in the definition of next-generation display processors. This paper describes the top-level design approach, simulation and tradeoff studies, as well as the resulting architectural concepts for the cockpit display generator (CDG) processing system. The CDG architecture provides the graphical and video processing power needed to drive future high- resolution display devices and to generate advanced display formats for improved pilot situation awareness. The foremost objective of the CDG design is to achieve super-graphics workstation performance in a form factor suitable for avionics applications. The CDG design provides multichannel, high-performance 2-D and 3-D graphics and real-time video manipulation. Requirements for the CDG have been defined by the needs of Panoramic Cockpit Control and Display System (PCCADS) 2000 cockpits. Most notable are requirements for low-volume, low-power, real-time performance and tolerance for harsh environmental conditions. These goals have been realized by combining customized graphics pipelines with standard processing elements. The CDG design has been implemented as a software 'prototype' using VHDL performance and functional models. This novel design approach allows architectural tradeoffs to be made within the context of a standard design language, VHDL. Simulations have been developed to specify and evaluate particular system performance and functional and design aspects.

  9. Fast blood flow visualization of high-resolution laser speckle imaging data using graphics processing unit.

    PubMed

    Liu, Shusen; Li, Pengcheng; Luo, Qingming

    2008-09-15

    Laser speckle contrast analysis (LASCA) is a non-invasive, full-field optical technique that produces a two-dimensional map of blood flow in biological tissue by analyzing speckle images captured by a CCD camera. Due to the heavy computation required for speckle contrast analysis, video-frame-rate visualization of blood flow, which is essential for medical use, is hardly achievable for high-resolution image data using the CPU (Central Processing Unit) of an ordinary PC (Personal Computer). In this paper, we introduced the GPU (Graphics Processing Unit) into our data processing framework for laser speckle contrast imaging to achieve fast and high-resolution blood flow visualization on PCs by exploiting the high floating-point processing power of commodity graphics hardware. By using the GPU, a 12-60-fold performance enhancement is obtained in comparison to the optimized CPU implementations.
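
    Speckle contrast is a sliding-window statistic, K = sigma / mean of intensity, computed independently at every pixel, which is why it maps so well to the GPU. A hedged sketch (window handling and names assumed, not the authors' code):

        // K = stddev / mean of raw intensity over an N x N window,
        // one thread per interior output pixel.
        __global__ void lasca(int W, int H, int N, const float *img, float *K)
        {
            int x = blockIdx.x * blockDim.x + threadIdx.x;
            int y = blockIdx.y * blockDim.y + threadIdx.y;
            int h = N / 2;
            if (x >= h && x < W - h && y >= h && y < H - h) {
                float s = 0.0f, s2 = 0.0f;
                for (int dy = -h; dy <= h; ++dy)
                    for (int dx = -h; dx <= h; ++dx) {
                        float v = img[(y + dy) * W + (x + dx)];
                        s += v; s2 += v * v;
                    }
                float m = s / (N * N);
                K[y * W + x] = sqrtf(fmaxf(s2 / (N * N) - m * m, 0.0f)) / m;
            }
        }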

  10. Use of a graphics processing unit (GPU) to facilitate real-time 3D graphic presentation of the patient skin-dose distribution during fluoroscopic interventional procedures

    NASA Astrophysics Data System (ADS)

    Rana, Vijay; Rudin, Stephen; Bednarek, Daniel R.

    2012-03-01

    We have developed a dose-tracking system (DTS) that calculates the radiation dose to the patient's skin in real-time by acquiring exposure parameters and imaging-system-geometry from the digital bus on a Toshiba Infinix C-arm unit. The cumulative dose values are then displayed as a color map on an OpenGL-based 3D graphic of the patient for immediate feedback to the interventionalist. Determination of those elements on the surface of the patient 3D-graphic that intersect the beam and calculation of the dose for these elements in real time demands fast computation. Reducing the size of the elements results in more computation load on the computer processor and therefore a tradeoff occurs between the resolution of the patient graphic and the real-time performance of the DTS. The speed of the DTS for calculating dose to the skin is limited by the central processing unit (CPU) and can be improved by using the parallel processing power of a graphics processing unit (GPU). Here, we compare the performance speed of GPU-based DTS software to that of the current CPU-based software as a function of the resolution of the patient graphics. Results show a tremendous improvement in speed using the GPU. While an increase in the spatial resolution of the patient graphics resulted in slowing down the computational speed of the DTS on the CPU, the speed of the GPU-based DTS was hardly affected. This GPU-based DTS can be a powerful tool for providing accurate, real-time feedback about patient skin-dose to physicians while performing interventional procedures.

  11. Use of a graphics processing unit (GPU) to facilitate real-time 3D graphic presentation of the patient skin-dose distribution during fluoroscopic interventional procedures.

    PubMed

    Rana, Vijay; Rudin, Stephen; Bednarek, Daniel R

    2012-02-23

    We have developed a dose-tracking system (DTS) that calculates the radiation dose to the patient's skin in real-time by acquiring exposure parameters and imaging-system-geometry from the digital bus on a Toshiba Infinix C-arm unit. The cumulative dose values are then displayed as a color map on an OpenGL-based 3D graphic of the patient for immediate feedback to the interventionalist. Determination of those elements on the surface of the patient 3D-graphic that intersect the beam and calculation of the dose for these elements in real time demands fast computation. Reducing the size of the elements results in more computation load on the computer processor and therefore a tradeoff occurs between the resolution of the patient graphic and the real-time performance of the DTS. The speed of the DTS for calculating dose to the skin is limited by the central processing unit (CPU) and can be improved by using the parallel processing power of a graphics processing unit (GPU). Here, we compare the performance speed of GPU-based DTS software to that of the current CPU-based software as a function of the resolution of the patient graphics. Results show a tremendous improvement in speed using the GPU. While an increase in the spatial resolution of the patient graphics resulted in slowing down the computational speed of the DTS on the CPU, the speed of the GPU-based DTS was hardly affected. This GPU-based DTS can be a powerful tool for providing accurate, real-time feedback about patient skin-dose to physicians while performing interventional procedures.

  12. Parallel computing for simultaneous iterative tomographic imaging by graphics processing units

    NASA Astrophysics Data System (ADS)

    Bello-Maldonado, Pedro D.; López, Ricardo; Rogers, Colleen; Jin, Yuanwei; Lu, Enyue

    2016-05-01

    In this paper, we address the problem of accelerating inversion algorithms for nonlinear acoustic tomographic imaging by parallel computing on graphics processing units (GPUs). Nonlinear inversion algorithms for tomographic imaging often rely on iterative algorithms for solving an inverse problem and are thus computationally intensive. We study the simultaneous iterative reconstruction technique (SIRT) for the multiple-input-multiple-output (MIMO) tomography algorithm, which enables parallel computation over the grid points as well as parallel execution of multiple source excitations. Using graphics processing units (GPUs) and the Compute Unified Device Architecture (CUDA) programming model, an overall improvement of 26.33x over the sequential algorithm was achieved when combining both approaches. Furthermore, we propose an adaptive iterative relaxation factor and the use of non-uniform weights to improve the overall convergence of the algorithm. Using these techniques, fast computations can be performed in parallel without loss of image quality during the reconstruction process.
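
    SIRT's defining property is that every grid point is corrected simultaneously from the back-projected residual, so the update itself is one thread per point. A hedged sketch assuming the residual back-projection and normalization weights are computed in earlier kernels (names are illustrative):

        // x += lambda * backproj / weight, with lambda the (adaptive)
        // relaxation factor; one thread per grid point.
        __global__ void sirt_update(int n, float lambda, const float *backproj,
                                    const float *weight, float *x)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) x[i] += lambda * backproj[i] / fmaxf(weight[i], 1e-12f);
        }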

  13. SraTailor: graphical user interface software for processing and visualizing ChIP-seq data.

    PubMed

    Oki, Shinya; Maehara, Kazumitsu; Ohkawa, Yasuyuki; Meno, Chikara

    2014-12-01

    Raw data from ChIP-seq (chromatin immunoprecipitation combined with massively parallel DNA sequencing) experiments are deposited in public databases as SRAs (Sequence Read Archives) that are publicly available to all researchers. However, to graphically visualize ChIP-seq data of interest, the corresponding SRAs must be downloaded and converted into BigWig format, a process that involves complicated command-line processing. This task requires users to possess skill with scripting languages and sequence data processing, a requirement that prevents a wide range of biologists from exploiting SRAs. To address these challenges, we developed SraTailor, a GUI (Graphical User Interface) software package that automatically converts an SRA into a BigWig-formatted file. Simplicity of use is one of the most notable features of SraTailor: entering the accession number of an SRA and clicking the mouse are the only steps required to obtain BigWig-formatted files and to graphically visualize the extents of reads at given loci. SraTailor is also able to make peak calls, generate files of other formats, process users' own data, and accept various command-line-like options. Therefore, this software makes ChIP-seq data fully exploitable by a wide range of biologists. SraTailor is freely available at http://www.devbio.med.kyushu-u.ac.jp/sra_tailor/, and runs on both Mac and Windows machines.

  14. Graphics Processing Unit-Based Bioheat Simulation to Facilitate Rapid Decision Making Associated with Cryosurgery Training.

    PubMed

    Keelan, Robert; Zhang, Hong; Shimada, Kenji; Rabin, Yoed

    2016-04-01

    This study focuses on the implementation of an efficient numerical technique for cryosurgery simulations on a graphics processing unit as an alternative means to accelerate runtime. This study is part of an ongoing effort to develop computerized training tools for cryosurgery, with prostate cryosurgery as a developmental model. The ability to perform rapid simulations of various test cases is critical to facilitate sound decision making associated with medical training. Consistent with clinical practice, the training tool aims at correlating the frozen region contour and the corresponding temperature field with the target region shape. The current study focuses on the feasibility of graphics processing unit-based computation using C++ Accelerated Massive Parallelism (C++ AMP), as one possible implementation. Benchmark results on a variety of computation platforms display between 3-fold (laptop) and 13-fold (gaming computer) acceleration of cryosurgery simulation, in comparison with the more common implementation on a multicore central processing unit. While the general concept of graphics processing unit-based simulations is not new, its application to phase-change problems, combined with the unique requirements of cryosurgery optimization, represents the core contribution of the current study.
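
    The study itself uses C++ AMP; purely to illustrate why bioheat simulation parallelizes well, the following is a hedged CUDA sketch of one explicit finite-difference step of a Pennes-type bioheat equation on a 2D grid. It omits the phase-change (freezing-front) treatment that is central to the paper's cryosurgery model.

      #include <cuda_runtime.h>

      // One explicit step of a Pennes-type bioheat equation on a 2D grid.
      // alpha is the thermal diffusivity, perf lumps the blood-perfusion
      // coefficient w_b*c_b/(rho*c), and Ta is the arterial temperature.
      __global__ void bioheatStep(const float *T, float *Tnew, int nx, int ny,
                                  float alpha, float perf, float Ta,
                                  float dt, float dx)
      {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          int j = blockIdx.y * blockDim.y + threadIdx.y;
          if (i <= 0 || j <= 0 || i >= nx - 1 || j >= ny - 1) return;
          int id = j * nx + i;
          float lap = (T[id - 1] + T[id + 1] + T[id - nx] + T[id + nx]
                       - 4.0f * T[id]) / (dx * dx);
          Tnew[id] = T[id] + dt * (alpha * lap + perf * (Ta - T[id]));
      }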

  15. Impact of memory bottleneck on the performance of graphics processing units

    NASA Astrophysics Data System (ADS)

    Son, Dong Oh; Choi, Hong Jun; Kim, Jong Myon; Kim, Cheol Hong

    2015-12-01

    Recent graphics processing units (GPUs) can process general-purpose applications as well as graphics applications with the help of various user-friendly application programming interfaces (APIs) supported by GPU vendors. Unfortunately, utilizing the hardware resources in the GPU efficiently is a challenging problem, since the GPU architecture is quite different from the traditional CPU architecture. To address this problem, many studies have focused on techniques for improving system performance using GPUs. In this work, we analyze GPU performance while varying GPU parameters such as the number of cores and the clock frequency. According to our simulations, GPU performance can be improved by 125.8% and 16.2% on average as the number of cores and the clock frequency increase, respectively. However, performance saturates when memory bottlenecks occur due to the volume of data requests to the memory. The performance of GPUs can be improved further as the memory bottleneck is reduced by changing GPU parameters dynamically.

  16. TRIIG - Time-lapse reproduction of images through interactive graphics [digital processing of quality hard copy]

    NASA Technical Reports Server (NTRS)

    Buckner, J. D.; Council, H. W.; Edwards, T. R.

    1974-01-01

    Description of the hardware and software implementing the system of time-lapse reproduction of images through interactive graphics (TRIIG). The system produces a quality hard copy of processed images in a fast and inexpensive manner. This capability allows for optimal development of processing software through the rapid viewing of many image frames in an interactive mode. Three critical optical devices are used to reproduce an image: an Optronics photo reader/writer, the Adage Graphics Terminal, and Polaroid Type 57 high-speed film. Typical sources of digitized images are observation satellites, such as ERTS or Mariner, computer-coupled electron microscopes for high-magnification studies, or computer-coupled X-ray devices for medical research.

  17. Signal processing for ion mobility spectrometers

    NASA Technical Reports Server (NTRS)

    Taylor, S.; Hinton, M.; Turner, R.

    1995-01-01

    Signal processing techniques for systems based upon ion mobility spectrometry will be discussed in the light of 10 years of experience in the design of real-time IMS. Among the topics to be covered are compensation techniques for variations in the number density of the gas, such as the use of an internal standard (a reference peak) or pressure and temperature sensors. Sources of noise and methods for noise reduction will be discussed, together with resolution limitations and the ability of deconvolution techniques to improve resolving power. The use of neural networks (either by themselves or as a component of a processing system) will be reviewed.

  19. Process analysis using ion mobility spectrometry.

    PubMed

    Baumbach, J I

    2006-03-01

    Ion mobility spectrometry, originally used to detect chemical warfare agents, explosives and illegal drugs, is now frequently applied in the field of process analytics. The method combines both high sensitivity (detection limits down to the ng to pg per liter and ppb(v)/ppt(v) ranges) and relatively low technical expenditure with high-speed data acquisition. In this paper, the working principles of IMS are summarized with respect to the advantages and disadvantages of the technique. Different ionization techniques, sample introduction methods and preseparation methods are considered. Proven applications of different types of ion mobility spectrometer (IMS) used at ISAS will be discussed in detail: monitoring of gas-insulated substations, contamination in water, odorization of natural gas, human breath composition and metabolites of bacteria. The example applications discussed relate to purity (gas-insulated substations), ecology (contamination of water resources), plant and personnel safety (odorization of natural gas), food quality control (molds and bacteria) and human health (breath analysis). PMID:16132133

  20. RF Processing the NLCTA Injector Using Real Time Graphical Vacuum Displays

    NASA Astrophysics Data System (ADS)

    Gold, Saul L.

    1997-05-01

    One of the objectives of the NLCTA is to demonstrate the reliable operation of high-peak-power X-band RF transmission and acceleration systems. RF processing is an important function in this endeavor. The first klystron, pulse compression (SLEDII) and injector accelerator sections were processed to 50 MW SLED input power, with a power multiplication at the output of SLEDII of almost 4. This paper describes RF processing using real-time graphical instrumentation that allows system vacuum levels and RF breakdown to be viewed and recorded.

  1. Lamb wave propagation modelling and simulation using parallel processing architecture and graphical cards

    NASA Astrophysics Data System (ADS)

    Paćko, P.; Bielak, T.; Spencer, A. B.; Staszewski, W. J.; Uhl, T.; Worden, K.

    2012-07-01

    This paper demonstrates new parallel computation technology and an implementation for Lamb wave propagation modelling in complex structures. A graphical processing unit (GPU) and compute unified device architecture (CUDA), available in low-cost graphical cards in standard PCs, are used for Lamb wave propagation numerical simulations. The local interaction simulation approach (LISA) wave propagation algorithm has been implemented as an example; other algorithms suitable for parallel discretization can also be used in practice. The method is illustrated using examples related to damage detection. The results demonstrate good accuracy and effective computational performance for very large models. The wave propagation modelling presented in the paper can be used in many practical applications in science and engineering.
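
    LISA updates every grid cell from its neighbors' values at earlier time steps, which is exactly the stencil pattern GPUs execute well. As a simplified illustration (not the authors' code, and for a homogeneous medium only, whereas LISA's strength is handling material interfaces), a leapfrog update of the 2D scalar wave equation looks like this:

      #include <cuda_runtime.h>

      // Leapfrog update of the 2D scalar wave equation; c2 = (c * dt / dx)^2.
      __global__ void waveStep(const float *uPrev, const float *uCurr,
                               float *uNext, int nx, int ny, float c2)
      {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          int j = blockIdx.y * blockDim.y + threadIdx.y;
          if (i <= 0 || j <= 0 || i >= nx - 1 || j >= ny - 1) return;
          int id = j * nx + i;
          float lap = uCurr[id - 1] + uCurr[id + 1]
                    + uCurr[id - nx] + uCurr[id + nx] - 4.0f * uCurr[id];
          uNext[id] = 2.0f * uCurr[id] - uPrev[id] + c2 * lap;
      }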

  2. A Fast MHD Code for Gravitationally Stratified Media using Graphical Processing Units: SMAUG

    NASA Astrophysics Data System (ADS)

    Griffiths, M. K.; Fedun, V.; Erdélyi, R.

    2015-03-01

    Parallelization techniques have been exploited most successfully by the gaming/graphics industry with the adoption of graphical processing units (GPUs), possessing hundreds of processor cores. The opportunity has been recognized by the computational sciences and engineering communities, who have recently successfully harnessed the numerical performance of GPUs. For example, parallel magnetohydrodynamic (MHD) algorithms are important for numerical modelling of highly inhomogeneous solar, astrophysical and geophysical plasmas. Here, we describe the implementation of SMAUG, the Sheffield Magnetohydrodynamics Algorithm Using GPUs. SMAUG is a 1-3D MHD code capable of modelling magnetized and gravitationally stratified plasma. The objective of this paper is to present the numerical methods and techniques used for porting the code to this novel and highly parallel compute architecture. The methods employed are justified by the performance benchmarks and validation results demonstrating that the code successfully simulates the physics for a range of test scenarios, including a full 3D realistic model of wave propagation in the solar atmosphere.

  3. Fast extended focused imaging in digital holography using a graphics processing unit.

    PubMed

    Wang, Le; Zhao, Jianlin; Di, Jianglei; Jiang, Hongzhen

    2011-05-01

    We present a simple and effective method for reconstructing extended focused images in digital holography using a graphics processing unit (GPU). The Fresnel transform method is simplified by an algorithm we call fast Fourier transform pruning with frequency shift. The pixel-size consistency problem is then solved by coordinate transformation, combining subpixel resampling with the pruned transform. With the assistance of the GPU, we implemented an improved parallel version of this method, which obtained about a 300-500-fold speedup compared with central processing unit code.

  4. Fast high-resolution computer-generated hologram computation using multiple graphics processing unit cluster system.

    PubMed

    Takada, Naoki; Shimobaba, Tomoyoshi; Nakayama, Hirotaka; Shiraki, Atsushi; Okada, Naohisa; Oikawa, Minoru; Masuda, Nobuyuki; Ito, Tomoyoshi

    2012-10-20

    To overcome the computational complexity of a computer-generated hologram (CGH), we implement an optimized CGH computation in our multi-graphics processing unit cluster system. Our system can calculate a CGH of 6,400×3,072 pixels from a three-dimensional (3D) object composed of 2,048 points in 55 ms. Furthermore, in the case of a 3D object composed of 4,096 points, our system is 553 times faster than a conventional central processing unit (using eight threads).
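
    The dominant cost in point-source CGH is, for every hologram pixel, a sum of Fresnel phase terms over all object points; since pixels are independent, one thread per pixel is the natural mapping. A simplified single-GPU sketch under that assumption (names illustrative; the paper's optimized multi-GPU pipeline is considerably more involved):

      #include <cuda_runtime.h>
      #include <math.h>

      // One thread per hologram pixel; obj holds (x, y, z) of each object point.
      // pitch is the pixel pitch in meters and invLambda = 1 / wavelength.
      __global__ void cghKernel(const float3 *obj, int nPoints, float *hologram,
                                int width, int height, float pitch, float invLambda)
      {
          int x = blockIdx.x * blockDim.x + threadIdx.x;
          int y = blockIdx.y * blockDim.y + threadIdx.y;
          if (x >= width || y >= height) return;
          float xh = (x - width / 2) * pitch, yh = (y - height / 2) * pitch;
          float sum = 0.0f;
          for (int j = 0; j < nPoints; ++j) {              // superpose point sources
              float dx = xh - obj[j].x, dy = yh - obj[j].y;
              float phase = 3.14159265f * invLambda * (dx * dx + dy * dy) / obj[j].z;
              sum += cosf(phase);                          // Fresnel (paraxial) term
          }
          hologram[y * width + x] = sum;
      }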

  6. Graphics processing unit-assisted density profile calculations in the KSTAR reflectometer.

    PubMed

    Seo, Seong-Heon; Oh, Dong Keun

    2014-11-01

    The wavelet transform (WT) is widely used in signal processing. The frequency-modulation reflectometer in KSTAR applies this technique to extract phase information from the mixer output measurements. Since the WT is a time-consuming process, it is difficult to calculate the density profile in real time. The data analysis time, however, can be significantly reduced by using the graphics processing unit (GPU), with its powerful computing capability, for the WT. A bottleneck in the KSTAR data processing exists in the data input and output (IO) between the CPU and its peripheral devices. In this paper, the details of the GPU-assisted WT implementation in the KSTAR reflectometer are presented and the consequent performance improvement is reported. Real-time density profile calculation from the reflectometer measurements is also discussed. PMID:25430234

  7. Systems Biology Graphical Notation: Process Description language Level 1 Version 1.3.

    PubMed

    Moodie, Stuart; Le Novère, Nicolas; Demir, Emek; Mi, Huaiyu; Villéger, Alice

    2015-01-01

    The Systems Biology Graphical Notation (SBGN) is an international community effort for standardized graphical representations of biological pathways and networks. The goal of SBGN is to provide unambiguous pathway and network maps for readers with different scientific backgrounds as well as to support efficient and accurate exchange of biological knowledge between different research communities, industry, and other players in systems biology. Three SBGN languages, Process Description (PD), Entity Relationship (ER) and Activity Flow (AF), allow for the representation of different aspects of biological and biochemical systems at different levels of detail. The SBGN Process Description language represents biological entities and processes between these entities within a network. SBGN PD focuses on the mechanistic description and temporal dependencies of biological interactions and transformations. The nodes (elements) are split into entity nodes describing, e.g., metabolites, proteins, genes and complexes, and process nodes describing, e.g., reactions and associations. The edges (connections) provide descriptions of relationships (or influences) between the nodes, such as consumption, production, stimulation and inhibition. Among all three languages of SBGN, PD is the closest to metabolic and regulatory pathways in biological literature and textbooks, but its well-defined semantics offer superior precision in expressing biological knowledge. PMID:26528561

  9. Creating Interactive Graphical Overlays in the Advanced Weather Interactive Processing System Using Shapefiles and DGM Files

    NASA Technical Reports Server (NTRS)

    Barrett, Joe H., III; Lafosse, Richard; Hood, Doris; Hoeth, Brian

    2007-01-01

    Graphical overlays can be created in real time in the Advanced Weather Interactive Processing System (AWIPS) using shapefiles or Denver AWIPS Risk Reduction and Requirements Evaluation (DARE) Graphics Metafile (DGM) files. This presentation describes how to create graphical overlays on-the-fly for AWIPS, using two examples of AWIPS applications that were created by the Applied Meteorology Unit (AMU) located at Cape Canaveral Air Force Station (CCAFS), Florida. The first example is the Anvil Threat Corridor Forecast Tool, which produces a shapefile that depicts a graphical threat corridor of the forecast movement of thunderstorm anvil clouds, based on the observed or forecast upper-level winds. This tool is used by the Spaceflight Meteorology Group (SMG) at Johnson Space Center, Texas, and the 45th Weather Squadron (45 WS) at CCAFS to analyze the threat of natural or space vehicle-triggered lightning over a location. The second example is a launch and landing trajectory tool that produces a DGM file that plots the ground track of space vehicles during launch or landing. The trajectory tool can be used by SMG and 45 WS forecasters to analyze weather radar imagery along a launch or landing trajectory. The presentation will list the advantages and disadvantages of both file types for creating interactive graphical overlays in future AWIPS applications. Shapefiles are a popular format used extensively in Geographical Information Systems. They are usually used in AWIPS to depict static map backgrounds. A shapefile stores the geometry and attribute information of spatial features in a dataset (ESRI 1998). Shapefiles can contain point, line, and polygon features. Each shapefile contains a main file, index file, and a dBASE table. The main file contains a record for each spatial feature, which describes the feature with a list of its vertices. The index file contains the offset of each record from the beginning of the main file. The dBASE table contains a record for each spatial feature, storing the feature's attribute information.

  10. Efficient neighbor list calculation for molecular simulation of colloidal systems using graphics processing units

    NASA Astrophysics Data System (ADS)

    Howard, Michael P.; Anderson, Joshua A.; Nikoubashman, Arash; Glotzer, Sharon C.; Panagiotopoulos, Athanassios Z.

    2016-06-01

    We present an algorithm based on linear bounding volume hierarchies (LBVHs) for computing neighbor (Verlet) lists using graphics processing units (GPUs) for colloidal systems characterized by large size disparities. We compare this to a GPU implementation of the current state-of-the-art CPU algorithm based on stenciled cell lists. We report benchmarks for both neighbor list algorithms in a Lennard-Jones binary mixture with synthetic interaction range disparity and a realistic colloid solution. LBVHs outperformed the stenciled cell lists for systems with moderate or large size disparity and dilute or semidilute fractions of large particles, conditions typical of colloidal systems.

  11. FAST TRACK COMMUNICATION: Integrating post-Newtonian equations on graphics processing units

    NASA Astrophysics Data System (ADS)

    Herrmann, Frank; Silberholz, John; Bellone, Matías; Guerberoff, Gustavo; Tiglio, Manuel

    2010-02-01

    We report on early results of a numerical and statistical study of binary black hole inspirals. The two black holes are evolved using post-Newtonian approximations starting with initially randomly distributed spin vectors. We characterize certain aspects of the distribution shortly before merger. In particular we note the uniform distribution of black hole spin vector dot products shortly before merger and a high correlation between the initial and final black hole spin vector dot products in the equal-mass, maximally spinning case. More than 300 million simulations were performed on graphics processing units, and we demonstrate a speed-up by a factor of 50 over a more conventional CPU implementation.

  12. Solution of relativistic quantum optics problems using clusters of graphical processing units

    SciTech Connect

    Gordon, D.F.; Hafizi, B.; Helle, M.H.

    2014-06-15

    Numerical solution of relativistic quantum optics problems requires high-performance computing due to the rapid oscillations in a relativistic wavefunction. Clusters of graphical processing units are used to accelerate the computation of a time-dependent relativistic wavefunction in an arbitrary external potential. The stationary states in a Coulomb potential and uniform magnetic field are determined analytically and numerically, so that they can be used as initial conditions in fully time-dependent calculations. Relativistic energy levels in extreme magnetic fields are recovered as a means of validation. The relativistic ionization rate is computed for an ion illuminated by a laser field near the usual barrier suppression threshold, and the ionizing wavefunction is displayed.

  13. Graphics processing unit-accelerated double random phase encoding for fast image encryption

    NASA Astrophysics Data System (ADS)

    Lee, Jieun; Yi, Faliu; Saifullah, Rao; Moon, Inkyu

    2014-11-01

    We propose a fast double random phase encoding (DRPE) algorithm using a graphics processing unit (GPU)-based stream-processing model. A performance analysis of the accelerated DRPE implementation that employs the Compute Unified Device Architecture programming environment is presented. We show that the proposed methodology executed on a GPU can dramatically increase encryption speed compared with central processing unit sequential computing. Our experimental results demonstrate that, when encrypting an image of 1000×1000 pixels with a 32-bit depth per pixel, our GPU version of the DRPE scheme can be approximately two times faster than the advanced encryption standard algorithm implemented on a GPU. In addition, the quality of parallel processing in the presented DRPE acceleration method is evaluated with performance parameters such as speedup, efficiency, and redundancy.
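
    DRPE itself consists of two random-phase multiplications interleaved with a forward and an inverse Fourier transform, all of which map directly onto the GPU. A minimal sketch using cuFFT is shown below (error handling omitted; the paper's stream-processing pipeline and its AES comparison are not reproduced here):

      #include <cufft.h>
      #include <cuda_runtime.h>

      // Multiply each sample by exp(i * 2 * pi * phi[i]) in place.
      __global__ void mulPhase(cufftComplex *a, const float *phi, int n)
      {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i >= n) return;
          float s, c;
          sincosf(6.2831853f * phi[i], &s, &c);
          cufftComplex v = a[i];
          a[i] = make_cuComplex(v.x * c - v.y * s, v.x * s + v.y * c);
      }

      // DRPE outline for an N x N complex image already on the device.
      // dPhi1/dPhi2 are the two uniform random phase masks in [0, 1).
      // (cuFFT leaves the inverse unscaled; divide by N*N if normalization
      // is required.)
      void drpeEncrypt(cufftComplex *dImg, const float *dPhi1,
                       const float *dPhi2, int N)
      {
          cufftHandle plan;
          cufftPlan2d(&plan, N, N, CUFFT_C2C);
          int n = N * N, threads = 256, blocks = (n + threads - 1) / threads;
          mulPhase<<<blocks, threads>>>(dImg, dPhi1, n);   // input-plane mask
          cufftExecC2C(plan, dImg, dImg, CUFFT_FORWARD);   // to the Fourier plane
          mulPhase<<<blocks, threads>>>(dImg, dPhi2, n);   // Fourier-plane mask
          cufftExecC2C(plan, dImg, dImg, CUFFT_INVERSE);   // back to the output plane
          cufftDestroy(plan);
      }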

  14. General Purpose Graphics Processing Unit Based High-Rate Rice Decompression and Reed-Solomon Decoding.

    SciTech Connect

    Loughry, Thomas A.

    2015-02-01

    As the volume of data acquired by space-based sensors increases, mission data compression/decompression and forward error correction code processing performance must scale likewise. This competency development effort explored using the General Purpose Graphics Processing Unit (GPGPU) to accomplish high-rate Rice decompression and high-rate Reed-Solomon (RS) decoding at the satellite mission ground station. Each algorithm was implemented and benchmarked on a single GPGPU, and distributed processing across one to four GPGPUs was also investigated. The results show that the GPGPU has considerable potential for performing satellite communication data signal processing, with three-fold or better performance improvements and up to a ten-fold reduction in cost over custom hardware, at least in the case of Rice decompression and Reed-Solomon decoding.

  15. Implementation and performance of a general purpose graphics processing unit in hyperspectral image analysis

    NASA Astrophysics Data System (ADS)

    van der Werff, H. M. A.; Bakker, W. H.

    2014-02-01

    A graphics processing unit (GPU) can perform massively parallel computations at relatively low cost. Software interfaces like NVIDIA CUDA allow for general-purpose computing on a GPU (GPGPU). Wrappers of the CUDA libraries for higher-level programming languages such as MATLAB and IDL allow its use in image processing. In this paper, we implement GPGPU in IDL with two distance measures frequently used in image classification, Euclidean distance and spectral angle, and apply these to hyperspectral imagery. First, we vary the data volume of a synthetic dataset by changing the number of image pixels, spectral bands and classification endmembers to determine speed-up and to find the smallest data volume that would still benefit from using graphics hardware. Then we process real datasets that are too large to fit in the GPU memory, and study the effect of the resulting extra data transfers on computing performance. We show that our GPU algorithms outperform the same algorithms for a central processing unit (CPU), that a significant speed-up can already be obtained on relatively small datasets, and that data transfers in large datasets do not significantly influence performance. Given that no specific knowledge of parallel computing is required for this implementation, remote sensing scientists should now be able to implement and use GPGPU for their data analysis.
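
    The paper drives CUDA through IDL wrappers; the underlying per-pixel computation is easier to see in a plain CUDA kernel. A sketch of the spectral angle measure against a single endmember, assuming a band-interleaved-by-pixel layout, might read:

      #include <cuda_runtime.h>
      #include <math.h>

      // Spectral angle of every pixel against one endmember spectrum.
      // img is band-interleaved-by-pixel: pixel p, band b at img[p * nBands + b].
      __global__ void spectralAngle(const float *img, const float *endmember,
                                    float *angle, int nPixels, int nBands)
      {
          int p = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per pixel
          if (p >= nPixels) return;
          float dot = 0.0f, nx = 0.0f, ne = 0.0f;
          for (int b = 0; b < nBands; ++b) {
              float x = img[p * nBands + b], e = endmember[b];
              dot += x * e; nx += x * x; ne += e * e;
          }
          angle[p] = acosf(dot * rsqrtf(nx) * rsqrtf(ne)); // angle in radians
      }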

  16. Modified graphical autocatalytic set model of combustion process in circulating fluidized bed boiler

    NASA Astrophysics Data System (ADS)

    Yusof, Nurul Syazwani; Bakar, Sumarni Abu; Ismail, Razidah

    2014-07-01

    A Circulating Fluidized Bed Boiler (CFB) is a device for generating steam by burning fossil fuels in a furnace operating under special hydrodynamic conditions. An autocatalytic set has provided a graphical model of the chemical reactions that occur during the combustion process in a CFB: eight important chemical substances, known as species, are represented as nodes, and the catalytic relationships between nodes are represented by the edges of the graph. In this paper, the model is extended and modified by considering other relevant chemical reactions that also occur during the process. The catalytic relationships among the species in the model are discussed. The results reveal that the modified model gives a fuller account of the relationships among the species during the process at the initial time t.

  17. BarraCUDA - a fast short read sequence aligner using graphics processing units

    PubMed Central

    2012-01-01

    Background With the maturation of next-generation DNA sequencing (NGS) technologies, the throughput of DNA sequencing reads has soared to over 600 gigabases from a single instrument run. General-purpose computing on graphics processing units (GPGPU) extracts the computing power from hundreds of parallel stream processors within graphics processing cores and provides a cost-effective and energy-efficient alternative to traditional high-performance computing (HPC) clusters. In this article, we describe the implementation of BarraCUDA, a GPGPU sequence alignment software package based on BWA, to accelerate the alignment of sequencing reads generated by these instruments to a reference DNA sequence. Findings Using the NVIDIA Compute Unified Device Architecture (CUDA) software development environment, we ported the most computationally intensive alignment component of BWA to the GPU to take advantage of its massive parallelism. As a result, BarraCUDA offers an order-of-magnitude boost in alignment throughput compared to a CPU core while delivering the same level of alignment fidelity. The software is also capable of supporting multiple CUDA devices in parallel to further accelerate alignment throughput. Conclusions BarraCUDA is designed to take advantage of GPU parallelism to accelerate the alignment of millions of sequencing reads generated by NGS instruments. By doing this, we could, at least in part, streamline the current bioinformatics pipeline such that the wider scientific community could benefit from the sequencing technology. BarraCUDA is currently available from http://seqbarracuda.sf.net PMID:22244497

  18. Computation of Large Covariance Matrices by SAMMY on Graphical Processing Units and Multicore CPUs

    SciTech Connect

    Arbanas, Goran; Dunn, Michael E; Wiarda, Dorothea

    2011-01-01

    The computational power of graphical processing units and multicore CPUs was harnessed by the nuclear data evaluation code SAMMY to speed up computations of large Resonance Parameter Covariance Matrices (RPCMs). This was accomplished by linking SAMMY to vendor-optimized implementations of the matrix-matrix multiplication subroutine of the Basic Linear Algebra Subprograms (BLAS) library to compute the most time-consuming step. The U-235 RPCM, computed previously using a triple-nested loop, was re-computed using the NVIDIA implementation of the subroutine on a single Tesla Fermi graphical processing unit, and also using Intel's Math Kernel Library implementation on two different multicore CPU systems. A multiplication of two matrices of dimensions 16,000 x 20,000 that had previously taken days took approximately one minute on the GPU. Similar performance was achieved on a dual six-core CPU system. The magnitude of the speed-up suggests that these, or similar, combinations of hardware and libraries may be useful for large matrix operations in SAMMY. The uniform interfaces of standard linear algebra libraries make them a promising candidate for a programming framework of a new generation of SAMMY for the emerging heterogeneous computing platforms.
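
    On NVIDIA hardware, the described approach (replacing a triple-nested loop with a vendor-optimized matrix-matrix multiplication) corresponds to a single cuBLAS GEMM call. A minimal sketch, assuming device buffers are already allocated and filled in the column-major layout cuBLAS requires:

      #include <cublas_v2.h>

      // C = A * B for column-major A (m x k), B (k x n), C (m x n),
      // all already resident on the device.
      void gemmOnGpu(const double *dA, const double *dB, double *dC,
                     int m, int n, int k)
      {
          cublasHandle_t handle;
          cublasCreate(&handle);
          const double alpha = 1.0, beta = 0.0;
          cublasDgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, m, n, k,
                      &alpha, dA, m, dB, k, &beta, dC, m);
          cublasDestroy(handle);
      }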

  19. 76 FR 70490 - Certain Electronic Devices With Graphics Data Processing Systems, Components Thereof, and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-14

    ... September 22, 2011, under section 337 of the Tariff Act of 1930, as amended, 19 U.S.C. 1337, on behalf of S3 Graphics Co., Ltd. of British West Indies and S3 Graphics, Inc. of Fremont, California. ] An amended... notice of investigation shall be served: (a) The complainants are: S3 Graphics Co., Ltd., 2nd...

  20. Acceleration of Early-Photon Fluorescence Molecular Tomography with Graphics Processing Units

    PubMed Central

    Wang, Xin; Zhang, Bin; Cao, Xu; Liu, Fei; Luo, Jianwen; Bai, Jing

    2013-01-01

    Fluorescence molecular tomography (FMT) with early photons can improve the spatial resolution and fidelity of the reconstructed results. However, its computational cost is large, which limits its applications. In this paper, we introduce an acceleration strategy for early-photon FMT using graphics processing units (GPUs). Following the solution procedure, the whole FMT computation was divided into several modules and the time consumption of each module was studied. In this strategy, the two most time-consuming modules (the Gd and W modules) were accelerated on the GPU, while the other modules remained coded in MATLAB. Several simulation studies with a heterogeneous digital mouse atlas were performed to confirm the performance of the acceleration strategy. The results confirmed the feasibility of the strategy and showed that the processing speed was improved significantly. PMID:23606899

  1. Fast direct reconstruction strategy of dynamic fluorescence molecular tomography using graphics processing units

    NASA Astrophysics Data System (ADS)

    Chen, Maomao; Zhang, Jiulou; Cai, Chuangjian; Gao, Yang; Luo, Jianwen

    2016-06-01

    Dynamic fluorescence molecular tomography (DFMT) is a valuable method to evaluate the metabolic process of contrast agents in different organs in vivo, and direct reconstruction methods can improve the temporal resolution of DFMT. However, challenges still remain due to the large time consumption of the direct reconstruction methods. An acceleration strategy using graphics processing units (GPU) is presented. The conjugate gradient optimization procedure in the direct reconstruction method is programmed using the compute unified device architecture and then accelerated on the GPU. Numerical simulations and in vivo experiments are performed to validate the feasibility of the strategy. The results demonstrate that, compared with the traditional method, the proposed strategy can reduce the time consumption by ~90% without degradation of quality.

  2. Simplified electroholographic color reconstruction system using graphics processing unit and liquid crystal display projector.

    PubMed

    Shiraki, Atsushi; Takada, Naoki; Niwa, Masashi; Ichihashi, Yasuyuki; Shimobaba, Tomoyoshi; Masuda, Nobuyuki; Ito, Tomoyoshi

    2009-08-31

    We have constructed a simple color electroholography system that has excellent cost performance. It uses a graphics processing unit (GPU) and a liquid crystal display (LCD) projector. The structure of the GPU is suitable for calculating computer-generated holograms (CGHs); the calculation speed of the GPU is approximately 1,500 times faster than that of a central processing unit. The LCD projector is an inexpensive, high-performance device for displaying CGHs. It has high-definition LCD panels for red, green and blue, so it can be easily used for color electroholography. For a three-dimensional object consisting of 1,000 points, our system succeeded in real-time color holographic reconstruction at a rate of 30 frames per second.

  3. Real-time digital holographic microscopy using the graphic processing unit.

    PubMed

    Shimobaba, Tomoyoshi; Sato, Yoshikuni; Miura, Junya; Takenouchi, Mai; Ito, Tomoyoshi

    2008-08-01

    Digital holographic microscopy (DHM) is a well-known powerful method allowing both the amplitude and phase of a specimen to be simultaneously observed. In order to obtain a reconstructed image from a hologram, numerous calculations of the Fresnel diffraction are required. The Fresnel diffraction can be accelerated by the fast Fourier transform (FFT) algorithm. However, real-time reconstruction from a hologram is difficult even if a recent central processing unit (CPU) is used to calculate the Fresnel diffraction by the FFT algorithm. In this paper, we describe a real-time DHM system using a graphic processing unit (GPU) with many stream processors, which serves as a highly parallel processor. The computational speed of the Fresnel diffraction using the GPU is faster than that of recent CPUs. The real-time DHM system can obtain reconstructed images from holograms of 512 x 512 grid points at 24 frames per second.

  4. Computation of the Density Matrix in Electronic Structure Theory in Parallel on Multiple Graphics Processing Units.

    PubMed

    Cawkwell, M J; Wood, M A; Niklasson, Anders M N; Mniszewski, S M

    2014-12-01

    The algorithm developed in Cawkwell, M. J. et al. [J. Chem. Theory Comput. 2012, 8, 4094] for the computation of the density matrix in electronic structure theory on a graphics processing unit (GPU) using the second-order spectral projection (SP2) method [Niklasson, A. M. N. Phys. Rev. B 2002, 66, 155115] has been efficiently parallelized over multiple GPUs on a single compute node. The parallel implementation provides significant speed-ups with respect to the single-GPU version with no loss of accuracy. The performance and accuracy of the parallel GPU-based algorithm are compared with the performance of the SP2 algorithm and traditional matrix diagonalization methods on a multicore central processing unit (CPU).
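
    The SP2 recursion needs only matrix multiplications and traces, which is why it offloads so cleanly. The following single-GPU sketch of the branch logic uses cuBLAS with a deliberately naive trace kernel; the published multi-GPU version distributes the multiplications across devices, which this sketch does not attempt.

      #include <cublas_v2.h>
      #include <cuda_runtime.h>
      #include <cmath>
      #include <utility>

      // Naive single-thread trace of a column-major n x n matrix; fine for a sketch.
      __global__ void traceKernel(const double *X, int n, double *tr)
      {
          if (blockIdx.x == 0 && threadIdx.x == 0) {
              double s = 0.0;
              for (int i = 0; i < n; ++i) s += X[i * n + i];
              *tr = s;
          }
      }

      // SP2 sketch: drive trace(X) toward the occupation count Ne by choosing
      // X <- X^2 or X <- 2X - X^2 each iteration. dX must hold a matrix whose
      // eigenvalues are prescaled into [0, 1]; dX2 is scratch of the same size.
      // (A real code would also track which buffer holds X on exit.)
      void sp2Iterations(cublasHandle_t h, double *dX, double *dX2, int n,
                         double Ne, int maxIter)
      {
          const double one = 1.0, zero = 0.0, two = 2.0, mone = -1.0;
          double *dTr, tr, tr2;
          cudaMalloc(&dTr, sizeof(double));
          for (int it = 0; it < maxIter; ++it) {
              traceKernel<<<1, 1>>>(dX, n, dTr);
              cudaMemcpy(&tr, dTr, sizeof(double), cudaMemcpyDeviceToHost);
              cublasDgemm(h, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                          &one, dX, n, dX, n, &zero, dX2, n);      // X2 = X * X
              traceKernel<<<1, 1>>>(dX2, n, dTr);
              cudaMemcpy(&tr2, dTr, sizeof(double), cudaMemcpyDeviceToHost);
              if (std::fabs(tr2 - Ne) < std::fabs(2.0 * tr - tr2 - Ne))
                  std::swap(dX, dX2);                              // X <- X^2
              else                                                 // X <- 2X - X^2 (in place)
                  cublasDgeam(h, CUBLAS_OP_N, CUBLAS_OP_N, n, n,
                              &two, dX, n, &mone, dX2, n, dX, n);
          }
          cudaFree(dTr);
      }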

  5. Denoising NMR time-domain signal by singular-value decomposition accelerated by graphics processing units.

    PubMed

    Man, Pascal P; Bonhomme, Christian; Babonneau, Florence

    2014-01-01

    We present a post-processing method that decreases the NMR spectrum noise without line-shape distortion, thereby increasing the signal-to-noise (S/N) ratio of a spectrum. This method, called the Cadzow enhancement procedure, is based on the singular-value decomposition of the time-domain signal. We also provide software whose execution takes only a few seconds for typical data when run on a modern graphics processing unit. We tested this procedure not only on the low-sensitivity nucleus (29)Si in hybrid materials but also on the low-gyromagnetic-ratio quadrupolar nucleus (87)Sr in the reference sample Sr(NO3)2. Improving the spectrum S/N ratio facilitates the determination of the T/Q ratio of hybrid materials. The method is also applicable to simulated spectra, resulting in shorter simulation times for powder averaging. An estimate of the number of singular values needed for denoising is also provided. PMID:24880899

  6. High-speed nonlinear finite element analysis for surgical simulation using graphics processing units.

    PubMed

    Taylor, Z A; Cheng, M; Ourselin, S

    2008-05-01

    The use of biomechanical modelling, especially in conjunction with finite element analysis, has become common in many areas of medical image analysis and surgical simulation. Clinical employment of such techniques is hindered by conflicting requirements for high fidelity in the modelling approach, and fast solution speeds. We report the development of techniques for high-speed nonlinear finite element analysis for surgical simulation. We use a fully nonlinear total Lagrangian explicit finite element formulation which offers significant computational advantages for soft tissue simulation. However, the key contribution of the work is the presentation of a fast graphics processing unit (GPU) solution scheme for the finite element equations. To the best of our knowledge, this represents the first GPU implementation of a nonlinear finite element solver. We show that the present explicit finite element scheme is well suited to solution via highly parallel graphics hardware, and that even a midrange GPU allows significant solution speed gains (up to 16.8x) compared with equivalent CPU implementations. For the models tested the scheme allows real-time solution of models with up to 16,000 tetrahedral elements. The use of GPUs for such purposes offers a cost-effective high-performance alternative to expensive multi-CPU machines, and may have important applications in medical image analysis and surgical simulation. PMID:18450538

  7. Real-time nonlinear finite element analysis for surgical simulation using graphics processing units.

    PubMed

    Taylor, Zeike A; Cheng, Mario; Ourselin, Sébastien

    2007-01-01

    Clinical employment of biomechanical modelling techniques in areas of medical image analysis and surgical simulation is often hindered by conflicting requirements for high fidelity in the modelling approach and high solution speeds. We report the development of techniques for high-speed nonlinear finite element (FE) analysis for surgical simulation. We employ a previously developed nonlinear total Lagrangian explicit FE formulation which offers significant computational advantages for soft tissue simulation. However, the key contribution of the work is the presentation of a fast graphics processing unit (GPU) solution scheme for the FE equations. To the best of our knowledge this represents the first GPU implementation of a nonlinear FE solver. We show that the present explicit FE scheme is well-suited to solution via highly parallel graphics hardware, and that even a midrange GPU allows significant solution speed gains (up to 16.4x) compared with equivalent CPU implementations. For the models tested the scheme allows real-time solution of models with up to 16000 tetrahedral elements. The use of GPUs for such purposes offers a cost-effective high-performance alternative to expensive multi-CPU machines, and may have important applications in medical image analysis and surgical simulation. PMID:18051120

  8. Open-source graphics processing unit-accelerated ray tracer for optical simulation

    NASA Astrophysics Data System (ADS)

    Mauch, Florian; Gronle, Marc; Lyda, Wolfram; Osten, Wolfgang

    2013-05-01

    Ray tracing still is the workhorse in optical design and simulation. Its basic principle, propagating light as a set of mutually independent rays, implies a linear dependence of the computational effort on the number of rays involved in the problem. At the same time, the mutual independence of the light rays bears a huge potential for parallelization of the computational load. This potential has recently been recognized in the visualization community, where graphics processing unit (GPU)-accelerated ray tracing is used to render photorealistic images. However, precision requirements in optical simulation are substantially higher than in visualization, and therefore performance results known from visualization cannot be expected to transfer to optical simulation one-to-one. In this contribution, we present an open-source implementation of a GPU-accelerated ray tracer, based on NVIDIA's acceleration engine OptiX, that traces in double precision and exploits the massively parallel architecture of modern graphics cards. We compare its performance to a CPU-based tracer that was developed alongside it.

  9. Lossy hyperspectral image compression tuned for spectral mixture analysis applications on NVidia graphics processing units

    NASA Astrophysics Data System (ADS)

    Plaza, Antonio; Plaza, Javier; Sánchez, Sergio; Paz, Abel

    2009-08-01

    In this paper, we develop a computationally efficient approach for lossy compression of remotely sensed hyperspectral images which has been specifically tuned to preserve the relevant information required in spectral mixture analysis (SMA) applications. The proposed method is based on two steps: 1) endmember extraction, and 2) linear spectral unmixing. Two endmember extraction algorithms, the pixel purity index (PPI) and the automatic morphological endmember extraction (AMEE), and a fully constrained linear spectral unmixing (FCLSU) algorithm have been considered in this work to devise the proposed lossy compression strategy. The proposed methodology has been implemented on NVIDIA graphics processing units (GPUs). Our experiments demonstrate that it can achieve very high compression ratios when applied to standard hyperspectral data sets, and can also retain the relevant information required for spectral unmixing in a computationally efficient way, achieving speedups on the order of 26 on an NVIDIA GeForce 8800 GTX graphics card compared to an optimized implementation of the same code on a dual-core CPU.

  10. SU-E-P-59: A Graphical Interface for XCAT Phantom Configuration, Generation and Processing

    SciTech Connect

    Myronakis, M; Cai, W; Dhou, S; Cifter, F; Lewis, J; Hurwitz, M

    2015-06-15

    Purpose: To design a comprehensive open-source, publicly available, graphical user interface (GUI) to facilitate the configuration, generation, processing and use of the 4D Extended Cardiac-Torso (XCAT) phantom. Methods: The XCAT phantom includes over 9000 anatomical objects as well as respiratory, cardiac and tumor motion. It is widely used for research studies in medical imaging and radiotherapy. The phantom generation process involves the configuration of a text script to parameterize the geometry, motion, and composition of the whole body and objects within it, and to generate simulated PET or CT images. To avoid the need for manual editing or script writing, our MATLAB-based GUI uses slider controls, drop-down lists, buttons and graphical text input to parameterize and process the phantom. Results: Our GUI can be used to: a) generate parameter files; b) generate the voxelized phantom; c) combine the phantom with a lesion; d) display the phantom; e) produce average and maximum intensity images from the phantom output files; f) incorporate irregular patient breathing patterns; and g) generate DICOM files containing phantom images. The GUI provides local help information using tool-tip strings on the currently selected phantom, minimizing the need for external documentation. The DICOM generation feature is intended to simplify the process of importing the phantom images into radiotherapy treatment planning systems or other clinical software. Conclusion: The GUI simplifies and automates the use of the XCAT phantom for imaging-based research projects in medical imaging or radiotherapy. This has the potential to accelerate research conducted with the XCAT phantom, or to ease the learning curve for new users. This tool does not include the XCAT phantom software itself. We would like to acknowledge funding from MRA, Varian Medical Systems Inc.

  11. Massively Parallel Signal Processing using the Graphics Processing Unit for Real-Time Brain-Computer Interface Feature Extraction.

    PubMed

    Wilson, J Adam; Williams, Justin C

    2009-01-01

    The clock speeds of modern computer processors have nearly plateaued in the past 5 years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card [graphics processing unit (GPU)] was developed for real-time neural signal processing of a brain-computer interface (BCI). The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix-matrix multiplication (i.e., a spatial filter), followed by calculating the power spectral density on every channel using an auto-regressive method, and finally classifying appropriate features for control. In this study, the first two computationally intensive steps were implemented on the GPU, and the speed was compared to both the current implementation and a central processing unit-based implementation that uses multi-threading. Significant performance gains were obtained with GPU processing: the current implementation processed 1000 channels of 250 ms of data in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times.

  12. Real-time display on Fourier domain optical coherence tomography system using a graphics processing unit.

    PubMed

    Watanabe, Yuuki; Itagaki, Toshiki

    2009-01-01

    Fourier domain optical coherence tomography (FD-OCT) requires resampling of spectrally resolved depth information from wavelength to wave number, followed by application of the inverse Fourier transform. The display rates of OCT images are much slower than the image acquisition rates due to processing speed limitations on most computers. We demonstrate real-time display of processed OCT images using a linear-in-wave-number (linear-k) spectrometer and a graphics processing unit (GPU). We use a linear-k spectrometer, combining a diffraction grating (1200 lines/mm) with an F2 equilateral prism in the 840-nm spectral region, to avoid the resampling calculation. The calculations of the fast Fourier transform (FFT) are accelerated by the GPU with many stream processors, which provides highly parallel processing. A display rate of 27.9 frames/sec for processed images (2048 FFT size x 1000 lateral A-scans) is achieved in our OCT system using a line scan CCD camera operated at 27.9 kHz. PMID:20059237
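
    With a linear-k spectrometer removing the resampling step, the per-frame GPU work reduces to one FFT per A-line, and cuFFT can batch all lines of a frame in a single call. A minimal sketch matching the frame size quoted above (2048-point transforms, 1000 lateral A-scans; device buffers assumed allocated):

      #include <cufft.h>

      // Transform every A-line of one frame in a single batched call, e.g.
      // transformFrame(dSpectra, dAscans, 2048, 1000) for the frame size above.
      void transformFrame(cufftComplex *dSpectra, cufftComplex *dAscans,
                          int nFft, int nLines)
      {
          cufftHandle plan;
          cufftPlan1d(&plan, nFft, CUFFT_C2C, nLines);          // one plan, nLines batches
          cufftExecC2C(plan, dSpectra, dAscans, CUFFT_FORWARD); // spectrum -> depth profile
          cufftDestroy(plan);
      }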

  14. Implementation and optimization of ultrasound signal processing algorithms on mobile GPU

    NASA Astrophysics Data System (ADS)

    Kong, Woo Kyu; Lee, Wooyoul; Kim, Kyu Cheol; Yoo, Yangmo; Song, Tai-Kyong

    2014-03-01

    A general-purpose graphics processing unit (GPGPU) has been used to improve computing power in medical ultrasound imaging systems. Recently, mobile GPUs have become powerful enough to handle 3D games and videos at high frame rates on Full HD or HD displays. This paper proposes a method to implement ultrasound signal processing on a mobile GPU available in a high-end smartphone (Galaxy S4, Samsung Electronics, Seoul, Korea) with programmable shaders on the OpenGL ES 2.0 platform. To maximize the performance of the mobile GPU, the shader design was optimized and the load was shared between the vertex and fragment shaders. The beamformed data were captured from a tissue-mimicking phantom (Model 539 Multipurpose Phantom, ATS Laboratories, Inc., Bridgeport, CT, USA) using a commercial ultrasound imaging system equipped with a research package (Ultrasonix Touch, Ultrasonix, Richmond, BC, Canada). The real-time performance was evaluated by frame rates while varying the range of signal processing blocks. The implementation of ultrasound signal processing on OpenGL ES 2.0 was verified by analyzing the PSNR against a MATLAB gold standard with the same signal path; the CNR was also analyzed to verify the method. From these evaluations, the proposed mobile GPU-based processing method showed no significant difference from the processing using MATLAB (i.e., PSNR<52.51 dB), and comparable CNR results were obtained from both methods (i.e., 11.31). The mobile GPU implementation achieved frame rates of 57.6 Hz, with a total execution time of 17.4 ms, which was faster than the acquisition time (i.e., 34.4 ms). These results indicate that the mobile GPU-based processing method can support real-time ultrasound B-mode processing on a smartphone.

  15. Particle-In-Cell simulations of high pressure plasmas using graphics processing units

    NASA Astrophysics Data System (ADS)

    Gebhardt, Markus; Atteln, Frank; Brinkmann, Ralf Peter; Mussenbrock, Thomas; Mertmann, Philipp; Awakowicz, Peter

    2009-10-01

    Particle-in-cell (PIC) simulations are widely used to understand fundamental phenomena in low-temperature plasmas; in particular, plasmas at very low gas pressures are studied using PIC methods. The inherent drawback of these methods is that they are very time consuming, since certain stability conditions have to be satisfied. This holds even more for the PIC simulation of high-pressure plasmas, due to the very high collision rates. The simulations take a very long time to run on standard computers and require computer clusters or supercomputers. Recent advances in the field of graphics processing units (GPUs) provide every personal computer with a highly parallel multiprocessor architecture for very little money. This architecture is freely programmable and can be used to implement a wide class of problems. In this paper we present the concepts of a fully parallel PIC simulation of high-pressure plasmas using the benefits of GPU programming.
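
    The particle push is the naturally parallel core of a PIC code: every particle advances independently. A CUDA sketch of a simple 2D leapfrog push is given below; a real high-pressure PIC-MCC step would also gather fields from the grid, scatter charge back with atomic operations, and apply the Monte Carlo collisions that dominate at high pressure.

      #include <cuda_runtime.h>

      // Leapfrog push: one thread per particle; qm is the charge-to-mass ratio
      // and efield holds the field already gathered to each particle's position.
      __global__ void pushParticles(float2 *pos, float2 *vel, const float2 *efield,
                                    int nParticles, float qm, float dt)
      {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i >= nParticles) return;
          vel[i].x += qm * efield[i].x * dt;            // accelerate
          vel[i].y += qm * efield[i].y * dt;
          pos[i].x += vel[i].x * dt;                    // drift
          pos[i].y += vel[i].y * dt;
      }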

  16. FAST CALCULATION OF THE LOMB-SCARGLE PERIODOGRAM USING GRAPHICS PROCESSING UNITS

    SciTech Connect

    Townsend, R. H. D.

    2010-12-15

    I introduce a new code for fast calculation of the Lomb-Scargle periodogram that leverages the computing power of graphics processing units (GPUs). After establishing a background to the newly emergent field of GPU computing, I discuss the code design and narrate key parts of its source. Benchmarking calculations indicate no significant differences in accuracy compared to an equivalent CPU-based code. However, the differences in performance are pronounced; running on a low-end GPU, the code can match eight CPU cores, and on a high-end GPU it is faster by a factor approaching 30. Applications of the code include analysis of long photometric time series obtained by ongoing satellite missions and upcoming ground-based monitoring facilities, and Monte Carlo simulation of periodogram statistical properties.
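
    The periodogram is embarrassingly parallel over trial frequencies, which is the property such codes exploit. A minimal kernel in that spirit (not the published source; it assumes the data y have already had their mean subtracted):

      #include <cuda_runtime.h>
      #include <math.h>

      // Lomb-Scargle power at nFreq trial frequencies for n samples (t, y).
      __global__ void lombScargle(const float *t, const float *y, int n,
                                  const float *freq, float *power, int nFreq)
      {
          int k = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per frequency
          if (k >= nFreq) return;
          float w = 6.2831853f * freq[k];
          float s2 = 0.0f, c2 = 0.0f;
          for (int i = 0; i < n; ++i) {                   // phase offset tau
              s2 += sinf(2.0f * w * t[i]);
              c2 += cosf(2.0f * w * t[i]);
          }
          float tau = 0.5f * atan2f(s2, c2) / w;
          float yc = 0.0f, ys = 0.0f, cc = 0.0f, ss = 0.0f;
          for (int i = 0; i < n; ++i) {
              float c = cosf(w * (t[i] - tau));
              float s = sinf(w * (t[i] - tau));
              yc += y[i] * c;  ys += y[i] * s;
              cc += c * c;     ss += s * s;
          }
          power[k] = 0.5f * (yc * yc / cc + ys * ys / ss);
      }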

  17. On the use of graphics processing units (GPUs) for molecular dynamics simulation of spherical particles

    NASA Astrophysics Data System (ADS)

    Hidalgo, R. C.; Kanzaki, T.; Alonso-Marroquin, F.; Luding, S.

    2013-06-01

    General-purpose computation on Graphics Processing Units (GPU) on personal computers has recently become an attractive alternative to parallel computing on clusters and supercomputers. We present the GPU-implementation of an accurate molecular dynamics algorithm for a system of spheres. The new hybrid CPU-GPU implementation takes into account all the degrees of freedom, including the quaternion representation of 3D rotations. For additional versatility, the contact interaction between particles is defined using a force law of enhanced generality, which accounts for the elastic and dissipative interactions, and the hard-sphere interaction parameters are translated to the soft-sphere parameter set. We prove that the algorithm complies with the statistical mechanical laws by examining the homogeneous cooling of a granular gas with rotation. The results are in excellent agreement with well established mean-field theories for low-density hard sphere systems. This GPU technique dramatically reduces user waiting time, compared with a traditional CPU implementation.

  18. Acceleration of Electron Repulsion Integral Evaluation on Graphics Processing Units via Use of Recurrence Relations.

    PubMed

    Miao, Yipu; Merz, Kenneth M

    2013-02-12

    Electron repulsion integral (ERI) calculation on graphical processing units (GPUs) can significantly accelerate quantum chemical calculations. Herein, the ab initio self-consistent-field (SCF) calculation is implemented on GPUs using recurrence relations, which is one of the fastest ERI evaluation algorithms currently available. A direct-SCF scheme to assemble the Fock matrix efficiently is presented, wherein ERIs are evaluated on-the-fly to avoid CPU-GPU data transfer, a well-known architectural bottleneck in GPU specific computation. Realized speedups on GPUs reach 10-100 times relative to traditional CPU nodes, with accuracies of better than 1 × 10(-7) for systems with more than 4000 basis functions. PMID:26588740

  19. A graphical method to evaluate predominant geochemical processes occurring in groundwater systems for radiocarbon dating

    USGS Publications Warehouse

    Han, Liang-Feng; Plummer, L. Niel; Aggarwal, Pradeep

    2012-01-01

    A graphical method is described for identifying geochemical reactions needed in the interpretation of radiocarbon age in groundwater systems. Graphs are constructed by plotting the measured 14C, δ13C, and concentration of dissolved inorganic carbon and are interpreted according to specific criteria to recognize water samples that are consistent with a wide range of processes, including geochemical reactions, carbon isotopic exchange, 14C decay, and mixing of waters. The graphs are used to provide a qualitative estimate of radiocarbon age, to deduce the hydrochemical complexity of a groundwater system, and to compare samples from different groundwater systems. Graphs of chemical and isotopic data from a series of previously-published groundwater studies are used to demonstrate the utility of the approach. Ultimately, the information derived from the graphs is used to improve geochemical models for adjustment of radiocarbon ages in groundwater systems.

  20. Acceleration of the GAMESS-UK electronic structure package on graphical processing units.

    PubMed

    Wilkinson, Karl A; Sherwood, Paul; Guest, Martyn F; Naidoo, Kevin J

    2011-07-30

    The approach used by many electronic structure packages, including the generalized atomic and molecular electronic structure system-UK (GAMESS-UK), to calculate the two-electron integrals was designed for CPU-based compute units. We redesigned the two-electron compute algorithm for acceleration on a graphical processing unit (GPU). We report the acceleration strategy and illustrate it on the (ss|ss) type integrals. This strategy is general for Fortran-based codes and uses the Accelerator compiler from Portland Group International and GPU-based accelerators from Nvidia. The evaluation of (ss|ss) type integrals within calculations using Hartree-Fock ab initio methods and density functional theory is accelerated by single and quad GPU hardware systems by factors of 43 and 153, respectively. The overall speedup for a single self-consistent field cycle is at least a factor of eight on a single GPU compared with a single CPU. PMID:21541963

  1. Efficient implementation of effective core potential integrals and gradients on graphical processing units.

    PubMed

    Song, Chenchen; Wang, Lee-Ping; Sachse, Torsten; Preiss, Julia; Presselt, Martin; Martínez, Todd J

    2015-07-01

    Effective core potential integral and gradient evaluations are accelerated via implementation on graphical processing units (GPUs). Two simple formulas are proposed to estimate the upper bounds of the integrals, and these are used for screening. A sorting strategy is designed to balance the workload between GPU threads properly. Significant improvements in performance and reduced scaling with system size are observed when combining the screening and sorting methods, and the calculations are highly efficient for systems containing up to 10 000 basis functions. The GPU implementation preserves the precision of the calculation; the ground state Hartree-Fock energy achieves good accuracy for CdSe and ZnTe nanocrystals, and energy is well conserved in ab initio molecular dynamics simulations. PMID:26156472

  2. Using general-purpose computing on graphics processing units (GPGPU) to accelerate the ordinary kriging algorithm

    NASA Astrophysics Data System (ADS)

    Gutiérrez de Ravé, E.; Jiménez-Hornero, F. J.; Ariza-Villaverde, A. B.; Gómez-López, J. M.

    2014-03-01

    Spatial interpolation methods have been applied in many disciplines, ordinary kriging being one of the most frequently used. However, kriging carries a computational cost that scales as the cube of the number of data points, so one of the most pressing problems in geostatistical simulations is developing methods that reduce the computational time. Calculating the weights and then the estimate for each unknown point is the most time-consuming step in ordinary kriging. This work investigates the potential reduction in execution time from selecting the suitable operations in this step to parallelize using general-purpose computing on graphics processing units (GPGPU) and the Compute Unified Device Architecture (CUDA). The study compares graphics and central processing units on two different machines, a personal computer (GPU, GeForce 9500; CPU, AMD Athlon X2 4600) and a server (GPU, Tesla C1060; CPU, Xeon 5600). In addition, two data types (float and double) were considered in the executions. The experimental results indicate that a parallel implementation of the matrix inverse using GPGPU and CUDA is enough to reduce the execution time of the weights calculation and the estimation for each unknown point and, as a result, the overall run time of ordinary kriging. Suitable array dimensions for the parallelized code were determined for each case, yielding relevant time savings compared with parallelizing a wider set of operations. This demonstrates the value of carrying out this kind of study for other interpolation methods based on matrix operations.
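
    The abstract identifies the weights calculation and per-point estimation as the parallelized step; the matrix inverse dominates, but the tail of the step is trivially parallel. A hedged CUDA sketch of that tail, with one thread per unknown point and all names hypothetical:

```cuda
#include <cuda_runtime.h>

// One thread per unknown grid point: apply the nSamples kriging weights
// (already obtained by inverting the covariance matrix) to the observed
// values. weights is laid out row-major: [point][sample].
__global__ void krigingEstimate(const float* weights, const float* values,
                                float* estimate, int nPoints, int nSamples)
{
    int p = blockIdx.x * blockDim.x + threadIdx.x;
    if (p >= nPoints) return;

    float z = 0.0f;
    for (int s = 0; s < nSamples; ++s)
        z += weights[p * nSamples + s] * values[s];
    estimate[p] = z;
}
```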

  3. Real-time resampling in Fourier domain optical coherence tomography using a graphics processing unit.

    PubMed

    Van der Jeught, Sam; Bradu, Adrian; Podoleanu, Adrian Gh

    2010-01-01

    Fourier domain optical coherence tomography (FD-OCT) requires either a linear-in-wavenumber spectrometer or a computationally heavy software algorithm to recalibrate the acquired optical signal from wavelength to wavenumber. The first method is sensitive to the position of the prism in the spectrometer, while the second method drastically slows down the system speed when it is implemented on a serially oriented central processing unit. We implement the full resampling process on a commercial graphics processing unit (GPU), distributing the necessary calculations to many stream processors that operate in parallel. A comparison between several recalibration methods is made in terms of performance and image quality. The GPU is also used to accelerate the fast Fourier transform (FFT) and to remove the background noise, thereby achieving full GPU-based signal processing without the need for extra resampling hardware. A display rate of 25 frames/sec is achieved for processed images (1,024 × 1,024 pixels) using a line-scan charge-coupled device (CCD) camera operating at 25.6 kHz. PMID:20614994
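
    The recalibration methods compared in the paper are not listed in the abstract; the simplest candidate, linear interpolation of the raw spectrum onto a uniform wavenumber grid before the FFT, parallelizes naturally with one thread per output sample. A hedged sketch:

```cuda
#include <cuda_runtime.h>

// One thread per output wavenumber sample. kIndex[i] holds the fractional
// position (precomputed on the host from the spectrometer calibration) of
// uniform-in-k sample i within the raw uniform-in-lambda spectrum.
__global__ void resampleToK(const float* spectrum, const float* kIndex,
                            float* resampled, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float x = kIndex[i];
    int j = (int)x;
    if (j < 0) j = 0;               // clamp to a valid interval
    if (j > n - 2) j = n - 2;
    float frac = x - (float)j;
    // Linear interpolation between the two neighboring raw samples
    resampled[i] = (1.0f - frac) * spectrum[j] + frac * spectrum[j + 1];
}
```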

  4. Real-time resampling in Fourier domain optical coherence tomography using a graphics processing unit

    NASA Astrophysics Data System (ADS)

    van der Jeught, Sam; Bradu, Adrian; Podoleanu, Adrian Gh.

    2010-05-01

    Fourier domain optical coherence tomography (FD-OCT) requires either a linear-in-wavenumber spectrometer or a computationally heavy software algorithm to recalibrate the acquired optical signal from wavelength to wavenumber. The first method is sensitive to the position of the prism in the spectrometer, while the second method drastically slows down the system speed when it is implemented on a serially oriented central processing unit. We implement the full resampling process on a commercial graphics processing unit (GPU), distributing the necessary calculations to many stream processors that operate in parallel. A comparison between several recalibration methods is made in terms of performance and image quality. The GPU is also used to accelerate the fast Fourier transform (FFT) and to remove the background noise, thereby achieving full GPU-based signal processing without the need for extra resampling hardware. A display rate of 25 frames/sec is achieved for processed images (1024×1024 pixels) using a line-scan charge-coupled device (CCD) camera operating at 25.6 kHz.

  6. Fast computation of MadGraph amplitudes on graphics processing unit (GPU)

    NASA Astrophysics Data System (ADS)

    Hagiwara, K.; Kanzaki, J.; Li, Q.; Okamura, N.; Stelzer, T.

    2013-11-01

    Continuing our previous studies on QED and QCD processes, we use the graphics processing unit (GPU) for fast calculations of helicity amplitudes for general Standard Model (SM) processes. Additional HEGET codes to handle all SM interactions are introduced, as well as the program MG2CUDA that converts arbitrary MadGraph-generated HELAS amplitudes (FORTRAN) into HEGET codes in CUDA. We test all the codes by comparing amplitudes and cross sections for multi-jet processes at the LHC associated with production of single and double weak bosons, a top-quark pair, a Higgs boson plus a weak boson or a top-quark pair, and multiple Higgs bosons via weak-boson fusion, where all the heavy particles are allowed to decay into light quarks and leptons with full spin correlations. All the helicity amplitudes computed by HEGET are found to agree with those computed by HELAS within the expected numerical accuracy, and the cross sections obtained by gBASES, a GPU version of the Monte Carlo integration program, agree with those obtained by BASES (FORTRAN), as well as with those obtained by MadGraph. The GPU performance was more than a factor of 10 faster than the CPU for all processes except those with the highest number of jets.

  7. Real-time speckle variance swept-source optical coherence tomography using a graphics processing unit

    PubMed Central

    Lee, Kenneth K. C.; Mariampillai, Adrian; Yu, Joe X. Z.; Cadotte, David W.; Wilson, Brian C.; Standish, Beau A.; Yang, Victor X. D.

    2012-01-01

    Advances in swept-source laser technology continue to increase the imaging speed of swept-source optical coherence tomography (SS-OCT) systems. These fast imaging speeds are ideal for microvascular detection schemes, such as speckle variance (SV), where interframe motion can cause severe imaging artifacts and loss of vascular contrast. However, full utilization of the laser scan speed has been hindered by the computationally intensive signal processing required by SS-OCT and SV calculations. Using a commercial graphics processing unit that has been optimized for parallel data processing, we report a complete high-speed SS-OCT platform capable of real-time data acquisition, processing, display, and saving at 108,000 lines per second. Subpixel image registration of structural images was performed in real time prior to SV calculations in order to reduce decorrelation from stationary structures induced by bulk tissue motion. The viability of the system was successfully demonstrated in a high bulk tissue motion scenario of human fingernail root imaging, where SV images (512 × 512 pixels, n = 4) were displayed at 54 frames per second. PMID:22808428

  8. Process industries - graphic arts, paint, plastics, and textiles: all cousins under the skin

    NASA Astrophysics Data System (ADS)

    Simon, Frederick T.

    2002-06-01

    The origin and selection of colors in the process industries differ depending on how the creative process is applied and on the capabilities of the manufacturing process. The fashion industry (clothing), with its textile suppliers, is the leader in color innovation. Color may be introduced into textile products at several stages of manufacturing, from fiber through yarn and finally into fabric. The paint industry is divided into two major applications: automotive and trade sales. Automotive colors are selected by stylists in the employ of the automobile manufacturers. Trade-sales paint, on the other hand, can be decided by paint manufacturers or by individuals who patronize custom mixing facilities. Plastics colors are for the most part decided by the industrial designers who include color as part of the design. Graphic arts (printing) is a burgeoning industry that uses color in image reproduction and package design. Except for text, printed material in color has today become the norm rather than the exception.

  9. Real-time blood flow visualization using the graphics processing unit

    PubMed Central

    Yang, Owen; Cuccia, David; Choi, Bernard

    2011-01-01

    Laser speckle imaging (LSI) is a technique in which coherent light incident on a surface produces a reflected speckle pattern that is related to the underlying movement of optical scatterers, such as red blood cells, indicating blood flow. Image-processing algorithms can be applied to produce speckle flow index (SFI) maps of relative blood flow. We present a novel algorithm that employs the NVIDIA Compute Unified Device Architecture (CUDA) platform to perform laser speckle image processing on the graphics processing unit. Software written in C was combined with CUDA and integrated into a LabVIEW Virtual Instrument (VI) that is interfaced with a monochrome CCD camera able to acquire high-resolution raw speckle images at nearly 10 fps. With the CUDA code integrated into the LabVIEW VI, the processing and display of SFI images were also performed at ∼10 fps. We present three video examples depicting real-time flow imaging during a reactive hyperemia maneuver, with fluid flow through an in vitro phantom, and a demonstration of real-time LSI during laser surgery of a port wine stain birthmark. PMID:21280915
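
    The exact SFI formula is not given in the abstract and varies across the LSI literature; a common choice computes the local speckle contrast K = σ/μ over a sliding window and takes SFI ∝ 1/K². A CUDA sketch under that assumption:

```cuda
#include <cuda_runtime.h>

// One thread per pixel: speckle contrast K = sigma/mu over a (2R+1)^2
// window of the raw speckle image, then SFI = 1/K^2. Border pixels are
// set to zero rather than left unwritten.
__global__ void speckleFlowIndex(const float* raw, float* sfi,
                                 int width, int height, int R)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;
    if (x < R || y < R || x >= width - R || y >= height - R) {
        sfi[y * width + x] = 0.0f;
        return;
    }

    float sum = 0.0f, sumSq = 0.0f;
    int count = (2 * R + 1) * (2 * R + 1);
    for (int dy = -R; dy <= R; ++dy)
        for (int dx = -R; dx <= R; ++dx) {
            float v = raw[(y + dy) * width + (x + dx)];
            sum += v;
            sumSq += v * v;
        }
    float mean = sum / count;
    float var = sumSq / count - mean * mean;
    float K2 = var / (mean * mean + 1e-12f);   // squared speckle contrast
    sfi[y * width + x] = 1.0f / (K2 + 1e-12f);
}
```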

  10. Real-time blood flow visualization using the graphics processing unit

    NASA Astrophysics Data System (ADS)

    Yang, Owen; Cuccia, David; Choi, Bernard

    2011-01-01

    Laser speckle imaging (LSI) is a technique in which coherent light incident on a surface produces a reflected speckle pattern that is related to the underlying movement of optical scatterers, such as red blood cells, indicating blood flow. Image-processing algorithms can be applied to produce speckle flow index (SFI) maps of relative blood flow. We present a novel algorithm that employs the NVIDIA Compute Unified Device Architecture (CUDA) platform to perform laser speckle image processing on the graphics processing unit. Software written in C was combined with CUDA and integrated into a LabVIEW Virtual Instrument (VI) that is interfaced with a monochrome CCD camera able to acquire high-resolution raw speckle images at nearly 10 fps. With the CUDA code integrated into the LabVIEW VI, the processing and display of SFI images were also performed at ~10 fps. We present three video examples depicting real-time flow imaging during a reactive hyperemia maneuver, with fluid flow through an in vitro phantom, and a demonstration of real-time LSI during laser surgery of a port wine stain birthmark.

  11. Atmospheric process evaluation of mobile source emissions

    SciTech Connect

    1995-07-01

    During the past two decades there has been a considerable effort in the US to develop and introduce an alternative to the use of gasoline and conventional diesel fuel for transportation. The primary motives for this effort have been twofold: energy security and improvement in air quality, most notably ozone, or smog. The anticipated improvement in air quality is associated with a decrease in the atmospheric reactivity, and sometimes a decrease in the mass emission rate, of the organic gas and NOx emissions from alternative fuels when compared to conventional transportation fuels. Quantification of these air quality impacts is a prerequisite to decisions on adopting alternative fuels. The purpose of this report is to present a critical review of the procedures and data base used to assess the impact on ambient air quality of mobile source emissions from alternative and conventional transportation fuels and to make recommendations as to how this process can be improved. Alternative transportation fuels are defined as methanol, ethanol, CNG, LPG, and reformulated gasoline. Most of the discussion centers on light-duty AFVs operating on these fuels. Other advanced transportation technologies and fuels such as hydrogen, electric vehicles, and fuel cells, will not be discussed. However, the issues raised herein can also be applied to these technologies and other classes of vehicles, such as heavy-duty diesels (HDDs). An evaluation of the overall impact of AFVs on society requires consideration of a number of complex issues. It involves the development of new vehicle technology associated with engines, fuel systems, and emission control technology; the implementation of the necessary fuel infrastructure; and an appropriate understanding of the economic, health, safety, and environmental impacts associated with the use of these fuels. This report addresses the steps necessary to properly evaluate the impact of AFVs on ozone air quality.

  12. Visual displays that directly interface and provide read-outs of molecular states via molecular graphics processing units.

    PubMed

    Poje, Julia E; Kastratovic, Tamara; Macdonald, Andrew R; Guillermo, Ana C; Troetti, Steven E; Jabado, Omar J; Fanning, M Leigh; Stefanovic, Darko; Macdonald, Joanne

    2014-08-25

    The monitoring of molecular systems usually requires sophisticated technologies to interpret nanoscale events into electronic-decipherable signals. We demonstrate a new method for obtaining read-outs of molecular states that uses graphics processing units made from molecular circuits. Because they are made from molecules, the units are able to directly interact with molecular systems. We developed deoxyribozyme-based graphics processing units able to monitor nucleic acids and output alphanumerical read-outs via a fluorescent display. Using this design we created a molecular 7-segment display, a molecular calculator able to add and multiply small numbers, and a molecular automaton able to diagnose Ebola and Marburg virus sequences. These molecular graphics processing units provide insight for the construction of autonomous biosensing devices, and are essential components for the development of molecular computing platforms devoid of electronics.

  13. [Influence of the recording interval and a graphic organizer on the writing process/product and on other psychological variables].

    PubMed

    García Sánchez, Jesús N; Rodríguez Pérez, Celestino

    2007-05-01

    An experimental study of the influence of the recording interval and a graphic organizer on the processes of written composition and on the final product is presented. We studied 326 participants, aged 10 to 16 years, by means of a nested design. Two groups were compared: one group was aided in the writing process with a graphic organizer and the other was not. Each group was subdivided into two further groups: one with a mean recording interval of 45 seconds and the other with an approximately 90-second recording interval in a writing log. The results showed that the group aided by a graphic organizer obtained better results in both the writing processes and the product, and that the groups assessed with an average interval of 45 seconds obtained worse results. Implications for educational practice are discussed, and limitations and future perspectives are commented on.

  14. Parallel particle swarm optimization on a graphics processing unit with application to trajectory optimization

    NASA Astrophysics Data System (ADS)

    Wu, Q.; Xiong, F.; Wang, F.; Xiong, Y.

    2016-10-01

    In order to reduce the computational time, a fully parallel implementation of the particle swarm optimization (PSO) algorithm on a graphics processing unit (GPU) is presented. Instead of being executed on the central processing unit (CPU) sequentially, PSO is executed in parallel via the GPU on the compute unified device architecture (CUDA) platform. The fitness evaluation and the updating of the velocity and position of all particles are parallelized and described in detail. Comparative studies on the optimization of four benchmark functions and a trajectory optimization problem are conducted by running PSO on the GPU (GPU-PSO) and on the CPU (CPU-PSO). The impact of the design dimension, the number of particles, the size of the thread block in the GPU, and their interactions on the computational time is investigated. The results show that the computational time of the developed GPU-PSO is much shorter than that of CPU-PSO, with comparable accuracy, which demonstrates the remarkable speed-up capability of GPU-PSO.
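
    The update step described above maps naturally onto one thread per particle-coordinate; here is a hedged sketch of the standard PSO velocity and position update (fitness evaluation would be a separate kernel, and the uniform random numbers are assumed precomputed, e.g., with cuRAND):

```cuda
#include <cuda_runtime.h>

// One thread per (particle, dimension) pair: standard PSO update
//   v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x);  x <- x + v
// r1, r2 hold uniform(0,1) numbers precomputed on the device.
__global__ void psoUpdate(float* x, float* v,
                          const float* pbest, const float* gbest,
                          const float* r1, const float* r2,
                          int nParticles, int dim,
                          float w, float c1, float c2)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= nParticles * dim) return;

    int d = i % dim;   // dimension index within this particle
    v[i] = w * v[i]
         + c1 * r1[i] * (pbest[i] - x[i])
         + c2 * r2[i] * (gbest[d] - x[i]);
    x[i] += v[i];
}
```

    One launch of this kernel advances the whole swarm one generation; the global best gbest is reduced on the host or in a separate reduction kernel between launches.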

  15. Optical diagnostics of a single evaporating droplet using fast parallel computing on graphics processing units

    NASA Astrophysics Data System (ADS)

    Jakubczyk, D.; Migacz, S.; Derkachov, G.; Woźniak, M.; Archer, J.; Kolwas, K.

    2016-09-01

    We report on the first application of graphics processing unit (GPU) accelerated computing to improve the performance of numerical methods used for the optical characterization of evaporating microdroplets. Single microdroplets of various liquids with different volatility and molecular weight (glycerine, glycols, water, etc.), as well as mixtures of liquids and diverse suspensions, evaporate inside an electrodynamic trap under a chosen temperature and composition of the atmosphere. The series of scattering patterns recorded from the evaporating microdroplets are processed by fitting complete Mie theory predictions with a gradientless lookup-table method. We show that computations on GPUs can be effectively applied to inverse scattering problems. In particular, our technique accelerated the Mie scattering calculations more than 800 times relative to a single-core processor in a Matlab environment and almost 100 times relative to the corresponding code in C. Additionally, we overcame the problem of time-consuming data post-processing when some parameters of the investigated liquid (particularly the refractive index) are uncertain. Our program allows us to track the parameters characterizing the evaporating droplet nearly simultaneously with the progress of evaporation.

  16. Adiabatic/nonadiabatic state-to-state reactive scattering dynamics implemented on graphics processing units.

    PubMed

    Zhang, Pei-Yu; Han, Ke-Li

    2013-09-12

    An efficient graphics processing units (GPUs) version of a time-dependent wavepacket code is developed for atom-diatom state-to-state reactive scattering processes. The propagation of the wavepacket is entirely calculated on GPUs employing the split-operator method after preparation of the initial wavepacket on the central processing unit (CPU). An additional split-operator method is introduced in the rotational part of the Hamiltonian to decrease communication between GPUs without losing accuracy of state-to-state information. The code is tested by calculating the differential cross sections of the H + H2 reaction and state-resolved reaction probabilities of nonadiabatic triplet-singlet transitions of O(³P,¹D) + H2 for the total angular momentum J = 0. Global speedups of 22.11, 38.80, and 44.80 are found when comparing, respectively, the parallel computation on one GPU, on two GPUs with the exact rotational operator, and on two GPUs with an approximate rotational operator against serial computation on the CPU.

  17. Developing extensible lattice-Boltzmann simulators for general-purpose graphics-processing units

    SciTech Connect

    Walsh, S C; Saar, M O

    2011-12-21

    Lattice-Boltzmann methods are versatile numerical modeling techniques capable of reproducing a wide variety of fluid-mechanical behavior. These methods are well suited to parallel implementation, particularly on the single-instruction multiple-data (SIMD) parallel processing environments found in computer graphics processing units (GPUs). Although recent programming tools dramatically improve the ease with which GPU programs can be written, the programming environment still lacks the flexibility available to more traditional CPU programs. In particular, it may be difficult to develop modular and extensible programs that require variable on-device functionality with current GPU architectures. This paper describes a process of automatic code generation that overcomes these difficulties for lattice-Boltzmann simulations. It details the development of GPU-based modules for an extensible lattice-Boltzmann simulation package, LBHydra. The performance of the automatically generated code is compared to that of equivalent purpose-written codes for single-phase, multiple-phase, and multiple-component flows. The flexibility of the new method is demonstrated by simulating a rising, dissolving droplet in a porous medium with user-generated lattice-Boltzmann models and subroutines.
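
    LBHydra's generated modules are not shown in the abstract; as a generic illustration of the per-node work such a module performs, here is a hedged sketch of a D2Q9 BGK collision step (single relaxation time, streaming omitted):

```cuda
#include <cuda_runtime.h>

// D2Q9 lattice constants: weights and discrete velocities
__constant__ float wq[9] = { 4.f/9.f,
    1.f/9.f, 1.f/9.f, 1.f/9.f, 1.f/9.f,
    1.f/36.f, 1.f/36.f, 1.f/36.f, 1.f/36.f };
__constant__ int ex[9] = { 0, 1, 0, -1, 0, 1, -1, -1, 1 };
__constant__ int ey[9] = { 0, 0, 1, 0, -1, 1, 1, -1, -1 };

// One thread per lattice node: BGK collision with relaxation time tau.
// f is stored structure-of-arrays: f[q * nNodes + node].
__global__ void bgkCollide(float* f, int nNodes, float tau)
{
    int node = blockIdx.x * blockDim.x + threadIdx.x;
    if (node >= nNodes) return;

    // Macroscopic density and velocity from the distributions
    float rho = 0.f, ux = 0.f, uy = 0.f;
    float fi[9];
    for (int q = 0; q < 9; ++q) {
        fi[q] = f[q * nNodes + node];
        rho += fi[q];
        ux  += fi[q] * ex[q];
        uy  += fi[q] * ey[q];
    }
    ux /= rho; uy /= rho;

    // Relax each population toward its local equilibrium
    float usq = ux * ux + uy * uy;
    for (int q = 0; q < 9; ++q) {
        float eu = ex[q] * ux + ey[q] * uy;
        float feq = wq[q] * rho * (1.f + 3.f * eu + 4.5f * eu * eu - 1.5f * usq);
        f[q * nNodes + node] = fi[q] + (feq - fi[q]) / tau;
    }
}
```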

  18. Four-dimensional structural and Doppler optical coherence tomography imaging on graphics processing units

    NASA Astrophysics Data System (ADS)

    Sylwestrzak, Marcin; Szlag, Daniel; Szkulmowski, Maciej; Gorczynska, Iwona; Bukowska, Danuta; Wojtkowski, Maciej; Targowski, Piotr

    2012-10-01

    The authors present the application of graphics processing unit (GPU) programming for real-time three-dimensional (3-D) Fourier domain optical coherence tomography (FdOCT) imaging with implementation of flow visualization algorithms. One of the limitations of FdOCT is data processing time, which is generally longer than data acquisition time. Utilizing additional algorithms, such as Doppler analysis, further increases computation time. The general purpose computing on GPU (GPGPU) has been used successfully for structural OCT imaging, but real-time 3-D imaging of flows has so far not been presented. We have developed software for structural and Doppler OCT processing capable of visualization of two-dimensional (2-D) data (2000 A-scans, 2048 pixels per spectrum) with an image refresh rate higher than 120 Hz. The 3-D imaging of 100×100 A-scans data is performed at a rate of about 9 volumes per second. We describe the software architecture, organization of threads, and optimization. Screen shots recorded during real-time imaging of a flow phantom and the human eye are presented.

  19. Four-dimensional structural and Doppler optical coherence tomography imaging on graphics processing units.

    PubMed

    Sylwestrzak, Marcin; Szlag, Daniel; Szkulmowski, Maciej; Gorczynska, Iwona; Bukowska, Danuta; Wojtkowski, Maciej; Targowski, Piotr

    2012-10-01

    The authors present the application of graphics processing unit (GPU) programming for real-time three-dimensional (3-D) Fourier domain optical coherence tomography (FdOCT) imaging with implementation of flow visualization algorithms. One of the limitations of FdOCT is data processing time, which is generally longer than data acquisition time. Utilizing additional algorithms, such as Doppler analysis, further increases computation time. The general purpose computing on GPU (GPGPU) has been used successfully for structural OCT imaging, but real-time 3-D imaging of flows has so far not been presented. We have developed software for structural and Doppler OCT processing capable of visualization of two-dimensional (2-D) data (2000 A-scans, 2048 pixels per spectrum) with an image refresh rate higher than 120 Hz. The 3-D imaging of 100×100 A-scans data is performed at a rate of about 9 volumes per second. We describe the software architecture, organization of threads, and optimization. Screen shots recorded during real-time imaging of a flow phantom and the human eye are presented.

  20. Mobile Monitoring Data Processing & Analysis Strategies

    EPA Science Inventory

    The development of portable, high-time-resolution instruments for measuring the concentrations of a variety of air pollutants has made it possible to collect data while in motion. This strategy, known as mobile monitoring, involves mounting air sensors on a variety of different pla...

  2. Acceleration of iterative Navier-Stokes solvers on graphics processing units

    NASA Astrophysics Data System (ADS)

    Tomczak, Tadeusz; Zadarnowska, Katarzyna; Koza, Zbigniew; Matyka, Maciej; Mirosław, Łukasz

    2013-04-01

    While new power-efficient computer architectures exhibit spectacular theoretical peak performance, they require specific conditions to operate efficiently, which makes porting complex algorithms a challenge. Here, we report results of the semi-implicit method for pressure-linked equations (SIMPLE) and the pressure-implicit with operator-splitting (PISO) method implemented on the graphics processing unit (GPU). We examine the advantages and disadvantages of a full port over a partial acceleration of these algorithms run on unstructured meshes. We found that the full-port strategy requires adjusting the internal data structures to the new hardware, and we propose a convenient format for storing internal data structures on GPUs. Our implementation is validated on standard steady and unsteady problems, and its computational efficiency is checked by comparing its results and run times with those of standard software (OpenFOAM) run on a central processing unit (CPU). The results show that a server-class GPU outperforms a server-class dual-socket multi-core CPU system running essentially the same algorithm by up to a factor of 4.

  3. Accelerated rescaling of single Monte Carlo simulation runs with the Graphics Processing Unit (GPU).

    PubMed

    Yang, Owen; Choi, Bernard

    2013-01-01

    To interpret fiber-based and camera-based measurements of remitted light from biological tissues, researchers typically use analytical models, such as the diffusion approximation to light transport theory, or stochastic models, such as Monte Carlo modeling. To achieve rapid (ideally real-time) measurement of tissue optical properties, especially in clinical situations, there is a critical need to accelerate Monte Carlo simulation runs. In this manuscript, we report on our approach of using the Graphics Processing Unit (GPU) to accelerate rescaling of single Monte Carlo runs to rapidly calculate diffuse reflectance values for different sets of tissue optical properties. We selected MATLAB to enable non-specialists in C and CUDA-based programming to use the generated open-source code. We developed a software package with four abstraction layers. To calculate a set of diffuse reflectance values from a simulated tissue with homogeneous optical properties, our rescaling GPU-based approach achieves a reduction in computation time of several orders of magnitude compared to other GPU-based approaches. Specifically, our GPU-based approach generated a diffuse reflectance value in 0.08 ms. The transfer time from CPU to GPU memory is currently a limiting factor for GPU-based calculations. However, for the calculation of multiple diffuse reflectance values, our GPU-based approach can still lead to processing that is ~3400 times faster than other GPU-based approaches. PMID:24298424

  4. Ultra-fast displaying Spectral Domain Optical Doppler Tomography system using a Graphics Processing Unit.

    PubMed

    Jeong, Hyosang; Cho, Nam Hyun; Jung, Unsang; Lee, Changho; Kim, Jeong-Yeon; Kim, Jeehyun

    2012-01-01

    We demonstrate a Spectral Domain Optical Doppler Tomography system with ultrafast display based on Graphics Processing Unit (GPU) computing. The calculation of the FFT and the Doppler frequency shift is accelerated by the GPU. Our system can display processed OCT and ODT images simultaneously in real time at 120 fps for 1,024 pixels × 512 lateral A-scans. The computing time for the Doppler information depends on the size of the moving-average window, but with a window size of 32 pixels the ODT computation time is only 8.3 ms, which is comparable to the data acquisition time. The phase noise also decreases significantly with the window size. Real-time display performance is very important for clinical OCT/ODT applications that need immediate diagnosis for screening or biopsy, and intraoperative surgery can benefit greatly from real-time display of flow-rate information. Moreover, the GPU is an attractive tool for clinical and commercial systems with functional OCT features as well. PMID:22969328
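
    A standard way to extract the Doppler frequency shift in spectral-domain ODT (not necessarily the authors' exact pipeline) is from the phase difference between complex A-scans at the same depth, Δφ = arg(a₁ · a₂*); the flow velocity follows from Δφ divided by the A-scan interval. A minimal CUDA sketch:

```cuda
#include <cuda_runtime.h>
#include <cuComplex.h>

// One thread per depth pixel: Doppler phase shift between two adjacent
// complex A-scans (after the FFT), delta_phi = arg(a1 * conj(a2)).
__global__ void dopplerPhase(const cuFloatComplex* a1,
                             const cuFloatComplex* a2,
                             float* deltaPhi, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    cuFloatComplex p = cuCmulf(a1[i], cuConjf(a2[i]));
    deltaPhi[i] = atan2f(cuCimagf(p), cuCrealf(p));
}
```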

  5. Practical Implementation of Prestack Kirchhoff Time Migration on a General Purpose Graphics Processing Unit

    NASA Astrophysics Data System (ADS)

    Liu, Guofeng; Li, Chun

    2016-08-01

    In this study, we present a practical implementation of prestack Kirchhoff time migration (PSTM) on a general-purpose graphics processing unit. First, we consider the three main optimizations of the PSTM GPU code, i.e., designing a reasonable execution configuration, using texture memory for velocity interpolation, and applying an intrinsic function in device code. This approach can achieve a speedup of nearly 45 times on an NVIDIA GTX 680 GPU compared with CPU code when a larger imaging space is used, where the PSTM output is common reflection point gathers stored in matrix format as I[nx][ny][nh][nt]. However, this method requires more memory, so a limited imaging space cannot fully exploit the GPU resources. To overcome this problem, we designed a PSTM scheme with multiple GPUs that images different seismic data on different GPUs according to an offset value. This process achieves the peak speedup of the GPU PSTM code and greatly increases the efficiency of the calculations, without changing the imaging result.

  6. Fast ray-tracing of human eye optics on Graphics Processing Units.

    PubMed

    Wei, Qi; Patkar, Saket; Pai, Dinesh K

    2014-05-01

    We present a new technique for simulating retinal image formation by tracing a large number of rays from objects in three dimensions as they pass through the optical apparatus of the eye to the retina. Simulating human optics is useful for understanding basic questions of vision science and for studying vision defects and their corrections. Because of the complexity of computing such simulations accurately, most previous efforts used simplified analytical models of the normal eye. This makes them less effective in modeling vision disorders associated with abnormal shapes of the ocular structures, which are hard to represent precisely with analytical surfaces. We have developed a computer simulator that can handle ocular structures of arbitrary shapes, for instance represented by polygon meshes. Topographic and geometric measurements of the cornea, lens, and retina from keratometer or medical imaging data can be integrated for individualized examination. We utilize parallel processing on modern Graphics Processing Units (GPUs) to efficiently compute retinal images by tracing millions of rays. A stable retinal image can be generated within minutes. We simulated depth of field, accommodation, and chromatic aberrations, as well as astigmatism and its correction. We also show application of the technique to patient-specific vision correction by incorporating geometric models of the orbit reconstructed from clinical medical images.

  7. Spatial resolution recovery utilizing multi-ray tracing and graphic processing unit in PET image reconstruction.

    PubMed

    Liang, Yicheng; Peng, Hao

    2015-02-01

    Depth of interaction (DOI) poses a major challenge for a PET system to achieve uniform spatial resolution across the field of view, particularly for small-animal and organ-dedicated PET systems. In this work, we implemented an analytical method to model the system matrix for resolution recovery, which was then incorporated into PET image reconstruction on a graphics processing unit platform because of its parallel processing capacity. The method utilizes the concepts of virtual DOI layers and multi-ray tracing to calculate the coincidence detection response function for a given line of response. The accuracy of the proposed method was validated for a small-bore PET insert to be used for simultaneous PET/MR breast imaging. In addition, performance comparisons were studied among the following three cases: 1) no physical DOI and no resolution modeling; 2) two physical DOI layers and no resolution modeling; and 3) no physical DOI design but with different numbers of virtual DOI layers. The image quality was quantitatively evaluated in terms of spatial resolution (full width at half maximum and position offset), contrast recovery coefficient, and noise. The results indicate that the proposed method has the potential to be used as an alternative to physical DOI designs and to achieve comparable imaging performance, while reducing detector/system design cost and complexity.

  8. Lossy hyperspectral image compression on a graphics processing unit: parallelization strategy and performance evaluation

    NASA Astrophysics Data System (ADS)

    Santos, Lucana; Magli, Enrico; Vitulli, Raffaele; Núñez, Antonio; López, José F.; Sarmiento, Roberto

    2013-01-01

    There is a pressing need for new hardware architectures for implementing hyperspectral image compression algorithms on board satellites. Graphics processing units (GPUs) represent a very attractive opportunity, offering the possibility to dramatically increase computation speed in applications that are data- and task-parallel. An algorithm for the lossy compression of hyperspectral images is implemented on a GPU using the Nvidia compute unified device architecture (CUDA) parallel computing architecture. The parallelization strategy is explained, with emphasis on the entropy coding and bit packing phases, for which a more sophisticated strategy is necessary due to the existing data dependencies. Experimental results are obtained by comparing the performance of the GPU implementation with a single-threaded CPU implementation, showing speedups of up to 15.41. A profiling of the algorithm is provided, demonstrating the high performance of the designed parallel entropy coding phase. The accuracy of the GPU implementation is presented, as well as the effect of the configuration parameters on performance. The convenience of using GPUs for on-board processing is demonstrated, and solutions to the potential difficulties encountered when accelerating hyperspectral compression algorithms are proposed, should space-qualified GPUs become a reality in the near future.

  10. Simulating 3-D lung dynamics using a programmable graphics processing unit.

    PubMed

    Santhanam, Anand P; Hamza-Lup, Felix G; Rolland, Jannick P

    2007-09-01

    Medical simulations of lung dynamics promise to be effective tools for teaching and training clinical and surgical procedures related to lungs. Their effectiveness may be greatly enhanced when visualized in an augmented reality (AR) environment. However, the computational requirements of AR environments limit the availability of the central processing unit (CPU) for simulating lung dynamics under different breathing conditions. In this paper, we present a method for computing lung deformations in real time by taking advantage of the programmable graphics processing unit (GPU), which saves CPU time for other AR-associated tasks such as tracking, communication, and interaction management. We consider an approach for simulating three-dimensional (3-D) lung dynamics using Green's formulation for the upright position, and we extend this approach to other orientations as well as the subsequent changes in breathing. Specifically, the proposed extension presents a computational optimization and its implementation on a GPU. Results show that the computational requirements for simulating the deformation of a 3-D lung model are significantly reduced with point-based rendering.

  11. High-Throughput Characterization of Porous Materials Using Graphics Processing Units.

    PubMed

    Kim, Jihan; Martin, Richard L; Rübel, Oliver; Haranczyk, Maciej; Smit, Berend

    2012-05-01

    We have developed a high-throughput graphics processing unit (GPU) code that can characterize a large database of crystalline porous materials. In our algorithm, the GPU is utilized to accelerate energy grid calculations, where the grid values represent interactions (i.e., Lennard-Jones + Coulomb potentials) between gas molecules (e.g., CH4 and CO2) and the materials' framework atoms. Using a parallel flood-fill central processing unit (CPU) algorithm, inaccessible regions inside the framework structures are identified and blocked based on their energy profiles. Finally, we compute the Henry coefficients and heats of adsorption through statistical Widom insertion Monte Carlo moves in the domain restricted to the accessible space. The code offers significant speedup over a single-core CPU code and allows us to characterize a set of porous materials at least an order of magnitude larger than those considered in earlier studies. For structures selected by such a prescreening algorithm, full adsorption isotherms can be calculated by conducting multiple Grand Canonical Monte Carlo (GCMC) simulations concurrently within the GPU.
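
    As a hedged sketch of the energy grid step described above, one thread per grid point can sum Lennard-Jones and Coulomb contributions over all framework atoms (the data layout, units, and absence of periodic boundaries are simplifying assumptions, not the authors' design):

```cuda
#include <cuda_runtime.h>
#include <math.h>

// One thread per grid point: interaction energy of a probe at that point
// with all framework atoms (Lennard-Jones 12-6 plus Coulomb; no cutoff,
// Ewald sum, or periodic images here; a real code would handle those).
__global__ void energyGrid(const float4* atoms,     // x, y, z, charge
                           const float2* ljParams,  // epsilon, sigma per atom
                           int nAtoms,
                           const float3* gridPts, float* energy, int nPts,
                           float probeCharge)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= nPts) return;

    float3 p = gridPts[i];
    float e = 0.0f;
    for (int a = 0; a < nAtoms; ++a) {
        float dx = atoms[a].x - p.x;
        float dy = atoms[a].y - p.y;
        float dz = atoms[a].z - p.z;
        float r2 = dx * dx + dy * dy + dz * dz;
        float r = sqrtf(r2);
        float sr6 = powf(ljParams[a].y, 6.0f) / (r2 * r2 * r2); // (sigma/r)^6
        e += 4.0f * ljParams[a].x * (sr6 * sr6 - sr6);          // LJ 12-6
        // Coulomb term; 332.0636 converts e^2/Angstrom to kcal/mol
        e += 332.0636f * probeCharge * atoms[a].w / r;
    }
    energy[i] = e;
}
```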

  12. Fast Monte Carlo simulations of ultrasound-modulated light using a graphics processing unit.

    PubMed

    Leung, Terence S; Powell, Samuel

    2010-01-01

    Ultrasound-modulated optical tomography (UOT) is based on "tagging" light in turbid media with focused ultrasound. In comparison to diffuse optical imaging, UOT can potentially offer better spatial resolution. The existing Monte Carlo (MC) model for simulating ultrasound-modulated light is central processing unit (CPU) based and has been employed in several UOT-related studies. We reimplemented the MC model with a graphics processing unit (GPU; Nvidia GeForce 9800) that can execute the algorithm up to 125 times faster than its CPU (Intel Core Quad) counterpart for a particular set of optical and acoustic parameters. We also show that the incorporation of ultrasound propagation in photon migration modeling increases the computational time considerably, by a factor of at least 6 in one case, even with a GPU. With slight adjustment to the code, MC simulations were also performed to demonstrate the effect of ultrasonic modulation on the speckle pattern generated by the light model (available as animation). This was computed in 4 s with our GPU implementation as compared to 290 s using the CPU.

  13. Particle-in-cell simulations with charge-conserving current deposition on graphic processing units

    NASA Astrophysics Data System (ADS)

    Ren, Chuang; Kong, Xianglong; Huang, Michael; Decyk, Viktor; Mori, Warren

    2011-10-01

    Recently, using CUDA, we have developed an electromagnetic particle-in-cell (PIC) code with charge-conserving current deposition for Nvidia graphics processing units (GPUs) (Kong et al., Journal of Computational Physics 230, 1676 (2011)). On a Tesla M2050 (Fermi) card, the GPU PIC code can achieve a one-particle-step process time of 1.2-3.2 ns in 2D and 2.3-7.2 ns in 3D, depending on plasma temperatures. In this talk we will discuss novel algorithms for GPU PIC, including a charge-conserving current deposition scheme with little branching and parallel particle sorting. These algorithms make efficient use of the GPU shared memory. We will also discuss how to replace the computation kernels of existing parallel CPU codes while keeping their parallel structures. This work was supported by the U.S. Department of Energy under Grant Nos. DE-FG02-06ER54879 and DE-FC02-04ER54789 and by the NSF under Grant Nos. PHY-0903797 and CCF-0747324.

  15. Graphics Processing Unit (GPU) Acceleration of the Goddard Earth Observing System Atmospheric Model

    NASA Technical Reports Server (NTRS)

    Putnam, Williama

    2011-01-01

    The Goddard Earth Observing System 5 (GEOS-5) is the atmospheric model used by the Global Modeling and Assimilation Office (GMAO) for a variety of applications, from long-term climate prediction at relatively coarse resolution, to data assimilation and numerical weather prediction, to very high-resolution cloud-resolving simulations. GEOS-5 is being ported to a graphics processing unit (GPU) cluster at the NASA Center for Climate Simulation (NCCS). By utilizing GPU co-processor technology, we expect to increase the throughput of GEOS-5 by at least an order of magnitude and accelerate the process of scientific exploration across all scales of global modeling, including: the large-scale, high-end application of non-hydrostatic, global, cloud-resolving modeling at 10- to 1-kilometer (km) global resolutions; intermediate-resolution seasonal climate and weather prediction at 50- to 25-km on small clusters of GPUs; and long-range, coarse-resolution climate modeling, enabled on a small box of GPUs for the individual researcher. After being ported to the GPU cluster, the primary physics components and the dynamical core of GEOS-5 have demonstrated a potential speedup of 15-40 times over conventional processor cores. Performance improvements of this magnitude reduce the required scalability of 1-km, global, cloud-resolving models from an unfathomable 6 million cores to an attainable 200,000 GPU-enabled cores.

  16. OCTGRAV: Sparse Octree Gravitational N-body Code on Graphics Processing Units

    NASA Astrophysics Data System (ADS)

    Gaburov, Evghenii; Bédorf, Jeroen; Portegies Zwart, Simon

    2010-10-01

    Octgrav is a new very fast tree-code which runs on massively parallel graphics processing units (GPUs) with the NVIDIA CUDA architecture. The algorithms are based on parallel-scan and sort methods. The tree construction and calculation of multipole moments are carried out on the host CPU, while the force calculation, which consists of tree walks and evaluation of interaction lists, is carried out on the GPU. In this way, a sustained performance of about 100 GFLOP/s and data transfer rates of about 50 GB/s are achieved. It takes about a second to compute forces on a million particles with an opening angle of θ ≈ 0.5. To test the performance and feasibility, we implemented the algorithms in CUDA in the form of a gravitational tree-code which completely runs on the GPU. The tree construction and traverse algorithms are portable to many-core devices which have support for the CUDA or OpenCL programming languages. The gravitational tree-code outperforms tuned CPU code during the tree construction and shows a performance improvement of more than a factor of 20 overall, resulting in a processing rate of more than 2.8 million particles per second. The code has a convenient user interface and is freely available for use.

  17. Accelerating Correlated Quantum Chemistry Calculations Using Graphical Processing Units and a Mixed Precision Matrix Multiplication Library.

    PubMed

    Olivares-Amaya, Roberto; Watson, Mark A; Edgar, Richard G; Vogt, Leslie; Shao, Yihan; Aspuru-Guzik, Alán

    2010-01-12

    Two new tools for the acceleration of computational chemistry codes using graphical processing units (GPUs) are presented. First, we propose a general black-box approach for the efficient GPU acceleration of matrix-matrix multiplications where the matrix size is too large for the whole computation to be held in the GPU's onboard memory. Second, we show how to improve the accuracy of matrix multiplications when using only single-precision GPU devices by proposing a heterogeneous computing model, whereby single- and double-precision operations are evaluated in a mixed fashion on the GPU and central processing unit, respectively. The utility of the library is illustrated for quantum chemistry with application to the acceleration of resolution-of-the-identity second-order Møller-Plesset perturbation theory calculations for molecules that we were previously unable to treat. In particular, for the 168-atom valinomycin molecule in a cc-pVDZ basis set, we observed speedups of 13.8, 7.8, and 10.1 times for single-, double-, and mixed-precision general matrix multiply (SGEMM, DGEMM, and MGEMM), respectively. The corresponding errors in the correlation energy were reduced from -10.0 to -1.2 kcal mol⁻¹ in going from SGEMM to MGEMM, while higher accuracy can be easily achieved with a different choice of cutoff parameter.

  18. Graphical Technique to Support the Teaching/Learning Process of Software Process Reference Models

    NASA Astrophysics Data System (ADS)

    Espinosa-Curiel, Ismael Edrein; Rodríguez-Jacobo, Josefina; Fernández-Zepeda, José Alberto

    In this paper, we propose a set of diagrams to visualize software process reference models (PRMs). The diagrams, called dimods, are a combination of visual and process modeling techniques such as rich pictures, mind maps, IDEF, and RAD diagrams. We show the use of this technique by designing a set of dimods for the Mexican Software Industry Process Model (MoProSoft). Additionally, we perform an evaluation of the usefulness of dimods. The result of the evaluation shows that dimods may be a support tool that facilitates the understanding, memorization, and learning of software PRMs in both software development organizations and universities. The results also show that dimods may have advantages over the traditional description methods for these types of models.

  19. VACTIV: A graphical dialog based program for an automatic processing of line and band spectra

    NASA Astrophysics Data System (ADS)

    Zlokazov, V. B.

    2013-05-01

    The program VACTIV (Visual ACTIV) has been developed for the automatic analysis of spectrum-like distributions, in particular gamma-ray or alpha spectra, and is a standard graphical dialog based Windows XX application, driven by menu, mouse, and keyboard. On the one hand, it is a conversion of the existing Fortran program ACTIV [1] to the DELPHI language; on the other hand, it is a transformation of the sequential syntax of Fortran programming to a new object-oriented style based on the organization of event interactions. New features implemented in the algorithms of both versions are the following: as the peak model, either an analytical function or a graphical curve can be used; the peak search algorithm can recognize not only Gauss peaks but also peaks of irregular form, both narrow (2-4 channels) and broad (50-100 channels); and the regularization technique in the fitting guarantees a stable solution in the most complicated cases of strongly overlapping or weak peaks. The graphical dialog interface of VACTIV is much more convenient than the batch mode of ACTIV. [1] V.B. Zlokazov, Computer Physics Communications, 28 (1982) 27-37. NEW VERSION PROGRAM SUMMARY. Program Title: VACTIV. Catalogue identifier: ABAC_v2_0. Licensing provisions: no. Programming language: DELPHI 5-7 Pascal. Computer: IBM PC series. Operating system: Windows XX. RAM: 1 MB. Keywords: Nuclear physics, spectrum decomposition, least squares analysis, graphical dialog, object-oriented programming. Classification: 17.6. Catalogue identifier of previous version: ABAC_v1_0. Journal reference of previous version: Comput. Phys. Commun. 28 (1982) 27. Does the new version supersede the previous version?: Yes. Nature of problem: VACTIV is intended for the precise analysis of arbitrary spectrum-like distributions, e.g. gamma-ray and X-ray spectra, and allows the user to carry out the full cycle of automatic processing of such spectra, i.e. calibration, automatic peak search

  1. Large-scale analytical Fourier transform of photomask layouts using graphics processing units

    NASA Astrophysics Data System (ADS)

    Sakamoto, Julia A.

    2015-10-01

    Compensation of lens-heating effects during the exposure scan in an optical lithographic system requires knowledge of the heating profile in the pupil of the projection lens. A necessary component in the accurate estimation of this profile is the total integrated distribution of light, relying on the squared modulus of the Fourier transform (FT) of the photomask layout for individual process layers. Requiring a layout representation in pixelated image format, the most common approach is to compute the FT numerically via the fast Fourier transform (FFT). However, the file size for a standard 26-mm × 33-mm mask with 5-nm pixels is an overwhelming 137 TB in single precision; the data importing process alone, prior to FFT computation, can render this method highly impractical. A more feasible solution is to handle layout data in a highly compact format with vertex locations of mask features (polygons), which correspond to elements in an integrated circuit, as well as pattern symmetries and repetitions (e.g., GDSII format). Provided the polygons can be decomposed into shapes for which analytical FT expressions are possible, the analytical approach dramatically reduces computation time and alleviates the burden of importing extensive mask data. Algorithms have been developed for importing and interpreting hierarchical layout data and computing the analytical FT on a graphics processing unit (GPU) for rapid parallel processing, not assuming incoherent imaging. Testing was performed on the active layer of a 392-μm × 297-μm virtual chip test structure with 43 substructures distributed over six hierarchical levels. The factors of improvement of the analytical over the numerical approach for importing layout data, performing CPU-GPU memory transfers, and executing the FT on a single NVIDIA Tesla K20X GPU were 1.6×10⁴, 4.9×10³, and 3.8×10³, respectively. Various ideas for algorithm enhancements will be discussed.
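
    As a concrete illustration of the analytical approach, the following sketch sums closed-form FTs of axis-aligned rectangles (a rectangle transforms to a product of sinc functions times a phase factor). It is a minimal numpy stand-in, not the paper's GPU code: the GDSII import, hierarchy handling, and parallelization are omitted, and the layout coordinates are hypothetical.

        import numpy as np

        def rect_ft(fx, fy, cx, cy, w, h):
            # Closed-form FT of an axis-aligned w-by-h rectangle centered at
            # (cx, cy). numpy's sinc is sin(pi*x)/(pi*x), so the rectangle
            # transforms to w*h*sinc(w*fx)*sinc(h*fy); the exponential is the
            # shift theorem for the off-center position.
            return (w * h * np.sinc(w * fx) * np.sinc(h * fy)
                    * np.exp(-2j * np.pi * (fx * cx + fy * cy)))

        # Hypothetical layout: a list of (cx, cy, w, h) rectangles.
        rects = [(0.0, 0.0, 2.0, 1.0), (3.0, 0.5, 1.0, 1.0)]
        fx, fy = np.meshgrid(np.linspace(-1, 1, 256), np.linspace(-1, 1, 256))
        F = sum(rect_ft(fx, fy, *r) for r in rects)
        intensity = np.abs(F) ** 2  # squared modulus feeds the heating estimate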

  2. Process and Object Interpretations of Vector Magnitude Mediated by Use of the Graphics Calculator.

    ERIC Educational Resources Information Center

    Forster, Patricia

    2000-01-01

    Analyzes the development of one student's understanding of vector magnitude and how her problem solving was mediated by use of the absolute value graphics calculator function. (Contains 35 references.) (Author/ASK)

  3. Density functional theory calculation on many-cores hybrid central processing unit-graphic processing unit architectures.

    PubMed

    Genovese, Luigi; Ospici, Matthieu; Deutsch, Thierry; Méhaut, Jean-François; Neelov, Alexey; Goedecker, Stefan

    2009-07-21

    We present the implementation of a full electronic structure calculation code on a hybrid parallel architecture with graphic processing units (GPUs). This implementation is performed on a free software code based on Daubechies wavelets. The code shows very good performance, systematic convergence properties, and an excellent efficiency on parallel computers. Our GPU-based acceleration fully preserves all these properties. In particular, the code is able to run on many cores which may or may not have a GPU associated, and thus on parallel and massively parallel hybrid machines. With double precision calculations, we achieve considerable speedups, ranging from a factor of 6 for the whole density functional theory code up to a factor of 20 for some operations.

  4. TMSEEG: A MATLAB-Based Graphical User Interface for Processing Electrophysiological Signals during Transcranial Magnetic Stimulation

    PubMed Central

    Atluri, Sravya; Frehlich, Matthew; Mei, Ye; Garcia Dominguez, Luis; Rogasch, Nigel C.; Wong, Willy; Daskalakis, Zafiris J.; Farzan, Faranak

    2016-01-01

    Concurrent recording of electroencephalography (EEG) during transcranial magnetic stimulation (TMS) is an emerging and powerful tool for studying brain health and function. Despite a growing interest in adaptation of TMS-EEG across neuroscience disciplines, its widespread utility is limited by signal processing challenges. These challenges arise due to the nature of TMS and the sensitivity of EEG to artifacts that often mask TMS-evoked potentials (TEPs). With an increase in the complexity of data processing methods and a growing interest in multi-site data integration, analysis of TMS-EEG data requires the development of a standardized method to recover TEPs from various sources of artifacts. This article introduces TMSEEG, an open-source MATLAB application comprising multiple algorithms organized to facilitate a step-by-step procedure for TMS-EEG signal processing. Using a modular design and an interactive graphical user interface (GUI), this toolbox aims to streamline TMS-EEG signal processing for both novice and experienced users. Specifically, TMSEEG provides: (i) targeted removal of TMS-induced and general EEG artifacts; (ii) a step-by-step modular workflow with flexibility to modify existing algorithms and add customized algorithms; (iii) a comprehensive display and quantification of artifacts; (iv) quality control check points with visual feedback of TEPs throughout the data processing workflow; and (v) capability to label and store a database of artifacts. In addition to these features, the software architecture of TMSEEG ensures minimal user effort in initial setup and configuration of parameters for each processing step. This is partly accomplished through a close integration with EEGLAB, a widely used open-source toolbox for EEG signal processing. In this article, we introduce TMSEEG, validate its features and demonstrate its application in extracting TEPs across several single- and multi-pulse TMS protocols. As the first open-source GUI-based pipeline

  5. Accelerating resolution-of-the-identity second-order Møller-Plesset quantum chemistry calculations with graphical processing units.

    PubMed

    Vogt, Leslie; Olivares-Amaya, Roberto; Kermes, Sean; Shao, Yihan; Amador-Bedolla, Carlos; Aspuru-Guzik, Alan

    2008-03-13

    The modification of a general purpose code for quantum mechanical calculations of molecular properties (Q-Chem) to use a graphical processing unit (GPU) is reported. A 4.3x speedup of the resolution-of-the-identity second-order Møller-Plesset perturbation theory (RI-MP2) execution time is observed in single point energy calculations of linear alkanes. The code modification is accomplished using the compute unified basic linear algebra subprograms (CUBLAS) library for an NVIDIA Quadro FX 5600 graphics card. Furthermore, speedups of other matrix algebra based electronic structure calculations are anticipated as a result of using a similar approach.

  6. Accelerated Molecular Dynamics Simulations with the AMOEBA Polarizable Force Field on Graphics Processing Units.

    PubMed

    Lindert, Steffen; Bucher, Denis; Eastman, Peter; Pande, Vijay; McCammon, J Andrew

    2013-11-12

    The accelerated molecular dynamics (aMD) method has recently been shown to enhance the sampling of biomolecules in molecular dynamics (MD) simulations, often by several orders of magnitude. Here, we describe an implementation of the aMD method for the OpenMM application layer that takes full advantage of graphics processing unit (GPU) computing. The aMD method is shown to work in combination with the AMOEBA polarizable force field (AMOEBA-aMD), allowing the simulation of long time-scale events with a polarizable force field. Benchmarks are provided to show that the AMOEBA-aMD method is efficiently implemented and produces accurate results in its standard parametrization. For the BPTI protein, we demonstrate that the protein structure described with AMOEBA remains stable even on the extended time scales accessed at high levels of acceleration. For the DNA repair metalloenzyme endonuclease IV, we show that the use of the AMOEBA force field is a significant improvement over fixed-charge models for describing the enzyme active site. The new AMOEBA-aMD method is publicly available (http://wiki.simtk.org/openmm/VirtualRepository) and promises to be interesting for studying complex systems that can benefit from both the use of a polarizable force field and enhanced sampling.
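
    For reference, the boost that defines aMD is compact. The sketch below applies the standard boost potential of Hamelberg et al., dV = (E - V)^2 / (alpha + E - V) for V < E, where the threshold E and tuning parameter alpha are user inputs. This is a generic illustration of the method, not the OpenMM/AMOEBA implementation described above.

        import numpy as np

        def amd_boost(V, E, alpha):
            # Standard aMD boost: below the threshold E the potential is
            # raised by dV = (E - V)^2 / (alpha + E - V), flattening basins
            # and enhancing sampling; above E it is untouched. dV is kept
            # for reweighting, where each frame carries a weight exp(+dV/kT).
            V = np.asarray(V, dtype=float)
            diff = np.maximum(E - V, 0.0)   # zero where V >= E
            dV = diff ** 2 / (alpha + diff)
            return V + dV, dV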

  7. Graphic processing unit accelerated real-time partially coherent beam generator

    NASA Astrophysics Data System (ADS)

    Ni, Xiaolong; Liu, Zhi; Chen, Chunyi; Jiang, Huilin; Fang, Hanhan; Song, Lujun; Zhang, Su

    2016-07-01

    A method of using liquid crystals (LCs) to generate a partially coherent beam in real time is described. An expression for generating a partially coherent beam is given and calculated using a graphic processing unit (GPU), i.e., the GeForce GTX 680. A liquid-crystal on silicon (LCOS) device with 256 × 256 pixels is used as the partially coherent beam generator (PCBG). An optimization method with partition convolution is used to improve the generating speed of our LC PCBG. The total time needed to generate a random phase map with a coherence width ranging from 0.015 mm to 1.5 mm is less than 2.4 ms for calculation and readout with the GPU; adding the time needed for the CPU to read and send data to the LCOS and the response time of the LC PCBG, the real-time partially coherent beam (PCB) generation frequency of our LC PCBG is up to 312 Hz. To our knowledge, it is the first real-time partially coherent beam generator. A series of experiments based on double-pinhole interference were performed. The results show that to generate a laser beam with a coherence width of 0.9 mm or 1.5 mm with a mean error of approximately 1%, the required RMS values are 0.021306 and 0.020883, and the required PV values are 0.073576 and 0.072998, respectively.

  8. Simulation of Coarse-Grained Protein-Protein Interactions with Graphics Processing Units.

    PubMed

    Tunbridge, Ian; Best, Robert B; Gain, James; Kuttel, Michelle M

    2010-11-01

    We report a hybrid parallel central and graphics processing units (CPU-GPU) implementation of a coarse-grained model for replica exchange Monte Carlo (REMC) simulations of protein assemblies. We describe the design, optimization, validation, and benchmarking of our algorithms, particularly the parallelization strategy, which is specific to the requirements of GPU hardware. Performance evaluation of our hybrid implementation shows scaled speedup as compared to a single-core CPU; reference simulations of small 100 residue proteins have a modest speedup of 4, while large simulations with thousands of residues are up to 1400 times faster. Importantly, the combination of coarse-grained models with highly parallel GPU hardware vastly increases the length- and time-scales accessible for protein simulation, making it possible to simulate much larger systems of interacting proteins than have previously been attempted. As a first step toward the simulation of the assembly of an entire viral capsid, we have demonstrated that the chosen coarse-grained model, together with REMC sampling, is capable of identifying the correctly bound structure, for a pair of fragments from the human hepatitis B virus capsid. Our parallel solution can easily be generalized to other interaction functions and other types of macromolecules and has implications for the parallelization of similar N-body problems that require random access lookups. PMID:26617104

  9. Parallel design of JPEG-LS encoder on graphics processing units

    NASA Astrophysics Data System (ADS)

    Duan, Hao; Fang, Yong; Huang, Bormin

    2012-01-01

    With recent technical advances in graphic processing units (GPUs), GPUs have outperformed CPUs in terms of compute capability and memory bandwidth. Many successful GPU applications to high performance computing have been reported. JPEG-LS is an ISO/IEC standard for lossless image compression which utilizes adaptive context modeling and run-length coding to improve compression ratio. However, adaptive context modeling causes data dependency among adjacent pixels and the run-length coding has to be performed in a sequential way. Hence, using JPEG-LS to compress large-volume hyperspectral image data is quite time-consuming. We implement an efficient parallel JPEG-LS encoder for lossless hyperspectral compression on an NVIDIA GPU using the compute unified device architecture (CUDA) programming technology. We use the block parallel strategy, as well as such CUDA techniques as coalesced global memory access, parallel prefix sum, and asynchronous data transfer. We also show the relation between GPU speedup and AVIRIS block size, as well as the relation between compression ratio and AVIRIS block size. When AVIRIS images are divided into blocks, each with 64×64 pixels, we gain the best GPU performance with 26.3x speedup over its original CPU code.

  10. Accelerating image reconstruction in three-dimensional optoacoustic tomography on graphics processing units

    PubMed Central

    Wang, Kun; Huang, Chao; Kao, Yu-Jiun; Chou, Cheng-Ying; Oraevsky, Alexander A.; Anastasio, Mark A.

    2013-01-01

    Purpose: Optoacoustic tomography (OAT) is inherently a three-dimensional (3D) inverse problem. However, most studies of OAT image reconstruction still employ two-dimensional imaging models. One important reason is because 3D image reconstruction is computationally burdensome. The aim of this work is to accelerate existing image reconstruction algorithms for 3D OAT by use of parallel programming techniques. Methods: Parallelization strategies are proposed to accelerate a filtered backprojection (FBP) algorithm and two different pairs of projection/backprojection operations that correspond to two different numerical imaging models. The algorithms are designed to fully exploit the parallel computing power of graphics processing units (GPUs). In order to evaluate the parallelization strategies for the projection/backprojection pairs, an iterative image reconstruction algorithm is implemented. Computer simulation and experimental studies are conducted to investigate the computational efficiency and numerical accuracy of the developed algorithms. Results: The GPU implementations improve the computational efficiency by factors of 1000, 125, and 250 for the FBP algorithm and the two pairs of projection/backprojection operators, respectively. Accurate images are reconstructed by use of the FBP and iterative image reconstruction algorithms from both computer-simulated and experimental data. Conclusions: Parallelization strategies for 3D OAT image reconstruction are proposed for the first time. These GPU-based implementations significantly reduce the computational time for 3D image reconstruction, complementing our earlier work on 3D OAT iterative image reconstruction. PMID:23387778

  11. GAMER: A GRAPHIC PROCESSING UNIT ACCELERATED ADAPTIVE-MESH-REFINEMENT CODE FOR ASTROPHYSICS

    SciTech Connect

    Schive, H.-Y.; Tsai, Y.-C.; Chiueh, Tzihong

    2010-02-01

    We present the newly developed code, GPU-accelerated Adaptive-MEsh-Refinement code (GAMER), which adopts a novel approach in improving the performance of adaptive-mesh-refinement (AMR) astrophysical simulations by a large factor with the use of the graphic processing unit (GPU). The AMR implementation is based on a hierarchy of grid patches with an oct-tree data structure. We adopt a three-dimensional relaxing total variation diminishing scheme for the hydrodynamic solver and a multi-level relaxation scheme for the Poisson solver. Both solvers have been implemented in GPU, by which hundreds of patches can be advanced in parallel. The computational overhead associated with the data transfer between the CPU and GPU is carefully reduced by utilizing the capability of asynchronous memory copies in GPU, and the computing time of the ghost-zone values for each patch is diminished by overlapping it with the GPU computations. We demonstrate the accuracy of the code by performing several standard test problems in astrophysics. GAMER is a parallel code that can be run in a multi-GPU cluster system. We measure the performance of the code by performing purely baryonic cosmological simulations in different hardware implementations, in which detailed timing analyses provide comparison between the computations with and without GPU(s) acceleration. Maximum speed-up factors of 12.19 and 10.47 are demonstrated using one GPU with 4096{sup 3} effective resolution and 16 GPUs with 8192{sup 3} effective resolution, respectively.

  12. Graphics processing unit accelerated one-dimensional blood flow computation in the human arterial tree.

    PubMed

    Itu, Lucian; Sharma, Puneet; Kamen, Ali; Suciu, Constantin; Comaniciu, Dorin

    2013-12-01

    One-dimensional blood flow models have been used extensively for computing pressure and flow waveforms in the human arterial circulation. We propose an improved numerical implementation based on a graphics processing unit (GPU) for accelerating the execution time of the one-dimensional model. A novel parallel hybrid CPU-GPU algorithm with compact copy operations (PHCGCC) and a parallel GPU-only (PGO) algorithm are developed, which are compared against previously introduced PHCG versions, a single-threaded CPU-only algorithm, and a multi-threaded CPU-only algorithm. Different second-order numerical schemes (Lax-Wendroff and Taylor series) are evaluated for the numerical solution of the one-dimensional model, and the computational setups include physiologically motivated non-periodic (Windkessel) and periodic (structured tree) boundary conditions (BC) and elastic and viscoelastic wall laws. Both the PHCGCC and the PGO implementations improve the execution time significantly. The speed-up values over the single-threaded CPU-only implementation range from 5.26× to 8.10×, whereas the speed-up values over the multi-threaded CPU-only implementation range from 1.84× to 4.02×. The PHCGCC algorithm performs best for an elastic wall law with non-periodic BC and for viscoelastic wall laws, whereas the PGO algorithm performs best for an elastic wall law with periodic BC.
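
    Of the two schemes named above, Lax-Wendroff is the easier to sketch. The following minimal numpy version advances scalar linear advection with periodic boundaries; the actual model advances vessel area and flow rate with the same second-order stencil structure.

        import numpy as np

        def lax_wendroff_step(u, c):
            # One Lax-Wendroff step for u_t + a*u_x = 0 with Courant number
            # c = a*dt/dx and periodic boundaries; second order in space and
            # time, like the blood-flow schemes compared above.
            up = np.roll(u, -1)  # u_{i+1}
            um = np.roll(u, +1)  # u_{i-1}
            return u - 0.5 * c * (up - um) + 0.5 * c**2 * (up - 2.0 * u + um)

        x = np.linspace(0.0, 1.0, 200)
        u = np.exp(-((x - 0.3) / 0.05) ** 2)  # initial pressure-like pulse
        for _ in range(100):
            u = lax_wendroff_step(u, c=0.5)   # stable for |c| <= 1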

  13. High-throughput Characterization of Porous Materials Using Graphics Processing Units

    SciTech Connect

    Kim, Jihan; Martin, Richard L.; Ruebel, Oliver; Haranczyk, Maciej; Smit, Berend

    2012-03-19

    We have developed a high-throughput graphics processing unit (GPU) code that can characterize a large database of crystalline porous materials. In our algorithm, the GPU is utilized to accelerate energy grid calculations where the grid values represent interactions (i.e., Lennard-Jones + Coulomb potentials) between gas molecules (i.e., CH₄ and CO₂) and the material's framework atoms. Using a parallel flood fill CPU algorithm, inaccessible regions inside the framework structures are identified and blocked based on their energy profiles. Finally, we compute the Henry coefficients and heats of adsorption through statistical Widom insertion Monte Carlo moves in the domain restricted to the accessible space. The code offers significant speedup over a single-core CPU code and allows us to characterize a set of porous materials at least an order of magnitude larger than ones considered in earlier studies. For structures selected from such a prescreening algorithm, full adsorption isotherms can be calculated by conducting multiple grand canonical Monte Carlo simulations concurrently within the GPU.
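
    The Widom step at the end of the pipeline is straightforward to sketch. The version below averages Boltzmann factors of random single-site Lennard-Jones insertions (hypothetical parameters, no cutoff or periodic images, no blocking of inaccessible regions), giving a quantity proportional to the Henry coefficient.

        import numpy as np

        rng = np.random.default_rng(0)
        box = 30.0                       # cubic cell edge, angstrom (hypothetical)
        frame = rng.uniform(0.0, box, size=(500, 3))  # framework atom positions
        eps, sigma, kT = 1.0, 3.4, 2.5   # LJ parameters and kT, arbitrary units

        def insertion_energy(pos):
            # Single-site Lennard-Jones energy of a probe against all
            # framework atoms; deliberately minimal.
            r = np.linalg.norm(frame - pos, axis=1)
            sr6 = (sigma / r) ** 6
            return 4.0 * eps * np.sum(sr6 * sr6 - sr6)

        # Widom insertions: the Henry coefficient is proportional to <exp(-U/kT)>.
        samples = [np.exp(-insertion_energy(rng.uniform(0.0, box, 3)) / kT)
                   for _ in range(20000)]
        print("<exp(-U/kT)> =", np.mean(samples))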

  14. The application of projected conjugate gradient solvers on graphical processing units

    SciTech Connect

    Lin, Youzuo; Renaut, Rosemary

    2011-01-26

    Graphical processing units introduce the capability for large-scale computation at the desktop. The numerical results presented verify that the efficiencies and accuracies of basic linear algebra subroutines of all levels are comparable when implemented in CUDA and Jacket, but experimental results demonstrate that the level-three basic linear algebra subroutines offer the greatest potential for improving the efficiency of basic numerical algorithms. We consider the solution of a set of linear equations with multiple right-hand sides using Krylov subspace-based solvers. For this case it is more efficient to use a block implementation of the conjugate gradient algorithm than to solve each system independently. Jacket is used for the implementation. Furthermore, including projection from one system to another improves efficiency. A relevant example, for which simulated results are provided, is the reconstruction of a three-dimensional medical image volume acquired from a positron emission tomography scanner. The efficiency of the reconstruction is improved by using projection across nearby slices.
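
    A minimal dense-matrix sketch of block conjugate gradient (in the spirit of O'Leary's block CG) is given below, assuming a symmetric positive definite A. The small dense solves let the right-hand sides share Krylov information and turn the work into level-three operations, the kind found fastest above; the projection between nearby slices is omitted.

        import numpy as np

        def block_cg(A, B, tol=1e-8, max_iter=500):
            # Block conjugate gradient for A X = B with several right-hand
            # sides. The k-by-k solves (k = number of columns of B) couple
            # the systems; the heavy work is matrix-matrix products.
            X = np.zeros_like(B)
            R = B - A @ X
            P = R.copy()
            for _ in range(max_iter):
                if np.linalg.norm(R) < tol:
                    break
                AP = A @ P
                alpha = np.linalg.solve(P.T @ AP, R.T @ R)
                X = X + P @ alpha
                R_new = R - AP @ alpha
                beta = np.linalg.solve(R.T @ R, R_new.T @ R_new)
                P = R_new + P @ beta
                R = R_new
            return X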

  15. Density-fitted singles and doubles coupled cluster on graphics processing units

    SciTech Connect

    Sherrill, David; Sumpter, Bobby G; DePrince, III, A. Eugene

    2014-01-01

    We adapt an algorithm for singles and doubles coupled cluster (CCSD) that uses density fitting (DF) or Cholesky decomposition (CD) in the construction and contraction of all electron repulsion integrals (ERIs) for use on heterogeneous compute nodes consisting of a multicore CPU and at least one graphics processing unit (GPU). The use of approximate 3-index ERIs ameliorates two of the major difficulties in designing scientific algorithms for GPUs: (i) the extremely limited global memory on the devices and (ii) the overhead associated with data motion across the PCI bus. For the benzene trimer described by an aug-cc-pVDZ basis set, the use of a single NVIDIA Tesla C2070 (Fermi) GPU accelerates a CD-CCSD computation by a factor of 2.1, relative to the multicore CPU-only algorithm that uses 6 highly efficient Intel Core i7-3930K CPU cores. The use of two Fermis provides an acceleration of 2.89, which is comparable to that observed when using a single NVIDIA Kepler K20c GPU (2.73).

  16. Seismic interpretation using Support Vector Machines implemented on Graphics Processing Units

    SciTech Connect

    Kuzma, H A; Rector, J W; Bremer, D

    2006-06-22

    Support Vector Machines (SVMs) estimate lithologic properties of rock formations from seismic data by interpolating between known models using synthetically generated model/data pairs. SVMs are related to kriging and radial basis function neural networks. In our study, we train an SVM to approximate an inverse to the Zoeppritz equations. Training models are sampled from distributions constructed from well-log statistics. Training data is computed via a physically realistic forward modeling algorithm. In our experiments, each training data vector is a set of seismic traces similar to a 2-d image. The SVM returns a model given by a weighted comparison of the new data to each training data vector. The method of comparison is given by a kernel function which implicitly transforms data into a high-dimensional feature space and performs a dot-product. The feature space of a Gaussian kernel is made up of sines and cosines and so is appropriate for band-limited seismic problems. Training an SVM involves estimating a set of weights from the training model/data pairs. It is designed to be an easy problem; at worst it is a quadratic programming problem on the order of the size of the training set. By implementing the slowest part of our SVM algorithm on a graphics processing unit (GPU), we improve the speed of the algorithm by two orders of magnitude. Our SVM/GPU combination achieves results that are similar to those of conventional iterative inversion in fractions of the time.
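
    The prediction step described above, a kernel-weighted comparison of new data against each training vector, can be sketched briefly. For compactness the sketch estimates the weights by kernel ridge regression rather than by solving the SVM quadratic program, and all data are randomly generated placeholders.

        import numpy as np

        def gaussian_kernel(X1, X2, gamma):
            # K[i, j] = exp(-gamma * ||x1_i - x2_j||^2): the implicit feature
            # space is built from sines and cosines, suiting band-limited data.
            d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
            return np.exp(-gamma * d2)

        rng = np.random.default_rng(1)
        D_train = rng.normal(size=(200, 64))  # placeholder "seismic" data vectors
        m_train = rng.normal(size=(200, 3))   # placeholder model parameters
        K = gaussian_kernel(D_train, D_train, gamma=0.1)
        # Ridge-regularized weights; an SVM would instead obtain sparser
        # weights from a quadratic program.
        W = np.linalg.solve(K + 1e-6 * np.eye(len(K)), m_train)

        def predict(D_new):
            # Weighted comparison of new data against every training vector.
            return gaussian_kernel(D_new, D_train, gamma=0.1) @ W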

  17. Graphics processing unit (GPU)-based computation of heat conduction in thermally anisotropic solids

    NASA Astrophysics Data System (ADS)

    Nahas, C. A.; Balasubramaniam, Krishnan; Rajagopal, Prabhu

    2013-01-01

    Numerical modeling of anisotropic media is a computationally intensive task, since the physical properties differ along different directions and bring additional complexity to the field problem. Largely used in the aerospace industry because of their lightweight nature, composite materials are a very good example of thermally anisotropic media. With advancements in video gaming technology, parallel processors are much cheaper today, and accessibility to higher-end graphical processing devices has increased dramatically over the past couple of years. Since these massively parallel GPUs are very good at handling floating point arithmetic, they provide a new platform for engineers and scientists to accelerate their numerical models using commodity hardware. In this paper we implement a parallel finite difference model of thermal diffusion through anisotropic media using NVIDIA CUDA (Compute Unified Device Architecture). We use the NVIDIA GeForce GTX 560 Ti as our primary computing device, which consists of 384 CUDA cores clocked at 1645 MHz, with a standard desktop PC as the host platform. We compare the results against a standard CPU implementation for accuracy and speed and draw implications for simulation using the GPU paradigm.
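
    A minimal explicit finite-difference update for anisotropic conduction is sketched below, with different diffusivities along x and y (a diagonal anisotropy tensor; values hypothetical). A full model would add off-diagonal terms for arbitrary fiber orientations; the same per-cell stencil is what maps one-thread-per-cell onto CUDA.

        import numpy as np

        nx = ny = 128
        dx = dy = 1.0e-3                        # m
        kx, ky = 4.0e-6, 5.0e-7                 # thermal diffusivities, m^2/s
        dt = 0.2 / (kx / dx**2 + ky / dy**2)    # inside the stability limit
        T = np.zeros((ny, nx))
        T[ny // 2, nx // 2] = 1000.0            # hot spot

        def step(T):
            # Second-order central differences with periodic wrap (np.roll).
            Txx = (np.roll(T, 1, axis=1) - 2 * T + np.roll(T, -1, axis=1)) / dx**2
            Tyy = (np.roll(T, 1, axis=0) - 2 * T + np.roll(T, -1, axis=0)) / dy**2
            return T + dt * (kx * Txx + ky * Tyy)

        for _ in range(500):
            T = step(T)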

  18. Acceleration of High Angular Momentum Electron Repulsion Integrals and Integral Derivatives on Graphics Processing Units.

    PubMed

    Miao, Yipu; Merz, Kenneth M

    2015-04-14

    We present an efficient implementation of ab initio self-consistent field (SCF) energy and gradient calculations that run on Compute Unified Device Architecture (CUDA) enabled graphical processing units (GPUs) using recurrence relations. We first discuss the machine-generated code that calculates the electron-repulsion integrals (ERIs) for different ERI types. Next we describe the porting of the SCF gradient calculation to GPUs, which results in an acceleration of the computation of the first-order derivative of the ERIs. However, with the current version of CUDA and generation of NVIDIA GPUs, only s-, p-, and d-type ERIs and s- and p-type derivatives could be executed simultaneously on GPUs using a previously described algorithm [Miao and Merz, J. Chem. Theory Comput. 2013, 9, 965-976]. Hence, we developed an algorithm to compute f-type ERIs and d-type ERI derivatives on GPUs. Our benchmarks show that GPU-enabled ERI and ERI derivative computation yielded speedups of 10-18 times relative to traditional CPU execution. An accuracy analysis using double-precision calculations demonstrates that the overall accuracy is satisfactory for most applications. PMID:26574356

  19. Dynamic Precision for Electron Repulsion Integral Evaluation on Graphical Processing Units (GPUs).

    PubMed

    Luehr, Nathan; Ufimtsev, Ivan S; Martínez, Todd J

    2011-04-12

    It has recently been demonstrated that novel streaming architectures found in consumer video gaming hardware such as graphical processing units (GPUs) are well-suited to a broad range of computations including electronic structure theory (quantum chemistry). Although recent GPUs have developed robust support for double precision arithmetic, they continue to provide 2-8× more hardware units for single precision. In order to maximize performance on GPU architectures, we present a technique of dynamically selecting double or single precision evaluation for electron repulsion integrals (ERIs) in Hartree-Fock and density functional self-consistent field (SCF) calculations. We show that precision error can be effectively controlled by evaluating only the largest integrals in double precision. By dynamically scaling the precision cutoff over the course of the SCF procedure, we arrive at a scheme that minimizes the number of double precision integral evaluations for any desired accuracy. This dynamic precision scheme is shown to be effective for an array of molecules ranging in size from 20 to nearly 2000 atoms. PMID:26606344
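
    The core selection rule is easy to sketch: evaluate only the largest contributions in double precision. In the real code a Schwarz-type bound classifies integrals before evaluation and the cutoff is scaled over the SCF iterations; in this stand-in, precomputed magnitudes play the role of those estimates.

        import numpy as np

        def mixed_precision_sum(terms, cutoff):
            # Evaluate the largest contributions in double precision and the
            # remainder in single precision; the precision error is controlled
            # by the cutoff, which the SCF loop would tighten as it converges.
            terms = np.asarray(terms)
            big = np.abs(terms) >= cutoff
            total = terms[big].astype(np.float64).sum()
            total += terms[~big].astype(np.float32).sum(dtype=np.float32)
            return total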

  20. Graphics processing unit accelerated one-dimensional blood flow computation in the human arterial tree.

    PubMed

    Itu, Lucian; Sharma, Puneet; Kamen, Ali; Suciu, Constantin; Comaniciu, Dorin

    2013-12-01

    One-dimensional blood flow models have been used extensively for computing pressure and flow waveforms in the human arterial circulation. We propose an improved numerical implementation based on a graphics processing unit (GPU) for accelerating the execution time of the one-dimensional model. A novel parallel hybrid CPU-GPU algorithm with compact copy operations (PHCGCC) and a parallel GPU-only (PGO) algorithm are developed, which are compared against previously introduced PHCG versions, a single-threaded CPU-only algorithm, and a multi-threaded CPU-only algorithm. Different second-order numerical schemes (Lax-Wendroff and Taylor series) are evaluated for the numerical solution of the one-dimensional model, and the computational setups include physiologically motivated non-periodic (Windkessel) and periodic (structured tree) boundary conditions (BC) and elastic and viscoelastic wall laws. Both the PHCGCC and the PGO implementations improve the execution time significantly. The speed-up values over the single-threaded CPU-only implementation range from 5.26× to 8.10×, whereas the speed-up values over the multi-threaded CPU-only implementation range from 1.84× to 4.02×. The PHCGCC algorithm performs best for an elastic wall law with non-periodic BC and for viscoelastic wall laws, whereas the PGO algorithm performs best for an elastic wall law with periodic BC. PMID:24009129

  1. Accelerating frequency-domain diffuse optical tomographic image reconstruction using graphics processing units.

    PubMed

    Prakash, Jaya; Chandrasekharan, Venkittarayan; Upendra, Vishwajith; Yalavarthy, Phaneendra K

    2010-01-01

    Diffuse optical tomographic image reconstruction uses advanced numerical models that are too computationally costly to run in real time. Graphics processing units (GPUs) offer massive parallelization on the desktop that can accelerate these computations. An open-source GPU-accelerated linear algebra library package is used to compute the most intensive matrix-matrix calculations and matrix decompositions that arise in solving the system of linear equations. These open-source functions were integrated into the existing frequency-domain diffuse optical image reconstruction algorithms to evaluate the acceleration capability of the GPUs (NVIDIA Tesla C1060) with increasing reconstruction problem sizes. These studies indicate that single-precision computations are sufficient for diffuse optical tomographic image reconstruction. For three-dimensional reconstruction, where the problem is more underdetermined, the acceleration per iteration can be up to a factor of 40 with GPUs compared to traditional CPUs, making GPUs more attractive in clinical settings. The current limitation of these GPUs is the available onboard memory (4 GB), which restricts the reconstruction of large sets of optical parameters (more than 13,377).

  2. Interactive Computing and Graphics in Undergraduate Digital Signal Processing. Microcomputing Working Paper Series F 84-9.

    ERIC Educational Resources Information Center

    Onaral, Banu; And Others

    This report describes the development of a Drexel University electrical and computer engineering course on digital filter design that used interactive computing and graphics, and was one of three courses in a senior-level sequence on digital signal processing (DSP). Interactive and digital analysis/design routines and the interconnection of these…

  3. Fast, multi-channel real-time processing of signals with microsecond latency using graphics processing units

    SciTech Connect

    Rath, N.; Levesque, J. P.; Mauel, M. E.; Navratil, G. A.; Peng, Q.; Kato, S.

    2014-04-15

    Fast, digital signal processing (DSP) has many applications. Typical hardware options for performing DSP are field-programmable gate arrays (FPGAs), application-specific integrated DSP chips, or general purpose personal computer systems. This paper presents a novel DSP platform that has been developed for feedback control on the HBT-EP tokamak device. The system runs all signal processing exclusively on a Graphics Processing Unit (GPU) to achieve real-time performance with latencies below 8 μs. Signals are transferred into and out of the GPU using PCI Express peer-to-peer direct-memory-access transfers without involvement of the central processing unit or host memory. Tests were performed on the feedback control system of the HBT-EP tokamak using forty 16-bit floating point inputs and outputs each and a sampling rate of up to 250 kHz. Signals were digitized by a D-TACQ ACQ196 module, processing done on an NVIDIA GTX 580 GPU programmed in CUDA, and analog output was generated by D-TACQ AO32CPCI modules.

  4. Fast, multi-channel real-time processing of signals with microsecond latency using graphics processing units

    NASA Astrophysics Data System (ADS)

    Rath, N.; Kato, S.; Levesque, J. P.; Mauel, M. E.; Navratil, G. A.; Peng, Q.

    2014-04-01

    Fast, digital signal processing (DSP) has many applications. Typical hardware options for performing DSP are field-programmable gate arrays (FPGAs), application-specific integrated DSP chips, or general purpose personal computer systems. This paper presents a novel DSP platform that has been developed for feedback control on the HBT-EP tokamak device. The system runs all signal processing exclusively on a Graphics Processing Unit (GPU) to achieve real-time performance with latencies below 8 μs. Signals are transferred into and out of the GPU using PCI Express peer-to-peer direct-memory-access transfers without involvement of the central processing unit or host memory. Tests were performed on the feedback control system of the HBT-EP tokamak using forty 16-bit floating point inputs and outputs each and a sampling rate of up to 250 kHz. Signals were digitized by a D-TACQ ACQ196 module, processing done on an NVIDIA GTX 580 GPU programmed in CUDA, and analog output was generated by D-TACQ AO32CPCI modules.

  5. Fast, multi-channel real-time processing of signals with microsecond latency using graphics processing units.

    PubMed

    Rath, N; Kato, S; Levesque, J P; Mauel, M E; Navratil, G A; Peng, Q

    2014-04-01

    Fast, digital signal processing (DSP) has many applications. Typical hardware options for performing DSP are field-programmable gate arrays (FPGAs), application-specific integrated DSP chips, or general purpose personal computer systems. This paper presents a novel DSP platform that has been developed for feedback control on the HBT-EP tokamak device. The system runs all signal processing exclusively on a Graphics Processing Unit (GPU) to achieve real-time performance with latencies below 8 μs. Signals are transferred into and out of the GPU using PCI Express peer-to-peer direct-memory-access transfers without involvement of the central processing unit or host memory. Tests were performed on the feedback control system of the HBT-EP tokamak using forty 16-bit floating point inputs and outputs each and a sampling rate of up to 250 kHz. Signals were digitized by a D-TACQ ACQ196 module, processing done on an NVIDIA GTX 580 GPU programmed in CUDA, and analog output was generated by D-TACQ AO32CPCI modules.

  6. Mobile Ultrasound Plane Wave Beamforming on iPhone or iPad using Metal- based GPU Processing

    NASA Astrophysics Data System (ADS)

    Hewener, Holger J.; Tretbar, Steffen H.

    Mobile and cost-effective ultrasound devices are being used in point-of-care scenarios and in the trauma room. To reduce the costs of such devices, we have already presented the possibilities of consumer devices like the Apple iPad for full signal processing of raw data for ultrasound image generation. Using technologies like plane wave imaging to generate a full image with only one excitation/reception event, the acquisition times and power consumption of ultrasound imaging can be reduced for low-power mobile devices based on consumer electronics, realizing the transition from FPGA- or ASIC-based beamforming to more flexible software beamforming. The massively parallel beamforming processing can be done with the Apple framework "Metal" for advanced graphics and general-purpose GPU processing on the iOS platform. We were able to integrate the beamforming reconstruction into our mobile ultrasound processing application with imaging rates up to 70 Hz on iPad Air 2 hardware.
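
    Plane-wave delay-and-sum beamforming is the kernel being parallelized. The numpy sketch below computes, for each pixel, the two-way delay of a 0-degree plane-wave transmit and sums the delayed channel samples; on the device, each pixel would be one compute thread of a Metal kernel. Array shapes and sampling parameters are assumptions.

        import numpy as np

        def das_plane_wave(rf, x_elem, x_pix, z_pix, c, fs):
            # Delay-and-sum for one 0-degree plane-wave shot. rf has shape
            # (n_elements, n_samples). For a pixel at (x, z), the delay is
            # the transmit time z/c plus the receive time
            # sqrt(z^2 + (x - xe)^2)/c for each element at lateral position xe.
            n_elem, n_samp = rf.shape
            img = np.zeros((len(z_pix), len(x_pix)))
            for iz, z in enumerate(z_pix):
                for ix, x in enumerate(x_pix):
                    delays = (z + np.sqrt(z**2 + (x - x_elem) ** 2)) / c
                    idx = np.rint(delays * fs).astype(int)
                    ok = idx < n_samp
                    img[iz, ix] = rf[np.flatnonzero(ok), idx[ok]].sum()
            return img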

  7. Simulating data processing for an Advanced Ion Mobility Mass Spectrometer

    SciTech Connect

    Chavarría-Miranda, Daniel; Clowers, Brian H.; Anderson, Gordon A.; Belov, Mikhail E.

    2007-11-03

    We have designed and implemented a Cray XD-1-based simulation of data capture and signal processing for an advanced Ion Mobility mass spectrometer (Hadamard transform Ion Mobility). Our simulation is a hybrid application that uses both an FPGA component and a CPU-based software component to simulate Ion Mobility mass spectrometry data processing. The FPGA component includes data capture and accumulation, as well as a more sophisticated deconvolution algorithm based on a PNNL-developed enhancement to standard Hadamard transform Ion Mobility spectrometry. The software portion is in charge of streaming data to the FPGA and collecting results. We expect the computational and memory addressing logic of the FPGA component to be portable to an instrument-attached FPGA board that can be interfaced with a Hadamard transform Ion Mobility mass spectrometer.
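
    The algebra behind Hadamard-transform deconvolution fits in a few lines. The dense-matrix sketch below encodes a sparse arrival-time spectrum with a Sylvester Hadamard matrix and recovers it exactly; real instruments use a pseudo-random gate sequence and a fast transform, and the PNNL enhancement mentioned above is not reproduced here.

        import numpy as np
        from scipy.linalg import hadamard

        # y = H x multiplexes the arrival-time spectrum x; since H @ H.T = n*I
        # for a Hadamard matrix, the spectrum is recovered as x = H.T @ y / n.
        n = 256
        H = hadamard(n)
        x = np.zeros(n)
        x[[40, 90, 91, 200]] = [5.0, 2.0, 1.5, 3.0]  # sparse ion arrivals
        y = H @ x                 # multiplexed measurement
        x_rec = H.T @ y / n       # deconvolution
        assert np.allclose(x_rec, x)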

  8. In-Situ Statistical Analysis of Autotune Simulation Data using Graphical Processing Units

    SciTech Connect

    Ranjan, Niloo; Sanyal, Jibonananda; New, Joshua Ryan

    2013-08-01

    Developing accurate building energy simulation models to assist energy efficiency at speed and scale is one of the research goals of the Whole-Building and Community Integration group, which is part of the Building Technologies Research and Integration Center (BTRIC) at Oak Ridge National Laboratory (ORNL). The aim of the Autotune project is to speed up the automated calibration of building energy models to match measured utility or sensor data. The workflow of this project takes input parameters and runs EnergyPlus simulations on Oak Ridge Leadership Computing Facility's (OLCF) computing resources such as Titan, the world's second fastest supercomputer. Multiple simulations run in parallel on nodes having 16 processors each and a Graphics Processing Unit (GPU). Each node produces a 5.7 GB output file comprising 256 files from 64 simulations. Four types of output data, covering monthly, daily, hourly, and 15-minute time steps for each annual simulation, are produced. A total of 270 TB+ of data has been produced. In this project, the simulation data is statistically analyzed in situ using GPUs while annual simulations are being computed on the traditional processors. Titan, with its recent addition of 18,688 Compute Unified Device Architecture (CUDA) capable NVIDIA GPUs, has greatly extended its capability for massively parallel data processing. CUDA is used along with C/MPI to calculate statistical metrics such as sum, mean, variance, and standard deviation, leveraging GPU acceleration. The workflow developed in this project produces statistical summaries of the data, which reduces by multiple orders of magnitude the time and amount of data that needs to be stored. These statistical capabilities are anticipated to be useful for sensitivity analysis of EnergyPlus simulations.
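
    A single-pass (Welford-style) update is one way to compute such statistics in situ without storing raw output. The sketch below is a plain-Python stand-in for the CUDA/MPI reduction described above.

        import numpy as np

        def streaming_stats(chunks):
            # Welford's single-pass update: consume output chunk by chunk and
            # keep only the running count, mean, and M2, so the raw data never
            # needs to be stored.
            n, mean, m2 = 0, 0.0, 0.0
            for chunk in chunks:
                for x in np.asarray(chunk, dtype=float).ravel():
                    n += 1
                    delta = x - mean
                    mean += delta / n
                    m2 += delta * (x - mean)
            var = m2 / (n - 1) if n > 1 else 0.0
            return {"n": n, "sum": n * mean, "mean": mean,
                    "var": var, "std": var ** 0.5}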

  9. Computing the Density Matrix in Electronic Structure Theory on Graphics Processing Units.

    PubMed

    Cawkwell, M J; Sanville, E J; Mniszewski, S M; Niklasson, Anders M N

    2012-11-13

    The self-consistent solution of a Schrödinger-like equation for the density matrix is a critical and computationally demanding step in quantum-based models of interatomic bonding. This step was tackled historically via the diagonalization of the Hamiltonian. We have investigated the performance and accuracy of the second-order spectral projection (SP2) algorithm for the computation of the density matrix via a recursive expansion of the Fermi operator in a series of generalized matrix-matrix multiplications. We demonstrate that owing to its simplicity, the SP2 algorithm [Niklasson, A. M. N. Phys. Rev. B 2002, 66, 155115] is exceptionally well suited to implementation on graphics processing units (GPUs). The performance in double and single precision arithmetic of a hybrid GPU/central processing unit (CPU) and full GPU implementation of the SP2 algorithm exceed those of a CPU-only implementation of the SP2 algorithm and traditional matrix diagonalization when the dimensions of the matrices exceed about 2000 × 2000. Padding schemes for arrays allocated in the GPU memory that optimize the performance of the CUBLAS implementations of the level 3 BLAS DGEMM and SGEMM subroutines for generalized matrix-matrix multiplications are described in detail. The analysis of the relative performance of the hybrid CPU/GPU and full GPU implementations indicate that the transfer of arrays between the GPU and CPU constitutes only a small fraction of the total computation time. The errors measured in the self-consistent density matrices computed using the SP2 algorithm are generally smaller than those measured in matrices computed via diagonalization. Furthermore, the errors in the density matrices computed using the SP2 algorithm do not exhibit any dependence on system size, whereas the errors increase linearly with the number of orbitals when diagonalization is employed.
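
    The SP2 recursion itself fits in a few lines: map the Hamiltonian spectrum into [0, 1], then repeatedly replace X by X@X (which lowers the trace) or 2X - X@X (which raises it) until the trace matches the electron count, using only the matrix-matrix multiplies that GPUs execute efficiently. The sketch below takes exact spectral bounds for brevity; cheap estimates (e.g., Gershgorin circles) suffice in practice.

        import numpy as np

        def sp2_density_matrix(H, n_occ, tol=1e-9, max_iter=100):
            # SP2: purify an initial guess with spectrum in [0, 1] toward an
            # idempotent density matrix with trace equal to n_occ.
            e = np.linalg.eigvalsh(H)            # bounds, for brevity only
            e_min, e_max = e[0], e[-1]
            X = (e_max * np.eye(len(H)) - H) / (e_max - e_min)
            for _ in range(max_iter):
                if abs(np.trace(X) - n_occ) < tol:
                    break
                X2 = X @ X
                X = X2 if np.trace(X) > n_occ else 2.0 * X - X2
            return X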

  10. A GRAPHICS PROCESSING UNIT-ENABLED, HIGH-RESOLUTION COSMOLOGICAL MICROLENSING PARAMETER SURVEY

    SciTech Connect

    Bate, N. F.; Fluke, C. J.

    2012-01-10

    In the era of synoptic surveys, the number of known gravitationally lensed quasars is set to increase by over an order of magnitude. These new discoveries will enable a move from single-quasar studies to investigations of statistical samples, presenting new opportunities to test theoretical models for the structure of quasar accretion disks and broad emission line regions (BELRs). As one crucial step in preparing for this influx of new lensed systems, a large-scale exploration of microlensing convergence-shear parameter space is warranted, requiring the computation of O(10{sup 5}) high-resolution magnification maps. Based on properties of known lensed quasars, and expectations from accretion disk/BELR modeling, we identify regions of convergence-shear parameter space, map sizes, smooth matter fractions, and pixel resolutions that should be covered. We describe how the computationally time-consuming task of producing {approx}290,000 magnification maps with sufficient resolution (10,000{sup 2} pixel map{sup -1}) to probe scales from the inner edge of the accretion disk to the BELR can be achieved in {approx}400 days on a 100 teraflop s{sup -1} high-performance computing facility, where the processing performance is achieved with graphics processing units. We illustrate a use-case for the parameter survey by investigating the effects of varying the lens macro-model on accretion disk constraints in the lensed quasar Q2237+0305. We find that although all constraints are consistent within their current error bars, models with more densely packed microlenses tend to predict shallower accretion disk radial temperature profiles. With a large parameter survey such as the one described here, such systematics on microlensing measurements could be fully explored.

  11. Parallel flow accumulation algorithms for graphical processing units with application to RUSLE model

    NASA Astrophysics Data System (ADS)

    Sten, Johan; Lilja, Harri; Hyväluoma, Jari; Westerholm, Jan; Aspnäs, Mats

    2016-04-01

    Digital elevation models (DEMs) are widely used in the modeling of surface hydrology, which typically includes the determination of flow directions and flow accumulation. The use of high-resolution DEMs increases the accuracy of flow accumulation computation, but as a drawback, the computational time may become excessively long if large areas are analyzed. In this paper we investigate the use of graphical processing units (GPUs) for efficient flow accumulation calculations. We present two new parallel flow accumulation algorithms based on dependency transfer and topological sorting and compare them to previously published flow transfer and indegree-based algorithms. We benchmark the GPU implementations against industry standards, ArcGIS and SAGA. With the flow-transfer D8 flow routing model and binary input data, a speedup of 19 is achieved compared to ArcGIS and 15 compared to SAGA. We show that on GPUs the topological sort-based flow accumulation algorithm leads on average to a speedup by a factor of 7 over the flow-transfer algorithm. Thus a total speedup of the order of 100 is achieved. We test the algorithms by applying them to the Revised Universal Soil Loss Equation (RUSLE) erosion model. For this purpose we present parallel versions of the slope, LS factor and RUSLE algorithms and show that the RUSLE erosion results for an area of 12 km × 24 km containing 72 million cells can be calculated in less than a second. Since flow accumulation is needed in many hydrological models, the developed algorithms may find use in many other applications than RUSLE modeling. The algorithm based on topological sorting is particularly promising for dynamic hydrological models where flow accumulations are repeatedly computed over an unchanged DEM.
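
    A serial sketch of the topological-sorting idea is given below, using Kahn's algorithm on a flattened D8 grid; the GPU version processes entire topological levels in parallel, but the dependency structure is the same. The receiver encoding is an assumption for illustration.

        import numpy as np
        from collections import deque

        def flow_accumulation(receiver):
            # Kahn-style topological accumulation: receiver[i] is the index
            # of the cell that cell i drains into, or -1 at an outlet. A cell
            # forwards its accumulated flow once all upstream cells are done.
            n = len(receiver)
            acc = np.ones(n)                  # unit rainfall per cell
            indeg = np.zeros(n, dtype=int)
            for r in receiver:
                if r >= 0:
                    indeg[r] += 1
            ready = deque(np.flatnonzero(indeg == 0))  # ridge cells, no inflow
            while ready:
                i = ready.popleft()
                r = receiver[i]
                if r >= 0:
                    acc[r] += acc[i]
                    indeg[r] -= 1
                    if indeg[r] == 0:
                        ready.append(r)
            return acc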

  12. Porting ONETEP to graphical processing unit-based coprocessors. 1. FFT box operations.

    PubMed

    Wilkinson, Karl; Skylaris, Chris-Kriton

    2013-10-30

    We present the first graphical processing unit (GPU) coprocessor-enabled version of the Order-N Electronic Total Energy Package (ONETEP) code for linear-scaling first principles quantum mechanical calculations on materials. This work focuses on porting to the GPU the parts of the code that involve atom-localized fast Fourier transform (FFT) operations. These are among the most computationally intensive parts of the code and are used in core algorithms such as the calculation of the charge density, the local potential integrals, the kinetic energy integrals, and the nonorthogonal generalized Wannier function gradient. We have found that direct porting of the isolated FFT operations did not provide any benefit. Instead, it was necessary to tailor the port to each of the aforementioned algorithms to optimize data transfer to and from the GPU. A detailed discussion of the methods used and tests of the resulting performance are presented, which show that individual steps in the relevant algorithms are accelerated by a significant amount. However, the transfer of data between the GPU and host machine is a significant bottleneck in the reported version of the code. In addition, an initial investigation into a dynamic precision scheme for the ONETEP energy calculation has been performed to take advantage of the enhanced single precision capabilities of GPUs. The methods used here result in no disruption to the existing code base. Furthermore, as the developments reported here concern the core algorithms, they will benefit the full range of ONETEP functionality. Our use of a directive-based programming model ensures portability to other forms of coprocessors and will allow this work to form the basis of future developments to the code designed to support emerging high-performance computing platforms. PMID:24038140

  13. Accelerating POCS interpolation of 3D irregular seismic data with Graphics Processing Units

    NASA Astrophysics Data System (ADS)

    Wang, Shu-Qin; Gao, Xing; Yao, Zhen-Xing

    2010-10-01

    Seismic trace interpolation is necessary for high-resolution imaging when the acquired data are not adequate or when some traces are missing. Projection-onto-convex-sets (POCS) interpolation can gradually recover missing traces with an iterative algorithm, but its computational cost in a 3D CPU-based implementation is too high for practical applications. We present a computing scheme to speed up 3D POCS interpolation with graphics processing units (GPUs). We accelerate the most time-consuming part of the 3D POCS algorithm (i.e. Fourier transforms) by taking advantage of a GPU-based Fourier transform library. Other parts are fine-tuned to maximize the utilization of GPU computing resources. We upload the whole input data set to the global memory of the GPUs and reuse it until the final result is obtained. This avoids low-bandwidth data transfer between CPU and GPUs. We minimize the number of intermediate 3D arrays to save GPU global memory by optimizing the algorithm implementation. This allows us to handle a much larger input data set. In reducing the runtime of our GPU implementation, the coalescing of global memory access and the 3D CUFFT library provide the greatest performance improvements. Numerical results show that our scheme is 3-29 times faster than the optimized CPU-based implementation, depending on the size of the 3D data set. Our GPU computing scheme allows a significant reduction of computational cost and would facilitate 3D POCS interpolation for practical applications.
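
    The POCS iteration alternates a Fourier-domain threshold with reinsertion of the observed traces. The 2D numpy sketch below (with a linearly decaying threshold, one common choice) shows the structure; the GPU implementation accelerates the FFT calls, which dominate the cost.

        import numpy as np

        def pocs_interpolate(data, mask, n_iter=50):
            # data: 2D array with zeros at missing traces; mask: 1 where
            # observed, 0 where missing. Alternate a decaying hard threshold
            # in the Fourier domain with reinsertion of the observed traces.
            x = data * mask
            top = np.abs(np.fft.fft2(x)).max()
            for k in range(n_iter):
                X = np.fft.fft2(x)
                thresh = top * (1.0 - (k + 1.0) / n_iter)
                X[np.abs(X) < thresh] = 0.0
                x = np.real(np.fft.ifft2(X))
                x = data * mask + x * (1.0 - mask)  # keep known traces exact
            return x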

  14. Fast analysis of molecular dynamics trajectories with graphics processing units—Radial distribution function histogramming

    NASA Astrophysics Data System (ADS)

    Levine, Benjamin G.; Stone, John E.; Kohlmeyer, Axel

    2011-05-01

    The calculation of radial distribution functions (RDFs) from molecular dynamics trajectory data is a common and computationally expensive analysis task. The rate limiting step in the calculation of the RDF is building a histogram of the distance between atom pairs in each trajectory frame. Here we present an implementation of this histogramming scheme for multiple graphics processing units (GPUs). The algorithm features a tiling scheme to maximize the reuse of data at the fastest levels of the GPU's memory hierarchy and dynamic load balancing to allow high performance on heterogeneous configurations of GPUs. Several versions of the RDF algorithm are presented, utilizing the specific hardware features found on different generations of GPUs. We take advantage of larger shared memory and atomic memory operations available on state-of-the-art GPUs to accelerate the code significantly. The use of atomic memory operations allows the fast, limited-capacity on-chip memory to be used much more efficiently, resulting in a fivefold increase in performance compared to the version of the algorithm without atomic operations. The ultimate version of the algorithm running in parallel on four NVIDIA GeForce GTX 480 (Fermi) GPUs was found to be 92 times faster than a multithreaded implementation running on an Intel Xeon 5550 CPU. On this multi-GPU hardware, the RDF between two selections of 1,000,000 atoms each can be calculated in 26.9 s per frame. The multi-GPU RDF algorithms described here are implemented in VMD, a widely used and freely available software package for molecular dynamics visualization and analysis.
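
    The blocking idea translates directly to numpy. The sketch below histograms pair distances tile by tile so each block of distances stays small, mirroring how the GPU kernels stage coordinates through fast on-chip memory; normalization by shell volume and density to obtain g(r) is omitted.

        import numpy as np

        def rdf_histogram(pos_a, pos_b, r_max, n_bins, tile=1024):
            # Pair-distance histogram computed in tiles; each tile-by-tile
            # distance block is the unit of work a GPU thread block would
            # handle with shared memory and atomic histogram updates.
            edges = np.linspace(0.0, r_max, n_bins + 1)
            counts = np.zeros(n_bins, dtype=np.int64)
            for i in range(0, len(pos_a), tile):
                a = pos_a[i:i + tile]
                for j in range(0, len(pos_b), tile):
                    b = pos_b[j:j + tile]
                    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
                    counts += np.histogram(d, bins=edges)[0]
            return edges, counts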

  15. FLOCKING-BASED DOCUMENT CLUSTERING ON THE GRAPHICS PROCESSING UNIT [Book Chapter]

    SciTech Connect

    Charles, J S; Patton, R M; Potok, T E; Cui, X

    2008-01-01

    Analyzing and grouping documents by content is a complex problem. One explored method of solving this problem borrows from nature, imitating the flocking behavior of birds. Each bird represents a single document and flies toward other documents that are similar to it. One limitation of this method of document clustering is its complexity O(n²). As the number of documents grows, it becomes increasingly difficult to receive results in a reasonable amount of time. However, flocking behavior, along with most naturally inspired algorithms such as ant colony optimization and particle swarm optimization, is highly parallel and has experienced improved performance on expensive cluster computers. In the last few years, the graphics processing unit (GPU) has received attention for its ability to solve highly parallel and semi-parallel problems much faster than the traditional sequential processor. Some applications see a huge increase in performance on this new platform. The cost of these high-performance devices is also marginal when compared with the price of cluster machines. In this paper, we have conducted research to exploit this architecture and apply its strengths to the document flocking problem. Our results highlight the potential benefit the GPU brings to all naturally inspired algorithms. Using the CUDA platform from NVIDIA, we developed a document flocking implementation to be run on the NVIDIA GeForce 8800. Additionally, we developed a similar but sequential implementation of the same algorithm to be run on a desktop CPU. We tested the performance of each on groups of news articles ranging in size from 200 to 3,000 documents. The results of these tests were very significant. Performance gains ranged from three to nearly five times improvement of the GPU over the CPU implementation. This dramatic improvement in runtime makes the GPU a potentially revolutionary platform for document clustering algorithms.
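
    A much-simplified update rule conveys the idea: each document-boid steers toward nearby documents with similar content and away from dissimilar ones. The O(n²) neighbor scan in the sketch below is exactly the part the GPU parallelizes, one thread per document; the similarity matrix and motion constants are placeholders.

        import numpy as np

        def flocking_step(pos, vel, sim, dt=0.1, radius=2.0):
            # pos, vel: (n, 2) boid positions and velocities; sim: (n, n)
            # document-similarity matrix in [0, 1]. Similar neighbors attract
            # (w > 0), dissimilar ones repel (w < 0).
            for i in range(len(pos)):
                d = np.linalg.norm(pos - pos[i], axis=1)
                near = (d < radius) & (d > 0.0)
                if near.any():
                    w = sim[i, near] - 0.5
                    vel[i] += dt * (w[:, None] * (pos[near] - pos[i])).sum(axis=0)
            speed = np.linalg.norm(vel, axis=1, keepdims=True)
            vel = np.where(speed > 1.0, vel / speed, vel)  # cap the speed
            return pos + dt * vel, vel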

  16. Large eddy simulations of turbulent flows on graphics processing units: Application to film-cooling flows

    NASA Astrophysics Data System (ADS)

    Shinn, Aaron F.

    Computational Fluid Dynamics (CFD) simulations can be very computationally expensive, especially for Large Eddy Simulations (LES) and Direct Numerical Simulations (DNS) of turbulent flows. In LES the large, energy-containing eddies are resolved by the computational mesh, but the smaller (sub-grid) scales are modeled. In DNS, all scales of turbulence are resolved, including the smallest dissipative (Kolmogorov) scales. Clusters of CPUs have been the standard approach for such simulations, but an emerging approach is the use of Graphics Processing Units (GPUs), which deliver impressive computing performance compared to CPUs. Recently there has been great interest in the scientific computing community to use GPUs for general-purpose computation (such as the numerical solution of PDEs) rather than graphics rendering. To explore the use of GPUs for CFD simulations, an incompressible Navier-Stokes solver was developed for a GPU. This solver is capable of simulating unsteady laminar flows or performing an LES or DNS of turbulent flows. The Navier-Stokes equations are solved via a fractional-step method and are spatially discretized using the finite volume method on a Cartesian mesh. An immersed boundary method based on a ghost cell treatment was developed to handle flow past complex geometries. The implementation of these numerical methods had to suit the architecture of the GPU, which is designed for massive multithreading. The details of this implementation will be described, along with strategies for performance optimization. Validation of the GPU-based solver was performed for fundamental benchmark problems, and a performance assessment indicated that the solver was over an order-of-magnitude faster compared to a CPU. The GPU-based Navier-Stokes solver was used to study film-cooling flows via Large Eddy Simulation. In modern gas turbine engines, the film-cooling method is used to protect turbine blades from hot combustion gases. Therefore, understanding the physics of

  17. Graphic Arts: The Press and Finishing Processes. Fourth Edition. Teacher Edition [and] Student Edition.

    ERIC Educational Resources Information Center

    Farajollahi, Karim; Ogle, Gary; Reed, William; Woodcock, Kenneth

    Part of a series of instructional materials for courses on graphic communication, this packet contains both teacher and student materials for seven units that cover the following topics: (1) offset press systems; (2) offset inks and dampening chemistry; (3) offset press operating procedures; (4) preventive maintenance and troubleshooting; (5) job…

  18. Creating Interactive Graphical Overlays in the Advanced Weather Interactive Processing System (AWIPS) Using Shapefiles and DGM Files

    NASA Technical Reports Server (NTRS)

    Barrett, Joe H., III; Lafosse, Richard; Hood, Doris; Hoeth, Brian

    2007-01-01

    Graphical overlays can be created in real-time in the Advanced Weather Interactive Processing System (AWIPS) using shapefiles or DARE Graphics Metafile (DGM) files. This presentation describes how to create graphical overlays on-the-fly for AWIPS, by using two examples of AWIPS applications that were created by the Applied Meteorology Unit (AMU). The first example is the Anvil Threat Corridor Forecast Tool, which produces a shapefile that depicts a graphical threat corridor of the forecast movement of thunderstorm anvil clouds, based on the observed or forecast upper-level winds. This tool is used by the Spaceflight Meteorology Group (SMG) and 45th Weather Squadron (45 WS) to analyze the threat of natural or space vehicle-triggered lightning over a location. The second example is a launch and landing trajectory tool that produces a DGM file that plots the ground track of space vehicles during launch or landing. The trajectory tool can be used by SMG and the 45 WS forecasters to analyze weather radar imagery along a launch or landing trajectory. Advantages of both file types will be listed.
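
    For the shapefile route, a polygon overlay can be produced with a few calls to the pyshp package (an assumption here; any shapefile writer would serve). Field names and coordinates below are hypothetical.

        import shapefile  # the pyshp package, assumed available

        # Hypothetical example: write a polygon overlay such as an anvil
        # threat corridor. AWIPS can then ingest the .shp/.dbf/.shx triplet.
        w = shapefile.Writer("anvil_corridor", shapeType=shapefile.POLYGON)
        w.field("NAME", "C", size=40)
        ring = [(-80.6, 28.4), (-80.2, 28.9), (-79.9, 28.7),
                (-80.3, 28.2), (-80.6, 28.4)]       # closed ring, lon/lat
        w.poly([ring])
        w.record("Anvil threat corridor")
        w.close()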

  19. Computer Graphics.

    ERIC Educational Resources Information Center

    Halpern, Jeanne W.

    1970-01-01

    Computer graphics have been called the most exciting development in computer technology. At the University of Michigan, three kinds of graphics output equipment are now being used: symbolic printers, line plotters or drafting devices, and cathode-ray tubes (CRT). Six examples are given that demonstrate the range of graphics use at the University.…

  20. Efficient particle-in-cell simulation of auroral plasma phenomena using a CUDA enabled graphics processing unit

    NASA Astrophysics Data System (ADS)

    Sewell, Stephen

    This thesis introduces a software framework that effectively utilizes low-cost commercially available Graphic Processing Units (GPUs) to simulate complex scientific plasma phenomena that are modeled using the Particle-In-Cell (PIC) paradigm. The software framework that was developed conforms to the Compute Unified Device Architecture (CUDA), a standard for general purpose graphic processing that was introduced by NVIDIA Corporation. This framework has been verified for correctness and applied to advance the state of understanding of the electromagnetic aspects of the development of the Aurora Borealis and Aurora Australis. For each phase of the PIC methodology, this research has identified one or more methods to exploit the problem's natural parallelism and effectively map it for execution on the graphic processing unit and its host processor. The sources of overhead that can reduce the effectiveness of parallelization for each of these methods have also been identified. One of the novel aspects of this research was the utilization of particle sorting during the grid interpolation phase. The final representation resulted in simulations that executed about 38 times faster than simulations that were run on a single-core general-purpose processing system. The scalability of this framework to larger problem sizes and future generation systems has also been investigated.
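
    The particle sorting noted above can be illustrated with a small CPU-side sketch of the grid-interpolation (charge deposition) phase of a PIC step. Nearest-grid-point weighting and all sizes here are assumptions made for brevity, not details taken from the thesis.

        import numpy as np

        # Particle-to-grid deposition with particles pre-sorted by cell index,
        # the phase where sorting makes GPU memory access coherent.
        n_cells, n_part = 128, 100_000
        rng = np.random.default_rng(0)
        x = rng.uniform(0.0, 1.0, n_part)              # particle positions in [0, 1)
        q = np.full(n_part, 1.0 / n_part)              # per-particle charge

        cell = np.minimum((x * n_cells).astype(int), n_cells - 1)
        order = np.argsort(cell)                       # sort particles by cell index
        cell_sorted, q_sorted = cell[order], q[order]  # deposits per cell now contiguous

        rho = np.zeros(n_cells)
        np.add.at(rho, cell_sorted, q_sorted)          # scatter-add charge to the grid
        # On a GPU, the sorted layout lets a thread block reduce its cell's
        # particles locally before issuing a single atomic update per cell.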

  1. Application of computer generated color graphic techniques to the processing and display of three dimensional fluid dynamic data

    NASA Technical Reports Server (NTRS)

    Anderson, B. H.; Putt, C. W.; Giamati, C. C.

    1981-01-01

    Color coding techniques used in the processing of remote sensing imagery were adapted and applied to the fluid dynamics problems associated with turbofan mixer nozzles. The computer generated color graphics were found to be useful in reconstructing the measured flow field from low resolution experimental data to give more physical meaning to this information and in scanning and interpreting the large volume of computer generated data from the three dimensional viscous computer code used in the analysis.

  2. Compressed sensing reconstruction for whole-heart imaging with 3D radial trajectories: a graphics processing unit implementation.

    PubMed

    Nam, Seunghoon; Akçakaya, Mehmet; Basha, Tamer; Stehning, Christian; Manning, Warren J; Tarokh, Vahid; Nezafat, Reza

    2013-01-01

    A disadvantage of three-dimensional (3D) isotropic acquisition in whole-heart coronary MRI is the prolonged data acquisition time. Isotropic 3D radial trajectories allow undersampling of k-space data in all three spatial dimensions, enabling accelerated acquisition of the volumetric data. Compressed sensing (CS) reconstruction can provide further acceleration in the acquisition by removing the incoherent artifacts due to undersampling and improving the image quality. However, the heavy computational overhead of the CS reconstruction has been a limiting factor for its application. In this article, a parallelized implementation of an iterative CS reconstruction method for 3D radial acquisitions using a commercial graphics processing unit is presented. The execution time of the graphics processing unit-implemented CS reconstruction was compared with that of the C++ implementation, and the efficacy of the undersampled 3D radial acquisition with CS reconstruction was investigated in both phantom and whole-heart coronary data sets. Subsequently, the efficacy of CS in suppressing streaking artifacts in 3D whole-heart coronary MRI with 3D radial imaging and its convergence properties were studied. The CS reconstruction provides improved image quality (in terms of vessel sharpness and suppression of noise-like artifacts) compared with the conventional 3D gridding algorithm, and the graphics processing unit implementation greatly reduces the execution time of CS reconstruction yielding 34-54 times speed-up compared with C++ implementation. PMID:22392604
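
    The record does not spell out the reconstruction algorithm, but iterative CS solvers of this kind typically alternate a data-consistency gradient step with a sparsity-promoting shrinkage. The toy 1D iterative soft-thresholding (ISTA) sketch below conveys that structure only; every size and penalty is an illustrative assumption, not the paper's method.

        import numpy as np

        # Toy 1D iterative soft-thresholding for undersampled Fourier data; a
        # generic stand-in for the iteration a GPU CS reconstruction parallelizes.
        rng = np.random.default_rng(1)
        n, m, lam = 256, 96, 0.05
        x_true = np.zeros(n)
        x_true[rng.choice(n, 8, replace=False)] = 1.0       # sparse ground truth
        keep = np.sort(rng.choice(n, m, replace=False))     # sampled k-space indices

        def A(x):                                           # undersampled Fourier op
            return np.fft.fft(x, norm="ortho")[keep]

        def At(y):                                          # its adjoint
            full = np.zeros(n, dtype=complex)
            full[keep] = y
            return np.fft.ifft(full, norm="ortho")

        y = A(x_true)
        x = np.zeros(n, dtype=complex)
        for _ in range(100):
            z = x - At(A(x) - y)                            # data-consistency step
            mag = np.abs(z)
            x = z * np.maximum(mag - lam, 0.0) / np.maximum(mag, 1e-12)  # shrinkage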

  3. International Student Mobility and the Bologna Process

    ERIC Educational Resources Information Center

    Teichler, Ulrich

    2012-01-01

    The Bologna Process is the newest of a chain of activities stimulated by supra-national actors since the 1950s to challenge national borders in higher education in Europe. Now, the ministers in charge of higher education of the individual European countries have agreed to promote a similar cycle-structure of study programmes and programmes based…

  4. Phase transitions in contagion processes mediated by recurrent mobility patterns

    NASA Astrophysics Data System (ADS)

    Balcan, Duygu; Vespignani, Alessandro

    2011-07-01

    Human mobility and activity patterns mediate contagion on many levels, including the spatial spread of infectious diseases, diffusion of rumours, and emergence of consensus. These patterns, however, are often dominated by specific locations and recurrent flows, and are poorly modelled by the random diffusive dynamics generally used to study them. Here we develop a theoretical framework to analyse contagion within a network of locations where individuals recall their geographic origins. We find a phase transition between a regime in which the contagion affects a large fraction of the system and one in which only a small fraction is affected. This transition cannot be uncovered by continuous deterministic models because of the stochastic features of the contagion process and defines an invasion threshold that depends on mobility parameters, providing guidance for controlling contagion spread by constraining mobility processes. We recover the threshold behaviour by analysing diffusion processes mediated by real human commuting data.

  5. Interpretation of Medical Imaging Data with a Mobile Application: A Mobile Digital Imaging Processing Environment

    PubMed Central

    Lin, Meng Kuan; Nicolini, Oliver; Waxenegger, Harald; Galloway, Graham J.; Ullmann, Jeremy F. P.; Janke, Andrew L.

    2013-01-01

    Digital Imaging Processing (DIP) requires data extraction and output from a visualization tool to be consistent. Data handling and transmission between the server and a user is a systematic process in service interpretation. The use of integrated medical services for management and viewing of imaging data, in combination with a mobile visualization tool, can be greatly facilitated by data analysis and interpretation. This paper presents an integrated mobile application and DIP service, called M-DIP. The objective of the system is to (1) automate the direct data tiling, conversion, and pre-tiling of brain images from Medical Imaging NetCDF (MINC) and Neuroimaging Informatics Technology Initiative (NIFTI) to RAW formats; (2) speed up querying of imaging measurements; and (3) display multi-level images in three dimensions in real-world coordinates. In addition, M-DIP provides the ability to work on a mobile or tablet device without any software installation, using web-based protocols. M-DIP implements a three-level architecture with a relational middle-layer database, a stand-alone DIP server, and a mobile application logic layer that realizes user interpretation for direct querying and communication. This imaging software has the ability to display biological imaging data at multiple zoom levels and to increase its quality to meet users’ expectations. Interpretation of bioimaging data is facilitated by an interface analogous to online mapping services using real-world coordinate browsing. This allows mobile devices to display multiple datasets simultaneously from a remote site. M-DIP can be used as a measurement repository that can be accessed from any network environment, such as a portable mobile or tablet device. In addition, this system, in combination with mobile applications, establishes a visualization tool in the neuroinformatics field to speed interpretation services. PMID:23847587

  6. Ab initio nonadiabatic dynamics of multichromophore complexes: a scalable graphical-processing-unit-accelerated exciton framework.

    PubMed

    Sisto, Aaron; Glowacki, David R; Martinez, Todd J

    2014-09-16

    ("fragmenting") a molecular system and then stitching it back together. In this Account, we address both of these problems, the first by using graphical processing units (GPUs) and electronic structure algorithms tuned for these architectures and the second by using an exciton model as a framework in which to stitch together the solutions of the smaller problems. The multitiered parallel framework outlined here is aimed at nonadiabatic dynamics simulations on large supramolecular multichromophoric complexes in full atomistic detail. In this framework, the lowest tier of parallelism involves GPU-accelerated electronic structure theory calculations, for which we summarize recent progress in parallelizing the computation and use of electron repulsion integrals (ERIs), which are the major computational bottleneck in both density functional theory (DFT) and time-dependent density functional theory (TDDFT). The topmost tier of parallelism relies on a distributed memory framework, in which we build an exciton model that couples chromophoric units. Combining these multiple levels of parallelism allows access to ground and excited state dynamics for large multichromophoric assemblies. The parallel excitonic framework is in good agreement with much more computationally demanding TDDFT calculations of the full assembly. PMID:25186064

  7. Using wesBench to Study the Rendering Performance of Graphics Processing Units

    SciTech Connect

    Bethel, Edward W

    2010-01-08

    Graphics operations fall into two broad stages. The first, which we refer to here as vertex operations, consists of transformation, lighting, primitive assembly, and so forth. The second, which we refer to as pixel or fragment operations, consists of rasterization, texturing, scissoring, blending, and fill. Overall GPU rendering performance is a function of the throughput of both of these interdependent stages: if one stage is slower than the other, the faster stage will be forced to run more slowly and overall rendering performance will be adversely affected. This relationship works in both directions: if the later stage has a greater workload than the earlier stage, the earlier stage will be forced to slow down. For example, a large triangle that covers many screen pixels incurs a very small amount of work in the vertex stage while incurring a relatively large amount of work in the fragment stage, so rendering performance of a scene consisting of many large-area triangles will be limited by the throughput of the fragment stage. This document has two main objectives. First, we introduce a new graphics benchmark, wesBench, which is useful for measuring the performance of both stages of the rendering pipeline under varying conditions. Second, we present its measurement methodology and show results of several performance measurement studies aimed at producing a better understanding of GPU rendering performance characteristics and limits under varying configurations. In Section 2, we explore the 'crossover' point between geometry and rasterization; in Section 3, we explore additional performance characteristics, some of which are ill-documented or undocumented. Lastly, several appendices provide additional material concerning problems with the gfxbench benchmark, and details about the new wesBench graphics benchmark.
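
    The stage balance described here can be summarized by a simple throughput model (the symbols are illustrative, not the report's notation): if a frame requires N_v vertex operations processed at rate R_v and N_f fragment operations processed at rate R_f, the frame time behaves roughly as

        T \approx \max\!\left( \frac{N_v}{R_v},\; \frac{N_f}{R_f} \right),

    so the crossover such a benchmark measures lies near the triangle area at which N_v / R_v = N_f / R_f: smaller triangles leave rendering vertex-bound, larger ones fragment-bound.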

  8. NATURAL graphics

    NASA Technical Reports Server (NTRS)

    Jones, R. H.

    1984-01-01

    The hardware and software developments in computer graphics are discussed. Major topics include: system capabilities, hardware design, system compatibility, and software interface with the data base management system.

  9. FamSeq: a variant calling program for family-based sequencing data using graphics processing units.

    PubMed

    Peng, Gang; Fan, Yu; Wang, Wenyi

    2014-10-01

    Various algorithms have been developed for variant calling using next-generation sequencing data, and various methods have been applied to reduce the associated false positive and false negative rates. Few variant calling programs, however, utilize the pedigree information when the family-based sequencing data are available. Here, we present a program, FamSeq, which reduces both false positive and false negative rates by incorporating the pedigree information from the Mendelian genetic model into variant calling. To accommodate variations in data complexity, FamSeq consists of four distinct implementations of the Mendelian genetic model: the Bayesian network algorithm, a graphics processing unit version of the Bayesian network algorithm, the Elston-Stewart algorithm and the Markov chain Monte Carlo algorithm. To make the software efficient and applicable to large families, we parallelized the Bayesian network algorithm that copes with pedigrees with inbreeding loops without losing calculation precision on an NVIDIA graphics processing unit. In order to compare the difference in the four methods, we applied FamSeq to pedigree sequencing data with family sizes that varied from 7 to 12. When there is no inbreeding loop in the pedigree, the Elston-Stewart algorithm gives analytical results in a short time. If there are inbreeding loops in the pedigree, we recommend the Bayesian network method, which provides exact answers. To improve the computing speed of the Bayesian network method, we parallelized the computation on a graphics processing unit. This allowed the Bayesian network method to process the whole genome sequencing data of a family of 12 individuals within two days, which was a 10-fold time reduction compared to the time required for this computation on a central processing unit.
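
    At the core of any such Mendelian model is the transmission probability of a child's genotype given the parents'. A minimal sketch for a biallelic site (genotype coded as the count of alternate alleles) is shown below; it is illustrative only, not FamSeq's code.

        import numpy as np

        # P(child genotype | parent genotypes) for a biallelic site under
        # Mendelian inheritance; genotypes are 0, 1, or 2 alternate alleles.
        def alt_allele_prob(g):
            return {0: 0.0, 1: 0.5, 2: 1.0}[g]  # chance a transmitted allele is ALT

        def transmission(g_mother, g_father):
            pm, pf = alt_allele_prob(g_mother), alt_allele_prob(g_father)
            return np.array([
                (1 - pm) * (1 - pf),            # child genotype 0
                pm * (1 - pf) + (1 - pm) * pf,  # child genotype 1
                pm * pf,                        # child genotype 2
            ])

        # Two heterozygous parents give the classic 1:2:1 ratio.
        print(transmission(1, 1))               # [0.25, 0.5, 0.25]
        # A pedigree-aware caller multiplies such factors into each member's
        # genotype likelihood; this is what the Bayesian network propagates.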

  10. Business process analysis of a foodborne outbreak investigation mobile system

    NASA Astrophysics Data System (ADS)

    Nowicki, T.; Waszkowski, R.; Saniuk, A.

    2016-08-01

    Epidemiological investigation during an outbreak of food-borne disease requires a number of activities carried out in the field. This restricts access to current data about the epidemic and reduces the possibility of transferring information from the field to headquarters. This problem can be solved by using an appropriate system of mobile devices. The purpose of this paper is to present an IT solution based on a central repository for epidemiological investigations and mobile devices designed for use in the field. Based on such a solution, business processes can be properly rebuilt so that health inspectors achieve better results in their activities.

  11. High mobility epitaxial graphene devices via aqueous-ozone processing

    NASA Astrophysics Data System (ADS)

    Yager, Tom; Webb, Matthew J.; Grennberg, Helena; Yakimova, Rositsa; Lara-Avila, Samuel; Kubatkin, Sergey

    2015-02-01

    We find that monolayer epitaxial graphene devices exposed to aggressive aqueous-ozone processing and annealing become cleaner of post-fabrication organic resist residuals and, significantly, maintain their high carrier mobility. Additionally, we observe a decrease in carrier density from inherent strong n-type doping to extremely low p-type doping after processing. This transition is explained as a consequence of the cleaning effect of aqueous-ozone processing and annealing, since the observed removal of resist residuals from SiC/G exposes the bare graphene to dopants present in ambient conditions. The resulting combination of charge neutrality, high mobility, large-area clean surfaces, and susceptibility to environmental species suggests this processed graphene system as an ideal candidate for gas-sensing applications.

  12. Image processing for navigation on a mobile embedded platform: design of an autonomous mobile robot

    NASA Astrophysics Data System (ADS)

    Loose, Harald; Lemke, Christiane; Papazov, Chavdar

    2006-02-01

    This paper deals with intelligent mobile platforms connected to a camera controlled by a small hardware platform called RCUBE. This platform is able to provide the features of a typical actuator-sensor board with various inputs and outputs as well as computing power and image recognition capabilities. Several intelligent autonomous RCUBE devices can be equipped and programmed to participate in the BOSPORUS network. These components form an intelligent network for gathering sensor and image data, sensor data fusion, navigation, and control of mobile platforms. The RCUBE platform provides a standalone solution for image processing, which will be explained and presented. It plays a major role in several components of a reference implementation of the BOSPORUS system. On the one hand, intelligent cameras will be positioned in the environment, analyzing the events from a fixed point of view and sharing their perceptions with other components in the system. On the other hand, image processing results will contribute to a reliable navigation of a mobile system, which is crucially important. Fixed landmarks and other objects appropriate for determining the position of a mobile system can be recognized. For navigation, other methods are added, e.g., GPS calculations and odometers.

  13. [Dynamic Pulse Signal Processing and Analyzing in Mobile System].

    PubMed

    Chou, Yongxin; Zhang, Aihua; Ou, Jiqing; Qi, Yusheng

    2015-09-01

    In order to derive a dynamic pulse rate variability (DPRV) signal from the dynamic pulse signal in real time, a method for extracting the DPRV signal was proposed and a portable mobile monitoring system was designed. The system consists of a front end for collecting and wirelessly sending the pulse signal, and a mobile terminal. The proposed method is employed to extract the DPRV from the dynamic pulse signal in the mobile terminal, and the DPRV signal is analyzed in the time domain, in the frequency domain, and with non-linear methods in real time. The results show that the proposed method can accurately derive the DPRV signal in real time, and that the system can be used for processing and analyzing the DPRV signal in real time.

  14. Real-time display on SD-OCT using a linear-in-wavenumber spectrometer and a graphics processing unit

    NASA Astrophysics Data System (ADS)

    Watanabe, Yuuki; Itagaki, Toshiki

    2010-02-01

    We demonstrated a real-time display of processed OCT images using a linear-in-wavenumber (linear-k) spectrometer and a graphics processing unit (GPU). We used the linear-k spectrometer, with an optimal combination of a diffractive grating with 1200 lines/mm and an F2 equilateral prism in the 840 nm spectral region, to avoid the spectral re-sampling calculation. The FFT (fast Fourier transform) calculations were accelerated by a low-cost GPU with many stream processors, which realized highly parallel processing. A display rate of 27.9 frames per second for processed images (2048 FFT size × 1000 lateral A-scans) was achieved in our OCT system using a line-scan CCD camera operated at 27.9 kHz.
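
    The benefit of the linear-k spectrometer is that the detected spectrum is already uniform in wavenumber, so an A-scan is just an FFT of the background-subtracted spectrum with no re-sampling step. The NumPy sketch below, with a synthetic single-reflector spectrum (all numbers assumed), shows the processing that remains.

        import numpy as np

        # A-scan formation when the spectrometer samples uniformly in k:
        # depth structure falls directly out of an FFT, with no re-sampling.
        n = 2048
        k = np.linspace(6.9e6, 8.1e6, n)             # uniform k grid (1/m), assumed
        z0 = 0.3e-3                                  # a single reflector at 0.3 mm
        spectrum = 1.0 + 0.5 * np.cos(2.0 * k * z0)  # DC plus interference fringe
        ascan = np.abs(np.fft.rfft(spectrum - spectrum.mean()))
        peak = np.argmax(ascan)                      # peak position encodes depth z0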

  15. Business Graphics

    NASA Technical Reports Server (NTRS)

    1987-01-01

    Genigraphics Corporation's Masterpiece 8770 FilmRecorder is an advanced high-resolution system designed to improve and expand a company's in-house graphics production. The GRAFTIME software package was designed to allow office personnel with minimal training to produce professional-level graphics for business communications and presentations. Products are no longer being manufactured.

  16. Graphic Arts.

    ERIC Educational Resources Information Center

    Kempe, Joseph; Kinde, Bruce

    This curriculum guide is intended to assist vocational instructors in preparing students for entry-level employment in the graphic arts field and getting them ready for advanced training in the workplace. The package contains an overview of new and emerging graphic arts technologies, competency/skill and task lists for the occupations of…

  17. Graphic Storytelling

    ERIC Educational Resources Information Center

    Thompson, John

    2009-01-01

    Graphic storytelling is a medium that allows students to make and share stories, while developing their art communication skills. American comics today are more varied in genre, approach, and audience than ever before. When considering the impact of Japanese manga on the youth, graphic storytelling emerges as a powerful player in pop culture. In…

  18. The fast multipole method on parallel clusters, multicore processors, and graphics processing units

    NASA Astrophysics Data System (ADS)

    Darve, Eric; Cecka, Cris; Takahashi, Toru

    2011-02-01

    In this article, we discuss how the fast multipole method (FMM) can be implemented on modern parallel computers, ranging from computer clusters to multicore processors and graphics cards (GPUs). The FMM is a somewhat difficult application for parallel computing because of its tree structure and the fact that it requires many complex operations which are not regularly structured. Computational linear algebra with dense matrices, for example, allows many optimizations that leverage the regular computation pattern; the FMM can be similarly optimized, but we will see that the complexity of the optimization steps is greater. The discussion will start with a general presentation of FMMs. We briefly discuss parallel methods for the FMM, such as building the FMM tree in parallel and reducing communication during the FMM procedure. Finally, we will focus on porting and optimizing the FMM on GPUs.

  19. An atomic orbital-based formulation of the complete active space self-consistent field method on graphical processing units

    SciTech Connect

    Hohenstein, Edward G.; Luehr, Nathan; Ufimtsev, Ivan S.; Martínez, Todd J.

    2015-06-14

    Despite its importance, state-of-the-art algorithms for performing complete active space self-consistent field (CASSCF) computations have lagged far behind those for single reference methods. We develop an algorithm for the CASSCF orbital optimization that uses sparsity in the atomic orbital (AO) basis set to increase the applicability of CASSCF. Our implementation of this algorithm uses graphical processing units (GPUs) and has allowed us to perform CASSCF computations on molecular systems containing more than one thousand atoms. Additionally, we have implemented analytic gradients of the CASSCF energy; the gradients also benefit from GPU acceleration as well as sparsity in the AO basis.

  20. A real-time GNSS-R system based on software-defined radio and graphics processing units

    NASA Astrophysics Data System (ADS)

    Hobiger, Thomas; Amagai, Jun; Aida, Masanori; Narita, Hideki

    2012-04-01

    Reflected signals of the Global Navigation Satellite System (GNSS) from the sea or land surface can be utilized to deduce and monitor physical and geophysical parameters of the reflecting area. Unlike most other remote sensing techniques, GNSS-Reflectometry (GNSS-R) operates as a passive radar that takes advantage of the increasing number of navigation satellites that broadcast their L-band signals. To date, most GNSS-R receiver architectures have been based on dedicated hardware solutions. Software-defined radio (SDR) technology has advanced in recent years and enabled signal processing in real time, which makes it an ideal candidate for the realization of a flexible GNSS-R system. Additionally, modern commodity graphics cards, which offer massive parallel computing performance, allow the whole signal processing chain to be handled without interfering with the PC's CPU. Thus, this paper describes a GNSS-R system which has been developed on the principles of software-defined radio supported by General Purpose Graphics Processing Units (GPGPUs), and presents results from initial field tests which confirm the anticipated capability of the system.
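
    The heart of such a GNSS-R processing chain is correlating the received reflection against a local replica of the satellite's ranging code over candidate delays. A sketch using FFT-based circular correlation on synthetic data follows; the code and sizes are stand-ins, not the system's actual parameters.

        import numpy as np

        # Delay search by circular cross-correlation, computed in the frequency
        # domain as a real-time GPU pipeline would batch it.
        rng = np.random.default_rng(2)
        n, true_delay = 4096, 517
        code = rng.choice([-1.0, 1.0], size=n)       # stand-in PRN ranging code
        rx = np.roll(code, true_delay) + 0.5 * rng.standard_normal(n)

        corr = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(code))).real
        print(int(np.argmax(corr)))                  # recovers the delay (517)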

  1. Multimedia information processing in the SWAN mobile networked computing system

    NASA Astrophysics Data System (ADS)

    Agrawal, Prathima; Hyden, Eoin; Krzyzanowsji, Paul; Srivastava, Mani B.; Trotter, John

    1996-03-01

    Anytime anywhere wireless access to databases, such as medical and inventory records, can simplify workflow management in a business, and reduce or even eliminate the cost of moving paper documents. Moreover, continual progress in wireless access technology promises to provide per-user bandwidths of the order of a few Mbps, at least in indoor environments. When combined with the emerging high-speed integrated service wired networks, it enables ubiquitous and tetherless access to and processing of multimedia information by mobile users. To leverage on this synergy an indoor wireless network based on room-sized cells and multimedia mobile end-points is being developed at AT&T Bell Laboratories. This research network, called SWAN (Seamless Wireless ATM Networking), allows users carrying multimedia end-points such as PDAs, laptops, and portable multimedia terminals, to seamlessly roam while accessing multimedia data streams from the wired backbone network. A distinguishing feature of the SWAN network is its use of end-to-end ATM connectivity as opposed to the connectionless mobile-IP connectivity used by present day wireless data LANs. This choice allows the wireless resource in a cell to be intelligently allocated amongst various ATM virtual circuits according to their quality of service requirements. But an efficient implementation of ATM in a wireless environment requires a proper mobile network architecture. In particular, the wireless link and medium-access layers need to be cognizant of the ATM traffic, while the ATM layers need to be cognizant of the mobility enabled by the wireless layers. This paper presents an overview of SWAN's network architecture, briefly discusses the issues in making ATM mobile and wireless, and describes initial multimedia applications for SWAN.

  2. The Metropolis Monte Carlo method with CUDA enabled Graphic Processing Units

    NASA Astrophysics Data System (ADS)

    Hall, Clifford; Ji, Weixiao; Blaisten-Barojas, Estela

    2014-02-01

    We present a CPU-GPU system for runtime acceleration of large molecular simulations using GPU computation and memory swaps. The memory architecture of the GPU can be used both as container for simulation data stored on the graphics card and as floating-point code target, providing an effective means for the manipulation of atomistic or molecular data on the GPU. To fully take advantage of this mechanism, efficient GPU realizations of algorithms used to perform atomistic and molecular simulations are essential. Our system implements a versatile molecular engine, including inter-molecule interactions and orientational variables for performing the Metropolis Monte Carlo (MMC) algorithm, which is one type of Markov chain Monte Carlo. By combining memory objects with floating-point code fragments we have implemented an MMC parallel engine that entirely avoids the communication time of molecular data at runtime. Our runtime acceleration system is a forerunner of a new class of CPU-GPU algorithms exploiting memory concepts combined with threading for avoiding bus bandwidth and communication. The testbed molecular system used here is a condensed phase system of oligopyrrole chains. A benchmark shows a size scaling speedup of 60 for systems with 210,000 pyrrole monomers. Our implementation can easily be combined with MPI to connect in parallel several CPU-GPU duets.
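
    For reference, the Metropolis acceptance rule that such an engine evaluates looks as follows in a serial NumPy sketch with a toy pair potential; the potential, temperature, and move size are illustrative assumptions rather than the paper's oligopyrrole model.

        import numpy as np

        # Serial Metropolis Monte Carlo sweep over a toy particle system; a GPU
        # engine applies the same accept/reject rule with parallel energy sums.
        rng = np.random.default_rng(3)
        beta, step = 1.0, 0.1
        pos = rng.standard_normal((64, 3))

        def energy(p):                               # toy harmonic pair potential
            d = np.linalg.norm(p[:, None, :] - p[None, :, :], axis=-1)
            return 0.5 * np.sum(np.triu(d, 1) ** 2)

        e = energy(pos)
        for _ in range(1000):
            i = rng.integers(len(pos))
            trial = pos.copy()
            trial[i] += step * rng.standard_normal(3)    # single-particle move
            e_new = energy(trial)
            if rng.random() < np.exp(min(0.0, -beta * (e_new - e))):  # Metropolis
                pos, e = trial, e_new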

  3. The Metropolis Monte Carlo method with CUDA enabled Graphic Processing Units

    SciTech Connect

    Hall, Clifford; Ji, Weixiao; Blaisten-Barojas, Estela

    2014-02-01

    We present a CPU-GPU system for runtime acceleration of large molecular simulations using GPU computation and memory swaps. The memory architecture of the GPU can be used both as container for simulation data stored on the graphics card and as floating-point code target, providing an effective means for the manipulation of atomistic or molecular data on the GPU. To fully take advantage of this mechanism, efficient GPU realizations of algorithms used to perform atomistic and molecular simulations are essential. Our system implements a versatile molecular engine, including inter-molecule interactions and orientational variables for performing the Metropolis Monte Carlo (MMC) algorithm, which is one type of Markov chain Monte Carlo. By combining memory objects with floating-point code fragments we have implemented an MMC parallel engine that entirely avoids the communication time of molecular data at runtime. Our runtime acceleration system is a forerunner of a new class of CPU-GPU algorithms exploiting memory concepts combined with threading for avoiding bus bandwidth and communication. The testbed molecular system used here is a condensed phase system of oligopyrrole chains. A benchmark shows a size scaling speedup of 60 for systems with 210,000 pyrrole monomers. Our implementation can easily be combined with MPI to connect in parallel several CPU-GPU duets. -- Highlights: •We parallelize the Metropolis Monte Carlo (MMC) algorithm on one CPU-GPU duet. •The Adaptive Tempering Monte Carlo employs MMC and profits from this CPU-GPU implementation. •Our benchmark shows a size scaling-up speedup of 62 for systems with 225,000 particles. •The testbed involves a polymeric system of oligopyrroles in the condensed phase. •The CPU-GPU parallelization includes dipole-dipole and Mie-Jones classic potentials.

  4. Parallelized multi-graphics processing unit framework for high-speed Gabor-domain optical coherence microscopy.

    PubMed

    Tankam, Patrice; Santhanam, Anand P; Lee, Kye-Sung; Won, Jungeun; Canavesi, Cristina; Rolland, Jannick P

    2014-07-01

    Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000×1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1 × 1 × 0.6 mm³ skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real-time to the examiner using parallelized GPU processing.
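
    The control mechanism described, choosing how many A-scans each GPU takes so that its memory stays saturated, reduces to simple arithmetic. The sketch below shows the idea; the device list, byte counts, and headroom factor are all assumptions, not values from the paper.

        # Size each device's A-scan batch to its memory budget, then deal out
        # half-open work ranges round-robin. All numbers are assumptions.
        ascans_total = 1000 * 1000
        bytes_per_ascan = 2048 * 4                   # e.g. 2048 float32 samples
        devices = [{"name": "gpu0", "mem": 2 << 30},
                   {"name": "gpu1", "mem": 2 << 30}]

        for d in devices:
            d["batch"] = int(0.8 * d["mem"]) // bytes_per_ascan  # 20% headroom

        queue, start = [], 0
        while start < ascans_total:
            for d in devices:
                if start >= ascans_total:
                    break
                n = min(d["batch"], ascans_total - start)
                queue.append((d["name"], start, start + n))
                start += n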

  5. Parallelized multi–graphics processing unit framework for high-speed Gabor-domain optical coherence microscopy

    PubMed Central

    Tankam, Patrice; Santhanam, Anand P.; Lee, Kye-Sung; Won, Jungeun; Canavesi, Cristina; Rolland, Jannick P.

    2014-01-01

    Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000×1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1 × 1 × 0.6 mm³ skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real-time to the examiner using parallelized GPU processing. PMID:24695868

  6. Enabling customer self service through image processing on mobile devices

    NASA Astrophysics Data System (ADS)

    Kliche, Ingmar; Hellmann, Sascha; Kreutel, Jörn

    2013-03-01

    Our paper will outline the results of a research project that employs image processing for the automatic diagnosis of technical devices whose internal state is communicated through visual displays. In particular, we developed a method for detecting exceptional states of retail wireless routers, analysing the state and blinking behaviour of the LEDs that make up most routers' user interface. The method was made configurable by means of abstracting away from a particular device's display properties, thus being able to analyse a whole range of different devices whose displays are covered by our abstraction. The method of analysis and its configuration mechanism were implemented as a native mobile application for the Android Platform. It employs the local camera of mobile devices for capturing a router's state, and uses overlaid visual hints for guiding the user toward that perspective from where an analysis is possible.
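
    Once the LEDs are located, the analysis reduces to classifying each one's brightness over time as off, steady, or blinking. The NumPy sketch below captures that logic on a synthetic brightness trace; the threshold, frame rate, and signal are all assumptions, not the project's implementation.

        import numpy as np

        # Classify an LED region as off / steady / blinking from per-frame
        # brightness samples of its region of interest.
        fps, thresh = 30.0, 0.5
        frames = (np.arange(90) % 15 < 8).astype(float)  # synthetic 2 Hz blink

        on = frames > thresh
        if not on.any():
            state = "off"
        elif on.all():
            state = "steady on"
        else:
            rises = np.flatnonzero(np.diff(on.astype(int)) == 1)  # off-to-on edges
            period = np.mean(np.diff(rises)) / fps if len(rises) > 1 else float("nan")
            state = f"blinking, period ~{period:.2f} s"
        print(state)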

  7. Large scale neural circuit mapping data analysis accelerated with the graphical processing unit (GPU)

    PubMed Central

    Shi, Yulin; Veidenbaum, Alexander V.; Nicolau, Alex; Xu, Xiangmin

    2014-01-01

    Background Modern neuroscience research demands computing power. Neural circuit mapping studies such as those using laser scanning photostimulation (LSPS) produce large amounts of data and require intensive computation for post-hoc processing and analysis. New Method Here we report on the design and implementation of a cost-effective desktop computer system for accelerated experimental data processing with recent GPU computing technology. A new version of Matlab software with GPU-enabled functions is used to develop programs that run on Nvidia GPUs to harness their parallel computing power. Results We evaluated both the central processing unit (CPU) and GPU-enabled computational performance of our system in benchmark testing and practical applications. The experimental results show that the GPU-CPU co-processing of simulated data and actual LSPS experimental data clearly outperformed the multi-core CPU with up to a 22x speedup, depending on computational tasks. Further, we present a comparison of numerical accuracy between GPU and CPU computation to verify the precision of GPU computation. In addition, we show how GPUs can be effectively adapted to improve the performance of commercial image processing software such as Adobe Photoshop. Comparison with Existing Method(s) To the best of our knowledge, this is the first demonstration of GPU application in neural circuit mapping and electrophysiology-based data processing. Conclusions Together, GPU-enabled computation enhances our ability to process large-scale data sets derived from neural circuit mapping studies, allowing for increased processing speeds while retaining data precision. PMID:25277633

  8. Robot graphic simulation testbed

    NASA Technical Reports Server (NTRS)

    Cook, George E.; Sztipanovits, Janos; Biegl, Csaba; Karsai, Gabor; Springfield, James F.

    1991-01-01

    The objective of this research was twofold. First, the basic capabilities of ROBOSIM (graphical simulation system) were improved and extended by taking advantage of advanced graphic workstation technology and artificial intelligence programming techniques. Second, the scope of the graphic simulation testbed was extended to include general problems of Space Station automation. Hardware support for 3-D graphics and high processing performance make high resolution solid modeling, collision detection, and simulation of structural dynamics computationally feasible. The Space Station is a complex system with many interacting subsystems. Design and testing of automation concepts demand modeling of the affected processes, their interactions, and that of the proposed control systems. The automation testbed was designed to facilitate studies in Space Station automation concepts.

  9. Distributed cooperating processes in a mobile robot control system

    NASA Technical Reports Server (NTRS)

    Skillman, Thomas L., Jr.

    1988-01-01

    A mobile inspection robot has been proposed for the NASA Space Station. It will be a free flying autonomous vehicle that will leave a berthing unit to accomplish a variety of inspection tasks around the Space Station, and then return to its berth to recharge, refuel, and transfer information. The Flying Eye robot will receive voice communication to change its attitude, move at a constant velocity, and move to a predefined location along a self generated path. This mobile robot control system requires integration of traditional command and control techniques with a number of AI technologies. Speech recognition, natural language understanding, task and path planning, sensory abstraction and pattern recognition are all required for successful implementation. The interface between the traditional numeric control techniques and the symbolic processing to the AI technologies must be developed, and a distributed computing approach will be needed to meet the real time computing requirements. To study the integration of the elements of this project, a novel mobile robot control architecture and simulation based on the blackboard architecture was developed. The control system operation and structure is discussed.

  10. Analytic first derivatives of floating occupation molecular orbital-complete active space configuration interaction on graphical processing units.

    PubMed

    Hohenstein, Edward G; Bouduban, Marine E F; Song, Chenchen; Luehr, Nathan; Ufimtsev, Ivan S; Martínez, Todd J

    2015-07-01

    The floating occupation molecular orbital-complete active space configuration interaction (FOMO-CASCI) method is a promising alternative to the state-averaged complete active space self-consistent field (SA-CASSCF) method. We have formulated the analytic first derivative of FOMO-CASCI in a manner that is well-suited for a highly efficient implementation using graphical processing units (GPUs). Using this implementation, we demonstrate that FOMO-CASCI gradients are of similar computational expense to configuration interaction singles (CIS) or time-dependent density functional theory (TDDFT). In contrast to CIS and TDDFT, FOMO-CASCI can describe multireference character of the electronic wavefunction. We show that FOMO-CASCI compares very favorably to SA-CASSCF in its ability to describe molecular geometries and potential energy surfaces around minimum energy conical intersections. Finally, we apply FOMO-CASCI to the excited state hydrogen transfer reaction in methyl salicylate. PMID:26156469

  11. Real-Space Density Functional Theory on Graphical Processing Units: Computational Approach and Comparison to Gaussian Basis Set Methods.

    PubMed

    Andrade, Xavier; Aspuru-Guzik, Alán

    2013-10-01

    We discuss the application of graphical processing units (GPUs) to accelerate real-space density functional theory (DFT) calculations. To make our implementation efficient, we have developed a scheme to expose the data parallelism available in the DFT approach; this is applied to the different procedures required for a real-space DFT calculation. We present results for current-generation GPUs from AMD and Nvidia, which show that our scheme, implemented in the free code Octopus, can reach a sustained performance of up to 90 GFlops for a single GPU, representing a significant speed-up when compared to the CPU version of the code. Moreover, for some systems, our implementation can outperform a GPU Gaussian basis set code, showing that the real-space approach is a competitive alternative for DFT simulations on GPUs. PMID:26589153
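
    The grid operations that expose this data parallelism are stencil applications such as the finite-difference Laplacian in the kinetic-energy term. A minimal sketch follows; the stencil order, grid spacing, and grid size are assumptions, not Octopus's implementation.

        import numpy as np

        # Second-order finite-difference Laplacian applied to an orbital on a
        # uniform real-space mesh: every grid point updates independently,
        # which is what maps so well onto GPU threads.
        h = 0.2                                      # grid spacing (assumed), a.u.
        orb = np.random.default_rng(4).standard_normal((48, 48, 48))

        def laplacian(f):
            out = -6.0 * f
            for axis in range(3):
                out += np.roll(f, 1, axis) + np.roll(f, -1, axis)
            return out / h**2

        kinetic = -0.5 * laplacian(orb)              # -(1/2)∇² in atomic units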

  12. Analytic first derivatives of floating occupation molecular orbital-complete active space configuration interaction on graphical processing units.

    PubMed

    Hohenstein, Edward G; Bouduban, Marine E F; Song, Chenchen; Luehr, Nathan; Ufimtsev, Ivan S; Martínez, Todd J

    2015-07-01

    The floating occupation molecular orbital-complete active space configuration interaction (FOMO-CASCI) method is a promising alternative to the state-averaged complete active space self-consistent field (SA-CASSCF) method. We have formulated the analytic first derivative of FOMO-CASCI in a manner that is well-suited for a highly efficient implementation using graphical processing units (GPUs). Using this implementation, we demonstrate that FOMO-CASCI gradients are of similar computational expense to configuration interaction singles (CIS) or time-dependent density functional theory (TDDFT). In contrast to CIS and TDDFT, FOMO-CASCI can describe multireference character of the electronic wavefunction. We show that FOMO-CASCI compares very favorably to SA-CASSCF in its ability to describe molecular geometries and potential energy surfaces around minimum energy conical intersections. Finally, we apply FOMO-CASCI to the excited state hydrogen transfer reaction in methyl salicylate.

  13. IGIS (Interactive Geologic Interpretation System) computer-aided photogeologic mapping with image processing, graphics and CAD/CAM capabilities

    SciTech Connect

    McGuffie, B.A.; Johnson, L.F.; Alley, R.E.; Lang, H.R.

    1989-10-01

    Advances in computer technology are changing the way geologists integrate and use data. Although many geoscience disciplines are absolutely dependent upon computer processing, photogeological and map interpretation computer procedures are just now being developed. Historically, geologists collected data in the field and mapped manually on a topographic map or aerial photographic base. New software called the Interactive Geologic Interpretation System (IGIS) is being developed at the Jet Propulsion Laboratory (JPL) within the National Aeronautics and Space Administration (NASA)-funded Multispectral Analysis of Sedimentary Basins Project. To complement conventional geological mapping techniques, Landsat Thematic Mapper (TM) or other digital remote sensing image data and co-registered digital elevation data are combined using computer imaging, graphics, and CAD/CAM techniques to provide tools for photogeologic interpretation, strike/dip determination, cross section construction, stratigraphic section measurement, topographic slope measurement, terrain profile generation, rotatable 3-D block diagram generation, and seismic analysis.

  14. Real-Space Density Functional Theory on Graphical Processing Units: Computational Approach and Comparison to Gaussian Basis Set Methods.

    PubMed

    Andrade, Xavier; Aspuru-Guzik, Alán

    2013-10-01

    We discuss the application of graphical processing units (GPUs) to accelerate real-space density functional theory (DFT) calculations. To make our implementation efficient, we have developed a scheme to expose the data parallelism available in the DFT approach; this is applied to the different procedures required for a real-space DFT calculation. We present results for current-generation GPUs from AMD and Nvidia, which show that our scheme, implemented in the free code Octopus, can reach a sustained performance of up to 90 GFlops for a single GPU, representing a significant speed-up when compared to the CPU version of the code. Moreover, for some systems, our implementation can outperform a GPU Gaussian basis set code, showing that the real-space approach is a competitive alternative for DFT simulations on GPUs.

  15. Accelerating the performance of a novel meshless method based on collocation with radial basis functions by employing a graphical processing unit as a parallel coprocessor

    NASA Astrophysics Data System (ADS)

    Owusu-Banson, Derek

    In recent times, a variety of industries, applications, and numerical methods, including the meshless method, have enjoyed a great deal of success by utilizing the graphical processing unit (GPU) as a parallel coprocessor. These benefits often include performance improvements over previous implementations. Furthermore, applications running on graphics processors enjoy superior performance per dollar and performance per watt compared with implementations built exclusively on traditional central processing technologies. The GPU was originally designed for graphics acceleration, but the modern GPU, known as the General Purpose Graphical Processing Unit (GPGPU), can be used for scientific and engineering calculations. The GPGPU consists of a massively parallel array of integer and floating-point processors, with typically hundreds of processors per graphics card and dedicated high-speed memory. This work describes an application written by the author, titled GaussianRBF, to show the implementation and results of a novel meshless method that incorporates collocation of the Gaussian radial basis function by utilizing the GPU as a parallel co-processor. Key phases of the proposed meshless method have been executed on the GPU using the NVIDIA CUDA software development kit. In particular, the matrix fill and solution phases have been carried out on the GPU, along with some post-processing. This approach resulted in decreased processing time compared to a similar algorithm implemented on the CPU while maintaining the same accuracy.
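
    As a deliberately tiny instance of the method, the sketch below fills and solves a Gaussian-RBF collocation system for a 1D Poisson problem. The equation, shape parameter, and point count are illustrative; the dense matrix fill and solve are the phases the thesis offloads to the GPU.

        import numpy as np

        # Gaussian RBF collocation for u''(x) = f(x) on [0, 1], u(0) = u(1) = 0.
        n, eps = 20, 10.0                            # centers and shape parameter
        x = np.linspace(0.0, 1.0, n)
        f = lambda t: -np.pi**2 * np.sin(np.pi * t)  # exact solution: sin(pi x)

        r2 = (x[:, None] - x[None, :]) ** 2
        phi = np.exp(-(eps**2) * r2)                     # Gaussian RBF matrix
        d2phi = (4.0 * eps**4 * r2 - 2.0 * eps**2) * phi # its second derivatives

        A = d2phi.copy()                             # interior collocation rows
        b = f(x)
        A[0], A[-1] = phi[0], phi[-1]                # Dirichlet rows at the ends
        b[0] = b[-1] = 0.0
        coeff = np.linalg.solve(A, b)                # dense solve ("solution phase")
        u = phi @ coeff                              # u at the collocation points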

  16. Mobil uses two-layer coating process on replacement pipe

    SciTech Connect

    Not Available

    1991-03-01

    Mobil Oil's West Coast Pipe Line, as part of an ongoing program, has replaced sections of its crude oil pipe line that crosses Southern California's San Joaquin Valley. The significant aspect of the replacement project was the use of a new two-part coating process that has the ability to make cathodic protection more effective, while not deteriorating in service. Mobil's crude line extends from the company's San Joaquin Valley oil field in Kern County to the Torrance, Calif., refinery on the south side of Los Angeles. It crosses a variety of terrain including desert, foothills and urban development. Crude oil from the San Joaquin Valley is heavy and requires heating for efficient flow. Normal operating temperature is about 180°F. Due to moisture in the soil surrounding the hot line, the risk of corrosion is constant. Additionally, soil stress on such a line extending through the California hills inflicts damage on the protective coating. Under these conditions, coatings can soften, bake out and eventually become brittle. The ultimate result is separation from the pipe. The coating system employs a two-part process. Each of the two coatings are tailored to each other in a patented process, forming a chemical bond between the layers. This enhances the pipe protection both mechanically and electrically.

  17. Perception in statistical graphics

    NASA Astrophysics Data System (ADS)

    VanderPlas, Susan Ruth

    There has been quite a bit of research on statistical graphics and visualization, generally focused on new types of graphics, new software to create graphics, interactivity, and usability studies. Our ability to interpret and use statistical graphics hinges on the interface between the graph itself and the brain that perceives and interprets it, and there is substantially less research on the interplay between graph, eye, brain, and mind than is sufficient to understand the nature of these relationships. The goal of the work presented here is to further explore the interplay between a static graph, the translation of that graph from paper to mental representation (the journey from eye to brain), and the mental processes that operate on that graph once it is transferred into memory (mind). Understanding the perception of statistical graphics should allow researchers to create more effective graphs which produce fewer distortions and viewer errors while reducing the cognitive load necessary to understand the information presented in the graph. Taken together, these experiments should lay a foundation for exploring the perception of statistical graphics. There has been considerable research into the accuracy of numerical judgments viewers make from graphs, and these studies are useful, but it is more effective to understand how errors in these judgments occur so that the root cause of the error can be addressed directly. Understanding how visual reasoning relates to the ability to make judgments from graphs allows us to tailor graphics to particular target audiences. In addition, understanding the hierarchy of salient features in statistical graphics allows us to clearly communicate the important message from data or statistical models by constructing graphics which are designed specifically for the perceptual system.

  18. Programmer's Guide for FFORM. Physical Processes in Terrestrial and Aquatic Ecosystems, Computer Programs and Graphics Capabilities.

    ERIC Educational Resources Information Center

    Anderson, Lougenia; Gales, Larry

    This module is part of a series designed to be used by life science students for instruction in the application of physical theory to ecosystem operation. Most modules contain computer programs which are built around a particular application of a physical process. FFORM is a portable format-free input subroutine package written in ANSI Fortran IV…

  19. Graphic Novels, Web Comics, and Creator Blogs: Examining Product and Process

    ERIC Educational Resources Information Center

    Carter, James Bucky

    2011-01-01

    Young adult literature (YAL) of the late 20th and early 21st century is exploring hybrid forms with growing regularity by embracing textual conventions from sequential art, video games, film, and more. As well, Web-based technologies have given those who consume YAL more immediate access to authors, their metacognitive creative processes, and…

  20. Second and Fourth Graders' Copying Ability: From Graphical to Linguistic Processing

    ERIC Educational Resources Information Center

    Grabowski, Joachim; Weinzierl, Christian; Schmitt, Markus

    2010-01-01

    Particularly in primary school, good performance on copy tasks is an important working technique. With respect to writing skills, copying is a very basic process on which more complex writing abilities are based. We studied the copying ability of second and fourth graders across four types of symbols which vary with respect to their semantic and…

  1. Effects of Graphic Organizers on Student Achievement in the Writing Process

    ERIC Educational Resources Information Center

    Brown, Marjorie

    2011-01-01

    Writing at the high school level requires higher cognitive and literacy skills. Educators must decide the strategies best suited for the varying skills of each process. Compounding this issue is the need to instruct students with learning disabilities. Writing for students with learning disabilities is a struggle at minimum; teachers have to find…

  2. The LHEA PDP 11/70 graphics processing facility users guide

    NASA Technical Reports Server (NTRS)

    1978-01-01

    This guide compiles all the information needed to allow the inexperienced user to program on the PDP 11/70. Information regarding the use of editing and file-manipulation utilities, as well as operational procedures, is included. The inexperienced user is taken through the process of creating, editing, compiling, task building, and debugging his/her FORTRAN program. Documentation on additional software is also included.

  3. Conceptual Learning with Multiple Graphical Representations: Intelligent Tutoring Systems Support for Sense-Making and Fluency-Building Processes

    ERIC Educational Resources Information Center

    Rau, Martina A.

    2013-01-01

    Most learning environments in the STEM disciplines use multiple graphical representations along with textual descriptions and symbolic representations. Multiple graphical representations are powerful learning tools because they can emphasize complementary aspects of complex learning contents. However, to benefit from multiple graphical…

  4. Genetic algorithm supported by graphical processing unit improves the exploration of effective connectivity in functional brain imaging.

    PubMed

    Chan, Lawrence Wing Chi; Pang, Bin; Shyu, Chi-Ren; Chan, Tao; Khong, Pek-Lan

    2015-01-01

    Brain regions of human subjects exhibit certain levels of associated activation upon specific environmental stimuli. Functional Magnetic Resonance Imaging (fMRI) detects regional signals, from which we can infer the direct or indirect neuronal connectivity between the regions. Structural Equation Modeling (SEM) is an appropriate mathematical approach for analyzing effective connectivity using fMRI data. A maximum likelihood (ML) discrepancy function is minimized against constrained coefficients of a path model. The minimization is an iterative process, and the computing time is very long because the number of iterations increases geometrically with the number of path coefficients. On a regular quad-core Central Processing Unit (CPU) platform, up to 3 months is required for the iterations from 0 to 30 path coefficients. This study demonstrates the application of a Graphical Processing Unit (GPU) with a parallel Genetic Algorithm (GA) that replaces the Powell minimization in the standard program code of the analysis software package. In the same example, the GA on the GPU reduced the duration to 20 h and provided a more accurate solution than the standard program code on the CPU.
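
    A stripped-down serial version of such a genetic algorithm, minimizing a stand-in discrepancy function over path coefficients, is sketched below. The population size, mutation scale, and objective are assumptions; the independent fitness evaluations are the part that parallelizes across GPU threads.

        import numpy as np

        # Minimal genetic algorithm: selection, one-point crossover, Gaussian
        # mutation. Each individual's fitness is independent, hence GPU-friendly.
        rng = np.random.default_rng(5)
        n_coef, pop_size, sigma = 10, 64, 0.1
        target = rng.uniform(-1.0, 1.0, n_coef)      # pretend "true" coefficients

        def discrepancy(c):                          # stand-in for the SEM ML function
            return np.sum((c - target) ** 2, axis=-1)

        pop = rng.uniform(-1.0, 1.0, (pop_size, n_coef))
        for _ in range(200):
            fit = discrepancy(pop)
            parents = pop[np.argsort(fit)[: pop_size // 2]]    # keep the best half
            a = parents[rng.integers(len(parents), size=pop_size)]
            b = parents[rng.integers(len(parents), size=pop_size)]
            cut = rng.integers(1, n_coef, size=pop_size)
            mask = np.arange(n_coef)[None, :] < cut[:, None]   # one-point crossover
            pop = np.where(mask, a, b) + sigma * rng.standard_normal((pop_size, n_coef))

        best = pop[np.argmin(discrepancy(pop))]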

  5. A straightforward graphical user interface for basic and advanced signal processing of thermographic infrared sequences

    NASA Astrophysics Data System (ADS)

    Klein, Matthieu T.; Ibarra-Castanedo, Clemente; Maldague, Xavier P.; Bendada, Abdelhakim

    2008-03-01

    IR-View is a free and open-source Matlab application that was released in 1998 at the Computer Vision and Systems Laboratory (CVSL) at Université Laval, Canada, as an answer to many common and recurrent needs in infrared thermography. IR-View has proven to be a useful tool at CVSL for the past 10 years. The software itself, and its concept and functions, may be of interest to other laboratories and companies doing research in the IR NDT field. This article describes the functions and processing techniques integrated into IR-View, freely downloadable under the GNU license at http://mivim.gel.ulaval.ca. A demonstration of IR-View's functionality will also be given during the DSS08 SPIE Defense and Security Symposium.

  6. Real-time 4D signal processing and visualization using graphics processing unit on a regular nonlinear-k Fourier-domain OCT system.

    PubMed

    Zhang, Kang; Kang, Jin U

    2010-05-24

    We realized graphics processing unit (GPU) based real-time 4D (3D+time) signal processing and visualization on a regular Fourier-domain optical coherence tomography (FD-OCT) system with a nonlinear k-space spectrometer. An ultra-high-speed linear spline interpolation (LSI) method for lambda-to-k spectral re-sampling is implemented in the GPU architecture, which gives average interpolation speeds of >3,000,000 line/s for 1024-pixel OCT (1024-OCT) and >1,400,000 line/s for 2048-pixel OCT (2048-OCT). The complete FD-OCT signal processing, including lambda-to-k spectral re-sampling, fast Fourier transform (FFT) and post-FFT processing, has been implemented entirely on a GPU. The maximum complete A-scan processing speeds were found to be 680,000 line/s for 1024-OCT and 320,000 line/s for 2048-OCT, which correspond to 1 GByte/s of processing bandwidth. In our experiment, a 2048-pixel CMOS camera running up to 70 kHz is used as the acquisition device. Therefore the actual imaging speed is camera-limited to 128,000 line/s for 1024-OCT or 70,000 line/s for 2048-OCT. 3D data sets are continuously acquired in real time in 1024-OCT mode, immediately processed, and visualized at up to 10 volumes/second (12,500 A-scans/volume) by either en face slice extraction or ray-casting-based volume rendering from a 3D texture mapped in graphics memory. For standard FD-OCT systems, a GPU is the only additional hardware needed to realize this improvement and no optical modification is needed. This technique is highly cost-effective and can be easily integrated into most ultrahigh-speed FD-OCT systems to overcome the 3D data processing and visualization bottlenecks.
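
    The lambda-to-k re-sampling step this paper accelerates amounts to linearly interpolating each spectrum from a grid uniform in wavelength onto one uniform in wavenumber before the FFT. A per-A-scan NumPy sketch follows; the wavelength range and reflector depth are assumed for illustration.

        import numpy as np

        # Lambda-to-k re-sampling by linear interpolation (the LSI step), then
        # FFT: the per-A-scan pipeline the GPU executes thousands of times per
        # second.
        n = 1024
        lam = np.linspace(800e-9, 880e-9, n)             # uniform in wavelength
        k_raw = 2.0 * np.pi / lam                        # hence non-uniform in k
        k_uni = np.linspace(k_raw.min(), k_raw.max(), n) # target uniform k grid

        spectrum = 1.0 + 0.5 * np.cos(2.0 * k_raw * 0.2e-3)  # one-reflector fringe
        # np.interp needs ascending sample points; k decreases as lambda grows.
        resampled = np.interp(k_uni, k_raw[::-1], spectrum[::-1])
        ascan = np.abs(np.fft.rfft(resampled - resampled.mean()))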

  8. Optimization of Parallel Legendre Transform using Graphics Processing Unit (GPU) for a Geodynamo Code

    NASA Astrophysics Data System (ADS)

    Lokavarapu, H. V.; Matsui, H.

    2015-12-01

    Convection and the magnetic field of the Earth's outer core are expected to span a vast range of length scales. Resolving these flows in geodynamo simulations based on the spherical harmonic transform (SHT) requires high-performance computing, and a significant portion of the execution time is spent on the Legendre transform. Calypso is a geodynamo code designed to model the magnetohydrodynamics of a Boussinesq fluid in a rotating spherical shell, such as the outer core of the Earth. The code has been shown to scale well on computer clusters with on the order of 10⁵ cores using Message Passing Interface (MPI) and Open Multi-Processing (OpenMP) parallelization for CPUs. To optimize further, we investigate three different algorithms for the SHT using GPUs. The first preemptively computes the Legendre polynomials on the CPU before executing the SHT on the GPU within the time integration loop. In the second approach, both the Legendre polynomials and the SHT are computed on the GPU. In the third approach, we initially partition the radial grid for the forward transform and the harmonic order for the backward transform between the CPU and GPU; thereafter, the partitioned work is computed simultaneously in the time integration loop. We examine the trade-offs between space and time, memory bandwidth, and GPU computation on Maverick, a Texas Advanced Computing Center (TACC) supercomputer. We have observed improved performance using a GPU-enabled Legendre transform. Furthermore, we compare and contrast the different algorithms in the context of GPUs.
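
    A compact CPU reference of the forward/backward Legendre transform pair on Gauss-Legendre nodes, the kernel the record offloads to the GPU; this sketch omits the spherical-harmonic order structure and the CPU/GPU partitioning of the real SHT, and the truncation degree is an assumed value.

      import numpy as np
      from numpy.polynomial.legendre import leggauss, legvander

      L = 32                                # truncation degree (assumed)
      x, w = leggauss(L + 1)                # Gauss-Legendre nodes and weights
      P = legvander(x, L)                   # P[i, l] = P_l(x_i)

      def forward(f_grid):
          # spectral coefficients: f_l = (2l + 1)/2 * sum_i w_i P_l(x_i) f(x_i)
          return (2 * np.arange(L + 1) + 1) / 2 * (P.T @ (w * f_grid))

      def backward(f_spec):
          return P @ f_spec                 # f(x_i) = sum_l f_l P_l(x_i)

      f = np.cos(3 * np.arccos(x))          # a band-limited test field (degree 3)
      assert np.allclose(backward(forward(f)), f)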

  9. Computer graphics and the graphic artist

    NASA Technical Reports Server (NTRS)

    Taylor, N. L.; Fedors, E. G.; Pinelli, T. E.

    1985-01-01

    A centralized computer graphics system is being developed at the NASA Langley Research Center. This system was required to satisfy multiuser needs, ranging from presentation quality graphics prepared by a graphic artist to 16-mm movie simulations generated by engineers and scientists. While the major thrust of the central graphics system was directed toward engineering and scientific applications, hardware and software capabilities to support the graphic artists were integrated into the design. This paper briefly discusses the importance of computer graphics in research; the central graphics system in terms of systems, software, and hardware requirements; the application of computer graphics to graphic arts, discussed in terms of the requirements for a graphic arts workstation; and the problems encountered in applying computer graphics to the graphic arts. The paper concludes by presenting the status of the central graphics system.

  10. Grid-based algorithm to search critical points, in the electron density, accelerated by graphics processing units.

    PubMed

    Hernández-Esparza, Raymundo; Mejía-Chica, Sol-Milena; Zapata-Escobar, Andy D; Guevara-García, Alfredo; Martínez-Melchor, Apolinar; Hernández-Pérez, Julio-M; Vargas, Rubicelia; Garza, Jorge

    2014-12-01

    Using a grid-based method to search for the critical points in the electron density, we show how to accelerate such a method with graphics processing units (GPUs). When the GPU implementation is contrasted with its central processing unit (CPU) counterpart, we found a large difference in the elapsed times of the two implementations, the GPU being the faster. We tested two GPUs, one intended for video games and the other for high-performance computing (HPC). On the CPU side, two processors were tested, one used in common personal computers and the other in HPC, both of the latest generation. Although our parallel algorithm scales quite well on CPUs, the same implementation on GPUs runs around 10× faster than on 16 CPUs, for any of the tested GPU/CPU combinations. We found that a GPU intended for video games can be used without any problem for our application and delivers remarkable performance; in fact, it competes with the HPC GPU, in particular when single precision is used. PMID:25345784
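
    A hedged sketch of the grid-based idea: flag the voxel where a finite-difference gradient of the density is smallest as a critical-point candidate. The Gaussian "density", grid, and density cutoff below are toy assumptions, not the paper's electron densities or its exact search criterion.

      import numpy as np

      n = 64
      ax = np.linspace(-3.0, 3.0, n)
      X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
      rho = np.exp(-(X**2 + Y**2 + Z**2))             # toy density, one maximum at the origin

      gx, gy, gz = np.gradient(rho, ax, ax, ax)       # finite-difference gradient
      gnorm = np.sqrt(gx**2 + gy**2 + gz**2)

      mask = rho > 0.1                                # ignore the flat low-density tail
      idx = np.argmin(np.where(mask, gnorm, np.inf))  # smallest gradient magnitude
      i, j, k = np.unravel_index(idx, rho.shape)
      print(ax[i], ax[j], ax[k])                      # candidate critical point near (0, 0, 0)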

  11. Graphics processing unit accelerated three-dimensional model for the simulation of pulsed low-temperature plasmas

    SciTech Connect

    Fierro, Andrew; Dickens, James; Neuber, Andreas

    2014-12-15

    A 3-dimensional particle-in-cell/Monte Carlo collision simulation that is fully implemented on a graphics processing unit (GPU) is described and used to determine low-temperature plasma characteristics at high reduced electric field, E/n, in nitrogen gas. Details of the implementation on the GPU using the NVIDIA Compute Unified Device Architecture framework are discussed with respect to efficient code execution. The software is capable of tracking around 10 × 10⁶ particles with dynamic weighting and a total mesh size larger than 10⁸ cells. The simulation is verified by comparing the electron energy distribution function and plasma transport parameters to known Boltzmann equation (BE) solvers. Under the assumption of a uniform electric field and neglecting the build-up of positive-ion space charge, the simulation agrees well with the BE solvers. The model is utilized to calculate plasma characteristics of a pulsed, parallel-plate discharge. A photoionization model provides the simulation with additional electrons after the initial seeded electron density has drifted toward the anode. The performance of the GPU implementation is compared with a CPU implementation: a speed-up factor of 13 is obtained for a 3D relaxation Poisson solver, and a factor of 60 is realized for the parallelized electron processes.
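
    As an illustration of the relaxation Poisson solver for which the record reports a 13× GPU speed-up, here is a minimal Jacobi iteration in three dimensions; the grid size, source, and zero Dirichlet boundaries are simplified assumptions, not the simulation's configuration.

      import numpy as np

      n = 32
      h = 1.0 / (n - 1)                      # grid spacing
      phi = np.zeros((n, n, n))              # potential, zero Dirichlet boundaries
      rho = np.zeros((n, n, n))
      rho[n // 2, n // 2, n // 2] = 1.0      # point charge

      for _ in range(500):                   # fixed iteration count for brevity
          # Jacobi update: the right-hand side is evaluated on the old array
          phi[1:-1, 1:-1, 1:-1] = (
              phi[2:, 1:-1, 1:-1] + phi[:-2, 1:-1, 1:-1] +
              phi[1:-1, 2:, 1:-1] + phi[1:-1, :-2, 1:-1] +
              phi[1:-1, 1:-1, 2:] + phi[1:-1, 1:-1, :-2] +
              h * h * rho[1:-1, 1:-1, 1:-1]) / 6.0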

  12. Performance evaluation for volumetric segmentation of multiple sclerosis lesions using MATLAB and computing engine in the graphical processing unit (GPU)

    NASA Astrophysics Data System (ADS)

    Le, Anh H.; Park, Young W.; Ma, Kevin; Jacobs, Colin; Liu, Brent J.

    2010-03-01

    Multiple Sclerosis (MS) is a progressive neurological disease affecting myelin pathways in the brain. Multiple lesions in the white matter can cause paralysis and severe motor disabilities of the affected patient. To solve the issue of inconsistency and user-dependency in manual lesion measurement of MRI, we have proposed a 3-D automated lesion quantification algorithm to enable objective and efficient lesion volume tracking. The computer-aided detection (CAD) of MS, written in MATLAB, utilizes the K-Nearest Neighbors (KNN) method to compute the probability of lesions on a per-voxel basis. Despite the highly optimized image-processing algorithms used in CAD development, integrating and evaluating the MS CAD in a clinical workflow is technically challenging because of the high computation rates and memory bandwidth demanded by the recursive nature of the algorithm. In this paper, we present the development and evaluation of a computing engine in the graphical processing unit (GPU) with MATLAB for segmentation of MS lesions. The paper investigates the utilization of a high-end GPU for parallel computing of KNN in the MATLAB environment to improve algorithm performance. The integration is accomplished using NVIDIA's CUDA developmental toolkit for MATLAB. The results of this study will validate the practicality and effectiveness of the prototype MS CAD in a clinical setting. The GPU method may allow MS CAD to rapidly integrate into an electronic patient record or any disease-centric health care system.
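
    The per-voxel KNN probability at the heart of the CAD scheme can be sketched as follows; the three-component feature vectors, labels, and K are synthetic stand-ins for the MRI training data, not the study's actual features.

      import numpy as np

      rng = np.random.default_rng(1)
      train_X = rng.normal(size=(1000, 3))            # e.g., (T1, T2, FLAIR) intensities
      train_y = (train_X[:, 0] > 0.5).astype(float)   # synthetic lesion labels
      K = 15

      def lesion_probability(voxels):
          # pairwise squared distances, shape (n_voxels, n_train)
          d2 = ((voxels[:, None, :] - train_X[None, :, :]) ** 2).sum(-1)
          nearest = np.argsort(d2, axis=1)[:, :K]     # indices of the K nearest
          return train_y[nearest].mean(axis=1)        # fraction of lesion neighbors

      print(lesion_probability(rng.normal(size=(5, 3))))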

  13. Structural Determination of (Al2O3)n (n = 1-15) Clusters Based on Graphic Processing Unit.

    PubMed

    Zhang, Qiyao; Cheng, Longjiu

    2015-05-26

    Global optimization algorithms have been widely used in the field of chemistry to search for the global-minimum structures of molecular and atomic clusters, a nondeterministic-polynomial problem whose difficulty grows with cluster size. Considering that the computational ability of a graphic processing unit (GPU) is much better than that of a central processing unit (CPU), we developed a GPU-based genetic algorithm for the structural prediction of clusters and achieved a high acceleration ratio compared to a CPU. For the one-dimensional (1D) operation of a GPU, taking (Al2O3)n clusters as test cases, the peak acceleration ratio of the GPU over the CPU is about 220 in single precision and 103 in double precision for calculation of the analytical interatomic potential. The peak acceleration ratio is about 240 and 107 for the block operation, and about 77 and 35 for the 2D operation, in single and double precision, respectively. The peak acceleration ratio of the whole genetic-algorithm program is about 35 compared to the CPU in double precision. Structures of (Al2O3)n clusters at n = 1-10 reported in previous works are successfully located, and their low-lying structures at n = 11-15 are predicted.
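
    A sketch of the hot spot being parallelized: evaluating an analytical pairwise interatomic potential over all atom pairs. The Born-Mayer-plus-Coulomb form, parameters, charges, and geometry below are generic illustrations, not the actual (Al2O3)n potential of the paper.

      import numpy as np

      def pair_energy(pos, q, A=1.0, b=3.0):
          diff = pos[:, None, :] - pos[None, :, :]
          r = np.sqrt((diff ** 2).sum(-1))
          iu = np.triu_indices(len(pos), k=1)          # unique pairs only
          r, qq = r[iu], (q[:, None] * q[None, :])[iu]
          return np.sum(qq / r + A * np.exp(-b * r))   # Coulomb + short-range repulsion

      rng = np.random.default_rng(2)
      pos = rng.uniform(0.0, 3.0, size=(10, 3))        # toy geometry, two Al2O3 units
      q = np.array([3, 3, -2, -2, -2] * 2, dtype=float)
      print(pair_energy(pos, q))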

  14. PO*WW*ER mobile treatment unit process hazards analysis

    SciTech Connect

    Richardson, R.B.

    1996-06-01

    The objective of this report is to demonstrate that a thorough assessment of the risks associated with the operation of the Rust Geotech patented PO*WW*ER mobile treatment unit (MTU) has been performed and documented. The MTU was developed to treat aqueous mixed wastes at the US Department of Energy (DOE) Albuquerque Operations Office sites. The MTU uses evaporation to separate organics and water from radionuclides and solids, and catalytic oxidation to convert the hazardous constituents into byproducts. This process hazards analysis evaluated a number of accident scenarios not directly related to the operation of the MTU, such as natural-phenomena damage and mishandling of chemical containers. Worst-case accident scenarios were further evaluated to determine the risk potential to the MTU and to workers, the public, and the environment. The overall risk to any group from operation of the MTU was determined to be very low; the MTU is classified as a Radiological Facility with low hazards.

  15. Optimization of image processing algorithms on mobile platforms

    NASA Astrophysics Data System (ADS)

    Poudel, Pramod; Shirvaikar, Mukul

    2011-03-01

    This work presents a technique to optimize popular image processing algorithms on mobile platforms such as cell phones, netbooks, and personal digital assistants (PDAs). The increasing demand for video applications like context-aware computing on mobile embedded systems requires the use of computationally intensive image processing algorithms, and the system engineer has a mandate to optimize them so as to meet real-time deadlines. A methodology to take advantage of an asymmetric dual-core processor, which includes an ARM and a DSP core supported by shared memory, is presented with implementation details. The target platform chosen is the popular OMAP 3530 processor for embedded media systems. It has an asymmetric dual-core architecture with an ARM Cortex-A8 and a TMS320C64x Digital Signal Processor (DSP). The development platform was the BeagleBoard with 256 MB of NAND flash and 256 MB of SDRAM memory. The basic image correlation algorithm is chosen for benchmarking as it finds widespread application in various template-matching tasks such as face recognition. The basic algorithm prototypes conform to OpenCV, a popular computer vision library. OpenCV algorithms can be easily ported to the ARM core, which runs a popular operating system such as Linux or Windows CE; however, the DSP is architecturally more efficient at handling DFT algorithms. The algorithms are tested on a variety of images, and performance results are presented measuring the speedup obtained due to the dual-core implementation. A major advantage of this approach is that it allows the ARM processor to perform important real-time tasks while the DSP addresses performance-hungry algorithms.
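
    The benchmark kernel, image correlation for template matching, is commonly computed via the FFT, which is what makes the DSP core attractive. A minimal NumPy version of the idea (not the OpenCV or DSP implementation, and using an un-normalized correlation for brevity):

      import numpy as np

      def correlate_fft(image, template):
          F_img = np.fft.rfft2(image)
          F_tpl = np.fft.rfft2(template, s=image.shape)         # zero-padded
          corr = np.fft.irfft2(F_img * np.conj(F_tpl), s=image.shape)
          return np.unravel_index(np.argmax(corr), corr.shape)  # best match offset

      img = np.random.rand(480, 640)
      tpl = img[100:140, 200:260].copy()   # plant the template at (100, 200)
      print(correlate_fft(img, tpl))       # recovers (100, 200)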

  16. GPUDePiCt: A Parallel Implementation of a Clustering Algorithm for Computing Degenerate Primers on Graphics Processing Units.

    PubMed

    Cickovski, Trevor; Flor, Tiffany; Irving-Sachs, Galen; Novikov, Philip; Parda, James; Narasimhan, Giri

    2015-01-01

    In order to make multiple copies of a target sequence in the laboratory, the technique of Polymerase Chain Reaction (PCR) requires the design of "primers", which are short fragments of nucleotides complementary to the flanking regions of the target sequence. If the same primer is to amplify multiple closely related target sequences, then it is necessary to make the primers "degenerate", which allows them to hybridize to target sequences with a limited amount of variability that may have been caused by mutations. However, the PCR technique can only tolerate a limited amount of degeneracy, and therefore the design of degenerate primers requires the identification of reasonably well-conserved regions in the input sequences. We take an existing algorithm for designing degenerate primers that is based on clustering and parallelize it in a web-accessible software package, GPUDePiCt, using a shared memory model and the computing power of Graphics Processing Units (GPUs). We test our implementation on large sets of aligned sequences from the human genome and show a multi-fold speedup for clustering using our hybrid GPU/CPU implementation over a pure CPU approach for these sequences, which consist of more than 7,500 nucleotides. We also demonstrate that this speedup is consistent over larger numbers and longer lengths of aligned sequences. PMID:26357230
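
    One piece of bookkeeping such designs must respect can be shown directly: the degeneracy of a primer written in IUPAC codes is the product, over positions, of the number of nucleotides each symbol covers. The helper below is illustrative only; the clustering algorithm itself is omitted.

      # nucleotides covered by each IUPAC symbol
      IUPAC = {"A": 1, "C": 1, "G": 1, "T": 1, "R": 2, "Y": 2, "S": 2, "W": 2,
               "K": 2, "M": 2, "B": 3, "D": 3, "H": 3, "V": 3, "N": 4}

      def degeneracy(primer):
          d = 1
          for base in primer:
              d *= IUPAC[base]
          return d

      print(degeneracy("ACGT"))    # 1 (fully specific)
      print(degeneracy("ACGTN"))   # 4
      print(degeneracy("RYNNS"))   # 2 * 2 * 4 * 4 * 2 = 128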

  18. Communication: A reduced scaling J-engine based reformulation of SOS-MP2 using graphics processing units.

    PubMed

    Maurer, S A; Kussmann, J; Ochsenfeld, C

    2014-08-01

    We present a low-prefactor, cubically scaling scaled-opposite-spin second-order Møller-Plesset perturbation theory (SOS-MP2) method which is highly suitable for massively parallel architectures like graphics processing units (GPUs). The scaling is reduced from O(N⁵) to O(N³) by a reformulation of the MP2 expression in the atomic orbital basis via Laplace transformation and the resolution-of-the-identity (RI) approximation of the integrals, in combination with efficient sparse algebra for the 3-center integral transformation. In contrast to previous works that employ GPUs for post-Hartree-Fock calculations, we do not simply employ GPU-based linear algebra libraries to accelerate the conventional algorithm. Instead, our reformulation allows us to replace the rate-determining contraction step with a modified J-engine algorithm, which has been proven to be highly efficient on GPUs. Thus, our SOS-MP2 scheme enables us to treat large molecular systems in an accurate and efficient manner on a single GPU server. PMID:25106563
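
    The Laplace-transform idea behind the scaling reduction replaces the orbital-energy denominator 1/D by a short sum of exponentials, 1/D ≈ Σ_k w_k exp(-t_k D). A crude numerical check with a naive logarithmic grid (production codes use optimized minimax quadratures with far fewer points):

      import numpy as np

      t = np.logspace(-3, 2, 2000)   # quadrature nodes (naive choice, for illustration)
      w = np.gradient(t)             # trapezoid-like weights

      def inv_by_laplace(D):
          return np.sum(w * np.exp(-t * D))

      for D in (0.5, 1.0, 4.0):      # orbital-energy denominators (toy values)
          print(D, inv_by_laplace(D), 1.0 / D)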

  19. Communication: A reduced scaling J-engine based reformulation of SOS-MP2 using graphics processing units

    SciTech Connect

    Maurer, S. A.; Kussmann, J.; Ochsenfeld, C.

    2014-08-07

    We present a low-prefactor, cubically scaling scaled-opposite-spin second-order Møller-Plesset perturbation theory (SOS-MP2) method which is highly suitable for massively parallel architectures like graphics processing units (GPUs). The scaling is reduced from O(N⁵) to O(N³) by a reformulation of the MP2 expression in the atomic orbital basis via Laplace transformation and the resolution-of-the-identity (RI) approximation of the integrals, in combination with efficient sparse algebra for the 3-center integral transformation. In contrast to previous works that employ GPUs for post-Hartree-Fock calculations, we do not simply employ GPU-based linear algebra libraries to accelerate the conventional algorithm. Instead, our reformulation allows us to replace the rate-determining contraction step with a modified J-engine algorithm, which has been proven to be highly efficient on GPUs. Thus, our SOS-MP2 scheme enables us to treat large molecular systems in an accurate and efficient manner on a single GPU server.

  20. Performance of heterogeneous computing with graphics processing unit and many integrated core for Hartree potential calculations on a numerical grid.

    PubMed

    Choi, Sunghwan; Kwon, Oh-Kyoung; Kim, Jaewook; Kim, Woo Youn

    2016-09-15

    We investigated the performance of heterogeneous computing with graphics processing units (GPUs) and many integrated core (MIC) with 20 CPU cores (20×CPU). As a practical example toward large-scale electronic structure calculations using grid-based methods, we evaluated the Hartree potentials of silver nanoparticles with various sizes (3.1, 3.7, 4.9, 6.1, and 6.9 nm) via a direct integral method supported by the sinc basis set. The so-called work-stealing scheduler was used for efficient heterogeneous computing via the balanced dynamic distribution of workloads between all processors on a given architecture without any prior information on their individual performances. 20×CPU + 1GPU was up to ∼1.5 and ∼3.1 times faster than 1GPU and 20×CPU, respectively. 20×CPU + 2GPU was ∼4.3 times faster than 20×CPU. The performance enhancement by CPU + MIC was considerably lower than expected because of the large initialization overhead of MIC, although its theoretical performance is similar to that of CPU + GPU. © 2016 Wiley Periodicals, Inc. PMID:27431905
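
    A hedged sketch of dynamic load balancing in the spirit of the work-stealing scheduler: heterogeneous workers pull task chunks from a shared queue, so the faster device automatically takes more work. True work stealing uses per-worker deques; a single shared queue and sleep-based "work" keep the example short.

      import queue
      import threading
      import time

      tasks = queue.Queue()
      for i in range(100):                   # 100 equal work chunks
          tasks.put(i)
      done = {"gpu": 0, "cpu": 0}

      def worker(name, seconds_per_task):
          while True:
              try:
                  tasks.get_nowait()
              except queue.Empty:
                  return
              time.sleep(seconds_per_task)   # stand-in for one integral chunk
              done[name] += 1

      threads = [threading.Thread(target=worker, args=("gpu", 0.001)),
                 threading.Thread(target=worker, args=("cpu", 0.004))]
      for t in threads:
          t.start()
      for t in threads:
          t.join()
      print(done)   # the faster "gpu" worker ends up with roughly 4x the tasks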

  2. Real-time photoacoustic and ultrasound dual-modality imaging system facilitated with graphics processing unit and code parallel optimization

    NASA Astrophysics Data System (ADS)

    Yuan, Jie; Xu, Guan; Yu, Yao; Zhou, Yu; Carson, Paul L.; Wang, Xueding; Liu, Xiaojun

    2013-08-01

    Photoacoustic tomography (PAT) offers structural and functional imaging of living biological tissue with highly sensitive optical absorption contrast and excellent spatial resolution comparable to medical ultrasound (US) imaging. We report the development of a fully integrated PAT and US dual-modality imaging system, which performs signal scanning, image reconstruction, and display for both photoacoustic (PA) and US imaging, all in a truly real-time manner. The back-projection (BP) algorithm for PA image reconstruction is optimized to reduce the computational cost and facilitate parallel computation on a state-of-the-art graphics processing unit (GPU) card. For the first time, PAT and US imaging of the same object can be conducted simultaneously and continuously at a real-time frame rate, presently limited by the laser repetition rate of 10 Hz. Noninvasive PAT and US imaging of human peripheral joints in vivo was achieved, demonstrating the satisfactory image quality realized with this system. Another experiment, simultaneous PAT and US imaging of contrast agent flowing through an artificial vessel, was conducted to verify the performance of this system for imaging fast biological events. The GPU-based image reconstruction software code for this dual-modality system is open source and available for download from http://sourceforge.net/projects/patrealtime.
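
    The back-projection reconstruction the record optimizes can be sketched as delay-and-sum: each image pixel accumulates, from every transducer, the RF sample whose arrival time matches the pixel-transducer distance. The array geometry, sampling rate, and data below are illustrative assumptions, not the system's parameters.

      import numpy as np

      c = 1540.0                                  # speed of sound in tissue, m/s
      fs = 40e6                                   # sampling rate, Hz
      n_det, n_samp = 128, 2048
      det_x = np.linspace(-0.02, 0.02, n_det)     # linear array along y = 0, in m
      signals = np.random.rand(n_det, n_samp)     # placeholder RF data

      xs = np.linspace(-0.02, 0.02, 200)
      ys = np.linspace(0.005, 0.045, 200)
      image = np.zeros((len(ys), len(xs)))
      for d in range(n_det):                      # on a GPU, pixels run in parallel
          dist = np.sqrt((xs[None, :] - det_x[d]) ** 2 + ys[:, None] ** 2)
          idx = np.clip((dist / c * fs).astype(int), 0, n_samp - 1)
          image += signals[d, idx]                # delay-and-sum accumulation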

  3. Graphical processing unit-based machine vision system for simultaneous measurement of shrinkage and soil release in fabrics

    NASA Astrophysics Data System (ADS)

    Kamalakannan, Sridharan; Gururajan, Arunkumar; Hill, Matthew; Shahriar, Muneem; Sari-Sarraf, Hamed; Hequet, Eric F.

    2010-04-01

    We present a machine vision system for simultaneous and objective evaluation of two important functional attributes of a fabric, namely, soil release and shrinkage. Soil release corresponds to the efficacy of the fabric in releasing stains after laundering, and shrinkage quantifies the dimensional changes in the fabric post-laundering. Within the framework of the proposed machine vision scheme, the samples are prepared using a prescribed procedure and subsequently digitized using a commercially available off-the-shelf scanner. Shrinkage measurements in the lengthwise and widthwise directions are obtained by detecting and measuring the distance between two pairs of appropriately placed markers. In addition, these shrinkage markers help in producing estimates of the location of the center of the stain on the fabric image. Using this information, a customized adaptive statistical snake is initialized, which evolves based on region statistics to segment the stain. Once the stain is localized, appropriate measurements can be extracted from the stain and the background image that help in objectively quantifying stain release. In addition, the statistical snakes algorithm has been parallelized on a graphical processing unit, which allows for rapid evolution of multiple snakes; this, in turn, means that multiple stains can be detected and segmented in a computationally efficient fashion. Finally, the aforementioned scheme is validated on a sizeable set of fabric images, and the promising nature of the results helps establish the efficacy of the proposed approach.

  4. Communication: A reduced scaling J-engine based reformulation of SOS-MP2 using graphics processing units

    NASA Astrophysics Data System (ADS)

    Maurer, S. A.; Kussmann, J.; Ochsenfeld, C.

    2014-08-01

    We present a low-prefactor, cubically scaling scaled-opposite-spin second-order Møller-Plesset perturbation theory (SOS-MP2) method which is highly suitable for massively parallel architectures like graphics processing units (GPUs). The scaling is reduced from O(N⁵) to O(N³) by a reformulation of the MP2 expression in the atomic orbital basis via Laplace transformation and the resolution-of-the-identity (RI) approximation of the integrals, in combination with efficient sparse algebra for the 3-center integral transformation. In contrast to previous works that employ GPUs for post-Hartree-Fock calculations, we do not simply employ GPU-based linear algebra libraries to accelerate the conventional algorithm. Instead, our reformulation allows us to replace the rate-determining contraction step with a modified J-engine algorithm, which has been proven to be highly efficient on GPUs. Thus, our SOS-MP2 scheme enables us to treat large molecular systems in an accurate and efficient manner on a single GPU server.

  5. Multi-dimensional, mesoscopic Monte Carlo simulations of inhomogeneous reaction-drift-diffusion systems on graphics-processing units.

    PubMed

    Vigelius, Matthias; Meyer, Bernd

    2012-01-01

    For many biological applications, a macroscopic (deterministic) treatment of reaction-drift-diffusion systems is insufficient. Instead, one has to properly handle the stochastic nature of the problem and generate true sample paths of the underlying probability distribution. Unfortunately, stochastic algorithms are computationally expensive and, in most cases, the large number of participating particles renders the relevant parameter regimes inaccessible. In an attempt to address this problem, we present a genuinely stochastic, multi-dimensional algorithm that solves the inhomogeneous, non-linear drift-diffusion problem on a mesoscopic level. Our method improves on existing implementations in being multi-dimensional and handling inhomogeneous drift and diffusion. The algorithm is well suited for implementation on data-parallel hardware architectures such as general-purpose graphics processing units (GPUs). We integrate the method into an operator-splitting approach that decouples chemical reactions from the spatial evolution. We demonstrate the validity and applicability of our algorithm with a comprehensive suite of standard test problems that also serve to quantify the numerical accuracy of the method. We provide a freely available, fully functional GPU implementation. Integration into Inchman, a user-friendly web service that allows researchers to perform parallel simulations of reaction-drift-diffusion systems on GPU clusters, is underway.

  6. Monte Carlo-based fluorescence molecular tomography reconstruction method accelerated by a cluster of graphic processing units.

    PubMed

    Quan, Guotao; Gong, Hui; Deng, Yong; Fu, Jianwei; Luo, Qingming

    2011-02-01

    High-speed fluorescence molecular tomography (FMT) reconstruction for 3-D heterogeneous media is still one of the most challenging problems in diffusive optical fluorescence imaging. In this paper, we propose a fast FMT reconstruction method that is based on Monte Carlo (MC) simulation and accelerated by a cluster of graphics processing units (GPUs). Based on the Message Passing Interface standard, we modified the MC code for fast FMT reconstruction, and different Green's functions representing the flux distribution in media are calculated simultaneously by different GPUs in the cluster. A load-balancing method was also developed to increase the computational efficiency. By applying the Fréchet derivative, a Jacobian matrix is formed to reconstruct the distribution of the fluorochromes using the calculated Green's functions. Phantom experiments have shown that only 10 min are required to obtain reconstruction results with a cluster of 6 GPUs, rather than 6 h with a cluster of multiple dual-Opteron CPU nodes. Because of the advantages of high accuracy and suitability for 3-D heterogeneous media with refractive-index-unmatched boundaries from the MC simulation, the GPU cluster-accelerated method provides a reliable approach to high-speed reconstruction for FMT imaging.

  7. Integrative Processing of Verbal and Graphical Information during Re-Reading Predicts Learning from Illustrated Text: An Eye-Movement Study

    ERIC Educational Resources Information Center

    Mason, Lucia; Tornatora, Maria Caterina; Pluchino, Patrik

    2015-01-01

    Printed or digital textbooks contain texts accompanied by various kinds of visualisation. Successful comprehension of these materials requires integrating verbal and graphical information. This study investigates the time course of processing an illustrated text through eye-tracking methodology in the school context. The aims were to identify…

  8. Research of physical-chemical processes in optically transparent materials during coloring points formation by volumetric-graphical laser processing

    NASA Astrophysics Data System (ADS)

    Davidov, Nicolay N.; Sushkova, L. T.; Rufitskii, M. V.; Kudaev, Serge V.; Galkin, Arkadii F.; Orlov, Vitalii N.; Prokoshev, Valerii G.

    1996-03-01

    A distinctive feature of glass is the wide range of correlation between internal absorption and transmittance of electromagnetic radiation across a broad wavelength range, from gamma rays up to infrared radiation. This property opens opportunities to search for new processes for the machining, control, and exploitation of glassware for home appliances, radioelectronics, and illumination.

  9. Designing and Implementing an OVERFLOW Reader for ParaView and Comparing Performance Between Central Processing Units and Graphical Processing Units

    NASA Technical Reports Server (NTRS)

    Chawner, David M.; Gomez, Ray J.

    2010-01-01

    In the Applied Aerosciences and CFD branch at Johnson Space Center, computational simulations are run that face many challenges, two of which are the ability to customize software for specialized needs and the need to run simulations as fast as possible. Many different tools are used for running these simulations, and each one has its own pros and cons. Once these simulations are run, software is needed that is capable of visualizing the results in an appealing manner. Some of this software is open source, meaning that anyone can edit the source code to make modifications and distribute it to all other users in a future release. This is very useful, especially in this branch where many different tools are in use: file readers can be written to load any file format into a program, easing the bridge from one tool to another. Programming such a reader requires knowledge of the file format being read as well as the equations necessary to obtain the derived values after loading. These CFD simulations load extremely large files and calculate derived values, and they usually take a few hours to complete, even on the fastest machines. Graphics processing units (GPUs) are usually used to render graphics on computers; however, in recent years GPUs have been used for more generic applications because of the speed of these processors. Applications run on GPUs have been known to run up to forty times faster than they would on normal central processing units (CPUs). If these CFD programs are extended to run on GPUs, the amount of time they require would be much less, allowing more simulations to be run in the same amount of time and possibly more complex computations to be performed.

  10. Real-time dual-mode standard/complex Fourier-domain OCT system using graphics processing unit accelerated 4D signal processing and visualization

    NASA Astrophysics Data System (ADS)

    Zhang, Kang; Kang, Jin U.

    2011-03-01

    We realized a real-time dual-mode standard/complex Fourier-domain optical coherence tomography (FD-OCT) system using graphics processing unit (GPU) accelerated 4D (3D+time) signal processing and visualization. For both the standard and complex FD-OCT modes, the signal processing tasks were implemented on a dual-GPU architecture that included λ-to-k spectral re-sampling, fast Fourier transform (FFT), modified Hilbert transform, logarithmic scaling, and volume rendering. The maximum A-scan processing speeds achieved are >3,000,000 line/s for the standard 1024-pixel FD-OCT, and >500,000 line/s for the complex 1024-pixel FD-OCT. Multiple volume renderings of the same 3D data set were performed and displayed with different view angles. The GPU-acceleration technique is highly cost-effective and can be easily integrated into most ultrahigh-speed FD-OCT systems to overcome the 3D data processing and visualization bottlenecks.

  11. Compute-unified device architecture implementation of a block-matching algorithm for multiple graphical processing unit cards

    NASA Astrophysics Data System (ADS)

    Massanes, Francesc; Cadennes, Marie; Brankov, Jovan G.

    2011-07-01

    We describe and evaluate a fast implementation of a classical block-matching motion estimation algorithm for multiple graphical processing units (GPUs) using the compute unified device architecture computing engine. The implemented block-matching algorithm uses summed absolute difference error criterion and full grid search (FS) for finding optimal block displacement. In this evaluation, we compared the execution time of a GPU and CPU implementation for images of various sizes, using integer and noninteger search grids. The results show that use of a GPU card can shorten computation time by a factor of 200 times for integer and 1000 times for a noninteger search grid. The additional speedup for a noninteger search grid comes from the fact that GPU has built-in hardware for image interpolation. Further, when using multiple GPU cards, the presented evaluation shows the importance of the data splitting method across multiple cards, but an almost linear speedup with a number of cards is achievable. In addition, we compared the execution time of the proposed FS GPU implementation with two existing, highly optimized nonfull grid search CPU-based motion estimations methods, namely implementation of the Pyramidal Lucas Kanade Optical flow algorithm in OpenCV and simplified unsymmetrical multi-hexagon search in H.264/AVC standard. In these comparisons, FS GPU implementation still showed modest improvement even though the computational complexity of FS GPU implementation is substantially higher than non-FS CPU implementation. We also demonstrated that for an image sequence of 720 × 480 pixels in resolution commonly used in video surveillance, the proposed GPU implementation is sufficiently fast for real-time motion estimation at 30 frames-per-second using two NVIDIA C1060 Tesla GPU cards.

  12. Parallel algorithm for solving Kepler’s equation on Graphics Processing Units: Application to analysis of Doppler exoplanet searches

    NASA Astrophysics Data System (ADS)

    Ford, Eric B.

    2009-05-01

    We present the results of a highly parallel Kepler equation solver using the Graphics Processing Unit (GPU) on a commercial nVidia GeForce 280GTX and the "Compute Unified Device Architecture" (CUDA) programming environment. We apply this to evaluate a goodness-of-fit statistic (e.g., χ²) for Doppler observations of stars potentially harboring multiple planetary companions (assuming negligible planet-planet interactions). Given the high dimensionality of the model parameter space (at least five dimensions per planet), a global search is extremely computationally demanding. We expect that the underlying Kepler solver and model evaluator will be combined with a wide variety of more sophisticated algorithms to provide efficient global search, parameter estimation, model comparison, and adaptive experimental design for radial velocity and/or astrometric planet searches. We tested multiple implementations using single precision, double precision, pairs of single precision, and mixed precision arithmetic. We find that the vast majority of computations can be performed using single precision arithmetic, with selective use of compensated summation for increased precision. However, standard single precision is not adequate for calculating the mean anomaly from the time of observation and orbital period when evaluating the goodness-of-fit for real planetary systems and observational data sets. Using all double precision, our GPU code outperforms a similar code using a modern CPU by a factor of over 60. Using mixed precision, our GPU code provides a speed-up factor of over 600 when evaluating nsys > 1024 model planetary systems, each containing npl = 4 planets, and assuming nobs = 256 observations of each system. We conclude that modern GPUs also offer a powerful tool for repeatedly evaluating Kepler's equation and a goodness-of-fit statistic for orbital models when presented with a large parameter space.
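
    The underlying kernel, solving Kepler's equation E - e sin E = M for many mean anomalies at once, is easy to sketch with a vectorized Newton iteration; this is a CPU stand-in for the massively parallel GPU evaluation, with a fixed iteration count and eccentricity assumed for illustration.

      import numpy as np

      def solve_kepler(M, e, n_iter=10):
          E = M + e * np.sin(M)            # common starting guess
          for _ in range(n_iter):
              E -= (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
          return E

      M = np.linspace(0.0, 2.0 * np.pi, 1_000_000)    # one mean anomaly per model
      E = solve_kepler(M, e=0.3)
      print(np.max(np.abs(E - 0.3 * np.sin(E) - M)))  # residual near machine epsilon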

  13. Compute-unified device architecture implementation of a block-matching algorithm for multiple graphical processing unit cards.

    PubMed

    Massanes, Francesc; Cadennes, Marie; Brankov, Jovan G

    2011-07-01

    In this paper we describe and evaluate a fast implementation of a classical block matching motion estimation algorithm for multiple Graphical Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) computing engine. The implemented block matching algorithm (BMA) uses the summed absolute difference (SAD) error criterion and full grid search (FS) for finding the optimal block displacement. In this evaluation we compared the execution time of a GPU and CPU implementation for images of various sizes, using integer and non-integer search grids. The results show that use of a GPU card can shorten computation time by a factor of 200 times for an integer and 1000 times for a non-integer search grid. The additional speedup for a non-integer search grid comes from the fact that the GPU has built-in hardware for image interpolation. Further, when using multiple GPU cards, the presented evaluation shows the importance of the data splitting method across multiple cards, but an almost linear speedup with the number of cards is achievable. In addition we compared the execution time of the proposed FS GPU implementation with two existing, highly optimized non-full grid search CPU-based motion estimation methods, namely the implementation of the Pyramidal Lucas Kanade Optical flow algorithm in OpenCV and the Simplified Unsymmetrical multi-Hexagon search in the H.264/AVC standard. In these comparisons, the FS GPU implementation still showed modest improvement even though the computational complexity of the FS GPU implementation is substantially higher than the non-FS CPU implementations. We also demonstrated that for an image sequence of 720×480 pixels in resolution, commonly used in video surveillance, the proposed GPU implementation is sufficiently fast for real-time motion estimation at 30 frames-per-second using two NVIDIA C1060 Tesla GPU cards. PMID:22347787
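
    A minimal full-search SAD block-matching routine for a single block: every candidate displacement is evaluated independently, which is what maps so well onto the GPU. Block size, window radius, and data are illustrative, not the evaluated configuration.

      import numpy as np

      def full_search(block, ref, y0, x0, radius=8):
          B = block.shape[0]
          best, best_motion = np.inf, (0, 0)
          for dy in range(-radius, radius + 1):      # every candidate displacement
              for dx in range(-radius, radius + 1):  # is independent: easy GPU mapping
                  cand = ref[y0 + dy: y0 + dy + B, x0 + dx: x0 + dx + B]
                  sad = np.abs(block - cand).sum()   # summed absolute difference
                  if sad < best:
                      best, best_motion = sad, (dy, dx)
          return best_motion, best

      ref = np.random.rand(480, 720)
      cur_block = ref[103:119, 205:221]              # true motion (3, 5) from (100, 200)
      print(full_search(cur_block, ref, 100, 200))   # recovers ((3, 5), 0.0)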

  15. Structural, dynamic, and electrostatic properties of fully hydrated DMPC bilayers from molecular dynamics simulations accelerated with graphical processing units (GPUs).

    PubMed

    Ganesan, Narayan; Bauer, Brad A; Lucas, Timothy R; Patel, Sandeep; Taufer, Michela

    2011-11-15

    We present results of molecular dynamics simulations of fully hydrated DMPC bilayers performed on graphics processing units (GPUs) using current state-of-the-art non-polarizable force fields and a local GPU-enabled molecular dynamics code named FEN ZI. We treat the conditionally convergent electrostatic interaction energy exactly using the particle mesh Ewald (PME) method for the solution of Poisson's equation for the electrostatic potential under periodic boundary conditions. We discuss elements of our implementation of the PME algorithm on GPUs as well as pertinent performance issues. We proceed to show results of simulations of extended lipid bilayer systems using our program, FEN ZI. We performed simulations of DMPC bilayer systems consisting of 17,004, 68,484, and 273,936 atoms in explicit solvent. We present bilayer structural properties (atomic number densities, electron density profiles), deuterium order parameters (S(CD)), electrostatic properties (dipole potential, water dipole moments), and orientational properties of water. Predicted properties demonstrate excellent agreement with experiment and previous all-atom molecular dynamics simulations. We observe no statistically significant differences in calculated structural or electrostatic properties for different system sizes, suggesting that small bilayer simulations (fewer than 100 lipid molecules) provide an equivalent representation of the structural and electrostatic properties associated with significantly larger systems (over 1000 lipid molecules). We stress that the three system size representations will have differences in other properties, such as surface capillary wave dynamics or surface-tension-related effects, that are not probed in the current study; the latter properties are inherently dependent on system size. This contribution suggests the suitability of applying emerging GPU technologies to studies of an important class of biological environments, that of lipid bilayers and their associated integral membrane proteins.

  16. Mobile non-polluting cleaning and processing apparatus and method

    SciTech Connect

    Shaddock, R.E.

    1980-10-14

    A mobile vehicle is described that has self-contained apparatus for classifying, cleaning, and reconstituting granular or pelletized materials, such as catalysts used in chemical plants, without polluting the surroundings. It is adapted to travel from plant to plant over conventional highways and to be quickly placed in operating condition to remove contaminated granular materials from their beds or towers, even when hot, and to process the materials to remove dust and undersized particles, to classify the granules into batches of different sizes for reuse, and to filter out pollutants to protect the surrounding atmosphere. The apparatus includes cyclone and bag filters, classifying screens, and power-driven equipment for creating airstreams that pick up the contaminated granular material from its source, propel it through the cyclone separator, and convey the separated dust particles through a bag filter before releasing the filtered air to the atmosphere, while the separated granular material is fed by gravity to classifying screens, from which the different-sized screenings are discharged into bins as an airstream pulls dust particles from the screens through another bag filter before discharging the air. The vehicle stores some of the apparatus in a compact, low-level position and has a crane for setting up the stored apparatus in an upright operating position. Bins for the cleaned and classified granular materials are arranged to nest together during transportation and are easily positioned by the crane for receiving the granular materials from the classifying screens. A second vehicle may be provided to transport the nested bins.

  17. Developing Online Multimodal Verbal Communication to Enhance the Writing Process in an Audio-Graphic Conferencing Environment

    ERIC Educational Resources Information Center

    Ciekanski, Maud; Chanier, Thierry

    2008-01-01

    Over the last decade, most studies in Computer-Mediated Communication (CMC) have highlighted how online synchronous learning environments implement a new literacy related to multimodal communication. The environment used in our experiment is based on a synchronous audio-graphic conferencing tool. This study concerns false beginners in an English…

  18. Measuring Cognitive Load in Test Items: Static Graphics versus Animated Graphics

    ERIC Educational Resources Information Center

    Dindar, M.; Kabakçi Yurdakul, I.; Inan Dönmez, F.

    2015-01-01

    The majority of multimedia learning studies focus on the use of graphics in learning process but very few of them examine the role of graphics in testing students' knowledge. This study investigates the use of static graphics versus animated graphics in a computer-based English achievement test from a cognitive load theory perspective. Three…

  19. Design Graphics

    NASA Technical Reports Server (NTRS)

    1990-01-01

    A mathematician, David R. Hedgley, Jr., developed a computer program that determines whether a line in a graphic model of a three-dimensional object should or should not be visible. Known as the Hidden Line Computer Code, the program automatically removes superfluous lines and displays an object from a specific viewpoint, just as the human eye would see it. An example of how one company uses the program is the experience of Birdair, which specializes in the production of fabric skylights and stadium covers. The fabric, called SHEERFILL, is a Teflon-coated fiberglass material developed in cooperation with the DuPont Company. SHEERFILL glazed structures are either tension structures or air-supported tension structures; both are formed by patterned fabric sheets supported by a steel or aluminum frame or a cable network. Birdair uses the Hidden Line Computer Code to illustrate a prospective structure to an architect or owner. The program generates a three-dimensional perspective with the hidden lines removed. This program is still used by Birdair and continues to be commercially available to the public.

  20. HLYWD: a program for post-processing data files to generate selected plots or time-lapse graphics

    SciTech Connect

    Munro, J.K. Jr.

    1980-05-01

    The program HLYWD is a post-processor of output files generated by large plasma simulation computations or of data files containing a time sequence of plasma diagnostics. It is intended to be used in a production mode for either type of application; i.e., it allows one to generate, along with the graphics sequence, segments containing a title, credits to those who performed the work, text describing the graphics, and an acknowledgement of the funding agency. The current version is designed to generate 3D plots and allows one to select the type of display (linear or semi-log scales), the normalization of function values for display purposes, the viewing perspective, and an option allowing continuous rotation of surfaces. The program was developed with the intention of being relatively easy to use, reasonably flexible, and requiring a minimum investment of the user's time. It uses the TV80 library of graphics software and ORDERLIB system software on the CDC 7600 at the National Magnetic Fusion Energy Computing Center at Lawrence Livermore Laboratory in California.

  1. Graphic engine resource management

    NASA Astrophysics Data System (ADS)

    Bautin, Mikhail; Dwarakinath, Ashok; Chiueh, Tzi-cker

    2008-01-01

    Modern consumer-grade 3D graphics cards boast computation/memory resources that can easily rival or even exceed those of standard desktop PCs. Although these cards are mainly designed for 3D gaming applications, their enormous computational power has attracted developers to port an increasing number of scientific computation programs to them, including matrix computation, collision detection, cryptography, database sorting, etc. As more and more applications run on 3D graphics cards, there is a need to allocate the computation/memory resources on these cards among the sharing applications fairly and efficiently. In this paper, we describe the design, implementation, and evaluation of a Graphics Processing Unit (GPU) scheduler based on Deficit Round Robin scheduling that successfully allocates to every process an equal share of the GPU time regardless of its demand. This scheduler, called GERM, estimates the execution time of each GPU command group based on dynamically collected statistics and controls each process's GPU command production rate through its CPU scheduling priority. Measurements on the first GERM prototype show that this approach can keep the maximal GPU time consumption difference among concurrent GPU processes consistently below 5% for a variety of application mixes.
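
    A toy deficit-round-robin loop of the kind GERM builds on: each process accrues a fixed quantum per round and may dispatch command groups whose estimated cost fits its accumulated deficit. The queues, costs, and quantum below are made-up numbers for illustration, not GERM's internals.

      from collections import deque

      queues = {                    # per-process queues of estimated GPU costs (ms)
          "A": deque([5, 5, 5, 5]),
          "B": deque([12, 12]),
          "C": deque([2] * 10),
      }
      deficit = {p: 0 for p in queues}
      QUANTUM = 6                   # GPU time granted per process per round

      while any(queues.values()):
          for p, q in queues.items():
              if not q:
                  continue
              deficit[p] += QUANTUM
              while q and q[0] <= deficit[p]:   # dispatch while the budget allows
                  cost = q.popleft()
                  deficit[p] -= cost
                  print(f"run {cost} ms command group from {p}")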

  2. Space Spurred Computer Graphics

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Dicomed Corporation was asked by NASA in the early 1970s to develop processing capabilities for recording images sent from Mars by Viking spacecraft. The company produced a film recorder which increased the intensity levels and the capability for color recording. This development led to a strong technology base resulting in sophisticated computer graphics equipment. Dicomed systems are used to record output from CAD (computer-aided design) and CAM (computer-aided manufacturing) equipment, to update maps, and to produce computer-generated animation.

  3. Mobile Technology and CAD Technology Integration in Teaching Architectural Design Process for Producing Creative Product

    ERIC Educational Resources Information Center

    Bin Hassan, Isham Shah; Ismail, Mohd Arif; Mustafa, Ramlee

    2011-01-01

    The purpose of this research is to examine the effect of integrating mobile and CAD technology on teaching the architectural design process to Malaysian polytechnic architecture students in producing a creative product. The website is set up based on Carroll's minimalist theory, while the mobile and CAD technology integration is based on Brown and…

  4. The Longitudinal Impact of Cognitive Speed of Processing Training on Driving Mobility

    ERIC Educational Resources Information Center

    Edwards, Jerri D.; Myers, Charlsie; Ross, Lesley A.; Roenker, Daniel L.; Cissell, Gayla M.; McLaughlin, Alexis M.; Ball, Karlene K.

    2009-01-01

    Purpose: To examine how cognitive speed of processing training affects driving mobility across a 3-year period among older drivers. Design and Methods: Older drivers with poor Useful Field of View (UFOV) test performance (indicating greater risk for subsequent at-fault crashes and mobility declines) were randomly assigned to either a speed of…

  5. Building Regression Models: The Importance of Graphics.

    ERIC Educational Resources Information Center

    Dunn, Richard

    1989-01-01

    Points out reasons for using graphical methods to teach simple and multiple regression analysis. Argues that a graphically oriented approach has considerable pedagogic advantages in the exposition of simple and multiple regression. Shows that graphical methods may play a central role in the process of building regression models. (Author/LS)

  6. Mathematical Creative Activity and the Graphic Calculator

    ERIC Educational Resources Information Center

    Duda, Janina

    2011-01-01

    Teaching mathematics using graphic calculators has been an issue of didactic discussions for years. Finding ways in which graphic calculators can enrich the development process of creative activity in mathematically gifted students between the ages of 16-17 is the focus of this article. Research was conducted using graphic calculators with…

  7. Graphic Design Is Not a Medium.

    ERIC Educational Resources Information Center

    Gruber, John Edward, Jr.

    2001-01-01

    Discusses graphic design and reviews its development from analog processes to a digital tool with the use of computers. Topics include graphical user interfaces; the need for visual communication concepts; transmedia as opposed to repurposing; and graphic design instruction in higher education. (LRW)

  8. Fast point-based method of a computer-generated hologram for a triangle-patch model by using a graphics processing unit.

    PubMed

    Sugawara, Takuya; Ogihara, Yuki; Sakamoto, Yuji

    2016-01-20

    The point-based method and the fast-Fourier-transform-based method are commonly used to calculate computer-generated holograms. This paper proposes a novel fast calculation method for a patch model that uses the point-based method. The method provides a calculation time that is proportional to the number of patches rather than to the number of point light sources, which makes it suitable for quickly calculating a wide area covered by patches. Experiments using a graphics processing unit indicated that the proposed method is about 8 or more times faster than the ordinary point-based method. PMID:26835949
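
    The baseline point-based method sums a spherical wave from every point source at every hologram pixel, H(x, y) = Σ_j exp(i k r_j)/r_j, then interferes the field with a reference wave. A direct NumPy evaluation of that baseline follows; the wavelength, geometry, and unit reference wave are assumed values, and the paper's patch-based acceleration is not reproduced here.

      import numpy as np

      lam = 633e-9                                 # wavelength (assumed HeNe laser)
      k = 2.0 * np.pi / lam
      ax = np.linspace(-1e-3, 1e-3, 512)           # 2 mm square hologram plane
      X, Y = np.meshgrid(ax, ax)

      points = np.array([[0.0, 0.0, 0.10],         # (x, y, z) point sources, in m
                         [2e-4, -1e-4, 0.12]])
      field = np.zeros_like(X, dtype=complex)
      for px, py, pz in points:                    # independent per pixel: GPU-friendly
          r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
          field += np.exp(1j * k * r) / r          # spherical wave contribution

      hologram = np.abs(field + 1.0) ** 2          # interfere with a unit plane reference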

  10. Computer graphics: Programmers's Hierarchical Interactive Graphics System (PHIGS). Language bindings (Part 3. Ada). Category: Software standard. Subcategory: Graphics. Final report

    SciTech Connect

    Benigni, D.R.

    1990-01-01

    The publication announces the adoption of the American National Standard Programmer's Hierarchical Interactive Graphics System, ANSI X3.144-1988, as a Federal Information Processing Standard (FIPS). The standard specifies the control and data interchange between an application program and its graphics support system. It provides a set of functions and programming language bindings (or a toolbox package) for the definition, display, and modification of two-dimensional (2D) or three-dimensional (3D) graphical data. In addition, the standard supports highly interactive processing and geometric articulation, multi-level or hierarchical graphics data, and rapid modification of both the graphics data and the relationships between them. The purpose of the standard is to promote portability of graphics application programs between different installations.

  11. Gasoline from coal in the state of Illinois: feasibility study. Volume I. Design. [KBW gasification process, ICI low-pressure methanol process and Mobil M-gasoline process

    SciTech Connect

    Not Available

    1980-01-01

    Volume 1 describes the proposed plant: KBW gasification process, ICI low-pressure methanol process and Mobil M-gasoline process, and also with ancillary processes, such as oxygen plant, shift process, RECTISOL purification process, sulfur recovery equipment and pollution control equipment. Numerous engineering diagrams are included. (LTN)

  12. Mobile Phone Service Process Hiccups at Cellular Inc.

    ERIC Educational Resources Information Center

    Edgington, Theresa M.

    2010-01-01

    This teaching case documents an actual case of process execution and failure. The case is useful in MIS introductory courses seeking to demonstrate the interdependencies within a business process, and the concept of cascading failure at the process level. This case demonstrates benefits and potential problems with information technology systems,…

  13. Graphical fiber shaping control interface

    NASA Astrophysics Data System (ADS)

    Basso, Eric T.; Ninomiya, Yasuyuki

    2016-03-01

    In this paper, we present an improved graphical user interface for defining novel single-pass shaping techniques on glass-processing machines, allowing streamlined process development. This approach offers researchers unique modularity and debugging capability during the process-development phase that similar scripting languages do not usually afford.

  14. Weather information network including graphical display

    NASA Technical Reports Server (NTRS)

    Leger, Daniel R. (Inventor); Burdon, David (Inventor); Son, Robert S. (Inventor); Martin, Kevin D. (Inventor); Harrison, John (Inventor); Hughes, Keith R. (Inventor)

    2006-01-01

    An apparatus for providing weather information onboard an aircraft includes a processor unit and a graphical user interface. The processor unit processes weather information after it is received onboard the aircraft from a ground-based source, and the graphical user interface provides a graphical presentation of the weather information to a user onboard the aircraft. Preferably, the graphical user interface includes one or more user-selectable options for graphically displaying at least one of convection information, turbulence information, icing information, weather satellite information, SIGMET information, significant weather prognosis information, and winds aloft information.

  15. Spins Dynamics in a Dissipative Environment: Hierarchal Equations of Motion Approach Using a Graphics Processing Unit (GPU).

    PubMed

    Tsuchimoto, Masashi; Tanimura, Yoshitaka

    2015-08-11

    A system with many energy states coupled to a harmonic oscillator bath is considered. To study quantum non-Markovian system-bath dynamics numerically rigorously and nonperturbatively, we developed a computer code for the reduced hierarchy equations of motion (HEOM) on a graphics processing unit (GPU) that can treat systems with as many as 4096 energy states. The code employs a Padé spectrum decomposition (PSD) for the construction of the HEOM and uses exponential integrators. Dynamics of a quantum spin glass system are studied by calculating the free induction decay signal for 3 × 2 to 3 × 4 triangular lattices with antiferromagnetic interactions. We found that spins relax faster at lower temperature due to transitions through a quantum coherent state, as represented by the off-diagonal elements of the reduced density matrix, whereas in the classical case spins are known to relax more slowly owing to the suppression of thermal activation. The decay of the spins is qualitatively similar regardless of lattice size. The pathway of spin relaxation is analyzed under a sudden temperature-drop condition. The Compute Unified Device Architecture (CUDA) based source code used in the present calculations is provided as Supporting Information. PMID:26574467

  16. Chemical Effects in the Separation Process of a Differential Mobility / Mass Spectrometer System

    PubMed Central

    Schneider, Bradley B.; Covey, Thomas R.; Coy, Stephen L.; Krylov, Evgeny V.; Nazarov, Erkinjon G.

    2013-01-01

    In differential mobility spectrometry (DMS, also referred to as high-field asymmetric waveform ion mobility spectrometry, FAIMS), ions are separated on the basis of the difference in their mobility under high and low electric fields. The addition of polar modifiers to the gas transporting the ions through a DMS enhances the formation of clusters in a field-dependent way and thus amplifies the difference between high- and low-field mobility, resulting in increased peak capacity and separation power. Observations of the increase in mobility field dependence are consistent with a cluster formation model, also referred to as the dynamic cluster-decluster model. The uniqueness of the chemical interactions between an ion and cluster-forming neutrals increases the selectivity of the separation, and the depression of low-field mobility relative to high-field mobility increases the compensation voltage and peak capacity. The effect of polar modifiers on peak capacity across a broad range of chemicals has been investigated, and we discuss the theoretical underpinnings that explain the observed effects. In contrast to the result with polar modifiers, we find that using mixtures of inert gases as the transport gas improves resolution by reducing peak width but has very little effect on peak capacity or selectivity: inert gases do not cluster and thus do not reduce low-field mobility relative to high-field mobility. The changes in the differential mobility α parameter exhibited by different classes of compounds when the transport gas contains polar modifiers or a significant fraction of inert gas can be explained by the physical mechanisms involved in the separation processes. PMID:20121077
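
    To illustrate the separation principle described above, the following hedged sketch finds the compensation field at which an ion's net transverse drift over one period of an asymmetric waveform vanishes, using K(E) = K0(1 + α(E/N)). The α coefficients and the 3:-1 rectangular waveform are invented for illustration and are not taken from the paper:

```python
# Hedged sketch of the DMS separation condition: an ion passes the
# analyzer when its net transverse drift over one waveform period is
# zero. Alpha coefficients and waveform shape are purely illustrative.
import numpy as np

def alpha(x, a2=5e-6, a4=-1e-10):         # x = E/N in Townsend (assumed)
    return a2 * x**2 + a4 * x**4

t = np.linspace(0.0, 1.0, 2000, endpoint=False)
waveform = np.where(t < 0.25, 3.0, -1.0)  # 3:-1 asymmetric, zero mean

def net_drift(cv, ed=80.0):
    e = ed * waveform + cv                # total reduced field E/N
    return np.mean((1.0 + alpha(e)) * e)  # proportional to drift per period

# Bisection for the compensation field where the net drift vanishes.
lo, hi = -10.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if net_drift(lo) * net_drift(mid) <= 0:
        hi = mid
    else:
        lo = mid
print("compensation field (E/N units):", 0.5 * (lo + hi))
```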

  17. Linear-scaling self-consistent field calculations based on divide-and-conquer method using resolution-of-identity approximation on graphical processing units.

    PubMed

    Yoshikawa, Takeshi; Nakai, Hiromi

    2015-01-30

    Graphical processing units (GPUs) are increasingly used in computational chemistry for Hartree-Fock (HF) methods and electron-correlation theories. However, ab initio calculations of large molecules face technical difficulties such as slow memory transfers between the central processing unit and the GPU and the limited size of GPU memory. The divide-and-conquer (DC) method, a linear-scaling scheme that divides a total system into several fragments, can avoid these bottlenecks by separately solving local equations in individual fragments. In addition, the resolution-of-the-identity (RI) approximation enables an effective reduction in computational cost with respect to GPU memory. The present study implemented the DC-RI-HF code on GPUs using math libraries, which guarantee compatibility with future developments of the GPU architecture. Numerical applications confirmed that the present code using GPUs significantly accelerated the HF calculations while maintaining accuracy.

  18. A graphical language for reliability model generation

    NASA Technical Reports Server (NTRS)

    Howell, Sandra V.; Bavuso, Salvatore J.; Haley, Pamela J.

    1990-01-01

    A graphical interface capability of the hybrid automated reliability predictor (HARP) is described. The graphics-oriented (GO) module provides the user with a graphical language for modeling system failure modes through the selection of various fault tree gates, including sequence dependency gates, or by a Markov chain. With this graphical input language, a fault tree becomes a convenient notation for describing a system. In accounting for any sequence dependencies, HARP converts the fault-tree notation to a complex stochastic process that is reduced to a Markov chain which it can then solve for system reliability. The graphics capability is available for use on an IBM-compatible PC, a Sun, and a VAX workstation. The GO module is written in the C programming language and uses the Graphical Kernel System (GKS) standard for graphics implementation. The PC, VAX, and Sun versions of the HARP GO module are currently in beta-testing.
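
    The final step HARP performs, solving the Markov chain for system reliability, can be sketched in a few lines. The three-state chain below is invented for illustration and is not a model from the HARP distribution:

```python
# Hedged sketch of solving a small continuous-time Markov chain for
# reliability: the probability of being in a non-failed state at time t.
# The 3-state chain and rates are invented for illustration.
import numpy as np
from scipy.linalg import expm

lam_a, lam_b = 1e-3, 5e-4      # failure rates per hour (assumed)
# States: 0 = both units up, 1 = one unit up, 2 = system failed.
Q = np.array([
    [-(lam_a + lam_b), lam_a + lam_b, 0.0],
    [0.0,             -lam_b,         lam_b],
    [0.0,              0.0,           0.0],
])

p0 = np.array([1.0, 0.0, 0.0])  # start with both units up
t = 1000.0                      # mission time, hours
p_t = p0 @ expm(Q * t)          # state distribution at time t
print("reliability at t:", p_t[0] + p_t[1])
```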

  19. Tungsten Mobility During Alteration Processes: An Experimental Approach

    NASA Astrophysics Data System (ADS)

    Yin, N. H.; Fabre, S.; Quitté, G.

    2016-08-01

    Most chondrites show evidence of relatively low-temperature alteration. Mass-dependent fractionation of non-traditional stable isotopes such as tungsten (W) is well suited to tracing metal-silicate differentiation and alteration processes.

  20. A Web Graphics Primer.

    ERIC Educational Resources Information Center

    Buchanan, Larry

    1999-01-01

    Discusses the basic technical concepts of using graphics in World Wide Web pages, including: color depth and dithering, dots-per-inch, image size, file types, Graphics Interchange Formats (GIFs), Joint Photographic Experts Group (JPEG), format, and software recommendations. (AEF)

  1. GRASP/Ada: Graphical Representations of Algorithms, Structures, and Processes for Ada. The development of a program analysis environment for Ada: Reverse engineering tools for Ada, task 2, phase 3

    NASA Technical Reports Server (NTRS)

    Cross, James H., II

    1991-01-01

    The main objective is the investigation, formulation, and generation of graphical representations of algorithms, structures, and processes for Ada (GRASP/Ada). The presented task, in which various graphical representations that can be extracted or generated from source code are described and categorized, is focused on reverse engineering. The following subject areas are covered: the system model; control structure diagram generator; object oriented design diagram generator; user interface; and the GRASP library.

  2. Developing a Mobile Application "Educational Process Remote Management System" on the Android Operating System

    ERIC Educational Resources Information Center

    Abildinova, Gulmira M.; Alzhanov, Aitugan K.; Ospanova, Nazira N.; Taybaldieva, Zhymatay; Baigojanova, Dametken S.; Pashovkin, Nikita O.

    2016-01-01

    Nowadays, when there is a need to introduce various innovations into the educational process, most efforts are aimed at simplifying the learning process. To that end, electronic textbooks, testing systems, and other software are being developed. Most of them are intended to run on personal computers with limited mobility. Smart education is…

  3. Twitter Micro-Blogging Based Mobile Learning Approach to Enhance the Agriculture Education Process

    ERIC Educational Resources Information Center

    Dissanayeke, Uvasara; Hewagamage, K. P.; Ramberg, Robert; Wikramanayake, G. N.

    2013-01-01

    The study intends to see how to introduce mobile learning within the domain of agriculture so as to enhance the agriculture education process. We propose to use the Activity theory together with other methodologies such as participatory methods to design, implement, and evaluate mLearning activities. The study explores the process of introducing…

  4. Effects of Mobile Instant Messaging on Collaborative Learning Processes and Outcomes: The Case of South Korea

    ERIC Educational Resources Information Center

    Kim, Hyewon; Lee, MiYoung; Kim, Minjeong

    2014-01-01

    The purpose of this paper was to investigate the effects of mobile instant messaging on collaborative learning processes and outcomes. The collaborative processes were measured in terms of different types of interactions. We measured the outcomes of the collaborations through both the students' taskwork and their teamwork. The collaborative…

  5. Graphics and Listening Comprehension.

    ERIC Educational Resources Information Center

    Ruhe, Valerie

    1996-01-01

    Examines the effectiveness of graphics as lecture comprehension supports for low-proficiency English-as-a-Second-Language (ESL) listeners. The study compared the performance of Asian students in Canada listening to an audiotape while viewing an organizational graphic with that of a control group. Findings indicate that the graphics enhanced…

  6. Emergency healthcare process automation using mobile computing and cloud services.

    PubMed

    Poulymenopoulou, M; Malamateniou, F; Vassilacopoulos, G

    2012-10-01

    Emergency care is basically concerned with the provision of pre-hospital and in-hospital medical and/or paramedical services, and it typically involves a wide variety of interdependent and distributed activities that can be interconnected to form emergency care processes within and between Emergency Medical Service (EMS) agencies and hospitals. Hence, in developing an information system for emergency care processes, it is essential to support individual process activities and to satisfy collaboration and coordination needs by providing ready access to patient and operational information regardless of location and time. Filling this information gap by enabling the provision of the right information, to the right people, at the right time raises new challenges, including the specification of a common information format, interoperability among heterogeneous institutional information systems, and the development of new, ubiquitous trans-institutional systems. This paper is concerned with the development of integrated computer support for emergency care processes by evolving and cross-linking institutional healthcare systems. To this end, an integrated EMS cloud-based architecture has been developed that allows authorized users to access emergency case information in standardized document form, as proposed by the Integrating the Healthcare Enterprise (IHE) profile, uses the Organization for the Advancement of Structured Information Standards (OASIS) Emergency Data Exchange Language (EDXL) Hospital Availability Exchange (HAVE) standard for exchanging operational data with hospitals, and incorporates an intelligent module that supports triaging and selecting the most appropriate ambulances and hospitals for each case. PMID:22205383

  7. A service protocol for post-processing of medical images on the mobile device

    NASA Astrophysics Data System (ADS)

    He, Longjun; Ming, Xing; Xu, Lang; Liu, Qian

    2014-03-01

    With computing capability and display size growing, mobile devices are now used to help clinicians view patient information and medical images anywhere and anytime. However, it is difficult and time-consuming to transfer medical images, with their large data sizes, from a picture archiving and communication system to a mobile client, since wireless networks are unstable and bandwidth-limited. Moreover, the limited computing capability, memory, and battery endurance of mobile devices make it hard to provide a satisfactory quality of experience for complex post-processing of medical images, such as real-time interactive three-dimensional visualization. In this work, remote rendering technology is employed to implement the post-processing of medical images instead of local rendering, and a service protocol is developed to standardize the communication between the render server and the mobile client. To allow mobile devices with different platforms to access post-processing of medical images, the protocol is described in the Extensible Markup Language and contains four main parts: user authentication, medical image query/retrieval, 2D post-processing (e.g., window leveling, pixel value retrieval) and 3D post-processing (e.g., maximum intensity projection, multi-planar reconstruction, curved planar reformation and direct volume rendering). An instance was then implemented to verify the protocol; it allows mobile devices to access post-processing services on the render server via a client application or a web page.
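
    Since the abstract specifies only the four parts of the XML protocol and not its element names, the following sketch shows what a 2D post-processing request might look like; every element and attribute name is invented for illustration:

```python
# Hedged sketch of a 2D post-processing request in an XML-based service
# protocol like the one described above. All element and attribute
# names are hypothetical; the paper does not publish its schema here.
import xml.etree.ElementTree as ET

req = ET.Element("PostProcessingRequest", version="1.0")
auth = ET.SubElement(req, "Authentication")
ET.SubElement(auth, "Token").text = "session-token-here"   # placeholder

task = ET.SubElement(req, "Task", type="2D")
ET.SubElement(task, "Operation").text = "WindowLevel"
ET.SubElement(task, "SeriesUID").text = "1.2.840.xxxxx"    # placeholder UID
ET.SubElement(task, "WindowCenter").text = "40"
ET.SubElement(task, "WindowWidth").text = "400"

print(ET.tostring(req, encoding="unicode"))
```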

  8. Adaptive Sampling for Learning Gaussian Processes Using Mobile Sensor Networks

    PubMed Central

    Xu, Yunfei; Choi, Jongeun

    2011-01-01

    This paper presents a novel class of self-organizing sensing agents that adaptively learn an anisotropic, spatio-temporal Gaussian process using noisy measurements and move in order to improve the quality of the estimated covariance function. This approach is based on a class of anisotropic covariance functions of Gaussian processes introduced to model a broad range of spatio-temporal physical phenomena. The covariance function is assumed to be unknown a priori. Hence, it is estimated by the maximum a posteriori probability (MAP) estimator. The prediction of the field of interest is then obtained based on the MAP estimate of the covariance function. An optimal sampling strategy is proposed to minimize the information-theoretic cost function of the Fisher Information Matrix. Simulation results demonstrate the effectiveness and the adaptability of the proposed scheme. PMID:22163785
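
    The prediction step described above can be sketched with a few lines of NumPy: Gaussian process regression under an anisotropic squared-exponential covariance, whose length scales would in the paper come from the MAP estimate. Here all hyperparameters are assumed rather than estimated:

```python
# Hedged sketch of GP prediction with an anisotropic covariance. The
# length scales, noise variance, and data are assumed for illustration;
# in the paper they come from a MAP estimate over noisy measurements.
import numpy as np

def k_aniso(a, b, ell=np.array([2.0, 0.5]), sf2=1.0):
    d = (a[:, None, :] - b[None, :, :]) / ell       # scaled differences
    return sf2 * np.exp(-0.5 * np.sum(d**2, axis=-1))

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(30, 2))                # sensor locations
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(30) # noisy field values

sn2 = 0.01                                          # noise variance (assumed)
K = k_aniso(X, X) + sn2 * np.eye(len(X))
Xstar = rng.uniform(0, 10, size=(5, 2))             # prediction points
Ks = k_aniso(Xstar, X)

mean = Ks @ np.linalg.solve(K, y)                   # posterior mean
var = k_aniso(Xstar, Xstar).diagonal() - np.sum(
    Ks * np.linalg.solve(K, Ks.T).T, axis=1)        # posterior variance
print(mean, var)
```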

  9. On the effective implementation of a boundary element code on graphics processing units using an out-of-core LU algorithm

    SciTech Connect

    D'Azevedo, Ed F; Nintcheu Fata, Sylvain

    2012-01-01

    A collocation boundary element code for solving the three-dimensional Laplace equation, publicly available from http://www.intetec.org, has been adapted to run on an Nvidia Tesla general-purpose graphics processing unit (GPU). Global matrix assembly and LU factorization of the resulting dense matrix were performed on the GPU. Out-of-core techniques were used to solve problems larger than available GPU memory. The code achieved over eight times speedup in matrix assembly and about 56 Gflops/sec in the LU factorization using only 512 Mbytes of GPU memory. Details of the GPU implementation and comparisons with the standard sequential algorithm are included to illustrate the performance of the GPU code.
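
    The structural idea behind an out-of-core LU is to factor a panel and then update the trailing matrix one tile at a time, so that only a tile must be resident in GPU memory. The NumPy sketch below shows that blocking pattern; it omits pivoting and all GPU transfers, so it illustrates the structure rather than the paper's code:

```python
# Hedged sketch of a right-looking blocked LU (no pivoting): the tiled
# trailing update is the part an out-of-core code streams through
# limited GPU memory. Pure NumPy, for illustration only.
import numpy as np

def blocked_lu(A, nb=64):
    n = A.shape[0]
    for k in range(0, n, nb):
        e = min(k + nb, n)
        # Factor the diagonal block in place (unblocked LU).
        for j in range(k, e):
            A[j+1:e, j] /= A[j, j]
            A[j+1:e, j+1:e] -= np.outer(A[j+1:e, j], A[j, j+1:e])
        # Panel solves against the freshly factored block.
        L11 = np.tril(A[k:e, k:e], -1) + np.eye(e - k)
        A[k:e, e:] = np.linalg.solve(L11, A[k:e, e:])
        U11 = np.triu(A[k:e, k:e])
        A[e:, k:e] = np.linalg.solve(U11.T, A[e:, k:e].T).T
        # Trailing update, one tile at a time (the out-of-core part).
        for i in range(e, n, nb):
            ie = min(i + nb, n)
            for j in range(e, n, nb):
                je = min(j + nb, n)
                A[i:ie, j:je] -= A[i:ie, k:e] @ A[k:e, j:je]
    return A  # L (unit lower) and U packed in place

# Diagonally dominant test matrix so the no-pivoting variant is stable.
A = np.random.default_rng(1).standard_normal((256, 256)) + 256 * np.eye(256)
LU = blocked_lu(A.copy())
```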

  10. Relativistic hydrodynamics on graphic cards

    NASA Astrophysics Data System (ADS)

    Gerhard, Jochen; Lindenstruth, Volker; Bleicher, Marcus

    2013-02-01

    We show how to accelerate relativistic hydrodynamics simulations using graphics cards (graphics processing units, GPUs). These improvements are of particular relevance to, e.g., the field of high-energy nucleus-nucleus collisions at RHIC and LHC, where (ideal and dissipative) relativistic hydrodynamics is used to calculate the evolution of hot and dense QCD matter. The results reported here are based on the Sharp And Smooth Transport Algorithm (SHASTA), which is employed in many hydrodynamical models and hybrid simulation packages, e.g., the Ultrarelativistic Quantum Molecular Dynamics model (UrQMD). We have redesigned the SHASTA using the OpenCL computing framework to work on accelerators such as graphics processing units (GPUs) as well as on multi-core processors. With the redesign of the algorithm, the hydrodynamic calculations have been accelerated by a factor of 160, allowing for event-by-event calculations and better statistics in hybrid calculations.

  11. NMR data visualization, processing, and analysis on mobile devices.

    PubMed

    Cobas, Carlos; Iglesias, Isaac; Seoane, Felipe

    2015-08-01

    Touch-screen computers are emerging as a popular platform for many applications, including those in chemistry and the analytical sciences. In this work, we present our implementation of a new NMR 'app' designed for hand-held and portable touch-controlled devices, such as smartphones and tablets. It features a flexible architecture formed by a powerful NMR processing and analysis kernel and an intuitive user interface that makes full use of the smart device's haptic capabilities. Routine 1D and 2D NMR spectra acquired on most NMR instruments can be processed in a fully unattended way. More advanced experiments, such as non-uniformly sampled NMR spectra, are also supported through a very efficient parallelized Modified Iterative Soft Thresholding algorithm. Specific technical development features, as well as the overall feasibility of using NMR software apps, are also discussed. All aspects considered, the functionality of the app allows it to work as a stand-alone tool or as a 'companion' to more advanced desktop applications such as Mnova NMR. PMID:25924947

  13. Real-time simulation of a spiking neural network model of the basal ganglia circuitry using general purpose computing on graphics processing units.

    PubMed

    Igarashi, Jun; Shouno, Osamu; Fukai, Tomoki; Tsujino, Hiroshi

    2011-11-01

    Real-time simulation of a biologically realistic spiking neural network is necessary for evaluation of its capacity to interact with real environments. However, the real-time simulation of such a neural network is difficult due to its high computational costs that arise from two factors: (1) vast network size and (2) the complicated dynamics of biologically realistic neurons. In order to address these problems, mainly the latter, we chose to use general purpose computing on graphics processing units (GPGPUs) for simulation of such a neural network, taking advantage of the powerful computational capability of a graphics processing unit (GPU). As a target for real-time simulation, we used a model of the basal ganglia that has been developed according to electrophysiological and anatomical knowledge. The model consists of heterogeneous populations of 370 spiking model neurons, including computationally heavy conductance-based models, connected by 11,002 synapses. Simulation of the model has not yet been performed in real-time using a general computing server. By parallelization of the model on the NVIDIA Geforce GTX 280 GPU in data-parallel and task-parallel fashion, faster-than-real-time simulation was robustly realized with only one-third of the GPU's total computational resources. Furthermore, we used the GPU's full computational resources to perform faster-than-real-time simulation of three instances of the basal ganglia model; these instances consisted of 1100 neurons and 33,006 synapses and were synchronized at each calculation step. Finally, we developed software for simultaneous visualization of faster-than-real-time simulation output. These results suggest the potential power of GPGPU techniques in real-time simulation of realistic neural networks.
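
    The data-parallel style that makes such models map well to GPUs can be illustrated with a much simpler neuron model: every neuron is advanced by identical arithmetic, here vectorized with NumPy over a population of leaky integrate-and-fire units (the paper's conductance-based models are far more elaborate):

```python
# Hedged sketch of a data-parallel neuron update: the same arithmetic
# is applied to every neuron's state each step, which is exactly the
# pattern a GPU executes efficiently. Model and constants are toy
# leaky integrate-and-fire values, not the paper's basal ganglia model.
import numpy as np

n, dt = 1100, 0.1e-3                 # population size, timestep (s)
tau, v_th, v_reset = 20e-3, -50e-3, -65e-3
v = np.full(n, v_reset)              # membrane potentials (V)
rng = np.random.default_rng(2)

for step in range(1000):
    drive = rng.uniform(0.0, 24e-3, n)         # toy synaptic drive (V)
    v += dt / tau * (-(v - v_reset) + drive)   # leaky integration
    fired = v >= v_th                          # same test on every lane
    v[fired] = v_reset                         # reset spiking neurons
```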

  14. Scalable Indoor Localization via Mobile Crowdsourcing and Gaussian Process

    PubMed Central

    Chang, Qiang; Li, Qun; Shi, Zesen; Chen, Wei; Wang, Weiping

    2016-01-01

    Indoor localization using Received Signal Strength Indication (RSSI) fingerprinting has been extensively studied for decades. The positioning accuracy is highly dependent on the density of the signal database. In areas without calibration data, however, this algorithm breaks down. Building and updating a dense signal database is labor intensive, expensive, and even impossible in some areas. Researchers are continually searching for better algorithms to create and update dense databases more efficiently. In this paper, we propose a scalable indoor positioning algorithm that works both in surveyed and unsurveyed areas. We first propose Minimum Inverse Distance (MID) algorithm to build a virtual database with uniformly distributed virtual Reference Points (RP). The area covered by the virtual RPs can be larger than the surveyed area. A Local Gaussian Process (LGP) is then applied to estimate the virtual RPs’ RSSI values based on the crowdsourced training data. Finally, we improve the Bayesian algorithm to estimate the user’s location using the virtual database. All the parameters are optimized by simulations, and the new algorithm is tested on real-case scenarios. The results show that the new algorithm improves the accuracy by 25.5% in the surveyed area, with an average positioning error below 2.2 m for 80% of the cases. Moreover, the proposed algorithm can localize the users in the neighboring unsurveyed area. PMID:26999139
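
    The final positioning step described above can be sketched as a Gaussian-likelihood Bayesian estimate over reference points: each RP (real or virtual) is weighted by how well its expected RSSI vector explains the observed scan. All database values below are invented:

```python
# Hedged sketch of Bayesian RSSI fingerprint positioning over a small
# reference-point database. Positions, RSSI values, and the noise
# model are invented; the paper builds its virtual RPs with MID + LGP.
import numpy as np

rp_xy = np.array([[0, 0], [5, 0], [0, 5], [5, 5]], dtype=float)
rp_rssi = np.array([            # expected dBm for 3 access points
    [-40, -70, -65],
    [-70, -40, -65],
    [-65, -70, -40],
    [-75, -60, -50],
], dtype=float)

observed = np.array([-55.0, -62.0, -58.0])   # one live scan
sigma = 4.0                     # RSSI noise std-dev in dB (assumed)

loglik = -0.5 * np.sum((rp_rssi - observed) ** 2, axis=1) / sigma**2
w = np.exp(loglik - loglik.max())
w /= w.sum()                    # posterior over RPs (uniform prior)
estimate = w @ rp_xy            # posterior-mean position
print("estimated position:", estimate)
```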

  15. Using analytic network process for evaluating mobile text entry methods.

    PubMed

    Ocampo, Lanndon A; Seva, Rosemary R

    2016-01-01

    This paper presents a preference evaluation methodology for text entry methods on a touch-keyboard smartphone using the analytic network process (ANP). Evaluations of text entry methods in the literature mainly consider speed and accuracy; this study presents an alternative means of selecting a text entry method that considers user preference. A case study was carried out with a group of experts who were asked to develop a selection decision model for five text entry methods. The decision problem is flexible enough to reflect the interdependencies of decision elements that are necessary to describe real-life conditions. Results showed that the QWERTY method is preferred over the other text entry methods, and that the arrangement of keys is the most important criterion in characterizing a sound method. Sensitivity analysis, using simulation of normally distributed random numbers under fairly large perturbations, showed these results to be robust. The main contribution of this paper is the introduction of a multi-criteria decision approach to the preference evaluation of text entry methods.

  17. High Electron Mobility Transistor Structures on Sapphire Substrates Using CMOS Compatible Processing Techniques

    NASA Technical Reports Server (NTRS)

    Mueller, Carl; Alterovitz, Samuel; Croke, Edward; Ponchak, George

    2004-01-01

    System-on-a-chip (SOC) processes are under intense development for high-speed, high-frequency transceiver circuitry. As frequencies, data rates, and circuit complexity increase, the need for substrates that enable high-speed analog operation, low-power digital circuitry, and excellent isolation between devices becomes increasingly critical. SiGe/Si modulation-doped field effect transistors (MODFETs) with high carrier mobilities are currently under development to meet the active RF device needs. However, as the substrate normally used is Si, its low-to-modest resistivity causes large losses in the passive elements required for a complete high-frequency circuit. These losses are projected to become increasingly troublesome as device frequencies progress to the Ku-band (12 - 18 GHz) and beyond. Sapphire is an excellent substrate for high-frequency SOC designs because it supports excellent active and passive RF device performance as well as low-power digital operation. We are developing high-electron-mobility SiGe/Si transistor structures on r-plane sapphire, using either in-situ-grown n-MODFET structures or ion-implanted high electron mobility transistor (HEMT) structures. Advantages of the MODFET structures include high electron mobilities at all temperatures (relative to ion-implanted HEMT structures), with mobility continuously improving down to cryogenic temperatures. We have measured electron mobilities over 1,200 and 13,000 sq cm/V-sec at room temperature and 0.25 K, respectively, in MODFET structures. The electron carrier densities were 1.6 and 1.33 x 10(exp 12)/sq cm at room and liquid-helium temperature, respectively, denoting excellent carrier confinement. Using the ion-implantation technique, we have observed electron mobilities as high as 900 sq cm/V-sec at room temperature at a carrier density of 1.3 x 10(exp 12)/sq cm. The temperature dependence of mobility for both the MODFET and HEMT structures provides insights into the mechanisms that allow for enhanced…

  18. User's Guide for Subroutine PLOT3D. Physical Processes in Terrestrial and Aquatic Ecosystems, Computer Programs and Graphics Capabilities.

    ERIC Educational Resources Information Center

    Gales, Larry

    This module is part of a series designed to be used by life science students for instruction in the application of physical theory to ecosystem operation. Most modules contain computer programs which are built around a particular application of a physical process. PLOT3D is a subroutine package which generates a variety of three dimensional hidden…

  19. Programmer's Guide for Subroutine PRNT3D. Physical Processes in Terrestrial and Aquatic Ecosystems, Computer Programs and Graphics Capabilities.

    ERIC Educational Resources Information Center

    Gales, Larry

    These materials were designed to be used by life science students for instruction in the application of physical theory to ecosystem operation. Most modules contain computer programs which are built around a particular application of a physical process. PRNT3D is a subroutine package which generates a variety of printed plot displays. The displays…

  20. User's Guide for Subroutine PRNT3D. Physical Processes in Terrestrial and Aquatic Ecosystems, Computer Programs and Graphics Capabilities.

    ERIC Educational Resources Information Center

    Gales, Larry

    These materials were designed to be used by life science students for instruction in the application of physical theory to ecosystem operation. Most modules contain computer programs which are built around a particular application of a physical process. PRNT3D is a subroutine package which generates a variety of printer plot displays. The displays…

  1. Programmer's Guide for Subroutine PLOT3D. Physical Processes in Terrestrial and Aquatic Ecosystems, Computer Programs and Graphics Capabilities.

    ERIC Educational Resources Information Center

    Gales, Larry

    This module is part of a series designed to be used by life science students for instruction in the application of physical theory to ecosystem operation. Most modules contain computer programs which are built around a particular application of a physical process. PLOT3D is a subroutine package which generates a variety of three-dimensional hidden…

  2. A Multi-Objective Compounded Local Mobile Cloud Architecture Using Priority Queues to Process Multiple Jobs.

    PubMed

    Wei, Xiaohui; Sun, Bingyi; Cui, Jiaxu; Xu, Gaochao

    2016-01-01

    As a result of the greatly increased use of mobile devices, the disadvantages of portable devices have gradually begun to emerge. To address these problems, mobile cloud computing assisted by cloud data centers has been proposed; however, cloud data centers are usually very far from the mobile requesters. In this paper, we propose an improved multi-objective local mobile cloud model: Compounded Local Mobile Cloud Architecture with Dynamic Priority Queues (LMCpri). This architecture briefly stores jobs that arrive simultaneously at the cloudlet in different priority positions, according to the result of auction processing, and then executes the partitioned tasks on capable helpers. In the Scheduling Module, NSGA-II is employed as the scheduling algorithm to shorten processing time and decrease requester cost relative to PSO and sequential scheduling. Simulation results show that an iteration count of 30 is the best choice for the system and that, compared with LMCque, LMCpri effectively accommodates a requester who wants his job executed earlier while shortening execution time. Finally, we compare LMCpri with a cloud-assisted architecture; the results reveal that LMCpri offers better performance. PMID:27419854
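
    The dynamic priority queue at the core of LMCpri can be sketched with Python's standard heapq module: jobs are queued by a priority derived from the auction result, so a requester who won the auction is dispatched first. All job names and priorities are illustrative:

```python
# Hedged sketch of auction-driven priority queuing at a cloudlet.
# Job names and priority values are invented for illustration.
import heapq
import itertools

counter = itertools.count()          # tie-breaker for equal priorities
queue = []

def submit(job_id, auction_priority):
    # heapq is a min-heap, so negate: higher auction priority pops first.
    heapq.heappush(queue, (-auction_priority, next(counter), job_id))

submit("render-task", 2)
submit("urgent-upload", 9)           # won the auction: jumps the queue
submit("batch-sync", 5)

while queue:
    _, _, job = heapq.heappop(queue)
    print("dispatching", job)        # urgent-upload, batch-sync, render-task
```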

  5. Methods of Adapting Digital Content for the Learning Process via Mobile Devices

    ERIC Educational Resources Information Center

    Lopez, J. L. Gimenez; Royo, T. Magal; Laborda, Jesus Garcia; Calvo, F. Garde

    2009-01-01

    This article analyses different methods of adapting digital content for its delivery via mobile devices taking into account two aspects which are a fundamental part of the learning process; on the one hand, functionality of the contents, and on the other, the actual controlled navigation requirements that the learner needs in order to acquire high…

  6. Graphic Design in Educational Television.

    ERIC Educational Resources Information Center

    Clarke, Beverley

    To help educational television (ETV) practitioners achieve maximum clarity, economy and purposiveness, the range of techniques of television graphics is explained. Closed-circuit and broadcast ETV are compared. The design process is discussed in terms of aspect ratio, line structure, cut off, screen size, tone scales, studio apparatus, and…

  7. The Systems Biology Graphical Notation.

    PubMed

    Le Novère, Nicolas; Hucka, Michael; Mi, Huaiyu; Moodie, Stuart; Schreiber, Falk; Sorokin, Anatoly; Demir, Emek; Wegner, Katja; Aladjem, Mirit I; Wimalaratne, Sarala M; Bergman, Frank T; Gauges, Ralph; Ghazal, Peter; Kawaji, Hideya; Li, Lu; Matsuoka, Yukiko; Villéger, Alice; Boyd, Sarah E; Calzone, Laurence; Courtot, Melanie; Dogrusoz, Ugur; Freeman, Tom C; Funahashi, Akira; Ghosh, Samik; Jouraku, Akiya; Kim, Sohyoung; Kolpakov, Fedor; Luna, Augustin; Sahle, Sven; Schmidt, Esther; Watterson, Steven; Wu, Guanming; Goryanin, Igor; Kell, Douglas B; Sander, Chris; Sauro, Herbert; Snoep, Jacky L; Kohn, Kurt; Kitano, Hiroaki

    2009-08-01

    Circuit diagrams and Unified Modeling Language diagrams are just two examples of standard visual languages that help accelerate work by promoting regularity, removing ambiguity and enabling software tool support for communication of complex information. Ironically, despite having one of the highest ratios of graphical to textual information, biology still lacks standard graphical notations. The recent deluge of biological knowledge makes addressing this deficit a pressing concern. Toward this goal, we present the Systems Biology Graphical Notation (SBGN), a visual language developed by a community of biochemists, modelers and computer scientists. SBGN consists of three complementary languages: process diagram, entity relationship diagram and activity flow diagram. Together they enable scientists to represent networks of biochemical interactions in a standard, unambiguous way. We believe that SBGN will foster efficient and accurate representation, visualization, storage, exchange and reuse of information on all kinds of biological knowledge, from gene regulation, to metabolism, to cellular signaling.

  8. Using compute unified device architecture-enabled graphic processing unit to accelerate fast Fourier transform-based regression Kriging interpolation on a MODIS land surface temperature image

    NASA Astrophysics Data System (ADS)

    Hu, Hongda; Shu, Hong; Hu, Zhiyong; Xu, Jianhui

    2016-04-01

    Kriging interpolation provides the best linear unbiased estimation for unobserved locations, but its heavy computation limits the manageable problem size in practice. To address this issue, an efficient interpolation procedure incorporating the fast Fourier transform (FFT) was developed. Extending this efficient approach, we propose an FFT-based parallel algorithm to accelerate regression Kriging interpolation on an NVIDIA® compute unified device architecture (CUDA)-enabled graphic processing unit (GPU). A high-performance cuFFT library in the CUDA toolkit was introduced to execute computation-intensive FFTs on the GPU, and three time-consuming processes were redesigned as kernel functions and executed on the CUDA cores. A MODIS land surface temperature 8-day image tile at a resolution of 1 km was resampled to create experimental datasets at eight different output resolutions. These datasets were used as the interpolation grids with different sizes in a comparative experiment. Experimental results show that speedup of the FFT-based regression Kriging interpolation accelerated by GPU can exceed 1000 when processing datasets with large grid sizes, as compared to the traditional Kriging interpolation running on the CPU. These results demonstrate that the combination of FFT methods and GPU-based parallel computing techniques greatly improves the computational performance without loss of precision.
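
    The reason the FFT helps is that, on a regular grid with a stationary covariance, the heavy step of the interpolation reduces to a convolution of residuals with the covariance kernel, which the FFT evaluates in O(n log n) rather than O(n^2). The sketch below shows only that core idea, on the CPU with NumPy; the kernel, grid, and data are invented:

```python
# Hedged sketch of the FFT trick underlying fast Kriging on a grid:
# a stationary-kernel convolution done in the frequency domain.
# Kernel shape and range parameter are illustrative assumptions.
import numpy as np

n = 512
grid = np.zeros((n, n))
rng = np.random.default_rng(3)
idx = rng.integers(0, n, size=(200, 2))
grid[idx[:, 0], idx[:, 1]] = rng.standard_normal(200)   # residuals

x = np.arange(n)
d = np.minimum(x, n - x)                        # circular distances
R = np.sqrt(d[:, None] ** 2 + d[None, :] ** 2)
kernel = np.exp(-R / 30.0)                      # exponential covariance

# One O(n log n) pass replaces an O(n^2) direct summation.
smoothed = np.real(np.fft.ifft2(np.fft.fft2(grid) * np.fft.fft2(kernel)))
```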

  10. Getting Graphic at the School Library.

    ERIC Educational Resources Information Center

    Kan, Kat

    2003-01-01

    Provides information for school libraries interested in acquiring graphic novels. Discusses theft prevention; processing and cataloging; maintaining the collection; what to choose, with two Web sites for more information on graphic novels for libraries; collection development decisions; and Japanese comics called Manga. Includes an annotated list…

  11. Improvement of MS (multiple sclerosis) CAD (computer aided diagnosis) performance using C/C++ and computing engine in the graphical processing unit (GPU)

    NASA Astrophysics Data System (ADS)

    Suh, Joohyung; Ma, Kevin; Le, Anh

    2011-03-01

    Multiple Sclerosis (MS) is a disease caused by damage to the myelin around axons of the brain and spinal cord. Currently, MR imaging is used for diagnosis, but it is highly variable and time-consuming because lesion detection and estimation of lesion volume are performed manually. For this reason, we developed a CAD (Computer Aided Diagnosis) system to assist segmentation of MS lesions and thereby facilitate the physician's diagnosis. The MS CAD system utilizes the k-nearest neighbor (K-NN) algorithm to detect and segment lesion volume voxel by voxel. The prototype was developed in the MATLAB environment and currently consumes a large amount of time to process data. In this paper we present a second version of the MS CAD system, converted to C/C++ to take advantage of the parallel computation provided by the GPU (Graphical Processing Unit). With the realization in C/C++ and the utilization of the GPU, we expect running time to drop drastically. The paper investigates the conversion from MATLAB to C/C++ and the utilization of a high-end GPU for parallel computing to improve the algorithm performance of the MS CAD system.
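
    The voxel-wise k-NN classification at the heart of the CAD system can be sketched directly: each voxel's feature vector is labelled by a majority vote of its k nearest labelled training voxels. The features and labels below are synthetic:

```python
# Hedged sketch of k-NN voxel classification: majority vote among the
# k nearest labelled training voxels in feature space. All data are
# synthetic stand-ins for multi-sequence MR intensities.
import numpy as np

rng = np.random.default_rng(4)
train_feat = rng.standard_normal((500, 3))       # labelled voxel features
train_lab = (train_feat[:, 0] + train_feat[:, 1] > 0).astype(int)

def knn_label(voxel, k=7):
    d2 = np.sum((train_feat - voxel) ** 2, axis=1)
    nearest = np.argsort(d2)[:k]                 # k closest training voxels
    return np.bincount(train_lab[nearest]).argmax()

query = rng.standard_normal((10, 3))             # unlabelled voxels
labels = np.array([knn_label(v) for v in query])
print(labels)                                    # 1 = lesion (toy labels)
```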

  12. Graphics processing unit aided highly stable real-time spectral-domain optical coherence tomography at 1375 nm based on dual-coupled-line subtraction

    NASA Astrophysics Data System (ADS)

    Kim, Ji-hyun; Han, Jae-Ho; Jeong, Jichai

    2013-04-01

    We have proposed and demonstrated a highly stable spectral-domain optical coherence tomography (SD-OCT) system based on dual-coupled-line subtraction. The proposed system achieves an ultrahigh axial resolution of 5 μm by combining four spectrally shifted superluminescent diodes at 1375 nm. The dual-coupled-line subtraction method makes the system insensitive to the fluctuations of optical intensity that can arise in various clinical and experimental conditions. Imaging stability was verified by perturbing the intensity through bending of an optical fiber; among the systems compared, only the proposed one suppressed the resulting noise. The proposed method also requires less computation than conventional mean- and median-line subtraction. The real-time SD-OCT scheme was implemented with graphics-processing-unit-aided signal processing. This is the first reported method for reducing A-line-wise fixed-pattern noise in a single-shot image without estimating the DC component.
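
    The principle of reference-line subtraction for A-line-wise fixed-pattern noise can be sketched as follows. The reference here is derived from the mean spectrum purely as a stand-in; in the paper it comes from the two coupled lines, which is precisely what avoids the cost of mean- or median-line estimation:

```python
# Hedged sketch of fixed-pattern noise removal in SD-OCT: subtract a
# reference spectrum from every acquired spectrum before the FFT that
# forms the A-scan. Signal model and sizes are illustrative; the mean
# spectrum stands in for the paper's dual coupled lines.
import numpy as np

n_alines, n_pix = 400, 1024
rng = np.random.default_rng(5)
fixed_pattern = np.sin(np.linspace(0, 40, n_pix))        # DC artefact
spectra = (fixed_pattern
           + 0.3 * np.cos(np.linspace(0, 120, n_pix))    # sample signal
           + 0.05 * rng.standard_normal((n_alines, n_pix)))

reference = spectra.mean(axis=0)     # stand-in for the coupled-line reference
ascans = np.abs(np.fft.fft(spectra - reference, axis=1))
```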

  13. Graphics Specialist (AFSC 23151).

    ERIC Educational Resources Information Center

    Air Univ., Gunter AFS, Ala. Extension Course Inst.

    This three-volume set of student texts is intended for use in an extension course to prepare Air Force graphics specialists. The first volume deals with basic equipment, materials, lettering, and drafting (including geometric and graphic construction). Addressed in the second volume are composition and layout techniques and the fundamentals of…

  14. How Computer Graphics Work.

    ERIC Educational Resources Information Center

    Prosise, Jeff

    This document presents the principles behind modern computer graphics without straying into the arcane languages of mathematics and computer science. Illustrations accompany the clear, step-by-step explanations that describe how computers draw pictures. The 22 chapters of the book are organized into 5 sections. "Part 1: Computer Graphics in…

  15. Quantitative Graphics in Newspapers.

    ERIC Educational Resources Information Center

    Tankard, James W., Jr.

    The use of quantitative graphics in newspapers requires achieving a balance between being accurate and getting the attention of the reader. The statistical representations in newspapers are drawn by graphic designers whose key technique is fusion--the striking combination of two visual images. This technique often results in visual puns,…

  16. Fully Solution-Processed Flexible Organic Thin Film Transistor Arrays with High Mobility and Exceptional Uniformity

    PubMed Central

    Fukuda, Kenjiro; Takeda, Yasunori; Mizukami, Makoto; Kumaki, Daisuke; Tokito, Shizuo

    2014-01-01

    Printing fully solution-processed organic electronic devices may potentially revolutionize production of flexible electronics for various applications. However, difficulties in forming thin, flat, uniform films through printing techniques have been responsible for poor device performance and low yields. Here, we report on fully solution-processed organic thin-film transistor (TFT) arrays with greatly improved performance and yields, achieved by layering solution-processable materials such as silver nanoparticle inks, organic semiconductors, and insulating polymers on thin plastic films. A treatment layer improves carrier injection between the source/drain electrodes and the semiconducting layer and dramatically reduces contact resistance. Furthermore, an organic semiconductor with large-crystal grains results in TFT devices with shorter channel lengths and higher field-effect mobilities. We obtained mobilities of over 1.2 cm2 V−1 s−1 in TFT devices with channel lengths shorter than 20 μm. By combining these fabrication techniques, we built highly uniform organic TFT arrays with average mobility levels as high as 0.80 cm2 V−1 s−1 and ideal threshold voltages of 0 V. These results represent major progress in the fabrication of fully solution-processed organic TFT device arrays. PMID:24492785

  17. Accelerating electrostatic interaction calculations with graphical processing units based on new developments of Ewald method using non-uniform fast Fourier transform.

    PubMed

    Yang, Sheng-Chun; Wang, Yong-Lei; Jiao, Gui-Sheng; Qian, Hu-Jun; Lu, Zhong-Yuan

    2016-01-30

    We present new algorithms to improve the performance of the ENUF method (F. Hedman, A. Laaksonen, Chem. Phys. Lett. 425, 2006, 142), which is essentially Ewald summation using the non-uniform FFT (NFFT) technique. A NearDistance algorithm is developed to greatly reduce the neighbor-list size in the real-space computation. In the reciprocal-space computation, a new NFFT algorithm is developed for evaluating electrostatic interaction energies and forces. Both real-space and reciprocal-space computations are further accelerated by using graphics processing units (GPUs) with CUDA technology. In particular, the use of CUNFFT (NFFT based on CUDA) greatly reduces the reciprocal-space computation. To reach the best performance of this method, we propose a procedure for selecting optimal parameters with controlled accuracies. With suitable parameters, we show that our method is a good alternative to the standard Ewald method, with the same computational precision but dramatically higher computational efficiency. PMID:26584145

  18. Realtime cerebellum: a large-scale spiking network model of the cerebellum that runs in realtime using a graphics processing unit.

    PubMed

    Yamazaki, Tadashi; Igarashi, Jun

    2013-11-01

    The cerebellum plays an essential role in adaptive motor control. Once we are able to build a cerebellar model that runs in realtime, which means that a computer simulation of 1 s in the simulated world completes within 1 s in the real world, the cerebellar model could be used as a realtime adaptive neural controller for physical hardware such as humanoid robots. In this paper, we introduce "Realtime Cerebellum (RC)", a new implementation of our large-scale spiking network model of the cerebellum, which was originally built to study cerebellar mechanisms for simultaneous gain and timing control and acted as a general-purpose supervised learning machine of spatiotemporal information known as reservoir computing, on a graphics processing unit (GPU). Owing to the massive parallel computing capability of a GPU, RC runs in realtime, while reproducing qualitatively the same simulation results of the Pavlovian delay eyeblink conditioning with the previous version. RC is adopted as a realtime adaptive controller of a humanoid robot, which is instructed to learn a proper timing to swing a bat to hit a flying ball online. These results suggest that RC provides a means to apply the computational power of the cerebellum as a versatile supervised learning machine towards engineering applications.

  19. Super-Sonograms and graphical seismic source locations: Facing the challenge of real-time data processing in an OSI SAMS installation

    NASA Astrophysics Data System (ADS)

    Joswig, Manfred

    2010-05-01

    The installation and operation of an OSI seismic aftershock monitoring system (SAMS) is bound by strict time constraints: 30+ small arrays must be set up within days, and data screening must cope with the daily seismogram input. This is a significant challenge, since any single ML -2.0 aftershock of a potential nuclear test must be detected and discriminated against a variety of higher-amplitude noise bursts. No automated approach can handle this task to date; thus some 200 traces of 24/7 data must be screened manually, with a time resolution sufficient to recover signals of just a few seconds' duration and with tiny amplitudes just above the threshold of ambient noise. Previous tests confirmed that this task cannot be performed by time-domain signal screening via established seismological processing software, e.g., PITSA, SEISAN, or GEOTOOLS. Instead, we introduced 'SonoView', a seismic diagnosis tool based on a compilation of array traces into super-sonograms. Several hours of cumulative array data can be displayed at once on a single computer screen - without sacrificing the necessary detectability of few-sec signals. 'TraceView' then guides the analyst to select the relevant traces with the best SNR, and 'HypoLine' offers interactive, graphical location tools for fast epicenter estimates and source-signature identification. A previous release of this software suite was successfully applied at IFE08 in Kazakhstan and supported the seismic sub-team of OSI in its timely report compilation.

  20. Anomalous diffusion due to hindering by mobile obstacles undergoing Brownian motion or Ornstein-Uhlenbeck processes.

    PubMed

    Berry, Hugues; Chaté, Hugues

    2014-02-01

    In vivo measurements of the passive movements of biomolecules or vesicles in cells consistently report "anomalous diffusion," where mean-squared displacements scale as a power law of time with exponent α<1 (subdiffusion). While the detailed mechanisms causing such behaviors are not always elucidated, movement hindrance by obstacles is often invoked. However, our understanding of how hindered diffusion leads to subdiffusion is based on diffusion amidst randomly located immobile obstacles. Here, we have used Monte Carlo simulations to investigate transient subdiffusion due to mobile obstacles with various modes of mobility. Our simulations confirm that the anomalous regimes rapidly disappear when the obstacles move by Brownian motion. By contrast, mobile obstacles with more confined displacements, e.g., Ornstein-Uhlenbeck motion, are shown to preserve subdiffusive regimes. The mean-squared displacement of the tracked protein displays convincing power laws with an anomalous exponent α that varies with the density of Ornstein-Uhlenbeck (OU) obstacles or the relaxation time scale of the OU process. In particular, some of the values we observed are significantly below the universal value predicted for immobile obstacles in two dimensions. Therefore, our results show that subdiffusion due to mobile obstacles with OU-type motion may account for the large variation range exhibited by experimental measurements in living cells and may explain why some experimental estimates are below the universal value predicted for immobile obstacles.
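
    The measurement underlying the reported exponents can be sketched for a single Ornstein-Uhlenbeck walker: simulate the trajectory, compute the mean-squared displacement (MSD), and fit α from the log-log slope. This is only the MSD analysis, not the full obstructed-tracer Monte Carlo of the paper:

```python
# Hedged sketch of MSD analysis for an Ornstein-Uhlenbeck trajectory:
# dx = -x/tau dt + sigma dW. The fitted slope over intermediate lags
# falls below 1, illustrating anomalous (subdiffusive-looking) scaling.
# All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(6)
n, dt, tau, sigma = 100_000, 0.01, 1.0, 1.0
x = np.zeros(n)
for i in range(1, n):   # Euler-Maruyama integration of the OU process
    x[i] = x[i-1] - x[i-1] / tau * dt + sigma * np.sqrt(dt) * rng.standard_normal()

lags = np.unique(np.logspace(0, 3, 30).astype(int))
msd = np.array([np.mean((x[l:] - x[:-l]) ** 2) for l in lags])
alpha = np.polyfit(np.log(lags * dt), np.log(msd), 1)[0]
print("fitted anomalous exponent alpha ~", round(alpha, 2))
```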

  1. A High Speed Mobile Courier Data Access System That Processes Database Queries in Real-Time

    NASA Astrophysics Data System (ADS)

    Gatsheni, Barnabas Ndlovu; Mabizela, Zwelakhe

    A secure high-speed query processing mobile courier data access (MCDA) system for a courier company has been developed. The system combines wireless and wired networks so that an offsite worker (the courier) can update a live database at the courier centre in real time, and it is protected by a VPN based on IPsec. To our knowledge, no existing system performs the courier task proposed in this paper.

  2. Safety of vendor-prepared foods: evaluation of 10 processing mobile food vendors in Manhattan.

    PubMed Central

    Burt, Bryan M.; Volel, Caroline; Finkel, Madelon

    2003-01-01

    OBJECTIVES: Unsanitary food handling is a major public health hazard. There are over 4,100 mobile food vendors operating in New York City, and of these, approximately forty percent are processing vendors--mobile food units on which potentially hazardous food products are handled, prepared, or processed. This pilot study assesses the food handling practices of 10 processing mobile food vendors operating in a 38-block area of midtown Manhattan (New York City) from 43rd Street to 62nd Street between Madison and Sixth Avenues, and compares them to regulations stipulated in the New York City Health Code. METHODS: Ten processing mobile food vendors located in midtown Manhattan were observed for a period of 20 minutes each. Unsanitary food handling practices, food storage at potentially unsafe temperatures, and food contamination with uncooked meat or poultry were recorded. RESULTS: Two thirds of the vendors (67%) were found to contact served foods with bare hands. Four vendors were observed vending with visibly dirty hands or gloves, and no vendor washed his or her hands or changed gloves even once during the 20-minute observation period. Seven vendors had previously cooked meat products stored at unsafe temperatures on non-heating or non-cooking portions of the vendor cart for the duration of the observation. Four vendors were observed to contaminate served foods with uncooked meat or poultry. CONCLUSIONS: Each of these actions violates the New York City Health Code and potentially jeopardizes the safety of these vendor-prepared foods. More stringent adherence to food safety regulations should be promoted by the New York City Department of Health. PMID:12941860

  3. Graphics processing unit-accelerated non-rigid registration of MR images to CT images during CT-guided percutaneous liver tumor ablations

    PubMed Central

    Tokuda, Junichi; Plishker, William; Torabi, Meysam; Olubiyi, Olutayo I; Zaki, George; Tatli, Servet; Silverman, Stuart G.; Shekhar, Raj; Hata, Nobuhiko

    2015-01-01

    Rationale and Objectives Accuracy and speed are essential for intraprocedural nonrigid MR-to-CT image registration in the assessment of tumor margins during CT-guided liver tumor ablations. While both accuracy and speed can be improved by limiting the registration to a region of interest (ROI), manual contouring of the ROI prolongs the registration process substantially. To achieve accurate and fast registration without the use of an ROI, we combined a nonrigid registration technique based on volume subdivision with hardware acceleration using a graphics processing unit (GPU). We compared the registration accuracy and processing time of the GPU-accelerated volume subdivision-based nonrigid registration technique to the conventional nonrigid B-spline registration technique. Materials and Methods Fourteen image data sets of preprocedural MR and intraprocedural CT images for percutaneous CT-guided liver tumor ablations were obtained. Each set of images was registered using the GPU-accelerated volume subdivision technique and the B-spline technique. Manual contouring of the ROI was used only for the B-spline technique. Registration accuracies (Dice Similarity Coefficient (DSC) and 95% Hausdorff Distance (HD)) and total processing time, including contouring of ROIs and computation, were compared using a paired Student's t-test. Results Accuracies of the GPU-accelerated registrations and B-spline registrations were 88.3 ± 3.7% vs 89.3 ± 4.9% (p = 0.41) for DSC and 13.1 ± 5.2 mm vs 11.4 ± 6.3 mm (p = 0.15) for HD, respectively. Total processing time of the GPU-accelerated registration and B-spline registration techniques was 88 ± 14 s vs 557 ± 116 s (p < 0.000000002), respectively; there was no significant difference in computation time despite the difference in the complexity of the algorithms (p = 0.71). Conclusion The GPU-accelerated volume subdivision technique was as accurate as the B-spline technique and required significantly less processing time. The GPU
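
    Of the two reported accuracy metrics, the Dice Similarity Coefficient is the simpler to compute; a minimal sketch on binary voxel masks (toy data, not the study's evaluation code):

      import numpy as np

      def dice_coefficient(mask_a, mask_b):
          """Dice Similarity Coefficient between two binary segmentation masks."""
          a, b = mask_a.astype(bool), mask_b.astype(bool)
          return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

      # Toy example: two overlapping spheres on a 64^3 voxel grid.
      z, y, x = np.ogrid[:64, :64, :64]
      m1 = (x - 30)**2 + (y - 32)**2 + (z - 32)**2 < 15**2
      m2 = (x - 34)**2 + (y - 32)**2 + (z - 32)**2 < 15**2
      print(f"DSC = {dice_coefficient(m1, m2):.3f}")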

  4. Study on efficiency of time computation in x-ray imaging simulation base on Monte Carlo algorithm using graphics processing unit

    NASA Astrophysics Data System (ADS)

    Setiani, Tia Dwi; Suprijadi, Haryanto, Freddy

    2016-03-01

    Monte Carlo (MC) is one of the most powerful techniques for simulation in x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One widely used MC-based code for simulating radiographic images is MC-GPU, developed by Andreu Badal. This study investigated the computation time of x-ray imaging simulation on a GPU (Graphics Processing Unit) compared to a standard CPU (Central Processing Unit). Furthermore, the effect of physical parameters on the quality of radiographic images and a comparison of the image quality resulting from simulation on the GPU and CPU are evaluated in this paper. The simulations were run on a CPU in serial condition and on two GPUs with 384 cores and 2304 cores. In the GPU simulations each core calculates one photon, so a large number of photons are calculated simultaneously. Results show that the simulations on the GPU were significantly accelerated compared to the CPU. Simulations on the 2304-core GPU ran about 64-114 times faster than on the CPU, while simulations on the 384-core GPU ran about 20-31 times faster than on a single CPU core. Another result shows that optimum image quality was obtained starting from 10^8 histories and photon energies from 60 keV to 90 keV. Analyzed by a statistical approach, the image quality from the GPU and CPU is essentially the same.
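
    The per-photon parallelism described above works because each photon history is independent. A toy free-path sampling example (illustrative attenuation coefficient; not MC-GPU's transport kernel):

      import numpy as np

      rng = np.random.default_rng(1)
      mu = 0.2            # linear attenuation coefficient, 1/cm (illustrative)
      thickness = 5.0     # slab thickness, cm
      n_photons = 10**6   # histories; on a GPU, one photon per core/thread

      # Sample free path lengths s = -ln(U)/mu; photons whose first
      # interaction lies beyond the slab are transmitted.
      paths = -np.log(rng.random(n_photons)) / mu
      transmitted = np.count_nonzero(paths > thickness)
      print(f"simulated transmission: {transmitted / n_photons:.4f}")
      print(f"analytic exp(-mu*d):    {np.exp(-mu * thickness):.4f}")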

  5. Do larger graphic health warnings on standardised cigarette packs increase adolescents’ cognitive processing of consumer health information and beliefs about smoking-related harms?

    PubMed Central

    White, Victoria; Williams, Tahlia; Faulkner, Agatha; Wakefield, Melanie

    2015-01-01

    Objective To examine the impact of plain packaging of cigarettes with enhanced graphic health warnings on Australian adolescents' cognitive processing of warnings and awareness of different health consequences of smoking. Methods Cross-sectional school-based surveys conducted in 2011 (prior to introduction of standardised packaging, n=6338) and 2013 (7–12 months afterwards, n=5915). Students indicated the frequency of attending to, reading, thinking or talking about warnings. Students viewed a list of diseases or health effects and were asked to indicate whether each was caused by smoking. Two—'kidney and bladder cancer' and 'damages gums and teeth'—were new, while the remainder had been promoted through previous health warnings and/or television campaigns. The 60% of students in 2011 and 65% in 2013 who had seen a cigarette pack in the previous 6 months form the sample for analysis. Changes in responses over time are examined. Results Awareness that smoking causes bladder cancer increased between 2011 and 2013 (p=0.002). There was high agreement with statements reflecting health effects featured in previous warnings or advertisements, with little change over time. Exceptions to this were increases in the proportion agreeing that smoking is a leading cause of death (p<0.001) and causes blindness (p<0.001). The frequency of students reading, attending to, thinking or talking about the health warnings on cigarette packs did not change. Conclusions Acknowledgement of the negative health effects of smoking among Australian adolescents remains high. Apart from increased awareness of bladder cancer, the new requirements for packaging and health warnings did not increase adolescents' cognitive processing of warning information.

  6. Image reproduction with interactive graphics

    NASA Technical Reports Server (NTRS)

    Buckner, J. D.; Council, H. W.; Edwards, T. R.

    1974-01-01

    Software application or development in optical image digital data processing requires fast, good-quality, yet inexpensive hard copies of processed images. To achieve this, a Cambo camera with an f/2.8, 150-mm Xenotar lens in a Copal shutter, having a Graflok back for 4 x 5 Polaroid type 57 pack-film, was interfaced to an existing Adage AGT-30/Electro-Mechanical Research EMR 6050 graphic computer system. Time-lapse photography in conjunction with a log-to-linear voltage transformation has resulted in an interactive system capable of producing a hard copy in 54 sec. The interactive aspect of the system lies in a Tektronix 4002 graphic computer terminal and its associated hard copy unit.

  7. Flowfield computer graphics

    NASA Technical Reports Server (NTRS)

    Desautel, Richard

    1993-01-01

    The objectives of this research include supporting the Aerothermodynamics Branch's research by developing graphical visualization tools for both the branch's adaptive grid code and flow field ray tracing code. The completed research for the reporting period includes development of a graphical user interface (GUI) and its implementation into the NAS Flowfield Analysis Software Tool kit (FAST), for both the adaptive grid code (SAGE) and the flow field ray tracing code (CISS).

  8. Scalable mobile information system to support the treatment process and the workflow of wastewater facilities.

    PubMed

    Schuchardt, L; Steinmetz, H; Ehret, J; Ebert, A; Schmitt, T G

    2004-01-01

    In order to support the operation of wastewater systems and the workflow of sewage systems, a demonstration application has been developed to show how a mobile information system can be transferred into practice and used by staff. The paper presents a scalable information visualisation system that can be used with mobile devices. The information covered includes not only process data but also general information about buildings and units, work directions, occupational safety regulations, and first-aid instructions in the event of a work accident. This is particularly appropriate for use in remote facilities. The implementation is based on, but not limited to, SQL, JSP and HTML.

  9. A Photo Storm Report Mobile Application, Processing/Distribution System, and AWIPS-II Display Concept

    NASA Astrophysics Data System (ADS)

    Longmore, S. P.; Bikos, D.; Szoke, E.; Miller, S. D.; Brummer, R.; Lindsey, D. T.; Hillger, D.

    2014-12-01

    The increasing use of mobile phones equipped with digital cameras and the ability to post images and information to the Internet in real-time has significantly improved the ability to report events almost instantaneously. In the context of severe weather reports, a representative digital image conveys significantly more information than a simple text or phone-relayed report to a weather forecaster issuing severe weather warnings. It also allows the forecaster to reasonably discern the validity and quality of a storm report. Posting geo-located, time-stamped storm report photographs to NWS weather forecast office social media pages via a mobile phone application has generated recent positive feedback from forecasters. Building upon this feedback, this discussion advances the concept, development, and implementation of a formalized Photo Storm Report (PSR) mobile application, processing and distribution system, and Advanced Weather Interactive Processing System II (AWIPS-II) plug-in display software. The PSR system would be composed of three core components: i) a mobile phone application, ii) a processing and distribution software and hardware system, and iii) AWIPS-II data, exchange and visualization plug-in software. i) The mobile phone application would allow web-registered users to send geo-location, view direction, and time-stamped PSRs along with severe weather type and comments to the processing and distribution servers. ii) The servers would receive PSRs, convert images and information to NWS network bandwidth-manageable sizes in an AWIPS-II data format, distribute them on the NWS data communications network, and archive the original PSRs for possible future research datasets. iii) The AWIPS-II data and exchange plug-ins would archive PSRs, and the visualization plug-in would display PSR locations, times and directions by hour, similar to surface observations. Hovering on individual PSRs would reveal photo thumbnails and clicking on them would display the

  10. The Role of Mobile Technologies in Health Care Processes: The Case of Cancer Supportive Care

    PubMed Central

    Cucciniello, Maria; Guerrazzi, Claudia

    2015-01-01

    Background Health care systems are gradually moving toward new models of care based on integrated care processes shared by different caregivers and on an empowered role of the patient. Mobile technologies are assuming an emerging role in this scenario. This is particularly true in care processes where the patient has an especially enhanced role, as is the case of cancer supportive care. Objective This paper aims to review existing studies on the actual role and use of mobile technology during the different stages of care processes, with particular reference to cancer supportive care. Methods We carried out a literature review with the aim of identifying studies related to the use of mHealth in cancer care and cancer supportive care. The final sample size consists of 106 records. Results There is scant literature concerning the use of mHealth in cancer supportive care. Looking more generally at cancer care, we found that mHealth is mainly used for self-management activities carried out by patients. The main tools used are mobile devices like mobile phones and tablets, but remote monitoring devices also play an important role. Text messaging technologies (short message service, SMS) have a minor role, with the exception of middle-income countries where text messaging plays a major role. Telehealth technologies are still rarely used in cancer care processes. If we look at the different stages of health care processes, we can see that mHealth is mainly used during the treatment of patients, especially for self-management activities. It is also used for prevention and diagnosis, although to a lesser extent, whereas it appears rarely used for decision-making and follow-up activities. Conclusions Since mHealth seems to be employed only for limited uses and during limited phases of the care process, it is unlikely that it can really contribute to the creation of new care models. This under-utilization may depend on many issues, including the need for it to be embedded

  11. Modelling multi-phase liquid-sediment scour and resuspension induced by rapid flows using Smoothed Particle Hydrodynamics (SPH) accelerated with a Graphics Processing Unit (GPU)

    NASA Astrophysics Data System (ADS)

    Fourtakas, G.; Rogers, B. D.

    2016-06-01

    A two-phase numerical model using Smoothed Particle Hydrodynamics (SPH) is applied to two-phase liquid-sediment flows. The absence of a mesh in SPH is ideal for interfacial and highly non-linear flows with changing fragmentation of the interface, mixing and resuspension. The rheology of sediment under rapid flows passes through several states which are only partially described by previous research in SPH. This paper attempts to bridge the gap between geotechnics and non-Newtonian and Newtonian flows by proposing a model that combines the yielding, shear and suspension layers which are needed to predict accurately the global erosion phenomena from a hydrodynamics perspective. The numerical SPH scheme is based on the explicit treatment of both phases using a Newtonian model and the non-Newtonian Bingham-type Herschel-Bulkley-Papanastasiou constitutive model. This is supplemented by the Drucker-Prager yield criterion to predict the onset of yielding of the sediment surface and a concentration suspension model. The multi-phase model has been compared with experimental and 2-D reference numerical models for scour following a dry-bed dam break, yielding satisfactory results and improvements over well-known SPH multi-phase models. With 3-D simulations requiring a large number of particles, the code is accelerated with a graphics processing unit (GPU) in the open-source DualSPHysics code. The implementation and optimisation of the code achieved a speed-up of 58x over an optimised single-threaded serial code. A 3-D dam break over a non-cohesive erodible bed simulation with over 4 million particles yields close agreement with experimental scour and water surface profiles.
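
    For reference, one common form of the Herschel-Bulkley-Papanastasiou constitutive model expresses the effective viscosity as follows (in LaTeX; k is the consistency index, n the flow index, tau_y the yield stress, m the Papanastasiou regularization parameter, and dot-gamma the shear rate; the abstract does not give the exact formulation used):

      \[
        \eta_{\mathrm{eff}}(\dot{\gamma})
          = k\,\dot{\gamma}^{\,n-1}
          + \frac{\tau_y\left(1 - e^{-m\dot{\gamma}}\right)}{\dot{\gamma}},
        \qquad
        \boldsymbol{\tau} = 2\,\eta_{\mathrm{eff}}(\dot{\gamma})\,\mathbf{D},
      \]

    where D is the rate-of-deformation tensor; the exponential term regularizes the yield stress so the model stays well defined as the shear rate tends to zero.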

  12. Self-Authored Graphic Design: A Strategy for Integrative Studies

    ERIC Educational Resources Information Center

    McCarthy, Steven; De Almeida, Cristina Melibeu

    2002-01-01

    The purpose of this essay is to introduce the concepts of self-authorship in graphic design education as part of an integrative pedagogy. The enhanced potential of harnessing graphic design's dual modalities--the integrative processes inherent in design thinking and doing, and the ability of graphic design to engage other disciplines by giving…

  13. HIN9/468: The Last Mile - Secure and mobile data processing in healthcare

    PubMed Central

    Bludau, HB; Vocke, A; Herzog, W

    1999-01-01

    Motivation According to the Federal Ministry, the avowed target of modern medicine is to administer the best medical care, the newest scientific insights, and the knowledge of experienced specialists to every patient at affordable cost, no matter whether the patient is located in a rural area or in a teaching hospital. One way of administering information is via mobile tools. To find out more about the influence of mobile computers on the physician-patient relationship, the acceptance of these tools as well as the prerequisites of new security and data-processing concepts were investigated in a simulation study. Methods The Personal Digital Assistant: A prototype was developed based on a personal digital assistant. The Apple Newton was used because it appeared suitable for easy data input and retrieval by means of a touch screen with handwriting recognition. The device was coupled with a conventional cellular phone for voice and data transfer. The prototype provided several functions for information processing: access to a patient database, access to medical knowledge, documentation of diagnoses, electronic request forms for investigations, and tools for personal organization. A prototype of an accessibility and safety manager was also integrated; this software enables the user to control telephone accessibility individually, with situational adjustments and a complex set of rules configuring how arriving calls are dealt with. Moreover, this software contained a component for sending and receiving text messages. The Simulation Study: In simulation studies, test users are observed while working with prototypical technology in a close-to-reality environment. The aim is to test an early prototype in its intended environment to obtain design proposals for the technology from its future users. Within the Ladenburger group "Security in communications technology" of the Gottlieb-Daimler und Karl-Benz-Stiftung, an investigation at the Heidelberg University Medical Centre was conducted under organisational management

  14. Control of Chemical Effects in the Separation Process of a Differential Mobility / Mass Spectrometer System

    PubMed Central

    Schneider, Bradley B.; Coy, Stephen L.; Krylov, Evgeny V.; Nazarov, Erkinjon G.

    2013-01-01

    Differential mobility spectrometry (DMS) separates ions on the basis of the difference in their migration rates under high versus low electric fields. Several models describing the physical nature of this field mobility dependence have been proposed, but emerging as a dominant effect is the clusterization model, sometimes referred to as the dynamic cluster-decluster model. DMS resolution and peak capacity are strongly influenced by the addition of modifiers, which results in the formation and dissociation of clusters. This process increases selectivity due to the unique chemical interactions that occur between an ion and neutral gas-phase molecules. It is thus imperative to bring the parameters influencing the chemical interactions under control and find ways to exploit them in order to improve the analytical utility of the device. In this paper we describe three important areas that need consideration in order to stabilize and capitalize on the chemical processes that dominate a DMS separation. The first involves means of controlling the dynamic equilibrium of the clustering reactions with high concentrations of specific reagents. The second involves a means to deal with the unwanted heterogeneous cluster ion populations emitted from the electrospray ionization process that degrade resolution and sensitivity. The third involves fine control of the parameters that affect the fundamental collision processes: temperature and pressure. PMID:20065515
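
    The field dependence underlying DMS is conventionally written as a mobility that varies with the reduced field strength E/N; a standard form (in LaTeX; the alpha coefficients are substance-specific, and the abstract does not quote this expression) is:

      \[
        K(E/N) = K(0)\,\bigl[\,1 + \alpha(E/N)\,\bigr],
        \qquad
        \alpha(E/N) = \alpha_2 (E/N)^2 + \alpha_4 (E/N)^4 + \cdots,
      \]

    so the separation exploits the difference between an ion's mobility in the high-field and low-field portions of the asymmetric waveform.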

  15. Control of chemical effects in the separation process of a differential mobility mass spectrometer system.

    PubMed

    Schneider, Bradley B; Covey, Thomas R; Coy, Stephen L; Krylov, Evgeny V; Nazarov, Erkinjon G

    2010-01-01

    Differential mobility spectrometry (DMS) separates ions on the basis of the difference in their migration rates under high versus low electric fields. Several models describing the physical nature of this field mobility dependence have been proposed, but emerging as a dominant effect is the clusterization model, sometimes referred to as the dynamic cluster-decluster model. DMS resolution and peak capacity are strongly influenced by the addition of modifiers, which results in the formation and dissociation of clusters. This process increases selectivity due to the unique chemical interactions that occur between an ion and neutral gas-phase molecules. It is thus imperative to bring the parameters influencing the chemical interactions under control and find ways to exploit them in order to improve the analytical utility of the device. In this paper, we describe three important areas that need consideration in order to stabilize and capitalize on the chemical processes that dominate a DMS separation. The first involves means of controlling the dynamic equilibrium of the clustering reactions with high concentrations of specific reagents. The second involves a means to deal with the unwanted heterogeneous cluster ion populations emitted from the electrospray ionization process that degrade resolution and sensitivity. The third involves fine control of the parameters that affect the fundamental collision processes: temperature and pressure.

  16. Learning by Graphics: Translating Verbal Information Into Graphic Network Formats. Tech Memo Number 60.

    ERIC Educational Resources Information Center

    Dunn, Thomas G.; Hansen, Duncan

    Visuals such as pictures, charts, diagrams, and graphics are widely used in education. However, there is little justification in the research literature for their use. The overall purpose of this exploratory study was to find out more about the processing of visuals or graphics in an educational task. Specifically, it was thought that…

  17. mHealth Quality: A Process to Seal the Qualified Mobile Health Apps.

    PubMed

    Yasini, Mobin; Beranger, Jérôme; Desmarais, Pierre; Perez, Lucas; Marchand, Guillaume

    2016-01-01

    A large number of mobile health applications (apps) are currently available, with a variety of functionalities. User ratings in the app stores do not seem to be a reliable indicator of app quality, and traditional evaluation methods are not suited to the fast-paced nature of mobile technology. In this study, we propose a collaborative multidimensional scale to assess the quality of mHealth apps. In our process, app quality is assessed along various dimensions, including medical reliability, legal consistency, ethical consistency, usability, personal data privacy, and IT security. A hypothetico-deductive approach was used in various working groups to define audit criteria based on the various use cases an app could provide. These criteria were then implemented as web-based self-administered questionnaires, and the generation of automatic reports was considered. This method is, on the one hand, specific to each app, because it assesses each health app according to the functionality it offers. On the other hand, the method is automatic, transferable to all apps, and adapted to the dynamic nature of mobile technology. PMID:27577372

  18. Adsorbed solution model for prediction of normal-phase chromatography process with varying composition of the mobile phase.

    PubMed

    Piatkowski, Wojciech; Petrushka, Igor; Antos, Dorota

    2005-10-21

    The adsorbed solution model has been used to predict the competitive adsorption equilibria of the solute and the active component of the mobile phase in a normal-phase liquid chromatography system. The inputs to the calculations were the single-component adsorption isotherms, accounting for the energetic heterogeneity of the adsorbent surface and the non-ideality of the mobile phase solution. The competitive adsorption model was coupled with a model of the column dynamics and used to simulate the chromatography process at different mobile phase compositions. The predictions were verified by comparing simulated and experimental chromatograms. The model allowed quantitative prediction of the chromatography process on the basis of the pure-species adsorption isotherms.
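
    The abstract does not specify the column-dynamics model; a standard choice in this literature is the equilibrium-dispersive mass balance, which for component i reads (in LaTeX; F is the phase ratio, u the mobile phase velocity, and D_a the apparent dispersion coefficient):

      \[
        \frac{\partial c_i}{\partial t}
        + F\,\frac{\partial q_i}{\partial t}
        + u\,\frac{\partial c_i}{\partial z}
        = D_a\,\frac{\partial^2 c_i}{\partial z^2},
        \qquad
        q_i = q_i(c_1, \dots, c_n),
      \]

    where q_i is given by the competitive isotherm, here the adsorbed solution model computed from the pure-species isotherms.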

  19. Doping suppression and mobility enhancement of graphene transistors fabricated using an adhesion promoting dry transfer process

    SciTech Connect

    Cheol Shin, Woo; Hun Mun, Jeong; Yong Kim, Taek; Choi, Sung-Yool; Jin Cho, Byung; Yoon, Taeshik; Kim, Taek-Soo

    2013-12-09

    We present the facile dry transfer of graphene synthesized via chemical vapor deposition on copper film to a functional device substrate. High quality uniform dry transfer of graphene to oxidized silicon substrate was achieved by exploiting the beneficial features of a poly(4-vinylphenol) adhesive layer involving a strong adhesion energy to graphene and negligible influence on the electronic and structural properties of graphene. The graphene field effect transistors (FETs) fabricated using the dry transfer process exhibit excellent electrical performance in terms of high FET mobility and low intrinsic doping level, which proves the feasibility of our approach in graphene-based nanoelectronics.

  20. High-Mobility Ambipolar Organic Thin-Film Transistor Processed From a Nonchlorinated Solvent.

    PubMed

    Sonar, Prashant; Chang, Jingjing; Kim, Jae H; Ong, Kok-Haw; Gann, Eliot; Manzhos, Sergei; Wu, Jishan; McNeill, Christopher R

    2016-09-21

    Polymer semiconductor PDPPF-DFT, which combines furan-substituted diketopyrrolopyrrole (DPP) and a 3,4-difluorothiophene base, has been designed and synthesized. PDPPF-DFT polymer semiconductor thin film processed from nonchlorinated hexane is used as an active layer in thin-film transistors. As a result, balanced hole and electron mobilities of 0.26 and 0.12 cm²/(V s) are achieved for PDPPF-DFT. This is the first report of using nonchlorinated hexane solvent for fabricating high-performance ambipolar thin-film transistor devices.

  1. Doping suppression and mobility enhancement of graphene transistors fabricated using an adhesion promoting dry transfer process

    NASA Astrophysics Data System (ADS)

    Cheol Shin, Woo; Yoon, Taeshik; Hun Mun, Jeong; Yong Kim, Taek; Choi, Sung-Yool; Kim, Taek-Soo; Jin Cho, Byung

    2013-12-01

    We present the facile dry transfer of graphene synthesized via chemical vapor deposition on copper film to a functional device substrate. High quality uniform dry transfer of graphene to oxidized silicon substrate was achieved by exploiting the beneficial features of a poly(4-vinylphenol) adhesive layer involving a strong adhesion energy to graphene and negligible influence on the electronic and structural properties of graphene. The graphene field effect transistors (FETs) fabricated using the dry transfer process exhibit excellent electrical performance in terms of high FET mobility and low intrinsic doping level, which proves the feasibility of our approach in graphene-based nanoelectronics.

  2. High-Mobility Ambipolar Organic Thin-Film Transistor Processed From a Nonchlorinated Solvent.

    PubMed

    Sonar, Prashant; Chang, Jingjing; Kim, Jae H; Ong, Kok-Haw; Gann, Eliot; Manzhos, Sergei; Wu, Jishan; McNeill, Christopher R

    2016-09-21

    Polymer semiconductor PDPPF-DFT, which combines furan-substituted diketopyrrolopyrrole (DPP) and a 3,4-difluorothiophene base, has been designed and synthesized. PDPPF-DFT polymer semiconductor thin film processed from nonchlorinated hexane is used as an active layer in thin-film transistors. As a result, balanced hole and electron mobilities of 0.26 and 0.12 cm²/(V s) are achieved for PDPPF-DFT. This is the first report of using nonchlorinated hexane solvent for fabricating high-performance ambipolar thin-film transistor devices. PMID:27595165

  3. Data processing and quality evaluation of a boat-based mobile laser scanning system.

    PubMed

    Vaaja, Matti; Kukko, Antero; Kaartinen, Harri; Kurkela, Matti; Kasvi, Elina; Flener, Claude; Hyyppä, Hannu; Hyyppä, Juha; Järvelä, Juha; Alho, Petteri

    2013-01-01

    Mobile mapping systems (MMSs) are used for mapping topographic and urban features which are difficult and time-consuming to measure with other instruments. The benefits of MMSs include efficient data collection and versatile usability. This paper investigates the data processing steps and the quality of boat-based mobile mapping system (BoMMS) data for generating terrain and vegetation points in a river environment. Our aims in data processing were to filter noise points, detect shorelines as well as points below the water surface, and conduct ground point classification. Previous studies of BoMMS have investigated elevation accuracies and usability in the detection of fluvial erosion and deposition areas. The new findings concerning BoMMS data are that the improved data processing approach allows for the identification of multipath reflections and shoreline delineation. We demonstrate the possibility of measuring bathymetry data in shallow (0-1 m) and clear water. Furthermore, we evaluate for the first time the accuracy of the BoMMS ground point classification compared to manually classified data. We also demonstrate the spatial variations of the ground point density and assess elevation and vertical accuracies of the BoMMS data. PMID:24048340
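
    Two of the processing steps named above lend themselves to a compact sketch - detecting points below the water surface and a crude lowest-point-per-cell ground classification (toy point cloud and thresholds; not the study's pipeline):

      import numpy as np

      rng = np.random.default_rng(2)
      points = rng.random((10000, 3)) * [100.0, 100.0, 5.0]  # x, y, z in metres
      water_level = 2.0                                       # illustrative value

      # 1) Candidate bathymetry: points below the water surface.
      below_water = points[points[:, 2] < water_level]

      # 2) Crude ground classification: keep the lowest point in each 1 m cell.
      cells = np.floor(points[:, :2]).astype(int)
      keys = cells[:, 0] * 1000 + cells[:, 1]                 # unique cell id
      order = np.lexsort((points[:, 2], keys))                # by cell, then z
      first_in_cell = np.diff(keys[order], prepend=-1) != 0
      ground = points[order][first_in_cell]
      print(below_water.shape, ground.shape)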

  4. Recovery of critical and value metals from mobile electronics enabled by electrochemical processing

    SciTech Connect

    Tedd E. Lister; Peiming Wang; Andre Anderko

    2014-10-01

    Electrochemistry-based schemes were investigated as a means to recover critical and value metals from scrap mobile electronics. Mobile electronics offer a growing feedstock for replenishing value and critical metals and reducing the need to exhaust primary sources. The electrorecycling process generates oxidizing agents at an anode to dissolve metals from the scrap matrix while reducing dissolved metals at the cathode. The process uses a single cell to maximize energy efficiency. E vs pH diagrams and metal dissolution experiments were used to assess the effectiveness of various solution chemistries. Following this work, a flow chart was developed in which two stages of electrorecycling were proposed: 1) initial dissolution of Cu, Sn, Ag and magnet materials using Fe3+ generated in acidic sulfate, and 2) final dissolution of Pd and Au using Cl2 generated in an HCl solution. Experiments were performed using a simulated metal mixture equivalent to 5 cell phones. Both Cu and Ag were recovered at ~97% using Fe3+ while leaving Au and Pd intact. A strategy for extracting rare earth elements (REEs) from the dissolved streams is discussed, as well as future directions in process development.
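
    Illustrative half- and overall reactions consistent with the two-stage scheme described (in LaTeX; the exact reactions are an assumption, not quoted from the report):

      \begin{align*}
        \text{Stage 1 (acidic sulfate):}\quad
          & \mathrm{Fe^{2+} \rightarrow Fe^{3+} + e^-} && \text{(anode)} \\
          & \mathrm{Cu + 2\,Fe^{3+} \rightarrow Cu^{2+} + 2\,Fe^{2+}} && \text{(leaching)} \\
          & \mathrm{Cu^{2+} + 2\,e^- \rightarrow Cu} && \text{(cathodic recovery)} \\
        \text{Stage 2 (HCl):}\quad
          & \mathrm{2\,Cl^- \rightarrow Cl_2 + 2\,e^-} && \text{(anode)} \\
          & \mathrm{2\,Au + 3\,Cl_2 + 2\,Cl^- \rightarrow 2\,[AuCl_4]^-} && \text{(leaching)}
      \end{align*}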

  5. Data processing and quality evaluation of a boat-based mobile laser scanning system.

    PubMed

    Vaaja, Matti; Kukko, Antero; Kaartinen, Harri; Kurkela, Matti; Kasvi, Elina; Flener, Claude; Hyyppä, Hannu; Hyyppä, Juha; Järvelä, Juha; Alho, Petteri

    2013-09-17

    Mobile mapping systems (MMSs) are used for mapping topographic and urban features which are difficult and time-consuming to measure with other instruments. The benefits of MMSs include efficient data collection and versatile usability. This paper investigates the data processing steps and the quality of boat-based mobile mapping system (BoMMS) data for generating terrain and vegetation points in a river environment. Our aims in data processing were to filter noise points, detect shorelines as well as points below the water surface, and conduct ground point classification. Previous studies of BoMMS have investigated elevation accuracies and usability in the detection of fluvial erosion and deposition areas. The new findings concerning BoMMS data are that the improved data processing approach allows for the identification of multipath reflections and shoreline delineation. We demonstrate the possibility of measuring bathymetry data in shallow (0-1 m) and clear water. Furthermore, we evaluate for the first time the accuracy of the BoMMS ground point classification compared to manually classified data. We also demonstrate the spatial variations of the ground point density and assess elevation and vertical accuracies of the BoMMS data.

  6. Data Processing and Quality Evaluation of a Boat-Based Mobile Laser Scanning System

    PubMed Central

    Vaaja, Matti; Kukko, Antero; Kaartinen, Harri; Kurkela, Matti; Kasvi, Elina; Flener, Claude; Hyyppä, Hannu; Hyyppä, Juha; Järvelä, Juha; Alho, Petteri

    2013-01-01

    Mobile mapping systems (MMSs) are used for mapping topographic and urban features which are difficult and time-consuming to measure with other instruments. The benefits of MMSs include efficient data collection and versatile usability. This paper investigates the data processing steps and the quality of boat-based mobile mapping system (BoMMS) data for generating terrain and vegetation points in a river environment. Our aims in data processing were to filter noise points, detect shorelines as well as points below the water surface, and conduct ground point classification. Previous studies of BoMMS have investigated elevation accuracies and usability in the detection of fluvial erosion and deposition areas. The new findings concerning BoMMS data are that the improved data processing approach allows for the identification of multipath reflections and shoreline delineation. We demonstrate the possibility of measuring bathymetry data in shallow (0–1 m) and clear water. Furthermore, we evaluate for the first time the accuracy of the BoMMS ground point classification compared to manually classified data. We also demonstrate the spatial variations of the ground point density and assess elevation and vertical accuracies of the BoMMS data. PMID:24048340

  7. Parallel processor-based raster graphics system architecture

    DOEpatents

    Littlefield, Richard J.

    1990-01-01

    An apparatus for generating raster graphics images from the graphics command stream includes a plurality of graphics processors connected in parallel, each adapted to receive any part of the graphics command stream for processing the command stream part into pixel data. The apparatus also includes a frame buffer for mapping the pixel data to pixel locations and an interconnection network for interconnecting the graphics processors to the frame buffer. Through the interconnection network, each graphics processor may access any part of the frame buffer concurrently with another graphics processor accessing any other part of the frame buffer. The plurality of graphics processors can thereby transmit concurrently pixel data to pixel locations in the frame buffer.
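
    A minimal sketch of one interleaving scheme with the property the patent claims - each processor owns a disjoint set of frame-buffer locations, so all can write concurrently without conflicts (the actual interconnection network and mapping are not specified here):

      # Pixel-interleaved assignment of frame-buffer pixels to N processors.
      N = 4
      width, height = 640, 480

      def owner(x, y):
          """Processor responsible for pixel (x, y) under 1-D interleaving."""
          return (y * width + x) % N

      # The owner sets are disjoint and exhaustive, so N processors can
      # write their pixel data to the frame buffer concurrently.
      counts = [0] * N
      for y in range(height):
          for x in range(width):
              counts[owner(x, y)] += 1
      print(counts)  # equal shares of the 640 x 480 frame buffer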

  8. Interactive computer graphics

    NASA Astrophysics Data System (ADS)

    Purser, K.

    1980-08-01

    Design layouts have traditionally been done on a drafting board by drawing a two-dimensional representation with section cuts and side views to describe the exact three-dimensional model. With the advent of computer graphics, a three-dimensional model can be created directly. The computer stores the exact three-dimensional model, which can be examined from any angle and at any scale. A brief overview of interactive computer graphics, how models are made and some of the benefits/limitations are described.

  9. Graphical workstation capability for reliability modeling

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.; Koppen, Sandra V.; Haley, Pamela J.

    1992-01-01

    In addition to computational capabilities, software tools for estimating the reliability of fault-tolerant digital computer systems must also provide a means of interfacing with the user. Described here is the new graphical interface capability of the hybrid automated reliability predictor (HARP), a software package that implements advanced reliability modeling techniques. The graphics oriented (GO) module provides the user with a graphical language for modeling system failure modes through the selection of various fault-tree gates, including sequence-dependency gates, or by a Markov chain. By using this graphical input language, a fault tree becomes a convenient notation for describing a system. In accounting for any sequence dependencies, HARP converts the fault-tree notation to a complex stochastic process that is reduced to a Markov chain, which it can then solve for system reliability. The graphics capability is available for use on an IBM-compatible PC, a Sun, and a VAX workstation. The GO module is written in the C programming language and uses the Graphical Kernel System (GKS) standard for graphics implementation. The PC, VAX, and Sun versions of the HARP GO module are currently in beta-testing stages.
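
    A toy illustration of the fault-tree notation's semantics - failure probabilities combining through OR and AND gates for independent components (hypothetical numbers; HARP's actual engine converts sequence-dependent trees to Markov chains, which this sketch does not do):

      # Failure probability of a fault tree with independent basic events.
      def gate_or(*p):   # the gate fails if any input fails
          q = 1.0
          for pi in p:
              q *= 1.0 - pi
          return 1.0 - q

      def gate_and(*p):  # the gate fails only if all inputs fail
          prod = 1.0
          for pi in p:
              prod *= pi
          return prod

      p_bus, p_cpu, p_mem = 5e-5, 1e-4, 2e-4      # hypothetical failure probs
      # System fails if the bus fails, or both redundant CPUs fail,
      # or both redundant memory banks fail.
      p_system = gate_or(p_bus, gate_and(p_cpu, p_cpu), gate_and(p_mem, p_mem))
      print(f"system failure probability: {p_system:.3e}")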

  10. Near Real-Time Assessment of Anatomic and Dosimetric Variations for Head and Neck Radiation Therapy via Graphics Processing Unit–based Dose Deformation Framework

    SciTech Connect

    Qi, X. Sharon; Santhanam, Anand; Neylon, John; Min, Yugang; Armstrong, Tess; Sheng, Ke; Staton, Robert J.; Pukala, Jason; Pham, Andrew; Low, Daniel A.; Lee, Steve P.; Steinberg, Michael; Manon, Rafael; Chen, Allen M.; Kupelian, Patrick

    2015-06-01

    Purpose: The purpose of this study was to systematically monitor anatomic variations and their dosimetric consequences during intensity modulated radiation therapy (IMRT) for head and neck (H&N) cancer by using a graphics processing unit (GPU)-based deformable image registration (DIR) framework. Methods and Materials: Eleven H&N patients undergoing IMRT with daily megavoltage computed tomography (CT) and weekly kilovoltage CT (kVCT) scans were included in this analysis. Pretreatment kVCTs were automatically registered with their corresponding planning CTs through a GPU-based DIR framework. The deformation of each contoured structure in the H&N region was computed to account for nonrigid change in the patient setup. The Jacobian determinant of the planning target volumes and the surrounding critical structures was used to quantify anatomical volume changes. The actual delivered dose was calculated accounting for the organ deformation. The dose distribution uncertainties due to registration errors were estimated using a landmark-based gamma evaluation. Results: Dramatic interfractional anatomic changes were observed. During the treatment course of 6 to 7 weeks, the parotid gland volumes changed by up to 34.7%, and the center-of-mass displacement of the 2 parotid glands varied in the range of 0.9 to 8.8 mm. For the primary treatment volume, the cumulative minimum, mean, and equivalent uniform doses assessed by the weekly kVCTs were lower than the planned doses by up to 14.9% (P=.14), 2% (P=.39), and 7.3% (P=.05), respectively. The cumulative mean doses were significantly higher than the planned dose for the left parotid (P=.03) and right parotid glands (P=.006). The computation including DIR and dose accumulation was ultrafast (∼45 seconds), with registration accuracy at the subvoxel level. Conclusions: A systematic analysis of anatomic variations in the H&N region and their dosimetric consequences is critical in improving treatment efficacy. Nearly real
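
    The volume-change measure referred to above is the Jacobian determinant of the deformation field phi; in LaTeX (standard definition, not quoted from the paper):

      \[
        J(\mathbf{x}) = \det\!\bigl(\nabla \boldsymbol{\varphi}(\mathbf{x})\bigr),
        \qquad
        \frac{\Delta V}{V} \approx \frac{1}{|\Omega|} \int_{\Omega} \bigl(J(\mathbf{x}) - 1\bigr)\, d\mathbf{x},
      \]

    where J > 1 indicates local expansion, J < 1 local contraction, and Omega is the contoured structure.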

  11. Mobilization of iron from neoplastic cells by some iron chelators is an energy-dependent process.

    PubMed

    Richardson, D R

    1997-05-16

    Iron (Fe) chelators of the pyridoxal isonicotinoyl hydrazone (PIH) class may be useful agents to treat Fe overload disease and also cancer. These ligands possess high activity at mobilizing 59Fe from neoplastic cells, and the present study was designed to examine whether their marked activity may be related to an energy-dependent transport process across the cell membrane. Initial experiments examined the release of 59Fe from SK-N-MC neuroblastoma (NB) cells prelabelled for 3 h at 37 degrees C with 59Fe-transferrin (1.25 microM) and then reincubated in the presence and absence of the chelators for 3 h at 4 degrees C or 37 degrees C. Prelabelled cells released 4-5% of total cellular 59Fe when reincubated in minimum essential medium at 4 degrees C or 37 degrees C. When the chelators desferrioxamine (DFO; 0.1 mM) or PIH (0.1 mM) were reincubated with labelled cells at 4 degrees C, they mobilized only 4-5% of cellular 59Fe, whereas at 37 degrees C, these ligands mobilized 21% and 48% of cell 59Fe, respectively. The lipophilic PIH analogue, 311 (2-hydroxy-1-naphthylaldehyde isonicotinoyl hydrazone; 0.1 mM), which exhibits high anti-proliferative activity, released 10% and 53% of cellular 59Fe when reincubated with prelabelled cells at 4 degrees C and 37 degrees C, respectively. Almost identical results were obtained using the SK-Mel-28 melanoma cell line. These data suggest that temperature-dependent mechanisms are essential for 59Fe mobilization from these cells. Interestingly, the metabolic inhibitors 2,4-dinitrophenol, oligomycin, rotenone, and sodium azide markedly decreased 59Fe mobilization mediated by PIH, but had either no effect or much less effect on 59Fe release by 311. Considering that an ATP-dependent process was involved in 59Fe release by PIH, further studies examined 4 widely used inhibitors of the multi-drug efflux pump P-glycoprotein (P-gp). All of these inhibitors, namely, verapamil (Ver), cyclosporin A (CsA), reserpine (Res) and

  12. Raster graphics display library

    NASA Technical Reports Server (NTRS)

    Grimsrud, Anders; Stephenson, Michael B.

    1987-01-01

    The Raster Graphics Display Library (RGDL) is a high-level subroutine package that gives the advanced raster graphics display capabilities needed. The RGDL uses FORTRAN source code routines to build subroutines modular enough to use as stand-alone routines in a black-box type of environment. Six examples are presented which will teach the use of RGDL in the fastest, most complete way possible. Routines within the display library that are used to produce raster graphics are presented in alphabetical order, each on a separate page. Each user-callable routine is described by function and calling parameters. All common blocks that are used in the display library are listed, and the use of each variable within each common block is discussed. A reference on the include files necessary to compile the display library is also provided; each include file and its purpose are listed. The link map for MOVIE.BYU version 6, a general-purpose computer graphics display system that uses RGDL software, is also included.

  13. Comics & Graphic Novels

    ERIC Educational Resources Information Center

    Cleaver, Samantha

    2008-01-01

    Not so many years ago, comic books in school were considered the enemy. Students caught sneaking comics between the pages of bulky--and less engaging--textbooks were likely sent to the principal. Today, however, comics, including classics such as "Superman" but also their generally more complex, nuanced cousins, graphic novels, are not only…

  14. Mathematical Graphic Organizers

    ERIC Educational Resources Information Center

    Zollman, Alan

    2009-01-01

    As part of a math-science partnership, a university mathematics educator and ten elementary school teachers developed a novel approach to mathematical problem solving derived from research on reading and writing pedagogy. Specifically, research indicates that students who use graphic organizers to arrange their ideas improve their comprehension…

  15. Graphic Novels: A Roundup.

    ERIC Educational Resources Information Center

    Kan, Katherine L.

    1994-01-01

    Reviews graphic novels for young adults, including five titles from "The Adventures of Tintin," a French series that often uses ethnic and racial stereotypes which reflect the time in which they were published, and "Wolverine," a Marvel comic character adventure. (Contains six references.) (LRW)

  16. Graphically Enhanced Science Notebooks

    ERIC Educational Resources Information Center

    Minogue, James; Wiebe, Eric; Madden, Lauren; Bedward, John; Carter, Mike

    2010-01-01

    A common mode of communication in the elementary classroom is the science notebook. In this article, the authors outline the ways in which "graphically enhanced science notebooks" can help engage students in complete and robust inquiry. Central to this approach is deliberate attention to the efficient and effective use of student-generated…

  17. Printer Graphics Package

    NASA Technical Reports Server (NTRS)

    Blanchard, D. C.

    1986-01-01

    Printer Graphics Package (PGP) is tool for making two-dimensional symbolic plots on line printer. PGP created to support development of Heads-Up Display (HUD) simulation. Standard symbols defined with HUD in mind. Available symbols include circle, triangle, quadrangle, window, line, numbers, and text. Additional symbols easily added or built up from available symbols.

  18. A mobile unit for memory retrieval in daily life based on image and sensor processing

    NASA Astrophysics Data System (ADS)

    Takesumi, Ryuji; Ueda, Yasuhiro; Nakanishi, Hidenobu; Nakamura, Atsuyoshi; Kakimori, Nobuaki

    2003-10-01

    We developed a Mobile Unit whose purpose is to support memory retrieval in daily life. In this paper, we describe the unit's two characteristic components: (1) behavior classification with an acceleration sensor, and (2) extraction of environmental differences with image processing technology. In (1), by analyzing the power and frequency of an acceleration sensor oriented along the direction of gravity, the user's activities can be classified into walking, staying, and so on. In (2), by extracting the difference between the beginning and ending scenes of a stay with image processing, the change made by the user is recognized as a difference in the environment. Using these two techniques, specific scenes of daily life can be extracted and important information at scene changes can be recorded. In particular, we describe how the unit supports retrieving important things, such as an item left behind or a task interrupted midway.
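
    A minimal sketch of step (1), classifying fixed windows of the gravity-axis accelerometer signal by power and dominant frequency (thresholds and sampling rate are illustrative, not the unit's calibrated values):

      import numpy as np

      def classify_windows(acc_gravity_axis, fs=50.0, win_s=2.0, power_thresh=0.05):
          """Label each window of the gravity-axis signal as walk/stay/other.

          Walking shows periodic energy near 1-3 Hz; staying shows little power.
          """
          win = int(fs * win_s)
          labels = []
          for i in range(0, len(acc_gravity_axis) - win + 1, win):
              seg = np.asarray(acc_gravity_axis[i:i + win], dtype=float)
              seg -= seg.mean()                      # remove the gravity offset
              power = np.mean(seg**2)
              spectrum = np.abs(np.fft.rfft(seg))
              freqs = np.fft.rfftfreq(win, 1.0 / fs)
              dominant = freqs[1:][np.argmax(spectrum[1:])]  # skip the DC bin
              if power < power_thresh:
                  labels.append("stay")
              elif 1.0 <= dominant <= 3.0:
                  labels.append("walk")
              else:
                  labels.append("other")
          return labels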

  19. Evaluation of food processing wastewater loading characteristics on metal mobilization within the soil.

    PubMed

    Julien, Ryan; Safferman, Steven

    2015-01-01

    Wastewater generated during food processing is commonly treated using land-application systems, which rely primarily on soil microbes to transform nutrients and organic compounds into benign byproducts. Naturally occurring metals in the soil may be chemically reduced via microbially mediated oxidation-reduction reactions as oxygen becomes depleted. Some metals, such as manganese and iron, become water soluble when chemically reduced, leading to groundwater contamination. Alternatively, metals within the wastewater may not become assimilated into the soil and may leach into the groundwater if the environment is not sufficiently oxidizing. A lab-scale column study was conducted to investigate the impacts of wastewater loading values on metal mobilization within the soil. Oxygen content and volumetric water data were collected via soil sensors for the duration of the study. The pH, chemical oxygen demand, manganese, and iron concentrations in the influent and effluent water from each column were measured. Average organic loading and organic loading per dose were shown, using Spearman's rank correlation coefficient, to have statistically significant impacts on effluent water quality. The hydraulic resting period qualitatively appeared to affect effluent water quality. This study verifies that excessive organic loading of land-application systems causes mobilization of naturally occurring metals and prevents those added in the wastewater from becoming immobilized, resulting in ineffective wastewater treatment. Results also indicate the need to consider the organic dose load and hydraulic resting period in the treatment system design. Findings from this study demonstrate that waste application twice daily may encourage soil aeration and allow for increased organic loading while limiting the mobilization of metals already in the soil and those being applied.
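
    The reductive dissolution referred to above is conventionally written as half-reactions such as the following (in LaTeX; standard geochemistry, not quoted from the paper):

      \begin{align*}
        \mathrm{MnO_2 + 4\,H^+ + 2\,e^-} &\rightarrow \mathrm{Mn^{2+} + 2\,H_2O} \\
        \mathrm{Fe(OH)_3 + 3\,H^+ + e^-} &\rightarrow \mathrm{Fe^{2+} + 3\,H_2O}
      \end{align*}

    In both cases the reduced species (Mn2+, Fe2+) is water soluble, which is why oxygen depletion under excessive organic loading mobilizes these metals.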

  20. Evaluation of food processing wastewater loading characteristics on metal mobilization within the soil.

    PubMed

    Julien, Ryan; Safferman, Steven

    2015-01-01

    Wastewater generated during food processing is commonly treated using land-application systems, which rely primarily on soil microbes to transform nutrients and organic compounds into benign byproducts. Naturally occurring metals in the soil may be chemically reduced via microbially mediated oxidation-reduction reactions as oxygen becomes depleted. Some metals, such as manganese and iron, become water soluble when chemically reduced, leading to groundwater contamination. Alternatively, metals within the wastewater may not become assimilated into the soil and may leach into the groundwater if the environment is not sufficiently oxidizing. A lab-scale column study was conducted to investigate the impacts of wastewater loading values on metal mobilization within the soil. Oxygen content and volumetric water data were collected via soil sensors for the duration of the study. The pH, chemical oxygen demand, manganese, and iron concentrations in the influent and effluent water from each column were measured. Average organic loading and organic loading per dose were shown, using Spearman's rank correlation coefficient, to have statistically significant impacts on effluent water quality. The hydraulic resting period qualitatively appeared to affect effluent water quality. This study verifies that excessive organic loading of land-application systems causes mobilization of naturally occurring metals and prevents those added in the wastewater from becoming immobilized, resulting in ineffective wastewater treatment. Results also indicate the need to consider the organic dose load and hydraulic resting period in the treatment system design. Findings from this study demonstrate that waste application twice daily may encourage soil aeration and allow for increased organic loading while limiting the mobilization of metals already in the soil and those being applied. PMID:26327299

  1. Hydrothermal Processes and Mobile Element Transport in Martian Impact Craters - Evidence from Terrestrial Analogue Craters

    NASA Technical Reports Server (NTRS)

    Newsom, H. E.; Nelson, M. J.; Shearer, C. K.; Dressler, B. L.

    2005-01-01

    Hydrothermal alteration and chemical transport involving impact craters probably occurred on Mars throughout its history. Our studies of alteration products and mobile element transport in ejecta blanket and drill core samples from impact craters show that these processes may have contributed to the surface composition of Mars. Recent work on the Chicxulub Yaxcopoil-1 drill core has provided important information on the relative mobility of many elements that may be relevant to Mars. The Chicxulub impact structure in the Yucatan Peninsula of Mexico and offshore in the Gulf of Mexico is one of the largest impact craters identified on the Earth, has a diameter of 180-200 km, and is associated with the mass extinctions at the K/T boundary. The Yax-1 hole was drilled in 2001 and 2002 on the Yaxcopoil hacienda near Merida on the Yucatan Peninsula. Yax-1 is located just outside of the transient cavity, which explains some of the unusual characteristics of the core stratigraphy. No typical impact melt sheet was encountered in the hole and most of the Yax-1 impactites are breccias. In particular, the impact melt and breccias are only 100 m thick which is surprising taking into account the considerably thicker breccia accumulations towards the center of the structure and farther outside the transient crater encountered by other drill holes.

  2. Colloid/Nanoparticle mobility determining processes investigated by laser- and synchrotron based techniques

    NASA Astrophysics Data System (ADS)

    Schäfer, Thorsten; Huber, Florian; Temgoua, Louis; Claret, Francis; Darbha, Gopala; Chagneau, Aurélie; Fischer, Cornelius; Jacobsen, Chris

    2014-05-01

    Transport of pollutants can occur in the aqueous phase or, for strongly sorbing pollutants, in association with mobile solid phases spanning the range from a couple of nanometers up to approximately 1 μm, usually called colloids or nanoparticles [1,2]. A newer class of pollutants is engineered nanoparticles (ENPs), whose properties differ substantially from those of bulk materials of the same composition and cannot be scaled by simple surface area corrections. Potentially harmful interactions with biological systems and the environment are a new field of research [3]. Challenges in understanding and predicting contaminant mobility include the contaminant speciation, the aquifer surface interaction, and the mobility of the nanoparticles themselves. Especially for colloid/nanoparticle-associated contaminant transport, metal sorption reversibility is a key element for long-term mobility prediction. The spatial resolution needed clearly demands nanoscopic techniques benefiting from new technical developments in the laser and synchrotron communities [4]. Furthermore, high energy resolution is needed to resolve either different chemical species or the oxidation state of redox-sensitive elements. In the context of successful planning of remediation strategies for contaminated sites, this chemical information is categorically needed. In addition, chemical sensitivity as well as post-processing methods extracting trace chemical information from a complex geo-matrix are required. The presentation will give examples of homogeneous and heterogeneous nucleation of nanoparticles [5], the speciation of radionuclides through incorporation in these newly formed phases [6], the changes of surface roughness and charge heterogeneity and their impact on nanoparticle mobility [7], and the sorption of organic colloids on mineral surfaces leading to functional group fractionation and consequently different metal binding environments as unraveled by time-resolved laser fluorescence measurements [8

  3. Timeseries Signal Processing for Enhancing Mobile Surveys: Learning from Field Studies

    NASA Astrophysics Data System (ADS)

    Risk, D. A.; Lavoie, M.; Marshall, A. D.; Baillie, J.; Atherton, E. E.; Laybolt, W. D.

    2015-12-01

    Vehicle-based surveys using laser and other analyzers are now commonplace in research and industry. In many cases, when these studies target biologically relevant gases like methane and carbon dioxide, the minimum detection limits are coarse (ppm) relative to the analyzer's capabilities (ppb) because of the inherent variability in ambient background concentrations across the landscape, which creates noise and uncertainty. This variation arises from localized biological sinks and sources, but also from atmospheric turbulence, air pooling, and other factors. Computational processing routines are widely used in many fields to increase the resolution of a target signal in temporally dense data, and they offer promise for enhancing mobile surveying techniques. Signal processing routines can help identify anomalies at very low levels, or can be used inversely to remove localized industrially emitted anomalies from ecological data. This presentation integrates lessons from various studies in which simple signal processing routines were used successfully to isolate different temporally varying components of 1 Hz timeseries measured with laser- and UV fluorescence-based analyzers. As illustrative datasets, we present results from industrial fugitive emission studies across Canada's western provinces and other locations, as well as an ecological study that aimed to model near-surface concentration variability across different biomes within eastern Canada. In these cases, signal processing algorithms contributed significantly to the clarity of both industrial and ecological processes. In some instances, signal processing was too computationally intensive for real-time in-vehicle processing, but we identified workarounds for analyzer-embedded software that contributed to an improvement in the real-time resolution of small anomalies. Signal processing is a natural accompaniment to these datasets, and many avenues are open to researchers who wish to enhance existing, and future
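
    One simple routine of the kind described - separating a slowly varying ambient background from short plume anomalies in a 1 Hz series - might look like this (rolling-quantile baseline; window and threshold values are illustrative, not taken from these studies):

      import pandas as pd

      def split_background_anomaly(series, window_s=300, quantile=0.05, k=3.0):
          """Split a 1 Hz concentration series into background and plumes.

          Background: rolling low quantile (slowly varying ambient level).
          Plumes: excursions more than k rolling standard deviations above it.
          """
          s = pd.Series(series, dtype=float)
          background = s.rolling(window_s, center=True, min_periods=1).quantile(quantile)
          residual = s - background
          sigma = residual.rolling(window_s, center=True, min_periods=1).std()
          plumes = residual.where(residual > k * sigma, 0.0)
          return background, plumes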

  4. Graphical Contingency Analysis Tool

    SciTech Connect

    2010-03-02

    GCA is a visual analytic tool for power grid contingency analysis that provides decision support for power grid operations. GCA allows power grid operators to quickly gain situational awareness of the power grid by converting large amounts of operational data to the graphic domain with a color-contoured map; identify system trends and foresee and discern emergencies by performing trending analysis; identify the relationships between system configurations and affected assets by conducting clustering analysis; and identify the best action by interactively evaluating candidate actions.

  5. Graphic Grown Up

    ERIC Educational Resources Information Center

    Kim, Ann

    2009-01-01

    It's no secret that children and YAs are clued in to graphic novels (GNs) and that comics-loving adults are positively giddy that this format is getting the recognition it deserves. Still, there is a whole swath of library card-carrying grown-up readers out there with no idea where to start. Splashy movies such as "300" and "Spider-Man" and their…

  6. A stable solution-processed polymer semiconductor with record high-mobility for printed transistors

    PubMed Central

    Li, Jun; Zhao, Yan; Tan, Huei Shuan; Guo, Yunlong; Di, Chong-An; Yu, Gui; Liu, Yunqi; Lin, Ming; Lim, Suo Hon; Zhou, Yuhua; Su, Haibin; Ong, Beng S.

    2012-01-01

    Microelectronic circuits/arrays produced via high-speed printing instead of traditional photolithographic processes offer an appealing approach to creating the long-sought-after low-cost, large-area flexible electronics. Foremost among critical enablers to propel this paradigm shift in manufacturing is a stable, solution-processable, high-performance semiconductor for printing functionally capable thin-film transistors, the fundamental building blocks of microelectronics. We report herein the processing and optimisation of solution-processable polymer semiconductors for thin-film transistors, demonstrating very high field-effect mobility, high on/off ratio, and excellent shelf-life and operating stabilities under ambient conditions. Exceptionally high-gain inverters and functional ring oscillator devices on flexible substrates have been demonstrated. This optimised polymer semiconductor represents a significant progress in semiconductor development, dispelling prevalent skepticism surrounding practical usability of organic semiconductors for high-performance microelectronic devices, opening up application opportunities hitherto functionally or economically inaccessible with silicon technologies, and providing an excellent structural framework for fundamental studies of charge transport in organic systems. PMID:23082244

  7. Mobile air monitoring data-processing strategies and effects on spatial air pollution trends

    NASA Astrophysics Data System (ADS)

    Brantley, H. L.; Hagler, G. S. W.; Kimbrough, E. S.; Williams, R. W.; Mukerjee, S.; Neas, L. M.

    2014-07-01

    The collection of real-time air quality measurements while in motion (i.e., mobile monitoring) is currently conducted worldwide to evaluate in situ emissions, local air quality trends, and air pollutant exposure. This measurement strategy pushes the limits of traditional data analysis with complex second-by-second multipollutant data varying as a function of time and location. Data reduction and filtering techniques are often applied to deduce trends, such as pollutant spatial gradients downwind of a highway. However, mobile monitoring studies rarely report the sensitivity of their results to the chosen data-processing approaches. The study reported here utilized 40 h (> 140 000 observations) of mobile monitoring data collected on a roadway network in central North Carolina to explore common data-processing strategies, including local emission plume detection, background estimation, and averaging techniques for spatial trend analyses. One-second time resolution measurements of ultrafine particles (UFPs), black carbon (BC), particulate matter (PM), carbon monoxide (CO), and nitrogen dioxide (NO2) were collected on 12 unique driving routes that were each sampled repeatedly. The route with the highest number of repetitions was used to compare local exhaust plume detection and averaging methods. Analyses demonstrate that the multiple local exhaust plume detection strategies reported produce generally similar results and that utilizing a median of measurements taken within a specified route segment (as opposed to a mean) may be sufficient to avoid bias in near-source spatial trends. A time-series-based method of estimating background concentrations was shown to produce similar but slightly lower estimates than a location-based method. For the complete data set the estimated contributions of the background to the mean pollutant concentrations were as follows: BC (15%), UFPs (26%), CO (41%), PM2.5-10 (45%), NO2 (57%), PM10 (60%), PM2.5 (68%). Lastly, while…
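
    To make two of the compared strategies concrete, here is a minimal sketch of a time-series background estimate followed by per-segment median (versus mean) aggregation (Python/pandas; the column names, the 5th-percentile choice, and the 300 s window are illustrative assumptions, not the study's exact parameters):

        import pandas as pd

        def background_and_segments(df, window="300s", q=0.05):
            """df: time-indexed DataFrame with assumed columns 'conc'
            (pollutant concentration) and 'segment' (road-segment label)."""
            out = df.copy()
            # A low rolling quantile tracks the slowly varying regional
            # background underneath local exhaust plumes.
            out["background"] = out["conc"].rolling(window, min_periods=10).quantile(q)
            out["local"] = out["conc"] - out["background"]
            # Median per segment resists bias from rare, large plume spikes.
            return out, out.groupby("segment")["local"].agg(["median", "mean", "count"])

    Comparing the median and mean columns segment by segment reproduces the kind of sensitivity check the study recommends.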

  8. Linking geochemical processes in mud volcanoes with arsenic mobilization driven by organic matter.

    PubMed

    Liu, Chia-Chuan; Kar, Sandeep; Jean, Jiin-Shuh; Wang, Chung-Ho; Lee, Yao-Chang; Sracek, Ondra; Li, Zhaohui; Bundschuh, Jochen; Yang, Huai-Jen; Chen, Chien-Yen

    2013-11-15

    The present study deals with geochemical characterization of mud fluids and sediments collected from Kunshuiping (KSP), Liyushan (LYS), Wushanting (WST), Sinyangnyuhu (SYNH), Hsiaokunshui (HKS) and Yenshuikeng (YSK) mud volcanoes in southwestern Taiwan. Chemical constituents (cations, anions, trace elements, organic carbon, humic acid, and stable isotopes) in both fluids and mud were analyzed to investigate the geochemical processes and spatial variability among the mud volcanoes under consideration. Analytical results suggested that the anoxic mud volcanic fluids are highly saline, implying connate water as the probable source. The isotopic signature indicated that δ(18)O-rich fluids may be associated with silicate and carbonate minerals released through water-rock interaction, along with dehydration of clay minerals. Considerable amounts of arsenic in mud irrespective of fluid composition suggested possible release through biogeochemical processes in the subsurface environment. Sequential extraction of As from the mud indicated that As was mostly present in organic and sulphidic phases, and adsorbed on amorphous Mn oxyhydroxides. Volcanic mud and fluids are rich in organic matter (in terms of organic carbon), and the presence of humic acid in mud has implications for the binding of arsenic. Functional groups of humic acid also showed variable sources of organic matter among the mud volcanoes being examined. Because arsenate concentration in the mud fluids was found to be independent of geochemical factors, it was considered that organic matter may induce arsenic mobilization through an adsorption/desorption mechanism with humic substances under reducing conditions. Organic matter therefore plays a significant role in the mobility of arsenic in mud volcanoes.

  9. Career Opportunities in Computer Graphics.

    ERIC Educational Resources Information Center

    Langer, Victor

    1983-01-01

    Reviews the impact of computer graphics on industrial productivity. Details the computer graphics technician curriculum at Milwaukee Area Technical College and the cooperative efforts of business and industry to fund and equip the program. (SK)

  10. Span graphics display utilities handbook, first edition

    NASA Technical Reports Server (NTRS)

    Gallagher, D. L.; Green, J. L.; Newman, R.

    1985-01-01

    The Space Physics Analysis Network (SPAN) is a computer network connecting scientific institutions throughout the United States. This network provides an avenue for timely, correlative research between investigators, in a multidisciplinary approach to space physics studies. An objective in the development of SPAN is to make available direct and simplified procedures that scientists can use, without specialized training, to exchange information over the network. Information exchanges include raw and processed data, analysis programs, correspondence, documents, and graphic images. This handbook details procedures that can be used to exchange graphic images over SPAN. The intent is to periodically update this handbook to reflect the constantly changing facilities available on SPAN. The utilities described within reflect an earnest attempt to provide useful descriptions of working utilities that can be used to transfer graphic images across the network. Whether graphic images represent satellite observations or theoretical modeling, and whether they are device dependent or independent, the SPAN graphics display utilities handbook will be the user's guide to graphic image exchange.

  11. Beyond solutes - Mobile matter and its role for the properties, processes and functions of natural porous media

    NASA Astrophysics Data System (ADS)

    Totsche, Kai U.

    2013-04-01

    This presentation will focus on the vastly neglected but rather fascinating aspects of mobile colloidal and particulate materials in natural porous media. The substance spectra of mobile matter in soils, sediments and aquifers will be introduced. Besides clay minerals, carbonates and the oxides and hydroxides of Si, Al, Fe and Mn, these materials comprise in particular organic and biotic material of diverse provenance. Of particular importance is the neo-formation of nanoparticles in the presence of organic matter by means of heterogeneous nucleation and growth. Besides the adsorption of mobile organic matter to mineral surfaces, it is this process that results in the production of organo-mineral phases that differ dramatically in their properties from the pure minerals. Release and formation processes and their role in solute transport will be discussed. The manifold reactions and interactions within and between the involved immobile and mobile solid, liquid and "biotic" phases are highlighted. Special consideration is given to the interdependence of mobile matter, fluids, physical structure, fluid properties and transport. Among others, this comprises the interplay of mobile matter and aggregation, surface inversion, and fluid properties. Based on lab and field experimental evidence and theoretical concepts, the "solutes and solution" approach will be challenged and the need to step beyond it, theoretically and experimentally, will be justified.

  12. Evaluation of a Mobile Hot Cell Technology for Processing Idaho National Laboratory Remote-Handled Wastes

    SciTech Connect

    B.J. Orchard; L.A. Harvego; R.P. Miklos; F. Yapuncich; L. Care

    2009-03-01

    The Idaho National Laboratory (INL) currently does not have the necessary capabilities to process all remote-handled wastes resulting from the Laboratory's nuclear-related missions. Over the years, various U.S. Department of Energy (DOE)-sponsored programs undertaken at the INL have produced radioactive wastes and other materials that are categorized as remote-handled (contact radiological dose rate > 200 mR/hr). These materials include Spent Nuclear Fuel (SNF), transuranic (TRU) waste, waste requiring geological disposal, low-level waste (LLW), mixed waste (both radioactive and hazardous per the Resource Conservation and Recovery Act [RCRA]), and activated and/or radioactively-contaminated reactor components. The waste consists primarily of uranium, plutonium, other TRU isotopes, and shorter-lived isotopes such as cesium and cobalt, with radiological dose rates up to 20,000 R/hr. The hazardous constituents in the waste consist primarily of reactive metals (i.e., sodium and sodium-potassium alloy [NaK]), which are reactive and ignitable per RCRA, making the waste difficult to handle and treat. A smaller portion of the waste is contaminated with other hazardous components (i.e., RCRA toxicity-characteristic metals). Several analyses of alternatives for providing the remote-handling and treatment capability needed to manage INL's remote-handled waste have been conducted over the years, covering options ranging from modification of existing hot cells to construction of new hot cells. Previous analyses identified a mobile processing unit as an alternative for providing the required remote-handled waste processing capability; however, it was summarily dismissed as a viable alternative based on limitations of the specific design considered. In 2008 INL solicited expressions of interest from vendors who could provide existing, demonstrated technology that could be applied to the retrieval, sorting, treatment (as required), and…

  13. Graphical environment for DAQ simulations

    NASA Astrophysics Data System (ADS)

    Wang, Chung-Ching; Booth, Alexander W.; Chen, Yen-Min; Botlo, Michael

    1994-02-01

    At the Superconducting Super Collider Laboratory (SSCL) a tool called DAQSIM has been developed to study the behavior of data acquisition (DAQ) systems. This paper reports and discusses the use of graphics in DAQSIM. DAQSIM graphics includes a graphical user interface (GUI), animation, debugging, and control facilities. DAQSIM graphics not only provides a convenient DAQ simulation environment, it also serves as an efficient manager in simulation development and verification.

  14. Approximate Counting of Graphical Realizations

    PubMed Central

    2015-01-01

    In 1999, Kannan, Tetali and Vempala proposed an MCMC method to uniformly sample all possible realizations of a given graphical degree sequence and conjectured its rapidly mixing nature. Recently their conjecture was proved for regular graphs (by Cooper, Dyer and Greenhill, 2007), for regular directed graphs (by Greenhill, 2011) and for half-regular bipartite graphs (by Miklós, Erdős and Soukup, 2013). Several heuristics exist for counting the number of possible realizations (via sampling processes), and while they work well in practice, so far no approximation guarantees exist for such an approach. This paper is the first to develop a method for counting realizations with a provable approximation guarantee. In fact, we solve a slightly more general problem: besides the graphical degree sequence, a small set of forbidden edges is also given. We show that for the general problem (which contains the Greenhill problem and the Miklós, Erdős and Soukup problem as special cases) the derived MCMC process is rapidly mixing. Further, we show that this new problem is self-reducible; therefore, it provides a fully polynomial randomized approximation scheme (FPRAS) for counting all realizations. PMID:26161994
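
    The MCMC moves in question are degree-preserving edge swaps; a toy sampler along those lines can be sketched as follows (Python/networkx; the swap count is an arbitrary burn-in choice, and no mixing or approximation guarantee is claimed for this illustration):

        import networkx as nx

        def sample_realization(degree_sequence, n_swaps=10_000, seed=1):
            """Randomize one realization of a graphical degree sequence."""
            # Start from a deterministic realization...
            G = nx.havel_hakimi_graph(degree_sequence)
            # ...then apply degree-preserving double edge swaps, the move
            # set whose rapid mixing the paper analyzes.
            nx.double_edge_swap(G, nswap=n_swaps, max_tries=10 * n_swaps, seed=seed)
            return G

        G = sample_realization([3, 3, 2, 2, 2, 2])
        print(sorted(d for _, d in G.degree()))   # degrees are preserved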

  15. Graphic Novels and School Libraries

    ERIC Educational Resources Information Center

    Rudiger, Hollis Margaret; Schliesman, Megan

    2007-01-01

    School libraries serving children and teenagers today should be committed to collecting graphic novels to the extent that their budgets allow. However, the term "graphic novel" is enough to make some librarians--not to mention administrators and parents--pause. Graphic novels are simply book-length comics. They can be works of fiction or…

  16. Selecting Mangas and Graphic Novels

    ERIC Educational Resources Information Center

    Nylund, Carol

    2007-01-01

    The decision to add graphic novels, and particularly the Japanese styled called manga, was one the author has debated for a long time. In this article, the author shares her experience when she purchased graphic novels and mangas to add to her library collection. She shares how graphic novels and mangas have revitalized the library.

  17. Low Cost Graphics. Second Edition.

    ERIC Educational Resources Information Center

    Tinker, Robert F.

    This manual describes the CALM TV graphics interface, a low-cost means of producing quality graphics on an ordinary TV. The system permits the output of data in graphic as well as alphanumeric form and the input of data from the face of the TV using a light pen. The integrated circuits required in the interface can be obtained from standard…

  18. Graphics performance in rich Internet applications.

    PubMed

    Hoetzlein, Rama C

    2012-01-01

    Rendering performance for rich Internet applications (RIAs) has recently focused on the debate between using Flash and HTML5 for streaming video and gaming on mobile devices. A key area not widely explored, however, is the scalability of raw bitmap graphics performance for RIAs. Does Flash render animated sprites faster than HTML5? How much faster is WebGL than Flash? Answers to these questions are essential for developing large-scale data visualizations, online games, and truly dynamic websites. A new test methodology analyzes graphics performance across RIA frameworks and browsers, revealing specific performance outliers in existing frameworks. The results point toward a future in which all online experiences might be GPU accelerated. PMID:24806992

  19. Design and Certification of the Extravehicular Activity Mobility Unit (EMU) Water Processing Jumper

    NASA Technical Reports Server (NTRS)

    Peterson, Laurie J.; Neumeyer, Derek J.; Lewis, John F.

    2006-01-01

    The Extravehicular Mobility Units (EMUs) onboard the International Space Station (ISS) experienced a failure due to cooling water contamination from biomass and corrosion byproducts forming solids around the EMU pump rotor. The coolant had no biocide and a low pH, which induced biofilm growth and corrosion precipitates, respectively. NASA JSC was tasked with building hardware to clean the ionic, organic, and particulate load from the EMU coolant loop before and after Extravehicular Activities (EVAs). Based on a returned sample of the EMU coolant loop, the chemical load was well understood, but there was not sufficient volume in the returned sample to analyze particulates. Through work with EMU specialists, chemists, EVA Mission Operations Directorate (MOD) representatives, safety and mission assurance, the astronaut crew, and team engineers, requirements were developed for the EMU Water Processing hardware (sometimes referred to as the Airlock Coolant Loop Recovery [A/L CLR] system). Those requirements covered the operable levels of ionic, organic, and particulate load; interfaces to the EMU; maximum cycle time; operating pressure drop, flow rate, and temperature; leakage rates; and biocide levels for storage. Design work began in February 2005 and certification was completed in April 2005 to support a return-to-flight launch date of May 12, 2005. This paper will discuss the details of the design and certification of the EMU Water Processing hardware and its components.

  1. An atomic orbital-based formulation of analytical gradients and nonadiabatic coupling vector elements for the state-averaged complete active space self-consistent field method on graphical processing units

    SciTech Connect

    Snyder, James W.; Hohenstein, Edward G.; Luehr, Nathan; Martínez, Todd J.

    2015-10-21

    We recently presented an algorithm for state-averaged complete active space self-consistent field (SA-CASSCF) orbital optimization that capitalizes on sparsity in the atomic orbital basis set to reduce the scaling of computational effort with respect to molecular size. Here, we extend those algorithms to calculate the analytic gradient and nonadiabatic coupling vectors for SA-CASSCF. Combining the low computational scaling with acceleration from graphical processing units allows us to perform SA-CASSCF geometry optimizations for molecules with more than 1000 atoms. The new approach will make minimal energy conical intersection searches and nonadiabatic dynamics routine for molecular systems with O(10²) atoms.

  2. 77 FR 38597 - Multistakeholder Process To Develop Consumer Data Privacy Code of Conduct Concerning Mobile...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-28

    …applications and interactive services for mobile devices handle personal data. The Privacy Blueprint is… Data Privacy Code of Conduct Concerning Mobile Application Transparency. AGENCY: National… INFORMATION: Background: On February 23, 2012, the White House released Consumer Data Privacy in a…

  3. Finding the Sweet Spot: Network Structures and Processes for Increased Knowledge Mobilization

    ERIC Educational Resources Information Center

    Briscoe, Patricia; Pollock, Katina; Campbell, Carol; Carr-Harris, Shasta

    2015-01-01

    The use of networks in public education is one of many knowledge mobilization (KMb) strategies utilized to promote evidence-based research into practice. However, challenges exist in the ability to mobilize knowledge through networks. The purpose of this paper is to explore how networks work. Data were collected from virtual discussions for an…

  4. The Effects of Image-Based Concept Mapping on the Learning Outcomes and Cognitive Processes of Mobile Learners

    ERIC Educational Resources Information Center

    Yen, Jung-Chuan; Lee, Chun-Yi; Chen, I-Jung

    2012-01-01

    The purpose of this study was to investigate the effects of different teaching strategies (text-based concept mapping vs. image-based concept mapping) on the learning outcomes and cognitive processes of mobile learners. Eighty-six college freshmen enrolled in the "Local Area Network Planning and Implementation" course taught by the first author…

  5. Ash iron mobilization through physicochemical processing in volcanic eruption plumes: a numerical modeling approach

    NASA Astrophysics Data System (ADS)

    Hoshyaripour, G. A.; Hort, M.; Langmann, B.

    2015-08-01

    It has been shown that volcanic ash fertilizes the Fe-limited areas of the surface ocean by releasing soluble iron. As ash iron is mostly insoluble upon eruption, it is hypothesized that heterogeneous in-plume and in-cloud processing of the ash promotes iron solubilization. Direct evidence concerning such processes is, however, lacking. In this study, a 1-D numerical model is developed to simulate the physicochemical interactions of gas, ash, and aerosol in volcanic eruption plumes, focusing on the iron mobilization processes at temperatures between 600 and 0 °C. Results show that sulfuric acid and water vapor condense on the ash surface at ~150 and ~50 °C, respectively. This liquid phase then efficiently scavenges the surrounding gases (> 95% of HCl, 3-20% of SO2 and 12-62% of HF), forming an extremely acidic coating on the ash surface. The low-pH conditions of the aqueous film promote acid-mediated dissolution of the Fe-bearing phases present in the ash material. We estimate that 0.1-33% of the total iron available at the ash surface is dissolved in the aqueous phase before the freezing point is reached. The efficiency of dissolution is controlled by the halogen content of the erupted gas as well as the mineralogy of the iron at the ash surface: elevated halogen concentrations and the presence of Fe2+-carrying phases lead to the highest dissolution efficiency. Findings of this study are in agreement with data obtained through leaching experiments.

  6. A mobile monitoring system to understand the processes controlling episodic events in Corpus Christi Bay.

    PubMed

    Islam, Mohammad Shahidul; Bonner, James S; Ojo, Temitope O; Page, Cheryl

    2011-04-01

    Corpus Christi Bay (TX, USA) is a shallow wind-driven bay and thereby, can be characterized as a highly pulsed system. It cycles through various episodic events such as hypoxia, water column stratification, sediment resuspension, flooding, etc. Understanding of the processes that control these events requires an efficient observation system that can measure various hydrodynamic and water quality parameters at the multitude of spatial and temporal scales of interest. As part of our effort to implement an efficient observation system for Corpus Christi Bay, a mobile monitoring system was developed that can acquire and visualize data measured by various submersible sensors on an undulating tow-body deployed behind a research vessel. Along with this system, we have installed a downward-looking Acoustic Doppler Current Profiler to measure the vertical profile of water currents. Real-time display of each measured parameter intensity (measured value relative to a pre-set peak value) guides in selecting the transect route to capture the event of interest. In addition, large synchronized datasets measured by this system provide an opportunity to understand the processes that control various episodic events in the bay. To illustrate the capability of this system, datasets from two research cruises are presented in this paper that help to clarify processes inducing an inverse estuary condition at the mouth of the ship channel and hypoxia at the bottom of the bay. These measured datasets can also be used to drive numerical models to understand various environmental phenomena that control the water quality of the bay. PMID:20556650

  7. Dietary Assessment on a Mobile Phone Using Image Processing and Pattern Recognition Techniques: Algorithm Design and System Prototyping

    PubMed Central

    Probst, Yasmine; Nguyen, Duc Thanh; Tran, Minh Khoi; Li, Wanqing

    2015-01-01

    Dietary assessment, while traditionally based on pen-and-paper, is rapidly moving towards automatic approaches. This study describes an Australian automatic food record method and its prototype for dietary assessment via the use of a mobile phone and techniques of image processing and pattern recognition. Common visual features including scale invariant feature transformation (SIFT), local binary patterns (LBP), and colour are used for describing food images. The popular bag-of-words (BoW) model is employed for recognizing the images taken by a mobile phone for dietary assessment. Technical details are provided together with discussions on the issues and future work. PMID:26225994
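
    A bare-bones version of the bag-of-words pipeline described might look like this (Python with OpenCV and scikit-learn; the vocabulary size is an assumption, and only SIFT is used here, whereas the study combines SIFT, LBP, and colour features):

        import numpy as np
        import cv2
        from sklearn.cluster import KMeans

        def bow_histograms(images, n_words=200):
            """Represent each (BGR) food image as a visual-word histogram."""
            sift = cv2.SIFT_create()
            per_image = []
            for img in images:
                gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
                _, desc = sift.detectAndCompute(gray, None)
                per_image.append(desc if desc is not None
                                 else np.empty((0, 128), np.float32))
            # Visual vocabulary: cluster all local descriptors into "words".
            vocab = KMeans(n_clusters=n_words, n_init=10).fit(np.vstack(per_image))
            hists = []
            for desc in per_image:
                words = vocab.predict(desc) if len(desc) else np.empty(0, int)
                h, _ = np.histogram(words, bins=np.arange(n_words + 1))
                hists.append(h / max(h.sum(), 1))   # normalized histogram
            return np.array(hists)

    The resulting histograms would then feed a conventional classifier (e.g., an SVM) to label the food item.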

  8. Modeling of geochemical processes related to uranium mobilization in the groundwater of a uranium mine.

    PubMed

    Gómez, P; Garralón, A; Buil, B; Turrero, Ma J; Sánchez, L; de la Cruz, B

    2006-07-31

    This paper describes the processes leading to uranium distribution in the groundwater of five boreholes near a restored uranium mine (dug in granite), and the environmental impact of restoration work in the discharge area. The groundwater uranium content varied from <1 microg/L in reduced water far from the area of influence of the uranium ore-containing dyke, to 104 microg/L in a borehole hydraulically connected to the mine. These values, however, fail to reflect a chemical equilibrium between the water and the pure mineral phases. A model for the mobilization of uranium in this groundwater is therefore proposed. This involves the percolation of oxidized waters through the fractured granite, leading to the oxidation of pyrite and arsenopyrite and the precipitation of iron oxyhydroxides. This in turn leads to the dissolution of the primary pitchblende and, subsequently, the release of U(VI) species to the groundwater. These U(VI) species are retained by iron hydroxides. Secondary uranium species are eventually formed as reducing conditions are re-established due to water-rock interactions.

  9. On-site installation and shielding of a mobile electron accelerator for radiation processing

    NASA Astrophysics Data System (ADS)

    Catana, Dumitru; Panaitescu, Julian; Axinescu, Silviu; Manolache, Dumitru; Matei, Constantin; Corcodel, Calin; Ulmeanu, Magdalena; Bestea, Virgil

    1995-05-01

    The development of radiation processing of some bulk products, e.g. grains or potatoes, would be sustained if the irradiation could be carried out at the place of storage, e.g., a silo. A promising solution is proposed, consisting of a mobile electron accelerator installed on a pair of trucks and traveling from one customer to another. The energy of the accelerated electrons was chosen at 5 MeV, with 10 to 50 kW beam power. Irradiation is possible either with electrons or with bremsstrahlung. A major problem of the above solution is the provision of adequate shielding at the customer's site, with a minimum investment cost. Plans are presented for a bunker which houses the truck carrying the radiation head. The beam is directed vertically downwards, through the truck floor, a transport pipe, and a scanning horn. The irradiation takes place in a pit, through which the products are transported on a belt. The belt path is chosen so as to minimize openings in the shielding. Shielding calculations are presented assuming a working regime with 5 MeV bremsstrahlung. Leakage and scattered radiation are taken into account.

  10. High-mobility solution-processed copper phthalocyanine-based organic field-effect transistors

    NASA Astrophysics Data System (ADS)

    Chaure, Nandu B.; Cammidge, Andrew N.; Chambrier, Isabelle; Cook, Michael J.; Cain, Markys G.; Murphy, Craig E.; Pal, Chandana; Ray, Asim K.

    2011-03-01

    Solution-processed films of 1,4,8,11,15,18,22,25-octakis(hexyl) copper phthalocyanine (CuPc6) were utilized as the active semiconducting layer in the fabrication of organic field-effect transistors (OFETs) in the bottom-gate configuration, using chemical-vapour-deposited silicon dioxide (SiO2) as the gate dielectric. Surface treatment of the gate dielectric with a self-assembled monolayer of octadecyltrichlorosilane (OTS) resulted in values of 4×10⁻² cm² V⁻¹ s⁻¹ and 10⁶ for the saturation mobility and on/off current ratio, respectively. This improvement was accompanied by a shift in the threshold voltage from 3 V for untreated devices to -2 V for OTS-treated devices. The trap density at the interface between the gate dielectric and the semiconductor decreased by about one order of magnitude after the surface treatment. Transistors with OTS-treated gate dielectrics were more stable in air over a 30-day period than untreated ones.
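
    For readers wanting to reproduce such figures of merit, saturation mobility is conventionally extracted from the transfer curve via μ_sat = 2L/(W·Ci) · (d√I_D/dV_G)²; a sketch follows (Python; the geometry and capacitance below are placeholders, not the CuPc6 device values):

        import numpy as np

        def saturation_mobility(vg, i_d, W, L, Ci):
            """mu_sat from a saturation-regime transfer curve.

            vg: gate voltages (V); i_d: drain currents (A), same length.
            W, L: channel width/length (cm); Ci: capacitance (F/cm^2).
            """
            # Least-squares slope of sqrt(|I_D|) versus V_G; in practice the
            # fit is restricted to the linear region above threshold.
            slope, _ = np.polyfit(vg, np.sqrt(np.abs(i_d)), 1)
            return 2 * L / (W * Ci) * slope ** 2

        # Placeholder geometry: W = 1 mm, L = 20 um, Ci = 10 nF/cm^2.
        # mu = saturation_mobility(vg, i_d, W=0.1, L=2e-3, Ci=1e-8)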

  11. Software Package For Real-Time Graphics

    NASA Technical Reports Server (NTRS)

    Malone, Jacqueline C.; Moore, Archie L.

    1991-01-01

    Software package for the master graphics interactive console (MAGIC) at the Western Aeronautical Test Range (WATR) of NASA Ames Research Center provides a general-purpose graphical display system for real-time and post-real-time analysis of data. Written in the C language and intended for use on a workstation of the interactive raster imaging system (IRIS) equipped with the level-V Unix operating system. Enables flight researchers to create their own displays on the basis of individual requirements. Applicable to monitoring of complicated processes in the chemical industry.

  12. Interactive Learning for Graphic Design Foundations

    ERIC Educational Resources Information Center

    Chu, Sauman; Ramirez, German Mauricio Mejia

    2012-01-01

    One of the biggest problems for students majoring in pre-graphic design is students' inability to apply their knowledge to different design solutions. The purpose of this study is to examine the effectiveness of interactive learning modules in facilitating knowledge acquisition during the learning process and to create interactive learning modules…

  13. Arrows: A Special Case of Graphic Communication.

    ERIC Educational Resources Information Center

    Hardin, Pris

    The purpose of this paper is to examine arrow design in relation to the type of pointing, connecting, or processing involved. Three possible approaches to the investigation of arrows as graphic communication include research: by arrow function, relating message structure to arrow design, and linking user expectations to arrow design. The following…

  14. Interactive computer graphics: the arms race

    SciTech Connect

    Hafemeister, D.W.

    1983-01-01

    By using interactive computer graphics (ICG) it is possible to discuss the numerical aspects of some arms race issues with more specificity and in a visual way. The number of variables involved in these issues can be quite large; computers operated in the interactive, graphical mode, can allow exploration of the variables, leading to a greater understanding of the issues. This paper will examine some examples of interactive computer graphics: (1) the relationship between silo hardening and the accuracy, yield, and reliability of ICBMs; (2) target vulnerability (Minuteman, Dense Pack); (3) counterforce vs. countervalue weapons; (4) civil defense; (5) gravitational bias error; (6) MIRV; (7) national vulnerability to a preemptive first strike; (8) radioactive fallout; (9) digital-image processing with charge-coupled devices. 17 references, 11 figures, 1 table.

  15. Interactive computer graphics - Why's, wherefore's and examples

    NASA Technical Reports Server (NTRS)

    Gregory, T. J.; Carmichael, R. L.

    1983-01-01

    The benefits of using computer graphics in design are briefly reviewed. It is shown that computer graphics substantially aids productivity by permitting errors in design to be found immediately and by greatly reducing the cost of fixing the errors and the cost of redoing the process. The possibilities offered by computer-generated displays in terms of information content are emphasized, along with the form in which the information is transferred. The human being is ideally and naturally suited to dealing with information in picture format, and the content rate in communication with pictures is several orders of magnitude greater than with words or even graphs. Since science and engineering involve communicating ideas, concepts, and information, the benefits of computer graphics cannot be overestimated.

  16. Big system: Interactive graphics for the engineer

    NASA Technical Reports Server (NTRS)

    Quenneville, C. E.

    1975-01-01

    The BCS Interactive Graphics System (BIG System) approach to graphics was presented, along with several significant engineering applications. The BIG System precompiler, the graphics support library, and the function requirements of graphics applications are discussed. It was concluded that graphics standardization and a device independent code can be developed to assure maximum graphic terminal transferability.

  17. [Hardware for graphics systems].

    PubMed

    Goetz, C

    1991-02-01

    In all personal computer applications, be it for private or professional use, the decision of which "brand" of computer to buy is of central importance. In the USA Apple computers are mainly used in universities, while in Europe computers of the so-called "industry standard" by IBM (or clones thereof) have been increasingly used for many years. Independently of any brand name considerations, the computer components purchased must meet the current (and projected) needs of the user. Graphic capabilities and standards, processor speed, the use of co-processors, as well as input and output devices such as "mouse", printers and scanners are discussed. This overview is meant to serve as a decision aid. Potential users are given a short but detailed summary of current technical features. PMID:2042260

  18. LONGLIB - A GRAPHICS LIBRARY

    NASA Technical Reports Server (NTRS)

    Long, D.

    1994-01-01

    This library is a set of subroutines designed for vector plotting to CRT's, plotters, dot matrix, and laser printers. LONGLIB subroutines are invoked by program calls similar to standard CALCOMP routines. In addition to the basic plotting routines, LONGLIB contains an extensive set of routines to allow viewport clipping, extended character sets, graphic input, shading, polar plots, and 3-D plotting with or without hidden line removal. LONGLIB capabilities include surface plots, contours, histograms, logarithm axes, world maps, and seismic plots. LONGLIB includes master subroutines, which are self-contained series of commonly used individual subroutines. When invoked, the master routine will initialize the plotting package, and will plot multiple curves, scatter plots, log plots, 3-D plots, etc. and then close the plot package, all with a single call. Supported devices include VT100 equipped with Selanar GR100 or GR100+ boards, VT125s, VT240s, VT220 equipped with Selanar SG220, Tektronix 4010/4014 or 4107/4109 and compatibles, and Graphon GO-235 terminals. Dot matrix printer output is available by using the provided raster scan conversion routines for DEC LA50, Printronix printers, and high or low resolution Trilog printers. Other output devices include QMS laser printers, Postscript compatible laser printers, and HPGL compatible plotters. The LONGLIB package includes the graphics library source code, an on-line help library, scan converter and meta file conversion programs, and command files for installing, creating, and testing the library. The latest version, 5.0, is significantly enhanced and has been made more portable. Also, the new version's meta file format has been changed and is incompatible with previous versions. A conversion utility is included to port the old meta files to the new format. Color terminal plotting has been incorporated. LONGLIB is written in FORTRAN 77 for batch or interactive execution and has been implemented on a DEC VAX series…

  19. GFI - EASY PC GRAPHICS

    NASA Technical Reports Server (NTRS)

    Katz, R. B.

    1994-01-01

    Easy PC Graphics (GFI) is a graphical plot program that permits data to be easily and flexibly plotted. Data is input in a standard format which allows easy data entry and evaluation. Multiple dependent axes are also supported. The program may either be run in a stand alone mode or be embedded in the user's own software. Automatic scaling is built in for several logarithmic and decibel scales. New scales are easily incorporated into the code through the use of object-oriented programming techniques. For the autoscale routines and the actual plotting code, data is not retrieved directly from a file, but a "method" delivers the data, performing scaling as appropriate. Each object (variable) has state information which selects its own scaling. GFI is written in Turbo Pascal version 6.0 for IBM PC compatible computers running MS-DOS. The source code will only compile properly with the Turbo Pascal v. 6.0 or v. 7.0 compilers; however, an executable is provided on the distribution disk. This executable requires at least 64K of RAM and DOS 3.1 or higher, as well as an HP LaserJet printer to print output plots. The standard distribution medium for this program is one 5.25 inch 360K MS-DOS format diskette. The contents of the diskette are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. An electronic copy of the documentation is provided on the distribution medium in ASCII format. GFI was developed in 1993.

  1. Solution-Processed Transistors Using Colloidal Nanocrystals with Composition-Matched Molecular "Solders": Approaching Single Crystal Mobility.

    PubMed

    Jang, Jaeyoung; Dolzhnikov, Dmitriy S; Liu, Wenyong; Nam, Sooji; Shim, Moonsub; Talapin, Dmitri V

    2015-10-14

    Crystalline silicon-based complementary metal-oxide-semiconductor transistors have become a dominant platform for today's electronics. For such devices, expensive and complicated vacuum processes are used in the preparation of active layers. This increases cost and restricts the scope of applications. Here, we demonstrate high-performance solution-processed CdSe nanocrystal (NC) field-effect transistors (FETs) that exhibit very high carrier mobilities (over 400 cm²/(V·s)). This is comparable to the carrier mobilities of crystalline silicon-based transistors. Furthermore, our NC FETs exhibit high operational stability and MHz switching speeds. These NC FETs are prepared by spin coating colloidal solutions of CdSe NCs capped with molecular solders [Cd₂Se₃]²⁻ onto various oxide gate dielectrics followed by thermal annealing. We show that the nature of the gate dielectric plays an important role in soldered CdSe NC FETs. The capacitance of the dielectric and the NC electronic structure near the gate dielectric affect the distribution of localized traps and trap filling, determining the carrier mobility and operational stability of the NC FETs. We expand the application of the NC soldering process to core-shell NCs consisting of a III-V InAs core and a CdSe shell with composition-matched [Cd₂Se₃]²⁻ molecular solders. Soldering the CdSe shells forms a nanoheterostructured material that combines high electron mobility and near-IR photoresponse. PMID:26280943

  2. Computer graphics application in the engineering design integration system

    NASA Technical Reports Server (NTRS)

    Glatt, C. R.; Abel, R. W.; Hirsch, G. N.; Alford, G. E.; Colquitt, W. N.; Stewart, W. A.

    1975-01-01

    The computer graphics aspect of the Engineering Design Integration (EDIN) system and its application to design problems are discussed. Three basic types of computer graphics may be used with the EDIN system for the evaluation of aerospace vehicle preliminary designs: offline graphics systems using vellum-inking or photographic processes, online graphics systems characterized by direct-coupled low-cost storage tube terminals with limited interactive capabilities, and a minicomputer-based refresh terminal offering highly interactive capabilities. The offline systems are characterized by high quality (resolution better than 0.254 mm) and slow turnaround (one to four days). The online systems are characterized by low cost, instant visualization of the computer results, slow line speed (300 baud), poor hard copy, and early limitations on vector graphic input capabilities. The recent acquisition of the Adage 330 Graphic Display system has greatly enhanced the potential for interactive computer-aided design.

  3. DspaceOgre 3D Graphics Visualization Tool

    NASA Technical Reports Server (NTRS)

    Jain, Abhinandan; Myin, Steven; Pomerantz, Marc I.

    2011-01-01

    This general-purpose 3D graphics visualization C++ tool is designed for visualization of simulation and analysis data for articulated mechanisms. Examples of such systems are vehicles, robotic arms, biomechanics models, and biomolecular structures. DspaceOgre builds upon the open-source Ogre3D graphics visualization library. It provides additional classes to support the management of complex scenes involving multiple viewpoints and different scene groups, and can be used as a remote graphics server. This software provides improved support for adding programs at the graphics processing unit (GPU) level for improved performance. It also improves upon the messaging interface it exposes for use as a visualization server.

  4. ARCGRAPH SYSTEM - AMES RESEARCH GRAPHICS SYSTEM

    NASA Technical Reports Server (NTRS)

    Hibbard, E. A.

    1994-01-01

    Ames Research Graphics System, ARCGRAPH, is a collection of libraries and utilities which assist researchers in generating, manipulating, and visualizing graphical data. In addition, ARCGRAPH defines a metafile format that contains device independent graphical data. This file format is used with various computer graphics manipulation and animation packages at Ames, including SURF (COSMIC Program ARC-12381) and GAS (COSMIC Program ARC-12379). In its full configuration, the ARCGRAPH system consists of a two stage pipeline which may be used to output graphical primitives. Stage one is associated with the graphical primitives (i.e. moves, draws, color, etc.) along with the creation and manipulation of the metafiles. Five distinct data filters make up stage one. They are: 1) PLO which handles all 2D vector primitives, 2) POL which handles all 3D polygonal primitives, 3) RAS which handles all 2D raster primitives, 4) VEC which handles all 3D vector primitives, and 5) PO2 which handles all 2D polygonal primitives. Stage two is associated with the process of displaying graphical primitives on a device. To generate the various graphical primitives, create and reprocess ARCGRAPH metafiles, and access the device drivers in the VDI (Video Device Interface) library, users link their applications to ARCGRAPH's GRAFIX library routines. Both FORTRAN and C language versions of the GRAFIX and VDI libraries exist for enhanced portability within these respective programming environments. The ARCGRAPH libraries were developed on a VAX running VMS. Minor documented modification of various routines, however, allows the system to run on the following computers: Cray X-MP running COS (no C version); Cray 2 running UNICOS; DEC VAX running BSD 4.3 UNIX, or Ultrix; SGI IRIS Turbo running GL2-W3.5 and GL2-W3.6; Convex C1 running UNIX; Amdahl 5840 running UTS; Alliant FX8 running UNIX; Sun 3/160 running UNIX (no native device driver); Stellar GS1000 running Stellex (no native device driver)…

  5. All-digital multicarrier demodulators for on-board processing satellites in mobile communication systems

    NASA Astrophysics Data System (ADS)

    Yim, Wan Hung

    Economical operation of future satellite systems for mobile communications can only be fulfilled by using dedicated on-board processing satellites, which would allow both cheap earth terminals and lower space-segment costs. With on-board modems and codecs, the up-link and down-link can be optimized separately. An attractive scheme is to use frequency-division multiple access/single channel per carrier (FDMA/SCPC) on the up-link and time division multiplexing (TDM) on the down-link. This scheme allows mobile terminals to transmit a narrowband, low-power signal, resulting in smaller dishes and high power amplifiers (HPAs) with lower output power. On the up-link, there are hundreds to thousands of FDM channels to be demodulated on-board. The most promising approach is the use of all-digital multicarrier demodulators (MCDs), where analog and digital hardware are efficiently shared among channels, and digital signal processing (DSP) is used at an early stage to take advantage of very large scale integration (VLSI) implementation. An MCD consists of a channellizer for separation of frequency-division multiplexing (FDM) channels, followed by individual demodulators for each channel. The major research areas in MCDs are multirate DSP and optimal estimation for synchronization, which form the basis of the thesis. Complex signal theories are central to the development of structured approaches for the sampling and processing of bandpass signals, which are the foundations of both channellizer and demodulator design. In multirate DSP, polyphase theories replace many ad-hoc, tedious and error-prone design procedures. For example, a polyphase-matrix discrete Fourier transform (DFT) channellizer includes all efficient filter bank techniques as special cases. Also, a polyphase-lattice filter is derived, not only for sampling rate conversion, but also capable of sampling-phase variation, which is required for symbol timing adjustment in all…
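
    A minimal polyphase DFT channellizer of the kind the thesis builds on can be sketched as follows (Python/NumPy; the prototype-filter length is arbitrary, and the commutator ordering and branch phasing are simplified relative to a production design):

        import numpy as np
        from scipy.signal import firwin, lfilter

        def polyphase_dft_channellizer(x, K, taps_per_branch=8):
            """Split complex baseband FDM signal x into K channels at fs/K."""
            h = firwin(K * taps_per_branch, 1.0 / K)   # prototype lowpass
            n = (len(x) // K) * K
            xp = x[:n].reshape(-1, K).T                # branch k: x[m*K + k]
            hp = h.reshape(-1, K).T                    # polyphase parts of h
            y = np.vstack([lfilter(hp[k], 1.0, xp[k]) for k in range(K)])
            # One DFT across branches down-converts all K channels at once,
            # instead of K separate mixers and filters.
            return np.fft.ifft(y, axis=0) * K          # row k: channel k

    Each output row is one SCPC channel, decimated by K and ready for its own demodulator, which is what makes the analog and digital hardware shareable across hundreds of carriers.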

  6. Graphically Speaking: Graphics Software for Non-Artists.

    ERIC Educational Resources Information Center

    Crawford, Walt

    1994-01-01

    Discusses microcomputer-based graphics and describes software for Windows and other operating systems. Highlights include file formats for painting and drawing; sources of artwork, including clip art, scanning, and public domain images; examples; and graphics toolkits. A review of 26 recent articles on personal computers, other hardware, and…

  7. Evaluating Texts for Graphical Literacy Instruction: The Graphic Rating Tool

    ERIC Educational Resources Information Center

    Roberts, Kathryn L.; Brugar, Kristy A.; Norman, Rebecca R.

    2015-01-01

    In this article, we present the Graphical Rating Tool (GRT), which is designed to evaluate the graphical devices that are commonly found in content-area, non-fiction texts, in order to identify books that are well suited for teaching about those devices. We also present a "best of" list of science and social studies books, which includes…

  8. Graphics Display of Foreign Scripts.

    ERIC Educational Resources Information Center

    Abercrombie, John R.

    1987-01-01

    Describes Graphics Project for Foreign Language Learning at the University of Pennsylvania, which has developed ways of displaying foreign scripts on microcomputers. Character design on computer screens is explained; software for graphics, printing, and language instruction is discussed; and a text editor is described that corrects optically…

  9. Graphic Interfaces and Online Information.

    ERIC Educational Resources Information Center

    Percival, J. Mark

    1990-01-01

    Discusses the growing importance of the use of Graphic User Interfaces (GUIs) with microcomputers and online services. Highlights include the development of graphics interfacing with microcomputers; CD-ROM databases; an evaluation of HyperCard as a potential interface to electronic mail and online commercial databases; and future possibilities.…

  10. Low-Budget Graphic Databases.

    ERIC Educational Resources Information Center

    Mahoney, Dan

    1994-01-01

    Explains the use of a standard text-based database program (i.e., dBase III) to run external programs that display graphic files during a database session and reduces costs normally encountered when preparing a computer to run a graphical database. An example is given of a simple database with two fields. (LRW)

  11. Super VGA Primitives Graphics System.

    1992-05-14

    Version 00 These primitives are the lowest level routines needed to perform super VGA graphics on a PC. A sample main program is included that exercises the primitives. Both Lahey and Microsoft FORTRANs have graphics libraries. However, the libraries do not support 256-color graphics at resolutions greater than 320x200. The primitives bypass these libraries while still conforming to standard usage of BIOS. The supported graphics modes depend upon the PC graphics card and its memory. Super VGA resolutions of 640x480 and 800x600 have been tested on an ATI VGA Wonder card with 512K memory and on several 80486 PCs (unknown manufacturers) at retail stores.

  12. Acceleration of Meshfree Radial Point Interpolation Method on Graphics Hardware

    SciTech Connect

    Nakata, Susumu

    2008-09-01

    This article describes a parallel computational technique to accelerate radial point interpolation method (RPIM)-based meshfree computations using graphics hardware. RPIM is one of the meshfree partial differential equation solvers that do not require a mesh structure on the analysis target. In this paper, a technique for accelerating RPIM using graphics hardware is presented. In the method, the computation process is divided into small tasks suited to the parallel, single-instruction-multiple-data (SIMD) architecture of the graphics hardware.
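
    The kernel that maps well onto SIMD graphics hardware is the radial-basis interpolation itself: many small, identical, independent solves. A NumPy sketch of that building block follows (the multiquadric kernel and shape parameter c are illustrative choices, not necessarily those of the article):

        import numpy as np

        def rbf_interpolate(nodes, values, queries, c=1.0):
            """Fit a multiquadric RBF through (nodes, values); evaluate at queries.

            nodes: (n, d) support points; values: (n,); queries: (m, d).
            """
            def phi(a, b):
                # Pairwise multiquadric kernel sqrt(r^2 + c^2); the fully
                # vectorized form is the data-parallel shape GPUs execute well.
                r2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
                return np.sqrt(r2 + c * c)
            weights = np.linalg.solve(phi(nodes, nodes), values)
            return phi(queries, nodes) @ weights

    In an RPIM solver each evaluation point uses only a local support domain, so thousands of such small, identical systems can be processed concurrently, one per thread group.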

  13. Integration of rocket turbine design and analysis through computer graphics

    NASA Technical Reports Server (NTRS)

    Hsu, Wayne; Boynton, Jim

    1988-01-01

    An interactive approach based on engineering computer graphics is used to integrate the design and analysis processes of a rocket engine turbine into a progressive and iterative design procedure. The processes are interconnected through pre- and postprocessors. Graphics are used to generate blade profiles and their stacking, to build finite element models, and to present analysis results in color. Steps of the design process discussed include pitch-line design, axisymmetric hub-to-tip meridional design, and quasi-three-dimensional analysis. The viscous two- and three-dimensional analysis codes are executed after acceptable designs are achieved and estimates of initial losses are confirmed.

  14. Computer graphics and graphic artists: a rocky courtship

    SciTech Connect

    Clark, B.A.

    1982-01-01

    A presentation- and publication-quality computer-graphics system has been implemented at Union Carbide Corporation Nuclear Division over the past four years. Success of the implementation required close interaction between programmers and illustrators. This paper discusses the problems involved in establishing a computer-graphics capability in a conventional graphic arts department. The problems dealt with fall into three areas: identifying and acquiring appropriate hardware, acquiring user-friendly software that could meet stringent quality standards, and overcoming the prejudices and misconceptions of all the people involved.

  15. Baseband switches and transmultiplexers for use in an on-board processing mobile/business satellite system

    NASA Astrophysics Data System (ADS)

    Evans, B. G.; Coakley, F. P.; El-Amin, M. H. M.; Lu, S. C.; Wong, C. W.

    1986-07-01

    The paper reviews the traffic requirements for two specific services which will benefit from the use of on-board processing: (1) business satellites for European coverage and (2) land mobile satellites for Europe. Although the traffic requirements are very different for the two services, the proposed architectures are similar in comprising a mixture of baseband switches and transmultiplexers. The paper reviews various architectures for both components and estimates the chip count and power requirements for the various architectures.

  16. Automatic Palette Identification of Colored Graphics

    NASA Astrophysics Data System (ADS)

    Lacroix, Vinciane

    The median-shift, a new clustering algorithm, is proposed to automatically identify the palette of colored graphics, a prerequisite for graphics vectorization. The median-shift is an iterative process which shifts each data point to the "median" point of its neighborhood, defined by a distance measure and a maximum radius, the only parameter of the method. The process is viewed as a graph transformation which converges to a set of clusters made of one or several connected vertices. As palette identification depends on color perception, the clustering is performed in the L*a*b* feature space. As pixels located on edges are made of mixed colors not expected to be part of the palette, they are removed from the initial data set by an automatic pre-processing step. Results are shown on scanned maps and on the Macbeth color chart and compared to well-established methods.
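
    A compact version of the described iteration (Python/NumPy; the component-wise median and the convergence tolerance are assumptions about details the summary leaves open, and a real run would operate on L*a*b* pixel values with edge pixels already removed):

        import numpy as np

        def median_shift(points, radius, max_iter=100, tol=1e-3):
            """Shift every point to the median of its radius-neighborhood."""
            pts = np.asarray(points, dtype=float).copy()
            for _ in range(max_iter):
                nxt = np.empty_like(pts)
                for i, p in enumerate(pts):
                    near = pts[np.linalg.norm(pts - p, axis=1) <= radius]
                    nxt[i] = np.median(near, axis=0)   # component-wise (assumed)
                moved = np.abs(nxt - pts).max()
                pts = nxt
                if moved < tol:                        # all points settled
                    break
            # Near-identical rows now mark clusters; the distinct locations
            # are the recovered palette colors.
            return pts

    Rounding the converged coordinates and taking the unique rows yields the palette entries.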

  17. Mobile Air Monitoring Data Processing Strategies and Effects on Spatial Air Pollution Trends

    EPA Science Inventory

    The collection of real-time air quality measurements while in motion (i.e., mobile monitoring) is currently conducted worldwide to evaluate in situ emissions, local air quality trends, and air pollutant exposure. This measurement strategy pushes the limits of traditional data an...

  18. A Comparative Analysis of the Processes of Social Mobility in the USSR and in Today's Russia

    ERIC Educational Resources Information Center

    Shkaratan, O. I.; Iastrebov, G. A.

    2012-01-01

    When it comes to analyzing problems of mobility, most studies of the post-Soviet era have cited random and unconnected data with respect to the Soviet era, on the principle of comparing "the old" and "the new." The authors have deemed it possible (although based on material that is not fully comparable) to examine the late Soviet past as a period…

  19. An efficient process for producing economical and eco-friendly cotton textile composites for mobile industry

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The mobile industry, comprising airplanes, automobiles, and ships, uses enormous quantities of various types of textiles. Just a few decades ago, most of these textile products and composites were made with woven or knitted fabrics that were mostly made with the then only available natural fibers, i...

  20. Graphic arts techniques and equipment: A compilation

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Technology utilization of NASA sponsored projects involving graphic arts techniques and equipment is discussed. The subjects considered are: (1) modifications to graphics tools, (2) new graphics tools, (3) visual aids for graphics, and (4) graphic arts shop hints. Photographs and diagrams are included to support the written material.

  1. Graphical presentation of diagnostic information

    PubMed Central

    Whiting, Penny F; Sterne, Jonathan AC; Westwood, Marie E; Bachmann, Lucas M; Harbord, Roger; Egger, Matthias; Deeks, Jonathan J

    2008-01-01

    Background Graphical displays of results allow researchers to summarise and communicate the key findings of their study. Diagnostic information should be presented in an easily interpretable way, which conveys both test characteristics (diagnostic accuracy) and the potential for use in clinical practice (predictive value). Methods We discuss the types of graphical display commonly encountered in primary diagnostic accuracy studies and systematic reviews of such studies, and systematically review the use of graphical displays in recent diagnostic primary studies and systematic reviews. Results We identified 57 primary studies and 49 systematic reviews. Fifty-six percent of primary studies and 53% of systematic reviews used graphical displays to present results. Dot-plot or box-and-whisker plots were the most commonly used graphs in primary studies and were included in 22 (39%) studies. ROC plots were the most common type of plot included in systematic reviews and were included in 22 (45%) reviews. One primary study and five systematic reviews included a probability-modifying plot. Conclusion Graphical displays are currently underused in primary diagnostic accuracy studies and systematic reviews of such studies. Diagnostic accuracy studies need to include multiple types of graphic in order to provide both a detailed overview of the results (diagnostic accuracy) and to communicate information that can be used to inform clinical practice (predictive value). Work is required to improve graphical displays, to better communicate the utility of a test in clinical practice and the implications of test results for individual patients. PMID:18405357
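
    For readers unfamiliar with the ROC plot, the display most often found in the reviewed systematic reviews, a minimal sketch follows; the sensitivity/specificity pairs are invented for demonstration and do not come from the review.

```python
# Constructing an ROC plot from sensitivity/specificity pairs (made-up data).
import matplotlib.pyplot as plt

sensitivity = [0.95, 0.88, 0.75, 0.60]   # true positive rates at 4 thresholds
specificity = [0.55, 0.70, 0.85, 0.93]
fpr = [1 - s for s in specificity]       # ROC x-axis is 1 - specificity

plt.plot(fpr, sensitivity, "o-", label="test thresholds")
plt.plot([0, 1], [0, 1], "--", color="gray", label="no discrimination")
plt.xlabel("1 - specificity (false positive rate)")
plt.ylabel("sensitivity (true positive rate)")
plt.legend()
plt.show()
```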

  2. Physical and technological foundations of graphical treatment processes based on inner defects under the action of powerful pulses of laser radiation

    NASA Astrophysics Data System (ADS)

    Davidov, Nicolay N.; Kudaev, Serge V.

    1999-01-01

    Research on damage formation processes in glass has focused on the mechanisms by which powerful pulses of penetrating laser radiation interact with materials, with the aim of improving the resistance of optical components. However, the formation of glass structure defects, local areas with low transmittance of visible light, can also find application in the final processing of glassware. Treatment modes that exploit these effects make it possible to increase the artistic expression of decorative glassware used to furnish building interiors and to solve some problems in manufacturing counting and indication devices for electronic instruments. Mathematical models of defect formation in optically transparent materials under the action of powerful laser pulses are necessary for developing principles to control such glass treatment.

  3. User Dynamics in Graphical Authentication Systems

    NASA Astrophysics Data System (ADS)

    Revett, Kenneth; Jahankhani, Hamid; de Magalhães, Sérgio Tenreiro; Santos, Henrique M. D.

    In this paper, a graphical authentication system is presented which is based on a matching scheme. The user is required to match up thumbnail graphical images that belong to a variety of categories in an order-based approach. The number of images in the selection panel was varied to determine how this affects memorability. In addition, timing information was included as a means of enhancing the security level of the system. That is, the user's mouse clicks were timed and used as part of the authentication process. This is one of the few studies that employ a proper biometric facility, namely mouse dynamics, in a graphical authentication system. Lastly, this study employs the 2-D version of Fitts' law, the Accot-Zhai steering law, to examine the effect of image size on usability. The results from this study indicate that the combination of biometrics (mouse timing information) with a graphical authentication scheme produces FAR/FRR values that approach those of textual authentication schemes.
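
    The 2-D pointing model the study refers to is commonly written as an index-of-difficulty formula. Below is a minimal sketch using the Accot-Zhai bivariate refinement of Fitts' law; the constants a, b, and eta are placeholders to be fitted to user data, not values from the study.

```python
# Accot-Zhai bivariate Fitts model: movement time grows with an index of
# difficulty that accounts for both target width and height.
import math

def predicted_time(distance, width, height, a=0.1, b=0.15, eta=0.5):
    """Predicted time (s) to acquire a width x height target at `distance`.
    a, b, eta are placeholder fit parameters (assumptions)."""
    index_of_difficulty = math.log2(
        math.sqrt((distance / width) ** 2 + eta * (distance / height) ** 2) + 1
    )
    return a + b * index_of_difficulty

# Smaller thumbnails -> higher difficulty -> longer click times,
# the usability effect the study examines.
print(predicted_time(300, 64, 64))   # larger images
print(predicted_time(300, 32, 32))   # smaller images
```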

  4. Graphics Software For VT Terminals

    NASA Technical Reports Server (NTRS)

    Wang, Caroline

    1991-01-01

    VTGRAPH graphics software tool for DEC/VT computer terminal or terminals compatible with it, widely used by government and industry. Callable in FORTRAN or C language, library program enabling user to cope with many computer environments in which VT terminals used for window management and graphic systems. Provides PLOT10-like package plus color or shade capability for VT240, VT241, and VT300 terminals. User can easily design more-friendly user-interface programs and design PLOT10 programs on VT terminals with different computer systems. Requires ReGis graphics set terminal and FORTRAN compiler.

  5. The Effects of Integrating Mobile and CAD Technology in Teaching Design Process for Malaysian Polytechnic Architecture Student in Producing Creative Product

    ERIC Educational Resources Information Center

    Hassan, Isham Shah; Ismail, Mohd Arif; Mustapha, Ramlee

    2010-01-01

    The purpose of this research is to examine the effect of integrating the digital media such as mobile and CAD technology on designing process of Malaysian polytechnic architecture students in producing a creative product. A website is developed based on Caroll's minimal theory, while mobile and CAD technology integration is based on Brown and…

  6. Low-temperature processable amorphous In-W-O thin-film transistors with high mobility and stability

    SciTech Connect

    Kizu, Takio; Aikawa, Shinya; Mitoma, Nobuhiko; Shimizu, Maki; Gao, Xu; Lin, Meng-Fang; Tsukagoshi, Kazuhito; Nabatame, Toshihide

    2014-04-14

    Thin-film transistors (TFTs) with a high stability and a high field-effect mobility have been achieved using W-doped indium oxide semiconductors in a low-temperature process (~150 °C). By incorporating WO3 into indium oxide, TFTs that were highly stable under a negative bias stress were reproducibly achieved without high-temperature annealing, and the degradation of the field-effect mobility was not pronounced. This may be due to the efficient suppression of the excess oxygen vacancies in the film by the high dissociation energy of the bond between oxygen and W atoms and to the different charge states of W ions.

  7. Reflex: Graphical workflow engine for data reduction

    NASA Astrophysics Data System (ADS)

    ESO Reflex development Team

    2014-01-01

    Reflex provides an easy and flexible way to reduce VLT/VLTI science data using the ESO pipelines. It allows graphically specifying the sequence in which the data reduction steps are executed, including conditional stops, loops and conditional branches. It eases inspection of the intermediate and final data products and allows repetition of selected processing steps to optimize the data reduction. The data organization necessary to reduce the data is built into the system and is fully automatic; advanced users can plug their own modules and steps into the data reduction sequence. Reflex supports the development of data reduction workflows based on the ESO Common Pipeline Library. Reflex is based on the concept of a scientific workflow, whereby the data reduction cascade is rendered graphically and data seamlessly flow from one processing step to the next. It is distributed with a number of complete test datasets so users can immediately start experimenting and familiarize themselves with the system.
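
    As a toy illustration of the scientific-workflow idea (steps as nodes, data flowing from one to the next), here is a deliberately minimal linear cascade. It is not Reflex code; the step names and functions are invented, and Reflex additionally supports the conditional stops, loops, and branches that this sketch omits.

```python
# Minimal dataflow cascade: each step consumes the previous step's output.
def run_workflow(steps, data):
    for name, step in steps:
        data = step(data)                # data flows along the edge
        print(f"completed step: {name}")
    return data

pipeline = [
    ("bias-subtract", lambda d: d - 1.0),
    ("flat-field",    lambda d: d / 2.0),
    ("extract",       lambda d: max(d, 0.0)),
]
result = run_workflow(pipeline, 10.0)
print(result)
```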

  8. Managing facts and concepts: computer graphics and information graphics from a graphic designer's perspective

    SciTech Connect

    Marcus, A.

    1983-01-01

    This book emphasizes the importance of graphic design for an information-oriented society. In an environment in which many new graphic communication technologies are emerging, it raises some issues which graphic designers and managers of graphic design production should consider in using the new technology effectively. In its final sections, it gives an example of the steps taken in designing a visual narrative as a prototype for responsible information-oriented graphic design. The management of complex facts and concepts, of complex systems of ideas and issues, presented in a visual as well as verbal narrative or dialogue and conveyed through new technology, will challenge the graphic design community in the coming decades. This shift to visual-verbal communication has repercussions in the educational system and the political/governance systems that go beyond the scope of this book. If there is a single goal for this book, it is to stimulate readers and then to provide references that will help them learn more about graphic design in an era of communication when know business is show business.

  9. Raster graphics extensions to the core system

    NASA Technical Reports Server (NTRS)

    Foley, J. D.

    1984-01-01

    A conceptual model of raster graphics systems was developed. The model integrates core-like graphics package concepts with contemporary raster display architectures. The conceptual model of raster graphics introduces multiple pixel matrices with associated index tables.

  10. Spatial heterogeneity of mobilization processes and input pathways of herbicides into a brook in a small agricultural catchment

    NASA Astrophysics Data System (ADS)

    Doppler, Tobias; Lück, Alfred; Popow, Gabriel; Strahm, Ivo; Winiger, Luca; Gaj, Marcel; Singer, Heinz; Stamm, Christian

    2010-05-01

    Soil-applied herbicides can be transported from their point of application (the agricultural field) to surface waters during rain events. There they can have harmful effects on aquatic species. Since the spatial distribution of mobilization and transport processes is very heterogeneous, the contributions of different fields to the total load in a surface water body may differ considerably. The localization of especially critical areas (contributing areas) can help to efficiently minimize herbicide inputs to surface waters. An agricultural field becomes a contributing area when three conditions are met: 1) herbicides are applied, 2) herbicides are mobilized on the field and 3) the mobilized herbicides are transported rapidly to the surface water. In spring 2009, a controlled herbicide application was performed on corn fields in a small (ca. 1 km2) catchment with intensive crop production in the Swiss plateau. Subsequently, water samples were taken at different locations in the catchment with a high temporal resolution during rain events. We observed both saturation excess and Hortonian overland flow during the field campaign. Both can be important mobilization processes depending on the intensity and quantity of the rain. This can lead to different contributing areas during different types of rain events. We will show data on the spatial distribution of herbicide loads during different types of rain events. The connectivity of the fields with the brook is also spatially heterogeneous. Most of the fields are disconnected from the brook by internal sinks in the catchment, which prevent surface runoff from entering the brook directly. Surface runoff from these disconnected areas can only enter the brook rapidly via macropore flow into tile drains beneath the internal sinks or via direct shortcuts to the drainage system (maintenance manholes, farmyard or road drains). We will show spatially distributed data on herbicide concentration in purely subsurface systems which shows

  11. Photojournal Home Page Graphic 2007

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This image is an unannotated version of the Photojournal Home Page graphic released in October 2007. This digital collage contains a highly stylized rendition of our solar system and points beyond. As this graphic was intended to be used as a navigation aid in searching for data within the Photojournal, certain artistic embellishments have been added (color, location, etc.). Several data sets from various planetary and astronomy missions were combined to create this image.

  12. Planetary Photojournal Home Page Graphic

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This image is an unannotated version of the Planetary Photojournal Home Page graphic. This digital collage contains a highly stylized rendition of our solar system and points beyond. As this graphic was intended to be used as a navigation aid in searching for data within the Photojournal, certain artistic embellishments have been added (color, location, etc.). Several data sets from various planetary and astronomy missions were combined to create this image.

  13. APSRS state-base graphics

    USGS Publications Warehouse

    1981-01-01

    The National Cartographic Information Center (NCIC) is the information branch of the U.S. Geological Survey's National Mapping Division. In order to organize and distribute information about U.S. aerial photography coverage and to help eliminate aerial mapping duplication by tracking individual aerial projects, NCIC developed the Aerial Photography Summary Record System (APSRS). APSRS's principal products are State-Base Graphics (SBG), graphic indexes that show the coverage of conventional aerial photography projects over each State.

  14. Graphic design of pinhole cameras

    NASA Technical Reports Server (NTRS)

    Edwards, H. B.; Chu, W. P.

    1979-01-01

    The paper describes a graphic technique for the analysis and optimization of pinhole size and focal length. The technique is based on the use of the transfer function of optical elements described by Scott (1959) to construct the transfer function of a circular pinhole camera. This transfer function is the response of a component or system to a pattern of lines having a sinusoidally varying radiance at varying spatial frequencies. Some specific examples of graphic design are presented.
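
    The underlying trade-off the graphic technique encodes: geometric blur grows with pinhole diameter while diffraction blur shrinks with it. A common textbook balance point, not taken from the paper, is d ≈ sqrt(2.44 f λ), sketched below.

```python
# Rule-of-thumb optimal pinhole diameter (textbook approximation, not the
# paper's transfer-function construction).
import math

def optimal_pinhole_diameter(focal_length_mm, wavelength_nm=550.0):
    f = focal_length_mm * 1e-3            # meters
    lam = wavelength_nm * 1e-9            # meters
    return math.sqrt(2.44 * f * lam)      # meters

# Example: a 100 mm focal length at green light gives roughly a 0.37 mm hole.
print(f"{optimal_pinhole_diameter(100.0) * 1e3:.3f} mm")
```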

  15. Building a Mobile HIV Prevention App for Men Who Have Sex With Men: An Iterative and Community-Driven Process

    PubMed Central

    McDougal, Sarah J; Sullivan, Patrick S; Stekler, Joanne D; Stephenson, Rob

    2015-01-01

    Background Gay, bisexual, and other men who have sex with men (MSM) account for a disproportionate burden of new HIV infections in the United States. Mobile technology presents an opportunity for innovative interventions for HIV prevention. Some HIV prevention apps currently exist; however, it is challenging to encourage users to download these apps and use them regularly. An iterative research process that centers on the community’s needs and preferences may increase the uptake, adherence, and ultimate effectiveness of mobile apps for HIV prevention. Objective The aim of this paper is to provide a case study to illustrate how an iterative community approach to a mobile HIV prevention app can lead to changes in app content to appropriately address the needs and the desires of the target community. Methods In this three-phase study, we conducted focus group discussions (FGDs) with MSM and HIV testing counselors in Atlanta, Seattle, and US rural regions to learn preferences for building a mobile HIV prevention app. We used data from these groups to build a beta version of the app and theater-tested it in additional FGDs. A thematic data analysis examined how this approach addressed preferences and concerns expressed by the participants. Results Willingness to use the app was higher during theater testing than during the first phase of FGDs. Many concerns that were identified in phase one (eg, disagreements about reminders for HIV testing, concerns about app privacy) were considered in building the beta version. Participants perceived these features as strengths during theater testing. However, some disagreements were still present, especially regarding the tone and language of the app. Conclusions These findings highlight the benefits of using an interactive and community-driven process to collect data on app preferences when building a mobile HIV prevention app. Through this process, we learned how to be inclusive of the larger MSM population without

  16. Defining Identities through Multiliteracies: EL Teens Narrate Their Immigration Experiences as Graphic Stories

    ERIC Educational Resources Information Center

    Danzak, Robin L.

    2011-01-01

    Based on a framework of identity-as-narrative and multiliteracies, this article describes "Graphic Journeys," a multimedia literacy project in which English learners (ELs) in middle school created graphic stories that expressed their families' immigration experiences. The process involved reading graphic novels, journaling, interviewing, and…

  17. Write Is Right: Using Graphic Organizers to Improve Student Mathematical Problem Solving

    ERIC Educational Resources Information Center

    Zollman, Alan

    2012-01-01

    Teachers have used graphic organizers successfully in teaching the writing process. This paper describes graphic organizers and their potential mathematics benefits for both students and teachers, elucidates a specific graphic organizer adaptation for mathematical problem solving, and discusses results using the "four-corners-and-a-diamond"…

  18. On the possibility of producing true real-time retinal cross-sectional images using a graphics processing unit enhanced master-slave optical coherence tomography system

    NASA Astrophysics Data System (ADS)

    Bradu, Adrian; Kapinchev, Konstantin; Barnes, Frederick; Podoleanu, Adrian

    2015-07-01

    In a previous report, we demonstrated master-slave optical coherence tomography (MS-OCT), an OCT method that does not need resampling of data and can be used to deliver en face images from several depths simultaneously. In a separate report, we have also demonstrated MS-OCT's capability of producing cross-sectional images of a quality similar to those provided by the traditional Fourier domain (FD) OCT technique, but at a much slower rate. Here, we demonstrate that by taking advantage of the parallel processing capabilities offered by the MS-OCT method, cross-sectional OCT images of the human retina can be produced in real time. We analyze the conditions that ensure a true real-time B-scan imaging operation and demonstrate in vivo real-time images from human fovea and the optic nerve, with resolution and sensitivity comparable to those produced using the traditional FD-based method, however, without the need of data resampling.
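
    The parallelism exploited here follows from the structure of the master-slave calculation: the amplitude at each (depth, A-scan) pair is an independent comparison of a measured channeled spectrum with a pre-recorded mask for that depth, with no resampling or FFT required. The NumPy sketch below shows only that structure; the real GPU kernels, mask preparation, and signal-processing details of MS-OCT are simplified away.

```python
# Master-slave style B-scan formation as one batched correlation
# (simplified sketch; masks are assumed pre-recorded at calibration).
import numpy as np

def ms_oct_bscan(spectra, masks):
    """spectra: (n_ascans, n_k) measured channeled spectra
    masks:   (n_depths, n_k) depth-indexed masks
    returns: (n_depths, n_ascans) B-scan amplitude image"""
    # A single matrix product evaluates every (depth, A-scan) correlation;
    # on a GPU this maps onto one batched kernel, which is the source of
    # the real-time throughput.
    return np.abs(masks @ spectra.T)
```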

  19. On the possibility of producing true real-time retinal cross-sectional images using a graphics processing unit enhanced master-slave optical coherence tomography system.

    PubMed

    Bradu, Adrian; Kapinchev, Konstantin; Barnes, Frederick; Podoleanu, Adrian

    2015-07-01

    In a previous report, we demonstrated master-slave optical coherence tomography (MS-OCT), an OCT method that does not need resampling of data and can be used to deliver en face images from several depths simultaneously. In a separate report, we have also demonstrated MS-OCT's capability of producing cross-sectional images of a quality similar to those provided by the traditional Fourier domain (FD) OCT technique, but at a much slower rate. Here, we demonstrate that by taking advantage of the parallel processing capabilities offered by the MS-OCT method, cross-sectional OCT images of the human retina can be produced in real time. We analyze the conditions that ensure a true real-time B-scan imaging operation and demonstrate in vivo real-time images from human fovea and the optic nerve, with resolution and sensitivity comparable to those produced using the traditional FD-based method, however, without the need of data resampling.

  20. Statistical graphics: mapping the pathways of science.

    PubMed

    Wainer, H; Velleman, P F

    2001-01-01

    This chapter traces the evolution of statistical graphics starting with its departure from the common noun structure of Cartesian determinism, through William Playfair's revolutionary grammatical shift to graphs as proper nouns, and alights on the modern conception of graph as an active participant in the scientific process of discovery. The ubiquitous availability of data, software, and cheap, high-powered computing, when coupled with the broad acceptance of the ideas in Tukey's 1977 treatise on exploratory data analysis, has yielded a fundamental change in the way that the role of statistical graphics is thought of within science: as a dynamic partner and guide to the future rather than as a static monument to the discoveries of the past. We commemorate and illustrate this development while pointing readers to the new tools available and providing some indications of their potential.

  1. Computer graphics in architecture and engineering

    NASA Technical Reports Server (NTRS)

    Greenberg, D. P.

    1975-01-01

    The present status of the application of computer graphics to architecture and the building professions, and its relationship to other scientific and technical areas, was discussed. It was explained that, due to the fragmented nature of architecture and building activities (in contrast to the aerospace industry), comprehensive, economical utilization of computer graphics in this area is not practical, and its true potential cannot now be realized because architects and structural, mechanical, and site engineers are unable to rely on a common data base. Future emphasis will therefore have to be placed on vertical integration of the construction process and effective use of a three-dimensional data base, rather than on waiting for a technological breakthrough in interactive computing.

  2. RHENIUM SOLUBILITY IN BOROSILICATE NUCLEAR WASTE GLASS: IMPLICATIONS FOR THE PROCESSING AND IMMOBILIZATION OF TECHNETIUM-99 (AND SUPPORTING INFORMATION WITH GRAPHICAL ABSTRACT)

    SciTech Connect

    AA KRUGER; A GOEL; CP RODRIGUEZ; JS MCCLOY; MJ SCHWEIGER; WW LUKENS JR; BJ RILEY; D KIM; M LIEZERS; P HRMA

    2012-08-13

    The immobilization of 99Tc in a suitable host matrix has proved a challenging task for researchers in the nuclear waste community around the world. At the Hanford site in Washington State in the U.S., the total amount of 99Tc in low-activity waste (LAW) is ~1,300 kg and the current strategy is to immobilize the 99Tc in borosilicate glass with vitrification. In this context, the present article reports on the solubility and retention of rhenium, a nonradioactive surrogate for 99Tc, in a LAW sodium borosilicate glass. Due to the radioactive nature of technetium, rhenium was chosen as a simulant because of previously established similarities in ionic radii and other chemical aspects. The glasses, containing target Re concentrations varying from 0 to 10,000 ppm by mass, were synthesized in vacuum-sealed quartz ampoules to minimize the loss of Re by volatilization during melting at 1000 °C. The rhenium was found to be present predominantly as Re7+ in all the glasses as observed by X-ray absorption near-edge structure (XANES). The solubility of Re in borosilicate glasses was determined to be ~3,000 ppm (by mass) using inductively coupled plasma-optical emission spectroscopy (ICP-OES). At higher rhenium concentrations, some additional material was retained in the glasses in the form of alkali perrhenate crystalline inclusions detected by X-ray diffraction (XRD) and laser ablation-ICP mass spectrometry (LA-ICP-MS). Assuming justifiably substantial similarities between Re7+ and Tc7+ behavior in this glass system, these results implied that the processing and immobilization of 99Tc from radioactive wastes should not be limited by the solubility of 99Tc in borosilicate LAW glasses.

  3. Graphics-System Color-Code Interface

    NASA Technical Reports Server (NTRS)

    Tulppo, J. S.

    1982-01-01

    Circuit originally developed for a flight simulator interfaces a computer graphics system with color monitor. Subsystem is intended for particular display computer (AGT-130, ADAGE Graphics Terminal) and specific color monitor (beam penetration tube--Penetron). Store-and-transmit channel is one of five in graphics/color-monitor interface. Adding 5-bit color code to existing graphics programs requires minimal programing effort.

  4. Antinomies of Semiotics in Graphic Design

    ERIC Educational Resources Information Center

    Storkerson, Peter

    2010-01-01

    The following paper assesses the roles played by semiotics in graphic design and in graphic design education, which both reflects and shapes practice. It identifies a series of factors: graphic design education methods and culture; semiotic theories themselves and their application to graphic design; the two wings of Peircian semiotics and…

  5. Comprehending, Composing, and Celebrating Graphic Poetry

    ERIC Educational Resources Information Center

    Calo, Kristine M.

    2011-01-01

    The use of graphic poetry in classrooms is encouraged as a way to engage students and motivate them to read and write poetry. This article discusses how graphic poetry can help students with their comprehension of poetry while tapping into popular culture. It is organized around three main sections--reading graphic poetry, writing graphic poetry,…

  6. Cartooning History: Canada's Stories in Graphic Novels

    ERIC Educational Resources Information Center

    King, Alyson E.

    2012-01-01

    In recent years, historical events, issues, and characters have been portrayed in an increasing number of non-fiction graphic texts. Similar to comics and graphic novels, graphic texts are defined as fully developed, non-fiction narratives told through panels of sequential art. Such non-fiction graphic texts are being used to teach history in…

  7. Computer Graphics. Curriculum Guide for Technology Education.

    ERIC Educational Resources Information Center

    Craft, Clyde O.

    This curriculum guide for a 1-quarter or 1-semester course in computer graphics is designed to be used with Apple II computers. Some of the topics covered include the following: computer graphics terminology and applications, operating Apple computers, graphics programming in BASIC using various programs and commands, computer graphics painting,…

  8. PHIGS PLUS for scientific graphics

    SciTech Connect

    Crawfis, R.A.

    1991-01-14

    This paper gives a brief overview of the use of computer graphics standards in the scientific community. It particularly details how PHIGS PLUS meets the needs of users at the Lawrence Livermore National Laboratory. Although standards for computer graphics have improved substantially over the past decade, their acceptance in the scientific community has been slow. As the use and diversity of computers has increased, the scientific graphics libraries have not been able to keep pace with the additional capabilities these new machines offer. Therefore, several organizations have converted, or are now working on converting, their scientific libraries to rest upon a portable standard. This paper will address why this transition has been so slow and offer suggestions for future standards work to enhance scientific visualization. This work was performed under the auspices of the US Department of Energy by Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48.

  9. VAX Professional Workstation goes graphic

    SciTech Connect

    Downward, J.G.

    1984-01-01

    The VAX Professional Workstation (VPW) is a collection of programs and procedures designed to provide an integrated workstation environment for the staff at KMS Fusion's research laboratories. During the past year numerous capabilities have been added to VPW, including support for VT125/VT240/4014 graphic workstations, editing windows, and additional desk utilities. Graphics workstation support allows users to create, edit, and modify graph data files, enter the data via a graphic tablet, create simple plots with DATATRIEVE or DECgraph on ReGIS terminals, or elaborate plots with TEKGRAPH on ReGIS or Tektronix terminals. Users may attach error bars to the data and interactively plot it in a variety of ways. Users can also create and display viewgraphs. Hard copy output for a large network of office terminals is obtained by multiplexing each terminal's video output into a recently developed video multiplexer that front-ends a single-channel video hard copy unit.

  10. Distributed interactive graphics applications in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Rogers, Stuart E.; Buning, Pieter G.; Merritt, Fergus J.

    1988-01-01

    Implementation of two interactive, distributed graphics programs used in Computational Fluid Dynamics is discussed. Both programs run on a Cray 2 supercomputer and use a Silicon Graphics Iris workstation as the graphics front-end machine. The hardware and supporting software are from the Numerical Aerodynamic Simulation project. Using this configuration, the supercomputer does all of the numerically intensive work and the workstation allows the user to perform real-time interactive transformations on the displayed data. The first program was written originally as a distributed program which computes particle traces for fluid flow solutions existing on the supercomputer. The second is an older post-processing and plotting program which was modified to run in a distributed mode. Both programs have realized a large increase in capability as distributed processes. Some graphical results are presented.

  11. Graphic Journeys: Graphic Novels' Representations of Immigrant Experiences

    ERIC Educational Resources Information Center

    Boatright, Michael D.

    2010-01-01

    This article explores how immigrant experiences are represented in the narratives of three graphic novels published in the last decade: Tan's (2007) "The Arrival," Kiyama's (1931/1999) "The Four Immigrants Manga: A Japanese Experience in San Francisco, 1904-1924," and Yang's (2006) "American Born Chinese." Through a theoretical lens informed by…

  12. Investigation of characteristics and transformation processes of megacity emission plumes using a mobile laboratory in the Paris metropolitan area

    NASA Astrophysics Data System (ADS)

    von der Weiden-Reinmüller, S.-L.; Drewnick, F.; Zhang, Q.; Meleux, F.; Beekmann, M.; Borrmann, S.

    2012-04-01

    A growing fraction of the world's population is living in urban agglomerations of increasing size. Currently, 20 cities worldwide qualify as so-called megacities, having more than 10 million inhabitants. These intense pollution hot-spots raise a number of scientific questions concerning their influence on local and regional air quality, which is connected with human health, flora and fauna. In the framework of the European Union FP7 MEGAPOLI project (Megacities: Emissions, urban, regional and Global Atmospheric POLlution and climate effects, and Integrated tools for assessment and mitigation), two major field campaigns were carried out in the greater Paris region in July 2009 and January/February 2010. This work presents results from mobile particulate and gas phase measurements with a focus on the characteristics of the Paris emission plume, its impact on regional air quality, and aerosol transformation processes within the plume as it travels away from its source. In addition, differences between summer and winter conditions are discussed. The mobile laboratory was equipped with high time resolution instrumentation to measure particle number concentrations (dP > 2.5 nm), size distributions (dP ~ 5 nm - 32 μm), sub-micron chemical composition (non-refractory species using an Aerodyne HR-ToF-AMS, PAH and black carbon) as well as major trace gases (CO2, SO2, O3, NOx) and standard meteorological parameters. An on-board webcam and GPS allow detailed monitoring of the traffic situation and the vehicle track. In a total of 29 mobile and 25 stationary measurements with the mobile laboratory, the Paris emission plume as well as the atmospheric background was characterized under various meteorological conditions. This makes it possible to investigate the influence of external factors like temperature, solar radiation or precipitation on the plume characteristics. Three measurement strategies were applied to investigate the emission plume. First, circular mobile measurements around Paris

  13. Animation graphic interface for the space shuttle onboard computer

    NASA Technical Reports Server (NTRS)

    Wike, Jeffrey; Griffith, Paul

    1989-01-01

    Graphics interfaces designed to operate on space qualified hardware challenge software designers to display complex information under processing power and physical size constraints. Under contract to Johnson Space Center, MICROEXPERT Systems is currently constructing an intelligent interface for the LASER DOCKING SENSOR (LDS) flight experiment. Part of this interface is a graphic animation display for Rendezvous and Proximity Operations. The displays have been designed in consultation with Shuttle astronauts. The displays show multiple views of a satellite relative to the shuttle, coupled with numeric attitude information. The graphics are generated using position data received by the Shuttle Payload and General Support Computer (PGSC) from the Laser Docking Sensor. Some of the design considerations include crew member preferences in graphic data representation, single versus multiple window displays, mission tailoring of graphic displays, realistic 3D images versus generic icon representations of real objects, the physical relationship of the observers to the graphic display, how numeric or textual information should interface with graphic data, in what frame of reference objects should be portrayed, recognizing conditions of display information-overload, and screen format and placement consistency.

  14. Graphics for Stereo Visualization Theater for Supercomputing 1998

    NASA Technical Reports Server (NTRS)

    Antipuesto, Joel; Reid, Lisa (Technical Monitor)

    1998-01-01

    The Stereo Visualization Theater is a high-resolution graphics demonstration that provides a review of current research being performed at NASA. Using a stereoscopic projection, multiple participants can explore scientific data in new ways. Pre-processed audio and video are played back in real time from a workstation. A stereo graphics filter for the projector and passive polarized glasses worn by audience members are used to create the stereo effect.

  15. Intellectual system of identification of Arabic graphics

    NASA Astrophysics Data System (ADS)

    Abdoullayeva, Gulchin G.; Aliyev, Telman A.; Gurbanova, Nazakat G.

    2001-08-01

    Studies in the domain of graphic images have made it possible to create artificial intelligence facilities for recognizing letters, letter combinations, and similar elements in various scripts and prints. This work proposes a system for the recognition and identification of symbols of Arabic script, which has its own specific features compared with Latin and Cyrillic. The first stage of recognition and identification is coding, followed by entry of the information into a computer; here the entry problem is an essential one. To enter a large volume of information per unit time, a scanner is usually employed. Alongside the scanner, the authors present their own hardware for efficient input and coding of the information. For symbols not identified from the scanner, mostly for small volumes of information, the developed coding devices are used directly in the process of writing. The functional design of the software is based on a heuristic model of the creative activity of researchers and experts in describing and estimating the states of weakly formalizable systems, using methods of identification and selection of geometric features.

  16. Collection Of Software For Computer Graphics

    NASA Technical Reports Server (NTRS)

    Hibbard, Eric A.; Makatura, George

    1990-01-01

    Ames Research Graphics System (ARCGRAPH) collection of software libraries and software utilities assisting researchers in generating, manipulating, and visualizing graphical data. Defines metafile format containing device-independent graphical data. File format used with various computer-graphics-manipulation and -animation software packages at Ames, including SURF (COSMIC Program ARC-12381) and GAS (COSMIC Program ARC-12379). Consists of two-stage "pipeline" used to put out graphical primitives. ARCGRAPH libraries developed on VAX computer running VMS.

  17. Trend Monitoring System (TMS) graphics software

    NASA Technical Reports Server (NTRS)

    Brown, J. S.

    1979-01-01

    A prototype bus communications system, which is being used to support the Trend Monitoring System (TMS) and to evaluate the bus concept, is considered. A set of FORTRAN-callable graphics subroutines for the host MODCOMP computer, and an approach to splitting graphics work between the host and the system's intelligent graphics terminals, are described. The graphics software in the MODCOMP and the operating software package written for the graphics terminals are included.

  18. Homology modeling, docking studies and molecular dynamic simulations using graphical processing unit architecture to probe the type-11 phosphodiesterase catalytic site: a computational approach for the rational design of selective inhibitors.

    PubMed

    Cichero, Elena; D'Ursi, Pasqualina; Moscatelli, Marco; Bruno, Olga; Orro, Alessandro; Rotolo, Chiara; Milanesi, Luciano; Fossa, Paola

    2013-12-01

    Phosphodiesterase 11 (PDE11) is the latest isoform of the PDEs family to be identified, acting on both cyclic adenosine monophosphate and cyclic guanosine monophosphate. The initial reports of PDE11 found evidence for PDE11 expression in skeletal muscle, prostate, testis, and salivary glands; however, the tissue distribution of PDE11 still remains a topic of active study and some controversy. Given the sequence similarity between PDE11 and PDE5, several PDE5 inhibitors have been shown to cross-react with PDE11. Accordingly, many non-selective inhibitors, such as IBMX, zaprinast, sildenafil, and dipyridamole, have been documented to inhibit PDE11. Only recently, a series of dihydrothieno[3,2-d]pyrimidin-4(3H)-one derivatives proved to be selective toward the PDE11 isoform. In the absence of experimental data about PDE11 X-ray structures, we found it interesting to gain a better understanding of the enzyme-inhibitor interactions using in silico simulations. In this work, we describe a computational approach based on homology modeling, docking, and molecular dynamics simulation to derive a predictive 3D model of PDE11. Using a Graphical Processing Unit architecture, it is possible to perform long simulations, find stable interactions involved in the complex, and finally suggest guidelines for the identification and synthesis of potent and selective inhibitors.

  19. Forced gradient infiltration experiments: effect on the release processes of mobile particles and organic contaminants

    NASA Astrophysics Data System (ADS)

    Pagels, B.; Reichel, K.; Totsche, K. U.

    2009-04-01

    Mobile colloidal and suspended matter is likely to affect the mobility of polycyclic aromatic hydrocarbons (PAHs) in the unsaturated soil zone at contaminated sites. We studied the release of mobile (organic) particles (MOPs), which include among others dissolved and colloidal organic matter, in response to forced sprinkling infiltration and multiple flow interrupts using undisturbed zero-tension lysimeters. The aim was to assess the effect of these MOPs on the export of PAHs and other contaminants in floodplain soils. Seepage water samples were analyzed for dissolved and colloidal organic carbon (DOC), PAHs, suspended particles, pH, electrical conductivity, turbidity, zeta potential, and surface tension in the fraction smaller than 0.7 µm. In addition, selected PAHs were analyzed in the size fraction > 0.7 µm. Bromide was used as a conservative tracer to determine the flow regime. First arrival of bromide was detected 3.8 hours after the start of irrigation. The concentration gradually increased and reached a level of C/C0 = 0.1 just before the flow interrupt (FI). After flow was resumed, effluent bromide concentration was equal to the concentration before the FI. Ongoing irrigation caused a breakthrough wave, which continuously increased until the bromide concentration reached ~100% of the input concentration. A high-intensity rain event of 4 L m-2 h-1 upon summer-dried lysimeters resulted in the release of particles in the size range of 250-400 nm. In addition, it appears that surface-active agents are released with the initially exported seepage water, as indicated by the decrease of the surface tension to 60 mN m-1 (pure water: 72 mN m-1). The turbidity values range from 8 to 14 FAU. The concentration of DOC is about 30-40 mg L-1 in the initial effluent fractions and equilibrates to 15 mg L-1 with ongoing percolation. The PAHs in the fraction < 0.7 µm amount to 0.02 µg L-1, and 0.05 µg L-1 in the fraction > 0.7 µm. After establishing steady state flow conditions, first arrival of bromide was detected

  20. Graphics Design Technology Curriculum Guide.

    ERIC Educational Resources Information Center

    Idaho State Dept. of Education, Boise. Div. of Vocational Education.

    This Idaho secondary education curriculum guide provides lists of tasks, performance objectives, and enabling objectives for instruction intended to impart entry-level employment skills in graphics design technology. The first list states all tasks for 11 areas; separate lists for each area follow. Each task on the lists is accompanied by a…