Science.gov

Sample records for mobile graphics processing

  1. Evaluating Mobile Graphics Processing Units (GPUs) for Real-Time Resource Constrained Applications

    SciTech Connect

    Meredith, J; Conger, J; Liu, Y; Johnson, J

    2005-11-11

    Modern graphics processing units (GPUs) can provide tremendous performance boosts for some applications beyond what a single CPU can accomplish, and their performance is growing at a rate faster than that of CPUs. Mobile GPUs available for laptops have the small form factor and low power requirements suitable for use in embedded processing. We evaluated several desktop and mobile GPUs and CPUs on traditional and non-traditional graphics tasks, as well as on the most time-consuming pieces of a full hyperspectral imaging application. Accuracy remained high despite small differences in arithmetic operations like rounding. Performance improvements are summarized here relative to a desktop Pentium 4 CPU.

  2. Oklahoma's Mobile Computer Graphics Laboratory.

    ERIC Educational Resources Information Center

    McClain, Gerald R.

    This Computer Graphics Laboratory houses an IBM 1130 computer, U.C.C. plotter, printer, card reader, two key punch machines, and seminar-type classroom furniture. A "General Drafting Graphics System" (GDGS) is used, based on repetitive use of basic coordinate and plot generating commands. The system is used by 12 institutions of higher education…

  3. Graphical Language for Data Processing

    NASA Technical Reports Server (NTRS)

    Alphonso, Keith

    2011-01-01

    A graphical language for processing data allows processing elements to be connected with virtual wires that represent data flows between processing modules. The processing of complex data, such as lidar data, requires many different algorithms to be applied. The purpose of this innovation is to automate the processing of such complex data without the need for complex scripting and programming languages. The system consists of a set of user-interface components that allow the user to drag and drop various algorithmic and processing components onto a process graph. By working graphically, the user can completely visualize the process flow and create complex diagrams. This innovation supports the nesting of graphs, such that a graph can be included in another graph as a single step for processing. In addition to the user interface components, the system includes a set of .NET classes that represent the graph internally. These classes provide the internal system representation of the graphical user interface. The system includes a graph execution component that reads the internal representation of the graph (as described above) and executes that graph. The execution of the graph follows the interpreted model of execution in that each node is traversed and executed from the original internal representation. In addition, there are components that allow external code elements, such as algorithms, to be easily integrated into the system, making the system readily extensible.

  4. Toward energy-aware balancing of mobile graphics

    NASA Astrophysics Data System (ADS)

    Stavrakis, Efstathios; Polychronis, Marios; Pelekanos, Nectarios; Artusi, Alessandro; Hadjichristodoulou, Panayiotis; Chrysanthou, Yiorgos

    2015-03-01

    In the area of computer graphics the design of hardware and software has primarily been driven by the need to achieve maximum performance. Energy efficiency was usually neglected, on the assumption that a stable, always-on power source was available. However, the advent of the mobile era has called these ideas and designs into question, since mobile devices are limited by both their computational capabilities and their energy sources. In line with this emerging need for energy efficiency analysis in computer graphics, we have set up a software framework to obtain power measurements from 3D scenes using off-the-shelf hardware that allows sampling of the energy consumption over the power rails of the CPU and GPU. Our experiments include geometric complexity, texture resolution and common CPU and GPU workloads. The goal of this work is to combine the knowledge obtained from these measurements into a prototype energy-aware balancer of processing resources. The balancer dynamically selects the rendering parameters and uses a simple framerate-based dynamic frequency scaling strategy. Our experimental results demonstrate that our power-saving framework can achieve savings of approximately 40%.
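
    In essence, the framerate-based frequency scaling policy described above reduces to a small per-frame feedback loop. The host-side sketch below (plain C++) is only an illustration of that idea, not the authors' balancer: the frequency table, the target frame rate and the set_gpu_frequency() hook are hypothetical stand-ins for a platform-specific driver interface.

        #include <cstdio>

        static const int kFreqsMHz[] = {200, 300, 420, 600};  // hypothetical GPU DVFS states
        static int level = 3;                                 // start at maximum frequency

        void set_gpu_frequency(int mhz) {                     // assumed platform hook
            std::printf("gpu -> %d MHz\n", mhz);
        }

        // Called once per rendered frame with the measured frame rate.
        void on_frame(double fps, double target_fps = 30.0) {
            if (fps > 1.2 * target_fps && level > 0)
                set_gpu_frequency(kFreqsMHz[--level]);        // headroom: clock down to save power
            else if (fps < target_fps && level < 3)
                set_gpu_frequency(kFreqsMHz[++level]);        // too slow: clock back up
        }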

  5. Cockpit weather graphics using mobile satellite communications

    NASA Technical Reports Server (NTRS)

    Seth, Shashi

    1993-01-01

    Many new companies are pushing state-of-the-art technology to bring a revolution to the cockpits of General Aviation (GA) aircraft. The vision, according to Dr. Bruce Holmes, the Assistant Director for Aeronautics at the National Aeronautics and Space Administration's (NASA) Langley Research Center, is to provide such an advanced flight control system that the motor and cognitive skills you use to drive a car would be very similar to the ones you would use to fly an airplane. We at ViGYAN, Inc., are currently developing a system called the Pilot Weather Advisor (PWxA), which would be a part of such an advanced-technology flight management system. The PWxA provides graphical depictions of weather information in the cockpit of aircraft in near real-time, through the use of broadcast satellite communications. The purpose of this system is to improve the safety and utility of GA aircraft operations. Considerable effort is being expended on research into the design of graphical weather systems, notably the works of Scanlon and Dash. The concept of providing pilots with graphical depictions of weather conditions, overlaid on geographical and navigational maps, is extremely powerful.

  6. Graphics hardware accelerated panorama builder for mobile phones

    NASA Astrophysics Data System (ADS)

    Bordallo López, Miguel; Hannuksela, Jari; Silvén, Olli; Vehviläinen, Markku

    2009-02-01

    Modern mobile communication devices frequently contain built-in cameras allowing users to capture high-resolution still images, but at the same time the imaging applications are facing both usability and throughput bottlenecks. The difficulties in taking ad hoc pictures of printed paper documents with multi-megapixel cellular phone cameras in a common business use case illustrate these problems. The result can be examined only after several seconds and is often blurry, so a new picture is needed, even though the viewfinder image had looked good. The process can be a frustrating one, with waits and with the user unable to predict the quality beforehand. The problems can be traced to the mismatch between processor speed and camera resolution, and to the application's interactivity demands. In this context we analyze building mosaic images of printed documents from frames selected from VGA-resolution (640x480 pixel) video. High interactivity is achieved by providing real-time feedback on the quality, while simultaneously guiding the user's actions. The graphics processing unit of the mobile device can be used to speed up the reconstruction computations. To demonstrate the viability of the concept, we present an interactive document scanning application implemented on a Nokia N95 mobile phone.

  7. Graphic Design in Libraries: A Conceptual Process

    ERIC Educational Resources Information Center

    Ruiz, Miguel

    2014-01-01

    Providing successful library services requires efficient and effective communication with users; therefore, it is important that content creators who develop visual materials understand key components of design and, specifically, develop a holistic graphic design process. Graphic design, as a form of visual communication, is the process of…

  8. Hyperspectral processing in graphical processing units

    NASA Astrophysics Data System (ADS)

    Winter, Michael E.; Winter, Edwin M.

    2011-06-01

    With the advent of the commercial 3D video card in the mid-1990s, we have seen an order-of-magnitude performance increase with each generation of new video cards. While these cards were designed primarily for visualization and video games, it became apparent after a short while that they could be used for scientific purposes. These Graphical Processing Units (GPUs) are rapidly being incorporated into data processing tasks usually reserved for general purpose computers. It has been found that many image processing problems scale well to modern GPU systems. We have implemented four popular hyperspectral processing algorithms (N-FINDR, linear unmixing, Principal Components, and the RX anomaly detection algorithm). These algorithms show an across-the-board speedup of at least a factor of 10, with some special cases showing extreme speedups of a hundred times or more.
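
    Of the four algorithms named, linear unmixing shows most directly why such workloads scale well on a GPU: each pixel's spectrum is processed independently of all others. The CUDA kernel below is a minimal sketch of that observation, assuming the endmember pseudoinverse has been precomputed on the host; it is illustrative, not code from the paper.

        // One thread per pixel: abundances = P * spectrum.
        __global__ void unmix(const float* P,       // p x b endmember pseudoinverse, row-major
                              const float* pixels,  // n x b spectra, row-major
                              float* abund,         // n x p abundance output
                              int n, int b, int p) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;    // pixel index
            if (i >= n) return;
            for (int e = 0; e < p; ++e) {
                float a = 0.0f;
                for (int k = 0; k < b; ++k)
                    a += P[e * b + k] * pixels[i * b + k];    // dot(P row, spectrum)
                abund[i * p + e] = a;
            }
        }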

  9. Optimization Techniques for 3D Graphics Deployment on Mobile Devices

    NASA Astrophysics Data System (ADS)

    Koskela, Timo; Vatjus-Anttila, Jarkko

    2015-03-01

    3D Internet technologies are becoming essential enablers in many application areas including games, education, collaboration, navigation and social networking. The use of 3D Internet applications with mobile devices provides location-independent access and richer use context, but also performance issues. Therefore, one of the important challenges facing 3D Internet applications is the deployment of 3D graphics on mobile devices. In this article, we present an extensive survey on optimization techniques for 3D graphics deployment on mobile devices and qualitatively analyze the applicability of each technique from the standpoints of visual quality, performance and energy consumption. The analysis focuses on optimization techniques related to data-driven 3D graphics deployment, because it supports off-line use, multi-user interaction, user-created 3D graphics and creation of arbitrary 3D graphics. The outcome of the analysis facilitates the development and deployment of 3D Internet applications on mobile devices and provides guidelines for future research.

  10. Process and representation in graphical displays

    NASA Technical Reports Server (NTRS)

    Gillan, Douglas J.; Lewis, Robert; Rudisill, Marianne

    1990-01-01

    How people comprehend graphics is examined. Graphical comprehension involves the cognitive representation of information from a graphic display and the processing strategies that people apply to answer questions about graphics. Research on representation has examined both the features present in a graphic display and the cognitive representation of the graphic. The key features include the physical components of a graph, the relation between the figure and its axes, and the information in the graph. Tests of people's memory for graphs indicate that both the physical and informational aspects of a graph are important in its cognitive representation. However, the physical (or perceptual) features overshadow the information to a large degree. Processing strategies also involve a perception-information distinction. In order to answer simple questions (e.g., determining the value of a variable, comparing several variables, and determining the mean of a set of variables), people switch between two information processing strategies: (1) an arithmetic, look-up strategy in which they use a graph much like a table, looking up values and performing arithmetic calculations; and (2) a perceptual strategy in which they use the spatial characteristics of the graph to make comparisons and estimations. The user's choice of strategies depends on the task and the characteristics of the graph. A theory of graphic comprehension is presented.

  11. Process and representation in graphical displays

    NASA Technical Reports Server (NTRS)

    Gillan, Douglas J.; Lewis, Robert; Rudisill, Marianne

    1993-01-01

    Our initial model of graphic comprehension has focused on statistical graphs. Like other models of human-computer interaction, models of graphical comprehension can be used by human-computer interface designers and developers to create interfaces that present information in an efficient and usable manner. Our investigation of graph comprehension addresses two primary questions: how do people represent the information contained in a data graph, and how do they process information from the graph? The topics of focus for graphic representation concern the features into which people decompose a graph and the representations of the graph in memory. The issue of processing can be further analyzed as two questions: what overall processing strategies do people use, and what specific processing skills are required?

  12. Graphics processing unit-assisted lossless decompression

    DOEpatents

    Loughry, Thomas A.

    2016-04-12

    Systems and methods for decompressing compressed data that has been compressed by way of a lossless compression algorithm are described herein. In a general embodiment, a graphics processing unit (GPU) is programmed to receive compressed data packets and decompress such packets in parallel. The compressed data packets are compressed representations of an image, and the lossless compression algorithm is a Rice compression algorithm.
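
    Since the compressed packets are independent, the natural CUDA mapping is one thread per packet, each decoding its packet sequentially. The sketch below illustrates that scheme for Rice-coded data; the packet layout (fixed sample count, fixed Rice parameter k, MSB-first bit order) is an assumption made for illustration and is not the patented format.

        struct BitReader {
            const unsigned char* buf;
            unsigned bit;
            __device__ unsigned read1() {                     // next single bit, MSB first
                unsigned v = (buf[bit >> 3] >> (7 - (bit & 7))) & 1u;
                ++bit;
                return v;
            }
            __device__ unsigned read(unsigned n) {            // next n bits (n <= 16)
                unsigned v = 0;
                while (n--) v = (v << 1) | read1();
                return v;
            }
        };

        __global__ void rice_decode(const unsigned char* packets, const int* offsets,
                                    int* out, int samples_per_packet, int k, int npackets) {
            int p = blockIdx.x * blockDim.x + threadIdx.x;    // one thread per packet
            if (p >= npackets) return;
            BitReader r{packets + offsets[p], 0};
            for (int s = 0; s < samples_per_packet; ++s) {
                unsigned q = 0;
                while (r.read1() == 1u) ++q;                  // unary-coded quotient
                out[p * samples_per_packet + s] = (int)((q << k) | r.read(k));
            }
        }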

  13. Graphical analysis of power systems for mobile robotics

    NASA Astrophysics Data System (ADS)

    Raade, Justin William

    The field of mobile robotics places stringent demands on the power system. Energetic autonomy, or the ability to function for a useful operation time independent of any tether, refueling, or recharging, is a driving force in a robot designed for a field application. The focus of this dissertation is the development of two graphical analysis tools, namely Ragone plots and optimal hybridization plots, for the design of human scale mobile robotic power systems. These tools contribute to the intuitive understanding of the performance of a power system and expand the toolbox of the design engineer. Ragone plots are useful for graphically comparing the merits of different power systems for a wide range of operation times. They plot the specific power versus the specific energy of a system on logarithmic scales. The driving equations in the creation of a Ragone plot are derived in terms of several important system parameters. Trends at extreme operation times (both very short and very long) are examined. Ragone plot analysis is applied to the design of several power systems for high-power human exoskeletons. Power systems examined include a monopropellant-powered free piston hydraulic pump, a gasoline-powered internal combustion engine with hydraulic actuators, and a fuel cell with electric actuators. Hybrid power systems consist of two or more distinct energy sources that are used together to meet a single load. They can often outperform non-hybrid power systems in low duty-cycle applications or those with widely varying load profiles and long operation times. Two types of energy sources are defined: engine-like and capacitive. The hybridization rules for different combinations of energy sources are derived using graphical plots of hybrid power system mass versus the primary system power. Optimal hybridization analysis is applied to several power systems for low-power human exoskeletons. Hybrid power systems examined include a fuel cell and a solar panel coupled with

  14. Graphics Processing Unit Assisted Thermographic Compositing

    NASA Technical Reports Server (NTRS)

    Ragasa, Scott; McDougal, Matthew; Russell, Sam

    2013-01-01

    Objective: To develop a software application utilizing general purpose graphics processing units (GPUs) for the analysis of large sets of thermographic data. Background: Over the past few years, an increasing effort among scientists and engineers to utilize the GPU in a more general purpose fashion is allowing for supercomputer level results at individual workstations. As data sets grow, the methods to work them grow at an equal, and often greater, pace. Certain common computations can take advantage of the massively parallel and optimized hardware constructs of the GPU to allow for throughput that was previously reserved for compute clusters. These common computations have high degrees of data parallelism, that is, they are the same computation applied to a large set of data where the result does not depend on other data elements. Signal (image) processing is one area where GPUs are being used to greatly increase the performance of certain algorithms and analysis techniques.

  15. Partial wave analysis using graphics processing units

    NASA Astrophysics Data System (ADS)

    Berger, Niklaus; Beijiang, Liu; Jike, Wang

    2010-04-01

    Partial wave analysis is an important tool for determining resonance properties in hadron spectroscopy. For large data samples, however, the unbinned likelihood fits employed are computationally very expensive. At the Beijing Spectrometer (BES) III experiment, an increase in statistics compared to earlier experiments of up to two orders of magnitude is expected. In order to allow for a timely analysis of these datasets, additional computing power with short turnover times has to be made available. It turns out that graphics processing units (GPUs) originally developed for 3D computer games have an architecture of massively parallel single instruction multiple data floating point units that is almost ideally suited to the algorithms employed in partial wave analysis. We have implemented a framework for tensor manipulation and partial wave fits called GPUPWA. The user writes a program in pure C++ whilst the GPUPWA classes handle computations on the GPU, memory transfers, caching and other technical details. In conjunction with a recent graphics processor, the framework provides a speed-up of the partial wave fit by more than two orders of magnitude compared to legacy FORTRAN code.

  16. Process control graphics for petrochemical plants

    SciTech Connect

    Lieber, R.E.

    1982-12-01

    Describes many specialized features of a computer control system, schematic/graphics in particular, which are vital to effectively run today's complex refineries and chemical plants. Illustrates such control systems as a full-graphic control house panel of the 60s, a European refinery control house of the early 70s, and the Ingolstadt refinery control house. Presents diagram showing a shape library. Implementation of state-of-the-art control theory, distributed control, dual hi-way digital instrument systems, and many other person-machine interface developments have been prime factors in process control. Further developments in person-machine interfaces are in progress including voice input/output, touch screen, and other entry devices. Color usage, angle of projection, control house lighting, and pattern recognition are all being studied by vendors, users, and academics. These studies involve psychologists concerned with "quality of life" factors, employee relations personnel concerned with labor contracts or restrictions, as well as operations personnel concerned with just getting the plant to run better.

  17. Graphics Processing Units for HEP trigger systems

    NASA Astrophysics Data System (ADS)

    Ammendola, R.; Bauce, M.; Biagioni, A.; Chiozzi, S.; Cotta Ramusino, A.; Fantechi, R.; Fiorini, M.; Giagu, S.; Gianoli, A.; Lamanna, G.; Lonardo, A.; Messina, A.; Neri, I.; Paolucci, P. S.; Piandani, R.; Pontisso, L.; Rescigno, M.; Simula, F.; Sozzi, M.; Vicini, P.

    2016-07-01

    General-purpose computing on GPUs (Graphics Processing Units) is emerging as a new paradigm in several fields of science, although so far applications have been tailored to the specific strengths of such devices as accelerators in offline computation. With the steady reduction of GPU latencies, and the increase in link and memory throughput, the use of such devices for real-time applications in high-energy physics data acquisition and trigger systems is becoming ripe. We will discuss the use of online parallel computing on GPUs for synchronous low-level triggers, focusing on the CERN NA62 experiment trigger system. The use of GPUs in higher-level trigger systems is also briefly considered.

  18. Kernel density estimation using graphical processing unit

    NASA Astrophysics Data System (ADS)

    Sunarko, Su'ud, Zaki

    2015-09-01

    Kernel density estimation for particles distributed over a 2-dimensional space is calculated using a single graphical processing unit (GTX 660Ti GPU) and the CUDA-C language. Parallel calculations are done for particles having a bivariate normal distribution, by assigning the calculations for equally-spaced node points to the scalar processors of the GPU. The numbers of particles, blocks and threads are varied to identify a favorable configuration. Comparisons are obtained by performing the same calculation using 1, 2 and 4 processors on a 3.0 GHz CPU using MPICH 2.0 routines. Speedups attained with the GPU are in the range of 88 to 349 times compared to the multiprocessor CPU. Blocks of 128 threads are found to be the optimum configuration for this case.
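
    A minimal CUDA analogue of the computation described above assigns one thread per node point, each summing a Gaussian kernel over all particles. The bandwidth h and the grid layout here are illustrative assumptions, not the paper's exact configuration.

        __global__ void kde2d(const float2* pts, int n,        // particle positions
                              float* density, int nx, int ny,  // output node grid
                              float x0, float y0, float dx, float h) {
            int ix = blockIdx.x * blockDim.x + threadIdx.x;
            int iy = blockIdx.y * blockDim.y + threadIdx.y;
            if (ix >= nx || iy >= ny) return;
            float x = x0 + ix * dx, y = y0 + iy * dx;
            float inv2h2 = 1.0f / (2.0f * h * h), sum = 0.0f;
            for (int i = 0; i < n; ++i) {                      // sum kernel over all particles
                float ux = x - pts[i].x, uy = y - pts[i].y;
                sum += __expf(-(ux * ux + uy * uy) * inv2h2);
            }
            density[iy * nx + ix] = sum / (6.2831853f * h * h * n);  // 1/(2*pi*h^2*n) normalization
        }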

  19. Graphics Processing Unit Assisted Thermographic Compositing

    NASA Technical Reports Server (NTRS)

    Ragasa, Scott; McDougal, Matthew; Russell, Sam

    2012-01-01

    Objective: To develop a software application utilizing general purpose graphics processing units (GPUs) for the analysis of large sets of thermographic data. Background: Over the past few years, an increasing effort among scientists and engineers to utilize the GPU in a more general purpose fashion is allowing for supercomputer level results at individual workstations. As data sets grow, the methods to work them grow at an equal, and often greater, pace. Certain common computations can take advantage of the massively parallel and optimized hardware constructs of the GPU to allow for throughput that was previously reserved for compute clusters. These common computations have high degrees of data parallelism, that is, they are the same computation applied to a large set of data where the result does not depend on other data elements. Signal (image) processing is one area where GPUs are being used to greatly increase the performance of certain algorithms and analysis techniques. Technical Methodology/Approach: Apply massively parallel algorithms and data structures to the specific analysis requirements presented when working with thermographic data sets.

  20. Animation and Learning: Selective Processing of Information in Dynamic Graphics.

    ERIC Educational Resources Information Center

    Lowe, R. K.

    2003-01-01

    Studied the selective processing of information in dynamic graphics by 12 undergraduates who received training aided by animation and 12 who did not. Results indicate selective processing of the animation that involved perceptually driven dynamic effects and raise questions about the assumed superiority of animations over static graphics. (SLD)

  1. Text and Illustration Processing System (TIPS) User's Manual. Volume 2: Graphics Processing System.

    ERIC Educational Resources Information Center

    Cox, Ray; Braby, Richard

    This manual contains the procedures to teach the relatively inexperienced author how to enter and process graphic information on a graphics processing system developed by the Training Analysis and Evaluation Group. It describes the illustration processing routines, including scanning graphics into computer memory, displaying graphics, enhancing…

  2. Graphics

    ERIC Educational Resources Information Center

    Post, Susan

    1975-01-01

    An art teacher described an elective course in graphics which was designed to enlarge a student's knowledge of value, color, shape within a shape, transparency, line and texture. This course utilized the technique of working a multi-colored print from a single block that was first introduced by Picasso. (Author/RK)

  3. Reading the Graphics: What Is the Relationship between Graphical Reading Processes and Student Comprehension?

    ERIC Educational Resources Information Center

    Norman, Rebecca R.

    2012-01-01

    Research on comprehension of written text and reading processes suggests a greater use of reading processes is associated with higher scores on comprehension measures of those same texts. Although researchers have suggested that the graphics in text convey important meaning, little research exists on the relationship between children's processes…

  4. Reading the Graphics: Reading Processes Prompted by the Graphics as Second Graders Read Informational Text

    ERIC Educational Resources Information Center

    Norman, Rebecca R.

    2010-01-01

    This dissertation is comprised of two manuscripts that resulted from a single study using verbal protocols to examine the reading processes prompted by the graphics as second graders read informational text. Verbal protocols have provided researchers with an understanding of the processes readers use as they read. Little is known, however, about…

  5. Picture This: Processes Prompted by Graphics in Informational Text

    ERIC Educational Resources Information Center

    Norman, Rebecca R.

    2010-01-01

    Verbal protocols have provided literacy researchers with a strong understanding of what processes readers (both adults and children) use as they read narrative and informational text. Little is known, however, about the comprehension processes that are prompted by the graphics in these texts. This study of nine second graders used verbal protocol…

  6. Graphic Arts: Book Two. Process Camera, Stripping, and Platemaking.

    ERIC Educational Resources Information Center

    Farajollahi, Karim; And Others

    The second of a three-volume set of instructional materials for a course in graphic arts, this manual consists of 10 instructional units dealing with the process camera, stripping, and platemaking. Covered in the individual units are the process camera and darkroom photography, line photography, half-tone photography, other darkroom techniques,…

  7. Graphic Arts: Process Camera, Stripping, and Platemaking. Third Edition.

    ERIC Educational Resources Information Center

    Crummett, Dan

    This document contains teacher and student materials for a course in graphic arts concentrating on camera work, stripping, and plate making in the printing process. Eight units of instruction cover the following topics: (1) the process camera and darkroom equipment; (2) line photography; (3) halftone photography; (4) other darkroom techniques; (5)…

  8. Graphic Arts: The Press and Finishing Processes. Third Edition.

    ERIC Educational Resources Information Center

    Crummett, Dan

    This document contains teacher and student materials for a course in graphic arts concentrating on printing presses and the finishing process for publications. Seven units of instruction cover the following topics: (1) offset press systems; (2) offset inks and dampening chemistry; (3) offset press operating procedures; (4) preventive maintenance…

  9. Graphic Arts: Book Three. The Press and Related Processes.

    ERIC Educational Resources Information Center

    Farajollahi, Karim; And Others

    The third of a three-volume set of instructional materials for a graphic arts course, this manual consists of nine instructional units dealing with presses and related processes. Covered in the units are basic press fundamentals, offset press systems, offset press operating procedures, offset inks and dampening chemistry, preventive maintenance…

  10. Digital-Computer Processing of Graphical Data. Final Report.

    ERIC Educational Resources Information Center

    Freeman, Herbert

    This is the final report of a two-year study concerned with the digital-computer processing of graphical data. Five separate investigations carried out under this study are described briefly, and a detailed bibliography, complete with abstracts, lists the technical papers and reports published during the period of this program.…

  11. Engineering graphics and image processing at Langley Research Center

    NASA Technical Reports Server (NTRS)

    Voigt, Susan J.

    1985-01-01

    The objective of making raster graphics and image processing techniques readily available for the analysis and display of engineering and scientific data is stated. The approach is to develop and acquire tools and skills which are applied to support research activities in such disciplines as aeronautics and structures. A listing of grants and key personnel is given.

  12. Obliterable of graphics and correction of skew using Hough transform for mobile captured documents

    NASA Astrophysics Data System (ADS)

    Chethan, H. K.; Kumar, G. Hemantha

    2011-10-01

    Camera-based document analysis (CBDA) is an emerging field in computer vision and pattern recognition. Cameras are now built into many electronic devices, and handheld imaging devices such as digital cameras, mobile phones, and camera-equipped gaming devices are increasingly replacing the scanner. The goal of this work is to remove graphics from the document, a step that plays a vital role in recognizing characters in mobile-captured documents. In this paper we propose a novel method for separating and removing non-text graphics, such as logos and animations, from the document, together with a method to reduce noise; finally, the skew of the textual content is estimated and corrected using the Hough transform. Experimental results show the efficacy of the approach compared with well-known existing methods.

  13. Novel graphical environment for virtual and real-world operations of tracked mobile manipulators

    NASA Astrophysics Data System (ADS)

    Chen, ChuXin; Trivedi, Mohan M.; Azam, Mir; Lassiter, Nils T.

    1993-08-01

    A simulation, animation, visualization and interactive control (SAVIC) environment has been developed for the design and operation of an integrated mobile manipulator system. This unique system possesses the abilities for (1) multi-sensor simulation, (2) kinematics and locomotion animation, (3) dynamic motion and manipulation animation, (4) transformation between real and virtual modes within the same graphics system, (5) ease in exchanging software modules and hardware devices between real and virtual world operations, and (6) interfacing with a real robotic system. This paper describes a working system and illustrates the concepts by presenting the simulation, animation and control methodologies for a unique mobile robot with articulated tracks, a manipulator, and sensory modules.

  14. Beam line error analysis, position correction, and graphic processing

    NASA Astrophysics Data System (ADS)

    Wang, Fuhua; Mao, Naifeng

    1993-12-01

    A beam transport line error analysis and beam position correction code called "EAC" has been developed, together with a graphics and data post-processing package for TRANSPORT. Based on the linear optics design using TRANSPORT or other general optics codes, EAC independently analyzes the effects of magnet misalignments, systematic and statistical errors of magnetic fields, and initial beam positions on the central trajectory and on transverse beam emittance dilution. EAC also provides an efficient way to develop beam line trajectory correcting schemes. The post-processing package generates various types of graphics, such as the beam line geometrical layout, plots of the Twiss parameters, beam envelopes, etc. It also generates an EAC input file, thus connecting EAC with general optics codes. EAC and the post-processing package are small codes that are easy to access and use. They have become useful tools for the design of transport lines at SSCL.

  15. Graphical representation of the process of solving problems in statics

    NASA Astrophysics Data System (ADS)

    Lopez, Carlos

    2011-03-01

    A method is presented for constructing a graphical knowledge-representation technique called Conceptual Chains. In particular, this tool has been focused on the representation of processes and applied to solving problems in physics, mathematics and engineering. The method is described in ten steps and is illustrated by its development for a particular topic in statics. Various possible didactic applications of the technique are shown.

  16. Graphics processing unit acceleration of computational electromagnetic methods

    NASA Astrophysics Data System (ADS)

    Inman, Matthew

    The use of Graphical Processing Units (GPUs) for scientific applications has been evolving and expanding for over a decade. GPUs provide an alternative to the CPU in the creation and execution of the numerical codes that are often relied upon to perform simulations in computational electromagnetics. While originally designed purely to display graphics on the user's monitor, GPUs today are essentially powerful floating-point co-processors that can be programmed not only to render complex graphics, but also to perform the complex mathematical calculations often encountered in scientific computing. The GPUs currently being produced often contain hundreds of separate cores able to access large amounts of high-speed dedicated memory. By utilizing the power offered by such a specialized processor, it is possible to drastically speed up the calculations required in computational electromagnetics. This increase in speed allows GPU-based simulations to be used in a variety of situations in which computation time has heretofore been a limiting factor, such as educational courses. Teaching electromagnetics often relies upon simple example problems because of the simulation times needed to analyze more complex ones. GPU-based simulations will be shown to allow demonstrations of more advanced problems than previously possible by adapting the methods for use on the GPU. Modules will be developed for a wide variety of teaching situations, utilizing the speed of the GPU to demonstrate various techniques and ideas previously unrealizable.

  17. Adaptive-optics optical coherence tomography processing using a graphics processing unit.

    PubMed

    Shafer, Brandon A; Kriske, Jeffery E; Kocaoglu, Omer P; Turner, Timothy L; Liu, Zhuolin; Lee, John Jaehwan; Miller, Donald T

    2014-01-01

    Graphics processing units are increasingly being used for scientific computing for their powerful parallel processing abilities and moderate price compared to supercomputers and computing grids. In this paper we have used a general-purpose graphics processing unit to process adaptive-optics optical coherence tomography (AOOCT) images in real time. Increasing the processing speed of AOOCT is an essential step in moving this super-high-resolution technology closer to clinical viability. PMID:25570838
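
    The core of spectral-domain OCT reconstruction is a batched FFT over the spectral fringes of each A-scan, which maps directly onto cuFFT. The snippet below sketches only that step; the paper's full AOOCT pipeline would add stages such as spectral resampling and dispersion compensation, and the function name and in-place layout are illustrative.

        #include <cufft.h>

        // d_fringes holds nascans A-scans of `samples` complex points each,
        // already resident on the device; transformed in place.
        void ascan_fft(cufftComplex* d_fringes, int samples, int nascans) {
            cufftHandle plan;
            cufftPlan1d(&plan, samples, CUFFT_C2C, nascans);   // one FFT per A-scan, batched
            cufftExecC2C(plan, d_fringes, d_fringes, CUFFT_FORWARD);
            cufftDestroy(plan);
        }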

  18. Porting a Hall MHD Code to a Graphic Processing Unit

    NASA Technical Reports Server (NTRS)

    Dorelli, John C.

    2011-01-01

    We present our experience porting a Hall MHD code to a Graphics Processing Unit (GPU). The code is a 2nd order accurate MUSCL-Hancock scheme which makes use of an HLL Riemann solver to compute numerical fluxes and second-order finite differences to compute the Hall contribution to the electric field. The divergence of the magnetic field is controlled with Dedner's hyperbolic divergence cleaning method. Preliminary benchmark tests indicate a speedup (relative to a single Nehalem core) of 58x for a double precision calculation. We discuss scaling issues which arise when distributing work across multiple GPUs in a CPU-GPU cluster.

  19. Off-line graphics processing: a case study

    SciTech Connect

    Harris, D.D.

    1983-09-01

    The Drafting Systems organization at Bendix, Kansas City Division, is responsible for the creation of computer-readable media used for producing photoplots, phototools, and production traveler illustrations. From 1977 when the organization acquired its first Applicon system, until 1982 when the off-line graphics processing system was added, the production of Gerber photoplotter tapes and APPLE files presented an ever increasing load on the Applicon systems. This paper describes how the organization is now using a VAX to offload this work from the Applicon systems and presents the techniques used to automate the flow of data from the Applicon sources to the final users.

  20. Line-by-line spectroscopic simulations on graphics processing units

    NASA Astrophysics Data System (ADS)

    Collange, Sylvain; Daumas, Marc; Defour, David

    2008-01-01

    We report here on software that performs line-by-line spectroscopic simulations on gases. Elaborate models (such as narrow band and correlated-K) are accurate and efficient for bands where various components are not simultaneously and significantly active. Line-by-line is probably the most accurate model in the infrared for blends of gases that contain high proportions of H2O and CO2, as was the case for our prototype simulation. Our implementation on graphics processing units sustains a speedup close to 330 on computation-intensive tasks and 12 on memory-intensive tasks compared to implementations on one core of high-end processors. This speedup is due to data parallelism, efficient memory access for specific patterns and some dedicated hardware operators only available in graphics processing units. It is obtained leaving most of processor resources available and it would scale linearly with the number of graphics processing units in parallel machines. Line-by-line simulation coupled with simulation of fluid dynamics was long believed to be economically intractable but our work shows that it could be done with some affordable additional resources compared to what is necessary to perform simulations on fluid dynamics alone.
    Program summary:
    Program title: GPU4RE
    Catalogue identifier: ADZY_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZY_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 62 776
    No. of bytes in distributed program, including test data, etc.: 1 513 247
    Distribution format: tar.gz
    Programming language: C++
    Computer: x86 PC
    Operating system: Linux, Microsoft Windows. Compilation requires either gcc/g++ under Linux or Visual C++ 2003/2005 and Cygwin under Windows. It has been tested using gcc 4.1.2 under Ubuntu Linux 7.04 and using Visual C

  1. Optimized Laplacian image sharpening algorithm based on graphic processing unit

    NASA Astrophysics Data System (ADS)

    Ma, Tinghuai; Li, Lu; Ji, Sai; Wang, Xin; Tian, Yuan; Al-Dhelaan, Abdullah; Al-Rodhaan, Mznah

    2014-12-01

    In classical Laplacian image sharpening, all pixels are processed one by one, which leads to a large amount of computation. Traditional Laplacian sharpening performed on the CPU is considerably time-consuming, especially for large pictures. In this paper, we propose a parallel implementation of Laplacian sharpening based on the Compute Unified Device Architecture (CUDA), a computing platform for Graphic Processing Units (GPUs), and analyze the impact of picture size on performance as well as the relationship between data transfer time and parallel computing time. Further, according to the features of the different memories, an improved scheme is developed that exploits shared memory in the GPU instead of global memory, further increasing the efficiency. Experimental results show that the two novel algorithms outperform the traditional sequential method based on OpenCV in terms of computing speed.
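
    The shared-memory scheme the authors describe can be pictured as follows: each thread block stages a (TILE+2) x (TILE+2) halo of the image into shared memory, then applies the 4-neighbour Laplacian sharpening g = f - lap(f) from the fast on-chip copy. This kernel is a generic sketch of that technique, assuming a blockDim of (TILE, TILE); it is not the paper's code.

        #define TILE 16

        __device__ inline float clamped(const float* in, int w, int h, int x, int y) {
            x = min(max(x, 0), w - 1);                 // replicate border pixels
            y = min(max(y, 0), h - 1);
            return in[y * w + x];
        }

        __global__ void laplacian_sharpen(const float* in, float* out, int w, int h) {
            __shared__ float s[TILE + 2][TILE + 2];    // tile plus 1-pixel halo
            int x = blockIdx.x * TILE + threadIdx.x;
            int y = blockIdx.y * TILE + threadIdx.y;
            int lx = threadIdx.x + 1, ly = threadIdx.y + 1;

            s[ly][lx] = clamped(in, w, h, x, y);
            if (threadIdx.x == 0)        s[ly][0]        = clamped(in, w, h, x - 1, y);
            if (threadIdx.x == TILE - 1) s[ly][TILE + 1] = clamped(in, w, h, x + 1, y);
            if (threadIdx.y == 0)        s[0][lx]        = clamped(in, w, h, x, y - 1);
            if (threadIdx.y == TILE - 1) s[TILE + 1][lx] = clamped(in, w, h, x, y + 1);
            __syncthreads();

            if (x >= w || y >= h) return;
            float lap = s[ly - 1][lx] + s[ly + 1][lx] + s[ly][lx - 1] + s[ly][lx + 1]
                      - 4.0f * s[ly][lx];
            out[y * w + x] = s[ly][lx] - lap;          // sharpened: f - laplacian(f)
        }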

  2. Exploiting graphics processing units for computational biology and bioinformatics.

    PubMed

    Payne, Joshua L; Sinnott-Armstrong, Nicholas A; Moore, Jason H

    2010-09-01

    Advances in the video gaming industry have led to the production of low-cost, high-performance graphics processing units (GPUs) that possess more memory bandwidth and computational capability than central processing units (CPUs), the standard workhorses of scientific computing. With the recent release of general-purpose GPUs and NVIDIA's GPU programming language, CUDA, graphics engines are being adopted widely in scientific computing applications, particularly in the fields of computational biology and bioinformatics. The goal of this article is to concisely present an introduction to GPU hardware and programming, aimed at the computational biologist or bioinformaticist. To this end, we discuss the primary differences between GPU and CPU architecture, introduce the basics of the CUDA programming language, and discuss important CUDA programming practices, such as the proper use of coalesced reads, data types, and memory hierarchies. We highlight each of these topics in the context of computing the all-pairs distance between instances in a dataset, a common procedure in numerous disciplines of scientific computing. We conclude with a runtime analysis of the GPU and CPU implementations of the all-pairs distance calculation. We show our final GPU implementation to outperform the CPU implementation by a factor of 1700. PMID:20658333
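
    As a concrete companion to the all-pairs distance example the article develops, the kernel below tiles the second operand through shared memory so that each instance is fetched from global memory once per block rather than once per pair. It is a generic textbook illustration, not the authors' implementation; the launch must supply blockDim.x * d * sizeof(float) bytes of dynamic shared memory.

        __global__ void all_pairs(const float* X, float* D, int n, int d) {
            extern __shared__ float tile[];            // blockDim.x * d floats
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            for (int j0 = 0; j0 < n; j0 += blockDim.x) {
                int tj = j0 + threadIdx.x;             // instance this thread stages
                for (int k = 0; k < d; ++k)
                    tile[threadIdx.x * d + k] = (tj < n) ? X[tj * d + k] : 0.0f;
                __syncthreads();
                if (i < n) {
                    int jend = j0 + (int)blockDim.x;
                    if (jend > n) jend = n;
                    for (int j = j0; j < jend; ++j) {
                        float s = 0.0f;
                        for (int k = 0; k < d; ++k) {
                            float diff = X[i * d + k] - tile[(j - j0) * d + k];
                            s += diff * diff;
                        }
                        D[i * n + j] = sqrtf(s);       // Euclidean distance
                    }
                }
                __syncthreads();
            }
        }

        // launch: all_pairs<<<(n + 127) / 128, 128, 128 * d * sizeof(float)>>>(X, D, n, d);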

  3. Graphic and Phonological Processing in Chinese Character Identification.

    ERIC Educational Resources Information Center

    Ju, Daushen; Jackson, Nancy Ewald

    1995-01-01

    Examines the effect of graphic, phonological, and graphic-and-phonological information on Chinese character identification by 22 Mandarin-speaking Taiwanese graduate students. Finds that graphic information plays an essential role in Chinese character identification, while phonological information does not enhance the accuracy of identification.…

  4. Solar physics applications of computer graphics and image processing

    NASA Technical Reports Server (NTRS)

    Altschuler, M. D.

    1985-01-01

    Computer graphics devices coupled with computers and carefully developed software provide new opportunities to achieve insight into the geometry and time evolution of scalar, vector, and tensor fields and to extract more information quickly and cheaply from the same image data. Two or more different fields which overlay in space can be calculated from the data (and the physics), then displayed from any perspective, and compared visually. The maximum regions of one field can be compared with the gradients of another. Time changing fields can also be compared. Images can be added, subtracted, transformed, noise filtered, frequency filtered, contrast enhanced, color coded, enlarged, compressed, parameterized, and histogrammed, in whole or section by section. Today it is possible to process multiple digital images to reveal spatial and temporal correlations and cross correlations. Data from different observatories taken at different times can be processed, interpolated, and transformed to a common coordinate system.

  5. Implementing wide baseline matching algorithms on a graphics processing unit.

    SciTech Connect

    Rothganger, Fredrick H.; Larson, Kurt W.; Gonzales, Antonio Ignacio; Myers, Daniel S.

    2007-10-01

    Wide baseline matching is the state of the art for object recognition and image registration problems in computer vision. Though effective, the computational expense of these algorithms limits their application to many real-world problems. The performance of wide baseline matching algorithms may be improved by using a graphical processing unit as a fast multithreaded co-processor. In this paper, we present an implementation of the difference of Gaussian feature extractor, based on the CUDA system of GPU programming developed by NVIDIA, and implemented on their hardware. For a 2000x2000 pixel image, the GPU-based method executes nearly thirteen times faster than a comparable CPU-based method, with no significant loss of accuracy.

  6. Graphics processing units accelerated semiclassical initial value representation molecular dynamics

    NASA Astrophysics Data System (ADS)

    Tamascelli, Dario; Dambrosio, Francesco Saverio; Conte, Riccardo; Ceotto, Michele

    2014-05-01

    This paper presents a Graphics Processing Units (GPUs) implementation of the Semiclassical Initial Value Representation (SC-IVR) propagator for vibrational molecular spectroscopy calculations. The time-averaging formulation of the SC-IVR for power spectrum calculations is employed. Details about the GPU implementation of the semiclassical code are provided. Four molecules with an increasing number of atoms are considered and the GPU-calculated vibrational frequencies perfectly match the benchmark values. The computational time scaling of two GPUs (NVIDIA Tesla C2075 and Kepler K20), respectively, versus two CPUs (Intel Core i5 and Intel Xeon E5-2687W) and the critical issues related to the GPU implementation are discussed. The resulting reduction in computational time and power consumption is significant and semiclassical GPU calculations are shown to be environment friendly.

  7. Graphics Processing Unit Acceleration of Gyrokinetic Turbulence Simulations

    NASA Astrophysics Data System (ADS)

    Hause, Benjamin; Parker, Scott

    2012-10-01

    We find a substantial increase in on-node performance using Graphics Processing Unit (GPU) acceleration in gyrokinetic delta-f particle-in-cell simulation. Optimization is performed on a two-dimensional slab gyrokinetic particle simulation using the Portland Group Fortran compiler with the GPU accelerator compiler directives. We have implemented the GPU acceleration on a Core i7 gaming PC with a NVIDIA GTX 580 GPU. We find comparable, or better, acceleration relative to the NERSC DIRAC cluster with the NVIDIA Tesla C2050 computing processor. The Tesla C2050 is about 2.6 times more expensive than the GTX 580 gaming GPU. Optimization strategies and comparisons between DIRAC and the gaming PC will be presented. We will also discuss progress on optimizing the comprehensive three-dimensional general geometry GEM code.

  8. Graphics Processing Units and High-Dimensional Optimization

    PubMed Central

    Zhou, Hua; Lange, Kenneth; Suchard, Marc A.

    2011-01-01

    This paper discusses the potential of graphics processing units (GPUs) in high-dimensional optimization problems. A single GPU card with hundreds of arithmetic cores can be inserted in a personal computer and dramatically accelerates many statistical algorithms. To exploit these devices fully, optimization algorithms should reduce to multiple parallel tasks, each accessing a limited amount of data. These criteria favor EM and MM algorithms that separate parameters and data. To a lesser extent, block relaxation and coordinate descent and ascent also qualify. We demonstrate the utility of GPUs in nonnegative matrix factorization, PET image reconstruction, and multidimensional scaling. Speedups of 100-fold can easily be attained. Over the next decade, GPUs will fundamentally alter the landscape of computational statistics. It is time for more statisticians to get on board. PMID:21847315
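
    For the nonnegative matrix factorization example cited above, the MM structure is easy to see in the classic multiplicative update H <- H .* (W'V) ./ (W'WH): the dense products go to a BLAS routine, and the remaining elementwise rescaling parallelizes trivially, one thread per entry of H. The kernel below is an illustrative sketch under that decomposition, not the authors' code; num and den are assumed to hold the precomputed W'V and W'WH.

        __global__ void mm_update(float* H, const float* num, const float* den, int sz) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < sz)
                H[i] *= num[i] / (den[i] + 1e-9f);   // small eps guards division by zero
        }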

  9. Graphics processing units accelerated semiclassical initial value representation molecular dynamics.

    PubMed

    Tamascelli, Dario; Dambrosio, Francesco Saverio; Conte, Riccardo; Ceotto, Michele

    2014-05-01

    This paper presents a Graphics Processing Units (GPUs) implementation of the Semiclassical Initial Value Representation (SC-IVR) propagator for vibrational molecular spectroscopy calculations. The time-averaging formulation of the SC-IVR for power spectrum calculations is employed. Details about the GPU implementation of the semiclassical code are provided. Four molecules with an increasing number of atoms are considered and the GPU-calculated vibrational frequencies perfectly match the benchmark values. The computational time scaling of two GPUs (NVIDIA Tesla C2075 and Kepler K20), respectively, versus two CPUs (Intel Core i5 and Intel Xeon E5-2687W) and the critical issues related to the GPU implementation are discussed. The resulting reduction in computational time and power consumption is significant and semiclassical GPU calculations are shown to be environment friendly. PMID:24811627

  10. Graphics Processing Unit Enhanced Parallel Document Flocking Clustering

    SciTech Connect

    Cui, Xiaohui; Potok, Thomas E; ST Charles, Jesse Lee

    2010-01-01

    Analyzing and clustering documents is a complex problem. One explored method of solving this problem borrows from nature, imitating the flocking behavior of birds. One limitation of this method of document clustering is its complexity, O(n²). As the number of documents grows, it becomes increasingly difficult to generate results in a reasonable amount of time. In the last few years, the graphics processing unit (GPU) has received attention for its ability to solve highly-parallel and semi-parallel problems much faster than the traditional sequential processor. In this paper, we have conducted research to exploit this architecture and apply its strengths to the flocking-based document clustering problem. Using the CUDA platform from NVIDIA, we developed a document flocking implementation to be run on the NVIDIA GEFORCE GPU. Performance gains ranged from thirty-six to nearly sixty times improvement of the GPU over the CPU implementation.

  11. Fast Docking on Graphics Processing Units via Ray-Casting

    PubMed Central

    Khar, Karen R.; Goldschmidt, Lukasz; Karanicolas, John

    2013-01-01

    Docking Approach using Ray Casting (DARC) is a structure-based computational method for carrying out virtual screening by docking small molecules into protein surface pockets. In a complementary study we find that DARC can be used to identify known inhibitors from large sets of decoy compounds, and can identify new compounds that are active in biochemical assays. Here, we describe our adaptation of DARC for use on Graphics Processing Units (GPUs), leading to a speedup of approximately 27-fold in typical-use cases over the corresponding calculations carried out using a CPU alone. This dramatic speedup of DARC will enable screening larger compound libraries, screening with more conformations of each compound, and including multiple receptor conformations when screening. We anticipate that all three of these enhanced approaches, which now become tractable, will lead to improved screening results. PMID:23976948

  12. Graphics processing units accelerated semiclassical initial value representation molecular dynamics

    SciTech Connect

    Tamascelli, Dario; Dambrosio, Francesco Saverio; Conte, Riccardo; Ceotto, Michele

    2014-05-07

    This paper presents a Graphics Processing Units (GPUs) implementation of the Semiclassical Initial Value Representation (SC-IVR) propagator for vibrational molecular spectroscopy calculations. The time-averaging formulation of the SC-IVR for power spectrum calculations is employed. Details about the GPU implementation of the semiclassical code are provided. Four molecules with an increasing number of atoms are considered and the GPU-calculated vibrational frequencies perfectly match the benchmark values. The computational time scaling of two GPUs (NVIDIA Tesla C2075 and Kepler K20), respectively, versus two CPUs (Intel Core i5 and Intel Xeon E5-2687W) and the critical issues related to the GPU implementation are discussed. The resulting reduction in computational time and power consumption is significant and semiclassical GPU calculations are shown to be environment friendly.

  13. Role of Graphics Tools in the Learning Design Process

    ERIC Educational Resources Information Center

    Laisney, Patrice; Brandt-Pomares, Pascale

    2015-01-01

    This paper discusses the design activities of students in secondary school in France. Graphics tools are now part of the capacity of design professionals. It is therefore apt to reflect on their integration into technological education. Has the use of intermediate graphical tools changed students' performance, and if so in what direction,…

  14. Graphic Arts: The Press and Finishing Processes. Teacher Guide.

    ERIC Educational Resources Information Center

    Feasley, Sue C., Ed.

    This curriculum guide is the third in a three-volume series of instructional materials for competency-based graphic arts instruction. Each publication is designed to include the technical content and tasks necessary for a student to be employed in an entry-level graphic arts occupation. Introductory materials include an instructional/task analysis…

  15. Graphic Arts: Process Camera, Stripping, and Platemaking. Teacher Guide.

    ERIC Educational Resources Information Center

    Feasley, Sue C., Ed.

    This curriculum guide is the second in a three-volume series of instructional materials for competency-based graphic arts instruction. Each publication is designed to include the technical content and tasks necessary for a student to be employed in an entry-level graphic arts occupation. Introductory materials include an instructional/task…

  16. Matrix decomposition graphics processing unit solver for Poisson image editing

    NASA Astrophysics Data System (ADS)

    Lei, Zhao; Wei, Li

    2012-10-01

    In recent years, gradient-domain methods have been widely discussed in the image processing field, including seamless cloning and image stitching. These algorithms are commonly carried out by solving a large sparse linear system: the Poisson equation. However, solving the Poisson equation is a computation- and memory-intensive task, which makes it unsuitable for real-time image editing. A new matrix decomposition graphics processing unit (GPU) solver (MDGS) is proposed to settle the problem. A matrix decomposition method is used to distribute the work among GPU threads, so that MDGS takes full advantage of the computing power of current GPUs. Additionally, MDGS is a hybrid solver (combining direct and iterative techniques) with a two-level architecture. These features enable MDGS to generate solutions identical to those of the common Poisson methods and to achieve a high convergence rate in most cases. This approach is advantageous in terms of parallelizability, enabling real-time image processing, low memory consumption and broad applicability.
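
    The paper's hybrid decomposition solver is involved, but the system it accelerates is the standard discrete Poisson equation of seamless cloning. For reference, a single Jacobi sweep of that system, the kind of building block iterative GPU Poisson solvers repeat until convergence, looks like this (f is the unknown patch, b the guidance-field divergence, mask marks interior pixels; all names are illustrative):

        __global__ void jacobi_step(const float* f, float* f_new, const float* b,
                                    const unsigned char* mask, int w, int h) {
            int x = blockIdx.x * blockDim.x + threadIdx.x;
            int y = blockIdx.y * blockDim.y + threadIdx.y;
            if (x <= 0 || y <= 0 || x >= w - 1 || y >= h - 1) return;
            int i = y * w + x;
            if (!mask[i]) return;                      // only update interior (cloned) pixels
            // 5-point Laplacian: f[i-1] + f[i+1] + f[i-w] + f[i+w] - 4*f[i] = b[i]
            f_new[i] = 0.25f * (f[i - 1] + f[i + 1] + f[i - w] + f[i + w] - b[i]);
        }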

  17. Accelerating sparse linear algebra using graphics processing units

    NASA Astrophysics Data System (ADS)

    Spagnoli, Kyle E.; Humphrey, John R.; Price, Daniel K.; Kelmelis, Eric J.

    2011-06-01

    The modern graphics processing unit (GPU) found in many standard personal computers is a highly parallel math processor capable of over 1 TFLOPS of peak computational throughput at a cost similar to a high-end CPU, with an excellent FLOPS-to-watt ratio. High-level sparse linear algebra operations are computationally intense, often requiring large amounts of parallel operations, and would seem a natural fit for the processing power of the GPU. Our work is a GPU-accelerated implementation of sparse linear algebra routines. We present results from both direct and iterative sparse system solvers. The GPU execution model featured by NVIDIA GPUs based on CUDA demands very strong parallelism, requiring between hundreds and thousands of simultaneous operations to achieve high performance. Some constructs from linear algebra map extremely well to the GPU and others map poorly. CPUs, on the other hand, do well at smaller-order parallelism and perform acceptably during low-parallelism code segments. Our work addresses this via a hybrid processing model, in which the CPU and GPU work simultaneously to produce results. In many cases, this is accomplished by allowing each platform to do the work it performs most naturally. For example, the CPU is responsible for the graph-theory portion of the direct solvers while the GPU simultaneously performs the low-level linear algebra routines.
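
    One routine such a library must provide is the sparse matrix-vector product in CSR format; the scalar one-thread-per-row mapping below is the simplest GPU version and works acceptably when rows are short. This is a generic sketch, not the authors' hybrid implementation, which additionally splits work between CPU and GPU.

        __global__ void spmv_csr(const int* rowptr, const int* col, const float* val,
                                 const float* x, float* y, int nrows) {
            int r = blockIdx.x * blockDim.x + threadIdx.x;     // one thread per matrix row
            if (r >= nrows) return;
            float sum = 0.0f;
            for (int j = rowptr[r]; j < rowptr[r + 1]; ++j)    // nonzeros of row r
                sum += val[j] * x[col[j]];
            y[r] = sum;
        }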

  18. Parallel Latent Semantic Analysis using a Graphics Processing Unit

    SciTech Connect

    Cui, Xiaohui; Potok, Thomas E; Cavanagh, Joseph M

    2009-01-01

    Latent Semantic Analysis (LSA) can be used to reduce the dimensions of large term-document datasets using Singular Value Decomposition. However, with the ever-expanding size of datasets, current implementations are not fast enough to quickly and easily compute the results on a standard PC. The Graphics Processing Unit (GPU) can solve some highly parallel problems much faster than the traditional sequential processor (CPU). Thus, a deployable system using a GPU to speed up large-scale LSA processes would be a much more effective choice (in terms of cost/performance ratio) than using a computer cluster. In this paper, we present a parallel LSA implementation on the GPU, using NVIDIA's Compute Unified Device Architecture (CUDA) and Compute Unified Basic Linear Algebra Subprograms (CUBLAS). The performance of this implementation is compared to a traditional LSA implementation on the CPU using an optimized Basic Linear Algebra Subprograms library. For large matrices that have dimensions divisible by 16, the GPU algorithm ran five to six times faster than the CPU version.
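
    The heavy step in such a pipeline is dense matrix multiplication, which the paper delegates to CUBLAS. A minimal call through the current cublas_v2 API (column-major, C = A*B with A of size m x k and B of size k x n) is shown below for orientation; the 2009-era CUBLAS API differed slightly, and error checking is omitted.

        #include <cublas_v2.h>

        void gemm(cublasHandle_t handle, const float* dA, const float* dB, float* dC,
                  int m, int n, int k) {
            const float one = 1.0f, zero = 0.0f;
            // C (m x n) = 1.0 * A (m x k) * B (k x n) + 0.0 * C, column-major
            cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, m, n, k,
                        &one, dA, m, dB, k, &zero, dC, m);
        }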

  19. Accelerating sino-atrium computer simulations with graphic processing units.

    PubMed

    Zhang, Hong; Xiao, Zheng; Lin, Shien-fong

    2015-01-01

    Sino-atrial node cells (SANCs) play a significant role in rhythmic firing. To investigate their role in arrhythmia and their interactions with the atrium, computer simulations based on cellular dynamic mathematical models are generally used. However, the large-scale computation usually makes research difficult, given the limited computational power of Central Processing Units (CPUs). In this paper, an accelerating approach with Graphic Processing Units (GPUs) is proposed for a simulation consisting of the SAN tissue and the adjoining atrium. By using the operator splitting method, the computational task was parallelized. Three parallelization strategies were then put forward. The strategy with the shortest running time was further optimized by considering block size, data transfer and partitioning. The results showed that for a simulation with 500 SANCs and 30 atrial cells, the execution time taken by the non-optimized program decreased by 62% with respect to a serial program running on the CPU. The execution time decreased by 80% after the program was optimized. The larger the tissue was, the more significant the acceleration became. The results demonstrated the effectiveness of the proposed GPU-accelerating methods and their promising applications in more complicated biological simulations. PMID:26406070
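
    Operator splitting, as used here, advances the stiff local membrane kinetics and the spatial coupling in separate sub-steps, so each cell's reaction update is independent and maps naturally onto one GPU thread. The following is a generic 1-D reaction-diffusion sketch in NumPy under that splitting, not the authors' SAN/atrium model; all names are illustrative.

    ```python
    import numpy as np

    def step_reaction_diffusion(v, dt, dx, D, reaction):
        """One time step of a 1-D reaction-diffusion cable by operator
        splitting: the pointwise reaction (membrane kinetics, passed in as a
        callable) and the spatial diffusion are advanced separately, so the
        reaction half is trivially parallel per cell."""
        v = v + dt * reaction(v)                 # local ODE part, cell-independent
        lap = np.zeros_like(v)
        lap[1:-1] = (v[:-2] - 2.0 * v[1:-1] + v[2:]) / dx**2
        return v + dt * D * lap                  # explicit diffusion part
    ```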

  20. Accelerating radio astronomy cross-correlation with graphics processing units

    NASA Astrophysics Data System (ADS)

    Clark, M. A.; LaPlante, P. C.; Greenhill, L. J.

    2013-05-01

    We present a highly parallel implementation of the cross-correlation of time-series data using graphics processing units (GPUs), which is scalable to hundreds of independent inputs and suitable for the processing of signals from 'large-N' arrays of many radio antennas. The computational part of the algorithm, the X-engine, is implemented efficiently on NVIDIA's Fermi architecture, sustaining up to 79% of the peak single-precision floating-point throughput. We compare performance obtained for hardware- and software-managed caches, observing significantly better performance for the latter. The high performance reported involves use of a multi-level data tiling strategy in memory and use of a pipelined algorithm with simultaneous computation and transfer of data from host to device memory. The speed of code development, flexibility, and low cost of the GPU implementations compared with application-specific integrated circuit (ASIC) and field programmable gate array (FPGA) implementations have the potential to greatly shorten the cycle of correlator development and deployment, for cases where some power-consumption penalty can be tolerated.
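
    The X-engine's inner operation is a conjugate multiply-accumulate over all antenna pairs. A compact NumPy sketch of that visibility computation for one frequency channel is shown below; the array shapes and names are assumptions, not from the paper.

    ```python
    import numpy as np

    def x_engine(voltages):
        """Correlate all antenna pairs for one frequency channel: `voltages`
        is a complex array of shape (n_antennas, n_samples). The visibility
        matrix is the conjugate multiply-accumulate over time,
        vis[i, j] = sum_t v_i(t) * conj(v_j(t)); on a GPU each (i, j) tile
        of this product is handled by one thread block."""
        return voltages @ voltages.conj().T
    ```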

  1. Handling geophysical flows: Numerical modelling using Graphical Processing Units

    NASA Astrophysics Data System (ADS)

    Garcia-Navarro, Pilar; Lacasta, Asier; Juez, Carmelo; Morales-Hernandez, Mario

    2016-04-01

    Computational tools may help engineers in the assessment of sediment transport during decision-making processes. The main requirements are that the numerical results be accurate and that the simulation models be fast. The present work is based on the 2D shallow water equations in combination with the 2D Exner equation [1]. The accuracy of the resulting numerical model was discussed in previous work. Regarding computational speed, the Exner equation slows down the already costly 2D shallow water model, as the number of variables to solve increases and the numerical stability becomes more restrictive. Moreover, the movement of poorly sorted material over steep areas constitutes a hazardous environmental problem, and computational tools help in the prediction of such landslides [2]. In order to overcome this problem, this work proposes the use of Graphical Processing Units (GPUs) to decrease the simulation time significantly [3, 4]. The numerical scheme implemented on the GPU is based on a finite volume scheme. The mathematical model and the numerical implementation are compared against experimental and field data. In addition, the computational times obtained with the graphical hardware technology are compared against Single-Core (sequential) and Multi-Core (parallel) CPU implementations. References [Juez et al.(2014)] Juez, C., Murillo, J., & García-Navarro, P. (2014) A 2D weakly-coupled and efficient numerical model for transient shallow flow and movable bed. Advances in Water Resources. 71 93-109. [Juez et al.(2013)] Juez, C., Murillo, J., & García-Navarro, P. (2013) 2D simulation of granular flow over irregular steep slopes using global and local coordinates. Journal of Computational Physics. 225 166-204. [Lacasta et al.(2014)] Lacasta, A., Morales-Hernández, M., Murillo, J., & García-Navarro, P. (2014) An optimized GPU implementation of a 2D free surface simulation model on unstructured meshes. Advances in Engineering Software. 78 1-15. [Lacasta

  2. Retrospective Study on Mathematical Modeling Based on Computer Graphic Processing

    NASA Astrophysics Data System (ADS)

    Zhang, Kai Li

    Graphics & image making is an important field of computer application, in which visualization software has been widely used for its convenience and speed. However, modeling designers have found such software limited in function and flexibility because it lacks a mathematical modeling platform. Non-visualization graphics software has since given graphics & image design a sound mathematical modeling platform. In this paper, a polished pyramid is constructed by a multivariate spline function algorithm, validating that the non-visualization software performs well in mathematical modeling.

  3. Accelerating compartmental modeling on a graphical processing unit.

    PubMed

    Ben-Shalom, Roy; Liberman, Gilad; Korngreen, Alon

    2013-01-01

    Compartmental modeling is a widely used tool in neurophysiology, but the detail and scope of such models are frequently limited by a lack of computational resources. Here we implement compartmental modeling on low-cost Graphical Processing Units (GPUs), which significantly increases simulation speed compared to NEURON. Testing two methods for solving the current diffusion equation system revealed which method is more useful for specific neuron morphologies. Regions of applicability were investigated using a range of simulations, from a single membrane potential trace simulated in a simple fork morphology to multiple traces on multiple realistic cells. A peak runtime 150-fold faster than the CPU was achieved. This application can be used for statistical analysis and data-fitting optimizations of compartmental models and may be used for simultaneously simulating large populations of neurons. Since GPUs are forging ahead and proving to be more cost-effective than CPUs, this may significantly decrease the cost of computation power and open new computational possibilities for laboratories with limited budgets. PMID:23508232

  4. Multilevel Summation of Electrostatic Potentials Using Graphics Processing Units

    PubMed Central

    Hardy, David J.; Stone, John E.; Schulten, Klaus

    2009-01-01

    Physical and engineering practicalities involved in microprocessor design have resulted in flat performance growth for traditional single-core microprocessors. The urgent need for continuing increases in the performance of scientific applications requires the use of many-core processors and accelerators such as graphics processing units (GPUs). This paper discusses GPU acceleration of the multilevel summation method for computing electrostatic potentials and forces for a system of charged atoms, which is a problem of paramount importance in biomolecular modeling applications. We present and test a new GPU algorithm for the long-range part of the potentials that computes a cutoff pair potential between lattice points, essentially convolving a fixed 3-D lattice of “weights” over all sub-cubes of a much larger lattice. The implementation exploits the different memory subsystems provided on the GPU to stream optimally sized data sets through the multiprocessors. We demonstrate for the full multilevel summation calculation speedups of up to 26 using a single GPU and 46 using multiple GPUs, enabling the computation of a high-resolution map of the electrostatic potential for a system of 1.5 million atoms in under 12 seconds. PMID:20161132

  5. Graphics processing unit-based alignment of protein interaction networks.

    PubMed

    Xie, Jiang; Zhou, Zhonghua; Ma, Jin; Xiang, Chaojuan; Nie, Qing; Zhang, Wu

    2015-08-01

    Network alignment is an important bridge to understanding human protein-protein interactions (PPIs) and functions through model organisms. However, the underlying subgraph isomorphism problem complicates and increases the time required to align protein interaction networks (PINs). Parallel computing technology is an effective solution to the challenge of aligning large-scale networks via sequential computing. In this study, the typical Hungarian-Greedy Algorithm (HGA) is used as an example for PIN alignment. The authors propose an HGA with 2-nearest neighbours (HGA-2N) and implement its graphics processing unit (GPU) acceleration. Numerical experiments demonstrate that HGA-2N can find alignments that are close to those found by HGA while dramatically reducing computing time. The GPU implementation of HGA-2N optimises the parallel pattern, computing mode and storage mode, and improves the CPU-to-GPU computing time ratio compared with HGA when large-scale networks are considered. By using HGA-2N on GPUs, conserved PPIs can be observed and potential PPIs can be predicted. Among the predictions based on 25 common Gene Ontology terms, 42.8% can be found in the Human Protein Reference Database. Furthermore, a new method of reconstructing phylogenetic trees is introduced, which shows the same relationships among five herpes viruses that are obtained using other methods. PMID:26243827

  6. Graphics Processing Unit Acceleration of Gyrokinetic Turbulence Simulations

    NASA Astrophysics Data System (ADS)

    Hause, Benjamin; Parker, Scott; Chen, Yang

    2013-10-01

    We find a substantial increase in on-node performance using Graphics Processing Unit (GPU) acceleration in gyrokinetic delta-f particle-in-cell simulation. Optimization is performed on a two-dimensional slab gyrokinetic particle simulation using the Portland Group Fortran compiler with OpenACC compiler directives and CUDA Fortran. A mixed implementation of both OpenACC and CUDA is demonstrated; CUDA is required for optimizing the particle deposition algorithm. We have implemented the GPU acceleration on a third-generation Core i7 gaming PC with two NVIDIA GTX 680 GPUs. We find comparable, or better, acceleration relative to the NERSC DIRAC cluster with the NVIDIA Tesla C2050 computing processor. The Tesla C2050 is about 2.6 times more expensive than the GTX 580 gaming GPU. We also see enormous speedups (10 or more) on the Titan supercomputer at Oak Ridge with Kepler K20 GPUs. Results show speed-ups comparable to or better than those of OpenMP models utilizing multiple cores. The use of hybrid OpenACC, CUDA Fortran, and MPI models across many nodes will also be discussed, and optimization strategies will be presented. We will discuss progress on optimizing the comprehensive three-dimensional, general-geometry GEM code.

  7. Kinematic modelling of disc galaxies using graphics processing units

    NASA Astrophysics Data System (ADS)

    Bekiaris, G.; Glazebrook, K.; Fluke, C. J.; Abraham, R.

    2016-01-01

    With large-scale integral field spectroscopy (IFS) surveys of thousands of galaxies currently underway or planned, the astronomical community is in need of methods, techniques and tools that will allow the analysis of huge amounts of data. We focus on the kinematic modelling of disc galaxies and investigate the potential use of massively parallel architectures, such as the graphics processing unit (GPU), as an accelerator for the computationally expensive model-fitting procedure. We review the algorithms involved in model-fitting and evaluate their suitability for GPU implementation. We employ different optimization techniques, including the Levenberg-Marquardt and nested sampling algorithms, but also a naive brute-force approach based on nested grids. We find that the GPU can accelerate the model-fitting procedure by up to a factor of ~100 when compared to a single-threaded CPU, and up to a factor of ~10 when compared to a multithreaded dual-CPU configuration. Our method's accuracy, precision and robustness are assessed by successfully recovering the kinematic properties of simulated data, and also by verifying the kinematic modelling results of galaxies from the GHASP and DYNAMO surveys as found in the literature. The resulting GBKFIT code is available for download from http://supercomputing.swin.edu.au/gbkfit.
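
    Of the optimizers mentioned, the nested-grid brute-force search is the most directly parallel: every grid point's goodness-of-fit is independent of every other. Below is a minimal serial NumPy sketch of such a chi-squared grid search; the names and signature are illustrative, not taken from GBKFIT.

    ```python
    import numpy as np

    def brute_force_fit(model, data, sigma, grids):
        """Exhaustive chi-squared search over a parameter grid. `model` is a
        callable mapping a parameter vector to a prediction with the shape of
        `data`; `grids` is a sequence of 1-D arrays, one per parameter. Every
        grid point is evaluated independently, which is why this maps so
        directly onto a GPU."""
        best, best_chi2 = None, np.inf
        for params in np.array(np.meshgrid(*grids)).T.reshape(-1, len(grids)):
            chi2 = np.sum(((data - model(params)) / sigma) ** 2)
            if chi2 < best_chi2:
                best, best_chi2 = params, chi2
        return best, best_chi2
    ```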

  8. Use of general purpose graphics processing units with MODFLOW

    USGS Publications Warehouse

    Hughes, Joseph D.; White, Jeremy T.

    2013-01-01

    To evaluate the use of general-purpose graphics processing units (GPGPUs) to improve the performance of MODFLOW, an unstructured preconditioned conjugate gradient (UPCG) solver has been developed. The UPCG solver uses a compressed sparse row storage scheme and includes Jacobi, zero fill-in incomplete and modified-incomplete lower-upper (LU) factorization, and generalized least-squares polynomial preconditioners. The UPCG solver also includes options for sequential and parallel solution on the central processing unit (CPU) using OpenMP. For simulations utilizing the GPGPU, all basic linear algebra operations are performed on the GPGPU; memory copies between the CPU and GPGPU occur prior to the first iteration of the UPCG solver and after satisfying head and flow criteria or exceeding a maximum number of iterations. The efficiency of the UPCG solver for GPGPU and CPU solutions is benchmarked using simulations of a synthetic, heterogeneous unconfined aquifer with tens of thousands to millions of active grid cells. Testing indicates GPGPU speedups on the order of 2 to 8, relative to the standard MODFLOW preconditioned conjugate gradient (PCG) solver, can be achieved when (1) memory copies between the CPU and GPGPU are optimized, (2) the percentage of time performing memory copies between the CPU and GPGPU is small relative to the calculation time, (3) high-performance GPGPU cards are utilized, and (4) CPU-GPGPU combinations are used to execute sequential operations that are difficult to parallelize. Furthermore, UPCG solver testing indicates GPGPU speedups exceed parallel CPU speedups achieved using OpenMP on multicore CPUs for preconditioners that can be easily parallelized.
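
    The Jacobi preconditioner named above is simply the inverse of the matrix diagonal, which makes the preconditioning step an elementwise product that parallelizes perfectly. A minimal dense NumPy sketch of Jacobi-preconditioned conjugate gradients follows; it is illustrative only, since the UPCG solver works on CSR-stored sparse systems.

    ```python
    import numpy as np

    def jacobi_pcg(A, b, tol=1e-8, max_iter=1000):
        """Conjugate gradients with a Jacobi (diagonal) preconditioner, the
        simplest of the UPCG options; every vector operation here is a
        natural GPU kernel."""
        x = np.zeros_like(b)
        m_inv = 1.0 / np.diag(A)          # Jacobi preconditioner M^{-1}
        r = b - A @ x                     # initial residual
        z = m_inv * r
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol:
                break
            z = m_inv * r                 # apply preconditioner
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x
    ```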

  9. Use of general purpose graphics processing units with MODFLOW.

    PubMed

    Hughes, Joseph D; White, Jeremy T

    2013-01-01

    To evaluate the use of general-purpose graphics processing units (GPGPUs) to improve the performance of MODFLOW, an unstructured preconditioned conjugate gradient (UPCG) solver has been developed. The UPCG solver uses a compressed sparse row storage scheme and includes Jacobi, zero fill-in incomplete and modified-incomplete lower-upper (LU) factorization, and generalized least-squares polynomial preconditioners. The UPCG solver also includes options for sequential and parallel solution on the central processing unit (CPU) using OpenMP. For simulations utilizing the GPGPU, all basic linear algebra operations are performed on the GPGPU; memory copies between the CPU and GPGPU occur prior to the first iteration of the UPCG solver and after satisfying head and flow criteria or exceeding a maximum number of iterations. The efficiency of the UPCG solver for GPGPU and CPU solutions is benchmarked using simulations of a synthetic, heterogeneous unconfined aquifer with tens of thousands to millions of active grid cells. Testing indicates GPGPU speedups on the order of 2 to 8, relative to the standard MODFLOW preconditioned conjugate gradient (PCG) solver, can be achieved when (1) memory copies between the CPU and GPGPU are optimized, (2) the percentage of time performing memory copies between the CPU and GPGPU is small relative to the calculation time, (3) high-performance GPGPU cards are utilized, and (4) CPU-GPGPU combinations are used to execute sequential operations that are difficult to parallelize. Furthermore, UPCG solver testing indicates GPGPU speedups exceed parallel CPU speedups achieved using OpenMP on multicore CPUs for preconditioners that can be easily parallelized. PMID:23281733

  10. MASSIVELY PARALLEL LATENT SEMANTIC ANALYSES USING A GRAPHICS PROCESSING UNIT

    SciTech Connect

    Cavanagh, J.; Cui, S.

    2009-01-01

    Latent Semantic Analysis (LSA) aims to reduce the dimensions of large term-document datasets using Singular Value Decomposition. However, with the ever-expanding size of datasets, current implementations are not fast enough to quickly and easily compute the results on a standard PC. A graphics processing unit (GPU) can solve some highly parallel problems much faster than a traditional sequential processor or central processing unit (CPU). Thus, a deployable system using a GPU to speed up large-scale LSA processes would be a much more effective choice (in terms of cost/performance ratio) than using a PC cluster. Due to the GPU's application-specific architecture, harnessing the GPU's computational prowess for LSA is a great challenge. We present a parallel LSA implementation on the GPU, using NVIDIA® Compute Unified Device Architecture and Compute Unified Basic Linear Algebra Subprograms software. The performance of this implementation is compared to a traditional LSA implementation on a CPU using an optimized Basic Linear Algebra Subprograms library. After implementation, we discovered that the GPU version of the algorithm was twice as fast for large matrices (1000×1000 and above) that had dimensions not divisible by 16. For large matrices that did have dimensions divisible by 16, the GPU algorithm ran five to six times faster than the CPU version. The large variation is due to architectural benefits of the GPU for matrices divisible by 16. It should be noted that the overall speeds of the CPU version did not vary significantly with whether the matrix dimensions were divisible by 16. Further research is needed in order to produce a fully implementable version of LSA. With that in mind, the research we present shows that the GPU is a viable option for increasing the speed of LSA, in terms of cost/performance ratio.

  11. Accelerating chemical database searching using graphics processing units.

    PubMed

    Liu, Pu; Agrafiotis, Dimitris K; Rassokhin, Dmitrii N; Yang, Eric

    2011-08-22

    The utility of chemoinformatics systems depends on the accurate computer representation and efficient manipulation of chemical compounds. In such systems, a small molecule is often digitized as a large fingerprint vector, where each element indicates the presence/absence or the number of occurrences of a particular structural feature. Since in theory the number of unique features can be exceedingly large, these fingerprint vectors are usually folded into much shorter ones using hashing and modulo operations, allowing fast "in-memory" manipulation and comparison of molecules. There is increasing evidence that lossless fingerprints can substantially improve retrieval performance in chemical database searching (substructure or similarity), which has led to the development of several lossless fingerprint compression algorithms. However, any gains in storage and retrieval afforded by compression need to be weighed against the extra computational burden required for decompression before these fingerprints can be compared. Here we demonstrate that graphics processing units (GPU) can greatly alleviate this problem, enabling the practical application of lossless fingerprints on large databases. More specifically, we show that, with the help of a ~$500 ordinary video card, the entire PubChem database of ~32 million compounds can be searched in ~0.2-2 s on average, which is 2 orders of magnitude faster than a conventional CPU. If multiple query patterns are processed in batch, the speedup is even more dramatic (less than 0.02-0.2 s/query for 1000 queries). In the present study, we use the Elias gamma compression algorithm, which results in a compression ratio as high as 0.097. PMID:21696144
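
    Fingerprint comparison itself reduces to bitwise AND/OR plus population counts, which is the part a GPU parallelizes across millions of database entries. Below is a minimal NumPy sketch of Tanimoto similarity over packed binary fingerprints; it works on uncompressed fingerprints for clarity, whereas the paper's pipeline first decompresses Elias-gamma-coded ones, and the names are illustrative.

    ```python
    import numpy as np

    def tanimoto(query, db):
        """Tanimoto similarity between one packed binary fingerprint and a
        database of them. `query` is a (n_bytes,) uint8 array and `db` is a
        (n_mols, n_bytes) uint8 array; the bitwise AND/OR plus popcount
        pattern is exactly what a GPU search parallelises per molecule."""
        intersection = np.unpackbits(query & db, axis=1).sum(axis=1)
        union = np.unpackbits(query | db, axis=1).sum(axis=1)
        return intersection / np.maximum(union, 1)   # guard all-zero fingerprints
    ```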

  12. Area-delay trade-offs of texture decompressors for a graphics processing unit

    NASA Astrophysics Data System (ADS)

    Novoa Súñer, Emilio; Ituero, Pablo; López-Vallejo, Marisa

    2011-05-01

    Graphics Processing Units have become a booster for the microelectronics industry. However, due to intellectual property issues, there is a serious lack of information on the implementation details of the hardware architecture behind GPUs. For instance, the way texture is handled and decompressed in a GPU to reduce bandwidth usage has never been dealt with in depth from a hardware point of view. This work addresses a comparative study of the hardware implementation of different texture decompression algorithms for both conventional (PCs and video game consoles) and mobile platforms. Circuit synthesis is performed targeting both a reconfigurable hardware platform and a 90 nm standard-cell library. Area-delay trade-offs have been extensively analyzed, which allows us to compare the complexity of the decompressors and thus determine the suitability of each algorithm for systems with limited hardware resources.

  13. Accelerating Cardiac Bidomain Simulations Using Graphics Processing Units

    PubMed Central

    Neic, Aurel; Liebmann, Manfred; Hoetzl, Elena; Mitchell, Lawrence; Vigmond, Edward J.; Haase, Gundolf

    2013-01-01

    Anatomically realistic and biophysically detailed multiscale computer models of the heart are playing an increasingly important role in advancing our understanding of integrated cardiac function in health and disease. Such detailed simulations, however, are vastly demanding computationally, which is a limiting factor for a wider adoption of in-silico modeling. While current trends in high-performance computing (HPC) hardware promise to alleviate this problem, exploiting the potential of such architectures remains challenging, since strongly scalable algorithms are required to reduce execution times. Alternatively, acceleration technologies such as graphics processing units (GPUs) are being considered. While the potential of GPUs has been demonstrated in various applications, the benefits in the context of bidomain simulations, where large sparse linear systems have to be solved in parallel with advanced numerical techniques, are less clear. In this study, the feasibility of multi-GPU bidomain simulations is demonstrated by running strong scalability benchmarks using a state-of-the-art model of rabbit ventricles. The model is spatially discretized using the finite element method (FEM) on fully unstructured grids. The GPU code is directly derived from a large pre-existing code, the Cardiac Arrhythmia Research Package (CARP), with very minor perturbation of the code base. Overall, bidomain simulations were sped up by a factor of 11.8 to 16.3 in benchmarks running on 6-20 GPUs compared to the same number of CPU cores. To match the fastest GPU simulation, which engaged 20 GPUs, 476 CPU cores were required on a national supercomputing facility. PMID:22692867

  14. Flocking-based Document Clustering on the Graphics Processing Unit

    SciTech Connect

    Cui, Xiaohui; Potok, Thomas E; Patton, Robert M; ST Charles, Jesse Lee

    2008-01-01

    Analyzing and grouping documents by content is a complex problem. One explored method of solving this problem borrows from nature, imitating the flocking behavior of birds: each bird represents a single document and flies toward other documents that are similar to it. One limitation of this method of document clustering is its complexity, O(n²). As the number of documents grows, it becomes increasingly difficult to receive results in a reasonable amount of time. However, flocking behavior, along with most naturally inspired algorithms such as ant colony optimization and particle swarm optimization, is highly parallel and has found increased performance on expensive cluster computers. In the last few years, the graphics processing unit (GPU) has received attention for its ability to solve highly parallel and semi-parallel problems much faster than the traditional sequential processor. Some applications see a huge increase in performance on this new platform. The cost of these high-performance devices is also marginal when compared with the price of cluster machines. In this paper, we have conducted research to exploit this architecture and apply its strengths to the document flocking problem. Our results highlight the potential benefit the GPU brings to all naturally inspired algorithms. Using the CUDA platform from NVIDIA, we developed a document flocking implementation to be run on the NVIDIA GeForce 8800. Additionally, we developed a similar but sequential implementation of the same algorithm to be run on a desktop CPU. We tested the performance of each on groups of news articles ranging in size from 200 to 3000 documents. The results of these tests were very significant: performance gains ranged from three to nearly five times improvement of the GPU over the CPU implementation. This dramatic improvement in runtime makes the GPU a potentially revolutionary platform for document clustering algorithms.

  15. Viscoelastic Finite Difference Modeling Using Graphics Processing Units

    NASA Astrophysics Data System (ADS)

    Fabien-Ouellet, G.; Gloaguen, E.; Giroux, B.

    2014-12-01

    Full waveform seismic modeling requires a huge amount of computing power that still challenges today's technology. This limits the applicability of powerful processing approaches in seismic exploration, like full-waveform inversion. This paper explores the use of Graphics Processing Units (GPU) to compute a time-based finite-difference solution to the viscoelastic wave equation. The aim is to investigate whether the adoption of GPU technology can significantly reduce the computing time of simulations. The code presented herein is based on the freely accessible 2D software of Bohlen (2002), provided under the GNU General Public License. The implementation uses a second-order centred differences scheme to approximate time derivatives, and staggered-grid schemes with centred differences of order 2, 4, 6, 8, and 12 for spatial derivatives. The code is fully parallel and is written using the Message Passing Interface (MPI), and it thus supports simulations of vast seismic models on a cluster of CPUs. To port the code from Bohlen (2002) to GPUs, the OpenCL framework was chosen for its ability to work on both CPUs and GPUs and for its adoption by most GPU manufacturers. In our implementation, OpenCL works in conjunction with MPI, which allows computations on a cluster of GPUs for large-scale model simulations. We tested our code for model sizes between 100² and 6000² elements. Comparison shows a decrease in computation time of more than two orders of magnitude between the GPU implementation run on an AMD Radeon HD 7950 and the CPU implementation run on a 2.26 GHz Intel Xeon Quad-Core. The speed-up varies depending on the order of the finite difference approximation and generally increases for higher orders. Increasing speed-ups are also obtained for increasing model sizes, which can be explained by the amortization of kernel overheads and of the delays introduced by memory transfers to and from the GPU through the PCI-E bus. Those tests indicate that the GPU memory size

  16. Megahertz processing rate for Fourier domain optical coherence tomography using a graphics processing unit

    NASA Astrophysics Data System (ADS)

    Watanabe, Yuuki; Kamiyama, Dai

    2012-01-01

    We developed ultrahigh-speed processing of FD-OCT images using a low-cost graphics processing unit (GPU) with many stream processors to realize highly parallel processing. The processing line rates of half-range and full-range FD-OCT were 1.34 MHz and 0.70 MHz, respectively, for a spectral interference image of 1024 (FFT size) × 2048 lateral A-scans. A display rate of 22.5 frames per second for processed full-range images was achieved in our OCT system using an InGaAs line scan camera operated at 47 kHz.
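
    Per A-scan, FD-OCT reconstruction is essentially an independent FFT of each spectral fringe, which is why line rates scale so well with the number of GPU stream processors. The sketch below is a bare-bones NumPy version that omits wavenumber resampling and dispersion compensation; the names and the log-scaling choice are illustrative.

    ```python
    import numpy as np

    def fdoct_ascans(fringe):
        """Minimal FD-OCT depth reconstruction: each lateral position's
        spectral interference fringe is Fourier transformed into an A-scan.
        `fringe` has shape (fft_size, n_ascans); DC removal, then a batched
        FFT along the spectral axis -- the same independent per-line work a
        GPU pipelines at MHz line rates."""
        fringe = fringe - fringe.mean(axis=0, keepdims=True)   # suppress DC term
        ascans = np.fft.fft(fringe, axis=0)
        # keep the positive-depth half and convert to dB magnitude
        return 20.0 * np.log10(np.abs(ascans[: fringe.shape[0] // 2]) + 1e-12)
    ```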

  17. Graphic Arts: Process Camera, Stripping, and Platemaking. Fourth Edition. Teacher Edition [and] Student Edition.

    ERIC Educational Resources Information Center

    Multistate Academic and Vocational Curriculum Consortium, Stillwater, OK.

    This publication contains both a teacher edition and a student edition of materials for a course in graphic arts that covers the process camera, stripping, and platemaking. The course introduces basic concepts and skills necessary for entry-level employment in a graphic communication occupation. The contents of the materials are tied to measurable…

  18. Software Graphics Processing Unit (sGPU) for Deep Space Applications

    NASA Technical Reports Server (NTRS)

    McCabe, Mary; Salazar, George; Steele, Glen

    2015-01-01

    A graphics processing capability will be required for deep space missions and must include a range of applications, from safety-critical vehicle health status to telemedicine for crew health. However, preliminary radiation testing of commercial graphics processing cards suggests they cannot operate in the deep space radiation environment. Investigation into a Software Graphics Processing Unit (sGPU), comprised of commercial-equivalent radiation-hardened/tolerant single-board computers, field programmable gate arrays, and safety-critical display software, shows promising results. Preliminary performance of approximately 30 frames per second (FPS) has been achieved. Use of multi-core processors may provide a significant increase in performance.

  19. Processing and distribution of graphical information in offshore engineering

    SciTech Connect

    Rodriguez, M.V.R.; Simao, N.C.; Lorenzoni, C.; Ferrante, A.J.

    1994-12-31

    This paper briefly reports the experience of the Production Department of Petrobras in transforming its centralized, mainframe-based computational environment into an open, distributed client/server environment through the application of downsizing concepts. The paper then focuses on the problem of handling technical graphics information for its more than 70 fixed offshore platforms, in water depths ranging from 10 to 180 m. The solution adopted, with emphasis on the local network of a typical production region and on the connection of the offshore platforms to that network, is discussed first. The experience collected during the implementation and operation of this solution is then reported, and practical conclusions are presented with regard to the main issues involved.

  20. The Merging Of Computer Graphics And Image Processing Technologies And Applications

    NASA Astrophysics Data System (ADS)

    Brammer, Robert F.; Stephenson, Thomas P.

    1990-01-01

    Historically, computer graphics and image processing technologies and applications have been distinct, both in their research communities and in their hardware and software product suppliers. Computer graphics deals with synthesized visual depictions of outputs from computer models, whereas image processing (and analysis) deals with computational operations on input data from "imaging sensors". Furthermore, the fundamental storage and computational aspects of these two fields are different from one another. For example, many computer graphics applications store data using vector formats whereas image processing applications generally use raster formats. Computer graphics applications may involve polygonal representations, floating point operations, and mathematical models of physical phenomena such as lighting conditions, surface reflecting properties, etc. Image processing applications may involve pixel operations, fixed point representations, global operations (e.g. image rotations), and nonlinear signal processing algorithms.

  1. Grace: A cross-platform micromagnetic simulator on graphics processing units

    NASA Astrophysics Data System (ADS)

    Zhu, Ru

    2015-12-01

    A micromagnetic simulator running on graphics processing units (GPUs) is presented. Unlike the GPU implementations of other research groups, which predominantly run on NVidia's CUDA platform, this simulator is developed with C++ Accelerated Massive Parallelism (C++ AMP) and is hardware-platform independent. It runs on GPUs from vendors including NVidia, AMD and Intel, and achieves a significant performance boost compared to previous central processing unit (CPU) simulators, up to two orders of magnitude. The simulator paves the way for running large micromagnetic simulations on both high-end workstations with dedicated graphics cards and low-end personal computers with integrated graphics cards, and is freely available to download.

  2. Mobile Devices and GPU Parallelism in Ionospheric Data Processing

    NASA Astrophysics Data System (ADS)

    Mascharka, D.; Pankratius, V.

    2015-12-01

    Scientific data acquisition in the field is often constrained by data transfer backchannels to analysis environments. Geoscientists are therefore facing practical bottlenecks with increasing sensor density and variety. Mobile devices, such as smartphones and tablets, offer promising solutions to key problems in scientific data acquisition, pre-processing, and validation by providing advanced capabilities in the field. This is due to affordable network connectivity options and the increasing mobile computational power. This contribution exemplifies a scenario faced by scientists in the field and presents the "Mahali TEC Processing App" developed in the context of the NSF-funded Mahali project. Aimed at atmospheric science and the study of ionospheric Total Electron Content (TEC), this app is able to gather data from various dual-frequency GPS receivers. It demonstrates parsing of full-day RINEX files on mobile devices and on-the-fly computation of vertical TEC values based on satellite ephemeris models that are obtained from NASA. Our experiments show how parallel computing on the mobile device GPU enables fast processing and visualization of up to 2 million datapoints in real-time using OpenGL. GPS receiver bias is estimated through minimum TEC approximations that can be interactively adjusted by scientists in the graphical user interface. Scientists can also perform approximate computations for "quickviews" to reduce CPU processing time and memory consumption. In the final stage of our mobile processing pipeline, scientists can upload data to the cloud for further processing. Acknowledgements: The Mahali project (http://mahali.mit.edu) is funded by the NSF INSPIRE grant no. AGS-1343967 (PI: V. Pankratius). We would like to acknowledge our collaborators at Boston College, Virginia Tech, Johns Hopkins University, Colorado State University, as well as the support of UNAVCO for loans of dual-frequency GPS receivers for use in this project, and Intel for loans of

  3. Graphical user interface for image acquisition and processing

    DOEpatents

    Goldberg, Kenneth A.

    2002-01-01

    An event-driven, GUI-based image acquisition interface for the IDL programming environment, designed for CCD camera control and image acquisition directly into the IDL environment, where image manipulation and data analysis can be performed, together with a toolbox of real-time analysis applications. Running the image acquisition hardware directly from IDL removes the necessity of first saving images in one program and then importing the data into IDL for analysis in a second step. Bringing the data directly into IDL creates an opportunity for the implementation of IDL image processing and display functions in real time. The program allows control over the available charge-coupled device (CCD) detector parameters, data acquisition, file saving and loading, and image manipulation and processing, all from within IDL. The program is built using IDL's widget libraries to control the on-screen display and user interface.

  4. Calculation of HELAS amplitudes for QCD processes using graphics processing unit (GPU)

    NASA Astrophysics Data System (ADS)

    Hagiwara, K.; Kanzaki, J.; Okamura, N.; Rainwater, D.; Stelzer, T.

    2010-11-01

    We use a graphics processing unit (GPU) for fast calculations of helicity amplitudes of quark and gluon scattering processes in massless QCD. New HEGET (HELAS Evaluation with GPU Enhanced Technology) codes for gluon self-interactions are introduced, and a C++ program to convert the MadGraph-generated FORTRAN codes into HEGET codes in CUDA (a C-platform for general purpose computing on GPU) is created. Because of the proliferation of the number of Feynman diagrams and the number of independent color amplitudes, the maximum number of final-state jets we can evaluate on a GPU is limited to 4 for pure gluon processes (gg → 4g), or 5 for processes with one or more quark lines such as qq̄ → 5g and qq → qq + 3g. Compared with the usual CPU-based programs, we obtain 60-100 times better performance on the GPU, except for 5-jet production processes and the gg → 4g processes, for which the GPU gain over the CPU is about 20.

  5. Mobile processing in open systems

    SciTech Connect

    Sapaty, P.S.

    1996-12-31

    A universal spatial automaton, called WAVE, for highly parallel processing in arbitrary distributed systems is described. The automaton is based on a virus principle, where recursive programs, or waves, self-navigate in networks of data or processes in multiple cooperative parts while controlling and modifying the environment they exist in and move through. The layered general organization of the automaton, as well as its distributed implementation in computer networks, is discussed. As the automaton dynamically creates, modifies, activates and processes any knowledge networks arbitrarily distributed in computer networks, it can easily model any other paradigms for parallel and distributed computing. A comparison of WAVE with some known programming models and languages, and ideas for their possible integration, are also given.

  6. Student Thinking Processes While Constructing Graphic Representations of Textbook Content: What Insights Do Think-Alouds Provide?

    ERIC Educational Resources Information Center

    Scott, D. Beth; Dreher, Mariam Jean

    2016-01-01

    This study examined the thinking processes students engage in while constructing graphic representations of textbook content. Twenty-eight students who either used graphic representations in a routine manner during social studies instruction or learned to construct graphic representations based on the rhetorical patterns used to organize textbook…

  7. A graphically oriented specification language for automatic code generation. GRASP/Ada: A Graphical Representation of Algorithms, Structure, and Processes for Ada, phase 1

    NASA Technical Reports Server (NTRS)

    Cross, James H., II; Morrison, Kelly I.; May, Charles H., Jr.; Waddel, Kathryn C.

    1989-01-01

    The first phase of a three-phase effort to develop a new graphically oriented specification language which will facilitate the reverse engineering of Ada source code into graphical representations (GRs) as well as the automatic generation of Ada source code is described. A simplified view of the three phases of Graphical Representations for Algorithms, Structure, and Processes for Ada (GRASP/Ada) with respect to three basic classes of GRs is presented. Phase 1 concentrated on the derivation of an algorithmic diagram, the control structure diagram (CSD) (CRO88a) from Ada source code or Ada PDL. Phase 2 includes the generation of architectural and system level diagrams such as structure charts and data flow diagrams and should result in a requirements specification for a graphically oriented language able to support automatic code generation. Phase 3 will concentrate on the development of a prototype to demonstrate the feasibility of this new specification language.

  8. Multiparallel decompression simultaneously using multicore central processing unit and graphic processing unit

    NASA Astrophysics Data System (ADS)

    Petta, Andrea; Serra, Luigi; De Nino, Maurizio

    2013-01-01

    The discrete wavelet transform (DWT)-based compression algorithm is widely used in many image compression systems. The time-consuming computation of the 9/7 discrete wavelet decomposition and the bit-plane decoding is usually the bottleneck of these systems. In order to perform real-time decompression on a massive bit stream of compressed images continuously down-linked from the satellite, we propose a graphics processing unit (GPU)-accelerated decoding system in which the GPU and multiple central processing unit (CPU) threads run in parallel. To obtain maximum throughput from a pipeline structure processing continuous satellite images, an additional workload-balancing algorithm has been implemented that distributes jobs to the CPU and GPU parts so that both run at approximately the same processing speed. Through this pipelined CPU-GPU heterogeneous computing, the entire decoding system approaches a speedup of 15× over its single-threaded CPU counterpart. The proposed channel and source decoding system is able to decompress 1024×1024 satellite images at a speed of 20 frames/s.

  9. A DDC Bibliography on Optical or Graphic Information Processing (Information Sciences Series). Volume I.

    ERIC Educational Resources Information Center

    Defense Documentation Center, Alexandria, VA.

    This unclassified-unlimited bibliography contains 183 references, with abstracts, dealing specifically with optical or graphic information processing. Citations are grouped under three headings: display devices and theory, character recognition, and pattern recognition. Within each group, they are arranged in accession number (AD-number) sequence.…

  10. Parallelized CCHE2D flow model with CUDA Fortran on Graphics Process Units

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper presents the CCHE2D implicit flow model parallelized using CUDA Fortran programming technique on Graphics Processing Units (GPUs). A parallelized implicit Alternating Direction Implicit (ADI) solver using Parallel Cyclic Reduction (PCR) algorithm on GPU is developed and tested. This solve...

  11. Graphics Processing Unit-Based Bioheat Simulation to Facilitate Rapid Decision Making Associated with Cryosurgery Training.

    PubMed

    Keelan, Robert; Zhang, Hong; Shimada, Kenji; Rabin, Yoed

    2016-04-01

    This study focuses on the implementation of an efficient numerical technique for cryosurgery simulations on a graphics processing unit as an alternative means to accelerate runtime. This study is part of an ongoing effort to develop computerized training tools for cryosurgery, with prostate cryosurgery as a developmental model. The ability to perform rapid simulations of various test cases is critical to facilitate sound decision making associated with medical training. Consistent with clinical practice, the training tool aims at correlating the frozen region contour and the corresponding temperature field with the target region shape. The current study focuses on the feasibility of graphics processing unit-based computation using C++ accelerated massive parallelism, as one possible implementation. Benchmark results on a variety of computation platforms display between 3-fold acceleration (laptop) and 13-fold acceleration (gaming computer) of cryosurgery simulation, in comparison with the more common implementation on a multicore central processing unit. While the general concept of graphics processing unit-based simulations is not new, its application to phase-change problems, combined with the unique requirements for cryosurgery optimization, represents the core contribution of the current study. PMID:25941162

  12. iMOSFLM: a new graphical interface for diffraction-image processing with MOSFLM

    PubMed Central

    Battye, T. Geoff G.; Kontogiannis, Luke; Johnson, Owen; Powell, Harold R.; Leslie, Andrew G. W.

    2011-01-01

    iMOSFLM is a graphical user interface to the diffraction data-integration program MOSFLM. It is designed to simplify data processing by dividing the process into a series of steps, which are normally carried out sequentially. Each step has its own display pane, allowing control over parameters that influence that step and providing graphical feedback to the user. Suitable values for integration parameters are set automatically, but additional menus provide a detailed level of control for experienced users. The image display and the interfaces to the different tasks (indexing, strategy calculation, cell refinement, integration and history) are described. The most important parameters for each step and the best way of assessing success or failure are discussed. PMID:21460445

  13. TRIIG - Time-lapse reproduction of images through interactive graphics. [digital processing of quality hard copy

    NASA Technical Reports Server (NTRS)

    Buckner, J. D.; Council, H. W.; Edwards, T. R.

    1974-01-01

    Description of the hardware and software implementing the system of time-lapse reproduction of images through interactive graphics (TRIIG). The system produces a quality hard copy of processed images in a fast and inexpensive manner. This capability allows for optimal development of processing software through the rapid viewing of many image frames in an interactive mode. Three critical optical devices are used to reproduce an image: an Optronics photo reader/writer, the Adage Graphics Terminal, and Polaroid Type 57 high speed film. Typical sources of digitized images are observation satellites, such as ERTS or Mariner, computer coupled electron microscopes for high-magnification studies, or computer coupled X-ray devices for medical research.

  14. Particle-in-cell Simulations with Charge-Conserving Current Deposition on Graphic Processing Units

    NASA Astrophysics Data System (ADS)

    Kong, Xianglong; Huang, Michael; Ren, Chuang; Decyk, Viktor

    2010-11-01

    We present an implementation of a fully relativistic, electromagnetic PIC code, with charge-conserving current deposition, on graphics processing units (GPUs) with NVIDIA's massively multithreaded computing architecture CUDA. A particle-based computation thread assignment was used in the current deposition scheme, and write conflicts among the threads were resolved by a thread-racing technique. A parallel particle sorting scheme was also developed and used. The implementation took advantage of fast on-chip shared memory. The 2D implementation achieved a per-particle-step processing time of 2.28 ns for cold plasma runs and 8.53 ns for extremely relativistic plasma runs on a GTX 280 graphics card, which is respectively 90 and 29 times faster than a single-threaded state-of-the-art CPU code. A comparable speedup was also achieved for the 3D implementation.
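
    The write conflicts mentioned above arise because nearby particles deposit charge and current onto shared grid points, so any parallel scatter-add scheme must resolve colliding updates. Below is a minimal serial NumPy sketch of linear (cloud-in-cell) charge deposition on a periodic 1-D grid, where np.add.at plays the role that atomic or thread-racing updates play on the GPU; all names are illustrative and this is not the paper's charge-conserving current scheme itself.

    ```python
    import numpy as np

    def deposit_charge(x, q, grid_size):
        """Cloud-in-cell charge deposition: each particle spreads its charge
        over its two nearest grid points with linear weights. np.add.at
        performs the scatter-add correctly even when particles share cells --
        the serial analogue of the conflicting parallel updates a GPU scheme
        must resolve."""
        rho = np.zeros(grid_size)
        i = np.floor(x).astype(int)          # left grid index of each particle
        w = x - i                            # linear weight toward the right point
        np.add.at(rho, i % grid_size, q * (1.0 - w))
        np.add.at(rho, (i + 1) % grid_size, q * w)
        return rho
    ```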

  15. Signal processing for ION mobility spectrometers

    NASA Technical Reports Server (NTRS)

    Taylor, S.; Hinton, M.; Turner, R.

    1995-01-01

    Signal processing techniques for systems based upon Ion Mobility Spectrometry will be discussed in the light of 10 years of experience in the design of real-time IMS. Among the topics to be covered are compensation techniques for variations in the number density of the gas - the use of an internal standard (a reference peak) or pressure and temperature sensors. Sources of noise and methods for noise reduction will be discussed together with resolution limitations and the ability of deconvolution techniques to improve resolving power. The use of neural networks (either by themselves or as a component part of a processing system) will be reviewed.

  16. Process analysis using ion mobility spectrometry.

    PubMed

    Baumbach, J I

    2006-03-01

    Ion mobility spectrometry, originally used to detect chemical warfare agents, explosives and illegal drugs, is now frequently applied in the field of process analytics. The method combines high sensitivity (detection limits down to the ng to pg per liter and ppb(v)/ppt(v) ranges) and relatively low technical expenditure with high-speed data acquisition. In this paper, the working principles of IMS are summarized with respect to the advantages and disadvantages of the technique. Different ionization techniques, sample introduction methods and preseparation methods are considered. Proven applications of different types of ion mobility spectrometer (IMS) used at ISAS are discussed in detail: monitoring of gas-insulated substations, contamination in water, odoration of natural gas, human breath composition and metabolites of bacteria. The example applications discussed relate to purity (gas-insulated substations), ecology (contamination of water resources), plant and personnel safety (odoration of natural gas), food quality control (molds and bacteria) and human health (breath analysis). PMID:16132133

  17. Advanced colour processing for mobile devices

    NASA Astrophysics Data System (ADS)

    Gillich, Eugen; Dörksen, Helene; Lohweg, Volker

    2015-02-01

    Mobile devices such as smartphones are going to play an important role in professional image-processing tasks. However, mobile systems were not designed for such applications, especially in terms of image-processing requirements like stability and robustness. One major drawback is the automatic white balance that comes with the devices: it is necessary for many applications, but of no use when applied to shiny surfaces. Such an issue appears when image acquisition takes place under differently coloured illuminations caused by different environments, resulting in inhomogeneous appearances of the same subject. In our paper we show a new approach for handling the complex task of generating a low-noise and sharp image without spatial filtering. Our method is based on analyzing the spectral and saturation distribution of the channels. Furthermore, the RGB space is transformed into a more convenient space, a particular HSI space. We generate the greyscale image by a control procedure that takes the colour channels into account. This leads to an adaptive colour-mixing model with reduced noise. The resulting optimized images are used to show how, e.g., image classification benefits from our colour adaptation approach.

  18. Systems Biology Graphical Notation: Process Description language Level 1 Version 1.3.

    PubMed

    Moodie, Stuart; Le Novère, Nicolas; Demir, Emek; Mi, Huaiyu; Villéger, Alice

    2015-01-01

    The Systems Biology Graphical Notation (SBGN) is an international community effort for standardized graphical representations of biological pathways and networks. The goal of SBGN is to provide unambiguous pathway and network maps for readers with different scientific backgrounds as well as to support efficient and accurate exchange of biological knowledge between different research communities, industry, and other players in systems biology. Three SBGN languages, Process Description (PD), Entity Relationship (ER) and Activity Flow (AF), allow for the representation of different aspects of biological and biochemical systems at different levels of detail. The SBGN Process Description language represents biological entities and processes between these entities within a network. SBGN PD focuses on the mechanistic description and temporal dependencies of biological interactions and transformations. The nodes (elements) are split into entity nodes describing, e.g., metabolites, proteins, genes and complexes, and process nodes describing, e.g., reactions and associations. The edges (connections) provide descriptions of relationships (or influences) between the nodes, such as consumption, production, stimulation and inhibition. Among all three languages of SBGN, PD is the closest to metabolic and regulatory pathways in biological literature and textbooks, but its well-defined semantics offer a superior precision in expressing biological knowledge. PMID:26528561

  19. Creating Interactive Graphical Overlays in the Advanced Weather Interactive Processing System Using Shapefiles and DGM Files

    NASA Technical Reports Server (NTRS)

    Barrett, Joe H., III; Lafosse, Richard; Hood, Doris; Hoeth, Brian

    2007-01-01

    Graphical overlays can be created in real-time in the Advanced Weather Interactive Processing System (AWIPS) using shapefiles or Denver AWIPS Risk Reduction and Requirements Evaluation (DARE) Graphics Metafile (DGM) files. This presentation describes how to create graphical overlays on-the-fly for AWIPS, by using two examples of AWIPS applications that were created by the Applied Meteorology Unit (AMU) located at Cape Canaveral Air Force Station (CCAFS), Florida. The first example is the Anvil Threat Corridor Forecast Tool, which produces a shapefile that depicts a graphical threat corridor of the forecast movement of thunderstorm anvil clouds, based on the observed or forecast upper-level winds. This tool is used by the Spaceflight Meteorology Group (SMG) at Johnson Space Center, Texas and 45th Weather Squadron (45 WS) at CCAFS to analyze the threat of natural or space vehicle-triggered lightning over a location. The second example is a launch and landing trajectory tool that produces a DGM file that plots the ground track of space vehicles during launch or landing. The trajectory tool can be used by SMG and the 45 WS forecasters to analyze weather radar imagery along a launch or landing trajectory. The presentation will list the advantages and disadvantages of both file types for creating interactive graphical overlays in future AWIPS applications. Shapefiles are a popular format used extensively in Geographical Information Systems. They are usually used in AWIPS to depict static map backgrounds. A shapefile stores the geometry and attribute information of spatial features in a dataset (ESRI 1998). Shapefiles can contain point, line, and polygon features. Each shapefile contains a main file, index file, and a dBASE table. The main file contains a record for each spatial feature, which describes the feature with a list of its vertices. The index file contains the offset of each record from the beginning of the main file. The dBASE table contains records for each

  20. Efficient neighbor list calculation for molecular simulation of colloidal systems using graphics processing units

    NASA Astrophysics Data System (ADS)

    Howard, Michael P.; Anderson, Joshua A.; Nikoubashman, Arash; Glotzer, Sharon C.; Panagiotopoulos, Athanassios Z.

    2016-06-01

    We present an algorithm based on linear bounding volume hierarchies (LBVHs) for computing neighbor (Verlet) lists using graphics processing units (GPUs) for colloidal systems characterized by large size disparities. We compare this to a GPU implementation of the current state-of-the-art CPU algorithm based on stenciled cell lists. We report benchmarks for both neighbor list algorithms in a Lennard-Jones binary mixture with synthetic interaction range disparity and a realistic colloid solution. LBVHs outperformed the stenciled cell lists for systems with moderate or large size disparity and dilute or semidilute fractions of large particles, conditions typical of colloidal systems.
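
    For reference, the data structure both algorithms produce is the Verlet list itself. The naive all-pairs build below (NumPy, minimum-image periodic boundaries) is the O(N²) baseline that cell lists and the paper's LBVHs are designed to beat; the names and the skin default are illustrative.

    ```python
    import numpy as np

    def verlet_list(pos, box, r_cut, skin=0.3):
        """Reference O(N^2) build of a Verlet neighbour list with a skin
        distance, under periodic boundaries. `pos` is (N, dim), `box` holds
        the periodic box lengths. Cell lists and LBVHs exist precisely to
        avoid this all-pairs distance test."""
        r_list = r_cut + skin
        neighbours = [[] for _ in range(len(pos))]
        for i in range(len(pos)):
            d = pos - pos[i]
            d -= box * np.round(d / box)             # minimum-image convention
            close = np.flatnonzero(np.einsum('ij,ij->i', d, d) < r_list**2)
            neighbours[i] = [j for j in close if j != i]
        return neighbours
    ```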

  1. Solution of relativistic quantum optics problems using clusters of graphical processing units

    SciTech Connect

    Gordon, D.F.; Hafizi, B.; Helle, M.H.

    2014-06-15

    Numerical solution of relativistic quantum optics problems requires high performance computing due to the rapid oscillations in a relativistic wavefunction. Clusters of graphical processing units are used to accelerate the computation of a time-dependent relativistic wavefunction in an arbitrary external potential. The stationary states in a Coulomb potential and uniform magnetic field are determined analytically and numerically, so that they can be used as initial conditions in fully time-dependent calculations. Relativistic energy levels in extreme magnetic fields are recovered as a means of validation. The relativistic ionization rate is computed for an ion illuminated by a laser field near the usual barrier-suppression threshold, and the ionizing wavefunction is displayed.

  2. General Purpose Graphics Processing Unit Based High-Rate Rice Decompression and Reed-Solomon Decoding.

    SciTech Connect

    Loughry, Thomas A.

    2015-02-01

    As the volume of data acquired by space-based sensors increases, mission data compression/decompression and forward error correction code processing performance must likewise scale. This competency development effort was explored using the General Purpose Graphics Processing Unit (GPGPU) to accomplish high-rate Rice Decompression and high-rate Reed-Solomon (RS) decoding at the satellite mission ground station. Each algorithm was implemented and benchmarked on a single GPGPU. Distributed processing across one to four GPGPUs was also investigated. The results show that the GPGPU has considerable potential for performing satellite communication Data Signal Processing, with three times or better performance improvements and up to ten times reduction in cost over custom hardware, at least in the case of Rice Decompression and Reed-Solomon Decoding.
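
    As an illustration of the decompression side, the sketch below decodes one block of Golomb-Rice-coded samples in Python. It shows only the generic Rice code (a unary quotient followed by a k-bit remainder); the CCSDS Rice scheme used on real missions adds block adaptivity and predictive preprocessing not shown here, and the bit order is an assumption.

```python
# Generic Golomb-Rice decoding sketch: value = (quotient << k) | remainder.
def rice_decode(bits, k, n_samples):
    """bits: iterable of 0/1 ints; returns n_samples decoded nonnegative values."""
    bits = iter(bits)
    out = []
    for _ in range(n_samples):
        q = 0
        while next(bits) == 1:      # unary quotient: count leading 1s until a 0
            q += 1
        r = 0
        for _ in range(k):          # k-bit binary remainder, MSB first
            r = (r << 1) | next(bits)
        out.append((q << k) | r)    # value = q * 2**k + r
    return out

# Usage: q=2 (unary "110"), k=2 remainder "10" -> (2 << 2) | 2 == 10
print(rice_decode([1, 1, 0, 1, 0], k=2, n_samples=1))   # [10]
```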

  3. Graphics processing unit-accelerated double random phase encoding for fast image encryption

    NASA Astrophysics Data System (ADS)

    Lee, Jieun; Yi, Faliu; Saifullah, Rao; Moon, Inkyu

    2014-11-01

    We propose a fast double random phase encoding (DRPE) algorithm using a graphics processing unit (GPU)-based stream-processing model. A performance analysis of the accelerated DRPE implementation that employs the Compute Unified Device Architecture programming environment is presented. We show that the proposed methodology executed on a GPU can dramatically increase encryption speed compared with central processing unit sequential computing. Our experimental results demonstrate that in encrypting an image of 1000×1000 pixels, where each pixel has 32-bit depth, our GPU version of the DRPE scheme can be approximately two times faster than the advanced encryption standard algorithm implemented on a GPU. In addition, the parallel-processing quality of the presented DRPE acceleration method is evaluated with performance parameters, such as speedup, efficiency, and redundancy.
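
    The DRPE transform itself reduces to two FFTs and two elementwise phase-mask multiplications, which is what makes it so amenable to GPU stream processing. Below is a minimal numpy sketch of the classic DRPE encrypt/decrypt pair (not the paper's CUDA kernels); the 1000×1000 size mirrors the test image above.

```python
# Classic double random phase encoding (DRPE) with numpy FFTs.
import numpy as np

rng = np.random.default_rng(0)

def drpe_encrypt(img, key_a, key_b):
    """img: 2-D float array; key_a, key_b: random phase keys uniform in [0, 1)."""
    m1 = np.exp(2j * np.pi * key_a)              # input-plane random phase mask
    m2 = np.exp(2j * np.pi * key_b)              # Fourier-plane random phase mask
    return np.fft.ifft2(np.fft.fft2(img * m1) * m2)

def drpe_decrypt(cipher, key_a, key_b):
    m2c = np.exp(-2j * np.pi * key_b)            # conjugate masks undo each step
    m1c = np.exp(-2j * np.pi * key_a)
    return np.real(np.fft.ifft2(np.fft.fft2(cipher) * m2c) * m1c)

img = rng.random((1000, 1000))                   # image size used in the paper
ka, kb = rng.random(img.shape), rng.random(img.shape)
assert np.allclose(drpe_decrypt(drpe_encrypt(img, ka, kb), ka, kb), img)
```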

  4. Pre and post processing using the IBM 3277 display station graphics attachment (RPQ7H0284)

    NASA Technical Reports Server (NTRS)

    Burroughs, S. H.; Lawlor, M. B.; Miller, I. M.

    1978-01-01

    A graphical interactive procedure operating under TSO and utilizing two CRT display terminals is shown to be an effective means of accomplishing mesh generation, establishing boundary conditions, and reviewing graphic output for finite element analysis activity.

  5. Modified graphical autocatalytic set model of combustion process in circulating fluidized bed boiler

    NASA Astrophysics Data System (ADS)

    Yusof, Nurul Syazwani; Bakar, Sumarni Abu; Ismail, Razidah

    2014-07-01

    A Circulating Fluidized Bed Boiler (CFB) is a device for generating steam by burning fossil fuels in a furnace operating under a special hydrodynamic condition. An autocatalytic set has previously provided a graphical model of the chemical reactions that occur during the combustion process in a CFB: eight important chemical substances, known as species, were represented as nodes, and catalytic relationships between nodes were represented by the edges of the graph. In this paper, the model is extended and modified by considering other relevant chemical reactions that also exist during the process. The catalytic relationships among the species in the model are discussed. The result reveals that the modified model is able to give a fuller explanation of the relationships among the species during the process at initial time t.

  6. Data processing and presentation for a personalised, image-driven medical graphical avatar.

    PubMed

    de Ridder, Michael; Bi, Lei; Constantinescu, Liviu; Kim, Jinman; Feng, David Dagan

    2013-01-01

    With the continuing digital revolution in the healthcare industry, patients are being confronted with the difficult task of managing their digital medical data. Current personal health record (PHR) systems are able to store and consolidate this data, but they are limited in providing tools to facilitate patients' understanding and management of the data. One reason for this stems from the limited use of contextual information, especially in presenting spatial details such as in volumetric images and videos, as well as time-based temporal data. Further, meaningful visualisation techniques to represent the data stored in PHRs are lacking. In this paper we propose a medical graphical avatar (MGA) constructed from whole-body patient images, and a navigable timeline of the patient's medical records. A data mapping framework is presented that extracts information from medical multimedia data such as images, video and text, to populate our PHR timeline, while also embedding spatial and textual annotations such as regions of interest (ROIs) that are automatically derived from image processing algorithms. We developed a prototype to process the various forms of PHR data and present the data in a graphical avatar. We analysed the usefulness of our system under various scenarios of patient data use and present preliminary results that indicate that our system performs well on standard consumer hardware. PMID:24110654

  7. Computation of Large Covariance Matrices by SAMMY on Graphical Processing Units and Multicore CPUs

    SciTech Connect

    Arbanas, Goran; Dunn, Michael E; Wiarda, Dorothea

    2011-01-01

    The computational power of Graphical Processing Units and multicore CPUs was harnessed by the nuclear data evaluation code SAMMY to speed up computations of large Resonance Parameter Covariance Matrices (RPCMs). This was accomplished by linking SAMMY to vendor-optimized implementations of the matrix-matrix multiplication subroutine of the Basic Linear Algebra Subprograms (BLAS) library to compute the most time-consuming step. The U-235 RPCM computed previously using a triple-nested loop was re-computed using the NVIDIA implementation of the subroutine on a single Tesla Fermi Graphical Processing Unit, and also using the Intel Math Kernel Library implementation on two different multicore CPU systems. A multiplication of two matrices of dimensions 16,000 x 20,000 that had previously taken days took approximately one minute on the GPU. Similar performance was achieved on a dual six-core CPU system. The magnitude of the speed-up suggests that these, or similar, combinations of hardware and libraries may be useful for large matrix operations in SAMMY. The uniform interfaces of standard linear algebra libraries make them a promising candidate for a programming framework of a new generation of SAMMY for the emerging heterogeneous computing platforms.
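
    The time-consuming step above amounts to a single BLAS GEMM call. As a sketch, the numpy expression below dispatches to whatever vendor BLAS is linked (for example Intel MKL), and the commented lines show the analogous GPU dispatch assuming the cupy package and a CUDA device are available; the dimensions are scaled-down stand-ins for the paper's matrices.

```python
# Sketch: the RPCM bottleneck is one matrix-matrix multiply (BLAS dgemm).
import numpy as np

m, k, n = 1600, 2000, 1600    # scaled-down stand-ins for the paper's 16,000 x 20,000
A = np.random.rand(m, k)
B = np.random.rand(k, n)
C = A @ B                     # numpy dispatches this to the linked BLAS dgemm

# GPU drop-in, assuming cupy and a CUDA device are available:
# import cupy as cp
# C_gpu = cp.asarray(A) @ cp.asarray(B)   # executes as a cuBLAS gemm on the device
```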

  8. BarraCUDA - a fast short read sequence aligner using graphics processing units

    PubMed Central

    2012-01-01

    Background With the maturation of next-generation DNA sequencing (NGS) technologies, the throughput of DNA sequencing reads has soared to over 600 gigabases from a single instrument run. General purpose computing on graphics processing units (GPGPU) extracts the computing power from hundreds of parallel stream processors within graphics processing cores and provides a cost-effective and energy efficient alternative to traditional high-performance computing (HPC) clusters. In this article, we describe the implementation of BarraCUDA, a GPGPU sequence alignment software that is based on BWA, to accelerate the alignment of sequencing reads generated by these instruments to a reference DNA sequence. Findings Using the NVIDIA Compute Unified Device Architecture (CUDA) software development environment, we ported the most computationally intensive alignment component of BWA to the GPU to take advantage of its massive parallelism. As a result, BarraCUDA offers an order-of-magnitude boost in alignment throughput when compared to a CPU core while delivering the same level of alignment fidelity. The software is also capable of supporting multiple CUDA devices in parallel to further accelerate the alignment throughput. Conclusions BarraCUDA is designed to take advantage of the parallelism of the GPU to accelerate the alignment of millions of sequencing reads generated by NGS instruments. By doing this, we could, at least in part, streamline the current bioinformatics pipeline such that the wider scientific community could benefit from the sequencing technology. BarraCUDA is currently available from http://seqbarracuda.sf.net PMID:22244497

  9. Using Graphical Processing Units to Accelerate Orthorectification, Atmospheric Correction and Transformations for Big Data

    NASA Astrophysics Data System (ADS)

    O'Connor, A. S.; Justice, B.; Harris, A. T.

    2013-12-01

    Graphics Processing Units (GPUs) are high-performance multiple-core processors capable of very high computational speeds and large data throughput. Modern GPUs are inexpensive and widely available commercially. These are general-purpose parallel processors with support for a variety of programming interfaces, including industry standard languages such as C. GPU implementations of algorithms that are well suited for parallel processing can often achieve speedups of several orders of magnitude over optimized CPU codes. Significant improvements in speed for imagery orthorectification, atmospheric correction, target detection and image transformations like Independent Components Analysis (ICA) have been achieved using GPU-based implementations. Additional optimizations, when factored in with GPU processing capabilities, can provide 50x - 100x reduction in the time required to process large imagery. Exelis Visual Information Solutions (VIS) has implemented a CUDA-based GPU processing framework for accelerating ENVI and IDL processes that can best take advantage of parallelization. Testing Exelis VIS has performed shows that orthorectification can take as long as two hours with a WorldView-1 35,000 x 35,000 pixel image. With GPU orthorectification, the same process takes three minutes. By speeding up image processing, imagery can be used successfully by first responders and by scientists making rapid discoveries with near-real-time data, and it provides an operational component to data centers needing to quickly process and disseminate data.

  10. Acceleration of Early-Photon Fluorescence Molecular Tomography with Graphics Processing Units

    PubMed Central

    Wang, Xin; Zhang, Bin; Cao, Xu; Liu, Fei; Luo, Jianwen; Bai, Jing

    2013-01-01

    Fluorescence molecular tomography (FMT) with early photons can improve the spatial resolution and fidelity of the reconstructed results. However, its computation scale is very large, which limits its applications. In this paper, we introduce an acceleration strategy for early-photon FMT with graphics processing units (GPUs). The whole FMT solution procedure was divided into several modules, and the time consumption of each module was studied. In this strategy, the two most time-consuming modules (the Gd and W modules) were accelerated with the GPU, while the other modules remained coded in MATLAB. Several simulation studies with a heterogeneous digital mouse atlas were performed to confirm the performance of the acceleration strategy. The results confirmed the feasibility of the strategy and showed that the processing speed was improved significantly. PMID:23606899

  11. Modified Anderson Method for Accelerating 3D-RISM Calculations Using Graphics Processing Unit.

    PubMed

    Maruyama, Yutaka; Hirata, Fumio

    2012-09-11

    A fast algorithm is proposed to solve the three-dimensional reference interaction site model (3D-RISM) theory on a graphics processing unit (GPU). 3D-RISM theory is a powerful tool for investigating biomolecular processes in solution; however, such calculations are often both memory-intensive and time-consuming. We sought to accelerate these calculations using GPUs, but to work around the problem of limited memory size in GPUs, we modified the less memory-intensive "Anderson method" to give faster convergence to 3D-RISM calculations. Using this method on a Tesla C2070 GPU, we reduced the total computational time by a factor of 8 (1.4 times from the modified Anderson method and 5.7 times from the GPU), compared to calculations on an Intel Xeon machine (eight cores, 3.33 GHz) with the conventional method. PMID:26605714
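
    For readers unfamiliar with the underlying scheme, Anderson mixing accelerates a fixed-point iteration x = g(x) by combining the last few iterates so as to minimize the norm of the mixed residual. The sketch below is a generic Type-II Anderson mixer in Python/numpy, not the authors' memory-reduced GPU variant; the history depth m is an arbitrary choice.

```python
# Generic Type-II Anderson mixing for a fixed-point problem x = g(x).
import numpy as np

def anderson_fixed_point(g, x0, m=5, tol=1e-10, max_iter=200):
    x = np.asarray(x0, dtype=float)
    G_hist, F_hist = [], []                 # histories of g(x_i) and residuals f_i
    for _ in range(max_iter):
        gx = g(x)
        f = gx - x                          # residual of the fixed-point map
        if np.linalg.norm(f) < tol:
            return x
        G_hist.append(gx)
        F_hist.append(f)
        if len(F_hist) > m:                 # keep only the last m iterates
            G_hist.pop(0)
            F_hist.pop(0)
        n = len(F_hist)
        if n == 1:
            x = gx                          # plain Picard step to start
            continue
        # Minimize ||sum_i a_i f_i|| subject to sum_i a_i = 1, via differences
        D = np.column_stack([F_hist[i] - F_hist[-1] for i in range(n - 1)])
        gamma, *_ = np.linalg.lstsq(D, -F_hist[-1], rcond=None)
        a = np.append(gamma, 1.0 - gamma.sum())
        x = sum(a_i * g_i for a_i, g_i in zip(a, G_hist))
    return x

# Usage: converges to the fixed point of g(x) = cos(x), componentwise
print(anderson_fixed_point(np.cos, np.zeros(4)))
```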

  12. SeqTrace: A Graphical Tool for Rapidly Processing DNA Sequencing Chromatograms

    PubMed Central

    Stucky, Brian J.

    2012-01-01

    Modern applications of Sanger DNA sequencing often require converting a large number of chromatogram trace files into high-quality DNA sequences for downstream analyses. Relatively few nonproprietary software tools are available to assist with this process. SeqTrace is a new, free, and open-source software application that is designed to automate the entire workflow by facilitating easy batch processing of large numbers of trace files. SeqTrace can identify, align, and compute consensus sequences from matching forward and reverse traces, filter low-quality base calls, and end-trim finished sequences. The software features a graphical interface that includes a full-featured chromatogram viewer and sequence editor. SeqTrace runs on most popular operating systems and is freely available, along with supporting documentation, at http://seqtrace.googlecode.com/. PMID:22942788

  13. Stereo system based on a graphics processing unit for pedestrian detection and tracking

    NASA Astrophysics Data System (ADS)

    Nam, Bodam; Kang, Sungil; Hong, Hyunki; Eem, Changkyoung

    2010-12-01

    This paper presents a novel stereo system, based on a graphics processing unit (GPU), for pedestrian detection in real images. The process of obtaining a dense disparity map and the edge properties of the scene to extract a region of interest (ROI) is designed on a GPU for real-time applications. After extracting the histograms of the oriented gradients on the ROIs, a support vector machine classifies them as pedestrian and nonpedestrian types. The system employs the recognition-by-components method, which compensates for the pose and articulation changes of pedestrians. In order to effectively track spatial pedestrian estimates over sequences, subwindows at distinctive parts of human beings are used as measurements for the Kalman filter.
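
    As an illustration of the classification stage described above, the Python sketch below computes HOG descriptors and trains a linear SVM with scikit-image and scikit-learn on the CPU; the paper performs the disparity-map and ROI extraction stages on the GPU, and the training windows here are synthetic placeholders.

```python
# HOG descriptors + linear SVM for pedestrian/non-pedestrian windows.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_features(windows):
    """windows: iterable of 128x64 grayscale detection windows."""
    return np.array([hog(w, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for w in windows])

rng = np.random.default_rng(0)
train = [rng.random((128, 64)) for _ in range(20)]   # placeholder ROI windows
labels = np.array([1] * 10 + [0] * 10)               # 1 = pedestrian (synthetic)

clf = LinearSVC().fit(hog_features(train), labels)
print(clf.predict(hog_features([rng.random((128, 64))])))
```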

  14. A software architecture for multi-cellular system simulations on graphics processing units.

    PubMed

    Jeannin-Girardon, Anne; Ballet, Pascal; Rodin, Vincent

    2013-09-01

    The first aim of simulation in a virtual environment is to help biologists gain a better understanding of the simulated system. The cost of such simulation is significantly reduced compared to that of in vivo experimentation. However, the inherent complexity of biological systems makes them hard to simulate on non-parallel architectures: models might be made of sub-models and take several scales into account, and the number of simulated entities may be quite large. Today, graphics cards are used for general purpose computing, which has been made easier thanks to frameworks like CUDA or OpenCL. Parallelization of models may, however, not be easy: parallel programming skills are often required, and several hardware architectures may be used to execute models. In this paper, we present the software architecture we built in order to implement various models able to simulate multi-cellular systems. This architecture is modular and implements data structures adapted to graphics processing unit architectures. It allows efficient simulation of biological mechanisms. PMID:23900760

  15. Open-source graphics processing unit-accelerated ray tracer for optical simulation

    NASA Astrophysics Data System (ADS)

    Mauch, Florian; Gronle, Marc; Lyda, Wolfram; Osten, Wolfgang

    2013-05-01

    Ray tracing is still the workhorse of optical design and simulation. Its basic principle, propagating light as a set of mutually independent rays, implies a linear dependency of the computational effort on the number of rays involved in the problem. At the same time, the mutual independence of the light rays bears a huge potential for parallelization of the computational load. This potential has recently been recognized in the visualization community, where graphics processing unit (GPU)-accelerated ray tracing is used to render photorealistic images. However, precision requirements in optical simulation are substantially higher than in visualization, and therefore performance results known from visualization cannot be expected to transfer to optical simulation one-to-one. In this contribution, we present an open-source implementation of a GPU-accelerated ray tracer, based on NVIDIA's acceleration engine OptiX, that traces in double precision and exploits the massively parallel architecture of modern graphics cards. We compare its performance to a CPU-based tracer that has been developed in parallel.

  16. Lossy hyperspectral image compression tuned for spectral mixture analysis applications on NVidia graphics processing units

    NASA Astrophysics Data System (ADS)

    Plaza, Antonio; Plaza, Javier; Sánchez, Sergio; Paz, Abel

    2009-08-01

    In this paper, we develop a computationally efficient approach for lossy compression of remotely sensed hyperspectral images which has been specifically tuned to preserve the relevant information required in spectral mixture analysis (SMA) applications. The proposed method is based on two steps: 1) endmember extraction, and 2) linear spectral unmixing. Two endmember extraction algorithms, the pixel purity index (PPI) and automatic morphological endmember extraction (AMEE), together with a fully constrained linear spectral unmixing (FCLSU) algorithm, have been considered in this work to devise the proposed lossy compression strategy. The proposed methodology has been implemented on graphics processing units (GPUs) of NVIDIA type. Our experiments demonstrate that it can achieve very high compression ratios when applied to standard hyperspectral data sets, and can also retain the relevant information required for spectral unmixing in a computationally efficient way, achieving speedups on the order of 26 on an NVIDIA GeForce 8800 GTX graphics card when compared to an optimized implementation of the same code on a dual-core CPU.
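
    On the reconstruction side, linear spectral unmixing expresses each pixel spectrum as a nonnegative combination of the extracted endmembers, so only a handful of abundance values need to be stored per pixel. The sketch below illustrates this with scipy's nonnegative least squares on synthetic data; FCLSU additionally enforces the abundances to sum to one, and the PPI/AMEE endmember extraction step is not shown.

```python
# Linear spectral unmixing: pixel ~= E @ abundances, stored as p values per pixel.
import numpy as np
from scipy.optimize import nnls

bands, p = 224, 5                     # e.g. an AVIRIS-like band count, 5 endmembers
rng = np.random.default_rng(1)
E = rng.random((bands, p))            # endmember signatures (one per column)
abund = rng.dirichlet(np.ones(p))     # true abundances: nonnegative, sum to one
pixel = E @ abund                     # synthetic mixed-pixel spectrum

# nnls enforces only nonnegativity; FCLSU adds the sum-to-one constraint.
a_hat, _ = nnls(E, pixel)
recon = E @ a_hat                     # decompressed spectrum from p stored values
print(np.max(np.abs(recon - pixel)))  # reconstruction error of the lossy code
```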

  17. High-speed nonlinear finite element analysis for surgical simulation using graphics processing units.

    PubMed

    Taylor, Z A; Cheng, M; Ourselin, S

    2008-05-01

    The use of biomechanical modelling, especially in conjunction with finite element analysis, has become common in many areas of medical image analysis and surgical simulation. Clinical employment of such techniques is hindered by conflicting requirements for high fidelity in the modelling approach, and fast solution speeds. We report the development of techniques for high-speed nonlinear finite element analysis for surgical simulation. We use a fully nonlinear total Lagrangian explicit finite element formulation which offers significant computational advantages for soft tissue simulation. However, the key contribution of the work is the presentation of a fast graphics processing unit (GPU) solution scheme for the finite element equations. To the best of our knowledge, this represents the first GPU implementation of a nonlinear finite element solver. We show that the present explicit finite element scheme is well suited to solution via highly parallel graphics hardware, and that even a midrange GPU allows significant solution speed gains (up to 16.8 x) compared with equivalent CPU implementations. For the models tested the scheme allows real-time solution of models with up to 16,000 tetrahedral elements. The use of GPUs for such purposes offers a cost-effective high-performance alternative to expensive multi-CPU machines, and may have important applications in medical image analysis and surgical simulation. PMID:18450538

  18. Real-time nonlinear finite element analysis for surgical simulation using graphics processing units.

    PubMed

    Taylor, Zeike A; Cheng, Mario; Ourselin, Sébastien

    2007-01-01

    Clinical employment of biomechanical modelling techniques in areas of medical image analysis and surgical simulation is often hindered by conflicting requirements for high fidelity in the modelling approach and high solution speeds. We report the development of techniques for high-speed nonlinear finite element (FE) analysis for surgical simulation. We employ a previously developed nonlinear total Lagrangian explicit FE formulation which offers significant computational advantages for soft tissue simulation. However, the key contribution of the work is the presentation of a fast graphics processing unit (GPU) solution scheme for the FE equations. To the best of our knowledge this represents the first GPU implementation of a nonlinear FE solver. We show that the present explicit FE scheme is well-suited to solution via highly parallel graphics hardware, and that even a midrange GPU allows significant solution speed gains (up to 16.4x) compared with equivalent CPU implementations. For the models tested the scheme allows real-time solution of models with up to 16000 tetrahedral elements. The use of GPUs for such purposes offers a cost-effective high-performance alternative to expensive multi-CPU machines, and may have important applications in medical image analysis and surgical simulation. PMID:18051120

  19. SU-E-P-59: A Graphical Interface for XCAT Phantom Configuration, Generation and Processing

    SciTech Connect

    Myronakis, M; Cai, W; Dhou, S; Cifter, F; Lewis, J; Hurwitz, M

    2015-06-15

    Purpose: To design a comprehensive open-source, publicly available, graphical user interface (GUI) to facilitate the configuration, generation, processing and use of the 4D Extended Cardiac-Torso (XCAT) phantom. Methods: The XCAT phantom includes over 9000 anatomical objects as well as respiratory, cardiac and tumor motion. It is widely used for research studies in medical imaging and radiotherapy. The phantom generation process involves the configuration of a text script to parameterize the geometry, motion, and composition of the whole body and objects within it, and to generate simulated PET or CT images. To avoid the need for manual editing or script writing, our MATLAB-based GUI uses slider controls, drop-down lists, buttons and graphical text input to parameterize and process the phantom. Results: Our GUI can be used to: a) generate parameter files; b) generate the voxelized phantom; c) combine the phantom with a lesion; d) display the phantom; e) produce average and maximum intensity images from the phantom output files; f) incorporate irregular patient breathing patterns; and g) generate DICOM files containing phantom images. The GUI provides local help information using tool-tip strings on the currently selected phantom, minimizing the need for external documentation. The DICOM generation feature is intended to simplify the process of importing the phantom images into radiotherapy treatment planning systems or other clinical software. Conclusion: The GUI simplifies and automates the use of the XCAT phantom for imaging-based research projects in medical imaging or radiotherapy. This has the potential to accelerate research conducted with the XCAT phantom, or to ease the learning curve for new users. This tool does not include the XCAT phantom software itself. We would like to acknowledge funding from MRA, Varian Medical Systems Inc.

  20. Real-time display on Fourier domain optical coherence tomography system using a graphics processing unit.

    PubMed

    Watanabe, Yuuki; Itagaki, Toshiki

    2009-01-01

    Fourier domain optical coherence tomography (FD-OCT) requires resampling of spectrally resolved depth information from wavelength to wave number, and the subsequent application of the inverse Fourier transform. The display rates of OCT images are much slower than the image acquisition rates due to processing speed limitations on most computers. We demonstrate a real-time display of processed OCT images using a linear-in-wave-number (linear-k) spectrometer and a graphics processing unit (GPU). We use the linear-k spectrometer with the combination of a diffractive grating with 1200 lines/mm and a F2 equilateral prism in the 840-nm spectral region to avoid calculating the resampling process. The calculations of the fast Fourier transform (FFT) are accelerated by the GPU with many stream processors, which realizes highly parallel processing. A display rate of 27.9 frames/sec for processed images (2048 FFT size x 1000 lateral A-scans) is achieved in our OCT system using a line scan CCD camera operated at 27.9 kHz. PMID:20059237

  1. Real-time display on Fourier domain optical coherence tomography system using a graphics processing unit

    NASA Astrophysics Data System (ADS)

    Watanabe, Yuuki; Itagaki, Toshiki

    2009-11-01

    Fourier domain optical coherence tomography (FD-OCT) requires resampling of spectrally resolved depth information from wavelength to wave number, and the subsequent application of the inverse Fourier transform. The display rates of OCT images are much slower than the image acquisition rates due to processing speed limitations on most computers. We demonstrate a real-time display of processed OCT images using a linear-in-wave-number (linear-k) spectrometer and a graphics processing unit (GPU). We use the linear-k spectrometer with the combination of a diffractive grating with 1200 lines/mm and a F2 equilateral prism in the 840-nm spectral region to avoid calculating the resampling process. The calculations of the fast Fourier transform (FFT) are accelerated by the GPU with many stream processors, which realizes highly parallel processing. A display rate of 27.9 frames/sec for processed images (2048 FFT size×1000 lateral A-scans) is achieved in our OCT system using a line scan CCD camera operated at 27.9 kHz.
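
    The processing chain being accelerated here is short but must run per A-scan at the camera line rate: resample the spectrum from wavelength to wavenumber (the step the linear-k spectrometer removes), window, inverse Fourier transform, and log-scale the magnitude. The following is a numpy sketch under assumed wavelength range and sizes, not the paper's exact parameters.

```python
# Per-A-scan FD-OCT processing: lambda -> k resampling, window, IFFT, log magnitude.
import numpy as np

def process_ascan(spectrum, lam):
    """spectrum: fringe samples on a wavelength grid lam (both length 2048)."""
    k = 2 * np.pi / lam                               # wavenumber of each sample
    k_lin = np.linspace(k.min(), k.max(), lam.size)   # uniform k grid
    # np.interp needs ascending x; k decreases with wavelength, so reverse both
    fringe_k = np.interp(k_lin, k[::-1], spectrum[::-1])
    fringe_k *= np.hanning(lam.size)                  # window to suppress sidelobes
    depth = np.fft.ifft(fringe_k)                     # depth profile (A-line)
    return 20 * np.log10(np.abs(depth[: lam.size // 2]) + 1e-12)

lam = np.linspace(800e-9, 880e-9, 2048)               # assumed 840-nm band, 2048 pixels
a_line = process_ascan(np.random.rand(2048), lam)
```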

  2. On the use of graphics processing units (GPUs) for molecular dynamics simulation of spherical particles

    NASA Astrophysics Data System (ADS)

    Hidalgo, R. C.; Kanzaki, T.; Alonso-Marroquin, F.; Luding, S.

    2013-06-01

    General-purpose computation on Graphics Processing Units (GPU) on personal computers has recently become an attractive alternative to parallel computing on clusters and supercomputers. We present the GPU-implementation of an accurate molecular dynamics algorithm for a system of spheres. The new hybrid CPU-GPU implementation takes into account all the degrees of freedom, including the quaternion representation of 3D rotations. For additional versatility, the contact interaction between particles is defined using a force law of enhanced generality, which accounts for the elastic and dissipative interactions, and the hard-sphere interaction parameters are translated to the soft-sphere parameter set. We prove that the algorithm complies with the statistical mechanical laws by examining the homogeneous cooling of a granular gas with rotation. The results are in excellent agreement with well established mean-field theories for low-density hard sphere systems. This GPU technique dramatically reduces user waiting time, compared with a traditional CPU implementation.

  3. Acceleration of the GAMESS-UK electronic structure package on graphical processing units.

    PubMed

    Wilkinson, Karl A; Sherwood, Paul; Guest, Martyn F; Naidoo, Kevin J

    2011-07-30

    The approach used to calculate the two-electron integrals by many electronic structure packages, including the Generalized Atomic and Molecular Electronic Structure System-UK (GAMESS-UK), has been designed for CPU-based compute units. We redesigned the two-electron compute algorithm for acceleration on a graphical processing unit (GPU). We report the acceleration strategy and illustrate it on the (ss|ss) type integrals. This strategy is general for Fortran-based codes and uses the Accelerator compiler from Portland Group International and GPU-based accelerators from Nvidia. The evaluation of (ss|ss) type integrals within calculations using Hartree-Fock ab initio methods and density functional theory is accelerated by single and quad GPU hardware systems by factors of 43 and 153, respectively. The overall speedup for a single self-consistent field cycle is at least a factor of eight on a single GPU compared with a single CPU. PMID:21541963

  4. Efficient implementation of effective core potential integrals and gradients on graphical processing units

    NASA Astrophysics Data System (ADS)

    Song, Chenchen; Wang, Lee-Ping; Sachse, Torsten; Preiß, Julia; Presselt, Martin; Martínez, Todd J.

    2015-07-01

    Effective core potential integral and gradient evaluations are accelerated via implementation on graphical processing units (GPUs). Two simple formulas are proposed to estimate the upper bounds of the integrals, and these are used for screening. A sorting strategy is designed to balance the workload between GPU threads properly. Significant improvements in performance and reduced scaling with system size are observed when combining the screening and sorting methods, and the calculations are highly efficient for systems containing up to 10 000 basis functions. The GPU implementation preserves the precision of the calculation; the ground state Hartree-Fock energy achieves good accuracy for CdSe and ZnTe nanocrystals, and energy is well conserved in ab initio molecular dynamics simulations.

  5. Parallel multigrid solver of radiative transfer equation for photon transport via graphics processing unit.

    PubMed

    Gao, Hao; Phan, Lan; Lin, Yuting

    2012-09-01

    A graphics processing unit-based parallel multigrid solver for a radiative transfer equation with vacuum boundary condition or reflection boundary condition is presented for heterogeneous media with complex geometry based on two-dimensional triangular meshes or three-dimensional tetrahedral meshes. The computational complexity of this parallel solver is linearly proportional to the degrees of freedom in both angular and spatial variables, while the full multigrid method is utilized to minimize the number of iterations. The overall speed gain is roughly 30- to 300-fold with respect to our prior multigrid solver, depending on the underlying regime and the parallelization. The numerical validations are presented with the MATLAB codes at https://sites.google.com/site/rtefastsolver/. PMID:23085905

  6. Particle-In-Cell simulations of high pressure plasmas using graphics processing units

    NASA Astrophysics Data System (ADS)

    Gebhardt, Markus; Atteln, Frank; Brinkmann, Ralf Peter; Mussenbrock, Thomas; Mertmann, Philipp; Awakowicz, Peter

    2009-10-01

    Particle-In-Cell (PIC) simulations are widely used to understand the fundamental phenomena in low-temperature plasmas. In particular, plasmas at very low gas pressures are studied using PIC methods. The inherent drawback of these methods is that they are very time consuming, because certain stability conditions have to be satisfied. This holds even more for the PIC simulation of high pressure plasmas, due to the very high collision rates. Such simulations take a very long time to run on standard computers and require the help of computer clusters or supercomputers. Recent advances in the field of graphics processing units (GPUs) provide every personal computer with a highly parallel multi-processor architecture for very little money. This architecture is freely programmable and can be used to implement a wide class of problems. In this paper we present the concepts of a fully parallel PIC simulation of high pressure plasmas using the benefits of GPU programming.

  7. A graphical method to evaluate predominant geochemical processes occurring in groundwater systems for radiocarbon dating

    USGS Publications Warehouse

    Han, Liang-Feng; Plummer, L. Niel; Aggarwal, Pradeep

    2012-01-01

    A graphical method is described for identifying geochemical reactions needed in the interpretation of radiocarbon age in groundwater systems. Graphs are constructed by plotting the measured 14C, δ13C, and concentration of dissolved inorganic carbon and are interpreted according to specific criteria to recognize water samples that are consistent with a wide range of processes, including geochemical reactions, carbon isotopic exchange, 14C decay, and mixing of waters. The graphs are used to provide a qualitative estimate of radiocarbon age, to deduce the hydrochemical complexity of a groundwater system, and to compare samples from different groundwater systems. Graphs of chemical and isotopic data from a series of previously-published groundwater studies are used to demonstrate the utility of the approach. Ultimately, the information derived from the graphs is used to improve geochemical models for adjustment of radiocarbon ages in groundwater systems.

  8. Efficient implementation of effective core potential integrals and gradients on graphical processing units.

    PubMed

    Song, Chenchen; Wang, Lee-Ping; Sachse, Torsten; Preiss, Julia; Presselt, Martin; Martínez, Todd J

    2015-07-01

    Effective core potential integral and gradient evaluations are accelerated via implementation on graphical processing units (GPUs). Two simple formulas are proposed to estimate the upper bounds of the integrals, and these are used for screening. A sorting strategy is designed to balance the workload between GPU threads properly. Significant improvements in performance and reduced scaling with system size are observed when combining the screening and sorting methods, and the calculations are highly efficient for systems containing up to 10 000 basis functions. The GPU implementation preserves the precision of the calculation; the ground state Hartree-Fock energy achieves good accuracy for CdSe and ZnTe nanocrystals, and energy is well conserved in ab initio molecular dynamics simulations. PMID:26156472

  9. Acceleration of Electron Repulsion Integral Evaluation on Graphics Processing Units via Use of Recurrence Relations.

    PubMed

    Miao, Yipu; Merz, Kenneth M

    2013-02-12

    Electron repulsion integral (ERI) calculation on graphical processing units (GPUs) can significantly accelerate quantum chemical calculations. Herein, the ab initio self-consistent-field (SCF) calculation is implemented on GPUs using recurrence relations, which is one of the fastest ERI evaluation algorithms currently available. A direct-SCF scheme to assemble the Fock matrix efficiently is presented, wherein ERIs are evaluated on-the-fly to avoid CPU-GPU data transfer, a well-known architectural bottleneck in GPU specific computation. Realized speedups on GPUs reach 10-100 times relative to traditional CPU nodes, with accuracies of better than 1 × 10⁻⁷ for systems with more than 4000 basis functions. PMID:26588740

  10. Implementation and optimization of ultrasound signal processing algorithms on mobile GPU

    NASA Astrophysics Data System (ADS)

    Kong, Woo Kyu; Lee, Wooyoul; Kim, Kyu Cheol; Yoo, Yangmo; Song, Tai-Kyong

    2014-03-01

    A general-purpose graphics processing unit (GPGPU) has been used for improving computing power in medical ultrasound imaging systems. Recently, mobile GPUs have become powerful enough to handle 3D games and videos at high frame rates on Full HD or HD resolution displays. This paper proposes a method to implement ultrasound signal processing on a mobile GPU available in a high-end smartphone (Galaxy S4, Samsung Electronics, Seoul, Korea) with programmable shaders on the OpenGL ES 2.0 platform. To maximize the performance of the mobile GPU, the shader design was optimized and the load was shared between the vertex and fragment shaders. The beamformed data were captured from a tissue-mimicking phantom (Model 539 Multipurpose Phantom, ATS Laboratories, Inc., Bridgeport, CT, USA) by using a commercial ultrasound imaging system equipped with a research package (Ultrasonix Touch, Ultrasonix, Richmond, BC, Canada). The real-time performance is evaluated by frame rates while varying the range of signal processing blocks. The implementation of ultrasound signal processing on OpenGL ES 2.0 was verified by analyzing the PSNR against a MATLAB gold standard with the same signal path. The CNR was also analyzed to verify the method. From the evaluations, the proposed mobile GPU-based processing method shows no significant difference from the MATLAB processing (i.e., PSNR<52.51 dB). Comparable CNR results were obtained from both processing methods (i.e., 11.31). From the mobile GPU implementation, a frame rate of 57.6 Hz was achieved. The total execution time was 17.4 ms, which was faster than the acquisition time (i.e., 34.4 ms). These results indicate that the mobile GPU-based processing method can support real-time ultrasound B-mode processing on the smartphone.
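
    As a rough illustration of the signal-processing blocks involved, the numpy/scipy sketch below performs envelope detection and log compression, the core of B-mode processing; the paper implements the equivalents in OpenGL ES 2.0 shaders, and the beamformed RF data here is a synthetic stand-in.

```python
# Core B-mode blocks: envelope detection via the analytic signal, log compression.
import numpy as np
from scipy.signal import hilbert

def bmode(rf, dynamic_range_db=60.0):
    """rf: (samples, scanlines) beamformed RF data."""
    env = np.abs(hilbert(rf, axis=0))            # envelope from the analytic signal
    env /= env.max()                             # normalize to the peak
    img = 20 * np.log10(env + 1e-12)             # log compression
    return np.clip(img, -dynamic_range_db, 0.0)  # map into the display range

rf = np.random.randn(2048, 128)                  # placeholder phantom data
image = bmode(rf)
```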

  11. Real-time resampling in Fourier domain optical coherence tomography using a graphics processing unit.

    PubMed

    Van der Jeught, Sam; Bradu, Adrian; Podoleanu, Adrian Gh

    2010-01-01

    Fourier domain optical coherence tomography (FD-OCT) requires either a linear-in-wavenumber spectrometer or a computationally heavy software algorithm to recalibrate the acquired optical signal from wavelength to wavenumber. The first method is sensitive to the position of the prism in the spectrometer, while the second method drastically slows down the system speed when it is implemented on a serially oriented central processing unit. We implement the full resampling process on a commercial graphics processing unit (GPU), distributing the necessary calculations to many stream processors that operate in parallel. A comparison between several recalibration methods is made in terms of performance and image quality. The GPU is also used to accelerate the fast Fourier transform (FFT) and to remove the background noise, thereby achieving full GPU-based signal processing without the need for extra resampling hardware. A display rate of 25 frames/sec is achieved for processed images (1,024 x 1,024 pixels) using a line-scan charge-coupled device (CCD) camera operating at 25.6 kHz. PMID:20614994

  12. Real-time resampling in Fourier domain optical coherence tomography using a graphics processing unit

    NASA Astrophysics Data System (ADS)

    van der Jeught, Sam; Bradu, Adrian; Podoleanu, Adrian Gh.

    2010-05-01

    Fourier domain optical coherence tomography (FD-OCT) requires either a linear-in-wavenumber spectrometer or a computationally heavy software algorithm to recalibrate the acquired optical signal from wavelength to wavenumber. The first method is sensitive to the position of the prism in the spectrometer, while the second method drastically slows down the system speed when it is implemented on a serially oriented central processing unit. We implement the full resampling process on a commercial graphics processing unit (GPU), distributing the necessary calculations to many stream processors that operate in parallel. A comparison between several recalibration methods is made in terms of performance and image quality. The GPU is also used to accelerate the fast Fourier transform (FFT) and to remove the background noise, thereby achieving full GPU-based signal processing without the need for extra resampling hardware. A display rate of 25 frames/sec is achieved for processed images (1024×1024 pixels) using a line-scan charge-coupled device (CCD) camera operating at 25.6 kHz.

  13. Real-time blood flow visualization using the graphics processing unit.

    PubMed

    Yang, Owen; Cuccia, David; Choi, Bernard

    2011-01-01

    Laser speckle imaging (LSI) is a technique in which coherent light incident on a surface produces a reflected speckle pattern that is related to the underlying movement of optical scatterers, such as red blood cells, indicating blood flow. Image-processing algorithms can be applied to produce speckle flow index (SFI) maps of relative blood flow. We present a novel algorithm that employs the NVIDIA Compute Unified Device Architecture (CUDA) platform to perform laser speckle image processing on the graphics processing unit. Software written in C was integrated with CUDA and incorporated into a LabVIEW Virtual Instrument (VI) that is interfaced with a monochrome CCD camera able to acquire high-resolution raw speckle images at nearly 10 fps. With the CUDA code integrated into the LabVIEW VI, the processing and display of SFI images were also performed at ∼10 fps. We present three video examples depicting real-time flow imaging during a reactive hyperemia maneuver, with fluid flow through an in vitro phantom, and a demonstration of real-time LSI during laser surgery of a port wine stain birthmark. PMID:21280915
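
    The per-pixel computation that the CUDA kernel parallelizes is a windowed speckle contrast, K = sigma/mean, from which a speckle flow index is derived. The sketch below uses scipy's uniform filter; the 7x7 window and the SFI convention 1/K^2 are common choices rather than the paper's stated parameters.

```python
# Windowed speckle contrast and a simple speckle flow index (SFI) map.
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_flow_index(raw, win=7):
    """raw: single raw speckle image (2-D float array)."""
    mean = uniform_filter(raw, win)               # local mean
    mean_sq = uniform_filter(raw * raw, win)      # local mean of squares
    var = np.maximum(mean_sq - mean**2, 0.0)      # local variance (clamped)
    K = np.sqrt(var) / (mean + 1e-12)             # speckle contrast
    return 1.0 / (K**2 + 1e-12)                   # SFI ~ relative flow

sfi_map = speckle_flow_index(np.random.rand(480, 640))
```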

  14. Real-time speckle variance swept-source optical coherence tomography using a graphics processing unit

    PubMed Central

    Lee, Kenneth K. C.; Mariampillai, Adrian; Yu, Joe X. Z.; Cadotte, David W.; Wilson, Brian C.; Standish, Beau A.; Yang, Victor X. D.

    2012-01-01

    Advances in swept-source laser technology continue to increase the imaging speed of swept-source optical coherence tomography (SS-OCT) systems. These fast imaging speeds are ideal for microvascular detection schemes, such as speckle variance (SV), where interframe motion can cause severe imaging artifacts and loss of vascular contrast. However, full utilization of the laser scan speed has been hindered by the computationally intensive signal processing required by SS-OCT and SV calculations. Using a commercial graphics processing unit that has been optimized for parallel data processing, we report a complete high-speed SS-OCT platform capable of real-time data acquisition, processing, display, and saving at 108,000 lines per second. Subpixel image registration of structural images was performed in real-time prior to SV calculations in order to reduce decorrelation from stationary structures induced by the bulk tissue motion. The viability of the system was successfully demonstrated in a high bulk tissue motion scenario of human fingernail root imaging where SV images (512 × 512 pixels, n = 4) were displayed at 54 frames per second. PMID:22808428

  15. Process industries - graphic arts, paint, plastics, and textiles: all cousins under the skin

    NASA Astrophysics Data System (ADS)

    Simon, Frederick T.

    2002-06-01

    The origin and selection of colors in the process industries differ depending upon how the creative process is applied and what the capabilities of the manufacturing process are. The fashion industry (clothing), with its supplier of textiles, is the leader in color innovation. Color may be introduced into textile products at several stages in the manufacturing process, from fiber through yarn and finally into fabric. The paint industry is divided into two major applications: automotive and trade sales. Automotive colors are selected by stylists who are in the employ of the automobile manufacturers. Trade sales paint, on the other hand, can be decided by paint manufacturers or by individuals who patronize custom mixing facilities. Plastics colors are for the most part decided by the industrial designers who include color as part of the design. Graphic arts (printing) is a burgeoning industry that uses color in image reproduction and package design. Except for text, printed material in color today has become the norm rather than the exception.

  16. Atmospheric process evaluation of mobile source emissions

    SciTech Connect

    1995-07-01

    During the past two decades there has been a considerable effort in the US to develop and introduce an alternative to the use of gasoline and conventional diesel fuel for transportation. The primary motives for this effort have been twofold: energy security and improvement in air quality, most notably ozone, or smog. The anticipated improvement in air quality is associated with a decrease in the atmospheric reactivity, and sometimes a decrease in the mass emission rate, of the organic gas and NOx emissions from alternative fuels when compared to conventional transportation fuels. Quantification of these air quality impacts is a prerequisite to decisions on adopting alternative fuels. The purpose of this report is to present a critical review of the procedures and data base used to assess the impact on ambient air quality of mobile source emissions from alternative and conventional transportation fuels, and to make recommendations as to how this process can be improved. Alternative transportation fuels are defined as methanol, ethanol, CNG, LPG, and reformulated gasoline. Most of the discussion centers on light-duty alternative fuel vehicles (AFVs) operating on these fuels. Other advanced transportation technologies and fuels, such as hydrogen, electric vehicles, and fuel cells, will not be discussed. However, the issues raised herein can also be applied to these technologies and to other classes of vehicles, such as heavy-duty diesels (HDDs). An evaluation of the overall impact of AFVs on society requires consideration of a number of complex issues. It involves the development of new vehicle technology associated with engines, fuel systems, and emission control technology; the implementation of the necessary fuel infrastructure; and an appropriate understanding of the economic, health, safety, and environmental impacts associated with the use of these fuels. This report addresses the steps necessary to properly evaluate the impact of AFVs on ozone air quality.

  17. Optical diagnostics of a single evaporating droplet using fast parallel computing on graphics processing units

    NASA Astrophysics Data System (ADS)

    Jakubczyk, D.; Migacz, S.; Derkachov, G.; Woźniak, M.; Archer, J.; Kolwas, K.

    2016-09-01

    We report on the first application of graphics processing unit (GPU)-accelerated computing technology to improve the performance of numerical methods used for the optical characterization of evaporating microdroplets. Single microdroplets of various liquids with different volatility and molecular weight (glycerine, glycols, water, etc.), as well as mixtures of liquids and diverse suspensions, evaporate inside the electrodynamic trap under a chosen temperature and composition of atmosphere. The series of scattering patterns recorded from the evaporating microdroplets are processed by fitting complete Mie theory predictions with a gradientless lookup table method. We show that computations on GPUs can be effectively applied to inverse scattering problems. In particular, our technique accelerated calculations of Mie scattering theory by over 800 times relative to a single-core processor in a MATLAB environment, and by almost 100 times relative to the corresponding code in C. Additionally, we overcame the problem of time-consuming data post-processing when some of the parameters of an investigated liquid (particularly the refractive index) are uncertain. Our program allows us to track the parameters characterizing the evaporating droplet nearly simultaneously with the progress of evaporation.

  18. Developing extensible lattice-Boltzmann simulators for general-purpose graphics-processing units

    SciTech Connect

    Walsh, S C; Saar, M O

    2011-12-21

    Lattice-Boltzmann methods are versatile numerical modeling techniques capable of reproducing a wide variety of fluid-mechanical behavior. These methods are well suited to parallel implementation, particularly on the single-instruction multiple-data (SIMD) parallel processing environments found in computer graphics processing units (GPUs). Although more recent programming tools dramatically improve the ease with which GPU programs can be written, the programming environment still lacks the flexibility available to more traditional CPU programs. In particular, it may be difficult to develop modular and extensible programs that require variable on-device functionality with current GPU architectures. This paper describes a process of automatic code generation that overcomes these difficulties for lattice-Boltzmann simulations. It details the development of GPU-based modules for an extensible lattice-Boltzmann simulation package - LBHydra. The performance of the automatically generated code is compared to that of equivalent purpose-written codes for single-phase, multiple-phase, and multiple-component flows. The flexibility of the new method is demonstrated by simulating a rising, dissolving droplet in a porous medium with user-generated lattice-Boltzmann models and subroutines.
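
    To make the computational pattern concrete, the sketch below is a minimal single-relaxation-time (BGK) D2Q9 lattice-Boltzmann step with periodic boundaries in numpy. Every lattice node performs the same stream-collide update, which is exactly the SIMD structure that maps well onto GPUs; this is a generic textbook kernel, not LBHydra code, and the grid size and relaxation time are arbitrary.

```python
# Minimal D2Q9 BGK lattice-Boltzmann step with periodic boundaries.
import numpy as np

c = np.array([(0,0),(1,0),(0,1),(-1,0),(0,-1),(1,1),(-1,1),(-1,-1),(1,-1)])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)         # lattice weights
tau = 0.6                                        # relaxation time (sets viscosity)

def equilibrium(rho, ux, uy):
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    usq = ux**2 + uy**2
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbm_step(f):
    # Streaming: shift each population along its lattice velocity
    f = np.array([np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
                  for i in range(9)])
    rho = f.sum(axis=0)                                       # macroscopic density
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho          # macroscopic velocity
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    return f + (equilibrium(rho, ux, uy) - f) / tau           # BGK collision

f = equilibrium(np.ones((64, 64)), np.zeros((64, 64)), np.zeros((64, 64)))
for _ in range(100):
    f = lbm_step(f)
```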

  19. Graphical Technique to Support the Teaching/Learning Process of Software Process Reference Models

    NASA Astrophysics Data System (ADS)

    Espinosa-Curiel, Ismael Edrein; Rodríguez-Jacobo, Josefina; Fernández-Zepeda, José Alberto

    In this paper, we propose a set of diagrams to visualize software process reference models (PRMs). The diagrams, called dimods, are a combination of visual and process modeling techniques such as rich pictures, mind maps, IDEF and RAD diagrams. We show the use of this technique by designing a set of dimods for the Mexican Software Industry Process Model (MoProSoft). Additionally, we perform an evaluation of the usefulness of dimods. The result of the evaluation shows that dimods may be a support tool that facilitates the understanding, memorization, and learning of software PRMs in both software development organizations and universities. The results also show that dimods may have advantages over the traditional description methods for these types of models.

  20. Spatial resolution recovery utilizing multi-ray tracing and graphic processing unit in PET image reconstruction

    NASA Astrophysics Data System (ADS)

    Liang, Yicheng; Peng, Hao

    2015-02-01

    Depth-of-interaction (DOI) poses a major challenge for a PET system to achieve uniform spatial resolution across the field-of-view, particularly for small animal and organ-dedicated PET systems. In this work, we implemented an analytical method to model system matrix for resolution recovery, which was then incorporated in PET image reconstruction on a graphical processing unit platform, due to its parallel processing capacity. The method utilizes the concepts of virtual DOI layers and multi-ray tracing to calculate the coincidence detection response function for a given line-of-response. The accuracy of the proposed method was validated for a small-bore PET insert to be used for simultaneous PET/MR breast imaging. In addition, the performance comparisons were studied among the following three cases: 1) no physical DOI and no resolution modeling; 2) two physical DOI layers and no resolution modeling; and 3) no physical DOI design but with a different number of virtual DOI layers. The image quality was quantitatively evaluated in terms of spatial resolution (full-width-half-maximum and position offset), contrast recovery coefficient and noise. The results indicate that the proposed method has the potential to be used as an alternative to other physical DOI designs and achieve comparable imaging performances, while reducing detector/system design cost and complexity.

  1. Particle-in-cell simulations with charge-conserving current deposition on graphic processing units

    NASA Astrophysics Data System (ADS)

    Ren, Chuang; Kong, Xianglong; Huang, Michael; Decyk, Viktor; Mori, Warren

    2011-10-01

    Recently, using CUDA, we have developed an electromagnetic Particle-in-Cell (PIC) code with charge-conserving current deposition for Nvidia graphics processing units (GPUs) (Kong et al., Journal of Computational Physics 230, 1676 (2011)). On a Tesla M2050 (Fermi) card, the GPU PIC code can achieve a one-particle-step process time of 1.2 - 3.2 ns in 2D and 2.3 - 7.2 ns in 3D, depending on plasma temperatures. In this talk we will discuss novel algorithms for GPU-PIC, including a charge-conserving current deposition scheme with little branching and parallel particle sorting. These algorithms have made efficient use of the GPU shared memory. We will also discuss how to replace the computation kernels of existing parallel CPU codes while keeping their parallel structures. This work was supported by U.S. Department of Energy under Grant Nos. DE-FG02-06ER54879 and DE-FC02-04ER54789 and by NSF under Grant Nos. PHY-0903797 and CCF-0747324.

  2. Fast ray-tracing of human eye optics on Graphics Processing Units.

    PubMed

    Wei, Qi; Patkar, Saket; Pai, Dinesh K

    2014-05-01

    We present a new technique for simulating retinal image formation by tracing a large number of rays from objects in three dimensions as they pass through the optic apparatus of the eye to the retina. Simulating human optics is useful for answering basic questions of vision science and for studying vision defects and their corrections. Because of the complexity of computing such simulations accurately, most previous efforts used simplified analytical models of the normal eye. This makes them less effective in modeling vision disorders associated with abnormal shapes of the ocular structures, which are hard to represent precisely with analytical surfaces. We have developed a computer simulator that can simulate ocular structures of arbitrary shapes, for instance represented by polygon meshes. Topographic and geometric measurements of the cornea, lens, and retina from keratometer or medical imaging data can be integrated for individualized examination. We utilize parallel processing on modern Graphics Processing Units (GPUs) to efficiently compute retinal images by tracing millions of rays. A stable retinal image can be generated within minutes. We simulated depth-of-field, accommodation, chromatic aberrations, as well as astigmatism and its correction. We also show application of the technique in patient-specific vision correction by incorporating geometric models of the orbit reconstructed from clinical medical images. PMID:24713524

  3. Graphics Processing Unit (GPU) Acceleration of the Goddard Earth Observing System Atmospheric Model

    NASA Technical Reports Server (NTRS)

    Putnam, Williama

    2011-01-01

    The Goddard Earth Observing System 5 (GEOS-5) is the atmospheric model used by the Global Modeling and Assimilation Office (GMAO) for a variety of applications, from long-term climate prediction at relatively coarse resolution, to data assimilation and numerical weather prediction, to very high-resolution cloud-resolving simulations. GEOS-5 is being ported to a graphics processing unit (GPU) cluster at the NASA Center for Climate Simulation (NCCS). By utilizing GPU co-processor technology, we expect to increase the throughput of GEOS-5 by at least an order of magnitude, and accelerate the process of scientific exploration across all scales of global modeling, including: the large-scale, high-end application of non-hydrostatic, global, cloud-resolving modeling at 10- to 1-kilometer (km) global resolutions; intermediate-resolution seasonal climate and weather prediction at 50- to 25-km resolution on small clusters of GPUs; and long-range, coarse-resolution climate modeling, enabled on a small box of GPUs for the individual researcher. After being ported to the GPU cluster, the primary physics components and the dynamical core of GEOS-5 have demonstrated a potential speedup of 15-40 times over conventional processor cores. Performance improvements of this magnitude reduce the required scalability of 1-km, global, cloud-resolving models from an unfathomable 6 million cores to an attainable 200,000 GPU-enabled cores.

  4. Accelerated rescaling of single Monte Carlo simulation runs with the Graphics Processing Unit (GPU).

    PubMed

    Yang, Owen; Choi, Bernard

    2013-01-01

    To interpret fiber-based and camera-based measurements of remitted light from biological tissues, researchers typically use analytical models, such as the diffusion approximation to light transport theory, or stochastic models, such as Monte Carlo modeling. To achieve rapid (ideally real-time) measurement of tissue optical properties, especially in clinical situations, there is a critical need to accelerate Monte Carlo simulation runs. In this manuscript, we report on our approach of using the Graphics Processing Unit (GPU) to accelerate rescaling of single Monte Carlo runs so as to rapidly calculate diffuse reflectance values for different sets of tissue optical properties. We selected MATLAB to enable non-specialists in C and CUDA-based programming to use the generated open-source code. We developed a software package with four abstraction layers. To calculate a set of diffuse reflectance values from a simulated tissue with homogeneous optical properties, our rescaling GPU-based approach achieves a reduction in computation time of several orders of magnitude as compared to other GPU-based approaches. Specifically, our GPU-based approach generated a diffuse reflectance value in 0.08 ms. The transfer time from CPU to GPU memory currently is a limiting factor with GPU-based calculations. However, for calculation of multiple diffuse reflectance values, our GPU-based approach can still lead to processing that is ~3400 times faster than other GPU-based approaches. PMID:24298424
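
    The general rescaling idea can be illustrated compactly: run the Monte Carlo once (for example with zero absorption), store each detected photon's total path length and exit weight, and then evaluate the diffuse reflectance for any absorption coefficient by Beer-Lambert reweighting. The numpy sketch below shows this generic scaled Monte Carlo pattern on synthetic stored data; it is not the authors' four-layer software package.

```python
# Scaled Monte Carlo sketch: reweight one stored (mu_a = 0) run for any mu_a.
import numpy as np

rng = np.random.default_rng(0)
n_photons = 1_000_000
path_len_cm = rng.exponential(scale=1.0, size=n_photons)  # stand-in for stored path lengths
exit_weight = np.ones(n_photons)                          # stand-in for stored exit weights

def diffuse_reflectance(mu_a_per_cm):
    """Rescale the stored run to a new absorption coefficient via Beer-Lambert."""
    return np.mean(exit_weight * np.exp(-mu_a_per_cm * path_len_cm))

# One stored simulation now answers many optical-property queries:
for mu_a in (0.01, 0.1, 1.0):
    print(mu_a, diffuse_reflectance(mu_a))
```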

  5. Lossy hyperspectral image compression on a graphics processing unit: parallelization strategy and performance evaluation

    NASA Astrophysics Data System (ADS)

    Santos, Lucana; Magli, Enrico; Vitulli, Raffaele; Núñez, Antonio; López, José F.; Sarmiento, Roberto

    2013-01-01

    There is an intense need for the development of new hardware architectures for the implementation of algorithms for hyperspectral image compression on board satellites. Graphics processing units (GPUs) represent a very attractive option, offering the possibility to dramatically increase the computation speed in applications that are data and task parallel. An algorithm for the lossy compression of hyperspectral images is implemented on a GPU using the NVIDIA compute unified device architecture (CUDA) parallel computing architecture. The parallelization strategy is explained, with emphasis on the entropy coding and bit packing phases, for which a more sophisticated strategy is necessary due to the existing data dependencies. Experimental results are obtained by comparing the performance of the GPU implementation with a single-threaded CPU implementation, showing speedups of up to 15.41. A profiling of the algorithm is provided, demonstrating the high performance of the designed parallel entropy coding phase. The accuracy of the GPU implementation is presented, as well as the effect of the configuration parameters on performance. The convenience of using GPUs for on-board processing is demonstrated, and solutions to the potential difficulties encountered when accelerating hyperspectral compression algorithms are proposed, in anticipation of space-qualified GPUs becoming a reality in the near future.
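
    The bit-packing data dependency mentioned above is typically broken in two stages: an exclusive prefix sum over the per-symbol code lengths assigns every codeword a start bit, after which all codewords can be written concurrently. A hedged CUDA sketch of the second stage (the names and the 32-bit codeword assumption are ours, not the paper's):

        // Concurrent bit packing: once startBit[] is known from an exclusive
        // scan of the code lengths, every codeword can be ORed into the output
        // stream independently (LSB-first packing, codewords <= 32 bits).
        #include <cuda_runtime.h>

        __global__ void pack_bits(const unsigned int* code,  // right-aligned codewords
                                  const int* len,            // bits per codeword
                                  const long long* startBit, // exclusive scan of len
                                  int nSymbols,
                                  unsigned int* out)         // zero-initialized words
        {
            int i = blockIdx.x*blockDim.x + threadIdx.x;
            if (i >= nSymbols) return;
            long long s = startBit[i];
            int word    = (int)(s >> 5);   // s / 32
            int offset  = (int)(s & 31);   // s % 32
            unsigned long long bits = (unsigned long long)code[i] << offset;
            atomicOr(&out[word], (unsigned int)(bits & 0xffffffffu));
            if (offset + len[i] > 32)      // codeword straddles a word boundary
                atomicOr(&out[word + 1], (unsigned int)(bits >> 32));
        }

    The start-bit array can be produced with, for example, thrust::exclusive_scan over the length array before this kernel is launched.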

  6. Spatial resolution recovery utilizing multi-ray tracing and graphic processing unit in PET image reconstruction.

    PubMed

    Liang, Yicheng; Peng, Hao

    2015-02-01

    Depth-of-interaction (DOI) poses a major challenge for a PET system to achieve uniform spatial resolution across the field-of-view, particularly for small animal and organ-dedicated PET systems. In this work, we implemented an analytical method to model the system matrix for resolution recovery, which was then incorporated into PET image reconstruction on a graphics processing unit platform, chosen for its parallel processing capacity. The method utilizes the concepts of virtual DOI layers and multi-ray tracing to calculate the coincidence detection response function for a given line-of-response. The accuracy of the proposed method was validated for a small-bore PET insert to be used for simultaneous PET/MR breast imaging. In addition, performance was compared among the following three cases: 1) no physical DOI and no resolution modeling; 2) two physical DOI layers and no resolution modeling; and 3) no physical DOI design but with a different number of virtual DOI layers. The image quality was quantitatively evaluated in terms of spatial resolution (full-width-half-maximum and position offset), contrast recovery coefficient, and noise. The results indicate that the proposed method has the potential to be used as an alternative to physical DOI designs and achieve comparable imaging performance, while reducing detector/system design cost and complexity. PMID:25591118

  7. A low-cost microcomputer system for interactive graphical processing of geophysical data on magnetic tape

    NASA Astrophysics Data System (ADS)

    Ulbrich, Carlton W.; Holden, Daniel N.

    In a recent Eos article (“Applications of Personal Computers in Geophysics,” Eos, November 18, 1986, p. 1321), W.H.K. Lee, J.C. Lahr, and R.E. Haberman described the uses of microcomputers in scientific work, with emphasis on applications in geophysics. One of the conclusions of their article was that common microcomputers are not convenient for processing of geophysical data in a high-level language such as Fortran because of the long times required to compile executable programs of only moderate size. They also indicate that common personal computers (PCs) are usually not equipped with tape drives and are not powerful enough to do heavy input/output (I/O) or “number crunching.” The purpose of this note is to supplement the material that Lee et al. presented by describing a relatively low-cost microcomputer system that is capable of performing interactive graphical processing of meteorological data on magnetic tape in a variety of computer languages, including Fortran.

  8. Real-time lossy compression of hyperspectral images using iterative error analysis on graphics processing units

    NASA Astrophysics Data System (ADS)

    Sánchez, Sergio; Plaza, Antonio

    2012-06-01

    Hyperspectral image compression is an important task in remotely sensed Earth Observation as the dimensionality of this kind of image data is ever increasing. This requires on-board compression in order to optimize the downlink connection when sending the data to Earth. A successful algorithm for the lossy compression of remotely sensed hyperspectral data is the iterative error analysis (IEA) algorithm, which applies an iterative process that allows the amount of information loss and the compression ratio to be controlled through the number of iterations. This algorithm, which is based on spectral unmixing concepts, can be computationally expensive for hyperspectral images with high dimensionality. In this paper, we develop a new parallel implementation of the IEA algorithm for hyperspectral image compression on graphics processing units (GPUs). The proposed implementation is tested on several different GPUs from NVIDIA, and is shown to exhibit real-time performance in the analysis of Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data sets collected over different locations. The proposed algorithm and its parallel GPU implementation represent a significant advance towards real-time onboard (lossy) compression of hyperspectral data in which the quality of the compression can also be adjusted in real time.
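
    Each IEA iteration reconstructs the image from the current endmember set and selects the pixel with the largest reconstruction error as the next endmember. The error computation is embarrassingly parallel; a minimal CUDA sketch under that reading (variable names are illustrative, not from the paper):

        // One thread computes the squared reconstruction error of one pixel;
        // the max-error pixel becomes the next endmember in IEA-style unmixing.
        #include <cuda_runtime.h>

        __global__ void pixel_error(const float* image, // nPixels x nBands, row-major
                                    const float* recon, // same layout
                                    int nPixels, int nBands,
                                    float* err)         // one value per pixel
        {
            int p = blockIdx.x*blockDim.x + threadIdx.x;
            if (p >= nPixels) return;
            float e = 0.0f;
            for (int b = 0; b < nBands; ++b) {
                float d = image[p*nBands + b] - recon[p*nBands + b];
                e += d * d;
            }
            err[p] = e;
        }

    The maximum-error pixel can then be located with, e.g., thrust::max_element over err, and the loop repeats until the desired compression ratio or error bound is met.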

  9. A performance comparison of different graphics processing units running direct N-body simulations

    NASA Astrophysics Data System (ADS)

    Capuzzo-Dolcetta, R.; Spera, M.

    2013-11-01

    Hybrid computational architectures based on the joint power of Central Processing Units (CPUs) and Graphics Processing Units (GPUs) are becoming popular and powerful hardware tools for a wide range of simulations in biology, chemistry, engineering, physics, etc. In this paper we present a performance comparison of various GPUs available on the market when applied to the numerical integration of the classic, gravitational N-body problem. To do this, we developed an OpenCL version of the parallel code HiGPUs for these tests, because this portable version is the only one able to run on GPUs of different makes. The main general result is that we confirm the reliability, speed, and low cost of GPUs when applied to the examined kind of problems, i.e., when the forces to evaluate depend on the mutual distances, as happens in gravitational physics and molecular dynamics. More specifically, we find that even cheap GPUs built for gaming applications deliver very high computing speed in scientific applications and, although with some limitations concerning on-board memory, can be a good choice for building a cheap and efficient machine for scientific applications.
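
    The computational pattern being benchmarked is the all-pairs force evaluation, where each body's acceleration depends on its distance to every other body. HiGPUs itself is a far more elaborate code with high-order integration; the minimal CUDA sketch below only illustrates that O(N^2) kernel structure:

        // All-pairs gravitational acceleration, one thread per body (G = 1 units).
        #include <cuda_runtime.h>
        #include <math.h>

        __global__ void accel(const float4* body, // x,y,z = position, w = mass
                              float4* acc, int n, float eps2)
        {
            int i = blockIdx.x*blockDim.x + threadIdx.x;
            if (i >= n) return;
            float4 bi = body[i];
            float ax = 0.f, ay = 0.f, az = 0.f;
            for (int j = 0; j < n; ++j) {
                float4 bj = body[j];
                float dx = bj.x - bi.x, dy = bj.y - bi.y, dz = bj.z - bi.z;
                float r2 = dx*dx + dy*dy + dz*dz + eps2;  // softened distance
                float inv = rsqrtf(r2);
                float s = bj.w * inv * inv * inv;         // m_j / r^3
                ax += s*dx; ay += s*dy; az += s*dz;
            }
            acc[i] = make_float4(ax, ay, az, 0.f);
        }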

  10. Graphics processing units as tools to predict mechanisms of biological signaling pathway regulation

    NASA Astrophysics Data System (ADS)

    McCarter, Patrick; Elston, Timothy; Nagiek, Michal; Dohlman, Henrik

    2013-04-01

    Biochemical and genomic studies have revealed protein components of S. cerevisiae (yeast) signal transduction networks. These networks allow the transmission of extracellular signals to the cell nucleus through coordinated biochemical interactions, resulting in direct responses to specific external stimuli. The coordination and regulation mechanisms of proteins in these networks have not been fully characterized. Thus, in this work we develop systems of ordinary differential equations to characterize processes that regulate signaling pathways. We employ graphics processing units (GPUs) in high performance computing environments to search in parallel through substantially more comprehensive parameter sets than allowed by personal computers. As a result, we are able to parameterize larger models with experimental data, leading to an increase in our model prediction capabilities. Thus far these models have helped to identify specific mechanisms such as positive and negative feedback loops that control network protein activity. We ultimately believe that the use of GPUs in biochemical signal transduction pathway modeling will help to discern how regulation mechanisms allow cells to respond to multiple external stimuli.
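
    The parallelization pattern described here maps one candidate parameter set to one GPU thread, each integrating the ODE system independently. A hedged sketch with a toy two-species negative-feedback model standing in for the authors' yeast pathway model (all names are illustrative):

        // Parameter sweep: one thread integrates one candidate parameter set
        // of a toy negative-feedback model with forward Euler.
        #include <cuda_runtime.h>

        __global__ void sweep(const float* k1, const float* k2, // one pair per thread
                              int nSets, float dt, int nSteps,
                              float* xFinal)
        {
            int i = blockIdx.x*blockDim.x + threadIdx.x;
            if (i >= nSets) return;
            float a = k1[i], b = k2[i];
            float x = 0.0f, y = 0.0f;        // initial state
            for (int s = 0; s < nSteps; ++s) {
                // dx/dt = a - b*x*y : production inhibited by y
                // dy/dt = x - y     : y tracks x and feeds back negatively
                float dx = a - b*x*y;
                float dy = x - y;
                x += dt*dx;
                y += dt*dy;
            }
            xFinal[i] = x;  // compared against experimental data on the host
        }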

  11. Ultra-fast displaying Spectral Domain Optical Doppler Tomography system using a Graphics Processing Unit.

    PubMed

    Jeong, Hyosang; Cho, Nam Hyun; Jung, Unsang; Lee, Changho; Kim, Jeong-Yeon; Kim, Jeehyun

    2012-01-01

    We demonstrate an ultrafast displaying Spectral Domain Optical Doppler Tomography system using Graphics Processing Unit (GPU) computing. The calculation of the FFT and the Doppler frequency shift is accelerated by the GPU. Our system can display processed OCT and ODT images simultaneously in real time at 120 fps for 1,024 pixels × 512 lateral A-scans. The computing time for the Doppler information was dependent on the size of the moving average window, but with a window size of 32 pixels the ODT computation time is only 8.3 ms, which is comparable to the data acquisition time. The phase noise also decreases significantly with increasing window size. The performance of a real-time display for OCT/ODT is very important for clinical applications that need immediate diagnosis for screening or biopsy; intraoperative surgery in particular can benefit greatly from real-time display of flow rate information. Moreover, the GPU is an attractive tool for clinical and commercial systems with functional OCT features as well. PMID:22969328
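
    The Doppler computation referred to here is commonly the lag-one autocorrelation (Kasai) estimator averaged over the moving window. A hedged CUDA sketch of that estimator (not the authors' code; data layout and names are ours):

        // Phase shift between adjacent A-scans: argument of the lag-one
        // autocorrelation, averaged over a window of W depth pixels.
        #include <cuda_runtime.h>
        #include <cuComplex.h>
        #include <math.h>

        __global__ void doppler_phase(const cuFloatComplex* img, // A-scan-major:
                                      int depth, int nLines,     // line t at img+t*depth
                                      int W,
                                      float* phase)              // depth x (nLines-1)
        {
            int z = blockIdx.x*blockDim.x + threadIdx.x;  // depth index
            int t = blockIdx.y;                           // A-scan index
            if (z >= depth || t >= nLines - 1) return;
            float re = 0.f, im = 0.f;
            for (int k = 0; k < W; ++k) {                 // moving average window
                int zz = min(z + k, depth - 1);
                cuFloatComplex a = img[t*depth + zz];
                cuFloatComplex b = img[(t+1)*depth + zz];
                cuFloatComplex c = cuCmulf(cuConjf(a), b); // conj(A_t) * A_{t+1}
                re += cuCrealf(c);
                im += cuCimagf(c);
            }
            phase[t*depth + z] = atan2f(im, re); // proportional to axial velocity
        }

    A launch such as doppler_phase<<<dim3((depth+255)/256, nLines-1), 256>>>(...) assigns one thread per depth pixel and one block row per A-scan pair.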

  12. Mobile Monitoring Data Processing & Analysis Strategies

    EPA Science Inventory

    The development of portable, high-time resolution instruments for measuring the concentrations of a variety of air pollutants has made it possible to collect data while in motion. This strategy, known as mobile monitoring, involves mounting air sensors on variety of different pla...

  13. Mobile Monitoring Data Processing and Analysis Strategies

    EPA Science Inventory

    The development of portable, high-time resolution instruments for measuring the concentrations of a variety of air pollutants has made it possible to collect data while in motion. This strategy, known as mobile monitoring, involves mounting air sensors on variety of different pla...

  14. VACTIV: A graphical dialog based program for an automatic processing of line and band spectra

    NASA Astrophysics Data System (ADS)

    Zlokazov, V. B.

    2013-05-01

    The program VACTIV (Visual ACTIV) has been developed for the automatic analysis of spectrum-like distributions, in particular gamma-ray or alpha spectra, and is a standard graphical-dialog-based Windows XX application driven by menu, mouse, and keyboard. On the one hand, it is a conversion of the existing Fortran program ACTIV [1] to the DELPHI language; on the other hand, it is a transformation of the sequential syntax of Fortran programming into a new object-oriented style based on the organization of event interactions. New features implemented in the algorithms of both versions are the following: as the peak model, either an analytical function or a graphical curve can be used; the peak search algorithm recognizes not only Gaussian peaks but also peaks of irregular shape, both narrow (2-4 channels) and broad (50-100 channels); and the regularization technique in the fitting guarantees a stable solution in the most complicated cases of strongly overlapping or weak peaks. The graphical dialog interface of VACTIV is much more convenient than the batch mode of ACTIV. [1] V.B. Zlokazov, Computer Physics Communications 28 (1982) 27-37.
    NEW VERSION PROGRAM SUMMARY
    Program Title: VACTIV
    Catalogue identifier: ABAC_v2_0
    Licensing provisions: none
    Programming language: DELPHI 5-7 Pascal
    Computer: IBM PC series
    Operating system: Windows XX
    RAM: 1 MB
    Keywords: Nuclear physics, spectrum decomposition, least squares analysis, graphical dialog, object-oriented programming
    Classification: 17.6
    Catalogue identifier of previous version: ABAC_v1_0
    Journal reference of previous version: Comput. Phys. Commun. 28 (1982) 27
    Does the new version supersede the previous version?: Yes
    Nature of problem: Program VACTIV is intended for the precise analysis of arbitrary spectrum-like distributions, e.g., gamma-ray and X-ray spectra, and allows the user to carry out the full cycle of automatic processing of such spectra, i.e. calibration, automatic peak search

  16. Large-scale analytical Fourier transform of photomask layouts using graphics processing units

    NASA Astrophysics Data System (ADS)

    Sakamoto, Julia A.

    2015-10-01

    Compensation of lens-heating effects during the exposure scan in an optical lithographic system requires knowledge of the heating profile in the pupil of the projection lens. A necessary component in the accurate estimation of this profile is the total integrated distribution of light, which relies on the squared modulus of the Fourier transform (FT) of the photomask layout for individual process layers. The most common approach, which requires a layout representation in pixelated image format, is to compute the FT numerically via the fast Fourier transform (FFT). However, the file size for a standard 26-mm × 33-mm mask with 5-nm pixels is an overwhelming 137 TB in single precision; the data importing process alone, prior to FFT computation, can render this method highly impractical. A more feasible solution is to handle layout data in a highly compact format with vertex locations of mask features (polygons), which correspond to elements in an integrated circuit, as well as pattern symmetries and repetitions (e.g., GDSII format). Provided the polygons can be decomposed into shapes for which analytical FT expressions are possible, the analytical approach dramatically reduces computation time and alleviates the burden of importing extensive mask data. Algorithms have been developed for importing and interpreting hierarchical layout data and computing the analytical FT on a graphics processing unit (GPU) for rapid parallel processing, not assuming incoherent imaging. Testing was performed on the active layer of a 392-μm × 297-μm virtual chip test structure with 43 substructures distributed over six hierarchical levels. The factors of improvement of the analytical over the numerical approach for importing layout data, performing CPU-GPU memory transfers, and executing the FT on a single NVIDIA Tesla K20X GPU were 1.6×10^4, 4.9×10^3, and 3.8×10^3, respectively. Various ideas for algorithm enhancements will be discussed.
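
    The analytical route works because the FT of each primitive shape has a closed form; for an axis-aligned rectangle of width w and height h centered at (cx, cy) it is w·h·sinc(w·fx)·sinc(h·fy)·exp(-2πi(fx·cx + fy·cy)). A hedged CUDA sketch that accumulates this sum directly, one thread per frequency sample (names are illustrative; the real code also exploits hierarchy and symmetry):

        // Analytical mask FT from a rectangle decomposition: no rasterization,
        // each thread evaluates the closed-form sum at one frequency sample.
        #include <cuda_runtime.h>
        #include <math.h>

        #define PI_F 3.14159265358979f

        __device__ float sincf_(float x) {           // sin(pi x)/(pi x)
            float px = PI_F * x;
            return (fabsf(px) < 1e-6f) ? 1.0f : sinf(px)/px;
        }

        __global__ void mask_ft(const float4* rect,  // x=cx, y=cy, z=w, w=h
                                int nRects,
                                const float* fx, const float* fy, int nFreq,
                                float2* F)           // complex spectrum samples
        {
            int i = blockIdx.x*blockDim.x + threadIdx.x;
            if (i >= nFreq) return;
            float u = fx[i], v = fy[i];
            float re = 0.f, im = 0.f;
            for (int r = 0; r < nRects; ++r) {
                float4 q = rect[r];
                float amp = q.z * q.w * sincf_(q.z*u) * sincf_(q.w*v);
                float ph  = -2.0f*PI_F*(u*q.x + v*q.y); // center-offset phase
                re += amp * cosf(ph);
                im += amp * sinf(ph);
            }
            F[i] = make_float2(re, im);
        }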

  17. Process and Object Interpretations of Vector Magnitude Mediated by Use of the Graphics Calculator.

    ERIC Educational Resources Information Center

    Forster, Patricia

    2000-01-01

    Analyzes the development of one student's understanding of vector magnitude and how her problem solving was mediated by use of the absolute value graphics calculator function. (Contains 35 references.) (Author/ASK)

  18. Seismic interpretation using Support Vector Machines implemented on Graphics Processing Units

    SciTech Connect

    Kuzma, H A; Rector, J W; Bremer, D

    2006-06-22

    Support Vector Machines (SVMs) estimate lithologic properties of rock formations from seismic data by interpolating between known models using synthetically generated model/data pairs. SVMs are related to kriging and radial basis function neural networks. In our study, we train an SVM to approximate an inverse to the Zoeppritz equations. Training models are sampled from distributions constructed from well-log statistics. Training data is computed via a physically realistic forward modeling algorithm. In our experiments, each training data vector is a set of seismic traces similar to a 2-d image. The SVM returns a model given by a weighted comparison of the new data to each training data vector. The method of comparison is given by a kernel function which implicitly transforms data into a high-dimensional feature space and performs a dot-product. The feature space of a Gaussian kernel is made up of sines and cosines and so is appropriate for band-limited seismic problems. Training an SVM involves estimating a set of weights from the training model/data pairs. It is designed to be an easy problem; at worst it is a quadratic programming problem on the order of the size of the training set. By implementing the slowest part of our SVM algorithm on a graphics processing unit (GPU), we improve the speed of the algorithm by two orders of magnitude. Our SVM/GPU combination achieves results that are similar to those of conventional iterative inversion in fractions of the time.
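
    Evaluating a trained SVM is itself an embarrassingly parallel sum over support vectors, which is the part that benefits most from the GPU. A hedged CUDA sketch of Gaussian-kernel prediction (illustrative regression form; not the authors' implementation):

        // f(x) = sum_j w_j * exp(-gamma * ||x - s_j||^2) + bias,
        // evaluated for many data vectors at once (one thread per vector).
        #include <cuda_runtime.h>
        #include <math.h>

        __global__ void svm_predict(const float* x,  // nTest x dim test vectors
                                    const float* sv, // nSV  x dim support vectors
                                    const float* w,  // nSV weights (alpha * y)
                                    int nTest, int nSV, int dim,
                                    float gamma, float bias,
                                    float* f)
        {
            int i = blockIdx.x*blockDim.x + threadIdx.x;
            if (i >= nTest) return;
            float acc = bias;
            for (int j = 0; j < nSV; ++j) {
                float d2 = 0.0f;
                for (int k = 0; k < dim; ++k) {
                    float d = x[i*dim + k] - sv[j*dim + k];
                    d2 += d * d;
                }
                acc += w[j] * expf(-gamma * d2);  // Gaussian kernel comparison
            }
            f[i] = acc;
        }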

  19. High-throughput Characterization of Porous Materials Using Graphics Processing Units

    SciTech Connect

    Kim, Jihan; Martin, Richard L.; Ruebel, Oliver; Haranczyk, Maciej; Smit, Berend

    2012-03-19

    We have developed a high-throughput graphics processing unit (GPU) code that can characterize a large database of crystalline porous materials. In our algorithm, the GPU is utilized to accelerate energy grid calculations where the grid values represent interactions (i.e., Lennard-Jones + Coulomb potentials) between gas molecules (i.e., CH4 and CO2) and the material's framework atoms. Using a parallel flood fill CPU algorithm, inaccessible regions inside the framework structures are identified and blocked based on their energy profiles. Finally, we compute the Henry coefficients and heats of adsorption through statistical Widom insertion Monte Carlo moves in the domain restricted to the accessible space. The code offers significant speedup over a single-core CPU code and allows us to characterize a set of porous materials at least an order of magnitude larger than those considered in earlier studies. For structures selected by such a prescreening algorithm, full adsorption isotherms can be calculated by conducting multiple grand canonical Monte Carlo simulations concurrently within the GPU.
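
    The grid kernel pairs each grid point with every framework atom. A minimal CUDA sketch of that interaction sum, assuming kcal/mol units and omitting the periodic-boundary handling a real code needs (all names are illustrative):

        // Lennard-Jones + Coulomb energy of a probe at each grid point,
        // summed over all framework atoms (one thread per grid point).
        #include <cuda_runtime.h>
        #include <math.h>

        __global__ void energy_grid(const float4* atom,  // x,y,z = position, w = charge
                                    const float2* lj,    // x = epsilon, y = sigma
                                    int nAtoms,
                                    const float4* gridPt,
                                    int nPts, float qProbe,
                                    float* energy)
        {
            int g = blockIdx.x*blockDim.x + threadIdx.x;
            if (g >= nPts) return;
            float4 p = gridPt[g];
            float e = 0.0f;
            for (int a = 0; a < nAtoms; ++a) {
                float4 at = atom[a];
                float dx = p.x-at.x, dy = p.y-at.y, dz = p.z-at.z;
                float r2 = dx*dx + dy*dy + dz*dz;
                float inv2 = lj[a].y*lj[a].y / r2;            // (sigma/r)^2
                float inv6 = inv2*inv2*inv2;
                e += 4.0f*lj[a].x*(inv6*inv6 - inv6);         // Lennard-Jones
                e += 332.0637f * qProbe * at.w * rsqrtf(r2);  // Coulomb, kcal/mol
            }
            energy[g] = e;
        }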

  20. Accelerated Molecular Dynamics Simulations with the AMOEBA Polarizable Force Field on Graphics Processing Units

    PubMed Central

    2013-01-01

    The accelerated molecular dynamics (aMD) method has recently been shown to enhance the sampling of biomolecules in molecular dynamics (MD) simulations, often by several orders of magnitude. Here, we describe an implementation of the aMD method for the OpenMM application layer that takes full advantage of graphics processing unit (GPU) computing. The aMD method is shown to work in combination with the AMOEBA polarizable force field (AMOEBA-aMD), allowing the simulation of long time-scale events with a polarizable force field. Benchmarks are provided to show that the AMOEBA-aMD method is efficiently implemented and produces accurate results in its standard parametrization. For the BPTI protein, we demonstrate that the protein structure described with AMOEBA remains stable even on the extended time scales accessed at high levels of acceleration. For the DNA repair metalloenzyme endonuclease IV, we show that the use of the AMOEBA force field is a significant improvement over fixed-charge models for describing the enzyme active site. The new AMOEBA-aMD method is publicly available (http://wiki.simtk.org/openmm/VirtualRepository) and promises to be interesting for studying complex systems that can benefit from both the use of a polarizable force field and enhanced sampling. PMID:24634618
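
    The aMD boost itself has a simple closed form: when the potential V falls below a threshold E, it is raised by ΔV = (E - V)^2 / (α + E - V), and the forces are scaled by d(V + ΔV)/dV = α^2 / (α + E - V)^2. A hedged device-function sketch of just this arithmetic (OpenMM's actual implementation differs):

        // aMD boost: modified potential and force scale factor for V < E.
        #include <cuda_runtime.h>

        __device__ void amd_boost(float V, float E, float alpha,
                                  float* Vboosted, float* forceScale)
        {
            if (V >= E) {                 // above threshold: dynamics unmodified
                *Vboosted   = V;
                *forceScale = 1.0f;
                return;
            }
            float diff  = E - V;
            float denom = alpha + diff;
            *Vboosted   = V + diff*diff / denom;
            // d(V + DeltaV)/dV simplifies to alpha^2 / (alpha + E - V)^2
            *forceScale = (alpha*alpha) / (denom*denom);
        }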

  1. Accelerating image reconstruction in three-dimensional optoacoustic tomography on graphics processing units

    PubMed Central

    Wang, Kun; Huang, Chao; Kao, Yu-Jiun; Chou, Cheng-Ying; Oraevsky, Alexander A.; Anastasio, Mark A.

    2013-01-01

    Purpose: Optoacoustic tomography (OAT) is inherently a three-dimensional (3D) inverse problem. However, most studies of OAT image reconstruction still employ two-dimensional imaging models. One important reason is because 3D image reconstruction is computationally burdensome. The aim of this work is to accelerate existing image reconstruction algorithms for 3D OAT by use of parallel programming techniques. Methods: Parallelization strategies are proposed to accelerate a filtered backprojection (FBP) algorithm and two different pairs of projection/backprojection operations that correspond to two different numerical imaging models. The algorithms are designed to fully exploit the parallel computing power of graphics processing units (GPUs). In order to evaluate the parallelization strategies for the projection/backprojection pairs, an iterative image reconstruction algorithm is implemented. Computer simulation and experimental studies are conducted to investigate the computational efficiency and numerical accuracy of the developed algorithms. Results: The GPU implementations improve the computational efficiency by factors of 1000, 125, and 250 for the FBP algorithm and the two pairs of projection/backprojection operators, respectively. Accurate images are reconstructed by use of the FBP and iterative image reconstruction algorithms from both computer-simulated and experimental data. Conclusions: Parallelization strategies for 3D OAT image reconstruction are proposed for the first time. These GPU-based implementations significantly reduce the computational time for 3D image reconstruction, complementing our earlier work on 3D OAT iterative image reconstruction. PMID:23387778

  2. The application of projected conjugate gradient solvers on graphical processing units

    SciTech Connect

    Lin, Youzuo; Renaut, Rosemary

    2011-01-26

    Graphical processing units introduce the capability for large-scale computation at the desktop. The numerical results presented verify that the efficiencies and accuracies of basic linear algebra subroutines of all levels are comparable when implemented in CUDA and Jacket, but experimental results demonstrate that the level-three basic linear algebra subroutines offer the greatest potential for improving the efficiency of basic numerical algorithms. We consider the solution of a set of linear equations with multiple right-hand sides using Krylov subspace-based solvers. For the multiple right-hand-side case, it is more efficient to make use of a block implementation of the conjugate gradient algorithm rather than to solve each system independently. Jacket is used for the implementation. Furthermore, including projection from one system to another improves efficiency. A relevant example, for which simulated results are provided, is the reconstruction of a three-dimensional medical image volume acquired from a positron emission tomography scanner. Efficiency of the reconstruction is improved by using projection across nearby slices.

  3. Simulation of Coarse-Grained Protein-Protein Interactions with Graphics Processing Units.

    PubMed

    Tunbridge, Ian; Best, Robert B; Gain, James; Kuttel, Michelle M

    2010-11-01

    We report a hybrid parallel central and graphics processing units (CPU-GPU) implementation of a coarse-grained model for replica exchange Monte Carlo (REMC) simulations of protein assemblies. We describe the design, optimization, validation, and benchmarking of our algorithms, particularly the parallelization strategy, which is specific to the requirements of GPU hardware. Performance evaluation of our hybrid implementation shows scaled speedup as compared to a single-core CPU; reference simulations of small 100 residue proteins have a modest speedup of 4, while large simulations with thousands of residues are up to 1400 times faster. Importantly, the combination of coarse-grained models with highly parallel GPU hardware vastly increases the length- and time-scales accessible for protein simulation, making it possible to simulate much larger systems of interacting proteins than have previously been attempted. As a first step toward the simulation of the assembly of an entire viral capsid, we have demonstrated that the chosen coarse-grained model, together with REMC sampling, is capable of identifying the correctly bound structure, for a pair of fragments from the human hepatitis B virus capsid. Our parallel solution can easily be generalized to other interaction functions and other types of macromolecules and has implications for the parallelization of similar N-body problems that require random access lookups. PMID:26617104

  4. Graphics processing unit accelerated one-dimensional blood flow computation in the human arterial tree.

    PubMed

    Itu, Lucian; Sharma, Puneet; Kamen, Ali; Suciu, Constantin; Comaniciu, Dorin

    2013-12-01

    One-dimensional blood flow models have been used extensively for computing pressure and flow waveforms in the human arterial circulation. We propose an improved numerical implementation based on a graphics processing unit (GPU) for accelerating the execution time of the one-dimensional model. A novel parallel hybrid CPU-GPU algorithm with compact copy operations (PHCGCC) and a parallel GPU-only (PGO) algorithm are developed, which are compared against previously introduced PHCG versions, a single-threaded CPU-only algorithm, and a multi-threaded CPU-only algorithm. Different second-order numerical schemes (Lax-Wendroff and Taylor series) are evaluated for the numerical solution of the one-dimensional model, and the computational setups include physiologically motivated non-periodic (Windkessel) and periodic (structured tree) boundary conditions (BC) and elastic and viscoelastic wall laws. Both the PHCGCC and the PGO implementations improved the execution time significantly. The speed-up values over the single-threaded CPU-only implementation range from 5.26× to 8.10×, whereas the speed-up values over the multi-threaded CPU-only implementation range from 1.84× to 4.02×. The PHCGCC algorithm performs best for an elastic wall law with non-periodic BC and for viscoelastic wall laws, whereas the PGO algorithm performs best for an elastic wall law with periodic BC. PMID:24009129
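
    Of the two second-order schemes compared, Lax-Wendroff is the more compact to illustrate. A hedged CUDA sketch of its update, shown for linear advection u_t + a·u_x = 0 rather than the full nonlinear blood-flow system (names are ours):

        // One Lax-Wendroff time step for linear advection, one thread per node.
        #include <cuda_runtime.h>

        __global__ void lax_wendroff(const float* u, float* uNew,
                                     int n, float nu)  // nu = a*dt/dx (Courant number)
        {
            int i = blockIdx.x*blockDim.x + threadIdx.x;
            if (i <= 0 || i >= n-1) return;            // boundaries handled separately
            uNew[i] = u[i] - 0.5f*nu*(u[i+1] - u[i-1])
                           + 0.5f*nu*nu*(u[i+1] - 2.f*u[i] + u[i-1]);
        }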

  5. Graphics processing unit (GPU)-based computation of heat conduction in thermally anisotropic solids

    NASA Astrophysics Data System (ADS)

    Nahas, C. A.; Balasubramaniam, Krishnan; Rajagopal, Prabhu

    2013-01-01

    Numerical modeling of anisotropic media is a computationally intensive task, since the physical properties of the medium differ in different directions, adding complexity to the field problem. Widely used in the aerospace industry because of their lightweight nature, composite materials are a very good example of thermally anisotropic media. With advancements in video gaming technology, parallel processors are much cheaper today, and accessibility to higher-end graphics processing devices has increased dramatically over the past couple of years. Since these massively parallel GPUs are very good at handling floating point arithmetic, they provide a new platform for engineers and scientists to accelerate their numerical models using commodity hardware. In this paper we implement a parallel finite difference model of thermal diffusion through anisotropic media using NVIDIA CUDA (Compute Unified Device Architecture). We use the NVIDIA GeForce GTX 560 Ti as our primary computing device, which consists of 384 CUDA cores clocked at 1645 MHz, with a standard desktop PC as the host platform. We compare the results against a standard CPU implementation for accuracy and speed and draw implications for simulation using the GPU paradigm.
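
    For a 2-D orthotropic case, the finite difference update simply carries a different diffusion coefficient along each principal axis. A minimal CUDA sketch of one explicit time step under that assumption (the paper's model and names may differ):

        // Explicit FD step for 2-D anisotropic heat conduction:
        // ax and ay are alpha_x*dt/dx^2 and alpha_y*dt/dy^2.
        #include <cuda_runtime.h>

        __global__ void heat_step(const float* T, float* Tnew,
                                  int nx, int ny,
                                  float ax, float ay)
        {
            int i = blockIdx.x*blockDim.x + threadIdx.x;
            int j = blockIdx.y*blockDim.y + threadIdx.y;
            if (i <= 0 || j <= 0 || i >= nx-1 || j >= ny-1) return; // fixed boundaries
            int id = j*nx + i;
            float t = T[id];
            Tnew[id] = t + ax*(T[id-1]  - 2.f*t + T[id+1])    // x-direction flux
                         + ay*(T[id-nx] - 2.f*t + T[id+nx]);  // y-direction flux
        }

    Explicit stability requires ax + ay ≤ 1/2 for this scheme.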

  6. Parallel design of JPEG-LS encoder on graphics processing units

    NASA Astrophysics Data System (ADS)

    Duan, Hao; Fang, Yong; Huang, Bormin

    2012-01-01

    With recent technical advances in graphics processing units (GPUs), GPUs have outperformed CPUs in terms of compute capability and memory bandwidth. Many successful GPU applications to high performance computing have been reported. JPEG-LS is an ISO/IEC standard for lossless image compression which utilizes adaptive context modeling and run-length coding to improve the compression ratio. However, adaptive context modeling causes data dependency among adjacent pixels, and the run-length coding has to be performed in a sequential way. Hence, using JPEG-LS to compress large-volume hyperspectral image data is quite time-consuming. We implement an efficient parallel JPEG-LS encoder for lossless hyperspectral compression on an NVIDIA GPU using the compute unified device architecture (CUDA) programming technology. We use the block parallel strategy, as well as such CUDA techniques as coalesced global memory access, parallel prefix sum, and asynchronous data transfer. We also show the relation between GPU speedup and AVIRIS block size, as well as the relation between compression ratio and AVIRIS block size. When AVIRIS images are divided into blocks, each with 64×64 pixels, we gain the best GPU performance with a 26.3x speedup over the original CPU code.

  7. Acceleration of High Angular Momentum Electron Repulsion Integrals and Integral Derivatives on Graphics Processing Units.

    PubMed

    Miao, Yipu; Merz, Kenneth M

    2015-04-14

    We present an efficient implementation of ab initio self-consistent field (SCF) energy and gradient calculations that run on Compute Unified Device Architecture (CUDA)-enabled graphical processing units (GPUs) using recurrence relations. We first discuss the machine-generated code that calculates the electron-repulsion integrals (ERIs) for different ERI types. Next we describe the porting of the SCF gradient calculation to GPUs, which results in an acceleration of the computation of the first-order derivatives of the ERIs. However, only s, p, and d ERIs and s and p derivatives could be executed simultaneously on GPUs using the current version of CUDA and generation of NVIDIA GPUs with a previously described algorithm [Miao and Merz, J. Chem. Theory Comput. 2013, 9, 965-976]. Hence, we developed an algorithm to compute f-type ERIs and d-type ERI derivatives on GPUs. Our benchmarks show that GPU-enabled ERI and ERI derivative computation yielded speedups of 10-18 times relative to traditional CPU execution. An accuracy analysis using double-precision calculations demonstrates that the overall accuracy is satisfactory for most applications. PMID:26574356

  8. Graphic processing unit accelerated real-time partially coherent beam generator

    NASA Astrophysics Data System (ADS)

    Ni, Xiaolong; Liu, Zhi; Chen, Chunyi; Jiang, Huilin; Fang, Hanhan; Song, Lujun; Zhang, Su

    2016-07-01

    A method of using liquid crystals (LCs) to generate a partially coherent beam in real time is described. An expression for generating a partially coherent beam is given and calculated using a graphics processing unit (GPU), i.e., the GeForce GTX 680. A liquid crystal on silicon (LCOS) with 256 × 256 pixels is used as the partially coherent beam generator (PCBG). An optimization method with partition convolution is used to improve the generating speed of our LC PCBG. The total time needed to generate a random phase map with a coherence width ranging from 0.015 mm to 1.5 mm is less than 2.4 ms for calculation and readout with the GPU; adding the time needed for the CPU to read and send data to the LCOS, together with the response time of the LC PCBG, the real-time partially coherent beam (PCB) generation frequency of our LC PCBG is up to 312 Hz. To our knowledge, it is the first real-time partially coherent beam generator. A series of experiments based on double pinhole interference was performed. The results show that, to generate laser beams with coherence widths of 0.9 mm and 1.5 mm with a mean error of approximately 1%, the required RMS values were 0.021306 and 0.020883 and the required PV values were 0.073576 and 0.072998, respectively.

  9. Density-fitted singles and doubles coupled cluster on graphics processing units

    SciTech Connect

    Sherrill, David; Sumpter, Bobby G; DePrince, III, A. Eugene

    2014-01-01

    We adapt an algorithm for singles and doubles coupled cluster (CCSD) that uses density fitting (DF) or Cholesky decomposition (CD) in the construction and contraction of all electron repulsion integrals (ERIs) for use on heterogeneous compute nodes consisting of a multicore CPU and at least one graphics processing unit (GPU). The use of approximate 3-index ERIs ameliorates two of the major difficulties in designing scientific algorithms for GPUs: (i) the extremely limited global memory on the devices and (ii) the overhead associated with data motion across the PCI bus. For the benzene trimer described by an aug-cc-pVDZ basis set, the use of a single NVIDIA Tesla C2070 (Fermi) GPU accelerates a CD-CCSD computation by a factor of 2.1, relative to the multicore CPU-only algorithm that uses 6 highly efficient Intel Core i7-3930K CPU cores. The use of two Fermis provides an acceleration of 2.89, which is comparable to that observed when using a single NVIDIA Kepler K20c GPU (2.73).

  10. Graphics processing unit (GPU)-accelerated particle filter framework for positron emission tomography image reconstruction.

    PubMed

    Yu, Fengchao; Liu, Huafeng; Hu, Zhenghui; Shi, Pengcheng

    2012-04-01

    As a consequence of the random nature of photon emissions and detections, the data collected by a positron emission tomography (PET) imaging system can be shown to be Poisson distributed. Meanwhile, there have been considerable efforts within the tracer kinetic modeling communities aimed at establishing the relationship between the PET data and the physiological parameters that affect the uptake and metabolism of the tracer. Both statistical and physiological models are important to PET reconstruction. The majority of previous efforts are based on simplified, nonphysical mathematical expressions, such as Poisson modeling of the measured data, which is, on the whole, completed without consideration of the underlying physiology. In this paper, we propose a graphics processing unit (GPU)-accelerated reconstruction strategy that can take both the statistical model and the physiological model into consideration with the aid of state-space evolution equations. The proposed strategy formulates the organ activity distribution through tracer kinetics models and the photon-counting measurements through observation equations, thus making it possible to unify these two constraints into a general framework. In order to accelerate reconstruction, GPU-based parallel computing is introduced. Experiments with Zubal-thorax-phantom data, Monte Carlo simulated phantom data, and real phantom data show the power of the method. Furthermore, thanks to the computing power of the GPU, the reconstruction time is practical for clinical application. PMID:22472843

  11. Fast, multi-channel real-time processing of signals with microsecond latency using graphics processing units

    NASA Astrophysics Data System (ADS)

    Rath, N.; Kato, S.; Levesque, J. P.; Mauel, M. E.; Navratil, G. A.; Peng, Q.

    2014-04-01

    Fast, digital signal processing (DSP) has many applications. Typical hardware options for performing DSP are field-programmable gate arrays (FPGAs), application-specific integrated DSP chips, or general purpose personal computer systems. This paper presents a novel DSP platform that has been developed for feedback control on the HBT-EP tokamak device. The system runs all signal processing exclusively on a Graphics Processing Unit (GPU) to achieve real-time performance with latencies below 8 μs. Signals are transferred into and out of the GPU using PCI Express peer-to-peer direct-memory-access transfers without involvement of the central processing unit or host memory. Tests were performed on the feedback control system of the HBT-EP tokamak using forty 16-bit floating point inputs and outputs each and a sampling rate of up to 250 kHz. Signals were digitized by a D-TACQ ACQ196 module, processing done on an NVIDIA GTX 580 GPU programmed in CUDA, and analog output was generated by D-TACQ AO32CPCI modules.

  12. Fast, multi-channel real-time processing of signals with microsecond latency using graphics processing units.

    PubMed

    Rath, N; Kato, S; Levesque, J P; Mauel, M E; Navratil, G A; Peng, Q

    2014-04-01

    Fast, digital signal processing (DSP) has many applications. Typical hardware options for performing DSP are field-programmable gate arrays (FPGAs), application-specific integrated DSP chips, or general purpose personal computer systems. This paper presents a novel DSP platform that has been developed for feedback control on the HBT-EP tokamak device. The system runs all signal processing exclusively on a Graphics Processing Unit (GPU) to achieve real-time performance with latencies below 8 μs. Signals are transferred into and out of the GPU using PCI Express peer-to-peer direct-memory-access transfers without involvement of the central processing unit or host memory. Tests were performed on the feedback control system of the HBT-EP tokamak using forty 16-bit floating point inputs and outputs each and a sampling rate of up to 250 kHz. Signals were digitized by a D-TACQ ACQ196 module, processing done on an NVIDIA GTX 580 GPU programmed in CUDA, and analog output was generated by D-TACQ AO32CPCI modules. PMID:24784666

  13. Fast, multi-channel real-time processing of signals with microsecond latency using graphics processing units

    SciTech Connect

    Rath, N.; Levesque, J. P.; Mauel, M. E.; Navratil, G. A.; Peng, Q.; Kato, S.

    2014-04-15

    Fast, digital signal processing (DSP) has many applications. Typical hardware options for performing DSP are field-programmable gate arrays (FPGAs), application-specific integrated DSP chips, or general purpose personal computer systems. This paper presents a novel DSP platform that has been developed for feedback control on the HBT-EP tokamak device. The system runs all signal processing exclusively on a Graphics Processing Unit (GPU) to achieve real-time performance with latencies below 8 μs. Signals are transferred into and out of the GPU using PCI Express peer-to-peer direct-memory-access transfers without involvement of the central processing unit or host memory. Tests were performed on the feedback control system of the HBT-EP tokamak using forty 16-bit floating point inputs and outputs each and a sampling rate of up to 250 kHz. Signals were digitized by a D-TACQ ACQ196 module, processing done on an NVIDIA GTX 580 GPU programmed in CUDA, and analog output was generated by D-TACQ AO32CPCI modules.

  14. Simulating data processing for an Advanced Ion Mobility Mass Spectrometer

    SciTech Connect

    Chavarría-Miranda, Daniel; Clowers, Brian H.; Anderson, Gordon A.; Belov, Mikhail E.

    2007-11-03

    We have designed and implemented a Cray XD-1-based simulation of data capture and signal processing for an advanced Ion Mobility mass spectrometer (Hadamard transform Ion Mobility). Our simulation is a hybrid application that uses both an FPGA component and a CPU-based software component to simulate Ion Mobility mass spectrometry data processing. The FPGA component includes data capture and accumulation, as well as a more sophisticated deconvolution algorithm based on a PNNL-developed enhancement to standard Hadamard transform Ion Mobility spectrometry. The software portion is in charge of streaming data to the FPGA and collecting results. We expect the computational and memory addressing logic of the FPGA component to be portable to an instrument-attached FPGA board that can be interfaced with a Hadamard transform Ion Mobility mass spectrometer.
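
    The deconvolution at the heart of Hadamard transform Ion Mobility amounts to applying a fast Walsh-Hadamard transform to the multiplexed signal. A hedged CUDA sketch for short segments that fit in one thread block (the PNNL FPGA pipeline is, of course, organized very differently):

        // In-place fast Walsh-Hadamard transform of one segment in shared
        // memory; dividing by n makes it the normalized inverse transform.
        #include <cuda_runtime.h>

        __global__ void fwht(float* data, int n)   // n = power of two, <= 1024
        {
            extern __shared__ float s[];
            int t = threadIdx.x;
            if (t < n) s[t] = data[t];
            __syncthreads();
            for (int h = 1; h < n; h <<= 1) {
                // butterfly: combine pairs whose indices differ by h
                if (t < n/2) {
                    int lo = (t / h) * 2 * h + (t % h);
                    float a = s[lo], b = s[lo + h];
                    s[lo]     = a + b;
                    s[lo + h] = a - b;
                }
                __syncthreads();
            }
            if (t < n) data[t] = s[t] / n;
        }

    Launched as fwht<<<1, n, n*sizeof(float)>>>(d_data, n) for a power-of-two n up to 1024.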

  15. Interactive Computing and Graphics in Undergraduate Digital Signal Processing. Microcomputing Working Paper Series F 84-9.

    ERIC Educational Resources Information Center

    Onaral, Banu; And Others

    This report describes the development of a Drexel University electrical and computer engineering course on digital filter design that used interactive computing and graphics, and was one of three courses in a senior-level sequence on digital signal processing (DSP). Interactive and digital analysis/design routines and the interconnection of these…

  16. Note: Quasi-real-time analysis of dynamic near field scattering data using a graphics processing unit

    NASA Astrophysics Data System (ADS)

    Cerchiari, G.; Croccolo, F.; Cardinaux, F.; Scheffold, F.

    2012-10-01

    We present an implementation of the analysis of dynamic near field scattering (NFS) data using a graphics processing unit. We introduce an optimized data management scheme thereby limiting the number of operations required. Overall, we reduce the processing time from hours to minutes, for typical experimental conditions. Previously the limiting step in such experiments, the processing time is now comparable to the data acquisition time. Our approach is applicable to various dynamic NFS methods, including shadowgraph, Schlieren and differential dynamic microscopy.

  17. Mobile Ultrasound Plane Wave Beamforming on iPhone or iPad using Metal- based GPU Processing

    NASA Astrophysics Data System (ADS)

    Hewener, Holger J.; Tretbar, Steffen H.

    Mobile and cost-effective ultrasound devices are being used in point-of-care scenarios and the trauma room. To reduce the costs of such devices, we have already presented the possibilities of consumer devices like the Apple iPad for full signal processing of raw data for ultrasound image generation. By using technologies like plane wave imaging to generate a full image with only one excitation/reception event, the acquisition times and power consumption of ultrasound imaging can be reduced for low-power mobile devices based on consumer electronics, realizing the transition from FPGA- or ASIC-based beamforming to more flexible software beamforming. The massively parallel beamforming processing can be done with the Apple framework "Metal" for advanced graphics and general-purpose GPU processing on the iOS platform. We were able to integrate the beamforming reconstruction into our mobile ultrasound processing application with imaging rates up to 70 Hz on iPad Air 2 hardware.
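
    Plane-wave delay-and-sum beamforming assigns one thread per image pixel: each thread picks, for every channel, the RF sample at the round-trip delay (plane wave down, spherical echo back) and sums. The paper does this in Apple's Metal; the CUDA sketch below only illustrates the arithmetic (names are ours, and apodization and interpolation are omitted):

        // Delay-and-sum for a 0-degree plane wave transmit, one thread per pixel.
        #include <cuda_runtime.h>
        #include <math.h>

        __global__ void das_beamform(const float* rf,   // nChan x nSamples RF data
                                     int nChan, int nSamples,
                                     const float* elemX, // element x-positions (m)
                                     float fs, float c,  // sample rate (Hz), speed (m/s)
                                     const float* px, const float* pz, // pixel coords (m)
                                     int nPix, float* image)
        {
            int p = blockIdx.x*blockDim.x + threadIdx.x;
            if (p >= nPix) return;
            float x = px[p], z = pz[p];
            float sum = 0.0f;
            for (int ch = 0; ch < nChan; ++ch) {
                float dx = x - elemX[ch];
                float tTx = z / c;                   // plane wave reaches depth z
                float tRx = sqrtf(dx*dx + z*z) / c;  // echo back to this element
                int s = (int)((tTx + tRx) * fs + 0.5f);
                if (s < nSamples) sum += rf[ch*nSamples + s];
            }
            image[p] = fabsf(sum);  // envelope detection would normally follow
        }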

  18. A GRAPHICS PROCESSING UNIT-ENABLED, HIGH-RESOLUTION COSMOLOGICAL MICROLENSING PARAMETER SURVEY

    SciTech Connect

    Bate, N. F.; Fluke, C. J.

    2012-01-10

    In the era of synoptic surveys, the number of known gravitationally lensed quasars is set to increase by over an order of magnitude. These new discoveries will enable a move from single-quasar studies to investigations of statistical samples, presenting new opportunities to test theoretical models for the structure of quasar accretion disks and broad emission line regions (BELRs). As one crucial step in preparing for this influx of new lensed systems, a large-scale exploration of microlensing convergence-shear parameter space is warranted, requiring the computation of O(10^5) high-resolution magnification maps. Based on properties of known lensed quasars, and expectations from accretion disk/BELR modeling, we identify regions of convergence-shear parameter space, map sizes, smooth matter fractions, and pixel resolutions that should be covered. We describe how the computationally time-consuming task of producing ~290,000 magnification maps with sufficient resolution (10,000^2 pixels per map) to probe scales from the inner edge of the accretion disk to the BELR can be achieved in ~400 days on a 100 teraflop/s high-performance computing facility, where the processing performance is achieved with graphics processing units. We illustrate a use-case for the parameter survey by investigating the effects of varying the lens macro-model on accretion disk constraints in the lensed quasar Q2237+0305. We find that although all constraints are consistent within their current error bars, models with more densely packed microlenses tend to predict shallower accretion disk radial temperature profiles. With a large parameter survey such as the one described here, such systematics on microlensing measurements could be fully explored.

  19. In-Situ Statistical Analysis of Autotune Simulation Data using Graphical Processing Units

    SciTech Connect

    Ranjan, Niloo; Sanyal, Jibonananda; New, Joshua Ryan

    2013-08-01

    Developing accurate building energy simulation models to assist energy efficiency at speed and scale is one of the research goals of the Whole-Building and Community Integration group, which is a part of the Building Technologies Research and Integration Center (BTRIC) at Oak Ridge National Laboratory (ORNL). The aim of the Autotune project is to speed up the automated calibration of building energy models to match measured utility or sensor data. The workflow of this project takes input parameters and runs EnergyPlus simulations on Oak Ridge Leadership Computing Facility's (OLCF) computing resources such as Titan, the world's second fastest supercomputer. Multiple simulations run in parallel on nodes having 16 processors each and a Graphics Processing Unit (GPU). Each node produces a 5.7 GB output file comprising 256 files from 64 simulations. Four types of output data, covering monthly, daily, hourly, and 15-minute time steps for each annual simulation, are produced. A total of 270 TB+ of data has been produced. In this project, the simulation data is statistically analyzed in situ using GPUs while annual simulations are being computed on the traditional processors. Titan, with its recent addition of 18,688 Compute Unified Device Architecture (CUDA)-capable NVIDIA GPUs, has greatly extended its capability for massively parallel data processing. CUDA is used along with C/MPI to calculate statistical metrics such as sum, mean, variance, and standard deviation, leveraging GPU acceleration. The workflow developed in this project produces statistical summaries of the data, which reduces by multiple orders of magnitude the time and amount of data that needs to be stored. These statistical capabilities are anticipated to be useful for sensitivity analysis of EnergyPlus simulations.
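
    Sum, mean, variance, and standard deviation can all be derived from two accumulators, the sum and the sum of squares, gathered in a single reduction pass. A hedged CUDA sketch of that pass (the project's C/MPI + CUDA code is larger; names are illustrative):

        // Shared-memory reduction producing the sum and sum of squares;
        // launch with 256 threads per block, outputs zero-initialized.
        #include <cuda_runtime.h>

        __global__ void sum_sumsq(const float* x, int n,
                                  float* sum, float* sumSq)
        {
            __shared__ float s1[256], s2[256];
            int t = threadIdx.x;
            int i = blockIdx.x*blockDim.x + t;
            float v = (i < n) ? x[i] : 0.0f;
            s1[t] = v;
            s2[t] = v*v;
            __syncthreads();
            for (int stride = blockDim.x/2; stride > 0; stride >>= 1) {
                if (t < stride) { s1[t] += s1[t+stride]; s2[t] += s2[t+stride]; }
                __syncthreads();
            }
            if (t == 0) {                 // one atomic per block
                atomicAdd(sum,   s1[0]);
                atomicAdd(sumSq, s2[0]);
            }
        }
        // Host side: mean = sum/n; variance = sumSq/n - mean*mean.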

  20. Fast Analysis of Molecular Dynamics Trajectories with Graphics Processing Units—Radial Distribution Function Histogramming

    PubMed Central

    Levine, Benjamin G.; Stone, John E.; Kohlmeyer, Axel

    2011-01-01

    The calculation of radial distribution functions (RDFs) from molecular dynamics trajectory data is a common and computationally expensive analysis task. The rate limiting step in the calculation of the RDF is building a histogram of the distance between atom pairs in each trajectory frame. Here we present an implementation of this histogramming scheme for multiple graphics processing units (GPUs). The algorithm features a tiling scheme to maximize the reuse of data at the fastest levels of the GPU’s memory hierarchy and dynamic load balancing to allow high performance on heterogeneous configurations of GPUs. Several versions of the RDF algorithm are presented, utilizing the specific hardware features found on different generations of GPUs. We take advantage of larger shared memory and atomic memory operations available on state-of-the-art GPUs to accelerate the code significantly. The use of atomic memory operations allows the fast, limited-capacity on-chip memory to be used much more efficiently, resulting in a fivefold increase in performance compared to the version of the algorithm without atomic operations. The ultimate version of the algorithm running in parallel on four NVIDIA GeForce GTX 480 (Fermi) GPUs was found to be 92 times faster than a multithreaded implementation running on an Intel Xeon 5550 CPU. On this multi-GPU hardware, the RDF between two selections of 1,000,000 atoms each can be calculated in 26.9 seconds per frame. The multi-GPU RDF algorithms described here are implemented in VMD, a widely used and freely available software package for molecular dynamics visualization and analysis. PMID:21547007
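
    The shared-memory atomic strategy the abstract credits with the fivefold gain can be sketched as follows: each block accumulates a private histogram in fast on-chip memory and merges it into the global histogram once at the end. A minimal CUDA illustration of that pattern (VMD's production kernels add tiling and multi-GPU load balancing on top of it; names are ours):

        // Per-block shared-memory pair-distance histogram with atomics; the
        // raw counts are normalized into g(r) on the host afterwards.
        #include <cuda_runtime.h>
        #include <math.h>

        #define NBINS 256

        __global__ void rdf_hist(const float4* selA, int nA,
                                 const float4* selB, int nB,
                                 float rMax, unsigned int* hist)
        {
            __shared__ unsigned int local[NBINS];
            for (int b = threadIdx.x; b < NBINS; b += blockDim.x) local[b] = 0;
            __syncthreads();
            float binScale = NBINS / rMax;
            for (int i = blockIdx.x*blockDim.x + threadIdx.x; i < nA;
                 i += gridDim.x*blockDim.x) {
                float4 a = selA[i];
                for (int j = 0; j < nB; ++j) {
                    float4 b4 = selB[j];
                    float dx = a.x-b4.x, dy = a.y-b4.y, dz = a.z-b4.z;
                    float r = sqrtf(dx*dx + dy*dy + dz*dz);
                    int bin = (int)(r * binScale);
                    if (bin < NBINS) atomicAdd(&local[bin], 1u); // on-chip atomic
                }
            }
            __syncthreads();
            for (int b = threadIdx.x; b < NBINS; b += blockDim.x)
                atomicAdd(&hist[b], local[b]);                   // merge to global
        }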

  1. Fast Analysis of Molecular Dynamics Trajectories with Graphics Processing Units-Radial Distribution Function Histogramming.

    PubMed

    Levine, Benjamin G; Stone, John E; Kohlmeyer, Axel

    2011-05-01

    The calculation of radial distribution functions (RDFs) from molecular dynamics trajectory data is a common and computationally expensive analysis task. The rate limiting step in the calculation of the RDF is building a histogram of the distance between atom pairs in each trajectory frame. Here we present an implementation of this histogramming scheme for multiple graphics processing units (GPUs). The algorithm features a tiling scheme to maximize the reuse of data at the fastest levels of the GPU's memory hierarchy and dynamic load balancing to allow high performance on heterogeneous configurations of GPUs. Several versions of the RDF algorithm are presented, utilizing the specific hardware features found on different generations of GPUs. We take advantage of larger shared memory and atomic memory operations available on state-of-the-art GPUs to accelerate the code significantly. The use of atomic memory operations allows the fast, limited-capacity on-chip memory to be used much more efficiently, resulting in a fivefold increase in performance compared to the version of the algorithm without atomic operations. The ultimate version of the algorithm running in parallel on four NVIDIA GeForce GTX 480 (Fermi) GPUs was found to be 92 times faster than a multithreaded implementation running on an Intel Xeon 5550 CPU. On this multi-GPU hardware, the RDF between two selections of 1,000,000 atoms each can be calculated in 26.9 seconds per frame. The multi-GPU RDF algorithms described here are implemented in VMD, a widely used and freely available software package for molecular dynamics visualization and analysis. PMID:21547007

  2. Fast analysis of molecular dynamics trajectories with graphics processing units-Radial distribution function histogramming

    SciTech Connect

    Levine, Benjamin G.; Stone, John E.; Kohlmeyer, Axel

    2011-05-01

    The calculation of radial distribution functions (RDFs) from molecular dynamics trajectory data is a common and computationally expensive analysis task. The rate limiting step in the calculation of the RDF is building a histogram of the distance between atom pairs in each trajectory frame. Here we present an implementation of this histogramming scheme for multiple graphics processing units (GPUs). The algorithm features a tiling scheme to maximize the reuse of data at the fastest levels of the GPU's memory hierarchy and dynamic load balancing to allow high performance on heterogeneous configurations of GPUs. Several versions of the RDF algorithm are presented, utilizing the specific hardware features found on different generations of GPUs. We take advantage of larger shared memory and atomic memory operations available on state-of-the-art GPUs to accelerate the code significantly. The use of atomic memory operations allows the fast, limited-capacity on-chip memory to be used much more efficiently, resulting in a fivefold increase in performance compared to the version of the algorithm without atomic operations. The ultimate version of the algorithm running in parallel on four NVIDIA GeForce GTX 480 (Fermi) GPUs was found to be 92 times faster than a multithreaded implementation running on an Intel Xeon 5550 CPU. On this multi-GPU hardware, the RDF between two selections of 1,000,000 atoms each can be calculated in 26.9 s per frame. The multi-GPU RDF algorithms described here are implemented in VMD, a widely used and freely available software package for molecular dynamics visualization and analysis.

  3. Quantum Chemistry for Solvated Molecules on Graphical Processing Units Using Polarizable Continuum Models.

    PubMed

    Liu, Fang; Luehr, Nathan; Kulik, Heather J; Martínez, Todd J

    2015-07-14

    The conductor-like polarizable continuum model (C-PCM) with switching/Gaussian smooth discretization is a widely used implicit solvation model in chemical simulations. However, its application in quantum mechanical calculations of large-scale biomolecular systems can be limited by the computational expense of both the gas phase electronic structure and the solvation interaction. We have previously used graphical processing units (GPUs) to accelerate the first of these steps. Here, we extend the use of GPUs to accelerate electronic structure calculations including C-PCM solvation. Implementation on the GPU leads to significant acceleration of the generation of the required integrals for C-PCM. We further propose two strategies to improve the solution of the required linear equations: a dynamic convergence threshold and a randomized block-Jacobi preconditioner. These strategies are not specific to GPUs and are expected to be beneficial for both CPU and GPU implementations. We benchmark the performance of the new implementation using over 20 small proteins in a solvent environment. Using a single GPU, our method evaluates the C-PCM related integrals and their derivatives more than 10× faster than a conventional CPU-based implementation. Our improvements to the linear solver provide a further 3× acceleration. The overall calculations including C-PCM solvation require, typically, 20-40% more effort than their gas phase counterparts for a moderate basis set and molecule surface discretization level. The relative cost of the C-PCM solvation correction decreases as the basis sets and/or cavity radii increase. Therefore, description of solvation with this model should be routine. We also discuss applications to the study of the conformational landscape of an amyloid fibril. PMID:26575750

  4. FLOCKING-BASED DOCUMENT CLUSTERING ON THE GRAPHICS PROCESSING UNIT [Book Chapter

    SciTech Connect

    Charles, J S; Patton, R M; Potok, T E; Cui, X

    2008-01-01

    Analyzing and grouping documents by content is a complex problem. One explored method of solving this problem borrows from nature, imitating the flocking behavior of birds. Each bird represents a single document and flies toward other documents that are similar to it. One limitation of this method of document clustering is its complexity, O(n^2). As the number of documents grows, it becomes increasingly difficult to receive results in a reasonable amount of time. However, flocking behavior, along with most naturally inspired algorithms such as ant colony optimization and particle swarm optimization, is highly parallel and has experienced improved performance on expensive cluster computers. In the last few years, the graphics processing unit (GPU) has received attention for its ability to solve highly-parallel and semi-parallel problems much faster than the traditional sequential processor. Some applications see a huge increase in performance on this new platform. The cost of these high-performance devices is also marginal when compared with the price of cluster machines. In this paper, we have conducted research to exploit this architecture and apply its strengths to the document flocking problem. Our results highlight the potential benefit the GPU brings to all naturally inspired algorithms. Using the CUDA platform from NVIDIA®, we developed a document flocking implementation to be run on the NVIDIA® GEFORCE 8800. Additionally, we developed a similar but sequential implementation of the same algorithm to be run on a desktop CPU. We tested the performance of each on groups of news articles ranging in size from 200 to 3,000 documents. The results of these tests were very significant. Performance gains ranged from three to nearly five times improvement of the GPU over the CPU implementation. This dramatic improvement in runtime makes the GPU a potentially revolutionary platform for document clustering algorithms.

  5. Parallel flow accumulation algorithms for graphical processing units with application to RUSLE model

    NASA Astrophysics Data System (ADS)

    Sten, Johan; Lilja, Harri; Hyväluoma, Jari; Westerholm, Jan; Aspnäs, Mats

    2016-04-01

    Digital elevation models (DEMs) are widely used in the modeling of surface hydrology, which typically includes the determination of flow directions and flow accumulation. The use of high-resolution DEMs increases the accuracy of flow accumulation computation, but as a drawback, the computational time may become excessively long if large areas are analyzed. In this paper we investigate the use of graphical processing units (GPUs) for efficient flow accumulation calculations. We present two new parallel flow accumulation algorithms based on dependency transfer and topological sorting and compare them to previously published flow-transfer and indegree-based algorithms. We benchmark the GPU implementations against industry standards, ArcGIS and SAGA. With the flow-transfer D8 flow routing model and binary input data, a speed-up of 19 is achieved compared to ArcGIS and 15 compared to SAGA. We show that on GPUs the topological-sort-based flow accumulation algorithm leads on average to a speed-up by a factor of 7 over the flow-transfer algorithm. Thus a total speed-up on the order of 100 is achieved. We test the algorithms by applying them to the Revised Universal Soil Loss Equation (RUSLE) erosion model. For this purpose we present parallel versions of the slope, LS factor, and RUSLE algorithms and show that the RUSLE erosion results for an area of 12 km × 24 km containing 72 million cells can be calculated in less than a second. Since flow accumulation is needed in many hydrological models, the developed algorithms may find use in many applications other than RUSLE modeling. The algorithm based on topological sorting is particularly promising for dynamic hydrological models where flow accumulations are repeatedly computed over an unchanged DEM.
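
    A sketch of the topological-sort variant under stated assumptions: cells are grouped by a precomputed topological level, cells within a level are mutually independent, and one kernel launch per level pushes each cell's accumulated flow to its D8 receiver. Array names are hypothetical.

        // One launch per topological level; acc must be initialized with each
        // cell's own contribution (e.g., 1 or local rainfall) before level 0.
        #include <cuda_runtime.h>

        __global__ void accumulate_level(const int* __restrict__ cellsAtLevel,
                                         int nCells,
                                         const int* __restrict__ downstream,
                                         float* __restrict__ acc)
        {
            int k = blockIdx.x * blockDim.x + threadIdx.x;
            if (k >= nCells) return;
            int c = cellsAtLevel[k];
            int d = downstream[c];          // D8 receiver id, -1 = outlet
            if (d >= 0)
                atomicAdd(&acc[d], acc[c]); // pass accumulated flow downstream
        }

    The atomicAdd is needed because two cells in the same level may share a receiver; correctness otherwise follows from the level ordering, since a cell's accumulation is final before its level is launched.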

  6. Porting ONETEP to graphical processing unit-based coprocessors. 1. FFT box operations.

    PubMed

    Wilkinson, Karl; Skylaris, Chris-Kriton

    2013-10-30

    We present the first graphical processing unit (GPU) coprocessor-enabled version of the Order-N Electronic Total Energy Package (ONETEP) code for linear-scaling first principles quantum mechanical calculations on materials. This work focuses on porting to the GPU the parts of the code that involve atom-localized fast Fourier transform (FFT) operations. These are among the most computationally intensive parts of the code and are used in core algorithms such as the calculation of the charge density, the local potential integrals, the kinetic energy integrals, and the nonorthogonal generalized Wannier function gradient. We have found that direct porting of the isolated FFT operations did not provide any benefit. Instead, it was necessary to tailor the port to each of the aforementioned algorithms to optimize data transfer to and from the GPU. A detailed discussion of the methods used and tests of the resulting performance are presented, which show that individual steps in the relevant algorithms are accelerated by a significant amount. However, the transfer of data between the GPU and host machine is a significant bottleneck in the reported version of the code. In addition, an initial investigation into a dynamic precision scheme for the ONETEP energy calculation has been performed to take advantage of the enhanced single precision capabilities of GPUs. The methods used here result in no disruption to the existing code base. Furthermore, as the developments reported here concern the core algorithms, they will benefit the full range of ONETEP functionality. Our use of a directive-based programming model ensures portability to other forms of coprocessors and will allow this work to form the basis of future developments to the code designed to support emerging high-performance computing platforms. PMID:24038140
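
    The record notes that porting isolated FFTs gave no benefit and that host-device transfer dominates; a common remedy, sketched below with cuFFT, is to batch many small FFT-box transforms into a single call so launch and transfer overheads are amortized. Box size and batch count are illustrative, and ONETEP itself uses a directive-based programming model rather than this explicit cuFFT code.

        // Batch many small FFT-box transforms into one cuFFT call (compile
        // with: nvcc fftbox.cu -lcufft). Box size and batch count are made up.
        #include <cufft.h>
        #include <cuda_runtime.h>

        int main() {
            int n[3] = {32, 32, 32};   // one FFT box (illustrative size)
            int batch = 64;            // number of boxes transformed per call
            size_t len = (size_t)n[0] * n[1] * n[2] * batch;

            cufftDoubleComplex* d_boxes;
            cudaMalloc((void**)&d_boxes, len * sizeof(cufftDoubleComplex));

            cufftHandle plan;  // one plan covers every box in the batch
            cufftPlanMany(&plan, 3, n, nullptr, 1, 0, nullptr, 1, 0,
                          CUFFT_Z2Z, batch);
            cufftExecZ2Z(plan, d_boxes, d_boxes, CUFFT_FORWARD);  // all boxes

            cufftDestroy(plan);
            cudaFree(d_boxes);
            return 0;
        }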

  7. Increasing the economy of design and preparation for manufacturing by integrated and graphic data processing: CAD/CAM - Phase III

    NASA Astrophysics Data System (ADS)

    Grupe, U.

    1986-04-01

    The development of CAD/CAM techniques and equipment for aircraft production at Dornier and MBB during the period 1983-1986 is reviewed. The topics discussed include geometry processing, structural mechanics, design of fabrication equipment, NC techniques, production planning, and processing of production orders. Consideration is given to the increased use of color graphics, the change from vector to scanning screens, and software-related problems encountered in shifting functions from the mainframe computer to terminals.

  8. Real-time Graphics Processing Unit Based Fourier Domain Optical Coherence Tomography and Surgical Applications

    NASA Astrophysics Data System (ADS)

    Zhang, Kang

    2011-12-01

    In this dissertation, real-time Fourier domain optical coherence tomography (FD-OCT) capable of multi-dimensional micrometer-resolution imaging, targeted specifically at microsurgical intervention applications, was developed and studied. As a part of this work several ultra-high-speed real-time FD-OCT imaging and sensing systems were proposed and developed. A real-time 4D (3D+time) OCT system platform using the graphics processing unit (GPU) to accelerate OCT signal processing, image reconstruction, visualization, and volume rendering was developed. Several GPU-based algorithms such as non-uniform fast Fourier transform (NUFFT), numerical dispersion compensation, and multi-GPU implementation were developed to improve the impulse response, SNR roll-off, and stability of the system. Full-range complex-conjugate-free FD-OCT was also implemented on the GPU architecture to achieve doubled image range and improved SNR. These technologies overcome the image reconstruction and visualization bottlenecks that are widespread in current ultra-high-speed FD-OCT systems and open the way to interventional OCT imaging for applications in guided microsurgery. A hand-held common-path optical coherence tomography (CP-OCT) distance-sensor-based microsurgical tool was developed and validated. Through real-time signal processing, edge detection, and feedback control, the tool was shown to be capable of tracking the target surface and compensating for motion. A micro-incision test on a phantom was performed using the CP-OCT-sensor-integrated hand-held tool, which showed an incision error of less than ±5 microns, compared to an error of >100 microns for free-hand incision. The CP-OCT distance sensor has also been utilized to enhance the accuracy and safety of optical nerve stimulation. Finally, several experiments were conducted to validate the system for surgical applications. One of them involved 4D OCT guided micro-manipulation using a phantom. Multiple volume renderings of one 3D data set were

  9. Large eddy simulations of turbulent flows on graphics processing units: Application to film-cooling flows

    NASA Astrophysics Data System (ADS)

    Shinn, Aaron F.

    Computational Fluid Dynamics (CFD) simulations can be very computationally expensive, especially for Large Eddy Simulations (LES) and Direct Numerical Simulations (DNS) of turbulent flows. In LES the large, energy containing eddies are resolved by the computational mesh, but the smaller (sub-grid) scales are modeled. In DNS, all scales of turbulence are resolved, including the smallest dissipative (Kolmogorov) scales. Clusters of CPUs have been the standard approach for such simulations, but an emerging approach is the use of Graphics Processing Units (GPUs), which deliver impressive computing performance compared to CPUs. Recently there has been great interest in the scientific computing community to use GPUs for general-purpose computation (such as the numerical solution of PDEs) rather than graphics rendering. To explore the use of GPUs for CFD simulations, an incompressible Navier-Stokes solver was developed for a GPU. This solver is capable of simulating unsteady laminar flows or performing a LES or DNS of turbulent flows. The Navier-Stokes equations are solved via a fractional-step method and are spatially discretized using the finite volume method on a Cartesian mesh. An immersed boundary method based on a ghost cell treatment was developed to handle flow past complex geometries. The implementation of these numerical methods had to suit the architecture of the GPU, which is designed for massive multithreading. The details of this implementation will be described, along with strategies for performance optimization. Validation of the GPU-based solver was performed for fundamental benchmark problems, and a performance assessment indicated that the solver was over an order-of-magnitude faster compared to a CPU. The GPU-based Navier-Stokes solver was used to study film-cooling flows via Large Eddy Simulation. In modern gas turbine engines, the film-cooling method is used to protect turbine blades from hot combustion gases. Therefore, understanding the physics of
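
    As an illustration of the kind of per-cell kernel such a solver is built from, here is a minimal Jacobi sweep for the pressure Poisson equation that arises in the fractional-step method. The 2-D uniform mesh and all names are assumptions made for brevity, not the dissertation's code.

        // One Jacobi iteration for the pressure Poisson equation on a uniform
        // 2-D Cartesian mesh (interior cells only; boundaries handled elsewhere).
        #include <cuda_runtime.h>

        __global__ void jacobi_pressure(const float* __restrict__ p,
                                        const float* __restrict__ rhs,
                                        float* __restrict__ pNew,
                                        int nx, int ny, float h)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            int j = blockIdx.y * blockDim.y + threadIdx.y;
            if (i <= 0 || j <= 0 || i >= nx - 1 || j >= ny - 1) return;
            int id = j * nx + i;
            // Average of the four neighbors minus the divergence source term.
            pNew[id] = 0.25f * (p[id - 1] + p[id + 1] + p[id - nx] + p[id + nx]
                                - h * h * rhs[id]);
        }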

  10. Graphic Arts: The Press and Finishing Processes. Fourth Edition. Teacher Edition [and] Student Edition.

    ERIC Educational Resources Information Center

    Farajollahi, Karim; Ogle, Gary; Reed, William; Woodcock, Kenneth

    Part of a series of instructional materials for courses on graphic communication, this packet contains both teacher and student materials for seven units that cover the following topics: (1) offset press systems; (2) offset inks and dampening chemistry; (3) offset press operating procedures; (4) preventive maintenance and troubleshooting; (5) job…

  11. REIONIZATION SIMULATIONS POWERED BY GRAPHICS PROCESSING UNITS. I. ON THE STRUCTURE OF THE ULTRAVIOLET RADIATION FIELD

    SciTech Connect

    Aubert, Dominique; Teyssier, Romain

    2010-11-20

    We present a set of cosmological simulations with radiative transfer in order to model the reionization history of the universe from z = 18 down to z = 6. Galaxy formation and the associated star formation are followed self-consistently with gas and dark matter dynamics using the RAMSES code, while radiative transfer is performed as a post-processing step using a moment-based method with the M1 closure relation in the ATON code. The latter has been ported to a multiple Graphics Processing Unit (GPU) architecture using the CUDA language together with the MPI library, resulting in an overall acceleration that allows us to tackle radiative transfer problems at a significantly higher resolution than previously reported: 1024³ + 2 levels of refinement for the hydrodynamic adaptive grid and 1024³ for the radiative transfer Cartesian grid. We reach a typical acceleration factor close to 100x when compared to the CPU version, allowing us to perform 1/4 million time steps in less than 3000 GPU hr. We observe good convergence properties between our different resolution runs for various volume- and mass-averaged quantities such as neutral fraction, UV background, and Thomson optical depth, as long as the effects of finite resolution on the star formation history are properly taken into account. We also show that the neutral fraction depends on the total mass density, in a way close to the predictions of photoionization equilibrium, as long as the effects of self-shielding are included in the background radiation model. Although our simulation suite has reached unprecedented mass and spatial resolution, we still fail to reproduce the z ≈ 6 constraints on the neutral fraction of hydrogen and the intensity of the UV background. In order to account for unresolved density fluctuations, we have modified our chemistry solver with a simple clumping factor model. Using our most spatially resolved simulation (12.5 Mpc h⁻¹ with 1024³ particles) to

  12. Creating Interactive Graphical Overlays in the Advanced Weather Interactive Processing System (AWIPS) Using Shapefiles and DGM Files

    NASA Technical Reports Server (NTRS)

    Barrett, Joe H., III; Lafosse, Richard; Hood, Doris; Hoeth, Brian

    2007-01-01

    Graphical overlays can be created in real-time in the Advanced Weather Interactive Processing System (AWIPS) using shapefiles or DARE Graphics Metafile (DGM) files. This presentation describes how to create graphical overlays on-the-fly for AWIPS, by using two examples of AWIPS applications that were created by the Applied Meteorology Unit (AMU). The first example is the Anvil Threat Corridor Forecast Tool, which produces a shapefile that depicts a graphical threat corridor of the forecast movement of thunderstorm anvil clouds, based on the observed or forecast upper-level winds. This tool is used by the Spaceflight Meteorology Group (SMG) and 45th Weather Squadron (45 WS) to analyze the threat of natural or space vehicle-triggered lightning over a location. The second example is a launch and landing trajectory tool that produces a DGM file that plots the ground track of space vehicles during launch or landing. The trajectory tool can be used by SMG and the 45 WS forecasters to analyze weather radar imagery along a launch or landing trajectory. Advantages of both file types will be listed.

  13. Computer Graphics.

    ERIC Educational Resources Information Center

    Halpern, Jeanne W.

    1970-01-01

    Computer graphics have been called the most exciting development in computer technology. At the University of Michigan, three kinds of graphics output equipment are now being used: symbolic printers, line plotters or drafting devices, and cathode-ray tubes (CRT). Six examples are given that demonstrate the range of graphics use at the University.…

  14. Efficient particle-in-cell simulation of auroral plasma phenomena using a CUDA enabled graphics processing unit

    NASA Astrophysics Data System (ADS)

    Sewell, Stephen

    This thesis introduces a software framework that effectively utilizes low-cost commercially available Graphic Processing Units (GPUs) to simulate complex scientific plasma phenomena that are modeled using the Particle-In-Cell (PIC) paradigm. The software framework that was developed conforms to the Compute Unified Device Architecture (CUDA), a standard for general purpose graphic processing that was introduced by NVIDIA Corporation. This framework has been verified for correctness and applied to advance the state of understanding of the electromagnetic aspects of the development of the Aurora Borealis and Aurora Australis. For each phase of the PIC methodology, this research has identified one or more methods to exploit the problem's natural parallelism and effectively map it for execution on the graphic processing unit and its host processor. The sources of overhead that can reduce the effectiveness of parallelization for each of these methods have also been identified. One of the novel aspects of this research was the utilization of particle sorting during the grid interpolation phase. The final representation resulted in simulations that executed about 38 times faster than simulations that were run on a single-core general-purpose processing system. The scalability of this framework to larger problem sizes and future generation systems has also been investigated.
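
    The particle-sorting idea highlighted above can be sketched as a gather-style deposition: because particles are sorted by cell, each grid node collects contributions from exactly the two cells that can reach it, with no atomic operations. A 1-D linear-weighting sketch with assumed cellStart/cellEnd index arrays (the thesis targets a full 2-D/3-D PIC code; this is only the core idea).

        // Gather-style charge deposition over sorted particles: node c reads
        // particles in cells c and c-1, so no two threads write the same node.
        #include <cuda_runtime.h>

        __global__ void deposit_charge(const float* __restrict__ px,   // sorted
                                       const float* __restrict__ q,
                                       const int* __restrict__ cellStart,
                                       const int* __restrict__ cellEnd,
                                       float* __restrict__ rho,
                                       int nCells, float dx)
        {
            int c = blockIdx.x * blockDim.x + threadIdx.x;
            if (c >= nCells) return;
            float acc = 0.f;
            // Particles in cell c deposit their near-side weight on node c.
            for (int p = cellStart[c]; p < cellEnd[c]; ++p) {
                float w = (px[p] - c * dx) / dx;  // fractional position in cell
                acc += q[p] * (1.f - w);
            }
            // Particles in cell c-1 deposit their far-side weight on node c.
            if (c > 0)
                for (int p = cellStart[c - 1]; p < cellEnd[c - 1]; ++p) {
                    float w = (px[p] - (c - 1) * dx) / dx;
                    acc += q[p] * w;
                }
            rho[c] = acc;
        }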

  15. Interactive image processing for mobile devices

    NASA Astrophysics Data System (ADS)

    Shaw, Rodney

    2009-01-01

    As the number of consumer digital images escalates by tens of billions each year, an increasing proportion of these images are being acquired using the latest generations of sophisticated mobile devices. The characteristics of the cameras embedded in these devices now yield image-quality outcomes that approach those of the parallel generations of conventional digital cameras, and all aspects of the management and optimization of these vast new image populations become of utmost importance in providing ultimate consumer satisfaction. However this satisfaction is still limited by the fact that a substantial proportion of all images are perceived to have inadequate image quality, and a lesser proportion of these to be completely unacceptable (for sharing, archiving, printing, etc.). In past years at this same conference, the author has described various aspects of a consumer digital-image interface based entirely on an intuitive image-choice-only operation. Demonstrations have been given of this facility in operation, essentially allowing critical-path navigation through approximately a million possible image-quality states within a matter of seconds. This was made possible by the definition of a set of orthogonal image vectors, and by defining all excursions in terms of a fixed linear visual-pixel model, independent of the image attribute. During recent months this methodology has been extended to yield specific user-interactive image-quality solutions in the form of custom software, which at less than 100 kB is readily embedded in the latest generations of unlocked portable devices. This has also necessitated the design of new user interfaces and controls, as well as streamlined and more intuitive versions of the user quality-choice hierarchy. The technical challenges and details will be described for these modified versions of the enhancement methodology, and initial practical experience with typical images will be described.

  16. Discontinuous Galerkin method with Gaussian artificial viscosity on graphical processing units for nonlinear acoustics

    NASA Astrophysics Data System (ADS)

    Tripathi, Bharat B.; Marchiano, Régis; Baskar, Sambandam; Coulouvrat, François

    2015-10-01

    Propagation of acoustical shock waves in complex geometry is a topic of interest in the field of nonlinear acoustics. For instance, simulation of Buzz Saw Noise requires the treatment of shock waves generated by the turbofan in aeroplane engines with complex geometries and wall liners. Nevertheless, from a numerical point of view it remains a challenge. The two main hurdles are taking into account the complex geometry of the domain and dealing with the spurious oscillations (Gibbs phenomenon) near the discontinuities. In this work, we first derive the conservative hyperbolic system of nonlinear acoustics (up to quadratic nonlinear terms) using the fundamental equations of fluid dynamics. Then, we propose to adapt the classical nodal discontinuous Galerkin method to develop a high-fidelity solver for nonlinear acoustics. The discontinuous Galerkin method is a hybrid of the finite element and finite volume methods and is very versatile in handling complex geometry. In order to obtain better performance, the method is parallelized on Graphical Processing Units. Like other numerical methods, the discontinuous Galerkin method suffers from the Gibbs phenomenon near the shock, which is a numerical artifact. Among the various ways to manage these spurious oscillations, we choose the method of parabolic regularization. Although the introduction of artificial viscosity into the system is a popular way of managing shocks, we propose a new approach of introducing smooth artificial viscosity locally in each element, wherever needed. First, a shock sensor using the linear coefficients of the spectral solution is used to locate the position of the discontinuities. Then, a viscosity coefficient depending on the shock sensor is introduced into the hyperbolic system of equations, only in the elements near the shock. The viscosity is applied as a two-dimensional Gaussian patch with its shape parameters depending on the element dimensions, referred here as Element
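
    A hedged sketch of the shock-sensing step described above: each element compares the energy in its highest modal coefficients with its total modal energy and is flagged for artificial viscosity when the ratio is large. The sensor definition and thresholding are simplified assumptions, and the Gaussian spatial shaping of the viscosity patch is left out.

        // Per-element modal shock sensor: elements whose highest-mode energy
        // fraction exceeds a threshold receive artificial viscosity nu0.
        #include <cuda_runtime.h>
        #include <math.h>

        __global__ void shock_sensor(const float* __restrict__ modal, // nElem*nModes
                                     float* __restrict__ visc,
                                     int nElem, int nModes,
                                     float threshold, float nu0)
        {
            int e = blockIdx.x * blockDim.x + threadIdx.x;
            if (e >= nElem) return;
            const float* u = modal + (size_t)e * nModes;
            float total = 1e-30f, high = 0.f;
            for (int m = 0; m < nModes; ++m) {
                float em = u[m] * u[m];
                total += em;
                if (m >= nModes - 2) high += em; // top modes carry oscillations
            }
            float s = logf(high / total);        // smoothness indicator
            visc[e] = (s > threshold) ? nu0 : 0.f;
        }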

  17. Real-time 2D parallel windowed Fourier transform for fringe pattern analysis using Graphics Processing Unit.

    PubMed

    Gao, Wenjing; Huyen, Nguyen Thi Thanh; Loi, Ho Sy; Kemao, Qian

    2009-12-01

    In optical interferometers, fringe projection systems, and synthetic aperture radars, fringe patterns are common outcomes and are usually degraded by unavoidable noise. The presence of noise makes phase extraction and phase unwrapping challenging. Windowed Fourier transform (WFT) based algorithms have been proven effective for fringe pattern analysis in various applications. However, the WFT-based algorithms are computationally expensive, prohibiting them from real-time applications. In this paper, we propose a fast parallel WFT-based library using graphics processing units and the compute unified device architecture. Real-time WFT-based processing is achieved at 4 frames per second on 256×256 fringe patterns. Up to 132× speed-up is obtained for WFT-based algorithms on an NVIDIA GTX295 graphics card compared with sequential C code on a quad-core 2.5 GHz Intel® Xeon® E5420 CPU. PMID:20052242
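
    The parallelism that makes GPU WFT fast is per-patch independence: cut the fringe pattern into overlapping windows, apply the Gaussian window, and run one batched 2-D FFT over all patches. A sketch under those assumptions; the layout and names are invented here, not taken from the paper.

        // Multiply every extracted patch by a centered Gaussian window before
        // a batched 2-D FFT (patches stored patch-major, each w*w complex).
        #include <cufft.h>
        #include <cuda_runtime.h>

        __global__ void window_patches(cufftComplex* patches, int nPatch, int w,
                                       float sigma)
        {
            int p = blockIdx.z;                        // patch index
            int x = blockIdx.x * blockDim.x + threadIdx.x;
            int y = blockIdx.y * blockDim.y + threadIdx.y;
            if (p >= nPatch || x >= w || y >= w) return;
            float dx = x - 0.5f * w, dy = y - 0.5f * w;
            float g = expf(-(dx * dx + dy * dy) / (2.f * sigma * sigma));
            cufftComplex& v = patches[((size_t)p * w + y) * w + x];
            v.x *= g; v.y *= g;
        }

        // Host side (sketch): one batched plan transforms all patches at once.
        //   int n[2] = {w, w};
        //   cufftPlanMany(&plan, 2, n, nullptr, 1, 0, nullptr, 1, 0,
        //                 CUFFT_C2C, nPatch);
        //   cufftExecC2C(plan, patches, patches, CUFFT_FORWARD);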

  18. Application of computer generated color graphic techniques to the processing and display of three dimensional fluid dynamic data

    NASA Technical Reports Server (NTRS)

    Anderson, B. H.; Putt, C. W.; Giamati, C. C.

    1981-01-01

    Color coding techniques used in the processing of remote sensing imagery were adapted and applied to the fluid dynamics problems associated with turbofan mixer nozzles. The computer generated color graphics were found to be useful in reconstructing the measured flow field from low resolution experimental data to give more physical meaning to this information and in scanning and interpreting the large volume of computer generated data from the three dimensional viscous computer code used in the analysis.

  19. Compressed sensing reconstruction for whole-heart imaging with 3D radial trajectories: a graphics processing unit implementation.

    PubMed

    Nam, Seunghoon; Akçakaya, Mehmet; Basha, Tamer; Stehning, Christian; Manning, Warren J; Tarokh, Vahid; Nezafat, Reza

    2013-01-01

    A disadvantage of three-dimensional (3D) isotropic acquisition in whole-heart coronary MRI is the prolonged data acquisition time. Isotropic 3D radial trajectories allow undersampling of k-space data in all three spatial dimensions, enabling accelerated acquisition of the volumetric data. Compressed sensing (CS) reconstruction can provide further acceleration in the acquisition by removing the incoherent artifacts due to undersampling and improving the image quality. However, the heavy computational overhead of the CS reconstruction has been a limiting factor for its application. In this article, a parallelized implementation of an iterative CS reconstruction method for 3D radial acquisitions using a commercial graphics processing unit is presented. The execution time of the graphics processing unit-implemented CS reconstruction was compared with that of the C++ implementation, and the efficacy of the undersampled 3D radial acquisition with CS reconstruction was investigated in both phantom and whole-heart coronary data sets. Subsequently, the efficacy of CS in suppressing streaking artifacts in 3D whole-heart coronary MRI with 3D radial imaging and its convergence properties were studied. The CS reconstruction provides improved image quality (in terms of vessel sharpness and suppression of noise-like artifacts) compared with the conventional 3D gridding algorithm, and the graphics processing unit implementation greatly reduces the execution time of CS reconstruction yielding 34-54 times speed-up compared with C++ implementation. PMID:22392604
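
    Iterative CS reconstructions of this kind alternate data-consistency steps with a sparsity-promoting proximal step; the latter is embarrassingly parallel and a natural GPU kernel. Below is a generic complex soft-thresholding sketch (one thread per transform coefficient), not the article's exact algorithm.

        // Complex soft-thresholding: shrink each coefficient's magnitude by
        // lambda, zeroing coefficients below the threshold.
        #include <cuda_runtime.h>
        #include <math.h>

        __global__ void soft_threshold(float2* __restrict__ coeff, int n,
                                       float lambda)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= n) return;
            float2 c = coeff[i];
            float mag = sqrtf(c.x * c.x + c.y * c.y);
            float scale = (mag > lambda) ? (mag - lambda) / mag : 0.f;
            coeff[i] = make_float2(c.x * scale, c.y * scale);
        }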

  20. Phase transitions in contagion processes mediated by recurrent mobility patterns

    NASA Astrophysics Data System (ADS)

    Balcan, Duygu; Vespignani, Alessandro

    2011-07-01

    Human mobility and activity patterns mediate contagion on many levels, including the spatial spread of infectious diseases, the diffusion of rumours, and the emergence of consensus. These patterns, however, are often dominated by specific locations and recurrent flows, and are poorly modelled by the random diffusive dynamics generally used to study them. Here we develop a theoretical framework to analyse contagion within a network of locations where individuals recall their geographic origins. We find a phase transition between a regime in which the contagion affects a large fraction of the system and one in which only a small fraction is affected. This transition cannot be uncovered by continuous deterministic models because of the stochastic features of the contagion process, and it defines an invasion threshold that depends on mobility parameters, providing guidance for controlling contagion spread by constraining mobility processes. We recover the threshold behaviour by analysing diffusion processes mediated by real human commuting data.

  1. International Student Mobility and the Bologna Process

    ERIC Educational Resources Information Center

    Teichler, Ulrich

    2012-01-01

    The Bologna Process is the newest of a chain of activities stimulated by supra-national actors since the 1950s to challenge national borders in higher education in Europe. Now, the ministers in charge of higher education of the individual European countries have agreed to promote a similar cycle-structure of study programmes and programmes based…

  2. Interpretation of Medical Imaging Data with a Mobile Application: A Mobile Digital Imaging Processing Environment

    PubMed Central

    Lin, Meng Kuan; Nicolini, Oliver; Waxenegger, Harald; Galloway, Graham J.; Ullmann, Jeremy F. P.; Janke, Andrew L.

    2013-01-01

    Digital Imaging Processing (DIP) requires data extraction and output from a visualization tool to be consistent. Data handling and transmission between the server and a user is a systematic process in service interpretation. The use of integrated medical services for management and viewing of imaging data in combination with a mobile visualization tool can be greatly facilitated by data analysis and interpretation. This paper presents an integrated mobile application and DIP service, called M-DIP. The objectives of the system are to (1) automate the direct data tiling, conversion, and pre-tiling of brain images from Medical Imaging NetCDF (MINC) and Neuroimaging Informatics Technology Initiative (NIFTI) to RAW formats; (2) speed up querying of imaging measurements; and (3) display multi-level, three-dimensional images in real-world coordinates. In addition, M-DIP provides the ability to work on a mobile or tablet device without any software installation, using web-based protocols. M-DIP implements a three-level architecture with a relational middle-layer database, a stand-alone DIP server, and a mobile application logic middle level realizing user interpretation for direct querying and communication. This imaging software has the ability to display biological imaging data at multiple zoom levels and to increase its quality to meet users' expectations. Interpretation of bioimaging data is facilitated by an interface analogous to online mapping services using real-world coordinate browsing. This allows mobile devices to display multiple datasets simultaneously from a remote site. M-DIP can be used as a measurement repository that can be accessed from any network environment, such as a portable mobile or tablet device. In addition, this system in combination with mobile applications is establishing a virtualization tool in the neuroinformatics field to speed interpretation services. PMID:23847587

  3. Interpretation of medical imaging data with a mobile application: a mobile digital imaging processing environment.

    PubMed

    Lin, Meng Kuan; Nicolini, Oliver; Waxenegger, Harald; Galloway, Graham J; Ullmann, Jeremy F P; Janke, Andrew L

    2013-01-01

    Digital Imaging Processing (DIP) requires data extraction and output from a visualization tool to be consistent. Data handling and transmission between the server and a user is a systematic process in service interpretation. The use of integrated medical services for management and viewing of imaging data in combination with a mobile visualization tool can be greatly facilitated by data analysis and interpretation. This paper presents an integrated mobile application and DIP service, called M-DIP. The objectives of the system are to (1) automate the direct data tiling, conversion, and pre-tiling of brain images from Medical Imaging NetCDF (MINC) and Neuroimaging Informatics Technology Initiative (NIFTI) to RAW formats; (2) speed up querying of imaging measurements; and (3) display multi-level, three-dimensional images in real-world coordinates. In addition, M-DIP provides the ability to work on a mobile or tablet device without any software installation, using web-based protocols. M-DIP implements a three-level architecture with a relational middle-layer database, a stand-alone DIP server, and a mobile application logic middle level realizing user interpretation for direct querying and communication. This imaging software has the ability to display biological imaging data at multiple zoom levels and to increase its quality to meet users' expectations. Interpretation of bioimaging data is facilitated by an interface analogous to online mapping services using real-world coordinate browsing. This allows mobile devices to display multiple datasets simultaneously from a remote site. M-DIP can be used as a measurement repository that can be accessed from any network environment, such as a portable mobile or tablet device. In addition, this system in combination with mobile applications is establishing a virtualization tool in the neuroinformatics field to speed interpretation services. PMID:23847587

  4. Real-time imaging implementation of the Army Research Laboratory synchronous impulse reconstruction radar on a graphics processing unit architecture

    NASA Astrophysics Data System (ADS)

    Park, Song Jun; Nguyen, Lam H.; Shires, Dale R.; Henz, Brian J.

    2009-05-01

    High computing requirements for the synchronous impulse reconstruction (SIRE) radar algorithm present a challenge for near real-time processing, particularly in the calculations involved in output image formation. Forming an image requires a large number of parallel and independent floating-point computations. To reduce the processing time and exploit the abundant parallelism of image processing, a graphics processing unit (GPU) architecture is considered for the imaging algorithm. Widely available off the shelf, high-end GPUs offer inexpensive technology with substantial computing power in a single card. To address the parallel nature of graphics processing, the GPU architecture is designed for high computational throughput realized through multiple computing resources, targeting data-parallel applications. With clock frequencies leveling off, or in some cases dropping, in mainstream single- and multi-core general-purpose central processing units (CPUs), GPU computing is becoming a competitive option for prototyping compute-intensive radar imaging algorithms. We describe the translation and implementation of the SIRE radar backprojection image formation algorithm on a GPU platform. The programming model for the GPU's parallel computing and hardware-specific memory optimizations are discussed in the paper. A considerable speedup is available from the GPU implementation, resulting in processing at real-time acquisition speeds.
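
    Backprojection assigns one thread per output pixel, each summing the radar returns over all aperture positions at the appropriate round-trip delay, which is exactly the independent floating-point work the record describes. A simplified sketch follows: nearest-neighbor range lookup, matched-filter phase factor omitted, and all names assumed.

        // Time-domain backprojection: each thread forms one image pixel.
        #include <cuda_runtime.h>
        #include <math.h>

        __global__ void backproject(const float2* __restrict__ data,  // [nAp][nRange]
                                    const float3* __restrict__ apPos, // antenna positions
                                    float2* __restrict__ image,
                                    int nAp, int nRange, int nx, int ny,
                                    float x0, float y0, float dxy, float dr)
        {
            int ix = blockIdx.x * blockDim.x + threadIdx.x;
            int iy = blockIdx.y * blockDim.y + threadIdx.y;
            if (ix >= nx || iy >= ny) return;
            float px = x0 + ix * dxy, py = y0 + iy * dxy;
            float2 acc = make_float2(0.f, 0.f);
            for (int a = 0; a < nAp; ++a) {
                float dx = px - apPos[a].x, dy = py - apPos[a].y, dz = apPos[a].z;
                float r = sqrtf(dx * dx + dy * dy + dz * dz);
                int bin = (int)(r / dr + 0.5f);    // nearest range bin
                if (bin < nRange) {
                    float2 s = data[(size_t)a * nRange + bin];
                    acc.x += s.x; acc.y += s.y;    // phase alignment omitted
                }
            }
            image[(size_t)iy * nx + ix] = acc;
        }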

  5. Using wesBench to Study the Rendering Performance of Graphics Processing Units

    SciTech Connect

    Bethel, Edward W

    2010-01-08

    Graphics operations consist of two broad stages. The first, which we refer to here as vertex operations, consists of transformation, lighting, primitive assembly, and so forth. The second, which we refer to as pixel or fragment operations, consists of rasterization, texturing, scissoring, blending, and fill. Overall GPU rendering performance is a function of the throughput of both these interdependent stages: if one stage is slower than the other, the faster stage will be forced to run more slowly and overall rendering performance will be adversely affected. The relationship runs in both directions: if the later stage has a greater workload than the earlier stage, the earlier stage will be forced to slow down. For example, a large triangle that covers many screen pixels incurs a very small amount of work in the vertex stage while incurring a relatively large amount of work in the fragment stage. Rendering performance of a scene consisting of many large-area triangles will be limited by the throughput of the fragment stage, which will have relatively more work than the vertex stage. There are two main objectives for this document. First, we introduce a new graphics benchmark, wesBench, which is useful for measuring the performance of both stages of the rendering pipeline under varying conditions. Second, we present its methodology for measuring performance and show results of several performance measurement studies aimed at producing a better understanding of GPU rendering performance characteristics and limits under varying configurations. First, in Section 2, we explore the 'crossover' point between geometry and rasterization. Second, in Section 3, we explore additional performance characteristics, some of which are ill- or un-documented. Lastly, several appendices provide additional material concerning problems with the gfxbench benchmark, and details about the new wesBench graphics benchmark.

  6. NATURAL graphics

    NASA Technical Reports Server (NTRS)

    Jones, R. H.

    1984-01-01

    The hardware and software developments in computer graphics are discussed. Major topics include: system capabilities, hardware design, system compatibility, and software interface with the data base management system.

  7. [Dynamic Pulse Signal Processing and Analyzing in Mobile System].

    PubMed

    Chou, Yongxin; Zhang, Aihua; Ou, Jiqing; Qi, Yusheng

    2015-09-01

    In order to derive the dynamic pulse rate variability (DPRV) signal from the dynamic pulse signal in real time, a method for extracting the DPRV signal was proposed and a portable mobile monitoring system was designed. The system consists of a front end for collecting and wirelessly sending the pulse signal and a mobile terminal. The proposed method is employed to extract the DPRV signal from the dynamic pulse signal in the mobile terminal, and the DPRV signal is analyzed in the time domain and the frequency domain, as well as with non-linear methods, in real time. The results show that the proposed method can accurately derive the DPRV signal in real time and that the system can be used for processing and analyzing the DPRV signal in real time. PMID:26904868

  8. Image processing for navigation on a mobile embedded platform: design of an autonomous mobile robot

    NASA Astrophysics Data System (ADS)

    Loose, Harald; Lemke, Christiane; Papazov, Chavdar

    2006-02-01

    This paper deals with intelligent mobile platforms connected to a camera controlled by a small hardware platform called RCUBE. This platform is able to provide the features of a typical actuator-sensor board with various inputs and outputs as well as computing power and image recognition capabilities. Several intelligent autonomous RCUBE devices can be equipped and programmed to participate in the BOSPORUS network. These components form an intelligent network for gathering sensor and image data, sensor data fusion, navigation, and control of mobile platforms. The RCUBE platform provides a standalone solution for image processing, which will be explained and presented. It plays a major role in several components of a reference implementation of the BOSPORUS system. On the one hand, intelligent cameras will be positioned in the environment, analyzing events from a fixed point of view and sharing their perceptions with other components in the system. On the other hand, image processing results will contribute to a reliable navigation of a mobile system, which is crucially important. Fixed landmarks and other objects appropriate for determining the position of a mobile system can be recognized. For navigation, other methods are added, e.g. GPS calculations and odometers.

  9. Real-time display on SD-OCT using a linear-in-wavenumber spectrometer and a graphics processing unit

    NASA Astrophysics Data System (ADS)

    Watanabe, Yuuki; Itagaki, Toshiki

    2010-02-01

    We demonstrated real-time display of processed OCT images using a linear-in-wavenumber (linear-k) spectrometer and a graphics processing unit (GPU). We used a linear-k spectrometer with an optimal combination of a 1200 lines/mm diffraction grating and an F2 equilateral prism in the 840 nm spectral region to avoid the re-sampling computation. The fast Fourier transform (FFT) calculations were accelerated by a low-cost GPU with many stream processors, which realized highly parallel processing. A display rate of 27.9 frames per second for processed images (2048 FFT size × 1000 lateral A-scans) was achieved in our OCT system using a line scan CCD camera operated at 27.9 kHz.

  10. Business Graphics

    NASA Technical Reports Server (NTRS)

    1987-01-01

    Genigraphics Corporation's Masterpiece 8770 FilmRecorder is an advanced high-resolution system designed to improve and expand a company's in-house graphics production. The GRAFTIME software package was designed to allow office personnel with minimal training to produce professional-level graphics for business communications and presentations. The products are no longer being manufactured.

  11. Graphic Storytelling

    ERIC Educational Resources Information Center

    Thompson, John

    2009-01-01

    Graphic storytelling is a medium that allows students to make and share stories, while developing their art communication skills. American comics today are more varied in genre, approach, and audience than ever before. When considering the impact of Japanese manga on the youth, graphic storytelling emerges as a powerful player in pop culture. In…

  12. Adaptive step ODE algorithms for the 3D simulation of electric heart activity with graphics processing units.

    PubMed

    Garcia-Molla, V M; Liberos, A; Vidal, A; Guillem, M S; Millet, J; Gonzalez, A; Martinez-Zaldivar, F J; Climent, A M

    2014-01-01

    In this paper we studied the implementation and performance of adaptive-step methods for large systems of ordinary differential equations on graphics processing units, focusing on the simulation of three-dimensional electric cardiac activity. The Rush-Larsen method was applied in all the implemented solvers to improve efficiency. We compared the adaptive-step methods with the fixed-step methods, and we found that the fixed-step methods can be faster, while the adaptive-step methods are better in terms of accuracy and robustness. PMID:24377685
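
    The Rush-Larsen method mentioned above exploits the fact that each gating variable obeys a locally linear ODE, dg/dt = (g_inf - g)/tau, which can be advanced exactly over a step. A minimal per-cell CUDA sketch, assuming g_inf and tau have already been evaluated from the membrane voltage (in a real cardiac model they are voltage-dependent):

        // Rush-Larsen update: exact solution of the linear gating ODE over dt.
        #include <cuda_runtime.h>
        #include <math.h>

        __global__ void rush_larsen(float* __restrict__ g,
                                    const float* __restrict__ gInf,
                                    const float* __restrict__ tau,
                                    int n, float dt)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= n) return;
            // g(t+dt) = g_inf + (g(t) - g_inf) * exp(-dt / tau)
            g[i] = gInf[i] + (g[i] - gInf[i]) * expf(-dt / tau[i]);
        }

    The exactness of this update for the gating subsystem is what lets such solvers take much larger steps than a plain explicit method would allow.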

  13. Towards real-time wavefront sensorless adaptive optics using a graphical processing unit (GPU) in a line scanning system

    NASA Astrophysics Data System (ADS)

    Biss, David P.; Patel, Ankit H.; Ferguson, R. Daniel; Mujat, Mircea; Iftimia, Nicusor; Hammer, Daniel X.

    2011-03-01

    Adaptive optics ophthalmic imaging systems that rely on a standalone wavefront sensor can be costly to build and difficult for non-technical personnel to operate. As an alternative we present a simplified wavefront sensorless adaptive optics laser scanning ophthalmoscope. This sensorless system is based on deterministic search algorithms that use the image's spatial frequency content as an optimization metric. We implement this algorithm on an NVIDIA video card, taking advantage of the parallel architecture of the graphics processing unit (GPU) to reduce algorithm computation times and approach real-time correction.

  14. An atomic orbital-based formulation of the complete active space self-consistent field method on graphical processing units

    SciTech Connect

    Hohenstein, Edward G.; Luehr, Nathan; Ufimtsev, Ivan S.; Martínez, Todd J.

    2015-06-14

    Despite its importance, state-of-the-art algorithms for performing complete active space self-consistent field (CASSCF) computations have lagged far behind those for single reference methods. We develop an algorithm for the CASSCF orbital optimization that uses sparsity in the atomic orbital (AO) basis set to increase the applicability of CASSCF. Our implementation of this algorithm uses graphical processing units (GPUs) and has allowed us to perform CASSCF computations on molecular systems containing more than one thousand atoms. Additionally, we have implemented analytic gradients of the CASSCF energy; the gradients also benefit from GPU acceleration as well as sparsity in the AO basis.

  15. High mobility solution-processed hybrid light emitting transistors

    NASA Astrophysics Data System (ADS)

    Walker, Bright; Ullah, Mujeeb; Chae, Gil Jo; Burn, Paul L.; Cho, Shinuk; Kim, Jin Young; Namdas, Ebinazar B.; Seo, Jung Hwa

    2014-11-01

    We report the design, fabrication, and characterization of high-performance, solution-processed hybrid (inorganic-organic) light emitting transistors (HLETs). The devices employ a high-mobility, solution-processed cadmium sulfide layer as the switching and transport layer, with the conjugated polymer Super Yellow as the emissive material in a non-planar source/drain transistor geometry. We demonstrate HLETs with electron mobilities of up to 19.5 cm²/V·s, current on/off ratios of >10⁷, and an external quantum efficiency of 10⁻²% at 2100 cd/m². This combined optical and electrical performance exceeds that reported to date for HLETs. Furthermore, we provide a full analysis of the charge injection, charge transport, and recombination mechanisms of the HLETs. The high brightness coupled with a high on/off ratio and low-cost solution processing makes this type of hybrid device attractive from a manufacturing perspective.

  16. High mobility solution-processed hybrid light emitting transistors

    SciTech Connect

    Walker, Bright; Kim, Jin Young; Ullah, Mujeeb; Burn, Paul L.; Namdas, Ebinazar B.; Chae, Gil Jo; Cho, Shinuk; Seo, Jung Hwa

    2014-11-03

    We report the design, fabrication, and characterization of high-performance, solution-processed hybrid (inorganic-organic) light emitting transistors (HLETs). The devices employ a high-mobility, solution-processed cadmium sulfide layer as the switching and transport layer, with the conjugated polymer Super Yellow as the emissive material in a non-planar source/drain transistor geometry. We demonstrate HLETs with electron mobilities of up to 19.5 cm²/V·s, current on/off ratios of >10⁷, and an external quantum efficiency of 10⁻²% at 2100 cd/m². This combined optical and electrical performance exceeds that reported to date for HLETs. Furthermore, we provide a full analysis of the charge injection, charge transport, and recombination mechanisms of the HLETs. The high brightness coupled with a high on/off ratio and low-cost solution processing makes this type of hybrid device attractive from a manufacturing perspective.

  17. Stable image acquisition for mobile image processing applications

    NASA Astrophysics Data System (ADS)

    Henning, Kai-Fabian; Fritze, Alexander; Gillich, Eugen; Mönks, Uwe; Lohweg, Volker

    2015-02-01

    Today, mobile devices (smartphones, tablets, etc.) are widespread and of high importance to their users. Their performance as well as their versatility increases over time. This creates the opportunity to use such devices for more specific tasks like image processing in an industrial context. For the analysis of images, requirements like image quality (blur, illumination, etc.) as well as a defined relative position of the object to be inspected are crucial. Since mobile devices are handheld and used in constantly changing environments, the challenge is to fulfill these requirements. We present an approach to overcome these obstacles and stabilize the image capturing process such that image analysis becomes significantly improved on mobile devices. To this end, image processing methods are combined with sensor fusion concepts. The approach consists of three main parts. First, pose estimation methods are used to guide a user moving the device to a defined position. Second, the sensor data and the pose information are combined for relative motion estimation. Finally, the image capturing process is automated. It is triggered depending on the alignment of the device and the object as well as the image quality that can be achieved under consideration of motion and environmental effects.

  18. The Metropolis Monte Carlo method with CUDA enabled Graphic Processing Units

    NASA Astrophysics Data System (ADS)

    Hall, Clifford; Ji, Weixiao; Blaisten-Barojas, Estela

    2014-02-01

    We present a CPU-GPU system for runtime acceleration of large molecular simulations using GPU computation and memory swaps. The memory architecture of the GPU can be used both as container for simulation data stored on the graphics card and as floating-point code target, providing an effective means for the manipulation of atomistic or molecular data on the GPU. To fully take advantage of this mechanism, efficient GPU realizations of algorithms used to perform atomistic and molecular simulations are essential. Our system implements a versatile molecular engine, including inter-molecule interactions and orientational variables for performing the Metropolis Monte Carlo (MMC) algorithm, which is one type of Markov chain Monte Carlo. By combining memory objects with floating-point code fragments we have implemented an MMC parallel engine that entirely avoids the communication time of molecular data at runtime. Our runtime acceleration system is a forerunner of a new class of CPU-GPU algorithms exploiting memory concepts combined with threading for avoiding bus bandwidth and communication. The testbed molecular system used here is a condensed phase system of oligopyrrole chains. A benchmark shows a size scaling speedup of 60 for systems with 210,000 pyrrole monomers. Our implementation can easily be combined with MPI to connect in parallel several CPU-GPU duets.
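
    One common way to map MMC onto a GPU, sketched below, is to parallelize the energy-difference evaluation of a trial move over interaction partners while the host performs the serial acceptance test. The Lennard-Jones pair potential here is a stand-in; the paper's engine includes dipole interactions and orientational variables, and all names are illustrative.

        // Energy change of a proposed single-particle move, one thread per
        // interaction partner; *dE must be zeroed before each launch.
        #include <cuda_runtime.h>
        #include <math.h>

        __global__ void delta_energy(const float3* __restrict__ pos, int n,
                                     int moved, float3 trial,
                                     float* __restrict__ dE)
        {
            int j = blockIdx.x * blockDim.x + threadIdx.x;
            if (j >= n || j == moved) return;
            float3 p = pos[j], o = pos[moved];
            // Lennard-Jones 12-6 pair energy, old vs. trial configuration
            // (reduced units; particles assumed non-overlapping).
            float r2o = (p.x-o.x)*(p.x-o.x) + (p.y-o.y)*(p.y-o.y)
                      + (p.z-o.z)*(p.z-o.z);
            float r2t = (p.x-trial.x)*(p.x-trial.x) + (p.y-trial.y)*(p.y-trial.y)
                      + (p.z-trial.z)*(p.z-trial.z);
            float i6o = 1.f / (r2o * r2o * r2o), i6t = 1.f / (r2t * r2t * r2t);
            float eOld = 4.f * (i6o * i6o - i6o);
            float eNew = 4.f * (i6t * i6t - i6t);
            atomicAdd(dE, eNew - eOld);
        }

        // Host (sketch): accept the move if dE <= 0 or rand() < exp(-dE / kT).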

  19. r-Java: An r-process Code and Graphical User Interface for Heavy-Element Nucleosynthesis

    NASA Astrophysics Data System (ADS)

    Charignon, Camille; Kostka, Mathew; Konin, Nico; Jaikumar, Prashanth; Ouyed, Rachid

    2011-04-01

    We present r-Java, an r-process code for open use, that performs r-process nucleosynthesis calculations. Equipped with a simple graphical user interface, r-Java is capable of carrying out nuclear statistical equilibrium (NSE) as well as static and dynamic r-process calculations for a wide range of input parameters. In this introductory paper, we present the motivation and details behind r-Java, and results from our static and dynamic simulations. Static simulations are explored for a range of neutron irradiation and temperatures. Dynamic simulations are studied with a parameterized expansion formula. Our code generates the resulting abundance pattern based on a general entropy expression that can be applied to degenerate as well as non-degenerate matter, allowing us to track the rapid density and temperature evolution of the ejecta during the initial stages of ejecta expansion. At present, our calculations are limited to the waiting-point approximation.

  20. The Metropolis Monte Carlo method with CUDA enabled Graphic Processing Units

    SciTech Connect

    Hall, Clifford; Ji, Weixiao; Blaisten-Barojas, Estela

    2014-02-01

    We present a CPU–GPU system for runtime acceleration of large molecular simulations using GPU computation and memory swaps. The memory architecture of the GPU can be used both as container for simulation data stored on the graphics card and as floating-point code target, providing an effective means for the manipulation of atomistic or molecular data on the GPU. To fully take advantage of this mechanism, efficient GPU realizations of algorithms used to perform atomistic and molecular simulations are essential. Our system implements a versatile molecular engine, including inter-molecule interactions and orientational variables for performing the Metropolis Monte Carlo (MMC) algorithm, which is one type of Markov chain Monte Carlo. By combining memory objects with floating-point code fragments we have implemented an MMC parallel engine that entirely avoids the communication time of molecular data at runtime. Our runtime acceleration system is a forerunner of a new class of CPU–GPU algorithms exploiting memory concepts combined with threading for avoiding bus bandwidth and communication. The testbed molecular system used here is a condensed phase system of oligopyrrole chains. A benchmark shows a size scaling speedup of 60 for systems with 210,000 pyrrole monomers. Our implementation can easily be combined with MPI to connect in parallel several CPU–GPU duets. Highlights:
    • We parallelize the Metropolis Monte Carlo (MMC) algorithm on one CPU–GPU duet.
    • The Adaptive Tempering Monte Carlo employs MMC and profits from this CPU–GPU implementation.
    • Our benchmark shows a size scaling-up speedup of 62 for systems with 225,000 particles.
    • The testbed involves a polymeric system of oligopyrroles in the condensed phase.
    • The CPU–GPU parallelization includes dipole–dipole and Mie–Jones classic potentials.

  1. Parallelized multi–graphics processing unit framework for high-speed Gabor-domain optical coherence microscopy

    PubMed Central

    Tankam, Patrice; Santhanam, Anand P.; Lee, Kye-Sung; Won, Jungeun; Canavesi, Cristina; Rolland, Jannick P.

    2014-01-01

    Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000 × 1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1 × 1 × 0.6 mm³ skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real time to the examiner using parallelized GPU processing. PMID:24695868
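
    A sketch of the parallelized control mechanism described above: the A-scans are split evenly across the detected GPUs, and each device processes its slice through an assumed per-GPU pipeline. The function process_ascans_on_current_gpu is a hypothetical placeholder for the FFT/resampling chain, not part of the published framework.

        // Split nAscans A-scans across all visible GPUs (illustrative sketch).
        #include <cuda_runtime.h>
        #include <algorithm>

        void process_ascans_on_current_gpu(const float* h_in, float* h_out,
                                           int count);  // assumed elsewhere

        void process_all(const float* h_in, float* h_out,
                         int nAscans, int ascanLen)
        {
            int nGpu = 1;
            cudaGetDeviceCount(&nGpu);
            int share = (nAscans + nGpu - 1) / nGpu;
            for (int g = 0; g < nGpu; ++g) {
                cudaSetDevice(g);          // route this slice to GPU g
                int first = g * share;
                int count = std::min(share, nAscans - first);
                if (count <= 0) break;
                process_ascans_on_current_gpu(h_in + (size_t)first * ascanLen,
                                              h_out + (size_t)first * ascanLen,
                                              count);
            }
            for (int g = 0; g < nGpu; ++g) {  // wait for every device
                cudaSetDevice(g);
                cudaDeviceSynchronize();
            }
        }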

  2. Enabling customer self service through image processing on mobile devices

    NASA Astrophysics Data System (ADS)

    Kliche, Ingmar; Hellmann, Sascha; Kreutel, Jörn

    2013-03-01

    Our paper will outline the results of a research project that employs image processing for the automatic diagnosis of technical devices whose internal state is communicated through visual displays. In particular, we developed a method for detecting exceptional states of retail wireless routers, analysing the state and blinking behaviour of the LEDs that make up most routers' user interface. The method was made configurable by means of abstracting away from a particular device's display properties, thus being able to analyse a whole range of different devices whose displays are covered by our abstraction. The method of analysis and its configuration mechanism were implemented as a native mobile application for the Android Platform. It employs the local camera of mobile devices for capturing a router's state, and uses overlaid visual hints for guiding the user toward that perspective from where an analysis is possible.

  3. Large scale neural circuit mapping data analysis accelerated with the graphical processing unit (GPU)

    PubMed Central

    Shi, Yulin; Veidenbaum, Alexander V.; Nicolau, Alex; Xu, Xiangmin

    2014-01-01

    Background: Modern neuroscience research demands computing power. Neural circuit mapping studies such as those using laser scanning photostimulation (LSPS) produce large amounts of data and require intensive computation for post-hoc processing and analysis. New Method: Here we report on the design and implementation of a cost-effective desktop computer system for accelerated experimental data processing with recent GPU computing technology. A new version of Matlab software with GPU-enabled functions is used to develop programs that run on Nvidia GPUs to harness their parallel computing power. Results: We evaluated both the central processing unit (CPU) and GPU-enabled computational performance of our system in benchmark testing and practical applications. The experimental results show that the GPU-CPU co-processing of simulated data and actual LSPS experimental data clearly outperformed the multi-core CPU with up to a 22× speedup, depending on computational tasks. Further, we present a comparison of numerical accuracy between GPU and CPU computation to verify the precision of GPU computation. In addition, we show how GPUs can be effectively adapted to improve the performance of commercial image processing software such as Adobe Photoshop. Comparison with Existing Method(s): To the best of our knowledge, this is the first demonstration of GPU application in neural circuit mapping and electrophysiology-based data processing. Conclusions: Together, GPU-enabled computation enhances our ability to process large-scale data sets derived from neural circuit mapping studies, allowing for increased processing speeds while retaining data precision. PMID:25277633

  4. Distributed cooperating processes in a mobile robot control system

    NASA Technical Reports Server (NTRS)

    Skillman, Thomas L., Jr.

    1988-01-01

    A mobile inspection robot has been proposed for the NASA Space Station. It will be a free flying autonomous vehicle that will leave a berthing unit to accomplish a variety of inspection tasks around the Space Station, and then return to its berth to recharge, refuel, and transfer information. The Flying Eye robot will receive voice communication to change its attitude, move at a constant velocity, and move to a predefined location along a self generated path. This mobile robot control system requires integration of traditional command and control techniques with a number of AI technologies. Speech recognition, natural language understanding, task and path planning, sensory abstraction and pattern recognition are all required for successful implementation. The interface between the traditional numeric control techniques and the symbolic processing to the AI technologies must be developed, and a distributed computing approach will be needed to meet the real time computing requirements. To study the integration of the elements of this project, a novel mobile robot control architecture and simulation based on the blackboard architecture was developed. The control system operation and structure is discussed.

  5. Real-Space Density Functional Theory on Graphical Processing Units: Computational Approach and Comparison to Gaussian Basis Set Methods.

    PubMed

    Andrade, Xavier; Aspuru-Guzik, Alán

    2013-10-01

    We discuss the application of graphical processing units (GPUs) to accelerate real-space density functional theory (DFT) calculations. To make our implementation efficient, we have developed a scheme to expose the data parallelism available in the DFT approach; this is applied to the different procedures required for a real-space DFT calculation. We present results for current-generation GPUs from AMD and Nvidia, which show that our scheme, implemented in the free code Octopus, can reach a sustained performance of up to 90 GFlops for a single GPU, representing a significant speed-up when compared to the CPU version of the code. Moreover, for some systems, our implementation can outperform a GPU Gaussian basis set code, showing that the real-space approach is a competitive alternative for DFT simulations on GPUs. PMID:26589153

  6. Robot graphic simulation testbed

    NASA Technical Reports Server (NTRS)

    Cook, George E.; Sztipanovits, Janos; Biegl, Csaba; Karsai, Gabor; Springfield, James F.

    1991-01-01

    The objective of this research was twofold. First, the basic capabilities of ROBOSIM (graphical simulation system) were improved and extended by taking advantage of advanced graphic workstation technology and artificial intelligence programming techniques. Second, the scope of the graphic simulation testbed was extended to include general problems of Space Station automation. Hardware support for 3-D graphics and high processing performance make high resolution solid modeling, collision detection, and simulation of structural dynamics computationally feasible. The Space Station is a complex system with many interacting subsystems. Design and testing of automation concepts demand modeling of the affected processes, their interactions, and that of the proposed control systems. The automation testbed was designed to facilitate studies in Space Station automation concepts.

  7. Programmer's Guide for FFORM. Physical Processes in Terrestrial and Aquatic Ecosystems, Computer Programs and Graphics Capabilities.

    ERIC Educational Resources Information Center

    Anderson, Lougenia; Gales, Larry

    This module is part of a series designed to be used by life science students for instruction in the application of physical theory to ecosystem operation. Most modules contain computer programs which are built around a particular application of a physical process. FFORM is a portable format-free input subroutine package written in ANSI Fortran IV…

  8. Effects of Graphic Organizers on Student Achievement in the Writing Process

    ERIC Educational Resources Information Center

    Brown, Marjorie

    2011-01-01

    Writing at the high school level requires higher cognitive and literacy skills. Educators must decide the strategies best suited for the varying skills of each process. Compounding this issue is the need to instruct students with learning disabilities. Writing for students with learning disabilities is a struggle at minimum; teachers have to find…

  9. Graphic Novels, Web Comics, and Creator Blogs: Examining Product and Process

    ERIC Educational Resources Information Center

    Carter, James Bucky

    2011-01-01

    Young adult literature (YAL) of the late 20th and early 21st century is exploring hybrid forms with growing regularity by embracing textual conventions from sequential art, video games, film, and more. As well, Web-based technologies have given those who consume YAL more immediate access to authors, their metacognitive creative processes, and…

  10. Mobil uses two-layer coating process on replacement pipe

    SciTech Connect

    Not Available

    1991-03-01

    Mobil Oil's West Coast Pipe Line, as part of an ongoing program, has replaced sections of its crude oil pipe line that crosses Southern California's San Joaquin Valley. The significant aspect of the replacement project was the use of a new two-part coating process that has the ability to make cathodic protection more effective while not deteriorating in service. Mobil's crude line extends from the company's San Joaquin Valley oil field in Kern County to the Torrance, Calif., refinery on the south side of Los Angeles. It crosses a variety of terrain including desert, foothills, and urban development. Crude oil from the San Joaquin Valley is heavy and requires heating for efficient flow. Normal operating temperature is about 180°F. Due to moisture in the soil surrounding the hot line, the risk of corrosion is constant. Additionally, soil stress on such a line extending through the California hills inflicts damage on the protective coating. Under these conditions, coatings can soften, bake out, and eventually become brittle. The ultimate result is separation from the pipe. The coating system employs a two-part process. Each of the two coatings is tailored to the other in a patented process, forming a chemical bond between the layers. This enhances the pipe protection both mechanically and electrically.

  11. Accelerating the performance of a novel meshless method based on collocation with radial basis functions by employing a graphical processing unit as a parallel coprocessor

    NASA Astrophysics Data System (ADS)

    Owusu-Banson, Derek

    In recent times, a variety of industries, applications, and numerical methods, including the meshless method, have enjoyed a great deal of success by utilizing the graphical processing unit (GPU) as a parallel coprocessor. These benefits often include performance improvement over previous implementations. Furthermore, applications running on graphics processors enjoy superior performance per dollar and performance per watt than implementations built exclusively on traditional central processing technologies. The GPU was originally designed for graphics acceleration, but the modern GPU, known as the General Purpose Graphical Processing Unit (GPGPU), can be used for scientific and engineering calculations. The GPGPU consists of a massively parallel array of integer and floating-point processors. There are typically hundreds of processors per graphics card, with dedicated high-speed memory. This work describes an application written by the author, titled GaussianRBF, to show the implementation and results of a novel meshless method that incorporates the collocation of the Gaussian radial basis function by utilizing the GPU as a parallel co-processor. Key phases of the proposed meshless method have been executed on the GPU using the NVIDIA CUDA software development kit. In particular, the matrix fill and solution phases have been carried out on the GPU, along with some post-processing. This approach resulted in a decreased processing time compared to a similar algorithm implemented on the CPU while maintaining the same accuracy.
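
    The collocation idea at the heart of GaussianRBF can be sketched generically (this is a one-dimensional NumPy toy, not the author's code): fill a dense Gaussian-kernel matrix over the collocation points, solve for the weights, and evaluate. The matrix-fill and solve steps are exactly the phases moved to the GPU above.

        import numpy as np

        def gaussian_rbf(r, eps=20.0):
            # Gaussian radial basis function with an assumed shape parameter eps
            return np.exp(-(eps * r) ** 2)

        x = np.linspace(0.0, 1.0, 50)                 # collocation points
        f = np.sin(2 * np.pi * x)                     # field to represent

        r = np.abs(x[:, None] - x[None, :])           # pairwise distances
        A = gaussian_rbf(r) + 1e-10 * np.eye(x.size)  # matrix-fill phase (+ tiny ridge for conditioning)
        w = np.linalg.solve(A, f)                     # solution phase

        x_new = np.linspace(0.0, 1.0, 57)             # evaluate off the nodes
        err = gaussian_rbf(np.abs(x_new[:, None] - x[None, :])) @ w - np.sin(2 * np.pi * x_new)
        print(np.max(np.abs(err)))                    # small interpolation error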

  12. The LHEA PDP 11/70 graphics processing facility users guide

    NASA Technical Reports Server (NTRS)

    1978-01-01

    A compilation of all necessary and useful information needed to allow the inexperienced user to program on the PDP 11/70. Information regarding the use of editing and file manipulation utilities, as well as operational procedures, is included. The inexperienced user is taken through the process of creating, editing, compiling, task building, and debugging his/her FORTRAN program. Also included is documentation on additional software.

  13. Perception in statistical graphics

    NASA Astrophysics Data System (ADS)

    VanderPlas, Susan Ruth

    There has been quite a bit of research on statistical graphics and visualization, generally focused on new types of graphics, new software to create graphics, interactivity, and usability studies. Our ability to interpret and use statistical graphics hinges on the interface between the graph itself and the brain that perceives and interprets it, and there is substantially less research on the interplay between graph, eye, brain, and mind than is sufficient to understand the nature of these relationships. The goal of the work presented here is to further explore the interplay between a static graph, the translation of that graph from paper to mental representation (the journey from eye to brain), and the mental processes that operate on that graph once it is transferred into memory (mind). Understanding the perception of statistical graphics should allow researchers to create more effective graphs which produce fewer distortions and viewer errors while reducing the cognitive load necessary to understand the information presented in the graph. Taken together, these experiments should lay a foundation for exploring the perception of statistical graphics. There has been considerable research into the accuracy of numerical judgments viewers make from graphs, and these studies are useful, but it is more effective to understand how errors in these judgments occur so that the root cause of the error can be addressed directly. Understanding how visual reasoning relates to the ability to make judgments from graphs allows us to tailor graphics to particular target audiences. In addition, understanding the hierarchy of salient features in statistical graphics allows us to clearly communicate the important message from data or statistical models by constructing graphics which are designed specifically for the perceptual system.

  14. Conceptual Learning with Multiple Graphical Representations: Intelligent Tutoring Systems Support for Sense-Making and Fluency-Building Processes

    ERIC Educational Resources Information Center

    Rau, Martina A.

    2013-01-01

    Most learning environments in the STEM disciplines use multiple graphical representations along with textual descriptions and symbolic representations. Multiple graphical representations are powerful learning tools because they can emphasize complementary aspects of complex learning contents. However, to benefit from multiple graphical…

  15. Graphical expression of thermodynamic characteristics of absorption process in ammonia-water system

    NASA Astrophysics Data System (ADS)

    Pospíšil, Jiří; Fortelný, Zdeněk

    2012-04-01

    Adiabatic sorption is an interesting phenomenon that occurs when refrigerant vapor is in contact with an unsaturated liquid absorbent-refrigerant mixture and no heat is exchanged between the system and the environment. This contribution introduces new auxiliary lines that enable the correct determination of the position of the adiabatic sorption process in the p-T-x diagram of the ammonia-water system. The presented auxiliary lines were obtained from common functions for fast calculation of water-ammonia system properties. Absorption cycle designers often utilize p-T-x diagrams of working mixtures for the initial design of new absorption cycles. The p-T-x diagrams enable fast and correct determination of the saturated states of liquid (and gaseous) mixtures of refrigerants and absorbents. However, the working mixture is not always at a saturated state during a real working cycle. If the pressure and temperature of an unsaturated mixture are known, its exact position can also be determined in the p-T-x diagram.

  16. Genetic algorithm supported by graphical processing unit improves the exploration of effective connectivity in functional brain imaging

    PubMed Central

    Chan, Lawrence Wing Chi; Pang, Bin; Shyu, Chi-Ren; Chan, Tao; Khong, Pek-Lan

    2015-01-01

    Brain regions of human subjects exhibit certain levels of associated activation upon specific environmental stimuli. Functional Magnetic Resonance Imaging (fMRI) detects regional signals, based on which we can infer the direct or indirect neuronal connectivity between the regions. Structural Equation Modeling (SEM) is an appropriate mathematical approach for analyzing the effective connectivity using fMRI data. A maximum likelihood (ML) discrepancy function is minimized against some constrained coefficients of a path model. The minimization is an iterative process, and the computing time is very long as the number of iterations increases geometrically with the number of path coefficients. Using a regular quad-core central processing unit (CPU) platform, a duration of up to 3 months is required for the iterations from 0 to 30 path coefficients. This study demonstrates the application of a Graphical Processing Unit (GPU) with a parallel Genetic Algorithm (GA) that replaces the Powell minimization in the standard program code of the analysis software package. It was found in the same example that the GA under GPU reduced the duration to 20 h and provided a more accurate solution when compared with the standard program code under CPU. PMID:25999846
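
    The structure of such a genetic-algorithm minimizer is easy to sketch; in the toy NumPy version below the discrepancy function is a stand-in for the SEM maximum-likelihood criterion (which is not reproduced here), and on a GPU the fitness of the entire population would be evaluated in parallel each generation.

        import numpy as np

        rng = np.random.default_rng(0)

        def discrepancy(theta):
            # stand-in objective; the paper minimizes an ML discrepancy instead
            return np.sum((theta - 0.3) ** 2)

        pop = rng.uniform(-1.0, 1.0, size=(64, 10))   # 64 candidates, 10 path coefficients
        for generation in range(200):
            fitness = np.array([discrepancy(p) for p in pop])
            parents = pop[np.argsort(fitness)[:16]]               # selection
            a = parents[rng.integers(0, 16, 64)]                  # pick parent pairs
            b = parents[rng.integers(0, 16, 64)]
            mask = rng.random(a.shape) < 0.5                      # uniform crossover
            pop = np.where(mask, a, b) + rng.normal(0, 0.05, a.shape)  # plus mutation
        print(min(discrepancy(p) for p in pop))                   # approaches 0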

  17. r-Java: an r-process code and graphical user interface for heavy-element nucleosynthesis

    NASA Astrophysics Data System (ADS)

    Charignon, C.; Kostka, M.; Koning, N.; Jaikumar, P.; Ouyed, R.

    2011-07-01

    We present r-Java, an r-process code for open use that performs r-process nucleosynthesis calculations. Equipped with a simple graphical user interface, r-Java is capable of carrying out nuclear statistical equilibrium (NSE), as well as static and dynamic r-process calculations, for a wide range of input parameters. In this introductory paper, we present the motivation and details behind r-Java and results from our static and dynamic simulations. Static simulations are explored for a range of neutron irradiation and temperatures. Dynamic simulations are studied with a parameterized expansion formula. Our code generates the resulting abundance pattern based on a general entropy expression that can be applied to both degenerate and non-degenerate matter, allowing us to track the rapid density and temperature evolution of the ejecta during the initial stages of ejecta expansion. At present, our calculations are limited to the waiting-point approximation. We encourage the nuclear astrophysics community to provide feedback on the code and related documentation, which is available for download from the website of the Quark-Nova Project: http://quarknova.ucalgary.ca/.

  18. Optimization of Parallel Legendre Transform using Graphics Processing Unit (GPU) for a Geodynamo Code

    NASA Astrophysics Data System (ADS)

    Lokavarapu, H. V.; Matsui, H.

    2015-12-01

    Convection and magnetic field of the Earth's outer core are expected to have vast length scales. To resolve these flows, high performance computing is required for geodynamo simulations using the spherical harmonic transform (SHT); a significant portion of the execution time is spent on the Legendre transform. Calypso is a geodynamo code designed to model magnetohydrodynamics of a Boussinesq fluid in a rotating spherical shell, such as the outer core of the Earth. The code has been shown to scale well on computer clusters capable of computing at the order of 10⁵ cores using Message Passing Interface (MPI) and Open Multi-Processing (OpenMP) parallelization for CPUs. To further optimize, we investigate three different algorithms of the SHT using GPUs. One is to preemptively compute the Legendre polynomials on the CPU before executing the SHT on the GPU within the time integration loop. In the second approach, both the Legendre polynomials and the SHT are computed on the GPU simultaneously. In the third approach, we initially partition the radial grid for the forward transform and the harmonic order for the backward transform between the CPU and GPU. Thereafter, the partitioned work is computed simultaneously in the time integration loop. We examine the trade-offs between space and time, memory bandwidth, and GPU computations on Maverick, a Texas Advanced Computing Center (TACC) supercomputer. We have observed improved performance using a GPU-enabled Legendre transform. Furthermore, we will compare and contrast the different algorithms in the context of GPUs.
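
    To make the kernel concrete: once the polynomial values are tabulated, a (one-dimensional, non-spherical) Legendre transform pair reduces to dense products against a table P_l(x_j), which is why where and when that table is computed (CPU ahead of time versus GPU on the fly) dominates the three algorithms compared above. A self-contained NumPy sketch with an assumed truncation degree:

        import numpy as np

        L = 63                                            # assumed truncation degree
        x, w = np.polynomial.legendre.leggauss(L + 1)     # Gauss-Legendre nodes/weights

        # table P[l, j] = P_l(x_j); precomputing this is the first algorithm above
        P = np.stack([np.polynomial.legendre.Legendre.basis(l)(x) for l in range(L + 1)])

        f = np.exp(np.sin(3.0 * x))                       # field sampled on the grid
        norm = (2.0 * np.arange(L + 1) + 1.0) / 2.0       # orthogonality normalization
        coeffs = norm * ((P * w) @ f)                     # forward transform (quadrature sum)

        print(np.allclose(P.T @ coeffs, f))               # backward transform recovers f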

  19. Real-time 4D signal processing and visualization using graphics processing unit on a regular nonlinear-k Fourier-domain OCT system.

    PubMed

    Zhang, Kang; Kang, Jin U

    2010-05-24

    We realized graphics processing unit (GPU) based real-time 4D (3D+time) signal processing and visualization on a regular Fourier-domain optical coherence tomography (FD-OCT) system with a nonlinear k-space spectrometer. An ultra-high speed linear spline interpolation (LSI) method for lambda-to-k spectral re-sampling is implemented in the GPU architecture, which gives average interpolation speeds of >3,000,000 lines/s for 1024-pixel OCT (1024-OCT) and >1,400,000 lines/s for 2048-pixel OCT (2048-OCT). The complete FD-OCT signal processing, including lambda-to-k spectral re-sampling, fast Fourier transform (FFT), and post-FFT processing, has all been implemented on a GPU. The maximum complete A-scan processing speeds were found to be 680,000 lines/s for 1024-OCT and 320,000 lines/s for 2048-OCT, which corresponds to about 1 GByte/s of processing bandwidth. In our experiment, a 2048-pixel CMOS camera running up to 70 kHz is used as the acquisition device. Therefore the actual imaging speed is camera-limited to 128,000 lines/s for 1024-OCT or 70,000 lines/s for 2048-OCT. 3D data sets are continuously acquired in real time in 1024-OCT mode, and immediately processed and visualized at up to 10 volumes/second (12,500 A-scans/volume) by either en face slice extraction or ray-casting based volume rendering from a 3D texture mapped in graphics memory. For standard FD-OCT systems, a GPU is the only additional hardware needed to realize this improvement and no optical modification is needed. This technique is highly cost-effective and can be easily integrated into most ultrahigh speed FD-OCT systems to overcome the 3D data processing and visualization bottlenecks. PMID:20589038
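
    The two core steps of that pipeline, lambda-to-k resampling followed by an FFT per A-scan, can be sketched as follows; this is an illustrative NumPy rendering with a made-up spectrometer calibration, whereas the actual system implements these steps as GPU kernels.

        import numpy as np

        n_pix, n_lines = 1024, 1000
        spectra = np.random.rand(n_lines, n_pix)             # stand-in camera lines

        lam = np.linspace(800e-9, 880e-9, n_pix)             # wavelength axis (assumed)
        k = 2.0 * np.pi / lam                                # nonlinear k axis
        k_uniform = np.linspace(k.min(), k.max(), n_pix)     # uniform target grid

        # lambda-to-k resampling by linear interpolation, one A-line at a time;
        # np.interp needs ascending sample points, hence the flips
        resampled = np.array([np.interp(k_uniform, k[::-1], s[::-1]) for s in spectra])

        ascans = np.abs(np.fft.fft(resampled, axis=1))       # depth profiles
        print(ascans.shape)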

  20. Grid-based algorithm to search critical points, in the electron density, accelerated by graphics processing units.

    PubMed

    Hernández-Esparza, Raymundo; Mejía-Chica, Sol-Milena; Zapata-Escobar, Andy D; Guevara-García, Alfredo; Martínez-Melchor, Apolinar; Hernández-Pérez, Julio-M; Vargas, Rubicelia; Garza, Jorge

    2014-12-01

    Using a grid-based method to search for the critical points in the electron density, we show how to accelerate such a method with graphics processing units (GPUs). When the GPU implementation is contrasted with that used on central processing units (CPUs), we found a large difference in the time elapsed by the two implementations: the smallest time is observed when GPUs are used. We tested two GPUs, one intended for video games and the other used for high-performance computing (HPC). On the CPU side, two processors were tested, one used in common personal computers and the other used for HPC, both of the latest generation. Although our parallel algorithm scales quite well on CPUs, the same implementation on GPUs runs around 10× faster than 16 CPUs, with any of the tested GPUs and CPUs. We found that the GPU intended for video games can be used without any problem for our application, delivering remarkable performance; in fact, this GPU competes against the HPC GPU, in particular when single precision is used. PMID:25345784
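
    The grid-based search itself is simple to sketch: a grid cell is a candidate critical point when every component of the density gradient changes sign across it, and each cell can be tested independently, which is what makes the method map so well onto a GPU. A toy NumPy version on a two-Gaussian model density (not the paper's code):

        import numpy as np

        ax = np.linspace(-3.0, 3.0, 64)
        X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
        rho = np.exp(-(X**2 + Y**2 + Z**2)) + np.exp(-((X - 1.5)**2 + Y**2 + Z**2))

        gx, gy, gz = np.gradient(rho, ax, ax, ax)

        # a cell is a candidate when the zero surfaces of gx, gy, and gz
        # all pass through it (each component changes sign across the cell)
        fx = np.diff(np.sign(gx), axis=0) != 0
        fy = np.diff(np.sign(gy), axis=1) != 0
        fz = np.diff(np.sign(gz), axis=2) != 0
        cells = fx[:, :-1, :-1] & fy[:-1, :, :-1] & fz[:-1, :-1, :]

        for i, j, k in np.argwhere(cells):                  # expected: cells near the
            print(ax[i], ax[j], ax[k])                      # two maxima and the saddle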

  1. Performance evaluation for volumetric segmentation of multiple sclerosis lesions using MATLAB and computing engine in the graphical processing unit (GPU)

    NASA Astrophysics Data System (ADS)

    Le, Anh H.; Park, Young W.; Ma, Kevin; Jacobs, Colin; Liu, Brent J.

    2010-03-01

    Multiple Sclerosis (MS) is a progressive neurological disease affecting myelin pathways in the brain. Multiple lesions in the white matter can cause paralysis and severe motor disabilities of the affected patient. To solve the issue of inconsistency and user-dependency in manual lesion measurement of MRI, we have proposed a 3-D automated lesion quantification algorithm to enable objective and efficient lesion volume tracking. The computer-aided detection (CAD) of MS, written in MATLAB, utilizes the K-Nearest Neighbors (KNN) method to compute the probability of lesions on a per-voxel basis. Despite the highly optimized image processing algorithms used in CAD development, MS CAD integration and evaluation in the clinical workflow is technically challenging due to the high computation rates and memory bandwidth required by the recursive nature of the algorithm. In this paper, we present the development and evaluation of using a computing engine in the graphical processing unit (GPU) with MATLAB for segmentation of MS lesions. The paper investigates the utilization of a high-end GPU for parallel computing of KNN in the MATLAB environment to improve algorithm performance. The integration is accomplished using NVIDIA's CUDA developmental toolkit for MATLAB. The results of this study will validate the practicality and effectiveness of the prototype MS CAD in a clinical setting. The GPU method may allow MS CAD to rapidly integrate into an electronic patient record or any disease-centric health care system.

  2. Graphics processing unit-assisted real-time three-dimensional measurement using speckle-embedded fringe.

    PubMed

    Feng, Shijie; Chen, Qian; Zuo, Chao

    2015-08-01

    This paper presents a novel two-frame fringe projection technique for real-time, accurate, and unambiguous three-dimensional (3D) measurement. One of the frames is a digital speckle pattern, and the other is a composite image generated by fusing that speckle image with sinusoidal fringes. The contained sinusoidal component is used to obtain a wrapped phase map by Fourier transform profilometry, and the speckle image helps determine the fringe order for phase unwrapping. Compared with traditional methods, the proposed pattern scheme enables measurements of discontinuous surfaces with only two frames, greatly reducing the number of required patterns and thus the sensitivity to movements. This merit makes the method very suitable for inspecting dynamic scenes. Moreover, our experiments show that its measurement accuracy is close to that of the phase-shifting method. To process data in real time, a Compute Unified Device Architecture-enabled graphics processing unit is adopted to accelerate some time-consuming computations. With our system, measurements can be performed at 21 frames per second with a resolution of 307,000 points per frame. PMID:26368103
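
    The Fourier-transform-profilometry step that produces the wrapped phase map can be sketched in a few lines; the speckle-based fringe-order determination that resolves the 2π ambiguities is not reproduced here, and all parameters below are illustrative assumptions.

        import numpy as np

        H, W, f0 = 256, 256, 16                          # image size, carrier frequency
        i = np.arange(H)[:, None]
        j = np.arange(W)[None, :]
        phase_true = 2.0 * np.exp(-((i - 128)**2 + (j - 128)**2) / 3000.0)  # toy surface
        fringes = 0.5 + 0.5 * np.cos(2.0 * np.pi * f0 * j / W + phase_true)

        F = np.fft.fft(fringes, axis=1)                  # spectrum of each row
        mask = np.zeros(W)
        mask[f0 - 5:f0 + 6] = 1.0                        # keep only the +f0 carrier lobe
        wrapped = np.angle(np.fft.ifft(F * mask, axis=1))

        print(wrapped.shape, wrapped.min(), wrapped.max())   # phase wrapped into (-pi, pi]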

  3. Graphics processing unit accelerated three-dimensional model for the simulation of pulsed low-temperature plasmas

    SciTech Connect

    Fierro, Andrew; Dickens, James; Neuber, Andreas

    2014-12-15

    A 3-dimensional particle-in-cell/Monte Carlo collision simulation that is fully implemented on a graphics processing unit (GPU) is described and used to determine low-temperature plasma characteristics at high reduced electric field, E/n, in nitrogen gas. Details of implementation on the GPU using the NVIDIA Compute Unified Device Architecture framework are discussed with respect to efficient code execution. The software is capable of tracking around 10 × 10⁶ particles with dynamic weighting and a total mesh size larger than 10⁸ cells. Verification of the simulation is performed by comparing the electron energy distribution function and plasma transport parameters to known Boltzmann equation (BE) solvers. Under the assumption of a uniform electric field and neglecting the build-up of positive ion space charge, the simulation agrees well with the BE solvers. The model is utilized to calculate plasma characteristics of a pulsed, parallel plate discharge. A photoionization model provides the simulation with additional electrons after the initial seeded electron density has drifted towards the anode. Comparison of the performance benefits between the GPU implementation and a CPU implementation is considered, and a speed-up factor of 13 for a 3D relaxation Poisson solver is obtained. Furthermore, a factor of 60 speed-up is realized for the parallelization of the electron processes.

  4. Accelerating Monte Carlo simulations of photon transport in a voxelized geometry using a massively parallel graphics processing unit

    SciTech Connect

    Badal, Andreu; Badano, Aldo

    2009-11-15

    Purpose: It is a known fact that Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: The use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). Methods: A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA programming model (NVIDIA Corporation, Santa Clara, CA). Results: An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed up factor was obtained using a GPU compared to a single core CPU. Conclusions: The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.
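
    The heart of such a simulation is embarrassingly parallel: each photon repeatedly samples a free-path length from the exponential attenuation law and then an interaction. A toy NumPy fragment of just the path-sampling step (PENELOPE's physics models are not represented), where in the GPU code one thread would own one photon history:

        import numpy as np

        rng = np.random.default_rng(1)
        mu = 0.2                        # assumed total attenuation coefficient, 1/cm
        n_photons = 1_000_000

        xi = rng.random(n_photons)
        steps = -np.log(xi) / mu        # free-path length per photon, s = -ln(xi)/mu

        print(steps.mean())             # approaches the mean free path 1/mu = 5 cm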

  5. Solution of the direct problem in turbid media with inclusions using Monte Carlo simulations implemented in graphics processing units: new criterion for processing transmittance data

    NASA Astrophysics Data System (ADS)

    Carbone, Nicolas; di Rocco, Hector; Iriarte, Daniela I.; Pomarico, Juan A.

    2010-05-01

    The study of light propagation in diffusive media requires solving the radiative transfer equation or, eventually, the diffusion approximation. Except for some cases involving simple geometries, the problem with immersed inclusions has not been solved. Also, Monte Carlo (MC) calculations have become a gold standard for simulating photon migration in turbid media, although they have the drawback of large processing times. The purpose of this work is two-fold: first, we introduce a new processing criterion to retrieve information about the location and shape of absorbing inclusions, based on normalization to the background intensity when no inhomogeneities are present. Second, we demonstrate the feasibility of including inhomogeneities in MC simulations implemented in graphics processing units, achieving large acceleration factors (~10³), thus providing an important tool for iteratively solving the forward problem to retrieve the optical properties of the inclusion. Results using a cw source are compared with MC outcomes, showing very good agreement.

  6. Computer graphics and the graphic artist

    NASA Technical Reports Server (NTRS)

    Taylor, N. L.; Fedors, E. G.; Pinelli, T. E.

    1985-01-01

    A centralized computer graphics system is being developed at the NASA Langley Research Center. This system was required to satisfy multiuser needs, ranging from presentation quality graphics prepared by a graphic artist to 16-mm movie simulations generated by engineers and scientists. While the major thrust of the central graphics system was directed toward engineering and scientific applications, hardware and software capabilities to support the graphic artists were integrated into the design. This paper briefly discusses the importance of computer graphics in research; the central graphics system in terms of systems, software, and hardware requirements; the application of computer graphics to graphic arts, discussed in terms of the requirements for a graphic arts workstation; and the problems encountered in applying computer graphics to the graphic arts. The paper concludes by presenting the status of the central graphics system.

  7. Data Processing: Use of a Mobile Classroom in the Teaching of Data Processing

    ERIC Educational Resources Information Center

    Ruge, Gerald D.

    1970-01-01

    A data processing van is a 12-foot by 60-foot mobile home containing 10 keypunch simulators, six keypunches, a verifier, sorter, accounting machine, 1622 card read-punch, and 1620 computer. The van is at each of four high schools during one quarter of the school year. (DM)

  8. PO*WW*ER mobile treatment unit process hazards analysis

    SciTech Connect

    Richardson, R.B.

    1996-06-01

    The objective of this report is to demonstrate that a thorough assessment of the risks associated with the operation of the Rust Geotech patented PO*WW*ER mobile treatment unit (MTU) has been performed and documented. The MTU was developed to treat aqueous mixed wastes at the US Department of Energy (DOE) Albuquerque Operations Office sites. The MTU uses evaporation to separate organics and water from radionuclides and solids, and catalytic oxidation to convert the hazardous constituents into byproducts. This process hazards analysis evaluated a number of accident scenarios not directly related to the operation of the MTU, such as natural phenomena damage and mishandling of chemical containers. Worst-case accident scenarios were further evaluated to determine the risk potential to the MTU and to workers, the public, and the environment. The overall risk to any group from operation of the MTU was determined to be very low; the MTU is classified as a Radiological Facility with low hazards.

  9. Optimization of image processing algorithms on mobile platforms

    NASA Astrophysics Data System (ADS)

    Poudel, Pramod; Shirvaikar, Mukul

    2011-03-01

    This work presents a technique to optimize popular image processing algorithms on mobile platforms such as cell phones, netbooks, and personal digital assistants (PDAs). The increasing demand for video applications like context-aware computing on mobile embedded systems requires the use of computationally intensive image processing algorithms. The system engineer has a mandate to optimize them so as to meet real-time deadlines. A methodology to take advantage of the asymmetric dual-core processor, which includes an ARM and a DSP core supported by shared memory, is presented with implementation details. The target platform chosen is the popular OMAP 3530 processor for embedded media systems. It has an asymmetric dual-core architecture with an ARM Cortex-A8 and a TMS320C64x Digital Signal Processor (DSP). The development platform was the BeagleBoard with 256 MB of NAND flash and 256 MB of SDRAM memory. The basic image correlation algorithm is chosen for benchmarking as it finds widespread application in various template matching tasks such as face recognition. The basic algorithm prototypes conform to OpenCV, a popular computer vision library. OpenCV algorithms can be easily ported to the ARM core, which runs a popular operating system such as Linux or Windows CE. However, the DSP is architecturally more efficient at handling DFT algorithms. The algorithms are tested on a variety of images and performance results are presented measuring the speedup obtained due to the dual-core implementation. A major advantage of this approach is that it allows the ARM processor to perform important real-time tasks, while the DSP addresses performance-hungry algorithms.
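
    Because the DSP side is best at transform-based kernels, the correlation is naturally phrased through the convolution theorem. A hedged NumPy sketch of FFT-based template matching of the kind being benchmarked (data and sizes are made up):

        import numpy as np

        image = np.random.rand(256, 256)
        template = image[100:116, 80:96].copy()        # 16x16 patch to locate

        t = template - template.mean()                 # zero-mean so the match peaks
        pad = np.zeros_like(image)
        pad[:16, :16] = t

        # circular cross-correlation via the convolution theorem
        corr = np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(pad))).real
        print(np.unravel_index(np.argmax(corr), corr.shape))   # expected: (100, 80)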

  10. A Physics-Based Modeling and Real-Time Simulation of Biomechanical Diffusion Process Through Optical Imaged Alveolar Tissues on Graphical Processing Units

    NASA Astrophysics Data System (ADS)

    Kaya, Ilhan; Santhanam, Anand P.; Lee, Kye-Sung; Meemon, Panomsak; Papp, Nicolene; Rolland, Jannick P.

    Tissue engineering has broad applications, from creating much-needed engineered tissue and organ structures for regenerative medicine to providing in vitro testbeds for drug testing. In the latter application domain, creating alveolar lung tissue and simulating the diffusion process of oxygen and other possible agents from the air into the blood stream, as well as modeling the removal of carbon dioxide and other possible entities from the blood stream, are of critical importance to simulating lung functions in various environments. In this chapter, we propose a physics-based model to simulate the alveolar gas exchange and the alveolar diffusion process. Tissue engineers, for the first time, may utilize these simulation results to better understand the underlying gas exchange process and properly adjust the tissue growing cycles. In this work, alveolar tissues are imaged by means of an optical coherence microscopy (OCM) system developed in our laboratory. As a consequence, 3D alveoli tissue data with its inherent complex boundary is taken as input to the simulation system, which is based on computational fluid mechanics in simulating the alveolar gas exchange. The visualization and the simulation of the diffusion of air into the blood through the alveoli tissue are performed using a state-of-the-art graphics processing unit (GPU). Results show the real-time simulation of the gas exchange through the 2D alveoli tissue.

  11. Real-time photoacoustic and ultrasound dual-modality imaging system facilitated with graphics processing unit and code parallel optimization

    NASA Astrophysics Data System (ADS)

    Yuan, Jie; Xu, Guan; Yu, Yao; Zhou, Yu; Carson, Paul L.; Wang, Xueding; Liu, Xiaojun

    2013-08-01

    Photoacoustic tomography (PAT) offers structural and functional imaging of living biological tissue with highly sensitive optical absorption contrast and excellent spatial resolution comparable to medical ultrasound (US) imaging. We report the development of a fully integrated PAT and US dual-modality imaging system, which performs signal scanning, image reconstruction, and display for both photoacoustic (PA) and US imaging, all in a truly real-time manner. The back-projection (BP) algorithm for PA image reconstruction is optimized to reduce the computational cost and facilitate parallel computation on a state-of-the-art graphics processing unit (GPU) card. For the first time, PAT and US imaging of the same object can be conducted simultaneously and continuously at a real-time frame rate, presently limited by the laser repetition rate of 10 Hz. Noninvasive PAT and US imaging of human peripheral joints in vivo was achieved, demonstrating the satisfactory image quality realized with this system. Another experiment, simultaneous PAT and US imaging of contrast agent flowing through an artificial vessel, was conducted to verify the performance of this system for imaging fast biological events. The GPU-based image reconstruction software code for this dual-modality system is open source and available for download from http://sourceforge.net/projects/patrealtime.

  12. Communication: A reduced scaling J-engine based reformulation of SOS-MP2 using graphics processing units.

    PubMed

    Maurer, S A; Kussmann, J; Ochsenfeld, C

    2014-08-01

    We present a low-prefactor, cubically scaling scaled-opposite-spin second-order Møller-Plesset perturbation theory (SOS-MP2) method which is highly suitable for massively parallel architectures like graphics processing units (GPUs). The scaling is reduced from O(N⁵) to O(N³) by a reformulation of the MP2 expression in the atomic orbital basis via Laplace transformation and the resolution-of-the-identity (RI) approximation of the integrals, in combination with efficient sparse algebra for the 3-center integral transformation. In contrast to previous works that employ GPUs for post-Hartree-Fock calculations, we do not simply employ GPU-based linear algebra libraries to accelerate the conventional algorithm. Instead, our reformulation allows us to replace the rate-determining contraction step with a modified J-engine algorithm, which has been proven to be highly efficient on GPUs. Thus, our SOS-MP2 scheme enables us to treat large molecular systems in an accurate and efficient manner on a single GPU server. PMID:25106563
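
    The Laplace-transformation step referenced above rests on a standard identity from the AO-basis MP2 literature (stated here for orientation, not transcribed from this paper): the orbital-energy denominator is replaced by an exponential quadrature,

        \frac{1}{\varepsilon_a + \varepsilon_b - \varepsilon_i - \varepsilon_j}
          = \int_0^{\infty} e^{-(\varepsilon_a + \varepsilon_b - \varepsilon_i - \varepsilon_j)\,t}\, dt
          \approx \sum_{\alpha=1}^{n_\tau} w_\alpha\, e^{-(\varepsilon_a + \varepsilon_b - \varepsilon_i - \varepsilon_j)\, t_\alpha},

    with i, j occupied and a, b virtual orbitals. Each exponential factorizes over the four indices, so the energy expression can be assembled from atomic-orbital quantities at a few quadrature points t_α, w_α, which removes the canonical-orbital bottleneck and opens the way to the sparse O(N³) formulation.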

  13. Monte Carlo-based fluorescence molecular tomography reconstruction method accelerated by a cluster of graphic processing units

    NASA Astrophysics Data System (ADS)

    Quan, Guotao; Gong, Hui; Deng, Yong; Fu, Jianwei; Luo, Qingming

    2011-02-01

    High-speed fluorescence molecular tomography (FMT) reconstruction for 3-D heterogeneous media is still one of the most challenging problems in diffusive optical fluorescence imaging. In this paper, we propose a fast FMT reconstruction method that is based on Monte Carlo (MC) simulation and accelerated by a cluster of graphics processing units (GPUs). Based on the Message Passing Interface standard, we modified the MC code for fast FMT reconstruction, and different Green's functions representing the flux distribution in media are calculated simultaneously by different GPUs in the cluster. A load-balancing method was also developed to increase the computational efficiency. By applying the Fréchet derivative, a Jacobian matrix is formed to reconstruct the distribution of the fluorochromes using the calculated Green's functions. Phantom experiments have shown that only 10 min are required to get reconstruction results with a cluster of 6 GPUs, rather than 6 h with a cluster of multiple dual-Opteron CPU nodes. Because of the advantages of high accuracy and suitability for 3-D heterogeneous media with refractive-index-unmatched boundaries inherited from the MC simulation, the GPU-cluster-accelerated method provides a reliable approach to high-speed reconstruction for FMT imaging.

  14. Performance of heterogeneous computing with graphics processing unit and many integrated core for hartree potential calculations on a numerical grid.

    PubMed

    Choi, Sunghwan; Kwon, Oh-Kyoung; Kim, Jaewook; Kim, Woo Youn

    2016-09-15

    We investigated the performance of heterogeneous computing with graphics processing units (GPUs) and many integrated core (MIC) with 20 CPU cores (20×CPU). As a practical example toward large-scale electronic structure calculations using grid-based methods, we evaluated the Hartree potentials of silver nanoparticles with various sizes (3.1, 3.7, 4.9, 6.1, and 6.9 nm) via a direct integral method supported by the sinc basis set. The so-called work-stealing scheduler was used for efficient heterogeneous computing via the balanced dynamic distribution of workloads between all processors on a given architecture, without any prior information on their individual performances. 20×CPU + 1GPU was up to ∼1.5 and ∼3.1 times faster than 1GPU and 20×CPU, respectively. 20×CPU + 2GPU was ∼4.3 times faster than 20×CPU. The performance enhancement by CPU + MIC was considerably lower than expected because of the large initialization overhead of MIC, although its theoretical performance is similar to that of CPU + GPU. PMID:27431905
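
    Work stealing is easy to sketch in miniature: every worker drains its own queue first and, when empty, steals from the back of a busy peer's queue, so workloads balance without any prior knowledge of per-processor speed. A simplified Python illustration (the actual scheduler distributes work across CPU, GPU, and MIC devices):

        import collections
        import random
        import threading

        queues = [collections.deque(range(i * 100, (i + 1) * 100)) for i in range(4)]
        done = [0, 0, 0, 0]             # tasks completed per worker
        lock = threading.Lock()

        def worker(wid):
            while True:
                task = None
                with lock:
                    if queues[wid]:
                        task = queues[wid].popleft()             # local work first
                    else:
                        victims = [q for q in queues if q]
                        if victims:
                            task = random.choice(victims).pop()  # steal from the back
                if task is None:
                    return                                       # everything drained
                done[wid] += 1                                   # "execute" the task

        threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        print(done)                     # counts even out regardless of worker speed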

  15. Rigid body constraints in HOOMD-Blue, a general purpose molecular dynamics code on graphics processing units

    NASA Astrophysics Data System (ADS)

    Nguyen, Trung D.; Phillips, Carolyn L.; Anderson, Joshua A.; Glotzer, Sharon C.

    2011-03-01

    Rigid body constraints are commonly used in a wide range of molecular modeling applications, from the atomistic scale, modeling the bonds in molecules such as water, carbon dioxide, and benzene, to the colloidal scale, modeling macroscopic rods, plates, and patchy nanoparticles. While parallel implementations of rigid constraints for molecular dynamics simulations on distributed memory clusters have poor performance scaling, on shared memory systems, such as multi-core CPUs and many-core graphics processing units (GPUs), rigid body constraints can be parallelized so that significantly better performance is possible. We have designed a massively parallel rigid body constraint algorithm and implemented it in HOOMD-blue, a GPU-accelerated, open-source, general purpose molecular dynamics simulation package. For typical simulations, the GPU implementation running on a single NVIDIA GTX 480 card is twice as fast as LAMMPS running on 32 CPU cores. In the HOOMD-blue code package, rigid constraints can be used seamlessly with non-rigid parts of the system and with different integration methods, including NVE, NVT, NPT, and Brownian dynamics. We have also incorporated the FIRE energy minimization algorithm, reformulated to be applicable to mixed systems of rigid bodies and non-rigid particles.

  16. High-Performance Iterative Electron Tomography Reconstruction with Long-Object Compensation using Graphics Processing Units (GPUs)

    PubMed Central

    Xu, Wei; Xu, Fang; Jones, Mel; Keszthelyi, Bettina; Sedat, John; Agard, David; Mueller, Klaus

    2010-01-01

    Iterative reconstruction algorithms pose tremendous computational challenges for 3D Electron Tomography (ET). Similar to X-ray Computed Tomography (CT), graphics processing units (GPUs) offer an affordable platform to meet these demands. In this paper, we outline a CT reconstruction approach for ET that is optimized for the special demands and application setting of ET. It exploits the fact that ET is typically cast as a parallel-beam configuration, which allows the design of an efficient data management scheme, using a holistic sinogram-based representation. Our method produces speedups of about an order of magnitude over a previously proposed GPU-based ET implementation, on similar hardware, and completes an iterative 3D reconstruction of practical problem size within minutes. We also describe a novel GPU-amenable approach that effectively compensates for reconstruction errors resulting from the TEM data acquisition on (long) samples which extend the width of the parallel TEM beam. We show that the vignetting artifacts typically arising at the periphery of non-compensated ET reconstructions are completely eliminated when our method is employed. PMID:20371381

  17. Communication: A reduced scaling J-engine based reformulation of SOS-MP2 using graphics processing units

    SciTech Connect

    Maurer, S. A.; Kussmann, J.; Ochsenfeld, C.

    2014-08-07

    We present a low-prefactor, cubically scaling scaled-opposite-spin second-order Møller-Plesset perturbation theory (SOS-MP2) method which is highly suitable for massively parallel architectures like graphics processing units (GPUs). The scaling is reduced from O(N⁵) to O(N³) by a reformulation of the MP2 expression in the atomic orbital basis via Laplace transformation and the resolution-of-the-identity (RI) approximation of the integrals, in combination with efficient sparse algebra for the 3-center integral transformation. In contrast to previous works that employ GPUs for post-Hartree-Fock calculations, we do not simply employ GPU-based linear algebra libraries to accelerate the conventional algorithm. Instead, our reformulation allows us to replace the rate-determining contraction step with a modified J-engine algorithm, which has been proven to be highly efficient on GPUs. Thus, our SOS-MP2 scheme enables us to treat large molecular systems in an accurate and efficient manner on a single GPU server.

  18. Simultaneous reconstruction of multiple depth images without off-focus points in integral imaging using a graphics processing unit.

    PubMed

    Yi, Faliu; Lee, Jieun; Moon, Inkyu

    2014-05-01

    The reconstruction of multiple depth images with a ray back-propagation algorithm in three-dimensional (3D) computational integral imaging is computationally burdensome. Further, a reconstructed depth image consists of a focus and an off-focus area. Focus areas are 3D points on the surface of an object that are located at the reconstructed depth, while off-focus areas include 3D points in free-space that do not belong to any object surface in 3D space. Generally, without being removed, the presence of an off-focus area would adversely affect the high-level analysis of a 3D object, including its classification, recognition, and tracking. Here, we use a graphics processing unit (GPU) that supports parallel processing with multiple processors to simultaneously reconstruct multiple depth images using a lookup table containing the shifted values along the x and y directions for each elemental image in a given depth range. Moreover, each 3D point on a depth image can be measured by analyzing its statistical variance with its corresponding samples, which are captured by the two-dimensional (2D) elemental images. These statistical variances can be used to classify depth image pixels as either focus or off-focus points. At this stage, the measurement of focus and off-focus points in multiple depth images is also implemented in parallel on a GPU. Our proposed method is conducted based on the assumption that there is no occlusion of the 3D object during the capture stage of the integral imaging process. Experimental results have demonstrated that this method is capable of removing off-focus points in the reconstructed depth image. The results also showed that using a GPU to remove the off-focus points could greatly improve the overall computational speed compared with using a CPU. PMID:24921860
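
    The variance test described above is straightforward to sketch: for every reconstructed point, gather its samples across the elemental images and threshold their variance; low variance marks a focus point, high variance an off-focus one. A toy NumPy version with an assumed threshold (in the GPU version each pixel becomes an independent thread):

        import numpy as np

        rng = np.random.default_rng(2)
        views = rng.normal(0.5, 0.02, size=(25, 128, 128))            # 25 elemental samples
        views[:, 40:60, 40:60] += rng.normal(0.0, 0.3, (25, 20, 20))  # off-focus region

        var = views.var(axis=0)                 # per-pixel variance over the views
        focus_mask = var < 0.01                 # assumed threshold, tuned per scene

        depth_image = views.mean(axis=0)        # back-propagated reconstruction
        depth_image[~focus_mask] = 0.0          # remove off-focus points
        print(focus_mask.mean())                # fraction of pixels kept as focus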

  19. Integrative Processing of Verbal and Graphical Information during Re-Reading Predicts Learning from Illustrated Text: An Eye-Movement Study

    ERIC Educational Resources Information Center

    Mason, Lucia; Tornatora, Maria Caterina; Pluchino, Patrik

    2015-01-01

    Printed or digital textbooks contain texts accompanied by various kinds of visualisation. Successful comprehension of these materials requires integrating verbal and graphical information. This study investigates the time course of processing an illustrated text through eye-tracking methodology in the school context. The aims were to identify…

  20. Designing and Implementing an OVERFLOW Reader for ParaView and Comparing Performance Between Central Processing Units and Graphical Processing Units

    NASA Technical Reports Server (NTRS)

    Chawner, David M.; Gomez, Ray J.

    2010-01-01

    In the Applied Aerosciences and CFD branch at Johnson Space Center, computational simulations are run that face many challenges, two of which are the ability to customize software for specialized needs and the need to run simulations as fast as possible. There are many different tools used for running these simulations, and each one has its own pros and cons. Once these simulations are run, there needs to be software capable of visualizing the results in an appealing manner. Some of this software is open source, meaning that anyone can edit the source code and distribute the modifications to all other users in a future release. This is very useful, especially in this branch, where many different tools are being used. File readers can be written to load any file format into a program, easing the bridge from one tool to another. Programming such a reader requires knowledge of the file format being read as well as the equations necessary to obtain the derived values after loading. When these CFD simulations are run, extremely large files are loaded and derived values are calculated. These simulations usually take a few hours to complete, even on the fastest machines. Graphics processing units (GPUs) were originally used to drive computer graphics; however, in recent years, GPUs have been used for more generic applications because of the speed of these processors. Applications run on GPUs have been known to run up to forty times faster than they would on normal central processing units (CPUs). If these CFD programs are extended to run on GPUs, the amount of time they require to complete would be much less. This would allow more simulations to be run in the same amount of time and possibly permit more complex computations.

  1. Real-time dual-mode standard/complex Fourier-domain OCT system using graphics processing unit accelerated 4D signal processing and visualization

    NASA Astrophysics Data System (ADS)

    Zhang, Kang; Kang, Jin U.

    2011-03-01

    We realized a real-time dual-mode standard/complex Fourier-domain optical coherence tomography (FD-OCT) system using graphics processing unit (GPU) accelerated 4D (3D+time) signal processing and visualization. For both standard and complex FD-OCT modes, the signal processing tasks were implemented on a dual-GPU architecture that included λ-to-k spectral re-sampling, fast Fourier transform (FFT), modified Hilbert transform, logarithmic scaling, and volume rendering. The maximum A-scan processing speeds achieved are >3,000,000 lines/s for the standard 1024-pixel FD-OCT and >500,000 lines/s for the complex 1024-pixel FD-OCT. Multiple volume renderings of the same 3D data set were performed and displayed with different view angles. The GPU-acceleration technique is highly cost-effective and can be easily integrated into most ultrahigh speed FD-OCT systems to overcome the 3D data processing and visualization bottlenecks.

  2. Compute-unified device architecture implementation of a block-matching algorithm for multiple graphical processing unit cards.

    PubMed

    Massanes, Francesc; Cadennes, Marie; Brankov, Jovan G

    2011-07-01

    In this paper we describe and evaluate a fast implementation of a classical block matching motion estimation algorithm for multiple Graphical Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) computing engine. The implemented block matching algorithm (BMA) uses the summed absolute difference (SAD) error criterion and full grid search (FS) for finding the optimal block displacement. In this evaluation we compared the execution time of GPU and CPU implementations for images of various sizes, using integer and non-integer search grids. The results show that use of a GPU card can shorten computation time by a factor of 200 times for an integer and 1000 times for a non-integer search grid. The additional speedup for a non-integer search grid comes from the fact that the GPU has built-in hardware for image interpolation. Further, when using multiple GPU cards, the presented evaluation shows the importance of the data splitting method across multiple cards, but an almost linear speedup with the number of cards is achievable. In addition, we compared the execution time of the proposed FS GPU implementation with two existing, highly optimized non-full grid search CPU-based motion estimation methods, namely the implementation of the Pyramidal Lucas Kanade Optical Flow algorithm in OpenCV and the Simplified Unsymmetrical multi-Hexagon search in the H.264/AVC standard. In these comparisons, the FS GPU implementation still showed modest improvement even though its computational complexity is substantially higher than that of the non-FS CPU implementations. We also demonstrated that for an image sequence of 720×480 pixels in resolution, commonly used in video surveillance, the proposed GPU implementation is sufficiently fast for real-time motion estimation at 30 frames-per-second using two NVIDIA C1060 Tesla GPU cards. PMID:22347787
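
    The SAD/full-search combination being accelerated is compact enough to show in full for one block; in the CUDA version every block and every candidate displacement becomes an independent thread, while this NumPy toy checks one block serially.

        import numpy as np

        ref = np.random.rand(64, 64)
        cur = np.roll(ref, shift=(3, -2), axis=(0, 1))     # frame moved by (3, -2)

        by, bx, B, R = 24, 24, 8, 6                        # block origin, size, range
        block = cur[by:by + B, bx:bx + B]

        best, best_mv = np.inf, (0, 0)
        for dy in range(-R, R + 1):                        # full grid search
            for dx in range(-R, R + 1):
                cand = ref[by + dy:by + dy + B, bx + dx:bx + dx + B]
                sad = np.abs(block - cand).sum()           # summed absolute difference
                if sad < best:
                    best, best_mv = sad, (dy, dx)

        # rolling cur by (3, -2) means the block originates at the opposite
        # offset in ref, so the motion vector found is (-3, 2) with SAD 0
        print(best_mv, best)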

  3. Compute-unified device architecture implementation of a block-matching algorithm for multiple graphical processing unit cards

    NASA Astrophysics Data System (ADS)

    Massanes, Francesc; Cadennes, Marie; Brankov, Jovan G.

    2011-07-01

    We describe and evaluate a fast implementation of a classical block-matching motion estimation algorithm for multiple graphical processing units (GPUs) using the compute unified device architecture computing engine. The implemented block-matching algorithm uses the summed absolute difference error criterion and full grid search (FS) for finding the optimal block displacement. In this evaluation, we compared the execution time of GPU and CPU implementations for images of various sizes, using integer and noninteger search grids. The results show that use of a GPU card can shorten computation time by a factor of 200 times for an integer and 1000 times for a noninteger search grid. The additional speedup for a noninteger search grid comes from the fact that the GPU has built-in hardware for image interpolation. Further, when using multiple GPU cards, the presented evaluation shows the importance of the data splitting method across multiple cards, but an almost linear speedup with the number of cards is achievable. In addition, we compared the execution time of the proposed FS GPU implementation with two existing, highly optimized non-full grid search CPU-based motion estimation methods, namely the implementation of the Pyramidal Lucas Kanade Optical Flow algorithm in OpenCV and the simplified unsymmetrical multi-hexagon search in the H.264/AVC standard. In these comparisons, the FS GPU implementation still showed modest improvement even though its computational complexity is substantially higher than that of the non-FS CPU implementations. We also demonstrated that for an image sequence of 720 × 480 pixels in resolution, commonly used in video surveillance, the proposed GPU implementation is sufficiently fast for real-time motion estimation at 30 frames per second using two NVIDIA C1060 Tesla GPU cards.

  4. Structural, dynamic, and electrostatic properties of fully hydrated DMPC bilayers from molecular dynamics simulations accelerated with graphical processing units (GPUs).

    PubMed

    Ganesan, Narayan; Bauer, Brad A; Lucas, Timothy R; Patel, Sandeep; Taufer, Michela

    2011-11-15

    We present results of molecular dynamics simulations of fully hydrated DMPC bilayers performed on graphics processing units (GPUs) using current state-of-the-art non-polarizable force fields and a local GPU-enabled molecular dynamics code named FEN ZI. We treat the conditionally convergent electrostatic interaction energy exactly using the particle mesh Ewald (PME) method for the solution of Poisson's equation for the electrostatic potential under periodic boundary conditions. We discuss elements of our implementation of the PME algorithm on GPUs as well as pertinent performance issues. We proceed to show results of simulations of extended lipid bilayer systems using our program, FEN ZI. We performed simulations of DMPC bilayer systems consisting of 17,004, 68,484, and 273,936 atoms in explicit solvent. We present bilayer structural properties (atomic number densities, electron density profiles), deuterium order parameters (S(CD)), electrostatic properties (dipole potential, water dipole moments), and orientational properties of water. Predicted properties demonstrate excellent agreement with experiment and previous all-atom molecular dynamics simulations. We observe no statistically significant differences in calculated structural or electrostatic properties for the different system sizes, suggesting that the small bilayer simulations (fewer than 100 lipid molecules) provide an equivalent representation of the structural and electrostatic properties associated with significantly larger systems (over 1000 lipid molecules). We stress that the three system size representations will have differences in other properties, such as surface capillary wave dynamics or surface-tension-related effects, that are not probed in the current study. The latter properties are inherently dependent on system size. This contribution suggests the suitability of applying emerging GPU technologies to studies of an important class of biological environments, that of lipid bilayers and their associated integral

  5. Computer Graphics Verification

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Video processing creates technical animation sequences using studio quality equipment to realistically represent fluid flow over space shuttle surfaces, helicopter rotors, and turbine blades. Computer systems co-op Tim Weatherford is shown performing computer graphics verification. Part of a Co-op brochure.

  6. Designing Award Winning Graphics.

    ERIC Educational Resources Information Center

    Kintigh, Cynthia

    1990-01-01

    Graphic designers, marketing specialists, and campus activities professionals who have won awards for the design of campus programing publicity offer tips in the process of designing successful promotional items, including ingredients of winning pieces and aspects of a productive designer-client relationship. (MSE)

  7. A Physics-Based Modeling and Real-Time Simulation of Biomechanical Diffusion Process Through Optical Imaged Alveolar Tissues on Graphical Processing Units

    NASA Astrophysics Data System (ADS)

    Kaya, Ilhan; Santhanam, Anand P.; Lee, Kye-Sung; Meemon, Panomsak; Papp, Nicolene; Rolland, Jannick P.

    Tissue engineering has broad applications, from creating much-needed engineered tissue and organ structures for regenerative medicine to providing in vitro testbeds for drug testing. In the latter application domain, creating alveolar lung tissue and simulating the diffusion process of oxygen and other possible agents from the air into the blood stream, as well as modeling the removal of carbon dioxide and other possible entities from the blood stream, are of critical importance to simulating lung functions in various environments. In this chapter, we propose a physics-based model to simulate the alveolar gas exchange and the alveolar diffusion process. Tissue engineers, for the first time, may utilize these simulation results to better understand the underlying gas exchange process and properly adjust the tissue growing cycles. In this work, alveolar tissues are imaged by means of an optical coherence microscopy (OCM) system developed in our laboratory. As a consequence, 3D alveoli tissue data with its inherent complex boundary is taken as input to the simulation system, which is based on computational fluid mechanics in simulating the alveolar gas exchange. The visualization and the simulation of the diffusion of air into the blood through the alveoli tissue are performed using a state-of-the-art graphics processing unit (GPU). Results show the real-time simulation of the gas exchange through the 2D alveoli tissue.

  8. Design Graphics

    NASA Technical Reports Server (NTRS)

    1990-01-01

    A mathematician, David R. Hedgley, Jr., developed a computer program that considers whether a line in a graphic model of a three-dimensional object should or should not be visible. Known as the Hidden Line Computer Code, the program automatically removes superfluous lines and displays an object from a specific viewpoint, just as the human eye would see it. An example of how one company uses the program is the experience of Birdair, which specializes in the production of fabric skylights and stadium covers. The fabric, called SHEERFILL, is a Teflon-coated fiberglass material developed in cooperation with the DuPont Company. SHEERFILL glazed structures are either tension structures or air-supported tension structures. Both are formed by patterned fabric sheets supported by a steel or aluminum frame or cable network. Birdair uses the Hidden Line Computer Code to illustrate a prospective structure to an architect or owner. The program generates a three-dimensional perspective with the hidden lines removed. This program is still used by Birdair and continues to be commercially available to the public.

  9. Measuring Cognitive Load in Test Items: Static Graphics versus Animated Graphics

    ERIC Educational Resources Information Center

    Dindar, M.; Kabakçi Yurdakul, I.; Inan Dönmez, F.

    2015-01-01

    The majority of multimedia learning studies focus on the use of graphics in the learning process, but very few of them examine the role of graphics in testing students' knowledge. This study investigates the use of static graphics versus animated graphics in a computer-based English achievement test from a cognitive load theory perspective. Three…

  10. Interoperability framework for communication between processes running on different mobile operating systems

    NASA Astrophysics Data System (ADS)

    Gal, A.; Filip, I.; Dragan, F.

    2016-02-01

    As we live in an era where mobile communication is everywhere around us, the need to communicate among the variety of devices we have available becomes ever more pressing. The major impediment to achieving communication between these devices is the incompatibility between the operating systems running on them. In the present paper we propose a framework that makes it possible to inter-operate between processes running on different mobile operating systems. The interoperability process makes use of any communication environment made available by the mobile devices where the processes are installed. The communication environment is chosen so that the transfer of data between the mobile devices is optimal. The paper defines the architecture of the framework, expanding on the functionality of, and the interrelation between, the modules that make it up. For the proof of concept, we propose to use three different mobile operating systems installed on three different types of mobile devices. Depending on various factors related to the structure of the mobile devices and the type of data to be transferred, the framework establishes the data transfer protocol to be used. The framework automates the interoperability process, with user intervention limited to a simple selection from the options that the framework suggests based on a full analysis of the structural and functional elements of the mobile devices involved.
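
    The paper does not reproduce its selection rules, but the spirit of the framework's automated choice of communication environment can be sketched as a capability-and-payload match between the two devices; the channel names and thresholds below are purely illustrative assumptions.

        def choose_transport(common_channels, payload_mb):
            # common_channels: channels exposed by *both* devices, e.g. the
            # result of a capability handshake. Thresholds are illustrative.
            if "wifi_direct" in common_channels and payload_mb > 10:
                return "wifi_direct"   # bulk transfers over a fast local link
            if "bluetooth" in common_channels and payload_mb <= 10:
                return "bluetooth"     # small payloads at low power
            if "cellular" in common_channels:
                return "cellular"      # fallback when no local link is shared
            raise RuntimeError("no common communication environment")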

  11. HLYWD: a program for post-processing data files to generate selected plots or time-lapse graphics

    SciTech Connect

    Munro, J.K. Jr.

    1980-05-01

    The program HLYWD is a post-processor of output files generated by large plasma simulation computations or of data files containing a time sequence of plasma diagnostics. It is intended to be used in a production mode for either type of application; i.e., it allows one to generate, along with the graphics sequence, segments containing a title, credits to those who performed the work, text to describe the graphics, and an acknowledgement of the funding agency. The current version is designed to generate 3D plots and allows one to select the type of display (linear or semi-log scales), the normalization of function values for display purposes, the viewing perspective, and an option for continuous rotation of surfaces. This program was developed with the intention of being relatively easy to use, reasonably flexible, and requiring a minimum investment of the user's time. It uses the TV80 library of graphics software and ORDERLIB system software on the CDC 7600 at the National Magnetic Fusion Energy Computing Center at Lawrence Livermore Laboratory in California.

  12. Graphic engine resource management

    NASA Astrophysics Data System (ADS)

    Bautin, Mikhail; Dwarakinath, Ashok; Chiueh, Tzi-cker

    2008-01-01

    Modern consumer-grade 3D graphics cards boast computation and memory resources that can easily rival or even exceed those of standard desktop PCs. Although these cards are mainly designed for 3D gaming applications, their enormous computational power has attracted developers to port an increasing number of scientific computation programs to them, including matrix computation, collision detection, cryptography, database sorting, etc. As more and more applications run on 3D graphics cards, there is a need to allocate the computation/memory resources on these cards among the sharing applications more fairly and efficiently. In this paper, we describe the design, implementation and evaluation of a Graphics Processing Unit (GPU) scheduler based on Deficit Round Robin scheduling that successfully allocates to every process an equal share of the GPU time regardless of its demand. This scheduler, called GERM, estimates the execution time of each GPU command group based on dynamically collected statistics, and controls each process's GPU command production rate through its CPU scheduling priority. Measurements on the first GERM prototype show that this approach can keep the maximal GPU time consumption difference among concurrent GPU processes consistently below 5% for a variety of application mixes.
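
    The underlying discipline is classical Deficit Round Robin. A minimal sketch over per-process command queues, assuming a caller-supplied cost estimator in place of GERM's dynamically collected statistics:

        from collections import deque

        def drr_schedule(queues, quantum, cost, dispatch):
            # queues: {process: deque of GPU command groups}
            # cost(cmd): estimated GPU time of a command group (GERM derives
            # this from runtime statistics; here the caller supplies it).
            deficits = {proc: 0.0 for proc in queues}
            while any(queues.values()):
                for proc, cmds in queues.items():
                    if not cmds:
                        continue
                    deficits[proc] += quantum  # one quantum of credit per round
                    while cmds and cost(cmds[0]) <= deficits[proc]:
                        deficits[proc] -= cost(cmds[0])
                        dispatch(cmds.popleft())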

  13. Dynamic stepping information process method in mobile bio-sensing computing environments.

    PubMed

    Lee, Tae-Gyu; Lee, Seong-Hoon

    2014-01-01

    Recently, interest in human longevity free from disease has been converging into a single system framework, along with the development of the mobile computing environment, the diversification of remote medical systems, and the aging of society. Such a converged system enables the implementation of a bioinformatics system that offers various supplementary information services by sensing and gathering the health conditions and other bio-information of mobile users to build up medical information. The existing bio-information system performs a static, unchanging process: once the bio-information process defined at initial system configuration has been executed, it never changes. Such a static process, however, is ineffective for a mobile bio-information system performing mobile computing. In particular, reconfiguring the process or changing its methods carries the inconvenient duty of redefining and re-executing it from scratch. This study proposes a dynamic process design and execution method to overcome this inefficiency. PMID:24704651

  14. Mobile Technology and CAD Technology Integration in Teaching Architectural Design Process for Producing Creative Product

    ERIC Educational Resources Information Center

    Bin Hassan, Isham Shah; Ismail, Mohd Arif; Mustafa, Ramlee

    2011-01-01

    The purpose of this research is to examine the effect of integrating mobile and CAD technology on teaching the architectural design process to Malaysian polytechnic architectural students in producing a creative product. The website is set up based on Carroll's minimalist theory, while the mobile and CAD technology integration is based on Brown and…

  15. The Longitudinal Impact of Cognitive Speed of Processing Training on Driving Mobility

    ERIC Educational Resources Information Center

    Edwards, Jerri D.; Myers, Charlsie; Ross, Lesley A.; Roenker, Daniel L.; Cissell, Gayla M.; McLaughlin, Alexis M.; Ball, Karlene K.

    2009-01-01

    Purpose: To examine how cognitive speed of processing training affects driving mobility across a 3-year period among older drivers. Design and Methods: Older drivers with poor Useful Field of View (UFOV) test performance (indicating greater risk for subsequent at-fault crashes and mobility declines) were randomly assigned to either a speed of…

  16. Space Spurred Computer Graphics

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Dicomed Corporation was asked by NASA in the early 1970s to develop processing capabilities for recording images sent from Mars by the Viking spacecraft. The company produced a film recorder which increased the intensity levels and the capability for color recording. This development led to a strong technology base resulting in sophisticated computer graphics equipment. Dicomed systems are used to record output from CAD (computer aided design) and CAM (computer aided manufacturing) equipment, to update maps, and to produce computer generated animation.

  17. Building Regression Models: The Importance of Graphics.

    ERIC Educational Resources Information Center

    Dunn, Richard

    1989-01-01

    Points out reasons for using graphical methods to teach simple and multiple regression analysis. Argues that a graphically oriented approach has considerable pedagogic advantages in the exposition of simple and multiple regression. Shows that graphical methods may play a central role in the process of building regression models. (Author/LS)

  18. Mathematical Creative Activity and the Graphic Calculator

    ERIC Educational Resources Information Center

    Duda, Janina

    2011-01-01

    Teaching mathematics using graphic calculators has been an issue of didactic discussions for years. Finding ways in which graphic calculators can enrich the development process of creative activity in mathematically gifted students between the ages of 16-17 is the focus of this article. Research was conducted using graphic calculators with…

  19. Graphic Design Is Not a Medium.

    ERIC Educational Resources Information Center

    Gruber, John Edward, Jr.

    2001-01-01

    Discusses graphic design and reviews its development from analog processes to a digital tool with the use of computers. Topics include graphical user interfaces; the need for visual communication concepts; transmedia as opposed to repurposing; and graphic design instruction in higher education. (LRW)

  20. Fast point-based method of a computer-generated hologram for a triangle-patch model by using a graphics processing unit.

    PubMed

    Sugawara, Takuya; Ogihara, Yuki; Sakamoto, Yuji

    2016-01-20

    The point-based method and the fast-Fourier-transform-based method are commonly used for calculating computer-generated holograms. This paper proposes a novel fast calculation method for a patch model, built on the point-based method. The method provides a calculation time that is proportional to the number of patches but not to the number of point light sources. This means that the method is suitable for quickly calculating a wide area covered by patches. Experiments using a graphics processing unit indicated that the proposed method is about 8 times or more faster than the ordinary point-based method. PMID:26835949
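
    For reference, the ordinary point-based method that the authors accelerate sums one spherical wave per point source at every hologram pixel, which is exactly why its cost grows with the number of sources. A numpy sketch with illustrative parameters:

        import numpy as np

        def point_source_hologram(points, amps, wavelength=633e-9,
                                  nx=512, ny=512, pitch=8e-6):
            # Object wave on the hologram plane: a superposition of spherical
            # waves exp(ikr)/r from each point source (x, y, z).
            k = 2 * np.pi / wavelength
            x = (np.arange(nx) - nx / 2) * pitch
            y = (np.arange(ny) - ny / 2) * pitch
            X, Y = np.meshgrid(x, y)
            field = np.zeros((ny, nx), dtype=complex)
            for (px, py, pz), a in zip(points, amps):
                r = np.sqrt((X - px)**2 + (Y - py)**2 + pz**2)
                field += a * np.exp(1j * k * r) / r
            return field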

  1. Gasoline from coal in the state of Illinois: feasibility study. Volume I. Design. [KBW gasification process, ICI low-pressure methanol process and Mobil M-gasoline process

    SciTech Connect

    Not Available

    1980-01-01

    Volume 1 describes the proposed plant: the KBW gasification process, the ICI low-pressure methanol process and the Mobil M-gasoline process, along with ancillary processes such as the oxygen plant, shift process, RECTISOL purification process, sulfur recovery equipment and pollution control equipment. Numerous engineering diagrams are included. (LTN)

  2. Mobile Phone Service Process Hiccups at Cellular Inc.

    ERIC Educational Resources Information Center

    Edgington, Theresa M.

    2010-01-01

    This teaching case documents an actual case of process execution and failure. The case is useful in MIS introductory courses seeking to demonstrate the interdependencies within a business process, and the concept of cascading failure at the process level. This case demonstrates benefits and potential problems with information technology systems,…

  3. Spins Dynamics in a Dissipative Environment: Hierarchal Equations of Motion Approach Using a Graphics Processing Unit (GPU).

    PubMed

    Tsuchimoto, Masashi; Tanimura, Yoshitaka

    2015-08-11

    A system with many energy states coupled to a harmonic oscillator bath is considered. To study quantum non-Markovian system-bath dynamics numerically rigorously and nonperturbatively, we developed a computer code for the reduced hierarchy equations of motion (HEOM) for a graphics processing unit (GPU) that can treat systems as large as 4096 energy states. The code employs a Padé spectrum decomposition (PSD) for the construction of the HEOM and the exponential integrators. Dynamics of a quantum spin glass system are studied by calculating the free induction decay signal for the cases of 3 × 2 to 3 × 4 triangular lattices with antiferromagnetic interactions. We found that spins relax faster at lower temperature due to transitions through a quantum coherent state, as represented by the off-diagonal elements of the reduced density matrix, whereas it is known that spins relax more slowly in the classical case due to suppression of thermal activation. The decay of the spins is qualitatively similar regardless of the lattice size. The pathway of spin relaxation is analyzed under a sudden temperature drop condition. The Compute Unified Device Architecture (CUDA) based source code used in the present calculations is provided as Supporting Information. PMID:26574467

  4. Graphical fiber shaping control interface

    NASA Astrophysics Data System (ADS)

    Basso, Eric T.; Ninomiya, Yasuyuki

    2016-03-01

    In this paper, we present an improved graphical user interface for defining single-pass novel shaping techniques on glass processing machines that allows for streamlined process development. This approach offers researchers unique modularity and debugging capability during the process development phase, not usually afforded by comparable scripting languages.

  5. Chemical Effects in the Separation Process of a Differential Mobility / Mass Spectrometer System

    PubMed Central

    Schneider, Bradley B.; Covey, Thomas R.; Coy, Stephen L.; Krylov, Evgeny V.; Nazarov, Erkinjon G.

    2013-01-01

    In differential mobility spectrometry (DMS, also referred to as high field asymmetric waveform ion mobility spectrometry, FAIMS), ions are separated on the basis of the difference in their mobility under high and low electric fields. The addition of polar modifiers to the gas transporting the ions through a DMS enhances the formation of clusters in a field-dependent way and thus amplifies the high- and low-field mobility difference, resulting in increased peak capacity and separation power. Observations of the increase in mobility field dependence are consistent with a cluster formation model, also referred to as the dynamic cluster-decluster model. The uniqueness of the chemical interactions that occur between an ion and cluster-forming neutrals increases the selectivity of the separation, and the depression of low-field mobility relative to high-field mobility increases the compensation voltage and peak capacity. The effect of polar modifiers on the peak capacity across a broad range of chemicals has been investigated. We discuss the theoretical underpinnings which explain the observed effects. In contrast to the result from polar modifiers, we find that using mixtures of inert gases as the transport gas improves resolution by reducing peak width but has very little effect on peak capacity or selectivity. Inert gases do not cluster and thus do not reduce low-field mobility relative to high-field mobility. The observed changes in the differential mobility α parameter exhibited by different classes of compounds when the transport gas contains polar modifiers or has a significant fraction of inert gas can be explained on the basis of the physical mechanisms involved in the separation processes. PMID:20121077

  6. Weather information network including graphical display

    NASA Technical Reports Server (NTRS)

    Leger, Daniel R. (Inventor); Burdon, David (Inventor); Son, Robert S. (Inventor); Martin, Kevin D. (Inventor); Harrison, John (Inventor); Hughes, Keith R. (Inventor)

    2006-01-01

    An apparatus for providing weather information onboard an aircraft includes a processor unit and a graphical user interface. The processor unit processes weather information after it is received onboard the aircraft from a ground-based source, and the graphical user interface provides a graphical presentation of the weather information to a user onboard the aircraft. Preferably, the graphical user interface includes one or more user-selectable options for graphically displaying at least one of convection information, turbulence information, icing information, weather satellite information, SIGMET information, significant weather prognosis information, and winds aloft information.

  7. Effects of Mobile Instant Messaging on Collaborative Learning Processes and Outcomes: The Case of South Korea

    ERIC Educational Resources Information Center

    Kim, Hyewon; Lee, MiYoung; Kim, Minjeong

    2014-01-01

    The purpose of this paper was to investigate the effects of mobile instant messaging on collaborative learning processes and outcomes. The collaborative processes were measured in terms of different types of interactions. We measured the outcomes of the collaborations through both the students' taskwork and their teamwork. The collaborative…

  8. Twitter Micro-Blogging Based Mobile Learning Approach to Enhance the Agriculture Education Process

    ERIC Educational Resources Information Center

    Dissanayeke, Uvasara; Hewagamage, K. P.; Ramberg, Robert; Wikramanayake, G. N.

    2013-01-01

    The study examines how to introduce mobile learning within the domain of agriculture so as to enhance the agriculture education process. We propose to use Activity theory together with other methodologies, such as participatory methods, to design, implement, and evaluate mLearning activities. The study explores the process of introducing…

  9. A Web Graphics Primer.

    ERIC Educational Resources Information Center

    Buchanan, Larry

    1999-01-01

    Discusses the basic technical concepts of using graphics in World Wide Web pages, including: color depth and dithering; dots per inch; image size; file types, such as the Graphics Interchange Format (GIF) and the Joint Photographic Experts Group (JPEG) format; and software recommendations. (AEF)

  10. GRASP/Ada: Graphical Representations of Algorithms, Structures, and Processes for Ada. The development of a program analysis environment for Ada: Reverse engineering tools for Ada, task 2, phase 3

    NASA Technical Reports Server (NTRS)

    Cross, James H., II

    1991-01-01

    The main objective is the investigation, formulation, and generation of graphical representations of algorithms, structures, and processes for Ada (GRASP/Ada). The presented task, in which various graphical representations that can be extracted or generated from source code are described and categorized, is focused on reverse engineering. The following subject areas are covered: the system model; control structure diagram generator; object oriented design diagram generator; user interface; and the GRASP library.

  11. Shuttle Systems 3-D Applications: Application of 3-D Graphics in Engineering Training for Shuttle Ground Processing

    NASA Technical Reports Server (NTRS)

    Godfrey, Gary S.

    2003-01-01

    This project illustrates an animation of the orbiter mate to the external tank, an animation of the OMS POD installation to the orbiter, and a simulation of the landing gear mechanism at the Kennedy Space Center. A detailed storyboard was created to reflect each animation or simulation. Solid models were collected and translated into Pro/Engineer's prt and asm formats. These solid models included computer files of the orbiter, external tank, solid rocket booster, mobile launch platform, transporter, vehicle assembly building, OMS POD fixture, and landing gear. A depository of the above solid models was established. These solid models were translated into several formats. This depository contained the following files: stl for stereolithography, stp for neutral file work, shrinkwrap for compression, tiff for Photoshop work, jpeg for Internet use, and prt and asm for Pro/Engineer use. Solid models were created of the material handling sling, bay 3 platforms, and orbiter contact points. Animations were developed using mechanisms to reflect each storyboard. Every effort was made to build all models technically correct for engineering use. The result was an animated routine that could be used by NASA for training material handlers and uncovering engineering safety issues.

  12. Graphics Processing Unit (GPU) implementation of image processing algorithms to improve system performance of the Control, Acquisition, Processing, and Image Display System (CAPIDS) of the Micro-Angiographic Fluoroscope (MAF).

    PubMed

    Vasan, S N Swetadri; Ionita, Ciprian N; Titus, A H; Cartwright, A N; Bednarek, D R; Rudin, S

    2012-02-23

    We present the image processing upgrades implemented on a Graphics Processing Unit (GPU) in the Control, Acquisition, Processing, and Image Display System (CAPIDS) for the custom Micro-Angiographic Fluoroscope (MAF) detector. Most of the image processing currently implemented in the CAPIDS system is pixel independent; that is, the operation on each pixel is the same and the operation on one does not depend upon the result from the operation on the other, allowing the entire image to be processed in parallel. GPU hardware was developed for this kind of massive parallel processing implementation. Thus for an algorithm which has a high amount of parallelism, a GPU implementation is much faster than a CPU implementation. The image processing algorithm upgrades implemented on the CAPIDS system include flat field correction, temporal filtering, image subtraction, roadmap mask generation and display window and leveling. A comparison between the previous and the upgraded version of CAPIDS has been presented, to demonstrate how the improvement is achieved. By performing the image processing on a GPU, significant improvements (with respect to timing or frame rate) have been achieved, including stable operation of the system at 30 fps during a fluoroscopy run, a DSA run, a roadmap procedure and automatic image windowing and leveling during each frame. PMID:24027619
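
    Flat-field correction illustrates the pixel independence described above: each output pixel depends only on the same pixel of the raw, dark, and flat frames, so the operation maps onto one GPU thread per pixel. Below is a numpy stand-in for the CUDA kernel; the mean-based gain normalization is one common convention, not necessarily the CAPIDS one.

        import numpy as np

        def flat_field_correct(raw, dark, flat, eps=1e-6):
            # Per-pixel gain from the flat and dark frames; dividing by the
            # mean keeps the corrected image on the input intensity scale.
            denom = np.maximum(flat - dark, eps)  # guard against dead pixels
            gain = np.mean(denom) / denom
            return (raw - dark) * gain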

  13. Graphics processing unit (GPU) implementation of image processing algorithms to improve system performance of the control acquisition, processing, and image display system (CAPIDS) of the micro-angiographic fluoroscope (MAF)

    NASA Astrophysics Data System (ADS)

    Swetadri Vasan, S. N.; Ionita, Ciprian N.; Titus, A. H.; Cartwright, A. N.; Bednarek, D. R.; Rudin, S.

    2012-03-01

    We present the image processing upgrades implemented on a Graphics Processing Unit (GPU) in the Control, Acquisition, Processing, and Image Display System (CAPIDS) for the custom Micro-Angiographic Fluoroscope (MAF) detector. Most of the image processing currently implemented in the CAPIDS system is pixel independent; that is, the operation on each pixel is the same and the operation on one does not depend upon the result from the operation on the other, allowing the entire image to be processed in parallel. GPU hardware was developed for this kind of massive parallel processing implementation. Thus for an algorithm which has a high amount of parallelism, a GPU implementation is much faster than a CPU implementation. The image processing algorithm upgrades implemented on the CAPIDS system include flat field correction, temporal filtering, image subtraction, roadmap mask generation and display window and leveling. A comparison between the previous and the upgraded version of CAPIDS has been presented, to demonstrate how the improvement is achieved. By performing the image processing on a GPU, significant improvements (with respect to timing or frame rate) have been achieved, including stable operation of the system at 30 fps during a fluoroscopy run, a DSA run, a roadmap procedure and automatic image windowing and leveling during each frame.

  14. Graphics Processing Unit (GPU) implementation of image processing algorithms to improve system performance of the Control, Acquisition, Processing, and Image Display System (CAPIDS) of the Micro-Angiographic Fluoroscope (MAF)

    PubMed Central

    Vasan, S.N. Swetadri; Ionita, Ciprian N.; Titus, A.H.; Cartwright, A.N.; Bednarek, D.R.; Rudin, S.

    2012-01-01

    We present the image processing upgrades implemented on a Graphics Processing Unit (GPU) in the Control, Acquisition, Processing, and Image Display System (CAPIDS) for the custom Micro-Angiographic Fluoroscope (MAF) detector. Most of the image processing currently implemented in the CAPIDS system is pixel independent; that is, the operation on each pixel is the same and the operation on one does not depend upon the result from the operation on the other, allowing the entire image to be processed in parallel. GPU hardware was developed for this kind of massive parallel processing implementation. Thus for an algorithm which has a high amount of parallelism, a GPU implementation is much faster than a CPU implementation. The image processing algorithm upgrades implemented on the CAPIDS system include flat field correction, temporal filtering, image subtraction, roadmap mask generation and display window and leveling. A comparison between the previous and the upgraded version of CAPIDS has been presented, to demonstrate how the improvement is achieved. By performing the image processing on a GPU, significant improvements (with respect to timing or frame rate) have been achieved, including stable operation of the system at 30 fps during a fluoroscopy run, a DSA run, a roadmap procedure and automatic image windowing and leveling during each frame. PMID:24027619

  15. A graphical language for reliability model generation

    NASA Technical Reports Server (NTRS)

    Howell, Sandra V.; Bavuso, Salvatore J.; Haley, Pamela J.

    1990-01-01

    A graphical interface capability of the hybrid automated reliability predictor (HARP) is described. The graphics-oriented (GO) module provides the user with a graphical language for modeling system failure modes through the selection of various fault tree gates, including sequence dependency gates, or by a Markov chain. With this graphical input language, a fault tree becomes a convenient notation for describing a system. In accounting for any sequence dependencies, HARP converts the fault-tree notation to a complex stochastic process that is reduced to a Markov chain which it can then solve for system reliability. The graphics capability is available for use on an IBM-compatible PC, a Sun, and a VAX workstation. The GO module is written in the C programming language and uses the Graphical Kernel System (GKS) standard for graphics implementation. The PC, VAX, and Sun versions of the HARP GO module are currently in beta-testing.

  16. Graphics at DESY

    NASA Astrophysics Data System (ADS)

    Schilling, Peter K.

    1989-12-01

    After a short history of computer graphics at DESY, the introduction of graphics workstations based on true and "quasi" standards is described. An overview of graphics hardware and software at DESY is given, as well as of the communication facilities used. Some remarks about current and future developments conclude the paper.

  17. Emergency healthcare process automation using mobile computing and cloud services.

    PubMed

    Poulymenopoulou, M; Malamateniou, F; Vassilacopoulos, G

    2012-10-01

    Emergency care is basically concerned with the provision of pre-hospital and in-hospital medical and/or paramedical services, and it typically involves a wide variety of interdependent and distributed activities that can be interconnected to form emergency care processes within and between Emergency Medical Service (EMS) agencies and hospitals. Hence, in developing an information system for emergency care processes, it is essential to support individual process activities and to satisfy collaboration and coordination needs by providing ready access to patient and operational information regardless of location and time. Filling this information gap by enabling the provision of the right information, to the right people, at the right time fosters new challenges, including the specification of a common information format, interoperability among heterogeneous institutional information systems, and the development of new, ubiquitous trans-institutional systems. This paper is concerned with the development of integrated computer support for emergency care processes by evolving and cross-linking institutional healthcare systems. To this end, an integrated EMS cloud-based architecture has been developed that allows authorized users to access emergency case information in standardized document form, as proposed by the Integrating the Healthcare Enterprise (IHE) profile, uses the Organization for the Advancement of Structured Information Standards (OASIS) standard Emergency Data Exchange Language (EDXL) Hospital Availability Exchange (HAVE) for exchanging operational data with hospitals, and incorporates an intelligent module that supports triaging and selecting the most appropriate ambulances and hospitals for each case. PMID:22205383

  18. A service protocol for post-processing of medical images on the mobile device

    NASA Astrophysics Data System (ADS)

    He, Longjun; Ming, Xing; Xu, Lang; Liu, Qian

    2014-03-01

    With computing capability and display size growing, the mobile device has become a tool to help clinicians view patient information and medical images anywhere and anytime. It is difficult and time-consuming to transfer medical images with large data sizes from a picture archiving and communication system to a mobile client, since the wireless network is unstable and limited in bandwidth. Besides, limited by computing capability, memory and power endurance, it is hard to provide a satisfactory quality of experience for radiologists handling complex post-processing of medical images on the mobile device, such as real-time direct interactive three-dimensional visualization. In this work, remote rendering technology is employed to implement the post-processing of medical images instead of local rendering, and a service protocol is developed to standardize the communication between the render server and the mobile client. In order to let mobile devices with different platforms access post-processing of medical images, the Extensible Markup Language is used to describe this protocol, which contains four main parts: user authentication, medical image query/retrieval, 2D post-processing (e.g. window leveling, pixel value retrieval) and 3D post-processing (e.g. maximum intensity projection, multi-planar reconstruction, curved planar reformation and direct volume rendering). An instance was then implemented to verify the protocol. This instance can support mobile device access to post-processing of medical image services on the render server via a client application or a web page.
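
    By way of illustration, a 2D window/level request under such a protocol might be encoded as in the following snippet; the element and attribute names are hypothetical, since the paper does not reproduce its schema.

        import xml.etree.ElementTree as ET

        # Hypothetical message -- element names are ours, not the paper's schema.
        req = ET.Element("Request", type="2DPostProcessing")
        ET.SubElement(req, "Session", token="...")  # issued at authentication
        img = ET.SubElement(req, "Image", seriesUID="...", instance="47")
        ET.SubElement(img, "WindowLevel", center="40", width="400")  # CT soft tissue
        print(ET.tostring(req, encoding="unicode"))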

  19. Adaptive Sampling for Learning Gaussian Processes Using Mobile Sensor Networks

    PubMed Central

    Xu, Yunfei; Choi, Jongeun

    2011-01-01

    This paper presents a novel class of self-organizing sensing agents that adaptively learn an anisotropic, spatio-temporal Gaussian process using noisy measurements and move in order to improve the quality of the estimated covariance function. This approach is based on a class of anisotropic covariance functions of Gaussian processes introduced to model a broad range of spatio-temporal physical phenomena. The covariance function is assumed to be unknown a priori. Hence, it is estimated by the maximum a posteriori probability (MAP) estimator. The prediction of the field of interest is then obtained based on the MAP estimate of the covariance function. An optimal sampling strategy is proposed to minimize the information-theoretic cost function of the Fisher Information Matrix. Simulation results demonstrate the effectiveness and the adaptability of the proposed scheme. PMID:22163785
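
    Once the covariance hyperparameters have been estimated (by MAP in the paper), the field prediction itself is standard Gaussian process regression. A minimal sketch with an anisotropic squared-exponential covariance, where the per-dimension length scales stand in for the learned anisotropy:

        import numpy as np

        def gp_predict(X, y, Xs, lengthscales, sf2=1.0, noise=1e-2):
            # X: (n, d) observed locations, y: (n,) noisy measurements,
            # Xs: (m, d) query locations, lengthscales: (d,) anisotropy.
            def K(A, B):
                diff = (A[:, None, :] - B[None, :, :]) / lengthscales
                return sf2 * np.exp(-0.5 * np.sum(diff**2, axis=-1))
            Kxx = K(X, X) + noise * np.eye(len(X))
            alpha = np.linalg.solve(Kxx, y)
            return K(Xs, X) @ alpha  # posterior mean at the query locations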

  20. Fast Shepard interpolation on graphics processing units: potential energy surfaces and dynamics for H + CH4 → H2 + CH3.

    PubMed

    Welsch, Ralph; Manthe, Uwe

    2013-04-28

    A strategy for the fast evaluation of Shepard interpolated potential energy surfaces (PESs) utilizing graphics processing units (GPUs) is presented. Speedups of several orders of magnitude are gained for the title reaction on the ZFWCZ PES [Y. Zhou, B. Fu, C. Wang, M. A. Collins, and D. H. Zhang, J. Chem. Phys. 134, 064323 (2011)]. Thermal rate constants are calculated employing the quantum transition state concept and the multi-layer multi-configurational time-dependent Hartree approach. Results for the ZFWCZ PES are compared to rate constants obtained for other ab initio PESs, and problems are discussed. A revised PES is presented. Thermal rate constants obtained for the revised PES indicate that an accurate description of the anharmonicity around the transition state is crucial. PMID:23635122
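
    The kernel of the computation is embarrassingly parallel: every evaluation point independently forms an inverse-distance-weighted combination over the data points, which is what maps naturally onto one GPU thread per geometry. The sketch below shows plain Shepard weighting of scalar values; Collins-style PES interpolation weights local Taylor expansions instead, but the per-point structure is the same.

        import numpy as np

        def shepard(x, centers, values, p=4, eps=1e-12):
            # Inverse-distance weights; eps avoids division by zero when x
            # coincides with a data point.
            d = np.linalg.norm(centers - x, axis=1)
            w = 1.0 / (d**p + eps)
            return np.dot(w, values) / np.sum(w)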

  1. On the effective implementation of a boundary element code on graphics processing units using an out-of-core LU algorithm

    SciTech Connect

    D'Azevedo, Ed F; Nintcheu Fata, Sylvain

    2012-01-01

    A collocation boundary element code for solving the three-dimensional Laplace equation, publicly available from http://www.intetec.org, has been adapted to run on an Nvidia Tesla general-purpose graphics processing unit (GPU). Global matrix assembly and LU factorization of the resulting dense matrix were performed on the GPU. Out-of-core techniques were used to solve problems larger than the available GPU memory. The code achieved over eight times speedup in matrix assembly and about 56 Gflops/sec in the LU factorization using only 512 Mbytes of GPU memory. Details of the GPU implementation and comparisons with the standard sequential algorithm are included to illustrate the performance of the GPU code.

  2. NMR data visualization, processing, and analysis on mobile devices.

    PubMed

    Cobas, Carlos; Iglesias, Isaac; Seoane, Felipe

    2015-08-01

    Touch-screen computers are emerging as a popular platform for many applications, including those in chemistry and the analytical sciences. In this work, we present our implementation of a new NMR 'app' designed for hand-held and portable touch-controlled devices, such as smartphones and tablets. It features a flexible architecture formed by a powerful NMR processing and analysis kernel and an intuitive user interface that makes full use of the smart devices' haptic capabilities. Routine 1D and 2D NMR spectra acquired on most NMR instruments can be processed in a fully unattended way. More advanced experiments such as non-uniformly sampled NMR spectra are also supported through a very efficient parallelized Modified Iterative Soft Thresholding algorithm. Specific technical development features as well as the overall feasibility of using NMR software apps are also discussed. All aspects considered, the functionality of the app allows it to work as a stand-alone tool or as a 'companion' to more advanced desktop applications such as Mnova NMR. PMID:25924947
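
    Iterative soft thresholding, the family to which the parallelized algorithm belongs, reconstructs a sparse spectrum x from non-uniformly sampled data y = A x by alternating a gradient step with shrinkage. A generic sketch, not Mnova's implementation, under the assumption that the sampling-times-Fourier operator A is available as a dense matrix:

        import numpy as np

        def ista(y, A, lam=0.1, n_iter=200):
            # Gradient step on ||y - A x||^2 followed by complex soft thresholding.
            x = np.zeros(A.shape[1], dtype=complex)
            t = 1.0 / np.linalg.norm(A, 2) ** 2  # step from the Lipschitz bound
            for _ in range(n_iter):
                r = x + t * (A.conj().T @ (y - A @ x))
                shrink = np.maximum(1.0 - t * lam / np.maximum(np.abs(r), 1e-15), 0.0)
                x = shrink * r
            return x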

  3. Relativistic hydrodynamics on graphic cards

    NASA Astrophysics Data System (ADS)

    Gerhard, Jochen; Lindenstruth, Volker; Bleicher, Marcus

    2013-02-01

    We show how to accelerate relativistic hydrodynamics simulations using graphics cards (graphics processing units, GPUs). These improvements are of the highest relevance, e.g., to the field of high-energy nucleus-nucleus collisions at RHIC and LHC, where (ideal and dissipative) relativistic hydrodynamics is used to calculate the evolution of hot and dense QCD matter. The results reported here are based on the Sharp And Smooth Transport Algorithm (SHASTA), which is employed in many hydrodynamical models and hybrid simulation packages, e.g. the Ultrarelativistic Quantum Molecular Dynamics model (UrQMD). We have redesigned the SHASTA using the OpenCL computing framework to work on accelerators like GPUs as well as on multi-core processors. With the redesign of the algorithm, the hydrodynamic calculations have been accelerated by a factor of 160, allowing for event-by-event calculations and better statistics in hybrid calculations.

  4. Real Time Processing and Transferring ECG Signal by a Mobile Phone

    PubMed Central

    Raeiatibanadkooki, Mahsa; Quachani, Saeed Rahati; Khalilzade, Mohammadmahdi; Bahaadinbeigy, Kambiz

    2014-01-01

    The real-time ECG signal processing system based on mobile phones is very effective for the continuous monitoring of ambulatory patients. It can monitor cardiovascular patients in their daily lives and warn them in case of cardiac arrhythmia. In the proposed algorithm, a patient's ECG signal is processed by a mobile phone. An IIR low-pass filter with a 55 Hz cutoff frequency and order 3 is used to remove noise. The obtained SNR showed desirable noise removal, which helps physicians in their diagnosis. The Hilbert transform is used to locate the R peaks, the key component for distinguishing normal beats from abnormal ones. The sensitivity and positive predictivity of the algorithm are 96.97% and 95.63%, respectively. If an arrhythmia occurs, 4 seconds of the signal are displayed on the mobile phone and then sent to a remote medical center using the TCP/IP protocol. PMID:25684847
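
    The described chain -- a 3rd-order IIR low-pass at 55 Hz followed by the Hilbert transform to emphasize R peaks -- can be sketched with scipy; the Butterworth design, the sampling rate, and the peak-picking thresholds below are our assumptions, since the paper does not specify them.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert, find_peaks

        def detect_r_peaks(ecg, fs=360.0):
            b, a = butter(3, 55.0 / (fs / 2), btype="low")  # order-3 IIR low-pass
            clean = filtfilt(b, a, ecg)
            envelope = np.abs(hilbert(clean))  # analytic-signal magnitude
            peaks, _ = find_peaks(envelope, height=0.6 * envelope.max(),
                                  distance=int(0.25 * fs))  # ~250 ms refractory
            return peaks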

  5. Real Time Processing and Transferring ECG Signal by a Mobile Phone.

    PubMed

    Raeiatibanadkooki, Mahsa; Quachani, Saeed Rahati; Khalilzade, Mohammadmahdi; Bahaadinbeigy, Kambiz

    2014-12-01

    The real-time ECG signal processing system based on mobile phones is very effective for the continuous monitoring of ambulatory patients. It can monitor cardiovascular patients in their daily lives and warn them in case of cardiac arrhythmia. In the proposed algorithm, a patient's ECG signal is processed by a mobile phone. An IIR low-pass filter with a 55 Hz cutoff frequency and order 3 is used to remove noise. The obtained SNR showed desirable noise removal, which helps physicians in their diagnosis. The Hilbert transform is used to locate the R peaks, the key component for distinguishing normal beats from abnormal ones. The sensitivity and positive predictivity of the algorithm are 96.97% and 95.63%, respectively. If an arrhythmia occurs, 4 seconds of the signal are displayed on the mobile phone and then sent to a remote medical center using the TCP/IP protocol. PMID:25684847

  6. Programmer's Guide for Subroutine PRNT3D. Physical Processes in Terrestrial and Aquatic Ecosystems, Computer Programs and Graphics Capabilities.

    ERIC Educational Resources Information Center

    Gales, Larry

    These materials were designed to be used by life science students for instruction in the application of physical theory to ecosystem operation. Most modules contain computer programs which are built around a particular application of a physical process. PRNT3D is a subroutine package which generates a variety of printed plot displays. The displays…

  7. User's Guide for Subroutine PLOT3D. Physical Processes in Terrestrial and Aquatic Ecosystems, Computer Programs and Graphics Capabilities.

    ERIC Educational Resources Information Center

    Gales, Larry

    This module is part of a series designed to be used by life science students for instruction in the application of physical theory to ecosystem operation. Most modules contain computer programs which are built around a particular application of a physical process. PLOT3D is a subroutine package which generates a variety of three dimensional hidden…

  8. Design Standards for Instructional Computer Programs. Physical Processes in Terrestrial and Aquatic Ecosystems, Computer Programs and Graphics Capabilities.

    ERIC Educational Resources Information Center

    Gales, Larry

    These materials were designed to be used by life science students for instruction in the application of physical theory to ecosystem operation. Most modules contain computer programs which are built around a particular application of a physical process. The report describes design standards for the computer programs. They are designed to be…

  9. User's Guide for Subroutine FFORM. Physical Processes in Terrestrial and Aquatic Ecosystems, Computer Programs and Graphics Capabilities.

    ERIC Educational Resources Information Center

    Gales, Larry; Anderson, Lougenia

    These materials were designed to be used by life science students for instruction in the application of physical theory to ecosystem operation. Most modules contain computer programs which are built around a particular application of a physical process. FFORM is a portable format-free input subroutine package which simplifies the input of values…

  10. Programmer's Guide for Subroutine PLOT3D. Physical Processes in Terrestrial and Aquatic Ecosystems, Computer Programs and Graphics Capabilities.

    ERIC Educational Resources Information Center

    Gales, Larry

    This module is part of a series designed to be used by life science students for instruction in the application of physical theory to ecosystem operation. Most modules contain computer programs which are built around a particular application of a physical process. PLOT3D is a subroutine package which generates a variety of three-dimensional hidden…

  11. Color applied to printing graphic design: the importance of lighting in the color perception and specification process

    NASA Astrophysics Data System (ADS)

    Goncalves, Berenice S.; Pereira, Alice C.; Pereira, Fernando R.

    2002-06-01

    This work addresses the importance of lighting in the process of chromatic categorization, selection and specification applied to printed media. Some concepts regarding lighting are presented, such as color temperature, color appearance and the color rendering index. Finally, it highlights the necessity of evaluating samples under standard lighting conditions that match the environment where the final product will be displayed.

  12. User's Guide for Subroutine PRNT3D. Physical Processes in Terrestrial and Aquatic Ecosystems, Computer Programs and Graphics Capabilities.

    ERIC Educational Resources Information Center

    Gales, Larry

    These materials were designed to be used by life science students for instruction in the application of physical theory to ecosystem operation. Most modules contain computer programs which are built around a particular application of a physical process. PRNT3D is a subroutine package which generates a variety of printer plot displays. The displays…

  13. High Electron Mobility Transistor Structures on Sapphire Substrates Using CMOS Compatible Processing Techniques

    NASA Technical Reports Server (NTRS)

    Mueller, Carl; Alterovitz, Samuel; Croke, Edward; Ponchak, George

    2004-01-01

    System-on-a-chip (SOC) processes are under intense development for high-speed, high-frequency transceiver circuitry. As frequencies, data rates, and circuit complexity increase, the need for substrates that enable high-speed analog operation, low-power digital circuitry, and excellent isolation between devices becomes increasingly critical. SiGe/Si modulation-doped field effect transistors (MODFETs) with high carrier mobilities are currently under development to meet the active RF device needs. However, as the substrate normally used is Si, its low-to-modest resistivity causes large losses in the passive elements required for a complete high-frequency circuit. These losses are projected to become increasingly troublesome as device frequencies progress to the Ku-band (12 - 18 GHz) and beyond. Sapphire is an excellent substrate for high-frequency SOC designs because it supports excellent active and passive RF device performance as well as low-power digital operation. We are developing high electron mobility SiGe/Si transistor structures on r-plane sapphire, using either in-situ grown n-MODFET structures or ion-implanted high electron mobility transistor (HEMT) structures. Advantages of the MODFET structures include high electron mobilities at all temperatures (relative to ion-implanted HEMT structures), with mobility continuously improving down to cryogenic temperatures. We have measured electron mobilities over 1,200 and 13,000 sq cm/V-sec at room temperature and 0.25 K, respectively, in MODFET structures. The electron carrier densities were 1.6 and 1.33 x 10(exp 12)/sq cm at room and liquid helium temperature, respectively, denoting excellent carrier confinement. Using the ion-implantation technique, we have observed electron mobilities as high as 900 sq cm/V-sec at room temperature at a carrier density of 1.3 x 10(exp 12)/sq cm. The temperature dependence of mobility for both the MODFET and HEMT structures provides insights into the mechanisms that allow for enhanced

  14. Scalable Indoor Localization via Mobile Crowdsourcing and Gaussian Process

    PubMed Central

    Chang, Qiang; Li, Qun; Shi, Zesen; Chen, Wei; Wang, Weiping

    2016-01-01

    Indoor localization using Received Signal Strength Indication (RSSI) fingerprinting has been extensively studied for decades. The positioning accuracy is highly dependent on the density of the signal database. In areas without calibration data, however, this algorithm breaks down. Building and updating a dense signal database is labor intensive, expensive, and even impossible in some areas. Researchers are continually searching for better algorithms to create and update dense databases more efficiently. In this paper, we propose a scalable indoor positioning algorithm that works both in surveyed and unsurveyed areas. We first propose Minimum Inverse Distance (MID) algorithm to build a virtual database with uniformly distributed virtual Reference Points (RP). The area covered by the virtual RPs can be larger than the surveyed area. A Local Gaussian Process (LGP) is then applied to estimate the virtual RPs’ RSSI values based on the crowdsourced training data. Finally, we improve the Bayesian algorithm to estimate the user’s location using the virtual database. All the parameters are optimized by simulations, and the new algorithm is tested on real-case scenarios. The results show that the new algorithm improves the accuracy by 25.5% in the surveyed area, with an average positioning error below 2.2 m for 80% of the cases. Moreover, the proposed algorithm can localize the users in the neighboring unsurveyed area. PMID:26999139
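
    A flavor of the virtual-database construction: each virtual reference point receives an RSSI vector estimated from nearby crowdsourced fingerprints. The inverse-distance weighting below is a simple stand-in for the Local Gaussian Process the paper actually fits.

        import numpy as np

        def virtual_rp_rssi(vrp, sample_xy, sample_rssi, p=2, eps=1e-9):
            # vrp: (2,) virtual reference point; sample_xy: (n, 2) crowdsourced
            # positions; sample_rssi: (n, n_ap) RSSI vectors per sample.
            d = np.linalg.norm(sample_xy - vrp, axis=1)
            w = 1.0 / (d**p + eps)
            return (w[:, None] * sample_rssi).sum(axis=0) / w.sum()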

  15. Scalable Indoor Localization via Mobile Crowdsourcing and Gaussian Process.

    PubMed

    Chang, Qiang; Li, Qun; Shi, Zesen; Chen, Wei; Wang, Weiping

    2016-01-01

    Indoor localization using Received Signal Strength Indication (RSSI) fingerprinting has been extensively studied for decades. The positioning accuracy is highly dependent on the density of the signal database. In areas without calibration data, however, this algorithm breaks down. Building and updating a dense signal database is labor intensive, expensive, and even impossible in some areas. Researchers are continually searching for better algorithms to create and update dense databases more efficiently. In this paper, we propose a scalable indoor positioning algorithm that works both in surveyed and unsurveyed areas. We first propose Minimum Inverse Distance (MID) algorithm to build a virtual database with uniformly distributed virtual Reference Points (RP). The area covered by the virtual RPs can be larger than the surveyed area. A Local Gaussian Process (LGP) is then applied to estimate the virtual RPs' RSSI values based on the crowdsourced training data. Finally, we improve the Bayesian algorithm to estimate the user's location using the virtual database. All the parameters are optimized by simulations, and the new algorithm is tested on real-case scenarios. The results show that the new algorithm improves the accuracy by 25.5% in the surveyed area, with an average positioning error below 2.2 m for 80% of the cases. Moreover, the proposed algorithm can localize the users in the neighboring unsurveyed area. PMID:26999139

  16. Using analytic network process for evaluating mobile text entry methods.

    PubMed

    Ocampo, Lanndon A; Seva, Rosemary R

    2016-01-01

    This paper highlights a preference evaluation methodology for text entry methods on a touch-keyboard smartphone using the analytic network process (ANP). Evaluations of text entry methods in the literature mainly consider speed and accuracy. This study presents an alternative means of selecting a text entry method that considers user preference. A case study was carried out with a group of experts who were asked to develop a selection decision model for five text entry methods. The decision problem is flexible enough to reflect the interdependencies of decision elements that are necessary to describe real-life conditions. Results showed that the QWERTY method is preferred over the other text entry methods, while the arrangement of keys is the most important criterion in characterizing a sound method. Sensitivity analysis, using simulation of normally distributed random numbers under fairly large perturbations, showed the foregoing results to be reliable enough to reflect robust judgment. The main contribution of this paper is the introduction of a multi-criteria decision approach to the preference evaluation of text entry methods. PMID:26360215

  17. A Multi-Objective Compounded Local Mobile Cloud Architecture Using Priority Queues to Process Multiple Jobs.

    PubMed

    Wei, Xiaohui; Sun, Bingyi; Cui, Jiaxu; Xu, Gaochao

    2016-01-01

    As a result of the greatly increased use of mobile devices, the disadvantages of portable devices have gradually begun to emerge. To solve these problems, the use of mobile cloud computing assisted by cloud data centers has been proposed. However, cloud data centers are always very far from the mobile requesters. In this paper, we propose an improved multi-objective local mobile cloud model: Compounded Local Mobile Cloud Architecture with Dynamic Priority Queues (LMCpri). This new architecture temporarily stores jobs that arrive simultaneously at the cloudlet in different priority positions according to the result of auction processing, and then executes the partitioned tasks on capable helpers. In the Scheduling Module, NSGA-II is employed as the scheduling algorithm to shorten processing time and decrease requester cost relative to PSO and sequential scheduling. The simulation results show that setting the number of iterations to 30 is the best choice for the system. In addition, compared with LMCque, LMCpri is able to effectively accommodate a requester who would like his job to be executed in advance, and it shortens execution time. Finally, we conduct a comparative experiment between LMCpri and a cloud-assisting architecture, and the results reveal that LMCpri presents a better performance advantage than the cloud-assisting architecture. PMID:27419854
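
    The queueing discipline at the cloudlet can be sketched with a standard binary heap; the auction that assigns the priorities and the NSGA-II scheduler that follows are not reproduced here.

        import heapq
        import itertools

        class CloudletQueue:
            # Jobs arriving together are enqueued at auction-assigned priorities;
            # the counter breaks ties in FIFO order among equal priorities.
            def __init__(self):
                self._heap = []
                self._seq = itertools.count()

            def push(self, job, priority):
                heapq.heappush(self._heap, (-priority, next(self._seq), job))

            def pop(self):
                return heapq.heappop(self._heap)[2]  # highest priority first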

  18. A Multi-Objective Compounded Local Mobile Cloud Architecture Using Priority Queues to Process Multiple Jobs

    PubMed Central

    Wei, Xiaohui; Sun, Bingyi; Cui, Jiaxu; Xu, Gaochao

    2016-01-01

    As a result of the greatly increased use of mobile devices, the disadvantages of portable devices have gradually begun to emerge. To solve these problems, the use of mobile cloud computing assisted by cloud data centers has been proposed. However, cloud data centers are always very far from the mobile requesters. In this paper, we propose an improved multi-objective local mobile cloud model: Compounded Local Mobile Cloud Architecture with Dynamic Priority Queues (LMCpri). This new architecture temporarily stores jobs that arrive simultaneously at the cloudlet in different priority positions according to the result of auction processing, and then executes the partitioned tasks on capable helpers. In the Scheduling Module, NSGA-II is employed as the scheduling algorithm to shorten processing time and decrease requester cost relative to PSO and sequential scheduling. The simulation results show that setting the number of iterations to 30 is the best choice for the system. In addition, compared with LMCque, LMCpri is able to effectively accommodate a requester who would like his job to be executed in advance, and it shortens execution time. Finally, we conduct a comparative experiment between LMCpri and a cloud-assisting architecture, and the results reveal that LMCpri presents a better performance advantage than the cloud-assisting architecture. PMID:27419854

  19. Using compute unified device architecture-enabled graphic processing unit to accelerate fast Fourier transform-based regression Kriging interpolation on a MODIS land surface temperature image

    NASA Astrophysics Data System (ADS)

    Hu, Hongda; Shu, Hong; Hu, Zhiyong; Xu, Jianhui

    2016-04-01

    Kriging interpolation provides the best linear unbiased estimation for unobserved locations, but its heavy computation limits the manageable problem size in practice. To address this issue, an efficient interpolation procedure incorporating the fast Fourier transform (FFT) was developed. Extending this efficient approach, we propose an FFT-based parallel algorithm to accelerate regression Kriging interpolation on an NVIDIA® compute unified device architecture (CUDA)-enabled graphic processing unit (GPU). A high-performance cuFFT library in the CUDA toolkit was introduced to execute computation-intensive FFTs on the GPU, and three time-consuming processes were redesigned as kernel functions and executed on the CUDA cores. A MODIS land surface temperature 8-day image tile at a resolution of 1 km was resampled to create experimental datasets at eight different output resolutions. These datasets were used as the interpolation grids with different sizes in a comparative experiment. Experimental results show that speedup of the FFT-based regression Kriging interpolation accelerated by GPU can exceed 1000 when processing datasets with large grid sizes, as compared to the traditional Kriging interpolation running on the CPU. These results demonstrate that the combination of FFT methods and GPU-based parallel computing techniques greatly improves the computational performance without loss of precision.
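
    The enabling observation is that convolution over a regular grid costs O(N log N) via the FFT instead of O(N^2) directly, and cuFFT moves those transforms onto the GPU. A CPU-side numpy sketch of the circular convolution at the core of FFT-based Kriging:

        import numpy as np

        def fft_convolve2d(grid, kernel):
            # Zero-pad the kernel to the grid size and center it so the
            # convolution does not shift the output; boundaries wrap.
            ky, kx = kernel.shape
            pad = np.zeros_like(grid, dtype=float)
            pad[:ky, :kx] = kernel
            pad = np.roll(pad, (-(ky // 2), -(kx // 2)), axis=(0, 1))
            return np.real(np.fft.ifft2(np.fft.fft2(grid) * np.fft.fft2(pad)))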

  20. IMAT graphics manual

    NASA Technical Reports Server (NTRS)

    Stockwell, Alan E.; Cooper, Paul A.

    1991-01-01

    The Integrated Multidisciplinary Analysis Tool (IMAT) consists of a menu driven executive system coupled with a relational database which links commercial structures, structural dynamics and control codes. The IMAT graphics system, a key element of the software, provides a common interface for storing, retrieving, and displaying graphical information. The IMAT Graphics Manual shows users of commercial analysis codes (MATRIXx, MSC/NASTRAN and I-DEAS) how to use the IMAT graphics system to obtain high quality graphical output using familiar plotting procedures. The manual explains the key features of the IMAT graphics system, illustrates their use with simple step-by-step examples, and provides a reference for users who wish to take advantage of the flexibility of the software to customize their own applications.

  1. The TEKLIB graphic library

    NASA Technical Reports Server (NTRS)

    Bostic, S. W.

    1983-01-01

    TEKLIB is a library of procedures written in TI PASCAL to perform basic graphic tasks. TEKLIB was written to provide an interface between a graphics terminal and the TI 990. The TI 990 is used as a controller for the Finite Element Machine which is an array of microprocessors designed to solve problems by finite element methods in parallel. The use of TEKLIB provides a means of inputting data graphically and displaying output.

  2. The Identification and Classification of Graphic Communication Technology.

    ERIC Educational Resources Information Center

    Fecik, John T.

    All graphic reproduction processes are a means of communication. The purpose of this study was to identify and classify common elements of graphic communication technology into a structure representing the various industrial techniques. Six graphic reproduction processes were identified as relief, intaglio, planography, screen process,…

  3. Improvement of MS (multiple sclerosis) CAD (computer aided diagnosis) performance using C/C++ and computing engine in the graphical processing unit (GPU)

    NASA Astrophysics Data System (ADS)

    Suh, Joohyung; Ma, Kevin; Le, Anh

    2011-03-01

    Multiple Sclerosis (MS) is a disease caused by damaged myelin around the axons of the brain and spinal cord. Currently, MR imaging is used for diagnosis, but it is highly variable and time-consuming, since lesion detection and the estimation of lesion volume are performed manually. For this reason, we developed a CAD (Computer Aided Diagnosis) system to assist the segmentation of MS lesions and facilitate the physician's diagnosis. The MS CAD system utilizes the K-NN (k-nearest neighbor) algorithm to detect and segment the lesion volume in an area on a per-voxel basis. The prototype MS CAD system was developed under the MATLAB environment and currently consumes a huge amount of time to process data. In this paper we present the development of a second version of the MS CAD system, which has been converted into C/C++ in order to take advantage of the GPU (Graphical Processing Unit), which provides parallel computation. With the realization in C/C++ and the utilization of the GPU, we expect to cut running time drastically. The paper investigates the conversion from MATLAB to C/C++ and the utilization of a high-end GPU for parallel computation of data to improve the algorithm performance of the MS CAD.
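
    The voxel-wise K-NN step is what makes the workload GPU-friendly: each voxel is classified independently of all the others. A minimal sketch, in which the feature construction and the choice of k are our assumptions:

        import numpy as np

        def knn_label(voxel_features, train_features, train_labels, k=5):
            # train_labels: small non-negative integers (e.g. 0 = normal,
            # 1 = lesion); each voxel gets the majority label of its k
            # nearest training voxels in feature space.
            d = np.linalg.norm(train_features - voxel_features, axis=1)
            nearest = train_labels[np.argsort(d)[:k]]
            return np.bincount(nearest).argmax()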

  4. Real-time intraoperative 4D full-range FD-OCT based on the dual graphics processing units architecture for microsurgery guidance.

    PubMed

    Zhang, Kang; Kang, Jin U

    2011-01-01

    Real-time 4D full-range complex-conjugate-free Fourier-domain optical coherence tomography (FD-OCT) is implemented using a dual graphics processing unit (dual-GPU) architecture. One GPU is dedicated to the FD-OCT data processing while the second is used for volume rendering and display. A GPU-accelerated non-uniform fast Fourier transform (NUFFT) is also implemented to suppress the side lobes of the point spread function and improve image quality. Using a 128,000 A-scan/second OCT spectrometer, we obtained 5 volumes/second real-time full-range 3D OCT imaging. A complete micro-manipulation of a phantom using a microsurgical tool was monitored by multiple volume renderings of the same 3D data set at different view angles. Compared to the conventional surgical microscope, this technology provides surgeons with a more comprehensive spatial view of the microsurgical site and could serve as an effective intraoperative guidance tool. PMID:21483601
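
    The per-A-scan processing that the first GPU performs is, at its core, a background-subtracted Fourier transform of each spectrum. A simplified NumPy sketch of that pipeline (the complex-conjugate-removal and NUFFT refinements described above are omitted; spectra are assumed already k-linearized, and the array sizes are placeholders):

      import numpy as np

      def spectra_to_ascans(spectra):
          # spectra: (n_ascans, n_pixels) raw spectrometer lines.
          dc = spectra.mean(axis=0)              # fixed reference-arm background
          fringes = spectra - dc                 # remove the DC term
          depth = np.fft.ifft(fringes, axis=1)   # transform along wavenumber
          half = depth.shape[1] // 2             # keep one (non-mirrored) half
          return 20 * np.log10(np.abs(depth[:, :half]) + 1e-12)  # dB A-scans

      ascans = spectra_to_ascans(np.random.rand(1000, 2048))
      print(ascans.shape)  # (1000, 1024)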

  5. Graphics processing unit aided highly stable real-time spectral-domain optical coherence tomography at 1375 nm based on dual-coupled-line subtraction

    NASA Astrophysics Data System (ADS)

    Kim, Ji-hyun; Han, Jae-Ho; Jeong, Jichai

    2013-04-01

    We have proposed and demonstrated a highly stable spectral-domain optical coherence tomography (SD-OCT) system based on dual-coupled-line subtraction. The proposed system achieved an ultrahigh axial resolution of 5 μm by combining four kinds of spectrally shifted superluminescent diodes at 1375 nm. Using the dual-coupled-line subtraction method, we made the system insensitive to fluctuations of the optical intensity that can arise in various clinical and experimental conditions. The imaging stability was verified by perturbing the intensity through bending of an optical fiber; among the systems compared, only the proposed system reduced the resulting noise. The proposed method also requires lower computational complexity than conventional mean- and median-line subtraction. The real-time SD-OCT scheme was implemented with graphics processing unit (GPU)-aided signal processing. This is the first reported method for reducing A-line-wise fixed-pattern noise in a single-shot image without estimating the DC component.
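
    For context, the conventional median-line subtraction that this record compares against removes A-line-wise fixed-pattern noise by subtracting the per-pixel median across all A-lines in a frame. A minimal sketch (the dual-coupled-line method itself uses reference lines from a second coupler and is not reproduced here):

      import numpy as np

      def median_line_subtract(frame):
          # frame: (n_alines, n_pixels). The fixed pattern is common to all
          # A-lines, so the median over A-lines estimates it robustly.
          fixed_pattern = np.median(frame, axis=0)
          return frame - fixed_pattern

      clean = median_line_subtract(np.random.rand(512, 1024))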

  6. Getting Graphic at the School Library.

    ERIC Educational Resources Information Center

    Kan, Kat

    2003-01-01

    Provides information for school libraries interested in acquiring graphic novels. Discusses theft prevention; processing and cataloging; maintaining the collection; what to choose, with two Web sites for more information on graphic novels for libraries; collection development decisions; and Japanese comics called Manga. Includes an annotated list…

  7. Graphics mini manual

    NASA Technical Reports Server (NTRS)

    Taylor, Nancy L.; Randall, Donald P.; Bowen, John T.; Johnson, Mary M.; Roland, Vincent R.; Matthews, Christine G.; Gates, Raymond L.; Skeens, Kristi M.; Nolf, Scott R.; Hammond, Dana P.

    1990-01-01

    The computer graphics capabilities available at the Center are introduced and their use is explained. More specifically, the manual identifies and describes the various graphics software and hardware components, details the interfaces between these components, and provides information concerning the use of these components at LaRC.

  8. Quantitative Graphics in Newspapers.

    ERIC Educational Resources Information Center

    Tankard, James W., Jr.

    The use of quantitative graphics in newspapers requires achieving a balance between being accurate and getting the attention of the reader. The statistical representations in newspapers are drawn by graphic designers whose key technique is fusion--the striking combination of two visual images. This technique often results in visual puns,…

  9. How Computer Graphics Work.

    ERIC Educational Resources Information Center

    Prosise, Jeff

    This document presents the principles behind modern computer graphics without straying into the arcane languages of mathematics and computer science. Illustrations accompany the clear, step-by-step explanations that describe how computers draw pictures. The 22 chapters of the book are organized into 5 sections. "Part 1: Computer Graphics in…

  10. Molecular Graphics and Chemistry.

    ERIC Educational Resources Information Center

    Weber, Jacques; And Others

    1992-01-01

    Explains molecular graphics, i.e., the application of computer graphics techniques to investigate molecular structure, function, and interaction. Structural models and molecular surfaces are discussed, and a theoretical model that can be used for the evaluation of intermolecular interaction energies for organometallics is described. (45…

  11. Fully Solution-Processed Flexible Organic Thin Film Transistor Arrays with High Mobility and Exceptional Uniformity

    PubMed Central

    Fukuda, Kenjiro; Takeda, Yasunori; Mizukami, Makoto; Kumaki, Daisuke; Tokito, Shizuo

    2014-01-01

    Printing fully solution-processed organic electronic devices may revolutionize the production of flexible electronics for various applications. However, difficulties in forming thin, flat, uniform films through printing techniques have been responsible for poor device performance and low yields. Here, we report on fully solution-processed organic thin-film transistor (TFT) arrays with greatly improved performance and yields, achieved by layering solution-processable materials such as silver nanoparticle inks, organic semiconductors, and insulating polymers on thin plastic films. A treatment layer improves carrier injection between the source/drain electrodes and the semiconducting layer and dramatically reduces contact resistance. Furthermore, an organic semiconductor with large-crystal grains results in TFT devices with shorter channel lengths and higher field-effect mobilities. We obtained mobilities of over 1.2 cm2 V−1 s−1 in TFT devices with channel lengths shorter than 20 μm. By combining these fabrication techniques, we built highly uniform organic TFT arrays with average mobility levels as high as 0.80 cm2 V−1 s−1 and ideal threshold voltages of 0 V. These results represent major progress in the fabrication of fully solution-processed organic TFT device arrays. PMID:24492785

  12. Fully Solution-Processed Flexible Organic Thin Film Transistor Arrays with High Mobility and Exceptional Uniformity

    NASA Astrophysics Data System (ADS)

    Fukuda, Kenjiro; Takeda, Yasunori; Mizukami, Makoto; Kumaki, Daisuke; Tokito, Shizuo

    2014-02-01

    Printing fully solution-processed organic electronic devices may revolutionize the production of flexible electronics for various applications. However, difficulties in forming thin, flat, uniform films through printing techniques have been responsible for poor device performance and low yields. Here, we report on fully solution-processed organic thin-film transistor (TFT) arrays with greatly improved performance and yields, achieved by layering solution-processable materials such as silver nanoparticle inks, organic semiconductors, and insulating polymers on thin plastic films. A treatment layer improves carrier injection between the source/drain electrodes and the semiconducting layer and dramatically reduces contact resistance. Furthermore, an organic semiconductor with large-crystal grains results in TFT devices with shorter channel lengths and higher field-effect mobilities. We obtained mobilities of over 1.2 cm2 V-1 s-1 in TFT devices with channel lengths shorter than 20 μm. By combining these fabrication techniques, we built highly uniform organic TFT arrays with average mobility levels as high as 0.80 cm2 V-1 s-1 and ideal threshold voltages of 0 V. These results represent major progress in the fabrication of fully solution-processed organic TFT device arrays.

  13. Super-Sonograms and graphical seismic source locations: Facing the challenge of real-time data processing in an OSI SAMS installation

    NASA Astrophysics Data System (ADS)

    Joswig, Manfred

    2010-05-01

    The installation and operation of an OSI seismic aftershock monitoring system (SAMS) is bound by strict time constraints: 30+ small arrays must be set up within days, and data screening must cope with the daily seismogram input. This is a significant challenge, since any potential single ML −2.0 aftershock from a potential nuclear test must be detected and discriminated against a variety of higher-amplitude noise bursts. No automated approach can handle this task to date; thus some 200 traces of 24/7 data must be screened manually, with a time resolution sufficient to recover signals of just a few seconds' duration and with tiny amplitudes just above the threshold of ambient noise. Previous tests confirmed that this task cannot be performed by time-domain signal screening via established seismological processing software, e.g., PITSA, SEISAN, or GEOTOOLS. Instead, we introduced 'SonoView', a seismic diagnosis tool based on a compilation of array traces into super-sonograms. Several hours of cumulative array data can be displayed at once on a single computer screen, without sacrificing the necessary detectability of few-second signals. 'TraceView' then guides the analyst in selecting the relevant traces with the best SNR, and 'HypoLine' offers interactive, graphical location tools for fast epicenter estimates and source signature identification. A previous release of this software suite was successfully applied at IFE08 in Kazakhstan and supported the seismic sub-team of OSI in its timely report compilation.
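
    A sonogram in this sense is essentially a log-scaled spectrogram of the seismic trace. A minimal sketch of how a single array trace could be rendered for visual screening (SciPy is used here for illustration; SonoView's noise-adapted super-sonogram compilation is specific to the cited work and not reproduced):

      import numpy as np
      from scipy.signal import spectrogram

      fs = 100.0                              # Hz, typical short-period sampling
      trace = np.random.randn(int(600 * fs))  # 10 minutes of placeholder data
      f, t, sxx = spectrogram(trace, fs=fs, nperseg=256, noverlap=192)
      sono = np.log10(sxx + 1e-20)            # log power: faint signals stand out
      print(sono.shape)                       # (frequency bins, time bins)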

  14. Realtime cerebellum: a large-scale spiking network model of the cerebellum that runs in realtime using a graphics processing unit.

    PubMed

    Yamazaki, Tadashi; Igarashi, Jun

    2013-11-01

    The cerebellum plays an essential role in adaptive motor control. Once we are able to build a cerebellar model that runs in realtime, meaning that a computer simulation of 1 s in the simulated world completes within 1 s in the real world, the model could be used as a realtime adaptive neural controller for physical hardware such as humanoid robots. In this paper, we introduce "Realtime Cerebellum (RC)", a new implementation on a graphics processing unit (GPU) of our large-scale spiking network model of the cerebellum, which was originally built to study cerebellar mechanisms for simultaneous gain and timing control and which acted as a general-purpose supervised learning machine for spatiotemporal information, in the manner of reservoir computing. Owing to the massive parallel computing capability of a GPU, RC runs in realtime while qualitatively reproducing the Pavlovian delay eyeblink conditioning results of the previous version. RC was adopted as a realtime adaptive controller of a humanoid robot, which was instructed to learn online the proper timing to swing a bat to hit a flying ball. These results suggest that RC provides a means of applying the computational power of the cerebellum as a versatile supervised learning machine to engineering applications. PMID:23434303
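
    Large spiking networks map well to GPUs because every neuron's state update within a time step is identical and independent. A toy vectorized leaky integrate-and-fire update illustrates the pattern (parameters are illustrative, not those of the RC model):

      import numpy as np

      def lif_step(v, i_syn, dt=1e-3, tau=20e-3,
                   v_rest=-70.0, v_th=-50.0, v_reset=-70.0):
          # One Euler step for all neurons at once; this per-neuron
          # independence is what a GPU kernel exploits across many threads.
          v = v + dt * (-(v - v_rest) + i_syn) / tau
          spiked = v >= v_th
          v[spiked] = v_reset
          return v, spiked

      v = np.full(100_000, -70.0)
      v, spiked = lif_step(v, i_syn=np.random.rand(100_000) * 600.0)
      print(spiked.sum(), "neurons spiked this step")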

  15. Accelerating electrostatic interaction calculations with graphical processing units based on new developments of Ewald method using non-uniform fast Fourier transform.

    PubMed

    Yang, Sheng-Chun; Wang, Yong-Lei; Jiao, Gui-Sheng; Qian, Hu-Jun; Lu, Zhong-Yuan

    2016-01-30

    We present new algorithms to improve the performance of the ENUF method (F. Hedman, A. Laaksonen, Chem. Phys. Lett. 425, 2006, 142), which is essentially Ewald summation using the non-uniform FFT (NFFT) technique. A NearDistance algorithm is developed to substantially reduce the neighbor-list size in the real-space computation. In the reciprocal-space computation, a new NFFT-based algorithm is developed for evaluating electrostatic interaction energies and forces. Both real-space and reciprocal-space computations are further accelerated using graphics processing units (GPUs) with CUDA technology. In particular, the use of CUNFFT (NFFT based on CUDA) greatly reduces the reciprocal-space computation time. In order to reach the best performance of this method, we propose a procedure for selecting optimal parameters with controlled accuracies. With suitable parameters, we show that our method is a good alternative to the standard Ewald method, offering the same computational precision at a dramatically higher computational efficiency. PMID:26584145
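
    The decomposition being accelerated is the classical Ewald split of the Coulomb energy into a short-range real-space sum, a long-range reciprocal-space sum, and a self term. A direct (non-NFFT) NumPy sketch for a small, charge-neutral cubic box in Gaussian units, for orientation only (alpha and kmax are illustrative choices):

      import numpy as np
      from scipy.special import erfc
      from itertools import product

      def ewald_energy(q, r, L, alpha, kmax=6):
          n = len(q)
          # Real-space sum with minimum image (erfc screens it short-range).
          e_real = 0.0
          for i in range(n):
              for j in range(i + 1, n):
                  d = r[i] - r[j]
                  d -= L * np.round(d / L)
                  rij = np.linalg.norm(d)
                  e_real += q[i] * q[j] * erfc(alpha * rij) / rij
          # Reciprocal-space sum over k = 2*pi*m/L, m != 0.
          e_rec, V = 0.0, L ** 3
          for m in product(range(-kmax, kmax + 1), repeat=3):
              if m == (0, 0, 0):
                  continue
              k = 2 * np.pi * np.array(m) / L
              k2 = k @ k
              s = np.sum(q * np.exp(1j * (r @ k)))   # structure factor S(k)
              e_rec += (4 * np.pi / k2) * np.exp(-k2 / (4 * alpha ** 2)) * abs(s) ** 2
          e_rec /= 2 * V
          e_self = -alpha / np.sqrt(np.pi) * np.sum(q ** 2)
          return e_real + e_rec + e_self

      q = np.array([1.0, -1.0])
      r = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
      print(ewald_energy(q, r, L=10.0, alpha=0.8))  # ~ -1.0 (isolated-pair limit)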

  16. Molecular mobility and relaxation process of isolated lignin studied by multifrequency calorimetric experiments.

    PubMed

    Guigo, Nathanael; Mija, Alice; Vincent, Luc; Sbirrazzuoli, Nicolas

    2009-02-28

    The glass transition of lignin has been studied by multifrequency calorimetric measurements in order to highlight the morphological changes and the dynamic aspects associated with this relaxation process. The influences of water sorption and thermal annealing on molecular mobility have been considered. Additional investigations by thermogravimetry, infrared spectroscopy, and rheometry were performed to corroborate the findings. The relaxation process of annealed lignin shows different behaviour as a consequence of microstructural modifications of the lignin, explained by a redistribution of secondary bonds as well as the formation of new interunit linkages. Concerning the dynamic aspects, the apparent activation energy, E, and the size of the cooperatively rearranging region, V(crr), were evaluated from the frequency dependence and the heat capacity measurements of the glass transition, respectively. Compared to dried lignin, both E and V(crr) decrease significantly in a water-sorbed matrix, indicating that the three-dimensional structure exhibits higher mobility and is less confined. PMID:19209367

  17. A High Speed Mobile Courier Data Access System That Processes Database Queries in Real-Time

    NASA Astrophysics Data System (ADS)

    Gatsheni, Barnabas Ndlovu; Mabizela, Zwelakhe

    A secure, high-speed query-processing mobile courier data access (MCDA) system for a courier company has been developed. The system uses wireless networks in combination with wired networks to let an offsite worker (the courier) update a live database at the courier centre in real time. The system is protected by a VPN based on IPsec. To our knowledge, no existing system performs the courier task proposed in this paper.

  18. Mobile digital data acquisition and recording system for geoenergy process monitoring and control

    SciTech Connect

    Kimball, K B; Ogden, H C

    1980-12-01

    Three mobile, general purpose data acquisition and recording systems have been built to support geoenergy field experiments. These systems were designed to record and display information from large assortments of sensors used to monitor in-situ combustion recovery or similar experiments. They provide experimenters and operations personnel with easy access to current and past data for evaluation and control of the process, and provide permanent recordings for subsequent detailed analysis. The configurations of these systems and their current capabilities are briefly described.

  19. Anomalous diffusion due to hindering by mobile obstacles undergoing Brownian motion or Ornstein-Uhlenbeck processes.

    PubMed

    Berry, Hugues; Chaté, Hugues

    2014-02-01

    In vivo measurements of the passive movements of biomolecules or vesicles in cells consistently report "anomalous diffusion," where mean-squared displacements scale as a power law of time with exponent α<1 (subdiffusion). While the detailed mechanisms causing such behaviors are not always elucidated, movement hindrance by obstacles is often invoked. However, our understanding of how hindered diffusion leads to subdiffusion is based on diffusion amidst randomly located immobile obstacles. Here, we have used Monte Carlo simulations to investigate transient subdiffusion due to mobile obstacles with various modes of mobility. Our simulations confirm that the anomalous regimes rapidly disappear when the obstacles move by Brownian motion. By contrast, mobile obstacles with more confined displacements, e.g., Ornstein-Uhlenbeck motion, are shown to preserve subdiffusive regimes. The mean-squared displacement of the tracked protein displays convincing power laws with an anomalous exponent α that varies with the density of Ornstein-Uhlenbeck (OU) obstacles or the relaxation time scale of the OU process. In particular, some of the values we observed are significantly below the universal value predicted for immobile obstacles in two dimensions. Therefore, our results show that subdiffusion due to mobile obstacles with OU-type motion may account for the large variation range exhibited by experimental measurements in living cells and may explain why some experimental estimates are below the universal value predicted for immobile obstacles. PMID:25353510

  20. Anomalous diffusion due to hindering by mobile obstacles undergoing Brownian motion or Ornstein-Uhlenbeck processes

    NASA Astrophysics Data System (ADS)

    Berry, Hugues; Chaté, Hugues

    2014-02-01

    In vivo measurements of the passive movements of biomolecules or vesicles in cells consistently report "anomalous diffusion," where mean-squared displacements scale as a power law of time with exponent α<1 (subdiffusion). While the detailed mechanisms causing such behaviors are not always elucidated, movement hindrance by obstacles is often invoked. However, our understanding of how hindered diffusion leads to subdiffusion is based on diffusion amidst randomly located immobile obstacles. Here, we have used Monte Carlo simulations to investigate transient subdiffusion due to mobile obstacles with various modes of mobility. Our simulations confirm that the anomalous regimes rapidly disappear when the obstacles move by Brownian motion. By contrast, mobile obstacles with more confined displacements, e.g., Ornstein-Uhlenbeck motion, are shown to preserve subdiffusive regimes. The mean-squared displacement of the tracked protein displays convincing power laws with an anomalous exponent α that varies with the density of Ornstein-Uhlenbeck (OU) obstacles or the relaxation time scale of the OU process. In particular, some of the values we observed are significantly below the universal value predicted for immobile obstacles in two dimensions. Therefore, our results show that subdiffusion due to mobile obstacles with OU-type motion may account for the large variation range exhibited by experimental measurements in living cells and may explain why some experimental estimates are below the universal value predicted for immobile obstacles.
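
    An Ornstein-Uhlenbeck obstacle is pulled back toward its anchor point by a restoring force, unlike a freely diffusing Brownian obstacle, so its mean-squared displacement saturates instead of growing linearly. A minimal Euler-Maruyama sketch of OU trajectories and their MSD (parameter values are illustrative only):

      import numpy as np

      rng = np.random.default_rng(0)
      n_obs, n_steps, dt = 1000, 2000, 0.01
      tau, sigma = 1.0, 1.0                 # relaxation time, noise amplitude

      x = np.zeros((n_steps, n_obs))
      for t in range(1, n_steps):
          # Euler-Maruyama: displacements relax toward the anchor at 0.
          x[t] = (x[t-1] - (x[t-1] / tau) * dt
                  + sigma * np.sqrt(dt) * rng.standard_normal(n_obs))

      msd = ((x - x[0]) ** 2).mean(axis=1)
      # Unlike Brownian motion (MSD ~ t), the OU MSD saturates near
      # sigma**2 * tau / 2, i.e., the obstacle stays confined.
      print(msd[-1])   # ~0.5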

  1. Graphics processing unit-accelerated non-rigid registration of MR images to CT images during CT-guided percutaneous liver tumor ablations

    PubMed Central

    Tokuda, Junichi; Plishker, William; Torabi, Meysam; Olubiyi, Olutayo I; Zaki, George; Tatli, Servet; Silverman, Stuart G.; Shekhar, Raj; Hata, Nobuhiko

    2015-01-01

    Rationale and Objectives: Accuracy and speed are essential for intraprocedural nonrigid MR-to-CT image registration in the assessment of tumor margins during CT-guided liver tumor ablations. While both accuracy and speed can be improved by limiting the registration to a region of interest (ROI), manual contouring of the ROI prolongs the registration process substantially. To achieve accurate and fast registration without the use of an ROI, we combined a nonrigid registration technique based on volume subdivision with hardware acceleration using a graphics processing unit (GPU). We compared the registration accuracy and processing time of the GPU-accelerated volume subdivision-based nonrigid registration technique with those of the conventional nonrigid B-spline registration technique. Materials and Methods: Fourteen image data sets of preprocedural MR and intraprocedural CT images for percutaneous CT-guided liver tumor ablations were obtained. Each set of images was registered using the GPU-accelerated volume subdivision technique and the B-spline technique. Manual contouring of the ROI was used only for the B-spline technique. Registration accuracies (Dice similarity coefficient (DSC) and 95% Hausdorff distance (HD)) and total processing time, including contouring of ROIs and computation, were compared using a paired Student's t-test. Results: Accuracies of the GPU-accelerated and B-spline registrations were 88.3 ± 3.7% vs 89.3 ± 4.9% (p = 0.41) for DSC and 13.1 ± 5.2 mm vs 11.4 ± 6.3 mm (p = 0.15) for HD, respectively. Total processing times of the GPU-accelerated and B-spline registration techniques were 88 ± 14 s vs 557 ± 116 s (p < 0.000000002), respectively; computation time alone did not differ significantly despite the difference in the complexity of the algorithms (p = 0.71). Conclusion: The GPU-accelerated volume subdivision technique was as accurate as the B-spline technique and required significantly less total processing time.
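
    The Dice similarity coefficient used above measures the overlap between two segmentations, DSC = 2|A∩B|/(|A|+|B|). A minimal sketch for binary masks (mask shapes chosen for illustration):

      import numpy as np

      def dice(a, b):
          # a, b: boolean masks of the same shape (e.g., liver contours on CT).
          a, b = a.astype(bool), b.astype(bool)
          denom = a.sum() + b.sum()
          return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

      a = np.zeros((64, 64, 64), bool); a[10:40, 10:40, 10:40] = True
      b = np.zeros((64, 64, 64), bool); b[12:42, 10:40, 10:40] = True
      print(round(dice(a, b), 3))   # high overlap -> close to 1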

  2. Do larger graphic health warnings on standardised cigarette packs increase adolescents’ cognitive processing of consumer health information and beliefs about smoking-related harms?

    PubMed Central

    White, Victoria; Williams, Tahlia; Faulkner, Agatha; Wakefield, Melanie

    2015-01-01

    Objective: To examine the impact of plain packaging of cigarettes with enhanced graphic health warnings on Australian adolescents' cognitive processing of warnings and awareness of different health consequences of smoking. Methods: Cross-sectional school-based surveys were conducted in 2011 (prior to the introduction of standardised packaging, n=6338) and 2013 (7-12 months afterwards, n=5915). Students indicated the frequency of attending to, reading, thinking, or talking about warnings. Students viewed a list of diseases or health effects and were asked to indicate whether each was caused by smoking. Two of these ('kidney and bladder cancer' and 'damages gums and teeth') were new, while the remainder had been promoted through previous health warnings and/or television campaigns. The 60% of students in 2011 and 65% in 2013 who had seen a cigarette pack in the previous 6 months form the sample for analysis. Changes in responses over time are examined. Results: Awareness that smoking causes bladder cancer increased between 2011 and 2013 (p=0.002). There was high agreement with statements reflecting health effects featured in previous warnings or advertisements, with little change over time. Exceptions were increases in the proportion agreeing that smoking is a leading cause of death (p<0.001) and causes blindness (p<0.001). The frequency of students reading, attending to, thinking, or talking about the health warnings on cigarette packs did not change. Conclusions: Acknowledgement of the negative health effects of smoking among Australian adolescents remains high. Apart from increased awareness of bladder cancer, the new requirements for packaging and health warnings did not increase adolescents' cognitive processing of warning information.

  3. Study on efficiency of time computation in x-ray imaging simulation base on Monte Carlo algorithm using graphics processing unit

    NASA Astrophysics Data System (ADS)

    Setiani, Tia Dwi; Suprijadi, Haryanto, Freddy

    2016-03-01

    Monte Carlo (MC) is one of the most powerful techniques for simulation in x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One of the MC-based codes widely used for radiographic image simulation is MC-GPU, a code developed by Andreu Badal. This study aimed to investigate the computation time of x-ray imaging simulation on a GPU (graphics processing unit) compared to a standard CPU (central processing unit). Furthermore, the effect of physical parameters on the quality of the radiographic images, and a comparison of the image quality resulting from simulation on the GPU and the CPU, are evaluated in this paper. The simulations were run serially on a CPU and on two GPUs with 384 cores and 2304 cores. In the GPU simulations, each core calculates one photon, so a large number of photons are calculated simultaneously. Results show that the simulations on the GPU were significantly accelerated compared to the CPU. The simulations on the 2304-core GPU were performed about 64-114 times faster than on the CPU, while the simulations on the 384-core GPU were performed about 20-31 times faster than on a single CPU core. Another result shows that optimum image quality was obtained with histories starting from 10^8 and energies from 60 keV to 90 keV. Analyzed by a statistical approach, the quality of the GPU and CPU images is essentially the same.
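
    The one-photon-per-core mapping works because each photon history is sampled independently. A toy version of the innermost transport step, exponential free-path sampling through a homogeneous slab, shows the embarrassingly parallel structure (the attenuation coefficient and geometry are placeholders, not MC-GPU's physics):

      import numpy as np

      rng = np.random.default_rng(42)
      n_photons = 10_000_000
      mu = 0.2          # 1/cm, placeholder linear attenuation coefficient
      thickness = 5.0   # cm, slab thickness

      # Free path to first interaction: s = -ln(U)/mu, one draw per photon.
      path = -np.log(rng.random(n_photons)) / mu
      transmitted = (path > thickness).mean()
      print(transmitted, "vs analytic", np.exp(-mu * thickness))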

  4. Enabling virtual reality on mobile devices: enhancing students' learning experience

    NASA Astrophysics Data System (ADS)

    Feisst, Markus E.

    2011-05-01

    Nowadays, mobile devices are increasingly powerful in terms of processing power, main memory, and storage, as well as graphical output capability and support for 3D, mostly via OpenGL ES. Modern devices can therefore run virtual reality (VR) applications. Most students own (or will own in the future) one of these more powerful mobile devices. Students who own such a device already use it to communicate (SMS, Twitter, etc.) and/or to listen to podcasts. Taking this into account, it makes sense to improve the students' learning experience by enabling mobile devices to display VR content.

  5. Image reproduction with interactive graphics

    NASA Technical Reports Server (NTRS)

    Buckner, J. D.; Council, H. W.; Edwards, T. R.

    1974-01-01

    Software application or development in optical image digital data processing requires fast, good-quality, yet inexpensive hard copies of processed images. To achieve this, a Cambo camera with an f/2.8 150-mm Xenotar lens in a Copal shutter, having a Graflok back for 4 x 5 Polaroid type 57 pack film, was interfaced to an existing Adage AGT-30/Electro-Mechanical Research EMR 6050 graphic computer system. Time-lapse photography in conjunction with a log-to-linear voltage transformation has resulted in an interactive system capable of producing a hard copy in 54 sec. The interactive aspect of the system lies in a Tektronix 4002 graphic computer terminal and its associated hard copy unit.

  6. A graphical ICU workstation.

    PubMed Central

    Higgins, S. B.; Jiang, K.; Swindell, B. B.; Bernard, G. R.

    1991-01-01

    A workstation designed to facilitate electronic charting in the intensive care unit is described. The system design incorporates a graphical, windows-based user interface. The system captures all data formerly recorded on the paper flowsheet including direct patient measurements, nursing assessment, patient care procedures, and nursing notes. It has the ability to represent charted data in a variety of graphical formats, thereby providing additional insights to facilitate the management of the critically ill patient. Initial nursing evaluation is described. PMID:1807712

  7. Flowfield computer graphics

    NASA Technical Reports Server (NTRS)

    Desautel, Richard

    1993-01-01

    The objectives of this research include supporting the Aerothermodynamics Branch's research by developing graphical visualization tools for both the branch's adaptive grid code and flow field ray tracing code. The completed research for the reporting period includes development of a graphical user interface (GUI) and its implementation into the NAS Flowfield Analysis Software Tool kit (FAST), for both the adaptive grid code (SAGE) and the flow field ray tracing code (CISS).

  8. A Photo Storm Report Mobile Application, Processing/Distribution System, and AWIPS-II Display Concept

    NASA Astrophysics Data System (ADS)

    Longmore, S. P.; Bikos, D.; Szoke, E.; Miller, S. D.; Brummer, R.; Lindsey, D. T.; Hillger, D.

    2014-12-01

    The increasing use of mobile phones equipped with digital cameras, and the ability to post images and information to the Internet in real time, have significantly improved the ability to report events almost instantaneously. In the context of severe weather reports, a representative digital image conveys significantly more information than a simple text or phone-relayed report to a weather forecaster issuing severe weather warnings. It also allows the forecaster to reasonably discern the validity and quality of a storm report. Posting geo-located, time-stamped storm report photographs to NWS social media weather forecast office pages via a mobile phone application has generated recent positive feedback from forecasters. Building upon this feedback, this discussion advances the concept, development, and implementation of a formalized Photo Storm Report (PSR) mobile application, a processing and distribution system, and Advanced Weather Interactive Processing System II (AWIPS-II) plug-in display software. The PSR system would be composed of three core components: i) a mobile phone application, ii) processing and distribution software and hardware, and iii) AWIPS-II data, exchange, and visualization plug-in software. i) The mobile phone application would allow web-registered users to send geo-located, view-direction and time-stamped PSRs, along with the severe weather type and comments, to the processing and distribution servers. ii) The servers would receive PSRs, convert the images and information to NWS network-bandwidth-manageable sizes in an AWIPS-II data format, distribute them on the NWS data communications network, and archive the original PSRs for possible future research datasets. iii) The AWIPS-II data and exchange plug-ins would archive PSRs, and the visualization plug-in would display PSR locations, times, and directions by hour, similar to surface observations. Hovering on individual PSRs would reveal photo thumbnails, and clicking on them would display the full images.
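
    A PSR of the kind described reduces to a small geo-tagged payload. A hypothetical sketch of the fields such a report might carry (all field names are invented for illustration and are not the actual PSR format):

      from dataclasses import dataclass
      from datetime import datetime, timezone

      @dataclass
      class PhotoStormReport:
          user_id: str          # web-registered reporter
          lat: float            # degrees north
          lon: float            # degrees east
          heading_deg: float    # camera view direction
          observed: datetime    # time stamp (UTC)
          weather_type: str     # e.g., "hail", "funnel cloud"
          comment: str
          photo_path: str       # bandwidth-reduced image for distribution

      report = PhotoStormReport("user42", 40.58, -105.08, 225.0,
                                datetime.now(timezone.utc), "hail",
                                "quarter-size hail", "psr_0001.jpg")
      print(report.weather_type, report.lat, report.lon)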

  9. Self-Assembly, Molecular Ordering, and Charge Mobility in Solution-Processed Ultrathin Oligothiophene Films

    SciTech Connect

    Murphy,A.; Chang, P.; VanDyke, P.; Liu, J.; Frechet, J.; Subramanian, V.; Delongchamp, D.; Sambasivan, S.; Fischer, D.; Lin, E.

    2005-01-01

    Symmetrical α,ω-substituted quarter-(T4), penta-(T5), sexi-(T6), and heptathiophene (T7) oligomers containing thermally removable aliphatic ester solubilizing groups were synthesized, and their UV-vis and thermal characteristics were compared. Spun-cast thin films of each oligomer were examined with atomic force microscopy and near-edge X-ray absorption fine structure spectroscopy to evaluate the ability of the material to self-assemble from a solution-based process while maintaining complete surface coverage. Films of the T5-T7 oligomers self-assemble into crystalline terraces after thermal annealing, with higher temperatures required to effect this transformation as the size of the oligomer increases. A symmetrical α,ω-substituted sexithiophene (T6-acid) that reveals carboxylic acids after thermolysis was also prepared to evaluate the effect of the presence of hydrogen-bonding moieties. The charge transport properties of these materials, evaluated in top-contact thin film transistor devices, were found to correlate with the observed morphology of the films. The T4 and the T6-acid performed poorly because of incomplete surface coverage after thermolysis, while T5-T7 exhibited much higher performance as a result of molecular ordering. Increases in charge mobility correlated with increasing conjugation length, with measured mobilities ranging from 0.02 to 0.06 cm2/(V·s). The highest mobilities were measured when films of each oligomer had an average thickness between one and two monolayers, indicating that the molecules become exceptionally well ordered during the thermolysis process. This unprecedented ordering of the solution-cast molecules results in efficient charge mobility rarely seen in such ultrathin films.

  10. The Role of Mobile Technologies in Health Care Processes: The Case of Cancer Supportive Care

    PubMed Central

    Cucciniello, Maria; Guerrazzi, Claudia

    2015-01-01

    Background: Health care systems are gradually moving toward new models of care based on integrated care processes shared by different caregivers and on an empowered role for the patient. Mobile technologies are assuming an emerging role in this scenario. This is particularly true in care processes where the patient has a particularly enhanced role, as is the case in cancer supportive care. Objective: This paper aims to review existing studies on the actual role and use of mobile technology during the different stages of care processes, with particular reference to cancer supportive care. Methods: We carried out a literature review with the aim of identifying studies related to the use of mHealth in cancer care and cancer supportive care. The final sample size consists of 106 records. Results: There is scant literature concerning the use of mHealth in cancer supportive care. Looking more generally at cancer care, we found that mHealth is mainly used for self-management activities carried out by patients. The main tools used are mobile devices such as mobile phones and tablets, but remote monitoring devices also play an important role. Text messaging technologies (short message service, SMS) have a minor role, with the exception of middle-income countries, where text messaging plays a major role. Telehealth technologies are still rarely used in cancer care processes. Looking at the different stages of health care processes, mHealth is mainly used during the treatment of patients, especially for self-management activities. It is also used for prevention and diagnosis, although to a lesser extent, whereas it appears rarely used for decision-making and follow-up activities. Conclusions: Since mHealth seems to be employed only for limited uses and during limited phases of the care process, it is unlikely that it can really contribute to the creation of new care models. This under-utilization may depend on many issues, including the need for it to be embedded into the overall care process.

  11. Modelling multi-phase liquid-sediment scour and resuspension induced by rapid flows using Smoothed Particle Hydrodynamics (SPH) accelerated with a Graphics Processing Unit (GPU)

    NASA Astrophysics Data System (ADS)

    Fourtakas, G.; Rogers, B. D.

    2016-06-01

    A two-phase numerical model using Smoothed Particle Hydrodynamics (SPH) is applied to two-phase liquid-sediment flows. The absence of a mesh in SPH is ideal for interfacial and highly non-linear flows with changing fragmentation of the interface, mixing, and resuspension. The rheology of sediment under rapid flows passes through several states which are only partially described by previous research in SPH. This paper attempts to bridge the gap between geotechnics and non-Newtonian and Newtonian flows by proposing a model that combines the yielding, shear, and suspension layers, which are needed to predict the global erosion phenomena accurately from a hydrodynamics perspective. The numerical SPH scheme is based on the explicit treatment of both phases using Newtonian and non-Newtonian Bingham-type Herschel-Bulkley-Papanastasiou constitutive models. This is supplemented by the Drucker-Prager yield criterion, to predict the onset of yielding of the sediment surface, and a concentration-based suspension model. The multi-phase model has been compared with experimental results and 2-D reference numerical models for scour following a dry-bed dam break, yielding satisfactory results and improvements over well-known SPH multi-phase models. With 3-D simulations requiring a large number of particles, the code is accelerated with a graphics processing unit (GPU) in the open-source DualSPHysics code. The implementation and optimisation of the code achieved a speed-up of 58x over an optimised single-thread serial code. A 3-D simulation of a dam break over a non-cohesive erodible bed with over 4 million particles yields close agreement with experimental scour and water-surface profiles.
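
    The Herschel-Bulkley-Papanastasiou model mentioned here regularizes the yield stress so the apparent viscosity stays bounded at vanishing shear rate: mu_eff(gd) = K*gd**(n-1) + tau_y*(1 - exp(-m*gd))/gd, where gd is the shear rate. A small sketch (all parameter values are placeholders, not those used in the paper):

      import numpy as np

      def hbp_viscosity(gamma_dot, K=1.0, n=1.2, tau_y=10.0, m=100.0):
          # tau = tau_y*(1 - exp(-m*gd)) + K*gd**n  (Papanastasiou-regularized
          # Herschel-Bulkley); mu_eff = tau/gd. The exponential keeps the
          # yield-stress contribution bounded (-> tau_y*m) as gd -> 0, and
          # with n >= 1 the power-law term also stays bounded.
          gd = np.asarray(gamma_dot, dtype=float)
          return tau_y * (1.0 - np.exp(-m * gd)) / gd + K * gd ** (n - 1.0)

      rates = np.logspace(-3, 2, 6)     # strictly positive shear rates
      print(hbp_viscosity(rates))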

  12. HIN9/468: The Last Mile - Secure and mobile data processing in healthcare

    PubMed Central

    Bludau, HB; Vocke, A; Herzog, W

    1999-01-01

    Motivation: According to the Federal Ministry, the avowed target of modern medicine is to administer the best medical care, the newest scientific insights, and the knowledge of experienced specialists to every patient on affordable terms, no matter whether the patient is located in a rural area or in a teaching hospital. One way of administering information is via mobile tools. To find out more about the influence of mobile computers on the physician-patient relationship, the acceptance of these tools as well as the prerequisites of new security and data-processing concepts were investigated in a simulation study. Methods: The Personal Digital Assistant: A prototype was developed based on a personal digital assistant. The Apple Newton was used because it appeared suitable for easy data input and retrieval by means of a touch screen with handwriting recognition. The device was coupled with a conventional cellular phone for voice and data transfer. The prototype provided several functions for information processing: access to a patient database, access to medical knowledge, documentation of diagnoses, electronic request forms for investigations, and tools for personal organization. A prototype of an accessibility and safety manager was also integrated. This software enables telephone accessibility to be controlled individually: situational adjustments and a complex set of rules configured the way arriving calls were dealt with. The software also contained a component for sending and receiving text messages. The Simulation Study: In simulation studies, test users are observed while working with prototypical technology in a close-to-reality environment. The aim is to test an early prototype in its intended environment and to obtain design proposals for the technology from its future users. Within the Ladenburger group "Security in communications technology" of the Gottlieb-Daimler und Karl-Benz-Stiftung, an investigation was conducted at the Heidelberg University Medical Centre.

  13. Control of Chemical Effects in the Separation Process of a Differential Mobility / Mass Spectrometer System

    PubMed Central

    Schneider, Bradley B.; Coy, Stephen L.; Krylov, Evgeny V.; Nazarov, Erkinjon G.

    2013-01-01

    Differential mobility spectrometry (DMS) separates ions on the basis of the difference in their migration rates under high versus low electric fields. Several models describing the physical nature of this field-dependent mobility have been proposed, but the clusterization model, sometimes referred to as the dynamic cluster-decluster model, is emerging as the dominant explanation. DMS resolution and peak capacity are strongly influenced by the addition of modifiers, which results in the formation and dissociation of clusters. This process increases selectivity due to the unique chemical interactions that occur between an ion and neutral gas-phase molecules. It is thus imperative to bring the parameters influencing these chemical interactions under control and to find ways to exploit them in order to improve the analytical utility of the device. In this paper we describe three important areas that need consideration in order to stabilize and capitalize on the chemical processes that dominate a DMS separation. The first involves means of controlling the dynamic equilibrium of the clustering reactions with high concentrations of specific reagents. The second involves a means of dealing with the unwanted heterogeneous cluster ion populations emitted from the electrospray ionization process, which degrade resolution and sensitivity. The third involves fine control of the parameters that affect the fundamental collision processes: temperature and pressure. PMID:20065515

  14. mHealth Quality: A Process to Seal the Qualified Mobile Health Apps.

    PubMed

    Yasini, Mobin; Beranger, Jérôme; Desmarais, Pierre; Perez, Lucas; Marchand, Guillaume

    2016-01-01

    A large number of mobile health applications (apps) are currently available, with a variety of functionalities. User ratings in the app stores do not appear to be reliable indicators of app quality, and traditional evaluation methods are not suited to the fast-paced nature of mobile technology. In this study, we propose a collaborative multidimensional scale to assess the quality of mHealth apps. In our process, app quality is assessed along several dimensions, including medical reliability, legal consistency, ethical consistency, usability, personal data privacy, and IT security. A hypothetico-deductive approach was used in various working groups to define audit criteria based on the various use cases that an app could provide. These criteria were then implemented as web-based self-administered questionnaires, and the generation of automatic reports was incorporated. On the one hand, this method is specific to each app, because it assesses each health app according to the functionalities it offers; on the other hand, it is automatic, transferable to all apps, and adapted to the dynamic nature of mobile technology. PMID:27577372

  15. Self-Authored Graphic Design: A Strategy for Integrative Studies

    ERIC Educational Resources Information Center

    McCarthy, Steven; De Almeida, Cristina Melibeu

    2002-01-01

    The purpose of this essay is to introduce the concepts of self-authorship in graphic design education as part of an integrative pedagogy. The enhanced potential of harnessing graphic design's dual modalities--the integrative processes inherent in design thinking and doing, and the ability of graphic design to engage other disciplines by giving…

  16. Information Graphics at the "Boston Globe": From Concept to Execution.

    ERIC Educational Resources Information Center

    McNaughton, Sean

    1998-01-01

    Shows how the "Boston Globe" brings words, diagrams, illustrations, and photographs together in evocative information packages. Traces the process of discussion and decision making among reporters, editors, art directors, and graphic artists as the team chooses concepts the graphics will illustrate, and produces the graphics themselves. (SR)

  17. A Graphical Physics Course

    NASA Astrophysics Data System (ADS)

    Wood, Roy C.

    2001-11-01

    There has been a desire in recent years to introduce physics to students at the middle school or freshman high school level. However, traditional physics courses involve a great deal of mathematics, and this makes physics unattractive to many of these students. In the last few decades, courses have been developed with a focus that is more conceptual than mathematical, generally referred to as conceptual physics. These two types of courses emphasize two of the methods that physicists use to solve physics problems. However, there is a third, graphical method that is also useful and that complements mathematical and verbal reasoning. A course emphasizing graphical methods would deal with quantitative graphical diagrams as well as qualitative diagrams. Examples of quantitative graphical diagrams are scaled force diagrams and scaled optical ray-tracing diagrams. A course based on this type of approach would involve measurements and uncertainties, and would involve active (hands-on) student participation suitable for younger students. This talk will discuss a graphical physics course and its benefits to younger students.

  18. Doping suppression and mobility enhancement of graphene transistors fabricated using an adhesion promoting dry transfer process

    SciTech Connect

    Cheol Shin, Woo; Hun Mun, Jeong; Yong Kim, Taek; Choi, Sung-Yool; Jin Cho, Byung; Yoon, Taeshik; Kim, Taek-Soo

    2013-12-09

    We present the facile dry transfer of graphene synthesized via chemical vapor deposition on copper film to a functional device substrate. High-quality, uniform dry transfer of graphene to an oxidized silicon substrate was achieved by exploiting the beneficial features of a poly(4-vinylphenol) adhesive layer: a strong adhesion energy to graphene and a negligible influence on the electronic and structural properties of graphene. The graphene field-effect transistors (FETs) fabricated using the dry transfer process exhibit excellent electrical performance in terms of high FET mobility and low intrinsic doping level, which demonstrates the feasibility of our approach for graphene-based nanoelectronics.

  19. Recovery of critical and value metals from mobile electronics enabled by electrochemical processing

    SciTech Connect

    Tedd E. Lister; Peiming Wang; Andre Anderko

    2014-10-01

    Electrochemistry-based schemes were investigated as a means to recover critical and value metals from scrap mobile electronics. Mobile electronics offer a growing feedstock for replenishing value and critical metals and for reducing the need to exhaust primary sources. The electrorecycling process generates oxidizing agents at an anode to dissolve metals from the scrap matrix while reducing dissolved metals at the cathode. The process uses a single cell to maximize energy efficiency. E vs pH diagrams and metal dissolution experiments were used to assess the effectiveness of various solution chemistries. Following this work, a flow chart was developed in which two stages of electrorecycling were proposed: 1) initial dissolution of Cu, Sn, Ag, and magnet materials using Fe3+ generated in acidic sulfate, and 2) final dissolution of Pd and Au using Cl2 generated in an HCl solution. Experiments were performed using a simulated metal mixture equivalent to five cell phones. Both Cu and Ag were recovered at ~97% using Fe3+ while leaving Au and Pd intact. A strategy for extracting rare earth elements (REEs) from the dissolved streams is discussed, as well as future directions in process development.

  20. Data Processing and Quality Evaluation of a Boat-Based Mobile Laser Scanning System

    PubMed Central

    Vaaja, Matti; Kukko, Antero; Kaartinen, Harri; Kurkela, Matti; Kasvi, Elina; Flener, Claude; Hyyppä, Hannu; Hyyppä, Juha; Järvelä, Juha; Alho, Petteri

    2013-01-01

    Mobile mapping systems (MMSs) are used for mapping topographic and urban features that are difficult and time-consuming to measure with other instruments. The benefits of MMSs include efficient data collection and versatile usability. This paper investigates the data processing steps and data quality of a boat-based mobile mapping system (BoMMS) for generating terrain and vegetation points in a river environment. Our aims in data processing were to filter noise points, detect shorelines as well as points below the water surface, and conduct ground point classification. Previous studies of BoMMS have investigated elevation accuracies and usability in the detection of fluvial erosion and deposition areas. The new findings concerning BoMMS data are that the improved data processing approach allows for the identification of multipath reflections and for shoreline delineation. We demonstrate the possibility of measuring bathymetry data in shallow (0-1 m) and clear water. Furthermore, we evaluate for the first time the accuracy of the BoMMS ground point classification compared to manually classified data. We also demonstrate the spatial variations of the ground point density and assess the elevation and vertical accuracies of the BoMMS data. PMID:24048340
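
    A common baseline for the ground point classification step is a grid-minimum filter: keep the lowest return in each planimetric cell and accept points within a tolerance of that local minimum. A minimal sketch (cell size and tolerance are placeholder values, not those used for the BoMMS data):

      import numpy as np

      def grid_min_ground(points, cell=1.0, tol=0.15):
          # points: (n, 3) array of x, y, z laser returns.
          ij = np.floor(points[:, :2] / cell).astype(int)
          keys = ij[:, 0] * 1_000_003 + ij[:, 1]       # unique id per cell
          order = np.argsort(keys)
          keys_s, z_s = keys[order], points[order, 2]
          starts = np.r_[0, np.flatnonzero(np.diff(keys_s)) + 1]
          zmin = np.minimum.reduceat(z_s, starts)      # lowest z per cell
          cell_min = dict(zip(keys_s[starts], zmin))
          low = np.array([cell_min[k] for k in keys])
          return points[:, 2] <= low + tol             # boolean ground mask

      pts = np.random.rand(10_000, 3) * [100, 100, 2]
      print(grid_min_ground(pts).sum(), "points classified as ground")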

  1. Micromagnetics on high-performance workstation and mobile computational platforms

    NASA Astrophysics Data System (ADS)

    Fu, S.; Chang, R.; Couture, S.; Menarini, M.; Escobar, M. A.; Kuteifan, M.; Lubarda, M.; Gabay, D.; Lomakin, V.

    2015-05-01

    The feasibility of using high-performance desktop and embedded mobile computational platforms for micromagnetic simulations is presented, including multi-core Intel central processing units, Nvidia desktop graphics processing units, and the Nvidia Jetson TK1 platform. The FastMag finite-element-method-based micromagnetic simulator is used as a testbed, showing high efficiency on all the platforms. Optimization aspects of improving the performance of the mobile systems are discussed. The high performance, low cost, low power consumption, and rapid performance increases of embedded mobile systems make them a promising candidate for micromagnetic simulations. Such architectures can be used as standalone systems or can be built into low-power computing clusters.
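
    At the core of any micromagnetic time integrator is the Landau-Lifshitz-Gilbert equation, dm/dt = -gamma*(m x H_eff) - alpha*gamma*(m x (m x H_eff)). A toy explicit-Euler update for unit magnetization vectors, for orientation only (field and constants are placeholders; FastMag's finite-element field evaluation, the expensive part, is not shown):

      import numpy as np

      def llg_step(m, h_eff, dt=1e-13, gamma=2.21e5, alpha=0.05):
          # m: (n, 3) unit magnetization; h_eff: (n, 3) effective field (A/m).
          mxh = np.cross(m, h_eff)
          dmdt = -gamma * mxh - alpha * gamma * np.cross(m, mxh)
          m = m + dt * dmdt
          return m / np.linalg.norm(m, axis=1, keepdims=True)  # keep |m| = 1

      m = np.tile([1.0, 0.0, 0.0], (1000, 1))
      h = np.tile([0.0, 0.0, 8e5], (1000, 1))   # static field along z
      for _ in range(100):
          m = llg_step(m, h)
      print(m[0])   # magnetization precessing and damping toward z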

  2. Interactive computer graphics

    NASA Astrophysics Data System (ADS)

    Purser, K.

    1980-08-01

    Design layouts have traditionally been done on a drafting board by drawing a two-dimensional representation with section cuts and side views to describe the exact three-dimensional model. With the advent of computer graphics, a three-dimensional model can be created directly. The computer stores the exact three-dimensional model, which can be examined from any angle and at any scale. A brief overview of interactive computer graphics is given, covering how models are made and some of their benefits and limitations.

  3. Mobile air monitoring data processing strategies and effects on spatial air pollution trends

    NASA Astrophysics Data System (ADS)

    Brantley, H. L.; Hagler, G. S. W.; Kimbrough, S.; Williams, R. W.; Mukerjee, S.; Neas, L. M.

    2013-12-01

    The collection of real-time air quality measurements while in motion (i.e., mobile monitoring) is currently conducted worldwide to evaluate in situ emissions, local air quality trends, and air pollutant exposure. This measurement strategy pushes the limits of traditional data analysis, with complex second-by-second multipollutant data varying as a function of time and location. Data reduction and filtering techniques are often applied to deduce trends, such as pollutant spatial gradients downwind of a highway. However, mobile monitoring studies rarely report the sensitivity of their results to the chosen data processing approaches. The study reported here utilized a large mobile monitoring dataset collected on a roadway network in central North Carolina to explore common data processing strategies, including time alignment, short-term emissions event detection, background estimation, and averaging techniques. One-second time resolution measurements of ultrafine particles ≤ 100 nm in diameter (UFPs), black carbon (BC), particulate matter (PM), carbon monoxide (CO), carbon dioxide (CO2), and nitrogen dioxide (NO2) were collected on twelve unique driving routes that were repeatedly sampled. Analyses demonstrate that the multiple emissions event detection strategies reported produce generally similar results and that using a median (as opposed to a mean) as a summary statistic may be sufficient to avoid bias in near-source spatial trends. Background levels of the pollutants are shown to vary with time, and the estimated contributions of the background to the mean pollutant concentrations were: BC (6%), PM2.5-10 (12%), UFPs (19%), CO (38%), PM10 (45%), NO2 (51%), PM2.5 (56%), and CO2 (86%). Lastly, while temporal smoothing (e.g., 5 s averages) results in weak pair-wise correlation and the blurring of spatial trends, spatial averaging (e.g., 10 m) is demonstrated to increase correlation and refine spatial trends.
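
    Background estimation of the kind compared here is often implemented as a low-percentile rolling statistic: the slowly varying floor of the time series is taken as the regional background and subtracted to isolate local sources. A minimal sketch (the window length and percentile are placeholder choices, not those evaluated in the study):

      import numpy as np

      def rolling_percentile_background(x, window=300, q=5):
          # x: 1 Hz pollutant series; background = 5th percentile in a
          # sliding ~5 min window, a common proxy for the regional signal.
          pad = window // 2
          xp = np.pad(x, pad, mode="edge")
          windows = np.lib.stride_tricks.sliding_window_view(xp, window)
          return np.percentile(windows, q, axis=1)[: len(x)]

      x = 10 + np.random.rand(3600) * 2     # background + instrument noise
      x[1200:1230] += 50                    # short plume from a passing truck
      bg = rolling_percentile_background(x)
      local = x - bg                        # near-source signal above background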

  4. Near Real-Time Assessment of Anatomic and Dosimetric Variations for Head and Neck Radiation Therapy via Graphics Processing Unit–based Dose Deformation Framework

    SciTech Connect

    Qi, X. Sharon; Santhanam, Anand; Neylon, John; Min, Yugang; Armstrong, Tess; Sheng, Ke; Staton, Robert J.; Pukala, Jason; Pham, Andrew; Low, Daniel A.; Lee, Steve P.; Steinberg, Michael; Manon, Rafael; Chen, Allen M.; Kupelian, Patrick

    2015-06-01

    Purpose: The purpose of this study was to systematically monitor anatomic variations and their dosimetric consequences during intensity modulated radiation therapy (IMRT) for head and neck (H&N) cancer by using a graphics processing unit (GPU)-based deformable image registration (DIR) framework. Methods and Materials: Eleven H&N patients undergoing IMRT with daily megavoltage computed tomography (CT) and weekly kilovoltage CT (kVCT) scans were included in this analysis. Pretreatment kVCTs were automatically registered with their corresponding planning CTs through a GPU-based DIR framework. The deformation of each contoured structure in the H&N region was computed to account for nonrigid changes in the patient setup. The Jacobian determinant of the planning target volumes and the surrounding critical structures was used to quantify anatomical volume changes. The actual delivered dose was calculated accounting for the organ deformation. The dose distribution uncertainties due to registration errors were estimated using a landmark-based gamma evaluation. Results: Dramatic interfractional anatomic changes were observed. During the treatment course of 6 to 7 weeks, the parotid gland volumes changed by up to 34.7%, and the center-of-mass displacement of the 2 parotid glands varied in the range of 0.9 to 8.8 mm. For the primary treatment volume, the cumulative minimum, mean, and equivalent uniform doses assessed by the weekly kVCTs were lower than the planned doses by up to 14.9% (P=.14), 2% (P=.39), and 7.3% (P=.05), respectively. The cumulative mean doses were significantly higher than the planned dose for the left parotid (P=.03) and right parotid glands (P=.006). The computation, including DIR and dose accumulation, was ultrafast (∼45 seconds), with registration accuracy at the subvoxel level. Conclusions: A systematic analysis of anatomic variations in the H&N region and their dosimetric consequences is critical in improving treatment efficacy. Near real-time assessment of anatomic and dosimetric variations is feasible with the GPU-based DIR framework.
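
    The Jacobian determinant used to quantify volume change is computed from the spatial gradients of the deformation x -> x + u(x): J = det(I + grad(u)), with J < 1 indicating local shrinkage and J > 1 local growth. A minimal finite-difference sketch on a displacement grid (isotropic unit voxel spacing assumed):

      import numpy as np

      def jacobian_determinant(u):
          # u: (3, nz, ny, nx) displacement field, with u[c] the displacement
          # along grid axis c (z, y, x ordering); unit voxel spacing assumed.
          grads = np.stack([np.stack(np.gradient(u[c]), axis=0) for c in range(3)])
          jac = grads + np.eye(3)[:, :, None, None, None]   # J = I + grad(u)
          return np.linalg.det(np.moveaxis(jac, (0, 1), (-2, -1)))

      u = np.zeros((3, 20, 20, 20))
      u[2] = 0.1 * np.arange(20.0)                  # uniform 10% stretch along x
      print(jacobian_determinant(u).mean())         # ~1.10: volumes grew 10%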

  5. Parallel processor-based raster graphics system architecture

    DOEpatents

    Littlefield, Richard J.

    1990-01-01

    An apparatus for generating raster graphics images from a graphics command stream includes a plurality of graphics processors connected in parallel, each adapted to receive any part of the graphics command stream and to process that part into pixel data. The apparatus also includes a frame buffer for mapping the pixel data to pixel locations and an interconnection network for interconnecting the graphics processors to the frame buffer. Through the interconnection network, each graphics processor may access any part of the frame buffer concurrently with another graphics processor accessing any other part of the frame buffer. The plurality of graphics processors can thereby concurrently transmit pixel data to pixel locations in the frame buffer.

  6. Sub pixel analysis and processing of sensor data for mobile target intelligence information and verification

    NASA Astrophysics Data System (ADS)

    Williams, Theresa Allen

    This dissertation introduces a novel process for studying and analyzing sensor data in order to obtain information pertaining to mobile targets at the sub-pixel level. The process design is modular in nature and utilizes a set of algorithmic tools for change detection, target extraction and analysis, super-pixel processing, and target refinement. The scope of this investigation is confined to a staring sensor that records data of sub-pixel vehicles traveling horizontally across the ground. Statistical models of the targets and background are developed with noise and jitter effects. Threshold Change Detection, Duration Change Detection, and Fast Adaptive Power Iteration (FAPI) Detection are the three methods used for target detection. PolyFit and FermiFit are two tools developed and employed for target analysis, allowing for flexible processing. Tunable parameters in the detection methods, along with filters for false alarms, show the adaptability of the procedures. Super-pixel processing tools are designed, and Refinement Through Tracking (RTT) techniques are investigated as post-processing refinement options. The process is tested on simulated datasets and validated with sensor datasets obtained from RP Flight Systems, Inc.
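
    Threshold change detection, the simplest of the three detection methods named, flags pixels whose deviation from a background estimate exceeds a noise-scaled threshold. A minimal sketch (the threshold factor and noise model are placeholder assumptions, not the dissertation's statistical models):

      import numpy as np

      def threshold_change_detect(frame, background, k=4.0):
          # Flag pixels deviating from the background estimate by more than
          # k robust standard deviations (MAD-based) of the residual noise.
          diff = frame - background
          sigma = 1.4826 * np.median(np.abs(diff - np.median(diff)))
          return np.abs(diff) > k * sigma

      bg = np.random.randn(128, 128) * 0.1
      frame = bg + np.random.randn(128, 128) * 0.1
      frame[64, 64] += 3.0                  # sub-pixel target signature
      print(np.argwhere(threshold_change_detect(frame, bg)))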

  7. Graphical workstation capability for reliability modeling

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.; Koppen, Sandra V.; Haley, Pamela J.

    1992-01-01

    In addition to computational capabilities, software tools for estimating the reliability of fault-tolerant digital computer systems must also provide a means of interfacing with the user. Described here is the new graphical interface capability of the hybrid automated reliability predictor (HARP), a software package that implements advanced reliability modeling techniques. The graphics oriented (GO) module provides the user with a graphical language for modeling system failure modes through the selection of various fault-tree gates, including sequence-dependency gates, or by a Markov chain. By using this graphical input language, a fault tree becomes a convenient notation for describing a system. In accounting for any sequence dependencies, HARP converts the fault-tree notation to a complex stochastic process that is reduced to a Markov chain, which it can then solve for system reliability. The graphics capability is available for use on an IBM-compatible PC, a Sun, and a VAX workstation. The GO module is written in the C programming language and uses the graphical kernel system (GKS) standard for graphics implementation. The PC, VAX, and Sun versions of the HARP GO module are currently in beta-testing stages.
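
    The fault-tree-to-Markov-chain reduction can be illustrated on a toy model. A hedged sketch, not HARP itself: a two-component parallel system whose Markov state counts failed components, solved for mission reliability via the matrix exponential of the generator (the failure rate and mission time below are made up):

        import numpy as np
        from scipy.linalg import expm

        # States: 0 = both components working, 1 = one failed, 2 = both failed.
        lam = 1e-3  # per-component failure rate (per hour), illustrative value

        # Generator matrix Q: Q[i, j] is the transition rate from state i to j.
        Q = np.array([
            [-2 * lam, 2 * lam, 0.0],
            [0.0,      -lam,    lam],
            [0.0,      0.0,     0.0],  # absorbing system-failure state
        ])

        t = 1000.0  # mission time in hours
        p = np.array([1.0, 0.0, 0.0]) @ expm(Q * t)  # state probabilities at t
        reliability = p[0] + p[1]  # probability the system has not failed
        print(f"R({t:g} h) = {reliability:.6f}")
        # Closed form for comparison: R(t) = 2*exp(-lam*t) - exp(-2*lam*t)
        print(2 * np.exp(-lam * t) - np.exp(-2 * lam * t))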

  8. Printer Graphics Package

    NASA Technical Reports Server (NTRS)

    Blanchard, D. C.

    1986-01-01

    Printer Graphics Package (PGP) is tool for making two-dimensional symbolic plots on line printer. PGP created to support development of Heads-Up Display (HUD) simulation. Standard symbols defined with HUD in mind. Available symbols include circle, triangle, quadrangle, window, line, numbers, and text. Additional symbols easily added or built up from available symbols.

  9. Mathematical Graphic Organizers

    ERIC Educational Resources Information Center

    Zollman, Alan

    2009-01-01

    As part of a math-science partnership, a university mathematics educator and ten elementary school teachers developed a novel approach to mathematical problem solving derived from research on reading and writing pedagogy. Specifically, research indicates that students who use graphic organizers to arrange their ideas improve their comprehension…

  10. Raster graphics display library

    NASA Technical Reports Server (NTRS)

    Grimsrud, Anders; Stephenson, Michael B.

    1987-01-01

    The Raster Graphics Display Library (RGDL) is a high-level subroutine package that gives the advanced raster graphics display capabilities needed. The RGDL uses FORTRAN source code routines to build subroutines modular enough to use as stand-alone routines in a black box type of environment. Six examples are presented which will teach the use of RGDL in the fastest, most complete way possible. Routines within the display library that are used to produce raster graphics are presented in alphabetical order, each on a separate page. Each user-callable routine is described by function and calling parameters. All common blocks that are used in the display library are listed, and the use of each variable within each common block is discussed. A reference on the include files necessary to compile the display library is also included. Each include file and its purpose are listed. The link map for MOVIE.BYU version 6, a general-purpose computer graphics display system that uses RGDL software, is also contained.

  11. Computing Graphical Confidence Bounds

    NASA Technical Reports Server (NTRS)

    Mezzacappa, M. A.

    1983-01-01

    Approximation for graphical confidence bounds is simple enough to run on programmable calculator. Approximation is used in lieu of numerical tables not always available, and exact calculations, which often require rather sizable computer resources. Approximation verified for collection of up to 50 data points. Method used to analyze tile-strength data on Space Shuttle thermal-protection system.

  12. Comics & Graphic Novels

    ERIC Educational Resources Information Center

    Cleaver, Samantha

    2008-01-01

    Not so many years ago, comic books in school were considered the enemy. Students caught sneaking comics between the pages of bulky--and less engaging--textbooks were likely sent to the principal. Today, however, comics, including classics such as "Superman" but also their generally more complex, nuanced cousins, graphic novels, are not only…

  13. Graphic Novels: A Roundup.

    ERIC Educational Resources Information Center

    Kan, Katherine L.

    1994-01-01

    Reviews graphic novels for young adults, including five titles from "The Adventures of Tintin," a French series that often uses ethnic and racial stereotypes which reflect the time in which they were published, and "Wolverine," a Marvel comic character adventure. (Contains six references.) (LRW)

  14. Evaluation of food processing wastewater loading characteristics on metal mobilization within the soil.

    PubMed

    Julien, Ryan; Safferman, Steven

    2015-01-01

    Wastewater generated during food processing is commonly treated using land-application systems, which primarily rely on soil microbes to transform nutrients and organic compounds into benign byproducts. Naturally occurring metals in the soil may be chemically reduced via microbially mediated oxidation-reduction reactions as oxygen becomes depleted. Some metals such as manganese and iron become water soluble when chemically reduced, leading to groundwater contamination. Alternatively, metals within the wastewater may not become assimilated into the soil and may leach into the groundwater if the environment is not sufficiently oxidizing. A lab-scale column study was conducted to investigate the impacts of wastewater loading values on metal mobilization within the soil. Oxygen content and volumetric water data were collected via soil sensors for the duration of the study. The pH, chemical oxygen demand, manganese, and iron concentrations in the influent and effluent water from each column were measured. Using Spearman's rank correlation coefficient, average organic loading and organic loading per dose were shown to have statistically significant impacts on effluent water quality. The hydraulic resting period qualitatively appeared to have impacts on effluent water quality. This study verifies that excessive organic loading of land-application systems causes mobilization of naturally occurring metals and prevents those added in the wastewater from becoming immobilized, resulting in ineffective wastewater treatment. Results also indicate the need to consider the organic dose load and hydraulic resting period in the treatment system design. Findings from this study demonstrate that waste application twice daily may encourage soil aeration and allow for increased organic loading while limiting the mobilization of metals already in the soil and those being applied. PMID:26327299
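
    The rank-correlation test used in this study is straightforward to reproduce. A minimal sketch with made-up paired observations (purely illustrative, not the study's data):

        import numpy as np
        from scipy.stats import spearmanr

        # Hypothetical column-study pairs: average organic loading vs.
        # effluent iron concentration (units are notional).
        organic_loading = np.array([5, 12, 18, 25, 33, 41, 50, 62])
        effluent_iron = np.array([0.1, 0.2, 0.2, 0.5, 0.9, 1.4, 1.3, 2.2])

        rho, p_value = spearmanr(organic_loading, effluent_iron)
        print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")
        # A significant positive rho would support the conclusion that higher
        # organic loading mobilizes more iron into the effluent.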

  15. Hydrothermal Processes and Mobile Element Transport in Martian Impact Craters - Evidence from Terrestrial Analogue Craters

    NASA Technical Reports Server (NTRS)

    Newsom, H. E.; Nelson, M. J.; Shearer, C. K.; Dressler, B. L.

    2005-01-01

    Hydrothermal alteration and chemical transport involving impact craters probably occurred on Mars throughout its history. Our studies of alteration products and mobile element transport in ejecta blanket and drill core samples from impact craters show that these processes may have contributed to the surface composition of Mars. Recent work on the Chicxulub Yaxcopoil-1 drill core has provided important information on the relative mobility of many elements that may be relevant to Mars. The Chicxulub impact structure in the Yucatan Peninsula of Mexico and offshore in the Gulf of Mexico is one of the largest impact craters identified on the Earth, has a diameter of 180-200 km, and is associated with the mass extinctions at the K/T boundary. The Yax-1 hole was drilled in 2001 and 2002 on the Yaxcopoil hacienda near Merida on the Yucatan Peninsula. Yax-1 is located just outside of the transient cavity, which explains some of the unusual characteristics of the core stratigraphy. No typical impact melt sheet was encountered in the hole and most of the Yax-1 impactites are breccias. In particular, the impact melt and breccias are only 100 m thick which is surprising taking into account the considerably thicker breccia accumulations towards the center of the structure and farther outside the transient crater encountered by other drill holes.

  16. Coupled Biogeochemical and Hydrologic Processes Governing Arsenic Mobility Within Sediments of Southeast Asia

    NASA Astrophysics Data System (ADS)

    Kocar, B. D.; Polizzotto, M. L.; Ying, S. C.; Benner, S. G.; Sampson, M.; Fendorf, S.

    2008-12-01

    Weathering of As-bearing rocks in the Himalayas has resulted in the transport of sediments down major river systems such as the Brahmaputra, Ganges, Red, Irrawaddy, and Mekong. Groundwater in these river basins commonly has As concentrations exceeding the World Health Organization's recommended drinking water limit (10 μg L-1) by more than an order of magnitude. Coupling of hydrology and biogeochemical processes underlies the elevated concentrations of As in these aquifers, necessitating studies that allow their deconvolution. Furthermore, to fully elucidate the biogeochemical mechanisms of sedimentary As release, the thermodynamic favorability of controlling biogeochemical reactions must be considered. We therefore used a combination of spectroscopic and wet chemical measurements to resolve the dominant processes controlling As release and transport in surficial soils/sediments within an As-afflicted field area of the Mekong delta. Based on these measurements, we assess the thermodynamic potential for As, Fe, and S reduction to transpire--major processes influencing As release and mobility. Our results illustrate that clays (0-12 m deep) underlying oxbow and wetland environments are subjected to continuously reducing conditions owing to ample carbon input and sustained saturation. Ensuing reductive mobilization of As from As-bearing Fe (hydr)oxides results in its migration to the underlying sandy aquifer (>12 m deep). Reactive transport modeling using PHREEQC and MIN3P, constrained with chemical and hydrologic field measurements, provides a calibrated illustration of As release and transport occurring within the clays underlying organic-rich, permanently inundated locations. These areas provide sufficient As to the aqueous phase for widespread contamination of the aquifer, and release is predicted to occur for several thousand years prior to depletion of As from the solid phase.

  17. Colloid/Nanoparticle mobility determining processes investigated by laser- and synchrotron based techniques

    NASA Astrophysics Data System (ADS)

    Schäfer, Thorsten; Huber, Florian; Temgoua, Louis; Claret, Francis; Darbha, Gopala; Chagneau, Aurélie; Fischer, Cornelius; Jacobsen, Chris

    2014-05-01

    Transport of pollutants can occur in the aqueous phase or, for strongly sorbing pollutants, in association with mobile solid phases spanning the range from a few nanometers up to approximately 1 μm, usually called colloids or nanoparticles [1,2]. A new class of pollutants is engineered nanoparticles (ENPs), whose properties differ substantially from those of bulk materials of the same composition and cannot be scaled by simple surface area corrections. Potentially harmful interactions with biological systems and the environment are a new field of research [3]. A challenge with respect to understanding and predicting contaminant mobility is the contaminant speciation, the aquifer surface interaction, and the mobility of nanoparticles. Especially for colloid/nanoparticle-associated contaminant transport, metal sorption reversibility is a key element for long-term mobility prediction. The spatial resolution needed clearly demands nanoscopic techniques, which benefit from new technical developments in the laser and synchrotron community [4]. Furthermore, high energy resolution is needed to resolve either different chemical species or the oxidation state of redox-sensitive elements. In the context of successfully planning remediation strategies for contaminated sites, this chemical information is categorically needed. In addition, chemical sensitivity as well as post-processing methods extracting trace chemical information from a complex geo-matrix are required. The presentation will give examples of homogeneous and heterogeneous nucleation of nanoparticles [5], the speciation of radionuclides through incorporation in these newly formed phases [6], the changes of surface roughness and charge heterogeneity and their impact on nanoparticle mobility [7], and the sorption of organic colloids on mineral surfaces leading to functional group fractionation and consequently different metal binding environments as unraveled by time-resolved laser fluorescence measurements [8

  18. Timeseries Signal Processing for Enhancing Mobile Surveys: Learning from Field Studies

    NASA Astrophysics Data System (ADS)

    Risk, D. A.; Lavoie, M.; Marshall, A. D.; Baillie, J.; Atherton, E. E.; Laybolt, W. D.

    2015-12-01

    Vehicle-based surveys using laser and other analyzers are now commonplace in research and industry. In many cases when these studies target biologically relevant gases like methane and carbon dioxide, the minimum detection limits are often coarse (ppm) relative to the analyzer's capabilities (ppb), because of the inherent variability in the ambient background concentrations across the landscape that creates noise and uncertainty. This variation arises from localized biological sinks and sources, but also atmospheric turbulence, air pooling, and other factors. Computational processing routines are widely used in many fields to increase resolution of a target signal in temporally dense data, and offer promise for enhancing mobile surveying techniques. Signal processing routines can either help identify anomalies at very low levels or be used inversely to remove localized industrially emitted anomalies from ecological data. This presentation integrates lessons learned from various studies in which simple signal processing routines were used successfully to isolate different temporally varying components of 1 Hz timeseries measured with laser- and UV fluorescence-based analyzers. As illustrative datasets, we present results from industrial fugitive emission studies from across Canada's western provinces and other locations, and also an ecological study that aimed to model near-surface concentration variability across different biomes within eastern Canada. In these cases, signal processing algorithms contributed significantly to the clarity of both industrial and ecological processes. In some instances, signal processing was too computationally intensive for real-time in-vehicle processing, but we identified workarounds for analyzer-embedded software that contributed to an improvement in real-time resolution of small anomalies. Signal processing is a natural accompaniment to these datasets, and many avenues are open to researchers who wish to enhance existing, and future
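
    A representative routine of this kind, sketched here under assumptions rather than taken from the studies, subtracts a rolling-median baseline from the 1 Hz record so that short-lived plume anomalies stand out against the drifting ambient background:

        import numpy as np

        def rolling_median(x, window):
            """Rolling-median baseline; window is in samples (seconds at 1 Hz)."""
            pad = window // 2
            xp = np.pad(x, pad, mode="edge")
            return np.array([np.median(xp[i:i + window]) for i in range(len(x))])

        rng = np.random.default_rng(1)
        t = np.arange(3600)  # one hour of 1 Hz data
        background = 1.9 + 0.1 * np.sin(2 * np.pi * t / 1800)  # drifting CH4 (ppm)
        signal = background + rng.normal(0.0, 0.01, t.size)
        signal[1200:1215] += 0.25  # a 15-second plume crossing

        anomaly = signal - rolling_median(signal, window=121)  # ~2 min baseline
        hits = np.where(anomaly > 5 * np.std(anomaly[:1000]))[0]
        print(hits.min(), hits.max())  # ~1200 .. ~1214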

  19. Graphic Grown Up

    ERIC Educational Resources Information Center

    Kim, Ann

    2009-01-01

    It's no secret that children and YAs are clued in to graphic novels (GNs) and that comics-loving adults are positively giddy that this format is getting the recognition it deserves. Still, there is a whole swath of library card-carrying grown-up readers out there with no idea where to start. Splashy movies such as "300" and "Spider-Man" and their…

  20. Graphical Contingency Analysis Tool

    SciTech Connect

    2010-03-02

    GCA is a visual analytic tool for power grid contingency analysis that provides more decision support for power grid operations. GCA allows power grid operators to quickly gain situational awareness of the power grid by converting large amounts of operational data to the graphic domain with a color-contoured map; identify system trends and foresee and discern emergencies by performing trending analysis; identify the relationships between system configurations and affected assets by conducting clustering analysis; and identify the best action by interactively evaluating candidate actions.

  1. A stable solution-processed polymer semiconductor with record high-mobility for printed transistors

    PubMed Central

    Li, Jun; Zhao, Yan; Tan, Huei Shuan; Guo, Yunlong; Di, Chong-An; Yu, Gui; Liu, Yunqi; Lin, Ming; Lim, Suo Hon; Zhou, Yuhua; Su, Haibin; Ong, Beng S.

    2012-01-01

    Microelectronic circuits/arrays produced via high-speed printing instead of traditional photolithographic processes offer an appealing approach to creating the long-sought after, low-cost, large-area flexible electronics. Foremost among critical enablers to propel this paradigm shift in manufacturing is a stable, solution-processable, high-performance semiconductor for printing functionally capable thin-film transistors — fundamental building blocks of microelectronics. We report herein the processing and optimisation of solution-processable polymer semiconductors for thin-film transistors, demonstrating very high field-effect mobility, high on/off ratio, and excellent shelf-life and operating stabilities under ambient conditions. Exceptionally high-gain inverters and functional ring oscillator devices on flexible substrates have been demonstrated. This optimised polymer semiconductor represents a significant progress in semiconductor development, dispelling prevalent skepticism surrounding practical usability of organic semiconductors for high-performance microelectronic devices, opening up application opportunities hitherto functionally or economically inaccessible with silicon technologies, and providing an excellent structural framework for fundamental studies of charge transport in organic systems. PMID:23082244

  2. John Herschel's Graphical Method

    NASA Astrophysics Data System (ADS)

    Hankins, Thomas L.

    2011-01-01

    In 1833 John Herschel published an account of his graphical method for determining the orbits of double stars. He had hoped to be the first to determine such orbits, but Felix Savary in France and Johann Franz Encke in Germany beat him to the punch using analytical methods. Herschel was convinced, however, that his graphical method was much superior to analytical methods, because it used the judgment of the hand and eye to correct the inevitable errors of observation. Line graphs of the kind used by Herschel became common only in the 1830s, so Herschel was introducing a new method. He also found computation fatiguing and devised a "wheeled machine" to help him out. Encke was skeptical of Herschel's methods. He said that he lived for calculation and that the English would be better astronomers if they calculated more. It is difficult to believe that the entire Scientific Revolution of the 17th century took place without graphs and that only a few examples appeared in the 18th century. Herschel promoted the use of graphs, not only in astronomy, but also in the study of meteorology and terrestrial magnetism. Because he was the most prominent scientist in England, Herschel's advocacy greatly advanced graphical methods.

  3. Linking geochemical processes in mud volcanoes with arsenic mobilization driven by organic matter.

    PubMed

    Liu, Chia-Chuan; Kar, Sandeep; Jean, Jiin-Shuh; Wang, Chung-Ho; Lee, Yao-Chang; Sracek, Ondra; Li, Zhaohui; Bundschuh, Jochen; Yang, Huai-Jen; Chen, Chien-Yen

    2013-11-15

    The present study deals with geochemical characterization of mud fluids and sediments collected from Kunshuiping (KSP), Liyushan (LYS), Wushanting (WST), Sinyangnyuhu (SYNH), Hsiaokunshui (HKS) and Yenshuikeng (YSK) mud volcanoes in southwestern Taiwan. Chemical constituents (cations, anions, trace elements, organic carbon, humic acid, and stable isotopes) in both fluids and mud were analyzed to investigate the geochemical processes and spatial variability among the mud volcanoes under consideration. Analytical results suggested that the anoxic mud volcanic fluids are highly saline, implying connate water as the probable source. The isotopic signature indicated that δ(18)O-rich fluids may be associated with silicate and carbonate mineral released through water-rock interaction, along with dehydration of clay minerals. Considerable amounts of arsenic in mud irrespective of fluid composition suggested possible release through biogeochemical processes in the subsurface environment. Sequential extraction of As from the mud indicated that As was mostly present in organic and sulphidic phases, and adsorbed on amorphous Mn oxyhydroxides. Volcanic mud and fluids are rich in organic matter (in terms of organic carbon), and the presence of humic acid in mud has implications for the binding of arsenic. Functional groups of humic acid also showed variable sources of organic matter among the mud volcanoes being examined. Because arsenate concentration in the mud fluids was found to be independent from geochemical factors, it was considered that organic matter may induce arsenic mobilization through an adsorption/desorption mechanism with humic substances under reducing conditions. Organic matter therefore plays a significant role in the mobility of arsenic in mud volcanoes. PMID:22809631

  4. Evaluation of a Mobile Hot Cell Technology for Processing Idaho National Laboratory Remote-Handled Wastes

    SciTech Connect

    B.J. Orchard; L.A. Harvego; R.P. Miklos; F. Yapuncich; L. Care

    2009-03-01

    The Idaho National Laboratory (INL) currently does not have the necessary capabilities to process all remote-handled wastes resulting from the Laboratory’s nuclear-related missions. Over the years, various U.S. Department of Energy (DOE)-sponsored programs undertaken at the INL have produced radioactive wastes and other materials that are categorized as remote-handled (contact radiological dose rate > 200 mR/hr). These materials include Spent Nuclear Fuel (SNF), transuranic (TRU) waste, waste requiring geological disposal, low-level waste (LLW), mixed waste (both radioactive and hazardous per the Resource Conservation and Recovery Act [RCRA]), and activated and/or radioactively-contaminated reactor components. The waste consists primarily of uranium, plutonium, other TRU isotopes, and shorter-lived isotopes such as cesium and cobalt, with radiological dose rates up to 20,000 R/hr. The hazardous constituents in the waste consist primarily of reactive metals (i.e., sodium and sodium-potassium alloy [NaK]), which are reactive and ignitable per RCRA, making the waste difficult to handle and treat. A smaller portion of the waste is contaminated with other hazardous components (i.e., RCRA toxicity characteristic metals). Several analyses of alternatives to provide the required remote-handling and treatment capability to manage INL’s remote-handled waste have been conducted over the years and have included various options ranging from modification of existing hot cells to construction of new hot cells. Previous analyses have identified a mobile processing unit as an alternative for providing the required remote-handled waste processing capability; however, it was summarily dismissed as a potentially viable alternative based on limitations of a specific design considered. In 2008 INL solicited expressions of interest from vendors who could provide existing, demonstrated technology that could be applied to the retrieval, sorting, treatment (as required), and

  5. Career Opportunities in Computer Graphics.

    ERIC Educational Resources Information Center

    Langer, Victor

    1983-01-01

    Reviews the impact of computer graphics on industrial productivity. Details the computer graphics technician curriculum at Milwaukee Area Technical College and the cooperative efforts of business and industry to fund and equip the program. (SK)

  6. Span graphics display utilities handbook, first edition

    NASA Technical Reports Server (NTRS)

    Gallagher, D. L.; Green, J. L.; Newman, R.

    1985-01-01

    The Space Physics Analysis Network (SPAN) is a computer network connecting scientific institutions throughout the United States. This network provides an avenue for timely, correlative research between investigators, in a multidisciplinary approach to space physics studies. An objective in the development of SPAN is to make available direct and simplified procedures that scientists can use, without specialized training, to exchange information over the network. Information exchanges include raw and processed data, analysis programs, correspondence, documents, and graphic images. This handbook details procedures that can be used to exchange graphic images over SPAN. The intent is to periodically update this handbook to reflect the constantly changing facilities available on SPAN. The utilities described within reflect an earnest attempt to provide useful descriptions of working utilities that can be used to transfer graphic images across the network. Whether graphic images are representative of satellite observations or theoretical modeling, and whether they are device dependent or independent, the SPAN graphics display utilities handbook will be the user's guide to graphic image exchange.

  7. Approximate Counting of Graphical Realizations

    PubMed Central

    2015-01-01

    In 1999 Kannan, Tetali and Vempala proposed a MCMC method to uniformly sample all possible realizations of a given graphical degree sequence and conjectured its rapidly mixing nature. Recently their conjecture was proved affirmative for regular graphs (by Cooper, Dyer and Greenhill, 2007), for regular directed graphs (by Greenhill, 2011) and for half-regular bipartite graphs (by Miklós, Erdős and Soukup, 2013). Several heuristics on counting the number of possible realizations exist (via sampling processes), and while they work well in practice, so far no approximation guarantees exist for such an approach. This paper is the first to develop a method for counting realizations with provable approximation guarantee. In fact, we solve a slightly more general problem; besides the graphical degree sequence a small set of forbidden edges is also given. We show that for the general problem (which contains the Greenhill problem and the Miklós, Erdős and Soukup problem as special cases) the derived MCMC process is rapidly mixing. Further, we show that this new problem is self-reducible therefore it provides a fully polynomial randomized approximation scheme (a.k.a. FPRAS) for counting of all realizations. PMID:26161994
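
    The Markov chain at the heart of this line of work is the double edge-swap ("switch") chain: pick two edges uniformly at random and rewire their endpoints whenever the result is still a simple graph, which leaves the degree sequence invariant. A minimal sketch of the sampler only (the paper's contribution, handling forbidden edges and turning sampling into FPRAS counting via self-reducibility, is not reproduced here):

        import random

        def switch_chain_step(edges, edge_set):
            """One step of the double edge-swap (switch) Markov chain.

            edges: list of frozenset pairs; edge_set: set of the same pairs
            for O(1) lookups. Every accepted swap preserves all degrees.
            """
            i, j = random.sample(range(len(edges)), 2)
            (a, b), (c, d) = tuple(edges[i]), tuple(edges[j])
            if random.random() < 0.5:  # symmetric choice of rewiring
                c, d = d, c
            new1, new2 = frozenset((a, c)), frozenset((b, d))
            # Reject swaps that would create loops or multi-edges (lazy chain).
            if a == c or b == d or new1 in edge_set or new2 in edge_set:
                return
            edge_set.difference_update((edges[i], edges[j]))
            edge_set.update((new1, new2))
            edges[i], edges[j] = new1, new2

        # Sample realizations of the degree sequence of a 6-cycle (all degrees 2).
        edges = [frozenset((k, (k + 1) % 6)) for k in range(6)]
        edge_set = set(edges)
        for _ in range(10000):
            switch_chain_step(edges, edge_set)
        print(sorted(tuple(sorted(e)) for e in edges))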

  8. Approximate Counting of Graphical Realizations.

    PubMed

    Erdős, Péter L; Kiss, Sándor Z; Miklós, István; Soukup, Lajos

    2015-01-01

    In 1999 Kannan, Tetali and Vempala proposed a MCMC method to uniformly sample all possible realizations of a given graphical degree sequence and conjectured its rapidly mixing nature. Recently their conjecture was proved affirmative for regular graphs (by Cooper, Dyer and Greenhill, 2007), for regular directed graphs (by Greenhill, 2011) and for half-regular bipartite graphs (by Miklós, Erdős and Soukup, 2013). Several heuristics on counting the number of possible realizations exist (via sampling processes), and while they work well in practice, so far no approximation guarantees exist for such an approach. This paper is the first to develop a method for counting realizations with provable approximation guarantee. In fact, we solve a slightly more general problem; besides the graphical degree sequence a small set of forbidden edges is also given. We show that for the general problem (which contains the Greenhill problem and the Miklós, Erdős and Soukup problem as special cases) the derived MCMC process is rapidly mixing. Further, we show that this new problem is self-reducible therefore it provides a fully polynomial randomized approximation scheme (a.k.a. FPRAS) for counting of all realizations. PMID:26161994

  9. Computer graphics in aerodynamic analysis

    NASA Technical Reports Server (NTRS)

    Cozzolongo, J. V.

    1984-01-01

    The use of computer graphics and its application to aerodynamic analyses on a routine basis is outlined. The mathematical modelling of the aircraft geometries and the shading technique implemented are discussed. Examples of computer graphics used to display aerodynamic flow field data and aircraft geometries are shown. A future need in computer graphics for aerodynamic analyses is addressed.

  10. Graphic Novels and School Libraries

    ERIC Educational Resources Information Center

    Rudiger, Hollis Margaret; Schliesman, Megan

    2007-01-01

    School libraries serving children and teenagers today should be committed to collecting graphic novels to the extent that their budgets allow. However, the term "graphic novel" is enough to make some librarians--not to mention administrators and parents--pause. Graphic novels are simply book-length comics. They can be works of fiction or…

  11. Low Cost Graphics. Second Edition.

    ERIC Educational Resources Information Center

    Tinker, Robert F.

    This manual describes the CALM TV graphics interface, a low-cost means of producing quality graphics on an ordinary TV. The system permits the output of data in graphic as well as alphanumeric form and the input of data from the face of the TV using a light pen. The integrated circuits required in the interface can be obtained from standard…

  12. Selecting Mangas and Graphic Novels

    ERIC Educational Resources Information Center

    Nylund, Carol

    2007-01-01

    The decision to add graphic novels, and particularly the Japanese styled called manga, was one the author has debated for a long time. In this article, the author shares her experience when she purchased graphic novels and mangas to add to her library collection. She shares how graphic novels and mangas have revitalized the library.

  13. Graphics performance in rich Internet applications.

    PubMed

    Hoetzlein, Rama C

    2012-01-01

    Rendering performance for rich Internet applications (RIAs) has recently focused on the debate between using Flash and HTML5 for streaming video and gaming on mobile devices. A key area not widely explored, however, is the scalability of raw bitmap graphics performance for RIAs. Does Flash render animated sprites faster than HTML5? How much faster is WebGL than Flash? Answers to these questions are essential for developing large-scale data visualizations, online games, and truly dynamic websites. A new test methodology analyzes graphics performance across RIA frameworks and browsers, revealing specific performance outliers in existing frameworks. The results point toward a future in which all online experiences might be GPU accelerated. PMID:24806992

  14. Graphical Contingency Analysis Tool

    Energy Science and Technology Software Center (ESTSC)

    2010-03-02

    GCA is a visual analytic tool for power grid contingency analysis that provides more decision support for power grid operations. GCA allows power grid operators to quickly gain situational awareness of the power grid by converting large amounts of operational data to the graphic domain with a color-contoured map; identify system trends and foresee and discern emergencies by performing trending analysis; identify the relationships between system configurations and affected assets by conducting clustering analysis; and identify the best action by interactively evaluating candidate actions.

  15. Graphical timeline editing

    NASA Technical Reports Server (NTRS)

    Meyer, Patrick E.; Jaap, John P.

    1994-01-01

    NASA's Experiment Scheduling Program (ESP), which has been used for approximately 12 Spacelab missions, is being enhanced with the addition of a Graphical Timeline Editor. The GTE Clipboard, as it is called, was developed to demonstrate new technology which will lead the development of International Space Station Alpha's Payload Planning System and support the remaining Spacelab missions. ESP's GTE Clipboard is developed in C using MIT's X Windows System X11R5 and follows OSF/Motif Style Guide Revision 1.2.

  16. Design and Certification of the Extravehicular Activity Mobility Unit (EMU) Water Processing Jumper

    NASA Technical Reports Server (NTRS)

    Peterson, Laurie J.; Neumeyer, Derek J.; Lewis, John F.

    2006-01-01

    The Extravehicular Mobility Units (EMUs) onboard the International Space Station (ISS) experienced a failure due to cooling water contamination from biomass and corrosion byproducts forming solids around the EMU pump rotor. The coolant had no biocide and a low pH, which induced biofilm growth and corrosion precipitates, respectively. NASA JSC was tasked with building hardware to clean the ionic, organic, and particulate load from the EMU coolant loop before and after Extravehicular Activities (EVAs). Based on a return sample of the EMU coolant loop, the chemical load was well understood, but there was not sufficient volume of the returned sample to analyze particulates. Through work with EMU specialists, chemists, EVA Mission Operations Directorate (MOD) representatives, safety and mission assurance, astronaut crew, and team engineers, requirements were developed for the EMU Water Processing hardware (sometimes referred to as the Airlock Coolant Loop Recovery [A/L CLR] system). Those requirements covered the operable level of ionic, organic, and particulate load; interfaces to the EMU; maximum cycle time; operating pressure drop, flow rate, and temperature; leakage rates; and biocide levels for storage. Design work began in February 2005 and certification was completed in April 2005 to support a return-to-flight launch date of May 12, 2005. This paper will discuss the details of the design and certification of the EMU Water Processing hardware and its components.

  17. An atomic orbital-based formulation of analytical gradients and nonadiabatic coupling vector elements for the state-averaged complete active space self-consistent field method on graphical processing units.

    PubMed

    Snyder, James W; Hohenstein, Edward G; Luehr, Nathan; Martínez, Todd J

    2015-10-21

    We recently presented an algorithm for state-averaged complete active space self-consistent field (SA-CASSCF) orbital optimization that capitalizes on sparsity in the atomic orbital basis set to reduce the scaling of computational effort with respect to molecular size. Here, we extend those algorithms to calculate the analytic gradient and nonadiabatic coupling vectors for SA-CASSCF. Combining the low computational scaling with acceleration from graphical processing units allows us to perform SA-CASSCF geometry optimizations for molecules with more than 1000 atoms. The new approach will make minimal energy conical intersection searches and nonadiabatic dynamics routine for molecular systems with O(10^2) atoms. PMID:26493897

  18. An atomic orbital-based formulation of analytical gradients and nonadiabatic coupling vector elements for the state-averaged complete active space self-consistent field method on graphical processing units

    SciTech Connect

    Snyder, James W.; Hohenstein, Edward G.; Luehr, Nathan; Martínez, Todd J.

    2015-10-21

    We recently presented an algorithm for state-averaged complete active space self-consistent field (SA-CASSCF) orbital optimization that capitalizes on sparsity in the atomic orbital basis set to reduce the scaling of computational effort with respect to molecular size. Here, we extend those algorithms to calculate the analytic gradient and nonadiabatic coupling vectors for SA-CASSCF. Combining the low computational scaling with acceleration from graphical processing units allows us to perform SA-CASSCF geometry optimizations for molecules with more than 1000 atoms. The new approach will make minimal energy conical intersection searches and nonadiabatic dynamics routine for molecular systems with O(10^2) atoms.

  19. Graphical Model Theory for Wireless Sensor Networks

    SciTech Connect

    Davis, William B.

    2002-12-08

    Information processing in sensor networks, with many small processors, demands a theory of computation that allows the minimization of processing effort, and the distribution of this effort throughout the network. Graphical model theory provides a probabilistic theory of computation that explicitly addresses complexity and decentralization for optimizing network computation. The junction tree algorithm, for decentralized inference on graphical probability models, can be instantiated in a variety of applications useful for wireless sensor networks, including: sensor validation and fusion; data compression and channel coding; expert systems, with decentralized data structures, and efficient local queries; pattern classification, and machine learning. Graphical models for these applications are sketched, and a model of dynamic sensor validation and fusion is presented in more depth, to illustrate the junction tree algorithm.
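
    On a star- or chain-structured model the junction tree algorithm reduces to simple message passing, which makes the sensor-fusion application easy to sketch. A toy example under assumed numbers (illustrative only, not from the report): three sensors observe a common binary state through a known noise model, and decentralized fusion multiplies their messages into a belief:

        import numpy as np

        prior = np.array([0.5, 0.5])  # p(state) for a hidden binary state

        # likelihood[r, s] = p(reading r | state s); illustrative noise model.
        likelihood = np.array([[0.8, 0.3],
                               [0.2, 0.7]])

        def fuse(readings):
            """Multiply sensor messages into the state marginal (belief).

            Equivalent to one collect pass of the junction tree algorithm on a
            star-shaped model: belief(s) proportional to p(s) * prod_i p(r_i | s).
            """
            belief = prior.copy()
            for r in readings:
                belief *= likelihood[r]  # incoming message from one sensor
            return belief / belief.sum()

        print(fuse([0, 0, 1]))  # two sensors report 0, one reports 1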

  20. Finding the Sweet Spot: Network Structures and Processes for Increased Knowledge Mobilization

    ERIC Educational Resources Information Center

    Briscoe, Patricia; Pollock, Katina; Campbell, Carol; Carr-Harris, Shasta

    2015-01-01

    The use of networks in public education is one of many knowledge mobilization (KMb) strategies utilized to promote evidence-based research into practice. However, challenges exist in the ability to mobilize knowledge through networks. The purpose of this paper is to explore how networks work. Data were collected from virtual discussions for an…

  1. Making Higher Education More European through Student Mobility? Revisiting EU Initiatives in the Context of the Bologna Process

    ERIC Educational Resources Information Center

    Papatsiba, Vassiliki

    2006-01-01

    This paper focuses on the analysis of student mobility in the EU as a means to stimulate convergence of diverse higher education systems. The argument is based on official texts and other texts of political communication of the European Commission. The following discussion is placed within the current context of the Bologna process and its aim to…

  2. The Effects of Image-Based Concept Mapping on the Learning Outcomes and Cognitive Processes of Mobile Learners

    ERIC Educational Resources Information Center

    Yen, Jung-Chuan; Lee, Chun-Yi; Chen, I-Jung

    2012-01-01

    The purpose of this study was to investigate the effects of different teaching strategies (text-based concept mapping vs. image-based concept mapping) on the learning outcomes and cognitive processes of mobile learners. Eighty-six college freshmen enrolled in the "Local Area Network Planning and Implementation" course taught by the first author…

  3. A mobile monitoring system to understand the processes controlling episodic events in Corpus Christi Bay.

    PubMed

    Islam, Mohammad Shahidul; Bonner, James S; Ojo, Temitope O; Page, Cheryl

    2011-04-01

    Corpus Christi Bay (TX, USA) is a shallow, wind-driven bay and can therefore be characterized as a highly pulsed system. It cycles through various episodic events such as hypoxia, water column stratification, sediment resuspension, and flooding. Understanding the processes that control these events requires an efficient observation system that can measure various hydrodynamic and water quality parameters at the multitude of spatial and temporal scales of interest. As part of our effort to implement an efficient observation system for Corpus Christi Bay, a mobile monitoring system was developed that can acquire and visualize data measured by various submersible sensors on an undulating tow-body deployed behind a research vessel. Along with this system, we have installed a downward-looking Acoustic Doppler Current Profiler to measure the vertical profile of water currents. Real-time display of each measured parameter intensity (measured value relative to a pre-set peak value) guides the selection of the transect route to capture the event of interest. In addition, large synchronized datasets measured by this system provide an opportunity to understand the processes that control various episodic events in the bay. To illustrate the capability of this system, datasets from two research cruises are presented in this paper that help to clarify processes inducing an inverse estuary condition at the mouth of the ship channel and hypoxia at the bottom of the bay. These measured datasets can also be used to drive numerical models to understand various environmental phenomena that control the water quality of the bay. PMID:20556650

  4. Ash iron mobilization through physicochemical processing in volcanic eruption plumes: a numerical modeling approach

    NASA Astrophysics Data System (ADS)

    Hoshyaripour, G. A.; Hort, M.; Langmann, B.

    2015-08-01

    It has been shown that volcanic ash fertilizes the Fe-limited areas of the surface ocean by releasing soluble iron. As ash iron is mostly insoluble upon eruption, it is hypothesized that heterogeneous in-plume and in-cloud processing of the ash promotes iron solubilization. Direct evidence concerning such processes is, however, lacking. In this study, a 1-D numerical model is developed to simulate the physicochemical interactions of gas, ash, and aerosol in volcanic eruption plumes, focusing on the iron mobilization processes at temperatures between 600 and 0 °C. Results show that sulfuric acid and water vapor condense on the ash surface at ~150 and ~50 °C, respectively. This liquid phase then efficiently scavenges the surrounding gases (>95% of HCl, 3-20% of SO2, and 12-62% of HF), forming an extremely acidic coating at the ash surface. The low pH conditions of the aqueous film promote acid-mediated dissolution of the Fe-bearing phases present in the ash material. We estimate that 0.1-33% of the total iron available at the ash surface is dissolved in the aqueous phase before the freezing point is reached. The efficiency of dissolution is controlled by the halogen content of the erupted gas as well as the mineralogy of the iron at the ash surface: elevated halogen concentrations and the presence of Fe2+-carrying phases lead to the highest dissolution efficiency. Findings of this study are in agreement with data obtained through leaching experiments.

  5. Dietary Assessment on a Mobile Phone Using Image Processing and Pattern Recognition Techniques: Algorithm Design and System Prototyping

    PubMed Central

    Probst, Yasmine; Nguyen, Duc Thanh; Tran, Minh Khoi; Li, Wanqing

    2015-01-01

    Dietary assessment, while traditionally based on pen-and-paper, is rapidly moving towards automatic approaches. This study describes an Australian automatic food record method and its prototype for dietary assessment via the use of a mobile phone and techniques of image processing and pattern recognition. Common visual features including scale invariant feature transformation (SIFT), local binary patterns (LBP), and colour are used for describing food images. The popular bag-of-words (BoW) model is employed for recognizing the images taken by a mobile phone for dietary assessment. Technical details are provided together with discussions on the issues and future work. PMID:26225994
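
    The bag-of-words pipeline has three steps: extract local descriptors from each image, quantize them against a learned codebook, and classify the resulting word histogram. A hedged sketch with random stand-in descriptors (a real system would extract SIFT or LBP features from the food photographs, as the paper describes):

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(0)

        def bow_histogram(descriptors, codebook):
            """Quantize local descriptors and return a normalized word histogram."""
            words = codebook.predict(descriptors)
            hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
            return hist / hist.sum()

        # Stand-in "descriptors": 40 mock images, 100 descriptors of length 128
        # each, drawn from two class-dependent distributions (mock food classes).
        images = [rng.normal(loc=(i % 2), scale=1.0, size=(100, 128)) for i in range(40)]
        labels = [i % 2 for i in range(40)]

        codebook = KMeans(n_clusters=32, n_init=4, random_state=0)
        codebook.fit(np.vstack(images))

        X = np.array([bow_histogram(d, codebook) for d in images])
        clf = LinearSVC().fit(X, labels)
        print(clf.score(X, labels))  # training accuracy on the mock data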

  6. Dietary Assessment on a Mobile Phone Using Image Processing and Pattern Recognition Techniques: Algorithm Design and System Prototyping.

    PubMed

    Probst, Yasmine; Nguyen, Duc Thanh; Tran, Minh Khoi; Li, Wanqing

    2015-08-01

    Dietary assessment, while traditionally based on pen-and-paper, is rapidly moving towards automatic approaches. This study describes an Australian automatic food record method and its prototype for dietary assessment via the use of a mobile phone and techniques of image processing and pattern recognition. Common visual features including scale invariant feature transformation (SIFT), local binary patterns (LBP), and colour are used for describing food images. The popular bag-of-words (BoW) model is employed for recognizing the images taken by a mobile phone for dietary assessment. Technical details are provided together with discussions on the issues and future work. PMID:26225994

  7. On-site installation and shielding of a mobile electron accelerator for radiation processing

    NASA Astrophysics Data System (ADS)

    Catana, Dumitru; Panaitescu, Julian; Axinescu, Silviu; Manolache, Dumitru; Matei, Constantin; Corcodel, Calin; Ulmeanu, Magdalena; Bestea, Virgil

    1995-05-01

    The development of radiation processing of some bulk products, e.g. grains or potatoes, would be sustained if the irradiation were carried out at the place of storage, i.e. the silo. A promising solution is proposed consisting of a mobile electron accelerator, installed on a couple of trucks and traveling from one customer to another. The energy of the accelerated electrons was chosen at 5 MeV, with 10 to 50 kW beam power. The irradiation is possible either with electrons or with bremsstrahlung. A major problem of the above solution is the provision of adequate shielding at the customer, with a minimum investment cost. Plans are presented for a bunker that houses the truck carrying the radiation head. The beam is vertical, directed downwards through the truck floor, a transport pipe, and a scanning horn. The irradiation takes place in a pit, through which the products are transported on a belt. The belt path is chosen so as to minimize openings in the shielding. Shielding calculations are presented assuming a working regime with 5 MeV bremsstrahlung. Leakage and scattered radiation are taken into account.

  8. [Hardware for graphics systems].

    PubMed

    Goetz, C

    1991-02-01

    In all personal computer applications, be it for private or professional use, the decision of which "brand" of computer to buy is of central importance. In the USA Apple computers are mainly used in universities, while in Europe computers of the so-called "industry standard" by IBM (or clones thereof) have been increasingly used for many years. Independently of any brand name considerations, the computer components purchased must meet the current (and projected) needs of the user. Graphic capabilities and standards, processor speed, the use of co-processors, as well as input and output devices such as "mouse", printers and scanners are discussed. This overview is meant to serve as a decision aid. Potential users are given a short but detailed summary of current technical features. PMID:2042260

  9. Solution-Processed Transistors Using Colloidal Nanocrystals with Composition-Matched Molecular "Solders": Approaching Single Crystal Mobility.

    PubMed

    Jang, Jaeyoung; Dolzhnikov, Dmitriy S; Liu, Wenyong; Nam, Sooji; Shim, Moonsub; Talapin, Dmitri V

    2015-10-14

    Crystalline silicon-based complementary metal-oxide-semiconductor transistors have become a dominant platform for today's electronics. For such devices, expensive and complicated vacuum processes are used in the preparation of active layers. This increases cost and restricts the scope of applications. Here, we demonstrate high-performance solution-processed CdSe nanocrystal (NC) field-effect transistors (FETs) that exhibit very high carrier mobilities (over 400 cm(2)/(V s)). This is comparable to the carrier mobilities of crystalline silicon-based transistors. Furthermore, our NC FETs exhibit high operational stability and MHz switching speeds. These NC FETs are prepared by spin coating colloidal solutions of CdSe NCs capped with molecular solders [Cd2Se3](2-) onto various oxide gate dielectrics followed by thermal annealing. We show that the nature of gate dielectrics plays an important role in soldered CdSe NC FETs. The capacitance of dielectrics and the NC electronic structure near gate dielectric affect the distribution of localized traps and trap filling, determining carrier mobility and operational stability of the NC FETs. We expand the application of the NC soldering process to core-shell NCs consisting of a III-V InAs core and a CdSe shell with composition-matched [Cd2Se3](2-) molecular solders. Soldering CdSe shells forms nanoheterostructured material that combines high electron mobility and near-IR photoresponse. PMID:26280943
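
    Mobility figures such as the 400 cm^2/(V s) quoted above are conventionally extracted from a transfer curve using the saturation-regime relation Id = (W*Ci*mu/(2L))*(Vg - Vt)^2, so mu follows from the slope of sqrt(Id) versus Vg. A sketch with hypothetical device numbers (assumed for illustration, not the paper's data):

        import numpy as np

        # Illustrative device parameters (assumed, not from the paper):
        W, L = 1000e-6, 50e-6  # channel width and length (m)
        Ci = 1.15e-4           # gate capacitance per area (F/m^2), ~300 nm SiO2

        # Mock saturation-regime transfer curve with a known mobility.
        mu_true, Vt = 400e-4, 2.0  # 400 cm^2/(V s) expressed in m^2/(V s)
        Vg = np.linspace(5, 30, 26)
        Id = (W * Ci * mu_true / (2 * L)) * (Vg - Vt) ** 2

        # Extraction: slope of sqrt(Id) vs Vg gives mu = 2*L*slope**2/(W*Ci).
        slope = np.polyfit(Vg, np.sqrt(Id), 1)[0]
        mu = 2 * L * slope ** 2 / (W * Ci)
        print(f"extracted mobility = {mu * 1e4:.0f} cm^2/(V s)")  # ~400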

  10. LONGLIB - A GRAPHICS LIBRARY

    NASA Technical Reports Server (NTRS)

    Long, D.

    1994-01-01

    This library is a set of subroutines designed for vector plotting to CRT's, plotters, dot matrix, and laser printers. LONGLIB subroutines are invoked by program calls similar to standard CALCOMP routines. In addition to the basic plotting routines, LONGLIB contains an extensive set of routines to allow viewport clipping, extended character sets, graphic input, shading, polar plots, and 3-D plotting with or without hidden line removal. LONGLIB capabilities include surface plots, contours, histograms, logarithm axes, world maps, and seismic plots. LONGLIB includes master subroutines, which are self-contained series of commonly used individual subroutines. When invoked, the master routine will initialize the plotting package, and will plot multiple curves, scatter plots, log plots, 3-D plots, etc. and then close the plot package, all with a single call. Supported devices include VT100 equipped with Selanar GR100 or GR100+ boards, VT125s, VT240s, VT220 equipped with Selanar SG220, Tektronix 4010/4014 or 4107/4109 and compatibles, and Graphon GO-235 terminals. Dot matrix printer output is available by using the provided raster scan conversion routines for DEC LA50, Printronix printers, and high or low resolution Trilog printers. Other output devices include QMS laser printers, Postscript compatible laser printers, and HPGL compatible plotters. The LONGLIB package includes the graphics library source code, an on-line help library, scan converter and meta file conversion programs, and command files for installing, creating, and testing the library. The latest version, 5.0, is significantly enhanced and has been made more portable. Also, the new version's meta file format has been changed and is incompatible with previous versions. A conversion utility is included to port the old meta files to the new format. Color terminal plotting has been incorporated. LONGLIB is written in FORTRAN 77 for batch or interactive execution and has been implemented on a DEC VAX series

  11. GFI - EASY PC GRAPHICS

    NASA Technical Reports Server (NTRS)

    Katz, R. B.

    1994-01-01

    Easy PC Graphics (GFI) is a graphical plot program that permits data to be easily and flexibly plotted. Data is input in a standard format which allows easy data entry and evaluation. Multiple dependent axes are also supported. The program may either be run in a stand alone mode or be embedded in the user's own software. Automatic scaling is built in for several logarithmic and decibel scales. New scales are easily incorporated into the code through the use of object-oriented programming techniques. For the autoscale routines and the actual plotting code, data is not retrieved directly from a file, but a "method" delivers the data, performing scaling as appropriate. Each object (variable) has state information which selects its own scaling. GFI is written in Turbo Pascal version 6.0 for IBM PC compatible computers running MS-DOS. The source code will only compile properly with the Turbo Pascal v. 6.0 or v. 7.0 compilers; however, an executable is provided on the distribution disk. This executable requires at least 64K of RAM and DOS 3.1 or higher, as well as an HP LaserJet printer to print output plots. The standard distribution medium for this program is one 5.25 inch 360K MS-DOS format diskette. The contents of the diskette are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. An electronic copy of the documentation is provided on the distribution medium in ASCII format. GFI was developed in 1993.

  12. Reconstructed Sediment Mobilization Processes in a Large Reservoir Using Short Sediment Cores

    NASA Astrophysics Data System (ADS)

    Cockburn, J.; Feist, S.

    2014-12-01

    Williston Reservoir in northern British Columbia (56°10'31"N, 124°06'33"), formed when the W.A.C. Bennett Dam was built in the late 1960s, is the largest inland body of water in BC and facilitates hydroelectric power generation. Annually the reservoir level rises and lowers with hydroelectric dam operation, and this, combined with inputs from several river systems (Upper Peace, Finlay, Parsnip, and several smaller creeks), renews suspended sediment sources. Several short cores retrieved from shallow bays of the Finlay Basin reveal near-annual sedimentary units and distinct patterns related to both hydroclimate variability and the degree to which the reservoir lowered in a particular year. Thin sections and sedimentology from short cores collected in three bays are used to evaluate sediment mobilization processes. The primary sediment sources at each core location are linked to physical inputs from rivers draining into the bays, aeolian contributions, and reworked shoreline deposits as water levels fluctuate. Despite uniform water level lowering across the reservoir, sediment sequences differed at each site, reflecting the local stream inputs. However, distinct organic-rich units facilitated correlation across the sites. Notable differences in particle size distributions from each core point to important aeolian-derived sediment sources. Using these sedimentary records, we can evaluate the processes that contribute to sediment deposition in the basin. This work will contribute to decisions regarding reservoir water levels to reduce adverse impacts on health, economic activities, and recreation in the communities along the shores of the reservoir.

  13. Software Package For Real-Time Graphics

    NASA Technical Reports Server (NTRS)

    Malone, Jacqueline C.; Moore, Archie L.

    1991-01-01

    Software package for master graphics interactive console (MAGIC) at Western Aeronautical Test Range (WATR) of NASA Ames Research Center provides general-purpose graphical display system for real-time and post-real-time analysis of data. Written in C language and intended for use on workstation of interactive raster imaging system (IRIS) equipped with level-V Unix operating system. Enables flight researchers to create their own displays on the basis of individual requirements. Applicable to monitoring of complicated processes in chemical industry.

  14. Efficient Multiplication of Polynomials on Graphics Hardware

    NASA Astrophysics Data System (ADS)

    Emeliyanenko, Pavel

    We present an algorithm to multiply univariate polynomials with integer coefficients efficiently using the Number Theoretic Transform (NTT) on Graphics Processing Units (GPU). The same approach can be used to multiply large integers encoded as polynomials. Our algorithm exploits fused multiply-add capabilities of the graphics hardware. NTT multiplications are executed in parallel for a set of distinct primes, followed by reconstruction using the Chinese Remainder Theorem (CRT) on the GPU. Our benchmarks show NTT multiplication performance of up to 77 GMul/s. We compared our approach with CPU-based implementations of polynomial and large integer multiplication provided by the NTL and GMP libraries.
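
    The core building block is an NTT over Z_p, with p chosen so that the transform length divides p - 1; pointwise products in the transform domain give the polynomial product modulo p, and several such primes are combined by the CRT for full-size integer coefficients. A minimal CPU sketch of one modular NTT multiply (the paper's GPU kernels, fused multiply-add tricks, and CRT reconstruction are omitted):

        P, G = 998244353, 3  # P = 119 * 2**23 + 1 is prime; G is a primitive root

        def ntt(a, invert=False):
            """In-place radix-2 number theoretic transform over Z_P."""
            n = len(a)
            j = 0
            for i in range(1, n):  # bit-reversal permutation
                bit = n >> 1
                while j & bit:
                    j ^= bit
                    bit >>= 1
                j |= bit
                if i < j:
                    a[i], a[j] = a[j], a[i]
            length = 2
            while length <= n:
                w0 = pow(G, (P - 1) // length, P)
                if invert:
                    w0 = pow(w0, P - 2, P)  # modular inverse of the root
                for start in range(0, n, length):
                    w = 1
                    for k in range(start, start + length // 2):
                        u, v = a[k], a[k + length // 2] * w % P
                        a[k], a[k + length // 2] = (u + v) % P, (u - v) % P
                        w = w * w0 % P
                length <<= 1
            if invert:
                n_inv = pow(n, P - 2, P)
                for i in range(n):
                    a[i] = a[i] * n_inv % P

        def poly_mul(f, g):
            """Multiply integer polynomials; coefficients returned mod P."""
            n = 1
            while n < len(f) + len(g) - 1:
                n <<= 1
            fa = f + [0] * (n - len(f))
            ga = g + [0] * (n - len(g))
            ntt(fa)
            ntt(ga)
            fa = [x * y % P for x, y in zip(fa, ga)]
            ntt(fa, invert=True)
            return fa[:len(f) + len(g) - 1]

        print(poly_mul([1, 2, 3], [4, 5]))  # (1+2x+3x^2)(4+5x) -> [4, 13, 22, 15]

    Running one such transform per prime in parallel and recombining the residues with the CRT is what maps naturally onto the GPU in the approach described.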

  15. Data Analysis with Graphical Models: Software Tools

    NASA Technical Reports Server (NTRS)

    Buntine, Wray L.

    1994-01-01

    Probabilistic graphical models (directed and undirected Markov fields, and combined in chain graphs) are used widely in expert systems, image processing and other areas as a framework for representing and reasoning with probabilities. They come with corresponding algorithms for performing probabilistic inference. This paper discusses an extension to these models by Spiegelhalter and Gilks, plates, used to graphically model the notion of a sample. This offers a graphical specification language for representing data analysis problems. When combined with general methods for statistical inference, this also offers a unifying framework for prototyping and/or generating data analysis algorithms from graphical specifications. This paper outlines the framework and then presents some basic tools for the task: a graphical version of the Pitman-Koopman Theorem for the exponential family, problem decomposition, and the calculation of exact Bayes factors. Other tools already developed, such as automatic differentiation, Gibbs sampling, and use of the EM algorithm, make this a broad basis for the generation of data analysis software.

  16. Arrows: A Special Case of Graphic Communication.

    ERIC Educational Resources Information Center

    Hardin, Pris

    The purpose of this paper is to examine arrow design in relation to the type of pointing, connecting, or processing involved. Three possible approaches to the investigation of arrows as graphic communication include research by arrow function, research relating message structure to arrow design, and research linking user expectations to arrow design. The following…

  17. Interactive Learning for Graphic Design Foundations

    ERIC Educational Resources Information Center

    Chu, Sauman; Ramirez, German Mauricio Mejia

    2012-01-01

    One of the biggest problems for students majoring in pre-graphic design is their inability to apply their knowledge to different design solutions. The purpose of this study is to examine the effectiveness of interactive learning modules in facilitating knowledge acquisition during the learning process and to create interactive learning modules…

  18. Interactive computer graphics - Why's, wherefore's and examples

    NASA Technical Reports Server (NTRS)

    Gregory, T. J.; Carmichael, R. L.

    1983-01-01

    The benefits of using computer graphics in design are briefly reviewed. It is shown that computer graphics substantially aids productivity by permitting errors in design to be found immediately and by greatly reducing the cost of fixing the errors and the cost of redoing the process. The possibilities offered by computer-generated displays in terms of information content are emphasized, along with the form in which the information is transferred. The human being is ideally and naturally suited to dealing with information in picture format, and the content rate in communication with pictures is several orders of magnitude greater than with words or even graphs. Since science and engineering involve communicating ideas, concepts, and information, the benefits of computer graphics cannot be overestimated.

  19. Operations on Graphical Models with Plates

    NASA Technical Reports Server (NTRS)

    Buntine, Wray L.; Lum, Henry, Jr. (Technical Monitor)

    1994-01-01

    This paper explains how graphical models, for instance Bayesian or Markov networks, can be extended to model problems in data analysis and learning. This provides a unified framework that combines lessons learned from the artificial intelligence, statistical and connectionist communities. This also offers a set of principles for developing a software generator for data analysis, whereby a learning or discovery system can be compiled from specifications. Many of the popular learning algorithms can be compiled in this way from graphical specifications. While in a sense this paper is a multidisciplinary review of learning, the main contribution here is the presentation of the material within the unifying framework of graphical models, and the observation that, as a result, the process of developing learning algorithms can be partly automated.

  20. Big system: Interactive graphics for the engineer

    NASA Technical Reports Server (NTRS)

    Quenneville, C. E.

    1975-01-01

    The BCS Interactive Graphics System (BIG System) approach to graphics was presented, along with several significant engineering applications. The BIG System precompiler, the graphics support library, and the functional requirements of graphics applications are discussed. It was concluded that graphics standardization and device-independent code can be developed to assure maximum graphics terminal transferability.

  1. All-digital multicarrier demodulators for on-board processing satellites in mobile communication systems

    NASA Astrophysics Data System (ADS)

    Yim, Wan Hung

    Economical operation of future satellite systems for mobile communications can only be fulfilled by using dedicated on-board processing satellites, which would allow both cheap earth terminals and lower space segment costs. With on-board modems and codecs, the up-link and down-link can be optimized separately. An attractive scheme is to use frequency-division multiple access/single channel per carrier (FDMA/SCPC) on the up-link and time division multiplexing (TDM) on the down-link. This scheme allows mobile terminals to transmit a narrow-band, low-power signal, resulting in smaller dishes and high-power amplifiers (HPAs) with lower output power. On the up-link, there are hundreds to thousands of FDM channels to be demodulated on-board. The most promising approach is the use of all-digital multicarrier demodulators (MCDs), where analog and digital hardware are efficiently shared among channels, and digital signal processing (DSP) is used at an early stage to take advantage of very large scale integration (VLSI) implementation. An MCD consists of a channellizer for separation of the frequency division multiplexing (FDM) channels, followed by individual demodulators for each channel. Major research areas in MCDs are multirate DSP and optimal estimation for synchronization, which form the basis of the thesis. Complex signal theories are central to the development of structured approaches for the sampling and processing of bandpass signals, which are the foundations of both channellizer and demodulator design. In multirate DSP, polyphase theories replace many ad-hoc, tedious and error-prone design procedures. For example, a polyphase-matrix discrete Fourier transform (DFT) channellizer includes all efficient filter bank techniques as special cases. Also, a polyphase-lattice filter is derived, not only for sampling rate conversion, but also capable of sampling phase variation, which is required for symbol timing adjustment in all
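
    To make the channellizer concrete, here is a minimal NumPy sketch of a critically sampled uniform DFT filter bank in its weight/fold/FFT (polyphase) form; the prototype filter design, channel count, and test tone are illustrative assumptions, not parameters from the thesis.

        import numpy as np

        def channelize(x, h, M):
            """Uniform DFT filter bank: channel k of each output frame is the
            input mixed down from center frequency k/M cycles/sample, lowpass
            filtered by the prototype h, and decimated by M."""
            N = len(h)                                   # prototype length, a multiple of M
            frames = (len(x) - N) // M + 1
            out = np.empty((frames, M), dtype=complex)
            for m in range(frames):
                block = x[m * M : m * M + N][::-1]       # newest sample first
                folded = (h * block).reshape(-1, M).sum(axis=0)  # alias into M bins
                out[m] = M * np.fft.ifft(folded)         # DFT across the polyphase bins
            return out

        M, K = 8, 16                                     # 8 channels, 16 taps per branch
        n = np.arange(M * K)
        h = np.sinc((n - (M * K - 1) / 2) / M) * np.hamming(M * K) / M

        t = np.arange(4096)
        x = np.exp(2j * np.pi * (3 / M) * t)             # tone centered in channel 3
        print(np.argmax(np.abs(channelize(x, h, M)).mean(axis=0)))  # -> 3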

  2. Graphical programming of telerobotic tasks

    SciTech Connect

    Small, D.E.; McDonald, M.J.

    1996-11-01

    With a goal of producing faster, safer, and cheaper technologies for nuclear waste cleanup, Sandia is actively developing and extending intelligent systems technologies through the US Department of Energy Office of Technology Development (DOE OTD) Robotic Technology Development Program (RTDP). Graphical programming is a key technology for robotic waste cleanup that Sandia is developing for this goal. Graphical programming uses simulation such as TELEGRIP 'on-line' to program and control robots. Characterized by its model-based control architecture, integrated simulation, 'point-and-click' graphical user interfaces, task and path planning software, and network communications, Sandia's Graphical Programming systems allow operators to focus on high-level robotic tasks rather than the low-level details. Use of scripted tasks rather than customized programs minimizes the necessity of recompiling supervisory control systems and enhances flexibility. Rapid world-modelling technologies allow Graphical Programming to be used in dynamic and unpredictable environments, including digging and pipe-cutting. This paper describes Sancho, Sandia's most advanced graphical programming supervisory software. Sancho, now operational on several robot systems, incorporates all of Sandia's recent advances in supervisory control. Graphical programming uses 3-D graphics models as intuitive operator interfaces to program and control complex robotic systems. The goal of the paper is to help the reader understand how Sandia implements graphical programming systems and which key features in Sancho have proven to be most effective.

  3. ARCGRAPH SYSTEM - AMES RESEARCH GRAPHICS SYSTEM

    NASA Technical Reports Server (NTRS)

    Hibbard, E. A.

    1994-01-01

    Ames Research Graphics System, ARCGRAPH, is a collection of libraries and utilities which assist researchers in generating, manipulating, and visualizing graphical data. In addition, ARCGRAPH defines a metafile format that contains device independent graphical data. This file format is used with various computer graphics manipulation and animation packages at Ames, including SURF (COSMIC Program ARC-12381) and GAS (COSMIC Program ARC-12379). In its full configuration, the ARCGRAPH system consists of a two stage pipeline which may be used to output graphical primitives. Stage one is associated with the graphical primitives (i.e. moves, draws, color, etc.) along with the creation and manipulation of the metafiles. Five distinct data filters make up stage one. They are: 1) PLO which handles all 2D vector primitives, 2) POL which handles all 3D polygonal primitives, 3) RAS which handles all 2D raster primitives, 4) VEC which handles all 3D vector primitives, and 5) PO2 which handles all 2D polygonal primitives. Stage two is associated with the process of displaying graphical primitives on a device. To generate the various graphical primitives, create and reprocess ARCGRAPH metafiles, and access the device drivers in the VDI (Video Device Interface) library, users link their applications to ARCGRAPH's GRAFIX library routines. Both FORTRAN and C language versions of the GRAFIX and VDI libraries exist for enhanced portability within these respective programming environments. The ARCGRAPH libraries were developed on a VAX running VMS. Minor documented modification of various routines, however, allows the system to run on the following computers: Cray X-MP running COS (no C version); Cray 2 running UNICOS; DEC VAX running BSD 4.3 UNIX, or Ultrix; SGI IRIS Turbo running GL2-W3.5 and GL2-W3.6; Convex C1 running UNIX; Amdahl 5840 running UTS; Alliant FX8 running UNIX; Sun 3/160 running UNIX (no native device driver); Stellar GS1000 running Stellex (no native device driver

  4. Computer graphics application in the engineering design integration system

    NASA Technical Reports Server (NTRS)

    Glatt, C. R.; Abel, R. W.; Hirsch, G. N.; Alford, G. E.; Colquitt, W. N.; Stewart, W. A.

    1975-01-01

    The computer graphics aspect of the Engineering Design Integration (EDIN) system and its application to design problems were discussed. Three basic types of computer graphics may be used with the EDIN system for the evaluation of aerospace vehicle preliminary designs: offline graphics systems using vellum-inking or photographic processes, online graphics systems characterized by direct-coupled low cost storage tube terminals with limited interactive capabilities, and a minicomputer-based refresh terminal offering highly interactive capabilities. The offline systems are characterized by high quality (resolution better than 0.254 mm) and slow turnaround (one to four days). The online systems are characterized by low cost, instant visualization of the computer results, slow line speed (300 BAUD), poor hard copy, and the early limitations on vector graphic input capabilities. The recent acquisition of the Adage 330 Graphic Display system has greatly enhanced the potential for interactive computer aided design.

  5. DspaceOgre 3D Graphics Visualization Tool

    NASA Technical Reports Server (NTRS)

    Jain, Abhinandan; Myin, Steven; Pomerantz, Marc I.

    2011-01-01

    This general-purpose 3D graphics visualization C++ tool is designed for visualization of simulation and analysis data for articulated mechanisms. Examples of such systems are vehicles, robotic arms, biomechanics models, and biomolecular structures. DspaceOgre builds upon the open-source Ogre3D graphics visualization library. It provides additional classes to support the management of complex scenes involving multiple viewpoints and different scene groups, and can be used as a remote graphics server. This software provides improved support for adding programs at the graphics processing unit (GPU) level for improved performance. It also improves upon the messaging interface it exposes for use as a visualization server.

  6. Evaluating Texts for Graphical Literacy Instruction: The Graphic Rating Tool

    ERIC Educational Resources Information Center

    Roberts, Kathryn L.; Brugar, Kristy A.; Norman, Rebecca R.

    2015-01-01

    In this article, we present the Graphical Rating Tool (GRT), which is designed to evaluate the graphical devices that are commonly found in content-area, non-fiction texts, in order to identify books that are well suited for teaching about those devices. We also present a "best of" list of science and social studies books, which includes…

  7. Color Graphics Brings It Together for DP Students.

    ERIC Educational Resources Information Center

    Morris, Eugene

    1981-01-01

    Discusses the use of color computer graphics in teaching computer programming in the data processing department at Florida A and M University. A description of the course of study for data processing majors at the University is included. (JJD)

  8. Parallel processor-based raster graphics system architecture

    SciTech Connect

    Littlefield, R.J.

    1990-08-14

    This paper discusses apparatus for generating raster graphics images from a graphics command stream. It comprises: graphics processing means, each adapted to receive any part of the graphics command stream and process that part into pixel data; frame buffer means for mapping the pixel data to pixel locations; and a unidirectional interconnection network having multiple levels of linked nodes to provide a data path from each graphics processing means to any part of the frame buffer means. Each node at one level includes means for queuing pixel data intended for a part of the frame buffer until a link is available from that node to a node at another level.
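
    A toy software analogue of this architecture may help fix ideas: below, two "graphics processing" threads rasterize invented horizontal-span commands into pixel data, per-band queues stand in for the interconnection network, and one writer thread per band plays the role of the frame buffer means. All names and the primitive format are illustrative, not from the patent.

        import threading, queue
        import numpy as np

        WIDTH, HEIGHT, BANDS = 64, 64, 4      # frame buffer split into 4 horizontal bands
        frame = np.zeros((HEIGHT, WIDTH), dtype=np.uint8)
        band_queues = [queue.Queue() for _ in range(BANDS)]

        def rasterize(commands):
            """Graphics-processing node: turn (y, x0, x1, color) span commands
            into pixel runs and queue them toward the owning band."""
            for y, x0, x1, color in commands:
                band_queues[y * BANDS // HEIGHT].put((y, x0, x1, color))

        def frame_writer(band):
            """Frame-buffer node: drain one band's queue into pixel memory."""
            while (item := band_queues[band].get()) is not None:
                y, x0, x1, color = item
                frame[y, x0:x1] = color

        cmds = [(y, 8, 56, 200) for y in range(HEIGHT)]   # a simple filled rectangle
        procs = [threading.Thread(target=rasterize, args=(cmds[i::2],)) for i in range(2)]
        writers = [threading.Thread(target=frame_writer, args=(b,)) for b in range(BANDS)]
        for t in procs + writers:
            t.start()
        for t in procs:
            t.join()
        for q in band_queues:
            q.put(None)                                   # sentinel: no more pixel data
        for t in writers:
            t.join()
        print(int(frame.sum()))                           # 64 rows * 48 pixels * 200 = 614400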

  9. Development of computer graphics

    SciTech Connect

    Nuttall, H.E.

    1989-07-01

    The purpose of this project was to screen and evaluate three graphics packages as to their suitability for displaying concentration contour graphs. The information to be displayed is from computer code simulations describing airborne contaminant transport. The three evaluation programs were MONGO (John Tonry, MIT, Cambridge, MA, 02139), Mathematica (Wolfram Research Inc.), and NCSA Image (National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign). After a preliminary investigation of each package, NCSA Image appeared to be significantly superior for generating the desired concentration contour graphs. Hence, subsequent work and this report describe the implementation and testing of NCSA Image on both Apple Mac II and Sun 4 computers. NCSA Image includes several utilities (Layout, DataScope, HDF, and PalEdit) which were used in this study and installed on Dr. Ted Yamada's Mac II computer. Dr. Yamada provided two sets of air pollution plume data which were displayed using NCSA Image. Both sets were animated into a sequential expanding plume series.

  10. Fast DRR splat rendering using common consumer graphics hardware

    SciTech Connect

    Spoerk, Jakob; Bergmann, Helmar; Wanschitz, Felix; Dong, Shuo; Birkfellner, Wolfgang

    2007-11-15

    Digitally rendered radiographs (DRR) are a vital part of various medical image processing applications such as 2D/3D registration for patient pose determination in image-guided radiotherapy procedures. This paper presents a technique to accelerate DRR creation by using conventional graphics hardware for the rendering process. DRR computation itself is done by an efficient volume rendering method named wobbled splatting. For programming the graphics hardware, NVIDIA's C for Graphics (Cg) is used. The description of an algorithm used for rendering DRRs on the graphics hardware is presented, together with a benchmark comparing this technique to a CPU-based wobbled splatting program. Results show a reduction of rendering time by about 70%-90% depending on the amount of data. For instance, rendering a volume of 2×10⁶ voxels is feasible at an update rate of 38 Hz, compared to 6 Hz on a common Intel-based PC using the graphics processing unit (GPU) of a conventional graphics adapter. In addition, wobbled splatting using graphics hardware for DRR computation provides higher resolution DRRs with comparable image quality due to special processing characteristics of the GPU. We conclude that DRR generation on common graphics hardware using the freely available Cg environment is a major step toward 2D/3D registration in clinical routine.

  11. Graphic Interfaces and Online Information.

    ERIC Educational Resources Information Center

    Percival, J. Mark

    1990-01-01

    Discusses the growing importance of the use of Graphic User Interfaces (GUIs) with microcomputers and online services. Highlights include the development of graphics interfacing with microcomputers; CD-ROM databases; an evaluation of HyperCard as a potential interface to electronic mail and online commercial databases; and future possibilities.…

  12. Computer Graphics and Physics Teaching.

    ERIC Educational Resources Information Center

    Bork, Alfred M.; Ballard, Richard

    New, more versatile and inexpensive terminals will make computer graphics more feasible in science instruction than before. This paper describes the use of graphics in physics teaching at the University of California at Irvine. Commands and software are detailed in established programs, which include a lunar landing simulation and a program which…

  13. Requirements for Graphic Teaching Machines.

    ERIC Educational Resources Information Center

    Hickey, Albert; And Others

    An experiment was reported which demonstrates that graphics are more effective than symbols in acquiring algebra concepts. The second phase of the study demonstrated that graphics in high school textbooks were reliably classified in a matrix of 480 functional stimulus-response categories. Suggestions were made for extending the classification…

  14. Computer Graphics Evolution: A Survey.

    ERIC Educational Resources Information Center

    Gartel, Laurence M.

    1985-01-01

    The history of the field of computer graphics is discussed. In 1976 there were no institutions that offered any kind of study of computer graphics. Today electronic image-making is seen as a viable, legitimate art form, and courses are offered by many universities and colleges. (RM)

  15. Research on graphical workflow modeling tool

    NASA Astrophysics Data System (ADS)

    Gu, Hongjiu

    2013-07-01

    Based on a technical analysis of existing modeling tools, combined with Web technology, this paper presents the design of a graphical workflow modeling tool with which designers can draw processes directly in the browser; the drawn process is automatically transformed into an XML description file, facilitating analysis by the workflow engine and barrier-free sharing of workflow data in a networked environment. The program offers software reusability, cross-platform operation, scalability, and strong practicality.
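
    A minimal sketch of the drawn-process-to-XML step using Python's standard library; the element and attribute names are invented for illustration and are not the paper's schema.

        import xml.etree.ElementTree as ET

        def workflow_to_xml(activities, transitions):
            """activities: {id: name}; transitions: [(from_id, to_id), ...]."""
            root = ET.Element("workflow")
            for aid, name in activities.items():
                ET.SubElement(root, "activity", id=aid, name=name)
            for src, dst in transitions:
                # "from" is a Python keyword, so pass the attributes as a dict
                ET.SubElement(root, "transition", attrib={"from": src, "to": dst})
            return ET.tostring(root, encoding="unicode")

        print(workflow_to_xml({"n1": "Submit", "n2": "Approve"}, [("n1", "n2")]))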

  16. Super VGA Primitives Graphics System.

    Energy Science and Technology Software Center (ESTSC)

    1992-05-14

    Version 00. These primitives are the lowest-level routines needed to perform super VGA graphics on a PC. A sample main program is included that exercises the primitives. Both Lahey and Microsoft FORTRAN compilers have graphics libraries; however, those libraries do not support 256-color graphics at resolutions greater than 320x200. The primitives bypass these libraries while still conforming to standard usage of BIOS. The supported graphics modes depend upon the PC graphics card and its memory. Super VGA resolutions of 640x480 and 800x600 have been tested on an ATI VGA Wonder card with 512K memory and on several 80486 PCs (unknown manufacturers) at retail stores.

  17. Physical and technological foundations of graphical treatment processes based on inner defects under the action of powerful pulses of laser radiation

    NASA Astrophysics Data System (ADS)

    Davidov, Nicolay N.; Kudaev, Serge V.

    1999-01-01

    Research on damage formation processes in glass has concentrated on the interaction mechanisms between powerful pulses of penetrating laser radiation and materials, with the aim of improving the resistance of optical components. However, the formation of structural defects in glass, as local areas with low visible-light transmittance, can find application in the final processing of glassware. Treatment modes that exploit these effects make it possible to increase the artistic expression of decorative glassware used in building interiors and to solve some problems in manufacturing counting devices and indication devices for electronic instruments. Mathematical models of defect formation processes in optically transparent materials under the action of powerful pulses of laser radiation are necessary for developing principles to control glass treatment.

  18. Automatic Palette Identification of Colored Graphics

    NASA Astrophysics Data System (ADS)

    Lacroix, Vinciane

    The median-shift, a new clustering algorithm, is proposed to automatically identify the palette of colored graphics, a prerequisite for graphics vectorization. The median-shift is an iterative process which shifts each data point to the "median" point of its neighborhood, defined by a distance measure and a maximum radius, the only parameter of the method. The process is viewed as a graph transformation which converges to a set of clusters made of one or several connected vertices. As the palette identification depends on color perception, the clustering is performed in the L*a*b* feature space. As pixels located on edges are made of mixed colors not expected to be part of the palette, they are removed from the initial data set by an automatic pre-processing step. Results are shown on scanned maps and on the Macbeth color chart and compared to well-established methods.
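
    A minimal Euclidean sketch of the shifting step as described; the paper operates on a graph in L*a*b* space, so the component-wise median and the grouping tolerance below are illustrative assumptions.

        import numpy as np

        def median_shift(points, radius, max_iter=100):
            """Move every point to the "median" of its neighborhood (all points
            within `radius`) until positions stop changing, then group points
            that converged to (almost) the same location."""
            shifted = points.astype(float).copy()
            for _ in range(max_iter):
                new = np.empty_like(shifted)
                for i, p in enumerate(shifted):
                    nbrs = shifted[np.linalg.norm(shifted - p, axis=1) <= radius]
                    new[i] = np.median(nbrs, axis=0)      # component-wise median
                if np.allclose(new, shifted):
                    break
                shifted = new
            labels = np.full(len(points), -1, dtype=int)
            k = 0
            for i in range(len(points)):
                if labels[i] < 0:
                    near = np.linalg.norm(shifted - shifted[i], axis=1) < radius / 2
                    labels[near & (labels < 0)] = k
                    k += 1
            return labels

        colors = np.array([[0, 0, 0], [1, 1, 0], [0, 1, 1],
                           [99, 99, 99], [100, 100, 100], [101, 101, 99]])
        print(median_shift(colors, radius=5.0))           # -> [0 0 0 1 1 1]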

  19. Integration of rocket turbine design and analysis through computer graphics

    NASA Technical Reports Server (NTRS)

    Hsu, Wayne; Boynton, Jim

    1988-01-01

    An interactive approach with engineering computer graphics is used to integrate the design and analysis processes of a rocket engine turbine into a progressive and iterative design procedure. The processes are interconnected through pre- and postprocessors. Graphics are used to generate the blade profiles and their stacking, to generate the finite element models, and to present the analysis results in color. Steps of the design process discussed include pitch-line design, axisymmetric hub-to-tip meridional design, and quasi-three-dimensional analysis. The viscous two- and three-dimensional analysis codes are executed after acceptable designs are achieved and estimates of initial losses are confirmed.

  20. Acceleration of Meshfree Radial Point Interpolation Method on Graphics Hardware

    SciTech Connect

    Nakata, Susumu

    2008-09-01

    This article describes a parallel computational technique for accelerating the radial point interpolation method (RPIM), a meshfree method, using graphics hardware. RPIM is one of the meshfree partial differential equation solvers that do not require a mesh structure on the analysis target. In the presented technique, the computation is divided into small processes suitable for the parallel architecture of the graphics hardware and executed in a single-instruction multiple-data manner.
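
    For orientation, the building block replicated across the graphics hardware is the per-point interpolation itself. Below is a minimal Gaussian-RBF sketch of that block; RPIM usually augments the basis with polynomial terms, omitted here, and the shape parameter is an illustrative assumption.

        import numpy as np

        def rpim_value(nodes, values, x, c=1.0):
            """Radial point interpolation: fit RBF weights on the support nodes,
            then evaluate at x. Every evaluation point is independent, which is
            what makes the method amenable to SIMD-style parallelism."""
            phi = lambda r: np.exp(-(r / c) ** 2)              # Gaussian radial basis
            R = phi(np.linalg.norm(nodes[:, None] - nodes[None, :], axis=-1))
            weights = np.linalg.solve(R, values)               # interpolation weights
            return phi(np.linalg.norm(nodes - x, axis=-1)) @ weights

        nodes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
        vals = np.array([0.0, 1.0, 1.0, 2.0])                  # samples of f(x, y) = x + y
        print(rpim_value(nodes, vals, np.array([0.0, 0.0])))   # -> 0.0 (exact at a node)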

  1. Mobile Air Monitoring Data Processing Strategies and Effects on Spatial Air Pollution Trends

    EPA Science Inventory

    The collection of real-time air quality measurements while in motion (i.e., mobile monitoring) is currently conducted worldwide to evaluate in situ emissions, local air quality trends, and air pollutant exposure. This measurement strategy pushes the limits of traditional data an...

  2. An efficient process for producing economical and eco-friendly cotton textile composites for mobile industry

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The mobile industry comprised of airplanes, automotives, and ships uses enormous quantities of various types of textiles. Just a few decades ago, most of these textile products and composites were made with woven or knitted fabrics that were mostly made with the then only available natural fibers, i...

  3. A Comparative Analysis of the Processes of Social Mobility in the USSR and in Today's Russia

    ERIC Educational Resources Information Center

    Shkaratan, O. I.; Iastrebov, G. A.

    2012-01-01

    When it comes to analyzing problems of mobility, most studies of the post-Soviet era have cited random and unconnected data with respect to the Soviet era, on the principle of comparing "the old" and "the new." The authors have deemed it possible (although based on material that is not fully comparable) to examine the late Soviet past as a period…

  4. Social Mobility and Status Attainment Process of People with Farm Backgrounds in West Germany.

    ERIC Educational Resources Information Center

    Bruse, Rudolf

    1979-01-01

    Describing dominant patterns of social mobility and analyzing factors determining educational and occupational attainment in farm and nonfarm sectors, this article investigates whether farm background is a handicap to status attainment in nonfarm sectors and examines the determinants of occupational status of farmers' sons in the nonfarm sector.…

  5. Graphical presentation of diagnostic information

    PubMed Central

    Whiting, Penny F; Sterne, Jonathan AC; Westwood, Marie E; Bachmann, Lucas M; Harbord, Roger; Egger, Matthias; Deeks, Jonathan J

    2008-01-01

    Background Graphical displays of results allow researchers to summarise and communicate the key findings of their study. Diagnostic information should be presented in an easily interpretable way, which conveys both test characteristics (diagnostic accuracy) and the potential for use in clinical practice (predictive value). Methods We discuss the types of graphical display commonly encountered in primary diagnostic accuracy studies and systematic reviews of such studies, and systematically review the use of graphical displays in recent diagnostic primary studies and systematic reviews. Results We identified 57 primary studies and 49 systematic reviews. Fifty-six percent of primary studies and 53% of systematic reviews used graphical displays to present results. Dot plots or box-and-whisker plots were the most commonly used graphs in primary studies and were included in 22 (39%) studies. ROC plots were the most common type of plot included in systematic reviews and were included in 22 (45%) reviews. One primary study and five systematic reviews included a probability-modifying plot. Conclusion Graphical displays are currently underused in primary diagnostic accuracy studies and systematic reviews of such studies. Diagnostic accuracy studies need to include multiple types of graphics both to provide a detailed overview of the results (diagnostic accuracy) and to communicate information that can be used to inform clinical practice (predictive value). Work is required to improve graphical displays, to better communicate the utility of a test in clinical practice and the implications of test results for individual patients. PMID:18405357

  6. The role of interpersonal communication in the process of knowledge mobilization within a community-based organization: a network analysis

    PubMed Central

    2014-01-01

    Background Diffusion of innovations theory has been widely used to explain knowledge mobilization of research findings. This theory posits that individuals who are more interpersonally connected within an organization may be more likely to adopt an innovation (e.g., research evidence) than individuals who are less interconnected. Research examining this tenet of diffusion of innovations theory in the knowledge mobilization literature is limited. The purpose of the present study was to use network analysis to examine the role of interpersonal communication in the adoption and mobilization of the physical activity guidelines for people with spinal cord injury (SCI) among staff in a community-based organization (CBO). Methods The study used a cross-sectional, whole-network design. In total, 56 staff completed the network survey. Adoption of the guidelines was assessed using Rogers’ innovation-decision process and interpersonal communication was assessed using an online network instrument. Results The patterns of densities observed within the network were indicative of a core-periphery structure revealing that interpersonal communication was greater within the core than between the core and periphery and within the periphery. Membership in the core, as opposed to membership in the periphery, was associated with greater knowledge of the evidence-based physical activity resources available and engagement in physical activity promotion behaviours (ps < 0.05). Greater in-degree centrality was associated with adoption of evidence-based behaviours (p < 0.05). Conclusions Findings suggest that interpersonal communication is associated with knowledge mobilization and highlight how the network structure could be improved for further dissemination efforts. Keywords: diffusion of innovations; network analysis; community-based organization; knowledge mobilization; knowledge translation; interpersonal communication. PMID:24886429

  7. The Effects of Integrating Mobile and CAD Technology in Teaching Design Process for Malaysian Polytechnic Architecture Student in Producing Creative Product

    ERIC Educational Resources Information Center

    Hassan, Isham Shah; Ismail, Mohd Arif; Mustapha, Ramlee

    2010-01-01

    The purpose of this research is to examine the effect of integrating digital media such as mobile and CAD technology on the designing process of Malaysian polytechnic architecture students in producing a creative product. A website is developed based on Carroll's minimalist theory, while mobile and CAD technology integration is based on Brown and…

  8. Graphic arts techniques and equipment: A compilation

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Technology utilization of NASA-sponsored projects involving graphic arts techniques and equipment is discussed. The subjects considered are: (1) modifications to graphics tools, (2) new graphics tools, (3) visual aids for graphics, and (4) graphic arts shop hints. Photographs and diagrams are included to support the written material.

  9. User Dynamics in Graphical Authentication Systems

    NASA Astrophysics Data System (ADS)

    Revett, Kenneth; Jahankhani, Hamid; de Magalhães, Sérgio Tenreiro; Santos, Henrique M. D.

    In this paper, a graphical authentication system is presented which is based on a matching scheme. The user is required to match up thumbnail graphical images that belong to a variety of categories in an order-based approach. The number of images in the selection panel was varied to determine how this affects memorability. In addition, timing information was included as a means of enhancing the security level of the system. That is, the user's mouse clicks were timed and used as part of the authentication process. This is one of the few studies that incorporate a proper biometric facility, namely mouse dynamics, into a graphical authentication system. Lastly, this study employs the 2D version of Fitts' law, the Accot-Zhai steering law, to examine the effect of image size on usability. The results from this study indicate that the combination of biometrics (mouse timing information) into a graphical authentication scheme produces FAR/FRR values that approach those of text-based authentication schemes.
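
    For reference, hedged textbook forms of the two laws named above, with D the pointing distance, W the target width, C the steering path with local width W(s), and a, b empirically fitted constants (the study's exact formulation is not reproduced here):

        T_{\mathrm{Fitts}} = a + b \log_2\!\left(\frac{D}{W} + 1\right),
        \qquad
        T_{\mathrm{steering}} = a + b \int_{C} \frac{\mathrm{d}s}{W(s)}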

  10. Graphics Software For VT Terminals

    NASA Technical Reports Server (NTRS)

    Wang, Caroline

    1991-01-01

    VTGRAPH graphics software tool for DEC/VT computer terminal or terminals compatible with it, widely used by government and industry. Callable in FORTRAN or C language, library program enabling user to cope with many computer environments in which VT terminals used for window management and graphic systems. Provides PLOT10-like package plus color or shade capability for VT240, VT241, and VT300 terminals. User can easily design more-friendly user-interface programs and design PLOT10 programs on VT terminals with different computer systems. Requires ReGIS (Remote Graphic Instruction Set) terminal and FORTRAN compiler.

  11. Graphical Planning Of Spacecraft Missions

    NASA Technical Reports Server (NTRS)

    Jeletic, J. F.; Ruley, L. T.

    1991-01-01

    Mission Planning Graphical Tool (MPGT) computer program provides analysts with graphical representations of spacecraft and environmental data used in planning missions. Designed to be generic software tool configured to analyze any specified Earth-orbiting spacecraft mission. Data presented as series of overlays on top of two-dimensional or three-dimensional projection of Earth. Includes spacecraft-orbit tracks, ground-station-antenna masks, solar and lunar ephemerides, and coverage by Tracking Data and Relay Satellite System (TDRSS). From graphical representations, analyst determines such spacecraft-related constraints as communication coverage, infringement upon zones of interference, availability of sunlight, and visibility of targets to instruments.

  12. Low-temperature processable amorphous In-W-O thin-film transistors with high mobility and stability

    SciTech Connect

    Kizu, Takio; Aikawa, Shinya; Mitoma, Nobuhiko; Shimizu, Maki; Gao, Xu; Lin, Meng-Fang; Tsukagoshi, Kazuhito; Nabatame, Toshihide

    2014-04-14

    Thin-film transistors (TFTs) with a high stability and a high field-effect mobility have been achieved using W-doped indium oxide semiconductors in a low-temperature process (∼150 °C). By incorporating WO₃ into indium oxide, TFTs that were highly stable under a negative bias stress were reproducibly achieved without high-temperature annealing, and the degradation of the field-effect mobility was not pronounced. This may be due to the efficient suppression of the excess oxygen vacancies in the film by the high dissociation energy of the bond between oxygen and W atoms and to the different charge states of W ions.

  13. Spatial heterogeneity of mobilization processes and input pathways of herbicides into a brook in a small agricultural catchment

    NASA Astrophysics Data System (ADS)

    Doppler, Tobias; Lück, Alfred; Popow, Gabriel; Strahm, Ivo; Winiger, Luca; Gaj, Marcel; Singer, Heinz; Stamm, Christian

    2010-05-01

    Soil-applied herbicides can be transported from their point of application (an agricultural field) to surface waters during rain events, where they can have harmful effects on aquatic species. Since the spatial distribution of mobilization and transport processes is very heterogeneous, the contributions of different fields to the total load in a surface water body may differ considerably. The localization of especially critical areas (contributing areas) can help to efficiently minimize herbicide inputs to surface waters. An agricultural field becomes a contributing area when three conditions are met: 1) herbicides are applied, 2) herbicides are mobilized on the field, and 3) the mobilized herbicides are transported rapidly to the surface water. In spring 2009, a controlled herbicide application was performed on corn fields in a small (ca. 1 km²) catchment with intensive crop production in the Swiss plateau. Subsequently, water samples were taken at different locations in the catchment with a high temporal resolution during rain events. We observed both saturation-excess and Hortonian overland flow during the field campaign. Both can be important mobilization processes depending on the intensity and quantity of the rain, which can lead to different contributing areas during different types of rain events. We will show data on the spatial distribution of herbicide loads during different types of rain events. The connectivity of the fields with the brook is also spatially heterogeneous. Most of the fields are disconnected from the brook by internal sinks in the catchment, which prevent surface runoff from entering the brook directly. Surface runoff from these disconnected areas can only enter the brook rapidly via macropore flow into tile drains beneath the internal sinks or via direct shortcuts to the drainage system (maintenance manholes, farmyard or road drains). We will show spatially distributed data on herbicide concentration in purely subsurface systems which shows

  14. Managing facts and concepts: computer graphics and information graphics from a graphic designer's perspective

    SciTech Connect

    Marcus, A.

    1983-01-01

    This book emphasizes the importance of graphic design for an information-oriented society. In an environment in which many new graphic communication technologies are emerging, it raises some issues which graphic designers and managers of graphic design production should consider in using the new technology effectively. In its final sections, it gives an example of the steps taken in designing a visual narrative as a prototype for responsible information-oriented graphic design. The management of complex facts and concepts, of complex systems of ideas and issues, presented in a visual as well as verbal narrative or dialogue and conveyed through new technology will challenge the graphic design community in the coming decades. This shift to visual-verbal communication has repercussions in the educational system and the political/governance systems that go beyond the scope of this book. If there is a single goal for this book, it is to stimulate the reader and then to provide references that will help you learn more about graphic design in an era of communication when know business is show business.

  15. Study on application of dynamic monitoring of land use based on mobile GIS technology

    NASA Astrophysics Data System (ADS)

    Tian, Jingyi; Chu, Jian; Guo, Jianxing; Wang, Lixin

    2006-10-01

    Land use dynamic monitoring is an important means of keeping land use data up to date in real time. Mobile GIS technology integrates GIS, GPS, and the Internet; it can update historical data in real time with site-collected data and realize large-scale data updates with high precision. Monitoring methods for land use change data using mobile GIS technology are discussed. A mobile GIS terminal was developed in-house for this study from a GPS-25 OEM module and a notebook computer, and the RTD (real-time differential) operation mode was selected. The mobile GIS system for dynamic monitoring of land use was developed with Visual C++ as the operating platform, the MapObjects control as the graphics platform, and the MSComm control as the communication platform, realizing an organic integration of GPS, GPRS, and GIS. The system provides basic functions such as data processing, graphic display, graphic editing, attribute query, and navigation. Qinhuangdao city was selected as the experimental area. The study results show that the mobile GIS integration system for dynamic monitoring of land use developed in this study has practical application value.

  16. Calculators and Computers: Graphical Addition.

    ERIC Educational Resources Information Center

    Spero, Samuel W.

    1978-01-01

    A computer program is presented that generates problem sets involving sketching graphs of trigonometric functions using graphical addition. The students use calculators to sketch the graphs, and a computer solution is used to check them. (MP)

  17. Reflex: Graphical workflow engine for data reduction

    NASA Astrophysics Data System (ADS)

    ESO Reflex development Team

    2014-01-01

    Reflex provides an easy and flexible way to reduce VLT/VLTI science data using the ESO pipelines. It allows graphically specifying the sequence in which the data reduction steps are executed, including conditional stops, loops and conditional branches. It eases inspection of the intermediate and final data products and allows repetition of selected processing steps to optimize the data reduction. The data organization necessary to reduce the data is built into the system and is fully automatic; advanced users can plug their own modules and steps into the data reduction sequence. Reflex supports the development of data reduction workflows based on the ESO Common Pipeline Library. Reflex is based on the concept of a scientific workflow, whereby the data reduction cascade is rendered graphically and data seamlessly flow from one processing step to the next. It is distributed with a number of complete test datasets so users can immediately start experimenting and familiarize themselves with the system.

  18. APSRS state-base graphics

    USGS Publications Warehouse

    U.S. Geological Survey

    1981-01-01

    The National Cartographic Information Center (NCIC) is the information branch of the U.S. Geological Survey's National Mapping Division. In order to organize and distribute information about U.S. aerial photography coverage and to help eliminate aerial mapping duplication by tracking individual aerial projects, NCIC developed the Aerial Photography Summary Record System (APSRS). APSRS's principal products are State-Base Graphics (SBG), graphic indexes that show the coverage of conventional aerial photography projects over each State.

  19. Graphic design of pinhole cameras

    NASA Technical Reports Server (NTRS)

    Edwards, H. B.; Chu, W. P.

    1979-01-01

    The paper describes a graphic technique for the analysis and optimization of pinhole size and focal length. The technique is based on the use of the transfer function of optical elements described by Scott (1959) to construct the transfer function of a circular pinhole camera. This transfer function is the response of a component or system to a pattern of lines having a sinusoidally varying radiance at varying spatial frequencies. Some specific examples of graphic design are presented.
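
    For context, the standard textbook trade-off that such a graphic technique captures (a sketch, not the paper's derivation): geometric blur grows with pinhole diameter d while diffraction blur shrinks with it, so for focal length f and wavelength \lambda the two balance near

        b_{\mathrm{geom}} \approx d,
        \qquad
        b_{\mathrm{diff}} \approx \frac{2.44\,\lambda f}{d},
        \qquad
        d_{\mathrm{opt}} \approx \sqrt{2.44\,\lambda f}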

  20. Planetary Photojournal Home Page Graphic

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This image is an unannotated version of the Planetary Photojournal Home Page graphic. This digital collage contains a highly stylized rendition of our solar system and points beyond. As this graphic was intended to be used as a navigation aid in searching for data within the Photojournal, certain artistic embellishments have been added (color, location, etc.). Several data sets from various planetary and astronomy missions were combined to create this image.

  1. Photojournal Home Page Graphic 2007

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This image is an unannotated version of the Photojournal Home Page graphic released in October 2007. This digital collage contains a highly stylized rendition of our solar system and points beyond. As this graphic was intended to be used as a navigation aid in searching for data within the Photojournal, certain artistic embellishments have been added (color, location, etc.). Several data sets from various planetary and astronomy missions were combined to create this image.

  2. Building a Mobile HIV Prevention App for Men Who Have Sex With Men: An Iterative and Community-Driven Process

    PubMed Central

    McDougal, Sarah J; Sullivan, Patrick S; Stekler, Joanne D; Stephenson, Rob

    2015-01-01

    Background Gay, bisexual, and other men who have sex with men (MSM) account for a disproportionate burden of new HIV infections in the United States. Mobile technology presents an opportunity for innovative interventions for HIV prevention. Some HIV prevention apps currently exist; however, it is challenging to encourage users to download these apps and use them regularly. An iterative research process that centers on the community’s needs and preferences may increase the uptake, adherence, and ultimate effectiveness of mobile apps for HIV prevention. Objective The aim of this paper is to provide a case study to illustrate how an iterative community approach to a mobile HIV prevention app can lead to changes in app content to appropriately address the needs and the desires of the target community. Methods In this three-phase study, we conducted focus group discussions (FGDs) with MSM and HIV testing counselors in Atlanta, Seattle, and US rural regions to learn preferences for building a mobile HIV prevention app. We used data from these groups to build a beta version of the app and theater tested it in additional FGDs. A thematic data analysis examined how this approach addressed preferences and concerns expressed by the participants. Results There was an increased willingness to use the app during theater testing than during the first phase of FGDs. Many concerns that were identified in phase one (eg, disagreements about reminders for HIV testing, concerns about app privacy) were considered in building the beta version. Participants perceived these features as strengths during theater testing. However, some disagreements were still present, especially regarding the tone and language of the app. Conclusions These findings highlight the benefits of using an interactive and community-driven process to collect data on app preferences when building a mobile HIV prevention app. Through this process, we learned how to be inclusive of the larger MSM population without

  3. Distributed computation of graphics primitives on a transputer network

    NASA Technical Reports Server (NTRS)

    Ellis, Graham K.

    1988-01-01

    A method is developed for distributing the computation of graphics primitives on a parallel processing network. Off-the-shelf transputer boards are used to perform the graphics transformations and scan-conversion tasks that would normally be assigned to a single transputer based display processor. Each node in the network performs a single graphics primitive computation. Frequently requested tasks can be duplicated on several nodes. The results indicate that the current distribution of commands on the graphics network shows a performance degradation when compared to the graphics display board alone. A change to more computation per node for every communication (perform more complex tasks on each node) may cause the desired increase in throughput.

  4. Graphical object-oriented programming with LabVIEW

    NASA Astrophysics Data System (ADS)

    Jamal, Rahman

    1994-12-01

    Programming with graphical languages implements a completely new type of man-machine interface. Conventional text-based programming is an inherently linear process that forces engineers and scientists to think and express their ideas in terms constrained by the programming language. The ability to visualise a process or algorithm graphically, however, allows them to express their ideas in a more intuitive, natural way; this is particularly true for parallel processes. LabVIEW is such an icon-based graphical programming system that provides a powerful alternative for scientific and engineering programming - it offers the significant productivity gains of a graphical environment with no sacrifice in performance or flexibility. Typical scientific applications include process control, automation, instrumentation, motion control, simulation, and a number of other technical disciplines.

  5. On the possibility of producing true real-time retinal cross-sectional images using a graphics processing unit enhanced master-slave optical coherence tomography system

    NASA Astrophysics Data System (ADS)

    Bradu, Adrian; Kapinchev, Konstantin; Barnes, Frederick; Podoleanu, Adrian

    2015-07-01

    In a previous report, we demonstrated master-slave optical coherence tomography (MS-OCT), an OCT method that does not need resampling of data and can be used to deliver en face images from several depths simultaneously. In a separate report, we have also demonstrated MS-OCT's capability of producing cross-sectional images of a quality similar to those provided by the traditional Fourier domain (FD) OCT technique, but at a much slower rate. Here, we demonstrate that by taking advantage of the parallel processing capabilities offered by the MS-OCT method, cross-sectional OCT images of the human retina can be produced in real time. We analyze the conditions that ensure a true real-time B-scan imaging operation and demonstrate in vivo real-time images from human fovea and the optic nerve, with resolution and sensitivity comparable to those produced using the traditional FD-based method, however, without the need of data resampling.
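
    A minimal sketch of the master/slave principle as described (our illustrative reading in NumPy, not the authors' code): the signal at each chosen depth is the correlation of the measured channeled spectrum with a pre-stored mask for that depth, so no spectral resampling is needed and the per-depth correlations can run in parallel on a GPU.

        import numpy as np

        def ms_oct_amplitudes(spectrum, masks):
            """spectrum: measured channeled spectrum, shape (K,).
            masks: pre-recorded complex masks, shape (D, K), one row per depth."""
            return np.abs(masks.conj() @ spectrum)    # one correlation per depth

        # illustrative masks: ideal spectra for a mirror at each candidate depth
        K, depths = 1024, np.arange(1, 65)
        k = np.linspace(0.0, 1.0, K)
        masks = np.exp(2j * np.pi * np.outer(depths, k))
        spectrum = np.cos(2 * np.pi * 20 * k)         # a reflector at "depth" 20
        print(np.argmax(ms_oct_amplitudes(spectrum, masks)) + 1)  # -> 20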

  6. On the possibility of producing true real-time retinal cross-sectional images using a graphics processing unit enhanced master-slave optical coherence tomography system.

    PubMed

    Bradu, Adrian; Kapinchev, Konstantin; Barnes, Frederick; Podoleanu, Adrian

    2015-07-01

    In a previous report, we demonstrated master-slave optical coherence tomography (MS-OCT), an OCT method that does not need resampling of data and can be used to deliver en face images from several depths simultaneously. In a separate report, we have also demonstrated MS-OCT's capability of producing cross-sectional images of a quality similar to those provided by the traditional Fourier domain (FD) OCT technique, but at a much slower rate. Here, we demonstrate that by taking advantage of the parallel processing capabilities offered by the MS-OCT method, cross-sectional OCT images of the human retina can be produced in real time. We analyze the conditions that ensure a true real-time B-scan imaging operation and demonstrate in vivo real-time images from human fovea and the optic nerve, with resolution and sensitivity comparable to those produced using the traditional FD-based method, however, without the need of data resampling. PMID:26198418

  7. GICUDA: A parallel program for 3D correlation imaging of large scale gravity and gravity gradiometry data on graphics processing units with CUDA

    NASA Astrophysics Data System (ADS)

    Chen, Zhaoxi; Meng, Xiaohong; Guo, Lianghui; Liu, Guofeng

    2012-09-01

    The 3D correlation imaging for gravity and gravity gradiometry data provides a rapid approach to the equivalent estimation of objective bodies with different density contrasts in the subsurface. The subsurface is divided into a 3D regular grid, and then a cross correlation between the observed data and the theoretical gravity anomaly due to a point mass source is calculated at each grid node. The resultant correlation coefficients are adopted to describe the equivalent mass distribution in a quantitative probability sense. However, when the size of the survey data is large, this is still computationally expensive. With the advent of CUDA, GPUs opened a new path for parallel computing and have been widely applied in seismic processing, astronomy, molecular dynamics simulation, fluid mechanics, and other fields. We transfer the main time-consuming part of the 3D correlation imaging program to the GPU, where it can be executed in parallel. Synthetic and real tests were performed to validate the correctness of our code on an NVIDIA GTX 550. The precision evaluation and the performance speedup comparison of the CPU and GPU implementations are illustrated with different sizes of gravity data. When the size of the grid nodes and observed data sets is 1024×1024×1 and 1024×1024, the speedup reaches 81.5 for gravity data and 90.7 for gravity vertical gradient data, respectively, thus providing the basis for the rapid interpretation of gravity and gravity gradiometry data.
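
    A plain NumPy rendering of the per-node computation that the paper offloads to CUDA; the grids and the synthetic test below are ours, and the point mass and gravitational constant cancel in the normalized coefficient.

        import numpy as np

        def correlation_image(xs, ys, gz, nodes):
            """c(q): normalized cross correlation between the observed anomaly gz
            (at surface points xs, ys, z = 0) and the vertical attraction of a
            unit point mass at subsurface node q. Nodes are independent, which
            is what allows one GPU thread per node."""
            c = np.empty(len(nodes))
            g_norm = np.sqrt(np.sum(gz ** 2))
            for q, (nx, ny, nz) in enumerate(nodes):
                r2 = (xs - nx) ** 2 + (ys - ny) ** 2 + nz ** 2
                b = nz / r2 ** 1.5                    # G*m omitted: it cancels below
                c[q] = np.sum(gz * b) / (g_norm * np.sqrt(np.sum(b ** 2)))
            return c

        # synthetic check: a single point source buried at depth 5 under (0, 0)
        xs, ys = np.meshgrid(np.linspace(-20, 20, 41), np.linspace(-20, 20, 41))
        xs, ys = xs.ravel(), ys.ravel()
        gz = 5.0 / ((xs ** 2 + ys ** 2 + 25.0) ** 1.5)
        nodes = np.array([(0.0, 0.0, z) for z in (2.0, 5.0, 10.0)])
        print(np.argmax(correlation_image(xs, ys, gz, nodes)))    # -> 1 (depth 5)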

  8. Rhenium Solubility in Borosilicate Nuclear Waste Glass: Implications for the Processing and Immobilization of Technetium-99 (and Supporting Information with Graphical Abstract)

    SciTech Connect

    AA KRUGER; A GOEL; CP RODRIGUEZ; JS MCCLOY; MJ SCHWEIGER; WW LUKENS; JR, BJ RILEY; D KIM; M LIEZERS; P HRMA

    2012-08-13

    The immobilization of 99Tc in a suitable host matrix has proved a challenging task for researchers in the nuclear waste community around the world. At the Hanford site in Washington State in the U.S., the total amount of 99Tc in low-activity waste (LAW) is ~1,300 kg, and the current strategy is to immobilize the 99Tc in borosilicate glass with vitrification. In this context, the present article reports on the solubility and retention of rhenium, a nonradioactive surrogate for 99Tc, in a LAW sodium borosilicate glass. Due to the radioactive nature of technetium, rhenium was chosen as a simulant because of previously established similarities in ionic radii and other chemical aspects. The glasses containing target Re concentrations varying from 0 to 10,000 ppm by mass were synthesized in vacuum-sealed quartz ampoules to minimize the loss of Re by volatilization during melting at 1000 °C. The rhenium was found to be present predominantly as Re7+ in all the glasses as observed by X-ray absorption near-edge structure (XANES). The solubility of Re in borosilicate glasses was determined to be ~3,000 ppm (by mass) using inductively coupled plasma-optical emission spectroscopy (ICP-OES). At higher rhenium concentrations, some additional material was retained in the glasses in the form of alkali perrhenate crystalline inclusions detected by X-ray diffraction (XRD) and laser ablation-ICP mass spectrometry (LA-ICP-MS). Assuming justifiably substantial similarities between Re7+ and Tc7+ behavior in this glass system, these results implied that the processing and immobilization of 99Tc from radioactive wastes should not be limited by the solubility of 99Tc in borosilicate LAW glasses.

  9. Electron Mobility Exceeding 10 cm² V⁻¹ s⁻¹ and Band-Like Charge Transport in Solution-Processed n-Channel Organic Thin-Film Transistors.

    PubMed

    Xu, Xiaomin; Yao, Yifan; Shan, Bowen; Gu, Xiao; Liu, Danqing; Liu, Jinyu; Xu, Jianbin; Zhao, Ni; Hu, Wenping; Miao, Qian

    2016-07-01

    Solution-processed n-channel organic thin-film transistors (OTFTs) that exhibit a field-effect mobility as high as 11 cm² V⁻¹ s⁻¹ at room temperature and a band-like temperature dependence of electron mobility are reported. By comparison of solution-processed OTFTs with vacuum-deposited OTFTs of the same organic semiconductor, it is found that grain boundaries are a key factor inhibiting band-like charge transport. PMID:27151777

  10. Defining Identities through Multiliteracies: EL Teens Narrate Their Immigration Experiences as Graphic Stories

    ERIC Educational Resources Information Center

    Danzak, Robin L.

    2011-01-01

    Based on a framework of identity-as-narrative and multiliteracies, this article describes "Graphic Journeys," a multimedia literacy project in which English learners (ELs) in middle school created graphic stories that expressed their families' immigration experiences. The process involved reading graphic novels, journaling, interviewing, and…

  11. Write Is Right: Using Graphic Organizers to Improve Student Mathematical Problem Solving

    ERIC Educational Resources Information Center

    Zollman, Alan

    2012-01-01

    Teachers have used graphic organizers successfully in teaching the writing process. This paper describes graphic organizers and their potential mathematics benefits for both students and teachers, elucidates a specific graphic organizer adaptation for mathematical problem solving, and discusses results using the "four-corners-and-a-diamond"…

  12. A video processing method for convenient mobile reading of printed barcodes with camera phones

    NASA Astrophysics Data System (ADS)

    Bäckström, Christer; Södergård, Caj; Udd, Sture

    2006-01-01

    Efficient communication requires an appropriate choice and combination of media. The print media have succeeded in attracting audiences even in our electronic age because of their high usability. However, the limitations of print are self-evident. By finding ways of combining printed and electronic information into so-called hybrid media, the strengths of both media can be obtained. In hybrid media, paper functions as an interface to the web, integrating printed products into the connected digital world. This "reinvention" of printed matter makes it a more communicative technology. Hybrid media means that printed products can be updated in real time; multimedia clips, personalization, and e-shopping can be added as part of the interactive medium. The concept of enhancing print with interactive features has been around for years. However, the technology has so far been too restricting - people do not want to be tied to their PCs to read newspapers. Our solution is communicative and totally mobile. A code on paper or electronic media constitutes the link to mobility.
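
    A minimal sketch of the mobile-terminal side of such a system, assuming the OpenCV (cv2) and pyzbar packages; the resolver URL is hypothetical and stands in for the hybrid-media lookup service.

        import cv2
        from pyzbar import pyzbar

        cap = cv2.VideoCapture(0)                  # phone/laptop camera
        try:
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                for code in pyzbar.decode(gray):   # locate and decode any barcodes
                    payload = code.data.decode("utf-8")
                    print("decoded", code.type, payload)
                    # hand off to a (hypothetical) hybrid-media resolver, e.g.
                    # https://resolver.example.com/lookup?code=<payload>
        finally:
            cap.release()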

  13. Computer graphics in architecture and engineering

    NASA Technical Reports Server (NTRS)

    Greenberg, D. P.

    1975-01-01

    The present status of the application of computer graphics to architecture and the building professions, and its relationship to other scientific and technical areas, were discussed. It was explained that, owing to the fragmented nature of architecture and building activities (in contrast to the aerospace industry), comprehensive, economical utilization of computer graphics in this area is not yet practical, and its true potential cannot now be realized because architects and structural, mechanical, and site engineers are unable to rely on a common data base. Future emphasis will therefore have to be placed on vertical integration of the construction process and effective use of a three-dimensional data base, rather than on waiting for a technological breakthrough in interactive computing.

  14. Trends in Continuity and Interpolation for Computer Graphics.

    PubMed

    Gonzalez Garcia, Francisco

    2015-01-01

    In nearly every computer-graphics application today, it is common practice to texture 3D models to obtain realistic-looking materials. As part of this process, mesh texturing, deformation, and visualization are all key parts of the computer graphics field. This PhD dissertation was completed in the context of these three important and related fields. The article presents techniques that improve on existing state-of-the-art approaches to continuity and interpolation in texture space (texturing), object space (deformation), and screen space (rendering). PMID:26594958
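
    Since the dissertation's common thread is continuity and interpolation, a concrete baseline helps: bilinear texture filtering, the standard way to obtain C0-continuous samples from a discrete texture. The sketch below is a generic illustration of that baseline technique, not code from the dissertation; the clamping address mode is an arbitrary choice.

    ```python
    import numpy as np

    def sample_bilinear(tex: np.ndarray, u: float, v: float) -> float:
        """Bilinearly interpolate a 2D texture at continuous texel coords (u, v)."""
        h, w = tex.shape
        u = min(max(u, 0.0), w - 1.0)          # clamp-to-edge addressing
        v = min(max(v, 0.0), h - 1.0)
        x0, y0 = int(u), int(v)                # lower-left texel of the 2x2 patch
        x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
        fx, fy = u - x0, v - y0                # fractional position in the patch
        top = tex[y0, x0] * (1 - fx) + tex[y0, x1] * fx
        bot = tex[y1, x0] * (1 - fx) + tex[y1, x1] * fx
        return top * (1 - fy) + bot * fy

    tex = np.array([[0.0, 1.0], [1.0, 0.0]])
    print(sample_bilinear(tex, 0.5, 0.5))      # 0.5, continuous across the patch
    ```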

  15. Graphical programming of telerobotic tasks

    SciTech Connect

    Small, D.E.; McDonald, M.J.

    1997-02-01

    With a goal of producing faster, safer, and cheaper technologies for nuclear waste cleanup, Sandia is actively developing and extending intelligent systems technologies. Graphical Programming is a key technology for robotic waste cleanup that Sandia is developing toward this goal. This paper describes Sancho, Sandia's most advanced Graphical Programming supervisory software. Sancho, now operational on several robot systems, incorporates all of Sandia's recent advances in supervisory control. Developed to rapidly apply Graphical Programming to a diverse set of robot systems, Sancho uses a general set of tools to implement task and operational behavior and can be rapidly reconfigured for new tasks and operations without modifying the supervisory code. Other innovations include task-based interfaces, event-based sequencing, and sophisticated GUI design. These innovations have resulted in robot control programs and approaches that are easier and safer to use than teleoperation, off-line programming, or full automation.

  16. Optical design using computer graphics.

    PubMed

    Howard, J M

    2001-07-01

    For decades the computer has been the primary tool used for optical design. Typical tasks include performing numerical calculations for ray tracing and analysis and rendering graphics for system drawings. As machines become faster with each new generation, the time needed for a particular design task has been greatly reduced, allowing multiple assignments to be performed with little noticeable delay. This lets the designer modify a system and immediately see the results rendered in graphics with a single motion. Such visual design methods are discussed here, where graphics of systems and plots relating to their performance are produced in real time, permitting the optical designer to design by pictures. Three examples are given: an educational tutorial for designing a simple microscope objective, an unobstructed reflective telescope composed of three spherical mirrors, and a modified Offner relay with an accessible pupil. PMID:11958264
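
    As a flavor of the "numerical calculations for ray tracing" mentioned above, the sketch below traces one ray into a single refracting spherical surface and checks it against the paraxial focal distance f' = n2 R / (n2 - n1). It is a generic 2D illustration, not the paper's software; the radius, index, and ray height are arbitrary values.

    ```python
    import numpy as np

    def refract(d, n, n1, n2):
        """Refract unit direction d at unit normal n (n opposing d), Snell's law."""
        cos_i = -np.dot(d, n)
        eta = n1 / n2
        k = 1.0 - eta**2 * (1.0 - cos_i**2)
        if k < 0.0:
            return None                          # total internal reflection
        return eta * d + (eta * cos_i - np.sqrt(k)) * n

    # One ray (height 1 mm, parallel to axis) into a convex spherical surface:
    # vertex at origin, radius R = 50 mm, glass of index 1.5 behind it.
    R, n_air, n_glass = 50.0, 1.0, 1.5
    p, d = np.array([-10.0, 1.0]), np.array([1.0, 0.0])   # (z, y) coordinates
    c = np.array([R, 0.0])                                # center of curvature

    # ray-circle intersection: |p + t d - c|^2 = R^2, take the nearest root
    oc = p - c
    b = np.dot(oc, d)
    t = -b - np.sqrt(b**2 - (np.dot(oc, oc) - R**2))
    hit = p + t * d
    normal = (hit - c) / R                                # outward unit normal
    d2 = refract(d, -normal if np.dot(d, normal) > 0 else normal, n_air, n_glass)
    assert d2 is not None

    # axis crossing of the refracted ray; paraxial theory predicts
    # f' = n2 * R / (n2 - n1) = 150 mm from the vertex
    z_cross = hit[0] - hit[1] * d2[0] / d2[1]
    print(f"axis crossing at z = {z_cross:.2f} mm (paraxial prediction: 150 mm)")
    ```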

  17. PHIGS PLUS for scientific graphics

    SciTech Connect

    Crawfis, R.A.

    1991-01-14

    This paper gives a brief overview of the use of computer graphics standards in the scientific community, detailing in particular how PHIGS PLUS meets the needs of users at the Lawrence Livermore National Laboratory. Although standards for computer graphics have improved substantially over the past decade, their acceptance in the scientific community has been slow. As the use and diversity of computers have increased, scientific graphics libraries have not been able to keep pace with the additional capabilities these new machines offer. Therefore, several organizations have converted, or are now converting, their scientific libraries to rest upon a portable standard. This paper addresses why this transition has been so slow and offers suggestions for future standards work to enhance scientific visualization. This work was performed under the auspices of the US Department of Energy by Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48.

  18. Investigation of characteristics and transformation processes of megacity emission plumes using a mobile laboratory in the Paris metropolitan area

    NASA Astrophysics Data System (ADS)

    von der Weiden-Reinmüller, S.-L.; Drewnick, F.; Zhang, Q.; Meleux, F.; Beekmann, M.; Borrmann, S.

    2012-04-01

    A growing fraction of the world's population lives in urban agglomerations of increasing size. Currently, 20 cities worldwide qualify as so-called megacities, having more than 10 million inhabitants. These intense pollution hot-spots raise a number of scientific questions concerning their influence on local and regional air quality, which in turn affects human health, flora, and fauna. In the framework of the European Union FP7 MEGAPOLI project (Megacities: Emissions, urban, regional and Global Atmospheric POLlution and climate effects, and Integrated tools for assessment and mitigation), two major field campaigns were carried out in the greater Paris region in July 2009 and January/February 2010. This work presents results from mobile particulate and gas phase measurements, focusing on the characteristics of the Paris emission plume, its impact on regional air quality, and aerosol transformation processes within the plume as it travels away from its source. In addition, differences between summer and winter conditions are discussed. The mobile laboratory was equipped with high time resolution instrumentation to measure particle number concentrations (dP > 2.5 nm), size distributions (dP ~ 5 nm - 32 μm), sub-micron chemical composition (non-refractory species using an Aerodyne HR-ToF-AMS, PAH and black carbon) as well as major trace gases (CO2, SO2, O3, NOx) and standard meteorological parameters. An on-board webcam and GPS allow detailed monitoring of the traffic situation and the vehicle track. In a total of 29 mobile and 25 stationary measurements with the mobile laboratory, the Paris emission plume as well as the atmospheric background was characterized under various meteorological conditions. This makes it possible to investigate the influence of external factors like temperature, solar radiation or precipitation on the plume characteristics. Three measurement strategies were applied to investigate the emission plume. First, circular mobile measurements around Paris

  19. Graphic Journeys: Graphic Novels' Representations of Immigrant Experiences

    ERIC Educational Resources Information Center

    Boatright, Michael D.

    2010-01-01

    This article explores how immigrant experiences are represented in the narratives of three graphic novels published in the last decade: Tan's (2007) "The Arrival," Kiyama's (1931/1999) "The Four Immigrants Manga: A Japanese Experience in San Francisco, 1904-1924," and Yang's (2006) "American Born Chinese." Through a theoretical lens informed by…

  20. New Challenge for Graphic Arts: Modernize Now!

    ERIC Educational Resources Information Center

    Sundeen, Earl I.

    1974-01-01

    The Kodak Graphic Arts Manpower Study obtained information from over 1000 graphic arts companies as to the educational needs of today in graphic arts. Vocational educators may have to stop thinking in terms of graphic arts education and begin working on curriculums for career education in the communication field. (Author/DS)

  1. Graphical Methods of Exploratory Data Analysis

    NASA Astrophysics Data System (ADS)

    Friedman, J. H.; McDonald, J. A.; Stuetzle, W.

    This paper briefly describes Orion I, a graphics system used to study applications of computer graphics - especially interactive motion graphics - in statistics. Orion I is the newest of a family of "Prim" systems whose most striking common feature is the use of real-time motion graphics to display three-dimensional scatterplots.
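
    The kernel of such motion graphics is simple to state: rotate the 3D point cloud by a small angle each frame and project it to the screen, so that motion parallax reveals depth. The sketch below shows that kernel on synthetic data; it illustrates the general technique only and is not Orion I code.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    cloud = rng.normal(size=(500, 3))           # synthetic 3D scatterplot data

    def rotate_y(points, theta):
        """Rotate row-vector points about the y axis by angle theta (radians)."""
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
        return points @ R.T

    for frame in range(3):                      # a real system loops at display rate
        projected = rotate_y(cloud, frame * 0.02)[:, :2]   # drop depth after rotating
        print(f"frame {frame}: first point projects to {projected[0]}")
    ```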

  2. Antinomies of Semiotics in Graphic Design

    ERIC Educational Resources Information Center

    Storkerson, Peter

    2010-01-01

    The following paper assesses the roles played by semiotics in graphic design and in graphic design education, which both reflects and shapes practice. It identifies a series of factors: graphic design education methods and culture; semiotic theories themselves and their application to graphic design; the two wings of Peircian semiotics and…

  3. Comprehending, Composing, and Celebrating Graphic Poetry

    ERIC Educational Resources Information Center

    Calo, Kristine M.

    2011-01-01

    The use of graphic poetry in classrooms is encouraged as a way to engage students and motivate them to read and write poetry. This article discusses how graphic poetry can help students with their comprehension of poetry while tapping into popular culture. It is organized around three main sections--reading graphic poetry, writing graphic poetry,…

  4. Cartooning History: Canada's Stories in Graphic Novels

    ERIC Educational Resources Information Center

    King, Alyson E.

    2012-01-01

    In recent years, historical events, issues, and characters have been portrayed in an increasing number of non-fiction graphic texts. Similar to comics and graphic novels, graphic texts are defined as fully developed, non-fiction narratives told through panels of sequential art. Such non-fiction graphic texts are being used to teach history in…

  5. Graphic Design Career Guide 2. Revised Edition.

    ERIC Educational Resources Information Center

    Craig, James

    The graphic design field is diverse and includes many areas of specialization. This guide introduces students to career opportunities in graphic design. The guide is organized in four parts. "Part One: Careers in Graphic Design" identifies and discusses the various segments of the graphic design industry, including: Advertising, Audio-Visual, Book…

  6. Interactive Computer Graphics

    NASA Technical Reports Server (NTRS)

    Kenwright, David

    2000-01-01

    Aerospace data analysis tools that significantly reduce the time and effort needed to analyze large-scale computational fluid dynamics simulations have emerged this year. The current approach for most postprocessing and visualization work is to explore the 3D flow simulations with one of a dozen or so interactive tools. While effective for analyzing small data sets, this approach becomes extremely time-consuming when working with data sets larger than one gigabyte. An active area of research this year has been the development of data mining tools that automatically search through gigabyte data sets and extract the salient features with little or no human intervention. With these so-called feature extraction tools, engineers are spared the tedious task of manually exploring huge amounts of data to find the important flow phenomena. The software tools identify features such as vortex cores, shocks, separation and attachment lines, recirculation bubbles, and boundary layers. Some of these features can be extracted in a few seconds; others take minutes to hours on extremely large data sets. The analysis can be performed off-line in a batch process, either during or following the supercomputer simulations. These computations have to be performed only once, because the feature extraction programs search the entire data set and find every occurrence of the phenomena being sought. Because the important questions about the data are being answered automatically, interactivity is less critical than it is with traditional approaches.
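
    As an illustration of the batch feature-extraction idea, the sketch below flags candidate vortical regions by thresholding vorticity magnitude on a synthetic velocity field. Production vortex-core finders are considerably more sophisticated; the field, grid, and threshold here are stand-ins, not any of the tools described above.

    ```python
    import numpy as np

    # Synthetic stand-in: a Gaussian vortex column along z on a small grid.
    n = 32
    x = np.linspace(-1.0, 1.0, n)
    X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
    G = np.exp(-(X**2 + Y**2) / 0.1)
    U, V, W = -Y * G, X * G, np.zeros_like(X)    # swirl concentrated near the axis

    dx = x[1] - x[0]
    # curl components via central differences (axis 0 = x, 1 = y, 2 = z)
    wx = np.gradient(W, dx, axis=1) - np.gradient(V, dx, axis=2)
    wy = np.gradient(U, dx, axis=2) - np.gradient(W, dx, axis=0)
    wz = np.gradient(V, dx, axis=0) - np.gradient(U, dx, axis=1)
    vort_mag = np.sqrt(wx**2 + wy**2 + wz**2)

    mask = vort_mag > 1.0                        # crude "feature" flag for the core
    print(f"flagged {mask.sum()} of {mask.size} cells as vortical")
    ```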

  7. Foundations of representation: where might graphical symbol systems come from?

    PubMed

    Garrod, Simon; Fay, Nicolas; Lee, John; Oberlander, Jon; Macleod, Tracy

    2007-11-12

    It has been suggested that iconic graphical signs evolve into symbolic graphical signs through repeated usage. This article reports a series of interactive graphical communication experiments using a 'pictionary' task to establish the conditions under which this evolution might occur. Experiment 1 rules out a simple repetition-based account in favor of an account that requires feedback and interaction between communicators. Experiment 2 shows how the degree of interaction affects the evolution of signs according to a process of grounding. Experiment 3 confirms the prediction that those not involved directly in the interaction have trouble interpreting the graphical signs produced in Experiment 1. On the basis of these results, this article argues that icons evolve into symbols as a consequence of a systematic shift in the locus of information from the sign itself to the users' memory of the sign's usage, supported by an interactive grounding process. PMID:21635324

  8. Intellectual system of identification of Arabic graphics

    NASA Astrophysics Data System (ADS)

    Abdoullayeva, Gulchin G.; Aliyev, Telman A.; Gurbanova, Nazakat G.

    2001-08-01

    Studies in the domain of graphic images have made it possible to create artificial-intelligence facilities for recognizing letters, letter combinations, etc. in various scripts and prints. This work proposes a system for the recognition and identification of symbols of the Arabic script, which has its own specificity compared to Latin and Cyrillic ones. The starting stage of recognition and identification is coding, followed by entry of the information into a computer; here the entry problem is one of the essentials. A scanner is usually employed to enter large volumes of information per unit time. Along with the scanner, the authors present their own technical facilities for efficient input and coding of the information. For refinement of symbols not identified by the scanner, mostly for small volumes of information, the coding devices developed are used directly in the process of writing. The functional design of the software is elaborated on the basis of a heuristic model of the creative activity of researchers and experts in describing and estimating the states of weakly formalizable systems, drawing on methods of identification and selection of geometric features.

  9. Three-directional motion-compensation mask-based novel look-up table on graphics processing units for video-rate generation of digital holographic videos of three-dimensional scenes.

    PubMed

    Kwon, Min-Woo; Kim, Seung-Cheol; Kim, Eun-Soo

    2016-01-20

    A three-directional motion-compensation mask-based novel look-up table method is proposed and implemented on graphics processing units (GPUs) for video-rate generation of digital holographic videos of three-dimensional (3D) scenes. Since the proposed method is designed to be well matched with the software and memory structures of GPUs, the number of compute-unified-device-architecture kernel function calls can be significantly reduced. This results in a great increase of the computational speed of the proposed method, allowing video-rate generation of the computer-generated hologram (CGH) patterns of 3D scenes. Experimental results reveal that the proposed method can generate 39.8 frames of Fresnel CGH patterns with 1920×1080 pixels per second for the test 3D video scenario with 12,088 object points on dual GPU boards of NVIDIA GTX TITANs, and they confirm the feasibility of the proposed method in the practical application fields of electroholographic 3D displays. PMID:26835954
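
    For orientation, the computation that look-up-table methods such as this one accelerate is the per-point accumulation of Fresnel zone-plate patterns on the hologram plane. The brute-force sketch below shows that baseline sum on a small grid; the wavelength, pixel pitch, and object points are illustrative values, not the paper's configuration or its look-up-table algorithm.

    ```python
    import numpy as np

    wavelength = 532e-9                # illustrative wavelength, m
    pitch = 8e-6                       # hologram pixel pitch, m
    H, W = 256, 256                    # small grid; the paper uses 1920x1080
    k = 2 * np.pi / wavelength

    ys = (np.arange(H) - H / 2) * pitch
    xs = (np.arange(W) - W / 2) * pitch
    X, Y = np.meshgrid(xs, ys)

    # object points: (x, y, z, amplitude), z = distance from the hologram plane
    points = [(0.0, 0.0, 0.1, 1.0), (2e-4, -1e-4, 0.12, 0.8)]

    hologram = np.zeros((H, W))
    for x0, y0, z0, a in points:
        # Fresnel (paraxial) phase of a spherical wave from the object point
        phase = k * ((X - x0)**2 + (Y - y0)**2) / (2 * z0)
        hologram += a * np.cos(phase)

    print("hologram computed:", hologram.shape, hologram.min(), hologram.max())
    ```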

  10. Collection Of Software For Computer Graphics

    NASA Technical Reports Server (NTRS)

    Hibbard, Eric A.; Makatura, George

    1990-01-01

    Ames Research Graphics System (ARCGRAPH) collection of software libraries and software utilities assisting researchers in generating, manipulating, and visualizing graphical data. Defines metafile format containing device-independent graphical data. File format used with various computer-graphics-manipulation and -animation software packages at Ames, including SURF (COSMIC Program ARC-12381) and GAS (COSMIC Program ARC-12379). Consists of two-stage "pipeline" used to output graphical primitives. ARCGRAPH libraries developed on VAX computer running VMS.

  11. Trend Monitoring System (TMS) graphics software

    NASA Technical Reports Server (NTRS)

    Brown, J. S.

    1979-01-01

    A prototype bus communications system, which is being used to support the Trend Monitoring System (TMS) and to evaluate the bus concept, is considered. A set of FORTRAN-callable graphics subroutines for the host MODCOMP computer, and an approach to splitting graphics work between the host and the system's intelligent graphics terminals, are described. The graphics software in the MODCOMP and the operating software package written for the graphics terminals are included.

  12. Forced gradient infiltration experiments: effect on the release processes of mobile particles and organic contaminants

    NASA Astrophysics Data System (ADS)

    Pagels, B.; Reichel, K.; Totsche, K. U.

    2009-04-01

    Mobile colloidal and suspended matter is likely to affect the mobility of polycyclic aromatic hydrocarbons (PAHs) in the unsaturated soil zone at contaminated sites. We studied the release of mobile (organic) particles (MOPs), which include, among others, dissolved and colloidal organic matter, in response to forced sprinkling infiltration and multiple flow interrupts, using undisturbed zero-tension lysimeters. The aim was to assess the effect of these MOPs on the export of PAHs and other contaminants in floodplain soils. Seepage water samples were analyzed for dissolved and colloidal organic carbon (DOC), PAHs, suspended particles, pH, electrical conductivity, turbidity, zeta potential, and surface tension in the fraction smaller than 0.7 µm. In addition, selected PAHs were analyzed in the size fraction > 0.7 µm. Bromide was used as a conservative tracer to determine the flow regime. First arrival of bromide was detected 3.8 hours after the start of irrigation. The concentration gradually increased and reached a level of C/C₀ = 0.1 just before the flow interrupt (FI). After flow was resumed, the effluent bromide concentration was equal to the concentration before the FI. Ongoing irrigation caused a breakthrough wave, which continuously increased until the bromide concentration reached ~100% of the input concentration. A high-intensity rain event of 4 L m⁻² h⁻¹ upon summer-dried lysimeters resulted in a release of particles in the size range of 250-400 nm. In addition, it appears that surface-active agents are released with the initially exported seepage water, as indicated by the decrease of the surface tension to 60 mN m⁻¹ (pure water: 72 mN m⁻¹). The turbidity values range from 8-14 FAU. The concentration of DOC is about 30-40 mg L⁻¹ in the initial effluent fractions and equilibrates to 15 mg L⁻¹ with ongoing percolation. The PAHs in the fraction < 0.7 µm amount to 0.02 µg L⁻¹, and 0.05 µg L⁻¹ in the fraction > 0.7 µm. After establishing steady state flow conditions, first arrival of bromide was detected

  13. Animation graphic interface for the space shuttle onboard computer

    NASA Technical Reports Server (NTRS)

    Wike, Jeffrey; Griffith, Paul

    1989-01-01

    Graphics interfaces designed to operate on space-qualified hardware challenge software designers to display complex information under processing-power and physical-size constraints. Under contract to Johnson Space Center, MICROEXPERT Systems is currently constructing an intelligent interface for the LASER DOCKING SENSOR (LDS) flight experiment. Part of this interface is a graphic animation display for Rendezvous and Proximity Operations. The displays have been designed in consultation with Shuttle astronauts. The displays show multiple views of a satellite relative to the shuttle, coupled with numeric attitude information. The graphics are generated using position data received by the Shuttle Payload and General Support Computer (PGSC) from the Laser Docking Sensor. Design considerations include crew member preferences in graphic data representation; single versus multiple window displays; mission tailoring of graphic displays; realistic 3D images versus generic icon representations of real objects; the physical relationship of the observers to the graphic display; how numeric or textual information should interface with graphic data; the frame of reference in which objects should be portrayed; recognizing conditions of display information overload; and screen format and placement consistency.

  14. Graphic Communications. Career Education Guide.

    ERIC Educational Resources Information Center

    Dependents Schools (DOD), Washington, DC. European Area.

    The curriculum guide is designed to provide students with realistic training in graphic communications theory and practice within the secondary educational framework and to prepare them for entry into an occupation or continuing postsecondary education. The program modules outlined in the guide have been grouped into four areas: printing,…

  15. A Natural Language Graphics System.

    ERIC Educational Resources Information Center

    Brown, David, C.; Kwasny, Stan C.

    This report describes an experimental system for drawing simple pictures on a computer graphics terminal using natural language input. The system is capable of drawing lines, points, and circles on command from the user, as well as answering questions about system capabilities and objects on the screen. Erasures are permitted and language input…

  16. Astronomy Simulation with Computer Graphics.

    ERIC Educational Resources Information Center

    Thomas, William E.

    1982-01-01

    "Planetary Motion Simulations" is a system of programs designed for students to observe motions of a superior planet (one whose orbit lies outside the orbit of the earth). Programs run on the Apple II microcomputer and employ high-resolution graphics to present the motions of Saturn. (Author/JN)

  17. Graphic Novels in the Classroom

    ERIC Educational Resources Information Center

    Martin, Adam

    2009-01-01

    Today many authors and artists adapt works of classic literature into a medium more "user friendly" to the increasingly visual student population. Stefan Petrucha and Kody Chamberlain's version of "Beowulf" is one example. The graphic novel captures the entire epic in arresting images and contrasts the darkness of the setting and characters with…

  18. Revised adage graphics computer system

    NASA Technical Reports Server (NTRS)

    Tulppo, J. S.

    1980-01-01

    Bootstrap loader and mode-control options for the Adage Graphics Computer System significantly simplify operating procedures. Normal load and control functions are performed quickly and easily from the control console. Operating characteristics of the revised system include greatly increased speed, convenience, and reliability.

  19. Graphic Arts/Offset Lithography.

    ERIC Educational Resources Information Center

    Hoisington, James; Metcalf, Joseph

    This revised curriculum for graphic arts is designed to provide secondary and postsecondary students with entry-level skills and an understanding of current printing technology. It contains lesson plans based on entry-level competencies for offset lithography as identified by educators and industry representatives. The guide is divided into 15…

  20. Recorded Music and Graphic Design.

    ERIC Educational Resources Information Center

    Osterer, Irv

    1998-01-01

    Reviews the history of art as an element of music-recording packaging. Describes a project in which students design a jacket for either cassette or CD using a combination of computerized and traditional rendering techniques. Reports that students have been inspired to look into careers in graphic design. (DSK)