Science.gov

Sample records for parallel processing strategies

  1. Parallel strategies for SAR processing

    NASA Astrophysics Data System (ADS)

    Segoviano, Jesus A.

    2004-12-01

    This article proposes a series of strategies for speeding up the computer processing of Synthetic Aperture Radar (SAR) signals, following the three usual lines of action for accelerating any computer program. First, it examines the optimization of the data structures and the application architecture: the data structures commonly employed in SAR processing are reviewed, parallel alternatives are proposed, and the parallelization of the algorithms used in the process is described. The parallel application architecture classifies processes as fine- or coarse-grained; these are assigned to individual processors or divided among several processors, according to the corresponding architecture. Second, it considers hardware improvements, surveying the platforms on which parallel SAR processing is implemented: shared-memory multiprocessors and distributed-memory multicomputers. A comparison between them yields guidelines for achieving maximum throughput with minimum latency, and maximum effectiveness with minimum cost, all within limited complexity. The article concludes that processing the algorithms in a GNU/Linux environment on a Beowulf cluster platform offers, under certain conditions, the best compromise between performance and cost, and promises the greatest potential for the computationally demanding SAR applications of the coming years.
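
    As a concrete illustration of the coarse-grained, cluster-oriented approach the article favors, the sketch below farms independent range lines of a raw SAR image out to worker processes, the same pattern a Beowulf cluster applies across nodes. It is a minimal sketch: Python's multiprocessing stands in for message passing, and all names (range_compress, chirp) are illustrative rather than taken from the article.

    ```python
    # Minimal sketch of coarse-grained SAR processing: each range line is
    # matched-filtered independently, so lines can be farmed out to workers.
    import numpy as np
    from multiprocessing import Pool

    def range_compress(line, chirp):
        # Matched filtering in the frequency domain (one range line).
        return np.fft.ifft(np.fft.fft(line) * np.conj(np.fft.fft(chirp, len(line))))

    def compress_image(raw, chirp, workers=4):
        with Pool(workers) as pool:
            rows = pool.starmap(range_compress, [(row, chirp) for row in raw])
        return np.vstack(rows)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        raw = rng.standard_normal((64, 1024)) + 1j * rng.standard_normal((64, 1024))
        chirp = np.exp(1j * np.pi * 1e-3 * np.arange(256) ** 2)  # toy LFM pulse
        print(compress_image(raw, chirp).shape)  # (64, 1024)
    ```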

  2. Special parallel processing workshop

    SciTech Connect

    1994-12-01

    This report contains viewgraphs from the Special Parallel Processing Workshop. The viewgraphs cover topics such as parallel processing performance, message passing, queue structure, and other basic concepts of parallel processing.

  3. Parallel processing ITS

    SciTech Connect

    Fan, W.C.; Halbleib, J.A. Sr.

    1996-09-01

    This report provides a users' guide for parallel processing ITS on a UNIX workstation network, a shared-memory multiprocessor, or a massively parallel processor. The parallelized version of ITS is based on a master/slave model with message passing. Parallel issues such as random number generation, load balancing, and communication software are briefly discussed. Timing results for example problems are presented for demonstration purposes.
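
    The master/slave pattern with independent random number streams that the guide mentions can be sketched in a few lines. This is a hedged illustration, not the ITS code: a toy Monte Carlo tally stands in for radiation transport, and per-batch seeding shows one common way to keep parallel random number generation reproducible while idle workers pull new batches for load balancing.

    ```python
    # Sketch of a master/slave Monte Carlo pattern: the master hands out
    # batches of histories; each batch uses its own RNG stream so results
    # are reproducible regardless of which slave runs it. Illustrative only.
    import numpy as np
    from multiprocessing import Pool

    def run_batch(args):
        batch_id, n_histories = args
        rng = np.random.default_rng(12345 + batch_id)   # per-batch stream
        # Toy "transport" tally: mean of an exponential path-length sample.
        return rng.exponential(1.0, n_histories).mean()

    if __name__ == "__main__":
        batches = [(i, 10_000) for i in range(32)]      # more batches than workers
        with Pool(4) as pool:                           # dynamic load balancing:
            tallies = pool.map(run_batch, batches)      # idle slaves pull new work
        print(sum(tallies) / len(tallies))
    ```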

  4. A new strategy for efficient solar energy conversion: Parallel-processing with surface plasmons

    NASA Technical Reports Server (NTRS)

    Anderson, L. M.

    1982-01-01

    This paper introduces an advanced concept for direct conversion of sunlight to electricity, which aims at high efficiency by tailoring the conversion process to separate energy bands within the broad solar spectrum. The objective is to obtain a high level of spectrum-splitting without sequential losses or unique materials for each frequency band. In this concept, sunlight excites a spectrum of surface plasma waves which are processed in parallel on the same metal film. The surface plasmons transport energy to an array of metal-barrier-semiconductor diodes, where energy is extracted by inelastic tunneling. Diodes are tuned to different frequency bands by selecting the operating voltage and geometry, but all diodes share the same materials.

  5. Tightly integrated single- and multi-crystal data collection strategy calculation and parallelized data processing in JBluIce beamline control system

    SciTech Connect

    Pothineni, Sudhir Babu; Venugopalan, Nagarajan; Ogata, Craig M.; Hilgart, Mark C.; Stepanov, Sergey; Sanishvili, Ruslan; Becker, Michael; Winter, Graeme; Sauter, Nicholas K.; Smith, Janet L.; Fischetti, Robert F.

    2014-11-18

    The calculation of single- and multi-crystal data collection strategies and a data processing pipeline have been tightly integrated into the macromolecular crystallographic data acquisition and beamline control software JBluIce. Both tasks employ wrapper scripts around existing crystallographic software. JBluIce executes scripts through a distributed resource management system to make efficient use of all available computing resources through parallel processing. The JBluIce single-crystal data collection strategy feature uses a choice of strategy programs to help users rank sample crystals and collect data. The strategy results can be conveniently exported to a data collection run. The JBluIce multi-crystal strategy feature calculates a collection strategy to optimize coverage of reciprocal space in cases where incomplete data are available from previous samples. The JBluIce data processing runs simultaneously with data collection using a choice of data reduction wrappers for integration and scaling of newly collected data, with an option for merging with pre-existing data. Data are processed separately if collected from multiple sites on a crystal or from multiple crystals, then scaled and merged. Results from all strategy and processing calculations are displayed in relevant tabs of JBluIce.

  6. Tightly integrated single- and multi-crystal data collection strategy calculation and parallelized data processing in JBluIce beamline control system.

    PubMed

    Pothineni, Sudhir Babu; Venugopalan, Nagarajan; Ogata, Craig M; Hilgart, Mark C; Stepanov, Sergey; Sanishvili, Ruslan; Becker, Michael; Winter, Graeme; Sauter, Nicholas K; Smith, Janet L; Fischetti, Robert F

    2014-12-01

    The calculation of single- and multi-crystal data collection strategies and a data processing pipeline have been tightly integrated into the macromolecular crystallographic data acquisition and beamline control software JBluIce. Both tasks employ wrapper scripts around existing crystallographic software. JBluIce executes scripts through a distributed resource management system to make efficient use of all available computing resources through parallel processing. The JBluIce single-crystal data collection strategy feature uses a choice of strategy programs to help users rank sample crystals and collect data. The strategy results can be conveniently exported to a data collection run. The JBluIce multi-crystal strategy feature calculates a collection strategy to optimize coverage of reciprocal space in cases where incomplete data are available from previous samples. The JBluIce data processing runs simultaneously with data collection using a choice of data reduction wrappers for integration and scaling of newly collected data, with an option for merging with pre-existing data. Data are processed separately if collected from multiple sites on a crystal or from multiple crystals, then scaled and merged. Results from all strategy and processing calculations are displayed in relevant tabs of JBluIce.

  7. Tightly integrated single- and multi-crystal data collection strategy calculation and parallelized data processing in JBluIce beamline control system

    DOE PAGES

    Pothineni, Sudhir Babu; Venugopalan, Nagarajan; Ogata, Craig M.; ...

    2014-11-18

    The calculation of single- and multi-crystal data collection strategies and a data processing pipeline have been tightly integrated into the macromolecular crystallographic data acquisition and beamline control software JBluIce. Both tasks employ wrapper scripts around existing crystallographic software. JBluIce executes scripts through a distributed resource management system to make efficient use of all available computing resources through parallel processing. The JBluIce single-crystal data collection strategy feature uses a choice of strategy programs to help users rank sample crystals and collect data. The strategy results can be conveniently exported to a data collection run. The JBluIce multi-crystal strategy feature calculates a collection strategy to optimize coverage of reciprocal space in cases where incomplete data are available from previous samples. The JBluIce data processing runs simultaneously with data collection using a choice of data reduction wrappers for integration and scaling of newly collected data, with an option for merging with pre-existing data. Data are processed separately if collected from multiple sites on a crystal or from multiple crystals, then scaled and merged. Results from all strategy and processing calculations are displayed in relevant tabs of JBluIce.

  8. Parallel processing and expert systems

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Lau, Sonie

    1991-01-01

    Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 90's cannot enjoy an increased level of autonomy without the efficient use of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real time demands are met for large expert systems. Speed-up via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial labs in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems was surveyed. The survey is divided into three major sections: (1) multiprocessors for parallel expert systems; (2) parallel languages for symbolic computations; and (3) measurements of parallelism of expert systems. Results to date indicate that the parallelism achieved for these systems is small. In order to obtain greater speed-ups, data parallelism and application parallelism must be exploited.

  9. Parallel processing and expert systems

    NASA Technical Reports Server (NTRS)

    Lau, Sonie; Yan, Jerry C.

    1991-01-01

    Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 1990s cannot enjoy an increased level of autonomy without the efficient implementation of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real-time demands are met for larger systems. Speedup via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial laboratories in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems is surveyed. The survey discusses multiprocessors for expert systems, parallel languages for symbolic computations, and mapping expert systems to multiprocessors. Results to date indicate that the parallelism achieved for these systems is small. The main reasons are (1) the body of knowledge applicable in any given situation and the amount of computation executed by each rule firing are small, (2) dividing the problem solving process into relatively independent partitions is difficult, and (3) implementation decisions that enable expert systems to be incrementally refined hamper compile-time optimization. In order to obtain greater speedups, data parallelism and application parallelism must be exploited.

  10. Parallel processing spacecraft communication system

    NASA Technical Reports Server (NTRS)

    Bolotin, Gary S. (Inventor); Donaldson, James A. (Inventor); Luong, Huy H. (Inventor); Wood, Steven H. (Inventor)

    1998-01-01

    An uplink controlling assembly speeds data processing using a special parallel codeblock technique. A correct start sequence initiates processing of a frame. Two start sequences are possible; the one used determines whether data polarity is inverted or non-inverted. Processing continues until uncorrectable errors are found. The frame ends by intentionally sending a block with an uncorrectable error. Each of the codeblocks in the frame has a channel ID, and each channel ID can be processed separately in parallel. This obviates the problem of waiting for error-correction processing. If the channel number is zero, however, the frame of data represents a critical command only; that data is handled in a special way, independent of the software. Otherwise, the processed data is further handled using special double-buffering techniques to avoid problems from overrun. When overrun does occur, the system takes action to lose only the oldest data.
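
    A minimal sketch of the per-channel scheme described above: codeblocks are routed by channel ID so that error correction of one channel never stalls another, and channel 0 is diverted to a special critical-command path. All names, and the trivial stand-in "decoder", are illustrative assumptions, not taken from the patent.

    ```python
    # Route codeblocks by channel ID; decode all non-critical channels in
    # parallel so no channel waits on another's error correction.
    from concurrent.futures import ThreadPoolExecutor
    from collections import defaultdict

    def correct_errors(block):          # stand-in for real error correction
        return block["payload"].upper()

    def process_frame(codeblocks):
        critical, by_channel = [], defaultdict(list)
        for block in codeblocks:
            if block["channel"] == 0:
                critical.append(block)  # critical commands bypass the pool
            else:
                by_channel[block["channel"]].append(block)
        with ThreadPoolExecutor() as pool:
            futures = {ch: [pool.submit(correct_errors, b) for b in blocks]
                       for ch, blocks in by_channel.items()}
            decoded = {ch: [f.result() for f in fs] for ch, fs in futures.items()}
        return critical, decoded

    frame = [{"channel": 1, "payload": "abc"}, {"channel": 2, "payload": "def"},
             {"channel": 0, "payload": "safe-mode"}]
    print(process_frame(frame))
    ```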

  11. Parallel Computing Strategies for Irregular Algorithms

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Oliker, Leonid; Shan, Hongzhang; Biegel, Bryan (Technical Monitor)

    2002-01-01

    Parallel computing promises several orders of magnitude increase in our ability to solve realistic computationally-intensive problems, but relies on their efficient mapping and execution on large-scale multiprocessor architectures. Unfortunately, many important applications are irregular and dynamic in nature, making their effective parallel implementation a daunting task. Moreover, with the proliferation of parallel architectures and programming paradigms, the typical scientist is faced with a plethora of questions that must be answered in order to obtain an acceptable parallel implementation of the solution algorithm. In this paper, we consider three representative irregular applications: unstructured remeshing, sparse matrix computations, and N-body problems, and parallelize them using various popular programming paradigms on a wide spectrum of computer platforms ranging from state-of-the-art supercomputers to PC clusters. We present the underlying problems, the solution algorithms, and the parallel implementation strategies. Smart load-balancing, partitioning, and ordering techniques are used to enhance parallel performance. Overall results demonstrate the complexity of efficiently parallelizing irregular algorithms.

  12. Scheduling Tasks In Parallel Processing

    NASA Technical Reports Server (NTRS)

    Price, Camille C.; Salama, Moktar A.

    1989-01-01

    Algorithms sought to minimize time and cost of computation. Report describes research on scheduling of computational tasks in system of multiple identical data processors operating in parallel. Computational intractability requires use of suboptimal heuristic algorithms. First algorithm, called "list heuristic", is variation of classical list scheduling. Second algorithm, called "cluster heuristic", is applied to tightly coupled tasks and consists of four phases. Third algorithm, called "exchange heuristic", is iterative-improvement algorithm beginning with initial feasible assignment of tasks to processors and periods of time. Fourth algorithm is iterative one for optimal assignment of tasks and is based on concept called "simulated annealing" because of its mathematical resemblance to aspects of physical annealing processes.
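
    Of the four algorithms, the "list heuristic" is the easiest to sketch. The toy below implements classical list scheduling (longest processing time first) on identical processors, the family of heuristic the report's first algorithm varies; it is not the report's exact algorithm.

    ```python
    # Classical list scheduling: tasks, ordered by priority (here, longest
    # duration first), are assigned greedily to whichever identical
    # processor becomes free earliest.
    import heapq

    def list_schedule(durations, n_procs):
        ready = [(0.0, p) for p in range(n_procs)]      # (free-at time, proc id)
        heapq.heapify(ready)
        assignment = {}
        for task in sorted(range(len(durations)), key=lambda t: -durations[t]):
            free_at, proc = heapq.heappop(ready)
            assignment[task] = (proc, free_at)          # start time on that proc
            heapq.heappush(ready, (free_at + durations[task], proc))
        makespan = max(t for t, _ in ready)
        return assignment, makespan

    print(list_schedule([4, 2, 7, 3, 1, 6], n_procs=2))  # toy instance
    ```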

  13. Processes parallel execution using Grid Wizard Enterprise.

    PubMed

    Ruiz, Marco

    2009-01-01

    The field of high-performance computing (HPC) has provided a wide array of strategies for supplying additional computing power toward the goal of reducing the total "clock time" required to complete various computational processes. These strategies range from the development of higher-performance hardware to the assembly of large networks of commodity computers, with each strategy designed to address a particular aspect and/or manifestation of a given computational problem. GWE (Grid Wizard Enterprise), in that regard, is a distributed HPC enterprise system aimed at solving the particular problem of running mutually independent computational processes faster, by parallelizing their execution across a virtual grid of computational resources with a minimum of user intervention.

  14. Parallel Strategies for Crash and Impact Simulations

    SciTech Connect

    Attaway, S.; Brown, K.; Hendrickson, B.; Plimpton, S.

    1998-12-07

    We describe a general strategy we have found effective for parallelizing solid mechanics simulations. Such simulations often have several computationally intensive parts, including finite element integration, detection of material contacts, and particle interaction if smoothed particle hydrodynamics is used to model highly deforming materials. The need to balance all of these computations simultaneously is a difficult challenge that has kept many commercial and government codes from being used effectively on parallel supercomputers with hundreds or thousands of processors. Our strategy is to load-balance each of the significant computations independently with whatever balancing technique is most appropriate. The chief benefit is that each computation can be scalably parallelized. The drawback is the data exchange between processors and extra coding that must be written to maintain multiple decompositions in a single code. We discuss these trade-offs and give performance results showing this strategy has led to a parallel implementation of a widely-used solid mechanics code that can now be run efficiently on thousands of processors of the Pentium-based Sandia/Intel TFLOPS machine. We illustrate with several examples the kinds of high-resolution, million-element models that can now be simulated routinely. We also look to the future and discuss what possibilities this new capability promises, as well as the new set of challenges it poses in material models, computational techniques, and computing infrastructure.
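
    The core idea, one independent decomposition per computation plus an exchange map between them, can be sketched briefly. In the toy below, finite elements are split by element ID while contact detection is split geometrically by coordinate; the exchange map records which element data must move between the two decompositions each step. The partitioners are deliberately simplistic stand-ins for the paper's balancing techniques.

    ```python
    # Two independent decompositions of the same elements, plus the map of
    # what each finite-element owner must send to the contact owners.
    import numpy as np

    def fe_partition(n_elems, n_procs):                 # cheap block split
        return np.array_split(np.arange(n_elems), n_procs)

    def contact_partition(x_coords, n_procs):           # geometric (by x slab)
        order = np.argsort(x_coords)
        return np.array_split(order, n_procs)

    def exchange_map(part_a, part_b):
        owner_b = {int(e): p for p, elems in enumerate(part_b) for e in elems}
        # For each FE owner: where must its elements be sent for contact?
        return {p: {int(e): owner_b[int(e)] for e in elems}
                for p, elems in enumerate(part_a)}

    x = np.random.default_rng(1).random(12)             # toy element x coords
    fe, ct = fe_partition(12, 3), contact_partition(x, 3)
    print(exchange_map(fe, ct))
    ```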

  15. Parallel Processing at the High School Level.

    ERIC Educational Resources Information Center

    Sheary, Kathryn Anne

    This study investigated the ability of high school students to cognitively understand and implement parallel processing. Data indicates that most parallel processing is being taught at the university level. Instructional modules on C, Linux, and the parallel processing language, P4, were designed to show that high school students are highly…

  16. Coordination in serial-parallel image processing

    NASA Astrophysics Data System (ADS)

    Wójcik, Waldemar; Dubovoi, Vladymyr M.; Duda, Marina E.; Romaniuk, Ryszard S.; Yesmakhanova, Laura; Kozbakova, Ainur

    2015-12-01

    Serial-parallel systems are used for image conversion. Controlling their operation gives rise to a coordination problem. The paper summarizes a model of coordinating resource allocation in relation to the task of synchronizing parallel processes; a genetic algorithm for coordination is developed, and its adequacy is verified against the process of parallel image processing.
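
    A minimal genetic-algorithm sketch of the coordination idea: evolve an assignment of parallel image-processing tasks to resources so that the slowest resource, which sets the synchronization point, finishes as early as possible. The durations, fitness function, and GA parameters are illustrative assumptions, not taken from the paper.

    ```python
    # Tiny GA that evolves a task-to-resource assignment minimizing the
    # makespan (the synchronization bottleneck). Illustrative only.
    import random

    DUR = [5, 3, 8, 2, 7, 4, 6, 1]                      # task durations
    N_RES = 3

    def makespan(genome):                               # genome[i] = resource of task i
        loads = [0] * N_RES
        for task, res in enumerate(genome):
            loads[res] += DUR[task]
        return max(loads)

    def evolve(pop_size=30, gens=100, mut=0.1):
        random.seed(0)
        pop = [[random.randrange(N_RES) for _ in DUR] for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=makespan)
            survivors = pop[: pop_size // 2]            # elitist selection
            children = []
            while len(children) < pop_size - len(survivors):
                a, b = random.sample(survivors, 2)
                cut = random.randrange(1, len(DUR))
                child = a[:cut] + b[cut:]               # one-point crossover
                if random.random() < mut:
                    child[random.randrange(len(DUR))] = random.randrange(N_RES)
                children.append(child)
            pop = survivors + children
        return min(pop, key=makespan)

    best = evolve()
    print(best, makespan(best))
    ```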

  17. Parallel processing for scientific computations

    NASA Technical Reports Server (NTRS)

    Alkhatib, Hasan S.

    1991-01-01

    The main contribution of the effort in the last two years is the introduction of the MOPPS system. After an extensive literature search, we introduced the system, which is described next. MOPPS employs a new solution to the problem of managing programs that solve scientific and engineering applications in a distributed processing environment. Autonomous computers cooperate efficiently in solving large scientific problems with this solution. MOPPS has the advantage of not assuming the presence of any particular network topology or configuration, computer architecture, or operating system. It imposes little overhead on network and processor resources while efficiently managing programs concurrently. The core of MOPPS is an intelligent program manager that builds a knowledge base of the execution performance of the parallel programs it is managing under various conditions. The manager applies this knowledge to improve the performance of future runs. The program manager learns from experience.

  18. Dual compile strategy for parallel heterogeneous execution.

    SciTech Connect

    Smith, Tyler Barratt; Perry, James Thomas

    2012-06-01

    The purpose of the Dual Compile Strategy is to increase our trust in the Compute Engine during its execution of instructions. This is accomplished by introducing a heterogeneous Monitor Engine that checks the execution of the Compute Engine. This leads to the production of a second and custom set of instructions designed for monitoring the execution of the Compute Engine at runtime. This use of multiple engines differs from redundancy in that one engine is working on the application while the other engine is monitoring and checking in parallel instead of both applications (and engines) performing the same work at the same time.

  19. Parallel Activation in Bilingual Phonological Processing

    ERIC Educational Resources Information Center

    Lee, Su-Yeon

    2011-01-01

    In bilingual language processing, the parallel activation hypothesis suggests that bilinguals activate their two languages simultaneously during language processing. Support for the parallel activation mainly comes from studies of lexical (word-form) processing, with relatively less attention to phonological (sound) processing. According to…

  20. Partitioning in parallel processing of production systems

    SciTech Connect

    Oflazer, K.

    1987-01-01

    This thesis presents research on certain issues related to parallel processing of production systems. It first presents a parallel production system interpreter that has been implemented on a four-processor multiprocessor. This parallel interpreter is based on Forgy's OPS5 interpreter and exploits production-level parallelism in production systems. Runs on the multiprocessor system indicate that it is possible to obtain speed-up of around 1.7 in the match computation for certain production systems when productions are split into three sets that are processed in parallel. The next issue addressed is that of partitioning a set of rules to processors in a parallel interpreter with production-level parallelism, and the extent of additional improvement in performance. The partitioning problem is formulated and an algorithm for approximate solutions is presented. The thesis next presents a parallel processing scheme for OPS5 production systems that allows some redundancy in the match computation. This redundancy enables the processing of a production to be divided into units of medium granularity each of which can be processed in parallel. Subsequently, a parallel processor architecture for implementing the parallel processing algorithm is presented.

  1. Parallel processing considerations for image recognition tasks

    NASA Astrophysics Data System (ADS)

    Simske, Steven J.

    2011-01-01

    Many image recognition tasks are well-suited to parallel processing. The most obvious example is that many imaging tasks require the analysis of multiple images. From this standpoint, then, parallel processing need be no more complicated than assigning individual images to individual processors. However, there are three less trivial categories of parallel processing that will be considered in this paper: parallel processing (1) by task; (2) by image region; and (3) by meta-algorithm. Parallel processing by task allows the assignment of multiple workflows (as diverse as optical character recognition [OCR], document classification, and barcode reading) to parallel pipelines. This can substantially decrease time to completion for the document tasks. For this approach, each parallel pipeline is generally performing a different task. Parallel processing by image region allows a larger imaging task to be sub-divided into a set of parallel pipelines, each performing the same task but on a different data set. This type of image analysis is readily addressed by a map-reduce approach. Examples include document skew detection and multiple face detection and tracking. Finally, parallel processing by meta-algorithm allows different algorithms to be deployed on the same image simultaneously. This approach may result in improved accuracy.
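
    Category (2), parallel processing by image region, maps directly onto a map-reduce skeleton. The sketch below tiles an image, runs the same toy "detector" on every tile in parallel, and reduces the per-tile hits into one list, with tile offsets restoring global coordinates. The detector itself is a placeholder, not any algorithm from the paper.

    ```python
    # Map-reduce by image region: map the same detector over tiles in
    # parallel, then reduce per-tile hits into global coordinates.
    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    def detect(tile_and_offset):
        tile, (dy, dx) = tile_and_offset
        ys, xs = np.nonzero(tile > 0.99)                # toy "detector"
        return [(int(y) + dy, int(x) + dx) for y, x in zip(ys, xs)]

    def map_reduce(image, n=2):
        h, w = image.shape
        tiles = [(image[i*h//n:(i+1)*h//n, j*w//n:(j+1)*w//n],
                  (i*h//n, j*w//n)) for i in range(n) for j in range(n)]
        with ProcessPoolExecutor() as pool:
            return [hit for hits in pool.map(detect, tiles) for hit in hits]

    if __name__ == "__main__":
        img = np.random.default_rng(0).random((200, 200))
        print(len(map_reduce(img)))                     # ~1% of the pixels
    ```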

  2. Parallel processing of genomics data

    NASA Astrophysics Data System (ADS)

    Agapito, Giuseppe; Guzzi, Pietro Hiram; Cannataro, Mario

    2016-10-01

    The availability of high-throughput experimental platforms for the analysis of biological samples, such as mass spectrometry, microarrays and Next Generation Sequencing, has made it possible to analyze a whole genome in a single experiment. Such platforms produce an enormous volume of data per experiment, and the analysis of this flow of data poses several challenges in terms of data storage, preprocessing, and analysis. To face those issues, efficient, possibly parallel, bioinformatics software needs to be used to preprocess and analyze the data, for instance to highlight genetic variation associated with complex diseases. In this paper we present a parallel algorithm for the preprocessing and statistical analysis of genomics data, able to cope with high-dimensional data while achieving good response times. The proposed system is able to find statistically significant biological markers that discriminate classes of patients who respond to drugs in different ways. Experiments performed on real and synthetic genomic datasets show good speed-up and scalability.
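
    The per-marker independence that makes this kind of analysis parallelizable can be sketched as follows: each genomic marker is tested against the phenotype in its own task, and the tasks are scattered over a process pool. The SNP matrix, labels, and chi-square test are stand-ins of my choosing, not the authors' pipeline.

    ```python
    # Test each marker independently in parallel; gather significant ones.
    import numpy as np
    from multiprocessing import Pool
    from scipy.stats import chi2_contingency

    def test_marker(args):
        genotypes, phenotype = args                     # one marker, all patients
        table = np.zeros((3, 2))                        # genotype x case/control
        for g, p in zip(genotypes, phenotype):
            table[g, p] += 1
        return chi2_contingency(table + 1)[1]           # +1: avoid empty cells

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        snps = rng.integers(0, 3, (1000, 200))          # 1000 markers x 200 patients
        pheno = rng.integers(0, 2, 200)
        with Pool(4) as pool:
            pvals = pool.map(test_marker, [(m, pheno) for m in snps])
        print(sum(p < 0.01 for p in pvals), "markers below p = 0.01")
    ```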

  3. Parallelization Strategies for Large Particle Simulations in Astrophysics

    NASA Astrophysics Data System (ADS)

    Pattabiraman, Bharath

    The modeling of collisional N-body stellar systems is a topic of great current interest in several branches of astrophysics and cosmology. These systems are dominated by the physics of relaxation, the collective effect of many weak, random gravitational encounters between stars. They connect directly to our understanding of star clusters, and to the formation of exotic objects such as X-ray binaries, pulsars, and massive black holes. As a prototypical multi-physics, multi-scale problem, the numerical simulation of such systems is computationally intensive, and can only be achieved through high-performance computing. The goal of this thesis is to present parallelization and optimization strategies that can be used to develop efficient computational tools for simulating collisional N-body systems. This leads to major advances: 1) From an astrophysics perspective, these tools enable the study of new physical regimes out of reach by previous simulations. They also lead to much more complete parameter space exploration, allowing direct comparison of numerical results to observational data. 2) On the high-performance computing front, efficient parallelization of a multi-component application requires the meticulous redesign of the various components, as well as innovative parallelization techniques. Many of the challenges faced in this process lie at the very heart of high-performance computing research, including achieving optimal load balancing, maximizing utilization of computational resources, and making effective use of different parallel platforms. For modeling collisional N-body systems, a Monte Carlo approach provides ideal balance between speed and accuracy, as opposed to the more accurate but less scalable direct N-body method. We describe the development of a new version of the Cluster Monte Carlo (CMC) code capable of simulating systems with a realistic number of stars, while accounting for all important physical processes. This efficient and scalable

  4. Parallel processing for scientific computations

    NASA Technical Reports Server (NTRS)

    Alkhatib, Hasan S.

    1995-01-01

    The scope of this project dealt with the investigation of the requirements to support distributed computing of scientific computations over a cluster of cooperative workstations. Various experiments on computations for the solution of simultaneous linear equations were performed in the early phase of the project to gain experience in the general nature and requirements of scientific applications. A specification of a distributed integrated computing environment, DICE, based on a distributed shared memory communication paradigm has been developed and evaluated. The distributed shared memory model facilitates porting existing parallel algorithms that have been designed for shared memory multiprocessor systems to the new environment. The potential of this new environment is to provide supercomputing capability through the utilization of the aggregate power of workstations cooperating in a cluster interconnected via a local area network. Workstations, generally, do not have the computing power to tackle complex scientific applications, making them primarily useful for visualization, data reduction, and filtering as far as complex scientific applications are concerned. There is a tremendous amount of computing power that is left unused in a network of workstations. Very often a workstation is simply sitting idle on a desk. A set of tools can be developed to take advantage of this potential computing power to create a platform suitable for large scientific computations. The integration of several workstations into a logical cluster of distributed, cooperative, computing stations presents an alternative to shared memory multiprocessor systems. In this project we designed and evaluated such a system.

  5. Parallel processing of a rotating shaft simulation

    NASA Technical Reports Server (NTRS)

    Arpasi, Dale J.

    1989-01-01

    A FORTRAN program describing the vibration modes of a rotor-bearing system is analyzed for parallelism using a Pascal-like structured language. Potential vector operations are also identified. A critical path through the simulation is identified and used in conjunction with somewhat fictitious processor characteristics to determine the time to calculate the problem on a parallel processing system having those characteristics. A parallel processing overhead time is included as a parameter for proper evaluation of the gain over serial calculation. The serial calculation time is determined for the same fictitious system. An improvement of up to 640 percent is possible depending on the value of the overhead time. Based on the analysis, certain conclusions are drawn pertaining to the development needs of parallel processing technology, and to the specification of parallel processing systems to meet computational needs.

  6. Highly Parallel Modern Signal Processing.

    DTIC Science & Technology

    1982-02-28

    looked at the application of these techniques to systems with coherent speckle noise, such as synthetic aperture radar (SAR) imagery and coherent sonar... partitioned matrix inversion, computation of cross-ambiguity functions, formation of outer products and skewed outer products, and multiplication of... operations are multiplication, inversion, and L-U decomposition. In signal processing such operations can be found in adaptive filtering, data

  7. Knowledge representation into Ada parallel processing

    NASA Technical Reports Server (NTRS)

    Masotto, Tom; Babikyan, Carol; Harper, Richard

    1990-01-01

    The Knowledge Representation into Ada Parallel Processing project is a joint NASA and Air Force funded project to demonstrate the execution of intelligent systems in Ada on the Charles Stark Draper Laboratory fault-tolerant parallel processor (FTPP). Two applications were demonstrated - a portion of the adaptive tactical navigator and a real time controller. Both systems are implemented as Activation Framework Objects on the Activation Framework intelligent scheduling mechanism developed by Worcester Polytechnic Institute. The implementations, results of performance analyses showing speedup due to parallelism and initial efficiency improvements are detailed and further areas for performance improvements are suggested.

  8. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael E; Ratterman, Joseph D; Smith, Brian E

    2014-02-11

    Endpoint-based parallel data processing in a parallel active messaging interface ('PAMI') of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  9. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-08-12

    Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  10. Efficient multitasking: parallel versus serial processing of multiple tasks.

    PubMed

    Fischer, Rico; Plessow, Franziska

    2015-01-01

    In the context of performance optimization in multitasking, a central debate has unfolded around whether cognitive processes related to different tasks proceed only sequentially (one at a time) or can operate in parallel (simultaneously). This review features a discussion of theoretical considerations and empirical evidence regarding parallel versus serial task processing in multitasking. In addition, we highlight how methodological differences and theoretical conceptions determine the extent to which parallel processing in multitasking can be detected, in order to guide their employment in future research. Parallel and serial processing of multiple tasks are not mutually exclusive; therefore, questions focusing exclusively on either task-processing mode are too simplified. We review empirical evidence and demonstrate that shifting between more parallel and more serial task processing critically depends on the conditions under which multiple tasks are performed. We conclude that efficient multitasking is reflected by the ability of individuals to adjust multitasking performance to environmental demands by flexibly shifting between different processing strategies of multiple task-component scheduling.

  11. Parallel algorithm strategies for circuit simulation.

    SciTech Connect

    Thornquist, Heidi K.; Schiek, Richard Louis; Keiter, Eric Richard

    2010-01-01

    Circuit simulation tools (e.g., SPICE) have become invaluable in the development and design of electronic circuits. However, they have been pushed to their performance limits in addressing circuit design challenges that come from the technology drivers of smaller feature scales and higher integration. Improving the performance of circuit simulation tools through exploiting new opportunities in widely-available multi-processor architectures is a logical next step. Unfortunately, not all traditional simulation applications are inherently parallel, and quickly adapting mature application codes (even codes designed as parallel applications) to new parallel paradigms can be prohibitively difficult. In general, performance is influenced by many choices: hardware platform, runtime environment, languages and compilers used, algorithm choice and implementation, and more. In this complicated environment, the use of mini-applications, small self-contained proxies for real applications, is an excellent approach for rapidly exploring the parameter space of all these choices. In this report we present a multi-core performance study of Xyce, a transistor-level circuit simulation tool, and describe the future development of a mini-application for circuit simulation.

  12. Parallel processing: The Cm* experience

    SciTech Connect

    Siewiorek, D.; Gehringer, E.; Segall, Z.

    1986-01-01

    This book describes the parallel-processing research with Cm* at Carnegie-Mellon University. Cm* is a tightly coupled 50-processor multiprocessing system that has been in operation since 1977. Two complete operating systems, StarOS and Medusa, were developed as part of the effort, along with a number of applications.

  13. Parallel processing for computer vision and display

    SciTech Connect

    Dew, P.M.; Earnshaw, R.A.; Heywood, T.R.

    1989-01-01

    The widespread availability of high performance computers has led to an increased awareness of the importance of visualization techniques, particularly in engineering and science. However, many visualization tasks involve processing large amounts of data or manipulating complex computer models of 3D objects. For example, in the field of computer aided engineering it is often necessary to display and edit a solid object (see Plate 1), which can take many minutes even on the fastest serial processors. Another example of a computationally intensive problem, this time from computer vision, is the recognition of objects in a 3D scene from a stereo image pair. To perform visualization tasks of this type in real and reasonable time it is necessary to exploit the advances in parallel processing that have taken place over the last decade. This book uniquely provides a collection of papers from leading visualization researchers with a common interest in the application and exploitation of parallel processing techniques.

  14. Parallel Function Strategy in Pronoun Assignment

    ERIC Educational Resources Information Center

    Grober, Ellen H.; And Others

    1978-01-01

    Subjects completed sentences of the form NP1 aux V NP2 because (but) Pro...(e.g., John may scold Bill because he...) with a reason or motive for the action described. A basic perceptual strategy was hypothesized to underlie the comprehension of these sentences which have a potentially ambiguous pronoun in the subject position of the subordinate…

  15. Parallel Programming Strategies for Irregular Adaptive Applications

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    Achieving scalable performance for dynamic irregular applications is eminently challenging. Traditional message-passing approaches have been making steady progress towards this goal; however, they suffer from complex implementation requirements. The use of a global address space greatly simplifies the programming task, but can degrade the performance for such computations. In this work, we examine two typical irregular adaptive applications, Dynamic Remeshing and N-Body, under competing programming methodologies and across various parallel architectures. The Dynamic Remeshing application simulates flow over an airfoil, and refines localized regions of the underlying unstructured mesh. The N-Body experiment models two neighboring Plummer galaxies that are about to undergo a merger. Both problems demonstrate dramatic changes in processor workloads and interprocessor communication with time; thus, dynamic load balancing is a required component.

  16. Parallel processing for nonlinear dynamics simulations of structures including rotating bladed-disk assemblies

    NASA Technical Reports Server (NTRS)

    Hsieh, Shang-Hsien

    1993-01-01

    The principal objective of this research is to develop, test, and implement coarse-grained, parallel-processing strategies for nonlinear dynamic simulations of practical structural problems. There are contributions to four main areas: finite element modeling and analysis of rotational dynamics, numerical algorithms for parallel nonlinear solutions, automatic partitioning techniques to effect load-balancing among processors, and an integrated parallel analysis system.

  17. Parallel asynchronous systems and image processing algorithms

    NASA Technical Reports Server (NTRS)

    Coon, D. D.; Perera, A. G. U.

    1989-01-01

    A new hardware approach to implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike asynchronous pulse coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently as in natural vision systems. The research is aimed at implementation of algorithms, such as the intensity dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision system type architecture. Besides providing a neural network framework for implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after raw intensity data has been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so that hardware implementation poses intriguing questions involving vision science.

  18. A parallel Jacobson-Oksman optimization algorithm. [parallel processing (computers)

    NASA Technical Reports Server (NTRS)

    Straeter, T. A.; Markos, A. T.

    1975-01-01

    A gradient-dependent optimization technique which exploits the vector-streaming or parallel-computing capabilities of some modern computers is presented. The algorithm, derived by assuming that the function to be minimized is homogeneous, is a modification of the Jacobson-Oksman serial minimization method. In addition to describing the algorithm, conditions insuring the convergence of the iterates of the algorithm and the results of numerical experiments on a group of sample test functions are presented. The results of these experiments indicate that this algorithm will solve optimization problems in less computing time than conventional serial methods on machines having vector-streaming or parallel-computing capabilities.

  19. Oxytocin: parallel processing in the social brain?

    PubMed

    Dölen, Gül

    2015-06-01

    Early studies attempting to disentangle the network complexity of the brain exploited the accessibility of sensory receptive fields to reveal circuits made up of synapses connected both in series and in parallel. More recently, extension of this organisational principle beyond the sensory systems has been made possible by the advent of modern molecular, viral and optogenetic approaches. Here, evidence supporting parallel processing of social behaviours mediated by oxytocin is reviewed. Understanding oxytocinergic signalling from this perspective has significant implications for the design of oxytocin-based therapeutic interventions aimed at disorders such as autism, where disrupted social function is a core clinical feature. Moreover, identification of opportunities for novel technology development will require a better appreciation of the complexity of the circuit-level organisation of the social brain.

  20. A multiarchitecture parallel-processing development environment

    NASA Technical Reports Server (NTRS)

    Townsend, Scott; Blech, Richard; Cole, Gary

    1993-01-01

    A description is given of the hardware and software of a multiprocessor test bed - the second generation Hypercluster system. The Hypercluster architecture consists of a standard hypercube distributed-memory topology, with multiprocessor shared-memory nodes. By using standard, off-the-shelf hardware, the system can be upgraded to use rapidly improving computer technology. The Hypercluster's multiarchitecture nature makes it suitable for researching parallel algorithms in computational field simulation applications (e.g., computational fluid dynamics). The dedicated test-bed environment of the Hypercluster and its custom-built software allows experiments with various parallel-processing concepts such as message passing algorithms, debugging tools, and computational 'steering'. Such research would be difficult, if not impossible, to achieve on shared, commercial systems.

  1. Parallel Guessing: A Strategy for High-Speed Computation

    DTIC Science & Technology

    1984-09-19

    for using additional hardware to obtain higher processing speed). In this paper we argue that "parallel guessing" for image analysis is a useful... "distance" from a true solution, or the correctness of a guess, can be readily checked. We review image-analysis algorithms having a parallel guessing or

  2. Partitioning sparse rectangular matrices for parallel processing

    SciTech Connect

    Kolda, T.G.

    1998-05-01

    The authors are interested in partitioning sparse rectangular matrices for parallel processing. The partitioning problem has been well-studied in the square symmetric case, but the rectangular problem has received very little attention. They will formalize the rectangular matrix partitioning problem and discuss several methods for solving it. They will extend the spectral partitioning method for symmetric matrices to the rectangular case and compare this method to three new methods -- the alternating partitioning method and two hybrid methods. The hybrid methods will be shown to be best.
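
    A rough sketch of the spectral approach extended to the rectangular case: the second left and right singular vectors play the role of the Fiedler vector, inducing row and column orderings that are then split in half. This mirrors the idea described in the abstract under my own assumptions; it is not the author's exact algorithm.

    ```python
    # Spectral bipartition of a rectangular (e.g., nonsymmetric sparse)
    # matrix via its second singular vectors. Illustrative sketch only.
    import numpy as np

    def spectral_bipartition(A):
        # Second left/right singular vectors stand in for the Fiedler vector.
        U, _, Vt = np.linalg.svd(A, full_matrices=False)
        row_order = np.argsort(U[:, 1])
        col_order = np.argsort(Vt[1, :])
        half_r, half_c = len(row_order) // 2, len(col_order) // 2
        return (row_order[:half_r], row_order[half_r:],
                col_order[:half_c], col_order[half_c:])

    A = (np.random.default_rng(0).random((8, 6)) < 0.3).astype(float)
    r0, r1, c0, c1 = spectral_bipartition(A)
    print("row parts:", r0, r1, "col parts:", c0, c1)
    ```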

  3. Parallel processing for digital picture comparison

    NASA Technical Reports Server (NTRS)

    Cheng, H. D.; Kou, L. T.

    1987-01-01

    In picture processing an important problem is to identify two digital pictures of the same scene taken under different lighting conditions. This kind of problem can be found in remote sensing, satellite signal processing, and related areas. The identification can be done by transforming the gray levels so that the gray-level histograms of the two pictures are closely matched. The transformation problem can be solved by using the packing method. The researchers propose a VLSI architecture consisting of m x n processing elements with extensive parallel and pipelining computation capabilities to speed up the transformation, with time complexity O(max(m, n)), where m and n are the numbers of gray levels of the input picture and the reference picture, respectively. Using a uniprocessor and a dynamic programming algorithm, the time complexity would be O(m³ × n). The algorithm partition problem, an important issue in VLSI design, is discussed. Verification of the proposed architecture is also given.
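
    For reference, the underlying serial transformation being accelerated is gray-level histogram matching via cumulative distributions, sketched below. The paper's contribution is the m x n processing-element array that parallelizes this computation; the sketch only fixes the baseline calculation and makes no attempt to model the VLSI architecture.

    ```python
    # Serial histogram matching: map each input gray level to the reference
    # level whose cumulative distribution value is nearest.
    import numpy as np

    def match_histograms(input_img, reference_img, levels=256):
        cdf_in = np.cumsum(np.bincount(input_img.ravel(), minlength=levels))
        cdf_ref = np.cumsum(np.bincount(reference_img.ravel(), minlength=levels))
        cdf_in = cdf_in / cdf_in[-1]
        cdf_ref = cdf_ref / cdf_ref[-1]
        lut = np.searchsorted(cdf_ref, cdf_in).clip(0, levels - 1)
        return lut[input_img].astype(np.uint8)

    rng = np.random.default_rng(0)
    dark = (rng.random((64, 64)) * 100).astype(np.uint8)      # underexposed scene
    bright = (rng.random((64, 64)) * 200 + 55).astype(np.uint8)
    print(match_histograms(dark, bright).mean() > dark.mean())  # True
    ```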

  4. Parallelization strategy for large-scale vibronic coupling calculations.

    PubMed

    Rabidoux, Scott M; Eijkhout, Victor; Stanton, John F

    2014-12-26

    The vibronic coupling model of Köppel, Domcke, and Cederbaum is a powerful means to understand, predict, and analyze electronic spectra of molecules, especially those that exhibit phenomena that involve breakdown of the Born-Oppenheimer approximation. In this work, we describe a new parallel algorithm for carrying out such calculations. The algorithm is conceptually founded upon a "stencil" representation of the required computational steps, which motivates an efficient strategy for coarse-grained parallelization. The equations involved in the direct-CI type diagonalization of the model Hamiltonian are presented, the parallelization strategy is discussed in detail, and the method is illustrated by calculations involving direct-product basis sets with as many as 17 vibrational modes and 130 billion basis functions.

  5. Cloud parallel processing of tandem mass spectrometry based proteomics data.

    PubMed

    Mohammed, Yassene; Mostovenko, Ekaterina; Henneman, Alex A; Marissen, Rob J; Deelder, André M; Palmblad, Magnus

    2012-10-05

    Data analysis in mass spectrometry based proteomics struggles to keep pace with the advances in instrumentation and the increasing rate of data acquisition. Analyzing these data involves multiple steps requiring diverse software, using different algorithms and data formats. Speed and performance of the mass spectral search engines are continuously improving, although not necessarily fast enough to meet the challenges posed by the volume of acquired data. Improving and parallelizing the search algorithms is one possibility; data decomposition presents another, simpler strategy for introducing parallelism. We describe a general method for parallelizing identification of tandem mass spectra using data decomposition that keeps the search engine intact and wraps the parallelization around it. We introduce two algorithms for decomposing mzXML files and recomposing resulting pepXML files. This makes the approach applicable to different search engines, including those relying on sequence databases and those searching spectral libraries. We use cloud computing to deliver the computational power and scientific workflow engines to interface and automate the different processing steps. We show how to leverage these technologies to achieve faster data analysis in proteomics, and present three scientific workflows for parallel database as well as spectral library search using our data decomposition programs, X!Tandem and SpectraST.
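
    The decomposition-recomposition pattern is independent of the search engine, which is the point of the paper. The schematic below splits spectra into balanced chunks, runs an unmodified "engine" on each chunk in parallel, and merges the results; the fake search function stands in for an external engine such as X!Tandem, and the mzXML/pepXML file handling is deliberately elided.

    ```python
    # Data decomposition around an unmodified search engine: split, search
    # chunks in parallel, recompose. The "engine" here is a placeholder.
    from concurrent.futures import ProcessPoolExecutor

    def search_chunk(spectra):                   # stand-in for one engine run
        return [(s["scan"], max(s["peaks"])) for s in spectra]

    def decompose(spectra, n_chunks):
        k, m = divmod(len(spectra), n_chunks)    # balanced chunk sizes
        out, i = [], 0
        for c in range(n_chunks):
            j = i + k + (1 if c < m else 0)
            out.append(spectra[i:j])
            i = j
        return out

    def parallel_search(spectra, n_chunks=4):
        with ProcessPoolExecutor() as pool:
            parts = pool.map(search_chunk, decompose(spectra, n_chunks))
        return [hit for part in parts for hit in part]   # recompose

    if __name__ == "__main__":
        spectra = [{"scan": i, "peaks": [i % 7, i % 11]} for i in range(100)]
        print(len(parallel_search(spectra)))             # 100
    ```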

  6. Airbreathing Propulsion System Analysis Using Multithreaded Parallel Processing

    NASA Technical Reports Server (NTRS)

    Schunk, Richard Gregory; Chung, T. J.; Rodriguez, Pete (Technical Monitor)

    2000-01-01

    In this paper, parallel processing is used to analyze the mixing and combustion behavior of hypersonic flow. Preliminary work for a sonic transverse hydrogen jet injected from a slot into a Mach 4 airstream in a two-dimensional duct combustor has been completed [Moon and Chung, 1996]. Our aim is to extend this work to a three-dimensional domain using multithreaded domain-decomposition parallel processing based on the flowfield-dependent variation theory. Numerical simulations of chemically reacting flows are difficult because of the strong interactions between the turbulent hydrodynamic and chemical processes. The algorithm must provide an accurate representation of the flowfield, since unphysical flowfield calculations will lead to the faulty loss or creation of species mass fraction, or even premature ignition, which in turn alters the flowfield information. Another difficulty arises from the disparity in time scales between the flowfield and chemical reactions, which may require the use of finite-rate chemistry. The situation is more complex when there is a disparity in the length scales involved in turbulence. In order to cope with these complicated physical phenomena, it is our plan to utilize the flowfield-dependent variation theory mentioned above, facilitated by large eddy simulation. Undoubtedly, the proposed computation requires the most sophisticated computational strategies. Multithreaded domain-decomposition parallel processing will be necessary in order to reduce both computational time and storage. Without the special treatments involved in computer engineering, our attempt to analyze airbreathing combustion appears to be difficult, if not impossible.

  7. Enjoying Sad Music: Paradox or Parallel Processes?

    PubMed

    Schubert, Emery

    2016-01-01

    Enjoyment of negative emotions in music is seen by many as a paradox. This article argues that the paradox exists because it is difficult to view the process that generates enjoyment as being part of the same system that also generates the subjective negative feeling. Compensation theories explain the paradox as the compensation of a negative emotion by the concomitant presence of one or more positive emotions. But compensation brings us no closer to explaining the paradox because it does not explain how experiencing sadness itself is enjoyed. The solution proposed is that an emotion is determined by three critical processes, labeled motivational action tendency (MAT), subjective feeling (SF) and Appraisal. For many emotions the MAT and SF processes are coupled in valence. For example, happiness has positive MAT and positive SF, annoyance has negative MAT and negative SF. However, it is argued that in an aesthetic context, such as listening to music, emotion processes can become decoupled. The decoupling is controlled by the Appraisal process, which can assess if the context of the sadness is real-life (where coupling occurs) or aesthetic (where decoupling can occur). In an aesthetic context sadness retains its negative SF but the aversive, negative MAT is inhibited, leaving sadness to still be experienced as a negatively valenced emotion, while contributing to the overall positive MAT. Individual differences, mood and previous experiences mediate the degree to which the aversive aspects of MAT are inhibited according to this Parallel Processing Hypothesis (PPH). The reasons for hesitancy in considering or testing PPH, as well as the preponderance of research on sadness to the exclusion of other negative emotions, are discussed.

  8. Enjoying Sad Music: Paradox or Parallel Processes?

    PubMed Central

    Schubert, Emery

    2016-01-01

    Enjoyment of negative emotions in music is seen by many as a paradox. This article argues that the paradox exists because it is difficult to view the process that generates enjoyment as being part of the same system that also generates the subjective negative feeling. Compensation theories explain the paradox as the compensation of a negative emotion by the concomitant presence of one or more positive emotions. But compensation brings us no closer to explaining the paradox because it does not explain how experiencing sadness itself is enjoyed. The solution proposed is that an emotion is determined by three critical processes—labeled motivational action tendency (MAT), subjective feeling (SF) and Appraisal. For many emotions the MAT and SF processes are coupled in valence. For example, happiness has positive MAT and positive SF, annoyance has negative MAT and negative SF. However, it is argued that in an aesthetic context, such as listening to music, emotion processes can become decoupled. The decoupling is controlled by the Appraisal process, which can assess if the context of the sadness is real-life (where coupling occurs) or aesthetic (where decoupling can occur). In an aesthetic context sadness retains its negative SF but the aversive, negative MAT is inhibited, leaving sadness to still be experienced as a negatively valenced emotion, while contributing to the overall positive MAT. Individual differences, mood and previous experiences mediate the degree to which the aversive aspects of MAT are inhibited according to this Parallel Processing Hypothesis (PPH). The reasons for hesitancy in considering or testing PPH, as well as the preponderance of research on sadness to the exclusion of other negative emotions, are discussed. PMID:27445752

  9. Serial Order: A Parallel Distributed Processing Approach.

    ERIC Educational Resources Information Center

    Jordan, Michael I.

    Human behavior shows a variety of serially ordered action sequences. This paper presents a theory of serial order which describes how sequences of actions might be learned and performed. In this theory, parallel interactions across time (coarticulation) and parallel interactions across space (dual-task interference) are viewed as two aspects of a…

  10. Partitioning And Packing Equations For Parallel Processing

    NASA Technical Reports Server (NTRS)

    Arpasi, Dale J.; Milner, Edward J.

    1989-01-01

    Algorithm developed to identify parallelism in set of coupled ordinary differential equations that describe physical system and to divide set into parallel computational paths, along which parts of solution proceed independently of others during at least part of time. Path-identifying algorithm creates number of paths consisting of equations that must be computed serially and table that gives dependent and independent arguments and "can start," "can end," and "must end" times of each equation. "Must end" time used subsequently by packing algorithm.

  11. Parallel Processing with Digital Signal Processing Hardware and Software

    NASA Technical Reports Server (NTRS)

    Swenson, Cory V.

    1995-01-01

    The assembling and testing of a parallel processing system is described which will allow a user to move a Digital Signal Processing (DSP) application from the design stage to the execution/analysis stage through the use of several software tools and hardware devices. The system will be used to demonstrate the feasibility of the Algorithm To Architecture Mapping Model (ATAMM) dataflow paradigm for static multiprocessor solutions of DSP applications. The individual components comprising the system are described followed by the installation procedure, research topics, and initial program development.

  12. Dynamic Load Balancing Strategies for Parallel Reacting Flow Simulations

    NASA Astrophysics Data System (ADS)

    Pisciuneri, Patrick; Meneses, Esteban; Givi, Peyman

    2014-11-01

    Load balancing in parallel computing aims at distributing the work as evenly as possible among the processors. This is a critical issue in the performance of parallel, time-accurate flow simulators. The constraint of time accuracy requires that all processes finish their calculation for a given time step before any process can begin calculation of the next time step. An irregularly balanced compute load therefore results in idle time for many processes at each iteration, and thus in increased walltimes for calculations. Two existing, dynamic load balancing approaches are applied to the simplified case of a partially stirred reactor for methane combustion. The first is Zoltan, a parallel partitioning, load balancing, and data management library developed at the Sandia National Laboratories. The second is Charm++, a machine-independent parallel programming system developed at the University of Illinois at Urbana-Champaign. The performance of these two approaches is compared, and the prospects for their application to full 3D, reacting flow solvers are assessed.
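
    As a rough illustration of the synchronization cost described in this abstract, the following hypothetical sketch uses only standard MPI (it is not code from Zoltan or Charm++): each rank times its own work for a step and then waits at a barrier that stands in for the constraint that all processes finish step n before step n+1 begins. The usleep call is an invented stand-in for uneven per-rank work.

      /* Hypothetical sketch: measuring idle time caused by load imbalance
         in a barrier-synchronized, time-accurate solver. */
      #include <mpi.h>
      #include <stdio.h>
      #include <unistd.h>

      int main(int argc, char **argv)
      {
          MPI_Init(&argc, &argv);
          int rank;
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          for (int step = 0; step < 3; step++) {
              double t0 = MPI_Wtime();
              usleep(10000 * (rank + 1));   /* stand-in for uneven work per rank */
              double work = MPI_Wtime() - t0;

              double t1 = MPI_Wtime();
              MPI_Barrier(MPI_COMM_WORLD);  /* the time-accuracy constraint */
              double idle = MPI_Wtime() - t1;

              printf("step %d rank %d: work %.3f s, idle %.3f s\n",
                     step, rank, work, idle);
          }
          MPI_Finalize();
          return 0;
      }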

  13. Direct stereo radargrammetric processing using massively parallel processing

    NASA Astrophysics Data System (ADS)

    Balz, Timo; Zhang, Lu; Liao, Mingsheng

    2013-05-01

    Synthetic Aperture Radar (SAR) offers many ways to reconstruct digital surface models (DSMs). The two most commonly used methods are SAR interferometry (InSAR) and stereo radargrammetry. Stereo radargrammetry is a very stable and reliable process and is far less affected by temporal decorrelation compared with InSAR. It is therefore often used for DSM generation in heavily vegetated areas. However, stereo radargrammetry often produces rather noisy DSMs, sometimes containing large outliers. In this manuscript, we present a new approach for stereo radargrammetric processing, where the homologous points between the images are found by geocoding a large number of points. This offers a very flexible approach, allowing the simultaneous processing of multiple images and of cross-heading image pairs. Our approach relies on a good initial geocoding accuracy of the data and on very fast processing using a massively parallel implementation. The approach is demonstrated using TerraSAR-X images from Mount Song, China, and from Trento, Italy.

  14. Efficient parallel implementation of polarimetric synthetic aperture radar data processing

    NASA Astrophysics Data System (ADS)

    Martinez, Sergio S.; Marpu, Prashanth R.; Plaza, Antonio J.

    2014-10-01

    This work investigates the parallel implementation of the polarimetric synthetic aperture radar (POLSAR) data processing chain. Such processing can be computationally expensive when large data sets are processed. However, the processing steps can be largely implemented in a high performance computing (HPC) environment. In this work, we studied different aspects of the computations involved in processing the POLSAR data and developed an efficient parallel scheme to achieve near-real-time performance. The algorithm is implemented using the message passing interface (MPI) framework in this work, but it can be easily adapted for other parallel architectures such as general purpose graphics processing units (GPGPUs).

  15. "Let's Move" campaign: applying the extended parallel process model.

    PubMed

    Batchelder, Alicia; Matusitz, Jonathan

    2014-01-01

    This article examines Michelle Obama's health campaign, "Let's Move," through the lens of the extended parallel process model (EPPM). "Let's Move" aims to reduce the childhood obesity epidemic in the United States. Developed by Kim Witte, EPPM rests on the premise that people's attitudes can be changed when fear is exploited as a factor of persuasion. Fear appeals work best (a) when a person feels concern about the issue or situation, and (b) when he or she believes in his or her capability to deal with that issue or situation. Overall, the analysis found that "Let's Move" is based on past health campaigns that have been successful. An important element of the campaign is the use of fear appeals (as postulated by EPPM). For example, part of the campaign's strategy is to explain the severity of the diseases associated with obesity. By looking at the steps of EPPM, readers can also understand the strengths and weaknesses of "Let's Move."

  16. CRBLASTER: A Parallel-Processing Computational Framework for Embarrassingly-Parallel Image-Analysis Algorithms

    NASA Astrophysics Data System (ADS)

    Mighell, Kenneth John

    2011-11-01

    The development of parallel-processing image-analysis codes is generally a challenging task that requires complicated choreography of interprocessor communications. If, however, the image-analysis algorithm is embarrassingly parallel, then the development of a parallel-processing implementation of that algorithm can be a much easier task to accomplish because, by definition, there is little need for communication between the compute processes. I describe the design, implementation, and performance of a parallel-processing image-analysis application, called CRBLASTER, which does cosmic-ray rejection of CCD (charge-coupled device) images using the embarrassingly-parallel L.A.COSMIC algorithm. CRBLASTER is written in C using the high-performance computing industry standard Message Passing Interface (MPI) library. The code has been designed to be used by research scientists who are familiar with C as a parallel-processing computational framework that enables the easy development of parallel-processing image-analysis programs based on embarrassingly-parallel algorithms. The CRBLASTER source code is freely available at the official application website at the National Optical Astronomy Observatory. Removing cosmic rays from a single 800x800 pixel Hubble Space Telescope WFPC2 image takes 44 seconds with the IRAF script lacos_im.cl running on a single core of an Apple Mac Pro computer with two 2.8-GHz quad-core Intel Xeon processors. CRBLASTER is 7.4 times faster when processing the same image on a single core of the same machine. Processing the same image with CRBLASTER simultaneously on all 8 cores of the same machine takes 0.875 seconds -- a speedup factor of 50.3 over the IRAF script. A detailed analysis is presented of the performance of CRBLASTER using between 1 and 57 processors on a low-power Tilera 700-MHz 64-core TILE64 processor.

  17. Bipartite memory network architectures for parallel processing

    SciTech Connect

    Smith, W.; Kale, L.V. (Dept. of Computer Science)

    1990-01-01

    Parallel architectures are broadly classified as either shared memory or distributed memory architectures. In this paper, the authors propose a third family of architectures, called bipartite memory network architectures. In this architecture, processors and memory modules constitute a bipartite graph, where each processor is allowed to access a small subset of the memory modules, and each memory module allows access from a small set of processors. The architecture is particularly suitable for computations requiring dynamic load balancing. The authors explore the properties of this architecture by examining the Perfect Difference set based topology for the graph. Extensions of this topology are also suggested.

  18. Hypercluster - Parallel processing for computational mechanics

    NASA Technical Reports Server (NTRS)

    Blech, Richard A.

    1988-01-01

    An account is given of the development status, performance capabilities and implications for further development of NASA-Lewis' testbed 'hypercluster' parallel computer network, in which multiple processors communicate through a shared memory. Processors have local as well as shared memory; the hypercluster is expanded in the same manner as the hypercube, with processor clusters replacing the normal single processor node. The NASA-Lewis machine has three nodes with a vector personality and one node with a scalar personality. Each of the vector nodes uses four board-level vector processors, while the scalar node uses four general-purpose microcomputer boards.

  19. Holographic Routing Network For Parallel Processing Machines

    NASA Astrophysics Data System (ADS)

    Maniloff, Eric S.; Johnson, Kristina M.; Reif, John H.

    1989-10-01

    Dynamic holographic architectures for connecting processors in parallel computers have been generally limited by the response time of the holographic recording media. In this paper we present a different approach to dynamic optical interconnects involving spatial light modulators (SLMs) and volume holograms. Multiple-exposure holograms are stored in a volume recording media, which associate the address of a destination processor encoded on a spatial light modulator with a distinct reference beam. A destination address programmed on the spatial light modulator is then holographically steered to the correct destination processor. We present the design and experimental results of a holographic router for connecting four originator processors to four destination processors.

  20. Parafrase restructuring of FORTRAN code for parallel processing

    NASA Technical Reports Server (NTRS)

    Wadhwa, Atul

    1988-01-01

    Parafrase transforms a FORTRAN code, subroutine by subroutine, into a parallel code for a vector and/or shared-memory multiprocessor system. Parafrase is not a compiler; it transforms a code and provides information for a vector or concurrent process. Parafrase uses a data dependency to reveal parallelism among instructions. The data dependency test distinguishes between recurrences and statements that can be directly vectorized or parallelized. A number of transformations are required to build a data dependency graph.

  1. A high resolution finite volume method for efficient parallel simulation of casting processes on unstructured meshes

    SciTech Connect

    Kothe, D.B.; Turner, J.A.; Mosso, S.J.; Ferrell, R.C.

    1997-03-01

    We discuss selected aspects of a new parallel three-dimensional (3-D) computational tool for the unstructured mesh simulation of Los Alamos National Laboratory (LANL) casting processes. This tool, known as Telluride, draws upon robust, high resolution finite volume solutions of metal alloy mass, momentum, and enthalpy conservation equations to model the filling, cooling, and solidification of LANL castings. We briefly describe the current Telluride physical models and solution methods, then detail our parallelization strategy as implemented with Fortran 90 (F90). This strategy has yielded straightforward and efficient parallelization on distributed and shared memory architectures, aided in large part by the new parallel libraries JTpack90 for Krylov-subspace iterative solution methods and PGSLib for efficient gather/scatter operations. We illustrate our methodology and current capabilities with source code examples and parallel efficiency results for a LANL casting simulation.

  2. Parallel processing research in the former Soviet Union

    SciTech Connect

    Dongarra, J.J.; Snyder, L.; Wolcott, P.

    1992-03-01

    This technical assessment report examines strengths and weaknesses of parallel processing research and development in the Soviet Union from the 1980s to June 1991. The assessment was carried out by a panel of US scientists who are experts on parallel processing hardware, software, algorithms, and applications, and on Soviet computing. Soviet computer research and development organizations have pursued many of the major avenues of inquiry related to parallel processing that the West has chosen to explore. But the limited size and substantial breadth of their effort have limited the collective depth of Soviet activity. Even more serious limitations (and delays) of Soviet achievement in parallel processing research can be traced to shortcomings of the Soviet computer industry, which was unable to supply adequate, reliable computer components. Without the ability to build, demonstrate, and test embodiments of their ideas in actual high-performance parallel hardware, both the scope of activity and the success of Soviet parallel processing researchers were severely limited. The quality of the Soviet parallel processing research assessed varied from very sound and interesting to pedestrian, with most of the groups at the major hardware and software centers to which the work is largely confined doing good (or at least serious) research. In a few instances, interesting and competent parallel language development work was found at institutions not associated with hardware development efforts. Unlike Soviet mainframe and minicomputer developers, Soviet parallel processing researchers have not concentrated their efforts on reverse-engineering specific Western systems. No evidence was found of successful Soviet attempts to use breakthroughs in parallel processing technology to "leapfrog" impediments and limitations that Soviet industrial weakness in microelectronics and other computer manufacturing areas impose on the performance of high-end Soviet computers.

  4. Strategy Process in Higher Education

    ERIC Educational Resources Information Center

    Kettunen, Juha

    2010-01-01

    Higher education institutions educate those who are the most talented and best able to secure the future for the next generation. This study examines an efficient strategy process in higher education and emphasises the importance of sufficient dialogue during the process. The study describes the strategy process of the Turku University of Applied…

  5. Parallel firing strategy on Petri nets: A review

    NASA Astrophysics Data System (ADS)

    Mavlankulov, Gairatzhan; Turaev, Sherzod; Zhumabaeva, Laula; Zhukabayeva, Tamara

    2015-05-01

    In this paper we review recent results on Petri net controlled grammars and closely related topics. Though the theme of regulated grammars is one of the classic topics in formal language theory, a Petri net controlled grammar is still an interesting subject of investigation for many reasons. This type of grammar can successfully be used in modeling new problems emerging in manufacturing systems, systems biology and other areas. Moreover, its graphical illustrability, the ability to represent both a grammar and its control in one structure, and the possibility of unifying different regulated rewritings make this formalization attractive for study. We also summarize the obtained results and propose a new concept: a parallel firing strategy on Petri nets.

  6. [CMACPAR: a modified parallel neuro-controller for control processes].

    PubMed

    Ramos, E; Surós, R

    1999-01-01

    CMACPAR is a parallel neurocontroller oriented to real-time systems such as control processes. Its main characteristics are a fast learning algorithm, a reduced number of calculations, great generalization capacity, local learning and intrinsic parallelism. This type of neurocontroller is used in real-time applications required by refineries, hydroelectric plants, factories, etc. In this work we present the analysis and the parallel implementation of a modified scheme of the Cerebellar Model CMAC for the n-dimensional space projection using a medium-granularity parallel neurocontroller. The proposed memory management allows for a significant reduction in training time and required memory size.

  7. Use of parallel computing in mass processing of laser data

    NASA Astrophysics Data System (ADS)

    Będkowski, J.; Bratuś, R.; Prochaska, M.; Rzonca, A.

    2015-12-01

    The first part of the paper includes a description of the rules used to generate the algorithm needed for the purpose of parallel computing and also discusses the origins of the idea of research on the use of graphics processors in large scale processing of laser scanning data. The next part of the paper includes the results of an efficiency assessment performed for an array of different processing options, all of which were substantially accelerated with parallel computing. The processing options were divided into the generation of orthophotos using point clouds, coloring of point clouds, transformations, and the generation of a regular grid, as well as advanced processes such as the detection of planes and edges, point cloud classification, and the analysis of data for the purpose of quality control. Most algorithms had to be formulated from scratch in the context of the requirements of parallel computing. A few of the algorithms were based on existing technology developed by the Dephos Software Company and then adapted to parallel computing in the course of this research study. Processing time was determined for each process employed for a typical quantity of data processed, which helped confirm the high efficiency of the solutions proposed and the applicability of parallel computing to the processing of laser scanning data. The high efficiency of parallel computing yields new opportunities in the creation and organization of processing methods for laser scanning data.

  8. The effects of parallel processing architectures on discrete event simulation

    NASA Astrophysics Data System (ADS)

    Cave, William; Slatt, Edward; Wassmer, Robert E.

    2005-05-01

    As systems become more complex, particularly those containing embedded decision algorithms, mathematical modeling presents a rigid framework that often impedes representation to a sufficient level of detail. Using discrete event simulation, one can build models that more closely represent physical reality, with actual algorithms incorporated in the simulations. Higher levels of detail increase simulation run time. Hardware designers have succeeded in producing parallel and distributed processor computers with theoretical speeds well into the teraflop range. However, the practical use of these machines on all but some very special problems is extremely limited. The inability to use this power is due to great difficulties encountered when trying to translate real world problems into software that makes effective use of highly parallel machines. This paper addresses the application of parallel processing to simulations of real world systems of varying inherent parallelism. It provides a brief background in modeling and simulation validity and describes a parameter that can be used in discrete event simulation to vary opportunities for parallel processing at the expense of absolute time synchronization and is constrained by validity. It focuses on the effects of model architecture, run-time software architecture, and parallel processor architecture on speed, while providing an environment where modelers can achieve sufficient model accuracy to produce valid simulation results. It describes an approach to simulation development that captures subject area expert knowledge to leverage inherent parallelism in systems in the following ways: * Data structures are separated from instructions to track which instruction sets share what data. This is used to determine independence and thus the potential for concurrent processing at run-time. * Model connectivity (independence) can be inspected visually to determine if the inherent parallelism of a physical system is properly

  9. CRBLASTER: A Parallel-Processing Computational Framework for Embarrassingly Parallel Image-Analysis Algorithms

    NASA Astrophysics Data System (ADS)

    Mighell, Kenneth John

    2010-10-01

    The development of parallel-processing image-analysis codes is generally a challenging task that requires complicated choreography of interprocessor communications. If, however, the image-analysis algorithm is embarrassingly parallel, then the development of a parallel-processing implementation of that algorithm can be a much easier task to accomplish because, by definition, there is little need for communication between the compute processes. I describe the design, implementation, and performance of a parallel-processing image-analysis application, called crblaster, which does cosmic-ray rejection of CCD images using the embarrassingly parallel l.a.cosmic algorithm. crblaster is written in C using the high-performance computing industry standard Message Passing Interface (MPI) library. crblaster uses a two-dimensional image partitioning algorithm that partitions an input image into N rectangular subimages of nearly equal area; the subimages include sufficient additional pixels along common image partition edges such that the need for communication between computer processes is eliminated. The code has been designed to be used by research scientists who are familiar with C as a parallel-processing computational framework that enables the easy development of parallel-processing image-analysis programs based on embarrassingly parallel algorithms. The crblaster source code is freely available at the official application Web site at the National Optical Astronomy Observatory. Removing cosmic rays from a single 800 × 800 pixel Hubble Space Telescope WFPC2 image takes 44 s with the IRAF script lacos_im.cl running on a single core of an Apple Mac Pro computer with two 2.8 GHz quad-core Intel Xeon processors. crblaster is 7.4 times faster when processing the same image on a single core on the same machine. Processing the same image with crblaster simultaneously on all eight cores of the same machine takes 0.875 s—which is a speedup factor of 50.3 times faster than the
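
    The partitioning step in this description reduces to a few lines of index arithmetic. The following is a hypothetical sketch (not the crblaster source): each process receives its interior rectangle padded by a guard border of b pixels, clamped at the image edges, so the per-tile algorithm never needs pixels owned by another process.

      /* Hypothetical sketch of an overlapped 2-D tiling for an
         embarrassingly parallel image-analysis code. */
      typedef struct { int x0, y0, x1, y1; } Rect;  /* half-open [x0,x1) x [y0,y1) */

      static Rect tile_with_halo(int W, int H,      /* image size             */
                                 int px, int py,    /* process grid           */
                                 int tx, int ty,    /* this tile's grid index */
                                 int b)             /* guard border in pixels */
      {
          Rect r;
          r.x0 = (tx * W) / px - b;        /* interior bounds, padded by b */
          r.x1 = ((tx + 1) * W) / px + b;
          r.y0 = (ty * H) / py - b;
          r.y1 = ((ty + 1) * H) / py + b;
          if (r.x0 < 0) r.x0 = 0;          /* clamp the halo at image edges */
          if (r.y0 < 0) r.y0 = 0;
          if (r.x1 > W) r.x1 = W;
          if (r.y1 > H) r.y1 = H;
          return r;
      }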

  10. Parallel Processing and Sentence Comprehension Difficulty

    ERIC Educational Resources Information Center

    Boston, Marisa Ferrara; Hale, John T.; Vasishth, Shravan; Kliegl, Reinhold

    2011-01-01

    Eye fixation durations during normal reading correlate with processing difficulty, but the specific cognitive mechanisms reflected in these measures are not well understood. This study finds support in German readers' eye fixations for two distinct difficulty metrics: surprisal, which reflects the change in probabilities across syntactic analyses…

  11. Image Processing Using a Parallel Architecture.

    DTIC Science & Technology

    1987-12-01

    Computer," Byte, 3: 14-25 (December 1978). McGraw-Hill, 1985 24. Trussell, H. Joel . "Processing of X-ray Images," Proceedings of the IEEE, 69: 615-627...Services Electronics Program contract N00014-79-C-0424 (AD-085-846). 107 Therrien , Charles W. et al. "A Multiprocessor System for Simulation of

  12. Parallel Signal Processing and System Simulation using aCe

    NASA Technical Reports Server (NTRS)

    Dorband, John E.; Aburdene, Maurice F.

    2003-01-01

    Recently, networked and cluster computation have become very popular for both signal processing and system simulation. The new language aCe is ideally suited for parallel signal processing applications and system simulation since it allows the programmer to explicitly express the computations that can be performed concurrently. In addition, this new C-based parallel language for architecture-adaptive programming allows programmers to implement algorithms and system simulation applications on parallel architectures with the assurance that future parallel architectures will be able to run their applications with a minimum of modification. In this paper, we focus on some fundamental features of aCe and present a signal processing application (FFT).

  13. Repartitioning Strategies for Massively Parallel Simulation of Reacting Flow

    NASA Astrophysics Data System (ADS)

    Pisciuneri, Patrick; Zheng, Angen; Givi, Peyman; Labrinidis, Alexandros; Chrysanthis, Panos

    2015-11-01

    The majority of parallel CFD simulators partition the domain into equal regions and assign the calculations for a particular region to a unique processor. This type of domain decomposition is vital to the efficiency of the solver. However, as the simulation develops, the workload among the partitions often becomes uneven (e.g., through adaptive mesh refinement or chemically reacting regions), and a new partition should be considered. The process of repartitioning adjusts the current partition to evenly distribute the load again. We compare two repartitioning tools: Zoltan, an architecture-agnostic graph repartitioner developed at the Sandia National Laboratories; and Paragon, an architecture-aware graph repartitioner developed at the University of Pittsburgh. The comparative assessment is conducted via simulation of the Taylor-Green vortex flow with chemical reaction.

  14. Sentence Comprehension: A Parallel Distributed Processing Approach

    DTIC Science & Technology

    1989-07-14

    to suggest that the model we have presented here is a tabula rasa, acquiring knowledge of language without any prior structure. Indeed, the input is... What is constructed mentally when we comprehend a sentence? How does this constructive process occur? What role ...expression is a function of the semantic content of each of the parts of the expression and of the organization of the constituents. What role do

  15. Software Engineering Challenges for Parallel Processing Systems

    DTIC Science & Technology

    2008-05-02

    [Briefing-slide fragments: worked examples of data races and deadlock, including the error caused by the Therac-25 radiation therapy machine that resulted in 5 deaths, and execution-time charts for a Jacobi solver, sequential versus OpenMP, on 1 to 16 processors.]

  16. High power parallel ultrashort pulse laser processing

    NASA Astrophysics Data System (ADS)

    Gillner, Arnold; Gretzki, Patrick; Büsing, Lasse

    2016-03-01

    The class of ultra-short-pulse (USP) laser sources is used whenever high-precision, high-quality material processing is demanded. These laser sources deliver pulse durations in the range of ps to fs and are characterized by high peak intensities, leading to direct vaporization of the material with minimal thermal damage. With the availability of industrial laser sources with an average power of up to 1000 W, the main challenge consists of effective energy distribution and disposition. Using lasers with high repetition rates in the MHz region can cause thermal issues such as overheating, melt production and low ablation quality. In this paper, we discuss different approaches to multibeam processing for the utilization of high pulse energies. The combination of diffractive optics and conventional galvanometer scanners can be used for high-throughput laser ablation, but is limited in optical quality. We show which applications can benefit from this hybrid optic and which improvements in productivity are expected. In addition, the optical limitations of the system are compiled, in order to evaluate the suitability of this approach for any given application.

  17. Processing data communications events by awakening threads in parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2016-03-15

    Processing data communications events in a parallel active messaging interface ('PAMI') of a parallel computer that includes compute nodes that execute a parallel application, with the PAMI including data communications endpoints, and the endpoints are coupled for data communications through the PAMI and through other data communications resources, including determining by an advance function that there are no actionable data communications events pending for its context, placing by the advance function its thread of execution into a wait state, waiting for a subsequent data communications event for the context; responsive to occurrence of a subsequent data communications event for the context, awakening by the thread from the wait state; and processing by the advance function the subsequent data communications event now pending for the context.
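
    The wait/awaken pattern in this claim can be sketched with a POSIX condition variable. This is a hypothetical software analogue, not the patented PAMI implementation; advance and post_event are invented names.

      /* Hypothetical sketch: an "advance" thread sleeps until another
         thread posts a data communications event. */
      #include <pthread.h>

      static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
      static pthread_cond_t  event_posted = PTHREAD_COND_INITIALIZER;
      static int pending_events = 0;

      void advance(void)                 /* consumer: the advance function   */
      {
          pthread_mutex_lock(&lock);
          while (pending_events == 0)    /* no actionable events: wait state */
              pthread_cond_wait(&event_posted, &lock);
          pending_events--;              /* awakened: process one event      */
          pthread_mutex_unlock(&lock);
      }

      void post_event(void)              /* producer: a communications event */
      {
          pthread_mutex_lock(&lock);
          pending_events++;
          pthread_cond_signal(&event_posted);   /* awaken the waiting thread */
          pthread_mutex_unlock(&lock);
      }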

  18. Fault Tolerant Statistical Signal Processing Algorithms for Parallel Architectures.

    DTIC Science & Technology

    2014-09-26

    [OCR fragments of the DTIC report documentation page: Johns Hopkins Univ., Baltimore, MD, Dept. of Electrical Engineering; keywords: fault tolerance, signal processing, parallel architecture.]

  19. FPGA-Based Filterbank Implementation for Parallel Digital Signal Processing

    NASA Technical Reports Server (NTRS)

    Berner, Stephan; DeLeon, Phillip

    1999-01-01

    One approach to parallel digital signal processing decomposes a high bandwidth signal into multiple lower bandwidth (rate) signals by an analysis bank. After processing, the subband signals are recombined into a fullband output signal by a synthesis bank. This paper describes an implementation of the analysis and synthesis banks using field-programmable gate arrays (FPGAs).
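
    As a software illustration of the analysis/synthesis structure (the paper's banks run on FPGAs; the two-channel Haar bank assumed here is the most elementary analogue and reconstructs the input exactly):

      /* Hypothetical sketch: 2-channel Haar analysis/synthesis bank.
         n must be even; lo and hi each hold n/2 half-rate samples. */
      void analysis(const double *x, double *lo, double *hi, int n)
      {
          for (int k = 0; k < n / 2; k++) {
              lo[k] = 0.5 * (x[2*k] + x[2*k + 1]);   /* low band  */
              hi[k] = 0.5 * (x[2*k] - x[2*k + 1]);   /* high band */
          }
      }

      void synthesis(const double *lo, const double *hi, double *x, int n)
      {
          for (int k = 0; k < n / 2; k++) {          /* recombine to full rate */
              x[2*k]     = lo[k] + hi[k];
              x[2*k + 1] = lo[k] - hi[k];
          }
      }

    The subband signals lo and hi can be processed independently, which is what makes the decomposition attractive for parallel hardware.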

  20. The science of computing - The evolution of parallel processing

    NASA Technical Reports Server (NTRS)

    Denning, P. J.

    1985-01-01

    The present paper is concerned with the approaches to be employed to overcome the set of limitations in software technology that currently impedes effective use of parallel hardware technology. The process required to solve the arising problems is found to involve four different stages. At the present time, Stage One is nearly finished, while Stage Two is under way. Tentative explorations are beginning on Stage Three, and Stage Four is more distant. In Stage One, parallelism is introduced into the hardware of a single computer, which consists of one or more processors, a main storage system, a secondary storage system, and various peripheral devices. In Stage Two, parallel execution of cooperating programs on different machines becomes explicit, while in Stage Three, new languages will make parallelism implicit. In Stage Four, there will be very high level user interfaces capable of interacting with scientists at the same level of abstraction as scientists do with each other.

  1. Mapping Pixel Windows To Vectors For Parallel Processing

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.

    1996-01-01

    Mapping performed by matrices of transistor switches. Arrays of transistor switches devised for use in forming simultaneous connections from square subarray (window) of n x n pixels within electronic imaging device containing np x np array of pixels to linear array of n^2 input terminals of electronic neural network or other parallel-processing circuit. Method helps to realize potential for rapidity in parallel processing for such applications as enhancement of images and recognition of patterns. In providing simultaneous connections, overcomes timing bottleneck of older multiplexing, serial-switching, and sample-and-hold methods.
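
    A software analogue of this switch-matrix mapping, as a hypothetical sketch: copy the n x n window whose top-left pixel is (row, col) out of an np x np image into a length n^2 vector in row-major order.

      /* Hypothetical sketch: map an n x n pixel window to an n^2 vector,
         the operation the transistor-switch arrays perform in one step. */
      void window_to_vector(const float *image, int np,
                            int row, int col, int n, float *vec)
      {
          for (int i = 0; i < n; i++)
              for (int j = 0; j < n; j++)
                  vec[i * n + j] = image[(row + i) * np + (col + j)];
      }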

  2. Massively parallel processing of remotely sensed hyperspectral images

    NASA Astrophysics Data System (ADS)

    Plaza, Javier; Plaza, Antonio; Valencia, David; Paz, Abel

    2009-08-01

    In this paper, we develop several parallel techniques for hyperspectral image processing that have been specifically designed to be run on massively parallel systems. The techniques developed cover the three relevant areas of hyperspectral image processing: 1) spectral mixture analysis, a popular approach to characterize mixed pixels in hyperspectral data addressed in this work via efficient implementation of a morphological algorithm for automatic identification of pure spectral signatures or endmembers from the input data; 2) supervised classification of hyperspectral data using multi-layer perceptron neural networks with back-propagation learning; and 3) automatic target detection in the hyperspectral data using orthogonal subspace projection concepts. The scalability of the proposed parallel techniques is investigated using Barcelona Supercomputing Center's MareNostrum facility, one of the most powerful supercomputers in Europe.

  3. Parallel processing of atmospheric chemistry calculations: Preliminary considerations

    SciTech Connect

    Elliott, S.; Jones, P.

    1995-01-01

    Global climate calculations are already saturating the class of modern vector supercomputers with only a few central processing units. Increased resolution and inclusion of routines to deal with biogeochemical portions of the terrestrial climate system will soon demand massively parallel approaches. The atmospheric photochemistry ensemble is intimately linked to climate through the trace greenhouse gases ozone and methane, and modules for representing it are being attached to global three dimensional transport and GCM frameworks. Atmospheric kinetics involve dozens of highly interactive tracers and so will accentuate the need for parallel processing of earth system simulations. In the present text we lay some of the groundwork for addition of atmospheric kinetics packages to GCM and global scale atmospheric models on multiply parallel computers. The discussion is tailored for consumption by the photochemical modelling community. After a review of numerical atmospheric chemistry methods, we examine how kinetics can be implemented on a parallel computer. We concentrate especially on data layout and flexibility and how these can be implemented in various programming models. We conclude that chemistry can be implemented rather easily within existing frameworks of several parallel atmospheric models. However, memory limitations may preclude high resolution studies of global chemistry.

  4. Parallel Processing of Objects in a Naming Task

    ERIC Educational Resources Information Center

    Meyer, Antje S.; Ouellet, Marc; Hacker, Christine

    2008-01-01

    The authors investigated whether speakers who named several objects processed them sequentially or in parallel. Speakers named object triplets, arranged in a triangle, in the order left, right, and bottom object. The left object was easy or difficult to identify and name. During the saccade from the left to the right object, the right object shown…

  5. Postscript: Parallel Distributed Processing in Localist Models without Thresholds

    ERIC Educational Resources Information Center

    Plaut, David C.; McClelland, James L.

    2010-01-01

    The current authors reply to a response by Bowers on a comment by the current authors on the original article. Bowers (2010) mischaracterizes the goals of parallel distributed processing (PDP) research--explaining performance on cognitive tasks is the primary motivation. More important, his claim that localist models, such as the interactive…

  6. The Extended Parallel Process Model: Illuminating the Gaps in Research

    ERIC Educational Resources Information Center

    Popova, Lucy

    2012-01-01

    This article examines constructs, propositions, and assumptions of the extended parallel process model (EPPM). Review of the EPPM literature reveals that its theoretical concepts are thoroughly developed, but the theory lacks consistency in operational definitions of some of its constructs. Out of the 12 propositions of the EPPM, a few have not…

  7. Using Motivational Interviewing Techniques to Address Parallel Process in Supervision

    ERIC Educational Resources Information Center

    Giordano, Amanda; Clarke, Philip; Borders, L. DiAnne

    2013-01-01

    Supervision offers a distinct opportunity to experience the interconnection of counselor-client and counselor-supervisor interactions. One product of this network of interactions is parallel process, a phenomenon by which counselors unconsciously identify with their clients and subsequently present to their supervisors in a similar fashion…

  8. A novel parallel architecture for real-time image processing

    NASA Astrophysics Data System (ADS)

    Hu, Junhong; Zhang, Tianxu; Zhong, Sheng; Chen, Xujun

    2009-10-01

    A novel DSP/FPGA-based parallel architecture for real-time image processing is presented in this paper. DSPs are the main processing units, and FPGAs serve as logic units for the image interface protocol, image processing, image display, the synchronization communication protocol of the DSPs, and the DSPs' 422/485 reprogramming interface. The presented architecture is composed of two modules, the preprocessing module and the processing module, and the latter is extendable for better performance. Modules are connected by LINK communication ports, whose LVDS protocol has anti-jamming capability, and DSP programs can be updated easily over 422/485 from a PC's serial port. Analysis and experimental results show that the prototype with the proposed parallel architecture has many promising characteristics, such as powerful computing capability and broad data transfer bandwidth, and is easy to extend and update.

  9. Overview of parallel processing approaches to image and video compression

    NASA Astrophysics Data System (ADS)

    Shen, Ke; Cook, Gregory W.; Jamieson, Leah H.; Delp, Edward J., III

    1994-05-01

    In this paper we present an overview of techniques used to implement various image and video compression algorithms using parallel processing. Approaches used can largely be divided into four areas. The first is the use of special purpose architectures designed specifically for image and video compression. An example of this is the use of an array of DSP chips to implement a version of MPEG1. The second approach is the use of VLSI techniques. These include various chip sets for JPEG and MPEG1. The third approach is algorithm driven, in which the structure of the compression algorithm describes the architecture, e.g. pyramid algorithms. The fourth approach is the implementation of algorithms on high performance parallel computers. Examples of this approach are the use of a massively parallel computer such as the MasPar MP-1 or the use of a coarse-grained machine such as the Intel Touchstone Delta.

  10. Parallel, semiparallel, and serial processing of visual hyperacuity

    NASA Astrophysics Data System (ADS)

    Fahle, Manfred W.

    1990-10-01

    Humans can discriminate between certain elementary stimulus features in parallel, i.e., simultaneously over the visual field. I present evidence that, in man, vernier misalignments in the hyperacuity range, i.e., below the photoreceptor diameter, can also be detected in parallel. This indicates that the visual system performs some form of spatial interpolation beyond the photoreceptor spacing simultaneously over the visual field. Vernier offsets are detected in parallel even when orientation cues are masked: deviation from straightness is an elementary feature of visual perception. However, the identification process, which classifies each vernier in a stimulus as being offset to the right (versus to the left), is serial and has to scan the visual field sequentially if orientation cues are masked. Therefore, reaction times and thresholds in vernier acuity tasks increase with the number of verniers presented simultaneously if classification of different features is required. Furthermore, when approaching vernier threshold, simple vernier detection is no longer parallel but becomes partially serial, or semi-parallel.

  11. Parallel-Processing Software for Correlating Stereo Images

    NASA Technical Reports Server (NTRS)

    Klimeck, Gerhard; Deen, Robert; Mcauley, Michael; DeJong, Eric

    2007-01-01

    A computer program implements parallel-processing algorithms for correlating images of terrain acquired by stereoscopic pairs of digital stereo cameras on an exploratory robotic vehicle (e.g., a Mars rover). Such correlations are used to create three-dimensional computational models of the terrain for navigation. In this program, the scene viewed by the cameras is segmented into subimages. Each subimage is assigned to one of a number of central processing units (CPUs) operating simultaneously.

  12. Parallel Processing of Broad-Band PPM Signals

    NASA Technical Reports Server (NTRS)

    Gray, Andrew; Kang, Edward; Lay, Norman; Vilnrotter, Victor; Srinivasan, Meera; Lee, Clement

    2010-01-01

    A parallel-processing algorithm and a hardware architecture to implement the algorithm have been devised for timeslot synchronization in the reception of pulse-position-modulated (PPM) optical or radio signals. As in the cases of some prior algorithms and architectures for parallel, discrete-time, digital processing of signals other than PPM, an incoming broadband signal is divided into multiple parallel narrower-band signals by means of sub-sampling and filtering. The number of parallel streams is chosen so that the frequency content of the narrower-band signals is low enough to enable processing by relatively-low speed complementary metal oxide semiconductor (CMOS) electronic circuitry. The algorithm and architecture are intended to satisfy requirements for time-varying time-slot synchronization and post-detection filtering, with correction of timing errors independent of estimation of timing errors. They are also intended to afford flexibility for dynamic reconfiguration and upgrading. The architecture is implemented in a reconfigurable CMOS processor in the form of a field-programmable gate array. The algorithm and its hardware implementation incorporate three separate time-varying filter banks for three distinct functions: correction of sub-sample timing errors, post-detection filtering, and post-detection estimation of timing errors. The design of the filter bank for correction of timing errors, the method of estimating timing errors, and the design of a feedback-loop filter are governed by a host of parameters, the most critical one, with regard to processing very broadband signals with CMOS hardware, being the number of parallel streams (equivalently, the rate-reduction parameter).
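
    The rate-reduction step described above is easy to illustrate. A hypothetical sketch, omitting the filtering and the three time-varying filter banks: the broadband sample stream is dealt out to M parallel streams, each at 1/M of the input rate, so that each stream can be handled by lower-speed CMOS logic.

      /* Hypothetical sketch: deal n broadband samples out to M parallel
         sub-sampled streams; stream m receives samples m, m+M, m+2M, ...
         (each streams[m] must hold at least ceil(n/M) entries). */
      void demux(const float *x, int n, float **streams, int M)
      {
          for (int k = 0; k < n; k++)
              streams[k % M][k / M] = x[k];
      }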

  13. Digital intermediate frequency QAM modulator using parallel processing

    DOEpatents

    Pao, Hsueh-Yuan; Tran, Binh-Nien

    2008-05-27

    The digital Intermediate Frequency (IF) modulator applies to various modulation types and offers a simple and low cost method to implement a high-speed digital IF modulator using field programmable gate arrays (FPGAs). The architecture eliminates multipliers and sequential processing by storing the pre-computed modulated cosine and sine carriers in ROM look-up-tables (LUTs). The high-speed input data stream is parallel processed using the corresponding LUTs, which reduces the main processing speed, allowing the use of low cost FPGAs.
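
    A hypothetical software sketch of the LUT principle (not the patented FPGA design; P, the QPSK symbol map, and the function names are assumptions): the fully modulated sample I*cos - Q*sin is precomputed for each symbol and each carrier phase, so run-time modulation reduces to table lookups with no multipliers.

      /* Hypothetical sketch: LUT-based IF modulator for QPSK. */
      #include <math.h>

      #define P 16                          /* carrier samples per period (assumed) */
      #define TWO_PI 6.28318530717958647692

      static double lut[4][P];
      static const double I_of[4] = { 1, -1,  1, -1 };   /* assumed symbol map */
      static const double Q_of[4] = { 1,  1, -1, -1 };

      void build_lut(void)                  /* done once, offline ("ROM") */
      {
          for (int s = 0; s < 4; s++)
              for (int p = 0; p < P; p++)
                  lut[s][p] = I_of[s] * cos(TWO_PI * p / P)
                            - Q_of[s] * sin(TWO_PI * p / P);
      }

      /* Emit nsamp IF samples for one symbol, advancing the phase index. */
      void modulate(int sym, int nsamp, int *phase, double *out)
      {
          for (int k = 0; k < nsamp; k++) {
              out[k] = lut[sym][*phase];    /* pure lookup, no run-time multiply */
              *phase = (*phase + 1) % P;
          }
      }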

  14. A dataflow analysis tool for parallel processing of algorithms

    NASA Technical Reports Server (NTRS)

    Jones, Robert L., III

    1993-01-01

    A graph-theoretic design process and software tool is presented for selecting a multiprocessing scheduling solution for a class of computational problems. The problems of interest are those that can be described using a dataflow graph and are intended to be executed repetitively on a set of identical parallel processors. Typical applications include signal processing and control law problems. Graph analysis techniques are introduced and shown to effectively determine performance bounds, scheduling constraints, and resource requirements. The software tool is shown to facilitate the application of the design process to a given problem.

  15. Transactional memories: A new abstraction for parallel processing

    SciTech Connect

    Fasel, J.H.; Lubeck, O.M.; Agrawal, D.; Bruno, J.L.; El Abbadi, A.

    1997-12-01

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). Current distributed memory multiprocessor computer systems make the development of parallel programs difficult. From a programmer's perspective, it would be most desirable if the underlying hardware and software could provide the programming abstraction commonly referred to as sequential consistency--a single address space and multiple threads; but enforcement of sequential consistency limits opportunities for architectural and operating system performance optimizations, leading to poor performance. Recently, Herlihy and Moss have introduced a new abstraction called transactional memories for parallel programming. The programming model is shared memory with multiple threads. However, data consistency is obtained through the use of transactions rather than mutual exclusion based on locking. The transaction approach permits the underlying system to exploit the potential parallelism in transaction processing. The authors explore the feasibility of designing parallel programs using the transaction paradigm for data consistency and a barrier type of thread synchronization.
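
    The commit-or-retry semantics can be illustrated on a single word with C11 atomics. This is a software analogue for illustration only; the transactional memory of Herlihy and Moss is a hardware mechanism and is not reproduced here.

      /* Hypothetical sketch: read, compute, and commit only if no other
         thread intervened, instead of taking a lock. */
      #include <stdatomic.h>

      atomic_long account = 100;

      void deposit(long amount)
      {
          long seen, wanted;
          do {
              seen   = atomic_load(&account);   /* transaction: read set  */
              wanted = seen + amount;           /* tentative computation  */
              /* Commit succeeds only if the value is still what we read;
                 otherwise the "transaction" aborts and retries. */
          } while (!atomic_compare_exchange_weak(&account, &seen, wanted));
      }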

  16. Parallel tools in HEVC for high-throughput processing

    NASA Astrophysics Data System (ADS)

    Zhou, Minhua; Sze, Vivienne; Budagavi, Madhukar

    2012-10-01

    HEVC (High Efficiency Video Coding) is the next-generation video coding standard being jointly developed by the ITU-T VCEG and ISO/IEC MPEG JCT-VC team. In addition to the high coding efficiency, which is expected to provide 50% more bit-rate reduction when compared to H.264/AVC, HEVC has built-in parallel processing tools to address bitrate, pixel-rate and motion estimation (ME) throughput requirements. This paper describes how CABAC, which is also used in H.264/AVC, has been redesigned for improved throughput, and how parallel merge/skip and tiles, which are new tools introduced for HEVC, enable high-throughput processing. CABAC has data dependencies which make it difficult to parallelize and thus limit its throughput. The prediction error/residual, represented as quantized transform coefficients, accounts for the majority of the CABAC workload. Various improvements have been made to the context selection and scans in transform coefficient coding that enable CABAC in HEVC to potentially achieve higher throughput and increased coding gains relative to H.264/AVC. The merge/skip mode is a coding efficiency enhancement tool in HEVC; the parallel merge/skip breaks dependency between the regular and merge/skip ME, which provides flexibility for high throughput and high efficiency HEVC encoder designs. For ultra high definition (UHD) video, such as 4kx2k and 8kx4k resolutions, low-latency and real-time processing may be beyond the capability of a single core codec. Tiles are an effective tool which enables pixel-rate balancing among the cores to achieve parallel processing with a throughput scalable implementation of multi-core UHD video codec. With the evenly divided tiles, a multi-core video codec can be realized by simply replicating single core codec and adding a tile boundary processing core on top of that. These tools illustrate that accounting for implementation cost when designing video coding algorithms can enable higher processing speed and reduce

  17. Dual-thread parallel control strategy for ophthalmic adaptive optics.

    PubMed

    Yu, Yongxin; Zhang, Yuhua

    To improve ophthalmic adaptive optics speed and compensate for ocular wavefront aberration of high temporal frequency, the adaptive optics wavefront correction has been implemented with a control scheme comprising two parallel threads: one dedicated to wavefront detection, the other conducting wavefront reconstruction and compensation. With a custom Shack-Hartmann wavefront sensor that measures the ocular wave aberration with 193 subapertures across the pupil, the adaptive optics has achieved a closed-loop updating frequency of up to 110 Hz and demonstrated robust compensation for ocular wave aberration up to 50 Hz in an adaptive optics scanning laser ophthalmoscope.

  18. Parallel-Processing Equalizers for Multi-Gbps Communications

    NASA Technical Reports Server (NTRS)

    Gray, Andrew; Ghuman, Parminder; Hoy, Scott; Satorius, Edgar H.

    2004-01-01

    Architectures have been proposed for the design of frequency-domain least-mean-square complex equalizers that would be integral parts of parallel-processing digital receivers of multi-gigahertz radio signals and other quadrature-phase-shift-keying (QPSK) or 16-quadrature-amplitude-modulation (16-QAM) data signals at rates of multiple gigabits per second. Equalizers, as used here, denote receiver subsystems that compensate for distortions in the phase and frequency responses of the broad-band radio-frequency channels typically used to convey such signals. The proposed architectures are suitable for realization in very-large-scale integrated (VLSI) circuitry and, in particular, complementary metal oxide semiconductor (CMOS) application-specific integrated circuits (ASICs) operating at frequencies lower than modulation symbol rates. A digital receiver of the type to which the proposed architecture applies (see Figure 1) would include an analog-to-digital converter (A/D) operating at a rate, fs, of 4 samples per symbol period. To obtain the high speed necessary for sampling, the A/D and a 1:16 demultiplexer immediately following it would be constructed as GaAs integrated circuits. The parallel-processing circuitry downstream of the demultiplexer, including a demodulator followed by an equalizer, would operate at a rate of only fs/16 (in other words, at 1/4 of the symbol rate). The output from the equalizer would be four parallel streams of in-phase (I) and quadrature (Q) samples.
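
    The per-bin adaptation at the heart of a frequency-domain least-mean-square equalizer can be written in a few lines. A hypothetical sketch, assuming X[k] is the FFT of the received block, D[k] a known training reference, and W[k] the complex tap for bin k; the proposed parallel VLSI partitioning is not shown.

      /* Hypothetical sketch: each frequency bin adapts an independent
         complex tap, which is why the structure parallelizes so well. */
      #include <complex.h>

      void fd_lms_update(const double complex *X, const double complex *D,
                         double complex *W, int nbins, double mu)
      {
          for (int k = 0; k < nbins; k++) {
              double complex E = D[k] - W[k] * X[k];   /* per-bin error     */
              W[k] += mu * E * conj(X[k]);             /* LMS gradient step */
          }
      }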

  19. Probabilistic structural mechanics research for parallel processing computers

    NASA Technical Reports Server (NTRS)

    Sues, Robert H.; Chen, Heh-Chyun; Twisdale, Lawrence A.; Martin, William R.

    1991-01-01

    Aerospace structures and spacecraft are a complex assemblage of structural components that are subjected to a variety of complex, cyclic, and transient loading conditions. Significant modeling uncertainties are present in these structures, in addition to the inherent randomness of material properties and loads. To properly account for these uncertainties in evaluating and assessing the reliability of these components and structures, probabilistic structural mechanics (PSM) procedures must be used. Much research has focused on basic theory development and the development of approximate analytic solution methods in random vibrations and structural reliability. Practical application of PSM methods was hampered by their computationally intense nature. Solution of PSM problems requires repeated analyses of structures that are often large, and exhibit nonlinear and/or dynamic response behavior. These methods are all inherently parallel and ideally suited to implementation on parallel processing computers. New hardware architectures and innovative control software and solution methodologies are needed to make solution of large scale PSM problems practical.

  20. Reducing neural network training time with parallel processing

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.; Lamarsh, William J., II

    1995-01-01

    Obtaining optimal solutions for engineering design problems is often expensive because the process typically requires numerous iterations involving analysis and optimization programs. Previous research has shown that a near optimum solution can be obtained in less time by simulating a slow, expensive analysis with a fast, inexpensive neural network. A new approach has been developed to further reduce this time. This approach decomposes a large neural network into many smaller neural networks that can be trained in parallel. Guidelines are developed to avoid some of the pitfalls when training smaller neural networks in parallel. These guidelines allow the engineer: to determine the number of nodes on the hidden layer of the smaller neural networks; to choose the initial training weights; and to select a network configuration that will capture the interactions among the smaller neural networks. This paper presents results describing how these guidelines are developed.

  1. Regional-scale calculation of the LS factor using parallel processing

    NASA Astrophysics Data System (ADS)

    Liu, Kai; Tang, Guoan; Jiang, Ling; Zhu, A.-Xing; Yang, Jianyi; Song, Xiaodong

    2015-05-01

    With the increase of data resolution and the increasing application of USLE over large areas, the existing serial implementation of algorithms for computing the LS factor is becoming a bottleneck. In this paper, a parallel processing model based on the message passing interface (MPI) is presented for the calculation of the LS factor, so that massive datasets at a regional scale can be processed efficiently. The parallel model contains algorithms for calculating flow direction, flow accumulation, drainage network, slope, slope length and the LS factor. According to the existence of data dependence, the algorithms are divided into local algorithms and global algorithms. Parallel strategies are designed according to the characteristics of the algorithms, including a decomposition method that maintains the integrity of the results, an optimized workflow that reduces the time spent exporting unnecessary intermediate data, and a buffer-communication-computation strategy that improves communication efficiency. Experiments on a multi-node system show that the proposed parallel model allows efficient calculation of the LS factor at a regional scale with a massive dataset.
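
    A minimal mpi4py sketch of the stripe decomposition plus halo exchange that local terrain algorithms of this kind typically need. This is our illustration, not the paper's code; stripe sizes and data are assumptions.

    ```python
    # Each rank owns a horizontal stripe of the DEM; boundary rows are
    # exchanged with neighbors so slope-like local computations stay correct
    # at stripe edges. Run with e.g.: mpiexec -n 4 python ls_stripes.py
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    ncols = 1024
    local = np.random.rand(256, ncols)  # this rank's DEM stripe (stand-in data)
    up = rank - 1 if rank > 0 else MPI.PROC_NULL
    down = rank + 1 if rank < size - 1 else MPI.PROC_NULL

    halo_top = np.empty(ncols)
    halo_bot = np.empty(ncols)
    comm.Sendrecv(local[0], dest=up, recvbuf=halo_bot, source=down)
    comm.Sendrecv(local[-1], dest=down, recvbuf=halo_top, source=up)
    # local rows plus the halo rows now support edge-correct stencil passes
    ```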

  2. A massively parallel solution strategy for efficient thermal radiation simulation

    NASA Astrophysics Data System (ADS)

    Nguyen, P. D.; Moureau, V.; Vervisch, L.; Perret, N.

    2012-06-01

    A novel and efficient methodology to solve the Radiative Transfer Equations (RTE) in thermal radiation is discussed. The BiCGStab(2) iterative solution method, designed for non-symmetric linear equation systems, is used to solve the discretized RTE. Numerical upwind and central schemes are blended to provide a stable numerical scheme (MUCS) for interpolation of the cell facial radiation intensities in the finite volume formulation. The combination of the BiCGStab(2) and MUCS methods proved to be very efficient when coupled with the DOM approach to solve the RTE. A cost-effective tabulation technique for the gaseous radiative property model SNB-FSCK using a 7-point Gauss-Lobatto quadrature scheme is also introduced. The whole methodology is implemented into a massively parallel unstructured CFD code where the radiative and fluid flow solutions share the same domain decomposition, which is the bottleneck in current radiative solvers. The dual mesh decomposition at the cell-group level and processor level is adopted to optimize the CFD code for massively parallel computing. The whole method is applied to simulate the radiation heat transfer in a 3D rectangular enclosure containing non-isothermal CO2 and H2O mixtures. Two test cases are studied for homogeneous and inhomogeneous distributions of CO2 and H2O in the enclosure. Results are reported for the heat flux and radiation energy source, and comparisons are made between the present methodology BiCGStab(2)/MUCS/tabulated SNB-FSCK, the benchmark method SNB-CK (implemented at 25 cm-1 narrow-band resolution) and other methods available in the literature. The present method (BiCGStab(2)/MUCS/tabulated SNB-FSCK) yields more accurate predictions, particularly for the radiation source term. When compared with the benchmark solution, the relative error of the radiation source term is remarkably reduced to less than 4% and the CPU time is drastically diminished.
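
    For readers unfamiliar with the solver family, here is a minimal sketch using SciPy's single-level bicgstab on a non-symmetric banded system standing in for the discretized RTE. The paper uses BiCGStab(2), a related variant; the matrix below is purely illustrative.

    ```python
    # Sketch only: a Krylov BiCGStab solve on a non-symmetric banded system
    # standing in for the upwind/central (MUCS) discretization of the RTE.
    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import bicgstab

    n = 1000
    A = diags([-1.2, 4.0, -0.8], offsets=[-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)

    x, info = bicgstab(A, b)
    print("converged" if info == 0 else f"solver info: {info}")
    ```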

  3. Extraction of Hydrological Proximity Measures from DEMs using Parallel Processing

    SciTech Connect

    Tesfa, Teklu K.; Tarboton, David G.; Watson, Daniel W.; Schreuders, Kimberly A.; Baker, Matthew M.; Wallace, Robert M.

    2011-12-01

    Land surface topography is one of the most important terrain properties which impact hydrological, geomorphological, and ecological processes active on a landscape. In our previous efforts to develop a soil depth model based upon topographic and land cover variables, we extracted a set of hydrological proximity measures (HPMs) from a Digital Elevation Model (DEM) as potential explanatory variables for soil depth. These HPMs may also have other, more general modeling applicability in hydrology, geomorphology and ecology, and so are described here from a general perspective. The HPMs we derived are variations of the distance up to ridge points (cells with no incoming flow) and variations of the distance down to stream points (cells with a contributing area greater than a threshold), following the flow path. These HPMs were computed using the D-infinity flow model that apportions flow between adjacent neighbors based on the direction of steepest downward slope on the eight triangular facets constructed in a 3 x 3 grid cell window using the center cell and each pair of adjacent neighboring grid cells in turn. The D-infinity model typically results in multiple flow paths between 2 points on the topography, with the result that distances may be computed as the minimum, maximum or average of the individual flow paths. In addition, each of the HPMs is calculated vertically, horizontally, and along the land surface. Previously, these HPMs were calculated using recursive serial algorithms which suffered from stack overflow problems when used to process large datasets, limiting the size of DEMs that could be analyzed using that method to approximately 7000 x 7000 cells. To overcome this limitation, we developed a message passing interface (MPI) parallel approach for calculating these HPMs. The parallel algorithms of the HPMs spatially partition the input grid into stripes which are each assigned to separate processes for computation. Each of those processes then uses a

  4. A simple hyperbolic model for communication in parallel processing environments

    NASA Technical Reports Server (NTRS)

    Stoica, Ion; Sultan, Florin; Keyes, David

    1994-01-01

    We introduce a model for communication costs in parallel processing environments called the 'hyperbolic model,' which generalizes two-parameter dedicated-link models in an analytically simple way. Dedicated interprocessor links parameterized by a latency and a transfer rate that are independent of load are assumed by many existing communication models; such models are unrealistic for workstation networks. The communication system is modeled as a directed communication graph in which terminal nodes represent the application processes that initiate the sending and receiving of the information and in which internal nodes, called communication blocks (CBs), reflect the layered structure of the underlying communication architecture. The direction of graph edges specifies the flow of the information carried through messages. Each CB is characterized by a two-parameter hyperbolic function of the message size that represents the service time needed for processing the message. The parameters are evaluated in the limits of very large and very small messages. Rules are given for reducing a communication graph consisting of many CBs to an equivalent two-parameter form, while maintaining an approximation for the service time that is exact in both the large and small limits. The model is validated on a dedicated Ethernet network of workstations by experiments with communication subprograms arising in scientific applications, for which a tight fit of the model predictions with actual measurements of the communication and synchronization time between end processes is demonstrated. The model is then used to evaluate the performance of two simple parallel scientific applications from partial differential equations: domain decomposition and time-parallel multigrid. In an appropriate limit, we also show the compatibility of the hyperbolic model with the recently proposed LogP model.
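
    To make the reduction idea concrete, here is a sketch under explicitly stated assumptions: we take one plausible two-parameter "hyperbolic" service-time form (latency-dominated for small messages, rate-dominated for large ones) and a series-composition rule that is exact in both limits. The functional form is our illustration, not necessarily the paper's exact equation.

    ```python
    # Illustrative two-parameter service-time model and series reduction.
    # T(m) -> t0 as m -> 0 and T(m) -> m/rate as m -> infinity (a hyperbola).
    import math

    def service_time(m, t0, rate):
        return math.hypot(t0, m / rate)  # assumed hyperbolic form

    def reduce_series(cb_a, cb_b):
        # exact in both limits: latencies add, rates combine harmonically
        (t0a, ra), (t0b, rb) = cb_a, cb_b
        return t0a + t0b, 1.0 / (1.0 / ra + 1.0 / rb)

    link = reduce_series((1e-4, 1e6), (5e-5, 5e6))  # (latency s, rate bytes/s)
    print(service_time(8_000_000, *link))           # time for an 8 MB message
    ```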

  5. Parallel processing at the SSC: The fact and the fiction

    SciTech Connect

    Bourianoff, G.; Cole, B.

    1991-10-01

    Accurately modelling the behavior of particles circulating in accelerators is a computationally demanding task. The particle tracking code currently in use at the SSC is based upon a "thin element" analysis (TEAPOT). In this model each magnet in the lattice is described by a thin element at which the particle experiences an impulsive kick. Each kick requires approximately 200 floating-point operations (FLOP). For the SSC collider lattice consisting of 10^4 elements, a tracking study of 100 particles for 10^7 turns would require 2 x 10^15 FLOP. Even on a machine capable of 100 MFLOP/sec (MFLOPS), this would require 2 x 10^7 seconds, and many such runs are necessary. It should be noted that the accuracy with which the kicks are calculated is important: the large number of iterations involved will magnify the effects of small errors. The inability of current computational resources to perform the full calculation effectively motivates the migration of this calculation to the most powerful computers available. A survey of current research into new supercomputing technologies reveals that the supercomputers of the future will be parallel in nature. Further, numerous such machines exist today, and are being used to solve other difficult problems. Thus it seems clear that it is not too early to begin developing tracking codes for parallel architectures. This report discusses implementing parallel processing at the SSC.
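
    The cost estimate in the abstract is simple arithmetic; spelling it out:

    ```python
    # The abstract's back-of-the-envelope estimate, spelled out.
    elements = 1e4        # lattice elements, i.e. kicks per turn
    flop_per_kick = 200
    particles = 100
    turns = 1e7

    total_flop = elements * flop_per_kick * particles * turns
    rate = 100e6          # a 100 MFLOP/s machine
    print(f"{total_flop:.0e} FLOP, {total_flop / rate:.0e} s")  # 2e+15, 2e+07
    ```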

  7. Hitch-hiking: a parallel heuristic search strategy, applied to the phylogeny problem.

    PubMed

    Charleston, M A

    2001-01-01

    The article introduces a parallel heuristic search strategy ("Hitch-hiking") which can be used in conjunction with other random-walk heuristic search strategies. It is applied to an artificial phylogeny problem, in which character sequences are evolved using pseudo-random numbers from a hypothetical ancestral sequence. The objective function to be minimized is the minimum number of character-state changes required on a binary tree that could account for the sequences observed at the tips (leaves) of the tree -- the Maximum Parsimony criterion. The Hitch-hiking strategy is shown to be useful: it is robust, and on average the solutions found with it are better than those found without it. The strategy can also dynamically provide information on the characteristics of the landscape of the problem. I argue that Hitch-hiking, as a scheme for parallelizing existing heuristic search strategies, is of potentially very general use in many areas of combinatorial optimization.

  8. Bin-Hash Indexing: A Parallel Method for Fast Query Processing

    SciTech Connect

    Bethel, Edward W; Gosink, Luke J.; Wu, Kesheng; Bethel, Edward Wes; Owens, John D.; Joy, Kenneth I.

    2008-06-27

    This paper presents a new parallel indexing data structure for answering queries. The index, called Bin-Hash, offers extremely high levels of concurrency, and is therefore well-suited for emerging commodity parallel processors, such as multi-cores, cell processors, and general-purpose graphics processing units (GPUs). The Bin-Hash approach first bins the base data, and then partitions and separately stores the values in each bin as a perfect spatial hash table. To answer a query, we first determine whether or not a record satisfies the query conditions based on the bin boundaries. For the bins with records that cannot be resolved this way, we examine the spatial hash tables. The procedures for examining the bin numbers and the spatial hash tables offer the maximum possible level of concurrency; all records can be evaluated by our procedure independently in parallel. Additionally, our Bin-Hash procedures access much smaller amounts of data than similar parallel methods, such as the projection index. This smaller data footprint is critical for certain parallel processors, like GPUs, where memory resources are limited. To demonstrate the effectiveness of Bin-Hash, we implement it on a GPU using the data-parallel programming language CUDA. The concurrency offered by the Bin-Hash index allows us to fully utilize the GPU's massive parallelism in our work; over 12,000 records can be evaluated simultaneously at any one time. We show that our new query processing method is an order of magnitude faster than current state-of-the-art CPU-based indexing technologies. Additionally, we compare our performance to existing GPU-based projection index strategies.
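
    A simplified CPU illustration of the two-phase idea: bin boundaries resolve most of a range query outright, and only records in the two boundary bins need their raw values checked. The real index stores each bin as a perfect spatial hash table on the GPU; the bin count and query below are our choices.

    ```python
    # Two-phase range query over binned data, mimicking Bin-Hash's structure.
    import numpy as np

    data = np.random.rand(1_000_000)
    edges = np.linspace(0.0, 1.0, 257)                 # 256 bins
    bins = np.searchsorted(edges, data, side="right") - 1

    lo, hi = 0.30, 0.31                                # query: lo <= x < hi
    lo_bin, hi_bin = np.searchsorted(edges, [lo, hi], side="right") - 1

    hits = (bins > lo_bin) & (bins < hi_bin)           # fully inside: no data access
    edge = (bins == lo_bin) | (bins == hi_bin)         # boundary bins: check values
    hits |= edge & (data >= lo) & (data < hi)
    print(hits.sum())
    ```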

  9. Application of parallel distributed processing to space based systems

    NASA Technical Reports Server (NTRS)

    Macdonald, J. R.; Heffelfinger, H. L.

    1987-01-01

    The concept of using Parallel Distributed Processing (PDP) to enhance automated experiment monitoring and control is explored. Recent very large scale integration (VLSI) advances have made such applications an achievable goal. The PDP machine has demonstrated the ability to automatically organize stored information, handle unfamiliar and contradictory input data and perform the actions necessary. The PDP machine has demonstrated that it can perform inference and knowledge operations with greater speed and flexibility and at lower cost than traditional architectures. In applications where the rule set governing an expert system's decisions is difficult to formulate, PDP can be used to extract rules by associating the information an expert receives with the actions taken.

  10. Parallel deterioration to language processing in a bilingual speaker.

    PubMed

    Druks, Judit; Weekes, Brendan Stuart

    2013-01-01

    The convergence hypothesis [Green, D. W. (2003). The neural basis of the lexicon and the grammar in L2 acquisition: The convergence hypothesis. In R. van Hout, A. Hulk, F. Kuiken, & R. Towell (Eds.), The interface between syntax and the lexicon in second language acquisition (pp. 197-218). Amsterdam: John Benjamins] assumes that the neural substrates of language representations are shared between the languages of a bilingual speaker. One prediction of this hypothesis is that neurodegenerative disease should produce parallel deterioration to lexical and grammatical processing in bilingual aphasia. We tested this prediction with a late bilingual Hungarian (first language, L1)-English (second language, L2) speaker J.B. who had nonfluent progressive aphasia (NFPA). J.B. had acquired L2 in adolescence but was premorbidly proficient and used English as his dominant language throughout adult life. Our investigations showed comparable deterioration to lexical and grammatical knowledge in both languages during a one-year period. Parallel deterioration to language processing in a bilingual speaker with NFPA challenges the assumption that L1 and L2 rely on different brain mechanisms as assumed in some theories of bilingual language processing [Ullman, M. T. (2001). The neural basis of lexicon and grammar in first and second language: The declarative/procedural model. Bilingualism: Language and Cognition, 4(1), 105-122].

  11. Parallel-Processing Software for Creating Mosaic Images

    NASA Technical Reports Server (NTRS)

    Klimeck, Gerhard; Deen, Robert; McCauley, Michael; DeJong, Eric

    2008-01-01

    A computer program implements parallel processing for nearly real-time creation of panoramic mosaics of images of terrain acquired by video cameras on an exploratory robotic vehicle (e.g., a Mars rover). Because the original images are typically acquired at various camera positions and orientations, it is necessary to warp the images into the reference frame of the mosaic before stitching them together to create the mosaic. [Also see "Parallel-Processing Software for Correlating Stereo Images," Software Supplement to NASA Tech Briefs, Vol. 31, No. 9 (September 2007) page 26.] The warping algorithm in this computer program reflects the considerations that (1) for every pixel in the desired final mosaic, a good corresponding point must be found in one or more of the original images and (2) for this purpose, one needs a good mathematical model of the cameras and a good correlation of individual pixels with respect to their positions in three dimensions. The desired mosaic is divided into slices, each of which is assigned to one of a number of central processing units (CPUs) operating simultaneously. The results from the CPUs are gathered and placed into the final mosaic. The time taken to create the mosaic depends upon the number of CPUs, the speed of each CPU, and whether a local or a remote data-staging mechanism is used.
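
    The slice-parallel scheme the abstract describes can be sketched quickly; `warp_slice` below is a hypothetical placeholder for the camera-model-based warping, and the dimensions are illustrative.

    ```python
    # Sketch of the slice-parallel strategy: the output mosaic is divided into
    # horizontal slices, each warped by a different CPU, then reassembled.
    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    HEIGHT, WIDTH, SLICES = 512, 2048, 8
    ROWS = HEIGHT // SLICES

    def warp_slice(index):
        # stand-in: a real implementation samples source images via the camera model
        return index, np.full((ROWS, WIDTH), float(index))

    if __name__ == "__main__":
        mosaic = np.empty((HEIGHT, WIDTH))
        with ProcessPoolExecutor() as pool:
            for index, block in pool.map(warp_slice, range(SLICES)):
                mosaic[index * ROWS:(index + 1) * ROWS] = block
        print(mosaic.shape)
    ```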

  12. Parallel approach to incorporating face image information into dialogue processing

    NASA Astrophysics Data System (ADS)

    Ren, Fuji

    2000-10-01

    There are many kinds of so-called irregular expressions in natural dialogues. Even when the words of a conversation are the same, different meanings can be conveyed by the speaker's feelings or facial expression. For a good understanding of dialogues, a flexible dialogue processing system must infer the speaker's view properly. However, it is difficult to obtain the meaning of the speaker's sentences in various scenes using traditional methods. In this paper, a new approach for dialogue processing that incorporates information from the speaker's face is presented. We first divide conversation statements into several simple tasks. Second, we process each simple task using an independent processor. Third, we employ information from the speaker's face to estimate the speaker's view and resolve ambiguities in dialogues. The approach presented in this paper can work efficiently because the independent processors run in parallel, writing partial results to a shared memory, incorporating partial results at appropriate points, and complementing each other. A parallel algorithm and a method for employing the face information in dialogue machine translation are discussed, and some results are included in this paper.

  13. Sentence comprehension: A parallel distributed processing approach. Technical report

    SciTech Connect

    McClelland, J.L.; St John, M.; Taraban, R.

    1989-07-14

    Basic aspects of conventional approaches to sentence comprehension are reviewed, and some of the difficulties faced by models that take these approaches are pointed out. An alternative approach is described, based on the principles of parallel distributed processing, and it is shown how this approach offers different answers to basic questions about the nature of the language processing mechanism. An illustrative simulation model captures the key characteristics of the approach and illustrates how it can cope with the difficulties faced by conventional models. Alternative ways of conceptualizing basic aspects of language processing within this framework are considered, several arguments that might be brought to bear against the approach are addressed, and avenues for future development are suggested.

  14. Hippocampal-prefrontal dynamics in spatial working memory: interactions and independent parallel processing.

    PubMed

    Churchwell, John C; Kesner, Raymond P

    2011-12-01

    Memory processes may be independent, compete, operate in parallel, or interact. In accordance with this view, behavioral studies suggest that the hippocampus (HPC) and prefrontal cortex (PFC) may act as an integrated circuit during performance of tasks that require working memory over longer delays, whereas during short delays the HPC and PFC may operate in parallel or have completely dissociable functions. In the present investigation we tested rats in a spatial delayed non-match to sample working memory task using short and long time delays to evaluate the hypothesis that intermediate CA1 region of the HPC (iCA1) and medial PFC (mPFC) interact and operate in parallel under different temporal working memory constraints. In order to assess the functional role of these structures, we used an inactivation strategy in which each subject received bilateral chronic cannula implantation of the iCA1 and mPFC, allowing us to perform bilateral, contralateral, ipsilateral, and combined bilateral inactivation of structures and structure pairs within each subject. This novel approach allowed us to test for circuit-level systems interactions, as well as independent parallel processing, while we simultaneously parametrically manipulated the temporal dimension of the task. The current results suggest that, at longer delays, iCA1 and mPFC interact to coordinate retrospective and prospective memory processes in anticipation of obtaining a remote goal, whereas at short delays either structure may independently represent spatial information sufficient to successfully complete the task.

  15. Parallel Latent Semantic Analysis using a Graphics Processing Unit

    SciTech Connect

    Cui, Xiaohui; Potok, Thomas E; Cavanagh, Joseph M

    2009-01-01

    Latent Semantic Analysis (LSA) can be used to reduce the dimensions of large Term-Document datasets using Singular Value Decomposition. However, with the ever expanding size of data sets, current implementations are not fast enough to quickly and easily compute the results on a standard PC. The Graphics Processing Unit (GPU) can solve some highly parallel problems much faster than the traditional sequential processor (CPU). Thus, a deployable system using a GPU to speed up large-scale LSA processes would be a much more effective choice (in terms of cost/performance ratio) than using a computer cluster. In this paper, we present a parallel LSA implementation on the GPU, using NVIDIA Compute Unified Device Architecture (CUDA) and Compute Unified Basic Linear Algebra Subprograms (CUBLAS). The performance of this implementation is compared to a traditional LSA implementation on the CPU using an optimized Basic Linear Algebra Subprograms library. For large matrices that have dimensions divisible by 16, the GPU algorithm ran five to six times faster than the CPU version.
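
    For reference, the linear-algebra core that the GPU version accelerates looks like this on the CPU. This is a minimal NumPy sketch; the matrix sizes and the number of latent dimensions k are illustrative.

    ```python
    # CPU sketch of the LSA core: reduce a term-document matrix to k latent
    # dimensions with a truncated SVD (the paper moves this onto CUDA/CUBLAS).
    import numpy as np

    rng = np.random.default_rng(0)
    term_doc = rng.random((2000, 500))   # terms x documents (stand-in counts)
    k = 100

    U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
    docs_k = (np.diag(s[:k]) @ Vt[:k]).T  # documents in the k-dim latent space
    print(docs_k.shape)                   # (500, 100)
    ```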

  16. Time dependent processing in a parallel pipeline architecture.

    PubMed

    Biddiscombe, John; Geveci, Berk; Martin, Ken; Moreland, Kenneth; Thompson, David

    2007-01-01

    Pipeline architectures provide a versatile and efficient mechanism for constructing visualizations, and they have been implemented in numerous libraries and applications over the past two decades. In addition to allowing developers and users to freely combine algorithms, visualization pipelines have proven to work well when streaming data and scale well on parallel distributed-memory computers. However, current pipeline visualization frameworks have a critical flaw: they are unable to manage time varying data. As data flows through the pipeline, each algorithm has access to only a single snapshot in time of the data. This prevents the implementation of algorithms that do any temporal processing such as particle tracing; plotting over time; or interpolation, fitting, or smoothing of time series data. As data acquisition technology improves, as simulation time-integration techniques become more complex, and as simulations save less frequently and regularly, the ability to analyze the time-behavior of data becomes more important. This paper describes a modification to the traditional pipeline architecture that allows it to accommodate temporal algorithms. Furthermore, the architecture allows temporal algorithms to be used in conjunction with algorithms expecting a single time snapshot, thus simplifying software design and allowing adoption into existing pipeline frameworks. Our architecture also continues to work well in parallel distributed-memory environments. We demonstrate our architecture by modifying the popular VTK framework and exposing the functionality to the ParaView application. We use this framework to apply time-dependent algorithms on large data with a parallel cluster computer and thereby exercise a functionality that previously did not exist.

  17. MASSIVELY PARALLEL LATENT SEMANTIC ANALYSES USING A GRAPHICS PROCESSING UNIT

    SciTech Connect

    Cavanagh, J.; Cui, S.

    2009-01-01

    Latent Semantic Analysis (LSA) aims to reduce the dimensions of large term-document datasets using Singular Value Decomposition. However, with the ever-expanding size of datasets, current implementations are not fast enough to quickly and easily compute the results on a standard PC. A graphics processing unit (GPU) can solve some highly parallel problems much faster than a traditional sequential processor or central processing unit (CPU). Thus, a deployable system using a GPU to speed up large-scale LSA processes would be a much more effective choice (in terms of cost/performance ratio) than using a PC cluster. Due to the GPU's application-specific architecture, harnessing the GPU's computational prowess for LSA is a great challenge. We presented a parallel LSA implementation on the GPU, using NVIDIA Compute Unified Device Architecture and Compute Unified Basic Linear Algebra Subprograms software. The performance of this implementation is compared to traditional LSA implementation on a CPU using an optimized Basic Linear Algebra Subprograms library. After implementation, we discovered that the GPU version of the algorithm was twice as fast for large matrices (1000x1000 and above) that had dimensions not divisible by 16. For large matrices that did have dimensions divisible by 16, the GPU algorithm ran five to six times faster than the CPU version. The large variation is due to architectural benefits of the GPU for matrices divisible by 16. It should be noted that the overall speeds for the CPU version did not vary from relative normal when the matrix dimensions were divisible by 16. Further research is needed in order to produce a fully implementable version of LSA. With that in mind, the research we presented shows that the GPU is a viable option for increasing the speed of LSA, in terms of cost/performance ratio.

  18. Massively Parallel Processing for Fast and Accurate Stamping Simulations

    NASA Astrophysics Data System (ADS)

    Gress, Jeffrey J.; Xu, Siguang; Joshi, Ramesh; Wang, Chuan-tao; Paul, Sabu

    2005-08-01

    The competitive automotive market drives automotive manufacturers to speed up vehicle development cycles and reduce lead time. Fast tooling development is one of the key areas supporting fast and short vehicle development programs (VDP). In the past ten years, stamping simulation has become the most effective validation tool for predicting and resolving potential formability and quality problems before dies are physically made. Stamping simulation and formability analysis has become a critical business segment in the GM math-based die engineering process. As simulation becomes one of the major production tools in the engineering factory, simulation speed and accuracy are two of the most important measures of stamping simulation technology. The speed and time-in-system of forming analysis become even more critical for supporting fast VDPs and tooling readiness. Since 1997, the General Motors Die Center has been working jointly with our software vendor to develop and implement a parallel version of simulation software for mass-production analysis applications. By 2001, this technology had matured in the form of distributed memory processing (DMP) of draw die simulations in a networked distributed-memory computing environment. In 2004, this technology was refined to massively parallel processing (MPP) and extended to line die forming analysis (draw, trim, flange, and associated spring-back) running on a dedicated computing environment. The evolution of this technology and the insight gained through the implementation of DMP/MPP technology, as well as performance benchmarks, are discussed in this publication.

  19. Parallel distributed processing: Implications for cognition and development. Technical report

    SciTech Connect

    McClelland, J.L.

    1988-07-11

    This paper provides a brief overview of the connectionist or parallel distributed processing framework for modeling cognitive processes, and considers the application of the connectionist framework to problems of cognitive development. Several aspects of cognitive development might result from the process of learning as it occurs in multi-layer networks. This learning process has the characteristic that it reduces the discrepancy between expected and observed events. As it does this, representations develop on hidden units which dramatically change both the way in which the network represents the environment from which it learns and the expectations that the network generates about environmental events. The learning process exhibits relatively abrupt transitions corresponding to stage shifts in cognitive development. These points are illustrated using a network that learns to anticipate which side of a balance beam will go down, based on the number of weights on each side of the fulcrum and their distance from the fulcrum on each side of the beam. The network is trained in an environment in which weight more frequently governs which side will go down. It recapitulates the stages of development seen in children, as well as the stage transitions, as it learns to represent weight and distance information.

  20. Parallel information processing channels created in the retina

    PubMed Central

    Schiller, Peter H.

    2010-01-01

    In the retina, several parallel channels originate that extract different attributes from the visual scene. This review describes how these channels arise and what their functions are. Following the introduction, four sections deal with these channels. The first discusses the "ON" and "OFF" channels that have arisen for the purpose of rapidly processing images in the visual scene that become visible by virtue of either light increment or light decrement; the ON channel processes images that become visible by virtue of light increment and the OFF channel processes images that become visible by virtue of light decrement. The second section examines the midget and parasol channels. The midget channel processes fine detail, wavelength information, and stereoscopic depth cues; the parasol channel plays a central role in processing motion and flicker as well as motion parallax cues for depth perception. Both these channels have ON and OFF subdivisions. The third section describes the accessory optic system that receives input from the retinal ganglion cells of Dogiel; these cells play a central role, in concert with the vestibular system, in stabilizing images on the retina to prevent the blurring of images that would otherwise occur when an organism is in motion. The last section provides a brief overview of several additional channels that originate in the retina. PMID:20876118

  1. Efficient biased random bit generation for parallel processing

    SciTech Connect

    Slone, Dale M.

    1994-09-28

    A lattice gas automaton was implemented on a massively parallel machine (the BBN TC2000) and a vector supercomputer (the CRAY C90). The automaton models Burgers' equation, ρ_t + ρρ_x = νρ_xx, in one dimension. The lattice gas evolves by advecting and colliding pseudo-particles on a 1-dimensional, periodic grid. The specific rules for colliding particles are stochastic in nature and require the generation of many billions of random numbers to create the random bits necessary for the lattice gas. The goal of the thesis was to speed up the process of generating the random bits and thereby lessen the computational bottleneck of the automaton.
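
    The core trick for cheap biased bits is vectorized thresholding of uniform random numbers; a minimal illustration (our sketch, not the thesis code; the bias p is arbitrary):

    ```python
    # Generate biased random bits (probability p of a 1) for stochastic
    # collision rules: one vectorized comparison instead of per-bit loops.
    import numpy as np

    rng = np.random.default_rng(42)
    p = 0.3                                    # desired bias (assumed value)
    bits = (rng.random(10_000_000) < p).astype(np.uint8)
    print(bits.mean())                         # ~0.3
    ```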

  2. The power and efficiency of advanced software and parallel processing

    NASA Technical Reports Server (NTRS)

    Singh, Ramen P.; Taylor, Lawrence W., Jr.

    1989-01-01

    Real-time simulation of flexible and articulating systems is difficult because of the computational burden of the time-varying calculations. Because the mobile servicing system of NASA Space Station Freedom will handle heavy payloads by local arm manipulation and by translating along the spine of the Station, it is crucial to have real-time simulation available. To enable such a simulation to be of high fidelity and to be hosted on a modest computer, special care must be taken in formulating the structural dynamics. Frontal solution algorithms save considerable time in performing these calculations. In addition, it is necessary to take advantage of parallel processing, and the formulation must be compatible with it to take full advantage of both. An approach is offered which will result in high-fidelity, real-time simulation of flexible, articulating systems such as the Space Station remote servicing system.

  3. Parallel Processing of Adaptive Meshes with Load Balancing

    NASA Technical Reports Server (NTRS)

    Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    Many scientific applications involve grids that lack a uniform underlying structure. These applications are often also dynamic in nature in that the grid structure significantly changes between successive phases of execution. In parallel computing environments, mesh adaptation of unstructured grids through selective refinement/coarsening has proven to be an effective approach. However, achieving load balance while minimizing interprocessor communication and redistribution costs is a difficult problem. Traditional dynamic load balancers are mostly inadequate because they lack a global view of system loads across processors. In this paper, we propose a novel and general-purpose load balancer that utilizes symmetric broadcast networks (SBN) as the underlying communication topology, and compare its performance with a successful global load balancing environment, called PLUM, specifically created to handle adaptive unstructured applications. Our experimental results on an IBM SP2 demonstrate that the SBN-based load balancer achieves lower redistribution costs than those under PLUM by overlapping processing and data migration.

  4. Massively Parallel Latent Semantic Analyses using a Graphics Processing Unit

    SciTech Connect

    Cavanagh, Joseph M; Cui, Xiaohui

    2009-01-01

    Latent Semantic Analysis (LSA) aims to reduce the dimensions of large Term-Document datasets using Singular Value Decomposition. However, with the ever expanding size of data sets, current implementations are not fast enough to quickly and easily compute the results on a standard PC. The Graphics Processing Unit (GPU) can solve some highly parallel problems much faster than the traditional sequential processor (CPU). Thus, a deployable system using a GPU to speedup large-scale LSA processes would be a much more effective choice (in terms of cost/performance ratio) than using a computer cluster. Due to the GPU's application-specific architecture, harnessing the GPU's computational prowess for LSA is a great challenge. We present a parallel LSA implementation on the GPU, using NVIDIA Compute Unified Device Architecture and Compute Unified Basic Linear Algebra Subprograms. The performance of this implementation is compared to traditional LSA implementation on CPU using an optimized Basic Linear Algebra Subprograms library. After implementation, we discovered that the GPU version of the algorithm was twice as fast for large matrices (1000x1000 and above) that had dimensions not divisible by 16. For large matrices that did have dimensions divisible by 16, the GPU algorithm ran five to six times faster than the CPU version. The large variation is due to architectural benefits the GPU has for matrices divisible by 16. It should be noted that the overall speeds for the CPU version did not vary from relative normal when the matrix dimensions were divisible by 16. Further research is needed in order to produce a fully implementable version of LSA. With that in mind, the research we presented shows that the GPU is a viable option for increasing the speed of LSA, in terms of cost/performance ratio.

  5. Configuration Management Process Assessment Strategy

    NASA Technical Reports Server (NTRS)

    Henry, Thad

    2014-01-01

    Purpose: To propose a strategy for assessing the development and effectiveness of configuration management systems within programs, projects, and design activities performed by technical organizations and their supporting development contractors. Scope: Various entities' CM systems will be assessed depending on project scope (DDT&E), support services, and acquisition agreements. Approach: A model-based structure for assessing an organization's CM requirements, including best-practice maturity criteria. The model is tailored to the entity being assessed, depending on its CM system. The assessment approach provides objective feedback to engineering and project management on the observed CM system's maturity state versus the ideal state of the configuration management processes and outcomes (the system).
    • Identifies strengths and risks rather than audit "gotchas" (findings/observations).
    • Used recursively and iteratively throughout the program lifecycle at select points of need (typical assessment timing is post-PDR/post-CDR).
    • Ideal-state criteria and maturity targets are reviewed with the assessed entity prior to an assessment (tailoring) and depend on the assessed phase of the CM system.
    • Supports exit success criteria for Preliminary and Critical Design Reviews.
    • Gives a comprehensive CM system assessment which ultimately supports configuration verification activities.

  6. English III. Teacher's Guide [and Student Workbook]. Revised. Parallel Alternative Strategies for Students (PASS).

    ERIC Educational Resources Information Center

    Atkinson, Missy; Fresen, Sue; Goldstein, Jeren; Harrell, Stephanie; MacEnulty, Patricia; McLain, Janice

    This teacher's guide and student workbook are part of a series of content-centered supplementary curriculum packages of alternative methods and activities designed to help secondary students who have disabilities and those with diverse learning needs succeed in regular education content courses. The content of Parallel Alternative Strategies for…

  7. Introduction to Computers: Parallel Alternative Strategies for Students. Course No. 0200000.

    ERIC Educational Resources Information Center

    Chauvenne, Sherry; And Others

    Parallel Alternative Strategies for Students (PASS) is a content-centered package of alternative methods and materials designed to assist secondary teachers to meet the needs of mainstreamed learning-disabled and emotionally-handicapped students of various achievement levels in the basic education content courses. This supplementary text and…

  8. pMHC Multiplexing Strategy to Detect High Numbers of T Cell Responses in Parallel.

    PubMed

    Philips, Daisy; van den Braber, Marlous; Schumacher, Ton N; Kvistborg, Pia

    2017-01-01

    The development of peptide loaded major histocompatibility complexes (MHC) conjugated to fluorochromes by Davis and colleagues 20 years ago provided a highly useful tool to identify and characterize antigen-specific T cells. In this chapter we describe a multiplexing strategy that allows detection of high numbers of T cell responses in parallel.
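
    The multiplexing idea is combinatorial: with N fluorochromes, each pMHC specificity can be assigned a unique k-color code, so a small panel addresses many specificities in one sample. A tiny illustration; the panel and code size below are our assumptions, not the protocol's exact reagents.

    ```python
    # Dual-color combinatorial coding: each specificity gets a unique pair.
    from itertools import combinations

    fluorochromes = ["PE", "APC", "BV421", "PE-Cy7", "FITC", "BV605"]  # assumed panel
    codes = list(combinations(fluorochromes, 2))
    print(len(codes))   # 15 unique two-color codes from 6 fluorochromes
    ```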

  9. Mobile Devices and GPU Parallelism in Ionospheric Data Processing

    NASA Astrophysics Data System (ADS)

    Mascharka, D.; Pankratius, V.

    2015-12-01

    Scientific data acquisition in the field is often constrained by data transfer backchannels to analysis environments. Geoscientists are therefore facing practical bottlenecks with increasing sensor density and variety. Mobile devices, such as smartphones and tablets, offer promising solutions to key problems in scientific data acquisition, pre-processing, and validation by providing advanced capabilities in the field. This is due to affordable network connectivity options and the increasing mobile computational power. This contribution exemplifies a scenario faced by scientists in the field and presents the "Mahali TEC Processing App" developed in the context of the NSF-funded Mahali project. Aimed at atmospheric science and the study of ionospheric Total Electron Content (TEC), this app is able to gather data from various dual-frequency GPS receivers. It demonstrates parsing of full-day RINEX files on mobile devices and on-the-fly computation of vertical TEC values based on satellite ephemeris models that are obtained from NASA. Our experiments show how parallel computing on the mobile device GPU enables fast processing and visualization of up to 2 million datapoints in real-time using OpenGL. GPS receiver bias is estimated through minimum TEC approximations that can be interactively adjusted by scientists in the graphical user interface. Scientists can also perform approximate computations for "quickviews" to reduce CPU processing time and memory consumption. In the final stage of our mobile processing pipeline, scientists can upload data to the cloud for further processing. Acknowledgements: The Mahali project (http://mahali.mit.edu) is funded by the NSF INSPIRE grant no. AGS-1343967 (PI: V. Pankratius). We would like to acknowledge our collaborators at Boston College, Virginia Tech, Johns Hopkins University, Colorado State University, as well as the support of UNAVCO for loans of dual-frequency GPS receivers for use in this project, and Intel for loans of

  10. Processing modes and parallel processors in producing familiar keying sequences.

    PubMed

    Verwey, Willem B

    2003-05-01

    Recent theorizing indicates that the acquisition of movement sequence skill involves the development of several independent sequence representations at the same time. To examine this for the discrete sequence production task, participants in Experiment 1 produced a highly practiced sequence of six key presses in two conditions that allowed little preparation so that interkey intervals were slowed. Analyses of the distributions of moderately slowed interkey intervals indicated that this slowing was caused by the occasional use of two slower processing modes that probably rely on independent sequence representations, and by reduced parallel processing in the fastest processing mode. Experiment 2 addressed the role of intention in the fast production of familiar keying sequences. It showed that the participants, who were not aware they were executing familiar sequences in a somewhat different task, had no benefits of prior practice. This suggests that the mechanisms underlying sequencing skills are not automatically activated by mere execution of familiar sequences, and that some form of top-down, intentional control remains necessary.

  11. Digital signal processor and programming system for parallel signal processing

    SciTech Connect

    Van den Bout, D.E.

    1987-01-01

    This thesis describes an integrated assault upon the problem of designing high-throughput, low-cost digital signal-processing systems. The dual prongs of this assault consist of: (1) the design of a digital signal processor (DSP) which efficiently executes signal-processing algorithms in either a uniprocessor or multiprocessor configuration, (2) the PaLS programming system which accepts an arbitrary algorithm, partitions it across a group of DSPs, synthesizes an optimal communication link topology for the DSPs, and schedules the partitioned algorithm upon the DSPs. The results of applying a new quasi-dynamic analysis technique to a set of high-level signal-processing algorithms were used to determine the uniprocessor features of the DSP design. For multiprocessing applications, the DSP contains an interprocessor communications port (IPC) which supports simple, flexible, dataflow communications while allowing the total communication bandwidth to be incrementally allocated to achieve the best link utilization. The net result is a DSP with a simple architecture that is easy to program for both uniprocessor and multi-processor modes of operation. The PaLS programming system simplifies the task of parallelizing an algorithm for execution upon a multiprocessor built with the DSP.

  12. Human pattern recognition: parallel processing and perceptual learning.

    PubMed

    Fahle, M

    1994-01-01

    A new theory of visual object recognition by Poggio et al that is based on multidimensional interpolation between stored templates requires fast, stimulus-specific learning in the visual cortex. Indeed, performance in a number of perceptual tasks improves as a result of practice. We distinguish between two phases of learning a vernier-acuity task, a fast one that takes place within less than 20 min and a slow phase that continues over 10 h of training and probably beyond. The improvement is specific for relatively 'simple' features, such as the orientation of the stimulus presented during training, for the position in the visual field, and for the eye through which learning occurred. Some of these results are simulated by means of a computer model that relies on object recognition by multidimensional interpolation between stored templates. Orientation specificity of learning is also found in a jump-displacement task. In a manner parallel to the improvement in performance, cortical potentials evoked by the jump displacement tend to decrease in latency and to increase in amplitude as a result of training. The distribution of potentials over the brain changes significantly as a result of repeated exposure to the same stimulus. The results both of psychophysical and of electrophysiological experiments indicate that some form of perceptual learning might occur very early during cortical information processing. The hypothesis that vernier breaks are detected 'early' during pattern recognition is supported by the fact that reaction times for the detection of verniers depend hardly at all on the number of stimuli presented simultaneously. Hence, vernier breaks can be detected in parallel at different locations in the visual field, indicating that deviation from straightness is an elementary feature for visual pattern recognition in humans that is detected at an early stage of pattern recognition. Several results obtained during the last few years are reviewed, some new

  13. Evaluating In-Clique and Topological Parallelism Strategies for Junction Tree-Based Bayesian Inference Algorithm on the Cray XMT

    SciTech Connect

    Chin, George; Choudhury, Sutanay; Kangas, Lars J.; McFarlane, Sally A.; Marquez, Andres

    2011-09-01

    Long viewed as a strong statistical inference technique, Bayesian networks have emerged to be an important class of applications for high-performance computing. We have applied an architecture-conscious approach to parallelizing the Lauritzen-Spiegelhalter Junction Tree algorithm for exact inferencing in Bayesian networks. In optimizing the Junction Tree algorithm, we have implemented both in-clique and topological parallelism strategies to best leverage the fine-grained synchronization and massive-scale multithreading of the Cray XMT architecture. Two topological techniques were developed to parallelize the evidence propagation process through the Bayesian network. One technique involves performing intelligent scheduling of junction tree nodes based on their topology and relative size. The second technique involves decomposing the junction tree into a much finer tree-like representation to offer many more opportunities for parallelism. We evaluate these optimizations on five different Bayesian networks and report our findings and observations. Another important contribution of this paper is to demonstrate the application of massive-scale multithreading for load balancing and the use of implicit parallelism-based compiler optimizations in designing scalable inferencing algorithms.

  14. Resolving Multiscale Processes in Tropical Cyclogenesis Using Parallel EEMD

    NASA Astrophysics Data System (ADS)

    Wu, Y.; Shen, B. W.; Cheung, S.; Li, J. L. F.; Liu, Z.

    2014-12-01

    The recent advance in high-resolution global models has suggested that improved multiscale simulations of tropical waves may help extend the lead time of tropical cyclone (TC) formation prediction (e.g., Shen et al., 2010ab, 2012, 2013a). In previous efforts in the multiscale analysis of tropical waves, the Ensemble Empirical Mode Decomposition (EEMD) has been successfully parallelized and used to detect atmospheric wave signals on different spatial scales (e.g., Shen et al., 2013b), including Mixed Rossby Gravity (MRG) waves, the Western Wind Belt (WWB), African Easterly Waves (AEWs), etc. We now extend the related studies to examine the evolution of the large-scale waves and their association with the formation of tropical cyclones in the Atlantic for an extensive time period spanning multiple years. Our goal is to analyze the multiscale interaction in the initiation and early intensification stage of an AEW and its subsequent impact on TC genesis, which involves mainly large-scale downscaling processes. Specific focus is on the impact of barotropic instability and the critical level (CL, or steering level) that may appear in association with the AEW. The presence of the CL is believed to play an important role in providing a favorable environment in the early TC-genesis stage in the marsupial paradigm scenario. Preliminary analysis of satellite data from the newly launched Global Precipitation Measurement (GPM) mission, linked to the TC genesis processes, will be included.

  15. A visual parallel-BCI speller based on the time-frequency coding strategy

    NASA Astrophysics Data System (ADS)

    Xu, Minpeng; Chen, Long; Zhang, Lixin; Qi, Hongzhi; Ma, Lan; Tang, Jiabei; Wan, Baikun; Ming, Dong

    2014-04-01

    Objective. Spelling is one of the most important issues in brain-computer interface (BCI) research. This paper develops a visual parallel-BCI speller system based on a time-frequency coding strategy in which the sub-speller switching among four simultaneously presented sub-spellers and the character selection are identified in a parallel mode. Approach. The parallel-BCI speller was constituted by four independent P300+SSVEP-B (P300 plus SSVEP blocking) spellers with different flicker frequencies, so that every character had a specific time-frequency code. To verify its effectiveness, 11 subjects were involved in offline and online spellings. A classification strategy was designed to recognize the target character through joint use of canonical correlation analysis and stepwise linear discriminant analysis. Main results. Online spellings showed that the proposed parallel-BCI speller had a high performance, reaching a highest information transfer rate of 67.4 bit min-1, with averages of 54.0 bit min-1 and 43.0 bit min-1 in the three-round and five-round conditions, respectively. Significance. The results indicated that the proposed parallel-BCI could be effectively controlled by users, with attention shifting fluently among the sub-spellers, and greatly improved BCI spelling performance.
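
    The time-frequency code amounts to a lookup table: each character is addressed by a (sub-speller flicker frequency, P300 flash slot) pair. A sketch of such a code book; the frequencies and slot count are illustrative assumptions, not the paper's exact parameters.

    ```python
    # Illustrative time-frequency code book for a 4-sub-speller layout.
    from itertools import product

    frequencies_hz = [6.0, 6.67, 7.5, 8.57]  # one flicker frequency per sub-speller (assumed)
    time_slots = range(9)                    # P300 flash positions per sub-speller (assumed)

    chars = [chr(c) for c in range(ord("A"), ord("Z") + 1)] + list("0123456789")
    code_book = {code: ch for code, ch in zip(product(frequencies_hz, time_slots), chars)}
    print(code_book[(7.5, 4)])               # the character addressed by that code
    ```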

  16. Endpoint-based parallel data processing with non-blocking collective instructions in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Cernohous, Bob R; Ratterman, Joseph D; Smith, Brian E

    2014-11-18

    Methods, apparatuses, and computer program products for endpoint-based parallel data processing with non-blocking collective instructions in a parallel active messaging interface (`PAMI`) of a parallel computer are provided. Embodiments include establishing by a parallel application a data communications geometry, the geometry specifying a set of endpoints that are used in collective operations of the PAMI, including associating with the geometry a list of collective algorithms valid for use with the endpoints of the geometry. Embodiments also include registering in each endpoint in the geometry a dispatch callback function for a collective operation and executing without blocking, through a single one of the endpoints in the geometry, an instruction for the collective operation.
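
    PAMI is its own messaging layer, not MPI, but the overlap pattern the patent claims (start a collective, keep computing, complete it later) can be illustrated with the closest MPI analogue. A rough mpi4py sketch, our illustration only:

    ```python
    # Non-blocking collective with communication/computation overlap.
    # Run with e.g.: mpiexec -n 4 python nonblocking.py
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    send = np.full(4, comm.Get_rank(), dtype="i")
    recv = np.empty(4, dtype="i")

    req = comm.Iallreduce(send, recv, op=MPI.SUM)  # returns immediately
    # ... unrelated computation can proceed here, overlapping the collective ...
    req.Wait()
    print(comm.Get_rank(), recv)
    ```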

  17. Parallel implementation of RX anomaly detection on multi-core processors: impact of data partitioning strategies

    NASA Astrophysics Data System (ADS)

    Molero, Jose M.; Garzón, Ester M.; García, Inmaculada; Plaza, Antonio

    2011-11-01

    Anomaly detection is an important task for remotely sensed hyperspectral data exploitation. One of the most widely used and successful algorithms for anomaly detection in hyperspectral images is the Reed-Xiaoli (RX) algorithm. Despite its wide acceptance and high computational complexity when applied to real hyperspectral scenes, few documented parallel implementations of this algorithm exist, in particular for multi-core processors. The advantage of multi-core platforms over other specialized parallel architectures is that they are a low-power, inexpensive, widely available and well-known technology. A critical issue in the parallel implementation of RX is the sample covariance matrix calculation, which can be approached in global or local fashion. This aspect is crucial for the RX implementation since the consideration of a local or global strategy for the computation of the sample covariance matrix is expected to affect both the scalability of the parallel solution and the anomaly detection results. In this paper, we develop new parallel implementations of the RX in multi-core processors and specifically investigate the impact of different data partitioning strategies when parallelizing its computations. For this purpose, we consider both global and local data partitioning strategies in the spatial domain of the scene, and further analyze their scalability in different multi-core platforms. The numerical effectiveness of the considered solutions is evaluated using receiver operating characteristics (ROC) curves, analyzing their capacity to detect thermal hot spots (anomalies) in hyperspectral data collected by the NASA's Airborne Visible Infra-Red Imaging Spectrometer system over the World Trade Center in New York, five days after the terrorist attacks of September 11th, 2001.
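
    For concreteness, the global-covariance variant of RX can be sketched in a few lines: each pixel's anomaly score is its Mahalanobis distance from the scene-wide statistics. The data cube below is random stand-in data, not the AVIRIS imagery used in the study.

    ```python
    # Plain-NumPy sketch of global RX anomaly detection.
    import numpy as np

    cube = np.random.rand(100, 100, 50)          # rows x cols x spectral bands
    pixels = cube.reshape(-1, cube.shape[2])

    mu = pixels.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(pixels, rowvar=False))

    centered = pixels - mu
    scores = np.einsum("ij,jk,ik->i", centered, cov_inv, centered)
    anomalies = scores.reshape(cube.shape[:2])   # high values flag anomalies
    ```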

  18. Seeing the forest for the trees: Networked workstations as a parallel processing computer

    NASA Technical Reports Server (NTRS)

    Breen, J. O.; Meleedy, D. M.

    1992-01-01

    Unlike traditional 'serial' processing computers in which one central processing unit performs one instruction at a time, parallel processing computers contain several processing units, thereby, performing several instructions at once. Many of today's fastest supercomputers achieve their speed by employing thousands of processing elements working in parallel. Few institutions can afford these state-of-the-art parallel processors, but many already have the makings of a modest parallel processing system. Workstations on existing high-speed networks can be harnessed as nodes in a parallel processing environment, bringing the benefits of parallel processing to many. While such a system can not rival the industry's latest machines, many common tasks can be accelerated greatly by spreading the processing burden and exploiting idle network resources. We study several aspects of this approach, from algorithms to select nodes to speed gains in specific tasks. With ever-increasing volumes of astronomical data, it becomes all the more necessary to utilize our computing resources fully.

  19. Parallel processing using an optical delay-based reservoir computer

    NASA Astrophysics Data System (ADS)

    Van der Sande, Guy; Nguimdo, Romain Modeste; Verschaffelt, Guy

    2016-04-01

    Delay systems subject to delayed optical feedback have recently shown great potential in solving computationally hard tasks. By implementing a neuro-inspired computational scheme relying on the transient response to optical data injection, high processing speeds have been demonstrated. However, the delay-based reservoir computing systems discussed in the literature are built by coupling many stand-alone components, which leads to bulky, non-monolithic systems that lack long-term stability. Here we numerically investigate the possibility of implementing reservoir computing schemes based on semiconductor ring lasers (SRLs), semiconductor lasers whose cavity consists of a ring-shaped waveguide. SRLs are highly integrable and scalable, making them ideal candidates for key components in photonic integrated circuits. SRLs can generate light in two counterpropagating directions, between which bistability has been demonstrated. We demonstrate that two independent machine learning tasks, even ones whose input data signals differ in nature, can be computed simultaneously on a single photonic nonlinear node, relying on the parallelism offered by photonics. We illustrate the performance on simultaneous chaotic time-series prediction and nonlinear channel equalization. We take advantage of the two directional modes, assigning each task its own directional mode to mitigate possible crosstalk between the tasks. Our results indicate that prediction/classification errors comparable to state-of-the-art performance can be obtained even in the presence of noise, despite the two tasks being computed simultaneously. We also find that good performance is obtained for both tasks over a broad range of parameters. The results are discussed in detail in [Nguimdo et al., IEEE Trans. Neural Netw. Learn. Syst. 26, pp. 3301-3307, 2015].

  20. Parallel Computations in Insect and Mammalian Visual Motion Processing.

    PubMed

    Clark, Damon A; Demb, Jonathan B

    2016-10-24

    Sensory systems use receptors to extract information from the environment and neural circuits to perform subsequent computations. These computations may be described as algorithms composed of sequential mathematical operations. Comparing these operations across taxa reveals how different neural circuits have evolved to solve the same problem, even when using different mechanisms to implement the underlying math. In this review, we compare how insect and mammalian neural circuits have solved the problem of motion estimation, focusing on the fruit fly Drosophila and the mouse retina. Although the two systems implement computations with grossly different anatomy and molecular mechanisms, the underlying circuits transform light into motion signals with strikingly similar processing steps. These similarities run from photoreceptor gain control and spatiotemporal tuning to ON and OFF pathway structures, motion detection, and computed motion signals. The parallels between the two systems suggest that a limited set of algorithms for estimating motion satisfies both the needs of sighted creatures and the constraints imposed on them by metabolism, anatomy, and the structure and regularities of the visual world.

  1. An integrated approach to improving the parallel applications development process

    SciTech Connect

    Rasmussen, Craig E; Watson, Gregory R; Tibbitts, Beth R

    2009-01-01

    The development of parallel applications is becoming increasingly important to a broad range of industries. Traditionally, parallel programming was a niche area that was primarily exploited by scientists trying to model extremely complicated physical phenomena. It is becoming increasingly clear, however, that continued hardware performance improvements through clock scaling and feature-size reduction are simply not going to be achievable for much longer. The hardware vendors' approach to addressing this issue is to employ parallelism through multi-processor and multi-core technologies. While there is little doubt that this approach produces scaling improvements, there are still many significant hurdles to be overcome before parallelism can be employed as a general replacement for more traditional programming techniques. The Parallel Tools Platform (PTP) Project was created in 2005 in an attempt to provide developers with new tools aimed at addressing some of the parallel development issues. Since then, the introduction of a new generation of peta-scale and multi-core systems has highlighted the need for such a platform. In this paper, we describe some of the challenges facing parallel application developers, present the current state of PTP, and provide a simple case study that demonstrates how PTP can be used to locate a potential deadlock situation in an MPI code.
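
    A minimal example of the kind of bug the case study targets, sketched with mpi4py (the record does not include the actual case-study code): with two ranks, both enter a blocking receive before either sends, so neither call can complete. The sketch deadlocks by design.

    ```python
    # Run with: mpiexec -n 2 python deadlock.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    other = 1 - rank

    # Deadlock: both ranks block in recv(), so neither reaches send().
    data = comm.recv(source=other)
    comm.send(rank, dest=other)

    # One conventional fix: let the MPI library pair the calls safely.
    # data = comm.sendrecv(rank, dest=other, source=other)
    ```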

  2. Introducing data parallelism into climate model post-processing through a parallel version of the NCAR Command Language (NCL)

    NASA Astrophysics Data System (ADS)

    Jacob, R. L.; Xu, X.; Krishna, J.; Tautges, T.

    2011-12-01

    The relationship between the needs of post-processing climate model output and the capability of the available tools has reached a crisis point. The large volume of data currently produced by climate models is overwhelming the current, decades-old analysis workflow. The tools used to implement that workflow are now a bottleneck in the climate science discovery processes. This crisis will only worsen as ultra-high resolution global climate models with horizontal scales of 4 km or smaller, running on leadership computing facilities, begin to produce tens to hundreds of terabytes for a single, hundred-year climate simulation. While climate models have used parallelism for several years, the post-processing tools are still mostly single-threaded applications. We have created a Parallel Climate Analysis Library (ParCAL) which implements many common climate analysis operations in a data-parallel fashion using the Message Passing Interface. ParCAL has in turn been built on sophisticated packages for describing grids in parallel (the Mesh-Oriented datABase, MOAB) and for performing vector operations on arbitrary grids (Intrepid). ParCAL also uses parallel I/O through the PnetCDF library. ParCAL has been used to implement a parallel version of the NCAR Command Language (NCL). ParNCL/ParCAL not only speeds up analysis of large datasets but also allows operations to be performed on native grids, eliminating the need to transform everything to latitude-longitude grids. In most cases, users' NCL scripts can run unaltered in parallel using ParNCL.
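
    The data-parallel pattern described can be sketched with mpi4py: each rank owns a slab of the field, reduces it locally, and one collective call produces the global statistic on every rank. The array shapes are invented for illustration and nothing here is ParCAL's actual API; in ParCAL the slab would arrive via a parallel PnetCDF read.

    ```python
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Each rank holds one latitude slab of a (time, lat, lon) field
    # (assumes size divides 180 evenly, for simplicity).
    local_field = np.random.rand(12, 180 // size, 360)

    # Local partial reductions followed by a single collective each.
    total = comm.allreduce(local_field.sum(), op=MPI.SUM)
    count = comm.allreduce(local_field.size, op=MPI.SUM)
    global_mean = total / count
    ```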

  3. The finite element machine: An experiment in parallel processing

    NASA Technical Reports Server (NTRS)

    Storaasli, O. O.; Peebles, S. W.; Crockett, T. W.; Knott, J. D.; Adams, L.

    1982-01-01

    The finite element machine is a prototype computer designed to support parallel solutions to structural analysis problems. The hardware architecture and support software for the machine, initial solution algorithms and test applications, and preliminary results are described.

  4. Parallel ALLSPD-3D: Speeding Up Combustor Analysis Via Parallel Processing

    NASA Technical Reports Server (NTRS)

    Fricker, David M.

    1997-01-01

    The ALLSPD-3D Computational Fluid Dynamics code for reacting flow simulation was run on a set of benchmark test cases to determine its parallel efficiency. These test cases included non-reacting and reacting flow simulations with varying numbers of processors. Also, the tests explored the effects of scaling the simulation with the number of processors in addition to distributing a constant size problem over an increasing number of processors. The test cases were run on a cluster of IBM RS/6000 Model 590 workstations with ethernet and ATM networking plus a shared memory SGI Power Challenge L workstation. The results indicate that the network capabilities significantly influence the parallel efficiency, i.e., a shared memory machine is fastest and ATM networking provides acceptable performance. The limitations of ethernet greatly hamper the rapid calculation of flows using ALLSPD-3D.
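
    For reference, the parallel efficiency reported in studies like this one is simply the speedup divided by the processor count; a minimal helper with invented timings:

    ```python
    def parallel_efficiency(t_serial, t_parallel, n_procs):
        """Speedup and efficiency for a fixed-size problem."""
        speedup = t_serial / t_parallel
        return speedup, speedup / n_procs

    # Hypothetical timings: 100 s on one node, 30 s on 4 nodes.
    s, e = parallel_efficiency(100.0, 30.0, 4)
    print(f"speedup {s:.2f}x, efficiency {e:.0%}")   # 3.33x, 83%
    ```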

  5. Distributed Computing for Signal Processing: Modeling of Asynchronous Parallel Computation.

    DTIC Science & Technology

    1986-03-01

    Parallelism will not reduce the processor communications response time; thus, there are associated costs and limitations (e.g., the amount of memory). (Unix is a trademark of Bell Laboratories; VAX is a trademark of Digital Equipment Corporation.)

  6. Parallelized CCHE2D flow model with CUDA Fortran on Graphics Processing Units

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper presents the CCHE2D implicit flow model parallelized using the CUDA Fortran programming technique on Graphics Processing Units (GPUs). A parallelized Alternating Direction Implicit (ADI) solver using the Parallel Cyclic Reduction (PCR) algorithm on the GPU is developed and tested. This solve...
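
    The record is truncated, but the PCR kernel it names is a standard one; a serial NumPy sketch is shown below under the usual assumptions (nonzero diagonal, a[0] = c[n-1] = 0). Every iteration of the inner loop over i is independent, which is exactly what maps onto GPU threads (via CUDA Fortran in the paper, not Python).

    ```python
    import numpy as np

    def pcr_solve(a, b, c, d):
        """Parallel cyclic reduction for a tridiagonal system.
        a: sub-diagonal (a[0] == 0), b: diagonal,
        c: super-diagonal (c[-1] == 0), d: right-hand side."""
        n = len(b)
        a, b, c, d = (np.asarray(v, dtype=float).copy() for v in (a, b, c, d))
        s = 1
        while s < n:                       # log2(n) reduction passes
            an, bn, cn, dn = a.copy(), b.copy(), c.copy(), d.copy()
            for i in range(n):             # independent: one GPU thread per i
                if i - s >= 0:             # eliminate the x[i-s] term
                    f = -a[i] / b[i - s]
                    an[i] = f * a[i - s]
                    bn[i] += f * c[i - s]
                    dn[i] += f * d[i - s]
                else:
                    an[i] = 0.0
                if i + s < n:              # eliminate the x[i+s] term
                    g = -c[i] / b[i + s]
                    cn[i] = g * c[i + s]
                    bn[i] += g * a[i + s]
                    dn[i] += g * d[i + s]
                else:
                    cn[i] = 0.0
            a, b, c, d = an, bn, cn, dn
            s *= 2
        return d / b                       # each equation now has one unknown

    n = 8
    a = np.r_[0.0, np.ones(n - 1)]
    b = np.full(n, 4.0)
    c = np.r_[np.ones(n - 1), 0.0]
    d = np.arange(1.0, n + 1)
    x = pcr_solve(a, b, c, d)
    A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
    assert np.allclose(A @ x, d)
    ```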

  7. Improving the National Strategy Process

    DTIC Science & Technology

    2013-03-01

    International Development has suffered a 75% reduction in its staffing numbers since the 1970's. By involving Congress in the strategy approval...

  8. Parallel versus sequential processing in print and braille reading.

    PubMed

    Veispak, Anneli; Boets, Bart; Ghesquière, Pol

    2012-01-01

    In the current study we investigated word, pseudoword and story reading in Dutch-speaking braille and print readers. To examine developmental patterns, these reading skills were assessed in both children and adults. The results reveal that braille readers read less accurately and more slowly than print readers. While item length has no impact on word reading accuracy and speed in the group of print readers, it has a significant impact on reading accuracy and speed in the group of braille readers, particularly in the younger sample. This suggests that braille readers rely more strongly on an enduring sequential reading strategy. Comparison of the different reading tasks suggests that the advantage in reading accuracy and speed of adult as compared to young braille readers is achieved through semantic top-down processing.

  9. Using Coarrays to Parallelize Legacy Fortran Applications: Strategy and Case Study

    DOE PAGES

    Radhakrishnan, Hari; Rouson, Damian W. I.; Morris, Karla; ...

    2015-01-01

    This paper summarizes a strategy for parallelizing a legacy Fortran 77 program using the object-oriented (OO) and coarray features that entered Fortran in the 2003 and 2008 standards, respectively. OO programming (OOP) facilitates the construction of an extensible suite of model-verification and performance tests that drive the development. Coarray parallel programming facilitates a rapid evolution from a serial application to a parallel application capable of running on multicore processors and many-core accelerators in shared and distributed memory. We delineate 17 code modernization steps used to refactor and parallelize the program and study the resulting performance. Our initial studies were done using the Intel Fortran compiler on a 32-core shared memory server. Scaling behavior was very poor, and profile analysis using TAU showed that the bottleneck in the performance was due to our implementation of a collective, sequential summation procedure. We were able to improve the scalability and achieve nearly linear speedup by replacing the sequential summation with a parallel, binary tree algorithm. We also tested the Cray compiler, which provides its own collective summation procedure. Intel provides no collective reductions. With Cray, the program shows linear speedup even in distributed-memory execution. We anticipate similar results with other compilers once they support the new collective procedures proposed for Fortran 2015.
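
    The fix the authors describe, replacing a sequential summation with a binary-tree reduction, looks roughly like the sketch below. It is written with mpi4py message passing rather than Fortran 2008 coarrays, so the paper's image/coarray syntax is not reproduced; only the tree pattern is.

    ```python
    from mpi4py import MPI

    def tree_sum(comm, value):
        """Reduce to rank 0 in log2(P) steps instead of a P-step serial sum."""
        rank, size = comm.Get_rank(), comm.Get_size()
        total, step = value, 1
        while step < size:
            if rank % (2 * step) == 0:         # this rank receives at this level
                partner = rank + step
                if partner < size:
                    total += comm.recv(source=partner)
            else:                              # this rank sends once, then is done
                comm.send(total, dest=rank - step)
                return None
            step *= 2
        return total                           # only rank 0 gets here

    comm = MPI.COMM_WORLD
    result = tree_sum(comm, float(comm.Get_rank()))
    if result is not None:
        print("sum of ranks:", result)
    ```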

  10. Parallel design of JPEG-LS encoder on graphics processing units

    NASA Astrophysics Data System (ADS)

    Duan, Hao; Fang, Yong; Huang, Bormin

    2012-01-01

    With recent technical advances in graphics processing units (GPUs), GPUs have outperformed CPUs in terms of compute capability and memory bandwidth. Many successful GPU applications to high performance computing have been reported. JPEG-LS is an ISO/IEC standard for lossless image compression which utilizes adaptive context modeling and run-length coding to improve compression ratio. However, adaptive context modeling causes data dependency among adjacent pixels and the run-length coding has to be performed in a sequential way. Hence, using JPEG-LS to compress large-volume hyperspectral image data is quite time-consuming. We implement an efficient parallel JPEG-LS encoder for lossless hyperspectral compression on an NVIDIA GPU using the compute unified device architecture (CUDA) programming technology. We use the block parallel strategy, as well as such CUDA techniques as coalesced global memory access, parallel prefix sum, and asynchronous data transfer. We also show the relation between GPU speedup and AVIRIS block size, as well as the relation between compression ratio and AVIRIS block size. When AVIRIS images are divided into blocks, each with 64×64 pixels, we gain the best GPU performance with a 26.3x speedup over the original CPU code.
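
    Of the CUDA techniques listed, the parallel prefix sum is the easiest to show in miniature. Below is a Hillis-Steele inclusive scan in NumPy; each doubling pass corresponds to one data-parallel step on the GPU. The JPEG-LS specifics of the encoder are not reproduced here.

    ```python
    import numpy as np

    def inclusive_scan(x):
        """Hillis-Steele inclusive prefix sum in log2(n) data-parallel passes."""
        y = np.asarray(x, dtype=np.int64).copy()
        stride = 1
        while stride < len(y):
            # Every element updates independently: one GPU thread per element.
            y = y + np.concatenate([np.zeros(stride, y.dtype), y[:-stride]])
            stride *= 2
        return y

    x = np.array([3, 1, 7, 0, 4, 1, 6, 3])
    assert np.array_equal(inclusive_scan(x), np.cumsum(x))
    ```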

  11. Parallel processing architecture for computing inverse differential kinematic equations of the PUMA arm

    NASA Technical Reports Server (NTRS)

    Hsia, T. C.; Lu, G. Z.; Han, W. H.

    1987-01-01

    In advanced robot control problems, on-line computation of the inverse Jacobian solution is frequently required. Parallel processing architecture is an effective way to reduce computation time. A parallel processing architecture is developed for the inverse Jacobian (inverse differential kinematic equation) of the PUMA arm. The proposed pipeline/parallel algorithm can be implemented on an IC chip using systolic linear arrays. This implementation requires 27 processing cells and 25 time units. Computation time is thus significantly reduced.
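
    Numerically, the quantity being pipelined is the solution of J(q) qdot = xdot for the joint rates; a plain linear-solve sketch is below, with a random stand-in for the PUMA Jacobian (the paper's closed-form Jacobian and systolic mapping are not reproduced).

    ```python
    import numpy as np

    def joint_rates(J, xdot):
        """Inverse differential kinematics: solve J(q) * qdot = xdot."""
        return np.linalg.solve(J, xdot)

    J = np.random.rand(6, 6)                           # stand-in Jacobian at pose q
    xdot = np.array([0.1, 0.0, 0.05, 0.0, 0.0, 0.02])  # desired end-effector twist
    qdot = joint_rates(J, xdot)
    ```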

  12. Endpoint-based parallel data processing with non-blocking collective instructions in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Cernohous, Bob R; Ratterman, Joseph D; Smith, Brian E

    2014-11-11

    Endpoint-based parallel data processing with non-blocking collective instructions in a PAMI of a parallel computer is disclosed. The PAMI is composed of data communications endpoints, each including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task. The compute nodes are coupled for data communications through the PAMI. The parallel application establishes a data communications geometry specifying a set of endpoints that are used in collective operations of the PAMI by associating with the geometry a list of collective algorithms valid for use with the endpoints of the geometry; registering in each endpoint in the geometry a dispatch callback function for a collective operation; and executing without blocking, through a single one of the endpoints in the geometry, an instruction for the collective operation.

  13. A Parallel Detecting Spectroscopic Ellipsometer for In-Situ Real Time Process Control of TPV Manufacturing

    SciTech Connect

    Dr. Simpson, Lin, ITN Energy Systems, Inc.

    2001-04-12

    To provide deposited thin film property characterization, ITN has developed a patented parallel detecting, spectroscopic ellipsometer (PDSE) sensor that measures, in-situ, the change in polarization state of light reflected from the PV/TPV material inside the vacuum chamber in as little as 3 ms. For this project, ITN will apply this unique and enabling technology to state-of-the-art PV/TPV manufacturing and develop the corresponding PDSE interpretive algorithms, models, and process controls needed to enhance product yield and decrease manufacturing cost. The ultimate goal for this project is to develop, implement, and commercialize the sensor and control components needed to provide the state-of-the-art process control required for world-class PV/TPV manufacturing. In the Phase I project, ITN adapted the PDSE to measure near-IR spectra and demonstrated the capability of the PDSE to perform in-situ, real-time measurements on PV/TPV materials deposited on a moving flexible substrate in the harsh conditions associated with vacuum deposition systems that contain molecular species. ITN also demonstrated the feasibility of relatively simple interpretive algorithms to convert the PDSE data to film property information that can be used for process control, and demonstrated the feasibility of a control strategy that incorporates measured data. We also developed process models and multilayer dynamic model-based process control strategies to be pursued during a Phase II effort.

  14. Optical Digital Parallel Truth-Table Look-Up Processing

    NASA Astrophysics Data System (ADS)

    Mirsalehi, Mir Mojtaba

    During the last decade, a number of optical digital processors have been proposed that combine the parallelism and speed of optics with the accuracy and flexibility of a digital representation. In this thesis, two types of such processors (an EXCLUSIVE OR-based processor and a NAND-based processor) that function as content-addressable memories (CAM's) are analyzed. The main factors that affect the performance of the EXCLUSIVE OR-based processor are found to be the Gaussian nature of the reference beam and the finite square aperture of the crystal. A quasi-one-dimensional model is developed to analyze the effect of the Gaussian reference beam, and a circular aperture is used to increase the dynamic range in the output power. The main factors that affect the performance of the NAND-based processor are found to be the variations in the amplitudes and the relative phase of the laser beams during the recording process. A mathematical model is developed for analyzing the probability of error in the output of the processor. Using this model, the performance of the processor for some practical cases is analyzed. Techniques that have been previously used to reduce the number of reference patterns in a CAM include: using the residue number system and applying logical minimization methods. In the present work, these and additional techniques are investigated. A systematic procedure is developed for selecting the optimum set of moduli. The effect of coding is investigated and it is shown that multi-level coding, when used in conjunction with logical minimization techniques, significantly reduces the number of reference patterns. The Quine-McCluskey method is extended to multiple-valued logic and a computer program based on this extension is used for logical minimization. The results show that for moduli expressible as p('n), where p is a prime number and n is an integer greater than one, p-level coding provides significant reduction. The NAND-based processor is modified for
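
    The residue-number-system idea used here to cut down the reference patterns works as follows: a number is represented by its residues modulo pairwise-coprime moduli, arithmetic proceeds independently and carry-free in each channel (hence in parallel), and the Chinese remainder theorem recovers the result. A small sketch with an arbitrarily chosen modulus set, not the optimized set the thesis derives:

    ```python
    from math import prod

    MODULI = (7, 11, 13, 15)   # pairwise coprime; an illustrative choice only

    def to_rns(x):
        return tuple(x % m for m in MODULI)

    def rns_add(u, v):
        # Each residue channel is independent: carry-free, parallel addition.
        return tuple((a + b) % m for a, b, m in zip(u, v, MODULI))

    def from_rns(r):
        # Chinese remainder theorem reconstruction (Python 3.8+ modular inverse).
        M = prod(MODULI)
        return sum(ri * (M // m) * pow(M // m, -1, m)
                   for ri, m in zip(r, MODULI)) % M

    x, y = 123, 456
    assert from_rns(rns_add(to_rns(x), to_rns(y))) == x + y   # 579 < 7*11*13*15
    ```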

  15. Tank Waste Remediation System optimized processing strategy

    SciTech Connect

    Slaathaug, E.J.; Boldt, A.L.; Boomer, K.D.; Galbraith, J.D.; Leach, C.E.; Waldo, T.L.

    1996-03-01

    This report provides an alternative strategy evolved from the current Hanford Site Tank Waste Remediation System (TWRS) programmatic baseline for accomplishing the treatment and disposal of the Hanford Site tank wastes. This optimized processing strategy performs the major elements of the TWRS Program, but modifies the deployment of selected treatment technologies to reduce the program cost. The present program for development of waste retrieval, pretreatment, and vitrification technologies continues, but the optimized processing strategy reuses a single facility to accomplish the separations/low-activity waste (LAW) vitrification and the high-level waste (HLW) vitrification processes sequentially, thereby eliminating the need for a separate HLW vitrification facility.

  16. Control of automatic processes: A parallel distributed-processing model of the stroop effect. Technical report

    SciTech Connect

    Cohen, J.D.; Dunbar, K.; McClelland, J.L.

    1988-06-16

    A growing body of evidence suggests that traditional views of automaticity are in need of revision. For example, automaticity has often been treated as an all-or-none phenomenon, and traditional theories have held that automatic processes are independent of attention. Yet recent empirical data suggest that automatic processes are continuous, and furthermore are subject to attentional control. In this paper we present a model of attention which addresses these issues. Using a parallel distributed processing framework we propose that the attributes of automaticity depend upon the strength of a process and that strength increases with training. Using the Stroop effect as an example, we show how automatic processes are continuous and emerge gradually with practice. Specifically, we present a computational model of the Stroop task which simulates the time course of processing as well as the effects of learning.

  17. A 16-bit parallel processing in a molecular assembly

    PubMed Central

    Bandyopadhyay, Anirban; Acharya, Somobrata

    2008-01-01

    A machine assembly consisting of 17 identical molecules of 2,3,5,6-tetramethyl-1,4-benzoquinone (DRQ) executes 16 instructions at a time. A single DRQ is positioned at the center of a circular ring formed by 16 other DRQs, controlling their operation in parallel through hydrogen-bond channels. Each molecule is a logic machine and generates four instructions by rotating its alkyl groups. A single instruction executed by a scanning tunneling microscope tip on the central molecule can change the decisions of the 16 machines simultaneously, in four billion (4^16) ways. This parallel communication represents a significant conceptual advance relative to today's fastest processors, which execute only one instruction at a time. PMID:18332437

  18. Parallel processing data network of master and slave transputers controlled by a serial control network

    DOEpatents

    Crosetto, D.B.

    1996-12-31

    The present device provides for a dynamically configurable communication network having a multi-processor parallel processing system having a serial communication network and a high speed parallel communication network. The serial communication network is used to disseminate commands from a master processor to a plurality of slave processors to effect communication protocol, to control transmission of high density data among nodes and to monitor each slave processor's status. The high speed parallel processing network is used to effect the transmission of high density data among nodes in the parallel processing system. Each node comprises a transputer, a digital signal processor, a parallel transfer controller, and two three-port memory devices. A communication switch within each node connects it to a fast parallel hardware channel through which all high density data arrives or leaves the node. 6 figs.

  19. Parallel processing data network of master and slave transputers controlled by a serial control network

    DOEpatents

    Crosetto, Dario B.

    1996-01-01

    The present device provides for a dynamically configurable communication network having a multi-processor parallel processing system having a serial communication network and a high speed parallel communication network. The serial communication network is used to disseminate commands from a master processor (100) to a plurality of slave processors (200) to effect communication protocol, to control transmission of high density data among nodes and to monitor each slave processor's status. The high speed parallel processing network is used to effect the transmission of high density data among nodes in the parallel processing system. Each node comprises a transputer (104), a digital signal processor (114), a parallel transfer controller (106), and two three-port memory devices. A communication switch (108) within each node (100) connects it to a fast parallel hardware channel (70) through which all high density data arrives or leaves the node.

  20. Partitioning Rectangular and Structurally Nonsymmetric Sparse Matrices for Parallel Processing

    SciTech Connect

    B. Hendrickson; T.G. Kolda

    1998-09-01

    A common operation in scientific computing is the multiplication of a sparse, rectangular or structurally nonsymmetric matrix and a vector. In many applications the matrix- transpose-vector product is also required. This paper addresses the efficient parallelization of these operations. We show that the problem can be expressed in terms of partitioning bipartite graphs. We then introduce several algorithms for this partitioning problem and compare their performance on a set of test matrices.
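
    The baseline against which such partitioners are measured is the naive 1-D block-row split of y = Ax, sketched below with mpi4py and SciPy. Here every rank redundantly holds all of x, so only the result needs gathering; the paper's bipartite-graph partitioners instead balance nonzeros and reduce the communication this sketch sidesteps.

    ```python
    from mpi4py import MPI
    import numpy as np
    from scipy import sparse

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    n = 1000
    # Naive 1-D partition: each rank owns a contiguous block of rows.
    my_rows = np.array_split(np.arange(n), size)[rank]
    A_local = sparse.random(len(my_rows), n, density=0.01,
                            format='csr', random_state=rank)

    x = np.ones(n)                   # replicated input vector
    y_local = A_local @ x            # purely local sparse matrix-vector product
    y = np.concatenate(comm.allgather(y_local))   # assemble the full result
    ```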

  1. A Parallel Processing Algorithm for Remote Sensing Classification

    NASA Technical Reports Server (NTRS)

    Gualtieri, J. Anthony

    2005-01-01

    A current thread in parallel computation is the use of cluster computers created by networking a few to thousands of commodity general-purpose workstation-level computers using the Linux operating system. For example, on the Medusa cluster at NASA/GSFC, this provides supercomputing performance, 130 Gflops (Linpack benchmark), at moderate cost, $370K. However, to be useful for scientific computing in the area of Earth science, issues of ease of programming, access to existing scientific libraries, and portability of existing code need to be considered. In this paper, I address these issues in the context of tools for rendering earth science remote sensing data into useful products. In particular, I focus on a problem that can be decomposed into a set of independent tasks, which on a serial computer would be performed sequentially, but which with a cluster computer can be performed in parallel, giving an obvious speedup. To make the ideas concrete, I consider the problem of classifying hyperspectral imagery where some ground truth is available to train the classifier. In particular I will use the Support Vector Machine (SVM) approach as applied to hyperspectral imagery. The approach will be to introduce notions about parallel computation and then to restrict the development to the SVM problem. Pseudocode (an outline of the computation) will be described and then details specific to the implementation will be given. Then timing results will be reported to show what speedups are possible using parallel computation. The paper will close with a discussion of the results.
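
    The decomposition into independent tasks that the paper exploits on a cluster can be shown in miniature with a process pool on one machine: split the scene into row blocks, classify each block independently, and concatenate. The classifier below is a trivial stand-in; the paper's trained SVM and hyperspectral I/O are omitted.

    ```python
    import numpy as np
    from multiprocessing import Pool

    def classify_chunk(chunk):
        """Stand-in per-pixel classifier; a trained SVM would be applied here."""
        return (chunk.sum(axis=-1) > chunk.shape[-1] / 2).astype(np.uint8)

    if __name__ == '__main__':
        image = np.random.rand(1024, 256, 200)     # (rows, cols, bands)
        chunks = np.array_split(image, 16)         # independent row blocks
        with Pool() as pool:                       # one task per block
            labels = np.concatenate(pool.map(classify_chunk, chunks))
    ```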

  2. Toward a Model Framework of Generalized Parallel Componential Processing of Multi-Symbol Numbers

    ERIC Educational Resources Information Center

    Huber, Stefan; Cornelsen, Sonja; Moeller, Korbinian; Nuerk, Hans-Christoph

    2015-01-01

    In this article, we propose and evaluate a new model framework of parallel componential multi-symbol number processing, generalizing the idea of parallel componential processing of multi-digit numbers to the case of negative numbers by considering the polarity signs similar to single digits. In a first step, we evaluated this account by defining…

  3. Parallelizing flow-accumulation calculations on graphics processing units—From iterative DEM preprocessing algorithm to recursive multiple-flow-direction algorithm

    NASA Astrophysics Data System (ADS)

    Qin, Cheng-Zhi; Zhan, Lijun

    2012-06-01

    As one of the important tasks in digital terrain analysis, the calculation of flow accumulations from gridded digital elevation models (DEMs) usually involves two steps in a real application: (1) using an iterative DEM preprocessing algorithm to remove the depressions and flat areas commonly contained in real DEMs, and (2) using a recursive flow-direction algorithm to calculate the flow accumulation for every cell in the DEM. Because both algorithms are computationally intensive, quick calculation of the flow accumulations from a DEM (especially for a large area) presents a practical challenge to personal computer (PC) users. In recent years, rapid increases in hardware capacity of the graphics processing units (GPUs) provided in modern PCs have made it possible to meet this challenge in a PC environment. Parallel computing on GPUs using a compute-unified-device-architecture (CUDA) programming model has been explored to speed up the execution of the single-flow-direction algorithm (SFD). However, the parallel implementation on a GPU of the multiple-flow-direction (MFD) algorithm, which generally performs better than the SFD algorithm, has not been reported. Moreover, GPU-based parallelization of the DEM preprocessing step in the flow-accumulation calculations has not been addressed. This paper proposes a parallel approach to calculate flow accumulations (including both iterative DEM preprocessing and a recursive MFD algorithm) on a CUDA-compatible GPU. For the parallelization of an MFD algorithm (MFD-md), two different parallelization strategies using a GPU are explored. The first parallelization strategy, which has been used in the existing parallel SFD algorithm on the GPU, suffers from redundant computation. Therefore, we designed a parallelization strategy based on graph theory. The application results show that the proposed parallel approach to calculate flow accumulations on a GPU performs much faster than either sequential algorithms or other parallel GPU
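
    For readers unfamiliar with step (2), the sketch below computes flow accumulation serially for the simpler single-flow-direction case on a flattened grid, processing cells in topological order; the paper's contribution, the GPU parallelization of the preprocessing and of the recursive multiple-flow-direction (MFD-md) case, is not reproduced.

    ```python
    import numpy as np
    from collections import deque

    def flow_accumulation(receiver):
        """receiver[i]: flattened index that cell i drains to, or -1 at outlets."""
        n = len(receiver)
        acc = np.ones(n)                        # every cell contributes itself
        indeg = np.zeros(n, dtype=int)
        for r in receiver:
            if r >= 0:
                indeg[r] += 1
        queue = deque(np.flatnonzero(indeg == 0))   # ridge cells drain first
        while queue:
            i = queue.popleft()
            r = receiver[i]
            if r >= 0:
                acc[r] += acc[i]                # pass accumulated flow downstream
                indeg[r] -= 1
                if indeg[r] == 0:
                    queue.append(r)
        return acc

    # Cells 0 -> 1 -> 2, with cell 2 an outlet:
    print(flow_accumulation(np.array([1, 2, -1])))   # [1. 2. 3.]
    ```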

  4. Studies in optical parallel processing. [All optical and electro-optic approaches

    NASA Technical Reports Server (NTRS)

    Lee, S. H.

    1978-01-01

    Threshold and A/D devices for converting a gray scale image into a binary one were investigated for all-optical and opto-electronic approaches to parallel processing. Integrated optical logic circuits (IOC) and optical parallel logic devices (OPAL) were studied as an approach to processing optical binary signals. In the IOC logic scheme, a single row of an optical image is coupled into the IOC substrate at a time through an array of optical fibers. Parallel processing is carried out on each image element of these rows in the IOC substrate, and the resulting output exits via a second array of optical fibers. The OPAL system for parallel processing, which uses a Fabry-Perot interferometer for image thresholding and analog-to-digital conversion, achieves a higher degree of parallel processing than is possible with IOC.

  5. Parallel architectures for image processing; Proceedings of the Meeting, Santa Clara, CA, Feb. 14, 15, 1990

    SciTech Connect

    Ghosh, J.; Harrison, C.G.

    1990-01-01

    The present conference discusses topics in the fields of VLSI-based and real-time image-processing systems, parallel architectures for image processing, image-processing algorithms, and image processing on the basis of artificial neural networks. Attention is given to a fixed-point VLSI architecture for high-speed image reconstruction, an orthogonal multiprocessor for image processing with neural networks, massively parallel processors in real-time applications, the use of the adiabatic approximation as a tool in image estimation, parallel algorithms for contour-extraction and coding, and a parallel architecture for multidimensional image processing. Also discussed are concurrent image-processing on hypercube multicomputers, neural-network simulation on a reduced-mesh-of-trees organization, and a goal-seeking neural net for recall and recognition.

  6. A strategy for the solution-phase parallel synthesis of N-(pyrrolidinylmethyl)hydroxamic acids.

    PubMed

    Takayanagi, M; Flessner, T; Wong, C H

    2000-06-16

    Both five- and six-membered iminocyclitols have proven to be useful transition-state analogue inhibitors of glycosidases. They also mimic the transition-state sugar moiety of the nucleoside phosphate sugar in glycosyltransferase-catalyzed reactions. Described here is the development of a general strategy toward the parallel synthesis of a five-membered iminocyclitol linked to a hydroxamic acid group designed to mimic the transition state of GDP-fucose complexed with Mn(II) in fucosyltransferase reactions. The iminocyclitol 8 containing a protected hydroxylamine unit was prepared from D-mannitol. The hydroxamic acid moiety was introduced via the reaction of 8 with various acid chlorides. The strategy is generally applicable to the construction of libraries for identification of glycosyltransferase inhibitors.

  7. Control Strategy of a Parallel System Using Both Matrix Converter and Voltage Type Inverter

    NASA Astrophysics Data System (ADS)

    Itoh, Jun-Ichi; Tamura, Hiroshi

    This paper proposes a control strategy for a matrix converter and a voltage type inverter in a parallel system that does not require interconnection reactors. The proposed control strategy divides the operation time between the matrix converter and the voltage type inverter; the operation time of each converter is divided within every carrier cycle. As a result, interconnection reactors are not required and a sinusoidal input current waveform of the matrix converter can be obtained. The total output voltage of the proposed system and the output power division ratio between the matrix converter and the voltage type inverter are controlled by the time division ratio of each converter. Furthermore, the voltage error resulting from the time division control is analyzed and compensated. The availability of the proposed system and the validity of the proposed control method are confirmed by experimental results.

  8. Signal processing applications of massively parallel charge domain computing devices

    NASA Technical Reports Server (NTRS)

    Fijany, Amir (Inventor); Barhen, Jacob (Inventor); Toomarian, Nikzad (Inventor)

    1999-01-01

    The present invention is embodied in a charge coupled device (CCD)/charge injection device (CID) architecture capable of performing a Fourier transform by simultaneous matrix vector multiplication (MVM) operations in respective plural CCD/CID arrays in parallel in O(1) steps. For example, in one embodiment, a first CCD/CID array stores charge packets representing a first matrix operator based upon permutations of a Hartley transform and computes the real part of the Fourier transform of an incoming vector. A second CCD/CID array stores charge packets representing a second matrix operator based upon different permutations of a Hartley transform and computes the imaginary part of the Fourier transform. The incoming vector is applied to the inputs of the two CCD/CID arrays simultaneously, and the real and imaginary parts of the Fourier transform are produced simultaneously in the time required to perform a single MVM operation in a CCD/CID array.
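
    In spirit, the two fixed operators hold the cosine-like and sine-like parts of the transform, so one simultaneous pair of matrix-vector multiplies yields the real and imaginary outputs at once. A NumPy sketch using plain DFT matrices (the patent's permuted Hartley operators are not reproduced):

    ```python
    import numpy as np

    def dft_by_two_mvms(x):
        """Real and imaginary DFT parts as two simultaneous MVM operations."""
        n = len(x)
        k = np.arange(n)
        angles = 2 * np.pi * np.outer(k, k) / n
        C, S = np.cos(angles), np.sin(angles)   # two fixed matrix operators
        return C @ x, -(S @ x)                  # each array computes one part

    x = np.random.rand(8)
    re, im = dft_by_two_mvms(x)
    assert np.allclose(re + 1j * im, np.fft.fft(x))
    ```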

  9. CRBLASTER: a fast parallel-processing program for cosmic ray rejection

    NASA Astrophysics Data System (ADS)

    Mighell, Kenneth J.

    2008-08-01

    Many astronomical image-analysis programs are based on algorithms that can be described as being embarrassingly parallel, where the analysis of one subimage generally does not affect the analysis of another subimage. Yet few parallel-processing astrophysical image-analysis programs exist that can easily take full advantage of today's fast multi-core servers costing a few thousand dollars. A major reason for the shortage of state-of-the-art parallel-processing astrophysical image-analysis codes is that the writing of parallel codes has been perceived to be difficult. I describe a new fast parallel-processing image-analysis program called crblaster which does cosmic ray rejection using van Dokkum's L.A.Cosmic algorithm. crblaster is written in C using the industry-standard Message Passing Interface (MPI) library. Processing a single 800×800 HST WFPC2 image takes 1.87 seconds using 4 processes on an Apple Xserve with two dual-core 3.0-GHz Intel Xeons; the efficiency of the program running with the 4 processors is 82%. The code can be used as a software framework for easy development of parallel-processing image-analysis programs using embarrassingly parallel algorithms; the biggest required modification is the replacement of the core image-processing function with an alternative image-analysis function based on a single-processor algorithm. I describe the design, implementation and performance of the program.

  10. Parallel processing of remotely sensed data: Application to the ATSR-2 instrument

    NASA Astrophysics Data System (ADS)

    Simpson, J.; McIntire, T.; Berg, J.; Tsou, Y.

    2007-01-01

    Massively parallel computational paradigms can mitigate many issues associated with the analysis of large and complex remotely sensed data sets. Recently, the Beowulf cluster has emerged as the most attractive, massively parallel architecture due to its low cost and high performance. Whereas most Beowulf designs have emphasized numerical modeling applications, the Parallel Image Processing Environment (PIPE) specifically addresses the unique requirements of remote sensing applications. Automated parallelization of user-defined analyses is fully supported. A neural network application, applied to Along Track Scanning Radiometer-2 (ATSR-2) data, shows the advantages and performance characteristics of PIPE.

  11. Arts Integration Parallels Between Music and Reading: Process, Product and Affective Response.

    ERIC Educational Resources Information Center

    Merrion, Margaret Dee

    The process of aesthetic education is not limited to the fine arts. Parallels may be identified in the language arts and particularly in the art of creative reading. As in a musical experience, a creative reader will apprehend the content of the literature and couple personal feelings with the events of the reading experience. Parallel brain…

  12. A general parallelization strategy for random path based geostatistical simulation methods

    NASA Astrophysics Data System (ADS)

    Mariethoz, Grégoire

    2010-07-01

    The size of simulation grids used for numerical models has increased by many orders of magnitude in the past years, and this trend is likely to continue. Efficient pixel-based geostatistical simulation algorithms have been developed, but for very large grids and complex spatial models, the computational burden remains heavy. As cluster computers become widely available, using parallel strategies is a natural step for increasing the usable grid size and the complexity of the models. These strategies must profit from the possibilities offered by machines with a large number of processors. On such machines, the bottleneck is often the communication time between processors. We present a strategy distributing grid nodes among all available processors while minimizing communication and latency times. It consists of centralizing the simulation on a master processor that calls the other (slave) processors as if they were functions, each call simulating one node. The key is to decouple the sending and the receiving operations to avoid synchronization. Centralization allows having a conflict-management system ensuring that nodes being simulated simultaneously do not interfere in terms of neighborhood. The strategy is computationally efficient and is versatile enough to be applicable to all random-path-based simulation methods.
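
    A bare-bones version of the strategy, sketched with mpi4py: the master hands out one node per message and receives from whichever worker answers first, so sends and receives are never paired in lockstep. The node list and the stand-in simulate() kernel are invented for illustration, and the paper's neighborhood conflict-management system is omitted.

    ```python
    from mpi4py import MPI

    def simulate(node):
        return node * 0.5            # stand-in for simulating one grid node

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    path = range(1000)               # nodes along the random path (>= #workers)

    if rank == 0:                    # master: calls slaves as if they were functions
        todo, active, results = iter(path), 0, {}
        for w in range(1, size):     # keep every worker busy from the start
            comm.send(next(todo), dest=w)
            active += 1
        while active:                # receive from whoever finishes first
            st = MPI.Status()
            node, value = comm.recv(source=MPI.ANY_SOURCE, status=st)
            results[node] = value
            nxt = next(todo, None)   # None tells that worker to stop
            comm.send(nxt, dest=st.Get_source())
            if nxt is None:
                active -= 1
    else:                            # worker: one node per request
        while True:
            node = comm.recv(source=0)
            if node is None:
                break
            comm.send((node, simulate(node)), dest=0)
    ```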

  13. Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    A modular process that can efficiently solve large scale multidisciplinary problems using massively parallel supercomputers is presented. The process integrates disciplines with diverse physical characteristics by retaining the efficiency of individual disciplines. Computational domain independence of individual disciplines is maintained using a meta programming approach. The process integrates disciplines without affecting the combined performance. Results are demonstrated for large scale aerospace problems on several supercomputers. The super scalability and portability of the approach is demonstrated on several parallel computers.

  14. Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Byun, Chansup; Kwak, Dochan (Technical Monitor)

    2001-01-01

    A modular process that can efficiently solve large scale multidisciplinary problems using massively parallel super computers is presented. The process integrates disciplines with diverse physical characteristics by retaining the efficiency of individual disciplines. Computational domain independence of individual disciplines is maintained using a meta programming approach. The process integrates disciplines without affecting the combined performance. Results are demonstrated for large scale aerospace problems on several supercomputers. The super scalability and portability of the approach is demonstrated on several parallel computers.

  15. Design of a dataway processor for a parallel image signal processing system

    NASA Astrophysics Data System (ADS)

    Nomura, Mitsuru; Fujii, Tetsuro; Ono, Sadayasu

    1995-04-01

    Recently, demands for high-speed signal processing have been increasing, especially in the fields of image data compression, computer graphics, and medical imaging. To achieve sufficient power for real-time image processing, we have been developing parallel signal-processing systems. This paper describes a communication processor called the 'dataway processor', designed for a new scalable parallel signal-processing system. The processor has six high-speed communication links (Dataways), a data-packet routing controller, a RISC CORE, and a DMA controller. Each communication link operates 8 bits in parallel in full-duplex mode at 50 MHz. Moreover, data routing, DMA, and CORE operations are processed in parallel. Therefore, sufficient throughput is available for high-speed digital video signals. The processor is designed in a top-down fashion using a CAD system called 'PARTHENON'. The hardware is fabricated using 0.5-micrometer CMOS technology and comprises about 200 K gates.

  16. A new cascaded control strategy for paralleled line-interactive UPS with LCL filter

    NASA Astrophysics Data System (ADS)

    Zhang, X. Y.; Zhang, X. H.; Li, L.; Luo, F.; Zhang, Y. S.

    2016-08-01

    Traditional uninterruptible power supply (UPS) systems have difficulty meeting output-voltage-quality and grid-side power-quality requirements at the same time, and usually suffer from disadvantages such as multi-stage conversion, complex structure, or harmonic current pollution of the utility grid. A three-phase three-level paralleled line-interactive UPS with an LCL filter is presented in this paper. It can achieve output voltage quality and grid-side power quality control simultaneously with only a single-conversion power stage, but the multi-objective control strategy design is difficult. Based on a detailed analysis of the circuit structure and operation mechanism, a new cascaded control strategy for the power, voltage, and current is proposed. An outer current control loop based on resonant control theory is designed to ensure the grid-side power quality. An inner voltage control loop based on capacitance voltage and capacitance current feedback is designed to ensure the output voltage quality and avoid the resonance peak of the LCL filter. An improved repetitive controller is added to reduce the distortion of the output voltage. The setting of the controller parameters is discussed in detail. A 100 kVA UPS prototype was built and experiments under unbalanced resistive load and nonlinear load were carried out. Theoretical analysis and experimental results show the effectiveness of the control strategy. The paralleled line-interactive UPS can not only maintain a constant, balanced three-phase output voltage, but also provides comprehensive power quality management functions, with balanced three-phase grid active power input, low THD of output voltage and grid current, and reactive power compensation. The UPS is thus a grid-friendly load on the utility.

  17. Parallel Block Structured Adaptive Mesh Refinement on Graphics Processing Units

    SciTech Connect

    Beckingsale, D. A.; Gaudin, W. P.; Hornung, R. D.; Gunney, B. T.; Gamblin, T.; Herdman, J. A.; Jarvis, S. A.

    2014-11-17

    Block-structured adaptive mesh refinement is a technique that can be used when solving partial differential equations to reduce the number of zones necessary to achieve the required accuracy in areas of interest. These areas (shock fronts, material interfaces, etc.) are recursively covered with finer mesh patches that are grouped into a hierarchy of refinement levels. Despite the potential for large savings in computational requirements and memory usage without a corresponding reduction in accuracy, AMR adds overhead in managing the mesh hierarchy, adding complex communication and data movement requirements to a simulation. In this paper, we describe the design and implementation of a native GPU-based AMR library, including: the classes used to manage data on a mesh patch, the routines used for transferring data between GPUs on different nodes, and the data-parallel operators developed to coarsen and refine mesh data. We validate the performance and accuracy of our implementation using three test problems and two architectures: an eight-node cluster, and over four thousand nodes of Oak Ridge National Laboratory’s Titan supercomputer. Our GPU-based AMR hydrodynamics code performs up to 4.87× faster than the CPU-based implementation, and has been scaled to over four thousand GPUs using a combination of MPI and CUDA.

  18. Parallel systems of error processing in the brain.

    PubMed

    Yordanova, Juliana; Falkenstein, Michael; Hohnsbein, Joachim; Kolev, Vasil

    2004-06-01

    Major neurophysiological principles of performance monitoring are not precisely known. It is currently debated in cognitive neuroscience whether an error-detection neural system is involved in behavioral control and adaptation. Such a system should generate error-specific signals, but their existence is questioned by observations that correct and incorrect reactions may elicit similar neuroelectric potentials. A new approach based on a time-frequency decomposition of event-related brain potentials was applied to extract covert sub-components from the classical error-related negativity (Ne) and correct-response-related negativity (Nc) in humans. A unique error-specific sub-component from the delta (1.5-3.5 Hz) frequency band was revealed only for the Ne, which was associated with error detection at the level of overall performance monitoring. A sub-component from the theta frequency band (4-8 Hz) was associated with motor response execution, but this sub-component also differentiated error from correct reactions, indicating error detection at the level of movement monitoring. It is demonstrated that error-specific signals do exist in the brain. More importantly, error detection may occur in multiple functional systems operating in parallel at different levels of behavioral control.

  19. Advantages of Parallel Processing and the Effects of Communications Time

    NASA Technical Reports Server (NTRS)

    Eddy, Wesley M.; Allman, Mark

    2000-01-01

    Many computing tasks involve heavy mathematical calculations, or analyzing large amounts of data. These operations can take a long time to complete using only one computer. Networks such as the Internet provide many computers with the ability to communicate with each other. Parallel or distributed computing takes advantage of these networked computers by arranging them to work together on a problem, thereby reducing the time needed to obtain the solution. The drawback to using a network of computers to solve a problem is the time wasted in communicating between the various hosts. The application of distributed computing techniques to a space environment or to use over a satellite network would therefore be limited by the amount of time needed to send data across the network, which would typically take much longer than on a terrestrial network. This experiment shows how much faster a large job can be performed by adding more computers to the task, what role communications time plays in the total execution time, and the impact a long-delay network has on a distributed computing system.
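
    The trade-off this experiment measures can be captured in a two-term toy model: computation time shrinks with the number of hosts while communication cost grows with it, so adding hosts pays off only up to a crossover point. The constants below are invented; on a long-delay satellite link the communication term would dominate much sooner.

    ```python
    def total_time(n_hosts, work=1000.0, comm_per_host=5.0):
        """Toy model: perfectly divisible work plus per-host communication."""
        return work / n_hosts + comm_per_host * n_hosts

    best = min(range(1, 65), key=total_time)
    print(best, round(total_time(best), 1))   # 14 hosts; beyond that, comm wins
    ```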

  20. Control of automatic processes: A parallel distributed-processing account of the Stroop effect. Technical report

    SciTech Connect

    Cohen, J.D.; Dunbar, K.; McClelland, J.L.

    1989-11-22

    A growing body of evidence suggests that traditional views of automaticity are in need of revision. For example, automaticity has often been treated as an all-or-none phenomenon, and traditional theories have held that automatic processes are independent of attention. Yet recent empirical data suggest that automatic processes are continuous, and furthermore are subject to attentional control. In this paper we present a model of attention which addresses these issues. Using a parallel distributed processing framework we propose that the attributes of automaticity depend upon the strength of a processing pathway and that strength increases with training. Using the Stroop effect as an example, we show how automatic processes are continuous and emerge gradually with practice. Specifically, we present a computational model of the Stroop task which simulates the time course of processing as well as the effects of learning. This was accomplished by combining the cascade mechanism described by McClelland (1979) with the back propagation learning algorithm (Rumelhart, Hinton, & Williams, 1986). The model is able to simulate performance in the standard Stroop task, as well as aspects of performance in variants of this task which manipulate SOA, response set, and degree of practice. In the discussion we contrast our model with other models, and indicate how it relates to many of the central issues in the literature on attention, automaticity, and interference.

  1. An iterative expanding and shrinking process for processor allocation in mixed-parallel workflow scheduling.

    PubMed

    Huang, Kuo-Chan; Wu, Wei-Ya; Wang, Feng-Jian; Liu, Hsiao-Ching; Hung, Chun-Hao

    2016-01-01

    Parallel computation has been widely applied in a variety of large-scale scientific and engineering applications. Many studies indicate that exploiting both task and data parallelism, i.e. mixed-parallel workflows, to solve large computational problems achieves better efficiency than either pure task parallelism or pure data parallelism. Scheduling traditional workflows of pure task parallelism on parallel systems has long been known to be an NP-complete problem. Mixed-parallel workflow scheduling has to deal with the additional challenging issue of processor allocation. In this paper, we explore the processor allocation issue in scheduling mixed-parallel workflows of moldable tasks, called M-tasks, and propose an Iterative Allocation Expanding and Shrinking (IAES) approach. Compared to previous approaches, our IAES has two distinguishing features. The first is allocating more processors to the tasks on allocated critical paths for effectively reducing the makespan of workflow execution. The second is allowing the processor allocation of an M-task to shrink during the iterative procedure, resulting in a more flexible and effective process for finding better allocations. The proposed IAES approach has been evaluated with a series of simulation experiments and compared to several well-known previous methods, including CPR, CPA, MCPA, and MCPA2. The experimental results indicate that our IAES approach outperforms those previous methods significantly in most situations, especially when nodes of the same layer in a workflow might have unequal workloads.

  2. Parallel plan execution with self-processing networks

    NASA Technical Reports Server (NTRS)

    Dautrechy, C. Lynne; Reggia, James A.

    1989-01-01

    A critical issue for space operations is how to develop and apply advanced automation techniques to reduce the cost and complexity of working in space. In this context, it is important to examine how recent advances in self-processing networks can be applied for planning and scheduling tasks. For this reason, the feasibility of applying self-processing network models to a variety of planning and control problems relevant to spacecraft activities is being explored. Goals are to demonstrate that self-processing methods are applicable to these problems, and that MIRRORS/II, a general purpose software environment for implementing self-processing models, is sufficiently robust to support development of a wide range of application prototypes. Using MIRRORS/II and marker passing modelling techniques, a model of the execution of a Spaceworld plan was implemented. This is a simplified model of the Voyager spacecraft which photographed Jupiter, Saturn, and their satellites. It is shown that plan execution, a task usually solved using traditional artificial intelligence (AI) techniques, can be accomplished using a self-processing network. The fact that self-processing networks were applied to other space-related tasks, in addition to the one discussed here, demonstrates the general applicability of this approach to planning and control problems relevant to spacecraft activities. It is also demonstrated that MIRRORS/II is a powerful environment for the development and evaluation of self-processing systems.

  3. A Distributed Parallel Genetic Algorithm of Placement Strategy for Virtual Machines Deployment on Cloud Platform

    PubMed Central

    Dong, Yu-Shuang; Xu, Gao-Chao; Fu, Xiao-Dong

    2014-01-01

    The cloud platform provides various services to users. More and more cloud centers provide infrastructure as the main way of operating. To improve the utilization rate of the cloud center and to decrease the operating cost, the cloud center provides services according to the requirements of users by sharding the resources with virtualization. Considering both QoS for users and cost saving for cloud computing providers, we try to maximize performance and minimize energy cost as well. In this paper, we propose a distributed parallel genetic algorithm (DPGA) of placement strategy for virtual machines deployment on the cloud platform. In the first stage, it executes the genetic algorithm in parallel, distributed across several selected physical hosts. It then continues to execute the genetic algorithm of the second stage with solutions obtained from the first stage as the initial population. The solution calculated by the genetic algorithm of the second stage is the optimal one of the proposed approach. The experimental results show that the proposed placement strategy of VM deployment can ensure QoS for users and is more effective and more energy efficient than other placement strategies on the cloud platform. PMID:25097872
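
    In miniature, the two-stage scheme reads as follows: several independent GA runs (stage one) supply the seed population for a final run (stage two). The sketch below uses a one-dimensional toy objective and a local process pool in place of distributed hosts; the paper's VM-placement encoding and energy model are omitted.

    ```python
    import random
    from multiprocessing import Pool

    def fitness(x):
        return -abs(x - 42)          # toy objective: get close to 42

    def run_ga(pop, gens=50):
        """Truncation-selection GA: keep the best half, refill by mutation."""
        for _ in range(gens):
            pop = sorted(pop, key=fitness, reverse=True)[:len(pop) // 2]
            pop += [p + random.uniform(-1, 1) for p in pop]
        return max(pop, key=fitness)

    if __name__ == '__main__':
        # Stage 1: independent GA runs on several "hosts" (here, processes).
        islands = [[random.uniform(0, 100) for _ in range(20)] for _ in range(4)]
        with Pool(4) as pool:
            winners = pool.map(run_ga, islands)
        # Stage 2: a final GA seeded with the stage-1 solutions.
        print(run_ga(winners * 5))
    ```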

  4. A distributed parallel genetic algorithm of placement strategy for virtual machines deployment on cloud platform.

    PubMed

    Dong, Yu-Shuang; Xu, Gao-Chao; Fu, Xiao-Dong

    2014-01-01

    The cloud platform provides various services to users. More and more cloud centers provide infrastructure as the main way of operating. To improve the utilization rate of the cloud center and to decrease the operating cost, the cloud center provides services according to the requirements of users by sharding the resources with virtualization. Considering both QoS for users and cost saving for cloud computing providers, we try to maximize performance and minimize energy cost as well. In this paper, we propose a distributed parallel genetic algorithm (DPGA) of placement strategy for virtual machines deployment on the cloud platform. In the first stage, it executes the genetic algorithm in parallel, distributed across several selected physical hosts. It then continues to execute the genetic algorithm of the second stage with solutions obtained from the first stage as the initial population. The solution calculated by the genetic algorithm of the second stage is the optimal one of the proposed approach. The experimental results show that the proposed placement strategy of VM deployment can ensure QoS for users and is more effective and more energy efficient than other placement strategies on the cloud platform.

  5. Parallel Processing Response Times and Experimental Determination of the Stopping Rule

    PubMed

    Townsend; Colonius

    1997-12-01

    It was previously demonstrated that virtually all reasonable exhaustive serial models, and a more constrained set of exhaustive parallel models, cannot predict critical effects associated with self-terminating models. The present investigation greatly generalizes the parallel class of models covered by similar "impossibility" theorems. Specifically, we prove that if an exhaustive parallel model is not super capacity, and if targets are processed at least as fast as non-targets, then it cannot predict such (self-terminating) effects. Such effects are ubiquitous in the experimental literature, offering strong confirmation for self-terminating processing. Copyright 1997 Academic Press.

  6. Distinct lateral inhibitory circuits drive parallel processing of sensory information in the mammalian olfactory bulb

    PubMed Central

    Geramita, Matthew A; Burton, Shawn D; Urban, Nathan N

    2016-01-01

    Splitting sensory information into parallel pathways is a common strategy in sensory systems. Yet, how circuits in these parallel pathways are composed to maintain or even enhance the encoding of specific stimulus features is poorly understood. Here, we have investigated the parallel pathways formed by mitral and tufted cells of the olfactory system in mice and characterized the emergence of feature selectivity in these cell types via distinct lateral inhibitory circuits. We find differences in activity-dependent lateral inhibition between mitral and tufted cells that likely reflect newly described differences in the activation of deep and superficial granule cells. Simulations show that these circuit-level differences allow mitral and tufted cells to best discriminate odors in separate concentration ranges, indicating that segregating information about different ranges of stimulus intensity may be an important function of these parallel sensory pathways. DOI: http://dx.doi.org/10.7554/eLife.16039.001 PMID:27351103

  7. Distinct lateral inhibitory circuits drive parallel processing of sensory information in the mammalian olfactory bulb.

    PubMed

    Geramita, Matthew A; Burton, Shawn D; Urban, Nathan N

    2016-06-28

    Splitting sensory information into parallel pathways is a common strategy in sensory systems. Yet, how circuits in these parallel pathways are composed to maintain or even enhance the encoding of specific stimulus features is poorly understood. Here, we have investigated the parallel pathways formed by mitral and tufted cells of the olfactory system in mice and characterized the emergence of feature selectivity in these cell types via distinct lateral inhibitory circuits. We find differences in activity-dependent lateral inhibition between mitral and tufted cells that likely reflect newly described differences in the activation of deep and superficial granule cells. Simulations show that these circuit-level differences allow mitral and tufted cells to best discriminate odors in separate concentration ranges, indicating that segregating information about different ranges of stimulus intensity may be an important function of these parallel sensory pathways.

  8. MultiScheme: A Parallel Processing System Based on MIT (Massachusetts Institute of Technology) Scheme.

    DTIC Science & Technology

    1987-09-01

    MultiScheme is sufficiently robust, and supports a sufficiently wide range of parallel-processing applications, that it has become the base for a commercial product, the Butterfly Lisp System, developed with the BBN Butterfly Lisp group.

  9. Efficient parallel video processing techniques on GPU: from framework to implementation.

    PubMed

    Su, Huayou; Wen, Mei; Wu, Nan; Ren, Ju; Zhang, Chunyuan

    2014-01-01

    Through reorganizing the execution order and optimizing the data structure, we proposed an efficient parallel framework for the H.264/AVC encoder based on a massively parallel architecture. We implemented the proposed framework with CUDA on NVIDIA's GPU. Not only are the compute-intensive components of the H.264 encoder parallelized, but the control-intensive components, such as CAVLC and the deblocking filter, are realized effectively as well. In addition, we proposed several optimization methods, including multiresolution multiwindow motion estimation, a multilevel parallel strategy to enhance the parallelism of intracoding as much as possible, component-based parallel CAVLC, and a direction-priority deblocking filter. More than 96% of the workload of the H.264 encoder is offloaded to the GPU. Experimental results show that the parallel implementation achieves a speedup of 20x over the serial program and satisfies the requirement of real-time HD encoding at 30 fps. The loss of PSNR is from 0.14 dB to 0.77 dB at the same bitrate. Through analysis of the kernels, we found that the speedup ratios of the compute-intensive algorithms are proportional to the computational power of the GPU. The performance of the control-intensive parts (CAVLC), however, is closely tied to memory bandwidth, which gives an insight for new architecture design.

  10. Efficient Parallel Video Processing Techniques on GPU: From Framework to Implementation

    PubMed Central

    Su, Huayou; Wen, Mei; Wu, Nan; Ren, Ju; Zhang, Chunyuan

    2014-01-01

    Through reorganizing the execution order and optimizing the data structure, we proposed an efficient parallel framework for the H.264/AVC encoder based on a massively parallel architecture. We implemented the proposed framework with CUDA on NVIDIA's GPU. Not only are the compute-intensive components of the H.264 encoder parallelized, but the control-intensive components, such as CAVLC and the deblocking filter, are realized effectively as well. In addition, we proposed several optimization methods, including multiresolution multiwindow motion estimation, a multilevel parallel strategy to enhance the parallelism of intracoding as much as possible, component-based parallel CAVLC, and a direction-priority deblocking filter. More than 96% of the workload of the H.264 encoder is offloaded to the GPU. Experimental results show that the parallel implementation achieves a speedup of 20x over the serial program and satisfies the requirement of real-time HD encoding at 30 fps. The loss of PSNR is from 0.14 dB to 0.77 dB at the same bitrate. Through analysis of the kernels, we found that the speedup ratios of the compute-intensive algorithms are proportional to the computational power of the GPU. The performance of the control-intensive parts (CAVLC), however, is closely tied to memory bandwidth, which gives an insight for new architecture design. PMID:24757432
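
    The dependency pattern that makes filters like deblocking hard to parallelize can be illustrated with a wavefront schedule. The C/OpenMP sketch below is a generic illustration of that idea (macroblock (i,j) waiting on (i-1,j) and (i,j-1)), not the paper's direction-priority algorithm: every macroblock on an anti-diagonal is independent, so each diagonal is processed as one parallel step.

      /* A minimal, hypothetical sketch of the wavefront idea behind parallel
       * deblocking: macroblock (i,j) depends on (i-1,j) and (i,j-1), so all
       * macroblocks on one anti-diagonal are independent and can be filtered
       * in parallel. Compile with -fopenmp. */
      #include <stdio.h>

      #define ROWS 4
      #define COLS 6

      static void filter_mb(int i, int j, int done[ROWS][COLS]) {
          done[i][j] = 1;   /* stand-in for the real deblocking work */
      }

      int main(void) {
          int done[ROWS][COLS] = {{0}};
          for (int d = 0; d <= ROWS + COLS - 2; d++) {   /* anti-diagonals */
              #pragma omp parallel for
              for (int i = 0; i < ROWS; i++) {
                  int j = d - i;
                  if (j >= 0 && j < COLS)
                      filter_mb(i, j, done);
              }
          }
          int n = 0;   /* verify every macroblock was processed exactly once */
          for (int i = 0; i < ROWS; i++)
              for (int j = 0; j < COLS; j++) n += done[i][j];
          printf("filtered %d of %d macroblocks\n", n, ROWS * COLS);
          return 0;
      }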

  11. Probabilistic Modeling of Tephra Dispersion using Parallel Processing

    NASA Astrophysics Data System (ADS)

    Hincks, T.; Bonadonna, C.; Connor, L.; Connor, C.; Sparks, S.

    2002-12-01

    Numerical models of tephra accumulation are important tools in assessing hazards of volcanic eruptions. Such tools can be used far in advance of future eruptions to calculate possible hazards as conditional probabilities. For example, given that a volcanic eruption occurs, what is the expected range of tephra deposition in a specific location or across a region? An empirical model is presented that uses physical characteristics (e.g., volume, column height, particle size distribution) of a volcanic eruption to calculate expected tephra accumulation at geographic locations distant from the vent. This model results from the combination of the Connor et al. (2001) and Bonadonna et al. (1998, 2002) numerical approaches and is based on application of the diffusion-advection equation using a stratified atmosphere and particle fall velocities that account for particle shape, density, and variation in Reynolds number along the path of descent. The distribution of particles in the eruption column is a major source of uncertainty in the estimation of tephra hazards. We adopt an approach in which several models of the volcanic column may be used and the impact of these various source-term models on the hazard estimated. Cast probabilistically, this model can use characteristics of historical eruptions, or data from analogous eruptions, to predict the expected tephra deposition from future eruptions. Application of such a model for computing a large number of events over a grid of many points is computationally expensive. In fact, the utility of the model for stochastic simulations of volcanic eruptions was limited by long execution time. To address this concern, we created a parallel version in C and MPI, a message passing interface, to run on a Beowulf cluster, a private network of reasonably high performance computers. We have discovered that grid or input decomposition and self-scheduling techniques lead to essentially linear speed-up in the code. This means that the code is readily
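
    The self-scheduling mentioned in the abstract is the classic master/worker pattern. A minimal C/MPI sketch of it follows; the accumulate() payload, tags, and counts are hypothetical stand-ins for the per-grid-point tephra computation. Because the master hands out one grid point at a time, faster workers naturally receive more work, which is what produces the near-linear speedup reported.

      /* Self-scheduling (master/worker) sketch in C with MPI: rank 0 hands
       * out grid-point indices one at a time; workers compute and return a
       * result. accumulate() is a hypothetical stand-in for the real work. */
      #include <mpi.h>
      #include <stdio.h>

      #define NPOINTS 1000
      #define TAG_WORK 1
      #define TAG_STOP 2

      static double accumulate(int point) { return point * 0.001; /* stand-in */ }

      int main(int argc, char **argv) {
          int rank, size;
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);
          if (rank == 0) {                      /* master: self-scheduling loop */
              int next = 0, active = 0;
              double result;
              MPI_Status st;
              for (int w = 1; w < size; w++) {  /* prime every worker */
                  if (next < NPOINTS) {
                      MPI_Send(&next, 1, MPI_INT, w, TAG_WORK, MPI_COMM_WORLD);
                      next++, active++;
                  } else {
                      MPI_Send(&next, 1, MPI_INT, w, TAG_STOP, MPI_COMM_WORLD);
                  }
              }
              while (active > 0) {
                  MPI_Recv(&result, 1, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG,
                           MPI_COMM_WORLD, &st);
                  /* a real code would add `result` into the hazard grid here */
                  if (next < NPOINTS) {
                      MPI_Send(&next, 1, MPI_INT, st.MPI_SOURCE, TAG_WORK,
                               MPI_COMM_WORLD);
                      next++;
                  } else {
                      MPI_Send(&next, 1, MPI_INT, st.MPI_SOURCE, TAG_STOP,
                               MPI_COMM_WORLD);
                      active--;
                  }
              }
          } else {                              /* worker */
              for (;;) {
                  int point;
                  MPI_Status st;
                  MPI_Recv(&point, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &st);
                  if (st.MPI_TAG == TAG_STOP) break;
                  double r = accumulate(point);
                  MPI_Send(&r, 1, MPI_DOUBLE, 0, TAG_WORK, MPI_COMM_WORLD);
              }
          }
          MPI_Finalize();
          return 0;
      }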

  12. Parallel Conjugate Gradient: Effects of Ordering Strategies, Programming Paradigms, and Architectural Platforms

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Heber, Gerd; Biswas, Rupak

    2000-01-01

    The Conjugate Gradient (CG) algorithm is perhaps the best-known iterative technique to solve sparse linear systems that are symmetric and positive definite. A sparse matrix-vector multiply (SPMV) usually accounts for most of the floating-point operations within a CG iteration. In this paper, we investigate the effects of various ordering and partitioning strategies on the performance of parallel CG and SPMV using different programming paradigms and architectures. Results show that for this class of applications, ordering significantly improves overall performance, that cache reuse may be more important than reducing communication, and that it is possible to achieve message passing performance using shared memory constructs through careful data ordering and distribution. However, a multi-threaded implementation of CG on the Tera MTA does not require special ordering or partitioning to obtain high efficiency and scalability.

  13. Parallel conjugate gradient: effects of ordering strategies, programming paradigms, and architectural platforms

    SciTech Connect

    Oliker, L.; Li, X.; Heber, G.; Biswas, R.

    2000-05-01

    The Conjugate Gradient (CG) algorithm is perhaps the best-known iterative technique to solve sparse linear systems that are symmetric and positive definite. A sparse matrix-vector multiply (SPMV) usually accounts for most of the floating-point operations within a CG iteration. In this paper, we investigate the effects of various ordering and partitioning strategies on the performance of parallel CG and SPMV using different programming paradigms and architectures. Results show that for this class of applications, ordering significantly improves overall performance, that cache reuse may be more important than reducing communication, and that it is possible to achieve message passing performance using shared memory constructs through careful data ordering and distribution. However, a multithreaded implementation of CG on the Tera MTA does not require special ordering or partitioning to obtain high efficiency and scalability.
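
    The SPMV kernel both versions of this paper center on is small enough to show in full. The C sketch below uses the common compressed sparse row (CSR) layout with a hard-coded symmetric positive definite example; the ordering strategies discussed above exist precisely to make the indirect accesses x[col[k]] cache-friendly.

      /* The SPMV kernel at the heart of CG, in compressed sparse row (CSR)
       * form. Row and column ordering determine the access pattern of x[],
       * which is why reordering matters for cache reuse. */
      #include <stdio.h>

      static void spmv_csr(int n, const int *rowptr, const int *col,
                           const double *val, const double *x, double *y) {
          for (int i = 0; i < n; i++) {        /* rows are independent: this  */
              double sum = 0.0;                /* loop is the natural one to  */
              for (int k = rowptr[i]; k < rowptr[i + 1]; k++)   /* parallelize */
                  sum += val[k] * x[col[k]];
              y[i] = sum;
          }
      }

      int main(void) {
          /* [ 4 1 0 ]
             [ 1 3 0 ]   symmetric positive definite example
             [ 0 0 2 ] */
          int rowptr[] = {0, 2, 4, 5};
          int col[]    = {0, 1, 0, 1, 2};
          double val[] = {4, 1, 1, 3, 2};
          double x[]   = {1, 2, 3}, y[3];
          spmv_csr(3, rowptr, col, val, x, y);
          printf("y = %g %g %g\n", y[0], y[1], y[2]);  /* 6 7 6 */
          return 0;
      }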

  14. Parallel computing for simultaneous iterative tomographic imaging by graphics processing units

    NASA Astrophysics Data System (ADS)

    Bello-Maldonado, Pedro D.; López, Ricardo; Rogers, Colleen; Jin, Yuanwei; Lu, Enyue

    2016-05-01

    In this paper, we address the problem of accelerating inversion algorithms for nonlinear acoustic tomographic imaging by parallel computing on graphics processing units (GPUs). Nonlinear inversion algorithms for tomographic imaging often rely on iterative algorithms for solving an inverse problem and are thus computationally intensive. We study the simultaneous iterative reconstruction technique (SIRT) for the multiple-input-multiple-output (MIMO) tomography algorithm, which enables parallel computation over the grid points as well as parallel execution of multiple source excitations. Using GPUs and the Compute Unified Device Architecture (CUDA) programming model, an overall improvement of 26.33x was achieved when combining both approaches, compared with the sequential algorithms. Furthermore, we propose an adaptive iterative relaxation factor and the use of non-uniform weights to improve the overall convergence of the algorithm. Using these techniques, fast computations can be performed in parallel without loss of image quality during the reconstruction process.
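
    What makes SIRT GPU-friendly is that every unknown is updated from all residuals simultaneously. The serial C sketch below shows one common flavor of the iteration on a tiny consistent system; the fixed relaxation factor lambda and uniform weights are simplifying assumptions standing in for the paper's adaptive factor and non-uniform weights.

      /* A serial sketch of one flavor of SIRT for Ax ~= b: all grid points
       * are updated from all residuals at once, which is what maps well to
       * one GPU thread per grid point. */
      #include <stdio.h>

      #define M 3  /* measurements */
      #define N 2  /* grid points  */

      int main(void) {
          double A[M][N] = {{1, 1}, {1, 2}, {2, 1}};
          double b[M] = {3, 5, 4};               /* consistent with x = (1,2) */
          double x[N] = {0, 0}, lambda = 0.1;    /* fixed relaxation factor   */
          for (int it = 0; it < 500; it++) {
              double r[M];
              for (int i = 0; i < M; i++) {      /* residuals r = b - Ax */
                  r[i] = b[i];
                  for (int j = 0; j < N; j++) r[i] -= A[i][j] * x[j];
              }
              for (int j = 0; j < N; j++) {      /* simultaneous update of all x_j */
                  double corr = 0.0, norm = 0.0;
                  for (int i = 0; i < M; i++) {
                      corr += A[i][j] * r[i];
                      norm += A[i][j] * A[i][j];
                  }
                  x[j] += lambda * corr / norm;
              }
          }
          printf("x = (%.3f, %.3f)\n", x[0], x[1]);  /* approaches (1, 2) */
          return 0;
      }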

  15. Connectionism, parallel constraint satisfaction processes, and gestalt principles: (re) introducing cognitive dynamics to social psychology.

    PubMed

    Read, S J; Vanman, E J; Miller, L C

    1997-01-01

    We argue that recent work in connectionist modeling, in particular the parallel constraint satisfaction processes that are central to many of these models, has great importance for understanding issues of both historical and current concern for social psychologists. We first provide a brief description of connectionist modeling, with particular emphasis on parallel constraint satisfaction processes. Second, we examine the tremendous similarities between parallel constraint satisfaction processes and the Gestalt principles that were the foundation for much of modern social psychology. We propose that parallel constraint satisfaction processes provide a computational implementation of the principles of Gestalt psychology that were central to the work of such seminal social psychologists as Asch, Festinger, Heider, and Lewin. Third, we then describe how parallel constraint satisfaction processes have been applied to three areas that were key to the beginnings of modern social psychology and remain central today: impression formation and causal reasoning, cognitive consistency (balance and cognitive dissonance), and goal-directed behavior. We conclude by discussing implications of parallel constraint satisfaction principles for a number of broader issues in social psychology, such as the dynamics of social thought and the integration of social information within the narrow time frame of social interaction.
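
    A parallel constraint satisfaction network of the kind described here can be sketched in a few lines: units update simultaneously from weighted neighbors until the activation pattern settles. The C example below is a generic illustration with made-up weights (two mutually supporting beliefs versus one that conflicts with both), not a model from the article.

      /* A tiny, hypothetical parallel-constraint-satisfaction sketch: units
       * linked by positive (consistent) and negative (inconsistent) weights
       * settle by repeated simultaneous updates. */
      #include <stdio.h>
      #include <math.h>

      #define N 3   /* e.g., three beliefs that constrain one another */

      int main(void) {
          /* units 0 and 1 support each other; unit 2 conflicts with both */
          double w[N][N] = {{0, 1, -1}, {1, 0, -1}, {-1, -1, 0}};
          double a[N] = {0.1, 0.0, 0.2};        /* initial evidence */
          for (int t = 0; t < 50; t++) {
              double next[N];
              for (int i = 0; i < N; i++) {     /* all units update in parallel */
                  double net = 0.0;
                  for (int j = 0; j < N; j++) net += w[i][j] * a[j];
                  next[i] = tanh(net);          /* squash into (-1, 1) */
              }
              for (int i = 0; i < N; i++) a[i] = next[i];
          }
          printf("settled activations: %.2f %.2f %.2f\n", a[0], a[1], a[2]);
          return 0;
      }

    Running it, the two consistent units settle together at one extreme while the conflicting unit settles at the other, the relaxation behavior the article links to Gestalt-style coherence.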

  16. Dynamic CT perfusion image data compression for efficient parallel processing.

    PubMed

    Barros, Renan Sales; Olabarriaga, Silvia Delgado; Borst, Jordi; van Walderveen, Marianne A A; Posthuma, Jorrit S; Streekstra, Geert J; van Herk, Marcel; Majoie, Charles B L M; Marquering, Henk A

    2016-03-01

    The increasing size of medical imaging data, in particular time series such as CT perfusion (CTP), requires new and fast approaches to deliver timely results for acute care. Cloud architectures based on graphics processing units (GPUs) can provide the processing capacity required for delivering fast results. However, the size of CTP datasets makes transfers to cloud infrastructures time-consuming and therefore not suitable in acute situations. To reduce this transfer time, this work proposes a fast and lossless compression algorithm for CTP data. The algorithm exploits redundancies in the temporal dimension and keeps random read-only access to the image elements directly from the compressed data on the GPU. To the best of our knowledge, this is the first work to present a GPU-ready method for medical image compression with random access to the image elements from the compressed data.
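
    The two requirements named in the abstract, exploiting temporal redundancy and keeping random access from the compressed data, pull in opposite directions; fixed-width temporal deltas are one simple way to get both. The C sketch below is a hypothetical illustration of that trade-off (a full first sample plus 8-bit differences per voxel), not the authors' algorithm: any voxel's time series can be decoded without touching other voxels, which is the property a GPU kernel needs.

      /* Hypothetical illustration of lossless temporal-delta coding with
       * per-voxel random access: a full first sample plus fixed-width 8-bit
       * deltas. Real CTP compressors are more elaborate; sizes are toy. */
      #include <stdio.h>
      #include <stdint.h>

      #define T 5   /* time points per voxel */

      typedef struct {
          int16_t base;            /* full-precision first sample */
          int8_t  delta[T - 1];    /* small temporal differences  */
      } VoxelSeries;

      static void compress(const int16_t *s, VoxelSeries *c) {
          c->base = s[0];
          for (int t = 1; t < T; t++)
              c->delta[t - 1] = (int8_t)(s[t] - s[t - 1]);  /* assumes |diff| < 128 */
      }

      static int16_t sample(const VoxelSeries *c, int t) {  /* per-voxel access */
          int16_t v = c->base;
          for (int k = 0; k < t; k++) v += c->delta[k];
          return v;
      }

      int main(void) {
          int16_t series[T] = {1024, 1030, 1041, 1038, 1036};  /* HU over time */
          VoxelSeries c;
          compress(series, &c);
          printf("series[3] decoded from compressed data: %d\n", sample(&c, 3));
          return 0;
      }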

  17. Parallel Distributed Processing: Implications for Cognition and Development

    DTIC Science & Technology

    1988-07-11

    ...cope with these kinds of compensation relations between variables. [Figure 4: balance beam of the kind first used by Inhelder and Piaget (1958).] The task developed by Inhelder and Piaget (1958), the so-called balance-beam task, is illustrated in Figure 4. In this task, the child is shown a balance beam with pegs... Piaget stressed the continuity of the accommodation process, in spite of the overtly stage-like character of development, though he never gave a

  18. Parallel Processing Method for Airborne Laser Scanning Data Using a PC Cluster and a Virtual Grid.

    PubMed

    Han, Soo Hee; Heo, Joon; Sohn, Hong Gyoo; Yu, Kiyun

    2009-01-01

    In this study, a parallel processing method using a PC cluster and a virtual grid is proposed for the fast processing of enormous amounts of airborne laser scanning (ALS) data. The method creates a raster digital surface model (DSM) by interpolating point data with inverse distance weighting (IDW), and produces a digital terrain model (DTM) by local minimum filtering of the DSM. To make a consistent comparison of performance between sequential and parallel processing approaches, the means of dealing with boundary data and of selecting interpolation centers were controlled for each processing node in the parallel approach. To test the speedup, efficiency and linearity of the proposed algorithm, actual ALS data up to 134 million points were processed with a PC cluster consisting of one master node and eight slave nodes. The results showed that parallel processing provides better performance when the computational overhead, the number of processors, and the data size become large. It was verified that the proposed algorithm is a linear time operation and that the products obtained by parallel processing are identical to those produced by sequential processing.

  19. Parallel Processing Method for Airborne Laser Scanning Data Using a PC Cluster and a Virtual Grid

    PubMed Central

    Han, Soo Hee; Heo, Joon; Sohn, Hong Gyoo; Yu, Kiyun

    2009-01-01

    In this study, a parallel processing method using a PC cluster and a virtual grid is proposed for the fast processing of enormous amounts of airborne laser scanning (ALS) data. The method creates a raster digital surface model (DSM) by interpolating point data with inverse distance weighting (IDW), and produces a digital terrain model (DTM) by local minimum filtering of the DSM. To make a consistent comparison of performance between sequential and parallel processing approaches, the means of dealing with boundary data and of selecting interpolation centers were controlled for each processing node in the parallel approach. To test the speedup, efficiency and linearity of the proposed algorithm, actual ALS data up to 134 million points were processed with a PC cluster consisting of one master node and eight slave nodes. The results showed that parallel processing provides better performance when the computational overhead, the number of processors, and the data size become large. It was verified that the proposed algorithm is a linear time operation and that the products obtained by parallel processing are identical to those produced by sequential processing. PMID:22574032
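
    The gridding step at the heart of this method is easy to show. The C sketch below computes IDW values for a few DSM cells from toy points with inverse-square weights; per-cell independence is what lets the virtual grid be split across the PC cluster, and the boundary handling the authors control for is omitted.

      /* A serial sketch of the DSM gridding step: each cell gets an
       * inverse-distance-weighted (IDW) average of nearby points. Toy data,
       * power-2 weights; every cell is independent, hence parallelizable. */
      #include <stdio.h>

      typedef struct { double x, y, z; } Point;

      static double idw(const Point *pts, int n, double gx, double gy) {
          double num = 0.0, den = 0.0;
          for (int i = 0; i < n; i++) {
              double dx = pts[i].x - gx, dy = pts[i].y - gy;
              double d2 = dx * dx + dy * dy;
              if (d2 < 1e-12) return pts[i].z;   /* point exactly on the node */
              num += pts[i].z / d2;              /* weight = 1 / distance^2   */
              den += 1.0 / d2;
          }
          return num / den;
      }

      int main(void) {
          Point pts[] = {{0, 0, 10}, {1, 0, 12}, {0, 1, 11}, {1, 1, 14}};
          for (int gy = 0; gy <= 1; gy++)        /* a 2x2 corner of the DSM */
              for (int gx = 0; gx <= 1; gx++)
                  printf("DSM(%d,%d) = %.2f\n", gx, gy,
                         idw(pts, 4, (double)gx, (double)gy));
          return 0;
      }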

  20. Climate systems modeling on massively parallel processing computers at Lawrence Livermore National Laboratory

    SciTech Connect

    Wehner, W.F.; Mirin, A.A.; Bolstad, J.H.

    1996-09-01

    A comprehensive climate system model is under development at Lawrence Livermore National Laboratory. The basis for this model is a consistent coupling of multiple complex subsystem models, each describing a major component of the Earth's climate. Among these are general circulation models of the atmosphere and ocean, a dynamic and thermodynamic sea ice model, and models of the chemical processes occurring in the air, sea water, and near-surface land. The computational resources necessary to carry out simulations at adequate spatial resolutions for durations of climatic time scales exceed those currently available. Distributed memory massively parallel processing (MPP) computers promise to affordably scale to the computational rates required by directing large numbers of relatively inexpensive processors onto a single problem. We have developed a suite of routines designed to exploit current generation MPP architectures via domain and functional decomposition strategies. These message passing techniques have been implemented in each of the component models and in their coupling interfaces. Production runs of the atmospheric and oceanic components performed on the National Environmental Supercomputing Center (NESC) Cray T3D are described.
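
    The domain-decomposition message passing described here reduces, at its core, to each processor exchanging boundary ("halo") data with its neighbors every step. The C/MPI sketch below shows the pattern for a 1-D slab decomposition with a placeholder field and smoothing step; real atmosphere and ocean models do the same in two or three dimensions.

      /* Minimal 1-D domain-decomposition halo exchange: each rank owns a slab
       * of a field and trades one-cell boundaries with its neighbors. The
       * field values and the smoothing update are placeholders, not physics. */
      #include <mpi.h>
      #include <stdio.h>

      #define LOCAL 8   /* interior cells per rank */

      int main(int argc, char **argv) {
          int rank, size;
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);
          double u[LOCAL + 2];                   /* [0] and [LOCAL+1] are halos */
          for (int i = 1; i <= LOCAL; i++) u[i] = rank;  /* placeholder field */
          u[0] = u[LOCAL + 1] = 0.0;
          int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
          int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;
          /* exchange halos with both neighbors (no-ops at the domain edges) */
          MPI_Sendrecv(&u[1], 1, MPI_DOUBLE, left, 0,
                       &u[LOCAL + 1], 1, MPI_DOUBLE, right, 0,
                       MPI_COMM_WORLD, MPI_STATUS_IGNORE);
          MPI_Sendrecv(&u[LOCAL], 1, MPI_DOUBLE, right, 1,
                       &u[0], 1, MPI_DOUBLE, left, 1,
                       MPI_COMM_WORLD, MPI_STATUS_IGNORE);
          /* one relaxation step over the interior, using the fresh halos */
          double next[LOCAL + 2];
          for (int i = 1; i <= LOCAL; i++)
              next[i] = (u[i - 1] + u[i] + u[i + 1]) / 3.0;
          printf("rank %d: first interior cell after one step = %.3f\n",
                 rank, next[1]);
          MPI_Finalize();
          return 0;
      }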

  1. Sculpting in cyberspace: Parallel processing the development of new software

    NASA Technical Reports Server (NTRS)

    Fisher, Rob

    1993-01-01

    Stimulating creativity in problem solving, particularly where software development is involved, is applicable to many disciplines. Metaphorical thinking keeps the problem in focus but in a different light, jarring people out of their mental ruts and sparking fresh insights. It forces the mind to stretch to find patterns between dissimilar concepts, in the hope of discovering unusual ideas in odd associations (Technology Review January 1993, p. 37). With a background in Engineering and Visual Design from MIT, I have for the past 30 years pursued a career as a sculptor of interdisciplinary monumental artworks that bridge the fields of science, engineering and art. Since 1979, I have pioneered the application of computer simulation to solve the complex problems associated with these projects. A recent project for the roof of the Carnegie Science Center in Pittsburgh made particular use of the metaphoric creativity technique described above. The problem-solving process led to the creation of hybrid software combining scientific, architectural and engineering visualization techniques. David Steich, a Doctoral Candidate in Electrical Engineering at Penn State, was commissioned to develop special software that enabled me to create innovative free-form sculpture. This paper explores the process of inventing the software through a detailed analysis of the interaction between an artist and a computer programmer.

  2. The role of parallelism in the real-time processing of anaphora

    PubMed Central

    Poirier, Josée; Walenski, Matthew; Shapiro, Lewis P.

    2012-01-01

    Parallelism effects refer to the facilitated processing of a target structure when it follows a similar, parallel structure. In coordination, a parallelism-related conjunction triggers the expectation that a second conjunct with the same structure as the first conjunct should occur. It has been proposed that parallelism effects reflect the use of the first structure as a template that guides the processing of the second. In this study, we examined the role of parallelism in real-time anaphora resolution by charting activation patterns in coordinated constructions containing anaphora, Verb-Phrase Ellipsis (VPE) and Noun-Phrase Traces (NP-traces). Specifically, we hypothesised that an expectation of parallelism would incite the parser to assume a structure similar to the first conjunct in the second, anaphora-containing conjunct. The speculation of a similar structure would result in early postulation of covert anaphora. Experiment 1 confirms that following a parallelism-related conjunction, first-conjunct material is activated in the second conjunct. Experiment 2 reveals that an NP-trace in the second conjunct is posited immediately where licensed, which is earlier than previously reported in the literature. In light of our findings, we propose an intricate relation between structural expectations and anaphor resolution. PMID:23741080

  3. Parallel Processing in the Corticogeniculate Pathway of the Macaque Monkey

    PubMed Central

    Briggs, Farran; Usrey, W. Martin

    2009-01-01

    Although corticothalamic feedback is ubiquitous across species and modalities, its role in sensory processing is unclear. This study provides the first detailed description of the visual physiology of corticogeniculate neurons in the primate. Using electrical stimulation to identify corticogeniculate neurons, we distinguish three groups of neurons with response properties that closely resemble those of neurons in the magnocellular, parvocellular and koniocellular layers of their target structure, the lateral geniculate nucleus (LGN) of the thalamus. Our results indicate that corticogeniculate feedback in the primate is stream-specific and provide strong evidence in support of the view that corticothalamic feedback can influence the transmission of sensory information from the thalamus to the cortex in a stream-selective manner. PMID:19376073

  4. Non-parallel processing: Gendered attrition in academic computer science

    NASA Astrophysics Data System (ADS)

    Cohoon, Joanne Louise Mcgrath

    2000-10-01

    This dissertation addresses the issue of disproportionate female attrition from computer science as an instance of gender segregation in higher education. By adopting a theoretical framework from organizational sociology, it demonstrates that the characteristics and processes of computer science departments strongly influence female retention. The empirical data identifies conditions under which women are retained in the computer science major at comparable rates to men. The research for this dissertation began with interviews of students, faculty, and chairpersons from five computer science departments. These exploratory interviews led to a survey of faculty and chairpersons at computer science and biology departments in Virginia. The data from these surveys are used in comparisons of the computer science and biology disciplines, and for statistical analyses that identify which departmental characteristics promote equal attrition for male and female undergraduates in computer science. This three-pronged methodological approach of interviews, discipline comparisons, and statistical analyses shows that departmental variation in gendered attrition rates can be explained largely by access to opportunity, relative numbers, and other characteristics of the learning environment. Using these concepts, this research identifies nine factors that affect the differential attrition of women from CS departments. These factors are: (1) The gender composition of enrolled students and faculty; (2) Faculty turnover; (3) Institutional support for the department; (4) Preferential attitudes toward female students; (5) Mentoring and supervising by faculty; (6) The local job market, starting salaries, and competitiveness of graduates; (7) Emphasis on teaching; and (8) Joint efforts for student success. This work contributes to our understanding of the gender segregation process in higher education. In addition, it contributes information that can lead to effective solutions for an

  5. Parallel Numerical Solution Process of a Two Dimensional Time Dependent Nonlinear Partial Differential Equation

    NASA Astrophysics Data System (ADS)

    Martin, I.; Tirado, F.; Vazquez, L.

    We present a process to achieve the solution of the two dimensional nonlinear Schrödinger equation using a multigrid technique on a distributed memory machine. Some features of the multigrid technique, such as its good convergence and parallel properties, are explained in this paper. This makes the multigrid method the optimal one to solve the systems of equations arising at each time step from an implicit numerical scheme. We give some experimental results about the parallel numerical simulation of this equation on a message passing parallel machine.
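
    For reference, a common form of the equation in question is the cubic case (an assumption; the abstract does not spell out the nonlinearity):

      \[
        i\,\frac{\partial u}{\partial t}
          + \frac{\partial^2 u}{\partial x^2}
          + \frac{\partial^2 u}{\partial y^2}
          + |u|^2 u = 0 .
      \]

    An implicit (e.g., Crank-Nicolson) discretization of this equation produces a large nonlinear algebraic system at every time step; those systems are what the multigrid solver is applied to.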

  6. Parallel processing implementation for the coupled transport of photons and electrons using OpenMP

    NASA Astrophysics Data System (ADS)

    Doerner, Edgardo

    2016-05-01

    In this work the use of OpenMP to implement parallel processing of the Monte Carlo (MC) simulation of the coupled transport of photons and electrons is presented. This implementation was carried out using a modified EGSnrc platform which enables the use of the Microsoft Visual Studio 2013 (VS2013) environment, together with the development tools available in the Intel Parallel Studio XE 2015 (XE2015). The performance study of this new implementation was carried out on a desktop PC with a multi-core CPU, taking as a reference the performance of the original platform. The results were satisfactory, both in terms of scalability and parallelization efficiency.
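
    Monte Carlo transport parallelizes naturally over particle histories, which is the usual OpenMP approach and presumably the one taken here. The C sketch below shows the pattern with a per-thread random stream (rand_r, POSIX) and a trivial exponential free-path sample standing in for the actual EGSnrc physics. Compile with -fopenmp -lm.

      /* Histories are independent, so they are distributed across threads,
       * with one RNG stream per thread to avoid contention and correlation.
       * The "physics" is a toy exponential path-length sample, not EGSnrc. */
      #include <stdio.h>
      #include <stdlib.h>
      #include <math.h>
      #include <omp.h>

      #define NHIST 1000000

      int main(void) {
          double total = 0.0;
          #pragma omp parallel
          {
              unsigned int seed = 1234u + 17u * (unsigned)omp_get_thread_num();
              double local = 0.0;
              #pragma omp for
              for (int h = 0; h < NHIST; h++) {
                  /* one "history": sample an exponential free path */
                  double xi = (rand_r(&seed) + 1.0) / ((double)RAND_MAX + 2.0);
                  local += -log(xi);
              }
              #pragma omp atomic
              total += local;               /* combine per-thread tallies */
          }
          printf("mean free path estimate: %.4f (expect ~1)\n", total / NHIST);
          return 0;
      }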

  7. Distributed Computing for Signal Processing: Modeling of Asynchronous Parallel Computation. Appendix F. Studies in Parallel Image Processing.

    DTIC Science & Technology

    1984-08-01

    ...represented in the image, the greater the potential benefits to be derived from SIMD implementation of the process. This section begins with an... Ph.D. thesis by Gie-Hing Lin, Purdue University, School of Electrical Engineering.

  8. Nonlinear vector eigen-solver and parallel reassembly processing for structural nonlinear vibration

    NASA Astrophysics Data System (ADS)

    Xue, D. Y.; Mei, Chuh

    1993-12-01

    In the frequency domain solution of large amplitude nonlinear vibration, two operations are computationally costly. They are: (1) the iterative eigen-solution and (2) the iterative nonlinear matrix reassembly. This study introduces a nonlinear eigen-solver which greatly speeds up the solution procedure by using a combination of vector iteration and nonlinear matrix updating. A feature of this new method is that it avoids repeatedly using a costly eigen-solver or equation solver. This solution procedure has also been parallelized to further speed up the computation, with parallel nonlinear matrix reassembly the main interest in this parallel processing. The Force macro package is used in the parallel program on a CRAY-2S supercomputer.

  9. CRBLASTER: A Fast Parallel-Processing Program for Cosmic Ray Rejection in Space-Based Observations

    NASA Astrophysics Data System (ADS)

    Mighell, K.

    Many astronomical image analysis tasks are based on algorithms that can be described as embarrassingly parallel - where the analysis of one subimage generally does not affect the analysis of another subimage. Yet few parallel-processing astrophysical image-analysis programs exist that can easily take full advantage of today's fast multi-core servers costing a few thousand dollars. One reason for the shortage of state-of-the-art parallel-processing astrophysical image-analysis codes is that writing parallel codes has been perceived to be difficult. I describe a new fast parallel-processing image-analysis program called CRBLASTER which does cosmic ray rejection using van Dokkum's L.A.Cosmic algorithm. CRBLASTER is written in C using the industry-standard Message Passing Interface library. Processing a single 800 x 800 Hubble Space Telescope Wide-Field Planetary Camera 2 (WFPC2) image takes 1.9 seconds using 4 processors on an Apple Xserve with two dual-core 3.0-GHz Intel Xeons; the efficiency of the program running with the 4 cores is 82%. The code has been designed to be used as a software framework for the easy development of parallel-processing image-analysis programs using embarrassingly parallel algorithms: all that needs to be done is to replace the core image-processing task (in this case the C function that performs the L.A.Cosmic algorithm) with an alternative image-analysis task based on a single-processor algorithm. I describe the design and implementation of the program and then discuss how it could be used to quickly do time-critical analysis applications such as those involved with space surveillance, or to do complex calibration tasks as part of the pipeline processing of images from large focal-plane arrays.
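
    The swap-the-core-function framework idea is worth making concrete. The C sketch below is a much-reduced illustration of it, with the MPI distribution layer omitted and a trivial threshold standing in for L.A.Cosmic: the driver only knows how to cut the image into independent tiles and apply whatever per-tile function it is handed.

      /* Hypothetical reduction of the framework idea: the driver splits an
       * image into independent subimages and applies a caller-supplied
       * per-tile function, so swapping in a new single-processor algorithm
       * is one line. Tiles here are processed in a loop, not over MPI. */
      #include <stdio.h>

      #define W 8
      #define H 8
      #define TILE 4

      typedef void (*tile_fn)(float *img, int w, int x0, int y0, int tw, int th);

      static void threshold_tile(float *img, int w, int x0, int y0, int tw, int th) {
          for (int y = y0; y < y0 + th; y++)
              for (int x = x0; x < x0 + tw; x++)
                  if (img[y * w + x] > 100.0f) img[y * w + x] = 0.0f;
      }

      static void process_image(float *img, int w, int h, tile_fn fn) {
          for (int y = 0; y < h; y += TILE)      /* embarrassingly parallel: */
              for (int x = 0; x < w; x += TILE)  /* each tile is independent */
                  fn(img, w, x, y, TILE, TILE);
      }

      int main(void) {
          float img[W * H];
          for (int i = 0; i < W * H; i++) img[i] = (float)(i % 130);
          process_image(img, W, H, threshold_tile);  /* swap fn to change task */
          printf("img[125] = %.0f\n", img[125]);
          return 0;
      }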

  10. Parallel computer processing and modeling: applications for the ICU

    NASA Astrophysics Data System (ADS)

    Baxter, Grant; Pranger, L. Alex; Draghic, Nicole; Sims, Nathaniel M.; Wiesmann, William P.

    2003-07-01

    Current patient monitoring procedures in hospital intensive care units (ICUs) generate vast quantities of medical data, much of which is considered extemporaneous and not evaluated. Although sophisticated monitors to analyze individual types of patient data are routinely used in the hospital setting, this equipment lacks high order signal analysis tools for detecting long-term trends and correlations between different signals within a patient data set. Without the ability to continuously analyze disjoint sets of patient data, it is difficult to detect slow-forming complications. As a result, the early onset of conditions such as pneumonia or sepsis may not be apparent until the advanced stages. We report here on the development of a distributed software architecture test bed and software medical models to analyze both asynchronous and continuous patient data in real time. Hardware and software has been developed to support a multi-node distributed computer cluster capable of amassing data from multiple patient monitors and projecting near and long-term outcomes based upon the application of physiologic models to the incoming patient data stream. One computer acts as a central coordinating node; additional computers accommodate processing needs. A simple, non-clinical model for sepsis detection was implemented on the system for demonstration purposes. This work shows exceptional promise as a highly effective means to rapidly predict and thereby mitigate the effect of nosocomial infections.

  11. High Speed Publication Subscription Brokering Through Highly Parallel Processing on Field Programmable Gate Array (FPGA)

    DTIC Science & Technology

    2010-01-01

    ...and that Unix style newlines are being used. Section 2, Hardware Required for a Single Node, covers all of the information in the multi-node hardware... AFRL-RI-RS-TR-2010-29, Final Technical Report, January 2010: High Speed Publication Subscription Brokering Through Highly Parallel Processing on Field Programmable Gate Array (FPGA), covering 2007 - August 2009.

  12. Reliable and Efficient Parallel Processing Algorithms and Architectures for Modern Signal Processing. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Liu, Kuojuey Ray

    1990-01-01

    Least-squares (LS) estimations and spectral decomposition algorithms constitute the heart of modern signal processing and communication problems. Implementations of recursive LS and spectral decomposition algorithms onto parallel processing architectures such as systolic arrays with efficient fault-tolerant schemes are the major concerns of this dissertation. There are four major results in this dissertation. First, we propose the systolic block Householder transformation with application to the recursive least-squares minimization. It is successfully implemented on a systolic array with a two-level pipelined implementation at the vector level as well as at the word level. Second, a real-time algorithm-based concurrent error detection scheme based on the residual method is proposed for the QRD RLS systolic array. The fault diagnosis, order degraded reconfiguration, and performance analysis are also considered. Third, the dynamic range, stability, error detection capability under finite-precision implementation, order degraded performance, and residual estimation under faulty situations for the QRD RLS systolic array are studied in detail. Finally, we propose the use of multi-phase systolic algorithms for spectral decomposition based on the QR algorithm. Two systolic architectures, one based on a triangular array and another based on a rectangular array, are presented for the multi-phase operations with fault-tolerant considerations. Eigenvectors and singular vectors can be easily obtained by using the multi-phase operations. Performance issues are also considered.

  13. The remote sensing image segmentation mean shift algorithm parallel processing based on MapReduce

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Zhou, Liqing

    2015-12-01

    With the development of satellite remote sensing technology and the growth of remote sensing image data, traditional remote sensing image segmentation technology cannot meet the massive processing and storage requirements. This article applies cloud computing and parallel computing technology to the remote sensing image segmentation process, building a cheap and efficient computer cluster system that uses parallel processing to realize the mean shift algorithm for remote sensing image segmentation based on the MapReduce model. This not only ensures the quality of remote sensing image segmentation but also improves segmentation speed, better meeting real-time requirements. The MapReduce-based parallel mean shift segmentation algorithm thus shows definite significance and practical value.
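
    For context, the update that each parallel task evaluates in mean shift segmentation is, in its standard form (kernel K and bandwidth h; the paper's exact kernel choice is not stated in this abstract):

      \[
        m(x) \;=\;
        \frac{\sum_i x_i \, K\!\left(\bigl\|\tfrac{x - x_i}{h}\bigr\|^2\right)}
             {\sum_i K\!\left(\bigl\|\tfrac{x - x_i}{h}\bigr\|^2\right)} \;-\; x .
      \]

    Each pixel's feature vector is shifted by m(x) repeatedly until it reaches a mode, and pixels converging to the same mode form one segment; since each pixel's trajectory is independent, the work maps naturally onto MapReduce map tasks.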

  14. Visual analysis of inter-process communication for large-scale parallel computing.

    PubMed

    Muelder, Chris; Gygi, Francois; Ma, Kwan-Liu

    2009-01-01

    In serial computation, program profiling is often helpful for optimization of key sections of code. When moving to parallel computation, not only does the code execution need to be considered but also communication between the different processes which can induce delays that are detrimental to performance. As the number of processes increases, so does the impact of the communication delays on performance. For large-scale parallel applications, it is critical to understand how the communication impacts performance in order to make the code more efficient. There are several tools available for visualizing program execution and communications on parallel systems. These tools generally provide either views which statistically summarize the entire program execution or process-centric views. However, process-centric visualizations do not scale well as the number of processes gets very large. In particular, the most common representation of parallel processes is a Gantt chart with a row for each process. As the number of processes increases, these charts can become difficult to work with and can even exceed screen resolution. We propose a new visualization approach that affords more scalability and then demonstrate it on systems running with up to 16,384 processes.

  15. New optical scheme for parallel processing of 1D gray images

    NASA Astrophysics Data System (ADS)

    Huang, Guoliang; Jin, Guofan; Wu, Minxian; Yan, Yingbai

    1994-06-01

    Based on mathematical morphology and a digital umbra shading and shadowing algorithm, a new scheme for realizing the fundamental morphological operations on one dimensional gray images is proposed. The mathematical formulae for the parallel processing of 1D gray images are summarized, and some important conclusions on extending morphological processing from binary images to gray images are obtained. The advantages of this scheme are its simple structure, high resolution in gray level, and good parallelism. It can greatly raise the speed of morphological processing of gray images and obtain more accurate results.
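
    The fundamental operations in question are gray-scale dilation and erosion, each a sliding local maximum or minimum. The C sketch below gives a serial 1-D reference with a flat structuring element; the per-sample independence of the window operations is what the optical scheme exploits for parallelism.

      /* Serial reference for 1-D gray-scale morphology with a flat
       * structuring element of radius R: dilation is a sliding maximum,
       * erosion a sliding minimum. Each output sample is independent. */
      #include <stdio.h>

      #define N 10
      #define R 1   /* window of 2R+1 samples */

      static void dilate(const int *in, int *out) {
          for (int i = 0; i < N; i++) {
              int m = in[i];
              for (int k = -R; k <= R; k++)
                  if (i + k >= 0 && i + k < N && in[i + k] > m) m = in[i + k];
              out[i] = m;   /* local maximum */
          }
      }

      static void erode(const int *in, int *out) {
          for (int i = 0; i < N; i++) {
              int m = in[i];
              for (int k = -R; k <= R; k++)
                  if (i + k >= 0 && i + k < N && in[i + k] < m) m = in[i + k];
              out[i] = m;   /* local minimum */
          }
      }

      int main(void) {
          int img[N] = {3, 5, 2, 8, 7, 1, 4, 6, 0, 9}, d[N], e[N];
          dilate(img, d);
          erode(img, e);
          for (int i = 0; i < N; i++) printf("%d/%d ", d[i], e[i]);
          printf("\n");
          return 0;
      }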

  16. Parallel Processing of the Target Language during Source Language Comprehension in Interpreting

    ERIC Educational Resources Information Center

    Dong, Yanping; Lin, Jiexuan

    2013-01-01

    Two experiments were conducted to test the hypothesis that the parallel processing of the target language (TL) during source language (SL) comprehension in interpreting may be influenced by two factors: (i) link strength from SL to TL, and (ii) the interpreter's cognitive resources supplement to TL processing during SL comprehension. The…

  17. Parallels between a Collaborative Research Process and the Middle Level Philosophy

    ERIC Educational Resources Information Center

    Dever, Robin; Ross, Diane; Miller, Jennifer; White, Paula; Jones, Karen

    2014-01-01

    The characteristics of the middle level philosophy as described in This We Believe closely parallel the collaborative research process. The journey of one research team is described in relationship to these characteristics. The collaborative process includes strengths such as professional relationships, professional development, courageous…

  18. SURVEY OF HIGHLY PARALLEL INFORMATION PROCESSING TECHNOLOGY AND SYSTEMS. PHASE I OF AN IMPLICATIONS STUDY,

    DTIC Science & Technology

    The purpose of this report is to present the results of a survey of the technology of highly parallel information processing technology and systems... processing technology and systems. Completion of this study will require a survey of naval systems to determine which can benefit from the technology

  19. Solution-processed parallel tandem polymer solar cells using silver nanowires as intermediate electrode.

    PubMed

    Guo, Fei; Kubis, Peter; Li, Ning; Przybilla, Thomas; Matt, Gebhard; Stubhan, Tobias; Ameri, Tayebeh; Butz, Benjamin; Spiecker, Erdmann; Forberich, Karen; Brabec, Christoph J

    2014-12-23

    Tandem architecture is the most relevant concept to overcome the efficiency limit of single-junction photovoltaic solar cells. Series-connected tandem polymer solar cells (PSCs) have advanced rapidly during the past decade. In contrast, the development of parallel-connected tandem cells is lagging far behind due to the big challenge in establishing an efficient interlayer with high transparency and high in-plane conductivity. Here, we report all-solution fabrication of parallel tandem PSCs using silver nanowires as intermediate charge collecting electrode. Through a rational interface design, a robust interlayer is established, enabling the efficient extraction and transport of electrons from subcells. The resulting parallel tandem cells exhibit high fill factors of ∼60% and enhanced current densities which are identical to the sum of the current densities of the subcells. These results suggest that solution-processed parallel tandem configuration provides an alternative avenue toward high performance photovoltaic devices.

  20. The hypercluster: A parallel processing test-bed architecture for computational mechanics applications

    NASA Technical Reports Server (NTRS)

    Blech, Richard A.

    1987-01-01

    The development of numerical methods and software tools for parallel processors can be aided through the use of a hardware test-bed. The test-bed architecture must be flexible enough to support investigations into architecture-algorithm interactions. One way to implement a test-bed is to use a commercial parallel processor. Unfortunately, most commercial parallel processors are fixed in their interconnection and/or processor architecture. In this paper, we describe a modified n cube architecture, called the hypercluster, which is a superset of many other processor and interconnection architectures. The hypercluster is intended to support research into parallel processing of computational fluid and structural mechanics problems which may require a number of different architectural configurations. An example of how a typical partial differential equation solution algorithm maps on to the hypercluster is given.

  1. Rapid Pattern Recognition of Three Dimensional Objects Using Parallel Processing Within a Hierarchy of Hexagonal Grids

    NASA Astrophysics Data System (ADS)

    Tang, Haojun

    1995-01-01

    This thesis describes using parallel processing within a hierarchy of hexagonal grids to achieve rapid recognition of patterns. A seven-pixel basic hexagonal neighborhood, a sixty-one-pixel superneighborhood and pyramids of a 2-to-4 area ratio are employed. The hexagonal network achieves improved accuracy over the square network for object boundaries. The hexagonal grid with less directional sensitivity is a better approximation of the human vision grid, is more suited to natural scenes than the square grid and avoids the 4-neighbor/8-neighbor problem. Parallel processing in image analysis saves considerable time versus the traditional line-by-line method. Hexagonal parallel processing combines the optimum hexagonal geometry with the parallel structure. Our work surveys the behavior and internal properties needed to construct the image at different levels of the hexagonal pixel grid in a parallel computation scheme. A computer code has been developed to detect edges of digital images of real objects taken with a CCD camera within a hexagonal grid at any level. The algorithm uses the differences between the local gray level and those of its six neighbors, and is able to determine the boundary of a digital image in parallel. Also a series of algorithms and techniques have been built up to manage edge linking, feature extraction, etc. The digital images obtained from the improved CRS digital image processing system are a good approximation to the images which would be obtained with a real physical hexagonal grid. We envision that our work done within this little-known area will have some important applications in real-time machine vision. A parallel two-layer hexagonal-array retina has been designed to do pattern recognition using simple operations such as differencing, ratioing, thresholding, etc. which may occur in the human retina and other biological vision systems.
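
    The seven-pixel neighborhood the thesis builds on is easy to index with axial coordinates. The tiny C sketch below uses one common (q, r) convention, an assumption rather than the thesis's own scheme, to enumerate the six equidistant neighbors a hexagonal edge detector compares against the center pixel, the property that avoids the square grid's 4-neighbor/8-neighbor ambiguity.

      /* Axial-coordinate addressing of a hexagonal grid: every pixel has
       * exactly six equidistant neighbors. The coordinate convention is one
       * common choice, not necessarily the thesis's. */
      #include <stdio.h>

      typedef struct { int q, r; } Hex;

      /* the six axial direction offsets of a hexagonal grid */
      static const Hex DIRS[6] = {
          {+1, 0}, {+1, -1}, {0, -1}, {-1, 0}, {-1, +1}, {0, +1}
      };

      int main(void) {
          Hex center = {2, 3};
          printf("neighbors of (%d,%d):", center.q, center.r);
          for (int d = 0; d < 6; d++)
              printf(" (%d,%d)", center.q + DIRS[d].q, center.r + DIRS[d].r);
          printf("\n");   /* an edge detector compares the center to these six */
          return 0;
      }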

  2. American History--Part 2. Teacher's Guide [and Student Workbook]. Revised. Parallel Alternative Strategies for Students (PASS).

    ERIC Educational Resources Information Center

    Fresen, Sue; Logan, Joshua; McCarron, Kathleen

    This teacher's and student's guide is part of a series of content-centered packages of supplemental reading, activities, and methods adapted for students who have disabilities. Parallel Alternative Strategies for Students (PASS) materials are designed to help these students succeed in regular education content courses. The content in PASS differs…

  3. American History--Part 1. Teacher's Guide [and Student Workbook]. Revised. Parallel Alternative Strategies for Students (PASS).

    ERIC Educational Resources Information Center

    Fresen, Sue; Logan, Joshua; McCarron, Kathleen

    This teacher's and student's guide is part of a series of content-centered packages of supplemental reading, activities, and methods adapted for students who have disabilities. Parallel Alternative Strategies for Students (PASS) materials are designed to help these students succeed in regular education content courses. The content in PASS differs…

  4. Advancing the extended parallel process model through the inclusion of response cost measures.

    PubMed

    Rintamaki, Lance S; Yang, Z Janet

    2014-01-01

    This study advances the Extended Parallel Process Model through the inclusion of response cost measures, which are drawbacks associated with a proposed response to a health threat. A sample of 502 college students completed a questionnaire on perceptions regarding sexually transmitted infections and condom use after reading information from the Centers for Disease Control and Prevention on the health risks of sexually transmitted infections and the utility of latex condoms in preventing sexually transmitted infection transmission. The questionnaire included standard Extended Parallel Process Model assessments of perceived threat and efficacy, as well as questions pertaining to response costs associated with condom use. Results from hierarchical ordinary least squares regression demonstrated how the addition of response cost measures improved the predictive power of the Extended Parallel Process Model, supporting the inclusion of this variable in the model.

  5. Toward a model framework of generalized parallel componential processing of multi-symbol numbers.

    PubMed

    Huber, Stefan; Cornelsen, Sonja; Moeller, Korbinian; Nuerk, Hans-Christoph

    2015-05-01

    In this article, we propose and evaluate a new model framework of parallel componential multi-symbol number processing, generalizing the idea of parallel componential processing of multi-digit numbers to the case of negative numbers by considering the polarity signs similar to single digits. In a first step, we evaluated this account by defining and investigating a sign-decade compatibility effect for the comparison of positive and negative numbers, which extends the unit-decade compatibility effect in 2-digit number processing. Then, we evaluated whether the model is capable of accounting for previous findings in negative number processing. In a magnitude comparison task, in which participants had to single out the larger of 2 integers, we observed a reliable sign-decade compatibility effect with prolonged reaction times for incompatible (e.g., -97 vs. +53; in which the number with the larger decade digit has the smaller, i.e., negative polarity sign) as compared with sign-decade compatible number pairs (e.g., -53 vs. +97). Moreover, an analysis of participants' eye fixation behavior corroborated our model of parallel componential processing of multi-symbol numbers. These results are discussed in light of concurrent theoretical notions about negative number processing. On the basis of the present results, we propose a generalized integrated model framework of parallel componential multi-symbol processing.

  6. [Multi-DSP parallel processing technique of hyperspectral RX anomaly detection].

    PubMed

    Guo, Wen-Ji; Zeng, Xiao-Ru; Zhao, Bao-Wei; Ming, Xing; Zhang, Gui-Feng; Lü, Qun-Bo

    2014-05-01

    To satisfy the requirements of high speed, real-time operation, and mass data storage for RX anomaly detection in hyperspectral image data, the present paper proposes a multi-DSP parallel processing system for hyperspectral imagery based on the CPCI Express standard bus architecture. The hardware topology of the system combines the tight coupling of four DSPs sharing a data bus and memory unit with the interconnection of Link ports. On this hardware platform, by assigning a parallel processing task to each DSP in consideration of the spectral RX anomaly detection algorithm and the 3D structure of the spectral image data, a four-DSP parallel processing technique is proposed which computes the mean vector and covariance matrix of the whole image by spatially partitioning it. The experimental results show that, for equivalent detection performance, the proposed four-DSP parallel RX anomaly detection technique reaches four times the time efficiency of a single-DSP implementation, overcoming the constraint that a DSP's internal storage capacity places on processing huge image data while meeting the demands of real-time processing of spectral data.
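
    The RX score itself, and the reason spatial partitioning works, both fit in a short sketch. In the C example below each loop over a partition plays the role of one DSP accumulating local sums and outer products; the partial sums combine exactly into the global mean and covariance, after which each pixel x is scored with (x-u)^T C^(-1) (x-u). Two bands and eight pixels are toy assumptions.

      /* Partitioned statistics for the RX detector: per-partition sums and
       * outer-product sums combine into the global mean and covariance.
       * Two spectral bands keep the matrix inversion trivial. */
      #include <stdio.h>

      #define BANDS 2
      #define PIXELS 8
      #define PARTS 4   /* stand-ins for the four DSPs */

      int main(void) {
          double img[PIXELS][BANDS] = {
              {1, 2}, {2, 1}, {1, 1}, {2, 2}, {1, 2}, {2, 1}, {1, 1}, {9, 9}
          };
          double sum[BANDS] = {0}, sq[BANDS][BANDS] = {{0}};
          for (int p = 0; p < PARTS; p++)        /* each partition in turn */
              for (int i = p * (PIXELS / PARTS); i < (p + 1) * (PIXELS / PARTS); i++)
                  for (int a = 0; a < BANDS; a++) {
                      sum[a] += img[i][a];
                      for (int b = 0; b < BANDS; b++) sq[a][b] += img[i][a] * img[i][b];
                  }
          double mu[BANDS], C[BANDS][BANDS];
          for (int a = 0; a < BANDS; a++) mu[a] = sum[a] / PIXELS;
          for (int a = 0; a < BANDS; a++)
              for (int b = 0; b < BANDS; b++)
                  C[a][b] = sq[a][b] / PIXELS - mu[a] * mu[b];
          double det = C[0][0] * C[1][1] - C[0][1] * C[1][0];
          for (int i = 0; i < PIXELS; i++) {     /* RX score per pixel */
              double dx = img[i][0] - mu[0], dy = img[i][1] - mu[1];
              double rx = (dx * (C[1][1] * dx - C[0][1] * dy)
                         + dy * (C[0][0] * dy - C[1][0] * dx)) / det;
              printf("pixel %d: RX = %.2f\n", i, rx);  /* pixel 7 stands out */
          }
          return 0;
      }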

  7. Managing internode data communications for an uninitialized process in a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Parker, Jeffrey J; Ratterman, Joseph D; Smith, Brian E

    2014-05-20

    A parallel computer includes nodes, each having main memory and a messaging unit (MU). Each MU includes computer memory, which in turn includes, MU message buffers. Each MU message buffer is associated with an uninitialized process on the compute node. In the parallel computer, managing internode data communications for an uninitialized process includes: receiving, by an MU of a compute node, one or more data communications messages in an MU message buffer associated with an uninitialized process on the compute node; determining, by an application agent, that the MU message buffer associated with the uninitialized process is full prior to initialization of the uninitialized process; establishing, by the application agent, a temporary message buffer for the uninitialized process in main computer memory; and moving, by the application agent, data communications messages from the MU message buffer associated with the uninitialized process to the temporary message buffer in main computer memory.

  8. Evaluation of parallel reduction strategies for fusion of sensory information from a robot team

    NASA Astrophysics Data System (ADS)

    Lyons, Damian M.; Leroy, Joseph

    2015-05-01

    The advantage of using a team of robots to search or to map an area is that by navigating the robots to different parts of the area, searching or mapping can be completed more quickly. A crucial aspect of the problem is the combination, or fusion, of data from team members to generate an integrated model of the search/mapping area. In prior work we looked at the issue of removing mutual robot views from an integrated point cloud model built from laser and stereo sensors, leading to a cleaner and more accurate model. This paper addresses a further challenge: even with mutual views removed, the stereo data from a team of robots can quickly swamp a WiFi connection. This paper proposes and evaluates a communication and fusion approach based on the parallel reduction operation, where data is combined in a series of steps over increasing subsets of the team. Eight different strategies for selecting the subsets are evaluated for bandwidth requirements using three robot missions, each carried out with teams of four Pioneer 3-AT robots. Our results indicate that selecting groups to combine based on similar pose but distant location yields the best results.
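
    Parallel reduction here means combining per-robot models pairwise over log2(n) rounds instead of streaming everything to one node. The C sketch below shows the combining schedule only, with a single number and addition standing in for a point cloud and the actual fusion operator; the eight subset-selection strategies evaluated in the paper correspond to different ways of pairing robots at each round.

      /* Pairwise-reduction schedule: each round halves the number of
       * senders, so no single link carries all the raw data. combine() is a
       * stand-in for point-cloud fusion. */
      #include <stdio.h>

      #define ROBOTS 4

      static int combine(int a, int b) { return a + b; }   /* stand-in fusion */

      int main(void) {
          int model[ROBOTS] = {100, 150, 120, 130};        /* per-robot data */
          for (int stride = 1; stride < ROBOTS; stride *= 2)   /* log2(n) rounds */
              for (int i = 0; i + stride < ROBOTS; i += 2 * stride)
                  model[i] = combine(model[i], model[i + stride]); /* i receives */
          printf("fused model at robot 0: %d\n", model[0]);    /* 500 */
          return 0;
      }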

  9. Design, Implementation, and Evaluation of a Shared.Memory Parallel Processing System (SMPPS)

    DTIC Science & Technology

    1999-07-26

    Sponsoring agency: Department of the Air Force, AFIT/CIA, Bldg 125, 2950 P Street, WPAFB OH 45433. Table-of-contents fragment: 1.2 Existing Machines (1.2.1 Message-Passing; 1.2.2 Shared-Memory); 2. Implementing a Shared-Memory Parallel Processing System (SMPPS): 2.1 Objectives; 2.2 A Dual-Processor Shared-Memory Parallel Processing System (2.2.1 Meeting Design Objectives; 2.2.2 The Design; 2.2.3 Timer...).

  10. Application of integration algorithms in a parallel processing environment for the simulation of jet engines

    NASA Technical Reports Server (NTRS)

    Krosel, S. M.; Milner, E. J.

    1982-01-01

    The application of predictor-corrector integration algorithms developed for the digital parallel processing environment is investigated. The algorithms are implemented and evaluated through the use of a software simulator which provides an approximate representation of the parallel processing hardware. Test cases which focus on the use of the algorithms are presented, and a specific application using a linear model of a turbofan engine is considered. Results are presented showing the effects of integration step size and the number of processors on simulation accuracy. Real-time performance, interprocessor communication, and algorithm startup are also discussed.
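
    For readers unfamiliar with the method family, the serial C sketch below shows a representative predictor-corrector pair (second-order Adams-Bashforth predictor, trapezoidal corrector) on the test system dy/dt = -y; the report's specific algorithms may differ. In the parallel engine simulation, it is the derivative evaluations for the different state variables that get distributed across processors.

      /* A representative predictor-corrector step: AB2 predictor followed by
       * one trapezoidal (Adams-Moulton) correction, on dy/dt = -y. */
      #include <stdio.h>
      #include <math.h>

      static double f(double y) { return -y; }   /* test system dy/dt = -y */

      int main(void) {
          double h = 0.1;
          double y0 = 1.0;
          double fprev = f(y0);                  /* bootstrap: one Euler step */
          double ycur = y0 + h * fprev;
          for (int n = 1; n < 10; n++) {
              double fcur = f(ycur);
              /* AB2 predictor */
              double ypred = ycur + h * (1.5 * fcur - 0.5 * fprev);
              /* trapezoidal corrector using the predicted derivative */
              double ynext = ycur + 0.5 * h * (fcur + f(ypred));
              fprev = fcur;
              ycur = ynext;
          }
          printf("y(1.0) ~= %.5f (exact %.5f)\n", ycur, exp(-1.0));
          return 0;
      }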

  11. Next generation Purex modeling by way of parallel processing with high performance computers

    SciTech Connect

    DeMuth, S.F.

    1993-08-01

    The Plutonium and Uranium Extraction (Purex) process is the predominant method used worldwide for solvent extraction in reprocessing spent nuclear fuels. Proper flowsheet design has a significant impact on the character of the process waste. Past Purex flowsheet modeling has been based on equilibrium conditions. It can be shown for the Purex process that optimum separation does not necessarily occur at equilibrium conditions. The next generation Purex flowsheet models should incorporate the fundamental diffusion and chemical kinetic processes required to study time-dependent behavior. Use of parallel processing with high-performance computers will permit transient multistage and multispecies design calculations based on mass transfer with simultaneous chemical reaction models. This paper presents an applicable mass transfer with chemical reaction model for the Purex system and presents a parallel processing solution methodology.

  12. [Work processes in Family Health Strategy team].

    PubMed

    Pavoni, Daniela Soccoloski; Medeiros, Cássia Regina Gotler

    2009-01-01

    The Family Health Strategy requires a redefinition of the health care model, characterized by interdisciplinary team work. This study is aimed at knowing the work processes in a Family Health Team. The research was qualitative, and 10 team members were interviewed. Results demonstrated that the nurse performs a variety of functions that could be shared with other people; this overloads him/her and makes execution of the job's inherent tasks difficult. Task planning and performing are usually done in teams, but some professionals get more involved in these activities. It was concluded that there is a need for the team to reflect upon the work process as well as reassess task assignment, so that each individual is able to perform the work and contribute to an integrated work process.

  13. Processing communications events in parallel active messaging interface by awakening thread from wait state

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2013-10-22

    Processing data communications events in a parallel active messaging interface (`PAMI`) of a parallel computer that includes compute nodes executing a parallel application, the PAMI including data communications endpoints coupled for data communications through the PAMI and through other data communications resources. Processing includes: determining, by an advance function, that there are no actionable data communications events pending for its context; placing, by the advance function, its thread of execution into a wait state, waiting for a subsequent data communications event for the context; responsive to the occurrence of such an event, awakening the thread from the wait state; and processing, by the advance function, the subsequent data communications event now pending for the context.
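
    The wait/awaken pattern described in the claim can be mimicked with an ordinary condition variable. The toy Python analogue below (class and method names are ours, not the PAMI API) shows an advance function that sleeps when no events are pending and is awakened when one is posted:

    ```python
    import threading
    from collections import deque

    class Context:
        """Toy analogue of a PAMI context: an event queue plus an
        advance() that blocks when no events are actionable."""
        def __init__(self):
            self.events = deque()
            self.cond = threading.Condition()

        def post(self, event):
            with self.cond:
                self.events.append(event)
                self.cond.notify()        # awaken a waiting advance()

        def advance(self):
            with self.cond:
                while not self.events:    # nothing pending: wait,
                    self.cond.wait()      # consuming no CPU cycles
                event = self.events.popleft()
            print("processing", event)    # process outside the lock

    ctx = Context()
    t = threading.Thread(target=ctx.advance)
    t.start()
    ctx.post("recv-complete")             # wakes the advance thread
    t.join()
    ```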

  14. Parallel and serial processes in the human oculomotor system: bimodal integration and express saccades.

    PubMed

    Nozawa, G; Reuter-Lorenz, P A; Hughes, H C

    1994-01-01

    Saccadic reaction times (SRTs) were analyzed in the context of stochastic models of information processing (e.g., Townsend and Ashby 1983) to reveal the processing architecture(s) underlying integrative interactions between visual and auditory inputs and the mechanisms of express saccades. The results support the following conclusions. Bimodal (visual+auditory) targets are processed in parallel, and facilitate SRT to an extent that exceeds levels attainable by probability summation. This strongly implies neural summation between elements responding to spatially aligned visual and auditory inputs in the human oculomotor system. Second, express saccades are produced within a separable processing stage that is organized in series with that responsible for intersensory integration. A model is developed that implements this combination of parallel and serial processing. The activity in parallel input channels is summed within a sensory stage which is organized in series with a pre-motor and motor stage. The time course of each subprocess is considered a random variable, and different experimental manipulations can selectively influence different stages. Parallels between the model and physiological data are explored.
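
    The probability-summation (race model) bound that such analyses test against can be computed directly from unimodal reaction-time samples. A hedged numpy sketch, with made-up gamma-distributed SRTs standing in for real data:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 20000
    rt_v = rng.gamma(9.0, 20.0, n)       # hypothetical visual SRTs (ms)
    rt_a = rng.gamma(8.0, 22.0, n)       # hypothetical auditory SRTs (ms)

    t = np.arange(50, 500)               # time axis in ms
    F_v = (rt_v[:, None] <= t).mean(0)   # unimodal CDFs
    F_a = (rt_a[:, None] <= t).mean(0)

    # Race-model (probability summation) upper bound: an observed
    # bimodal CDF exceeding this anywhere implies neural summation.
    bound = np.minimum(F_v + F_a, 1.0)
    # prediction of an independent race (statistical facilitation only)
    race = 1.0 - (1.0 - F_v) * (1.0 - F_a)
    ```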

  15. Parallel processing of face and house stimuli by V1 and specialized visual areas: a magnetoencephalographic (MEG) study

    PubMed Central

    Shigihara, Yoshihito; Zeki, Semir

    2014-01-01

    We used easily distinguishable stimuli of faces and houses constituted from straight lines, with the aim of learning whether they activate V1 on the one hand, and the specialized areas that are critical for the processing of faces and houses on the other, with similar latencies. Eighteen subjects took part in the experiment, which used magnetoencephalography (MEG) coupled to analytical methods to detect the time course of the earliest responses which these stimuli provoke in these cortical areas. Both categories of stimuli activated V1 and areas of the visual cortex outside it at around 40 ms after stimulus onset, and the amplitude elicited by face stimuli was significantly larger than that elicited by house stimuli. These results suggest that “low-level” and “high-level” features of form stimuli are processed in parallel by V1 and visual areas outside it. Taken together with our previous results on the processing of simple geometric forms (Shigihara and Zeki, 2013; Shigihara and Zeki, 2014), the present ones reinforce the conclusion that parallel processing is an important component in the strategy used by the brain to process and construct forms. PMID:25426050

  16. Running ATLAS workloads within massively parallel distributed applications using Athena Multi-Process framework (AthenaMP)

    NASA Astrophysics Data System (ADS)

    Calafiura, Paolo; Leggett, Charles; Seuster, Rolf; Tsulaia, Vakhtang; Van Gemmeren, Peter

    2015-12-01

    AthenaMP is a multi-process version of the ATLAS reconstruction, simulation and data analysis framework Athena. By leveraging Linux fork and copy-on-write mechanisms, it allows for sharing of memory pages between event processors running on the same compute node with little to no change in the application code. Originally targeted to optimize the memory footprint of reconstruction jobs, AthenaMP has demonstrated that it can reduce the memory usage of certain configurations of ATLAS production jobs by a factor of 2. AthenaMP has also evolved to become the parallel event-processing core of the recently developed ATLAS infrastructure for fine-grained event processing (Event Service) which allows the running of AthenaMP inside massively parallel distributed applications on hundreds of compute nodes simultaneously. We present the architecture of AthenaMP, various strategies implemented by AthenaMP for scheduling workload to worker processes (for example: Shared Event Queue and Shared Distributor of Event Tokens) and the usage of AthenaMP in the diversity of ATLAS event processing workloads on various computing resources: Grid, opportunistic resources and HPC.
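
    A toy sketch of the shared-event-queue idea using Python multiprocessing: with the fork start method, read-only data loaded before the fork is shared copy-on-write among workers, loosely mirroring the mechanism described (this is an analogy, not AthenaMP code):

    ```python
    import multiprocessing as mp

    def worker(queue, wid: int) -> None:
        """Pull event tokens until a poison pill arrives; large
        read-only structures created before fork() are shared
        copy-on-write and never duplicated per worker."""
        while True:
            event = queue.get()
            if event is None:
                break
            print(f"worker {wid} processing event {event}")

    if __name__ == "__main__":
        mp.set_start_method("fork")        # Linux copy-on-write semantics
        queue = mp.Queue()                 # the shared event queue
        workers = [mp.Process(target=worker, args=(queue, w))
                   for w in range(4)]
        for p in workers:
            p.start()
        for event in range(16):            # enqueue event tokens
            queue.put(event)
        for _ in workers:
            queue.put(None)                # one poison pill per worker
        for p in workers:
            p.join()
    ```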

  17. Parallel Digital Watermarking Process on Ultrasound Medical Images in Multicores Environment

    PubMed Central

    Khor, Hui Liang; Liew, Siau-Chuin; Zain, Jasni Mohd.

    2016-01-01

    With the advancement of communication network technology, digital medical images can be transmitted to healthcare professionals via internal networks or public networks (e.g., the Internet), but this also exposes the transmitted images to security threats, such as tampering or the insertion of false data, which may cause inaccurate diagnosis and treatment. Distortion of medical images cannot be tolerated for diagnostic purposes; thus digital watermarking of medical images is introduced. So far most watermarking research has been done on single-frame medical images, which is impractical in real environments. In this paper, digital watermarking of multiframe medical images is proposed. In order to speed up multiframe watermarking processing time, parallel watermarking of medical images utilizing multicore technology is introduced. Experimental results show that the elapsed time of parallel watermarking processing is much shorter than that of sequential watermarking processing. PMID:26981111
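
    The frame-level parallelism described is essentially a map over independent frames. A minimal sketch, assuming a made-up least-significant-bit watermark rather than the paper's actual scheme:

    ```python
    from multiprocessing import Pool

    import numpy as np

    def watermark_frame(frame: np.ndarray) -> np.ndarray:
        """Embed a toy watermark in the least-significant bits;
        practical medical schemes are reversible so that diagnostic
        content is not permanently distorted."""
        mark = np.zeros_like(frame)
        mark[::8, ::8] = 1                       # sparse toy pattern
        return (frame & ~np.uint8(1)) | mark     # overwrite the LSBs

    if __name__ == "__main__":
        frames = [np.random.randint(0, 256, (512, 512), dtype=np.uint8)
                  for _ in range(64)]            # a multiframe clip
        with Pool() as pool:                     # one frame per core
            marked = pool.map(watermark_frame, frames)
    ```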

  18. Intelligent approach for parallel HEV control strategy based on driving cycles

    NASA Astrophysics Data System (ADS)

    Montazeri-Gh, M.; Asadi, M.

    2011-02-01

    This article describes a methodological approach for the intelligent control of a parallel hybrid electric vehicle (HEV) through the inclusion of the concept of driving cycles. In this approach, a fuzzy logic controller is designed to manage the internal combustion engine so that it works in the vicinity of its optimal condition at each instant. In addition, based on the definition of the microtrip, several driving patterns are classified that represent congested to highway traffic conditions. The driving cycle and traffic conditions are then incorporated in an optimisation process to tune the fuzzy membership function parameters. In this study, the optimisation process is formulated to minimise the HEV fuel consumption (FC) and emissions while satisfying the driving performance constraints. Finally, optimisation results are provided for three different driving cycles: ECE-EUDC, FTP and TEH-CAR. TEH-CAR is a driving cycle developed from experimental data collected under real traffic conditions in the city of Tehran. The results from the computer simulation show the effectiveness of the approach and a reduction in FC and emissions, while ensuring that vehicle performance is not sacrificed.

  19. The Large Laboratory Course: Organize It to Parallel Industrial Process Development.

    ERIC Educational Resources Information Center

    Eckert, Roger E.; Ybarra, Robert M.

    1988-01-01

    Describes a senior level chemical engineering course at Purdue University that parallels an industrial process development department. Stresses the course organization, manager-engineer contract, evaluation of students, course evaluation, and gives examples of course improvements made during the course. (CW)

  20. High Performance Parallel Processing Project: Industrial computing initiative. Progress reports for fiscal year 1995

    SciTech Connect

    Koniges, A.

    1996-02-09

    This project is a package of 11 individual CRADAs plus hardware. This innovative project established a three-year multi-party collaboration that is significantly accelerating the availability of commercial massively parallel processing computing software technology to U.S. government, academic, and industrial end-users. This report contains individual presentations from nine principal investigators along with overall program information.

  1. An Inconvenient Truth: An Application of the Extended Parallel Process Model

    ERIC Educational Resources Information Center

    Goodall, Catherine E.; Roberto, Anthony J.

    2008-01-01

    "An Inconvenient Truth" is an Academy Award-winning documentary about global warming presented by Al Gore. This documentary is appropriate for a lesson on fear appeals and the extended parallel process model (EPPM). The EPPM is concerned with the effects of perceived threat and efficacy on behavior change. Perceived threat is composed of an…

  2. One Factor or Two Parallel Processes? Comorbidity and Development of Adolescent Anxiety and Depressive Disorder Symptoms

    ERIC Educational Resources Information Center

    Hale, William W., III; Raaijmakers, Quinten A. W.; Muris, Peter; van Hoof, Anne; Meeus, Wim H. J.

    2009-01-01

    Background: This study investigates whether anxiety and depressive disorder symptoms of adolescents from the general community are best described by a model that assumes they are indicative of one general factor or by a model that assumes they are two distinct disorders with parallel growth processes. Additional analyses were conducted to explore…

  3. Parallel Distributed Processing at 25: Further Explorations in the Microstructure of Cognition

    ERIC Educational Resources Information Center

    Rogers, Timothy T.; McClelland, James L.

    2014-01-01

    This paper introduces a special issue of "Cognitive Science" initiated on the 25th anniversary of the publication of "Parallel Distributed Processing" (PDP), a two-volume work that introduced the use of neural network models as vehicles for understanding cognition. The collection surveys the core commitments of the PDP…

  4. Tracking the Continuity of Language Comprehension: Computer Mouse Trajectories Suggest Parallel Syntactic Processing

    ERIC Educational Resources Information Center

    Farmer, Thomas A.; Cargill, Sarah A.; Hindy, Nicholas C.; Dale, Rick; Spivey, Michael J.

    2007-01-01

    Although several theories of online syntactic processing assume the parallel activation of multiple syntactic representations, evidence supporting simultaneous activation has been inconclusive. Here, the continuous and non-ballistic properties of computer mouse movements are exploited, by recording their streaming x, y coordinates to procure…

  5. A Neurally Plausible Parallel Distributed Processing Model of Event-Related Potential Word Reading Data

    ERIC Educational Resources Information Center

    Laszlo, Sarah; Plaut, David C.

    2012-01-01

    The Parallel Distributed Processing (PDP) framework has significant potential for producing models of cognitive tasks that approximate how the brain performs the same tasks. To date, however, there has been relatively little contact between PDP modeling and data from cognitive neuroscience. In an attempt to advance the relationship between…

  6. Recent development for the ITS code system: Parallel processing and visualization

    SciTech Connect

    Fan, W.C.; Turner, C.D.; Halbleib, J.A. Sr.; Kensek, R.P.

    1996-03-01

    A brief overview is given for two software developments related to the ITS code system. These developments provide parallel processing and visualization capabilities and thus allow users to perform ITS calculations more efficiently. Timing results and a graphical example are presented to demonstrate these capabilities.

  7. Cocaine Use and Delinquent Behavior among High-Risk Youths: A Growth Model of Parallel Processes

    ERIC Educational Resources Information Center

    Dembo, Richard; Sullivan, Christopher

    2009-01-01

    We report the results of a parallel-process, latent growth model analysis examining the relationships between cocaine use and delinquent behavior among youths. The study examined a sample of 278 justice-involved juveniles completing at least one of three follow-up interviews as part of a National Institute on Drug Abuse-funded study. The results…

  8. Parallel Process and Isomorphism: A Model for Decision Making in the Supervisory Triad

    ERIC Educational Resources Information Center

    Koltz, Rebecca L.; Odegard, Melissa A.; Feit, Stephen S.; Provost, Kent; Smith, Travis

    2012-01-01

    Parallel process and isomorphism are two supervisory concepts that are often discussed independently but rarely discussed in connection with each other. These two concepts, philosophically, have different historical roots, as well as different implications for interventions with regard to the supervisory triad. The authors examine the difference…

  9. Using the Extended Parallel Process Model to Examine Teachers' Likelihood of Intervening in Bullying

    ERIC Educational Resources Information Center

    Duong, Jeffrey; Bradshaw, Catherine P.

    2013-01-01

    Background: Teachers play a critical role in protecting students from harm in schools, but little is known about their attitudes toward addressing problems like bullying. Previous studies have rarely used theoretical frameworks, making it difficult to advance this area of research. Using the Extended Parallel Process Model (EPPM), we examined the…

  10. Parallel processing in the honeybee olfactory pathway: structure, function, and evolution.

    PubMed

    Rössler, Wolfgang; Brill, Martin F

    2013-11-01

    Animals face highly complex and dynamic olfactory stimuli in their natural environments, which require fast and reliable olfactory processing. Parallel processing is a common principle of sensory systems supporting this task, for example in visual and auditory systems, but its role in olfaction has remained unclear. Studies in the honeybee have focused on a dual olfactory pathway. Two sets of projection neurons connect glomeruli in two antennal-lobe hemilobes via lateral and medial tracts, in opposite sequence, with the mushroom bodies and lateral horn. Comparative studies suggest that this dual-tract circuit represents a unique adaptation in Hymenoptera. Imaging studies indicate that glomeruli in both hemilobes receive redundant sensory input. Recent simultaneous multi-unit recordings from projection neurons of both tracts revealed widely overlapping response profiles, strongly indicating parallel olfactory processing. Whereas lateral-tract neurons respond fast with broad (generalistic) profiles, medial-tract neurons are odorant specific and respond more slowly. In analogy to the "what" and "where" subsystems in visual pathways, this suggests two parallel olfactory subsystems providing "what" (quality) and "when" (temporal) information. Temporal response properties may support across-tract coincidence coding in higher centers. Parallel olfactory processing likely enhances perception of complex odorant mixtures to decode the diverse and dynamic olfactory world of a social insect.

  11. Parallel pulse processing and data acquisition for high speed, low error flow cytometry

    DOEpatents

    Engh, G.J. van den; Stokdijk, W.

    1992-09-22

    A digitally synchronized parallel pulse processing and data acquisition system for a flow cytometer has multiple parallel input channels with independent pulse digitization and FIFO storage buffer. A trigger circuit controls the pulse digitization on all channels. After an event has been stored in each FIFO, a bus controller moves the oldest entry from each FIFO buffer onto a common data bus. The trigger circuit generates an ID number for each FIFO entry, which is checked by an error detection circuit. The system has high speed and low error rate. 17 figs.

  12. Parallel pulse processing and data acquisition for high speed, low error flow cytometry

    DOEpatents

    van den Engh, Gerrit J.; Stokdijk, Willem

    1992-01-01

    A digitally synchronized parallel pulse processing and data acquisition system for a flow cytometer has multiple parallel input channels with independent pulse digitization and FIFO storage buffer. A trigger circuit controls the pulse digitization on all channels. After an event has been stored in each FIFO, a bus controller moves the oldest entry from each FIFO buffer onto a common data bus. The trigger circuit generates an ID number for each FIFO entry, which is checked by an error detection circuit. The system has high speed and low error rate.
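
    A software analogy can make the FIFO/bus-controller/ID mechanism concrete; the sketch below is illustrative only (the patented system is hardware, and the assertion-based check stands in for the error detection circuit):

    ```python
    from collections import deque

    N_CHANNELS = 4
    fifos = [deque() for _ in range(N_CHANNELS)]
    next_id = 0

    def trigger(pulse_heights):
        """Trigger circuit: digitize one event on every channel and
        tag each FIFO entry with a shared event ID."""
        global next_id
        for chan, height in enumerate(pulse_heights):
            fifos[chan].append((next_id, height))
        next_id += 1

    def bus_read():
        """Bus controller: pop the oldest entry from each FIFO; the
        error check verifies the IDs match (no channel slipped)."""
        entries = [f.popleft() for f in fifos]
        ids = {eid for eid, _ in entries}
        assert len(ids) == 1, "event misalignment detected"
        return [height for _, height in entries]

    trigger([10, 22, 5, 7])
    trigger([11, 20, 6, 9])
    print(bus_read())   # [10, 22, 5, 7] -- one aligned event
    ```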

  13. Static and dynamic load-balancing strategies for parallel reservoir simulation

    SciTech Connect

    Anguille, L.; Killough, J.E.; Li, T.M.C.; Toepfer, J.L.

    1995-12-31

    Accurate simulation of the complex phenomena that occur in flow in porous media can tax even the most powerful serial computers. Emergence of new parallel computer architectures as a future efficient tool in reservoir simulation may overcome this difficulty. Unfortunately, major problems remain to be solved before using parallel computers commercially: production serial programs must be rewritten to be efficient in parallel environments and load balancing methods must be explored to evenly distribute the workload on each processor during the simulation. This study implements both a static load-balancing algorithm and a receiver-initiated dynamic load-sharing algorithm to achieve high parallel efficiencies on both the IBM SP2 and Intel IPSC/860 parallel computers. Significant speedup improvement was recorded for both methods. Further optimization of these algorithms yielded a technique with efficiencies as high as 90% and 70% on 8 and 32 nodes, respectively. The increased performance was the result of the minimization of message-passing overhead.
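
    Receiver-initiated load sharing means that under-loaded processors pull work from busy peers. A toy simulation of one sharing step (the threshold and transfer policy are invented for illustration, not taken from the paper):

    ```python
    import random

    class Node:
        def __init__(self, tasks):
            self.tasks = list(tasks)

    def receiver_initiated_step(nodes, threshold=2):
        """Each under-loaded node probes one random peer and, if the
        peer is busier, transfers half of the surplus to itself."""
        for node in nodes:
            if len(node.tasks) < threshold:
                peer = random.choice(nodes)
                surplus = len(peer.tasks) - len(node.tasks)
                for _ in range(max(surplus, 0) // 2):
                    node.tasks.append(peer.tasks.pop())

    nodes = [Node(range(8)), Node([]), Node(range(3)), Node([])]
    for _ in range(3):
        receiver_initiated_step(nodes)
    print([len(n.tasks) for n in nodes])   # loads even out over steps
    ```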

  14. Real-time target detection technology of large view-field infrared image based on multicore DSP parallel processing

    NASA Astrophysics Data System (ADS)

    Sun, Gang; Liu, Songlin; Wang, Weihua; Chen, Zengping

    2013-10-01

    In order to implement real-time detection of hedgehopping targets in large view-field infrared (LVIR) images, the paper proposes a fast algorithm flow to extract the target region of interest (ROI). The ground building region is rejected quickly and the target ROI segmented roughly through background classification. Then the background image containing the target ROI is matched with the previous frame based on a mean removal normalized product correlation (MRNPC) similarity measure function. Finally, the target motion area is extracted by inter-frame differencing in the time domain. Following the proposed algorithm flow, this paper designs a high-speed real-time signal processing hardware platform based on FPGA + DSP, and also presents a new parallel processing strategy, combining function-level and task-level parallelism, which processes the LVIR image with multiple cores and multiple tasks. Experimental results show that the algorithm can effectively extract low-altitude aerial targets against complex backgrounds in a large view field, and the newly designed hardware platform can process IR imagery of 50000x288 pixels per second in a large view-field infrared search system (LVIRSS).

  15. Big Data GPU-Driven Parallel Processing Spatial and Spatio-Temporal Clustering Algorithms

    NASA Astrophysics Data System (ADS)

    Konstantaras, Antonios; Skounakis, Emmanouil; Kilty, James-Alexander; Frantzeskakis, Theofanis; Maravelakis, Emmanuel

    2016-04-01

    Advances in graphics processing unit technology towards encompassing parallel architectures [1], comprising thousands of cores and multiples of parallel threads, provide the hardware foundation for the rapid processing of various parallel applications in seismic big data analysis. Seismic data are normally stored as collections of vectors in massive matrices, growing rapidly in size as wider areas are covered, denser recording networks are established and decades of data are compiled together [2]. Yet many processes in seismic data analysis are performed on each seismic event independently or as distinct tiles [3] of specific grouped seismic events within a much larger data set. Such processes, independent of one another, can be performed in parallel, narrowing down processing times drastically [1,3]. This research work presents the development and implementation of three parallel processing algorithms using Cuda C [4] for the investigation of potentially distinct seismic regions [5,6] present in the vicinity of the southern Hellenic seismic arc. The algorithms, programmed and executed in parallel for comparison, are: fuzzy k-means clustering, with expert knowledge [7] assigning the overall number of clusters; density-based clustering [8]; and a self-developed spatio-temporal clustering algorithm encompassing expert [9] and empirical knowledge [10] for the specific area under investigation. Indexing terms: GPU parallel programming, Cuda C, heterogeneous processing, distinct seismic regions, parallel clustering algorithms, spatio-temporal clustering References [1] Kirk, D. and Hwu, W.: 'Programming massively parallel processors - A hands-on approach', 2nd Edition, Morgan Kaufman Publisher, 2013 [2] Konstantaras, A., Valianatos, F., Varley, M.R. and Makris, J.P.: 'Soft-Computing Modelling of Seismicity in the Southern Hellenic Arc', Geoscience and Remote Sensing Letters, vol. 5 (3), pp. 323-327, 2008 [3] Papadakis, S. and

  16. Solving the Quadratic Assignment Problems using Parallel ACO with Symmetric Multi Processing

    NASA Astrophysics Data System (ADS)

    Tsutsui, Shigeyoshi

    In this paper, we propose several types of parallel ant colony optimization algorithms with symmetric multi processing for solving the quadratic assignment problem (QAP). These models include the master-slave models and the island models. As the base ant colony optimization algorithm, we used the cunning Ant System (cAS), which showed promising performance in our previous studies. We evaluated each parallel algorithm under the condition that the run time for each parallel algorithm and the base sequential algorithm are the same. The results suggest that using the master-slave model with an increased number of iterations of the ant colony optimization algorithm is promising in solving quadratic assignment problems for real or real-like instances.

  17. Parallel multigrid solver of radiative transfer equation for photon transport via graphics processing unit.

    PubMed

    Gao, Hao; Phan, Lan; Lin, Yuting

    2012-09-01

    A graphics processing unit-based parallel multigrid solver for a radiative transfer equation with vacuum boundary condition or reflection boundary condition is presented for heterogeneous media with complex geometry based on two-dimensional triangular meshes or three-dimensional tetrahedral meshes. The computational complexity of this parallel solver is linearly proportional to the degrees of freedom in both angular and spatial variables, while the full multigrid method is utilized to minimize the number of iterations. The overall gain of speed is roughly 30 to 300 fold with respect to our prior multigrid solver, which depends on the underlying regime and the parallelization. The numerical validations are presented with the MATLAB codes at https://sites.google.com/site/rtefastsolver/.

  18. Parallel multigrid solver of radiative transfer equation for photon transport via graphics processing unit

    PubMed Central

    Phan, Lan; Lin, Yuting

    2012-01-01

    Abstract. A graphics processing unit–based parallel multigrid solver for a radiative transfer equation with vacuum boundary condition or reflection boundary condition is presented for heterogeneous media with complex geometry based on two-dimensional triangular meshes or three-dimensional tetrahedral meshes. The computational complexity of this parallel solver is linearly proportional to the degrees of freedom in both angular and spatial variables, while the full multigrid method is utilized to minimize the number of iterations. The overall gain of speed is roughly 30 to 300 fold with respect to our prior multigrid solver, which depends on the underlying regime and the parallelization. The numerical validations are presented with the MATLAB codes at https://sites.google.com/site/rtefastsolver/. PMID:23085905

  19. A multi-satellite orbit determination problem in a parallel processing environment

    NASA Technical Reports Server (NTRS)

    Deakyne, M. S.; Anderle, R. J.

    1988-01-01

    The Engineering Orbit Analysis Unit at GE Valley Forge used an Intel Hypercube Parallel Processor to investigate the performance of parallel processors, and to gain experience with them, on a multi-satellite orbit determination problem. A general study was selected in which major blocks of computation for the multi-satellite orbit computations were used as units to be assigned to the various processors on the Hypercube. Problems encountered or successes achieved in addressing the orbit determination problem would then be more likely to transfer to other parallel processors. The prime objective was to study the algorithm to allow processing of observations later in time than those employed in the state update. Expertise in ephemeris determination was exploited in addressing these problems, and the facility was used to bring a realism to the study that would highlight problems which might not otherwise be anticipated. Secondary objectives were to gain experience with a non-trivial problem in a parallel processor environment; to explore the necessary interplay of serial and parallel sections of the algorithm through timing studies; and to explore granularity (coarse vs. fine grain): above a certain granularity limit there is a risk of starvation, with the majority of nodes idle, while below it the overhead associated with splitting the problem may require more work and communication time than it saves.

  20. Parallel processing in a host plus multiple array processor system for radar

    NASA Technical Reports Server (NTRS)

    Barkan, B. Z.

    1983-01-01

    Host plus multiple array processor architecture is demonstrated to yield a modular, fast, and cost-effective system for radar processing. Software methodology for programming such a system is developed. Parallel processing with pipelined data flow among the host, array processors, and discs is implemented. Theoretical analysis of performance is made and experimentally verified. The broad class of problems to which the architecture and methodology can be applied is indicated.

  1. MC64-ClustalWP2: a highly-parallel hybrid strategy to align multiple sequences in many-core architectures.

    PubMed

    Díaz, David; Esteban, Francisco J; Hernández, Pilar; Caballero, Juan Antonio; Guevara, Antonio; Dorado, Gabriel; Gálvez, Sergio

    2014-01-01

    We have developed MC64-ClustalWP2 as a new implementation of the Clustal W algorithm, integrating a novel parallelization strategy and significantly increasing the performance when aligning long sequences in architectures with many cores. It must be stressed that in such a process, detailed analysis of both the software and hardware features and peculiarities is of paramount importance to reveal key points for exploiting and optimizing the full potential of parallelism in many-core CPU systems. The new parallelization approach focuses on the most time-consuming stages of this algorithm. In particular, the so-called progressive alignment has drastically improved in performance, due to a fine-grained approach in which the forward and backward loops were unrolled and parallelized. Another key approach has been the implementation of the new algorithm in a hybrid-computing system, integrating both an Intel Xeon multi-core CPU and a Tilera Tile64 many-core card. A comparison with other Clustal W implementations reveals the high performance of the new algorithm and strategy on many-core CPU architectures, in a scenario where the sequences to align are relatively long (more than 10 kb) and, hence, many-core GPU hardware cannot be used. Thus, MC64-ClustalWP2 runs multiple alignments more than 18x faster than the original Clustal W algorithm, and more than 7x faster than the best x86 parallel implementation to date, and is publicly available through a web service. Besides, these developments have been deployed in cost-effective personal computers and should be useful for life-science researchers, including the identification of identities and differences for mutation/polymorphism analyses, biodiversity and evolutionary studies and for the development of molecular markers for paternity testing, germplasm management and protection, to assist breeding, illegal traffic control, fraud prevention and for the protection of the intellectual property (identification

  2. An automated workflow for parallel processing of large multiview SPIM recordings

    PubMed Central

    Schmied, Christopher; Steinbach, Peter; Pietzsch, Tobias; Preibisch, Stephan; Tomancak, Pavel

    2016-01-01

    Summary: Selective Plane Illumination Microscopy (SPIM) allows imaging of developing organisms in 3D at unprecedented temporal resolution over long periods of time. The resulting massive amounts of raw image data require extensive processing, performed interactively via dedicated graphical user interface (GUI) applications. The consecutive processing steps can be easily automated, and the individual time points can be processed independently, which lends itself to trivial parallelization on a high performance computing (HPC) cluster. Here, we introduce an automated workflow for processing large multiview, multichannel, multi-illumination time-lapse SPIM data on a single workstation or in parallel on an HPC cluster. The pipeline relies on snakemake to resolve dependencies among consecutive processing steps and can be easily adapted to any cluster environment for processing SPIM data in a fraction of the time required to collect it. Availability and implementation: The code is distributed free and open source under the MIT license http://opensource.org/licenses/MIT. The source code can be downloaded from github: https://github.com/mpicbg-scicomp/snakemake-workflows. Documentation can be found here: http://fiji.sc/Automated_workflow_for_parallel_Multiview_Reconstruction. Contact: schmied@mpi-cbg.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26628585
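
    The property the workflow exploits is that each time point is independent, so on a single workstation the same idea reduces to a process-pool map. A hedged sketch (paths and the step contents are placeholders; the actual pipeline expresses this as snakemake rules rather than Python code):

    ```python
    from concurrent.futures import ProcessPoolExecutor

    def process_timepoint(tp: int) -> str:
        """Stand-in for one chain of SPIM steps (registration,
        fusion, deconvolution) applied to a single time point."""
        # ... load raw views for tp, register, fuse, save ...
        return f"fused/tp{tp:04d}.tif"       # hypothetical output path

    if __name__ == "__main__":
        timepoints = range(240)              # e.g. a 240-timepoint recording
        with ProcessPoolExecutor() as pool:  # or cluster jobs via snakemake
            outputs = list(pool.map(process_timepoint, timepoints))
    ```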

  3. Comparing Binaural Pre-processing Strategies III

    PubMed Central

    Warzybok, Anna; Ernst, Stephan M. A.

    2015-01-01

    A comprehensive evaluation of eight signal pre-processing strategies, including directional microphones, coherence filters, single-channel noise reduction, binaural beamformers, and their combinations, was undertaken with normal-hearing (NH) and hearing-impaired (HI) listeners. Speech reception thresholds (SRTs) were measured in three noise scenarios (multitalker babble, cafeteria noise, and a single competing talker). Predictions of three common instrumental measures were compared with the general perceptual benefit caused by the algorithms. The individual SRTs measured without pre-processing, and the individual benefits, were objectively estimated using the binaural speech intelligibility model. Ten listeners with NH and 12 HI listeners participated. The participants varied in age and pure-tone threshold levels. Although HI listeners required a better signal-to-noise ratio than NH listeners to obtain 50% intelligibility, no differences in SRT benefit from the different algorithms were found between the two groups. With the exception of single-channel noise reduction, all algorithms showed an improvement in SRT of between 2.1 dB (in cafeteria noise) and 4.8 dB (in the single-competing-talker condition). Predictions of the binaural speech intelligibility model explained 83% of the measured variance of the individual SRTs in the no-pre-processing condition. Regarding the benefit from the algorithms, the instrumental measures were not able to predict the perceptual data in all tested noise conditions. The comparable benefit observed for both groups suggests a possible application of noise reduction schemes for listeners with different hearing status. Although the model can predict the individual SRTs without pre-processing, further development is necessary to predict the benefits obtained from the algorithms at an individual level. PMID:26721922

  4. XPP A High Performance Parallel Signal Processing Platform for Space Applications

    NASA Astrophysics Data System (ADS)

    Schueler, E.; Syed, M.; Helfers, T.

    This document presents the eXtreme Processing Platform (XPP), a new runtime-reconfigurable data processing technology developed by PACT GmbH which combines the performance of ASICs with the flexibility of DSPs. The XPP is built from a scalable array of arithmetic processing elements, embedded memories, high-bandwidth I/O, an auto-synchronizing packet-oriented data network, and an internal event network that enables control of program flow using execution flags, and is designed to support different types of parallelism, such as multithreading, multitasking and multiple parallel instances. The technology promises to provide highly flexible payloads for future telecommunication satellites and scientific missions, and the short time to market needed to cope with the significant changes in the telecommunication market and rapidly changing customer needs.

  5. Parallel processing architecture for H.264 deblocking filter on multi-core platforms

    NASA Astrophysics Data System (ADS)

    Prasad, Durga P.; Sonachalam, Sekar; Kunchamwar, Mangesh K.; Gunupudi, Nageswara Rao

    2012-03-01

    This work presents a parallel processing architecture for the H.264 deblocking filter on multi-core platforms such as HyperX technology. Parallel techniques such as parallel processing of independent macroblocks, sub-blocks, and pixel rows are examined in this work. The deblocking architecture consists of a basic cell called the deblocking filter unit (DFU) and a dependent data buffer manager (DFM). The DFU can be instantiated several times, catering to different performance needs; the DFM serves the data required by the different numbers of DFUs, and also manages all the neighboring data required for future data processing by the DFUs. This approach achieves the scalability, flexibility, and performance excellence required in deblocking filters.

  6. Application of parallel computing to seismic damage process simulation of an arch dam

    NASA Astrophysics Data System (ADS)

    Zhong, Hong; Lin, Gao; Li, Jianbo

    2010-06-01

    The simulation of the damage process of a high arch dam subjected to strong earthquake shocks is significant for the evaluation of its performance and seismic safety, considering the catastrophic effect of dam failure. However, such numerical simulation requires rigorous computational capacity. Conventional serial computing falls short of that, and parallel computing is a fairly promising solution to this problem. The parallel finite element code PDPAD was developed for the damage prediction of arch dams utilizing a damage model that takes the heterogeneity of concrete into account. Developed in Fortran, the code uses a master/slave programming model, the domain decomposition method for allocation of tasks, MPI (Message Passing Interface) for communication, and solvers from the AZTEC library for the solution of large-scale equations. Speedup tests showed that the performance of PDPAD was quite satisfactory. The code was employed to study the damage process of an arch dam under construction on a 4-node PC cluster, with more than one million degrees of freedom considered. The obtained damage mode was quite similar to that of a shaking table test, indicating that the proposed procedure and the parallel code PDPAD have good potential for simulating the seismic damage mode of arch dams. With the rapidly growing need for massive computation emerging from engineering problems, parallel computing will find more and more applications in pertinent areas.

  7. On the design and implementation of a parallel, object-oriented, image processing toolkit

    SciTech Connect

    Kamath, C; Baldwin, C; Fodor, I; Tang, N A

    2000-06-22

    Advances in technology have enabled us to collect data from observations, experiments, and simulations at an ever increasing pace. As these data sets approach the terabyte and petabyte range, scientists are increasingly using semi-automated techniques from data mining and pattern recognition to find useful information in the data. In order for data mining to be successful, the raw data must first be processed into a form suitable for the detection of patterns. When the data are in the form of images, this can involve a substantial amount of processing on very large data sets. To help make this task more efficient, the authors are designing and implementing an object-oriented image processing toolkit that specifically targets massively-parallel, distributed-memory architectures. They first show that it is possible to use object-oriented technology to effectively address the diverse needs of image applications. Next, they describe how they abstract out the similarities in image processing algorithms to enable re-use in the software. They also discuss the difficulties encountered in parallelizing image algorithms on massively parallel machines as well as the bottlenecks to high performance. They demonstrate the work using images from an astronomical data set, and illustrate how techniques such as filters and denoising through the thresholding of wavelet coefficients can be applied when a large image is distributed across several processors.
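
    One cited technique, denoising by thresholding wavelet coefficients, parallelizes naturally over image tiles. A sketch using the PyWavelets package (the quadrant tiling, wavelet choice, and universal threshold are our assumptions, not the toolkit's actual policy):

    ```python
    from multiprocessing import Pool

    import numpy as np
    import pywt

    def denoise_tile(tile: np.ndarray) -> np.ndarray:
        """Soft-threshold the wavelet detail coefficients of one tile,
        using a simple universal threshold from the noise estimate."""
        coeffs = pywt.wavedec2(tile, "db2", level=2)
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
        thr = sigma * np.sqrt(2.0 * np.log(tile.size))
        new = [coeffs[0]] + [
            tuple(pywt.threshold(c, thr, mode="soft") for c in level)
            for level in coeffs[1:]
        ]
        return pywt.waverec2(new, "db2")

    if __name__ == "__main__":
        image = np.random.rand(1024, 1024)   # stand-in astronomical image
        tiles = [image[i:i + 512, j:j + 512]
                 for i in (0, 512) for j in (0, 512)]
        with Pool(4) as pool:                # one quadrant per process
            clean = pool.map(denoise_tile, tiles)
    ```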

  8. A learnable parallel processing architecture towards unity of memory and computing.

    PubMed

    Li, H; Gao, B; Chen, Z; Zhao, Y; Huang, P; Ye, H; Liu, L; Liu, X; Kang, J

    2015-08-14

    Developing energy-efficient parallel information processing systems beyond von Neumann architecture is a long-standing goal of modern information technologies. The widely used von Neumann computer architecture separates memory and computing units, which leads to energy-hungry data movement when computers work. In order to meet the need of efficient information processing for the data-driven applications such as big data and Internet of Things, an energy-efficient processing architecture beyond von Neumann is critical for the information society. Here we show a non-von Neumann architecture built of resistive switching (RS) devices named "iMemComp", where memory and logic are unified with single-type devices. Leveraging nonvolatile nature and structural parallelism of crossbar RS arrays, we have equipped "iMemComp" with capabilities of computing in parallel and learning user-defined logic functions for large-scale information processing tasks. Such architecture eliminates the energy-hungry data movement in von Neumann computers. Compared with contemporary silicon technology, adder circuits based on "iMemComp" can improve the speed by 76.8% and the power dissipation by 60.3%, together with a 700 times aggressive reduction in the circuit area.

  9. Optoelectronic parallel processing with smart pixel arrays for automated screening of cervical smear imagery

    NASA Astrophysics Data System (ADS)

    Metz, John Langdon

    2000-10-01

    This thesis investigates the use of optoelectronic parallel processing systems with smart photosensor arrays (SPAs) to examine cervical smear images. The automation of cervical smear screening seeks to reduce human workload and improve the accuracy of detecting pre-cancerous and cancerous conditions. Increasing the parallelism of image processing improves the speed and accuracy of locating regions of interest (ROI) in images of the cervical smear for the first stage of a two-stage screening system. The two-stage approach first detects ROI optoelectronically before classifying them using more time-consuming electronic algorithms. The optoelectronic hit/miss transform (HMT) is computed using gray-scale modulation spatial light modulators in an optical correlator. To further the parallelism of this system, a novel CMOS SPA computes the post-processing steps required by the HMT algorithm. The SPA reduces the bandwidth subsequently passed into the second, electronic image processing stage, which classifies the detected ROI. Limitations in the miss operation of the HMT suggest using only the hit operation for detecting ROI. This makes possible a single-SPA-chip approach, using only the hit operation for ROI detection, which may replace the optoelectronic correlator in the screening system. Both the HMT SPA post-processor and the SPA ROI detector design provide compact, efficient, and low-cost optoelectronic solutions to performing ROI detection on cervical smears. Analysis of optoelectronic ROI detection with electronic ROI classification shows these systems have the potential to perform at, or above, the current error rates for manual classification of cervical smears.
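
    A purely software analogue of the hit/miss transform is available in scipy for readers who want to experiment; the toy templates below (a 3x3 "hit" block inside a 5x5 "miss" ring) are invented for illustration and are far simpler than templates for real nuclei:

    ```python
    import numpy as np
    from scipy import ndimage

    # Toy binary image with one isolated 3x3 blob (a mock nucleus).
    img = np.zeros((64, 64), dtype=bool)
    img[20:23, 30:33] = True

    hit = np.ones((3, 3), dtype=bool)        # foreground template
    miss = np.zeros((5, 5), dtype=bool)      # background ring template
    miss[0, :] = miss[-1, :] = True
    miss[:, 0] = miss[:, -1] = True

    # structure1 must match the image, structure2 its complement:
    # the transform fires only where the blob sits on clear background.
    rois = ndimage.binary_hit_or_miss(img, structure1=hit, structure2=miss)
    ys, xs = np.nonzero(rois)                # candidate ROI centres
    print(list(zip(ys, xs)))                 # [(21, 31)]
    ```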

  10. A learnable parallel processing architecture towards unity of memory and computing

    PubMed Central

    Li, H.; Gao, B.; Chen, Z.; Zhao, Y.; Huang, P.; Ye, H.; Liu, L.; Liu, X.; Kang, J.

    2015-01-01

    Developing energy-efficient parallel information processing systems beyond von Neumann architecture is a long-standing goal of modern information technologies. The widely used von Neumann computer architecture separates memory and computing units, which leads to energy-hungry data movement when computers work. In order to meet the need of efficient information processing for the data-driven applications such as big data and Internet of Things, an energy-efficient processing architecture beyond von Neumann is critical for the information society. Here we show a non-von Neumann architecture built of resistive switching (RS) devices named “iMemComp”, where memory and logic are unified with single-type devices. Leveraging nonvolatile nature and structural parallelism of crossbar RS arrays, we have equipped “iMemComp” with capabilities of computing in parallel and learning user-defined logic functions for large-scale information processing tasks. Such architecture eliminates the energy-hungry data movement in von Neumann computers. Compared with contemporary silicon technology, adder circuits based on “iMemComp” can improve the speed by 76.8% and the power dissipation by 60.3%, together with a 700 times aggressive reduction in the circuit area. PMID:26271243

  11. A learnable parallel processing architecture towards unity of memory and computing

    NASA Astrophysics Data System (ADS)

    Li, H.; Gao, B.; Chen, Z.; Zhao, Y.; Huang, P.; Ye, H.; Liu, L.; Liu, X.; Kang, J.

    2015-08-01

    Developing energy-efficient parallel information processing systems beyond von Neumann architecture is a long-standing goal of modern information technologies. The widely used von Neumann computer architecture separates memory and computing units, which leads to energy-hungry data movement when computers work. In order to meet the need of efficient information processing for the data-driven applications such as big data and Internet of Things, an energy-efficient processing architecture beyond von Neumann is critical for the information society. Here we show a non-von Neumann architecture built of resistive switching (RS) devices named “iMemComp”, where memory and logic are unified with single-type devices. Leveraging nonvolatile nature and structural parallelism of crossbar RS arrays, we have equipped “iMemComp” with capabilities of computing in parallel and learning user-defined logic functions for large-scale information processing tasks. Such architecture eliminates the energy-hungry data movement in von Neumann computers. Compared with contemporary silicon technology, adder circuits based on “iMemComp” can improve the speed by 76.8% and the power dissipation by 60.3%, together with a 700 times aggressive reduction in the circuit area.

  12. Age-related differences in task goal processing strategies during action cascading.

    PubMed

    Stock, Ann-Kathrin; Gohil, Krutika; Beste, Christian

    2016-06-01

    We are often faced with situations requiring the execution of a coordinated cascade of different actions to achieve a goal, but we can apply different strategies to do so. Until now, these different action cascading strategies have, however, not been examined with respect to possible effects of aging. We tackled this question in a systems neurophysiological study using EEG and source localization in healthy older adults and employing mathematical constraints to determine the strategy applied. The results suggest that older adults seem to apply a less efficient strategy when cascading different actions. Compared to younger adults, older adults seem to struggle to hierarchically organize their actions, which leads to an inefficient and more parallel processing of different task goals. On a systems level, the observed deficit is most likely due to an altered processing of task goals at the response selection level (P3 ERP) and related to changes of neural processes in the temporo-parietal junction.

  13. A piloted comparison of elastic and rigid blade-element rotor models using parallel processing technology

    NASA Technical Reports Server (NTRS)

    Hill, Gary; Du Val, Ronald W.; Green, John A.; Huynh, Loc C.

    1990-01-01

    A piloted comparison of rigid and aeroelastic blade-element rotor models was conducted at the Crew Station Research and Development Facility (CSRDF) at Ames Research Center. FLIGHTLAB, a new simulation development and analysis tool, was used to implement these models in real time using parallel processing technology. Pilot comments and quantitative analysis performed both on-line and off-line confirmed that elastic degrees of freedom significantly affect perceived handling qualities. Trim comparisons show improved correlation with flight test data when elastic modes are modeled. The results demonstrate the efficiency with which the mathematical modeling sophistication of existing simulation facilities can be upgraded using parallel processing, and the importance of these upgrades to simulation fidelity.

  14. Serial processing in reading aloud: no challenge for a parallel model.

    PubMed

    Zorzi, M

    2000-04-01

    K. Rastle and M. Coltheart (1999) challenged parallel models of reading by showing that the cost of irregularity in low-frequency exception words was modulated by the position of the irregularity in the word. This position-of-irregularity effect was taken as strong evidence of serial processing in reading. This article refutes Rastle and Coltheart's theoretical conclusions in 3 ways: First, a parallel model, the connectionist dual process model (M. Zorzi, G. Houghton, & B. Butterworth, 1998b), produces a position-of-irregularity effect. Second, the supposed serial effect can be reduced to a position-specific grapheme-phoneme consistency effect. Third, the position-of-irregularity effect vanishes when the experimental data are reanalyzed using grapheme-phoneme consistency as the covariate. This demonstration has broader implications for studies aiming at adjudicating between models: Strong inferences should be avoided until the computational models are actually tested.

  15. Serial and parallel processes in eye movement control: current controversies and future directions.

    PubMed

    Murray, Wayne S; Fischer, Martin H; Tatler, Benjamin W

    2013-01-01

    In this editorial for the special issue on serial and parallel processing in reading we explore the background to the current debate concerning whether the word recognition processes in reading are strictly serial-sequential or take place in an overlapping parallel fashion. We consider the history of the controversy and some of the underlying assumptions, together with an analysis of the types of evidence and arguments that have been adduced to both sides of the debate, concluding that both accounts necessarily presuppose some weakening of, or elasticity in, the eye-mind assumption. We then consider future directions, both for reading research and for scene viewing, and wrap up the editorial with a brief overview of the following articles and their conclusions.

  16. Accelerating the Gillespie Exact Stochastic Simulation Algorithm using hybrid parallel execution on graphics processing units.

    PubMed

    Komarov, Ivan; D'Souza, Roshan M

    2012-01-01

    The Gillespie Stochastic Simulation Algorithm (GSSA) and its variants are cornerstone techniques to simulate reaction kinetics in situations where the concentration of the reactant is too low to allow deterministic techniques such as differential equations. The inherent limitations of the GSSA include the time required for executing a single run and the need for multiple runs for parameter sweep exercises due to the stochastic nature of the simulation. Even very efficient variants of GSSA are prohibitively expensive to compute and perform parameter sweeps. Here we present a novel variant of the exact GSSA that is amenable to acceleration by using graphics processing units (GPUs). We parallelize the execution of a single realization across threads in a warp (fine-grained parallelism). A warp is a collection of threads that are executed synchronously on a single multi-processor. Warps executing in parallel on different multi-processors (coarse-grained parallelism) simultaneously generate multiple trajectories. Novel data-structures and algorithms reduce memory traffic, which is the bottleneck in computing the GSSA. Our benchmarks show an 8×-120× performance gain over various state-of-the-art serial algorithms when simulating different types of models.
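
    For orientation, the serial core of the direct-method GSSA is compact; in the authors' scheme many such realizations run concurrently on the GPU, with warp-level cooperation inside each one. The two-reaction network below is illustrative, not one of the paper's benchmark models:

    ```python
    import numpy as np

    def gillespie(x0, stoich, propensities, t_end, rng):
        """Direct-method SSA: sample the time to the next reaction
        from an exponential, pick which reaction fires in proportion
        to its propensity, and apply its stoichiometry."""
        x, t, path = np.asarray(x0, float), 0.0, []
        while t < t_end:
            a = propensities(x)
            a0 = a.sum()
            if a0 == 0.0:                      # no reaction can fire
                break
            t += rng.exponential(1.0 / a0)     # waiting time
            j = rng.choice(len(a), p=a / a0)   # which reaction
            x = x + stoich[j]
            path.append((t, x.copy()))
        return path

    rng = np.random.default_rng(1)
    stoich = np.array([[-1, 1, 0], [0, -1, 1]])          # A->B, B->C
    propensities = lambda x: np.array([0.5 * x[0], 0.3 * x[1]])
    trace = gillespie([100, 0, 0], stoich, propensities, 50.0, rng)
    ```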

  17. Modulator and VCSEL-MSM smart pixels for parallel pipeline networking and signal processing

    NASA Astrophysics Data System (ADS)

    Chen, C.-H.; Hoanca, Bogdan; Kuznia, C. B.; Pansatiankul, Dhawat E.; Zhang, Liping; Sawchuk, Alexander A.

    1999-07-01

    TRANslucent Smart Pixel Array (TRANSPAR) systems perform high performance parallel pipeline networking and signal processing based on optical propagation of 3D data packets. The TRANSPAR smart pixel devices use either self-electro- optic effect GaAs multiple quantum well modulators or CMOS- VCSEL-MSM (CMOS-Vertical Cavity Surface Emitting Laser- Metal-Semiconductor-Metal) technology. The data packets transfer among high throughput photonic network nodes using multiple access/collision detection or token-ring protocols.

  18. Eighth SIAM conference on parallel processing for scientific computing: Final program and abstracts

    SciTech Connect

    1997-12-31

    This SIAM conference is the premier forum for developments in parallel numerical algorithms, a field that has seen very lively and fruitful developments over the past decade, and whose health is still robust. Themes for this conference were: combinatorial optimization; data-parallel languages; large-scale parallel applications; message-passing; molecular modeling; parallel I/O; parallel libraries; parallel software tools; parallel compilers; particle simulations; problem-solving environments; and sparse matrix computations.

  19. Tuning of tool dynamics for increased stability of parallel (simultaneous) turning processes

    NASA Astrophysics Data System (ADS)

    Ozturk, E.; Comak, A.; Budak, E.

    2016-01-01

    Parallel (simultaneous) turning operations make use of more than one cutting tool acting on a common workpiece, offering potential for higher productivity. However, dynamic interaction between the tools and workpiece, and the resulting chatter vibrations, may create quality problems on machined surfaces. In order to determine chatter-free cutting process parameters, stability models can be employed. In this paper, the stability of parallel turning processes is formulated in the frequency and time domains for two different parallel turning cases. Predictions of the frequency and time domain methods demonstrated reasonable agreement with each other. In addition, the predicted stability limits were verified experimentally. Simulation and experimental results show multi-regional stability diagrams which can be used to select the most favorable set of process parameters for higher stable material removal rates. In addition to parameter selection, the developed models can be used to determine the natural frequency ratio of the tools that results in the highest stable depths of cut. It is concluded that the most stable operations are obtained when the natural frequencies of the tools are slightly offset from each other, and the worst stability occurs when the natural frequencies of the tools are exactly the same.

  20. Fast phase processing in off-axis holography by CUDA including parallel phase unwrapping.

    PubMed

    Backoach, Ohad; Kariv, Saar; Girshovitz, Pinhas; Shaked, Natan T

    2016-02-22

    We present a parallel processing implementation for rapid extraction of quantitative phase maps from off-axis holograms on the graphics processing unit (GPU) of the computer using compute unified device architecture (CUDA) programming. To obtain an efficient implementation, we parallelized both the wrapped phase map extraction algorithm and the two-dimensional phase unwrapping algorithm. In contrast to previous implementations, we utilized an unweighted least squares phase unwrapping algorithm that better suits parallelism. We compared the run times of the proposed algorithm on the CPU and the GPU of the computer for various sizes of off-axis holograms. Using the GPU implementation, we extracted the unwrapped phase maps from the recorded off-axis holograms at 35 frames per second (fps) for 4-megapixel holograms, and at 129 fps for 1-megapixel holograms, which represents the fastest processing frame rates obtained so far, to the best of our knowledge. We then used common-path off-axis interferometric imaging to quantitatively capture the phase maps of a micro-organism with rapid flagellum movements.
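
    The unweighted least-squares unwrapping favored here reduces to a Poisson problem that diagonalizes under the discrete cosine transform (Ghiglia and Romero, 1994). A serial numpy/scipy sketch of that algorithm; the GPU version parallelizes these array operations across CUDA threads:

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def wrap(p):
        return (p + np.pi) % (2.0 * np.pi) - np.pi

    def unwrap_ls(psi):
        """Unweighted least-squares unwrapping: build the divergence
        of the wrapped phase gradients and solve the Poisson
        equation with a DCT (Neumann boundaries)."""
        M, N = psi.shape
        dx = wrap(np.diff(psi, axis=1))
        dy = wrap(np.diff(psi, axis=0))
        rho = np.zeros_like(psi)
        rho[:, :-1] += dx; rho[:, 1:] -= dx
        rho[:-1, :] += dy; rho[1:, :] -= dy
        d = dctn(rho, norm="ortho")
        i, j = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
        denom = 2.0 * (np.cos(np.pi * i / M) + np.cos(np.pi * j / N) - 2.0)
        denom[0, 0] = 1.0                 # the mean is a free constant
        phi = idctn(d / denom, norm="ortho")
        return phi - phi.mean()

    truth = np.fromfunction(lambda y, x: 0.002 * (x - 128) ** 2, (256, 256))
    est = unwrap_ls(wrap(truth))          # equals truth up to a constant
    ```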

  1. Parallel particle swarm optimization on a graphics processing unit with application to trajectory optimization

    NASA Astrophysics Data System (ADS)

    Wu, Q.; Xiong, F.; Wang, F.; Xiong, Y.

    2016-10-01

    In order to reduce the computational time, a fully parallel implementation of the particle swarm optimization (PSO) algorithm on a graphics processing unit (GPU) is presented. Instead of being executed on the central processing unit (CPU) sequentially, PSO is executed in parallel via the GPU on the compute unified device architecture (CUDA) platform. The processes of fitness evaluation, updating of velocity and position of all particles are all parallelized and introduced in detail. Comparative studies on the optimization of four benchmark functions and a trajectory optimization problem are conducted by running PSO on the GPU (GPU-PSO) and CPU (CPU-PSO). The impact of design dimension, number of particles and size of the thread-block in the GPU and their interactions on the computational time is investigated. The results show that the computational time of the developed GPU-PSO is much shorter than that of CPU-PSO, with comparable accuracy, which demonstrates the remarkable speed-up capability of GPU-PSO.
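
    Because every particle follows the same update rule, PSO is naturally data-parallel. The vectorized numpy sketch below mirrors the one-thread-per-particle GPU mapping; the coefficients and bounds are generic textbook choices, not the paper's settings:

    ```python
    import numpy as np

    def pso(f, dim, n_particles=512, iters=200, seed=0):
        """Every line below updates all particles at once, the same
        structure a GPU exploits with one thread per particle."""
        rng = np.random.default_rng(seed)
        x = rng.uniform(-5.0, 5.0, (n_particles, dim))
        v = np.zeros_like(x)
        pbest, pval = x.copy(), f(x)
        g = pbest[pval.argmin()]
        w, c1, c2 = 0.72, 1.49, 1.49          # common PSO coefficients
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = x + v
            val = f(x)
            better = val < pval
            pbest[better], pval[better] = x[better], val[better]
            g = pbest[pval.argmin()]
        return g, pval.min()

    sphere = lambda x: (x ** 2).sum(axis=1)   # benchmark function
    best, best_val = pso(sphere, dim=10)
    ```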

  2. Multichannel parallel free-space VCSEL optoelectronic interconnects for digital data transmission and processing

    NASA Astrophysics Data System (ADS)

    Liu, J. Jiang; Lawler, William B.; Riely, Brian P.; Chang, Wayne H.; Shen, Paul H.; Newman, Peter G.; Taysing-Lara, Monica A.; Olver, Kimberly; Koley, Bikash; Dagenais, Mario; Simonis, George J.

    2000-07-01

    A free-space integrated optoelectronic interconnect was built to explore parallel data transmission and processing. This interconnect comprises an 8 x 8 substrate-emitting 980-nm InGaAs/GaAs quantum-well vertical-cavity surface-emitting laser (VCSEL) array and an 8 x 8 InGaAs/InP P-I-N photodetector array. Both VCSEL and detector arrays were flip-chip bonded onto the complementary metal-oxide-semiconductor (CMOS) circuitry, packaged in pin-grid array packages, and mounted on customized printed circuit boards. Individual data rates as high as 1.2 Gb/s on the VCSEL/CMOS transmitter array were measured. After optical alignment, we carried out serial and parallel transmissions of digital data and live video scenes through this interconnect between two computers. Images captured by a CCD camera were digitized to 8-bit data signals and transferred as serial bit-streams through multiple channels of this parallel VCSEL-detector optical interconnect. A data processing algorithm for edge detection was applied during the data transfer. Final images were reconstructed from the optically transmitted and processed digital data. Although the transmitter and detector offered much higher data rates, we found that the overall image transfer rate was limited by the CMOS receiver circuits. A new design for the receiver circuitry was completed and submitted for fabrication.

  3. Scalability of preconditioners as a strategy for parallel computation of compressible fluid flow

    SciTech Connect

    Hansen, G.A.

    1996-05-01

    Parallel implementations of a Newton-Krylov-Schwarz algorithm are used to solve a model problem representing low Mach number compressible fluid flow over a backward-facing step. The Mach number is specifically selected to result in a numerically "stiff" matrix problem, based on an implicit finite volume discretization of the compressible 2D Navier-Stokes/energy equations using primitive variables. Newton's method is used to linearize the discrete system, and a preconditioned Krylov projection technique is used to solve the resulting linear system. Domain decomposition enables the development of a global preconditioner via the parallel construction of contributions derived from subdomains. Formation of the global preconditioner is based upon additive and multiplicative Schwarz algorithms, with and without subdomain overlap. The degree of parallelism of this technique is further enhanced with the use of a matrix-free approximation for the Jacobian used in the Krylov technique (in this case, GMRES(k)). Of paramount interest to this study is the implementation and optimization of these techniques on parallel shared-memory hardware, namely the Cray C90 and SGI Challenge architectures. These architectures were chosen as representative and commonly available to researchers interested in the solution of problems of this type. The Newton-Krylov-Schwarz solution technique is increasingly being investigated for computational fluid dynamics (CFD) applications due to the advantages of full coupling of all variables and equations, rapid non-linear convergence, and moderate memory requirements. A parallel version of this method that scales effectively on the above architectures would be extremely attractive to practitioners, resulting in efficient, cost-effective, parallel solutions exhibiting the benefits of the solution technique.
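
    One ingredient highlighted above, the matrix-free Jacobian approximation inside GMRES(k), is easy to sketch: the Jacobian is never formed, and J*v is approximated by a directional finite difference of the residual. The fragment below is our minimal illustration with SciPy's GMRES; the Schwarz domain-decomposition preconditioner, which is the heart of the paper, is omitted, and the test problem and tolerances are our own choices.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def newton_gmres_matrix_free(F, u0, tol=1e-8, max_newton=20, eps=1e-7):
    """Newton iteration in which J(u)*v is approximated by a finite
    difference of F, so the Jacobian matrix is never assembled."""
    u = u0.copy()
    for _ in range(max_newton):
        r = F(u)
        if np.linalg.norm(r) < tol:
            break
        jv = lambda v: (F(u + eps * v) - r) / eps   # matrix-free J*v
        J = LinearOperator((u.size, u.size), matvec=jv)
        du, _ = gmres(J, -r, restart=30)            # restarted GMRES(k), k = 30
        u = u + du
    return u

# Toy nonlinear test problem: tridiagonal diffusion plus a cubic term
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
F = lambda u: A @ u + u**3 - np.ones(n)
u = newton_gmres_matrix_free(F, np.zeros(n))
print(np.linalg.norm(F(u)))   # small residual
```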

  4. GWM-VI: groundwater management with parallel processing for multiple MODFLOW versions

    USGS Publications Warehouse

    Banta, Edward R.; Ahlfeld, David P.

    2013-01-01

    Groundwater Management–Version Independent (GWM–VI) is a new version of the Groundwater Management Process of MODFLOW. The Groundwater Management Process couples groundwater-flow simulation with a capability to optimize stresses on the simulated aquifer based on an objective function and constraints imposed on stresses and aquifer state. GWM–VI extends prior versions of Groundwater Management in two significant ways—(1) it can be used with any version of MODFLOW that meets certain requirements on input and output, and (2) it is structured to allow parallel processing of the repeated runs of the MODFLOW model that are required to solve the optimization problem. GWM–VI uses the same input structure for files that describe the management problem as that used by prior versions of Groundwater Management. GWM–VI requires only minor changes to the input files used by the MODFLOW model. GWM–VI uses the Joint Universal Parameter IdenTification and Evaluation of Reliability Application Programming Interface (JUPITER-API) to implement both version independence and parallel processing. GWM–VI communicates with the MODFLOW model by manipulating certain input files and interpreting results from the MODFLOW listing file and binary output files. Nearly all capabilities of prior versions of Groundwater Management are available in GWM–VI. GWM–VI has been tested with MODFLOW-2005, MODFLOW-NWT (a Newton formulation for MODFLOW-2005), MF2005-FMP2 (the Farm Process for MODFLOW-2005), SEAWAT, and CFP (Conduit Flow Process for MODFLOW-2005). This report provides sample problems that demonstrate a range of applications of GWM–VI and the directory structure and input information required to use the parallel-processing capability.
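
    The parallel-processing capability rests on the runs being independent: each candidate solution the optimizer evaluates triggers one forward model run. The sketch below shows only that structure; the executable name, file names, and the "Normal termination" check are assumptions about a MODFLOW-like code, not GWM-VI or JUPITER-API internals.

```python
import os
import subprocess
from concurrent.futures import ProcessPoolExecutor

def run_flow_model(run_dir):
    """Run one forward model in its own working directory; the executable
    and file names here are assumed, not prescribed by GWM-VI."""
    subprocess.run(["mf2005", "model.nam"], cwd=run_dir, check=True)
    with open(os.path.join(run_dir, "model.lst")) as listing:
        return "Normal termination" in listing.read()

def run_all(run_dirs, max_workers=4):
    # Independent model runs can be farmed out to a process pool.
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(run_flow_model, run_dirs))
```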

  5. Real-time processing of radar return on a parallel computer

    NASA Technical Reports Server (NTRS)

    Aalfs, David D.

    1992-01-01

    NASA is working with the FAA to demonstrate the feasibility of pulse Doppler radar as a candidate airborne sensor to detect low altitude windshears. The need to provide the pilot with timely information about possible hazards has motivated a demand for real-time processing of a radar return. Investigated here is parallel processing as a means of accommodating the high data rates required. A PC-based parallel computer, called the transputer, is used to investigate issues in real-time concurrent processing of radar signals. A transputer network is made up of an array of single-instruction-stream processors that can be networked in a variety of ways. They are easily reconfigured, and software development is largely independent of the particular network topology. The performance of the transputer is evaluated in light of the computational requirements. A number of algorithms have been implemented on the transputers in OCCAM, a language specially designed for parallel processing. These include signal processing algorithms such as the Fast Fourier Transform (FFT), pulse-pair, and autoregressive modeling, as well as routing software to support concurrency. The most computationally intensive task is estimating the spectrum. Two approaches have been taken to this problem, the first and most conventional of which is to use the FFT. By using table look-ups for the basis functions and other optimizing techniques, an algorithm has been developed that is sufficient for real time. The other approach is to model the signal as an autoregressive process and estimate the spectrum based on the model coefficients. This technique is attractive because it does not suffer from the spectral leakage problem inherent in the FFT. Benchmark tests indicate that autoregressive modeling is feasible in real time.
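
    Of the estimators mentioned, pulse-pair is the cheapest and simplest to state: the mean Doppler velocity comes from the phase of the lag-one autocorrelation of the I/Q samples. Below is a hedged NumPy sketch; sign conventions vary with the receiver's I/Q definition, and the synthetic example parameters are ours, not from the report.

```python
import numpy as np

def pulse_pair(iq, prt, wavelength):
    """Pulse-pair mean velocity and spectrum width from complex I/Q
    samples at one range gate; sign convention is receiver-dependent."""
    r0 = np.mean(np.abs(iq) ** 2)                  # lag-0 power
    r1 = np.mean(iq[1:] * np.conj(iq[:-1]))        # lag-1 autocorrelation
    v_nyq = wavelength / (4.0 * prt)               # Nyquist velocity
    v_mean = (v_nyq / np.pi) * np.angle(r1)
    width = (v_nyq * np.sqrt(2.0) / np.pi) * np.sqrt(abs(np.log(r0 / abs(r1))))
    return v_mean, width

# Synthetic check: one scatterer at +10 m/s, 10-cm wavelength, 1-ms PRT
rng = np.random.default_rng(0)
prt, lam = 1e-3, 0.10
t = np.arange(64) * prt
iq = (np.exp(1j * 4 * np.pi * 10.0 * t / lam)
      + 0.01 * (rng.standard_normal(64) + 1j * rng.standard_normal(64)))
print(pulse_pair(iq, prt, lam))   # ~ (10.0, small width)
```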

  6. Parallel processing of general and specific threat during early stages of perception

    PubMed Central

    2016-01-01

    Differential processing of threat can be completed as early as 100 ms post-stimulus. Moreover, early perception not only differentiates threat from non-threat stimuli but also distinguishes among discrete threat subtypes (e.g. fear, disgust and anger). Combining spatial-frequency-filtered images of fear, disgust and neutral scenes with high-density event-related potentials and intracranial source estimation, we investigated the neural underpinnings of general and specific threat processing in early stages of perception. Conveyed in low spatial frequencies, fear and disgust images evoked convergent visual responses with similarly enhanced N1 potentials and dorsal visual (middle temporal gyrus) cortical activity (relative to neutral cues; peaking at 156 ms). Nevertheless, conveyed in high spatial frequencies, fear and disgust elicited divergent visual responses, with fear enhancing and disgust suppressing P1 potentials and ventral visual (occipital fusiform) cortical activity (peaking at 121 ms). Therefore, general and specific threat processing operates in parallel in early perception, with the ventral visual pathway engaged in specific processing of discrete threats and the dorsal visual pathway in general threat processing. Furthermore, selectively tuned to distinctive spatial-frequency channels and visual pathways, these parallel processes underpin dimensional and categorical threat characterization, promoting efficient threat response. These findings thus lend support to hybrid models of emotion. PMID:26412811

  7. Large-scale data-flow computer for parallel signal processing

    SciTech Connect

    Wong, F.S.; Ito, M.R.

    1982-01-01

    The authors describe a proposed data-driven, parallel computing machine for signal processing applications in which program codes are often executed repeatedly. This dataflow computer (DFC) consists of a large number of processing modules (PMs) operating asynchronously; multiple concurrent activations of a single procedure can be supported by each PM without replication of code. The architectural design emphasizes simplicity of system operations, modularity, speed, and feasibility with current technology. Performance studies are carried out via software simulations. The results provide insights into the basic organization and the various modes of computation; the speed-ups and robustness of the design are also tested under variations of several system parameters.

  8. Nonlinear structural response using adaptive dynamic relaxation on a massively-parallel-processing system

    NASA Technical Reports Server (NTRS)

    Oakley, David R.; Knight, Norman F., Jr.

    1994-01-01

    A parallel adaptive dynamic relaxation (ADR) algorithm has been developed for nonlinear structural analysis. This algorithm has minimal memory requirements, is easily parallelizable and scalable to many processors, and is generally very reliable and efficient for highly nonlinear problems. Performance evaluations on single-processor computers have shown that the ADR algorithm is reliable and highly vectorizable, and that it is competitive with direct solution methods for the highly nonlinear problems considered. The present algorithm is implemented on the 512-processor Intel Touchstone DELTA system at Caltech, and it is designed to minimize the extent and frequency of interprocessor communication. The algorithm has been used to solve for the nonlinear static response of two and three dimensional hyperelastic systems involving contact. Impressive relative speedups have been achieved and demonstrate the high scalability of the ADR algorithm. For the class of problems addressed, the ADR algorithm represents a very promising approach for parallel-vector processing.
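
    The underlying dynamic relaxation iteration is compact enough to sketch: march damped pseudo-dynamics until the out-of-balance force vanishes, storing only vectors, which is why memory needs are minimal and every update is an easily parallelized elementwise operation. The version below is a minimal non-adaptive illustration with a fixed step and damping; the adaptive variant in the paper also tunes the fictitious mass and damping each iteration, which is omitted here, and the toy residual is our own.

```python
import numpy as np

def dynamic_relaxation(residual, u0, h=0.5, c=0.1, tol=1e-8, max_steps=50000):
    """Explicit damped pseudo-dynamics driving residual(u) -> 0."""
    u = u0.copy()
    v = np.zeros_like(u)
    for _ in range(max_steps):
        r = residual(u)                  # out-of-balance force
        if np.linalg.norm(r) < tol:
            break
        v = (1.0 - c) * v + h * r        # damped velocity update
        u = u + h * v                    # central-difference position update
    return u

# Toy nonlinear system: f_ext - f_int(u) with a cubic hardening spring
res = lambda u: np.array([1.0, -0.5]) - (u + 0.1 * u**3)
print(dynamic_relaxation(res, np.zeros(2)))
```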

  9. Transform methods for developing parallel algorithms for cyclic-block signal processing

    NASA Astrophysics Data System (ADS)

    Marshall, T. G., Jr.

    A class of FIR and IIR single and multirate parallel filtering algorithms is introduced in which blocks of inputs and outputs are processed on-the-fly in a cyclic manner. There is no inherent latency introduced by the decomposition procedure giving the parallelism, the system latency being primarily due to the component processors. The structure is particularly well-suited for systems in which the component processors are the familiar DSP chips optimized for convolution although other component structures can be accommodated. In particular, the automatic data shifting feature of the TMS320 series processors can be utilized in these algorithms. A transform notation, introduced for digital filter banks, is recast in the desired form for this application. The resulting structure of the system, in this notation, is a circulant matrix for FIR filtering or a related matrix in other cases. The cyclic properties of the system and useful implementation flexibility result from this matrix structure.
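
    A familiar concrete instance of this kind of cyclic block processing is FFT-based overlap-save FIR filtering: each block's output is one circulant-matrix product, and blocks arrive and are processed on the fly. The NumPy sketch below is our illustration of that general idea, not a reconstruction of the paper's transform notation, and the block size is arbitrary.

```python
import numpy as np

def block_fir_overlap_save(x, h, block=256):
    """FIR filtering block by block; each FFT product realizes one
    circular (circulant-matrix) convolution of the current buffer."""
    L = len(h)
    n_fft = block + L - 1
    H = np.fft.rfft(h, n_fft)
    y = np.zeros(len(x))
    buf = np.zeros(n_fft)
    for start in range(0, len(x), block):
        seg = x[start:start + block]
        # Keep the last L-1 samples as overlap, append the new block
        buf = np.concatenate([buf[-(L - 1):], seg, np.zeros(block - len(seg))])
        yb = np.fft.irfft(np.fft.rfft(buf) * H, n_fft)
        y[start:start + len(seg)] = yb[L - 1:L - 1 + len(seg)]  # valid samples
    return y

# The block-processed result matches direct convolution
rng = np.random.default_rng(1)
x, h = rng.standard_normal(1000), rng.standard_normal(31)
assert np.allclose(block_fir_overlap_save(x, h),
                   np.convolve(x, h)[:len(x)], atol=1e-8)
```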

  10. A message passing kernel for the hypercluster parallel processing test bed

    NASA Technical Reports Server (NTRS)

    Blech, Richard A.; Quealy, Angela; Cole, Gary L.

    1989-01-01

    A Message-Passing Kernel (MPK) for the Hypercluster parallel-processing test bed is described. The Hypercluster is being developed at the NASA Lewis Research Center to support investigations of parallel algorithms and architectures for computational fluid and structural mechanics applications. The Hypercluster resembles the hypercube architecture except that each node consists of multiple processors communicating through shared memory. The MPK efficiently routes information through the Hypercluster, using a message-passing protocol when necessary and faster shared-memory communication whenever possible. The MPK also interfaces all of the processors with the Hypercluster operating system (HYCLOPS), which runs on a Front-End Processor (FEP). This approach distributes many of the I/O tasks to the Hypercluster processors and eliminates the need for a separate I/O support program on the FEP.

  11. A targeted enrichment strategy for massively parallel sequencing of angiosperm plastid genomes

    PubMed Central

    Stull, Gregory W.; Moore, Michael J.; Mandala, Venkata S.; Douglas, Norman A.; Kates, Heather-Rose; Qi, Xinshuai; Brockington, Samuel F.; Soltis, Pamela S.; Soltis, Douglas E.; Gitzendanner, Matthew A.

    2013-01-01

    • Premise of the study: We explored a targeted enrichment strategy to facilitate rapid and low-cost next-generation sequencing (NGS) of numerous complete plastid genomes from across the phylogenetic breadth of angiosperms. • Methods and Results: A custom RNA probe set including the complete sequences of 22 previously sequenced eudicot plastomes was designed to facilitate hybridization-based targeted enrichment of eudicot plastid genomes. Using this probe set and an Agilent SureSelect targeted enrichment kit, we conducted an enrichment experiment including 24 angiosperms (22 eudicots, two monocots), which were subsequently sequenced on a single lane of the Illumina GAIIx with single-end, 100-bp reads. This approach yielded nearly complete to complete plastid genomes with exceptionally high coverage (mean coverage: 717×), even for the two monocots. • Conclusions: Our enrichment experiment was highly successful even though many aspects of the capture process employed were suboptimal. Hence, significant improvements to this methodology are feasible. With this general approach and probe set, it should be possible to sequence more than 300 essentially complete plastid genomes in a single Illumina GAIIx lane (achieving ∼50× mean coverage). However, given the complications of pooling numerous samples for multiplex sequencing and the limited number of barcodes (e.g., 96) available in commercial kits, we recommend 96 samples as a current practical maximum for multiplex plastome sequencing. This high-throughput approach should facilitate large-scale plastid genome sequencing at any level of phylogenetic diversity in angiosperms. PMID:25202518

  12. A parallel and distributed-processing model of joint attention, social cognition and autism.

    PubMed

    Mundy, Peter; Sullivan, Lisa; Mastergeorge, Ann M

    2009-02-01

    The impaired development of joint attention is a cardinal feature of autism. Therefore, understanding the nature of joint attention is central to research on this disorder. Joint attention may be best defined in terms of an information-processing system that begins to develop by 4-6 months of age. This system integrates the parallel processing of internal information about one's own visual attention with external information about the visual attention of other people. This type of joint encoding of information about self and other attention requires the activation of a distributed anterior and posterior cortical attention network. Genetic regulation, in conjunction with self-organizing behavioral activity, guides the development of functional connectivity in this network. With practice in infancy the joint processing of self-other attention becomes automatically engaged as an executive function. It can be argued that this executive joint attention is fundamental to human learning as well as the development of symbolic thought, social cognition and social competence throughout the life span. One advantage of this parallel and distributed-processing model of joint attention is that it directly connects theory on social pathology to a range of phenomena in autism associated with neural connectivity, constructivist and connectionist models of cognitive development, early intervention, activity-dependent gene expression and atypical ocular motor control.

  13. An experimental research on the mixing process of supersonic oxygen-iodine parallel streams

    NASA Astrophysics Data System (ADS)

    Wang, Zengqiang; Sang, Fengting; Zhang, Yuelong; Hui, Xiaokang; Xu, Mingxiu; Zhang, Peng; Zhao, Weili; Fang, Benjie; Duo, Liping; Jin, Yuqi

    2014-12-01

    The O2(1Δ)/I2 mixing process is one of the most important steps in chemical oxygen-iodine laser (COIL). Based on the chemical fluorescence method (CFM), a diagnostic system was set up to image electronically excited fluorescent I2(B3П0) by means of a high speed camera. An optimized data analysis approach was proposed to analyze the mixing process of supersonic oxygen-iodine parallel streams, employing a set of qualitative and quantitative parameters and a proper percentage boundary threshold of the fluorescence zone. A slit nozzle bank with supersonic parallel streams and a trip tab set for enhancing the mixing process were designed and fabricated. With the diagnostic system and the data analysis approach, the performance of the trip tab set was examined and is demonstrated in this work. With the mixing enhancement, the fluorescence zone area was enlarged 3.75 times. We have studied the mixing process under different flow conditions and demonstrated the mixing properties with different iodine buffer gases, including N2, Ar, He and CO2. It was found that, among the four tested gases, Ar had the best penetration ability, whilst He showed the best free diffusion ability, and both of them could be well used as the buffer gas in our experiments. These experimental results can be useful for designing and optimizing COIL systems.

  14. A Parallel and Distributed Processing Model of Joint Attention, Social-Cognition and Autism

    PubMed Central

    Mundy, Peter; Sullivan, Lisa; Mastergeorge, Ann M.

    2009-01-01

    The impaired development of joint attention is a cardinal feature of autism. Therefore, understanding the nature of joint attention is central to research on this disorder. Joint attention may be best defined in terms of an information processing system that begins to develop by 4–6 months of age. This system integrates the parallel processing of internal information about one’s own visual attention with external information about the visual attention of other people. This type of joint encoding of information about self and other attention requires the activation of a distributed anterior and posterior cortical attention network. Genetic regulation, in conjunction with self-organizing behavioral activity, guides the development of functional connectivity in this network. With practice in infancy the joint processing of self-other attention becomes automatically engaged as an executive function. It can be argued that this executive joint attention is fundamental to human learning, as well as the development of symbolic thought, social cognition and social competence throughout the life span. One advantage of this parallel and distributed processing model of joint attention (PDPM) is that it directly connects theory on social pathology to a range of phenomena in autism associated with neural connectivity, constructivist and connectionist models of cognitive development, early intervention, activity-dependent gene expression, and atypical ocular motor control. PMID:19358304

  15. Parallel optical control of spatiotemporal neuronal spike activity using high-speed digital light processing.

    PubMed

    Jerome, Jason; Foehring, Robert C; Armstrong, William E; Spain, William J; Heck, Detlef H

    2011-01-01

    Neurons in the mammalian neocortex receive inputs from and communicate back to thousands of other neurons, creating complex spatiotemporal activity patterns. The experimental investigation of these parallel dynamic interactions has been limited due to the technical challenges of monitoring or manipulating neuronal activity at that level of complexity. Here we describe a new massively parallel photostimulation system that can be used to control action potential firing in in vitro brain slices with high spatial and temporal resolution while performing extracellular or intracellular electrophysiological measurements. The system uses digital light processing technology to generate 2-dimensional (2D) stimulus patterns with >780,000 independently controlled photostimulation sites that operate at high spatial (5.4 μm) and temporal (>13 kHz) resolution. Light is projected through the quartz-glass bottom of the perfusion chamber providing access to a large area (2.76 mm × 2.07 mm) of the slice preparation. This system has the unique capability to induce temporally precise action potential firing in large groups of neurons distributed over a wide area covering several cortical columns. Parallel photostimulation opens up new opportunities for the in vitro experimental investigation of spatiotemporal neuronal interactions at a broad range of anatomical scales.

  16. Parallel Optical Control of Spatiotemporal Neuronal Spike Activity Using High-Speed Digital Light Processing

    PubMed Central

    Jerome, Jason; Foehring, Robert C.; Armstrong, William E.; Spain, William J.; Heck, Detlef H.

    2011-01-01

    Neurons in the mammalian neocortex receive inputs from and communicate back to thousands of other neurons, creating complex spatiotemporal activity patterns. The experimental investigation of these parallel dynamic interactions has been limited due to the technical challenges of monitoring or manipulating neuronal activity at that level of complexity. Here we describe a new massively parallel photostimulation system that can be used to control action potential firing in in vitro brain slices with high spatial and temporal resolution while performing extracellular or intracellular electrophysiological measurements. The system uses digital light processing technology to generate 2-dimensional (2D) stimulus patterns with >780,000 independently controlled photostimulation sites that operate at high spatial (5.4 μm) and temporal (>13 kHz) resolution. Light is projected through the quartz–glass bottom of the perfusion chamber providing access to a large area (2.76 mm × 2.07 mm) of the slice preparation. This system has the unique capability to induce temporally precise action potential firing in large groups of neurons distributed over a wide area covering several cortical columns. Parallel photostimulation opens up new opportunities for the in vitro experimental investigation of spatiotemporal neuronal interactions at a broad range of anatomical scales. PMID:21904526

  17. Equivalency-processing parallel photonic integrated circuit (EP3IC): equivalence search module based on multiwavelength guided-wave technology.

    PubMed

    Detofsky, A; Choo, P Y; Louri, A

    2000-02-10

    We present an optoelectronic module called the equivalency-processing parallel photonic integrated circuit (EP3IC) that is created specifically to implement high-speed parallel equivalence searches (i.e., database word searches). The module combines a parallel-computation model with multiwavelength photonic integrated-circuit technology to achieve high-speed data processing. On the basis of simulation and initial analytical computation, a single-step multicomparand word-parallel bit-parallel equality search can attain an aggregate processing speed of 82 Tbit/s. We outline the theoretical design of the monolithic module and the integrated components and compare this with a functionally identical bulk-optics implementation. This integrated-circuit solution provides relatively low-power operation, fast switching speed, a compact system footprint, vibration tolerance, and ease of manufacturing.
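
    The electronic analogue of a word-parallel, bit-parallel equality search is a single vectorized XOR against all stored words at once, as in the toy NumPy fragment below; this is our illustration of the search semantics only, not of the optical implementation.

```python
import numpy as np

# All stored words are compared against the comparand in one step;
# a zero XOR result marks an exact match.
database = np.array([0b1011, 0b0110, 0b1011, 0b0001], dtype=np.uint8)
comparand = np.uint8(0b1011)
matches = np.flatnonzero((database ^ comparand) == 0)
print(matches)   # -> [0 2]
```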

  18. Decomposition: A Strategy for Query Processing.

    ERIC Educational Resources Information Center

    Wong, Eugene; Youssefi, Karel

    Multivariable queries can be processed in the data base management system INGRES. The general procedure is to decompose the query into a sequence of one-variable queries using two processes. One process is reduction which requires breaking off components of the query which are joined to it by a single variable. The other process,…

  19. A Theory of Interactive Parallel Processing: New Capacity Measures and Predictions for a Response Time Inequality Series

    ERIC Educational Resources Information Center

    Townsend, James T.; Wenger, Michael J.

    2004-01-01

    The authors present a theory of stochastic interactive parallel processing with special emphasis on channel interactions and their relation to system capacity. The approach is based both on linear systems theory augmented with stochastic elements and decisional operators and on a metatheory of parallel channels' dependencies that incorporates…

  20. A Pervasive Parallel Processing Framework for Data Visualization and Analysis at Extreme Scale

    SciTech Connect

    Moreland, Kenneth; Geveci, Berk

    2014-11-01

    The evolution of the computing world from teraflop to petaflop has been relatively effortless, with several of the existing programming models scaling effectively to the petascale. The migration to exascale, however, poses considerable challenges. All industry trends indicate that the exascale machine will be built using processors containing hundreds to thousands of cores per chip. It can be inferred that efficient concurrency on exascale machines requires a massive amount of concurrent threads, each performing many operations on a localized piece of data. Currently, visualization libraries and applications are based on what is known as the visualization pipeline. In the pipeline model, algorithms are encapsulated as filters with inputs and outputs. These filters are connected by setting the output of one component to the input of another. Parallelism in the visualization pipeline is achieved by replicating the pipeline for each processing thread. This works well for today’s distributed memory parallel computers but cannot be sustained when operating on processors with thousands of cores. Our project investigates a new visualization framework designed to exhibit the pervasive parallelism necessary for extreme scale machines. Our framework achieves this by defining algorithms in terms of worklets, which are localized stateless operations. Worklets are atomic operations that execute when invoked, unlike filters, which execute when a pipeline request occurs. The worklet design allows execution on a massive amount of lightweight threads with minimal overhead. Only with such fine-grained parallelism can we hope to fill the billions of threads we expect will be necessary for efficient computation on an exascale machine.
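
    The filter-versus-worklet distinction can be made concrete with a toy dispatcher: a worklet sees one datum, keeps no state, and is invoked directly, so the runtime is free to spread the calls across as many threads as exist. This is a schematic of the concept only; the names are ours, and a real framework of the kind described would compile worklets to device code rather than loop in Python.

```python
import numpy as np

def magnitude_worklet(v):
    # A worklet: operates on one point's value only -- no neighbors, no state.
    return np.sqrt(np.sum(v * v))

def invoke(worklet, field):
    # The dispatcher owns the schedule; each call is independent, so this
    # loop could be executed by thousands of lightweight threads.
    return np.array([worklet(v) for v in field])

field = np.random.default_rng(0).standard_normal((10, 3))  # 3-vector field
print(invoke(magnitude_worklet, field))
```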

  1. Mobile Monitoring Data Processing & Analysis Strategies

    EPA Science Inventory

    The development of portable, high-time resolution instruments for measuring the concentrations of a variety of air pollutants has made it possible to collect data while in motion. This strategy, known as mobile monitoring, involves mounting air sensors on variety of different pla...

  2. Mobile Monitoring Data Processing and Analysis Strategies

    EPA Science Inventory

    The development of portable, high-time resolution instruments for measuring the concentrations of a variety of air pollutants has made it possible to collect data while in motion. This strategy, known as mobile monitoring, involves mounting air sensors on variety of different pla...

  3. Parallel Demand-Withdraw Processes in Family Therapy for Adolescent Drug Abuse

    PubMed Central

    Rynes, Kristina N.; Rohrbaugh, Michael J.; Lebensohn-Chialvo, Florencia; Shoham, Varda

    2013-01-01

    Isomorphism, or parallel process, occurs in family therapy when patterns of therapist-client interaction replicate problematic interaction patterns within the family. This study investigated parallel demand-withdraw processes in Brief Strategic Family Therapy (BSFT) for adolescent drug abuse, hypothesizing that therapist-demand/adolescent-withdraw interaction (TD/AW) cycles observed early in treatment would predict poor adolescent outcomes at follow-up for families who exhibited entrenched parent-demand/adolescent-withdraw interaction (PD/AW) before treatment began. Participants were 91 families who received at least 4 sessions of BSFT in a multi-site clinical trial on adolescent drug abuse (Robbins et al., 2011). Prior to receiving therapy, families completed videotaped family interaction tasks from which trained observers coded PD/AW. Another team of raters coded TD/AW during two early BSFT sessions. The main dependent variable was the number of drug use days that adolescents reported in Timeline Follow-Back interviews 7 to 12 months after family therapy began. Zero-inflated Poisson (ZIP) regression analyses supported the main hypothesis, showing that PD/AW and TD/AW interacted to predict adolescent drug use at follow-up. For adolescents in high PD/AW families, higher levels of TD/AW predicted significant increases in drug use at follow-up, whereas for low PD/AW families, TD/AW and follow-up drug use were unrelated. Results suggest that attending to parallel demand-withdraw processes in parent/adolescent and therapist/adolescent dyads may be useful in family therapy for substance-using adolescents. PMID:23438248

  4. Adventures in Parallel Processing: Entry, Descent and Landing Simulation for the Genesis and Stardust Missions

    NASA Technical Reports Server (NTRS)

    Lyons, Daniel T.; Desai, Prasun N.

    2005-01-01

    This paper will describe the Entry, Descent and Landing simulation tradeoffs and techniques that were used to provide the Monte Carlo data required to approve entry during a critical period just before entry of the Genesis Sample Return Capsule. The same techniques will be used again when Stardust returns on January 15, 2006. Only one hour was available for the simulation which propagated 2000 dispersed entry states to the ground. Creative simulation tradeoffs combined with parallel processing were needed to provide the landing footprint statistics that were an essential part of the Go/NoGo decision that authorized release of the Sample Return Capsule a few hours before entry.
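
    Monte Carlo entry dispersion analyses of this kind are embarrassingly parallel: each dispersed entry state propagates independently, and only the landing statistics are aggregated. The stub below shows just that structure; the propagation itself is a placeholder, not the Genesis/Stardust simulation, and the dispersion scales are invented.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def propagate(seed):
    """Placeholder for propagating one dispersed entry state to the
    ground; returns a (downrange, crossrange) landing-point deviation."""
    rng = np.random.default_rng(seed)
    return rng.normal(scale=[5.0, 2.0])   # km, purely illustrative

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        landings = np.array(list(pool.map(propagate, range(2000))))
    # Footprint statistics feeding a Go/NoGo style decision
    print(landings.mean(axis=0), landings.std(axis=0))
```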

  5. An Optimization System with Parallel Processing for Reducing Common-Mode Current on Electronic Control Unit

    NASA Astrophysics Data System (ADS)

    Okazaki, Yuji; Uno, Takanori; Asai, Hideki

    In this paper, we propose an optimization system with parallel processing for reducing electromagnetic interference (EMI) on an electronic control unit (ECU). We adopt simulated annealing (SA), genetic algorithm (GA) and taboo search (TS) to seek optimal solutions, and a Spice-like circuit simulator to analyze the common-mode current. Therefore, the proposed system can determine adequate combinations of the parasitic inductance and capacitance values on a printed circuit board (PCB) efficiently and practically, to reduce the EMI caused by the common-mode current. Finally, we apply the proposed system to an example circuit to verify the validity and efficiency of the system.
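
    Of the three metaheuristics named, simulated annealing is the simplest to sketch. Below is a generic SA loop over discrete component values, with a toy cost standing in for the expensive circuit-simulator evaluation of the common-mode current; every name and number is illustrative, not from the paper.

```python
import numpy as np

def simulated_annealing(cost, values, n_params, iters=5000, t0=1.0, seed=0):
    """Generic SA over discrete component values; cost() stands in for
    the circuit-simulator evaluation of the common-mode current."""
    rng = np.random.default_rng(seed)
    x = rng.choice(values, n_params)
    fx = cost(x)
    best, best_f = x.copy(), fx
    for k in range(iters):
        t = t0 * (1.0 - k / iters) + 1e-12              # linear cooling
        y = x.copy()
        y[rng.integers(n_params)] = rng.choice(values)  # perturb one component
        fy = cost(y)
        if fy < fx or rng.random() < np.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < best_f:
                best, best_f = x.copy(), fx
    return best, best_f

# Toy stand-in cost: distance of chosen component values from a target set
values = np.array([1.0, 2.2, 4.7, 10.0])   # candidate values, arbitrary units
target = np.array([4.7, 1.0, 2.2])
cost = lambda x: float(np.sum((x - target) ** 2))
print(simulated_annealing(cost, values, n_params=3))
```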

  6. Leveraging human oversight and intervention in large-scale parallel processing of open-source data

    NASA Astrophysics Data System (ADS)

    Casini, Enrico; Suri, Niranjan; Bradshaw, Jeffrey M.

    2015-05-01

    The popularity of cloud computing, along with the increased availability of cheap storage, has made it necessary to process and transform large volumes of open-source data in parallel. One way to handle such extensive volumes of information properly is to take advantage of distributed computing frameworks like Map-Reduce. Unfortunately, an entirely automated approach that excludes human intervention is often unpredictable and error prone. Highly accurate data processing and decision-making can be achieved by supporting an automatic process through human collaboration, in a variety of environments such as warfare, cyber security and threat monitoring. Although this mutual participation seems easily exploitable, human-machine collaboration in the field of data analysis presents several challenges. First, due to the asynchronous nature of human intervention, it is necessary to verify that once a correction is made, all the necessary reprocessing is done along the chain. Second, it is often necessary to minimize the amount of reprocessing in order to optimize the usage of resources due to limited availability. To address these strict requirements, this paper introduces improvements to an innovative approach for human-machine collaboration in the processing of large amounts of open-source data in parallel.

  7. Parenting and the parallel processes in parents' counseling supervision for eating-related problems.

    PubMed

    Golan, Moria

    2014-04-01

    This paper presents an integrative model for supervising counselors of parents who face eating-related problems in their families. The model is grounded in the theory of parallel processes which occur during the supervision of health-care professionals as well as the counseling of parents and patients. The aim of this model is to conceptualize components and processes in the supervision space, in order to: (a) create a nurturing environment for health-care facilitators, parents and children, (b) better understand the complex and difficult nature of parenting, the challenge counselors face, and the skills and practices used in parenting and in counseling, and (c) better own practices and oppose the judgment that often dominates in counseling and supervision. This paper reflects upon the tradition of supervision and offers a comprehensive view of this process, including its challenges, skills and practices.

  8. Towards a Standard Mixed-Signal Parallel Processing Architecture for Miniature and Microrobotics.

    PubMed

    Sadler, Brian M; Hoyos, Sebastian

    2014-01-01

    The conventional analog-to-digital conversion (ADC) and digital signal processing (DSP) architecture has led to major advances in miniature and micro-systems technology over the past several decades. The outlook for these systems is significantly enhanced by advances in sensing, signal processing, communications and control, and the combination of these technologies enables autonomous robotics on the miniature to micro scales. In this article we look at trends in the combination of analog and digital (mixed-signal) processing, and consider a generalized sampling architecture. Employing a parallel analog basis expansion of the input signal, this scalable approach is adaptable and reconfigurable, and is suitable for a large variety of current and future applications in networking, perception, cognition, and control.

  9. Towards a Standard Mixed-Signal Parallel Processing Architecture for Miniature and Microrobotics

    PubMed Central

    Sadler, Brian M; Hoyos, Sebastian

    2014-01-01

    The conventional analog-to-digital conversion (ADC) and digital signal processing (DSP) architecture has led to major advances in miniature and micro-systems technology over the past several decades. The outlook for these systems is significantly enhanced by advances in sensing, signal processing, communications and control, and the combination of these technologies enables autonomous robotics on the miniature to micro scales. In this article we look at trends in the combination of analog and digital (mixed-signal) processing, and consider a generalized sampling architecture. Employing a parallel analog basis expansion of the input signal, this scalable approach is adaptable and reconfigurable, and is suitable for a large variety of current and future applications in networking, perception, cognition, and control. PMID:26601042

  10. Churchill: an ultra-fast, deterministic, highly scalable and balanced parallelization strategy for the discovery of human genetic variation in clinical and population-scale genomics.

    PubMed

    Kelly, Benjamin J; Fitch, James R; Hu, Yangqiu; Corsmeier, Donald J; Zhong, Huachun; Wetzel, Amy N; Nordquist, Russell D; Newsom, David L; White, Peter

    2015-01-20

    While advances in genome sequencing technology make population-scale genomics a possibility, current approaches for analysis of these data rely upon parallelization strategies that have limited scalability, complex implementation and lack reproducibility. Churchill, a balanced regional parallelization strategy, overcomes these challenges, fully automating the multiple steps required to go from raw sequencing reads to variant discovery. Through implementation of novel deterministic parallelization techniques, Churchill allows computationally efficient analysis of a high-depth whole genome sample in less than two hours. The method is highly scalable, enabling full analysis of the 1000 Genomes raw sequence dataset in a week using cloud resources. http://churchill.nchri.org/.

  11. Achieving National Security Strategy: An Effective Process?

    DTIC Science & Technology

    2008-01-01

    Smith, a career foreign service officer and former Deputy Chief of Mission, the strength of DOS is its ability to operate with minimal guidance. The...DOS’s five-year strategic plan may offer the minimal guidance Mr. Smith suggests. This five-year plan, which is provided by the Secretary of State...outlines the department’s overall strategy, which gives the latitude required to achieve its mission goals. Mr. Smith also recognizes that, "Most State

  12. Rankings in Institutional Strategies and Processes: Impact or Illusion?

    ERIC Educational Resources Information Center

    Hazelkorn, Ellen; Loukkola, Tia; Zhang, Thérèse

    2014-01-01

    The "Rankings in Institutional Strategies and Processes" (RISP) project is the first pan-European study of the impact and influence of rankings on European higher education institutions. The project has sought to build understanding of how rankings impact and influence the development of institutional strategies and processes and its…

  13. Parallelized multi-graphics processing unit framework for high-speed Gabor-domain optical coherence microscopy.

    PubMed

    Tankam, Patrice; Santhanam, Anand P; Lee, Kye-Sung; Won, Jungeun; Canavesi, Cristina; Rolland, Jannick P

    2014-07-01

    Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000×1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1×1×0.6  mm3 skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real-time to the examiner using parallelized GPU processing.
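
    The per-GPU batching policy described above (pick a batch size of A-scans that fills memory and keeps cores busy, then spread batches across the GPUs) reduces to a small scheduling routine. The sketch below is our schematic of such an assignment, with made-up names and limits; it is not code from the GD-OCM pipeline.

```python
def assign_ascans(n_ascans, n_gpus, max_per_batch):
    """Split A-scans into contiguous batches and deal them round-robin
    across GPUs; max_per_batch models the per-GPU memory/throughput cap."""
    tasks, start, gpu = [], 0, 0
    while start < n_ascans:
        stop = min(start + max_per_batch, n_ascans)
        tasks.append((gpu, start, stop))   # (gpu_id, first, last+1)
        gpu = (gpu + 1) % n_gpus
        start = stop
    return tasks

# 1000x1000 A-scans split across 4 GPUs, 250k A-scans per batch
for task in assign_ascans(1_000_000, n_gpus=4, max_per_batch=250_000):
    print(task)
```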

  14. Parallelized multi–graphics processing unit framework for high-speed Gabor-domain optical coherence microscopy

    PubMed Central

    Tankam, Patrice; Santhanam, Anand P.; Lee, Kye-Sung; Won, Jungeun; Canavesi, Cristina; Rolland, Jannick P.

    2014-01-01

    Abstract. Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000×1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1×1×0.6  mm3 skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real-time to the examiner using parallelized GPU processing. PMID:24695868

  15. MC64-ClustalWP2: A Highly-Parallel Hybrid Strategy to Align Multiple Sequences in Many-Core Architectures

    PubMed Central

    Díaz, David; Esteban, Francisco J.; Hernández, Pilar; Caballero, Juan Antonio; Guevara, Antonio

    2014-01-01

    We have developed the MC64-ClustalWP2 as a new implementation of the Clustal W algorithm, integrating a novel parallelization strategy and significantly increasing the performance when aligning long sequences in architectures with many cores. It must be stressed that in such a process, the detailed analysis of both the software and hardware features and peculiarities is of paramount importance to reveal key points to exploit and optimize the full potential of parallelism in many-core CPU systems. The new parallelization approach has focused on the most time-consuming stages of this algorithm. In particular, the so-called progressive alignment has drastically improved the performance, due to a fine-grained approach where the forward and backward loops were unrolled and parallelized. Another key approach has been the implementation of the new algorithm in a hybrid-computing system, integrating both an Intel Xeon multi-core CPU and a Tilera Tile64 many-core card. A comparison with other Clustal W implementations reveals the high performance of the new algorithm and strategy in many-core CPU architectures, in a scenario where the sequences to align are relatively long (more than 10 kb) and, hence, a many-core GPU hardware cannot be used. Thus, the MC64-ClustalWP2 runs multiple alignments more than 18x faster than the original Clustal W algorithm, and more than 7x faster than the best x86 parallel implementation to date, being publicly available through a web service. Besides, these developments have been deployed in cost-effective personal computers and should be useful for life-science researchers, including the identification of identities and differences for mutation/polymorphism analyses, biodiversity and evolutionary studies and for the development of molecular markers for paternity testing, germplasm management and protection, to assist breeding, illegal traffic control, fraud prevention and for the protection of the intellectual property (identification

  16. Process performance of parallel bioreactors for batch cultivation of Streptomyces tendae.

    PubMed

    Hortsch, Ralf; Krispin, Harald; Weuster-Botz, Dirk

    2011-03-01

    Batch cultivations of the nikkomycin Z producer Streptomyces tendae were performed in three different parallel bioreactor systems (milliliter-scale stirred-tank reactors, shake flasks and a shaken microtiter plate) in comparison to a standard liter-scale stirred-tank reactor as reference. Similar dry cell weight concentrations were measured as a function of process time in the stirred-tank reactors and shake flasks, whereas only poor growth was observed in the shaken microtiter plate. In contrast, the nikkomycin Z production differed significantly between the stirred and shaken bioreactors. The measured product concentrations and product formation kinetics were almost the same in the stirred-tank bioreactors of different scale. Much less nikkomycin Z was formed in the shake flask and microtiter plate cultivations, most probably due to oxygen limitations. To investigate the non-Newtonian shear-thinning behavior of the culture broth in small-scale bioreactors, a new and simple method was applied to estimate the rheological behavior. The apparent viscosities were found to be very similar in the stirred-tank bioreactors, whereas the apparent viscosity was up to two times higher in the shake flask cultivations due to a lower average shear rate in this reactor system. These data illustrate that the different engineering characteristics of parallel bioreactors applied for process development can have major implications for the scale-up of bioprocesses with non-Newtonian viscous culture broths.

  17. A parallel process growth model of avoidant personality disorder symptoms and personality traits.

    PubMed

    Wright, Aidan G C; Pincus, Aaron L; Lenzenweger, Mark F

    2013-07-01

    Avoidant personality disorder (AVPD), like other personality disorders, has historically been construed as a highly stable disorder. However, results from a number of longitudinal studies have found that the symptoms of AVPD demonstrate marked change over time. Little is known about which other psychological systems are related to this change. Although cross-sectional research suggests a strong relationship between AVPD and personality traits, no work has examined the relationship of their change trajectories. The current study sought to establish the longitudinal relationship between AVPD and basic personality traits using parallel process growth curve modeling. Parallel process growth curve modeling was applied to the trajectories of AVPD and basic personality traits from the Longitudinal Study of Personality Disorders (Lenzenweger, M. F., 2006, The longitudinal study of personality disorders: History, design considerations, and initial findings. Journal of Personality Disorders, 20, 645-670. doi:10.1521/pedi.2006.20.6.645), a naturalistic, prospective, multiwave, longitudinal study of personality disorder, temperament, and normal personality. The focus of these analyses is on the relationship between the rates of change in both AVPD symptoms and basic personality traits. AVPD symptom trajectories demonstrated significant negative relationships with the trajectories of interpersonal dominance and affiliation, and a significant positive relationship to rates of change in neuroticism. These results provide some of the first compelling evidence that trajectories of change in PD symptoms and personality traits are linked. These results have important implications for the ways in which temporal stability is conceptualized in AVPD specifically, and PD in general.

  18. Distributed representation of social odors indicates parallel processing in the antennal lobe of ants.

    PubMed

    Brandstaetter, Andreas Simon; Kleineidam, Christoph Johannes

    2011-11-01

    In colonies of eusocial Hymenoptera, cooperation is organized through social odors, and ants in particular rely on a sophisticated odor communication system. Neuronal information about odors is represented in spatial activity patterns in the primary olfactory neuropile of the insect brain, the antennal lobe (AL), which is analogous to the vertebrate olfactory bulb. The olfactory system is characterized by neuroanatomical compartmentalization, yet the functional significance of this organization is unclear. Using two-photon calcium imaging, we investigated the neuronal representation of multicomponent colony odors, which the ants assess to discriminate friends (nestmates) from foes (nonnestmates). In the carpenter ant Camponotus floridanus, colony odors elicited spatial activity patterns distributed across different AL compartments. Activity patterns in response to nestmate and nonnestmate colony odors were overlapping. This was expected since both consist of the same components at differing ratios. Colony odors change over time and the nervous system has to constantly adjust for this (template reformation). Measured activity patterns were variable, and variability was higher in response to repeated nestmate than to repeated nonnestmate colony odor stimulation. Variable activity patterns may indicate neuronal plasticity within the olfactory system, which is necessary for template reformation. Our results indicate that information about colony odors is processed in parallel in different neuroanatomical compartments, using the computational power of the whole AL network. Parallel processing might be advantageous, allowing reliable discrimination of highly complex social odors.

  19. Efficient Process Migration for Parallel Processing on Non-Dedicated Networks of Workstations

    NASA Technical Reports Server (NTRS)

    Chanchio, Kasidit; Sun, Xian-He

    1996-01-01

    This paper presents the design and preliminary implementation of MpPVM, a software system that supports process migration for PVM application programs in a non-dedicated heterogeneous computing environment. New concepts of the migration point, as well as migration point analysis and necessary data analysis, are introduced. In MpPVM, process migrations occur only at previously inserted migration points. Migration point analysis determines appropriate locations to insert migration points, whereas necessary data analysis provides a minimum set of variables to be transferred at each migration point. A new methodology to perform reliable point-to-point data communications in a migration environment is also discussed. Finally, a preliminary implementation of MpPVM and its experimental results are presented, showing the correctness and promising performance of our process migration mechanism in a scalable non-dedicated heterogeneous computing environment. While MpPVM is developed on top of PVM, the process migration methodology introduced in this study is general and can be applied to any distributed software environment.

  20. Investigation of Mediational Processes Using Parallel Process Latent Growth Curve Modeling.

    ERIC Educational Resources Information Center

    Cheong, JeeWon; MacKinnon, David P.; Khoo, Siek Toon

    2003-01-01

    Investigated a method to evaluate mediational processes using latent growth curve modeling and tested it with empirical data from a longitudinal steroid use prevention program focusing on 1,506 high school football players over 4 years. Findings suggest the usefulness of the approach. (SLD)

  1. Research on B Cell Algorithm for Learning to Rank Method Based on Parallel Strategy

    PubMed Central

    Tian, Yuling; Zhang, Hongxian

    2016-01-01

    For the purposes of information retrieval, users must find highly relevant documents from within a system (often quite a large one comprising many individual documents) based on an input query. Ranking the documents according to their relevance within the system to meet user needs is a challenging endeavor and a hot research topic; several rank-learning methods based on machine learning techniques already exist that can generate ranking functions automatically. This paper proposes a parallel B cell algorithm, RankBCA, for rank learning, which utilizes a clonal selection mechanism based on biological immunity. The novel algorithm is compared with traditional rank-learning algorithms through experimentation and shown to outperform the others with respect to accuracy, learning time, and convergence rate; taken together, the experimental results show that the proposed algorithm effectively and rapidly identifies optimal ranking functions. PMID:27487242

  2. Surface preparation strategies for improved parallelization and reproducible MALDI-TOF MS ligand binding assays.

    PubMed

    Roth, Michael J; Maresh, Erica M; Plymire, Daniel A; Zhang, Junmei; Corbett, John R; Robbins, Roger; Patrie, Steven M

    2013-01-01

    Immunoassays are employed in academia and the healthcare and biotech industries for high-throughput, quantitative screens of biomolecules. We have developed monolayer-based immunoassays for MALDI-TOF MS. To improve parallelization, we adapted the workflow to photolithography-generated arrays. Our work shows Parylene-C coatings provide excellent "solvent pinning" for reagents and biofluids, enabling sensitive MS detection of immobilized components. With a unique MALDI-matrix crystallization technique we show routine interassay RSD <10% at picomolar concentrations and highlight platform compatibility for relative and label-free quantitation applications. Parylene-arrays provide high sample densities and promise screening throughputs exceeding 1000 samples/h with modern liquid-handlers and MALDI-TOF systems.

  3. FPGA implementation of current-sharing strategy for parallel-connected SEPICs

    NASA Astrophysics Data System (ADS)

    Ezhilarasi, A.; Ramaswamy, M.

    2016-01-01

    This work develops an equal current-sharing algorithm for a number of single-ended primary-inductance converters (SEPICs) connected in parallel. The methodology involves the development of a state-space model to predict the condition for the existence of a stable equilibrium portrait. A variable-structure controller guides the trajectory, with a view to circumventing the circuit non-linearities and arriving at stable performance over the preferred operating range. The design achieves acceptable servo and regulatory characteristics and the desired time response, and ensures regulation of the load voltage. The simulation results, validated through a field programmable gate array (FPGA)-based prototype, serve to illustrate its suitability for present-day applications.

  4. Teaching ethics to engineers: ethical decision making parallels the engineering design process.

    PubMed

    Bero, Bridget; Kuhlman, Alana

    2011-09-01

    In order to fulfill ABET requirements, Northern Arizona University's Civil and Environmental engineering programs incorporate professional ethics in several of its engineering courses. This paper discusses an ethics module in a 3rd year engineering design course that focuses on the design process and technical writing. Engineering students early in their student careers generally possess good black/white critical thinking skills on technical issues. Engineering design is the first time students are exposed to "grey" or multiple possible solution technical problems. To identify and solve these problems, the engineering design process is used. Ethical problems are also "grey" problems and present similar challenges to students. Students need a practical tool for solving these ethical problems. The step-wise engineering design process was used as a model to demonstrate a similar process for ethical situations. The ethical decision making process of Martin and Schinzinger was adapted for parallelism to the design process and presented to students as a step-wise technique for identification of the pertinent ethical issues, relevant moral theories, possible outcomes and a final decision. Students had greatest difficulty identifying the broader, global issues presented in an ethical situation, but by the end of the module, were better able to not only identify the broader issues, but also to more comprehensively assess specific issues, generate solutions and a desired response to the issue.

  5. MiniGhost : a miniapp for exploring boundary exchange strategies using stencil computations in scientific parallel computing.

    SciTech Connect

    Barrett, Richard Frederick; Heroux, Michael Allen; Vaughan, Courtenay Thomas

    2012-04-01

    A broad range of scientific computation involves the use of difference stencils. In a parallel computing environment, this computation is typically implemented by decomposing the spatial domain, inducing a 'halo exchange' of process-owned boundary data. This approach adheres to the Bulk Synchronous Parallel (BSP) model. Because commonly available architectures provide strong inter-node bandwidth relative to latency costs, many codes 'bulk up' these messages by aggregating data into a message as a means of reducing the number of messages. A renewed focus on non-traditional architectures and architecture features provides new opportunities for exploring alternatives to this programming approach. In this report we describe miniGhost, a 'miniapp' designed for exploration of the capabilities of current as well as emerging and future architectures within the context of these sorts of applications. MiniGhost joins the suite of miniapps developed as part of the Mantevo project.
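
    The halo-exchange pattern miniGhost exercises can be shown in miniature: exchange ghost boundaries, then apply the stencil to owned cells, one BSP superstep at a time. The sketch below mimics the exchange with array copies between neighboring subdomains in a single process; in miniGhost the copies would be (possibly aggregated) messages between ranks, and all names here are ours.

```python
import numpy as np

def bsp_step(subdomains):
    """One superstep of a 5-point stencil over a 1-D domain decomposition.
    Each subdomain carries one ghost column per side."""
    # 1) Halo exchange: owned boundary columns -> neighbors' ghost columns.
    for left, right in zip(subdomains[:-1], subdomains[1:]):
        left[:, -1] = right[:, 1]    # neighbor's first owned column
        right[:, 0] = left[:, -2]    # neighbor's last owned column
    # 2) Local stencil update on owned cells only (Jacobi sweep).
    for d in subdomains:
        d[1:-1, 1:-1] = 0.25 * (d[:-2, 1:-1] + d[2:, 1:-1] +
                                d[1:-1, :-2] + d[1:-1, 2:])
    return subdomains

ranks = [np.random.default_rng(r).random((16, 10)) for r in range(4)]
bsp_step(ranks)
```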

  6. Integration of optoelectronic technologies for chip-to- chip interconnections and parallel pipeline processing

    NASA Astrophysics Data System (ADS)

    Wu, Jenming

    Digital information services such as multimedia systems and data communications require the processing and transfer of tremendous amounts of data. These data need to be stored, accessed and delivered efficiently and reliably at high speed for various user applications. This represents a great challenge for current electronic systems. Electronics is effective in providing high-performance processing and computation, but its input/output (I/O) bandwidth is unable to scale with its processing power. Signal I/Os, or interconnections, are needed between processors and input devices, between processors in multiprocessor systems, and between processors and storage devices. Novel chip-to-chip interconnect technologies are needed to meet this challenge. This work integrates optoelectronic technologies for chip-to-chip interconnects and parallel pipeline processing. Photonic and electronic technologies are complementary in the sense that electronics is more suitable for high-speed, low-cost computation, and photonics is more suitable for high-bandwidth information transmission. Smart pixel technology uses electronics for logic switching and optics for chip-to-chip interconnects, thus combining the abilities of photonics and electronics nicely. This work describes both vertical and horizontal integration of smart pixel technologies for chip-to-chip optical interconnects and its applications. We present smart pixel VLSI designs in both hybrid CMOS/MQW smart pixel and monolithic GaAs smart pixel technologies. We use the CMOS/MQW technology for smart pixel array cellular logic (SPARCL) processors for SIMD parallel pipeline processing. We have tested the chip and constructed a prototype system for device characterization and system demonstration. We have verified the functionality of the system and characterized the electrical functions of the chip and the optoelectronic properties of the MQW devices. We have developed algorithms that utilize SPARCL for various

  7. Method of moment solutions to scattering problems in a parallel processing environment

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Partee, Jonathan; Patterson, Jean

    1991-01-01

    This paper describes the implementation of a parallelized method of moments (MOM) code into an interactive workstation environment. The workstation allows interactive solid body modeling and mesh generation, MOM analysis, and the graphical display of results. After describing the parallel computing environment, the implementation and results of parallelizing a general MOM code are presented in detail.

  8. Neural processes in symmetry perception: a parallel spatio-temporal model.

    PubMed

    Zhu, Tao

    2014-04-01

    Symmetry is usually computationally expensive to detect reliably, while it is relatively easy to perceive. In spite of many attempts to understand the neurofunctional properties of symmetry processing, no symmetry-specific activation was found in earlier cortical areas. Psychophysical evidence relating to the processing mechanisms suggests that the basic processes of symmetry perception would not perform a serial, point-by-point comparison of structural features but rather operate in parallel. Here, modeling of neural processes in psychophysical detection of bilateral texture symmetry is considered. A simple fine-grained algorithm that is capable of performing symmetry estimation without explicit comparison of remote elements is introduced. A computational model of symmetry perception is then described to characterize the underlying mechanisms as one-dimensional spatio-temporal neural processes, each of which is mediated by intracellular horizontal connections in primary visual cortex and adopts the proposed algorithm for the neural computation. Simulated experiments have been performed to show the efficiency and the dynamics of the model. Model and human performances are comparable for symmetry perception of intensity images. Interestingly, the responses of V1 neurons to propagation activities reflecting higher-order perceptual computations have been reported in neurophysiologic experiments.

  9. Massively Parallel Signal Processing using the Graphics Processing Unit for Real-Time Brain-Computer Interface Feature Extraction.

    PubMed

    Wilson, J Adam; Williams, Justin C

    2009-01-01

    The clock speeds of modern computer processors have nearly plateaued in the past 5 years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card [graphics processing unit (GPU)] was developed for real-time neural signal processing of a brain-computer interface (BCI). The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix-matrix multiplication (i.e., a spatial filter), followed by calculating the power spectral density on every channel using an auto-regressive method, and finally classifying appropriate features for control. In this study, the first two computationally intensive steps were implemented on the GPU, and the speed was compared to both the current implementation and a central processing unit-based implementation that uses multi-threading. Significant performance gains were obtained with GPU processing: the current implementation processed 250 ms of data from 1000 channels in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times.
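
    The two steps moved to the GPU map naturally onto array operations. Below is a minimal CPU-only Python sketch of that chain (the spatial-filter matrix, array sizes, and AR order are invented for illustration; the paper's CUDA kernels are not reproduced here):

      import numpy as np
      from scipy.linalg import solve_toeplitz

      def ar_psd(x, order=16, n_freqs=128):
          """Autoregressive (Yule-Walker) power spectral density of one channel."""
          x = x - x.mean()
          r = np.correlate(x, x, mode="full")[len(x) - 1:] / len(x)  # autocorrelation
          a = solve_toeplitz(r[:order], r[1:order + 1])              # AR coefficients
          sigma2 = r[0] - a @ r[1:order + 1]                         # residual variance
          w = np.linspace(0, np.pi, n_freqs)
          denom = np.abs(1 - np.exp(-1j * np.outer(w, np.arange(1, order + 1))) @ a) ** 2
          return sigma2 / denom

      # Illustrative sizes: 64 recorded channels, 250 ms at 1 kHz.
      raw = np.random.randn(64, 250)
      spatial_filter = np.random.randn(32, 64)              # e.g., a re-referencing matrix
      filtered = spatial_filter @ raw                       # step 1: matrix-matrix multiply
      features = np.array([ar_psd(ch) for ch in filtered])  # step 2: per-channel AR PSD

    The matrix product and the per-channel AR fits are natural candidates for GPU parallelization, since each channel's spectral estimate is independent of the others.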

  10. Mars sampling strategy and aeolian processes

    NASA Technical Reports Server (NTRS)

    Greeley, Ronald

    1988-01-01

    It is critical that the geological context of planetary samples (both in situ analyses and return samples) be well known and documented. Apollo experience showed that this goal is often difficult to achieve even for a planet on which surficial processes are relatively restricted. On Mars, the variety of present and past surface processes is much greater than on the Moon and establishing the geological context of samples will be much more difficult. In addition to impact gardening, Mars has been modified by running water, periglacial activity, wind, and other processes, all of which have the potential for profoundly affecting the geological integrity of potential samples. Aeolian, or wind, processes are ubiquitous on Mars. In the absence of liquid water on the surface, aeolian activity dominates the present surface as documented by frequent dust storms (both local and global), landforms such as dunes, and variable features, i.e., albedo patterns which change their size, shape, and position with time in response to the wind.

  11. Calculating Floquet states of large quantum systems: A parallelization strategy and its cluster implementation

    NASA Astrophysics Data System (ADS)

    Laptyeva, T. V.; Kozinov, E. A.; Meyerov, I. B.; Ivanchenko, M. V.; Denisov, S. V.; Hänggi, P.

    2016-04-01

    We present a numerical approach to calculate non-equilibrium eigenstates of a periodically time-modulated quantum system. The approach is based on the use of a chain of single-step propagating operators. Each operator is time-specific and constructed by combining the Magnus expansion of the time-dependent system Hamiltonian with the Chebyshev expansion of an operator exponent. The construction of the unitary Floquet operator, which evolves a system state over the full modulation period, is performed by propagating the identity matrix over the period. The independence of the evolution of basis vectors makes the propagation stage suitable for realization on a parallel cluster. Once the propagation stage is completed, a routine diagonalization of the Floquet matrix is performed. Finally, an additional propagation round, now involving the eigenvectors as the initial states, allows one to resolve the time-dependence of the Floquet states and calculate their characteristics. We demonstrate the accuracy and scalability of the algorithm by applying it to calculate the Floquet states of two quantum models, namely (i) a synthesized random-matrix Hamiltonian and (ii) a many-body Bose-Hubbard dimer, both of size up to 10⁴ states.
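
    A toy Python version of the propagation scheme may make the structure clearer. For brevity it uses a plain midpoint matrix exponential for each single-step propagator, where the paper combines a Magnus expansion with a Chebyshev expansion; the Hamiltonian and sizes are illustrative only:

      import numpy as np
      from scipy.linalg import expm

      N, T, steps = 64, 1.0, 200
      dt = T / steps
      rng = np.random.default_rng(1)
      H0 = rng.standard_normal((N, N)); H0 = (H0 + H0.T) / 2   # static part
      V = rng.standard_normal((N, N)); V = (V + V.T) / 2       # driven part

      def H(t):
          return H0 + np.cos(2 * np.pi * t / T) * V

      # Propagate the identity matrix over one full period.
      U = np.eye(N, dtype=complex)
      for k in range(steps):
          t_mid = (k + 0.5) * dt
          U = expm(-1j * H(t_mid) * dt) @ U       # single-step propagator

      # Diagonalize the Floquet operator; quasi-energies from the eigenphases.
      evals, evecs = np.linalg.eig(U)
      quasi_energies = -np.angle(evals) / T

    Because each column of the identity matrix evolves independently, the column loop can be distributed across cluster nodes, which is the parallelization the authors exploit.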

  12. Parallelism and Epistasis in Skeletal Evolution Identified through Use of Phylogenomic Mapping Strategies

    PubMed Central

    Daane, Jacob M.; Rohner, Nicolas; Konstantinidis, Peter; Djuranovic, Sergej; Harris, Matthew P.

    2016-01-01

    The identification of genetic mechanisms underlying evolutionary change is critical to our understanding of natural diversity, but is presently limited by the lack of genetic and genomic resources for most species. Here, we present a new comparative genomic approach that can be applied to a broad taxonomic sampling of nonmodel species to investigate the genetic basis of evolutionary change. Using our analysis pipeline, we show that duplication and divergence of fgfr1a is correlated with the reduction of scales within fishes of the genus Phoxinellus. As a parallel genetic mechanism is observed in scale-reduction within independent lineages of cypriniforms, our finding exposes significant developmental constraint guiding morphological evolution. In addition, we identified fixed variation in fgf20a within Phoxinellus and demonstrated that combinatorial loss-of-function of fgfr1a and fgf20a within zebrafish phenocopies the evolved scalation pattern. Together, these findings reveal epistatic interactions between fgfr1a and fgf20a as a developmental mechanism regulating skeletal variation among fishes. PMID:26452532

  13. Molecular tailoring approach for geometry optimization of large molecules: energy evaluation and parallelization strategies.

    PubMed

    Ganesh, V; Dongare, Rameshwar K; Balanarayan, P; Gadre, Shridhar R

    2006-09-14

    A linear-scaling scheme for estimating the electronic energy, gradients, and Hessian of a large molecule at ab initio level of theory based on fragment set cardinality is presented. With this proposition, a general, cardinality-guided molecular tailoring approach (CG-MTA) for ab initio geometry optimization of large molecules is implemented. The method employs energy gradients extracted from fragment wave functions, enabling computations otherwise impractical on PC hardware. Further, the method is readily amenable to large scale coarse-grain parallelization with minimal communication among nodes, resulting in a near-linear speedup. CG-MTA is applied for density-functional-theory-based geometry optimization of a variety of molecules including alpha-tocopherol, taxol, gamma-cyclodextrin, and two conformations of polyglycine. In the tests performed, energy and gradient estimates obtained from CG-MTA during optimization runs show an excellent agreement with those obtained from actual computation. Accuracy of the Hessian obtained employing CG-MTA provides good hope for the application of Hessian-based geometry optimization to large molecules.
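
    The cardinality-based energy assembly at the heart of such fragment methods can be sketched in a few lines. The toy below is our illustration, not the CG-MTA code; the `energy` function is a placeholder for an actual ab initio fragment calculation:

      from itertools import combinations

      def energy(atoms):
          return -1.0 * len(atoms)   # placeholder: one "hartree" per atom

      def mta_energy(fragments):
          """Inclusion-exclusion over fragment overlaps, signed by set cardinality."""
          total, sign = 0.0, 1
          for k in range(1, len(fragments) + 1):
              for combo in combinations(fragments, k):
                  overlap = set.intersection(*combo)
                  if overlap:
                      total += sign * energy(overlap)
              sign = -sign
          return total

      fragments = [{0, 1, 2, 3}, {2, 3, 4, 5}, {4, 5, 6, 7}]
      print(mta_energy(fragments))   # -8.0, the energy of the union for this additive toy

    Gradient and Hessian estimates can be assembled from fragment quantities in the same cardinality-signed fashion, which is why each fragment job can run on a separate node with minimal communication.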

  14. Molecular tailoring approach for geometry optimization of large molecules: Energy evaluation and parallelization strategies

    NASA Astrophysics Data System (ADS)

    Ganesh, V.; Dongare, Rameshwar K.; Balanarayan, P.; Gadre, Shridhar R.

    2006-09-01

    A linear-scaling scheme for estimating the electronic energy, gradients, and Hessian of a large molecule at ab initio level of theory based on fragment set cardinality is presented. With this proposition, a general, cardinality-guided molecular tailoring approach (CG-MTA) for ab initio geometry optimization of large molecules is implemented. The method employs energy gradients extracted from fragment wave functions, enabling computations otherwise impractical on PC hardware. Further, the method is readily amenable to large scale coarse-grain parallelization with minimal communication among nodes, resulting in a near-linear speedup. CG-MTA is applied for density-functional-theory-based geometry optimization of a variety of molecules including α-tocopherol, taxol, γ-cyclodextrin, and two conformations of polyglycine. In the tests performed, energy and gradient estimates obtained from CG-MTA during optimization runs show an excellent agreement with those obtained from actual computation. Accuracy of the Hessian obtained employing CG-MTA provides good hope for the application of Hessian-based geometry optimization to large molecules.

  15. Distinct cerebellar lobules process arousal, valence and their interaction in parallel following a temporal hierarchy.

    PubMed

    Styliadis, Charis; Ioannides, Andreas A; Bamidis, Panagiotis D; Papadelis, Christos

    2015-04-15

    The cerebellum participates in emotion-related neural circuits formed by different cortical and subcortical areas, which sub-serve arousal and valence. Recent neuroimaging studies have shown a functional specificity of cerebellar lobules in the processing of emotional stimuli. However, little is known about the temporal component of this process. The goal of the current study is to assess the spatiotemporal profile of neural responses within the cerebellum during the processing of arousal and valence. We hypothesized that the excitation and timing of distinct cerebellar lobules is influenced by the emotional content of the stimuli. Using magnetoencephalography, we recorded magnetic fields from twelve healthy human individuals while they passively viewed affective pictures rated along arousal and valence. Using a beamformer, we localized gamma-band activity in the cerebellum across time and related the foci of activity to the anatomical organization of the cerebellum. Successive cerebellar activations were observed within distinct lobules starting ~160 ms after stimulus onset. Arousal was processed within both vermal (VI and VIIIa) and hemispheric (left Crus II) lobules. Valence (left VI) and its interaction (left V and left Crus I) with arousal were processed only within hemispheric lobules. Arousal processing was identified first, at early latencies (160 ms), and was long-lived (until 980 ms). In contrast, the processing of valence and its interaction with arousal was short-lived and occurred at later stages (420-530 ms and 570-640 ms, respectively). Our findings provide for the first time evidence that distinct cerebellar lobules process arousal, valence, and their interaction in a parallel yet temporally hierarchical manner determined by the emotional content of the stimuli.

  16. Computational Investigation of Synchronized Multibeam Strategies for the Selective Laser Melting Process

    NASA Astrophysics Data System (ADS)

    Heeling, Thorsten; Wegener, Konrad

    The selective laser melting process features a nearly incomparable freedom of design, but its potential is still limited by remaining porosity, cracking, distortion, low build-up rates, and a limited range of materials. While there is some progress in process control and in the use of multiple parallel scan fields to tackle these issues, the potential of synchronized multibeam strategies has not yet been investigated. The synchronized multibeam approach presented here is characterized by two widely overlapping scan fields fed by two independent laser sources that can be controlled to work in a synchronized manner, with or without a defined offset. This allows selective manipulation of the local temperature field and thus of the melt pool dynamics, temperature gradients, and cooling rates, which all influence the process's porosity, cracking, and distortion behavior. Therefore, the influences of these strategies on the melt pool dimensions and dynamics, as well as on the temperature gradients, are investigated in this work.

  17. A Latent Trait Model for Differential Strategies in Cognitive Processes.

    ERIC Educational Resources Information Center

    Samejima, Fumiko

    Some cognitive psychologists, who have tried to approach psychometric theories, say that the psychometric approach does not provide them with theories and methods with which they can deal with differential strategies. In this paper, a general latent trait model for differential strategies in cognitive processes is proposed which includes three…

  18. The Effects of Cultural Schemata on Reading Processing Strategies.

    ERIC Educational Resources Information Center

    Pritchard, Robert

    A study examined the relationship between cultural schemata and the reading process to identify the strategies proficient readers employ to develop their understanding of culturally familiar and unfamiliar passages and to examine those strategies in relation to the cultural backgrounds of the readers and the cultural perspectives of the reading…

  19. A neurally plausible parallel distributed processing model of event-related potential word reading data.

    PubMed

    Laszlo, Sarah; Plaut, David C

    2012-03-01

    The Parallel Distributed Processing (PDP) framework has significant potential for producing models of cognitive tasks that approximate how the brain performs the same tasks. To date, however, there has been relatively little contact between PDP modeling and data from cognitive neuroscience. In an attempt to advance the relationship between explicit, computational models and physiological data collected during the performance of cognitive tasks, we developed a PDP model of visual word recognition which simulates key results from the ERP reading literature, while simultaneously being able to successfully perform lexical decision, a benchmark task for reading models. Simulations reveal that the model's success depends on the implementation of several neurally plausible features in its architecture which are sufficiently domain-general to be relevant to cognitive modeling more generally.

  20. Mean-field analysis for parallel asymmetric exclusion process with anticipation effect.

    PubMed

    Hao, Qing-Yi; Jiang, Rui; Hu, Mao-Bin; Wu, Qing-Song

    2010-08-01

    This paper studies an extended parallel asymmetric exclusion process, in which the anticipation effect is taken into account. The fundamental diagram of the model has been investigated via cluster mean-field analysis. Different from previous mean-field analyses, in which the n-cluster probabilities P(σ_i, …, σ_{i+n-1}) involve the (n+2)-cluster probabilities P(τ_{i-1}, …, τ_{i+n}), our mean-field analysis is asymmetric because the three-cluster probabilities P(σ_i, σ_{i+1}, σ_{i+2}) involve the six-cluster probabilities P(τ_{i-1}, …, τ_{i+4}). We find an excellent agreement between Monte Carlo simulations and cluster mean-field analysis, which indicates that the mean-field analysis might give the exact expression.
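
    For readers who want to reproduce the qualitative shape of the fundamental diagram, a plain parallel-update TASEP Monte Carlo takes only a few lines of Python (this baseline omits the paper's anticipation effect; all parameters are illustrative):

      import numpy as np

      def simulate(L=1000, density=0.3, p=0.5, steps=5000, warmup=1000, seed=0):
          """Average current per site for a parallel-update TASEP on a ring."""
          rng = np.random.default_rng(seed)
          tau = np.zeros(L, dtype=int)
          tau[rng.choice(L, int(density * L), replace=False)] = 1
          flux = 0
          for t in range(steps):
              can_hop = tau & (1 - np.roll(tau, -1))    # occupied site, empty right neighbour
              hops = can_hop * (rng.random(L) < p)      # all eligible particles try at once
              tau = tau - hops + np.roll(hops, 1)       # synchronous update
              if t >= warmup:
                  flux += hops.sum()
          return flux / ((steps - warmup) * L)

      for rho in (0.1, 0.3, 0.5):
          print(rho, simulate(density=rho))             # one point of the fundamental diagram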

  1. Parallel-Processing CMOS Circuitry for M-QAM and 8PSK TCM

    NASA Technical Reports Server (NTRS)

    Gray, Andrew; Lee, Dennis; Hoy, Scott; Fisher, Dave; Fong, Wai; Ghuman, Parminder

    2009-01-01

    There has been some additional development of parts reported in "Multi-Modulator for Bandwidth-Efficient Communication" (NPO-40807), NASA Tech Briefs, Vol. 32, No. 6 (June 2009), page 34. The focus was on (1) the generation of M-order quadrature amplitude modulation (M-QAM) and octonary-phase-shift-keying, trellis-coded modulation (8PSK TCM); (2) the use of square-root raised-cosine pulse-shaping filters; (3) a parallel-processing architecture that enables low-speed [complementary metal oxide/semiconductor (CMOS)] circuitry to perform the coding, modulation, and pulse-shaping computations at a high rate; and (4) implementation of the architecture in a CMOS field-programmable gate array.

  2. Improved object segmentation using Markov random fields, artificial neural networks, and parallel processing techniques

    NASA Astrophysics Data System (ADS)

    Foulkes, Stephen B.; Booth, David M.

    1997-07-01

    Object segmentation is the process by which a mask is generated which identifies the area of an image that is occupied by an object. Many object recognition techniques depend on the quality of such masks for shape and underlying brightness information; however, segmentation remains notoriously unreliable. This paper considers how the image restoration technique of Geman and Geman can be applied to the improvement of object segmentations generated by a locally adaptive background subtraction technique. Also presented is how an artificial neural network hybrid, consisting of a single-layer Kohonen network with each of its nodes connected to a different multi-layer perceptron, can be used to approximate the image restoration process. It is shown that the restoration techniques are very well suited to parallel processing and, in particular, that the artificial neural network hybrid has the potential for near-real-time image processing. Results are presented for the detection of ships in SPOT panchromatic imagery and the detection of vehicles in infrared linescan images, these being a fair representation of the wider class of problems.
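
    A forward-pass sketch of that hybrid may clarify the architecture (our schematic reading of the abstract, with random stand-in weights and no training loop): a single-layer Kohonen net routes each pixel's feature vector to a winning node, and that node's dedicated multi-layer perceptron produces the refined mask value:

      import numpy as np

      rng = np.random.default_rng(2)
      n_nodes, n_features, n_hidden = 16, 9, 8
      som_w = rng.standard_normal((n_nodes, n_features))           # Kohonen prototypes
      mlp_w1 = rng.standard_normal((n_nodes, n_features, n_hidden))  # one MLP per node
      mlp_w2 = rng.standard_normal((n_nodes, n_hidden))

      def refine(feature_vec):
          winner = np.argmin(((som_w - feature_vec) ** 2).sum(axis=1))  # Kohonen winner
          h = np.tanh(feature_vec @ mlp_w1[winner])                     # that node's MLP
          return 1 / (1 + np.exp(-(h @ mlp_w2[winner])))                # mask probability

      patch = rng.standard_normal(n_features)   # e.g., a 3x3 neighbourhood, flattened
      print(refine(patch))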

  3. When parallel processing in visual word recognition is not enough: new evidence from naming.

    PubMed

    Roberts, Martha Anne; Rastle, Kathleen; Coltheart, Max; Besner, Derek

    2003-06-01

    Low-frequency irregular words are named more slowly and are more error prone than low-frequency regular words (the regularity effect). Rastle and Coltheart (1999) reported that this irregularity cost is modulated by the serial position of the irregular grapheme-phoneme correspondence, such that words with early irregularities exhibit a larger cost than words with late ones. They argued that these data implicate rule-based serial processing, and they also reported a successful simulation with a model that has a rule-based serial component--the DRC model of reading aloud (Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001). However, Zorzi (2000) also simulated these data with a model that operates solely in parallel. Furthermore, Kwantes and Mewhort (1999) simulated these data with a serial processing model that has no rules for converting orthography to phonology. The human data reported by Rastle and Coltheart therefore neither require a serial processing account, nor successfully discriminate among a number of computational models of reading aloud. New data are presented wherein an interaction between the effects of regularity and serial position of irregularity is again reported for human readers. The DRC model simulated this interaction; no other implemented computational model does so. The present results are thus consistent with rule-based serial processing in reading aloud.

  4. Risk communication strategy development using the aerospace systems engineering process

    NASA Technical Reports Server (NTRS)

    Dawson, S.; Sklar, M.

    2004-01-01

    This paper explains the goals and challenges of NASA's risk communication efforts and how the Aerospace Systems Engineering Process (ASEP) was used to map the risk communication strategy used at the Jet Propulsion Laboratory to achieve these goals.

  5. NMR Spectroscopy: Processing Strategies (by Peter Bigler)

    NASA Astrophysics Data System (ADS)

    Mills, Nancy S.

    1998-06-01

    Peter Bigler. VCH: New York, 1997. 249 pp. ISBN 3-527-28812-0. $99.00. This book, part of a four-volume series planned to deal with all aspects of a standard NMR experiment, is almost the exact book I have been hoping to find. My department has acquired, as have hundreds of other undergraduate institutions, high-field NMR instrumentation and the capability of doing extremely sophisticated experiments. However, the training is often a one- or two-day experience in which the material retained by the faculty trained is garbled and filled with holes, not unlike the information our students seem to retain. This text, and the accompanying exercises based on data contained on a CD-ROM, goes a long way to fill in the gaps and clarify misunderstandings about NMR processing.

  6. Comparison of microbial community shifts in two parallel multi-step drinking water treatment processes.

    PubMed

    Xu, Jiajiong; Tang, Wei; Ma, Jun; Wang, Hong

    2017-04-11

    Drinking water treatment processes remove undesirable chemicals and microorganisms from source water, which is vital to public health protection. The purpose of this study was to investigate the effects of treatment processes and configuration on the microbiome by comparing microbial community shifts in two series of different treatment processes operated in parallel within a full-scale drinking water treatment plant (DWTP) in Southeast China. Illumina sequencing of 16S rRNA genes of water samples demonstrated little effect of coagulation/sedimentation and pre-oxidation steps on bacterial communities, in contrast to dramatic and concurrent microbial community shifts during ozonation, granular activated carbon treatment, sand filtration, and disinfection for both series. A large number of unique operational taxonomic units (OTUs) at these four treatment steps further illustrated their strong shaping power towards the drinking water microbial communities. Interestingly, multidimensional scaling analysis revealed tight clustering of biofilm samples collected from different treatment steps, with Nitrospira, the nitrite-oxidizing bacteria, noted at higher relative abundances in biofilm compared to water samples. Overall, this study provides a snapshot of step-to-step microbial evolvement in multi-step drinking water treatment systems, and the results provide insight to control and manipulation of the drinking water microbiome via optimization of DWTP design and operation.

  7. Architecture and design of a 500-MHz gallium-arsenide processing element for a parallel supercomputer

    NASA Technical Reports Server (NTRS)

    Fouts, Douglas J.; Butner, Steven E.

    1991-01-01

    The design of the processing element of GASP, a GaAs supercomputer with a 500-MHz instruction issue rate and 1-GHz subsystem clocks, is presented. The novel, functionally modular, block data flow architecture of GASP is described. The architecture and design of a GASP processing element is then presented. The processing element (PE) is implemented in a hybrid semiconductor module with 152 custom GaAs ICs of eight different types. The effects of the implementation technology on both the system-level architecture and the PE design are discussed. SPICE simulations indicate that parts of the PE are capable of being clocked at 1 GHz, while the rest of the PE uses a 500-MHz clock. The architecture utilizes data flow techniques at a program block level, which allows efficient execution of parallel programs while maintaining reasonably good performance on sequential programs. A simulation study of the architecture indicates that an instruction execution rate of over 30,000 MIPS can be attained with 65 PEs.

  8. Comparing Binaural Pre-processing Strategies II

    PubMed Central

    Hu, Hongmei; Krawczyk-Becker, Martin; Marquardt, Daniel; Herzke, Tobias; Coleman, Graham; Adiloğlu, Kamil; Bomke, Katrin; Plotz, Karsten; Gerkmann, Timo; Doclo, Simon; Kollmeier, Birger; Hohmann, Volker; Dietz, Mathias

    2015-01-01

    Several binaural audio signal enhancement algorithms were evaluated with respect to their potential to improve speech intelligibility in noise for users of bilateral cochlear implants (CIs). 50% speech reception thresholds (SRT50) were assessed using an adaptive procedure in three distinct, realistic noise scenarios. All scenarios were highly nonstationary, complex, and included a significant amount of reverberation. Other aspects, such as the perfectly frontal target position, were idealized laboratory settings, allowing the algorithms to perform better than in corresponding real-world conditions. Eight bilaterally implanted CI users, wearing devices from three manufacturers, participated in the study. In all noise conditions, a substantial improvement in SRT50 compared to the unprocessed signal was observed for most of the algorithms tested, with the largest improvements generally provided by binaural minimum variance distortionless response (MVDR) beamforming algorithms. The largest overall improvement in speech intelligibility was achieved by an adaptive binaural MVDR in a spatially separated, single competing talker noise scenario. A no-pre-processing condition and adaptive differential microphones without a binaural link served as the two baseline conditions. SRT50 improvements provided by the binaural MVDR beamformers surpassed the performance of the adaptive differential microphones in most cases. Speech intelligibility improvements predicted by instrumental measures were shown to account for some but not all aspects of the perceptually obtained SRT50 improvements measured in bilaterally implanted CI users. PMID:26721921
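
    The MVDR beamformers that performed best here follow the textbook construction: minimize output noise power subject to a distortionless response toward the target. A minimal sketch of that computation (not the paper's binaural implementation; the steering vector and noise covariance are invented):

      import numpy as np

      def mvdr_weights(R_noise, d):
          """w = R^{-1} d / (d^H R^{-1} d): distortionless, minimum noise power."""
          Rinv_d = np.linalg.solve(R_noise, d)
          return Rinv_d / (d.conj() @ Rinv_d)

      # Illustrative two-microphone example with a broadside target.
      n_mics = 2
      d = np.ones(n_mics, dtype=complex)                  # steering vector (assumed)
      noise = (np.random.randn(n_mics, 1000)
               + 1j * np.random.randn(n_mics, 1000))
      R = noise @ noise.conj().T / 1000                   # estimated noise covariance
      w = mvdr_weights(R, d)
      print(w.conj() @ d)                                 # distortionless: equals 1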

  9. Membrane Transport Processes Analyzed by a Highly Parallel Nanopore Chip System at Single Protein Resolution.

    PubMed

    Urban, Michael; Vor der Brüggen, Marc; Tampé, Robert

    2016-08-16

    Membrane protein transport on the single-protein level still evades detailed analysis if the substrate translocated is non-electrogenic. Considerable efforts have been made in this field, but techniques enabling automated high-throughput transport analysis, in combination with the solvent-free lipid bilayer techniques required for the analysis of membrane transporters, are rare. This class of transporters, however, is crucial in cell homeostasis and is therefore a key target in drug development, and methodologies to gain new insights are desperately needed. The manuscript presented here describes the establishment and handling of a novel biochip for the analysis of membrane protein mediated transport processes at single-transporter resolution. The biochip is composed of microcavities enclosed by nanopores; it is highly parallel in its design and can be produced in industrial grade and quantity. Protein-harboring liposomes can be applied directly to the chip surface, forming self-assembled pore-spanning lipid bilayers using SSM techniques (solid supported lipid membranes). Pore-spanning parts of the membrane are freestanding, providing the interface for substrate translocation into or out of the cavity space, which can be followed by multi-spectral fluorescent readout in real time. The establishment of standard operating procedures (SOPs) allows the straightforward formation of protein-harboring lipid bilayers on the chip surface from virtually every membrane protein that can be reconstituted functionally. The sole prerequisite is the establishment of a fluorescent read-out system for non-electrogenic transport substrates. High-content screening applications are accomplishable by the use of automated inverted fluorescent microscopes recording multiple chips in parallel. Large data sets can be analyzed using the freely available custom-designed analysis software. Three-color multi-spectral fluorescent read-out furthermore allows for unbiased data discrimination into different

  10. SIAM Conference on Parallel Processing for Scientific Computing - March 12-14, 2008

    SciTech Connect

    Kolata, William G.

    2008-09-08

    The themes of the 2008 conference included, but were not limited to: Programming languages, models, and compilation techniques; The transition to ubiquitous multicore/manycore processors; Scientific computing on special-purpose processors (Cell, GPUs, etc.); Architecture-aware algorithms; From scalable algorithms to scalable software; Tools for software development and performance evaluation; Global perspectives on HPC; Parallel computing in industry; Distributed/grid computing; Fault tolerance; Parallel visualization and large scale data management; and The future of parallel architectures.

  11. Individual differences in speech-in-noise perception parallel neural speech processing and attention in preschoolers.

    PubMed

    Thompson, Elaine C; Woodruff Carr, Kali; White-Schwoch, Travis; Otto-Meyer, Sebastian; Kraus, Nina

    2017-02-01

    From bustling classrooms to unruly lunchrooms, school settings are noisy. To learn effectively in the unwelcome company of numerous distractions, children must clearly perceive speech in noise. In older children and adults, speech-in-noise perception is supported by sensory and cognitive processes, but the correlates underlying this critical listening skill in young children (3-5 year olds) remain undetermined. Employing a longitudinal design (two evaluations separated by ∼12 months), we followed a cohort of 59 preschoolers, ages 3.0-4.9, assessing word-in-noise perception, cognitive abilities (intelligence, short-term memory, attention), and neural responses to speech. Results reveal changes in word-in-noise perception parallel changes in processing of the fundamental frequency (F0), an acoustic cue known for playing a role central to speaker identification and auditory scene analysis. Four unique developmental trajectories (speech-in-noise perception groups) confirm this relationship, in that improvements and declines in word-in-noise perception couple with enhancements and diminishments of F0 encoding, respectively. Improvements in word-in-noise perception also pair with gains in attention. Word-in-noise perception does not relate to strength of neural harmonic representation or short-term memory. These findings reinforce previously-reported roles of F0 and attention in hearing speech in noise in older children and adults, and extend this relationship to preschool children.

  12. Early and parallel processing of pragmatic and semantic information in speech acts: neurophysiological evidence.

    PubMed

    Egorova, Natalia; Shtyrov, Yury; Pulvermüller, Friedemann

    2013-01-01

    Although language is a tool for communication, most research in the neuroscience of language has focused on studying words and sentences, while little is known about the brain mechanisms of speech acts, or communicative functions, for which words and sentences are used as tools. Here the neural processing of two types of speech acts, Naming and Requesting, was addressed using the time-resolved event-related potential (ERP) technique. The brain responses for Naming and Request diverged as early as ~120 ms after the onset of the critical words, at the same time as, or even before, the earliest brain manifestations of semantic word properties could be detected. Request-evoked potentials were generally larger in amplitude than those for Naming. The use of identical words in closely matched settings for both speech acts rules out explanation of the difference in terms of phonological, lexical, semantic properties, or word expectancy. The cortical sources underlying the ERP enhancement for Requests were found in the fronto-central cortex, consistent with the activation of action knowledge, as well as in the right temporo-parietal junction (TPJ), possibly reflecting additional implications of speech acts for social interaction and theory of mind. These results provide the first evidence for surprisingly early access to pragmatic and social interactive knowledge, which possibly occurs in parallel with other types of linguistic processing, and thus supports the near-simultaneous access to different subtypes of psycholinguistic information.

  13. Examining Mechanisms Underlying Fear-Control in the Extended Parallel Process Model.

    PubMed

    Quick, Brian L; LaVoie, Nicole R; Reynolds-Tylus, Tobias; Martinez-Gonzalez, Andrea; Skurka, Chris

    2017-01-17

    This investigation sought to advance the extended parallel process model in important ways by testing associations among the strengths of efficacy and threat appeals with fear as well as two outcomes of fear-control processing, psychological reactance and message minimization. Within the context of print ads admonishing against noise-induced hearing loss (NIHL) and the fictitious Trepidosis virus, partial support was found for the additive model with no support for the multiplicative model. High efficacy appeals mitigated freedom threat perceptions across both contexts. Fear was positively associated with both freedom threat perceptions within the NIHL context and favorable attitudes for both NIHL and Trepidosis virus contexts. In line with psychological reactance theory, a freedom threat was positively associated with psychological reactance. Reactance, in turn, was positively associated with message minimization. The models supported reactance preceding message minimization across both message contexts. Both the theoretical and practical implications are discussed with an emphasis on future research opportunities within the fear-appeal literature.

  14. Evidence for a parallel input serial analysis model of word processing.

    PubMed

    Allen, P A; Madden, D J

    1990-02-01

    A parallel input serial analysis (PISA) model of word processing was developed and tested. The goal was to expand on the "critical processing duration" hypothesis of Johnson, Allen, and Strand (1989) so that letter detection data from both single-word and multiple-word presentations could be explained. In Experiments 1-3, four different word-frequency categories were used in a single-presentation letter detection task. These three experiments indicated a curvilinear relationship between word frequency and letter detection reaction time (RT): letter detection RTs for medium-high-frequency words were significantly longer than those for very-high-, low-, and very-low-frequency words. These results support the PISA model rather than the Healy, Oliver, and McNamara (1987) version of the unitization model. In Experiments 4 and 5, multiple-presentation (i.e., two-word) letter detection tasks were used. The PISA model could also account for the results from these two experiments, but the unitization model could not.

  15. Early and parallel processing of pragmatic and semantic information in speech acts: neurophysiological evidence

    PubMed Central

    Egorova, Natalia; Shtyrov, Yury; Pulvermüller, Friedemann

    2013-01-01

    Although language is a tool for communication, most research in the neuroscience of language has focused on studying words and sentences, while little is known about the brain mechanisms of speech acts, or communicative functions, for which words and sentences are used as tools. Here the neural processing of two types of speech acts, Naming and Requesting, was addressed using the time-resolved event-related potential (ERP) technique. The brain responses for Naming and Request diverged as early as ~120 ms after the onset of the critical words, at the same time as, or even before, the earliest brain manifestations of semantic word properties could be detected. Request-evoked potentials were generally larger in amplitude than those for Naming. The use of identical words in closely matched settings for both speech acts rules out explanation of the difference in terms of phonological, lexical, semantic properties, or word expectancy. The cortical sources underlying the ERP enhancement for Requests were found in the fronto-central cortex, consistent with the activation of action knowledge, as well as in the right temporo-parietal junction (TPJ), possibly reflecting additional implications of speech acts for social interaction and theory of mind. These results provide the first evidence for surprisingly early access to pragmatic and social interactive knowledge, which possibly occurs in parallel with other types of linguistic processing, and thus supports the near-simultaneous access to different subtypes of psycholinguistic information. PMID:23543248

  16. Parallel processing in an identified neural circuit: the Aplysia californica gill-withdrawal response model system.

    PubMed

    Leonard, Janet L; Edstrom, John P

    2004-02-01

    /or 'Back-propagation' type. Such models may offer a more biologically realistic representation of nervous system organisation than has been thought. In this model, the six parallel GMNs of the CNS correspond to a hidden layer within one module of the gill-control system. That is, the gill-control system appears to be organised as a distributed system with several parallel modules, some of which are neural networks in their own right. A new model is presented here which predicts that the six GMNs serve as components of a 'push-pull' gain control system, along with known but largely unidentified inhibitory motor neurons from the PVG. This 'push-pull' gain control system sets the responsiveness of the peripheral gill motor system. Neither causal nor correlational links between specific forms of neural plasticity and behavioural plasticity have been demonstrated in the GWR model system. However, the GWR model system does provide an opportunity to observe and describe directly the physiological and biochemical mechanisms of distributed representation and parallel processing in a largely identifiable 'wetware' neural network.

  17. T3PS v1.0: Tool for Parallel Processing in Parameter Scans

    NASA Astrophysics Data System (ADS)

    Maurer, Vinzenz

    2016-01-01

    T3PS is a program that can be used to quickly design and perform parameter scans while easily taking advantage of the multi-core architecture of current processors. As input, it takes a parameter scan definition file in a format that is easy to read and write. Based on the parameter ranges and other options contained therein, it distributes the calculation of the parameter space over multiple processes, and possibly over multiple computers. The derived data are saved in a plain-text file format readable by most plotting software. The supported scanning strategies include grid scan, random scan, Markov chain Monte Carlo, and numerical optimization. Several example parameter scans are shown and compared with results in the literature.
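
    The grid-scan strategy, for instance, reduces to mapping a model function over a Cartesian product of parameter values and writing one plain-text row per point. A skeleton in that spirit (ours, not T3PS itself; the model function, ranges, and file name are placeholders):

      import itertools
      from multiprocessing import Pool

      def model(point):
          x, y = point
          return x, y, x ** 2 + y ** 2        # derived quantity to tabulate

      if __name__ == "__main__":
          xs = [i / 10 for i in range(11)]
          ys = [j / 10 for j in range(11)]
          grid = list(itertools.product(xs, ys))
          with Pool(processes=4) as pool:     # distribute points over worker processes
              rows = pool.map(model, grid)
          with open("scan.dat", "w") as f:    # plain-text output for plotting tools
              for row in rows:
                  f.write("%g %g %g\n" % row)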

  18. Design of a massively parallel computer using bit serial processing elements

    NASA Technical Reports Server (NTRS)

    Aburdene, Maurice F.; Khouri, Kamal S.; Piatt, Jason E.; Zheng, Jianqing

    1995-01-01

    A 1-bit serial processor designed for a parallel computer architecture is described. This processor is used to develop a massively parallel computational engine, with a single instruction-multiple data (SIMD) architecture. The computer is simulated and tested to verify its operation and to measure its performance for further development.

  19. The Development of Reading and Spelling in Arabic Orthography: Two Parallel Processes?

    ERIC Educational Resources Information Center

    Taha, Haitham

    2016-01-01

    The parallels between reading and spelling skills in Arabic were tested. One-hundred forty-three native Arab students, with typical reading development, from second, fourth, and sixth grades were tested with reading, spelling and orthographic decision tasks. The results indicated a full parallel between the reading and spelling performances within…

  20. Information-Limited Parallel Processing in Difficult Heterogeneous Covert Visual Search

    ERIC Educational Resources Information Center

    Dosher, Barbara Anne; Han, Songmei; Lu, Zhong-Lin

    2010-01-01

    Difficult visual search is often attributed to time-limited serial attention operations, although neural computations in the early visual system are parallel. Using probabilistic search models (Dosher, Han, & Lu, 2004) and a full time-course analysis of the dynamics of covert visual search, we distinguish unlimited capacity parallel versus serial…

  1. The Myriad Strategies for Seeking Control in the Dying Process

    ERIC Educational Resources Information Center

    Schroepfer, Tracy A.; Noh, Hyunjin; Kavanaugh, Melinda

    2009-01-01

    Purpose: This study explored the role control plays in the dying process of terminally ill elders by investigating the aspects of the dying process over which they seek to exercise control, the strategies they use, and whether they desire to exercise more control. Design and Methods: In-depth face-to-face interviews were conducted with 84…

  2. Method and apparatus for routing data in an inter-nodal communications lattice of a massively parallel computer system by dynamically adjusting local routing strategies

    DOEpatents

    Archer, Charles Jens; Musselman, Roy Glenn; Peters, Amanda; Pinnow, Kurt Walter; Swartz, Brent Allen; Wallenfelt, Brian Paul

    2010-03-16

    A massively parallel computer system contains an inter-nodal communications network of node-to-node links. Each node implements a respective routing strategy for routing data through the network, the routing strategies not necessarily being the same in every node. The routing strategies implemented in the nodes are dynamically adjusted during application execution to shift network workload as required. Preferably, adjustment of routing policies in selective nodes is performed at synchronization points. The network may be dynamically monitored, and routing strategies adjusted according to detected network conditions.
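
    Schematically (our reading of the abstract, not the patented implementation), each node might keep per-strategy traffic counters and re-select its local routing strategy whenever a synchronization point is reached:

      # Hypothetical strategy names; real torus networks would use richer policies.
      ROUTING_STRATEGIES = ("x_first", "y_first", "z_first")

      class Node:
          def __init__(self):
              self.strategy = "x_first"
              self.link_load = {s: 0 for s in ROUTING_STRATEGIES}

          def record_traffic(self, strategy, packets):
              self.link_load[strategy] += packets

          def adjust_at_sync_point(self):
              # Shift workload: adopt the strategy whose links saw the least use,
              # then reset the counters for the next interval.
              self.strategy = min(self.link_load, key=self.link_load.get)
              self.link_load = {s: 0 for s in ROUTING_STRATEGIES}

      node = Node()
      node.record_traffic("x_first", 120)
      node.record_traffic("y_first", 30)
      node.record_traffic("z_first", 75)
      node.adjust_at_sync_point()
      print(node.strategy)   # -> "y_first"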

  3. Parallel processing of Eulerian-Lagrangian, cell-based adaptive method for moving boundary problems

    NASA Astrophysics Data System (ADS)

    Kuan, Chih-Kuang

    In this study, issues and techniques related to the parallel processing of the Eulerian-Lagrangian method for multi-scale moving boundary computation are investigated. The scope of the study consists of the Eulerian approach for field equations, explicit interface-tracking, Lagrangian interface modification and reconstruction algorithms, and a cell-based unstructured adaptive mesh refinement (AMR) in a distributed-memory computation framework. We decomposed the Eulerian domain spatially along with AMR to balance the computational load of solving the field equations, which is the primary cost of the entire solver. The Lagrangian domain is partitioned based on marker vicinities with respect to the Eulerian partitions to minimize inter-processor communication. Overall, the performance of an Eulerian task peaks at 10,000-20,000 cells per processor, and this is the upper bound on the performance of the Eulerian-Lagrangian method. Moreover, the load imbalance of the Lagrangian task is not as influential on the overall performance as the communication overhead of the Eulerian-Lagrangian tasks. To assess the parallel processing capabilities, a high Weber number drop collision is simulated. The high convective-to-viscous length scale ratios result in disparate length scale distributions; together with the moving and topologically irregular interfaces, the computational tasks require adaptive, temporally and spatially resolved treatment. The techniques presented enable us to perform original studies to meet such computational requirements. Coalescence, stretch, and break-up of satellite droplets due to the interfacial instability are observed in the current study, and the history of interface evolution is in good agreement with the experimental data. The competing mechanisms of the primary and secondary droplet break-up, along with the gas-liquid interfacial dynamics, are systematically investigated. This study shows that Rayleigh-Taylor instability on the edge of an extruding sheet

  4. Processes in arithmetic strategy selection: a fMRI study.

    PubMed

    Taillan, Julien; Ardiale, Eléonore; Anton, Jean-Luc; Nazarian, Bruno; Félician, Olivier; Lemaire, Patrick

    2015-01-01

    This neuroimaging (functional magnetic resonance imaging) study investigated neural correlates of strategy selection. Young adults performed an arithmetic task in two different conditions. In both conditions, participants had to provide estimates of two-digit multiplication problems like 54 × 78. In the choice condition, participants had to select the better of two available rounding strategies: the rounding-up (RU) strategy (i.e., doing 60 × 80 = 4,800) or the rounding-down (RD) strategy (i.e., doing 50 × 70 = 3,500) to estimate the product of 54 × 78. In the no-choice condition, participants did not have to select a strategy on each problem but were told which strategy to use; they executed the RU and RD strategies each on a series of problems. Participants also had a control task (i.e., providing correct products of multiplication problems like 40 × 50). Brain activations and performance were analyzed as a function of these conditions. Participants were able to frequently choose the better strategy in the choice condition; they were also slower when they executed the more difficult RU strategy than the easier RD strategy. Neuroimaging data showed greater brain activations in right anterior cingulate cortex (ACC), dorso-lateral prefrontal cortex (DLPFC), and angular gyrus (ANG) when selecting (relative to executing) the better strategy on each problem. Moreover, RU was associated with more parietal cortex activation than RD. These results suggest an important role of the fronto-parietal network in strategy selection and have important implications for further understanding and modeling the cognitive processes underlying strategy selection.
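
    The choice participants faced can be stated as a two-line computation; a worked example for 54 × 78 (exact product 4,212, so rounding up to 4,800 is closer than rounding down to 3,500):

      def better_strategy(a, b):
          """Pick the rounding strategy whose estimate is closer to the exact product."""
          exact = a * b
          ru = ((a // 10 + 1) * 10) * ((b // 10 + 1) * 10)   # round both operands up
          rd = (a // 10 * 10) * (b // 10 * 10)               # round both operands down
          return "round-up" if abs(ru - exact) < abs(rd - exact) else "round-down"

      print(better_strategy(54, 78))   # 4212 is nearer 4800 than 3500 -> "round-up"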

  5. Analysis of squeeze film process between non-parallel circular surfaces

    NASA Astrophysics Data System (ADS)

    Radulescu, A. V.; Radulescu, I.

    2017-02-01

    The purpose of the paper is to determine the performance of a Newtonian fluid in the squeezing process between non-parallel circular surfaces, from theoretical and experimental points of view. The theoretical analysis is based on the two-dimensional Reynolds equation, which is solved under the simplifying hypothesis of the “narrow bearing theory”. With this approximation, certain terms in the Reynolds equation can be neglected, and an analytical expression for the pressure distribution on the superior surface can be written. The theoretical results have been compared with experimental determinations of the pressure distribution obtained on a modified Weissenberg rheogoniometer. For the squeezing experiment, two oils specific to internal combustion engines were used. Their viscosity was measured with a cone-and-plate Brookfield viscometer. The test stand can measure the pressure variation with film thickness at three points, for different squeezing velocities and an imposed geometry of the circular plates. In conclusion, a good correlation between theory and experiment is observed in the case of thick lubricant films. At low lubricant film thicknesses, the theoretical model has to be improved, using a finite theory method for flow modelling.
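
    For reference, the starting point of such analyses is the isoviscous squeeze-film form of the Reynolds equation (quoted here in its textbook form, not transcribed from the paper):

      \nabla \cdot \left( \frac{h^{3}}{12\,\mu} \, \nabla p \right) = \frac{\partial h}{\partial t}

    where p is the film pressure, h the local film thickness, and μ the dynamic viscosity; the narrow-bearing hypothesis neglects the pressure gradient along one of the two coordinate directions, which is what makes an analytical pressure profile possible.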

  6. The effect of preventive educational program in cigarette smoking: Extended Parallel Process Model

    PubMed Central

    Gharlipour, Zabihollah; Hazavehei, Seyed Mohammad Mehdi; Moeini, Babak; Nazari, Mahin; Beigi, Abbas Moghim; Tavassoli, Elahe; Heydarabadi, Akbar Babaei; Reisi, Mahnoush; Barkati, Hasan

    2015-01-01

    Background: Cigarette smoking is one of the preventable causes of disease and death. The most important preventive measure is teaching techniques for resisting peer pressure. Any educational program should be designed with an emphasis on theories of behavioral change and based on effective educational practice. To investigate educational interventions for the prevention of cigarette smoking, this study used the Extended Parallel Process Model (EPPM). Materials and Methods: This is a quasi-experimental study. Two middle schools for male students in Shiraz were randomly selected. We randomly selected 120 students for the experimental group and 120 students for the control group. After a diagnostic evaluation, educational interventions on the consequences of smoking and on preventive skills were applied. Results: Our results indicated that there was a significant difference between students in the control and experimental groups in the means of perceived susceptibility (P < 0.000, t = 6.84), perceived severity (P < 0.000, t = −11.46), perceived response efficacy (P < 0.000, t = −7.07), perceived self-efficacy (P < 0.000, t = −11.64), and preventive behavior (P < 0.000, t = −24.36). Conclusions: The EPPM, along with teaching the skills necessary to resist peer pressure, was significantly effective in improving preventive behavior regarding cigarette smoking among adolescents. However, this study recommends further research on ways of increasing perceived susceptibility to cigarette smoking among adolescents. PMID:25767815

  7. Optimization of Parallel Legendre Transform using Graphics Processing Unit (GPU) for a Geodynamo Code

    NASA Astrophysics Data System (ADS)

    Lokavarapu, H. V.; Matsui, H.

    2015-12-01

    Convection and the magnetic field of the Earth's outer core are expected to have vast length scales. Resolving these flows in geodynamo simulations requires high-performance computing; the simulations rely on the spherical harmonic transform (SHT), and a significant portion of the execution time is spent on its Legendre transform. Calypso is a geodynamo code designed to model the magnetohydrodynamics of a Boussinesq fluid in a rotating spherical shell, such as the outer core of the Earth. The code has been shown to scale well on computer clusters capable of computing on the order of 10⁵ cores using Message Passing Interface (MPI) and Open Multi-Processing (OpenMP) parallelization for CPUs. To optimize further, we investigate three different algorithms for the SHT using GPUs. One is to precompute the Legendre polynomials on the CPU before executing the SHT on the GPU within the time integration loop. In the second approach, both the Legendre polynomials and the SHT are computed on the GPU. In the third approach, we initially partition the radial grid for the forward transform and the harmonic order for the backward transform between the CPU and GPU; thereafter, the partitioned work is computed simultaneously within the time integration loop. We examine the trade-offs between space and time, memory bandwidth, and GPU computation on Maverick, a Texas Advanced Computing Center (TACC) supercomputer. We have observed improved performance using a GPU-enabled Legendre transform. Furthermore, we will compare and contrast the different algorithms in the context of GPUs.
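
    The kernel being optimized is, at its core, a pair of dense transforms. A minimal NumPy/SciPy version of the forward and backward Legendre transforms via Gauss-Legendre quadrature is shown below (our sketch; Calypso's data layout and GPU kernels are not reproduced, and the truncation degree is illustrative):

      import numpy as np
      from numpy.polynomial.legendre import leggauss
      from scipy.special import eval_legendre

      L = 32                                   # truncation degree
      x, w = leggauss(L + 1)                   # quadrature nodes and weights
      P = np.array([eval_legendre(l, x) for l in range(L + 1)])   # P[l, j] = P_l(x_j)

      def forward(f_grid):
          """Spectral coefficients f_l = (2l+1)/2 * sum_j w_j P_l(x_j) f(x_j)."""
          norm = (2 * np.arange(L + 1) + 1) / 2
          return norm * (P @ (w * f_grid))

      def backward(f_spec):
          """Grid values f(x_j) = sum_l f_l P_l(x_j)."""
          return P.T @ f_spec

      f = np.exp(x)                            # sample field on the quadrature grid
      assert np.allclose(backward(forward(f)), f)   # round trip recovers the grid data

    Both directions are matrix-vector products against the precomputed table P, which is why precomputing the polynomials (the first strategy above) and batching the products are the natural GPU optimizations.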

  8. Word learning and the cerebral hemispheres: from serial to parallel processing of written words.

    PubMed

    Ellis, Andrew W; Ferreira, Roberto; Cathles-Hagan, Polly; Holt, Kathryn; Jarvis, Lisa; Barca, Laura

    2009-12-27

    Reading familiar words differs from reading unfamiliar non-words in two ways. First, word reading is faster and more accurate than reading of unfamiliar non-words. Second, effects of letter length are reduced for words, particularly when they are presented in the right visual field in familiar formats. Two experiments are reported in which right-handed participants read aloud non-words presented briefly in their left and right visual fields before and after training on those items. The non-words were interleaved with familiar words in the naming tests. Before training, naming was slow and error prone, with marked effects of length in both visual fields. After training, fewer errors were made, naming was faster, and the effect of length was much reduced in the right visual field compared with the left. We propose that word learning creates orthographic word forms in the mid-fusiform gyrus of the left cerebral hemisphere. Those word forms allow words to access their phonological and semantic representations on a lexical basis. But orthographic word forms also interact with more posterior letter recognition systems in the middle/inferior occipital gyri, inducing more parallel processing of right visual field words than is possible for any left visual field stimulus, or for unfamiliar non-words presented in the right visual field.

  9. A parallel process growth mixture model of conduct problems and substance use with risky sexual behavior.

    PubMed

    Wu, Johnny; Witkiewitz, Katie; McMahon, Robert J; Dodge, Kenneth A

    2010-10-01

    Conduct problems, substance use, and risky sexual behavior have been shown to coexist among adolescents, which may lead to significant health problems. The current study was designed to examine relations among these problem behaviors in a community sample of children at high risk for conduct disorder. A latent growth model of childhood conduct problems showed a decreasing trend from grades K to 5. During adolescence, four concurrent conduct problem and substance use trajectory classes were identified (high conduct problems and high substance use, increasing conduct problems and increasing substance use, minimal conduct problems and increasing substance use, and minimal conduct problems and minimal substance use) using a parallel process growth mixture model. Across all substances (tobacco, binge drinking, and marijuana use), higher levels of childhood conduct problems during kindergarten predicted a greater probability of classification into more problematic adolescent trajectory classes relative to less problematic classes. For tobacco and binge drinking models, increases in childhood conduct problems over time also predicted a greater probability of classification into more problematic classes. For all models, individuals classified into more problematic classes showed higher proportions of early sexual intercourse, infrequent condom use, receiving money for sexual services, and ever contracting an STD. Specifically, tobacco use and binge drinking during early adolescence predicted higher levels of sexual risk taking into late adolescence. Results highlight the importance of studying the conjoint relations among conduct problems, substance use, and risky sexual behavior in a unified model.

  10. Application of parallel processing techniques to the simulation of power system electromagnetic transients

    SciTech Connect

    Falcao, D.M.; Kaszkurewicz, E. (COPPE-EE); Almeida, H.L.S. (Centro de Pesquisas de Energia Electrica)

    1993-02-01

    This paper proposes the use of parallel techniques for the computation of power system electromagnetic transients in a multiprocessor environment. System partitioning and parallel solution methods are described. Questions regarding computational load balancing and communication overheads are discussed and techniques are presented to improve the proposed method with respect to those matters. In order to demonstrate the feasibility and to assess the performance of the proposed techniques, a parallel electromagnetic transients program for a multiprocessor environment has been developed. Tests using real power networks of different sizes, executed in an 8-processor hypercube machine, have shown promising performance indices.

  11. Integer-encoded massively parallel processing of fast-learning fuzzy ARTMAP neural networks

    NASA Astrophysics Data System (ADS)

    Bahr, Hubert A.; DeMara, Ronald F.; Georgiopoulos, Michael

    1997-04-01

    In this paper we develop techniques that are suitable for the parallel implementation of Fuzzy ARTMAP networks. Speedup and learning performance results are provided for execution on a DECmpp/Sx-1208 parallel processor consisting of a DEC RISC Workstation Front-End and MasPar MP-1 Back-End with 8,192 processors. Experiments of the parallel implementation were conducted on the Letters benchmark database developed by Frey and Slate. The results indicate a speedup on the order of 1000-fold which allows combined training and testing time of under four minutes.

  12. Matrix distributed processing: a set of C++ tools for implementing generic lattice computations on parallel systems

    NASA Astrophysics Data System (ADS)

    Di Pierro, Massimo

    2001-11-01

    We present a set of programming tools (classes and functions written in C++ and based on Message Passing Interface) for fast development of generic parallel (and non-parallel) lattice simulations. They are collectively called MDP 1.2. These programming tools include classes and algorithms for matrices, random number generators, distributed lattices (with arbitrary topology), fields and parallel iterations. No previous knowledge of MPI is required in order to use them. Some applications in electromagnetism, electronics, condensed matter and lattice QCD are presented.
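
    The following minimal MPI sketch illustrates the distributed-lattice idea the abstract describes, a 1D field split across ranks with ghost-site exchange each iteration. It is an illustration of the concept only, not the MDP 1.2 API.

        // Minimal MPI sketch: a periodic 1D lattice split across ranks, with
        // one-site halos exchanged before each parallel iteration.
        #include <mpi.h>
        #include <vector>

        int main(int argc, char** argv) {
            MPI_Init(&argc, &argv);
            int rank, size;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            const int nlocal = 16;                    // sites owned by this rank
            std::vector<double> f(nlocal + 2, rank);  // +2 ghost sites
            int left  = (rank - 1 + size) % size;     // periodic topology
            int right = (rank + 1) % size;

            // Exchange halos: first owned site goes left, last goes right.
            MPI_Sendrecv(&f[1], 1, MPI_DOUBLE, left, 0,
                         &f[nlocal + 1], 1, MPI_DOUBLE, right, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Sendrecv(&f[nlocal], 1, MPI_DOUBLE, right, 1,
                         &f[0], 1, MPI_DOUBLE, left, 1,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);

            // A "parallel iteration" over owned sites may now read the ghosts.
            for (int i = 1; i <= nlocal; ++i)
                f[i] = 0.5 * (f[i - 1] + f[i + 1]);

            MPI_Finalize();
            return 0;
        }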

  13. The Processing of Somatosensory Information Shifts from an Early Parallel into a Serial Processing Mode: A Combined fMRI/MEG Study

    PubMed Central

    Klingner, Carsten M.; Brodoehl, Stefan; Huonker, Ralph; Witte, Otto W.

    2016-01-01

    The question regarding whether somatosensory inputs are processed in parallel or in series has not been clearly answered. Several studies that have applied dynamic causal modeling (DCM) to fMRI data have arrived at seemingly divergent conclusions. However, these divergent results could be explained by the hypothesis that the processing route of somatosensory information changes with time. Specifically, we suggest that somatosensory stimuli are processed in parallel only during the early stage, whereas the processing is later dominated by serial processing. This hypothesis was revisited in the present study based on fMRI analyses of tactile stimuli and the application of DCM to magnetoencephalographic (MEG) data collected during sustained (260 ms) tactile stimulation. Bayesian model comparisons were used to infer the processing stream. We demonstrated that the favored processing stream changes over time. We found that the neural activity elicited in the first 100 ms following somatosensory stimuli is best explained by models that support a parallel processing route, whereas a serial processing route is subsequently favored. These results suggest that the secondary somatosensory area (SII) receives information regarding a new stimulus in parallel with the primary somatosensory area (SI), whereas later processing in the SII is dominated by the preprocessed input from the SI. PMID:28066197

  14. Extensive separations (CLEAN) processing strategy compared to TRUEX strategy and sludge wash ion exchange

    SciTech Connect

    Knutson, B.J.; Jansen, G.; Zimmerman, B.D.; Seeman, S.E.; Lauerhass, L.; Hoza, M.

    1994-08-01

    Numerous pretreatment flowsheets have been proposed for processing the radioactive wastes in Hanford's 177 underground storage tanks. The CLEAN Option is examined along with two other flowsheet alternatives to quantify the trade-off between the greater capital equipment and operating costs of aggressive separations and the reduced waste disposal costs and decreased environmental/health risks. The effect on the volume of HLW glass product and the radiotoxicity of the LLW glass or grout product is predicted with current assumptions about waste characteristics and separations processes using a mass balance model. The prediction is made for three principal processing options: (1) washing of tank wastes with removal of cesium and technetium from the supernatant, with washed solids routed directly to the glass (the Sludge Wash C processing strategy); (2) the previous steps plus dissolution of the solids and removal of transuranic (TRU) elements, uranium, and strontium using solvent extraction processes (the Transuranic Extraction Option C (TRUEX-C) processing strategy); and (3) an aggressive yet feasible processing strategy for separating the waste components to meet several main goals (the CLEAN Option processing strategy): the LLW must meet US Nuclear Regulatory Commission Class A limits; concentrations of technetium, iodine, and uranium are reduced as low as reasonably achievable; and the HLW is contained within 1,000 borosilicate glass canisters that meet current Hanford Waste Vitrification Plant glass specifications.

  15. Stochastic dynamics of small ensembles of non-processive molecular motors: the parallel cluster model.

    PubMed

    Erdmann, Thorsten; Albert, Philipp J; Schwarz, Ulrich S

    2013-11-07

    Non-processive molecular motors have to work together in ensembles in order to generate appreciable levels of force or movement. In skeletal muscle, for example, hundreds of myosin II molecules cooperate in thick filaments. In non-muscle cells, by contrast, small groups with few tens of non-muscle myosin II motors contribute to essential cellular processes such as transport, shape changes, or mechanosensing. Here we introduce a detailed and analytically tractable model for this important situation. Using a three-state crossbridge model for the myosin II motor cycle and exploiting the assumptions of fast power stroke kinetics and equal load sharing between motors in equivalent states, we reduce the stochastic reaction network to a one-step master equation for the binding and unbinding dynamics (parallel cluster model) and derive the rules for ensemble movement. We find that for constant external load, ensemble dynamics is strongly shaped by the catch bond character of myosin II, which leads to an increase of the fraction of bound motors under load and thus to firm attachment even for small ensembles. This adaptation to load results in a concave force-velocity relation described by a Hill relation. For external load provided by a linear spring, myosin II ensembles dynamically adjust themselves towards an isometric state with constant average position and load. The dynamics of the ensembles is now determined mainly by the distribution of motors over the different kinds of bound states. For increasing stiffness of the external spring, there is a sharp transition beyond which myosin II can no longer perform the power stroke. Slow unbinding from the pre-power-stroke state protects the ensembles against detachment.

  16. Stochastic dynamics of small ensembles of non-processive molecular motors: The parallel cluster model

    SciTech Connect

    Erdmann, Thorsten; Albert, Philipp J.; Schwarz, Ulrich S.

    2013-11-07

    Non-processive molecular motors have to work together in ensembles in order to generate appreciable levels of force or movement. In skeletal muscle, for example, hundreds of myosin II molecules cooperate in thick filaments. In non-muscle cells, by contrast, small groups with few tens of non-muscle myosin II motors contribute to essential cellular processes such as transport, shape changes, or mechanosensing. Here we introduce a detailed and analytically tractable model for this important situation. Using a three-state crossbridge model for the myosin II motor cycle and exploiting the assumptions of fast power stroke kinetics and equal load sharing between motors in equivalent states, we reduce the stochastic reaction network to a one-step master equation for the binding and unbinding dynamics (parallel cluster model) and derive the rules for ensemble movement. We find that for constant external load, ensemble dynamics is strongly shaped by the catch bond character of myosin II, which leads to an increase of the fraction of bound motors under load and thus to firm attachment even for small ensembles. This adaptation to load results in a concave force-velocity relation described by a Hill relation. For external load provided by a linear spring, myosin II ensembles dynamically adjust themselves towards an isometric state with constant average position and load. The dynamics of the ensembles is now determined mainly by the distribution of motors over the different kinds of bound states. For increasing stiffness of the external spring, there is a sharp transition beyond which myosin II can no longer perform the power stroke. Slow unbinding from the pre-power-stroke state protects the ensembles against detachment.

  17. Reconstruction for Time-Domain In Vivo EPR 3D Multigradient Oximetric Imaging—A Parallel Processing Perspective

    PubMed Central

    Dharmaraj, Christopher D.; Thadikonda, Kishan; Fletcher, Anthony R.; Doan, Phuc N.; Devasahayam, Nallathamby; Matsumoto, Shingo; Johnson, Calvin A.; Cook, John A.; Mitchell, James B.; Subramanian, Sankaran; Krishna, Murali C.

    2009-01-01

    Three-dimensional Oximetric Electron Paramagnetic Resonance Imaging using the Single Point Imaging modality generates unpaired spin density and oxygen images that can readily distinguish between normal and tumor tissues in small animals. It is also possible with fast imaging to track the changes in tissue oxygenation in response to the oxygen content in the breathing air. However, each 3D oximetric imaging experiment involves gigabytes of data that undergo digital band-pass filtering and background noise subtraction, followed by 3D Fourier reconstruction. This process is rather slow in a conventional uniprocessor system. This paper presents a parallelization framework using OpenMP runtime support and parallel MATLAB to execute such computationally intensive programs. The Intel compiler is used to develop a parallel C++ code based on OpenMP. The code is executed on four dual-core AMD Opteron shared-memory processors, significantly reducing the computational burden of the filtration task. The results show that the parallel code for filtration has achieved a speedup factor of 46.66 relative to the equivalent serial MATLAB code. In addition, a parallel MATLAB code has been developed to perform 3D Fourier reconstruction. Speedup factors of 4.57 and 4.25 have been achieved during the reconstruction process and oximetry computation, for a data set with 23 × 23 × 23 gradient steps. The execution time has been computed for both the serial and parallel implementations using different dimensions of the data and presented for comparison. The reported system has been designed to be easily accessible even from low-cost personal computers through local internet (NIHnet). The experimental results demonstrate that parallel computing provides a source of high computational power to obtain biophysical parameters from 3D EPR oximetric imaging, almost in real time. PMID:19672315
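
    A minimal sketch of the OpenMP strategy described here might look as follows; filter_bandpass is a hypothetical placeholder standing in for the actual digital band-pass filter and background subtraction, since each projection can be filtered independently.

        // Sketch: divide the independent per-projection filtering work among
        // shared-memory cores with a single OpenMP parallel-for.
        #include <cstddef>
        #include <omp.h>
        #include <vector>

        void filter_bandpass(std::vector<double>& s) {
            // Placeholder kernel: a 3-point smoother standing in for the real
            // digital band-pass filter and background noise subtraction.
            if (s.size() < 3) return;
            double prev = s[0];
            for (std::size_t k = 1; k + 1 < s.size(); ++k) {
                double cur = s[k];
                s[k] = 0.25 * prev + 0.5 * cur + 0.25 * s[k + 1];
                prev = cur;
            }
        }

        void filter_all(std::vector<std::vector<double>>& projections) {
            // Static scheduling suffices when every projection costs the same.
            #pragma omp parallel for schedule(static)
            for (long i = 0; i < static_cast<long>(projections.size()); ++i)
                filter_bandpass(projections[i]);
        }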

  18. Strategy choice mediates the link between auditory processing and spelling.

    PubMed

    Kwong, Tru E; Brachman, Kyle J

    2014-01-01

    Relations among linguistic auditory processing, nonlinguistic auditory processing, spelling ability, and spelling strategy choice were examined. Sixty-three undergraduate students completed measures of auditory processing (one involving distinguishing similar tones, one involving distinguishing similar phonemes, and one involving selecting appropriate spellings for individual phonemes). Participants also completed a modified version of a standardized spelling test, and a secondary spelling test with retrospective strategy reports. Once testing was completed, participants were divided into phonological versus nonphonological spellers on the basis of the number of words they spelled using phonological strategies only. Results indicated a) moderate to strong positive correlations among the different auditory processing tasks in terms of reaction time, but not accuracy levels, and b) weak to moderate positive correlations between measures of linguistic auditory processing (phoneme distinction and phoneme spelling choice in the presence of foils) and spelling ability for phonological spellers, but not for nonphonological spellers. These results suggest a possible explanation for past contradictory research on auditory processing and spelling, which has been divided in terms of whether or not disabled spellers seemed to have poorer auditory processing than did typically developing spellers, and suggest implications for teaching spelling to children with good versus poor auditory processing abilities.

  19. Testing the Theoretical Design of a Health Risk Message: Reexamining the Major Tenets of the Extended Parallel Process Model

    ERIC Educational Resources Information Center

    Gore, Thomas D.; Bracken, Cheryl Campanella

    2005-01-01

    This study examined the fear control/danger control responses that are predicted by the Extended Parallel Process Model (EPPM). In a campaign designed to inform college students about the symptoms and dangers of meningitis, participants were given either a high-threat/no-efficacy or high-efficacy/no-threat health risk message, thus testing the…

  20. Parallel versus Serial Processing Dependencies in the Perisylvian Speech Network: A Granger Analysis of Intracranial EEG Data

    ERIC Educational Resources Information Center

    Gow, David W., Jr.; Keller, Corey J.; Eskandar, Emad; Meng, Nate; Cash, Sydney S.

    2009-01-01

    In this work, we apply Granger causality analysis to high spatiotemporal resolution intracranial EEG (iEEG) data to examine how different components of the left perisylvian language network interact during spoken language perception. The specific focus is on the characterization of serial versus parallel processing dependencies in the dominant…

  1. Parallel Processing of Big Point Clouds Using Z-Order Partitioning

    NASA Astrophysics Data System (ADS)

    Alis, C.; Boehm, J.; Liu, K.

    2016-06-01

    As laser scanning technology improves and costs come down, the amount of point cloud data being generated can be prohibitively difficult and expensive to process on a single machine. This data explosion is not limited to point cloud data. Voluminous amounts of high-dimensionality and quickly accumulating data, collectively known as Big Data, such as those generated by social media, Internet of Things devices and commercial transactions, are becoming more prevalent as well. New computing paradigms and frameworks are being developed to efficiently handle the processing of Big Data, many of which utilize a compute cluster composed of several commodity-grade machines to process chunks of data in parallel. A central concept in many of these frameworks is data locality. By its nature, Big Data is large enough that the entire dataset would not fit on the memory and hard drives of a single node, hence replicating the entire dataset to each worker node is impractical. The data must then be partitioned across worker nodes in a manner that minimises data transfer across the network. This is a challenge for point cloud data because there exist different ways to partition data and they may require data transfer. We propose a partitioning based on Z-order, which is a form of locality-sensitive hashing. The Z-order or Morton code is computed by dividing each dimension to form a grid and then interleaving the binary representation of each dimension. For example, the Z-order code for the grid square with coordinates (x = 1 = 01₂, y = 3 = 11₂) is 1011₂ = 11. The number of points in each partition is controlled by the number of bits per dimension: the more bits, the fewer the points. The number of bits per dimension also controls the level of detail, with more bits yielding finer partitioning. We present this partitioning method by implementing it on Apache Spark and investigating how different parameters affect the accuracy and running time of the k nearest neighbour algorithm
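
    A minimal C++ sketch of the Morton encoding follows, reproducing the abstract's own example; the function name is ours, and the paper's Apache Spark implementation is not shown.

        // Z-order (Morton) key: interleave the bits of each dimension's grid
        // coordinate. 2D version with a configurable number of bits per dimension.
        #include <cstdint>
        #include <cstdio>

        uint64_t morton2d(uint32_t x, uint32_t y, int bits) {
            uint64_t code = 0;
            for (int i = 0; i < bits; ++i) {
                code |= (uint64_t)((x >> i) & 1u) << (2 * i);      // x bits -> even slots
                code |= (uint64_t)((y >> i) & 1u) << (2 * i + 1);  // y bits -> odd slots
            }
            return code;
        }

        int main() {
            // The example from the abstract: x = 1 = 01₂, y = 3 = 11₂ -> 1011₂ = 11.
            std::printf("%llu\n", (unsigned long long)morton2d(1, 3, 2));  // prints 11
            return 0;
        }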

  2. Graphics Processing Unit Acceleration and Parallelization of GENESIS for Large-Scale Molecular Dynamics Simulations.

    PubMed

    Jung, Jaewoon; Naurse, Akira; Kobayashi, Chigusa; Sugita, Yuji

    2016-10-11

    The graphics processing unit (GPU) has become a popular computational platform for molecular dynamics (MD) simulations of biomolecules. A significant speedup in the simulations of small- or medium-size systems using only a few computer nodes with a single or multiple GPUs has been reported. Because of GPU memory limitations and slow communication between GPUs on different computer nodes, it is not straightforward to accelerate MD simulations of large biological systems that contain a few million or more atoms on massively parallel supercomputers with GPUs. In this study, we develop a new scheme in our MD software, GENESIS, to reduce the total computational time on such computers. In this scheme, computationally intensive real-space nonbonded interactions are computed mainly on GPUs, while less intensive bonded interactions and communication-intensive reciprocal-space interactions are performed on CPUs. On the basis of the midpoint cell method as a domain decomposition scheme, we introduce the single-particle interaction list for reducing GPU memory usage. Since total computational time is limited by the reciprocal-space computation, we utilize RESPA multiple time-step integration and reduce CPU idle time by assigning a subset of the nonbonded interactions to CPUs as well as GPUs when the reciprocal-space computation is skipped. We validated our GPU implementations in GENESIS on BPTI and a membrane protein, porin, by MD simulations, and on an alanine tripeptide by REMD simulations. Benchmark calculations on the TSUBAME supercomputer showed that an MD simulation of a million-atom system scaled up to 256 GPU-equipped computer nodes.

  3. Process control strategy for ITER central solenoid operation

    NASA Astrophysics Data System (ADS)

    Maekawa, R.; Takami, S.; Iwamoto, A.; Chang, H.-S.; Forgeas, A.; Chalifour, M.

    2016-12-01

    ITER Central Solenoid (CS) pulse operation induces significant flow disturbance in the forced-flow supercritical helium (SHe) cooling circuit, which could primarily impact the operation of the cold circulator (SHe centrifugal pump) in the Auxiliary Cold Box (ACB). Numerical studies using Venecia®, SUPERMAGNET and 4C have identified reverse flow at the CS module inlet due to the substantial thermal energy deposition at the innermost winding. To assess the reliability of ACB-CS (the dedicated ACB for the CS) operation, process analyses have been conducted with a dynamic process simulation model developed with the Cryogenic Process REal-time SimulaTor (C-PREST). To implement process control of the hydrodynamic instability, several strategies have been applied and their feasibility evaluated. The paper discusses a control strategy to protect the operation of the centrifugal-type cold circulator/compressor and its impact on CS cooling.

  4. The Impact of Reading Purposes on Text Processing Strategies

    ERIC Educational Resources Information Center

    Zhou, Ruiqi

    2011-01-01

    Readers' reading purpose is of great significance to their use of text processing strategies and their achievement in recall and reading comprehension in L1 reading research. To test its role in EFL learning, the author conducted an experiment with 18 Chinese EFL learners of English. Think-aloud and written recall methods were adopted and the result…

  5. The creative process in biomedical visualization: strategies and management.

    PubMed

    Anderson, P A

    1990-01-01

    The phases of the creative process (identification, preparation, incubation, insight, and elaboration/verification) are related to strategies for management of biomedical illustration projects. The idea that creativity is a mystery--and is unpredictable and uncontrollable--is not accepted. This article presents a practical way to encourage creative thinking in the biomedical visualization studio.

  6. Strategy, Policy and Contingency Planning. The US Defense Planning Process.

    DTIC Science & Technology

    1984-05-23

    The US defense planning process has difficulty in translating national-level policy guidance into viable defense contingency plans which, if implemented, produce winning outcomes. Some of the concerns…

  7. Parallel processing of information about location in the amygdala, entorhinal cortex and hippocampus.

    PubMed

    Gaskin, Stephane; White, Norman M

    2013-11-01

    The conditioned cue preference paradigm was used to study how rats use extra-maze cues to discriminate between 2 adjacent arms on an 8-arm radial maze, a situation in which most of the same cues can be seen from both arms but only one arm contains food. Since the food-restricted rats eat while passively confined on the food-paired arm, no responses are reinforced, so the discrimination is due to Pavlovian stimulus-reward (or outcome) learning. Consistent with other evidence that rats must move around in an environment to acquire a spatial map, we found that learning the adjacent arms CCP (ACCP) required a minimum amount of active exploration of the maze with no reinforcers present prior to passive pairing of the extra-maze cues with the food reinforcer, an instance of latent learning. Temporary inactivation of the hippocampus during the pre-exposure sessions had no effect on ACCP learning, confirming other evidence that the hippocampus is not involved in latent learning. A series of experiments identified a circuit involving the fimbria-fornix and dorsal entorhinal cortex as the neural basis of latent learning in this situation. In contrast, temporary inactivation of the entorhinal cortex or hippocampus during passive training or during testing blocked ACCP learning and expression, respectively, suggesting that these two structures co-operate in using spatial information to learn the location of food on the maze during passive pairing and to express this combined information during testing. In parallel with these processes, we found that the amygdala processes information leading to an equal tendency to enter both adjacent arms (even though only one was paired with food), suggesting that the stimulus information available to this structure is not sufficiently precise to discriminate between the ambiguous cues visible from the adjacent arms. Expression of the ACCP in normal rats depends on hippocampus-based learning to avoid the unpaired arm, which competes with the

  8. Application of bistable optical logic gate arrays to all-optical digital parallel processing

    NASA Astrophysics Data System (ADS)

    Walker, A. C.

    1986-05-01

    Arrays of bistable optical gates can form the basis of an all-optical digital parallel processor. Two classes of signal input geometry exist - on- and off-axis - and lead to distinctly different device characteristics. The optical implementation of multisignal fan-in to an array of intrinsically bistable optical gates using the more efficient off-axis option is discussed together with the construction of programmable read/write memories from optically bistable devices. Finally the design of a demonstration all-optical parallel processor incorporating these concepts is presented.

  9. Parallel Processing of Numerical Tsunami Simulations on a High Performance Cluster based on the GDAL Library

    NASA Astrophysics Data System (ADS)

    Schroeder, Matthias; Jankowski, Cedric; Hammitzsch, Martin; Wächter, Joachim

    2014-05-01

    Thousands of numerical tsunami simulations allow the computation of inundation and run-up along the coast for vulnerable areas over time. A so-called Matching Scenario Database (MSDB) [1] contains this large number of simulations in text file format. In order to visualize these wave propagations, the scenarios have to be reprocessed automatically. In the TRIDEC project, funded by the Seventh Framework Programme of the European Union, a Virtual Scenario Database (VSDB) and a Matching Scenario Database (MSDB) were established, amongst others, by the working group of the University of Bologna (UniBo) [1]. One part of TRIDEC was the development of a new generation of Decision Support System (DSS) for tsunami Early Warning Systems (TEWS) [2]. A working group of the GFZ German Research Centre for Geosciences was responsible for developing the Command and Control User Interface (CCUI), the central software application that supports operator activities, incident management and message dissemination. For integration and visualization in the CCUI, the numerical tsunami simulations from the MSDB must be converted into the shapefile format. The use of shapefiles enables much easier integration into standard Geographic Information Systems (GIS); moreover, the CCUI is based on two widely used open-source products (the GeoTools library and uDig), which provide shapefile integration out of the box. In this case, several thousand tsunami variations were processed for an example area around the Western Iberian margin. Due to the mass of data, only a program-controlled process was conceivable. In order to optimize the computing effort and operating time, an existing GFZ High Performance Computing Cluster (HPC) was chosen. Thus, geospatial software capable of parallel processing was sought. The FOSS tool Geospatial Data Abstraction Library (GDAL/OGR) was used to match the coordinates with the wave heights and generates the

  10. Distributed Computing for Signal Processing: Modeling of Asynchronous Parallel Computation. Appendix G. On the Design and Modeling of Special Purpose Parallel Processing Systems.

    DTIC Science & Technology

    1985-05-01


  11. High Level Waste (HLW) Feed Process Control Strategy

    SciTech Connect

    STAEHR, T.W.

    2000-06-14

    The primary purpose of this document is to describe the overall process control strategy for monitoring and controlling the functions associated with the Phase 1B high-level waste feed delivery. This document provides the basis for process monitoring and control functions and requirements needed throughput the double-shell tank system during Phase 1 high-level waste feed delivery. This document is intended to be used by (1) the developers of the future Process Control Plan and (2) the developers of the monitoring and control system.

  12. Parallel processing of real-time dynamic systems simulation on OSCAR (Optimally SCheduled Advanced multiprocessoR)

    NASA Technical Reports Server (NTRS)

    Kasahara, Hironori; Honda, Hiroki; Narita, Seinosuke

    1989-01-01

    Parallel processing of real-time dynamic systems simulation on a multiprocessor system named OSCAR is presented. In the simulation of dynamic systems, the same calculations are generally repeated at every time step. However, Do-all or Do-across techniques cannot be applied to parallel processing of the simulation, since there exist data dependencies from the end of one iteration to the beginning of the next, and furthermore data input and data output are required every sampling period. Therefore, parallelism inside the calculation required for a single time step, i.e., a large basic block consisting of arithmetic assignment statements, must be used. In the proposed method, near-fine-grain tasks, each of which consists of one or more floating-point operations, are generated to extract this parallelism and are assigned to processors by optimal static scheduling at compile time, in order to avoid the large run-time overhead that the use of near-fine-grain tasks would otherwise incur. The practicality of the scheme is demonstrated on OSCAR (Optimally SCheduled Advanced multiprocessoR), which has been developed to exploit the advantages of static scheduling algorithms to the maximum extent.
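
    As a hedged illustration of compile-time static scheduling in this spirit, independent tasks with known costs can be assigned to the earliest-available processor before run time, so no scheduling overhead is paid inside the time-integration loop. Inter-task dependencies, which OSCAR's scheduler of course handles, are ignored here for brevity.

        // Sketch: earliest-available-processor assignment, fixed at compile time.
        #include <cstddef>
        #include <functional>
        #include <queue>
        #include <utility>
        #include <vector>

        // Returns assignment[i] = processor for task i, given per-task costs.
        std::vector<int> static_schedule(const std::vector<double>& cost, int nproc) {
            using Slot = std::pair<double, int>;      // (finish time, processor id)
            std::priority_queue<Slot, std::vector<Slot>, std::greater<Slot>> ready;
            for (int p = 0; p < nproc; ++p) ready.push({0.0, p});
            std::vector<int> assignment(cost.size());
            for (std::size_t i = 0; i < cost.size(); ++i) {
                auto [free_at, p] = ready.top(); ready.pop();
                assignment[i] = p;                    // earliest-available processor
                ready.push({free_at + cost[i], p});
            }
            return assignment;
        }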

  13. VECPAR 2000: 4th International Meeting on Vector and Parallel Processing, Part 1

    DTIC Science & Technology

    2000-06-01

    The meeting covered topics in all areas of vector, parallel and distributed computing applied to a broad range of research disciplines, with a focus on engineering: Distributed Computing and Operating Systems, Fault Tolerant Systems, Imaging and Graphics, Interconnection Networks, Languages and Tools, and Numerical Methods.

  14. VECPAR 2000: 4th International Meeting on Vector and Parallel Processing, Part 3

    DTIC Science & Technology

    2000-06-01

    The meeting covered topics in all areas of vector, parallel and distributed computing applied to a broad range of research disciplines, with a focus on engineering: Distributed Computing and Operating Systems, Fault Tolerant Systems, Imaging and Graphics, Interconnection Networks, Languages and Tools, and Numerical Methods.

  15. VECPAR 2000: 4th International Meeting on Vector and Parallel Processing, Part 2

    DTIC Science & Technology

    2000-06-01

    The meeting covered topics in all areas of vector, parallel and distributed computing applied to a broad range of research disciplines, with a focus on engineering: Distributed Computing and Operating Systems, Fault Tolerant Systems, Imaging and Graphics, Interconnection Networks, Languages and Tools, and Numerical Methods.

  16. The CERBERUS code: Experiments with parallel processing using RELAP5/MOD3

    SciTech Connect

    Makowitz, H.

    1989-01-01

    CERBERUS, a six-equation parallel thermal-hydraulic system simulation code, is being developed at the Idaho National Engineering Laboratory (INEL). CERBERUS Ver.00 performs parallel computations only for the heat transfer model. It is projected that CERBERUS Ver.01 will have a parallel heat transfer and hydraulic module, excluding the matrix solver, and CERBERUS Ver.02 will contain Ver.01 plus the solver. Three implementations of the CERBERUS Ver.00 code with constructs of varying overhead have been developed using a META language. These implementations are under study on shared-memory Cray-like computer architectures. Results for the hybrid code version, which utilizes all three construct sets simultaneously (i.e., CRAY AUTO, MICRO, and MULTI TASKING) on 2- and 8-CPU Cray machines, indicate the importance of load balancing for overhead reduction, and indicate that greater speedup factors may be achievable than previously believed with a RELAP-based parallel code. Extrapolations based on Y-MP/832 overhead measurements indicate that a speedup factor of > 10 may be obtainable with the CERBERUS Ver.02 code on a 16-CPU machine.

  17. Multi-Zone Liquid Thrust Chamber Performance Code with Domain Decomposition for Parallel Processing

    NASA Technical Reports Server (NTRS)

    Navaz, Homayun K.

    2002-01-01

    -equation turbulence model, and two-phase flow. To overcome these limitations, the LTCP code is rewritten to include the multi-zone capability with domain decomposition that makes it suitable for parallel processing, i.e., enabling the code to run every zone or sub-domain on a separate processor. This can reduce the run time by a factor of 6 to 8, depending on the problem.

  18. Parallel algorithm development

    SciTech Connect

    Adams, T.F.

    1996-06-01

    Rapid changes in parallel computing technology are causing significant changes in the strategies being used for parallel algorithm development. One approach is simply to write computer code in a standard language like FORTRAN 77, with the expectation that the compiler will produce executable code that will run in parallel. The alternatives are: (1) to build explicit message passing directly into the source code; or (2) to write source code without explicit reference to message passing or parallelism, but use a general communications library to provide efficient parallel execution. Application of these strategies is illustrated with examples of codes currently under development.
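
    A small sketch of alternative (2) follows, assuming a hypothetical wrapper namespace comm that hides MPI behind a reduction call; this is not the library the report refers to, only an illustration of the style.

        // Sketch of strategy (2): the application asks for a global sum; the
        // small communications layer owns all explicit message passing.
        #include <mpi.h>
        #include <vector>

        namespace comm {
            void init(int* argc, char*** argv) { MPI_Init(argc, argv); }
            void finalize() { MPI_Finalize(); }
            double sum(double local) {
                double global = 0.0;
                MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                              MPI_COMM_WORLD);
                return global;
            }
        }

        int main(int argc, char** argv) {
            comm::init(&argc, &argv);
            std::vector<double> mine(1000, 1.0);   // this rank's share of the data
            double partial = 0.0;
            for (double v : mine) partial += v;
            double total = comm::sum(partial);     // no explicit send/receive here
            (void)total;
            comm::finalize();
            return 0;
        }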

  19. Parallel pipeline networking and signal processing with field-programmable gate arrays (FPGAs) and VCSEL-MSM smart pixels

    NASA Astrophysics Data System (ADS)

    Kuznia, C. B.; Sawchuk, Alexander A.; Zhang, Liping; Hoanca, Bogdan; Hong, Sunkwang; Min, Chris; Pansatiankul, Dhawat E.; Alpaslan, Zahir Y.

    2000-05-01

    We present a networking and signal processing architecture called Transpar-TR (Translucent Smart Pixel Array-Token-Ring) that utilizes smart pixel technology to perform 2D parallel optical data transfer between digital processing nodes. Transpar-TR moves data through the network in the form of 3D packets (2D spatial and 1D time). By utilizing many spatial parallel channels, Transpar-TR can achieve high throughput, low latency communication between nodes, even with each channel operating at moderate data rates. The 2D array of optical channels is created by an array of smart pixels, each with an optical input and optical output. Each smart pixel consists of two sections, an optical network interface and ALU-based processor with local memory. The optical network interface is responsible for transmitting and receiving optical data packets using a slotted token ring network protocol. The smart pixel array operates as a single-instruction multiple-data processor when processing data. The Transpar-TR network, consisting of networked smart pixel arrays, can perform pipelined parallel processing very efficiently on 2D data structures such as images and video. This paper discusses the Transpar-TR implementation in which each node is the printed circuit board integration of a VCSEL-MSM chip, a transimpedance receiver array chip and an FPGA chip.

  20. Parallel pivoting combined with parallel reduction

    NASA Technical Reports Server (NTRS)

    Alaghband, Gita

    1987-01-01

    Parallel algorithms for the triangularization of large, sparse, and unsymmetric matrices are presented. The method combines parallel reduction with a new parallel pivoting technique, control over the generation of fill-ins, and a check for numerical stability, all done in parallel with the work distributed over the active processes. The technique uses the compatibility relation between pivots to identify parallel pivot candidates and uses the Markowitz number of pivots to minimize fill-in. This technique is not a preordering of the sparse matrix; it is applied dynamically as the decomposition proceeds.
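
    An illustrative sketch of the pivot-selection idea is given below, with caveats: it ranks candidates by Markowitz number and keeps only a row/column-disjoint set, omitting the numerical-stability check and the full compatibility test described in the paper.

        // Sketch: rank candidate pivots by Markowitz number (r_i - 1)(c_j - 1),
        // then greedily keep a set sharing no rows or columns, so the chosen
        // pivots can be eliminated in parallel. Row/column disjointness is a
        // weaker condition than the paper's full compatibility relation.
        #include <algorithm>
        #include <vector>

        struct Pivot { int row, col; long markowitz; };

        std::vector<Pivot> compatible_pivots(const std::vector<Pivot>& candidates,
                                             int nrows, int ncols) {
            std::vector<Pivot> sorted = candidates;
            std::sort(sorted.begin(), sorted.end(),
                      [](const Pivot& a, const Pivot& b) {
                          return a.markowitz < b.markowitz;
                      });
            std::vector<char> rowUsed(nrows, 0), colUsed(ncols, 0);
            std::vector<Pivot> chosen;
            for (const Pivot& p : sorted)
                if (!rowUsed[p.row] && !colUsed[p.col]) {
                    rowUsed[p.row] = colUsed[p.col] = 1;
                    chosen.push_back(p);   // independent pivot: eliminate in parallel
                }
            return chosen;
        }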

  1. [A new strategy for Chinese medicine processing technologies: coupled with individuation processed and cybernetics].

    PubMed

    Zhang, Ding-kun; Yang, Ming; Han, Xue; Lin, Jun-zhi; Wang, Jia-bo; Xiao, Xiao-he

    2015-08-01

    The stable and controllable quality of decoction pieces is an important factor in ensuring the efficacy of clinical medicine. Considering the dilemma that the existing standardized processing mode cannot effectively eliminate the variability in the quality of raw ingredients or ensure stability between different batches, we propose a new strategy for Chinese medicine processing technologies that couples individualized processing with cybernetics. To illustrate this thinking, an individual case study of different grades of aconite is provided. We hope this strategy can better serve clinical medicine and promote the inheritance and innovation of Chinese medicine processing skills and theories.

  2. Comparisons of elastic and rigid blade-element rotor models using parallel processing technology for piloted simulations

    NASA Technical Reports Server (NTRS)

    Hill, Gary; Duval, Ronald W.; Green, John A.; Huynh, Loc C.

    1991-01-01

    A piloted comparison of rigid and aeroelastic blade-element rotor models was conducted at the Crew Station Research and Development Facility (CSRDF) at Ames Research Center. A simulation development and analysis tool, FLIGHTLAB, was used to implement these models in real time using parallel processing technology. Pilot comments and quantitative analysis performed both on-line and off-line confirmed that elastic degrees of freedom significantly affect perceived handling qualities. Trim comparisons show improved correlation with flight test data when elastic modes are modeled. The results demonstrate the efficiency with which the mathematical modeling sophistication of existing simulation facilities can be upgraded using parallel processing, and the importance of these upgrades to simulation fidelity.

  3. Comparison of elastic and rigid blade-element rotor models using parallel processing technology for piloted simulations

    NASA Technical Reports Server (NTRS)

    Hill, Gary; Du Val, Ronald W.; Green, John A.; Huynh, Loc C.

    1991-01-01

    A piloted comparison of rigid and aeroelastic blade-element rotor models was conducted at the Crew Station Research and Development Facility (CSRDF) at Ames Research Center. A simulation development and analysis tool, FLIGHTLAB, was used to implement these models in real time using parallel processing technology. Pilot comments and qualitative analysis performed both on-line and off-line confirmed that elastic degrees of freedom significantly affect perceived handling qualities. Trim comparisons show improved correlation with flight test data when elastic modes are modeled. The results demonstrate the efficiency with which the mathematical modeling sophistication of existing simulation facilities can be upgraded using parallel processing, and the importance of these upgrades to simulation fidelity.

  4. Does phonological encoding in speech production always follow the retrieval of semantic knowledge? Electrophysiological evidence for parallel processing.

    PubMed

    Abdel Rahman, Rasha; Sommer, Werner

    2003-05-01

    In this article, a new approach to the distinction between serial/contingent and parallel/independent processing in the human cognitive system is applied to semantic knowledge retrieval and phonological encoding of the word form in picture naming. In two-choice go/nogo tasks, pictures of objects were manually classified on the basis of semantic and phonological information. An additional manipulation of the duration of the faster and presumably mediating process (semantic retrieval) made it possible to derive differential predictions from the two alternative models. These predictions were tested with two event-related brain potentials (ERPs), the lateralized readiness potential (LRP) and the N200. The findings indicate that phonological encoding can proceed in parallel with the retrieval of semantic features. A suggestion is made for how to accommodate these findings within models of speech production.

  5. Nonlinear Memory Capacity of Parallel Time-Delay Reservoir Computers in the Processing of Multidimensional Signals.

    PubMed

    Grigoryeva, Lyudmila; Henriques, Julie; Larger, Laurent; Ortega, Juan-Pablo

    2016-07-01

    This letter addresses the reservoir design problem in the context of delay-based reservoir computers for multidimensional input signals, parallel architectures, and real-time multitasking. First, an approximating reservoir model is presented in those frameworks that provides an explicit functional link between the reservoir architecture and its performance in the execution of a specific task. Second, the inference properties of the ridge regression estimator in the multivariate context are used to assess the impact of finite sample training on the decrease of the reservoir capacity. Finally, an empirical study is conducted that shows the adequacy of the theoretical results with the empirical performances exhibited by various reservoir architectures in the execution of several nonlinear tasks with multidimensional inputs. Our results confirm the robustness properties of the parallel reservoir architecture with respect to task misspecification and parameter choice already documented in the literature.

  6. Contributions of topography and parallel processing to odor coding in the vertebrate olfactory pathway.

    PubMed

    Kauer, J S

    1991-02-01

    Odor information appears to be encoded by activity distributed across many neurons at each level in the olfactory pathway. Thus olfactory circuits function as parallel distributed processors. New methods for observing distributed activity in such systems permit computer simulations to be constructed that are constrained by patterns of activity observed in the real system. Analysis of the system using a combination of physiological measurements and computational approaches might elucidate the principles by which odors are discriminated.

  7. Intermediate Level Computer Vision Processing Algorithm Development for the Content Addressable Array Parallel Processor.

    DTIC Science & Technology

    1986-11-29


  8. Parallel processing implementations of a contextual classifier for multispectral remote sensing data

    NASA Technical Reports Server (NTRS)

    Siegel, H. J.; Swain, P. H.; Smith, B. W.

    1980-01-01

    Contextual classifiers are being developed as a method to exploit the spatial/spectral context of a pixel to achieve accurate classification. Classification algorithms such as the contextual classifier typically require large amounts of computation time. One way to reduce the execution time of these tasks is through the use of parallelism. The applicability of the CDC flexible processor system and of a proposed multimicroprocessor system (PASM) for implementing contextual classifiers is examined.

  9. On the Optimality of Serial and Parallel Processing in the Psychological Refractory Period Paradigm: Effects of the Distribution of Stimulus Onset Asynchronies

    ERIC Educational Resources Information Center

    Miller, Jeff; Ulrich, Rolf; Rolke, Bettina

    2009-01-01

    Within the context of the psychological refractory period (PRP) paradigm, we developed a general theoretical framework for deciding when it is more efficient to process two tasks in serial and when it is more efficient to process them in parallel. This analysis suggests that a serial mode is more efficient than a parallel mode under a wide variety…

  10. [Position dependent influence that sensitivity correction processing gives the signal-to-noise ratio measurement in parallel imaging].

    PubMed

    Murakami, Koichi; Yoshida, Koji; Yanagimoto, Shinichi

    2012-01-01

    We studied the position-dependent influence of sensitivity correction processing on signal-to-noise ratio (SNR) measurement in parallel imaging (PI). Sensitivity correction processing that referred to the sensitivity distribution of the body coil improved regional uniformity more than a sensitivity uniformity correction filter with a fixed correction factor. In addition, the position-dependent influence on SNR measurement in PI differed with the sensitivity correction processing used. Therefore, if we divide the SNR of the sensitivity-corrected image by the SNR of the original image in each pixel to obtain an SNR ratio, we can show the position-dependent influence that sensitivity correction processing has on SNR measurement in PI. This ratio serves as an index of the precision of the sensitivity correction processing.
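
    A minimal sketch of that per-pixel ratio follows; the array layout and function name are assumed for illustration only.

        // Divide the SNR map of the sensitivity-corrected image by the SNR map
        // of the original image, pixel by pixel, to expose where the correction
        // changes measured SNR.
        #include <cstddef>
        #include <vector>

        std::vector<double> snr_ratio(const std::vector<double>& snr_corrected,
                                      const std::vector<double>& snr_original) {
            std::vector<double> ratio(snr_corrected.size(), 0.0);
            for (std::size_t i = 0; i < ratio.size(); ++i)
                if (snr_original[i] > 0.0)         // guard against empty pixels
                    ratio[i] = snr_corrected[i] / snr_original[i];
            return ratio;
        }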

  11. Distributed Computing for Signal Processing: Modeling of Asynchronous Parallel Computation. Appendix A.

    DTIC Science & Technology

    1983-03-01

    Modeling of Asynchronous Parallel Computation, 1983 Progress Report: L.J. Siegel, H.J. Siegel, P.H. Swain, G.B. Adams III, W.E. Kuhn III, et al. A banyan is a directed graph composed of vertices and edges or arcs such that it is irreflexive, asymmetric, and intransitive.

  12. Parallelizing Serial Code for a Distributed Processing Environment with an Application to High Frequency Electromagnetic Scattering

    DTIC Science & Technology

    1991-12-01


  13. Hadamard NMR spectroscopy for two-dimensional quantum information processing and parallel search algorithms.

    PubMed

    Gopinath, T; Kumar, Anil

    2006-12-01

    Hadamard spectroscopy has earlier been used to speed up multi-dimensional NMR experiments. In this work, we speed up the two-dimensional quantum computing scheme by using Hadamard spectroscopy in the indirect dimension, resulting in a scheme which is faster and requires the Fourier transformation only in the direct dimension. Two- and three-qubit quantum gates are implemented with an extra observer qubit. We also use one-dimensional Hadamard spectroscopy for binary information storage by spatial encoding and for the implementation of a parallel search algorithm.
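
    As background, the Hadamard multiplexing at the heart of this approach can be sketched in a few lines: build H_n by the Sylvester construction and decode multiplexed observations against its rows. This illustrates the multiplexing arithmetic only; the NMR pulse-sequence details are in the paper.

        // Sylvester construction of a Hadamard matrix (n a power of two).
        // Because H H^T = n I, decoding against a row of H recovers one channel
        // from n multiplexed measurements with an n-fold averaging advantage.
        #include <vector>

        std::vector<std::vector<int>> sylvester(int n) {
            std::vector<std::vector<int>> H(1, std::vector<int>(1, 1));
            for (int m = 1; m < n; m *= 2) {
                std::vector<std::vector<int>> G(2 * m, std::vector<int>(2 * m));
                for (int i = 0; i < m; ++i)
                    for (int j = 0; j < m; ++j) {
                        G[i][j] = G[i][j + m] = G[i + m][j] = H[i][j];
                        G[i + m][j + m] = -H[i][j];
                    }
                H.swap(G);
            }
            return H;
        }

        // Decode: x_j = (1/n) * sum_i H[j][i] * y_i, where y = H x are the
        // multiplexed observations (H is symmetric here, so H^T = H).
        std::vector<double> decode(const std::vector<std::vector<int>>& H,
                                   const std::vector<double>& y) {
            int n = static_cast<int>(y.size());
            std::vector<double> x(n, 0.0);
            for (int j = 0; j < n; ++j) {
                for (int i = 0; i < n; ++i) x[j] += H[j][i] * y[i];
                x[j] /= n;
            }
            return x;
        }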

  14. Queueing Network Models for Parallel Processing of Task Systems: an Operational Approach

    NASA Technical Reports Server (NTRS)

    Mak, Victor W. K.

    1986-01-01

    Computer performance modeling of possibly complex computations running on highly concurrent systems is considered. Earlier works in this area either dealt with a very simple program structure or resulted in methods with exponential complexity. An efficient procedure is developed to compute the performance measures for series-parallel-reducible task systems using queueing network models. The procedure is based on the concept of hierarchical decomposition and a new operational approach. Numerical results for three test cases are presented and compared to those of simulations.
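
    As a toy illustration of what series-parallel reducibility buys (assuming deterministic task times and ignoring the queueing delays that the paper's operational analysis accounts for), the completion time of such a task system can be computed by a simple hierarchical reduction:

        // Series composition adds completion times; parallel composition takes
        // the maximum. Real performance measures would come from the queueing
        // network model, not this idealized recursion.
        #include <algorithm>
        #include <memory>
        #include <vector>

        struct Node {
            enum Kind { Task, Series, Parallel } kind;
            double time = 0.0;                            // used when kind == Task
            std::vector<std::unique_ptr<Node>> children;  // used otherwise
        };

        double completion(const Node& n) {
            switch (n.kind) {
                case Node::Task:
                    return n.time;
                case Node::Series: {
                    double t = 0.0;
                    for (const auto& c : n.children) t += completion(*c);
                    return t;
                }
                case Node::Parallel: {
                    double t = 0.0;
                    for (const auto& c : n.children) t = std::max(t, completion(*c));
                    return t;
                }
            }
            return 0.0;
        }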

  15. High-throughput fabrication of micrometer-sized compound parabolic mirror arrays by using parallel laser direct-write processing

    NASA Astrophysics Data System (ADS)

    Yan, Wensheng; Cumming, Benjamin P.; Gu, Min

    2015-07-01

    Micrometer-sized parabolic mirror arrays have significant applications in both light-emitting diodes and solar cells. However, low fabrication throughput has been identified as a major obstacle to large-scale application of the mirror arrays, due to the serial nature of the conventional method. Here, the mirror arrays are fabricated using parallel laser direct-write processing, which addresses this barrier. In addition, it is demonstrated that parallel writing is able to fabricate complex arrays as well as simple arrays, and thus offers wider applications. Optical measurements show that each single mirror confines the full-width at half-maximum value to as small as 17.8 μm at a height of 150 μm whilst providing a transmittance of up to 68.3% at a wavelength of 633 nm, in good agreement with the calculated values.

  16. Investigation of the applicability of a functional programming model to fault-tolerant parallel processing for knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Harper, Richard

    1989-01-01

    In a fault-tolerant parallel computer, a functional programming model can facilitate distributed checkpointing, error recovery, load balancing, and graceful degradation. Such a model has been implemented on the Draper Fault-Tolerant Parallel Processor (FTPP). When used in conjunction with the FTPP's fault detection and masking capabilities, this implementation results in a graceful degradation of system performance after faults. Three graceful degradation algorithms have been implemented and are presented. A user interface has been implemented which requires minimal cognitive overhead by the application programmer, masking such complexities as the system's redundancy, distributed nature, variable complement of processing resources, load balancing, fault occurrence and recovery. This user interface is described and its use demonstrated. The applicability of the functional programming style to the Activation Framework, a paradigm for intelligent systems, is then briefly described.

  17. Optimized Parallelization for Nonlocal Means Based Low Dose CT Image Processing

    PubMed Central

    Zhang, Libo; Yang, Benqiang; Zhuang, Zhikun; Hu, Yining; Chen, Yang; Luo, Limin; Shu, Huazhong

    2015-01-01

    Low-dose CT (LDCT) images are often significantly degraded by severely increased mottled noise/artifacts, which can lead to lowered diagnostic accuracy in the clinic. Nonlocal means (NLM) filtering can effectively remove mottled noise/artifacts by utilizing large-scale patch-similarity information in LDCT images. But applying NLM filtering to LDCT imaging carries a high computation cost, because intensive patch-similarity calculation within a large search window is often required to include enough structure-similarity information for noise/artifact suppression. To improve its clinical feasibility, in this study we further optimize the parallelization of NLM filtering by avoiding repeated computation through row-wise intensity calculation and symmetric weight calculation. Shared memory with fast I/O speed is also used in the row-wise intensity calculation in the proposed method. Quantitative experiments demonstrate that significant acceleration can be achieved with respect to the traditional straight pixel-wise parallelization. PMID:26078781
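
    For context, a sketch of the baseline "straight pixel-wise parallelization" that the paper improves upon is shown below, reduced to one dimension with illustrative parameters; the paper's row-wise and symmetric-weight optimizations are not reproduced here.

        // Baseline pixel-wise NLM: each output sample is an independent weighted
        // average over a search window, so the outer loop parallelizes trivially.
        #include <algorithm>
        #include <cmath>
        #include <omp.h>
        #include <vector>

        std::vector<double> nlm_1d(const std::vector<double>& in,
                                   int search = 10, int patch = 3, double h = 0.1) {
            const int n = static_cast<int>(in.size());
            std::vector<double> out(n, 0.0);
            #pragma omp parallel for schedule(static)
            for (int i = patch; i < n - patch; ++i) {
                double num = 0.0, den = 0.0;
                for (int j = std::max(patch, i - search);
                     j <= std::min(n - patch - 1, i + search); ++j) {
                    double d2 = 0.0;                     // squared patch distance
                    for (int k = -patch; k <= patch; ++k) {
                        double d = in[i + k] - in[j + k];
                        d2 += d * d;
                    }
                    double w = std::exp(-d2 / (h * h));  // similarity weight
                    num += w * in[j];
                    den += w;
                }
                out[i] = num / den;
            }
            return out;
        }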

  18. Distributed Parallel Processing and Dynamic Load Balancing Techniques for Multidisciplinary High Speed Aircraft Design

    NASA Technical Reports Server (NTRS)

    Krasteva, Denitza T.

    1998-01-01

    Multidisciplinary design optimization (MDO) for large-scale engineering problems poses many challenges (e.g., the design of an efficient concurrent paradigm for global optimization based on disciplinary analyses, expensive computations over vast data sets, etc.) This work focuses on the application of distributed schemes for massively parallel architectures to MDO problems, as a tool for reducing computation time and solving larger problems. The specific problem considered here is configuration optimization of a high speed civil transport (HSCT), and the efficient parallelization of the embedded paradigm for reasonable design space identification. Two distributed dynamic load balancing techniques (random polling and global round robin with message combining) and two necessary termination detection schemes (global task count and token passing) were implemented and evaluated in terms of effectiveness and scalability to large problem sizes and a thousand processors. The effect of certain parameters on execution time was also inspected. Empirical results demonstrated stable performance and effectiveness for all schemes, and the parametric study showed that the selected algorithmic parameters have a negligible effect on performance.

  19. Rigid body constraints realized in massively-parallel molecular dynamics on graphics processing units

    NASA Astrophysics Data System (ADS)

    Nguyen, Trung Dac; Phillips, Carolyn L.; Anderson, Joshua A.; Glotzer, Sharon C.

    2011-11-01

    Molecular dynamics (MD) methods compute the trajectory of a system of point particles in response to a potential function by numerically integrating Newton's equations of motion. Extending these basic methods with rigid body constraints enables composite particles with complex shapes such as anisotropic nanoparticles, grains, molecules, and rigid proteins to be modeled. Rigid body constraints are added to the GPU-accelerated MD package, HOOMD-blue, version 0.10.0. The software can now simulate systems of particles, rigid bodies, or mixed systems in microcanonical (NVE), canonical (NVT), and isothermal-isobaric (NPT) ensembles. It can also apply the FIRE energy minimization technique to these systems. In this paper, we detail the massively parallel scheme that implements these algorithms and discuss how our design is tuned for the maximum possible performance. Two different case studies are included to demonstrate the performance attained, patchy spheres and tethered nanorods. In typical cases, HOOMD-blue on a single GTX 480 executes 2.5-3.6 times faster than LAMMPS executing the same simulation on any number of CPU cores in parallel. Simulations with rigid bodies may now be run with larger systems and for longer time scales on a single workstation than was previously even possible on large clusters.

  20. A VLSI Architecture for Output Probability Computations of HMM-Based Recognition Systems with Store-Based Block Parallel Processing

    NASA Astrophysics Data System (ADS)

    Nakamura, Kazuhiro; Yamamoto, Masatoshi; Takagi, Kazuyoshi; Takagi, Naofumi

    In this paper, a fast and memory-efficient VLSI architecture for output probability computations of continuous Hidden Markov Models (HMMs) is presented. These computations are the most time-consuming part of HMM-based recognition systems. High-speed VLSI architectures with small register counts and low power dissipation are required for the development of mobile embedded systems with capable human interfaces. We demonstrate store-based block parallel processing (StoreBPP) for output probability computations and present a VLSI architecture that supports it. When the number of HMM states is adequate for accurate recognition, the proposed architecture requires fewer registers and processing elements and less processing time than conventional stream-based block parallel processing (StreamBPP) architectures. The processing elements used in the StreamBPP architecture are identical to those used in the StoreBPP architecture. From a VLSI architectural viewpoint, a comparison shows the efficiency of the proposed architecture through its efficient use of registers for storing input feature vectors and intermediate results during computation.

  1. Synchronization Of Parallel Discrete Event Simulations

    NASA Technical Reports Server (NTRS)

    Steinman, Jeffrey S.

    1992-01-01

    An adaptive, parallel, discrete-event simulation-synchronization algorithm, Breathing Time Buckets, was developed in the Synchronous Parallel Environment for Emulation and Discrete Event Simulation (SPEEDES) operating system. The algorithm allows parallel simulations to process events optimistically in fluctuating time cycles that adapt naturally while the simulation is in progress, combining the best of the optimistic and conservative synchronization strategies while avoiding their major disadvantages. It is well suited for modeling communication networks, large-scale war games, simulated flights of aircraft, simulations of computer equipment, mathematical modeling, interactive engineering simulations, and depictions of the flow of information.
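
    A hedged, single-process sketch of the "breathing" cycle follows: events are processed optimistically up to the event horizon, the earliest timestamp of any event generated during the current cycle. The real algorithm coordinates the horizon across processors and rolls back events processed beyond it; the event model here is a placeholder.

        // Each cycle drains events below the current horizon estimate, then
        // commits and releases the newly generated events for the next cycle.
        #include <queue>
        #include <vector>

        struct Event { double t; };
        struct ByTime {
            bool operator()(const Event& a, const Event& b) const { return a.t > b.t; }
        };

        // Placeholder model: each event spawns one follow-on event until t = 100.
        std::vector<Event> handle(const Event& e) {
            std::vector<Event> out;
            if (e.t + 1.5 < 100.0) out.push_back({e.t + 1.5});
            return out;
        }

        void breathing_time_buckets(
            std::priority_queue<Event, std::vector<Event>, ByTime>& pq) {
            while (!pq.empty()) {
                double horizon = 1e300;        // no new events seen yet this cycle
                std::vector<Event> generated;
                while (!pq.empty() && pq.top().t < horizon) {
                    Event e = pq.top(); pq.pop();
                    for (const Event& n : handle(e)) {
                        generated.push_back(n);
                        if (n.t < horizon) horizon = n.t;  // horizon shrinks as we learn
                    }
                }
                for (const Event& n : generated) pq.push(n);  // start the next cycle
            }
        }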

  2. Real-time parallel implementation of Pulse-Doppler radar signal processing chain on a massively parallel machine based on multi-core DSP and Serial RapidIO interconnect

    NASA Astrophysics Data System (ADS)

    Klilou, Abdessamad; Belkouch, Said; Elleaume, Philippe; Le Gall, Philippe; Bourzeix, François; Hassani, Moha M'Rabet

    2014-12-01

    Pulse-Doppler radars require high computing power. This paper develops a massively parallel machine to implement a Pulse-Doppler radar signal processing chain in real time. The proposed machine consists of two C6678 digital signal processors (DSPs), each with eight DSP cores, interconnected with a Serial RapidIO (SRIO) bus. In this study, each individual core is considered the basic processing element; hence, the proposed parallel machine contains 16 processing elements. A straightforward model has been adopted to distribute the Pulse-Doppler radar signal processing chain. This model provides low latency, but communication inefficiency limits system performance. This paper proposes several optimizations that greatly reduce the inter-processor communication of the straightforward model and improve the parallel efficiency of the system. A use case of the Pulse-Doppler radar signal processing chain illustrates and validates the proposed mapping model. Experimental results show that the parallel efficiency of the proposed parallel machine is about 90%.
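    The work being distributed is the classic chain sketched below (generic pulse compression plus Doppler FFT in NumPy, not the paper's DSP code). In a mapping like the one described, each core would own a slice of range gates, and the corner turn between the two stages is what drives the inter-processor traffic being optimized.

```python
import numpy as np

def pulse_doppler(echoes, waveform):
    """Minimal pulse-Doppler chain over one coherent processing interval:
    matched filtering (pulse compression) per pulse, then an FFT across
    pulses per range gate to form the range-Doppler map.
    echoes: (n_pulses, n_range) complex samples."""
    # Pulse compression: correlate each pulse with the transmitted waveform.
    h = np.conj(waveform[::-1])
    compressed = np.array([np.convolve(p, h, mode="same") for p in echoes])
    # Doppler processing: windowed FFT along the slow-time (pulse) axis.
    window = np.hanning(echoes.shape[0])[:, None]
    rd_map = np.fft.fftshift(np.fft.fft(compressed * window, axis=0), axes=0)
    return np.abs(rd_map)

# toy usage: 64 pulses x 256 range samples, 13-sample waveform
rng = np.random.default_rng(0)
echoes = rng.standard_normal((64, 256)) + 1j * rng.standard_normal((64, 256))
wf = np.exp(1j * np.pi * np.linspace(0, 1, 13) ** 2)   # toy LFM chirp
print(pulse_doppler(echoes, wf).shape)                  # (64, 256)
```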

  3. Ultraviolet absorption measurements of CF2 in the parallel plate pyrolytic chemical vapour deposition process

    NASA Astrophysics Data System (ADS)

    Cruden, Brett A.; Gleason, Karen K.; Sawin, Herbert H.

    2002-03-01

    Polytetrafluoroethylene films have been deposited for use as low dielectric constant materials. Deposition is performed through pyrolysis of hexafluoropropylene oxide (HFPO) to produce CF2, which can then polymerize and deposit as a thin film. The variation of CF2 concentration as a function of reactor conditions has been characterized by ultraviolet absorption spectroscopy. CF2 concentration is observed to go through a maximum with respect to both pressure and pyrolysis temperature when it is present in large amounts (~10¹⁴ cm⁻³). A one-dimensional model including known kinetic reactions for HFPO decomposition and CF2 recombination and multi-component diffusive transport has been applied to the parallel plate system. The result is seen to overestimate the measured concentration and does not capture the maxima observed versus pressure and temperature. An additional mechanism of particle formation, by CF2 insertion into (CF2)n oligomers, has been introduced to produce a kinetic model that explains the CF2 concentration measurements.

  4. Open parallel cooperative and competitive decision processes: a potential provenance for quantum probability decision models.

    PubMed

    Fuss, Ian G; Navarro, Daniel J

    2013-10-01

    In recent years quantum probability models have been used to explain many aspects of human decision making, and as such quantum models have been considered a viable alternative to Bayesian models based on classical probability. One criticism that is often leveled at both kinds of models is that they lack a clear interpretation in terms of psychological mechanisms. In this paper we discuss the mechanistic underpinnings of a quantum walk model of human decision making and response time. The quantum walk model is compared to standard sequential sampling models, and the architectural assumptions of both are considered. In particular, we show that the quantum model has a natural interpretation in terms of a cognitive architecture that is both massively parallel and involves both co-operative (excitatory) and competitive (inhibitory) interactions between units. Additionally, we introduce a family of models that includes aspects of the classical and quantum walk models.

  5. Parallel processing demonstrator with plug-on-top free-space interconnect optics

    NASA Astrophysics Data System (ADS)

    Berger, Christoph; Wang, Xiaoqing; Ekman, Jeremy T.; Marchand, Philippe J.; Spaanenburg, Henk; Wang, Mark M.; Kiamilev, Fouad E.; Esener, Sadik C.

    2001-05-01

    We demonstrate a setup with 10 optically interconnected chips, which can perform a distributed radix-2-butterfly calculation for fast Fourier transformation. The setup consists of a motherboard, five multi-chip-modules (MCMs, with processor/transceiver chips and laser/detector chips), four plug-on-top optics modules that provide the bidirectional optical links between the MCMs, and external control electronics. The design of the optics and optomechanics satisfies numerous real-world constraints, such as compact size (< 1 inch thick), suitability for mass-production, suitability for large arrays (up to 10³ parallel channels), compatibility with standard electronics fabrication and packaging technology, and potential for active misalignment compensation by integrating MEMS technology.

  6. I/O Strategies for Multicore Processing in ATLAS

    NASA Astrophysics Data System (ADS)

    van Gemmeren, P.; Binet, S.; Calafiura, P.; Lavrijsen, W.; Malon, D.; Tsulaia, V.

    2012-12-01

    A critical component of any multicore/manycore application architecture is the handling of input and output. Even in the simplest of models, design decisions interact both in obvious and in subtle ways with persistence strategies. When multiple workers handle I/O independently using distinct instances of a serial I/O framework, for example, the way data from consecutive events are compressed together can cause serious inefficiencies, with workers redundantly reading the same buffers, or multiple instances thereof. With shared reader strategies, caching and buffer management by the persistence infrastructure and by the control framework may have decisive performance implications for a variety of design choices. Providing the next event may seem straightforward when all event data are stored contiguously in a block, but such strategies may incur performance penalties when only a subset of a given event's data is needed; conversely, when event data are partitioned by type in persistent storage, providing the next event becomes more complicated, requiring marshalling of data from many I/O buffers. Output strategies pose similarly subtle problems, with complications that may lead to significant serialization and the possibility of serial bottlenecks, either during writing or in post-processing, e.g., during data stream merging. In this paper we describe the I/O components of AthenaMP, the multicore implementation of the ATLAS control framework, and the considerations that have led to the current design, with attention to how these I/O components interact with ATLAS persistent data organization and infrastructure.
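    A toy illustration of the shared-reader strategy discussed above, using Python multiprocessing (a schematic of the pattern only, not AthenaMP's implementation; `read_events` and `process_event` are stand-ins):

```python
import multiprocessing as mp

def read_events(path):
    # Stand-in for the framework's decompressing event reader.
    for i in range(100):
        yield {"event_id": i, "payload": list(range(10))}

def process_event(event):
    # Stand-in for per-event reconstruction work.
    return event["event_id"], sum(event["payload"])

def worker(task_q, result_q):
    while (event := task_q.get()) is not None:
        result_q.put(process_event(event))

if __name__ == "__main__":
    task_q, result_q = mp.Queue(maxsize=64), mp.Queue()
    workers = [mp.Process(target=worker, args=(task_q, result_q))
               for _ in range(4)]
    for w in workers:
        w.start()
    for event in read_events("events.dat"):   # one reader touches the file,
        task_q.put(event)                     # so each block is read once
    for _ in workers:
        task_q.put(None)                      # shutdown sentinels
    results = [result_q.get() for _ in range(100)]
    for w in workers:
        w.join()
```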

  7. Materials processing in space - A strategy for commercialization

    NASA Technical Reports Server (NTRS)

    Naumann, R. J.

    1978-01-01

    Major aerospace companies are talking about space factories manufacturing billions of dollars worth of high technology materials per year. On the other hand, a recent National Academy of Sciences study team saw little prospect for space manufacturing because, in their opinion, most of the disturbing effects of gravity in the processes they considered could be overcome on the ground for much less expenditure. This paper presents a current assessment of the problems and promises of the Materials Processing in Space Program and outlines a strategy for developing the first products of commercial value. These early products are expected to serve as paradigms of what can be accomplished by manufacturing in space and should stimulate industry to develop space manufacturing to whatever degree is economically justifiable.

  8. Parallel Representation of Value-Based and Finite State-Based Strategies in the Ventral and Dorsal Striatum.

    PubMed

    Ito, Makoto; Doya, Kenji

    2015-11-01

    Previous theoretical studies of animal and human behavioral learning have focused on the dichotomy of the value-based strategy using action value functions to predict rewards and the model-based strategy using internal models to predict environmental states. However, animals and humans often take simple procedural behaviors, such as the "win-stay, lose-switch" strategy without explicit prediction of rewards or states. Here we consider another strategy, the finite state-based strategy, in which a subject selects an action depending on its discrete internal state and updates the state depending on the action chosen and the reward outcome. By analyzing choice behavior of rats in a free-choice task, we found that the finite state-based strategy fitted their behavioral choices more accurately than value-based and model-based strategies did. When fitted models were run autonomously with the same task, only the finite state-based strategy could reproduce the key feature of choice sequences. Analyses of neural activity recorded from the dorsolateral striatum (DLS), the dorsomedial striatum (DMS), and the ventral striatum (VS) identified significant fractions of neurons in all three subareas whose activities were correlated with individual states of the finite state-based strategy. Signals representing the internal state at the time of choice were found in the DMS, and signals for clusters of states in the VS. In addition, action values and state values of the value-based strategy were encoded in DMS and VS, respectively. These results suggest that both the value-based strategy and the finite state-based strategy are implemented in the striatum.
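    The contrast between strategy classes is easy to make concrete. A finite state-based strategy such as 'win-stay, lose-switch' needs no value function at all, only a discrete internal state updated from the last action and reward (a schematic, not the authors' fitted model):

```python
import random

class WinStayLoseSwitch:
    """Minimal finite state-based strategy: the internal state is simply
    the action to repeat; it is updated from the last action and reward,
    with no value function and no world model."""
    def __init__(self, actions=("L", "R")):
        self.actions = actions
        self.state = random.choice(actions)   # discrete internal state

    def choose(self):
        return self.state

    def update(self, action, reward):
        if not reward:                        # lose -> switch
            self.state = [a for a in self.actions if a != action][0]
        # win -> stay: state unchanged

agent = WinStayLoseSwitch()
a = agent.choose()
agent.update(a, reward=0)   # no reward, so the next choice switches
```

    A value-based strategy would instead maintain continuous action values, e.g. Q[a] += alpha * (reward - Q[a]), and choose stochastically by comparing them.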

  9. New prototype of acousto-optical radio-wave spectrometer with parallel frequency processing for astrophysical applications

    NASA Astrophysics Data System (ADS)

    Shcherbakov, Alexandre S.; Chavez Dagostino, Miguel; Arellanes, Adan O.; Aguirre Lopez, Arturo

    2016-09-01

    We develop a multi-band spectrometer with a few spatially parallel optical arms for the combined processing of their data flows. Such multi-band capability has various applications in astrophysical scenarios at different scales: from objects in the distant universe to planetary atmospheres in the Solar system. Each optical arm has its own performance characteristics, enabling simultaneous multi-band observations at different scales. This capability comes from designing each optical arm individually, exploiting different materials for acousto-optical cells that operate in various regimes, frequency ranges, and light wavelengths from independent light sources. Individual beam shapers provide both the needed incident light polarization and the required apodization to increase the dynamic range of the system. After parallel acousto-optical processing, the data flows are combined on a joint CCD matrix for the final stage of electronic data processing. At present, the prototype combines three bands, i.e. it includes three spatial optical arms. The first, low-frequency arm operates at central frequencies of 60-80 MHz with a 40 MHz bandwidth. The second arm covers middle frequencies of 350-500 MHz with a bandwidth of 200-300 MHz. The third arm is intended for ultra-high-frequency radio-wave signals of about 1.0-1.5 GHz with a bandwidth of <300 MHz. To date, the spectrometer has the following preliminary performance: the first arm exhibits a frequency resolution of 20 kHz, while the second and third arms give a resolution of 150-200 kHz. The number of resolvable spots is 1500-2000, depending on the regime of operation. A fourth optical arm for the 3.5 GHz frequency range is currently under construction.

  10. Parallel demodulation system and signal-processing method for extrinsic Fabry-Perot interferometer and fiber Bragg grating sensors.

    PubMed

    Jiang, Junfeng; Liu, Tiegen; Zhang, Yimo; Liu, Lina; Zha, Ying; Zhang, Fan; Wang, Yunxin; Long, Pin

    2005-03-15

    A parallel demodulation system for extrinsic Fabry-Perot interferometer (EFPI) and fiber Bragg grating (FBG) sensors is presented that is based on a Michelson interferometer and combines the methods of low-coherence interference and Fourier transform spectrum. Signals from EFPI and FBG sensors are obtained simultaneously by scanning one arm of a Michelson interferometer, and an algorithm model is established to process the signals and retrieve both the wavelength of the FBG and the cavity length of the EFPI at the same time, which are then used to determine the strain and temperature.

  11. Utilization of parallel processing in solving the inviscid form of the average-passage equation system for multistage turbomachinery

    NASA Technical Reports Server (NTRS)

    Mulac, Richard A.; Celestina, Mark L.; Adamczyk, John J.; Misegades, Kent P.; Dawson, Jef M.

    1987-01-01

    A procedure is outlined which utilizes parallel processing to solve the inviscid form of the average-passage equation system for multistage turbomachinery along with a description of its implementation in a FORTRAN computer code, MSTAGE. A scheme to reduce the central memory requirements of the program is also detailed. Both the multitasking and I/O routines referred to are specific to the Cray X-MP line of computers and its associated SSD (Solid-State Disk). Results are presented for a simulation of a two-stage rocket engine fuel pump turbine.

  12. The utilization of parallel processing in solving the inviscid form of the average-passage equation system for multistage turbomachinery

    NASA Technical Reports Server (NTRS)

    Mulac, Richard A.; Celestina, Mark L.; Adamczyk, John J.; Misegades, Kent P.; Dawson, Jef M.

    1987-01-01

    A procedure is outlined which utilizes parallel processing to solve the inviscid form of the average-passage equation system for multistage turbomachinery along with a description of its implementation in a FORTRAN computer code, MSTAGE. A scheme to reduce the central memory requirements of the program is also detailed. Both the multitasking and I/O routines referred to in this paper are specific to the Cray X-MP line of computers and its associated SSD (Solid-state Storage Device). Results are presented for a simulation of a two-stage rocket engine fuel pump turbine.

  13. Parallel Olfactory Processing in the Honey Bee Brain: Odor Learning and Generalization under Selective Lesion of a Projection Neuron Tract

    PubMed Central

    Carcaud, Julie; Giurfa, Martin; Sandoz, Jean Christophe

    2016-01-01

    The function of parallel neural processing is a fundamental problem in Neuroscience, as it is found across sensory modalities and evolutionary lineages, from insects to humans. Recently, parallel processing has attracted increased attention in the olfactory domain, with the demonstration in both insects and mammals that different populations of second-order neurons encode and/or process odorant information differently. Among insects, Hymenoptera present a striking olfactory system with a clear neural dichotomy from the periphery to higher-order centers, based on two main tracts of second-order (projection) neurons: the medial and lateral antennal lobe tracts (m-ALT and l-ALT). To unravel the functional role of these two pathways, we combined specific lesions of the m-ALT tract with behavioral experiments, using the classical conditioning of the proboscis extension response (PER conditioning). Lesioned and intact bees had to learn to associate an odorant (1-nonanol) with sucrose. Then the bees were subjected to a generalization procedure with a range of odorants differing in terms of their carbon chain length or functional group. We show that m-ALT lesion strongly affects acquisition of an odor-sucrose association. However, lesioned bees that still learned the association showed a normal gradient of decreasing generalization responses to increasingly dissimilar odorants. Generalization responses could be predicted to some extent by in vivo calcium imaging recordings of l-ALT neurons. The m-ALT pathway therefore seems necessary for normal classical olfactory conditioning performance. PMID:26834589

  14. Smart pixel camera based signal processing in an interferometric test station for massive parallel inspection of MEMS and MOEMS

    NASA Astrophysics Data System (ADS)

    Styk, Adam; Lambelet, Patrick; Røyset, Arne; Kujawińska, Małgorzata; Gastinger, Kay

    2010-09-01

    The paper presents the electro-optical design of an interferometric inspection system for massive parallel inspection of Micro(Opto)Electro-Mechanical Systems (M(O)EMS). The basic idea is to adapt a micro-optical probing wafer to the M(O)EMS wafer under test. The probing wafer is exchangeable and contains a micro-optical interferometer array: a low-coherence interferometer (LCI) array based on a Mirau configuration and a laser interferometer (LI) array based on a Twyman-Green configuration. The interference signals are generated in the micro-optical interferometers and are applied to M(O)EMS shape and deformation measurements by means of the LCI and to M(O)EMS vibration analysis (resonance frequency and spatial mode distribution) by means of the LI. A distributed array of 5×5 smart-pixel imagers detects the interferometric signals. The signal processing is based on the "on pixel" processing capacity of the smart-pixel camera array, which can be utilised for phase shifting, signal demodulation, or envelope maximum determination. Each micro-interferometer image is detected by a 140 × 146 pixel sub-array distributed in the imaging plane. The paper describes the architecture of cameras with the smart-pixel approach and discusses their application to massively parallel electro-optical detection and data reduction. The full data processing paths for the laser interferometer and the low-coherence interferometer are presented.
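    The "on pixel" operations mentioned above are pixel-local arithmetic. For example, four-step phase shifting recovers the wrapped phase from four frames shifted by π/2 (standard interferometry, not necessarily the paper's exact pipeline):

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    """Recover the wrapped interferometric phase from four frames shifted
    by pi/2: I_k = A + B*cos(phi + k*pi/2). Purely per-pixel arithmetic,
    which is the kind of operation an 'on pixel' smart-camera processor
    can perform locally (envelope detection for LCI is similarly local)."""
    return np.arctan2(i3 - i1, i0 - i2)

# synthetic check with a single pixel: A = 100, B = 50, phi = 1.2
phi = 1.2
frames = [100 + 50 * np.cos(phi + k * np.pi / 2) for k in range(4)]
print(four_step_phase(*frames))   # -> 1.2
```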

  15. Parallel Olfactory Processing in the Honey Bee Brain: Odor Learning and Generalization under Selective Lesion of a Projection Neuron Tract.

    PubMed

    Carcaud, Julie; Giurfa, Martin; Sandoz, Jean Christophe

    2015-01-01

    The function of parallel neural processing is a fundamental problem in Neuroscience, as it is found across sensory modalities and evolutionary lineages, from insects to humans. Recently, parallel processing has attracted increased attention in the olfactory domain, with the demonstration in both insects and mammals that different populations of second-order neurons encode and/or process odorant information differently. Among insects, Hymenoptera present a striking olfactory system with a clear neural dichotomy from the periphery to higher-order centers, based on two main tracts of second-order (projection) neurons: the medial and lateral antennal lobe tracts (m-ALT and l-ALT). To unravel the functional role of these two pathways, we combined specific lesions of the m-ALT tract with behavioral experiments, using the classical conditioning of the proboscis extension response (PER conditioning). Lesioned and intact bees had to learn to associate an odorant (1-nonanol) with sucrose. Then the bees were subjected to a generalization procedure with a range of odorants differing in terms of their carbon chain length or functional group. We show that m-ALT lesion strongly affects acquisition of an odor-sucrose association. However, lesioned bees that still learned the association showed a normal gradient of decreasing generalization responses to increasingly dissimilar odorants. Generalization responses could be predicted to some extent by in vivo calcium imaging recordings of l-ALT neurons. The m-ALT pathway therefore seems necessary for normal classical olfactory conditioning performance.

  16. Functionally Approached Body (FAB) Strategies for Young Children Who Have Behavioral and Sensory Processing Challenges

    ERIC Educational Resources Information Center

    Pagano, John

    2005-01-01

    Functionally Approached Body (FAB) Strategies offer a clinical approach to help parents of young children with behavioral and sensory processing challenges. This article introduces the FAB Strategies, clinical strategies developed by the author for understanding and addressing young children's behavioral and sensory processing challenges. The FAB…

  17. Parallel processing and learning in simple systems. Annual report, 10 January 1987-9 January 1988

    SciTech Connect

    Mpitsos, G.J.

    1988-03-11

    To date it has been demonstrated that an experimental animal, the sea slug Pleurobranchaea, is capable of one-trial food-aversion learning, and that the muscarinic antagonist scopolamine in low doses causes an enhancement of learning. Pharmacologic binding studies using a new ¹²⁵I form of quinuclidinyl benzilate, in addition to studies using the ³H form of this ligand, have uncovered not only the classical types of muscarinic receptors that are typical of vertebrate cortex, but also a new form that is not found in other invertebrates tested. Usually muscarinic receptors are found in low densities in invertebrate neural membranes, but the density of the new form in this animal's neural membranes is similar to the density of the classic receptors in mammalian cortex. Neurophysiological studies of individual neurons in small groups of identifiable neurons have shown that their activity is variable, as is the behavior that they take part in generating, and that the variability fits the definition of low-dimensional chaos. Findings show that such variability is an important feature of the emergence of adaptive responses arising from parallel, distributed neural networks in biological systems.

  18. Bell-contoured, parallel flow nozzles for reducing overspray in thermal spray processes

    SciTech Connect

    Beason, G.P. Jr.; McKechnie, T.N.; Zimmerman, F.R.

    1995-12-31

    Thermal spray guns which exhaust supersonic plasmas currently employ anodes incorporating conical nozzles. These nozzles do not ideally expand the plasma flow and, therefore, produce disruptive shock waves and expansion fans in the plume. Shock waves and expansion fans turn the flow, allowing injected particles to escape and resolidify. Also, the divergent, linear walls produce tangential flow velocity components that are not parallel to the nozzle center axis. The divergent flow components, in turn, impart divergent trajectories to many injected powder particles which facilitates their escape from the plasma flow. To solve this problem, bell-contoured nozzles were designed and fabricated to ideally expand the plasma and, thus, eliminate disruptive flow phenomena while exhausting a collimated flow. As a result, injected powder particles remained in the plasma, and overspray was reduced substantially. Additionally, the flow exiting the bell nozzles did not impart divergent components to injected particles; therefore, the impact velocities of the particles were maximized. Consequently, test results show that bell-contoured nozzles have reduced overspray by 50 percent.
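    The underlying design reasoning is the standard quasi-one-dimensional relation: ideal (shock-free) expansion fixes the exit Mach number from the pressure ratio, which in turn fixes the area ratio that the bell contour must deliver. A sketch with ideal-gas γ = 1.4 (illustrative only; a plasma's effective γ and real-gas effects differ):

```python
import numpy as np

def exit_mach(p0_over_pe, gamma=1.4):
    """Exit Mach number for ideal (shock-free) expansion to ambient
    pressure, from the isentropic pressure relation."""
    return np.sqrt(2.0 / (gamma - 1.0) *
                   (p0_over_pe ** ((gamma - 1.0) / gamma) - 1.0))

def area_ratio(M, gamma=1.4):
    """Isentropic area ratio A/A* a nozzle must reach to deliver Mach M."""
    t = 1.0 + 0.5 * (gamma - 1.0) * M ** 2
    return (1.0 / M) * (2.0 * t / (gamma + 1.0)) ** (
        (gamma + 1.0) / (2.0 * (gamma - 1.0)))

Me = exit_mach(20.0)       # e.g. a chamber-to-ambient pressure ratio of 20
print(Me, area_ratio(Me))  # ~2.60 and ~2.9 for gamma = 1.4
```

    A bell contour (typically designed by the method of characteristics) then turns the expanded flow parallel to the axis, which is what removes the divergent velocity components described above.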

  19. Parallel Mechanisms of Sentence Processing: Assigning Roles to Constituents of Sentences.

    ERIC Educational Resources Information Center

    McClelland, James L.; Kawamoto, Alan H.

    This paper describes and illustrates a simulation model for the processing of grammatical elements in a sentence, focusing on one aspect of sentence comprehension: the assignment of the constituent elements of a sentence to the correct thematic case roles. The model addresses questions about sentence processing from a perspective very different…

  20. PANMIN: sequential and parallel global optimization procedures with a variety of options for the local search strategy

    NASA Astrophysics Data System (ADS)

    Theos, F. V.; Lagaris, I. E.; Papageorgiou, D. G.

    2004-05-01

    We present two sequential and one parallel global optimization codes that belong to the stochastic class, and an interface routine that enables the use of the Merlin/MCL environment as a non-interactive local optimizer. This interface proved extremely important, since it provides flexibility, effectiveness and robustness to the local search task that is in turn employed by the global procedures. We demonstrate the use of the parallel code on a molecular conformation problem.
    Program summary
    Title of program: PANMIN
    Catalogue identifier: ADSU
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSU
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Computer for which the program is designed and others on which it has been tested: PANMIN is designed for UNIX machines. The parallel code runs on either shared memory architectures or on a distributed system. The code has been tested on a SUN Microsystems ENTERPRISE 450 with four CPUs, and on a 48-node cluster under Linux, with both the GNU g77 and the Portland group compilers. The parallel implementation is based on MPI and has been tested with LAM MPI and MPICH
    Installation: University of Ioannina, Greece
    Programming language used: Fortran-77
    Memory required to execute with typical data: Approximately O(n²) words, where n is the number of variables
    No. of bits in a word: 64
    No. of processors used: 1 or many
    Has the code been vectorised or parallelized?: Parallelized using MPI
    No. of bytes in distributed program, including test data, etc.: 147163
    No. of lines in distributed program, including the test data, etc.: 14366
    Distribution format: gzipped tar file
    Nature of physical problem: A multitude of problems in science and engineering are often reduced to minimizing a function of many variables. There are instances where a local optimum does not correspond to the desired physical solution and hence the search for a better solution is required. Local optimization techniques can be
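    The stochastic global-optimization pattern that codes of this class implement can be sketched as multistart local search (a generic illustration using SciPy, not PANMIN itself; each local search is independent, which is what makes a parallel version natural):

```python
import numpy as np
from scipy.optimize import minimize

def multistart(fun, bounds, n_starts=50, seed=0):
    """Sample random start points, refine each with a local optimizer,
    and keep the best minimum found."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    best = None
    for _ in range(n_starts):
        x0 = rng.uniform(lo, hi)
        res = minimize(fun, x0, method="L-BFGS-B", bounds=list(bounds))
        if best is None or res.fun < best.fun:
            best = res
    return best

# Example: the Rastrigin function, a standard multimodal test problem.
rastrigin = lambda x: 10 * len(x) + sum(x**2 - 10 * np.cos(2 * np.pi * x))
print(multistart(rastrigin, [(-5.12, 5.12)] * 2).x)   # near [0, 0]
```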

  1. Parallel memory processing by the CA1 region of the dorsal hippocampus and the basolateral amygdala.

    PubMed

    Cammarota, Martín; Bevilaqua, Lia R; Rossato, Janine I; Lima, Ramón H; Medina, Jorge H; Izquierdo, Iván

    2008-07-29

    There is abundant literature on the role of the basolateral amygdala (BLA) and the CA1 region of the hippocampus in memory formation of inhibitory avoidance (IA) and other behaviorally arousing tasks. Here, we investigate molecular correlates of IA consolidation in the two structures and their relation to NMDA receptors (NMDArs) and beta-adrenergic receptors (beta-ADrs). The separate posttraining administration of antagonists of NMDAr and beta-ADr to BLA and CA1 is amnesic. IA training is followed by an increase of the phosphorylation of calcium and calmodulin-dependent protein kinase II (CaMKII) and ERK2 in CA1 but only an increase of the phosphorylation of ERK2 in BLA. The changes are blocked by NMDAr antagonists but not beta-ADr antagonists in CA1, and they are blocked by beta-ADr but not NMDAr antagonists in BLA. In addition, the changes are accompanied by increased phosphorylation of tyrosine hydroxylase in BLA but not in CA1, suggesting that beta-AD modulation results from local catecholamine synthesis in the former but not in the latter structure. NMDAr blockers in CA1 do not alter the learning-induced neurochemical changes in BLA, and beta-ADr blockade in BLA does not hinder those in CA1. When put together with other data from the literature, the present findings suggest that CA1 and BLA play a role in consolidation, but they operate to an extent in parallel, suggesting that each is probably involved with different aspects of the task studied.

  2. Massively Parallel Geostatistical Inversion of Coupled Processes in Heterogeneous Porous Media

    NASA Astrophysics Data System (ADS)

    Ngo, A.; Schwede, R. L.; Li, W.; Bastian, P.; Ippisch, O.; Cirpka, O. A.

    2012-04-01

    another level of parallelization has been added.

  3. Event-based hydrological modeling for detecting dominant hydrological process and suitable model strategy for semi-arid catchments

    NASA Astrophysics Data System (ADS)

    Huang, Pengnian; Li, Zhijia; Chen, Ji; Li, Qiaoling; Yao, Cheng

    2016-11-01

    Properly simulating hydrological processes in semi-arid areas remains challenging. This study assesses the impact of different modeling strategies on simulating flood processes in semi-arid catchments. Four classic hydrological models, TOPMODEL, XINANJIANG (XAJ), SAC-SMA and TANK, were selected and applied to three semi-arid catchments in North China. Based on analysis and comparison of the simulation results of these classic models, four new flexible models were constructed and used to further investigate the suitability of various modeling strategies for semi-arid environments. Numerical experiments were also designed to examine the performances of the models. The results show that in semi-arid catchments a suitable model needs to include at least one nonlinear component to simulate the main process of surface runoff generation. If there are more than two nonlinear components in the hydrological model, they should be arranged in parallel, rather than in series. In addition, the results show that the parallel nonlinear components should be combined by multiplication rather than addition. Moreover, this study reveals that the key hydrological process in semi-arid catchments is infiltration-excess surface runoff, a nonlinear process.
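    A schematic of the favored structure (purely illustrative functional forms, not the paper's calibrated models): two nonlinear components in parallel, combined by multiplication so that significant runoff requires both a wet soil state and intense rainfall.

```python
def runoff_multiplicative(p, soil_moisture, s_max, p_ref, a=2.0, b=1.5):
    """Two nonlinear components in parallel, combined by multiplication:
    g1 captures soil-moisture state, g2 captures rainfall intensity
    (infiltration excess). Neither factor alone produces runoff."""
    g1 = (soil_moisture / s_max) ** a        # nonlinear wetness factor
    g2 = (p / (p + p_ref)) ** b              # nonlinear intensity factor
    return p * g1 * g2                       # multiplicative combination

def runoff_additive(p, soil_moisture, s_max, p_ref, a=2.0, b=1.5):
    """The alternative the study found inferior: additive combination,
    where either factor alone can drive runoff."""
    g1 = (soil_moisture / s_max) ** a
    g2 = (p / (p + p_ref)) ** b
    return p * 0.5 * (g1 + g2)

p, s = 12.0, 60.0                            # mm/h rainfall, mm soil storage
print(runoff_multiplicative(p, s, s_max=100.0, p_ref=10.0))
print(runoff_additive(p, s, s_max=100.0, p_ref=10.0))
```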

  4. Selecting a Control Strategy for Plug and Process Loads

    SciTech Connect

    Lobato, C.; Sheppy, M.; Brackney, L.; Pless, S.; Torcellini, P.

    2012-09-01

    Plug and Process Loads (PPLs) are building loads that are not related to general lighting, heating, ventilation, cooling, and water heating, and typically do not provide comfort to the building occupants. PPLs in commercial buildings account for almost 5% of U.S. primary energy consumption. On an individual building level, they account for approximately 25% of the total electrical load in a minimally code-compliant commercial building, and can exceed 50% in an ultra-high efficiency building such as the National Renewable Energy Laboratory's (NREL) Research Support Facility (RSF) (Lobato et al. 2010). Minimizing these loads is a primary challenge in the design and operation of an energy-efficient building. A complex array of technologies that measure and manage PPLs has emerged in the marketplace. Some fall short of manufacturer performance claims, however. NREL has been actively engaged in developing an evaluation and selection process for PPL control, and is using this process to evaluate a range of technologies for active PPL management that will cap RSF plug loads. Using a control strategy to match plug load use to users' required job functions offers huge untapped potential for energy savings.

  5. Live, video-rate super-resolution microscopy using structured illumination and rapid GPU-based parallel processing.

    PubMed

    Lefman, Jonathan; Scott, Keana; Stranick, Stephan

    2011-04-01

    Structured illumination fluorescence microscopy is a powerful super-resolution method that is capable of achieving a resolution below 100 nm. Each super-resolution image is computationally constructed from a set of differentially illuminated images. However, real-time application of structured illumination microscopy (SIM) has generally been limited due to the computational overhead needed to generate super-resolution images. Here, we have developed a real-time SIM system that incorporates graphics processing unit (GPU) based in-line parallel processing of raw/differentially illuminated images. By using GPU processing, the system has achieved a 90-fold increase in processing speed compared to performing equivalent operations on a multiprocessor computer; the total throughput of the system is limited by data acquisition speed, not by image processing. Overall, more than 350 raw images (16-bit depth, 512 × 512 pixels) can be processed per second, resulting in a maximum frame rate of 39 super-resolution images per second. This ultrafast processing capability is used to provide immediate feedback of super-resolution images for real-time display. These developments are increasing the potential for sophisticated super-resolution imaging applications.
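    The offload pattern that yields such speedups looks roughly as follows (a sketch using the CuPy library, assuming a CUDA GPU; the authors' actual SIM reconstruction pipeline is more involved):

```python
import numpy as np
import cupy as cp   # assumes the CuPy package and a CUDA GPU

def gpu_process_batch(raw_frames):
    """Sketch of in-line GPU processing of differentially illuminated
    frames: move a batch to the GPU, do the heavy frequency-domain work
    there, and return only the result. Not the authors' pipeline, just
    the offload pattern."""
    batch = cp.asarray(raw_frames, dtype=cp.float32)   # host -> device
    spectra = cp.fft.fft2(batch, axes=(-2, -1))        # per-frame FFT
    # ... band separation / shifting / recombination would happen here ...
    recon = cp.abs(cp.fft.ifft2(spectra, axes=(-2, -1)))
    return cp.asnumpy(recon)                           # device -> host

frames = np.random.rand(9, 512, 512).astype(np.float32)  # 9 SIM raw frames
sr_image = gpu_process_batch(frames)
```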

  6. Cross-linguistic parallels in processing derivational morphology: evidence from Polish.

    PubMed

    Bozic, Mirjana; Szlachta, Zanna; Marslen-Wilson, William D

    2013-12-01

    Neuroimaging evidence in English suggests that the neurocognitive processing of derivationally complex words primarily reflects their properties as whole forms. The current experiment provides a cross-linguistic examination of these proposals by investigating the processing of derivationally complex words in the rich morphological system of Polish. Within the framework of a dual language system approach, we asked whether there is evidence for decompositional processing of derivationally complex Polish stems - reflected in the activation of a linguistically specific decompositional system in the left hemisphere - or for increased competition between the derived stem and its embedded base stem in the bilateral system. The results showed activation in the bilateral system and no evidence for selective engagement of the left hemisphere decompositional system. This provides a cross-linguistic validation for the hypothesis that the neurocognitive processing of derived stems primarily reflects their properties as stored forms.

  7. The evolution of concepts of vestibular peripheral information processing: toward the dynamic, adaptive, parallel processing macular model.

    PubMed

    Ross, Muriel D

    2003-09-01

    In a letter to Robert Hooke, written on 5 February, 1675, Isaac Newton wrote "If I have seen further than certain other men it is by standing upon the shoulders of giants." In his context, Newton was referring to the work of Galileo and Kepler, who preceded him. However, every field has its own giants, those men and women who went before us and, often with few tools at their disposal, uncovered the facts that enabled later researchers to advance knowledge in a particular area. This review traces the history of the evolution of views from early giants in the field of vestibular research to modern concepts of vestibular organ organization and function. Emphasis will be placed on the mammalian maculae as peripheral processors of linear accelerations acting on the head. This review shows that early, correct findings were sometimes unfortunately disregarded, impeding later investigations into the structure and function of the vestibular organs. The central themes are that the macular organs are highly complex, dynamic, adaptive, distributed parallel processors of information, and that historical references can help us to understand our own place in advancing knowledge about their complicated structure and functions.

  8. The evolution of concepts of vestibular peripheral information processing: toward the dynamic, adaptive, parallel processing macular model

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.

    2003-01-01

    In a letter to Robert Hooke, written on 5 February, 1675, Isaac Newton wrote "If I have seen further than certain other men it is by standing upon the shoulders of giants." In his context, Newton was referring to the work of Galileo and Kepler, who preceded him. However, every field has its own giants, those men and women who went before us and, often with few tools at their disposal, uncovered the facts that enabled later researchers to advance knowledge in a particular area. This review traces the history of the evolution of views from early giants in the field of vestibular research to modern concepts of vestibular organ organization and function. Emphasis will be placed on the mammalian maculae as peripheral processors of linear accelerations acting on the head. This review shows that early, correct findings were sometimes unfortunately disregarded, impeding later investigations into the structure and function of the vestibular organs. The central themes are that the macular organs are highly complex, dynamic, adaptive, distributed parallel processors of information, and that historical references can help us to understand our own place in advancing knowledge about their complicated structure and functions.

  9. Decomposition strategies in the problems of simulation of additive laser technology processes

    NASA Astrophysics Data System (ADS)

    Khomenko, M. D.; Dubrov, A. V.; Mirzade, F. Kh.

    2016-11-01

    The development of additive technologies and their application in industry is tied to the ability to predict the final properties of the crystallized added material. This paper addresses a problem with dynamic and spatially nonuniform computational complexity, which, under uniform decomposition of the computational domain, leads to an unbalanced load on the computational cores. A partitioning strategy for the computational domain is used that minimizes the CPU time lost to serial portions of the additive-process computation. The chosen strategy is optimal given that the dynamic distribution of computational load is not known a priori. The scaling of the computational problem is measured on the cluster of the Institute on Laser and Information Technologies (RAS), which uses an InfiniBand interconnect. The parallel code with optimal decomposition significantly reduces the computation time (down to several hours), which is important for developing the software package that supports engineering activity in additive technology.
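    The essence of load-aware (rather than uniform) decomposition can be shown in a few lines: place rank boundaries by estimated per-cell work instead of by cell count (a schematic; the paper's partitioner and cost model are not reproduced here).

```python
import numpy as np

def partition_by_load(weights, n_ranks):
    """Load-aware 1-D decomposition: put rank boundaries where the
    cumulative work crosses k/n_ranks of the total, instead of giving
    every rank an equal number of cells. With load concentrated near
    the melt pool, equal-size blocks leave most cores idle."""
    cum = np.cumsum(weights)
    targets = cum[-1] * np.arange(1, n_ranks) / n_ranks
    return np.searchsorted(cum, targets).tolist()   # boundary cell indices

w = [1, 2, 3, 4, 5, 6, 7, 8, 9]        # per-cell cost estimates
print(partition_by_load(w, 3))          # [4, 7]: rank loads 10/18/17
# equal cell counts (3/3/3) would give loads 6/15/24
```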

  10. Parallel Atomistic Simulations

    SciTech Connect

    HEFFELFINGER,GRANT S.

    2000-01-18

    Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed, the replicated data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories, those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination are also reviewed and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains are discussed.
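    Of the three molecular dynamics decompositions mentioned, the replicated-data one is the simplest to sketch: every rank holds all coordinates, computes forces for its own subset of particles, and a global sum rebuilds the full force array (an mpi4py toy with an arbitrary force law, not a production MD kernel):

```python
# run with: mpirun -n 4 python replicated.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n = 1024
positions = np.zeros((n, 3))
if rank == 0:
    positions = np.random.rand(n, 3)
comm.Bcast(positions, root=0)        # every rank holds all coordinates

# Each rank computes forces for its strided slice of particles ...
local = np.zeros_like(positions)
for i in range(rank, n, size):
    r = positions - positions[i]
    r[i] = 1.0                        # dummy row, zeroed out below
    d2 = (r ** 2).sum(axis=1)
    d2[i] = np.inf                    # exclude self-interaction
    local[i] = (r / d2[:, None] ** 2).sum(axis=0)   # toy 1/r^3 force law

# ... then a global sum replicates the full force array on every rank.
forces = np.zeros_like(local)
comm.Allreduce(local, forces, op=MPI.SUM)
```

    The trade-off the review discusses: replicated data is easy and balances well, but its communication cost scales with the full system size, which is why spatial and force decompositions matter at scale.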

  11. Language constructs for modular parallel programs

    SciTech Connect

    Foster, I.

    1996-03-01

    We describe programming language constructs that facilitate the application of modular design techniques in parallel programming. These constructs allow us to isolate resource management and processor scheduling decisions from the specification of individual modules, which can themselves encapsulate design decisions concerned with concurrency, communication, process mapping, and data distribution. This approach permits development of libraries of reusable parallel program components and the reuse of these components in different contexts. In particular, alternative mapping strategies can be explored without modifying other aspects of program logic. We describe how these constructs are incorporated in two practical parallel programming languages, PCN and Fortran M. Compilers have been developed for both languages, allowing experimentation in substantial applications.

  12. Design of a high-speed digital processing element for parallel simulation

    NASA Technical Reports Server (NTRS)

    Milner, E. J.; Cwynar, D. S.

    1983-01-01

    A prototype of a custom designed computer to be used as a processing element in a multiprocessor based jet engine simulator is described. The purpose of the custom design was to give the computer the speed and versatility required to simulate a jet engine in real time. Real time simulations are needed for closed loop testing of digital electronic engine controls. The prototype computer has a microcycle time of 133 nanoseconds. This speed was achieved by: prefetching the next instruction while the current one is executing, transporting data using high speed data busses, and using state of the art components such as a very large scale integration (VLSI) multiplier. Included are discussions of processing element requirements, design philosophy, the architecture of the custom designed processing element, the comprehensive instruction set, the diagnostic support software, and the development status of the custom design.

  13. Parallel computing with graphics processing units for high-speed Monte Carlo simulation of photon migration.

    PubMed

    Alerstam, Erik; Svensson, Tomas; Andersson-Engels, Stefan

    2008-01-01

    General-purpose computing on graphics processing units (GPGPU) is shown to dramatically increase the speed of Monte Carlo simulations of photon migration. In a standard simulation of time-resolved photon migration in a semi-infinite geometry, the proposed methodology executed on a low-cost graphics processing unit (GPU) is a factor 1000 faster than simulation performed on a single standard processor. In addition, we address important technical aspects of GPU-based simulations of photon migration. The technique is expected to become a standard method in Monte Carlo simulations of photon migration.
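    Photon migration maps well onto GPUs because every photon follows the same sample-step/absorb/scatter loop independently. A vectorized CPU sketch of that loop (schematic physics: isotropic scattering, no boundaries; real codes sample a Henyey-Greenstein phase function):

```python
import numpy as np

rng = np.random.default_rng(1)

def propagate(n_photons, mu_a=0.1, mu_s=10.0, n_steps=200):
    """Vectorized core of a photon-migration Monte Carlo: all photons
    advance in lockstep, which is exactly the data-parallel structure
    that maps onto one GPU thread per photon."""
    mu_t = mu_a + mu_s
    pos = np.zeros((n_photons, 3))
    direc = np.tile([0.0, 0.0, 1.0], (n_photons, 1))
    weight = np.ones(n_photons)
    for _ in range(n_steps):
        step = -np.log(rng.random(n_photons)) / mu_t   # free path lengths
        pos += direc * step[:, None]
        weight *= mu_s / mu_t                          # absorb a fraction
        cos_t = 2 * rng.random(n_photons) - 1          # isotropic rescatter
        phi = 2 * np.pi * rng.random(n_photons)
        sin_t = np.sqrt(1 - cos_t ** 2)
        direc = np.stack(
            [sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t], axis=1)
    return pos, weight

pos, w = propagate(100_000)
print(pos.mean(axis=0), w.mean())
```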

  14. Serial or parallel? Using depth-of-processing to examine attention allocation during reading.

    PubMed

    Reichle, Erik D; Vanyukov, Polina M; Laurent, Patryk A; Warren, Tessa

    2008-08-01

    This paper presents an experiment investigating attention allocation in four tasks requiring varied degrees of lexical processing of 1-4 simultaneously displayed words. Response times and eye movements were only modestly affected by the number of words in an asterisk-detection task but increased markedly with the number of words in letter-detection, rhyme-judgment, and semantic-judgment tasks, suggesting that attention may not be serial for tasks that do not require significant lexical processing (e.g., detecting visual features), but is approximately serial for tasks that do (e.g., retrieving word meanings). The implications of these results for models of readers' eye movements are discussed.

  15. Parallel image compression

    NASA Technical Reports Server (NTRS)

    Reif, John H.

    1987-01-01

    A parallel compression algorithm for the 16,384 processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.

  16. Study of SSIN (Single Stage Interconnection Networks) Parallel Processing Interconnection Networks

    DTIC Science & Technology

    1988-10-31

    Processing Networks,----_ 𔄃 ABSTRACT (Continue on reverse if necessary and identify by bloc;umr.ber) The increase in dynamic average path length ( DAPL ...increase in dynamic average path length ( DAPL ) with network size is moderate while it is significantly less than log 2N , the number of stages needed in

  17. SIAM Conference on Parallel Processing for Scientific Computing, 4th, Chicago, IL, Dec. 11-13, 1989, Proceedings

    NASA Technical Reports Server (NTRS)

    Dongarra, Jack (Editor); Messina, Paul (Editor); Sorensen, Danny C. (Editor); Voigt, Robert G. (Editor)

    1990-01-01

    Attention is given to such topics as an evaluation of block algorithm variants in LAPACK, a large-grain parallel sparse system solver, a multiprocessor method for the solution of the generalized eigenvalue problem on an interval, and a parallel QR algorithm for iterative subspace methods on the CM-2. A discussion of numerical methods includes the topics of asynchronous numerical solutions of PDEs on parallel computers, parallel homotopy curve tracking on a hypercube, and solving Navier-Stokes equations on the Cedar Multi-Cluster system. A section on differential equations includes a discussion of a six-color procedure for the parallel solution of elliptic systems using the finite quadtree structure, data parallel algorithms for the finite element method, and domain decomposition methods in aerodynamics. Topics dealing with massively parallel computing include hypercube vs. 2-dimensional meshes and massively parallel computation of conservation laws. Performance and tools are also discussed.

  18. Do Sixth-Grade Writers Need Process Strategies?

    ERIC Educational Resources Information Center

    Torrance, Mark; Fidalgo, Raquel; Robledo, Patricia

    2015-01-01

    Background: Strategy-focused writing instruction trains students both to set explicit product goals and to adopt specific procedural strategies, particularly for planning text. A number of studies have demonstrated that strategy-focused writing instruction is effective in developing writing performance. Aim: This study aimed to determine whether…

  19. Multi-target Parallel Processing Approach for Gene-to-structure Determination of the Influenza Polymerase PB2 Subunit

    PubMed Central

    Moen, Spencer O.; Smith, Eric; Raymond, Amy C.; Fairman, James W.; Stewart, Lance J.; Staker, Bart L.; Begley, Darren W.; Edwards, Thomas E.; Lorimer, Donald D.

    2013-01-01

    Pandemic outbreaks of highly virulent influenza strains can cause widespread morbidity and mortality in human populations worldwide. In the United States alone, an average of 41,400 deaths and 1.86 million hospitalizations are caused by influenza virus infection each year 1. Point mutations in the polymerase basic protein 2 subunit (PB2) have been linked to the adaptation of the viral infection in humans 2. Findings from such studies have revealed the biological significance of PB2 as a virulence factor, thus highlighting its potential as an antiviral drug target. The structural genomics program put forth by the National Institute of Allergy and Infectious Disease (NIAID) provides funding to Emerald Bio and three other Pacific Northwest institutions that together make up the Seattle Structural Genomics Center for Infectious Disease (SSGCID). The SSGCID is dedicated to providing the scientific community with three-dimensional protein structures of NIAID category A-C pathogens. Making such structural information available to the scientific community serves to accelerate structure-based drug design. Structure-based drug design plays an important role in drug development. Pursuing multiple targets in parallel greatly increases the chance of success for new lead discovery by targeting a pathway or an entire protein family. Emerald Bio has developed a high-throughput, multi-target parallel processing pipeline (MTPP) for gene-to-structure determination to support the consortium. Here we describe the protocols used to determine the structure of the PB2 subunit from four different influenza A strains. PMID:23851357

  20. Multi-target parallel processing approach for gene-to-structure determination of the influenza polymerase PB2 subunit.

    PubMed

    Armour, Brianna L; Barnes, Steve R; Moen, Spencer O; Smith, Eric; Raymond, Amy C; Fairman, James W; Stewart, Lance J; Staker, Bart L; Begley, Darren W; Edwards, Thomas E; Lorimer, Donald D

    2013-06-28

    Pandemic outbreaks of highly virulent influenza strains can cause widespread morbidity and mortality in human populations worldwide. In the United States alone, an average of 41,400 deaths and 1.86 million hospitalizations are caused by influenza virus infection each year (1). Point mutations in the polymerase basic protein 2 subunit (PB2) have been linked to the adaptation of the viral infection in humans (2). Findings from such studies have revealed the biological significance of PB2 as a virulence factor, thus highlighting its potential as an antiviral drug target. The structural genomics program put forth by the National Institute of Allergy and Infectious Disease (NIAID) provides funding to Emerald Bio and three other Pacific Northwest institutions that together make up the Seattle Structural Genomics Center for Infectious Disease (SSGCID). The SSGCID is dedicated to providing the scientific community with three-dimensional protein structures of NIAID category A-C pathogens. Making such structural information available to the scientific community serves to accelerate structure-based drug design. Structure-based drug design plays an important role in drug development. Pursuing multiple targets in parallel greatly increases the chance of success for new lead discovery by targeting a pathway or an entire protein family. Emerald Bio has developed a high-throughput, multi-target parallel processing pipeline (MTPP) for gene-to-structure determination to support the consortium. Here we describe the protocols used to determine the structure of the PB2 subunit from four different influenza A strains.

  1. Fully automated cellular-resolution vertebrate screening platform with parallel animal processing

    PubMed Central

    Chang, Tsung-Yao; Pardo-Martin, Carlos; Allalou, Amin; Wählby, Carolina; Yanik, Mehmet Fatih

    2012-01-01

    The zebrafish larva is an optically-transparent vertebrate model with complex organs that is widely used to study genetics, developmental biology, and to model various human diseases. In this article, we present a set of novel technologies that significantly increase the throughput and capabilities of previously described vertebrate automated screening technology (VAST). We developed a robust multi-thread system that can simultaneously process multiple animals. System throughput is limited only by the image acquisition speed rather than by the fluidic or mechanical processes. We developed image recognition algorithms that fully automate manipulation of animals, including orienting and positioning regions of interest within the microscope’s field of view. We also identified the optimal capillary materials for high-resolution, distortion-free, low-background imaging of zebrafish larvae. PMID:22159032

  2. Embedded parallel processing based ground control systems for small satellite telemetry

    NASA Technical Reports Server (NTRS)

    Forman, Michael L.; Hazra, Tushar K.; Troendly, Gregory M.; Nickum, William G.

    1994-01-01

    The use of networked terminals which utilize embedded processing techniques results in totally integrated, flexible, high speed, reliable, and scalable systems suitable for telemetry and data processing applications such as mission operations centers (MOC). The synergy of these terminals, coupled with the capability of each terminal to receive incoming data, allows any defined display to be viewed on any terminal from the start of data acquisition. There is no single point of failure (other than with network input) such as exists with configurations where all input data go through a single front end processor and then to a serial string of workstations. Missions dedicated to NASA's ozone measurements program utilize the methodologies discussed here, resulting in a multimission configuration of low-cost, scalable hardware and software which can be run by one flight operations team with low risk.

  3. A Study of Parallel Software Development with HPF and MPI for Composite Process Modeling Simulations

    DTIC Science & Technology

    2011-01-01

    molding (RTM) and its variants such as the vacuum-assisted resin transfer molding (VARTM). Both of these techniques use fiber preforms made from...leading to warped fiber tows. Failure to achieve appropriate vacuum in VARTM may lead to a less than desired fiber fraction. Two prominent and potentially...and its physical accuracy and computational efficiency used in our simulations is briefly discussed. 3.1 Resin Impregnation and Process Modeling in RTM

  4. Optical diagnostics of a single evaporating droplet using fast parallel computing on graphics processing units

    NASA Astrophysics Data System (ADS)

    Jakubczyk, D.; Migacz, S.; Derkachov, G.; Woźniak, M.; Archer, J.; Kolwas, K.

    2016-09-01

    We report on the first application of GPU-accelerated computing to improve the performance of numerical methods used for the optical characterization of evaporating microdroplets. Single microdroplets of various liquids with different volatility and molecular weight (glycerine, glycols, water, etc.), as well as mixtures of liquids and diverse suspensions, evaporate inside the electrodynamic trap under chosen temperature and composition of the atmosphere. The series of scattering patterns recorded from the evaporating microdroplets are processed by fitting complete Mie theory predictions with a gradientless lookup-table method. We showed that computations on GPUs can be effectively applied to inverse scattering problems. In particular, our technique accelerated Mie-theory calculations more than 800-fold relative to a single-core Matlab implementation and almost 100-fold relative to the corresponding C code. Additionally, we overcame the problem of time-consuming data post-processing when some of the parameters (particularly the refractive index) of an investigated liquid are uncertain. Our program allows us to track the parameters characterizing the evaporating droplet nearly simultaneously with the progress of evaporation.

  5. A parallel offline CFD and closed-form approximation strategy for computationally efficient analysis of complex fluid flows

    NASA Astrophysics Data System (ADS)

    Allphin, Devin

    Computational fluid dynamics (CFD) solution approximations for complex fluid flow problems have become a common and powerful engineering analysis technique. These tools, though qualitatively useful, remain limited in practice by their underlying inverse relationship between simulation accuracy and overall computational expense. While a great volume of research has focused on remedying these issues inherent to CFD, one traditionally overlooked area of resource reduction for engineering analysis concerns the basic definition and determination of functional relationships for the studied fluid flow variables. This artificial relationship-building technique, called meta-modeling or surrogate/offline approximation, uses design of experiments (DOE) theory to efficiently approximate non-physical coupling between the variables of interest in a fluid flow analysis problem. By mathematically approximating these variables, DOE methods can effectively reduce the required quantity of CFD simulations, freeing computational resources for other analytical focuses. An idealized interpretation of a fluid flow problem can also be employed to create suitably accurate approximations of fluid flow variables for the purposes of engineering analysis. When used in parallel with a meta-modeling approximation, a closed-form approximation can provide useful feedback concerning proper construction, suitability, or even necessity of an offline approximation tool. It also provides a short-circuit pathway for further reducing the overall computational demands of a fluid flow analysis, again freeing resources for otherwise unsuitable resource expenditures. To validate these inferences, a design optimization problem was presented requiring the inexpensive estimation of aerodynamic forces applied to a valve operating on a simulated piston-cylinder heat engine. The determination of these forces was to be found using parallel surrogate and exact approximation methods, thus evidencing the comparative
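    The meta-modeling step itself can be as simple as a least-squares response surface fitted to a handful of CFD samples (a generic DOE-style sketch, not the thesis's actual surrogate):

```python
import numpy as np

def fit_quadratic_surrogate(X, y):
    """Least-squares quadratic response surface over two design variables,
    the simplest DOE-style meta-model:
    y ~ c0 + c1*x1 + c2*x2 + c3*x1^2 + c4*x2^2 + c5*x1*x2.
    A handful of CFD runs supplies (X, y); the surrogate then answers
    further queries at negligible cost."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda p: np.dot(
        [1, p[0], p[1], p[0]**2, p[1]**2, p[0] * p[1]], coef)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(9, 2))                 # 9 sampled design points
y = 3 + 2 * X[:, 0] - X[:, 1] + 0.5 * X[:, 0] * X[:, 1]  # stand-in outputs
f = fit_quadratic_surrogate(X, y)
print(f([0.2, -0.4]))                               # cheap surrogate query
```

    The closed-form idealization described above plays the complementary role: a physics-based sanity check on whether such a surrogate is needed and whether its predictions are plausible.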

  6. Green supply chain management strategy selection using analytic network process: case study at PT XYZ

    NASA Astrophysics Data System (ADS)

    Adelina, W.; Kusumastuti, R. D.

    2017-01-01

    This study addresses green supply chain management (GSCM) strategy selection for PT XYZ using the Analytic Network Process (ANP). GSCM is initiated as a response to reduce the environmental impacts of industrial activities. The purposes of this study are to identify the criteria and sub-criteria for selecting a GSCM strategy and to determine a suitable GSCM strategy for PT XYZ. This study proposes an ANP network with 6 criteria and 29 sub-criteria, obtained from the literature and experts' judgements. One of the six criteria contains the GSCM strategy options, namely a risk-based strategy, an efficiency-based strategy, an innovation-based strategy, and a closed-loop strategy. ANP handles the complexity of GSCM strategy selection through a structured process that incorporates green perspectives from experts. The result indicates that the innovation-based strategy is the most suitable GSCM strategy for PT XYZ.
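    The computational core of ANP (and of AHP, which it generalizes) is extracting a priority vector from a reciprocal pairwise-comparison matrix; the full method then assembles such local vectors into a weighted supermatrix and iterates it to convergence. A minimal sketch with hypothetical judgments:

```python
import numpy as np

def priority_vector(pairwise):
    """Priorities from a reciprocal pairwise-comparison matrix via its
    principal eigenvector, the basic computation underlying AHP/ANP.
    (The supermatrix step of full ANP is omitted here.)"""
    vals, vecs = np.linalg.eig(pairwise)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    return w / w.sum()

# Hypothetical expert judgments comparing three criteria:
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
print(priority_vector(A))   # ~[0.65, 0.23, 0.12]
```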

  7. On the flow processes in sharply inclined and stalled airfoils in parallel movement and rotation

    NASA Technical Reports Server (NTRS)

    Kohler, M.

    1984-01-01

    The purpose of this study is to obtain deeper insight into the complicated flow processes on airfoils in the region of maximum lift. To this end, computational and experimental investigations were carried out on a straight stationary, a twisted stationary, and a straight rotating rectangular wing. According to the available results, the method yields sufficiently applicable results for flow that is fully attached on all surfaces, at all rotation rates. The reliability of the method may be questioned for flow undergoing transition from the attached to the separated state, and for totally separated flow at higher rotation rates.

  8. Behavioural evidence for parallel information processing in the visual system of insects.

    PubMed

    Zhang, S; Srinivasan, M V

    1993-01-01

    Many flying insects display remarkable visual agility in capturing prey or pursuing a potential mate. They are capable of detecting, recognising, tracking and capturing a rapidly moving object on the wing. These manoeuvres are usually completed in a couple of seconds. The interval of time between the absorption of light quanta by the photoreceptors and the generation of an appropriate behavioural response is very short, encompassing only a few tens of milliseconds. In this time the visual nervous system has abstracted the essential features of the object and recognised it (where appropriate), or measured its movement and computed an interception course. As an elementary unit of computation, a neuron in the nervous system is considerably slower than, say, a flip-flop in the CPU of a modern computer. However, it is evident from the visual performance of an insect that the nervous system as a whole processes optical information much faster than a modern computer does. The rapid processing of visual information by animals therefore has to be attributed to the structure and the modus operandi of the nervous system.

  9. A real time, FEM based optimal control algorithm and its implementation using parallel processing hardware (transputers) in a microprocessor environment

    NASA Technical Reports Server (NTRS)

    Patten, William Neff

    1989-01-01

    There is an evident need to discover a means of establishing reliable, implementable controls for systems that are plagued by nonlinear and/or uncertain model dynamics. The development of a generic controller design tool for such tough-to-control systems is reported. The method utilizes a moving-grid, time-infinite-element-based solution of the necessary conditions that describe an optimal controller for a system, and produces a discrete feedback controller. Real-time laboratory experiments are now being conducted to demonstrate the viability of the method. The resulting algorithm is being implemented in a microprocessor environment, with critical computational tasks accomplished in parallel on a low-cost, on-board multiprocessor (INMOS T800 transputers). Progress to date validates the methodology presented. Applications of the technique to the control of highly flexible robotic appendages are suggested.

  10. How Are Child Restricted and Repetitive Behaviors Associated with Caregiver Stress Over Time? A Parallel Process Multilevel Growth Model.

    PubMed

    Harrop, Clare; McBee, Matthew; Boyd, Brian A

    2016-05-01

    Raising a child with autism spectrum disorder (ASD) is frequently accompanied by elevated caregiver stress. Examining the variables that predict these elevated rates will help us understand how caregiver stress both affects and is affected by child behaviors. This study explored how restricted and repetitive behaviors (RRBs) contributed concurrently and longitudinally to caregiver stress in a large sample of preschoolers with ASD, using parallel process multilevel growth models. Results indicated that initial rates of, and change in, RRBs predicted fluctuations in caregiver stress over time: when caregivers reported increased child RRBs, this was mirrored by increases in caregiver stress. Our data support the importance of targeted treatments for RRBs, as change in this domain may lead to improvements in caregiver wellbeing.

  11. Using the extended parallel process model to prevent noise-induced hearing loss among coal miners in Appalachia

    SciTech Connect

    Murray-Johnson, L.; Witte, K.; Patel, D.; Orrego, V.; Zuckerman, C.; Maxfield, A.M.; Thimons, E.D.

    2004-12-15

    Occupational noise-induced hearing loss is the second most commonly self-reported occupational illness or injury in the United States. Among coal miners, more than 90% report a hearing deficit by age 55. In this formative evaluation, focus groups were conducted with coal miners in Appalachia to ascertain whether miners perceive hearing loss as a major health risk and, if so, what would motivate consistent wearing of hearing protection devices (HPDs). The theoretical framework of the Extended Parallel Process Model was used to identify the miners' knowledge, attitudes, beliefs, and current behaviors regarding hearing protection. Focus group participants had strong perceived severity of, and varying levels of perceived susceptibility to, hearing loss. Various barriers significantly reduced both the self-efficacy and the response efficacy of using hearing protection.

  12. Using the extended parallel process model to prevent noise-induced hearing loss among coal miners in Appalachia.

    PubMed

    Murray-Johnson, Lisa; Witte, Kim; Patel, Dhaval; Orrego, Victoria; Zuckerman, Cynthia; Maxfield, Andrew M; Thimons, Edward D

    2004-12-01

    Occupational noise-induced hearing loss is the second most commonly self-reported occupational illness or injury in the United States. Among coal miners, more than 90% report a hearing deficit by age 55. In this formative evaluation, focus groups were conducted with coal miners in Appalachia to ascertain whether miners perceive hearing loss as a major health risk and, if so, what would motivate consistent wearing of hearing protection devices (HPDs). The theoretical framework of the Extended Parallel Process Model was used to identify the miners' knowledge, attitudes, beliefs, and current behaviors regarding hearing protection. Focus group participants had strong perceived severity of, and varying levels of perceived susceptibility to, hearing loss. Various barriers significantly reduced both the self-efficacy and the response efficacy of using hearing protection.

  13. Operation and performance of a longitudinal damping system using parallel digital signal processing

    SciTech Connect

    Fox, J.D.; Hindi, H.; Linscott, I.

    1994-06-01

    A programmable longitudinal feedback system based on four AT&T 1610 digital signal processors has been developed as a component of the PEP-II R&D program. This Longitudinal Quick Prototype is a proof of concept for the PEP-II system and implements full speed bunch-by-bunch signal processing for storage rings with bunch spacings of 4 ns. The design implements, via software, a general purpose feedback controller which allows the system to be operated at several accelerator facilities. The system configuration used for tests at the LBL Advanced Light Source is described. Open and closed loop results showing the detection and calculation of feedback signals from bunch motion are presented, and the system is shown to damp coupled-bunch instabilities in the ALS. Use of the system for accelerator diagnostics is illustrated via measurement of injection transients and analysis of open loop bunch motion.
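
    The closed-loop behaviour described above -- detect bunch motion, compute a feedback signal, damp the oscillation -- can be illustrated with a toy one-bunch model. This is a minimal sketch with assumed parameters (synchrotron frequency, growth rate, gain), not the PEP-II design values.

```python
import numpy as np

f_s = 5.0e3      # synchrotron frequency [Hz] (assumed)
f_rev = 136.0e3  # revolution frequency [Hz] (assumed)
growth = 50.0    # open-loop instability growth rate [1/s] (assumed)
gain = 300.0     # feedback damping rate [1/s]; > growth closes the loop

dt = 1.0 / f_rev
omega = 2.0 * np.pi * f_s
z = 1e-3 + 0.0j  # bunch phasor: longitudinal phase + i * energy error

amps = []
for turn in range(20000):
    z *= np.exp((growth + 1j * omega) * dt)  # free motion + instability
    z *= (1.0 - gain * dt)                   # per-turn feedback kick,
                                             # an added damping rate `gain`
    amps.append(abs(z))

# Net exponential rate is (growth - gain) < 0, so the motion damps.
print(f"amplitude: {amps[0]:.2e} -> {amps[-1]:.2e} (damped)")
```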

  14. QLWFPC2: Parallel-Processing Quick-Look WFPC2 Stellar Photometry Based on the Message Passing Interface

    NASA Astrophysics Data System (ADS)

    Mighell, K. J.

    2004-07-01

    I describe a new parallel-processing stellar photometry code called QLWFPC2 (http://www.noao.edu/staff/mighell/qlwfpc2) which is designed to do quick-look analysis of two entire WFPC2 observations from the Hubble Space Telescope in under 5 seconds using a fast Beowulf cluster with a Gigabit-Ethernet local network. This program is written in ANSI C and uses the MPICH implementation of the Message Passing Interface from Argonne National Laboratory for the parallel-processing communications, the CFITSIO library (from HEASARC at NASA's GSFC) for reading the standard FITS files from the HST Data Archive, and the Parameter Interface Library (from the INTEGRAL Science Data Center) for the IRAF parameter-file user interface. QLWFPC2 running on 4 processors takes about 2.4 seconds to analyze the WFPC2 archive datasets u37ga407r.c0.fits (F555W; 300 s) and u37ga401r.c0.fits (F814W; 300 s) of M54 (NGC 6715), the bright massive globular cluster near the center of the nearby Sagittarius dwarf spheroidal galaxy. The analysis of these HST observations of M54 led to the serendipitous discovery of more than 50 new bright variable stars in the central region of M54. Most of the candidate variable stars are found on the PC1 images of the cluster center --- a region where no variables have been reported by previous ground-based studies of variables in M54. This discovery is an example of how QLWFPC2 can be used to quickly explore the time domain of observations in the HST Data Archive.
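
    The MPI work-sharing pattern described here can be sketched with mpi4py: each rank processes its share of chip images and rank 0 gathers the photometry. Synthetic images stand in for the WFPC2 chips that the real code reads with CFITSIO, and detect_stars() is a hypothetical placeholder for the photometry step; this is not the QLWFPC2 code.

```python
from mpi4py import MPI
import numpy as np

def detect_stars(image, nsigma=5.0):
    """Hypothetical stand-in: return (y, x, flux) of bright pixels."""
    bkg, sig = np.median(image), image.std()
    ys, xs = np.where(image > bkg + nsigma * sig)
    return np.column_stack([ys, xs, image[ys, xs]])

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# 8 work units (e.g., 4 chips x 2 filters), dealt round-robin to ranks;
# intended for up to 8 ranks in this toy setup.
my_chips = [c for c in range(8) if c % size == rank]
rng = np.random.default_rng(rank)

results = []
for chip in my_chips:
    img = rng.normal(100.0, 5.0, size=(800, 800))        # synthetic chip
    img[rng.integers(0, 800, 50), rng.integers(0, 800, 50)] += 500.0
    results.append(detect_stars(img))

gathered = comm.gather(results, root=0)
if rank == 0:
    table = np.vstack([r for sub in gathered for r in sub])
    print(f"{len(table)} detections gathered from {size} rank(s)")
```

    Run, for example, with `mpiexec -n 4 python photometry_sketch.py` (the file name is arbitrary).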

  15. QLWFPC2: Parallel-Processing Quick-Look WFPC2 Stellar Photometry based on the Message Passing Interface

    NASA Astrophysics Data System (ADS)

    Mighell, K. J.

    2003-12-01

    I describe a new parallel-processing stellar photometry code called QLWFPC2 which is designed to do quick-look analysis of two entire WFPC2 observations from the Hubble Space Telescope in under 5 seconds using a fast Beowulf cluster with a Gigabit-Ethernet local network. This program is written in ANSI/ISO C and uses the MPICH implementation of the Message Passing Interface from Argonne National Laboratory for the parallel-processing communications, the CFITSIO library (from HEASARC at NASA's GSFC) for reading the standard FITS files from the HST Data Archive, and the Parameter Interface Library (from the INTEGRAL Science Data Center) for the IRAF parameter-file user interface. QLWFPC2 running on 4 processors takes about 2.4 seconds to analyze the WFPC2 archive datasets u37ga407r.c0.fits (F555W; 300 s) and u37ga401r.c0.fits (F814W; 300 s) of M54 (NGC 6715), the bright massive globular cluster near the center of the nearby Sagittarius dwarf spheroidal galaxy. The analysis of these HST observations of M54 led to the serendipitous discovery of more than 50 new bright variable stars in the central region of M54. Most of the candidate variable stars are found on the PC1 images of the cluster center --- a region where no variables have been reported by previous ground-based studies of variables in M54. This discovery is an example of how QLWFPC2 can be used to quickly explore the time domain of observations in the HST Data Archive. This work is supported by a grant from the National Aeronautics and Space Administration (NASA), Order No. S-13811-G, which was awarded by the Applied Information Systems Research Program (AISRP) of NASA's Office of Space Science (NRA 01-OSS-01).

  16. A VLSI Architecture with Multiple Fast Store-Based Block Parallel Processing for Output Probability and Likelihood Score Computations in HMM-Based Isolated Word Recognition

    NASA Astrophysics Data System (ADS)

    Nakamura, Kazuhiro; Shimazaki, Ryo; Yamamoto, Masatoshi; Takagi, Kazuyoshi; Takagi, Naofumi

    This paper presents a memory-efficient VLSI architecture for output probability computations (OPCs) of continuous hidden Markov models (HMMs) and likelihood score computations (LSCs). These computations are the most time-consuming part of HMM-based isolated word recognition systems. We demonstrate multiple fast store-based block parallel processing (MultipleFastStoreBPP) for OPCs and LSCs and present a VLSI architecture that supports it. Compared with conventional fast store-based block parallel processing (FastStoreBPP) and stream-based block parallel processing (StreamBPP) architectures, the proposed architecture requires fewer registers and less processing time. The processing elements (PEs) used in the FastStoreBPP and StreamBPP architectures are identical to those used in the MultipleFastStoreBPP architecture. From a VLSI architectural viewpoint, a comparison shows that the proposed architecture improves on the others through efficient use of PEs and of registers for storing input feature vectors.
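
    The computation being accelerated -- output probabilities of continuous HMMs evaluated over a block of input feature vectors -- can be written compactly in vectorized form, which is the software analogue of block parallel processing. All shapes and parameters below are illustrative, not taken from the paper, and the final score is a crude per-frame stand-in for a real Viterbi search.

```python
import numpy as np

D, M, S, T = 39, 8, 16, 20             # feature dim, mixtures, states, frames
rng = np.random.default_rng(0)
mu = rng.normal(size=(S, M, D))        # Gaussian means
var = rng.uniform(0.5, 2.0, (S, M, D)) # diagonal variances
logw = np.log(np.full((S, M), 1.0 / M))# mixture weights
obs = rng.normal(size=(T, D))          # one block of feature vectors

# log N(o; mu, var) for all states/mixtures/frames in one shot.
diff = obs[None, None, :, :] - mu[:, :, None, :]            # (S, M, T, D)
log_gauss = -0.5 * np.sum(diff**2 / var[:, :, None, :]
                          + np.log(2 * np.pi * var[:, :, None, :]), axis=-1)

# Output probabilities b_j(o_t): log-sum-exp over mixtures.  -> (S, T)
log_b = np.logaddexp.reduce(logw[:, :, None] + log_gauss, axis=1)

# Likelihood "score": per-frame maxima summed (illustration only,
# a real recognizer would run Viterbi over the state transitions).
score = np.sum(np.max(log_b, axis=0))
print(log_b.shape, score)
```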

  17. A continuous process to align electrospun nanofibers into parallel and crossed arrays

    NASA Astrophysics Data System (ADS)

    Laudenslager, Michael J.; Sigmund, Wolfgang M.

    2013-04-01

    Electrical, optical, and mechanical properties of nanofibers are strongly affected by their orientation. Electrospinning is a nanofiber processing technique that typically produces nonwoven meshes of randomly oriented fibers. While several alignment techniques exist, they are only able to produce either a very thin layer of aligned fibers or larger quantities of fibers with less control over their alignment and orientation. The technique presented herein fills the gap between these two methods allowing one to produce thick meshes of highly oriented nanofibers. In addition, this technique is not limited to collection of fibers along a single axis. Modifications to the basic setup allow collection of crossed fibers without stopping and repositioning the apparatus. The technique works for a range of fiber sizes. In this study, fiber diameters ranged from 100 nm to 1 micron. This allows a few fibers at a time to rapidly deposit in alternating directions creating an almost woven structure. These aligned nanofibers have the potential to improve the performance of energy storage and thermoelectric devices and hold great promise for directed cell growth applications.

  18. Characterization and monitoring of subsurface processes using parallel computing and electrical resistivity imaging

    SciTech Connect

    Johnson, Timothy C.; Truex, Michael J.; Wellman, Dawn M.; Marble, Justin

    2011-12-01

    This newsletter discusses recent advancements in subsurface resistivity characterization and monitoring capabilities. Resistivity monitoring data from the BC Cribs field desiccation treatability test are used as an example to demonstrate near-real-time 3D subsurface imaging capabilities. Electrical resistivity tomography (ERT) is a method of imaging the electrical resistivity distribution of the subsurface. An ERT data collection system consists of an array of electrodes, deployed on the ground surface or within boreholes, that are connected to a control unit which can access each electrode independently (Figure 1). A single measurement is collected by injecting current across a pair of current injection electrodes (source and sink) and measuring the resulting potential generated across a pair of potential measurement electrodes (positive and negative). An ERT data set is generated by collecting many such measurements using strategically selected current and potential electrode pairs. This data set is then processed using an inversion algorithm, which reconstructs an estimate (or image) of the electrical conductivity (i.e., the inverse of resistivity) distribution that gave rise to the measured data.
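
    The inversion step described above can be illustrated with a linearized sketch: given a sensitivity (Jacobian) matrix mapping a subsurface conductivity model to measured resistances, a Tikhonov-regularized least-squares solve recovers a model that explains the data. The matrix, data, and regularization weight below are synthetic stand-ins, not the parallel ERT code used at BC Cribs.

```python
import numpy as np

rng = np.random.default_rng(1)
n_meas, n_cells = 200, 100
J = rng.normal(size=(n_meas, n_cells))            # stand-in sensitivity matrix
m_true = np.zeros(n_cells)
m_true[40:60] = 1.0                               # a conductive anomaly
d = J @ m_true + 0.01 * rng.normal(size=n_meas)   # noisy "measurements"

alpha = 1.0                                       # regularization weight
# Minimize ||J m - d||^2 + alpha ||m||^2 via the normal equations.
m_est = np.linalg.solve(J.T @ J + alpha * np.eye(n_cells), J.T @ d)
print("recovered anomaly mean:", m_est[40:60].mean())
```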

  19. Ultrafast ultrasound and photoacoustic co-registered imaging system based on FPGA parallel processing

    NASA Astrophysics Data System (ADS)

    Alqasemi, Umar; Li, Hai; Yuan, Guangqian; Aguirre, Andres; Zhu, Quing

    2012-02-01

    Co-registered ultrasound and photoacoustic images provide complementary structural and functional information for cancer diagnosis and assessment of therapy response. At SPIE Photonics West 2011, we reported a system that acquires from 64 channels and displays up to 1 frame per second (fps) ultrasound pulse-echo images, 5 fps photoacoustic images, and 0.5 fps co-registered images. This year, we report an upgraded system that acquires from 128 channels and displays up to 15 fps co-registered ultrasound and photoacoustic images, limited by our laser pulse repetition rate. The system architecture is novel: it provides real-time co-registration of images, the ability to acquire the channel RF data for both modalities, and the flexibility to adjust every parameter involved in the imaging process for both modalities. The digital signal processor board has been upgraded to an FPGA-based PCIe board that collects the data from the acquisition modules and transfers them to the PC memory at a 2.5 GT/s rate through an x8 DDR PCIe bus running at a 100 MHz clock frequency. The modules' FPGA code has also been upgraded to form a beam line in 90 microseconds and to communicate with the PCIe board through ultrafast differential tracks. Furthermore, the printed circuit board (PCB) design was adjusted to provide a maximum of 80 dB signal-to-noise ratio at 60 dB gain, which is comparable to some commercial ultrasound machines. The real-time system allows capturing co-registered US/PAT images free of motion artifacts and also provides ultrafast dynamic information when a contrast agent is used. The system is built for clinical use to assist the diagnosis of ovarian cancer; however, the hardware is still in the testing and evaluation stage, and experimental and clinical results will be reported later.
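
    The per-line beam forming the FPGA performs is, in essence, delay-and-sum over the channel RF data. The sketch below shows that arithmetic for one receive scan line; the array geometry, sampling rate, and synthetic channel data are illustrative assumptions, not the system's actual parameters.

```python
import numpy as np

c = 1540.0          # speed of sound in tissue [m/s]
fs = 40e6           # sampling rate [Hz] (assumed)
n_ch, n_samp = 128, 2048
pitch = 0.3e-3      # element pitch [m] (assumed)
x_el = (np.arange(n_ch) - n_ch / 2 + 0.5) * pitch   # element positions

rng = np.random.default_rng(2)
rf = rng.normal(size=(n_ch, n_samp))   # stand-in channel RF data

depths = np.arange(1e-3, 40e-3, c / (2 * fs))
line = np.zeros(len(depths))
for i, z in enumerate(depths):
    # Travel time from transmit to focal point (0, z) and back to each
    # element; here a simple monostatic-style delay model.
    t = (z + np.sqrt(z**2 + x_el**2)) / c
    idx = np.round(t * fs).astype(int)
    valid = idx < n_samp                       # drop delays past the record
    line[i] = rf[valid, idx[valid]].sum()      # delay, then sum
print(line.shape)
```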

  20. Parallel Information Processing (Image Transmission Via Fiber Bundle and Multimode Fiber

    NASA Technical Reports Server (NTRS)

    Kukhtarev, Nicholai

    2003-01-01

    Growing demand for visual, user-friendly representation of information inspires the search for new methods of image transmission. The in-series (sequential) methods of information processing currently in use are inherently slow and are designed mainly for the transmission of one- or two-dimensional arrays of data. Conventional transmission of data by fibers requires many fibers with an array of laser diodes and photodetectors. In practice, fiber bundles are also used for transmission of images: an image is formed on the bundle's entrance surface and each fiber transmits its portion of the incident image to the exit surface. Since the fibers do not preserve phase, only a 2D intensity distribution can be transmitted in this way, and each single-mode fiber transmits only one pixel of an image. Multimode fibers may also be used, with each mode representing a different pixel element, but direct transmission of an image through a multimode fiber is hindered by mode scrambling and phase randomization. To overcome these obstacles, wavelength- and time-division multiplexing have been used, with each pixel transmitted on a separate wavelength or time interval. Phase-conjugate techniques have also been tested, but only in an impractical scheme in which the reconstructed image returns to the fiber input end. Another method, three-dimensional imaging over single-mode fibers using laser light of reduced spatial coherence, has also been demonstrated; the coherence encoding needed to transmit images by this method was realized with a grating interferometer or with the help of an acousto-optic deflector. We suggest a simple, practical holographic method of image transmission over a single multimode fiber or over a fiber bundle with coherent light, using filtering by holographic optical elements. Originally this method was successfully tested for a single multimode fiber. In this research we have modified the holographic method for transmission of laser-illuminated images over a commercially available fiber bundle (fiber endoscope, or

  1. Fear-potentiated startle processing in humans: Parallel fMRI and orbicularis EMG assessment during cue conditioning and extinction.

    PubMed

    Lindner, Katja; Neubert, Jörg; Pfannmöller, Jörg; Lotze, Martin; Hamm, Alfons O; Wendt, Julia

    2015-12-01

    Studying neural networks and behavioral indices such as potentiated startle responses during fear conditioning has a long tradition in both animal and human research. However, most of the studies in humans do not link startle potentiation and neural activity during fear acquisition and extinction. Therefore, we examined startle blink responses measured with electromyography (EMG) and brain activity measured with functional MRI simultaneously during differential conditioning. Furthermore, we combined these behavioral fear indices with brain network activity by analyzing the brain activity evoked by the startle probe stimulus presented during conditioned visual threat and safety cues as well as in the absence of visual stimulation. In line with previous research, we found a fear-induced potentiation of the startle blink responses when elicited during a conditioned threat stimulus and a rapid decline of amygdala activity after an initial differentiation of threat and safety cues in early acquisition trials. Increased activation during processing of threat cues was also found in the anterior insula, the anterior cingulate cortex (ACC), and the periaqueductal gray (PAG). More importantly, our results depict an increase of brain activity to probes presented during threatening in comparison to safety cues indicating an involvement of the anterior insula, the ACC, the thalamus, and the PAG in fear-potentiated startle processing during early extinction trials. Our study underlines that parallel assessment of fear-potentiated startle in fMRI paradigms can provide a helpful method to investigate common and distinct processing pathways in humans and animals and, thus, contributes to translational research.

  2. Non-CAR resists and advanced materials for Massively Parallel E-Beam Direct Write process integration

    NASA Astrophysics Data System (ADS)

    Pourteau, Marie-Line; Servin, Isabelle; Lepinay, Kévin; Essomba, Cyrille; Dal'Zotto, Bernard; Pradelles, Jonathan; Lattard, Ludovic; Brandt, Pieter; Wieland, Marco

    2016-03-01

    The emerging Massively Parallel-Electron Beam Direct Write (MP-EBDW) is an attractive high-resolution, high-throughput lithography technology. As previously shown, Chemically Amplified Resists (CARs) meet process/integration specifications in terms of dose-to-size, resolution, contrast, and energy latitude. However, they are still limited by their line width roughness. To overcome this issue, we tested an advanced non-CAR alternative and showed that it brings a substantial gain in sensitivity compared to CARs. We also implemented and assessed in-line post-lithographic treatments for roughness mitigation. To reduce outgassing, a top-coat layer is added to the total process stack. A new-generation top-coat was tested and showed improved printing performance compared to the previous product, especially avoiding dark erosion: SEM cross-sections showed a straight pattern profile. A spin-coatable charge dissipation layer based on conductive polyaniline was also tested for conductivity and lithographic performance, and compatibility experiments revealed that the underlying resist type has to be chosen carefully when using this product. Finally, the Process Of Reference (POR) trilayer stack defined for 5 kV multi-e-beam lithography was successfully etched, with well-opened and straight patterns and no lithography-etch bias.

  3. Bilingual Strategies from the Perspective of a Processing Model

    ERIC Educational Resources Information Center

    Hartsuiker, Robert J.

    2013-01-01

    Muysken argues for four general "strategies" that characterize language contact phenomena across several levels of description. These strategies are (A) maximize structural coherence of the first language (L1); (B) maximize structural coherence of the second language (L2); (C) match between L1 and L2 patterns where possible; and (D) use…

  4. The Strategic Planning Process and the Need for Grand Strategy

    DTIC Science & Technology

    2007-11-02

    Basil Liddell Hart defines the purpose of grand strategy as the means “to coordinate and direct all of the resources of the nation, or band of nations...”

  5. An extension of the extended parallel process model (EPPM) in television health news: the influence of health consciousness on individual message processing and acceptance.

    PubMed

    Hong, Hyehyun

    2011-06-01

    The purpose of this study is to examine the role of health consciousness in the processing of TV news that contains potential health threats and preventive recommendations. Based on the extended parallel process model (Witte, 1992), relationships among health consciousness, perceived severity, perceived susceptibility, perceived response efficacy, perceived self-efficacy, and message acceptance/rejection were hypothesized. Responses collected from 175 participants after viewing four TV health news stories were analyzed using bootstrapping analysis (Preacher & Hayes, 2008). Results confirmed three mediators (perceived severity, response efficacy, and self-efficacy) of the influence of health consciousness on message acceptance. A negative association between health consciousness and perceived susceptibility is discussed in relation to the characteristics of health-conscious individuals and optimistic bias regarding health risks.

  6. Natural language processing in an intelligent writing strategy tutoring system.

    PubMed

    McNamara, Danielle S; Crossley, Scott A; Roscoe, Rod

    2013-06-01

    The Writing Pal is an intelligent tutoring system that provides writing strategy training. A large part of its artificial intelligence resides in the natural language processing algorithms to assess essay quality and guide feedback to students. Because writing is often highly nuanced and subjective, the development of these algorithms must consider a broad array of linguistic, rhetorical, and contextual features. This study assesses the potential for computational indices to predict human ratings of essay quality. Past studies have demonstrated that linguistic indices related to lexical diversity, word frequency, and syntactic complexity are significant predictors of human judgments of essay quality but that indices of cohesion are not. The present study extends prior work by including a larger data sample and an expanded set of indices to assess new lexical, syntactic, cohesion, rhetorical, and reading ease indices. Three models were assessed. The model reported by McNamara, Crossley, and McCarthy (Written Communication 27:57-86, 2010) including three indices of lexical diversity, word frequency, and syntactic complexity accounted for only 6% of the variance in the larger data set. A regression model including the full set of indices examined in prior studies of writing predicted 38% of the variance in human scores of essay quality with 91% adjacent accuracy (i.e., within 1 point). A regression model that also included new indices related to rhetoric and cohesion predicted 44% of the variance with 94% adjacent accuracy. The new indices increased accuracy but, more importantly, afford the means to provide more meaningful feedback in the context of a writing tutoring system.
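
    The reported evaluation -- regressing human essay scores on linguistic indices and scoring agreement within 1 point -- is easy to make concrete. The sketch below uses synthetic data in place of the Writing Pal corpus and indices; only the evaluation logic (least-squares fit, R^2, adjacent accuracy) follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(3)
n_essays, n_indices = 300, 12
X = rng.normal(size=(n_essays, n_indices))        # stand-in linguistic indices
beta = rng.normal(size=n_indices)
human = np.clip(np.round(3.5 + 0.3 * (X @ beta)   # stand-in human 1-6 scores
                         + rng.normal(0, 0.8, n_essays)), 1, 6)

X1 = np.column_stack([np.ones(n_essays), X])      # add intercept column
coef, *_ = np.linalg.lstsq(X1, human, rcond=None) # least-squares fit
pred = np.clip(np.round(X1 @ coef), 1, 6)         # predicted scores

r2 = 1 - np.sum((human - X1 @ coef) ** 2) / np.sum((human - human.mean()) ** 2)
adjacent = np.mean(np.abs(pred - human) <= 1)     # within 1 point of human
print(f"R^2 = {r2:.2f}, adjacent accuracy = {adjacent:.0%}")
```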

  7. Post-processing strategies in image scanning microscopy.

    PubMed

    McGregor, J E; Mitchell, C A; Hartell, N A

    2015-10-15

    Image scanning microscopy (ISM) coupled with pixel reassignment offers a resolution improvement of √2 over standard widefield imaging. By scanning point-wise across the specimen and capturing an image of the fluorescent signal generated at each scan position, additional information about specimen structure is recorded and the highest accessible spatial frequency is doubled. Pixel reassignment can be achieved optically in real time or computationally a posteriori and is frequently combined with the use of a physical or digital pinhole to reject out-of-focus light. Here, we simulate an ISM dataset using a test image and apply standard and non-standard processing methods to address problems typically encountered in computational pixel reassignment and pinholing. We demonstrate that the predicted improvement in resolution is achieved by applying standard pixel reassignment to a simulated dataset and explore the effect of realistic displacements between the reference and true excitation positions. By identifying the position of the detected fluorescence maximum using localisation software and centring the digital pinhole on this co-ordinate before scaling around translated excitation positions, we can recover signal that would otherwise be degraded by the use of a pinhole aligned to an inaccurate excitation reference. This strategy is demonstrated using experimental data from a multiphoton ISM instrument. Finally, we investigate the effect that imaging through tissue has on the positions of excitation foci at depth and observe a global scaling with respect to the applied reference grid. Using simulated and experimental data we explore the impact of a globally scaled reference on the ISM image and, by pinholing around the detected maxima, recover the signal across the whole field of view.
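
    Computational pixel reassignment itself is a short operation: the scan image recorded by each detector pixel is shifted by half that pixel's offset from the optical axis, and the shifted images are summed. The sketch below applies integer shifts to a synthetic 4D dataset; real implementations interpolate sub-pixel shifts, and the sign convention depends on the geometry.

```python
import numpy as np

rng = np.random.default_rng(4)
ny, nx, dy, dx = 64, 64, 5, 5                  # scan grid, detector grid
data = rng.poisson(5.0, size=(ny, nx, dy, dx)).astype(float)

ism = np.zeros((ny, nx))
for j in range(dy):
    for i in range(dx):
        # Offset of this detector pixel from the central detector pixel.
        off_y, off_x = j - dy // 2, i - dx // 2
        # Reassign: shift that pixel's scan image by half the offset
        # (integer approximation; production code interpolates).
        shifted = np.roll(data[:, :, j, i],
                          shift=(-(off_y // 2), -(off_x // 2)),
                          axis=(0, 1))
        ism += shifted
print(ism.shape)
```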

  8. Combining self-affirmation with the extended parallel process model: the consequences for motivation to eat more fruit and vegetables.

    PubMed

    Napper, Lucy E; Harris, Peter R; Klein, William M P

    2014-01-01

    There is potential for fruitful integration of research using the Extended Parallel Process Model (EPPM) with research using Self-affirmation Theory. However, to date no studies have attempted to do this. This article reports an experiment that tests whether (a) the effects of a self-affirmation manipulation add to those of EPPM variables in predicting intentions to improve a health behavior and (b) self-affirmation moderates the relationship between EPPM variables and intentions. Participants (N = 80) were randomized to either a self-affirmation or control condition prior to receiving personally relevant health information about the risks of not eating at least five portions of fruit and vegetables per day. A hierarchical regression model revealed that efficacy, threat × efficacy, self-affirmation, and self-affirmation × efficacy all uniquely contributed to the prediction of intentions to eat at least five portions per day. Self-affirmed participants and those with higher efficacy reported greater motivation to change. Threat predicted intentions at low levels of efficacy, but not at high levels. Efficacy had a stronger relationship with intentions in the nonaffirmed condition than in the self-affirmed condition. The findings indicate that self-affirmation processes can moderate the impact of variables in the EPPM and also add to the variance explained. We argue that there is potential for integration of the two traditions of research, to the benefit of both.

  9. Serial and parallel processing in reading: investigating the effects of parafoveal orthographic information on nonisolated word recognition.

    PubMed

    Dare, Natasha; Shillcock, Richard

    2013-01-01

    We present a novel lexical decision task and three boundary paradigm eye-tracking experiments that clarify the picture of parallel processing in word recognition in context. First, we show that lexical decision is facilitated by associated letter information to the left and right of the word, with no apparent hemispheric specificity. Second, we show that parafoveal preview of a repeat of word n at word n + 1 facilitates reading of word n relative to a control condition with an unrelated word at word n + 1. Third, using a version of the boundary paradigm that allowed for a regressive eye movement, we show no parafoveal "postview" effect on reading word n of repeating word n at word n - 1. Fourth, we repeat the second experiment but compare the effects of parafoveal previews consisting of a repeated word n with a transposed central bigram (e.g., caot for coat) and a substituted central bigram (e.g., ceit for coat), showing the latter to have a deleterious effect on processing word n, thereby demonstrating that the parafoveal preview effect is at least orthographic and not purely visual.

  10. Developing Local Lifelong Guidance Strategies.

    ERIC Educational Resources Information Center

    Watts, A. G.; Hawthorn, Ruth; Hoffbrand, Jill; Jackson, Heather; Spurling, Andrea

    1997-01-01

    Outlines the background, rationale, methodology, and outcomes of developing local lifelong guidance strategies in four geographic areas. Analyzes the main components of the strategies developed and addresses a number of issues relating to the process of strategy development. Explores implications for parallel work in other localities. (RJM)

  11. Are all letters really processed equally and in parallel? Further evidence of a robust first letter advantage.

    PubMed

    Scaltritti, Michele; Balota, David A

    2013-10-01

    The present study examined the accuracy and response latency of letter processing as a function of position within a horizontal array. In a series of 4 experiments, target strings were briefly displayed (33 ms for Experiments 1 to 3, 83 ms for Experiment 4) and both forward and backward masked. Participants then made a two-alternative forced choice. The two alternative responses differed in just one element of the string, and the position of the mismatch was systematically manipulated. In Experiment 1, words of different lengths (from 3 to 6 letters) were presented in separate blocks. Across different lengths, there was a robust advantage in performance when the alternative response differed in the letter occurring at the first position, compared to when the difference occurred at any other position. Experiment 2 replicated this finding with the same materials used in Experiment 1, but with words of different lengths randomly intermixed within blocks. Experiment 3 provided evidence of the first-position advantage with legal nonwords and strings of consonants, but did not provide any first-position advantage for non-alphabetic symbols. The lack of a first-position advantage for symbols was replicated in Experiment 4, where target strings were displayed for a longer duration (83 ms). Taken together, these results suggest that the first-position advantage is a phenomenon that occurs specifically and selectively for letters, independent of lexical constraints. We argue that the results are consistent with models that assume a processing advantage for coding letters in the first position, and are inconsistent with the assumption, commonly held in visual word recognition models, that letters are processed equally and in parallel independent of letter position.

  12. 77 FR 34062 - Announcement of the U.S. Geological Survey Science Strategy Planning Feedback Process

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-08

    ...U.S. Geological Survey Announcement of the U.S. Geological Survey Science Strategy Planning Feedback... Geological Survey is creating 10-year strategies for each of its Mission Areas: Climate and Land Use Change... This process involves gathering input from the public on draft strategy documents. Feedback can...

  13. Photon counting imaging with an electron-bombarded CCD: Towards a parallel-processing photoelectronic time-to-amplitude converter

    SciTech Connect

    Hirvonen, Liisa M.; Jiggins, Stephen; Sergent, Nicolas; Zanda, Gianmarco; Suhling, Klaus

    2014-12-15

    We have used an electron-bombarded CCD for optical photon counting imaging. The photon event pulse height distribution was found to be linearly dependent on the gain voltage. We propose on this basis that a gain voltage sweep during exposure in an electron-bombarded sensor would allow photon arrival time determination with sub-frame exposure time resolution. This effectively uses an electron-bombarded sensor as a parallel-processing photoelectronic time-to-amplitude converter, or a two-dimensional photon counting streak camera. Several applications that require timing of photon arrival, including Fluorescence Lifetime Imaging Microscopy, may benefit from such an approach. A simulation of a voltage sweep performed with experimental data collected with different acceleration voltages validates the principle of this approach. Moreover, photon event centroiding was performed and a hybrid 50% Gaussian/Centre of Gravity + 50% Hyperbolic cosine centroiding algorithm was found to yield the lowest fixed pattern noise. Finally, the camera was mounted on a fluorescence microscope to image F-actin filaments stained with the fluorescent dye Alexa 488 in fixed cells.
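
    Photon-event centroiding of the kind compared in this work can be sketched with the plain centre-of-gravity estimator: find local maxima above a threshold and compute the intensity-weighted position of the surrounding 3x3 neighbourhood. (The paper's best-performing estimator mixed Gaussian/centre-of-gravity and hyperbolic-cosine fits; only the simplest variant is shown here, on a synthetic frame.)

```python
import numpy as np

rng = np.random.default_rng(5)
frame = rng.poisson(1.0, size=(128, 128)).astype(float)   # background
frame[40, 60] = 80.0                                       # a photon event
frame[41, 60], frame[40, 61] = 35.0, 30.0                  # its wings

threshold = 25.0
events = []
ys, xs = np.where(frame > threshold)
for y, x in zip(ys, xs):
    patch = frame[y - 1:y + 2, x - 1:x + 2]
    if patch.shape != (3, 3) or frame[y, x] < patch.max():
        continue  # skip frame edges and pixels that are not local maxima
    gy, gx = np.mgrid[-1:2, -1:2]          # offsets of the 3x3 neighbourhood
    w = patch.sum()
    events.append((y + (gy * patch).sum() / w,   # centre of gravity
                   x + (gx * patch).sum() / w))
print(events)
```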

  14. Photon counting imaging with an electron-bombarded CCD: Towards a parallel-processing photoelectronic time-to-amplitude converter

    NASA Astrophysics Data System (ADS)

    Hirvonen, Liisa M.; Jiggins, Stephen; Sergent, Nicolas; Zanda, Gianmarco; Suhling, Klaus

    2014-12-01

    We have used an electron-bombarded CCD for optical photon counting imaging. The photon event pulse height distribution was found to be linearly dependent on the gain voltage. We propose on this basis that a gain voltage sweep during exposure in an electron-bombarded sensor would allow photon arrival time determination with sub-frame exposure time resolution. This effectively uses an electron-bombarded sensor as a parallel-processing photoelectronic time-to-amplitude converter, or a two-dimensional photon counting streak camera. Several applications that require timing of photon arrival, including Fluorescence Lifetime Imaging Microscopy, may benefit from such an approach. A simulation of a voltage sweep performed with experimental data collected with different acceleration voltages validates the principle of this approach. Moreover, photon event centroiding was performed and a hybrid 50% Gaussian/Centre of Gravity + 50% Hyperbolic cosine centroiding algorithm was found to yield the lowest fixed pattern noise. Finally, the camera was mounted on a fluorescence microscope to image F-actin filaments stained with the fluorescent dye Alexa 488 in fixed cells.

  15. GPUDePiCt: A Parallel Implementation of a Clustering Algorithm for Computing Degenerate Primers on Graphics Processing Units.

    PubMed

    Cickovski, Trevor; Flor, Tiffany; Irving-Sachs, Galen; Novikov, Philip; Parda, James; Narasimhan, Giri

    2015-01-01

    In order to make multiple copies of a target sequence in the laboratory, the technique of Polymerase Chain Reaction (PCR) requires the design of "primers", which are short fragments of nucleotides complementary to the flanking regions of the target sequence. If the same primer is to amplify multiple closely related target sequences, then it is necessary to make the primer "degenerate", which allows it to hybridize to target sequences with a limited amount of variability that may have been caused by mutations. However, the PCR technique allows only a limited amount of degeneracy, and therefore the design of degenerate primers requires the identification of reasonably well-conserved regions in the input sequences. We take an existing algorithm for designing degenerate primers that is based on clustering and parallelize it in a web-accessible software package GPUDePiCt, using a shared memory model and the computing power of Graphics Processing Units (GPUs). We test our implementation on large sets of aligned sequences from the human genome and show a multi-fold speedup for clustering using our hybrid GPU/CPU implementation over a pure CPU approach for these sequences, which consist of more than 7,500 nucleotides. We also demonstrate that this speedup is consistent over larger numbers and longer lengths of aligned sequences.
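
    The quantity the clustering controls -- primer degeneracy -- has a simple definition: the product, over alignment positions, of the number of distinct bases observed at each position. The sketch below computes it for a toy alignment; the degeneracy cap and the clustering itself (the paper's contribution) are omitted.

```python
def degeneracy(aligned_seqs):
    """Product over positions of the number of distinct bases seen there."""
    d = 1
    for col in zip(*aligned_seqs):
        d *= len(set(col) - {"-"})   # alignment gaps add no degeneracy
    return d

seqs = ["ACGTAC",
        "ACGCAC",
        "ATGTAC"]
print(degeneracy(seqs))  # 1 * 2 * 1 * 2 * 1 * 1 = 4
```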

  16. Authentic Leadership Strategies in Support of Mentoring Processes

    ERIC Educational Resources Information Center

    Shapira-Lishchinsky, Orly; Levy-Gazenfrantz, Tania

    2015-01-01

    The aim of the study was to determine whether teacher-mentees perceive their mentors as authentic leaders and if so, how these perceptions affected their leadership strategies. The sample included 60 Israeli teacher-mentees from different school levels and different sectors, who volunteered to participate in the study. Semi-structured interviews…

  17. Communication Strategies and Psychological Processes Underlying Lexical Simplification.

    ERIC Educational Resources Information Center

    Kumaravadivelu, B.

    1988-01-01

    Analyzes interlanguage written discourse produced by advanced Tamil-speaking learners of English as a second language. Eight communication strategies are discussed, including: 1) extended use of lexical items; 2) lexical paraphrase; 3) word coinage; 4) native language (L1) equivalence; 5) literal translation of L1 idiom; 6) L1 mode of emphasis; 7)…

  18. Strategic Processing in Grammar Learning: Do Multilinguals Use More Strategies?

    ERIC Educational Resources Information Center

    Kemp, Charlotte

    2007-01-01

    Multilinguals appear to become better at learning additional languages the more languages they know, and in particular, to be faster at learning grammar. This study investigates the use of grammar learning strategies in 144 participants who knew between 2 and 12 languages each, using a language background questionnaire, a set of 40 grammar…

  19. Hierarchical versus parallel processing in tactile object recognition: a behavioural-neuroanatomical study of apperceptive tactile agnosia.

    PubMed

    Bohlhalter, S; Fretz, C; Weder, B

    2002-11-01

    Neuroanatomical findings suggest a degradation of serial information processing within the postcentral gyrus. In one case, tactile agnosia was almost complete owing to additionally impaired perception of the macrogeometrical properties of objects, which correlated with extension of the lesion to the posterior parietal cortex. Importantly, the findings indicate traces of two distributed networks for tactile information processing and the associated parallel processing of complementary micro- and macrogeometrical information within the postcentral gyrus and posterior parietal lobe.

  20. Parallel ptychographic reconstruction

    PubMed Central

    Nashed, Youssef S. G.; Vine, David J.; Peterka, Tom; Deng, Junjing; Ross, Rob; Jacobsen, Chris

    2014-01-01

    Ptychography is an imaging method whereby a coherent beam is scanned across an object and an image is obtained by iterative phasing of the set of diffraction patterns. It can be used to image extended objects at a resolution limited by the scattering strength of the object and the detector geometry, rather than at an optics-imposed limit. As technical advances allow larger fields to be imaged, computational challenges arise for reconstructing the correspondingly larger data volumes, yet at the same time there is also a need to deliver reconstructed images immediately so that one can evaluate the next steps to take in an experiment. Here we present a parallel method for real-time ptychographic phase retrieval. It uses a hybrid parallel strategy to divide the computation between multiple graphics processing units (GPUs) and then employs novel techniques to merge sub-datasets into a single complex phase and amplitude image. Results are shown on a simulated specimen and a real dataset from an X-ray experiment conducted at a synchrotron light source. PMID:25607174
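
    The split/merge strategy can be sketched at a high level: partition the scan positions among workers (standing in for GPUs), let each produce a partial image plus a coverage map from its subset of diffraction patterns, and merge by coverage-weighted averaging. reconstruct_subset() below is a stand-in for an actual ptychographic phase-retrieval engine, and the process pool stands in for the paper's multi-GPU scheme.

```python
import numpy as np
from multiprocessing import Pool

N = 256  # object array size (illustrative)

def reconstruct_subset(positions):
    """Stand-in partial reconstruction from one worker's scan positions."""
    img = np.zeros((N, N), complex)
    cov = np.zeros((N, N))
    for (y, x) in positions:
        img[y:y + 32, x:x + 32] += 1.0 + 0j   # placeholder object update
        cov[y:y + 32, x:x + 32] += 1.0        # coverage of this probe spot
    return img, cov

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    scan = [(int(rng.integers(0, N - 32)), int(rng.integers(0, N - 32)))
            for _ in range(400)]
    parts = [scan[i::4] for i in range(4)]    # 4 workers, like 4 GPUs
    with Pool(4) as pool:
        results = pool.map(reconstruct_subset, parts)

    # Merge the partials by coverage-weighted averaging.
    img = sum(r[0] for r in results)
    cov = sum(r[1] for r in results)
    merged = np.where(cov > 0, img / np.maximum(cov, 1), 0)
    print(merged.shape)
```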