Sample records for parallel processing perspective

  1. Engineering Play: Exploring Associations with Executive Function, Mathematical Ability, and Spatial Ability in Preschool

    ERIC Educational Resources Information Center

    Gold, Zachary Samuel

    2017-01-01

    Engineering play is a new perspective on preschool education that views constructive play as an engineering design process that parallels the way engineers think and work when they develop engineered solutions to human problems (Bairaktarova, Evangelou, Bagiati, & Brophy, 2011). Early research from this perspective supports its use in framing…

  2. Adult Children of Dysfunctional Families: Treatment from a Disenfranchised Grief Perspective.

    ERIC Educational Resources Information Center

    Zupanick, Corinne E.

    1994-01-01

    Generalizes concept of disenfranchised grief to understanding of recovery process for adult children of dysfunctional families. Describes recovery process of this population as parallel to grief process. Identifies two layers of unrecognized loss: loss of one's childhood and loss of one's fantasized and idealized parent. Suggests specific…

  3. Parallel Mechanisms of Sentence Processing: Assigning Roles to Constituents of Sentences.

    ERIC Educational Resources Information Center

    McClelland, James L.; Kawamoto, Alan H.

    This paper describes and illustrates a simulation model for the processing of grammatical elements in a sentence, focusing on one aspect of sentence comprehension: the assignment of the constituent elements of a sentence to the correct thematic case roles. The model addresses questions about sentence processing from a perspective very different…

  4. A Developmental Perspective on Peer Rejection, Deviant Peer Affiliation, and Conduct Problems Among Youth.

    PubMed

    Chen, Diane; Drabick, Deborah A G; Burgers, Darcy E

    2015-12-01

    Peer rejection and deviant peer affiliation are linked consistently to the development and maintenance of conduct problems. Two proposed models may account for longitudinal relations among these peer processes and conduct problems: the (a) sequential mediation model, in which peer rejection in childhood and deviant peer affiliation in adolescence mediate the link between early externalizing behaviors and more serious adolescent conduct problems; and (b) parallel process model, in which peer rejection and deviant peer affiliation are considered independent processes that operate simultaneously to increment risk for conduct problems. In this review, we evaluate theoretical models and evidence for associations among conduct problems and (a) peer rejection and (b) deviant peer affiliation. We then consider support for the sequential mediation and parallel process models. Next, we propose an integrated model incorporating both the sequential mediation and parallel process models. Future research directions and implications for prevention and intervention efforts are discussed.

  5. A Developmental Perspective on Peer Rejection, Deviant Peer Affiliation, and Conduct Problems among Youth

    PubMed Central

    Chen, Diane; Drabick, Deborah A. G.; Burgers, Darcy E.

    2015-01-01

    Peer rejection and deviant peer affiliation are linked consistently to the development and maintenance of conduct problems. Two proposed models may account for longitudinal relations among these peer processes and conduct problems: the (a) sequential mediation model, in which peer rejection in childhood and deviant peer affiliation in adolescence mediate the link between early externalizing behaviors and more serious adolescent conduct problems; and (b) parallel process model, in which peer rejection and deviant peer affiliation are considered independent processes that operate simultaneously to increment risk for conduct problems. In this review, we evaluate theoretical models and evidence for associations among conduct problems and (a) peer rejection and (b) deviant peer affiliation. We then consider support for the sequential mediation and parallel process models. Next, we propose an integrated model incorporating both the sequential mediation and parallel process models. Future research directions and implications for prevention and intervention efforts are discussed. PMID:25410430

  6. Generalized parallel-perspective stereo mosaics from airborne video.

    PubMed

    Zhu, Zhigang; Hanson, Allen R; Riseman, Edward M

    2004-02-01

    In this paper, we present a new method for automatically and efficiently generating stereoscopic mosaics by seamless registration of images collected by a video camera mounted on an airborne platform. Using a parallel-perspective representation, a pair of geometrically registered stereo mosaics can be precisely constructed under quite general motion. A novel parallel ray interpolation for stereo mosaicing (PRISM) approach is proposed to make stereo mosaics seamless in the presence of obvious motion parallax and for rather arbitrary scenes. Parallel-perspective stereo mosaics generated with the PRISM method have better depth resolution than perspective stereo due to the adaptive baseline geometry. Moreover, unlike previous results showing that parallel-perspective stereo has a constant depth error, we conclude that the depth estimation error of stereo mosaics is in fact a linear function of the absolute depths of a scene. Experimental results on long video sequences are given.
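
    A one-line gloss (ours, not from the paper) of why an adaptive baseline linearizes the error: in conventional fixed-baseline perspective stereo, a disparity uncertainty δd propagates to depth as a quadratic function of depth Z for fixed baseline b and focal length f; in parallel-perspective mosaics the disparity is held fixed and the effective baseline grows with depth, so taking b(Z) = c·Z gives

      \Delta Z \;\approx\; \frac{Z^{2}}{f\,b}\,\delta d
      \quad\longrightarrow\quad
      \Delta Z \;\approx\; \frac{Z}{f\,c}\,\delta d
      \qquad \text{when } b(Z) = c\,Z,

    i.e., an error linear in absolute depth, consistent with the conclusion quoted above.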

  7. An Analysis of the Role of ATC in the AILS Concept

    NASA Technical Reports Server (NTRS)

    Waller, Marvin C.; Doyle, Thomas M.; McGee, Frank G.

    2000-01-01

    Airborne Information for Lateral Spacing (AILS) is a concept for making approaches to closely spaced parallel runways in instrument meteorological conditions (IMC). Under the concept, each equipped aircraft will assume responsibility for accurately managing its flight path along the approach course and maintaining separation from aircraft on the parallel approach. This document presents the results of an analysis of the AILS concept from an Air Traffic Control (ATC) perspective. The process has been examined in a step-by-step manner to determine the ATC system support necessary to safely conduct closely spaced parallel approaches using the AILS concept. The analysis identified a number of issues related to integrating the process into the airspace system and proposed operating procedures.

  8. Parallel image reconstruction for 3D positron emission tomography from incomplete 2D projection data

    NASA Astrophysics Data System (ADS)

    Guerrero, Thomas M.; Ricci, Anthony R.; Dahlbom, Magnus; Cherry, Simon R.; Hoffman, Edward T.

    1993-07-01

    The problem of excessive computational time in 3D Positron Emission Tomography (3D PET) reconstruction is defined, and we present an approach for solving this problem through the construction of an inexpensive parallel processing system and the adoption of the FAVOR algorithm. Currently, the 3D reconstruction of the 610 images of a total-body procedure would require 80 hours, and the 3D reconstruction of the 620 images of a dynamic study would require 110 hours. An inexpensive parallel processing system for 3D PET reconstruction is constructed by integrating board-level products from multiple vendors. The system achieves its computational performance through the use of 6U VME four-i860 processor boards; processor boards from five manufacturers are discussed from our perspective. The new 3D PET reconstruction algorithm FAVOR (FAst VOlume Reconstructor), which promises a substantial speed improvement, is adopted. Preliminary results from parallelizing FAVOR are utilized in formulating architectural improvements for this problem. In summary, we are addressing the problem of excessive computational time in 3D PET image reconstruction through the construction of an inexpensive parallel processing system and the parallelization of a 3D reconstruction algorithm that uses the incomplete data set produced by current PET systems.
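
    A quick plausibility check on the quoted figures (our arithmetic, not the paper's):

      t_{\text{total body}} \approx \frac{80 \times 3600\ \text{s}}{610\ \text{images}} \approx 472\ \text{s/image},
      \qquad
      t_{\text{dynamic}} \approx \frac{110 \times 3600\ \text{s}}{620\ \text{images}} \approx 639\ \text{s/image}.

    With P processors running at parallel efficiency η, the wall-clock time scales roughly as T ≈ T_serial/(η·P); for example, five four-i860 boards (P = 20) at an assumed η ≈ 0.8 would bring the 80-hour study down to about 5 hours. The 20-processor configuration and the efficiency figure are illustrative assumptions, not numbers from the record.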

  9. The Star Wars Scroll Illusion.

    PubMed

    Shapiro, Arthur G

    2015-10-01

    The Star Wars Scroll Illusion is a dynamic version of the Leaning Tower Illusion. When two copies of a Star-Wars-like scrolling text are placed side by side (with separate vanishing points), the two scrolls appear to head in different directions even though they are physically parallel in the picture plane. Variations of the illusion are shown with one vanishing point, as well as from an inverted perspective where the scrolls appear to originate in the distance. The demos highlight the conflict between the physical lines in the picture plane and perspective interpretation: With two perspective points, the scrolling texts are parallel to each other in the picture plane but not in perspective interpretation; with one perspective point, the texts are not parallel to each other in the picture plane but are parallel to each other in perspective interpretation. The size of the effect is linearly related to the angle of rotation of the scrolls into the third dimension; the Scroll Illusion is stronger than the Leaning Tower Illusion for rotation angles between 35° and 90°. There is no effect of motion per se on the strength of the illusion.

  10. A Cognitive-Behavioral Perspective on Intelligence.

    ERIC Educational Resources Information Center

    Meichenbaum, Donald

    1980-01-01

    The first purpose of this editorial is to indicate a parallel trend in two disparate areas of research: information processing and cognitive-behavior modification (CBM). The second purpose is to highlight the role that affect plays in intellectual functioning, noting implications following the assessment of intelligence. (Author/RD)

  11. Investigating Learning with an Interactive Tutorial: A Mixed-Methods Strategy

    ERIC Educational Resources Information Center

    de Villiers, M. R.; Becker, Daphne

    2017-01-01

    From the perspective of parallel mixed-methods research, this paper describes interactivity research that employed usability-testing technology to analyse cognitive learning processes; personal learning styles and times; and errors-and-recovery of learners using an interactive e-learning tutorial called "Relations." "Relations"…

  12. Performance Modeling and Measurement of Parallelized Code for Distributed Shared Memory Multiprocessors

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry

    1998-01-01

    This paper presents a model to evaluate the performance and overhead of parallelizing sequential code using compiler directives for multiprocessing on distributed shared memory (DSM) systems. With the increasing popularity of shared address space architectures, it is essential to understand their performance impact on programs that benefit from shared memory multiprocessing. We present a simple model to characterize the performance of programs that are parallelized using compiler directives for shared memory multiprocessing. We parallelized the sequential implementation of the NAS benchmarks using native Fortran77 compiler directives for an Origin2000, which is a DSM system based on a cache-coherent Non-Uniform Memory Access (ccNUMA) architecture. We report measurement-based performance of these parallelized benchmarks from four perspectives: efficacy of the parallelization process; scalability; parallelization overhead; and comparison with hand-parallelized and -optimized versions of the same benchmarks. Our results indicate that sequential programs can conveniently be parallelized for DSM systems using compiler directives, but realizing performance gains as predicted by the performance model depends primarily on minimizing architecture-specific data locality overhead.
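
    The record's "compiler directives" are the ancestors of today's OpenMP pragmas; the paper itself used native SGI Fortran77 directives on the Origin2000. A minimal C sketch of the same idea (ours, not the paper's code):

      /* Directive-based shared-memory parallelization, in the spirit of the
       * paper's Fortran77 multiprocessing directives (modern OpenMP analogue).
       * Build: cc -fopenmp dot.c -o dot */
      #include <stdio.h>
      #include <stdlib.h>
      #include <omp.h>

      int main(void) {
          const long n = 10000000;
          double *a = malloc(n * sizeof *a), *b = malloc(n * sizeof *b);
          double dot = 0.0;
          for (long i = 0; i < n; i++) { a[i] = 1.0; b[i] = 2.0; }

          /* One directive turns the sequential loop into a parallel one; on a
           * ccNUMA machine like the Origin2000, data placement across nodes
           * then decides how much of the ideal speedup is actually realized. */
          #pragma omp parallel for reduction(+:dot)
          for (long i = 0; i < n; i++)
              dot += a[i] * b[i];

          printf("dot = %.1f using up to %d threads\n", dot, omp_get_max_threads());
          free(a); free(b);
          return 0;
      }

    The sketch mirrors the paper's conclusion: inserting the directive is the easy part; minimizing architecture-specific data-locality overhead is what determines the realized gain.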

  13. The CAN Microcluster: Parallel Processing over the Controller Area Network

    ERIC Educational Resources Information Center

    Kuban, Paul A.; Ragade, Rammohan K.

    2005-01-01

    Most electrical engineering and computer science undergraduate programs include at least one course on microcontrollers and assembly language programming. Some departments offer legacy courses in C programming, but few include C programming from an embedded systems perspective, where it is still regularly used. Distributed computing and parallel…

  14. The Star Wars Scroll Illusion

    PubMed Central

    2015-01-01

    The Star Wars Scroll Illusion is a dynamic version of the Leaning Tower Illusion. When two copies of a Star-Wars-like scrolling text are placed side by side (with separate vanishing points), the two scrolls appear to head in different directions even though they are physically parallel in the picture plane. Variations of the illusion are shown with one vanishing point, as well as from an inverted perspective where the scrolls appear to originate in the distance. The demos highlight the conflict between the physical lines in the picture plane and perspective interpretation: With two perspective points, the scrolling texts are parallel to each other in the picture plane but not in perspective interpretation; with one perspective point, the texts are not parallel to each other in the picture plane but are parallel to each other in perspective interpretation. The size of the effect is linearly related to the angle of rotation of the scrolls into the third dimension; the Scroll Illusion is stronger than the Leaning Tower Illusion for rotation angles between 35° and 90°. There is no effect of motion per se on the strength of the illusion. PMID:27648216

  15. Oxytocin: parallel processing in the social brain?

    PubMed

    Dölen, Gül

    2015-06-01

    Early studies attempting to disentangle the network complexity of the brain exploited the accessibility of sensory receptive fields to reveal circuits made up of synapses connected both in series and in parallel. More recently, extension of this organisational principle beyond the sensory systems has been made possible by the advent of modern molecular, viral and optogenetic approaches. Here, evidence supporting parallel processing of social behaviours mediated by oxytocin is reviewed. Understanding oxytocinergic signalling from this perspective has significant implications for the design of oxytocin-based therapeutic interventions aimed at disorders such as autism, where disrupted social function is a core clinical feature. Moreover, identification of opportunities for novel technology development will require a better appreciation of the complexity of the circuit-level organisation of the social brain. © 2015 The Authors. Journal of Neuroendocrinology published by John Wiley & Sons Ltd on behalf of British Society for Neuroendocrinology.

  16. The Holistic Processing Account of Visual Expertise in Medical Image Perception: A Review

    PubMed Central

    Sheridan, Heather; Reingold, Eyal M.

    2017-01-01

    In the field of medical image perception, the holistic processing perspective contends that experts can rapidly extract global information about the image, which can be used to guide their subsequent search of the image (Swensson, 1980; Nodine and Kundel, 1987; Kundel et al., 2007). In this review, we discuss the empirical evidence supporting three different predictions that can be derived from the holistic processing perspective: Expertise in medical image perception is domain-specific, experts use parafoveal and/or peripheral vision to process large regions of the image in parallel, and experts benefit from a rapid initial glimpse of an image. In addition, we discuss a pivotal recent study (Litchfield and Donovan, 2016) that seems to contradict the assumption that experts benefit from a rapid initial glimpse of the image. To reconcile this finding with the existing literature, we suggest that global processing may serve multiple functions that extend beyond the initial glimpse of the image. Finally, we discuss future research directions, and we highlight the connections between the holistic processing account and similar theoretical perspectives and findings from other domains of visual expertise. PMID:29033865

  17. The Holistic Processing Account of Visual Expertise in Medical Image Perception: A Review.

    PubMed

    Sheridan, Heather; Reingold, Eyal M

    2017-01-01

    In the field of medical image perception, the holistic processing perspective contends that experts can rapidly extract global information about the image, which can be used to guide their subsequent search of the image (Swensson, 1980; Nodine and Kundel, 1987; Kundel et al., 2007). In this review, we discuss the empirical evidence supporting three different predictions that can be derived from the holistic processing perspective: Expertise in medical image perception is domain-specific, experts use parafoveal and/or peripheral vision to process large regions of the image in parallel, and experts benefit from a rapid initial glimpse of an image. In addition, we discuss a pivotal recent study (Litchfield and Donovan, 2016) that seems to contradict the assumption that experts benefit from a rapid initial glimpse of the image. To reconcile this finding with the existing literature, we suggest that global processing may serve multiple functions that extend beyond the initial glimpse of the image. Finally, we discuss future research directions, and we highlight the connections between the holistic processing account and similar theoretical perspectives and findings from other domains of visual expertise.

  18. GPU Based Software Correlators - Perspectives for VLBI2010

    NASA Technical Reports Server (NTRS)

    Hobiger, Thomas; Kimura, Moritaka; Takefuji, Kazuhiro; Oyama, Tomoaki; Koyama, Yasuhiro; Kondo, Tetsuro; Gotoh, Tadahiro; Amagai, Jun

    2010-01-01

    Caused by historical separation and driven by the requirements of the PC gaming industry, Graphics Processing Units (GPUs) have evolved into massively parallel processing systems that have entered the area of non-graphics-related applications. Although a single processing core on the GPU is much slower and provides less functionality than its counterpart on the CPU, the huge number of these small processing entities outperforms the classical processors when the application can be parallelized. Thus, in recent years various radio astronomical projects have started to make use of this technology, either to realize the correlator on this platform or to establish the post-processing pipeline with GPUs. Therefore, the feasibility of GPUs as a choice for a VLBI correlator is being investigated, including pros and cons of this technology. Additionally, a GPU based software correlator will be reviewed with respect to energy consumption/GFlop/sec and cost/GFlop/sec.
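
    To make concrete why correlation parallelizes so well, here is a toy lag-domain (XF-style) cross-correlation kernel in C; every lag is an independent dot product, so on a GPU each lag, or block of lags, would map to its own group of threads. Sizes, the 7-sample delay, and all names are our illustrative choices, not the paper's:

      /* Toy XF-style correlator: "station 2" is "station 1" delayed by 7
       * samples; the correlation peak recovers the delay.
       * Build: cc -fopenmp xcorr.c -o xcorr */
      #include <stdio.h>
      #include <stdlib.h>

      #define NSAMP 200000
      #define NLAG  64

      int main(void) {
          float *x = malloc(NSAMP * sizeof *x);
          float *y = malloc(NSAMP * sizeof *y);
          float r[NLAG];
          srand(1);
          for (int i = 0; i < NSAMP; i++) {
              x[i] = rand() / (float)RAND_MAX - 0.5f;      /* station 1: noise */
              y[i] = (i >= 7) ? x[i - 7] : 0.0f;           /* station 2: delayed copy */
          }
          /* Each lag is independent: embarrassingly parallel. */
          #pragma omp parallel for
          for (int lag = 0; lag < NLAG; lag++) {
              float acc = 0.0f;
              for (int i = 0; i + lag < NSAMP; i++)
                  acc += x[i] * y[i + lag];
              r[lag] = acc;
          }
          int best = 0;
          for (int lag = 1; lag < NLAG; lag++)
              if (r[lag] > r[best]) best = lag;
          printf("correlation peak at lag %d (expected 7)\n", best);
          free(x); free(y);
          return 0;
      }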

  19. Topical perspective on massive threading and parallelism.

    PubMed

    Farber, Robert M

    2011-09-01

    Unquestionably computer architectures have undergone a recent and noteworthy paradigm shift that now delivers multi- and many-core systems with tens to many thousands of concurrent hardware processing elements per workstation or supercomputer node. GPGPU (General Purpose Graphics Processor Unit) technology in particular has attracted significant attention as new software development capabilities, namely CUDA (Compute Unified Device Architecture) and OpenCL™, have made it possible for students as well as small and large research organizations to achieve excellent speedup for many applications over more conventional computing architectures. The current scientific literature reflects this shift with numerous examples of GPGPU applications that have achieved one, two, and in some special cases, three orders of magnitude increased computational performance through the use of massive threading to exploit parallelism. Multi-core architectures are also evolving quickly to exploit both massive threading and massive parallelism, such as the 1.3-million-thread Blue Waters supercomputer. The challenge confronting scientists in planning future experimental and theoretical research efforts--be they individual efforts with one computer or collaborative efforts proposing to use the largest supercomputers in the world--is how to capitalize on these new massively threaded computational architectures, especially as not all computational problems will scale to massive parallelism. In particular, the costs associated with restructuring software (and potentially redesigning algorithms) to exploit the parallelism of these multi- and many-threaded machines must be considered along with application scalability and lifespan. This perspective is an overview of the current state of threading and parallelism with some insight into the future. Published by Elsevier Inc.
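
    The caution that "not all computational problems will scale to massive parallelism" is classically quantified by Amdahl's law (our addition for context, not part of the record): with parallel fraction p of the work and N processing elements,

      S(N) \;=\; \frac{1}{(1-p) + p/N},
      \qquad
      \lim_{N \to \infty} S(N) \;=\; \frac{1}{1-p},

    so even a 99% parallel code (p = 0.99) is capped at a 100x speedup no matter how many of Blue Waters' 1.3 million threads it occupies.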

  20. Implementing Cycles of Assess, Plan, Do, Review: A Literature Review of Practitioner Perspectives

    ERIC Educational Resources Information Center

    Greenwood, Jo; Kelly, Catherine

    2017-01-01

    This article uses a literature review process to explore current literature on Response to Intervention (RtI), an approach to the identification of and provision for students with special educational needs introduced in the USA by the Individuals with Disabilities Education Improvement Act of 2004. Parallels are made between RtI and the graduated…

  21. Big-BOE: Fusing Spanish Official Gazette with Big Data Technology.

    PubMed

    Basanta-Val, Pablo; Sánchez-Fernández, Luis

    2018-06-01

    The proliferation of new data sources, stemming from the adoption of open-data schemes, in combination with an increasing computing capacity, has led to a new type of analytics that processes Internet of Things data with low-cost engines to speed up data processing using parallel computing. In this context, the article presents an initiative, called BIG-Boletín Oficial del Estado (BOE), designed to process the Spanish official government gazette (BOE) with state-of-the-art processing engines, to reduce computation time and to offer additional speed-up for big data analysts. The goal of including a big data infrastructure is to be able to process different BOE documents in parallel with specific analytics, to search for several issues in different documents. The application infrastructure processing engine is described from an architectural perspective and from a performance perspective, showing evidence of how this type of infrastructure improves the performance of different types of simple analytics as several machines cooperate.

  22. MapReduce Based Parallel Bayesian Network for Manufacturing Quality Control

    NASA Astrophysics Data System (ADS)

    Zheng, Mao-Kuan; Ming, Xin-Guo; Zhang, Xian-Yu; Li, Guo-Ming

    2017-09-01

    The increasing complexity of industrial products and manufacturing processes has challenged conventional statistics-based quality management approaches in the circumstances of dynamic production. A Bayesian network and big data analytics integrated approach for manufacturing process quality analysis and control is proposed. Based on the Hadoop distributed architecture and the MapReduce parallel computing model, the big-volume, high-variety quality-related data generated during the manufacturing process can be handled. Artificial intelligence algorithms, including Bayesian network learning, classification and reasoning, are embedded into the Reduce process. Relying on the ability of the Bayesian network to deal with dynamic and uncertain problems and on the parallel computing power of MapReduce, Bayesian networks of the factors affecting quality are built from prior probability distributions and modified with posterior probability distributions. A case study on hull segment manufacturing precision management for ship and offshore platform building shows that computing speed accelerates almost in direct proportion to the number of computing nodes. It is also shown that the proposed model is feasible for locating and reasoning about root causes, forecasting manufacturing outcomes, and intelligent decision-making for precision problem solving. The integration of big data analytics and the Bayesian network method offers a whole new perspective on manufacturing quality control.
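
    At its core, Bayesian-network parameter learning of the kind embedded in the Reduce step reduces to counting co-occurrences, which is exactly the operation MapReduce distributes over data blocks. A single-node stand-in in C (the synthetic records, probabilities, and names are our illustration, not the paper's code):

      /* Estimating one conditional-probability-table entry, P(defect | factor),
       * by parallel counting. Build: cc -fopenmp cpt.c -o cpt */
      #include <stdio.h>

      int main(void) {
          const long n = 5000000;
          long n0 = 0, d0 = 0, n1 = 0, d1 = 0;  /* totals and defect counts per factor level */

          #pragma omp parallel for reduction(+:n0,d0,n1,d1)
          for (long i = 0; i < n; i++) {
              /* deterministic pseudo-random "record": one factor bit, one defect draw */
              unsigned v = (unsigned)i * 2654435761u;
              int factor = (v >> 8) & 1;
              int defect = ((v >> 16) & 1023u) < (factor ? 300u : 100u);
              if (factor) { n1++; d1 += defect; } else { n0++; d0 += defect; }
          }
          /* prior counts would be added here to blend prior and posterior estimates */
          printf("P(defect|factor=0) ~ %.3f, P(defect|factor=1) ~ %.3f\n",
                 (double)d0 / n0, (double)d1 / n1);
          return 0;
      }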

  23. Darwinian perspectives on the evolution of human languages.

    PubMed

    Pagel, Mark

    2017-02-01

    Human languages evolve by a process of descent with modification in which parent languages give rise to daughter languages over time and in a manner that mimics the evolution of biological species. Descent with modification is just one of many parallels between biological and linguistic evolution that, taken together, offer up a Darwinian perspective on how languages evolve. Combined with statistical methods borrowed from evolutionary biology, this Darwinian perspective has brought new opportunities to the study of the evolution of human languages. These include the statistical inference of phylogenetic trees of languages, the study of how linguistic traits evolve over thousands of years of language change, the reconstruction of ancestral or proto-languages, and using language change to date historical events.

  24. Perspectives of Students on Acceptance of Tablets and Self-Directed Learning with Technology

    ERIC Educational Resources Information Center

    Gokcearslan, Sahin

    2017-01-01

    Recent mobile learning technologies offer the opportunity for students to take charge of the learning process both inside and outside the classroom. One of these tools is the tablet PC (hereafter "tablet"). In parallel with increased access to e-content, the role of tablets in learning has recently begun to be examined. This study aims…

  25. A Grounded Research Perspective for Motivating College Students' Self-Regulated Learning Behaviors: Preparing and Gaining the Cooperation, Commitment of Teachers.

    ERIC Educational Resources Information Center

    Talbot, Gilles L.

    This paper suggests that the processes one would have college teachers use to motivate students closely parallel those that should be used to gain the cooperation, commitment, and preparation of teachers for this task. It discusses the "learning orientation" versus "grading orientation" of students, along with "class-side manners" that college…

  26. CFD in design - A government perspective

    NASA Technical Reports Server (NTRS)

    Kutler, Paul; Gross, Anthony R.

    1989-01-01

    Some of the research programs involving the use of CFD in the aerodynamic design process at government laboratories around the United States are presented. Technology transfer issues and future directions in the discipline of CFD are addressed. The major challenges in the aerosciences, as well as in other disciplines that will require high-performance computing resources such as massively parallel computers, are examined.

  27. Accelerated Adaptive MGS Phase Retrieval

    NASA Technical Reports Server (NTRS)

    Lam, Raymond K.; Ohara, Catherine M.; Green, Joseph J.; Bikkannavar, Siddarayappa A.; Basinger, Scott A.; Redding, David C.; Shi, Fang

    2011-01-01

    The Modified Gerchberg-Saxton (MGS) algorithm is an image-based wavefront-sensing method that can turn any science instrument focal plane into a wavefront sensor. MGS characterizes optical systems by estimating the wavefront errors in the exit pupil using only intensity images of a star or other point source of light. This innovative implementation of MGS significantly accelerates the MGS phase retrieval algorithm by using stream-processing hardware on conventional graphics cards. Stream processing is a relatively new, yet powerful, paradigm that allows parallel processing of certain applications that apply single instructions to multiple data (SIMD). These stream processors are designed specifically to support large-scale parallel computing on a single graphics chip. Computationally intensive algorithms, such as the Fast Fourier Transform (FFT), are particularly well suited for this computing environment. This high-speed version of MGS exploits commercially available hardware to accomplish the same objective in a fraction of the original time, performing the matrix calculations on nVidia graphics cards. The graphics processing unit (GPU) is hardware that is specialized for computationally intensive, highly parallel computation. From the software perspective, a parallel programming model called CUDA is used to transparently scale multicore parallelism in hardware. This technology gives computationally intensive applications access to the processing power of the nVidia GPUs through a C/C++ programming interface. The AAMGS (Accelerated Adaptive MGS) software takes advantage of these advanced technologies to accelerate the optical phase error characterization. With a single PC that contains four nVidia GTX-280 graphics cards, the new implementation can process four images simultaneously to produce a JWST (James Webb Space Telescope) wavefront measurement 60 times faster than the previous code.
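
    MGS descends from Gerchberg-Saxton iteration: transform between pupil and focal planes, impose the measured intensity in one domain and the known constraints in the other, and repeat. The inner loop is almost entirely Fourier transforms, which is why it maps so well onto GPU FFT hardware. A 1-D toy version in C (our sketch with a naive O(N²) DFT; the real algorithm works in 2-D with pupil-plane phase models):

      /* 1-D Gerchberg-Saxton-style phase retrieval from Fourier magnitudes
       * plus an object-domain support constraint. Build: cc gs.c -lm -o gs */
      #include <complex.h>
      #include <math.h>
      #include <stdio.h>
      #ifndef M_PI
      #define M_PI 3.14159265358979323846
      #endif
      #define N 64

      static void dft(const double complex *in, double complex *out, int inverse) {
          for (int k = 0; k < N; k++) {
              double complex acc = 0;
              for (int n = 0; n < N; n++)
                  acc += in[n] * cexp(I * (inverse ? 2.0 : -2.0) * M_PI * k * n / N);
              out[k] = inverse ? acc / N : acc;
          }
      }

      int main(void) {
          double complex truth[N] = {0}, g[N] = {0}, G[N];
          double mag[N];
          for (int n = 20; n < 30; n++) truth[n] = 1.0;    /* support-limited object */
          dft(truth, G, 0);
          for (int k = 0; k < N; k++) mag[k] = cabs(G[k]); /* "measured" magnitudes */

          for (int n = 15; n < 35; n++) g[n] = 0.5;        /* crude initial guess */
          for (int it = 0; it < 200; it++) {
              dft(g, G, 0);
              for (int k = 0; k < N; k++) {                /* keep phase, impose magnitude */
                  double a = cabs(G[k]);
                  G[k] = mag[k] * (a > 0 ? G[k] / a : 1.0);
              }
              dft(G, g, 1);
              for (int n = 0; n < N; n++)                  /* impose support constraint */
                  if (n < 15 || n >= 35) g[n] = 0;
          }
          double err = 0;
          for (int n = 0; n < N; n++) err += cabs(g[n] - truth[n]);
          printf("L1 distance to true object after 200 iterations: %.4f\n", err);
          return 0;
      }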

  28. The efficiency evaluation of support vibration isolation with mechanic inertial motion converter for vibroactive process equipment

    NASA Astrophysics Data System (ADS)

    Buryan, Yu. A.; Babichev, D. O.; Silkov, M. V.; Shtripling, L. O.; Kalashnikov, B. A.

    2017-08-01

    This research addresses the problem of protecting processing equipment from vibration. Theoretical issues of vibration isolation for vibroactive objects such as engines, pumps, compressors, fans, piping, etc. are considered. The design of a promising air spring with a parallel-mounted mechanical inertial motion converter is offered. A mathematical model of the suspension is obtained, allowing parameters to be selected that reduce the force-transmission factor to the base in a certain frequency range.
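
    If the "mechanical inertial motion converter" is idealized as an inerter of inertance b mounted in parallel with the air spring (stiffness k) and damping c, a standard linear model (ours, not the paper's) shows where the transmission reduction comes from. For equipment of mass m excited by an internal force, the force transmissibility to the base is

      T(\omega) \;=\; \left| \frac{k - b\,\omega^{2} + j\,c\,\omega}
                                  {k - (m+b)\,\omega^{2} + j\,c\,\omega} \right|,

    whose numerator has a notch near ω = √(k/b): the inertance can be chosen to place this anti-resonance in the band where the equipment is most vibroactive, which is the "certain frequency range" the abstract refers to.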

  29. [Burden and capability of damaged parents--how refugee children can grow in exile].

    PubMed

    Adam, Hubertus; Walter, Joachim

    2012-01-01

    In trauma, dialectical tension arises between the inner perspective of the traumatized subject and the outside perspective (objective situation), between environmental stress and the subjective attribution of meaning, as well as between experience and behaviour. The traumatic process--the subject's endeavour to comprehend the overwhelming, often inconceivable experience and integrate it into its concepts of self and world--is understood against the backdrop of these interacting dimensions. The process phases "emerge from each other, run parallel, and permeate each other" (Fischer u. Riedesser, 2003). Problems that arise in the aftermath of trauma are rarely overcome by the victims alone. Attempts to process and self-heal have a social dimension, and family members are affected by war, persecution and flight in individual, varying ways. The impacts of violence experienced by parents from different crisis regions are examined in case studies with regard to the psychological development of indirectly impacted children growing up in exile.

  30. Reconstruction for time-domain in vivo EPR 3D multigradient oximetric imaging--a parallel processing perspective.

    PubMed

    Dharmaraj, Christopher D; Thadikonda, Kishan; Fletcher, Anthony R; Doan, Phuc N; Devasahayam, Nallathamby; Matsumoto, Shingo; Johnson, Calvin A; Cook, John A; Mitchell, James B; Subramanian, Sankaran; Krishna, Murali C

    2009-01-01

    Three-dimensional Oximetric Electron Paramagnetic Resonance Imaging using the Single Point Imaging modality generates unpaired spin density and oxygen images that can readily distinguish between normal and tumor tissues in small animals. It is also possible with fast imaging to track the changes in tissue oxygenation in response to the oxygen content in the breathing air. However, this involves dealing with gigabytes of data for each 3D oximetric imaging experiment, involving digital band-pass filtering and background noise subtraction, followed by 3D Fourier reconstruction. This process is rather slow in a conventional uniprocessor system. This paper presents a parallelization framework using OpenMP runtime support and parallel MATLAB to execute such computationally intensive programs. The Intel compiler is used to develop a parallel C++ code based on OpenMP. The code is executed on four Dual-Core AMD Opteron shared memory processors to reduce the computational burden of the filtration task significantly. The results show that the parallel code for filtration has achieved a speedup factor of 46.66 over the equivalent serial MATLAB code. In addition, a parallel MATLAB code has been developed to perform 3D Fourier reconstruction. Speedup factors of 4.57 and 4.25 have been achieved during the reconstruction process and oximetry computation, for a data set with 23 x 23 x 23 gradient steps. The execution time has been computed for both the serial and parallel implementations using different dimensions of the data and presented for comparison. The reported system has been designed to be easily accessible even from low-cost personal computers through a local intranet (NIHnet). The experimental results demonstrate that parallel computing provides a source of high computational power to obtain biophysical parameters from 3D EPR oximetric imaging, almost in real time.
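
    The filtration stage parallelizes because the trace recorded at each gradient step can be filtered independently of all the others; that is essentially what the paper's OpenMP/C++ code exploits across its Opteron cores. A reduced C sketch (ours; the 5-tap moving average stands in for the paper's digital band-pass filter, and the sizes are illustrative except for the 23 x 23 x 23 = 12167 gradient steps):

      /* Independent per-trace filtering, parallelized with OpenMP.
       * Build: cc -fopenmp filt.c -lm -o filt */
      #include <math.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <omp.h>

      #define NTRACES 12167   /* 23 x 23 x 23 gradient steps */
      #define NSAMP   1024

      int main(void) {
          float *data = malloc(sizeof(float) * NTRACES * NSAMP);
          float *out  = malloc(sizeof(float) * NTRACES * NSAMP);
          for (long i = 0; i < (long)NTRACES * NSAMP; i++)
              data[i] = (float)sin(0.01 * i);              /* dummy signal */

          double t0 = omp_get_wtime();
          #pragma omp parallel for schedule(static)
          for (int tr = 0; tr < NTRACES; tr++) {           /* traces are independent */
              const float *x = data + (long)tr * NSAMP;
              float *y = out + (long)tr * NSAMP;
              for (int n = 0; n < NSAMP; n++) {            /* toy 5-tap smoother */
                  float acc = 0.0f; int cnt = 0;
                  for (int k = -2; k <= 2; k++)
                      if (n + k >= 0 && n + k < NSAMP) { acc += x[n + k]; cnt++; }
                  y[n] = acc / cnt;
              }
          }
          printf("filtered %d traces in %.3f s\n", NTRACES, omp_get_wtime() - t0);
          free(data); free(out);
          return 0;
      }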

  31. What is "the patient perspective" in patient engagement programs? Implicit logics and parallels to feminist theories.

    PubMed

    Rowland, Paula; McMillan, Sarah; McGillicuddy, Patti; Richards, Joy

    2017-01-01

    Public and patient involvement (PPI) in health care may refer to many different processes, ranging from participating in decision-making about one's own care to participating in health services research, health policy development, or organizational reforms. Across these many forms of public and patient involvement, the conceptual and theoretical underpinnings remain poorly articulated. Instead, most public and patient involvement programs rely on policy initiatives as their conceptual frameworks. This lack of conceptual clarity participates in dilemmas of program design, implementation, and evaluation. This study contributes to the development of theoretical understandings of public and patient involvement. In particular, we focus on the deployment of patient engagement programs within health service organizations. To develop a deeper understanding of the conceptual underpinnings of these programs, we examined the concept of "the patient perspective" as used by patient engagement practitioners and participants. Specifically, we focused on the way this phrase was used in the singular: "the" patient perspective or "the" patient voice. From qualitative analysis of interviews with 20 patient advisers and 6 staff members within a large urban health network in Canada, we argue that "the patient perspective" is referred to as a particular kind of situated knowledge, specifically an embodied knowledge of vulnerability. We draw parallels between this logic of patient perspective and the logic of early feminist theory, including the concepts of standpoint theory and strong objectivity. We suggest that champions of patient engagement may learn much from the way feminist theorists have constructed their arguments and addressed critique.

  32. The Perspective Structure of Visual Space

    PubMed Central

    2015-01-01

    Luneburg’s model has been the reference for experimental studies of visual space for almost seventy years. His claim for a curved visual space has been a source of inspiration for visual scientists as well as philosophers. The conclusion of many experimental studies has been that Luneburg’s model does not describe visual space in various tasks and conditions. Remarkably, no alternative model has been suggested. The current study explores perspective transformations of Euclidean space as a model for visual space. Computations show that the geometry of perspective spaces is considerably different from that of Euclidean space. Collinearity but not parallelism is preserved in perspective space, and angles are not invariant under translation and rotation. Similar relationships have been shown to be properties of visual space. Alley experiments performed early in the twentieth century were instrumental in hypothesizing curved visual spaces. Alleys were computed in perspective space and compared with reconstructed alleys of Blumenfeld. Parallel alleys were accurately described by perspective geometry. Accurate distance alleys were derived from parallel alleys by adjusting the interstimulus distances according to the size-distance invariance hypothesis. Agreement between computed and experimental alleys, and accommodation of experimental results that rejected Luneburg’s model, shows that perspective space is an appropriate model for how we perceive orientations and angles. The model is also appropriate for perceived distance ratios between stimuli but fails to predict perceived distances. PMID:27648222

  33. The processing of linear perspective and binocular information for action and perception.

    PubMed

    Bruggeman, Hugo; Yonas, Albert; Konczak, Jürgen

    2007-04-08

    To investigate the processing of linear perspective and binocular information for action and for the perceptual judgment of depth, we presented viewers with an actual Ames trapezoidal window. The display, when presented perpendicular to the line of sight, provided perspective information for a rectangular window slanted in depth, while binocular information specified a planar surface in the fronto-parallel plane. We compared pointing towards the display edges with perceptual judgment of their positions in depth as the display orientation was varied under monocular and binocular view. On monocular trials, pointing and depth judgment were based on the perspective information and failed to respond accurately to changes in display orientation because pictorial information did not vary sufficiently to specify the small differences in orientation. On binocular trials, pointing was based on binocular information and precisely matched the changes in display orientation, whereas depth judgment fell short of such adjustment and was based upon both binocular and perspective-specified slant information. The finding that on binocular trials pointing was considerably less responsive to the illusion than perceptual judgment supports an account of two separate processing streams in the human visual system: a ventral pathway involved in object recognition and a dorsal pathway that produces visual information for the control of actions. Previously, similar differences between perception and action were given an alternative explanation, namely that viewers selectively attend to different parts of a display in the two tasks. The finding that under monocular view participants responded to perspective information in both the action and the perception task rules out the attention-based argument.

  34. Parallel Algorithms for Least Squares and Related Computations.

    DTIC Science & Technology

    1991-03-22

    for dense computations in linear algebra. The work has recently been published in a general reference book on parallel algorithms by SIAM. AFOSR... written his Ph.D. dissertation with the principal investigator. (See publication 6.) • Parallel Algorithms for Dense Linear Algebra Computations. Our... and describe and to put into perspective a selection of the more important parallel algorithms for numerical linear algebra. We give a major new

  35. Genetics and language: a neurobiological perspective on the missing link (-ing hypotheses).

    PubMed

    Poeppel, David

    2011-12-01

    The paper argues that both evolutionary and genetic approaches to studying the biological foundations of speech and language could benefit from fractionating the problem at a finer grain, aiming not to map genetics to "language"--or even subdomains of language such as "phonology" or "syntax"--but rather to link genetic results to component formal operations that underlie the comprehension and production of linguistic representations. Neuroanatomic and neurophysiological research suggests that language processing is broken down in space (distributed functional anatomy along concurrent pathways) and time (concurrent processing on multiple time scales). These parallel neuronal pathways and their local circuits form the infrastructure of speech and language and are the actual targets of evolution/genetics. Therefore, investigating the mapping from gene to brain circuit to linguistic phenotype at the level of generic computational operations (subroutines actually executable in these circuits) stands to provide a new perspective on the biological foundations in the healthy and challenged brain.

  36. Understanding the role of floral development in the evolution of angiosperm flowers: clarifications from a historical and physico-dynamic perspective.

    PubMed

    Ronse De Craene, Louis

    2018-05-01

    Flower morphology results from the interaction of an established genetic program, the influence of external forces induced by pollination systems, and physical forces acting before, during and after initiation. Floral ontogeny, as the process of development from a meristem to a fully developed flower, can be approached either from a historical perspective, as a "recapitulation of the phylogeny" mainly explained as a process of genetic mutations through time, or from a physico-dynamic perspective, where time, spatial pressures, and growth processes are determining factors in creating the floral morphospace. The first (historical) perspective clarifies how flower morphology is the result of development over time, where evolutionary changes are only possible using building blocks that are available at a certain stage in the developmental history. Flowers are regulated by genetically determined constraints and development clarifies specific transitions between different floral morphs. These constraints are the result of inherent mutations or are induced by the interaction of flowers with pollinators. The second (physico-dynamic) perspective explains how changes in the physical environment of apical meristems create shifts in ontogeny and this is reflected in the morphospace of flowers. Changes in morphology are mainly induced by shifts in space, caused by the time of initiation (heterochrony), pressure of organs, and alterations of the size of the floral meristem, and these operate independently or in parallel with genetic factors. A number of examples demonstrate this interaction and its importance in the establishment of different floral forms. Both perspectives are complementary and should be considered in the understanding of factors regulating floral development. It is suggested that floral evolution is the result of alternating bursts of physical constraints and genetic stabilization processes following each other in succession. Future research needs to combine these different perspectives in understanding the evolution of floral systems and their diversification.

  37. Neuroscience is awaiting for a breakthrough: an essay bridging the concepts of Descartes, Einstein, Heisenberg, Hebb and Hayek with the explanatory formulations in this special issue.

    PubMed

    Başar, Erol; Karakaş, Sirel

    2006-05-01

    The paper presents gedankenmodels which, based on the theories and models in the present special issue, describe the conditions for a breakthrough in brain sciences and neuroscience. The new model is based on contemporary findings which show that the brain and its cognitive processes show super-synchronization. Accordingly, understanding the brain/body-mind complex is possible only when these three are considered as a wholistic entity and not as discrete structures or functions. Such a breakthrough and the related perspectives to the brain/body-mind complex will involve a transition from the mechanistic Cartesian system to a nebulous Cartesian system, one that is basically characterized by parallel computing and is further parallel to quantum mechanics. This integrated outlook on the brain/body-mind, or dynamic functionality, will make the treatment of also the meta-cognitive processes and the greater part of the iceberg, the unconscious, possible. All this will be possible only through the adoption of a multidisciplinary approach that will bring together the knowledge and the technology of the four P's which consist of physics, physiology, psychology and philosophy. The genetic approach to the functional dynamics of the brain/body-mind, where the oscillatory responses were found to be laws of brain activity, is presented in this volume as one of the most recent perspectives of neuroscience.

  38. The extent of visual space inferred from perspective angles

    PubMed Central

    Erkelens, Casper J.

    2015-01-01

    Retinal images are perspective projections of the visual environment. Perspective projections do not explain why we perceive perspective in 3-D space. Analysis of underlying spatial transformations shows that visual space is a perspective transformation of physical space if parallel lines in physical space vanish at finite distance in visual space. Perspective angles, i.e., the angle perceived between parallel lines in physical space, were estimated for rails of a straight railway track. Perspective angles were also estimated from pictures taken from the same point of view. Perspective angles between rails ranged from 27% to 83% of their angular size in the retinal image. Perspective angles prescribe the distance of vanishing points of visual space. All computed distances were shorter than 6 m. The shallow depth of a hypothetical space inferred from perspective angles does not match the depth of visual space, as it is perceived. Incongruity between the perceived shape of a railway line on the one hand and the experienced ratio between width and length of the line on the other hand is huge, but apparently so unobtrusive that it has remained unnoticed. The incompatibility between perspective angles and perceived distances casts doubt on evidence for a curved visual space that has been presented in the literature and was obtained from combining judgments of distances and angles with physical positions. PMID:26034567
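
    The geometry behind "perspective angles prescribe the distance of vanishing points" can be reconstructed in one line (our sketch, with illustrative numbers). If the rails have gauge w and are perceived to meet at a vanishing point at distance D, an observer midway between them perceives the angle

      \gamma \;=\; 2\,\arctan\!\left(\frac{w}{2D}\right)
      \qquad\Longleftrightarrow\qquad
      D \;=\; \frac{w}{2\,\tan(\gamma/2)}.

    For a standard gauge of w = 1.435 m (our assumption) and a perceived angle of, say, γ = 25°, this gives D ≈ 0.72/tan(12.5°) ≈ 3.2 m, in line with the paper's finding that all computed vanishing-point distances were shorter than 6 m.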

  39. Implications for therapeutic judging (TJ) of a psychoanalytical approach to the judicial role - Reflections on Robert Burt's contribution.

    PubMed

    Sourdin, Tania; Cornes, Richard

    Robert Burt in, "The Yale School of Law and Psychoanalysis, from 1963 Onward", in this issue, explains and laments a decline in influence of psychoanalytic ideas in legal thinking. He notes "the fundamental similarity that both litigation and psychotherapy involve recollections of past events", buttressing his argument with eight parallels between the two. In this article we take up Burt's theme, first noting the relationship between therapeutic jurisprudence and psychoanalytic concepts before presenting an outline for a psychoanalytical understanding of the judicial role. We then consider the litigation process from the linked perspectives of therapeutic jurisprudence and psychoanalysis before closing with a reflection on the eight parallels elaborated by Burt. Copyright © 2016 Elsevier Ltd. All rights reserved.

  40. A Large Deviations Analysis of Certain Qualitative Properties of Parallel Tempering and Infinite Swapping Algorithms

    DOE PAGES

    Doll, J.; Dupuis, P.; Nyquist, P.

    2017-02-08

    Parallel tempering, or replica exchange, is a popular method for simulating complex systems. The idea is to run parallel simulations at different temperatures, and at a given swap rate exchange configurations between the parallel simulations. From the perspective of large deviations it is optimal to let the swap rate tend to infinity, and it is possible to construct a corresponding simulation scheme, known as infinite swapping. In this paper we propose a novel use of large deviations for empirical measures for a more detailed analysis of the infinite swapping limit in the setting of continuous time jump Markov processes. Using the large deviations rate function and associated stochastic control problems we consider a diagnostic based on temperature assignments, which can be easily computed during a simulation. We show that the convergence of this diagnostic to its a priori known limit is a necessary condition for the convergence of infinite swapping. The rate function is also used to investigate the impact of asymmetries in the underlying potential landscape, and where in the state space poor sampling is most likely to occur.
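
    For readers new to the method, the replica-swap move at the heart of parallel tempering looks as follows (a minimal two-temperature C sketch of ours, on a double-well potential; "infinite swapping" is the limit in which such swap attempts occur infinitely fast):

      /* Two-replica parallel tempering on U(x) = (x^2 - 1)^2.
       * Build: cc pt.c -lm -o pt (drand48 is POSIX) */
      #include <math.h>
      #include <stdio.h>
      #include <stdlib.h>

      static double U(double x) { return (x * x - 1.0) * (x * x - 1.0); }

      static double metropolis(double x, double T) {
          double prop = x + 0.5 * (2.0 * drand48() - 1.0);   /* symmetric proposal */
          return (drand48() < exp(-(U(prop) - U(x)) / T)) ? prop : x;
      }

      int main(void) {
          double T[2] = {0.15, 1.5};   /* cold replica samples, hot replica explores */
          double x[2] = {-1.0, 1.0};
          long attempts = 0, accepted = 0;
          srand48(42);
          for (long step = 0; step < 1000000; step++) {
              for (int r = 0; r < 2; r++) x[r] = metropolis(x[r], T[r]);
              if (step % 10 == 0) {    /* fixed swap rate; infinite swapping lets this tend to infinity */
                  double logA = (1.0 / T[0] - 1.0 / T[1]) * (U(x[0]) - U(x[1]));
                  attempts++;
                  if (drand48() < exp(logA)) {               /* Metropolis swap acceptance */
                      double tmp = x[0]; x[0] = x[1]; x[1] = tmp;
                      accepted++;
                  }
              }
          }
          printf("swap acceptance rate: %.3f\n", (double)accepted / attempts);
          return 0;
      }

    The temperature-assignment diagnostic the paper proposes tracks statistics of exactly these assignments; the sketch above only shows the mechanics of the swap itself.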

  41. Tomographic image reconstruction using the cell broadband engine (CBE) general purpose hardware

    NASA Astrophysics Data System (ADS)

    Knaup, Michael; Steckmann, Sven; Bockenbach, Olivier; Kachelrieß, Marc

    2007-02-01

    Tomographic image reconstruction, such as the reconstruction of CT projection values, of tomosynthesis data, or of PET or SPECT events, is computationally very demanding. In filtered backprojection as well as in iterative reconstruction schemes, the most time-consuming steps are forward- and backprojection, which are often limited by the memory bandwidth. Recently, a novel general purpose architecture optimized for distributed computing became available: the Cell Broadband Engine (CBE). Its eight synergistic processing elements (SPEs) currently allow for a theoretical performance of 192 GFlops (3 GHz, 8 units, 4 floats per vector, 2 instructions, multiply and add, per clock). To maximize image reconstruction speed we modified our parallel-beam and perspective backprojection algorithms, which are highly optimized for standard PCs, and optimized the code for the CBE processor [1-3]. In addition, we implemented an optimized perspective forwardprojection on the CBE which allows us to perform statistical image reconstructions like the ordered subset convex (OSC) algorithm [4]. Performance was measured using simulated data with 512 projections per rotation and 512^2 detector elements. The data were backprojected into an image of 512^3 voxels using our PC-based approaches and the new CBE-based algorithms. Both the PC and the CBE timings were scaled to a 3 GHz clock frequency. On the CBE, we obtain total reconstruction times of 4.04 s for the parallel backprojection, 13.6 s for the perspective backprojection, and 192 s for a complete OSC reconstruction, consisting of one initial Feldkamp reconstruction followed by 4 OSC iterations.
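
    For orientation, the kernel being ported in all of these implementations has roughly the following shape; the outer voxel loops are embarrassingly parallel, which is what both the PC-optimized and the SPE-distributed versions exploit. This is a deliberately naive nearest-neighbour, parallel-beam 2-D sketch of ours, far simpler than the paper's optimized code:

      /* Naive parallel-beam backprojection. Build: cc -fopenmp bp.c -lm -o bp */
      #include <math.h>
      #include <stdio.h>
      #ifndef M_PI
      #define M_PI 3.14159265358979323846
      #endif

      #define NPROJ 512   /* projections per rotation, as in the record */
      #define NDET  512   /* detector elements per projection */
      #define NPIX  256   /* reduced image size for the sketch */

      static float sino[NPROJ][NDET];   /* filtered projections (dummy data here) */
      static float img[NPIX][NPIX];
      static float cs[NPROJ], sn[NPROJ];

      int main(void) {
          for (int p = 0; p < NPROJ; p++) {
              double th = M_PI * p / NPROJ;
              cs[p] = (float)cos(th); sn[p] = (float)sin(th);
              for (int d = 0; d < NDET; d++) sino[p][d] = 1.0f;
          }
          /* every voxel accumulates one sample from each projection */
          #pragma omp parallel for
          for (int y = 0; y < NPIX; y++)
              for (int x = 0; x < NPIX; x++) {
                  float acc = 0.0f;
                  for (int p = 0; p < NPROJ; p++) {
                      float t = (x - NPIX / 2) * cs[p] + (y - NPIX / 2) * sn[p];
                      int d = (int)(t + NDET / 2);      /* nearest neighbour for brevity */
                      if (d >= 0 && d < NDET) acc += sino[p][d];
                  }
                  img[y][x] = acc * (float)(M_PI / NPROJ);
              }
          printf("centre pixel: %f\n", img[NPIX / 2][NPIX / 2]);
          return 0;
      }

    Such kernels are memory-bandwidth-bound because little arithmetic is performed per sinogram byte streamed; the CBE attacks this with explicit DMA into the SPEs' local stores.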

  42. A medical perspective on the adventures of Sherlock Holmes.

    PubMed

    Reed, J

    2001-12-01

    The adventures of Sherlock Holmes, although primarily famous as stories of detection of crime, offer a considerable amount to interest the medical reader. There are many medical references in the stories, and the influence of Conan Doyle's medical background is clearly seen in the main characters. Aspects of the stories also reflect Conan Doyle's medical career, and also something of his attitude towards the profession. From Holmes's sayings and accounts of his methods, parallels can be drawn between Holmesian deduction and the diagnostic process. It is concluded, however, that deduction cannot be used as a direct paradigm since medical problems are rarely soluble through a process of logic alone.

  43. The Flight Deck Perspective of the NASA Langley AILS Concept

    NASA Technical Reports Server (NTRS)

    Rine, Laura L.; Abbott, Terence S.; Lohr, Gary W.; Elliott, Dawn M.; Waller, Marvin C.; Perry, R. Brad

    2000-01-01

    Many US airports depend on parallel runway operations to meet the growing demand for day-to-day operations. In the current airspace system, Instrument Meteorological Conditions (IMC) reduce the capacity of close parallel runway operations, that is, runways spaced closer than 4300 ft. These capacity losses can result in landing delays, causing inconvenience to the traveling public, interruptions in commerce, and increased operating costs to the airlines. This document presents the flight deck perspective component of the Airborne Information for Lateral Spacing (AILS) concept for approaches to close parallel runways in IMC. It represents the ideas the NASA Langley Research Center (LaRC) AILS Development Team envisions for integrating a number of components and procedures into a workable system for conducting close parallel runway approaches. Initial documentation of aspects of this concept was sponsored by LaRC and completed in 1996. Since that time a number of the aspects have evolved to a more mature state. This paper is an update of the earlier documentation.

  44. Relationship of Individual and Group Change: Ontogeny and Phylogeny in Biology.

    ERIC Educational Resources Information Center

    Gould, Steven Jay

    1984-01-01

    Considers the issue of parallels between ontogeny and phylogeny from an historical perspective. Discusses such parallels in relationship to two ontogenetic principles concerning recapitulation and sequence of stages. Differentiates between Piaget's use of the idea of recapitulation and Haeckel's biogenetic law. (Author/RH)

  45. Career Preparation: A Longitudinal, Process-Oriented Examination

    PubMed Central

    Stringer, Kate; Kerpelman, Jennifer; Skorikov, Vladimir

    2011-01-01

    Preparing for an adult career through careful planning, choosing a career, and gaining confidence to achieve career goals is a primary task during adolescence and early adulthood. The current study bridged identity process literature and career construction theory (Savickas, 2005) by examining the commitment component of career adaptability, career preparation (i.e., career planning, career decision-making, and career confidence), from an identity process perspective (Luyckx, Goossens, & Soenens, 2006). Research has suggested that career preparation dimensions are interrelated during adolescence and early adulthood; however, what remains to be known is how each dimension changes over time and the interrelationships among the dimensions during the transition from high school. Drawing parallels between career preparation and identity development dimensions, the current study addressed these questions by examining the patterns of change in each career preparation dimension and parallel process models that tested associations among the slopes and intercepts of the career preparation dimensions. Results showed that the career preparation dimensions were not developing similarly over time, although each dimension was associated cross-sectionally and longitudinally with the other dimensions. Results also suggested that career planning and decision-making precede career confidence. The results of the current study supported career construction theory and showed similarities between the processes of career preparation and identity development. PMID:21804641

  46. Career Preparation: A Longitudinal, Process-Oriented Examination.

    PubMed

    Stringer, Kate; Kerpelman, Jennifer; Skorikov, Vladimir

    2011-08-01

    Preparing for an adult career through careful planning, choosing a career, and gaining confidence to achieve career goals is a primary task during adolescence and early adulthood. The current study bridged identity process literature and career construction theory (Savickas, 2005) by examining the commitment component of career adaptability, career preparation (i.e., career planning, career decision-making, and career confidence), from an identity process perspective (Luyckx, Goossens, & Soenens, 2006). Research has suggested that career preparation dimensions are interrelated during adolescence and early adulthood; however, what remains to be known is how each dimension changes over time and the interrelationships among the dimensions during the transition from high school. Drawing parallels between career preparation and identity development dimensions, the current study addressed these questions by examining the patterns of change in each career preparation dimension and parallel process models that tested associations among the slopes and intercepts of the career preparation dimensions. Results showed that the career preparation dimensions were not developing similarly over time, although each dimension was associated cross-sectionally and longitudinally with the other dimensions. Results also suggested that career planning and decision-making precede career confidence. The results of the current study supported career construction theory and showed similarities between the processes of career preparation and identity development.

  47. A psychodynamic perspective on elections.

    PubMed

    Clemens, Norman A

    2010-11-01

    In a democracy, elections are the way in which the collective thought processes of the voters arrive at a decision to direct their government. The author explores how the individual voter assesses and resolves many conflicting internal and external forces to arrive at a vote. The midterm elections of 2010 illustrate the parallel between individual resolution of conflicting forces and the process of a campaign leading to the outcome of an election. The psychodynamic concepts of conflict and compromise, affects, aggression, unconscious forces, mechanisms of defense, superego, and the ego's integrative functions are evident in both the individual voter and the collective electoral process. The author expresses concern about the historical vulnerability of democracies and the unbalancing effect of allowing a limitless infusion of anonymous corporate money into campaigns.

  8. Code Optimization and Parallelization on the Origins: Looking from Users' Perspective

    NASA Technical Reports Server (NTRS)

    Chang, Yan-Tyng Sherry; Thigpen, William W. (Technical Monitor)

    2002-01-01

    Parallel machines are becoming the main compute engines for high performance computing. Despite their increasing popularity, it is still a challenge for most users to learn the basic techniques needed to optimize and parallelize their codes on such platforms. In this paper, we present some experiences in learning these techniques for the Origin systems at the NASA Advanced Supercomputing Division. The emphasis of this paper is on a few essential issues (with examples) that general users should master when they work with the Origins as well as with other parallel systems.

  9. Gestalt and Adventure Therapy: Parallels and Perspectives.

    ERIC Educational Resources Information Center

    Gilsdorf, Rudiger

    This paper calls attention to parallels in the literature of adventure education and that of Gestalt therapy, demonstrating that both are rooted in an experiential tradition. The philosophies of adventure or experiential education and Gestalt therapy have the following areas in common: (1) emphasis on personal growth and the development of present…

  10. Exploring Time Perspective in Greek Young Adults: Validation of the Zimbardo Time Perspective Inventory and Relationships with Mental Health Indicators

    ERIC Educational Resources Information Center

    Anagnostopoulos, Fotios; Griva, Fay

    2012-01-01

    In this article we examine the factorial structure of the Greek version of the Zimbardo Time Perspective Inventory (ZTPI; Zimbardo and Boyd in "J Personal Soc Psychol" 77:1271-1288, 1999), in a sample of 337 university students, using principal axis factoring (PAF) with oblique rotation, and its dimensionality using parallel analysis.…

  11. A unifying framework for rigid multibody dynamics and serial and parallel computational issues

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Jain, Abhinandan

    1989-01-01

    A unifying framework for various formulations of the dynamics of open-chain rigid multibody systems is discussed. Their suitability for serial and parallel processing is assessed. The framework is based on the derivation of intrinsic, i.e., coordinate-free, equations of the algorithms, which provides a suitable abstraction and permits a distinction to be made between the computational redundancy in the intrinsic and extrinsic equations. A spatial notation is used which allows the derivation of the various algorithms in a common setting and thus clarifies the relationships among them. The three classes of algorithms, viz. O(n), O(n^2), and O(n^3), for the solution of the dynamics problem are investigated. The researchers begin with the derivation of the O(n^3) algorithms based on the explicit computation of the mass matrix, which provides insight into the underlying basis of the O(n) algorithms. From a computational perspective, the optimal choice of a coordinate frame for the projection of the intrinsic equations is discussed and the serial computational complexity of the different algorithms is evaluated. The three classes of algorithms are also analyzed for suitability for parallel processing. It is shown that the problem belongs to the class NC, with time and processor bounds of O(log^2(n)) and O(n^4), respectively. However, the algorithm that achieves these bounds is not stable. The researchers show that the fastest stable parallel algorithm achieves a computational complexity of O(n) with O(n^2) processors, and results from the parallelization of the O(n^3) serial algorithm.
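
    For reference, the complexity bounds quoted in this abstract can be collected in one display (a restatement of the figures above, not a re-derivation; the NC time bound is read as O(log^2 n)):

    ```latex
    % Complexity bounds quoted in the abstract (restated, not re-derived)
    \begin{align*}
    \text{serial algorithm classes:}\quad & O(n),\ O(n^2),\ O(n^3)\\
    \text{parallel, in NC (unstable):}\quad & T(n) = O(\log^2 n) \text{ time on } O(n^4) \text{ processors}\\
    \text{fastest stable parallel:}\quad & T(n) = O(n) \text{ time on } O(n^2) \text{ processors}
    \end{align*}
    ```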

  12. An Alternative Methodology for Creating Parallel Test Forms Using the IRT Information Function.

    ERIC Educational Resources Information Center

    Ackerman, Terry A.

    The purpose of this paper is to report results on the development of a new computer-assisted methodology for creating parallel test forms using the item response theory (IRT) information function. Recently, several researchers have approached test construction from a mathematical programming perspective. However, these procedures require…

  13. A further extension of the Extended Parallel Process Model (E-EPPM): implications of cognitive appraisal theory of emotion and dispositional coping style.

    PubMed

    So, Jiyeon

    2013-01-01

    For two decades, the extended parallel process model (EPPM; Witte, 1992) has been one of the most widely used theoretical frameworks in health risk communication. The model has gained much popularity because it recognizes that, ironically, preceding fear appeal models do not incorporate the concept of fear as a legitimate and central part of them. As a remedy to this situation, the EPPM aims at "putting the fear back into fear appeals" (Witte, 1992, p. 330). Despite this attempt, however, this article argues that the EPPM still does not fully capture the essence of fear as an emotion. Specifically, drawing upon Lazarus's (1991) cognitive appraisal theory of emotion and the concept of dispositional coping style (Miller, 1995), this article seeks to further extend the EPPM. The revised EPPM incorporates a more comprehensive perspective on risk perceptions as a construct involving both cognitive and affective aspects (i.e., fear and anxiety) and integrates the concept of monitoring and blunting coping style as a moderator of further information seeking regarding a given risk topic.

  14. Educational preparation of black nurses: a historical perspective.

    PubMed

    Carnegie, M Elizabeth

    2005-01-01

    To know where minority nursing needs to proceed, the minority nursing community must understand where it has been. This historical perspective traces our roots through every level of nursing education. Parallels are drawn between the evolution of minority nurse education and the historical events occurring in the greater society of the United States.

  15. Cognitive and Sociocultural Perspectives: Two Parallel SLA Worlds?

    ERIC Educational Resources Information Center

    Zuengler, Jane; Miller, Elizabeth R.

    2006-01-01

    Looking back at the past 15 years in the field of second language acquisition (SLA), the authors select and discuss several important developments. One is the impact of various sociocultural perspectives such as Vygotskian sociocultural theory, language socialization, learning as changing participation in situated practices, Bakhtin and the…

  16. Graphics Processing Unit–Enhanced Genetic Algorithms for Solving the Temporal Dynamics of Gene Regulatory Networks

    PubMed Central

    García-Calvo, Raúl; Guisado, JL; Diaz-del-Rio, Fernando; Córdoba, Antonio; Jiménez-Morales, Francisco

    2018-01-01

    Understanding the regulation of gene expression is one of the key problems in current biology. A promising method for that purpose is the determination of the temporal dynamics between known initial and ending network states, by using simple acting rules. The huge number of rule combinations and the inherently nonlinear nature of the problem make genetic algorithms an excellent candidate for finding optimal solutions. As this is a computationally intensive problem that needs long runtimes in conventional architectures for realistic network sizes, it is fundamental to accelerate this task. In this article, we study how to develop efficient parallel implementations of this method for the fine-grained parallel architecture of graphics processing units (GPUs) using the compute unified device architecture (CUDA) platform. An exhaustive and methodical study of various parallel genetic algorithm schemes (master-slave, island, cellular, and hybrid models) and of several individual selection methods (roulette, elitist) is carried out for this problem. Several procedures that optimize the use of the GPU's resources are presented. We conclude that the implementation that produces better results (both from the performance and the genetic algorithm fitness perspectives) is simulating a few thousand individuals grouped into a few islands using elitist selection. This model combines two powerful factors for discovering the best solutions: finding good individuals in a small number of generations, and introducing genetic diversity via relatively frequent and numerous migrations. As a result, we have even found the optimal solution for the analyzed gene regulatory network (GRN). In addition, a comparative study of the performance obtained by the different parallel implementations on GPU versus a sequential application on CPU is carried out. In our tests, a multifold speedup was obtained for our optimized parallel implementation of the method on a medium-class GPU over an equivalent sequential single-core implementation running on a recent Intel i7 CPU. This work can provide useful guidance to researchers in biology, medicine, or bioinformatics on how to take advantage of parallelization on massively parallel devices and GPUs to apply novel nature-inspired metaheuristic algorithms to real-world applications (like the method to solve the temporal dynamics of GRNs). PMID:29662297
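
    To make the winning configuration concrete, the following is a minimal sequential Python sketch of an island-model genetic algorithm with elitist selection and periodic ring migration. The bit-string encoding, fitness function, and all parameters are illustrative placeholders, not the authors' CUDA/GRN implementation:

    ```python
    # Minimal sequential sketch of the island model with elitist selection
    # described above. Encoding, fitness, and parameters are placeholders,
    # not the authors' CUDA/GRN implementation.
    import random

    GENES, POP, ISLANDS, GENERATIONS, MIGRATE_EVERY = 32, 50, 4, 200, 10

    def fitness(ind):                       # placeholder objective: count 1-bits
        return sum(ind)

    def evolve(pop):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:len(pop) // 5]         # elitist selection: keep the top 20%
        children = []
        while len(elite) + len(children) < len(pop):
            a, b = random.sample(elite, 2)
            cut = random.randrange(GENES)
            child = a[:cut] + b[cut:]       # one-point crossover
            i = random.randrange(GENES)
            child[i] ^= 1                   # point mutation
            children.append(child)
        return elite + children

    islands = [[[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
               for _ in range(ISLANDS)]
    for gen in range(GENERATIONS):
        islands = [evolve(pop) for pop in islands]    # islands evolve independently
        if gen % MIGRATE_EVERY == 0:                  # periodic ring migration
            for i, pop in enumerate(islands):
                dest = islands[(i + 1) % ISLANDS]
                dest[random.randrange(POP)] = list(max(pop, key=fitness))

    best = max((ind for pop in islands for ind in pop), key=fitness)
    print("best fitness:", fitness(best))
    ```

    On a GPU, each island (or each fitness evaluation) would map to a block of CUDA threads; the sketch shows only the algorithmic scheme, not the device code.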

  17. Graphics Processing Unit-Enhanced Genetic Algorithms for Solving the Temporal Dynamics of Gene Regulatory Networks.

    PubMed

    García-Calvo, Raúl; Guisado, J L; Diaz-Del-Rio, Fernando; Córdoba, Antonio; Jiménez-Morales, Francisco

    2018-01-01

    Understanding the regulation of gene expression is one of the key problems in current biology. A promising method for that purpose is the determination of the temporal dynamics between known initial and ending network states, by using simple acting rules. The huge number of rule combinations and the inherently nonlinear nature of the problem make genetic algorithms an excellent candidate for finding optimal solutions. As this is a computationally intensive problem that needs long runtimes in conventional architectures for realistic network sizes, it is fundamental to accelerate this task. In this article, we study how to develop efficient parallel implementations of this method for the fine-grained parallel architecture of graphics processing units (GPUs) using the compute unified device architecture (CUDA) platform. An exhaustive and methodical study of various parallel genetic algorithm schemes (master-slave, island, cellular, and hybrid models) and of several individual selection methods (roulette, elitist) is carried out for this problem. Several procedures that optimize the use of the GPU's resources are presented. We conclude that the implementation that produces better results (both from the performance and the genetic algorithm fitness perspectives) is simulating a few thousand individuals grouped into a few islands using elitist selection. This model combines two powerful factors for discovering the best solutions: finding good individuals in a small number of generations, and introducing genetic diversity via relatively frequent and numerous migrations. As a result, we have even found the optimal solution for the analyzed gene regulatory network (GRN). In addition, a comparative study of the performance obtained by the different parallel implementations on GPU versus a sequential application on CPU is carried out. In our tests, a multifold speedup was obtained for our optimized parallel implementation of the method on a medium-class GPU over an equivalent sequential single-core implementation running on a recent Intel i7 CPU. This work can provide useful guidance to researchers in biology, medicine, or bioinformatics on how to take advantage of parallelization on massively parallel devices and GPUs to apply novel nature-inspired metaheuristic algorithms to real-world applications (like the method to solve the temporal dynamics of GRNs).

  18. The Challenge and Challenging of Childhood Studies? Learning from Disability Studies and Research with Disabled Children

    ERIC Educational Resources Information Center

    Tisdall, E. Kay M.

    2012-01-01

    Childhood studies have argued for the social construction of childhood, respecting children and childhood in the present, and recognising children's agency and rights. Such perspectives have parallels to, and challenges for, disability studies. This article considers such parallels and challenges, leading to a (re)consideration of research claims…

  19. Automated target recognition and tracking using an optical pattern recognition neural network

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin

    1991-01-01

    The on-going development of an automatic target recognition and tracking system at the Jet Propulsion Laboratory is presented. This system is an optical pattern recognition neural network (OPRNN) that is an integration of an innovative optical parallel processor and a feature-extraction-based neural net training algorithm. The parallel optical processor provides high speed and vast parallelism as well as full shift invariance. The neural network algorithm enables simultaneous discrimination of multiple noisy targets in spite of differences in their scales, rotations, perspectives, and various deformations. This fully developed OPRNN system can be effectively utilized for the automated spacecraft recognition and tracking that will lead to success in the Automated Rendezvous and Capture (AR&C) of the unmanned Cargo Transfer Vehicle (CTV). One of the most powerful optical parallel processors for automatic target recognition is the multichannel correlator. With the inherent advantages of parallel processing capability and shift invariance, multiple objects can be simultaneously recognized and tracked using this multichannel correlator. This target tracking capability can be greatly enhanced by utilizing a powerful feature-extraction-based neural network training algorithm such as the neocognitron. The OPRNN, currently under investigation at JPL, is constructed with an optical multichannel correlator where holographic filters have been prepared using the neocognitron training algorithm. The computation speed of the neocognitron-type OPRNN is up to 10^14 analog connections/sec, enabling the OPRNN to outperform its state-of-the-art electronic counterpart by at least two orders of magnitude.

  20. The Effects of Argumentation Implementation on Environmental Education Self Efficacy Beliefs and Perspectives According to Environmental Problems

    ERIC Educational Resources Information Center

    Fettahlioglu, Pinar

    2018-01-01

    The purpose of this study is to investigate the effect of argumentation implementation applied in the environmental science course on science teacher candidates' environmental education self-efficacy beliefs and perspectives according to environmental problems. In this mixed method research study, convergent parallel design was utilized.…

  1. Two Perspectives on Proportional Relationships: Extending Complementary Origins of Multiplication in Terms of Quantities

    ERIC Educational Resources Information Center

    Beckmann, Sybilla; Izsák, Andrew

    2015-01-01

    In this article, we present a mathematical analysis that distinguishes two distinct quantitative perspectives on ratios and proportional relationships: variable number of fixed quantities and fixed numbers of variable parts. This parallels the distinction between measurement and partitive meanings for division and between two meanings for…

  2. Work at the Uddevalla Volvo Plant from the Perspective of the Demand-Control Model

    ERIC Educational Resources Information Center

    Lottridge, Danielle

    2004-01-01

    The Uddevalla Volvo plant represents a different paradigm for automotive assembly. In parallel-flow work, self-managed work groups assemble entire automobiles with productivity comparable to that of conventional series-flow assembly lines. From the perspective of the demand-control model, operators at the Uddevalla plant have low physical and timing…

  3. Assembling the elephant: Integrating perspectives in personality psychology. Comment on "Personality from a cognitive-biological perspective" by Y. Neuman

    NASA Astrophysics Data System (ADS)

    Haslam, Nick; Holland, Elise

    2014-12-01

    Neuman [1] has made an ambitious attempt to integrate perspectives on the psychology of personality that usually run in parallel. The field calls to mind the fable of the blind men and the elephant: each perspective makes different claims about the person based on the aspect it apprehends. Neuman links cognition, affective neuroscience and psychodynamics in a bold effort to sketch the entire beast. However, his hefty framework has some elephantine elements, and is at times conceptually loose and baggy.

  4. Interoception, contemplative practice, and health

    PubMed Central

    Farb, Norman; Daubenmier, Jennifer; Price, Cynthia J.; Gard, Tim; Kerr, Catherine; Dunn, Barnaby D.; Klein, Anne Carolyn; Paulus, Martin P.; Mehling, Wolf E.

    2015-01-01

    Interoception can be broadly defined as the sense of signals originating within the body. As such, interoception is critical for our sense of embodiment, motivation, and well-being. And yet, despite its importance, interoception remains poorly understood within modern science. This paper reviews interdisciplinary perspectives on interoception, with the goal of presenting a unified perspective from diverse fields such as neuroscience, clinical practice, and contemplative studies. It is hoped that this integrative effort will advance our understanding of how interoception determines well-being, and identify the central challenges to such understanding. To this end, we introduce an expanded taxonomy of interoceptive processes, arguing that many of these processes can be understood through an emerging predictive coding model for mind–body integration. The model, which describes the tension between expected and felt body sensation, parallels contemplative theories, and implicates interoception in a variety of affective and psychosomatic disorders. We conclude that maladaptive construal of bodily sensations may lie at the heart of many contemporary maladies, and that contemplative practices may attenuate these interpretative biases, restoring a person’s sense of presence and agency in the world. PMID:26106345

  5. Synthetic environment employing a craft for providing user perspective reference

    DOEpatents

    Maples, Creve; Peterson, Craig A.

    1997-10-21

    A multi-dimensional user oriented synthetic environment system allows application programs to be programmed and accessed with input/output device independent, generic functional commands which are a distillation of the actual functions performed by any application program. A shared memory structure allows the translation of device specific commands to device independent, generic functional commands. Complete flexibility of the mapping of synthetic environment data to the user is thereby allowed. Accordingly, synthetic environment data may be provided to the user on parallel user information processing channels, allowing the subcognitive mind to act as a filter, eliminating irrelevant information and allowing the processing of increased amounts of data by the user. The user is further provided with a craft surrounding the user within the synthetic environment, which craft imparts important visual reference and motion parallax cues, enabling the user to better appreciate distances and directions within the synthetic environment. Display of this craft in close proximity to the user's point of perspective may be accomplished without substantially degrading the image resolution of the displayed portions of the synthetic environment.

  6. Morbidity, Self-Perceived Health and Mortality Among non-Western Immigrants and Their Descendants in Denmark in a Life Phase Perspective.

    PubMed

    Jervelund, Signe Smith; Malik, Sanam; Ahlmark, Nanna; Villadsen, Sarah Fredsted; Nielsen, Annemette; Vitus, Kathrine

    2017-04-01

    To enable preventive policies to address health inequity across ethnic groups, this review summarizes the current knowledge on morbidity, self-perceived health and mortality among non-Western immigrants and their descendants in Denmark. A systematic search in PUBMED, SCOPUS, Embase and Cochrane as well as in national databases was undertaken. The final number of publications included was 45. Adult immigrants had higher morbidity, but lower mortality, compared to ethnic Danes. Immigrant children had higher mortality and morbidity compared to ethnic Danes. Immigrants' health is critical to reaching the political goals of integration. Despite non-Western immigrants' higher morbidity than ethnic Danes, no national strategy targeting immigrants' health has been implemented. Future research should include elderly immigrants and children, preferably employing a life-course perspective to enhance understanding of the parallel processes of societal adaptation and health.

  7. H2 formation on interstellar dust grains: The viewpoints of theory, experiments, models and observations

    NASA Astrophysics Data System (ADS)

    Wakelam, Valentine; Bron, Emeric; Cazaux, Stephanie; Dulieu, Francois; Gry, Cécile; Guillard, Pierre; Habart, Emilie; Hornekær, Liv; Morisset, Sabine; Nyman, Gunnar; Pirronello, Valerio; Price, Stephen D.; Valdivia, Valeska; Vidali, Gianfranco; Watanabe, Naoki

    2017-12-01

    Molecular hydrogen is the most abundant molecule in the universe. It is the first one to form and survive photo-dissociation in tenuous environments. Its formation involves catalytic reactions on the surface of interstellar grains. The micro-physics of the formation process has been investigated intensively over the last 20 years, in parallel with new astrophysical observational and modeling progress. In view of the probable revolution to be brought by the future JWST satellite, this article has been written to present what we think we know about H2 formation in a variety of interstellar environments.

  8. A communication library for the parallelization of air quality models on structured grids

    NASA Astrophysics Data System (ADS)

    Miehe, Philipp; Sandu, Adrian; Carmichael, Gregory R.; Tang, Youhua; Dăescu, Dacian

    PAQMSG is an MPI-based, Fortran 90 communication library for the parallelization of air quality models (AQMs) on structured grids. It consists of distribution, gathering and repartitioning routines for different domain decompositions implementing a master-worker strategy. The library is architecture and application independent and includes optimization strategies for different architectures. This paper presents the library from a user perspective. Results are shown from the parallelization of STEM-III on Beowulf clusters. The PAQMSG library is available on the web. The communication routines are easy to use, and should allow for an immediate parallelization of existing AQMs. PAQMSG can also be used for constructing new models.

  9. “Scar-cinoma”: viewing the fibrotic lung mesenchymal cell in the context of cancer biology

    PubMed Central

    Horowitz, Jeffrey C.; Osterholzer, John J.; Marazioti, Antonia; Stathopoulos, Georgios T.

    2017-01-01

    Lung cancer and pulmonary fibrosis are common, yet distinct, pathological processes that represent urgent unmet medical needs. Striking clinical and mechanistic parallels exist between these distinct disease entities. The goal of this article is to examine lung fibrosis from the perspective of cancer-associated phenotypic hallmarks, to discuss areas of mechanistic overlap and distinction, and to highlight profibrotic mechanisms that contribute to carcinogenesis. Ultimately, we speculate that such comparisons might identify opportunities to leverage our current understanding of the pathobiology of each disease process in order to advance novel therapeutic approaches for both. We anticipate that such “outside the box” concepts could be translated to a more precise and individualised approach to fibrotic diseases of the lung. PMID:27030681

  10. Motivation and Engagement in the Workplace: Examining a Multidimensional Framework and Instrument from a Measurement and Evaluation Perspective

    ERIC Educational Resources Information Center

    Martin, Andrew J.

    2009-01-01

    This investigation conducts measurement and evaluation of a multidimensional model of workplace motivation and engagement from a construct validation perspective. Two studies were conducted, one using the multi-item multidimensional Motivation and Engagement Scale-Work (N = 637 school personnel) and one using a parallel short form (N = 574 school…

  11. Advanced Material Strategies for Next-Generation Additive Manufacturing

    PubMed Central

    Chang, Jinke; He, Jiankang; Zhou, Wenxing; Lei, Qi; Li, Xiao; Li, Dichen

    2018-01-01

    Additive manufacturing (AM) has drawn tremendous attention in various fields. In recent years, great efforts have been made to develop novel additive manufacturing processes such as micro-/nano-scale 3D printing, bioprinting, and 4D printing for the fabrication of complex 3D structures with high resolution, living components, and multimaterials. The development of advanced functional materials is important for the implementation of these novel additive manufacturing processes. Here, a state-of-the-art review on advanced material strategies for novel additive manufacturing processes is provided, mainly including conductive materials, biomaterials, and smart materials. The advantages, limitations, and future perspectives of these materials for additive manufacturing are discussed. It is believed that the innovations of material strategies in parallel with the evolution of additive manufacturing processes will provide numerous possibilities for the fabrication of complex smart constructs with multiple functions, which will significantly widen the application fields of next-generation additive manufacturing. PMID:29361754

  12. Advanced Material Strategies for Next-Generation Additive Manufacturing.

    PubMed

    Chang, Jinke; He, Jiankang; Mao, Mao; Zhou, Wenxing; Lei, Qi; Li, Xiao; Li, Dichen; Chua, Chee-Kai; Zhao, Xin

    2018-01-22

    Additive manufacturing (AM) has drawn tremendous attention in various fields. In recent years, great efforts have been made to develop novel additive manufacturing processes such as micro-/nano-scale 3D printing, bioprinting, and 4D printing for the fabrication of complex 3D structures with high resolution, living components, and multimaterials. The development of advanced functional materials is important for the implementation of these novel additive manufacturing processes. Here, a state-of-the-art review on advanced material strategies for novel additive manufacturing processes is provided, mainly including conductive materials, biomaterials, and smart materials. The advantages, limitations, and future perspectives of these materials for additive manufacturing are discussed. It is believed that the innovations of material strategies in parallel with the evolution of additive manufacturing processes will provide numerous possibilities for the fabrication of complex smart constructs with multiple functions, which will significantly widen the application fields of next-generation additive manufacturing.

  13. Reliability Modeling Methodology for Independent Approaches on Parallel Runways Safety Analysis

    NASA Technical Reports Server (NTRS)

    Babcock, P.; Schor, A.; Rosch, G.

    1998-01-01

    This document is an adjunct to the final report, An Integrated Safety Analysis Methodology for Emerging Air Transport Technologies. That report presents the results of our analysis of the problem of simultaneous but independent approaches of two aircraft on parallel runways (independent approaches on parallel runways, or IAPR). This introductory chapter presents a brief overview and perspective of approaches and methodologies for performing safety analyses of complex systems. Ensuing chapters provide the technical details that underlie the approach we have taken in performing the safety analysis for the IAPR concept.

  14. Partitioning in parallel processing of production systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oflazer, K.

    1987-01-01

    This thesis presents research on certain issues related to parallel processing of production systems. It first presents a parallel production system interpreter that has been implemented on a four-processor multiprocessor. This parallel interpreter is based on Forgy's OPS5 interpreter and exploits production-level parallelism in production systems. Runs on the multiprocessor system indicate that it is possible to obtain speed-up of around 1.7 in the match computation for certain production systems when productions are split into three sets that are processed in parallel. The next issue addressed is that of partitioning a set of rules to processors in a parallel interpreter with production-level parallelism, and the extent of additional improvement in performance. The partitioning problem is formulated and an algorithm for approximate solutions is presented. The thesis next presents a parallel processing scheme for OPS5 production systems that allows some redundancy in the match computation. This redundancy enables the processing of a production to be divided into units of medium granularity, each of which can be processed in parallel. Subsequently, a parallel processor architecture for implementing the parallel processing algorithm is presented.
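
    As an illustration of the partitioning problem formulated in the thesis, the sketch below balances estimated per-rule match costs across processors with a standard greedy longest-processing-time heuristic. It is a generic stand-in, not Oflazer's approximation algorithm, and the rule names and costs are invented:

    ```python
    # Greedy longest-processing-time sketch of partitioning rules to
    # processors so that estimated match cost is balanced. This is a
    # generic heuristic, not the thesis' approximation algorithm; rule
    # names and costs are invented.
    import heapq

    def partition(rule_costs, n_processors):
        """rule_costs: mapping of rule name -> estimated match cost."""
        heap = [(0.0, p, []) for p in range(n_processors)]   # (load, id, rules)
        heapq.heapify(heap)
        for rule, cost in sorted(rule_costs.items(), key=lambda kv: -kv[1]):
            load, p, rules = heapq.heappop(heap)             # least-loaded processor
            rules.append(rule)
            heapq.heappush(heap, (load + cost, p, rules))
        return {p: rules for _, p, rules in heap}

    print(partition({"r1": 9, "r2": 7, "r3": 4, "r4": 4, "r5": 2}, 3))
    ```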

  15. Parallel processing considerations for image recognition tasks

    NASA Astrophysics Data System (ADS)

    Simske, Steven J.

    2011-01-01

    Many image recognition tasks are well-suited to parallel processing. The most obvious example is that many imaging tasks require the analysis of multiple images. From this standpoint, then, parallel processing need be no more complicated than assigning individual images to individual processors. However, there are three less trivial categories of parallel processing that will be considered in this paper: parallel processing (1) by task; (2) by image region; and (3) by meta-algorithm. Parallel processing by task allows the assignment of multiple workflows (as diverse as optical character recognition [OCR], document classification and barcode reading) to parallel pipelines. This can substantially decrease time to completion for the document tasks. For this approach, each parallel pipeline is generally performing a different task. Parallel processing by image region allows a larger imaging task to be sub-divided into a set of parallel pipelines, each performing the same task but on a different data set. This type of image analysis is readily addressed by a map-reduce approach. Examples include document skew detection and multiple face detection and tracking. Finally, parallel processing by meta-algorithm allows different algorithms to be deployed on the same image simultaneously. This approach may result in improved accuracy.
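
    A minimal Python sketch of the second category, parallel processing by image region: the image is split into horizontal strips, each strip is analyzed by a separate worker process (the map step), and the per-strip results are combined (the reduce step). The strip statistic here is a trivial stand-in for a real analysis such as skew detection:

    ```python
    # Map-reduce sketch of parallel processing by image region: the image
    # is split into horizontal strips, each strip is analyzed by a worker
    # process (map), and the partial results are combined (reduce). The
    # per-strip statistic is a trivial stand-in for a real analysis.
    from concurrent.futures import ProcessPoolExecutor

    def analyze_strip(strip):
        # map step: per-region work on one strip of rows
        return sum(sum(row) for row in strip), sum(len(row) for row in strip)

    def mean_intensity(image, n_workers=4):
        rows = max(1, len(image) // n_workers)
        strips = [image[i:i + rows] for i in range(0, len(image), rows)]
        with ProcessPoolExecutor(max_workers=n_workers) as pool:
            partials = list(pool.map(analyze_strip, strips))
        total = sum(s for s, _ in partials)          # reduce step
        count = sum(n for _, n in partials)
        return total / count

    if __name__ == "__main__":
        image = [[(x * y) % 256 for x in range(640)] for y in range(480)]
        print("mean intensity:", mean_intensity(image))
    ```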

  16. Enabling Chemistry Technologies and Parallel Synthesis-Accelerators of Drug Discovery Programmes.

    PubMed

    Vasudevan, A; Bogdan, A R; Koolman, H F; Wang, Y; Djuric, S W

    There is a pressing need to improve overall productivity in the pharmaceutical industry. Judicious investments in chemistry technologies can have a significant impact on cycle times, cost of goods and probability of technical success. This perspective describes some of these technologies developed and implemented at AbbVie, and their applications to the synthesis of novel scaffolds and to parallel synthesis. © 2017 Elsevier B.V. All rights reserved.

  17. Some thoughts about parallel process and psychotherapy supervision: when is a parallel just a parallel?

    PubMed

    Watkins, C Edward

    2012-09-01

    In a way not done before, Tracey, Bludworth, and Glidden-Tracey ("Are there parallel processes in psychotherapy supervision: An empirical examination," Psychotherapy, 2011, advance online publication, doi:10.1037/a0026246) have shown us that parallel process in psychotherapy supervision can indeed be rigorously and meaningfully researched, and their groundbreaking investigation provides a nice prototype for future supervision studies to emulate. In what follows, I offer a brief complementary comment to Tracey et al., addressing one matter that seems to be a potentially important conceptual and empirical parallel process consideration: When is a parallel just a parallel? PsycINFO Database Record (c) 2012 APA, all rights reserved.

  18. Seeing the forest for the trees: Networked workstations as a parallel processing computer

    NASA Technical Reports Server (NTRS)

    Breen, J. O.; Meleedy, D. M.

    1992-01-01

    Unlike traditional 'serial' processing computers, in which one central processing unit performs one instruction at a time, parallel processing computers contain several processing units, thereby performing several instructions at once. Many of today's fastest supercomputers achieve their speed by employing thousands of processing elements working in parallel. Few institutions can afford these state-of-the-art parallel processors, but many already have the makings of a modest parallel processing system. Workstations on existing high-speed networks can be harnessed as nodes in a parallel processing environment, bringing the benefits of parallel processing to many. While such a system cannot rival the industry's latest machines, many common tasks can be accelerated greatly by spreading the processing burden and exploiting idle network resources. We study several aspects of this approach, from algorithms to select nodes to speed gains in specific tasks. With ever-increasing volumes of astronomical data, it becomes all the more necessary to utilize our computing resources fully.

  19. The dentist's care-taking perspective of dental fear patients - a continuous and changing challenge.

    PubMed

    Gyllensvärd, K; Qvarnström, M; Wolf, E

    2016-08-01

    The aim was to analyse the care taking of dental fear patients from the perspective of the dentist, using a qualitative methodology. In total, 11 dentists from both the private and public dental service were selected through purposive sampling according to their experience of treating dental fear patients, their gender, age, service affiliation and location of undergraduate education. Data were obtained using one semi-structured interview with each informant. The interviews were taped and transcribed verbatim. The text was analysed using qualitative content analysis. The theme 'The transforming autodidactic process of care taking', covering the interpretative level of the data content, was identified. The first main category, covering the descriptive level of data, was 'The continuous and changing challenge', with the subcategories 'The emotional demand' and 'The financial stress'. The second main category identified was 'The repeated collection of experience', with the subcategories 'The development of resources' and 'The emotional change'. The dentists' experience of treating dental fear patients was considered a challenging self-taught process under continuous transformation. The competence and routine platform expanded over time, parallel to a change of the connected emotions from frustration towards safety, although challenges remained. © 2016 John Wiley & Sons Ltd.

  20. Parallel Processing at the High School Level.

    ERIC Educational Resources Information Center

    Sheary, Kathryn Anne

    This study investigated the ability of high school students to cognitively understand and implement parallel processing. Data indicates that most parallel processing is being taught at the university level. Instructional modules on C, Linux, and the parallel processing language, P4, were designed to show that high school students are highly…

  1. Multi-objective problem of the modified distributed parallel machine and assembly scheduling problem (MDPMASP) with eligibility constraints

    NASA Astrophysics Data System (ADS)

    Amallynda, I.; Santosa, B.

    2017-11-01

    This paper proposes a new generalization of the distributed parallel machine and assembly scheduling problem (DPMASP) with eligibility constraints, referred to as the modified distributed parallel machine and assembly scheduling problem (MDPMASP) with eligibility constraints. Within this generalization, we assume that there is a set of non-identical factories or production lines, each one with a set of unrelated parallel machines with different speeds, arranged in series with a single assembly machine. A set of different products is manufactured through an assembly program from a set of components (jobs) according to the requested demand. Each product requires several kinds of jobs of different sizes. Besides this, we also consider the multi-objective problem (MOP) of minimizing mean flow time and the number of tardy products simultaneously. The problem is known to be NP-hard and is important in practice, as these criteria reflect the customer's demand and the manufacturer's perspective. Because this is a realistic and complex problem with a wide range of possible solutions, we propose four simple heuristics and two metaheuristics to solve it. Various parameters of the proposed metaheuristic algorithms are discussed and calibrated by means of the Taguchi technique. All proposed algorithms are tested in Matlab. Our computational experiments indicate that the proposed algorithms can be implemented and used to solve moderately-sized instances, giving efficient solutions that are close to optimal in most cases.
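
    For a flavor of how a simple dispatching rule for unrelated parallel machines with different speeds can look, the sketch below assigns each job to the machine that would complete it earliest. This is a generic illustration only; the abstract does not specify the paper's four heuristics or two metaheuristics:

    ```python
    # Illustrative earliest-completion-time dispatching for unrelated
    # parallel machines with different speeds: each job goes to the machine
    # that would finish it first. A generic rule, not one of the paper's
    # proposed heuristics or metaheuristics.
    def earliest_completion_schedule(job_sizes, machine_speeds):
        finish = [0.0] * len(machine_speeds)       # current finish time per machine
        assignment = []
        for size in job_sizes:
            times = [finish[m] + size / s for m, s in enumerate(machine_speeds)]
            m = min(range(len(times)), key=times.__getitem__)
            finish[m] = times[m]
            assignment.append(m)
        return assignment, finish

    # five jobs on two machines, the second twice as fast
    print(earliest_completion_schedule([4, 3, 7, 2, 5], [1.0, 2.0]))
    ```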

  2. Aging and efficiency in living systems: Complexity, adaptation and self-organization.

    PubMed

    Chatterjee, Atanu; Georgiev, Georgi; Iannacchione, Germano

    2017-04-01

    Living systems are open, out-of-equilibrium thermodynamic entities that maintain order by locally reducing their entropy. Aging is a process by which these systems gradually lose their ability to maintain their out-of-equilibrium state, as measured by their free-energy rate density, and hence their order. Thus, the process of aging reduces the efficiency of those systems, making them fragile and less adaptive to environmental fluctuations, gradually driving them towards the state of thermodynamic equilibrium. In this paper, we discuss the various metrics that can be used to understand the process of aging from a complexity science perspective. Among all the metrics that we propose, action efficiency is observed to be of key interest, as it can be used to quantify order and self-organization in any physical system. Based upon our arguments, we present the dependency of the other metrics on the action efficiency of a system, and also argue how each of the metrics influences all the other system variables. In order to support our claims, we draw parallels between technological progress and biological growth. Such parallels are used to support the universal applicability of the metrics and the methodology presented in this paper. Therefore, the results and arguments presented in this paper throw light on the finer nuances of the science of aging. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Brief note: Applying developmental intergroup perspectives to the social ecologies of bullying: Lessons from developmental social psychology.

    PubMed

    Brenick, Alaina; Halgunseth, Linda C

    2017-08-01

    Over the past decades, the field of bullying research has seen dramatic growth, notably with the integration of the social-ecological approach to understanding bullying. Recently, researchers (Hymel et al., 2015; Hawley & Williford, 2015) have called for further extension of the field by incorporating constructs of group processes into our investigation of the social ecologies of bullying. This brief note details the critical connections between power, social identity, group norms, social and moral reasoning about discrimination and victimization, and experiences of, evaluations of, and responses to bullying. The authors highlight a parallel development in the bridging of developmental social-ecological and social psychological perspectives utilized in the field of social exclusion that provides a roadmap for extending the larger field of bullying research. This article is part of a Special Issue entitled [VSI: Bullying] IG000050. Copyright © 2017 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.

  4. Human factors in anaesthesia: lessons from aviation.

    PubMed

    Toff, N J

    2010-07-01

    Aviation safety has evolved over more than a century and has achieved remarkable results. Applying some of the lessons learned may help make healthcare safer. From the perspective of an anaesthetic background and some thousands of hours of airline flying, I offer a personal perspective, try to give a sense of the place of human factors in airline operations and some of the current problems, and make some suggestions as to what the NHS and anaesthesia might learn from this. Although many of the ingredients for safe operation are frequently already present in our hospitals, and some individual clinical areas and departments achieve high levels of reliability and safety, I will emphasize my firm belief that we cannot expect improvements in human factors training and awareness to be fully effective in the healthcare setting without the parallel development of a simple and strong safety system across organizations. In the process, we may find that the safe hospital turns out somewhat differently to the safe airline.

  5. The source of dual-task limitations: Serial or parallel processing of multiple response selections?

    PubMed Central

    Marois, René

    2014-01-01

    Although it is generally recognized that the concurrent performance of two tasks incurs costs, the sources of these dual-task costs remain controversial. The serial bottleneck model suggests that serial postponement of task performance in dual-task conditions results from a central stage of response selection that can only process one task at a time. Cognitive-control models, by contrast, propose that multiple response selections can proceed in parallel, but that serial processing of task performance is predominantly adopted because its processing efficiency is higher than that of parallel processing. In the present study, we empirically tested this proposition by examining whether parallel processing would occur when it was more efficient and financially rewarded. The results indicated that even when parallel processing was more efficient and was incentivized by financial reward, participants still failed to process tasks in parallel. We conclude that central information processing is limited by a serial bottleneck. PMID:23864266

  6. Parallel Activation in Bilingual Phonological Processing

    ERIC Educational Resources Information Center

    Lee, Su-Yeon

    2011-01-01

    In bilingual language processing, the parallel activation hypothesis suggests that bilinguals activate their two languages simultaneously during language processing. Support for the parallel activation mainly comes from studies of lexical (word-form) processing, with relatively less attention to phonological (sound) processing. According to…

  7. Active imaginative listening-a neuromusical critique.

    PubMed

    Rosenboom, David

    2014-01-01

    The parallel study of music in science and creative practice can be traced back to the ancients; and paralleling the emergence of music neuroscience, creative musical practitioners have employed neurobiological phenomena extensively in music composition and performance. Several examples from the author's work in this area, which began in the 1960s, are cited and briefly described. From this perspective, the author also explores questions pertinent to current agendas evident in music neuroscience and speculates on potentially potent future directions.

  8. Large Spatial Scale Ground Displacement Mapping through the P-SBAS Processing of Sentinel-1 Data on a Cloud Computing Environment

    NASA Astrophysics Data System (ADS)

    Casu, F.; Bonano, M.; de Luca, C.; Lanari, R.; Manunta, M.; Manzo, M.; Zinno, I.

    2017-12-01

    Since its launch in 2014, the Sentinel-1 (S1) constellation has played a key role in SAR data availability and dissemination all over the world. Indeed, the free and open access data policy adopted by the European Copernicus program, together with the global coverage acquisition strategy, makes the Sentinel constellation a game changer in the Earth Observation scenario. As SAR data have become ubiquitous, the technological and scientific challenge is to maximize the exploitation of this huge data flow. In this direction, the use of innovative processing algorithms and distributed computing infrastructures, such as Cloud Computing platforms, can play a crucial role. In this work we present a Cloud Computing solution for the advanced interferometric (DInSAR) processing chain based on the Parallel SBAS (P-SBAS) approach, aimed at processing S1 Interferometric Wide Swath (IWS) data for the generation of large spatial scale deformation time series in an efficient, automatic and systematic way. This DInSAR chain ingests Sentinel-1 SLC images and carries out several processing steps, finally computing deformation time series and mean deformation velocity maps. Different parallel strategies have been designed ad hoc for each processing step of the P-SBAS S1 chain, encompassing both multi-core and multi-node programming techniques, in order to maximize the computational efficiency achieved within a Cloud Computing environment and cut down the relevant processing times. The presented P-SBAS S1 processing chain has been implemented on the Amazon Web Services platform, and a thorough analysis of the attained parallel performance has been carried out to identify and overcome the major bottlenecks to scalability. The presented approach is used to perform national-scale DInSAR analyses over Italy, involving the processing of more than 3000 S1 IWS images acquired from both ascending and descending orbits. This experiment confirms the big advantage of exploiting the large computational and storage resources of Cloud Computing platforms for large scale DInSAR analysis. The presented Cloud Computing P-SBAS processing chain can be a precious tool for developing operational services, available to the EO scientific community, related to hazard monitoring and risk prevention and mitigation.

  9. Synthesizing parallel imaging applications using the CAP (computer-aided parallelization) tool

    NASA Astrophysics Data System (ADS)

    Gennart, Benoit A.; Mazzariol, Marc; Messerli, Vincent; Hersch, Roger D.

    1997-12-01

    Imaging applications such as filtering, image transforms and compression/decompression require vast amounts of computing power when applied to large data sets. These applications would potentially benefit from the use of parallel processing. However, dedicated parallel computers are expensive and their processing power per node lags behind that of the most recent commodity components. Furthermore, developing parallel applications remains a difficult task: writing and debugging the application is difficult (deadlocks), programs may not be portable from one parallel architecture to another, and performance often falls short of expectations. In order to facilitate the development of parallel applications, we propose the CAP computer-aided parallelization tool, which enables application programmers to specify at a high level of abstraction the flow of data between pipelined-parallel operations. In addition, the CAP tool supports the programmer in developing parallel imaging and storage operations. CAP enables efficiently combining parallel storage access routines and sequential image processing operations. This paper shows how processing and I/O intensive imaging applications must be implemented to take advantage of parallelism and pipelining between data access and processing. This paper's contribution is (1) to show how such implementations can be compactly specified in CAP, and (2) to demonstrate that CAP-specified applications achieve the performance of custom parallel code. The paper analyzes theoretically the performance of CAP-specified applications and demonstrates the accuracy of the theoretical analysis through experimental measurements.
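
    The pipelining between data access and processing that CAP expresses declaratively can be imitated by hand with a bounded queue between an I/O stage and a compute stage, so that reading and processing overlap. The sketch below is a hand-written analogue of that pattern, not CAP-generated code:

    ```python
    # Hand-written analogue of pipelined parallelism between data access
    # and processing: a reader stage streams tiles through a bounded queue
    # while a compute stage consumes them, so I/O and computation overlap.
    # Not CAP-generated code; tile contents are stand-ins.
    import queue
    import threading

    tiles = queue.Queue(maxsize=4)       # bounded buffer between the stages

    def reader(n_tiles):
        for i in range(n_tiles):
            tiles.put([i] * 1024)        # stand-in for reading a tile from storage
        tiles.put(None)                  # end-of-stream marker

    def processor(results):
        while (tile := tiles.get()) is not None:
            results.append(sum(tile))    # stand-in for a filtering operation

    results = []
    t_in = threading.Thread(target=reader, args=(16,))
    t_proc = threading.Thread(target=processor, args=(results,))
    t_in.start(); t_proc.start(); t_in.join(); t_proc.join()
    print(len(results), "tiles processed")
    ```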

  10. The Goddard Space Flight Center Program to develop parallel image processing systems

    NASA Technical Reports Server (NTRS)

    Schaefer, D. H.

    1972-01-01

    Parallel image processing, defined as image processing in which all points of an image are operated upon simultaneously, is discussed. Coherent optical, noncoherent optical, and electronic methods are considered as parallel image processing techniques.

  11. Black hole demography at the dawn of gravitational-wave astronomy: state-of-the art and future perspectives

    NASA Astrophysics Data System (ADS)

    Mapelli, Michela

    2018-02-01

    The first four LIGO detections have confirmed the existence of massive black holes (BHs), with mass 30-40 M⊙. Such BHs might originate from massive metal-poor stars (Z < 0.3 Z⊙) or from gravitational instabilities in the early Universe. The formation channels of merging BHs are still poorly constrained. The measurement of the mass, spin and redshift distributions of merging BHs will give us fundamental clues to distinguish between different models. In parallel, a better understanding of several astrophysical processes (e.g. common envelope, core-collapse SNe, and dynamical evolution of BHs) is decisive for shedding light on the formation channels of merging BHs.

  12. What makes a good home-based nocturnal seizure detector? A value sensitive design.

    PubMed

    van Andel, Judith; Leijten, Frans; van Delden, Hans; van Thiel, Ghislaine

    2015-01-01

    A device for the in-home detection of nocturnal seizures is currently being developed in the Netherlands, to improve care for patients with severe epilepsy. It is recognized that the design of medical technology is not value neutral: perspectives of users and developers are influential in design, and design choices influence these perspectives. However, during development processes, these influences are generally ignored and value-related choices remain implicit and poorly argued for. In the development process of the seizure detector we aimed to take values of all stakeholders into consideration. Therefore, we performed a parallel ethics study, using "value sensitive design." Analysis of stakeholder communication (in meetings and e-mail messages) identified five important values, namely, health, trust, autonomy, accessibility, and reliability. Stakeholders were then asked to give feedback on the choice of these values and how they should be interpreted. In a next step, the values were related to design choices relevant for the device, and then the consequences (risks and benefits) of these choices were investigated. Currently the process of design and testing of the device is still ongoing. The device will be validated in a trial in which the identified consequences of design choices are measured as secondary endpoints. Value sensitive design methodology is feasible for the development of new medical technology and can help designers substantiate the choices in their design.

  13. Thread concept for automatic task parallelization in image analysis

    NASA Astrophysics Data System (ADS)

    Lueckenhaus, Maximilian; Eckstein, Wolfgang

    1998-09-01

    Parallel processing of image analysis tasks is an essential method to speed up image processing and helps to exploit the full capacity of distributed systems. However, writing parallel code is a difficult and time-consuming process and often leads to an architecture-dependent program that has to be re-implemented when the hardware changes. It is therefore highly desirable to do the parallelization automatically. For this we have developed a special kind of thread concept for image analysis tasks. Threads derived from one subtask may share objects and run in the same context, while following different threads of execution and working on different data in parallel. In this paper we describe the basics of our thread concept and show how it can be used as the basis of an automatic task parallelization to speed up image processing. We further illustrate the design and implementation of an agent-based system that uses image analysis threads for generating and processing parallel programs while taking into account the available hardware. The tests made with our system prototype show that the thread concept, combined with the agent paradigm, is suitable for speeding up image processing by an automatic parallelization of image analysis tasks.
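
    A minimal sketch of the thread concept as described: several threads derived from one image-analysis subtask share a context object and run in the same program context, but each works on a different slice of the data in parallel. All names and the thresholding operation are illustrative, not the paper's API:

    ```python
    # Sketch of threads derived from one subtask sharing a context object
    # while working on different data slices in parallel. Names and the
    # thresholding operation are illustrative, not the paper's API.
    import threading

    class SharedContext:
        def __init__(self):
            self.lock = threading.Lock()
            self.bright_pixels = 0       # shared result object

    def subtask(context, rows):
        local = sum(1 for row in rows for x in row if x > 128)  # per-slice work
        with context.lock:               # merge into the shared object
            context.bright_pixels += local

    image = [[(x * y) % 256 for x in range(64)] for y in range(64)]
    ctx, n = SharedContext(), 4
    chunk = len(image) // n              # assumes row count divisible by n
    threads = [threading.Thread(target=subtask,
                                args=(ctx, image[i * chunk:(i + 1) * chunk]))
               for i in range(n)]
    for t in threads: t.start()
    for t in threads: t.join()
    print("bright pixels:", ctx.bright_pixels)
    ```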

  14. Studies in optical parallel processing. [All optical and electro-optic approaches

    NASA Technical Reports Server (NTRS)

    Lee, S. H.

    1978-01-01

    Threshold and A/D devices for converting a gray-scale image into a binary one were investigated for all-optical and opto-electronic approaches to parallel processing. Integrated optical logic circuits (IOC) and optical parallel logic devices (OPAL) were studied as approaches to processing optical binary signals. In the IOC logic scheme, a single row of an optical image is coupled into the IOC substrate at a time through an array of optical fibers. Parallel processing is carried out on each image element of these rows in the IOC substrate, and the resulting output exits via a second array of optical fibers. The OPAL system for parallel processing, which uses a Fabry-Perot interferometer for image thresholding and analog-to-digital conversion, achieves a higher degree of parallel processing than is possible with IOC.

  15. Targeted proteomics coming of age - SRM, PRM and DIA performance evaluated from a core facility perspective.

    PubMed

    Kockmann, Tobias; Trachsel, Christian; Panse, Christian; Wahlander, Asa; Selevsek, Nathalie; Grossmann, Jonas; Wolski, Witold E; Schlapbach, Ralph

    2016-08-01

    Quantitative mass spectrometry is a rapidly evolving methodology applied in a large number of omics-type research projects. During the past years, new designs of mass spectrometers have been developed and launched as commercial systems, while in parallel new data acquisition schemes and data analysis paradigms have been introduced. Core facilities provide access to such technologies, but also actively support researchers in finding and applying the best-suited analytical approach. In order to implement a solid fundament for this decision-making process, core facilities need to constantly compare and benchmark the various approaches. In this article we compare the quantitative accuracy and precision of the current state-of-the-art targeted proteomics approaches, single reaction monitoring (SRM), parallel reaction monitoring (PRM) and data-independent acquisition (DIA), across multiple liquid chromatography mass spectrometry (LC-MS) platforms, using a readily available commercial standard sample. All workflows are able to reproducibly generate accurate quantitative data. However, SRM and PRM workflows show higher accuracy and precision compared to DIA approaches, especially when analyzing analytes at low concentrations. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Parallel evolution of Nitric Oxide signaling: Diversity of synthesis & memory pathways

    PubMed Central

    Moroz, Leonid L.; Kohn, Andrea B.

    2014-01-01

    The origin of NO signaling can be traced back to the origin of life, with large-scale parallel evolution of NO synthases (NOSs). Inducible-like NOSs may be the most basal prototype of all NOSs, and neuronal-like NOSs might have evolved several times from this prototype. Other enzymatic and non-enzymatic pathways for NO synthesis have been discovered using the reduction of nitrites, an alternative source of NO. Diverse synthetic mechanisms can co-exist within the same cell, providing a complex NO-oxygen microenvironment tightly coupled with cellular energetics. The dissection of multiple sources of NO formation is crucial in the analysis of complex biological processes such as neuronal integration and learning mechanisms, where NO can act as a volume transmitter within memory-forming circuits. In particular, the molecular analysis of learning mechanisms (most notably in insects and gastropod molluscs) opens conceptually different perspectives to understand the logic of recruiting evolutionarily conserved pathways for novel functions. Giant, uniquely identified cells from Aplysia and related species present unique opportunities for integrative analysis of NO signaling at the single-cell level. PMID:21622160

  17. Parallel workflow tools to facilitate human brain MRI post-processing

    PubMed Central

    Cui, Zaixu; Zhao, Chenxi; Gong, Gaolang

    2015-01-01

    Multi-modal magnetic resonance imaging (MRI) techniques are widely applied in human brain studies. To obtain specific brain measures of interest from MRI datasets, a number of complex image post-processing steps are typically required. Parallel workflow tools have recently been developed, concatenating individual processing steps and enabling fully automated processing of raw MRI data to obtain the final results. These workflow tools are also designed to make optimal use of available computational resources and to support the parallel processing of different subjects or of independent processing steps for a single subject. Automated, parallel MRI post-processing tools can greatly facilitate relevant brain investigations and are being increasingly applied. In this review, we briefly summarize these parallel workflow tools and discuss relevant issues. PMID:26029043

  18. Cooperative storage of shared files in a parallel computing system with dynamic block size

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2015-11-10

    Improved techniques are provided for parallel writing of data to a shared object in a parallel computing system. A method is provided for storing data generated by a plurality of parallel processes to a shared object in a parallel computing system. The method is performed by at least one of the processes and comprises: dynamically determining a block size for storing the data; exchanging a determined amount of the data with at least one additional process to achieve a block of the data having the dynamically determined block size; and writing the block of the data having the dynamically determined block size to a file system. The determined block size comprises, e.g., a total amount of the data to be stored divided by the number of parallel processes. The file system comprises, for example, a log structured virtual parallel file system, such as a Parallel Log-Structured File System (PLFS).
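
    As a rough illustration of the scheme described above (a sketch using mpi4py; the variable names and byte-level exchange are our assumptions, not the patented implementation), each process computes the dynamic block size as the total data volume divided by the process count, then exchanges data so that it holds one aligned block to write:

      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rank, nprocs = comm.Get_rank(), comm.Get_size()

      # Each process has generated an uneven amount of data (illustrative).
      local = np.full(1000 + 37 * rank, rank, dtype=np.uint8)

      # Dynamic block size: total amount of data divided by process count.
      total = comm.allreduce(local.size, op=MPI.SUM)
      block = total // nprocs

      # Global start offset of this process's data (prefix sum of sizes).
      start = comm.exscan(local.size, op=MPI.SUM) or 0

      # The byte at global offset g belongs to block g // block, i.e. to
      # the process that will write that block; count bytes per target.
      dest = np.minimum((start + np.arange(local.size)) // block, nprocs - 1)
      sendcounts = np.bincount(dest, minlength=nprocs).astype('i')
      recvcounts = np.empty(nprocs, dtype='i')
      comm.Alltoall(sendcounts, recvcounts)

      sdispl = np.insert(np.cumsum(sendcounts)[:-1], 0, 0).astype('i')
      rdispl = np.insert(np.cumsum(recvcounts)[:-1], 0, 0).astype('i')
      recvbuf = np.empty(recvcounts.sum(), dtype=np.uint8)
      comm.Alltoallv([local, sendcounts, sdispl, MPI.UNSIGNED_CHAR],
                     [recvbuf, recvcounts, rdispl, MPI.UNSIGNED_CHAR])
      # recvbuf is now one contiguous, block-aligned chunk that can be
      # written to the parallel file system in a single operation.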

  19. Active imaginative listening—a neuromusical critique

    PubMed Central

    Rosenboom, David

    2014-01-01

    The parallel study of music in science and creative practice can be traced back to the ancients; and paralleling the emergence of music neuroscience, creative musical practitioners have employed neurobiological phenomena extensively in music composition and performance. Several examples from the author's work in this area, which began in the 1960s, are cited and briefly described. From this perspective, the author also explores questions pertinent to current agendas evident in music neuroscience and speculates on potentially potent future directions. PMID:25202231

  20. Efficient multitasking: parallel versus serial processing of multiple tasks

    PubMed Central

    Fischer, Rico; Plessow, Franziska

    2015-01-01

    In the context of performance optimizations in multitasking, a central debate has unfolded in multitasking research around whether cognitive processes related to different tasks proceed only sequentially (one at a time), or can operate in parallel (simultaneously). This review features a discussion of theoretical considerations and empirical evidence regarding parallel versus serial task processing in multitasking. In addition, we highlight how methodological differences and theoretical conceptions determine the extent to which parallel processing in multitasking can be detected, to guide their employment in future research. Parallel and serial processing of multiple tasks are not mutually exclusive. Therefore, questions focusing exclusively on either task-processing mode are too simplified. We review empirical evidence and demonstrate that shifting between more parallel and more serial task processing critically depends on the conditions under which multiple tasks are performed. We conclude that efficient multitasking is reflected by the ability of individuals to adjust multitasking performance to environmental demands by flexibly shifting between different processing strategies of multiple task-component scheduling. PMID:26441742

  1. Efficient multitasking: parallel versus serial processing of multiple tasks.

    PubMed

    Fischer, Rico; Plessow, Franziska

    2015-01-01

    In the context of performance optimizations in multitasking, a central debate has unfolded in multitasking research around whether cognitive processes related to different tasks proceed only sequentially (one at a time), or can operate in parallel (simultaneously). This review features a discussion of theoretical considerations and empirical evidence regarding parallel versus serial task processing in multitasking. In addition, we highlight how methodological differences and theoretical conceptions determine the extent to which parallel processing in multitasking can be detected, to guide their employment in future research. Parallel and serial processing of multiple tasks are not mutually exclusive. Therefore, questions focusing exclusively on either task-processing mode are too simplified. We review empirical evidence and demonstrate that shifting between more parallel and more serial task processing critically depends on the conditions under which multiple tasks are performed. We conclude that efficient multitasking is reflected by the ability of individuals to adjust multitasking performance to environmental demands by flexibly shifting between different processing strategies of multiple task-component scheduling.

  2. Contribution to terminology internationalization by word alignment in parallel corpora.

    PubMed

    Deléger, Louise; Merkel, Magnus; Zweigenbaum, Pierre

    2006-01-01

    Creating a complete translation of a large vocabulary is a time-consuming task which requires skilled and knowledgeable medical translators. Our goal is to examine to what extent such a task can be alleviated by a specific natural language processing technique, word alignment in parallel corpora. We experiment with translation from English to French. We build a large corpus of parallel English-French documents and automatically align it at the document, sentence, and word levels using state-of-the-art alignment methods and tools. We then project English terms from existing controlled vocabularies onto the aligned word pairs and examine the number and quality of the putative French translations obtained thereby. We considered three American vocabularies present in the UMLS with three different translation statuses: MeSH, SNOMED CT, and the MedlinePlus Health Topics. We obtained several thousand new translations of our input terms, this number being closely linked to the number of terms in the input vocabularies. Our study shows that alignment methods can extract a number of new term translations from large bodies of text with a moderate human reviewing effort, and can thus help a human translator obtain better translation coverage of an input vocabulary. Short-term perspectives include application to a corpus 20 times larger than that used here, together with more focused methods for term extraction.

  3. Contribution to Terminology Internationalization by Word Alignment in Parallel Corpora

    PubMed Central

    Deléger, Louise; Merkel, Magnus; Zweigenbaum, Pierre

    2006-01-01

    Background and objectives Creating a complete translation of a large vocabulary is a time-consuming task which requires skilled and knowledgeable medical translators. Our goal is to examine to what extent such a task can be alleviated by a specific natural language processing technique, word alignment in parallel corpora. We experiment with translation from English to French. Methods Build a large corpus of parallel English-French documents and automatically align it at the document, sentence, and word levels using state-of-the-art alignment methods and tools. Then project English terms from existing controlled vocabularies onto the aligned word pairs and examine the number and quality of the putative French translations obtained thereby. We considered three American vocabularies present in the UMLS with three different translation statuses: MeSH, SNOMED CT, and the MedlinePlus Health Topics. Results We obtained several thousand new translations of our input terms, this number being closely linked to the number of terms in the input vocabularies. Conclusion Our study shows that alignment methods can extract a number of new term translations from large bodies of text with a moderate human reviewing effort, and can thus help a human translator obtain better translation coverage of an input vocabulary. Short-term perspectives include application to a corpus 20 times larger than that used here, together with more focused methods for term extraction. PMID:17238328
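
    The projection step that both records describe can be pictured with a toy Python sketch (entirely illustrative data structures; the real pipeline uses dedicated alignment tools): given word-aligned sentence pairs, an English term is located in the source tokens and its alignment links are followed to collect candidate French translations.

      from collections import Counter

      def project_term(term, aligned_pairs):
          # aligned_pairs: (en_tokens, fr_tokens, links), where links is
          # a set of (en_index, fr_index) word-alignment pairs.
          term_tokens = term.lower().split()
          candidates = Counter()
          for en, fr, links in aligned_pairs:
              en_lower = [t.lower() for t in en]
              for s in range(len(en_lower) - len(term_tokens) + 1):
                  if en_lower[s:s + len(term_tokens)] == term_tokens:
                      span = range(s, s + len(term_tokens))
                      fr_idx = sorted({j for i, j in links if i in span})
                      if fr_idx:
                          candidates[" ".join(fr[j] for j in fr_idx)] += 1
          return candidates.most_common()

      pairs = [(["heart", "attack"], ["crise", "cardiaque"],
                {(0, 0), (0, 1), (1, 1)})]
      print(project_term("heart attack", pairs))  # [('crise cardiaque', 1)]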

  4. Capturing domain knowledge from multiple sources: the rare bone disorders use case.

    PubMed

    Groza, Tudor; Tudorache, Tania; Robinson, Peter N; Zankl, Andreas

    2015-01-01

    Lately, ontologies have become a fundamental building block in the process of formalising and storing complex biomedical information. The community-driven ontology curation process, however, ignores the possibility of multiple communities building, in parallel, conceptualisations of the same domain, and thus providing slightly different perspectives on the same knowledge. The individual nature of this effort leads to the need for a mechanism that enables us to create an overarching and comprehensive overview of the different perspectives on the domain knowledge. We introduce an approach that enables the loose integration of knowledge emerging from diverse sources under a single coherent interoperable resource. To accurately track the original knowledge statements, we record provenance at very granular levels. We exemplify the approach in the rare bone disorders domain by proposing the Rare Bone Disorders Ontology (RBDO). Using RBDO, researchers are able to answer queries such as: "What phenotypes describe a particular disorder and are common to all sources?" or to understand similarities between disorders based on divergent groupings (classifications) provided by the underlying sources. RBDO is available at http://purl.org/skeletome/rbdo. In order to support lightweight query and integration, the knowledge captured by RBDO has also been made available as a SPARQL endpoint at http://bio-lark.org/se_skeldys.html.

  5. Multitasking as a choice: a perspective.

    PubMed

    Broeker, Laura; Liepelt, Roman; Poljac, Edita; Künzell, Stefan; Ewolds, Harald; de Oliveira, Rita F; Raab, Markus

    2018-01-01

    Performance decrements in multitasking have been explained by limitations in cognitive capacity, either modelled as static structural bottlenecks or as the scarcity of overall cognitive resources that prevent humans, or at least restrict them, from processing two tasks at the same time. However, recent research has shown that individual differences, flexible resource allocation, and prioritization of tasks cannot be fully explained by these accounts. We argue that understanding human multitasking as a choice and examining multitasking performance from the perspective of judgment and decision-making (JDM), may complement current dual-task theories. We outline two prominent theories from the area of JDM, namely Simple Heuristics and the Decision Field Theory, and adapt these theories to multitasking research. Here, we explain how computational modelling techniques and decision-making parameters used in JDM may provide a benefit to understanding multitasking costs and argue that these techniques and parameters have the potential to predict multitasking behavior in general, and also individual differences in behavior. Finally, we present the one-reason choice metaphor to explain a flexible use of limited capacity as well as changes in serial and parallel task processing. Based on this newly combined approach, we outline a concrete interdisciplinary future research program that we think will help to further develop multitasking research.

  6. Modeling borehole microseismic and strain signals measured by a distributed fiber optic sensor

    NASA Astrophysics Data System (ADS)

    Mellors, R. J.; Sherman, C. S.; Ryerson, F. J.; Morris, J.; Allen, G. S.; Messerly, M. J.; Carr, T.; Kavousi, P.

    2017-12-01

    The advent of distributed fiber optic sensors installed in boreholes provides a new and data-rich perspective on the subsurface environment. This includes the long-term capability for vertical seismic profiles, monitoring of active borehole processes such as well stimulation, and measurement of microseismic signals. The distributed fiber sensor, which measures strain (or strain-rate), is an active sensor with its highest sensitivity parallel to the fiber, and is subject to varying types of noise, both external and internal. We take a systems approach and include the response of the electronics, fiber/cable, and subsurface to improve interpretation of the signals. This aids in understanding noise sources, assessing error bounds on amplitudes, and developing appropriate algorithms for improving the image. Ultimately, a robust understanding will allow identification of areas for future improvement and possible optimization in fiber and cable design. The subsurface signals are simulated in two ways: 1) with a massively parallel multi-physics code that is capable of modeling hydraulic stimulation of a heterogeneous reservoir with a pre-existing discrete fracture network, and 2) with a parallelized 3D finite difference code for high-frequency seismic signals. Geometry and parameters for the simulations are derived from fiber deployments, including the Marcellus Shale Energy and Environment Laboratory (MSEEL) project in West Virginia. The combination mimics both the low-frequency strain signals generated during the fracture process and high-frequency signals from microseismic events and perforation shots. Results are compared with available fiber data and demonstrate that quantitative interpretation of the fiber data provides valuable constraints on the fracture geometry and microseismic activity. These constraints appear difficult, if not impossible, to obtain otherwise.

  7. Cerebellar learning mechanisms

    PubMed Central

    Freeman, John H.

    2014-01-01

    The mechanisms underlying cerebellar learning are reviewed with an emphasis on old arguments and new perspectives on eyeblink conditioning. Eyeblink conditioning has been used for decades as a model system for elucidating cerebellar learning mechanisms. The standard model of the mechanisms underlying eyeblink conditioning is that there are two synaptic plasticity processes within the cerebellum that are necessary for acquisition of the conditioned response: 1) long-term depression (LTD) at parallel fiber-Purkinje cell synapses and 2) long-term potentiation (LTP) at mossy fiber-interpositus nucleus synapses. Additional Purkinje cell plasticity mechanisms may also contribute to eyeblink conditioning, including LTP, excitability, and entrainment of deep nucleus activity. Recent analyses of the sensory input pathways necessary for eyeblink conditioning indicate that the cerebellum regulates its inputs to facilitate learning and maintain plasticity. Cerebellar learning during eyeblink conditioning is therefore a dynamic interactive process that maximizes responding to significant stimuli and suppresses responding to irrelevant or redundant stimuli. PMID:25289586

  8. The Design of a High Performance Earth Imagery and Raster Data Management and Processing Platform

    NASA Astrophysics Data System (ADS)

    Xie, Qingyun

    2016-06-01

    This paper summarizes the general requirements and specific characteristics of both a geospatial raster database management system and a raster data processing platform, from a domain-specific perspective as well as from a computing point of view. It also discusses the need for tight integration between the database system and the processing system. These requirements resulted in Oracle Spatial GeoRaster, a global-scale and high-performance earth imagery and raster data management and processing platform. The rationale, design, implementation, and benefits of Oracle Spatial GeoRaster are described. As a database management system, GeoRaster defines an integrated raster data model and supports image compression, data manipulation, general and spatial indices, content- and context-based queries and updates, versioning, concurrency, security, replication, standby, backup and recovery, multitenancy, and ETL. It provides high scalability using computer and storage clustering. As a raster data processing platform, GeoRaster provides basic operations, image processing, raster analytics, and data distribution featuring high-performance computing (HPC). Specifically, HPC features include locality computing, concurrent processing, parallel processing, and in-memory computing. In addition, the APIs and the plug-in architecture are discussed.

  9. Parallelized multi–graphics processing unit framework for high-speed Gabor-domain optical coherence microscopy

    PubMed Central

    Tankam, Patrice; Santhanam, Anand P.; Lee, Kye-Sung; Won, Jungeun; Canavesi, Cristina; Rolland, Jannick P.

    2014-01-01

    Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000×1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1×1×0.6 mm³ skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real-time to the examiner using parallelized GPU processing. PMID:24695868

  10. Parallelized multi-graphics processing unit framework for high-speed Gabor-domain optical coherence microscopy.

    PubMed

    Tankam, Patrice; Santhanam, Anand P; Lee, Kye-Sung; Won, Jungeun; Canavesi, Cristina; Rolland, Jannick P

    2014-07-01

    Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000×1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1×1×0.6 mm³ skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real-time to the examiner using parallelized GPU processing.
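
    A toy CuPy sketch of the per-GPU partitioning described above (an illustration under stated assumptions: the FFT stands in for the actual GD-OCM processing, and chunks are dispatched sequentially rather than from one host thread per GPU as a production pipeline would):

      import numpy as np
      import cupy as cp

      def process_chunk(device_id, ascans):
          # Process one batch of A-scans on one GPU (placeholder FFT).
          with cp.cuda.Device(device_id):
              batch = cp.asarray(ascans)           # host -> device
              spectra = cp.fft.fft(batch, axis=1)  # per-A-scan transform
              return cp.asnumpy(cp.abs(spectra))   # device -> host

      n_gpus = cp.cuda.runtime.getDeviceCount()
      ascans = np.random.rand(4096, 2048).astype(np.float32)

      # Assign an equal share of A-scans to each GPU.
      chunks = np.array_split(ascans, n_gpus)
      volume = np.concatenate([process_chunk(i, c)
                               for i, c in enumerate(chunks)])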

  11. Recent Advances in Techniques for Hyperspectral Image Processing

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; Benediktsson, Jon Atli; Boardman, Joseph W.; Brazile, Jason; Bruzzone, Lorenzo; Camps-Valls, Gustavo; Chanussot, Jocelyn; Fauvel, Mathieu; Gamba, Paolo; Gualtieri, Anthony; hide

    2009-01-01

    Imaging spectroscopy, also known as hyperspectral imaging, has been transformed in less than 30 years from being a sparse research tool into a commodity product available to a broad user community. Currently, there is a need for standardized data processing techniques able to take into account the special properties of hyperspectral data. In this paper, we provide a seminal view on recent advances in techniques for hyperspectral image processing. Our main focus is on the design of techniques able to deal with the high-dimensional nature of the data, and to integrate the spatial and spectral information. Performance of the discussed techniques is evaluated in different analysis scenarios. To satisfy time-critical constraints in specific applications, we also develop efficient parallel implementations of some of the discussed algorithms. Combined, these parts provide an excellent snapshot of the state-of-the-art in those areas, and offer a thoughtful perspective on future potentials and emerging challenges in the design of robust hyperspectral imaging algorithms.

  12. Model Calibration in Watershed Hydrology

    NASA Technical Reports Server (NTRS)

    Yilmaz, Koray K.; Vrugt, Jasper A.; Gupta, Hoshin V.; Sorooshian, Soroosh

    2009-01-01

    Hydrologic models use relatively simple mathematical equations to conceptualize and aggregate the complex, spatially distributed, and highly interrelated water, energy, and vegetation processes in a watershed. A consequence of process aggregation is that the model parameters often do not represent directly measurable entities and must, therefore, be estimated using measurements of the system inputs and outputs. During this process, known as model calibration, the parameters are adjusted so that the behavior of the model approximates, as closely and consistently as possible, the observed response of the hydrologic system over some historical period of time. This Chapter reviews the current state-of-the-art of model calibration in watershed hydrology with special emphasis on our own contributions in the last few decades. We discuss the historical background that has led to current perspectives, and review different approaches for manual and automatic single- and multi-objective parameter estimation. In particular, we highlight the recent developments in the calibration of distributed hydrologic models using parameter dimensionality reduction sampling, parameter regularization and parallel computing.

  13. A high-speed linear algebra library with automatic parallelism

    NASA Technical Reports Server (NTRS)

    Boucher, Michael L.

    1994-01-01

    Parallel or distributed processing is key to getting the highest performance from workstations. However, designing and implementing efficient parallel algorithms is difficult and error-prone. It is even more difficult to write code that is both portable to and efficient on many different computers. Finally, it is harder still to satisfy the above requirements and include the reliability and ease of use required of commercial software intended for use in a production environment. As a result, the application of parallel processing technology to commercial software has been extremely limited, even though there are numerous computationally demanding programs that would significantly benefit from parallel processing. This paper describes DSSLIB, a library of subroutines that perform many of the time-consuming computations in engineering and scientific software. DSSLIB combines the high efficiency and speed of parallel computation with a serial programming model that eliminates many undesirable side-effects of typical parallel code. The result is a simple way to incorporate the power of parallel processing into commercial software without compromising maintainability, reliability, or ease of use. This gives significant advantages over less powerful non-parallel entries in the market.

  14. Neural Parallel Engine: A toolbox for massively parallel neural signal processing.

    PubMed

    Tam, Wing-Kin; Yang, Zhi

    2018-05-01

    Large-scale neural recordings provide detailed information on neuronal activities and can help elicit the underlying neural mechanisms of the brain. However, the computational burden is also formidable when we try to process the huge data stream generated by such recordings. In this study, we report the development of Neural Parallel Engine (NPE), a toolbox for massively parallel neural signal processing on graphical processing units (GPUs). It offers a selection of the most commonly used routines in neural signal processing such as spike detection and spike sorting, including advanced algorithms such as exponential-component-power-component (EC-PC) spike detection and binary pursuit spike sorting. We also propose a new method for detecting peaks in parallel through a parallel compact operation. Our toolbox is able to offer a 5× to 110× speedup compared with its CPU counterparts depending on the algorithms. A user-friendly MATLAB interface is provided to allow easy integration of the toolbox into existing workflows. Previous efforts on GPU neural signal processing only focus on a few rudimentary algorithms, are not well-optimized and often do not provide a user-friendly programming interface to fit into existing workflows. There is a strong need for a comprehensive toolbox for massively parallel neural signal processing. A new toolbox for massively parallel neural signal processing has been created. It can offer significant speedup in processing signals from large-scale recordings up to thousands of channels. Copyright © 2018 Elsevier B.V. All rights reserved.
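
    The parallel compact idea for peak detection can be sketched in a few lines of CuPy (our illustration, not the NPE kernel itself): a predicate is evaluated for every sample at once, and the surviving indices are compacted into a dense result.

      import cupy as cp

      def detect_peaks(signal, threshold):
          s = cp.asarray(signal)
          # Predicate, evaluated in parallel over all interior samples:
          # above threshold and a local maximum.
          is_peak = ((s[1:-1] > threshold)
                     & (s[1:-1] >= s[:-2])
                     & (s[1:-1] > s[2:]))
          # Parallel compact: keep only indices where the predicate holds.
          return cp.flatnonzero(is_peak) + 1

      peaks = detect_peaks(cp.sin(cp.linspace(0, 60, 10000)), 0.95)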

  15. Anatomically constrained neural network models for the categorization of facial expression

    NASA Astrophysics Data System (ADS)

    McMenamin, Brenton W.; Assadi, Amir H.

    2004-12-01

    Facial expression recognition in humans is performed by the amygdala, which uses parallel processing streams to identify expressions quickly and accurately. Additionally, it is possible that a feedback mechanism plays a role in this process as well. A model with a similar parallel structure and feedback mechanism could improve current facial recognition algorithms, for which varied expressions are a source of error. An anatomically constrained artificial neural-network model was created that uses this parallel processing architecture and feedback to categorize facial expressions. The presence of a feedback mechanism was not found to significantly improve performance for models with parallel architecture. However, the use of parallel processing streams significantly improved accuracy over a similar network that did not have parallel architecture. Further investigation is necessary to determine the benefits of using parallel streams and feedback mechanisms in more advanced object recognition tasks.

  16. Anatomically constrained neural network models for the categorization of facial expression

    NASA Astrophysics Data System (ADS)

    McMenamin, Brenton W.; Assadi, Amir H.

    2005-01-01

    Facial expression recognition in humans is performed by the amygdala, which uses parallel processing streams to identify expressions quickly and accurately. Additionally, it is possible that a feedback mechanism plays a role in this process as well. A model with a similar parallel structure and feedback mechanism could improve current facial recognition algorithms, for which varied expressions are a source of error. An anatomically constrained artificial neural-network model was created that uses this parallel processing architecture and feedback to categorize facial expressions. The presence of a feedback mechanism was not found to significantly improve performance for models with parallel architecture. However, the use of parallel processing streams significantly improved accuracy over a similar network that did not have parallel architecture. Further investigation is necessary to determine the benefits of using parallel streams and feedback mechanisms in more advanced object recognition tasks.
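
    The two records above describe parallel processing streams feeding a common categorization stage; a minimal PyTorch sketch of that topology (layer sizes and the six-way output are illustrative assumptions, not the authors' model):

      import torch
      import torch.nn as nn

      class ParallelStreams(nn.Module):
          def __init__(self, n_inputs=256, n_classes=6):
              super().__init__()
              # Two independent streams process the same input in parallel.
              self.stream_a = nn.Sequential(nn.Linear(n_inputs, 64), nn.ReLU())
              self.stream_b = nn.Sequential(nn.Linear(n_inputs, 64), nn.ReLU())
              self.head = nn.Linear(128, n_classes)  # merged categorization

          def forward(self, x):
              merged = torch.cat([self.stream_a(x), self.stream_b(x)], dim=-1)
              return self.head(merged)

      logits = ParallelStreams()(torch.randn(8, 256))  # batch of 8 inputs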

  17. Parallel processing data network of master and slave transputers controlled by a serial control network

    DOEpatents

    Crosetto, D.B.

    1996-12-31

    The present device provides for a dynamically configurable communication network having a multi-processor parallel processing system having a serial communication network and a high speed parallel communication network. The serial communication network is used to disseminate commands from a master processor to a plurality of slave processors to effect communication protocol, to control transmission of high density data among nodes and to monitor each slave processor`s status. The high speed parallel processing network is used to effect the transmission of high density data among nodes in the parallel processing system. Each node comprises a transputer, a digital signal processor, a parallel transfer controller, and two three-port memory devices. A communication switch within each node connects it to a fast parallel hardware channel through which all high density data arrives or leaves the node. 6 figs.

  18. Parallel processing data network of master and slave transputers controlled by a serial control network

    DOEpatents

    Crosetto, Dario B.

    1996-01-01

    The present device provides for a dynamically configurable communication network having a multi-processor parallel processing system having a serial communication network and a high speed parallel communication network. The serial communication network is used to disseminate commands from a master processor (100) to a plurality of slave processors (200) to effect communication protocol, to control transmission of high density data among nodes and to monitor each slave processor's status. The high speed parallel processing network is used to effect the transmission of high density data among nodes in the parallel processing system. Each node comprises a transputer (104), a digital signal processor (114), a parallel transfer controller (106), and two three-port memory devices. A communication switch (108) within each node (100) connects it to a fast parallel hardware channel (70) through which all high density data arrives or leaves the node.

  19. Super and parallel computers and their impact on civil engineering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamat, M.P.

    1986-01-01

    This book presents the papers given at a conference on the use of supercomputers in civil engineering. Topics considered at the conference included solving nonlinear equations on a hypercube, a custom architectured parallel processing system, distributed data processing, algorithms, computer architecture, parallel processing, vector processing, computerized simulation, and cost benefit analysis.

  20. Parallel processing architecture for computing inverse differential kinematic equations of the PUMA arm

    NASA Technical Reports Server (NTRS)

    Hsia, T. C.; Lu, G. Z.; Han, W. H.

    1987-01-01

    In advanced robot control problems, on-line computation of the inverse Jacobian solution is frequently required. A parallel processing architecture is an effective way to reduce computation time. A parallel processing architecture is developed for the inverse Jacobian (inverse differential kinematic equation) of the PUMA arm. The proposed pipeline/parallel algorithm can be implemented on an IC chip using systolic linear arrays. This implementation requires 27 processing cells and 25 time units. Computation time is thus significantly reduced.
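
    For background, the computation being parallelized is the inverse differential kinematic relation J(q) q̇ = ẋ, solved for the joint rates q̇. A scalar NumPy illustration (with a random stand-in for the PUMA Jacobian):

      import numpy as np

      J = np.random.rand(6, 6)                 # stand-in 6x6 Jacobian
      x_dot = np.array([0.1, 0.0, 0.0, 0.0, 0.0, 0.05])  # end-effector twist
      q_dot = np.linalg.solve(J, x_dot)        # joint velocities from J q' = x'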

  1. Performance evaluation of canny edge detection on a tiled multicore architecture

    NASA Astrophysics Data System (ADS)

    Brethorst, Andrew Z.; Desai, Nehal; Enright, Douglas P.; Scrofano, Ronald

    2011-01-01

    In the last few years, a variety of multicore architectures have been used to parallelize image processing applications. In this paper, we focus on assessing the parallel speed-ups of different Canny edge detection parallelization strategies on the Tile64, a tiled multicore architecture developed by the Tilera Corporation. Included in these strategies are different ways Canny edge detection can be parallelized, as well as differences in data management. The two parallelization strategies examined were loop-level parallelism and domain decomposition. Loop-level parallelism is achieved through the use of OpenMP, and it is capable of parallelization across the range of values over which a loop iterates. Domain decomposition is the process of breaking down an image into subimages, where each subimage is processed independently, in parallel. The results of the two strategies show that, for the same number of threads, programmer-implemented domain decomposition exhibits higher speed-ups than the compiler-managed loop-level parallelism implemented with OpenMP.
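
    A minimal Python sketch of the domain-decomposition strategy (illustrative only: a gradient-magnitude step stands in for the full Canny pipeline, and strip-boundary halo handling is omitted):

      import numpy as np
      from multiprocessing import Pool

      def gradient_magnitude(strip):
          # Each subimage is processed independently, in parallel.
          gy, gx = np.gradient(strip.astype(np.float32))
          return np.hypot(gx, gy)

      if __name__ == "__main__":
          image = np.random.rand(2048, 2048).astype(np.float32)
          strips = np.array_split(image, 8, axis=0)   # 8 subimages
          with Pool(processes=8) as pool:
              edges = np.vstack(pool.map(gradient_magnitude, strips))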

  2. Use of parallel computing in mass processing of laser data

    NASA Astrophysics Data System (ADS)

    Będkowski, J.; Bratuś, R.; Prochaska, M.; Rzonca, A.

    2015-12-01

    The first part of the paper describes the rules used to construct the algorithms needed for parallel computing and discusses the origins of the idea of using graphics processors in large-scale processing of laser scanning data. The next part of the paper presents the results of an efficiency assessment performed for an array of different processing options, all of which were substantially accelerated with parallel computing. The processing options were divided into the generation of orthophotos using point clouds, coloring of point clouds, transformations, and the generation of a regular grid, as well as advanced processes such as the detection of planes and edges, point cloud classification, and the analysis of data for the purpose of quality control. Most algorithms had to be formulated from scratch in the context of the requirements of parallel computing. A few of the algorithms were based on existing technology developed by the Dephos Software Company and then adapted to parallel computing in the course of this research study. Processing time was determined for each process employed for a typical quantity of data processed, which helped confirm the high efficiency of the solutions proposed and the applicability of parallel computing to the processing of laser scanning data. The high efficiency of parallel computing yields new opportunities in the creation and organization of processing methods for laser scanning data.

  3. Feminism, eating, and mental health.

    PubMed

    White, J H

    1991-03-01

    Eating disorders are prevalent health problems for women today. The traditional biomedical or psychiatric approaches offer a narrow perspective on the problem, its course, and its treatment. Analyzing disordered eating from a feminist perspective, this article discusses cultural, political, and social phenomena that have had a significant impact on the development of these disorders. Parallels between eating disorders and other women's mental illnesses, and the medicalization of their symptoms, are explored. A "new view" of disordered eating in women is proposed that can be advanced only through feminist research.

  4. Achieving an empathic stance: dialogical sequence analysis of a change episode.

    PubMed

    Tikkanen, Soile; Stiles, William B; Leiman, Mikael

    2013-01-01

    This study examined a client's therapeutic progress within one session of an 18-session child neurological assessment. The analysis focused on a parent-psychologist dialogue in one session of the assessment process. Dialogical sequence analysis (DSA; Leiman, 2004, 2012) was used as a micro-analytic method to examine the developing discourse. The analysis traced the mother's development of a reflective stance toward herself and her problematic ways of interacting with her daughter, who was the client. During the dialogue, the mother began to recognize her own contribution to maintaining the problematic pattern. Her gradual acknowledgment of the child's perspective and her growing sense of the child's otherness were mediated by an observer position (third-person view) toward the problematic pattern, which allowed a flexible exchange between the perspectives of self and other. The results demonstrate the parallel development of intrapersonal and interpersonal empathy shown previously to characterize the transition from stage 3 (problem statement/clarification) to stage 4 (understanding/insight) in the assimilation of problematic experiences sequence (Brinegar, Salvi, Stiles, & Greenberg, 2006).

  5. Trends in extreme learning machines: a review.

    PubMed

    Huang, Gao; Huang, Guang-Bin; Song, Shiji; You, Keyou

    2015-01-01

    Extreme learning machine (ELM) has gained increasing interest from various research fields recently. In this review, we aim to report the current state of the theoretical research and practical advances on this subject. We first give an overview of ELM from the theoretical perspective, including the interpolation theory, universal approximation capability, and generalization ability. Then we focus on the various improvements made to ELM which further improve its stability, sparsity, and accuracy under general or specific conditions. Apart from classification and regression, ELM has recently been extended to clustering, feature selection, representational learning, and many other learning tasks. These newly emerging algorithms greatly expand the applications of ELM. From the implementation perspective, hardware implementation and parallel computation techniques have substantially sped up the training of ELM, making it feasible for big data processing and real-time reasoning. Due to its remarkable efficiency, simplicity, and impressive generalization performance, ELM has been applied in a variety of domains, such as biomedical engineering, computer vision, system identification, and control and robotics. In this review, we try to provide a comprehensive view of these advances in ELM together with its future perspectives.

  6. Childhood and [re]habilitation: pragmatic political realities in the Colombian context.

    PubMed

    Pava-Ripoll, Nora Aneth; Granada-Echeverry, Patricia

    2016-01-01

    In this article, we outline some intersections between the concepts of childhood and [re] habilitation, which have undergone parallel development, especially since the 20th century. This complex interaction is mediated and constructed from scientific discourses that have consolidated around childhood. We emphasize this analysis from two perspectives: 1) academic positions that, from professions such as physical therapy, speech therapy, and occupational therapy, touch upon [re]habilitation in childhood and 2) public policy perspectives, which tend towards the creation of places to professionally practice [re]habilitation. A literature review driven by the question "What does it mean to [re]habilitate children in Colombia?" is cited in each section of this text, divided historically into 1) the rise of these [re]habilitative professions in Colombia, 2) the decade of the 1990s, marked by great changes through Colombian political reforms, and 3) the technological developments of the 21st century. We conclude that medical hegemony continues to guide the processes of [re]habilitation within a context that has changed and which imposes new challenges and requires new understanding and great conceptual and practical mobilization.

  7. Financial management and dental school strength, Part I: Strategy.

    PubMed

    Chambers, David W; Bergstrom, Roy

    2004-04-01

    The ultimate goal of financial management in a dental school is to accumulate assets that are available for strategic growth, which is a parallel objective to the profit motive in business. Budget development is often grounded in an income statement framework where the goal is to match revenues and expenses. Only when a balance sheet perspective (assets = liabilities + equity) is adopted can strategic growth be fully addressed. Four views of budgeting are presented in this article: 1) covering expenses, 2) shopping, 3) strategic support, and 4) budgeting as strategy. These perceptions of the budgeting process form a continuum, moving from a weak strategic position (covering expenses) to a strong one (budgeting as strategy) that encourages the accumulation of assets that build equity in the organization.

  8. Parallelized CCHE2D flow model with CUDA Fortran on Graphics Process Units

    USDA-ARS?s Scientific Manuscript database

    This paper presents the CCHE2D implicit flow model parallelized using CUDA Fortran programming technique on Graphics Processing Units (GPUs). A parallelized implicit Alternating Direction Implicit (ADI) solver using Parallel Cyclic Reduction (PCR) algorithm on GPU is developed and tested. This solve...
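
    The record is truncated, but the named algorithm is standard: Parallel Cyclic Reduction solves a tridiagonal system by eliminating the couplings of every equation simultaneously, doubling the stride each round. A vectorized NumPy sketch of the algorithm (an illustration, not the CCHE2D solver):

      import numpy as np

      def pcr_solve(a, b, c, d):
          # a: sub-diagonal (a[0] == 0), b: main diagonal,
          # c: super-diagonal (c[-1] == 0), d: right-hand side.
          n = len(d)
          a, b, c, d = (np.asarray(v, dtype=float).copy() for v in (a, b, c, d))
          s = 1
          while s < n:
              i = np.arange(n)
              im, ip = np.clip(i - s, 0, n - 1), np.clip(i + s, 0, n - 1)
              lo, hi = i - s >= 0, i + s < n
              alpha = np.where(lo, -a / b[im], 0.0)
              beta = np.where(hi, -c / b[ip], 0.0)
              # All rows are updated simultaneously from the old values.
              a, b, c, d = (alpha * np.where(lo, a[im], 0.0),
                            b + alpha * np.where(lo, c[im], 0.0)
                              + beta * np.where(hi, a[ip], 0.0),
                            beta * np.where(hi, c[ip], 0.0),
                            d + alpha * np.where(lo, d[im], 0.0)
                              + beta * np.where(hi, d[ip], 0.0))
              s *= 2
          return d / b  # every equation is now decoupled

      n = 8
      a = np.r_[0.0, -np.ones(n - 1)]
      c = np.r_[-np.ones(n - 1), 0.0]
      x = pcr_solve(a, np.full(n, 4.0), c, np.arange(1.0, n + 1))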

  9. Massively parallel information processing systems for space applications

    NASA Technical Reports Server (NTRS)

    Schaefer, D. H.

    1979-01-01

    NASA is developing massively parallel systems for ultra-high-speed processing of digital image data collected by satellite-borne instrumentation. Such systems contain thousands of processing elements. Work is underway on the design and fabrication of the 'Massively Parallel Processor', a ground computer containing 16,384 processing elements arranged in a 128 x 128 array. This computer uses existing technology. Advanced work includes the development of semiconductor chips containing thousands of feedthrough paths. Massively parallel image analog-to-digital conversion technology is also being developed. The goal is to provide compact computers suitable for real-time onboard processing of images.

  10. Parallel log structured file system collective buffering to achieve a compact representation of scientific and/or dimensional data

    DOEpatents

    Grider, Gary A.; Poole, Stephen W.

    2015-09-01

    Collective buffering and data pattern solutions are provided for storage, retrieval, and/or analysis of data in a collective parallel processing environment. For example, a method can be provided for data storage in a collective parallel processing environment. The method comprises receiving data to be written for a plurality of collective processes within a collective parallel processing environment, extracting a data pattern for the data to be written for the plurality of collective processes, generating a representation describing the data pattern, and saving the data and the representation.

  11. schwimmbad: A uniform interface to parallel processing pools in Python

    NASA Astrophysics Data System (ADS)

    Price-Whelan, Adrian M.; Foreman-Mackey, Daniel

    2017-09-01

    Many scientific and computing problems require doing some calculation on all elements of some data set. If the calculations can be executed in parallel (i.e. without any communication between calculations), these problems are said to be perfectly parallel. On computers with multiple processing cores, these tasks can be distributed and executed in parallel to greatly improve performance. A common paradigm for handling these distributed computing problems is to use a processing "pool": the "tasks" (the data) are passed in bulk to the pool, and the pool handles distributing the tasks to a number of worker processes when available. schwimmbad provides a uniform interface to parallel processing pools and enables switching easily between local development (e.g., serial processing or with multiprocessing) and deployment on a cluster or supercomputer (via, e.g., MPI or JobLib).
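
    A minimal usage sketch (the worker function and task list are illustrative; choose_pool is schwimmbad's helper for selecting a pool implementation at run time):

      import schwimmbad

      def worker(x):
          # A perfectly parallel task: no communication between calls.
          return x ** 2

      if __name__ == "__main__":
          # processes=4 gives a local multiprocessing pool; passing
          # mpi=True instead selects an MPI pool for cluster deployment.
          pool = schwimmbad.choose_pool(mpi=False, processes=4)
          results = list(pool.map(worker, range(100)))
          pool.close()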

  12. Parallel Signal Processing and System Simulation using aCe

    NASA Technical Reports Server (NTRS)

    Dorband, John E.; Aburdene, Maurice F.

    2003-01-01

    Recently, networked and cluster computation have become very popular for both signal processing and system simulation. A new language is ideally suited for parallel signal processing applications and system simulation, since it allows the programmer to explicitly express the computations that can be performed concurrently. In addition, the new C-based parallel language (aCe C) for architecture-adaptive programming allows programmers to implement algorithms and system simulation applications on parallel architectures by providing them with the assurance that future parallel architectures will be able to run their applications with a minimum of modification. In this paper, we focus on some fundamental features of aCe C and present a signal processing application (FFT).

  13. Parallel processing in finite element structural analysis

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    1987-01-01

    A brief review is made of the fundamental concepts and basic issues of parallel processing. Discussion focuses on parallel numerical algorithms, performance evaluation of machines and algorithms, and parallelism in finite element computations. A computational strategy is proposed for maximizing the degree of parallelism at different levels of the finite element analysis process including: 1) formulation level (through the use of mixed finite element models); 2) analysis level (through additive decomposition of the different arrays in the governing equations into the contributions to a symmetrized response plus correction terms); 3) numerical algorithm level (through the use of operator splitting techniques and application of iterative processes); and 4) implementation level (through the effective combination of vectorization, multitasking and microtasking, whenever available).

  14. Connectionism, parallel constraint satisfaction processes, and gestalt principles: (re) introducing cognitive dynamics to social psychology.

    PubMed

    Read, S J; Vanman, E J; Miller, L C

    1997-01-01

    We argue that recent work in connectionist modeling, in particular the parallel constraint satisfaction processes that are central to many of these models, has great importance for understanding issues of both historical and current concern for social psychologists. We first provide a brief description of connectionist modeling, with particular emphasis on parallel constraint satisfaction processes. Second, we examine the tremendous similarities between parallel constraint satisfaction processes and the Gestalt principles that were the foundation for much of modern social psychology. We propose that parallel constraint satisfaction processes provide a computational implementation of the principles of Gestalt psychology that were central to the work of such seminal social psychologists as Asch, Festinger, Heider, and Lewin. Third, we then describe how parallel constraint satisfaction processes have been applied to three areas that were key to the beginnings of modern social psychology and remain central today: impression formation and causal reasoning, cognitive consistency (balance and cognitive dissonance), and goal-directed behavior. We conclude by discussing implications of parallel constraint satisfaction principles for a number of broader issues in social psychology, such as the dynamics of social thought and the integration of social information within the narrow time frame of social interaction.
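
    The core mechanism is easy to caricature in code. In the sketch below (a generic relaxation network of our own construction, not any specific published model), units excite or inhibit each other through a symmetric weight matrix, all constraints are applied in parallel, and the network settles into a state that best satisfies them:

      import numpy as np

      def settle(W, external, steps=200, rate=0.1):
          # W: symmetric weights encoding the constraints between units.
          a = np.zeros(len(W))
          for _ in range(steps):
              net = W @ a + external          # all constraints at once
              a = np.clip(a + rate * net, -1.0, 1.0)
          return a

      # Two mutually supportive units, one receiving external input:
      W = np.array([[0.0, 1.0],
                    [1.0, 0.0]])
      print(settle(W, external=np.array([0.5, 0.0])))  # both settle near +1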

  15. Using Parallel Processing for Problem Solving.

    DTIC Science & Technology

    1979-12-01

    Activities are the basic parallel processing primitive. Different goals of the system can be pursued in parallel by placing them in separate activities. Language primitives are provided for manipulating running activities. Viewpoints are a generalization of contexts.

  16. Examining Quality Management Audits in Nuclear Medicine Practice as a lifelong learning process: opportunities and challenges to the nuclear medicine professional and beyond.

    PubMed

    Pascual, Thomas N B

    2016-08-01

    This essay explores the critical issues and challenges surrounding lifelong learning for professionals, initially within the professional and organizational context of nuclear medicine practice. It critically examines how the peer-review process called Quality Management Audits in Nuclear Medicine Practice (QUANUM) of the International Atomic Energy Agency (IAEA) can be considered a lifelong learning opportunity to instill a culture of quality, improve patient care, and elevate the status of the nuclear medicine profession and practice within the demands of social change, policy, and globalization. This is explored initially by providing contextual background on the identity of the IAEA as an organization responsible for nuclear medicine professionals, followed by the benefits that QUANUM can offer. Key debates surrounding lifelong learning, such as the compulsification of lifelong learning and its impact on professional change, are then woven through the discussion, using theoretical grounding from a qualitative review of the literature. Keeping in mind that there is very limited literature focusing on the implications of QUANUM as a lifelong learning process for nuclear medicine professionals, this essay uses select narratives and observations of QUANUM as a lifelong learning process from an auditor's perspective, provides a comparative perspective of QUANUM against other lifelong learning opportunities such as continuing professional development activities, and observes parallels in the benefits and challenges it offers to professionals in other medical specialty fields and in the teaching profession.

  17. A concise history of central venous access.

    PubMed

    Beheshti, Michael V

    2011-12-01

    Central venous access has become a mainstay of modern interventional radiology practice. Its history has paralleled and enabled many current medical therapies. This short overview provides an interesting historical perspective of these increasingly common interventional procedures. Copyright © 2011 Elsevier Inc. All rights reserved.

  18. A PARALLEL LEAST-SQUARES FINITE ELEMENT METHOD FOR INCOMPRESSIBLE FLOWS. (R825200)

    EPA Science Inventory

  19. 47 CFR 32.9000 - Glossary of terms.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... of this system of accounts. Accounting system means the total set of interrelated principles, rules... entity from a financial perspective. An accounting system generally consists of a chart of accounts, various parallel subsystems and subsidiary records. An accounting system is utilized to provide the...

  20. Real-time implementations of image segmentation algorithms on shared memory multicore architecture: a survey (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Akil, Mohamed

    2017-05-01

    Real-time processing is becoming more and more important in many image processing applications. Image segmentation is one of the most fundamental tasks in image analysis, and many different approaches to it have been proposed. The watershed transform is a well-known image segmentation tool and a very data-intensive task. To accelerate watershed algorithms and obtain real-time processing, parallel architectures and programming models for multicore computing have been developed. This paper presents a survey of approaches for the parallel implementation of sequential watershed algorithms on multicore general-purpose CPUs: homogeneous multicore processors with shared memory. To achieve an efficient parallel implementation, it is necessary to explore different strategies (parallelization/distribution/distributed scheduling) combined with different acceleration and optimization techniques to enhance parallelism. In this paper, we compare various parallelizations of sequential watershed algorithms on shared-memory multicore architectures. We analyze the performance measurements of each parallel implementation and the impact of the different sources of overhead on its performance. In this comparison study, we also discuss the advantages and disadvantages of the parallel programming models, comparing OpenMP (an application programming interface for multi-processing) with Pthreads (POSIX Threads) to illustrate the impact of each parallel programming model on the performance of the parallel implementations.

  1. Parallel Large-Scale Molecular Dynamics Simulation Opens New Perspective to Clarify the Effect of a Porous Structure on the Sintering Process of Ni/YSZ Multiparticles.

    PubMed

    Xu, Jingxiang; Higuchi, Yuji; Ozawa, Nobuki; Sato, Kazuhisa; Hashida, Toshiyuki; Kubo, Momoji

    2017-09-20

    Ni sintering in the Ni/YSZ porous anode of a solid oxide fuel cell changes the porous structure, leading to degradation. Preventing sintering and degradation during operation is a great challenge. Usually, a sintering molecular dynamics (MD) simulation model consisting of two particles on a substrate is used; however, the model cannot reflect the porous structure effect on sintering. In our previous study, a multi-nanoparticle sintering modeling method with tens of thousands of atoms revealed the effect of the particle framework and porosity on sintering. However, the method cannot reveal the effect of the particle size on sintering and the effect of sintering on the change in the porous structure. In the present study, we report a strategy to reveal them in the porous structure by using our multi-nanoparticle modeling method and a parallel large-scale multimillion-atom MD simulator. We used this method to investigate the effect of YSZ particle size and tortuosity on sintering and degradation in the Ni/YSZ anodes. Our parallel large-scale MD simulation showed that the sintering degree decreased as the YSZ particle size decreased. The gas fuel diffusion path, which reflects the overpotential, was blocked by pore coalescence during sintering. The degradation of gas diffusion performance increased as the YSZ particle size increased. Furthermore, the gas diffusion performance was quantified by a tortuosity parameter and an optimal YSZ particle size, which is equal to that of Ni, was found for good diffusion after sintering. These findings cannot be obtained by previous MD sintering studies with tens of thousands of atoms. The present parallel large-scale multimillion-atom MD simulation makes it possible to clarify the effects of the particle size and tortuosity on sintering and degradation.

  2. Image Processing Using a Parallel Architecture.

    DTIC Science & Technology

    1987-12-01

    ENG/87D-25 Abstract This study developed a set of low level image processing tools on a parallel computer that allows concurrent processing of images...environment, the set of tools offers a significant reduction in the time required to perform some commonly used image processing operations...step toward developing these systems, a structured set of image processing tools was implemented using a parallel computer. More important than

  3. Visual representation of spatiotemporal structure

    NASA Astrophysics Data System (ADS)

    Schill, Kerstin; Zetzsche, Christoph; Brauer, Wilfried; Eisenkolb, A.; Musto, A.

    1998-07-01

    The processing and representation of motion information is addressed from an integrated perspective comprising low-level signal processing properties as well as higher-level cognitive aspects. For the low-level processing of motion information we argue that a fundamental requirement is the existence of a spatio-temporal memory. Its key feature, the provision of an orthogonal relation between external time and its internal representation, is achieved by a mapping of temporal structure into a locally distributed activity distribution accessible in parallel by higher-level processing stages. This leads to a reinterpretation of the classical concept of 'iconic memory' and resolves inconsistencies regarding ultra-short-time processing and visual masking. The spatio-temporal memory is further investigated by experiments on the perception of spatio-temporal patterns. Results on the direction discrimination of motion paths provide evidence that information about direction and location is not processed and represented independently, suggesting a unified representation on an early level, in the sense that motion information is internally available in the form of a spatio-temporal compound. For the higher-level representation we have developed a formal framework for the qualitative description of courses of motion that may occur with moving objects.

  4. ParaBTM: A Parallel Processing Framework for Biomedical Text Mining on Supercomputers.

    PubMed

    Xing, Yuting; Wu, Chengkun; Yang, Xi; Wang, Wei; Zhu, En; Yin, Jianping

    2018-04-27

    A prevailing way of extracting valuable information from biomedical literature is to apply text mining methods on unstructured texts. However, the massive amount of literature that needs to be analyzed poses a big data challenge to the processing efficiency of text mining. In this paper, we address this challenge by introducing parallel processing on a supercomputer. We developed paraBTM, a runnable framework that enables parallel text mining on the Tianhe-2 supercomputer. It employs a low-cost yet effective load balancing strategy to maximize the efficiency of parallel processing. We evaluated the performance of paraBTM on several datasets, utilizing three types of named entity recognition tasks as demonstration. Results show that, in most cases, the processing efficiency can be greatly improved with parallel processing, and the proposed load balancing strategy is simple and effective. In addition, our framework can be readily applied to other tasks of biomedical text mining besides NER.
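
    The abstract describes the load balancing strategy only as low-cost and effective, so the sketch below is an assumption rather than paraBTM's code: it shows one common low-cost approach, greedy longest-processing-time assignment, with document length standing in for task cost. All names are hypothetical.

```c
/* Hypothetical static load balancer: assign the costliest document next
 * to the currently least-loaded worker (LPT rule). Illustration of the
 * general idea only, not paraBTM's actual algorithm. */
#include <stdlib.h>

struct doc { int id; long cost; };   /* cost ~ document length */

static int by_cost_desc(const void *a, const void *b) {
    long ca = ((const struct doc *)a)->cost;
    long cb = ((const struct doc *)b)->cost;
    return (cb > ca) - (cb < ca);    /* descending by cost */
}

/* Fills owner[doc id] with the worker index; ids must be 0..n_docs-1. */
void assign(struct doc *docs, int n_docs, int n_workers, int *owner) {
    long *load = calloc(n_workers, sizeof *load);
    qsort(docs, n_docs, sizeof *docs, by_cost_desc);
    for (int i = 0; i < n_docs; i++) {
        int best = 0;
        for (int w = 1; w < n_workers; w++)
            if (load[w] < load[best]) best = w;
        owner[docs[i].id] = best;
        load[best] += docs[i].cost;
    }
    free(load);
}
```

    Sorting by cost and always feeding the least-loaded worker keeps per-worker totals close, which is the property that matters when thousands of text-mining tasks of uneven size are spread across supercomputer nodes.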

  5. Design of a dataway processor for a parallel image signal processing system

    NASA Astrophysics Data System (ADS)

    Nomura, Mitsuru; Fujii, Tetsuro; Ono, Sadayasu

    1995-04-01

    Recently, demands for high-speed signal processing have been increasing, especially in the fields of image data compression, computer graphics, and medical imaging. To achieve sufficient power for real-time image processing, we have been developing parallel signal-processing systems. This paper describes a communication processor called the 'dataway processor', designed for a new scalable parallel signal-processing system. The processor has six high-speed communication links (Dataways), a data-packet routing controller, a RISC CORE, and a DMA controller. Each communication link transfers 8 bits in parallel in full-duplex mode at 50 MHz. Moreover, data routing, DMA, and CORE operations are processed in parallel, so sufficient throughput is available for high-speed digital video signals. The processor is designed in a top-down fashion using a CAD system called 'PARTHENON'. The hardware is fabricated using 0.5-micrometer CMOS technology and amounts to about 200 K gates.

  6. Search asymmetries: parallel processing of uncertain sensory information.

    PubMed

    Vincent, Benjamin T

    2011-08-01

    What is the mechanism underlying search phenomena such as search asymmetry? Two-stage models such as Feature Integration Theory and Guided Search propose parallel pre-attentive processing followed by serial post-attentive processing. They claim search asymmetry effects are indicative of finding pairs of features, one processed in parallel, the other in serial. An alternative proposal is that a 1-stage parallel process is responsible, and search asymmetries occur when one stimulus has greater internal uncertainty associated with it than another. While the latter account is simpler, only a few studies have set out to empirically test its quantitative predictions, and many researchers still subscribe to the 2-stage account. This paper examines three separate parallel models (Bayesian optimal observer, max rule, and a heuristic decision rule). All three parallel models can account for search asymmetry effects and I conclude that either people can optimally utilise the uncertain sensory data available to them, or are able to select heuristic decision rules which approximate optimal performance. Copyright © 2011 Elsevier Ltd. All rights reserved.
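
    The quantitative predictions at issue can be illustrated with a toy simulation. This is not the paper's code, and every parameter below is hypothetical; it only demonstrates the core claim that a one-stage parallel max rule plus unequal internal noise yields asymmetric detectability:

```c
/* Toy max-rule observer (not the paper's code; all parameters are
 * hypothetical). A display yields one noisy response per item; "target
 * present" is reported when the largest response exceeds a criterion. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define PI 3.14159265358979323846

static double gauss(void) {               /* Box-Muller standard normal */
    double u1 = (rand() + 1.0) / (RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / (RAND_MAX + 2.0);
    return sqrt(-2.0 * log(u1)) * cos(2.0 * PI * u2);
}

/* Proportion correct over target-present and target-absent displays:
 * target has signal d and noise sd_t; distractors have noise sd_d. */
double prop_correct(int n, double d, double sd_t, double sd_d,
                    double criterion, int trials) {
    int correct = 0;
    for (int t = 0; t < trials; t++) {
        double max = d + sd_t * gauss();            /* present display */
        for (int i = 1; i < n; i++) {
            double r = sd_d * gauss();
            if (r > max) max = r;
        }
        if (max > criterion) correct++;             /* hit */
        max = sd_d * gauss();                       /* absent display */
        for (int i = 1; i < n; i++) {
            double r = sd_d * gauss();
            if (r > max) max = r;
        }
        if (max <= criterion) correct++;            /* correct rejection */
    }
    return correct / (2.0 * trials);
}

int main(void) {
    srand(1);
    /* Same signal; only the noise assignment is swapped between classes. */
    printf("noisy target among precise distractors: %.3f\n",
           prop_correct(8, 1.5, 1.0, 0.5, 2.0, 100000));
    printf("precise target among noisy distractors: %.3f\n",
           prop_correct(8, 1.5, 0.5, 1.0, 2.0, 100000));
    return 0;
}
```

    The two conditions differ in accuracy even though the signal strength is identical, so an asymmetry emerges from uncertainty alone, with no serial stage.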

  7. Early dialogue with health technology assessment bodies: a European perspective.

    PubMed

    Cuche, Matthieu; Beckerman, Rachel; Chowdhury, Cyrus A; van Weelden, Marije A

    2014-12-01

    Evidence requirements may differ across HTA bodies, so pharmaceutical companies must plan to synergize their evidence generation strategies across global regulatory and HTA bodies. Until recently, companies had no official platform to discuss the clinical development of a drug with HTA bodies; however, this is changing. To achieve broad usage in the EU, products must obtain both regulatory and reimbursement approval, the latter of which is based on HTA appraisal in many markets. The objective of this study is to present and evaluate the different options available for early HTA consultation (during drug development/Phase III) in the major European markets from the industry perspective. An exploratory (nonsystematic) literature review was performed to identify the European markets offering early HTA consultations, and each process was analyzed using a set of predefined metrics relevant to industry (the ability to consult with the regulatory body in parallel, consultation fees, length of the consultation meeting, language of the consultation meeting, maximum number of pharmaceutical company employees attending, procedural timelines, the nature of data for which consultative advice can be sought, the output of the process, and the ability to involve external experts). Four different types of early HTA consultation processes were identified across the major European HTA markets. The nature of these processes varied in terms of the types and number of questions that can be addressed, the length of the meeting, the reporting output, and the ability to involve external experts. The availability of various options for early HTA consultation may help to avoid a mismatch between the evidence generated by a product's clinical development program and the evidence expected by HTA bodies and payers, which can facilitate the pricing and reimbursement process upon a product's market authorization.

  8. 77 FR 47573 - Approval and Promulgation of Implementation Plans; Mississippi; 110(a)(2)(E)(ii) Infrastructure...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-09

    ... Mississippi Department of Environmental Quality (MDEQ), on July 13, 2012, for parallel processing. This... of Contents I. What is parallel processing? II. Background III. What elements are required under... Executive Order Reviews I. What is parallel processing? Consistent with EPA regulations found at 40 CFR Part...

  9. Double Take: Parallel Processing by the Cerebral Hemispheres Reduces Attentional Blink

    ERIC Educational Resources Information Center

    Scalf, Paige E.; Banich, Marie T.; Kramer, Arthur F.; Narechania, Kunjan; Simon, Clarissa D.

    2007-01-01

    Recent data have shown that parallel processing by the cerebral hemispheres can expand the capacity of visual working memory for spatial locations (J. F. Delvenne, 2005) and attentional tracking (G. A. Alvarez & P. Cavanagh, 2005). Evidence that parallel processing by the cerebral hemispheres can improve item identification has remained elusive.…

  10. On the costs of parallel processing in dual-task performance: The case of lexical processing in word production.

    PubMed

    Paucke, Madlen; Oppermann, Frank; Koch, Iring; Jescheniak, Jörg D

    2015-12-01

    Previous dual-task picture-naming studies suggest that lexical processes require capacity-limited resources and prevent other tasks from being carried out in parallel. However, studies involving the processing of multiple pictures suggest that parallel lexical processing is possible. The present study investigated the specific costs that may arise when such parallel processing occurs. We used a novel dual-task paradigm, presenting 2 visual objects associated with different tasks and manipulating between-task similarity. With high similarity, a picture-naming task (T1) was combined with a phoneme-decision task (T2), so that lexical processes were shared across tasks. With low similarity, picture naming was combined with a size-decision T2 (nonshared lexical processes). In Experiment 1, we found that a manipulation of lexical processes (lexical frequency of the T1 object name) showed an additive propagation with low between-task similarity and an overadditive propagation with high between-task similarity. Experiment 2 replicated this differential forward propagation of the lexical effect and showed that it disappeared with longer stimulus onset asynchronies. Moreover, both experiments showed backward crosstalk, indexed as worse T1 performance with high between-task similarity compared with low similarity. Together, these findings suggest that conditions of high between-task similarity can lead to parallel lexical processing in both tasks, which, however, does not result in benefits but rather in extra performance costs. These costs can be attributed to crosstalk based on the dual-task binding problem arising from parallel processing. Hence, the present study reveals that capacity-limited lexical processing can run in parallel across dual tasks, but only at the expense of extraordinarily high costs. (c) 2015 APA, all rights reserved.

  11. Parallel computing on Unix workstation arrays

    NASA Astrophysics Data System (ADS)

    Reale, F.; Bocchino, F.; Sciortino, S.

    1994-12-01

    We have tested arrays of general-purpose Unix workstations used as MIMD systems for massive parallel computations. In particular, we have solved numerically a demanding test problem with a 2D hydrodynamic code, originally developed to study astrophysical flows, by executing it on arrays either of DECstations 5000/200 on an Ethernet LAN, or of DECstations 3000/400, equipped with powerful Alpha processors, on an FDDI LAN. The code is appropriate for data-domain decomposition, and we have used a library for parallelization previously developed in our Institute and easily extended to work on Unix workstation arrays by using the PVM software toolset. We have compared the parallel efficiencies obtained on arrays of several processors to those obtained on a dedicated MIMD parallel system, namely a Meiko Computing Surface (CS-1) equipped with Intel i860 processors. We discuss the feasibility of using non-dedicated parallel systems and conclude that the convenience depends essentially on the size of the computational domain as compared to the relative processor power and network bandwidth. We point out that for future perspectives a parallel development of processor and network technology is important, and that the software still offers great opportunities for improvement, especially in terms of latency in the message-passing protocols. In conditions of significant gain in terms of speedup, such workstation arrays represent a cost-effective approach to massive parallel computations.
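
    Their conclusion about domain size versus processor power and network bandwidth can be restated with a back-of-envelope model. The formula and all constants below are assumptions for illustration, not the paper's measurements: per-step time on p workstations, for an n x n domain decomposed into strips, is the divided local compute plus a two-row boundary exchange.

```c
/* Assumed toy model: speedup = serial time / (compute/p + halo swap). */
#include <stdio.h>

double speedup(double n, double p,
               double t_cell,       /* seconds per cell update        */
               double t_latency,    /* message startup cost (s)       */
               double t_word)       /* seconds per boundary word sent */
{
    double serial   = n * n * t_cell;
    double parallel = n * n * t_cell / p              /* local compute */
                    + 2.0 * (t_latency + n * t_word); /* halo exchange */
    return serial / parallel;
}

int main(void) {
    /* Ethernet-class vs faster-network numbers, purely illustrative. */
    for (int p = 2; p <= 16; p *= 2)
        printf("p=%2d  slow net: %5.2f   fast net: %5.2f\n", p,
               speedup(512, p, 1e-6, 1e-3, 8e-6),
               speedup(512, p, 1e-6, 5e-4, 1e-6));
    return 0;
}
```

    With slow-network constants the exchange term swamps the divided compute term as p grows, while a larger domain or faster network keeps the scaling, which matches the paper's point about domain size relative to network bandwidth.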

  12. Graphical Representation of Parallel Algorithmic Processes

    DTIC Science & Technology

    1990-12-01

    interface with the AAARF main process. The source code for the AAARF class-common library is in the common subdirectory and consists of the following files... for public release; distribution unlimited AFIT/GCE/ENG/90D-07 Graphical Representation of Parallel Algorithmic Processes THESIS Presented to the...goal of this study is to develop an algorithm animation facility for parallel processes executing on different architectures, from multiprocessor

  13. Safeguarding the provision of ecosystem services in catchment systems.

    PubMed

    Everard, Mark

    2013-04-01

    A narrow technocentric focus on a few favored ecosystem services (generally provisioning services) has led to ecosystem degradation globally, including catchment systems and their capacities to support human well-being. Increasing recognition of the multiple benefits provided by ecosystems is slowly being translated into policy and some areas of practice, although there remains a significant shortfall in the incorporation of a systemic perspective into operational management and decision-making tools. Nevertheless, a range of ecosystem-based solutions to issues as diverse as flooding and green space provision in the urban environment offers hope for improving habitat and optimizing beneficial services. The value of catchment ecosystem processes and their associated services is also being increasingly recognized and internalized by the water industry, improving water quality and quantity through catchment land management rather than at greater expense in the treatment costs of contaminated water abstracted lower in catchments. Parallel recognition of the value of working with natural processes, rather than "defending" built assets when catchment hydrology is adversely affected by unsympathetic upstream development, is being progressively incorporated into flood risk management policy. This focus on wider catchment processes also yields a range of cobenefits for fishery, wildlife, amenity, flood risk, and other interests, which may be optimized if multiple stakeholders and their diverse value systems are included in decision-making processes. Ecosystem services, particularly implemented as a central element of the ecosystem approach, provide an integrated framework for building in these different perspectives and values, many of them formerly excluded, into commercial and resource management decision-making processes, thereby making tractable the integrative aspirations of sustainable development. This can help redress deeply entrenched inherited assumptions, habits, and vested interests, replacing them in many management situations with wider recognition of the multiple values of ecosystems and their services. Global interest in taking an ecosystem approach is promoting novel scientific and policy thinking, yet there is a shortfall in its translation into practical management tools. Professional associations may have key roles to play in breaking down barriers to the "mainstreaming" of systemic perspectives into common practice, particularly through joining up the different sectors of society essential to their implementation and ongoing adaptive management. Copyright © 2012 SETAC.

  14. Potential conflict between TRIPS and GATT concerning parallel importation of drugs and possible solution to prevent undesirable market segmentation.

    PubMed

    Lo, Chang-Fa

    2011-01-01

    From an international perspective, parallel importation, especially of drugs, has to do with the exhaustion principle in Article 6 of the TRIPS Agreement and the general exception in Article XX of the GATT 1994. Issues concerning the TRIPS Agreement have been constant topics of discussion. However, parallel importation in relation to the general rules of the GATT 1994, as well as to the exceptions provided in its Article XX, has not been seriously discussed. In this paper's view, there is a conflict between the provisions of these two agreements. The paper explains this conflict and proposes a method of interpretation to resolve the conflict between GATT Article XX and TRIPS Article 6 concerning parallel importation, with the aim of reducing possible undesirable market segmentation in the pharmaceutical sector. The method suggested is a proper application of the good faith principle in the Vienna Convention to the interpretation of GATT Article XX, so that there can be some flexibility for those prohibitions of parallel importation which have a positive effect on international trade.

  15. An Organizational Perspective to the Creation of the Research Field.

    PubMed

    Talamo, Alessandra; Mellini, Barbara; Camilli, Marco; Ventura, Stefano; Di Lucchio, Loredana

    2016-09-01

    The aim of the paper is to contribute to the definition and analysis of "access to the field" (Feldman et al. 2003) through an inter-organizational perspective. The paper discusses a case study on the access of a researcher to a hospital department, where both organizations and actors are shown to actively construct the research site. Both researcher and participants are described in terms of work organizations originally engaged in parallel systems of activity. Dynamics of negotiation "tied" the different actors' activities into a new activity system in which researcher and participants contribute to the effectiveness of both organizations (i.e., the research and the hospital ward). An Activity Theory perspective (Leont'ev 1978) is used with the aim of focusing the analysis on the activities assigned to the different actors. The approach adopted introduces the idea that, from the outset, research is made possible by a process of co-construction that works through the development of a completely new and shared work space arising around the encounter between researchers and participants. It is the balance between improvised actions and the co-creation of "boundary objects" (Star and Griesemer 1989) that makes it possible to interlace the two activity systems. The concept of "knotworking" (Engeström 2007a) is adopted to interpret specific actions by both organizations and actors intended to build a knot of activities whereby the new research system takes place.

  16. Fast parallel algorithm for slicing STL based on pipeline

    NASA Astrophysics Data System (ADS)

    Ma, Xulong; Lin, Feng; Yao, Bo

    2016-05-01

    In the Additive Manufacturing field, current research on data processing mainly focuses on the slicing of large STL files or complicated CAD models. To improve efficiency and reduce slicing time, a parallel algorithm has great advantages. However, traditional algorithms cannot make full use of multi-core CPU hardware resources. In this paper, a fast parallel algorithm is presented to speed up data processing. A pipeline mode is adopted to design the parallel algorithm, and the complexity of the pipeline algorithm is analyzed theoretically. To evaluate the performance of the new algorithm, the effects of thread count and layer count are investigated in a series of experiments. The experimental results show that thread count and layer count are two significant factors in the speedup ratio. The tendency of speedup versus thread count reveals a positive relationship that agrees well with Amdahl's law, and the tendency of speedup versus layer count likewise shows a positive relationship, in agreement with Gustafson's law. The new algorithm uses topological information to compute contours and parallelizes this computation for speedup. Another parallel algorithm, based on data parallelism, is used in the experiments to show that the pipeline parallel mode is more efficient. A final case study demonstrates the strong performance of the new parallel algorithm: compared with the serial slicing algorithm, the pipeline parallel algorithm makes full use of multi-core CPU hardware and accelerates the slicing process, and compared with the data-parallel slicing algorithm, it achieves a much higher speedup ratio and efficiency.
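
    As an illustration of the pipeline mode (a minimal sketch, not the paper's implementation; the stage names and bounded-queue capacity are assumptions), two POSIX threads can overlap slicing stages through a producer-consumer queue:

```c
/* Two-stage pipeline sketch: stage 1 prepares layers while stage 2
 * builds contours for earlier layers, so the stages overlap in time. */
#include <pthread.h>
#include <stdio.h>

#define N_LAYERS 64
#define Q_CAP 8

static int queue[Q_CAP];
static int q_head, q_tail, q_len;
static pthread_mutex_t q_mtx = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t q_not_full  = PTHREAD_COND_INITIALIZER;
static pthread_cond_t q_not_empty = PTHREAD_COND_INITIALIZER;

static void push(int layer) {                   /* bounded-buffer put */
    pthread_mutex_lock(&q_mtx);
    while (q_len == Q_CAP) pthread_cond_wait(&q_not_full, &q_mtx);
    queue[q_tail] = layer; q_tail = (q_tail + 1) % Q_CAP; q_len++;
    pthread_cond_signal(&q_not_empty);
    pthread_mutex_unlock(&q_mtx);
}

static int pop(void) {                          /* bounded-buffer get */
    pthread_mutex_lock(&q_mtx);
    while (q_len == 0) pthread_cond_wait(&q_not_empty, &q_mtx);
    int layer = queue[q_head]; q_head = (q_head + 1) % Q_CAP; q_len--;
    pthread_cond_signal(&q_not_full);
    pthread_mutex_unlock(&q_mtx);
    return layer;
}

static void *stage1_intersect(void *arg) {      /* find triangle cuts */
    (void)arg;
    for (int z = 0; z < N_LAYERS; z++)
        push(z);                /* stand-in for real intersection work */
    push(-1);                   /* end-of-stream marker */
    return NULL;
}

static void *stage2_contour(void *arg) {        /* link cuts into loops */
    (void)arg;
    for (int z; (z = pop()) != -1; )
        printf("contours built for layer %d\n", z);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, stage1_intersect, NULL);
    pthread_create(&t2, NULL, stage2_contour, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```

    With more stages (read, intersect, link, write) the same queue pattern chains further, and throughput approaches the cost of the slowest stage; adding layers adds work that keeps every stage busy, which is the Gustafson-style behavior the experiments report.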

  17. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael E; Ratterman, Joseph D; Smith, Brian E

    2014-02-11

    Endpoint-based parallel data processing in a parallel active messaging interface ('PAMI') of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  18. Distributed computing feasibility in a non-dedicated homogeneous distributed system

    NASA Technical Reports Server (NTRS)

    Leutenegger, Scott T.; Sun, Xian-He

    1993-01-01

    The low cost and availability of clusters of workstations have led researchers to re-explore distributed computing using independent workstations. This approach may provide better cost/performance than tightly coupled multiprocessors. In practice, this approach often utilizes wasted cycles to run parallel jobs. The feasibility of such a non-dedicated parallel processing environment, assuming workstation processes have preemptive priority over parallel tasks, is addressed. An analytical model is developed to predict parallel job response times. Our model provides insight into how significantly workstation owner interference degrades parallel program performance. A new term, task ratio, which relates the parallel task demand to the mean service demand of nonparallel workstation processes, is introduced. It is proposed that the task ratio is a useful metric for determining how large the demand of a parallel application must be in order to make efficient use of a non-dedicated distributed system.
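
    The abstract defines the task ratio only verbally, so the sketch below is an assumed, simplified reading, not the paper's analytical model: it computes the ratio as stated and pairs it with a naive estimate of how preemptive owner processes stretch a parallel task.

```c
/* Assumed toy reading of the metric (not the paper's model): with owner
 * processes given preemptive priority, a parallel task only receives
 * the leftover fraction of each CPU. */
#include <stdio.h>

/* task ratio = parallel task demand / mean demand of owner processes */
double task_ratio(double parallel_demand, double mean_owner_demand) {
    return parallel_demand / mean_owner_demand;
}

/* Naive stretch estimate: if owner work keeps each CPU busy a fraction
 * rho of the time, the parallel task sees only (1 - rho) of it. */
double stretched_time(double parallel_demand, double rho) {
    return parallel_demand / (1.0 - rho);
}

int main(void) {
    double demand = 120.0;                 /* seconds of parallel work */
    printf("task ratio vs 2 s owner jobs: %.1f\n", task_ratio(demand, 2.0));
    for (double rho = 0.1; rho < 0.6; rho += 0.2)
        printf("owner load %.0f%% -> ~%.0f s\n", 100 * rho,
               stretched_time(demand, rho));
    return 0;
}
```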

  19. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-08-12

    Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  20. Toward a Model Framework of Generalized Parallel Componential Processing of Multi-Symbol Numbers

    ERIC Educational Resources Information Center

    Huber, Stefan; Cornelsen, Sonja; Moeller, Korbinian; Nuerk, Hans-Christoph

    2015-01-01

    In this article, we propose and evaluate a new model framework of parallel componential multi-symbol number processing, generalizing the idea of parallel componential processing of multi-digit numbers to the case of negative numbers by considering the polarity signs similar to single digits. In a first step, we evaluated this account by defining…

  1. [Breastfeeding from the perspective of teenage mothers in Bogotá].

    PubMed

    Forero, Yibby; Rodríguez, Sandra Milena; Isaács, María Alexandra; Hernández, Jenny Alexandra

    2013-01-01

    In Colombia, breastfeeding is inadequate and, especially in teenage girls, short. Given that adolescents are a social group with their own lifestyles, we need to know what meanings they attach to breastfeeding and what the characteristics of their breastfeeding experience are, in order to identify issues that limit or facilitate this practice; this knowledge can improve breastfeeding promotion strategies. The aim was to characterize the experience of breastfeeding in nursing adolescents and to identify strengths, limitations, and perceived needs from their own perspective. This was a phenomenological qualitative study. We conducted 24 interviews and three focus groups with female adolescents in different postpartum periods. Data collection was carried out in Bogotá, with women participating in a program of the Secretaría Distrital de Integración Social. The systematic process was developed in parallel with the analysis process and involved the relationships between categories and the networks that form among them. Teenagers do not breastfeed exclusively and identify several difficulties in the act of breastfeeding. Complementary feeding includes unnatural foods. Maternity and breastfeeding are not consistent with their perception of being a teenager. Adolescents recognize the benefits of breastfeeding for their children and for themselves; however, their breastfeeding experience differs from the recommendations for achieving exclusive breastfeeding and healthy complementary feeding. Among the identified causes, we highlight the lack of accurate backing and timely support.

  2. Repercussion of geometric and dynamic constraints on the 3D rendering quality in structurally adaptive multi-view shooting systems

    NASA Astrophysics Data System (ADS)

    Ali-Bey, Mohamed; Moughamir, Saïd; Manamanni, Noureddine

    2011-12-01

    In this paper, a simulator of a multi-view shooting system with parallel optical axes and a structurally variable configuration is proposed. The considered system is dedicated to the production of 3D content for auto-stereoscopic visualization. The global shooting/viewing geometrical process, which is the kernel of this shooting system, is detailed, and the different viewing, transformation, and capture parameters are defined. An appropriate perspective projection model is then derived to work out a simulator. The simulator is first used to validate the global geometrical process in the case of a static configuration. Next, it is used to show the limitations of a static configuration of this type of shooting system for dynamic scenes, and a dynamic scheme is devised to allow correct capture of such scenes. After that, the effect of the different geometrical capture parameters on the 3D rendering quality, and whether they need to be adapted, is studied. Finally, some dynamic effects and their repercussions on the 3D rendering quality of dynamic scenes are analyzed using error images and image quantization tools. Simulation and experimental results are presented throughout the paper to illustrate the different points studied, and conclusions and perspectives end the paper.
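
    As a toy illustration of the perspective projection step in such a shooting/viewing pipeline (a sketch under assumed geometry: pinhole cameras with parallel optical axes spaced along x; all rig numbers are hypothetical, not the paper's model):

```c
/* Toy pinhole projection for the i-th camera of a rig with parallel
 * optical axes. With no rotation between cameras, projection reduces
 * to a per-camera horizontal offset. */
#include <stdio.h>

struct pt2 { double u, v; };

struct pt2 project(double x, double y, double z,
                   int cam, double baseline, double focal) {
    double cx = cam * baseline;          /* camera position on x axis */
    struct pt2 p;
    p.u = focal * (x - cx) / z;          /* parallel axes: offset only */
    p.v = focal * y / z;
    return p;
}

int main(void) {
    /* One scene point seen by four cameras 65 mm apart, 50 mm focal. */
    for (int cam = 0; cam < 4; cam++) {
        struct pt2 p = project(0.2, 0.1, 2.0, cam, 0.065, 0.05);
        printf("view %d: (%.4f, %.4f)\n", cam, p.u, p.v);
    }
    return 0;
}
```

    The constant per-view shift in u is what auto-stereoscopic rendering exploits; adapting baseline and focal length per frame is, in spirit, what the paper's dynamic scheme manages for moving scenes.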

  3. Parallel processing via a dual olfactory pathway in the honeybee.

    PubMed

    Brill, Martin F; Rosenbaum, Tobias; Reus, Isabelle; Kleineidam, Christoph J; Nawrot, Martin P; Rössler, Wolfgang

    2013-02-06

    In their natural environment, animals face complex and highly dynamic olfactory input. Thus vertebrates as well as invertebrates require fast and reliable processing of olfactory information. Parallel processing has been shown to improve processing speed and power in other sensory systems and is characterized by extraction of different stimulus parameters along parallel sensory information streams. Honeybees possess an elaborate olfactory system with unique neuronal architecture: a dual olfactory pathway comprising a medial projection-neuron (PN) antennal lobe (AL) protocerebral output tract (m-APT) and a lateral PN AL output tract (l-APT) connecting the olfactory lobes with higher-order brain centers. We asked whether this neuronal architecture serves parallel processing and employed a novel technique for simultaneous multiunit recordings from both tracts. The results revealed response profiles from a high number of PNs of both tracts to floral, pheromonal, and biologically relevant odor mixtures tested over multiple trials. PNs from both tracts responded to all tested odors, but with different characteristics indicating parallel processing of similar odors. Both PN tracts were activated by widely overlapping response profiles, which is a requirement for parallel processing. The l-APT PNs had broad response profiles suggesting generalized coding properties, whereas the responses of m-APT PNs were comparatively weaker and less frequent, indicating higher odor specificity. Comparison of response latencies within and across tracts revealed odor-dependent latencies. We suggest that parallel processing via the honeybee dual olfactory pathway provides enhanced odor processing capabilities serving sophisticated odor perception and olfactory demands associated with a complex olfactory world of this social insect.

  4. Widespread mechanosensing controls the structure behind the architecture in plants.

    PubMed

    Hamant, Olivier

    2013-10-01

    Mechanical forces play an instructing role for many aspects of animal cell biology, such as division, polarity and fate. Although the associated mechanoperception pathways still remain largely elusive in plants, physical cues have long been thought to guide development in parallel to biochemical factors. With the development of new imaging techniques, micromechanics tools and modeling approaches, the role of mechanical signals in plant development is now re-examined and fully integrated with modern cell biology. Using recent examples from the literature, I propose to use a multiscale perspective, from the whole plant down to the cell wall, to fully appreciate the diversity of developmental processes that depend on mechanical signals. Incidentally, this also illustrates how conceptually rich this field is. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. The game of go as a complex network

    NASA Astrophysics Data System (ADS)

    Georgeot, Bertrand; Giraud, Olivier; Kandiah, Vivek

    2014-03-01

    We have studied the game of go, one of the most ancient and complex board games, from a complex network perspective. We have defined a proper categorization of moves taking into account the local environment, and shown that in this case Zipf's law emerges from data taken from real games. The network shows differences between professional and amateur games, different levels of amateurs, and different phases of the game. Certain eigenvectors are localized on specific groups of moves which correspond to different strategies (communities of moves). The point of view developed here should make it possible to model such games more accurately and could also help in designing simulators that may one day beat good human players. Our approach could be applied to other types of games and, in parallel, shed light on the human decision-making process.

  6. Visual analysis of inter-process communication for large-scale parallel computing.

    PubMed

    Muelder, Chris; Gygi, Francois; Ma, Kwan-Liu

    2009-01-01

    In serial computation, program profiling is often helpful for optimization of key sections of code. When moving to parallel computation, not only does the code execution need to be considered but also the communication between the different processes, which can induce delays that are detrimental to performance. As the number of processes increases, so does the impact of these communication delays on performance. For large-scale parallel applications, it is critical to understand how the communication impacts performance in order to make the code more efficient. There are several tools available for visualizing program execution and communications on parallel systems. These tools generally provide either views which statistically summarize the entire program execution or process-centric views. However, process-centric visualizations do not scale well as the number of processes gets very large. In particular, the most common representation of parallel processes is a Gantt chart with a row for each process. As the number of processes increases, these charts can become difficult to work with and can even exceed screen resolution. We propose a new visualization approach that affords more scalability and then demonstrate it on systems running with up to 16,384 processes.

  7. Parallel processing for nonlinear dynamics simulations of structures including rotating bladed-disk assemblies

    NASA Technical Reports Server (NTRS)

    Hsieh, Shang-Hsien

    1993-01-01

    The principal objective of this research is to develop, test, and implement coarse-grained, parallel-processing strategies for nonlinear dynamic simulations of practical structural problems. There are contributions to four main areas: finite element modeling and analysis of rotational dynamics, numerical algorithms for parallel nonlinear solutions, automatic partitioning techniques to effect load-balancing among processors, and an integrated parallel analysis system.

  8. Layperson's preference regarding orientation of the transverse occlusal plane and commissure line from the frontal perspective.

    PubMed

    Silva, Bruno Pereira; Jiménez-Castellanos, Emilio; Finkel, Sivan; Macias, Inmaculada Redondo; Chu, Stephen J

    2017-04-01

    Facial asymmetries in features such as lip commissure and interpupillary plane canting have been described as common conditions affecting smile esthetics. When presented with these asymmetries, the clinician must choose the reference line with which to orient the transverse occlusal plane of the planned dental restorations. The purpose of the online survey described in this study was to determine lay preferences regarding the transverse occlusal plane orientation in faces that display a cant of the commissure line viewed from the frontal perspective. From a digitally created symmetrical facial model with the transverse occlusal plane and commissure line parallel to the interpupillary line (horizontal) and a model constructed in a previous study (control), a new facial model was created with 3 degrees of cant of the commissure line. Three digital tooth mountings were designed with different transverse occlusal plane orientations: parallel to the interpupillary line (A), parallel to the commissure line (B), and the mean angulation plane formed between the interpupillary and commissure lines (C), resulting in a total of 4 images. All images, including the control, were organized into 6 pairs and evaluated by 247 selected laypersons through an online Web site survey. Each participant was asked to choose the more attractive face from each of the 6 pairs of images. The control image was preferred by 72.9% to 74.5% of the participants compared with the other 3 images, all of which represented a commissure line cant. Among the 3 pairs which represent a commissure line cant, 59.1% to 61.1% preferred a transverse plane of occlusion cant (B and C) compared with a plane of occlusion parallel to the interpupillary line, and 61.1% preferred a plane of occlusion parallel to the commissure line (B) compared with the mean angulation plane (C). Laypeople prefer faces with a commissure line and transverse occlusal plane parallel to the horizontal plane or horizon. When faces present a commissure line cant, laypeople prefer a transverse occlusal plane with a similar and coincident cant. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  9. Scheduling for Parallel Supercomputing: A Historical Perspective of Achievable Utilization

    NASA Technical Reports Server (NTRS)

    Jones, James Patton; Nitzberg, Bill

    1999-01-01

    The NAS facility has operated parallel supercomputers for the past 11 years, including the Intel iPSC/860, Intel Paragon, Thinking Machines CM-5, IBM SP-2, and Cray Origin 2000. Across this wide variety of machine architectures, across a span of 10 years, across a large number of different users, and through thousands of minor configuration and policy changes, the utilization of these machines shows three general trends: (1) scheduling using a naive FIFO first-fit policy results in 40-60% utilization, (2) switching to the more sophisticated dynamic backfilling scheduling algorithm improves utilization by about 15 percentage points (yielding about 70% utilization), and (3) reducing the maximum allowable job size further increases utilization. Most surprising is the consistency of these trends. Over the lifetime of the NAS parallel systems, we made hundreds, perhaps thousands, of small changes to hardware, software, and policy, yet utilization was affected little. In particular, these results show that the goal of achieving near 100% utilization while supporting a real parallel supercomputing workload is unrealistic.
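
    The dynamic backfilling algorithm credited with the roughly 15-point utilization gain admits a compact statement of its core test. The sketch below shows the standard EASY-style backfill condition purely as an illustration; the NAS schedulers' exact rules may have differed:

```c
/* EASY-style backfill test: a queued job may start ahead of the queue
 * head if it fits on the idle nodes now and either finishes before the
 * head job's earliest possible start ("shadow time") or only uses nodes
 * the head job will not need even then. */
#include <stdio.h>

struct job { int nodes; double runtime; };

/* free_now: idle nodes; shadow: time until the head job can start;
 * extra: nodes still spare once the head job starts. */
int can_backfill(struct job j, int free_now, double shadow, int extra) {
    if (j.nodes > free_now) return 0;      /* does not fit at all      */
    if (j.runtime <= shadow) return 1;     /* done before shadow time  */
    return j.nodes <= extra;               /* or never delays the head */
}

int main(void) {
    struct job small = { 8, 1.5 }, wide = { 64, 0.5 };
    printf("small job backfills: %d\n", can_backfill(small, 16, 2.0, 4));
    printf("wide job backfills:  %d\n", can_backfill(wide, 16, 2.0, 4));
    return 0;
}
```

    Letting small jobs slip into such holes is what converts the idle fragments a FIFO first-fit policy leaves behind into the extra utilization the paper measures.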

  10. [Gender perspective relevant in many medical school subjects. Essential to perceive men and women holistically].

    PubMed

    Hamberg, Katarina

    2003-12-04

    A gender perspective in medicine implies that people are seen as biological as well as social and cultural creatures, and the concept of wholeness is important. Still, it is common for biological explanations to dominate when gender differences in various symptoms and disorders are discussed in medicine and medical training. Applying a gender perspective implies a change in that attention is also paid to social conditions for men and women in various contexts, for example in education, on the labour market, and in different ethnic groups, in parallel with biological causes. This article shows that a gender perspective is relevant in many fields of medical training. A gender perspective can bring new insights in education about the healthy and diseased body, the investigation and treatment of disease, communication and the patient-doctor relationship, as well as career and speciality choices. The need to educate teachers on gender issues is a crucial matter for those responsible for the academic syllabus.

  11. The Processing of Somatosensory Information Shifts from an Early Parallel into a Serial Processing Mode: A Combined fMRI/MEG Study.

    PubMed

    Klingner, Carsten M; Brodoehl, Stefan; Huonker, Ralph; Witte, Otto W

    2016-01-01

    The question regarding whether somatosensory inputs are processed in parallel or in series has not been clearly answered. Several studies that have applied dynamic causal modeling (DCM) to fMRI data have arrived at seemingly divergent conclusions. However, these divergent results could be explained by the hypothesis that the processing route of somatosensory information changes with time. Specifically, we suggest that somatosensory stimuli are processed in parallel only during the early stage, whereas the processing is later dominated by serial processing. This hypothesis was revisited in the present study based on fMRI analyses of tactile stimuli and the application of DCM to magnetoencephalographic (MEG) data collected during sustained (260 ms) tactile stimulation. Bayesian model comparisons were used to infer the processing stream. We demonstrated that the favored processing stream changes over time. We found that the neural activity elicited in the first 100 ms following somatosensory stimuli is best explained by models that support a parallel processing route, whereas a serial processing route is subsequently favored. These results suggest that the secondary somatosensory area (SII) receives information regarding a new stimulus in parallel with the primary somatosensory area (SI), whereas later processing in the SII is dominated by the preprocessed input from the SI.

  12. The Processing of Somatosensory Information Shifts from an Early Parallel into a Serial Processing Mode: A Combined fMRI/MEG Study

    PubMed Central

    Klingner, Carsten M.; Brodoehl, Stefan; Huonker, Ralph; Witte, Otto W.

    2016-01-01

    The question regarding whether somatosensory inputs are processed in parallel or in series has not been clearly answered. Several studies that have applied dynamic causal modeling (DCM) to fMRI data have arrived at seemingly divergent conclusions. However, these divergent results could be explained by the hypothesis that the processing route of somatosensory information changes with time. Specifically, we suggest that somatosensory stimuli are processed in parallel only during the early stage, whereas the processing is later dominated by serial processing. This hypothesis was revisited in the present study based on fMRI analyses of tactile stimuli and the application of DCM to magnetoencephalographic (MEG) data collected during sustained (260 ms) tactile stimulation. Bayesian model comparisons were used to infer the processing stream. We demonstrated that the favored processing stream changes over time. We found that the neural activity elicited in the first 100 ms following somatosensory stimuli is best explained by models that support a parallel processing route, whereas a serial processing route is subsequently favored. These results suggest that the secondary somatosensory area (SII) receives information regarding a new stimulus in parallel with the primary somatosensory area (SI), whereas later processing in the SII is dominated by the preprocessed input from the SI. PMID:28066197

  13. Leisure as a Campus Developmental Resource.

    ERIC Educational Resources Information Center

    Bloland, Paul A.

    Despite the obvious parallel which can be drawn between the uses of leisure to benefit the individual, and the use of nonacademic activities and environment to promote individual growth and development, the two perspectives have evolved independently on college campuses. Research into the role, function, and outcomes of leisure have shown that…

  14. Towards a New Image of American Indian Women.

    ERIC Educational Resources Information Center

    Jaimes, Marie Annette

    1982-01-01

    Examines matriarchy, androgyny, and spiritual unity among men and women from a traditional indigenous world view. Parallels this with brain theory from both Indian and non-Indian perspectives. Asserts that Indian women must reclaim their "power" and strength by finding that source in their traditional past and among their spiritual…

  15. Career Counseling in a Volatile Job Market: Tiedeman's Perspective Revisited

    ERIC Educational Resources Information Center

    Duys, David K.; Ward, Janice E.; Maxwell, Jane A.; Eaton-Comerford, Leslie

    2008-01-01

    This article explores implications of Tiedeman's original theory for career counselors. Some components of the theory seem to be compatible with existing volatile job market conditions. Notions of career path recycling, development in reverse, nonlinear progress, and parallel streams in career development are explored. Suggestions are made for…

  16. Online Socialization through Social Software and Networks from an Educational Perspective

    ERIC Educational Resources Information Center

    Gülbahar, Yasemin

    2015-01-01

    The potential represented by the usage of Internet-based communication technologies in parallel with e-instruction is enabling learners to cooperate and collaborate throughout the world. However, an important dimension, namely the socialization of learners through online dialogues via e-mail, discussion forums, chats, blogs, wikis and virtual…

  17. Student Teachers' Team Teaching during Field Experiences: An Evaluation by Their Mentors

    ERIC Educational Resources Information Center

    Simons, Mathea; Baeten, Marlies

    2016-01-01

    Since collaboration within schools gains importance, teacher educators are looking for alternative models of field experience inspired by collaborative learning. Team teaching is such a model. This study explores two team teaching models (parallel and sequential teaching) by investigating the mentors' perspective. Semi-structured interviews were…

  18. The remote sensing image segmentation mean shift algorithm parallel processing based on MapReduce

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Zhou, Liqing

    2015-12-01

    With the development of satellite remote sensing technology and the growth of remote sensing image data, traditional remote sensing image segmentation techniques cannot meet the processing and storage requirements of massive remote sensing imagery. This article brings cloud computing and parallel computing technology into the remote sensing image segmentation process, building a cheap and efficient computer cluster that uses parallel processing to implement MeanShift segmentation of remote sensing images on the MapReduce model. This not only preserves the quality of remote sensing image segmentation but also improves segmentation speed, better meeting real-time requirements. Parallelizing the MeanShift segmentation algorithm on MapReduce in this way is of clear practical significance and value.
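
    As a sketch of the kernel each map task would run (illustrative only: the MapReduce wiring is omitted, and the flat kernel over a 1-D intensity feature is an assumption, not the paper's exact formulation):

```c
/* Mean shift on a 1-D feature: repeatedly move a point to the mean of
 * its neighbors within `bandwidth` until it converges to a density
 * mode. A map task could run this per pixel block; a reduce step could
 * then merge converged modes into segments. */
#include <math.h>
#include <stdio.h>

/* One mean shift iteration with a flat kernel. */
double shift_once(double x, const double *samples, int n, double bandwidth) {
    double sum = 0.0;
    int count = 0;
    for (int i = 0; i < n; i++)
        if (fabs(samples[i] - x) <= bandwidth) { sum += samples[i]; count++; }
    return count ? sum / count : x;
}

/* Iterate until the shift is tiny: x converges to a density mode. */
double find_mode(double x, const double *s, int n, double h) {
    for (int it = 0; it < 100; it++) {
        double nx = shift_once(x, s, n, h);
        if (fabs(nx - x) < 1e-6) return nx;
        x = nx;
    }
    return x;
}

int main(void) {
    double pixels[] = { 10, 11, 12, 10.5, 50, 51, 49.5, 52 };
    int n = sizeof pixels / sizeof pixels[0];
    printf("mode from 10: %.2f\n", find_mode(10.0, pixels, n, 5.0));
    printf("mode from 52: %.2f\n", find_mode(52.0, pixels, n, 5.0));
    return 0;
}
```

    Because each pixel's mode search reads the data but writes only its own result, the per-pixel work partitions cleanly across map tasks, which is what makes the algorithm a natural fit for MapReduce.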

  19. Age-related emotional bias in processing two emotionally valenced tasks.

    PubMed

    Allen, Philip A; Lien, Mei-Ching; Jardin, Elliott

    2017-01-01

    Previous studies suggest that older adults process positive emotions more efficiently than negative emotions, whereas younger adults show the reverse effect. We examined whether this age-related difference in emotional bias still occurs when attention is engaged in two emotional tasks. We used a psychological refractory period paradigm and varied the emotional valence of Task 1 and Task 2. In both experiments, Task 1 was emotional face discrimination (happy vs. angry faces) and Task 2 was sound discrimination (laugh, punch, vs. cork pop in Experiment 1 and laugh vs. scream in Experiment 2). The backward emotional correspondence effect of a positively or negatively valenced Task 2 on Task 1 was measured. In both experiments, younger adults showed a backward correspondence effect from a negatively valenced Task 2, suggesting parallel processing of negatively valenced stimuli. Older adults showed a similar negativity bias in Experiment 2 with a more salient negative sound ("scream" relative to "punch"). These results are consistent with the arousal-biased competition model [Mather and Sutherland (Perspectives on Psychological Science 6:114-133, 2011)], suggesting that emotional arousal modulates top-down attentional control settings (emotional regulation) with age.

  20. The Design and Evaluation of "CAPTools"--A Computer Aided Parallelization Toolkit

    NASA Technical Reports Server (NTRS)

    Yan, Jerry; Frumkin, Michael; Hribar, Michelle; Jin, Haoqiang; Waheed, Abdul; Johnson, Steve; Cross, Jark; Evans, Emyr; Ierotheou, Constantinos; Leggett, Pete

    1998-01-01

    Writing applications for high performance computers is a challenging task. Although writing code by hand still offers the best performance, it is extremely costly and often not very portable. The Computer Aided Parallelization Tools (CAPTools) are a toolkit designed to help automate the mapping of sequential FORTRAN scientific applications onto multiprocessors. CAPTools consists of the following major components: an inter-procedural dependence analysis module that incorporates user knowledge; a 'self-propagating' data partitioning module driven via user guidance; an execution control mask generation and optimization module for the user to fine tune parallel processing of individual partitions; a program transformation/restructuring facility for source code clean up and optimization; a set of browsers through which the user interacts with CAPTools at each stage of the parallelization process; and a code generator supporting multiple programming paradigms on various multiprocessors. Besides describing the rationale behind the architecture of CAPTools, the parallelization process is illustrated via case studies involving structured and unstructured meshes. The programming process and the performance of the generated parallel programs are compared against other programming alternatives based on the NAS Parallel Benchmarks, ARC3D and other scientific applications. Based on these results, a discussion on the feasibility of constructing architectural independent parallel applications is presented.

  1. Ethical issues in the reuse of qualitative data: perspectives from literature, practice, and participants.

    PubMed

    Yardley, Sarah J; Watts, Kate M; Pearson, Jennifer; Richardson, Jane C

    2014-01-01

    In this article, we explore ethical issues in qualitative secondary analysis through a comparison of the literature with practitioner and participant perspectives. To achieve this, we integrated critical narrative review findings with data from two discussion groups: qualitative researchers and research users/consumers. In the literature, we found that theoretical debate ran parallel to practical action rather than being integrated with it. We identified an important and novel theme of relationships that was emerging from the perspectives of researchers and users. Relationships were significant with respect to trust, sharing data, transparency and clarity, anonymity, permissions, and responsibility. We provide an example of practice development that we hope will prompt researchers to re-examine the issues in their own setting. Informing the research community of research practitioner and user perspectives on ethical issues in the reuse of qualitative data is the first step toward developing mechanisms to better integrate theoretical and empirical work.

  2. Reusable Rocket Engine Operability Modeling and Analysis

    NASA Technical Reports Server (NTRS)

    Christenson, R. L.; Komar, D. R.

    1998-01-01

    This paper describes the methodology, model, input data, and analysis results of a reusable launch vehicle engine operability study conducted with the goal of supporting design from an operations perspective. Paralleling performance analyses in schedule and method, such support requires the use of metrics in a validated operations model useful for design, sensitivity, and trade studies. Operations analysis in this view is one of several design functions. An operations concept was developed for a given engine concept, the predicted operations and maintenance processes were incorporated into simulation models, historical operations data at a level of detail suitable to the model objectives were collected, analyzed, and formatted for use with the models, the simulations were run, and results were collected and presented. The input data included scheduled and unscheduled timeline and resource information collected into a Space Transportation System (STS) Space Shuttle Main Engine (SSME) historical launch operations database. The results underline the importance not only of reliable hardware but of improvements to operations and corrective maintenance processes.

  3. The Influence of Youth Music Television Viewership on Changes in Cigarette Use and Association with Smoking Peers: A Social Identity, Reinforcing Spirals Perspective

    PubMed Central

    Slater, Michael D.; Hayes, Andrew F.

    2010-01-01

    Prior research has found strong evidence of a prospective association between R-rated movie exposure and teen smoking. Using parallel process latent-growth modeling, the present study examines prospective associations between viewing of music video channels on television (e.g., MTV and VH-1) and changes over time in smoking and association with smoking peers. Results showed that baseline viewing of music-oriented channels such as MTV and VH-1 robustly predicted increasing trajectories of smoking and of associating with smoking peers, even after application of a variety of controls including parent reports of monitoring behavior. These results are consistent with arguments from the reinforcing spirals model that such media use serves as a means of developing emergent adolescent social identities consistent with associating with smoking peers and acquiring smoking and other risk behaviors; the evidence also suggests that media choice in reinforcing spiral processes is dynamic and evolves as social identity evolves. PMID:21318085

  4. An Integrative Theory of Psychotherapy: Research and Practice

    PubMed Central

    Epstein, Seymour; Epstein, Martha L.

    2016-01-01

    A dual-process personality theory and supporting research are presented. The dual processes comprise an experiential system and a rational system. The experiential system is an adaptive, associative learning system that humans share with other higher-order animals. The rational system is a uniquely human, primarily verbal, reasoning system. It is assumed that when humans developed language they did not abandon their previous ways of adapting, they simply added language to their experiential system. The two systems are assumed to operate in parallel and are bi-directionally interactive. The validity of these assumptions is supported by extensive research. Of particular relevance for psychotherapy, the experiential system, which is compatible with evolutionary theory, replaces the Freudian maladaptive unconscious system that is indefensible from an evolutionary perspective, as sub-human animals would then have only a single system that is maladaptive. The aim of psychotherapy is to produce constructive changes in the experiential system. Changes in the rational system are useful only to the extent that they contribute to constructive changes in the experiential system. PMID:27672302

  5. An Integrative Theory of Psychotherapy: Research and Practice.

    PubMed

    Epstein, Seymour; Epstein, Martha L

    2016-06-01

    A dual-process personality theory and supporting research are presented. The dual processes comprise an experiential system and a rational system. The experiential system is an adaptive, associative learning system that humans share with other higher-order animals. The rational system is a uniquely human, primarily verbal, reasoning system. It is assumed that when humans developed language they did not abandon their previous ways of adapting, they simply added language to their experiential system. The two systems are assumed to operate in parallel and are bi-directionally interactive. The validity of these assumptions is supported by extensive research. Of particular relevance for psychotherapy, the experiential system, which is compatible with evolutionary theory, replaces the Freudian maladaptive unconscious system that is indefensible from an evolutionary perspective, as sub-human animals would then have only a single system that is maladaptive. The aim of psychotherapy is to produce constructive changes in the experiential system. Changes in the rational system are useful only to the extent that they contribute to constructive changes in the experiential system.

  6. Indigenous Bacteria and Fungi Drive Traditional Kimoto Sake Fermentations

    PubMed Central

    Bokulich, Nicholas A.; Ohta, Moe; Lee, Morgan

    2014-01-01

    Sake (Japanese rice wine) production is a complex, multistage process in which fermentation is performed by a succession of mixed fungi and bacteria. This study employed high-throughput rRNA marker gene sequencing, quantitative PCR, and terminal restriction fragment length polymorphism to characterize the bacterial and fungal communities of spontaneous sake production from koji to product as well as brewery equipment surfaces. Results demonstrate a dynamic microbial succession, with koji and early moto fermentations dominated by Bacillus, Staphylococcus, and Aspergillus flavus var. oryzae, succeeded by Lactobacillus spp. and Saccharomyces cerevisiae later in the fermentations. The microbiota driving these fermentations were also prevalent in the production environment, illustrating the reservoirs and routes for microbial contact in this traditional food fermentation. Interrogating the microbial consortia of production environments in parallel with food products is a valuable approach for understanding the complete ecology of food production systems and can be applied to any food system, leading to enlightened perspectives for process control and food safety. PMID:24973064

  7. A review of GPU-based medical image reconstruction.

    PubMed

    Després, Philippe; Jia, Xun

    2017-10-01

    Tomographic image reconstruction is a computationally demanding task, even more so when advanced models are used to describe a more complete and accurate picture of the image formation process. Such advanced modeling and reconstruction algorithms can lead to better images, often with less dose, but at the price of long calculation times that are hardly compatible with clinical workflows. Fortunately, reconstruction tasks can often be executed advantageously on Graphics Processing Units (GPUs), which are exploited as massively parallel computational engines. This review paper focuses on recent developments made in GPU-based medical image reconstruction, from a CT, PET, SPECT, MRI and US perspective. Strategies and approaches to get the most out of GPUs in image reconstruction are presented as well as innovative applications arising from an increased computing capacity. The future of GPU-based image reconstruction is also envisioned, based on current trends in high-performance computing. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  8. Indigenous bacteria and fungi drive traditional kimoto sake fermentations.

    PubMed

    Bokulich, Nicholas A; Ohta, Moe; Lee, Morgan; Mills, David A

    2014-09-01

    Sake (Japanese rice wine) production is a complex, multistage process in which fermentation is performed by a succession of mixed fungi and bacteria. This study employed high-throughput rRNA marker gene sequencing, quantitative PCR, and terminal restriction fragment length polymorphism to characterize the bacterial and fungal communities of spontaneous sake production from koji to product as well as brewery equipment surfaces. Results demonstrate a dynamic microbial succession, with koji and early moto fermentations dominated by Bacillus, Staphylococcus, and Aspergillus flavus var. oryzae, succeeded by Lactobacillus spp. and Saccharomyces cerevisiae later in the fermentations. The microbiota driving these fermentations were also prevalent in the production environment, illustrating the reservoirs and routes for microbial contact in this traditional food fermentation. Interrogating the microbial consortia of production environments in parallel with food products is a valuable approach for understanding the complete ecology of food production systems and can be applied to any food system, leading to enlightened perspectives for process control and food safety. Copyright © 2014, American Society for Microbiology. All Rights Reserved.

  9. Face Recognition in Humans and Machines

    NASA Astrophysics Data System (ADS)

    O'Toole, Alice; Tistarelli, Massimo

    The study of human face recognition by psychologists and neuroscientists has run parallel to the development of automatic face recognition technologies by computer scientists and engineers. In both cases, there are analogous steps of data acquisition, image processing, and the formation of representations that can support the complex and diverse tasks we accomplish with faces. These processes can be understood and compared in the context of their neural and computational implementations. In this chapter, we present the essential elements of face recognition by humans and machines, taking a perspective that spans psychological, neural, and computational approaches. From the human side, we overview the methods and techniques used in the neurobiology of face recognition, the underlying neural architecture of the system, the role of visual attention, and the nature of the representations that emerge. From the computational side, we discuss face recognition technologies and the strategies they use to overcome challenges to robust operation over viewing parameters. Finally, we conclude the chapter with a look at some recent studies that compare human and machine performance at face recognition.

  10. The Influence of Youth Music Television Viewership on Changes in Cigarette Use and Association with Smoking Peers: A Social Identity, Reinforcing Spirals Perspective.

    PubMed

    Slater, Michael D; Hayes, Andrew F

    2010-12-01

    Prior research has found strong evidence of a prospective association between R-rated movie exposure and teen smoking. Using parallel process latent-growth modeling, the present study examines prospective associations between viewing of music video channels on television (e.g., MTV and VH-1) and changes over time in smoking and association with smoking peers. Results showed that baseline viewing of music-oriented channels such as MTV and VH-1 robustly predicted increasing trajectories of smoking and of associating with smoking peers, even after application of a variety of controls including parent reports of monitoring behavior. These results are consistent with the arguments from the reinforcing spirals model that such media use serves as a means of developing emergent adolescent social identities consistent with associating with smoking peers and acquiring smoking and other risk behaviors; the evidence also suggests that media choices in reinforcing spiral processes are dynamic and evolve as social identity evolves.
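
    For readers unfamiliar with the method, a parallel process latent-growth model fits two growth curves at once, here smoking (y) and affiliation with smoking peers (x), and lets their growth factors covary. A generic sketch of the model class (not the authors' exact specification):

        y_{it} = \eta^{(y)}_{0i} + \eta^{(y)}_{1i}\,\lambda_t + \varepsilon^{(y)}_{it}, \qquad
        x_{it} = \eta^{(x)}_{0i} + \eta^{(x)}_{1i}\,\lambda_t + \varepsilon^{(x)}_{it}

    The intercepts \eta_{0i} and slopes \eta_{1i} of the two processes are allowed to covary; those cross-process covariances carry the "parallel" part of the model, and baseline viewing enters as a predictor of the growth factors.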

  11. Self-optimizing approach for automated laser resonator alignment

    NASA Astrophysics Data System (ADS)

    Brecher, C.; Schmitt, R.; Loosen, P.; Guerrero, V.; Pyschny, N.; Pavim, A.; Gatej, A.

    2012-02-01

    Nowadays, the assembly of laser systems is dominated by manual operations, involving elaborate alignment by means of adjustable mountings. From a competition standpoint, the most challenging problem in laser source manufacturing is price pressure, a result of cost competition exerted mainly by Asian manufacturers. From an economic point of view, automated assembly of laser systems is a better approach to producing more reliable units at lower cost. However, the step from today's manual solutions towards automated assembly requires parallel developments in product design, automation equipment, and assembly processes. This paper briefly introduces the idea of self-optimizing technical systems as a new approach towards highly flexible automation. Technically, the work focuses on the precision assembly of laser resonators, which is one of the final and most crucial assembly steps in terms of beam quality and laser power. The paper presents a new design approach for miniaturized laser systems and new automation concepts for robot-based precision assembly, as well as passive and active alignment methods based on a self-optimizing approach. Very promising results have already been achieved, considerably reducing the duration and complexity of laser resonator assembly. These results as well as future development perspectives are discussed.

  12. Serial and parallel attentive visual searches: evidence from cumulative distribution functions of response times.

    PubMed

    Sung, Kyongje

    2008-12-01

    Participants searched a visual display for a target among distractors. Each of 3 experiments tested a condition proposed to require attention and for which certain models propose a serial search. Serial versus parallel processing was tested by examining effects on response time means and cumulative distribution functions. In 2 conditions, the results suggested parallel rather than serial processing, even though the tasks produced significant set-size effects. Serial processing was produced only in a condition with a difficult discrimination and a very large set-size effect. The results support C. Bundesen's (1990) claim that an extreme set-size effect leads to serial processing. Implications for parallel models of visual selection are discussed.

  13. Methods for design and evaluation of parallel computing systems (The PISCES project)

    NASA Technical Reports Server (NTRS)

    Pratt, Terrence W.; Wise, Robert; Haught, Mary Jo

    1989-01-01

    The PISCES project started in 1984 under the sponsorship of the NASA Computational Structural Mechanics (CSM) program. A PISCES 1 programming environment and parallel FORTRAN were implemented in 1984 for the DEC VAX (using UNIX processes to simulate parallel processes). This system was used for experimentation with parallel programs for scientific applications and AI (dynamic scene analysis) applications. PISCES 1 was ported to a network of Apollo workstations by N. Fitzgerald.

  14. Parallel computing in genomic research: advances and applications

    PubMed Central

    Ocaña, Kary; de Oliveira, Daniel

    2015-01-01

    Today's genomic experiments have to process so-called "biological big data," now reaching terabyte and petabyte scale. Processing this huge amount of data on their own workstations may take scientists weeks or months. Parallelism techniques and high-performance computing (HPC) environments can be applied to reduce the total processing time and to ease the management, treatment, and analysis of these data. However, running bioinformatics experiments in HPC environments such as clouds, grids, clusters, and graphics processing units requires scientists to integrate computational, biological, and mathematical techniques and technologies. Several solutions have been proposed to allow scientists to process their genomic experiments using HPC capabilities and parallelism techniques. This article presents a systematic review of the literature surveying the most recently published research involving genomics and parallel computing. Our objective is to gather the main characteristics, benefits, and challenges that scientists can consider when running their genomic experiments to benefit from parallelism techniques and HPC capabilities. PMID:26604801

  15. Massively parallel processor computer

    NASA Technical Reports Server (NTRS)

    Fung, L. W. (Inventor)

    1983-01-01

    An apparatus for processing multidimensional data with strong spatial characteristics, such as raw image data, characterized by a large number of parallel data streams in an ordered array is described. It comprises a large number (e.g., 16,384 in a 128 x 128 array) of parallel processing elements operating simultaneously and independently on single bit slices of a corresponding array of incoming data streams under control of a single set of instructions. Each of the processing elements comprises a bidirectional data bus in communication with a register for storing single bit slices together with a random access memory unit and associated circuitry, including a binary counter/shift register device, for performing logical and arithmetical computations on the bit slices, and an I/O unit for interfacing the bidirectional data bus with the data stream source. The massively parallel processor architecture enables very high speed processing of large amounts of ordered parallel data, including spatial translation by shifting or sliding of bits vertically or horizontally to neighboring processing elements.
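
    The bit-slice, neighbor-shift style of operation described above can be modeled in a few lines of NumPy. This is only an illustrative sketch of the SIMD idea (one single-bit slice per processing element, one instruction applied everywhere at once), not the patented hardware; the 128 x 128 size is taken from the record.

        import numpy as np

        # One single-bit slice per processing element in a 128 x 128 array
        # (contents are random placeholders).
        bits = np.random.randint(0, 2, size=(128, 128), dtype=np.uint8)

        # A single instruction executed by every element simultaneously:
        # slide the whole bit plane one element horizontally, mimicking the
        # shift of bits to neighboring processing elements.
        shifted = np.roll(bits, shift=1, axis=1)

        # Every element performs the same logical operation on its slice.
        result = bits & shifted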

  16. Parallel computing in genomic research: advances and applications.

    PubMed

    Ocaña, Kary; de Oliveira, Daniel

    2015-01-01

    Today's genomic experiments have to process so-called "biological big data," now reaching terabyte and petabyte scale. Processing this huge amount of data on their own workstations may take scientists weeks or months. Parallelism techniques and high-performance computing (HPC) environments can be applied to reduce the total processing time and to ease the management, treatment, and analysis of these data. However, running bioinformatics experiments in HPC environments such as clouds, grids, clusters, and graphics processing units requires scientists to integrate computational, biological, and mathematical techniques and technologies. Several solutions have been proposed to allow scientists to process their genomic experiments using HPC capabilities and parallelism techniques. This article presents a systematic review of the literature surveying the most recently published research involving genomics and parallel computing. Our objective is to gather the main characteristics, benefits, and challenges that scientists can consider when running their genomic experiments to benefit from parallelism techniques and HPC capabilities.

  17. Parallel processing and expert systems

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Lau, Sonie

    1991-01-01

    Whether it be monitoring the thermal subsystem of Space Station Freedom or controlling the navigation of the autonomous rover on Mars, NASA missions in the 1990s cannot enjoy an increased level of autonomy without the efficient use of expert systems. Merely increasing the computational speed of uniprocessors may not guarantee that real-time demands are met for large expert systems. Speed-up via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial labs in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems was surveyed. The survey is divided into three major sections: (1) multiprocessors for parallel expert systems; (2) parallel languages for symbolic computations; and (3) measurements of parallelism of expert systems. Results to date indicate that the parallelism achieved for these systems is small. In order to obtain greater speed-ups, data parallelism and application parallelism must be exploited.

  18. Abstract processing and observer vantage perspective in dysphoria.

    PubMed

    Hart-Smith, Ly; Moulds, Michelle L

    2018-05-07

    Abstract processing and an observer vantage perspective have been associated with negative consequences in depression. We investigated the relationship between mode of processing and vantage perspective bidirectionally in high and low dysphoric individuals, using abstract and concrete descriptions of experimenter-provided everyday actions. When vantage perspective was manipulated and processing mode was measured (Study 1a), participants who adopted a field perspective did not differ from those who adopted an observer perspective in their preference for abstract descriptions, irrespective of dysphoria status. When processing mode was manipulated and vantage perspective was measured (Study 1b), participants provided with abstract descriptions had a greater tendency to adopt an observer perspective than those provided with concrete descriptions, irrespective of dysphoria status. These results were replicated in larger online samples (Studies 2a and 2b). Together, they indicate a unidirectional causal relationship, whereby processing mode causally influences vantage perspective, in contrast to the bidirectional relationship previously reported in an unselected sample (Libby, Shaeffer, & Eibach, 2009). Further, these findings demonstrate that abstract processing increases the likelihood of adopting an observer perspective, and support targeting abstract processing in the treatment of depression to address the negative consequences associated with both abstract processing and recalling/imagining events from an observer perspective. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  19. Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    A modular process that can efficiently solve large scale multidisciplinary problems using massively parallel supercomputers is presented. The process integrates disciplines with diverse physical characteristics by retaining the efficiency of individual disciplines. Computational domain independence of individual disciplines is maintained using a meta programming approach. The process integrates disciplines without affecting the combined performance. Results are demonstrated for large scale aerospace problems on several supercomputers. The super scalability and portability of the approach is demonstrated on several parallel computers.

  20. Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Byun, Chansup; Kwak, Dochan (Technical Monitor)

    2001-01-01

    A modular process that can efficiently solve large scale multidisciplinary problems using massively parallel supercomputers is presented. The process integrates disciplines with diverse physical characteristics by retaining the efficiency of individual disciplines. Computational domain independence of individual disciplines is maintained using a meta programming approach. The process integrates disciplines without affecting the combined performance. Results are demonstrated for large scale aerospace problems on several supercomputers. The super scalability and portability of the approach is demonstrated on several parallel computers.

  1. Parallel implementation of all-digital timing recovery for high-speed and real-time optical coherent receivers.

    PubMed

    Zhou, Xian; Chen, Xue

    2011-05-09

    Digital coherent receivers combine coherent detection with digital signal processing (DSP) to compensate for transmission impairments, and are therefore a promising candidate for future high-speed optical transmission systems. However, the maximum symbol rate supported by such real-time receivers is limited by the processing rate of the hardware. To cope with this difficulty, parallel processing algorithms are imperative. In this paper, we propose a novel parallel digital timing recovery loop (PDTRL) based on our previous work. Furthermore, to increase the receiver's tolerance to dynamic dispersion, we embed a parallel adaptive equalizer in the PDTRL. This parallel joint scheme (PJS) can be used to complete synchronization, equalization, and polarization de-multiplexing simultaneously. Finally, we demonstrate that the PDTRL and PJS allow the hardware to process a 112 Gbit/s POLMUX-DQPSK signal at clock rates in the hundreds of MHz. © 2011 Optical Society of America
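
    As a generic illustration of the parallelization idea (this is not the authors' PDTRL): a front end can demultiplex the incoming sample stream into n lanes so that hardware clocked in the hundreds of MHz can keep pace with a much faster signal. A minimal sketch:

        import numpy as np

        def demux_lanes(samples, n_lanes):
            # Split a high-rate sample stream into n_lanes parallel streams;
            # each lane then runs at 1/n_lanes of the input rate, which is
            # what lets comparatively slow hardware process a fast signal.
            usable = len(samples) - len(samples) % n_lanes
            return samples[:usable].reshape(-1, n_lanes).T

        lanes = demux_lanes(np.arange(16), n_lanes=4)  # 4 lanes of 4 samples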

  2. Spatially parallel processing of within-dimension conjunctions.

    PubMed

    Linnell, K J; Humphreys, G W

    2001-01-01

    Within-dimension conjunction search for red-green targets amongst red-blue, and blue-green, nontargets is extremely inefficient (Wolfe et al, 1990 Journal of Experimental Psychology: Human Perception and Performance 16 879-892). We tested whether pairs of red-green conjunction targets can nevertheless be processed spatially in parallel. Participants made speeded detection responses whenever a red-green target was present. Across trials where a second identical target was present, the distribution of detection times was compatible with the assumption that targets were processed in parallel (Miller, 1982 Cognitive Psychology 14 247-279). We show that this was not an artifact of response-competition or feature-based processing. We suggest that within-dimension conjunctions can be processed spatially in parallel. Visual search for such items may be inefficient owing to within-dimension grouping between items.

  3. Neuronal basis of speech comprehension.

    PubMed

    Specht, Karsten

    2014-01-01

    Verbal communication does not rely only on the simple perception of auditory signals. Rather, it is parallel and integrative processing of linguistic and non-linguistic information, involving temporal and frontal areas in particular. This review describes the inherent complexity of auditory speech comprehension from a functional-neuroanatomical perspective. The review is divided into two parts. In the first part, the structural and functional asymmetry of language-relevant structures is discussed. The second part of the review discusses recent neuroimaging studies, which coherently demonstrate that speech comprehension processes rely on a hierarchical network involving the temporal, parietal, and frontal lobes. Further, the results support the dual-stream model for speech comprehension, with a dorsal stream for auditory-motor integration, and a ventral stream for extracting meaning but also the processing of sentences and narratives. Specific patterns of functional asymmetry between the left and right hemispheres can also be demonstrated. The review article concludes with a discussion of interactions between the dorsal and ventral streams, particularly the involvement of motor-related areas in speech perception, and outlines some remaining unresolved issues. This article is part of a Special Issue entitled Human Auditory Neuroimaging. Copyright © 2013 Elsevier B.V. All rights reserved.

  4. Observation of layered antiferromagnetism in self-assembled parallel NiSi nanowire arrays on Si(110) by spin-polarized scanning tunneling spectromicroscopy

    NASA Astrophysics Data System (ADS)

    Hong, Ie-Hong; Hsu, Hsin-Zan

    2018-03-01

    The layered antiferromagnetism of parallel NiSi nanowire (NW) arrays self-assembled on Si(110) has been observed at room temperature by direct imaging of both the topographies and magnetic domains using spin-polarized scanning tunneling microscopy/spectroscopy (SP-STM/STS). The topographic STM images reveal that the self-assembled unidirectional and parallel NiSi NWs grow into the Si(110) substrate along the [\bar{1}10] direction (i.e. endotaxial growth) and exhibit multiple-layer growth. The spatially-resolved SP-STS maps show that these parallel NiSi NWs of different heights produce two opposite magnetic domains, depending on whether a NW's height corresponds to an even or an odd number of layers in the layer stack. This layer-wise antiferromagnetic structure can be attributed to an antiferromagnetic interlayer exchange coupling between adjacent layers in the multiple-layer NiSi NW with a B2 (CsCl-type) crystal structure. Such an endotaxial heterostructure of parallel magnetic NiSi NW arrays with layered antiferromagnetic ordering in Si(110) provides a new and important perspective for the development of novel Si-based spintronic nanodevices.

  5. Parallel computation with molecular-motor-propelled agents in nanofabricated networks.

    PubMed

    Nicolau, Dan V; Lard, Mercy; Korten, Till; van Delft, Falco C M J M; Persson, Malin; Bengtsson, Elina; Månsson, Alf; Diez, Stefan; Linke, Heiner; Nicolau, Dan V

    2016-03-08

    The combinatorial nature of many important mathematical problems, including nondeterministic-polynomial-time (NP)-complete problems, places a severe limitation on the problem size that can be solved with conventional, sequentially operating electronic computers. There have been significant efforts in conceiving parallel-computation approaches in the past, for example: DNA computation, quantum computation, and microfluidics-based computation. However, these approaches have not proven, so far, to be scalable and practical from a fabrication and operational perspective. Here, we report the foundations of an alternative parallel-computation system in which a given combinatorial problem is encoded into a graphical, modular network that is embedded in a nanofabricated planar device. Exploring the network in a parallel fashion using a large number of independent, molecular-motor-propelled agents then solves the mathematical problem. This approach uses orders of magnitude less energy than conventional computers, thus addressing issues related to power consumption and heat dissipation. We provide a proof-of-concept demonstration of such a device by solving, in a parallel fashion, the small instance {2, 5, 9} of the subset sum problem, which is a benchmark NP-complete problem. Finally, we discuss the technical advances necessary to make our system scalable with presently available technology.
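
    In software terms, each motor-propelled agent corresponds to one root-to-exit path through the network, i.e., to one subset, and the exit it reaches encodes that subset's sum. A brute-force sketch for the instance reported in the paper:

        from itertools import combinations

        def subset_sums(values):
            # Each "agent" explores one subset; the exit reached is its sum.
            exits = {}
            for r in range(len(values) + 1):
                for subset in combinations(values, r):
                    exits.setdefault(sum(subset), []).append(subset)
            return exits

        print(sorted(subset_sums((2, 5, 9))))  # [0, 2, 5, 7, 9, 11, 14, 16]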

  6. Hadoop neural network for parallel and distributed feature selection.

    PubMed

    Hodge, Victoria J; O'Keefe, Simon; Austin, Jim

    2016-06-01

    In this paper, we introduce a theoretical basis for a Hadoop-based neural network for parallel and distributed feature selection in Big Data sets. It is underpinned by an associative memory (binary) neural network which is highly amenable to parallel and distributed processing and fits with the Hadoop paradigm. There are many feature selectors described in the literature which all have various strengths and weaknesses. We present the implementation details of five feature selection algorithms constructed using our artificial neural network framework embedded in Hadoop YARN. Hadoop allows parallel and distributed processing. Each feature selector can be divided into subtasks and the subtasks can then be processed in parallel. Multiple feature selectors can also be processed simultaneously (in parallel) allowing multiple feature selectors to be compared. We identify commonalities among the five feature selectors. All can be processed in the framework using a single representation and the overall processing can also be greatly reduced by only processing the common aspects of the feature selectors once and propagating these aspects across all five feature selectors as necessary. This allows the best feature selector and the actual features to select to be identified for large and high dimensional data sets through exploiting the efficiency and flexibility of embedding the binary associative-memory neural network in Hadoop. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
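
    Hadoop and YARN specifics aside, the division of a feature selector into independent per-feature subtasks can be illustrated with ordinary Python multiprocessing. The correlation criterion below is a toy stand-in, not one of the paper's five selectors:

        import numpy as np
        from concurrent.futures import ProcessPoolExecutor

        def score(args):
            X, y, j = args
            # Toy filter criterion: |correlation| of feature j with the target.
            return j, abs(np.corrcoef(X[:, j], y)[0, 1])

        def parallel_scores(X, y, workers=4):
            # Each feature is an independent subtask, so all scores can be
            # computed in parallel and the best features chosen afterwards.
            tasks = [(X, y, j) for j in range(X.shape[1])]
            with ProcessPoolExecutor(max_workers=workers) as pool:
                return dict(pool.map(score, tasks))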

  7. Endpoint-based parallel data processing with non-blocking collective instructions in a parallel active messaging interface of a parallel computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archer, Charles J; Blocksome, Michael A; Cernohous, Bob R

    Methods, apparatuses, and computer program products for endpoint-based parallel data processing with non-blocking collective instructions in a parallel active messaging interface (`PAMI`) of a parallel computer are provided. Embodiments include establishing by a parallel application a data communications geometry, the geometry specifying a set of endpoints that are used in collective operations of the PAMI, including associating with the geometry a list of collective algorithms valid for use with the endpoints of the geometry. Embodiments also include registering in each endpoint in the geometry a dispatch callback function for a collective operation and executing without blocking, through a single one of the endpoints in the geometry, an instruction for the collective operation.
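
    PAMI is IBM-specific, but the core idea, posting a collective operation that returns immediately so the caller can overlap computation with communication, has a direct analogue in MPI-3 non-blocking collectives. A minimal sketch using mpi4py (assuming mpi4py and an MPI runtime are installed):

        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        data = np.full(4, comm.Get_rank(), dtype='i')
        total = np.empty(4, dtype='i')

        # Post the non-blocking collective; the call returns at once.
        req = comm.Iallreduce(data, total, op=MPI.SUM)

        # ... unrelated computation can proceed here, unblocked ...

        req.Wait()  # completion point, akin to the dispatch callback firing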

  8. [CMACPAR: a modified parallel neurocontroller for control processes].

    PubMed

    Ramos, E; Surós, R

    1999-01-01

    CMACPAR is a parallel neurocontroller oriented to real-time systems, for example control processes. Its main characteristics are a fast learning algorithm, a reduced number of calculations, great generalization capacity, local learning, and intrinsic parallelism. This type of neurocontroller is used in real-time applications required by refineries, hydroelectric plants, factories, etc. In this work we present the analysis and the parallel implementation of a modified scheme of the cerebellar model CMAC for n-dimensional space projection using a medium-grained parallel neurocontroller. The proposed memory management allows a significant reduction in training time and in required memory size.
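
    For readers unfamiliar with CMAC, the following generic tile-coding sketch (not the CMACPAR implementation) shows why learning is fast and local: each input maps to a handful of overlapping memory cells, the output is the sum of their weights, and only those cells are updated.

        import numpy as np

        class TinyCMAC:
            def __init__(self, n_tilings=8, n_cells=1024, lr=0.1):
                self.n_tilings, self.n_cells, self.lr = n_tilings, n_cells, lr
                self.w = np.zeros(n_cells)

            def _cells(self, x):
                # Hash the input into one cell per (staggered) tiling.
                return [hash((t, int(x * 10 + t / self.n_tilings))) % self.n_cells
                        for t in range(self.n_tilings)]

            def predict(self, x):
                return self.w[self._cells(x)].sum()

            def learn(self, x, target):
                cells = self._cells(x)
                error = target - self.w[cells].sum()
                self.w[cells] += self.lr * error / self.n_tilings  # local update

    Because the per-tiling lookups are independent, the tilings themselves are natural units for parallel execution.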

  9. Parallel-Processing Test Bed For Simulation Software

    NASA Technical Reports Server (NTRS)

    Blech, Richard; Cole, Gary; Townsend, Scott

    1996-01-01

    Second-generation Hypercluster computing system is multiprocessor test bed for research on parallel algorithms for simulation in fluid dynamics, electromagnetics, chemistry, and other fields with large computational requirements but relatively low input/output requirements. Built from standard, off-the-shelf hardware readily upgraded as improved technology becomes available. System used for experiments with such parallel-processing concepts as message-passing algorithms, debugging software tools, and computational steering. First-generation Hypercluster system described in "Hypercluster Parallel Processor" (LEW-15283).

  10. Testing bidirectional effects between cannabis use and depressive symptoms: moderation by the serotonin transporter gene.

    PubMed

    Otten, Roy; Engels, Rutger C M E

    2013-09-01

    Evidence for the assumption that cannabis use is associated with depression and depressive symptoms is inconsistent and mostly weak. It is likely that the mixed results are due to the fact that prior studies ignored the moderating effects of an individual's genetic vulnerability. The present study takes a first step in scrutinizing the relationship between cannabis use and depressive symptoms by taking a developmental molecular-genetic perspective. Specifically, we concentrated on changes in cannabis use and depressive symptoms over time in a simultaneous manner and differences herein for individuals with and without the short allele of the 5-hydroxytryptamine (serotonin) transporter gene-linked polymorphic region (5-HTTLPR) genotype. Data were from 310 adolescents over a period of 4 years. We used a parallel-process growth model, which allows co-development of cannabis use and depressive symptoms throughout adolescence, and the possible role of the 5-HTTLPR genotype in this process. We used data from the younger siblings of these adolescents in an attempt to replicate potential findings. The parallel-process growth model shows that cannabis use increases the risk for an increase in depressive symptoms over time but only in the presence of the short allele of the 5-HTTLPR genotype. This effect remained significant after controlling for covariates. We did not find conclusive support for the idea that depressive symptoms affect cannabis use. These findings were replicated in the sample of the younger siblings. The findings of the present study show first evidence that the links between cannabis use and depressive symptoms are conditional on the individual's genetic makeup. © 2011 The Authors, Addiction Biology © 2011 Society for the Study of Addiction.

  11. Towards physical principles of biological evolution

    NASA Astrophysics Data System (ADS)

    Katsnelson, Mikhail I.; Wolf, Yuri I.; Koonin, Eugene V.

    2018-03-01

    Biological systems reach organizational complexity that far exceeds the complexity of any known inanimate objects. Biological entities undoubtedly obey the laws of quantum physics and statistical mechanics. However, is modern physics sufficient to adequately describe, model and explain the evolution of biological complexity? Detailed parallels have been drawn between statistical thermodynamics and the population-genetic theory of biological evolution. Based on these parallels, we outline new perspectives on biological innovation and major transitions in evolution, and introduce a biological equivalent of thermodynamic potential that reflects the innovation propensity of an evolving population. Deep analogies have been suggested to also exist between the properties of biological entities and processes, and those of frustrated states in physics, such as glasses. Such systems are characterized by frustration, whereby local states with minimal free energy conflict with the global minimum, resulting in ‘emergent phenomena’. We extend such analogies by examining frustration-type phenomena, such as conflicts between different levels of selection, in biological evolution. These frustration effects appear to drive the evolution of biological complexity. We further address evolution in multidimensional fitness landscapes from the point of view of percolation theory and suggest that percolation at a level above the critical threshold dictates the tree-like evolution of complex organisms. Taken together, these multiple connections between fundamental processes in physics and biology imply that construction of a meaningful physical theory of biological evolution might not be a futile effort. However, it is unrealistic to expect that such a theory can be created in one scoop; if it ever comes into being, this can only happen through the integration of multiple physical models of evolutionary processes. Furthermore, the existing framework of theoretical physics is unlikely to suffice for adequate modeling of the biological level of complexity, and new developments within physics itself are likely to be required.

  12. Power and Professionalism: Reconstruction of Medical Educators' Practice by Way of a MA(Ed).

    ERIC Educational Resources Information Center

    Elmer, Roger

    England's King Alfred's College offers a MA(Ed) professional enquiry for teachers. In 1997, four medical doctors expressed interest in developing educational perspectives. Critical examination of the MA(Ed) indicated close parallels with the work of medical educators. The congruity was in an educational philosophy: people's internal values and…

  13. Chaos and Christianity: A Response to Butz and a Biblical Alternative.

    ERIC Educational Resources Information Center

    Watts, Richard E.; Trusty, Jerry

    1997-01-01

    M.R. Butz's position regarding chaos theory and Christianity is reviewed. The compatibility of biblical theology and the sciences is discussed. Parallels between chaos theory and the philosophical perspective of Soren Kierkegaard are explored. A biblical model is offered for counselors in assisting Christian clients in embracing chaos. (Author/EMK)

  14. Hot Flashes and Panic Attacks: A Comparison of Symptomatology, Neurobiology, Treatment, and a Role for Cognition

    ERIC Educational Resources Information Center

    Hanisch, Laura J.; Hantsoo, Liisa; Freeman, Ellen W.; Sullivan, Gregory M.; Coyne, James C.

    2008-01-01

    Despite decades of research, the causal mechanisms of hot flashes are not adequately understood, and a biopsychosocial perspective on hot flashes remains underdeveloped. This article explores overlooked parallels between hot flashes and panic attacks within 5 areas: course and symptomatology, physiological indicators, neurocircuitry and…

  15. Historical perspective

    Treesearch

    Kenneth Smith

    1986-01-01

    The history of shortleaf pine in the South generally parallels that of the area having the largest concentration of shortleaf, the Ouachita Mountains of Arkansas and Oklahoma. There, in the nineteenth century, agricultural settlers cut trees to clear land for crops and supply local needs for wood. Around 1900, cutting greatly expanded as large sawmills began to log by...

  16. Equity from a Vocational Education Research Perspective. Research and Development Series No. 214E.

    ERIC Educational Resources Information Center

    Eliason, Nancy Carol

    Female participation continues to increase in postsecondary vocational education and the labor market. This growth has paralleled increased funding under the Vocational Education Amendments of 1976 for sex-equity related research and demonstration activities. Funding has not, however, kept pace with needs of institutions trying to ensure equal…

  17. The Attack on Affirmative Action: Lives in Parallel Universes.

    ERIC Educational Resources Information Center

    Olivas, Michael A.

    1993-01-01

    In response to criticism of affirmative action in higher education, it is argued that affirmative action has brought demonstrable improvements in U.S. society. The debate, and the related research and literature, are reviewed from both perspectives, and it is concluded that the time has come to end white male privilege. (MSE)

  18. Integrating Computer Technology in Early Childhood Education Environments: Issues Raised by Early Childhood Educators

    ERIC Educational Resources Information Center

    Wood, Eileen; Specht, Jacqueline; Willoughby, Teena; Mueller, Julie

    2008-01-01

    The purpose of this study was to assess the educators' perspectives on the introduction of computer technology in the early childhood education environment. Fifty early childhood educators completed a survey and participated in focus groups. Parallels existed between the individually completed survey data and the focus group discussions. The…

  19. Challenging the Focus of ESD: A Southern Perspective of ESD Guidelines

    ERIC Educational Resources Information Center

    de Andrade, Daniel Fonseca

    2011-01-01

    In parallel to the 2009 World Conference on Education for Sustainable Development held in Bonn, Germany, UNESCO organised a group of 25 young education for sustainable development (ESD)-engaged people from 25 countries to bring perceptions, demands, suggestions and contributions to the conference. Prior to the conference the group was divided into…

  20. Post-Adoption Depression: Clinical Windows on an Emerging Concept

    ERIC Educational Resources Information Center

    Speilman, Eda

    2011-01-01

    In recent years, the concept of post-adoption depression--with both parallels and differences from postpartum depression--has emerged as a salient descriptor of the experience of a significant minority of newly adoptive parents. This article offers a clinical perspective on post-adoption depression through the stories of several families seen in…

  1. Performance Analysis of Multilevel Parallel Applications on Shared Memory Architectures

    NASA Technical Reports Server (NTRS)

    Biegel, Bryan A. (Technical Monitor); Jost, G.; Jin, H.; Labarta, J.; Gimenez, J.; Caubet, J.

    2003-01-01

    Parallel programming paradigms include process-level parallelism, thread-level parallelism, and multilevel parallelism. This viewgraph presentation describes a detailed performance analysis of these paradigms for Shared Memory Architectures (SMA). The analysis uses the Paraver Performance Analysis System. The presentation includes diagrams of a flow of useful computations.

  2. Illuminating the dark matter of social neuroscience: Considering the problem of social interaction from philosophical, psychological, and neuroscientific perspectives.

    PubMed

    Przyrembel, Marisa; Smallwood, Jonathan; Pauen, Michael; Singer, Tania

    2012-01-01

    Successful human social interaction depends on our capacity to understand other people's mental states and to anticipate how they will react to our actions. Despite its importance to the human condition, the exact mechanisms underlying our ability to understand another's actions, feelings, and thoughts are still a matter of conjecture. Here, we consider this problem from philosophical, psychological, and neuroscientific perspectives. In a critical review, we demonstrate that attempts to draw parallels across these complementary disciplines are premature: The second-person perspective does not map directly to Interaction or Simulation theories, online social cognition, or shared neural network accounts underlying action observation or empathy. Nor does the third-person perspective map onto Theory-Theory (TT), offline social cognition, or the neural networks that support Theory of Mind (ToM). Moreover, we argue that important qualities of social interaction emerge through the reciprocal interplay of two independent agents whose unpredictable behavior requires that models of their partner's internal state be continually updated. This analysis draws attention to the need for paradigms in social neuroscience that allow two individuals to interact in a spontaneous and natural manner and to adapt their behavior and cognitions in a response-contingent fashion due to the inherent unpredictability of another person's behavior. Even if such paradigms were implemented, it is possible that the specific neural correlates supporting such reciprocal interaction would not reflect computation unique to social interaction but rather the use of basic cognitive and emotional processes combined in a unique manner. Finally, we argue that given the crucial role of social interaction in human evolution, ontogeny, and everyday social life, a more theoretically and methodologically nuanced approach to the study of real social interaction will nevertheless help the field of social cognition to evolve.

  3. On the Optimality of Serial and Parallel Processing in the Psychological Refractory Period Paradigm: Effects of the Distribution of Stimulus Onset Asynchronies

    ERIC Educational Resources Information Center

    Miller, Jeff; Ulrich, Rolf; Rolke, Bettina

    2009-01-01

    Within the context of the psychological refractory period (PRP) paradigm, we developed a general theoretical framework for deciding when it is more efficient to process two tasks in serial and when it is more efficient to process them in parallel. This analysis suggests that a serial mode is more efficient than a parallel mode under a wide variety…

  4. The role of parallelism in the real-time processing of anaphora.

    PubMed

    Poirier, Josée; Walenski, Matthew; Shapiro, Lewis P

    2012-06-01

    Parallelism effects refer to the facilitated processing of a target structure when it follows a similar, parallel structure. In coordination, a parallelism-related conjunction triggers the expectation that a second conjunct with the same structure as the first conjunct should occur. It has been proposed that parallelism effects reflect the use of the first structure as a template that guides the processing of the second. In this study, we examined the role of parallelism in real-time anaphora resolution by charting activation patterns in coordinated constructions containing anaphora, Verb-Phrase Ellipsis (VPE) and Noun-Phrase Traces (NP-traces). Specifically, we hypothesised that an expectation of parallelism would incite the parser to assume a structure similar to the first conjunct in the second, anaphora-containing conjunct. The speculation of a similar structure would result in early postulation of covert anaphora. Experiment 1 confirms that following a parallelism-related conjunction, first-conjunct material is activated in the second conjunct. Experiment 2 reveals that an NP-trace in the second conjunct is posited immediately where licensed, which is earlier than previously reported in the literature. In light of our findings, we propose an intricate relation between structural expectations and anaphor resolution.

  5. The role of parallelism in the real-time processing of anaphora

    PubMed Central

    Poirier, Josée; Walenski, Matthew; Shapiro, Lewis P.

    2012-01-01

    Parallelism effects refer to the facilitated processing of a target structure when it follows a similar, parallel structure. In coordination, a parallelism-related conjunction triggers the expectation that a second conjunct with the same structure as the first conjunct should occur. It has been proposed that parallelism effects reflect the use of the first structure as a template that guides the processing of the second. In this study, we examined the role of parallelism in real-time anaphora resolution by charting activation patterns in coordinated constructions containing anaphora, Verb-Phrase Ellipsis (VPE) and Noun-Phrase Traces (NP-traces). Specifically, we hypothesised that an expectation of parallelism would incite the parser to assume a structure similar to the first conjunct in the second, anaphora-containing conjunct. The speculation of a similar structure would result in early postulation of covert anaphora. Experiment 1 confirms that following a parallelism-related conjunction, first-conjunct material is activated in the second conjunct. Experiment 2 reveals that an NP-trace in the second conjunct is posited immediately where licensed, which is earlier than previously reported in the literature. In light of our findings, we propose an intricate relation between structural expectations and anaphor resolution. PMID:23741080

  6. Advanced optical disk storage technology

    NASA Technical Reports Server (NTRS)

    Haritatos, Fred N.

    1996-01-01

    There is a growing need within the Air Force for more and better data storage solutions. Rome Laboratory, the Air Force's Center of Excellence for C3I technology, has sponsored the development of a number of operational prototypes to deal with this growing problem. This paper will briefly summarize the various prototype developments with examples of full mil-spec and best commercial practice. These prototypes have successfully operated under severe space, airborne and tactical field environments. From a technical perspective these prototypes have included rewritable optical media ranging from a 5.25-inch diameter format up to the 14-inch diameter disk format. Implementations include an airborne sensor recorder, a deployable optical jukebox and a parallel array of optical disk drives. They include stand-alone peripheral devices to centralized, hierarchical storage management systems for distributed data processing applications.

  7. Algorithms and programming tools for image processing on the MPP

    NASA Technical Reports Server (NTRS)

    Reeves, A. P.

    1985-01-01

    Topics addressed include: data mapping and rotational algorithms for the Massively Parallel Processor (MPP); Parallel Pascal language; documentation for the Parallel Pascal Development system; and a description of the Parallel Pascal language used on the MPP.

  8. Parallelization strategies for continuum-generalized method of moments on the multi-thread systems

    NASA Astrophysics Data System (ADS)

    Bustamam, A.; Handhika, T.; Ernastuti; Kerami, D.

    2017-07-01

    The Continuum-Generalized Method of Moments (C-GMM) addresses the shortfall of GMM, which is not as efficient as the maximum likelihood estimator, by using a continuum of moment conditions in a GMM framework. However, this computation takes a very long time because the regularization parameter must be optimized. Unfortunately, these calculations are processed sequentially, even though all modern computers are supported by hierarchical memory systems and hyperthreading technology that allow for parallel computing. This paper aims to speed up the calculation of C-GMM by designing a parallel algorithm for C-GMM on multi-thread systems. First, parallel regions are detected in the original C-GMM algorithm. There are two parallel regions that contribute significantly to the reduction of computational time: the outer loop and the inner loop. This parallel algorithm is then implemented with a standard shared-memory application programming interface, Open Multi-Processing (OpenMP). The experiment shows that outer-loop parallelization is the best strategy for any number of observations.
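
    The winning strategy is easy to picture: each candidate value of the regularization parameter gets its own worker, so whole C-GMM evaluations run side by side. A schematic sketch in Python rather than OpenMP; the objective below is a placeholder, not the C-GMM criterion:

        import numpy as np
        from concurrent.futures import ProcessPoolExecutor

        def evaluate(alpha):
            # Placeholder for one full (expensive) C-GMM evaluation at a
            # given regularization parameter alpha.
            grid = np.linspace(1e-6, 1.0, 200_000)
            return alpha, float(np.sum(np.exp(-grid / alpha)))

        if __name__ == "__main__":
            alphas = np.logspace(-4, 0, 16)
            # Outer-loop parallelization: one worker per alpha value.
            with ProcessPoolExecutor() as pool:
                scores = dict(pool.map(evaluate, alphas))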

  9. Multirate-based fast parallel algorithms for 2-D DHT-based real-valued discrete Gabor transform.

    PubMed

    Tao, Liang; Kwan, Hon Keung

    2012-07-01

    Novel algorithms for the multirate and fast parallel implementation of the 2-D discrete Hartley transform (DHT)-based real-valued discrete Gabor transform (RDGT) and its inverse transform are presented in this paper. A 2-D multirate-based analysis convolver bank is designed for the 2-D RDGT, and a 2-D multirate-based synthesis convolver bank is designed for the 2-D inverse RDGT. The parallel channels in each of the two convolver banks have a unified structure and can apply the 2-D fast DHT algorithm to speed up their computations. The computational complexity of each parallel channel is low and is independent of the Gabor oversampling rate. All the 2-D RDGT coefficients of an image are computed in parallel during the analysis process and can be reconstructed in parallel during the synthesis process. The computational complexity and time of the proposed parallel algorithms are analyzed and compared with those of the existing fastest algorithms for 2-D discrete Gabor transforms. The results indicate that the proposed algorithms are the fastest, which make them attractive for real-time image processing.

  10. Parallel adaptive wavelet collocation method for PDEs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nejadmalayeri, Alireza, E-mail: Alireza.Nejadmalayeri@gmail.com; Vezolainen, Alexei, E-mail: Alexei.Vezolainen@Colorado.edu; Brown-Dymkoski, Eric, E-mail: Eric.Browndymkoski@Colorado.edu

    2015-10-01

    A parallel adaptive wavelet collocation method for solving a large class of Partial Differential Equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using a tree-like structure with tree roots starting at an a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For the dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between the processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048^3 using as many as 2048 CPU cores.
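
    The dynamic load-balancing step can be pictured as a bin-packing pass over the trees. The greedy heuristic below is only illustrative of "reassign trees so each process holds roughly the same number of grid points"; it is not the paper's algorithm:

        import heapq

        def assign_trees(tree_points, n_procs):
            # tree_points: {tree_id: number_of_grid_points}
            # Repeatedly give the next-largest tree to the least-loaded process.
            heap = [(0, p, []) for p in range(n_procs)]
            heapq.heapify(heap)
            for tree, count in sorted(tree_points.items(), key=lambda kv: -kv[1]):
                load, p, trees = heapq.heappop(heap)
                trees.append(tree)
                heapq.heappush(heap, (load + count, p, trees))
            return {p: trees for _, p, trees in heap}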

  11. Adapting high-level language programs for parallel processing using data flow

    NASA Technical Reports Server (NTRS)

    Standley, Hilda M.

    1988-01-01

    EASY-FLOW, a very high-level data flow language, is introduced for the purpose of adapting programs written in a conventional high-level language to a parallel environment. The level of parallelism provided is of the large-grained variety in which parallel activities take place between subprograms or processes. A program written in EASY-FLOW is a set of subprogram calls as units, structured by iteration, branching, and distribution constructs. A data flow graph may be deduced from an EASY-FLOW program.
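
    The large-grained data-flow idea, where a subprogram fires as soon as its inputs are available and independent calls run in parallel, can be mimicked with ordinary futures. This is a generic illustration, not EASY-FLOW syntax:

        from concurrent.futures import ThreadPoolExecutor

        def a(x): return x + 1
        def b(v): return v * 2
        def c(v): return v * 3
        def d(p, q): return p + q

        with ThreadPoolExecutor() as pool:
            va = pool.submit(a, 10).result()     # a runs first
            fb = pool.submit(b, va)              # b and c depend only on a,
            fc = pool.submit(c, va)              # so they execute in parallel
            print(d(fb.result(), fc.result()))   # -> 55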

  12. Feynman’s clock, a new variational principle, and parallel-in-time quantum dynamics

    PubMed Central

    McClean, Jarrod R.; Parkhill, John A.; Aspuru-Guzik, Alán

    2013-01-01

    We introduce a discrete-time variational principle inspired by the quantum clock originally proposed by Feynman and use it to write down quantum evolution as a ground-state eigenvalue problem. The construction allows one to apply ground-state quantum many-body theory to quantum dynamics, extending the reach of many highly developed tools from this fertile research area. Moreover, this formalism naturally leads to an algorithm to parallelize quantum simulation over time. We draw an explicit connection between previously known time-dependent variational principles and the time-embedded variational principle presented. Sample calculations are presented, applying the idea to a hydrogen molecule and the spin degrees of freedom of a model inorganic compound, demonstrating the parallel speedup of our method as well as its flexibility in applying ground-state methodologies. Finally, we take advantage of the unique perspective of this variational principle to examine the error of basis approximations in quantum dynamics. PMID:24062428
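
    Schematically, and hedging on the paper's exact notation, the clock construction turns the sequence of short-time propagators U_t into a single operator whose ground state is the "history state" containing the entire trajectory:

        \mathcal{C} \;=\; \frac{1}{2}\sum_{t=0}^{T-1}\Big(
            I \otimes |t\rangle\langle t|
          + I \otimes |t{+}1\rangle\langle t{+}1|
          - U_{t+1} \otimes |t{+}1\rangle\langle t|
          - U_{t+1}^{\dagger} \otimes |t\rangle\langle t{+}1| \Big),
        \qquad
        |\Psi\rangle \;\propto\; \sum_{t=0}^{T} |\psi_t\rangle \otimes |t\rangle

    Because each term couples only neighboring time slices, the resulting ground-state problem distributes naturally over time, which is the parallel-in-time angle of the paper.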

  13. Development of parallel scales to measure HIV-related stigma

    PubMed Central

    Visser, Maretha J.; Kershaw, Trace; Makin, Jennifer D.; Forsyth, Brian W.C.

    2014-01-01

    HIV-related stigma is a multidimensional concept which has pervasive effects on the lives of HIV-infected people as well as serious consequences for the management of HIV/AIDS. In this research three parallel stigma scales were developed to assess personal views of stigma, stigma attributed to others, and internalized stigma experienced by HIV-infected individuals. The stigma scales were administered in two samples: a community sample of 1077 respondents and 317 HIV-infected pregnant women recruited at clinics from the same community in Tshwane (South Africa). A two-factor structure referring to moral judgment and interpersonal distancing was confirmed across scales and sample groups. The internal consistency of the scales was acceptable and evidence of validity is reported. Parallel scales to assess and compare different perspectives of stigma provide opportunities for research aimed at understanding of stigma, assessing the consequences or evaluating possible interventions aimed at reducing stigma. PMID:18266101

  14. Hypergraph partitioning implementation for parallelizing matrix-vector multiplication using CUDA GPU-based parallel computing

    NASA Astrophysics Data System (ADS)

    Murni; Bustamam, A.; Ernastuti; Handhika, T.; Kerami, D.

    2017-07-01

    Calculation of matrix-vector multiplication in real-world problems often involves large matrices of arbitrary size. Parallelization is therefore needed to speed up a calculation that usually takes a long time. The graph partitioning techniques discussed in previous studies cannot be used to parallelize matrix-vector multiplication for matrices of arbitrary size, because graph partitioning assumes a square, symmetric matrix. Hypergraph partitioning techniques overcome this shortcoming of graph partitioning. This paper addresses the efficient parallelization of matrix-vector multiplication through hypergraph partitioning techniques using CUDA GPU-based parallel computing. CUDA (compute unified device architecture) is a parallel computing platform and programming model created by NVIDIA and implemented on the GPU (graphics processing unit).
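
    The kernel being partitioned is ordinary sparse matrix-vector multiplication. In CSR form every row is an independent dot product, which is the parallelism a CUDA kernel exploits (roughly one thread per row) and which the hypergraph partitioning balances across workers. A serial Python reference showing the row-independent structure:

        import numpy as np

        def csr_spmv(indptr, indices, data, x):
            # Each row is independent: on a GPU, one thread (or warp)
            # would handle each iteration of this loop.
            y = np.zeros(len(indptr) - 1)
            for row in range(len(y)):
                lo, hi = indptr[row], indptr[row + 1]
                y[row] = data[lo:hi] @ x[indices[lo:hi]]
            return y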

  15. Applying Parallel Processing Techniques to Tether Dynamics Simulation

    NASA Technical Reports Server (NTRS)

    Wells, B. Earl

    1996-01-01

    The focus of this research has been to determine the effectiveness of applying parallel processing techniques to a sizable real-world problem, the simulation of the dynamics associated with a tether which connects two objects in low earth orbit, and to explore the degree to which the parallelization process can be automated through the creation of new software tools. The goal has been to utilize this specific application problem as a base to develop more generally applicable techniques.

  16. (Positive) Power to the Child: The Role of Children's Willing Stance toward Parents in Developmental Cascades from Toddler Age to Early Preadolescence

    PubMed Central

    Kochanska, Grazyna; Kim, Sanghag; Boldt, Lea J.

    2015-01-01

    In contrast to once dominant views of children as passive in the parent-led process of socialization, they are now seen as active agents who can considerably influence that process. But those newer perspectives typically focus on the child's antagonistic influence, due either to a difficult temperament or to aversive, resistant, negative behaviors that elicit adversarial responses from the parent and lead to future coercive cascades in the relationship. Children's capacity to act as receptive, willing, even enthusiastic, active socialization agents is largely overlooked. Informed by attachment theory and other relational perspectives, we depict children as able to adopt an active willing stance and to exert robust positive influence in the mutually cooperative socialization enterprise. A longitudinal study of 100 community families (mothers, fathers, and children) demonstrates that willing stance (a) is a latent construct, observable in parallel forms in diverse parent-child contexts at 38, 52, and 67 months, and longitudinally stable, (b) originates within an early secure parent-child relationship at 25 months, and (c) promotes a positive future cascade toward adaptive outcomes at age 10. The outcomes include the parent's observed and child-reported positive, responsive behavior, as well as child-reported internal obligation to obey the parent and parent-reported low level of child behavior problems. The construct of willing stance has implications for basic research in typical socialization and in developmental psychopathology, and for prevention and intervention. PMID:26439058

  17. Parallel labeling experiments and metabolic flux analysis: Past, present and future methodologies.

    PubMed

    Crown, Scott B; Antoniewicz, Maciek R

    2013-03-01

    Radioactive and stable isotopes have been applied for decades to elucidate metabolic pathways and quantify carbon flow in cellular systems using mass and isotope balancing approaches. Isotope-labeling experiments can be conducted as a single tracer experiment, or as parallel labeling experiments. In the latter case, several experiments are performed under identical conditions except for the choice of substrate labeling. In this review, we highlight robust approaches for probing metabolism and addressing metabolically related questions through parallel labeling experiments. In the first part, we provide a brief historical perspective on parallel labeling experiments, from the early metabolic studies when radioisotopes were predominant to present-day applications based on stable-isotopes. We also elaborate on important technical and theoretical advances that have facilitated the transition from radioisotopes to stable-isotopes. In the second part of the review, we focus on parallel labeling experiments for (13)C-metabolic flux analysis ((13)C-MFA). Parallel experiments offer several advantages that include: tailoring experiments to resolve specific fluxes with high precision; reducing the length of labeling experiments by introducing multiple entry-points of isotopes; validating biochemical network models; and improving the performance of (13)C-MFA in systems where the number of measurements is limited. We conclude by discussing some challenges facing the use of parallel labeling experiments for (13)C-MFA and highlight the need to address issues related to biological variability, data integration, and rational tracer selection. Copyright © 2012 Elsevier Inc. All rights reserved.

  18. Parallel and serial grouping of image elements in visual perception.

    PubMed

    Houtkamp, Roos; Roelfsema, Pieter R

    2010-12-01

    The visual system groups image elements that belong to an object and segregates them from other objects and the background. Important cues for this grouping process are the Gestalt criteria, and most theories propose that these are applied in parallel across the visual scene. Here, we find that Gestalt grouping can indeed occur in parallel in some situations, but we demonstrate that there are also situations where Gestalt grouping becomes serial. We observe substantial time delays when image elements have to be grouped indirectly through a chain of local groupings. We call this chaining process incremental grouping and demonstrate that it can occur for only a single object at a time. We suggest that incremental grouping requires the gradual spread of object-based attention so that eventually all the object's parts become grouped explicitly by an attentional labeling process. Our findings inspire a new incremental grouping theory that relates the parallel, local grouping process to feedforward processing and the serial, incremental grouping process to recurrent processing in the visual cortex.

  19. Aging and feature search: the effect of search area.

    PubMed

    Burton-Danner, K; Owsley, C; Jackson, G R

    2001-01-01

    The preattentive system involves the rapid parallel processing of visual information in the visual scene so that attention can be directed to meaningful objects and locations in the environment. This study used the feature search methodology to examine whether there are aging-related deficits in parallel-processing capabilities when older adults are required to visually search a large area of the visual field. Like young subjects, older subjects displayed flat, near-zero slopes for the Reaction Time x Set Size function when searching over a broad area (30 degrees radius) of the visual field, implying parallel processing of the visual display. These same older subjects exhibited impairment in another task, also dependent on parallel processing, performed over the same broad field area; this task, called the useful field of view test, has more complex task demands. Results imply that aging-related breakdowns of parallel processing over a large visual field area are not likely to emerge when required responses are simple, there is only one task to perform, and there is no limitation on visual inspection time.

  20. Interdisciplinary Research and Phenomenology as Parallel Processes of Consciousness

    ERIC Educational Resources Information Center

    Arvidson, P. Sven

    2016-01-01

    There are significant parallels between interdisciplinarity and phenomenology. Interdisciplinary conscious processes involve identifying relevant disciplines, evaluating each disciplinary insight, and creating common ground. In an analogous way, phenomenology involves conscious processes of epoché, reduction, and eidetic variation. Each stresses…

  1. Processing data communications events by awakening threads in parallel active messaging interface of a parallel computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.

    Processing data communications events in a parallel active messaging interface (`PAMI`) of a parallel computer that includes compute nodes that execute a parallel application, with the PAMI including data communications endpoints, and the endpoints are coupled for data communications through the PAMI and through other data communications resources, including determining by an advance function that there are no actionable data communications events pending for its context, placing by the advance function its thread of execution into a wait state, waiting for a subsequent data communications event for the context; responsive to occurrence of a subsequent data communications event for the context, awakening by the thread from the wait state; and processing by the advance function the subsequent data communications event now pending for the context.
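
    The patent describes the wait-and-awaken pattern in prose. A minimal sketch of that mechanism using POSIX condition variables follows; this is not the PAMI API, and every name in it is illustrative.

        #include <pthread.h>
        #include <stdbool.h>

        /* One context: an event flag plus the synchronization needed for
           an advance thread to sleep until an event arrives. */
        typedef struct {
            pthread_mutex_t lock;
            pthread_cond_t  event_ready;
            bool            event_pending;
        } context_t;

        /* Advance function: if no event is actionable, enter a wait state;
           awaken and process when another thread posts an event. */
        void advance(context_t *ctx)
        {
            pthread_mutex_lock(&ctx->lock);
            while (!ctx->event_pending)              /* nothing actionable */
                pthread_cond_wait(&ctx->event_ready, &ctx->lock);
            ctx->event_pending = false;
            pthread_mutex_unlock(&ctx->lock);
            /* ... process the now-pending data communications event ... */
        }

        /* Called from another thread when an event occurs for the context. */
        void post_event(context_t *ctx)
        {
            pthread_mutex_lock(&ctx->lock);
            ctx->event_pending = true;
            pthread_cond_signal(&ctx->event_ready);  /* awaken the advance thread */
            pthread_mutex_unlock(&ctx->lock);
        }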

  2. A parallel algorithm for switch-level timing simulation on a hypercube multiprocessor

    NASA Technical Reports Server (NTRS)

    Rao, Hariprasad Nannapaneni

    1989-01-01

    The parallel approach to speeding up simulation is studied, specifically the simulation of digital LSI MOS circuitry on the Intel iPSC/2 hypercube. The simulation algorithm is based on RSIM, an event driven switch-level simulator that incorporates a linear transistor model for simulating digital MOS circuits. Parallel processing techniques based on the concepts of Virtual Time and rollback are utilized so that portions of the circuit may be simulated on separate processors, in parallel for as large an increase in speed as possible. A partitioning algorithm is also developed in order to subdivide the circuit for parallel processing.

  3. Parallel computation with the force

    NASA Technical Reports Server (NTRS)

    Jordan, H. F.

    1985-01-01

    A methodology, called the force, supports the construction of programs to be executed in parallel by a force of processes. The number of processes in the force is unspecified, but potentially very large. The force idea is embodied in a set of macros which produce multiprocessor FORTRAN code and has been studied on two shared memory multiprocessors of fairly different character. The method has simplified the writing of highly parallel programs within a limited class of parallel algorithms and is being extended to cover a broader class. The individual parallel constructs which comprise the force methodology are discussed. Of central concern are their semantics, implementation on different architectures and performance implications.
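
    The force macros themselves emitted FORTRAN; as a loose C analogue of the SPMD style they support (a fixed but unspecified number of processes all executing the same program text and meeting at barriers), consider the following OpenMP sketch. It is illustrative only, not the force's actual syntax.

        #include <omp.h>
        #include <stdio.h>

        #define NITEMS 1000

        int main(void)
        {
            /* The "force": every member runs the same code; the member
               count is fixed at startup, not wired into the program logic. */
            #pragma omp parallel
            {
                int me    = omp_get_thread_num();
                int force = omp_get_num_threads();

                /* Prescheduled split: member `me` takes every force-th item. */
                for (int i = me; i < NITEMS; i += force) {
                    /* ... work on item i ... */
                }

                #pragma omp barrier   /* force-wide synchronization point */
                if (me == 0)
                    printf("force of %d members finished\n", force);
            }
            return 0;
        }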

  4. PREFACE: Conceptual and Technical Challenges for Quantum Gravity 2014 - Parallel session: Noncommutative Geometry and Quantum Gravity

    NASA Astrophysics Data System (ADS)

    Martinetti, P.; Wallet, J.-C.; Amelino-Camelia, G.

    2015-08-01

    The conference Conceptual and Technical Challenges for Quantum Gravity at Sapienza University of Rome, from 8 to 12 September 2014, provided a beautiful opportunity for an encounter between different approaches and different perspectives on the quantum-gravity problem. It contributed to a higher level of shared knowledge among the quantum-gravity communities pursuing each specific research program. There were plenary talks on many different approaches, including in particular string theory, loop quantum gravity, spacetime noncommutativity, causal dynamical triangulations, asymptotic safety and causal sets. Contributions from the perspective of philosophy of science were also welcomed. In addition, several parallel sessions were organized. The present volume collects contributions from the Noncommutative Geometry and Quantum Gravity parallel session (partially funded by CNRS PEPS/PTI "Metric aspect of noncommutative geometry: from Monge to Higgs"), with additional invited contributions from specialists in the field. Noncommutative geometry in its many incarnations appears at the crossroads of many research directions in theoretical and mathematical physics:
    • from models of quantum space-time (with or without breaking of Lorentz symmetry) to loop gravity and string theory,
    • from early considerations on UV-divergencies in quantum field theory to recent models of gauge theories on noncommutative spacetime,
    • from Connes' description of the standard model of elementary particles to recent Pati-Salam-like extensions.
    This volume provides an overview of these various topics, interesting for the specialist as well as accessible to the newcomer.

  5. The new moon illusion and the role of perspective in the perception of straight and parallel lines.

    PubMed

    Rogers, Brian; Naumenko, Olga

    2015-01-01

    In the new moon illusion, the sun does not appear to be in a direction perpendicular to the boundary between the lit and dark sides of the moon, and aircraft jet trails appear to follow curved paths across the sky. In both cases, lines that are physically straight and parallel to the horizon appear to be curved. These observations prompted us to investigate the neglected question of how we are able to judge the straightness and parallelism of extended lines. To do this, we asked observers to judge the 2-D alignment of three artificial "stars" projected onto the dome of the Saint Petersburg Planetarium that varied in both their elevation and their separation in horizontal azimuth. The results showed that observers make substantial, systematic errors, biasing their judgments away from the veridical great-circle locations and toward equal-elevation settings. These findings further demonstrate that whenever information about the distance of extended lines or isolated points is insufficient, observers tend to assume equidistance, and as a consequence, their straightness judgments are biased toward the angular separation of straight and parallel lines.

  6. Computer-Aided Parallelizer and Optimizer

    NASA Technical Reports Server (NTRS)

    Jin, Haoqiang

    2011-01-01

    The Computer-Aided Parallelizer and Optimizer (CAPO) automates the insertion of compiler directives (see figure) to facilitate parallel processing on Shared Memory Parallel (SMP) machines. While CAPO currently is integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), CAPO was independently developed at Ames Research Center as one of the components for the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data dependence analysis, and generates OpenMP directives. Due to the widely supported OpenMP standard, the generated OpenMP codes have the potential to run on a wide range of SMP machines. CAPO relies on accurate interprocedural data dependence information currently provided by CAPTools. Compiler directives are generated through identification of parallel loops in the outermost level, construction of parallel regions around parallel loops and optimization of parallel regions, and insertion of directives with automatic identification of private, reduction, induction, and shared variables. Attempts also have been made to identify potential pipeline parallelism (implemented with point-to-point synchronization). Although directives are generated automatically, user interaction with the tool is still important for producing good parallel codes. A comprehensive graphical user interface is included for users to interact with the parallelization process.
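
    As a small illustration of the kind of transformation CAPO automates, here is a hypothetical loop, written in C rather than the FORTRAN the tool actually processes, with the sort of directive such a tool inserts after dependence analysis: the loop index and the temporary are private, and the accumulator is a reduction.

        /* Serial loop as the user wrote it; a tool such as CAPO, after
           interprocedural dependence analysis, inserts the directive shown.
           (Hypothetical example; CAPO itself emits FORTRAN directives.) */
        double dot_scaled(const double *a, const double *b, int n)
        {
            double sum = 0.0;   /* accumulator: a (+) reduction        */
            double t;           /* scratch: private copy per iteration */
            int i;              /* loop index: private by definition   */

            #pragma omp parallel for private(t) reduction(+:sum)
            for (i = 0; i < n; i++) {
                t = a[i] * b[i];
                sum += t;       /* a, b, n remain shared */
            }
            return sum;
        }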

  7. A normal tissue dose response model of dynamic repair processes.

    PubMed

    Alber, Markus; Belka, Claus

    2006-01-07

    A model is presented for serial, critical element complication mechanisms for irradiated volumes from length scales of a few millimetres up to the entire organ. The central element of the model is the description of radiation complication as the failure of a dynamic repair process. The nature of the repair process is seen as reestablishing the structural organization of the tissue, rather than mere replenishment of lost cells. The interactions between the cells, such as migration, involved in the repair process are assumed to have finite ranges, which limits the repair capacity and is the defining property of a finite-sized reconstruction unit. Since the details of the repair processes are largely unknown, the development aims to make the most general assumptions about them. The model employs analogies and methods from thermodynamics and statistical physics. An explicit analytical form of the dose response of the reconstruction unit for total, partial and inhomogeneous irradiation is derived. The use of the model is demonstrated with data from animal spinal cord experiments and clinical data about heart, lung and rectum. The three-parameter model lends a new perspective to the equivalent uniform dose formalism and the established serial and parallel complication models. Its implications for dose optimization are discussed.
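
    The abstract refers to the equivalent uniform dose formalism without stating it. For orientation, the standard generalized EUD over dose bins D_i with fractional volumes v_i (not necessarily the exact form used in this paper) is

        \mathrm{gEUD} = \Bigl( \sum_i v_i \, D_i^{\,a} \Bigr)^{1/a}

    where a large positive a approaches the maximum dose (serial-organ behavior) and a = 1 gives the mean dose (the parallel limit).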

  8. Parallel integrated frame synchronizer chip

    NASA Technical Reports Server (NTRS)

    Solomon, Jeffrey Michael (Inventor); Ghuman, Parminder Singh (Inventor); Bennett, Toby Dennis (Inventor)

    2000-01-01

    A parallel integrated frame synchronizer which implements a sequential pipeline process wherein serial data in the form of telemetry data or weather satellite data enters the synchronizer by means of a front-end subsystem and passes to a parallel correlator subsystem or a weather satellite data processing subsystem. When in a CCSDS mode, data from the parallel correlator subsystem passes through a window subsystem, then to a data alignment subsystem and then to a bit transition density (BTD)/cyclical redundancy check (CRC) decoding subsystem. Data from the BTD/CRC decoding subsystem or data from the weather satellite data processing subsystem is then fed to an output subsystem where it is output from a data output port.

  9. Development and Coherence of Beliefs Regarding Disease Causality and Prevention

    ERIC Educational Resources Information Center

    Sigelman, Carol K.

    2014-01-01

    Guided by a naïve theories perspective on the development of thinking about disease, this study of 188 children aged 6 to 18 examined knowledge of HIV/AIDS causality and prevention using parallel measures derived from open-ended and structured interviews. Knowledge of both risk factors and prevention rules, as well as conceptual understanding of…

  10. Marketing Informal Education Institutions in Israel: The Centrality of Customers' Active Involvement in Service Development

    ERIC Educational Resources Information Center

    Oplatka, Izhar

    2004-01-01

    The current paper outlines a unique marketing perspective that prevails in some informal education institutions in Israel parallel with "traditional modes of marketing", such as promotion, public relations and the like. Based on a case study research in five community centres, a service development based on active participation of the…

  11. The Reemergence of the National Science Foundation in American Education: Perspectives and Problems.

    ERIC Educational Resources Information Center

    Hlebowitsh, Peter S.; Wraga, William G.

    1989-01-01

    Criticized are the National Science Foundation (NSF) funded curriculum reforms during the post-Sputnik epoch. The parallels and contrasts between the proposals of today's NSF and those supported during the late 1950s and early 1960s are outlined. The proper role of a policymaking body in American education is recommended. (YP)

  12. Contexts That Matter to the Leadership Development of Latino Male College Students: A Mixed Methods Perspective

    ERIC Educational Resources Information Center

    Garcia, Gina A.; Huerta, Adrian H.; Ramirez, Jenesis J.; Patrón, Oscar E.

    2017-01-01

    As the number of Latino males entering college increases, there is a need to understand their unique leadership experiences. This study used a convergent parallel mixed methods design to understand what contexts contribute to Latino male undergraduate students' leadership development, capacity, and experiences. Quantitative data were gathered by…

  13. "Laughing Matters": The Comedian as Social Observer, Teacher, and Conduit of the Sociological Perspective

    ERIC Educational Resources Information Center

    Bingham, Shawn Chandler; Hernandez, Alexander A.

    2009-01-01

    Much of the sociological curriculum often represents society as tragedy. This article explores the incorporation of a "society as comedy" component in introductory courses at two institutions using the sociological insight and social critique of comedians. A general discussion of parallels between the comedic eye and the sociological imagination is…

  14. Text Talk, Body Talk, Table Talk: A Design of Ratio and Proportion as Classroom Parallel Events

    ERIC Educational Resources Information Center

    Abrahamson, Dor

    2003-01-01

    The paper describes the rationale and 10-day implementation in a 5th-grade classroom (n=19) of an experimental ratio-and-proportion instructional design. In this constructivist-phenomenological design, coming from our theoretical perspective, design research, and domain analysis, students: (1) link "real-world" and "mathematical" objects…

  15. Children's Perspectives of Play and Learning for Educational Practice

    ERIC Educational Resources Information Center

    Theobald, Maryanne; Danby, Susan; Einarsdóttir, Jóhanna; Bourne, Jane; Jones, Desley; Ross, Sharon; Knaggs, Helen; Carter-Jones, Claire

    2015-01-01

    Play as a learning practice increasingly is under challenge as a valued component of early childhood education. Views held in parallel include confirmation of the place of play in early childhood education and, at the same time, a denigration of the role of play in favor of more teacher-structured and formal activities. As a consequence,…

  16. Using a Boundary Object Perspective to Reconsider the Meaning of STEM in a Canadian Context

    ERIC Educational Resources Information Center

    Shanahan, Marie-Claire; Carol-Ann Burke, Lydia E.; Francis, Krista

    2016-01-01

    The term "STEM," used to describe science, technology, engineering, and mathematics, has come to prominence in Canada over the last decade, raising questions about its meaning. Here we examine its history in the United States and the sociopolitical commitments that have, in parallel, guided science education in Canada. The divergent…

  17. Parallel processing and expert systems

    NASA Technical Reports Server (NTRS)

    Lau, Sonie; Yan, Jerry C.

    1991-01-01

    Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 1990s cannot enjoy an increased level of autonomy without the efficient implementation of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real-time demands are met for larger systems. Speedup via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial laboratories in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems is surveyed. The survey discusses multiprocessors for expert systems, parallel languages for symbolic computations, and mapping expert systems to multiprocessors. Results to date indicate that the parallelism achieved for these systems is small. The main reasons are (1) the body of knowledge applicable in any given situation and the amount of computation executed by each rule firing are small, (2) dividing the problem solving process into relatively independent partitions is difficult, and (3) implementation decisions that enable expert systems to be incrementally refined hamper compile-time optimization. In order to obtain greater speedups, data parallelism and application parallelism must be exploited.

  18. Parallelization of ARC3D with Computer-Aided Tools

    NASA Technical Reports Server (NTRS)

    Jin, Haoqiang; Hribar, Michelle; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1998-01-01

    A series of efforts have been devoted to investigating methods of porting and parallelizing applications quickly and efficiently for new architectures, such as the SGI Origin 2000 and Cray T3E. This report presents the parallelization of a CFD application, ARC3D, using the computer-aided tools CAPTools. Steps in parallelizing this code and requirements for achieving better performance are discussed. The generated parallel version has achieved reasonably good performance, for example, a speedup of 30 for 36 Cray T3E processors. However, this performance could not be obtained without modification of the original serial code. It is suggested that in many cases improving the serial code and performing necessary code transformations are important parts of the automated parallelization process, although user intervention in many of these parts is still necessary. Nevertheless, development and improvement of useful software tools, such as CAPTools, can help trim down many tedious parallelization details and improve the processing efficiency.

  19. Development of a parallel FE simulator for modeling the whole trans-scale failure process of rock from meso- to engineering-scale

    NASA Astrophysics Data System (ADS)

    Li, Gen; Tang, Chun-An; Liang, Zheng-Zhao

    2017-01-01

    Multi-scale high-resolution modeling of the rock failure process is a powerful means in modern rock mechanics studies to reveal the complex failure mechanism and to evaluate engineering risks. However, multi-scale continuous modeling of rock, from deformation, damage to failure, has raised high requirements on the design, implementation scheme and computation capacity of the numerical software system. This study is aimed at developing a parallel finite element procedure, a parallel rock failure process analysis (RFPA) simulator that is capable of modeling the whole trans-scale failure process of rock. Based on the statistical meso-damage mechanical method, the RFPA simulator is able to construct heterogeneous rock models with multiple mechanical properties and deal with and represent the trans-scale propagation of cracks, in which the stress and strain fields are solved for the damage evolution analysis of the representative volume element by the parallel finite element method (FEM) solver. This paper describes the theoretical basis of the approach and provides the details of the parallel implementation on a Windows - Linux interactive platform. A numerical model is built to test the parallel performance of the FEM solver. Numerical simulations are then carried out on a laboratory-scale uniaxial compression test, and field-scale net fracture spacing and engineering-scale rock slope examples, respectively. The simulation results indicate that relatively high speedup and computation efficiency can be achieved by the parallel FEM solver with a reasonable boot process. In laboratory-scale simulation, well-known physical phenomena, such as the macroscopic fracture pattern and stress-strain responses, can be reproduced. In field-scale simulation, the formation process of net fracture spacing from initiation, propagation to saturation can be revealed completely. In engineering-scale simulation, the whole progressive failure process of the rock slope can be well modeled. It is shown that the parallel FE simulator developed in this study is an efficient tool for modeling the whole trans-scale failure process of rock from meso- to engineering-scale.

  20. Curious parallels and curious connections--phylogenetic thinking in biology and historical linguistics.

    PubMed

    Atkinson, Quentin D; Gray, Russell D

    2005-08-01

    In The Descent of Man (1871), Darwin observed "curious parallels" between the processes of biological and linguistic evolution. These parallels mean that evolutionary biologists and historical linguists seek answers to similar questions and face similar problems. As a result, the theory and methodology of the two disciplines have evolved in remarkably similar ways. In addition to Darwin's curious parallels of process, there are a number of equally curious parallels and connections between the development of methods in biology and historical linguistics. Here we briefly review the parallels between biological and linguistic evolution and contrast the historical development of phylogenetic methods in the two disciplines. We then look at a number of recent studies that have applied phylogenetic methods to language data and outline some current problems shared by the two fields.

  1. Performing a local reduction operation on a parallel computer

    DOEpatents

    Blocksome, Michael A; Faraj, Daniel A

    2013-06-04

    A parallel computer including compute nodes, each including two reduction processing cores, a network write processing core, and a network read processing core, each processing core assigned an input buffer. Copying, in interleaved chunks by the reduction processing cores, contents of the reduction processing cores' input buffers to an interleaved buffer in shared memory; copying, by one of the reduction processing cores, contents of the network write processing core's input buffer to shared memory; copying, by another of the reduction processing cores, contents of the network read processing core's input buffer to shared memory; and locally reducing in parallel by the reduction processing cores: the contents of the reduction processing core's input buffer; every other interleaved chunk of the interleaved buffer; the copied contents of the network write processing core's input buffer; and the copied contents of the network read processing core's input buffer.
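
    A stripped-down sketch of the interleaved-copy idea in C: two "reduction cores" deposit their input buffers into a shared interleaved buffer in alternating chunks, then each core reduces every other chunk. This illustrates the scheme as summarized above, not the patented implementation; sizes and names are made up.

        #include <string.h>

        #define CHUNK   64    /* elements per interleaved chunk */
        #define NCHUNKS 16    /* chunks per input buffer        */

        /* Core `core` (0 or 1) copies its input buffer into the shared
           interleaved buffer: chunk k of core c lands at slot 2*k + c. */
        void copy_interleaved(int core, const double *input, double *interleaved)
        {
            for (int k = 0; k < NCHUNKS; k++)
                memcpy(&interleaved[(2 * k + core) * CHUNK],
                       &input[k * CHUNK], CHUNK * sizeof(double));
        }

        /* After a barrier, core `core` sums every other chunk of the
           interleaved buffer into its own partial result. */
        void reduce_every_other_chunk(int core, const double *interleaved,
                                      double *partial)
        {
            *partial = 0.0;
            for (int k = core; k < 2 * NCHUNKS; k += 2)
                for (int j = 0; j < CHUNK; j++)
                    *partial += interleaved[k * CHUNK + j];
        }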

  2. Performing a local reduction operation on a parallel computer

    DOEpatents

    Blocksome, Michael A.; Faraj, Daniel A.

    2012-12-11

    A parallel computer including compute nodes, each including two reduction processing cores, a network write processing core, and a network read processing core, each processing core assigned an input buffer. Copying, in interleaved chunks by the reduction processing cores, contents of the reduction processing cores' input buffers to an interleaved buffer in shared memory; copying, by one of the reduction processing cores, contents of the network write processing core's input buffer to shared memory; copying, by another of the reduction processing cores, contents of the network read processing core's input buffer to shared memory; and locally reducing in parallel by the reduction processing cores: the contents of the reduction processing core's input buffer; every other interleaved chunk of the interleaved buffer; the copied contents of the network write processing core's input buffer; and the copied contents of the network read processing core's input buffer.

  3. Granger causality--statistical analysis under a configural perspective.

    PubMed

    von Eye, Alexander; Wiedermann, Wolfgang; Mun, Eun-Young

    2014-03-01

    The concept of Granger causality can be used to examine putative causal relations between two series of scores. Based on regression models, it is asked whether one series can be considered the cause for the second series. In this article, we propose extending the pool of methods available for testing hypotheses that are compatible with Granger causation by adopting a configural perspective. This perspective allows researchers to assume that effects exist for specific categories only or for specific sectors of the data space, but not for other categories or sectors. Configural Frequency Analysis (CFA) is proposed as the method of analysis from a configural perspective. CFA base models are derived for the exploratory analysis of Granger causation. These models are specified so that they parallel the regression models used for variable-oriented analysis of hypotheses of Granger causation. An example from the development of aggression in adolescence is used. The example shows that only one pattern of change in aggressive impulses over time Granger-causes change in physical aggression against peers.

  4. Parallel pivoting combined with parallel reduction

    NASA Technical Reports Server (NTRS)

    Alaghband, Gita

    1987-01-01

    Parallel algorithms for triangularization of large, sparse, and unsymmetric matrices are presented. The method combines the parallel reduction with a new parallel pivoting technique, control over generations of fill-ins and a check for numerical stability, all done in parallel with the work being distributed over the active processes. The parallel technique uses the compatibility relation between pivots to identify parallel pivot candidates and uses the Markowitz number of pivots to minimize fill-in. This technique is not a preordering of the sparse matrix and is applied dynamically as the decomposition proceeds.

  5. A cognitive perspective on object relations, drive development and ego structure in the second and third years of life.

    PubMed

    Posener, J A

    1989-01-01

    This paper extends a recent line of research by correlating Piaget's theory of cognitive development with several psychoanalytic perspectives on development during the second and third years of life. The concrete, imagistic, unintegrated nature of mental representations associated by Mahler and Kernberg with this period, along with the mental operation of splitting, are related to preconceptual representation, a cognitive mode described by Piaget. Psychoanalytic perspectives on the body ego and object world associated with the anal period are also seen to involve concrete, unintegrated representations which show correspondence with preconceptual cognition. Parallels are explored between cognitive stages and the psychoanalytic understanding of ego and superego development. While psychoanalysis is not a cognitive psychology, aspects of its theory are concerned with cognitive structure and are enriched by a consideration of cognitive development.

  6. Improving operating room productivity via parallel anesthesia processing.

    PubMed

    Brown, Michael J; Subramanian, Arun; Curry, Timothy B; Kor, Daryl J; Moran, Steven L; Rohleder, Thomas R

    2014-01-01

    Parallel processing of regional anesthesia may improve operating room (OR) efficiency in patients undergoing upper extremity surgical procedures. The purpose of this paper is to evaluate whether performing regional anesthesia outside the OR in parallel increases total cases per day and improves efficiency and productivity. Data from all adult patients who underwent regional anesthesia as their primary anesthetic for upper extremity surgery over a one-year period were used to develop a simulation model. The model evaluated pure operating modes of regional anesthesia performed within the OR and outside the OR in a parallel manner. The scenarios were used to evaluate how many surgeries could be completed in a standard work day (555 minutes) and, assuming a standard three cases per day, the predicted end-of-day overtime. Modeling results show that parallel processing of regional anesthesia increases the average cases per day for all surgeons included in the study. The average increase was 0.42 surgeries per day. Where it was assumed that three cases per day would be performed by all surgeons, the number of days going to overtime was reduced by 43 percent with parallel blocks. The overtime with parallel anesthesia was also projected to be 40 minutes less per day per surgeon. Key limitations include the assumption that all cases used regional anesthesia in the comparisons; many days may have both regional and general anesthesia. Also, as a case study, single-center research may limit generalizability. Perioperative care providers should consider parallel administration of regional anesthesia where there is a desire to increase daily upper extremity surgical case capacity. Where there are sufficient resources for parallel anesthesia processing, efficiency and productivity can be significantly improved. Simulation modeling can be an effective tool to show practice change effects at a system-wide level.
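
    A toy Monte Carlo of the comparison the authors model, block placed serially in the OR versus in parallel outside it, can make the mechanism concrete. The duration distributions and counts below are invented for illustration and are not the paper's data.

        #include <stdio.h>
        #include <stdlib.h>

        #define DAYS 10000
        #define CASES_PER_DAY 3
        #define DAY_MIN 555            /* standard workday, minutes */

        /* Crude uniform draw in [lo, hi] minutes (illustrative only). */
        static int draw(int lo, int hi) { return lo + rand() % (hi - lo + 1); }

        int main(void)
        {
            int overtime_serial = 0, overtime_parallel = 0;
            for (int d = 0; d < DAYS; d++) {
                int t_serial = 0, t_parallel = 0;
                for (int c = 0; c < CASES_PER_DAY; c++) {
                    int block   = draw(15, 30);   /* regional block placement */
                    int surgery = draw(90, 150);  /* surgery + room turnover  */
                    t_serial   += block + surgery; /* block occupies the OR   */
                    t_parallel += surgery;         /* block done in parallel  */
                }
                overtime_serial   += (t_serial   > DAY_MIN);
                overtime_parallel += (t_parallel > DAY_MIN);
            }
            printf("days with overtime: serial %.1f%%, parallel %.1f%%\n",
                   100.0 * overtime_serial / DAYS,
                   100.0 * overtime_parallel / DAYS);
            return 0;
        }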

  7. High Performance Proactive Digital Forensics

    NASA Astrophysics Data System (ADS)

    Alharbi, Soltan; Moa, Belaid; Weber-Jahnke, Jens; Traore, Issa

    2012-10-01

    With the increase in the number of digital crimes and in their sophistication, High Performance Computing (HPC) is becoming a must in Digital Forensics (DF). According to the FBI annual report, the size of data processed during the 2010 fiscal year reached 3,086 TB (compared to 2,334 TB in 2009), and the number of agencies that requested Regional Computer Forensics Laboratory assistance increased from 689 in 2009 to 722 in 2010. Since most investigation tools are both I/O and CPU bound, next-generation DF tools are required to be distributed and offer HPC capabilities. The need for HPC is even more evident in investigating crimes on clouds or when proactive DF analysis and on-site investigation, requiring semi-real-time processing, are performed. Although overcoming the performance challenge is a major goal in DF, as far as we know, there is almost no research on HPC-DF except for a few papers. As such, in this work, we extend our work on the need for a proactive system and present a high performance automated proactive digital forensic system. The most expensive phase of the system, namely proactive analysis and detection, uses a parallel extension of the iterative z algorithm. It also implements new parallel information-based outlier detection algorithms to proactively and forensically handle suspicious activities. To analyse a large number of targets and events and continuously do so (to capture the dynamics of the system), we rely on a multi-resolution approach to explore the digital forensic space. A data set from the Honeynet Forensic Challenge in 2001 is used to evaluate the system from DF and HPC perspectives.

  8. Parallel, Asynchronous Executive (PAX): System concepts, facilities, and architecture

    NASA Technical Reports Server (NTRS)

    Jones, W. H.

    1983-01-01

    The Parallel, Asynchronous Executive (PAX) is a software operating system simulation that allows many computers to work on a single problem at the same time. PAX is currently implemented on a UNIVAC 1100/42 computer system. Independent UNIVAC runstreams are used to simulate independent computers. Data are shared among independent UNIVAC runstreams through shared mass-storage files. PAX has achieved the following: (1) applied several computing processes simultaneously to a single, logically unified problem; (2) resolved most parallel processor conflicts by careful work assignment; (3) resolved by means of worker requests to PAX all conflicts not resolved by work assignment; (4) provided fault isolation and recovery mechanisms to meet the problems of an actual parallel, asynchronous processing machine. Additionally, one real-life problem has been constructed for the PAX environment. This is CASPER, a collection of aerodynamic and structural dynamic problem simulation routines. CASPER is not discussed in this report except to provide examples of parallel-processing techniques.

  9. Analysis of bacterial fatty acids by flow modulated comprehensive two-dimensional gas chromatography with parallel flame ionization detector/mass spectrometry.

    PubMed

    Gu, Qun; David, Frank; Lynen, Frédéric; Rumpel, Klaus; Xu, Guowang; De Vos, Paul; Sandra, Pat

    2010-06-25

    Comprehensive two-dimensional gas chromatography (GCxGC) offers an interesting tool for profiling bacterial fatty acids. Flow modulated GCxGC using a commercially available system was evaluated, and different parameters such as column flows and modulation time were optimized. The method was tested on bacterial fatty acid methyl esters (BAMEs) from Stenotrophomonas maltophilia LMG 958T using parallel flame ionization detection (FID)/mass spectrometry (MS). The results were compared to data obtained using a thermal modulated GCxGC system. The data show that the flow modulated GCxGC-FID/MS method can be applied in a routine environment and offers interesting perspectives for the chemotaxonomy of bacteria.

  10. Parallel Algorithms for Image Analysis.

    DTIC Science & Technology

    1982-06-01

    [Report documentation page residue; recoverable details only: technical report TR-1180, author Azriel Rosenfeld, grant AFOSR-77-3271. Keywords: image processing; image analysis; parallel processing; cellular computers.]

  11. Parallel computing method for simulating hydrological processesof large rivers under climate change

    NASA Astrophysics Data System (ADS)

    Wang, H.; Chen, Y.

    2016-12-01

    Climate change is one of the most widely discussed global environmental problems. It has altered the distribution of watershed hydrological processes in time and space, especially in the world's large rivers. Watershed hydrological simulation based on physically based distributed hydrological models can give better results than lumped models. However, such simulation involves a large amount of calculation, especially in large rivers, and thus needs huge computing resources that may not be steadily available to researchers, or only at high expense; this has seriously restricted research and application. To address this problem, current parallel methods mostly parallelize in the space and time dimensions: they calculate the natural features in order, based on a distributed hydrological model, by grid (unit or sub-basin) from upstream to downstream. This article proposes a high-performance computing method for hydrological process simulation with a high speedup ratio and parallel efficiency. It combines the spatial and temporal runoff characteristics of the distributed hydrological model with distributed data storage, an in-memory database, distributed computing, and parallel computing based on computing power units. The method has strong adaptability and extensibility, meaning it can make full use of computing and storage resources under the condition of limited computing resources, and its computing efficiency improves roughly linearly with the increase of computing resources. This method can satisfy the parallel computing requirements of hydrological process simulation in small, medium and large rivers.

  12. Parallels between a Collaborative Research Process and the Middle Level Philosophy

    ERIC Educational Resources Information Center

    Dever, Robin; Ross, Diane; Miller, Jennifer; White, Paula; Jones, Karen

    2014-01-01

    The characteristics of the middle level philosophy as described in This We Believe closely parallel the collaborative research process. The journey of one research team is described in relationship to these characteristics. The collaborative process includes strengths such as professional relationships, professional development, courageous…

  13. Removal of suspended solids and turbidity from marble processing wastewaters by electrocoagulation: comparison of electrode materials and electrode connection systems.

    PubMed

    Solak, Murat; Kiliç, Mehmet; Hüseyin, Yazici; Sencan, Aziz

    2009-12-15

    In this study, the removal of suspended solids (SS) and turbidity from marble processing wastewaters by an electrocoagulation (EC) process was investigated using aluminium (Al) and iron (Fe) electrodes run in serial and parallel connection systems. To remove these pollutants from the marble processing wastewater, an EC reactor including monopolar electrodes (Al/Fe) in parallel and serial connection systems was utilized. The effects of operating parameters such as pH, current density, and electrolysis time on SS and turbidity removal were optimized. The EC process with monopolar Al electrodes in parallel and serial connections, carried out at the optimum conditions (pH 9, current density approximately 15 A/m(2), electrolysis time 2 min), resulted in 100% SS removal. Removal efficiencies of the EC process for SS with monopolar Fe electrodes in parallel and serial connection were found to be 99.86% and 99.94%, respectively. The optimum parameters for monopolar Fe electrodes in both connection types were a pH value of 8 and an electrolysis time of 2 min. The optimum current density for Fe electrodes in serial and parallel connections was 10 and 20 A/m(2), respectively. Based on the results obtained, the EC process running with each type of electrode and connection was highly effective for the removal of SS and turbidity from marble processing wastewaters, and operating costs with monopolar Al electrodes in parallel connection were the lowest of all the configurations tested.

  14. Stress and decision making: neural correlates of the interaction between stress, executive functions, and decision making under risk.

    PubMed

    Gathmann, Bettina; Schulte, Frank P; Maderwald, Stefan; Pawlikowski, Mirko; Starcke, Katrin; Schäfer, Lena C; Schöler, Tobias; Wolf, Oliver T; Brand, Matthias

    2014-03-01

    Stress and additional load on the executive system, produced by a parallel working memory task, impair decision making under risk. However, the combination of stress and a parallel task seems to protect decision-making performance [e.g., operationalized by the Game of Dice Task (GDT)] from decreasing, probably via a switch from serial to parallel processing. The question remains how the brain manages such demanding decision-making situations. The current study used a 7-tesla magnetic resonance imaging (MRI) system to investigate the underlying neural correlates of the interaction between stress (induced by the Trier Social Stress Test), risky decision making (GDT), and a parallel executive task (2-back task) in order to better understand these behavioral findings. The results show that, on a behavioral level, stressed participants did not show significant differences in task performance. Interestingly, when comparing the stress group (SG) with the control group, the SG showed a greater increase in neural activation in the anterior prefrontal cortex when performing the 2-back task simultaneously with the GDT than when performing each task alone. This brain area is associated with parallel processing. Thus, the results suggest that in stressful dual-tasking situations, where a decision has to be made while working memory is demanded in parallel, a stronger activation of a brain area associated with parallel processing takes place. The findings are in line with the idea that stress triggers a switch from serial to parallel processing in demanding dual-tasking situations.

  15. Social control and coercion in addiction treatment: towards evidence-based policy and practice.

    PubMed

    Wild, T Cameron

    2006-01-01

    Social pressures are often an integral part of the process of seeking addiction treatment. However, scientists have not developed conclusive evidence on the processes, benefits and limitations of using legal, formal and informal social control tactics to inform policy makers, service providers and the public. This paper characterizes barriers to a robust interdisciplinary analysis of social control and coercion in addiction treatment and provides directions for future research. Conceptual analysis and review of key studies and trends in the area are used to describe eight implicit assumptions underlying policy, practice and scholarship on this topic. Many policies, programmes and researchers are guided by a simplistic behaviourist and health-service perspective on social controls that (a) overemphasizes the use of criminal justice systems to compel individuals into treatment and (b) fails to take into account provider, patient and public views. Policies and programmes that expand addiction treatment options deserve support. However, drawing a firm distinction between social controls (objective use of social pressure) and coercion (client perceptions and decision-making processes) supports a parallel position that rejects treatment policies, programmes, and associated practices that create client perceptions of coercion.

  16. New insights into a hot environment for early life.

    PubMed

    Dai, Jianghong

    2017-06-01

    Investigating the physical-chemical setting of early life is a challenging task. In this contribution, the author attempted to introduce a provocative concept from cosmology - cosmic microwave background (CMB), which is the residual thermal radiation from a hot early Universe - to the field. For this purpose, the author revisited a recently deduced biomarker, the 1,6-anhydro bond of sugars in bacteria. In vitro, the 1,6-anhydro bond of sugars reflects and captures residual thermal radiation in thermochemical processes and therefore is somewhat analogous to the CMB. In vivo, the formation process of the 1,6-anhydro bond of sugars on the peptidoglycan of the prokaryotic cell wall parallels the in vitro processes, suggesting that the 1,6-anhydro bond is an ideal CMB-like analogue that points to a hot setting for early life. The CMB-like 1,6-anhydro bond is involved in the life cycle of viruses and the metabolism of eukaryotes, underlining this notion. From a novel perspective, the application of the concept of the CMB to microbial ecology may give new insights into a hot environment, such as hydrothermal vents, supporting early life and providing hypotheses to test in molecular palaeontology. © 2017 Society for Applied Microbiology and John Wiley & Sons Ltd.

  17. Scaling up antiretroviral therapy in Uganda: using supply chain management to appraise health systems strengthening.

    PubMed

    Windisch, Ricarda; Waiswa, Peter; Neuhann, Florian; Scheibe, Florian; de Savigny, Don

    2011-08-01

    Strengthened national health systems are necessary for effective and sustained expansion of antiretroviral therapy (ART). ART and its supply chain management in Uganda are largely based on parallel and externally supported efforts. The question arises whether systems are being strengthened to sustain access to ART. This study applies systems thinking to assess supply chain management, the role of external support and whether investments create the needed synergies to strengthen health systems. This study uses the WHO health systems framework and examines the issues of governance, financing, information, human resources and service delivery in relation to supply chain management of medicines and technologies. It looks at links and causal chains between supply chain management for ART and the national supply system for essential drugs. It combines data from the literature and key informant interviews with observations at the health service delivery level in a study district. Current drug supply chain management in Uganda is characterized by parallel processes and information systems that result in poor quality and inefficiencies. Lower-than-expected health system performance, stock-outs and other shortages affect ART and primary care in general. Poor performance of supply chain management is amplified by weak conditions at all levels of the health system, including the areas of financing, governance, human resources and information. Governance issues include the failure to follow up on initial policy intentions and a focus on narrow, short-term approaches. The opportunity and need to use ART investments for essential supply chain management and a strengthened health system have not been exploited. By applying a systems perspective this work indicates the seriousness of missing system prerequisites. The findings suggest that root causes and capacities across the system have to be addressed synergistically to enable systems that can match and accommodate investments in disease-specific interventions. The multiplicity and complexity of existing challenges require a long-term and systems perspective, essentially in contrast to the current short-term and program-specific nature of external assistance.

  18. Changes in visual perspective influence brain activity patterns during cognitive perspective-taking of other people's pain.

    PubMed

    Vistoli, Damien; Achim, Amélie M; Lavoie, Marie-Audrey; Jackson, Philip L

    2016-05-01

    Empathy refers to our capacity to share and understand the emotional states of others. It relies on two main processes according to existing models: an effortless affective sharing process based on neural resonance and a more effortful cognitive perspective-taking process enabling the ability to imagine and understand how others feel in specific situations. Until now, studies have focused on factors influencing the affective sharing process but little is known about those influencing the cognitive perspective-taking process and the related brain activations during vicarious pain. In the present fMRI study, we used the well-known physical pain observation task to examine whether the visual perspective can influence, in a bottom-up way, the brain regions involved in taking others' cognitive perspective to attribute their level of pain. We used a pseudo-dynamic version of this classic task which features hands in painful or neutral daily life situations while orthogonally manipulating: (1) the visual perspective with which hands were presented (first-person versus third-person conditions) and (2) the explicit instructions to imagine oneself or an unknown person in those situations (Self versus Other conditions). The cognitive perspective-taking process was investigated by comparing Other and Self conditions. When examined across both visual perspectives, this comparison showed no supra-threshold activation. Instead, the Other versus Self comparison led to a specific recruitment of the bilateral temporo-parietal junction when hands were presented according to a first-person (but not third-person) visual perspective. The present findings identify the visual perspective as a factor that modulates the neural activations related to cognitive perspective-taking during vicarious pain and show that this complex cognitive process can be influenced by perceptual stages of information processing. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Parallel-hierarchical processing and classification of laser beam profile images based on the GPU-oriented architecture

    NASA Astrophysics Data System (ADS)

    Yarovyi, Andrii A.; Timchenko, Leonid I.; Kozhemiako, Volodymyr P.; Kokriatskaia, Nataliya I.; Hamdi, Rami R.; Savchuk, Tamara O.; Kulyk, Oleksandr O.; Surtel, Wojciech; Amirgaliyev, Yedilkhan; Kashaganova, Gulzhan

    2017-08-01

    The paper deals with the insufficient performance of existing computing means for large-image processing, which do not meet the modern requirements posed by the resource-intensive computing tasks of laser beam profiling. The research concentrated on one of the profiling problems, namely, real-time processing of spot images of the laser beam profile. The development of a theory of parallel-hierarchical transformation made it possible to produce models for high-performance parallel-hierarchical processes, as well as algorithms and software for their implementation based on the GPU-oriented architecture using GPGPU technologies. The analyzed performance of the suggested computerized tools for processing and classification of laser beam profile images shows that they can perform real-time processing of dynamic images of various sizes.

  20. Rubus: A compiler for seamless and extensible parallelism.

    PubMed

    Adnan, Muhammad; Aslam, Faisal; Nawaz, Zubair; Sarwar, Syed Mansoor

    2017-01-01

    Nowadays, a typical processor may have multiple processing cores on a single chip. Furthermore, a special purpose processing unit called Graphic Processing Unit (GPU), originally designed for 2D/3D games, is now available for general purpose use in computers and mobile devices. However, the traditional programming languages, which were designed to work with machines having single core CPUs, cannot utilize the parallelism available on multi-core processors efficiently. Therefore, to exploit the extraordinary processing power of multi-core processors, researchers are working on new tools and techniques to facilitate parallel programming. To this end, languages like CUDA and OpenCL have been introduced, which can be used to write code with parallelism. The main shortcoming of these languages is that the programmer needs to specify all the complex details manually in order to parallelize the code across multiple cores. Therefore, the code written in these languages is difficult to understand, debug and maintain. Furthermore, parallelizing legacy code can require rewriting a significant portion of code in CUDA or OpenCL, which can consume significant time and resources. Thus, the amount of parallelism achieved is proportional to the skills of the programmer and the time spent in code optimizations. This paper proposes a new open source compiler, Rubus, to achieve seamless parallelism. The Rubus compiler relieves the programmer from manually specifying the low-level details. It analyses and transforms a sequential program into a parallel program automatically, without any user intervention. This achieves massive speedup and better utilization of the underlying hardware without a programmer's expertise in parallel programming. For five different benchmarks, on average a speedup of 34.54 times has been achieved by Rubus as compared to Java on a basic GPU having only 96 cores. For a matrix multiplication benchmark, an average execution speedup of 84 times has been achieved by Rubus on the same GPU. Moreover, Rubus achieves this performance without drastically increasing the memory footprint of a program.

  1. Rubus: A compiler for seamless and extensible parallelism

    PubMed Central

    Adnan, Muhammad; Aslam, Faisal; Sarwar, Syed Mansoor

    2017-01-01

    Nowadays, a typical processor may have multiple processing cores on a single chip. Furthermore, a special purpose processing unit called Graphic Processing Unit (GPU), originally designed for 2D/3D games, is now available for general purpose use in computers and mobile devices. However, the traditional programming languages, which were designed to work with machines having single core CPUs, cannot utilize the parallelism available on multi-core processors efficiently. Therefore, to exploit the extraordinary processing power of multi-core processors, researchers are working on new tools and techniques to facilitate parallel programming. To this end, languages like CUDA and OpenCL have been introduced, which can be used to write code with parallelism. The main shortcoming of these languages is that the programmer needs to specify all the complex details manually in order to parallelize the code across multiple cores. Therefore, the code written in these languages is difficult to understand, debug and maintain. Furthermore, parallelizing legacy code can require rewriting a significant portion of code in CUDA or OpenCL, which can consume significant time and resources. Thus, the amount of parallelism achieved is proportional to the skills of the programmer and the time spent in code optimizations. This paper proposes a new open source compiler, Rubus, to achieve seamless parallelism. The Rubus compiler relieves the programmer from manually specifying the low-level details. It analyses and transforms a sequential program into a parallel program automatically, without any user intervention. This achieves massive speedup and better utilization of the underlying hardware without a programmer's expertise in parallel programming. For five different benchmarks, on average a speedup of 34.54 times has been achieved by Rubus as compared to Java on a basic GPU having only 96 cores. For a matrix multiplication benchmark, an average execution speedup of 84 times has been achieved by Rubus on the same GPU. Moreover, Rubus achieves this performance without drastically increasing the memory footprint of a program. PMID:29211758

  2. Support for Debugging Automatically Parallelized Programs

    NASA Technical Reports Server (NTRS)

    Jost, Gabriele; Hood, Robert; Biegel, Bryan (Technical Monitor)

    2001-01-01

    We describe a system that simplifies the process of debugging programs produced by computer-aided parallelization tools. The system uses relative debugging techniques to compare serial and parallel executions in order to show where the computations begin to differ. If the original serial code is correct, errors due to parallelization will be isolated by the comparison. One of the primary goals of the system is to minimize the effort required of the user. To that end, the debugging system uses information produced by the parallelization tool to drive the comparison process. In particular the debugging system relies on the parallelization tool to provide information about where variables may have been modified and how arrays are distributed across multiple processes. User effort is also reduced through the use of dynamic instrumentation. This allows us to modify the program execution without changing the way the user builds the executable. The use of dynamic instrumentation also permits us to compare the executions in a fine-grained fashion and only involve the debugger when a difference has been detected. This reduces the overhead of executing instrumentation.
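
    A toy sketch of the core comparison step, assuming (hypothetically) that both runs checkpoint labeled intermediate values in program order; in the actual system this comparison is driven by information from the parallelization tool and by dynamic instrumentation.

      # Minimal sketch of relative debugging: compare checkpointed intermediate
      # values from a serial and a parallel run and report where they first
      # diverge. The checkpoint format and labels are hypothetical.
      def first_divergence(serial_trace, parallel_trace, tol=1e-12):
          # each trace is a list of (label, value) checkpoints in program order
          for (label, s_val), (_, p_val) in zip(serial_trace, parallel_trace):
              if abs(s_val - p_val) > tol:
                  return label  # the computations begin to differ here
          return None

      serial_run   = [("init", 1.0), ("step1", 2.5), ("step2", 4.0)]
      parallel_run = [("init", 1.0), ("step1", 2.5), ("step2", 4.0001)]
      print(first_divergence(serial_run, parallel_run))  # -> step2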

  3. Relative Debugging of Automatically Parallelized Programs

    NASA Technical Reports Server (NTRS)

    Jost, Gabriele; Hood, Robert; Biegel, Bryan (Technical Monitor)

    2002-01-01

    We describe a system that simplifies the process of debugging programs produced by computer-aided parallelization tools. The system uses relative debugging techniques to compare serial and parallel executions in order to show where the computations begin to differ. If the original serial code is correct, errors due to parallelization will be isolated by the comparison. One of the primary goals of the system is to minimize the effort required of the user. To that end, the debugging system uses information produced by the parallelization tool to drive the comparison process. In particular, the debugging system relies on the parallelization tool to provide information about where variables may have been modified and how arrays are distributed across multiple processes. User effort is also reduced through the use of dynamic instrumentation. This allows us to modify the program execution without changing the way the user builds the executable. The use of dynamic instrumentation also permits us to compare the executions in a fine-grained fashion and only involve the debugger when a difference has been detected. This reduces the overhead of executing instrumentation.

  4. High speed infrared imaging system and method

    DOEpatents

    Zehnder, Alan T.; Rosakis, Ares J.; Ravichandran, G.

    2001-01-01

    A system and method for radiation detection with an increased frame rate. A semi-parallel processing configuration is used to process a row or column of pixels in a focal-plane array in parallel to achieve a processing rate up to and greater than 1 million frames per second.

  5. Idle waves in high-performance computing

    NASA Astrophysics Data System (ADS)

    Markidis, Stefano; Vencels, Juris; Peng, Ivy Bo; Akhmetova, Dana; Laure, Erwin; Henri, Pierre

    2015-01-01

    The vast majority of parallel scientific applications distributes computation among processes that are in a busy state when computing and in an idle state when waiting for information from other processes. We identify the propagation of idle waves through the processes of scientific applications that rely on local information exchange between neighboring processes. Idle waves are nondispersive and have a phase velocity inversely proportional to the average busy time. The physical mechanism enabling the propagation of idle waves is the local synchronization between two processes due to a remote data dependency. This study describes the large number of processes in parallel scientific applications as a continuous medium. This work is also a step towards understanding how localized idle periods can affect remote processes, leading to the degradation of global performance in parallel scientific applications.
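
    The propagation mechanism can be reproduced with a toy model: if each rank must wait for its nearest neighbours' previous step before starting its own, a one-off delay injected at one rank travels through the chain one rank per busy period, i.e. with a phase velocity inversely proportional to the busy time. All parameters below are arbitrary choices for illustration, not values from the paper.

      # Toy model of an idle wave: ranks in a 1-D chain synchronize with their
      # nearest neighbours each step, so a rank cannot start step t+1 before
      # its neighbours have finished step t. A one-off delay injected at rank 0
      # then travels along the chain at one rank per busy period.
      import numpy as np

      ranks, steps, busy = 16, 12, 1.0
      finish = np.zeros(ranks)   # time at which each rank finished its last step
      finish[0] += 5.0           # perturbation: rank 0 is delayed once

      for _ in range(steps):
          start = finish.copy()
          # local synchronization: wait for both neighbours' previous step
          start[1:] = np.maximum(start[1:], finish[:-1])
          start[:-1] = np.maximum(start[:-1], finish[1:])
          finish = start + busy

      print(np.round(finish, 1))  # the injected delay has spread along the chain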

  6. The science of computing - Parallel computation

    NASA Technical Reports Server (NTRS)

    Denning, P. J.

    1985-01-01

    Although parallel computation architectures have been known for computers since the 1920s, it was only in the 1970s that microelectronic component technologies advanced to the point where it became feasible to incorporate multiple processors in one machine. Concomitantly, the development of algorithms for parallel processing also lagged due to hardware limitations. The speed of computing with solid-state chips is limited by gate switching delays; this physical limit implies that an operational speed of about 1 Gflop is the maximum for sequential processors. A recently introduced computer features a 'hypercube' architecture with 128 processors connected in networks at 5, 6 or 7 points per grid, depending on the design choice. Its computing speed rivals that of supercomputers, but at a fraction of the cost. The added speed with less hardware is due to parallel processing, which utilizes algorithms representing different parts of an equation that can be broken into simpler statements and processed simultaneously. Present, highly developed computer languages like FORTRAN, PASCAL, COBOL, etc., rely on sequential instructions. Thus, increased emphasis will now be directed at parallel-processing algorithms that exploit the new architectures.

  7. Expressing Parallelism with ROOT

    NASA Astrophysics Data System (ADS)

    Piparo, D.; Tejedor, E.; Guiraud, E.; Ganis, G.; Mato, P.; Moneta, L.; Valls Pla, X.; Canal, P.

    2017-10-01

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.
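
    As a flavour of the implicit multi-threading interface, here is a minimal PyROOT sketch; it assumes a ROOT build with multi-threading enabled, and the file name "data.root", tree name "events" and branch "pt" are hypothetical. RDataFrame, the declarative interface this line of work fed into, runs its event loop in parallel once implicit MT is enabled.

      # Minimal PyROOT sketch (assumes a ROOT build with implicit multi-threading;
      # the file "data.root", tree "events" and branch "pt" are hypothetical).
      import ROOT

      ROOT.EnableImplicitMT()  # let ROOT parallelize its internals across threads

      # RDataFrame runs the event loop implicitly in parallel once MT is enabled
      df = ROOT.RDataFrame("events", "data.root")
      hist = df.Filter("pt > 20").Histo1D("pt")
      print(hist.GetEntries())  # triggers the (multi-threaded) event loop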

  8. Expressing Parallelism with ROOT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piparo, D.; Tejedor, E.; Guiraud, E.

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.

  9. Parallel/Vector Integration Methods for Dynamical Astronomy

    NASA Astrophysics Data System (ADS)

    Fukushima, Toshio

    1999-01-01

    This paper reviews three recent works on numerical methods for integrating ordinary differential equations (ODE) that are specially designed for parallel, vector, and/or multi-processor-unit (PU) computers. The first is the Picard-Chebyshev method (Fukushima, 1997a). It obtains a global solution of the ODE in the form of a Chebyshev polynomial of large (> 1000) degree by applying the Picard iteration repeatedly. The iteration converges for smooth problems and/or perturbed dynamics. The method runs around 100-1000 times faster in the vector mode than in the scalar mode of a certain computer with vector processors (Fukushima, 1997b). The second is a parallelization of a symplectic integrator (Saha et al., 1997). It regards the implicit midpoint rules covering thousands of timesteps as large-scale nonlinear equations and solves them by fixed-point iteration. The method is applicable to Hamiltonian systems and is expected to lead to an acceleration factor of around 50 on parallel computers with more than 1000 PUs. The last is a parallelization of the extrapolation method (Ito and Fukushima, 1997). It performs the trial integrations in parallel, and the trial integrations are further accelerated by balancing the computational load among PUs using the technique of folding. The method is all-purpose and achieves an acceleration factor of around 3.5 using several PUs. Finally, we give a perspective on the parallelization of some implicit integrators that require multiple corrections in solving implicit formulas, such as the implicit Hermitian integrators (Makino and Aarseth, 1992; Hut et al., 1995) or the implicit symmetric multistep methods (Fukushima, 1998; Fukushima, 1999).
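
    A bare-bones sketch of the Picard iteration underlying the first method: each sweep rebuilds the entire trajectory from the previous one, so the integrand evaluations at all time nodes are independent and vectorize or parallelize naturally. The Chebyshev fitting of the actual method is omitted; uniform-grid trapezoidal quadrature stands in for it.

      # Picard iteration sketch: each sweep rebuilds the whole trajectory from
      # the previous one, so all integrand evaluations are independent across
      # time nodes. Trapezoidal quadrature on a uniform grid stands in for the
      # Chebyshev machinery of the actual method.
      import numpy as np

      def picard(f, y0, t, sweeps=50):
          y = np.full_like(t, y0)           # initial guess: constant y0
          for _ in range(sweeps):
              fy = f(t, y)                  # evaluated at all nodes at once
              integral = np.concatenate(([0.0],
                  np.cumsum(0.5 * (fy[1:] + fy[:-1]) * np.diff(t))))
              y = y0 + integral             # y_{k+1}(t) = y0 + int_0^t f
          return y

      t = np.linspace(0.0, 1.0, 201)
      y = picard(lambda t, y: y, 1.0, t)    # dy/dt = y, y(0) = 1
      print(abs(y[-1] - np.e))              # small error vs. exp(1)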

  10. Brief Strategic Family Therapy: Implementing evidence-based models in community settings

    PubMed Central

    Szapocznik, José; Muir, Joan A.; Duff, Johnathan H.; Schwartz, Seth J.; Brown, C. Hendricks

    2014-01-01

    Reflecting a nearly 40-year collaborative partnership between clinical researchers and clinicians, the present article reviews the authors’ experience in developing, investigating, and implementing the Brief Strategic Family Therapy (BSFT) model. The first section of the article focuses on the theory, practice, and studies related to this evidence-based family therapy intervention targeting adolescent drug abuse and delinquency. The second section focuses on the implementation model created for the BSFT intervention: a model that parallels many of the recommendations furthered within the implementation science literature. Specific challenges encountered during the BSFT implementation process are reviewed, along with ways of conceptualizing and addressing these challenges from a systemic perspective. The implementation approach that we employ uses the same systemic principles and intervention techniques as those that underlie the BSFT model itself. Recommendations for advancing the field of implementation science, based on our on-the-ground experiences, are proposed. PMID:24274187

  11. Network neuroscience

    PubMed Central

    Bassett, Danielle S; Sporns, Olaf

    2017-01-01

    Despite substantial recent progress, our understanding of the principles and mechanisms underlying complex brain function and cognition remains incomplete. Network neuroscience proposes to tackle these enduring challenges. Approaching brain structure and function from an explicitly integrative perspective, network neuroscience pursues new ways to map, record, analyze and model the elements and interactions of neurobiological systems. Two parallel trends drive the approach: the availability of new empirical tools to create comprehensive maps and record dynamic patterns among molecules, neurons, brain areas and social systems; and the theoretical framework and computational tools of modern network science. The convergence of empirical and computational advances opens new frontiers of scientific inquiry, including network dynamics, manipulation and control of brain networks, and integration of network processes across spatiotemporal domains. We review emerging trends in network neuroscience and attempt to chart a path toward a better understanding of the brain as a multiscale networked system. PMID:28230844

  12. [Emotions and affect in psychoanalysis].

    PubMed

    Carton, Solange; Widlöcher, Daniel

    2012-06-01

    The goal of this paper is to give some indications of the concept of affect in psychoanalysis. There is no single theory of affect; Freud gave successive definitions, which continue to be deepened in contemporary psychoanalysis. We review some steps of Freud's work on affect, then look into major present-day questions, such as its relationship to the soma, the nature of unconscious affects, and the repression of affect, which is particularly developed in the field of psychoanalytic psychosomatics. From Freud's definitions of affect as one of the drive representatives and as a limit-concept between the somatic and the psychic, we develop some major theoretical perspectives, which give a central place to the soma and drive impulses, and which agree on the central idea that affect is the result of a process. We then note some parallels between the psychoanalysis of affect and the psychology and neuroscience of emotion, and underline the gaps in, and conditions for, comparison between these different epistemological approaches.

  13. Impressions of Danger Influence Impressions of People: An Evolutionary Perspective on Individual and Collective Cognition

    PubMed Central

    Schaller, Mark; Faulkner, Jason; Park, Justin H.; Neuberg, Steven L.; Kenrick, Douglas T.

    2011-01-01

    An evolutionary approach to social cognition yields novel hypotheses about the perception of people belonging to specific kinds of social categories. These implications are illustrated by empirical results linking the perceived threat of physical injury to stereotypical impressions of outgroups. We review a set of studies revealing several ways in which threat-connoting cues influence perceptions of ethnic outgroups and the individuals who belong to those outgroups. We also present new results that suggest additional implications of evolved danger-avoidance mechanisms on interpersonal communication and the persistence of cultural-level stereotypes about ethnic outgroups. The conceptual utility of an evolutionary approach is further illustrated by a parallel line of research linking the threat of disease to additional kinds of social perceptions and behaviors. Evolved danger-avoidance mechanisms appear to contribute in diverse ways to individual-level cognitive processes, as well as to culturally-shared collective beliefs. PMID:21874126

  14. Organic Solar Cells beyond One Pair of Donor-Acceptor: Ternary Blends and More.

    PubMed

    Yang, Liqiang; Yan, Liang; You, Wei

    2013-06-06

    Ternary solar cells enjoy both an increased light-absorption width and an easy fabrication process associated with their simple structures. Significant progress has been made for such solar cells, with demonstrated efficiencies over 7%; however, their fundamental working principles are still under investigation. This Perspective is intended to offer our insights into the three major governing mechanisms in these intriguing ternary solar cells: charge transfer, energy transfer, and parallel-linkage. Through careful analysis of exemplary cases, we summarize the advantages and limitations of these three major mechanisms and suggest future research directions. For example, incorporating additional singlet fission or upconversion materials into energy-transfer-dominant ternary solar cells has the potential to break the theoretical efficiency limit of single-junction organic solar cells. Clearly, a feedback loop between fundamental understanding and materials selection is urgently needed to accelerate the efficiency improvement of these ternary solar cells.

  15. File concepts for parallel I/O

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1989-01-01

    The subject of input/output (I/O) has often been neglected in the design of parallel computer systems, although for many problems I/O rates will limit the attainable speedup. The I/O problem is addressed by considering the role of files in parallel systems. The notion of parallel files is introduced. Parallel files provide for concurrent access by multiple processes and utilize parallelism in the I/O system to improve performance. Parallel files can also be used conventionally by sequential programs. A set of standard parallel file organizations is proposed, and implementations using multiple storage devices are suggested. Problem areas are also identified and discussed.
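
    A minimal sketch of concurrent access to a single parallel file, using MPI-IO through mpi4py as a modern stand-in for the organizations proposed in the paper: each process writes a disjoint block of one shared file at an offset computed from its rank, so the accesses can proceed in parallel. The file name is hypothetical.

      # Each MPI process writes a disjoint block of one shared file at an offset
      # computed from its rank, so accesses proceed concurrently.
      # Run with, e.g.: mpiexec -n 4 python parallel_file.py
      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()

      block = np.full(4, rank, dtype=np.int32)  # this process's data
      fh = MPI.File.Open(comm, "shared.dat",
                         MPI.MODE_CREATE | MPI.MODE_WRONLY)
      fh.Write_at(rank * block.nbytes, block)   # disjoint region per rank
      fh.Close()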

  16. Nuclear emergency management procedures in Europe

    NASA Astrophysics Data System (ADS)

    Carter, Emma

    The Chernobyl accident brought to the fore the need for decision-making in nuclear emergency management to be transparent and consistent across Europe. A range of systems to support decision-making in future emergencies have since been developed, but, by and large, with little consultation with potential decision makers and limited understanding of the emergency management procedures across Europe and how they differ. In nuclear emergency management, coordination, communication and information sharing are of paramount importance. There are many key players with their own technical expertise, and several key activities occur in parallel, across different locations. Business process modelling can facilitate understanding through the representation of processes, aid transparency and structure the analysis, comparison and improvement of processes. This work was conducted as part of the European Fifth Framework Programme project EVATECH, whose aim was to improve decision support methods, models and processes taking into account stakeholder expectations and concerns. It involved the application of process modelling to document and compare the emergency management processes of four European countries, using a multidisciplinary approach with a socio-technical perspective. The use of process modelling did indeed facilitate understanding and provided a common platform, previously unavailable, for considering emergency management processes. This thesis illustrates the structured analysis that process modelling enables: first, through an individual analysis of the United Kingdom (UK) model, which showed the potential benefits for a country, namely for training purposes, for building reflexive shared mental models, for aiding coordination and for process improvement; and second, through a comparison of the processes in Belgium, Germany, the Slovak Republic and the UK. This comparison showed that the four process models differ substantially in their organisational structure, and it identified differences in the management of advice, in where decisions are made and in the style of the communication network. The structured analysis also enabled the development of a framework for evaluating decision support systems (DSS) from a process perspective. The thesis concludes by reflecting on the challenges facing the European off-site nuclear emergency community and suggests directions for future work, with particular reference to a recent conference on the capabilities and challenges of off-site nuclear emergency management, the Salzburg Symposium 2003.

  17. Synchronization Of Parallel Discrete Event Simulations

    NASA Technical Reports Server (NTRS)

    Steinman, Jeffrey S.

    1992-01-01

    Adaptive, parallel, discrete-event-simulation-synchronization algorithm, Breathing Time Buckets, developed in Synchronous Parallel Environment for Emulation and Discrete Event Simulation (SPEEDES) operating system. Algorithm allows parallel simulations to process events optimistically in fluctuating time cycles that naturally adapt while simulation is in progress, combining the best of optimistic and conservative synchronization strategies while avoiding their major disadvantages. Well suited for modeling communication networks, for large-scale war games, for simulated flights of aircraft, for simulations of computer equipment, for mathematical modeling, for interactive engineering simulations, and for depictions of flows of information.
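
    A single-process toy sketch of the adaptive-window idea (the rollback machinery of a real optimistic simulator is omitted): events are processed optimistically, but nothing is committed past the earliest timestamp of any newly generated event, so each cycle's "time bucket" breathes with the simulation. Event names and timestamps are made up.

      # Single-process toy of the adaptive time bucket (rollback omitted):
      # events are processed optimistically, but the commit horizon shrinks to
      # the earliest timestamp of any newly generated event.
      import heapq

      events = [(0.0, "a"), (1.0, "b"), (4.0, "c")]  # (timestamp, payload)
      heapq.heapify(events)

      def handle(t, payload):
          # toy handler: event "a" schedules one new event at t + 2.5
          return [(t + 2.5, "a-child")] if payload == "a" else []

      while events:
          horizon = float("inf")
          committed, new = [], []
          # optimistic phase: process pending events up to the adaptive horizon
          while events and events[0][0] < horizon:
              t, payload = heapq.heappop(events)
              children = handle(t, payload)
              new.extend(children)
              committed.append((t, payload))
              for child_time, _ in children:
                  horizon = min(horizon, child_time)  # the bucket "breathes" shut
          print("committed bucket:", committed)
          for ev in new:
              heapq.heappush(events, ev)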

  18. Knowledge representation into Ada parallel processing

    NASA Technical Reports Server (NTRS)

    Masotto, Tom; Babikyan, Carol; Harper, Richard

    1990-01-01

    The Knowledge Representation into Ada Parallel Processing project is a joint NASA and Air Force funded project to demonstrate the execution of intelligent systems in Ada on the Charles Stark Draper Laboratory fault-tolerant parallel processor (FTPP). Two applications were demonstrated - a portion of the adaptive tactical navigator and a real time controller. Both systems are implemented as Activation Framework Objects on the Activation Framework intelligent scheduling mechanism developed by Worcester Polytechnic Institute. The implementations, results of performance analyses showing speedup due to parallelism and initial efficiency improvements are detailed and further areas for performance improvements are suggested.

  19. Elementary School Teachers as "Targets and Agents of Change": Teachers' Learning in Interaction with Reform Science Curriculum

    ERIC Educational Resources Information Center

    Metz, Kathleen E.

    2009-01-01

    This article examines teachers' perspectives on the challenges of using a science reform curriculum, as well as their learning in interaction with the curriculum and parallel professional development program. As case studies, I selected 4 veteran teachers of 2nd or 3rd grade, with varying science backgrounds (including 2 with essentially none).…

  20. Perspectives on Tolerance in Education Flowing from a Comparison of Religion Education in Estonia and South Africa

    ERIC Educational Resources Information Center

    van der Walt, Johannes L.

    2013-01-01

    The question that prompted this investigation into religion education (RE) in Estonia and in South Africa was whether two countries from such totally different parts of the world, with such vastly different populations and cultures though with somewhat parallel histories, had tackled the same or similar problems regarding the provision of RE in…

  1. The Affordability of University Education: A Perspective from Both Sides of the 49th Parallel

    ERIC Educational Resources Information Center

    Swail, Watson Scott

    2004-01-01

    This study was conducted to better understand the relative affordability of public university education in Canada and the United States. The report was written to answer two key questions: (1) How does access to university education in Canada compare to access in the US? and (2) How affordable is the Canadian university system compared to the…

  2. Student Teachers' Team Teaching: How Do Learners in the Classroom Experience Team-Taught Lessons by Student Teachers?

    ERIC Educational Resources Information Center

    Baeten, Marlies; Simons, Mathea

    2016-01-01

    This study focuses on student teachers' team teaching. Two team teaching models (sequential and parallel teaching) were applied by 14 student teachers in a quasi-experimental design. When implementing new teaching models, it is important to take into account the perspectives of all actors involved. Although learners are key actors in the teaching…

  3. Picture superiority doubly dissociates the ERP correlates of recollection and familiarity.

    PubMed

    Curran, Tim; Doyle, Jeanne

    2011-05-01

    Two experiments investigated the processes underlying the picture superiority effect on recognition memory. Studied pictures were associated with higher accuracy than studied words, regardless of whether test stimuli were words (Experiment 1) or pictures (Experiment 2). Event-related brain potentials (ERPs) recorded during test suggested that the 300-500 msec FN400 old/new effect, hypothesized to be related to familiarity-based recognition, benefited from study/test congruity, such that it was larger when study and test format remained constant than when they differed. The 500-800 msec parietal old/new effect, hypothesized to be related to recollection, benefited from studying pictures, regardless of test format. The parallel between the accuracy and parietal ERP results suggests that picture superiority may arise from encoding the distinctive attributes of pictures in a manner that enhances their later recollection. Furthermore, when words were tested, opposite effects of studying words versus studying pictures were observed on the FN400 (word > picture) versus parietal (picture > word) old/new effects, providing strong evidence for a crossover interaction between these components that is consistent with a dual-process perspective.

  4. [Is disease merely illness?: Biomedicine, "parallel" forms of care and power].

    PubMed

    Menéndez, Eduardo L

    2015-09-01

    Following Giovanni Berlinguer's proposal that health/disease processes are one of the primary spies into the contradictions of a system, this article describes cases from central and peripheral capitalist contexts, as well as from the so-called "real socialist" states, that illustrate this role. Secondly, we examine the processes and, above all, the interpretations developed in Latin America, and especially Mexico, regarding the role attributed to traditional medicine in the identity and sense of belonging of indigenous peoples, interpretations which emphasize the incompatibility of indigenous worldviews with biomedicine. To do so, we analyze projects carried out under the notion of intercultural health, which largely resulted in failure in both health and political terms. The almost entirely ideological content and perspective of these projects is highlighted, as is the scant relationship they bear to the reality of indigenous people. Lastly, we consider the impact and role that the advance of these conceptualizations and health programs might have had in the disengagement experienced over the last ten years or so by the ethnic movements of Latin America.

  5. The species translation challenge—A systems biology perspective on human and rat bronchial epithelial cells

    PubMed Central

    Poussin, Carine; Mathis, Carole; Alexopoulos, Leonidas G; Messinis, Dimitris E; Dulize, Rémi H J; Belcastro, Vincenzo; Melas, Ioannis N; Sakellaropoulos, Theodore; Rhrissorrakrai, Kahn; Bilal, Erhan; Meyer, Pablo; Talikka, Marja; Boué, Stéphanie; Norel, Raquel; Rice, John J; Stolovitzky, Gustavo; Ivanov, Nikolai V; Peitsch, Manuel C; Hoeng, Julia

    2014-01-01

    How biological systems respond to external cues such as drugs, chemicals, viruses and hormones is an essential question in biomedicine and in the field of toxicology, and one that cannot be easily studied in humans. Thus, biomedical research has continuously relied on animal models for studying the impact of these compounds and has attempted to ‘translate’ the results to humans. In this context, the SBV IMPROVER (Systems Biology Verification for Industrial Methodology for PROcess VErification in Research) collaborative initiative, which uses crowd-sourcing techniques to address fundamental questions in systems biology, invited scientists to deploy their own computational methodologies to make predictions on species translatability. A multi-layer systems biology dataset was generated, comprising phosphoproteomics, transcriptomics and cytokine data derived from normal human (NHBE) and rat (NRBE) bronchial epithelial cells exposed in parallel to more than 50 different stimuli under identical conditions. The present manuscript describes in detail the experimental settings, generation, processing and quality control analysis of this multi-layer omics dataset, which is accessible in public repositories for further intra- and inter-species translation studies. PMID:25977767

  6. The species translation challenge-a systems biology perspective on human and rat bronchial epithelial cells.

    PubMed

    Poussin, Carine; Mathis, Carole; Alexopoulos, Leonidas G; Messinis, Dimitris E; Dulize, Rémi H J; Belcastro, Vincenzo; Melas, Ioannis N; Sakellaropoulos, Theodore; Rhrissorrakrai, Kahn; Bilal, Erhan; Meyer, Pablo; Talikka, Marja; Boué, Stéphanie; Norel, Raquel; Rice, John J; Stolovitzky, Gustavo; Ivanov, Nikolai V; Peitsch, Manuel C; Hoeng, Julia

    2014-01-01

    How biological systems respond to external cues such as drugs, chemicals, viruses and hormones is an essential question in biomedicine and in the field of toxicology, and one that cannot be easily studied in humans. Thus, biomedical research has continuously relied on animal models for studying the impact of these compounds and has attempted to 'translate' the results to humans. In this context, the SBV IMPROVER (Systems Biology Verification for Industrial Methodology for PROcess VErification in Research) collaborative initiative, which uses crowd-sourcing techniques to address fundamental questions in systems biology, invited scientists to deploy their own computational methodologies to make predictions on species translatability. A multi-layer systems biology dataset was generated, comprising phosphoproteomics, transcriptomics and cytokine data derived from normal human (NHBE) and rat (NRBE) bronchial epithelial cells exposed in parallel to more than 50 different stimuli under identical conditions. The present manuscript describes in detail the experimental settings, generation, processing and quality control analysis of this multi-layer omics dataset, which is accessible in public repositories for further intra- and inter-species translation studies.

  7. Big Computing in Astronomy: Perspectives and Challenges

    NASA Astrophysics Data System (ADS)

    Pankratius, Victor

    2014-06-01

    Hardware progress in recent years has led to astronomical instruments gathering large volumes of data. In radio astronomy for instance, the current generation of antenna arrays produces data at Tbits per second, and forthcoming instruments will expand these rates much further. As instruments are increasingly becoming software-based, astronomers will get more exposed to computer science. This talk therefore outlines key challenges that arise at the intersection of computer science and astronomy and presents perspectives on how both communities can collaborate to overcome these challenges. Major problems are emerging due to increases in data rates that are much larger than in storage and transmission capacity, as well as humans being cognitively overwhelmed when attempting to opportunistically scan through Big Data. As a consequence, the generation of scientific insight will become more dependent on automation and algorithmic instrument control. Intelligent data reduction will have to be considered across the entire acquisition pipeline. In this context, the presentation will outline the enabling role of machine learning and parallel computing. Bio: Victor Pankratius is a computer scientist who joined MIT Haystack Observatory following his passion for astronomy. He is currently leading efforts to advance astronomy through cutting-edge computer science and parallel computing. Victor is also involved in projects such as ALMA Phasing to enhance the ALMA Observatory with Very-Long Baseline Interferometry capabilities, the Event Horizon Telescope, as well as in the Radio Array of Portable Interferometric Detectors (RAPID) to create an analysis environment using parallel computing in the cloud. He has an extensive track record of research in parallel multicore systems and software engineering, with contributions to auto-tuning, debugging, and empirical experiments studying programmers. Victor has worked with major industry partners such as Intel, Sun Labs, and Oracle. He holds a distinguished doctorate and a Habilitation degree in Computer Science from the University of Karlsruhe. Contact him at pankrat@mit.edu, victorpankratius.com, or Twitter @vpankratius.

  8. Highly scalable parallel processing of extracellular recordings of Multielectrode Arrays.

    PubMed

    Gehring, Tiago V; Vasilaki, Eleni; Giugliano, Michele

    2015-01-01

    Technological advances in Multielectrode Arrays (MEAs) used for multisite, parallel electrophysiological recordings lead to an ever-increasing amount of raw data being generated. Arrays with hundreds to a few thousand electrodes are slowly seeing widespread use, and the expectation is that more sophisticated arrays will become available in the near future. In order to process the large data volumes resulting from MEA recordings, there is a pressing need for new software tools able to process many data channels in parallel. Here we present a new tool for processing MEA recordings that makes use of new programming paradigms and recent technology developments to unleash the power of modern highly parallel hardware, such as multi-core CPUs with vector instruction sets or GPGPUs. Our tool builds on and complements existing MEA data analysis packages. It shows high scalability and can be used to speed up performance-critical pre-processing steps such as data filtering and spike detection, helping to make the analysis of larger data sets tractable.
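
    A sketch of the channel-parallel pattern such tools exploit, using Python's multiprocessing as a stand-in: filtering and threshold-based spike detection are independent per electrode, so channels can be distributed across cores. The data layout (one row per electrode), the filter, and the threshold below are illustrative assumptions, not the paper's pipeline.

      # Channel-parallel pre-processing sketch (data layout assumed: one row per
      # electrode). Filtering and spike detection are independent per channel,
      # so a process pool distributes them across cores.
      from multiprocessing import Pool
      import numpy as np

      def detect_spikes(channel, threshold=5.0):
          # crude high-pass via moving-average subtraction, then thresholding
          baseline = np.convolve(channel, np.ones(25) / 25, mode="same")
          filtered = channel - baseline
          return np.flatnonzero(filtered < -threshold * filtered.std())

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          data = rng.normal(size=(64, 100_000))       # 64 channels of noise
          with Pool() as pool:
              spikes = pool.map(detect_spikes, data)  # one channel per task
          print(sum(len(s) for s in spikes), "candidate spikes")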

  9. Constituent order and semantic parallelism in online comprehension: eye-tracking evidence from German.

    PubMed

    Knoeferle, Pia; Crocker, Matthew W

    2009-12-01

    Reading times for the second conjunct of and-coordinated clauses are faster when the second conjunct parallels the first conjunct in its syntactic or semantic (animacy) structure than when its structure differs (Frazier, Munn, & Clifton, 2000; Frazier, Taft, Roeper, & Clifton, 1984). What remains unclear, however, is the time course of parallelism effects, their scope, and the kinds of linguistic information to which they are sensitive. Findings from the first two eye-tracking experiments revealed incremental constituent order parallelism across the board, both during structural disambiguation (Experiment 1) and in sentences with unambiguously case-marked constituent order (Experiment 2), as well as for both marked and unmarked constituent orders (Experiments 1 and 2). Findings from Experiment 3 revealed effects of both constituent order and subtle semantic (noun phrase similarity) parallelism. Together our findings provide evidence for an across-the-board account of parallelism for processing and-coordinated clauses, in which both constituent order and semantic aspects of representations contribute towards incremental parallelism effects. We discuss our findings in the context of existing findings on parallelism and priming, as well as mechanisms of sentence processing.

  10. Six Years of Parallel Computing at NAS (1987 - 1993): What Have we Learned?

    NASA Technical Reports Server (NTRS)

    Simon, Horst D.; Cooper, D. M. (Technical Monitor)

    1994-01-01

    In the fall of 1987 the age of parallelism at NAS began with the installation of a 32K-processor CM-2 from Thinking Machines. In 1987 this was described as an "experiment" in parallel processing. In the six years since, NAS has acquired a series of parallel machines and conducted an active research and development effort focused on the use of highly parallel machines for applications in the computational aerosciences. In this time period, parallel processing for scientific applications evolved from a fringe research topic into one of the main activities at NAS. In this presentation I will review the history of parallel computing at NAS in the context of the major progress that has been made in the field in general. I will attempt to summarize the lessons we have learned so far and the contributions NAS has made to the state of the art. Based on these insights I will comment on the current state of parallel computing (including the HPCC effort) and try to predict some trends for the next six years.

  11. A parallel algorithm for the two-dimensional time fractional diffusion equation with implicit difference method.

    PubMed

    Gong, Chunye; Bao, Weimin; Tang, Guojian; Jiang, Yuewen; Liu, Jie

    2014-01-01

    It is very time-consuming to solve fractional differential equations. The computational complexity of solving the two-dimensional time fractional diffusion equation (2D-TFDE) with an iterative implicit finite difference method is O(M_x M_y N^2). In this paper, we present a parallel algorithm for the 2D-TFDE and give an in-depth discussion of this algorithm. A task distribution model and a data layout with virtual boundaries are designed for this parallel algorithm. The experimental results show that the parallel algorithm agrees well with the exact solution. The parallel algorithm on a single Intel Xeon X5540 CPU runs 3.16-4.17 times faster than the serial algorithm on a single CPU core. The parallel efficiency with 81 processes reaches 88.24% relative to 9 processes on a distributed-memory cluster system. We believe that parallel computing technology will become a basic method for computationally intensive fractional applications in the near future.

  12. Mathematical Abstraction: Constructing Concept of Parallel Coordinates

    NASA Astrophysics Data System (ADS)

    Nurhasanah, F.; Kusumah, Y. S.; Sabandar, J.; Suryadi, D.

    2017-09-01

    Mathematical abstraction is an important process in teaching and learning mathematics, so pre-service mathematics teachers need to understand and experience this process. One of the theoretical-methodological frameworks for studying this process is Abstraction in Context (AiC). In this framework, the abstraction process comprises the observable epistemic actions Recognition, Building-With, Construction, and Consolidation, known as the RBC+C model. This study investigates and analyzes how pre-service mathematics teachers constructed and consolidated the concept of Parallel Coordinates in a group discussion. It uses the AiC framework to analyze the mathematical abstraction of a group of four pre-service teachers learning Parallel Coordinates concepts. The data were collected through video recording, students’ worksheets, a test, and field notes. The results show that the students’ prior knowledge of the Cartesian coordinate system played a significant role in constructing the Parallel Coordinates concept as new knowledge. The consolidation process was influenced by the social interaction between group members. The abstraction processes taking place in this group were dominated by empirical abstraction, which emphasizes identifying characteristics of manipulated or imagined objects during the recognizing and building-with actions.

  13. Tunable color parallel tandem organic light emitting devices with carbon nanotube and metallic sheet interlayers

    NASA Astrophysics Data System (ADS)

    Oliva, Jorge; Papadimitratos, Alexios; Desirena, Haggeo; De la Rosa, Elder; Zakhidov, Anvar A.

    2015-11-01

    Parallel tandem organic light-emitting devices (OLEDs) were fabricated with transparent multiwall carbon nanotube (MWCNT) sheets and thin metal films (Al, Ag) as interlayers. In the parallel monolithic tandem architecture, the MWCNT (or metallic film) interlayer is an active electrode that injects like charges into both subunits. In the common-anode (C.A.) parallel tandems of this study, holes are injected into the top and bottom subunits from the common interlayer electrode, whereas in the common-cathode (C.C.) configuration, electrons are injected into the top and bottom subunits. Both subunits of the tandem can thus be monolithically connected in an active structure in which each subunit can be electrically addressed separately. Our tandem OLEDs have a polymer emitter in the bottom subunit and a small-molecule emitter in the top subunit. We also compared the performance of the parallel tandem with that of the in-series tandem; the additional advantages of the parallel architecture were tunable chromaticity, lower-voltage operation, and higher brightness. Finally, we demonstrate that processing the MWCNT sheets as a common anode in parallel tandems is an easy and low-cost process, since their integration as electrodes in OLEDs is achieved by a simple dry lamination process.

  14. Relationship between mathematical abstraction in learning parallel coordinates concept and performance in learning analytic geometry of pre-service mathematics teachers: an investigation

    NASA Astrophysics Data System (ADS)

    Nurhasanah, F.; Kusumah, Y. S.; Sabandar, J.; Suryadi, D.

    2018-05-01

    As a non-conventional mathematics concept, Parallel Coordinates has the potential to be learned by pre-service mathematics teachers in order to give them experience in constructing richer schemes and carrying out abstraction processes. Unfortunately, studies related to this issue are still limited. This study addresses the research question of the extent to which the abstraction process of pre-service mathematics teachers in learning the concept of Parallel Coordinates indicates their performance in learning Analytic Geometry. This case study is part of a larger study examining the mathematical abstraction of pre-service mathematics teachers learning a non-conventional mathematics concept. Descriptive statistics are used to analyze the scores from three different tests: Cartesian Coordinates, Parallel Coordinates, and Analytic Geometry. The participants were 45 pre-service mathematics teachers. The results show a linear association between the scores on Cartesian Coordinates and Parallel Coordinates. Higher levels of the abstraction process in learning Parallel Coordinates were also linearly associated with higher achievement in Analytic Geometry. These results indicate that the concept of Parallel Coordinates plays a significant role for pre-service mathematics teachers in learning Analytic Geometry.

  15. Intergroup visual perspective-taking: Shared group membership impairs self-perspective inhibition but may facilitate perspective calculation.

    PubMed

    Simpson, Austin J; Todd, Andrew R

    2017-09-01

    Reasoning about what other people see, know, and want is essential for navigating social life. Yet, even neurodevelopmentally healthy adults make perspective-taking errors. Here, we examined how the group membership of perspective-taking targets (ingroup vs. outgroup) affects processes underlying visual perspective-taking. In three experiments using two bases of group identity (university affiliation and minimal groups), interference from one's own differing perspective (i.e., egocentric intrusion) was stronger when responding from an ingroup versus an outgroup member's perspective. Spontaneous perspective calculation, as indexed by interference from another's visual perspective when reporting one's own (i.e., altercentric intrusion), did not differ across target group membership in any of our experiments. Process-dissociation analyses, which aim to isolate automatic processes underlying altercentric-intrusion effects, further revealed negligible effects of target group membership on perspective calculation. Meta-analytically, however, there was suggestive evidence that shared group membership facilitates responding from others' perspectives when self and other perspectives are aligned.

  16. Cedar Project---Original goals and progress to date

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cybenko, G.; Kuck, D.; Padua, D.

    1990-11-28

    This work encompasses a broad attack on high-speed parallel processing. Hardware, software, applications development, and performance evaluation and visualization, as well as research topics, are proposed. Our goal is to develop practical parallel processing for the 1990s.

  17. Fear Control and Danger Control: A Test of the Extended Parallel Process Model (EPPM).

    ERIC Educational Resources Information Center

    Witte, Kim

    1994-01-01

    Explores cognitive and emotional mechanisms underlying success and failure of fear appeals in context of AIDS prevention. Offers general support for Extended Parallel Process Model. Suggests that cognitions lead to fear appeal success (attitude, intention, or behavior changes) via danger control processes, whereas the emotion fear leads to fear…

  18. Processing communications events in parallel active messaging interface by awakening thread from wait state

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2013-10-22

    Processing data communications events in a parallel active messaging interface ('PAMI') of a parallel computer that includes compute nodes that execute a parallel application, with the PAMI including data communications endpoints, and the endpoints are coupled for data communications through the PAMI and through other data communications resources, including determining by an advance function that there are no actionable data communications events pending for its context, placing by the advance function its thread of execution into a wait state, waiting for a subsequent data communications event for the context; responsive to occurrence of a subsequent data communications event for the context, awakening by the thread from the wait state; and processing by the advance function the subsequent data communications event now pending for the context.
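
    A sketch of the wait/awaken pattern in Python threads, with a condition variable standing in for the PAMI machinery: the advance function sleeps when no events are pending for its context and is woken when a subsequent event arrives. The event names are hypothetical.

      # Wait/awaken sketch with Python threads standing in for PAMI contexts:
      # the advance function sleeps on a condition variable while no events are
      # pending and is woken when a new event arrives. Event names are made up.
      import threading, queue, time

      events = queue.Queue()
      cond = threading.Condition()

      def advance():
          while True:
              with cond:
                  while events.empty():    # no actionable events pending
                      cond.wait()          # place the thread into a wait state
              ev = events.get()
              if ev is None:               # sentinel: shut the thread down
                  return
              print("processed event:", ev)

      t = threading.Thread(target=advance)
      t.start()
      time.sleep(0.1)
      for ev in ("send-done", "recv-posted", None):
          with cond:
              events.put(ev)               # a subsequent event occurs...
              cond.notify()                # ...awakening the waiting thread
      t.join()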

  19. Endpoint-based parallel data processing with non-blocking collective instructions in a parallel active messaging interface of a parallel computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archer, Charles J; Blocksome, Michael A; Cernohous, Bob R

    Endpoint-based parallel data processing with non-blocking collective instructions in a PAMI of a parallel computer is disclosed. The PAMI is composed of data communications endpoints, each including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task. The compute nodes are coupled for data communications through the PAMI. The parallel application establishes a data communications geometry specifying a set of endpoints that are used in collective operations of the PAMI by associating with the geometry a list of collective algorithms valid for use with the endpoints of the geometry; registering in each endpoint in the geometry a dispatch callback function for a collective operation; and executing without blocking, through a single one of the endpoints in the geometry, an instruction for the collective operation.
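
    The overlap pattern the patent targets can be sketched with a standard MPI-3 non-blocking collective via mpi4py (a stand-in for PAMI endpoints, not the patented interface itself): the collective is started, unrelated work proceeds, and completion is awaited later.

      # Non-blocking collective sketch with mpi4py (MPI-3): start the
      # reduction, overlap it with unrelated work, then complete it.
      # Run with, e.g.: mpiexec -n 4 python nonblocking.py
      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      send = np.array([comm.Get_rank()], dtype=np.int64)
      recv = np.empty_like(send)

      req = comm.Iallreduce(send, recv, op=MPI.SUM)  # returns immediately
      # ... unrelated computation can proceed here while the reduction runs ...
      req.Wait()                                     # complete the collective
      print("rank", comm.Get_rank(), "sum =", recv[0])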

  20. JSD: Parallel Job Accounting on the IBM SP2

    NASA Technical Reports Server (NTRS)

    Saphir, William; Jones, James Patton; Walter, Howard (Technical Monitor)

    1995-01-01

    The IBM SP2 is one of the most promising parallel computers for scientific supercomputing - it is fast and usually reliable. One of its biggest problems is a lack of robust and comprehensive system software. Among other things, this software allows a collection of Unix processes to be treated as a single parallel application. It does not, however, provide accounting for parallel jobs other than what is provided by AIX for the individual process components. Without parallel job accounting, it is not possible to monitor system use, measure the effectiveness of system administration strategies, or identify system bottlenecks. To address this problem, we have written jsd, a daemon that collects accounting data for parallel jobs. jsd records information in a format that is easily machine- and human-readable, allowing us to extract the most important accounting information with very little effort. jsd also notifies system administrators in certain cases of system failure.

  1. Cloud object store for checkpoints of high performance computing applications using decoupling middleware

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2016-04-19

    Cloud object storage is enabled for checkpoints of high performance computing applications using a middleware process. A plurality of files, such as checkpoint files, generated by a plurality of processes in a parallel computing system are stored by obtaining said plurality of files from said parallel computing system; converting said plurality of files to objects using a log structured file system middleware process; and providing said objects for storage in a cloud object storage system. The plurality of processes may run, for example, on a plurality of compute nodes. The log structured file system middleware process may be embodied, for example, as a Parallel Log-Structured File System (PLFS). The log structured file system middleware process optionally executes on a burst buffer node.

  2. Parallel Processing of Broad-Band PPM Signals

    NASA Technical Reports Server (NTRS)

    Gray, Andrew; Kang, Edward; Lay, Norman; Vilnrotter, Victor; Srinivasan, Meera; Lee, Clement

    2010-01-01

    A parallel-processing algorithm and a hardware architecture to implement the algorithm have been devised for timeslot synchronization in the reception of pulse-position-modulated (PPM) optical or radio signals. As in the cases of some prior algorithms and architectures for parallel, discrete-time, digital processing of signals other than PPM, an incoming broadband signal is divided into multiple parallel narrower-band signals by means of sub-sampling and filtering. The number of parallel streams is chosen so that the frequency content of the narrower-band signals is low enough to enable processing by relatively-low speed complementary metal oxide semiconductor (CMOS) electronic circuitry. The algorithm and architecture are intended to satisfy requirements for time-varying time-slot synchronization and post-detection filtering, with correction of timing errors independent of estimation of timing errors. They are also intended to afford flexibility for dynamic reconfiguration and upgrading. The architecture is implemented in a reconfigurable CMOS processor in the form of a field-programmable gate array. The algorithm and its hardware implementation incorporate three separate time-varying filter banks for three distinct functions: correction of sub-sample timing errors, post-detection filtering, and post-detection estimation of timing errors. The design of the filter bank for correction of timing errors, the method of estimating timing errors, and the design of a feedback-loop filter are governed by a host of parameters, the most critical one, with regard to processing very broadband signals with CMOS hardware, being the number of parallel streams (equivalently, the rate-reduction parameter).
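
    The front-end decomposition can be sketched in a few lines of NumPy: the broadband stream is split by sub-sampling into M parallel streams, each at 1/M of the input rate, which slower hardware can then process concurrently. The filter banks and timing-error machinery of the actual architecture are omitted, and M = 4 is an arbitrary choice.

      # Polyphase-style decomposition: a broadband stream is split by
      # sub-sampling into M parallel streams at 1/M of the input rate.
      # Filter design and timing-error correction are omitted.
      import numpy as np

      def polyphase_split(samples, M=4):
          n = len(samples) - len(samples) % M    # trim to a multiple of M
          # stream k carries samples k, k+M, k+2M, ...
          return samples[:n].reshape(-1, M).T

      x = np.arange(16)
      for k, stream in enumerate(polyphase_split(x)):
          print("stream", k, ":", stream)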

  3. Developing software to use parallel processing effectively. Final report, June-December 1987

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Center, J.

    1988-10-01

    This report describes the difficulties involved in writing efficient parallel programs and the hardware and software support currently available for generating software that uses parallel processing effectively. Historically, the processing rate of single-processor computers has increased by one order of magnitude every five years. However, this pace is slowing as electronic circuitry comes up against physical barriers. Unfortunately, the complexity of engineering and research problems continues to require ever more processing power, far in excess of the maximum estimated 3 Gflops achievable by single-processor computers. For this reason, parallel-processing architectures are receiving considerable interest, since they offer high performance more cheaply than a single-processor supercomputer such as the Cray.

  4. Credit use: psychological perspectives on a multifaceted phenomenon.

    PubMed

    Kamleitner, Bernadette; Hoelzl, Erik; Kirchler, Erich

    2012-01-01

    Consumer borrowing is a highly topical and multifaceted phenomenon as well as a popular subject for study. We focus on consumer credit use and review the existing literature. To categorize what is known we identify four main psychological perspectives on the phenomenon: credit use as (1) a reflection of the situation, (2) a reflection of the person, (3) a cognitive process, and (4) a social process. On top of these perspectives we view credit use as a process that entails three distinct phases: (1) processes before credit acquisition, (2) processes at credit acquisition, and (3) processes after credit acquisition. We review the international literature along a two-tier structure that aligns the psychological perspectives with a process view of credit. This structure allows us to identify systematic concentrations as well as gaps in the existing research. We consolidate what is known within each perspective and identify what seems to be most urgently missing. Some of the most important gaps relate to research studying credit acquisition from the perspective of credit use as a reflection of the person or as a social process. In particular, research on credit use as a reflection of the person appears to focus exclusively on the first stage of the credit process. We conclude with a discussion that reaches across perspectives and identifies overarching gaps, trends, and open questions. We highlight a series of implicit linkages between perspectives and the geographical regions in which studies related to the perspectives were conducted. Beyond diagnosing a geographical imbalance of research, we argue for future research that systematically addresses interrelations between perspectives. We conclude with a set of global implications and research recommendations.

  5. Regional-scale calculation of the LS factor using parallel processing

    NASA Astrophysics Data System (ADS)

    Liu, Kai; Tang, Guoan; Jiang, Ling; Zhu, A.-Xing; Yang, Jianyi; Song, Xiaodong

    2015-05-01

    With the increase of data resolution and the increasing application of USLE over large areas, the existing serial implementation of algorithms for computing the LS factor is becoming a bottleneck. In this paper, a parallel processing model based on the message passing interface (MPI) is presented for the calculation of the LS factor, so that massive datasets at a regional scale can be processed efficiently. The parallel model contains algorithms for calculating flow direction, flow accumulation, drainage network, slope, slope length and the LS factor. According to the presence of data dependence, the algorithms are divided into local algorithms and global algorithms. Parallel strategies are designed according to the characteristics of the algorithms, including a decomposition method that maintains the integrity of the results, an optimized workflow that reduces the time spent exporting unnecessary intermediate data, and a buffer-communication-computation strategy that improves communication efficiency. Experiments on a multi-node system show that the proposed parallel model allows efficient calculation of the LS factor at a regional scale with a massive dataset.
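
    A minimal mpi4py sketch of the decomposition idea for one local algorithm (here a crude north-south slope stands in for the paper's terrain algorithms): the DEM is scattered in row blocks, each process computes its local result, and rank 0 gathers the pieces. Halo exchange, flow accumulation and the actual LS formula are omitted; the grid sizes and 30 m cell size are made-up values.

      # Row-block decomposition sketch for one local algorithm (a crude
      # north-south slope). Halo exchange, flow accumulation and the LS formula
      # are omitted. Run with, e.g.: mpiexec -n 4 python ls_factor.py
      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      dem = None
      if rank == 0:    # rank 0 holds the full elevation grid
          dem = np.random.default_rng(0).random((size * 100, 200))
      local = np.empty((100, 200))
      comm.Scatter(dem, local, root=0)          # one row block per process

      cellsize = 30.0
      slope = np.abs(np.diff(local, axis=0)) / cellsize  # local partial result

      gathered = comm.gather(slope, root=0)     # rank 0 assembles the mosaic
      if rank == 0:
          print("assembled", sum(g.shape[0] for g in gathered), "slope rows")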

  6. Compact holographic optical neural network system for real-time pattern recognition

    NASA Astrophysics Data System (ADS)

    Lu, Taiwei; Mintzer, David T.; Kostrzewski, Andrew A.; Lin, Freddie S.

    1996-08-01

    One of the important characteristics of artificial neural networks is their capability for massive interconnection and parallel processing. Recently, specialized electronic neural network processors and VLSI neural chips have been introduced to the commercial market. The number of parallel channels they can handle is limited because of the limited parallel interconnections that can be implemented with 1D electronic wires. High-resolution pattern recognition problems can require a large number of neurons for parallel processing of an image. This paper describes a holographic optical neural network (HONN) that is based on high-resolution volume holographic materials and is capable of performing massive 3D parallel interconnection of tens of thousands of neurons. A HONN with more than 16,000 neurons packaged in an attaché case has been developed. Rotation-, shift-, and scale-invariant pattern recognition operations have been demonstrated with this system. System parameters such as the signal-to-noise ratio, dynamic range, and processing speed are discussed.

  7. Parallel VLSI architecture emulation and the organization of APSA/MPP

    NASA Technical Reports Server (NTRS)

    Odonnell, John T.

    1987-01-01

    The Applicative Programming System Architecture (APSA) combines an applicative language interpreter with a novel parallel computer architecture that is well suited for Very Large Scale Integration (VLSI) implementation. The Massively Parallel Processor (MPP) can simulate VLSI circuits by allocating one processing element in its square array to an area on a square VLSI chip. As long as there are not too many long data paths, the MPP can simulate a VLSI clock cycle very rapidly. The APSA circuit contains a binary tree with a few long paths and many short ones. A skewed H-tree layout allows every processing element to simulate a leaf cell and up to four tree nodes, with no loss in parallelism. Emulation of a key APSA algorithm on the MPP resulted in performance 16,000 times faster than a VAX. This speed will make it possible for the APSA language interpreter to run fast enough to support research in parallel list processing algorithms.

  8. Performance Evaluation in Network-Based Parallel Computing

    NASA Technical Reports Server (NTRS)

    Dezhgosha, Kamyar

    1996-01-01

    Network-based parallel computing is emerging as a cost-effective alternative for solving many problems that require the use of supercomputers or massively parallel computers. The primary objective of this project has been to conduct experimental research on performance evaluation for clustered parallel computing. First, a testbed was established by augmenting our existing network of Sun SPARC workstations with PVM (Parallel Virtual Machine), a software system for linking clusters of machines. Second, a set of three basic applications was selected: a parallel search, a parallel sort, and a parallel matrix multiplication. These application programs were implemented in the C programming language under PVM. Third, we conducted performance evaluation under various configurations and problem sizes. Alternative parallel computing models and workload allocations for the application programs were explored. The performance metric was limited to elapsed time or response time, which in the context of parallel computing can be expressed in terms of speedup. The results reveal that the overhead of communication latency between processes is in many cases the factor restricting performance. That is, coarse-grain parallelism, which requires less frequent communication between processes, results in higher performance in network-based computing. Finally, we are in the final stages of installing an Asynchronous Transfer Mode (ATM) switch and four ATM interfaces (each 155 Mbps), which will allow us to extend our study to newer applications, performance metrics, and configurations.
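
    Because the study's sole metric is elapsed time re-expressed as speedup, the bookkeeping is compact enough to state directly. A trivial sketch follows (the timing figures are invented for illustration):

    ```python
    # Speedup and parallel efficiency from elapsed times (illustrative values).
    def speedup(t_serial: float, t_parallel: float) -> float:
        return t_serial / t_parallel

    def efficiency(t_serial: float, t_parallel: float, n_procs: int) -> float:
        return speedup(t_serial, t_parallel) / n_procs

    # e.g., a coarse-grain matrix multiplication on 4 workstations:
    print(speedup(120.0, 38.0))        # ~3.2x
    print(efficiency(120.0, 38.0, 4))  # ~0.79
    ```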

  9. Parallel changes of taxonomic interaction networks in lacustrine bacterial communities induced by a polymetallic perturbation

    PubMed Central

    Laplante, Karine; Boutin, Sébastien; Derome, Nicolas

    2013-01-01

    Heavy metals released by anthropogenic activities such as mining trigger profound changes to bacterial communities. In this study we used 16S SSU rRNA gene high-throughput sequencing to characterize the impact of a polymetallic perturbation and other environmental parameters on taxonomic networks within five lacustrine bacterial communities from sites located near Rouyn-Noranda, Quebec, Canada. The results showed that community equilibrium was disturbed in terms of both diversity and structure. Moreover, heavy metals, especially cadmium combined with water acidity, induced parallel changes among sites via the selection of resistant OTUs (Operational Taxonomic Units) and perturbations of taxonomic dominance favoring the Alphaproteobacteria. Furthermore, under a similar selective pressure, covariation trends between phyla revealed conservation and parallelism within interphylum interactions. Our study sheds light on the importance of analyzing communities not only from a phylogenetic perspective but also with a quantitative approach, to provide significant insights into the evolutionary forces that shape the dynamics of taxonomic interaction networks in bacterial communities. PMID:23789031

  10. Parallel evolution of image processing tools for multispectral imagery

    NASA Astrophysics Data System (ADS)

    Harvey, Neal R.; Brumby, Steven P.; Perkins, Simon J.; Porter, Reid B.; Theiler, James P.; Young, Aaron C.; Szymanski, John J.; Bloch, Jeffrey J.

    2000-11-01

    We describe the implementation and performance of a parallel, hybrid evolutionary-algorithm-based system, which optimizes image processing tools for feature-finding tasks in multi-spectral imagery (MSI) data sets. Our system uses an integrated spatio-spectral approach and is capable of combining suitably-registered data from different sensors. We investigate the speed-up obtained by parallelization of the evolutionary process via multiple processors (a workstation cluster) and develop a model for prediction of run-times for different numbers of processors. We demonstrate our system on Landsat Thematic Mapper MSI data covering the recent Cerro Grande fire at Los Alamos, NM, USA.
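
    The speed-up in question comes from farming out the costliest step, fitness evaluation of the evolving population, to multiple processors. A generic Python sketch of that shape follows (toy fitness function and names invented; this is not the authors' system):

    ```python
    # Parallel fitness evaluation, the step an evolutionary algorithm
    # typically distributes across a cluster's processors.
    import numpy as np
    from multiprocessing import Pool

    def fitness(candidate: np.ndarray) -> float:
        return -float(np.sum(candidate ** 2))  # stand-in feature-finding score

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        population = [rng.normal(size=16) for _ in range(64)]
        with Pool(processes=8) as pool:        # cluster nodes in the paper; cores here
            scores = pool.map(fitness, population)
    ```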

  11. Application of integration algorithms in a parallel processing environment for the simulation of jet engines

    NASA Technical Reports Server (NTRS)

    Krosel, S. M.; Milner, E. J.

    1982-01-01

    The application of predictor-corrector integration algorithms developed for the digital parallel processing environment is investigated. The algorithms are implemented and evaluated through the use of a software simulator which provides an approximate representation of the parallel processing hardware. Test cases which focus on the use of the algorithms are presented and a specific application using a linear model of a turbofan engine is considered. Results are presented showing the effects of integration step size and the number of processors on simulation accuracy. Real time performance, interprocessor communication, and algorithm startup are also discussed.
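
    For readers unfamiliar with the family of methods studied, here is a minimal serial PECE (predict-evaluate-correct-evaluate) integrator of the Adams-Bashforth/Adams-Moulton type, applied to a linear system loosely analogous to a linear engine model. This is an illustrative sketch of the method class, not the report's algorithm:

    ```python
    # AB2 predictor + trapezoidal (AM2) corrector, serial PECE sketch.
    import numpy as np

    def pece(f, y0, t0, t1, h):
        t, y = t0, np.asarray(y0, float)
        f_prev = f(t, y)
        y = y + h * f_prev          # bootstrap one Euler step for AB2 history
        t += h
        ys = [np.asarray(y0, float), y.copy()]
        while t < t1 - 1e-12:
            f_curr = f(t, y)
            y_pred = y + h * (1.5 * f_curr - 0.5 * f_prev)     # AB2 predictor
            y = y + h * 0.5 * (f(t + h, y_pred) + f_curr)      # AM2 corrector
            f_prev = f_curr
            t += h
            ys.append(y.copy())
        return np.array(ys)

    # Linear test system y' = A y (a stand-in for a linear turbofan model).
    A = np.array([[-1.0, 0.5], [0.0, -2.0]])
    traj = pece(lambda t, y: A @ y, [1.0, 1.0], 0.0, 1.0, 0.01)
    ```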

  12. Managing internode data communications for an uninitialized process in a parallel computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archer, Charles J; Blocksome, Michael A; Miller, Douglas R

    2014-05-20

    A parallel computer includes nodes, each having main memory and a messaging unit (MU). Each MU includes computer memory, which in turn includes MU message buffers. Each MU message buffer is associated with an uninitialized process on the compute node. In the parallel computer, managing internode data communications for an uninitialized process includes: receiving, by an MU of a compute node, one or more data communications messages in an MU message buffer associated with an uninitialized process on the compute node; determining, by an application agent, that the MU message buffer associated with the uninitialized process is full prior to initialization of the uninitialized process; establishing, by the application agent, a temporary message buffer for the uninitialized process in main computer memory; and moving, by the application agent, data communications messages from the MU message buffer associated with the uninitialized process to the temporary message buffer in main computer memory.

  13. Managing internode data communications for an uninitialized process in a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Parker, Jeffrey J; Ratterman, Joseph D; Smith, Brian E

    2014-05-20

    A parallel computer includes nodes, each having main memory and a messaging unit (MU). Each MU includes computer memory, which in turn includes MU message buffers. Each MU message buffer is associated with an uninitialized process on the compute node. In the parallel computer, managing internode data communications for an uninitialized process includes: receiving, by an MU of a compute node, one or more data communications messages in an MU message buffer associated with an uninitialized process on the compute node; determining, by an application agent, that the MU message buffer associated with the uninitialized process is full prior to initialization of the uninitialized process; establishing, by the application agent, a temporary message buffer for the uninitialized process in main computer memory; and moving, by the application agent, data communications messages from the MU message buffer associated with the uninitialized process to the temporary message buffer in main computer memory.

  14. Parallel processing in the honeybee olfactory pathway: structure, function, and evolution.

    PubMed

    Rössler, Wolfgang; Brill, Martin F

    2013-11-01

    Animals face highly complex and dynamic olfactory stimuli in their natural environments, which require fast and reliable olfactory processing. Parallel processing is a common principle of sensory systems supporting this task, for example in visual and auditory systems, but its role in olfaction has remained unclear. Studies in the honeybee focused on a dual olfactory pathway. Two sets of projection neurons connect glomeruli in two antennal-lobe hemilobes via lateral and medial tracts in opposite sequence with the mushroom bodies and lateral horn. Comparative studies suggest that this dual-tract circuit represents a unique adaptation in Hymenoptera. Imaging studies indicate that glomeruli in both hemilobes receive redundant sensory input. Recent simultaneous multi-unit recordings from projection neurons of both tracts revealed widely overlapping response profiles, strongly indicating parallel olfactory processing. Whereas lateral-tract neurons respond fast with broad (generalistic) profiles, medial-tract neurons are odorant specific and respond more slowly. In analogy to the "what" and "where" subsystems in visual pathways, this suggests two parallel olfactory subsystems providing "what" (quality) and "when" (temporal) information. Temporal response properties may support across-tract coincidence coding in higher centers. Parallel olfactory processing likely enhances perception of complex odorant mixtures to decode the diverse and dynamic olfactory world of a social insect.

  15. The State of Space Propulsion Research

    NASA Technical Reports Server (NTRS)

    Sackheim, R. L.; Cole, J. W.; Litchford, R. J.

    2006-01-01

    The current state of space propulsion research is assessed from both a historical perspective, spanning the decades since Apollo, and a forward-looking perspective, as defined by the enabling technologies required for a meaningful and sustainable human and robotic exploration program over the forthcoming decades. Previous research and technology investment approaches are examined and a course of action suggested for obtaining a more balanced portfolio of basic and applied research. The central recommendation is the establishment of a robust national Space Propulsion Research Initiative that would run parallel with systems development and include basic research activities. The basic framework and technical approach for this proposed initiative are defined and a potential implementation approach is recommended.

  16. Evolutionary psychology in the modern world: applications, perspectives, and strategies.

    PubMed

    Roberts, S Craig; van Vugt, Mark; Dunbar, Robin I M

    2012-12-20

    An evolutionary approach is a powerful framework which can bring new perspectives on any aspect of human behavior, to inform and complement those from other disciplines, from psychology and anthropology to economics and politics. Here we argue that insights from evolutionary psychology may be increasingly applied to address practical issues and help alleviate social problems. We outline the promise of this endeavor, and some of the challenges it faces. In doing so, we draw parallels between an applied evolutionary psychology and recent developments in Darwinian medicine, which similarly has the potential to complement conventional approaches. Finally, we describe some promising new directions which are developed in the associated papers accompanying this article.

  17. The 2nd Symposium on the Frontiers of Massively Parallel Computations

    NASA Technical Reports Server (NTRS)

    Mills, Ronnie (Editor)

    1988-01-01

    Programming languages, computer graphics, neural networks, massively parallel computers, SIMD architecture, algorithms, digital terrain models, sort computation, simulation of charged particle transport on the massively parallel processor and image processing are among the topics discussed.

  18. Big Data GPU-Driven Parallel Processing Spatial and Spatio-Temporal Clustering Algorithms

    NASA Astrophysics Data System (ADS)

    Konstantaras, Antonios; Skounakis, Emmanouil; Kilty, James-Alexander; Frantzeskakis, Theofanis; Maravelakis, Emmanuel

    2016-04-01

    Advances in graphics processing unit technology towards encompassing parallel architectures [1], comprising thousands of cores and multiples of parallel threads, provide the hardware foundation for the rapid processing of various parallel applications regarding seismic big data analysis. Seismic data are normally stored as collections of vectors in massive matrices, growing rapidly in size as wider areas are covered, denser recording networks are established and decades of data are compiled together [2]. Yet many processes in seismic data analysis are performed on each seismic event independently or on distinct tiles [3] of specific grouped seismic events within a much larger data set. Such processes, independent of one another, can be performed in parallel, narrowing down processing times drastically [1,3]. This research work presents the development and implementation of three parallel processing algorithms using CUDA C [4] for the investigation of potentially distinct seismic regions [5,6] present in the vicinity of the southern Hellenic seismic arc. The algorithms, programmed and executed in parallel comparatively, are: fuzzy k-means clustering with expert knowledge [7] in assigning the overall number of clusters; density-based clustering [8]; and a self-developed spatio-temporal clustering algorithm encompassing expert [9] and empirical knowledge [10] for the specific area under investigation. Indexing terms: GPU parallel programming, CUDA C, heterogeneous processing, distinct seismic regions, parallel clustering algorithms, spatio-temporal clustering. References: [1] Kirk, D. and Hwu, W.: 'Programming Massively Parallel Processors - A Hands-on Approach', 2nd Edition, Morgan Kaufmann, 2013. [2] Konstantaras, A., Valianatos, F., Varley, M.R. and Makris, J.P.: 'Soft-Computing Modelling of Seismicity in the Southern Hellenic Arc', IEEE Geoscience and Remote Sensing Letters, vol. 5 (3), pp. 323-327, 2008. [3] Papadakis, S. and Diamantaras, K.: 'Programming and Architecture of Parallel Processing Systems', 1st Edition, Kleidarithmos, 2011. [4] NVIDIA: 'NVIDIA CUDA C Programming Guide', version 5.0, NVIDIA (reference book). [5] Konstantaras, A.: 'Classification of Distinct Seismic Regions and Regional Temporal Modelling of Seismicity in the Vicinity of the Hellenic Seismic Arc', IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 6 (4), pp. 1857-1863, 2013. [6] Konstantaras, A., Varley, M.R., Valianatos, F., Collins, G. and Holifield, P.: 'Recognition of Electric Earthquake Precursors Using Neuro-Fuzzy Models: Methodology and Simulation Results', Proc. IASTED International Conference on Signal Processing, Pattern Recognition and Applications (SPPRA 2002), Crete, Greece, pp. 303-308, 2002. [7] Konstantaras, A., Katsifarakis, E., Maravelakis, E., Skounakis, E., Kokkinos, E. and Karapidakis, E.: 'Intelligent Spatial-Clustering of Seismicity in the Vicinity of the Hellenic Seismic Arc', Earth Science Research, vol. 1 (2), pp. 1-10, 2012. [8] Georgoulas, G., Konstantaras, A., Katsifarakis, E., Stylios, C.D., Maravelakis, E. and Vachtsevanos, G.: '"Seismic-Mass" Density-based Algorithm for Spatio-Temporal Clustering', Expert Systems with Applications, vol. 40 (10), pp. 4183-4189, 2013. [9] Konstantaras, A.J.: 'Expert Knowledge-Based Algorithm for the Dynamic Discrimination of Interactive Natural Clusters', Earth Science Informatics, 2015 (in press). [10] Drakatos, G. and Latoussakis, J.: 'A Catalog of Aftershock Sequences in Greece (1971-1997): Their Spatial and Temporal Characteristics', Journal of Seismology, vol. 5, pp. 137-145, 2001.
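
    As a serial, CPU-bound illustration of the first algorithm's shape, the sketch below reduces fuzzy k-means to plain k-means on synthetic event coordinates (SciPy is assumed; the paper's CUDA C kernels are not reproduced here):

    ```python
    # K-means over synthetic seismic events; assumed row layout:
    # [latitude, longitude, magnitude, time].
    import numpy as np
    from scipy.cluster.vq import kmeans2

    rng = np.random.default_rng(0)
    events = rng.normal(loc=[36.0, 25.0, 4.5, 0.0],
                        scale=[1.0, 2.0, 0.5, 100.0], size=(5000, 4))

    # Cluster on the spatial coordinates only, with an expert-chosen k,
    # mirroring "expert knowledge in assigning the overall number of clusters".
    k = 4
    centroids, labels = kmeans2(events[:, :2], k, minit="points")
    ```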

  19. An automated workflow for parallel processing of large multiview SPIM recordings

    PubMed Central

    Schmied, Christopher; Steinbach, Peter; Pietzsch, Tobias; Preibisch, Stephan; Tomancak, Pavel

    2016-01-01

    Summary: Selective Plane Illumination Microscopy (SPIM) makes it possible to image developing organisms in 3D at unprecedented temporal resolution over long periods of time. The resulting massive amounts of raw image data require extensive processing, performed interactively via dedicated graphical user interface (GUI) applications. The consecutive processing steps can be easily automated, and the individual time points can be processed independently, which lends itself to trivial parallelization on a high performance computing (HPC) cluster. Here, we introduce an automated workflow for processing large multiview, multichannel, multiillumination time-lapse SPIM data on a single workstation or in parallel on an HPC cluster. The pipeline relies on snakemake to resolve dependencies among consecutive processing steps and can be easily adapted to any cluster environment for processing SPIM data in a fraction of the time required to collect it. Availability and implementation: The code is distributed free and open source under the MIT license http://opensource.org/licenses/MIT. The source code can be downloaded from github: https://github.com/mpicbg-scicomp/snakemake-workflows. Documentation can be found here: http://fiji.sc/Automated_workflow_for_parallel_Multiview_Reconstruction. Contact: schmied@mpi-cbg.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26628585

  20. An automated workflow for parallel processing of large multiview SPIM recordings.

    PubMed

    Schmied, Christopher; Steinbach, Peter; Pietzsch, Tobias; Preibisch, Stephan; Tomancak, Pavel

    2016-04-01

    Selective Plane Illumination Microscopy (SPIM) makes it possible to image developing organisms in 3D at unprecedented temporal resolution over long periods of time. The resulting massive amounts of raw image data require extensive processing, performed interactively via dedicated graphical user interface (GUI) applications. The consecutive processing steps can be easily automated, and the individual time points can be processed independently, which lends itself to trivial parallelization on a high performance computing (HPC) cluster. Here, we introduce an automated workflow for processing large multiview, multichannel, multiillumination time-lapse SPIM data on a single workstation or in parallel on an HPC cluster. The pipeline relies on snakemake to resolve dependencies among consecutive processing steps and can be easily adapted to any cluster environment for processing SPIM data in a fraction of the time required to collect it. The code is distributed free and open source under the MIT license http://opensource.org/licenses/MIT. The source code can be downloaded from github: https://github.com/mpicbg-scicomp/snakemake-workflows. Documentation can be found here: http://fiji.sc/Automated_workflow_for_parallel_Multiview_Reconstruction. Contact: schmied@mpi-cbg.de Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
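
    Because each time point is an independent job, the dependency graph stays simple. A hypothetical minimal Snakefile in that spirit, written in Snakemake's Python-based rule syntax (rule, file, and command names are invented; the real rules live in the repository above), might look like:

    ```python
    # Hypothetical Snakefile: one registration/fusion job per time point.
    TIMEPOINTS = range(100)

    rule all:
        input:
            expand("fused/tp{t}.tif", t=TIMEPOINTS)

    # Time points are independent, so the scheduler can dispatch these
    # jobs to cluster nodes in parallel.
    rule register_and_fuse:
        input:
            "raw/tp{t}.czi"
        output:
            "fused/tp{t}.tif"
        shell:
            "process_timepoint {input} {output}"   # stand-in for the Fiji call
    ```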

  1. Multiprocessor speed-up, Amdahl's Law, and the Activity Set Model of parallel program behavior

    NASA Technical Reports Server (NTRS)

    Gelenbe, Erol

    1988-01-01

    An important issue in the effective use of parallel processing is the estimation of the speed-up one may expect as a function of the number of processors used. Amdahl's Law has traditionally provided a guideline to this issue, although it appears excessively pessimistic in the light of recent experimental results. In this note, Amdahl's Law is amended by giving greater importance to the capacity of a program to make effective use of parallel processing, while also recognizing the fact that imbalance of the workload of each processor is bound to occur. An activity set model of parallel program behavior is then introduced, along with the corresponding parallelism index of a program, leading to upper and lower bounds on the speed-up.
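
    For reference, the classical statement that the note amends is the following (given here in its standard form; the amended bounds themselves are derived in the paper):

    ```latex
    % Amdahl's Law: with parallelizable fraction $p$ of the work and $N$
    % processors, the speed-up is bounded by the serial fraction $(1-p)$.
    S(N) \;=\; \frac{1}{(1-p) + p/N},
    \qquad
    \lim_{N\to\infty} S(N) \;=\; \frac{1}{1-p}.
    ```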

  2. Robust human detection, tracking, and recognition in crowded urban areas

    NASA Astrophysics Data System (ADS)

    Chen, Hai-Wen; McGurr, Mike

    2014-06-01

    In this paper, we present algorithms we recently developed to support an automated security surveillance system for very crowded urban areas. In our approach to human detection, color features are obtained by taking the differences of the R, G, B channels and converting R, G, B to HSV (Hue, Saturation, Value) space. Morphological patch filtering and regional minimum and maximum segmentation on the extracted features are applied for target detection. The human tracking approach includes: 1) track candidate selection by color and intensity feature matching; 2) three separate parallel trackers for color, bright (above mean intensity), and dim (below mean intensity) detections, respectively; 3) adaptive track gate size selection to reduce the probability of false tracking; and 4) forward position prediction based on previous speed and direction of movement, to continue tracking even when detections are missed from frame to frame. Human target recognition is improved with a Super-Resolution Image Enhancement (SRIE) process. This process can improve target resolution by 3-5 times and can simultaneously process many tracked targets. Our approach can project tracks from one camera to another camera with a different perspective viewing angle, to obtain additional biometric features from different perspective angles and to continue tracking the same person from the second camera with 'Tracking Relay' even after the person has moved out of the field of view (FOV) of the first camera. Finally, the multiple cameras at different view poses have been geo-rectified to the nadir view plane and geo-registered with Google Earth (or another GIS) to obtain accurate positions (latitude, longitude, and altitude) of the tracked humans, for pin-point targeting and for a top view of total human motion activity over a large area. Preliminary tests of our algorithms indicate that a high probability of detection can be achieved for both moving and stationary humans. Our algorithms can simultaneously track more than 100 human targets, with an average tracking period (time length) exceeding the current state of the art.
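
    The color-feature step as described reduces to channel differences plus an RGB-to-HSV conversion. A minimal sketch follows, with OpenCV as an assumed stand-in for the authors' implementation (file name hypothetical):

    ```python
    # Channel-difference and HSV color features for detection (sketch).
    import cv2
    import numpy as np

    frame = cv2.imread("crowd.png")             # BGR image (hypothetical file)
    b, g, r = cv2.split(frame.astype(np.int16))
    diff_feats = [r - g, g - b, b - r]           # pairwise spectral differences
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)                     # hue/saturation/value channels
    ```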

  3. Understanding Science: Frameworks for using stories to facilitate systems thinking

    NASA Astrophysics Data System (ADS)

    ElShafie, S. J.; Bean, J. R.

    2017-12-01

    Studies indicate that using a narrative structure for teaching and learning helps audiences to process and recall new information. Stories also help audiences retain specific information, such as character names or plot points, in the context of a broader narrative. Stories can therefore facilitate high-context systems learning in addition to low-context declarative learning. Here we incorporate a framework for science storytelling, which we use in communication workshops, with the Understanding Science framework developed by the UC Museum of Paleontology (UCMP) to explore the application of storytelling to systems thinking. We translate portions of the Understanding Science flowchart into narrative terms. Placed side by side, the two charts illustrate the parallels between the scientific process and the story development process. They offer a roadmap for developing stories about scientific studies and concepts. We also created a series of worksheets for use with the flowcharts. These new tools can generate stories from any perspective, including a scientist conducting a study; a character that plays a role in a larger system (e.g., foraminifera or a carbon atom); an entire system that interacts with other systems (e.g., the carbon cycle). We will discuss exemplar stories about climate change from each of these perspectives, which we are developing for workshops using content and storyboard models from the new UCMP website Understanding Global Change. This conceptual framework and toolkit will help instructors to develop stories about scientific concepts for use in a classroom setting. It will also help students to analyze stories presented in class, and to create their own stories about new concepts. This approach facilitates student metacognition of the learning process, and can also be used as a form of evaluation. We are testing this flowchart and its use in systems teaching with focus groups, in preparation for use in teacher professional development workshops.

  4. The individual therapy process questionnaire: development and validation of a revised measure to evaluate general change mechanisms in psychotherapy.

    PubMed

    Mander, Johannes

    2015-01-01

    There is a dearth of measures specifically designed to assess empirically validated mechanisms of therapeutic change. To fill this research gap, the aim of the current study was to develop a measure that covers a large variety of empirically validated mechanisms of change, with corresponding versions for the patient and therapist. To develop an instrument that is based on several important change process frameworks, we combined two established change mechanisms instruments: the Scale for the Multiperspective Assessment of General Change Mechanisms in Psychotherapy (SACiP) and the Scale of the Therapeutic Alliance-Revised (STA-R). In our study, 457 psychosomatic inpatients completed the SACiP, the STA-R and diverse outcome measures in early, middle and late stages of psychotherapy. Data analyses were conducted using factor analyses and multilevel modelling. The psychometric properties of the resulting Individual Therapy Process Questionnaire were generally good to excellent, as demonstrated by (a) exploratory factor analyses on both patient and therapist ratings, (b) confirmatory factor analyses (CFA) at later measurement points, (c) high internal consistencies and (d) significant outcome-predictive effects. The parallel forms of the ITPQ provide opportunities to compare the patient and therapist perspectives on a broader range of facets of change mechanisms than was hitherto possible. Consequently, the measure can be applied in future research to more specifically analyse different change mechanism profiles in session-to-session development and outcome prediction. Key Practitioner Message: This article describes the development of an instrument that measures general mechanisms of change in psychotherapy from both the patient and therapist perspectives. Post-session item ratings from both the patient and therapist can be used as feedback to optimize therapeutic processes. We provide a detailed discussion of measures developed to evaluate therapeutic change mechanisms. Copyright © 2014 John Wiley & Sons, Ltd.

  5. Emerging Methods and Systems for Observing Life in the Sea

    NASA Astrophysics Data System (ADS)

    Chavez, F.; Pearlman, J.; Simmons, S. E.

    2016-12-01

    There is a growing need for observations of life in the sea at time and space scales consistent with those made for physical and chemical parameters. International programs such as the Global Ocean Observing System (GOOS) and Marine Biodiversity Observation Networks (MBON) are making the case for expanded biological observations and working diligently to prioritize essential variables. Here we review past, present and emerging systems and methods for observing life in the sea from the perspective of maintaining continuous observations over long time periods. Methods that rely on ships with instrumentation and over-the-side sample collections will need to be supplemented and eventually replaced with those based from autonomous platforms. Ship-based optical and acoustic instruments are being reduced in size and power for deployment on moorings and autonomous vehicles. In parallel a new generation of low power, improved resolution sensors are being developed. Animal bio-logging is evolving with new, smaller and more sophisticated tags being developed. New genomic methods, capable of assessing multiple trophic levels from a single water sample, are emerging. Autonomous devices for genomic sample collection are being miniaturized and adapted to autonomous vehicles. The required processing schemes and methods for these emerging data collections are being developed in parallel with the instrumentation. An evolving challenge will be the integration of information from these disparate methods given that each provides their own unique view of life in the sea.

  6. Research on moving object detection based on frog's eyes

    NASA Astrophysics Data System (ADS)

    Fu, Hongwei; Li, Dongguang; Zhang, Xinyuan

    2008-12-01

    On the basis of the object-information processing mechanism of frog's eyes, this paper discusses a bionic detection technology suitable for object-information processing based on frog vision. First, a bionic detection theory imitating frog vision is established; it is a parallel processing mechanism that includes pick-up and pretreatment of object information, parallel separation of the digital image, parallel processing, and information synthesis. A computer vision detection system is described that detects moving objects of a particular color or shape; experiments indicate that such objects can be detected even against an interfering background. A moving-object detection electronic model imitating biological vision based on frog's eyes is then established. In this system the analog video signal is first digitized, and the digital signal is then separated in parallel by an FPGA. In the parallel processing stage, video information can be captured, processed and displayed at the same time, and information fusion is performed through the DSP HPI ports to transmit the data processed by the DSP. This system covers a wider field of view and achieves higher image resolution than ordinary monitoring systems. In summary, simulative experiments on edge detection of moving objects with the Canny algorithm on this system indicate that it can detect the edges of moving objects in real time; the feasibility of the bionic model was fully demonstrated in the engineering system, laying a solid foundation for future studies of detection technology imitating biological vision.
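
    A software analogue of the simulative experiments, frame differencing followed by Canny edge detection, can be sketched in a few lines (OpenCV assumed; the FPGA/DSP pipeline itself is not modeled, and the input file is hypothetical):

    ```python
    # Frame-difference motion mask + Canny edges of the moving object (sketch).
    import cv2

    cap = cv2.VideoCapture("scene.avi")        # hypothetical input clip
    ok, prev = cap.read()
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        moving = cv2.absdiff(gray, prev)       # crude motion mask
        edges = cv2.Canny(moving, 50, 150)     # edges of moving regions only
        prev = gray
    ```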

  7. AZTEC. Parallel Iterative method Software for Solving Linear Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hutchinson, S.; Shadid, J.; Tuminaro, R.

    1995-07-01

    AZTEC is an iterative-solver library that greatly simplifies the parallelization process when solving linear systems of equations Ax = b, where A is a user-supplied n × n sparse matrix, b is a user-supplied vector of length n, and x is a vector of length n to be computed. AZTEC is intended as a software tool for users who want to avoid cumbersome parallel programming details but who have large sparse linear systems that require an efficiently utilized parallel processing system. A collection of data transformation tools is provided that allows easy creation of distributed sparse unstructured matrices for parallel solution.
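
    The kind of problem AZTEC targets can be illustrated with a serial Krylov solve in SciPy (AZTEC itself is a C library; this is not its API, just the same mathematical task):

    ```python
    # Solve a sparse SPD system Ax = b with conjugate gradients (illustration).
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import cg

    n = 10_000
    # 1-D Poisson matrix: a classic sparse, symmetric positive definite system
    A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)
    x, info = cg(A, b)   # info == 0 signals convergence
    ```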

  8. High Performance Input/Output for Parallel Computer Systems

    NASA Technical Reports Server (NTRS)

    Ligon, W. B.

    1996-01-01

    The goal of our project is to study the I/O characteristics of parallel applications used in Earth Science data processing systems such as Regional Data Centers (RDCs) or EOSDIS. Our approach is to study the runtime behavior of typical programs and the effect of key parameters of the I/O subsystem, both under simulation and with direct experimentation on parallel systems. Our three-year activity has focused on two items: developing a test bed that facilitates experimentation with parallel I/O, and studying representative programs from the Earth science data processing application domain. The Parallel Virtual File System (PVFS) has been developed for use on a number of platforms including the Tiger Parallel Architecture Workbench (TPAW) simulator, the Intel Paragon, a cluster of DEC Alpha workstations, and the Beowulf system (at CESDIS). PVFS provides considerable flexibility in configuring I/O in a UNIX-like environment. Access to key performance parameters facilitates experimentation. We have studied several key applications from levels 1, 2 and 3 of the typical RDC processing scenario, including instrument calibration and navigation, image classification, and numerical modeling codes. We have also considered large-scale scientific database codes used to organize image data.

  9. Efficient Parallel Levenberg-Marquardt Model Fitting towards Real-Time Automated Parametric Imaging Microscopy

    PubMed Central

    Zhu, Xiang; Zhang, Dianwen

    2013-01-01

    We present a fast, accurate and robust parallel Levenberg-Marquardt minimization optimizer, GPU-LMFit, implemented on a graphics processing unit (GPU) for high-performance, scalable parallel model fitting. GPU-LMFit can provide a dramatic speed-up in massive model fitting analyses to enable real-time automated pixel-wise parametric imaging microscopy. We demonstrate the performance of GPU-LMFit for applications in super-resolution localization microscopy and fluorescence lifetime imaging microscopy. PMID:24130785
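
    The workload being parallelized is one independent curve fit per pixel. The sketch below reproduces that shape on the CPU with SciPy's Levenberg-Marquardt-based curve_fit as a familiar stand-in (GPU-LMFit runs the optimizer itself on the GPU; the lifetime model and data here are synthetic):

    ```python
    # Pixel-wise exponential-decay fitting, parallelized across processes.
    import numpy as np
    from multiprocessing import Pool
    from scipy.optimize import curve_fit

    def decay(t, a, tau):                      # single-exponential lifetime model
        return a * np.exp(-t / tau)

    t = np.linspace(0, 10, 64)

    def fit_pixel(y):
        p, _ = curve_fit(decay, t, y, p0=(y.max(), 2.0))
        return p                               # fitted (amplitude, lifetime)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        pixels = decay(t, 100, 2.5) + rng.normal(0, 1, (4096, t.size))
        with Pool() as pool:                   # one independent fit per pixel
            params = pool.map(fit_pixel, pixels)
    ```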

  10. FPGA-Based Filterbank Implementation for Parallel Digital Signal Processing

    NASA Technical Reports Server (NTRS)

    Berner, Stephan; DeLeon, Phillip

    1999-01-01

    One approach to parallel digital signal processing decomposes a high-bandwidth signal into multiple lower-bandwidth (rate) signals using an analysis bank. After processing, the subband signals are recombined into a fullband output signal by a synthesis bank. This paper describes an implementation of the analysis and synthesis banks using Field Programmable Gate Arrays (FPGAs).
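
    A toy two-channel software analogue of the analysis/synthesis structure can make the idea concrete (SciPy assumed; with this simple prototype the reconstruction is only approximate and delayed, unlike a carefully designed perfect-reconstruction bank):

    ```python
    # Two-channel analysis/synthesis filter bank sketch (not the FPGA design).
    import numpy as np
    from scipy.signal import firwin, upfirdn

    h = firwin(64, 0.5)                 # lowpass prototype, half-band cutoff
    g = h * (-1) ** np.arange(len(h))   # highpass branch via modulation

    x = np.random.randn(4096)           # fullband input signal

    # Analysis: filter each branch, then decimate by 2 (the lower rate at
    # which the parallel subband processing would run).
    low = upfirdn(h, x, up=1, down=2)
    high = upfirdn(g, x, up=1, down=2)

    # Synthesis: interpolate by 2, filter, and recombine into a fullband signal.
    y = upfirdn(2 * h, low, up=2, down=1) + upfirdn(2 * g, high, up=2, down=1)
    ```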

  11. Parallel Processing of the Target Language during Source Language Comprehension in Interpreting

    ERIC Educational Resources Information Center

    Dong, Yanping; Lin, Jiexuan

    2013-01-01

    Two experiments were conducted to test the hypothesis that the parallel processing of the target language (TL) during source language (SL) comprehension in interpreting may be influenced by two factors: (i) link strength from SL to TL, and (ii) the interpreter's cognitive resources supplement to TL processing during SL comprehension. The…

  12. A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL)

    NASA Technical Reports Server (NTRS)

    Carroll, Chester C.; Owen, Jeffrey E.

    1988-01-01

    A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL) is presented which overcomes the traditional disadvantages of simulations executed on a digital computer. The incorporation of parallel processing allows the mapping of simulations into a digital computer to be done in the same inherently parallel manner as they are currently mapped onto an analog computer. The direct-execution format maximizes the efficiency of the executed code since the need for a high level language compiler is eliminated. Resolution is greatly increased over that which is available with an analog computer, without the sacrifice in execution speed normally expected with digital computer simulations. Although this report covers all aspects of the new architecture, key emphasis is placed on the processing element configuration and the microprogramming of the ACSL constructs. The execution times for all ACSL constructs are computed using a model of a processing element based on the AMD 29000 CPU and the AMD 29027 FPU. The increase in execution speed provided by parallel processing is exemplified by comparing the derived execution times of two ACSL programs with the execution times for the same programs executed on a similar sequential architecture.

  13. A parallel implementation of an off-lattice individual-based model of multicellular populations

    NASA Astrophysics Data System (ADS)

    Harvey, Daniel G.; Fletcher, Alexander G.; Osborne, James M.; Pitt-Francis, Joe

    2015-07-01

    As computational models of multicellular populations include ever more detailed descriptions of biophysical and biochemical processes, the computational cost of simulating such models limits their ability to generate novel scientific hypotheses and testable predictions. While developments in microchip technology continue to increase the power of individual processors, parallel computing offers an immediate increase in available processing power. To make full use of parallel computing technology, it is necessary to develop specialised algorithms. To this end, we present a parallel algorithm for a class of off-lattice individual-based models of multicellular populations. The algorithm divides the spatial domain between computing processes and comprises communication routines that ensure the model is correctly simulated on multiple processors. The parallel algorithm is shown to accurately reproduce the results of a deterministic simulation performed using a pre-existing serial implementation. We test the scaling of computation time, memory use and load balancing as more processes are used to simulate a cell population of fixed size. We find approximate linear scaling of both speed-up and memory consumption on up to 32 processor cores. Dynamic load balancing is shown to provide speed-up for non-regular spatial distributions of cells in the case of a growing population.

  14. The AIS-5000 parallel processor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmitt, L.A.; Wilson, S.S.

    1988-05-01

    The AIS-5000 is a commercially available massively parallel processor which has been designed to operate in an industrial environment. It has fine-grained parallelism with up to 1024 processing elements arranged in a single-instruction multiple-data (SIMD) architecture. The processing elements are arranged in a one-dimensional chain that, for computer vision applications, can be as wide as the image itself. This architecture has cost/performance characteristics superior to those of two-dimensional mesh-connected systems. The design of the processing elements and their interconnections, as well as the software used to program the system, allow a wide variety of algorithms and applications to be implemented. In this paper, the overall architecture of the system is described. Various components of the system are discussed, including details of the processing elements, data I/O pathways and parallel memory organization. A virtual two-dimensional model for programming image-based algorithms for the system is presented. This model is supported by the AIS-5000 hardware and software and allows the system to be treated as a full-image-size, two-dimensional, mesh-connected parallel processor. Performance benchmarks are given for certain simple and complex functions.

  15. Progress in Unsteady Turbopump Flow Simulations

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin C.; Chan, William; Kwak, Dochan; Williams, Robert

    2002-01-01

    This viewgraph presentation discusses unsteady flow simulations for a turbopump intended for a reusable launch vehicle (RLV). The simulation process makes use of computational grids and parallel processing. The architecture of the parallel computers used is discussed, as is the scripting of turbopump simulations.

  16. Parallel processing optimization strategy based on MapReduce model in cloud storage environment

    NASA Astrophysics Data System (ADS)

    Cui, Jianming; Liu, Jiayi; Li, Qiuyan

    2017-05-01

    Currently, many cloud storage systems package a large number of documents only after all packets have been received. In this stored procedure, from the local transmitter to the server, packing and unpacking consume a great deal of time, and transmission efficiency is low as well. A new parallel processing algorithm is proposed to optimize the transmission mode. Following the MapReduce model, MPI technology is used to execute the Mapper and Reducer mechanisms in parallel. In simulation experiments on a Hadoop cloud computing platform, this algorithm not only accelerates the file transfer rate but also shortens the waiting time of the Reducer mechanism. It breaks through the traditional sequential transmission constraints and reduces storage coupling to improve transmission efficiency.
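
    The map/reduce shape of the proposed scheme, processing packets as they arrive rather than after the whole file lands, can be sketched minimally (the paper uses MPI-backed Mapper/Reducer processes; this Pool-based version is only illustrative, with a stand-in per-packet task):

    ```python
    # Minimal map/reduce over file packets (illustrative shape, not the paper's MPI code).
    from multiprocessing import Pool
    from functools import reduce

    def mapper(chunk: bytes) -> int:
        return len(chunk)                      # stand-in per-packet work

    def reducer(acc: int, value: int) -> int:
        return acc + value                     # combine partial results

    if __name__ == "__main__":
        packets = [b"a" * 1024] * 100          # hypothetical file packets
        with Pool() as pool:
            mapped = pool.map(mapper, packets) # packets handled in parallel,
        total = reduce(reducer, mapped, 0)     # not after the whole file arrives
    ```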

  17. 1060-nm VCSEL-based parallel-optical modules for optical interconnects

    NASA Astrophysics Data System (ADS)

    Nishimura, N.; Nagashima, K.; Kise, T.; Rizky, A. F.; Uemura, T.; Nekado, Y.; Ishikawa, Y.; Nasu, H.

    2015-03-01

    The capability of mounting a parallel-optical module onto a PCB through a solder-reflow process helps reduce the number of piece parts, simplify the assembly process, and minimize the footprint for both AOC and on-board applications. We introduce solder-reflow-capable parallel-optical modules employing 1060-nm InGaAs/GaAs VCSELs, which offer the advantages of wider modulation bandwidth, longer transmission distance, and higher reliability. We demonstrate 4-channel parallel optical link performance with each channel operated at a 28 Gb/s 2^31-1 PRBS bit stream and transmitted through a 50-μm-core MMF beyond 500 m. We also introduce a new mounting technology for the parallel-optical module that maintains good coupling and a robust electrical connection during the solder-reflow process between an optical module and a polymer-waveguide-embedded PCB.

  18. [The parallelisms in the sound signals of domestic sheep and Northern fur seals].

    PubMed

    Nikol'skiĭ, A A; Lisitsina, T Iu

    2011-01-01

    The parallelisms in the communicative behavior of domestic sheep and Northern fur seals within a herd are accompanied by parallelisms in the parameters of their sound signal, the calling scream. This signal maintains contact between the young and their mothers over long distances. The basis of these parallelisms is amplitude modulation at two levels: direct amplitude modulation of the carrier frequency, and modulation of the carrier-frequency oscillation. Parallelisms in the signal's oscillatory process result in corresponding parallelisms in the structure of its frequency spectrum.

  19. An architecture for real-time vision processing

    NASA Technical Reports Server (NTRS)

    Chien, Chiun-Hong

    1994-01-01

    To study the feasibility of developing an architecture for real-time vision processing, a task queue server and parallel algorithms for two vision operations were designed and implemented on an i860-based Mercury Computing System 860VS array processor. The proposed architecture treats each vision function as a task or set of tasks which may be recursively divided into subtasks and processed by multiple processors coordinated by a task queue server accessible to all processors. Each idle processor fetches a task and associated data from the task queue server for processing and posts the result to shared memory for later use. Load balancing can be carried out within the processing system without the requirement for a centralized controller. The author concludes that real-time vision processing cannot be achieved without both sequential and parallel vision algorithms and a good parallel vision architecture.
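
    The task-queue pattern described, idle workers pulling tasks from a shared queue and posting results, with no central controller, translates directly into a short sketch (Python multiprocessing as a stand-in; the original ran on an i860 array processor, and the "vision operation" here is a placeholder):

    ```python
    # Task-queue server with self-load-balancing workers (sketch).
    from multiprocessing import Process, Queue

    def worker(tasks: Queue, results: Queue):
        while True:
            task = tasks.get()
            if task is None:                   # poison pill -> worker exits
                break
            results.put(task * task)           # stand-in "vision operation"

    if __name__ == "__main__":
        tasks, results = Queue(), Queue()
        procs = [Process(target=worker, args=(tasks, results)) for _ in range(4)]
        for p in procs:
            p.start()
        for t in range(20):                    # no central controller: load
            tasks.put(t)                       # balances via the shared queue
        for _ in procs:
            tasks.put(None)
        out = [results.get() for _ in range(20)]
        for p in procs:
            p.join()
    ```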

  20. Cloud object store for archive storage of high performance computing data using decoupling middleware

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2015-06-30

    Cloud object storage is enabled for archived data, such as checkpoints and results, of high performance computing applications using a middleware process. A plurality of archived files, such as checkpoint files and results, generated by a plurality of processes in a parallel computing system are stored by obtaining the plurality of archived files from the parallel computing system; converting the plurality of archived files to objects using a log structured file system middleware process; and providing the objects for storage in a cloud object storage system. The plurality of processes may run, for example, on a plurality of compute nodes. The log structured file system middleware process may be embodied, for example, as a Parallel Log-Structured File System (PLFS). The log structured file system middleware process optionally executes on a burst buffer node.

  1. Performing an allreduce operation on a plurality of compute nodes of a parallel computer

    DOEpatents

    Faraj, Ahmad [Rochester, MN

    2012-04-17

    Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer. Each compute node includes at least two processing cores. Each processing core has contribution data for the allreduce operation. Performing an allreduce operation on a plurality of compute nodes of a parallel computer includes: establishing one or more logical rings among the compute nodes, each logical ring including at least one processing core from each compute node; performing, for each logical ring, a global allreduce operation using the contribution data for the processing cores included in that logical ring, yielding a global allreduce result for each processing core included in that logical ring; and performing, for each compute node, a local allreduce operation using the global allreduce results for each processing core on that compute node.
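
    What the claimed operation computes can be shown in a few lines with mpi4py. This sketches the semantics of allreduce only, not the patent's logical-ring implementation:

    ```python
    # Allreduce semantics: every rank ends up with the combined result.
    # Run with, e.g.: mpiexec -n 8 python allreduce_demo.py
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    contribution = np.full(4, comm.Get_rank(), dtype="d")  # this core's data
    result = np.empty_like(contribution)
    comm.Allreduce(contribution, result, op=MPI.SUM)
    # every rank now holds the elementwise sum over all ranks' contributions
    ```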

  2. Creating a Parallel Version of VisIt for Microsoft Windows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitlock, B J; Biagas, K S; Rawson, P L

    2011-12-07

    VisIt is a popular, free interactive parallel visualization and analysis tool for scientific data. Users can quickly generate visualizations from their data, animate them through time, manipulate them, and save the resulting images or movies for presentations. VisIt was designed from the ground up to work on many scales of computers from modest desktops up to massively parallel clusters. VisIt is comprised of a set of cooperating programs. All programs can be run locally or in client/server mode in which some run locally and some run remotely on compute clusters. The VisIt program most able to harness today's computing power is the VisIt compute engine. The compute engine is responsible for reading simulation data from disk, processing it, and sending results or images back to the VisIt viewer program. In a parallel environment, the compute engine runs several processes, coordinating using the Message Passing Interface (MPI) library. Each MPI process reads some subset of the scientific data and filters the data in various ways to create useful visualizations. By using MPI, VisIt has been able to scale well into the thousands of processors on large computers such as dawn and graph at LLNL. The advent of multicore CPU's has made parallelism the 'new' way to achieve increasing performance. With today's computers having at least 2 cores and in many cases up to 8 and beyond, it is more important than ever to deploy parallel software that can use that computing power not only on clusters but also on the desktop. We have created a parallel version of VisIt for Windows that uses Microsoft's MPI implementation (MSMPI) to process data in parallel on the Windows desktop as well as on a Windows HPC cluster running Microsoft Windows Server 2008. Initial desktop parallel support for Windows was deployed in VisIt 2.4.0. Windows HPC cluster support has been completed and will appear in the VisIt 2.5.0 release. We plan to continue supporting parallel VisIt on Windows so our users will be able to take full advantage of their multicore resources.

  3. Parallel processing spacecraft communication system

    NASA Technical Reports Server (NTRS)

    Bolotin, Gary S. (Inventor); Donaldson, James A. (Inventor); Luong, Huy H. (Inventor); Wood, Steven H. (Inventor)

    1998-01-01

    An uplink controlling assembly speeds data processing using a special parallel codeblock technique. A correct start sequence initiates processing of a frame. Two possible start sequences can be used, and the one which is used determines whether data polarity is inverted or non-inverted. Processing continues until uncorrectable errors are found. The frame ends by intentionally sending a block with an uncorrectable error. Each of the codeblocks in the frame has a channel ID. Each channel ID can be separately processed in parallel. This obviates the problem of waiting for error correction processing. If that channel number is zero, however, it indicates that the frame of data represents a critical command only. That data is handled in a special way, independent of the software. Otherwise, the processed data is further handled using special double-buffering techniques to avoid problems from overrun. When overrun does occur, the system takes action to lose only the oldest data.

  4. Implementing a Parallel Image Edge Detection Algorithm Based on the Otsu-Canny Operator on the Hadoop Platform.

    PubMed

    Cao, Jianfang; Chen, Lichao; Wang, Min; Tian, Yun

    2018-01-01

    The Canny operator is widely used to detect edges in images. However, as the size of the image dataset increases, the edge detection performance of the Canny operator decreases and its runtime becomes excessive. To improve the runtime and edge detection performance of the Canny operator, in this paper, we propose a parallel design and implementation for an Otsu-optimized Canny operator using a MapReduce parallel programming model that runs on the Hadoop platform. The Otsu algorithm is used to optimize the Canny operator's dual threshold and improve the edge detection performance, while the MapReduce parallel programming model facilitates parallel processing for the Canny operator to solve the processing speed and communication cost problems that occur when the Canny edge detection algorithm is applied to big data. For the experiments, we constructed datasets of different scales from the Pascal VOC2012 image database. The proposed parallel Otsu-Canny edge detection algorithm performs better than other traditional edge detection algorithms. The parallel approach reduced the running time by approximately 67.2% on a Hadoop cluster architecture consisting of 5 nodes with a dataset of 60,000 images. Overall, our approach speeds up the system by approximately 3.4 times when processing large-scale datasets, which demonstrates the obvious superiority of our method. The proposed algorithm in this study demonstrates both better edge detection performance and improved time performance.
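
    The per-image kernel of such a pipeline, an Otsu-derived threshold feeding Canny's dual thresholds, is easy to sketch for a single image (OpenCV assumed; the Hadoop job distribution and the paper's exact threshold mapping are not reproduced, and the low/high ratio below is an assumption):

    ```python
    # Otsu threshold driving Canny's dual thresholds (single-image sketch).
    import cv2

    img = cv2.imread("sample.png", cv2.IMREAD_GRAYSCALE)
    # cv2.threshold with THRESH_OTSU returns the Otsu-optimal threshold value.
    otsu_t, _ = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    edges = cv2.Canny(img, 0.5 * otsu_t, otsu_t)   # low/high derived from Otsu
    ```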

  5. Big Data: A Parallel Particle Swarm Optimization-Back-Propagation Neural Network Algorithm Based on MapReduce.

    PubMed

    Cao, Jianfang; Cui, Hongyan; Shi, Hao; Jiao, Lijuan

    2016-01-01

    A back-propagation (BP) neural network can solve complicated random nonlinear mapping problems; therefore, it can be applied to a wide range of problems. However, as the sample size increases, the time required to train BP neural networks becomes lengthy. Moreover, the classification accuracy decreases as well. To improve the classification accuracy and runtime efficiency of the BP neural network algorithm, we proposed a parallel design and realization method for a particle swarm optimization (PSO)-optimized BP neural network based on MapReduce on the Hadoop platform. The PSO algorithm was used to optimize the BP neural network's initial weights and thresholds and improve the accuracy of the classification algorithm. The MapReduce parallel programming model was utilized to achieve parallel processing of the BP algorithm, thereby solving the problems of hardware and communication overhead when the BP neural network addresses big data. Datasets on 5 different scales were constructed using the scene image library from the SUN Database. The classification accuracy of the parallel PSO-BP neural network algorithm is approximately 92%, and the system efficiency is approximately 0.85, which presents obvious advantages when processing big data. The algorithm proposed in this study demonstrated both higher classification accuracy and improved time efficiency, which represents a significant improvement obtained from applying parallel processing to an intelligent algorithm on big data.
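
    The PSO step itself is compact. The bare-bones sketch below minimizes a toy loss standing in for the BP training error, to show how the swarm's best position would seed the network's initial weights (the paper's MapReduce distribution of this loop is omitted):

    ```python
    # Minimal particle swarm optimization over a stand-in loss function.
    import numpy as np

    def loss(w):                               # stand-in for BP training error
        return np.sum((w - 3.0) ** 2, axis=1)

    rng = np.random.default_rng(0)
    n, dim = 30, 5
    pos = rng.uniform(-10, 10, (n, dim))
    vel = np.zeros((n, dim))
    pbest, pbest_val = pos.copy(), loss(pos)
    gbest = pbest[np.argmin(pbest_val)]

    for _ in range(200):
        r1, r2 = rng.random((2, n, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos += vel
        val = loss(pos)
        better = val < pbest_val
        pbest[better], pbest_val[better] = pos[better], val[better]
        gbest = pbest[np.argmin(pbest_val)]
    # gbest approximates the minimizer and would seed the BP initial weights
    ```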

  6. Spatial processing in the auditory cortex of the macaque monkey

    NASA Astrophysics Data System (ADS)

    Recanzone, Gregg H.

    2000-10-01

    The patterns of cortico-cortical and cortico-thalamic connections of auditory cortical areas in the rhesus monkey have led to the hypothesis that acoustic information is processed in series and in parallel in the primate auditory cortex. Recent physiological experiments in the behaving monkey indicate that the response properties of neurons in different cortical areas are both functionally distinct from each other, which is indicative of parallel processing, and functionally similar to each other, which is indicative of serial processing. Thus, auditory cortical processing may be similar to the serial and parallel "what" and "where" processing by the primate visual cortex. If "where" information is serially processed in the primate auditory cortex, neurons in cortical areas along this pathway should have progressively better spatial tuning properties. This prediction is supported by recent experiments that have shown that neurons in the caudomedial field have better spatial tuning properties than neurons in the primary auditory cortex. Neurons in the caudomedial field are also better than primary auditory cortex neurons at predicting the sound localization ability across different stimulus frequencies and bandwidths in both azimuth and elevation. These data support the hypothesis that the primate auditory cortex processes acoustic information in a serial and parallel manner and suggest that this may be a general cortical mechanism for sensory perception.

  7. Developing Subdomain Allocation Algorithms Based on Spatial and Communicational Constraints to Accelerate Dust Storm Simulation

    PubMed Central

    Gui, Zhipeng; Yu, Manzhu; Yang, Chaowei; Jiang, Yunfeng; Chen, Songqing; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Hassan, Mohammed Anowarul; Jin, Baoxuan

    2016-01-01

    Dust storms have serious, disastrous impacts on the environment, human health, and assets. The development and application of dust storm models have contributed significantly to better understanding and predicting the distribution, intensity and structure of dust storms. However, dust storm simulation is a data- and computing-intensive process. To improve computing performance, high performance computing has been widely adopted by dividing the entire study area into multiple subdomains and allocating each subdomain to different computing nodes in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, allocation is a key factor that may impact the efficiency of the parallel process. An allocation algorithm is expected to consider the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire simulation. This research introduces three algorithms to optimize the allocation by considering the spatial and communicational constraints: 1) an Integer Linear Programming (ILP) based algorithm from a combinatorial optimization perspective; 2) a K-Means and Kernighan-Lin combined heuristic algorithm (K&K) integrating geometric and coordinate-free methods by merging local and global partitioning; 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared based on different factors. Further, we adopt the K&K algorithm as the demonstrated algorithm for the experiment of dust model simulation with the non-hydrostatic mesoscale model (NMM-dust) and compare the performance with the MPI default sequential allocation. The results demonstrate that the K&K method significantly improves the simulation performance with better subdomain allocation. This method can also be adopted for other relevant atmospheric and numerical modeling. PMID:27044039
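
    The geometric flavor of the K&K idea, K-Means as a first, local-partitioning stage that keeps neighbouring grid cells on the same node, can be sketched briefly (SciPy assumed; the Kernighan-Lin refinement stage is omitted, and grid/node sizes are invented):

    ```python
    # K-Means assignment of grid cells to compute nodes (first stage only).
    import numpy as np
    from scipy.cluster.vq import kmeans2

    ny, nx, nodes = 100, 200, 8
    cells = np.array([(i, j) for i in range(ny) for j in range(nx)], dtype=float)
    _, assignment = kmeans2(cells, nodes, minit="points")
    # assignment[c] is the compute node owning grid cell c; spatially compact
    # clusters keep neighbouring cells together and so reduce halo traffic.
    ```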

  8. Brain plasticity and functionality explored by nonlinear optical microscopy

    NASA Astrophysics Data System (ADS)

    Sacconi, L.; Allegra, L.; Buffelli, M.; Cesare, P.; D'Angelo, E.; Gandolfi, D.; Grasselli, G.; Lotti, J.; Mapelli, J.; Strata, P.; Pavone, F. S.

    2010-02-01

    In combination with fluorescent protein (XFP) expression techniques, two-photon microscopy has become an indispensable tool for imaging cortical plasticity in living mice. In parallel to its application in imaging, multi-photon absorption has also been used as a tool for the dissection of single neurites with submicrometric precision, without causing any visible collateral damage to the surrounding neuronal structures. In this work, multi-photon nanosurgery is applied to dissect single climbing fibers expressing GFP in the cerebellar cortex. The morphological consequences are then characterized with time-lapse 3-dimensional two-photon imaging over a period of minutes to days after the procedure. Preliminary investigations show that the laser-induced fiber dissection triggers a regenerative process in the fiber itself over a period of days. These results demonstrate the potential of this innovative technique for investigating regenerative processes in the adult brain. In parallel with imaging and manipulation techniques, non-linear microscopy offers the opportunity to optically record electrical activity in intact neuronal networks. In this work, we combined the advantages of second-harmonic generation (SHG) with a random access (RA) excitation scheme to realize a new microscope (RASH) capable of optically recording fast membrane potential events occurring in a wide field of view. The RASH microscope, in combination with bulk loading of tissue with FM4-64 dye, was used to simultaneously record electrical activity from clusters of Purkinje cells in acute cerebellar slices. Complex spikes, both synchronous and asynchronous, were optically recorded simultaneously across a given population of neurons. Spontaneous electrical activity was also monitored simultaneously in pairs of neurons, where action potentials were recorded without averaging across trials. These results show the strength of this technique in describing the temporal dynamics of neuronal assemblies, opening promising perspectives in understanding the computations of neuronal networks.

  9. Obsessive-compulsive tendencies are associated with a focused information processing strategy.

    PubMed

    Soref, Assaf; Dar, Reuven; Argov, Galit; Meiran, Nachshon

    2008-12-01

    The study examined the hypothesis that obsessive-compulsive (OC) tendencies are related to a reliance on a focused, serial information processing style rather than a parallel, speed-oriented one. Ten students with high OC tendencies and 10 students with low OC tendencies performed the flanker task, in which they were required to quickly classify a briefly presented target letter (S or H) that was flanked by compatible (e.g., SSSSS) or incompatible (e.g., HHSHH) noise letters. Participants received 4 blocks of 100 trials each, two with 50% compatible trials and two with 80% compatible trials, and were informed of the probability of compatible trials before the beginning of each block. As predicted, high OC participants, as compared to low OC participants, had slower overall reaction time (RT) and a lower tendency for parallel processing (defined as incompatible-trial RT minus compatible-trial RT). Low OC participants, more than high OC participants, tended to adjust their focused/parallel processing, shifting towards parallel processing in blocks with 80% compatible trials and in trials following compatible trials. Implications of these results for the cognitive theory and therapy of OCD are discussed.

  10. Next Generation Parallelization Systems for Processing and Control of PDS Image Node Assets

    NASA Astrophysics Data System (ADS)

    Verma, R.

    2017-06-01

    We present next-generation parallelization tools to help the Planetary Data System (PDS) Imaging Node (IMG) better monitor, process, and control changes to nearly 650 million file assets and the more than a dozen machines on which they are referenced or stored.

  11. Distributed parallel computing in stochastic modeling of groundwater systems.

    PubMed

    Dong, Yanhui; Li, Guomin; Xu, Haizhen

    2013-03-01

    Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution time of 500 realizations is reduced to 3% of that of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling. © 2012, The Author(s). Groundwater © 2012, National Ground Water Association.
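
    The paper's system couples MODFLOW with the Java Parallel Processing Framework; as a language-neutral illustration of the same batch pattern (independent realizations farmed out to a pool of workers, each with its own random stream), a minimal Python sketch might look like the following. The "model" here is a toy placeholder, not MODFLOW.

```python
# Minimal sketch of the batch pattern: independent stochastic realizations
# farmed out to worker processes. run_realization is a toy stand-in for a
# MODFLOW model run; the paper does this with the Java Parallel Processing
# Framework on a cluster.
from multiprocessing import Pool
import random

def run_realization(seed):
    """One stochastic realization with its own independent random stream."""
    rng = random.Random(seed)
    # Toy 'model': summarize a random hydraulic-conductivity field.
    field = [rng.lognormvariate(0.0, 1.0) for _ in range(10_000)]
    return sum(field) / len(field)

if __name__ == "__main__":
    seeds = range(500)  # 500 realizations, as in the capture-zone study
    with Pool(processes=10) as pool:
        results = pool.map(run_realization, seeds)
    print(f"mean over realizations: {sum(results) / len(results):.3f}")
```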

  12. Hierarchical Parallelization of Gene Differential Association Analysis

    PubMed Central

    2011-01-01

    Background: Microarray gene differential expression analysis is a widely used technique that deals with high dimensional data and is computationally intensive for permutation-based procedures. Microarray gene differential association analysis is even more computationally demanding and must take advantage of multicore computing technology, which is the driving force behind increasing compute power in recent years. In this paper, we present a two-layer hierarchical parallel implementation of gene differential association analysis. It takes advantage of both fine- and coarse-grain (with granularity defined by the frequency of communication) parallelism in order to effectively leverage the non-uniform nature of parallel processing available in the cutting-edge systems of today. Results: Our results show that this hierarchical strategy matches data sharing behavior to the properties of the underlying hardware, thereby reducing the memory and bandwidth needs of the application. The resulting improved efficiency reduces computation time and allows the gene differential association analysis code to scale its execution with the number of processors. The code and biological data used in this study are downloadable from http://www.urmc.rochester.edu/biostat/people/faculty/hu.cfm. Conclusions: The performance sweet spot occurs when using a number of threads per MPI process that allows the working sets of the corresponding MPI processes running on the multicore to fit within the machine cache. Hence, we suggest that practitioners follow this principle in selecting the appropriate number of MPI processes and threads within each MPI process for their cluster configurations. We believe that the principles of this hierarchical approach to parallelization can be utilized in the parallelization of other computationally demanding kernels. PMID:21936916

  13. Hierarchical parallelization of gene differential association analysis.

    PubMed

    Needham, Mark; Hu, Rui; Dwarkadas, Sandhya; Qiu, Xing

    2011-09-21

    Microarray gene differential expression analysis is a widely used technique that deals with high dimensional data and is computationally intensive for permutation-based procedures. Microarray gene differential association analysis is even more computationally demanding and must take advantage of multicore computing technology, which is the driving force behind increasing compute power in recent years. In this paper, we present a two-layer hierarchical parallel implementation of gene differential association analysis. It takes advantage of both fine- and coarse-grain (with granularity defined by the frequency of communication) parallelism in order to effectively leverage the non-uniform nature of parallel processing available in the cutting-edge systems of today. Our results show that this hierarchical strategy matches data sharing behavior to the properties of the underlying hardware, thereby reducing the memory and bandwidth needs of the application. The resulting improved efficiency reduces computation time and allows the gene differential association analysis code to scale its execution with the number of processors. The code and biological data used in this study are downloadable from http://www.urmc.rochester.edu/biostat/people/faculty/hu.cfm. The performance sweet spot occurs when using a number of threads per MPI process that allows the working sets of the corresponding MPI processes running on the multicore to fit within the machine cache. Hence, we suggest that practitioners follow this principle in selecting the appropriate number of MPI processes and threads within each MPI process for their cluster configurations. We believe that the principles of this hierarchical approach to parallelization can be utilized in the parallelization of other computationally demanding kernels.
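
    Neither record reproduces the implementation beyond the download link, but the two-layer idea (coarse-grain processes, each driving a small, cache-sized pool of threads) can be illustrated schematically. In the Python sketch below, worker processes stand in for the paper's MPI processes, and the per-gene statistic is a placeholder.

```python
# Structural sketch of two-layer hierarchical parallelism: coarse-grain
# worker processes (MPI ranks in the paper), each running a small thread
# pool sized so its working set stays cache-resident. The per-gene
# statistic below is a placeholder, not the paper's permutation procedure.
from concurrent.futures import ThreadPoolExecutor
from multiprocessing import Pool
import numpy as np

THREADS_PER_PROCESS = 4  # tune so the per-process working set fits in cache

def gene_statistic(gene_row):
    # Placeholder for a differential-association statistic on one gene.
    return float(np.corrcoef(gene_row, gene_row[::-1])[0, 1])

def process_chunk(chunk):
    """Coarse-grain unit: one process handles a block of genes with threads."""
    with ThreadPoolExecutor(max_workers=THREADS_PER_PROCESS) as tp:
        return list(tp.map(gene_statistic, chunk))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    genes = rng.normal(size=(1000, 64))   # 1000 genes, 64 samples
    chunks = np.array_split(genes, 8)     # one chunk per worker process
    with Pool(processes=8) as pool:
        stats = [s for part in pool.map(process_chunk, chunks) for s in part]
    print(len(stats), "statistics computed")
```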

  14. Parallelization and implementation of approximate root isolation for nonlinear system by Monte Carlo

    NASA Astrophysics Data System (ADS)

    Khosravi, Ebrahim

    1998-12-01

    This dissertation addresses a fundamental problem, the isolation of the real roots of nonlinear systems of equations by Monte Carlo, building on an algorithm published by Bush Jones. This algorithm requires only function values and can be applied readily to complicated systems of transcendental functions. The implementation of this sequential algorithm provides scientists with the means to utilize function analysis in mathematics or other fields of science. The algorithm, however, is so computationally intensive that it is limited to a very small number of variables, making it infeasible for large systems of equations. A computational technique was also needed to prevent the algorithm from converging to the same root along different paths of computation. The research provides techniques for improving the efficiency and correctness of the algorithm. The sequential algorithm for this technique was corrected, and a parallel algorithm is presented. This parallel method has been formally analyzed and is compared with other known methods of root isolation. The effectiveness, efficiency, and enhanced overall performance of the parallel program in comparison to sequential processing are discussed. The message passing model was used for this parallel processing, and it is presented and implemented on the Intel i860 MIMD architecture. The parallel processing proposed in this research has been applied in an ongoing high-energy physics experiment: the algorithm has been used to track neutrinos in the Super-Kamiokande detector. This experiment is located in Japan, and data can be processed on-line or off-line, locally or remotely.
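
    The dissertation's algorithm is not reproduced here; the sketch below only illustrates the general flavor of Monte Carlo root isolation (sample candidate points, keep the lowest-residual ones, and shrink the search box around them), with an example system and all parameters chosen purely for illustration.

```python
# Illustrative sketch only (not Bush Jones's published algorithm): Monte
# Carlo root isolation by sampling candidate points in a box, keeping the
# lowest-residual samples, and shrinking the box around them. Restricting
# the initial box isolates one root at a time.
import numpy as np

def F(x):
    """Example system: a circle intersected with a parabola."""
    return np.array([x[0]**2 + x[1]**2 - 4.0, x[1] - x[0]**2])

def mc_isolate(F, lo, hi, n_samples=20000, n_rounds=8, keep=50, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    for _ in range(n_rounds):
        pts = rng.uniform(lo, hi, size=(n_samples, len(lo)))
        res = np.array([np.linalg.norm(F(p)) for p in pts])
        best = pts[np.argsort(res)[:keep]]       # lowest-residual samples
        span = best.max(axis=0) - best.min(axis=0)
        lo = best.min(axis=0) - 0.1 * span       # shrink the box around them
        hi = best.max(axis=0) + 0.1 * span
    return best

# The box x in [0, 3] contains exactly one real root of this system.
roots = mc_isolate(F, lo=[0.0, -3.0], hi=[3.0, 3.0])
print(roots.mean(axis=0))  # approx. [1.2496, 1.5616]
```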

  15. Modeling Cooperative Threads to Project GPU Performance for Adaptive Parallelism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meng, Jiayuan; Uram, Thomas; Morozov, Vitali A.

    Most accelerators, such as graphics processing units (GPUs) and vector processors, are particularly suitable for accelerating massively parallel workloads. On the other hand, conventional workloads are developed for multi-core parallelism, which often scale to only a few dozen OpenMP threads. When hardware threads significantly outnumber the degree of parallelism in the outer loop, programmers are challenged with efficient hardware utilization. A common solution is to further exploit the parallelism hidden deep in the code structure. Such parallelism is less structured: parallel and sequential loops may be imperfectly nested within each other, neighboring inner loops may exhibit different concurrency patterns (e.g., Reduction vs. Forall), yet have to be parallelized in the same parallel section. Many input-dependent transformations have to be explored. A programmer often employs a larger group of hardware threads to cooperatively walk through a smaller outer loop partition and adaptively exploit any encountered parallelism. This process is time-consuming and error-prone, yet the risk of gaining little or no performance remains high for such workloads. To reduce risk and guide implementation, we propose a technique to model workloads with limited parallelism that can automatically explore and evaluate transformations involving cooperative threads. Eventually, our framework projects the best achievable performance and the most promising transformations without implementing GPU code or using physical hardware. We envision our technique to be integrated into future compilers or optimization frameworks for autotuning.

  16. Adaptive parallel logic networks

    NASA Technical Reports Server (NTRS)

    Martinez, Tony R.; Vidal, Jacques J.

    1988-01-01

    Adaptive, self-organizing concurrent systems (ASOCS) that combine self-organization with massive parallelism for such applications as adaptive logic devices, robotics, process control, and system malfunction management, are presently discussed. In ASOCS, an adaptive network composed of many simple computing elements operating in combinational and asynchronous fashion is used and problems are specified by presenting if-then rules to the system in the form of Boolean conjunctions. During data processing, which is a different operational phase from adaptation, the network acts as a parallel hardware circuit.

  17. Process optimization using combinatorial design principles: parallel synthesis and design of experiment methods.

    PubMed

    Gooding, Owen W

    2004-06-01

    The use of parallel synthesis techniques with statistical design of experiment (DoE) methods is a powerful combination for the optimization of chemical processes. Advances in parallel synthesis equipment and easy-to-use software for statistical DoE have fueled a growing acceptance of these techniques in the pharmaceutical industry. As drug candidate structures become more complex at the same time that development timelines are compressed, these enabling technologies promise to become more important in the future.

  18. Options for Parallelizing a Planning and Scheduling Algorithm

    NASA Technical Reports Server (NTRS)

    Clement, Bradley J.; Estlin, Tara A.; Bornstein, Benjamin D.

    2011-01-01

    Space missions have a growing interest in putting multi-core processors onboard spacecraft. For many missions, limited processing power significantly slows operations. We investigate how continual planning and scheduling algorithms can exploit multi-core processing and outline different potential design decisions for a parallelized planning architecture. This organization of choices and challenges helps us with an initial design for parallelizing the CASPER planning system for a mesh multi-core processor. This work extends that presented at another workshop with some preliminary results.

  19. Illuminating the dark matter of social neuroscience: Considering the problem of social interaction from philosophical, psychological, and neuroscientific perspectives

    PubMed Central

    Przyrembel, Marisa; Smallwood, Jonathan; Pauen, Michael; Singer, Tania

    2012-01-01

    Successful human social interaction depends on our capacity to understand other people's mental states and to anticipate how they will react to our actions. Despite its importance to the human condition, the exact mechanisms underlying our ability to understand another's actions, feelings, and thoughts are still a matter of conjecture. Here, we consider this problem from philosophical, psychological, and neuroscientific perspectives. In a critical review, we demonstrate that attempts to draw parallels across these complementary disciplines are premature: The second-person perspective does not map directly to Interaction or Simulation theories, online social cognition, or shared neural network accounts underlying action observation or empathy. Nor does the third-person perspective map onto Theory-Theory (TT), offline social cognition, or the neural networks that support Theory of Mind (ToM). Moreover, we argue that important qualities of social interaction emerge through the reciprocal interplay of two independent agents whose unpredictable behavior requires that models of their partner's internal state be continually updated. This analysis draws attention to the need for paradigms in social neuroscience that allow two individuals to interact in a spontaneous and natural manner and to adapt their behavior and cognitions in a response-contingent fashion due to the inherent unpredictability of another person's behavior. Even if such paradigms were implemented, it is possible that the specific neural correlates supporting such reciprocal interaction would not reflect computation unique to social interaction but rather the use of basic cognitive and emotional processes combined in a unique manner. Finally, we argue that given the crucial role of social interaction in human evolution, ontogeny, and everyday social life, a more theoretically and methodologically nuanced approach to the study of real social interaction will nevertheless help the field of social cognition to evolve. PMID:22737120

  20. Re-forming supercritical quasi-parallel shocks. I - One- and two-dimensional simulations

    NASA Technical Reports Server (NTRS)

    Thomas, V. A.; Winske, D.; Omidi, N.

    1990-01-01

    The process of reforming supercritical quasi-parallel shocks is investigated using one-dimensional and two-dimensional hybrid (particle ion, massless fluid electron) simulations, both of shocks and of simpler two-stream interactions. It is found that the supercritical quasi-parallel shock is not steady. Instead of a well-defined shock ramp between upstream and downstream states that remains at a fixed position in the flow, the ramp periodically steepens, broadens, and then reforms upstream of its former position. It is concluded that the wave generation process is localized at the shock ramp and that the reformation process proceeds in the absence of upstream perturbations intersecting the shock.

  1. Overview of the DART project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berry, K.R.; Hansen, F.R.; Napolitano, L.M.

    1992-01-01

    DART (DSP Array for Reconfigurable Tasks) is a parallel architecture of two high-performance DSP (digital signal processing) chips with the flexibility to handle a wide range of real-time applications. Each of the 32-bit floating-point DSP processors in DART is programmable in a high-level language (C or Ada). We have added extensions to the real-time operating system used by DART in order to support parallel processing. The combination of high-level language programmability, a real-time operating system, and parallel processing support significantly reduces the development cost of application software for signal processing and control applications. We have demonstrated this capability by using DART to reconstruct images in the prototype VIP (Video Imaging Projectile) groundstation.

  2. Overview of the DART project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berry, K.R.; Hansen, F.R.; Napolitano, L.M.

    1992-01-01

    DART (DSP Array for Reconfigurable Tasks) is a parallel architecture of two high-performance DSP (digital signal processing) chips with the flexibility to handle a wide range of real-time applications. Each of the 32-bit floating-point DSP processors in DART is programmable in a high-level language (C or Ada). We have added extensions to the real-time operating system used by DART in order to support parallel processing. The combination of high-level language programmability, a real-time operating system, and parallel processing support significantly reduces the development cost of application software for signal processing and control applications. We have demonstrated this capability by using DART to reconstruct images in the prototype VIP (Video Imaging Projectile) groundstation.

  3. A Debugger for Computational Grid Applications

    NASA Technical Reports Server (NTRS)

    Hood, Robert; Jost, Gabriele; Biegel, Bryan (Technical Monitor)

    2001-01-01

    This viewgraph presentation gives an overview of a debugger for computational grid applications. Details are given on NAS parallel tools groups (including parallelization support tools, evaluation of various parallelization strategies, and distributed and aggregated computing), debugger dependencies, scalability, initial implementation, the process grid, and information on Globus.

  4. Psychodrama: A Creative Approach for Addressing Parallel Process in Group Supervision

    ERIC Educational Resources Information Center

    Hinkle, Michelle Gimenez

    2008-01-01

    This article provides a model for using psychodrama to address issues of parallel process during group supervision. Information on how to utilize the specific concepts and techniques of psychodrama in relation to group supervision is discussed. A case vignette of the model is provided.

  5. Telemetry downlink interfaces and level-zero processing

    NASA Technical Reports Server (NTRS)

    Horan, S.; Pfeiffer, J.; Taylor, J.

    1991-01-01

    The technical areas being investigated are as follows: (1) processing of space to ground data frames; (2) parallel architecture performance studies; and (3) parallel programming techniques. Additionally, the University administrative details and the technical liaison between New Mexico State University and Goddard Space Flight Center are addressed.

  6. Language Classification using N-grams Accelerated by FPGA-based Bloom Filters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacob, A; Gokhale, M

    N-gram (n-character sequences in text documents) counting is a well-established technique used in classifying the language of text in a document. In this paper, n-gram processing is accelerated through the use of reconfigurable hardware on the XtremeData XD1000 system. Our design employs parallelism at multiple levels, with parallel Bloom filters accessing on-chip RAM, parallel language classifiers, and parallel document processing. In contrast to another hardware implementation (the HAIL algorithm) that uses off-chip SRAM for lookup, our highly scalable implementation uses only on-chip memory blocks. Our implementation of end-to-end language classification runs 85x faster than comparable software and 1.45x faster than the competing hardware design.
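
    Setting the FPGA specifics aside, the core software idea (one Bloom filter per language holding that language's n-grams, with documents classified by hit count) can be sketched in a few lines. The hash scheme, filter sizes, and scoring below are simplifying assumptions, not the HAIL or XD1000 design.

```python
# Simplified software analogue of Bloom-filter n-gram classification: one
# filter per language stores that language's n-grams; a document is scored
# by how many of its n-grams hit each filter. The hashing and scoring here
# are illustrative choices, not the paper's FPGA (or HAIL) design.
import hashlib

class BloomFilter:
    def __init__(self, n_bits=1 << 16, n_hashes=3):
        self.n_bits, self.n_hashes, self.bits = n_bits, n_hashes, 0

    def _positions(self, item):
        for i in range(self.n_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.n_bits

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item):
        return all(self.bits >> p & 1 for p in self._positions(item))

def ngrams(text, n=3):
    return [text[i:i + n] for i in range(len(text) - n + 1)]

# 'Train' one filter per language, then classify a document by hit count.
samples = {"english": "the quick brown fox jumps over the lazy dog",
           "german": "der schnelle braune fuchs springt ueber den faulen hund"}
filters = {}
for lang, text in samples.items():
    bf = BloomFilter()
    for g in ngrams(text):
        bf.add(g)
    filters[lang] = bf

doc = "the dog jumps over the fox"
scores = {lang: sum(g in bf for g in ngrams(doc)) for lang, bf in filters.items()}
print(max(scores, key=scores.get), scores)
```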

  7. Parallel processing implementation for the coupled transport of photons and electrons using OpenMP

    NASA Astrophysics Data System (ADS)

    Doerner, Edgardo

    2016-05-01

    In this work the use of OpenMP to implement the parallel processing of the Monte Carlo (MC) simulation of the coupled transport of photons and electrons is presented. This implementation was carried out using a modified EGSnrc platform which enables the use of the Microsoft Visual Studio 2013 (VS2013) environment, together with the developing tools available in the Intel Parallel Studio XE 2015 (XE2015). The performance study of this new implementation was carried out in a desktop PC with a multi-core CPU, taking as a reference the performance of the original platform. The results were satisfactory, both in terms of scalability and parallelization efficiency.
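
    The EGSnrc/OpenMP code itself is not shown here; a minimal analogue of the parallelization strategy (splitting particle histories across workers, each with an independent random stream) is sketched below, with a toy 1D attenuation problem standing in for the coupled photon-electron physics.

```python
# Toy analogue of the parallelization strategy: particle histories are
# split across worker processes, each with an independent random stream.
# A 1D exponential-attenuation problem stands in for the EGSnrc physics.
from multiprocessing import Pool
import numpy as np

MU = 0.2     # attenuation coefficient (1/cm), toy value
SLAB = 5.0   # slab thickness (cm)

def run_histories(args):
    worker_id, n_histories = args
    rng = np.random.default_rng([worker_id, 12345])  # independent stream
    depths = rng.exponential(1.0 / MU, size=n_histories)
    return int((depths > SLAB).sum())                # photons that escape

if __name__ == "__main__":
    n_workers, per_worker = 8, 250_000
    tasks = [(w, per_worker) for w in range(n_workers)]
    with Pool(n_workers) as pool:
        escaped = sum(pool.map(run_histories, tasks))
    total = n_workers * per_worker
    print(f"transmission: {escaped / total:.4f} (analytic {np.exp(-MU * SLAB):.4f})")
```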

  8. Parallel Processing Strategies of the Primate Visual System

    PubMed Central

    Nassi, Jonathan J.; Callaway, Edward M.

    2009-01-01

    Incoming sensory information is sent to the brain along modality-specific channels corresponding to the five senses. Each of these channels further parses the incoming signals into parallel streams to provide a compact, efficient input to the brain. Ultimately, these parallel input signals must be elaborated upon and integrated within the cortex to provide a unified and coherent percept. Recent studies in the primate visual cortex have greatly contributed to our understanding of how this goal is accomplished. Multiple strategies including retinal tiling, hierarchical and parallel processing and modularity, defined spatially and by cell type-specific connectivity, are all used by the visual system to recover the rich detail of our visual surroundings. PMID:19352403

  9. Design of high-performance parallelized gene predictors in MATLAB.

    PubMed

    Rivard, Sylvain Robert; Mailloux, Jean-Gabriel; Beguenane, Rachid; Bui, Hung Tien

    2012-04-10

    This paper proposes a method of implementing parallel gene prediction algorithms in MATLAB. The proposed designs are based on either Goertzel's algorithm or on FFTs and have been implemented using varying amounts of parallelism on a central processing unit (CPU) and on a graphics processing unit (GPU). Results show that an implementation using a straightforward approach can require over 4.5 h to process 15 million base pairs (bp) whereas a properly designed one could perform the same task in less than five minutes. In the best case, a GPU implementation can yield these results in 57 s. The present work shows how parallelism can be used in MATLAB for gene prediction in very large DNA sequences to produce results that are over 270 times faster than a conventional approach. This is significant as MATLAB is typically overlooked due to its apparently slow processing time even though it offers a convenient environment for bioinformatics. From a practical standpoint, this work proposes two strategies for accelerating genome data processing which rely on different parallelization mechanisms. Using a CPU, the work shows that direct access to the MEX function increases execution speed and that the PARFOR construct should be used in order to take full advantage of the parallelizable Goertzel implementation. When the target is a GPU, the work shows that data needs to be segmented into manageable sizes within the GFOR construct before processing in order to minimize execution time.
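
    The designs in the paper are MATLAB implementations; as a plain illustration of the Goertzel building block (computing the DFT power at the period-3 frequency commonly used to flag protein-coding regions), a sketch might look like the following, with windowing and thresholds omitted.

```python
# Plain-Python sketch of Goertzel-based period-3 detection for gene
# prediction: each nucleotide is mapped to a binary indicator sequence and
# the DFT power at frequency bin N/3 is computed with Goertzel's recurrence.
# Windowing and thresholds are simplified relative to the paper's designs.
import math

def goertzel_power(x, k):
    """Power of the length-N DFT of x at integer frequency bin k."""
    n = len(x)
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for sample in x:
        s = sample + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev**2 + s_prev2**2 - coeff * s_prev * s_prev2

def period3_score(dna):
    """Sum of indicator-sequence powers at the period-3 bin (len divisible by 3)."""
    n = len(dna)
    return sum(goertzel_power([1.0 if b == base else 0.0 for b in dna], n // 3)
               for base in "ACGT")

coding_like = "ATG" * 10                          # strong period-3 repeat
random_like = "ATCGGATTACAGTCCGATAACGGTTAGCAT"    # same length, no period-3
print(period3_score(coding_like), period3_score(random_like))
```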

  10. Observing with HST V: Improvements to the Scheduling of HST Parallel Observations

    NASA Astrophysics Data System (ADS)

    Taylor, D. K.; Vanorsow, D.; Lucks, M.; Henry, R.; Ratnatunga, K.; Patterson, A.

    1994-12-01

    Recent improvements to the Hubble Space Telescope (HST) ground system have significantly increased the frequency of pure parallel observations, i.e. the simultaneous use of multiple HST instruments by different observers. Opportunities for parallel observations are limited by a variety of timing, hardware, and scientific constraints. Formerly, such opportunities were heuristically predicted prior to the construction of the primary schedule (or calendar), and lack of complete information resulted in high rates of scheduling failures and missed opportunities. In the current process the search for parallel opportunities is delayed until the primary schedule is complete, at which point new software tools are employed to identify places where parallel observations are supported. The result has been a considerable increase in parallel throughput. A new technique, known as "parallel crafting," is currently under development to streamline the parallel scheduling process further. This radically new method will replace the standard exposure logsheet with a set of abstract rules from which observation parameters will be constructed "on the fly" to best match the constraints of the parallel opportunity. Currently, parallel observers must specify a huge (and highly redundant) set of exposure types in order to cover all possible types of parallel opportunities. Crafting rules permit the observer to express timing, filter, and splitting preferences in a far more succinct manner. The issue of coordinated parallel observations (same PI using different instruments simultaneously), long a troublesome aspect of the ground system, is also being addressed. For Cycle 5, the Phase II Proposal Instructions now have an exposure-level PAR WITH special requirement. While only the primary's alignment will be scheduled on the calendar, new commanding will provide for parallel exposures with both instruments.

  11. Full Stokes finite-element modeling of ice sheets using a graphics processing unit

    NASA Astrophysics Data System (ADS)

    Seddik, H.; Greve, R.

    2016-12-01

    Thermo-mechanical simulation of ice sheets is an important approach to understand and predict their evolution in a changing climate. For that purpose, higher-order (e.g., ISSM, BISICLES) and full Stokes (e.g., Elmer/Ice, http://elmerice.elmerfem.org) models are increasingly used to model the flow of entire ice sheets more accurately. In parallel to this development, the rapidly improving performance and capabilities of Graphics Processing Units (GPUs) make it possible to efficiently offload more calculations of complex and computationally demanding problems onto those devices. Thus, in order to continue the trend of using full Stokes models with greater resolutions, using GPUs should be considered for the implementation of ice sheet models. We developed the GPU-accelerated ice-sheet model Sainō. Sainō is an Elmer (http://www.csc.fi/english/pages/elmer) derivative implemented in Objective-C which solves the full Stokes equations with the finite element method. It uses the standard OpenCL language (http://www.khronos.org/opencl/) to offload the assembly of the finite element matrix to the GPU. A mesh-coloring scheme is used so that elements with the same color (sharing no nodes) are assembled in parallel on the GPU without the need for synchronization primitives. The current implementation shows that, for the ISMIP-HOM experiment A, during the matrix assembly in double precision with 8000, 87,500 and 252,000 brick elements, Sainō is respectively 2x, 10x and 14x faster than Elmer/Ice (when both models are run on a single processing unit). In single precision, Sainō is even 3x, 20x and 25x faster than Elmer/Ice. A detailed description of the comparative results between Sainō and Elmer/Ice will be presented, together with further perspectives on optimization and the limitations of the current implementation.
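
    Sainō's OpenCL kernels are not reproduced here, but the mesh-coloring idea (elements sharing a node receive different colors, so all elements of one color can be assembled concurrently without synchronization) can be illustrated with a greedy coloring over a toy mesh; the data structures below are hypothetical.

```python
# Sketch of the mesh-coloring scheme: elements sharing a node must receive
# different colors, so all elements of one color can be assembled in
# parallel without atomics or locks. Greedy coloring over the node-sharing
# conflict graph of a toy mesh.
from collections import defaultdict

def color_elements(elements):
    """elements: list of node-id tuples. Returns one color per element."""
    # Two elements conflict if they share any node.
    node_to_elems = defaultdict(set)
    for e, nodes in enumerate(elements):
        for n in nodes:
            node_to_elems[n].add(e)
    colors = [None] * len(elements)
    for e, nodes in enumerate(elements):
        taken = {colors[other]
                 for n in nodes for other in node_to_elems[n]
                 if other != e and colors[other] is not None}
        colors[e] = next(c for c in range(len(elements)) if c not in taken)
    return colors

# Toy 1D chain of elements, each sharing one node with its neighbor.
mesh = [(0, 1), (1, 2), (2, 3), (3, 4)]
print(color_elements(mesh))  # [0, 1, 0, 1]: same-color elements share no node
```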

  12. Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing to support a high-level application (e.g., object recognition). An IVS normally involves algorithms from low-level, intermediate-level, and high-level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues are addressed in parallel architectures and parallel algorithms for integrated vision systems.

  13. Parallel design patterns for a low-power, software-defined compressed video encoder

    NASA Astrophysics Data System (ADS)

    Bruns, Michael W.; Hunt, Martin A.; Prasad, Durga; Gunupudi, Nageswara R.; Sonachalam, Sekar

    2011-06-01

    Video compression algorithms such as H.264 offer much potential for parallel processing that is not always exploited by the technology of a particular implementation. Consumer mobile encoding devices often achieve real-time performance and low power consumption through parallel processing in Application Specific Integrated Circuit (ASIC) technology, but many other applications require a software-defined encoder. High-quality compression features needed for some applications, such as 10-bit sample depth or 4:2:2 chroma format, often go beyond the capability of a typical consumer electronics device. An application may also need to efficiently combine compression with other functions such as noise reduction, image stabilization, real-time clocks, GPS data, mission/ESD/user data or software-defined radio in a low-power, field-upgradable implementation. Low-power, software-defined encoders may be implemented using a massively parallel memory-network processor array with 100 or more cores and distributed memory. The large number of processor elements allows the silicon device to operate more efficiently than conventional DSP or CPU technology. A dataflow programming methodology may be used to express all of the encoding processes including motion compensation, transform and quantization, and entropy coding. This is a declarative programming model in which the parallelism of the compression algorithm is expressed as a hierarchical graph of tasks with message communication. Data-parallel and task-parallel design patterns are supported without the need for explicit global synchronization control. An example is described of an H.264 encoder developed for a commercially available, massively parallel memory-network processor device.

  14. A parallel computational model for GATE simulations.

    PubMed

    Rannou, F R; Vega-Acevedo, N; El Bitar, Z

    2013-12-01

    GATE/Geant4 Monte Carlo simulations are computationally demanding applications, requiring thousands of processor hours to produce realistic results. The classical strategy of distributing the simulation of individual events does not apply efficiently for Positron Emission Tomography (PET) experiments, because it requires a centralized coincidence processing and large communication overheads. We propose a parallel computational model for GATE that handles event generation and coincidence processing in a simple and efficient way by decentralizing event generation and processing but maintaining a centralized event and time coordinator. The model is implemented with the inclusion of a new set of factory classes that can run the same executable in sequential or parallel mode. A Mann-Whitney test shows that the output produced by this parallel model in terms of number of tallies is equivalent (but not equal) to its sequential counterpart. Computational performance evaluation shows that the software is scalable and well balanced. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  15. Parallel volume ray-casting for unstructured-grid data on distributed-memory architectures

    NASA Technical Reports Server (NTRS)

    Ma, Kwan-Liu

    1995-01-01

    As computing technology continues to advance, computational modeling of scientific and engineering problems produces data of increasing complexity: large in size and unstructured in shape. Volume visualization of such data is a challenging problem. This paper proposes a distributed parallel solution that makes ray-casting volume rendering of unstructured-grid data practical. Both the data and the rendering process are distributed among processors. At each processor, ray-casting of local data is performed independently of the other processors. The global image compositing processes, which require inter-processor communication, are overlapped with the local ray-casting processes to achieve maximum parallel efficiency. This algorithm differs from previous ones in four ways: it is completely distributed, less view-dependent, reasonably scalable, and flexible. Without using dynamic load balancing, test results on the Intel Paragon using from two to 128 processors show, on average, about 60% parallel efficiency.

  16. Traditional Chinese medicine on the effects of low-intensity laser irradiation on cells

    NASA Astrophysics Data System (ADS)

    Liu, Timon C.; Duan, Rui; Li, Yan; Cai, Xiongwei

    2002-04-01

    In a previous paper, process-specific times (PSTs) were defined using the molecular reaction dynamics and time quantum theory established by TCY Liu et al., and the changes of the PSTs of two weakly nonlinearly coupled bio-processes were shown to be parallel, which is called the time parallel principle (TPP). The PST of a physiological process (PP) is called its physiological time (PT). After the PTs of two PPs are compared with their Yin-Yang property in traditional Chinese medicine (TCM), the PST model of Yin and Yang (YPTM) was put forward: for two related processes, the process with the smaller PST is Yin, and the other process is Yang. The Yin-Yang parallel principle (YPP), the fundamental principle of TCM, was put forward in terms of the YPTM and TPP. In this paper, we apply it to study TCM accounts of the effects of low-intensity laser irradiation on cells, and successfully explain the observed phenomena.

  17. Parallel workflow manager for non-parallel bioinformatic applications to solve large-scale biological problems on a supercomputer.

    PubMed

    Suplatov, Dmitry; Popova, Nina; Zhumatiy, Sergey; Voevodin, Vladimir; Švedas, Vytas

    2016-04-01

    Rapid expansion of online resources providing access to genomic, structural, and functional information associated with biological macromolecules opens an opportunity to gain a deeper understanding of the mechanisms of biological processes through systematic analysis of large datasets. This, however, requires novel strategies to optimally utilize computer processing power. Some methods in bioinformatics and molecular modeling require extensive computational resources. Other algorithms have fast implementations which take at most several hours to analyze a common input on a modern desktop station; however, due to multiple invocations for a large number of subtasks the full task requires significant computing power. Therefore, an efficient computational solution to large-scale biological problems requires both a wise parallel implementation of resource-hungry methods and a smart workflow to manage multiple invocations of relatively fast algorithms. In this work, a new computer software package, mpiWrapper, has been developed to accommodate non-parallel implementations of scientific algorithms within the parallel supercomputing environment. The Message Passing Interface has been implemented to exchange information between nodes. Two specialized threads, one for task management and communication and another for subtask execution, are invoked on each processing unit to avoid deadlock while using blocking calls to MPI. The mpiWrapper can be used to launch all conventional Linux applications without the need to modify their original source codes and supports resubmission of subtasks on node failure. We show that this approach can be used to process huge amounts of biological data efficiently by running non-parallel programs in parallel mode on a supercomputer. The C++ source code and documentation are available from http://biokinet.belozersky.msu.ru/mpiWrapper .
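
    mpiWrapper itself is MPI-based C++ code with a management thread and an execution thread per processing unit; the sketch below only mimics the master-worker-with-resubmission pattern using a local process pool and a stand-in external command, and is not the published implementation.

```python
# Toy analogue of the mpiWrapper pattern: a master distributes independent
# invocations of a non-parallel external program to workers and resubmits
# subtasks that fail. The real tool does this over MPI; 'echo' is a
# stand-in for an arbitrary Linux application.
from concurrent.futures import ProcessPoolExecutor, as_completed
import subprocess

def run_subtask(arg):
    """One invocation of an external, non-parallel tool."""
    out = subprocess.run(["echo", f"processed {arg}"],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

if __name__ == "__main__":
    pending = {f"input_{i}" for i in range(20)}
    results, max_retries = {}, 2
    for _ in range(max_retries + 1):
        if not pending:
            break
        with ProcessPoolExecutor(max_workers=4) as pool:
            futures = {pool.submit(run_subtask, a): a for a in pending}
            for fut in as_completed(futures):
                arg = futures[fut]
                try:
                    results[arg] = fut.result()
                    pending.discard(arg)  # done; failures stay for resubmission
                except Exception:
                    pass
    print(len(results), "subtasks completed")
```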

  18. A Parallel Ghosting Algorithm for The Flexible Distributed Mesh Database

    DOE PAGES

    Mubarak, Misbah; Seol, Seegyoung; Lu, Qiukai; ...

    2013-01-01

    Critical to the scalability of parallel adaptive simulations are parallel control functions including load balancing, reduced inter-process communication and optimal data decomposition. In distributed meshes, many mesh-based applications frequently access neighborhood information for computational purposes, which must be transmitted efficiently to avoid parallel performance degradation when the neighbors are on different processors. This article presents a parallel algorithm for creating and deleting data copies, referred to as ghost copies, which localize neighborhood data for computation purposes while minimizing inter-process communication. The key characteristics of the algorithm are: (1) It can create ghost copies of any permissible topological order in a 1D, 2D or 3D mesh based on selected adjacencies. (2) It exploits neighborhood communication patterns during the ghost creation process, thus eliminating all-to-all communication. (3) For applications that need neighbors of neighbors, the algorithm can create n ghost layers, up to the point where the whole partitioned mesh is ghosted. Strong and weak scaling results are presented for the IBM BG/P and Cray XE6 architectures up to a core count of 32,768 processors. The algorithm also leads to scalable results when used in a parallel super-convergent patch recovery error estimator, an application that frequently accesses neighborhood data to carry out computation.

  19. Parallel STEPS: Large Scale Stochastic Spatial Reaction-Diffusion Simulation with High Performance Computers

    PubMed Central

    Chen, Weiliang; De Schutter, Erik

    2017-01-01

    Stochastic, spatial reaction-diffusion simulations have been widely used in systems biology and computational neuroscience. However, the increasing scale and complexity of models and morphologies have exceeded the capacity of any serial implementation. This led to the development of parallel solutions that benefit from the boost in performance of modern supercomputers. In this paper, we describe an MPI-based, parallel operator-splitting implementation for stochastic spatial reaction-diffusion simulations with irregular tetrahedral meshes. The performance of our implementation is first examined and analyzed with simulations of a simple model. We then demonstrate its application to real-world research by simulating the reaction-diffusion components of a published calcium burst model in both Purkinje neuron sub-branch and full dendrite morphologies. Simulation results indicate that our implementation is capable of achieving super-linear speedup for balanced loading simulations with reasonable molecule density and mesh quality. In the best scenario, a parallel simulation with 2,000 processes runs more than 3,600 times faster than its serial SSA counterpart, and achieves more than 20-fold speedup relative to parallel simulation with 100 processes. In a more realistic scenario with dynamic calcium influx and data recording, the parallel simulation with 1,000 processes and no load balancing is still 500 times faster than the conventional serial SSA simulation. PMID:28239346

  20. Parallel STEPS: Large Scale Stochastic Spatial Reaction-Diffusion Simulation with High Performance Computers.

    PubMed

    Chen, Weiliang; De Schutter, Erik

    2017-01-01

    Stochastic, spatial reaction-diffusion simulations have been widely used in systems biology and computational neuroscience. However, the increasing scale and complexity of models and morphologies have exceeded the capacity of any serial implementation. This led to the development of parallel solutions that benefit from the boost in performance of modern supercomputers. In this paper, we describe an MPI-based, parallel operator-splitting implementation for stochastic spatial reaction-diffusion simulations with irregular tetrahedral meshes. The performance of our implementation is first examined and analyzed with simulations of a simple model. We then demonstrate its application to real-world research by simulating the reaction-diffusion components of a published calcium burst model in both Purkinje neuron sub-branch and full dendrite morphologies. Simulation results indicate that our implementation is capable of achieving super-linear speedup for balanced loading simulations with reasonable molecule density and mesh quality. In the best scenario, a parallel simulation with 2,000 processes runs more than 3,600 times faster than its serial SSA counterpart, and achieves more than 20-fold speedup relative to parallel simulation with 100 processes. In a more realistic scenario with dynamic calcium influx and data recording, the parallel simulation with 1,000 processes and no load balancing is still 500 times faster than the conventional serial SSA simulation.

  1. Parallel Guessing: A Strategy for High-Speed Computation

    DTIC Science & Technology

    1984-09-19

    for using additional hardware to obtain higher processing speed). In this paper we argue that parallel guessing for image analysis is a useful...from a true solution, or the correctness of a guess, can be readily checked. We review image-analysis algorithms having a parallel guessing or

  2. 76 FR 2853 - Approval and Promulgation of Air Quality Implementation Plans; Delaware; Infrastructure State...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-18

    ... technical analysis submitted for parallel-processing by DNREC on December 9, 2010, to address significant... technical analysis submitted by DNREC for parallel-processing on December 9, 2010, to satisfy the... consists of a technical analysis that provides detailed support for Delaware's position that it has...

  3. Tracking the Continuity of Language Comprehension: Computer Mouse Trajectories Suggest Parallel Syntactic Processing

    ERIC Educational Resources Information Center

    Farmer, Thomas A.; Cargill, Sarah A.; Hindy, Nicholas C.; Dale, Rick; Spivey, Michael J.

    2007-01-01

    Although several theories of online syntactic processing assume the parallel activation of multiple syntactic representations, evidence supporting simultaneous activation has been inconclusive. Here, the continuous and non-ballistic properties of computer mouse movements are exploited, by recording their streaming x, y coordinates to procure…

  4. Parallel and Serial Processes in Visual Search

    ERIC Educational Resources Information Center

    Thornton, Thomas L.; Gilden, David L.

    2007-01-01

    A long-standing issue in the study of how people acquire visual information centers around the scheduling and deployment of attentional resources: Is the process serial, or is it parallel? A substantial empirical effort has been dedicated to resolving this issue. However, the results remain largely inconclusive because the methodologies that have…

  5. Motor and verbal perspective taking in children with Autism Spectrum Disorder: Changes in social interaction with people and tools.

    PubMed

    Studenka, Breanna E; Gillam, Sandra L; Hartzheim, Daphne; Gillam, Ronald B

    2017-07-01

    Children with Autism Spectrum Disorder (ASD) have difficulty communicating with others nonverbally, via mechanisms such as hand gestures, eye contact and facial expression. Individuals with ASD also have marked deficits in planning future actions (Hughes, 1996), which might contribute to impairments in non-verbal communication. Perspective taking is typically assessed using verbal scenarios whereby the participant imagines how an actor would interact in a social situation (e.g., Sally Anne task; Baron-Cohen, Leslie, & Frith, 1985). The current project evaluated motor perspective taking in five children with ASD (8-11 years old) as they participated in a narrative intervention program over the course of about 16 weeks. The goal of the motor perspective-taking task was to facilitate the action of an experimenter either hammering with a tool or putting it away. Initially, children with ASD facilitated the experimenter's action less than neurotypical control children. As the narrative intervention progressed, children with ASD exhibited increased motor facilitation that paralleled their increased use of mental state and causal language, indicating a link between verbal and motor perspective taking. Motoric perspective taking provides an additional way to assess understanding and communication in children with ASD and may be a valuable tool for both early assessment and diagnosis of children with ASD. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Using Motivational Interviewing Techniques to Address Parallel Process in Supervision

    ERIC Educational Resources Information Center

    Giordano, Amanda; Clarke, Philip; Borders, L. DiAnne

    2013-01-01

    Supervision offers a distinct opportunity to experience the interconnection of counselor-client and counselor-supervisor interactions. One product of this network of interactions is parallel process, a phenomenon by which counselors unconsciously identify with their clients and subsequently present to their supervisors in a similar fashion…

  7. Parallelization of a hydrological model using the message passing interface

    USGS Publications Warehouse

    Wu, Yiping; Li, Tiejian; Sun, Liqun; Chen, Ji

    2013-01-01

    With increasing knowledge about natural processes, hydrological models such as the Soil and Water Assessment Tool (SWAT) are becoming larger and more complex, with increasing computation time. Additionally, other procedures such as model calibration, which may require thousands of model iterations, can increase running time and thus further hinder rapid modeling and analysis. Using the widely applied SWAT as an example, this study demonstrates how to parallelize a serial hydrological model in a Windows® environment using a parallel programming technology, the Message Passing Interface (MPI). With a case study, we derived the optimal values for the two parameters (the number of processes and the corresponding percentage of work to be distributed to the master process) of the parallel SWAT (P-SWAT) on an ordinary personal computer and a workstation. Our study indicates that model execution time can be reduced by 42%–70% (a speedup of 1.74–3.36) using multiple processes (two to five) with a proper task-distribution scheme between the master and slave processes. Although the computation time decreases with an increasing number of processes (from two to five), this enhancement diminishes because of the accompanying increase in message passing between the master and all slave processes. Our case study demonstrates that P-SWAT with a five-process run may reach the maximum speedup, and the performance can be quite stable (fairly independent of project size). Overall, P-SWAT can help reduce the computation time substantially for an individual model run, manual and automatic calibration procedures, and optimization of best management practices. In particular, the parallelization method we used and the scheme for deriving the optimal parameters in this study can be valuable and easily applied to other hydrological or environmental models.

  8. A perspective on unstructured grid flow solvers

    NASA Technical Reports Server (NTRS)

    Venkatakrishnan, V.

    1995-01-01

    This survey paper assesses the status of compressible Euler and Navier-Stokes solvers on unstructured grids. Different spatial and temporal discretization options for steady and unsteady flows are discussed. The integration of these components into an overall framework to solve practical problems is addressed. Issues such as grid adaptation, higher order methods, hybrid discretizations and parallel computing are briefly discussed. Finally, some outstanding issues and future research directions are presented.

  9. The Red Book and clinical practice.

    PubMed

    Bygott, Catherine

    2012-09-01

    Jung's work is fundamentally an experience, not an idea. From this perspective, I attempt to bridge conference, consulting room and living psyche by considering the influence of the 'Red Book' on clinical practice through the subtle and imaginal. Jung's journey as a man broadens out to have relevance for women. His story is individual but its archetypal foundation finds parallel expression in analytic practice today. © 2012, The Society of Analytical Psychology.

  10. What Multilevel Parallel Programs do when you are not Watching: A Performance Analysis Case Study Comparing MPI/OpenMP, MLP, and Nested OpenMP

    NASA Technical Reports Server (NTRS)

    Jost, Gabriele; Labarta, Jesus; Gimenez, Judit

    2004-01-01

    With the current trend in parallel computer architectures towards clusters of shared memory symmetric multi-processors, parallel programming techniques have evolved that support parallelism beyond a single level. When comparing the performance of applications based on different programming paradigms, it is important to differentiate between the influence of the programming model itself and other factors, such as implementation-specific behavior of the operating system (OS) or architectural issues. Rewriting a large scientific application in order to employ a new programming paradigm is usually a time-consuming and error-prone task. Before embarking on such an endeavor it is important to determine that there is really a gain that would not be possible with the current implementation. A detailed performance analysis is crucial to clarify these issues. The multilevel programming paradigms considered in this study are hybrid MPI/OpenMP, MLP, and nested OpenMP. The hybrid MPI/OpenMP approach is based on using MPI [7] for the coarse-grained parallelization and OpenMP [9] for fine-grained loop-level parallelism. The MPI programming paradigm assumes a private address space for each process. Data is transferred by explicitly exchanging messages via calls to the MPI library. This model was originally designed for distributed memory architectures but is also suitable for shared memory systems. The second paradigm under consideration is MLP, which was developed by Taft. The approach is similar to MPI/OpenMP, using a mix of coarse-grain process-level parallelization and loop-level OpenMP parallelization. As is the case with MPI, a private address space is assumed for each process. The MLP approach was developed for ccNUMA architectures and explicitly takes advantage of the availability of shared memory. A shared memory arena which is accessible by all processes is required. Communication is done by reading from and writing to the shared memory.

  11. SCORPIO: A Scalable Two-Phase Parallel I/O Library With Application To A Large Scale Subsurface Simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sreepathi, Sarat; Sripathi, Vamsi; Mills, Richard T

    2013-01-01

    Inefficient parallel I/O is known to be a major bottleneck among scientific applications employed on supercomputers as the number of processor cores grows into the thousands. Our prior experience indicated that parallel I/O libraries such as HDF5 that rely on MPI-IO do not scale well beyond 10K processor cores, especially on parallel file systems (like Lustre) with a single point of resource contention. Our previous optimization efforts for a massively parallel multi-phase and multi-component subsurface simulator (PFLOTRAN) led to a two-phase I/O approach at the application level, where a set of designated processes participate in the I/O process by splitting the I/O operation into a communication phase and a disk I/O phase. The designated I/O processes are created by splitting the MPI global communicator into multiple sub-communicators. The root process in each sub-communicator is responsible for performing the I/O operations for the entire group and then distributing the data to the rest of the group. This approach resulted in over 25x speedup in HDF I/O read performance and 3x speedup in write performance for PFLOTRAN at over 100K processor cores on the ORNL Jaguar supercomputer. This research describes the design and development of a general purpose parallel I/O library, SCORPIO (SCalable block-ORiented Parallel I/O), that incorporates our optimized two-phase I/O approach. The library provides a simplified higher-level abstraction to the user, sitting atop existing parallel I/O libraries (such as HDF5), and implements optimized I/O access patterns that can scale to larger numbers of processors. Performance results with standard benchmark problems and PFLOTRAN indicate that our library is able to maintain the same speedups as before with the added flexibility of being applicable to a wider range of I/O intensive applications.
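
    As a minimal illustration of the two-phase pattern (sketched here with mpi4py under our own assumptions, not SCORPIO's actual API): the global communicator is split into sub-communicators, and each group root gathers its members' data in the communication phase and is the only rank to touch the disk in the I/O phase.

```python
# Sketch of the two-phase pattern with mpi4py (run under mpiexec): the
# global communicator is split into groups; each group's root gathers the
# members' data (communication phase) and alone writes to disk (I/O phase).
# Group size and file naming are illustrative, not SCORPIO's actual API.
from mpi4py import MPI
import numpy as np

GROUP_SIZE = 4  # ranks per designated I/O process

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
group = rank // GROUP_SIZE

# Split into sub-communicators; rank 0 of each group is the I/O process.
io_comm = comm.Split(color=group, key=rank)

local_data = np.full(8, rank, dtype=np.float64)  # this rank's field slice

# Phase 1 (communication): aggregate the group's data at the group root.
gathered = io_comm.gather(local_data, root=0)

# Phase 2 (disk I/O): only group roots touch the file system.
if io_comm.Get_rank() == 0:
    block = np.concatenate(gathered)
    np.save(f"block_{group}.npy", block)
    print(f"group {group}: wrote {block.size} values")
```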

  12. Visualization Co-Processing of a CFD Simulation

    NASA Technical Reports Server (NTRS)

    Vaziri, Arsi

    1999-01-01

    OVERFLOW, a widely used CFD simulation code, is combined with a visualization system, pV3, to experiment with an environment for simulation/visualization co-processing on an SGI Origin 2000 (O2K) computer system. The shared memory version of the solver is used, with the O2K 'pfa' preprocessor invoked to automatically discover parallelism in the source code. No other explicit parallelism is enabled. In order to study the scaling and performance of the visualization co-processing system, sample runs are made with different processor groups in the range of 1 to 254 processors. The data exchange between the visualization system and the simulation system is rapid enough for user interactivity when the problem size is small. This shared memory version of OVERFLOW, with minimal parallelization, does not scale well to an increasing number of available processors. The visualization task takes about 18 to 30% of the total processing time and does not appear to be a major contributor to the poor scaling. Improper load balancing and inter-processor communication overhead are contributors to this poor performance. Work is in progress aimed at obtaining improved parallel performance of the solver and removing the limitations of serial data transfer to pV3 by examining various parallelization/communication strategies, including the use of explicit message passing.

  13. PaFlexPepDock: parallel ab-initio docking of peptides onto their receptors with full flexibility based on Rosetta.

    PubMed

    Li, Haiou; Lu, Liyao; Chen, Rong; Quan, Lijun; Xia, Xiaoyan; Lü, Qiang

    2014-01-01

    Structural information related to protein-peptide complexes can be very useful for novel drug discovery and design. The computational docking of protein and peptide can supplement the structural information available on protein-peptide interactions explored by experimental means. The protein-peptide docking in this paper can be described as three processes that occur in parallel: ab-initio peptide folding, peptide docking with its receptor, and refinement of some flexible areas of the receptor as the peptide approaches. Several existing methods have been used to sample the degrees of freedom in the three processes, which are usually triggered in an organized sequential scheme. In this paper, we propose a parallel approach that combines all three processes during the docking of a folding peptide with a flexible receptor. This approach mimics the actual protein-peptide docking process in a parallel way and is expected to deliver better performance than sequential approaches. We used 22 unbound protein-peptide docking examples to evaluate our method. Our analysis of the results showed that the explicit refinement of the flexible areas of the receptor facilitated more accurate modeling of the interfaces of the complexes, while combining all of the moves in parallel helped construct energy funnels for prediction.
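
    One way to read "combines all three processes" is a single Monte Carlo loop that draws folding, docking, and receptor-refinement moves at random rather than in fixed stages. The sketch below is a generic Metropolis scheme under that interpretation; the pose object, move functions, and energy function are hypothetical stand-ins, not Rosetta or PaFlexPepDock calls.

    ```python
    import math
    import random

    def parallel_dock(pose, moves, energy, n_steps=10000, kT=1.0):
        """Interleave peptide-folding, rigid-body docking, and receptor
        refinement moves in one loop, so all three processes advance
        'in parallel' instead of in a fixed sequential scheme."""
        e_old = energy(pose)
        for _ in range(n_steps):
            move = random.choice(moves)   # any of the three processes
            trial = move(pose)
            e_new = energy(trial)
            # Metropolis criterion: accept downhill moves always,
            # uphill moves with Boltzmann probability.
            if e_new <= e_old or random.random() < math.exp((e_old - e_new) / kT):
                pose, e_old = trial, e_new
        return pose
    ```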

  14. A VIEW OF TURKEY AND EUROPEAN RELATIONS FROM THE PERSPECTIVE OF MEDICAL LEGISLATION: AN ASSESSMENT OF STATE OF PLAY

    PubMed Central

    Ekmekçi, Perihan Elif; Arda, Berna

    2015-01-01

    The aim of this paper is to reflect the situation of health legislation alignment in Turkey in its accession process to the European Union and the Customs Union Agreement, and to discuss the EU's health priorities in parallel with the Turkish ones. The health legislation alignment process consists of three titles: the European Union alignment process, the harmonization done in the framework of membership of the Council of Europe, and the obligations under the Customs Union Agreement. Significant human resources are required for the adoption of the legislation, which makes it ethically imperative to discuss whether there is harmony among the priorities of both parties. Unless this harmony and parallelism is shown, the human resources appointed for the adoption of the health legislation process cannot prove their efficiency and effectiveness. In this article, the Customs Union and formal negotiations for full EU membership are included in the phrase “the alignment process to the European Union”. Council Decisions 1/95 and 2/97 are grounded in the obligations provided by the Customs Union Agreement. The reference document used to discuss the formal negotiation process for full membership of the European Union is the Turkish National Program for the Adoption of the EU Acquis 2008–2013. The legislative obligations of Turkey arising from its membership of the Council of Europe, which has made significant contributions to medical legislation, especially in the field of medical ethics, are also included in this article. PMID:26269696

  15. New perspectives for European climate services: HORIZON2020

    NASA Astrophysics Data System (ADS)

    Bruning, Claus; Tilche, Andrea

    2014-05-01

    The development of new end-to-end climate services was one of the core priorities of the 7th Framework Programme for Research and Technological Development of the European Commission and will become one of the key strategic priorities of Societal Challenge 5 of HORIZON2020 (the new EU Framework Programme for Research and Innovation 2014-2020). Results should increase the competitiveness of European businesses and the ability of regional and national authorities to make effective decisions in climate-sensitive sectors. In parallel, the production of new tailored climate information should strengthen the resilience of European society to climate change. From this perspective, the strategy to support and foster the underpinning science for climate services in HORIZON2020 will be presented.

  16. Moving in Parallel Toward a Modern Modeling Epistemology: Bayes Factors and Frequentist Modeling Methods.

    PubMed

    Rodgers, Joseph Lee

    2016-01-01

    The Bayesian-frequentist debate typically portrays these statistical perspectives as opposing views. However, both Bayesian and frequentist statisticians have expanded their epistemological basis away from a singular focus on the null hypothesis, to a broader perspective involving the development and comparison of competing statistical/mathematical models. For frequentists, statistical developments such as structural equation modeling and multilevel modeling have facilitated this transition. For Bayesians, the Bayes factor has facilitated this transition. The Bayes factor is treated in articles within this issue of Multivariate Behavioral Research. The current presentation provides brief commentary on those articles and more extended discussion of the transition toward a modern modeling epistemology. In certain respects, Bayesians and frequentists share common goals.

  17. [The regionalized healthcare network in Santa Catarina State, Brazil, from 2011 to 2015: governance system and oral healthcare].

    PubMed

    Godoi, Heloisa; Andrade, Selma Regina de; Mello, Ana Lúcia Schaefer Ferreira de

    2017-09-28

    The objective was to describe the governance system used in structuring the regionalized healthcare network in Santa Catarina State, Brazil, based on the Bipartite Inter-Managerial Commission (CIB), with a focus on the structuring of oral healthcare. This was a qualitative, exploratory-descriptive documentary study, based on the foundations of governance as an analytical tool, through identification of the dimensions actors, norms, nodal points, and processes. Secondary data were collected from the minutes of CIB meetings held from January 2011 to December 2015. The analysis shows weaknesses in CIB governance in Santa Catarina in relation to regionalized structuring of oral healthcare from a network perspective. Structuring of oral healthcare occurs in parallel to that of other thematic networks in the state and shows the expansion of dental services, especially those of medium complexity, as an effect of the prevailing governance process. The relations established between administrators and decision-making processes allowed recognizing this network's "prescription", since there is little negotiation and local demand, limited more to following recommendations and incentives from the federal/state sphere, intermediated by staff from the State Health Secretariat. Thus, setting a policy agenda for oral healthcare for the population of Santa Catarina is weakened, with a peripheral position in relation to other health programs.

  18. Exploring revictimization process among Turkish women: The role of early maladaptive schemas on the link between child abuse and partner violence.

    PubMed

    Atmaca, Sinem; Gençöz, Tülin

    2016-02-01

    The purpose of the current study is to explore the revictimization process linking child abuse and neglect (CAN) and intimate partner violence (IPV) from the schema theory perspective. For this aim, 222 married women recruited in four central cities of Turkey participated in the study. Results indicated that early negative CAN experiences increased the risk of being exposed to later IPV. Specifically, emotional abuse and sexual abuse in childhood predicted the four subtypes of IPV, which are physical, psychological, and sexual violence, and injury, while physical abuse was only associated with physical violence. To explore the mediational role of early maladaptive schemas (EMSs) in this association, first, five schema domains were tested via a Parallel Multiple Mediation Model. Results indicated that only the Disconnection/Rejection (D/R) schema domain mediated the association between CAN and IPV. Second, to determine the particular mediational role of each schema, eighteen EMSs were tested as mediators, and results showed that the Emotional Deprivation and Vulnerability to Harm or Illness schemas mediated the association between CAN and IPV. These findings provide empirical support for the crucial role of EMSs in the revictimization process. Clinical implications are discussed. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Gas fired boilers: Perspective for near future fuel composition and impact on burner design process

    NASA Astrophysics Data System (ADS)

    Schiro, Fabio; Stoppato, Anna; Benato, Alberto

    2017-11-01

    Advancements in gas boiler technology run in parallel with the growth of renewable energy production. Renewable production will affect fuel gas quality, since the gas grid will face an increasing injection of alternative fuels (biogas, biomethane, hydrogen). Biogas allows producing energy with a lower CO2 impact; hydrogen production by electrolysis can mitigate the issues related to the mismatch between renewable energy production and energy demand. These technologies will contribute to achieving renewable production targets, but their impact on the whole fuel gas production-to-consumption chain must be evaluated. In the first part of this study, the authors present the future scenario of grid gas composition and its implications for gas-fed appliances. Given that the widely used premixed burners are currently designed mainly by trial and error, a broader fuel gas quality range adds a further challenge to this design process. A better understanding and structuring of this process is helpful for future appliance-oriented developments. The authors present an experimental activity on a premixed condensing boiler setup. A test protocol highlighting the burners' flexibility in terms of mixture composition is adopted, and the system's fuel flexibility is characterized around multiple reference conditions.

  20. Comparison of multihardware parallel implementations for a phase unwrapping algorithm

    NASA Astrophysics Data System (ADS)

    Hernandez-Lopez, Francisco Javier; Rivera, Mariano; Salazar-Garibay, Adan; Legarda-Sáenz, Ricardo

    2018-04-01

    Phase unwrapping is an important problem in the areas of optical metrology, synthetic aperture radar (SAR) image analysis, and magnetic resonance imaging (MRI) analysis. These images are becoming larger in size and, particularly, the availability and need for processing of SAR and MRI data have increased significantly with the acquisition of remote sensing data and the popularization of magnetic resonators in clinical diagnosis. Therefore, it is important to develop faster and more accurate phase unwrapping algorithms. We propose a parallel multigrid algorithm for a phase unwrapping method named accumulation of residual maps, which builds on a serial algorithm that minimizes a cost function by means of a Gauss-Seidel-type algorithm. Our algorithm also optimizes the original cost function, but unlike the original work, it is a parallel Jacobi-class algorithm with alternating minimizations. This strategy is known as the chessboard type: red pixels can be updated in parallel in the same iteration since they are independent, and black pixels can similarly be updated in parallel in the alternating iteration. We present parallel implementations of our algorithm for different parallel architectures such as multicore CPUs, the Xeon Phi coprocessor, and Nvidia graphics processing units. In all cases, we obtain superior performance from our parallel algorithm when compared with the original serial version. In addition, we present a detailed performance comparison of the developed parallel versions.
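
    The chessboard update described above can be sketched with NumPy. This is a generic red-black smoother under an assumed Laplace-like cost term, not the authors' accumulation-of-residual-maps functional; the masks select the two pixel colours, and all pixels of one colour update simultaneously.

    ```python
    import numpy as np

    def checkerboard_sweep(phi, rhs, red, n_iters=100):
        """Red-black (chessboard) sweeps: pixels of one colour depend only
        on the other colour, so each half-iteration is fully parallel."""
        for _ in range(n_iters):
            for mask in (red, ~red):
                # 4-neighbour average, computed for every pixel at once
                # (np.roll wraps at the borders; a toy simplification).
                nbr = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                       np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
                phi = np.where(mask, 0.25 * (nbr - rhs), phi)
        return phi

    h, w = 256, 256
    ii, jj = np.indices((h, w))
    red = (ii + jj) % 2 == 0          # chessboard colouring
    phi = checkerboard_sweep(np.zeros((h, w)), np.zeros((h, w)), red)
    ```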

  1. [Impairment - disability - participation for all : New federal reporting in light of the UN Convention on the Rights of Persons with Disabilities].

    PubMed

    Wacker, Elisabeth

    2016-09-01

    The new Federal Government's Report on Participation explores the contexts in which impairments become disabilities for those individuals who experience them. In parallel, it outlines the factors that foster inclusion and opportunities to act for everyone in society - despite existing impairments. From a sociopolitical and health policy perspective, disability refers to unequal opportunities based on impairment. Hence, the focus here is on the equalisation of these participation opportunities to match those of the entire population - but always from differentiated perspectives on the various social arenas. The human rights approach stresses protection against discrimination as well as dignity and self-determination for all. From a human resources angle, the emphasis is on the performance of individuals in favourable conditions and the attainment of personal goals within their actual everyday circumstances. The new reporting concept is indebted to these perspectives and thus focuses on individual life circumstances, while referring to the WHO's International Classification of Functioning, Disability and Health (ICF) - an approach now validated on a global scale. Therefore, it does not only report on measures provided by services for persons with disabilities but, more crucially, investigates determinants on the personal and environmental levels, unequal opportunities and the interdependency between context and competence for particular sections of the population. Two groups are singled out in the process: elderly persons and individuals with mental health impairments. The participation report is part of the National Action Plan to implement the UN Convention on the Rights of Persons with Disabilities (UNCRPD). An independent scientific committee conceptualises the design of the report while accompanying and commenting upon its realisation. Currently, a second federal report on participation is emerging from the new concept.

  2. An embedded multi-core parallel model for real-time stereo imaging

    NASA Astrophysics Data System (ADS)

    He, Wenjing; Hu, Jian; Niu, Jingyu; Li, Chuanrong; Liu, Guangyu

    2018-04-01

    Real-time processing based on embedded systems will enhance the application capability of stereo imaging for LiDAR and hyperspectral sensors. Research on task partitioning and scheduling strategies for embedded multiprocessor systems started relatively late, compared with that for PC computers. In this paper, aimed at an embedded multi-core processing platform, a parallel model for stereo imaging is studied and verified. After analyzing the computing load, throughput capacity, and buffering requirements, a two-stage pipeline parallel model based on message transmission is established. This model can be applied to fast stereo imaging for airborne sensors with various characteristics. To demonstrate the feasibility and effectiveness of the parallel model, parallel software was designed using test flight data, based on the 8-core DSP processor TMS320C6678. The results indicate that the design performed well in workload distribution and achieved a speed-up ratio of up to 6.4.
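
    A two-stage pipeline over message queues can be illustrated with Python's multiprocessing, standing in for the paper's DSP message-transmission model; the stage names and the sentinel protocol are assumptions of this sketch, not the authors' implementation.

    ```python
    import multiprocessing as mp

    def stage1(q_in, q_out):
        # Stage 1: e.g. rectification of each incoming scan line.
        for item in iter(q_in.get, None):
            q_out.put(("rectified", item))
        q_out.put(None)                  # forward the shutdown message

    def stage2(q_in, q_res):
        # Stage 2: e.g. matching / image formation on rectified lines.
        for item in iter(q_in.get, None):
            q_res.put(("matched", item))

    if __name__ == "__main__":
        raw, mid, out = mp.Queue(), mp.Queue(), mp.Queue()
        procs = [mp.Process(target=stage1, args=(raw, mid)),
                 mp.Process(target=stage2, args=(mid, out))]
        for p in procs:
            p.start()
        for line in range(8):            # stand-in for sensor lines
            raw.put(line)
        raw.put(None)                    # end-of-stream message
        for _ in range(8):
            print(out.get())             # both stages overlap in time
        for p in procs:
            p.join()
    ```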

  3. Segmentation of remotely sensed data using parallel region growing

    NASA Technical Reports Server (NTRS)

    Tilton, J. C.; Cox, S. C.

    1983-01-01

    The improved spatial resolution of the new earth resources satellites will increase the need for effective utilization of spatial information in machine processing of remotely sensed data. One promising technique is scene segmentation by region growing. Region growing can use spatial information in two ways: only spatially adjacent regions merge together, and merging criteria can be based on region-wide spatial features. A simple region growing approach is described in which the similarity criterion is based on region mean and variance (a simple spatial feature). An effective way to implement region growing for remote sensing is as an iterative parallel process on a large parallel processor. A straightforward parallel pixel-based implementation of the algorithm is explored and its efficiency is compared with sequential pixel-based, sequential region-based, and parallel region-based implementations. Experimental results from an aircraft scanner data set are presented, as is a discussion of proposed improvements to the segmentation algorithm.
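
    A toy version of the iterative parallel scheme can be written with NumPy: every pixel starts as its own region and, in each iteration, all pixels evaluate a merge with their neighbours from the same snapshot of region statistics. The mean-only similarity test below is an assumed simplification of the paper's mean-and-variance criterion.

    ```python
    import numpy as np

    def grow_regions(img, tol=5.0, n_iters=10):
        """Iterative, parallel-flavoured region growing: adjacent regions
        with sufficiently close means adopt a common (smaller) label."""
        labels = np.arange(img.size).reshape(img.shape)
        for _ in range(n_iters):
            counts = np.bincount(labels.ravel())
            sums = np.bincount(labels.ravel(), weights=img.ravel())
            means = sums / np.maximum(counts, 1)    # per-region mean
            new = labels.copy()
            for shift, axis in ((1, 0), (-1, 0), (1, 1), (-1, 1)):
                nbr = np.roll(labels, shift, axis)  # wraps at borders (toy)
                close = np.abs(means[labels] - means[nbr]) < tol
                # all pixels decide simultaneously from the same snapshot
                new = np.where(close & (nbr < new), nbr, new)
            if np.array_equal(new, labels):
                break                               # converged
            labels = new
        return labels

    segments = grow_regions(np.random.rand(64, 64) * 50)
    ```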

  4. Solution-processed parallel tandem polymer solar cells using silver nanowires as intermediate electrode.

    PubMed

    Guo, Fei; Kubis, Peter; Li, Ning; Przybilla, Thomas; Matt, Gebhard; Stubhan, Tobias; Ameri, Tayebeh; Butz, Benjamin; Spiecker, Erdmann; Forberich, Karen; Brabec, Christoph J

    2014-12-23

    Tandem architecture is the most relevant concept to overcome the efficiency limit of single-junction photovoltaic solar cells. Series-connected tandem polymer solar cells (PSCs) have advanced rapidly during the past decade. In contrast, the development of parallel-connected tandem cells is lagging far behind due to the big challenge in establishing an efficient interlayer with high transparency and high in-plane conductivity. Here, we report all-solution fabrication of parallel tandem PSCs using silver nanowires as intermediate charge collecting electrode. Through a rational interface design, a robust interlayer is established, enabling the efficient extraction and transport of electrons from subcells. The resulting parallel tandem cells exhibit high fill factors of ∼60% and enhanced current densities which are identical to the sum of the current densities of the subcells. These results suggest that solution-processed parallel tandem configuration provides an alternative avenue toward high performance photovoltaic devices.

  5. The science of computing - The evolution of parallel processing

    NASA Technical Reports Server (NTRS)

    Denning, P. J.

    1985-01-01

    The present paper is concerned with the approaches to be employed to overcome the set of limitations in software technology that currently impedes effective use of parallel hardware technology. The process required to solve the arising problems is found to involve four stages. At present, Stage One is nearly finished, while Stage Two is under way. Tentative explorations are beginning on Stage Three, and Stage Four is more distant. In Stage One, parallelism is introduced into the hardware of a single computer, which consists of one or more processors, a main storage system, a secondary storage system, and various peripheral devices. In Stage Two, parallel execution of cooperating programs on different machines becomes explicit, while in Stage Three, new languages will make parallelism implicit. In Stage Four, there will be very high level user interfaces capable of interacting with scientists at the same level of abstraction as scientists do with each other.

  6. Implementation of Parallel Dynamic Simulation on Shared-Memory vs. Distributed-Memory Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Shuangshuang; Chen, Yousu; Wu, Di

    2015-12-09

    Power system dynamic simulation computes the system response to a sequence of large disturbances, such as sudden changes in generation or load, or a network short circuit followed by protective branch switching operations. It consists of a large set of differential and algebraic equations, which is computationally intensive and challenging to solve using single-processor based dynamic simulation solutions. High-performance computing (HPC) based parallel computing is a very promising technology to speed up the computation and facilitate the simulation process. This paper presents two different parallel implementations of power grid dynamic simulation using Open Multi-processing (OpenMP) on a shared-memory platform, and Message Passing Interface (MPI) on distributed-memory clusters, respectively. The differences between the parallel simulation algorithms and architectures of the two HPC technologies are illustrated, and their performances for running parallel dynamic simulation are compared and demonstrated.
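
    The shared- versus distributed-memory split can be mimicked in Python, with a process pool standing in loosely for OpenMP threads and mpi4py for MPI ranks. The derivative function is a placeholder for one group's differential/algebraic equations, not the paper's power-grid model.

    ```python
    import numpy as np
    from multiprocessing import Pool

    def generator_derivs(x):
        # Placeholder for the equations of one group of generators.
        return -0.1 * x

    def step_shared_memory(states, n_workers=4):
        # OpenMP-style analogue: parallel workers within a single node.
        with Pool(n_workers) as pool:
            return np.concatenate(pool.map(generator_derivs,
                                           np.array_split(states, n_workers)))

    def step_distributed(states):
        # MPI-style: each rank owns a slice and exchanges results explicitly.
        from mpi4py import MPI
        comm = MPI.COMM_WORLD
        mine = np.array_split(states, comm.Get_size())[comm.Get_rank()]
        return np.concatenate(comm.allgather(generator_derivs(mine)))
    ```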

  7. Linking micro- and macroevolutionary perspectives to evaluate the role of Quaternary sea-level oscillations in island diversification.

    PubMed

    Papadopoulou, Anna; Knowles, L Lacey

    2017-12-01

    With shifts in island area, isolation, and cycles of island fusion-fission, the role of Quaternary sea-level oscillations as drivers of diversification is complex and not well understood. Here, we conduct parallel comparisons of population and species divergence between two island areas of equivalent size that have been affected differently by sea-level oscillations, with the aim to understand the micro- and macroevolutionary dynamics associated with sea-level change. Using genome-wide datasets for a clade of seven Amphiacusta ground cricket species endemic to the Puerto Rico Bank (PRB), we found consistently deeper interspecific divergences and higher population differentiation across the unfragmented Western PRB, in comparison to the currently fragmented Eastern PRB that has experienced extreme changes in island area and connectivity during the Quaternary. We evaluate alternative hypotheses related to the microevolutionary processes (population splitting, extinction, and merging) that regulate the frequency of completed speciation across the PRB. Our results suggest that under certain combinations of archipelago characteristics and taxon traits, the repeated changes in island area and connectivity may create an opposite effect to the hypothesized "species pump" action of oscillating sea levels. Our study highlights how a microevolutionary perspective can complement current macroecological work on the Quaternary dynamics of island biodiversity. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.

  8. Historical Experiments and Physics Teaching: adding considerations from a Bibliographic Review and the Cultural History of Science

    NASA Astrophysics Data System (ADS)

    Jardim, W. T.; Guerra, A.

    2017-12-01

    This paper presents a discussion of the purposes of historical experiments in science teaching as found in the literature. As a starting point, we carried out a bibliographic review on the websites of six periodicals relevant to the area of science teaching and, especially, physics teaching. The search was based, at first, on works published between 2001 and 2016, using terms like "historical experiments", "museums" and "experience". Then, due to the large number of publications found, a screening process was developed based on the analysis of titles, abstracts, keywords and, where necessary, the whole text, aiming to identify which studies emphasize working with historical experiments in physics teaching, whether from a theoretical perspective or based on manipulation of replicas of historical apparatus. The selected proposals were arranged in categories adapted from the work of Heering and Höttecke (2014), which allowed us to draw a parallel between national and international publications with resembling scopes. Furthermore, the analysis of the results leads us to infer that, in general, extra-laboratory factors inherent to science, when not neglected, are placed in a peripheral perspective. Thus, we draw theoretical considerations from historians of science who conduct their research from the perspective of the Cultural History of Science, seeking to add reflections to what has been developed about historical experiments in teaching up to now.

  9. Big Data: A Parallel Particle Swarm Optimization-Back-Propagation Neural Network Algorithm Based on MapReduce

    PubMed Central

    Cao, Jianfang; Cui, Hongyan; Shi, Hao; Jiao, Lijuan

    2016-01-01

    A back-propagation (BP) neural network can solve complicated random nonlinear mapping problems; therefore, it can be applied to a wide range of problems. However, as the sample size increases, the time required to train BP neural networks becomes lengthy. Moreover, the classification accuracy decreases. To improve the classification accuracy and runtime efficiency of the BP neural network algorithm, we propose a parallel design and realization method for a particle swarm optimization (PSO)-optimized BP neural network based on MapReduce on the Hadoop platform, using both the PSO algorithm and a parallel design. The PSO algorithm was used to optimize the BP neural network’s initial weights and thresholds and improve the accuracy of the classification algorithm. The MapReduce parallel programming model was utilized to achieve parallel processing of the BP algorithm, thereby solving the problems of hardware and communication overhead when the BP neural network addresses big data. Datasets on 5 different scales were constructed using the scene image library from the SUN Database. The classification accuracy of the parallel PSO-BP neural network algorithm is approximately 92%, and the system efficiency is approximately 0.85, which presents obvious advantages when processing big data. The algorithm proposed in this study demonstrated both higher classification accuracy and improved time efficiency, which represents a significant improvement obtained from applying parallel processing to an intelligent algorithm on big data. PMID:27304987
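
    The map/reduce decomposition described above can be emulated in plain Python: mappers score a PSO particle (a candidate set of network weights) on separate data shards, and a reducer folds the partial errors into one fitness value. The one-layer sigmoid network and the (error, count) pair format are assumptions of this sketch, not the paper's Hadoop implementation.

    ```python
    from functools import reduce
    import numpy as np

    def map_phase(particle, shard):
        """Mapper: one data shard emits this particle's partial error."""
        X, y = shard
        w, b = particle
        pred = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid forward pass
        return float(np.sum((pred - y) ** 2)), len(y)

    def reduce_phase(a, b):
        """Reducer: combine partial (error, count) pairs from all shards."""
        return a[0] + b[0], a[1] + b[1]

    def fitness(particle, shards):
        err, n = reduce(reduce_phase,
                        (map_phase(particle, s) for s in shards))
        return err / n    # PSO minimises this mean squared error
    ```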

  10. Implementing a Parallel Image Edge Detection Algorithm Based on the Otsu-Canny Operator on the Hadoop Platform

    PubMed Central

    Wang, Min; Tian, Yun

    2018-01-01

    The Canny operator is widely used to detect edges in images. However, as the size of the image dataset increases, the edge detection performance of the Canny operator decreases and its runtime becomes excessive. To improve the runtime and edge detection performance of the Canny operator, in this paper, we propose a parallel design and implementation for an Otsu-optimized Canny operator using a MapReduce parallel programming model that runs on the Hadoop platform. The Otsu algorithm is used to optimize the Canny operator's dual threshold and improve the edge detection performance, while the MapReduce parallel programming model facilitates parallel processing for the Canny operator to solve the processing speed and communication cost problems that occur when the Canny edge detection algorithm is applied to big data. For the experiments, we constructed datasets of different scales from the Pascal VOC2012 image database. The proposed parallel Otsu-Canny edge detection algorithm performs better than other traditional edge detection algorithms. The parallel approach reduced the running time by approximately 67.2% on a Hadoop cluster architecture consisting of 5 nodes with a dataset of 60,000 images. Overall, our approach speeds up the system by approximately 3.4 times when processing large-scale datasets, which demonstrates the obvious superiority of our method. The algorithm proposed in this study demonstrates both better edge detection performance and improved time performance. PMID:29861711
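
    The core idea, letting Otsu's global threshold set Canny's dual thresholds and parallelizing over images, can be sketched with OpenCV and a process pool. The half-Otsu low threshold is one common convention and an assumption here; the file list is hypothetical, and the pool stands in for the Hadoop MapReduce layer.

    ```python
    import multiprocessing as mp
    import cv2

    def otsu_canny(path):
        """Derive the Canny dual threshold from Otsu's threshold."""
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        t, _ = cv2.threshold(img, 0, 255,
                             cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        return cv2.Canny(img, 0.5 * t, t)   # low = half the Otsu value

    if __name__ == "__main__":
        paths = ["img%04d.png" % i for i in range(100)]  # hypothetical
        with mp.Pool() as pool:              # data-parallel over images
            edges = pool.map(otsu_canny, paths)
    ```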

  11. Level 2 Perspective Taking Entails Two Processes: Evidence from PRP Experiments

    ERIC Educational Resources Information Center

    Janczyk, Markus

    2013-01-01

    In many situations people need to mentally adopt the (spatial) perspective of other persons, an ability that is referred to as "Level 2 perspective taking." Its underlying processes have been ascribed to mental self-rotation that can be dissociated from mental object-rotation. Recent findings suggest that perspective taking/self-rotation…

  12. Migration and Development: A Theoretical Perspective 1

    PubMed Central

    De Haas, Hein

    2010-01-01

    The debate on migration and development has swung back and forth like a pendulum, from developmentalist optimism in the 1950s and 1960s, to neo‐Marxist pessimism over the 1970s and 1980s, towards more optimistic views in the 1990s and 2000s. This paper argues how such discursive shifts in the migration and development debate should be primarily seen as part of more general paradigm shifts in social and development theory. However, the classical opposition between pessimistic and optimistic views is challenged by empirical evidence pointing to the heterogeneity of migration impacts. By integrating and amending insights from the new economics of labor migration, livelihood perspectives in development studies and transnational perspectives in migration studies – which share several though as yet unobserved conceptual parallels – this paper elaborates the contours of a conceptual framework that simultaneously integrates agency and structure perspectives and is therefore able to account for the heterogeneous nature of migration‐development interactions. The resulting perspective reveals the naivety of recent views celebrating migration as self‐help development “from below”. These views are largely ideologically driven and shift the attention away from structural constraints and the vital role of states in shaping favorable conditions for positive development impacts of migration to occur. PMID:26900199

  13. Geometric and perceptual effects of the location of the observer vantage point for linear-perspective images.

    PubMed

    Todorović, Dejan

    2005-01-01

    New geometric analyses are presented of three impressive examples of the effects of location of the vantage point on virtual 3-D spaces conveyed by linear-perspective images. In the 'egocentric-road' effect, the perceived direction of the depicted road is always pointed towards the observer, for any position of the vantage point. It is shown that perspective images of real-observer-aimed roads are characterised by a specific, simple pattern of projected side lines. Given that pattern, the position of the observer, and certain assumptions and perspective arguments, the perceived direction of the virtual road towards the observer can be predicted. In the 'skewed balcony' and the 'collapsing ceiling' effects, the position of the vantage point affects the impression of alignment of the virtual architecture conveyed by large-scale illusionistic paintings and the real architecture surrounding them. It is shown that the dislocation of the vantage point away from the viewing position prescribed by the perspective construction induces a mismatch between the painted vanishing point of elements in the picture and the real vanishing point of corresponding elements of the actual architecture. This mismatch of vanishing points provides visual information that the elements of the two architectures are not mutually parallel.

  14. A Connectionist Simulation of Attention and Vector Comparison: The Need for Serial Processing in Parallel Hardware

    DTIC Science & Technology

    1991-01-01


  15. Parallel processing approach to transform-based image coding

    NASA Astrophysics Data System (ADS)

    Normile, James O.; Wright, Dan; Chu, Ken; Yeh, Chia L.

    1991-06-01

    This paper describes a flexible parallel processing architecture designed for use in real-time video processing. The system consists of floating point DSP processors connected to each other via fast serial links; each processor has access to a globally shared memory. A multiple bus architecture in combination with a dual ported memory allows communication with a host control processor. The system has been applied to prototyping of video compression and decompression algorithms. The decomposition of transform-based algorithms for decompression into a form suitable for parallel processing is described. A technique for automatic load balancing among the processors is developed and discussed, and results are presented with image statistics and data rates. Finally, techniques for accelerating the system throughput are analyzed and results from the application of one such modification described.
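
    Automatic load balancing of transform blocks is commonly achieved with a shared work queue: idle processors pull the next block, so faster workers naturally take more work. The sketch below applies a blockwise DCT this way with Python processes; the 8x8 block size and queue protocol are assumptions, and SciPy's dctn stands in for the paper's transform kernels.

    ```python
    import multiprocessing as mp
    import numpy as np
    from scipy.fft import dctn

    def worker(tasks, results, image):
        # Pull the next 8x8 block as soon as free: self-balancing load.
        for (i, j) in iter(tasks.get, None):
            block = image[i:i + 8, j:j + 8]
            results.put(((i, j), dctn(block, norm="ortho")))

    if __name__ == "__main__":
        img = np.random.rand(64, 64)
        tasks, results = mp.Queue(), mp.Queue()
        procs = [mp.Process(target=worker, args=(tasks, results, img))
                 for _ in range(4)]
        for p in procs:
            p.start()
        coords = [(i, j) for i in range(0, 64, 8) for j in range(0, 64, 8)]
        for c in coords:
            tasks.put(c)
        for _ in procs:
            tasks.put(None)              # one stop message per worker
        coeffs = dict(results.get() for _ in coords)
        for p in procs:
            p.join()
    ```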

  16. Modeling the role of parallel processing in visual search.

    PubMed

    Cave, K R; Wolfe, J M

    1990-04-01

    Treisman's Feature Integration Theory and Julesz's Texton Theory explain many aspects of visual search. However, these theories require that parallel processing mechanisms not be used in many visual searches for which they would be useful, and they imply that visual processing should be much slower than it is. Most importantly, they cannot account for recent data showing that some subjects can perform some conjunction searches very efficiently. Feature Integration Theory can be modified so that it accounts for these data and helps to answer these questions. In this new theory, which we call Guided Search, the parallel stage guides the serial stage as it chooses display elements to process. A computer simulation of Guided Search produces the same general patterns as human subjects in a number of different types of visual search.

  17. Ultrascalable petaflop parallel supercomputer

    DOEpatents

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Chiu, George [Cross River, NY; Cipolla, Thomas M [Katonah, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Hall, Shawn [Pleasantville, NY; Haring, Rudolf A [Cortlandt Manor, NY; Heidelberger, Philip [Cortlandt Manor, NY; Kopcsay, Gerard V [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Salapura, Valentina [Chappaqua, NY; Sugavanam, Krishnan [Mahopac, NY; Takken, Todd [Brewster, NY

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  18. Exploiting parallel computing with limited program changes using a network of microcomputers

    NASA Technical Reports Server (NTRS)

    Rogers, J. L., Jr.; Sobieszczanski-Sobieski, J.

    1985-01-01

    Network computing and multiprocessor computers are two discernible trends in parallel processing. The computational behavior of an iterative distributed process in which some subtasks are completed later than others because of an imbalance in computational requirements is of significant interest. The effects of asynchronous processing were studied. A small existing program was converted to perform finite element analysis by distributing substructure analysis over a network of four Apple IIe microcomputers connected to a shared disk, simulating a parallel computer. The substructure analysis uses an iterative, fully stressed, structural resizing procedure. A framework of beams divided into three substructures is used as the finite element model. The effects of asynchronous processing on the convergence of the design variables are determined by not resizing particular substructures on various iterations.

  19. Hypercluster parallel processing library user's manual

    NASA Technical Reports Server (NTRS)

    Quealy, Angela

    1990-01-01

    This User's Manual describes the Hypercluster Parallel Processing Library, composed of FORTRAN-callable subroutines which enable a FORTRAN programmer to manipulate and transfer information throughout the Hypercluster at NASA Lewis Research Center. Each subroutine and its parameters are described in detail. A simple heat flow application using Laplace's equation is included to demonstrate the use of some of the library's subroutines. The manual can be used initially as an introduction to the parallel features provided by the library. Thereafter it can be used as a reference when programming an application.

  20. Parallel dynamics between non-Hermitian and Hermitian systems

    NASA Astrophysics Data System (ADS)

    Wang, P.; Lin, S.; Jin, L.; Song, Z.

    2018-06-01

    We reveal a connection between non-Hermitian and Hermitian systems by studying a family of non-Hermitian Hamiltonians and their Hermitian counterparts, based on exact solutions. In general, for a dynamic process in a non-Hermitian system H, there always exists a parallel dynamic process governed by the corresponding Hermitian conjugate system H†. We show that a linear superposition of the two parallel dynamics is exactly equivalent to the time evolution of a state under a Hermitian Hamiltonian ℋ, and we present the relations among {H, H†, ℋ}.

  1. Parallel pulse processing and data acquisition for high speed, low error flow cytometry

    DOEpatents

    van den Engh, Gerrit J.; Stokdijk, Willem

    1992-01-01

    A digitally synchronized parallel pulse processing and data acquisition system for a flow cytometer has multiple parallel input channels, each with independent pulse digitization and a FIFO storage buffer. A trigger circuit controls the pulse digitization on all channels. After an event has been stored in each FIFO, a bus controller moves the oldest entry from each FIFO buffer onto a common data bus. The trigger circuit generates an ID number for each FIFO entry, which is checked by an error detection circuit. The system has high speed and a low error rate.
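
    The ID-checking scheme can be modelled in a few lines: each channel keeps its own FIFO, the trigger stamps every stored pulse with a shared event number, and the bus controller pops the oldest entry from every FIFO and verifies the IDs agree. This is an illustrative software model of the patented design, not its hardware.

    ```python
    from collections import deque

    class Channel:
        """One input channel with its own digitizer FIFO."""
        def __init__(self):
            self.fifo = deque()

        def digitize(self, event_id, height):
            self.fifo.append((event_id, height))  # trigger stamps the ID

    def bus_collect(channels):
        """Bus controller: pop the oldest entry of every FIFO and check
        that the event IDs match; a mismatch means a channel dropped or
        duplicated a pulse."""
        entries = [ch.fifo.popleft() for ch in channels]
        ids = {eid for eid, _ in entries}
        if len(ids) != 1:
            raise RuntimeError("event ID mismatch: %s" % sorted(ids))
        return [height for _, height in entries]

    chans = [Channel() for _ in range(3)]
    for eid in range(2):                    # two synchronized events
        for c in chans:
            c.digitize(eid, height=0.5)
    print(bus_collect(chans))               # event 0: [0.5, 0.5, 0.5]
    ```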

  2. Framing Student Perspectives into the Higher Education Institutional Review Policy Process

    ERIC Educational Resources Information Center

    Poth, Cheryl; Riedel, Alex; Luth, Robert

    2015-01-01

    It is necessary and desirable to enhance student learning in higher education by integrating multiple perspectives during institutional policy reviews, yet few examples of such a process exist. This article describes an institutional assessment policy review process that used a questionnaire to elicit 269 students' perspectives on a draft policy…

  3. On the Accuracy and Parallelism of GPGPU-Powered Incremental Clustering Algorithms.

    PubMed

    Chen, Chunlei; He, Li; Zhang, Huixiang; Zheng, Hao; Wang, Lei

    2017-01-01

    Incremental clustering algorithms play a vital role in various applications such as massive data analysis and real-time data processing. Typical application scenarios of incremental clustering raise high demand on computing power of the hardware platform. Parallel computing is a common solution to meet this demand. Moreover, General Purpose Graphic Processing Unit (GPGPU) is a promising parallel computing device. Nevertheless, incremental clustering algorithms face a dilemma between clustering accuracy and parallelism when they are powered by GPGPU. We formally analyzed the cause of this dilemma. First, we formalized concepts relevant to incremental clustering, like evolving granularity. Second, we formally proved two theorems. The first theorem proves the relation between clustering accuracy and evolving granularity. Additionally, this theorem analyzes the upper and lower bounds of different-to-same mis-affiliation. Fewer occurrences of such mis-affiliation mean higher accuracy. The second theorem reveals the relation between parallelism and evolving granularity. Smaller work-depth means superior parallelism. Through the proofs, we conclude that the accuracy of an incremental clustering algorithm is negatively related to evolving granularity, while parallelism is positively related to the granularity. Thus the contradictory relations cause the dilemma. Finally, we validated the relations through a demo algorithm. Experiment results verified the theoretical conclusions.

  4. What is a good public participation process? Five perspectives from the public.

    PubMed

    Webler, T; Tuler, S; Krueger, R

    2001-03-01

    It is now widely accepted that members of the public should be involved in environmental decision-making. This has inspired many to search for principles that characterize good public participation processes. In this paper we report on a study that identifies discourses about what defines a good process. Our case study was a forest planning process in northern New England and New York. We employed Q methodology to learn how participants characterize a good process differently, by selecting, defining, and privileging different principles. Five discourses, or perspectives, about good process emerged from our study. One perspective emphasizes that a good process acquires and maintains popular legitimacy. A second sees a good process as one that facilitates an ideological discussion. A third focuses on the fairness of the process. A fourth perspective conceptualizes participatory processes as a power struggle--in this instance a power play between local land-owning interests and outsiders. A fifth perspective highlights the need for leadership and compromise. Dramatic differences among these views suggest an important challenge for those responsible for designing and carrying out public participation processes. Conflicts may emerge about process designs because people disagree about what is good in specific contexts.

  5. A learnable parallel processing architecture towards unity of memory and computing

    NASA Astrophysics Data System (ADS)

    Li, H.; Gao, B.; Chen, Z.; Zhao, Y.; Huang, P.; Ye, H.; Liu, L.; Liu, X.; Kang, J.

    2015-08-01

    Developing energy-efficient parallel information processing systems beyond von Neumann architecture is a long-standing goal of modern information technologies. The widely used von Neumann computer architecture separates memory and computing units, which leads to energy-hungry data movement when computers work. In order to meet the need of efficient information processing for the data-driven applications such as big data and Internet of Things, an energy-efficient processing architecture beyond von Neumann is critical for the information society. Here we show a non-von Neumann architecture built of resistive switching (RS) devices named “iMemComp”, where memory and logic are unified with single-type devices. Leveraging nonvolatile nature and structural parallelism of crossbar RS arrays, we have equipped “iMemComp” with capabilities of computing in parallel and learning user-defined logic functions for large-scale information processing tasks. Such architecture eliminates the energy-hungry data movement in von Neumann computers. Compared with contemporary silicon technology, adder circuits based on “iMemComp” can improve the speed by 76.8% and the power dissipation by 60.3%, together with a 700 times aggressive reduction in the circuit area.

  6. Electrophysiological evidence for parallel and serial processing during visual search.

    PubMed

    Luck, S J; Hillyard, S A

    1990-12-01

    Event-related potentials were recorded from young adults during a visual search task in order to evaluate parallel and serial models of visual processing in the context of Treisman's feature integration theory. Parallel and serial search strategies were produced by the use of feature-present and feature-absent targets, respectively. In the feature-absent condition, the slopes of the functions relating reaction time and latency of the P3 component to set size were essentially identical, indicating that the longer reaction times observed for larger set sizes can be accounted for solely by changes in stimulus identification and classification time, rather than changes in post-perceptual processing stages. In addition, the amplitude of the P3 wave on target-present trials in this condition increased with set size and was greater when the preceding trial contained a target, whereas P3 activity was minimal on target-absent trials. These effects are consistent with the serial self-terminating search model and appear to contradict parallel processing accounts of attention-demanding visual search performance, at least for a subset of search paradigms. Differences in ERP scalp distributions further suggested that different physiological processes are utilized for the detection of feature presence and absence.

  7. Identifying failure in a tree network of a parallel computer

    DOEpatents

    Archer, Charles J.; Pinnow, Kurt W.; Wallenfelt, Brian P.

    2010-08-24

    Methods, parallel computers, and products are provided for identifying failure in a tree network of a parallel computer. The parallel computer includes one or more processing sets including an I/O node and a plurality of compute nodes. For each processing set embodiments include selecting a set of test compute nodes, the test compute nodes being a subset of the compute nodes of the processing set; measuring the performance of the I/O node of the processing set; measuring the performance of the selected set of test compute nodes; calculating a current test value in dependence upon the measured performance of the I/O node of the processing set, the measured performance of the set of test compute nodes, and a predetermined value for I/O node performance; and comparing the current test value with a predetermined tree performance threshold. If the current test value is below the predetermined tree performance threshold, embodiments include selecting another set of test compute nodes. If the current test value is not below the predetermined tree performance threshold, embodiments include selecting from the test compute nodes one or more potential problem nodes and testing individually potential problem nodes and links to potential problem nodes.
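
    The narrowing procedure reads like the following loop. The formula for the current test value is not given in the abstract, so the simple scaled mean below is an assumed stand-in; only the control flow (test a subset, compare against the tree threshold, then single out potential problem nodes) follows the text.

    ```python
    def find_problem_nodes(io_perf, node_perfs, expected_io,
                           threshold, sample_size=8):
        """Flag subsets of compute nodes whose combined test value is not
        below the tree performance threshold, for individual testing."""
        suspects = []
        nodes = list(node_perfs.items())        # (name, measured perf)
        for start in range(0, len(nodes), sample_size):
            subset = nodes[start:start + sample_size]
            # Assumed test value: mean subset performance scaled by how
            # the I/O node compares with its predetermined value.
            mean_perf = sum(p for _, p in subset) / len(subset)
            test_value = mean_perf * (io_perf / expected_io)
            if test_value >= threshold:         # not below the threshold
                suspects.extend(name for name, _ in subset)
        return suspects   # next: test these nodes and their links singly
    ```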

  8. A learnable parallel processing architecture towards unity of memory and computing.

    PubMed

    Li, H; Gao, B; Chen, Z; Zhao, Y; Huang, P; Ye, H; Liu, L; Liu, X; Kang, J

    2015-08-14

    Developing energy-efficient parallel information processing systems beyond von Neumann architecture is a long-standing goal of modern information technologies. The widely used von Neumann computer architecture separates memory and computing units, which leads to energy-hungry data movement when computers work. In order to meet the need of efficient information processing for the data-driven applications such as big data and Internet of Things, an energy-efficient processing architecture beyond von Neumann is critical for the information society. Here we show a non-von Neumann architecture built of resistive switching (RS) devices named "iMemComp", where memory and logic are unified with single-type devices. Leveraging nonvolatile nature and structural parallelism of crossbar RS arrays, we have equipped "iMemComp" with capabilities of computing in parallel and learning user-defined logic functions for large-scale information processing tasks. Such architecture eliminates the energy-hungry data movement in von Neumann computers. Compared with contemporary silicon technology, adder circuits based on "iMemComp" can improve the speed by 76.8% and the power dissipation by 60.3%, together with a 700 times aggressive reduction in the circuit area.

  9. Rapid Parallel Semantic Processing of Numbers without Awareness

    ERIC Educational Resources Information Center

    Van Opstal, Filip; de Lange, Floris P.; Dehaene, Stanislas

    2011-01-01

    In this study, we investigate whether multiple digits can be processed at a semantic level without awareness, either serially or in parallel. In two experiments, we presented participants with two successive sets of four simultaneous Arabic digits. The first set was masked and served as a subliminal prime for the second, visible target set.…

  10. Scalable Parallel Algorithms for Multidimensional Digital Signal Processing

    DTIC Science & Technology

    1991-12-31


  11. A Neurally Plausible Parallel Distributed Processing Model of Event-Related Potential Word Reading Data

    ERIC Educational Resources Information Center

    Laszlo, Sarah; Plaut, David C.

    2012-01-01

    The Parallel Distributed Processing (PDP) framework has significant potential for producing models of cognitive tasks that approximate how the brain performs the same tasks. To date, however, there has been relatively little contact between PDP modeling and data from cognitive neuroscience. In an attempt to advance the relationship between…

  12. The Extended Parallel Process Model: Illuminating the Gaps in Research

    ERIC Educational Resources Information Center

    Popova, Lucy

    2012-01-01

    This article examines constructs, propositions, and assumptions of the extended parallel process model (EPPM). Review of the EPPM literature reveals that its theoretical concepts are thoroughly developed, but the theory lacks consistency in operational definitions of some of its constructs. Out of the 12 propositions of the EPPM, a few have not…

  13. Parallel Process and Isomorphism: A Model for Decision Making in the Supervisory Triad

    ERIC Educational Resources Information Center

    Koltz, Rebecca L.; Odegard, Melissa A.; Feit, Stephen S.; Provost, Kent; Smith, Travis

    2012-01-01

    Parallel process and isomorphism are two supervisory concepts that are often discussed independently but rarely discussed in connection with each other. These two concepts, philosophically, have different historical roots, as well as different implications for interventions with regard to the supervisory triad. The authors examine the difference…

  14. Parallel Distributed Processing at 25: Further Explorations in the Microstructure of Cognition

    ERIC Educational Resources Information Center

    Rogers, Timothy T.; McClelland, James L.

    2014-01-01

    This paper introduces a special issue of "Cognitive Science" initiated on the 25th anniversary of the publication of "Parallel Distributed Processing" (PDP), a two-volume work that introduced the use of neural network models as vehicles for understanding cognition. The collection surveys the core commitments of the PDP…

  15. Fast, Massively Parallel Data Processors

    NASA Technical Reports Server (NTRS)

    Heaton, Robert A.; Blevins, Donald W.; Davis, ED

    1994-01-01

    Proposed fast, massively parallel data processor contains 8x16 array of processing elements with efficient interconnection scheme and options for flexible local control. Processing elements communicate with each other on "X" interconnection grid and with external memory via high-capacity input/output bus. This approach to conditional operation nearly doubles speed of various arithmetic operations.

  16. Cache write generate for parallel image processing on shared memory architectures.

    PubMed

    Wittenbrink, C M; Somani, A K; Chen, C H

    1996-01-01

    We investigate cache write generate, our cache mode invention. We demonstrate that for parallel image processing applications, the new mode improves main memory bandwidth, CPU efficiency, cache hits, and cache latency. We use register level simulations validated by the UW-Proteus system. Many memory, cache, and processor configurations are evaluated.

  17. A parallel expert system for the control of a robotic air vehicle

    NASA Technical Reports Server (NTRS)

    Shakley, Donald; Lamont, Gary B.

    1988-01-01

    Expert systems can be used to govern the intelligent control of vehicles, for example the Robotic Air Vehicle (RAV). Due to the nature of the RAV system, the associated expert system needs to perform in a demanding real-time environment. The use of a parallel processing capability to support the associated expert system's computational requirement is critical in this application. Thus, algorithms for parallel real-time expert systems must be designed, analyzed, and synthesized. The design process incorporates a consideration of the rule-set/fact-set size along with representation issues. These issues are looked at in reference to information movement and various inference mechanisms. Also examined is the process involved with transporting the RAV expert system functions from the TI Explorer, where they are implemented in the Automated Reasoning Tool (ART), to the iPSC Hypercube, where the system is synthesized using Concurrent Common LISP (CCLISP). The transformation process for the ART to CCLISP conversion is described. The performance characteristics of the parallel implementation of these expert systems on the iPSC Hypercube are compared to the TI Explorer implementation.

  18. Two schemes for rapid generation of digital video holograms using PC cluster

    NASA Astrophysics Data System (ADS)

    Park, Hanhoon; Song, Joongseok; Kim, Changseob; Park, Jong-Il

    2017-12-01

    Computer-generated holography (CGH), which is a process of generating digital holograms, is computationally expensive. Recently, several methods/systems of parallelizing the process using graphic processing units (GPUs) have been proposed. Indeed, use of multiple GPUs or a personal computer (PC) cluster (each PC with GPUs) enabled great improvements in the process speed. However, extant literature has less often explored systems involving rapid generation of multiple digital holograms and specialized systems for rapid generation of a digital video hologram. This study proposes a system that uses a PC cluster and is able to more efficiently generate a video hologram. The proposed system is designed to simultaneously generate multiple frames and accelerate the generation by parallelizing the CGH computations across a number of frames, as opposed to separately generating each individual frame while parallelizing the CGH computations within each frame. The proposed system also enables the subprocesses for generating each frame to execute in parallel through multithreading. With these two schemes, the proposed system significantly reduced the data communication time for generating a digital hologram when compared with that of the state-of-the-art system.
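
    The two schemes can be mirrored structurally in Python: a process pool generates whole frames simultaneously (frame-level parallelism across the cluster), and threads inside each frame run the CGH subprocesses concurrently. The fringe computation below is a numeric placeholder, not a real CGH kernel, and the threads are only a structural stand-in, since the paper's within-frame parallelism runs on GPUs.

    ```python
    import multiprocessing as mp
    from concurrent.futures import ThreadPoolExecutor
    import numpy as np

    def cgh_subtask(args):
        frame, points = args
        # Placeholder fringe contribution for one chunk of object points.
        return np.cos(frame + points)

    def render_frame(frame, n_threads=4):
        # Scheme 2: subprocesses of one frame execute as parallel threads.
        chunks = np.array_split(np.arange(1024.0), n_threads)
        with ThreadPoolExecutor(n_threads) as ex:
            parts = list(ex.map(cgh_subtask, [(frame, c) for c in chunks]))
        return sum(parts)        # accumulate into the frame hologram

    if __name__ == "__main__":
        # Scheme 1: multiple frames are generated at the same time.
        with mp.Pool(4) as pool:
            holograms = pool.map(render_frame, range(16))
    ```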

  19. Bin-Hash Indexing: A Parallel Method for Fast Query Processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bethel, Edward W; Gosink, Luke J.; Wu, Kesheng

    2008-06-27

    This paper presents a new parallel indexing data structure for answering queries. The index, called Bin-Hash, offers extremely high levels of concurrency, and is therefore well-suited for emerging commodity parallel processors, such as multi-cores, cell processors, and general purpose graphics processing units (GPUs). The Bin-Hash approach first bins the base data, and then partitions and separately stores the values in each bin as a perfect spatial hash table. To answer a query, we first determine whether or not a record satisfies the query conditions based on the bin boundaries. For the bins with records that cannot be resolved, we examine the spatial hash tables. The procedures for examining the bin numbers and the spatial hash tables offer the maximum possible level of concurrency; all records are able to be evaluated by our procedure independently in parallel. Additionally, our Bin-Hash procedures access much smaller amounts of data than similar parallel methods, such as the projection index. This smaller data footprint is critical for certain parallel processors, like GPUs, where memory resources are limited. To demonstrate the effectiveness of Bin-Hash, we implement it on a GPU using the data-parallel programming language CUDA. The concurrency offered by the Bin-Hash index allows us to fully utilize the GPU's massive parallelism in our work; over 12,000 records can be simultaneously evaluated at any one time. We show that our new query processing method is an order of magnitude faster than current state-of-the-art CPU-based indexing technologies. Additionally, we compare our performance to existing GPU-based projection index strategies.
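
    A CPU-side sketch of the two-level lookup may help fix the idea: bin boundaries resolve most records outright, and only the boundary bin's exact values need examining. A Python dict stands in for the perfect spatial hash table, and the one-sided range query is an assumed example; the real index runs these per-record tests concurrently on the GPU.

    ```python
    import numpy as np

    def build_bins(data, n_bins=16):
        """Bin the base data, then store each bin's exact values keyed by
        record id (a dict stands in for a perfect spatial hash table)."""
        edges = np.linspace(data.min(), data.max(), n_bins + 1)
        bins = np.clip(np.digitize(data, edges) - 1, 0, n_bins - 1)
        tables = {b: dict(zip(np.flatnonzero(bins == b).tolist(),
                              data[bins == b].tolist()))
                  for b in range(n_bins)}
        return edges, bins, tables

    def query_less_than(v, edges, bins, tables):
        """Answer 'value < v': bins wholly below v qualify from the bin
        numbers alone; only the boundary bin's hash table is examined."""
        boundary = int(np.clip(np.digitize(v, edges) - 1,
                               0, len(edges) - 2))
        hits = set(np.flatnonzero(bins < boundary).tolist())
        hits |= {i for i, x in tables[boundary].items() if x < v}
        return hits
    ```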

  20. Topology-dependent density optima for efficient simultaneous network exploration

    NASA Astrophysics Data System (ADS)

    Wilson, Daniel B.; Baker, Ruth E.; Woodhouse, Francis G.

    2018-06-01

    A random search process in a networked environment is governed by the time it takes to visit every node, termed the cover time. Often, a networked process does not proceed in isolation but competes with many instances of itself within the same environment. A key unanswered question is how to optimize this process: How many concurrent searchers can a topology support before the benefits of parallelism are outweighed by competition for space? Here, we introduce the searcher-averaged parallel cover time (APCT) to quantify these economies of scale. We show that the APCT of the networked symmetric exclusion process is optimized at a searcher density that is well predicted by the spectral gap. Furthermore, we find that nonequilibrium processes, realized through the addition of bias, can support significantly increased density optima. Our results suggest alternative hybrid strategies of serial and parallel search for efficient information gathering in social interaction and biological transport networks.
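    The economies of scale can be illustrated with a toy simulation. The sketch below runs k mutually excluding random walkers on a ring and tallies k times the number of steps needed to visit every node, a simplified stand-in for the searcher-averaged parallel cover time; the ring topology, sequential update rule, and cost definition are all assumptions made for brevity.

      # Toy model: k excluding random walkers on a ring; effort = k * steps
      # until every node has been visited (an assumed stand-in for the APCT).
      import random

      def cover_effort(n_nodes, k, rng):
          occupied = set(range(k))            # one searcher per starting node
          visited = set(occupied)
          steps = 0
          while len(visited) < n_nodes:
              steps += 1
              for node in list(occupied):     # sequential random updates
                  target = (node + rng.choice((-1, 1))) % n_nodes
                  if target not in occupied:  # exclusion: blocked if occupied
                      occupied.remove(node)
                      occupied.add(target)
                      visited.add(target)
          return k * steps                    # total searcher-steps to cover

      rng = random.Random(0)
      for k in (1, 2, 4, 8, 16, 32):
          trials = [cover_effort(64, k, rng) for _ in range(50)]
          print(f"k={k:2d}  mean effort={sum(trials) / len(trials):8.1f}")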

  1. Implementation and performance of parallel Prolog interpreter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, S.; Kale, L.V.; Balkrishna, R.

    1988-01-01

    In this paper, the authors discuss the implementation of a parallel Prolog interpreter on different parallel machines. The implementation is based on the REDUCE-OR process model, which exploits both AND and OR parallelism in logic programs. It is machine independent, as it runs on top of the chare kernel, a machine-independent parallel programming system. The authors also give the performance of the interpreter running a diverse set of benchmark programs on parallel machines, including shared-memory systems (an Alliant FX/8, a Sequent, and a MultiMax) and a non-shared-memory system (an Intel iPSC/32 hypercube), in addition to its performance on a multiprocessor simulation system.

  2. A Human–Robot Interaction Perspective on Assistive and Rehabilitation Robotics

    PubMed Central

    Beckerle, Philipp; Salvietti, Gionata; Unal, Ramazan; Prattichizzo, Domenico; Rossi, Simone; Castellini, Claudio; Hirche, Sandra; Endo, Satoshi; Amor, Heni Ben; Ciocarlie, Matei; Mastrogiovanni, Fulvio; Argall, Brenna D.; Bianchi, Matteo

    2017-01-01

    Assistive and rehabilitation devices are a promising and challenging field of recent robotics research. Motivated by societal needs such as aging populations, such devices can support motor functionality and subject training. The design, control, sensing, and assessment of the devices become more sophisticated due to a human in the loop. This paper gives a human–robot interaction perspective on current issues and opportunities in the field. On the topic of control and machine learning, approaches that support but do not distract subjects are reviewed. Options to provide sensory user feedback that are currently missing from robotic devices are outlined. Parallels between device acceptance and affective computing are made. Furthermore, requirements for functional assessment protocols that relate to real-world tasks are discussed. In all topic areas, the design of human-oriented frameworks and methods is dominated by challenges related to the close interaction between the human and robotic device. This paper discusses the aforementioned aspects in order to open up new perspectives for future robotic solutions. PMID:28588473

  3. A Human-Robot Interaction Perspective on Assistive and Rehabilitation Robotics.

    PubMed

    Beckerle, Philipp; Salvietti, Gionata; Unal, Ramazan; Prattichizzo, Domenico; Rossi, Simone; Castellini, Claudio; Hirche, Sandra; Endo, Satoshi; Amor, Heni Ben; Ciocarlie, Matei; Mastrogiovanni, Fulvio; Argall, Brenna D; Bianchi, Matteo

    2017-01-01

    Assistive and rehabilitation devices are a promising and challenging field of recent robotics research. Motivated by societal needs such as aging populations, such devices can support motor functionality and subject training. The design, control, sensing, and assessment of the devices become more sophisticated due to a human in the loop. This paper gives a human-robot interaction perspective on current issues and opportunities in the field. On the topic of control and machine learning, approaches that support but do not distract subjects are reviewed. Options to provide sensory user feedback that are currently missing from robotic devices are outlined. Parallels between device acceptance and affective computing are made. Furthermore, requirements for functional assessment protocols that relate to real-world tasks are discussed. In all topic areas, the design of human-oriented frameworks and methods is dominated by challenges related to the close interaction between the human and robotic device. This paper discusses the aforementioned aspects in order to open up new perspectives for future robotic solutions.

  4. Provider Perspectives on Partnering With Parents of Hospitalized Children to Improve Safety.

    PubMed

    Rosenberg, Rebecca E; Williams, Emily; Ramchandani, Neesha; Rosenfeld, Peri; Silber, Beth; Schlucter, Juliette; Geraghty, Gail; Sullivan-Bolyai, Susan

    2018-06-01

    There is increasing emphasis on the importance of patient and family engagement for improving patient safety. Our purpose in this study was to understand health care team perspectives on parent-provider safety partnerships for hospitalized US children to complement a parallel study of parent perspectives. Our research team, including a family advisor, conducted semistructured interviews and focus groups of a purposive sample of 20 inpatient pediatric providers (nurses, patient care technicians, physicians) in an acute-care pediatric unit at a US urban tertiary hospital. We used a constant comparison technique and qualitative thematic content analysis. Themes emerged from providers on facilitators, barriers, and role negotiation and/or balancing interpersonal interactions in parent-provider safety partnership. Facilitators included the following: (1) mutual respect of roles, (2) parent advocacy and rule-following, and (3) provider quality care, empathetic adaptability, and transparent communication of expectations. Barriers included the following: (1) lack of respect, (2) differences in parent versus provider risk perception and parent lack of availability, and (3) provider medical errors and inconsistent communication, lack of engagement skills and time, and fear of overwhelming information. Providers described themes related to balancing parent advocacy with clinician's expertise, a provider's personal response to challenges to the professional role, and parents balancing relationship building with escalating safety concerns. To keep children safe in the hospital, providers balance perceived challenges to their personal and professional roles continuously in interpersonal interactions, paralleling parent concerns about role ambiguity and trust. Understanding these shared barriers to and facilitators of parent-provider safety partnerships can inform system design, parent education, and professional training. Copyright © 2018 by the American Academy of Pediatrics.

  5. Real-time SHVC software decoding with multi-threaded parallel processing

    NASA Astrophysics Data System (ADS)

    Gudumasu, Srinivas; He, Yuwen; Ye, Yan; He, Yong; Ryu, Eun-Seok; Dong, Jie; Xiu, Xiaoyu

    2014-09-01

    This paper proposes a parallel decoding framework for scalable HEVC (SHVC). Various optimization technologies are implemented on the basis of the SHVC reference software SHM-2.0 to achieve real-time decoding speed for the two-layer spatial scalability configuration. SHVC decoder complexity is analyzed with profiling information. The decoding process at each layer and the up-sampling process are designed in parallel and scheduled by a high-level application task manager. Within each layer, multi-threaded decoding is applied to accelerate the layer decoding speed. Entropy decoding, reconstruction, and in-loop processing are pipeline designed with multiple threads based on groups of coding tree units (CTUs). A group of CTUs is treated as a processing unit in each pipeline stage to achieve a better trade-off between parallelism and synchronization. Motion compensation, inverse quantization, and inverse transform modules are further optimized with SSE4 SIMD instructions. Simulations on a desktop with an Intel Core i7-2600 processor running at 3.4 GHz show that the parallel SHVC software decoder is able to decode 1080p spatial 2x at up to 60 fps (frames per second) and 1080p spatial 1.5x at up to 50 fps for bitstreams generated with SHVC common test conditions in the JCT-VC standardization group. The decoding performance at various bitrates, with different optimization technologies and different numbers of threads, is compared in terms of decoding speed and resource usage, including processor and memory.
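    The CTU-group pipeline can be sketched with threads and bounded queues: each stage consumes a group, processes it, and hands it downstream, so one group can be in-loop filtered while the next is still being reconstructed. The stage bodies below are placeholders, not actual SHVC decoding.

      # Sketch of pipelined decoding over CTU groups: three stages run in
      # their own threads and hand groups along via bounded queues.
      import queue
      import threading

      def stage(fn, inbox, outbox):
          while True:
              item = inbox.get()
              if item is None:            # poison pill: shut the stage down
                  if outbox is not None:
                      outbox.put(None)
                  return
              result = fn(item)           # placeholder for real stage work
              if outbox is not None:
                  outbox.put(result)

      q1, q2, q3 = (queue.Queue(maxsize=4) for _ in range(3))
      threads = [
          threading.Thread(target=stage, args=(lambda g: g, q1, q2)),  # entropy
          threading.Thread(target=stage, args=(lambda g: g, q2, q3)),  # recon
          threading.Thread(target=stage, args=(print, q3, None)),      # filter
      ]
      for t in threads:
          t.start()
      for ctu_group in range(8):          # feed 8 CTU groups into the pipe
          q1.put(ctu_group)
      q1.put(None)
      for t in threads:
          t.join()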

  6. Analysis of impact of general-purpose graphics processor units in supersonic flow modeling

    NASA Astrophysics Data System (ADS)

    Emelyanov, V. N.; Karpenko, A. G.; Kozelkov, A. S.; Teterina, I. V.; Volkov, K. N.; Yalozo, A. V.

    2017-06-01

    Computational methods are widely used in the prediction of complex flowfields associated with off-normal situations in aerospace engineering. Modern graphics processing units (GPUs) provide architectures and new programming models that make it possible to harness their large processing power and to design computational fluid dynamics (CFD) simulations at both high performance and low cost. Possibilities for the use of GPUs in the simulation of external and internal flows on unstructured meshes are discussed. The finite volume method is applied to solve three-dimensional unsteady compressible Euler and Navier-Stokes equations on unstructured meshes with high-resolution numerical schemes. CUDA technology is used for the programming implementation of parallel computational algorithms. Solutions of some benchmark test cases on GPUs are reported, and the computed results are compared with experimental and computational data. Approaches to optimization of the CFD code related to the use of different types of memory are considered. The speedup of the GPU solution with respect to the solution on a central processing unit (CPU) is compared. Performance measurements show that the numerical schemes developed achieve a 20-50x speedup on GPU hardware compared to the CPU reference implementation. The results obtained provide a promising perspective for designing a GPU-based software framework for applications in CFD.

  7. Progress and challenges in coupled hydrodynamic-ecological estuarine modeling.

    PubMed

    Ganju, Neil K; Brush, Mark J; Rashleigh, Brenda; Aretxabaleta, Alfredo L; Del Barrio, Pilar; Grear, Jason S; Harris, Lora A; Lake, Samuel J; McCardell, Grant; O'Donnell, James; Ralston, David K; Signell, Richard P; Testa, Jeremy M; Vaudrey, Jamie M P

    2016-03-01

    Numerical modeling has emerged over the last several decades as a widely accepted tool for investigations in environmental sciences. In estuarine research, hydrodynamic and ecological models have moved along parallel tracks with regard to complexity, refinement, computational power, and incorporation of uncertainty. Coupled hydrodynamic-ecological models have been used to assess ecosystem processes and interactions, simulate future scenarios, and evaluate remedial actions in response to eutrophication, habitat loss, and freshwater diversion. The need to couple hydrodynamic and ecological models to address research and management questions is clear, because dynamic feedbacks between biotic and physical processes are critical interactions within ecosystems. In this review we present historical and modern perspectives on estuarine hydrodynamic and ecological modeling, consider model limitations, and address aspects of model linkage, skill assessment, and complexity. We discuss the balance between spatial and temporal resolution and present examples using different spatiotemporal scales. Finally, we recommend future lines of inquiry, approaches to balance complexity and uncertainty, and model transparency and utility. It is idealistic to think we can pursue a "theory of everything" for estuarine models, but recent advances suggest that models for both scientific investigations and management applications will continue to improve in terms of realism, precision, and accuracy.

  8. Progress and challenges in coupled hydrodynamic-ecological estuarine modeling

    USGS Publications Warehouse

    Ganju, Neil K.; Brush, Mark J.; Rashleigh, Brenda; Aretxabaleta, Alfredo L.; del Barrio, Pilar; Grear, Jason S.; Harris, Lora A.; Lake, Samuel J.; McCardell, Grant; O'Donnell, James; Ralston, David K.; Signell, Richard P.; Testa, Jeremy; Vaudrey, Jamie M. P.

    2016-01-01

    Numerical modeling has emerged over the last several decades as a widely accepted tool for investigations in environmental sciences. In estuarine research, hydrodynamic and ecological models have moved along parallel tracks with regard to complexity, refinement, computational power, and incorporation of uncertainty. Coupled hydrodynamic-ecological models have been used to assess ecosystem processes and interactions, simulate future scenarios, and evaluate remedial actions in response to eutrophication, habitat loss, and freshwater diversion. The need to couple hydrodynamic and ecological models to address research and management questions is clear because dynamic feedbacks between biotic and physical processes are critical interactions within ecosystems. In this review, we present historical and modern perspectives on estuarine hydrodynamic and ecological modeling, consider model limitations, and address aspects of model linkage, skill assessment, and complexity. We discuss the balance between spatial and temporal resolution and present examples using different spatiotemporal scales. Finally, we recommend future lines of inquiry, approaches to balance complexity and uncertainty, and model transparency and utility. It is idealistic to think we can pursue a “theory of everything” for estuarine models, but recent advances suggest that models for both scientific investigations and management applications will continue to improve in terms of realism, precision, and accuracy.

  9. Progress and challenges in coupled hydrodynamic-ecological estuarine modeling

    PubMed Central

    Ganju, Neil K.; Brush, Mark J.; Rashleigh, Brenda; Aretxabaleta, Alfredo L.; del Barrio, Pilar; Grear, Jason S.; Harris, Lora A.; Lake, Samuel J.; McCardell, Grant; O’Donnell, James; Ralston, David K.; Signell, Richard P.; Testa, Jeremy M.; Vaudrey, Jamie M.P.

    2016-01-01

    Numerical modeling has emerged over the last several decades as a widely accepted tool for investigations in environmental sciences. In estuarine research, hydrodynamic and ecological models have moved along parallel tracks with regard to complexity, refinement, computational power, and incorporation of uncertainty. Coupled hydrodynamic-ecological models have been used to assess ecosystem processes and interactions, simulate future scenarios, and evaluate remedial actions in response to eutrophication, habitat loss, and freshwater diversion. The need to couple hydrodynamic and ecological models to address research and management questions is clear, because dynamic feedbacks between biotic and physical processes are critical interactions within ecosystems. In this review we present historical and modern perspectives on estuarine hydrodynamic and ecological modeling, consider model limitations, and address aspects of model linkage, skill assessment, and complexity. We discuss the balance between spatial and temporal resolution and present examples using different spatiotemporal scales. Finally, we recommend future lines of inquiry, approaches to balance complexity and uncertainty, and model transparency and utility. It is idealistic to think we can pursue a “theory of everything” for estuarine models, but recent advances suggest that models for both scientific investigations and management applications will continue to improve in terms of realism, precision, and accuracy. PMID:27721675

  10. The melding of drug markets in Houston after Katrina: dealer and user perspectives.

    PubMed

    Kotarba, Joseph A; Fackler, Jennifer; Johnson, Bruce D; Dunlap, Eloise

    2010-07-01

    In the aftermath of Hurricane Katrina, the majority of routine activities in New Orleans were disrupted, including the illegal drug market. The large-scale relocation of New Orleans evacuees (NOEs), including many illegal drug users and sellers, to host cities led to a need for new sources of illegal drugs. This need was quickly satisfied by two initially distinct drug markets: (1) drug dealers from New Orleans who were themselves evacuees and (2) established drug dealers in the host cities. As was to be expected, the two markets did not operate indefinitely in parallel fashion. This paper describes the evolving, operational relationship between these two drug markets over time, with a focus on Houston. We analyze the reciprocal evolution of these two markets at two significant points in time: at the beginning of the relocation (2005) and two years later (2007). The overall trend is towards a melding of the two drug markets, as evidenced primarily by decreases in drug-related violence and the cross-fertilization of drug tastes. We describe the process by which the two drug markets melded over time, in order to seek a better understanding of the social processes by which drug markets in general evolve.

  11. The Melding of Drug Markets in Houston After Katrina: Dealer and User Perspectives

    PubMed Central

    Kotarba, Joseph A.; Fackler, Jennifer; Johnson, Bruce D.; Dunlap, Eloise

    2013-01-01

    In the aftermath of Hurricane Katrina, the majority of routine activities in New Orleans were disrupted, including the illegal drug market. The large-scale relocation of New Orleans evacuees (NOEs), including many illegal drug users and sellers, to host cities led to a need for new sources of illegal drugs. This need was quickly satisfied by two initially distinct drug markets: (1) drug dealers from New Orleans who were themselves evacuees and (2) established drug dealers in the host cities. As was to be expected, the two markets did not operate indefinitely in parallel fashion. This paper describes the evolving, operational relationship between these two drug markets over time, with a focus on Houston. We analyze the reciprocal evolution of these two markets at two significant points in time: at the beginning of the relocation (2005) and two years later (2007). The overall trend is towards a melding of the two drug markets, as evidenced primarily by decreases in drug-related violence and the cross-fertilization of drug tastes. We describe the process by which the two drug markets melded over time, in order to seek a better understanding of the social processes by which drug markets in general evolve. PMID:20509741

  12. Historical perspectives of autonomy within the medical profession: considerations for 21st century physical therapy practice.

    PubMed

    Johnson, Michael P; Abrams, Sandra L

    2005-10-01

    As a part of the American Physical Therapy Association's (APTA) vision statement, by the year 2020, physical therapists "will hold all privileges of autonomous practice." This vision statement and the ideals held within it are elemental to the direction of our continued growth as a profession. Many members and nonmembers, however, appear confused and perhaps even intimidated by the concept of autonomous practice. This paper reviews and discusses the processes used by other health care professions to gain autonomy within the US health care system, in particular the processes used by physicians, which were extremely effective and have been used as a template by many other health professions, including physical therapy. Further discussion focuses on the physical therapy profession, emphasizing the parallels with medicine and considering many issues relevant to the goal of autonomous practice. By understanding the past and considering the present, readers will develop an appreciation of (1) the foundation for autonomous practice in health care, (2) the vision of the APTA and why the profession is well positioned to achieve this vision, and (3) the factors we need to consider to hold (and maintain) all privileges of autonomous practice.

  13. Immuno-Oncology-The Translational Runway for Gene Therapy: Gene Therapeutics to Address Multiple Immune Targets.

    PubMed

    Weß, Ludger; Schnieders, Frank

    2017-12-01

    Cancer therapy is once again experiencing a paradigm shift. This shift is based on extensive clinical experience demonstrating that cancer cannot be successfully fought by addressing only single targets or pathways. Even the combination of several neo-antigens in cancer vaccines is not sufficient for successful, lasting tumor eradication. The focus has therefore shifted to the immune system's role in cancer and the striking abilities of cancer cells to manipulate and/or deactivate the immune system. Researchers and pharma companies have started to target the processes and cells known to support immune surveillance and the elimination of tumor cells. Immune processes, however, require novel concepts beyond the traditional "single-target-single drug" paradigm and need parallel targeting of diverse cells and mechanisms. This review gives a perspective on the role of gene therapy technologies in the evolving immuno-oncology space and identifies gene therapy as a major driver in the development and regulation of effective cancer immunotherapy. Present challenges and breakthroughs ranging from chimeric antigen receptor T-cell therapy, gene-modified oncolytic viruses, combination cancer vaccines, to RNA therapeutics are spotlighted. Gene therapy is recognized as the most prominent technology enabling effective immuno-oncology strategies.

  14. Digital intermediate frequency QAM modulator using parallel processing

    DOEpatents

    Pao, Hsueh-Yuan [Livermore, CA; Tran, Binh-Nien [San Ramon, CA

    2008-05-27

    The digital Intermediate Frequency (IF) modulator applies to various modulation types and offers a simple, low-cost method to implement a high-speed digital IF modulator using field-programmable gate arrays (FPGAs). The architecture eliminates multipliers and sequential processing by storing the pre-computed modulated cosine and sine carriers in ROM look-up tables (LUTs). The high-speed input data stream is processed in parallel using the corresponding LUTs, which reduces the required main processing speed, allowing the use of low-cost FPGAs.
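    A small numeric sketch of the LUT principle: every (symbol, carrier-phase) sample of I*cos(wn) - Q*sin(wn) is computed once up front, after which modulation is pure lookup and concatenation, with no runtime multipliers. The 16-QAM constellation and four samples per symbol are illustrative choices, not the patent's parameters.

      # Sketch of the LUT principle: lut[symbol, n] = I*cos(wn) - Q*sin(wn)
      # is computed once, so runtime modulation is lookup only.
      import numpy as np

      SPS = 4                               # IF carrier samples per symbol
      levels = np.array([-3, -1, 1, 3])
      const = np.array([complex(i, q) for i in levels for q in levels])
      phase = 2 * np.pi * np.arange(SPS) / SPS      # one carrier period
      lut = (const[:, None] * np.exp(1j * phase[None, :])).real

      def modulate(symbols):
          """Map a stream of 16-QAM symbol indices to IF samples by lookup."""
          return lut[symbols].ravel()       # rows concatenate in time order

      rng = np.random.default_rng(0)
      print(modulate(rng.integers(0, 16, size=8)))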

  15. The Development of Reading and Spelling in Arabic Orthography: Two Parallel Processes?

    ERIC Educational Resources Information Center

    Taha, Haitham

    2016-01-01

    The parallels between reading and spelling skills in Arabic were tested. One hundred forty-three native Arab students with typical reading development, from second, fourth, and sixth grades, were tested with reading, spelling, and orthographic decision tasks. The results indicated a full parallel between the reading and spelling performances within…

  16. Design of a massively parallel computer using bit serial processing elements

    NASA Technical Reports Server (NTRS)

    Aburdene, Maurice F.; Khouri, Kamal S.; Piatt, Jason E.; Zheng, Jianqing

    1995-01-01

    A 1-bit serial processor designed for a parallel computer architecture is described. This processor is used to develop a massively parallel computational engine, with a single instruction-multiple data (SIMD) architecture. The computer is simulated and tested to verify its operation and to measure its performance for further development.
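    The bit-serial idea can be illustrated in a few lines: each processing element contributes one bit of each operand per step, so an 8-bit add costs eight serial steps, but thousands of elements advance in lockstep. In the sketch below, numpy boolean vectors play the role of bit-planes across many processing elements; this is a behavioral illustration, not the described hardware.

      # Behavioral sketch of bit-serial SIMD addition: boolean vectors are
      # bit-planes; one full-adder step per bit position, all PEs in lockstep.
      import numpy as np

      def bit_serial_add(a_bits, b_bits):
          """Add two lists of bit-planes (LSB first), one plane per 'clock'."""
          carry = np.zeros_like(a_bits[0])
          out = []
          for a, b in zip(a_bits, b_bits):   # one serial step per bit
              out.append(a ^ b ^ carry)      # full-adder sum bit
              carry = (a & b) | (carry & (a ^ b))
          out.append(carry)
          return out

      def to_planes(x, nbits=8):
          return [(x >> i) & 1 == 1 for i in range(nbits)]

      def from_planes(planes):
          return sum(p.astype(int) << i for i, p in enumerate(planes))

      rng = np.random.default_rng(0)
      x, y = rng.integers(0, 128, 10_000), rng.integers(0, 128, 10_000)
      assert np.all(from_planes(bit_serial_add(to_planes(x), to_planes(y)))
                    == x + y)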

  17. A novel milliliter-scale chemostat system for parallel cultivation of microorganisms in stirred-tank bioreactors.

    PubMed

    Schmideder, Andreas; Severin, Timm Steffen; Cremer, Johannes Heinrich; Weuster-Botz, Dirk

    2015-09-20

    A pH-controlled parallel stirred-tank bioreactor system was modified for parallel continuous cultivation on a 10 mL scale by connecting multichannel peristaltic pumps for feeding and medium removal with micro-pipes (250 μm inner diameter). Parallel chemostat processes with Escherichia coli as an example showed high reproducibility with regard to culture volume and flow rates as well as dry cell weight, dissolved oxygen concentration, and pH control at steady states (n=8, coefficient of variation <5%). Reliable estimation of the kinetic growth parameters of E. coli was easily achieved within one parallel experiment by preselecting ten different steady states. Scalability of milliliter-scale steady-state results was demonstrated by chemostat studies with a stirred-tank bioreactor on a liter scale. Thus, parallel and continuously operated stirred-tank bioreactors on a milliliter scale facilitate time-saving and cost-reducing steady-state studies with microorganisms. The applied continuous bioreactor system overcomes the drawbacks of existing miniaturized bioreactors, like poor mass transfer and insufficient process control. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. Parallel algorithms for mapping pipelined and parallel computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1988-01-01

    Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm^3) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm^2) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.
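    For intuition about the mapping problem, a related textbook formulation (not the paper's algorithm) assigns m pipelined modules to n processors as contiguous groups so as to minimize the bottleneck load, via a parametric binary search over the bottleneck value with a greedy feasibility check:

      # Textbook variant: split module costs into <= n contiguous groups,
      # minimizing the heaviest group's load, via binary search + greedy.
      def feasible(weights, n_procs, cap):
          groups, load = 1, 0
          for w in weights:
              if w > cap:
                  return False
              if load + w > cap:            # start the next processor's group
                  groups, load = groups + 1, w
              else:
                  load += w
          return groups <= n_procs

      def min_bottleneck(weights, n_procs):
          lo, hi = max(weights), sum(weights)
          while lo < hi:                    # binary search on bottleneck load
              mid = (lo + hi) // 2
              if feasible(weights, n_procs, mid):
                  hi = mid
              else:
                  lo = mid + 1
          return lo

      module_costs = [4, 7, 2, 9, 3, 8, 1, 6]    # hypothetical per-module work
      print(min_bottleneck(module_costs, 3))     # best worst-case load: 15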

  19. cljam: a library for handling DNA sequence alignment/map (SAM) with parallel processing.

    PubMed

    Takeuchi, Toshiki; Yamada, Atsuo; Aoki, Takashi; Nishimura, Kunihiro

    2016-01-01

    Next-generation sequencing can determine DNA bases, and the results of sequence alignments are generally stored in files in the Sequence Alignment/Map (SAM) format or its compressed binary version (BAM). SAMtools is a typical tool for dealing with files in the SAM/BAM format. SAMtools has various functions, including detection of variants, visualization of alignments, indexing, extraction of parts of the data and loci, and conversion of file formats. It is written in C and executes quickly. However, SAMtools requires an additional implementation to be used in parallel with, for example, OpenMP (Open Multi-Processing) libraries. For the accumulation of next-generation sequencing data, a simple parallelization program that can support cloud and PC cluster environments is required. We have developed cljam using the Clojure programming language, which simplifies parallel programming, to handle SAM/BAM data. Cljam can run in a Java runtime environment (e.g., Windows, Linux, Mac OS X) with Clojure. Cljam can process and analyze SAM/BAM files in parallel and at high speed. The execution time with cljam is almost the same as with SAMtools. The cljam code is written in Clojure and has fewer lines than other similar tools.
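    The general pattern, splitting alignment records into chunks and fanning them out to workers, can be sketched in Python (this illustrates the approach, not cljam's Clojure API). The only SAM fact relied on is that the second tab-separated column is the FLAG field, whose 0x4 bit marks an unmapped read; the input path is hypothetical.

      # Pattern sketch: chunked, process-parallel scan of SAM alignment
      # lines; counts mapped reads (FLAG bit 0x4 set means unmapped).
      from multiprocessing import Pool

      def mapped_in_chunk(lines):
          return sum(1 for ln in lines
                     if ln and not ln.startswith("@")        # skip headers
                     and not int(ln.split("\t")[1]) & 0x4)   # keep mapped

      def count_mapped(path, chunk_size=100_000, workers=4):
          with open(path) as f:
              lines = f.read().splitlines()
          chunks = [lines[i:i + chunk_size]
                    for i in range(0, len(lines), chunk_size)]
          with Pool(workers) as pool:
              return sum(pool.map(mapped_in_chunk, chunks))

      if __name__ == "__main__":
          print(count_mapped("example.sam"))    # hypothetical input file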

  20. Parallel processing of real-time dynamic systems simulation on OSCAR (Optimally SCheduled Advanced multiprocessoR)

    NASA Technical Reports Server (NTRS)

    Kasahara, Hironori; Honda, Hiroki; Narita, Seinosuke

    1989-01-01

    Parallel processing of real-time dynamic systems simulation on a multiprocessor system named OSCAR is presented. In the simulation of dynamic systems, generally, the same calculations are repeated at every time step. However, Do-all or Do-across techniques cannot be applied to parallelize the simulation, since data dependencies exist from the end of an iteration to the beginning of the next iteration, and furthermore data input and data output are required every sampling time period. Therefore, parallelism inside the calculation required for a single time step, or a large basic block consisting of arithmetic assignment statements, must be used. In the proposed method, near-fine-grain tasks, each of which consists of one or more floating point operations, are generated to extract the parallelism from the calculation and assigned to processors by using optimal static scheduling at compile time, in order to reduce the large run-time overhead caused by the use of near-fine-grain tasks. The practicality of the scheme is demonstrated on OSCAR (Optimally SCheduled Advanced multiprocessoR), which has been developed to extract the advantageous features of static scheduling algorithms to the maximum extent.
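    A compile-time scheduler in this spirit can be sketched as list scheduling over a task graph with known costs: each ready task is placed on the processor that can start it earliest. The task graph, costs, and FIFO priority below are illustrative simplifications of OSCAR's optimal static scheduling:

      # Sketch of static (compile-time) list scheduling of fine-grain tasks
      # with known costs and dependencies onto a fixed set of processors.
      def static_schedule(tasks, deps, cost, n_procs):
          finish = {}                           # task -> completion time
          proc_free = [0.0] * n_procs           # next free time per processor
          schedule = []
          ready = [t for t in tasks if not deps[t]]
          while ready:
              t = ready.pop(0)                  # FIFO stands in for a priority
              earliest = max((finish[d] for d in deps[t]), default=0.0)
              p = min(range(n_procs), key=lambda i: max(proc_free[i], earliest))
              start = max(proc_free[p], earliest)
              finish[t] = start + cost[t]
              proc_free[p] = finish[t]
              schedule.append((t, p, start))
              ready += [u for u in tasks
                        if u not in finish and u not in ready
                        and all(d in finish for d in deps[u])]
          return schedule

      tasks = ["a", "b", "c", "d", "e"]         # made-up task graph
      deps = {"a": [], "b": [], "c": ["a"], "d": ["a", "b"], "e": ["c", "d"]}
      cost = {"a": 2, "b": 3, "c": 1, "d": 2, "e": 1}
      for t, p, s in static_schedule(tasks, deps, cost, 2):
          print(f"task {t} on P{p} at t={s}")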

  1. A Review of High-Performance Computational Strategies for Modeling and Imaging of Electromagnetic Induction Data

    NASA Astrophysics Data System (ADS)

    Newman, Gregory A.

    2014-01-01

    Many geoscientific applications exploit electrostatic and electromagnetic fields to interrogate and map subsurface electrical resistivity—an important geophysical attribute for characterizing mineral, energy, and water resources. In complex three-dimensional geologies, where many of these resources remain to be found, resistivity mapping requires large-scale modeling and imaging capabilities, as well as the ability to treat significant data volumes, which can easily overwhelm single-core and modest multicore computing hardware. To treat such problems requires large-scale parallel computational resources, necessary for reducing the time to solution to a time frame acceptable to the exploration process. The recognition that significant parallel computing processes must be brought to bear on these problems gives rise to choices that must be made in parallel computing hardware and software. In this review, some of these choices are presented, along with the resulting trade-offs. We also discuss future trends in high-performance computing and the anticipated impact on electromagnetic (EM) geophysics. Topics discussed in this review article include a survey of parallel computing platforms, from graphics processing units to multicore CPUs with fast interconnects, along with effective parallel solvers and associated solver libraries for inductive EM modeling and imaging.

  2. NETRA: A parallel architecture for integrated vision systems. 1: Architecture and organization

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok N.; Patel, Janak H.; Ahuja, Narendra

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is considered to be a system that uses vision algorithms from all levels of processing for a high level application (such as object recognition). A model of computation is presented for parallel processing for an IVS. Using the model, desired features and capabilities of a parallel architecture suitable for IVSs are derived. Then a multiprocessor architecture (called NETRA) is presented. This architecture is highly flexible without the use of complex interconnection schemes. The topology of NETRA is recursively defined and hence is easily scalable from small to large systems. Homogeneity of NETRA permits fault tolerance and graceful degradation under faults. It is a recursively defined tree-type hierarchical architecture where each of the leaf nodes consists of a cluster of processors connected with a programmable crossbar with selective broadcast capability to provide for desired flexibility. A qualitative evaluation of NETRA is presented. Then general schemes are described to map parallel algorithms onto NETRA. Algorithms are classified according to their communication requirements for parallel processing. An extensive analysis of inter-cluster communication strategies in NETRA is presented, and parameters affecting performance of parallel algorithms when mapped on NETRA are discussed. Finally, a methodology to evaluate performance of algorithms on NETRA is described.

  3. A Robust and Scalable Software Library for Parallel Adaptive Refinement on Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Lou, John Z.; Norton, Charles D.; Cwik, Thomas A.

    1999-01-01

    The design and implementation of Pyramid, a software library for performing parallel adaptive mesh refinement (PAMR) on unstructured meshes, is described. This software library can be easily used in a variety of unstructured parallel computational applications, including parallel finite element, parallel finite volume, and parallel visualization applications using triangular or tetrahedral meshes. The library contains a suite of well-designed and efficiently implemented modules that perform operations in a typical PAMR process. Among these are mesh quality control during successive parallel adaptive refinement (typically guided by a local-error estimator), parallel load-balancing, and parallel mesh partitioning using the ParMeTiS partitioner. The Pyramid library is implemented in Fortran 90 with an interface to the Message-Passing Interface (MPI) library, supporting code efficiency, modularity, and portability. An EM waveguide filter application, adaptively refined using the Pyramid library, is illustrated.

  4. SIAM Conference on Parallel Processing for Scientific Computing, 4th, Chicago, IL, Dec. 11-13, 1989, Proceedings

    NASA Technical Reports Server (NTRS)

    Dongarra, Jack (Editor); Messina, Paul (Editor); Sorensen, Danny C. (Editor); Voigt, Robert G. (Editor)

    1990-01-01

    Attention is given to such topics as an evaluation of block algorithm variants in LAPACK, a large-grain parallel sparse system solver, a multiprocessor method for the solution of the generalized eigenvalue problem on an interval, and a parallel QR algorithm for iterative subspace methods on the CM2. A discussion of numerical methods includes the topics of asynchronous numerical solutions of PDEs on parallel computers, parallel homotopy curve tracking on a hypercube, and solving Navier-Stokes equations on the Cedar Multi-Cluster system. A section on differential equations includes a discussion of a six-color procedure for the parallel solution of elliptic systems using the finite quadtree structure, data-parallel algorithms for the finite element method, and domain decomposition methods in aerodynamics. Topics dealing with massively parallel computing include hypercube vs. 2-dimensional meshes and massively parallel computation of conservation laws. Performance and tools are also discussed.

  5. Therapeutic approach to Crohn disease: possible parallels with hidradenitis suppurativa.

    PubMed

    González Lama, Y; Marín-Jiménez, I

    2016-09-01

    The current controversy in the setting of dermatology surrounding the treatment of inflammatory diseases, and specifically hidradenitis suppurativa, bears strong similarities with the debate concerning inflammatory bowel disease (IBD) that took place several years ago. That debate led to new perspectives on this disease and, in particular, its treatment after the development of biological agents. Copyright © 2016 Elsevier España, S.L.U. y AEDV. All rights reserved.

  6. Perspectives in AE--"A Feminist Is a Feminist": The Continued Activism of Dr. Juanita Johnson-Bailey

    ERIC Educational Resources Information Center

    Johnson, Brenda W.

    2014-01-01

    After 20 years I was back in school faced with an assignment to research an adult education scholar, leader, or practitioner. After a quick review of the list we were provided I was drawn to Dr. Johnson-Bailey as the focus of my paper primarily because our lives paralleled in so many ways. She was on staff at my Alma Mater, The University of…

  7. Parallel perceptual enhancement and hierarchic relevance evaluation in an audio-visual conjunction task.

    PubMed

    Potts, Geoffrey F; Wood, Susan M; Kothmann, Delia; Martin, Laura E

    2008-10-21

    Attention directs limited-capacity information processing resources to a subset of available perceptual representations. The mechanisms by which attention selects task-relevant representations for preferential processing are not fully known. Treisman and Gelade's [Treisman, A., Gelade, G., 1980. A feature integration theory of attention. Cognit. Psychol. 12, 97-136.] influential attention model posits that simple features are processed preattentively, in parallel, but that attention is required to serially conjoin multiple features into an object representation. Event-related potentials have provided evidence for this model, showing parallel processing of perceptual features in the posterior Selection Negativity (SN) and serial, hierarchic processing of feature conjunctions in the Frontal Selection Positivity (FSP). Most prior studies have been done on conjunctions within one sensory modality, while many real-world objects have multimodal features. It is not known whether the same neural systems of posterior parallel processing of simple features and frontal serial processing of feature conjunctions seen within a sensory modality also operate on conjunctions between modalities. The current study used ERPs and simultaneously presented auditory and visual stimuli in three task conditions: Attend Auditory (auditory feature determines the target, visual features are irrelevant), Attend Visual (visual features relevant, auditory irrelevant), and Attend Conjunction (target defined by the co-occurrence of an auditory and a visual feature). In the Attend Conjunction condition, when the auditory but not the visual feature was a target there was an SN over auditory cortex; when the visual but not the auditory stimulus was a target there was an SN over visual cortex; and when both auditory and visual stimuli were targets (i.e., conjunction target) there were SNs over both auditory and visual cortex, indicating parallel processing of the simple features within each modality. In contrast, an FSP was present when either the visual only or both auditory and visual features were targets, but not when only the auditory stimulus was a target, indicating that the conjunction target determination was evaluated serially and hierarchically, with visual information taking precedence. This indicates that the detection of a target defined by audio-visual conjunction is achieved via the same mechanism as within a single perceptual modality: through separate, parallel processing of the auditory and visual features and serial processing of the feature conjunction elements, rather than by evaluation of a fused multimodal percept.

  8. Deliberative Mapping of options for tackling climate change: Citizens and specialists ‘open up’ appraisal of geoengineering

    PubMed Central

    Bellamy, Rob; Chilvers, Jason; Vaughan, Naomi E.

    2014-01-01

    Appraisals of deliberate, large-scale interventions in the earth’s climate system, known collectively as ‘geoengineering’, have largely taken the form of narrowly framed and exclusive expert analyses that prematurely ‘close down’ upon particular proposals. Here, we present the findings from the first ‘upstream’ appraisal of geoengineering to deliberately ‘open up’ to a broader diversity of framings, knowledges and future pathways. We report on the citizen strand of an innovative analytic–deliberative participatory appraisal process called Deliberative Mapping. A select but diverse group of sociodemographically representative citizens from Norfolk (United Kingdom) were engaged in a deliberative multi-criteria appraisal of geoengineering proposals relative to other options for tackling climate change, in parallel to symmetrical appraisals by diverse experts and stakeholders. Despite seeking to map divergent perspectives, a remarkably consistent view of option performance emerged across both the citizens’ and the specialists’ deliberations, where geoengineering proposals were outperformed by mitigation alternatives. PMID:25224904

  9. Ideologies of aid, practices of power: lessons for Medicaid managed care.

    PubMed

    Nelson, Nancy L

    2005-03-01

    The articles in this special issue teach valuable lessons based on what happened in New Mexico with the shift to Medicaid managed care. By reframing these lessons in broader historical and cultural terms with reference to aid programs, we have the opportunity to learn a great deal more about the relationship between poverty, public policy, and ideology. Medicaid as a state and federal aid program in the United States and economic development programs as foreign aid provide useful analogies specifically because they exhibit a variety of parallel patterns. The increasing concatenation of corporate interests with state and nongovernmental interests in aid programs is ultimately producing a less centralized system of power and responsibility. This process of decentralization, however, is not undermining the sources of power behind aid efforts, although it does make the connections between intent, planning, and outcome less direct. Ultimately, the devolution of power produces many unintended consequences for aid policy. But it also reinforces the perspective that aid and the need for it are nonpolitical issues.

  10. "It depends on us": employee perspective of healthy working conditions during continual reorganisations in a radiology department.

    PubMed

    Nilsson, Kerstin; Hertting, Anna; Petterson, Inga-Lill

    2009-01-01

    This study focuses on employees' experience of occupational health in a radiology department within a Swedish university hospital during years of continual reorganisations. This department's stable personnel health trends, in terms of self-rated mental health and sick-leave rates, diverged from the general trends of deteriorating working conditions in the hospital. The aim was to identify dimensions of working conditions as positive determinants contributing to occupational health in a department of radiology undergoing continual reorganisations. Open-ended interviews with twelve employees were transcribed and analyzed using content analysis. The employees experienced their new, stimulating work tasks and a supportive organisational climate as important contributors to the healthy working conditions. The positive effects of handling new technical challenges and the positive organisational climate, characterized by mutual trust as well as work confidence and respect for each other's competence, seem to function as buffering factors, balancing the negative effects of parallel downsizing and restructuring processes.

  11. Sigmund Freud-early network theories of the brain.

    PubMed

    Surbeck, Werner; Killeen, Tim; Vetter, Johannes; Hildebrandt, Gerhard

    2018-06-01

    Since the early days of modern neuroscience, psychological models of brain function have been a key component in the development of new knowledge. These models aim to provide a framework that allows the integration of discoveries derived from the fundamental disciplines of neuroscience, including anatomy and physiology, as well as clinical neurology and psychiatry. During the initial stages of his career, Sigmund Freud (1856-1939) became actively involved in these nascent fields, with a burgeoning interest in functional neuroanatomy. In contrast to his contemporaries, Freud was convinced that cognition could not be localised to separate modules and that the brain processes cognition not in a merely serial manner but in a parallel and dynamic fashion, anticipating fundamental aspects of current network theories of brain function. This article aims to shed light on Freud's seminal, yet oft-overlooked, early work on functional neuroanatomy and his reasons for finally abandoning the conventional neuroscientific "brain-based" reference frame in order to conceptualise the mind from a purely psychological perspective.

  12. Deliberative Mapping of options for tackling climate change: Citizens and specialists 'open up' appraisal of geoengineering.

    PubMed

    Bellamy, Rob; Chilvers, Jason; Vaughan, Naomi E

    2016-04-01

    Appraisals of deliberate, large-scale interventions in the earth's climate system, known collectively as 'geoengineering', have largely taken the form of narrowly framed and exclusive expert analyses that prematurely 'close down' upon particular proposals. Here, we present the findings from the first 'upstream' appraisal of geoengineering to deliberately 'open up' to a broader diversity of framings, knowledges and future pathways. We report on the citizen strand of an innovative analytic-deliberative participatory appraisal process called Deliberative Mapping. A select but diverse group of sociodemographically representative citizens from Norfolk (United Kingdom) were engaged in a deliberative multi-criteria appraisal of geoengineering proposals relative to other options for tackling climate change, in parallel to symmetrical appraisals by diverse experts and stakeholders. Despite seeking to map divergent perspectives, a remarkably consistent view of option performance emerged across both the citizens' and the specialists' deliberations, where geoengineering proposals were outperformed by mitigation alternatives. © The Author(s) 2014.

  13. Algorithmic aspects for the reconstruction of spatio-spectral data cubes in the perspective of the SKA

    NASA Astrophysics Data System (ADS)

    Mary, D.; Ferrari, A.; Ferrari, C.; Deguignet, J.; Vannier, M.

    2016-12-01

    With millions of receivers leading to terabyte data cubes, the story of the giant SKA telescope is also one of collaborative efforts across radioastronomy, signal processing, optimization, and computer science. Reconstructing SKA cubes poses two challenges. First, the majority of existing algorithms work in 2D and cannot be directly translated into 3D. Second, the reconstruction implies solving an inverse problem, and it is not clear what ultimate limit we can expect on the error of this solution. This study addresses both challenges, albeit partially. We consider an extremely simple data acquisition model, and we focus on strategies making it possible to implement 3D reconstruction algorithms that use state-of-the-art image/spectral regularization. The proposed approach has two main features: (i) reduced memory storage with respect to a previous approach; (ii) efficient parallelization and ventilation of the computational load over the spectral bands. This work will allow various 3D reconstruction approaches to be implemented and compared in a large-scale framework.
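    Feature (ii), spreading the load over spectral bands, can be sketched by reconstructing each band in its own worker process. A plain per-band Wiener filter stands in for the state-of-the-art regularized solvers here; a genuinely spatio-spectral regularizer would couple the bands and require inter-worker communication.

      # Sketch: ventilate the reconstruction over spectral bands, one band
      # per worker; a per-band Wiener deconvolution is a toy stand-in.
      import numpy as np
      from multiprocessing import Pool

      def wiener_band(args):
          dirty, psf, noise = args
          H = np.fft.fft2(psf)
          W = np.conj(H) / (np.abs(H) ** 2 + noise)   # per-band Wiener filter
          return np.real(np.fft.ifft2(np.fft.fft2(dirty) * W))

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          n_bands, npix = 16, 64
          psf = np.zeros((npix, npix))
          psf[:3, :3] = 1 / 9                         # toy blur kernel
          cube = rng.normal(size=(n_bands, npix, npix))   # stand-in dirty cube
          with Pool() as pool:                        # one band per worker
              clean = pool.map(wiener_band, [(b, psf, 1e-2) for b in cube])
          print(np.stack(clean).shape)                # (16, 64, 64)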

  14. CPD and KT: Models Used and Opportunities for Synergy.

    PubMed

    Sargeant, Joan; Borduas, Francine; Sales, Anne; Klein, Doug; Lynn, Brenna; Stenerson, Heather

    2017-01-01

    The two fields of continuing professional development (CPD) and knowledge translation (KT) within the health care sector, and their related research have developed as somewhat parallel paths with limited points of overlap or intersection. This is slowly beginning to change. The purpose of this paper is to describe and compare the dominant conceptual models informing each field with the view of increasing understanding and appreciation of the two fields, how they are similar and where they differ, and the current and potential points of intersection. The models include the "knowledge-to-action" (KTA) cycle informing KT, models informing CPD curriculum design and individual self-directed learning, and the Kirkpatrick model for evaluating educational outcomes. When compared through the perspectives of conceptual designs, processes, and outcomes, the models overlap. We also identify shared gaps in both fields (eg, the need to explore the influence of the context in which CPD and KT interventions take place) and suggest opportunities for synergies and for moving forward.

  15. Niche construction, sources of selection and trait coevolution.

    PubMed

    Laland, Kevin; Odling-Smee, John; Endler, John

    2017-10-06

    Organisms modify and choose components of their local environments. This 'niche construction' can alter ecological processes, modify natural selection and contribute to inheritance through ecological legacies. Here, we propose that niche construction initiates and modifies the selection directly affecting the constructor, and on other species, in an orderly, directed and sustained manner. By dependably generating specific environmental states, niche construction co-directs adaptive evolution by imposing a consistent statistical bias on selection. We illustrate how niche construction can generate this evolutionary bias by comparing it with artificial selection. We suggest that it occupies the middle ground between artificial and natural selection. We show how the perspective leads to testable predictions related to: (i) reduced variance in measures of responses to natural selection in the wild; (ii) multiple trait coevolution, including the evolution of sequences of traits and patterns of parallel evolution; and (iii) a positive association between niche construction and biodiversity. More generally, we submit that evolutionary biology would benefit from greater attention to the diverse properties of all sources of selection.

  16. Best practices for fungal germplasm repositories and perspectives on their implementation.

    PubMed

    Wiest, Aric; Schnittker, Robert; Plamann, Mike; McCluskey, Kevin

    2012-02-01

    In over 50 years, the Fungal Genetics Stock Center has grown to become a world-recognized biological resource center. Along with this growth comes the development and implementation of myriad practices for the management and curation of a diverse collection of filamentous fungi, yeast, and molecular genetic tools for working with the fungi. These practices include techniques for the testing, manipulation, and preservation of individual fungal isolates as well as for the processing of thousands of isolates in parallel. In addition to providing accurate record keeping, an electronic management system allows the observation of trends in strain distribution and in sample characteristics. Because many ex situ fungal germplasm repositories around the world share similar objectives, best-practice guidelines have been developed by a number of organizations such as the Organisation for Economic Co-operation and Development or the International Society for Biological and Environmental Repositories. These best-practice guidelines provide a framework for the successful operation of collections and promote the development and interactions of biological resource centers around the world.

  17. An Update to Returning Genetic Research Results to Individuals: Perspectives of the Industry Pharmacogenomics Working Group

    PubMed Central

    Prucka, Sandra K; Arnold, Lester J; Brandt, John E; Gilardi, Sandra; Harty, Lea C; Hong, Feng; Malia, Joanne; Pulford, David J

    2015-01-01

    The ease with which genotyping technologies generate tremendous amounts of data on research participants has been well chronicled, a feat that continues to become both faster and cheaper to perform. In parallel to these advances come additional ethical considerations and debates, one of which centers on providing individual research results and incidental findings back to research participants taking part in genetic research efforts. In 2006 the Industry Pharmacogenomics Working Group (I-PWG) offered some ‘Points-to-Consider’ on this topic within the context of the drug development process, from those affiliated with pharmaceutical companies. Today many of these points remain applicable to the discussion but will be expanded upon in this updated viewpoint from the I-PWG. The exploratory nature of pharmacogenomic work in the pharmaceutical industry is discussed to provide context for why these results typically are not best suited for return. Operational challenges unique to this industry that create barriers to returning this information are also explained. PMID:24471556

  18. The Symptoms and Functioning Severity Scale (SFSS): Psychometric Evaluation and Discrepancies among Youth, Caregiver, and Clinician Ratings over Time

    PubMed Central

    Athay, M. Michele; Riemer, Manuel; Bickman, Leonard

    2012-01-01

    This paper describes the development and psychometric evaluation of the Symptoms and Functioning Severity Scale (SFSS), which includes three parallel forms to systematically capture clinician, youth, and caregiver perspectives of youth symptoms on a frequent basis. While there is widespread consensus that different raters of youth psychopathology vary significantly in their assessments, this is the first paper that specifically investigates the discrepancies among clinician, youth, and caregiver ratings in a community mental health setting throughout the treatment process. Results for all three respondent versions indicate the SFSS is a psychometrically sound instrument for use in this population. Significant discrepancies in scores exist at baseline among the three respondents. Longitudinal analyses reveal that the youth-clinician and caregiver-clinician score discrepancies decrease significantly over time. Differences by youth gender exist for caregiver-clinician discrepancies. The average youth-caregiver score discrepancy remains consistent throughout treatment. Implications for future research and clinical practice are discussed. PMID:22407556

  19. Structure and function of complex brain networks

    PubMed Central

    Sporns, Olaf

    2013-01-01

    An increasing number of theoretical and empirical studies approach the function of the human brain from a network perspective. The analysis of brain networks is made feasible by the development of new imaging acquisition methods as well as new tools from graph theory and dynamical systems. This review surveys some of these methodological advances and summarizes recent findings on the architecture of structural and functional brain networks. Studies of the structural connectome reveal several modules or network communities that are interlinked by hub regions mediating communication processes between modules. Recent network analyses have shown that network hubs form a densely linked collective called a “rich club,” centrally positioned for attracting and dispersing signal traffic. In parallel, recordings of resting and task-evoked neural activity have revealed distinct resting-state networks that contribute to functions in distinct cognitive domains. Network methods are increasingly applied in a clinical context, and their promise for elucidating neural substrates of brain and mental disorders is discussed. PMID:24174898

  20. Efficient testing methodologies for microcameras in a gigapixel imaging system

    NASA Astrophysics Data System (ADS)

    Youn, Seo Ho; Marks, Daniel L.; McLaughlin, Paul O.; Brady, David J.; Kim, Jungsang

    2013-04-01

    Multiscale parallel imaging, based on a monocentric optical design, promises revolutionary advances in diverse imaging applications by enabling high-resolution, real-time image capture over a wide field-of-view (FOV), including sports broadcast, wide-field microscopy, astronomy, and security surveillance. The recently demonstrated AWARE-2 is a gigapixel camera consisting of an objective lens and 98 microcameras spherically arranged to capture an image over a FOV of 120° by 50°, using computational image processing to form a composite image of 0.96 gigapixels. Since the microcameras are capable of individually adjusting exposure, gain, and focus, true parallel imaging is achieved with a high dynamic range. From the integration perspective, manufacturing and verifying consistent quality of the microcameras is key to the successful realization of AWARE cameras. We have developed an efficient testing methodology that utilizes a precisely fabricated dot grid chart as a calibration target to extract critical optical properties, such as optical distortion, veiling glare index, and modulation transfer function, to validate the imaging performance of microcameras. This approach utilizes an AWARE objective lens simulator which mimics the actual objective lens but operates with a short object distance, suitable for a laboratory environment. Here we describe the principles of the methodologies developed for AWARE microcameras and discuss the experimental results with our prototype microcameras. Reference: Brady, D. J., Gehm, M. E., Stack, R. A., Marks, D. L., Kittle, D. S., Golish, D. R., Vera, E. M., and Feller, S. D., "Multiscale gigapixel photography," Nature 486, 386-389 (2012).
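    One step of such a methodology, estimating radial distortion from the dot-grid chart, can be sketched as a one-parameter least-squares fit of the model r_measured = r_ideal * (1 + k1 * r_ideal^2); the grid geometry and the distortion coefficient below are synthetic, not AWARE calibration data.

      # Sketch: fit one radial distortion coefficient k1 from ideal vs.
      # measured dot positions; grid and k1 are synthetic, for illustration.
      import numpy as np

      def fit_k1(ideal_xy, measured_xy):
          r_i = np.hypot(*ideal_xy.T)
          r_m = np.hypot(*measured_xy.T)
          # Least squares on r_m - r_i = k1 * r_i**3 (one free parameter).
          return np.sum(r_i ** 3 * (r_m - r_i)) / np.sum(r_i ** 6)

      g = np.linspace(-1, 1, 9)
      ideal = np.array([(x, y) for x in g for y in g])   # 9x9 dot grid
      true_k1 = -0.05                                    # barrel distortion
      measured = ideal * (1 + true_k1 * np.sum(ideal ** 2, axis=1))[:, None]
      print(fit_k1(ideal, measured))                     # recovers ~ -0.05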
