Sample records for parallel independent component

  1. Parallel group independent component analysis for massive fMRI data sets.

    PubMed

    Chen, Shaojie; Huang, Lei; Qiu, Huitong; Nebel, Mary Beth; Mostofsky, Stewart H; Pekar, James J; Lindquist, Martin A; Eloyan, Ani; Caffo, Brian S

    2017-01-01

    Independent component analysis (ICA) is widely used in the field of functional neuroimaging to decompose data into spatio-temporal patterns of co-activation. In particular, ICA has found wide usage in the analysis of resting state fMRI (rs-fMRI) data. Recently, a number of large-scale data sets have become publicly available that consist of rs-fMRI scans from thousands of subjects. As a result, efficient ICA algorithms that scale well to the increased number of subjects are required. To address this problem, we propose a two-stage likelihood-based algorithm for performing group ICA, which we denote Parallel Group Independent Component Analysis (PGICA). By utilizing the sequential nature of the algorithm and parallel computing techniques, we are able to efficiently analyze data sets from large numbers of subjects. We illustrate the efficacy of PGICA, which has been implemented in R and is freely available through the Comprehensive R Archive Network, through simulation studies and application to rs-fMRI data from two large multi-subject data sets, consisting of 301 and 779 subjects respectively.

  2. Three-way parallel independent component analysis for imaging genetics using multi-objective optimization.

    PubMed

    Ulloa, Alvaro; Jingyu Liu; Vergara, Victor; Jiayu Chen; Calhoun, Vince; Pattichis, Marios

    2014-01-01

    In the biomedical field, current technology allows for the collection of multiple data modalities from the same subject. In consequence, there is an increasing interest for methods to analyze multi-modal data sets. Methods based on independent component analysis have proven to be effective in jointly analyzing multiple modalities, including brain imaging and genetic data. This paper describes a new algorithm, three-way parallel independent component analysis (3pICA), for jointly identifying genomic loci associated with brain function and structure. The proposed algorithm relies on the use of multi-objective optimization methods to identify correlations among the modalities and maximally independent sources within modality. We test the robustness of the proposed approach by varying the effect size, cross-modality correlation, noise level, and dimensionality of the data. Simulation results suggest that 3p-ICA is robust to data with SNR levels from 0 to 10 dB and effect-sizes from 0 to 3, while presenting its best performance with high cross-modality correlations, and more than one subject per 1,000 variables. In an experimental study with 112 human subjects, the method identified links between a genetic component (pointing to brain function and mental disorder associated genes, including PPP3CC, KCNQ5, and CYP7B1), a functional component related to signal decreases in the default mode network during the task, and a brain structure component indicating increases of gray matter in brain regions of the default mode region. Although such findings need further replication, the simulation and in-vivo results validate the three-way parallel ICA algorithm presented here as a useful tool in biomedical data decomposition applications.

  3. A Parallel Independent Component Analysis Approach to Investigate Genomic Influence on Brain Function

    PubMed Central

    Liu, Jingyu; Demirci, Oguz; Calhoun, Vince D.

    2009-01-01

    Relationships between genomic data and functional brain images are of great interest but require new analysis approaches to integrate the high-dimensional data types. This letter presents an extension of a technique called parallel independent component analysis (paraICA), which enables the joint analysis of multiple modalities including interconnections between them. We extend our earlier work by allowing for multiple interconnections and by providing important overfitting controls. Performance was assessed by simulations under different conditions, and indicated reliable results can be extracted by properly balancing overfitting and underfitting. An application to functional magnetic resonance images and single nucleotide polymorphism array produced interesting findings. PMID:19834575

  4. A Parallel Independent Component Analysis Approach to Investigate Genomic Influence on Brain Function.

    PubMed

    Liu, Jingyu; Demirci, Oguz; Calhoun, Vince D

    2008-01-01

    Relationships between genomic data and functional brain images are of great interest but require new analysis approaches to integrate the high-dimensional data types. This letter presents an extension of a technique called parallel independent component analysis (paraICA), which enables the joint analysis of multiple modalities including interconnections between them. We extend our earlier work by allowing for multiple interconnections and by providing important overfitting controls. Performance was assessed by simulations under different conditions, and indicated reliable results can be extracted by properly balancing overfitting and underfitting. An application to functional magnetic resonance images and single nucleotide polymorphism array produced interesting findings.

  5. A novel approach to analyzing fMRI and SNP data via parallel independent component analysis

    NASA Astrophysics Data System (ADS)

    Liu, Jingyu; Pearlson, Godfrey; Calhoun, Vince; Windemuth, Andreas

    2007-03-01

    There is current interest in understanding genetic influences on brain function in both the healthy and the disordered brain. Parallel independent component analysis, a new method for analyzing multimodal data, is proposed in this paper and applied to functional magnetic resonance imaging (fMRI) and a single nucleotide polymorphism (SNP) array. The method aims to identify the independent components of each modality and the relationship between the two modalities. We analyzed 92 participants, including 29 schizophrenia (SZ) patients, 13 unaffected SZ relatives, and 50 healthy controls. We found a correlation of 0.79 between one fMRI component and one SNP component. The fMRI component consists of activations in cingulate gyrus, multiple frontal gyri, and superior temporal gyrus. The related SNP component is contributed to significantly by 9 SNPs located in sets of genes, including those coding for apolipoprotein A-I, and C-III, malate dehydrogenase 1 and the gamma-aminobutyric acid alpha-2 receptor. A significant difference in the presences of this SNP component is found between the SZ group (SZ patients and their relatives) and the control group. In summary, we constructed a framework to identify the interactions between brain functional and genetic information; our findings provide new insight into understanding genetic influences on brain function in a common mental disorder.

  6. Target objects defined by a conjunction of colour and shape can be selected independently and in parallel.

    PubMed

    Jenkins, Michael; Grubert, Anna; Eimer, Martin

    2017-11-01

    It is generally assumed that during search for targets defined by a feature conjunction, attention is allocated sequentially to individual objects. We tested this hypothesis by tracking the time course of attentional processing biases with the N2pc component in tasks where observers searched for two targets defined by a colour/shape conjunction. In Experiment 1, two displays presented in rapid succession (100 ms or 10 ms SOA) each contained a target and a colour-matching or shape-matching distractor on opposite sides. Target objects in both displays elicited N2pc components of similar size that overlapped in time when the SOA was 10 ms, suggesting that attention was allocated in parallel to both targets. Analogous results were found in Experiment 2, where targets and partially matching distractors were both accompanied by an object without target-matching features. Colour-matching and shape-matching distractors also elicited N2pc components, and the target N2pc was initially identical to the sum of the two distractor N2pcs, suggesting that the initial phase of attentional object selection was guided independently by feature templates for target colour and shape. Beyond 230 ms after display onset, the target N2pc became superadditive, indicating that attentional selection processes now started to be sensitive to the presence of feature conjunctions. Results show that independent attentional selection processes can be activated in parallel by two target objects in situations where these objects are defined by a feature conjunction.

  7. Component Technology for High-Performance Scientific Simulation Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Epperly, T; Kohn, S; Kumfert, G

    2000-11-09

    We are developing scientific software component technology to manage the complexity of modem, parallel simulation software and increase the interoperability and re-use of scientific software packages. In this paper, we describe a language interoperability tool named Babel that enables the creation and distribution of language-independent software libraries using interface definition language (IDL) techniques. We have created a scientific IDL that focuses on the unique interface description needs of scientific codes, such as complex numbers, dense multidimensional arrays, complicated data types, and parallelism. Preliminary results indicate that in addition to language interoperability, this approach provides useful tools for thinking about themore » design of modem object-oriented scientific software libraries. Finally, we also describe a web-based component repository called Alexandria that facilitates the distribution, documentation, and re-use of scientific components and libraries.« less

  8. A high speed buffer for LV data acquisition

    NASA Technical Reports Server (NTRS)

    Cavone, Angelo A.; Sterlina, Patrick S.; Clemmons, James I., Jr.; Meyers, James F.

    1987-01-01

    The laser velocimeter (autocovariance) buffer interface is a data acquisition subsystem designed specifically for the acquisition of data from a laser velocimeter. The subsystem acquires data from up to six laser velocimeter components in parallel, measures the times between successive data points for each of the components, establishes and maintains a coincident condition between any two or three components, and acquires data from other instrumentation systems simultaneously with the laser velocimeter data points. The subsystem is designed to control the entire data acquisition process based on initial setup parameters obtained from a host computer and to be independent of the computer during the acquisition. On completion of the acquisition cycle, the interface transfers the contents of its memory to the host under direction of the host via a single 16-bit parallel DMA channel.

  9. A Linguistic Model in Component Oriented Programming

    NASA Astrophysics Data System (ADS)

    Crăciunean, Daniel Cristian; Crăciunean, Vasile

    2016-12-01

    It is a fact that the component-oriented programming, well organized, can bring a large increase in efficiency in the development of large software systems. This paper proposes a model for building software systems by assembling components that can operate independently of each other. The model is based on a computing environment that runs parallel and distributed applications. This paper introduces concepts as: abstract aggregation scheme and aggregation application. Basically, an aggregation application is an application that is obtained by combining corresponding components. In our model an aggregation application is a word in a language.

  10. Draco,Version 6.x.x

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, Kelly; Budge, Kent; Lowrie, Rob

    2016-03-03

    Draco is an object-oriented component library geared towards numerically intensive, radiation (particle) transport applications built for parallel computing hardware. It consists of semi-independent packages and a robust build system. The packages in Draco provide a set of components that can be used by multiple clients to build transport codes. The build system can also be extracted for use in clients. Software includes smart pointers, Design-by-Contract assertions, unit test framework, wrapped MPI functions, a file parser, unstructured mesh data structures, a random number generator, root finders and an angular quadrature component.

  11. Reusable Component Model Development Approach for Parallel and Distributed Simulation

    PubMed Central

    Zhu, Feng; Yao, Yiping; Chen, Huilong; Yao, Feng

    2014-01-01

    Model reuse is a key issue to be resolved in parallel and distributed simulation at present. However, component models built by different domain experts usually have diversiform interfaces, couple tightly, and bind with simulation platforms closely. As a result, they are difficult to be reused across different simulation platforms and applications. To address the problem, this paper first proposed a reusable component model framework. Based on this framework, then our reusable model development approach is elaborated, which contains two phases: (1) domain experts create simulation computational modules observing three principles to achieve their independence; (2) model developer encapsulates these simulation computational modules with six standard service interfaces to improve their reusability. The case study of a radar model indicates that the model developed using our approach has good reusability and it is easy to be used in different simulation platforms and applications. PMID:24729751

  12. Signaling added response-independent reinforcement to assess Pavlovian processes in resistance to change and relapse.

    PubMed

    Podlesnik, Christopher A; Fleet, James D

    2014-09-01

    Behavioral momentum theory asserts Pavlovian stimulus-reinforcer relations govern the persistence of operant behavior. Specifically, resistance to conditions of disruption (e.g., extinction, satiation) reflects the relation between discriminative stimuli and the prevailing reinforcement conditions. The present study assessed whether Pavlovian stimulus-reinforcer relations govern resistance to disruption in pigeons by arranging both response-dependent and -independent food reinforcers in two components of a multiple schedule. In one component, discrete-stimulus changes preceded response-independent reinforcers, paralleling methods that reduce Pavlovian conditioned responding to contextual stimuli. Compared to the control component with no added stimuli preceding response-independent reinforcement, response rates increased as discrete-stimulus duration increased (0, 5, 10, and 15 s) across conditions. Although resistance to extinction decreased as stimulus duration increased in the component with the added discrete stimulus, further tests revealed no effect of discrete stimuli, including other disrupters (presession food, intercomponent food, modified extinction) and reinstatement designed to control for generalization decrement. These findings call into question a straightforward conception that the stimulus-reinforcer relations governing resistance to disruption reflect the same processes as Pavlovian conditioning, as asserted by behavioral momentum theory. © Society for the Experimental Analysis of Behavior.

  13. Computer-Aided Parallelizer and Optimizer

    NASA Technical Reports Server (NTRS)

    Jin, Haoqiang

    2011-01-01

    The Computer-Aided Parallelizer and Optimizer (CAPO) automates the insertion of compiler directives (see figure) to facilitate parallel processing on Shared Memory Parallel (SMP) machines. While CAPO currently is integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), CAPO was independently developed at Ames Research Center as one of the components for the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data dependence analysis, and generates OpenMP directives. Due to the widely supported OpenMP standard, the generated OpenMP codes have the potential to run on a wide range of SMP machines. CAPO relies on accurate interprocedural data dependence information currently provided by CAPTools. Compiler directives are generated through identification of parallel loops in the outermost level, construction of parallel regions around parallel loops and optimization of parallel regions, and insertion of directives with automatic identification of private, reduction, induction, and shared variables. Attempts also have been made to identify potential pipeline parallelism (implemented with point-to-point synchronization). Although directives are generated automatically, user interaction with the tool is still important for producing good parallel codes. A comprehensive graphical user interface is included for users to interact with the parallelization process.

  14. Age-associated patterns in gray matter volume, cerebral perfusion and BOLD oscillations in children and adolescents.

    PubMed

    Bray, Signe

    2017-05-01

    Healthy brain development involves changes in brain structure and function that are believed to support cognitive maturation. However, understanding how structural changes such as grey matter thinning relate to functional changes is challenging. To gain insight into structure-function relationships in development, the present study took a data driven approach to define age-related patterns of variation in gray matter volume (GMV), cerebral blood flow (CBF) and blood-oxygen level dependent (BOLD) signal variation (fractional amplitude of low-frequency fluctuations; fALFF) in 59 healthy children aged 7-18 years, and examined relationships between modalities. Principal components analysis (PCA) was applied to each modality in parallel, and participant scores for the top components were assessed for age associations. We found that decompositions of CBF, GMV and fALFF all included components for which scores were significantly associated with age. The dominant patterns in GMV and CBF showed significant (GMV) or trend level (CBF) associations with age and a strong spatial overlap, driven by increased signal intensity in default mode network (DMN) regions. GMV, CBF and fALFF additionally showed components accounting for 3-5% of variability with significant age associations. However, these patterns were relatively spatially independent, with small-to-moderate overlap between modalities. Independence of age effects was further demonstrated by correlating individual subject maps between modalities: CBF was significantly less correlated with GMV and fALFF in older children relative to younger. These spatially independent effects of age suggest that the parallel decline observed in global GMV and CBF may not reflect spatially synchronized processes. Hum Brain Mapp 38:2398-2407, 2017. © 2017 Wiley Periodicals, Inc. © 2017 Wiley Periodicals, Inc.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, Janine Camille; Thompson, David; Pebay, Philippe Pierre

    Statistical analysis is typically used to reduce the dimensionality of and infer meaning from data. A key challenge of any statistical analysis package aimed at large-scale, distributed data is to address the orthogonal issues of parallel scalability and numerical stability. Many statistical techniques, e.g., descriptive statistics or principal component analysis, are based on moments and co-moments and, using robust online update formulas, can be computed in an embarrassingly parallel manner, amenable to a map-reduce style implementation. In this paper we focus on contingency tables, through which numerous derived statistics such as joint and marginal probability, point-wise mutual information, information entropy,more » and {chi}{sup 2} independence statistics can be directly obtained. However, contingency tables can become large as data size increases, requiring a correspondingly large amount of communication between processors. This potential increase in communication prevents optimal parallel speedup and is the main difference with moment-based statistics (which we discussed in [1]) where the amount of inter-processor communication is independent of data size. Here we present the design trade-offs which we made to implement the computation of contingency tables in parallel. We also study the parallel speedup and scalability properties of our open source implementation. In particular, we observe optimal speed-up and scalability when the contingency statistics are used in their appropriate context, namely, when the data input is not quasi-diffuse.« less

  16. The equilibrium and stability of the gaseous component of the galaxy, 2

    NASA Technical Reports Server (NTRS)

    Kellman, S. A.

    1971-01-01

    A time-independent, linear, plane and axially-symmetric stability analysis was performed on a self-gravitating, plane-parallel, isothermal layer of nonmagnetic, nonrotating gas. The gas layer was immersed in a plane-stratified field isothermal layer of stars which supply a self-consistent gravitational field. Only the gaseous component was perturbed. Expressions were derived for the perturbed gas potential and perturbed gas density that satisfied both the Poisson and hydrostatic equilibrium equations. The equation governing the size of the perturbations in the mid-plane was found to be analogous to the one-dimensional time-independent Schrodinger equation for a particle bound by a potential well, and with similar boundary conditions. The radius of the neutral state was computed numerically and compared with the Jeans' and Ledoux radius. The inclusion of a rigid stellar component increased the Ledoux radius, though only slightly. Isodensity contours of the neutrual or marginally unstable state were constructed.

  17. A three-way parallel ICA approach to analyze links among genetics, brain structure and brain function.

    PubMed

    Vergara, Victor M; Ulloa, Alvaro; Calhoun, Vince D; Boutte, David; Chen, Jiayu; Liu, Jingyu

    2014-09-01

    Multi-modal data analysis techniques, such as the Parallel Independent Component Analysis (pICA), are essential in neuroscience, medical imaging and genetic studies. The pICA algorithm allows the simultaneous decomposition of up to two data modalities achieving better performance than separate ICA decompositions and enabling the discovery of links between modalities. However, advances in data acquisition techniques facilitate the collection of more than two data modalities from each subject. Examples of commonly measured modalities include genetic information, structural magnetic resonance imaging (MRI) and functional MRI. In order to take full advantage of the available data, this work extends the pICA approach to incorporate three modalities in one comprehensive analysis. Simulations demonstrate the three-way pICA performance in identifying pairwise links between modalities and estimating independent components which more closely resemble the true sources than components found by pICA or separate ICA analyses. In addition, the three-way pICA algorithm is applied to real experimental data obtained from a study that investigate genetic effects on alcohol dependence. Considered data modalities include functional MRI (contrast images during alcohol exposure paradigm), gray matter concentration images from structural MRI and genetic single nucleotide polymorphism (SNP). The three-way pICA approach identified links between a SNP component (pointing to brain function and mental disorder associated genes, including BDNF, GRIN2B and NRG1), a functional component related to increased activation in the precuneus area, and a gray matter component comprising part of the default mode network and the caudate. Although such findings need further verification, the simulation and in-vivo results validate the three-way pICA algorithm presented here as a useful tool in biomedical data fusion applications. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. P-HS-SFM: a parallel harmony search algorithm for the reproduction of experimental data in the continuous microscopic crowd dynamic models

    NASA Astrophysics Data System (ADS)

    Jaber, Khalid Mohammad; Alia, Osama Moh'd.; Shuaib, Mohammed Mahmod

    2018-03-01

    Finding the optimal parameters that can reproduce experimental data (such as the velocity-density relation and the specific flow rate) is a very important component of the validation and calibration of microscopic crowd dynamic models. Heavy computational demand during parameter search is a known limitation that exists in a previously developed model known as the Harmony Search-Based Social Force Model (HS-SFM). In this paper, a parallel-based mechanism is proposed to reduce the computational time and memory resource utilisation required to find these parameters. More specifically, two MATLAB-based multicore techniques (parfor and create independent jobs) using shared memory are developed by taking advantage of the multithreading capabilities of parallel computing, resulting in a new framework called the Parallel Harmony Search-Based Social Force Model (P-HS-SFM). The experimental results show that the parfor-based P-HS-SFM achieved a better computational time of about 26 h, an efficiency improvement of ? 54% and a speedup factor of 2.196 times in comparison with the HS-SFM sequential processor. The performance of the P-HS-SFM using the create independent jobs approach is also comparable to parfor with a computational time of 26.8 h, an efficiency improvement of about 30% and a speedup of 2.137 times.

  19. A discrimination-association model for decomposing component processes of the implicit association test.

    PubMed

    Stefanutti, Luca; Robusto, Egidio; Vianello, Michelangelo; Anselmi, Pasquale

    2013-06-01

    A formal model is proposed that decomposes the implicit association test (IAT) effect into three process components: stimuli discrimination, automatic association, and termination criterion. Both response accuracy and reaction time are considered. Four independent and parallel Poisson processes, one for each of the four label categories of the IAT, are assumed. The model parameters are the rate at which information accrues on the counter of each process and the amount of information that is needed before a response is given. The aim of this study is to present the model and an illustrative application in which the process components of a Coca-Pepsi IAT are decomposed.

  20. High-throughput shadow mask printing of passive electrical components on paper by supersonic cluster beam deposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caruso, Francesco; Bellacicca, Andrea; Milani, Paolo, E-mail: pmilani@mi.infn.it

    We report the rapid prototyping of passive electrical components (resistors and capacitors) on plain paper by an additive and parallel technology consisting of supersonic cluster beam deposition (SCBD) coupled with shadow mask printing. Cluster-assembled films have a growth mechanism substantially different from that of atom-assembled ones providing the possibility of a fine tuning of their electrical conduction properties around the percolative conduction threshold. Exploiting the precise control on cluster beam intensity and shape typical of SCBD, we produced, in a one-step process, batches of resistors with resistance values spanning a range of two orders of magnitude. Parallel plate capacitors withmore » paper as the dielectric medium were also produced with capacitance in the range of tens of picofarads. Compared to standard deposition technologies, SCBD allows for a very efficient use of raw materials and the rapid production of components with different shape and dimensions while controlling independently the electrical characteristics. Discrete electrical components produced by SCBD are very robust against deformation and bending, and they can be easily assembled to build circuits with desired characteristics. The availability of large batches of these components enables the rapid and cheap prototyping and integration of electrical components on paper as building blocks of more complex systems.« less

  1. The 20 kW battery study program

    NASA Technical Reports Server (NTRS)

    1971-01-01

    Six battery configurations were selected for detailed study and these are described. A computer program was modified for use in estimation of the weights, costs, and reliabilities of each of the configurations, as a function of several important independent variables, such as system voltage, battery voltage ratio (battery voltage/bus voltage), and the number of parallel units into which each of the components of the power subsystem was divided. The computer program was used to develop the relationship between the independent variables alone and in combination, and the dependent variables: weight, cost, and availability. Parametric data, including power loss curves, are given.

  2. The Design and Evaluation of "CAPTools"--A Computer Aided Parallelization Toolkit

    NASA Technical Reports Server (NTRS)

    Yan, Jerry; Frumkin, Michael; Hribar, Michelle; Jin, Haoqiang; Waheed, Abdul; Johnson, Steve; Cross, Jark; Evans, Emyr; Ierotheou, Constantinos; Leggett, Pete; hide

    1998-01-01

    Writing applications for high performance computers is a challenging task. Although writing code by hand still offers the best performance, it is extremely costly and often not very portable. The Computer Aided Parallelization Tools (CAPTools) are a toolkit designed to help automate the mapping of sequential FORTRAN scientific applications onto multiprocessors. CAPTools consists of the following major components: an inter-procedural dependence analysis module that incorporates user knowledge; a 'self-propagating' data partitioning module driven via user guidance; an execution control mask generation and optimization module for the user to fine tune parallel processing of individual partitions; a program transformation/restructuring facility for source code clean up and optimization; a set of browsers through which the user interacts with CAPTools at each stage of the parallelization process; and a code generator supporting multiple programming paradigms on various multiprocessors. Besides describing the rationale behind the architecture of CAPTools, the parallelization process is illustrated via case studies involving structured and unstructured meshes. The programming process and the performance of the generated parallel programs are compared against other programming alternatives based on the NAS Parallel Benchmarks, ARC3D and other scientific applications. Based on these results, a discussion on the feasibility of constructing architectural independent parallel applications is presented.

  3. Concurrent white matter bundles and grey matter networks using independent component analysis.

    PubMed

    O'Muircheartaigh, Jonathan; Jbabdi, Saad

    2018-04-15

    Developments in non-invasive diffusion MRI tractography techniques have permitted the investigation of both the anatomy of white matter pathways connecting grey matter regions and their structural integrity. In parallel, there has been an expansion in automated techniques aimed at parcellating grey matter into distinct regions based on functional imaging. Here we apply independent component analysis to whole-brain tractography data to automatically extract brain networks based on their associated white matter pathways. This method decomposes the tractography data into components that consist of paired grey matter 'nodes' and white matter 'edges', and automatically separates major white matter bundles, including known cortico-cortical and cortico-subcortical tracts. We show how this framework can be used to investigate individual variations in brain networks (in terms of both nodes and edges) as well as their associations with individual differences in behaviour and anatomy. Finally, we investigate correspondences between tractography-based brain components and several canonical resting-state networks derived from functional MRI. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  4. Modeling Changes in Measured Conductance of Thin Boron Carbide Semiconducting Films Under Irradiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peterson, George G.; Wang, Yongqiang; Ianno, N. J.

    Semiconducting, p-type, amorphous partially dehydrogenated boron carbide films (a-B 10C 2+x:H y) were deposited utilizing plasma enhanced chemical vapor deposition (PECVD) onto n-type silicon thus creating a heterojunction diode. A model was developed for the conductance of the device as a function of perturbation frequency (f) that incorporates changes of the electrical properties for both the a-B 10C 2+x:H y film and the silicon substrate when irradiated. The virgin model has 3 independent variables (R1, C1, R3), and 1 dependent variable (f). These samples were then irradiated with 200 keV He + ions, and the conductance model was matched tomore » the measured data. It was found that initial irradiation (0.1 displacements per atom (dpa) equivalent) resulted in a decrease in the parallel junction resistance parameter from 6032 Ω to 2705 Ω. Further irradiation drastically increased the parallel junction resistance parameter to 39000 Ω (0.2 dpa equivalent), 77440 Ω (0.3 dpa equivalent), and 190000 Ω (0.5 dpa equivalent). It is believed that the initial irradiation causes type inversion of the silicon substrate changing the original junction from a p-n to a p-p+ with a much lower barrier height leading to a lower junction resistance component between the a-B 10C 2+x:H y and irradiated silicon. In addition, it was found that after irradiation, a second parallel resistor and capacitor component is required for the model, introducing 2 additional independent variables (R2, C2). This is interpreted as the junction between the irradiated and virgin silicon near ion end of range.« less

  5. Modeling Changes in Measured Conductance of Thin Boron Carbide Semiconducting Films Under Irradiation

    DOE PAGES

    Peterson, George G.; Wang, Yongqiang; Ianno, N. J.; ...

    2016-11-09

    Semiconducting, p-type, amorphous partially dehydrogenated boron carbide films (a-B 10C 2+x:H y) were deposited utilizing plasma enhanced chemical vapor deposition (PECVD) onto n-type silicon thus creating a heterojunction diode. A model was developed for the conductance of the device as a function of perturbation frequency (f) that incorporates changes of the electrical properties for both the a-B 10C 2+x:H y film and the silicon substrate when irradiated. The virgin model has 3 independent variables (R1, C1, R3), and 1 dependent variable (f). These samples were then irradiated with 200 keV He + ions, and the conductance model was matched tomore » the measured data. It was found that initial irradiation (0.1 displacements per atom (dpa) equivalent) resulted in a decrease in the parallel junction resistance parameter from 6032 Ω to 2705 Ω. Further irradiation drastically increased the parallel junction resistance parameter to 39000 Ω (0.2 dpa equivalent), 77440 Ω (0.3 dpa equivalent), and 190000 Ω (0.5 dpa equivalent). It is believed that the initial irradiation causes type inversion of the silicon substrate changing the original junction from a p-n to a p-p+ with a much lower barrier height leading to a lower junction resistance component between the a-B 10C 2+x:H y and irradiated silicon. In addition, it was found that after irradiation, a second parallel resistor and capacitor component is required for the model, introducing 2 additional independent variables (R2, C2). This is interpreted as the junction between the irradiated and virgin silicon near ion end of range.« less

  6. Mushroom body defect is required in parallel to Netrin for midline axon guidance in Drosophila

    PubMed Central

    Cate, Marie-Sophie; Gajendra, Sangeetha; Alsbury, Samantha; Raabe, Thomas; Tear, Guy; Mitchell, Kevin J.

    2016-01-01

    The outgrowth of many neurons within the central nervous system is initially directed towards or away from the cells lying at the midline. Recent genetic evidence suggests that a simple model of differential sensitivity to the conserved Netrin attractants and Slit repellents is insufficient to explain the guidance of all axons at the midline. In the Drosophila embryonic ventral nerve cord, many axons still cross the midline in the absence of the Netrin genes (NetA and NetB) or their receptor frazzled. Here we show that mutation of mushroom body defect (mud) dramatically enhances the phenotype of Netrin or frazzled mutants, resulting in many more axons failing to cross the midline, although mutations in mud alone have little effect. This suggests that mud, which encodes a microtubule-binding coiled-coil protein homologous to NuMA and LIN-5, is an essential component of a Netrin-independent pathway that acts in parallel to promote midline crossing. We demonstrate that this novel role of Mud in axon guidance is independent of its previously described role in neural precursor development. These studies identify a parallel pathway controlling midline guidance in Drosophila and highlight a novel role for Mud potentially acting downstream of Frizzled to aid axon guidance. PMID:26893348

  7. Inductive Position Sensor

    NASA Technical Reports Server (NTRS)

    Youngquist, Robert C. (Inventor); Simmons, Stephen M. (Inventor)

    2015-01-01

    An inductive position sensor uses three parallel inductors, each of which has an axial core that is an independent magnetic structure. A first support couples first and second inductors and separate them by a fixed distance. A second support coupled to a third inductor disposed between the first and second inductors. The first support and second support are configured for relative movement as distance changes from the third inductor to each of the first and second inductors. An oscillating current is supplied to the first and second inductors. A device measures a phase component of a source voltage generating the oscillating current and a phase component of voltage induced in the third inductor when the oscillating current is supplied to the first and second inductors such that the phase component of the voltage induced overlaps the phase component of the source voltage.

  8. Simulation of finite-strain inelastic phenomena governed by creep and plasticity

    NASA Astrophysics Data System (ADS)

    Li, Zhen; Bloomfield, Max O.; Oberai, Assad A.

    2017-11-01

    Inelastic mechanical behavior plays an important role in many applications in science and engineering. Phenomenologically, this behavior is often modeled as plasticity or creep. Plasticity is used to represent the rate-independent component of inelastic deformation and creep is used to represent the rate-dependent component. In several applications, especially those at elevated temperatures and stresses, these processes occur simultaneously. In order to model these process, we develop a rate-objective, finite-deformation constitutive model for plasticity and creep. The plastic component of this model is based on rate-independent J_2 plasticity, and the creep component is based on a thermally activated Norton model. We describe the implementation of this model within a finite element formulation, and present a radial return mapping algorithm for it. This approach reduces the additional complexity of modeling plasticity and creep, over thermoelasticity, to just solving one nonlinear scalar equation at each quadrature point. We implement this algorithm within a multiphysics finite element code and evaluate the consistent tangent through automatic differentiation. We verify and validate the implementation, apply it to modeling the evolution of stresses in the flip chip manufacturing process, and test its parallel strong-scaling performance.

  9. Time-dependent behavior of passive skeletal muscle

    NASA Astrophysics Data System (ADS)

    Ahamed, T.; Rubin, M. B.; Trimmer, B. A.; Dorfmann, L.

    2016-03-01

    An isotropic three-dimensional nonlinear viscoelastic model is developed to simulate the time-dependent behavior of passive skeletal muscle. The development of the model is stimulated by experimental data that characterize the response during simple uniaxial stress cyclic loading and unloading. Of particular interest is the rate-dependent response, the recovery of muscle properties from the preconditioned to the unconditioned state and stress relaxation at constant stretch during loading and unloading. The model considers the material to be a composite of a nonlinear hyperelastic component in parallel with a nonlinear dissipative component. The strain energy and the corresponding stress measures are separated additively into hyperelastic and dissipative parts. In contrast to standard nonlinear inelastic models, here the dissipative component is modeled using an evolution equation that combines rate-independent and rate-dependent responses smoothly with no finite elastic range. Large deformation evolution equations for the distortional deformations in the elastic and in the dissipative component are presented. A robust, strongly objective numerical integration algorithm is used to model rate-dependent and rate-independent inelastic responses. The constitutive formulation is specialized to simulate the experimental data. The nonlinear viscoelastic model accurately represents the time-dependent passive response of skeletal muscle.

  10. The parallel-sequential field subtraction techniques for nonlinear ultrasonic imaging

    NASA Astrophysics Data System (ADS)

    Cheng, Jingwei; Potter, Jack N.; Drinkwater, Bruce W.

    2018-04-01

    Nonlinear imaging techniques have recently emerged which have the potential to detect cracks at a much earlier stage and have sensitivity to particularly closed defects. This study utilizes two modes of focusing: parallel, in which the elements are fired together with a delay law, and sequential, in which elements are fired independently. In the parallel focusing, a high intensity ultrasonic beam is formed in the specimen at the focal point. However, in sequential focusing only low intensity signals from individual elements enter the sample and the full matrix of transmit-receive signals is recorded; with elastic assumptions, both parallel and sequential images are expected to be identical. Here we measure the difference between these images formed from the coherent component of the field and use this to characterize nonlinearity of closed fatigue cracks. In particular we monitor the reduction in amplitude at the fundamental frequency at each focal point and use this metric to form images of the spatial distribution of nonlinearity. The results suggest the subtracted image can suppress linear features (e.g., back wall or large scatters) and allow damage to be detected at an early stage.

  11. Components of action potential repolarization in cerebellar parallel fibres.

    PubMed

    Pekala, Dobromila; Baginskas, Armantas; Szkudlarek, Hanna J; Raastad, Morten

    2014-11-15

    Repolarization of the presynaptic action potential is essential for transmitter release, excitability and energy expenditure. Little is known about repolarization in thin, unmyelinated axons forming en passant synapses, which represent the most common type of axons in the mammalian brain's grey matter.We used rat cerebellar parallel fibres, an example of typical grey matter axons, to investigate the effects of K(+) channel blockers on repolarization. We show that repolarization is composed of a fast tetraethylammonium (TEA)-sensitive component, determining the width and amplitude of the spike, and a slow margatoxin (MgTX)-sensitive depolarized after-potential (DAP). These two components could be recorded at the granule cell soma as antidromic action potentials and from the axons with a newly developed miniaturized grease-gap method. A considerable proportion of fast repolarization remained in the presence of TEA, MgTX, or both. This residual was abolished by the addition of quinine. The importance of proper control of fast repolarization was demonstrated by somatic recordings of antidromic action potentials. In these experiments, the relatively broad K(+) channel blocker 4-aminopyridine reduced the fast repolarization, resulting in bursts of action potentials forming on top of the DAP. We conclude that repolarization of the action potential in parallel fibres is supported by at least three groups of K(+) channels. Differences in their temporal profiles allow relatively independent control of the spike and the DAP, whereas overlap of their temporal profiles provides robust control of axonal bursting properties.

  12. Second order kinetic theory of parallel momentum transport in collisionless drift wave turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yang, E-mail: lyang13@mails.tsinghua.edu.cn; Southwestern Institute of Physics, Chengdu 610041; Gao, Zhe

    A second order kinetic model for turbulent ion parallel momentum transport is presented. A new nonresonant second order parallel momentum flux term is calculated. The resonant component of the ion parallel electrostatic force is the momentum source, while the nonresonant component of the ion parallel electrostatic force compensates for that of the nonresonant second order parallel momentum flux. The resonant component of the kinetic momentum flux can be divided into three parts, including the pinch term, the diffusive term, and the residual stress. By reassembling the pinch term and the residual stress, the residual stress can be considered as amore » pinch term of parallel wave-particle resonant velocity, and, therefore, may be called as “resonant velocity pinch” term. Considering the resonant component of the ion parallel electrostatic force is the transfer rate between resonant ions and waves (or, equivalently, nonresonant ions), a conservation equation of the parallel momentum of resonant ions and waves is obtained.« less

  13. Parallel digital modem using multirate digital filter banks

    NASA Technical Reports Server (NTRS)

    Sadr, Ramin; Vaidyanathan, P. P.; Raphaeli, Dan; Hinedi, Sami

    1994-01-01

    A new class of architectures for an all-digital modem is presented in this report. This architecture, referred to as the parallel receiver (PRX), is based on employing multirate digital filter banks (DFB's) to demodulate, track, and detect the received symbol stream. The resulting architecture is derived, and specifications are outlined for designing the DFB for the PRX. The key feature of this approach is a lower processing rate then either the Nyquist rate or the symbol rate, without any degradation in the symbol error rate. Due to the freedom in choosing the processing rate, the designer is able to arbitrarily select and use digital components, independent of the speed of the integrated circuit technology. PRX architecture is particularly suited for high data rate applications, and due to the modular structure of the parallel signal path, expansion to even higher data rates is accommodated with each. Applications of the PRX would include gigabit satellite channels, multiple spacecraft, optical links, interactive cable-TV, telemedicine, code division multiple access (CDMA) communications, and others.

  14. Guided Exploration of Genomic Risk for Gray Matter Abnormalities in Schizophrenia Using Parallel Independent Component Analysis with Reference

    PubMed Central

    Chen, Jiayu; Calhoun, Vince D.; Pearlson, Godfrey D.; Perrone-Bizzozero, Nora; Sui, Jing; Turner, Jessica A.; Bustillo, Juan R; Ehrlich, Stefan; Sponheim, Scott R.; Cañive, José M.; Ho, Beng-Choon; Liu, Jingyu

    2013-01-01

    One application of imaging genomics is to explore genetic variants associated with brain structure and function, presenting a new means of mapping genetic influences on mental disorders. While there is growing interest in performing genome-wide searches for determinants, it remains challenging to identify genetic factors of small effect size, especially in limited sample sizes. In an attempt to address this issue, we propose to take advantage of a priori knowledge, specifically to extend parallel independent component analysis (pICA) to incorporate a reference (pICA-R), aiming to better reveal relationships between hidden factors of a particular attribute. The new approach was first evaluated on simulated data for its performance under different configurations of effect size and dimensionality. Then pICA-R was applied to a 300-participant (140 schizophrenia (SZ) patients versus 160 healthy controls) dataset consisting of structural magnetic resonance imaging (sMRI) and single nucleotide polymorphism (SNP) data. Guided by a reference SNP set derived from ANK3, a gene implicated by the Psychiatric Genomic Consortium SZ study, pICA-R identified one pair of SNP and sMRI components with a significant loading correlation of 0.27 (p = 1.64×10−6). The sMRI component showed a significant group difference in loading parameters between patients and controls (p = 1.33×10−15), indicating SZ-related reduction in gray matter concentration in prefrontal and temporal regions. The linked SNP component also showed a group difference (p = 0.04) and was predominantly contributed to by 1,030 SNPs. The effect of these top contributing SNPs was verified using association test results of the Psychiatric Genomic Consortium SZ study, where the 1,030 SNPs exhibited significant SZ enrichment compared to the whole genome. In addition, pathway analyses indicated the genetic component majorly relating to neurotransmitter and nervous system signaling pathways. Given the simulation and experiment results, pICA-R may prove a promising multivariate approach for use in imaging genomics to discover reliable genetic risk factors under a scenario of relatively high dimensionality and small effect size. PMID:23727316

  15. Considering Horn's Parallel Analysis from a Random Matrix Theory Point of View.

    PubMed

    Saccenti, Edoardo; Timmerman, Marieke E

    2017-03-01

    Horn's parallel analysis is a widely used method for assessing the number of principal components and common factors. We discuss the theoretical foundations of parallel analysis for principal components based on a covariance matrix by making use of arguments from random matrix theory. In particular, we show that (i) for the first component, parallel analysis is an inferential method equivalent to the Tracy-Widom test, (ii) its use to test high-order eigenvalues is equivalent to the use of the joint distribution of the eigenvalues, and thus should be discouraged, and (iii) a formal test for higher-order components can be obtained based on a Tracy-Widom approximation. We illustrate the performance of the two testing procedures using simulated data generated under both a principal component model and a common factors model. For the principal component model, the Tracy-Widom test performs consistently in all conditions, while parallel analysis shows unpredictable behavior for higher-order components. For the common factor model, including major and minor factors, both procedures are heuristic approaches, with variable performance. We conclude that the Tracy-Widom procedure is preferred over parallel analysis for statistically testing the number of principal components based on a covariance matrix.

  16. Urban residential greenspace and mental health in youth: Different approaches to testing multiple pathways yield different conclusions.

    PubMed

    Dzhambov, Angel; Hartig, Terry; Markevych, Iana; Tilov, Boris; Dimitrova, Donka

    2018-01-01

    Urban greenspace can benefit mental health through multiple mechanisms. They may work together, but previous studies have treated them as independent. We aimed to compare single and parallel mediation models, which estimate the independent contributions of different paths, to several models that posit serial mediation components in the pathway from greenspace to mental health. We collected cross-sectional survey data from 399 participants (15-25 years of age) in the city of Plovdiv, Bulgaria. Objective "exposure" to urban residential greenspace was defined by the Normalized Difference Vegetation Index (NDVI), Soil Adjusted Vegetation Index, tree cover density within the 500-m buffer, and Euclidean distance to the nearest urban greenspace. Self-reported measures of availability, access, quality, and usage of greenspace were also used. Mental health was measured with the General Health Questionnaire. The following potential mediators were considered in single and parallel mediation models: restorative quality of the neighborhood, neighborhood social cohesion, commuting and leisure time physical activity, road traffic noise annoyance, and perceived air pollution. Four models were tested with the following serial mediation components: (1) restorative quality → social cohesion; (2) restorative quality → physical activity; (3) perceived traffic pollution → restorative quality; (4) and noise annoyance → physical activity. There was no direct association between objectively-measured greenspace and mental health. For the 500-m buffer, the tests of the single mediator models suggested that restorative quality mediated the relationship between NDVI and mental health. Tests of parallel mediation models did not find any significant indirect effects. In line with theory, tests of the serial mediation models showed that higher restorative quality was associated with more physical activity and more social cohesion, and in turn with better mental health. As for self-reported greenspace measures, single mediation through restorative quality was significant only for time in greenspace, and there was no mediation though restorative quality in the parallel mediation models; however, serial mediation through restorative quality and social cohesion/physical activity was indicated for all self-reported measures except for greenspace quality. Statistical models should adequately address the theoretically indicated interdependencies between mechanisms underlying association between greenspace and mental health. If such causal relationships hold, testing mediators alone or in parallel may lead to incorrect inferences about the relative contribution of specific paths, and thus to inappropriate intervention strategies. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. Memory Retrieval Given Two Independent Cues: Cue Selection or Parallel Access?

    ERIC Educational Resources Information Center

    Rickard, Timothy C.; Bajic, Daniel

    2004-01-01

    A basic but unresolved issue in the study of memory retrieval is whether multiple independent cues can be used concurrently (i.e., in parallel) to recall a single, common response. A number of empirical results, as well as potentially applicable theories, suggest that retrieval can proceed in parallel, though Rickard (1997) set forth a model that…

  18. Implementation and performance of parallel Prolog interpreter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, S.; Kale, L.V.; Balkrishna, R.

    1988-01-01

    In this paper, the authors discuss the implementation of a parallel Prolog interpreter on different parallel machines. The implementation is based on the REDUCE--OR process model which exploits both AND and OR parallelism in logic programs. It is machine independent as it runs on top of the chare-kernel--a machine-independent parallel programming system. The authors also give the performance of the interpreter running a diverse set of benchmark pargrams on parallel machines including shared memory systems: an Alliant FX/8, Sequent and a MultiMax, and a non-shared memory systems: Intel iPSC/32 hypercube, in addition to its performance on a multiprocessor simulation system.

  19. Reliability Modeling Methodology for Independent Approaches on Parallel Runways Safety Analysis

    NASA Technical Reports Server (NTRS)

    Babcock, P.; Schor, A.; Rosch, G.

    1998-01-01

    This document is an adjunct to the final report An Integrated Safety Analysis Methodology for Emerging Air Transport Technologies. That report presents the results of our analysis of the problem of simultaneous but independent, approaches of two aircraft on parallel runways (independent approaches on parallel runways, or IAPR). This introductory chapter presents a brief overview and perspective of approaches and methodologies for performing safety analyses for complex systems. Ensuing chapter provide the technical details that underlie the approach that we have taken in performing the safety analysis for the IAPR concept.

  20. Using Graphical Processing Units to Accelerate Orthorectification, Atmospheric Correction and Transformations for Big Data

    NASA Astrophysics Data System (ADS)

    O'Connor, A. S.; Justice, B.; Harris, A. T.

    2013-12-01

    Graphics Processing Units (GPUs) are high-performance multiple-core processors capable of very high computational speeds and large data throughput. Modern GPUs are inexpensive and widely available commercially. These are general-purpose parallel processors with support for a variety of programming interfaces, including industry standard languages such as C. GPU implementations of algorithms that are well suited for parallel processing can often achieve speedups of several orders of magnitude over optimized CPU codes. Significant improvements in speeds for imagery orthorectification, atmospheric correction, target detection and image transformations like Independent Components Analsyis (ICA) have been achieved using GPU-based implementations. Additional optimizations, when factored in with GPU processing capabilities, can provide 50x - 100x reduction in the time required to process large imagery. Exelis Visual Information Solutions (VIS) has implemented a CUDA based GPU processing frame work for accelerating ENVI and IDL processes that can best take advantage of parallelization. Testing Exelis VIS has performed shows that orthorectification can take as long as two hours with a WorldView1 35,0000 x 35,000 pixel image. With GPU orthorecification, the same orthorectification process takes three minutes. By speeding up image processing, imagery can successfully be used by first responders, scientists making rapid discoveries with near real time data, and provides an operational component to data centers needing to quickly process and disseminate data.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aceves, Salvador M.; Ledesma-Orozco, Elias Rigoberto; Espinosa-Loza, Francisco

    A pressure vessel apparatus for cryogenic capable storage of hydrogen or other cryogenic gases at high pressure includes an insert with a parallel inlet duct, a perpendicular inlet duct connected to the parallel inlet. The perpendicular inlet duct and the parallel inlet duct connect the interior cavity with the external components. The insert also includes a parallel outlet duct and a perpendicular outlet duct connected to the parallel outlet duct. The perpendicular outlet duct and the parallel outlet duct connect the interior cavity with the external components.

  2. Multiple Independent File Parallel I/O with HDF5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, M. C.

    2016-07-13

    The HDF5 library has supported the I/O requirements of HPC codes at Lawrence Livermore National Labs (LLNL) since the late 90’s. In particular, HDF5 used in the Multiple Independent File (MIF) parallel I/O paradigm has supported LLNL code’s scalable I/O requirements and has recently been gainfully used at scales as large as O(10 6) parallel tasks.

  3. Monte Carlo simulation methodology for the reliabilty of aircraft structures under damage tolerance considerations

    NASA Astrophysics Data System (ADS)

    Rambalakos, Andreas

    Current federal aviation regulations in the United States and around the world mandate the need for aircraft structures to meet damage tolerance requirements through out the service life. These requirements imply that the damaged aircraft structure must maintain adequate residual strength in order to sustain its integrity that is accomplished by a continuous inspection program. The multifold objective of this research is to develop a methodology based on a direct Monte Carlo simulation process and to assess the reliability of aircraft structures. Initially, the structure is modeled as a parallel system with active redundancy comprised of elements with uncorrelated (statistically independent) strengths and subjected to an equal load distribution. Closed form expressions for the system capacity cumulative distribution function (CDF) are developed by expanding the current expression for the capacity CDF of a parallel system comprised by three elements to a parallel system comprised with up to six elements. These newly developed expressions will be used to check the accuracy of the implementation of a Monte Carlo simulation algorithm to determine the probability of failure of a parallel system comprised of an arbitrary number of statistically independent elements. The second objective of this work is to compute the probability of failure of a fuselage skin lap joint under static load conditions through a Monte Carlo simulation scheme by utilizing the residual strength of the fasteners subjected to various initial load distributions and then subjected to a new unequal load distribution resulting from subsequent fastener sequential failures. The final and main objective of this thesis is to present a methodology for computing the resulting gradual deterioration of the reliability of an aircraft structural component by employing a direct Monte Carlo simulation approach. The uncertainties associated with the time to crack initiation, the probability of crack detection, the exponent in the crack propagation rate (Paris equation) and the yield strength of the elements are considered in the analytical model. The structural component is assumed to consist of a prescribed number of elements. This Monte Carlo simulation methodology is used to determine the required non-periodic inspections so that the reliability of the structural component will not fall below a prescribed minimum level. A sensitivity analysis is conducted to determine the effect of three key parameters on the specification of the non-periodic inspection intervals: namely a parameter associated with the time to crack initiation, the applied nominal stress fluctuation and the minimum acceptable reliability level.

  4. Parallel, Asynchronous Executive (PAX): System concepts, facilities, and architecture

    NASA Technical Reports Server (NTRS)

    Jones, W. H.

    1983-01-01

    The Parallel, Asynchronous Executive (PAX) is a software operating system simulation that allows many computers to work on a single problem at the same time. PAX is currently implemented on a UNIVAC 1100/42 computer system. Independent UNIVAC runstreams are used to simulate independent computers. Data are shared among independent UNIVAC runstreams through shared mass-storage files. PAX has achieved the following: (1) applied several computing processes simultaneously to a single, logically unified problem; (2) resolved most parallel processor conflicts by careful work assignment; (3) resolved by means of worker requests to PAX all conflicts not resolved by work assignment; (4) provided fault isolation and recovery mechanisms to meet the problems of an actual parallel, asynchronous processing machine. Additionally, one real-life problem has been constructed for the PAX environment. This is CASPER, a collection of aerodynamic and structural dynamic problem simulation routines. CASPER is not discussed in this report except to provide examples of parallel-processing techniques.

  5. A Generalized Electron Heat Flow Relation and its Connection to the Thermal Force and the Solar Wind Parallel Electric Field

    NASA Astrophysics Data System (ADS)

    Scudder, J. D.

    2017-12-01

    Enroute to a new formulation of the heat law for the solar wind plasma the role of the invariably neglected, but omnipresent, thermal force for the multi-fluid physics of the corona and solar wind expansion will be discussed. This force (a) controls the size of the collisional ion electron energy exchange, favoring the thermal vs supra thermal electrons; (b) occurs whenever heat flux occurs; (c) remains after the electron and ion fluids come to a no slip, zero parallel current, equilibrium; (d) enhances the equilibrium parallel electric field; but (e) has a size that is theoretically independent of the electron collision frequency - allowing its importance to persist far up into the corona where collisions are invariably ignored in first approximation. The constituent parts of the thermal force allow the derivation of a new generalized electron heat flow relation that will be presented. It depends on the separate field aligned divergences of electron and ion pressures and the gradients of the ion gravitational potential and parallel flow energies and is based upon a multi-component electron distribution function. The new terms in this heat law explicitly incorporate the astrophysical context of gradients, acceleration and external forces that make demands on the parallel electric field and quasi-neutrality; essentially all of these effects are missing in traditional formulations.

  6. The FORCE - A highly portable parallel programming language

    NASA Technical Reports Server (NTRS)

    Jordan, Harry F.; Benten, Muhammad S.; Alaghband, Gita; Jakob, Ruediger

    1989-01-01

    This paper explains why the FORCE parallel programming language is easily portable among six different shared-memory multiprocessors, and how a two-level macro preprocessor makes it possible to hide low-level machine dependencies and to build machine-independent high-level constructs on top of them. These FORCE constructs make it possible to write portable parallel programs largely independent of the number of processes and the specific shared-memory multiprocessor executing them.

  7. The FORCE: A highly portable parallel programming language

    NASA Technical Reports Server (NTRS)

    Jordan, Harry F.; Benten, Muhammad S.; Alaghband, Gita; Jakob, Ruediger

    1989-01-01

    Here, it is explained why the FORCE parallel programming language is easily portable among six different shared-memory microprocessors, and how a two-level macro preprocessor makes it possible to hide low level machine dependencies and to build machine-independent high level constructs on top of them. These FORCE constructs make it possible to write portable parallel programs largely independent of the number of processes and the specific shared memory multiprocessor executing them.

  8. Broadening and collisional interference of lines in the IR spectra of ammonia. Theory

    NASA Astrophysics Data System (ADS)

    Cherkasov, M. R.

    2016-06-01

    The general theory of relaxation spectral shape parameters in the impact approximation (M. R. Cherkasov, J. Quant. Spectrosc. Radiat. Transfer 141, 73 (2014)) is adapted to the case of line broadening of infrared spectra of ammonia. Specific features of line broadening of parallel and perpendicular bands are discussed. It is shown that in both cases the spectrum consists of independently broadened singlets and doublets; however, the components of doublets can be affected by collisional interference. The paper is the first part of a cycle of studies devoted to the problems of spectral line broadening of ammonia.

  9. Trade-off results and preliminary designs of Near-Term Hybrid Vehicles

    NASA Technical Reports Server (NTRS)

    Sandberg, J. J.

    1980-01-01

    Phase I of the Near-Term Hybrid Vehicle Program involved the development of preliminary designs of electric/heat engine hybrid passenger vehicles. The preliminary designs were developed on the basis of mission analysis, performance specification, and design trade-off studies conducted independently by four contractors. THe resulting designs involve parallel hybrid (heat engine/electric) propulsion systems with significant variation in component selection, power train layout, and control strategy. Each of the four designs is projected by its developer as having the potential to substitute electrical energy for 40% to 70% of the petroleum fuel consumed annually by its conventional counterpart.

  10. Optical implementation of polarization-independent, bidirectional, nonblocking Clos network using polarization control technique in free space

    NASA Astrophysics Data System (ADS)

    Yang, Junbo; Yang, Jiankun; Li, Xiujian; Chang, Shengli; Su, Xianyu; Ping, Xu

    2011-04-01

    The clos network is one of the earliest multistage interconnection networks. Recently, it has been widely studied in parallel optical information processing systems, and there have been many efforts to develop this network. In this paper, a smart and compact Clos network, including Clos(2,3,2) and Clos(2,4,2), is proposed by using polarizing beam-splitters (PBS), phase spatial light modulators (PSLM), and mirrors. PBS features that are s-component (perpendicular to the incident plane) of the incident light beam is reflected, and the p-component (parallel to the incident plane) passes through it. According to switching logic, under control of external electrical signals, PSLM functions to control routing paths of the signal beams, i.e., the polarization of each optical signal is rotated or not rotated 90° by a programmable PSLM. This new type of configuration grants the features of less optical components, compact in structure, efficient in performance, and insensitive to polarization of signal beam. In addition, the straight, the exchange, and the broadcast functions of the basic switch element are implemented bidirectionally in free-space. Furthermore, the new optical experimental module of 2×3 and 2×4 optical switch is also presented by a cascading polarization-independent bidirectional 2×2 optical switch. Simultaneously, the routing state-table of 2×3 and 2×4 optical switch to perform all permutation output and nonblocking switch for the input signal beam, is achieved. Since the proposed optical setup consists of only optical polarization elements, it is compact in structure, and possesses a low energy loss, a high signal-to-ratio, and an available large number of optical channels. Finally, the discussions and the experimental results show that the Clos network proposed here should be helpful in the design of large-scale network matrix, and may be used in optical communication and optical information processing.

  11. Comparison between four dissimilar solar panel configurations

    NASA Astrophysics Data System (ADS)

    Suleiman, K.; Ali, U. A.; Yusuf, Ibrahim; Koko, A. D.; Bala, S. I.

    2017-12-01

    Several studies on photovoltaic systems focused on how it operates and energy required in operating it. Little attention is paid on its configurations, modeling of mean time to system failure, availability, cost benefit and comparisons of parallel and series-parallel designs. In this research work, four system configurations were studied. Configuration I consists of two sub-components arranged in parallel with 24 V each, configuration II consists of four sub-components arranged logically in parallel with 12 V each, configuration III consists of four sub-components arranged in series-parallel with 8 V each, and configuration IV has six sub-components with 6 V each arranged in series-parallel. Comparative analysis was made using Chapman Kolmogorov's method. The derivation for explicit expression of mean time to system failure, steady state availability and cost benefit analysis were performed, based on the comparison. Ranking method was used to determine the optimal configuration of the systems. The results of analytical and numerical solutions of system availability and mean time to system failure were determined and it was found that configuration I is the optimal configuration.

  12. Integration of statistical and physiological analyses of adaptation of near-isogenic barley lines.

    PubMed

    Romagosa, I; Fox, P N; García Del Moral, L F; Ramos, J M; García Del Moral, B; Roca de Togores, F; Molina-Cano, J L

    1993-08-01

    Seven near-isogenic barley lines, differing for three independent mutant genes, were grown in 15 environments in Spain. Genotype x environment interaction (G x E) for grain yield was examined with the Additive Main Effects and Multiplicative interaction (AMMI) model. The results of this statistical analysis of multilocation yield-data were compared with a morpho-physiological characterization of the lines at two sites (Molina-Cano et al. 1990). The first two principal component axes from the AMMI analysis were strongly associated with the morpho-physiological characters. The independent but parallel discrimination among genotypes reflects genetic differences and highlights the power of the AMMI analysis as a tool to investigate G x E. Characters which appear to be positively associated with yield in the germplasm under study could be identified for some environments.

  13. Explicit pre-training instruction does not improve implicit perceptual-motor sequence learning

    PubMed Central

    Sanchez, Daniel J.; Reber, Paul J.

    2012-01-01

    Memory systems theory argues for separate neural systems supporting implicit and explicit memory in the human brain. Neuropsychological studies support this dissociation, but empirical studies of cognitively healthy participants generally observe that both kinds of memory are acquired to at least some extent, even in implicit learning tasks. A key question is whether this observation reflects parallel intact memory systems or an integrated representation of memory in healthy participants. Learning of complex tasks in which both explicit instruction and practice is used depends on both kinds of memory, and how these systems interact will be an important component of the learning process. Theories that posit an integrated, or single, memory system for both types of memory predict that explicit instruction should contribute directly to strengthening task knowledge. In contrast, if the two types of memory are independent and acquired in parallel, explicit knowledge should have no direct impact and may serve in a “scaffolding” role in complex learning. Using an implicit perceptual-motor sequence learning task, the effect of explicit pre-training instruction on skill learning and performance was assessed. Explicit pre-training instruction led to robust explicit knowledge, but sequence learning did not benefit from the contribution of pre-training sequence memorization. The lack of an instruction benefit suggests that during skill learning, implicit and explicit memory operate independently. While healthy participants will generally accrue parallel implicit and explicit knowledge in complex tasks, these types of information appear to be separately represented in the human brain consistent with multiple memory systems theory. PMID:23280147

  14. Genetic association of impulsivity in young adults: a multivariate study

    PubMed Central

    Khadka, S; Narayanan, B; Meda, S A; Gelernter, J; Han, S; Sawyer, B; Aslanzadeh, F; Stevens, M C; Hawkins, K A; Anticevic, A; Potenza, M N; Pearlson, G D

    2014-01-01

    Impulsivity is a heritable, multifaceted construct with clinically relevant links to multiple psychopathologies. We assessed impulsivity in young adult (N~2100) participants in a longitudinal study, using self-report questionnaires and computer-based behavioral tasks. Analysis was restricted to the subset (N=426) who underwent genotyping. Multivariate association between impulsivity measures and single-nucleotide polymorphism data was implemented using parallel independent component analysis (Para-ICA). Pathways associated with multiple genes in components that correlated significantly with impulsivity phenotypes were then identified using a pathway enrichment analysis. Para-ICA revealed two significantly correlated genotype–phenotype component pairs. One impulsivity component included the reward responsiveness subscale and behavioral inhibition scale of the Behavioral-Inhibition System/Behavioral-Activation System scale, and the second impulsivity component included the non-planning subscale of the Barratt Impulsiveness Scale and the Experiential Discounting Task. Pathway analysis identified processes related to neurogenesis, nervous system signal generation/amplification, neurotransmission and immune response. We identified various genes and gene regulatory pathways associated with empirically derived impulsivity components. Our study suggests that gene networks implicated previously in brain development, neurotransmission and immune response are related to impulsive tendencies and behaviors. PMID:25268255

  15. Manipulating parallel circuits: the perioperative management of patients with complex congenital cardiac disease.

    PubMed

    Lawrenson, John; Eyskens, Benedicte; Vlasselaers, Dirk; Gewillig, Marc

    2003-08-01

    In all patients undergoing cardiac surgery, the effective delivery of oxygen to the tissues is of paramount importance. In the patient with relatively normal cardiac structures, the pulmonary and systemic circulations are relatively independent of each other. In the patient with a functional single ventricle, the pulmonary and systemic circulations are dependent on the same pump. As a consequence of this interdependency, the haemodynamic changes following complex palliative procedures, such as the Norwood operation, can be difficult to understand. Comparison of the newly created surgical connections to a simple set of direct current electrical circuits may help the practitioner to successfully care for the patient. In patients undergoing complex palliations, the pulmonary and systemic circulations can be compared to two circuits in parallel. Manipulations of variables, such as resistance or flow, in one circuit, can profoundly affect the performance of the other circuit. A large pulmonary flow might result in a large increase in the saturation of haemoglobin with oxygen returning to the heart via the pulmonary veins at the expense of a decreased systemic flow. Accurate balancing of these parallel circulations requires an appreciation of all interventions that can affect individual components of both circulations.

  16. Efficiency of parallel direct optimization

    NASA Technical Reports Server (NTRS)

    Janies, D. A.; Wheeler, W. C.

    2001-01-01

    Tremendous progress has been made at the level of sequential computation in phylogenetics. However, little attention has been paid to parallel computation. Parallel computing is particularly suited to phylogenetics because of the many ways large computational problems can be broken into parts that can be analyzed concurrently. In this paper, we investigate the scaling factors and efficiency of random addition and tree refinement strategies using the direct optimization software, POY, on a small (10 slave processors) and a large (256 slave processors) cluster of networked PCs running LINUX. These algorithms were tested on several data sets composed of DNA and morphology ranging from 40 to 500 taxa. Various algorithms in POY show fundamentally different properties within and between clusters. All algorithms are efficient on the small cluster for the 40-taxon data set. On the large cluster, multibuilding exhibits excellent parallel efficiency, whereas parallel building is inefficient. These results are independent of data set size. Branch swapping in parallel shows excellent speed-up for 16 slave processors on the large cluster. However, there is no appreciable speed-up for branch swapping with the further addition of slave processors (>16). This result is independent of data set size. Ratcheting in parallel is efficient with the addition of up to 32 processors in the large cluster. This result is independent of data set size. c2001 The Willi Hennig Society.

  17. Efficient abstract data type components for distributed and parallel systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bastani, F.; Hilal, W.; Iyengar, S.S.

    1987-10-01

    One way of improving software system's comprehensibility and maintainability is to decompose it into several components, each of which encapsulates some information concerning the system. These components can be classified into four categories, namely, abstract data type, functional, interface, and control components. Such a classfication underscores the need for different specification, implementation, and performance-improvement methods for different types of components. This article focuses on the development of high-performance abstract data type components for distributed and parallel environments.

  18. Effect of Substrate Wetting on the Morphology and Dynamics of Phase Separating Multi-Component Mixture

    NASA Astrophysics Data System (ADS)

    Goyal, Abheeti; Toschi, Federico; van der Schoot, Paul

    2017-11-01

    We study the morphological evolution and dynamics of phase separation of multi-component mixture in thin film constrained by a substrate. Specifically, we have explored the surface-directed spinodal decomposition of multicomponent mixture numerically by Free Energy Lattice Boltzmann (LB) simulations. The distinguishing feature of this model over the Shan-Chen (SC) model is that we have explicit and independent control over the free energy functional and EoS of the system. This vastly expands the ambit of physical systems that can be realistically simulated by LB simulations. We investigate the effect of composition, film thickness and substrate wetting on the phase morphology and the mechanism of growth in the vicinity of the substrate. The phase morphology and averaged size in the vicinity of the substrate fluctuate greatly due to the wetting of the substrate in both the parallel and perpendicular directions. Additionally, we also describe how the model presented here can be extended to include an arbitrary number of fluid components.

  19. The control of attentional target selection in a colour/colour conjunction task.

    PubMed

    Berggren, Nick; Eimer, Martin

    2016-11-01

    To investigate the time course of attentional object selection processes in visual search tasks where targets are defined by a combination of features from the same dimension, we measured the N2pc component as an electrophysiological marker of attentional object selection during colour/colour conjunction search. In Experiment 1, participants searched for targets defined by a combination of two colours, while ignoring distractor objects that matched only one of these colours. Reliable N2pc components were triggered by targets and also by partially matching distractors, even when these distractors were accompanied by a target in the same display. The target N2pc was initially equal in size to the sum of the two N2pc components to the two different types of partially matching distractors and became superadditive from approximately 250 ms after search display onset. Experiment 2 demonstrated that the superadditivity of the target N2pc was not due to a selective disengagement of attention from task-irrelevant partially matching distractors. These results indicate that attention was initially deployed separately and in parallel to all target-matching colours, before attentional allocation processes became sensitive to the presence of both matching colours within the same object. They suggest that attention can be controlled simultaneously and independently by multiple features from the same dimension and that feature-guided attentional selection processes operate in parallel for different target-matching objects in the visual field.

  20. Asymmetry in the Farley-Buneman dispersion relation caused by parallel electric fields

    NASA Astrophysics Data System (ADS)

    Forsythe, Victoriya V.; Makarevich, Roman A.

    2016-11-01

    An implicit assumption utilized in studies of E region plasma waves generated by the Farley-Buneman instability (FBI) is that the FBI dispersion relation and its solutions for the growth rate and phase velocity are perfectly symmetric with respect to the reversal of the wave propagation component parallel to the magnetic field. In the present study, a recently derived general dispersion relation that describes fundamental plasma instabilities in the lower ionosphere including FBI is considered and it is demonstrated that the dispersion relation is symmetric only for background electric fields that are perfectly perpendicular to the magnetic field. It is shown that parallel electric fields result in significant differences between the growth rates and phase velocities for propagation of parallel components of opposite signs. These differences are evaluated using numerical solutions of the general dispersion relation and shown to exhibit an approximately linear relationship with the parallel electric field near the E region peak altitude of 110 km. An analytic expression for the differences is also derived from an approximate version of the dispersion relation, with comparisons between numerical and analytic results agreeing near 110 km. It is further demonstrated that parallel electric fields do not change the overall symmetry when the full 3-D wave propagation vector is reversed, with no symmetry seen when either the perpendicular or parallel component is reversed. The present results indicate that moderate-to-strong parallel electric fields of 0.1-1.0 mV/m can result in experimentally measurable differences between the characteristics of plasma waves with parallel propagation components of opposite polarity.

  1. High-Resolution Study of the First Stretching Overtones of H3Si79Br.

    PubMed

    Ceausu; Graner; Bürger; Mkadmi; Pracna; Lafferty

    1998-11-01

    The Fourier transform infrared spectrum of monoisotopic H3Si79Br (resolution 7.7 x 10(-3) cm-1) was studied from 4200 to 4520 cm-1, in the region of the first overtones of the Si-H stretching vibration. The investigation of the spectrum revealed the presence of two band systems, the first consisting of one parallel (nu0 = 4340.2002 cm-1) and one perpendicular (nu0 = 4342.1432 cm-1) strong component, and the second of one parallel (nu0 = 4405.789 cm-1) and one perpendicular (nu0 = 4416.233 cm-1) weak component. The rovibrational analysis shows strong local perturbations for both strong and weak systems. Seven hundred eighty-one nonzero-weighted transitions belonging to the strong system [the (200) manifold in the local mode picture] were fitted to a simple model involving a perpendicular component interacting by a weak Coriolis resonance with a parallel component. The most severely perturbed transitions (whose ||obs-calc || values exceeded 3 x 10(-3) cm-1) were given zero weights. The standard deviations of the fit were 1.0 x 10(-3) and 0.69 x 10(-3) cm-1 for the parallel and the perpendicular components, respectively. The weak band system, severely perturbed by many "dark" perturbers, was fitted to a model involving one parallel and one perpendicular band, connected by a Coriolis-type resonance. The K" . DeltaK = +10 to +18 subbands of the perpendicular component, which showed very high observed - calculated values ( approximately 0.5 cm-1), were excluded from this calculation. The standard deviations of the fit were 11 x 10(-3) and 13 x 10(-3) cm-1 for the parallel and the perpendicular components, respectively. Copyright 1998 Academic Press.

  2. Deep Evolutionary Comparison of Gene Expression Identifies Parallel Recruitment of Trans-Factors in Two Independent Origins of C4 Photosynthesis

    PubMed Central

    Kümpers, Britta M. C.; Smith-Unna, Richard D.; Hibberd, Julian M.

    2014-01-01

    With at least 60 independent origins spanning monocotyledons and dicotyledons, the C4 photosynthetic pathway represents one of the most remarkable examples of convergent evolution. The recurrent evolution of this highly complex trait involving alterations to leaf anatomy, cell biology and biochemistry allows an increase in productivity by ∼50% in tropical and subtropical areas. The extent to which separate lineages of C4 plants use the same genetic networks to maintain C4 photosynthesis is unknown. We developed a new informatics framework to enable deep evolutionary comparison of gene expression in species lacking reference genomes. We exploited this to compare gene expression in species representing two independent C4 lineages (Cleome gynandra and Zea mays) whose last common ancestor diverged ∼140 million years ago. We define a cohort of 3,335 genes that represent conserved components of leaf and photosynthetic development in these species. Furthermore, we show that genes encoding proteins of the C4 cycle are recruited into networks defined by photosynthesis-related genes. Despite the wide evolutionary separation and independent origins of the C4 phenotype, we report that these species use homologous transcription factors to both induce C4 photosynthesis and to maintain the cell specific gene expression required for the pathway to operate. We define a core molecular signature associated with leaf and photosynthetic maturation that is likely shared by angiosperm species derived from the last common ancestor of the monocotyledons and dicotyledons. We show that deep evolutionary comparisons of gene expression can reveal novel insight into the molecular convergence of highly complex phenotypes and that parallel evolution of trans-factors underpins the repeated appearance of C4 photosynthesis. Thus, exploitation of extant natural variation associated with complex traits can be used to identify regulators. Moreover, the transcription factors that are shared by independent C4 lineages are key targets for engineering the C4 pathway into C3 crops such as rice. PMID:24901697

  3. Micropower Mixed-signal VLSI Independent Component Analysis for Gradient Flow Acoustic Source Separation.

    PubMed

    Stanaćević, Milutin; Li, Shuo; Cauwenberghs, Gert

    2016-07-01

    A parallel micro-power mixed-signal VLSI implementation of independent component analysis (ICA) with reconfigurable outer-product learning rules is presented. With the gradient sensing of the acoustic field over a miniature microphone array as a pre-processing method, the proposed ICA implementation can separate and localize up to 3 sources in mild reverberant environment. The ICA processor is implemented in 0.5 µm CMOS technology and occupies 3 mm × 3 mm area. At 16 kHz sampling rate, ASIC consumes 195 µW power from a 3 V supply. The outer-product implementation of natural gradient and Herault-Jutten ICA update rules demonstrates comparable performance to benchmark FastICA algorithm in ideal conditions and more robust performance in noisy and reverberant environment. Experiments demonstrate perceptually clear separation and precise localization over wide range of separation angles of two speech sources presented through speakers positioned at 1.5 m from the array on a conference room table. The presented ASIC leads to a extreme small form factor and low power consumption microsystem for source separation and localization required in applications like intelligent hearing aids and wireless distributed acoustic sensor arrays.

  4. Single trial decoding of belief decision making from EEG and fMRI data using independent components features

    PubMed Central

    Douglas, Pamela K.; Lau, Edward; Anderson, Ariana; Head, Austin; Kerr, Wesley; Wollner, Margalit; Moyer, Daniel; Li, Wei; Durnhofer, Mike; Bramen, Jennifer; Cohen, Mark S.

    2013-01-01

    The complex task of assessing the veracity of a statement is thought to activate uniquely distributed brain regions based on whether a subject believes or disbelieves a given assertion. In the current work, we present parallel machine learning methods for predicting a subject's decision response to a given propositional statement based on independent component (IC) features derived from EEG and fMRI data. Our results demonstrate that IC features outperformed features derived from event related spectral perturbations derived from any single spectral band, yet were similar to accuracy across all spectral bands combined. We compared our diagnostic IC spatial maps with our conventional general linear model (GLM) results, and found that informative ICs had significant spatial overlap with our GLM results, yet also revealed unique regions like amygdala that were not statistically significant in GLM analyses. Overall, these results suggest that ICs may yield a parsimonious feature set that can be used along with a decision tree structure for interpretation of features used in classifying complex cognitive processes such as belief and disbelief across both fMRI and EEG neuroimaging modalities. PMID:23914164

  5. Assembly of the cnidarian camera-type eye from vertebrate-like components.

    PubMed

    Kozmik, Zbynek; Ruzickova, Jana; Jonasova, Kristyna; Matsumoto, Yoshifumi; Vopalensky, Pavel; Kozmikova, Iryna; Strnad, Hynek; Kawamura, Shoji; Piatigorsky, Joram; Paces, Vaclav; Vlcek, Cestmir

    2008-07-01

    Animal eyes are morphologically diverse. Their assembly, however, always relies on the same basic principle, i.e., photoreceptors located in the vicinity of dark shielding pigment. Cnidaria as the likely sister group to the Bilateria are the earliest branching phylum with a well developed visual system. Here, we show that camera-type eyes of the cubozoan jellyfish, Tripedalia cystophora, use genetic building blocks typical of vertebrate eyes, namely, a ciliary phototransduction cascade and melanogenic pathway. Our findings indicative of parallelism provide an insight into eye evolution. Combined, the available data favor the possibility that vertebrate and cubozoan eyes arose by independent recruitment of orthologous genes during evolution.

  6. Ultrascalable petaflop parallel supercomputer

    DOEpatents

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Chiu, George [Cross River, NY; Cipolla, Thomas M [Katonah, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Hall, Shawn [Pleasantville, NY; Haring, Rudolf A [Cortlandt Manor, NY; Heidelberger, Philip [Cortlandt Manor, NY; Kopcsay, Gerard V [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Salapura, Valentina [Chappaqua, NY; Sugavanam, Krishnan [Mahopac, NY; Takken, Todd [Brewster, NY

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  7. Conjunction of anti-parallel and component reconnection at the dayside MP: Cluster and Double Star coordinated observation on 6 April 2004

    NASA Astrophysics Data System (ADS)

    Wang, J.; Pu, Z. Y.; Fu, S. Y.; Wang, X. G.; Xiao, C. J.; Dunlop, M. W.; Wei, Y.; Bogdanova, Y. V.; Zong, Q. G.; Xie, L.

    2011-05-01

    Previous theoretical and simulation studies have suggested that the anti-parallel and component reconnection can occur simultaneously on the dayside magnetopause. Certain observations have also been reported to support global conjunct pattern of magnetic reconnection. Here, we show direct evidence for the conjunction of anti-parallel and component MR using coordinated observations of Double Star TC-1 and Cluster under the same IMF condition on 6 April, 2004. The global MR X-line configuration constructed is in good agreement with the “S-shape” model.

  8. Efficient Parallel Video Processing Techniques on GPU: From Framework to Implementation

    PubMed Central

    Su, Huayou; Wen, Mei; Wu, Nan; Ren, Ju; Zhang, Chunyuan

    2014-01-01

    Through reorganizing the execution order and optimizing the data structure, we proposed an efficient parallel framework for H.264/AVC encoder based on massively parallel architecture. We implemented the proposed framework by CUDA on NVIDIA's GPU. Not only the compute intensive components of the H.264 encoder are parallelized but also the control intensive components are realized effectively, such as CAVLC and deblocking filter. In addition, we proposed serial optimization methods, including the multiresolution multiwindow for motion estimation, multilevel parallel strategy to enhance the parallelism of intracoding as much as possible, component-based parallel CAVLC, and direction-priority deblocking filter. More than 96% of workload of H.264 encoder is offloaded to GPU. Experimental results show that the parallel implementation outperforms the serial program by 20 times of speedup ratio and satisfies the requirement of the real-time HD encoding of 30 fps. The loss of PSNR is from 0.14 dB to 0.77 dB, when keeping the same bitrate. Through the analysis to the kernels, we found that speedup ratios of the compute intensive algorithms are proportional with the computation power of the GPU. However, the performance of the control intensive parts (CAVLC) is much related to the memory bandwidth, which gives an insight for new architecture design. PMID:24757432

  9. Parallel algorithms for placement and routing in VLSI design. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Brouwer, Randall Jay

    1991-01-01

    The computational requirements for high quality synthesis, analysis, and verification of very large scale integration (VLSI) designs have rapidly increased with the fast growing complexity of these designs. Research in the past has focused on the development of heuristic algorithms, special purpose hardware accelerators, or parallel algorithms for the numerous design tasks to decrease the time required for solution. Two new parallel algorithms are proposed for two VLSI synthesis tasks, standard cell placement and global routing. The first algorithm, a parallel algorithm for global routing, uses hierarchical techniques to decompose the routing problem into independent routing subproblems that are solved in parallel. Results are then presented which compare the routing quality to the results of other published global routers and which evaluate the speedups attained. The second algorithm, a parallel algorithm for cell placement and global routing, hierarchically integrates a quadrisection placement algorithm, a bisection placement algorithm, and the previous global routing algorithm. Unique partitioning techniques are used to decompose the various stages of the algorithm into independent tasks which can be evaluated in parallel. Finally, results are presented which evaluate the various algorithm alternatives and compare the algorithm performance to other placement programs. Measurements are presented on the parallel speedups available.

  10. Deformation due to the distension of cylindrical igneous contacts: A kinematic model

    NASA Astrophysics Data System (ADS)

    Morgan, John

    1980-06-01

    A simple kinematic model is described that predicts the state of overall wall-rock strain resulting from the distension of igneous contacts. It applies to the axially symmetric expansion of any pluton whose overall shape is a cylinder with circular cross section i.e. to late magmatic plutons which are circular or annular in cross section. The model is not capable of predicting the strain distribution in the zone of contact strain, but does predict components of overall strain whose magnitudes are calculated from the change in shape of the zone of contact strain. These strain components are: (1) overall radial shortening of the wall rocks overlineer; (2) overall vertical extension overlineev; and (3) overall horizontal extension parallel to the contact overlineeh (the axis of symmetry is arbitrarily oriented vertically). In addition, one local strain magnitude can be predicted, namely the horizontal extension of the contact surface ehc. The four strain parameters and {(1 + overlineev) }/{(1 + overlineeh}) are graphed as functions of two independent variables: (1) outward distension of the contact ( r - r0)/ r; and (2) depth of contact strain ( rd - r)/ r. r is the present, observed radius of the pluton, r0 is the original radius and rd is the radius of contact strain. If ( rd- r)/ r is reduced or ( r - r0)/ r is increased, absolute values of the overall strain components are increased, ehc increases with ( r - r0)/ r but is independent of ( r d - r)/r · (1 + overlineev)/(l + overlineeh) ≅ 1 over a large range of values of both independent variables. The model has been applied to two Archean plutons in northwestern Ontario. According to a previous study, strain near the contact of the Bamaji-Blackstone batholith is characterized by large values of extension parallel to the contact and shortening normal to the contact, ( r - r0)/r and ( rd - r)/ r are estimated to be less than 0.20 and 0.27 respectively. The horizontal extension parallel to the contact is apparently a minimum estimate of ehc and the depth of contact strain was previously underestimated. The range of values of ehc indicates that ( r - r0)/ r is larger than previously estimated by a factor of at least three. A similar problem has been encountered at the convex boundary of the Marmion Lake crescentic pluton. The pluton was emplaced along an older contact between greenstone and tonalitic gneiss. A minimum value of the outward displacement of the convex boundary of the pluton can be estimated from a major fold in the greenstone. It is found that the magnitude of this outward displacement is greater than the width of the pluton or ( r - r0). Apparently, the folding pre-dates the emplacement of the crescent; it probably dates from the emplacement of the tonalitic gneiss into greenstone cover.

  11. Glutamate-Mediated Primary Somatosensory Cortex Excitability Correlated with Circulating Copper and Ceruloplasmin

    PubMed Central

    Tecchio, Franca; Assenza, Giovanni; Zappasodi, Filippo; Mariani, Stefania; Salustri, Carlo; Squitti, Rosanna

    2011-01-01

    Objective. To verify whether markers of metal homeostasis are related to a magnetoencephalographic index representative of glutamate-mediated excitability of the primary somatosensory cortex. The index is identified as the source strength of the earliest component (M20) of the somatosensory magnetic fields (SEFs) evoked by right median nerve stimulation at wrist. Method. Thirty healthy right-handed subjects (51 ± 22 years) were enrolled in the study. A source reconstruction algorithm was applied to assess the amount of synchronously activated neurons subtending the M20 and the following SEF component (M30), which is generated by two independent contributions of gabaergic and glutamatergic transmission. Serum copper, ceruloplasmin, iron, transferrin, transferrin saturation, and zinc levels were measured. Results. Total copper and ceruloplasmin negatively correlated with the M20 source strength. Conclusion. This pilot study suggests that higher level of body copper reserve, as marked by ceruloplasmin variations, parallels lower cortical glutamatergic responsiveness. PMID:22145081

  12. Thermogravimetric analysis of the gasification of microalgae Chlorella vulgaris.

    PubMed

    Figueira, Camila Emilia; Moreira, Paulo Firmino; Giudici, Reinaldo

    2015-12-01

    The gasification of microalgae Chlorella vulgaris under an atmosphere of argon and water vapor was investigated by thermogravimetric analysis. The data were interpreted by using conventional isoconversional methods and also by the independent parallel reaction (IPR) model, in which the degradation is considered to happen individually to each pseudo-component of biomass (lipid, carbohydrate and protein). The IPR model allows obtaining the kinetic parameters of the degradation reaction of each component. Three main stages were observed during the gasification process and the differential thermogravimetric curve was satisfactorily fitted by the IPR model considering three pseudocomponents. The comparison of the activation energy values obtained by the methods and those found in the literature for other microalgae was satisfactory. Quantification of reaction products was performed using online gas chromatography. The major products detected were H2, CO and CH4, indicating the potential for producing fuel gas and syngas from microalgae. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Solar system applications of Mie theory and of radiative transfer of polarized light

    NASA Technical Reports Server (NTRS)

    Whitehill, L. P.

    1972-01-01

    A theory of the multiple scattering of polarized light is discussed using the doubling method of van de Hulst. The concept of the Stokes parameters is derived and used to develop the form of the scattering phase matrix of a single particle. The diffuse reflection and transmission matrices of a single scattering plane parallel atmosphere are expressed as a function of the phase matrix, and the symmetry properties of these matrices are examined. Four matrices are required to describe scattering and transmission. The scattering matrix that results from the addition of two identical layers is derived. Using the doubling method, the scattering and transmission matrices of layers of arbitrary optical thickness can be derived. The doubling equations are then rewritten in terms of their Fourier components. Computation time is reduced since each Fourier component doubles independently. Computation time is also reduced through the use of symmetry properties.

  14. SPSS and SAS programs for determining the number of components using parallel analysis and velicer's MAP test.

    PubMed

    O'Connor, B P

    2000-08-01

    Popular statistical software packages do not have the proper procedures for determining the number of components in factor and principal components analyses. Parallel analysis and Velicer's minimum average partial (MAP) test are validated procedures, recommended widely by statisticians. However, many researchers continue to use alternative, simpler, but flawed procedures, such as the eigenvalues-greater-than-one rule. Use of the proper procedures might be increased if these procedures could be conducted within familiar software environments. This paper describes brief and efficient programs for using SPSS and SAS to conduct parallel analyses and the MAP test.

  15. Stress Fields Along Okinawa Trough and Ryukyu Arc Inferred From Regional Broadband Moment Tensors

    NASA Astrophysics Data System (ADS)

    Kubo, A.; Fukuyama, E.

    2001-12-01

    Most shallow earthquakes along Okinawa trough and Ryukyu arc are relatively small (M<5.5). Focal mechanism estimations for such events were difficult due to insufficient dataset. However, this situation is improved by regional broadband network (FREESIA). Lower limit of magnitude of the earthquakes determined becomes 1.5 smaller in M{}w than that of Harvard moment tensors. As a result, we could examine the stress field in more detail than Fournier et al.(2001, JGR, 106, 13751-) did based on surface geology and teleseismic moment tensors. In the NE Okinawa trough, extension axes are oblique to the trough strike, while in SW Okinawa trough, they are perpendicular to the trough. Fault type in SW is normal fault and gradually changes to mixture of normal and strike slip toward NE. In the Ryukyu arc, extension axes are parallel to the arc. Although this feature is not clear in the NW Ryukyu arc, arc parallel extension may be a major property of entire arc. Dominant fault type is normal fault and several strike slips with the same extensional component are included. The volcanic train is located at the edge of arc parallel extension field faced A simple explanation of the arc parallel extension is the response to the opening motion of the Okinawa trough. Another possible mechanism is forearc movement due to oblique subduction which is enhanced in SW. We consider that the Okinawa trough and the Ryukyu arc are independent stress provinces.

  16. Highly fault-tolerant parallel computation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spielman, D.A.

    We re-introduce the coded model of fault-tolerant computation in which the input and output of a computational device are treated as words in an error-correcting code. A computational device correctly computes a function in the coded model if its input and output, once decoded, are a valid input and output of the function. In the coded model, it is reasonable to hope to simulate all computational devices by devices whose size is greater by a constant factor but which are exponentially reliable even if each of their components can fail with some constant probability. We consider fine-grained parallel computations inmore » which each processor has a constant probability of producing the wrong output at each time step. We show that any parallel computation that runs for time t on w processors can be performed reliably on a faulty machine in the coded model using w log{sup O(l)} w processors and time t log{sup O(l)} w. The failure probability of the computation will be at most t {center_dot} exp(-w{sup 1/4}). The codes used to communicate with our fault-tolerant machines are generalized Reed-Solomon codes and can thus be encoded and decoded in O(n log{sup O(1)} n) sequential time and are independent of the machine they are used to communicate with. We also show how coded computation can be used to self-correct many linear functions in parallel with arbitrarily small overhead.« less

  17. Element-topology-independent preconditioners for parallel finite element computations

    NASA Technical Reports Server (NTRS)

    Park, K. C.; Alexander, Scott

    1992-01-01

    A family of preconditioners for the solution of finite element equations are presented, which are element-topology independent and thus can be applicable to element order-free parallel computations. A key feature of the present preconditioners is the repeated use of element connectivity matrices and their left and right inverses. The properties and performance of the present preconditioners are demonstrated via beam and two-dimensional finite element matrices for implicit time integration computations.

  18. Assembly of the cnidarian camera-type eye from vertebrate-like components

    PubMed Central

    Kozmik, Zbynek; Ruzickova, Jana; Jonasova, Kristyna; Matsumoto, Yoshifumi; Vopalensky, Pavel; Kozmikova, Iryna; Strnad, Hynek; Kawamura, Shoji; Piatigorsky, Joram; Paces, Vaclav; Vlcek, Cestmir

    2008-01-01

    Animal eyes are morphologically diverse. Their assembly, however, always relies on the same basic principle, i.e., photoreceptors located in the vicinity of dark shielding pigment. Cnidaria as the likely sister group to the Bilateria are the earliest branching phylum with a well developed visual system. Here, we show that camera-type eyes of the cubozoan jellyfish, Tripedalia cystophora, use genetic building blocks typical of vertebrate eyes, namely, a ciliary phototransduction cascade and melanogenic pathway. Our findings indicative of parallelism provide an insight into eye evolution. Combined, the available data favor the possibility that vertebrate and cubozoan eyes arose by independent recruitment of orthologous genes during evolution. PMID:18577593

  19. Enhanced mutagenesis parallels enhanced reactivation of herpes virus in a human cell line.

    PubMed Central

    Lytle, C D; Knott, D C

    1982-01-01

    U.v. irradiation of human NB-E cells results in enhanced mutagenesis and enhanced reactivation of u.v.-irradiated H-1 virus grown in those cells ( Cornelis et al., 1982). This paper reports a similar study using herpes simplex virus (HSV) in NB-E cells. The mutation frequency of HSV (resistance of virus plaque formation to 40 micrograms/ml iododeoxycytidine ) increased approximately linearly with exposure of the virus to u.v. radiation. HSV grown in unirradiated cells gave a slope of 1.8 X 10(-5)m2/J, with 3.2 X 10(-5)m2/J for HSV grown in cells irradiated (3 J/m2) 24 h before infection. There was no evidence for mutagenesis of unirradiated virus by irradiated cells, as seen with H-1 virus. Enhanced reactivation of irradiated HSV in parallel cultures increased virus survival, manifested as a change in slope of the final component of the two-component survival curve from a D0 of 27 J/m2 in unirradiated cells to 45 J/m2 in irradiated cells. Thus, enhanced mutagenesis and enhanced reactivation occurred for irradiated HSV in NB-E cells. The difference in the enhanced mutagenesis of HSV (dependent on damaged DNA sites) and of H-1 virus (primarily independent of damaged DNA sites) is discussed in terms of differences in DNA polymerases. PMID:6329698

  20. Quantum mechanical generalized phase-shift approach to atom-surface scattering: a Feshbach projection approach to dealing with closed channel effects.

    PubMed

    Maji, Kaushik; Kouri, Donald J

    2011-03-28

    We have developed a new method for solving quantum dynamical scattering problems, using the time-independent Schrödinger equation (TISE), based on a novel method to generalize a "one-way" quantum mechanical wave equation, impose correct boundary conditions, and eliminate exponentially growing closed channel solutions. The approach is readily parallelized to achieve approximate N(2) scaling, where N is the number of coupled equations. The full two-way nature of the TISE is included while propagating the wave function in the scattering variable and the full S-matrix is obtained. The new algorithm is based on a "Modified Cayley" operator splitting approach, generalizing earlier work where the method was applied to the time-dependent Schrödinger equation. All scattering variable propagation approaches to solving the TISE involve solving a Helmholtz-type equation, and for more than one degree of freedom, these are notoriously ill-behaved, due to the unavoidable presence of exponentially growing contributions to the numerical solution. Traditionally, the method used to eliminate exponential growth has posed a major obstacle to the full parallelization of such propagation algorithms. We stabilize by using the Feshbach projection operator technique to remove all the nonphysical exponentially growing closed channels, while retaining all of the propagating open channel components, as well as exponentially decaying closed channel components.

  1. A parallel randomized trial on the effect of a healthful diet on inflammageing and its consequences in European elderly people: design of the NU-AGE dietary intervention study.

    PubMed

    Berendsen, Agnes; Santoro, Aurelia; Pini, Elisa; Cevenini, Elisa; Ostan, Rita; Pietruszka, Barbara; Rolf, Katarzyna; Cano, Noël; Caille, Aurélie; Lyon-Belgy, Noëlle; Fairweather-Tait, Susan; Feskens, Edith; Franceschi, Claudio; de Groot, C P G M

    2013-01-01

    The proportion of European elderly is expected to increase to 30% in 2060. Combining dietary components may modulate many processes involved in ageing. So, it is likely that a healthful diet approach might have greater favourable impact on age-related decline than individual dietary components. This paper describes the design of a healthful diet intervention on inflammageing and its consequences in the elderly. The NU-AGE study is a parallel randomized one-year trial in 1250 apparently healthy, independently living European participants aged 65-80 years. Participants are randomised into either the diet group or control group. Participants in the diet group received dietary advice aimed at meeting the nutritional requirements of the ageing population. Special attention was paid to nutrients that may be inadequate or limiting in diets of elderly, such as vitamin D, vitamin B12, and calcium. C-reactive protein is measured as primary outcome. The NU-AGE study is the first dietary intervention investigating the effect of a healthful diet providing targeted nutritional recommendations for optimal health and quality of life in apparently healthy European elderly. Results of this intervention will provide evidence on the effect of a healthful diet on the prevention of age related decline. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  2. The Parallel Axiom

    ERIC Educational Resources Information Center

    Rogers, Pat

    1972-01-01

    Criteria for a reasonable axiomatic system are discussed. A discussion of the historical attempts to prove the independence of Euclids parallel postulate introduces non-Euclidean geometries. Poincare's model for a non-Euclidean geometry is defined and analyzed. (LS)

  3. Can we estimate total magnetization directions from aeromagnetic data using Helbig's integrals?

    USGS Publications Warehouse

    Phillips, J.D.

    2005-01-01

    An algorithm that implements Helbig's (1963) integrals for estimating the vector components (mx, my, mz) of tile magnetic dipole moment from the first order moments of the vector magnetic field components (??X, ??Y, ??Z) is tested on real and synthetic data. After a grid of total field aeromagnetic data is converted to vector component grids using Fourier filtering, Helbig's infinite integrals are evaluated as finite integrals in small moving windows using a quadrature algorithm based on the 2-D trapezoidal rule. Prior to integration, best-fit planar surfaces must be removed from the component data within the data windows in order to make the results independent of the coordinate system origin. Two different approaches are described for interpreting the results of the integration. In the "direct" method, results from pairs of different window sizes are compared to identify grid nodes where the angular difference between solutions is small. These solutions provide valid estimates of total magnetization directions for compact sources such as spheres or dipoles, but not for horizontally elongated or 2-D sources. In the "indirect" method, which is more forgiving of source geometry, results of the quadrature analysis are scanned for solutions that are parallel to a specified total magnetization direction.

  4. Design of object-oriented distributed simulation classes

    NASA Technical Reports Server (NTRS)

    Schoeffler, James D. (Principal Investigator)

    1995-01-01

    Distributed simulation of aircraft engines as part of a computer aided design package is being developed by NASA Lewis Research Center for the aircraft industry. The project is called NPSS, an acronym for 'Numerical Propulsion Simulation System'. NPSS is a flexible object-oriented simulation of aircraft engines requiring high computing speed. It is desirable to run the simulation on a distributed computer system with multiple processors executing portions of the simulation in parallel. The purpose of this research was to investigate object-oriented structures such that individual objects could be distributed. The set of classes used in the simulation must be designed to facilitate parallel computation. Since the portions of the simulation carried out in parallel are not independent of one another, there is the need for communication among the parallel executing processors which in turn implies need for their synchronization. Communication and synchronization can lead to decreased throughput as parallel processors wait for data or synchronization signals from other processors. As a result of this research, the following have been accomplished. The design and implementation of a set of simulation classes which result in a distributed simulation control program have been completed. The design is based upon MIT 'Actor' model of a concurrent object and uses 'connectors' to structure dynamic connections between simulation components. Connectors may be dynamically created according to the distribution of objects among machines at execution time without any programming changes. Measurements of the basic performance have been carried out with the result that communication overhead of the distributed design is swamped by the computation time of modules unless modules have very short execution times per iteration or time step. An analytical performance model based upon queuing network theory has been designed and implemented. Its application to realistic configurations has not been carried out.

  5. Design of Object-Oriented Distributed Simulation Classes

    NASA Technical Reports Server (NTRS)

    Schoeffler, James D.

    1995-01-01

    Distributed simulation of aircraft engines as part of a computer aided design package being developed by NASA Lewis Research Center for the aircraft industry. The project is called NPSS, an acronym for "Numerical Propulsion Simulation System". NPSS is a flexible object-oriented simulation of aircraft engines requiring high computing speed. It is desirable to run the simulation on a distributed computer system with multiple processors executing portions of the simulation in parallel. The purpose of this research was to investigate object-oriented structures such that individual objects could be distributed. The set of classes used in the simulation must be designed to facilitate parallel computation. Since the portions of the simulation carried out in parallel are not independent of one another, there is the need for communication among the parallel executing processors which in turn implies need for their synchronization. Communication and synchronization can lead to decreased throughput as parallel processors wait for data or synchronization signals from other processors. As a result of this research, the following have been accomplished. The design and implementation of a set of simulation classes which result in a distributed simulation control program have been completed. The design is based upon MIT "Actor" model of a concurrent object and uses "connectors" to structure dynamic connections between simulation components. Connectors may be dynamically created according to the distribution of objects among machines at execution time without any programming changes. Measurements of the basic performance have been carried out with the result that communication overhead of the distributed design is swamped by the computation time of modules unless modules have very short execution times per iteration or time step. An analytical performance model based upon queuing network theory has been designed and implemented. Its application to realistic configurations has not been carried out.

  6. Monolithic Parallel Tandem Organic Photovoltaic Cell with Transparent Carbon Nanotube Interlayer

    NASA Technical Reports Server (NTRS)

    Tanaka, S.; Mielczarek, K.; Ovalle-Robles, R.; Wang, B.; Hsu, D.; Zakhidov, A. A.

    2009-01-01

    We demonstrate an organic photovoltaic cell with a monolithic tandem structure in parallel connection. Transparent multiwalled carbon nanotube sheets are used as an interlayer anode electrode for this parallel tandem. The characteristics of front and back cells are measured independently. The short circuit current density of the parallel tandem cell is larger than the currents of each individual cell. The wavelength dependence of photocurrent for the parallel tandem cell shows the superposition spectrum of the two spectral sensitivities of the front and back cells. The monolithic three-electrode photovoltaic cell indeed operates as a parallel tandem with improved efficiency.

  7. Differentiating social and personal power: opposite effects on stereotyping, but parallel effects on behavioral approach tendencies.

    PubMed

    Lammers, Joris; Stoker, Janka I; Stapel, Diederik A

    2009-12-01

    How does power affect behavior? We posit that this depends on the type of power. We distinguish between social power (power over other people) and personal power (freedom from other people) and argue that these two types of power have opposite associations with independence and interdependence. We propose that when the distinction between independence and interdependence is relevant, social power and personal power will have opposite effects; however, they will have parallel effects when the distinction is irrelevant. In two studies (an experimental study and a large field study), we demonstrate this by showing that social power and personal power have opposite effects on stereotyping, but parallel effects on behavioral approach.

  8. Accuracy of the Parallel Analysis Procedure with Polychoric Correlations

    ERIC Educational Resources Information Center

    Cho, Sun-Joo; Li, Feiming; Bandalos, Deborah

    2009-01-01

    The purpose of this study was to investigate the application of the parallel analysis (PA) method for choosing the number of factors in component analysis for situations in which data are dichotomous or ordinal. Although polychoric correlations are sometimes used as input for component analyses, the random data matrices generated for use in PA…

  9. Parallel Algorithms and Patterns

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robey, Robert W.

    2016-06-16

    This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.

  10. Plasma waves downstream of weak collisionless shocks

    NASA Technical Reports Server (NTRS)

    Coroniti, F. V.; Greenstadt, E. W.; Moses, S. L.; Smith, E. J.; Tsurutani, B. T.

    1993-01-01

    In September 1983 the International Sun Earth Explorer 3 (ISEE 3) International Cometary Explorer (ICE) spacecraft made a long traversal of the distant dawnside flank region of the Earth's magnetosphere and had many encounters with the low Mach number bow shock. These weak shocks excite plasma wave electric field turbulence with amplitudes comparable to those detected in the much stronger bow shock near the nose region. Downstream of quasi-perpendicular (quasi-parallel) shocks, the E field spectra exhibit a strong peak (plateau) at midfrequencies (1 - 3 kHz); the plateau shape is produced by a low-frequency (100 - 300 Hz) emission which is more intense behind downstream of two quasi-perpendicular shocks show that the low frequency signals are polarized parallel to the magnetic field, whereas the midfrequency emissions are unpolarized or only weakly polarized. A new high frequency (10 - 30 kHz) emission which is above the maximum Doppler shift exhibit a distinct peak at high frequencies; this peak is often blurred by the large amplitude fluctuations of the midfrequency waves. The high-frequency component is strongly polarized along the magnetic field and varies independently of the lower-frequency waves.

  11. Turbo Trellis Coded Modulation With Iterative Decoding for Mobile Satellite Communications

    NASA Technical Reports Server (NTRS)

    Divsalar, D.; Pollara, F.

    1997-01-01

    In this paper, analytical bounds on the performance of parallel concatenation of two codes, known as turbo codes, and serial concatenation of two codes over fading channels are obtained. Based on this analysis, design criteria for the selection of component trellis codes for MPSK modulation, and a suitable bit-by-bit iterative decoding structure are proposed. Examples are given for throughput of 2 bits/sec/Hz with 8PSK modulation. The parallel concatenation example uses two rate 4/5 8-state convolutional codes with two interleavers. The convolutional codes' outputs are then mapped to two 8PSK modulations. The serial concatenated code example uses an 8-state outer code with rate 4/5 and a 4-state inner trellis code with 5 inputs and 2 x 8PSK outputs per trellis branch. Based on the above mentioned design criteria for fading channels, a method to obtain he structure of the trellis code with maximum diversity is proposed. Simulation results are given for AWGN and an independent Rayleigh fading channel with perfect Channel State Information (CSI).

  12. Effect of parallel electric fields on the ponderomotive stabilization of MHD instabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Litwin, C.; Hershkowitz, N.

    The contribution of the wave electric field component E/sub parallel/, parallel to the magnetic field, to the ponderomotive stabilization of curvature driven instabilities is evaluated and compared to the transverse component contribution. For the experimental density range, in which the stability is primarily determined by the m = 1 magnetosonic wave, this contribution is found to be the dominant and stabilizing when the electron temperature is neglected. For sufficiently high electron temperatures the dominant fast wave is found to be axially evanescent. In the same limit, E/sub parallel/ becomes radially oscillating. It is concluded that the increased electron temperature nearmore » the plasma surface reduces the magnitude of ponderomotive effects.« less

  13. Determining the Index of Refraction of an Unknown Object using Passive Polarimetric Imagery Degraded by Atmospheric Turbulence

    DTIC Science & Technology

    2010-08-09

    44 9 A photograph of a goniophotometer used by Bell and a schematic of a goniophotometer used by Mian et al...plane is called the parallel field component because it lies parallel to the specular plane. The incident electric field vector component which...resides in the plane or- thogonal to the specular plane is called the perpendicular field component because it lies perpendicular to the specular plane. If

  14. Coarse-grained component concurrency in Earth system modeling: parallelizing atmospheric radiative transfer in the GFDL AM3 model using the Flexible Modeling System coupling framework

    NASA Astrophysics Data System (ADS)

    Balaji, V.; Benson, Rusty; Wyman, Bruce; Held, Isaac

    2016-10-01

    Climate models represent a large variety of processes on a variety of timescales and space scales, a canonical example of multi-physics multi-scale modeling. Current hardware trends, such as Graphical Processing Units (GPUs) and Many Integrated Core (MIC) chips, are based on, at best, marginal increases in clock speed, coupled with vast increases in concurrency, particularly at the fine grain. Multi-physics codes face particular challenges in achieving fine-grained concurrency, as different physics and dynamics components have different computational profiles, and universal solutions are hard to come by. We propose here one approach for multi-physics codes. These codes are typically structured as components interacting via software frameworks. The component structure of a typical Earth system model consists of a hierarchical and recursive tree of components, each representing a different climate process or dynamical system. This recursive structure generally encompasses a modest level of concurrency at the highest level (e.g., atmosphere and ocean on different processor sets) with serial organization underneath. We propose to extend concurrency much further by running more and more lower- and higher-level components in parallel with each other. Each component can further be parallelized on the fine grain, potentially offering a major increase in the scalability of Earth system models. We present here first results from this approach, called coarse-grained component concurrency, or CCC. Within the Geophysical Fluid Dynamics Laboratory (GFDL) Flexible Modeling System (FMS), the atmospheric radiative transfer component has been configured to run in parallel with a composite component consisting of every other atmospheric component, including the atmospheric dynamics and all other atmospheric physics components. We will explore the algorithmic challenges involved in such an approach, and present results from such simulations. Plans to achieve even greater levels of coarse-grained concurrency by extending this approach within other components, such as the ocean, will be discussed.

  15. Texture-dependent motion signals in primate middle temporal area

    PubMed Central

    Gharaei, Saba; Tailby, Chris; Solomon, Selina S; Solomon, Samuel G

    2013-01-01

    Neurons in the middle temporal (MT) area of primate cortex provide an important stage in the analysis of visual motion. For simple stimuli such as bars and plaids some neurons in area MT – pattern cells – seem to signal motion independent of contour orientation, but many neurons – component cells – do not. Why area MT supports both types of receptive field is unclear. To address this we made extracellular recordings from single units in area MT of anaesthetised marmoset monkeys and examined responses to two-dimensional images with a large range of orientations and spatial frequencies. Component and pattern cell response remained distinct during presentation of these complex spatial textures. Direction tuning curves were sharpest in component cells when a texture contained a narrow range of orientations, but were similar across all neurons for textures containing all orientations. Response magnitude of pattern cells, but not component cells, increased with the spatial bandwidth of the texture. In addition, response variability in all neurons was reduced when the stimulus was rich in spatial texture. Fisher information analysis showed that component cells provide more informative responses than pattern cells when a texture contains a narrow range of orientations, but pattern cells had more informative responses for broadband textures. Component cells and pattern cells may therefore coexist because they provide complementary and parallel motion signals. PMID:24000175

  16. G-Protein Genomic Association With Normal Variation in Gray Matter Density

    PubMed Central

    Chen, Jiayu; Calhoun, Vince D.; Arias-Vasquez, Alejandro; Zwiers, Marcel P.; van Hulzen, Kimm; Fernández, Guillén; Fisher, Simon E.; Franke, Barbara; Turner, Jessica A.; Liu, Jingyu

    2017-01-01

    While detecting genetic variations underlying brain structures helps reveal mechanisms of neural disorders, high data dimensionality poses a major challenge for imaging genomic association studies. In this work, we present the application of a recently proposed approach, parallel independent component analysis with reference (pICA-R), to investigate genomic factors potentially regulating gray matter variation in a healthy population. This approach simultaneously assesses many variables for an aggregate effect and helps to elicit particular features in the data. We applied pICA-R to analyze gray matter density (GMD) images (274,131 voxels) in conjunction with single nucleotide polymorphism (SNP) data (666,019 markers) collected from 1,256 healthy individuals of the Brain Imaging Genetics (BIG) study. Guided by a genetic reference derived from the gene GNA14, pICA-R identified a significant SNP-GMD association (r = −0.16, P = 2.34 × 10−8), implying that subjects with specific genotypes have lower localized GMD. The identified components were then projected to an independent dataset from the Mind Clinical Imaging Consortium (MCIC) including 89 healthy individuals, and the obtained loadings again yielded a significant SNP-GMD association (r = −0.25, P = 0.02). The imaging component reflected GMD variations in frontal, precuneus, and cingulate regions. The SNP component was enriched in genes with neuronal functions, including synaptic plasticity, axon guidance, molecular signal transduction via PKA and CREB, highlighting the GRM1, PRKCH, GNA12, and CAMK2B genes. Collectively, our findings suggest that GNA12 and GNA14 play a key role in the genetic architecture underlying normal GMD variation in frontal and parietal regions. PMID:26248772

  17. The language parallel Pascal and other aspects of the massively parallel processor

    NASA Technical Reports Server (NTRS)

    Reeves, A. P.; Bruner, J. D.

    1982-01-01

    A high level language for the Massively Parallel Processor (MPP) was designed. This language, called Parallel Pascal, is described in detail. A description of the language design, a description of the intermediate language, Parallel P-Code, and details for the MPP implementation are included. Formal descriptions of Parallel Pascal and Parallel P-Code are given. A compiler was developed which converts programs in Parallel Pascal into the intermediate Parallel P-Code language. The code generator to complete the compiler for the MPP is being developed independently. A Parallel Pascal to Pascal translator was also developed. The architecture design for a VLSI version of the MPP was completed with a description of fault tolerant interconnection networks. The memory arrangement aspects of the MPP are discussed and a survey of other high level languages is given.

  18. Multiple independent autonomous hydraulic oscillators driven by a common gravity head.

    PubMed

    Kim, Sung-Jin; Yokokawa, Ryuji; Lesher-Perez, Sasha Cai; Takayama, Shuichi

    2015-06-15

    Self-switching microfluidic circuits that are able to perform biochemical experiments in a parallel and autonomous manner, similar to instruction-embedded electronics, are rarely implemented. Here, we present design principles and demonstrations for gravity-driven, integrated, microfluidic pulsatile flow circuits. With a common gravity head as the only driving force, these fluidic oscillator arrays realize a wide range of periods (0.4 s-2 h) and flow rates (0.10-63 μl min(-1)) with completely independent timing between the multiple oscillator sub-circuits connected in parallel. As a model application, we perform systematic, parallel analysis of endothelial cell elongation response to different fluidic shearing patterns generated by the autonomous microfluidic pulsed flow generation system.

  19. Parallel ICA and its hardware implementation in hyperspectral image analysis

    NASA Astrophysics Data System (ADS)

    Du, Hongtao; Qi, Hairong; Peterson, Gregory D.

    2004-04-01

    Advances in hyperspectral images have dramatically boosted remote sensing applications by providing abundant information using hundreds of contiguous spectral bands. However, the high volume of information also results in excessive computation burden. Since most materials have specific characteristics only at certain bands, a lot of these information is redundant. This property of hyperspectral images has motivated many researchers to study various dimensionality reduction algorithms, including Projection Pursuit (PP), Principal Component Analysis (PCA), wavelet transform, and Independent Component Analysis (ICA), where ICA is one of the most popular techniques. It searches for a linear or nonlinear transformation which minimizes the statistical dependence between spectral bands. Through this process, ICA can eliminate superfluous but retain practical information given only the observations of hyperspectral images. One hurdle of applying ICA in hyperspectral image (HSI) analysis, however, is its long computation time, especially for high volume hyperspectral data sets. Even the most efficient method, FastICA, is a very time-consuming process. In this paper, we present a parallel ICA (pICA) algorithm derived from FastICA. During the unmixing process, pICA divides the estimation of weight matrix into sub-processes which can be conducted in parallel on multiple processors. The decorrelation process is decomposed into the internal decorrelation and the external decorrelation, which perform weight vector decorrelations within individual processors and between cooperative processors, respectively. In order to further improve the performance of pICA, we seek hardware solutions in the implementation of pICA. Until now, there are very few hardware designs for ICA-related processes due to the complicated and iterant computation. This paper discusses capacity limitation of FPGA implementations for pICA in HSI analysis. A synthesis of Application-Specific Integrated Circuit (ASIC) is designed for pICA-based dimensionality reduction in HSI analysis. The pICA design is implemented using standard-height cells and aimed at TSMC 0.18 micron process. During the synthesis procedure, three ICA-related reconfigurable components are developed for the reuse and retargeting purpose. Preliminary results show that the standard-height cell based ASIC synthesis provide an effective solution for pICA and ICA-related processes in HSI analysis.

  20. Parallels among the ``music scores'' of solar cycles, space weather and Earth's climate

    NASA Astrophysics Data System (ADS)

    Kolláth, Zoltán; Oláh, Katalin; van Driel-Gesztelyi, Lidia

    2012-07-01

    Solar variability and its effects on the physical variability of our (space) environment produces complex signals. In the indicators of solar activity at least four independent cyclic components can be identified, all of them with temporal variations in their timescales. Time-frequency distributions (see Kolláth & Oláh 2009) are perfect tools to disclose the ``music scores'' in these complex time series. Special features in the time-frequency distributions, like frequency splitting, or modulations on different timescales provide clues, which can reveal similar trends among different indices like sunspot numbers, interplanetary magnetic field strength in the Earth's neighborhood and climate data. On the pseudo-Wigner Distribution (PWD) the frequency splitting of all the three main components (the Gleissberg and Schwabe cycles, and an ~5.5 year signal originating from cycle asymmetry, i.e. the Waldmeier effect) can be identified as a ``bubble'' shaped structure after 1950. The same frequency splitting feature can also be found in the heliospheric magnetic field data and the microwave radio flux.

  1. Nine-channel mid-power bipolar pulse generator based on a field programmable gate array

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haylock, Ben, E-mail: benjamin.haylock2@griffithuni.edu.au; Lenzini, Francesco; Kasture, Sachin

    Many channel arbitrary pulse sequence generation is required for the electro-optic reconfiguration of optical waveguide networks in Lithium Niobate. Here we describe a scalable solution to the requirement for mid-power bipolar parallel outputs, based on pulse patterns generated by an externally clocked field programmable gate array. Positive and negative pulses can be generated at repetition rates up to 80 MHz with pulse width adjustable in increments of 1.6 ns across nine independent outputs. Each channel can provide 1.5 W of RF power and can be synchronised with the operation of other components in an optical network such as light sourcesmore » and detectors through an external clock with adjustable delay.« less

  2. Parallel computation for biological sequence comparison: comparing a portable model to the native model for the Intel Hypercube.

    PubMed

    Nadkarni, P M; Miller, P L

    1991-01-01

    A parallel program for inter-database sequence comparison was developed on the Intel Hypercube using two models of parallel programming. One version was built using machine-specific Hypercube parallel programming commands. The other version was built using Linda, a machine-independent parallel programming language. The two versions of the program provide a case study comparing these two approaches to parallelization in an important biological application area. Benchmark tests with both programs gave comparable results with a small number of processors. As the number of processors was increased, the Linda version was somewhat less efficient. The Linda version was also run without change on Network Linda, a virtual parallel machine running on a network of desktop workstations.

  3. GPU-based Parallel Application Design for Emerging Mobile Devices

    NASA Astrophysics Data System (ADS)

    Gupta, Kshitij

    A revolution is underway in the computing world that is causing a fundamental paradigm shift in device capabilities and form-factor, with a move from well-established legacy desktop/laptop computers to mobile devices in varying sizes and shapes. Amongst all the tasks these devices must support, graphics has emerged as the 'killer app' for providing a fluid user interface and high-fidelity game rendering, effectively making the graphics processor (GPU) one of the key components in (present and future) mobile systems. By utilizing the GPU as a general-purpose parallel processor, this dissertation explores the GPU computing design space from an applications standpoint, in the mobile context, by focusing on key challenges presented by these devices---limited compute, memory bandwidth, and stringent power consumption requirements---while improving the overall application efficiency of the increasingly important speech recognition workload for mobile user interaction. We broadly partition trends in GPU computing into four major categories. We analyze hardware and programming model limitations in current-generation GPUs and detail an alternate programming style called Persistent Threads, identify four use case patterns, and propose minimal modifications that would be required for extending native support. We show how by manually extracting data locality and altering the speech recognition pipeline, we are able to achieve significant savings in memory bandwidth while simultaneously reducing the compute burden on GPU-like parallel processors. As we foresee GPU computing to evolve from its current 'co-processor' model into an independent 'applications processor' that is capable of executing complex work independently, we create an alternate application framework that enables the GPU to handle all control-flow dependencies autonomously at run-time while minimizing host involvement to just issuing commands, that facilitates an efficient application implementation. Finally, as compute and communication capabilities of mobile devices improve, we analyze energy implications of processing speech recognition locally (on-chip) and offloading it to servers (in-cloud).

  4. Independent Axes of Genetic Variation and Parallel Evolutionary Divergence Of Opercle Bone Shape in Threespine Stickleback

    PubMed Central

    Kimmel, Charles B.; Cresko, William A.; Phillips, Patrick C.; Ullmann, Bonnie; Currey, Mark; von Hippel, Frank; Kristjánsson, Bjarni K.; Gelmond, Ofer; McGuigan, Katrina

    2014-01-01

    Evolution of similar phenotypes in independent populations is often taken as evidence of adaptation to the same fitness optimum. However, the genetic architecture of traits might cause evolution to proceed more often toward particular phenotypes, and less often toward others, independently of the adaptive value of the traits. Freshwater populations of Alaskan threespine stickleback have repeatedly evolved the same distinctive opercle shape after divergence from an oceanic ancestor. Here we demonstrate that this pattern of parallel evolution is widespread, distinguishing oceanic and freshwater populations across the Pacific Coast of North America and Iceland. We test whether this parallel evolution reflects genetic bias by estimating the additive genetic variance– covariance matrix (G) of opercle shape in an Alaskan oceanic (putative ancestral) population. We find significant additive genetic variance for opercle shape and that G has the potential to be biasing, because of the existence of regions of phenotypic space with low additive genetic variation. However, evolution did not occur along major eigenvectors of G, rather it occurred repeatedly in the same directions of high evolvability. We conclude that the parallel opercle evolution is most likely due to selection during adaptation to freshwater habitats, rather than due to biasing effects of opercle genetic architecture. PMID:22276538

  5. Parallelization and automatic data distribution for nuclear reactor simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liebrock, L.M.

    1997-07-01

    Detailed attempts at realistic nuclear reactor simulations currently take many times real time to execute on high performance workstations. Even the fastest sequential machine can not run these simulations fast enough to ensure that the best corrective measure is used during a nuclear accident to prevent a minor malfunction from becoming a major catastrophe. Since sequential computers have nearly reached the speed of light barrier, these simulations will have to be run in parallel to make significant improvements in speed. In physical reactor plants, parallelism abounds. Fluids flow, controls change, and reactions occur in parallel with only adjacent components directlymore » affecting each other. These do not occur in the sequentialized manner, with global instantaneous effects, that is often used in simulators. Development of parallel algorithms that more closely approximate the real-world operation of a reactor may, in addition to speeding up the simulations, actually improve the accuracy and reliability of the predictions generated. Three types of parallel architecture (shared memory machines, distributed memory multicomputers, and distributed networks) are briefly reviewed as targets for parallelization of nuclear reactor simulation. Various parallelization models (loop-based model, shared memory model, functional model, data parallel model, and a combined functional and data parallel model) are discussed along with their advantages and disadvantages for nuclear reactor simulation. A variety of tools are introduced for each of the models. Emphasis is placed on the data parallel model as the primary focus for two-phase flow simulation. Tools to support data parallel programming for multiple component applications and special parallelization considerations are also discussed.« less

  6. Independent and Parallel Evolution of New Genes by Gene Duplication in Two Origins of C4 Photosynthesis Provides New Insight into the Mechanism of Phloem Loading in C4 Species.

    PubMed

    Emms, David M; Covshoff, Sarah; Hibberd, Julian M; Kelly, Steven

    2016-07-01

    C4 photosynthesis is considered one of the most remarkable examples of evolutionary convergence in eukaryotes. However, it is unknown whether the evolution of C4 photosynthesis required the evolution of new genes. Genome-wide gene-tree species-tree reconciliation of seven monocot species that span two origins of C4 photosynthesis revealed that there was significant parallelism in the duplication and retention of genes coincident with the evolution of C4 photosynthesis in these lineages. Specifically, 21 orthologous genes were duplicated and retained independently in parallel at both C4 origins. Analysis of this gene cohort revealed that the set of parallel duplicated and retained genes is enriched for genes that are preferentially expressed in bundle sheath cells, the cell type in which photosynthesis was activated during C4 evolution. Furthermore, functional analysis of the cohort of parallel duplicated genes identified SWEET-13 as a potential key transporter in the evolution of C4 photosynthesis in grasses, and provides new insight into the mechanism of phloem loading in these C4 species. C4 photosynthesis, gene duplication, gene families, parallel evolution. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  7. Independent and Parallel Evolution of New Genes by Gene Duplication in Two Origins of C4 Photosynthesis Provides New Insight into the Mechanism of Phloem Loading in C4 Species

    PubMed Central

    Emms, David M.; Covshoff, Sarah; Hibberd, Julian M.; Kelly, Steven

    2016-01-01

    C4 photosynthesis is considered one of the most remarkable examples of evolutionary convergence in eukaryotes. However, it is unknown whether the evolution of C4 photosynthesis required the evolution of new genes. Genome-wide gene-tree species-tree reconciliation of seven monocot species that span two origins of C4 photosynthesis revealed that there was significant parallelism in the duplication and retention of genes coincident with the evolution of C4 photosynthesis in these lineages. Specifically, 21 orthologous genes were duplicated and retained independently in parallel at both C4 origins. Analysis of this gene cohort revealed that the set of parallel duplicated and retained genes is enriched for genes that are preferentially expressed in bundle sheath cells, the cell type in which photosynthesis was activated during C4 evolution. Furthermore, functional analysis of the cohort of parallel duplicated genes identified SWEET-13 as a potential key transporter in the evolution of C4 photosynthesis in grasses, and provides new insight into the mechanism of phloem loading in these C4 species. Key words: C4 photosynthesis, gene duplication, gene families, parallel evolution. PMID:27016024

  8. Resonant snubber inverter

    DOEpatents

    Lai, Jih-Sheng; Young, Sr., Robert W.; Chen, Daoshen; Scudiere, Matthew B.; Ott, Jr., George W.; White, Clifford P.; McKeever, John W.

    1997-01-01

    A resonant, snubber-based, soft switching, inverter circuit achieves lossless switching during dc-to-ac power conversion and power conditioning with minimum component count and size. Current is supplied to the resonant snubber branches solely by the main inverter switches. Component count and size are reduced by use of a single semiconductor switch in the resonant snubber branches. Component count is also reduced by maximizing the use of stray capacitances of the main switches as parallel resonant capacitors. Resonance charging and discharging of the parallel capacitances allows lossless, zero voltage switching. In one embodiment, circuit component size and count are minimized while achieving lossless, zero voltage switching within a three-phase inverter.

  9. Resonant snubber inverter

    DOEpatents

    Lai, J.S.; Young, R.W. Sr.; Chen, D.; Scudiere, M.B.; Ott, G.W. Jr.; White, C.P.; McKeever, J.W.

    1997-06-24

    A resonant, snubber-based, soft switching, inverter circuit achieves lossless switching during dc-to-ac power conversion and power conditioning with minimum component count and size. Current is supplied to the resonant snubber branches solely by the main inverter switches. Component count and size are reduced by use of a single semiconductor switch in the resonant snubber branches. Component count is also reduced by maximizing the use of stray capacitances of the main switches as parallel resonant capacitors. Resonance charging and discharging of the parallel capacitances allows lossless, zero voltage switching. In one embodiment, circuit component size and count are minimized while achieving lossless, zero voltage switching within a three-phase inverter. 14 figs.

  10. Synergistic modulation of KCNQ1/KCNE1 K(+) channels (IKs) by phosphatidylinositol 4,5-bisphosphate (PIP2) and [ATP]i.

    PubMed

    Kienitz, Marie-Cécile; Vladimirova, Dilyana

    2015-07-01

    Cardiac KCNQ1/KCNE1 channels (IKs) are dependent on the concentration of membrane phosphatidylinositol-4,5-bisphosphate (PIP2) and on cytosolic ATP by two distinct mechanisms. In this study we measured IKs and FRET between PH-PLCδ-based fluorescent PIP2 sensors in a stable KCNQ1/KCNE1 CHO cell line. Effects of activating either a muscarinic M3 receptor or the switchable phosphatase Ci-VSP on IKs were analyzed. Recovery of IKs from inhibition induced by muscarinic stimulation was incomplete despite full PIP2 resynthesis. Recovery of IKs was completely suppressed under ATP-free conditions, but partially restored by the ATP analog AMP-PCP, providing evidence that depletion of intracellular ATP inhibits IKs independent of PIP2-depletion. Simultaneous patch-clamp and FRET measurements in cells co-expressing Ci-VSP and the PIP2-FRET sensor revealed a component of IKs inhibition directly related to dynamic PIP2-depletion. A second component of inhibition was independent of acute changes in PIP2 and could be mimicked by ATP-free pipette solution, suggesting that it results from intracellular ATP-depletion. The reduction of intracellular ATP upon Ci-VSP activation appears to be independent of its activity as a phosphoinositide phosphatase. Our data demonstrate that ATP-depletion slowed IKs activation but had no short-term effect on PIP2 regeneration, suggesting that impaired PIP2-resynthesis cannot account for the rapid IKs inhibition by ATP-depletion. Furthermore, the second component of IKs inhibition by Ci-VSP was reduced by AMP-PCP in the pipette filling solution, indicating that direct binding of ATP to the KCNQ1/KCNE1 complex is required for voltage activation of IKs. We suggest that fluctuations of the cellular metabolic state regulate IKs in parallel with Gq-coupled PLC activation and PIP2-depletion. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. Anti-parallel versus Component Reconnection at the Earth Magnetopause

    NASA Astrophysics Data System (ADS)

    Trattner, K. J.; Burch, J. L.; Ergun, R.; Eriksson, S.; Fuselier, S. A.; Gomez, R. G.; Giles, B. L.; Steven, P. M.; Strangeway, R. J.; Wilder, F. D.

    2017-12-01

    Magnetic reconnection at the Earth's magnetopause is discussed and has been observed as anti-parallel and component reconnection. While anti-parallel reconnection occurs between magnetic field lines of (ideally) exactly opposite polarity, component reconnection (also known as the tilted X-line model) predicts the location of the reconnection line to be anchored at the sub-solar point and extend continuously along the dayside magnetopause, while the ratio of the IMF By/Bz component determines the tilt of the X-line relative to the equatorial plane.A reconnection location prediction model known as the Maximum Magnetic Shear Model combines these two scenarios. The model predicts that during dominant IMF By conditions, magnetic reconnection occurs along an extended line across the dayside magnetopause but generally not through the sub-solar point (as predicted in the original tilted X-line model). Rather, the line follows the ridge of maximum magnetic shear across the dayside magnetopause. In contrast, for dominant IMF Bz (155° < tan-1(By/Bz) < 205°) or dominant Bx (|Bx|/B > 0.7) conditions, the reconnection location bifurcates and traces to high-latitudes, in close agreement with the anti-parallel reconnection scenario, and does not cross the dayside magnetopause as a single tilted reconnection line. Using observations from the Magnetospheric MultiScale missions during a magnetopause crossing when the IMF rotated from an dominate IMF BZ to a dominant IMF BY field we will investigate when the transition between the anti-parallel and tilted X-line scenarios occurs.

  12. Trajectories of disposable income among people of working ages diagnosed with multiple sclerosis: a nationwide register-based cohort study in Sweden 7 years before to 4 years after diagnosis with a population-based reference group

    PubMed Central

    Mogard, Olof; Alexanderson, Kristina; Karampampa, Korinna; Friberg, Emilie; Tinghög, Petter

    2018-01-01

    Objectives To describe how disposable income (DI) and three main components changed, and analyse whether DI development differed from working-aged people with multiple sclerosis (MS) to a reference group from 7 years before to 4 years after diagnosis in Sweden. Design Population-based cohort study, 12-year follow-up (7 years before to 4 years after diagnosis). Setting Swedish working-age population with microdata linked from two nationwide registers. Participants Residents diagnosed with MS in 2009 aged 25–59 years (n=785), and references without MS (n=7847) randomly selected with stratified matching (sex, age, education and country of birth). Primary and secondary outcome measures DI was defined as the annual after tax sum of incomes (earnings and benefits) to measure individual economic welfare. Three main components of DI were analysed as annual sums: earnings, sickness absence benefits and disability pension benefits. Results We found no differences in mean annual DI between people with and without MS by independent t-tests (p values between 0.15 and 0.96). Differences were found for all studied components of DI from diagnosis year by independent t-tests, for example, in the final study year (2013): earnings (−64 867 Swedish Krona (SEK); 95% CI−79 203 to −50 528); sickness absence benefits (13 330 SEK; 95% CI 10 042 to 16 500); and disability pension benefits (21 360 SEK; 95% CI 17 380 to 25 350). A generalised estimating equation evaluated DI trajectory development between people with and without MS to find both trajectories developed in parallel, both before (−4039 SEK; 95% CI −10 536 to 2458) and after (−781 SEK; 95% CI −6988 to 5360) diagnosis. Conclusions The key finding of parallel DI trajectory development between working-aged MS and references suggests minimal economic impact within the first 4 years of diagnosis. The Swedish welfare system was responsive to the observed reductions in earnings around MS diagnosis through balancing DI with morbidity-related benefits. Future decreases in economic welfare may be experienced as the disease progresses, although thorough investigation with future studies of modern cohorts are required. PMID:29743325

  13. Computer hardware fault administration

    DOEpatents

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-09-14

    Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.

  14. Parallel computation for biological sequence comparison: comparing a portable model to the native model for the Intel Hypercube.

    PubMed Central

    Nadkarni, P. M.; Miller, P. L.

    1991-01-01

    A parallel program for inter-database sequence comparison was developed on the Intel Hypercube using two models of parallel programming. One version was built using machine-specific Hypercube parallel programming commands. The other version was built using Linda, a machine-independent parallel programming language. The two versions of the program provide a case study comparing these two approaches to parallelization in an important biological application area. Benchmark tests with both programs gave comparable results with a small number of processors. As the number of processors was increased, the Linda version was somewhat less efficient. The Linda version was also run without change on Network Linda, a virtual parallel machine running on a network of desktop workstations. PMID:1807632

  15. Evaluation of Parallel Analysis Methods for Determining the Number of Factors

    ERIC Educational Resources Information Center

    Crawford, Aaron V.; Green, Samuel B.; Levy, Roy; Lo, Wen-Juo; Scott, Lietta; Svetina, Dubravka; Thompson, Marilyn S.

    2010-01-01

    Population and sample simulation approaches were used to compare the performance of parallel analysis using principal component analysis (PA-PCA) and parallel analysis using principal axis factoring (PA-PAF) to identify the number of underlying factors. Additionally, the accuracies of the mean eigenvalue and the 95th percentile eigenvalue criteria…

  16. Independent and parallel evolution of new genes by gene duplication in two origins of C4 photosynthesis provides new insight into the mechanism of phloem loading in C4 species

    DOE PAGES

    Emms, David M.; Covshoff, Sarah; Hibberd, Julian M.; ...

    2016-03-24

    C4 photosynthesis is considered one of the most remarkable examples of evolutionary convergence in eukaryotes. However, it is unknown whether the evolution of C4 photosynthesis required the evolution of new genes. Genome-wide gene-tree species-tree reconciliation of seven monocot species that span two origins of C4 photosynthesis revealed that there was significant parallelism in the duplication and retention of genes coincident with the evolution of C4 photosynthesis in these lineages. Specifically, 21 orthologous genes were duplicated and retained independently in parallel at both C4 origins. Analysis of this gene cohort revealed that the set of parallel duplicated and retained genes ismore » enriched for genes that are preferentially expressed in bundle sheath cells, the cell type in which photosynthesis was activated during C4 evolution. Moreover, functional analysis of the cohort of parallel duplicated genes identified SWEET-13 as a potential key transporter in the evolution of C4 photosynthesis in grasses, and provides new insight into the mechanism of phloem loading in these C4 species.« less

  17. Effects of a parallel resistor on electrical characteristics of a piezoelectric transformer in open-circuit transient state.

    PubMed

    Chang, Kuo-Tsai

    2007-01-01

    This paper investigates electrical transient characteristics of a Rosen-type piezoelectric transformer (PT), including maximum voltages, time constants, energy losses and average powers, and their improvements immediately after turning OFF. A parallel resistor connected to both input terminals of the PT is needed to improve the transient characteristics. An equivalent circuit for the PT is first given. Then, an open-circuit voltage, involving a direct current (DC) component and an alternating current (AC) component, and its related energy losses are derived from the equivalent circuit with initial conditions. Moreover, an AC power control system, including a DC-to-AC resonant inverter, a control switch and electronic instruments, is constructed to determine the electrical characteristics of the OFF transient state. Furthermore, the effects of the parallel resistor on the transient characteristics at different parallel resistances are measured. The advantages of adding the parallel resistor also are discussed. From the measured results, the DC time constant is greatly decreased from 9 to 0.04 ms by a 10 k(omega) parallel resistance under open output.

  18. Mine Hoist Operator Training System. Phase I Report.

    DTIC Science & Technology

    1978-11-01

    Bodies of Knowledge Function Control speed of conveyances Hold conveyances in position Structural Components Types of brakes : * Disc * Drum - Jaw...Parallel motion Components of each type * Disc / drum * Pads/shoes * Operating mechanisms Operating mediums for braking * Hydraulic/pneumatic * Manual...SHAFT GUIDES Wood El BRAKES Steel Rails El Drum : Wire Rope: Jaw El Full Lock El Parallel Motion El Half Lock El Disc El LEVELS DRIVE MOTORS Single El

  19. PUP: An Architecture to Exploit Parallel Unification in Prolog

    DTIC Science & Technology

    1988-03-01

    environment stacking mo del similar to the Warren Abstract Machine [23] since it has been shown to be super ior to other known models (see [21]). The storage...execute in groups of independent operations. Unifications belonging to different group s may not overlap. Also unification operations belonging to the...since all parallel operations on the unification units must complete before any of the units can star t executing the next group of parallel

  20. Interpersonal and intrapersonal factors as parallel independent mediators in the association between internalized HIV stigma and ART adherence

    PubMed Central

    Seghatol-Eslami, Victoria C.; Dark, Heather; Raper, James L.; Mugavero, Michael J.; Turan, Janet M.; Turan, Bulent

    2016-01-01

    Introduction People living with HIV (PLWH) need to adhere to antiretroviral therapy (ART) to achieve optimal health. One reason for ART non-adherence is HIV-related stigma. Objectives We aimed to examine whether HIV treatment self-efficacy (an intrapersonal mechanism) mediates the stigma – adherence association. We also examined whether self-efficacy and the concern about being seen while taking HIV medication (an interpersonal mechanism) are parallel mediators independent of each other. Methods 180 PLWH self-reported internalized HIV stigma, ART adherence, HIV treatment self-efficacy, and concerns about being seen while taking HIV medication. We calculated bias-corrected 95% confidence intervals (CIs) for indirect effects using bootstrapping to conduct mediation analyses. Results Adherence self-efficacy mediated the relationship between internalized stigma and ART adherence. Additionally, self-efficacy and concern about being seen while taking HIV medication uniquely mediated and explained almost all of the stigma – adherence association in independent paths (parallel mediation). Conclusion These results can inform intervention strategies to promote ART adherence. PMID:27926668

  1. Response Errors Explain the Failure of Independent-Channels Models of Perception of Temporal Order

    PubMed Central

    García-Pérez, Miguel A.; Alcalá-Quintana, Rocío

    2012-01-01

    Independent-channels models of perception of temporal order (also referred to as threshold models or perceptual latency models) have been ruled out because two formal properties of these models (monotonicity and parallelism) are not borne out by data from ternary tasks in which observers must judge whether stimulus A was presented before, after, or simultaneously with stimulus B. These models generally assume that observed responses are authentic indicators of unobservable judgments, but blinks, lapses of attention, or errors in pressing the response keys (maybe, but not only, motivated by time pressure when reaction times are being recorded) may make observers misreport their judgments or simply guess a response. We present an extension of independent-channels models that considers response errors and we show that the model produces psychometric functions that do not satisfy monotonicity and parallelism. The model is illustrated by fitting it to data from a published study in which the ternary task was used. The fitted functions describe very accurately the absence of monotonicity and parallelism shown by the data. These characteristics of empirical data are thus consistent with independent-channels models when response errors are taken into consideration. The implications of these results for the analysis and interpretation of temporal order judgment data are discussed. PMID:22493586

  2. Definition, reporting, and interpretation of composite outcomes in clinical trials: systematic review

    PubMed Central

    Cordoba, Gloria; Schwartz, Lisa; Woloshin, Steven; Bae, Harold

    2010-01-01

    Objective To study how composite outcomes, which have combined several components into a single measure, are defined, reported, and interpreted. Design Systematic review of parallel group randomised clinical trials published in 2008 reporting a binary composite outcome. Two independent observers extracted the data using a standardised data sheet, and two other observers, blinded to the results, selected the most important component. Results Of 40 included trials, 29 (73%) were about cardiovascular topics and 24 (60%) were entirely or partly industry funded. Composite outcomes had a median of three components (range 2–9). Death or cardiovascular death was the most important component in 33 trials (83%). Only one trial provided a good rationale for the choice of components. We judged that the components were not of similar importance in 28 trials (70%); in 20 of these, death was combined with hospital admission. Other major problems were change in the definition of the composite outcome between the abstract, methods, and results sections (13 trials); missing, ambiguous, or uninterpretable data (9 trials); and post hoc construction of composite outcomes (4 trials). Only 24 trials (60%) provided reliable estimates for both the composite and its components, and only six trials (15%) had components of similar, or possibly similar, clinical importance and provided reliable estimates. In 11 of 16 trials with a statistically significant composite, the abstract conclusion falsely implied that the effect applied also to the most important component. Conclusions The use of composite outcomes in trials is problematic. Components are often unreasonably combined, inconsistently defined, and inadequately reported. These problems will leave many readers confused, often with an exaggerated perception of how well interventions work. PMID:20719825

  3. Definition, reporting, and interpretation of composite outcomes in clinical trials: systematic review.

    PubMed

    Cordoba, Gloria; Schwartz, Lisa; Woloshin, Steven; Bae, Harold; Gøtzsche, Peter C

    2010-08-18

    To study how composite outcomes, which have combined several components into a single measure, are defined, reported, and interpreted. Systematic review of parallel group randomised clinical trials published in 2008 reporting a binary composite outcome. Two independent observers extracted the data using a standardised data sheet, and two other observers, blinded to the results, selected the most important component. Of 40 included trials, 29 (73%) were about cardiovascular topics and 24 (60%) were entirely or partly industry funded. Composite outcomes had a median of three components (range 2-9). Death or cardiovascular death was the most important component in 33 trials (83%). Only one trial provided a good rationale for the choice of components. We judged that the components were not of similar importance in 28 trials (70%); in 20 of these, death was combined with hospital admission. Other major problems were change in the definition of the composite outcome between the abstract, methods, and results sections (13 trials); missing, ambiguous, or uninterpretable data (9 trials); and post hoc construction of composite outcomes (4 trials). Only 24 trials (60%) provided reliable estimates for both the composite and its components, and only six trials (15%) had components of similar, or possibly similar, clinical importance and provided reliable estimates. In 11 of 16 trials with a statistically significant composite, the abstract conclusion falsely implied that the effect applied also to the most important component. The use of composite outcomes in trials is problematic. Components are often unreasonably combined, inconsistently defined, and inadequately reported. These problems will leave many readers confused, often with an exaggerated perception of how well interventions work.

  4. Parallel tempering for the traveling salesman problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Percus, Allon; Wang, Richard; Hyman, Jeffrey

    We explore the potential of parallel tempering as a combinatorial optimization method, applying it to the traveling salesman problem. We compare simulation results of parallel tempering with a benchmark implementation of simulated annealing, and study how different choices of parameters affect the relative performance of the two methods. We find that a straightforward implementation of parallel tempering can outperform simulated annealing in several crucial respects. When parameters are chosen appropriately, both methods yield close approximation to the actual minimum distance for an instance with 200 nodes. However, parallel tempering yields more consistently accurate results when a series of independent simulationsmore » are performed. Our results suggest that parallel tempering might offer a simple but powerful alternative to simulated annealing for combinatorial optimization problems.« less

  5. TMVOC-MP: a parallel numerical simulator for Three-PhaseNon-isothermal Flows of Multicomponent Hydrocarbon Mixtures inporous/fractured media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Keni; Yamamoto, Hajime; Pruess, Karsten

    2008-02-15

    TMVOC-MP is a massively parallel version of the TMVOC code (Pruess and Battistelli, 2002), a numerical simulator for three-phase non-isothermal flow of water, gas, and a multicomponent mixture of volatile organic chemicals (VOCs) in multidimensional heterogeneous porous/fractured media. TMVOC-MP was developed by introducing massively parallel computing techniques into TMVOC. It retains the physical process model of TMVOC, designed for applications to contamination problems that involve hydrocarbon fuels or organic solvents in saturated and unsaturated zones. TMVOC-MP can model contaminant behavior under 'natural' environmental conditions, as well as for engineered systems, such as soil vapor extraction, groundwater pumping, or steam-assisted sourcemore » remediation. With its sophisticated parallel computing techniques, TMVOC-MP can handle much larger problems than TMVOC, and can be much more computationally efficient. TMVOC-MP models multiphase fluid systems containing variable proportions of water, non-condensible gases (NCGs), and water-soluble volatile organic chemicals (VOCs). The user can specify the number and nature of NCGs and VOCs. There are no intrinsic limitations to the number of NCGs or VOCs, although the arrays for fluid components are currently dimensioned as 20, accommodating water plus 19 components that may be either NCGs or VOCs. Among them, NCG arrays are dimensioned as 10. The user may select NCGs from a data bank provided in the software. The currently available choices include O{sub 2}, N{sub 2}, CO{sub 2}, CH{sub 4}, ethane, ethylene, acetylene, and air (a pseudo-component treated with properties averaged from N{sub 2} and O{sub 2}). Thermophysical property data of VOCs can be selected from a chemical data bank, included with TMVOC-MP, that provides parameters for 26 commonly encountered chemicals. Users also can input their own data for other fluids. The fluid components may partition (volatilize and/or dissolve) among gas, aqueous, and NAPL phases. Any combination of the three phases may present, and phases may appear and disappear in the course of a simulation. In addition, VOCs may be adsorbed by the porous medium, and may biodegrade according to a simple half-life model. Detailed discussion of physical processes, assumptions, and fluid properties used in TMVOC-MP can be found in the TMVOC user's guide (Pruess and Battistelli, 2002). TMVOC-MP was developed based on the parallel framework of the TOUGH2-MP code (Zhang et al. 2001, Wu et al. 2002). It uses the MPI (Message Passing Forum, 1994) for parallel implementation. A domain decomposition approach is adopted for the parallelization. The code partitions a simulation domain, defined by an unstructured grid, using partitioning algorithm from the METIS software package (Karypsis and Kumar, 1998). In parallel simulation, each processor is in charge of one part of the simulation domain for assembling mass and energy balance equations, solving linear equation systems, updating thermophysical properties, and performing other local computations. The local linear-equation systems are solved in parallel by multiple processors with the Aztec linear solver package (Tuminaro et al., 1999). Although each processor solves the linearized equations of subdomains independently, the entire linear equation system is solved together by all processors collaboratively via communication between neighboring processors during each iteration. Detailed discussion of the prototype of the data-exchange scheme can be found in Elmroth et al. (2001). In addition, FORTRAN 90 features are introduced to TMVOC-MP, such as dynamic memory allocation, array operation, matrix manipulation, and replacing 'common blocks' (used in the original TMVOC) with modules. All new subroutines are written in FORTRAN 90. Program units imported from the original TMVOC remain in standard FORTRAN 77. This report provides a quick starting guide for using the TMVOC-MP program. We suppose that the users have basic knowledge of using the original TMVOC code. The users can find the detailed technical description of the physical processes modeled, and the mathematical and numerical methods in the user's guide for TMVOC (Pruess and Battistelli, 2002).« less

  6. Animated computer graphics models of space and earth sciences data generated via the massively parallel processor

    NASA Technical Reports Server (NTRS)

    Treinish, Lloyd A.; Gough, Michael L.; Wildenhain, W. David

    1987-01-01

    The capability was developed of rapidly producing visual representations of large, complex, multi-dimensional space and earth sciences data sets via the implementation of computer graphics modeling techniques on the Massively Parallel Processor (MPP) by employing techniques recently developed for typically non-scientific applications. Such capabilities can provide a new and valuable tool for the understanding of complex scientific data, and a new application of parallel computing via the MPP. A prototype system with such capabilities was developed and integrated into the National Space Science Data Center's (NSSDC) Pilot Climate Data System (PCDS) data-independent environment for computer graphics data display to provide easy access to users. While developing these capabilities, several problems had to be solved independently of the actual use of the MPP, all of which are outlined.

  7. Parallel Spectral Acquisition with an Ion Cyclotron Resonance Cell Array.

    PubMed

    Park, Sung-Gun; Anderson, Gordon A; Navare, Arti T; Bruce, James E

    2016-01-19

    Mass measurement accuracy is a critical analytical figure-of-merit in most areas of mass spectrometry application. However, the time required for acquisition of high-resolution, high mass accuracy data limits many applications and is an aspect under continual pressure for development. Current efforts target implementation of higher electrostatic and magnetic fields because ion oscillatory frequencies increase linearly with field strength. As such, the time required for spectral acquisition of a given resolving power and mass accuracy decreases linearly with increasing fields. Mass spectrometer developments to include multiple high-resolution detectors that can be operated in parallel could further decrease the acquisition time by a factor of n, the number of detectors. Efforts described here resulted in development of an instrument with a set of Fourier transform ion cyclotron resonance (ICR) cells as detectors that constitute the first MS array capable of parallel high-resolution spectral acquisition. ICR cell array systems consisting of three or five cells were constructed with printed circuit boards and installed within a single superconducting magnet and vacuum system. Independent ion populations were injected and trapped within each cell in the array. Upon filling the array, all ions in all cells were simultaneously excited and ICR signals from each cell were independently amplified and recorded in parallel. Presented here are the initial results of successful parallel spectral acquisition, parallel mass spectrometry (MS) and MS/MS measurements, and parallel high-resolution acquisition with the MS array system.

  8. Inversion of potential field data using the finite element method on parallel computers

    NASA Astrophysics Data System (ADS)

    Gross, L.; Altinay, C.; Shaw, S.

    2015-11-01

    In this paper we present a formulation of the joint inversion of potential field anomaly data as an optimization problem with partial differential equation (PDE) constraints. The problem is solved using the iterative Broyden-Fletcher-Goldfarb-Shanno (BFGS) method with the Hessian operator of the regularization and cross-gradient component of the cost function as preconditioner. We will show that each iterative step requires the solution of several PDEs namely for the potential fields, for the adjoint defects and for the application of the preconditioner. In extension to the traditional discrete formulation the BFGS method is applied to continuous descriptions of the unknown physical properties in combination with an appropriate integral form of the dot product. The PDEs can easily be solved using standard conforming finite element methods (FEMs) with potentially different resolutions. For two examples we demonstrate that the number of PDE solutions required to reach a given tolerance in the BFGS iteration is controlled by weighting regularization and cross-gradient but is independent of the resolution of PDE discretization and that as a consequence the method is weakly scalable with the number of cells on parallel computers. We also show a comparison with the UBC-GIF GRAV3D code.

  9. Fast disk array for image storage

    NASA Astrophysics Data System (ADS)

    Feng, Dan; Zhu, Zhichun; Jin, Hai; Zhang, Jiangling

    1997-01-01

    A fast disk array is designed for the large continuous image storage. It includes a high speed data architecture and the technology of data striping and organization on the disk array. The high speed data path which is constructed by two dual port RAM and some control circuit is configured to transfer data between a host system and a plurality of disk drives. The bandwidth can be more than 100 MB/s if the data path based on PCI (peripheral component interconnect). The organization of data stored on the disk array is similar to RAID 4. Data are striped on a plurality of disk, and each striping unit is equal to a track. I/O instructions are performed in parallel on the disk drives. An independent disk is used to store the parity information in the fast disk array architecture. By placing the parity generation circuit directly on the SCSI (or SCSI 2) bus, the parity information can be generated on the fly. It will affect little on the data writing in parallel on the other disks. The fast disk array architecture designed in the paper can meet the demands of the image storage.

  10. Streaming data analytics via message passing with application to graph algorithms

    DOE PAGES

    Plimpton, Steven J.; Shead, Tim

    2014-05-06

    The need to process streaming data, which arrives continuously at high-volume in real-time, arises in a variety of contexts including data produced by experiments, collections of environmental or network sensors, and running simulations. Streaming data can also be formulated as queries or transactions which operate on a large dynamic data store, e.g. a distributed database. We describe a lightweight, portable framework named PHISH which enables a set of independent processes to compute on a stream of data in a distributed-memory parallel manner. Datums are routed between processes in patterns defined by the application. PHISH can run on top of eithermore » message-passing via MPI or sockets via ZMQ. The former means streaming computations can be run on any parallel machine which supports MPI; the latter allows them to run on a heterogeneous, geographically dispersed network of machines. We illustrate how PHISH can support streaming MapReduce operations, and describe streaming versions of three algorithms for large, sparse graph analytics: triangle enumeration, subgraph isomorphism matching, and connected component finding. Lastly, we also provide benchmark timings for MPI versus socket performance of several kernel operations useful in streaming algorithms.« less

  11. A PIPO Boost Converter with Low Ripple and Medium Current Application

    NASA Astrophysics Data System (ADS)

    Bandri, S.; Sofian, A.; Ismail, F.

    2018-04-01

    This paper presents a Parallel Input Parallel Output (PIPO) boost converter is proposed to gain power ability of converter, and reduce current inductors. The proposed technique will distribute current for n-parallel inductor and switching component. Four parallel boost converters implement on input voltage 20.5Vdc to generate output voltage 28.8Vdc. The PIPO boost converter applied phase shift pulse width modulation which will compare with conventional PIPO boost converters by using a similar pulse for every switching component. The current ripple reduction shows an advantage PIPO boost converter then conventional boost converter. Varies loads and duty cycle will be simulated and analyzed to verify the performance of PIPO boost converter. Finally, the unbalance of current inductor is able to be verified on four area of duty cycle in less than 0.6.

  12. Different Relative Orientation of Static and Alternative Magnetic Fields and Cress Roots Direction of Growth Changes Their Gravitropic Reaction

    NASA Astrophysics Data System (ADS)

    Sheykina, Nadiia; Bogatina, Nina

    The following variants of roots location relatively to static and alternative components of magnetic field were studied. At first variant the static magnetic field was directed parallel to the gravitation vector, the alternative magnetic field was directed perpendicular to static one; roots were directed perpendicular to both two fields’ components and gravitation vector. At the variant the negative gravitropysm for cress roots was observed. At second variant the static magnetic field was directed parallel to the gravitation vector, the alternative magnetic field was directed perpendicular to static one; roots were directed parallel to alternative magnetic field. At third variant the alternative magnetic field was directed parallel to the gravitation vector, the static magnetic field was directed perpendicular to the gravitation vector, roots were directed perpendicular to both two fields components and gravitation vector; At forth variant the alternative magnetic field was directed parallel to the gravitation vector, the static magnetic field was directed perpendicular to the gravitation vector, roots were directed parallel to static magnetic field. In all cases studied the alternative magnetic field frequency was equal to Ca ions cyclotron frequency. In 2, 3 and 4 variants gravitropism was positive. But the gravitropic reaction speeds were different. In second and forth variants the gravitropic reaction speed in error limits coincided with the gravitropic reaction speed under Earth’s conditions. At third variant the gravitropic reaction speed was slowed essentially.

  13. Performance evaluation of canny edge detection on a tiled multicore architecture

    NASA Astrophysics Data System (ADS)

    Brethorst, Andrew Z.; Desai, Nehal; Enright, Douglas P.; Scrofano, Ronald

    2011-01-01

    In the last few years, a variety of multicore architectures have been used to parallelize image processing applications. In this paper, we focus on assessing the parallel speed-ups of different Canny edge detection parallelization strategies on the Tile64, a tiled multicore architecture developed by the Tilera Corporation. Included in these strategies are different ways Canny edge detection can be parallelized, as well as differences in data management. The two parallelization strategies examined were loop-level parallelism and domain decomposition. Loop-level parallelism is achieved through the use of OpenMP,1 and it is capable of parallelization across the range of values over which a loop iterates. Domain decomposition is the process of breaking down an image into subimages, where each subimage is processed independently, in parallel. The results of the two strategies show that for the same number of threads, programmer implemented, domain decomposition exhibits higher speed-ups than the compiler managed, loop-level parallelism implemented with OpenMP.

  14. An information theory of image gathering

    NASA Technical Reports Server (NTRS)

    Fales, Carl L.; Huck, Friedrich O.

    1991-01-01

    Shannon's mathematical theory of communication is extended to image gathering. Expressions are obtained for the total information that is received with a single image-gathering channel and with parallel channels. It is concluded that the aliased signal components carry information even though these components interfere with the within-passband components in conventional image gathering and restoration, thereby degrading the fidelity and visual quality of the restored image. An examination of the expression for minimum mean-square-error, or Wiener-matrix, restoration from parallel image-gathering channels reveals a method for unscrambling the within-passband and aliased signal components to restore spatial frequencies beyond the sampling passband out to the spatial frequency response cutoff of the optical aperture.

  15. 53BP1 and USP28 mediate p53-dependent cell cycle arrest in response to centrosome loss and prolonged mitosis.

    PubMed

    Fong, Chii Shyang; Mazo, Gregory; Das, Tuhin; Goodman, Joshua; Kim, Minhee; O'Rourke, Brian P; Izquierdo, Denisse; Tsou, Meng-Fu Bryan

    2016-07-02

    Mitosis occurs efficiently, but when it is disturbed or delayed, p53-dependent cell death or senescence is often triggered after mitotic exit. To characterize this process, we conducted CRISPR-mediated loss-of-function screens using a cell-based assay in which mitosis is consistently disturbed by centrosome loss. We identified 53BP1 and USP28 as essential components acting upstream of p53, evoking p21-dependent cell cycle arrest in response not only to centrosome loss, but also to other distinct defects causing prolonged mitosis. Intriguingly, 53BP1 mediates p53 activation independently of its DNA repair activity, but requiring its interacting protein USP28 that can directly deubiquitinate p53 in vitro and ectopically stabilize p53 in vivo. Moreover, 53BP1 can transduce prolonged mitosis to cell cycle arrest independently of the spindle assembly checkpoint (SAC), suggesting that while SAC protects mitotic accuracy by slowing down mitosis, 53BP1 and USP28 function in parallel to select against disturbed or delayed mitosis, promoting mitotic efficiency.

  16. Telecommunication service markets through the year 2000 in relation to millimeter wave satellite systems

    NASA Technical Reports Server (NTRS)

    Stevenson, S. M.

    1979-01-01

    NASA is currently conducting a series of millimeter wave satellite system market studies to develop 30/20 GHz satellite system concepts that have commercial potential. Four contractual efforts were undertaken: two parallel and independent system studies and two parallel and independent market studies. The marketing efforts are focused on forecasting the total domestic demand for long haul telecommunications services for the 1980-2000 period. Work completed to date and reported in this paper include projections of: geographical distribution of traffic; traffic volume as a function of urban area size; and user identification and forecasted demand.

  17. Computational Challenges of 3D Radiative Transfer in Atmospheric Models

    NASA Astrophysics Data System (ADS)

    Jakub, Fabian; Bernhard, Mayer

    2017-04-01

    The computation of radiative heating and cooling rates is one of the most expensive components in todays atmospheric models. The high computational cost stems not only from the laborious integration over a wide range of the electromagnetic spectrum but also from the fact that solving the integro-differential radiative transfer equation for monochromatic light is already rather involved. This lead to the advent of numerous approximations and parameterizations to reduce the cost of the solver. One of the most prominent one is the so called independent pixel approximations (IPA) where horizontal energy transfer is neglected whatsoever and radiation may only propagate in the vertical direction (1D). Recent studies implicate that the IPA introduces significant errors in high resolution simulations and affects the evolution and development of convective systems. However, using fully 3D solvers such as for example MonteCarlo methods is not even on state of the art supercomputers feasible. The parallelization of atmospheric models is often realized by a horizontal domain decomposition, and hence, horizontal transfer of energy necessitates communication. E.g. a cloud's shadow at a low zenith angle will cast a long shadow and potentially needs to communication through a multitude of processors. Especially light in the solar spectral range may travel long distances through the atmosphere. Concerning highly parallel simulations, it is vital that 3D radiative transfer solvers put a special emphasis on parallel scalability. We will present an introduction to intricacies computing 3D radiative heating and cooling rates as well as report on the parallel performance of the TenStream solver. The TenStream is a 3D radiative transfer solver using the PETSc framework to iteratively solve a set of partial differential equation. We investigate two matrix preconditioners, (a) geometric algebraic multigrid preconditioning(MG+GAMG) and (b) block Jacobi incomplete LU (ILU) factorization. The TenStream solver is tested for up to 4096 cores and shows a parallel scaling efficiency of 80-90% on various supercomputers.

  18. An Adaptive Memory Interface Controller for Improving Bandwidth Utilization of Hybrid and Reconfigurable Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castellana, Vito G.; Tumeo, Antonino; Ferrandi, Fabrizio

    Emerging applications such as data mining, bioinformatics, knowledge discovery, social network analysis are irregular. They use data structures based on pointers or linked lists, such as graphs, unbalanced trees or unstructures grids, which generates unpredictable memory accesses. These data structures usually are large, but difficult to partition. These applications mostly are memory bandwidth bounded and have high synchronization intensity. However, they also have large amounts of inherent dynamic parallelism, because they potentially perform a task for each one of the element they are exploring. Several efforts are looking at accelerating these applications on hybrid architectures, which integrate general purpose processorsmore » with reconfigurable devices. Some solutions, which demonstrated significant speedups, include custom-hand tuned accelerators or even full processor architectures on the reconfigurable logic. In this paper we present an approach for the automatic synthesis of accelerators from C, targeted at irregular applications. In contrast to typical High Level Synthesis paradigms, which construct a centralized Finite State Machine, our approach generates dynamically scheduled hardware components. While parallelism exploitation in typical HLS-generated accelerators is usually bound within a single execution flow, our solution allows concurrently running multiple execution flow, thus also exploiting the coarser grain task parallelism of irregular applications. Our approach supports multiple, multi-ported and distributed memories, and atomic memory operations. Its main objective is parallelizing as many memory operations as possible, independently from their execution time, to maximize the memory bandwidth utilization. This significantly differs from current HLS flows, which usually consider a single memory port and require precise scheduling of memory operations. A key innovation of our approach is the generation of a memory interface controller, which dynamically maps concurrent memory accesses to multiple ports. We present a case study on a typical irregular kernel, Graph Breadth First search (BFS), exploring different tradeoffs in terms of parallelism and number of memories.« less

  19. A modular and low­cost 3D­-printed microfluidic device with assembly of capillaries for droplet mass production

    NASA Astrophysics Data System (ADS)

    Aguirre-Pablo, A. A.; Zhang, J. M.; Li, E. Q.; Thoroddsen, S. T.

    2015-11-01

    We report a new 3D­-printed microfluidic system with assembly of capillaries for droplet generation. The system consists of the following parts: 3D­printed Droplet Generation Units (DGUs) with embedded capillaries and two 3D-printed pyramid distributors for supplying two different fluid phases into every DGU. A single DGU consists of four independent parts: a top channel, a bottom channel, a capillary and a sealing gasket. All components are produced by 3d­printing except the capillaries, which are formed in a glass-­puller. DGUs are independent of the distributor and from each other; they can easily be assembled, replaced and modified due to its modular design which is an advantage in case of a faulty part or clogging, eliminating the need to fabricate a complete new system which is cost and time demanding. We assessed the feasibility of producing droplets in this device varying different fluid parameters, such as liquid viscosity and flow rate, which affect droplet size and generation frequency. The design and fabrication of this device is simple and low­-cost with the 3D printing technology. Due to the modular design of independent parts, low-cost fabrication and easy parallelization of multiple DGU's, this system provides great flexibility for industrial applications.

  20. Minimum envelope roughness pulse design for reduced amplifier distortion in parallel excitation.

    PubMed

    Grissom, William A; Kerr, Adam B; Stang, Pascal; Scott, Greig C; Pauly, John M

    2010-11-01

    Parallel excitation uses multiple transmit channels and coils, each driven by independent waveforms, to afford the pulse designer an additional spatial encoding mechanism that complements gradient encoding. In contrast to parallel reception, parallel excitation requires individual power amplifiers for each transmit channel, which can be cost prohibitive. Several groups have explored the use of low-cost power amplifiers for parallel excitation; however, such amplifiers commonly exhibit nonlinear memory effects that distort radio frequency pulses. This is especially true for pulses with rapidly varying envelopes, which are common in parallel excitation. To overcome this problem, we introduce a technique for parallel excitation pulse design that yields pulses with smoother envelopes. We demonstrate experimentally that pulses designed with the new technique suffer less amplifier distortion than unregularized pulses and pulses designed with conventional regularization.

  1. Cyclin D1-Cdk4 controls glucose metabolism independently of cell cycle progression.

    PubMed

    Lee, Yoonjin; Dominy, John E; Choi, Yoon Jong; Jurczak, Michael; Tolliday, Nicola; Camporez, Joao Paulo; Chim, Helen; Lim, Ji-Hong; Ruan, Hai-Bin; Yang, Xiaoyong; Vazquez, Francisca; Sicinski, Piotr; Shulman, Gerald I; Puigserver, Pere

    2014-06-26

    Insulin constitutes a principal evolutionarily conserved hormonal axis for maintaining glucose homeostasis; dysregulation of this axis causes diabetes. PGC-1α (peroxisome-proliferator-activated receptor-γ coactivator-1α) links insulin signalling to the expression of glucose and lipid metabolic genes. The histone acetyltransferase GCN5 (general control non-repressed protein 5) acetylates PGC-1α and suppresses its transcriptional activity, whereas sirtuin 1 deacetylates and activates PGC-1α. Although insulin is a mitogenic signal in proliferative cells, whether components of the cell cycle machinery contribute to its metabolic action is poorly understood. Here we report that in mice insulin activates cyclin D1-cyclin-dependent kinase 4 (Cdk4), which, in turn, increases GCN5 acetyltransferase activity and suppresses hepatic glucose production independently of cell cycle progression. Through a cell-based high-throughput chemical screen, we identify a Cdk4 inhibitor that potently decreases PGC-1α acetylation. Insulin/GSK-3β (glycogen synthase kinase 3-beta) signalling induces cyclin D1 protein stability by sequestering cyclin D1 in the nucleus. In parallel, dietary amino acids increase hepatic cyclin D1 messenger RNA transcripts. Activated cyclin D1-Cdk4 kinase phosphorylates and activates GCN5, which then acetylates and inhibits PGC-1α activity on gluconeogenic genes. Loss of hepatic cyclin D1 results in increased gluconeogenesis and hyperglycaemia. In diabetic models, cyclin D1-Cdk4 is chronically elevated and refractory to fasting/feeding transitions; nevertheless further activation of this kinase normalizes glycaemia. Our findings show that insulin uses components of the cell cycle machinery in post-mitotic cells to control glucose homeostasis independently of cell division.

  2. Simple and robust generation of ultrafast laser pulse trains using polarization-independent parallel-aligned thin films

    NASA Astrophysics Data System (ADS)

    Wang, Andong; Jiang, Lan; Li, Xiaowei; Wang, Zhi; Du, Kun; Lu, Yongfeng

    2018-05-01

    Ultrafast laser pulse temporal shaping has been widely applied in various important applications such as laser materials processing, coherent control of chemical reactions, and ultrafast imaging. However, temporal pulse shaping has been limited to only-in-lab technique due to the high cost, low damage threshold, and polarization dependence. Herein we propose a novel design of ultrafast laser pulse train generation device, which consists of multiple polarization-independent parallel-aligned thin films. Various pulse trains with controllable temporal profile can be generated flexibly by multi-reflections within the splitting films. Compared with other pulse train generation techniques, this method has advantages of compact structure, low cost, high damage threshold and polarization independence. These advantages endow it with high potential for broad utilization in ultrafast applications.

  3. Parallel line analysis: multifunctional software for the biomedical sciences

    NASA Technical Reports Server (NTRS)

    Swank, P. R.; Lewis, M. L.; Damron, K. L.; Morrison, D. R.

    1990-01-01

    An easy to use, interactive FORTRAN program for analyzing the results of parallel line assays is described. The program is menu driven and consists of five major components: data entry, data editing, manual analysis, manual plotting, and automatic analysis and plotting. Data can be entered from the terminal or from previously created data files. The data editing portion of the program is used to inspect and modify data and to statistically identify outliers. The manual analysis component is used to test the assumptions necessary for parallel line assays using analysis of covariance techniques and to determine potency ratios with confidence limits. The manual plotting component provides a graphic display of the data on the terminal screen or on a standard line printer. The automatic portion runs through multiple analyses without operator input. Data may be saved in a special file to expedite input at a future time.

  4. A distributed parallel storage architecture and its potential application within EOSDIS

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Tierney, Brian; Feuquay, Jay; Butzer, Tony

    1994-01-01

    We describe the architecture, implementation, use of a scalable, high performance, distributed-parallel data storage system developed in the ARPA funded MAGIC gigabit testbed. A collection of wide area distributed disk servers operate in parallel to provide logical block level access to large data sets. Operated primarily as a network-based cache, the architecture supports cooperation among independently owned resources to provide fast, large-scale, on-demand storage to support data handling, simulation, and computation.

  5. Tile-based Level of Detail for the Parallel Age

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niski, K; Cohen, J D

    Today's PCs incorporate multiple CPUs and GPUs and are easily arranged in clusters for high-performance, interactive graphics. We present an approach based on hierarchical, screen-space tiles to parallelizing rendering with level of detail. Adapt tiles, render tiles, and machine tiles are associated with CPUs, GPUs, and PCs, respectively, to efficiently parallelize the workload with good resource utilization. Adaptive tile sizes provide load balancing while our level of detail system allows total and independent management of the load on CPUs and GPUs. We demonstrate our approach on parallel configurations consisting of both single PCs and a cluster of PCs.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo Zehua; Tang Xianzhu

    Parallel transport of long mean-free-path plasma along an open magnetic field line is characterized by strong temperature anisotropy, which is driven by two effects. The first is magnetic moment conservation in a non-uniform magnetic field, which can transfer energy between parallel and perpendicular degrees of freedom. The second is decompressional cooling of the parallel temperature due to parallel flow acceleration by conventional presheath electric field which is associated with the sheath condition near the wall surface where the open magnetic field line intercepts the discharge chamber. To the leading order in gyroradius to system gradient length scale expansion, the parallelmore » transport can be understood via the Chew-Goldbeger-Low (CGL) model which retains two components of the parallel heat flux, i.e., q{sub n} associated with the parallel thermal energy and q{sub s} related to perpendicular thermal energy. It is shown that in addition to the effect of magnetic field strength (B) modulation, the two components (q{sub n} and q{sub s}) of the parallel heat flux play decisive roles in the parallel variation of the plasma profile, which includes the plasma density (n), parallel flow (u), parallel and perpendicular temperatures (T{sub Parallel-To} and T{sub Up-Tack }), and the ambipolar potential ({phi}). Both their profile (q{sub n}/B and q{sub s}/B{sup 2}) and the upstream values of the ratio of the conductive and convective thermal flux (q{sub n}/nuT{sub Parallel-To} and q{sub s}/nuT{sub Up-Tack }) provide the controlling physics, in addition to B modulation. The physics described by the CGL model are contrasted with those of the double-adiabatic laws and further elucidated by comparison with the first-principles kinetic simulation for a specific but representative flux expander case.« less

  7. Parallel ICA of FDG-PET and PiB-PET in three conditions with underlying Alzheimer's pathology

    PubMed Central

    Laforce, Robert; Tosun, Duygu; Ghosh, Pia; Lehmann, Manja; Madison, Cindee M.; Weiner, Michael W.; Miller, Bruce L.; Jagust, William J.; Rabinovici, Gil D.

    2014-01-01

    The relationships between clinical phenotype, β-amyloid (Aβ) deposition and neurodegeneration in Alzheimer's disease (AD) are incompletely understood yet have important ramifications for future therapy. The goal of this study was to utilize multimodality positron emission tomography (PET) data from a clinically heterogeneous population of patients with probable AD in order to: (1) identify spatial patterns of Aβ deposition measured by (11C)-labeled Pittsburgh Compound B (PiB-PET) and glucose metabolism measured by FDG-PET that correlate with specific clinical presentation and (2) explore associations between spatial patterns of Aβ deposition and glucose metabolism across the AD population. We included all patients meeting the criteria for probable AD (NIA–AA) who had undergone MRI, PiB and FDG-PET at our center (N = 46, mean age 63.0 ± 7.7, Mini-Mental State Examination 22.0 ± 4.8). Patients were subclassified based on their cognitive profiles into an amnestic/dysexecutive group (AD-memory; n = 27), a language-predominant group (AD-language; n = 10) and a visuospatial-predominant group (AD-visuospatial; n = 9). All patients were required to have evidence of amyloid deposition on PiB-PET. To capture the spatial distribution of Aβ deposition and glucose metabolism, we employed parallel independent component analysis (pICA), a method that enables joint analyses of multimodal imaging data. The relationships between PET components and clinical group were examined using a Receiver Operator Characteristic approach, including age, gender, education and apolipoprotein E ε4 allele carrier status as covariates. Results of the first set of analyses independently examining the relationship between components from each modality and clinical group showed three significant components for FDG: a left inferior frontal and temporoparietal component associated with AD-language (area under the curve [AUC] 0.82, p = 0.011), and two components associated with AD-visuospatial (bilateral occipito-parieto-temporal [AUC 0.85, p = 0.009] and right posterior cingulate cortex [PCC]/precuneus and right lateral parietal [AUC 0.69, p = 0.045]). The AD-memory associated component included predominantly bilateral inferior frontal, cuneus and inferior temporal, and right inferior parietal hypometabolism but did not reach significance (AUC 0.65, p = 0.062). None of the PiB components correlated with clinical group. Joint analysis of PiB and FDG with pICA revealed a correlated component pair, in which increased frontal and decreased PCC/precuneus PiB correlated with decreased FDG in the frontal, occipital and temporal regions (partial r = 0.75, p < 0.0001). Using multivariate data analysis, this study reinforced the notion that clinical phenotype in AD is tightly linked to patterns of glucose hypometabolism but not amyloid deposition. These findings are strikingly similar to those of univariate paradigms and provide additional support in favor of specific involvement of the language network, higher-order visual network, and default mode network in clinical variants of AD. The inverse relationship between Aβ deposition and glucose metabolism in partially overlapping brain regions suggests that Aβ may exert both local and remote effects on brain metabolism. Applying multivariate approaches such as pICA to multimodal imaging data is a promising approach for unraveling the complex relationships between different elements of AD pathophysiology. PMID:24818077

  8. HPCC Methodologies for Structural Design and Analysis on Parallel and Distributed Computing Platforms

    NASA Technical Reports Server (NTRS)

    Farhat, Charbel

    1998-01-01

    In this grant, we have proposed a three-year research effort focused on developing High Performance Computation and Communication (HPCC) methodologies for structural analysis on parallel processors and clusters of workstations, with emphasis on reducing the structural design cycle time. Besides consolidating and further improving the FETI solver technology to address plate and shell structures, we have proposed to tackle the following design related issues: (a) parallel coupling and assembly of independently designed and analyzed three-dimensional substructures with non-matching interfaces, (b) fast and smart parallel re-analysis of a given structure after it has undergone design modifications, (c) parallel evaluation of sensitivity operators (derivatives) for design optimization, and (d) fast parallel analysis of mildly nonlinear structures. While our proposal was accepted, support was provided only for one year.

  9. Parallel Process and Isomorphism: A Model for Decision Making in the Supervisory Triad

    ERIC Educational Resources Information Center

    Koltz, Rebecca L.; Odegard, Melissa A.; Feit, Stephen S.; Provost, Kent; Smith, Travis

    2012-01-01

    Parallel process and isomorphism are two supervisory concepts that are often discussed independently but rarely discussed in connection with each other. These two concepts, philosophically, have different historical roots, as well as different implications for interventions with regard to the supervisory triad. The authors examine the difference…

  10. Exploring the Effects and Use of a Chinese-English Parallel Concordancer

    ERIC Educational Resources Information Center

    Gao, Zhao-Ming

    2011-01-01

    Previous studies on self-correction using corpora involve monolingual concordances and intervention from instructors such as marking of errors, the use of modified concordances, and other simplifications of the task. Can L2 learners independently refine their previous outputs by simply using a parallel concordancer without any hints about their…

  11. Writing Parallel Parameter Sweep Applications with pMatlab

    DTIC Science & Technology

    2011-01-01

    formulate this type of problem in a leader-worker paradigm. The SETI @Home project is a well- known leader-worker parallel application [1]. The SETI ...their results back to the SETI @Home servers when they are done computing the job. Because each job is independent, it does not matter if the 415th job

  12. Project MANTIS: A MANTle Induction Simulator for coupling geodynamic and electromagnetic modeling

    NASA Astrophysics Data System (ADS)

    Weiss, C. J.

    2009-12-01

    A key component to testing geodynamic hypotheses resulting from the 3D mantle convection simulations is the ability to easily translate the predicted physiochemical state to the model space relevant for an independent geophysical observation, such as earth's seismic, geodetic or electromagnetic response. In this contribution a new parallel code for simulating low-frequency, global-scale electromagnetic induction phenomena is introduced that has the same Earth discretization as the popular CitcomS mantle convection code. Hence, projection of the CitcomS model into the model space of electrical conductivity is greatly simplified, and focuses solely on the node-to-node, physics-based relationship between these Earth parameters without the need for "upscaling", "downscaling", averaging or harmonizing with some other model basis such as spherical harmonics. Preliminary performance tests of the MANTIS code on shared and distributed memory parallel compute platforms shows favorable scaling (>70% efficiency) for up to 500 processors. As with CitcomS, an OpenDX visualization widget (VISMAN) is also provided for 3D rendering and interactive interrogation of model results. Details of the MANTIS code will be briefly discussed here, focusing on compatibility with CitcomS modeling, as will be preliminary results in which the electromagnetic response of a CitcomS model is evaluated. VISMAN rendering of electrical tomography-derived electrical conductivity model overlain by an a 1x1 deg crustal conductivity map. Grey scale represents the log_10 magnitude of conductivity [S/m]. Arrows are horiztonal components of a hypothetical magnetospheric source field used to electromagnetically excite the conductivity model.

  13. Exploiting Vector and Multicore Parallelsim for Recursive, Data- and Task-Parallel Programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, Bin; Krishnamoorthy, Sriram; Agrawal, Kunal

    Modern hardware contains parallel execution resources that are well-suited for data-parallelism-vector units-and task parallelism-multicores. However, most work on parallel scheduling focuses on one type of hardware or the other. In this work, we present a scheduling framework that allows for a unified treatment of task- and data-parallelism. Our key insight is an abstraction, task blocks, that uniformly handles data-parallel iterations and task-parallel tasks, allowing them to be scheduled on vector units or executed independently as multicores. Our framework allows us to define schedulers that can dynamically select between executing task- blocks on vector units or multicores. We show that thesemore » schedulers are asymptotically optimal, and deliver the maximum amount of parallelism available in computation trees. To evaluate our schedulers, we develop program transformations that can convert mixed data- and task-parallel pro- grams into task block-based programs. Using a prototype instantiation of our scheduling framework, we show that, on an 8-core system, we can simultaneously exploit vector and multicore parallelism to achieve 14×-108× speedup over sequential baselines.« less

  14. Development of a mechatronic platform and validation of methods for estimating ankle stiffness during the stance phase of walking.

    PubMed

    Rouse, Elliott J; Hargrove, Levi J; Perreault, Eric J; Peshkin, Michael A; Kuiken, Todd A

    2013-08-01

    The mechanical properties of human joints (i.e., impedance) are constantly modulated to precisely govern human interaction with the environment. The estimation of these properties requires the displacement of the joint from its intended motion and a subsequent analysis to determine the relationship between the imposed perturbation and the resultant joint torque. There has been much investigation into the estimation of upper-extremity joint impedance during dynamic activities, yet the estimation of ankle impedance during walking has remained a challenge. This estimation is important for understanding how the mechanical properties of the human ankle are modulated during locomotion, and how those properties can be replicated in artificial prostheses designed to restore natural movement control. Here, we introduce a mechatronic platform designed to address the challenge of estimating the stiffness component of ankle impedance during walking, where stiffness denotes the static component of impedance. The system consists of a single degree of freedom mechatronic platform that is capable of perturbing the ankle during the stance phase of walking and measuring the response torque. Additionally, we estimate the platform's intrinsic inertial impedance using parallel linear filters and present a set of methods for estimating the impedance of the ankle from walking data. The methods were validated by comparing the experimentally determined estimates for the stiffness of a prosthetic foot to those measured from an independent testing machine. The parallel filters accurately estimated the mechatronic platform's inertial impedance, accounting for 96% of the variance, when averaged across channels and trials. Furthermore, our measurement system was found to yield reliable estimates of stiffness, which had an average error of only 5.4% (standard deviation: 0.7%) when measured at three time points within the stance phase of locomotion, and compared to the independently determined stiffness values of the prosthetic foot. The mechatronic system and methods proposed in this study are capable of accurately estimating ankle stiffness during the foot-flat region of stance phase. Future work will focus on the implementation of this validated system in estimating human ankle impedance during the stance phase of walking.

  15. Spatial patterns of brain amyloid-beta burden and atrophy rate associations in mild cognitive impairment.

    PubMed

    Tosun, Duygu; Schuff, Norbert; Mathis, Chester A; Jagust, William; Weiner, Michael W

    2011-04-01

    Amyloid-β accumulation in the brain is thought to be one of the earliest events in Alzheimer's disease, possibly leading to synaptic dysfunction, neurodegeneration and cognitive/functional decline. The earliest detectable changes seen with neuroimaging appear to be amyloid-β accumulation detected by (11)C-labelled Pittsburgh compound B positron emission tomography imaging. However, some individuals tolerate high brain amyloid-β loads without developing symptoms, while others progressively decline, suggesting that events in the brain downstream from amyloid-β deposition, such as regional brain atrophy rates, play an important role. The main purpose of this study was to understand the relationship between the regional distributions of increased amyloid-β and the regional distribution of increased brain atrophy rates in patients with mild cognitive impairment. To simultaneously capture the spatial distributions of amyloid-β and brain atrophy rates, we employed the statistical concept of parallel independent component analysis, an effective method for joint analysis of multimodal imaging data. Parallel independent component analysis identified significant relationships between two patterns of amyloid-β deposition and atrophy rates: (i) increased amyloid-β burden in the left precuneus/cuneus and medial-temporal regions was associated with increased brain atrophy rates in the left medial-temporal and parietal regions; and (ii) in contrast, increased amyloid-β burden in bilateral precuneus/cuneus and parietal regions was associated with increased brain atrophy rates in the right medial temporal regions. The spatial distribution of increased amyloid-β and the associated spatial distribution of increased brain atrophy rates embrace a characteristic pattern of brain structures known for a high vulnerability to Alzheimer's disease pathology, encouraging for the use of (11)C-labelled Pittsburgh compound B positron emission tomography measures as early indicators of Alzheimer's disease. These results may begin to shed light on the mechanisms by which amyloid-β deposition leads to neurodegeneration and cognitive decline and the development of a more specific Alzheimer's disease-specific imaging signature for diagnosis and use of this knowledge in the development of new anti-therapies for Alzheimer's disease.

  16. The (in)dependence of articulation and lexical planning during isolated word production.

    PubMed

    Buz, Esteban; Jaeger, T Florian

    The number of phonological neighbors to a word (PND) can affect its lexical planning and pronunciation. Similar parallel effects on planning and articulation have been observed for other lexical variables, such as a word's contextual predictability. Such parallelism is frequently taken to indicate that effects on articulation are mediated by effects on the time course of lexical planning. We test this mediation assumption for PND and find it unsupported. In a picture naming experiment, we measure speech onset latencies (planning), word durations, and vowel dispersion (articulation). We find that PND predicts both latencies and durations. Further, latencies predict durations. However, the effects of PND and latency on duration are independent: parallel effects do not imply mediation. We discuss the consequences for accounts of lexical planning, articulation, and the link between them. In particular, our results suggest that ease of planning does not explain effects of PND on articulation.

  17. The (in)dependence of articulation and lexical planning during isolated word production

    PubMed Central

    Buz, Esteban; Jaeger, T. Florian

    2016-01-01

    The number of phonological neighbors to a word (PND) can affect its lexical planning and pronunciation. Similar parallel effects on planning and articulation have been observed for other lexical variables, such as a word’s contextual predictability. Such parallelism is frequently taken to indicate that effects on articulation are mediated by effects on the time course of lexical planning. We test this mediation assumption for PND and find it unsupported. In a picture naming experiment, we measure speech onset latencies (planning), word durations, and vowel dispersion (articulation). We find that PND predicts both latencies and durations. Further, latencies predict durations. However, the effects of PND and latency on duration are independent: parallel effects do not imply mediation. We discuss the consequences for accounts of lexical planning, articulation, and the link between them. In particular, our results suggest that ease of planning does not explain effects of PND on articulation. PMID:27376094

  18. Parallel independent evolution of pathogenicity within the genus Yersinia

    PubMed Central

    Reuter, Sandra; Connor, Thomas R.; Barquist, Lars; Walker, Danielle; Feltwell, Theresa; Harris, Simon R.; Fookes, Maria; Hall, Miquette E.; Petty, Nicola K.; Fuchs, Thilo M.; Corander, Jukka; Dufour, Muriel; Ringwood, Tamara; Savin, Cyril; Bouchier, Christiane; Martin, Liliane; Miettinen, Minna; Shubin, Mikhail; Riehm, Julia M.; Laukkanen-Ninios, Riikka; Sihvonen, Leila M.; Siitonen, Anja; Skurnik, Mikael; Falcão, Juliana Pfrimer; Fukushima, Hiroshi; Scholz, Holger C.; Prentice, Michael B.; Wren, Brendan W.; Parkhill, Julian; Carniel, Elisabeth; Achtman, Mark; McNally, Alan; Thomson, Nicholas R.

    2014-01-01

    The genus Yersinia has been used as a model system to study pathogen evolution. Using whole-genome sequencing of all Yersinia species, we delineate the gene complement of the whole genus and define patterns of virulence evolution. Multiple distinct ecological specializations appear to have split pathogenic strains from environmental, nonpathogenic lineages. This split demonstrates that contrary to hypotheses that all pathogenic Yersinia species share a recent common pathogenic ancestor, they have evolved independently but followed parallel evolutionary paths in acquiring the same virulence determinants as well as becoming progressively more limited metabolically. Shared virulence determinants are limited to the virulence plasmid pYV and the attachment invasion locus ail. These acquisitions, together with genomic variations in metabolic pathways, have resulted in the parallel emergence of related pathogens displaying an increasingly specialized lifestyle with a spectrum of virulence potential, an emerging theme in the evolution of other important human pathogens. PMID:24753568

  19. Convergent evolution of marine mammals is associated with distinct substitutions in common genes

    PubMed Central

    Zhou, Xuming; Seim, Inge; Gladyshev, Vadim N.

    2015-01-01

    Phenotypic convergence is thought to be driven by parallel substitutions coupled with natural selection at the sequence level. Multiple independent evolutionary transitions of mammals to an aquatic environment offer an opportunity to test this thesis. Here, whole genome alignment of coding sequences identified widespread parallel amino acid substitutions in marine mammals; however, the majority of these changes were not unique to these animals. Conversely, we report that candidate aquatic adaptation genes, identified by signatures of likelihood convergence and/or elevated ratio of nonsynonymous to synonymous nucleotide substitution rate, are characterized by very few parallel substitutions and exhibit distinct sequence changes in each group. Moreover, no significant positive correlation was found between likelihood convergence and positive selection in all three marine lineages. These results suggest that convergence in protein coding genes associated with aquatic lifestyle is mainly characterized by independent substitutions and relaxed negative selection. PMID:26549748

  20. Observations of large parallel electric fields in the auroral ionosphere

    NASA Technical Reports Server (NTRS)

    Mozer, F. S.

    1976-01-01

    Rocket borne measurements employing a double probe technique were used to gather evidence for the existence of electric fields in the auroral ionosphere having components parallel to the magnetic field direction. An analysis of possible experimental errors leads to the conclusion that no known uncertainties can account for the roughly 10 mV/m parallel electric fields that are observed.

  1. Tools and Techniques for Adding Fault Tolerance to Distributed and Parallel Programs

    DTIC Science & Technology

    1991-12-07

    is rapidly approaching dimensions where fault tolerance can no longer be ignored. No matter how reliable the i .nd~ividual components May be, the...The scale of parallel computing systems is rapidly approaching dimensions where 41to’- erance can no longer be ignored. No matter how relitble the...those employed in the Tandem [71 and Stratus [35] systems, is clearly impractical. * No matter how reliable the individual components are, the sheer

  2. Parallel evolution of a type IV secretion system in radiating lineages of the host-restricted bacterial pathogen Bartonella.

    PubMed

    Engel, Philipp; Salzburger, Walter; Liesch, Marius; Chang, Chao-Chin; Maruyama, Soichi; Lanz, Christa; Calteau, Alexandra; Lajus, Aurélie; Médigue, Claudine; Schuster, Stephan C; Dehio, Christoph

    2011-02-10

    Adaptive radiation is the rapid origination of multiple species from a single ancestor as the result of concurrent adaptation to disparate environments. This fundamental evolutionary process is considered to be responsible for the genesis of a great portion of the diversity of life. Bacteria have evolved enormous biological diversity by exploiting an exceptional range of environments, yet diversification of bacteria via adaptive radiation has been documented in a few cases only and the underlying molecular mechanisms are largely unknown. Here we show a compelling example of adaptive radiation in pathogenic bacteria and reveal their genetic basis. Our evolutionary genomic analyses of the α-proteobacterial genus Bartonella uncover two parallel adaptive radiations within these host-restricted mammalian pathogens. We identify a horizontally-acquired protein secretion system, which has evolved to target specific bacterial effector proteins into host cells as the evolutionary key innovation triggering these parallel adaptive radiations. We show that the functional versatility and adaptive potential of the VirB type IV secretion system (T4SS), and thereby translocated Bartonella effector proteins (Beps), evolved in parallel in the two lineages prior to their radiations. Independent chromosomal fixation of the virB operon and consecutive rounds of lineage-specific bep gene duplications followed by their functional diversification characterize these parallel evolutionary trajectories. Whereas most Beps maintained their ancestral domain constitution, strikingly, a novel type of effector protein emerged convergently in both lineages. This resulted in similar arrays of host cell-targeted effector proteins in the two lineages of Bartonella as the basis of their independent radiation. The parallel molecular evolution of the VirB/Bep system displays a striking example of a key innovation involved in independent adaptive processes and the emergence of bacterial pathogens. Furthermore, our study highlights the remarkable evolvability of T4SSs and their effector proteins, explaining their broad application in bacterial interactions with the environment.

  3. Parallel Evolution of a Type IV Secretion System in Radiating Lineages of the Host-Restricted Bacterial Pathogen Bartonella

    PubMed Central

    Engel, Philipp; Salzburger, Walter; Liesch, Marius; Chang, Chao-Chin; Maruyama, Soichi; Lanz, Christa; Calteau, Alexandra; Lajus, Aurélie; Médigue, Claudine; Schuster, Stephan C.; Dehio, Christoph

    2011-01-01

    Adaptive radiation is the rapid origination of multiple species from a single ancestor as the result of concurrent adaptation to disparate environments. This fundamental evolutionary process is considered to be responsible for the genesis of a great portion of the diversity of life. Bacteria have evolved enormous biological diversity by exploiting an exceptional range of environments, yet diversification of bacteria via adaptive radiation has been documented in a few cases only and the underlying molecular mechanisms are largely unknown. Here we show a compelling example of adaptive radiation in pathogenic bacteria and reveal their genetic basis. Our evolutionary genomic analyses of the α-proteobacterial genus Bartonella uncover two parallel adaptive radiations within these host-restricted mammalian pathogens. We identify a horizontally-acquired protein secretion system, which has evolved to target specific bacterial effector proteins into host cells as the evolutionary key innovation triggering these parallel adaptive radiations. We show that the functional versatility and adaptive potential of the VirB type IV secretion system (T4SS), and thereby translocated Bartonella effector proteins (Beps), evolved in parallel in the two lineages prior to their radiations. Independent chromosomal fixation of the virB operon and consecutive rounds of lineage-specific bep gene duplications followed by their functional diversification characterize these parallel evolutionary trajectories. Whereas most Beps maintained their ancestral domain constitution, strikingly, a novel type of effector protein emerged convergently in both lineages. This resulted in similar arrays of host cell-targeted effector proteins in the two lineages of Bartonella as the basis of their independent radiation. The parallel molecular evolution of the VirB/Bep system displays a striking example of a key innovation involved in independent adaptive processes and the emergence of bacterial pathogens. Furthermore, our study highlights the remarkable evolvability of T4SSs and their effector proteins, explaining their broad application in bacterial interactions with the environment. PMID:21347280

  4. Applications of Parallel Process HiMAP for Large Scale Multidisciplinary Problems

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Potsdam, Mark; Rodriguez, David; Kwak, Dochay (Technical Monitor)

    2000-01-01

    HiMAP is a three level parallel middleware that can be interfaced to a large scale global design environment for code independent, multidisciplinary analysis using high fidelity equations. Aerospace technology needs are rapidly changing. Computational tools compatible with the requirements of national programs such as space transportation are needed. Conventional computation tools are inadequate for modern aerospace design needs. Advanced, modular computational tools are needed, such as those that incorporate the technology of massively parallel processors (MPP).

  5. Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    A modular process that can efficiently solve large scale multidisciplinary problems using massively parallel supercomputers is presented. The process integrates disciplines with diverse physical characteristics by retaining the efficiency of individual disciplines. Computational domain independence of individual disciplines is maintained using a meta programming approach. The process integrates disciplines without affecting the combined performance. Results are demonstrated for large scale aerospace problems on several supercomputers. The super scalability and portability of the approach is demonstrated on several parallel computers.

  6. Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Byun, Chansup; Kwak, Dochan (Technical Monitor)

    2001-01-01

    A modular process that can efficiently solve large scale multidisciplinary problems using massively parallel super computers is presented. The process integrates disciplines with diverse physical characteristics by retaining the efficiency of individual disciplines. Computational domain independence of individual disciplines is maintained using a meta programming approach. The process integrates disciplines without affecting the combined performance. Results are demonstrated for large scale aerospace problems on several supercomputers. The super scalability and portability of the approach is demonstrated on several parallel computers.

  7. Organizing Compression of Hyperspectral Imagery to Allow Efficient Parallel Decompression

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew A.; Kiely, Aaron B.

    2014-01-01

    family of schemes has been devised for organizing the output of an algorithm for predictive data compression of hyperspectral imagery so as to allow efficient parallelization in both the compressor and decompressor. In these schemes, the compressor performs a number of iterations, during each of which a portion of the data is compressed via parallel threads operating on independent portions of the data. The general idea is that for each iteration it is predetermined how much compressed data will be produced from each thread.

  8. A Parallel Trade Study Architecture for Design Optimization of Complex Systems

    NASA Technical Reports Server (NTRS)

    Kim, Hongman; Mullins, James; Ragon, Scott; Soremekun, Grant; Sobieszczanski-Sobieski, Jaroslaw

    2005-01-01

    Design of a successful product requires evaluating many design alternatives in a limited design cycle time. This can be achieved through leveraging design space exploration tools and available computing resources on the network. This paper presents a parallel trade study architecture to integrate trade study clients and computing resources on a network using Web services. The parallel trade study solution is demonstrated to accelerate design of experiments, genetic algorithm optimization, and a cost as an independent variable (CAIV) study for a space system application.

  9. Delta connected resonant snubber circuit

    DOEpatents

    Lai, J.S.; Peng, F.Z.; Young, R.W. Sr.; Ott, G.W. Jr.

    1998-01-20

    A delta connected, resonant snubber-based, soft switching, inverter circuit achieves lossless switching during dc-to-ac power conversion and power conditioning with minimum component count and size. Current is supplied to the resonant snubber branches solely by the dc supply voltage through the main inverter switches and the auxiliary switches. Component count and size are reduced by use of a single semiconductor switch in the resonant snubber branches. Component count is also reduced by maximizing the use of stray capacitances of the main switches as parallel resonant capacitors. Resonance charging and discharging of the parallel capacitances allows lossless, zero voltage switching. In one embodiment, circuit component size and count are minimized while achieving lossless, zero voltage switching within a three-phase inverter. 36 figs.

  10. Delta connected resonant snubber circuit

    DOEpatents

    Lai, Jih-Sheng; Peng, Fang Zheng; Young, Sr., Robert W.; Ott, Jr., George W.

    1998-01-01

    A delta connected, resonant snubber-based, soft switching, inverter circuit achieves lossless switching during dc-to-ac power conversion and power conditioning with minimum component count and size. Current is supplied to the resonant snubber branches solely by the dc supply voltage through the main inverter switches and the auxiliary switches. Component count and size are reduced by use of a single semiconductor switch in the resonant snubber branches. Component count is also reduced by maximizing the use of stray capacitances of the main switches as parallel resonant capacitors. Resonance charging and discharging of the parallel capacitances allows lossless, zero voltage switching. In one embodiment, circuit component size and count are minimized while achieving lossless, zero voltage switching within a three-phase inverter.

  11. Distributed computing feasibility in a non-dedicated homogeneous distributed system

    NASA Technical Reports Server (NTRS)

    Leutenegger, Scott T.; Sun, Xian-He

    1993-01-01

    The low cost and availability of clusters of workstations have lead researchers to re-explore distributed computing using independent workstations. This approach may provide better cost/performance than tightly coupled multiprocessors. In practice, this approach often utilizes wasted cycles to run parallel jobs. The feasibility of such a non-dedicated parallel processing environment assuming workstation processes have preemptive priority over parallel tasks is addressed. An analytical model is developed to predict parallel job response times. Our model provides insight into how significantly workstation owner interference degrades parallel program performance. A new term task ratio, which relates the parallel task demand to the mean service demand of nonparallel workstation processes, is introduced. It was proposed that task ratio is a useful metric for determining how large the demand of a parallel applications must be in order to make efficient use of a non-dedicated distributed system.

  12. How Many Separable Sources? Model Selection In Independent Components Analysis

    PubMed Central

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988

  13. The R package "sperrorest" : Parallelized spatial error estimation and variable importance assessment for geospatial machine learning

    NASA Astrophysics Data System (ADS)

    Schratz, Patrick; Herrmann, Tobias; Brenning, Alexander

    2017-04-01

    Computational and statistical prediction methods such as the support vector machine have gained popularity in remote-sensing applications in recent years and are often compared to more traditional approaches like maximum-likelihood classification. However, the accuracy assessment of such predictive models in a spatial context needs to account for the presence of spatial autocorrelation in geospatial data by using spatial cross-validation and bootstrap strategies instead of their now more widely used non-spatial equivalent. The R package sperrorest by A. Brenning [IEEE International Geoscience and Remote Sensing Symposium, 1, 374 (2012)] provides a generic interface for performing (spatial) cross-validation of any statistical or machine-learning technique available in R. Since spatial statistical models as well as flexible machine-learning algorithms can be computationally expensive, parallel computing strategies are required to perform cross-validation efficiently. The most recent major release of sperrorest therefore comes with two new features (aside from improved documentation): The first one is the parallelized version of sperrorest(), parsperrorest(). This function features two parallel modes to greatly speed up cross-validation runs. Both parallel modes are platform independent and provide progress information. par.mode = 1 relies on the pbapply package and calls interactively (depending on the platform) parallel::mclapply() or parallel::parApply() in the background. While forking is used on Unix-Systems, Windows systems use a cluster approach for parallel execution. par.mode = 2 uses the foreach package to perform parallelization. This method uses a different way of cluster parallelization than the parallel package does. In summary, the robustness of parsperrorest() is increased with the implementation of two independent parallel modes. A new way of partitioning the data in sperrorest is provided by partition.factor.cv(). This function gives the user the possibility to perform cross-validation at the level of some grouping structure. As an example, in remote sensing of agricultural land uses, pixels from the same field contain nearly identical information and will thus be jointly placed in either the test set or the training set. Other spatial sampling resampling strategies are already available and can be extended by the user.

  14. Madness and sanity at the time of Indian independence

    PubMed Central

    Jain, Sanjeev; Murthy, Pratima; Sarin, Alok

    2016-01-01

    The backdrop of the Indian Independence offers glimpses of many ‘metaphors of madness’. In this article, we explore this through a few instances, starting from 1857, around the time of the First War of Independence, to 1947, when India became an independent nation. Such metaphors have their parallels both in historical as well as in contemporary times, where instances of one man's imagination becoming another's concept of irrationality and insanity continue. PMID:28066017

  15. Madness and sanity at the time of Indian independence.

    PubMed

    Jain, Sanjeev; Murthy, Pratima; Sarin, Alok

    2016-01-01

    The backdrop of the Indian Independence offers glimpses of many 'metaphors of madness'. In this article, we explore this through a few instances, starting from 1857, around the time of the First War of Independence, to 1947, when India became an independent nation. Such metaphors have their parallels both in historical as well as in contemporary times, where instances of one man's imagination becoming another's concept of irrationality and insanity continue.

  16. Kinematic principles of primate rotational vestibulo-ocular reflex. I. Spatial organization of fast phase velocity axes

    NASA Technical Reports Server (NTRS)

    Hess, B. J.; Angelaki, D. E.

    1997-01-01

    The spatial organization of fast phase velocity vectors of the vestibulo-ocular reflex (VOR) was studied in rhesus monkeys during yaw rotations about an earth-horizontal axis that changed continuously the orientation of the head relative to gravity ("barbecue spit" rotation). In addition to a velocity component parallel to the rotation axis, fast phases also exhibited a velocity component that invariably was oriented along the momentary direction of gravity. As the head rotated through supine and prone positions, torsional components of fast phase velocity axes became prominent. Similarly, as the head rotated through left and right ear-down positions, fast phase velocity axes exhibited prominent vertical components. The larger the speed of head rotation the greater the magnitude of this fast phase component, which was collinear with gravity. The main sequence properties of VOR fast phases were independent of head position. However, peak amplitude as well as peak velocity of fast phases were both modulated as a function of head orientation, exhibiting a minimum in prone position. The results suggest that the fast phases of vestibulo-ocular reflexes not only redirect gaze and reposition the eye in the direction of head motion but also reorient the eye with respect to earth-vertical when the head moves relative to gravity. As further elaborated in the companion paper, the underlying mechanism could be described as a dynamic, gravity-dependent modulation of the coordinates of ocular rotations relative to the head.

  17. Optimal message log reclamation for independent checkpointing

    NASA Technical Reports Server (NTRS)

    Wang, Yi-Min; Fuchs, W. Kent

    1993-01-01

    Independent (uncoordinated) check pointing for parallel and distributed systems allows maximum process autonomy but suffers from possible domino effects and the associated storage space overhead for maintaining multiple checkpoints and message logs. In most research on check pointing and recovery, it was assumed that only the checkpoints and message logs older than the global recovery line can be discarded. It is shown how recovery line transformation and decomposition can be applied to the problem of efficiently identifying all discardable message logs, thereby achieving optimal garbage collection. Communication trace-driven simulation for several parallel programs is used to show the benefits of the proposed algorithm for message log reclamation.

  18. Development of parallel algorithms for electrical power management in space applications

    NASA Technical Reports Server (NTRS)

    Berry, Frederick C.

    1989-01-01

    The application of parallel techniques for electrical power system analysis is discussed. The Newton-Raphson method of load flow analysis was used along with the decomposition-coordination technique to perform load flow analysis. The decomposition-coordination technique enables tasks to be performed in parallel by partitioning the electrical power system into independent local problems. Each independent local problem represents a portion of the total electrical power system on which a loan flow analysis can be performed. The load flow analysis is performed on these partitioned elements by using the Newton-Raphson load flow method. These independent local problems will produce results for voltage and power which can then be passed to the coordinator portion of the solution procedure. The coordinator problem uses the results of the local problems to determine if any correction is needed on the local problems. The coordinator problem is also solved by an iterative method much like the local problem. The iterative method for the coordination problem will also be the Newton-Raphson method. Therefore, each iteration at the coordination level will result in new values for the local problems. The local problems will have to be solved again along with the coordinator problem until some convergence conditions are met.

  19. Determining the optimal number of independent components for reproducible transcriptomic data analysis.

    PubMed

    Kairov, Ulykbek; Cantini, Laura; Greco, Alessandro; Molkenov, Askhat; Czerwinska, Urszula; Barillot, Emmanuel; Zinovyev, Andrei

    2017-09-11

    Independent Component Analysis (ICA) is a method that models gene expression data as an action of a set of statistically independent hidden factors. The output of ICA depends on a fundamental parameter: the number of components (factors) to compute. The optimal choice of this parameter, related to determining the effective data dimension, remains an open question in the application of blind source separation techniques to transcriptomic data. Here we address the question of optimizing the number of statistically independent components in the analysis of transcriptomic data for reproducibility of the components in multiple runs of ICA (within the same or within varying effective dimensions) and in multiple independent datasets. To this end, we introduce ranking of independent components based on their stability in multiple ICA computation runs and define a distinguished number of components (Most Stable Transcriptome Dimension, MSTD) corresponding to the point of the qualitative change of the stability profile. Based on a large body of data, we demonstrate that a sufficient number of dimensions is required for biological interpretability of the ICA decomposition and that the most stable components with ranks below MSTD have more chances to be reproduced in independent studies compared to the less stable ones. At the same time, we show that a transcriptomics dataset can be reduced to a relatively high number of dimensions without losing the interpretability of ICA, even though higher dimensions give rise to components driven by small gene sets. We suggest a protocol of ICA application to transcriptomics data with a possibility of prioritizing components with respect to their reproducibility that strengthens the biological interpretation. Computing too few components (much less than MSTD) is not optimal for interpretability of the results. The components ranked within MSTD range have more chances to be reproduced in independent studies.

  20. A real-time MPEG software decoder using a portable message-passing library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kwong, Man Kam; Tang, P.T. Peter; Lin, Biquan

    1995-12-31

    We present a real-time MPEG software decoder that uses message-passing libraries such as MPL, p4 and MPI. The parallel MPEG decoder currently runs on the IBM SP system but can be easil ported to other parallel machines. This paper discusses our parallel MPEG decoding algorithm as well as the parallel programming environment under which it uses. Several technical issues are discussed, including balancing of decoding speed, memory limitation, 1/0 capacities, and optimization of MPEG decoding components. This project shows that a real-time portable software MPEG decoder is feasible in a general-purpose parallel machine.

  1. Proposed standardized definitions for vertical resolution and uncertainty in the NDACC lidar ozone and temperature algorithms - Part 2: Ozone DIAL uncertainty budget

    NASA Astrophysics Data System (ADS)

    Leblanc, Thierry; Sica, Robert J.; van Gijsel, Joanna A. E.; Godin-Beekmann, Sophie; Haefele, Alexander; Trickl, Thomas; Payen, Guillaume; Liberti, Gianluigi

    2016-08-01

    A standardized approach for the definition, propagation, and reporting of uncertainty in the ozone differential absorption lidar data products contributing to the Network for the Detection for Atmospheric Composition Change (NDACC) database is proposed. One essential aspect of the proposed approach is the propagation in parallel of all independent uncertainty components through the data processing chain before they are combined together to form the ozone combined standard uncertainty. The independent uncertainty components contributing to the overall budget include random noise associated with signal detection, uncertainty due to saturation correction, background noise extraction, the absorption cross sections of O3, NO2, SO2, and O2, the molecular extinction cross sections, and the number densities of the air, NO2, and SO2. The expression of the individual uncertainty components and their step-by-step propagation through the ozone differential absorption lidar (DIAL) processing chain are thoroughly estimated. All sources of uncertainty except detection noise imply correlated terms in the vertical dimension, which requires knowledge of the covariance matrix when the lidar signal is vertically filtered. In addition, the covariance terms must be taken into account if the same detection hardware is shared by the lidar receiver channels at the absorbed and non-absorbed wavelengths. The ozone uncertainty budget is presented as much as possible in a generic form (i.e., as a function of instrument performance and wavelength) so that all NDACC ozone DIAL investigators across the network can estimate, for their own instrument and in a straightforward manner, the expected impact of each reviewed uncertainty component. In addition, two actual examples of full uncertainty budget are provided, using nighttime measurements from the tropospheric ozone DIAL located at the Jet Propulsion Laboratory (JPL) Table Mountain Facility, California, and nighttime measurements from the JPL stratospheric ozone DIAL located at Mauna Loa Observatory, Hawai'i.

  2. A framework for grand scale parallelization of the combined finite discrete element method in 2d

    NASA Astrophysics Data System (ADS)

    Lei, Z.; Rougier, E.; Knight, E. E.; Munjiza, A.

    2014-09-01

    Within the context of rock mechanics, the Combined Finite-Discrete Element Method (FDEM) has been applied to many complex industrial problems such as block caving, deep mining techniques (tunneling, pillar strength, etc.), rock blasting, seismic wave propagation, packing problems, dam stability, rock slope stability, rock mass strength characterization problems, etc. The reality is that most of these were accomplished in a 2D and/or single processor realm. In this work a hardware independent FDEM parallelization framework has been developed using the Virtual Parallel Machine for FDEM, (V-FDEM). With V-FDEM, a parallel FDEM software can be adapted to different parallel architecture systems ranging from just a few to thousands of cores.

  3. Parallel workflow tools to facilitate human brain MRI post-processing

    PubMed Central

    Cui, Zaixu; Zhao, Chenxi; Gong, Gaolang

    2015-01-01

    Multi-modal magnetic resonance imaging (MRI) techniques are widely applied in human brain studies. To obtain specific brain measures of interest from MRI datasets, a number of complex image post-processing steps are typically required. Parallel workflow tools have recently been developed, concatenating individual processing steps and enabling fully automated processing of raw MRI data to obtain the final results. These workflow tools are also designed to make optimal use of available computational resources and to support the parallel processing of different subjects or of independent processing steps for a single subject. Automated, parallel MRI post-processing tools can greatly facilitate relevant brain investigations and are being increasingly applied. In this review, we briefly summarize these parallel workflow tools and discuss relevant issues. PMID:26029043

  4. Independent component model for cognitive functions of multiple subjects using [15O]H2O PET images.

    PubMed

    Park, Hae-Jeong; Kim, Jae-Jin; Youn, Tak; Lee, Dong Soo; Lee, Myung Chul; Kwon, Jun Soo

    2003-04-01

    An independent component model of multiple subjects' positron emission tomography (PET) images is proposed to explore the overall functional components involved in a task and to explain subject specific variations of metabolic activities under altered experimental conditions utilizing the Independent component analysis (ICA) concept. As PET images represent time-compressed activities of several cognitive components, we derived a mathematical model to decompose functional components from cross-sectional images based on two fundamental hypotheses: (1) all subjects share basic functional components that are common to subjects and spatially independent of each other in relation to the given experimental task, and (2) all subjects share common functional components throughout tasks which are also spatially independent. The variations of hemodynamic activities according to subjects or tasks can be explained by the variations in the usage weight of the functional components. We investigated the plausibility of the model using serial cognitive experiments of simple object perception, object recognition, two-back working memory, and divided attention of a syntactic process. We found that the independent component model satisfactorily explained the functional components involved in the task and discuss here the application of ICA in multiple subjects' PET images to explore the functional association of brain activations. Copyright 2003 Wiley-Liss, Inc.

  5. An improved independent component analysis model for 3D chromatogram separation and its solution by multi-areas genetic algorithm.

    PubMed

    Cui, Lizhi; Poon, Josiah; Poon, Simon K; Chen, Hao; Gao, Junbin; Kwan, Paul; Fan, Kei; Ling, Zhihao

    2014-01-01

    The 3D chromatogram generated by High Performance Liquid Chromatography-Diode Array Detector (HPLC-DAD) has been researched widely in the field of herbal medicine, grape wine, agriculture, petroleum and so on. Currently, most of the methods used for separating a 3D chromatogram need to know the compounds' number in advance, which could be impossible especially when the compounds are complex or white noise exist. New method which extracts compounds from 3D chromatogram directly is needed. In this paper, a new separation model named parallel Independent Component Analysis constrained by Reference Curve (pICARC) was proposed to transform the separation problem to a multi-parameter optimization issue. It was not necessary to know the number of compounds in the optimization. In order to find all the solutions, an algorithm named multi-areas Genetic Algorithm (mGA) was proposed, where multiple areas of candidate solutions were constructed according to the fitness and distances among the chromosomes. Simulations and experiments on a real life HPLC-DAD data set were used to demonstrate our method and its effectiveness. Through simulations, it can be seen that our method can separate 3D chromatogram to chromatogram peaks and spectra successfully even when they severely overlapped. It is also shown by the experiments that our method is effective to solve real HPLC-DAD data set. Our method can separate 3D chromatogram successfully without knowing the compounds' number in advance, which is fast and effective.

  6. Tensorial extensions of independent component analysis for multisubject FMRI analysis.

    PubMed

    Beckmann, C F; Smith, S M

    2005-03-01

    We discuss model-free analysis of multisubject or multisession FMRI data by extending the single-session probabilistic independent component analysis model (PICA; Beckmann and Smith, 2004. IEEE Trans. on Medical Imaging, 23 (2) 137-152) to higher dimensions. This results in a three-way decomposition that represents the different signals and artefacts present in the data in terms of their temporal, spatial, and subject-dependent variations. The technique is derived from and compared with parallel factor analysis (PARAFAC; Harshman and Lundy, 1984. In Research methods for multimode data analysis, chapter 5, pages 122-215. Praeger, New York). Using simulated data as well as data from multisession and multisubject FMRI studies we demonstrate that the tensor PICA approach is able to efficiently and accurately extract signals of interest in the spatial, temporal, and subject/session domain. The final decompositions improve upon PARAFAC results in terms of greater accuracy, reduced interference between the different estimated sources (reduced cross-talk), robustness (against deviations of the data from modeling assumptions and against overfitting), and computational speed. On real FMRI 'activation' data, the tensor PICA approach is able to extract plausible activation maps, time courses, and session/subject modes as well as provide a rich description of additional processes of interest such as image artefacts or secondary activation patterns. The resulting data decomposition gives simple and useful representations of multisubject/multisession FMRI data that can aid the interpretation and optimization of group FMRI studies beyond what can be achieved using model-based analysis techniques.

  7. Accuracy analysis and design of A3 parallel spindle head

    NASA Astrophysics Data System (ADS)

    Ni, Yanbing; Zhang, Biao; Sun, Yupeng; Zhang, Yuan

    2016-03-01

    As functional components of machine tools, parallel mechanisms are widely used in high efficiency machining of aviation components, and accuracy is one of the critical technical indexes. Lots of researchers have focused on the accuracy problem of parallel mechanisms, but in terms of controlling the errors and improving the accuracy in the stage of design and manufacturing, further efforts are required. Aiming at the accuracy design of a 3-DOF parallel spindle head(A3 head), its error model, sensitivity analysis and tolerance allocation are investigated. Based on the inverse kinematic analysis, the error model of A3 head is established by using the first-order perturbation theory and vector chain method. According to the mapping property of motion and constraint Jacobian matrix, the compensatable and uncompensatable error sources which affect the accuracy in the end-effector are separated. Furthermore, sensitivity analysis is performed on the uncompensatable error sources. The sensitivity probabilistic model is established and the global sensitivity index is proposed to analyze the influence of the uncompensatable error sources on the accuracy in the end-effector of the mechanism. The results show that orientation error sources have bigger effect on the accuracy in the end-effector. Based upon the sensitivity analysis results, the tolerance design is converted into the issue of nonlinearly constrained optimization with the manufacturing cost minimum being the optimization objective. By utilizing the genetic algorithm, the allocation of the tolerances on each component is finally determined. According to the tolerance allocation results, the tolerance ranges of ten kinds of geometric error sources are obtained. These research achievements can provide fundamental guidelines for component manufacturing and assembly of this kind of parallel mechanisms.

  8. A communication library for the parallelization of air quality models on structured grids

    NASA Astrophysics Data System (ADS)

    Miehe, Philipp; Sandu, Adrian; Carmichael, Gregory R.; Tang, Youhua; Dăescu, Dacian

    PAQMSG is an MPI-based, Fortran 90 communication library for the parallelization of air quality models (AQMs) on structured grids. It consists of distribution, gathering and repartitioning routines for different domain decompositions implementing a master-worker strategy. The library is architecture and application independent and includes optimization strategies for different architectures. This paper presents the library from a user perspective. Results are shown from the parallelization of STEM-III on Beowulf clusters. The PAQMSG library is available on the web. The communication routines are easy to use, and should allow for an immediate parallelization of existing AQMs. PAQMSG can also be used for constructing new models.

  9. A portable MPI-based parallel vector template library

    NASA Technical Reports Server (NTRS)

    Sheffler, Thomas J.

    1995-01-01

    This paper discusses the design and implementation of a polymorphic collection library for distributed address-space parallel computers. The library provides a data-parallel programming model for C++ by providing three main components: a single generic collection class, generic algorithms over collections, and generic algebraic combining functions. Collection elements are the fourth component of a program written using the library and may be either of the built-in types of C or of user-defined types. Many ideas are borrowed from the Standard Template Library (STL) of C++, although a restricted programming model is proposed because of the distributed address-space memory model assumed. Whereas the STL provides standard collections and implementations of algorithms for uniprocessors, this paper advocates standardizing interfaces that may be customized for different parallel computers. Just as the STL attempts to increase programmer productivity through code reuse, a similar standard for parallel computers could provide programmers with a standard set of algorithms portable across many different architectures. The efficacy of this approach is verified by examining performance data collected from an initial implementation of the library running on an IBM SP-2 and an Intel Paragon.

  10. A Portable MPI-Based Parallel Vector Template Library

    NASA Technical Reports Server (NTRS)

    Sheffler, Thomas J.

    1995-01-01

    This paper discusses the design and implementation of a polymorphic collection library for distributed address-space parallel computers. The library provides a data-parallel programming model for C + + by providing three main components: a single generic collection class, generic algorithms over collections, and generic algebraic combining functions. Collection elements are the fourth component of a program written using the library and may be either of the built-in types of c or of user-defined types. Many ideas are borrowed from the Standard Template Library (STL) of C++, although a restricted programming model is proposed because of the distributed address-space memory model assumed. Whereas the STL provides standard collections and implementations of algorithms for uniprocessors, this paper advocates standardizing interfaces that may be customized for different parallel computers. Just as the STL attempts to increase programmer productivity through code reuse, a similar standard for parallel computers could provide programmers with a standard set of algorithms portable across many different architectures. The efficacy of this approach is verified by examining performance data collected from an initial implementation of the library running on an IBM SP-2 and an Intel Paragon.

  11. A Hybrid Shared-Memory Parallel Max-Tree Algorithm for Extreme Dynamic-Range Images.

    PubMed

    Moschini, Ugo; Meijster, Arnold; Wilkinson, Michael H F

    2018-03-01

    Max-trees, or component trees, are graph structures that represent the connected components of an image in a hierarchical way. Nowadays, many application fields rely on images with high-dynamic range or floating point values. Efficient sequential algorithms exist to build trees and compute attributes for images of any bit depth. However, we show that the current parallel algorithms perform poorly already with integers at bit depths higher than 16 bits per pixel. We propose a parallel method combining the two worlds of flooding and merging max-tree algorithms. First, a pilot max-tree of a quantized version of the image is built in parallel using a flooding method. Later, this structure is used in a parallel leaf-to-root approach to compute efficiently the final max-tree and to drive the merging of the sub-trees computed by the threads. We present an analysis of the performance both on simulated and actual 2D images and 3D volumes. Execution times are about better than the fastest sequential algorithm and speed-up goes up to on 64 threads.

  12. Similar and contrasting dimensions of social cognition in schizophrenia and healthy subjects.

    PubMed

    Mehta, Urvakhsh Meherwan; Thirthalli, Jagadisha; Bhagyavathi, H D; Keshav Kumar, J; Subbakrishna, D K; Gangadhar, Bangalore N; Eack, Shaun M; Keshavan, Matcheri S

    2014-08-01

    Schizophrenia patients experience substantial impairments in social cognition (SC) and these deficits are associated with their poor functional outcome. Though SC is consistently shown to emerge as a cognitive dimension distinct from neurocognition, the dimensionality of SC is poorly understood. Moreover, comparing the components of SC between schizophrenia patients and healthy comparison subjects would provide specific insights on the construct validity of SC. We conducted principal component analyses of eight SC test scores (representing four domains of SC, namely, theory of mind, emotion processing, social perception and attributional bias) independently in 170 remitted schizophrenia patients and 111 matched healthy comparison subjects. We also conducted regression analyses to evaluate the relative contribution of individual SC components to other symptom dimensions, which are important clinical determinants of functional outcome (i.e., neurocognition, negative symptoms, motivational deficits and insight) in schizophrenia. A three-factor solution representing socio-emotional processing, social-inferential ability and external attribution components emerged in the patient group that accounted for 64.43% of the variance. In contrast, a two-factor solution representing socio-emotional processing and social-inferential ability was derived in the healthy comparison group that explained 56.5% of the variance. In the patient group, the social-inferential component predicted negative symptoms and motivational deficits. Our results suggest the presence of a multidimensional SC construct. The dimensionality of SC observed across the two groups, though not identical, displayed important parallels. Individual components also demonstrated distinct patterns of association with other symptom dimensions, thus supporting their external validity. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Efficient parallelization for AMR MHD multiphysics calculations; implementation in AstroBEAR

    NASA Astrophysics Data System (ADS)

    Carroll-Nellenback, Jonathan J.; Shroyer, Brandon; Frank, Adam; Ding, Chen

    2013-03-01

    Current adaptive mesh refinement (AMR) simulations require algorithms that are highly parallelized and manage memory efficiently. As compute engines grow larger, AMR simulations will require algorithms that achieve new levels of efficient parallelization and memory management. We have attempted to employ new techniques to achieve both of these goals. Patch or grid based AMR often employs ghost cells to decouple the hyperbolic advances of each grid on a given refinement level. This decoupling allows each grid to be advanced independently. In AstroBEAR we utilize this independence by threading the grid advances on each level with preference going to the finer level grids. This allows for global load balancing instead of level by level load balancing and allows for greater parallelization across both physical space and AMR level. Threading of level advances can also improve performance by interleaving communication with computation, especially in deep simulations with many levels of refinement. While we see improvements of up to 30% on deep simulations run on a few cores, the speedup is typically more modest (5-20%) for larger scale simulations. To improve memory management we have employed a distributed tree algorithm that requires processors to only store and communicate local sections of the AMR tree structure with neighboring processors. Using this distributed approach we are able to get reasonable scaling efficiency (>80%) out to 12288 cores and up to 8 levels of AMR - independent of the use of threading.

  14. Parallel-plate transmission line type of EMP simulators: Systematic review and recommendations

    NASA Astrophysics Data System (ADS)

    Giri, D. V.; Liu, T. K.; Tesche, F. M.; King, R. W. P.

    1980-05-01

    This report presents various aspects of the two-parallel-plate transmission line type of EMP simulator. Much of the work is the result of research efforts conducted during the last two decades at the Air Force Weapons Laboratory, and in industries/universities as well. The principal features of individual simulator components are discussed. The report also emphasizes that it is imperative to hybridize our understanding of individual components so that we can draw meaningful conclusions of simulator performance as a whole.

  15. Reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application

    DOEpatents

    Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda A [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN

    2012-01-10

    Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.

  16. Reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application

    DOEpatents

    Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda E [Cambridge, MA; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN

    2012-04-17

    Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.

  17. Illumination system for a projector composed of three LCD panels

    NASA Astrophysics Data System (ADS)

    Ho, Fang C.; Chu, Cheng-Wei; Lee, William

    2004-10-01

    A novel compound prism device consisting of a cubic polarizing beam splitter (PBS) and a non-polarizing dichroic prism is configured as the core component of the illumination unit of a full color projection display system of three pieces of reflective type liquid crystal imaging panels. When the in-coming light beam impinging on the PBS at 45 deg. of incidence, the beam component polarized perpendicularly to the plane of incidence is reflected and directed toward a LCD panel of red-image signal addressed after transmitted through a red-passing dichroic filter. The beam component polarized in parallel with the plane of incidence of the PBS is transmitted and passing through a red-cut dichroic filter. The rest portion of the light beam is then got the blue and green color bands separated by the dichroic filter at 30 deg. of incidence and directed to a blue and green signal addressed LCD panel respectively. All the dichroic filters are designed polarization independent and the PBS has a high contrast ratio of 1000 for the on/off states of teh addressed pixels of the image panels. The color separation and re-combination prism unit will provide a screen uniformity of d(u',v') <0.01 when it is accomodated in the projector with a projection lens assembly of F/#2.4.

  18. Spectrally resolved single-shot wavefront sensing of broadband high-harmonic sources

    NASA Astrophysics Data System (ADS)

    Freisem, L.; Jansen, G. S. M.; Rudolf, D.; Eikema, K. S. E.; Witte, S.

    2018-03-01

    Wavefront sensors are an important tool to characterize coherent beams of extreme ultraviolet radiation. However, conventional Hartmann-type sensors do not allow for independent wavefront characterization of different spectral components that may be present in a beam, which limits their applicability for intrinsically broadband high-harmonic generation (HHG) sources. Here we introduce a wavefront sensor that measures the wavefronts of all the harmonics in a HHG beam in a single camera exposure. By replacing the mask apertures with transmission gratings at different orientations, we simultaneously detect harmonic wavefronts and spectra, and obtain sensitivity to spatiotemporal structure such as pulse front tilt as well. We demonstrate the capabilities of the sensor through a parallel measurement of the wavefronts of 9 harmonics in a wavelength range between 25 and 49 nm, with up to lambda/32 precision.

  19. Neural Parallel Engine: A toolbox for massively parallel neural signal processing.

    PubMed

    Tam, Wing-Kin; Yang, Zhi

    2018-05-01

    Large-scale neural recordings provide detailed information on neuronal activities and can help elicit the underlying neural mechanisms of the brain. However, the computational burden is also formidable when we try to process the huge data stream generated by such recordings. In this study, we report the development of Neural Parallel Engine (NPE), a toolbox for massively parallel neural signal processing on graphical processing units (GPUs). It offers a selection of the most commonly used routines in neural signal processing such as spike detection and spike sorting, including advanced algorithms such as exponential-component-power-component (EC-PC) spike detection and binary pursuit spike sorting. We also propose a new method for detecting peaks in parallel through a parallel compact operation. Our toolbox is able to offer a 5× to 110× speedup compared with its CPU counterparts depending on the algorithms. A user-friendly MATLAB interface is provided to allow easy integration of the toolbox into existing workflows. Previous efforts on GPU neural signal processing only focus on a few rudimentary algorithms, are not well-optimized and often do not provide a user-friendly programming interface to fit into existing workflows. There is a strong need for a comprehensive toolbox for massively parallel neural signal processing. A new toolbox for massively parallel neural signal processing has been created. It can offer significant speedup in processing signals from large-scale recordings up to thousands of channels. Copyright © 2018 Elsevier B.V. All rights reserved.

  20. CO Component Estimation Based on the Independent Component Analysis

    NASA Astrophysics Data System (ADS)

    Ichiki, Kiyotomo; Kaji, Ryohei; Yamamoto, Hiroaki; Takeuchi, Tsutomu T.; Fukui, Yasuo

    2014-01-01

    Fast Independent Component Analysis (FastICA) is a component separation algorithm based on the levels of non-Gaussianity. Here we apply FastICA to the component separation problem of the microwave background, including carbon monoxide (CO) line emissions that are found to contaminate the PLANCK High Frequency Instrument (HFI) data. Specifically, we prepare 100 GHz, 143 GHz, and 217 GHz mock microwave sky maps, which include galactic thermal dust, NANTEN CO line, and the cosmic microwave background (CMB) emissions, and then estimate the independent components based on the kurtosis. We find that FastICA can successfully estimate the CO component as the first independent component in our deflection algorithm because its distribution has the largest degree of non-Gaussianity among the components. Thus, FastICA can be a promising technique to extract CO-like components without prior assumptions about their distributions and frequency dependences.

  1. Continuous development of schemes for parallel computing of the electrostatics in biological systems: implementation in DelPhi.

    PubMed

    Li, Chuan; Petukh, Marharyta; Li, Lin; Alexov, Emil

    2013-08-15

    Due to the enormous importance of electrostatics in molecular biology, calculating the electrostatic potential and corresponding energies has become a standard computational approach for the study of biomolecules and nano-objects immersed in water and salt phase or other media. However, the electrostatics of large macromolecules and macromolecular complexes, including nano-objects, may not be obtainable via explicit methods and even the standard continuum electrostatics methods may not be applicable due to high computational time and memory requirements. Here, we report further development of the parallelization scheme reported in our previous work (Li, et al., J. Comput. Chem. 2012, 33, 1960) to include parallelization of the molecular surface and energy calculations components of the algorithm. The parallelization scheme utilizes different approaches such as space domain parallelization, algorithmic parallelization, multithreading, and task scheduling, depending on the quantity being calculated. This allows for efficient use of the computing resources of the corresponding computer cluster. The parallelization scheme is implemented in the popular software DelPhi and results in speedup of several folds. As a demonstration of the efficiency and capability of this methodology, the electrostatic potential, and electric field distributions are calculated for the bovine mitochondrial supercomplex illustrating their complex topology, which cannot be obtained by modeling the supercomplex components alone. Copyright © 2013 Wiley Periodicals, Inc.

  2. Conformational states and folding pathways of peptides revealed by principal-independent component analyses.

    PubMed

    Nguyen, Phuong H

    2007-05-15

    Principal component analysis is a powerful method for projecting multidimensional conformational space of peptides or proteins onto lower dimensional subspaces in which the main conformations are present, making it easier to reveal the structures of molecules from e.g. molecular dynamics simulation trajectories. However, the identification of all conformational states is still difficult if the subspaces consist of more than two dimensions. This is mainly due to the fact that the principal components are not independent with each other, and states in the subspaces cannot be visualized. In this work, we propose a simple and fast scheme that allows one to obtain all conformational states in the subspaces. The basic idea is that instead of directly identifying the states in the subspace spanned by principal components, we first transform this subspace into another subspace formed by components that are independent of one other. These independent components are obtained from the principal components by employing the independent component analysis method. Because of independence between components, all states in this new subspace are defined as all possible combinations of the states obtained from each single independent component. This makes the conformational analysis much simpler. We test the performance of the method by analyzing the conformations of the glycine tripeptide and the alanine hexapeptide. The analyses show that our method is simple and quickly reveal all conformational states in the subspaces. The folding pathways between the identified states of the alanine hexapeptide are analyzed and discussed in some detail. 2007 Wiley-Liss, Inc.

  3. Re-Examining Dissociations between Remembering and Knowing: Binary Judgments vs. Independent Ratings

    ERIC Educational Resources Information Center

    Brown, Aaron A.; Bodner, Glen E.

    2011-01-01

    When participants must classify their recognition experiences as remembering or knowing, variables often have dissociative effects on the two judgments. In contrast, when participants independently rate recollection "and" familiarity only parallel effects have been reported. To investigate this discrepancy we compared the effects of masked priming…

  4. Promoting Quality and Variety through the Public Financing of Privately Operated Schools in Qatar

    ERIC Educational Resources Information Center

    Constant, Louay; Goldman, Charles A.; Zellman, Gail L.; Augustine, Catherine H.; Galama, Titus; Gonzalez, Gabriella; Guarino, C. A.; Karam, Rita; Ryan, Gery W.; Salem, Hanine

    2010-01-01

    In 2002, Qatar began establishing publicly funded, privately operated "independent schools" in parallel with the existing, centralized Ministry of Education system. The reform that drove the establishment of the independent schools included accountability provisions such as (a) measuring school and student performance and (b)…

  5. Parallel pulse processing and data acquisition for high speed, low error flow cytometry

    DOEpatents

    van den Engh, Gerrit J.; Stokdijk, Willem

    1992-01-01

    A digitally synchronized parallel pulse processing and data acquisition system for a flow cytometer has multiple parallel input channels with independent pulse digitization and FIFO storage buffer. A trigger circuit controls the pulse digitization on all channels. After an event has been stored in each FIFO, a bus controller moves the oldest entry from each FIFO buffer onto a common data bus. The trigger circuit generates an ID number for each FIFO entry, which is checked by an error detection circuit. The system has high speed and low error rate.

  6. Parallel transmit beamforming using orthogonal frequency division multiplexing applied to harmonic imaging--a feasibility study.

    PubMed

    Demi, Libertario; Verweij, Martin D; Van Dongen, Koen W A

    2012-11-01

    Real-time 2-D or 3-D ultrasound imaging systems are currently used for medical diagnosis. To achieve the required data acquisition rate, these systems rely on parallel beamforming, i.e., a single wide-angled beam is used for transmission and several narrow parallel beams are used for reception. When applied to harmonic imaging, the demand for high-amplitude pressure wave fields, necessary to generate the harmonic components, conflicts with the use of a wide-angled beam in transmission because this results in a large spatial decay of the acoustic pressure. To enhance the amplitude of the harmonics, it is preferable to do the reverse: transmit several narrow parallel beams and use a wide-angled beam in reception. Here, this concept is investigated to determine whether it can be used for harmonic imaging. The method proposed in this paper relies on orthogonal frequency division multiplexing (OFDM), which is used to create distinctive parallel beams in transmission. To test the proposed method, a numerical study has been performed, in which the transmit, receive, and combined beam profiles generated by a linear array have been simulated for the second-harmonic component. Compared with standard parallel beamforming, application of the proposed technique results in a gain of 12 dB for the main beam and in a reduction of the side lobes. Experimental verification in water has also been performed. Measurements obtained with a single-element emitting transducer and a hydrophone receiver confirm the possibility of exciting a practical ultrasound transducer with multiple Gaussian modulated pulses, each having a different center frequency, and the capability to generate distinguishable second-harmonic components.

  7. Implementing and analyzing the multi-threaded LP-inference

    NASA Astrophysics Data System (ADS)

    Bolotova, S. Yu; Trofimenko, E. V.; Leschinskaya, M. V.

    2018-03-01

    The logical production equations provide new possibilities for the backward inference optimization in intelligent production-type systems. The strategy of a relevant backward inference is aimed at minimization of a number of queries to external information source (either to a database or an interactive user). The idea of the method is based on the computing of initial preimages set and searching for the true preimage. The execution of each stage can be organized independently and in parallel and the actual work at a given stage can also be distributed between parallel computers. This paper is devoted to the parallel algorithms of the relevant inference based on the advanced scheme of the parallel computations “pipeline” which allows to increase the degree of parallelism. The author also provides some details of the LP-structures implementation.

  8. Integrated Network Decompositions and Dynamic Programming for Graph Optimization (INDDGO)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    The INDDGO software package offers a set of tools for finding exact solutions to graph optimization problems via tree decompositions and dynamic programming algorithms. Currently the framework offers serial and parallel (distributed memory) algorithms for finding tree decompositions and solving the maximum weighted independent set problem. The parallel dynamic programming algorithm is implemented on top of the MADNESS task-based runtime.

  9. Trajectories of disposable income among people of working ages diagnosed with multiple sclerosis: a nationwide register-based cohort study in Sweden 7 years before to 4 years after diagnosis with a population-based reference group.

    PubMed

    Murley, Chantelle; Mogard, Olof; Wiberg, Michael; Alexanderson, Kristina; Karampampa, Korinna; Friberg, Emilie; Tinghög, Petter

    2018-05-09

    To describe how disposable income (DI) and three main components changed, and analyse whether DI development differed from working-aged people with multiple sclerosis (MS) to a reference group from 7 years before to 4 years after diagnosis in Sweden. Population-based cohort study, 12-year follow-up (7 years before to 4 years after diagnosis). Swedish working-age population with microdata linked from two nationwide registers. Residents diagnosed with MS in 2009 aged 25-59 years (n=785), and references without MS (n=7847) randomly selected with stratified matching (sex, age, education and country of birth). DI was defined as the annual after tax sum of incomes (earnings and benefits) to measure individual economic welfare. Three main components of DI were analysed as annual sums: earnings, sickness absence benefits and disability pension benefits. We found no differences in mean annual DI between people with and without MS by independent t-tests (p values between 0.15 and 0.96). Differences were found for all studied components of DI from diagnosis year by independent t-tests, for example, in the final study year (2013): earnings (-64 867 Swedish Krona (SEK); 95% CI-79 203 to -50 528); sickness absence benefits (13 330 SEK; 95% CI 10 042 to 16 500); and disability pension benefits (21 360 SEK; 95% CI 17 380 to 25 350). A generalised estimating equation evaluated DI trajectory development between people with and without MS to find both trajectories developed in parallel, both before (-4039 SEK; 95% CI -10 536 to 2458) and after (-781 SEK; 95% CI -6988 to 5360) diagnosis. The key finding of parallel DI trajectory development between working-aged MS and references suggests minimal economic impact within the first 4 years of diagnosis. The Swedish welfare system was responsive to the observed reductions in earnings around MS diagnosis through balancing DI with morbidity-related benefits. Future decreases in economic welfare may be experienced as the disease progresses, although thorough investigation with future studies of modern cohorts are required. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  10. Parallel auto-correlative statistics with VTK.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pebay, Philippe Pierre; Bennett, Janine Camille

    2013-08-01

    This report summarizes existing statistical engines in VTK and presents both the serial and parallel auto-correlative statistics engines. It is a sequel to [PT08, BPRT09b, PT09, BPT09, PT10] which studied the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, and order statistics engines. The ease of use of the new parallel auto-correlative statistics engine is illustrated by the means of C++ code snippets and algorithm verification is provided. This report justifies the design of the statistics engines with parallel scalability in mind, and provides scalability and speed-up analysis results for the autocorrelative statistics engine.

  11. Isolation of n-decyl-alpha(1-->6) isomaltoside from a technical APG mixture and its identification by the parallel use of LC-MS and NMR spectroscopy

    PubMed

    Billian; Hock; Doetzer; Stan; Dreher

    2000-10-15

    The identification of n-decyl alpha(1-->6)isomaltoside as a main component of technical alkyl polyglucoside (APG) mixtures by the parallel use of liquid chromatography-mass spectrometry (LC-MS) and nuclear magnetic resonance (NMR) spectroscopy is described. Following enrichment on a styrene-divinylbenzene-based solid-phase extraction material, unknown components were separated by reversed-phase liquid chromatography (LC). Chemical characterization was achieved by both mass spectrometry and NMR spectroscopy. It is demonstrated that the combination of LC-MS with various NMR techniques is very suitable for stereochemical assignment of unknown components in technical APG mixtures.

  12. Three-Component Reaction Discovery Enabled by Mass Spectrometry of Self-Assembled Monolayers

    PubMed Central

    Montavon, Timothy J.; Li, Jing; Cabrera-Pardo, Jaime R.; Mrksich, Milan; Kozmin, Sergey A.

    2011-01-01

    Multi-component reactions have been extensively employed in many areas of organic chemistry. Despite significant progress, the discovery of such enabling transformations remains challenging. Here, we present the development of a parallel, label-free reaction-discovery platform, which can be used for identification of new multi-component transformations. Our approach is based on the parallel mass spectrometric screening of interfacial chemical reactions on arrays of self-assembled monolayers. This strategy enabled the identification of a simple organic phosphine that can catalyze a previously unknown condensation of siloxy alkynes, aldehydes and amines to produce 3-hydroxy amides with high efficiency and diastereoselectivity. The reaction was further optimized using solution phase methods. PMID:22169871

  13. Measuring Multiple Resistances Using Single-Point Excitation

    NASA Technical Reports Server (NTRS)

    Hall, Dan; Davies, Frank

    2009-01-01

    In a proposed method of determining the resistances of individual DC electrical devices connected in a series or parallel string, no attempt would be made to perform direct measurements on individual devices. Instead, (1) the devices would be instrumented by connecting reactive circuit components in parallel and/or in series with the devices, as appropriate; (2) a pulse or AC voltage excitation would be applied at a single point on the string; and (3) the transient or AC steady-state current response of the string would be measured at that point only. Each reactive component(s) associated with each device would be distinct in order to associate a unique time-dependent response with that device.

  14. A parallel variable metric optimization algorithm

    NASA Technical Reports Server (NTRS)

    Straeter, T. A.

    1973-01-01

    An algorithm, designed to exploit the parallel computing or vector streaming (pipeline) capabilities of computers is presented. When p is the degree of parallelism, then one cycle of the parallel variable metric algorithm is defined as follows: first, the function and its gradient are computed in parallel at p different values of the independent variable; then the metric is modified by p rank-one corrections; and finally, a single univariant minimization is carried out in the Newton-like direction. Several properties of this algorithm are established. The convergence of the iterates to the solution is proved for a quadratic functional on a real separable Hilbert space. For a finite-dimensional space the convergence is in one cycle when p equals the dimension of the space. Results of numerical experiments indicate that the new algorithm will exploit parallel or pipeline computing capabilities to effect faster convergence than serial techniques.

  15. An information-theoretic approach to motor action decoding with a reconfigurable parallel architecture.

    PubMed

    Craciun, Stefan; Brockmeier, Austin J; George, Alan D; Lam, Herman; Príncipe, José C

    2011-01-01

    Methods for decoding movements from neural spike counts using adaptive filters often rely on minimizing the mean-squared error. However, for non-Gaussian distribution of errors, this approach is not optimal for performance. Therefore, rather than using probabilistic modeling, we propose an alternate non-parametric approach. In order to extract more structure from the input signal (neuronal spike counts) we propose using minimum error entropy (MEE), an information-theoretic approach that minimizes the error entropy as part of an iterative cost function. However, the disadvantage of using MEE as the cost function for adaptive filters is the increase in computational complexity. In this paper we present a comparison between the decoding performance of the analytic Wiener filter and a linear filter trained with MEE, which is then mapped to a parallel architecture in reconfigurable hardware tailored to the computational needs of the MEE filter. We observe considerable speedup from the hardware design. The adaptation of filter weights for the multiple-input, multiple-output linear filters, necessary in motor decoding, is a highly parallelizable algorithm. It can be decomposed into many independent computational blocks with a parallel architecture readily mapped to a field-programmable gate array (FPGA) and scales to large numbers of neurons. By pipelining and parallelizing independent computations in the algorithm, the proposed parallel architecture has sublinear increases in execution time with respect to both window size and filter order.

  16. PARP1-mediated necrosis is dependent on parallel JNK and Ca2+/calpain pathways

    PubMed Central

    Douglas, Diana L.; Baines, Christopher P.

    2014-01-01

    ABSTRACT Poly(ADP-ribose) polymerase-1 (PARP1) is a nuclear enzyme that can trigger caspase-independent necrosis. Two main mechanisms for this have been proposed: one involving RIP1 and JNK kinases and mitochondrial permeability transition (MPT), the other involving calpain-mediated activation of Bax and mitochondrial release of apoptosis-inducing factor (AIF). However, whether these two mechanisms represent distinct pathways for PARP1-induced necrosis, or whether they are simply different components of the same pathway has yet to be tested. Mouse embryonic fibroblasts (MEFs) were treated with either N-methyl-N′-nitro-N-nitrosoguanidine (MNNG) or β-Lapachone, resulting in PARP1-dependent necrosis. This was associated with increases in calpain activity, JNK activation and AIF translocation. JNK inhibition significantly reduced MNNG- and β-Lapachone-induced JNK activation, AIF translocation, and necrosis, but not calpain activation. In contrast, inhibition of calpain either by Ca2+ chelation or knockdown attenuated necrosis, but did not affect JNK activation or AIF translocation. To our surprise, genetic and/or pharmacological inhibition of RIP1, AIF, Bax and the MPT pore failed to abrogate MNNG- and β-Lapachone-induced necrosis. In conclusion, although JNK and calpain both contribute to PARP1-induced necrosis, they do so via parallel mechanisms. PMID:25052090

  17. Device-independent parallel self-testing of two singlets

    NASA Astrophysics Data System (ADS)

    Wu, Xingyao; Bancal, Jean-Daniel; McKague, Matthew; Scarani, Valerio

    2016-06-01

    Device-independent self-testing offers the possibility of certifying the quantum state and measurements, up to local isometries, using only the statistics observed by querying uncharacterized local devices. In this paper we study parallel self-testing of two maximally entangled pairs of qubits; in particular, the local tensor product structure is not assumed but derived. We prove two criteria that achieve the desired result: a double use of the Clauser-Horne-Shimony-Holt inequality and the 3 ×3 magic square game. This demonstrate that the magic square game can only be perfectly won by measuring a two-singlet state. The tolerance to noise is well within reach of state-of-the-art experiments.

  18. Structural modeling of carbonaceous mesophase amphotropic mixtures under uniaxial extensional flow.

    PubMed

    Golmohammadi, Mojdeh; Rey, Alejandro D

    2010-07-21

    The extended Maier-Saupe model for binary mixtures of model carbonaceous mesophases (uniaxial discotic nematogens) under externally imposed flow, formulated in previous studies [M. Golmohammadi and A. D. Rey, Liquid Crystals 36, 75 (2009); M. Golmohammadi and A. D. Rey, Entropy 10, 183 (2008)], is used to characterize the effect of uniaxial extensional flow and concentration on phase behavior and structure of these mesogenic blends. The generic thermorheological phase diagram of the single-phase binary mixture, given in terms of temperature (T) and Deborah (De) number, shows the existence of four T-De transition lines that define regions that correspond to the following quadrupolar tensor order parameter structures: (i) oblate (perpendicular, parallel), (ii) prolate (perpendicular, parallel), (iii) scalene O(perpendicular, parallel), and (iv) scalene P(perpendicular, parallel), where the symbols (perpendicular, parallel) indicate alignment of the tensor order ellipsoid with respect to the extension axis. It is found that with increasing T the dominant component of the mixture exhibits weak deviations from the well-known pure species response to uniaxial extensional flow (uniaxial perpendicular nematic-->biaxial nematic-->uniaxial parallel paranematic). In contrast, the slaved component shows a strong deviation from the pure species response. This deviation is dictated by the asymmetric viscoelastic coupling effects emanating from the dominant component. Changes in conformation (oblate <==> prolate) and orientation (perpendicular <==> parallel) are effected through changes in pairs of eigenvalues of the quadrupolar tensor order parameter. The complexity of the structural sensitivity to temperature and extensional flow is a reflection of the dual lyotropic/thermotropic nature (amphotropic nature) of the mixture and their cooperation/competition. The analysis demonstrates that the simple structures (biaxial nematic and uniaxial paranematic) observed in pure discotic mesogens under uniaxial extensional flow are significantly enriched by the interaction of the lyotropic/thermotropic competition with the binary molecular architectures and with the quadrupolar nature of the flow.

  19. Independent component analysis for automatic note extraction from musical trills

    NASA Astrophysics Data System (ADS)

    Brown, Judith C.; Smaragdis, Paris

    2004-05-01

    The method of principal component analysis, which is based on second-order statistics (or linear independence), has long been used for redundancy reduction of audio data. The more recent technique of independent component analysis, enforcing much stricter statistical criteria based on higher-order statistical independence, is introduced and shown to be far superior in separating independent musical sources. This theory has been applied to piano trills and a database of trill rates was assembled from experiments with a computer-driven piano, recordings of a professional pianist, and commercially available compact disks. The method of independent component analysis has thus been shown to be an outstanding, effective means of automatically extracting interesting musical information from a sea of redundant data.

  20. The Fight Deck Perspective of the NASA Langley AILS Concept

    NASA Technical Reports Server (NTRS)

    Rine, Laura L.; Abbott, Terence S.; Lohr, Gary W.; Elliott, Dawn M.; Waller, Marvin C.; Perry, R. Brad

    2000-01-01

    Many US airports depend on parallel runway operations to meet the growing demand for day to day operations. In the current airspace system, Instrument Meteorological Conditions (IMC) reduce the capacity of close parallel runway operations; that is, runways spaced closer than 4300 ft. These capacity losses can result in landing delays causing inconveniences to the traveling public, interruptions in commerce, and increased operating costs to the airlines. This document presents the flight deck perspective component of the Airborne Information for Lateral Spacing (AILS) approaches to close parallel runways in IMC. It represents the ideas the NASA Langley Research Center (LaRC) AILS Development Team envisions to integrate a number of components and procedures into a workable system for conducting close parallel runway approaches. An initial documentation of the aspects of this concept was sponsored by LaRC and completed in 1996. Since that time a number of the aspects have evolved to a more mature state. This paper is an update of the earlier documentation.

  1. Reliability models applicable to space telescope solar array assembly system

    NASA Technical Reports Server (NTRS)

    Patil, S. A.

    1986-01-01

    A complex system may consist of a number of subsystems with several components in series, parallel, or combination of both series and parallel. In order to predict how well the system will perform, it is necessary to know the reliabilities of the subsystems and the reliability of the whole system. The objective of the present study is to develop mathematical models of the reliability which are applicable to complex systems. The models are determined by assuming k failures out of n components in a subsystem. By taking k = 1 and k = n, these models reduce to parallel and series models; hence, the models can be specialized to parallel, series combination systems. The models are developed by assuming the failure rates of the components as functions of time and as such, can be applied to processes with or without aging effects. The reliability models are further specialized to Space Telescope Solar Arrray (STSA) System. The STSA consists of 20 identical solar panel assemblies (SPA's). The reliabilities of the SPA's are determined by the reliabilities of solar cell strings, interconnects, and diodes. The estimates of the reliability of the system for one to five years are calculated by using the reliability estimates of solar cells and interconnects given n ESA documents. Aging effects in relation to breaks in interconnects are discussed.

  2. Performance Characteristics of the Multi-Zone NAS Parallel Benchmarks

    NASA Technical Reports Server (NTRS)

    Jin, Haoqiang; VanderWijngaart, Rob F.

    2003-01-01

    We describe a new suite of computational benchmarks that models applications featuring multiple levels of parallelism. Such parallelism is often available in realistic flow computations on systems of grids, but had not previously been captured in bench-marks. The new suite, named NPB Multi-Zone, is extended from the NAS Parallel Benchmarks suite, and involves solving the application benchmarks LU, BT and SP on collections of loosely coupled discretization meshes. The solutions on the meshes are updated independently, but after each time step they exchange boundary value information. This strategy provides relatively easily exploitable coarse-grain parallelism between meshes. Three reference implementations are available: one serial, one hybrid using the Message Passing Interface (MPI) and OpenMP, and another hybrid using a shared memory multi-level programming model (SMP+OpenMP). We examine the effectiveness of hybrid parallelization paradigms in these implementations on three different parallel computers. We also use an empirical formula to investigate the performance characteristics of the multi-zone benchmarks.

  3. Parallel arms races between garter snakes and newts involving tetrodotoxin as the phenotypic interface of coevolution.

    PubMed

    Brodie, Edmund D; Feldman, Chris R; Hanifin, Charles T; Motychak, Jeffrey E; Mulcahy, Daniel G; Williams, Becky L; Brodie, Edmund D

    2005-02-01

    Parallel "arms races" involving the same or similar phenotypic interfaces allow inference about selective forces driving coevolution, as well as the importance of phylogenetic and phenotypic constraints in coevolution. Here, we report the existence of apparent parallel arms races between species pairs of garter snakes and their toxic newt prey that indicate independent evolutionary origins of a key phenotype in the interface. In at least one area of sympatry, the aquatic garter snake, Thamnophis couchii, has evolved elevated resistance to the neurotoxin tetrodotoxin (TTX), present in the newt Taricha torosa. Previous studies have shown that a distantly related garter snake, Thamnophis sirtalis, has coevolved with another newt species that possesses TTX, Taricha granulosa. Patterns of within population variation and phenotypic tradeoffs between TTX resistance and sprint speed suggest that the mechanism of resistance is similar in both species of snake, yet phylogenetic evidence indicates the independent origins of elevated resistance to TTX.

  4. Automated three-component synthesis of a library of γ-lactams

    PubMed Central

    Fenster, Erik; Hill, David; Reiser, Oliver

    2012-01-01

    Summary A three-component method for the synthesis of γ-lactams from commercially available maleimides, aldehydes, and amines was adapted to parallel library synthesis. Improvements to the chemistry over previous efforts include the optimization of the method to a one-pot process, the management of by-products and excess reagents, the development of an automated parallel sequence, and the adaption of the method to permit the preparation of enantiomerically enriched products. These efforts culminated in the preparation of a library of 169 γ-lactams. PMID:23209515

  5. Fifteen years of PARAFAC application to organic matter fluorescence - progress, problems and possibilities

    NASA Astrophysics Data System (ADS)

    Murphy, K.; Stedmon, C. A.; Wunsch, U.

    2017-12-01

    The study of dissolved organic matter in aquatic milieu frequently involves measuring and interpreting fluorescence excitation emission matrices (EEMs) as a proxy for studying the total organic matter pool. Parallel Factor Analysis (PARAFAC) is used widely to identify and track independent organic matter fractions. This approach assumes that each EEM reflects the combined fluorescence signal from a limited number of unique, non-interacting chemical components, which are determined via a fitting algorithm. During the past fifteen years, considerable progress in understanding dissolved organic matter fluorescence has been achieved with the aid of PARAFAC; however, very few identical or ubiquitous fluorescence spectra have been independently identified. We studied the influence of wavelength selection on PARAFAC models and found this factor to have a decisive impact on PARAFAC spectra despite receiving little attention in most studies. Because large, chemically-diverse datasets may be too complex to analyse with PARAFAC, we are exploring novel methods for increasing variability in small datasets in order to reduce biases and increase interpretability. Our results suggest that spectral variability in PARAFAC models between studies are in many cases due to artefacts that could be minimised by careful experimental and modelling approaches.

  6. Evolution of central pattern generators and rhythmic behaviours

    PubMed Central

    Katz, Paul S.

    2016-01-01

    Comparisons of rhythmic movements and the central pattern generators (CPGs) that control them uncover principles about the evolution of behaviour and neural circuits. Over the course of evolutionary history, gradual evolution of behaviours and their neural circuitry within any lineage of animals has been a predominant occurrence. Small changes in gene regulation can lead to divergence of circuit organization and corresponding changes in behaviour. However, some behavioural divergence has resulted from large-scale rewiring of the neural network. Divergence of CPG circuits has also occurred without a corresponding change in behaviour. When analogous rhythmic behaviours have evolved independently, it has generally been with different neural mechanisms. Repeated evolution of particular rhythmic behaviours has occurred within some lineages due to parallel evolution or latent CPGs. Particular motor pattern generating mechanisms have also evolved independently in separate lineages. The evolution of CPGs and rhythmic behaviours shows that although most behaviours and neural circuits are highly conserved, the nature of the behaviour does not dictate the neural mechanism and that the presence of homologous neural components does not determine the behaviour. This suggests that although behaviour is generated by neural circuits, natural selection can act separately on these two levels of biological organization. PMID:26598733

  7. Evolution of central pattern generators and rhythmic behaviours.

    PubMed

    Katz, Paul S

    2016-01-05

    Comparisons of rhythmic movements and the central pattern generators (CPGs) that control them uncover principles about the evolution of behaviour and neural circuits. Over the course of evolutionary history, gradual evolution of behaviours and their neural circuitry within any lineage of animals has been a predominant occurrence. Small changes in gene regulation can lead to divergence of circuit organization and corresponding changes in behaviour. However, some behavioural divergence has resulted from large-scale rewiring of the neural network. Divergence of CPG circuits has also occurred without a corresponding change in behaviour. When analogous rhythmic behaviours have evolved independently, it has generally been with different neural mechanisms. Repeated evolution of particular rhythmic behaviours has occurred within some lineages due to parallel evolution or latent CPGs. Particular motor pattern generating mechanisms have also evolved independently in separate lineages. The evolution of CPGs and rhythmic behaviours shows that although most behaviours and neural circuits are highly conserved, the nature of the behaviour does not dictate the neural mechanism and that the presence of homologous neural components does not determine the behaviour. This suggests that although behaviour is generated by neural circuits, natural selection can act separately on these two levels of biological organization. © 2015 The Author(s).

  8. Parallel processing using an optical delay-based reservoir computer

    NASA Astrophysics Data System (ADS)

    Van der Sande, Guy; Nguimdo, Romain Modeste; Verschaffelt, Guy

    2016-04-01

    Delay systems subject to delayed optical feedback have recently shown great potential in solving computationally hard tasks. By implementing a neuro-inspired computational scheme relying on the transient response to optical data injection, high processing speeds have been demonstrated. However, reservoir computing systems based on delay dynamics discussed in the literature are designed by coupling many different stand-alone components which lead to bulky, lack of long-term stability, non-monolithic systems. Here we numerically investigate the possibility of implementing reservoir computing schemes based on semiconductor ring lasers. Semiconductor ring lasers are semiconductor lasers where the laser cavity consists of a ring-shaped waveguide. SRLs are highly integrable and scalable, making them ideal candidates for key components in photonic integrated circuits. SRLs can generate light in two counterpropagating directions between which bistability has been demonstrated. We demonstrate that two independent machine learning tasks , even with different nature of inputs with different input data signals can be simultaneously computed using a single photonic nonlinear node relying on the parallelism offered by photonics. We illustrate the performance on simultaneous chaotic time series prediction and a classification of the Nonlinear Channel Equalization. We take advantage of different directional modes to process individual tasks. Each directional mode processes one individual task to mitigate possible crosstalk between the tasks. Our results indicate that prediction/classification with errors comparable to the state-of-the-art performance can be obtained even with noise despite the two tasks being computed simultaneously. We also find that a good performance is obtained for both tasks for a broad range of the parameters. The results are discussed in detail in [Nguimdo et al., IEEE Trans. Neural Netw. Learn. Syst. 26, pp. 3301-3307, 2015

  9. Influence of Segmentation of Ring-Shaped NdFeB Magnets with Parallel Magnetization on Cylindrical Actuators

    PubMed Central

    Eckert, Paulo Roberto; Goltz, Evandro Claiton; Filho, Aly Ferreira Flores

    2014-01-01

    This work analyses the effects of segmentation followed by parallel magnetization of ring-shaped NdFeB permanent magnets used in slotless cylindrical linear actuators. The main purpose of the work is to evaluate the effects of that segmentation on the performance of the actuator and to present a general overview of the influence of parallel magnetization by varying the number of segments and comparing the results with ideal radially magnetized rings. The analysis is first performed by modelling mathematically the radial and circumferential components of magnetization for both radial and parallel magnetizations, followed by an analysis carried out by means of the 3D finite element method. Results obtained from the models are validated by measuring radial and tangential components of magnetic flux distribution in the air gap on a prototype which employs magnet rings with eight segments each with parallel magnetization. The axial force produced by the actuator was also measured and compared with the results obtained from numerical models. Although this analysis focused on a specific topology of cylindrical actuator, the observed effects on the topology could be extended to others in which surface-mounted permanent magnets are employed, including rotating electrical machines. PMID:25051032

  10. Influence of segmentation of ring-shaped NdFeB magnets with parallel magnetization on cylindrical actuators.

    PubMed

    Eckert, Paulo Roberto; Goltz, Evandro Claiton; Flores Filho, Aly Ferreira

    2014-07-21

    This work analyses the effects of segmentation followed by parallel magnetization of ring-shaped NdFeB permanent magnets used in slotless cylindrical linear actuators. The main purpose of the work is to evaluate the effects of that segmentation on the performance of the actuator and to present a general overview of the influence of parallel magnetization by varying the number of segments and comparing the results with ideal radially magnetized rings. The analysis is first performed by modelling mathematically the radial and circumferential components of magnetization for both radial and parallel magnetizations, followed by an analysis carried out by means of the 3D finite element method. Results obtained from the models are validated by measuring radial and tangential components of magnetic flux distribution in the air gap on a prototype which employs magnet rings with eight segments each with parallel magnetization. The axial force produced by the actuator was also measured and compared with the results obtained from numerical models. Although this analysis focused on a specific topology of cylindrical actuator, the observed effects on the topology could be extended to others in which surface-mounted permanent magnets are employed, including rotating electrical machines.

  11. Variants of Independence in the Perception of Facial Identity and Expression

    ERIC Educational Resources Information Center

    Fitousi, Daniel; Wenger, Michael J.

    2013-01-01

    A prominent theory in the face perception literature--the parallel-route hypothesis (Bruce & Young, 1986)--assumes a dedicated channel for the processing of identity that is separate and independent from the channel(s) in which nonidentity information is processed (e.g., expression, eye gaze). The current work subjected this assumption to…

  12. A divide and conquer approach to the nonsymmetric eigenvalue problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jessup, E.R.

    1991-01-01

    Serial computation combined with high communication costs on distributed-memory multiprocessors make parallel implementations of the QR method for the nonsymmetric eigenvalue problem inefficient. This paper introduces an alternative algorithm for the nonsymmetric tridiagonal eigenvalue problem based on rank two tearing and updating of the matrix. The parallelism of this divide and conquer approach stems from independent solution of the updating problems. 11 refs.

  13. Effective switching frequency multiplier inverter

    DOEpatents

    Su, Gui-Jia [Oak Ridge, TN; Peng, Fang Z [Okemos, MI

    2007-08-07

    A switching frequency multiplier inverter for low inductance machines that uses parallel connection of switches and each switch is independently controlled according to a pulse width modulation scheme. The effective switching frequency is multiplied by the number of switches connected in parallel while each individual switch operates within its limit of switching frequency. This technique can also be used for other power converters such as DC/DC, AC/DC converters.

  14. Reconfigurable Model Execution in the OpenMDAO Framework

    NASA Technical Reports Server (NTRS)

    Hwang, John T.

    2017-01-01

    NASA's OpenMDAO framework facilitates constructing complex models and computing their derivatives for multidisciplinary design optimization. Decomposing a model into components that follow a prescribed interface enables OpenMDAO to assemble multidisciplinary derivatives from the component derivatives using what amounts to the adjoint method, direct method, chain rule, global sensitivity equations, or any combination thereof, using the MAUD architecture. OpenMDAO also handles the distribution of processors among the disciplines by hierarchically grouping the components, and it automates the data transfer between components that are on different processors. These features have made OpenMDAO useful for applications in aircraft design, satellite design, wind turbine design, and aircraft engine design, among others. This paper presents new algorithms for OpenMDAO that enable reconfigurable model execution. This concept refers to dynamically changing, during execution, one or more of: the variable sizes, solution algorithm, parallel load balancing, or set of variables-i.e., adding and removing components, perhaps to switch to a higher-fidelity sub-model. Any component can reconfigure at any point, even when running in parallel with other components, and the reconfiguration algorithm presented here performs the synchronized updates to all other components that are affected. A reconfigurable software framework for multidisciplinary design optimization enables new adaptive solvers, adaptive parallelization, and new applications such as gradient-based optimization with overset flow solvers and adaptive mesh refinement. Benchmarking results demonstrate the time savings for reconfiguration compared to setting up the model again from scratch, which can be significant in large-scale problems. Additionally, the new reconfigurability feature is applied to a mission profile optimization problem for commercial aircraft where both the parametrization of the mission profile and the time discretization are adaptively refined, resulting in computational savings of roughly 10% and the elimination of oscillations in the optimized altitude profile.

  15. HPSEC reveals ubiquitous components in fluorescent dissolved organic matter across aquatic ecosystems

    NASA Astrophysics Data System (ADS)

    Wünsch, Urban; Murphy, Kathleen; Stedmon, Colin

    2017-04-01

    Absorbance and fluorescence spectroscopy are efficient tools for tracing the supply, turnover and fate of dissolved organic matter (DOM). The fluorescent fraction of DOM (FDOM) can be characterized by measuring excitation-emission matrices and decomposing the combined fluorescence signal into independent underlying fraction using Parallel Factor Analysis (PARAFAC). Comparisons between studies, facilitated by the OpenFluor database, reveal highly similar components across different aquatic systems and between studies. To obtain PARAFAC models in sufficient quality, scientists traditionally rely on analyzing dozens to hundreds of samples spanning environmental gradients. A cross-validation of this approach using different analytical tools has not yet been accomplished. In this study, we applied high-performance size-exclusion chromatography (HPSEC) to characterize the size-dependent optical properties of dissolved organic matter of samples from contrasting aquatic environments with online absorbance and fluorescence detectors. Each sample produced hundreds of absorbance spectra of colored DOM (CDOM) and hundreds of matrices of FDOM intensities. This approach facilitated the detailed study of CDOM spectral slopes and further allowed the reliable implementation of PARAFAC on individual samples. This revealed a high degree of overlap in the spectral properties of components identified from different sites. Moreover, many of the model components showed significant spectral congruence with spectra in the OpenFluor database. Our results provide evidence of the presence of ubiquitous FDOM components and additionally provide further evidence for the supramolecular assembly hypothesis. They demonstrate the potential for HPSEC to provide a wealth of new insights into the relationship between optical and chemical properties of DOM.

  16. F-Nets and Software Cabling: Deriving a Formal Model and Language for Portable Parallel Programming

    NASA Technical Reports Server (NTRS)

    DiNucci, David C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    Parallel programming is still being based upon antiquated sequence-based definitions of the terms "algorithm" and "computation", resulting in programs which are architecture dependent and difficult to design and analyze. By focusing on obstacles inherent in existing practice, a more portable model is derived here, which is then formalized into a model called Soviets which utilizes a combination of imperative and functional styles. This formalization suggests more general notions of algorithm and computation, as well as insights into the meaning of structured programming in a parallel setting. To illustrate how these principles can be applied, a very-high-level graphical architecture-independent parallel language, called Software Cabling, is described, with many of the features normally expected from today's computer languages (e.g. data abstraction, data parallelism, and object-based programming constructs).

  17. A comparative study of serial and parallel aeroelastic computations of wings

    NASA Technical Reports Server (NTRS)

    Byun, Chansup; Guruswamy, Guru P.

    1994-01-01

    A procedure for computing the aeroelasticity of wings on parallel multiple-instruction, multiple-data (MIMD) computers is presented. In this procedure, fluids are modeled using Euler equations, and structures are modeled using modal or finite element equations. The procedure is designed in such a way that each discipline can be developed and maintained independently by using a domain decomposition approach. In the present parallel procedure, each computational domain is scalable. A parallel integration scheme is used to compute aeroelastic responses by solving fluid and structural equations concurrently. The computational efficiency issues of parallel integration of both fluid and structural equations are investigated in detail. This approach, which reduces the total computational time by a factor of almost 2, is demonstrated for a typical aeroelastic wing by using various numbers of processors on the Intel iPSC/860.

  18. An efficient parallel algorithm for matrix-vector multiplication

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hendrickson, B.; Leland, R.; Plimpton, S.

    The multiplication of a vector by a matrix is the kernel computation of many algorithms in scientific computation. A fast parallel algorithm for this calculation is therefore necessary if one is to make full use of the new generation of parallel supercomputers. This paper presents a high performance, parallel matrix-vector multiplication algorithm that is particularly well suited to hypercube multiprocessors. For an n x n matrix on p processors, the communication cost of this algorithm is O(n/[radical]p + log(p)), independent of the matrix sparsity pattern. The performance of the algorithm is demonstrated by employing it as the kernel in themore » well-known NAS conjugate gradient benchmark, where a run time of 6.09 seconds was observed. This is the best published performance on this benchmark achieved to date using a massively parallel supercomputer.« less

  19. Hybrid massively parallel fast sweeping method for static Hamilton-Jacobi equations

    NASA Astrophysics Data System (ADS)

    Detrixhe, Miles; Gibou, Frédéric

    2016-10-01

    The fast sweeping method is a popular algorithm for solving a variety of static Hamilton-Jacobi equations. Fast sweeping algorithms for parallel computing have been developed, but are severely limited. In this work, we present a multilevel, hybrid parallel algorithm that combines the desirable traits of two distinct parallel methods. The fine and coarse grained components of the algorithm take advantage of heterogeneous computer architecture common in high performance computing facilities. We present the algorithm and demonstrate its effectiveness on a set of example problems including optimal control, dynamic games, and seismic wave propagation. We give results for convergence, parallel scaling, and show state-of-the-art speedup values for the fast sweeping method.

  20. Parallel computation using boundary elements in solid mechanics

    NASA Technical Reports Server (NTRS)

    Chien, L. S.; Sun, C. T.

    1990-01-01

    The inherent parallelism of the boundary element method is shown. The boundary element is formulated by assuming the linear variation of displacements and tractions within a line element. Moreover, MACSYMA symbolic program is employed to obtain the analytical results for influence coefficients. Three computational components are parallelized in this method to show the speedup and efficiency in computation. The global coefficient matrix is first formed concurrently. Then, the parallel Gaussian elimination solution scheme is applied to solve the resulting system of equations. Finally, and more importantly, the domain solutions of a given boundary value problem are calculated simultaneously. The linear speedups and high efficiencies are shown for solving a demonstrated problem on Sequent Symmetry S81 parallel computing system.

  1. A Bold Step Forward: Juxtaposition of the Constructivist and Freeschooling Learning Model

    ERIC Educational Resources Information Center

    Chiatul, Victoria Oliaku

    2015-01-01

    This article discusses the juxtaposition of learning within the parallel structure of the constructivist and freeschooling models of education. To begin, characteristics describing the constructivist-learning model are provided, followed by a summary of the major components of the freeschooling-learning model. Finally, the parallel structure…

  2. Multifaceted Genomic Risk for Brain Function in Schizophrenia

    PubMed Central

    Chen, Jiayu; Calhoun, Vince D.; Pearlson, Godfrey D.; Ehrlich, Stefan; Turner, Jessica A.; Ho, Beng-Choon; Wassink, Thomas H.; Michael, Andrew M; Liu, Jingyu

    2012-01-01

    Recently, deriving candidate endophenotypes from brain imaging data has become a valuable approach to study genetic influences on schizophrenia (SZ), whose pathophysiology remains unclear. In this work we utilized a multivariate approach, parallel independent component analysis, to identify genomic risk components associated with brain function abnormalities in SZ. 5157 candidate single nucleotide polymorphisms (SNPs) were derived from genome-wide array based on their possible connections with SZ and further investigated for their associations with brain activations captured with functional magnetic resonance imaging (fMRI) during a sensorimotor task. Using data from 92 SZ patients and 116 healthy controls, we detected a significant correlation (r= 0.29; p= 2.41×10−5) between one fMRI component and one SNP component, both of which significantly differentiated patients from controls. The fMRI component mainly consisted of precentral and postcentral gyri, the major activated regions in the motor task. On average, higher activation in these regions was observed in participants with higher loadings of the linked SNP component, predominantly contributed to by 253 SNPs. 138 identified SNPs were from known coding regions of 100 unique genes. 31 identified SNPs did not differ between groups, but moderately correlated with some other group-discriminating SNPs, indicating interactions among alleles contributing towards elevated SZ susceptibility. The genes associated with the identified SNPs participated in four neurotransmitter pathways: GABA receptor signaling, dopamine receptor signaling, neuregulin signaling and glutamate receptor signaling. In summary, our work provides further evidence for the complexity of genomic risk to the functional brain abnormality in SZ and suggests a pathological role of interactions between SNPs, genes and multiple neurotransmitter pathways. PMID:22440650

  3. Improving model biases in an ESM with an isopycnic ocean component by accounting for wind work on oceanic near-inertial motions.

    NASA Astrophysics Data System (ADS)

    de Wet, P. D.; Bentsen, M.; Bethke, I.

    2016-02-01

    It is well-known that, when comparing climatological parameters such as ocean temperature and salinity to the output of an Earth System Model (ESM), the model exhibits biases. In ESMs with an isopycnic ocean component, such as NorESM, insufficient vertical mixing is thought to be one of the causes of such differences between observational and model data. However, enhancing the vertical mixing of the model's ocean component not only requires increasing the energy input, but also sound physical reasoning for doing so. Various authors have shown that the action of atmospheric winds on the ocean's surface is a major source of energy input into the upper ocean. However, due to model and computational constraints, oceanic processes linked to surface winds are incompletely accounted for. Consequently, despite significantly contributing to the energy required to maintain ocean stratification, most ESMs do not directly make provision for this energy. In this study we investigate the implementation of a routine in which the energy from work done on oceanic near-inertial motions is calculated in an offline slab model. The slab model, which has been well-documented in the literature, runs parallel to but independently from the ESM's ocean component. It receives wind fields with a frequency higher than that of the coupling frequency, allowing it to capture the fluctuations in the winds on shorter time scales. The additional energy calculated thus is then passed to the ocean component, avoiding the need for increased coupling between the components of the ESM. Results show localised reduction in, amongst others, the salinity and temperature biases of NorESM, confirming model sensitivity to wind-forcing and points to the need for better representation of surface processes in ESMs.

  4. Multimodel Kalman filtering for adaptive nonuniformity correction in infrared sensors.

    PubMed

    Pezoa, Jorge E; Hayat, Majeed M; Torres, Sergio N; Rahman, Md Saifur

    2006-06-01

    We present an adaptive technique for the estimation of nonuniformity parameters of infrared focal-plane arrays that is robust with respect to changes and uncertainties in scene and sensor characteristics. The proposed algorithm is based on using a bank of Kalman filters in parallel. Each filter independently estimates state variables comprising the gain and the bias matrices of the sensor, according to its own dynamic-model parameters. The supervising component of the algorithm then generates the final estimates of the state variables by forming a weighted superposition of all the estimates rendered by each Kalman filter. The weights are computed and updated iteratively, according to the a posteriori-likelihood principle. The performance of the estimator and its ability to compensate for fixed-pattern noise is tested using both simulated and real data obtained from two cameras operating in the mid- and long-wave infrared regime.

  5. Sluggish vagal brake reactivity to physical exercise challenge in children with selective mutism.

    PubMed

    Heilman, Keri J; Connolly, Sucheta D; Padilla, Wendy O; Wrzosek, Marika I; Graczyk, Patricia A; Porges, Stephen W

    2012-02-01

    Cardiovascular response patterns to laboratory-based social and physical exercise challenges were evaluated in 69 children and adolescents, 20 with selective mutism (SM), to identify possible neurophysiological mechanisms that may mediate the behavioral features of SM. Results suggest that SM is associated with a dampened response of the vagal brake to physical exercise that is manifested as reduced reactivity in heart rate and respiration. Polyvagal theory proposes that the regulation of the vagal brake is a neurophysiological component of an integrated social engagement system that includes the neural regulation of the laryngeal and pharyngeal muscles. Within this theoretical framework, sluggish vagal brake reactivity may parallel an inability to recruit efficiently the structures involved in speech. Thus, the findings suggest that dampened autonomic reactivity during mobilization behaviors may be a biomarker of SM that can be assessed independent of the social stimuli that elicit mutism.

  6. Small-scale anisotropic intermittency in magnetohydrodynamic turbulence at low magnetic Reynolds numbers.

    PubMed

    Okamoto, Naoya; Yoshimatsu, Katsunori; Schneider, Kai; Farge, Marie

    2014-03-01

    Small-scale anisotropic intermittency is examined in three-dimensional incompressible magnetohydrodynamic turbulence subjected to a uniformly imposed magnetic field. Orthonormal wavelet analyses are applied to direct numerical simulation data at moderate Reynolds number and for different interaction parameters. The magnetic Reynolds number is sufficiently low such that the quasistatic approximation can be applied. Scale-dependent statistical measures are introduced to quantify anisotropy in terms of the flow components, either parallel or perpendicular to the imposed magnetic field, and in terms of the different directions. Moreover, the flow intermittency is shown to increase with increasing values of the interaction parameter, which is reflected in strongly growing flatness values when the scale decreases. The scale-dependent anisotropy of energy is found to be independent of scale for all considered values of the interaction parameter. The strength of the imposed magnetic field does amplify the anisotropy of the flow.

  7. Life sciences flight experiments microcomputer

    NASA Technical Reports Server (NTRS)

    Bartram, Peter N.

    1987-01-01

    A promising microcomputer configuration for the Spacelab Life Sciences Lab. Equipment inventory consists of multiple processors. One processor's use is reserved, with additional processors dedicated to real time input and output operations. A simple form of such a configuration, with a processor board for analog to digital conversion and another processor board for digital to analog conversion, was studied. The system used digital parallel data lines between the boards, operating independently of the system bus. Good performance of individual components was demonstrated: the analog to digital converter was at over 10,000 samples per second. The combination of the data transfer between boards with the input or output functions on each board slowed performance, with a maximum throughput of 2800 to 2900 analog samples per second. Any of several techniques, such as use of the system bus for data transfer or the addition of direct memory access hardware to the processor boards, should give significantly improved performance.

  8. Parallel evolution of the make–accumulate–consume strategy in Saccharomyces and Dekkera yeasts

    PubMed Central

    Rozpędowska, Elżbieta; Hellborg, Linda; Ishchuk, Olena P.; Orhan, Furkan; Galafassi, Silvia; Merico, Annamaria; Woolfit, Megan; Compagno, Concetta; Piškur, Jure

    2011-01-01

    Saccharomyces yeasts degrade sugars to two-carbon components, in particular ethanol, even in the presence of excess oxygen. This characteristic is called the Crabtree effect and is the background for the 'make–accumulate–consume' life strategy, which in natural habitats helps Saccharomyces yeasts to out-compete other microorganisms. A global promoter rewiring in the Saccharomyces cerevisiae lineage, which occurred around 100 mya, was one of the main molecular events providing the background for evolution of this strategy. Here we show that the Dekkera bruxellensis lineage, which separated from the Saccharomyces yeasts more than 200 mya, also efficiently makes, accumulates and consumes ethanol and acetic acid. Analysis of promoter sequences indicates that both lineages independently underwent a massive loss of a specific cis-regulatory element from dozens of genes associated with respiration, and we show that also in D. bruxellensis this promoter rewiring contributes to the observed Crabtree effect. PMID:21556056

  9. Scandium(III) catalysis of transimination reactions. Independent and constitutionally coupled reversible processes.

    PubMed

    Giuseppone, Nicolas; Schmitt, Jean-Louis; Schwartz, Evan; Lehn, Jean-Marie

    2005-04-20

    Sc(OTf)(3) efficiently catalyzes the self-sufficient transimination reaction between various types of C=N bonds in organic solvents, with turnover frequencies up to 3600 h(-)(1) and rate accelerations up to 6 x 10(5). The mechanism of the crossover reaction in mixtures of amines and imines is studied, comparing parallel individual reactions with coupled equilibria. The intrinsic kinetic parameters for isolated reactions cannot simply be added up when several components are mixed, and the behavior of the system agrees with the presence of a unique mediator that constitutes the core of a network of competing reactions. In mixed systems, every single amine or imine competes for the same central hub, in accordance with their binding affinity for the catalyst metal ion center. More generally, the study extends the basic principles of constitutional dynamic chemistry to interconnected chemical transformations and provides a step toward dynamic systems of increasing complexity.

  10. Parallel Optical Random Access Memory (PORAM)

    NASA Technical Reports Server (NTRS)

    Alphonse, G. A.

    1989-01-01

    It is shown that the need to minimize component count, power and size, and to maximize packing density require a parallel optical random access memory to be designed in a two-level hierarchy: a modular level and an interconnect level. Three module designs are proposed, in the order of research and development requirements. The first uses state-of-the-art components, including individually addressed laser diode arrays, acousto-optic (AO) deflectors and magneto-optic (MO) storage medium, aimed at moderate size, moderate power, and high packing density. The next design level uses an electron-trapping (ET) medium to reduce optical power requirements. The third design uses a beam-steering grating surface emitter (GSE) array to reduce size further and minimize the number of components.

  11. Detection of independent functional networks during music listening using electroencephalogram and sLORETA-ICA.

    PubMed

    Jäncke, Lutz; Alahmadi, Nsreen

    2016-04-13

    The measurement of brain activation during music listening is a topic that is attracting increased attention from many researchers. Because of their high spatial accuracy, functional MRI measurements are often used for measuring brain activation in the context of music listening. However, this technique faces the issues of contaminating scanner noise and an uncomfortable experimental environment. Electroencephalogram (EEG), however, is a neural registration technique that allows the measurement of neurophysiological activation in silent and more comfortable experimental environments. Thus, it is optimal for recording brain activations during pleasant music stimulation. Using a new mathematical approach to calculate intracortical independent components (sLORETA-IC) on the basis of scalp-recorded EEG, we identified specific intracortical independent components during listening of a musical piece and scales, which differ substantially from intracortical independent components calculated from the resting state EEG. Most intracortical independent components are located bilaterally in perisylvian brain areas known to be involved in auditory processing and specifically in music perception. Some intracortical independent components differ between the music and scale listening conditions. The most prominent difference is found in the anterior part of the perisylvian brain region, with stronger activations seen in the left-sided anterior perisylvian regions during music listening, most likely indicating semantic processing during music listening. A further finding is that the intracortical independent components obtained for the music and scale listening are most prominent in higher frequency bands (e.g. beta-2 and beta-3), whereas the resting state intracortical independent components are active in lower frequency bands (alpha-1 and theta). This new technique for calculating intracortical independent components is able to differentiate independent neural networks associated with music and scale listening. Thus, this tool offers new opportunities for studying neural activations during music listening using the silent and more convenient EEG technology.

  12. Parallel-plate heat pipe apparatus having a shaped wick structure

    DOEpatents

    Rightley, Michael J.; Adkins, Douglas R.; Mulhall, James J.; Robino, Charles V.; Reece, Mark; Smith, Paul M.; Tigges, Chris P.

    2004-12-07

    A parallel-plate heat pipe is disclosed that utilizes a plurality of evaporator regions at locations where heat sources (e.g. semiconductor chips) are to be provided. A plurality of curvilinear capillary grooves are formed on one or both major inner surfaces of the heat pipe to provide an independent flow of a liquid working fluid to the evaporator regions to optimize heat removal from different-size heat sources and to mitigate the possibility of heat-source shadowing. The parallel-plate heat pipe has applications for heat removal from high-density microelectronics and laptop computers.

  13. On the impact of communication complexity in the design of parallel numerical algorithms

    NASA Technical Reports Server (NTRS)

    Gannon, D.; Vanrosendale, J.

    1984-01-01

    This paper describes two models of the cost of data movement in parallel numerical algorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In the second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm independent upper bounds on system performance are derived for several problems that are important to scientific computation.

  14. Parallel pulse processing and data acquisition for high speed, low error flow cytometry

    DOEpatents

    Engh, G.J. van den; Stokdijk, W.

    1992-09-22

    A digitally synchronized parallel pulse processing and data acquisition system for a flow cytometer has multiple parallel input channels with independent pulse digitization and FIFO storage buffer. A trigger circuit controls the pulse digitization on all channels. After an event has been stored in each FIFO, a bus controller moves the oldest entry from each FIFO buffer onto a common data bus. The trigger circuit generates an ID number for each FIFO entry, which is checked by an error detection circuit. The system has high speed and low error rate. 17 figs.

  15. Methods for operating parallel computing systems employing sequenced communications

    DOEpatents

    Benner, R.E.; Gustafson, J.L.; Montry, G.R.

    1999-08-10

    A parallel computing system and method are disclosed having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes with the computing system. 15 figs.

  16. Methods for operating parallel computing systems employing sequenced communications

    DOEpatents

    Benner, Robert E.; Gustafson, John L.; Montry, Gary R.

    1999-01-01

    A parallel computing system and method having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes with the computing system.

  17. On the impact of communication complexity on the design of parallel numerical algorithms

    NASA Technical Reports Server (NTRS)

    Gannon, D. B.; Van Rosendale, J.

    1984-01-01

    This paper describes two models of the cost of data movement in parallel numerical alorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In this second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm-independent upper bounds on system performance are derived for several problems that are important to scientific computation.

  18. Parallel architecture for rapid image generation and analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nerheim, R.J.

    1987-01-01

    A multiprocessor architecture inspired by the Disney multiplane camera is proposed. For many applications, this approach produces a natural mapping of processors to objects in a scene. Such a mapping promotes parallelism and reduces the hidden-surface work with minimal interprocessor communication and low-overhead cost. Existing graphics architectures store the final picture as a monolithic entity. The architecture here stores each object's image separately. It assembles the final composite picture from component images only when the video display needs to be refreshed. This organization simplifies the work required to animate moving objects that occlude other objects. In addition, the architecture hasmore » multiple processors that generate the component images in parallel. This further shortens the time needed to create a composite picture. In addition to generating images for animation, the architecture has the ability to decompose images.« less

  19. Line-drawing algorithms for parallel machines

    NASA Technical Reports Server (NTRS)

    Pang, Alex T.

    1990-01-01

    The fact that conventional line-drawing algorithms, when applied directly on parallel machines, can lead to very inefficient codes is addressed. It is suggested that instead of modifying an existing algorithm for a parallel machine, a more efficient implementation can be produced by going back to the invariants in the definition. Popular line-drawing algorithms are compared with two alternatives; distance to a line (a point is on the line if sufficiently close to it) and intersection with a line (a point on the line if an intersection point). For massively parallel single-instruction-multiple-data (SIMD) machines (with thousands of processors and up), the alternatives provide viable line-drawing algorithms. Because of the pixel-per-processor mapping, their performance is independent of the line length and orientation.

  20. An improved independent component analysis model for 3D chromatogram separation and its solution by multi-areas genetic algorithm

    PubMed Central

    2014-01-01

    Background The 3D chromatogram generated by High Performance Liquid Chromatography-Diode Array Detector (HPLC-DAD) has been researched widely in the field of herbal medicine, grape wine, agriculture, petroleum and so on. Currently, most of the methods used for separating a 3D chromatogram need to know the compounds' number in advance, which could be impossible especially when the compounds are complex or white noise exist. New method which extracts compounds from 3D chromatogram directly is needed. Methods In this paper, a new separation model named parallel Independent Component Analysis constrained by Reference Curve (pICARC) was proposed to transform the separation problem to a multi-parameter optimization issue. It was not necessary to know the number of compounds in the optimization. In order to find all the solutions, an algorithm named multi-areas Genetic Algorithm (mGA) was proposed, where multiple areas of candidate solutions were constructed according to the fitness and distances among the chromosomes. Results Simulations and experiments on a real life HPLC-DAD data set were used to demonstrate our method and its effectiveness. Through simulations, it can be seen that our method can separate 3D chromatogram to chromatogram peaks and spectra successfully even when they severely overlapped. It is also shown by the experiments that our method is effective to solve real HPLC-DAD data set. Conclusions Our method can separate 3D chromatogram successfully without knowing the compounds' number in advance, which is fast and effective. PMID:25474487

  1. Human cell structure-driven model construction for predicting protein subcellular location from biological images.

    PubMed

    Shao, Wei; Liu, Mingxia; Zhang, Daoqiang

    2016-01-01

    The systematic study of subcellular location pattern is very important for fully characterizing the human proteome. Nowadays, with the great advances in automated microscopic imaging, accurate bioimage-based classification methods to predict protein subcellular locations are highly desired. All existing models were constructed on the independent parallel hypothesis, where the cellular component classes are positioned independently in a multi-class classification engine. The important structural information of cellular compartments is missed. To deal with this problem for developing more accurate models, we proposed a novel cell structure-driven classifier construction approach (SC-PSorter) by employing the prior biological structural information in the learning model. Specifically, the structural relationship among the cellular components is reflected by a new codeword matrix under the error correcting output coding framework. Then, we construct multiple SC-PSorter-based classifiers corresponding to the columns of the error correcting output coding codeword matrix using a multi-kernel support vector machine classification approach. Finally, we perform the classifier ensemble by combining those multiple SC-PSorter-based classifiers via majority voting. We evaluate our method on a collection of 1636 immunohistochemistry images from the Human Protein Atlas database. The experimental results show that our method achieves an overall accuracy of 89.0%, which is 6.4% higher than the state-of-the-art method. The dataset and code can be downloaded from https://github.com/shaoweinuaa/. dqzhang@nuaa.edu.cn Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  2. Alterations in memory networks in mild cognitive impairment and Alzheimer's disease: an independent component analysis.

    PubMed

    Celone, Kim A; Calhoun, Vince D; Dickerson, Bradford C; Atri, Alireza; Chua, Elizabeth F; Miller, Saul L; DePeau, Kristina; Rentz, Doreen M; Selkoe, Dennis J; Blacker, Deborah; Albert, Marilyn S; Sperling, Reisa A

    2006-10-04

    Memory function is likely subserved by multiple distributed neural networks, which are disrupted by the pathophysiological process of Alzheimer's disease (AD). In this study, we used multivariate analytic techniques to investigate memory-related functional magnetic resonance imaging (fMRI) activity in 52 individuals across the continuum of normal aging, mild cognitive impairment (MCI), and mild AD. Independent component analyses revealed specific memory-related networks that activated or deactivated during an associative memory paradigm. Across all subjects, hippocampal activation and parietal deactivation demonstrated a strong reciprocal relationship. Furthermore, we found evidence of a nonlinear trajectory of fMRI activation across the continuum of impairment. Less impaired MCI subjects showed paradoxical hyperactivation in the hippocampus compared with controls, whereas more impaired MCI subjects demonstrated significant hypoactivation, similar to the levels observed in the mild AD subjects. We found a remarkably parallel curve in the pattern of memory-related deactivation in medial and lateral parietal regions with greater deactivation in less-impaired MCI and loss of deactivation in more impaired MCI and mild AD subjects. Interestingly, the failure of deactivation in these regions was also associated with increased positive activity in a neocortical attentional network in MCI and AD. Our findings suggest that loss of functional integrity of the hippocampal-based memory systems is directly related to alterations of neural activity in parietal regions seen over the course of MCI and AD. These data may also provide functional evidence of the interaction between neocortical and medial temporal lobe pathology in early AD.

  3. Storm Time Evolution of Outer Radiation Belt Relativistic Electrons by a Nearly Continuous Distribution of Chorus

    NASA Astrophysics Data System (ADS)

    Yang, Chang; Xiao, Fuliang; He, Yihua; Liu, Si; Zhou, Qinghua; Guo, Mingyue; Zhao, Wanli

    2018-03-01

    During the 13-14 November 2012 storm, Van Allen Probe A simultaneously observed a 10 h period of enhanced chorus (including quasi-parallel and oblique propagation components) and relativistic electron fluxes over a broad range of L = 3-6 and magnetic local time = 2-10 within a complete orbit cycle. By adopting a Gaussian fit to the observed wave spectra, we obtain the wave parameters and calculate the bounce-averaged diffusion coefficients. We solve the Fokker-Planck diffusion equation to simulate flux evolutions of relativistic (1.8-4.2 MeV) electrons during two intervals when Probe A passed the location L = 4.3 along its orbit. The simulating results show that chorus with combined quasi-parallel and oblique components can produce a more pronounced flux enhancement in the pitch angle range ˜45°-80°, consistent well with the observation. The current results provide the first evidence on how relativistic electron fluxes vary under the drive of almost continuously distributed chorus with both quasi-parallel and oblique components within a complete orbit of Van Allen Probe.

  4. Configuring compute nodes of a parallel computer in an operational group into a plurality of independent non-overlapping collective networks

    DOEpatents

    Archer, Charles J.; Inglett, Todd A.; Ratterman, Joseph D.; Smith, Brian E.

    2010-03-02

    Methods, apparatus, and products are disclosed for configuring compute nodes of a parallel computer in an operational group into a plurality of independent non-overlapping collective networks, the compute nodes in the operational group connected together for data communications through a global combining network, that include: partitioning the compute nodes in the operational group into a plurality of non-overlapping subgroups; designating one compute node from each of the non-overlapping subgroups as a master node; and assigning, to the compute nodes in each of the non-overlapping subgroups, class routing instructions that organize the compute nodes in that non-overlapping subgroup as a collective network such that the master node is a physical root.

  5. Rotation of EOFs by the Independent Component Analysis: Towards A Solution of the Mixing Problem in the Decomposition of Geophysical Time Series

    NASA Technical Reports Server (NTRS)

    Aires, Filipe; Rossow, William B.; Chedin, Alain; Hansen, James E. (Technical Monitor)

    2001-01-01

    The Independent Component Analysis is a recently developed technique for component extraction. This new method requires the statistical independence of the extracted components, a stronger constraint that uses higher-order statistics, instead of the classical decorrelation, a weaker constraint that uses only second-order statistics. This technique has been used recently for the analysis of geophysical time series with the goal of investigating the causes of variability in observed data (i.e. exploratory approach). We demonstrate with a data simulation experiment that, if initialized with a Principal Component Analysis, the Independent Component Analysis performs a rotation of the classical PCA (or EOF) solution. This rotation uses no localization criterion like other Rotation Techniques (RT), only the global generalization of decorrelation by statistical independence is used. This rotation of the PCA solution seems to be able to solve the tendency of PCA to mix several physical phenomena, even when the signal is just their linear sum.

  6. Independent component analysis decomposition of hospital emergency department throughput measures

    NASA Astrophysics Data System (ADS)

    He, Qiang; Chu, Henry

    2016-05-01

    We present a method adapted from medical sensor data analysis, viz. independent component analysis of electroencephalography data, to health system analysis. Timely and effective care in a hospital emergency department is measured by throughput measures such as median times patients spent before they were admitted as an inpatient, before they were sent home, before they were seen by a healthcare professional. We consider a set of five such measures collected at 3,086 hospitals distributed across the U.S. One model of the performance of an emergency department is that these correlated throughput measures are linear combinations of some underlying sources. The independent component analysis decomposition of the data set can thus be viewed as transforming a set of performance measures collected at a site to a collection of outputs of spatial filters applied to the whole multi-measure data. We compare the independent component sources with the output of the conventional principal component analysis to show that the independent components are more suitable for understanding the data sets through visualizations.

  7. Development of an active food packaging system with antioxidant properties based on green tea extract.

    PubMed

    Carrizo, Daniel; Gullo, Giuseppe; Bosetti, Osvaldo; Nerín, Cristina

    2014-01-01

    A formula including green tea extract (GTE) was developed as an active food packaging material. This formula was moulded to obtain an independent component/device with antioxidant properties that could be easily coupled to industrial degassing valves for food packaging in special cases. GTE components (i.e., gallic acid, catechins and caffeine) were identified and quantified by HPLC-UV and UPLC-MS and migration/diffusion studies were carried out. Antioxidant properties of the formula alone and formula-valve were measured with static and dynamic methods. The results showed that the antioxidant capacity (scavenging of free radicals) of the new GTE formula was 40% higher than the non-active system (blank). This antioxidant activity increased in parallel with the GTE concentration. The functional properties of the industrial target valve (e.g., flexibility) were studied for different mixtures of GTE, and good results were found with 17% (w/w) of GTE. This new active formula can be an important addition for active packaging applications in the food packaging industry, with oxidative species-scavenging capacity, thus improving the safety and quality for the consumer and extending the shelf-life of the packaged food.

  8. Drosophila Learn Opposing Components of a Compound Food Stimulus

    PubMed Central

    Das, Gaurav; Klappenbach, Martín; Vrontou, Eleftheria; Perisse, Emmanuel; Clark, Christopher M.; Burke, Christopher J.; Waddell, Scott

    2014-01-01

    Summary Dopaminergic neurons provide value signals in mammals and insects [1–3]. During Drosophila olfactory learning, distinct subsets of dopaminergic neurons appear to assign either positive or negative value to odor representations in mushroom body neurons [4–9]. However, it is not known how flies evaluate substances that have mixed valence. Here we show that flies form short-lived aversive olfactory memories when trained with odors and sugars that are contaminated with the common insect repellent DEET. This DEET-aversive learning required the MB-MP1 dopaminergic neurons that are also required for shock learning [7]. Moreover, differential conditioning with DEET versus shock suggests that formation of these distinct aversive olfactory memories relies on a common negatively reinforcing dopaminergic mechanism. Surprisingly, as time passed after training, the behavior of DEET-sugar-trained flies reversed from conditioned odor avoidance into odor approach. In addition, flies that were compromised for reward learning exhibited a more robust and longer-lived aversive-DEET memory. These data demonstrate that flies independently process the DEET and sugar components to form parallel aversive and appetitive olfactory memories, with distinct kinetics, that compete to guide learned behavior. PMID:25042590

  9. Fungal Wound Infection (Not Colonization) Is Independently Associated With Mortality in Burn Patients

    DTIC Science & Technology

    2007-06-01

    morphology (pres- ence of parallel-walled, branching, septate hyphae ); (b) Mu- cor-like morphology (zygomycosis/mucormycosis: presence of wide...ribbon-like, rarely septate hyphae ); or (c) yeast-like morphology (presence of budding yeasts or rounded yeast-like structures). FWI was defined as...54) FWC and FWI Patients Pooled (n 175) Aspergillus-like morphology: presence of parallel-walled, branching, septate hyphae 94 (77.7%) 51 (94.4

  10. Method and means for measuring the anisotropy of a plasma in a magnetic field

    DOEpatents

    Shohet, J.L.; Greene, D.G.S.

    1973-10-23

    Anisotropy is measured of a free-free-bremsstrahlungradiation-generating plasma in a magnetic field by collimating the free-free bremsstrahlung radiation in a direction normal to the magnetic field and scattering the collimated free- free bremsstrahlung radiation to resolve the radiation into its vector components in a plane parallel to the electric field of the bremsstrahlung radiation. The scattered vector components are counted at particular energy levels in a direction parallel to the magnetic field and also normal to the magnetic field of the plasma to provide a measure of anisotropy of the plasma. (Official Gazette)

  11. Flexible All-Digital Receiver for Bandwidth Efficient Modulations

    NASA Technical Reports Server (NTRS)

    Gray, Andrew; Srinivasan, Meera; Simon, Marvin; Yan, Tsun-Yee

    2000-01-01

    An all-digital high data rate parallel receiver architecture developed jointly by Goddard Space Flight Center and the Jet Propulsion Laboratory is presented. This receiver utilizes only a small number of high speed components along with a majority of lower speed components operating in a parallel frequency domain structure implementable in CMOS, and can currently process up to 600 Mbps with standard QPSK modulation. Performance results for this receiver for bandwidth efficient QPSK modulation schemes such as square-root raised cosine pulse shaped QPSK and Feher's patented QPSK are presented, demonstrating the flexibility of the receiver architecture.

  12. Full 3D Analysis of the GE90 Turbofan Primary Flowpath

    NASA Technical Reports Server (NTRS)

    Turner, Mark G.

    2000-01-01

    The multistage simulations of the GE90 turbofan primary flowpath components have been performed. The multistage CFD code, APNASA, has been used to analyze the fan, fan OGV and booster, the 10-stage high-pressure compressor and the entire turbine system of the GE90 turbofan engine. The code has two levels of parallel, and for the 18 blade row full turbine simulation has 87.3 percent parallel efficiency with 121 processors on an SGI ORIGIN. Grid generation is accomplished with the multistage Average Passage Grid Generator, APG. Results for each component are shown which compare favorably with test data.

  13. Speciation and Sources of Brown Carbon in Precipitation at Seoul, Korea: Insights from Excitation-Emission Matrix Spectroscopy and Carbon Isotopic Analysis.

    PubMed

    Yan, Ge; Kim, Guebuem

    2017-10-17

    Brown carbon (BrC) plays a significant role in the Earth's radiative balance, yet its sources and chemical composition remain poorly understood. In this work, we investigated BrC in the atmospheric environment of Seoul by characterizing dissolved organic matter in precipitation using excitation-emission matrix (EEM) fluorescence spectroscopy coupled with parallel factor analysis (PARAFAC). The two independent fluorescent components identified by PARAFAC were attributed to humic-like substance (HULIS) and biologically derived material based on their significant correlations with measured HULIS isolated using solid-phase extraction and total hydrolyzable tyrosine. The year-long observation shows that HULIS contributes to 66 ± 13% of total fluorescence intensity of our samples on average. By using dual carbon ( 13 C and 14 C) isotopic analysis conducted on isolated HULIS, the HULIS fraction of BrC was found to be primarily derived from biomass burning and emission of terrestrial biogenic gases and particles (>70%), with minor contributions from fossil-fuel combustion. The knowledge derived from this study could contribute to the establishment of a characterizing system of BrC components identified by EEM spectroscopy. Our work demonstrates that, EEM fluorescence spectroscopy is a powerful tool in BrC study, on the basis of its chromophore resolving power, allowing investigation into individual components of BrC by other organic matter characterization techniques.

  14. Lattice Boltzmann formulation for conjugate heat transfer in heterogeneous media.

    PubMed

    Karani, Hamid; Huber, Christian

    2015-02-01

    In this paper, we propose an approach for studying conjugate heat transfer using the lattice Boltzmann method (LBM). The approach is based on reformulating the lattice Boltzmann equation for solving the conservative form of the energy equation. This leads to the appearance of a source term, which introduces the jump conditions at the interface between two phases or components with different thermal properties. The proposed source term formulation conserves conductive and advective heat flux simultaneously, which makes it suitable for modeling conjugate heat transfer in general multiphase or multicomponent systems. The simple implementation of the source term approach avoids any correction of distribution functions neighboring the interface and provides an algorithm that is independent from the topology of the interface. Moreover, our approach is independent of the choice of lattice discretization and can be easily applied to different advection-diffusion LBM solvers. The model is tested against several benchmark problems including steady-state convection-diffusion within two fluid layers with parallel and normal interfaces with respect to the flow direction, unsteady conduction in a three-layer stratified domain, and steady conduction in a two-layer annulus. The LBM results are in excellent agreement with analytical solution. Error analysis shows that our model is first-order accurate in space, but an extension to a second-order scheme is straightforward. We apply our LBM model to heat transfer in a two-component heterogeneous medium with a random microstructure. This example highlights that the method we propose is independent of the topology of interfaces between the different phases and, as such, is ideally suited for complex natural heterogeneous media. We further validate the present LBM formulation with a study of natural convection in a porous enclosure. The results confirm the reliability of the model in simulating complex coupled fluid and thermal dynamics in complex geometries.

  15. Kinetic and energy production analysis of pyrolysis of lignocellulosic biomass using a three-parallel Gaussian reaction model.

    PubMed

    Chen, Tianju; Zhang, Jinzhi; Wu, Jinhu

    2016-07-01

    The kinetic and energy productions of pyrolysis of a lignocellulosic biomass were investigated using a three-parallel Gaussian distribution method in this work. The pyrolysis experiment of the pine sawdust was performed using a thermogravimetric-mass spectroscopy (TG-MS) analyzer. A three-parallel Gaussian distributed activation energy model (DAEM)-reaction model was used to describe thermal decomposition behaviors of the three components, hemicellulose, cellulose and lignin. The first, second and third pseudocomponents represent the fractions of hemicellulose, cellulose and lignin, respectively. It was found that the model is capable of predicting the pyrolysis behavior of the pine sawdust. The activation energy distribution peaks for the three pseudo-components were centered at 186.8, 197.5 and 203.9kJmol(-1) for the pine sawdust, respectively. The evolution profiles of H2, CH4, CO, and CO2 were well predicted using the three-parallel Gaussian distribution model. In addition, the chemical composition of bio-oil was also obtained by pyrolysis-gas chromatography/mass spectrometry instrument (Py-GC/MS). Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Operation of high power converters in parallel

    NASA Technical Reports Server (NTRS)

    Decker, D. K.; Inouye, L. Y.

    1993-01-01

    High power converters that are used in space power subsystems are limited in power handling capability due to component and thermal limitations. For applications, such as Space Station Freedom, where multi-kilowatts of power must be delivered to user loads, parallel operation of converters becomes an attractive option when considering overall power subsystem topologies. TRW developed three different unequal power sharing approaches for parallel operation of converters. These approaches, known as droop, master-slave, and proportional adjustment, are discussed and test results are presented.

  17. Multirate-based fast parallel algorithms for 2-D DHT-based real-valued discrete Gabor transform.

    PubMed

    Tao, Liang; Kwan, Hon Keung

    2012-07-01

    Novel algorithms for the multirate and fast parallel implementation of the 2-D discrete Hartley transform (DHT)-based real-valued discrete Gabor transform (RDGT) and its inverse transform are presented in this paper. A 2-D multirate-based analysis convolver bank is designed for the 2-D RDGT, and a 2-D multirate-based synthesis convolver bank is designed for the 2-D inverse RDGT. The parallel channels in each of the two convolver banks have a unified structure and can apply the 2-D fast DHT algorithm to speed up their computations. The computational complexity of each parallel channel is low and is independent of the Gabor oversampling rate. All the 2-D RDGT coefficients of an image are computed in parallel during the analysis process and can be reconstructed in parallel during the synthesis process. The computational complexity and time of the proposed parallel algorithms are analyzed and compared with those of the existing fastest algorithms for 2-D discrete Gabor transforms. The results indicate that the proposed algorithms are the fastest, which make them attractive for real-time image processing.

  18. Bilingual parallel programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foster, I.; Overbeek, R.

    1990-01-01

    Numerous experiments have demonstrated that computationally intensive algorithms support adequate parallelism to exploit the potential of large parallel machines. Yet successful parallel implementations of serious applications are rare. The limiting factor is clearly programming technology. None of the approaches to parallel programming that have been proposed to date -- whether parallelizing compilers, language extensions, or new concurrent languages -- seem to adequately address the central problems of portability, expressiveness, efficiency, and compatibility with existing software. In this paper, we advocate an alternative approach to parallel programming based on what we call bilingual programming. We present evidence that this approach providesmore » and effective solution to parallel programming problems. The key idea in bilingual programming is to construct the upper levels of applications in a high-level language while coding selected low-level components in low-level languages. This approach permits the advantages of a high-level notation (expressiveness, elegance, conciseness) to be obtained without the cost in performance normally associated with high-level approaches. In addition, it provides a natural framework for reusing existing code.« less

  19. Hybrid massively parallel fast sweeping method for static Hamilton–Jacobi equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Detrixhe, Miles, E-mail: mdetrixhe@engineering.ucsb.edu; University of California Santa Barbara, Santa Barbara, CA, 93106; Gibou, Frédéric, E-mail: fgibou@engineering.ucsb.edu

    The fast sweeping method is a popular algorithm for solving a variety of static Hamilton–Jacobi equations. Fast sweeping algorithms for parallel computing have been developed, but are severely limited. In this work, we present a multilevel, hybrid parallel algorithm that combines the desirable traits of two distinct parallel methods. The fine and coarse grained components of the algorithm take advantage of heterogeneous computer architecture common in high performance computing facilities. We present the algorithm and demonstrate its effectiveness on a set of example problems including optimal control, dynamic games, and seismic wave propagation. We give results for convergence, parallel scaling,more » and show state-of-the-art speedup values for the fast sweeping method.« less

  20. Argonne simulation framework for intelligent transportation systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ewing, T.; Doss, E.; Hanebutte, U.

    1996-04-01

    A simulation framework has been developed which defines a high-level architecture for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System (ITS). The simulator is designed to run on parallel computers and distributed (networked) computer systems; however, a version for a stand alone workstation is also available. The ITS simulator includes an Expert Driver Model (EDM) of instrumented ``smart`` vehicles with in-vehicle navigation units. The EDM is capable of performing optimal route planning and communicating with Traffic Management Centers (TMC). A dynamic road map data base is sued for optimum route planning, where the data is updated periodically tomore » reflect any changes in road or weather conditions. The TMC has probe vehicle tracking capabilities (display position and attributes of instrumented vehicles), and can provide 2-way interaction with traffic to provide advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces that includes human-factors studies to support safety and operational research. Realistic modeling of variations of the posted driving speed are based on human factor studies that take into consideration weather, road conditions, driver`s personality and behavior and vehicle type. The simulator has been developed on a distributed system of networked UNIX computers, but is designed to run on ANL`s IBM SP-X parallel computer system for large scale problems. A novel feature of the developed simulator is that vehicles will be represented by autonomous computer processes, each with a behavior model which performs independent route selection and reacts to external traffic events much like real vehicles. Vehicle processes interact with each other and with ITS components by exchanging messages. With this approach, one will be able to take advantage of emerging massively parallel processor (MPP) systems.« less

  1. Breakdown of the Frozen-in Condition and Plasma Acceleration: Dynamical Theory

    NASA Astrophysics Data System (ADS)

    Song, Y.; Lysak, R. L.

    2007-12-01

    The magnetic reconnection hypothesis emphasizes the importance of the breakdown of the frozen-in condition, explains the strong dependence of the geomagnetic activity on the IMF, and approximates an average qualitative description for many IMF controlled effects in magnetospheric physics. However, some important theoretical aspects of reconnection, including its definition, have not been carefully examined. The crucial components of such models, such as the largely-accepted X-line reconnection picture and the broadly-used explanations of the breakdown of the frozen-in condition, lack complete theoretical support. The important irreversible reactive interaction is intrinsically excluded and overlooked in most reconnection models. The generation of parallel electric fields must be the result of a reactive plasma interaction, which is associated with the temporal changes and spatial gradients of magnetic and velocity shears (Song and Lysak, 2006). Unlike previous descriptions of the magnetic reconnection process, which depend on dissipative-type coefficients or some passive terms in the generalized Ohm's law, the reactive interaction is a dynamical process, which favors localized high magnetic and/or mechanical stresses and a low plasma density. The reactive interaction is often closely associated with the radiation of shear Alfvén waves and is independent of any assumed dissipation coefficients. The generated parallel electric field makes an irreversible conversion between magnetic energy and the kinetic energy of the accelerated plasma and the bulk flow. We demonstrate how the reactive interaction, e.g., the nonlinear interaction of MHD mesoscale wave packets at current sheets and in the auroral acceleration region, can create and support parallel electric fields, causing the breakdown of the frozen-in condition and plasma acceleration.

  2. Generic accelerated sequence alignment in SeqAn using vectorization and multi-threading.

    PubMed

    Rahn, René; Budach, Stefan; Costanza, Pascal; Ehrhardt, Marcel; Hancox, Jonny; Reinert, Knut

    2018-05-03

    Pairwise sequence alignment is undoubtedly a central tool in many bioinformatics analyses. In this paper, we present a generically accelerated module for pairwise sequence alignments applicable for a broad range of applications. In our module, we unified the standard dynamic programming kernel used for pairwise sequence alignments and extended it with a generalized inter-sequence vectorization layout, such that many alignments can be computed simultaneously by exploiting SIMD (Single Instruction Multiple Data) instructions of modern processors. We then extended the module by adding two layers of thread-level parallelization, where we a) distribute many independent alignments on multiple threads and b) inherently parallelize a single alignment computation using a work stealing approach producing a dynamic wavefront progressing along the minor diagonal. We evaluated our alignment vectorization and parallelization on different processors, including the newest Intel® Xeon® (Skylake) and Intel® Xeon Phi™ (KNL) processors, and use cases. The instruction set AVX512-BW (Byte and Word), available on Skylake processors, can genuinely improve the performance of vectorized alignments. We could run single alignments 1600 times faster on the Xeon Phi™ and 1400 times faster on the Xeon® than executing them with our previous sequential alignment module. The module is programmed in C++ using the SeqAn (Reinert et al., 2017) library and distributed with version 2.4. under the BSD license. We support SSE4, AVX2, AVX512 instructions and included UME::SIMD, a SIMD-instruction wrapper library, to extend our module for further instruction sets. We thoroughly test all alignment components with all major C++ compilers on various platforms. rene.rahn@fu-berlin.de.

  3. Structural considerations for functional anti-EGFR × anti-CD3 bispecific diabodies in light of domain order and binding affinity.

    PubMed

    Asano, Ryutaro; Nagai, Keisuke; Makabe, Koki; Takahashi, Kento; Kumagai, Takashi; Kawaguchi, Hiroko; Ogata, Hiromi; Arai, Kyoko; Umetsu, Mitsuo; Kumagai, Izumi

    2018-03-02

    We previously reported a functional humanized bispecific diabody (bsDb) that targeted EGFR and CD3 (hEx3-Db) and enhancement of its cytotoxicity by rearranging the domain order in the V domain. Here, we further dissected the effect of domain order in bsDbs on their cross-linking ability and binding kinetics to elucidate general rules regarding the design of functional bsDbs. Using Ex3-Db as a model system, we first classified the four possible domain orders as anti-parallel (where both chimeric single-chain components are variable heavy domain (VH)-variable light domain (VL) or VL-VH order) and parallel types (both chimeric single-chain components are mixed with VH-VL and VL-VH order). Although anti-parallel Ex3-Dbs could cross-link the soluble target antigens, their cross-linking ability between soluble targets had no correlation with their growth inhibitory effects. In contrast, the binding affinity of one of the two constructs with a parallel-arrangement V domain was particularly low, and structural modeling supported this phenomenon. Similar results were observed with E2x3-Dbs, in which the V region of the anti-EGFR antibody clone in hEx3 was replaced with that of another anti-EGFR clone. Only anti-parallel types showed affinity-dependent cancer inhibitory effects in each molecule, and E2x3-LH (both components in VL-VH order) showed the most intense anti-tumor activity in vitro and in vivo . Our results showed that, in addition to rearranging the domain order of bsDbs, increasing their binding affinity may be an ideal strategy for enhancing the cytotoxicity of anti-parallel constructs and that E2x3-LH is particularly attractive as a candidate next-generation anti-cancer drug.

  4. An O(N) and parallel approach to integral problems by a kernel-independent fast multipole method: Application to polarization and magnetization of interacting particles

    NASA Astrophysics Data System (ADS)

    Jiang, Xikai; Li, Jiyuan; Zhao, Xujun; Qin, Jian; Karpeev, Dmitry; Hernandez-Ortiz, Juan; de Pablo, Juan J.; Heinonen, Olle

    2016-08-01

    Large classes of materials systems in physics and engineering are governed by magnetic and electrostatic interactions. Continuum or mesoscale descriptions of such systems can be cast in terms of integral equations, whose direct computational evaluation requires O(N2) operations, where N is the number of unknowns. Such a scaling, which arises from the many-body nature of the relevant Green's function, has precluded wide-spread adoption of integral methods for solution of large-scale scientific and engineering problems. In this work, a parallel computational approach is presented that relies on using scalable open source libraries and utilizes a kernel-independent Fast Multipole Method (FMM) to evaluate the integrals in O(N) operations, with O(N) memory cost, thereby substantially improving the scalability and efficiency of computational integral methods. We demonstrate the accuracy, efficiency, and scalability of our approach in the context of two examples. In the first, we solve a boundary value problem for a ferroelectric/ferromagnetic volume in free space. In the second, we solve an electrostatic problem involving polarizable dielectric bodies in an unbounded dielectric medium. The results from these test cases show that our proposed parallel approach, which is built on a kernel-independent FMM, can enable highly efficient and accurate simulations and allow for considerable flexibility in a broad range of applications.

  5. An O( N) and parallel approach to integral problems by a kernel-independent fast multipole method: Application to polarization and magnetization of interacting particles

    DOE PAGES

    Jiang, Xikai; Li, Jiyuan; Zhao, Xujun; ...

    2016-08-10

    Large classes of materials systems in physics and engineering are governed by magnetic and electrostatic interactions. Continuum or mesoscale descriptions of such systems can be cast in terms of integral equations, whose direct computational evaluation requires O( N 2) operations, where N is the number of unknowns. Such a scaling, which arises from the many-body nature of the relevant Green's function, has precluded wide-spread adoption of integral methods for solution of large-scale scientific and engineering problems. In this work, a parallel computational approach is presented that relies on using scalable open source libraries and utilizes a kernel-independent Fast Multipole Methodmore » (FMM) to evaluate the integrals in O( N) operations, with O( N) memory cost, thereby substantially improving the scalability and efficiency of computational integral methods. We demonstrate the accuracy, efficiency, and scalability of our approach in the context of two examples. In the first, we solve a boundary value problem for a ferroelectric/ferromagnetic volume in free space. In the second, we solve an electrostatic problem involving polarizable dielectric bodies in an unbounded dielectric medium. Lastly, the results from these test cases show that our proposed parallel approach, which is built on a kernel-independent FMM, can enable highly efficient and accurate simulations and allow for considerable flexibility in a broad range of applications.« less

  6. Beyond the Renderer: Software Architecture for Parallel Graphics and Visualization

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1996-01-01

    As numerous implementations have demonstrated, software-based parallel rendering is an effective way to obtain the needed computational power for a variety of challenging applications in computer graphics and scientific visualization. To fully realize their potential, however, parallel renderers need to be integrated into a complete environment for generating, manipulating, and delivering visual data. We examine the structure and components of such an environment, including the programming and user interfaces, rendering engines, and image delivery systems. We consider some of the constraints imposed by real-world applications and discuss the problems and issues involved in bringing parallel rendering out of the lab and into production.

  7. Parallel performance of TORT on the CRAY J90: Model and measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnett, A.; Azmy, Y.Y.

    1997-10-01

    A limitation on the parallel performance of TORT on the CRAY J90 is the amount of extra work introduced by the multitasking algorithm itself. The extra work beyond that of the serial version of the code, called overhead, arises from the synchronization of the parallel tasks and the accumulation of results by the master task. The goal of recent updates to TORT was to reduce the time consumed by these activities. To help understand which components of the multitasking algorithm contribute significantly to the overhead, a parallel performance model was constructed and compared to measurements of actual timings of themore » code.« less

  8. Static electric dipole polarizabilities of An(5+/6+) and AnO2 (+/2+) (An = U, Np, and Pu) ions.

    PubMed

    Parmar, Payal; Peterson, Kirk A; Clark, Aurora E

    2014-12-21

    The parallel components of static electric dipole polarizabilities have been calculated for the lowest lying spin-orbit states of the penta- and hexavalent oxidation states of the actinides (An) U, Np, and Pu, in both their atomic and molecular diyl ion forms (An(5+/6+) and AnO2 (+/2+)) using the numerical finite-field technique within a four-component relativistic framework. The four-component Dirac-Hartree-Fock method formed the reference for MP2 and CCSD(T) calculations, while multireference Fock space coupled-cluster (FSCC), intermediate Hamiltonian Fock space coupled-cluster (IH-FSCC) and Kramers restricted configuration interaction (KRCI) methods were used to incorporate additional electron correlation. It is observed that electron correlation has significant (∼5 a.u.(3)) impact upon the parallel component of the polarizabilities of the diyls. To the best of our knowledge, these quantities have not been previously reported and they can serve as reference values in the determination of various electronic and response properties (for example intermolecular forces, optical properties, etc.) relevant to the nuclear fuel cycle and material science applications. The highest quality numbers for the parallel components (αzz) of the polarizability for the lowest Ω levels corresponding to the ground electronic states are (in a.u.(3)) 44.15 and 41.17 for UO2 (+) and UO2 (2+), respectively, 45.64 and 41.42 for NpO2 (+) and NpO2 (2+), respectively, and 47.15 for the PuO2 (+) ion.

  9. Probabilistic structural mechanics research for parallel processing computers

    NASA Technical Reports Server (NTRS)

    Sues, Robert H.; Chen, Heh-Chyun; Twisdale, Lawrence A.; Martin, William R.

    1991-01-01

    Aerospace structures and spacecraft are a complex assemblage of structural components that are subjected to a variety of complex, cyclic, and transient loading conditions. Significant modeling uncertainties are present in these structures, in addition to the inherent randomness of material properties and loads. To properly account for these uncertainties in evaluating and assessing the reliability of these components and structures, probabilistic structural mechanics (PSM) procedures must be used. Much research has focused on basic theory development and the development of approximate analytic solution methods in random vibrations and structural reliability. Practical application of PSM methods was hampered by their computationally intense nature. Solution of PSM problems requires repeated analyses of structures that are often large, and exhibit nonlinear and/or dynamic response behavior. These methods are all inherently parallel and ideally suited to implementation on parallel processing computers. New hardware architectures and innovative control software and solution methodologies are needed to make solution of large scale PSM problems practical.

  10. Vectorization and parallelization of the finite strip method for dynamic Mindlin plate problems

    NASA Technical Reports Server (NTRS)

    Chen, Hsin-Chu; He, Ai-Fang

    1993-01-01

    The finite strip method is a semi-analytical finite element process which allows for a discrete analysis of certain types of physical problems by discretizing the domain of the problem into finite strips. This method decomposes a single large problem into m smaller independent subproblems when m harmonic functions are employed, thus yielding natural parallelism at a very high level. In this paper we address vectorization and parallelization strategies for the dynamic analysis of simply-supported Mindlin plate bending problems and show how to prevent potential conflicts in memory access during the assemblage process. The vector and parallel implementations of this method and the performance results of a test problem under scalar, vector, and vector-concurrent execution modes on the Alliant FX/80 are also presented.

  11. DIALIGN P: fast pair-wise and multiple sequence alignment using parallel processors.

    PubMed

    Schmollinger, Martin; Nieselt, Kay; Kaufmann, Michael; Morgenstern, Burkhard

    2004-09-09

    Parallel computing is frequently used to speed up computationally expensive tasks in Bioinformatics. Herein, a parallel version of the multi-alignment program DIALIGN is introduced. We propose two ways of dividing the program into independent sub-routines that can be run on different processors: (a) pair-wise sequence alignments that are used as a first step to multiple alignment account for most of the CPU time in DIALIGN. Since alignments of different sequence pairs are completely independent of each other, they can be distributed to multiple processors without any effect on the resulting output alignments. (b) For alignments of large genomic sequences, we use a heuristics by splitting up sequences into sub-sequences based on a previously introduced anchored alignment procedure. For our test sequences, this combined approach reduces the program running time of DIALIGN by up to 97%. By distributing sub-routines to multiple processors, the running time of DIALIGN can be crucially improved. With these improvements, it is possible to apply the program in large-scale genomics and proteomics projects that were previously beyond its scope.

  12. Alignment between Protostellar Outflows and Filamentary Structure

    NASA Astrophysics Data System (ADS)

    Stephens, Ian W.; Dunham, Michael M.; Myers, Philip C.; Pokhrel, Riwaj; Sadavoy, Sarah I.; Vorobyov, Eduard I.; Tobin, John J.; Pineda, Jaime E.; Offner, Stella S. R.; Lee, Katherine I.; Kristensen, Lars E.; Jørgensen, Jes K.; Goodman, Alyssa A.; Bourke, Tyler L.; Arce, Héctor G.; Plunkett, Adele L.

    2017-09-01

    We present new Submillimeter Array (SMA) observations of CO(2-1) outflows toward young, embedded protostars in the Perseus molecular cloud as part of the Mass Assembly of Stellar Systems and their Evolution with the SMA (MASSES) survey. For 57 Perseus protostars, we characterize the orientation of the outflow angles and compare them with the orientation of the local filaments as derived from Herschel observations. We find that the relative angles between outflows and filaments are inconsistent with purely parallel or purely perpendicular distributions. Instead, the observed distribution of outflow-filament angles are more consistent with either randomly aligned angles or a mix of projected parallel and perpendicular angles. A mix of parallel and perpendicular angles requires perpendicular alignment to be more common by a factor of ˜3. Our results show that the observed distributions probably hold regardless of the protostar’s multiplicity, age, or the host core’s opacity. These observations indicate that the angular momentum axis of a protostar may be independent of the large-scale structure. We discuss the significance of independent protostellar rotation axes in the general picture of filament-based star formation.

  13. PISCES 2 users manual

    NASA Technical Reports Server (NTRS)

    Pratt, Terrence W.

    1987-01-01

    PISCES 2 is a programming environment and set of extensions to Fortran 77 for parallel programming. It is intended to provide a basis for writing programs for scientific and engineering applications on parallel computers in a way that is relatively independent of the particular details of the underlying computer architecture. This user's manual provides a complete description of the PISCES 2 system as it is currently implemented on the 20 processor Flexible FLEX/32 at NASA Langley Research Center.

  14. Parallel 3D Multi-Stage Simulation of a Turbofan Engine

    NASA Technical Reports Server (NTRS)

    Turner, Mark G.; Topp, David A.

    1998-01-01

    A 3D multistage simulation of each component of a modern GE Turbofan engine has been made. An axisymmetric view of this engine is presented in the document. This includes a fan, booster rig, high pressure compressor rig, high pressure turbine rig and a low pressure turbine rig. In the near future, all components will be run in a single calculation for a solution of 49 blade rows. The simulation exploits the use of parallel computations by using two levels of parallelism. Each blade row is run in parallel and each blade row grid is decomposed into several domains and run in parallel. 20 processors are used for the 4 blade row analysis. The average passage approach developed by John Adamczyk at NASA Lewis Research Center has been further developed and parallelized. This is APNASA Version A. It is a Navier-Stokes solver using a 4-stage explicit Runge-Kutta time marching scheme with variable time steps and residual smoothing for convergence acceleration. It has an implicit K-E turbulence model which uses an ADI solver to factor the matrix. Between 50 and 100 explicit time steps are solved before a blade row body force is calculated and exchanged with the other blade rows. This outer iteration has been coined a "flip." Efforts have been made to make the solver linearly scaleable with the number of blade rows. Enough flips are run (between 50 and 200) so the solution in the entire machine is not changing. The K-E equations are generally solved every other explicit time step. One of the key requirements in the development of the parallel code was to make the parallel solution exactly (bit for bit) match the serial solution. This has helped isolate many small parallel bugs and guarantee the parallelization was done correctly. The domain decomposition is done only in the axial direction since the number of points axially is much larger than the other two directions. This code uses MPI for message passing. The parallel speed up of the solver portion (no 1/0 or body force calculation) for a grid which has 227 points axially.

  15. Scalable and massively parallel Monte Carlo photon transport simulations for heterogeneous computing platforms

    NASA Astrophysics Data System (ADS)

    Yu, Leiming; Nina-Paravecino, Fanny; Kaeli, David; Fang, Qianqian

    2018-01-01

    We present a highly scalable Monte Carlo (MC) three-dimensional photon transport simulation platform designed for heterogeneous computing systems. Through the development of a massively parallel MC algorithm using the Open Computing Language framework, this research extends our existing graphics processing unit (GPU)-accelerated MC technique to a highly scalable vendor-independent heterogeneous computing environment, achieving significantly improved performance and software portability. A number of parallel computing techniques are investigated to achieve portable performance over a wide range of computing hardware. Furthermore, multiple thread-level and device-level load-balancing strategies are developed to obtain efficient simulations using multiple central processing units and GPUs.

  16. A Framework for Load Balancing of Tensor Contraction Expressions via Dynamic Task Partitioning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lai, Pai-Wei; Stock, Kevin; Rajbhandari, Samyam

    In this paper, we introduce the Dynamic Load-balanced Tensor Contractions (DLTC), a domain-specific library for efficient task parallel execution of tensor contraction expressions, a class of computation encountered in quantum chemistry and physics. Our framework decomposes each contraction into smaller unit of tasks, represented by an abstraction referred to as iterators. We exploit an extra level of parallelism by having tasks across independent contractions executed concurrently through a dynamic load balancing run- time. We demonstrate the improved performance, scalability, and flexibility for the computation of tensor contraction expressions on parallel computers using examples from coupled cluster methods.

  17. CO component estimation based on the independent component analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ichiki, Kiyotomo; Kaji, Ryohei; Yamamoto, Hiroaki

    2014-01-01

    Fast Independent Component Analysis (FastICA) is a component separation algorithm based on the levels of non-Gaussianity. Here we apply FastICA to the component separation problem of the microwave background, including carbon monoxide (CO) line emissions that are found to contaminate the PLANCK High Frequency Instrument (HFI) data. Specifically, we prepare 100 GHz, 143 GHz, and 217 GHz mock microwave sky maps, which include galactic thermal dust, NANTEN CO line, and the cosmic microwave background (CMB) emissions, and then estimate the independent components based on the kurtosis. We find that FastICA can successfully estimate the CO component as the first independentmore » component in our deflection algorithm because its distribution has the largest degree of non-Gaussianity among the components. Thus, FastICA can be a promising technique to extract CO-like components without prior assumptions about their distributions and frequency dependences.« less

  18. Lattice Independent Component Analysis for Mobile Robot Localization

    NASA Astrophysics Data System (ADS)

    Villaverde, Ivan; Fernandez-Gauna, Borja; Zulueta, Ekaitz

    This paper introduces an approach to appearance based mobile robot localization using Lattice Independent Component Analysis (LICA). The Endmember Induction Heuristic Algorithm (EIHA) is used to select a set of Strong Lattice Independent (SLI) vectors, which can be assumed to be Affine Independent, and therefore candidates to be the endmembers of the data. Selected endmembers are used to compute the linear unmixing of the robot's acquired images. The resulting mixing coefficients are used as feature vectors for view recognition through classification. We show on a sample path experiment that our approach can recognise the localization of the robot and we compare the results with the Independent Component Analysis (ICA).

  19. Association of pulse pressure with new-onset atrial fibrillation in patients with hypertension and left ventricular hypertrophy: the Losartan Intervention For Endpoint (LIFE) reduction in hypertension study.

    PubMed

    Larstorp, Anne Cecilie K; Ariansen, Inger; Gjesdal, Knut; Olsen, Michael H; Ibsen, Hans; Devereux, Richard B; Okin, Peter M; Dahlöf, Björn; Kjeldsen, Sverre E; Wachtell, Kristian

    2012-08-01

    Previous studies have found pulse pressure (PP), a marker of arterial stiffness, to be an independent predictor of atrial fibrillation (AF) in general and hypertensive populations. We examined whether PP predicted new-onset AF in comparison with other blood pressure components in the Losartan Intervention For Endpoint reduction in hypertension study, a double-blind, randomized (losartan versus atenolol), parallel-group study, including 9193 patients with hypertension and electrocardiographic left ventricular hypertrophy. In 8810 patients with neither a history of AF nor AF at baseline, Minnesota coding of electrocardiograms confirmed new-onset AF in 353 patients (4.0%) during mean 4.9 years of follow-up. In multivariate Cox regression analyses, baseline and in-treatment PP and baseline and in-treatment systolic blood pressure predicted new-onset AF, independent of baseline age, height, weight, and Framingham Risk Score; sex, race, and treatment allocation; and in-treatment heart rate and Cornell product. PP was the strongest single blood pressure predictor of new-onset AF determined by the decrease in the -2 Log likelihood statistic, in comparison with systolic blood pressure, diastolic blood pressure, and mean arterial pressure. When evaluated in the same model, the predictive effect of systolic and diastolic blood pressures together was similar to that of PP. In this population of patients with hypertension and left ventricular hypertrophy, PP was the strongest single blood pressure predictor of new-onset AF, independent of other risk factors.

  20. Parallel evolution of serotonergic neuromodulation underlies independent evolution of rhythmic motor behavior.

    PubMed

    Lillvis, Joshua L; Katz, Paul S

    2013-02-06

    Neuromodulation can dynamically alter neuronal and synaptic properties, thereby changing the behavioral output of a neural circuit. It is therefore conceivable that natural selection might act upon neuromodulation as a mechanism for sculpting the behavioral repertoire of a species. Here we report that the presence of neuromodulation is correlated with the production of a behavior that most likely evolved independently in two species: Tritonia diomedea and Pleurobranchaea californica (Mollusca, Gastropoda, Opisthobranchia, Nudipleura). Individuals of both species exhibit escape swimming behaviors consisting of repeated dorsal-ventral whole-body flexions. The central pattern generator (CPG) circuits underlying these behaviors contain homologous identified neurons: DSI and C2 in Tritonia and As and A1 in Pleurobranchaea. Homologs of these neurons also can be found in Hermissenda crassicornis where they are named CPT and C2, respectively. However, members of this species do not exhibit an analogous swimming behavior. In Tritonia and Pleurobranchaea, but not in Hermissenda, the serotonergic DSI homologs modulated the strength of synapses made by C2 homologs. Furthermore, the serotonin receptor antagonist methysergide blocked this neuromodulation and the swimming behavior. Additionally, in Pleurobranchaea, the robustness of swimming correlated with the extent of the synaptic modulation. Finally, injection of serotonin induced the swimming behavior in Tritonia and Pleurobranchaea, but not in Hermissenda. This suggests that the analogous swimming behaviors of Tritonia and Pleurobranchaea share a common dependence on serotonergic neuromodulation. Thus, neuromodulation may provide a mechanism that enables species to acquire analogous behaviors independently using homologous neural circuit components.

  1. Mitochondrial gene rearrangements confirm the parallel evolution of the crab-like form.

    PubMed Central

    Morrison, C L; Harvey, A W; Lavery, S; Tieu, K; Huang, Y; Cunningham, C W

    2002-01-01

    The repeated appearance of strikingly similar crab-like forms in independent decapod crustacean lineages represents a remarkable case of parallel evolution. Uncertainty surrounding the phylogenetic relationships among crab-like lineages has hampered evolutionary studies. As is often the case, aligned DNA sequences by themselves were unable to fully resolve these relationships. Four nested mitochondrial gene rearrangements--including one of the few reported movements of an arthropod protein-coding gene--are congruent with the DNA phylogeny and help to resolve a crucial node. A phylogenetic analysis of DNA sequences, and gene rearrangements, supported five independent origins of the crab-like form, and suggests that the evolution of the crab-like form may be irreversible. This result supports the utility of mitochondrial gene rearrangements in phylogenetic reconstruction. PMID:11886621

  2. Parallel versus Serial Processing Dependencies in the Perisylvian Speech Network: A Granger Analysis of Intracranial EEG Data

    ERIC Educational Resources Information Center

    Gow, David W., Jr.; Keller, Corey J.; Eskandar, Emad; Meng, Nate; Cash, Sydney S.

    2009-01-01

    In this work, we apply Granger causality analysis to high spatiotemporal resolution intracranial EEG (iEEG) data to examine how different components of the left perisylvian language network interact during spoken language perception. The specific focus is on the characterization of serial versus parallel processing dependencies in the dominant…

  3. Comparison of Educators' and Industrial Managers' Work Motivation Using Parallel Forms of the Work Components Study Questionnaire.

    ERIC Educational Resources Information Center

    Thornton, Billy W.; And Others

    The idea that educators would differ from business managers on Herzberg's motivation factors and Blum's security orientations was posited. Parallel questionnaires were used to measure the motivational variables. The sample was composed of 432 teachers, 118 administrators, and 192 industrial managers. Data were analyzed using multivariate and…

  4. 75 FR 73128 - Certain Printing and Imaging Devices and Components Thereof; Notice of Commission Determination...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-29

    ...'' include any line extending parallel to the central axis of the roller? Or, does this refer to the central... ``a longitudinal direction'' can include any line extending parallel to the central axis of the roller...) The finding that the Taylor reference (``A Telerobot on the World Wide Web'') (RX-281) does not...

  5. Magnetic Helicity Injection and Thermal Transport

    NASA Astrophysics Data System (ADS)

    Moses, Ronald; Gerwin, Richard; Schoenberg, Kurt

    1999-11-01

    In magnetic helicity injection, a current is driven between electrodes, parallel to the magnetic field in the edge plasma of a machine.^1 Plasma instabilities distribute current throughout the plasma. To model the injection of magnetic helicity, K, into an arbitrary closed surface, K is defined as the volume integral of A^.B. To make K unique, a gauge is chosen where the tangential surface components of A are purely solenoidal. If magnetic fields within a plasma are time varying, yet undergo no macroscopic changes over an extended period, and if the plasma is subject to an Ohm’s law with Hall terms, then it is shown that no closed magnetic surfaces with sustained internal currents can exist continuously within the plasma.^2 It is also shown that parallel thermal transport connects all parts of the plasma to the helicity injection electrodes and requires the electrode voltage difference to be at least 2.5 to 3 times the peak plasma temperature. This ratio is almost independent of the length of the electron mean-free path. If magnetic helicity injection is to be used for fusion-grade plasmas, then high-voltage, high-impedance injection techniques must be developed. ^1T. R. Jarboe, Plasma Physics and Controlled Fusion, V36, 945-990 (June 1994). ^2R. W. Moses, 1991 Sherwood International Fusion Theory Conference, Seattle, WA (April 22-24, 1991).

  6. A distributed code for color in natural scenes derived from center-surround filtered cone signals

    PubMed Central

    Kellner, Christian J.; Wachtler, Thomas

    2013-01-01

    In the retina of trichromatic primates, chromatic information is encoded in an opponent fashion and transmitted to the lateral geniculate nucleus (LGN) and visual cortex via parallel pathways. Chromatic selectivities of neurons in the LGN form two separate clusters, corresponding to two classes of cone opponency. In the visual cortex, however, the chromatic selectivities are more distributed, which is in accordance with a population code for color. Previous studies of cone signals in natural scenes typically found opponent codes with chromatic selectivities corresponding to two directions in color space. Here we investigated how the non-linear spatio-chromatic filtering in the retina influences the encoding of color signals. Cone signals were derived from hyper-spectral images of natural scenes and preprocessed by center-surround filtering and rectification, resulting in parallel ON and OFF channels. Independent Component Analysis (ICA) on these signals yielded a highly sparse code with basis functions that showed spatio-chromatic selectivities. In contrast to previous analyses of linear transformations of cone signals, chromatic selectivities were not restricted to two main chromatic axes, but were more continuously distributed in color space, similar to the population code of color in the early visual cortex. Our results indicate that spatio-chromatic processing in the retina leads to a more distributed and more efficient code for natural scenes. PMID:24098289

  7. Multi-spectrometer calibration transfer based on independent component analysis.

    PubMed

    Liu, Yan; Xu, Hao; Xia, Zhenzhen; Gong, Zhiyong

    2018-02-26

    Calibration transfer is indispensable for practical applications of near infrared (NIR) spectroscopy due to the need for precise and consistent measurements across different spectrometers. In this work, a method for multi-spectrometer calibration transfer is described based on independent component analysis (ICA). A spectral matrix is first obtained by aligning the spectra measured on different spectrometers. Then, by using independent component analysis, the aligned spectral matrix is decomposed into the mixing matrix and the independent components of different spectrometers. These differing measurements between spectrometers can then be standardized by correcting the coefficients within the independent components. Two NIR datasets of corn and edible oil samples measured with three and four spectrometers, respectively, were used to test the reliability of this method. The results of both datasets reveal that spectra measurements across different spectrometers can be transferred simultaneously and that the partial least squares (PLS) models built with the measurements on one spectrometer can predict that the spectra can be transferred correctly on another.

  8. Real Time Monitoring of Dissolved Organic Carbon Concentration and Disinfection By-Product Formation Potential in a Surface Water Treatment Plant with Simulaneous UV-VIS Absorbance and Fluorescence Excitation-Emission Mapping

    NASA Astrophysics Data System (ADS)

    Gilmore, A. M.

    2015-12-01

    This study describes a method based on simultaneous absorbance and fluorescence excitation-emission mapping for rapidly and accurately monitoring dissolved organic carbon concentration and disinfection by-product formation potential for surface water sourced drinking water treatment. The method enables real-time monitoring of the Dissolved Organic Carbon (DOC), absorbance at 254 nm (UVA), the Specific UV Absorbance (SUVA) as well as the Simulated Distribution System Trihalomethane (THM) Formation Potential (SDS-THMFP) for the source and treated water among other component parameters. The method primarily involves Parallel Factor Analysis (PARAFAC) decomposition of the high and lower molecular weight humic and fulvic organic component concentrations. The DOC calibration method involves calculating a single slope factor (with the intercept fixed at 0 mg/l) by linear regression for the UVA divided by the ratio of the high and low molecular weight component concentrations. This method thus corrects for the changes in the molecular weight component composition as a function of the source water composition and coagulation treatment effects. The SDS-THMFP calibration involves a multiple linear regression of the DOC, organic component ratio, chlorine residual, pH and alkalinity. Both the DOC and SDS-THMFP correlations over a period of 18 months exhibited adjusted correlation coefficients with r2 > 0.969. The parameters can be reported as a function of compliance rules associated with required % removals of DOC (as a function of alkalinity) and predicted maximum contaminant levels (MCL) of THMs. The single instrument method, which is compatible with continuous flow monitoring or grab sampling, provides a rapid (2-3 minute) and precise indicator of drinking water disinfectant treatability without the need for separate UV photometric and DOC meter measurements or independent THM determinations.

  9. Parallelization of the FLAPW method

    NASA Astrophysics Data System (ADS)

    Canning, A.; Mannstadt, W.; Freeman, A. J.

    2000-08-01

    The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining structural, electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about a hundred atoms due to the lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work, we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel supercomputer.

  10. PEM-PCA: a parallel expectation-maximization PCA face recognition architecture.

    PubMed

    Rujirakul, Kanokmon; So-In, Chakchai; Arnonkijpanich, Banchar

    2014-01-01

    Principal component analysis or PCA has been traditionally used as one of the feature extraction techniques in face recognition systems yielding high accuracy when requiring a small number of features. However, the covariance matrix and eigenvalue decomposition stages cause high computational complexity, especially for a large database. Thus, this research presents an alternative approach utilizing an Expectation-Maximization algorithm to reduce the determinant matrix manipulation resulting in the reduction of the stages' complexity. To improve the computational time, a novel parallel architecture was employed to utilize the benefits of parallelization of matrix computation during feature extraction and classification stages including parallel preprocessing, and their combinations, so-called a Parallel Expectation-Maximization PCA architecture. Comparing to a traditional PCA and its derivatives, the results indicate lower complexity with an insignificant difference in recognition precision leading to high speed face recognition systems, that is, the speed-up over nine and three times over PCA and Parallel PCA.

  11. System and method for representing and manipulating three-dimensional objects on massively parallel architectures

    DOEpatents

    Karasick, Michael S.; Strip, David R.

    1996-01-01

    A parallel computing system is described that comprises a plurality of uniquely labeled, parallel processors, each processor capable of modelling a three-dimensional object that includes a plurality of vertices, faces and edges. The system comprises a front-end processor for issuing a modelling command to the parallel processors, relating to a three-dimensional object. Each parallel processor, in response to the command and through the use of its own unique label, creates a directed-edge (d-edge) data structure that uniquely relates an edge of the three-dimensional object to one face of the object. Each d-edge data structure at least includes vertex descriptions of the edge and a description of the one face. As a result, each processor, in response to the modelling command, operates upon a small component of the model and generates results, in parallel with all other processors, without the need for processor-to-processor intercommunication.

  12. JSD: Parallel Job Accounting on the IBM SP2

    NASA Technical Reports Server (NTRS)

    Saphir, William; Jones, James Patton; Walter, Howard (Technical Monitor)

    1995-01-01

    The IBM SP2 is one of the most promising parallel computers for scientific supercomputing - it is fast and usually reliable. One of its biggest problems is a lack of robust and comprehensive system software. Among other things, this software allows a collection of Unix processes to be treated as a single parallel application. It does not, however, provide accounting for parallel jobs other than what is provided by AIX for the individual process components. Without parallel job accounting, it is not possible to monitor system use, measure the effectiveness of system administration strategies, or identify system bottlenecks. To address this problem, we have written jsd, a daemon that collects accounting data for parallel jobs. jsd records information in a format that is easily machine- and human-readable, allowing us to extract the most important accounting information with very little effort. jsd also notifies system administrators in certain cases of system failure.

  13. Using parallel computing for the display and simulation of the space debris environment

    NASA Astrophysics Data System (ADS)

    Möckel, M.; Wiedemann, C.; Flegel, S.; Gelhaus, J.; Vörsmann, P.; Klinkrad, H.; Krag, H.

    2011-07-01

    Parallelism is becoming the leading paradigm in today's computer architectures. In order to take full advantage of this development, new algorithms have to be specifically designed for parallel execution while many old ones have to be upgraded accordingly. One field in which parallel computing has been firmly established for many years is computer graphics. Calculating and displaying three-dimensional computer generated imagery in real time requires complex numerical operations to be performed at high speed on a large number of objects. Since most of these objects can be processed independently, parallel computing is applicable in this field. Modern graphics processing units (GPUs) have become capable of performing millions of matrix and vector operations per second on multiple objects simultaneously. As a side project, a software tool is currently being developed at the Institute of Aerospace Systems that provides an animated, three-dimensional visualization of both actual and simulated space debris objects. Due to the nature of these objects it is possible to process them individually and independently from each other. Therefore, an analytical orbit propagation algorithm has been implemented to run on a GPU. By taking advantage of all its processing power a huge performance increase, compared to its CPU-based counterpart, could be achieved. For several years efforts have been made to harness this computing power for applications other than computer graphics. Software tools for the simulation of space debris are among those that could profit from embracing parallelism. With recently emerged software development tools such as OpenCL it is possible to transfer the new algorithms used in the visualization outside the field of computer graphics and implement them, for example, into the space debris simulation environment. This way they can make use of parallel hardware such as GPUs and Multi-Core-CPUs for faster computation. In this paper the visualization software will be introduced, including a comparison between the serial and the parallel method of orbit propagation. Ways of how to use the benefits of the latter method for space debris simulation will be discussed. An introduction to OpenCL will be given as well as an exemplary algorithm from the field of space debris simulation.

  14. Using parallel computing for the display and simulation of the space debris environment

    NASA Astrophysics Data System (ADS)

    Moeckel, Marek; Wiedemann, Carsten; Flegel, Sven Kevin; Gelhaus, Johannes; Klinkrad, Heiner; Krag, Holger; Voersmann, Peter

    Parallelism is becoming the leading paradigm in today's computer architectures. In order to take full advantage of this development, new algorithms have to be specifically designed for parallel execution while many old ones have to be upgraded accordingly. One field in which parallel computing has been firmly established for many years is computer graphics. Calculating and displaying three-dimensional computer generated imagery in real time requires complex numerical operations to be performed at high speed on a large number of objects. Since most of these objects can be processed independently, parallel computing is applicable in this field. Modern graphics processing units (GPUs) have become capable of performing millions of matrix and vector operations per second on multiple objects simultaneously. As a side project, a software tool is currently being developed at the Institute of Aerospace Systems that provides an animated, three-dimensional visualization of both actual and simulated space debris objects. Due to the nature of these objects it is possible to process them individually and independently from each other. Therefore, an analytical orbit propagation algorithm has been implemented to run on a GPU. By taking advantage of all its processing power a huge performance increase, compared to its CPU-based counterpart, could be achieved. For several years efforts have been made to harness this computing power for applications other than computer graphics. Software tools for the simulation of space debris are among those that could profit from embracing parallelism. With recently emerged software development tools such as OpenCL it is possible to transfer the new algorithms used in the visualization outside the field of computer graphics and implement them, for example, into the space debris simulation environment. This way they can make use of parallel hardware such as GPUs and Multi-Core-CPUs for faster computation. In this paper the visualization software will be introduced, including a comparison between the serial and the parallel method of orbit propagation. Ways of how to use the benefits of the latter method for space debris simulation will be discussed. An introduction of OpenCL will be given as well as an exemplary algorithm from the field of space debris simulation.

  15. Spatial pattern separation of chemicals and frequency-independent components by terahertz spectroscopic imaging

    NASA Astrophysics Data System (ADS)

    Watanabe, Yuuki; Kawase, Kodo; Ikari, Tomofumi; Ito, Hiromasa; Ishikawa, Youichi; Minamide, Hiroaki

    2003-10-01

    We separated the component spatial patterns of frequency-dependent absorption in chemicals and frequency-independent components such as plastic, paper, and measurement noise in terahertz (THz) spectroscopic images, using known spectral curves. Our measurement system, which uses a widely tunable coherent THz-wave parametric oscillator source, can image at a specific frequency in the range 1-2 THz. The component patterns of chemicals can easily be extracted by use of the frequency-independent components. This method could be successfully used for nondestructive inspection for the detection of illegal drugs and devices of bioterrorism concealed, e.g., inside mail and packages.

  16. Design of a highly parallel board-level-interconnection with 320 Gbps capacity

    NASA Astrophysics Data System (ADS)

    Lohmann, U.; Jahns, J.; Limmer, S.; Fey, D.; Bauer, H.

    2012-01-01

    A parallel board-level interconnection design is presented consisting of 32 channels, each operating at 10 Gbps. The hardware uses available optoelectronic components (VCSEL, TIA, pin-diodes) and a combination of planarintegrated free-space optics, fiber-bundles and available MEMS-components, like the DMD™ from Texas Instruments. As a specific feature, we present a new modular inter-board interconnect, realized by 3D fiber-matrix connectors. The performance of the interconnect is evaluated with regard to optical properties and power consumption. Finally, we discuss the application of the interconnect for strongly distributed system architectures, as, for example, in high performance embedded computing systems and data centers.

  17. CRIT II electric, magnetic, and density measurements within an ionizing neutral stream

    NASA Technical Reports Server (NTRS)

    Swenson, C. M.; Kelley, M. C.; Primdahl, F.; Baker, K. D.

    1990-01-01

    Measurements from rocket-borne sensors inside a high-velocity neutral barium beam show a-factor-of-six increase in plasma density in a moving ionizing front. This region was colocated with intense fluctuating electric fields at frequencies well under the lower hybrid frequency for a barium plasma. Large quasi-dc electric and magnetic field fluctuations were also detected with a large component of the current and the electric field parallel to B(0). An Alfven wave with a finite electric field component parallel to the geomagnetic field was observed to propagate along B(0), where it was detected by an instrumented subpayload.

  18. A symmetrical subtraction combined with interpolated values for eliminating scattering from fluorescence EEM data

    NASA Astrophysics Data System (ADS)

    Xu, Jing; Liu, Xiaofei; Wang, Yutian

    2016-08-01

    Parallel factor analysis is a widely used method to extract qualitative and quantitative information of the analyte of interest from fluorescence emission-excitation matrix containing unknown components. Big amplitude of scattering will influence the results of parallel factor analysis. Many methods of eliminating scattering have been proposed. Each of these methods has its advantages and disadvantages. The combination of symmetrical subtraction and interpolated values has been discussed. The combination refers to both the combination of results and the combination of methods. Nine methods were used for comparison. The results show the combination of results can make a better concentration prediction for all the components.

  19. The gap technique does not rotate the femur parallel to the epicondylar axis.

    PubMed

    Matziolis, Georg; Boenicke, Hinrich; Pfiel, Sascha; Wassilew, Georgi; Perka, Carsten

    2011-02-01

    In the analysis of painful total knee replacements, the surgical epicondylar axis (SEA) has become established as a standard in the diagnosis of femoral component rotation. It remains unclear whether the gap technique widely used to determine femoral rotation, when applied correctly, results in a rotation parallel to the SEA. In this prospective study, 69 patients (69 joints) were included who received a navigated bicondylar surface replacement due to primary arthritis of the knee joint. In 67 cases in which a perfect soft-tissue balancing of the extension gap (<1° asymmetry) was achieved, the flexion gap and the rotation of the femoral component necessary for its symmetry was determined and documented. The femoral component was implanted additionally taking into account the posterior condylar axis and the Whiteside's line. Postoperatively, the rotation of the femoral component to the SEA was determined and this was used to calculate the angle between a femur implanted according to the gap technique and the SEA. If the gap technique had been used consistently, it would have resulted in a deviation of the femoral components by -0.6° ± 2.9° (-7.4°-5.9°) from the SEA. The absolute deviation would have been 2.4° ± 1.8°, with a range between 0.2° and 7.4°. Even if the extension gap is perfectly balanced, the gap technique does not lead to a parallel alignment of the femoral component to the SEA. Since the clinical results of this technique are equivalent to those of the femur first technique in the literature, an evaluation of this deviation as a malalignment must be considered critically.

  20. Using a constraint on the parallel velocity when determining electric fields with EISCAT

    NASA Technical Reports Server (NTRS)

    Caudal, G.; Blanc, M.

    1988-01-01

    A method is proposed to determine the perpendicular components of the ion velocity vector (and hence the perpendicular electric field) from EISCAT tristatic measurements, in which one introduces an additional constraint on the parallel velocity, in order to take account of our knowledge that the parallel velocity of ions is small. This procedure removes some artificial features introduced when the tristatic geometry becomes too unfavorable. It is particularly well suited for the southernmost or northernmost positions of the tristatic measurements performed by meridian scan experiments (CP3 mode).

  1. Multiple resonant railgun power supply

    DOEpatents

    Honig, E.M.; Nunnally, W.C.

    1985-06-19

    A multiple repetitive resonant railgun power supply provides energy for repetitively propelling projectiles from a pair of parallel rails. A plurality of serially connected paired parallel rails are powered by similar power supplies. Each supply comprises an energy storage capacitor, a storage inductor to form a resonant circuit with the energy storage capacitor and a magnetic switch to transfer energy between the resonant circuit and the pair of parallel rails for the propelling of projectiles. The multiple serial operation permits relatively small energy components to deliver overall relatively large amounts of energy to the projectiles being propelled.

  2. Multiple resonant railgun power supply

    DOEpatents

    Honig, Emanuel M.; Nunnally, William C.

    1988-01-01

    A multiple repetitive resonant railgun power supply provides energy for repetitively propelling projectiles from a pair of parallel rails. A plurality of serially connected paired parallel rails are powered by similar power supplies. Each supply comprises an energy storage capacitor, a storage inductor to form a resonant circuit with the energy storage capacitor and a magnetic switch to transfer energy between the resonant circuit and the pair of parallel rails for the propelling of projectiles. The multiple serial operation permits relatively small energy components to deliver overall relatively large amounts of energy to the projectiles being propelled.

  3. Rrp1b, a New Candidate Susceptibility Gene for Breast Cancer Progression and Metastasis

    PubMed Central

    Crawford, Nigel P. S; Qian, Xiaolan; Ziogas, Argyrios; Papageorge, Alex G; Boersma, Brenda J; Walker, Renard C; Lukes, Luanne; Rowe, William L; Zhang, Jinghui; Ambs, Stefan; Lowy, Douglas R; Anton-Culver, Hoda; Hunter, Kent W

    2007-01-01

    A novel candidate metastasis modifier, ribosomal RNA processing 1 homolog B (Rrp1b), was identified through two independent approaches. First, yeast two-hybrid, immunoprecipitation, and functional assays demonstrated a physical and functional interaction between Rrp1b and the previous identified metastasis modifier Sipa1. In parallel, using mouse and human metastasis gene expression data it was observed that extracellular matrix (ECM) genes are common components of metastasis predictive signatures, suggesting that ECM genes are either important markers or causal factors in metastasis. To investigate the relationship between ECM genes and poor prognosis in breast cancer, expression quantitative trait locus analysis of polyoma middle-T transgene-induced mammary tumor was performed. ECM gene expression was found to be consistently associated with Rrp1b expression. In vitro expression of Rrp1b significantly altered ECM gene expression, tumor growth, and dissemination in metastasis assays. Furthermore, a gene signature induced by ectopic expression of Rrp1b in tumor cells predicted survival in a human breast cancer gene expression dataset. Finally, constitutional polymorphism within RRP1B was found to be significantly associated with tumor progression in two independent breast cancer cohorts. These data suggest that RRP1B may be a novel susceptibility gene for breast cancer progression and metastasis. PMID:18081427

  4. State-plane analysis of parallel resonant converter

    NASA Technical Reports Server (NTRS)

    Oruganti, R.; Lee, F. C.

    1985-01-01

    A method for analyzing the complex operation of a parallel resonant converter is developed, utilizing graphical state-plane techniques. The comprehensive mode analysis uncovers, for the first time, the presence of other complex modes besides the continuous conduction mode and the discontinuous conduction mode and determines their theoretical boundaries. Based on the insight gained from the analysis, a novel, high-frequency resonant buck converter is proposed. The voltage conversion ratio of the new converter is almost independent of load.

  5. Independent origins of neurons and synapses: insights from ctenophores

    PubMed Central

    Moroz, Leonid L.; Kohn, Andrea B.

    2016-01-01

    There is more than one way to develop neuronal complexity, and animals frequently use different molecular toolkits to achieve similar functional outcomes. Genomics and metabolomics data from basal metazoans suggest that neural signalling evolved independently in ctenophores and cnidarians/bilaterians. This polygenesis hypothesis explains the lack of pan-neuronal and pan-synaptic genes across metazoans, including remarkable examples of lineage-specific evolution of neurogenic and signalling molecules as well as synaptic components. Sponges and placozoans are two lineages without neural and muscular systems. The possibility of secondary loss of neurons and synapses in the Porifera/Placozoa clades is a highly unlikely and less parsimonious scenario. We conclude that acetylcholine, serotonin, histamine, dopamine, octopamine and gamma-aminobutyric acid (GABA) were recruited as transmitters in the neural systems in cnidarian and bilaterian lineages. By contrast, ctenophores independently evolved numerous secretory peptides, indicating extensive adaptations within the clade and suggesting that early neural systems might be peptidergic. Comparative analysis of glutamate signalling also shows numerous lineage-specific innovations, implying the extensive use of this ubiquitous metabolite and intercellular messenger over the course of convergent and parallel evolution of mechanisms of intercellular communication. Therefore: (i) we view a neuron as a functional character but not a genetic character, and (ii) any given neural system cannot be considered as a single character because it is composed of different cell lineages with distinct genealogies, origins and evolutionary histories. Thus, when reconstructing the evolution of nervous systems, we ought to start with the identification of particular cell lineages by establishing distant neural homologies or examples of convergent evolution. In a corollary of the hypothesis of the independent origins of neurons, our analyses suggest that both electrical and chemical synapses evolved more than once. PMID:26598724

  6. Loss-of-function DNA sequence variant in the CLCNKA chloride channel implicates the cardio-renal axis in interindividual heart failure risk variation.

    PubMed

    Cappola, Thomas P; Matkovich, Scot J; Wang, Wei; van Booven, Derek; Li, Mingyao; Wang, Xuexia; Qu, Liming; Sweitzer, Nancy K; Fang, James C; Reilly, Muredach P; Hakonarson, Hakon; Nerbonne, Jeanne M; Dorn, Gerald W

    2011-02-08

    Common heart failure has a strong undefined heritable component. Two recent independent cardiovascular SNP array studies identified a common SNP at 1p36 in intron 2 of the HSPB7 gene as being associated with heart failure. HSPB7 resequencing identified other risk alleles but no functional gene variants. Here, we further show no effect of the HSPB7 SNP on cardiac HSPB7 mRNA levels or splicing, suggesting that the SNP marks the position of a functional variant in another gene. Accordingly, we used massively parallel platforms to resequence all coding exons of the adjacent CLCNKA gene, which encodes the K(a) renal chloride channel (ClC-K(a)). Of 51 exonic CLCNKA variants identified, one SNP (rs10927887, encoding Arg83Gly) was common, in linkage disequilibrium with the heart failure risk SNP in HSPB7, and associated with heart failure in two independent Caucasian referral populations (n = 2,606 and 1,168; combined P = 2.25 × 10(-6)). Individual genotyping of rs10927887 in the two study populations and a third independent heart failure cohort (combined n = 5,489) revealed an additive allele effect on heart failure risk that is independent of age, sex, and prior hypertension (odds ratio = 1.27 per allele copy; P = 8.3 × 10(-7)). Functional characterization of recombinant wild-type Arg83 and variant Gly83 ClC-K(a) chloride channel currents revealed ≈ 50% loss-of-function of the variant channel. These findings identify a common, functionally significant genetic risk factor for Caucasian heart failure. The variant CLCNKA risk allele, telegraphed by linked variants in the adjacent HSPB7 gene, uncovers a previously overlooked genetic mechanism affecting the cardio-renal axis.

  7. Data Partitioning and Load Balancing in Parallel Disk Systems

    NASA Technical Reports Server (NTRS)

    Scheuermann, Peter; Weikum, Gerhard; Zabback, Peter

    1997-01-01

    Parallel disk systems provide opportunities for exploiting I/O parallelism in two possible waves, namely via inter-request and intra-request parallelism. In this paper we discuss the main issues in performance tuning of such systems, namely striping and load balancing, and show their relationship to response time and throughput. We outline the main components of an intelligent, self-reliant file system that aims to optimize striping by taking into account the requirements of the applications and performs load balancing by judicious file allocation and dynamic redistributions of the data when access patterns change. Our system uses simple but effective heuristics that incur only little overhead. We present performance experiments based on synthetic workloads and real-life traces.

  8. Comparing the OpenMP, MPI, and Hybrid Programming Paradigm on an SMP Cluster

    NASA Technical Reports Server (NTRS)

    Jost, Gabriele; Jin, Hao-Qiang; anMey, Dieter; Hatay, Ferhat F.

    2003-01-01

    Clusters of SMP (Symmetric Multi-Processors) nodes provide support for a wide range of parallel programming paradigms. The shared address space within each node is suitable for OpenMP parallelization. Message passing can be employed within and across the nodes of a cluster. Multiple levels of parallelism can be achieved by combining message passing and OpenMP parallelization. Which programming paradigm is the best will depend on the nature of the given problem, the hardware components of the cluster, the network, and the available software. In this study we compare the performance of different implementations of the same CFD benchmark application, using the same numerical algorithm but employing different programming paradigms.

  9. Sunlight-induced changes in chromophores and fluorophores of wastewater-derived organic matter in receiving waters--the role of salinity.

    PubMed

    Yang, Xiaofang; Meng, Fangang; Huang, Guocheng; Sun, Li; Lin, Zheng

    2014-10-01

    Wastewater-derived organic matter (WOM) is an important constituent of discharge to urban rivers and is suspected of altering the naturally occurring dissolved organic matter (DOM) in water systems. This study investigated sunlight-induced changes in chromophores and fluorophores of WOM with different salinities (S = 0, 10, 20 and 30) that were collected from two wastewater treatment plants (WWTP-A and WWTP-B). The results showed that exposure to sunlight for 5.3 × 10(5) J/m(2) caused significant decreases in UV254-absorbing WOM (45-59% loss) compared to gross dissolved organic carbon (<15% loss). An increase in salinity accelerated the overall photo-degradation rates of the UV254-absorbing chromophores from both WOM and natural DOM. In addition, irradiated WOM at a higher salinity had a larger molecular size than that at a lower salinity. However, natural DOM did not display such behavior. Parallel factor analysis of the excitation-emission matrix determined the presence of two humic-like components (C1 and C2) and two protein-like components (C3 and C4). All the components in WOM followed second-order kinetics, except for the C4 component in WWTP-A, which fit zero-order photoreaction kinetics. The photo-degradation of the C1 component in both WWTPs appeared to be independent of salinity; however, the photo-degradation rates of the C2 and C3 components in both WWTPs and C4 in WWTP-B increased significantly with increasing salinity. In comparison, the photo-degradation of the C1 component was significantly facilitated by increased salinity in natural DOM, fitting first-order photoreaction kinetics. As such, the current knowledge concerning the photo-degradation of naturally occurring DOM cannot be extrapolated for the understanding of WOM photo-degradation. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Synthesizing parallel imaging applications using the CAP (computer-aided parallelization) tool

    NASA Astrophysics Data System (ADS)

    Gennart, Benoit A.; Mazzariol, Marc; Messerli, Vincent; Hersch, Roger D.

    1997-12-01

    Imaging applications such as filtering, image transforms and compression/decompression require vast amounts of computing power when applied to large data sets. These applications would potentially benefit from the use of parallel processing. However, dedicated parallel computers are expensive and their processing power per node lags behind that of the most recent commodity components. Furthermore, developing parallel applications remains a difficult task: writing and debugging the application is difficult (deadlocks), programs may not be portable from one parallel architecture to the other, and performance often comes short of expectations. In order to facilitate the development of parallel applications, we propose the CAP computer-aided parallelization tool which enables application programmers to specify at a high-level of abstraction the flow of data between pipelined-parallel operations. In addition, the CAP tool supports the programmer in developing parallel imaging and storage operations. CAP enables combining efficiently parallel storage access routines and image processing sequential operations. This paper shows how processing and I/O intensive imaging applications must be implemented to take advantage of parallelism and pipelining between data access and processing. This paper's contribution is (1) to show how such implementations can be compactly specified in CAP, and (2) to demonstrate that CAP specified applications achieve the performance of custom parallel code. The paper analyzes theoretically the performance of CAP specified applications and demonstrates the accuracy of the theoretical analysis through experimental measurements.

  11. A Component-Based Extension Framework for Large-Scale Parallel Simulations in NEURON

    PubMed Central

    King, James G.; Hines, Michael; Hill, Sean; Goodman, Philip H.; Markram, Henry; Schürmann, Felix

    2008-01-01

    As neuronal simulations approach larger scales with increasing levels of detail, the neurosimulator software represents only a part of a chain of tools ranging from setup, simulation, interaction with virtual environments to analysis and visualizations. Previously published approaches to abstracting simulator engines have not received wide-spread acceptance, which in part may be to the fact that they tried to address the challenge of solving the model specification problem. Here, we present an approach that uses a neurosimulator, in this case NEURON, to describe and instantiate the network model in the simulator's native model language but then replaces the main integration loop with its own. Existing parallel network models are easily adopted to run in the presented framework. The presented approach is thus an extension to NEURON but uses a component-based architecture to allow for replaceable spike exchange components and pluggable components for monitoring, analysis, or control that can run in this framework alongside with the simulation. PMID:19430597

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Connell, Patrick; Frolov, Valeri P.; Kubiznak, David

    We obtain and study the equations describing the parallel transport of orthonormal frames along geodesics in a spacetime admitting a nondegenerate, principal, conformal Killing-Yano tensor h. We demonstrate that the operator F, obtained by a projection of h to a subspace orthogonal to the velocity, has in a generic case eigenspaces of dimension not greater than 2. Each of these eigenspaces is independently parallel propagated. This allows one to reduce the parallel transport equations to a set of first order, ordinary, differential equations for the angles of rotation in the 2D eigenspaces. General analysis is illustrated by studying the equationsmore » of the parallel transport in the Kerr-NUT-(A)dS metrics. Examples of three-, four-, and five-dimensional Kerr-NUT-(A)dS are considered, and it is shown that the obtained first order equations can be solved by a separation of variables.« less

  13. Parallel evolutionary computation in bioinformatics applications.

    PubMed

    Pinho, Jorge; Sobral, João Luis; Rocha, Miguel

    2013-05-01

    A large number of optimization problems within the field of Bioinformatics require methods able to handle its inherent complexity (e.g. NP-hard problems) and also demand increased computational efforts. In this context, the use of parallel architectures is a necessity. In this work, we propose ParJECoLi, a Java based library that offers a large set of metaheuristic methods (such as Evolutionary Algorithms) and also addresses the issue of its efficient execution on a wide range of parallel architectures. The proposed approach focuses on the easiness of use, making the adaptation to distinct parallel environments (multicore, cluster, grid) transparent to the user. Indeed, this work shows how the development of the optimization library can proceed independently of its adaptation for several architectures, making use of Aspect-Oriented Programming. The pluggable nature of parallelism related modules allows the user to easily configure its environment, adding parallelism modules to the base source code when needed. The performance of the platform is validated with two case studies within biological model optimization. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  14. Parallel software support for computational structural mechanics

    NASA Technical Reports Server (NTRS)

    Jordan, Harry F.

    1987-01-01

    The application of the parallel programming methodology known as the Force was conducted. Two application issues were addressed. The first involves the efficiency of the implementation and its completeness in terms of satisfying the needs of other researchers implementing parallel algorithms. Support for, and interaction with, other Computational Structural Mechanics (CSM) researchers using the Force was the main issue, but some independent investigation of the Barrier construct, which is extremely important to overall performance, was also undertaken. Another efficiency issue which was addressed was that of relaxing the strong synchronization condition imposed on the self-scheduled parallel DO loop. The Force was extended by the addition of logical conditions to the cases of a parallel case construct and by the inclusion of a self-scheduled version of this construct. The second issue involved applying the Force to the parallelization of finite element codes such as those found in the NICE/SPAR testbed system. One of the more difficult problems encountered is the determination of what information in COMMON blocks is actually used outside of a subroutine and when a subroutine uses a COMMON block merely as scratch storage for internal temporary results.

  15. 3-D parallel program for numerical calculation of gas dynamics problems with heat conductivity on distributed memory computational systems (CS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sofronov, I.D.; Voronin, B.L.; Butnev, O.I.

    1997-12-31

    The aim of the work performed is to develop a 3D parallel program for numerical calculation of gas dynamics problem with heat conductivity on distributed memory computational systems (CS), satisfying the condition of numerical result independence from the number of processors involved. Two basically different approaches to the structure of massive parallel computations have been developed. The first approach uses the 3D data matrix decomposition reconstructed at temporal cycle and is a development of parallelization algorithms for multiprocessor CS with shareable memory. The second approach is based on using a 3D data matrix decomposition not reconstructed during a temporal cycle.more » The program was developed on 8-processor CS MP-3 made in VNIIEF and was adapted to a massive parallel CS Meiko-2 in LLNL by joint efforts of VNIIEF and LLNL staffs. A large number of numerical experiments has been carried out with different number of processors up to 256 and the efficiency of parallelization has been evaluated in dependence on processor number and their parameters.« less

  16. An integrated runtime and compile-time approach for parallelizing structured and block structured applications

    NASA Technical Reports Server (NTRS)

    Agrawal, Gagan; Sussman, Alan; Saltz, Joel

    1993-01-01

    Scientific and engineering applications often involve structured meshes. These meshes may be nested (for multigrid codes) and/or irregularly coupled (called multiblock or irregularly coupled regular mesh problems). A combined runtime and compile-time approach for parallelizing these applications on distributed memory parallel machines in an efficient and machine-independent fashion was described. A runtime library which can be used to port these applications on distributed memory machines was designed and implemented. The library is currently implemented on several different systems. To further ease the task of application programmers, methods were developed for integrating this runtime library with compilers for HPK-like parallel programming languages. How this runtime library was integrated with the Fortran 90D compiler being developed at Syracuse University is discussed. Experimental results to demonstrate the efficacy of our approach are presented. A multiblock Navier-Stokes solver template and a multigrid code were experimented with. Our experimental results show that our primitives have low runtime communication overheads. Further, the compiler parallelized codes perform within 20 percent of the code parallelized by manually inserting calls to the runtime library.

  17. A derivation and scalable implementation of the synchronous parallel kinetic Monte Carlo method for simulating long-time dynamics

    NASA Astrophysics Data System (ADS)

    Byun, Hye Suk; El-Naggar, Mohamed Y.; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya

    2017-10-01

    Kinetic Monte Carlo (KMC) simulations are used to study long-time dynamics of a wide variety of systems. Unfortunately, the conventional KMC algorithm is not scalable to larger systems, since its time scale is inversely proportional to the simulated system size. A promising approach to resolving this issue is the synchronous parallel KMC (SPKMC) algorithm, which makes the time scale size-independent. This paper introduces a formal derivation of the SPKMC algorithm based on local transition-state and time-dependent Hartree approximations, as well as its scalable parallel implementation based on a dual linked-list cell method. The resulting algorithm has achieved a weak-scaling parallel efficiency of 0.935 on 1024 Intel Xeon processors for simulating biological electron transfer dynamics in a 4.2 billion-heme system, as well as decent strong-scaling parallel efficiency. The parallel code has been used to simulate a lattice of cytochrome complexes on a bacterial-membrane nanowire, and it is broadly applicable to other problems such as computational synthesis of new materials.

  18. Making almost commuting matrices commute

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hastings, Matthew B

    Suppose two Hermitian matrices A, B almost commute ({parallel}[A,B]{parallel} {<=} {delta}). Are they close to a commuting pair of Hermitian matrices, A', B', with {parallel}A-A'{parallel},{parallel}B-B'{parallel} {<=} {epsilon}? A theorem of H. Lin shows that this is uniformly true, in that for every {epsilon} > 0 there exists a {delta} > 0, independent of the size N of the matrices, for which almost commuting implies being close to a commuting pair. However, this theorem does not specifiy how {delta} depends on {epsilon}. We give uniform bounds relating {delta} and {epsilon}. The proof is constructive, giving an explicit algorithm to construct A'more » and B'. We provide tighter bounds in the case of block tridiagonal and tridiagnonal matrices. Within the context of quantum measurement, this implies an algorithm to construct a basis in which we can make a projective measurement that approximately measures two approximately commuting operators simultaneously. Finally, we comment briefly on the case of approximately measuring three or more approximately commuting operators using POVMs (positive operator-valued measures) instead of projective measurements.« less

  19. The nature of the mineral component of bone and the mechanism of calcification.

    PubMed

    Glimcher, M J

    1987-01-01

    From the physical chemical standpoint, the formation of a solid phase of Ca-P in bone represents a phase transformation, a process exemplified by the formation of ice from water. Considering the structural complexity and abundance of highly organized macromolecules in the cells and extracellular tissue spaces of mineralized tissues generally and in bone particularly, it is inconceivable that this phase transformation occurs by homogeneous nucleation, i.e., without the active participation of an organic component acting as a nucleator. This is almost surely true in biologic mineralization in general. Electron micrographs and low-angle neutron and X-ray diffraction studies clearly show that calcification of collagen fibrils occurs in an extremely intimate and highly organized fashion: initiation of crystal formation within the collagen fibrils in the hole zone region, with the long axes (c-axis) of the crystals aligned roughly parallel to the long axis of the fibril within which they are located. Crystals are initially formed in hole zone regions within individual fibrils separated by unmineralized regions. Calcification is initiated in spatially distinct nucleation sites. This indicates that such regions within a single, undirectional fibril represents independent sites for heterogeneous nucleation. Clearly, sites where mineralization is initiated in adjacent collagen fibrils are even further separated, emphasizing even more clearly that the process of progressive calcification of the collagen fibrils and therefore of the tissue is characterized principally by the presence of increasing numbers of independent nucleation sites within additional hole zone regions of the collagen fibrils. The increase in the mass of Ca-P apatite accrues principally by multiplication of more crystals, mostly by secondary nucleation from the crystals initially deposited in the hole zone region. Very little additional growth of the crystals occurs with time, the additional increase in mineral mass being principally the result of increase in the number of crystals (multiplication), not size of the crystals (crystal growth). The crystals within the collagen fibers grow in number and possibly in size to extend into the overlap zone of the collagen fibrils ("pores") so that all of the available space within the fibrils, which has possibly expanded in volume from its uncalcified level, is eventually occupied by the mineral crystals. It must be recognized that the calcification of separate tissue components and compartments (collagen, mitochondria, matrix vesicles) must be an independent physical chemical event.(ABSTRACT TRUNCATED AT 400 WORDS)

  20. Maximum flow-based resilience analysis: From component to system

    PubMed Central

    Jin, Chong; Li, Ruiying; Kang, Rui

    2017-01-01

    Resilience, the ability to withstand disruptions and recover quickly, must be considered during system design because any disruption of the system may cause considerable loss, including economic and societal. This work develops analytic maximum flow-based resilience models for series and parallel systems using Zobel’s resilience measure. The two analytic models can be used to evaluate quantitatively and compare the resilience of the systems with the corresponding performance structures. For systems with identical components, the resilience of the parallel system increases with increasing number of components, while the resilience remains constant in the series system. A Monte Carlo-based simulation method is also provided to verify the correctness of our analytic resilience models and to analyze the resilience of networked systems based on that of components. A road network example is used to illustrate the analysis process, and the resilience comparison among networks with different topologies but the same components indicates that a system with redundant performance is usually more resilient than one without redundant performance. However, not all redundant capacities of components can improve the system resilience, the effectiveness of the capacity redundancy depends on where the redundant capacity is located. PMID:28545135

  1. Exploring the Sensitivity of Horn's Parallel Analysis to the Distributional Form of Random Data

    ERIC Educational Resources Information Center

    Dinno, Alexis

    2009-01-01

    Horn's parallel analysis (PA) is the method of consensus in the literature on empirical methods for deciding how many components/factors to retain. Different authors have proposed various implementations of PA. Horn's seminal 1965 article, a 1996 article by Thompson and Daniel, and a 2004 article by Hayton, Allen, and Scarpello all make assertions…

  2. Sustainability Attitudes and Behavioral Motivations of College Students: Testing the Extended Parallel Process Model

    ERIC Educational Resources Information Center

    Perrault, Evan K.; Clark, Scott K.

    2018-01-01

    Purpose: A planet that can no longer sustain life is a frightening thought--and one that is often present in mass media messages. Therefore, this study aims to test the components of a classic fear appeal theory, the extended parallel process model (EPPM) and to determine how well its constructs predict sustainability behavioral intentions. This…

  3. Is schizophrenia the price that Homo sapiens pays for language?

    PubMed

    Crow, T J

    1997-12-19

    The dichotomy between schizophrenia and manic-depressive illness is, as E. Kraepelin suspected, flawed; no unequivocal separation can be achieved. There are no categories of psychosis, but only continua of variation. However, the definition of nuclear symptoms by K. Schneider reveals the fundamental characteristics of the core syndrome--it is independent of the environment and constant in incidence across populations that have been separated for thousands of years. The associated genetic variation must be as old as Homo sapiens and represent a component of diversity that crosses the population as a whole. The fecundity disadvantage that accompanies the syndrome requires a balance in a substantial and universal advantage; this advantage, it is proposed, is the speciation characteristic of language; language and psychosis have a common evolutionary origin. Language, it is suggested, originated in a critical change on the sex chromosomes (the 'speciation event'--the genetic change that defined the species) occurring in East Africa between 100 and 250 thousand years ago that allowed the two hemispheres to develop with a degree of independence. Language can be understood as bi-hemispheric with one component function--a linear output sequence--confined to the dominant hemisphere--and a second--parallel distributed sampling occurring mainly in the non-dominant hemisphere. This mechanism provides an account of the generativity of language. The significance of nuclear symptoms is that these reflect a breakdown of bi-hemispheric coordination of language, perhaps specifically of the process of 'indexicalisation' (the distinction between 'I' and 'you') of self- versus other-generated references. Nuclear symptoms can be described as 'language at the end of its tether'; the phenomena and population characteristics of the nuclear syndrome of schizophrenia thus yield clues to the origin of the species.

  4. Influence of continuous positive airway pressure on outcomes of rehabilitation in stroke patients with obstructive sleep apnea.

    PubMed

    Ryan, Clodagh M; Bayley, Mark; Green, Robin; Murray, Brian J; Bradley, T Douglas

    2011-04-01

    In stroke patients, obstructive sleep apnea (OSA) is associated with poorer functional outcomes than in those without OSA. We hypothesized that treatment of OSA by continuous positive airway pressure (CPAP) in stroke patients would enhance motor, functional, and neurocognitive recovery. This was a randomized, open label, parallel group trial with blind assessment of outcomes performed in stroke patients with OSA in a stroke rehabilitation unit. Patients were assigned to standard rehabilitation alone (control group) or to CPAP (CPAP group). The primary outcomes were the Canadian Neurological scale, the 6-minute walk test distance, sustained attention response test, and the digit or spatial span-backward. Secondary outcomes included Epworth Sleepiness scale, Stanford Sleepiness scale, Functional Independence measure, Chedoke McMaster Stroke assessment, neurocognitive function, and Beck depression inventory. Tests were performed at baseline and 1 month later. Patients assigned to CPAP (n=22) experienced no adverse events. Regarding primary outcomes, compared to the control group (n=22), the CPAP group experienced improvement in stroke-related impairment (Canadian Neurological scale score, P<0.001) but not in 6-minute walk test distance, sustained attention response test, or digit or spatial span-backward. Regarding secondary outcomes, the CPAP group experienced improvements in the Epworth Sleepiness scale (P<0.001), motor component of the Functional Independence measure (P=0.05), Chedoke-McMaster Stroke assessment of upper and lower limb motor recovery test of the leg (P=0.001), and the affective component of depression (P=0.006), but not neurocognitive function. Treatment of OSA by CPAP in stroke patients undergoing rehabilitation improved functional and motor, but not neurocognitive outcomes. URL: http://www.clinicaltrials.gov. Unique identifier: NCT00221065.

  5. Recent Progress on the Parallel Implementation of Moving-Body Overset Grid Schemes

    NASA Technical Reports Server (NTRS)

    Wissink, Andrew; Allen, Edwin (Technical Monitor)

    1998-01-01

    Viscous calculations about geometrically complex bodies in which there is relative motion between component parts is one of the most computationally demanding problems facing CFD researchers today. This presentation documents results from the first two years of a CHSSI-funded effort within the U.S. Army AFDD to develop scalable dynamic overset grid methods for unsteady viscous calculations with moving-body problems. The first pan of the presentation will focus on results from OVERFLOW-D1, a parallelized moving-body overset grid scheme that employs traditional Chimera methodology. The two processes that dominate the cost of such problems are the flow solution on each component and the intergrid connectivity solution. Parallel implementations of the OVERFLOW flow solver and DCF3D connectivity software are coupled with a proposed two-part static-dynamic load balancing scheme and tested on the IBM SP and Cray T3E multi-processors. The second part of the presentation will cover some recent results from OVERFLOW-D2, a new flow solver that employs Cartesian grids with various levels of refinement, facilitating solution adaption. A study of the parallel performance of the scheme on large distributed- memory multiprocessor computer architectures will be reported.

  6. Parallel processing and expert systems

    NASA Technical Reports Server (NTRS)

    Lau, Sonie; Yan, Jerry C.

    1991-01-01

    Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 1990s cannot enjoy an increased level of autonomy without the efficient implementation of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real-time demands are met for larger systems. Speedup via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial laboratories in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems is surveyed. The survey discusses multiprocessors for expert systems, parallel languages for symbolic computations, and mapping expert systems to multiprocessors. Results to date indicate that the parallelism achieved for these systems is small. The main reasons are (1) the body of knowledge applicable in any given situation and the amount of computation executed by each rule firing are small, (2) dividing the problem solving process into relatively independent partitions is difficult, and (3) implementation decisions that enable expert systems to be incrementally refined hamper compile-time optimization. In order to obtain greater speedups, data parallelism and application parallelism must be exploited.

  7. Massively parallel processor computer

    NASA Technical Reports Server (NTRS)

    Fung, L. W. (Inventor)

    1983-01-01

    An apparatus for processing multidimensional data with strong spatial characteristics, such as raw image data, characterized by a large number of parallel data streams in an ordered array is described. It comprises a large number (e.g., 16,384 in a 128 x 128 array) of parallel processing elements operating simultaneously and independently on single bit slices of a corresponding array of incoming data streams under control of a single set of instructions. Each of the processing elements comprises a bidirectional data bus in communication with a register for storing single bit slices together with a random access memory unit and associated circuitry, including a binary counter/shift register device, for performing logical and arithmetical computations on the bit slices, and an I/O unit for interfacing the bidirectional data bus with the data stream source. The massively parallel processor architecture enables very high speed processing of large amounts of ordered parallel data, including spatial translation by shifting or sliding of bits vertically or horizontally to neighboring processing elements.

  8. Towards Solving the Mixing Problem in the Decomposition of Geophysical Time Series by Independent Component Analysis

    NASA Technical Reports Server (NTRS)

    Aires, Filipe; Rossow, William B.; Chedin, Alain; Hansen, James E. (Technical Monitor)

    2000-01-01

    The use of the Principal Component Analysis technique for the analysis of geophysical time series has been questioned in particular for its tendency to extract components that mix several physical phenomena even when the signal is just their linear sum. We demonstrate with a data simulation experiment that the Independent Component Analysis, a recently developed technique, is able to solve this problem. This new technique requires the statistical independence of components, a stronger constraint, that uses higher-order statistics, instead of the classical decorrelation a weaker constraint, that uses only second-order statistics. Furthermore, ICA does not require additional a priori information such as the localization constraint used in Rotational Techniques.

  9. Hyperspectral functional imaging of the human brain

    NASA Astrophysics Data System (ADS)

    Toronov, Vladislav; Schelkanova, Irina

    2013-03-01

    We performed the independent component analysis of the hyperspectral functional near-infrared data acquired on humans during exercise and rest. We found that the hyperspectral functional data acquired on the human brain requires only two physiologically meaningful components to cover more than 50% o the temporal variance in hundreds of wavelengths. The analysis of the spectra of independent components showed that these components could be interpreted as results of changes in the cerebral blood volume and blood flow. Also, we found significant contributions of water and cytochrome c oxydase into changes associated with the independent components. Another remarkable effect of ICA was its good performance in terms of the filtering of the data noise.

  10. Transmissive Nanohole Arrays for Massively-Parallel Optical Biosensing

    PubMed Central

    2015-01-01

    A high-throughput optical biosensing technique is proposed and demonstrated. This hybrid technique combines optical transmission of nanoholes with colorimetric silver staining. The size and spacing of the nanoholes are chosen so that individual nanoholes can be independently resolved in massive parallel using an ordinary transmission optical microscope, and, in place of determining a spectral shift, the brightness of each nanohole is recorded to greatly simplify the readout. Each nanohole then acts as an independent sensor, and the blocking of nanohole optical transmission by enzymatic silver staining defines the specific detection of a biological agent. Nearly 10000 nanoholes can be simultaneously monitored under the field of view of a typical microscope. As an initial proof of concept, biotinylated lysozyme (biotin-HEL) was used as a model analyte, giving a detection limit as low as 0.1 ng/mL. PMID:25530982

  11. Error estimation and adaptive mesh refinement for parallel analysis of shell structures

    NASA Technical Reports Server (NTRS)

    Keating, Scott C.; Felippa, Carlos A.; Park, K. C.

    1994-01-01

    The formulation and application of element-level, element-independent error indicators is investigated. This research culminates in the development of an error indicator formulation which is derived based on the projection of element deformation onto the intrinsic element displacement modes. The qualifier 'element-level' means that no information from adjacent elements is used for error estimation. This property is ideally suited for obtaining error values and driving adaptive mesh refinements on parallel computers where access to neighboring elements residing on different processors may incur significant overhead. In addition such estimators are insensitive to the presence of physical interfaces and junctures. An error indicator qualifies as 'element-independent' when only visible quantities such as element stiffness and nodal displacements are used to quantify error. Error evaluation at the element level and element independence for the error indicator are highly desired properties for computing error in production-level finite element codes. Four element-level error indicators have been constructed. Two of the indicators are based on variational formulation of the element stiffness and are element-dependent. Their derivations are retained for developmental purposes. The second two indicators mimic and exceed the first two in performance but require no special formulation of the element stiffness mesh refinement which we demonstrate for two dimensional plane stress problems. The parallelizing of substructures and adaptive mesh refinement is discussed and the final error indicator using two-dimensional plane-stress and three-dimensional shell problems is demonstrated.

  12. Progressive Disintegration of Brain Networking from Normal Aging to Alzheimer Disease: Analysis of Independent Components of 18F-FDG PET Data.

    PubMed

    Pagani, Marco; Giuliani, Alessandro; Öberg, Johanna; De Carli, Fabrizio; Morbelli, Silvia; Girtler, Nicola; Arnaldi, Dario; Accardo, Jennifer; Bauckneht, Matteo; Bongioanni, Francesca; Chincarini, Andrea; Sambuceti, Gianmario; Jonsson, Cathrine; Nobili, Flavio

    2017-07-01

    Brain connectivity has been assessed in several neurodegenerative disorders investigating the mutual correlations between predetermined regions or nodes. Selective breakdown of brain networks during progression from normal aging to Alzheimer disease dementia (AD) has also been observed. Methods: We implemented independent-component analysis of 18 F-FDG PET data in 5 groups of subjects with cognitive states ranging from normal aging to AD-including mild cognitive impairment (MCI) not converting or converting to AD-to disclose the spatial distribution of the independent components in each cognitive state and their accuracy in discriminating the groups. Results: We could identify spatially distinct independent components in each group, with generation of local circuits increasing proportionally to the severity of the disease. AD-specific independent components first appeared in the late-MCI stage and could discriminate converting MCI and AD from nonconverting MCI with an accuracy of 83.5%. Progressive disintegration of the intrinsic networks from normal aging to MCI to AD was inversely proportional to the conversion time. Conclusion: Independent-component analysis of 18 F-FDG PET data showed a gradual disruption of functional brain connectivity with progression of cognitive decline in AD. This information might be useful as a prognostic aid for individual patients and as a surrogate biomarker in intervention trials. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.

  13. Comparison of multihardware parallel implementations for a phase unwrapping algorithm

    NASA Astrophysics Data System (ADS)

    Hernandez-Lopez, Francisco Javier; Rivera, Mariano; Salazar-Garibay, Adan; Legarda-Sáenz, Ricardo

    2018-04-01

    Phase unwrapping is an important problem in the areas of optical metrology, synthetic aperture radar (SAR) image analysis, and magnetic resonance imaging (MRI) analysis. These images are becoming larger in size and, particularly, the availability and need for processing of SAR and MRI data have increased significantly with the acquisition of remote sensing data and the popularization of magnetic resonators in clinical diagnosis. Therefore, it is important to develop faster and accurate phase unwrapping algorithms. We propose a parallel multigrid algorithm of a phase unwrapping method named accumulation of residual maps, which builds on a serial algorithm that consists of the minimization of a cost function; minimization achieved by means of a serial Gauss-Seidel kind algorithm. Our algorithm also optimizes the original cost function, but unlike the original work, our algorithm is a parallel Jacobi class with alternated minimizations. This strategy is known as the chessboard type, where red pixels can be updated in parallel at same iteration since they are independent. Similarly, black pixels can be updated in parallel in an alternating iteration. We present parallel implementations of our algorithm for different parallel multicore architecture such as CPU-multicore, Xeon Phi coprocessor, and Nvidia graphics processing unit. In all the cases, we obtain a superior performance of our parallel algorithm when compared with the original serial version. In addition, we present a detailed comparative performance of the developed parallel versions.

  14. Dax1 and Nanog act in parallel to stabilize mouse embryonic stem cells and induced pluripotency

    PubMed Central

    Zhang, Junlei; Liu, Gaoke; Ruan, Yan; Wang, Jiali; Zhao, Ke; Wan, Ying; Liu, Bing; Zheng, Hongting; Peng, Tao; Wu, Wei; He, Ping; Hu, Fu-Quan; Jian, Rui

    2014-01-01

    Nanog expression is heterogeneous and dynamic in embryonic stem cells (ESCs). However, the mechanism for stabilizing pluripotency during the transitions between Nanoghigh and Nanoglow states is not well understood. Here we report that Dax1 acts in parallel with Nanog to regulate mouse ESC (mESCs) identity. Dax1 stable knockdown mESCs are predisposed towards differentiation but do not lose pluripotency, whereas Dax1 overexpression supports LIF-independent self-renewal. Although partially complementary, Dax1 and Nanog function independently and cannot replace one another. They are both required for full reprogramming to induce pluripotency. Importantly, Dax1 is indispensable for self-renewal of Nanoglow mESCs. Moreover, we report that Dax1 prevents extra-embryonic endoderm (ExEn) commitment by directly repressing Gata6 transcription. Dax1 may also mediate inhibition of trophectoderm differentiation independent or as a downstream effector of Oct4. These findings establish a basal role of Dax1 in maintaining pluripotency during the state transition of mESCs and somatic cell reprogramming. PMID:25284313

  15. Electroosmotic velocity in an array of parallel soft cylinders in a salt-free medium.

    PubMed

    Ohshima, Hiroyuki

    2004-11-15

    A theory of electroosmosis in an array of parallel soft cylinders (i.e. polyelectrolyte-coated cylinders) in a salt-free medium is presented. It is shown that there is a certain critical value of the particle charge and that if the particle charge is greater than the critical value, then the electroosmotic velocity becomes constant independent of the particle charge due to the counterion condensation effects, as in the case of other electrokinetic phenomena in salt-free media.

  16. Scalable and balanced dynamic hybrid data assimilation

    NASA Astrophysics Data System (ADS)

    Kauranne, Tuomo; Amour, Idrissa; Gunia, Martin; Kallio, Kari; Lepistö, Ahti; Koponen, Sampsa

    2017-04-01

    Scalability of complex weather forecasting suites is dependent on the technical tools available for implementing highly parallel computational kernels, but to an equally large extent also on the dependence patterns between various components of the suite, such as observation processing, data assimilation and the forecast model. Scalability is a particular challenge for 4D variational assimilation methods that necessarily couple the forecast model into the assimilation process and subject this combination to an inherently serial quasi-Newton minimization process. Ensemble based assimilation methods are naturally more parallel, but large models force ensemble sizes to be small and that results in poor assimilation accuracy, somewhat akin to shooting with a shotgun in a million-dimensional space. The Variational Ensemble Kalman Filter (VEnKF) is an ensemble method that can attain the accuracy of 4D variational data assimilation with a small ensemble size. It achieves this by processing a Gaussian approximation of the current error covariance distribution, instead of a set of ensemble members, analogously to the Extended Kalman Filter EKF. Ensemble members are re-sampled every time a new set of observations is processed from a new approximation of that Gaussian distribution which makes VEnKF a dynamic assimilation method. After this a smoothing step is applied that turns VEnKF into a dynamic Variational Ensemble Kalman Smoother VEnKS. In this smoothing step, the same process is iterated with frequent re-sampling of the ensemble but now using past iterations as surrogate observations until the end result is a smooth and balanced model trajectory. In principle, VEnKF could suffer from similar scalability issues as 4D-Var. However, this can be avoided by isolating the forecast model completely from the minimization process by implementing the latter as a wrapper code whose only link to the model is calling for many parallel and totally independent model runs, all of them implemented as parallel model runs themselves. The only bottleneck in the process is the gathering and scattering of initial and final model state snapshots before and after the parallel runs which requires a very efficient and low-latency communication network. However, the volume of data communicated is small and the intervening minimization steps are only 3D-Var, which means their computational load is negligible compared with the fully parallel model runs. We present example results of scalable VEnKF with the 4D lake and shallow sea model COHERENS, assimilating simultaneously continuous in situ measurements in a single point and infrequent satellite images that cover a whole lake, with the fully scalable VEnKF.

  17. MARE2DEM: a 2-D inversion code for controlled-source electromagnetic and magnetotelluric data

    NASA Astrophysics Data System (ADS)

    Key, Kerry

    2016-10-01

    This work presents MARE2DEM, a freely available code for 2-D anisotropic inversion of magnetotelluric (MT) data and frequency-domain controlled-source electromagnetic (CSEM) data from onshore and offshore surveys. MARE2DEM parametrizes the inverse model using a grid of arbitrarily shaped polygons, where unstructured triangular or quadrilateral grids are typically used due to their ease of construction. Unstructured grids provide significantly more geometric flexibility and parameter efficiency than the structured rectangular grids commonly used by most other inversion codes. Transmitter and receiver components located on topographic slopes can be tilted parallel to the boundary so that the simulated electromagnetic fields accurately reproduce the real survey geometry. The forward solution is implemented with a goal-oriented adaptive finite-element method that automatically generates and refines unstructured triangular element grids that conform to the inversion parameter grid, ensuring accurate responses as the model conductivity changes. This dual-grid approach is significantly more efficient than the conventional use of a single grid for both the forward and inverse meshes since the more detailed finite-element meshes required for accurate responses do not increase the memory requirements of the inverse problem. Forward solutions are computed in parallel with a highly efficient scaling by partitioning the data into smaller independent modeling tasks consisting of subsets of the input frequencies, transmitters and receivers. Non-linear inversion is carried out with a new Occam inversion approach that requires fewer forward calls. Dense matrix operations are optimized for memory and parallel scalability using the ScaLAPACK parallel library. Free parameters can be bounded using a new non-linear transformation that leaves the transformed parameters nearly the same as the original parameters within the bounds, thereby reducing non-linear smoothing effects. Data balancing normalization weights for the joint inversion of two or more data sets encourages the inversion to fit each data type equally well. A synthetic joint inversion of marine CSEM and MT data illustrates the algorithm's performance and parallel scaling on up to 480 processing cores. CSEM inversion of data from the Middle America Trench offshore Nicaragua demonstrates a real world application. The source code and MATLAB interface tools are freely available at http://mare2dem.ucsd.edu.

  18. The Impact of IBM Cell Technology on the Programming Paradigm in the Context of Computer Systems for Climate and Weather Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Shujia; Duffy, Daniel; Clune, Thomas

    The call for ever-increasing model resolutions and physical processes in climate and weather models demands a continual increase in computing power. The IBM Cell processor's order-of-magnitude peak performance increase over conventional processors makes it very attractive to fulfill this requirement. However, the Cell's characteristics, 256KB local memory per SPE and the new low-level communication mechanism, make it very challenging to port an application. As a trial, we selected the solar radiation component of the NASA GEOS-5 climate model, which: (1) is representative of column physics components (half the total computational time), (2) has an extremely high computational intensity: the ratiomore » of computational load to main memory transfers, and (3) exhibits embarrassingly parallel column computations. In this paper, we converted the baseline code (single-precision Fortran) to C and ported it to an IBM BladeCenter QS20. For performance, we manually SIMDize four independent columns and include several unrolling optimizations. Our results show that when compared with the baseline implementation running on one core of Intel's Xeon Woodcrest, Dempsey, and Itanium2, the Cell is approximately 8.8x, 11.6x, and 12.8x faster, respectively. Our preliminary analysis shows that the Cell can also accelerate the dynamics component (~;;25percent total computational time). We believe these dramatic performance improvements make the Cell processor very competitive as an accelerator.« less

  19. Ranking and averaging independent component analysis by reproducibility (RAICAR).

    PubMed

    Yang, Zhi; LaConte, Stephen; Weng, Xuchu; Hu, Xiaoping

    2008-06-01

    Independent component analysis (ICA) is a data-driven approach that has exhibited great utility for functional magnetic resonance imaging (fMRI). Standard ICA implementations, however, do not provide the number and relative importance of the resulting components. In addition, ICA algorithms utilizing gradient-based optimization give decompositions that are dependent on initialization values, which can lead to dramatically different results. In this work, a new method, RAICAR (Ranking and Averaging Independent Component Analysis by Reproducibility), is introduced to address these issues for spatial ICA applied to fMRI. RAICAR utilizes repeated ICA realizations and relies on the reproducibility between them to rank and select components. Different realizations are aligned based on correlations, leading to aligned components. Each component is ranked and thresholded based on between-realization correlations. Furthermore, different realizations of each aligned component are selectively averaged to generate the final estimate of the given component. Reliability and accuracy of this method are demonstrated with both simulated and experimental fMRI data. Copyright 2007 Wiley-Liss, Inc.

  20. A SHATTERPROOF-like gene controls ripening in non-climacteric strawberries, and auxin and abscisic acid antagonistically affect its expression.

    PubMed

    Daminato, Margherita; Guzzo, Flavia; Casadoro, Giorgio

    2013-09-01

    Strawberries (Fragaria×ananassa) are false fruits the ripening of which follows the non-climacteric pathway. The role played by a C-type MADS-box gene [SHATTERPROOF-like (FaSHP)] in the ripening of strawberries has been studied by transiently modifying gene expression through either over-expression or RNA-interference-mediated down-regulation. The altered expression of the FaSHP gene caused a change in the time taken by the over-expressing and the down- regulated fruits to attain the pink stage, which was slightly shorter and much longer, respectively, compared to controls. In parallel with the modified ripening times, the metabolome components and the expression of ripening-related genes also appeared different in the transiently modified fruits. Differences in the response time of the analysed genes suggest that FaSHP can control the expression of ripening genes either directly or indirectly through other transcription factor-encoding genes. Because fleshy strawberries are false fruits these results indicate that C-type MADS-box genes like SHATTERPROOF may act as modulators of ripening in fleshy fruit-like structures independently of their anatomical origin. Treatment of strawberries with either auxin or abscisic acid had antagonistic impacts on both the expression of FaSHP and the expression of ripening-related genes and metabolome components.

  1. Electro-optic voltage sensor head

    DOEpatents

    Crawford, T.M.; Davidson, J.R.; Woods, G.K.

    1999-08-17

    The invention is an electro-optic voltage sensor head designed for integration with existing types of high voltage transmission and distribution apparatus. The sensor head contains a transducer, which comprises a transducing material in which the Pockels electro-optic effect is observed. In the practice of the invention at least one beam of electromagnetic radiation is routed into the transducing material of the transducer in the sensor head. The beam undergoes an electro-optic effect in the sensor head when the transducing material is subjected to an E-field. The electro-optic effect is observed as a differential phase a shift, also called differential phase modulation, of the beam components in orthogonal planes of the electromagnetic radiation. In the preferred embodiment the beam is routed through the transducer along an initial axis and then reflected by a retro-reflector back substantially parallel to the initial axis, making a double pass through the transducer for increased measurement sensitivity. The preferred embodiment of the sensor head also includes a polarization state rotator and at least one beam splitter for orienting the beam along major and minor axes and for splitting the beam components into two signals which are independent converse amplitude-modulated signals carrying E-field magnitude and hence voltage information from the sensor head by way of optic fibers. 6 figs.

  2. Electro-optic voltage sensor head

    DOEpatents

    Crawford, Thomas M.; Davidson, James R.; Woods, Gregory K.

    1999-01-01

    The invention is an electro-optic voltage sensor head designed for integration with existing types of high voltage transmission and distribution apparatus. The sensor head contains a transducer, which comprises a transducing material in which the Pockels electro-optic effect is observed. In the practice of the invention at least one beam of electromagnetic radiation is routed into the transducing material of the transducer in the sensor head. The beam undergoes an electro-optic effect in the sensor head when the transducing material is subjected to an E-field. The electro-optic effect is observed as a differential phase a shift, also called differential phase modulation, of the beam components in orthogonal planes of the electromagnetic radiation. In the preferred embodiment the beam is routed through the transducer along an initial axis and then reflected by a retro-reflector back substantially parallel to the initial axis, making a double pass through the transducer for increased measurement sensitivity. The preferred embodiment of the sensor head also includes a polarization state rotator and at least one beam splitter for orienting the beam along major and minor axes and for splitting the beam components into two signals which are independent converse amplitude-modulated signals carrying E-field magnitude and hence voltage information from the sensor head by way of optic fibers.

  3. Low-Frequency Waves in Cold Three-Component Plasmas

    NASA Astrophysics Data System (ADS)

    Fu, Qiang; Tang, Ying; Zhao, Jinsong; Lu, Jianyong

    2016-09-01

    The dispersion relation and electromagnetic polarization of the plasma waves are comprehensively studied in cold electron, proton, and heavy charged particle plasmas. Three modes are classified as the fast, intermediate, and slow mode waves according to different phase velocities. When plasmas contain positively-charged particles, the fast and intermediate modes can interact at the small propagating angles, whereas the two modes are separate at the large propagating angles. The near-parallel intermediate and slow waves experience the linear polarization, circular polarization, and linear polarization again, with the increasing wave number. The wave number regime corresponding to the above circular polarization shrinks as the propagating angle increases. Moreover, the fast and intermediate modes cause the reverse change of the electromagnetic polarization at the special wave number. While the heavy particles carry the negative charges, the dispersion relations of the fast and intermediate modes are always separate, being independent of the propagating angles. Furthermore, this study gives new expressions of the three resonance frequencies corresponding to the highly-oblique propagation waves in the general three-component plasmas, and shows the dependence of the resonance frequencies on the propagating angle, the concentration of the heavy particle, and the mass ratio among different kinds of particles. supported by National Natural Science Foundation of China (Nos. 11303099, 41531071 and 41574158), and the Youth Innovation Promotion Association CAS

  4. Anticrossproducts and cross divisions.

    PubMed

    de Leva, Paolo

    2008-01-01

    This paper defines, in the context of conventional vector algebra, the concept of anticrossproduct and a family of simple operations called cross or vector divisions. It is impossible to solve for a or b the equation axb=c, where a and b are three-dimensional space vectors, and axb is their cross product. However, the problem becomes solvable if some "knowledge about the unknown" (a or b) is available, consisting of one of its components, or the angle it forms with the other operand of the cross product. Independently of the selected reference frame orientation, the known component of a may be parallel to b, or vice versa. The cross divisions provide a compact and insightful symbolic representation of a family of algorithms specifically designed to solve problems of such kind. A generalized algorithm was also defined, incorporating the rules for selecting the appropriate kind of cross division, based on the type of input data. Four examples of practical application were provided, including the computation of the point of application of a force and the angular velocity of a rigid body. The definition and geometrical interpretation of the cross divisions stemmed from the concept of anticrossproduct. The "anticrossproducts of axb" were defined as the infinitely many vectors x(i) such that x(i)xb=axb.

  5. A symmetrical subtraction combined with interpolated values for eliminating scattering from fluorescence EEM data.

    PubMed

    Xu, Jing; Liu, Xiaofei; Wang, Yutian

    2016-08-05

    Parallel factor analysis is a widely used method to extract qualitative and quantitative information of the analyte of interest from fluorescence emission-excitation matrix containing unknown components. Big amplitude of scattering will influence the results of parallel factor analysis. Many methods of eliminating scattering have been proposed. Each of these methods has its advantages and disadvantages. The combination of symmetrical subtraction and interpolated values has been discussed. The combination refers to both the combination of results and the combination of methods. Nine methods were used for comparison. The results show the combination of results can make a better concentration prediction for all the components. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. Multitasking for flows about multiple body configurations using the chimera grid scheme

    NASA Technical Reports Server (NTRS)

    Dougherty, F. C.; Morgan, R. L.

    1987-01-01

    The multitasking of a finite-difference scheme using multiple overset meshes is described. In this chimera, or multiple overset mesh approach, a multiple body configuration is mapped using a major grid about the main component of the configuration, with minor overset meshes used to map each additional component. This type of code is well suited to multitasking. Both steady and unsteady two dimensional computations are run on parallel processors on a CRAY-X/MP 48, usually with one mesh per processor. Flow field results are compared with single processor results to demonstrate the feasibility of running multiple mesh codes on parallel processors and to show the increase in efficiency.

  7. Global interrupt and barrier networks

    DOEpatents

    Blumrich, Matthias A.; Chen, Dong; Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E; Heidelberger, Philip; Kopcsay, Gerard V.; Steinmacher-Burow, Burkhard D.; Takken, Todd E.

    2008-10-28

    A system and method for generating global asynchronous signals in a computing structure. Particularly, a global interrupt and barrier network is implemented that implements logic for generating global interrupt and barrier signals for controlling global asynchronous operations performed by processing elements at selected processing nodes of a computing structure in accordance with a processing algorithm; and includes the physical interconnecting of the processing nodes for communicating the global interrupt and barrier signals to the elements via low-latency paths. The global asynchronous signals respectively initiate interrupt and barrier operations at the processing nodes at times selected for optimizing performance of the processing algorithms. In one embodiment, the global interrupt and barrier network is implemented in a scalable, massively parallel supercomputing device structure comprising a plurality of processing nodes interconnected by multiple independent networks, with each node including one or more processing elements for performing computation or communication activity as required when performing parallel algorithm operations. One multiple independent network includes a global tree network for enabling high-speed global tree communications among global tree network nodes or sub-trees thereof. The global interrupt and barrier network may operate in parallel with the global tree network for providing global asynchronous sideband signals.

  8. Alignment between Protostellar Outflows and Filamentary Structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stephens, Ian W.; Dunham, Michael M.; Myers, Philip C.

    2017-09-01

    We present new Submillimeter Array (SMA) observations of CO(2–1) outflows toward young, embedded protostars in the Perseus molecular cloud as part of the Mass Assembly of Stellar Systems and their Evolution with the SMA (MASSES) survey. For 57 Perseus protostars, we characterize the orientation of the outflow angles and compare them with the orientation of the local filaments as derived from Herschel observations. We find that the relative angles between outflows and filaments are inconsistent with purely parallel or purely perpendicular distributions. Instead, the observed distribution of outflow-filament angles are more consistent with either randomly aligned angles or a mixmore » of projected parallel and perpendicular angles. A mix of parallel and perpendicular angles requires perpendicular alignment to be more common by a factor of ∼3. Our results show that the observed distributions probably hold regardless of the protostar’s multiplicity, age, or the host core’s opacity. These observations indicate that the angular momentum axis of a protostar may be independent of the large-scale structure. We discuss the significance of independent protostellar rotation axes in the general picture of filament-based star formation.« less

  9. Parallel approach to incorporating face image information into dialogue processing

    NASA Astrophysics Data System (ADS)

    Ren, Fuji

    2000-10-01

    There are many kinds of so-called irregular expressions in natural dialogues. Even if the content of a conversation is the same in words, different meanings can be interpreted by a person's feeling or face expression. To have a good understanding of dialogues, it is required in a flexible dialogue processing system to infer the speaker's view properly. However, it is difficult to obtain the meaning of the speaker's sentences in various scenes using traditional methods. In this paper, a new approach for dialogue processing that incorporates information from the speaker's face is presented. We first divide conversation statements into several simple tasks. Second, we process each simple task using an independent processor. Third, we employ some speaker's face information to estimate the view of the speakers to solve ambiguities in dialogues. The approach presented in this paper can work efficiently, because independent processors run in parallel, writing partial results to a shared memory, incorporating partial results at appropriate points, and complementing each other. A parallel algorithm and a method for employing the face information in a dialogue machine translation will be discussed, and some results will be included in this paper.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonachea, Dan; Hargrove, P.

    GASNet is a language-independent, low-level networking layer that provides network-independent, high-performance communication primitives tailored for implementing parallel global address space SPMD languages and libraries such as UPC, UPC++, Co-Array Fortran, Legion, Chapel, and many others. The interface is primarily intended as a compilation target and for use by runtime library writers (as opposed to end users), and the primary goals are high performance, interface portability, and expressiveness. GASNet stands for "Global-Address Space Networking".

  11. Life and dynamic capacity modeling for aircraft transmissions

    NASA Technical Reports Server (NTRS)

    Savage, Michael

    1991-01-01

    A computer program to simulate the dynamic capacity and life of parallel shaft aircraft transmissions is presented. Five basic configurations can be analyzed: single mesh, compound, parallel, reverted, and single plane reductions. In execution, the program prompts the user for the data file prefix name, takes input from a ASCII file, and writes its output to a second ASCII file with the same prefix name. The input data file includes the transmission configuration, the input shaft torque and speed, and descriptions of the transmission geometry and the component gears and bearings. The program output file describes the transmission, its components, their capabilities, locations, and loads. It also lists the dynamic capability, ninety percent reliability, and mean life of each component and the transmission as a system. Here, the program, its input and output files, and the theory behind the operation of the program are described.

  12. RHCV Telescope System Operations Manual

    DTIC Science & Technology

    2018-01-05

    hardware and software components. Several of the components are closely coupled and rely on one-another, while others are largely independent. This...of hardware and software components. Several of the components are closely coupled and rely on one-another, while others are largely independent. This...attendant training The use cases are briefly described in separate sections, and step-by-step instructions are presented. Each section begins on a new

  13. Automatic Adaptation of Tunable Distributed Applications

    DTIC Science & Technology

    2001-01-01

    size, weight, and battery life, with a single CPU, less memory, smaller hard disk, and lower bandwidth network connectivity. The power of PDAs is...wireless, and bluetooth [32] facilities; thus achieving different rates of data transmission. 1 With the trend of “write once, run everywhere...applications, a single component can execute on multiple processors (or machines) in parallel. These parallel applications, written in a specialized language

  14. A multi-component parallel-plate flow chamber system for studying the effect of exercise-induced wall shear stress on endothelial cells.

    PubMed

    Wang, Yan-Xia; Xiang, Cheng; Liu, Bo; Zhu, Yong; Luan, Yong; Liu, Shu-Tian; Qin, Kai-Rong

    2016-12-28

    In vivo studies have demonstrated that reasonable exercise training can improve endothelial function. To confirm the key role of wall shear stress induced by exercise on endothelial cells, and to understand how wall shear stress affects the structure and the function of endothelial cells, it is crucial to design and fabricate an in vitro multi-component parallel-plate flow chamber system which can closely replicate exercise-induced wall shear stress waveforms in artery. The in vivo wall shear stress waveforms from the common carotid artery of a healthy volunteer in resting and immediately after 30 min acute aerobic cycling exercise were first calculated by measuring the inner diameter and the center-line blood flow velocity with a color Doppler ultrasound. According to the above in vivo wall shear stress waveforms, we designed and fabricated a parallel-plate flow chamber system with appropriate components based on a lumped parameter hemodynamics model. To validate the feasibility of this system, human umbilical vein endothelial cells (HUVECs) line were cultured within the parallel-plate flow chamber under abovementioned two types of wall shear stress waveforms and the intracellular actin microfilaments and nitric oxide (NO) production level were evaluated using fluorescence microscope. Our results show that the trends of resting and exercise-induced wall shear stress waveforms, especially the maximal, minimal and mean wall shear stress as well as oscillatory shear index, generated by the parallel-plate flow chamber system are similar to those acquired from the common carotid artery. In addition, the cellular experiments demonstrate that the actin microfilaments and the production of NO within cells exposed to the two different wall shear stress waveforms exhibit different dynamic behaviors; there are larger numbers of actin microfilaments and higher level NO in cells exposed in exercise-induced wall shear stress condition than resting wall shear stress condition. The parallel-plate flow chamber system can well reproduce wall shear stress waveforms acquired from the common carotid artery in resting and immediately after exercise states. Furthermore, it can be used for studying the endothelial cells responses under resting and exercise-induced wall shear stress environments in vitro.

  15. The role of Bh4 in parallel evolution of hull colour in domesticated and weedy rice.

    PubMed

    Vigueira, C C; Li, W; Olsen, K M

    2013-08-01

    The two independent domestication events in the genus Oryza that led to African and Asian rice offer an extremely useful system for studying the genetic basis of parallel evolution. This system is also characterized by parallel de-domestication events, with two genetically distinct weedy rice biotypes in the US derived from the Asian domesticate. One important trait that has been altered by rice domestication and de-domestication is hull colour. The wild progenitors of the two cultivated rice species have predominantly black-coloured hulls, as does one of the two U.S. weed biotypes; both cultivated species and one of the US weedy biotypes are characterized by straw-coloured hulls. Using Black hull 4 (Bh4) as a hull colour candidate gene, we examined DNA sequence variation at this locus to study the parallel evolution of hull colour variation in the domesticated and weedy rice system. We find that independent Bh4-coding mutations have arisen in African and Asian rice that are correlated with the straw hull phenotype, suggesting that the same gene is responsible for parallel trait evolution. For the U.S. weeds, Bh4 haplotype sequences support current hypotheses on the phylogenetic relationship between the two biotypes and domesticated Asian rice; straw hull weeds are most similar to indica crops, and black hull weeds are most similar to aus crops. Tests for selection indicate that Asian crops and straw hull weeds deviate from neutrality at this gene, suggesting possible selection on Bh4 during both rice domestication and de-domestication. © 2013 The Authors. Journal of Evolutionary Biology © 2013 European Society For Evolutionary Biology.

  16. Electric currents and voltage drops along auroral field lines

    NASA Technical Reports Server (NTRS)

    Stern, D. P.

    1983-01-01

    An assessment is presented of the current state of knowledge concerning Birkeland currents and the parallel electric field, with discussions focusing on the Birkeland primary region 1 sheets, the region 2 sheets which parallel them and appear to close in the partial ring current, the cusp currents (which may be correlated with the interplanetary B(y) component), and the Harang filament. The energy required by the parallel electric field and the associated particle acceleration processes appears to be derived from the Birkeland currents, for which evidence is adduced from particles, inverted V spectra, rising ion beams and expanded loss cones. Conics may on the other hand signify acceleration by electrostatic ion cyclotron waves associated with beams accelerated by the parallel electric field.

  17. Reliability of a Parallel Pipe Network

    NASA Technical Reports Server (NTRS)

    Herrera, Edgar; Chamis, Christopher (Technical Monitor)

    2001-01-01

    The goal of this NASA-funded research is to advance research and education objectives in theoretical and computational probabilistic structural analysis, reliability, and life prediction methods for improved aerospace and aircraft propulsion system components. Reliability methods are used to quantify response uncertainties due to inherent uncertainties in design variables. In this report, several reliability methods are applied to a parallel pipe network. The observed responses are the head delivered by a main pump and the head values of two parallel lines at certain flow rates. The probability that the flow rates in the lines will be less than their specified minimums will be discussed.

  18. Parallel/Vector Integration Methods for Dynamical Astronomy

    NASA Astrophysics Data System (ADS)

    Fukushima, T.

    Progress of parallel/vector computers has driven us to develop suitable numerical integrators utilizing their computational power to the full extent while being independent on the size of system to be integrated. Unfortunately, the parallel version of Runge-Kutta type integrators are known to be not so efficient. Recently we developed a parallel version of the extrapolation method (Ito and Fukushima 1997), which allows variable timesteps and still gives an acceleration factor of 3-4 for general problems. While the vector-mode usage of Picard-Chebyshev method (Fukushima 1997a, 1997b) will lead the acceleration factor of order of 1000 for smooth problems such as planetary/satellites orbit integration. The success of multiple-correction PECE mode of time-symmetric implicit Hermitian integrator (Kokubo 1998) seems to enlighten Milankar's so-called "pipelined predictor corrector method", which is expected to lead an acceleration factor of 3-4. We will review these directions and discuss future prospects.

  19. Massively parallel multicanonical simulations

    NASA Astrophysics Data System (ADS)

    Gross, Jonathan; Zierenberg, Johannes; Weigel, Martin; Janke, Wolfhard

    2018-03-01

    Generalized-ensemble Monte Carlo simulations such as the multicanonical method and similar techniques are among the most efficient approaches for simulations of systems undergoing discontinuous phase transitions or with rugged free-energy landscapes. As Markov chain methods, they are inherently serial computationally. It was demonstrated recently, however, that a combination of independent simulations that communicate weight updates at variable intervals allows for the efficient utilization of parallel computational resources for multicanonical simulations. Implementing this approach for the many-thread architecture provided by current generations of graphics processing units (GPUs), we show how it can be efficiently employed with of the order of 104 parallel walkers and beyond, thus constituting a versatile tool for Monte Carlo simulations in the era of massively parallel computing. We provide the fully documented source code for the approach applied to the paradigmatic example of the two-dimensional Ising model as starting point and reference for practitioners in the field.

  20. Developing Information Power Grid Based Algorithms and Software

    NASA Technical Reports Server (NTRS)

    Dongarra, Jack

    1998-01-01

    This exploratory study initiated our effort to understand performance modeling on parallel systems. The basic goal of performance modeling is to understand and predict the performance of a computer program or set of programs on a computer system. Performance modeling has numerous applications, including evaluation of algorithms, optimization of code implementations, parallel library development, comparison of system architectures, parallel system design, and procurement of new systems. Our work lays the basis for the construction of parallel libraries that allow for the reconstruction of application codes on several distinct architectures so as to assure performance portability. Following our strategy, once the requirements of applications are well understood, one can then construct a library in a layered fashion. The top level of this library will consist of architecture-independent geometric, numerical, and symbolic algorithms that are needed by the sample of applications. These routines should be written in a language that is portable across the targeted architectures.

  1. Parallel volume ray-casting for unstructured-grid data on distributed-memory architectures

    NASA Technical Reports Server (NTRS)

    Ma, Kwan-Liu

    1995-01-01

    As computing technology continues to advance, computational modeling of scientific and engineering problems produces data of increasing complexity: large in size and unstructured in shape. Volume visualization of such data is a challenging problem. This paper proposes a distributed parallel solution that makes ray-casting volume rendering of unstructured-grid data practical. Both the data and the rendering process are distributed among processors. At each processor, ray-casting of local data is performed independent of the other processors. The global image composing processes, which require inter-processor communication, are overlapped with the local ray-casting processes to achieve maximum parallel efficiency. This algorithm differs from previous ones in four ways: it is completely distributed, less view-dependent, reasonably scalable, and flexible. Without using dynamic load balancing, test results on the Intel Paragon using from two to 128 processors show, on average, about 60% parallel efficiency.

  2. Percolator: Scalable Pattern Discovery in Dynamic Graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choudhury, Sutanay; Purohit, Sumit; Lin, Peng

    We demonstrate Percolator, a distributed system for graph pattern discovery in dynamic graphs. In contrast to conventional mining systems, Percolator advocates efficient pattern mining schemes that (1) support pattern detection with keywords; (2) integrate incremental and parallel pattern mining; and (3) support analytical queries such as trend analysis. The core idea of Percolator is to dynamically decide and verify a small fraction of patterns and their in- stances that must be inspected in response to buffered updates in dynamic graphs, with a total mining cost independent of graph size. We demonstrate a) the feasibility of incremental pattern mining by walkingmore » through each component of Percolator, b) the efficiency and scalability of Percolator over the sheer size of real-world dynamic graphs, and c) how the user-friendly GUI of Percolator inter- acts with users to support keyword-based queries that detect, browse and inspect trending patterns. We also demonstrate two user cases of Percolator, in social media trend analysis and academic collaboration analysis, respectively.« less

  3. Perpendicular Diffusion Coefficient of Comic Rays: The Presence of Weak Adiabatic Focusing

    NASA Astrophysics Data System (ADS)

    Wang, J. F.; Qin, G.; Ma, Q. M.; Song, T.; Yuan, S. B.

    2017-08-01

    The influence of adiabatic focusing on particle diffusion is an important topic in astrophysics and plasma physics. In the past, several authors have explored the influence of along-field adiabatic focusing on the parallel diffusion of charged energetic particles. In this paper, using the unified nonlinear transport theory developed by Shalchi and the method of He and Schlickeiser, we derive a new nonlinear perpendicular diffusion coefficient for a non-uniform background magnetic field. This formula demonstrates that the particle perpendicular diffusion coefficient is modified by along-field adiabatic focusing. For isotropic pitch-angle scattering and the weak adiabatic focusing limit, the derived perpendicular diffusion coefficient is independent of the sign of adiabatic focusing characteristic length. For the two-component model, we simplify the perpendicular diffusion coefficient up to the second order of the power series of the adiabatic focusing characteristic quantity. We find that the first-order modifying factor is equal to zero and that the sign of the second order is determined by the energy of the particles.

  4. A New Era of Symmetries in the Hadronic Interaction

    NASA Astrophysics Data System (ADS)

    Crawford, Christopher

    2016-09-01

    The search for a weak component of the nuclear force began in 1957, shortly after the proposal of parity violation. While it has been observed in compound nuclei with large nuclear enhancements, a systematic characterization of the hadronic weak interaction is still forthcoming almost sixty years later. New experimental facilities and technology have rejuvenated efforts to map out this ``complexity frontier'' within the Standard Model, and we will soon have precision data from multiple few-body experiments. In parallel, modern effective field theories have provided a systematic model independent description of the hadronic interaction with estimates of higher-order effects. The characterization of discrete symmetries in hadronic systems has recently become important for the design and analysis of other precision symmetries measurements, for example, electron PV scattering and time-reversal violation experiments. These new developments in experiment, theory, and application have ushered in a new era in hadronic parity violation. We acknowledge support from DOE-NP under Contract DE-SC0008107.

  5. A discrete component low-noise preamplifier readout for a linear (1×16) SiC photodiode array

    NASA Astrophysics Data System (ADS)

    Kahle, Duncan; Aslam, Shahid; Herrero, Federico A.; Waczynski, Augustyn

    2016-09-01

    A compact, low-noise and inexpensive preamplifier circuit has been designed and fabricated to optimally readout a common cathode (1×16) channel 4H-SiC Schottky photodiode array for use in ultraviolet experiments. The readout uses an operational amplifier with 10 pF capacitor in the feedback loop in parallel with a low leakage switch for each of the channels. This circuit configuration allows for reiterative sample, integrate and reset. A sampling technique is given to remove Johnson noise, enabling a femtoampere level readout noise performance. Commercial-off-the-shelf acquisition electronics are used to digitize the preamplifier analog signals. The data logging acquisition electronics has a different integration circuit, which allows the bandwidth and gain to be independently adjusted. Using this readout, photoresponse measurements across the array between spectral wavelengths 200 nm and 370 nm are made to establish the array pixels external quantum efficiency, current responsivity and noise equivalent power.

  6. Casimir Effect in de Sitter Spacetime

    NASA Astrophysics Data System (ADS)

    Saharian, A. A.

    2011-06-01

    The vacuum expectation value of the energy-momentum tensor and the Casimir forces are investigated for a massive scalar field with an arbitrary curvature coupling parameter in the geometry of two parallel plates, on the background of de Sitter spacetime. The field is prepared in the Bunch-Davies vacuum state and is constrained to satisfy Robin boundary conditions on the plates. The vacuum energy-momentum tensor is non-diagonal, with the off-diagonal component corresponding to the energy flux along the direction normal to the plates. It is shown that the curvature of the background spacetime decisively influences the behavior of the Casimir forces at separations larger than the curvature radius of de Sitter spacetime. In dependence of the curvature coupling parameter and the mass of the field, two different regimes are realized, which exhibit monotonic or oscillatory behavior of the forces. The decay of the Casimir force at large plate separation is shown to be power-law, with independence of the value of the field mass.

  7. Online Meta-data Collection and Monitoring Framework for the STAR Experiment at RHIC

    NASA Astrophysics Data System (ADS)

    Arkhipkin, D.; Lauret, J.; Betts, W.; Van Buren, G.

    2012-12-01

    The STAR Experiment further exploits scalable message-oriented model principles to achieve a high level of control over online data streams. In this paper we present an AMQP-powered Message Interface and Reliable Architecture framework (MIRA), which allows STAR to orchestrate the activities of Meta-data Collection, Monitoring, Online QA and several Run-Time and Data Acquisition system components in a very efficient manner. The very nature of the reliable message bus suggests parallel usage of multiple independent storage mechanisms for our meta-data. We describe our experience with a robust data-taking setup employing MySQL- and HyperTable-based archivers for meta-data processing. In addition, MIRA has an AJAX-enabled web GUI, which allows real-time visualisation of online process flow and detector subsystem states, and doubles as a sophisticated alarm system when combined with complex event processing engines like Esper, Borealis or Cayuga. The performance data and our planned path forward are based on our experience during the 2011-2012 running of STAR.

  8. Development of a Portable 3CCD Camera System for Multispectral Imaging of Biological Samples

    PubMed Central

    Lee, Hoyoung; Park, Soo Hyun; Noh, Sang Ha; Lim, Jongguk; Kim, Moon S.

    2014-01-01

    Recent studies have suggested the need for imaging devices capable of multispectral imaging beyond the visible region, to allow for quality and safety evaluations of agricultural commodities. Conventional multispectral imaging devices lack flexibility in spectral waveband selectivity for such applications. In this paper, a recently developed portable 3CCD camera with significant improvements over existing imaging devices is presented. A beam-splitter prism assembly for 3CCD was designed to accommodate three interference filters that can be easily changed for application-specific multispectral waveband selection in the 400 to 1000 nm region. We also designed and integrated electronic components on printed circuit boards with firmware programming, enabling parallel processing, synchronization, and independent control of the three CCD sensors, to ensure the transfer of data without significant delay or data loss due to buffering. The system can stream 30 frames (3-waveband images in each frame) per second. The potential utility of the 3CCD camera system was demonstrated in the laboratory for detecting defect spots on apples. PMID:25350510

  9. A Discrete Component Low-Noise Preamplifier Readout for a Linear (1x16) SiC Photodiode Array

    NASA Technical Reports Server (NTRS)

    Kahle, Duncan; Aslam, Shahid; Herrero, Frederico A.; Waczynski, Augustyn

    2016-01-01

    A compact, low-noise and inexpensive preamplifier circuit has been designed and fabricated to optimally readout a common cathode (1x16) channel 4H-SiC Schottky photodiode array for use in ultraviolet experiments. The readout uses an operational amplifier with 10 pF capacitor in the feedback loop in parallel with a low leakage switch for each of the channels. This circuit configuration allows for reiterative sample, integrate and reset. A sampling technique is given to remove Johnson noise, enabling a femtoampere level readout noise performance. Commercial-off-the-shelf acquisition electronics are used to digitize the preamplifier analogue signals. The data logging acquisition electronics has a different integration circuit, which allows the bandwidth and gain to be independently adjusted. Using this readout, photoresponse measurements across the array between spectral wavelengths 200 nm and 370 nm are made to establish the array pixels external quantum efficiency, current responsivity and noise equivalent power.

  10. An open-source, extensible system for laboratory timing and control

    NASA Astrophysics Data System (ADS)

    Gaskell, Peter E.; Thorn, Jeremy J.; Alba, Sequoia; Steck, Daniel A.

    2009-11-01

    We describe a simple system for timing and control, which provides control of analog, digital, and radio-frequency signals. Our system differs from most common laboratory setups in that it is open source, built from off-the-shelf components, synchronized to a common and accurate clock, and connected over an Ethernet network. A simple bus architecture facilitates creating new and specialized devices with only moderate experience in circuit design. Each device operates independently, requiring only an Ethernet network connection to the controlling computer, a clock signal, and a trigger signal. This makes the system highly robust and scalable. The devices can all be connected to a single external clock, allowing synchronous operation of a large number of devices for situations requiring precise timing of many parallel control and acquisition channels. Provided an accurate enough clock, these devices are capable of triggering events separated by one day with near-microsecond precision. We have achieved precisions of ˜0.1 ppb (parts per 109) over 16 s.

  11. From Interfaces to Bulk: Experimental-Computational Studies Across Time and Length Scales of Multi-Functional Ionic Polymers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perahia, Dvora; Grest, Gary S.

    Neutron experiments coupled with computational components have resulted in unprecedented understanding of the factors that impact the behavior of ionic structured polymers. Additionally, new computational tools to study macromolecules, were developed. In parallel, this DOE funding have enabled the education of the next generation of material researchers who are able to take the advantage neutron tools offer to the understanding and design of advanced materials. Our research has provided unprecedented insight into one of the major factors that limits the use of ionizable polymers, combining the macroscopic view obtained from the experimental techniques with molecular insight extracted from computational studiesmore » leading to transformative knowledge that will impact the design of nano-structured, materials. With the focus on model systems, of broad interest to the scientific community and to industry, the research addressed challenges that cut across a large number of polymers, independent of the specific chemical structure or the transported species.« less

  12. Workshop on Structural Dynamics and Control Interaction of Flexible Structures

    NASA Technical Reports Server (NTRS)

    Davis, L. P.; Wilson, J. F.; Jewell, R. E.

    1987-01-01

    The Hubble Space Telescope features the most exacting line of sight jitter requirement thus far imposed on a spacecraft pointing system. Consideration of the fine pointing requirements prompted an attempt to isolate the telescope from the low level vibration disturbances generated by the attitude control system reaction wheels. The primary goal was to provide isolation from axial component of wheel disturbance without compromising the control system bandwidth. A passive isolation system employing metal springs in parallel with viscous fluid dampers was designed, fabricated, and space qualified. Stiffness and damping characteristics are deterministic, controlled independently, and were demonstrated to remain constant over at least five orders of input disturbance magnitude. The damping remained purely viscous even at the data collection threshold of .16 x .000001 in input displacement, a level much lower than the anticipated Hubble Space Telescope disturbance amplitude. Vibration attenuation goals were obtained and ground test of the vehicle has demonstrated the isolators are transparent to the attitude control system.

  13. Portable parallel stochastic optimization for the design of aeropropulsion components

    NASA Technical Reports Server (NTRS)

    Sues, Robert H.; Rhodes, G. S.

    1994-01-01

    This report presents the results of Phase 1 research to develop a methodology for performing large-scale Multi-disciplinary Stochastic Optimization (MSO) for the design of aerospace systems ranging from aeropropulsion components to complete aircraft configurations. The current research recognizes that such design optimization problems are computationally expensive, and require the use of either massively parallel or multiple-processor computers. The methodology also recognizes that many operational and performance parameters are uncertain, and that uncertainty must be considered explicitly to achieve optimum performance and cost. The objective of this Phase 1 research was to initialize the development of an MSO methodology that is portable to a wide variety of hardware platforms, while achieving efficient, large-scale parallelism when multiple processors are available. The first effort in the project was a literature review of available computer hardware, as well as review of portable, parallel programming environments. The first effort was to implement the MSO methodology for a problem using the portable parallel programming language, Parallel Virtual Machine (PVM). The third and final effort was to demonstrate the example on a variety of computers, including a distributed-memory multiprocessor, a distributed-memory network of workstations, and a single-processor workstation. Results indicate the MSO methodology can be well-applied towards large-scale aerospace design problems. Nearly perfect linear speedup was demonstrated for computation of optimization sensitivity coefficients on both a 128-node distributed-memory multiprocessor (the Intel iPSC/860) and a network of workstations (speedups of almost 19 times achieved for 20 workstations). Very high parallel efficiencies (75 percent for 31 processors and 60 percent for 50 processors) were also achieved for computation of aerodynamic influence coefficients on the Intel. Finally, the multi-level parallelization strategy that will be needed for large-scale MSO problems was demonstrated to be highly efficient. The same parallel code instructions were used on both platforms, demonstrating portability. There are many applications for which MSO can be applied, including NASA's High-Speed-Civil Transport, and advanced propulsion systems. The use of MSO will reduce design and development time and testing costs dramatically.

  14. System and method for representing and manipulating three-dimensional objects on massively parallel architectures

    DOEpatents

    Karasick, M.S.; Strip, D.R.

    1996-01-30

    A parallel computing system is described that comprises a plurality of uniquely labeled, parallel processors, each processor capable of modeling a three-dimensional object that includes a plurality of vertices, faces and edges. The system comprises a front-end processor for issuing a modeling command to the parallel processors, relating to a three-dimensional object. Each parallel processor, in response to the command and through the use of its own unique label, creates a directed-edge (d-edge) data structure that uniquely relates an edge of the three-dimensional object to one face of the object. Each d-edge data structure at least includes vertex descriptions of the edge and a description of the one face. As a result, each processor, in response to the modeling command, operates upon a small component of the model and generates results, in parallel with all other processors, without the need for processor-to-processor intercommunication. 8 figs.

  15. Parallelization of the FLAPW method and comparison with the PPW method

    NASA Astrophysics Data System (ADS)

    Canning, Andrew; Mannstadt, Wolfgang; Freeman, Arthur

    2000-03-01

    The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining electronic and magnetic properties of crystals and surfaces. In the past the FLAPW method has been limited to systems of about a hundred atoms due to the lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell running on up to 512 processors on a Cray T3E parallel supercomputer. Some results will also be presented on a comparison of the plane-wave pseudopotential method and the FLAPW method on large systems.

  16. Efficient multitasking: parallel versus serial processing of multiple tasks

    PubMed Central

    Fischer, Rico; Plessow, Franziska

    2015-01-01

    In the context of performance optimizations in multitasking, a central debate has unfolded in multitasking research around whether cognitive processes related to different tasks proceed only sequentially (one at a time), or can operate in parallel (simultaneously). This review features a discussion of theoretical considerations and empirical evidence regarding parallel versus serial task processing in multitasking. In addition, we highlight how methodological differences and theoretical conceptions determine the extent to which parallel processing in multitasking can be detected, to guide their employment in future research. Parallel and serial processing of multiple tasks are not mutually exclusive. Therefore, questions focusing exclusively on either task-processing mode are too simplified. We review empirical evidence and demonstrate that shifting between more parallel and more serial task processing critically depends on the conditions under which multiple tasks are performed. We conclude that efficient multitasking is reflected by the ability of individuals to adjust multitasking performance to environmental demands by flexibly shifting between different processing strategies of multiple task-component scheduling. PMID:26441742

  17. Efficient multitasking: parallel versus serial processing of multiple tasks.

    PubMed

    Fischer, Rico; Plessow, Franziska

    2015-01-01

    In the context of performance optimizations in multitasking, a central debate has unfolded in multitasking research around whether cognitive processes related to different tasks proceed only sequentially (one at a time), or can operate in parallel (simultaneously). This review features a discussion of theoretical considerations and empirical evidence regarding parallel versus serial task processing in multitasking. In addition, we highlight how methodological differences and theoretical conceptions determine the extent to which parallel processing in multitasking can be detected, to guide their employment in future research. Parallel and serial processing of multiple tasks are not mutually exclusive. Therefore, questions focusing exclusively on either task-processing mode are too simplified. We review empirical evidence and demonstrate that shifting between more parallel and more serial task processing critically depends on the conditions under which multiple tasks are performed. We conclude that efficient multitasking is reflected by the ability of individuals to adjust multitasking performance to environmental demands by flexibly shifting between different processing strategies of multiple task-component scheduling.

  18. Using independent component analysis for electrical impedance tomography

    NASA Astrophysics Data System (ADS)

    Yan, Peimin; Mo, Yulong

    2004-05-01

    Independent component analysis (ICA) is a way to resolve signals into independent components based on the statistical characteristics of the signals. It is a method for factoring probability densities of measured signals into a set of densities that are as statistically independent as possible under the assumptions of a linear model. Electrical impedance tomography (EIT) is used to detect variations of the electric conductivity of the human body. Because there are variations of the conductivity distributions inside the body, EIT presents multi-channel data. In order to get all information contained in different location of tissue it is necessary to image the individual conductivity distribution. In this paper we consider to apply ICA to EIT on the signal subspace (individual conductivity distribution). Using ICA the signal subspace will then be decomposed into statistically independent components. The individual conductivity distribution can be reconstructed by the sensitivity theorem in this paper. Compute simulations show that the full information contained in the multi-conductivity distribution will be obtained by this method.

  19. Thermally determining flow and/or heat load distribution in parallel paths

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chainer, Timothy J.; Iyengar, Madhusudan K.; Parida, Pritish R.

    A method including obtaining calibration data for at least one sub-component in a heat transfer assembly, wherein the calibration data comprises at least one indication of coolant flow rate through the sub-component for a given surface temperature delta of the sub-component and a given heat load into said sub-component, determining a measured heat load into the sub-component, determining a measured surface temperature delta of the sub-component, and determining a coolant flow distribution in a first flow path comprising the sub-component from the calibration data according to the measured heat load and the measured surface temperature delta of the sub-component.

  20. Thermally determining flow and/or heat load distribution in parallel paths

    DOEpatents

    Chainer, Timothy J.; Iyengar, Madhusudan K.; Parida, Pritish R.

    2016-12-13

    A method including obtaining calibration data for at least one sub-component in a heat transfer assembly, wherein the calibration data comprises at least one indication of coolant flow rate through the sub-component for a given surface temperature delta of the sub-component and a given heat load into said sub-component, determining a measured heat load into the sub-component, determining a measured surface temperature delta of the sub-component, and determining a coolant flow distribution in a first flow path comprising the sub-component from the calibration data according to the measured heat load and the measured surface temperature delta of the sub-component.

  1. Selection of independent components based on cortical mapping of electromagnetic activity

    NASA Astrophysics Data System (ADS)

    Chan, Hui-Ling; Chen, Yong-Sheng; Chen, Li-Fen

    2012-10-01

    Independent component analysis (ICA) has been widely used to attenuate interference caused by noise components from the electromagnetic recordings of brain activity. However, the scalp topographies and associated temporal waveforms provided by ICA may be insufficient to distinguish functional components from artifactual ones. In this work, we proposed two component selection methods, both of which first estimate the cortical distribution of the brain activity for each component, and then determine the functional components based on the parcellation of brain activity mapped onto the cortical surface. Among all independent components, the first method can identify the dominant components, which have strong activity in the selected dominant brain regions, whereas the second method can identify those inter-regional associating components, which have similar component spectra between a pair of regions. For a targeted region, its component spectrum enumerates the amplitudes of its parceled brain activity across all components. The selected functional components can be remixed to reconstruct the focused electromagnetic signals for further analysis, such as source estimation. Moreover, the inter-regional associating components can be used to estimate the functional brain network. The accuracy of the cortical activation estimation was evaluated on the data from simulation studies, whereas the usefulness and feasibility of the component selection methods were demonstrated on the magnetoencephalography data recorded from a gender discrimination study.

  2. Python as a federation tool for GENESIS 3.0.

    PubMed

    Cornelis, Hugo; Rodriguez, Armando L; Coop, Allan D; Bower, James M

    2012-01-01

    The GENESIS simulation platform was one of the first broad-scale modeling systems in computational biology to encourage modelers to develop and share model features and components. Supported by a large developer community, it participated in innovative simulator technologies such as benchmarking, parallelization, and declarative model specification and was the first neural simulator to define bindings for the Python scripting language. An important feature of the latest version of GENESIS is that it decomposes into self-contained software components complying with the Computational Biology Initiative federated software architecture. This architecture allows separate scripting bindings to be defined for different necessary components of the simulator, e.g., the mathematical solvers and graphical user interface. Python is a scripting language that provides rich sets of freely available open source libraries. With clean dynamic object-oriented designs, they produce highly readable code and are widely employed in specialized areas of software component integration. We employ a simplified wrapper and interface generator to examine an application programming interface and make it available to a given scripting language. This allows independent software components to be 'glued' together and connected to external libraries and applications from user-defined Python or Perl scripts. We illustrate our approach with three examples of Python scripting. (1) Generate and run a simple single-compartment model neuron connected to a stand-alone mathematical solver. (2) Interface a mathematical solver with GENESIS 3.0 to explore a neuron morphology from either an interactive command-line or graphical user interface. (3) Apply scripting bindings to connect the GENESIS 3.0 simulator to external graphical libraries and an open source three dimensional content creation suite that supports visualization of models based on electron microscopy and their conversion to computational models. Employed in this way, the stand-alone software components of the GENESIS 3.0 simulator provide a framework for progressive federated software development in computational neuroscience.

  3. Multivariate Genetic Correlates of the Auditory Paired Stimuli-Based P2 Event-Related Potential in the Psychosis Dimension From the BSNIP Study.

    PubMed

    Mokhtari, Mohammadreza; Narayanan, Balaji; Hamm, Jordan P; Soh, Pauline; Calhoun, Vince D; Ruaño, Gualberto; Kocherla, Mohan; Windemuth, Andreas; Clementz, Brett A; Tamminga, Carol A; Sweeney, John A; Keshavan, Matcheri S; Pearlson, Godfrey D

    2016-05-01

    The complex molecular etiology of psychosis in schizophrenia (SZ) and psychotic bipolar disorder (PBP) is not well defined, presumably due to their multifactorial genetic architecture. Neurobiological correlates of psychosis can be identified through genetic associations of intermediate phenotypes such as event-related potential (ERP) from auditory paired stimulus processing (APSP). Various ERP components of APSP are heritable and aberrant in SZ, PBP and their relatives, but their multivariate genetic factors are less explored. We investigated the multivariate polygenic association of ERP from 64-sensor auditory paired stimulus data in 149 SZ, 209 PBP probands, and 99 healthy individuals from the multisite Bipolar-Schizophrenia Network on Intermediate Phenotypes study. Multivariate association of 64-channel APSP waveforms with a subset of 16 999 single nucleotide polymorphisms (SNPs) (reduced from 1 million SNP array) was examined using parallel independent component analysis (Para-ICA). Biological pathways associated with the genes were assessed using enrichment-based analysis tools. Para-ICA identified 2 ERP components, of which one was significantly correlated with a genetic network comprising multiple linearly coupled gene variants that explained ~4% of the ERP phenotype variance. Enrichment analysis revealed epidermal growth factor, endocannabinoid signaling, glutamatergic synapse and maltohexaose transport associated with P2 component of the N1-P2 ERP waveform. This ERP component also showed deficits in SZ and PBP. Aberrant P2 component in psychosis was associated with gene networks regulating several fundamental biologic functions, either general or specific to nervous system development. The pathways and processes underlying the gene clusters play a crucial role in brain function, plausibly implicated in psychosis. © The Author 2015. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  4. Python as a Federation Tool for GENESIS 3.0

    PubMed Central

    Cornelis, Hugo; Rodriguez, Armando L.; Coop, Allan D.; Bower, James M.

    2012-01-01

    The GENESIS simulation platform was one of the first broad-scale modeling systems in computational biology to encourage modelers to develop and share model features and components. Supported by a large developer community, it participated in innovative simulator technologies such as benchmarking, parallelization, and declarative model specification and was the first neural simulator to define bindings for the Python scripting language. An important feature of the latest version of GENESIS is that it decomposes into self-contained software components complying with the Computational Biology Initiative federated software architecture. This architecture allows separate scripting bindings to be defined for different necessary components of the simulator, e.g., the mathematical solvers and graphical user interface. Python is a scripting language that provides rich sets of freely available open source libraries. With clean dynamic object-oriented designs, they produce highly readable code and are widely employed in specialized areas of software component integration. We employ a simplified wrapper and interface generator to examine an application programming interface and make it available to a given scripting language. This allows independent software components to be ‘glued’ together and connected to external libraries and applications from user-defined Python or Perl scripts. We illustrate our approach with three examples of Python scripting. (1) Generate and run a simple single-compartment model neuron connected to a stand-alone mathematical solver. (2) Interface a mathematical solver with GENESIS 3.0 to explore a neuron morphology from either an interactive command-line or graphical user interface. (3) Apply scripting bindings to connect the GENESIS 3.0 simulator to external graphical libraries and an open source three dimensional content creation suite that supports visualization of models based on electron microscopy and their conversion to computational models. Employed in this way, the stand-alone software components of the GENESIS 3.0 simulator provide a framework for progressive federated software development in computational neuroscience. PMID:22276101

  5. A parallel method of atmospheric correction for multispectral high spatial resolution remote sensing images

    NASA Astrophysics Data System (ADS)

    Zhao, Shaoshuai; Ni, Chen; Cao, Jing; Li, Zhengqiang; Chen, Xingfeng; Ma, Yan; Yang, Leiku; Hou, Weizhen; Qie, Lili; Ge, Bangyu; Liu, Li; Xing, Jin

    2018-03-01

    The remote sensing image is usually polluted by atmosphere components especially like aerosol particles. For the quantitative remote sensing applications, the radiative transfer model based atmospheric correction is used to get the reflectance with decoupling the atmosphere and surface by consuming a long computational time. The parallel computing is a solution method for the temporal acceleration. The parallel strategy which uses multi-CPU to work simultaneously is designed to do atmospheric correction for a multispectral remote sensing image. The parallel framework's flow and the main parallel body of atmospheric correction are described. Then, the multispectral remote sensing image of the Chinese Gaofen-2 satellite is used to test the acceleration efficiency. When the CPU number is increasing from 1 to 8, the computational speed is also increasing. The biggest acceleration rate is 6.5. Under the 8 CPU working mode, the whole image atmospheric correction costs 4 minutes.

  6. Analyzing Tropical Waves Using the Parallel Ensemble Empirical Model Decomposition Method: Preliminary Results from Hurricane Sandy

    NASA Technical Reports Server (NTRS)

    Shen, Bo-Wen; Cheung, Samson; Li, Jui-Lin F.; Wu, Yu-ling

    2013-01-01

    In this study, we discuss the performance of the parallel ensemble empirical mode decomposition (EMD) in the analysis of tropical waves that are associated with tropical cyclone (TC) formation. To efficiently analyze high-resolution, global, multiple-dimensional data sets, we first implement multilevel parallelism into the ensemble EMD (EEMD) and obtain a parallel speedup of 720 using 200 eight-core processors. We then apply the parallel EEMD (PEEMD) to extract the intrinsic mode functions (IMFs) from preselected data sets that represent (1) idealized tropical waves and (2) large-scale environmental flows associated with Hurricane Sandy (2012). Results indicate that the PEEMD is efficient and effective in revealing the major wave characteristics of the data, such as wavelengths and periods, by sifting out the dominant (wave) components. This approach has a potential for hurricane climate study by examining the statistical relationship between tropical waves and TC formation.

  7. The effect of cell design and test criteria on the series/parallel performance of nickel cadmium cells and batteries

    NASA Technical Reports Server (NTRS)

    Halpert, G.; Webb, D. A.

    1983-01-01

    Three batteries were operated in parallel from a common bus during charge and discharge. SMM utilized NASA Standard 20AH cells and batteries, and LANDSAT-D NASA 50AH cells and batteries of a similar design. Each battery consisted of 22 series connected cells providing the nominal 28V bus. The three batteries were charged in parallel using the voltage limit/current taper mode wherein the voltage limit was temperature compensated. Discharge occurred on the demand of the spacecraft instruments and electronics. Both flights were planned for three to five year missions. The series/parallel configuration of cells and batteries for the 3-5 yr mission required a well controlled product with built-in reliability and uniformity. Examples of how component, cell and battery selection methods affect the uniformity of the series/parallel operation of the batteries both in testing and in flight are given.

  8. Increasing the perceptual salience of relationships in parallel coordinate plots.

    PubMed

    Harter, Jonathan M; Wu, Xunlei; Alabi, Oluwafemi S; Phadke, Madhura; Pinto, Lifford; Dougherty, Daniel; Petersen, Hannah; Bass, Steffen; Taylor, Russell M

    2012-01-01

    We present three extensions to parallel coordinates that increase the perceptual salience of relationships between axes in multivariate data sets: (1) luminance modulation maintains the ability to preattentively detect patterns in the presence of overplotting, (2) adding a one-vs.-all variable display highlights relationships between one variable and all others, and (3) adding a scatter plot within the parallel-coordinates display preattentively highlights clusters and spatial layouts without strongly interfering with the parallel-coordinates display. These techniques can be combined with one another and with existing extensions to parallel coordinates, and two of them generalize beyond cases with known-important axes. We applied these techniques to two real-world data sets (relativistic heavy-ion collision hydrodynamics and weather observations with statistical principal component analysis) as well as the popular car data set. We present relationships discovered in the data sets using these methods.

  9. The effect of selection environment on the probability of parallel evolution.

    PubMed

    Bailey, Susan F; Rodrigue, Nicolas; Kassen, Rees

    2015-06-01

    Across the great diversity of life, there are many compelling examples of parallel and convergent evolution-similar evolutionary changes arising in independently evolving populations. Parallel evolution is often taken to be strong evidence of adaptation occurring in populations that are highly constrained in their genetic variation. Theoretical models suggest a few potential factors driving the probability of parallel evolution, but experimental tests are needed. In this study, we quantify the degree of parallel evolution in 15 replicate populations of Pseudomonas fluorescens evolved in five different environments that varied in resource type and arrangement. We identified repeat changes across multiple levels of biological organization from phenotype, to gene, to nucleotide, and tested the impact of 1) selection environment, 2) the degree of adaptation, and 3) the degree of heterogeneity in the environment on the degree of parallel evolution at the gene-level. We saw, as expected, that parallel evolution occurred more often between populations evolved in the same environment; however, the extent of parallel evolution varied widely. The degree of adaptation did not significantly explain variation in the extent of parallelism in our system but number of available beneficial mutations correlated negatively with parallel evolution. In addition, degree of parallel evolution was significantly higher in populations evolved in a spatially structured, multiresource environment, suggesting that environmental heterogeneity may be an important factor constraining adaptation. Overall, our results stress the importance of environment in driving parallel evolutionary changes and point to a number of avenues for future work for understanding when evolution is predictable. © The Author 2015. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  10. B-MIC: An Ultrafast Three-Level Parallel Sequence Aligner Using MIC.

    PubMed

    Cui, Yingbo; Liao, Xiangke; Zhu, Xiaoqian; Wang, Bingqiang; Peng, Shaoliang

    2016-03-01

    Sequence alignment is the central process for sequence analysis, where mapping raw sequencing data to reference genome. The large amount of data generated by NGS is far beyond the process capabilities of existing alignment tools. Consequently, sequence alignment becomes the bottleneck of sequence analysis. Intensive computing power is required to address this challenge. Intel recently announced the MIC coprocessor, which can provide massive computing power. The Tianhe-2 is the world's fastest supercomputer now equipped with three MIC coprocessors each compute node. A key feature of sequence alignment is that different reads are independent. Considering this property, we proposed a MIC-oriented three-level parallelization strategy to speed up BWA, a widely used sequence alignment tool, and developed our ultrafast parallel sequence aligner: B-MIC. B-MIC contains three levels of parallelization: firstly, parallelization of data IO and reads alignment by a three-stage parallel pipeline; secondly, parallelization enabled by MIC coprocessor technology; thirdly, inter-node parallelization implemented by MPI. In this paper, we demonstrate that B-MIC outperforms BWA by a combination of those techniques using Inspur NF5280M server and the Tianhe-2 supercomputer. To the best of our knowledge, B-MIC is the first sequence alignment tool to run on Intel MIC and it can achieve more than fivefold speedup over the original BWA while maintaining the alignment precision.

  11. Design of fuel cell powered data centers for sufficient reliability and availability

    NASA Astrophysics Data System (ADS)

    Ritchie, Alexa J.; Brouwer, Jacob

    2018-04-01

    It is challenging to design a sufficiently reliable fuel cell electrical system for use in data centers, which require 99.9999% uptime. Such a system could lower emissions and increase data center efficiency, but the reliability and availability of such a system must be analyzed and understood. Currently, extensive backup equipment is used to ensure electricity availability. The proposed design alternative uses multiple fuel cell systems each supporting a small number of servers to eliminate backup power equipment provided the fuel cell design has sufficient reliability and availability. Potential system designs are explored for the entire data center and for individual fuel cells. Reliability block diagram analysis of the fuel cell systems was accomplished to understand the reliability of the systems without repair or redundant technologies. From this analysis, it was apparent that redundant components would be necessary. A program was written in MATLAB to show that the desired system reliability could be achieved by a combination of parallel components, regardless of the number of additional components needed. Having shown that the desired reliability was achievable through some combination of components, a dynamic programming analysis was undertaken to assess the ideal allocation of parallel components.

  12. Ropes: Support for collective opertions among distributed threads

    NASA Technical Reports Server (NTRS)

    Haines, Matthew; Mehrotra, Piyush; Cronk, David

    1995-01-01

    Lightweight threads are becoming increasingly useful in supporting parallelism and asynchronous control structures in applications and language implementations. Recently, systems have been designed and implemented to support interprocessor communication between lightweight threads so that threads can be exploited in a distributed memory system. Their use, in this setting, has been largely restricted to supporting latency hiding techniques and functional parallelism within a single application. However, to execute data parallel codes independent of other threads in the system, collective operations and relative indexing among threads are required. This paper describes the design of ropes: a scoping mechanism for collective operations and relative indexing among threads. We present the design of ropes in the context of the Chant system, and provide performance results evaluating our initial design decisions.

  13. Solution of the within-group multidimensional discrete ordinates transport equations on massively parallel architectures

    NASA Astrophysics Data System (ADS)

    Zerr, Robert Joseph

    2011-12-01

    The integral transport matrix method (ITMM) has been used as the kernel of new parallel solution methods for the discrete ordinates approximation of the within-group neutron transport equation. The ITMM abandons the repetitive mesh sweeps of the traditional source iterations (SI) scheme in favor of constructing stored operators that account for the direct coupling factors among all the cells and between the cells and boundary surfaces. The main goals of this work were to develop the algorithms that construct these operators and employ them in the solution process, determine the most suitable way to parallelize the entire procedure, and evaluate the behavior and performance of the developed methods for increasing number of processes. This project compares the effectiveness of the ITMM with the SI scheme parallelized with the Koch-Baker-Alcouffe (KBA) method. The primary parallel solution method involves a decomposition of the domain into smaller spatial sub-domains, each with their own transport matrices, and coupled together via interface boundary angular fluxes. Each sub-domain has its own set of ITMM operators and represents an independent transport problem. Multiple iterative parallel solution methods have investigated, including parallel block Jacobi (PBJ), parallel red/black Gauss-Seidel (PGS), and parallel GMRES (PGMRES). The fastest observed parallel solution method, PGS, was used in a weak scaling comparison with the PARTISN code. Compared to the state-of-the-art SI-KBA with diffusion synthetic acceleration (DSA), this new method without acceleration/preconditioning is not competitive for any problem parameters considered. The best comparisons occur for problems that are difficult for SI DSA, namely highly scattering and optically thick. SI DSA execution time curves are generally steeper than the PGS ones. However, until further testing is performed it cannot be concluded that SI DSA does not outperform the ITMM with PGS even on several thousand or tens of thousands of processors. The PGS method does outperform SI DSA for the periodic heterogeneous layers (PHL) configuration problems. Although this demonstrates a relative strength/weakness between the two methods, the practicality of these problems is much less, further limiting instances where it would be beneficial to select ITMM over SI DSA. The results strongly indicate a need for a robust, stable, and efficient acceleration method (or preconditioner for PGMRES). The spatial multigrid (SMG) method is currently incomplete in that it does not work for all cases considered and does not effectively improve the convergence rate for all values of scattering ratio c or cell dimension h. Nevertheless, it does display the desired trend for highly scattering, optically thin problems. That is, it tends to lower the rate of growth of number of iterations with increasing number of processes, P, while not increasing the number of additional operations per iteration to the extent that the total execution time of the rapidly converging accelerated iterations exceeds that of the slower unaccelerated iterations. A predictive parallel performance model has been developed for the PBJ method. Timing tests were performed such that trend lines could be fitted to the data for the different components and used to estimate the execution times. Applied to the weak scaling results, the model notably underestimates construction time, but combined with a slight overestimation in iterative solution time, the model predicts total execution time very well for large P. It also does a decent job with the strong scaling results, closely predicting the construction time and time per iteration, especially as P increases. Although not shown to be competitive up to 1,024 processing elements with the current state of the art, the parallelized ITMM exhibits promising scaling trends. Ultimately, compared to the KBA method, the parallelized ITMM may be found to be a very attractive option for transport calculations spatially decomposed over several tens of thousands of processes. Acceleration/preconditioning of the parallelized ITMM once developed will improve the convergence rate and improve its competitiveness. (Abstract shortened by UMI.)

  14. System and method for generating steady state confining current for a toroidal plasma fusion reactor

    DOEpatents

    Fisch, Nathaniel J.

    1981-01-01

    A system for generating steady state confining current for a toroidal plasma fusion reactor providing steady-state generation of the thermonuclear power. A dense, hot toroidal plasma is initially prepared with a confining magnetic field with toroidal and poloidal components. Continuous wave RF energy is injected into said plasma to establish a spectrum of traveling waves in the plasma, where the traveling waves have momentum components substantially either all parallel, or all anti-parallel to the confining magnetic field. The injected RF energy is phased to couple to said traveling waves with both a phase velocity component and a wave momentum component in the direction of the plasma traveling wave components. The injected RF energy has a predetermined spectrum selected so that said traveling waves couple to plasma electrons having velocities in a predetermined range .DELTA.. The velocities in the range are substantially greater than the thermal electron velocity of the plasma. In addition, the range is sufficiently broad to produce a raised plateau having width .DELTA. in the plasma electron velocity distribution so that the plateau electrons provide steady-state current to generate a poloidal magnetic field component sufficient for confining the plasma. In steady state operation of the fusion reactor, the fusion power density in the plasma exceeds the power dissipated in the plasma.

  15. System and method for generating steady state confining current for a toroidal plasma fusion reactor

    DOEpatents

    Bers, Abraham

    1981-01-01

    A system for generating steady state confining current for a toroidal plasma fusion reactor providing steady-state generation of the thermonuclear power. A dense, hot toroidal plasma is initially prepared with a confining magnetic field with toroidal and poloidal components. Continuous wave RF energy is injected into said plasma to estalish a spectrum of traveling waves in the plasma, where the traveling waves have momentum components substantially either all parallel, or all anti-parallel to the confining magnetic field. The injected RF energy is phased to couple to said traveling waves with both a phase velocity component and a wave momentum component in the direction of the plasma traveling wave components. The injected RF energy has a predetermined spectrum selected so that said traveling waves couple to plasma electrons having velocities in a predetermined range .DELTA.. The velocities in the range are substantially greater than the thermal electron velocity of the plasma. In addition, the range is sufficiently broad to produce a raised plateau having width .DELTA. in the plasma electron velocity distribution so that the plateau electrons provide steady-state current to generate a poloidal magnetic field component sufficient for confining the plasma. In steady state operation of the fusion reactor, the fusion power density in the plasma exceeds the power dissipated inthe plasma.

  16. Final Technical Report - Center for Technology for Advanced Scientific Component Software (TASCS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sussman, Alan

    2014-10-21

    This is a final technical report for the University of Maryland work in the SciDAC Center for Technology for Advanced Scientific Component Software (TASCS). The Maryland work focused on software tools for coupling parallel software components built using the Common Component Architecture (CCA) APIs. Those tools are based on the Maryland InterComm software framework that has been used in multiple computational science applications to build large-scale simulations of complex physical systems that employ multiple separately developed codes.

  17. Sublattice parallel replica dynamics.

    PubMed

    Martínez, Enrique; Uberuaga, Blas P; Voter, Arthur F

    2014-06-01

    Exascale computing presents a challenge for the scientific community as new algorithms must be developed to take full advantage of the new computing paradigm. Atomistic simulation methods that offer full fidelity to the underlying potential, i.e., molecular dynamics (MD) and parallel replica dynamics, fail to use the whole machine speedup, leaving a region in time and sample size space that is unattainable with current algorithms. In this paper, we present an extension of the parallel replica dynamics algorithm [A. F. Voter, Phys. Rev. B 57, R13985 (1998)] by combining it with the synchronous sublattice approach of Shim and Amar [ and , Phys. Rev. B 71, 125432 (2005)], thereby exploiting event locality to improve the algorithm scalability. This algorithm is based on a domain decomposition in which events happen independently in different regions in the sample. We develop an analytical expression for the speedup given by this sublattice parallel replica dynamics algorithm and compare it with parallel MD and traditional parallel replica dynamics. We demonstrate how this algorithm, which introduces a slight additional approximation of event locality, enables the study of physical systems unreachable with traditional methodologies and promises to better utilize the resources of current high performance and future exascale computers.

  18. PyPele Rewritten To Use MPI

    NASA Technical Reports Server (NTRS)

    Hockney, George; Lee, Seungwon

    2008-01-01

    A computer program known as PyPele, originally written as a Pythonlanguage extension module of a C++ language program, has been rewritten in pure Python language. The original version of PyPele dispatches and coordinates parallel-processing tasks on cluster computers and provides a conceptual framework for spacecraft-mission- design and -analysis software tools to run in an embarrassingly parallel mode. The original version of PyPele uses SSH (Secure Shell a set of standards and an associated network protocol for establishing a secure channel between a local and a remote computer) to coordinate parallel processing. Instead of SSH, the present Python version of PyPele uses Message Passing Interface (MPI) [an unofficial de-facto standard language-independent application programming interface for message- passing on a parallel computer] while keeping the same user interface. The use of MPI instead of SSH and the preservation of the original PyPele user interface make it possible for parallel application programs written previously for the original version of PyPele to run on MPI-based cluster computers. As a result, engineers using the previously written application programs can take advantage of embarrassing parallelism without need to rewrite those programs.

  19. Development and Application of a Parallel LCAO Cluster Method

    NASA Astrophysics Data System (ADS)

    Patton, David C.

    1997-08-01

    CPU intensive steps in the SCF electronic structure calculations of clusters and molecules with a first-principles LCAO method have been fully parallelized via a message passing paradigm. Identification of the parts of the code that are composed of many independent compute-intensive steps is discussed in detail as they are the most readily parallelized. Most of the parallelization involves spatially decomposing numerical operations on a mesh. One exception is the solution of Poisson's equation which relies on distribution of the charge density and multipole methods. The method we use to parallelize this part of the calculation is quite novel and is covered in detail. We present a general method for dynamically load-balancing a parallel calculation and discuss how we use this method in our code. The results of benchmark calculations of the IR and Raman spectra of PAH molecules such as anthracene (C_14H_10) and tetracene (C_18H_12) are presented. These benchmark calculations were performed on an IBM SP2 and a SUN Ultra HPC server with both MPI and PVM. Scalability and speedup for these calculations is analyzed to determine the efficiency of the code. In addition, performance and usage issues for MPI and PVM are presented.

  20. A visual parallel-BCI speller based on the time-frequency coding strategy.

    PubMed

    Xu, Minpeng; Chen, Long; Zhang, Lixin; Qi, Hongzhi; Ma, Lan; Tang, Jiabei; Wan, Baikun; Ming, Dong

    2014-04-01

    Spelling is one of the most important issues in brain-computer interface (BCI) research. This paper is to develop a visual parallel-BCI speller system based on the time-frequency coding strategy in which the sub-speller switching among four simultaneously presented sub-spellers and the character selection are identified in a parallel mode. The parallel-BCI speller was constituted by four independent P300+SSVEP-B (P300 plus SSVEP blocking) spellers with different flicker frequencies, thereby all characters had a specific time-frequency code. To verify its effectiveness, 11 subjects were involved in the offline and online spellings. A classification strategy was designed to recognize the target character through jointly using the canonical correlation analysis and stepwise linear discriminant analysis. Online spellings showed that the proposed parallel-BCI speller had a high performance, reaching the highest information transfer rate of 67.4 bit min(-1), with an average of 54.0 bit min(-1) and 43.0 bit min(-1) in the three rounds and five rounds, respectively. The results indicated that the proposed parallel-BCI could be effectively controlled by users with attention shifting fluently among the sub-spellers, and highly improved the BCI spelling performance.

  1. Quantitative Image Feature Engine (QIFE): an Open-Source, Modular Engine for 3D Quantitative Feature Extraction from Volumetric Medical Images.

    PubMed

    Echegaray, Sebastian; Bakr, Shaimaa; Rubin, Daniel L; Napel, Sandy

    2017-10-06

    The aim of this study was to develop an open-source, modular, locally run or server-based system for 3D radiomics feature computation that can be used on any computer system and included in existing workflows for understanding associations and building predictive models between image features and clinical data, such as survival. The QIFE exploits various levels of parallelization for use on multiprocessor systems. It consists of a managing framework and four stages: input, pre-processing, feature computation, and output. Each stage contains one or more swappable components, allowing run-time customization. We benchmarked the engine using various levels of parallelization on a cohort of CT scans presenting 108 lung tumors. Two versions of the QIFE have been released: (1) the open-source MATLAB code posted to Github, (2) a compiled version loaded in a Docker container, posted to DockerHub, which can be easily deployed on any computer. The QIFE processed 108 objects (tumors) in 2:12 (h/mm) using 1 core, and 1:04 (h/mm) hours using four cores with object-level parallelization. We developed the Quantitative Image Feature Engine (QIFE), an open-source feature-extraction framework that focuses on modularity, standards, parallelism, provenance, and integration. Researchers can easily integrate it with their existing segmentation and imaging workflows by creating input and output components that implement their existing interfaces. Computational efficiency can be improved by parallelizing execution at the cost of memory usage. Different parallelization levels provide different trade-offs, and the optimal setting will depend on the size and composition of the dataset to be processed.

  2. Alleviation of rapid, futile ammonium cycling at the plasma membrane by potassium reveals K+-sensitive and -insensitive components of NH4+ transport.

    PubMed

    Szczerba, Mark W; Britto, Dev T; Balkos, Konstantine D; Kronzucker, Herbert J

    2008-01-01

    Futile plasma membrane cycling of ammonium (NH4+) is characteristic of low-affinity NH4+ transport, and has been proposed to be a critical factor in NH4+ toxicity. Using unidirectional flux analysis with the positron-emitting tracer 13N in intact seedlings of barley (Hordeum vulgare L.), it is shown that rapid, futile NH4+ cycling is alleviated by elevated K+ supply, and that low-affinity NH4+ transport is mediated by a K+-sensitive component, and by a second component that is independent of K+. At low external [K+] (0.1 mM), NH4+ influx (at an external [NH4+] of 10 mM) of 92 micromol g(-1) h(-1) was observed, with an efflux:influx ratio of 0.75, indicative of rapid, futile NH4+ cycling. Elevating K+ supply into the low-affinity K+ transport range (1.5-40 mM) reduced both influx and efflux of NH4+ by as much as 75%, and substantially reduced the efflux:influx ratio. The reduction of NH4+ fluxes was achieved rapidly upon exposure to elevated K+, within 1 min for influx and within 5 min for efflux. The channel inhibitor La3+ decreased high-capacity NH4+ influx only at low K+ concentrations, suggesting that the K+-sensitive component of NH4+ influx may be mediated by non-selective cation channels. Using respiratory measurements and current models of ion flux energetics, the energy cost of concomitant NH4+ and K+ transport at the root plasma membrane, and its consequences for plant growth are discussed. The study presents the first demonstration of the parallel operation of K+-sensitive and -insensitive NH4+ flux mechanisms in plants.

  3. Synthesis of blind source separation algorithms on reconfigurable FPGA platforms

    NASA Astrophysics Data System (ADS)

    Du, Hongtao; Qi, Hairong; Szu, Harold H.

    2005-03-01

    Recent advances in intelligence technology have boosted the development of micro- Unmanned Air Vehicles (UAVs) including Sliver Fox, Shadow, and Scan Eagle for various surveillance and reconnaissance applications. These affordable and reusable devices have to fit a series of size, weight, and power constraints. Cameras used on such micro-UAVs are therefore mounted directly at a fixed angle without any motion-compensated gimbals. This mounting scheme has resulted in the so-called jitter effect in which jitter is defined as sub-pixel or small amplitude vibrations. The jitter blur caused by the jitter effect needs to be corrected before any other processing algorithms can be practically applied. Jitter restoration has been solved by various optimization techniques, including Wiener approximation, maximum a-posteriori probability (MAP), etc. However, these algorithms normally assume a spatial-invariant blur model that is not the case with jitter blur. Szu et al. developed a smart real-time algorithm based on auto-regression (AR) with its natural generalization of unsupervised artificial neural network (ANN) learning to achieve restoration accuracy at the sub-pixel level. This algorithm resembles the capability of the human visual system, in which an agreement between the pair of eyes indicates "signal", otherwise, the jitter noise. Using this non-statistical method, for each single pixel, a deterministic blind sources separation (BSS) process can then be carried out independently based on a deterministic minimum of the Helmholtz free energy with a generalization of Shannon's information theory applied to open dynamic systems. From a hardware implementation point of view, the process of jitter restoration of an image using Szu's algorithm can be optimized by pixel-based parallelization. In our previous work, a parallelly structured independent component analysis (ICA) algorithm has been implemented on both Field Programmable Gate Array (FPGA) and Application-Specific Integrated Circuit (ASIC) using standard-height cells. ICA is an algorithm that can solve BSS problems by carrying out the all-order statistical, decorrelation-based transforms, in which an assumption that neighborhood pixels share the same but unknown mixing matrix A is made. In this paper, we continue our investigation on the design challenges of firmware approaches to smart algorithms. We think two levels of parallelization can be explored, including pixel-based parallelization and the parallelization of the restoration algorithm performed at each pixel. This paper focuses on the latter and we use ICA as an example to explain the design and implementation methods. It is well known that the capacity constraints of single FPGA have limited the implementation of many complex algorithms including ICA. Using the reconfigurability of FPGA, we show, in this paper, how to manipulate the FPGA-based system to provide extra computing power for the parallelized ICA algorithm with limited FPGA resources. The synthesis aiming at the pilchard re-configurable FPGA platform is reported. The pilchard board is embedded with single Xilinx VIRTEX 1000E FPGA and transfers data directly to CPU on the 64-bit memory bus at the maximum frequency of 133MHz. Both the feasibility performance evaluations and experimental results validate the effectiveness and practicality of this synthesis, which can be extended to the spatial-variant jitter restoration for micro-UAV deployment.

  4. USING THE ECLPSS SOFTWARE ENVIRONMENT TO BUILD A SPATIALLY EXPLICIT COMPONENT-BASED MODEL OF OZONE EFFECTS ON FOREST ECOSYSTEMS. (R827958)

    EPA Science Inventory

    We have developed a modeling framework to support grid-based simulation of ecosystems at multiple spatial scales, the Ecological Component Library for Parallel Spatial Simulation (ECLPSS). ECLPSS helps ecologists to build robust spatially explicit simulations of ...

  5. Parallel Three-Dimensional Computation of Fluid Dynamics and Fluid-Structure Interactions of Ram-Air Parachutes

    NASA Technical Reports Server (NTRS)

    Tezduyar, Tayfun E.

    1998-01-01

    This is a final report as far as our work at University of Minnesota is concerned. The report describes our research progress and accomplishments in development of high performance computing methods and tools for 3D finite element computation of aerodynamic characteristics and fluid-structure interactions (FSI) arising in airdrop systems, namely ram-air parachutes and round parachutes. This class of simulations involves complex geometries, flexible structural components, deforming fluid domains, and unsteady flow patterns. The key components of our simulation toolkit are a stabilized finite element flow solver, a nonlinear structural dynamics solver, an automatic mesh moving scheme, and an interface between the fluid and structural solvers; all of these have been developed within a parallel message-passing paradigm.

  6. On the Nonlinear Stability of Plane Parallel Shear Flow in a Coplanar Magnetic Field

    NASA Astrophysics Data System (ADS)

    Xu, Lanxi; Lan, Wanli

    2017-12-01

    Lyapunov direct method has been used to study the nonlinear stability of laminar flow between two parallel planes in the presence of a coplanar magnetic field for streamwise perturbations with stress-free boundary planes. Two Lyapunov functions are defined. By means of the first, it is proved that the transverse components of the perturbations decay unconditionally and asymptotically to zero for all Reynolds numbers and magnetic Reynolds numbers. By means of the second, it is showed that the other components of the perturbations decay conditionally and exponentially to zero for all Reynolds numbers and the magnetic Reynolds numbers below π ^2/2M, where M is the maximum of the absolute value of the velocity field of the laminar flow.

  7. Longitudinal elliptically polarized electromagnetic waves in off-diagonal magnetoelectric split-ring composites.

    PubMed

    Chui, S T; Wang, Weihua; Zhou, L; Lin, Z F

    2009-07-22

    We study the propagation of plane electromagnetic waves through different systems consisting of arrays of split rings of different orientations. Many extraordinary EM phenomena were discovered in such systems, contributed by the off-diagonal magnetoelectric susceptibilities. We find a mode such that the electric field becomes elliptically polarized with a component in the longitudinal direction (i.e. parallel to the wavevector). Even though the group velocity [Formula: see text] and the wavevector k are parallel, in the presence of damping, the Poynting vector does not just get 'broadened', but can possess a component perpendicular to the wavevector. The speed of light can be real even when the product ϵμ is negative. Other novel properties are explored.

  8. Nonlinear Extraction of Independent Components of Natural Images Using Radial Gaussianization

    PubMed Central

    Lyu, Siwei; Simoncelli, Eero P.

    2011-01-01

    We consider the problem of efficiently encoding a signal by transforming it to a new representation whose components are statistically independent. A widely studied linear solution, known as independent component analysis (ICA), exists for the case when the signal is generated as a linear transformation of independent nongaussian sources. Here, we examine a complementary case, in which the source is nongaussian and elliptically symmetric. In this case, no invertible linear transform suffices to decompose the signal into independent components, but we show that a simple nonlinear transformation, which we call radial gaussianization (RG), is able to remove all dependencies. We then examine this methodology in the context of natural image statistics. We first show that distributions of spatially proximal bandpass filter responses are better described as elliptical than as linearly transformed independent sources. Consistent with this, we demonstrate that the reduction in dependency achieved by applying RG to either nearby pairs or blocks of bandpass filter responses is significantly greater than that achieved by ICA. Finally, we show that the RG transformation may be closely approximated by divisive normalization, which has been used to model the nonlinear response properties of visual neurons. PMID:19191599

  9. Response-reinforcer dependency and resistance to change.

    PubMed

    Cançado, Carlos R X; Abreu-Rodrigues, Josele; Aló, Raquel Moreira; Hauck, Flávia; Doughty, Adam H

    2018-01-01

    The effects of the response-reinforcer dependency on resistance to change were studied in three experiments with rats. In Experiment 1, lever pressing produced reinforcers at similar rates after variable interreinforcer intervals in each component of a two-component multiple schedule. Across conditions, in the fixed component, all reinforcers were response-dependent; in the alternative component, the percentage of response-dependent reinforcers was 100, 50 (i.e., 50% response-dependent and 50% response-independent) or 10% (i.e., 10% response-dependent and 90% response-independent). Resistance to extinction was greater in the alternative than in the fixed component when the dependency in the former was 10%, but was similar between components when this dependency was 100 or 50%. In Experiment 2, a three-component multiple schedule was used. The dependency was 100% in one component and 10% in the other two. The 10% components differed on how reinforcers were programmed. In one component, as in Experiment 1, a reinforcer had to be collected before the scheduling of other response-dependent or independent reinforcers. In the other component, response-dependent and -independent reinforcers were programmed by superimposing a variable-time schedule on an independent variable-interval schedule. Regardless of the procedure used to program the dependency, resistance to extinction was greater in the 10% components than in the 100% component. These results were replicated in Experiment 3 in which, instead of extinction, VT schedules replaced the baseline schedules in each multiple-schedule component during the test. We argue that the relative change in dependency from Baseline to Test, which is greater when baseline dependencies are high rather than low, could account for the differential resistance to change in the present experiments. The inconsistencies in results across the present and previous experiments suggest that the effects of dependency on resistance to change are not well understood. Additional systematic analyses are important to further understand the effects of the response-reinforcer relation on resistance to change and to the development of a more comprehensive theory of behavioral persistence. © 2017 Society for the Experimental Analysis of Behavior.

  10. Contributions of Hippocampus and Striatum to Memory-Guided Behavior Depend on Past Experience

    PubMed Central

    2016-01-01

    The hippocampal and striatal memory systems are thought to operate independently and in parallel in supporting cognitive memory and habits, respectively. Much of the evidence for this principle comes from double dissociation data, in which damage to brain structure A causes deficits in Task 1 but not Task 2, whereas damage to structure B produces the reverse pattern of effects. Typically, animals are explicitly trained in one task. Here, we investigated whether this principle continues to hold when animals concurrently learn two types of tasks. Rats were trained on a plus maze in either a spatial navigation or a cue–response task (sequential training), whereas a third set of rats acquired both (concurrent training). Subsequently, the rats underwent either sham surgery or neurotoxic lesions of the hippocampus (HPC), medial dorsal striatum (DSM), or lateral dorsal striatum (DSL), followed by retention testing. Finally, rats in the sequential training condition also acquired the novel “other” task. When rats learned one task, HPC and DSL selectively supported spatial navigation and cue response, respectively. However, when rats learned both tasks, HPC and DSL additionally supported the behavior incongruent with the processing style of the corresponding memory system. Thus, in certain conditions, the hippocampal and striatal memory systems can operate cooperatively and in synergism. DSM significantly contributed to performance regardless of task or training procedure. Experience with the cue–response task facilitated subsequent spatial learning, whereas experience with spatial navigation delayed both concurrent and subsequent response learning. These findings suggest that there are multiple operational principles that govern memory networks. SIGNIFICANCE STATEMENT Currently, we distinguish among several types of memories, each supported by a distinct neural circuit. The memory systems are thought to operate independently and in parallel. Here, we demonstrate that the hippocampus and the dorsal striatum memory systems operate independently and in parallel when rats learn one type of task at a time, but interact cooperatively and in synergism when rats concurrently learn two types of tasks. Furthermore, new learning is modulated by past experiences. These results can be explained by a model in which independent and parallel information processing that occurs in the separate memory-related neural circuits is supplemented by information transfer between the memory systems at the level of the cortex. PMID:27307234

  11. Contributions of Hippocampus and Striatum to Memory-Guided Behavior Depend on Past Experience.

    PubMed

    Ferbinteanu, Janina

    2016-06-15

    The hippocampal and striatal memory systems are thought to operate independently and in parallel in supporting cognitive memory and habits, respectively. Much of the evidence for this principle comes from double dissociation data, in which damage to brain structure A causes deficits in Task 1 but not Task 2, whereas damage to structure B produces the reverse pattern of effects. Typically, animals are explicitly trained in one task. Here, we investigated whether this principle continues to hold when animals concurrently learn two types of tasks. Rats were trained on a plus maze in either a spatial navigation or a cue-response task (sequential training), whereas a third set of rats acquired both (concurrent training). Subsequently, the rats underwent either sham surgery or neurotoxic lesions of the hippocampus (HPC), medial dorsal striatum (DSM), or lateral dorsal striatum (DSL), followed by retention testing. Finally, rats in the sequential training condition also acquired the novel "other" task. When rats learned one task, HPC and DSL selectively supported spatial navigation and cue response, respectively. However, when rats learned both tasks, HPC and DSL additionally supported the behavior incongruent with the processing style of the corresponding memory system. Thus, in certain conditions, the hippocampal and striatal memory systems can operate cooperatively and in synergism. DSM significantly contributed to performance regardless of task or training procedure. Experience with the cue-response task facilitated subsequent spatial learning, whereas experience with spatial navigation delayed both concurrent and subsequent response learning. These findings suggest that there are multiple operational principles that govern memory networks. Currently, we distinguish among several types of memories, each supported by a distinct neural circuit. The memory systems are thought to operate independently and in parallel. Here, we demonstrate that the hippocampus and the dorsal striatum memory systems operate independently and in parallel when rats learn one type of task at a time, but interact cooperatively and in synergism when rats concurrently learn two types of tasks. Furthermore, new learning is modulated by past experiences. These results can be explained by a model in which independent and parallel information processing that occurs in the separate memory-related neural circuits is supplemented by information transfer between the memory systems at the level of the cortex. Copyright © 2016 the authors 0270-6474/16/366459-12$15.00/0.

  12. Parallel Directionally Split Solver Based on Reformulation of Pipelined Thomas Algorithm

    NASA Technical Reports Server (NTRS)

    Povitsky, A.

    1998-01-01

    In this research an efficient parallel algorithm for 3-D directionally split problems is developed. The proposed algorithm is based on a reformulated version of the pipelined Thomas algorithm that starts the backward step computations immediately after the completion of the forward step computations for the first portion of lines This algorithm has data available for other computational tasks while processors are idle from the Thomas algorithm. The proposed 3-D directionally split solver is based on the static scheduling of processors where local and non-local, data-dependent and data-independent computations are scheduled while processors are idle. A theoretical model of parallelization efficiency is used to define optimal parameters of the algorithm, to show an asymptotic parallelization penalty and to obtain an optimal cover of a global domain with subdomains. It is shown by computational experiments and by the theoretical model that the proposed algorithm reduces the parallelization penalty about two times over the basic algorithm for the range of the number of processors (subdomains) considered and the number of grid nodes per subdomain.

  13. New Parallel computing framework for radiation transport codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kostin, M.A.; /Michigan State U., NSCL; Mokhov, N.V.

    A new parallel computing framework has been developed to use with general-purpose radiation transport codes. The framework was implemented as a C++ module that uses MPI for message passing. The module is significantly independent of radiation transport codes it can be used with, and is connected to the codes by means of a number of interface functions. The framework was integrated with the MARS15 code, and an effort is under way to deploy it in PHITS. Besides the parallel computing functionality, the framework offers a checkpoint facility that allows restarting calculations with a saved checkpoint file. The checkpoint facility canmore » be used in single process calculations as well as in the parallel regime. Several checkpoint files can be merged into one thus combining results of several calculations. The framework also corrects some of the known problems with the scheduling and load balancing found in the original implementations of the parallel computing functionality in MARS15 and PHITS. The framework can be used efficiently on homogeneous systems and networks of workstations, where the interference from the other users is possible.« less

  14. Parallel computation and the basis system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, G.R.

    1993-05-01

    A software package has been written that can facilitate efforts to develop powerful, flexible, and easy-to use programs that can run in single-processor, massively parallel, and distributed computing environments. Particular attention has been given to the difficulties posed by a program consisting of many science packages that represent subsystems of a complicated, coupled system. Methods have been found to maintain independence of the packages by hiding data structures without increasing the communications costs in a parallel computing environment. Concepts developed in this work are demonstrated by a prototype program that uses library routines from two existing software systems, Basis andmore » Parallel Virtual Machine (PVM). Most of the details of these libraries have been encapsulated in routines and macros that could be rewritten for alternative libraries that possess certain minimum capabilities. The prototype software uses a flexible master-and-slaves paradigm for parallel computation and supports domain decomposition with message passing for partitioning work among slaves. Facilities are provided for accessing variables that are distributed among the memories of slaves assigned to subdomains. The software is named PROTOPAR.« less

  15. Tri-state oriented parallel processing system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tenenbaum, J.; Wallach, Y.

    1982-08-01

    An alternating sequential/parallel system, the MOPPS was introduced a few years ago and is modified despite the fact that it solved satisfactorily a number of real-time problems. The new system, the TOPPS is described and compared to MOPPS and two applications are chosen to prove it to be superior. The advantage of having a third basic, the ring mode, is illustrated when solving sets of linear equations with band matrices. The advantage of having independent I/O for the slaves is illustrated for biomedical signal analysis. 11 references.

  16. Scalar Casimir densities and forces for parallel plates in cosmic string spacetime

    NASA Astrophysics Data System (ADS)

    Bezerra de Mello, E. R.; Saharian, A. A.; Abajyan, S. V.

    2018-04-01

    We analyze the Green function, the Casimir densities and forces associated with a massive scalar quantum field confined between two parallel plates in a higher dimensional cosmic string spacetime. The plates are placed orthogonal to the string, and the field obeys the Robin boundary conditions on them. The boundary-induced contributions are explicitly extracted in the vacuum expectation values (VEVs) of the field squared and of the energy-momentum tensor for both the single plate and two plates geometries. The VEV of the energy-momentum tensor, in additional to the diagonal components, contains an off diagonal component corresponding to the shear stress. The latter vanishes on the plates in special cases of Dirichlet and Neumann boundary conditions. For points outside the string core the topological contributions in the VEVs are finite on the plates. Near the string the VEVs are dominated by the boundary-free part, whereas at large distances the boundary-induced contributions dominate. Due to the nonzero off diagonal component of the vacuum energy-momentum tensor, in addition to the normal component, the Casimir forces have nonzero component parallel to the boundary (shear force). Unlike the problem on the Minkowski bulk, the normal forces acting on the separate plates, in general, do not coincide if the corresponding Robin coefficients are different. Another difference is that in the presence of the cosmic string the Casimir forces for Dirichlet and Neumann boundary conditions differ. For Dirichlet boundary condition the normal Casimir force does not depend on the curvature coupling parameter. This is not the case for other boundary conditions. A new qualitative feature induced by the cosmic string is the appearance of the shear stress acting on the plates. The corresponding force is directed along the radial coordinate and vanishes for Dirichlet and Neumann boundary conditions. Depending on the parameters of the problem, the radial component of the shear force can be either positive or negative.

  17. Automatic removal of eye-movement and blink artifacts from EEG signals.

    PubMed

    Gao, Jun Feng; Yang, Yong; Lin, Pan; Wang, Pei; Zheng, Chong Xun

    2010-03-01

    Frequent occurrence of electrooculography (EOG) artifacts leads to serious problems in interpreting and analyzing the electroencephalogram (EEG). In this paper, a robust method is presented to automatically eliminate eye-movement and eye-blink artifacts from EEG signals. Independent Component Analysis (ICA) is used to decompose EEG signals into independent components. Moreover, the features of topographies and power spectral densities of those components are extracted to identify eye-movement artifact components, and a support vector machine (SVM) classifier is adopted because it has higher performance than several other classifiers. The classification results show that feature-extraction methods are unsuitable for identifying eye-blink artifact components, and then a novel peak detection algorithm of independent component (PDAIC) is proposed to identify eye-blink artifact components. Finally, the artifact removal method proposed here is evaluated by the comparisons of EEG data before and after artifact removal. The results indicate that the method proposed could remove EOG artifacts effectively from EEG signals with little distortion of the underlying brain signals.

  18. Binocular optical axis parallelism detection precision analysis based on Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Ying, Jiaju; Liu, Bingqi

    2018-02-01

    According to the working principle of the binocular photoelectric instrument optical axis parallelism digital calibration instrument, and in view of all components of the instrument, the various factors affect the system precision is analyzed, and then precision analysis model is established. Based on the error distribution, Monte Carlo method is used to analyze the relationship between the comprehensive error and the change of the center coordinate of the circle target image. The method can further guide the error distribution, optimize control the factors which have greater influence on the comprehensive error, and improve the measurement accuracy of the optical axis parallelism digital calibration instrument.

  19. Seasonal characterization of CDOM for lakes in semiarid regions of Northeast China using excitation-emission matrix fluorescence and parallel factor analysis (EEM-PARAFAC)

    NASA Astrophysics Data System (ADS)

    Zhao, Ying; Song, Kaishan; Wen, Zhidan; Li, Lin; Zang, Shuying; Shao, Tiantian; Li, Sijia; Du, Jia

    2016-03-01

    The seasonal characteristics of fluorescent components in chromophoric dissolved organic matter (CDOM) for lakes in the semiarid region of Northeast China were examined by excitation-emission matrix (EEM) spectra and parallel factor analysis (PARAFAC). Two humic-like (C1 and C2) and protein-like (C3 and C4) components were identified using PARAFAC. The average fluorescence intensity of the four components differed under seasonal variation from June and August 2013 to February and April 2014. Components 1 and 2 exhibited a strong linear correlation (R2 = 0.628). Significantly positive linear relationships between CDOM absorption coefficients a(254) (R2 = 0.72, 0.46, p < 0.01), a(280) (R2 = 0.77, 0.47, p < 0.01), a(350) (R2 = 0.76, 0.78, p < 0.01) and Fmax for two humic-like components (C1 and C2) were exhibited, respectively. A significant relationship (R2 = 0.930) was found between salinity and dissolved organic carbon (DOC). However, almost no obvious correlation was found between salinity and EEM-PARAFAC-extracted components except for C3 (R2 = 0.469). Results from this investigation demonstrate that the EEM-PARAFAC technique can be used to evaluate the seasonal dynamics of CDOM fluorescent components for inland waters in the semiarid regions of Northeast China, and to quantify CDOM components for other waters with similar environmental conditions.

  20. A lightweight, flow-based toolkit for parallel and distributed bioinformatics pipelines

    PubMed Central

    2011-01-01

    Background Bioinformatic analyses typically proceed as chains of data-processing tasks. A pipeline, or 'workflow', is a well-defined protocol, with a specific structure defined by the topology of data-flow interdependencies, and a particular functionality arising from the data transformations applied at each step. In computer science, the dataflow programming (DFP) paradigm defines software systems constructed in this manner, as networks of message-passing components. Thus, bioinformatic workflows can be naturally mapped onto DFP concepts. Results To enable the flexible creation and execution of bioinformatics dataflows, we have written a modular framework for parallel pipelines in Python ('PaPy'). A PaPy workflow is created from re-usable components connected by data-pipes into a directed acyclic graph, which together define nested higher-order map functions. The successive functional transformations of input data are evaluated on flexibly pooled compute resources, either local or remote. Input items are processed in batches of adjustable size, all flowing one to tune the trade-off between parallelism and lazy-evaluation (memory consumption). An add-on module ('NuBio') facilitates the creation of bioinformatics workflows by providing domain specific data-containers (e.g., for biomolecular sequences, alignments, structures) and functionality (e.g., to parse/write standard file formats). Conclusions PaPy offers a modular framework for the creation and deployment of parallel and distributed data-processing workflows. Pipelines derive their functionality from user-written, data-coupled components, so PaPy also can be viewed as a lightweight toolkit for extensible, flow-based bioinformatics data-processing. The simplicity and flexibility of distributed PaPy pipelines may help users bridge the gap between traditional desktop/workstation and grid computing. PaPy is freely distributed as open-source Python code at http://muralab.org/PaPy, and includes extensive documentation and annotated usage examples. PMID:21352538

  1. A lightweight, flow-based toolkit for parallel and distributed bioinformatics pipelines.

    PubMed

    Cieślik, Marcin; Mura, Cameron

    2011-02-25

    Bioinformatic analyses typically proceed as chains of data-processing tasks. A pipeline, or 'workflow', is a well-defined protocol, with a specific structure defined by the topology of data-flow interdependencies, and a particular functionality arising from the data transformations applied at each step. In computer science, the dataflow programming (DFP) paradigm defines software systems constructed in this manner, as networks of message-passing components. Thus, bioinformatic workflows can be naturally mapped onto DFP concepts. To enable the flexible creation and execution of bioinformatics dataflows, we have written a modular framework for parallel pipelines in Python ('PaPy'). A PaPy workflow is created from re-usable components connected by data-pipes into a directed acyclic graph, which together define nested higher-order map functions. The successive functional transformations of input data are evaluated on flexibly pooled compute resources, either local or remote. Input items are processed in batches of adjustable size, all flowing one to tune the trade-off between parallelism and lazy-evaluation (memory consumption). An add-on module ('NuBio') facilitates the creation of bioinformatics workflows by providing domain specific data-containers (e.g., for biomolecular sequences, alignments, structures) and functionality (e.g., to parse/write standard file formats). PaPy offers a modular framework for the creation and deployment of parallel and distributed data-processing workflows. Pipelines derive their functionality from user-written, data-coupled components, so PaPy also can be viewed as a lightweight toolkit for extensible, flow-based bioinformatics data-processing. The simplicity and flexibility of distributed PaPy pipelines may help users bridge the gap between traditional desktop/workstation and grid computing. PaPy is freely distributed as open-source Python code at http://muralab.org/PaPy, and includes extensive documentation and annotated usage examples.

  2. A first application of independent component analysis to extracting structure from stock returns.

    PubMed

    Back, A D; Weigend, A S

    1997-08-01

    This paper explores the application of a signal processing technique known as independent component analysis (ICA) or blind source separation to multivariate financial time series such as a portfolio of stocks. The key idea of ICA is to linearly map the observed multivariate time series into a new space of statistically independent components (ICs). We apply ICA to three years of daily returns of the 28 largest Japanese stocks and compare the results with those obtained using principal component analysis. The results indicate that the estimated ICs fall into two categories, (i) infrequent large shocks (responsible for the major changes in the stock prices), and (ii) frequent smaller fluctuations (contributing little to the overall level of the stocks). We show that the overall stock price can be reconstructed surprisingly well by using a small number of thresholded weighted ICs. In contrast, when using shocks derived from principal components instead of independent components, the reconstructed price is less similar to the original one. ICA is shown to be a potentially powerful method of analyzing and understanding driving mechanisms in financial time series. The application to portfolio optimization is described in Chin and Weigend (1998).

  3. Attentional Control via Parallel Target-Templates in Dual-Target Search

    PubMed Central

    Barrett, Doug J. K.; Zobay, Oliver

    2014-01-01

    Simultaneous search for two targets has been shown to be slower and less accurate than independent searches for the same two targets. Recent research suggests this ‘dual-target cost’ may be attributable to a limit in the number of target-templates than can guide search at any one time. The current study investigated this possibility by comparing behavioural responses during single- and dual-target searches for targets defined by their orientation. The results revealed an increase in reaction times for dual- compared to single-target searches that was largely independent of the number of items in the display. Response accuracy also decreased on dual- compared to single-target searches: dual-target accuracy was higher than predicted by a model restricting search guidance to a single target-template and lower than predicted by a model simulating two independent single-target searches. These results are consistent with a parallel model of dual-target search in which attentional control is exerted by more than one target-template at a time. The requirement to maintain two target-templates simultaneously, however, appears to impose a reduction in the specificity of the memory representation that guides search for each target. PMID:24489793

  4. Horizontal transfer of the msp130 gene supported the evolution of metazoan biomineralization.

    PubMed

    Ettensohn, Charles A

    2014-05-01

    It is widely accepted that biomineralized structures appeared independently in many metazoan clades during the Cambrian. How this occurred, and whether it involved the parallel co-option of a common set of biochemical and developmental pathways (i.e., a shared biomineralization "toolkit"), are questions that remain unanswered. Here, I provide evidence that horizontal gene transfer supported the evolution of biomineralization in some metazoans. I show that Msp130 proteins, first described as proteins expressed selectively by the biomineral-forming primary mesenchyme cells of the sea urchin embryo, have a much wider taxonomic distribution than was previously appreciated. Msp130 proteins are present in several invertebrate deuterostomes and in one protostome clade (molluscs). Surprisingly, closely related proteins are also present in many bacteria and several algae, and I propose that msp130 genes were introduced into metazoan lineages via multiple, independent horizontal gene transfer events. Phylogenetic analysis shows that the introduction of an ancestral msp130 gene occurred in the sea urchin lineage more than 250 million years ago and that msp130 genes underwent independent, parallel duplications in each of the metazoan phyla in which these genes are found. © 2014 Wiley Periodicals, Inc.

  5. Exploiting loop level parallelism in nonprocedural dataflow programs

    NASA Technical Reports Server (NTRS)

    Gokhale, Maya B.

    1987-01-01

    Discussed are how loop level parallelism is detected in a nonprocedural dataflow program, and how a procedural program with concurrent loops is scheduled. Also discussed is a program restructuring technique which may be applied to recursive equations so that concurrent loops may be generated for a seemingly iterative computation. A compiler which generates C code for the language described below has been implemented. The scheduling component of the compiler and the restructuring transformation are described.

  6. Decentralized Adaptive Control For Robots

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun

    1989-01-01

    Precise knowledge of dynamics not required. Proposed scheme for control of multijointed robotic manipulator calls for independent control subsystem for each joint, consisting of proportional/integral/derivative feedback controller and position/velocity/acceleration feedforward controller, both with adjustable gains. Independent joint controller compensates for unpredictable effects, gravitation, and dynamic coupling between motions of joints, while forcing joints to track reference trajectories. Scheme amenable to parallel processing in distributed computing system wherein each joint controlled by relatively simple algorithm on dedicated microprocessor.

  7. The adverse outcome pathway knowledge base

    EPA Science Inventory

    The rapid advancement of the Adverse Outcome Pathway (AOP) framework has been paralleled by the development of tools to store, analyse, and explore AOPs. The AOP Knowledge Base (AOP-KB) project has brought three independently developed platforms (Effectopedia, AOP-Wiki, and AOP-X...

  8. Scalable Parallel Density-based Clustering and Applications

    NASA Astrophysics Data System (ADS)

    Patwary, Mostofa Ali

    2014-04-01

    Recently, density-based clustering algorithms (DBSCAN and OPTICS) have gotten significant attention of the scientific community due to their unique capability of discovering arbitrary shaped clusters and eliminating noise data. These algorithms have several applications, which require high performance computing, including finding halos and subhalos (clusters) from massive cosmology data in astrophysics, analyzing satellite images, X-ray crystallography, and anomaly detection. However, parallelization of these algorithms are extremely challenging as they exhibit inherent sequential data access order, unbalanced workload resulting in low parallel efficiency. To break the data access sequentiality and to achieve high parallelism, we develop new parallel algorithms, both for DBSCAN and OPTICS, designed using graph algorithmic techniques. For example, our parallel DBSCAN algorithm exploits the similarities between DBSCAN and computing connected components. Using datasets containing up to a billion floating point numbers, we show that our parallel density-based clustering algorithms significantly outperform the existing algorithms, achieving speedups up to 27.5 on 40 cores on shared memory architecture and speedups up to 5,765 using 8,192 cores on distributed memory architecture. In our experiments, we found that while achieving the scalability, our algorithms produce clustering results with comparable quality to the classical algorithms.

  9. Performance of the SERI parallel-passage dehumidifer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schlepp, D.; Barlow, R.

    1984-09-01

    The key component in improving the performance of solar desiccant cooling systems is the dehumidifier. A parallel-passage geometry for the desiccant dehumidifier has been identified as meeting key criteria of low pressure drop, high mass transfer efficiency, and compact size. An experimental program to build and test a small-scale prototype of this design was undertaken in FY 1982, and the results are presented in this report. Computer models to predict the adsorption/desorption behavior of desiccant dehumidifiers were updated to take into account the geometry of the bed and predict potential system performance using the new component design. The parallel-passage designmore » proved to have high mass transfer effectiveness and low pressure drop over a wide range of test conditions typical of desiccant cooling system operation. The prototype dehumidifier averaged 93% effectiveness at pressure drops of less than 50 Pa at design point conditions. Predictions of system performance using models validated with the experimental data indicate that system thermal coefficients of performance (COPs) of 1.0 to 1.2 and electrical COPs above 8.5 are possible using this design.« less

  10. Acceleration characteristics of human ocular accommodation.

    PubMed

    Bharadwaj, Shrikant R; Schor, Clifton M

    2005-01-01

    Position and velocity of accommodation are known to increase with stimulus magnitude, however, little is known about acceleration properties. We investigated three acceleration properties: peak acceleration, time-to-peak acceleration and total duration of acceleration to step changes in defocus. Peak velocity and total duration of acceleration increased with response magnitude. Peak acceleration and time-to-peak acceleration remained independent of response magnitude. Independent first-order and second-order dynamic components of accommodation demonstrate that neural control of accommodation has an initial open-loop component that is independent of response magnitude and a closed-loop component that increases with response magnitude.

  11. On the measurement of fiber orientation in fiberboard

    Treesearch

    Otto Suchsland; Charles W. McMillin

    1983-01-01

    An attempt to measure the vertical component of fiber orientation in fiberboard is described. The experiment is based on the obvious reduction of the furnish fiber length which occurs by cutting thin microtome sections of the board parallel to the board plane. Only when no vertical fiber orientation component is present will the fibers contained in these sections have...

  12. Parallel approach for bioinspired algorithms

    NASA Astrophysics Data System (ADS)

    Zaporozhets, Dmitry; Zaruba, Daria; Kulieva, Nina

    2018-05-01

    In the paper, a probabilistic parallel approach based on the population heuristic, such as a genetic algorithm, is suggested. The authors proposed using a multithreading approach at the micro level at which new alternative solutions are generated. On each iteration, several threads that independently used the same population to generate new solutions can be started. After the work of all threads, a selection operator combines obtained results in the new population. To confirm the effectiveness of the suggested approach, the authors have developed software on the basis of which experimental computations can be carried out. The authors have considered a classic optimization problem – finding a Hamiltonian cycle in a graph. Experiments show that due to the parallel approach at the micro level, increment of running speed can be obtained on graphs with 250 and more vertices.

  13. Design and implementation of highly parallel pipelined VLSI systems

    NASA Astrophysics Data System (ADS)

    Delange, Alphonsus Anthonius Jozef

    A methodology and its realization as a prototype CAD (Computer Aided Design) system for the design and analysis of complex multiprocessor systems is presented. The design is an iterative process in which the behavioral specifications of the system components are refined into structural descriptions consisting of interconnections and lower level components etc. A model for the representation and analysis of multiprocessor systems at several levels of abstraction and an implementation of a CAD system based on this model are described. A high level design language, an object oriented development kit for tool design, a design data management system, and design and analysis tools such as a high level simulator and graphics design interface which are integrated into the prototype system and graphics interface are described. Procedures for the synthesis of semiregular processor arrays, and to compute the switching of input/output signals, memory management and control of processor array, and sequencing and segmentation of input/output data streams due to partitioning and clustering of the processor array during the subsequent synthesis steps, are described. The architecture and control of a parallel system is designed and each component mapped to a module or module generator in a symbolic layout library, compacted for design rules of VLSI (Very Large Scale Integration) technology. An example of the design of a processor that is a useful building block for highly parallel pipelined systems in the signal/image processing domains is given.

  14. A highly parallel multigrid-like method for the solution of the Euler equations

    NASA Technical Reports Server (NTRS)

    Tuminaro, Ray S.

    1989-01-01

    We consider a highly parallel multigrid-like method for the solution of the two dimensional steady Euler equations. The new method, introduced as filtering multigrid, is similar to a standard multigrid scheme in that convergence on the finest grid is accelerated by iterations on coarser grids. In the filtering method, however, additional fine grid subproblems are processed concurrently with coarse grid computations to further accelerate convergence. These additional problems are obtained by splitting the residual into a smooth and an oscillatory component. The smooth component is then used to form a coarse grid problem (similar to standard multigrid) while the oscillatory component is used for a fine grid subproblem. The primary advantage in the filtering approach is that fewer iterations are required and that most of the additional work per iteration can be performed in parallel with the standard coarse grid computations. We generalize the filtering algorithm to a version suitable for nonlinear problems. We emphasize that this generalization is conceptually straight-forward and relatively easy to implement. In particular, no explicit linearization (e.g., formation of Jacobians) needs to be performed (similar to the FAS multigrid approach). We illustrate the nonlinear version by applying it to the Euler equations, and presenting numerical results. Finally, a performance evaluation is made based on execution time models and convergence information obtained from numerical experiments.

  15. Evolution of Kelvin-Helmholtz instability at Venus in the presence of the parallel magnetic field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, H. Y.; Key Laboratory of Planetary Sciences, Chinese Academy of Sciences, Nanjing 210008; Cao, J. B.

    2015-06-15

    Two-dimensional MHD simulations were performed to study the evolution of the Kelvin-Helmholtz (KH) instability at the Venusian ionopause in response to the strong flow shear in presence of the in-plane magnetic field parallel to the flow direction. The physical behavior of the KH instability as well as the triggering and occurrence conditions for highly rolled-up vortices are characterized through several physical parameters, including Alfvén Mach number on the upper side of the layer, the density ratio, and the ratio of parallel magnetic fields between two sides of the layer. Using these parameters, the simulations show that both the high densitymore » ratio and the parallel magnetic field component across the boundary layer play a role of stabilizing the instability. In the high density ratio case, the amount of total magnetic energy in the final quasi-steady status is much more than that in the initial status, which is clearly different from the case with low density ratio. We particularly investigate the nonlinear development of the case that has a high density ratio and uniform magnetic field. Before the instability saturation, a single magnetic island is formed and evolves into two quasi-steady islands in the non-linear phase. A quasi-steady pattern eventually forms and is embedded within a uniform magnetic field and a broadened boundary layer. The estimation of loss rates of ions from Venus indicates that the stabilizing effect of the parallel magnetic field component on the KH instability becomes strong in the case of high density ratio.« less

  16. Conceptual design of a hybrid parallel mechanism for mask exchanging of TMT

    NASA Astrophysics Data System (ADS)

    Wang, Jianping; Zhou, Hongfei; Li, Kexuan; Zhou, Zengxiang; Zhai, Chao

    2015-10-01

    Mask exchange system is an important part of the Multi-Object Broadband Imaging Echellette (MOBIE) on the Thirty Meter Telescope (TMT). To solve the problem of stiffness changing with the gravity vector of the mask exchange system in the MOBIE, the hybrid parallel mechanism design method was introduced into the whole research. By using the characteristics of high stiffness and precision of parallel structure, combined with large moving range of serial structure, a conceptual design of a hybrid parallel mask exchange system based on 3-RPS parallel mechanism was presented. According to the position requirements of the MOBIE, the SolidWorks structure model of the hybrid parallel mask exchange robot was established and the appropriate installation position without interfering with the related components and light path in the MOBIE of TMT was analyzed. Simulation results in SolidWorks suggested that 3-RPS parallel platform had good stiffness property in different gravity vector directions. Furthermore, through the research of the mechanism theory, the inverse kinematics solution of the 3-RPS parallel platform was calculated and the mathematical relationship between the attitude angle of moving platform and the angle of ball-hinges on the moving platform was established, in order to analyze the attitude adjustment ability of the hybrid parallel mask exchange robot. The proposed conceptual design has some guiding significance for the design of mask exchange system of the MOBIE on TMT.

  17. GWM-VI: groundwater management with parallel processing for multiple MODFLOW versions

    USGS Publications Warehouse

    Banta, Edward R.; Ahlfeld, David P.

    2013-01-01

    Groundwater Management–Version Independent (GWM–VI) is a new version of the Groundwater Management Process of MODFLOW. The Groundwater Management Process couples groundwater-flow simulation with a capability to optimize stresses on the simulated aquifer based on an objective function and constraints imposed on stresses and aquifer state. GWM–VI extends prior versions of Groundwater Management in two significant ways—(1) it can be used with any version of MODFLOW that meets certain requirements on input and output, and (2) it is structured to allow parallel processing of the repeated runs of the MODFLOW model that are required to solve the optimization problem. GWM–VI uses the same input structure for files that describe the management problem as that used by prior versions of Groundwater Management. GWM–VI requires only minor changes to the input files used by the MODFLOW model. GWM–VI uses the Joint Universal Parameter IdenTification and Evaluation of Reliability Application Programming Interface (JUPITER-API) to implement both version independence and parallel processing. GWM–VI communicates with the MODFLOW model by manipulating certain input files and interpreting results from the MODFLOW listing file and binary output files. Nearly all capabilities of prior versions of Groundwater Management are available in GWM–VI. GWM–VI has been tested with MODFLOW-2005, MODFLOW-NWT (a Newton formulation for MODFLOW-2005), MF2005-FMP2 (the Farm Process for MODFLOW-2005), SEAWAT, and CFP (Conduit Flow Process for MODFLOW-2005). This report provides sample problems that demonstrate a range of applications of GWM–VI and the directory structure and input information required to use the parallel-processing capability.

  18. Characterization of Harmonic Signal Acquisition with Parallel Dipole and Multipole Detectors

    NASA Astrophysics Data System (ADS)

    Park, Sung-Gun; Anderson, Gordon A.; Bruce, James E.

    2018-04-01

    Fourier transform ion cyclotron resonance mass spectrometry (FTICR-MS) is a powerful instrument for the study of complex biological samples due to its high resolution and mass measurement accuracy. However, the relatively long signal acquisition periods needed to achieve high resolution can serve to limit applications of FTICR-MS. The use of multiple pairs of detector electrodes enables detection of harmonic frequencies present at integer multiples of the fundamental cyclotron frequency, and the obtained resolving power for a given acquisition period increases linearly with the order of harmonic signal. However, harmonic signal detection also increases spectral complexity and presents challenges for interpretation. In the present work, ICR cells with independent dipole and harmonic detection electrodes and preamplifiers are demonstrated. A benefit of this approach is the ability to independently acquire fundamental and multiple harmonic signals in parallel using the same ions under identical conditions, enabling direct comparison of achieved performance as parameters are varied. Spectra from harmonic signals showed generally higher resolving power than spectra acquired with fundamental signals and equal signal duration. In addition, the maximum observed signal to noise (S/N) ratio from harmonic signals exceeded that of fundamental signals by 50 to 100%. Finally, parallel detection of fundamental and harmonic signals enables deconvolution of overlapping harmonic signals since observed fundamental frequencies can be used to unambiguously calculate all possible harmonic frequencies. Thus, the present application of parallel fundamental and harmonic signal acquisition offers a general approach to improve utilization of harmonic signals to yield high-resolution spectra with decreased acquisition time. [Figure not available: see fulltext.

  19. Massively Convergent Evolution for Ribosomal Protein Gene Content in Plastid and Mitochondrial Genomes

    PubMed Central

    Maier, Uwe-G; Zauner, Stefan; Woehle, Christian; Bolte, Kathrin; Hempel, Franziska; Allen, John F.; Martin, William F.

    2013-01-01

    Plastid and mitochondrial genomes have undergone parallel evolution to encode the same functional set of genes. These encode conserved protein components of the electron transport chain in their respective bioenergetic membranes and genes for the ribosomes that express them. This highly convergent aspect of organelle genome evolution is partly explained by the redox regulation hypothesis, which predicts a separate plastid or mitochondrial location for genes encoding bioenergetic membrane proteins of either photosynthesis or respiration. Here we show that convergence in organelle genome evolution is far stronger than previously recognized, because the same set of genes for ribosomal proteins is independently retained by both plastid and mitochondrial genomes. A hitherto unrecognized selective pressure retains genes for the same ribosomal proteins in both organelles. On the Escherichia coli ribosome assembly map, the retained proteins are implicated in 30S and 50S ribosomal subunit assembly and initial rRNA binding. We suggest that ribosomal assembly imposes functional constraints that govern the retention of ribosomal protein coding genes in organelles. These constraints are subordinate to redox regulation for electron transport chain components, which anchor the ribosome to the organelle genome in the first place. As organelle genomes undergo reduction, the rRNAs also become smaller. Below size thresholds of approximately 1,300 nucleotides (16S rRNA) and 2,100 nucleotides (26S rRNA), all ribosomal protein coding genes are lost from organelles, while electron transport chain components remain organelle encoded as long as the organelles use redox chemistry to generate a proton motive force. PMID:24259312

  20. Category-based guidance of spatial attention during visual search for feature conjunctions.

    PubMed

    Nako, Rebecca; Grubert, Anna; Eimer, Martin

    2016-10-01

    The question whether alphanumerical category is involved in the control of attentional target selection during visual search remains a contentious issue. We tested whether category-based attentional mechanisms would guide the allocation of attention under conditions where targets were defined by a combination of alphanumerical category and a basic visual feature, and search displays could contain both targets and partially matching distractor objects. The N2pc component was used as an electrophysiological marker of attentional object selection in tasks where target objects were defined by a conjunction of color and category (Experiment 1) or shape and category (Experiment 2). Some search displays contained the target or a nontarget object that matched either the target color/shape or its category among 3 nonmatching distractors. In other displays, the target and a partially matching nontarget object appeared together. N2pc components were elicited not only by targets and by color- or shape-matching nontargets, but also by category-matching nontarget objects, even on trials where a target was present in the same display. On these trials, the summed N2pc components to the 2 types of partially matching nontargets were initially equal in size to the target N2pc, suggesting that attention was allocated simultaneously and independently to all objects with target-matching features during the early phase of attentional processing. Results demonstrate that alphanumerical category is a genuine guiding feature that can operate in parallel with color or shape information to control the deployment of attention during visual search. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Emms, David M.; Covshoff, Sarah; Hibberd, Julian M.

    C4 photosynthesis is considered one of the most remarkable examples of evolutionary convergence in eukaryotes. However, it is unknown whether the evolution of C4 photosynthesis required the evolution of new genes. Genome-wide gene-tree species-tree reconciliation of seven monocot species that span two origins of C4 photosynthesis revealed that there was significant parallelism in the duplication and retention of genes coincident with the evolution of C4 photosynthesis in these lineages. Specifically, 21 orthologous genes were duplicated and retained independently in parallel at both C4 origins. Analysis of this gene cohort revealed that the set of parallel duplicated and retained genes ismore » enriched for genes that are preferentially expressed in bundle sheath cells, the cell type in which photosynthesis was activated during C4 evolution. Moreover, functional analysis of the cohort of parallel duplicated genes identified SWEET-13 as a potential key transporter in the evolution of C4 photosynthesis in grasses, and provides new insight into the mechanism of phloem loading in these C4 species.« less

  2. Recognition of the optical packet header for two channels utilizing the parallel reservoir computing based on a semiconductor ring laser

    NASA Astrophysics Data System (ADS)

    Bao, Xiurong; Zhao, Qingchun; Yin, Hongxi; Qin, Jie

    2018-05-01

    In this paper, an all-optical parallel reservoir computing (RC) system with two channels for the optical packet header recognition is proposed and simulated, which is based on a semiconductor ring laser (SRL) with the characteristic of bidirectional light paths. The parallel optical loops are built through the cross-feedback of the bidirectional light paths where every optical loop can independently recognize each injected optical packet header. Two input signals are mapped and recognized simultaneously by training all-optical parallel reservoir, which is attributed to the nonlinear states in the laser. The recognition of optical packet headers for two channels from 4 bits to 32 bits is implemented through the simulation optimizing system parameters and therefore, the optimal recognition error ratio is 0. Since this structure can combine with the wavelength division multiplexing (WDM) optical packet switching network, the wavelength of each channel of optical packet headers for recognition can be different, and a better recognition result can be obtained.

  3. Parallelized implicit propagators for the finite-difference Schrödinger equation

    NASA Astrophysics Data System (ADS)

    Parker, Jonathan; Taylor, K. T.

    1995-08-01

    We describe the application of block Gauss-Seidel and block Jacobi iterative methods to the design of implicit propagators for finite-difference models of the time-dependent Schrödinger equation. The block-wise iterative methods discussed here are mixed direct-iterative methods for solving simultaneous equations, in the sense that direct methods (e.g. LU decomposition) are used to invert certain block sub-matrices, and iterative methods are used to complete the solution. We describe parallel variants of the basic algorithm that are well suited to the medium- to coarse-grained parallelism of work-station clusters, and MIMD supercomputers, and we show that under a wide range of conditions, fine-grained parallelism of the computation can be achieved. Numerical tests are conducted on a typical one-electron atom Hamiltonian. The methods converge robustly to machine precision (15 significant figures), in some cases in as few as 6 or 7 iterations. The rate of convergence is nearly independent of the finite-difference grid-point separations.

  4. Characterization of extracellular polymeric substances in biofilms under long-term exposure to ciprofloxacin antibiotic using fluorescence excitation-emission matrix and parallel factor analysis.

    PubMed

    Gu, Chaochao; Gao, Pin; Yang, Fan; An, Dongxuan; Munir, Mariya; Jia, Hanzhong; Xue, Gang; Ma, Chunyan

    2017-05-01

    The presence of antibiotic residues in the environment has been regarded as an emerging concern due to their potential adverse environmental consequences such as antibiotic resistance. However, the interaction between antibiotics and extracellular polymeric substances (EPSs) of biofilms in wastewater treatment systems is not entirely clear. In this study, the effect of ciprofloxacin (CIP) antibiotic on biofilm EPS matrix was investigated and characterized using fluorescence excitation-emission matrix (EEM) and parallel factor (PARAFAC) analysis. Physicochemical analysis showed that the proteins were the major EPS fraction, and their contents increased gradually with an increase in CIP concentration (0-300 μg/L). Based on the characterization of biofilm tightly bound EPS (TB-EPS) by EEM, three fluorescent components were identified by PARAFAC analysis. Component C1 was associated with protein-like substances, and components C2 and C3 belonged to humic-like substances. Component C1 exhibited an increasing trend as the CIP addition increased. Pearson's correlation results showed that CIP correlated significantly with the protein contents and component C1, while strong correlations were also found among UV 254 , dissolved organic carbon, humic acids, and component C3. A combined use of EEM-PARAFAC analysis and chemical measurements was demonstrated as a favorable approach for the characterization of variations in biofilm EPS in the presence of CIP antibiotic.

  5. Insights into the interaction between carbamazepine and natural dissolved organic matter in the Yangtze Estuary using fluorescence excitation-emission matrix spectra coupled with parallel factor analysis.

    PubMed

    Wang, Ying; Zhang, Manman; Fu, Jun; Li, Tingting; Wang, Jinggang; Fu, Yingyu

    2016-10-01

    The interaction between carbamazepine (CBZ) and dissolved organic matter (DOM) from three zones (the nearshore, the river channel, and the coastal areas) in the Yangtze Estuary was investigated using fluorescence quenching titration combined with excitation emission matrix spectra and parallel factor analysis (PARAFAC). The complexation between CBZ and DOM was demonstrated by the increase in hydrogen bonding and the disappearance of the C=O stretch obtained from the Fourier transform infrared spectroscopy analysis. The results indicated that two protein-like substances (component 2 and component3) and two humic-like substances (component 1 and 4) were identified in the DOM from the Yangtze Estuary. The fluorescence quenching curves of each component with the addition of CBZ and the Ryan and Weber model calculation results both demonstrated that the different components exhibited different complexation activities with CBZ. The protein-like components had a stronger affinity with CBZ than did the humic-like substances. On the other hand, the autochthonous tyrosine-like C2 played an important role in the complexation with DOM from the river channel and coastal areas, while C3 influenced by anthropogenic activities showed an obvious effect in the nearshore area. DOMs from the river channel have the highest binding capacity for CBZ, which may ascribe to the relatively high phenol content group in the DOM.

  6. Development Of A Parallel Performance Model For The THOR Neutral Particle Transport Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yessayan, Raffi; Azmy, Yousry; Schunert, Sebastian

    The THOR neutral particle transport code enables simulation of complex geometries for various problems from reactor simulations to nuclear non-proliferation. It is undergoing a thorough V&V requiring computational efficiency. This has motivated various improvements including angular parallelization, outer iteration acceleration, and development of peripheral tools. For guiding future improvements to the code’s efficiency, better characterization of its parallel performance is useful. A parallel performance model (PPM) can be used to evaluate the benefits of modifications and to identify performance bottlenecks. Using INL’s Falcon HPC, the PPM development incorporates an evaluation of network communication behavior over heterogeneous links and a functionalmore » characterization of the per-cell/angle/group runtime of each major code component. After evaluating several possible sources of variability, this resulted in a communication model and a parallel portion model. The former’s accuracy is bounded by the variability of communication on Falcon while the latter has an error on the order of 1%.« less

  7. RRAM-based parallel computing architecture using k-nearest neighbor classification for pattern recognition

    NASA Astrophysics Data System (ADS)

    Jiang, Yuning; Kang, Jinfeng; Wang, Xinan

    2017-03-01

    Resistive switching memory (RRAM) is considered as one of the most promising devices for parallel computing solutions that may overcome the von Neumann bottleneck of today’s electronic systems. However, the existing RRAM-based parallel computing architectures suffer from practical problems such as device variations and extra computing circuits. In this work, we propose a novel parallel computing architecture for pattern recognition by implementing k-nearest neighbor classification on metal-oxide RRAM crossbar arrays. Metal-oxide RRAM with gradual RESET behaviors is chosen as both the storage and computing components. The proposed architecture is tested by the MNIST database. High speed (~100 ns per example) and high recognition accuracy (97.05%) are obtained. The influence of several non-ideal device properties is also discussed, and it turns out that the proposed architecture shows great tolerance to device variations. This work paves a new way to achieve RRAM-based parallel computing hardware systems with high performance.

  8. Comparison of three-dimensional fluorescence analysis methods for predicting formation of trihalomethanes and haloacetic acids.

    PubMed

    Peleato, Nicolás M; Andrews, Robert C

    2015-01-01

    This work investigated the application of several fluorescence excitation-emission matrix analysis methods as natural organic matter (NOM) indicators for use in predicting the formation of trihalomethanes (THMs) and haloacetic acids (HAAs). Waters from four different sources (two rivers and two lakes) were subjected to jar testing followed by 24hr disinfection by-product formation tests using chlorine. NOM was quantified using three common measures: dissolved organic carbon, ultraviolet absorbance at 254 nm, and specific ultraviolet absorbance as well as by principal component analysis, peak picking, and parallel factor analysis of fluorescence spectra. Based on multi-linear modeling of THMs and HAAs, principle component (PC) scores resulted in the lowest mean squared prediction error of cross-folded test sets (THMs: 43.7 (μg/L)(2), HAAs: 233.3 (μg/L)(2)). Inclusion of principle components representative of protein-like material significantly decreased prediction error for both THMs and HAAs. Parallel factor analysis did not identify a protein-like component and resulted in prediction errors similar to traditional NOM surrogates as well as fluorescence peak picking. These results support the value of fluorescence excitation-emission matrix-principal component analysis as a suitable NOM indicator in predicting the formation of THMs and HAAs for the water sources studied. Copyright © 2014. Published by Elsevier B.V.

  9. Direct Machining of Low-Loss THz Waveguide Components With an RF Choke.

    PubMed

    Lewis, Samantha M; Nanni, Emilio A; Temkin, Richard J

    2014-12-01

    We present results for the successful fabrication of low-loss THz metallic waveguide components using direct machining with a CNC end mill. The approach uses a split-block machining process with the addition of an RF choke running parallel to the waveguide. The choke greatly reduces coupling to the parasitic mode of the parallel-plate waveguide produced by the split-block. This method has demonstrated loss as low as 0.2 dB/cm at 280 GHz for a copper WR-3 waveguide. It has also been used in the fabrication of 3 and 10 dB directional couplers in brass, demonstrating excellent agreement with design simulations from 240-260 GHz. The method may be adapted to structures with features on the order of 200 μm.

  10. Use of RORA for Complex Ground-Water Flow Conditions

    USGS Publications Warehouse

    Rutledge, A.T.

    2004-01-01

    The RORA computer program for estimating recharge is based on a condition in which ground water flows perpendicular to the nearest stream that receives ground-water discharge. The method, therefore, does not explicitly account for the ground-water-flow component that is parallel to the stream. Hypothetical finite-difference simulations are used to demonstrate effects of complex flow conditions that consist of two components: one that is perpendicular to the stream and one that is parallel to the stream. Results of the simulations indicate that the RORA program can be used if certain constraints are applied in the estimation of the recession index, an input variable to the program. These constraints apply to a mathematical formulation based on aquifer properties, recession of ground-water levels, and recession of streamflow.

  11. A visual parallel-BCI speller based on the time-frequency coding strategy

    NASA Astrophysics Data System (ADS)

    Xu, Minpeng; Chen, Long; Zhang, Lixin; Qi, Hongzhi; Ma, Lan; Tang, Jiabei; Wan, Baikun; Ming, Dong

    2014-04-01

    Objective. Spelling is one of the most important issues in brain-computer interface (BCI) research. This paper is to develop a visual parallel-BCI speller system based on the time-frequency coding strategy in which the sub-speller switching among four simultaneously presented sub-spellers and the character selection are identified in a parallel mode. Approach. The parallel-BCI speller was constituted by four independent P300+SSVEP-B (P300 plus SSVEP blocking) spellers with different flicker frequencies, thereby all characters had a specific time-frequency code. To verify its effectiveness, 11 subjects were involved in the offline and online spellings. A classification strategy was designed to recognize the target character through jointly using the canonical correlation analysis and stepwise linear discriminant analysis. Main results. Online spellings showed that the proposed parallel-BCI speller had a high performance, reaching the highest information transfer rate of 67.4 bit min-1, with an average of 54.0 bit min-1 and 43.0 bit min-1 in the three rounds and five rounds, respectively. Significance. The results indicated that the proposed parallel-BCI could be effectively controlled by users with attention shifting fluently among the sub-spellers, and highly improved the BCI spelling performance.

  12. Parallel Preconditioning for CFD Problems on the CM-5

    NASA Technical Reports Server (NTRS)

    Simon, Horst D.; Kremenetsky, Mark D.; Richardson, John; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    Up to today, preconditioning methods on massively parallel systems have faced a major difficulty. The most successful preconditioning methods in terms of accelerating the convergence of the iterative solver such as incomplete LU factorizations are notoriously difficult to implement on parallel machines for two reasons: (1) the actual computation of the preconditioner is not very floating-point intensive, but requires a large amount of unstructured communication, and (2) the application of the preconditioning matrix in the iteration phase (i.e. triangular solves) are difficult to parallelize because of the recursive nature of the computation. Here we present a new approach to preconditioning for very large, sparse, unsymmetric, linear systems, which avoids both difficulties. We explicitly compute an approximate inverse to our original matrix. This new preconditioning matrix can be applied most efficiently for iterative methods on massively parallel machines, since the preconditioning phase involves only a matrix-vector multiplication, with possibly a dense matrix. Furthermore the actual computation of the preconditioning matrix has natural parallelism. For a problem of size n, the preconditioning matrix can be computed by solving n independent small least squares problems. The algorithm and its implementation on the Connection Machine CM-5 are discussed in detail and supported by extensive timings obtained from real problem data.

  13. On high-latitude convection field inhomogeneities, parallel electric fields and inverted-V precipitation events

    NASA Technical Reports Server (NTRS)

    Lennartsson, W.

    1977-01-01

    A simple model of a static electric field with a component parallel to the magnetic field is proposed for calculating the electric field and current distributions at various altitudes when the horizontal distribution of the convection electric field is given at a certain altitude above the auroral ionosphere. The model is shown to be compatible with satellite observations of inverted-V electron precipitation structures and associated irregularities in the convection electric field.

  14. Electroencephalographic dynamics of musical emotion perception revealed by independent spectral components.

    PubMed

    Lin, Yuan-Pin; Duann, Jeng-Ren; Chen, Jyh-Horng; Jung, Tzyy-Ping

    2010-04-21

    This study explores the electroencephalographic (EEG) correlates of emotional experience during music listening. Independent component analysis and analysis of variance were used to separate statistically independent spectral changes of the EEG in response to music-induced emotional processes. An independent brain process with equivalent dipole located in the fronto-central region exhibited distinct δ-band and θ-band power changes associated with self-reported emotional states. Specifically, the emotional valence was associated with δ-power decreases and θ-power increases in the frontal-central area, whereas the emotional arousal was accompanied by increases in both δ and θ powers. The resultant emotion-related component activations that were less interfered by the activities from other brain processes complement previous EEG studies of emotion perception to music.

  15. Understanding decimal proportions: discrete representations, parallel access, and privileged processing of zero.

    PubMed

    Varma, Sashank; Karl, Stacy R

    2013-05-01

    Much of the research on mathematical cognition has focused on the numbers 1, 2, 3, 4, 5, 6, 7, 8, and 9, with considerably less attention paid to more abstract number classes. The current research investigated how people understand decimal proportions--rational numbers between 0 and 1 expressed in the place-value symbol system. The results demonstrate that proportions are represented as discrete structures and processed in parallel. There was a semantic interference effect: When understanding a proportion expression (e.g., "0.29"), both the correct proportion referent (e.g., 0.29) and the incorrect natural number referent (e.g., 29) corresponding to the visually similar natural number expression (e.g., "29") are accessed in parallel, and when these referents lead to conflicting judgments, performance slows. There was also a syntactic interference effect, generalizing the unit-decade compatibility effect for natural numbers: When comparing two proportions, their tenths and hundredths components are processed in parallel, and when the different components lead to conflicting judgments, performance slows. The results also reveal that zero decimals--proportions ending in zero--serve multiple cognitive functions, including eliminating semantic interference and speeding processing. The current research also extends the distance, semantic congruence, and SNARC effects from natural numbers to decimal proportions. These findings inform how people understand the place-value symbol system, and the mental implementation of mathematical symbol systems more generally. Copyright © 2013 Elsevier Inc. All rights reserved.

  16. Generation of Highly Oblique Lower Band Chorus Via Nonlinear Three-Wave Resonance

    DOE PAGES

    Fu, Xiangrong; Gary, Stephen Peter; Reeves, Geoffrey D.; ...

    2017-09-05

    Chorus in the inner magnetosphere has been observed frequently at geomagnetically active times, typically exhibiting a two-band structure with a quasi-parallel lower band and an upper band with a broad range of wave normal angles. But recent observations by Van Allen Probes confirm another type of lower band chorus, which has a large wave normal angle close to the resonance cone angle. It has been proposed that these waves could be generated by a low-energy beam-like electron component or by temperature anisotropy of keV electrons in the presence of a low-energy plateau-like electron component. This paper, however, presents an alternativemore » mechanism for generation of this highly oblique lower band chorus. Through a nonlinear three-wave resonance, a quasi-parallel lower band chorus wave can interact with a mildly oblique upper band chorus wave, producing a highly oblique quasi-electrostatic lower band chorus wave. This theoretical analysis is confirmed by 2-D electromagnetic particle-in-cell simulations. Furthermore, as the newly generated waves propagate away from the equator, their wave normal angle can further increase and they are able to scatter low-energy electrons to form a plateau-like structure in the parallel velocity distribution. As a result, the three-wave resonance mechanism may also explain the generation of quasi-parallel upper band chorus which has also been observed in the magnetosphere.« less

  17. Chromophoric dissolved organic matter (CDOM) variability in Barataria Basin using excitation-emission matrix (EEM) fluorescence and parallel factor analysis (PARAFAC).

    PubMed

    Singh, Shatrughan; D'Sa, Eurico J; Swenson, Erick M

    2010-07-15

    Chromophoric dissolved organic matter (CDOM) variability in Barataria Basin, Louisiana, USA,was examined by excitation emission matrix (EEM) fluorescence combined with parallel factor analysis (PARAFAC). CDOM optical properties of absorption and fluorescence at 355nm along an axial transect (36 stations) during March, April, and May 2008 showed an increasing trend from the marine end member to the upper basin with mean CDOM absorption of 11.06 + or - 5.01, 10.05 + or - 4.23, 11.67 + or - 6.03 (m(-)(1)) and fluorescence 0.80 + or - 0.37, 0.78 + or - 0.39, 0.75 + or - 0.51 (RU), respectively. PARAFAC analysis identified two terrestrial humic-like (component 1 and 2), one non-humic like (component 3), and one soil derived humic acid like (component 4) components. The spatial variation of the components showed an increasing trend from station 1 (near the mouth of basin) to station 36 (end member of bay; upper basin). Deviations from this increasing trend were observed at a bayou channel with very high chlorophyll-a concentrations especially for component 3 in May 2008 that suggested autochthonous production of CDOM. The variability of components with salinity indicated conservative mixing along the middle part of the transect. Component 1 and 4 were found to be relatively constant, while components 2 and 3 revealed an inverse relationship for the sampling period. Total organic carbon showed increasing trend for each of the components. An increase in humification and a decrease in fluorescence indices along the transect indicated an increase in terrestrial derived organic matter and reduced microbial activity from lower to upper basin. The use of these indices along with PARAFAC results improved dissolved organic matter characterization in the Barataria Basin. Copyright 2010 Elsevier B.V. All rights reserved.

  18. Communication Optimal Parallel Multiplication of Sparse Random Matrices

    DTIC Science & Technology

    2013-02-21

    Definition 2.1), and (2) the algorithm is sparsity- independent, where the computation is statically partitioned to processors independent of the sparsity...struc- ture of the input matrices (see Definition 2.5). The second assumption applies to nearly all existing al- gorithms for general sparse matrix-matrix...where A and B are n× n ER(d) matrices: Definition 2.1 An ER(d) matrix is an adjacency matrix of an Erdős-Rényi graph with parameters n and d/n. That

  19. Parallel and nonparallel behavioural evolution in response to parasitism and predation in Trinidadian guppies.

    PubMed

    Jacquin, L; Reader, S M; Boniface, A; Mateluna, J; Patalas, I; Pérez-Jvostov, F; Hendry, A P

    2016-07-01

    Natural enemies such as predators and parasites are known to shape intraspecific variability of behaviour and personality in natural populations, yet several key questions remain: (i) What is the relative importance of predation vs. parasitism in shaping intraspecific variation of behaviour across generations? (ii) What are the contributions of genetic and plastic effects to this behavioural divergence? (iii) And to what extent are responses to predation and parasitism repeatable across independent evolutionary lineages? We addressed these questions using Trinidadian guppies (Poecilia reticulata) (i) varying in their exposure to dangerous fish predators and Gyrodactylus ectoparasites for (ii) both wild-caught F0 and laboratory-reared F2 individuals and coming from (iii) multiple independent evolutionary lineages (i.e. independent drainages). Several key findings emerged. First, a population's history of predation and parasitism influenced behavioural profiles, but to different extent depending on the behaviour considered (activity, shoaling or boldness). Second, we had evidence for some genetic effects of predation regime on behaviour, with differences in activity of F2 laboratory-reared individuals, but not for parasitism, which had only plastic effects on the boldness of wild-caught F0 individuals. Third, the two lineages showed a mixture of parallel and nonparallel responses to predation/parasitism, with parallel responses being stronger for predation than for parasitism and for activity and boldness than for shoaling. These findings suggest that different sets of behaviours provide different pay-offs in alternative predation/parasitism environments and that parasitism has more transient effects in shaping intraspecific variation of behaviour than does predation. © 2016 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2016 European Society For Evolutionary Biology.

  20. Comparative evaluation of the indigenous microbial diversity vs. drilling fluid contaminants in the NEEM Greenland ice core.

    PubMed

    Miteva, Vanya; Burlingame, Caroline; Sowers, Todd; Brenchley, Jean

    2014-08-01

    Demonstrating that the detected microbial diversity in nonaseptically drilled deep ice cores is truly indigenous is challenging because of potential contamination with exogenous microbial cells. The NEEM Greenland ice core project provided a first-time opportunity to determine the origin and extent of contamination throughout drilling. We performed multiple parallel cultivation and culture-independent analyses of five decontaminated ice core samples from different depths (100-2051 m), the drilling fluid and its components Estisol and Coasol, and the drilling chips collected during drilling. We created a collection of diverse bacterial and fungal isolates (84 from the drilling fluid and its components, 45 from decontaminated ice, and 66 from drilling chips). Their categorization as contaminants or intrinsic glacial ice microorganisms was based on several criteria, including phylogenetic analyses, genomic fingerprinting, phenotypic characteristics, and presence in drilling fluid, chips, and/or ice. Firmicutes and fungi comprised the dominant group of contaminants among isolates and cloned rRNA genes. Conversely, most Proteobacteria and Actinobacteria originating from the ice were identified as intrinsic. This study provides a database of potential contaminants useful for future studies of NEEM cores and can contribute toward developing standardized protocols for contamination detection and ensuring the authenticity of the microbial diversity in deep glacial ice. © 2014 Federation of European Microbiological Societies. Published by John Wiley & Sons Ltd. All rights reserved.

  1. Latitudinal Clines of the Human Vitamin D Receptor and Skin Color Genes.

    PubMed

    Tiosano, Dov; Audi, Laura; Climer, Sharlee; Zhang, Weixiong; Templeton, Alan R; Fernández-Cancio, Monica; Gershoni-Baruch, Ruth; Sánchez-Muro, José Miguel; El Kholy, Mohamed; Hochberg, Zèev

    2016-05-03

    The well-documented latitudinal clines of genes affecting human skin color presumably arise from the need for protection from intense ultraviolet radiation (UVR) vs. the need to use UVR for vitamin D synthesis. Sampling 751 subjects from a broad range of latitudes and skin colors, we investigated possible multilocus correlated adaptation of skin color genes with the vitamin D receptor gene (VDR), using a vector correlation metric and network method called BlocBuster. We discovered two multilocus networks involving VDR promoter and skin color genes that display strong latitudinal clines as multilocus networks, even though many of their single gene components do not. Considered one by one, the VDR components of these networks show diverse patterns: no cline, a weak declining latitudinal cline outside of Africa, and a strong in- vs. out-of-Africa frequency pattern. We confirmed these results with independent data from HapMap. Standard linkage disequilibrium analyses did not detect these networks. We applied BlocBuster across the entire genome, showing that our networks are significant outliers for interchromosomal disequilibrium that overlap with environmental variation relevant to the genes' functions. These results suggest that these multilocus correlations most likely arose from a combination of parallel selective responses to a common environmental variable and coadaptation, given the known Mendelian epistasis among VDR and the skin color genes. Copyright © 2016 Tiosano et al.

  2. Latitudinal Clines of the Human Vitamin D Receptor and Skin Color Genes

    PubMed Central

    Tiosano, Dov; Audi, Laura; Climer, Sharlee; Zhang, Weixiong; Templeton, Alan R.; Fernández-Cancio, Monica; Gershoni-Baruch, Ruth; Sánchez-Muro, José Miguel; El Kholy, Mohamed; Hochberg, Zèev

    2016-01-01

    The well-documented latitudinal clines of genes affecting human skin color presumably arise from the need for protection from intense ultraviolet radiation (UVR) vs. the need to use UVR for vitamin D synthesis. Sampling 751 subjects from a broad range of latitudes and skin colors, we investigated possible multilocus correlated adaptation of skin color genes with the vitamin D receptor gene (VDR), using a vector correlation metric and network method called BlocBuster. We discovered two multilocus networks involving VDR promoter and skin color genes that display strong latitudinal clines as multilocus networks, even though many of their single gene components do not. Considered one by one, the VDR components of these networks show diverse patterns: no cline, a weak declining latitudinal cline outside of Africa, and a strong in- vs. out-of-Africa frequency pattern. We confirmed these results with independent data from HapMap. Standard linkage disequilibrium analyses did not detect these networks. We applied BlocBuster across the entire genome, showing that our networks are significant outliers for interchromosomal disequilibrium that overlap with environmental variation relevant to the genes’ functions. These results suggest that these multilocus correlations most likely arose from a combination of parallel selective responses to a common environmental variable and coadaptation, given the known Mendelian epistasis among VDR and the skin color genes. PMID:26921301

  3. Effects of a GaSb buffer layer on an InGaAs overlayer grown on Ge(111) substrates: Strain, twin generation, and surface roughness

    NASA Astrophysics Data System (ADS)

    Kajikawa, Y.; Nishigaichi, M.; Tenma, S.; Kato, K.; Katsube, S.

    2018-04-01

    InGaAs layers were grown by molecular-beam epitaxy on nominal and vicinal Ge(111) substrates with inserting GaSb buffer layers. High-resolution X-ray diffraction using symmetric 333 and asymmetric 224 reflections was employed to analyze the crystallographic properties of the grown layers. By using the two reflections, we determined the lattice constants (the unit cell length a and the angle α between axes) of the grown layers with taking into account the rhombohedral distortion of the lattices of the grown layers. This allowed us the independent determination of the strain components (perpendicular and parallel components to the substrate surface, ε⊥ and ε//) and the composition x of the InxGa1-xAs layers by assuming the distortion coefficient D, which is defined as the ratio of ε⊥ against ε//. Furthermore, the twin ratios were determined for the GaSb and the InGaAs layers by comparing asymmetric 224 reflections from the twin domain with that from the normal domain of the layers. As a result, it has been shown that the twin ratio in the InGaAs layer can be decreased to be less than 0.1% by the use of the vicinal substrate together with annealing the GaSb buffer layer during the growth interruption before the InGaAs overgrowth.

  4. An 81.6 μW FastICA processor for epileptic seizure detection.

    PubMed

    Yang, Chia-Hsiang; Shih, Yi-Hsin; Chiueh, Herming

    2015-02-01

    To improve the performance of epileptic seizure detection, independent component analysis (ICA) is applied to multi-channel signals to separate artifacts and signals of interest. FastICA is an efficient algorithm to compute ICA. To reduce the energy dissipation, eigenvalue decomposition (EVD) is utilized in the preprocessing stage to reduce the convergence time of iterative calculation of ICA components. EVD is computed efficiently through an array structure of processing elements running in parallel. Area-efficient EVD architecture is realized by leveraging the approximate Jacobi algorithm, leading to a 77.2% area reduction. By choosing proper memory element and reduced wordlength, the power and area of storage memory are reduced by 95.6% and 51.7%, respectively. The chip area is minimized through fixed-point implementation and architectural transformations. Given a latency constraint of 0.1 s, an 86.5% area reduction is achieved compared to the direct-mapped architecture. Fabricated in 90 nm CMOS, the core area of the chip is 0.40 mm(2). The FastICA processor, part of an integrated epileptic control SoC, dissipates 81.6 μW at 0.32 V. The computation delay of a frame of 256 samples for 8 channels is 84.2 ms. Compared to prior work, 0.5% power dissipation, 26.7% silicon area, and 3.4 × computation speedup are achieved. The performance of the chip was verified by human dataset.

  5. Automation and quality assurance of the production cycle

    NASA Astrophysics Data System (ADS)

    Hajdu, L.; Didenko, L.; Lauret, J.

    2010-04-01

    Processing datasets on the order of tens of terabytes is an onerous task, faced by production coordinators everywhere. Users solicit data productions and, especially for simulation data, the vast amount of parameters (and sometime incomplete requests) point at the need for a tracking, control and archiving all requests made so a coordinated handling could be made by the production team. With the advent of grid computing the parallel processing power has increased but traceability has also become increasing problematic due to the heterogeneous nature of Grids. Any one of a number of components may fail invalidating the job or execution flow in various stages of completion and re-submission of a few of the multitude of jobs (keeping the entire dataset production consistency) a difficult and tedious process. From the definition of the workflow to its execution, there is a strong need for validation, tracking, monitoring and reporting of problems. To ease the process of requesting production workflow, STAR has implemented several components addressing the full workflow consistency. A Web based online submission request module, implemented using Drupal's Content Management System API, enforces ahead that all parameters are described in advance in a uniform fashion. Upon submission, all jobs are independently tracked and (sometime experiment-specific) discrepancies are detected and recorded providing detailed information on where/how/when the job failed. Aggregate information on success and failure are also provided in near real-time.

  6. New approach for rapid assessment of trophic status of Yellow Sea and East China Sea using easy-to-measure parameters

    NASA Astrophysics Data System (ADS)

    Kong, Xianyu; Liu, Yanfang; Jian, Huimin; Su, Rongguo; Yao, Qingzhen; Shi, Xiaoyong

    2017-10-01

    To realize potential cost savings in coastal monitoring programs and provide timely advice for marine management, there is an urgent need for efficient evaluation tools based on easily measured variables for the rapid and timely assessment of estuarine and offshore eutrophication. In this study, using parallel factor analysis (PARAFAC), principal component analysis (PCA), and discriminant function analysis (DFA) with the trophic index (TRIX) for reference, we developed an approach for rapidly assessing the eutrophication status of coastal waters using easy-to-measure parameters, including chromophoric dissolved organic matter (CDOM), fluorescence excitation-emission matrices, CDOM UV-Vis absorbance, and other water-quality parameters (turbidity, chlorophyll a, and dissolved oxygen). First, we decomposed CDOM excitation-emission matrices (EEMs) by PARAFAC to identify three components. Then, we applied PCA to simplify the complexity of the relationships between the water-quality parameters. Finally, we used the PCA score values as independent variables in DFA to develop a eutrophication assessment model. The developed model yielded classification accuracy rates of 97.1%, 80.5%, 90.3%, and 89.1% for good, moderate, and poor water qualities, and for the overall data sets, respectively. Our results suggest that these easy-to-measure parameters could be used to develop a simple approach for rapid in-situ assessment and monitoring of the eutrophication of estuarine and offshore areas.

  7. Extracting Independent Local Oscillatory Geophysical Signals by Geodetic Tropospheric Delay

    NASA Technical Reports Server (NTRS)

    Botai, O. J.; Combrinck, L.; Sivakumar, V.; Schuh, H.; Bohm, J.

    2010-01-01

    Zenith Tropospheric Delay (ZTD) due to water vapor derived from space geodetic techniques and numerical weather prediction simulated-reanalysis data exhibits non-linear and non-stationary properties akin to those in the crucial geophysical signals of interest to the research community. These time series, once decomposed into additive (and stochastic) components, have information about the long term global change (the trend) and other interpretable (quasi-) periodic components such as seasonal cycles and noise. Such stochastic component(s) could be a function that exhibits at most one extremum within a data span or a monotonic function within a certain temporal span. In this contribution, we examine the use of the combined Ensemble Empirical Mode Decomposition (EEMD) and Independent Component Analysis (ICA): the EEMD-ICA algorithm to extract the independent local oscillatory stochastic components in the tropospheric delay derived from the European Centre for Medium-Range Weather Forecasts (ECMWF) over six geodetic sites (HartRAO, Hobart26, Wettzell, Gilcreek, Westford, and Tsukub32). The proposed methodology allows independent geophysical processes to be extracted and assessed. Analysis of the quality index of the Independent Components (ICs) derived for each cluster of local oscillatory components (also called the Intrinsic Mode Functions (IMFs)) for all the geodetic stations considered in the study demonstrate that they are strongly site dependent. Such strong dependency seems to suggest that the localized geophysical signals embedded in the ZTD over the geodetic sites are not correlated. Further, from the viewpoint of non-linear dynamical systems, four geophysical signals the Quasi-Biennial Oscillation (QBO) index derived from the NCEP/NCAR reanalysis, the Southern Oscillation Index (SOI) anomaly from NCEP, the SIDC monthly Sun Spot Number (SSN), and the Length of Day (LoD) are linked to the extracted signal components from ZTD. Results from the synchronization analysis show that ZTD and the geophysical signals exhibit (albeit subtle) site dependent phase synchronization index.

  8. An Expert Assistant for Computer Aided Parallelization

    NASA Technical Reports Server (NTRS)

    Jost, Gabriele; Chun, Robert; Jin, Haoqiang; Labarta, Jesus; Gimenez, Judit

    2004-01-01

    The prototype implementation of an expert system was developed to assist the user in the computer aided parallelization process. The system interfaces to tools for automatic parallelization and performance analysis. By fusing static program structure information and dynamic performance analysis data the expert system can help the user to filter, correlate, and interpret the data gathered by the existing tools. Sections of the code that show poor performance and require further attention are rapidly identified and suggestions for improvements are presented to the user. In this paper we describe the components of the expert system and discuss its interface to the existing tools. We present a case study to demonstrate the successful use in full scale scientific applications.

  9. A comparison of independent component analysis algorithms and measures to discriminate between EEG and artifact components.

    PubMed

    Dharmaprani, Dhani; Nguyen, Hoang K; Lewis, Trent W; DeLosAngeles, Dylan; Willoughby, John O; Pope, Kenneth J

    2016-08-01

    Independent Component Analysis (ICA) is a powerful statistical tool capable of separating multivariate scalp electrical signals into their additive independent or source components, specifically EEG or electroencephalogram and artifacts. Although ICA is a widely accepted EEG signal processing technique, classification of the recovered independent components (ICs) is still flawed, as current practice still requires subjective human decisions. Here we build on the results from Fitzgibbon et al. [1] to compare three measures and three ICA algorithms. Using EEG data acquired during neuromuscular paralysis, we tested the ability of the measures (spectral slope, peripherality and spatial smoothness) and algorithms (FastICA, Infomax and JADE) to identify components containing EMG. Spatial smoothness showed differentiation between paralysis and pre-paralysis ICs comparable to spectral slope, whereas peripherality showed less differentiation. A combination of the measures showed better differentiation than any measure alone. Furthermore, FastICA provided the best discrimination between muscle-free and muscle-contaminated recordings in the shortest time, suggesting it may be the most suited to EEG applications of the considered algorithms. Spatial smoothness results suggest that a significant number of ICs are mixed, i.e. contain signals from more than one biological source, and so the development of an ICA algorithm that is optimised to produce ICs that are easily classifiable is warranted.

  10. Implementation of a Serial Replica Exchange Method in a Physics-Based United-Residue (UNRES) Force Field

    PubMed Central

    Shen, Hujun; Czaplewski, Cezary; Liwo, Adam; Scheraga, Harold A.

    2009-01-01

    The kinetic-trapping problem in simulating protein folding can be overcome by using a Replica Exchange Method (REM). However, in implementing REM in molecular dynamics simulations, synchronization between processors on parallel computers is required, and communication between processors limits its ability to sample conformational space in a complex system efficiently. To minimize communication between processors during the simulation, a Serial Replica Exchange Method (SREM) has been proposed recently by Hagan et al. (J. Phys. Chem. B 2007, 111, 1416–1423). Here, we report the implementation of this new SREM algorithm with our physics-based united-residue (UNRES) force field. The method has been tested on the protein 1E0L with a temperature-independent UNRES force field and on terminally blocked deca-alanine (Ala10) and 1GAB with the recently introduced temperature-dependent UNRES force field. With the temperature-independent force field, SREM reproduces the results of REM but is more efficient in terms of wall-clock time and scales better on distributed-memory machines. However, exact application of SREM to the temperature-dependent UNRES algorithm requires the determination of a four-dimensional distribution of UNRES energy components instead of a one-dimensional energy distribution for each temperature, which is prohibitively expensive. Hence, we assumed that the temperature dependence of the force field can be ignored for neighboring temperatures. This version of SREM worked for Ala10 which is a simple system but failed to reproduce the thermodynamic results as well as regular REM on the more complex 1GAB protein. Hence, SREM can be applied to the temperature-independent but not to the temperature-dependent UNRES force field. PMID:20011673

  11. mRNA levels of circadian clock components Bmal1 and Per2 alter independently from dosing time-dependent efficacy of combination treatment with valsartan and amlodipine in spontaneously hypertensive rats.

    PubMed

    Potucek, Peter; Radik, Michal; Doka, Gabriel; Kralova, Eva; Krenek, Peter; Klimas, Jan

    2017-01-01

    Chronopharmacological effects of antihypertensives play a role in the outcome of hypertension therapy. However, studies produce contradictory findings when combination of valsartan plus amlodipine (VA) is applied. Here, we hypothesized different efficacy of morning versus evening dosing of VA in spontaneously hypertensive rats (SHR) and the involvement of circadian clock genes Bmal1 and Per2. We tested the therapy outcome in short-term and also long-term settings. SHRs aged between 8 and 10 weeks were treated with 10 mg/kg of valsartan and 4 mg/kg of amlodipine, either in the morning or in the evening with treatment duration 1 or 6 weeks and compared with parallel placebo groups. After short-term treatment, only morning dosing resulted in significant blood pressure (BP) control (measured by tail-cuff method) when compared to placebo, while after long-term treatment, both dosing groups gained similar superior results in BP control against placebo. However, mRNA levels of Bmal1 and Per2 (measured by RT-PCR) exhibited an independent pattern, with similar alterations in left and right ventricle, kidney as well as in aorta predominantly in groups with evening dosing in both, short-term and also long-term settings. This was accompanied by increased cardiac mRNA expression of plasminogen activator inhibitor-1. In summary, morning dosing proved to be advantageous due to earlier onset of antihypertensive action; however, long-term treatment was demonstrated to be effective regardless of administration time. Our findings also suggest that combination of VA may serve as an independent modulator of circadian clock and might influence disease progression beyond the primary BP lowering effect.

  12. Event-based hydrological modeling for detecting dominant hydrological process and suitable model strategy for semi-arid catchments

    NASA Astrophysics Data System (ADS)

    Huang, Pengnian; Li, Zhijia; Chen, Ji; Li, Qiaoling; Yao, Cheng

    2016-11-01

    To simulate the hydrological processes in semi-arid areas properly is still challenging. This study assesses the impact of different modeling strategies on simulating flood processes in semi-arid catchments. Four classic hydrological models, TOPMODEL, XINANJIANG (XAJ), SAC-SMA and TANK, were selected and applied to three semi-arid catchments in North China. Based on analysis and comparison of the simulation results of these classic models, four new flexible models were constructed and used to further investigate the suitability of various modeling strategies for semi-arid environments. Numerical experiments were also designed to examine the performances of the models. The results show that in semi-arid catchments a suitable model needs to include at least one nonlinear component to simulate the main process of surface runoff generation. If there are more than two nonlinear components in the hydrological model, they should be arranged in parallel, rather than in series. In addition, the results show that the parallel nonlinear components should be combined by multiplication rather than addition. Moreover, this study reveals that the key hydrological process over semi-arid catchments is the infiltration excess surface runoff, a non-linear component.

  13. Effects of skylight polarization, cloudiness, and view angle on the detection of oil on water.

    NASA Technical Reports Server (NTRS)

    Millard, J. P.; Arvesen, J. C.

    1971-01-01

    Three passive radiometric techniques, which use the contrast of sunlight reflected and backscattered from oil and water in specific wavelength regions, have potential application for remote sensing of oil spills. These techniques consist of measuring (1) total radiance, (2) the polarization components (normal and parallel) of radiance, and (3) the difference between the normal and parallel components. In this paper, the best view directions for these techniques are evaluated, conclusions are drawn as to the most promising technique, and explanations are developed to describe why previous total-radiance measurements yielded highest contrast between oil and water under overcast skies. The technique based on measurement of only the normal polorization component appears to be the most promising. The differential technique should be further investigated because of its potential to reduce the component of backscattered light from below the surface of the water. Measurements should be made about 45 deg nadir view angle in the direction opposite the sun. Overcast sky conditions provide a higher intensity of skylight relative to clear sky conditions and a lower intensity of backscatter within the water relative to surface reflectance. These factors result in higher contrast between oil and water under overcast skies.

  14. Method for simultaneous overlapped communications between neighboring processors in a multiple

    DOEpatents

    Benner, Robert E.; Gustafson, John L.; Montry, Gary R.

    1991-01-01

    A parallel computing system and method having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes with the computing system.

  15. Insight into the heterogeneous adsorption of humic acid fluorescent components on multi-walled carbon nanotubes by excitation-emission matrix and parallel factor analysis.

    PubMed

    Yang, Chenghu; Liu, Yangzhi; Cen, Qiulin; Zhu, Yaxian; Zhang, Yong

    2018-02-01

    The heterogeneous adsorption behavior of commercial humic acid (HA) on pristine and functionalized multi-walled carbon nanotubes (MWCNTs) was investigated by fluorescence excitation-emission matrix and parallel factor (EEM- PARAFAC) analysis. The kinetics, isotherms, thermodynamics and mechanisms of adsorption of HA fluorescent components onto MWCNTs were the focus of the present study. Three humic-like fluorescent components were distinguished, including one carboxylic-like fluorophore C1 (λ ex /λ em = (250, 310) nm/428nm), and two phenolic-like fluorophores, C2 (λ ex /λ em = (300, 460) nm/552nm) and C3 (λ ex /λ em = (270, 375) nm/520nm). The Lagergren pseudo-second-order model can be used to describe the adsorption kinetics of the HA fluorescent components. In addition, both the Freundlich and Langmuir models can be suitably employed to describe the adsorption of the HA fluorescent components onto MWCNTs with significantly high correlation coefficients (R 2 > 0.94, P< 0.05). The dissimilarity in the adsorption affinity (K d ) and nonlinear adsorption degree from the HA fluorescent components to MWCNTs was clearly observed. The adsorption mechanism suggested that the π-π electron donor-acceptor (EDA) interaction played an important role in the interaction between HA fluorescent components and the three MWCNTs. Furthermore, the values of the thermodynamic parameters, including the Gibbs free energy change (ΔG°), enthalpy change (ΔH°) and entropy change (ΔS°), showed that the adsorption of the HA fluorescent components on MWCNTs was spontaneous and exothermic. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Field-aligned structure of the storm time Pc 5 wave of November 14-15, 1979

    NASA Astrophysics Data System (ADS)

    Takahashi, K.; Higbie, P. R.; Fennell, J. F.; Amata, E.

    1988-02-01

    Magnetic field data from the four satellites--SCATHA (P78-2), GOES 2, GOES 3, and GEOS 2--have been analyzed to examine the magnetic-field-aligned structure of a storm time Pc 5 wave which occurred on November 14-15, 1979. The wave had both transverse and compressional components. At a given instance, the compressional and the radial components oscillated in phase or 180 deg out of phase, and the compressional and the azimuthal components oscillated +90 deg or -90 deg out of phase. In addition, each component changed its amplitude with magnetic latitude: the compressional component had a minimum at the magnetic equator, whereas the transverse components had a maximum at the equator and minima several degrees off the equator. At 180 deg relative phase switching among the components occurred across the latitudes of amplitude minima. From these observations, the field-line displacement of the wave is confirmed to have an antisymmetric standing structure about the magnetic equator with a parallel wave length of a few earth radii. We aslo observed other intriguing properties of the wave, such as different parallel wavelengths of different field components and small-amplitude second harmonics near the nodes. A dielectric tensor appropriate for the ring current plasma is found to give an explanation for the relation between the polarization and the propagation of the wave. However, plasma data available from SCATHA do not support either the drift-mirror instability of Hasegawa or tht coupling between a drift mirror wave and a shear Alfven wave, as discussed by Walker et al.

  17. Parallel consensual neural networks.

    PubMed

    Benediktsson, J A; Sveinsson, J R; Ersoy, O K; Swain, P H

    1997-01-01

    A new type of a neural-network architecture, the parallel consensual neural network (PCNN), is introduced and applied in classification/data fusion of multisource remote sensing and geographic data. The PCNN architecture is based on statistical consensus theory and involves using stage neural networks with transformed input data. The input data are transformed several times and the different transformed data are used as if they were independent inputs. The independent inputs are first classified using the stage neural networks. The output responses from the stage networks are then weighted and combined to make a consensual decision. In this paper, optimization methods are used in order to weight the outputs from the stage networks. Two approaches are proposed to compute the data transforms for the PCNN, one for binary data and another for analog data. The analog approach uses wavelet packets. The experimental results obtained with the proposed approach show that the PCNN outperforms both a conjugate-gradient backpropagation neural network and conventional statistical methods in terms of overall classification accuracy of test data.

  18. Microwave conductance properties of aligned multiwall carbon nanotube textile sheets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Brian L.; Martinez, Patricia; Zakhidov, Anvar A.

    2015-07-06

    Understanding the conductance properties of multi-walled carbon nanotube (MWNT) textile sheets in the microwave regime is essential for their potential use in high-speed and high-frequency applications. To expand current knowledge, complex high-frequency conductance measurements from 0.01 to 50 GHz and across temperatures from 4.2 K to 300 K and magnetic fields up to 2 T were made on textile sheets of highly aligned MWNTs with strand alignment oriented both parallel and perpendicular to the microwave electric field polarization. Sheets were drawn from 329 and 520 μm high MWNT forests that resulted in different DC resistance anisotropy. For all samples, themore » microwave conductance can be modeled approximately by a shunt capacitance in parallel with a frequency-independent conductance, but with no inductive contribution. Finally, this is consistent with diffusive Drude conduction as the primary transport mechanism up to 50 GHz. Further, it is found that the microwave conductance is essentially independent of both temperature and magnetic field.« less

  19. An integrated control strategy for the composite braking system of an electric vehicle with independently driven axles

    NASA Astrophysics Data System (ADS)

    Sun, Fengchun; Liu, Wei; He, Hongwen; Guo, Hongqiang

    2016-08-01

    For an electric vehicle with independently driven axles, an integrated braking control strategy was proposed to coordinate the regenerative braking and the hydraulic braking. The integrated strategy includes three modes, namely the hybrid composite mode, the parallel composite mode and the pure hydraulic mode. For the hybrid composite mode and the parallel composite mode, the coefficients of distributing the braking force between the hydraulic braking and the two motors' regenerative braking were optimised offline, and the response surfaces related to the driving state parameters were established. Meanwhile, the six-sigma method was applied to deal with the uncertainty problems for reliability. Additionally, the pure hydraulic mode is activated to ensure the braking safety and stability when the predictive failure of the response surfaces occurs. Experimental results under given braking conditions showed that the braking requirements could be well met with high braking stability and energy regeneration rate, and the reliability of the braking strategy was guaranteed on general braking conditions.

  20. Parallel cascade selection molecular dynamics for efficient conformational sampling and free energy calculation of proteins

    NASA Astrophysics Data System (ADS)

    Kitao, Akio; Harada, Ryuhei; Nishihara, Yasutaka; Tran, Duy Phuoc

    2016-12-01

    Parallel Cascade Selection Molecular Dynamics (PaCS-MD) was proposed as an efficient conformational sampling method to investigate conformational transition pathway of proteins. In PaCS-MD, cycles of (i) selection of initial structures for multiple independent MD simulations and (ii) conformational sampling by independent MD simulations are repeated until the convergence of the sampling. The selection is conducted so that protein conformation gradually approaches a target. The selection of snapshots is a key to enhance conformational changes by increasing the probability of rare event occurrence. Since the procedure of PaCS-MD is simple, no modification of MD programs is required; the selections of initial structures and the restart of the next cycle in the MD simulations can be handled with relatively simple scripts with straightforward implementation. Trajectories generated by PaCS-MD were further analyzed by the Markov state model (MSM), which enables calculation of free energy landscape. The combination of PaCS-MD and MSM is reported in this work.

Top