Sample records for parallel functional testing

  1. Methods and Models for the Construction of Weakly Parallel Tests. Research Report 90-4.

    ERIC Educational Resources Information Center

    Adema, Jos J.

    Methods are proposed for the construction of weakly parallel tests, that is, tests with the same test information function. A mathematical programming model for constructing tests with a prespecified test information function and a heuristic for assigning items to tests such that their information functions are equal play an important role in the…

  2. To develop behavioral tests of vestibular functioning in the Wistar rat

    NASA Technical Reports Server (NTRS)

    Nielson, H. C.

    1980-01-01

    Two tests of vestibular functioning in the rat were developed. The first test was the water maze. In the water maze the rat lacks the normal proprioceptive feedback from its limbs that helps it maintain its orientation, and must rely primarily on sensory input from its visual and vestibular systems. By altering lighting conditions and visual cues, vestibular functioning without visual cues was assessed, and it was determined whether visual compensation occurred for some vestibular dysfunction. The second test measured vestibular functioning through the rat's behavior on a parallel swing, assessing the rat's postural adjustments while swinging with the otoliths being stimulated. Less success was achieved in developing the parallel swing as a test of vestibular functioning than with the water maze; the major problem was incorrect initial assumptions about the rat's probable behavior on the parallel swing.

  3. An Alternative Methodology for Creating Parallel Test Forms Using the IRT Information Function.

    ERIC Educational Resources Information Center

    Ackerman, Terry A.

    The purpose of this paper is to report results on the development of a new computer-assisted methodology for creating parallel test forms using the item response theory (IRT) information function. Recently, several researchers have approached test construction from a mathematical programming perspective. However, these procedures require…

  4. Serial and Parallel Attentive Visual Searches: Evidence from Cumulative Distribution Functions of Response Times

    ERIC Educational Resources Information Center

    Sung, Kyongje

    2008-01-01

    Participants searched a visual display for a target among distractors. Each of 3 experiments tested a condition proposed to require attention and for which certain models propose a serial search. Serial versus parallel processing was tested by examining effects on response time means and cumulative distribution functions. In 2 conditions, the…

  5. Creating IRT-Based Parallel Test Forms Using the Genetic Algorithm Method

    ERIC Educational Resources Information Center

    Sun, Koun-Tem; Chen, Yu-Jen; Tsai, Shu-Yen; Cheng, Chien-Fen

    2008-01-01

    In educational measurement, the construction of parallel test forms is often a combinatorial optimization problem that involves the time-consuming selection of items to construct tests having approximately the same test information functions (TIFs) and constraints. This article proposes a novel method, genetic algorithm (GA), to construct parallel…

  6. Evaluating Statistical Targets for Assembling Parallel Mixed-Format Test Forms

    ERIC Educational Resources Information Center

    Debeer, Dries; Ali, Usama S.; van Rijn, Peter W.

    2017-01-01

    Test assembly is the process of selecting items from an item pool to form one or more new test forms. Often new test forms are constructed to be parallel with an existing (or an ideal) test. Within the context of item response theory, the test information function (TIF) or the test characteristic curve (TCC) are commonly used as statistical…

  7. Score Equating and Nominally Parallel Language Tests.

    ERIC Educational Resources Information Center

    Moy, Raymond

    Score equating requires that the forms to be equated are functionally parallel. That is, the two test forms should rank order examinees in a similar fashion. In language proficiency testing situations, this assumption is often put into doubt because of the numerous tests that have been proposed as measures of language proficiency and the…

  8. Serial and parallel attentive visual searches: evidence from cumulative distribution functions of response times.

    PubMed

    Sung, Kyongje

    2008-12-01

    Participants searched a visual display for a target among distractors. Each of 3 experiments tested a condition proposed to require attention and for which certain models propose a serial search. Serial versus parallel processing was tested by examining effects on response time means and cumulative distribution functions. In 2 conditions, the results suggested parallel rather than serial processing, even though the tasks produced significant set-size effects. Serial processing was produced only in a condition with a difficult discrimination and a very large set-size effect. The results support C. Bundesen's (1990) claim that an extreme set-size effect leads to serial processing. Implications for parallel models of visual selection are discussed.

  9. [Series: Medical Applications of the PHITS Code (2): Acceleration by Parallel Computing].

    PubMed

    Furuta, Takuya; Sato, Tatsuhiko

    2015-01-01

    Time-consuming Monte Carlo dose calculations have become feasible owing to advances in computer technology. Because these recent advances come chiefly from the emergence of multi-core high-performance computers, parallel computing has become key to achieving good software performance. The Monte Carlo simulation code PHITS contains two parallel computing functions: distributed-memory parallelization using the message passing interface (MPI) protocol, and shared-memory parallelization using open multi-processing (OpenMP) directives. Users can choose between the two functions according to their needs. This paper explains the two functions, with their advantages and disadvantages. Some test applications are also provided to show their performance on a typical multi-core high-performance workstation.

  10. When the lowest energy does not induce native structures: parallel minimization of multi-energy values by hybridizing searching intelligences.

    PubMed

    Lü, Qiang; Xia, Xiao-Yan; Chen, Rong; Miao, Da-Jun; Chen, Sha-Sha; Quan, Li-Jun; Li, Hai-Ou

    2012-01-01

    Protein structure prediction (PSP), which is usually modeled as a computational optimization problem, remains one of the biggest challenges in computational biology. PSP faces two difficult obstacles: the inaccurate energy function problem and the searching problem. Even if the searching procedure happens to find the lowest energy, there is no guarantee of obtaining the correct protein structure. A general parallel metaheuristic approach is presented to tackle these two problems. Multiple energy functions are employed to simultaneously guide the parallel searching threads. Searching trajectories are in fact controlled by the parameters of heuristic algorithms. The parallel approach allows the parameters to be perturbed while the searching threads are running in parallel, with each thread searching for the lowest energy value determined by an individual energy function. By hybridizing the intelligences of parallel ant colonies and Monte Carlo Metropolis search, this paper demonstrates an implementation of the parallel approach for PSP. Sixteen classical instances were tested to show that the parallel approach is competitive for solving the PSP problem. This parallel approach combines various sources of both searching intelligence and energy functions, and thus predicts protein conformations with good quality jointly determined by all the parallel searching threads and energy functions. It provides a framework to combine different searching intelligences embedded in heuristic algorithms. It also constructs a container to hybridize different not-so-accurate objective functions, which are usually derived from domain expertise.
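
The core scheme of this record (several searchers running in parallel, each guided by its own energy function, with the results judged jointly) can be sketched in Python. The energy functions and the random-search routine below are simple stand-ins, not the paper's ant-colony/Metropolis implementation:

```python
import random
from concurrent.futures import ThreadPoolExecutor

# Stand-in energy functions: in the paper these would be different
# (not-so-accurate) protein energy models, each guiding its own thread.
def energy_a(x):
    return (x - 1.0) ** 2

def energy_b(x):
    return abs(x - 1.2)

def random_search(energy, seed, iters=10_000):
    """Minimal stand-in for one searching thread: random search over [-5, 5]."""
    rng = random.Random(seed)
    best = min((rng.uniform(-5.0, 5.0) for _ in range(iters)), key=energy)
    return best, energy(best)

# Each searcher minimizes its own objective; a final decision can then be
# made jointly across all threads' results.
with ThreadPoolExecutor() as pool:
    futures = [pool.submit(random_search, fn, seed)
               for seed, fn in enumerate((energy_a, energy_b))]
    results = [f.result() for f in futures]

for (x, value), name in zip(results, ("A", "B")):
    print(f"energy {name}: minimum near x={x:.3f} (value {value:.4f})")
```

Because the objectives disagree slightly (minima at 1.0 and 1.2 here), no single energy function is trusted alone; the hybrid approach keeps all candidates in play.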

  11. When the Lowest Energy Does Not Induce Native Structures: Parallel Minimization of Multi-Energy Values by Hybridizing Searching Intelligences

    PubMed Central

    Lü, Qiang; Xia, Xiao-Yan; Chen, Rong; Miao, Da-Jun; Chen, Sha-Sha; Quan, Li-Jun; Li, Hai-Ou

    2012-01-01

    Background Protein structure prediction (PSP), which is usually modeled as a computational optimization problem, remains one of the biggest challenges in computational biology. PSP faces two difficult obstacles: the inaccurate energy function problem and the searching problem. Even if the searching procedure happens to find the lowest energy, there is no guarantee of obtaining the correct protein structure. Results A general parallel metaheuristic approach is presented to tackle these two problems. Multiple energy functions are employed to simultaneously guide the parallel searching threads. Searching trajectories are in fact controlled by the parameters of heuristic algorithms. The parallel approach allows the parameters to be perturbed while the searching threads are running in parallel, with each thread searching for the lowest energy value determined by an individual energy function. By hybridizing the intelligences of parallel ant colonies and Monte Carlo Metropolis search, this paper demonstrates an implementation of the parallel approach for PSP. Sixteen classical instances were tested to show that the parallel approach is competitive for solving the PSP problem. Conclusions This parallel approach combines various sources of both searching intelligence and energy functions, and thus predicts protein conformations with good quality jointly determined by all the parallel searching threads and energy functions. It provides a framework to combine different searching intelligences embedded in heuristic algorithms. It also constructs a container to hybridize different not-so-accurate objective functions, which are usually derived from domain expertise. PMID:23028708

  12. Multitasking TORT under UNICOS: Parallel performance models and measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnett, A.; Azmy, Y.Y.

    1999-09-27

    The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed; a parallel overhead model was also derived for the new algorithm. The parallel performance models were then compared against applications of the code to two TORT standard test problems and a large production problem. The models agree well with the measured parallel overhead.

  13. Multitasking TORT Under UNICOS: Parallel Performance Models and Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azmy, Y.Y.; Barnett, D.A.

    1999-09-27

    The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed; a parallel overhead model was also derived for the new algorithm. The parallel performance models were then compared against applications of the code to two TORT standard test problems and a large production problem. The models agree well with the measured parallel overhead.

  14. A preclinical cognitive test battery to parallel the National Institute of Health Toolbox in humans: bridging the translational gap.

    PubMed

    Snigdha, Shikha; Milgram, Norton W; Willis, Sherry L; Albert, Marylin; Weintraub, S; Fortin, Norbert J; Cotman, Carl W

    2013-07-01

    A major goal of animal research is to identify interventions that can promote successful aging and delay or reverse age-related cognitive decline in humans. Recent advances in standardizing cognitive assessment tools for humans have the potential to bring preclinical work closer to human research in aging and Alzheimer's disease. The National Institute of Health (NIH) has led an initiative to develop a comprehensive Toolbox for Neurologic Behavioral Function (NIH Toolbox) to evaluate cognitive, motor, sensory and emotional function for use in epidemiologic and clinical studies spanning 3 to 85 years of age. This paper aims to analyze the strengths and limitations of animal behavioral tests that can be used to parallel those in the NIH Toolbox. We conclude that there are several paradigms available to define a preclinical battery that parallels the NIH Toolbox. We also suggest areas in which new tests may benefit the development of a comprehensive preclinical test battery for assessment of cognitive function in animal models of aging and Alzheimer's disease. Copyright © 2013 Elsevier Inc. All rights reserved.

  15. A preclinical cognitive test battery to parallel the National Institute of Health Toolbox in humans: bridging the translational gap

    PubMed Central

    Snigdha, Shikha; Milgram, Norton W.; Willis, Sherry L.; Albert, Marylin; Weintraub, S.; Fortin, Norbert J.; Cotman, Carl W.

    2013-01-01

    A major goal of animal research is to identify interventions that can promote successful aging and delay or reverse age-related cognitive decline in humans. Recent advances in standardizing cognitive assessment tools for humans have the potential to bring preclinical work closer to human research in aging and Alzheimer’s disease. The National Institute of Health (NIH) has led an initiative to develop a comprehensive Toolbox for Neurologic Behavioral Function (NIH Toolbox) to evaluate cognitive, motor, sensory and emotional function for use in epidemiologic and clinical studies spanning 3 to 85 years of age. This paper aims to analyze the strengths and limitations of animal behavioral tests that can be used to parallel those in the NIH Toolbox. We conclude that there are several paradigms available to define a preclinical battery that parallels the NIH Toolbox. We also suggest areas in which new tests may benefit the development of a comprehensive preclinical test battery for assessment of cognitive function in animal models of aging and Alzheimer’s disease. PMID:23434040

  16. A parallel Jacobson-Oksman optimization algorithm

    NASA Technical Reports Server (NTRS)

    Straeter, T. A.; Markos, A. T.

    1975-01-01

    A gradient-dependent optimization technique which exploits the vector-streaming or parallel-computing capabilities of some modern computers is presented. The algorithm, derived by assuming that the function to be minimized is homogeneous, is a modification of the Jacobson-Oksman serial minimization method. In addition to describing the algorithm, conditions ensuring the convergence of the iterates of the algorithm and the results of numerical experiments on a group of sample test functions are presented. The results of these experiments indicate that this algorithm will solve optimization problems in less computing time than conventional serial methods on machines having vector-streaming or parallel-computing capabilities.

  17. A Knowledge-Based Approach for Item Exposure Control in Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Doong, Shing H.

    2009-01-01

    The purpose of this study is to investigate a functional relation between item exposure parameters (IEPs) and item parameters (IPs) over parallel pools. This functional relation is approximated by a well-known tool in machine learning. Let P and Q be parallel item pools and suppose IEPs for P have been obtained via a Sympson and Hetter-type…

  18. An Evaluation of Different Statistical Targets for Assembling Parallel Forms in Item Response Theory

    PubMed Central

    Ali, Usama S.; van Rijn, Peter W.

    2015-01-01

    Assembly of parallel forms is an important step in the test development process. Therefore, choosing a suitable theoretical framework to generate well-defined test specifications is critical. The performance of different statistical targets of test specifications using the test characteristic curve (TCC) and the test information function (TIF) was investigated. Test length, the number of test forms, and content specifications are considered as well. The TCC target results in forms that are parallel in difficulty, but not necessarily in terms of precision. Vice versa, test forms created using a TIF target are parallel in terms of precision, but not necessarily in terms of difficulty. As sometimes the focus is either on TIF or TCC, differences in either difficulty or precision can arise. Differences in difficulty can be mitigated by equating, but differences in precision cannot. In a series of simulations using a real item bank, the two-parameter logistic model, and mixed integer linear programming for automated test assembly, these differences were found to be quite substantial. When both TIF and TCC are combined into one target with manipulation to relative importance, these differences can be made to disappear.
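
The two statistical targets discussed in this record can be made concrete: under the two-parameter logistic (2PL) model, the TCC at ability θ is the sum of the item response probabilities, and the TIF is the sum of the item informations a²P(1−P). A minimal Python sketch (the item parameters below are invented for illustration):

```python
import math

def p_2pl(theta, a, b):
    """Probability of a correct response under the 2PL model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def tcc(theta, items):
    """Test characteristic curve: expected number-correct score."""
    return sum(p_2pl(theta, a, b) for a, b in items)

def tif(theta, items):
    """Test information function: sum of item informations a^2 * P * (1 - P)."""
    return sum(a * a * p * (1 - p)
               for a, b in items
               for p in [p_2pl(theta, a, b)])

# Two hypothetical 3-item forms: similar expected scores at theta = 0,
# but different precision, illustrating why matching the TCC alone
# does not guarantee forms that are parallel in information.
form_a = [(1.0, -0.5), (1.0, 0.0), (1.0, 0.5)]
form_b = [(1.8, -0.5), (0.6, 0.0), (0.6, 0.5)]

print(tcc(0.0, form_a), tcc(0.0, form_b))  # similar difficulty...
print(tif(0.0, form_a), tif(0.0, form_b))  # ...different information
```

This is the gap the abstract describes: equating can absorb the small TCC difference, but nothing downstream can repair the information mismatch.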

  19. Improved packing of protein side chains with parallel ant colonies.

    PubMed

    Quan, Lijun; Lü, Qiang; Li, Haiou; Xia, Xiaoyan; Wu, Hongjie

    2014-01-01

    The accurate packing of protein side chains is important for many computational biology problems, such as ab initio protein structure prediction, homology modelling, protein design, and ligand docking applications. Many existing solutions are modelled as a computational optimisation problem. Beyond the design of search algorithms, most solutions suffer from an inaccurate energy function for judging whether a prediction is good or bad. Even if the search has found the lowest energy, there is no certainty of obtaining protein structures with correct side chains. We present a side-chain modelling method, pacoPacker, which uses a parallel ant colony optimisation strategy based on sharing a single pheromone matrix. This parallel approach combines different sources of energy functions and generates protein side-chain conformations with the lowest energies jointly determined by the various energy functions. We further optimised the selected rotamers to construct subrotamers by rotamer minimisation, which reasonably improved the discreteness of the rotamer library. We focused on improving the accuracy of side-chain conformation prediction. For a testing set of 442 proteins, 87.19% of X1 and 77.11% of X12 angles were predicted correctly within 40° of the X-ray positions. We compared the accuracy of pacoPacker with state-of-the-art methods, such as CIS-RR and SCWRL4, and analysed the results from different perspectives, in terms of protein chains and individual residues. In this comprehensive benchmark testing, 51.5% of proteins within a length of 400 amino acids predicted by pacoPacker were superior to the results of CIS-RR and SCWRL4 simultaneously. Finally, we also showed the advantage of using the subrotamers strategy. All results confirmed that our parallel approach is competitive with state-of-the-art solutions for packing side chains. This parallel approach combines various sources of searching intelligence and energy functions to pack protein side chains. It provides a framework for combining different objective functions of varying accuracy and usefulness by designing parallel heuristic search algorithms.

  20. Parallel File System I/O Performance Testing On LANL Clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiens, Isaac Christian; Green, Jennifer Kathleen

    2016-08-18

    These are slides from a presentation on parallel file system I/O performance testing on LANL clusters. I/O is a known bottleneck for HPC applications. Performance optimization of I/O is often required. This summer project entailed integrating IOR under Pavilion and automating the results analysis. The slides cover the following topics: scope of the work, tools utilized, IOR-Pavilion test workflow, build script, IOR parameters, how parameters are passed to IOR, *run_ior: functionality, Python IOR-Output Parser, Splunk data format, Splunk dashboard and features, and future work.
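
The Python IOR-output parsing step mentioned in the slides can be sketched as follows. The summary-line format used here (lines such as `Max Write: … MiB/sec`) is an assumption about IOR's output, which varies by version, and `parse_ior` is a hypothetical helper, not part of the Pavilion workflow:

```python
import re

# Hypothetical fragment of an IOR summary; the exact format is an assumption.
SAMPLE = """\
Max Write: 1297.33 MiB/sec (1360.35 MB/sec)
Max Read:  2488.11 MiB/sec (2608.97 MB/sec)
"""

LINE_RE = re.compile(r"Max (Write|Read):\s+([\d.]+) MiB/sec")

def parse_ior(text):
    """Extract {operation: MiB/s} pairs from IOR-style summary lines."""
    return {op.lower(): float(rate) for op, rate in LINE_RE.findall(text)}

print(parse_ior(SAMPLE))  # {'write': 1297.33, 'read': 2488.11}
```

A parser like this would sit between the IOR run and the Splunk ingest step, emitting structured records instead of raw benchmark text.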

  1. Methods of parallel computation applied on granular simulations

    NASA Astrophysics Data System (ADS)

    Martins, Gustavo H. B.; Atman, Allbens P. F.

    2017-06-01

    Every year, parallel computing becomes cheaper and more accessible; as a consequence, its applications have spread across all research areas. Granular materials are a promising area for parallel computing. To demonstrate this, we study the impact of parallel computing on simulations of the BNE (Brazil Nut Effect). This effect is the remarkable rise of an intruder confined in a granular medium when vertically shaken against gravity. By means of DEM (Discrete Element Method) simulations, we study code performance, testing different methods to improve clock time. A comparison between serial and parallel algorithms, using OpenMP®, is also shown. The best improvement was obtained by optimizing the function that finds contacts using Verlet's cells.
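
The contact-search optimization mentioned at the end of this abstract can be sketched as a cell-list (Verlet cell) search: particles are binned into grid cells no smaller than the interaction range, so each particle is checked only against particles in its own and adjacent cells rather than against all others. A minimal 2-D Python sketch (not the authors' DEM code; names and parameters are illustrative):

```python
from collections import defaultdict
from itertools import product

def find_contacts(positions, radius):
    """Return index pairs closer than `radius`, using a uniform cell grid.

    Binning particles into cells of side `radius` means contact candidates
    can only lie in the same or an adjacent cell, avoiding the O(n^2) scan.
    """
    cells = defaultdict(list)
    for i, (x, y) in enumerate(positions):
        cells[(int(x // radius), int(y // radius))].append(i)

    contacts = set()
    r2 = radius * radius
    for (cx, cy), members in cells.items():
        for dx, dy in product((-1, 0, 1), repeat=2):
            for j in cells.get((cx + dx, cy + dy), ()):
                for i in members:
                    if i < j:  # count each pair once
                        xi, yi = positions[i]
                        xj, yj = positions[j]
                        if (xi - xj) ** 2 + (yi - yj) ** 2 < r2:
                            contacts.add((i, j))
    return contacts

pts = [(0.0, 0.0), (0.05, 0.0), (1.0, 1.0)]
print(find_contacts(pts, 0.1))  # {(0, 1)}
```

The loop over cells is also a natural place to apply OpenMP-style parallelism in a compiled DEM code, since cells can be processed largely independently.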

  2. Low Temperature (30 K) TID Test Results of a Radiation Hardened 128 Channel Serial-to-Parallel Converter

    NASA Technical Reports Server (NTRS)

    Meyer, Stephen; Buchner, Stephen; Moseley, Harvey; Ray, Knute; Tuttle, Jim; Quinn, Ed; Buchanan, Ernie; Bloom, Dave; Hait, Tom; Pearce, Mike

    2006-01-01

    This viewgraph presentation reviews the low-temperature, Total Ionizing Dose (TID) tests of a radiation-hardened serial-to-parallel converter to be used on the James Webb Space Telescope. The test results show that the original HV583 level shifter, a COTS part, was not suitable for JWST because its supply currents exceeded specs after 20 krad(Si). The HV584, functionally similar to the HV583, was designed using an RHBD approach that reduced the leakage currents to acceptable levels and had only a small effect on the level-shifted output voltage.

  3. DOE SBIR Phase-1 Report on Hybrid CPU-GPU Parallel Development of the Eulerian-Lagrangian Barracuda Multiphase Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dr. Dale M. Snider

    2011-02-28

    This report gives the results from the Phase-1 work on demonstrating greater than 10x speedup of the Barracuda computer program using parallel methods and GPU processors (General-Purpose Graphics Processing Units). Phase-1 demonstrated a 12x speedup of a typical Barracuda function using the GPU processor. The test problem used about 5 million particles and 250,000 Eulerian grid cells. The relative speedup, compared to a single CPU, increases with the number of particles, giving greater than 12x speedup. Phase-1 work provided a path for data structure modifications that give good parallel performance while keeping a friendly environment for new physics development and code maintenance. The implementation of the data structure changes will be in Phase-2. Phase-1 laid the groundwork for the complete parallelization of Barracuda in Phase-2, with the caveat that the parallel programming practices implemented in Phase-1 give immediate speedup in the current serial Barracuda code. The Phase-1 tasks were completed successfully, laying the framework for Phase-2; the detailed results of Phase-1 are within this document. In general, the speedup of one function would be expected to be higher than the speedup of the entire code because of I/O functions and communication between the algorithms. However, because one of the most difficult Barracuda algorithms was parallelized in Phase-1, and because the advanced parallelization methods and proposed optimization techniques identified in Phase-1 will be used in Phase-2, an overall Barracuda code speedup (relative to a single CPU) is expected to be greater than 10x. This means that a job which takes 30 days to complete will be done in 3 days.
    Tasks completed in Phase-1 are: Task 1: Profile the entire Barracuda code and select which subroutines are to be parallelized (see Section Choosing a Function to Accelerate). Task 2: Select a GPU consultant company and jointly parallelize subroutines (CPFD chose the small business EMPhotonics as the Phase-1 technical partner; see Section Technical Objective and Approach). Task 3: Integrate parallel subroutines into Barracuda (see Section Results from Phase-1 and its subsections). Task 4: Testing, refinement, and optimization of the parallel methodology (see Section Results from Phase-1 and Section Result Comparison Program). Task 5: Integrate Phase-1 parallel subroutines into Barracuda and release (see Section Results from Phase-1 and its subsections). Task 6: Roadmap of Phase-2 (see Section Plan for Phase-2). With the completion of Phase-1, we have the base understanding to completely parallelize Barracuda. An overview of the work to move Barracuda to a parallelized code is given in Plan for Phase-2.

  4. Functional assessment of the ex vivo vocal folds through biomechanical testing: A review

    PubMed Central

    Dion, Gregory R.; Jeswani, Seema; Roof, Scott; Fritz, Mark; Coelho, Paulo; Sobieraj, Michael; Amin, Milan R.; Branski, Ryan C.

    2016-01-01

    The human vocal folds are complex structures made up of distinct layers that vary in cellular and extracellular composition. The mechanical properties of vocal fold tissue are fundamental to the study of both the acoustics and biomechanics of voice production. To date, quantitative methods have been applied to characterize vocal fold tissue in both normal and pathologic conditions. This review describes, summarizes, and discusses the most commonly employed methods for vocal fold biomechanical testing. Force-elongation, torsional parallel plate rheometry, simple-shear parallel plate rheometry, linear skin rheometry, and indentation are the most frequently employed biomechanical tests for vocal fold tissues, and each provides material properties data that can be used to compare native tissue versus diseased or treated tissue. Force-elongation testing is clinically useful, as it allows for functional unit testing, while rheometry provides physiologically relevant shear data, and nanoindentation permits micrometer-scale testing across different areas of the vocal fold as well as whole-organ testing. Thoughtful selection of the testing technique during experimental design to evaluate a hypothesis is important to optimizing biomechanical testing of vocal fold tissues. PMID:27127075

  5. Empirical valence bond models for reactive potential energy surfaces: a parallel multilevel genetic program approach.

    PubMed

    Bellucci, Michael A; Coker, David F

    2011-07-28

    We describe a new method for constructing empirical valence bond potential energy surfaces using a parallel multilevel genetic program (PMLGP). Genetic programs can be used to perform an efficient search through function space and parameter space to find the best functions and sets of parameters that fit energies obtained by ab initio electronic structure calculations. Building on the traditional genetic program approach, the PMLGP utilizes a hierarchy of genetic programming on two different levels. The lower level genetic programs are used to optimize coevolving populations in parallel while the higher level genetic program (HLGP) is used to optimize the genetic operator probabilities of the lower level genetic programs. The HLGP allows the algorithm to dynamically learn the mutation or combination of mutations that most effectively increase the fitness of the populations, causing a significant increase in the algorithm's accuracy and efficiency. The algorithm's accuracy and efficiency is tested against a standard parallel genetic program with a variety of one-dimensional test cases. Subsequently, the PMLGP is utilized to obtain an accurate empirical valence bond model for proton transfer in 3-hydroxy-gamma-pyrone in gas phase and protic solvent. © 2011 American Institute of Physics

  6. Parallelization of interpolation, solar radiation and water flow simulation modules in GRASS GIS using OpenMP

    NASA Astrophysics Data System (ADS)

    Hofierka, Jaroslav; Lacko, Michal; Zubal, Stanislav

    2017-10-01

    In this paper, we describe the parallelization of three complex and computationally intensive modules of GRASS GIS using the OpenMP application programming interface for multi-core computers. These include the v.surf.rst module for spatial interpolation, the r.sun module for solar radiation modeling and the r.sim.water module for water flow simulation. We briefly describe the functionality of the modules and parallelization approaches used in the modules. Our approach includes the analysis of the module's functionality, identification of source code segments suitable for parallelization and proper application of OpenMP parallelization code to create efficient threads processing the subtasks. We document the efficiency of the solutions using the airborne laser scanning data representing land surface in the test area and derived high-resolution digital terrain model grids. We discuss the performance speed-up and parallelization efficiency depending on the number of processor threads. The study showed a substantial increase in computation speeds on a standard multi-core computer while maintaining the accuracy of results in comparison to the output from original modules. The presented parallelization approach showed the simplicity and efficiency of the parallelization of open-source GRASS GIS modules using OpenMP, leading to an increased performance of this geospatial software on standard multi-core computers.

  7. Improved packing of protein side chains with parallel ant colonies

    PubMed Central

    2014-01-01

    Introduction The accurate packing of protein side chains is important for many computational biology problems, such as ab initio protein structure prediction, homology modelling, and protein design and ligand docking applications. Most existing solutions model packing as a computational optimisation problem. Besides the design of the search algorithm, most solutions suffer from an inaccurate energy function for judging whether a prediction is good or bad. Even if the search finds the lowest energy, there is no certainty of obtaining protein structures with correct side chains. Methods We present a side-chain modelling method, pacoPacker, which uses a parallel ant colony optimisation strategy based on sharing a single pheromone matrix. This parallel approach combines different sources of energy functions and generates protein side-chain conformations with the lowest energies jointly determined by the various energy functions. We further optimised the selected rotamers by rotamer minimisation to construct subrotamers, which reasonably improved the discreteness of the rotamer library. Results We focused on improving the accuracy of side-chain conformation prediction. For a testing set of 442 proteins, 87.19% of χ1 and 77.11% of χ1+2 angles were predicted correctly within 40° of the X-ray positions. We compared the accuracy of pacoPacker with state-of-the-art methods, such as CIS-RR and SCWRL4. We analysed the results from different perspectives, in terms of protein chains and individual residues. In this comprehensive benchmark, 51.5% of proteins within a length of 400 amino acids predicted by pacoPacker were superior to the results of both CIS-RR and SCWRL4. Finally, we also showed the advantage of the subrotamer strategy. All results confirmed that our parallel approach is competitive with state-of-the-art solutions for packing side chains.
Conclusions This parallel approach combines various sources of search intelligence and energy functions to pack protein side chains. It provides a framework for combining objective functions of differing accuracy and usefulness by designing parallel heuristic search algorithms. PMID:25474164
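The shared-pheromone idea described above can be sketched in a few lines. This is a toy illustration only: the "energy" below is a made-up mismatch count, not pacoPacker's force fields, and the colonies run serially where the paper runs them in parallel.

```python
import random

# Toy sketch of an ant colony search over a shared pheromone matrix, loosely
# modelled on the pacoPacker idea. The energy function is a hypothetical
# stand-in (mismatches against a known optimum), not a real force field.

N_POS, N_CHOICES = 6, 4          # "residues" and "rotamers" per residue (toy sizes)
TARGET = [1, 3, 0, 2, 1, 0]      # hypothetical lowest-energy assignment

def energy(assignment):
    return sum(a != t for a, t in zip(assignment, TARGET))

def construct(pheromone, rng):
    # Each position is sampled with probability proportional to its pheromone.
    return [rng.choices(range(N_CHOICES), weights=pheromone[pos])[0]
            for pos in range(N_POS)]

def aco_search(n_colonies=4, n_iters=200, evap=0.1, seed=0):
    rng = random.Random(seed)
    pheromone = [[1.0] * N_CHOICES for _ in range(N_POS)]  # the shared matrix
    best, best_e = None, float("inf")
    for _ in range(n_iters):
        # In the paper each colony runs in parallel; serial loop here.
        for _ in range(n_colonies):
            sol = construct(pheromone, rng)
            e = energy(sol)
            if e < best_e:
                best, best_e = sol, e
        # Evaporate everywhere, then reinforce the best-so-far solution.
        for pos in range(N_POS):
            for c in range(N_CHOICES):
                pheromone[pos][c] *= (1.0 - evap)
            pheromone[pos][best[pos]] += 1.0
    return best, best_e

best, best_e = aco_search()
print(best, best_e)
```

With several colonies depositing into one matrix, good partial assignments discovered by any colony bias the sampling of all of them, which is the cooperation mechanism the abstract describes.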

  8. Many-to-one form-to-function mapping weakens parallel morphological evolution.

    PubMed

    Thompson, Cole J; Ahmed, Newaz I; Veen, Thor; Peichel, Catherine L; Hendry, Andrew P; Bolnick, Daniel I; Stuart, Yoel E

    2017-11-01

    Evolutionary ecologists aim to explain and predict evolutionary change under different selective regimes. Theory suggests that such evolutionary prediction should be more difficult for biomechanical systems in which different trait combinations generate the same functional output: "many-to-one mapping." Many-to-one mapping of phenotype to function enables multiple morphological solutions to meet the same adaptive challenges. Therefore, many-to-one mapping should undermine parallel morphological evolution, and hence evolutionary predictability, even when selection pressures are shared among populations. Studying 16 replicate pairs of lake- and stream-adapted threespine stickleback (Gasterosteus aculeatus), we quantified three parts of the teleost feeding apparatus and used biomechanical models to calculate their expected functional outputs. The three feeding structures differed in their form-to-function relationship from one-to-one (lower jaw lever ratio) to increasingly many-to-one (buccal suction index, opercular 4-bar linkage). We tested for (1) weaker linear correlations between phenotype and calculated function, and (2) less parallel evolution across lake-stream pairs, in the many-to-one systems relative to the one-to-one system. We confirm both predictions, thus supporting the theoretical expectation that increasing many-to-one mapping undermines parallel evolution. Therefore, sole consideration of morphological variation within and among populations might not serve as a proxy for functional variation when multiple adaptive trait combinations exist. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.
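Many-to-one mapping is easy to see numerically. The function below is a generic toy composite (not the paper's biomechanical models): because it depends on two traits only through their ratio, distinct morphologies yield identical function.

```python
# Toy illustration of many-to-one form-to-function mapping: a hypothetical
# "suction index" that depends on two morphological traits only through
# their ratio, so different trait combinations give the same output.

def suction_index(gape_width, buccal_length):
    return gape_width / buccal_length

morph_a = (2.0, 4.0)   # one trait combination
morph_b = (3.0, 6.0)   # a different combination, same functional output

print(suction_index(*morph_a) == suction_index(*morph_b))  # True
```

Selection acting on the functional output alone cannot distinguish morph_a from morph_b, which is why morphology need not evolve in parallel even when function does.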

  9. A new parallel algorithm of MP2 energy calculations.

    PubMed

    Ishimura, Kazuya; Pulay, Peter; Nagase, Shigeru

    2006-03-01

    A new parallel algorithm has been developed for second-order Møller-Plesset perturbation theory (MP2) energy calculations. Its main projected applications are for large molecules, for instance, for the calculation of dispersion interaction. Tests on a moderate number of processors (2-16) show that the program has high CPU and parallel efficiency. Timings are presented for two relatively large molecules, taxol (C(47)H(51)NO(14)) and luciferin (C(11)H(8)N(2)O(3)S(2)), the former with the 6-31G* and 6-311G** basis sets (1,032 and 1,484 basis functions, 164 correlated orbitals), and the latter with the aug-cc-pVDZ and aug-cc-pVTZ basis sets (530 and 1,198 basis functions, 46 correlated orbitals). An MP2 energy calculation on C(130)H(10) (1,970 basis functions, 265 correlated orbitals) completed in less than 2 h on 128 processors.

  10. Parallel Execution of Functional Mock-up Units in Buildings Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ozmen, Ozgur; Nutaro, James J.; New, Joshua Ryan

    2016-06-30

    A Functional Mock-up Interface (FMI) defines a standardized interface to be used in computer simulations to develop complex cyber-physical systems. FMI implementation by a software modeling tool enables the creation of a simulation model that can be interconnected, or the creation of a software library called a Functional Mock-up Unit (FMU). This report describes an FMU wrapper implementation that imports FMUs into a C++ environment and uses an Euler solver that executes FMUs in parallel using Open Multi-Processing (OpenMP). The purpose of this report is to elucidate the runtime performance of the solver when a multi-component system is imported as a single FMU (for the whole system) or as multiple FMUs (for different groups of components as sub-systems). This performance comparison is conducted using two test cases: (1) a simple, multi-tank problem; and (2) a more realistic use case based on the Modelica Buildings Library. In both test cases, the performance gains are promising when each FMU consists of a large number of states and state events that are wrapped in a single FMU. Load balancing is demonstrated to be a critical factor in speeding up parallel execution of multiple FMUs.
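The solver structure in the report (explicit Euler stepping several independent sub-models, with the per-FMU loop as the parallel region) can be sketched as below. The draining-tank model and the helper names are illustrative stand-ins, not the Modelica Buildings Library or the report's wrapper API; the report parallelizes the inner loop with OpenMP.

```python
import math

# Minimal sketch: an explicit Euler solver stepping several independent
# "FMUs". Each sub-model here is a hypothetical draining tank dx/dt = -k*x.

def make_tank(outflow_rate):
    return {"x": 1.0, "k": outflow_rate}

def step(fmu, dt):
    # One explicit Euler step of this sub-model's state.
    fmu["x"] += dt * (-fmu["k"] * fmu["x"])

fmus = [make_tank(k) for k in (0.5, 1.0, 2.0)]
dt, t_end = 0.001, 1.0
for _ in range(int(t_end / dt)):
    for fmu in fmus:        # natural OpenMP work-sharing point in the report
        step(fmu, dt)

for fmu in fmus:
    exact = math.exp(-fmu["k"] * t_end)
    print(f"k={fmu['k']}: euler={fmu['x']:.4f} exact={exact:.4f}")
```

Because each sub-model's step depends only on its own state within a time step, the inner loop is embarrassingly parallel; the load-balancing issue the report highlights arises when the sub-models differ in cost.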

  11. Integrated microfluidic devices for combinatorial cell-based assays.

    PubMed

    Yu, Zeta Tak For; Kamei, Ken-ichiro; Takahashi, Hiroko; Shu, Chengyi Jenny; Wang, Xiaopu; He, George Wenfu; Silverman, Robert; Radu, Caius G; Witte, Owen N; Lee, Ki-Bum; Tseng, Hsian-Rong

    2009-06-01

    The development of miniaturized cell culture platforms for performing parallel cultures and combinatorial assays is important in cell biology from the single-cell level to the system level. In this paper we developed an integrated microfluidic cell-culture platform, Cell-microChip (Cell-μChip), for parallel analyses of the effects of microenvironmental cues (i.e., culture scaffolds) on different mammalian cells and their cellular responses to external stimuli. As a model study, we demonstrated the ability of culturing and assaying several mammalian cells, such as NIH 3T3 fibroblast, B16 melanoma and HeLa cell lines, in a parallel way. For functional assays, first we tested drug-induced apoptotic responses from different cell lines. As a second functional assay, we performed "on-chip" transfection of a reporter gene encoding an enhanced green fluorescent protein (EGFP) followed by live-cell imaging of transcriptional activation of cyclooxygenase 2 (Cox-2) expression. Collectively, our Cell-μChip approach demonstrated the capability to carry out parallel operations and the potential to further integrate advanced functions and applications in the broader space of combinatorial chemistry and biology.

  12. Integrated microfluidic devices for combinatorial cell-based assays

    PubMed Central

    Yu, Zeta Tak For; Kamei, Ken-ichiro; Takahashi, Hiroko; Shu, Chengyi Jenny; Wang, Xiaopu; He, George Wenfu; Silverman, Robert

    2010-01-01

    The development of miniaturized cell culture platforms for performing parallel cultures and combinatorial assays is important in cell biology from the single-cell level to the system level. In this paper we developed an integrated microfluidic cell-culture platform, Cell-microChip (Cell-μChip), for parallel analyses of the effects of microenvironmental cues (i.e., culture scaffolds) on different mammalian cells and their cellular responses to external stimuli. As a model study, we demonstrated the ability of culturing and assaying several mammalian cells, such as NIH 3T3 fibroblast, B16 melanoma and HeLa cell lines, in a parallel way. For functional assays, first we tested drug-induced apoptotic responses from different cell lines. As a second functional assay, we performed "on-chip" transfection of a reporter gene encoding an enhanced green fluorescent protein (EGFP) followed by live-cell imaging of transcriptional activation of cyclooxygenase 2 (Cox-2) expression. Collectively, our Cell-μChip approach demonstrated the capability to carry out parallel operations and the potential to further integrate advanced functions and applications in the broader space of combinatorial chemistry and biology. PMID:19130244

  13. Development of Parallel Computing Framework to Enhance Radiation Transport Code Capabilities for Rare Isotope Beam Facility Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kostin, Mikhail; Mokhov, Nikolai; Niita, Koji

    A parallel computing framework has been developed for use with general-purpose radiation transport codes. The framework was implemented as a C++ module that uses MPI for message passing. It is intended to be used with older radiation transport codes implemented in Fortran 77, Fortran 90 or C. The module is largely independent of the radiation transport code it is used with, and is connected to the codes by means of a number of interface functions. The framework was developed and tested in conjunction with the MARS15 code. It is possible to use it with other codes, such as PHITS, FLUKA and MCNP, after certain adjustments. Besides the parallel computing functionality, the framework offers a checkpoint facility that allows restarting calculations from a saved checkpoint file. The checkpoint facility can be used in single-process calculations as well as in the parallel regime. The framework corrects some of the known problems with scheduling and load balancing found in the original implementations of the parallel computing functionality in MARS15 and PHITS. The framework can be used efficiently on homogeneous systems and networks of workstations, where interference from other users is possible.
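A checkpoint facility of the kind described can be sketched as follows. This is a generic illustration, not the framework's C++ module: the function names and the batch "tally" are hypothetical, and the state file is written atomically so a crash mid-write cannot corrupt it.

```python
import json, os, tempfile

# Sketch of a checkpoint/restart facility: progress is persisted after every
# batch, and a restarted run resumes from the saved state. All names here
# are hypothetical illustrations.

def save_checkpoint(path, state):
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, path)            # atomic rename: no partial files

def load_checkpoint(path):
    if not os.path.exists(path):
        return None                  # fresh start
    with open(path) as f:
        return json.load(f)

def run(path, total_histories, batch=1000):
    state = load_checkpoint(path) or {"done": 0, "tally": 0.0}
    while state["done"] < total_histories:
        state["tally"] += batch * 0.5   # stand-in for transported particles
        state["done"] += batch
        save_checkpoint(path, state)    # restartable after every batch
    return state

with tempfile.TemporaryDirectory() as d:
    ckpt = os.path.join(d, "run.ckpt")
    partial = run(ckpt, 3000)           # first run stops at 3000 histories
    resumed = run(ckpt, 5000)           # "restart" continues from 3000
    print(partial["done"], resumed["done"])  # 3000 5000
```

The same pattern works in both serial and parallel regimes; in an MPI setting each rank (or a master process) would persist its own share of the state.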

  14. Modified current follower-based immittance function simulators

    NASA Astrophysics Data System (ADS)

    Alpaslan, Halil; Yuce, Erkan

    2017-12-01

    In this paper, four immittance function simulators consisting of a single modified current follower with a single Z- terminal and a minimum number of passive components are proposed. The first proposed circuit can provide +L in parallel with +R, and the second can realise -L in parallel with -R. The third proposed structure can provide +L in series with +R, and the fourth can realise -L in series with -R. However, all the proposed immittance function simulators need a single resistive matching constraint. Parasitic impedance effects on all the proposed immittance function simulators are investigated. A second-order current-mode (CM) high-pass filter derived from the first proposed immittance function simulator is given as an application example, and a second-order CM low-pass filter derived from the third is given as another. A number of simulation results based on the SPICE programme and an experimental test result are given to verify the theory.
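The first simulated immittance (+L in parallel with +R) follows the standard parallel-impedance formula Z = (jωL·R)/(R + jωL). A quick numeric check is below; the component values are arbitrary illustrations, not the paper's SPICE setup.

```python
import cmath, math

# Impedance of an inductor L in parallel with a resistor R:
#   Z = (jwL * R) / (R + jwL),  w = 2*pi*f.

def z_parallel_lr(R, L, f):
    w = 2 * math.pi * f
    zl = 1j * w * L
    return (zl * R) / (R + zl)

R, L = 1e3, 1e-3          # 1 kOhm in parallel with 1 mH (arbitrary values)
for f in (1e3, 1e5, 1e7):
    z = z_parallel_lr(R, L, f)
    print(f"f={f:.0e} Hz: |Z|={abs(z):.1f} ohm, phase={cmath.phase(z):.2f} rad")
# At low f the inductor dominates (|Z| -> wL); at high f, |Z| -> R.
```

This frequency behaviour (inductive at low frequency, resistive at high frequency) is what the proposed simulator is designed to emulate with active elements.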

  15. Parallel functional category deficits in clauses and nominal phrases: The case of English agrammatism

    PubMed Central

    Wang, Honglei; Yoshida, Masaya; Thompson, Cynthia K.

    2015-01-01

    Individuals with agrammatic aphasia exhibit restricted patterns of impairment of functional morphemes; however, the syntactic characterization of the impairment is controversial. Previous studies have focused on functional morphology in clauses only. This study extends the empirical domain by testing functional morphemes in English nominal phrases in aphasia and comparing patients' impairment to their impairment of functional morphemes in English clauses. In the linguistics literature, it is assumed that clauses and nominal phrases are structurally parallel but exhibit inflectional differences. The results of the present study indicated that aphasic speakers evinced similar impairment patterns in clauses and nominal phrases. These findings are consistent with the Distributed Morphology Hypothesis (DMH), suggesting that the source of functional morphology deficits among agrammatics relates to difficulty implementing rules that convert inflectional features into morphemes. Our findings, however, are inconsistent with the Tree Pruning Hypothesis (TPH), which suggests that patients have difficulty building complex hierarchical structures. PMID:26379370

  16. Three pillars for achieving quantum mechanical molecular dynamics simulations of huge systems: Divide-and-conquer, density-functional tight-binding, and massively parallel computation.

    PubMed

    Nishizawa, Hiroaki; Nishimura, Yoshifumi; Kobayashi, Masato; Irle, Stephan; Nakai, Hiromi

    2016-08-05

    The linear-scaling divide-and-conquer (DC) quantum chemical methodology is applied to density-functional tight-binding (DFTB) theory to develop a massively parallel program that achieves on-the-fly molecular reaction dynamics simulations of huge systems from scratch. Functions to perform large-scale geometry optimization and molecular dynamics on the DC-DFTB potential energy surface are implemented in the program, called DC-DFTB-K. A novel interpolation-based algorithm is developed for parallelizing the determination of the Fermi level in the DC method. The performance of the DC-DFTB-K program is assessed using a laboratory computer and the K computer. Numerical tests show the high efficiency of the DC-DFTB-K program: a single-point energy gradient calculation of a one-million-atom system is completed within 60 s using 7290 nodes of the K computer. © 2016 Wiley Periodicals, Inc.
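Determining the Fermi level means finding the chemical potential μ at which the summed Fermi-Dirac occupations equal the electron count. The paper parallelizes this with an interpolation-based scheme; the sketch below uses plain bisection on toy orbital energies instead, just to show the root-finding problem being solved.

```python
import math

# Find mu such that sum of Fermi-Dirac occupations equals n_electrons.
# Bisection works because total occupation is monotone increasing in mu.

def occupations(levels, mu, kT=0.01):
    return [1.0 / (1.0 + math.exp((e - mu) / kT)) for e in levels]

def fermi_level(levels, n_electrons, kT=0.01, tol=1e-10):
    lo, hi = min(levels) - 1.0, max(levels) + 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sum(occupations(levels, mid, kT)) < n_electrons:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

levels = [-0.5, -0.3, -0.1, 0.2, 0.4]   # toy orbital energies
mu = fermi_level(levels, n_electrons=3)
print(round(mu, 3), round(sum(occupations(levels, mu)), 6))
```

In the DC method this search is the one global coupling between subsystems, which is why making it parallel-friendly matters for scaling to millions of atoms.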

  17. Of Small Beauties and Large Beasts: The Quality of Distractors on Multiple-Choice Tests Is More Important than Their Quantity

    ERIC Educational Resources Information Center

    Papenberg, Martin; Musch, Jochen

    2017-01-01

    In multiple-choice tests, the quality of distractors may be more important than their number. We therefore examined the joint influence of distractor quality and quantity on test functioning by providing a sample of 5,793 participants with five parallel test sets consisting of items that differed in the number and quality of distractors.…

  18. The characteristics and limitations of the MPS/MMS battery charging system

    NASA Technical Reports Server (NTRS)

    Ford, F. E.; Palandati, C. F.; Davis, J. F.; Tasevoli, C. M.

    1980-01-01

    A series of tests was conducted on two 12 ampere hour nickel cadmium batteries under a simulated cycle regime using the multiple voltage versus temperature levels designed into the modular power system (MPS). These tests included: battery recharge as a function of voltage control level; temperature imbalance between two parallel batteries; a shorted or partially shorted cell in one of the two parallel batteries; impedance imbalance of one of the parallel battery circuits; and disabling and enabling one of the batteries from the bus at various charge and discharge states. The results demonstrate that the eight commandable voltage versus temperature levels designed into the MPS provide a very flexible system that not only can accommodate a wide range of normal power system operation, but also provides a high degree of flexibility in responding to abnormal operating conditions.

  19. Distributed Function Mining for Gene Expression Programming Based on Fast Reduction.

    PubMed

    Deng, Song; Yue, Dong; Yang, Le-chan; Fu, Xiong; Feng, Ya-zhou

    2016-01-01

    For high-dimensional and massive data sets, traditional centralized gene expression programming (GEP) and its improved algorithms lead to increased run-time and decreased prediction accuracy. To solve this problem, this paper proposes a new improved algorithm called distributed function mining for gene expression programming based on fast reduction (DFMGEP-FR). In DFMGEP-FR, fast attribution reduction in binary search algorithms (FAR-BSA) is proposed to quickly find the optimal attribution set, and a function consistency replacement algorithm is given to solve the integration of local function models. Thorough comparative experiments for DFMGEP-FR, centralized GEP and the parallel gene expression programming algorithm based on simulated annealing (parallel GEPSA) are included in this paper. For the waveform, mushroom, connect-4 and musk datasets, the comparative results show that the average time-consumption of DFMGEP-FR drops by 89.09%, 88.85%, 85.79% and 93.06%, respectively, in contrast to centralized GEP, and by 12.5%, 8.42%, 9.62% and 13.75%, respectively, compared with parallel GEPSA. Six well-studied UCI test data sets demonstrate the efficiency and capability of our proposed DFMGEP-FR algorithm for distributed function mining.

  20. COMPARABLE MEASURES OF COGNITIVE FUNCTION IN HUMAN INFANTS AND LABORATORY ANIMALS TO IDENTIFY ENVIRONMENTAL HEALTH RISKS TO CHILDREN

    EPA Science Inventory

    The importance of including neurodevelopmental end points in environmental studies is clear. A validated measure of cognitive function in human infants that also has a homologous or parallel test in laboratory animal studies will provide a valuable approach for large-scale studie...

  1. Highly Parallel Alternating Directions Algorithm for Time Dependent Problems

    NASA Astrophysics Data System (ADS)

    Ganzha, M.; Georgiev, K.; Lirkov, I.; Margenov, S.; Paprzycki, M.

    2011-11-01

    In our work, we consider the time-dependent Stokes equation on a finite time interval and on a uniform rectangular mesh, written in terms of velocity and pressure. For this problem, a parallel algorithm based on a novel direction-splitting approach is developed. Here, the pressure equation is derived from a perturbed form of the continuity equation, in which the incompressibility constraint is penalized in a negative norm induced by the direction splitting. The scheme used in the algorithm is composed of two parts: (i) velocity prediction, and (ii) pressure correction. This is a Crank-Nicolson-type two-stage time integration scheme for two- and three-dimensional parabolic problems in which the second-order derivative with respect to each space variable is treated implicitly while the other variables are made explicit at each time sub-step. To achieve good parallel performance, the solution of the Poisson problem for the pressure correction is replaced by solving a sequence of one-dimensional second-order elliptic boundary value problems in each spatial direction. The parallel code is implemented using the standard MPI functions and tested on two modern parallel computer systems. The performed numerical tests demonstrate a good level of parallel efficiency and scalability of the studied direction-splitting-based algorithm.
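The payoff of direction splitting is that each implicit sub-step reduces to many independent one-dimensional tridiagonal systems, one per grid line, each solvable in O(n) by the Thomas algorithm. A minimal version is sketched below (the paper's MPI domain decomposition is omitted); the sample system is a 1D backward-Euler heat step, an illustrative choice.

```python
# Thomas algorithm: O(n) solve of a tridiagonal system.
# a = sub-diagonal (a[0] unused), b = main diagonal, c = super-diagonal
# (c[-1] unused), d = right-hand side.

def thomas(a, b, c, d):
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: implicit 1D heat step (I - r*D2)x = d with r = 1 gives the
# tridiagonal stencil [-1, 3, -1].
n, r = 5, 1.0
a = [0.0] + [-r] * (n - 1)
b = [1 + 2 * r] * n
c = [-r] * (n - 1) + [0.0]
d = [1.0] * n
x = thomas(a, b, c, d)
print([round(v, 4) for v in x])
```

Because the grid lines in each sweep direction are decoupled, these solves distribute across processors with no communication inside a sweep, which is the source of the scalability reported above.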

  2. Searching for globally optimal functional forms for interatomic potentials using genetic programming with parallel tempering.

    PubMed

    Slepoy, A; Peters, M D; Thompson, A P

    2007-11-30

    Molecular dynamics and other molecular simulation methods rely on a potential energy function, based only on the relative coordinates of the atomic nuclei. Such a function, called a force field, approximately represents the electronic structure interactions of a condensed matter system. Developing such approximate functions and fitting their parameters remains an arduous, time-consuming process, relying on expert physical intuition. To address this problem, a functional programming methodology was developed that may enable automated discovery of entirely new force-field functional forms, while simultaneously fitting parameter values. The method uses a combination of genetic programming, Metropolis Monte Carlo importance sampling and parallel tempering, to efficiently search a large space of candidate functional forms and parameters. The methodology was tested using a nontrivial problem with a well-defined globally optimal solution: a small set of atomic configurations was generated and the energy of each configuration was calculated using the Lennard-Jones pair potential. Starting with a population of random functions, our fully automated, massively parallel implementation of the method reproducibly discovered the original Lennard-Jones pair potential by searching for several hours on 100 processors, sampling only a minuscule portion of the total search space. This result indicates that, with further improvement, the method may be suitable for unsupervised development of more accurate force fields with completely new functional forms. Copyright (c) 2007 Wiley Periodicals, Inc.
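At the core of the search described above is a fitness evaluation: candidate functional forms are scored against reference energies computed with the Lennard-Jones pair potential. The sketch below shows that scoring step only, with two hand-written candidates; the paper evolves candidates by genetic programming with parallel tempering rather than enumerating them.

```python
import math

# Reference data: Lennard-Jones pair energies at a handful of distances.
def lj(r, eps=1.0, sigma=1.0):
    sr6 = (sigma / r) ** 6
    return 4 * eps * (sr6 ** 2 - sr6)

distances = [0.95 + 0.1 * i for i in range(10)]      # toy "configurations"
reference = [lj(r) for r in distances]

def fitness(candidate):
    # Root-mean-square error versus the reference energies (lower is better).
    return math.sqrt(sum((candidate(r) - e) ** 2
                         for r, e in zip(distances, reference)) / len(distances))

candidates = {
    "harmonic": lambda r: (r - 1.12) ** 2 - 1.0,      # wrong functional form
    "lj_rediscovered": lambda r: 4 * ((1 / r) ** 12 - (1 / r) ** 6),
}
for name, f in sorted(candidates.items(), key=lambda kv: fitness(kv[1])):
    print(f"{name}: rmse={fitness(f):.6f}")
# The exact LJ form scores (near-)zero error, mirroring the recovery test.
```

In the paper this scalar fitness drives the genetic-programming moves, and parallel tempering keeps some replicas exploring at high "temperature" while others refine near-optimal forms.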

  3. Fluorinated colloidal gold immunolabels for imaging select proteins in parallel with lipids using high-resolution secondary ion mass spectrometry

    PubMed Central

    Wilson, Robert L.; Frisz, Jessica F.; Hanafin, William P.; Carpenter, Kevin J.; Hutcheon, Ian D.; Weber, Peter K.; Kraft, Mary L.

    2014-01-01

    The local abundance of specific lipid species near a membrane protein is hypothesized to influence the protein’s activity. The ability to simultaneously image the distributions of specific protein and lipid species in the cell membrane would facilitate testing these hypotheses. Recent advances in imaging the distribution of cell membrane lipids with mass spectrometry have created the desire for membrane protein probes that can be simultaneously imaged with isotope labeled lipids. Such probes would enable conclusive tests of whether specific proteins co-localize with particular lipid species. Here, we describe the development of fluorine-functionalized colloidal gold immunolabels that facilitate the detection and imaging of specific proteins in parallel with lipids in the plasma membrane using high-resolution SIMS performed with a NanoSIMS. First, we developed a method to functionalize colloidal gold nanoparticles with a partially fluorinated mixed monolayer that permitted NanoSIMS detection and rendered the functionalized nanoparticles dispersible in aqueous buffer. Then, to allow for selective protein labeling, we attached the fluorinated colloidal gold nanoparticles to the nonbinding portion of antibodies. By combining these functionalized immunolabels with metabolic incorporation of stable isotopes, we demonstrate that influenza hemagglutinin and cellular lipids can be imaged in parallel using NanoSIMS. These labels enable a general approach to simultaneously imaging specific proteins and lipids with high sensitivity and lateral resolution, which may be used to evaluate predictions of protein co-localization with specific lipid species. PMID:22284327

  4. Parallel Alterations of Functional Connectivity during Execution and Imagination after Motor Imagery Learning

    PubMed Central

    Zhang, Rushao; Hui, Mingqi; Long, Zhiying; Zhao, Xiaojie; Yao, Li

    2012-01-01

    Background Neural substrates underlying motor learning have been widely investigated with neuroimaging technologies. Investigations have illustrated the critical regions of motor learning and further revealed parallel alterations of functional activation during imagination and execution after learning. However, little is known about the functional connectivity associated with motor learning, especially motor imagery learning, although benefits from functional connectivity analysis attract more attention to the related explorations. We explored whether motor imagery (MI) and motor execution (ME) shared parallel alterations of functional connectivity after MI learning. Methodology/Principal Findings Graph theory analysis, which is widely used in functional connectivity exploration, was performed on the functional magnetic resonance imaging (fMRI) data of MI and ME tasks before and after 14 days of consecutive MI learning. The control group had no learning. Two measures, connectivity degree and interregional connectivity, were calculated and further assessed at a statistical level. Two interesting results were obtained: (1) The connectivity degree of the right posterior parietal lobe decreased in both MI and ME tasks after MI learning in the experimental group; (2) The parallel alterations of interregional connectivity related to the right posterior parietal lobe occurred in the supplementary motor area for both tasks. Conclusions/Significance These computational results may provide the following insights: (1) The establishment of motor schema through MI learning may induce the significant decrease of connectivity degree in the posterior parietal lobe; (2) The decreased interregional connectivity between the supplementary motor area and the right posterior parietal lobe in post-test implicates the dissociation between motor learning and task performing. 
These findings and explanations further revealed the neural substrates underpinning MI learning and supported that the potential value of MI learning in motor function rehabilitation and motor skill learning deserves more attention and further investigation. PMID:22629308

  5. Vectorization and parallelization of the finite strip method for dynamic Mindlin plate problems

    NASA Technical Reports Server (NTRS)

    Chen, Hsin-Chu; He, Ai-Fang

    1993-01-01

    The finite strip method is a semi-analytical finite element process which allows for a discrete analysis of certain types of physical problems by discretizing the domain of the problem into finite strips. This method decomposes a single large problem into m smaller independent subproblems when m harmonic functions are employed, thus yielding natural parallelism at a very high level. In this paper we address vectorization and parallelization strategies for the dynamic analysis of simply-supported Mindlin plate bending problems and show how to prevent potential conflicts in memory access during the assemblage process. The vector and parallel implementations of this method and the performance results of a test problem under scalar, vector, and vector-concurrent execution modes on the Alliant FX/80 are also presented.

  6. Roller-gear drives for robotic manipulators design, fabrication and test

    NASA Technical Reports Server (NTRS)

    Anderson, William J.; Shipitalo, William

    1991-01-01

    Two single axis planetary roller-gear drives and a two axis roller-gear drive with dual inputs were designed for use as robotic transmissions. Each of the single axis drives is a two planet row, four planet arrangement with spur gears and compressively loaded cylindrical rollers acting in parallel. The two axis drive employs bevel gears and cone rollers acting in parallel. The rollers serve a dual function: they remove backlash from the system, and they transmit torque when the gears are not fully engaged.

  7. libvdwxc: a library for exchange-correlation functionals in the vdW-DF family

    NASA Astrophysics Data System (ADS)

    Hjorth Larsen, Ask; Kuisma, Mikael; Löfgren, Joakim; Pouillon, Yann; Erhart, Paul; Hyldgaard, Per

    2017-09-01

    We present libvdwxc, a general library for evaluating the energy and potential for the family of vdW-DF exchange-correlation functionals. libvdwxc is written in C, provides an efficient implementation of the vdW-DF method, and can be interfaced with various general-purpose DFT codes; currently, the GPAW and Octopus codes implement interfaces to libvdwxc. The present implementation emphasizes scalability and parallel performance, and thereby enables ab initio calculations of nanometer-scale complexes. The numerical accuracy is benchmarked on the S22 test set, whereas parallel performance is benchmarked on ligand-protected gold nanoparticles (Au144(SC11NH25)60) up to 9696 atoms.

  8. MPF: A portable message passing facility for shared memory multiprocessors

    NASA Technical Reports Server (NTRS)

    Malony, Allen D.; Reed, Daniel A.; Mcguire, Patrick J.

    1987-01-01

    The design, implementation, and performance evaluation of a message passing facility (MPF) for shared memory multiprocessors are presented. The MPF is based on a message passing model conceptually similar to conversations. Participants (parallel processors) can enter or leave a conversation at any time. The message passing primitives for this model are implemented as a portable library of C function calls. The MPF is currently operational on a Sequent Balance 21000, and several parallel applications were developed and tested. Several simple benchmark programs are presented to establish interprocess communication performance for common patterns of interprocess communication. Finally, performance figures are presented for two parallel applications, linear systems solution, and iterative solution of partial differential equations.

  9. NDL-v2.0: A new version of the numerical differentiation library for parallel architectures

    NASA Astrophysics Data System (ADS)

    Hadjidoukas, P. E.; Angelikopoulos, P.; Voglis, C.; Papageorgiou, D. G.; Lagaris, I. E.

    2014-07-01

    We present a new version of the numerical differentiation library (NDL) used for the numerical estimation of first- and second-order partial derivatives of a function by finite differencing. In this version we have restructured the serial implementation of the code so as to achieve optimal task-based parallelization. The pure shared-memory parallelization of the library is based on the lightweight OpenMP tasking model, allowing for the full extraction of the available parallelism and efficient scheduling of multiple concurrent library calls. On multicore clusters, parallelism is exploited by means of TORC, an MPI-based multi-threaded tasking library. The new MPI implementation of NDL provides optimal performance in terms of function calls and, furthermore, supports asynchronous execution of multiple library calls within legacy MPI programs. In addition, a Python interface has been implemented for all cases, exporting the functionality of our library to sequential Python codes. Catalog identifier: AEDG_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDG_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 63036 No. of bytes in distributed program, including test data, etc.: 801872 Distribution format: tar.gz Programming language: ANSI Fortran-77, ANSI C, Python. Computer: Distributed systems (clusters), shared memory systems. Operating system: Linux, Unix. Has the code been vectorized or parallelized?: Yes. RAM: The library uses O(N) internal storage, N being the dimension of the problem. It can use up to O(N^2) internal storage for Hessian calculations if a task throttling factor has not been set by the user. Classification: 4.9, 4.14, 6.5. Catalog identifier of previous version: AEDG_v1_0 Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 1404 Does the new version supersede the previous version?: Yes Nature of problem: The numerical estimation of derivatives at several accuracy levels is a common requirement in many computational tasks, such as optimization, solution of nonlinear systems, and sensitivity analysis. For a large number of scientific and engineering applications, the underlying functions correspond to simulation codes for which analytical estimation of derivatives is difficult or almost impossible. A parallel implementation that exploits systems with multiple CPUs is very important for large-scale and computationally expensive problems. Solution method: Finite differencing is used with a carefully chosen step that minimizes the sum of the truncation and round-off errors. The parallel versions employ both OpenMP and MPI libraries. Reasons for new version: The updated version was motivated by our endeavors to extend a parallel Bayesian uncertainty quantification framework [1] by incorporating higher-order derivative information, as in most state-of-the-art stochastic simulation methods such as Stochastic Newton MCMC [2] and Riemannian Manifold Hamiltonian MC [3]. The function evaluations are simulations with significant time-to-solution, which also varies with the input parameters, such as in [1, 4]. The runtime of the N-body type of problem changes considerably with the introduction of a longer cut-off between the bodies. In the first version of the library, the OpenMP-parallel subroutines spawn a new team of threads and distribute the function evaluations with a PARALLEL DO directive. This limits the functionality of the library, as multiple concurrent calls require nested parallelism support from the OpenMP environment. Therefore, either their function evaluations will be serialized or processor oversubscription is likely to occur due to the increased number of OpenMP threads.
In addition, the Hessian calculations include two explicit parallel regions that compute first the diagonal and then the off-diagonal elements of the array. Due to the barrier between the two regions, the parallelism of the calculations is not fully exploited. These issues have been addressed in the new version by first restructuring the serial code and then running the function evaluations in parallel using OpenMP tasks. Although the MPI-parallel implementation of the first version is capable of fully exploiting the task parallelism of the PNDL routines, it does not utilize the caching mechanism of the serial code and, therefore, performs some redundant function evaluations in the Hessian and Jacobian calculations. This can lead to: (a) higher execution times if the number of available processors is lower than the total number of tasks, and (b) significant energy consumption due to wasted processor cycles. Overcoming these drawbacks, which become critical as the time of a single function evaluation increases, was the primary goal of this new version. Due to the code restructure, the MPI-parallel implementation (and the OpenMP-parallel in accordance) avoids redundant calls, providing optimal performance in terms of the number of function evaluations. Another limitation of the library was that the library subroutines were collective and synchronous calls. In the new version, each MPI process can issue any number of subroutines for asynchronous execution. We introduce two library calls that provide global and local task synchronizations, similarly to the BARRIER and TASKWAIT directives of OpenMP. The new MPI-implementation is based on TORC, a new tasking library for multicore clusters [5-7]. TORC improves the portability of the software, as it relies exclusively on the POSIX-Threads and MPI programming interfaces. 
It allows MPI processes to utilize multiple worker threads, offering a hybrid programming and execution environment similar to MPI+OpenMP, in a completely transparent way. Finally, to further improve the usability of our software, a Python interface has been implemented on top of both the OpenMP and MPI versions of the library. This allows sequential Python codes to exploit shared and distributed memory systems. Summary of revisions: The revised code improves the performance of both parallel (OpenMP and MPI) implementations. The functionality and the user-interface of the MPI-parallel version have been extended to support the asynchronous execution of multiple PNDL calls, issued by one or multiple MPI processes. A new underlying tasking library increases portability and allows MPI processes to have multiple worker threads. For both implementations, an interface to the Python programming language has been added. Restrictions: The library uses only double precision arithmetic. The MPI implementation assumes the homogeneity of the execution environment provided by the operating system. Specifically, the processes of a single MPI application must have identical address spaces, and a user function must reside at the same virtual address. In addition, address space layout randomization should not be used for the application. Unusual features: The software takes into account bound constraints, in the sense that only feasible points are used to evaluate the derivatives, and given the level of the desired accuracy, the proper formula is automatically employed. Running time: Running time depends on the function's complexity. The test run took 23 ms for the serial distribution, 25 ms for the OpenMP with 2 threads, 53 ms and 1.01 s for the MPI parallel distribution using 2 threads and 2 processes respectively and yield-time for idle workers equal to 10 ms. References: [1] P. Angelikopoulos, C. Papadimitriou, P. Koumoutsakos, Bayesian uncertainty quantification and propagation in molecular dynamics simulations: a high performance computing framework, J. Chem. Phys. 137 (14). [2] H.P. Flath, L.C. Wilcox, V. Akcelik, J. Hill, B. van Bloemen Waanders, O. Ghattas, Fast algorithms for Bayesian uncertainty quantification in large-scale linear inverse problems based on low-rank partial Hessian approximations, SIAM J. Sci. Comput. 33 (1) (2011) 407-432. [3] M. Girolami, B. Calderhead, Riemann manifold Langevin and Hamiltonian Monte Carlo methods, J. R. Stat. Soc. Ser. B (Stat. Methodol.) 73 (2) (2011) 123-214. [4] P. Angelikopoulos, C. Papadimitriou, P. Koumoutsakos, Data driven, predictive molecular dynamics for nanoscale flow simulations under uncertainty, J. Phys. Chem. B 117 (47) (2013) 14808-14816. [5] P.E. Hadjidoukas, E. Lappas, V.V. Dimakopoulos, A runtime library for platform-independent task parallelism, in: PDP, IEEE, 2012, pp. 229-236. [6] C. Voglis, P.E. Hadjidoukas, D.G. Papageorgiou, I. Lagaris, A parallel hybrid optimization algorithm for fitting interatomic potentials, Appl. Soft Comput. 13 (12) (2013) 4481-4492. [7] P.E. Hadjidoukas, C. Voglis, V.V. Dimakopoulos, I. Lagaris, D.G. Papageorgiou, Supporting adaptive and irregular parallelism for non-linear numerical optimization, Appl. Math. Comput. 231 (2014) 544-559.
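The step-selection rule described in the Solution method above (choosing the differencing step to minimize the sum of the truncation and round-off errors) can be sketched in a few lines of Python. This is an illustration of the idea only, not the NDL/PNDL API; the function name and the simple step heuristic are assumptions.

```python
import math

def central_diff(f, x, i, eps=2.2e-16):
    """Central difference estimate of df/dx_i.

    The truncation error of a central difference is O(h^2) while the
    round-off error grows like O(eps/h); balancing the two gives a step
    of roughly eps**(1/3), scaled by the magnitude of x_i."""
    h = eps ** (1.0 / 3.0) * max(1.0, abs(x[i]))
    xp, xm = list(x), list(x)
    xp[i] += h
    xm[i] -= h
    return (f(xp) - f(xm)) / (2.0 * h)

# Gradient of f(x) = x0^2 + 3*x1 at (2, 5); exact gradient is (4, 3)
f = lambda x: x[0] ** 2 + 3.0 * x[1]
g = [central_diff(f, [2.0, 5.0], i) for i in range(2)]
```

In the library itself, the independence of these per-coordinate evaluations is what makes the task-based parallelization effective.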

  10. Design and Implementation of a Parallel Multivariate Ensemble Kalman Filter for the Poseidon Ocean General Circulation Model

    NASA Technical Reports Server (NTRS)

    Keppenne, Christian L.; Rienecker, Michele M.; Koblinsky, Chester (Technical Monitor)

    2001-01-01

A multivariate ensemble Kalman filter (MvEnKF), implemented on a massively parallel computer architecture, has been developed for the Poseidon ocean circulation model and tested with a Pacific Basin model configuration. There are about two million prognostic state-vector variables. Parallelism for the data assimilation step is achieved by regionalization of the background-error covariances that are calculated from the phase-space distribution of the ensemble. Each processing element (PE) collects elements of a matrix measurement functional from nearby PEs. To avoid the introduction of spurious long-range covariances associated with finite ensemble sizes, the background-error covariances are given compact support by means of a Hadamard (element-by-element) product with a three-dimensional canonical correlation function. The methodology and the MvEnKF configuration are discussed. It is shown that the regionalization of the background covariances has a negligible impact on the quality of the analyses. The parallel algorithm is very efficient for large numbers of observations but does not scale well beyond 100 PEs at the current model resolution. On a platform with distributed memory, memory rather than speed is the limiting factor.
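The Hadamard-product localization described above can be illustrated on a toy one-dimensional grid. The Gaussian-shaped taper below is a stand-in for the paper's three-dimensional canonical correlation function, and all names, sizes, and the cutoff rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_grid, n_ens = 50, 10          # small grid, deliberately small ensemble

# Sample background-error covariance estimated from an ensemble of states
X = rng.standard_normal((n_grid, n_ens))
X -= X.mean(axis=1, keepdims=True)
P = X @ X.T / (n_ens - 1)

# Compact-support taper: correlations beyond a cutoff distance are zeroed,
# suppressing the spurious long-range covariances a small ensemble produces
dist = np.abs(np.subtract.outer(np.arange(n_grid), np.arange(n_grid)))
L = 10.0
taper = np.where(dist < 2 * L, np.exp(-(dist / L) ** 2), 0.0)

P_loc = P * taper               # Hadamard (element-by-element) product
```

Because the taper is exactly zero beyond the cutoff, the localized covariance is sparse, which is also what enables the regionalized, per-PE treatment described in the abstract.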

  11. JETSPIN: A specific-purpose open-source software for simulations of nanofiber electrospinning

    NASA Astrophysics Data System (ADS)

    Lauricella, Marco; Pontrelli, Giuseppe; Coluzza, Ivan; Pisignano, Dario; Succi, Sauro

    2015-12-01

    We present the open-source computer program JETSPIN, specifically designed to simulate the electrospinning process of nanofibers. Its capabilities are shown with proper reference to the underlying model, as well as a description of the relevant input variables and associated test-case simulations. The various interactions included in the electrospinning model implemented in JETSPIN are discussed in detail. The code is designed to exploit different computational architectures, from single to parallel processor workstations. This paper provides an overview of JETSPIN, focusing primarily on its structure, parallel implementations, functionality, performance, and availability.

  12. Validation of Shear Wave Elastography in Skeletal Muscle

    PubMed Central

    Eby, Sarah F.; Song, Pengfei; Chen, Shigao; Chen, Qingshan; Greenleaf, James F.; An, Kai-Nan

    2013-01-01

    Skeletal muscle is a very dynamic tissue, thus accurate quantification of skeletal muscle stiffness throughout its functional range is crucial to improve the physical functioning and independence following pathology. Shear wave elastography (SWE) is an ultrasound-based technique that characterizes tissue mechanical properties based on the propagation of remotely induced shear waves. The objective of this study is to validate SWE throughout the functional range of motion of skeletal muscle for three ultrasound transducer orientations. We hypothesized that combining traditional materials testing (MTS) techniques with SWE measurements will show increased stiffness measures with increasing tensile load, and will correlate well with each other for trials in which the transducer is parallel to underlying muscle fibers. To evaluate this hypothesis, we monitored the deformation throughout tensile loading of four porcine brachialis whole-muscle tissue specimens, while simultaneously making SWE measurements of the same specimen. We used regression to examine the correlation between Young's modulus from MTS and shear modulus from SWE for each of the transducer orientations. We applied a generalized linear model to account for repeated testing. Model parameters were estimated via generalized estimating equations. The regression coefficient was 0.1944, with a 95% confidence interval of (0.1463 – 0.2425) for parallel transducer trials. Shear waves did not propagate well for both the 45° and perpendicular transducer orientations. Both parallel SWE and MTS showed increased stiffness with increasing tensile load. This study provides the necessary first step for additional studies that can evaluate the distribution of stiffness throughout muscle. PMID:23953670
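SWE converts shear wave speed into a stiffness estimate through the standard relation μ = ρc², valid under the usual assumptions of a linear, isotropic, locally homogeneous medium. A minimal illustration (the numeric values are generic, not taken from this study):

```python
def shear_modulus(wave_speed_m_s, density_kg_m3=1000.0):
    """Shear modulus mu = rho * c^2 (in Pa), the relation underlying SWE.

    Soft tissue density is commonly approximated as ~1000 kg/m^3."""
    return density_kg_m3 * wave_speed_m_s ** 2

# A 3 m/s shear wave corresponds to mu = 1000 * 3^2 = 9000 Pa (9 kPa)
mu = shear_modulus(3.0)
```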

  13. Development of a Simulink Library for the Design, Testing and Simulation of Software Defined GPS Radios. With Application to the Development of Parallel Correlator Structures

    DTIC Science & Technology

    2014-05-01

function Value = Select_Element(Index,Signal) %#eml
Value = Signal(Index);
Code Listing 1: Code for Selector Block

…code for the Simulink function:

function shiftedSignal = fcn(signal,Shift) %#eml
shiftedSignal = circshift(signal,Shift);
Code Listing 2: Code for CircShift

  14. Real-Time Parallel Software Design Case Study: Implementation of the RASSP SAR Benchmark on the Intel Paragon.

    DTIC Science & Technology

    1996-01-01

Figure 3-1 shows pseudo-code for a test bench with two application nodes. The outer test bench wrapper consists of three functions: pipeline_init, pipeline…exit_func); (Figure 3-1: Test Bench Pseudo Code). The application wrapper is contained in the pipeline routine and similarly consists of an…

  15. The R package "sperrorest" : Parallelized spatial error estimation and variable importance assessment for geospatial machine learning

    NASA Astrophysics Data System (ADS)

    Schratz, Patrick; Herrmann, Tobias; Brenning, Alexander

    2017-04-01

Computational and statistical prediction methods such as the support vector machine have gained popularity in remote-sensing applications in recent years and are often compared to more traditional approaches like maximum-likelihood classification. However, the accuracy assessment of such predictive models in a spatial context needs to account for the presence of spatial autocorrelation in geospatial data by using spatial cross-validation and bootstrap strategies instead of their now more widely used non-spatial equivalents. The R package sperrorest by A. Brenning [IEEE International Geoscience and Remote Sensing Symposium, 1, 374 (2012)] provides a generic interface for performing (spatial) cross-validation of any statistical or machine-learning technique available in R. Since spatial statistical models as well as flexible machine-learning algorithms can be computationally expensive, parallel computing strategies are required to perform cross-validation efficiently. The most recent major release of sperrorest therefore comes with two new features (aside from improved documentation): The first one is the parallelized version of sperrorest(), parsperrorest(). This function features two parallel modes to greatly speed up cross-validation runs. Both parallel modes are platform independent and provide progress information. par.mode = 1 relies on the pbapply package and internally calls, depending on the platform, parallel::mclapply() or parallel::parApply(). While forking is used on Unix systems, Windows systems use a cluster approach for parallel execution. par.mode = 2 uses the foreach package to perform parallelization. This method uses a different way of cluster parallelization than the parallel package does. In summary, the robustness of parsperrorest() is increased with the implementation of two independent parallel modes. A new way of partitioning the data in sperrorest is provided by partition.factor.cv(). 
This function gives the user the possibility to perform cross-validation at the level of some grouping structure. As an example, in remote sensing of agricultural land uses, pixels from the same field contain nearly identical information and will thus be jointly placed in either the test set or the training set. Other spatial resampling strategies are already available and can be extended by the user.
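The grouping idea behind partition.factor.cv() (all pixels of one field land together in either the training or the test fold) can be sketched in Python. This is an independent illustration of grouped cross-validation partitioning, not the R package's implementation; the function name and round-robin assignment are assumptions.

```python
from collections import defaultdict

def group_kfold(groups, k):
    """Assign whole groups (e.g. field IDs) to folds, so that samples
    sharing a group never straddle the train/test split."""
    # Round-robin assignment of groups to folds
    fold_of_group = {g: j % k for j, g in enumerate(sorted(set(groups)))}
    folds = defaultdict(list)
    for i, g in enumerate(groups):
        folds[fold_of_group[g]].append(i)
    return [folds[j] for j in range(k)]

# Pixels from fields 'A', 'B', 'C'; each field lands entirely in one fold
fields = ['A', 'A', 'B', 'B', 'C', 'C']
splits = group_kfold(fields, k=3)
```

Each returned fold can then serve as the test set while the remaining folds form the training set, exactly the structure a parallel cross-validation driver distributes across workers.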

  16. Lattice dynamics calculations based on density-functional perturbation theory in real space

    NASA Astrophysics Data System (ADS)

    Shang, Honghui; Carbogno, Christian; Rinke, Patrick; Scheffler, Matthias

    2017-06-01

A real-space formalism for density-functional perturbation theory (DFPT) is derived and applied for the computation of harmonic vibrational properties in molecules and solids. The practical implementation using numeric atom-centered orbitals as basis functions is demonstrated exemplarily for the all-electron Fritz Haber Institute ab initio molecular simulations (FHI-aims) package. The convergence of the calculations with respect to numerical parameters is carefully investigated and a systematic comparison with finite-difference approaches is performed both for finite (molecules) and extended (periodic) systems. Finally, scalability tests on massively parallel computer systems demonstrate the computational efficiency of the implementation.

  17. Parallel reduced-instruction-set-computer architecture for real-time symbolic pattern matching

    NASA Astrophysics Data System (ADS)

    Parson, Dale E.

    1991-03-01

This report discusses ongoing work on a parallel reduced-instruction-set-computer (RISC) architecture for automatic production matching. The PRIOPS compiler takes advantage of the memoryless character of automatic processing by translating a program's collection of automatic production tests into an equivalent combinational circuit, a digital circuit without memory whose outputs are immediate functions of its inputs. The circuit provides a highly parallel, fine-grain model of automatic matching. The compiler then maps the combinational circuit onto RISC hardware. The heart of the processor is an array of comparators capable of testing production conditions in parallel. Each comparator attaches to private memory that contains virtual circuit nodes, records of the current state of nodes and busses in the combinational circuit. All comparator memories hold identical information, allowing simultaneous update for a single changing circuit node and simultaneous retrieval of different circuit nodes by different comparators. Along with the comparator-based logic unit is a sequencer that determines the current combination of production-derived comparisons to try, based on the combined success and failure of previous combinations of comparisons. The memoryless nature of automatic matching allows the compiler to designate invariant memory addresses for virtual circuit nodes, and to generate the most effective sequences of comparison test combinations. The result is maximal utilization of parallel hardware, indicating speed increases and scalability beyond that found for coarse-grain, multiprocessor approaches to concurrent Rete matching. Future work will consider application of this RISC architecture to the standard (controlled) Rete algorithm, where search through memory dominates portions of matching.

  18. Measuring effectiveness of a university by a parallel network DEA model

    NASA Astrophysics Data System (ADS)

    Kashim, Rosmaini; Kasim, Maznah Mat; Rahman, Rosshairy Abd

    2017-11-01

Universities contribute significantly to the development of human capital and the socio-economic improvement of a country. Because of that, Malaysian universities have carried out various initiatives to improve their performance. Most studies have used the Data Envelopment Analysis (DEA) model to measure efficiency rather than effectiveness, even though measuring effectiveness is important to understand how well a university achieves its ultimate goals. A university system has two major functions, namely teaching and research, and each function has different resources based on its emphasis. Therefore, a university is actually structured as a parallel production system, with its overall effectiveness being the aggregated effectiveness of teaching and research. Hence, this paper proposes a parallel network DEA model to measure the effectiveness of a university. This model takes the internal operations of both the teaching and research functions into account in computing the effectiveness of a university system. In the literature, graduates and the number of programs offered are defined as the outputs of teaching, while employed graduates and the number of programs accredited by professional bodies are considered its outcomes. The amount of grants is regarded as the output of research, while publications of different quality are considered its outcomes. A system is considered effective only if all of its functions are effective. This model has been tested using a hypothetical set of data consisting of 14 faculties at a public university in Malaysia. The results show that none of the faculties is relatively effective in overall performance. Three faculties are effective in teaching and two faculties are effective in research. 
The potential applications of the parallel network DEA model allow the top management of a university to identify weaknesses in any functions in their universities and take rational steps for improvement.
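A basic DEA building block can be made concrete with the classical input-oriented CCR model in multiplier form, solved as a linear program. This is the standard single-stage DEA formulation, not the parallel network model of the paper; the function and data below are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR DEA efficiency of decision-making unit o:
        maximize u.y_o  s.t.  v.x_o = 1,  u.y_j - v.x_j <= 0,  u, v >= 0
    X: (m inputs x n units), Y: (s outputs x n units)."""
    m, n = X.shape
    s = Y.shape[0]
    # Variables z = [u (s outputs), v (m inputs)]; linprog minimizes -u.y_o
    c = np.concatenate([-Y[:, o], np.zeros(m)])
    A_ub = np.hstack([Y.T, -X.T])                      # u.y_j - v.x_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[:, o]]).reshape(1, -1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m))
    return -res.fun

# Two units, one input, one output; unit 1 produces twice as much output
X = np.array([[2.0, 2.0]])
Y = np.array([[1.0, 2.0]])
eff = [ccr_efficiency(X, Y, o) for o in range(2)]   # unit 1 is efficient
```

A parallel network model extends this by giving each internal function (teaching, research) its own inputs, outputs, and outcomes, and aggregating the per-function scores.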

  19. Design and realization of test system for testing parallelism and jumpiness of optical axis of photoelectric equipment

    NASA Astrophysics Data System (ADS)

    Shi, Sheng-bing; Chen, Zhen-xing; Qin, Shao-gang; Song, Chun-yan; Jiang, Yun-hong

    2014-09-01

With the development of science and technology, photoelectric equipment comprises visible, infrared, laser and other systems, with higher integration, informatization and complexity than in the past. Parallelism and jumpiness of the optical axis are important performance characteristics of photoelectric equipment that directly affect aiming, ranging, orientation and so on. Jumpiness of the optical axis directly affects the hit precision of precise point-damage weapons, yet facilities for testing this performance have been lacking. In this paper, a test system for testing the parallelism and jumpiness of the optical axis is designed; accurate aiming is not necessary and data processing is digital during parallelism testing. The system can directly test the parallelism of multiple axes (aiming axis and laser emission axis, laser emission axis and laser receiving axis) and, for the first time, measures the jumpiness of the optical axis of an optical sighting device; it is a universal test system.

  20. Testing convergent and parallel adaptations in talpids humeral mechanical performance by means of geometric morphometrics and finite element analysis.

    PubMed

    Piras, P; Sansalone, G; Teresi, L; Kotsakis, T; Colangelo, P; Loy, A

    2012-07-01

    The shape and mechanical performance in Talpidae humeri were studied by means of Geometric Morphometrics and Finite Element Analysis, including both extinct and extant taxa. The aim of this study was to test whether the ability to dig, quantified by humerus mechanical performance, was characterized by convergent or parallel adaptations in different clades of complex tunnel digger within Talpidae, that is, Talpinae+Condylura (monophyletic) and some complex tunnel diggers not belonging to this clade. Our results suggest that the pattern underlying Talpidae humerus evolution is evolutionary parallelism. However, this insight changed to true convergence when we tested an alternative phylogeny based on molecular data, with Condylura moved to a more basal phylogenetic position. Shape and performance analyses, as well as specific comparative methods, provided strong evidence that the ability to dig complex tunnels reached a functional optimum in distantly related taxa. This was also confirmed by the lower phenotypic variance in complex tunnel digger taxa, compared to non-complex tunnel diggers. Evolutionary rates of phenotypic change showed a smooth deceleration in correspondence with the most recent common ancestor of the Talpinae+Condylura clade. Copyright © 2012 Wiley Periodicals, Inc.

  1. HOMOLOGOUS MEASURES OF COGNITIVE FUNCTION IN HUMAN INFANTS AND LABORATORY ANIMALS TO IDENTIFY ENVIRONMENTAL HEALTH RISKS TO CHILDREN

    EPA Science Inventory

The importance of including neurodevelopmental endpoints in environmental studies is clear. A validated measure of cognitive function in human infants that also has a parallel test in laboratory animal studies will provide a valuable approach for large-scale studies. Such a ho...

  2. Validated Test Method 1316: Liquid-Solid Partitioning as a Function of Liquid-to-Solid Ratio in Solid Materials Using a Parallel Batch Procedure

    EPA Pesticide Factsheets

    Describes procedures written based on the assumption that they will be performed by analysts who are formally trained in at least the basic principles of chemical analysis and in the use of the subject technology.

  3. Massively parallel quantum computer simulator

    NASA Astrophysics Data System (ADS)

    De Raedt, K.; Michielsen, K.; De Raedt, H.; Trieu, B.; Arnold, G.; Richter, M.; Lippert, Th.; Watanabe, H.; Ito, N.

    2007-01-01

We describe portable software to simulate universal quantum computers on massively parallel computers. We illustrate the use of the simulation software by running various quantum algorithms on different computer architectures, such as an IBM BlueGene/L, an IBM Regatta p690+, a Hitachi SR11000/J1, a Cray X1E, an SGI Altix 3700 and clusters of PCs running Windows XP. We study the performance of the software by simulating quantum computers containing up to 36 qubits, using up to 4096 processors and up to 1 TB of memory. Our results demonstrate that the simulator exhibits nearly ideal scaling as a function of the number of processors and suggest that the simulation software described in this paper may also serve as a benchmark for testing high-end parallel computers.
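The memory figures quoted above follow directly from storing 2^n complex amplitudes: 2^36 amplitudes at 16 bytes each in double precision is about 1 TB. A toy state-vector simulator illustrates the representation; the gate-application helper is a generic textbook construction, not the paper's software.

```python
import numpy as np

def apply_hadamard(state, target, n):
    """Apply a Hadamard gate to one qubit of an n-qubit state vector.

    The full state holds 2**n complex amplitudes, which is why memory
    dominates: 2**36 amplitudes * 16 bytes ~ 1 TB in double precision."""
    h = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
    psi = state.reshape((2,) * n)
    psi = np.tensordot(h, psi, axes=([1], [target]))   # contract target axis
    return np.moveaxis(psi, 0, target).reshape(-1)     # restore axis order

n = 3
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                       # start in |000>
for q in range(n):
    state = apply_hadamard(state, q, n)
# Hadamards on every qubit produce the uniform superposition, 1/sqrt(8) each
```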

  4. A numerical differentiation library exploiting parallel architectures

    NASA Astrophysics Data System (ADS)

    Voglis, C.; Hadjidoukas, P. E.; Lagaris, I. E.; Papageorgiou, D. G.

    2009-08-01

We present a software library for numerically estimating first and second order partial derivatives of a function by finite differencing. Various truncation schemes are offered resulting in corresponding formulas that are accurate to order O(h), O(h²), and O(h⁴), h being the differencing step. The derivatives are calculated via forward, backward and central differences. Care has been taken that only feasible points are used in the case where bound constraints are imposed on the variables. The Hessian may be approximated either from function or from gradient values. There are three versions of the software: a sequential version, an OpenMP version for shared memory architectures and an MPI version for distributed systems (clusters). The parallel versions exploit the multiprocessing capability offered by computer clusters, as well as modern multi-core systems and due to the independent character of the derivative computation, the speedup scales almost linearly with the number of available processors/cores. Program summary Program title: NDL (Numerical Differentiation Library) Catalogue identifier: AEDG_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEDG_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 73 030 No. 
of bytes in distributed program, including test data, etc.: 630 876 Distribution format: tar.gz Programming language: ANSI FORTRAN-77, ANSI C, MPI, OPENMP Computer: Distributed systems (clusters), shared memory systems Operating system: Linux, Solaris Has the code been vectorised or parallelized?: Yes RAM: The library uses O(N) internal storage, N being the dimension of the problem Classification: 4.9, 4.14, 6.5 Nature of problem: The numerical estimation of derivatives at several accuracy levels is a common requirement in many computational tasks, such as optimization, solution of nonlinear systems, etc. The parallel implementation that exploits systems with multiple CPUs is very important for large scale and computationally expensive problems. Solution method: Finite differencing is used with a carefully chosen step that minimizes the sum of the truncation and round-off errors. The parallel versions employ both OpenMP and MPI libraries. Restrictions: The library uses only double precision arithmetic. Unusual features: The software takes into account bound constraints, in the sense that only feasible points are used to evaluate the derivatives, and given the level of the desired accuracy, the proper formula is automatically employed. Running time: Running time depends on the function's complexity. The test run took 15 ms for the serial distribution, 0.6 s for the OpenMP and 4.2 s for the MPI parallel distribution on 2 processors.
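The "Unusual features" note above (only feasible points are used when bound constraints are present) amounts to falling back from a central difference to a one-sided formula whenever a stencil point would leave the feasible box. The sketch below illustrates that rule; it is an assumption-laden toy, not the NDL implementation, and the fixed step size is purely illustrative.

```python
def diff_with_bounds(f, x, i, lo, hi, h=1e-6):
    """Estimate df/dx_i using only points inside the box [lo, hi].

    Central difference (O(h^2)) when both neighbors are feasible,
    otherwise a one-sided forward or backward difference (O(h))."""
    xp, xm = list(x), list(x)
    can_fwd = x[i] + h <= hi[i]
    can_bwd = x[i] - h >= lo[i]
    if can_fwd and can_bwd:          # central difference
        xp[i] += h
        xm[i] -= h
        return (f(xp) - f(xm)) / (2.0 * h)
    if can_fwd:                      # forward difference at a lower bound
        xp[i] += h
        return (f(xp) - f(x)) / h
    xm[i] -= h                       # backward difference at an upper bound
    return (f(x) - f(xm)) / h

# x0 sits on its lower bound, so a backward step would be infeasible there
f = lambda x: x[0] ** 2 + x[1] ** 2
g0 = diff_with_bounds(f, [0.0, 1.0], 0, lo=[0.0, 0.0], hi=[2.0, 2.0])
g1 = diff_with_bounds(f, [0.0, 1.0], 1, lo=[0.0, 0.0], hi=[2.0, 2.0])
```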

  5. Magnetic intermittency of solar wind turbulence in the dissipation range

    NASA Astrophysics Data System (ADS)

    Pei, Zhongtian; He, Jiansen; Tu, Chuanyi; Marsch, Eckart; Wang, Linghua

    2016-04-01

The feature, nature, and fate of intermittency in the dissipation range are an interesting topic in solar wind turbulence. We calculate the distribution of flatness for the magnetic field fluctuations as a function of angle and scale. The flatness distribution shows a "butterfly" pattern, with two wings located at angles parallel/anti-parallel to the local mean magnetic field direction and the main body located at angles perpendicular to the local B0. This "butterfly" pattern illustrates that the flatness profile in the (anti-)parallel direction approaches its maximum value at larger scale and drops faster than that in the perpendicular direction. The contours of the probability distribution functions at different scales illustrate a "vase" pattern, more clear in the parallel direction, which confirms the scale variation of flatness and indicates intermittency generation and dissipation. The angular distribution of the structure function in the dissipation range shows an anisotropic pattern. The quasi-mono-fractal scaling of the structure function in the dissipation range is also illustrated and investigated with a mathematical model for inhomogeneous cascading (extended p-model). Different from the inertial range, the extended p-model for the dissipation range results in an approximately uniform fragmentation measure. However, a more complete mathematical and physical model involving both non-uniform cascading and dissipation is needed. The nature of intermittency may be strong structures or large-amplitude fluctuations, which may be tested with magnetic helicity. In one case study, we find that the heating effect in terms of entropy for large-amplitude fluctuations seems to be more obvious than for strong structures.
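The flatness used above is the normalized fourth-order moment of the field increments, F(τ) = ⟨δB⁴⟩/⟨δB²⟩², which equals 3 for Gaussian fluctuations and rises with intermittency. A scale-dependent estimate can be computed as follows (the synthetic signal is only a stand-in for measured magnetic field data):

```python
import numpy as np

def flatness(b, lag):
    """Flatness F(tau) of increments dB = b(t+tau) - b(t).

    F = 3 for a Gaussian signal; larger values indicate intermittency."""
    db = b[lag:] - b[:-lag]
    return np.mean(db ** 4) / np.mean(db ** 2) ** 2

rng = np.random.default_rng(1)
b = rng.standard_normal(100_000).cumsum()   # Brownian-like toy signal
F = [flatness(b, lag) for lag in (1, 10, 100)]
# Gaussian increments at every scale, so F stays near 3 here;
# intermittent turbulence instead shows F growing toward small scales
```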

  6. Massively parallel sparse matrix function calculations with NTPoly

    NASA Astrophysics Data System (ADS)

    Dawson, William; Nakajima, Takahito

    2018-04-01

We present NTPoly, a massively parallel library for computing the functions of sparse, symmetric matrices. The theory of matrix functions is a well-developed framework with a wide range of applications including differential equations, graph theory, and electronic structure calculations. One particularly important application area is diagonalization-free methods in quantum chemistry. When the input and output of the matrix function are sparse, methods based on polynomial expansions can be used to compute matrix functions in linear time. We present a library based on these methods that can compute a variety of matrix functions. Distributed memory parallelization is based on a communication-avoiding sparse matrix multiplication algorithm. OpenMP task parallelization is utilized to implement hybrid parallelization. We describe NTPoly's interface and show how it can be integrated with programs written in many different programming languages. We demonstrate the merits of NTPoly by performing large scale calculations on the K computer.
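The polynomial-expansion idea behind such libraries can be demonstrated with a truncated Taylor series for the matrix exponential of a sparse matrix: only sparse matrix-matrix multiplications are needed, so sparsity can be preserved throughout. This is a generic illustration in SciPy, not NTPoly's algorithm or interface, and the truncation order is an assumption suitable only for a matrix of small norm.

```python
import scipy.sparse as sp

def expm_taylor(A, order=20):
    """Matrix exponential via a truncated Taylor polynomial,
    exp(A) ~ sum_{k=0}^{order} A^k / k!.

    Each step is a sparse matrix-matrix multiply, the same primitive
    that polynomial-expansion matrix-function methods parallelize."""
    n = A.shape[0]
    result = sp.identity(n, format='csr')
    term = sp.identity(n, format='csr')
    for k in range(1, order + 1):
        term = term @ A / k          # next Taylor term A^k / k!
        result = result + term
    return result

# Small-norm symmetric tridiagonal test matrix
A = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(50, 50), format='csr') * 0.1
E = expm_taylor(A)                   # remains a sparse matrix throughout
```

In practice Chebyshev expansions or purification iterations are preferred for conditioning, but the computational pattern (repeated sparse products) is the same.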

  7. Aging and feature search: the effect of search area.

    PubMed

    Burton-Danner, K; Owsley, C; Jackson, G R

    2001-01-01

    The preattentive system involves the rapid parallel processing of visual information in the visual scene so that attention can be directed to meaningful objects and locations in the environment. This study used the feature search methodology to examine whether there are aging-related deficits in parallel-processing capabilities when older adults are required to visually search a large area of the visual field. Like young subjects, older subjects displayed flat, near-zero slopes for the Reaction Time x Set Size function when searching over a broad area (30 degrees radius) of the visual field, implying parallel processing of the visual display. These same older subjects exhibited impairment in another task, also dependent on parallel processing, performed over the same broad field area; this task, called the useful field of view test, has more complex task demands. Results imply that aging-related breakdowns of parallel processing over a large visual field area are not likely to emerge when required responses are simple, there is only one task to perform, and there is no limitation on visual inspection time.

  8. A heterogeneous computing accelerated SCE-UA global optimization method using OpenMP, OpenCL, CUDA, and OpenACC.

    PubMed

    Kan, Guangyuan; He, Xiaoyan; Ding, Liuqian; Li, Jiren; Liang, Ke; Hong, Yang

    2017-10-01

The shuffled complex evolution optimization developed at the University of Arizona (SCE-UA) has been successfully applied in various kinds of scientific and engineering optimization applications, such as hydrological model parameter calibration, for many years. The algorithm possesses good global optimality, convergence stability and robustness. However, benchmark and real-world applications reveal the poor computational efficiency of the SCE-UA. This research aims at the parallelization and acceleration of the SCE-UA method based on powerful heterogeneous computing technology. The parallel SCE-UA is implemented on an Intel Xeon multi-core CPU (by using OpenMP and OpenCL) and an NVIDIA Tesla many-core GPU (by using OpenCL, CUDA, and OpenACC). The serial and parallel SCE-UA were tested on the Griewank benchmark function. Comparison results indicate the parallel SCE-UA significantly improves computational efficiency compared to the original serial version. The OpenCL implementation obtains the best overall acceleration results, however, at the cost of the most complex source code. The parallel SCE-UA has bright prospects for application to real-world problems.
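The Griewank benchmark mentioned above, together with a sketch of the embarrassingly parallel step that such accelerated versions exploit: evaluating a population of candidate points independently. The thread-pool mapping here is a simplified stand-in for the paper's OpenMP/OpenCL/CUDA/OpenACC implementations, and all names are illustrative.

```python
import math
from concurrent.futures import ThreadPoolExecutor

def griewank(x):
    """Griewank benchmark:
    f(x) = 1 + sum(x_i^2)/4000 - prod(cos(x_i / sqrt(i+1))),
    with global minimum f = 0 at x = 0."""
    s = sum(v * v for v in x) / 4000.0
    p = math.prod(math.cos(v / math.sqrt(i + 1)) for i, v in enumerate(x))
    return 1.0 + s - p

# Candidate points are independent, so their evaluation maps cleanly
# onto parallel workers (threads here; CPU cores or GPU lanes in the paper)
population = [[0.0] * 10, [1.0] * 10, [100.0] * 10]
with ThreadPoolExecutor(max_workers=2) as ex:
    scores = list(ex.map(griewank, population))
```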

  9. On the Development of an Efficient Parallel Hybrid Solver with Application to Acoustically Treated Aero-Engine Nacelles

    NASA Technical Reports Server (NTRS)

    Watson, Willie R.; Nark, Douglas M.; Nguyen, Duc T.; Tungkahotara, Siroj

    2006-01-01

    A finite element solution to the convected Helmholtz equation in a nonuniform flow is used to model the noise field within 3-D acoustically treated aero-engine nacelles. Options to select linear or cubic Hermite polynomial basis functions and isoparametric elements are included. However, the key feature of the method is a domain decomposition procedure that is based upon the inter-mixing of an iterative and a direct solve strategy for solving the discrete finite element equations. This procedure is optimized to take full advantage of sparsity and exploit the increased memory and parallel processing capability of modern computer architectures. Example computations are presented for the Langley Flow Impedance Test facility and a rectangular mapping of a full scale, generic aero-engine nacelle. The accuracy and parallel performance of this new solver are tested on both model problems using a supercomputer that contains hundreds of central processing units. Results show that the method gives extremely accurate attenuation predictions, achieves super-linear speedup over hundreds of CPUs, and solves upward of 25 million complex equations in a quarter of an hour.

  10. Principle-Based Inferences in Preschoolers' Categorization of Novel Artifacts.

    ERIC Educational Resources Information Center

    Nelson, Deborah G. Kemler; And Others

    Two parallel studies investigated the influence of principle-based and attribute-based similarity relations on new category learning by preschoolers. One of two possible functions of a single novel artifact (which differed between studies) was modeled for children and practiced by children. Children then judged which test objects received the same…

  11. Detecting opportunities for parallel observations on the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Lucks, Michael

    1992-01-01

The presence of multiple scientific instruments aboard the Hubble Space Telescope provides opportunities for parallel science, i.e., the simultaneous use of different instruments for different observations. Determining whether candidate observations are suitable for parallel execution depends on numerous criteria (some involving quantitative tradeoffs) that may change frequently. A knowledge-based approach is presented for constructing a scoring function to rank candidate pairs of observations for parallel science. In the Parallel Observation Matching System (POMS), spacecraft knowledge and schedulers' preferences are represented using a uniform set of mappings, or knowledge functions. Assessment of parallel science opportunities is achieved via composition of the knowledge functions in a prescribed manner. The knowledge acquisition and explanation facilities of the system are presented. The methodology is applicable to many other multiple-criteria assessment problems.
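The composition idea can be sketched as follows; the knowledge functions, field names, and weights below are hypothetical illustrations, not POMS's actual representation:

```python
# Hypothetical sketch of the POMS idea: uniform "knowledge functions" map a
# candidate observation pair to [0, 1], and are composed into one ranking
# score. A zero from any function acts as a hard constraint.
def make_scorer(knowledge_fns, weights):
    def score(pair):
        vals = [fn(pair) for fn in knowledge_fns]
        if any(v == 0.0 for v in vals):      # hard constraint violated
            return 0.0
        total = sum(weights)
        return sum(w * v for w, v in zip(weights, vals)) / total
    return score

# Toy knowledge functions (assumed, for illustration only).
def compatible(pair):
    return 1.0 if pair["instruments_disjoint"] else 0.0

def pointing(pair):
    return max(0.0, 1.0 - pair["pointing_offset_deg"] / 0.1)

scorer = make_scorer([compatible, pointing], weights=[2.0, 1.0])
```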

  12. Production Maintenance Infrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jason Gabler, David Skinner

    2005-11-01

PMI is an XML framework for formulating tests of software and software environments that operate in a relatively push-button manner, i.e., can be automated, and that provide results that are readily consumable/publishable via RSS. Insofar as possible, the tests are carried out in a manner congruent with real usage. PMI drives shell scripts via a Perl program that is in charge of timing, validating each test, and controlling the flow through sets of tests. Testing in PMI is built up hierarchically. A suite of tests may start by testing basic functionalities (file system is writable, compiler is found and functions, shell environment behaves as expected, etc.) and work up to larger, more complicated activities (execution of parallel code, file transfers, etc.). At each step in this hierarchy, a failure leads to generation of a text message or RSS item that can be tagged as to who should be notified of the failure. PMI has been directed at two functionalities: (1) regular and automated testing of multi-user environments, and (2) version-wise testing of new software releases prior to their deployment in a production mode.
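The hierarchical short-circuit flow described above can be sketched as follows; the structure and names are illustrative, not PMI's actual XML schema or Perl driver:

```python
# Illustrative sketch of PMI-style hierarchical testing: cheap basic checks
# run first, and a failure short-circuits the costlier tests while
# producing a tagged notification (hypothetical addresses).
def run_suite(tests):
    """tests: list of (name, check_callable, notify_addr) tuples.
    Returns (passed, notices)."""
    notices = []
    for name, check, notify in tests:
        try:
            ok = bool(check())
        except Exception:
            ok = False
        if not ok:
            notices.append(f"FAIL {name} -> notify {notify}")
            return False, notices    # later, costlier tests are skipped
    return True, notices

suite = [
    ("filesystem-writable", lambda: True, "admin@example.org"),
    ("compiler-found", lambda: False, "tools@example.org"),
    ("parallel-job-runs", lambda: True, "hpc@example.org"),
]
passed, notices = run_suite(suite)   # fails at "compiler-found"
```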

  13. Scalability of a Low-Cost Multi-Teraflop Linux Cluster for High-End Classical Atomistic and Quantum Mechanical Simulations

    NASA Technical Reports Server (NTRS)

    Kikuchi, Hideaki; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya; Shimojo, Fuyuki; Saini, Subhash

    2003-01-01

Scalability of a low-cost, Intel Xeon-based, multi-Teraflop Linux cluster is tested for two high-end scientific applications: classical atomistic simulation based on the molecular dynamics method and quantum mechanical calculation based on density functional theory. These scalable parallel applications use space-time multiresolution algorithms and feature computational-space decomposition, wavelet-based adaptive load balancing, and space-filling-curve-based data compression for scalable I/O. Comparative performance tests are performed on a 1,024-processor Linux cluster and a conventional higher-end parallel supercomputer, the 1,184-processor IBM SP4. The results show that the performance of the Linux cluster is comparable to that of the SP4. We also study various effects, such as the sharing of memory and L2 cache among processors, on the performance.

  14. Identification of rare X-linked neuroligin variants by massively parallel sequencing in males with autism spectrum disorder.

    PubMed

    Steinberg, Karyn Meltz; Ramachandran, Dhanya; Patel, Viren C; Shetty, Amol C; Cutler, David J; Zwick, Michael E

    2012-09-28

Autism spectrum disorder (ASD) is highly heritable, but the genetic risk factors for it remain largely unknown. Although structural variants with large effect sizes may explain up to 15% of ASD, genome-wide association studies have failed to uncover common single nucleotide variants with large effects on phenotype. The focus within ASD genetics is now shifting to the examination of rare sequence variants of modest effect, which is most often achieved via exome selection and sequencing. This strategy has indeed identified some rare candidate variants; however, the approach does not capture the full spectrum of genetic variation that might contribute to the phenotype. We surveyed two loci with known rare variants that contribute to ASD, the X-linked neuroligin genes, by performing massively parallel Illumina sequencing of the coding and noncoding regions of these genes in males from families with multiplex autism. We annotated all variant sites and functionally tested a subset to identify other rare mutations contributing to ASD susceptibility. We found seven rare variants at evolutionarily conserved sites in our study population. Functional analyses of the three 3' UTR variants did not show statistically significant effects on the expression of NLGN3 and NLGN4X. In addition, we identified two NLGN3 intronic variants located within conserved transcription factor binding sites that could potentially affect gene regulation. These data demonstrate the power of massively parallel, targeted sequencing studies of affected individuals for identifying rare, potentially disease-contributing variation. However, they also point out the challenges and limitations of current methods of direct functional testing of rare variants and the difficulties of identifying alleles with modest effects.

  16. Are there reliable changes in memory and executive functions after cognitive behavioural therapy in patients with obsessive-compulsive disorder?

    PubMed

    Vandborg, Sanne Kjær; Hartmann, Tue Borst; Bennedsen, Birgit Egedal; Pedersen, Anders Degn; Thomsen, Per Hove

    2015-01-01

    Patients with obsessive-compulsive disorder (OCD) have impaired memory and executive functions, but it is unclear whether these functions improve after cognitive behavioural therapy (CBT) of OCD symptoms. The primary aim of this study was to investigate whether memory and executive functions change after CBT in patients with OCD. We assessed 39 patients with OCD before and after CBT with neuropsychological tests of memory and executive functions. To correct for practice effects, 39 healthy controls (HCs) were assessed at two parallel time intervals with the neuropsychological tests. There were no changes in memory and executive functions after CBT in patients with OCD when results were corrected for practice effects. Patients performed worse on a test of visuospatial memory and organisational skills (Rey complex figure test [RCFT]) compared to HCs both before and after CBT (ps = .002-.036). The finding of persistent poor RCFT performances indicates that patients with OCD have impaired visuospatial memory and organisational skills that may be trait-related rather than state-dependent. These impairments may need to be considered in treatment. Our findings underline the importance of correcting for practice effects when investigating changes in cognitive functions.

  17. Series and parallel arc-fault circuit interrupter tests.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Jay Dean; Fresquez, Armando J.; Gudgel, Bob

    2013-07-01

While the 2011 National Electrical Code® (NEC) only requires series arc-fault protection, some arc-fault circuit interrupter (AFCI) manufacturers are designing products to detect and mitigate both series and parallel arc-faults. Sandia National Laboratories (SNL) has extensively investigated the electrical differences of series and parallel arc-faults and has offered possible classification and mitigation solutions. As part of this effort, Sandia National Laboratories has collaborated with MidNite Solar to create and test a 24-string combiner box with an AFCI which detects, differentiates, and de-energizes series and parallel arc-faults. In the case of the MidNite AFCI prototype, series arc-faults are mitigated by opening the PV strings, whereas parallel arc-faults are mitigated by shorting the array. A range of different experimental series and parallel arc-fault tests with the MidNite combiner box were performed at the Distributed Energy Technologies Laboratory (DETL) at SNL in Albuquerque, NM. In all the tests, the prototype de-energized the arc-faults in the time period required by the arc-fault circuit interrupter testing standard, UL 1699B. The experimental tests confirm that series and parallel arc-faults can be successfully mitigated with a combiner box-integrated solution.

  18. Statistical power as a function of Cronbach alpha of instrument questionnaire items.

    PubMed

    Heo, Moonseong; Kim, Namhee; Faith, Myles S

    2015-10-14

In countless clinical trials, outcome measurement relies on instrument questionnaire items, which often suffer from measurement error problems that in turn affect the statistical power of study designs. The Cronbach alpha, or coefficient alpha, here denoted by C(α), can be used as a measure of internal consistency of parallel instrument items that are developed to measure a target unidimensional outcome construct. The scale score for the target construct is often represented by the sum of the item scores. However, power functions based on C(α) have been lacking for various study designs. We formulate a statistical model for parallel items to derive power functions as a function of C(α) under several study designs. To this end, we assume fixed true-score variance, as opposed to the usual fixed total variance. That assumption is critical and practically relevant: it implies that smaller measurement errors are associated with higher inter-item correlations, and thus that greater C(α) is associated with greater statistical power. We compare the derived theoretical statistical power with empirical power obtained through Monte Carlo simulations for the following comparisons: one-sample comparison of pre- and post-treatment mean differences, two-sample comparison of pre-post mean differences between groups, and two-sample comparison of mean differences between groups. It is shown that C(α) is the same as the test-retest correlation of the scale scores of parallel items, which enables testing the significance of C(α). Closed-form power functions and sample size determination formulas are derived in terms of C(α) for all of the aforementioned comparisons. Power functions are shown to be increasing in C(α), regardless of the comparison of interest. The derived power functions are well validated by simulation studies showing that the magnitudes of theoretical power are virtually identical to those of the empirical power. Regardless of research design or setting, in order to increase statistical power, the development and use of instruments with greater C(α), or equivalently with greater inter-item correlations, is crucial for trials that intend to use questionnaire items to measure research outcomes. Further development of the power functions for binary or ordinal item scores and under more general item correlation structures reflecting more real-world situations would be a valuable future study.
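As a concrete illustration of the quantity these power functions are expressed in, a minimal computation of C(α) from item-by-subject scores (the standard coefficient-alpha formula, not the authors' code; toy data):

```python
# Sketch: Cronbach's alpha C(alpha) = (k/(k-1)) * (1 - sum(item variances)
# / variance of scale scores), for k items measured on n subjects.
def cronbach_alpha(items):
    """items: list of k lists, one per item, each with n subject scores."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_var_sum = sum(var(it) for it in items)
    # Scale score per subject = sum of that subject's item scores.
    totals = [sum(it[j] for it in items) for j in range(n)]
    return (k / (k - 1)) * (1 - item_var_sum / var(totals))
```

Perfectly parallel (identical) items give alpha = 1; weaker inter-item correlations pull alpha down, and with it, per the paper, statistical power.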

  19. The Effectiveness of Mindfulness Training on Behavioral Problems and Attentional Functioning in Adolescents with ADHD

    ERIC Educational Resources Information Center

    van de Weijer-Bergsma, Eva; Formsma, Anne R.; de Bruin, Esther I.; Bogels, Susan M.

    2012-01-01

    The effectiveness of an 8-week mindfulness training for adolescents aged 11-15 years with ADHD and parallel Mindful Parenting training for their parents was evaluated, using questionnaires as well as computerized attention tests. Adolescents (N = 10), their parents (N = 19) and tutors (N = 7) completed measurements before, immediately after, 8…

  20. Strength, Deformation and Friction of in situ Rock

    DTIC Science & Technology

    1974-12-01

(Partial list of figures:) ... Kayenta sandstone, Mixed Company site, Colorado. 21. Strength as a function of density for specimens cored perpendicular and parallel to bedding. ... saturation. 24. Photomicrograph of Kayenta sandstone (x30). 25. Stress difference as a function of density for triaxial tests up to P = 4.0 ... specimen size on strength for Kayenta sandstone, Mixed Company site, Colorado.

  1. LAPR: An experimental aircraft pushbroom scanner

    NASA Technical Reports Server (NTRS)

    Wharton, S. W.; Irons, J. I.; Heugel, F.

    1980-01-01

A three-band Linear Array Pushbroom Radiometer (LAPR) was built and flown on an experimental basis by NASA at the Goddard Space Flight Center. The functional characteristics of the instrument and the methods used to preprocess the data, including radiometric correction, are described. The radiometric sensitivity of the instrument was tested and compared to that of the Thematic Mapper and the Multispectral Scanner. The radiometric correction procedure was evaluated quantitatively, using laboratory testing, and qualitatively, via visual examination of the LAPR test flight imagery. Although effective radiometric correction could not yet be demonstrated via laboratory testing, radiometric distortion did not preclude the visual interpretation or parallelepiped classification of the test imagery.

  2. Comparison of adult age differences in verbal and visuo-spatial memory: the importance of 'pure', parallel and validated measures.

    PubMed

    Kemps, Eva; Newson, Rachel

    2006-04-01

    The study compared age-related decrements in verbal and visuo-spatial memory across a broad elderly adult age range. Twenty-four young (18-25 years), 24 young-old (65-74 years), 24 middle-old (75-84 years) and 24 old-old (85-93 years) adults completed parallel recall and recognition measures of verbal and visuo-spatial memory from the Doors and People Test (Baddeley, Emslie & Nimmo-Smith, 1994). These constituted 'pure' and validated indices of either verbal or visuo-spatial memory. Verbal and visuo-spatial memory declined similarly with age, with a steeper decline in recall than recognition. Unlike recognition memory, recall performance also showed a heightened decline after the age of 85. Age-associated memory loss in both modalities was largely due to working memory and executive function. Processing speed and sensory functioning (vision, hearing) made minor contributions to memory performance and age differences in it. Together, these findings demonstrate common, rather than differential, age-related effects on verbal and visuo-spatial memory. They also emphasize the importance of using 'pure', parallel and validated measures of verbal and visuo-spatial memory in memory ageing research.

  3. Components of visual search in childhood-onset schizophrenia and attention-deficit/hyperactivity disorder.

    PubMed

    Karatekin, C; Asarnow, R F

    1998-10-01

This study tested the hypotheses that visual search impairments in schizophrenia are due to a delay in the initiation of search or a slow rate of serial search. We determined the specificity of these impairments by comparing children with schizophrenia to children with attention-deficit/hyperactivity disorder (ADHD) and age-matched normal children. The hypotheses were tested within the framework of feature integration theory by administering tasks tapping parallel and serial search. Search rate was estimated from the slope of the search functions, and the duration of the initial stages of search from the time to make the first saccade on each trial. As expected, manual response times were elevated in both clinical groups. Contrary to expectation, ADHD, but not schizophrenic, children were delayed in the initiation of serial search. Finally, both groups showed a clear dissociation between intact parallel search rates and slowed serial search rates.

  4. Reference datasets for bioequivalence trials in a two-group parallel design.

    PubMed

    Fuglsang, Anders; Schütz, Helmut; Labes, Detlew

    2015-03-01

    In order to help companies qualify and validate the software used to evaluate bioequivalence trials with two parallel treatment groups, this work aims to define datasets with known results. This paper puts a total 11 datasets into the public domain along with proposed consensus obtained via evaluations from six different software packages (R, SAS, WinNonlin, OpenOffice Calc, Kinetica, EquivTest). Insofar as possible, datasets were evaluated with and without the assumption of equal variances for the construction of a 90% confidence interval. Not all software packages provide functionality for the assumption of unequal variances (EquivTest, Kinetica), and not all packages can handle datasets with more than 1000 subjects per group (WinNonlin). Where results could be obtained across all packages, one showed questionable results when datasets contained unequal group sizes (Kinetica). A proposal is made for the results that should be used as validation targets.
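The two evaluation modes compared across packages, with and without the equal-variance assumption, can be sketched for the log-scale 90% confidence interval of the test/reference ratio. This is an illustrative computation, not the code of any cited package; the t quantile is supplied by the caller (e.g., from scipy.stats.t.ppf(0.95, df)) to keep the sketch dependency-free:

```python
# Sketch: 90% CI for the test/reference geometric mean ratio in a
# two-group parallel bioequivalence design, on log-transformed PK data.
import math

def mean_var(xs):
    """Sample mean and (n-1)-denominator variance."""
    n = len(xs)
    m = sum(xs) / n
    return m, sum((x - m) ** 2 for x in xs) / (n - 1), n

def ci_ratio(test_log, ref_log, t_crit, equal_var=False):
    mt, vt, nt = mean_var(test_log)
    mr, vr, nr = mean_var(ref_log)
    diff = mt - mr
    if equal_var:   # pooled variance, df = nt + nr - 2
        sp2 = ((nt - 1) * vt + (nr - 1) * vr) / (nt + nr - 2)
        se = math.sqrt(sp2 * (1 / nt + 1 / nr))
    else:           # Welch: no equal-variance assumption
        se = math.sqrt(vt / nt + vr / nr)
    # Back-transform the CI on the log scale to a ratio.
    return math.exp(diff - t_crit * se), math.exp(diff + t_crit * se)
```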

  5. Algorithms and programming tools for image processing on the MPP, part 2

    NASA Technical Reports Server (NTRS)

    Reeves, Anthony P.

    1986-01-01

    A number of algorithms were developed for image warping and pyramid image filtering. Techniques were investigated for the parallel processing of a large number of independent irregular shaped regions on the MPP. In addition some utilities for dealing with very long vectors and for sorting were developed. Documentation pages for the algorithms which are available for distribution are given. The performance of the MPP for a number of basic data manipulations was determined. From these results it is possible to predict the efficiency of the MPP for a number of algorithms and applications. The Parallel Pascal development system, which is a portable programming environment for the MPP, was improved and better documentation including a tutorial was written. This environment allows programs for the MPP to be developed on any conventional computer system; it consists of a set of system programs and a library of general purpose Parallel Pascal functions. The algorithms were tested on the MPP and a presentation on the development system was made to the MPP users group. The UNIX version of the Parallel Pascal System was distributed to a number of new sites.

  6. In situ patterned micro 3D liver constructs for parallel toxicology testing in a fluidic device

    PubMed Central

    Skardal, Aleksander; Devarasetty, Mahesh; Soker, Shay; Hall, Adam R

    2017-01-01

    3D tissue models are increasingly being implemented for drug and toxicology testing. However, the creation of tissue-engineered constructs for this purpose often relies on complex biofabrication techniques that are time consuming, expensive, and difficult to scale up. Here, we describe a strategy for realizing multiple tissue constructs in a parallel microfluidic platform using an approach that is simple and can be easily scaled for high-throughput formats. Liver cells mixed with a UV-crosslinkable hydrogel solution are introduced into parallel channels of a sealed microfluidic device and photopatterned to produce stable tissue constructs in situ. The remaining uncrosslinked material is washed away, leaving the structures in place. By using a hydrogel that specifically mimics the properties of the natural extracellular matrix, we closely emulate native tissue, resulting in constructs that remain stable and functional in the device during a 7-day culture time course under recirculating media flow. As proof of principle for toxicology analysis, we expose the constructs to ethyl alcohol (0–500 mM) and show that the cell viability and the secretion of urea and albumin decrease with increasing alcohol exposure, while markers for cell damage increase. PMID:26355538

  8. Parallel digital forensics infrastructure.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liebrock, Lorie M.; Duggan, David Patrick

    2009-10-01

This report documents the architecture and implementation of a Parallel Digital Forensics (PDF) infrastructure. This infrastructure is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital forensics has become extremely difficult with data sets of one terabyte and larger. The only way to overcome the processing time of these large sets is to identify and develop new parallel algorithms for performing the analysis. To support algorithm research, a flexible base infrastructure is required. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexico Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets only expected to increase significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics.
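The report itself contains no code, but the canonical data-parallel pattern it motivates, splitting a large evidence image into chunks analyzed concurrently, can be sketched as follows (a thread pool stands in for the cluster-scale parallelism the infrastructure targets):

```python
# Illustrative sketch (not the report's implementation): hash fixed-size
# chunks of an evidence buffer concurrently, preserving chunk order.
import hashlib
from concurrent.futures import ThreadPoolExecutor

def hash_chunk(chunk: bytes) -> str:
    return hashlib.sha256(chunk).hexdigest()

def parallel_hashes(data: bytes, chunk_size: int, workers: int = 4):
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(hash_chunk, chunks))   # order matches chunks
```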

  9. Rocket measurement of auroral partial parallel distribution functions

    NASA Astrophysics Data System (ADS)

    Lin, C.-A.

    1980-01-01

The auroral partial parallel distribution functions are obtained by using the observed energy spectra of electrons. The experiment package was launched by a Nike-Tomahawk rocket from Poker Flat, Alaska, over a bright auroral band and covered altitudes up to 180 km. Calculated partial distribution functions are presented with emphasis on their slopes, and the implications of the slopes are discussed. It should be pointed out that the slope of the partial parallel distribution function obtained from one energy spectrum will be changed by superposing another energy spectrum on it.

  10. Advances in molecular quantum chemistry contained in the Q-Chem 4 program package

    NASA Astrophysics Data System (ADS)

    Shao, Yihan; Gan, Zhengting; Epifanovsky, Evgeny; Gilbert, Andrew T. B.; Wormit, Michael; Kussmann, Joerg; Lange, Adrian W.; Behn, Andrew; Deng, Jia; Feng, Xintian; Ghosh, Debashree; Goldey, Matthew; Horn, Paul R.; Jacobson, Leif D.; Kaliman, Ilya; Khaliullin, Rustam Z.; Kuś, Tomasz; Landau, Arie; Liu, Jie; Proynov, Emil I.; Rhee, Young Min; Richard, Ryan M.; Rohrdanz, Mary A.; Steele, Ryan P.; Sundstrom, Eric J.; Woodcock, H. Lee, III; Zimmerman, Paul M.; Zuev, Dmitry; Albrecht, Ben; Alguire, Ethan; Austin, Brian; Beran, Gregory J. O.; Bernard, Yves A.; Berquist, Eric; Brandhorst, Kai; Bravaya, Ksenia B.; Brown, Shawn T.; Casanova, David; Chang, Chun-Min; Chen, Yunqing; Chien, Siu Hung; Closser, Kristina D.; Crittenden, Deborah L.; Diedenhofen, Michael; DiStasio, Robert A., Jr.; Do, Hainam; Dutoi, Anthony D.; Edgar, Richard G.; Fatehi, Shervin; Fusti-Molnar, Laszlo; Ghysels, An; Golubeva-Zadorozhnaya, Anna; Gomes, Joseph; Hanson-Heine, Magnus W. D.; Harbach, Philipp H. P.; Hauser, Andreas W.; Hohenstein, Edward G.; Holden, Zachary C.; Jagau, Thomas-C.; Ji, Hyunjun; Kaduk, Benjamin; Khistyaev, Kirill; Kim, Jaehoon; Kim, Jihan; King, Rollin A.; Klunzinger, Phil; Kosenkov, Dmytro; Kowalczyk, Tim; Krauter, Caroline M.; Lao, Ka Un; Laurent, Adèle D.; Lawler, Keith V.; Levchenko, Sergey V.; Lin, Ching Yeh; Liu, Fenglai; Livshits, Ester; Lochan, Rohini C.; Luenser, Arne; Manohar, Prashant; Manzer, Samuel F.; Mao, Shan-Ping; Mardirossian, Narbe; Marenich, Aleksandr V.; Maurer, Simon A.; Mayhall, Nicholas J.; Neuscamman, Eric; Oana, C. Melania; Olivares-Amaya, Roberto; O'Neill, Darragh P.; Parkhill, John A.; Perrine, Trilisa M.; Peverati, Roberto; Prociuk, Alexander; Rehn, Dirk R.; Rosta, Edina; Russ, Nicholas J.; Sharada, Shaama M.; Sharma, Sandeep; Small, David W.; Sodt, Alexander; Stein, Tamar; Stück, David; Su, Yu-Chuan; Thom, Alex J. 
W.; Tsuchimochi, Takashi; Vanovschi, Vitalii; Vogt, Leslie; Vydrov, Oleg; Wang, Tao; Watson, Mark A.; Wenzel, Jan; White, Alec; Williams, Christopher F.; Yang, Jun; Yeganeh, Sina; Yost, Shane R.; You, Zhi-Qiang; Zhang, Igor Ying; Zhang, Xing; Zhao, Yan; Brooks, Bernard R.; Chan, Garnet K. L.; Chipman, Daniel M.; Cramer, Christopher J.; Goddard, William A., III; Gordon, Mark S.; Hehre, Warren J.; Klamt, Andreas; Schaefer, Henry F., III; Schmidt, Michael W.; Sherrill, C. David; Truhlar, Donald G.; Warshel, Arieh; Xu, Xin; Aspuru-Guzik, Alán; Baer, Roi; Bell, Alexis T.; Besley, Nicholas A.; Chai, Jeng-Da; Dreuw, Andreas; Dunietz, Barry D.; Furlani, Thomas R.; Gwaltney, Steven R.; Hsu, Chao-Ping; Jung, Yousung; Kong, Jing; Lambrecht, Daniel S.; Liang, WanZhen; Ochsenfeld, Christian; Rassolov, Vitaly A.; Slipchenko, Lyudmila V.; Subotnik, Joseph E.; Van Voorhis, Troy; Herbert, John M.; Krylov, Anna I.; Gill, Peter M. W.; Head-Gordon, Martin

    2015-01-01

    A summary of the technical advances that are incorporated in the fourth major release of the Q-Chem quantum chemistry program is provided, covering approximately the last seven years. These include developments in density functional theory methods and algorithms, nuclear magnetic resonance (NMR) property evaluation, coupled cluster and perturbation theories, methods for electronically excited and open-shell species, tools for treating extended environments, algorithms for walking on potential surfaces, analysis tools, energy and electron transfer modelling, parallel computing capabilities, and graphical user interfaces. In addition, a selection of example case studies that illustrate these capabilities is given. These include extensive benchmarks of the comparative accuracy of modern density functionals for bonded and non-bonded interactions, tests of attenuated second order Møller-Plesset (MP2) methods for intermolecular interactions, a variety of parallel performance benchmarks, and tests of the accuracy of implicit solvation models. Some specific chemical examples include calculations on the strongly correlated Cr2 dimer, exploring zeolite-catalysed ethane dehydrogenation, energy decomposition analysis of a charged ter-molecular complex arising from glycerol photoionisation, and natural transition orbitals for a Frenkel exciton state in a nine-unit model of a self-assembling nanotube.

  11. Stress and decision making: neural correlates of the interaction between stress, executive functions, and decision making under risk.

    PubMed

    Gathmann, Bettina; Schulte, Frank P; Maderwald, Stefan; Pawlikowski, Mirko; Starcke, Katrin; Schäfer, Lena C; Schöler, Tobias; Wolf, Oliver T; Brand, Matthias

    2014-03-01

Stress and additional load on the executive system, produced by a parallel working memory task, impair decision making under risk. However, the combination of stress and a parallel task seems to protect decision-making performance [e.g., as operationalized by the Game of Dice Task (GDT)] from declining, probably via a switch from serial to parallel processing. The question remains how the brain manages such demanding decision-making situations. The current study used a 7-tesla magnetic resonance imaging (MRI) system to investigate the underlying neural correlates of the interaction between stress (induced by the Trier Social Stress Test), risky decision making (GDT), and a parallel executive task (2-back task) in order to better understand these behavioral findings. On a behavioral level, stressed participants did not show significant differences in task performance. Interestingly, when compared with the control group, the stress group (SG) showed a greater increase in neural activation in the anterior prefrontal cortex when performing the 2-back task simultaneously with the GDT than when performing each task alone. This brain area is associated with parallel processing. Thus, the results may suggest that in stressful dual-tasking situations, where a decision has to be made while working memory is demanded in parallel, a stronger activation of a brain area associated with parallel processing takes place. The findings are in line with the idea that stress seems to trigger a switch from serial to parallel processing in demanding dual-tasking situations.

  12. Leukocytosis and natural killer cell function parallel neurobehavioral fatigue induced by 64 hours of sleep deprivation.

    PubMed

    Dinges, D F; Douglas, S D; Zaugg, L; Campbell, D E; McMann, J M; Whitehouse, W G; Orne, E C; Kapoor, S C; Icaza, E; Orne, M T

    1994-05-01

    The hypothesis that sleep deprivation depresses immune function was tested in 20 adults, selected on the basis of their normal blood chemistry, monitored in a laboratory for 7 d, and kept awake for 64 h. At 2200 h each day measurements were taken of total leukocytes (WBC), monocytes, granulocytes, lymphocytes, eosinophils, erythrocytes (RBC), B and T lymphocyte subsets, activated T cells, and natural killer (NK) subpopulations (CD56/CD8 dual-positive cells, CD16-positive cells, CD57-positive cells). Functional tests included NK cytotoxicity, lymphocyte stimulation with mitogens, and DNA analysis of cell cycle. Sleep loss was associated with leukocytosis and increased NK cell activity. At the maximum sleep deprivation, increases were observed in counts of WBC, granulocytes, monocytes, NK activity, and the proportion of lymphocytes in the S phase of the cell cycle. Changes in monocyte counts correlated with changes in other immune parameters. Counts of CD4, CD16, CD56, and CD57 lymphocytes declined after one night without sleep, whereas CD56 and CD57 counts increased after two nights. No changes were observed in other lymphocyte counts, in proliferative responses to mitogens, or in plasma levels of cortisol or adrenocorticotropin hormone. The physiologic leukocytosis and NK activity increases during deprivation were eliminated by recovery sleep in a manner parallel to neurobehavioral function, suggesting that the immune alterations may be associated with biological pressure for sleep.

  14. Considering Horn's Parallel Analysis from a Random Matrix Theory Point of View.

    PubMed

    Saccenti, Edoardo; Timmerman, Marieke E

    2017-03-01

Horn's parallel analysis is a widely used method for assessing the number of principal components and common factors. We discuss the theoretical foundations of parallel analysis for principal components based on a covariance matrix by making use of arguments from random matrix theory. In particular, we show that (i) for the first component, parallel analysis is an inferential method equivalent to the Tracy-Widom test, (ii) its use to test higher-order eigenvalues is equivalent to the use of the joint distribution of the eigenvalues, and thus should be discouraged, and (iii) a formal test for higher-order components can be obtained based on a Tracy-Widom approximation. We illustrate the performance of the two testing procedures using simulated data generated under both a principal component model and a common factor model. For the principal component model, the Tracy-Widom test performs consistently in all conditions, while parallel analysis shows unpredictable behavior for higher-order components. For the common factor model, including major and minor factors, both procedures are heuristic approaches with variable performance. We conclude that the Tracy-Widom procedure is preferred over parallel analysis for statistically testing the number of principal components based on a covariance matrix.
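The retention rule at the core of Horn's parallel analysis can be sketched as follows: compare the observed eigenvalues against a chosen quantile of eigenvalues obtained from random uncorrelated data of the same dimensions. This is an illustrative NumPy implementation, not the authors' code, and the function name and defaults are assumptions.

```python
import numpy as np

def parallel_analysis(X, n_sim=200, quantile=95, seed=0):
    """Horn's parallel analysis: count components whose sample
    eigenvalues exceed the chosen quantile of eigenvalues from
    random Gaussian data of the same dimensions."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    # Eigenvalues of the observed correlation matrix, descending.
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    # Reference eigenvalues from uncorrelated data of the same shape.
    sims = np.empty((n_sim, p))
    for i in range(n_sim):
        R = rng.standard_normal((n, p))
        sims[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(R, rowvar=False)))[::-1]
    thresholds = np.percentile(sims, quantile, axis=0)
    return int(np.sum(obs > thresholds))

# Two blocks of strongly correlated variables: expect two components retained.
rng = np.random.default_rng(1)
f1, f2 = rng.standard_normal((2, 500))
X = np.column_stack([f1 + 0.3 * rng.standard_normal(500) for _ in range(3)] +
                    [f2 + 0.3 * rng.standard_normal(500) for _ in range(3)])
print(parallel_analysis(X))  # -> 2
```

As the abstract notes, the quantile comparison is statistically well grounded only for the first eigenvalue; applying it mechanically to higher-order components is where the unpredictable behavior arises.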

  15. Processing data communications events by awakening threads in parallel active messaging interface of a parallel computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.

Processing data communications events in a parallel active messaging interface (`PAMI`) of a parallel computer that includes compute nodes that execute a parallel application, with the PAMI including data communications endpoints, and the endpoints are coupled for data communications through the PAMI and through other data communications resources, including determining by an advance function that there are no actionable data communications events pending for its context, placing by the advance function its thread of execution into a wait state, waiting for a subsequent data communications event for the context; responsive to occurrence of a subsequent data communications event for the context, awakening by the thread from the wait state; and processing by the advance function the subsequent data communications event now pending for the context.
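The wait/awaken pattern described in this record (an advance function that finds no pending events, puts its thread into a wait state, and is awakened when an event arrives) can be sketched with a condition variable. All names below are illustrative, not the actual PAMI API.

```python
import threading
from collections import deque

class Context:
    """Toy sketch of the advance-function pattern: sleep on a
    condition variable when no events are pending, wake on post."""
    def __init__(self):
        self._events = deque()
        self._cond = threading.Condition()

    def post(self, event):
        with self._cond:
            self._events.append(event)
            self._cond.notify()            # awaken a waiting advance thread

    def advance(self):
        with self._cond:
            while not self._events:        # no actionable events pending: wait
                self._cond.wait()
            return self._events.popleft()  # process the event now pending

ctx = Context()
results = []
t = threading.Thread(target=lambda: results.append(ctx.advance()))
t.start()
ctx.post("recv-complete")
t.join()
print(results)  # -> ['recv-complete']
```

The point of the patented design is that the advance thread consumes no CPU while blocked, yet reacts promptly when a communication event arrives for its context.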

  16. Modeling Sound Propagation Through Non-Axisymmetric Jets

    NASA Technical Reports Server (NTRS)

    Leib, Stewart J.

    2014-01-01

    A method for computing the far-field adjoint Green's function of the generalized acoustic analogy equations under a locally parallel mean flow approximation is presented. The method is based on expanding the mean-flow-dependent coefficients in the governing equation and the scalar Green's function in truncated Fourier series in the azimuthal direction and a finite difference approximation in the radial direction in circular cylindrical coordinates. The combined spectral/finite difference method yields a highly banded system of algebraic equations that can be efficiently solved using a standard sparse system solver. The method is applied to test cases, with mean flow specified by analytical functions, corresponding to two noise reduction concepts of current interest: the offset jet and the fluid shield. Sample results for the Green's function are given for these two test cases and recommendations made as to the use of the method as part of a RANS-based jet noise prediction code.

  17. Decoupled form and function in disparate herbivorous dinosaur clades

    NASA Astrophysics Data System (ADS)

    Lautenschlager, Stephan; Brassey, Charlotte A.; Button, David J.; Barrett, Paul M.

    2016-05-01

    Convergent evolution, the acquisition of morphologically similar traits in unrelated taxa due to similar functional demands or environmental factors, is a common phenomenon in the animal kingdom. Consequently, the occurrence of similar form is used routinely to address fundamental questions in morphofunctional research and to infer function in fossils. However, such qualitative assessments can be misleading and it is essential to test form/function relationships quantitatively. The parallel occurrence of a suite of morphologically convergent craniodental characteristics in three herbivorous, phylogenetically disparate dinosaur clades (Sauropodomorpha, Ornithischia, Theropoda) provides an ideal test case. A combination of computational biomechanical models (Finite Element Analysis, Multibody Dynamics Analysis) demonstrate that despite a high degree of morphological similarity between representative taxa (Plateosaurus engelhardti, Stegosaurus stenops, Erlikosaurus andrewsi) from these clades, their biomechanical behaviours are notably different and difficult to predict on the basis of form alone. These functional differences likely reflect dietary specialisations, demonstrating the value of quantitative biomechanical approaches when evaluating form/function relationships in extinct taxa.

  18. Decoupled form and function in disparate herbivorous dinosaur clades.

    PubMed

    Lautenschlager, Stephan; Brassey, Charlotte A; Button, David J; Barrett, Paul M

    2016-05-20

    Convergent evolution, the acquisition of morphologically similar traits in unrelated taxa due to similar functional demands or environmental factors, is a common phenomenon in the animal kingdom. Consequently, the occurrence of similar form is used routinely to address fundamental questions in morphofunctional research and to infer function in fossils. However, such qualitative assessments can be misleading and it is essential to test form/function relationships quantitatively. The parallel occurrence of a suite of morphologically convergent craniodental characteristics in three herbivorous, phylogenetically disparate dinosaur clades (Sauropodomorpha, Ornithischia, Theropoda) provides an ideal test case. A combination of computational biomechanical models (Finite Element Analysis, Multibody Dynamics Analysis) demonstrate that despite a high degree of morphological similarity between representative taxa (Plateosaurus engelhardti, Stegosaurus stenops, Erlikosaurus andrewsi) from these clades, their biomechanical behaviours are notably different and difficult to predict on the basis of form alone. These functional differences likely reflect dietary specialisations, demonstrating the value of quantitative biomechanical approaches when evaluating form/function relationships in extinct taxa.

  19. Simulation of partially coherent light propagation using parallel computing devices

    NASA Astrophysics Data System (ADS)

    Magalhães, Tiago C.; Rebordão, José M.

    2017-08-01

Light acquires or loses coherence, and coherence is one of the few optical observables. Spectra can be derived from coherence functions, and understanding any interferometric experiment also relies upon coherence functions. Beyond the two limiting cases (full coherence or incoherence), the coherence of light is always partial and it changes with propagation. We have implemented a code to compute the propagation of partially coherent light from the source plane to the observation plane using parallel computing devices (PCDs). In this paper, we restrict the propagation to free space only. To this end, we used the Open Computing Language (OpenCL) and the open-source toolkit PyOpenCL, which gives access to OpenCL parallel computation through Python. To test our code, we chose two coherence source models: an incoherent source and a Gaussian Schell-model source. In the former case, we considered two different source shapes: circular and rectangular. The results were compared to the theoretical values. Our implemented code allows one to choose between the PyOpenCL implementation and a standard one, i.e. using the CPU only. To test the computation time for each implementation (PyOpenCL and standard), we used several computer systems with different CPUs and GPUs. We used powers of two for the dimensions of the cross-spectral density matrix (e.g. 32^4, 64^4), and a significant speed increase is observed in the PyOpenCL implementation when compared to the standard one. This can be an important tool for studying new source models.

  20. Exploring visuospatial abilities and their contribution to constructional abilities and nonverbal intelligence.

    PubMed

    Trojano, Luigi; Siciliano, Mattia; Cristinzio, Chiara; Grossi, Dario

    2018-01-01

The present study aimed at exploring relationships among the visuospatial tasks included in the Battery for Visuospatial Abilities (BVA), and at assessing the relative contribution of different facets of visuospatial processing to tests tapping constructional abilities and nonverbal abstract reasoning. One hundred forty-four healthy subjects with a normal score on the Mini Mental State Examination completed the BVA plus Raven's Coloured Progressive Matrices and the Constructional Apraxia test. We used Principal Axis Factoring and Parallel Analysis to investigate relationships among the BVA visuospatial tasks, and performed regression analyses to assess the visuospatial contribution to constructional abilities and nonverbal abstract reasoning. Principal Axis Factoring and Parallel Analysis revealed two eigenvalues exceeding 1, accounting for about 60% of the variance. A two-factor model provided the best fit. Factor 1 included subtests exploring "complex" visuospatial skills, whereas Factor 2 included two subtests tapping "simple" visuospatial skills. Regression analyses revealed that both Factor 1 and Factor 2 significantly affected performance on Raven's Coloured Progressive Matrices, whereas only Factor 1 affected performance on the Constructional Apraxia test. Our results support the functional segregation proposed by De Renzi, suggest caution in using a single test to assess the visuospatial domain, and qualify the visuospatial contribution to drawing and nonverbal intelligence tests.

  1. Parallel-vector computation for linear structural analysis and non-linear unconstrained optimization problems

    NASA Technical Reports Server (NTRS)

    Nguyen, D. T.; Al-Nasra, M.; Zhang, Y.; Baddourah, M. A.; Agarwal, T. K.; Storaasli, O. O.; Carmona, E. A.

    1991-01-01

    Several parallel-vector computational improvements to the unconstrained optimization procedure are described which speed up the structural analysis-synthesis process. A fast parallel-vector Choleski-based equation solver, pvsolve, is incorporated into the well-known SAP-4 general-purpose finite-element code. The new code, denoted PV-SAP, is tested for static structural analysis. Initial results on a four processor CRAY 2 show that using pvsolve reduces the equation solution time by a factor of 14-16 over the original SAP-4 code. In addition, parallel-vector procedures for the Golden Block Search technique and the BFGS method are developed and tested for nonlinear unconstrained optimization. A parallel version of an iterative solver and the pvsolve direct solver are incorporated into the BFGS method. Preliminary results on nonlinear unconstrained optimization test problems, using pvsolve in the analysis, show excellent parallel-vector performance indicating that these parallel-vector algorithms can be used in a new generation of finite-element based structural design/analysis-synthesis codes.

  2. A 2D MTF approach to evaluate and guide dynamic imaging developments.

    PubMed

    Chao, Tzu-Cheng; Chung, Hsiao-Wen; Hoge, W Scott; Madore, Bruno

    2010-02-01

    As the number and complexity of partially sampled dynamic imaging methods continue to increase, reliable strategies to evaluate performance may prove most useful. In the present work, an analytical framework to evaluate given reconstruction methods is presented. A perturbation algorithm allows the proposed evaluation scheme to perform robustly without requiring knowledge about the inner workings of the method being evaluated. A main output of the evaluation process consists of a two-dimensional modulation transfer function, an easy-to-interpret visual rendering of a method's ability to capture all combinations of spatial and temporal frequencies. Approaches to evaluate noise properties and artifact content at all spatial and temporal frequencies are also proposed. One fully sampled phantom and three fully sampled cardiac cine datasets were subsampled (R = 4 and 8) and reconstructed with the different methods tested here. A hybrid method, which combines the main advantageous features observed in our assessments, was proposed and tested in a cardiac cine application, with acceleration factors of 3.5 and 6.3 (skip factors of 4 and 8, respectively). This approach combines features from methods such as k-t sensitivity encoding, unaliasing by Fourier encoding the overlaps in the temporal dimension-sensitivity encoding, generalized autocalibrating partially parallel acquisition, sensitivity profiles from an array of coils for encoding and reconstruction in parallel, self, hybrid referencing with unaliasing by Fourier encoding the overlaps in the temporal dimension and generalized autocalibrating partially parallel acquisition, and generalized autocalibrating partially parallel acquisition-enhanced sensitivity maps for sensitivity encoding reconstructions.

  3. Carotid chemoreceptors tune breathing via multipath routing: reticular chain and loop operations supported by parallel spike train correlations.

    PubMed

    Morris, Kendall F; Nuding, Sarah C; Segers, Lauren S; Iceman, Kimberly E; O'Connor, Russell; Dean, Jay B; Ott, Mackenzie M; Alencar, Pierina A; Shuman, Dale; Horton, Kofi-Kermit; Taylor-Clark, Thomas E; Bolser, Donald C; Lindsey, Bruce G

    2018-02-01

    We tested the hypothesis that carotid chemoreceptors tune breathing through parallel circuit paths that target distinct elements of an inspiratory neuron chain in the ventral respiratory column (VRC). Microelectrode arrays were used to monitor neuronal spike trains simultaneously in the VRC, peri-nucleus tractus solitarius (p-NTS)-medial medulla, the dorsal parafacial region of the lateral tegmental field (FTL-pF), and medullary raphe nuclei together with phrenic nerve activity during selective stimulation of carotid chemoreceptors or transient hypoxia in 19 decerebrate, neuromuscularly blocked, and artificially ventilated cats. Of 994 neurons tested, 56% had a significant change in firing rate. A total of 33,422 cell pairs were evaluated for signs of functional interaction; 63% of chemoresponsive neurons were elements of at least one pair with correlational signatures indicative of paucisynaptic relationships. We detected evidence for postinspiratory neuron inhibition of rostral VRC I-Driver (pre-Bötzinger) neurons, an interaction predicted to modulate breathing frequency, and for reciprocal excitation between chemoresponsive p-NTS neurons and more downstream VRC inspiratory neurons for control of breathing depth. Chemoresponsive pericolumnar tonic expiratory neurons, proposed to amplify inspiratory drive by disinhibition, were correlationally linked to afferent and efferent "chains" of chemoresponsive neurons extending to all monitored regions. The chains included coordinated clusters of chemoresponsive FTL-pF neurons with functional links to widespread medullary sites involved in the control of breathing. The results support long-standing concepts on brain stem network architecture and a circuit model for peripheral chemoreceptor modulation of breathing with multiple circuit loops and chains tuned by tegmental field neurons with quasi-periodic discharge patterns. 
NEW & NOTEWORTHY We tested the long-standing hypothesis that carotid chemoreceptors tune the frequency and depth of breathing through parallel circuit operations targeting the ventral respiratory column. Responses to stimulation of the chemoreceptors and identified functional connectivity support differential tuning of inspiratory neuron burst duration and firing rate and a model of brain stem network architecture incorporating tonic expiratory "hub" neurons regulated by convergent neuronal chains and loops through rostral lateral tegmental field neurons with quasi-periodic discharge patterns.

  4. Parallel sites implicate functional convergence of the hearing gene prestin among echolocating mammals.

    PubMed

    Liu, Zhen; Qi, Fei-Yan; Zhou, Xin; Ren, Hai-Qing; Shi, Peng

    2014-09-01

    Echolocation is a sensory system whereby certain mammals navigate and forage using sound waves, usually in environments where visibility is limited. Curiously, echolocation has evolved independently in bats and whales, which occupy entirely different environments. Based on this phenotypic convergence, recent studies identified several echolocation-related genes with parallel sites at the protein sequence level among different echolocating mammals, and among these, prestin seems the most promising. Although previous studies analyzed the evolutionary mechanism of prestin, the functional roles of the parallel sites in the evolution of mammalian echolocation are not clear. By functional assays, we show that a key parameter of prestin function, 1/α, is increased in all echolocating mammals and that the N7T parallel substitution accounted for this functional convergence. Moreover, another parameter, V1/2, was shifted toward the depolarization direction in a toothed whale, the bottlenose dolphin (Tursiops truncatus) and a constant-frequency (CF) bat, the Stoliczka's trident bat (Aselliscus stoliczkanus). The parallel site of I384T between toothed whales and CF bats was responsible for this functional convergence. Furthermore, the two parameters (1/α and V1/2) were correlated with mammalian high-frequency hearing, suggesting that the convergent changes of the prestin function in echolocating mammals may play important roles in mammalian echolocation. To our knowledge, these findings present the functional patterns of echolocation-related genes in echolocating mammals for the first time and rigorously demonstrate adaptive parallel evolution at the protein sequence level, paving the way to insights into the molecular mechanism underlying mammalian echolocation. © The Author 2014. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  5. Social interaction shapes babbling: Testing parallels between birdsong and speech

    NASA Astrophysics Data System (ADS)

    Goldstein, Michael H.; King, Andrew P.; West, Meredith J.

    2003-06-01

    Birdsong is considered a model of human speech development at behavioral and neural levels. Few direct tests of the proposed analogs exist, however. Here we test a mechanism of phonological development in human infants that is based on social shaping, a selective learning process first documented in songbirds. By manipulating mothers' reactions to their 8-month-old infants' vocalizations, we demonstrate that phonological features of babbling are sensitive to nonimitative social stimulation. Contingent, but not noncontingent, maternal behavior facilitates more complex and mature vocal behavior. Changes in vocalizations persist after the manipulation. The data show that human infants use social feedback, facilitating immediate transitions in vocal behavior. Social interaction creates rapid shifts to developmentally more advanced sounds. These transitions mirror the normal development of speech, supporting the predictions of the avian social shaping model. These data provide strong support for a parallel in function between vocal precursors of songbirds and infants. Because imitation is usually considered the mechanism for vocal learning in both taxa, the findings introduce social shaping as a general process underlying the development of speech and song.

  6. Introducing parallelism to histogramming functions for GEM systems

    NASA Astrophysics Data System (ADS)

    Krawczyk, Rafał D.; Czarski, Tomasz; Kolasinski, Piotr; Pozniak, Krzysztof T.; Linczuk, Maciej; Byszuk, Adrian; Chernyshova, Maryna; Juszczyk, Bartlomiej; Kasprowicz, Grzegorz; Wojenski, Andrzej; Zabolotny, Wojciech

    2015-09-01

This article is an assessment of the potential parallelization of histogramming algorithms in a GEM detector system. Histogramming and preprocessing algorithms in MATLAB were analyzed with regard to adding parallelism. A preliminary implementation of parallel strip histogramming resulted in a speedup. An analysis of the algorithms' parallelizability is presented, and an overview of potential hardware and software support for implementing the parallel algorithm is discussed.
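Histogramming parallelizes naturally because partial histograms over disjoint chunks of the data can be computed independently and then summed. The following is a generic sketch of that idea in Python/NumPy, not the MATLAB/GEM implementation discussed in the record; the function name and worker count are assumptions.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def parallel_histogram(samples, bins, n_workers=4):
    """Split the samples into chunks, histogram each chunk
    independently, and sum the partial histograms."""
    chunks = np.array_split(samples, n_workers)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = pool.map(lambda c: np.histogram(c, bins=bins)[0], chunks)
    return sum(partials)

data = np.random.default_rng(0).normal(size=100_000)
bins = np.linspace(-4, 4, 65)
counts = parallel_histogram(data, bins)
# The merged result matches a single-pass histogram exactly.
assert np.array_equal(counts, np.histogram(data, bins=bins)[0])
```

Because the merge step is a plain elementwise sum, the same decomposition maps onto FPGA strips, GPU work-groups, or MATLAB parfor workers.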

  7. Change in the coil distribution of electrodynamic suspension system

    NASA Technical Reports Server (NTRS)

    Tanaka, Hisashi

    1992-01-01

    At the Miyazaki Maglev Test Center, the initial test runs were completed using a system design that required the superconducting coils to be parallel with the ground levitation coils. Recently, the coil distribution was changed to a system such that the two types of coils were perpendicular to each other. Further system changes will lead to the construction of a side wall levitation system. It is hoped that the development will culminate in a system whereby a superconducting coil will maintain all the functions: levitation, propulsion, and guidance.

  8. 4BMS-X Design and Test Activation

    NASA Technical Reports Server (NTRS)

    Peters, Warren T.; Knox, James C.

    2017-01-01

In support of the NASA goals to reduce power, volume, and mass requirements of future CO2 (carbon dioxide) removal systems for exploration missions, a 4BMS (Four Bed Molecular Sieve) test bed was fabricated and activated at the NASA Marshall Space Flight Center. The 4BMS-X (Four Bed Molecular Sieve-Exploration) test bed used components similar in size, spacing, and function to those on the ISS flight CDRA system, but assembled in an open framework. This open framework allows for quick integration of changes to components, beds, and material systems. The test stand is highly instrumented to provide the data necessary to anchor predictive modeling efforts occurring in parallel to testing. The system architecture and test data collected on the initial configurations will be presented.

  9. Global magnetosphere simulations using constrained-transport Hall-MHD with CWENO reconstruction

    NASA Astrophysics Data System (ADS)

    Lin, L.; Germaschewski, K.; Maynard, K. M.; Abbott, S.; Bhattacharjee, A.; Raeder, J.

    2013-12-01

    We present a new CWENO (Centrally-Weighted Essentially Non-Oscillatory) reconstruction based MHD solver for the OpenGGCM global magnetosphere code. The solver was built using libMRC, a library for creating efficient parallel PDE solvers on structured grids. The use of libMRC gives us access to its core functionality of providing an automated code generation framework which takes a user provided PDE right hand side in symbolic form to generate an efficient, computer architecture specific, parallel code. libMRC also supports block-structured adaptive mesh refinement and implicit-time stepping through integration with the PETSc library. We validate the new CWENO Hall-MHD solver against existing solvers both in standard test problems as well as in global magnetosphere simulations.

  10. Examining Parallelism of Sets of Psychometric Measures Using Latent Variable Modeling

    ERIC Educational Resources Information Center

    Raykov, Tenko; Patelis, Thanos; Marcoulides, George A.

    2011-01-01

    A latent variable modeling approach that can be used to examine whether several psychometric tests are parallel is discussed. The method consists of sequentially testing the properties of parallel measures via a corresponding relaxation of parameter constraints in a saturated model or an appropriately constructed latent variable model. The…

  11. The Development of Reading and Spelling in Arabic Orthography: Two Parallel Processes?

    ERIC Educational Resources Information Center

    Taha, Haitham

    2016-01-01

    The parallels between reading and spelling skills in Arabic were tested. One-hundred forty-three native Arab students, with typical reading development, from second, fourth, and sixth grades were tested with reading, spelling and orthographic decision tasks. The results indicated a full parallel between the reading and spelling performances within…

  12. Ontogeny of surface markers on functionally distinct T cell subsets in the chicken.

    PubMed

    Traill, K N; Böck, G; Boyd, R L; Ratheiser, K; Wick, G

    1984-01-01

    Three subsets of chicken peripheral T cells (T1, T2 and T3) have been identified in peripheral blood of adult chickens on the basis of fluorescence intensity after staining with certain xenogeneic anti-thymus cell sera (from turkeys and rabbits). They differentiate between 3-10 weeks of age in parallel with development of responsiveness to the mitogens concanavalin A (Con A), phytohemagglutinin (PHA) and pokeweed mitogen (PWM). Functional tests on the T subsets, sorted with a fluorescence-activated cell sorter, have shown that T2, 3 cells respond to Con A, PHA and PWM and are capable of eliciting a graft-vs.-host reaction (GvHR). In contrast, although T1 cells respond to Con A, they respond poorly to PHA and not at all to PWM or in GvHR. There was some indication of cooperation between T1 and T2,3 cells for the PHA response. Parallels between these chicken subsets and helper and suppressor/cytotoxic subsets in mammalian systems are discussed.

  13. Parallel-Processing Test Bed For Simulation Software

    NASA Technical Reports Server (NTRS)

    Blech, Richard; Cole, Gary; Townsend, Scott

    1996-01-01

Second-generation Hypercluster computing system is multiprocessor test bed for research on parallel algorithms for simulation in fluid dynamics, electromagnetics, chemistry, and other fields with large computational requirements but relatively low input/output requirements. Built from standard, off-the-shelf hardware readily upgraded as improved technology becomes available. System used for experiments with such parallel-processing concepts as message-passing algorithms, debugging software tools, and computational steering. First-generation Hypercluster system described in "Hypercluster Parallel Processor" (LEW-15283).

  14. A more secure parallel keyed hash function based on chaotic neural network

    NASA Astrophysics Data System (ADS)

    Huang, Zhongquan

    2011-08-01

    Although various hash functions based on chaos or chaotic neural networks have been proposed, most of them cannot work efficiently in a parallel computing environment. Recently, an algorithm for parallel keyed hash function construction based on a chaotic neural network was proposed [13]. However, this scheme carries a strict limitation: its secret keys must be nonces. In other words, if a key is used more than once, a potential security flaw arises. In this paper, we analyze the cause of the vulnerability of the original scheme in detail and then propose corresponding enhancement measures, which remove the limitation on the secret keys. Theoretical analysis and computer simulation indicate that the modified hash function is more secure and practical than the original one. At the same time, it retains the parallel merit and satisfies the other performance requirements of a hash function, such as good statistical properties, high message and key sensitivity, and strong collision resistance.
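
    The structural idea this abstract relies on (independent, index-bound per-block keyed hashing followed by a keyed combine) can be sketched with standard-library primitives. This is an illustrative stand-in, not the paper's chaotic-neural-network construction: HMAC-SHA256 plays the role of the per-block compression function, and the function and parameter names are invented for the sketch.

```python
# Illustrative sketch of a *parallel* keyed hash: each block is processed
# independently (hence parallelizable), keyed by the secret key and the
# block index; the per-block digests are then combined under the key.
# The paper's scheme uses a chaotic neural network as the per-block
# function; here HMAC-SHA256 stands in so the structure is runnable
# with the standard library alone.
import hashlib
import hmac
from concurrent.futures import ThreadPoolExecutor


def parallel_keyed_hash(key: bytes, message: bytes, block_size: int = 64) -> str:
    blocks = [message[i:i + block_size]
              for i in range(0, len(message), block_size)] or [b""]

    def hash_block(args):
        index, block = args
        # Binding the block index prevents block-reordering attacks.
        return hmac.new(key, index.to_bytes(8, "big") + block,
                        hashlib.sha256).digest()

    with ThreadPoolExecutor() as pool:
        digests = list(pool.map(hash_block, enumerate(blocks)))

    # Final keyed combine over the concatenated per-block digests.
    return hmac.new(key, b"".join(digests), hashlib.sha256).hexdigest()
```

    Because each block's digest depends only on the key, the block index, and the block itself, the per-block work distributes across threads or nodes, while a one-bit change in key or message still flips the final digest.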

  15. Matpar: Parallel Extensions for MATLAB

    NASA Technical Reports Server (NTRS)

    Springer, P. L.

    1998-01-01

    Matpar is a set of client/server software that allows a MATLAB user to take advantage of a parallel computer for very large problems. The user can replace calls to certain built-in MATLAB functions with calls to Matpar functions.

  16. Purple L1 Milestone Review Panel GPFS Functionality and Performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loewe, W E

    2006-12-01

    The GPFS deliverable for the Purple system requires the functionality and performance necessary for ASC I/O needs. The functionality includes POSIX and MPI-IO compatibility and multi-TB file capability across the entire machine. The bandwidth performance required is 122.15 GB/s, as necessary for productive and defensive I/O requirements, and the metadata performance requirement is 5,000 file stats per second. To determine success for this deliverable, several tools are employed. For functionality testing of POSIX, 10 TB files, and high-node-count capability, the parallel file system bandwidth performance test IOR is used. IOR is an MPI-coordinated application that can write and then read to a single shared file or to an individual file per process and check the data integrity of the file(s). The MPI-IO functionality is tested with the MPI-IO test suite from the MPICH library. Bandwidth performance is tested using IOR for the required 122.15 GB/s sustained write. All IOR tests are performed with data checking enabled. Metadata performance is tested after "aging" the file system with 80% data block usage and 20% inode usage. The fdtree metadata test is expected to create/remove a large directory/file structure in under 20 minutes, akin to interactive metadata usage. Multiple (10) instances of "ls -lR", each performing over 100K stats, are run concurrently in different large directories to demonstrate 5,000 stats/sec.

  17. Synthesis and functional characterization of novel derivatives related to oxotremorine and oxotremorine-M.

    PubMed

    Dallanoce, C; Conti, P; De Amici, M; De Micheli, C; Barocelli, E; Chiavarini, M; Ballabeni, V; Bertoni, S; Impicciatore, M

    1999-08-01

    Two subseries of non-quaternized (5a-10a) and quaternized (5b-10b) derivatives related to oxotremorine and oxotremorine-M were synthesized and tested. The agonist potency of the new compounds at the muscarinic receptor subtypes was estimated in three classical in vitro functional assays: M1 rabbit vas deferens, M2 guinea pig left atrium, and M3 guinea pig ileum. In addition, the occurrence of central muscarinic effects was evaluated as tremorigenic activity after intraperitoneal administration in mice. In the in vitro tests, a nonselective muscarinic activity was exhibited by all the derivatives, with potency values that, in some instances (e.g., 8b), surpassed those of the reference compounds. Functional selectivity was evidenced only for the oxotremorine-like derivative 9a, which behaved as a mixed M3-agonist/M1-antagonist (pD2 = 5.85; pA2 = 4.76, respectively). In the in vivo tests, the non-quaternized compounds were able to evoke central muscarinic effects, with a potency order parallel to that observed in vitro.

  18. The Potential Impact of Not Being Able to Create Parallel Tests on Expected Classification Accuracy

    ERIC Educational Resources Information Center

    Wyse, Adam E.

    2011-01-01

    In many practical testing situations, alternate test forms from the same testing program are not strictly parallel to each other and instead the test forms exhibit small psychometric differences. This article investigates the potential practical impact that these small psychometric differences can have on expected classification accuracy. Ten…

  19. Sample size determination for equivalence assessment with multiple endpoints.

    PubMed

    Sun, Anna; Dong, Xiaoyu; Tsong, Yi

    2014-01-01

    Equivalence assessment between a reference and test treatment is often conducted by two one-sided tests (TOST). The corresponding power function and sample size determination can be derived from a joint distribution of the sample mean and sample variance. When an equivalence trial is designed with multiple endpoints, it often involves several sets of two one-sided tests. A naive approach to sample size determination in this case would select the largest sample size required among the endpoints. However, such a method ignores the correlation among endpoints. With the objective of rejecting the null hypotheses for all endpoints, and when the endpoints are uncorrelated, the joint power function is the product of the power functions for the individual endpoints. With correlated endpoints, the sample size and power should be adjusted for the correlation. In this article, we propose the exact power function for the equivalence test with multiple endpoints adjusted for correlation under both crossover and parallel designs. We further discuss the differences in sample size between the naive method and the correlation-adjusted methods, and illustrate with an in vivo bioequivalence crossover study with area under the curve (AUC) and maximum concentration (Cmax) as the two endpoints.
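
    The product rule for uncorrelated endpoints can be made concrete with a normal-approximation sketch. This is not the paper's exact power function: the known-variance simplification and the helper names are assumptions of the sketch.

```python
# Sketch of the "product of powers" relation for uncorrelated endpoints,
# using a normal approximation to the power of each two one-sided tests
# (TOST) procedure. The paper derives the exact power; this is only the
# back-of-the-envelope version behind naive sample-size planning.
from statistics import NormalDist


def tost_power_normal(delta, margin, sigma, n, alpha=0.05):
    """Approximate power of TOST for H1: |delta| < margin, with n subjects
    per arm in a two-sample parallel design and known sigma."""
    se = sigma * (2.0 / n) ** 0.5
    z = NormalDist().inv_cdf(1 - alpha)
    nd = NormalDist()
    # P(reject both one-sided nulls) under the normal approximation.
    power = nd.cdf((margin - delta) / se - z) - nd.cdf((-margin - delta) / se + z)
    return max(power, 0.0)


def joint_power_uncorrelated(endpoint_params, n, alpha=0.05):
    """Joint power to conclude equivalence on *all* endpoints, assuming
    the endpoint test statistics are independent."""
    p = 1.0
    for delta, margin, sigma in endpoint_params:
        p *= tost_power_normal(delta, margin, sigma, n, alpha)
    return p
```

    Since each per-endpoint power is below 1, the joint power is strictly below the smallest single-endpoint power, which is why the naive "largest n per endpoint" rule underestimates the required sample size.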

  20. Analysis and selection of optimal function implementations in massively parallel computer

    DOEpatents

    Archer, Charles Jens [Rochester, MN; Peters, Amanda [Rochester, MN; Ratterman, Joseph D [Rochester, MN

    2011-05-31

    An apparatus, program product and method optimize the operation of a parallel computer system by, in part, collecting performance data for a set of implementations of a function capable of being executed on the parallel computer system based upon the execution of the set of implementations under varying input parameters in a plurality of input dimensions. The collected performance data may be used to generate selection program code that is configured to call selected implementations of the function in response to a call to the function under varying input parameters. The collected performance data may be used to perform more detailed analysis to ascertain the comparative performance of the set of implementations of the function under the varying input parameters.

  1. Genetic algorithms using SISAL parallel programming language

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tejada, S.

    1994-05-06

    Genetic algorithms are a mathematical optimization technique developed by John Holland at the University of Michigan [1]. The SISAL programming language possesses many of the characteristics desired for implementing genetic algorithms. SISAL is a deterministic, functional programming language which is inherently parallel. Because SISAL is functional and based on mathematical concepts, genetic algorithms can be efficiently translated into the language. Several of the steps involved in genetic algorithms, such as mutation, crossover, and fitness evaluation, can be parallelized using SISAL. In this paper I discuss the implementation and performance of parallel genetic algorithms in SISAL.
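
    The parallelizable steps named in the abstract (fitness evaluation, crossover, mutation) are naturally expressed as pure functions applied with map, which is exactly the shape a functional language such as SISAL parallelizes automatically. A minimal sketch, in Python rather than SISAL and with a toy maximize-the-ones objective:

```python
# Minimal genetic algorithm in which the per-individual steps (fitness
# evaluation, mutation) are pure functions applied over the population,
# the shape a functional language such as SISAL parallelizes. The GA
# itself (maximize the number of 1-bits) is a toy.
import random


def fitness(individual):          # pure: safe to evaluate in parallel
    return sum(individual)


def mutate(individual, rate, rng):
    return [bit ^ (rng.random() < rate) for bit in individual]


def crossover(a, b, rng):
    cut = rng.randrange(1, len(a))
    return a[:cut] + b[cut:]


def run_ga(n_bits=20, pop_size=30, generations=40, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)   # map + sort
        parents = scored[: pop_size // 2]                 # keep the best half
        children = [crossover(rng.choice(parents), rng.choice(parents), rng)
                    for _ in range(pop_size - len(parents))]
        pop = parents + [mutate(c, 0.02, rng) for c in children]
    return max(map(fitness, pop))
```

    Because `fitness` and `mutate` have no side effects, the list comprehensions here correspond directly to data-parallel loops in a functional language.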

  2. Enabling CSPA Operations Through Pilot Involvement in Longitudinal Approach Spacing

    NASA Technical Reports Server (NTRS)

    Battiste, Vernol (Technical Monitor); Pritchett, Amy

    2003-01-01

    Several major airports around the United States have, or plan to have, closely spaced parallel runways. This project complemented current and previous research by examining pilots' ability to control their position longitudinally within their approach stream. The project's results considered spacing for separation from potential positions of wake vortices from the parallel approach. This preventive function could enable CSPA operations on very closely spaced runways. This work also considered how pilot involvement in longitudinal spacing could allow for more efficient traffic flow, by allowing pilots to keep their aircraft within tighter arrival slots than air traffic control (ATC) might be able to establish, and by maintaining space within the arrival stream for corresponding departure slots. To this end, this project conducted several research studies providing an analytic and computational basis for calculating appropriate aircraft spacings, experimental results from a piloted flight simulator test, and an experimental testbed for future simulator tests. The following sections summarize the results of these three efforts.

  3. Concurrent Cuba

    NASA Astrophysics Data System (ADS)

    Hahn, T.

    2016-10-01

    The parallel version of the multidimensional numerical integration package Cuba is presented and achievable speed-ups are discussed. The parallelization is based on the fork/wait POSIX functions, needs no extra software installed, imposes almost no constraints on the integrand function, and works largely automatically.
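
    The fork/wait strategy the abstract describes can be sketched as follows: the master process forks workers, each estimates its share of a Monte Carlo integral, and the partial sums return through pipes. This mirrors Cuba's approach in spirit only; Cuba itself is C code, and all names here are invented for the sketch (POSIX-only, since it uses os.fork).

```python
# POSIX fork/wait sketch: fork workers, collect partial Monte Carlo sums
# of the 1-D integral of f over [0, 1] through pipes, then wait for the
# children. Illustrative only; not Cuba's actual implementation.
import os
import random
import struct


def integrate_forked(f, n_samples=40000, n_workers=4):
    per_worker = n_samples // n_workers
    pipes = []
    for w in range(n_workers):
        r, wfd = os.pipe()
        pid = os.fork()
        if pid == 0:                       # child: sample, write, exit
            os.close(r)
            rng = random.Random(w)         # distinct stream per worker
            acc = sum(f(rng.random()) for _ in range(per_worker))
            os.write(wfd, struct.pack("d", acc))
            os._exit(0)
        os.close(wfd)                      # parent keeps only the read end
        pipes.append((pid, r))
    total = 0.0
    for pid, r in pipes:
        total += struct.unpack("d", os.read(r, 8))[0]
        os.close(r)
        os.waitpid(pid, 0)                 # the "wait" half of fork/wait
    return total / (per_worker * n_workers)
```

    As the abstract notes, this style imposes almost nothing on the integrand: `f` is an ordinary function, and each forked child inherits it along with the rest of the process image.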

  4. Multiple List Learning in Adults with Autism Spectrum Disorder: Parallels with Frontal Lobe Damage or Further Evidence of Diminished Relational Processing?

    ERIC Educational Resources Information Center

    Bowler, Dermot M.; Gaigg, Sebastian B.; Gardiner, John M.

    2010-01-01

    To test the effects of providing relational cues at encoding and/or retrieval on multi-trial, multi-list free recall in adults with high-functioning autism spectrum disorder (ASD), 16 adults with ASD and 16 matched typical adults learned a first followed by a second categorised list of 24 words. Category labels were provided at encoding,…

  5. Benchmarking hypercube hardware and software

    NASA Technical Reports Server (NTRS)

    Grunwald, Dirk C.; Reed, Daniel A.

    1986-01-01

    It was long a truism in computer systems design that balanced systems achieve the best performance. Message passing parallel processors are no different. To quantify the balance of a hypercube design, an experimental methodology was developed and the associated suite of benchmarks was applied to several existing hypercubes. The benchmark suite includes tests of both processor speed in the absence of internode communication and message transmission speed as a function of communication patterns.

  6. A Comparison of Parallel and Integrated Models for Implementation of Routine HIV Screening in a Large, Urban Emergency Department.

    PubMed

    Hankin, Abigail; Freiman, Heather; Copeland, Brittney; Travis, Natasha; Shah, Bijal

    2016-01-01

    This study compared two approaches for implementation of non-targeted HIV screening in the emergency department (ED): (1) designated HIV counselors screening in parallel with ED care and (2) nurse-based screening integrated into patient triage. A retrospective analysis was performed to compare parallel and integrated screening models using data from the first 12 months of each program. Data for the parallel screening model were extracted from information collected by HIV test counselors and the electronic medical record (EMR). Integrated screening model data were extracted from the EMR and supplemented by data collected by HIV social workers during patient interaction. For both programs, data included demographics, HIV test offer, test acceptance or declination, and test result. A Z-test between two proportions was performed to compare screening frequencies and results. During the first 12 months of parallel screening, approximately 120,000 visits were made to the ED, with 3,816 (3%) HIV tests administered and 65 (2%) new diagnoses of HIV infection. During the first 12 months of integrated screening, 111,738 patients were triaged in the ED, with 16,329 (15%) patients tested and 190 (1%) new diagnoses. Integrated screening resulted in an increased frequency of HIV screening compared with parallel screening (0.15 tests per ED patient visit vs. 0.03 tests per ED patient visit, p<0.001) and an increase in the absolute number of new diagnoses (190 vs. 65), representing a slight decrease in the proportion of new diagnoses (1% vs. 2%, p=0.007). Non-targeted, integrated HIV screening, with test offer and order by ED nurses during patient triage, is feasible and resulted in an increased frequency of HIV screening and a threefold increase in the absolute number of newly identified HIV-positive patients.
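
    The comparison of screening frequencies reported above is a standard two-proportion z-test, which can be reproduced from the abstract's counts (3,816 of ~120,000 parallel vs. 16,329 of 111,738 integrated). A pooled-variance normal approximation is sketched here; the authors' exact procedure may differ in detail.

```python
# Two-proportion z-test of the kind the study reports, using a
# pooled-variance normal approximation.
from math import sqrt
from statistics import NormalDist


def two_proportion_z(x1, n1, x2, n2):
    """z statistic and two-sided p-value for H0: p1 == p2."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value
```

    With the abstract's counts (16,329/111,738 vs. 3,816/120,000) the z statistic is enormous and the p-value is far below 0.001, consistent with the reported p<0.001 for the screening-frequency comparison.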

  7. Processing communications events in parallel active messaging interface by awakening thread from wait state

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2013-10-22

    Processing data communications events in a parallel active messaging interface (`PAMI`) of a parallel computer that includes compute nodes that execute a parallel application, with the PAMI including data communications endpoints, and the endpoints are coupled for data communications through the PAMI and through other data communications resources, including determining by an advance function that there are no actionable data communications events pending for its context, placing by the advance function its thread of execution into a wait state, waiting for a subsequent data communications event for the context; responsive to occurrence of a subsequent data communications event for the context, awakening by the thread from the wait state; and processing by the advance function the subsequent data communications event now pending for the context.

  8. Classical test theory and Rasch analysis validation of the Upper Limb Functional Index in subjects with upper limb musculoskeletal disorders.

    PubMed

    Bravini, Elisabetta; Franchignoni, Franco; Giordano, Andrea; Sartorio, Francesco; Ferriero, Giorgio; Vercelli, Stefano; Foti, Calogero

    2015-01-01

    To perform a comprehensive analysis of the psychometric properties and dimensionality of the Upper Limb Functional Index (ULFI) using both classical test theory and Rasch analysis (RA). Prospective, single-group observational design. Freestanding rehabilitation center. Convenience sample of Italian-speaking subjects with upper limb musculoskeletal disorders (N=174). Not applicable. The Italian version of the ULFI. Data were analyzed using parallel analysis, exploratory factor analysis, and RA for evaluating dimensionality, functioning of rating scale categories, item fit, hierarchy of item difficulties, and reliability indices. Parallel analysis revealed 2 factors explaining 32.5% and 10.7% of the response variance. RA confirmed the failure of the unidimensionality assumption, and 6 items out of the 25 misfitted the Rasch model. When the analysis was rerun excluding the misfitting items, the scale showed acceptable fit values, loading meaningfully to a single factor. Item separation reliability and person separation reliability were .98 and .89, respectively. Cronbach alpha was .92. RA revealed weakness of the scale concerning dimensionality and internal construct validity. However, a set of 19 ULFI items defined through the statistical process demonstrated a unidimensional structure, good psychometric properties, and clinical meaningfulness. These findings represent a useful starting point for further analyses of the tool (based on modern psychometric approaches and confirmatory factor analysis) in larger samples, including different patient populations and nationalities.

  9. Shift-and-invert parallel spectral transformation eigensolver: Massively parallel performance for density-functional based tight-binding

    DOE PAGES

    Zhang, Hong; Zapol, Peter; Dixon, David A.; ...

    2015-11-17

    The Shift-and-invert parallel spectral transformations (SIPs), a computational approach to solve sparse eigenvalue problems, is developed for massively parallel architectures with exceptional parallel scalability and robustness. The capabilities of SIPs are demonstrated by diagonalization of density-functional based tight-binding (DFTB) Hamiltonian and overlap matrices for single-wall metallic carbon nanotubes, diamond nanowires, and bulk diamond crystals. The largest (smallest) example studied is a 128,000 (2000) atom nanotube for which ~330,000 (~5600) eigenvalues and eigenfunctions are obtained in ~190 (~5) seconds when parallelized over 266,144 (16,384) Blue Gene/Q cores. Weak scaling and strong scaling of SIPs are analyzed and the performance of SIPs is compared with other novel methods. Different matrix ordering methods are investigated to reduce the cost of the factorization step, which dominates the time-to-solution at the strong scaling limit. As a result, a parallel implementation of assembling the density matrix from the distributed eigenvectors is demonstrated.

  10. Shift-and-invert parallel spectral transformation eigensolver: Massively parallel performance for density-functional based tight-binding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Hong; Zapol, Peter; Dixon, David A.

    The Shift-and-invert parallel spectral transformations (SIPs), a computational approach to solve sparse eigenvalue problems, is developed for massively parallel architectures with exceptional parallel scalability and robustness. The capabilities of SIPs are demonstrated by diagonalization of density-functional based tight-binding (DFTB) Hamiltonian and overlap matrices for single-wall metallic carbon nanotubes, diamond nanowires, and bulk diamond crystals. The largest (smallest) example studied is a 128,000 (2000) atom nanotube for which ~330,000 (~5600) eigenvalues and eigenfunctions are obtained in ~190 (~5) seconds when parallelized over 266,144 (16,384) Blue Gene/Q cores. Weak scaling and strong scaling of SIPs are analyzed and the performance of SIPs is compared with other novel methods. Different matrix ordering methods are investigated to reduce the cost of the factorization step, which dominates the time-to-solution at the strong scaling limit. As a result, a parallel implementation of assembling the density matrix from the distributed eigenvectors is demonstrated.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu Yong; Department of Materials Science and Engineering, University of Tennessee, Knoxville, TN 37996; Liu Fengxiao

    Cemented carbides with a functionally graded structure have significantly improved mechanical properties and lifetimes in cutting, drilling and molding. In this work, WC-6 wt.% Co cemented carbides with a three-layer graded structure (surface layer rich in WC, mid layer rich in Co, and the inner part of the average composition) were prepared by carburizing pre-sintered η-phase-containing cemented carbides. Three-point bending fatigue tests based on the total-life approach were conducted on both WC-6 wt.% Co functionally graded cemented carbides (FGCC) and conventional WC-6 wt.% Co cemented carbides. The functionally graded cemented carbide shows a slightly higher fatigue limit (~100 MPa) than the conventional ones under the present testing conditions. However, the fatigue crack nucleation behavior of FGCC is different from that of the conventional ones. The crack nucleates preferentially along the Co gradient and perpendicular to the tension surface in FGCC, while parallel to the tension surface in conventional cemented carbides.

  12. A microcomputer-based testing station for dynamic and static testing of protective relay systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, W.J.; Li, R.J.; Gu, J.C.

    1995-12-31

    Dynamic and static relay performance testing before installation in the field is a subject of great interest to utility relay engineers. The common practice in utility testing of new relays is to put the new unit to be tested in parallel with an existing functioning relay in the system, wait until an actual transient occurs, and then observe and analyze the performance of the new relay. It is impossible to test the protective relay system thoroughly through this procedure. The Microcomputer-Based Testing Station (or PC-Based Testing Station), a piece of equipment that can perform both static and dynamic testing of the relay, is described in this paper. The Power System Simulation Laboratory at the University of Texas at Arlington is a scaled-down, three-phase, physical power system which correlates well with the important components of a real power system and is an ideal facility for the dynamic and static testing of protective relay systems. A brief introduction to the configuration of this laboratory is presented. Test results of several protective functions obtained using this laboratory illustrate the usefulness of this test set-up.

  13. ωB97X-V: A 10-parameter, range-separated hybrid, generalized gradient approximation density functional with nonlocal correlation, designed by a survival-of-the-fittest strategy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mardirossian, Narbe; Head-Gordon, Martin

    2013-12-18

    A 10-parameter, range-separated hybrid (RSH), generalized gradient approximation (GGA) density functional with nonlocal correlation (VV10) is presented in this paper. Instead of truncating the B97-type power-series inhomogeneity correction factors (ICF) for the exchange, same-spin correlation, and opposite-spin correlation functionals uniformly, all 16,383 combinations of the linear parameters up to fourth order (m = 4) are considered. These functionals are individually fit to a training set, and the resulting parameters are validated on a primary test set in order to identify the 3 optimal ICF expansions. Through this procedure, it is discovered that the functional that performs best on the training and primary test sets has 7 linear parameters, with 3 additional nonlinear parameters from range separation and nonlocal correlation. The resulting density functional, ωB97X-V, is further assessed on a secondary test set, the parallel-displaced coronene dimer, as well as several geometry datasets. Finally, the basis set dependence and integration grid sensitivity of ωB97X-V are analyzed and documented in order to facilitate the use of the functional.

  14. Locating hardware faults in a parallel computer

    DOEpatents

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-04-13

    Locating hardware faults in a parallel computer, including defining within a tree network of the parallel computer two or more sets of non-overlapping test levels of compute nodes of the network that together include all the data communications links of the network, each non-overlapping test level comprising two or more adjacent tiers of the tree; defining test cells within each non-overlapping test level, each test cell comprising a subtree of the tree including a subtree root compute node and all descendant compute nodes of the subtree root compute node within a non-overlapping test level; performing, separately on each set of non-overlapping test levels, an uplink test on all test cells in a set of non-overlapping test levels; and performing, separately from the uplink tests and separately on each set of non-overlapping test levels, a downlink test on all test cells in a set of non-overlapping test levels.
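
    The coverage argument in this abstract (two offset sets of two-tier test levels together exercise every parent-child link of the tree) can be checked with a small sketch. Tiers are numbered from the root; all names here are illustrative, not the patent's.

```python
# Sketch of the link-coverage idea: tiers of a tree are grouped into
# non-overlapping test levels of two adjacent tiers each, and two offset
# groupings together cover every parent-child link exactly once.
def links_covered(depth, first_tier):
    """Links (parent_tier, child_tier) exercised by test levels that pair
    tiers (t, t+1) starting at first_tier."""
    covered = set()
    t = first_tier
    while t + 1 <= depth:
        covered.add((t, t + 1))   # each level tests the links between its tiers
        t += 2
    return covered


def all_links(depth):
    """Every parent-child link in a tree with tiers 0..depth."""
    return {(t, t + 1) for t in range(depth)}
```

    For a tree with tiers 0..5, the set starting at tier 0 covers links (0,1), (2,3), (4,5) and the set starting at tier 1 covers (1,2), (3,4): disjoint within each set, exhaustive together, which is why the uplink and downlink tests can run separately per set.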

  15. Classification of hyperspectral imagery using MapReduce on a NVIDIA graphics processing unit (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Ramirez, Andres; Rahnemoonfar, Maryam

    2017-04-01

    A hyperspectral image is a multidimensional, data-rich representation consisting of hundreds of spectral bands. Analyzing the spectral and spatial information of such an image with linear and non-linear algorithms results in high computational time. To overcome this problem, this research presents a system using a MapReduce-Graphics Processing Unit (GPU) model that helps analyze a hyperspectral image through the use of parallel hardware and a parallel programming model, which is simpler to handle than other low-level parallel programming models. Additionally, Hadoop was used as an open-source implementation of the MapReduce parallel programming model. This research compared classification accuracy and timing results between the Hadoop and GPU systems and tested them against the following cases: a combined CPU and GPU test case, a CPU-only test case, and a test case where no dimensionality reduction was applied.

  16. A parallel variable metric optimization algorithm

    NASA Technical Reports Server (NTRS)

    Straeter, T. A.

    1973-01-01

    An algorithm designed to exploit the parallel computing or vector streaming (pipeline) capabilities of computers is presented. When p is the degree of parallelism, one cycle of the parallel variable metric algorithm is defined as follows: first, the function and its gradient are computed in parallel at p different values of the independent variable; then the metric is modified by p rank-one corrections; and finally, a single univariate minimization is carried out in the Newton-like direction. Several properties of this algorithm are established. The convergence of the iterates to the solution is proved for a quadratic functional on a real separable Hilbert space. For a finite-dimensional space the convergence is in one cycle when p equals the dimension of the space. Results of numerical experiments indicate that the new algorithm will exploit parallel or pipeline computing capabilities to effect faster convergence than serial techniques.

  17. A hybrid Jaya algorithm for reliability-redundancy allocation problems

    NASA Astrophysics Data System (ADS)

    Ghavidel, Sahand; Azizivahed, Ali; Li, Li

    2018-04-01

    This article proposes an efficient improved hybrid Jaya algorithm based on time-varying acceleration coefficients (TVACs) and the learning phase introduced in teaching-learning-based optimization (TLBO), named the LJaya-TVAC algorithm, for solving various types of nonlinear mixed-integer reliability-redundancy allocation problems (RRAPs) and standard real-parameter test functions. RRAPs include series, series-parallel, complex (bridge) and overspeed protection systems. The search power of the proposed LJaya-TVAC algorithm for finding the optimal solutions is first tested on standard real-parameter unimodal and multi-modal functions with dimensions of 30-100, and then on various types of nonlinear mixed-integer RRAPs. The results are compared with the original Jaya algorithm and the best results reported in the recent literature. The optimal results obtained with the proposed LJaya-TVAC algorithm provide evidence of its superior optimization performance compared to the original Jaya algorithm and other reported optimal results.
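
    The base (unhybridized) Jaya update that LJaya-TVAC builds on moves each solution toward the current best and away from the current worst, x' = x + r1(best − |x|) − r2(worst − |x|), with fresh random r1, r2 per variable and greedy acceptance. A minimal sketch on the sphere function follows; the hybrid's TVACs and TLBO learning phase are deliberately omitted, and all names are illustrative.

```python
# Basic Jaya on the sphere function: each candidate moves toward the
# population's best and away from its worst, and is kept only if it
# improves. The article's LJaya-TVAC adds time-varying acceleration
# coefficients and a TLBO learning phase on top of this core loop.
import random


def sphere(x):
    return sum(v * v for v in x)


def jaya(f, dim=5, pop_size=20, iters=300, lo=-5.0, hi=5.0, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(iters):
        best = min(pop, key=f)
        worst = max(pop, key=f)
        new_pop = []
        for x in pop:
            cand = [min(hi, max(lo,
                        xj + rng.random() * (best[j] - abs(xj))
                           - rng.random() * (worst[j] - abs(xj))))
                    for j, xj in enumerate(x)]
            # Greedy selection: keep the candidate only if it improves.
            new_pop.append(cand if f(cand) < f(x) else x)
        pop = new_pop
    return min(map(f, pop))
```

    Because the step sizes scale with the distance to the best and worst solutions, the moves shrink automatically as the population converges, with no tuned parameters beyond population size and iteration count.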

  18. Identifying failure in a tree network of a parallel computer

    DOEpatents

    Archer, Charles J.; Pinnow, Kurt W.; Wallenfelt, Brian P.

    2010-08-24

    Methods, parallel computers, and products are provided for identifying failure in a tree network of a parallel computer. The parallel computer includes one or more processing sets including an I/O node and a plurality of compute nodes. For each processing set embodiments include selecting a set of test compute nodes, the test compute nodes being a subset of the compute nodes of the processing set; measuring the performance of the I/O node of the processing set; measuring the performance of the selected set of test compute nodes; calculating a current test value in dependence upon the measured performance of the I/O node of the processing set, the measured performance of the set of test compute nodes, and a predetermined value for I/O node performance; and comparing the current test value with a predetermined tree performance threshold. If the current test value is below the predetermined tree performance threshold, embodiments include selecting another set of test compute nodes. If the current test value is not below the predetermined tree performance threshold, embodiments include selecting from the test compute nodes one or more potential problem nodes and testing individually potential problem nodes and links to potential problem nodes.

  19. Improved treatment of exact exchange in Quantum ESPRESSO

    DOE PAGES

    Barnes, Taylor A.; Kurth, Thorsten; Carrier, Pierre; ...

    2017-01-18

    Here, we present an algorithm and implementation for the parallel computation of exact exchange in Quantum ESPRESSO (QE) that exhibits greatly improved strong scaling. QE is an open-source software package for electronic structure calculations using plane wave density functional theory, and supports the use of local, semi-local, and hybrid DFT functionals. Wider application of hybrid functionals is desirable for the improved simulation of electronic band energy alignments and thermodynamic properties, but the computational complexity of evaluating the exact exchange potential limits the practical application of hybrid functionals to large systems and requires efficient implementations. We demonstrate that existing implementations of hybrid DFT that utilize a single data structure for both the local and exact exchange regions of the code are significantly limited in the degree of parallelization achievable. We present a band-pair parallelization approach, in which the calculation of exact exchange is parallelized and evaluated independently from the parallelization of the remainder of the calculation, with the wavefunction data being efficiently transformed on-the-fly into a form that is optimal for each part of the calculation. For a 64 water molecule supercell, our new algorithm reduces the overall time to solution by nearly an order of magnitude.

  20. Improved treatment of exact exchange in Quantum ESPRESSO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnes, Taylor A.; Kurth, Thorsten; Carrier, Pierre

    Here, we present an algorithm and implementation for the parallel computation of exact exchange in Quantum ESPRESSO (QE) that exhibits greatly improved strong scaling. QE is an open-source software package for electronic structure calculations using plane wave density functional theory, and supports the use of local, semi-local, and hybrid DFT functionals. Wider application of hybrid functionals is desirable for the improved simulation of electronic band energy alignments and thermodynamic properties, but the computational complexity of evaluating the exact exchange potential limits the practical application of hybrid functionals to large systems and requires efficient implementations. We demonstrate that existing implementations of hybrid DFT that utilize a single data structure for both the local and exact exchange regions of the code are significantly limited in the degree of parallelization achievable. We present a band-pair parallelization approach, in which the calculation of exact exchange is parallelized and evaluated independently from the parallelization of the remainder of the calculation, with the wavefunction data being efficiently transformed on-the-fly into a form that is optimal for each part of the calculation. For a 64 water molecule supercell, our new algorithm reduces the overall time to solution by nearly an order of magnitude.

  1. Requirements for implementing real-time control functional modules on a hierarchical parallel pipelined system

    NASA Technical Reports Server (NTRS)

    Wheatley, Thomas E.; Michaloski, John L.; Lumia, Ronald

    1989-01-01

    Analysis of a robot control system leads to a broad range of processing requirements. One fundamental requirement of a robot control system is the necessity of a microcomputer system in order to provide sufficient processing capability. The use of multiple processors in a parallel architecture is beneficial for a number of reasons, including better cost performance, modular growth, increased reliability through replication, and flexibility for testing alternate control strategies via different partitioning. A survey of the progression from low level control synchronizing primitives to higher level communication tools is presented. The system communication and control mechanisms of existing robot control systems are compared to the hierarchical control model. The impact of this design methodology on the current robot control systems is explored.

  2. Casimir force in O(n) systems with a diffuse interface.

    PubMed

    Dantchev, Daniel; Grüneberg, Daniel

    2009-04-01

    We study the behavior of the Casimir force in O(n) systems with a diffuse interface and slab geometry ∞^(d-1) × L, where 2 < d < 4, in the n → ∞ limit of O(n) models with antiperiodic boundary conditions applied along the finite dimension L of the film. We observe that the Casimir amplitude Δ_Casimir(d | J_⊥, J_∥) of the anisotropic d-dimensional system is related to that of the isotropic system Δ_Casimir(d) via Δ_Casimir(d | J_⊥, J_∥) = (J_⊥/J_∥)^((d-1)/2) Δ_Casimir(d). For d = 3 we derive the exact Casimir amplitude Δ_Casimir(3 | J_⊥, J_∥) = [Cl_2(π/3)/3 − ζ(3)/(6π)](J_⊥/J_∥), as well as the exact scaling functions of the Casimir force and of the helicity modulus Υ(T, L). We obtain that β_c Υ(T_c, L) = (2/π^2)[Cl_2(π/3)/3 + 7ζ(3)/(30π)](J_⊥/J_∥) L^(-1), where T_c is the critical temperature of the bulk system. We find that the contributions to the excess free energy due to the existence of a diffuse interface result in a repulsive Casimir force in the whole temperature region.

  3. Does Sensory Function Decline Independently or Concomitantly with Age? Data from the Baltimore Longitudinal Study of Aging.

    PubMed

    Gadkaree, Shekhar K; Sun, Daniel Q; Li, Carol; Lin, Frank R; Ferrucci, Luigi; Simonsick, Eleanor M; Agrawal, Yuri

    2016-01-01

    Objectives. To investigate whether sensory function declines independently or in parallel with age within a single individual. Methods. Cross-sectional analysis of Baltimore Longitudinal Study of Aging (BLSA) participants who underwent vision (visual acuity threshold), proprioception (ankle joint proprioceptive threshold), vestibular function (cervical vestibular-evoked myogenic potential), hearing (pure-tone average audiometric threshold), and Health ABC physical performance battery testing. Results. A total of 276 participants (mean age 70 years, range 26-93) underwent all four sensory tests. The function of all four systems declined with age. After age adjustment, there were no significant associations between sensory systems. Among 70-79-year-olds, dual or triple sensory impairment was associated with poorer physical performance. Discussion. Our findings suggest that beyond the common mechanism of aging, other distinct (nonshared) etiologic mechanisms may contribute to decline in each sensory system. Multiple sensory impairments influence physical performance among individuals in middle old-age (age 70-79).

  4. Does Sensory Function Decline Independently or Concomitantly with Age? Data from the Baltimore Longitudinal Study of Aging

    PubMed Central

    Gadkaree, Shekhar K.; Sun, Daniel Q.; Li, Carol; Lin, Frank R.; Ferrucci, Luigi; Simonsick, Eleanor M.

    2016-01-01

    Objectives. To investigate whether sensory function declines independently or in parallel with age within a single individual. Methods. Cross-sectional analysis of Baltimore Longitudinal Study of Aging (BLSA) participants who underwent vision (visual acuity threshold), proprioception (ankle joint proprioceptive threshold), vestibular function (cervical vestibular-evoked myogenic potential), hearing (pure-tone average audiometric threshold), and Health ABC physical performance battery testing. Results. A total of 276 participants (mean age 70 years, range 26–93) underwent all four sensory tests. The function of all four systems declined with age. After age adjustment, there were no significant associations between sensory systems. Among 70–79-year-olds, dual or triple sensory impairment was associated with poorer physical performance. Discussion. Our findings suggest that beyond the common mechanism of aging, other distinct (nonshared) etiologic mechanisms may contribute to decline in each sensory system. Multiple sensory impairments influence physical performance among individuals in middle old-age (age 70–79). PMID:27774319

  5. Linking the development and functioning of a carnivorous pitcher plant's microbial digestive community.

    PubMed

    Armitage, David W

    2017-11-01

    Ecosystem development theory predicts that successional turnover in community composition can influence ecosystem functioning. However, tests of this theory in natural systems are made difficult by a lack of replicable and tractable model systems. Using the microbial digestive associates of a carnivorous pitcher plant, I tested hypotheses linking host age-driven microbial community development to host functioning. Monitoring the yearlong development of independent microbial digestive communities in two pitcher plant populations revealed a number of trends in community succession matching theoretical predictions. These included mid-successional peaks in bacterial diversity and metabolic substrate use, predictable and parallel successional trajectories among microbial communities, and convergence giving way to divergence in community composition and carbon substrate use. Bacterial composition, biomass, and diversity positively influenced the rate of prey decomposition, which was in turn positively associated with a host leaf's nitrogen uptake efficiency. Overall digestive performance was greatest during late summer. These results highlight links between community succession and ecosystem functioning and extend succession theory to host-associated microbial communities.

  6. Development of gallium arsenide high-speed, low-power serial parallel interface modules: Executive summary

    NASA Technical Reports Server (NTRS)

    1988-01-01

    Final report to NASA LeRC on the development of gallium arsenide (GaAs) high-speed, low-power serial/parallel interface modules. The report discusses the development and test of a family of 16, 32, and 64 bit parallel-to-serial and serial-to-parallel integrated circuits using a self-aligned gate MESFET technology developed at the Honeywell Sensors and Signal Processing Laboratory. Lab testing demonstrated 1.3 GHz clock rates at a power of 300 mW. This work was accomplished under contract number NAS3-24676.

  7. Similarity of the Multidimensional Space Defined by Parallel Forms of a Mathematics Test.

    ERIC Educational Resources Information Center

    Reckase, Mark D.; And Others

    The purpose of the paper is to determine whether test forms of the Mathematics Usage Test (AAP Math) of the American College Testing Program are parallel in a multidimensional sense. The AAP Math is an achievement test of mathematics concepts acquired by high school students by the end of their third year. To determine the dimensionality of the…

  8. Not just a theory--the utility of mathematical models in evolutionary biology.

    PubMed

    Servedio, Maria R; Brandvain, Yaniv; Dhole, Sumit; Fitzpatrick, Courtney L; Goldberg, Emma E; Stern, Caitlin A; Van Cleve, Jeremy; Yeh, D Justin

    2014-12-01

    Progress in science often begins with verbal hypotheses meant to explain why certain biological phenomena exist. An important purpose of mathematical models in evolutionary research, as in many other fields, is to act as “proof-of-concept” tests of the logic in verbal explanations, paralleling the way in which empirical data are used to test hypotheses. Because not all subfields of biology use mathematics for this purpose, misunderstandings of the function of proof-of-concept modeling are common. In the hope of facilitating communication, we discuss the role of proof-of-concept modeling in evolutionary biology.

  9. Rectifier cabinet static breaker

    DOEpatents

    Costantino, Jr, Roger A.; Gliebe, Ronald J.

    1992-09-01

    A rectifier cabinet static breaker replaces a blocking diode pair with an SCR and the installation of a power transistor in parallel with the latch contactor to commutate the SCR to the off state. The SCR serves as a static breaker with fast turnoff capability providing an alternative way of achieving reactor scram in addition to performing the function of the replaced blocking diodes. The control circuitry for the rectifier cabinet static breaker includes on-line test capability and an LED indicator light to denote successful test completion. Current limit circuitry provides high-speed protection in the event of overload.

  10. Parallel ALLSPD-3D: Speeding Up Combustor Analysis Via Parallel Processing

    NASA Technical Reports Server (NTRS)

    Fricker, David M.

    1997-01-01

    The ALLSPD-3D Computational Fluid Dynamics code for reacting flow simulation was run on a set of benchmark test cases to determine its parallel efficiency. These test cases included non-reacting and reacting flow simulations with varying numbers of processors. Also, the tests explored the effects of scaling the simulation with the number of processors in addition to distributing a constant size problem over an increasing number of processors. The test cases were run on a cluster of IBM RS/6000 Model 590 workstations with ethernet and ATM networking plus a shared memory SGI Power Challenge L workstation. The results indicate that the network capabilities significantly influence the parallel efficiency, i.e., a shared memory machine is fastest and ATM networking provides acceptable performance. The limitations of ethernet greatly hamper the rapid calculation of flows using ALLSPD-3D.
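Parallel-efficiency comparisons like the one above rest on two standard metrics, strong-scaling speedup and efficiency. A minimal sketch (the timing numbers below are invented for illustration, not ALLSPD-3D measurements):

```python
def speedup(t_serial, t_parallel):
    """Strong-scaling speedup: serial run time over parallel run time."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_procs):
    """Parallel efficiency: speedup divided by the processor count."""
    return speedup(t_serial, t_parallel) / n_procs

# Hypothetical timings: a 1200 s serial run finishing in 200 s on 8 CPUs.
s = speedup(1200.0, 200.0)            # 6.0
e = efficiency(1200.0, 200.0, 8)      # 0.75
assert s == 6.0 and e == 0.75
```

On a slow interconnect such as Ethernet, communication time keeps t_parallel high, which is exactly how the network limitations noted above show up as poor efficiency.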

  11. A comparison of parallel and diverging screw angles in the stability of locked plate constructs.

    PubMed

    Wähnert, D; Windolf, M; Brianza, S; Rothstock, S; Radtke, R; Brighenti, V; Schwieger, K

    2011-09-01

    We investigated the static and cyclical strength of parallel and angulated locking plate screws using rigid polyurethane foam (0.32 g/cm(3)) and bovine cancellous bone blocks. Custom-made stainless steel plates with two conically threaded screw holes with different angulations (parallel, 10° and 20° divergent) and 5 mm self-tapping locking screws underwent pull-out and cyclical pull and bending tests. The bovine cancellous blocks were only subjected to static pull-out testing. We also performed finite element analysis for the static pull-out test of the parallel and 20° configurations. In both the foam model and the bovine cancellous bone, the pull-out force was significantly highest for the parallel constructs. In the finite element analysis there was 47% more damage in the 20° divergent constructs than in the parallel configuration. Under cyclical loading, the mean number of cycles to failure was significantly higher for the parallel group, followed by the 10° and 20° divergent configurations. In our laboratory setting we clearly showed the biomechanical disadvantage of a diverging locking screw angle under static and cyclical loading.

  12. The equivalent thermal properties of a single fracture

    NASA Astrophysics Data System (ADS)

    Sangaré, D.; Thovert, J.-F.; Adler, P. M.

    2008-10-01

    The normal resistance and the tangential conductivity of a single fracture with Gaussian or self-affine surfaces are systematically studied as functions of the nature of the materials in contact and of the geometrical parameters. Analytical formulas are provided in the lubrication limit for fractures with sinusoidal apertures; these formulas are used to substantiate empirical formulas for resistance and conductivity. Other approximations based on the combination of series and parallel formulas are tested.
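The series and parallel combination formulas invoked above follow the usual resistor-network rules, which apply to thermal resistances just as to electrical ones. A minimal sketch:

```python
def series(resistances):
    """Resistances in series add directly."""
    return sum(resistances)

def parallel(resistances):
    """Conductances add in parallel, so resistances add reciprocally."""
    return 1.0 / sum(1.0 / r for r in resistances)

assert series([2.0, 3.0]) == 5.0
# Two equal parallel paths halve the resistance.
assert abs(parallel([2.0, 2.0]) - 1.0) < 1e-12
```

Approximations for a rough fracture then amount to choosing how to chain these two rules across the aperture field, which is what the abstract's combined series/parallel formulas test.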

  13. A comprehensive study of MPI parallelism in three-dimensional discrete element method (DEM) simulation of complex-shaped granular particles

    NASA Astrophysics Data System (ADS)

    Yan, Beichuan; Regueiro, Richard A.

    2018-02-01

    A three-dimensional (3D) DEM code for simulating complex-shaped granular particles is parallelized using the message-passing interface (MPI). The concepts of link-block, ghost/border layer, and migration layer are put forward for the design of the parallel algorithm, and theoretical functions for 3D DEM scalability and memory usage are derived. Many performance-critical implementation details are managed optimally to achieve high performance and scalability, such as minimizing communication overhead, maintaining dynamic load balance, handling particle migrations across block borders, transmitting C++ dynamic objects of particles between MPI processes efficiently, and eliminating redundant contact information between adjacent MPI processes. The code executes on multiple US Department of Defense (DoD) supercomputers and is tested on up to 2048 compute nodes simulating 10 million three-axis ellipsoidal particles. Performance analyses of the code, including speedup, efficiency, scalability, and granularity across five orders of magnitude of simulation scale (number of particles), are provided and demonstrate high speedup and excellent scalability. It is also found that communication time is a decreasing function of the number of compute nodes in strong-scaling measurements. The code's capability of simulating a large number of complex-shaped particles on modern supercomputers will be of value in both laboratory studies on micromechanical properties of granular materials and many realistic engineering applications involving granular materials.
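The ghost/border-layer idea above can be sketched for a 1-D chain of blocks. This is a toy analogue only; the paper's 3D link-block scheme, migration layers, and MPI object transfers are far more involved.

```python
def exchange_ghost_layers(blocks):
    """Copy each block's border element into its neighbours' ghost
    layers, so every block can compute contacts near its boundary
    without owning the neighbouring particles."""
    ghosts = []
    for i in range(len(blocks)):
        left = blocks[i - 1][-1:] if i > 0 else []
        right = blocks[i + 1][:1] if i < len(blocks) - 1 else []
        ghosts.append((left, right))
    return ghosts

# Three blocks of particle IDs; each receives copies of its neighbours' borders.
blocks = [[1, 2], [3, 4], [5, 6]]
assert exchange_ghost_layers(blocks) == [([], [3]), ([2], [5]), ([4], [])]
```

In the real code this copy is an MPI message per border, which is why minimizing the border (ghost) volume relative to the block interior is central to the reported scalability.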

  14. Connecting the Brain to Itself through an Emulation

    PubMed Central

    Serruya, Mijail D.

    2017-01-01

    Pilot clinical trials of human patients implanted with devices that can chronically record and stimulate ensembles of hundreds to thousands of individual neurons offer the possibility of expanding the substrate of cognition. Parallel trains of firing rate activity can be delivered in real-time to an array of intermediate external modules that in turn can trigger parallel trains of stimulation back into the brain. These modules may be built in software, VLSI firmware, or biological tissue as in vitro culture preparations or in vivo ectopic construct organoids. Arrays of modules can be constructed as early stage whole brain emulators, following canonical intra- and inter-regional circuits. By using machine learning algorithms and classic tasks known to activate quasi-orthogonal functional connectivity patterns, bedside testing can rapidly identify ensemble tuning properties and in turn cycle through a sequence of external module architectures to explore which can causatively alter perception and behavior. Whole brain emulation both (1) serves to augment human neural function, compensating for disease and injury as an auxiliary parallel system, and (2) has its independent operation bootstrapped by a human-in-the-loop to identify optimal micro- and macro-architectures, update synaptic weights, and entrain behaviors. In this manner, closed-loop brain-computer interface pilot clinical trials can advance strong artificial intelligence development and forge new therapies to restore independence in children and adults with neurological conditions. PMID:28713235

  15. FWT2D: A massively parallel program for frequency-domain full-waveform tomography of wide-aperture seismic data—Part 1: Algorithm

    NASA Astrophysics Data System (ADS)

    Sourbier, Florent; Operto, Stéphane; Virieux, Jean; Amestoy, Patrick; L'Excellent, Jean-Yves

    2009-03-01

    This is the first paper in a two-part series that describes a massively parallel code that performs 2D frequency-domain full-waveform inversion of wide-aperture seismic data for imaging complex structures. Full-waveform inversion methods, namely quantitative seismic imaging methods based on the resolution of the full wave equation, are computationally expensive. Therefore, designing efficient algorithms which take advantage of parallel computing facilities is critical for the appraisal of these approaches when applied to representative case studies and for further improvements. Full-waveform modelling requires the resolution of a large sparse system of linear equations which is performed with the massively parallel direct solver MUMPS for efficient multiple-shot simulations. Efficiency of the multiple-shot solution phase (forward/backward substitutions) is improved by using the BLAS3 library. The inverse problem relies on a classic local optimization approach implemented with a gradient method. The direct solver returns the multiple-shot wavefield solutions distributed over the processors according to a domain decomposition driven by the distribution of the LU factors. The domain decomposition of the wavefield solutions is used to compute in parallel the gradient of the objective function and the diagonal Hessian, this latter providing a suitable scaling of the gradient. The algorithm allows one to test different strategies for multiscale frequency inversion ranging from successive mono-frequency inversion to simultaneous multifrequency inversion. These different inversion strategies will be illustrated in the following companion paper. The parallel efficiency and the scalability of the code will also be quantified.
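The inverse-problem step, a gradient-based local optimization of a least-squares waveform misfit, can be caricatured on a toy linear forward model. Finite-difference gradients stand in here for the adjoint-state gradient actually used in FWT2D, and the forward operator and data are invented for illustration.

```python
def invert(m0, data_obs, forward, steps=50, lr=0.1, eps=1e-6):
    """Gradient descent on the least-squares misfit
    J(m) = 0.5 * || forward(m) - data_obs ||^2,
    with gradients computed by finite differences."""
    def misfit(m):
        r = [f - o for f, o in zip(forward(m), data_obs)]
        return 0.5 * sum(v * v for v in r)
    m = list(m0)
    for _ in range(steps):
        grad = []
        for i in range(len(m)):
            mp = list(m)
            mp[i] += eps
            grad.append((misfit(mp) - misfit(m)) / eps)
        m = [mi - lr * gi for mi, gi in zip(m, grad)]
    return m

forward = lambda m: [2.0 * m[0], m[0] + m[1]]   # toy linear "wave" operator
data_obs = forward([1.0, 2.0])                  # data from the true model
m = invert([0.0, 0.0], data_obs, forward)
assert abs(m[0] - 1.0) < 0.1 and abs(m[1] - 2.0) < 0.1
```

The paper's diagonal-Hessian scaling plays the role of a preconditioner on the gradient step above, and its parallelism comes from distributing the per-shot wavefields used to build that gradient.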

  16. Applications of New Surrogate Global Optimization Algorithms including Efficient Synchronous and Asynchronous Parallelism for Calibration of Expensive Nonlinear Geophysical Simulation Models.

    NASA Astrophysics Data System (ADS)

    Shoemaker, C. A.; Pang, M.; Akhtar, T.; Bindel, D.

    2016-12-01

    New parallel surrogate global optimization algorithms are developed and applied to objective functions that are expensive simulations (possibly with multiple local minima). The algorithms can be applied to most geophysical simulations, including those with nonlinear partial differential equations, and do not require that the simulations themselves be parallelized. Asynchronous and synchronous parallel execution are available in the optimization toolbox pySOT, an open-source Surrogate Global Optimization Toolbox that allows the user to pick the type of surrogate (or ensembles), the search procedure on the surrogate, and the type of parallelism (synchronous or asynchronous); pySOT also allows the user to develop new algorithms by modifying parts of the code. The parallel algorithms are modified from serial versions to eliminate fine-grained parallelism. In the applications here, a single evaluation of the objective function takes up to 30 minutes, and serial optimization can take over 200 hours. Results from the Yellowstone (NSF) and NCSS (Singapore) supercomputers are given for groundwater contaminant hydrology simulations, with applications to model parameter estimation and decontamination management; all results are compared with alternatives. The first application optimizes pumping at many wells to reduce the cost of decontaminating groundwater at a Superfund site. The optimization runs with up to 128 processors; superlinear speedup is obtained for up to 16 processors, and efficiency with 64 processors is over 80%. Each evaluation of the objective function requires the solution of nonlinear partial differential equations to describe the impact of spatially distributed pumping and model parameters on model predictions for the spatial and temporal distribution of groundwater contaminants. The second application uses asynchronous parallel global optimization for groundwater quality model calibration; the time for a single objective function evaluation varies unpredictably, so efficiency is improved with asynchronous parallel calculations that improve load balancing. The third application (done at NCSS) incorporates new global surrogate multi-objective parallel search algorithms into pySOT and applies them to a large watershed calibration problem.
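The synchronous batch-parallel pattern can be sketched with a deliberately naive "surrogate". Real toolboxes such as pySOT fit radial-basis-function or other response surfaces; the objective and the proposal rule below are invented purely for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def objective(x):                 # stand-in for an expensive simulation
    return (x - 3.0) ** 2

def propose(evaluated):
    """Naive 'surrogate' search: bracket the current best point."""
    best_x, _ = min(evaluated, key=lambda p: p[1])
    return [best_x - 0.5, best_x + 0.5]

evaluated = [(0.0, objective(0.0)), (10.0, objective(10.0))]
with ThreadPoolExecutor(max_workers=2) as pool:
    for _ in range(5):            # synchronous: wait for the whole batch
        xs = propose(evaluated)
        evaluated += list(zip(xs, pool.map(objective, xs)))

best_x, best_f = min(evaluated, key=lambda p: p[1])
assert best_f < 1.0               # much better than either starting point
```

An asynchronous variant would submit points individually and propose a replacement as soon as any worker returns, which is what improves load balancing when evaluation times vary unpredictably.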

  17. PARALLEL ASSAY OF OXYGEN EQUILIBRIA OF HEMOGLOBIN

    PubMed Central

    Lilly, Laura E.; Blinebry, Sara K.; Viscardi, Chelsea M.; Perez, Luis; Bonaventura, Joe; McMahon, Tim J.

    2013-01-01

    Methods to systematically analyze in parallel the function of multiple protein or cell samples in vivo or ex vivo (i.e. functional proteomics) in a controlled gaseous environment have thus far been limited. Here we describe an apparatus and procedure that enables, for the first time, parallel assay of oxygen equilibria in multiple samples. Using this apparatus, numerous simultaneous oxygen equilibrium curves (OECs) can be obtained under truly identical conditions from blood cell samples or purified hemoglobins (Hbs). We suggest that the ability to obtain these parallel datasets under identical conditions can be of immense value, both to biomedical researchers and clinicians who wish to monitor blood health, and to physiologists studying non-human organisms and the effects of climate change on these organisms. Parallel monitoring techniques are essential in order to better understand the functions of critical cellular proteins. The procedure can be applied to human studies, wherein an OEC can be analyzed in light of an individual’s entire genome. Here, we analyzed intraerythrocytic Hb, a protein that operates at the organism’s environmental interface and then comes into close contact with virtually all of the organism’s cells. The apparatus is theoretically scalable, and establishes a functional proteomic screen that can be correlated with genomic information on the same individuals. This new method is expected to accelerate our general understanding of protein function, an increasingly challenging objective as advances in proteomic and genomic throughput outpace the ability to study proteins’ functional properties. PMID:23827235
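An oxygen equilibrium curve of the kind measured in parallel here is commonly modelled by the Hill equation. The sketch below uses typical adult-human parameter values (P50 ≈ 26 mmHg, cooperativity n ≈ 2.7) chosen for illustration; they are not parameters from the cited apparatus.

```python
def hill_saturation(po2, p50=26.0, n=2.7):
    """Hill-equation model of an oxygen equilibrium curve: fractional
    hemoglobin saturation as a function of O2 partial pressure (mmHg)."""
    return po2 ** n / (p50 ** n + po2 ** n)

assert abs(hill_saturation(26.0) - 0.5) < 1e-12   # half-saturated at P50
assert hill_saturation(100.0) > 0.9               # near-saturated at arterial pO2
```

Fitting P50 and n to each sample's measured curve is one way the parallel OEC datasets described above could be reduced to comparable functional parameters.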

  18. Architectures for reasoning in parallel

    NASA Technical Reports Server (NTRS)

    Hall, Lawrence O.

    1989-01-01

    The research conducted has dealt with rule-based expert systems, and algorithms that may lead to their effective parallelization were investigated. Both the forward and backward chained control paradigms were investigated in the course of this work. The best computer architecture for the developed and investigated algorithms has been researched. Two experimental vehicles were developed to facilitate this research: Backpac, a parallel backward chained rule-based reasoning system, and Datapac, a parallel forward chained rule-based reasoning system. Both systems have been written in Multilisp, a version of Lisp which contains the parallel construct, future. Applying the future construct to an expression causes its evaluation to become a task that runs in parallel with the spawning task. Additionally, Backpac and Datapac have been run on several disparate parallel processors. The machines are an Encore Multimax with 10 processors, the Concert Multiprocessor with 64 processors, and a 32 processor BBN GP1000. Both the Concert and the GP1000 are switch-based machines. The Multimax has all its processors hung off a common bus. All are shared memory machines, but have different schemes for sharing the memory and different locales for the shared memory. The main results of the investigations come from experiments on the 10 processor Encore and the Concert with partitions of 32 or less processors. Additionally, experiments have been run with a stripped down version of EMYCIN.
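Multilisp's future construct has a loose modern analogue in Python's concurrent.futures, shown here to illustrate the idea of spawning a parallel task and later forcing its value; this is an illustration of the construct, not of Backpac or Datapac themselves.

```python
from concurrent.futures import ThreadPoolExecutor

def fib(n):
    """Deliberately slow recursive Fibonacci, as a stand-in workload."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

with ThreadPoolExecutor() as pool:
    f = pool.submit(fib, 15)    # spawn a parallel task, like (future (fib 15))
    local = fib(10)             # the spawning task keeps working meanwhile
    total = local + f.result()  # touching the future forces (waits for) its value

assert total == 665             # fib(10) + fib(15) = 55 + 610
```

In Multilisp the force is implicit on first use of the value, whereas Python requires an explicit result() call; the scheduling idea, however, is the same.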

  19. Synthesis of Efficient Structures for Concurrent Computation.

    DTIC Science & Technology

    1983-10-01

    A formal presentation of these techniques, called virtualization and aggregation, can be found in [King-83]. Topics include census functions (trees performing broadcast), user-assisted aggregation, a simple parallel structure for broadcasting, and the internal structure of a prefix computation network.

  20. Algorithms for the Construction of Parallel Tests by Zero-One Programming. Project Psychometric Aspects of Item Banking No. 7. Research Report 86-7.

    ERIC Educational Resources Information Center

    Boekkooi-Timminga, Ellen

    Nine methods for automated test construction are described. All are based on the concepts of information from item response theory. Two general kinds of methods for the construction of parallel tests are presented: (1) sequential test design; and (2) simultaneous test design. Sequential design implies that the tests are constructed one after the…
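The sequential design idea can be caricatured with a greedy heuristic that balances test information at a single ability point. The item values below are invented, and the cited reports formulate the real problem as zero-one programming over full information functions; this sketch only conveys the sequential flavour.

```python
def sequential_assembly(item_infos, n_tests=2):
    """Assign items one at a time, largest information first, to the
    test whose accumulated information is currently lowest, so the
    resulting tests are approximately (weakly) parallel."""
    tests = [[] for _ in range(n_tests)]
    totals = [0.0] * n_tests
    order = sorted(range(len(item_infos)), key=lambda i: -item_infos[i])
    for i in order:
        k = totals.index(min(totals))
        tests[k].append(i)
        totals[k] += item_infos[i]
    return tests, totals

infos = [0.9, 0.8, 0.5, 0.4, 0.3, 0.1]     # hypothetical item information values
tests, totals = sequential_assembly(infos)
assert sorted(tests[0] + tests[1]) == list(range(6))   # every item used once
assert abs(totals[0] - totals[1]) <= max(infos)        # near-equal information
```

Simultaneous zero-one programming formulations instead optimize all assignments at once, trading this heuristic's speed for a guarantee on how closely the information functions match.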

  1. Parallelization and automatic data distribution for nuclear reactor simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liebrock, L.M.

    1997-07-01

    Detailed attempts at realistic nuclear reactor simulations currently take many times real time to execute on high performance workstations. Even the fastest sequential machine can not run these simulations fast enough to ensure that the best corrective measure is used during a nuclear accident to prevent a minor malfunction from becoming a major catastrophe. Since sequential computers have nearly reached the speed of light barrier, these simulations will have to be run in parallel to make significant improvements in speed. In physical reactor plants, parallelism abounds. Fluids flow, controls change, and reactions occur in parallel with only adjacent components directly affecting each other. These do not occur in the sequentialized manner, with global instantaneous effects, that is often used in simulators. Development of parallel algorithms that more closely approximate the real-world operation of a reactor may, in addition to speeding up the simulations, actually improve the accuracy and reliability of the predictions generated. Three types of parallel architecture (shared memory machines, distributed memory multicomputers, and distributed networks) are briefly reviewed as targets for parallelization of nuclear reactor simulation. Various parallelization models (loop-based model, shared memory model, functional model, data parallel model, and a combined functional and data parallel model) are discussed along with their advantages and disadvantages for nuclear reactor simulation. A variety of tools are introduced for each of the models. Emphasis is placed on the data parallel model as the primary focus for two-phase flow simulation. Tools to support data parallel programming for multiple component applications and special parallelization considerations are also discussed.

  2. Design and implementation of online automatic judging system

    NASA Astrophysics Data System (ADS)

    Liang, Haohui; Chen, Chaojie; Zhong, Xiuyu; Chen, Yuefeng

    2017-06-01

    To address the low efficiency and poor reliability of manual judging in programming training and competitions, we design an Online Automatic Judging (OAJ) system. The OAJ system, comprising a sandbox judging side and a Web side, automatically compiles and runs submitted code and generates evaluation scores and corresponding reports. To prevent malicious code from damaging the system, the OAJ system runs submissions in a sandbox, ensuring system safety. The OAJ system uses thread pools to achieve parallel testing, and adopts database optimization mechanisms, such as horizontal table splitting, to improve system performance and resource utilization. Test results show that the system has high performance, high reliability, high stability, and excellent extensibility.
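The judging step with a thread pool running test cases in parallel can be sketched as follows. This is a minimal sketch of the pattern only: a real judge, like the OAJ system described, adds memory limits and OS-level sandboxing, and the toy "submission" here is invented.

```python
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

def judge(cmd, stdin_text, expected, timeout=5):
    """Run one submission on one test case under a time limit and
    compare its output against the expected answer."""
    try:
        run = subprocess.run(cmd, input=stdin_text, capture_output=True,
                             text=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        return "Time Limit Exceeded"
    return "Accepted" if run.stdout.strip() == expected else "Wrong Answer"

# A toy "submission" that sums two integers, checked on two cases in parallel.
cmd = [sys.executable, "-c", "print(sum(map(int, input().split())))"]
cases = [("1 2", "3"), ("5 7", "12")]
with ThreadPoolExecutor(max_workers=4) as pool:
    verdicts = list(pool.map(lambda c: judge(cmd, c[0], c[1]), cases))
assert verdicts == ["Accepted", "Accepted"]
```

Threads suit this workload because each judging task blocks on a child process rather than on Python computation, so the pool keeps several submissions running concurrently.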

  3. Solar panel acceptance testing using a pulsed solar simulator

    NASA Technical Reports Server (NTRS)

    Hershey, T. L.

    1977-01-01

    Utilizing specific parameters such as the area of an individual cell, the number of cells in series and parallel, and established coefficients of current and voltage temperature dependence, a solar array irradiated with one solar constant at AM0 and at ambient temperature can be characterized by a current-voltage curve for different intensities, temperatures, and even different configurations. Calibration techniques include: uniformity in area, depth, and time; absolute and transfer irradiance standards; and dynamic and functional checkout procedures. Typical data are given for items ranging from an individual cell (2x2 cm) to a complete flat solar array (5x5 feet) with 2660 cells, and for cylindrical test items with up to 10,000 cells. The time and energy savings of such testing techniques are emphasized.
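Translating a measured current-voltage point to other temperatures and intensities using linear temperature coefficients can be sketched as below. The linear form, parameter names, and numbers are illustrative assumptions, not the paper's exact procedure.

```python
def translate_point(i_ref, v_ref, t_ref, g_ref, t, g, alpha_i, beta_v):
    """Translate one measured I-V point from reference temperature t_ref
    (deg C) and irradiance g_ref (W/m^2) to conditions (t, g), using a
    linear current temperature coefficient alpha_i (1/deg C) and a
    voltage coefficient beta_v (V/deg C). Current scales with irradiance."""
    i = i_ref * (g / g_ref) * (1.0 + alpha_i * (t - t_ref))
    v = v_ref + beta_v * (t - t_ref)
    return i, v

# Hypothetical cell measured at 25 C under one solar constant (AM0 ~ 1353 W/m^2),
# translated to 45 C at the same irradiance.
i, v = translate_point(0.30, 0.55, 25.0, 1353.0, 45.0, 1353.0, 0.0005, -0.002)
assert abs(i - 0.303) < 1e-6     # +1% current over a 20 C rise
assert abs(v - 0.51) < 1e-6      # -40 mV over a 20 C rise
```

Applying such a translation cell-by-cell, then composing cells in series (voltages add) and parallel (currents add), is how a full-array curve can be predicted from single-cell characterization.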

  4. Parallelized CCHE2D flow model with CUDA Fortran on Graphics Process Units

    USDA-ARS?s Scientific Manuscript database

    This paper presents the CCHE2D implicit flow model parallelized using CUDA Fortran programming technique on Graphics Processing Units (GPUs). A parallelized implicit Alternating Direction Implicit (ADI) solver using Parallel Cyclic Reduction (PCR) algorithm on GPU is developed and tested. This solve...

  5. Testing the physiological plausibility of conflicting psychological models of response inhibition: A forward inference fMRI study.

    PubMed

    Criaud, Marion; Longcamp, Marieke; Anton, Jean-Luc; Nazarian, Bruno; Roth, Muriel; Sescousse, Guillaume; Strafella, Antonio P; Ballanger, Bénédicte; Boulinguez, Philippe

    2017-08-30

    The neural mechanisms underlying response inhibition and related disorders are unclear and controversial for several reasons. First, it is a major challenge to assess the psychological bases of behaviour, and ultimately brain-behaviour relationships, of a function which is precisely intended to suppress overt measurable behaviours. Second, response inhibition is difficult to disentangle from other parallel processes involved in more general aspects of cognitive control. Consequently, different psychological and anatomo-functional models coexist, which often appear in conflict with each other even though they are not necessarily mutually exclusive. The standard model of response inhibition in go/no-go tasks assumes that inhibitory processes are reactively and selectively triggered by the stimulus that participants must refrain from reacting to. Recent alternative models suggest that action restraint could instead rely on reactive but non-selective mechanisms (all automatic responses are automatically inhibited in uncertain contexts) or on proactive and non-selective mechanisms (a gating function by which reaction to any stimulus is prevented in anticipation of stimulation when the situation is unpredictable). Here, we assessed the physiological plausibility of these different models by testing their respective predictions regarding event-related BOLD modulations (forward inference using fMRI). We set up a single fMRI design which allowed us to record simultaneously the different possible forms of inhibition while limiting confounds between response inhibition and parallel cognitive processes. We found BOLD dynamics consistent with non-selective models. These results provide new theoretical and methodological lines of inquiry for the study of basic functions involved in behavioural control and related disorders. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Parallelization of elliptic solver for solving 1D Boussinesq model

    NASA Astrophysics Data System (ADS)

    Tarwidi, D.; Adytia, D.

    2018-03-01

    In this paper, a parallel implementation of an elliptic solver for the 1D Boussinesq model is presented. The numerical solution of the Boussinesq model is obtained by applying a staggered-grid scheme to the continuity, momentum, and elliptic equations of the model. The tridiagonal system arising from the numerical scheme for the elliptic equation is solved by the cyclic reduction algorithm. The parallel implementation of cyclic reduction is executed on multicore processors with shared-memory architecture using OpenMP. To measure the performance of the parallel program, the number of grid points is varied from 2^8 to 2^14. Two test cases, the propagation of a solitary wave and of a standing wave, are used to evaluate the parallel program. The numerical results are verified against the analytical solutions for solitary and standing waves. The best speedups for the solitary and standing wave test cases are about 2.07 with 2^14 grid points and 1.86 with 2^13 grid points, respectively, both obtained using 8 threads. Moreover, the best efficiencies of the parallel program are 76.2% and 73.5% for the solitary and standing wave test cases, respectively.
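    The cyclic reduction algorithm mentioned above can be sketched serially; at each reduction level the loop over equations is fully independent, which is exactly the loop a shared-memory (OpenMP) implementation parallelizes. The following is a minimal illustrative sketch, not the authors' code; it assumes a system size of 2^k - 1 with zero off-diagonal coefficients at the boundaries.

    ```python
    def cyclic_reduction(a, b, c, d):
        """Solve the tridiagonal system a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i]
        by cyclic reduction. Assumes len(b) == 2**k - 1 and a[0] == c[-1] == 0."""
        n = len(b)
        a, b, c, d = list(a), list(b), list(c), list(d)
        # Forward reduction: eliminate neighbours level by level.
        step = 1
        while step < n:
            # Each iteration of this inner loop is independent of the others;
            # this is the loop that maps onto OpenMP threads.
            for i in range(2 * step - 1, n, 2 * step):
                lo, hi = i - step, i + step
                alpha = a[i] / b[lo]
                beta = c[i] / b[hi]
                b[i] -= alpha * c[lo] + beta * a[hi]
                d[i] -= alpha * d[lo] + beta * d[hi]
                a[i] = -alpha * a[lo]
                c[i] = -beta * c[hi]
            step *= 2
        # Back substitution, coarsest level first.
        x = [0.0] * n
        step = (n + 1) // 2
        while step >= 1:
            for i in range(step - 1, n, 2 * step):
                left = x[i - step] if i - step >= 0 else 0.0
                right = x[i + step] if i + step < n else 0.0
                x[i] = (d[i] - a[i] * left - c[i] * right) / b[i]
            step //= 2
        return x
    ```

    Because each reduction level halves the number of active equations, the available parallelism shrinks toward the top of the tree, which helps explain the modest speedups reported for small grids.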

  7. Incremental Parallelization of Non-Data-Parallel Programs Using the Charon Message-Passing Library

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.

    2000-01-01

    Message passing is among the most popular techniques for parallelizing scientific programs on distributed-memory architectures. The reasons for its success are wide availability (MPI), efficiency, and full tuning control provided to the programmer. A major drawback, however, is that incremental parallelization, as offered by compiler directives, is not generally possible, because all data structures have to be changed throughout the program simultaneously. Charon remedies this situation through mappings between distributed and non-distributed data. It allows breaking up the parallelization into small steps, guaranteeing correctness at every stage. Several tools are available to help convert legacy codes into high-performance message-passing programs. They usually target data-parallel applications, whose loops carrying most of the work can be distributed among all processors without much dependency analysis. Others do a full dependency analysis and then convert the code virtually automatically. Even more toolkits are available that aid construction from scratch of message passing programs. None, however, allows piecemeal translation of codes with complex data dependencies (i.e. non-data-parallel programs) into message passing codes. The Charon library (available in both C and Fortran) provides incremental parallelization capabilities by linking legacy code arrays with distributed arrays. During the conversion process, non-distributed and distributed arrays exist side by side, and simple mapping functions allow the programmer to switch between the two in any location in the program. Charon also provides wrapper functions that leave the structure of the legacy code intact, but that allow execution on truly distributed data. 
Finally, the library provides a rich set of communication functions that support virtually all patterns of remote data demands in realistic structured grid scientific programs, including transposition, nearest-neighbor communication, pipelining, gather/scatter, and redistribution. At the end of the conversion process most intermediate Charon function calls will have been removed, the non-distributed arrays will have been deleted, and virtually the only remaining Charon function calls are the high-level, highly optimized communications. Distribution of the data is under complete control of the programmer, although a wide range of useful distributions is easily available through predefined functions. A crucial aspect of the library is that it does not allocate space for distributed arrays, but accepts programmer-specified memory. This has two major consequences. First, codes parallelized using Charon do not suffer from encapsulation; user data is always directly accessible. This provides high efficiency, and also retains the possibility of using message passing directly for highly irregular communications. Second, non-distributed arrays can be interpreted as (trivial) distributions in the Charon sense, which allows them to be mapped to truly distributed arrays, and vice versa. This is the mechanism that enables incremental parallelization. In this paper we provide a brief introduction of the library and then focus on the actual steps in the parallelization process, using some representative examples from, among others, the NAS Parallel Benchmarks. We show how a complicated two-dimensional pipeline, the prototypical non-data-parallel algorithm, can be constructed with ease. To demonstrate the flexibility of the library, we give examples of the stepwise, efficient parallel implementation of nonlocal boundary conditions common in aircraft simulations, as well as the construction of the sequence of grids required for multigrid.
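    The mapping between non-distributed and distributed arrays that underlies this incremental approach can be illustrated with a generic block distribution. This is a conceptual sketch, not Charon's actual API; the function name and signature are hypothetical.

    ```python
    def owner_and_local(gi, n, p):
        """Map global index gi of an array of length n, block-distributed over
        p ranks, to (owning rank, local index). The first n % p ranks each own
        one extra element, the usual convention when p does not divide n."""
        base, rem = divmod(n, p)
        if gi < rem * (base + 1):
            # gi falls in one of the "large" blocks of size base + 1.
            return gi // (base + 1), gi % (base + 1)
        gi2 = gi - rem * (base + 1)
        return rem + gi2 // base, gi2 % base
    ```

    With such a mapping, a legacy loop over global indices and its distributed counterpart over local indices can coexist during conversion, switching between the two views one code section at a time.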

  8. Real-time trajectory optimization on parallel processors

    NASA Technical Reports Server (NTRS)

    Psiaki, Mark L.

    1993-01-01

    A parallel algorithm has been developed for rapidly solving trajectory optimization problems. The goal of the work has been to develop an algorithm suitable for real-time, on-line optimal guidance through repeated solution of a trajectory optimization problem. The algorithm has been developed on an INTEL iPSC/860 message-passing parallel processor. It uses a zero-order-hold discretization of a continuous-time problem and solves the resulting nonlinear programming problem using a custom-designed augmented Lagrangian nonlinear programming algorithm. The algorithm achieves parallelism of function, derivative, and search direction calculations through the principle of domain decomposition applied along the time axis. It has been encoded and tested on three example problems: the Goddard problem, the acceleration-limited planar minimum-time-to-the-origin problem, and a National Aerospace Plane minimum-fuel ascent guidance problem. Wall-clock execution times as fast as 118 sec have been achieved for a 128-stage Goddard problem solved on 32 processors. A 32-stage minimum-time problem has been solved in 151 sec on 32 processors. A 32-stage National Aerospace Plane problem required 2 hours when solved on 32 processors. A speed-up factor of 7.2 has been achieved by using 32 nodes instead of 1 to solve a 64-stage Goddard problem.
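    A zero-order-hold discretization replaces the continuous control with a value held constant over each stage, turning the trajectory into a finite set of stage variables linked by dynamics constraints. For a double integrator the hold is exact, which makes the idea easy to illustrate (this is an illustrative sketch, not the paper's iPSC/860 code):

    ```python
    def zoh_rollout(x0, v0, controls, dt):
        """Propagate a double integrator (position x, velocity v) under
        zero-order-hold accelerations: u is held constant over each stage
        [k*dt, (k+1)*dt], so the update below is exact for this system."""
        xs, vs = [x0], [v0]
        for u in controls:
            x, v = xs[-1], vs[-1]
            xs.append(x + v * dt + 0.5 * u * dt * dt)
            vs.append(v + u * dt)
        return xs, vs
    ```

    In the NLP formulation, the per-stage states and controls become decision variables and each update above becomes an equality constraint, which is what allows function and derivative evaluations to be decomposed stage-by-stage across processors.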

  9. New Parallel computing framework for radiation transport codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kostin, M.A. (Michigan State U., NSCL); Mokhov, N.V.

    A new parallel computing framework has been developed for use with general-purpose radiation transport codes. The framework was implemented as a C++ module that uses MPI for message passing. The module is largely independent of the radiation transport codes it is used with, and is connected to the codes by means of a number of interface functions. The framework was integrated with the MARS15 code, and an effort is under way to deploy it in PHITS. Besides the parallel computing functionality, the framework offers a checkpoint facility that allows restarting calculations from a saved checkpoint file. The checkpoint facility can be used in single-process calculations as well as in the parallel regime. Several checkpoint files can be merged into one, thus combining the results of several calculations. The framework also corrects some of the known problems with scheduling and load balancing found in the original implementations of the parallel computing functionality in MARS15 and PHITS. The framework can be used efficiently on homogeneous systems and on networks of workstations, where interference from other users is possible.
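    Merging checkpoint files from independent runs amounts to pooling the per-run tallies weighted by the number of histories each run simulated. The abstract does not describe the framework's checkpoint format, so the following is only a hedged sketch of the statistics involved:

    ```python
    def merge_tallies(tallies):
        """Pool Monte Carlo tallies from independent runs.
        `tallies` is a list of (n_histories, mean_score) pairs; the pooled
        mean weights each run by its history count."""
        total_n = sum(n for n, _ in tallies)
        pooled = sum(n * m for n, m in tallies) / total_n
        return total_n, pooled
    ```

    The same weighting logic applies when a restarted calculation appends new histories to a saved checkpoint.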

  10. Physiological and Functional Alterations after Spaceflight and Bed Rest.

    PubMed

    Mulavara, Ajitkumar P; Peters, Brian T; Miller, Chris A; Kofman, Igor S; Reschke, Millard F; Taylor, Laura C; Lawrence, Emily L; Wood, Scott J; Laurie, Steven S; Lee, Stuart M C; Buxton, Roxanne E; May-Phillips, Tiffany R; Stenger, Michael B; Ploutz-Snyder, Lori L; Ryder, Jeffrey W; Feiveson, Alan H; Bloomberg, Jacob J

    2018-04-03

    Exposure to microgravity causes alterations in multiple physiological systems, potentially impacting the ability of astronauts to perform critical mission tasks. The goal of this study was to determine the effects of spaceflight on functional task performance and to identify the key physiological factors contributing to deficits in that performance. A test battery comprising 7 functional tests and 15 physiological measures was used to investigate the sensorimotor, cardiovascular, and neuromuscular adaptations to spaceflight. Astronauts were tested before and after 6-month spaceflights. Subjects were also tested before and after 70 days of 6° head-down bed rest, a spaceflight analog, to examine the role of axial body unloading in the spaceflight results. These subjects included Control and Exercise groups to examine the effects of exercise during bed rest. Spaceflight subjects showed the greatest decrement in performance during functional tasks that placed the greatest demand on dynamic control of postural equilibrium, which was paralleled by similar decrements in sensorimotor tests that assessed postural and dynamic gait control. Other changes included reduced lower-limb muscle performance and increased heart rate to maintain blood pressure. Exercise performed during bed rest prevented detrimental change in neuromuscular and cardiovascular function; however, both bed rest groups experienced functional and balance deficits similar to those of spaceflight subjects. The bed rest data indicate that the body support unloading experienced during spaceflight contributes to postflight postural control dysfunction. Further, the bed rest results in the Exercise group confirm that resistance and aerobic exercises performed during spaceflight can play an integral role in maintaining neuromuscular and cardiovascular function, which can help in reducing decrements in functional performance.
These results indicate that a countermeasure to mitigate postflight postural control dysfunction is required to maintain functional performance.

  11. DC currents collected by a RF biased electrode quasi-parallel to the magnetic field

    NASA Astrophysics Data System (ADS)

    Faudot, E.; Devaux, S.; Moritz, J.; Bobkov, V.; Heuraux, S.

    2017-10-01

    Local plasma biasing due to RF sheaths close to ICRF antennas results mainly in a negative DC current collection on the antenna structure. In some specific cases, positive currents may be observed when the ion mobility (seen from the collecting surface) exceeds the electron mobility and/or when the collecting surface on the antenna side becomes larger than the other end of the flux tube connected to the wall. The typical configuration is one in which the antenna surface is almost parallel to the magnetic field lines and the other side is perpendicular to them. To test the optimal case, where the magnetic field is quasi-parallel to the electrode surface, one needs a linear magnetic configuration such as our magnetized RF discharge experiment, Aline. In our case the magnetic field angle is lower than 1° relative to the RF biased surface. The DC current flowing through the discharge has been measured as a function of the magnetic field strength, neutral gas (He) pressure, and RF power. The main result is the reversal of the DC current depending on the magnetic field, collision frequency, and RF power level.

  12. A PC parallel port button box provides millisecond response time accuracy under Linux.

    PubMed

    Stewart, Neil

    2006-02-01

    For psychologists, it is sometimes necessary to measure people's reaction times to the nearest millisecond. This article describes how to use the PC parallel port to receive signals from a button box to achieve millisecond response time accuracy. The workings of the parallel port, the corresponding port addresses, and a simple Linux program for controlling the port are described. A test of the speed and reliability of button box signal detection is reported. If the reader is moderately familiar with Linux, this article should provide sufficient instruction for him or her to build and test his or her own parallel port button box. This article also describes how the parallel port could be used to control an external apparatus.
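    On a standard SPP parallel port the five input lines appear in bits 3-7 of the status register at base+1 (0x379 for a port at 0x378), with the BUSY line inverted in hardware. The bit-to-pin mapping below is the conventional SPP layout and should be checked against the hardware at hand; reading the byte itself requires I/O privileges (e.g. ioperm()/inb() in C, or /dev/port under Linux). A sketch of the decoding step:

    ```python
    STATUS_OFFSET = 1  # status register lives at base address + 1 (e.g. 0x379)

    def decode_status(byte):
        """Return the five parallel-port input lines from a status-register
        byte. Bits 3..7 conventionally carry nERROR (pin 15), SELECT (pin 13),
        PE (pin 12), nACK (pin 10), and BUSY (pin 11); BUSY is inverted in
        hardware, so we flip it back here."""
        bits = [(byte >> b) & 1 for b in range(3, 8)]
        bits[4] ^= 1  # undo the hardware inversion of BUSY (bit 7)
        return bits
    ```

    Polling this register in a tight loop is what makes sub-millisecond detection of a button press possible.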

  13. Pomegranate supplementation improves cognitive and functional recovery following ischemic stroke: A randomized trial.

    PubMed

    Bellone, John A; Murray, Jeffrey R; Jorge, Paolo; Fogel, Travis G; Kim, Mary; Wallace, Desiree R; Hartman, Richard E

    2018-02-13

    We tested whether supplementing with pomegranate polyphenols can enhance cognitive/functional recovery after stroke. In this parallel, block-randomized clinical trial, we administered commercially-available pomegranate polyphenol or placebo pills twice per day for one week to adult inpatients in a comprehensive rehabilitation setting starting approximately 2 weeks after stroke. Pills contained 1 g of polyphenols derived from whole pomegranate, equivalent to levels in approximately 8 oz of juice. Placebo pills were similar to the pomegranate pills except that they contained only lactose. Of the 163 patients that were screened, 22 were eligible and 16 were randomized (8 per group). We excluded one subject per group from the neuropsychological analyses since they were lost to follow-up, but we included all subjects in the analysis of functional data since outcome data were available. Clinicians and subjects were blinded to group assignment. Neuropsychological testing (primary outcome: Repeatable Battery for the Assessment of Neuropsychological Status) and functional independence scores were used to determine changes in cognitive and functional ability. Pomegranate-treated subjects demonstrated more neuropsychological and functional improvement and spent less time in the hospital than placebo controls. Pomegranate polyphenols enhanced cognitive and functional recovery after stroke, justifying pursuing larger clinical trials.

  14. Cortical effect and functional recovery by the electromyography-triggered neuromuscular stimulation in chronic stroke patients.

    PubMed

    Shin, Hwa Kyung; Cho, Sang Hyun; Jeon, Hye-seon; Lee, Young-Hee; Song, Jun Chan; Jang, Sung Ho; Lee, Chu-Hee; Kwon, Yong Hyun

    2008-09-19

    We investigated the effect of electromyography (EMG)-triggered neuromuscular electrical stimulation (NMES; EMG-stim) on functional recovery of the hemiparetic hand and the related cortical activation pattern in chronic stroke patients. We enrolled 14 stroke patients, who were randomly assigned to the EMG-stim (n=7) or the control groups (n=7). The EMG-stim was applied to the wrist extensor of the EMG-stim group for two sessions (30 min/session) a day, five times per week for 10 weeks. Four functional tests (box and block, strength, the accuracy index, and the on/offset time of muscle contraction) and functional MRI (fMRI) were performed before and after treatment. fMRI was measured at 1.5 T in parallel with timed finger flexion-extension movements at a fixed rate. Following treatment, the EMG-stim group showed a significant improvement in all functional tests. The main cortical activation change with such functional improvement was shifted from the ipsilateral sensorimotor cortex (SMC) to the contralateral SMC. We demonstrated that 10-week EMG-stim can induce functional recovery and change of cortical activation pattern in the hemiparetic hand of chronic stroke patients.

  15. Perpendicular dynamics of runaway electrons in tokamak plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fernandez-Gomez, I.; Martin-Solis, J. R.; Sanchez, R.

    2012-10-15

    In this paper, it will be shown that the runaway phenomenon in tokamak plasmas cannot be reduced to a one-dimensional problem based on the competition between electric field acceleration and collisional friction losses in the parallel direction. A Langevin approach, including collisional diffusion in velocity space, will be used to analyze the two-dimensional runaway electron dynamics. An investigation of the runaway probability in velocity space will yield a criterion for runaway, which will be shown to be consistent with the results provided by the simpler test-particle description of the runaway dynamics [Fuchs et al., Phys. Fluids 29, 2931 (1986)]. Electron perpendicular collisional scattering will be found to play an important role, relaxing the conditions for runaway. Moreover, electron pitch angle scattering perpendicularly broadens the runaway distribution function, increasing the electron population in the runaway plateau region compared with what would be expected from electron acceleration in the parallel direction only. The perpendicular broadening of the runaway distribution function, its dependence on the plasma parameters, and the resulting enhancement of the runaway production rate will be discussed.

  16. Energy distribution functions of kilovolt ions parallel and perpendicular to the magnetic field of a modified Penning discharge

    NASA Technical Reports Server (NTRS)

    Roth, R. J.

    1973-01-01

    The distribution function of ion energy parallel to the magnetic field of a modified Penning discharge has been measured with a retarding potential energy analyzer. These ions escaped through one of the throats of the magnetic mirror geometry. Simultaneous measurements of the ion energy distribution function perpendicular to the magnetic field have been made with a charge exchange neutral detector. The ion energy distribution functions are approximately Maxwellian, and the parallel and perpendicular kinetic temperatures are equal within experimental error. These results suggest that turbulent processes previously observed in this discharge Maxwellianize the velocity distribution along a radius in velocity space and cause an isotropic energy distribution. When the distributions depart from Maxwellian, they are enhanced above the Maxwellian tail.

  17. A Re-programmable Platform for Dynamic Burn-in Test of Xilinx Virtex-II 3000 FPGA for Military and Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Roosta, Ramin; Wang, Xinchen; Sadigursky, Michael; Tracton, Phil

    2004-01-01

    Field Programmable Gate Arrays (FPGAs) have played increasingly important roles in military and aerospace applications. Xilinx SRAM-based FPGAs have been extensively used in commercial applications, but less frequently in space flight applications due to their susceptibility to single-event upsets (SEUs). The reliability of these devices in space applications is a concern that has not been addressed. The objective of this project is to design a fully programmable hardware/software platform that allows (but is not limited to) comprehensive static/dynamic burn-in testing of Virtex-II 3000 FPGAs, at-speed testing, and SEU testing. Conventional methods test very few discrete AC parameters (primarily switching) of a given integrated circuit. This approach will test any possible configuration of the FPGA and any associated performance parameters. It allows complete or partial re-programming of the FPGA and verification of the program by using readback followed by dynamic testing. Designers have full control over which functional elements of the FPGA to stress, and can simulate all possible types of configurations/functions. Another benefit of this platform is that it allows collecting information on the elevation of the junction temperature as a function of gate utilization, operating frequency, and functionality. A software tool has been implemented to demonstrate the various features of the system. The software consists of three major parts: the parallel interface driver, the main system procedure, and a graphical user interface (GUI).
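    Readback verification of the kind described reduces to comparing the configuration data read back from the device against the golden bitstream under a mask that excludes dynamic bits such as user flip-flop state (Xilinx tool flows generate such mask files for this purpose). A minimal, tool-agnostic sketch of the comparison:

    ```python
    def verify_readback(golden, readback, mask):
        """Compare a configuration readback against the golden bitstream,
        byte by byte. A bit participates in the comparison only where the
        corresponding mask bit is 1; returns indices of mismatching bytes."""
        return [i for i, (g, r, m) in enumerate(zip(golden, readback, mask))
                if (g ^ r) & m]
    ```

    An empty result means the configuration is intact; non-empty results after irradiation or burn-in point at upset configuration frames.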

  18. Mixed-Meal Tolerance Test Versus Glucagon Stimulation Test for the Assessment of β-Cell Function in Therapeutic Trials in Type 1 Diabetes

    PubMed Central

    Greenbaum, Carla J.; Mandrup-Poulsen, Thomas; McGee, Paula Friedenberg; Battelino, Tadej; Haastert, Burkhard; Ludvigsson, Johnny; Pozzilli, Paolo; Lachin, John M.; Kolb, Hubert

    2008-01-01

    OBJECTIVE—β-Cell function in type 1 diabetes clinical trials is commonly measured by C-peptide response to a secretagogue in either a mixed-meal tolerance test (MMTT) or a glucagon stimulation test (GST). The Type 1 Diabetes TrialNet Research Group and the European C-peptide Trial (ECPT) Study Group conducted parallel randomized studies to compare the sensitivity, reproducibility, and tolerability of these procedures. RESEARCH DESIGN AND METHODS—In randomized sequences, 148 TrialNet subjects completed 549 tests with up to 2 MMTT and 2 GST tests on separate days, and 118 ECPT subjects completed 348 tests (up to 3 each) with either two MMTTs or two GSTs. RESULTS—Among individuals with up to 4 years’ duration of type 1 diabetes, >85% had measurable stimulated C-peptide values. The MMTT stimulus produced significantly higher concentrations of C-peptide than the GST. Whereas both tests were highly reproducible, the MMTT was significantly more so (R^2 = 0.96 for peak C-peptide response). Overall, the majority of subjects preferred the MMTT, and there were few adverse events. Some older subjects preferred the shorter duration of the GST. Nausea was reported in the majority of GST studies, particularly in the young age-group. CONCLUSIONS—The MMTT is preferred for the assessment of β-cell function in therapeutic trials in type 1 diabetes. PMID:18628574

  19. Mixed-meal tolerance test versus glucagon stimulation test for the assessment of beta-cell function in therapeutic trials in type 1 diabetes.

    PubMed

    Greenbaum, Carla J; Mandrup-Poulsen, Thomas; McGee, Paula Friedenberg; Battelino, Tadej; Haastert, Burkhard; Ludvigsson, Johnny; Pozzilli, Paolo; Lachin, John M; Kolb, Hubert

    2008-10-01

    Beta-cell function in type 1 diabetes clinical trials is commonly measured by C-peptide response to a secretagogue in either a mixed-meal tolerance test (MMTT) or a glucagon stimulation test (GST). The Type 1 Diabetes TrialNet Research Group and the European C-peptide Trial (ECPT) Study Group conducted parallel randomized studies to compare the sensitivity, reproducibility, and tolerability of these procedures. In randomized sequences, 148 TrialNet subjects completed 549 tests with up to 2 MMTT and 2 GST tests on separate days, and 118 ECPT subjects completed 348 tests (up to 3 each) with either two MMTTs or two GSTs. Among individuals with up to 4 years' duration of type 1 diabetes, >85% had measurable stimulated C-peptide values. The MMTT stimulus produced significantly higher concentrations of C-peptide than the GST. Whereas both tests were highly reproducible, the MMTT was significantly more so (R^2 = 0.96 for peak C-peptide response). Overall, the majority of subjects preferred the MMTT, and there were few adverse events. Some older subjects preferred the shorter duration of the GST. Nausea was reported in the majority of GST studies, particularly in the young age-group. The MMTT is preferred for the assessment of beta-cell function in therapeutic trials in type 1 diabetes.
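    The reproducibility figure quoted above (R^2 of 0.96 for peak C-peptide) can be read as the squared Pearson correlation between paired repeat tests. The exact statistic used in the trials is an assumption here; a sketch of that computation:

    ```python
    def r_squared(x, y):
        """Squared Pearson correlation between paired test-retest values,
        one common way to report reproducibility of a repeated measurement."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sxx = sum((a - mx) ** 2 for a in x)
        syy = sum((b - my) ** 2 for b in y)
        return sxy * sxy / (sxx * syy)
    ```

    Values near 1 mean the second administration of the test essentially reproduces the first, which is what made the MMTT preferable here.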

  20. Design of an x-ray telescope optics for XEUS

    NASA Astrophysics Data System (ADS)

    Graue, Roland; Kampf, Dirk; Wallace, Kotska; Lumb, David; Bavdaz, Marcos; Freyberg, Michael

    2017-11-01

    The X-ray telescope concept for XEUS is based on an innovative high-performance, lightweight Silicon Pore Optics technology. The XEUS telescope is segmented into 16 radial, thermostable petals providing the rigid optical bench structure of the stand-alone X-Ray High Precision Tandem Optics. A fully representative Form Fit Function (FFF) Model of one petal is currently under development to demonstrate the outstanding lightweight telescope capabilities with high optically effective area. Starting from the envisaged system performance, the related tolerance budgets were derived. The petals are made from a ceramic, CeSiC. The structural and thermal performance of the petal is reported, and the stepwise alignment and integration procedure at petal level is described. The functional performance and environmental test verification plan of the Form Fit Function Model and the test set-ups are described in this paper. In parallel with the running development activities, the programmatic and technical issues with respect to the FM telescope MAIT, currently comprising 1488 Tandem Optics, are under investigation. Remote-controlled, robot-supported assembly, simultaneous active alignment and verification testing, and decentralised time-effective integration procedures are illustrated.

  1. Influence of continuous positive airway pressure on outcomes of rehabilitation in stroke patients with obstructive sleep apnea.

    PubMed

    Ryan, Clodagh M; Bayley, Mark; Green, Robin; Murray, Brian J; Bradley, T Douglas

    2011-04-01

    In stroke patients, obstructive sleep apnea (OSA) is associated with poorer functional outcomes than in those without OSA. We hypothesized that treatment of OSA by continuous positive airway pressure (CPAP) in stroke patients would enhance motor, functional, and neurocognitive recovery. This was a randomized, open label, parallel group trial with blind assessment of outcomes performed in stroke patients with OSA in a stroke rehabilitation unit. Patients were assigned to standard rehabilitation alone (control group) or to CPAP (CPAP group). The primary outcomes were the Canadian Neurological scale, the 6-minute walk test distance, sustained attention response test, and the digit or spatial span-backward. Secondary outcomes included Epworth Sleepiness scale, Stanford Sleepiness scale, Functional Independence measure, Chedoke McMaster Stroke assessment, neurocognitive function, and Beck depression inventory. Tests were performed at baseline and 1 month later. Patients assigned to CPAP (n=22) experienced no adverse events. Regarding primary outcomes, compared to the control group (n=22), the CPAP group experienced improvement in stroke-related impairment (Canadian Neurological scale score, P<0.001) but not in 6-minute walk test distance, sustained attention response test, or digit or spatial span-backward. Regarding secondary outcomes, the CPAP group experienced improvements in the Epworth Sleepiness scale (P<0.001), motor component of the Functional Independence measure (P=0.05), Chedoke-McMaster Stroke assessment of upper and lower limb motor recovery test of the leg (P=0.001), and the affective component of depression (P=0.006), but not neurocognitive function. Treatment of OSA by CPAP in stroke patients undergoing rehabilitation improved functional and motor, but not neurocognitive outcomes. URL: http://www.clinicaltrials.gov. Unique identifier: NCT00221065.

  2. GPU accelerated dynamic functional connectivity analysis for functional MRI data.

    PubMed

    Akgün, Devrim; Sakoğlu, Ünal; Esquivel, Johnny; Adinoff, Bryon; Mete, Mutlu

    2015-07-01

    Recent advances in multi-core processors and graphics card based computational technologies have paved the way for an improved and dynamic utilization of parallel computing techniques. Numerous applications have been implemented for the acceleration of computationally-intensive problems in various computational science fields including bioinformatics, in which big data problems are prevalent. In neuroimaging, dynamic functional connectivity (DFC) analysis is a computationally demanding method used to investigate dynamic functional interactions among different brain regions or networks identified with functional magnetic resonance imaging (fMRI) data. In this study, we implemented and analyzed a parallel DFC algorithm based on thread-based and block-based approaches. The thread-based approach was designed to parallelize DFC computations and was implemented in both Open Multi-Processing (OpenMP) and Compute Unified Device Architecture (CUDA) programming platforms. Another approach developed in this study to better utilize CUDA architecture is the block-based approach, where parallelization involves smaller parts of fMRI time-courses obtained by sliding-windows. Experimental results showed that the proposed parallel design solutions enabled by the GPUs significantly reduce the computation time for DFC analysis. Multicore implementation using OpenMP on an 8-core processor provides up to 7.7× speed-up. GPU implementation using CUDA yielded substantial accelerations ranging from 18.5× to 157× speed-up once thread-based and block-based approaches were combined in the analysis. The proposed parallel programming solutions showed that multi-core processor and CUDA-supported GPU implementations accelerated the DFC analyses significantly. The developed algorithms make DFC analysis more practical for multi-subject studies. Copyright © 2015 Elsevier Ltd. All rights reserved.
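    The sliding-window computation at the heart of DFC is embarrassingly parallel: every window (and every region pair) is independent, which is what maps naturally onto OpenMP threads and CUDA blocks. A serial sketch of the windowed Pearson correlation (illustrative, not the authors' implementation):

    ```python
    def sliding_window_corr(ts1, ts2, window):
        """Dynamic functional connectivity between two time-courses:
        Pearson correlation computed inside each sliding window.
        Each loop iteration is independent and can run on its own thread."""
        out = []
        for start in range(len(ts1) - window + 1):
            x = ts1[start:start + window]
            y = ts2[start:start + window]
            mx, my = sum(x) / window, sum(y) / window
            sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
            sxx = sum((a - mx) ** 2 for a in x)
            syy = sum((b - my) ** 2 for b in y)
            out.append(sxy / (sxx * syy) ** 0.5)
        return out
    ```

    A full analysis repeats this over all region pairs and all subjects, which is why the reported GPU speed-ups matter in practice.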

  3. Method, systems, and computer program products for implementing function-parallel network firewall

    DOEpatents

    Fulp, Errin W [Winston-Salem, NC; Farley, Ryan J [Winston-Salem, NC

    2011-10-11

    Methods, systems, and computer program products for providing function-parallel firewalls are disclosed. According to one aspect, a function-parallel firewall includes a first firewall node for filtering received packets using a first portion of a rule set including a plurality of rules. The first portion includes less than all of the rules in the rule set. At least one second firewall node filters packets using a second portion of the rule set. The second portion includes at least one rule in the rule set that is not present in the first portion. The first and second portions together include all of the rules in the rule set.
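    The scheme can be sketched conceptually: rules keep their original priority index, each node scans only its own slice of the rule set, and a gate keeps the lowest-index hit, so the first-match semantics of the original ordered rule set are preserved. This is a conceptual model of the idea, not the patented implementation:

    ```python
    def partition_rules(rules, n_nodes):
        """Function-parallel split: each node receives a distinct slice of
        the indexed rule set; together the slices cover every rule."""
        return [rules[i::n_nodes] for i in range(n_nodes)]

    def match_packet(packet, node_rules):
        """Every node reports (rule_index, action) of its first matching
        rule; keeping the globally lowest index reproduces sequential
        first-match filtering. Rules are (index, predicate, action)."""
        hits = []
        for rules in node_rules:
            for idx, pred, action in rules:
                if pred(packet):
                    hits.append((idx, action))
                    break
        return min(hits)[1] if hits else "deny"
    ```

    Because each node evaluates only a fraction of the rules per packet, throughput scales with the number of nodes while the filtering decision stays identical to a single sequential firewall.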

  4. Parallel implementation of Hartree-Fock and density functional theory analytical second derivatives

    NASA Astrophysics Data System (ADS)

    Baker, Jon; Wolinski, Krzysztof; Malagoli, Massimo; Pulay, Peter

    2004-01-01

    We present an efficient, parallel implementation for the calculation of Hartree-Fock and density functional theory analytical Hessian (force constant, nuclear second derivative) matrices. These are important for the determination of harmonic vibrational frequencies, and to classify stationary points on potential energy surfaces. Our program is designed for modest parallelism (4-16 CPUs) as exemplified by our standard eight-processor QuantumCube™. We can routinely handle systems with up to 100+ atoms and 1000+ basis functions using under 0.5 GB of RAM per CPU. Timings are presented for several systems, ranging in size from aspirin (C9H8O4) to nickel octaethylporphyrin (C36H44N4Ni).
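    The link from a Hessian to harmonic vibrational frequencies is easiest to see in the diatomic special case, where the mass-weighted second derivative gives the wavenumber directly. A hedged illustration (the HCl-like numbers in the usage below are round approximations):

    ```python
    import math

    def harmonic_frequency_cm1(k, mu):
        """Harmonic vibrational wavenumber in cm^-1 from a force constant k
        (N/m, the 1D 'Hessian') and reduced mass mu (kg):
        nu_tilde = sqrt(k / mu) / (2 * pi * c)."""
        c = 2.99792458e10  # speed of light in cm/s
        return math.sqrt(k / mu) / (2.0 * math.pi * c)
    ```

    For a polyatomic system the same step generalizes to diagonalizing the mass-weighted Hessian, whose eigenvalues play the role of k/mu; k = 516 N/m with the HCl reduced mass of about 1.627e-27 kg gives roughly 2990 cm^-1.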

  5. Implementation of Multivariable Logic Functions in Parallel by Electrically Addressing a Molecule of Three Dopants in Silicon.

    PubMed

    Fresch, Barbara; Bocquel, Juanita; Hiluf, Dawit; Rogge, Sven; Levine, Raphael D; Remacle, Françoise

    2017-07-05

    To realize low-power, compact logic circuits, one can explore parallel operation on single nanoscale devices. An added incentive is to use multivalued (as distinct from Boolean) logic. Here, we theoretically demonstrate that the computation of all the possible outputs of a multivariate, multivalued logic function can be implemented in parallel by electrical addressing of a molecule made up of three interacting dopant atoms embedded in Si. The electronic states of the dopant molecule are addressed by pulsing a gate voltage. By simulating the time evolution of the non-stationary electronic density built by the gate voltage, we show that one can implement a molecular decision tree that provides in parallel all the outputs for all the inputs of the multivariate, multivalued logic function. The outputs are encoded in the populations and in the bond orders of the dopant molecule, which can be measured using an STM tip. We show that the implementation of the molecular logic tree is equivalent to a spectral function decomposition. The function that is evaluated can be field-programmed by changing the time profile of the pulsed gate voltage. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Distributed computing methodology for training neural networks in an image-guided diagnostic application.

    PubMed

    Plagianakos, V P; Magoulas, G D; Vrahatis, M N

    2006-03-01

    Distributed computing is a process through which a set of computers connected by a network is used collectively to solve a single problem. In this paper, we propose a distributed computing methodology for training neural networks for the detection of lesions in colonoscopy. Our approach is based on partitioning the training set across multiple processors using a parallel virtual machine. In this way, interconnected computers of varied architectures can be used for the distributed evaluation of the error function and gradient values, and thus for training neural networks with various learning methods. The proposed methodology has large granularity and low synchronization, and has been implemented and tested. Our results indicate that the parallel virtual machine implementation of the training algorithms developed leads to considerable speedup, especially when large network architectures and training sets are used.
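    The partitioned-training scheme described above can be sketched in a few lines. This is an illustrative sketch only: Python's multiprocessing pool stands in for the paper's parallel virtual machine, and a hypothetical one-parameter quadratic error stands in for a neural network's error function. Each worker evaluates the gradient on its own shard of the training set; the master sums the partial results before applying a single weight update.

    ```python
    from multiprocessing import Pool

    # Toy training set with true slope 2.0; the quadratic error
    # E(w) = sum_i (w*x_i - y_i)^2 is a stand-in for the network error.
    DATA = [(x, 2.0 * x) for x in range(100)]

    def partial_gradient(args):
        """dE/dw evaluated on one worker's shard of the training set."""
        w, shard = args
        return sum(2.0 * (w * x - y) * x for x, y in shard)

    def distributed_gradient_descent(w=0.0, workers=4, lr=1e-6, steps=100):
        # Partition the training set across the workers (round-robin).
        shards = [DATA[i::workers] for i in range(workers)]
        with Pool(workers) as pool:
            for _ in range(steps):
                # Scatter the current weight, gather the partial gradients...
                parts = pool.map(partial_gradient, [(w, s) for s in shards])
                # ...then apply one update with their sum.
                w -= lr * sum(parts)
        return w
    ```

    Because the workers only exchange the current weights and the summed partial gradients, the scheme has the large granularity and low synchronization the abstract highlights.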

  7. Evidence for parallel activation of the pre-supplementary motor area and inferior frontal cortex during response inhibition: a combined MEG and TMS study

    PubMed Central

    Singh, Krish D.; Verbruggen, Frederick

    2018-01-01

    This pre-registered experiment sought to uncover the temporal relationship between the inferior frontal cortex (IFC) and the pre-supplementary motor area (pre-SMA) during stopping of an ongoing action. Both regions have previously been highlighted as being central to cognitive control of actions, particularly response inhibition. Here we tested which area is activated first during the stopping process using magnetoencephalography, before assessing the relative chronometry of each region using functionally localized transcranial magnetic stimulation. Both lines of evidence pointed towards simultaneous activity across both regions, suggesting that parallel, mutually interdependent processing may form the cortical basis of stopping. Additional exploratory analysis, however, provided weak evidence in support of previous suggestions that the pre-SMA may provide an ongoing drive of activity to the IFC. PMID:29515852

  8. Measurement properties of the Spinal Cord Injury-Functional Index (SCI-FI) short forms.

    PubMed

    Heinemann, Allen W; Dijkers, Marcel P; Ni, Pengsheng; Tulsky, David S; Jette, Alan

    2014-07-01

    To evaluate the psychometric properties of the Spinal Cord Injury-Functional Index (SCI-FI) short forms (basic mobility, self-care, fine motor, ambulation, manual wheelchair, and power wheelchair) based on internal consistency; correlations among the short forms, the full item banks, and a 10-item computer adaptive test version; the magnitude of ceiling and floor effects; and test information functions. Cross-sectional cohort study. Six rehabilitation hospitals in the United States. Individuals with traumatic spinal cord injury (N=855) recruited from 6 national Spinal Cord Injury Model Systems facilities. Not applicable. SCI-FI full item bank, 10-item computer adaptive test, and parallel short form scores. The SCI-FI short forms (with separate versions for individuals with paraplegia and tetraplegia) demonstrate very good internal consistency, group-level reliability, excellent correlations between the short forms and scores based on the total item bank, and minimal ceiling and floor effects (except ceiling effects for persons with paraplegia on self-care, fine motor, and power wheelchair ability, and floor effects for persons with tetraplegia on self-care, fine motor, and manual wheelchair ability). The test information functions are acceptable across the range of scores where most persons in the sample performed. Clinicians and researchers should consider the SCI-FI short forms when computer adaptive testing is not feasible. Copyright © 2014 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  9. High-Resolution Functional Mapping of the Venezuelan Equine Encephalitis Virus Genome by Insertional Mutagenesis and Massively Parallel Sequencing

    DTIC Science & Technology

    2010-10-14

    We initially used a capillary electrophoresis method to gain insight into the role of the Venezuelan equine encephalitis virus (VEEV) genome... Smith JM, Schmaljohn CS (2010).

  10. Locating hardware faults in a data communications network of a parallel computer

    DOEpatents

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-01-12

    Locating hardware faults in a data communications network of a parallel computer. Such a parallel computer includes a plurality of compute nodes and a data communications network that couples the compute nodes for data communications and organizes the compute nodes as a tree. Locating hardware faults includes identifying a next compute node as a parent node and the root of a parent test tree, identifying for each child compute node of the parent node a child test tree having that child compute node as its root, running the same test suite on the parent test tree and on each child test tree, and identifying the parent compute node as having a defective link to a child compute node if the test suite fails on the parent test tree but succeeds on all the child test trees.
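    The procedure can be illustrated with a small toy model (an illustrative assumption, not the patented implementation): the network is a dictionary mapping each compute node to its children, at most one parent-to-child link is faulty, and a test suite on a subtree "fails" exactly when the faulty link lies inside that subtree. The search descends until the suite fails on a parent test tree but passes on every child test tree.

    ```python
    def subtree_links(tree, root):
        """All (parent, child) links in the test tree rooted at `root`."""
        links = []
        for child in tree.get(root, []):
            links.append((root, child))
            links.extend(subtree_links(tree, child))
        return links

    def run_test_suite(tree, root, faulty_link):
        """The suite passes iff the faulty link is not inside this test tree."""
        return faulty_link not in subtree_links(tree, root)

    def locate_faulty_parent(tree, root, faulty_link):
        """Descend until the suite fails on the parent test tree but succeeds
        on every child test tree; that parent owns the defective link."""
        node = root
        while not run_test_suite(tree, node, faulty_link):
            failing = [c for c in tree.get(node, [])
                       if not run_test_suite(tree, c, faulty_link)]
            if not failing:
                return node       # defective link leaves `node` directly
            node = failing[0]     # fault lies deeper: descend
        return None               # no fault anywhere in this tree
    ```

    Each level of the tree eliminates every healthy subtree, so the number of test-suite runs grows with the depth of the tree rather than the number of links.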

  11. SODR Memory Control Buffer Control ASIC

    NASA Technical Reports Server (NTRS)

    Hodson, Robert F.

    1994-01-01

    The Spacecraft Optical Disk Recorder (SODR) is a state-of-the-art mass storage system for future NASA missions requiring high transmission rates and a large-capacity storage system. This report covers the design and development of an SODR memory buffer control application-specific integrated circuit (ASIC). The memory buffer control ASIC has two primary functions: (1) buffering data to prevent loss of data during disk access times, and (2) converting data formats from a high performance parallel interface format to a small computer systems interface (SCSI) format. Ten 144-pin, 50 MHz CMOS ASICs were designed, fabricated, and tested to implement the memory buffer control function.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, Benjamin S.

    The Futility package contains the following: 1) definitions of the size of integers and real numbers; 2) a generic unit test harness; 3) definitions for some basic extensions to the Fortran language: arbitrary-length strings, a parameter list construct, exception handlers, a command line processor, timers; 4) geometry definitions: point, line, plane, box, cylinder, polyhedron; 5) file wrapper functions: standard Fortran input/output files, Fortran binary files, HDF5 files; 6) parallel wrapper functions: MPI and OpenMP abstraction layers, partitioning algorithms; 7) math utilities: BLAS, matrix and vector definitions, linear solver methods and wrappers for other TPLs (PETSc, MKL, etc.), preconditioner classes; 8) miscellaneous: a random number generator, water saturation properties, sorting algorithms.

  13. Fine-grained parallelization of fitness functions in bioinformatics optimization problems: gene selection for cancer classification and biclustering of gene expression data.

    PubMed

    Gomez-Pulido, Juan A; Cerrada-Barrios, Jose L; Trinidad-Amado, Sebastian; Lanza-Gutierrez, Jose M; Fernandez-Diaz, Ramon A; Crawford, Broderick; Soto, Ricardo

    2016-08-31

    Metaheuristics are widely used to solve large combinatorial optimization problems in bioinformatics because of the huge set of possible solutions. Two representative problems are gene selection for cancer classification and biclustering of gene expression data. In most cases these metaheuristics, like other non-linear techniques, apply a fitness function to each possible solution in a size-limited population, and that step incurs higher latencies than other parts of the algorithm; the execution time of the application therefore depends mainly on the execution time of the fitness function. In addition, fitness functions are commonly formulated in floating-point arithmetic. A careful parallelization of these functions using reconfigurable hardware technology will therefore accelerate the computation, especially if they are applied in parallel to several solutions of the population. A fine-grained parallelization of two floating-point fitness functions of different complexities, drawn from biclustering of gene expression data and gene selection for cancer classification, yielded higher speedups and lower-power computation than conventional microprocessors. The results show better performance with reconfigurable hardware than with conventional microprocessors, in terms of both computing time and power consumption, not only because the arithmetic operations are parallelized, but also because fitness is evaluated concurrently for several individuals of the metaheuristic's population. This is a good basis for building accelerated, low-energy solutions for intensive computing scenarios.
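    The concurrent per-individual fitness evaluation exploited in hardware can be mimicked in software with a worker pool. A minimal sketch, assuming a toy floating-point fitness (a sum of squares standing in for the biclustering and gene-selection fitness functions discussed above):

    ```python
    from concurrent.futures import ProcessPoolExecutor

    def fitness(individual):
        """Hypothetical floating-point fitness: sum of squares, a stand-in
        for the biclustering / gene-selection fitness functions."""
        return sum(x * x for x in individual)

    def evaluate_population(population, workers=4):
        """Evaluate every individual's fitness concurrently, mirroring the
        per-individual parallelism of the hardware design."""
        with ProcessPoolExecutor(max_workers=workers) as pool:
            return list(pool.map(fitness, population))
    ```

    Because the fitness evaluations are independent, this is an embarrassingly parallel step: the speedup is bounded only by the number of workers and the cost of one evaluation.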

  14. Function algorithms for MPP scientific subroutines, volume 1

    NASA Technical Reports Server (NTRS)

    Gouch, J. G.

    1984-01-01

    Design documentation and user documentation for function algorithms for the Massively Parallel Processor (MPP) are presented. The contract specifies development of MPP assembler instructions to perform the following functions: natural logarithm; exponential (e to the x power); square root; sine; cosine; and arctangent. To fulfill the requirements of the contract, parallel array and scalar implementations of these functions were developed on the PDP-11/34 Program Development and Management Unit (PDMU) resident at the MPP testbed installation at the NASA Goddard facility.

  15. Determining insulation condition of 110kV instrument transformers. Linking PD measurement results from both gas chromatography and electrical method

    NASA Astrophysics Data System (ADS)

    Dan, C.; Morar, R.

    2017-05-01

    Two working methods were used for on-site testing of insulation: gas chromatography (with the TFGA-P200 chromatograph) and electrical measurement of partial discharge levels with the MPD600 digital detection, recording, analysis, and partial discharge acquisition system. First, between 2000 and 2015, chromatographic analyses were performed on the electrical insulating media of 102 current transformers (110kV) and 38 voltage transformers (110kV), all in operation in 110/20kV substations. Then, starting in 2009 and collecting data over a 7-year period until 2015, electrical measurements of partial discharge inside instrument transformers were made on site (in power substations) according to standard EN 61869-1:2007, "Instrument transformers. General requirements", applying the type A partial discharge test procedure assimilated to it and using the rated 110kV distribution grid voltage itself as the test voltage. Given the results of the two parallel measurements, comprising the amount of the gas specific to this type of failure (H2) and the quantitative partial discharge level, a clear dependence was expected between the quantity of partial discharges and the type and amount of gases dissolved in the oil of equipment affected by this type of defect. For the population of instrument transformers subjected to the two parallel measurements, the dependency between QIEC (apparent charge) and H2 (the amount of hydrogen gas dissolved in the insulating medium) forms a finite assemblage situated between two limits developed on an empirical basis.

  16. Post retention and post/core shear bond strength of four post systems.

    PubMed

    Stockton, L W; Williams, P T; Clarke, C T

    2000-01-01

    As clinicians we continue to search for a post system that gives maximum retention while maximizing resistance to root fracture. The introduction of several new post systems, with claims of high retention and high resistance to root fracture, requires that independent studies be performed to evaluate these claims. This study tested the tensile and shear dislodgment forces of four post designs that were luted into roots 10 mm apical of the CEJ. The Para Post Plus (P1) is a parallel-sided, passive design; the Para Post XT (P2) is a combined active/passive design; the Flexi-Post (F1) and the Flexi-Flange (F2) are active post designs. All systems tested were stainless steel. This study compared the test results of the four post designs for tensile and shear dislodgment. All mounted samples were loaded in tension until failure occurred. The tensile load was applied parallel to the long axis of the root, while the shear load was applied at 45° to the long axis of the root. The Flexi-Post (F1) was significantly different from the other three in the tensile test, whereas the Para Post XT (P2) was significantly different from the other three in the shear test and had a better probability of survival in the Kaplan-Meier survival function test. Based on the results of this study, our recommendation is the Para Post XT (P2).

  17. Self-calibrated correlation imaging with k-space variant correlation functions.

    PubMed

    Li, Yu; Edalati, Masoud; Du, Xingfu; Wang, Hui; Cao, Jie J

    2018-03-01

    Correlation imaging is a previously developed high-speed MRI framework that converts parallel imaging reconstruction into the estimation of correlation functions. The present work aims to demonstrate that this framework can provide a speed gain over parallel imaging by estimating k-space variant correlation functions. Because of Fourier encoding with gradients, outer k-space data contain higher spatial-frequency image components arising primarily from tissue boundaries. As a result of tissue-boundary sparsity in the human anatomy, neighboring k-space data correlation varies from the central to the outer k-space. By estimating k-space variant correlation functions with an iterative self-calibration method, correlation imaging can benefit from neighboring k-space data correlation associated with both coil sensitivity encoding and tissue-boundary sparsity, thereby providing a speed gain over parallel imaging, which relies only on coil sensitivity encoding. This new approach is investigated in brain imaging and free-breathing neonatal cardiac imaging. Correlation imaging performs better than existing parallel imaging techniques in simulated brain imaging acceleration experiments. The higher speed enables real-time data acquisition for neonatal cardiac imaging, in which physiological motion is fast and non-periodic. With k-space variant correlation functions, correlation imaging gives a higher speed than parallel imaging and offers the potential to image physiological motion in real time. Magn Reson Med 79:1483-1494, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  18. Comparison of multihardware parallel implementations for a phase unwrapping algorithm

    NASA Astrophysics Data System (ADS)

    Hernandez-Lopez, Francisco Javier; Rivera, Mariano; Salazar-Garibay, Adan; Legarda-Sáenz, Ricardo

    2018-04-01

    Phase unwrapping is an important problem in the areas of optical metrology, synthetic aperture radar (SAR) image analysis, and magnetic resonance imaging (MRI) analysis. These images are becoming larger in size and, in particular, the availability of and need for processing SAR and MRI data have increased significantly with the acquisition of remote sensing data and the popularization of magnetic resonators in clinical diagnosis. It is therefore important to develop faster and more accurate phase unwrapping algorithms. We propose a parallel multigrid version of a phase unwrapping method named accumulation of residual maps, which builds on a serial algorithm that minimizes a cost function by means of a serial Gauss-Seidel-type algorithm. Our algorithm also optimizes the original cost function, but unlike the original work, it is a parallel Jacobi-class algorithm with alternating minimizations. This strategy is known as the chessboard type: red pixels can be updated in parallel in the same iteration since they are independent, and black pixels can likewise be updated in parallel in the alternating iteration. We present parallel implementations of our algorithm for different parallel multicore architectures: CPU-multicore, the Xeon Phi coprocessor, and Nvidia graphics processing units. In all cases we obtain superior performance with our parallel algorithm compared with the original serial version. In addition, we present a detailed performance comparison of the developed parallel versions.
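    The chessboard update strategy can be sketched on a toy problem. Here a simple Laplace smoothing stands in for the accumulation-of-residual-maps cost function (an assumption for illustration only): pixels of one color see only neighbors of the other color, so each half-sweep is fully parallelizable.

    ```python
    def red_black_sweep(grid, color):
        """Update only pixels whose (i + j) parity equals `color`; these
        pixels have no same-color neighbors, so the whole sweep could run
        in parallel."""
        n, m = len(grid), len(grid[0])
        for i in range(1, n - 1):
            for j in range(1, m - 1):
                if (i + j) % 2 == color:
                    grid[i][j] = 0.25 * (grid[i - 1][j] + grid[i + 1][j] +
                                         grid[i][j - 1] + grid[i][j + 1])

    def smooth(grid, iters=100):
        """Alternate the two half-sweeps, chessboard style."""
        for _ in range(iters):
            red_black_sweep(grid, 0)   # "red" pixels
            red_black_sweep(grid, 1)   # "black" pixels
        return grid
    ```

    On a GPU or multicore CPU, the inner double loop of each half-sweep maps directly onto one parallel kernel launch per color, which is the structure the paper exploits.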

  19. Coding for parallel execution of hardware-in-the-loop millimeter-wave scene generation models on multicore SIMD processor architectures

    NASA Astrophysics Data System (ADS)

    Olson, Richard F.

    2013-05-01

    Rendering of point scatterer based radar scenes for millimeter wave (mmW) seeker tests in real-time hardware-in-the-loop (HWIL) scene generation requires efficient algorithms and vector-friendly computer architectures for complex signal synthesis. New processor technology from Intel implements an extended 256-bit vector SIMD instruction set (AVX, AVX2) in a multi-core CPU design providing peak execution rates of hundreds of GigaFLOPS (GFLOPS) on one chip. Real world mmW scene generation code can approach peak SIMD execution rates only after careful algorithm and source code design. An effective software design will maintain high computing intensity, emphasizing register-to-register SIMD arithmetic operations over data movement between CPU caches or off-chip memories. Engineers at the U.S. Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) applied two basic parallel coding methods to assess new 256-bit SIMD multi-core architectures for mmW scene generation in HWIL. These include the use of POSIX threads built on vector library functions and more portable, high-level parallel code based on compiler technology (e.g. OpenMP pragmas and SIMD autovectorization). Since CPU technology is rapidly advancing toward high processor core counts and TeraFLOPS peak SIMD execution rates, it is imperative that coding methods be identified which produce efficient and maintainable parallel code. This paper describes the algorithms used in point scatterer target model rendering, the parallelization of those algorithms, and the execution performance achieved on an AVX multi-core machine using the two basic parallel coding methods. The paper concludes with estimates for scale-up performance on upcoming multi-core technology.

  20. Test and training simulator for ground-based teleoperated in-orbit servicing

    NASA Technical Reports Server (NTRS)

    Schaefer, Bernd E.

    1989-01-01

    For the post-IOC (In-Orbit Construction) phase of COLUMBUS it is intended to use robotic devices for the routine operations of ground-based teleoperated in-orbit servicing. A hardware simulator for verifying the relevant in-orbit operations technologies, the Servicing Test Facility, is necessary. It will mainly support the Flight Control Center for the Manned Space Laboratories in operations-specific tasks such as system simulation, training of teleoperators, parallel simulation run simultaneously with actual in-orbit activities, and verification of the ground operations segment for telerobotics. The present status of the definition of the facility's functional and operational concept is described.

  1. Improvement and scale-up of the NASA Redox storage system

    NASA Technical Reports Server (NTRS)

    Reid, M. A.; Thaller, L. H.

    1980-01-01

    A preprototype full-function 1.0 kW Redox system (2 kW peak) with 11 kW storage capacity has been built and integrated with the NASA/DOE photovoltaic test facility. The system includes four substacks of 39 cells each (1/3 sq ft active area) which are connected hydraulically in parallel and electrically in series. An open circuit voltage cell and a set of rebalance cells are used to continuously monitor the system state of charge and automatically maintain the anode and cathode reactants electrochemically in balance. Technological advances in membrane and electrodes and results of multicell stack tests are reviewed.

  2. Extended Logic Intelligent Processing System for a Sensor Fusion Processor Hardware

    NASA Technical Reports Server (NTRS)

    Stoica, Adrian; Thomas, Tyson; Li, Wei-Te; Daud, Taher; Fabunmi, James

    2000-01-01

    The paper presents the hardware implementation and initial tests of a low-power, high-speed reconfigurable sensor fusion processor. The Extended Logic Intelligent Processing System (ELIPS) is described, which combines rule-based systems, fuzzy logic, and neural networks to achieve parallel fusion of sensor signals in compact, low-power VLSI. The ELIPS concept is being developed to demonstrate interceptor functionality, which particularly underlines the high-speed and low-power requirements. The hardware programmability allows the processor to be reconfigured into different machines, taking the most efficient hardware implementation during each phase of information processing. Processing speeds of microseconds have been demonstrated using our test hardware.

  3. Three-way parallel independent component analysis for imaging genetics using multi-objective optimization.

    PubMed

    Ulloa, Alvaro; Jingyu Liu; Vergara, Victor; Jiayu Chen; Calhoun, Vince; Pattichis, Marios

    2014-01-01

    In the biomedical field, current technology allows for the collection of multiple data modalities from the same subject. In consequence, there is an increasing interest in methods to analyze multi-modal data sets. Methods based on independent component analysis have proven to be effective in jointly analyzing multiple modalities, including brain imaging and genetic data. This paper describes a new algorithm, three-way parallel independent component analysis (3pICA), for jointly identifying genomic loci associated with brain function and structure. The proposed algorithm relies on the use of multi-objective optimization methods to identify correlations among the modalities and maximally independent sources within each modality. We test the robustness of the proposed approach by varying the effect size, cross-modality correlation, noise level, and dimensionality of the data. Simulation results suggest that 3pICA is robust to data with SNR levels from 0 to 10 dB and effect sizes from 0 to 3, while presenting its best performance with high cross-modality correlations and more than one subject per 1,000 variables. In an experimental study with 112 human subjects, the method identified links between a genetic component (pointing to brain function and mental disorder associated genes, including PPP3CC, KCNQ5, and CYP7B1), a functional component related to signal decreases in the default mode network during the task, and a brain structure component indicating increases of gray matter in brain regions of the default mode region. Although such findings need further replication, the simulation and in-vivo results validate the three-way parallel ICA algorithm presented here as a useful tool in biomedical data decomposition applications.

  4. NWChem: A comprehensive and scalable open-source solution for large scale molecular simulations

    NASA Astrophysics Data System (ADS)

    Valiev, M.; Bylaska, E. J.; Govind, N.; Kowalski, K.; Straatsma, T. P.; Van Dam, H. J. J.; Wang, D.; Nieplocha, J.; Apra, E.; Windus, T. L.; de Jong, W. A.

    2010-09-01

    The latest release of NWChem delivers an open-source computational chemistry package with extensive capabilities for large-scale simulations of chemical and biological systems. Utilizing a common computational framework, diverse theoretical descriptions can be used to provide the best solution for a given scientific problem. Scalable parallel implementations and modular software design enable efficient utilization of current computational architectures. This paper provides an overview of NWChem, focusing primarily on the core theoretical modules provided by the code and their parallel performance. Program summary: Program title: NWChem. Catalogue identifier: AEGI_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGI_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Open Source Educational Community License. No. of lines in distributed program, including test data, etc.: 11 709 543. No. of bytes in distributed program, including test data, etc.: 680 696 106. Distribution format: tar.gz. Programming language: Fortran 77, C. Computer: all Linux-based workstations and parallel supercomputers, Windows and Apple machines. Operating system: Linux, OS X, Windows. Has the code been vectorised or parallelized?: Code is parallelized. Classification: 2.1, 2.2, 3, 7.3, 7.7, 16.1, 16.2, 16.3, 16.10, 16.13. Nature of problem: Large-scale atomistic simulations of chemical and biological systems require efficient and reliable methods for ground and excited solutions of the many-electron Hamiltonian, analysis of the potential energy surface, and dynamics. Solution method: Ground and excited solutions of the many-electron Hamiltonian are obtained utilizing density-functional theory, the many-body perturbation approach, and coupled cluster expansion. These solutions, or a combination thereof with classical descriptions, are then used to analyze the potential energy surface and perform dynamical simulations.
Additional comments: Full documentation is provided in the distribution file. This includes an INSTALL file giving details of how to build the package. A set of test runs is provided in the examples directory. The distribution file for this program is over 90 Mbytes and therefore is not delivered directly when a download or e-mail request is made; instead, an HTML file giving details of how the program can be obtained is sent. Running time: Running time depends on the size of the chemical system, the complexity of the method, the number of CPUs, and the computational task. It ranges from several seconds for serial DFT energy calculations on a few atoms to several hours for parallel coupled cluster energy calculations on tens of atoms or ab-initio molecular dynamics simulation on hundreds of atoms.

  5. MCdevelop - a universal framework for Stochastic Simulations

    NASA Astrophysics Data System (ADS)

    Slawinska, M.; Jadach, S.

    2011-03-01

    We present MCdevelop, a universal computer framework for developing and exploiting the wide class of Stochastic Simulations (SS) software. This powerful universal SS software development tool has been derived from a series of scientific projects for precision calculations in high energy physics (HEP), which feature a wide range of functionality in the SS software needed for advanced precision Quantum Field Theory calculations for the past LEP experiments and for the ongoing LHC experiments at CERN, Geneva. MCdevelop is a "spin-off" product of HEP to be exploited in other areas, while it will still serve to develop new SS software for HEP experiments. Typically SS involve independent generation of large sets of random "events", often requiring considerable CPU power. Since SS jobs usually do not share memory, they are easy to parallelize. Efficient development, testing, and parallel running of SS software require a convenient framework for developing source code, deploying and monitoring batch jobs, and merging and analysing results from multiple parallel jobs, even before the production runs are terminated. Throughout the years of development of stochastic simulations for HEP, a sophisticated framework featuring all the above-mentioned functionality has been implemented. MCdevelop represents its latest version, written mostly in C++ (GNU compiler gcc). It uses Autotools to build binaries (optionally managed within the KDevelop 3.5.3 Integrated Development Environment (IDE)). It uses the open-source ROOT package for histogramming, graphics, and the persistency mechanism for C++ objects. MCdevelop helps to run multiple parallel jobs on any computer cluster with an NQS-type batch system. Program summary: Program title: MCdevelop. Catalogue identifier: AEHW_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHW_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 48 136. No. of bytes in distributed program, including test data, etc.: 355 698. Distribution format: tar.gz. Programming language: ANSI C++. Computer: Any computer system or cluster with a C++ compiler and a UNIX-like operating system. Operating system: Most UNIX systems, Linux. The application programs were thoroughly tested under Ubuntu 7.04, 8.04 and CERN Scientific Linux 5. Has the code been vectorised or parallelised?: Tools (scripts) for optional parallelisation on a PC farm are included. RAM: 500 bytes. Classification: 11.3. External routines: ROOT package version 5.0 or higher (http://root.cern.ch/drupal/). Nature of problem: Developing any type of stochastic simulation program for high energy physics and other areas. Solution method: Object-oriented programming in C++ with an added persistency mechanism, batch scripts for running on PC farms, and Autotools.

  6. DGDFT: A massively parallel method for large scale density functional theory calculations.

    PubMed

    Hu, Wei; Lin, Lin; Yang, Chao

    2015-09-28

    We describe a massively parallel implementation of the recently developed discontinuous Galerkin density functional theory (DGDFT) method for efficient large-scale Kohn-Sham DFT based electronic structure calculations. The DGDFT method uses adaptive local basis (ALB) functions generated on-the-fly during the self-consistent field iteration to represent the solution to the Kohn-Sham equations. The use of the ALB set provides a systematic way to improve the accuracy of the approximation. By using the pole expansion and selected inversion technique to compute electron density, energy, and atomic forces, we can make the computational complexity of DGDFT scale at most quadratically with respect to the number of electrons for both insulating and metallic systems. We show that for the two-dimensional (2D) phosphorene systems studied here, using 37 basis functions per atom allows us to reach an accuracy level of 1.3 × 10^-4 Hartree/atom in the error of the energy and 6.2 × 10^-4 Hartree/bohr in the error of the atomic force, respectively. DGDFT can achieve 80% parallel efficiency on 128,000 high performance computing cores when it is used to study the electronic structure of 2D phosphorene systems with 3500 to 14,000 atoms. This high parallel efficiency results from a two-level parallelization scheme that we describe in detail.

  7. Breadboard development of a fluid infusion system

    NASA Technical Reports Server (NTRS)

    Thompson, R. W.

    1974-01-01

    A functional breadboard of a zero gravity Intravenous Infusion System (IVI) is presented. Major components described are: (1) infusate pack pressurizers; (2) pump module; (3) infusion set; and (4) electronic control package. The IVI breadboard was designed to demonstrate the feasibility of using the parallel solenoid pump and spring powered infusate source pressurizers for the emergency infusion of various liquids in a zero gravity environment. The IVI was tested for flow rate and sensitivity to back pressure at the needle. Results are presented.

  8. Standardization and validation of a parallel form of the verbal and non-verbal recognition memory test in an Italian population sample.

    PubMed

    Smirni, Daniela; Smirni, Pietro; Di Martino, Giovanni; Cipolotti, Lisa; Oliveri, Massimiliano; Turriziani, Patrizia

    2018-05-04

    In the neuropsychological assessment of several neurological conditions, evaluation of recognition memory is required. Recognition appears more appropriate than recall for studying verbal and non-verbal memory, because interference from psychological and emotional disorders is less relevant in recognition than in recall memory paradigms. In many neurological disorders, repeated longitudinal assessments are needed to monitor the effectiveness of rehabilitation programs or pharmacological treatments on the recovery of memory. To contain practice effects in repeated neuropsychological evaluations, the use of parallel forms of the tests is necessary. Having two parallel forms of the same test, which keep administration procedures and scoring constant, is a great advantage both in clinical practice, for monitoring memory disorders, and in experimental practice, allowing repeated evaluation of memory in healthy and neurological subjects. The first aim of the present study was to provide normative values in an Italian sample (n = 160) for a parallel form of a verbal and non-verbal recognition memory battery. Multiple regression analysis revealed significant effects of age and education on recognition memory performance, whereas sex did not reach a significant probability level. Inferential cutoffs were determined and equivalent scores computed. Secondly, the study aimed to validate the equivalence of the two parallel forms of the Recognition Memory Test. Correlation analyses between the total scores of the two versions of the test and between the three subtasks revealed that the two forms are parallel and the subtasks are equivalent in difficulty.

  9. ParallelStructure: A R Package to Distribute Parallel Runs of the Population Genetics Program STRUCTURE on Multi-Core Computers

    PubMed Central

    Besnier, Francois; Glover, Kevin A.

    2013-01-01

    This software package provides an R-based framework for making use of multi-core computers when running analyses in the population genetics program STRUCTURE. It is especially addressed to those STRUCTURE users who deal with numerous and repeated data analyses and who could take advantage of an efficient script to automatically distribute STRUCTURE jobs among multiple processors. It also includes functions to divide analyses among combinations of populations within a single data set without the need to manually produce multiple projects, as is currently the case in STRUCTURE. The package provides two main functions, MPI_structure() and parallel_structure(), as well as an example data file. We compared computing-time performance for these example data on two computer architectures and showed that use of the present functions can result in several-fold improvements in computation time. ParallelStructure is freely available at https://r-forge.r-project.org/projects/parallstructure/. PMID:23923012
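    The core pattern here, farming a batch of independent STRUCTURE jobs out to several workers, can be sketched in Python (the actual package is written in R; the function and job names below are illustrative stand-ins, not the package's API):

```python
# Conceptual sketch only: distribute many independent analysis jobs
# across workers, the pattern parallel_structure() applies to
# STRUCTURE runs. Names here are hypothetical, not the R package's API.
from concurrent.futures import ThreadPoolExecutor

def run_structure_job(job):
    """Stand-in for one STRUCTURE invocation (normally a subprocess)."""
    k, replicate = job
    # a real job would launch the STRUCTURE executable with these settings
    return (k, replicate, "done")

def distribute_jobs(jobs, n_workers=4):
    # each job is independent, so all of them can run concurrently
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(run_structure_job, jobs))
```

In the real package, each job would invoke the STRUCTURE executable with its own parameter set and output directory, so workers spend their time waiting on external processes rather than on computation of their own.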

  10. Constitutive Model Calibration via Autonomous Multiaxial Experimentation (Postprint)

    DTIC Science & Technology

    2016-09-17

    test machine. Experimental data is reduced and finite element simulations are conducted in parallel with the test based on experimental strain conditions. Optimization methods ... be used directly in finite element simulations of more complex geometries. Keywords: Axial/torsional experimentation • Plasticity • Constitutive model

  11. Diagnostic value of tendon thickness and structure in the sonographic diagnosis of supraspinatus tendinopathy: room for a two-step approach.

    PubMed

    Arend, Carlos Frederico; Arend, Ana Amalia; da Silva, Tiago Rodrigues

    2014-06-01

    The aim of our study was to systematically compare different methodologies in order to establish an evidence-based approach, based on tendon thickness and structure, for the sonographic diagnosis of supraspinatus tendinopathy when compared to MRI. US was obtained from 164 symptomatic patients with supraspinatus tendinopathy detected at MRI and 42 asymptomatic controls with normal MRI. Diagnostic yield was calculated for maximal supraspinatus tendon thickness (MSTT) and tendon structure as isolated criteria and using different combinations of parallel and sequential testing at US. Chi-squared tests were performed to assess sensitivity, specificity, and accuracy of the different diagnostic approaches. Mean MSTT was 6.68 mm in symptomatic patients and 5.61 mm in asymptomatic controls (P < .05). When used as an isolated criterion, MSTT > 6.0 mm provided the best accuracy (93.7%) when compared to other measurements of tendon thickness. Also as an isolated criterion, abnormal tendon structure (ATS) yielded 93.2% accuracy for diagnosis. The best overall yield was obtained by both parallel and sequential testing using either MSTT > 6.0 mm or ATS as diagnostic criteria in no particular order, which provided 99.0% accuracy, 100% sensitivity, and 95.2% specificity. Among these parallel and sequential tests that provided the best overall yield, additional analysis revealed that sequential testing first evaluating tendon structure required assessment of 258 criteria (vs. 261 for sequential testing first evaluating tendon thickness and 412 for parallel testing) and demanded a mean of 16.1 s to assess diagnostic criteria and reach the diagnosis (vs. 43.3 s for sequential testing first evaluating tendon thickness and 47.4 s for parallel testing). We found that using either MSTT > 6.0 mm or ATS as diagnostic criteria for both parallel and sequential testing provides the best overall yield for sonographic diagnosis of supraspinatus tendinopathy when compared to MRI. Among these strategies, a two-step sequential approach first assessing tendon structure was advantageous because it required fewer criteria to be assessed and less time to reach the diagnosis. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  12. SPSS and SAS programs for determining the number of components using parallel analysis and velicer's MAP test.

    PubMed

    O'Connor, B P

    2000-08-01

    Popular statistical software packages do not have the proper procedures for determining the number of components in factor and principal components analyses. Parallel analysis and Velicer's minimum average partial (MAP) test are validated procedures, recommended widely by statisticians. However, many researchers continue to use alternative, simpler, but flawed procedures, such as the eigenvalues-greater-than-one rule. Use of the proper procedures might be increased if these procedures could be conducted within familiar software environments. This paper describes brief and efficient programs for using SPSS and SAS to conduct parallel analyses and the MAP test.
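    The idea behind Horn's parallel analysis is simple enough to sketch outside SPSS or SAS: retain as many components as have observed eigenvalues larger than the corresponding mean eigenvalues from random data of the same dimensions. A minimal Python sketch (the published programs add refinements such as percentile thresholds and raw-data permutations):

```python
# Minimal sketch of Horn's parallel analysis for deciding how many
# principal components to retain. Compare the eigenvalues of the
# observed correlation matrix against the mean eigenvalues obtained
# from random normal data of the same shape.
import numpy as np

def parallel_analysis(data, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # observed eigenvalues, largest first
    obs_eig = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    rand_eig = np.zeros(p)
    for _ in range(n_iter):
        r = rng.standard_normal((n, p))
        rand_eig += np.linalg.eigvalsh(np.corrcoef(r, rowvar=False))[::-1]
    rand_eig /= n_iter  # mean random eigenvalue at each position
    # retain components whose observed eigenvalue beats the random mean
    return int(np.sum(obs_eig > rand_eig))
```

On data generated from two strong latent factors, this sketch recovers two components, where the eigenvalues-greater-than-one rule criticized in the abstract can over- or under-extract.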

  13. Massively parallel GPU-accelerated minimization of classical density functional theory

    NASA Astrophysics Data System (ADS)

    Stopper, Daniel; Roth, Roland

    2017-08-01

    In this paper, we discuss the ability to numerically minimize the grand potential of hard disks in two-dimensional and of hard spheres in three-dimensional space within the framework of classical density functional and fundamental measure theory on modern graphics cards. Our main finding is that a massively parallel minimization leads to an enormous performance gain in comparison to standard sequential minimization schemes. Furthermore, the results indicate that in complex multi-dimensional situations, a heavily parallel minimization of the grand potential seems to be mandatory in order to reach a reasonable balance between accuracy and computational cost.

  14. Seasonally and experimentally induced changes in testicular function of the Australian bush rat (Rattus fuscipes).

    PubMed

    Irby, D C; Kerr, J B; Risbridger, G P; de Kretser, D M

    1984-03-01

    Serum concentrations of LH, FSH and testosterone were measured monthly throughout the year in male bush rats. Testicular size and ultrastructure, LH/hCG, FSH and oestradiol receptors and the response of the pituitary to LHRH were also recorded. LH and FSH rose in parallel with an increase in testicular size after the winter solstice with peak gonadotrophin levels in the spring (September). The subsequent fall in LH and FSH levels was associated with a rise in serum testosterone which reached peak levels during summer (December and January). In February serum testosterone levels and testicular size declined in parallel, while the pituitary response to an LHRH injection was maximal during late summer. The numbers of LH/hCG, FSH and oestradiol receptors per testis were all greatly reduced in the regressed testes when compared to active testes. In a controlled environment of decreased lighting (shortened photoperiod), temperature and food quality, the testes of sexually active adult males regressed at any time of the year, the resultant testicular morphology and endocrine status being identical to that of wild rats in the non-breeding season. Full testicular regression was achieved only when the photoperiod, temperature and food quality were all changed: experiments in which only one or two of these factors were altered failed to produce complete sexual regression.

  15. 3D streamers simulation in a pin to plane configuration using massively parallel computing

    NASA Astrophysics Data System (ADS)

    Plewa, J.-M.; Eichwald, O.; Ducasse, O.; Dessante, P.; Jacobs, C.; Renon, N.; Yousfi, M.

    2018-03-01

    This paper concerns the 3D simulation of corona discharge using high performance computing (HPC) managed with the message passing interface (MPI) library. In the field of finite volume methods applied on non-adaptive mesh grids, and in the case of a specific 3D dynamic benchmark test devoted to streamer studies, the great efficiency of the iterative R&B SOR and BiCGSTAB methods versus the direct MUMPS method was clearly demonstrated in solving the Poisson equation using HPC resources. The optimization of the parallelization and the resulting scalability was undertaken as a function of the HPC architecture for a number of mesh cells ranging from 8 to 512 million and a number of cores ranging from 20 to 1600. The R&B SOR method remains at least about four times faster than the BiCGSTAB method and requires significantly less memory in all tested situations. The R&B SOR method was then implemented in a 3D MPI-parallelized code that solves the classical first-order model of an atmospheric pressure corona discharge in air. The 3D code capabilities were tested by following the development of one, two and four coplanar streamers generated by initial plasma spots for 6 ns. The preliminary results obtained allowed us to follow in detail the formation of the tree structure of a corona discharge and the effects of the mutual interactions between the streamers in terms of streamer velocity, trajectory and diameter. The computing time for 64 million mesh cells distributed over 1000 cores using the MPI procedures is about 30 min per simulated nanosecond, regardless of the number of streamers.
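    The red-black ordering that makes R&B SOR attractive for MPI parallelization can be illustrated with a serial 2D sketch (the paper solves the 3D Poisson equation; this toy version only shows why same-colored cells can be updated simultaneously, since each update reads only neighbors of the opposite color):

```python
# 2D red-black SOR sketch for the Poisson equation -∇²u = f on the
# unit square with zero boundary values. Cells are colored like a
# checkerboard; a red cell's update reads only black neighbors, so
# all cells of one color can be updated in parallel.
import numpy as np

def rb_sor(f, h, omega=1.7, iters=500):
    u = np.zeros_like(f)
    for _ in range(iters):
        for color in (0, 1):          # red half-sweep, then black
            for i in range(1, u.shape[0] - 1):
                for j in range(1, u.shape[1] - 1):
                    if (i + j) % 2 != color:
                        continue
                    # Gauss-Seidel value from the 5-point stencil
                    gs = 0.25 * (u[i - 1, j] + u[i + 1, j] + u[i, j - 1]
                                 + u[i, j + 1] + h * h * f[i, j])
                    u[i, j] = (1 - omega) * u[i, j] + omega * gs
    return u
```

Because every red cell depends only on black neighbors and vice versa, each half-sweep is embarrassingly parallel and maps naturally onto domain-decomposed MPI ranks.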

  16. Parallel transmit excitation at 1.5 T based on the minimization of a driving function for device heating

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gudino, N., E-mail: natalia.gudino@nih.gov; Sonmez, M.; Nielles-Vallespin, S.

    2015-01-15

    Purpose: To provide a rapid method to reduce the radiofrequency (RF) E-field coupling and consequent heating in long conductors in an interventional MRI (iMRI) setup. Methods: A driving function for device heating (W) was defined as the integration of the E-field along the direction of the wire and calculated through a quasistatic approximation. Based on this function, the phases of four independently controlled transmit channels were dynamically changed in a 1.5 T MRI scanner. During the different excitation configurations, the RF induced heating in a nitinol wire immersed in a saline phantom was measured by fiber-optic temperature sensing. Additionally, a minimization of W as a function of the phase and amplitude values of the different channels, constrained by the homogeneity of the RF excitation field (B1) over a region of interest, was proposed and its results tested on the benchtop. To analyze the validity of the proposed method, using a model of the array and phantom setup tested in the scanner, RF fields and SAR maps were calculated through finite-difference time-domain (FDTD) simulations. In addition to phantom experiments, RF induced heating of an active guidewire inserted in a swine was also evaluated. Results: In the phantom experiment, heating at the tip of the device was reduced by 92% when replacing the body coil by an optimized parallel transmit excitation with the same nominal flip angle. On the benchtop, up to 90% heating reduction was measured when implementing the constrained minimization algorithm with the additional degree of freedom given by independent amplitude control. The computation of the optimum phase and amplitude values was executed in just 12 s using a standard CPU. The results of the FDTD simulations showed a similar trend of the local SAR at the tip of the wire and the measured temperature, as well as of a quadratic function of W, confirming the validity of the quasistatic approach for the presented problem at 64 MHz. Imaging and heating reduction of the guidewire were successfully performed in vivo with the proposed hardware and phase control. Conclusions: Phantom and in vivo data demonstrated that additional degrees of freedom in a parallel transmission system can be used to control RF induced heating in long conductors. A novel constrained optimization approach to reduce device heating was also presented that can be run in just a few seconds and therefore could be added to an iMRI protocol to improve RF safety.

  17. A parallelized three-dimensional cellular automaton model for grain growth during additive manufacturing

    NASA Astrophysics Data System (ADS)

    Lian, Yanping; Lin, Stephen; Yan, Wentao; Liu, Wing Kam; Wagner, Gregory J.

    2018-05-01

    In this paper, a parallelized 3D cellular automaton computational model is developed to predict grain morphology for solidification of metal during the additive manufacturing process. Solidification phenomena are characterized by highly localized events, such as the nucleation and growth of multiple grains. As a result, parallelization requires careful treatment of load balancing between processors as well as interprocess communication in order to maintain a high parallel efficiency. We give a detailed summary of the formulation of the model, as well as a description of the communication strategies implemented to ensure parallel efficiency. Scaling tests on a representative problem with about half a billion cells demonstrate parallel efficiency of more than 80% on 8 processors and around 50% on 64; loss of efficiency is attributable to load imbalance due to near-surface grain nucleation in this test problem. The model is further demonstrated through an additive manufacturing simulation with resulting grain structures showing reasonable agreement with those observed in experiments.
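    The capture rule at the heart of such cellular automaton solidification models can be sketched as follows (a deliberately stripped-down 2D toy; the actual model additionally handles nucleation, crystallographic orientation, and growth kinetics):

```python
# Toy sketch of a cellular-automaton grain-growth step: liquid cells
# (ID 0) are captured by any solidified neighbor (ID > 0) and adopt
# its grain ID. Each step reads only the previous state, so cells can
# be updated independently -- the property the paper parallelizes.
import numpy as np

def grow(grid):
    """One CA step on a 2D grid; 0 = liquid, >0 = grain ID."""
    new = grid.copy()
    for i in range(grid.shape[0]):
        for j in range(grid.shape[1]):
            if grid[i, j] != 0:
                continue  # already solidified
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < grid.shape[0] and 0 <= nj < grid.shape[1] \
                        and grid[ni, nj] != 0:
                    new[i, j] = grid[ni, nj]  # captured by that grain
                    break
    return new
```

Because growth events cluster around nucleation sites, a naive spatial decomposition of such a grid leaves many processors idle, which is exactly the load-imbalance effect the scaling tests in this record report.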

  19. Massively parallel implementation of 3D-RISM calculation with volumetric 3D-FFT.

    PubMed

    Maruyama, Yutaka; Yoshida, Norio; Tadano, Hiroto; Takahashi, Daisuke; Sato, Mitsuhisa; Hirata, Fumio

    2014-07-05

    A new three-dimensional reference interaction site model (3D-RISM) program for massively parallel machines, combined with the volumetric 3D fast Fourier transform (3D-FFT), was developed and tested on the RIKEN K supercomputer. The ordinary parallel 3D-RISM program has a limitation on the degree of parallelization because of the limitations of the slab-type 3D-FFT. The volumetric 3D-FFT relieves this limitation drastically. We tested the 3D-RISM calculation on a large, fine calculation cell (2048^3 grid points) on 16,384 nodes, each having eight CPU cores. The new 3D-RISM program achieved excellent parallel scalability running on the RIKEN K supercomputer. As a benchmark application, we employed the program, combined with molecular dynamics simulation, to analyze the oligomerization process of a chymotrypsin inhibitor 2 mutant. The results demonstrate that the massively parallel 3D-RISM program is effective for analyzing the hydration properties of large biomolecular systems. Copyright © 2014 Wiley Periodicals, Inc.
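    The scalability ceiling that the volumetric 3D-FFT lifts is easy to quantify: a slab decomposition hands each process whole planes of the grid, so at most N processes can share an N^3 transform, whereas a pencil/volumetric decomposition assigns columns or blocks and allows on the order of N^2 processes. A back-of-the-envelope check against the numbers in this record:

```python
# Why the volumetric 3D-FFT relieves the parallelization limit of the
# slab-type 3D-FFT: simple counting of the maximum usable processes
# for an N^3 grid under each decomposition.
def max_procs_slab(n):
    # slab decomposition: one or more whole x-y planes per process
    return n

def max_procs_pencil(n):
    # pencil/volumetric decomposition: one or more 1D columns per process
    return n * n

n = 2048               # grid used in this record: 2048^3 points
cores = 16384 * 8      # 16,384 nodes x 8 CPU cores on the K computer
print(max_procs_slab(n) < cores)     # slab FFT cannot occupy all cores
print(max_procs_pencil(n) >= cores)  # volumetric FFT can
```

With 2048 slabs against 131,072 cores, the slab scheme leaves over 98% of the machine idle during the transform, while the volumetric scheme admits up to 2048^2 ≈ 4.2 million processes.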

  20. A framework for parallelized efficient global optimization with application to vehicle crashworthiness optimization

    NASA Astrophysics Data System (ADS)

    Hamza, Karim; Shalaby, Mohamed

    2014-09-01

    This article presents a framework for simulation-based design optimization of computationally expensive problems, where economizing the generation of sample designs is highly desirable. One popular approach for such problems is efficient global optimization (EGO), where an initial set of design samples is used to construct a kriging model, which is then used to generate new 'infill' sample designs at regions of the search space where there is high expectancy of improvement. This article attempts to address one of the limitations of EGO, where generation of infill samples can become a difficult optimization problem in its own right, as well as to allow the generation of multiple samples at a time in order to take advantage of parallel computing in the evaluation of the new samples. The proposed approach is tested on analytical functions, and then applied to the vehicle crashworthiness design of a full Geo Metro model undergoing frontal crash conditions.
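    The infill criterion at the core of EGO is expected improvement, computable in closed form from the kriging predictor's mean and standard deviation at a candidate point. A minimal sketch for minimization (the kriging model itself is omitted):

```python
# Expected improvement (EI), the standard EGO infill criterion.
# mu and sigma would come from a kriging predictor at a candidate
# point; f_best is the best objective value sampled so far.
import math

def expected_improvement(mu, sigma, f_best):
    """EI for minimization, given predicted mean mu and std sigma."""
    if sigma <= 0.0:
        return 0.0  # no predictive uncertainty, no expected gain
    z = (f_best - mu) / sigma
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # normal pdf
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))         # normal cdf
    return (f_best - mu) * Phi + sigma * phi
```

Infill samples are chosen by maximizing this criterion over the search space; producing several high-EI points per iteration, as the article proposes, lets the expensive crash simulations run in parallel.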

  1. LB3D: A parallel implementation of the Lattice-Boltzmann method for simulation of interacting amphiphilic fluids

    NASA Astrophysics Data System (ADS)

    Schmieschek, S.; Shamardin, L.; Frijters, S.; Krüger, T.; Schiller, U. D.; Harting, J.; Coveney, P. V.

    2017-08-01

    We introduce the lattice-Boltzmann code LB3D, version 7.1. Building on a parallel program and supporting tools which have enabled research utilising high performance computing resources for nearly two decades, LB3D version 7 provides a subset of the research code functionality as an open source project. Here, we describe the theoretical basis of the algorithm as well as computational aspects of the implementation. The software package is validated against simulations of meso-phases resulting from self-assembly in ternary fluid mixtures comprising immiscible and amphiphilic components such as water-oil-surfactant systems. The impact of the surfactant species on the dynamics of spinodal decomposition is tested, and quantitative measurement of the permeability of a body centred cubic (BCC) model porous medium for a simple binary mixture is described. Single-core performance and scaling behaviour of the code are reported for simulations on current supercomputer architectures.

  2. Optical to optical interface device

    NASA Technical Reports Server (NTRS)

    Oliver, D. S.; Vohl, P.; Nisenson, P.

    1972-01-01

    The development, fabrication, and testing of a preliminary model of an optical-to-optical (noncoherent-to-coherent) interface device for use in coherent optical parallel processing systems are described. The developed device demonstrates a capability for accepting as an input a scene illuminated by a noncoherent radiation source and providing as an output a coherent light beam spatially modulated to represent the original noncoherent scene. The converter device developed under this contract employs a Pockels readout optical modulator (PROM). This is a photosensitive electro-optic element which can sense and electrostatically store optical images. The stored images can be simultaneously or subsequently read out optically by utilizing the electrostatic storage pattern to control an electro-optic light modulating property of the PROM. The readout process is parallel, as no scanning mechanism is required. The PROM provides the functions of optical image sensing, modulation, and storage in a single active material.

  3. Parallel processing for nonlinear dynamics simulations of structures including rotating bladed-disk assemblies

    NASA Technical Reports Server (NTRS)

    Hsieh, Shang-Hsien

    1993-01-01

    The principal objective of this research is to develop, test, and implement coarse-grained, parallel-processing strategies for nonlinear dynamic simulations of practical structural problems. There are contributions to four main areas: finite element modeling and analysis of rotational dynamics, numerical algorithms for parallel nonlinear solutions, automatic partitioning techniques to effect load-balancing among processors, and an integrated parallel analysis system.

  4. The Validation of Parallel Test Forms: "Mountain" and "Beach" Picture Series for Assessment of Language Skills

    ERIC Educational Resources Information Center

    Bae, Jungok; Lee, Yae-Sheik

    2011-01-01

    Pictures are widely used to elicit expressive language skills, and pictures must be established as parallel before changes in ability can be demonstrated by assessment using picture prompts. Why parallel prompts are required and what it is necessary to do to ensure that prompts are in fact parallel is not widely known. To date, evidence of…

  5. Parallel processing on the Livermore VAX 11/780-4 parallel processor system with compatibility to Cray Research, Inc. (CRI) multitasking. Version 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Werner, N.E.; Van Matre, S.W.

    1985-05-01

    This manual describes the CRI Subroutine Library and Utility Package. The CRI library provides Cray multitasking functionality on the four-processor shared memory VAX 11/780-4. Additional functionality has been added for more flexibility. A discussion of the library, utilities, error messages, and example programs is provided.

  6. Vestibular and Somatosensory Convergence in Postural Equilibrium Control: Insights from Spaceflight and Bed Rest Studies

    NASA Technical Reports Server (NTRS)

    Mulavara, A. P.; Batson, C. D.; Buxton, R. E.; Feiveson, A. H.; Kofman, I. S.; Lee, S. M. C.; Miller, C. A.; Peters, B. T.; Phillips, T.; Platts, S. H.

    2014-01-01

    The goal of the Functional Task Test study is to determine the effects of space flight on functional tests that are representative of high priority exploration mission tasks and to identify the key underlying physiological factors that contribute to decrements in performance. We are currently conducting studies on both International Space Station (ISS) astronauts experiencing up to 6 months of microgravity and subjects experiencing 70 days of 6° head-down bed-rest as an analog for space flight. Bed-rest provides the opportunity for us to investigate the role of prolonged axial body unloading in isolation from the other physiological effects produced by exposure to the microgravity environment of space flight. This allows us to parse out the contribution of the body unloading somatosensory component on functional performance. Both ISS crewmembers and bed-rest subjects were tested using a protocol that evaluated functional performance along with tests of postural and locomotor control before and after space flight and bed-rest, respectively. Functional tests included ladder climbing, hatch opening, jump down, manual manipulation of objects and tool use, seat egress and obstacle avoidance, recovery from a fall, and object translation tasks. Astronauts were tested three times before flight, and on 1, 6, and 30 days after landing. Bed-rest subjects were tested three times before bed-rest and immediately after getting up from bed-rest as well as 1, 6, and 12 days after re-ambulation. A comparison of bed-rest and space flight data showed a significant concordance in performance changes across all functional tests. Tasks requiring a greater demand for dynamic control of postural equilibrium (i.e. fall recovery, seat egress/obstacle avoidance during walking, object translation, jump down) showed the greatest decrement in performance. Functional tests with reduced requirements for postural stability showed less reduction in performance. 
Results indicate that body unloading resulting from prolonged bed-rest impacts functional performance particularly for tests with a greater requirement for postural equilibrium control. These changes in functional performance were paralleled by similar decrement in tests designed to specifically assess postural equilibrium and dynamic gait control. These results indicate that body support unloading experienced during space flight plays a central role in postflight alteration of functional task performance. These data also support the concept that space flight may cause central adaptation of converging body-load somatosensory and vestibular input during gravitational transitions.

  7. Power of tests for comparing trend curves with application to national immunization survey (NIS).

    PubMed

    Zhao, Zhen

    2011-02-28

    To compare trend curves of study outcomes between two socio-demographic strata across consecutive time points, three statistical tests were proposed, and their statistical power was compared for different shapes of trend-curve data. For large sample sizes, with independent normal assumptions among strata and across consecutive time points, Z and Chi-square test statistics were developed; these are functions of the outcome estimates and standard errors at each of the study time points for the two strata. For small sample sizes under the independent normal assumption, an F-test statistic was derived, which is a function of the sample sizes of the two strata and of parameters estimated across the study period. If the two trend curves are approximately parallel, the power of the Z-test is consistently higher than that of both the Chi-square and F-tests. If the two trend curves cross with low interaction, the power of the Z-test is higher than or equal to that of the Chi-square and F-tests; at high interaction, however, the powers of the Chi-square and F-tests exceed that of the Z-test. A measure of the interaction of two trend curves was defined. These tests were applied to comparing trend curves of vaccination coverage estimates for standard vaccine series using National Immunization Survey (NIS) 2000-2007 data. Copyright © 2011 John Wiley & Sons, Ltd.
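    The abstract does not give the test statistics explicitly, but their general shape can be sketched: a pooled Z over time points (powerful when the curves are roughly parallel, so the differences share a sign) and a Chi-square summing squared standardized differences (which also detects crossing curves). The following Python illustration is a plausible construction under the stated normality assumptions, not the paper's exact formulas:

```python
# Hedged sketch of trend-curve comparison statistics: two strata's
# point estimates and standard errors at each of T time points.
import math

def z_trend(est1, se1, est2, se2):
    """Pooled Z over all time points; sensitive to a consistent gap."""
    num = sum(a - b for a, b in zip(est1, est2))
    var = sum(s1 * s1 + s2 * s2 for s1, s2 in zip(se1, se2))
    return num / math.sqrt(var)

def chi2_trend(est1, se1, est2, se2):
    """Chi-square with one term per time point; also picks up crossings."""
    return sum((a - b) ** 2 / (s1 * s1 + s2 * s2)
               for a, b, s1, s2 in zip(est1, est2, se1, se2))
```

For two parallel coverage curves separated by a constant gap, the differences accumulate in the Z numerator, which matches the abstract's finding that the Z-test is the most powerful in that case; when the curves cross, positive and negative differences cancel in Z but not in the Chi-square sum.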

  8. The paradigm compiler: Mapping a functional language for the connection machine

    NASA Technical Reports Server (NTRS)

    Dennis, Jack B.

    1989-01-01

    The Paradigm Compiler implements a new approach to compiling programs written in high level languages for execution on highly parallel computers. The general approach is to identify the principal data structures constructed by the program and to map these structures onto the processing elements of the target machine. The mapping is chosen to maximize performance as determined through compile time global analysis of the source program. The source language is Sisal, a functional language designed for scientific computations, and the target language is Paris, the published low level interface to the Connection Machine. The data structures considered are multidimensional arrays whose dimensions are known at compile time. Computations that build such arrays usually offer opportunities for highly parallel execution; they are data parallel. The Connection Machine is an attractive target for these computations, and the parallel for construct of the Sisal language is a convenient high level notation for data parallel algorithms. The principles and organization of the Paradigm Compiler are discussed.

  9. Conserving the functional and phylogenetic trees of life of European tetrapods

    PubMed Central

    Thuiller, Wilfried; Maiorano, Luigi; Mazel, Florent; Guilhaumon, François; Ficetola, Gentile Francesco; Lavergne, Sébastien; Renaud, Julien; Roquet, Cristina; Mouillot, David

    2015-01-01

    Protected areas (PAs) are pivotal tools for biodiversity conservation on the Earth. Europe has had an extensive protection system since Natura 2000 areas were created in parallel with traditional parks and reserves. However, the extent to which this system covers not only taxonomic diversity but also other biodiversity facets, such as evolutionary history and functional diversity, has never been evaluated. Using high-resolution distribution data of all European tetrapods together with dated molecular phylogenies and detailed trait information, we first tested whether the existing European protection system effectively covers all species and in particular, those with the highest evolutionary or functional distinctiveness. We then tested the ability of PAs to protect the entire tetrapod phylogenetic and functional trees of life by mapping species' target achievements along the internal branches of these two trees. We found that the current system is adequately representative in terms of the evolutionary history of amphibians while it fails for the rest. However, the most functionally distinct species were better represented than they would be under random conservation efforts. These results imply better protection of the tetrapod functional tree of life, which could help to ensure long-term functioning of the ecosystem, potentially at the expense of conserving evolutionary history. PMID:25561666

  10. A hardware investigation of robotic SPECT for functional and molecular imaging onboard radiation therapy systems

    PubMed Central

    Yan, Susu; Bowsher, James; Tough, MengHeng; Cheng, Lin; Yin, Fang-Fang

    2014-01-01

    Purpose: To construct a robotic SPECT system and to demonstrate its capability to image a thorax phantom on a radiation therapy flat-top couch, as a step toward onboard functional and molecular imaging in radiation therapy. Methods: A robotic SPECT imaging system was constructed utilizing a gamma camera detector (Digirad 2020tc) and a robot (KUKA KR150 L110 robot). An imaging study was performed with a phantom (PET CT PhantomTM), which includes five spheres of 10, 13, 17, 22, and 28 mm diameters. The phantom was placed on a flat-top couch. SPECT projections were acquired either with a parallel-hole collimator or a single-pinhole collimator, both without background in the phantom and with background at 1/10th the sphere activity concentration. The imaging trajectories of parallel-hole and pinhole collimated detectors spanned 180° and 228°, respectively. The pinhole detector viewed an off-centered spherical common volume which encompassed the 28 and 22 mm spheres. The common volume for parallel-hole system was centered at the phantom which encompassed all five spheres in the phantom. The maneuverability of the robotic system was tested by navigating the detector to trace the phantom and flat-top table while avoiding collision and maintaining the closest possible proximity to the common volume. The robot base and tool coordinates were used for image reconstruction. Results: The robotic SPECT system was able to maneuver parallel-hole and pinhole collimated SPECT detectors in close proximity to the phantom, minimizing impact of the flat-top couch on detector radius of rotation. Without background, all five spheres were visible in the reconstructed parallel-hole image, while four spheres, all except the smallest one, were visible in the reconstructed pinhole image. 
With background, three spheres of 17, 22, and 28 mm diameters were readily observed with the parallel-hole imaging, and the targeted spheres (22 and 28 mm diameters) were readily observed in the pinhole region-of-interest imaging. Conclusions: Onboard SPECT could be achieved by a robot maneuvering a SPECT detector about patients in position for radiation therapy on a flat-top couch. The robot's inherent coordinate frames could be an effective means to estimate detector pose for use in SPECT image reconstruction. PMID:25370663

  11. A hardware investigation of robotic SPECT for functional and molecular imaging onboard radiation therapy systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, Susu, E-mail: susu.yan@duke.edu; Tough, MengHeng; Bowsher, James

Purpose: To construct a robotic SPECT system and to demonstrate its capability to image a thorax phantom on a radiation therapy flat-top couch, as a step toward onboard functional and molecular imaging in radiation therapy. Methods: A robotic SPECT imaging system was constructed utilizing a gamma camera detector (Digirad 2020tc) and a robot (KUKA KR150 L110 robot). An imaging study was performed with a phantom (PET CT Phantom™), which includes five spheres of 10, 13, 17, 22, and 28 mm diameters. The phantom was placed on a flat-top couch. SPECT projections were acquired either with a parallel-hole collimator or a single-pinhole collimator, both without background in the phantom and with background at 1/10th the sphere activity concentration. The imaging trajectories of parallel-hole and pinhole collimated detectors spanned 180° and 228°, respectively. The pinhole detector viewed an off-centered spherical common volume which encompassed the 28 and 22 mm spheres. The common volume for the parallel-hole system was centered at the phantom and encompassed all five spheres in the phantom. The maneuverability of the robotic system was tested by navigating the detector to trace the phantom and flat-top table while avoiding collision and maintaining the closest possible proximity to the common volume. The robot base and tool coordinates were used for image reconstruction. Results: The robotic SPECT system was able to maneuver parallel-hole and pinhole collimated SPECT detectors in close proximity to the phantom, minimizing the impact of the flat-top couch on detector radius of rotation. Without background, all five spheres were visible in the reconstructed parallel-hole image, while four spheres, all except the smallest one, were visible in the reconstructed pinhole image.
With background, three spheres of 17, 22, and 28 mm diameters were readily observed with the parallel-hole imaging, and the targeted spheres (22 and 28 mm diameters) were readily observed in the pinhole region-of-interest imaging. Conclusions: Onboard SPECT could be achieved by a robot maneuvering a SPECT detector about patients in position for radiation therapy on a flat-top couch. The robot's inherent coordinate frames could be an effective means to estimate detector pose for use in SPECT image reconstruction.

  12. Application of a Scalable, Parallel, Unstructured-Grid-Based Navier-Stokes Solver

    NASA Technical Reports Server (NTRS)

    Parikh, Paresh

    2001-01-01

    A parallel version of an unstructured-grid based Navier-Stokes solver, USM3Dns, previously developed for efficient operation on a variety of parallel computers, has been enhanced to incorporate upgrades made to the serial version. The resultant parallel code has been extensively tested on a variety of problems of aerospace interest and on two sets of parallel computers to understand and document its characteristics. An innovative grid renumbering construct and use of non-blocking communication are shown to produce superlinear computing performance. Preliminary results from parallelization of a recently introduced "porous surface" boundary condition are also presented.

  13. Polymer functionalized nanostructured porous silicon for selective water vapor sensing at room temperature

    NASA Astrophysics Data System (ADS)

    Dwivedi, Priyanka; Das, Samaresh; Dhanekar, Saakshi

    2017-04-01

This paper highlights the surface treatment of porous silicon (PSi) for enhancing its sensitivity to water vapor at room temperature. A simple, low-cost technique was used for the fabrication and functionalization of PSi. Spin-coated polyvinyl alcohol (PVA) was used to functionalize the PSi surface. Morphological and structural studies were conducted to analyze the samples using SEM and XRD/Raman spectroscopy, respectively. Contact angle measurements were performed to assess the wettability of the surfaces. PSi and functionalized PSi samples were tested as sensors in the presence of different analytes, such as ethanol, acetone, isopropyl alcohol (IPA) and water vapor, in the range of 50-500 ppm. Electrical measurements were taken from parallel aluminium electrodes fabricated on the functionalized surface using a metal mask and thermal evaporation. Functionalized PSi sensors, in comparison to non-functionalized sensors, showed a selective and enhanced response to water vapor at room temperature. The results demonstrate efficient and selective water vapor detection at room temperature.

  14. Functional Programming in Computer Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Loren James; Davis, Marion Kei

We explore functional programming through a 16-week internship at Los Alamos National Laboratory. Functional programming is a branch of computer science that has exploded in popularity over the past decade due to its high-level syntax, ease of parallelization, and abundant applications. First, we summarize functional programming by listing the advantages of functional programming languages over the usual imperative languages, and we introduce the concept of parsing. Second, we discuss the importance of lambda calculus in the theory of functional programming. Lambda calculus was invented by Alonzo Church in the 1930s to formalize the concept of effective computability, and every functional language is essentially some implementation of lambda calculus. Finally, we display the lasting products of the internship: additions to a compiler and runtime system for the pure functional language STG, including both a set of tests that indicate the validity of updates to the compiler and a compiler pass that checks for illegal instances of duplicate names.

  15. Analytical gradients for subsystem density functional theory within the Slater-function-based Amsterdam density functional program.

    PubMed

    Schlüns, Danny; Franchini, Mirko; Götz, Andreas W; Neugebauer, Johannes; Jacob, Christoph R; Visscher, Lucas

    2017-02-05

We present a new implementation of analytical gradients for subsystem density-functional theory (sDFT) and frozen-density embedding (FDE) into the Amsterdam Density Functional program (ADF). The underlying theory and necessary expressions for the implementation are derived and discussed in detail for various FDE and sDFT setups. The parallel implementation is numerically verified and geometry optimizations with different functional combinations (LDA/TF and PW91/PW91k) are conducted and compared to reference data. Our results confirm that sDFT-LDA/TF yields good equilibrium distances for the systems studied here (mean absolute deviation: 0.09 Å) compared to reference wave-function theory results. However, sDFT-PW91/PW91k quite consistently yields smaller equilibrium distances (mean absolute deviation: 0.23 Å). The flexibility of our new implementation is demonstrated for an HCN-trimer test system, for which several different setups are applied. © 2016 Wiley Periodicals, Inc.

  16. Correction for Metastability in the Quantification of PID in Thin-film Module Testing: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hacke, Peter L; Johnston, Steven; Spataru, Sergiu

A fundamental change in the analysis for the accelerated stress testing of thin-film modules is proposed, whereby power changes due to metastability and other effects that may occur due to the thermal history are removed from the power measurement that we obtain as a function of the applied stress factor. The power of reference modules normalized to an initial state - undergoing the same thermal and light-exposure history but without the applied stress factor such as humidity or voltage bias - is subtracted from that of the stressed modules. For better understanding and appropriate application in standardized tests, the method is demonstrated and discussed for potential-induced degradation testing in view of the parallel-occurring but unrelated physical mechanisms that can lead to confounding power changes in the module.

  17. An Index and Test of Linear Moderated Mediation.

    PubMed

    Hayes, Andrew F

    2015-01-01

I describe a test of linear moderated mediation in path analysis based on an interval estimate of the parameter of a function linking the indirect effect to values of a moderator, a parameter that I call the index of moderated mediation. This test can be used for models that integrate moderation and mediation in which the relationship between the indirect effect and the moderator is estimated as linear, including many of the models described by Edwards and Lambert (2007) and Preacher, Rucker, and Hayes (2007) as well as extensions of these models to processes involving multiple mediators operating in parallel or in serial. Generalization of the method to latent variable models is straightforward. Three empirical examples describe the computation of the index and the test, and its implementation is illustrated using Mplus and the PROCESS macro for SPSS and SAS.
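When the mediator model includes an X-by-W interaction and the outcome model is linear in the mediator, the index described above reduces to the product of the interaction coefficient and the mediator's effect on the outcome. A minimal sketch with simulated data and illustrative coefficients (not PROCESS output):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
# Simulated data: true a1 = 0.5, a3 = 0.4 (X*W -> M), b = 0.6 (M -> Y)
X = rng.normal(size=n)
W = rng.normal(size=n)
M = 0.5 * X + 0.3 * W + 0.4 * X * W + rng.normal(size=n)
Y = 0.2 * X + 0.6 * M + rng.normal(size=n)

def ols(y, cols):
    """Least-squares fit with an intercept; returns the coefficient vector."""
    Z = np.column_stack([np.ones(len(y))] + cols)
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return beta

# Mediator model: M ~ X + W + X*W -> [a0, a1, a2, a3]
a = ols(M, [X, W, X * W])
# Outcome model: Y ~ X + M + W + X*W -> extract b, the coefficient on M
b = ols(Y, [X, M, W, X * W])[2]

# Index of moderated mediation: slope of the indirect effect (a1 + a3*W)*b in W
index_mod_med = a[3] * b
```

In practice the interval estimate around this index would come from bootstrapping, as in the paper; here the point estimate recovers the true value 0.4 × 0.6 = 0.24.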

  18. The effect of cell design and test criteria on the series/parallel performance of nickel cadmium cells and batteries

    NASA Technical Reports Server (NTRS)

    Halpert, G.; Webb, D. A.

    1983-01-01

Three batteries were operated in parallel from a common bus during charge and discharge. SMM utilized NASA Standard 20AH cells and batteries, and LANDSAT-D utilized NASA 50AH cells and batteries of a similar design. Each battery consisted of 22 series-connected cells providing the nominal 28V bus. The three batteries were charged in parallel using the voltage limit/current taper mode wherein the voltage limit was temperature compensated. Discharge occurred on the demand of the spacecraft instruments and electronics. Both flights were planned for three to five year missions. The series/parallel configuration of cells and batteries for the 3-5 yr mission required a well controlled product with built-in reliability and uniformity. Examples of how component, cell and battery selection methods affect the uniformity of the series/parallel operation of the batteries both in testing and in flight are given.

  19. An index for breathlessness and leg fatigue.

    PubMed

    Borg, E; Borg, G; Larsson, K; Letzter, M; Sundblad, B-M

    2010-08-01

    The features of perceived symptoms causing discontinuation of strenuous exercise have been scarcely studied. The aim was to characterize the two main symptoms causing the discontinuation of heavy work in healthy persons as well as describe the growth of symptoms during exercise. Breathlessness (b) and leg fatigue (l) were assessed using the Borg CR10 Scale and the Borg CR100 (centiMax) Scale, during a standardized exercise test in 38 healthy subjects (24-71 years). The b/l-relationships were calculated for terminal perceptions (ERI(b/l)), and the growth of symptoms determined by power functions for the whole test, as well as by growth response indexes (GRI). This latter index was constructed as a ratio between power levels corresponding to a very strong and a moderate perception. In the majority (71%) of the test subjects, leg fatigue was the dominant symptom at the conclusion of exercise (P<0.001) and the b/l ratio was 0.77 (CR10) and 0.75 (CR100), respectively. The GRI for breathlessness and leg fatigue was similar, with good correlations between GRI and the power function exponent (P<0.005). In healthy subjects, leg fatigue is the most common cause for discontinuing an incremental exercise test. The growth functions for breathlessness and leg fatigue during work are, however, almost parallel.
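The growth response index (GRI) described above, a ratio between workloads at which two perceptual anchors are reached, can be illustrated by inverting a power function. A hedged numerical sketch with illustrative constants, not values from the study:

```python
# Hypothetical sketch: perceived exertion R grows as a power function of
# workload P, R = c * P**n. On the Borg CR10 scale, "moderate" is anchored
# at 3 and "very strong" at 7. Constants below are illustrative only.
c, n = 0.05, 1.6

def workload(R):
    """Invert R = c * P**n to find the workload at a given rating."""
    return (R / c) ** (1.0 / n)

# Growth response index: ratio of workloads at "very strong" vs "moderate";
# a GRI near 1 means the perception grows steeply over that workload range.
gri = workload(7.0) / workload(3.0)
```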

  20. An efficient parallel algorithm for the calculation of canonical MP2 energies.

    PubMed

    Baker, Jon; Pulay, Peter

    2002-09-01

    We present the parallel version of a previous serial algorithm for the efficient calculation of canonical MP2 energies (Pulay, P.; Saebo, S.; Wolinski, K. Chem Phys Lett 2001, 344, 543). It is based on the Saebo-Almlöf direct-integral transformation, coupled with an efficient prescreening of the AO integrals. The parallel algorithm avoids synchronization delays by spawning a second set of slaves during the bin-sort prior to the second half-transformation. Results are presented for systems with up to 2000 basis functions. MP2 energies for molecules with 400-500 basis functions can be routinely calculated to microhartree accuracy on a small number of processors (6-8) in a matter of minutes with modern PC-based parallel computers. Copyright 2002 Wiley Periodicals, Inc. J Comput Chem 23: 1150-1156, 2002
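The AO-integral prescreening mentioned in this abstract is commonly implemented as a Cauchy-Schwarz bound, |(ij|kl)| ≤ Q_ij · Q_kl with Q_ij = sqrt((ij|ij)): integral quartets whose bound falls below a threshold are skipped. A minimal sketch with random stand-in screening factors (not real integrals or the authors' code):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 40  # hypothetical number of shells
# Stand-in Schwarz factors Q_ij = sqrt((ij|ij)); symmetric and non-negative
Q = np.abs(rng.normal(size=(n, n)))
Q = (Q + Q.T) / 2
TAU = 1.0  # screening threshold (illustrative)

# Keep only quartets whose Schwarz bound Q_ij * Q_kl reaches the threshold;
# permutational symmetry lets us restrict to j <= i and l <= k.
significant = [
    (i, j, k, l)
    for i in range(n) for j in range(i + 1)
    for k in range(n) for l in range(k + 1)
    if Q[i, j] * Q[k, l] >= TAU
]
total = (n * (n + 1) // 2) ** 2
fraction_kept = len(significant) / total
```

For spatially extended systems most bounds fall below the threshold, which is what makes the transformation in the paper affordable at 2000 basis functions.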

  1. Scalable descriptive and correlative statistics with Titan.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, David C.; Pebay, Philippe Pierre

This report summarizes the existing statistical engines in VTK/Titan and presents the parallel versions thereof which have already been implemented. The ease of use of these parallel engines is illustrated by means of C++ code snippets. Furthermore, this report justifies the design of these engines with parallel scalability in mind; this theoretical property is then verified with test runs that demonstrate optimal parallel speed-up with up to 200 processors.
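Scalable descriptive statistics of this kind hinge on each processor computing a partial result that can be merged associatively. A minimal sketch of such a merge for count, mean, and variance (the pairwise update of Chan et al.), using simulated partitions rather than Titan's C++ API:

```python
from functools import reduce
import random

def partial_stats(xs):
    """Single-pass count/mean/M2 (sum of squared deviations) for one partition."""
    n, mean, m2 = 0, 0.0, 0.0
    for x in xs:
        n += 1
        d = x - mean
        mean += d / n
        m2 += d * (x - mean)
    return n, mean, m2

def combine(a, b):
    """Merge two partial results exactly (Chan et al. pairwise update)."""
    na, ma, m2a = a
    nb, mb, m2b = b
    n = na + nb
    delta = mb - ma
    mean = ma + delta * nb / n
    m2 = m2a + m2b + delta * delta * na * nb / n
    return n, mean, m2

random.seed(1)
data = [random.gauss(10.0, 2.0) for _ in range(10000)]
parts = [data[i::4] for i in range(4)]  # four simulated "processors"
n, mean, m2 = reduce(combine, map(partial_stats, parts))
variance = m2 / (n - 1)
```

Because `combine` is associative, the partials can be reduced in a tree across processors, which is what yields the near-optimal speed-up the report measures.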

  2. NAS Requirements Checklist for Job Queuing/Scheduling Software

    NASA Technical Reports Server (NTRS)

    Jones, James Patton

    1996-01-01

    The increasing reliability of parallel systems and clusters of computers has resulted in these systems becoming more attractive for true production workloads. Today, the primary obstacle to production use of clusters of computers is the lack of a functional and robust Job Management System for parallel applications. This document provides a checklist of NAS requirements for job queuing and scheduling in order to make most efficient use of parallel systems and clusters for parallel applications. Future requirements are also identified to assist software vendors with design planning.

  3. Converging Paradigms: A Reflection on Parallel Theoretical Developments in Psychoanalytic Metapsychology and Empirical Dream Research.

    PubMed

    Schmelowszky, Ágoston

    2016-08-01

    In the last decades one can perceive a striking parallelism between the shifting perspective of leading representatives of empirical dream research concerning their conceptualization of dreaming and the paradigm shift within clinically based psychoanalytic metapsychology with respect to its theory on the significance of dreaming. In metapsychology, dreaming becomes more and more a central metaphor of mental functioning in general. The theories of Klein, Bion, and Matte-Blanco can be considered as milestones of this paradigm shift. In empirical dream research, the competing theories of Hobson and of Solms respectively argued for and against the meaningfulness of the dream-work in the functioning of the mind. In the meantime, empirical data coming from various sources seemed to prove the significance of dream consciousness for the development and maintenance of adaptive waking consciousness. Metapsychological speculations and hypotheses based on empirical research data seem to point in the same direction, promising for contemporary psychoanalytic practice a more secure theoretical base. In this paper the author brings together these diverse theoretical developments and presents conclusions regarding psychoanalytic theory and technique, as well as proposing an outline of an empirical research plan for testing the specificity of psychoanalysis in developing dream formation.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Popovich, P.; Carter, T. A.; Friedman, B.

Numerical simulation of plasma turbulence in the Large Plasma Device (LAPD) [W. Gekelman, H. Pfister, Z. Lucky et al., Rev. Sci. Instrum. 62, 2875 (1991)] is presented. The model, implemented in the BOUndary Turbulence code [M. Umansky, X. Xu, B. Dudson et al., Comput. Phys. Commun. 180, 887 (2009)], includes three-dimensional (3D) collisional fluid equations for plasma density, electron parallel momentum, and current continuity, and also includes the effects of ion-neutral collisions. In nonlinear simulations using measured LAPD density profiles but assuming a constant temperature profile for simplicity, self-consistent evolution of instabilities and nonlinearly generated zonal flows results in a saturated turbulent state. Comparisons of these simulations with measurements in LAPD plasmas reveal good qualitative and reasonable quantitative agreement, in particular in frequency spectrum, spatial correlation, and amplitude probability distribution function of density fluctuations. For comparison with LAPD measurements, the plasma density profile in simulations is maintained either by direct azimuthal averaging on each time step, or by adding a particle source/sink function. The inferred source/sink values are consistent with the estimated ionization source and parallel losses in LAPD. These simulations lay the groundwork for a more comprehensive effort to test fluid turbulence simulation against LAPD data.

  5. Testing of the Engineering Model Electrical Power Control Unit for the Fluids and Combustion Facility

    NASA Technical Reports Server (NTRS)

    Kimnach, Greg L.; Lebron, Ramon C.; Fox, David A.

    1999-01-01

The John H. Glenn Research Center at Lewis Field (GRC) in Cleveland, OH and the Sundstrand Corporation in Rockford, IL have designed and developed an Engineering Model (EM) Electrical Power Control Unit (EPCU) for the Fluids and Combustion Facility (FCF) experiments to be flown on the International Space Station (ISS). The EPCU will be used as the power interface to the ISS power distribution system for the FCF's space experiments' test and telemetry hardware. Furthermore, it is proposed to be the common power interface for all experiments. The EPCU is a three-kilowatt 120Vdc-to-28Vdc converter utilizing three independent Power Converter Units (PCUs), each rated at 1 kWe (36Adc @ 28Vdc), which are paralleled and synchronized. Each converter may be fed from one of two ISS power channels. The 28Vdc loads are connected to the EPCU output via 48 solid-state, current-limiting switches, rated at 4Adc each. These switches may be paralleled to supply any given load up to the 108Adc normal operational limit of the paralleled converters. The EPCU was designed in this manner to maximize allocated-power utilization, to shed loads autonomously, to provide fault tolerance, and to provide a flexible power converter and control module to meet various ISS load demands. Tests of the EPCU in the Power Systems Facility testbed at GRC reveal that the overall converted-power efficiency is approximately 89% with a nominal input voltage of 120Vdc and a total load in the range of 40% to 110% of the rated 28Vdc load. (The PCUs alone have an efficiency of approximately 94.5%.) Furthermore, the EM unit passed all flight-qualification level (and beyond) vibration tests, met ISS EMI (conducted, radiated, and susceptibility) requirements, successfully operated for extended periods in a thermal/vacuum chamber, was integrated with a proto-flight experiment, and passed all stability and functional requirements.

  6. The effect of selection environment on the probability of parallel evolution.

    PubMed

    Bailey, Susan F; Rodrigue, Nicolas; Kassen, Rees

    2015-06-01

Across the great diversity of life, there are many compelling examples of parallel and convergent evolution: similar evolutionary changes arising in independently evolving populations. Parallel evolution is often taken to be strong evidence of adaptation occurring in populations that are highly constrained in their genetic variation. Theoretical models suggest a few potential factors driving the probability of parallel evolution, but experimental tests are needed. In this study, we quantify the degree of parallel evolution in 15 replicate populations of Pseudomonas fluorescens evolved in five different environments that varied in resource type and arrangement. We identified repeat changes across multiple levels of biological organization from phenotype, to gene, to nucleotide, and tested the impact of 1) selection environment, 2) the degree of adaptation, and 3) the degree of heterogeneity in the environment on the degree of parallel evolution at the gene level. We saw, as expected, that parallel evolution occurred more often between populations evolved in the same environment; however, the extent of parallel evolution varied widely. The degree of adaptation did not significantly explain variation in the extent of parallelism in our system, but the number of available beneficial mutations correlated negatively with parallel evolution. In addition, the degree of parallel evolution was significantly higher in populations evolved in a spatially structured, multiresource environment, suggesting that environmental heterogeneity may be an important factor constraining adaptation. Overall, our results stress the importance of environment in driving parallel evolutionary changes and point to a number of avenues for future work for understanding when evolution is predictable. © The Author 2015. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  7. Clinical Evaluation of Effects of Chronic Resveratrol Supplementation on Cerebrovascular Function, Cognition, Mood, Physical Function and General Well-Being in Postmenopausal Women-Rationale and Study Design.

    PubMed

    Evans, Hamish Michael; Howe, Peter Ranald Charles; Wong, Rachel Heloise Xiwen

    2016-03-09

    This methodological paper presents both a scientific rationale and a methodological approach for investigating the effects of resveratrol supplementation on mood and cognitive performance in postmenopausal women. Postmenopausal women have an increased risk of cognitive decline and dementia, which may be at least partly due to loss of beneficial effects of estrogen on the cerebrovasculature. We hypothesise that resveratrol, a phytoestrogen, may counteract this risk by enhancing cerebrovascular function and improving regional blood flow in response to cognitive demands. A clinical trial was designed to test this hypothesis. Healthy postmenopausal women were recruited to participate in a randomised, double-blind, placebo-controlled (parallel comparison) dietary intervention trial to evaluate the effects of resveratrol supplementation (75 mg twice daily) on cognition, cerebrovascular responsiveness to cognitive tasks and overall well-being. They performed the following tests at baseline and after 14 weeks of supplementation: Rey Auditory Verbal Learning Test, Cambridge Semantic Memory Battery, the Double Span and the Trail Making Task. Cerebrovascular function was assessed simultaneously by monitoring blood flow velocity in the middle cerebral arteries using transcranial Doppler ultrasound. This trial provides a model approach to demonstrate that, by optimising circulatory function in the brain, resveratrol and other vasoactive nutrients may enhance mood and cognition and ameliorate the risk of developing dementia in postmenopausal women and other at-risk populations.

  8. PAREMD: A parallel program for the evaluation of momentum space properties of atoms and molecules

    NASA Astrophysics Data System (ADS)

    Meena, Deep Raj; Gadre, Shridhar R.; Balanarayan, P.

    2018-03-01

    The present work describes a code for evaluating the electron momentum density (EMD), its moments and the associated Shannon information entropy for a multi-electron molecular system. The code works specifically for electronic wave functions obtained from traditional electronic structure packages such as GAMESS and GAUSSIAN. For the momentum space orbitals, the general expression for Gaussian basis sets in position space is analytically Fourier transformed to momentum space Gaussian basis functions. The molecular orbital coefficients of the wave function are taken as an input from the output file of the electronic structure calculation. The analytic expressions of EMD are evaluated over a fine grid and the accuracy of the code is verified by a normalization check and a numerical kinetic energy evaluation which is compared with the analytic kinetic energy given by the electronic structure package. Apart from electron momentum density, electron density in position space has also been integrated into this package. The program is written in C++ and is executed through a Shell script. It is also tuned for multicore machines with shared memory through OpenMP. The program has been tested for a variety of molecules and correlated methods such as CISD, Møller-Plesset second order (MP2) theory and density functional methods. For correlated methods, the PAREMD program uses natural spin orbitals as an input. The program has been benchmarked for a variety of Gaussian basis sets for different molecules showing a linear speedup on a parallel architecture.
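The analytic position-to-momentum Fourier transform the abstract mentions is particularly clean for Gaussian basis functions, whose transform is again a Gaussian. A small numerical check for a normalized s-type function with an illustrative exponent (not the PAREMD code):

```python
import numpy as np

alpha = 0.7  # hypothetical exponent of a normalized s-type Gaussian

def trapz(f, x):
    """Simple trapezoidal quadrature."""
    return float(np.sum((f[1:] + f[:-1]) * 0.5 * np.diff(x)))

# The position-space function (2*alpha/pi)**(3/4) * exp(-alpha*r**2)
# Fourier-transforms to another Gaussian in momentum space:
def phi_p(p):
    return (2.0 * np.pi * alpha) ** (-0.75) * np.exp(-p**2 / (4.0 * alpha))

p = np.linspace(0.0, 30.0, 200001)
weight = 4.0 * np.pi * p**2                    # radial volume element
norm = trapz(phi_p(p) ** 2 * weight, p)        # normalization check, should be 1
p_mean = trapz(phi_p(p) ** 2 * weight * p, p)  # first radial moment <p>
```

The normalization check mirrors the code's own validation strategy; the first moment matches the closed form 2^(3/2) · sqrt(alpha/pi) for this density.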

  9. Divide-and-conquer density functional theory on hierarchical real-space grids: Parallel implementation and applications

    NASA Astrophysics Data System (ADS)

    Shimojo, Fuyuki; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya

    2008-02-01

A linear-scaling algorithm based on a divide-and-conquer (DC) scheme has been designed to perform large-scale molecular-dynamics (MD) simulations, in which interatomic forces are computed quantum mechanically in the framework of the density functional theory (DFT). Electronic wave functions are represented on a real-space grid, which is augmented with a coarse multigrid to accelerate the convergence of iterative solutions and with adaptive fine grids around atoms to accurately calculate ionic pseudopotentials. Spatial decomposition is employed to implement the hierarchical-grid DC-DFT algorithm on massively parallel computers. The largest benchmark tests include an 11.8×10⁶-atom (1.04×10¹² electronic degrees of freedom) calculation on 131,072 IBM BlueGene/L processors. The DC-DFT algorithm has well-defined parameters to control the data locality, with which the solutions converge rapidly. Also, the total energy is well conserved during the MD simulation. We perform first-principles MD simulations based on the DC-DFT algorithm, in which large system sizes bring in excellent agreement with x-ray scattering measurements for the pair-distribution function of liquid Rb and allow the description of low-frequency vibrational modes of graphene. The band gap of a CdSe nanorod calculated by the DC-DFT algorithm agrees well with the available conventional DFT results. With the DC-DFT algorithm, the band gap is calculated for larger system sizes until the result reaches the asymptotic value.

  10. Endpoint-based parallel data processing with non-blocking collective instructions in a parallel active messaging interface of a parallel computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archer, Charles J; Blocksome, Michael A; Cernohous, Bob R

Methods, apparatuses, and computer program products for endpoint-based parallel data processing with non-blocking collective instructions in a parallel active messaging interface (`PAMI`) of a parallel computer are provided. Embodiments include establishing by a parallel application a data communications geometry, the geometry specifying a set of endpoints that are used in collective operations of the PAMI, including associating with the geometry a list of collective algorithms valid for use with the endpoints of the geometry. Embodiments also include registering in each endpoint in the geometry a dispatch callback function for a collective operation and executing without blocking, through a single one of the endpoints in the geometry, an instruction for the collective operation.
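The non-blocking pattern described here, posting a collective on a geometry and receiving a handle that is waited on later, can be sketched generically. This is an illustrative Python analogy, not the PAMI API:

```python
from concurrent.futures import ThreadPoolExecutor

# Minimal sketch (hypothetical names, not PAMI): a non-blocking collective
# returns a handle immediately; the caller overlaps other work and waits later.
class Geometry:
    def __init__(self, endpoints):
        self.endpoints = endpoints
        self._pool = ThreadPoolExecutor(max_workers=1)

    def iallreduce(self, contributions):
        """Post a sum-allreduce across the endpoints without blocking."""
        return self._pool.submit(
            lambda: [sum(contributions)] * len(self.endpoints)
        )

geom = Geometry(endpoints=[0, 1, 2, 3])
handle = geom.iallreduce([1, 2, 3, 4])  # returns immediately
# ... other work overlaps with the collective here ...
result = handle.result()                # block only when the value is needed
```

The point of the design in the patent abstract is exactly this overlap: the collective progresses in the messaging layer while the application computes.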

  11. Nuclear fuel management optimization using genetic algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeChaine, M.D.; Feltus, M.A.

    1995-07-01

The code independent genetic algorithm reactor optimization (CIGARO) system has been developed to optimize nuclear reactor loading patterns. It uses genetic algorithms (GAs) and a code-independent interface, so any reactor physics code (e.g., CASMO-3/SIMULATE-3) can be used to evaluate the loading patterns. The system is compared to other GA-based loading pattern optimizers. Tests were carried out to maximize the beginning-of-cycle k_eff for a pressurized water reactor core loading with a penalty function to limit power peaking. The CIGARO system performed well, increasing the k_eff after lowering the peak power. Tests of a prototype parallel evaluation method showed the potential for a significant speedup.
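A GA loading-pattern search with a power-peaking penalty can be sketched with a toy evaluator standing in for the physics code. All numbers below are hypothetical, not CASMO-3/SIMULATE-3 physics:

```python
import random

random.seed(42)

# Toy stand-in for a reactor physics code: 8 assemblies with hypothetical
# reactivities, placed into 8 core positions with importance weights.
REACTIVITY = [1.30, 1.25, 1.18, 1.10, 1.05, 0.98, 0.92, 0.85]
IMPORTANCE = [1.00, 0.95, 0.90, 0.80, 0.70, 0.60, 0.50, 0.40]
PEAK_LIMIT = 1.25

def evaluate(pattern):
    """Penalized fitness: a k_eff proxy minus a penalty for power peaking."""
    powers = [REACTIVITY[a] * IMPORTANCE[pos] for pos, a in enumerate(pattern)]
    k_proxy = sum(powers) / len(powers)
    peaking = max(powers) / k_proxy  # crude peaking factor
    return k_proxy - 10.0 * max(0.0, peaking - PEAK_LIMIT)

def ga(generations=200, pop_size=30):
    # Seed with a flat-power heuristic (high reactivity to low importance);
    # elitist selection plus swap mutation keeps every member a permutation.
    pop = [list(range(7, -1, -1))]
    pop += [random.sample(range(8), 8) for _ in range(pop_size - 1)]
    for _ in range(generations):
        pop.sort(key=evaluate, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            child = random.choice(survivors)[:]
            i, j = random.sample(range(8), 2)
            child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return max(pop, key=evaluate)

best = ga()
best_fitness = evaluate(best)
```

In CIGARO the call to `evaluate` would instead invoke the external physics code through the code-independent interface, which is also why parallel evaluation of the population pays off.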

  12. Development of a preliminary design of a method to measure the effectiveness of virus exclusion during water process reclamation at zero-G

    NASA Technical Reports Server (NTRS)

    Fraser, A. S.; Wells, A. F.; Tenoso, H. J.; Linnecke, C. B.

    1976-01-01

    Organon Diagnostics has developed, under NASA sponsorship, a monitoring system to test the capability of a water recovery system to reject the passage of viruses into the recovered water. In this system, a non-pathogenic marker virus, bacteriophage F2, is fed into the process stream before the recovery unit and the reclaimed water is assayed for its presence. An engineering preliminary design has been performed as a parallel effort to the laboratory development of the marker virus test system. Engineering schematics and drawings present a preliminary instrument design of a fully functional laboratory prototype capable of zero-G operation.

  13. Characterization of a parallel beam CCD optical-CT apparatus for 3D radiation dosimetry

    NASA Astrophysics Data System (ADS)

    Krstajić, Nikola; Doran, Simon J.

    2006-12-01

This paper describes the initial steps we have taken in establishing CCD-based optical-CT as a viable alternative for 3-D radiation dosimetry. First, we compare the optical density (OD) measurements from a high quality test target and variable neutral density filter (VNDF). A modulation transfer function (MTF) of individual projections is derived for three positions of the sinusoidal test target within the scanning tank. Our CCD is then characterized in terms of its signal-to-noise ratio (SNR). Finally, a sample reconstruction of a scan of a PRESAGE™ (registered trademark of Heuris Pharma, Skillman, NJ, USA) dosimeter is given, demonstrating the capabilities of the apparatus.

  14. Improvement and scale-up of the NASA Redox storage system

    NASA Technical Reports Server (NTRS)

    Reid, M. A.; Thaller, L. H.

    1980-01-01

A preprototype 1.0 kW redox system (2 kW peak) with 11 kWh storage capacity was built and integrated with the NASA/DOE photovoltaic test facility at NASA Lewis. This full-function redox system includes four substacks of 39 cells each (1/3 sq ft active area) which are connected hydraulically in parallel and electrically in series. An open circuit voltage cell and a set of rebalance cells are used to continuously monitor the system state of charge and automatically maintain the anode and cathode reactants electrochemically in balance. Recent membrane and electrode advances are summarized and the results of multicell stack tests at 1 sq ft are described.

  15. Oculomotor and Neuropsychological Effects of Antipsychotic Treatment for Schizophrenia

    PubMed Central

    Hill, S. Kristian; Reilly, James L.; Harris, Margret S. H.; Khine, Tin; Sweeney, John A.

    2008-01-01

    Cognitive enhancement has become an important target for drug therapies in schizophrenia. Treatment development in this area requires assessment approaches that are sensitive to procognitive effects of antipsychotic and adjunctive treatments. Ideally, new treatments will have translational characteristics for parallel human and animal research. Previous studies of antipsychotic effects on cognition have relied primarily on paper-and-pencil neuropsychological testing. No study has directly compared neurophysiological biomarkers and neuropsychological testing as strategies for assessing cognitive effects of antipsychotic treatment early in the course of schizophrenia. Antipsychotic-naive patients with schizophrenia were tested before treatment with risperidone and again 6 weeks later. Matched healthy participants were tested over a similar time period. Test-retest reliability, effect sizes of within-subject change, and multivariate/univariate analysis of variance were used to compare 3 neurophysiological tests (visually guided saccade, memory-guided saccade, and antisaccade) with neuropsychological tests covering 4 cognitive domains (executive function, attention, memory, and manual motor function). While both measurement approaches showed robust neurocognitive impairments in patients prior to risperidone treatment, oculomotor biomarkers were more sensitive to treatment-related effects on neurocognitive function than traditional neuropsychological measures. Further, unlike the pattern of modest generalized cognitive improvement suggested by neuropsychological measures, the oculomotor findings revealed a mixed pattern of beneficial and adverse treatment-related effects. These findings warrant further investigation regarding the utility of neurophysiological biomarkers for assessing cognitive outcomes of antipsychotic treatment in clinical trials and in early-phase drug development. PMID:17932088

  16. A Bootstrap Generalization of Modified Parallel Analysis for IRT Dimensionality Assessment

    ERIC Educational Resources Information Center

    Finch, Holmes; Monahan, Patrick

    2008-01-01

    This article introduces a bootstrap generalization to the Modified Parallel Analysis (MPA) method of test dimensionality assessment using factor analysis. This methodology, based on the use of Marginal Maximum Likelihood nonlinear factor analysis, provides for the calculation of a test statistic based on a parametric bootstrap using the MPA…

  17. Time-dependent density-functional theory in massively parallel computer architectures: the octopus project

    NASA Astrophysics Data System (ADS)

    Andrade, Xavier; Alberdi-Rodriguez, Joseba; Strubbe, David A.; Oliveira, Micael J. T.; Nogueira, Fernando; Castro, Alberto; Muguerza, Javier; Arruabarrena, Agustin; Louie, Steven G.; Aspuru-Guzik, Alán; Rubio, Angel; Marques, Miguel A. L.

    2012-06-01

    Octopus is a general-purpose density-functional theory (DFT) code, with a particular emphasis on the time-dependent version of DFT (TDDFT). In this paper we present the ongoing efforts to achieve the parallelization of octopus. We focus on the real-time variant of TDDFT, where the time-dependent Kohn-Sham equations are directly propagated in time. This approach has great potential for execution in massively parallel systems such as modern supercomputers with thousands of processors and graphics processing units (GPUs). For harvesting the potential of conventional supercomputers, the main strategy is a multi-level parallelization scheme that combines the inherent scalability of real-time TDDFT with a real-space grid domain-partitioning approach. A scalable Poisson solver is critical for the efficiency of this scheme. For GPUs, we show how using blocks of Kohn-Sham states provides the required level of data parallelism and that this strategy is also applicable for code optimization on standard processors. Our results show that real-time TDDFT, as implemented in octopus, can be the method of choice for studying the excited states of large molecular systems in modern parallel architectures.

  18. Time-dependent density-functional theory in massively parallel computer architectures: the OCTOPUS project.

    PubMed

    Andrade, Xavier; Alberdi-Rodriguez, Joseba; Strubbe, David A; Oliveira, Micael J T; Nogueira, Fernando; Castro, Alberto; Muguerza, Javier; Arruabarrena, Agustin; Louie, Steven G; Aspuru-Guzik, Alán; Rubio, Angel; Marques, Miguel A L

    2012-06-13

    Octopus is a general-purpose density-functional theory (DFT) code, with a particular emphasis on the time-dependent version of DFT (TDDFT). In this paper we present the ongoing efforts to achieve the parallelization of octopus. We focus on the real-time variant of TDDFT, where the time-dependent Kohn-Sham equations are directly propagated in time. This approach has great potential for execution in massively parallel systems such as modern supercomputers with thousands of processors and graphics processing units (GPUs). For harvesting the potential of conventional supercomputers, the main strategy is a multi-level parallelization scheme that combines the inherent scalability of real-time TDDFT with a real-space grid domain-partitioning approach. A scalable Poisson solver is critical for the efficiency of this scheme. For GPUs, we show how using blocks of Kohn-Sham states provides the required level of data parallelism and that this strategy is also applicable for code optimization on standard processors. Our results show that real-time TDDFT, as implemented in octopus, can be the method of choice for studying the excited states of large molecular systems in modern parallel architectures.

  19. Safety and efficacy of a ginkgo biloba-containing dietary supplement on cognitive function, quality of life, and platelet function in healthy, cognitively intact older adults.

    PubMed

    Carlson, Joseph J; Farquhar, John W; DiNucci, Ellen; Ausserer, Laurie; Zehnder, James; Miller, Donald; Berra, Kathy; Hagerty, Lisa; Haskell, William L

    2007-03-01

    To determine if a ginkgo biloba-containing supplement improves cognitive function and quality of life, alters primary hemostasis, and is safe in healthy, cognitively intact older adults. Four-month, randomized, double-blind, placebo-controlled parallel design. Ninety men and women (age range 65 to 84 years) were recruited to a university clinic. Eligibility included those without dementia or depression, not taking psychoactive medications or medications or supplements that alter hemostasis. Ninety subjects were randomly assigned to placebo or a ginkgo biloba-based supplement containing 160 mg ginkgo biloba, 68 mg gotu kola, and 180 mg docosahexaenoic acid per day for 4 months. Assessments included: six standardized cognitive function tests, the SF-36 Quality of Life questionnaire, the Platelet Function Analyzer-100 (Dade Behring, Eschborn, Germany), and the monitoring of adverse events. Baseline characteristics and study hypotheses were tested using analysis of covariance. Tests were two-tailed with a 0.05 significance level. Seventy-eight subjects (87%) completed both baseline and 4-month testing (n=36 in placebo group, n=42 in ginkgo biloba group). At baseline, the participants' cognitive function was above average. One of six cognitive tests indicated significant protocol differences at 4 months (P=0.03), favoring the placebo. There were no significant differences in quality of life, platelet function, or adverse events. These findings do not support the use of a ginkgo biloba-containing supplement for improving cognitive function or quality of life in cognitively intact, older, healthy adults. However, high baseline scores may have contributed to the null findings. The ginkgo biloba product seems safe and did not alter platelet function, though additional studies are needed to evaluate the interaction of varying doses of ginkgo biloba and ginkgo biloba-containing supplements with medications and supplements that alter hemostasis.

  20. Investigation of the applicability of a functional programming model to fault-tolerant parallel processing for knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Harper, Richard

    1989-01-01

    In a fault-tolerant parallel computer, a functional programming model can facilitate distributed checkpointing, error recovery, load balancing, and graceful degradation. Such a model has been implemented on the Draper Fault-Tolerant Parallel Processor (FTPP). When used in conjunction with the FTPP's fault detection and masking capabilities, this implementation results in a graceful degradation of system performance after faults. Three graceful degradation algorithms have been implemented and are presented. A user interface has been implemented which requires minimal cognitive overhead by the application programmer, masking such complexities as the system's redundancy, distributed nature, variable complement of processing resources, load balancing, fault occurrence and recovery. This user interface is described and its use demonstrated. The applicability of the functional programming style to the Activation Framework, a paradigm for intelligent systems, is then briefly described.
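
    The recovery behavior that such a functional model enables can be sketched in a few lines: because tasks are pure functions of their inputs, a failed task can simply be re-executed from its checkpointed inputs, with no rollback of global state. This is an illustrative sketch, not the FTPP implementation; all names are hypothetical.

```python
def run_tasks(tasks, inputs, fail_once=()):
    """Run pure tasks; recover from a transient fault by re-execution.

    Because each task is a pure function of its checkpointed inputs,
    re-running it yields the same result, so no global rollback or
    state repair is needed after a fault.
    """
    pending_faults = set(fail_once)    # tasks that fail on first attempt
    results = {}
    for name, fn in tasks.items():
        args = inputs[name]            # inputs saved at checkpoint time
        try:
            if name in pending_faults:
                pending_faults.discard(name)
                raise RuntimeError("simulated transient fault")
            results[name] = fn(*args)
        except RuntimeError:
            results[name] = fn(*args)  # retry from the same inputs
    return results
```

    Purity is what makes the retry safe: the second execution cannot observe any partial effects of the first, which is the property the abstract credits for distributed checkpointing and graceful degradation.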

  1. Design of a massively parallel computer using bit serial processing elements

    NASA Technical Reports Server (NTRS)

    Aburdene, Maurice F.; Khouri, Kamal S.; Piatt, Jason E.; Zheng, Jianqing

    1995-01-01

    A 1-bit serial processor designed for a parallel computer architecture is described. This processor is used to develop a massively parallel computational engine, with a single instruction-multiple data (SIMD) architecture. The computer is simulated and tested to verify its operation and to measure its performance for further development.
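
    The arithmetic of a bit-serial processing element can be illustrated in software: each clock cycle consumes one bit from each operand (least significant bit first) and updates a one-bit carry register, so word width costs time rather than silicon. This is an illustrative sketch, not the processor described above; function names are hypothetical.

```python
def bit_serial_add(a_bits, b_bits):
    """Add two numbers presented LSB-first, one bit per clock cycle,
    using a single full adder and a one-bit carry register."""
    carry = 0
    out = []
    for a, b in zip(a_bits, b_bits):
        s = a ^ b ^ carry                     # sum bit from the full adder
        carry = (a & b) | (carry & (a ^ b))   # carry into the next cycle
        out.append(s)
    out.append(carry)                         # final carry-out bit
    return out

def to_bits(n, width):
    """Integer -> LSB-first bit list of the given width."""
    return [(n >> i) & 1 for i in range(width)]

def from_bits(bits):
    """LSB-first bit list -> integer."""
    return sum(b << i for i, b in enumerate(bits))
```

    In a SIMD array, every processing element would execute this same per-cycle step on its own operand bits, which is what makes thousands of 1-bit elements practical.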

  2. Evidence for a Functional Hierarchy of Association Networks.

    PubMed

    Choi, Eun Young; Drayna, Garrett K; Badre, David

    2018-05-01

    Patient lesion and neuroimaging studies have identified a rostral-to-caudal functional gradient in the lateral frontal cortex (LFC) corresponding to higher-order (complex or abstract) to lower-order (simple or concrete) cognitive control. At the same time, monkey anatomical and human functional connectivity studies show that frontal regions are reciprocally connected with parietal and temporal regions, forming parallel and distributed association networks. Here, we investigated the link between the functional gradient of LFC regions observed during control tasks and the parallel, distributed organization of association networks. Whole-brain fMRI task activity corresponding to four orders of hierarchical control [Badre, D., & D'Esposito, M. Functional magnetic resonance imaging evidence for a hierarchical organization of the prefrontal cortex. Journal of Cognitive Neuroscience, 19, 2082-2099, 2007] was compared with a resting-state functional connectivity MRI estimate of cortical networks [Yeo, B. T., Krienen, F. M., Sepulcre, J., Sabuncu, M. R., Lashkari, D., Hollinshead, M., et al. The organization of the human cerebral cortex estimated by intrinsic functional connectivity. Journal of Neurophysiology, 106, 1125-1165, 2011]. Critically, at each order of control, activity in the LFC and parietal cortex overlapped onto a common association network that differed between orders. These results are consistent with a functional organization based on separable association networks that are recruited during hierarchical control. Furthermore, corticostriatal functional connectivity MRI showed that, consistent with their participation in functional networks, rostral-to-caudal LFC and caudal-to-rostral parietal regions had similar, order-specific corticostriatal connectivity that agreed with a striatal gating model of hierarchical rule use. 
Our results indicate that hierarchical cognitive control is subserved by parallel and distributed association networks, together forming multiple localized functional gradients in different parts of association cortex. As such, association networks, while connectionally organized in parallel, may be functionally organized in a hierarchy via dynamic interaction with the striatum.

  3. 3D geometric modeling and simulation of laser propagation through turbulence with plenoptic functions

    NASA Astrophysics Data System (ADS)

    Wu, Chensheng; Nelson, William; Davis, Christopher C.

    2014-10-01

    Plenoptic functions are functions that preserve all the necessary light field information of optical events. Theoretical work has demonstrated that geometric based plenoptic functions can serve equally well in the traditional wave propagation equation known as the "scalar stochastic Helmholtz equation". However, in addressing problems of 3D turbulence simulation, the dominant methods using phase screen models have limitations both in explaining the choice of parameters (on the transverse plane) in real-world measurements, and finding proper correlations between neighboring phase screens (the Markov assumption breaks down). Though possible corrections to phase screen models are still promising, the equivalent geometric approach based on plenoptic functions begins to show some advantages. In fact, in these geometric approaches, a continuous wave problem is reduced to discrete trajectories of rays. This allows for convenience in parallel computing and guarantees conservation of energy. Besides the pairwise independence of simulated rays, the assigned refractive index grids can be directly tested by temperature measurements with tiny thermoprobes combined with other parameters such as humidity level and wind speed. Furthermore, without loss of generality one can break the causal chain in phase screen models by defining regional refractive centers to allow rays that are less affected to propagate through directly. As a result, our work shows that the 3D geometric approach serves as an efficient and accurate method in assessing relevant turbulence problems with inputs of several environmental measurements and reasonable guesses (such as Cn² levels). This approach will facilitate analysis and possible corrections in lateral wave propagation problems, such as image de-blurring, prediction of laser propagation over long ranges, and improvement of free space optic communication systems. In this paper, the plenoptic function model and the relevant parallel computing algorithm are presented, and primary results and applications are demonstrated.

  4. Evaluation of concurrent priority queue algorithms. Technical report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Q.

    1991-02-01

    The priority queue is a fundamental data structure that is used in a large variety of parallel algorithms, such as multiprocessor scheduling and parallel best-first search of state-space graphs. This thesis addresses the design and experimental evaluation of two novel concurrent priority queues: a parallel Fibonacci heap and a concurrent priority pool, and compares them with the concurrent binary heap. The parallel Fibonacci heap is based on the sequential Fibonacci heap, which is theoretically the most efficient data structure for sequential priority queues. This scheme not only preserves the efficient operation time bounds of its sequential counterpart, but also has very low contention by distributing locks over the entire data structure. The experimental results show its linearly scalable throughput and speedup up to as many processors as tested (currently 18). A concurrent access scheme for a doubly linked list is described as part of the implementation of the parallel Fibonacci heap. The concurrent priority pool is based on the concurrent B-tree and the concurrent pool. The concurrent priority pool has the highest throughput among the priority queues studied. Like the parallel Fibonacci heap, the concurrent priority pool scales linearly up to as many processors as tested. The priority queues are evaluated in terms of throughput and speedup. Some applications of concurrent priority queues such as the vertex cover problem and the single source shortest path problem are tested.
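
    For contrast with the fine-grained designs evaluated in the report, the simplest concurrent priority queue guards one sequential heap with a single global lock. A minimal sketch (class and method names are hypothetical; this is the baseline whose contention the parallel Fibonacci heap and concurrent priority pool are designed to avoid):

```python
import heapq
import threading

class LockedPriorityQueue:
    """Baseline concurrent priority queue guarded by one global lock.

    Correct under concurrent access, but every operation serializes on
    the same lock, so throughput cannot scale with processor count.
    """

    def __init__(self):
        self._heap = []
        self._lock = threading.Lock()

    def insert(self, priority, item):
        with self._lock:
            heapq.heappush(self._heap, (priority, item))

    def delete_min(self):
        with self._lock:
            return heapq.heappop(self._heap) if self._heap else None
```

    Distributing many locks across the structure, as the parallel Fibonacci heap does, is precisely what replaces this serialization point and enables the linearly scalable throughput reported above.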

  5. Nuclear respiratory factor 2 regulates the expression of the same NMDA receptor subunit genes as NRF-1: both factors act by a concurrent and parallel mechanism to couple energy metabolism and synaptic transmission.

    PubMed

    Priya, Anusha; Johar, Kaid; Wong-Riley, Margaret T T

    2013-01-01

    Neuronal activity and energy metabolism are tightly coupled processes. Previously, we found that nuclear respiratory factor 1 (NRF-1) transcriptionally co-regulates energy metabolism and neuronal activity by regulating all 13 subunits of the critical energy generating enzyme, cytochrome c oxidase (COX), as well as N-methyl-d-aspartate (NMDA) receptor subunits 1 and 2B, GluN1 (Grin1) and GluN2B (Grin2b). We also found that another transcription factor, nuclear respiratory factor 2 (NRF-2 or GA-binding protein) regulates all subunits of COX as well. The goal of the present study was to test our hypothesis that NRF-2 also regulates specific subunits of NMDA receptors, and that it functions with NRF-1 via one of three mechanisms: complementary, concurrent and parallel, or a combination of complementary and concurrent/parallel. By means of multiple approaches, including in silico analysis, electrophoretic mobility shift and supershift assays, in vivo chromatin immunoprecipitation of mouse neuroblastoma cells and rat visual cortical tissue, promoter mutations, real-time quantitative PCR, and western blot analysis, NRF-2 was found to functionally regulate Grin1 and Grin2b genes, but not any other NMDA subunit genes. Grin1 and Grin2b transcripts were up-regulated by depolarizing KCl, but silencing of NRF-2 prevented this up-regulation. On the other hand, over-expression of NRF-2 rescued the down-regulation of these subunits by the impulse blocker TTX. NRF-2 binding sites on Grin1 and Grin2b are conserved among species. Our data indicate that NRF-2 and NRF-1 operate in a concurrent and parallel manner in mediating the tight coupling between energy metabolism and neuronal activity at the molecular level. Copyright © 2012 Elsevier B.V. All rights reserved.

  6. Functional networks in parallel with cortical development associate with executive functions in children.

    PubMed

    Zhong, Jidan; Rifkin-Graboi, Anne; Ta, Anh Tuan; Yap, Kar Lai; Chuang, Kai-Hsiang; Meaney, Michael J; Qiu, Anqi

    2014-07-01

    Children begin performing similarly to adults on tasks requiring executive functions in late childhood, a transition that is probably due to neuroanatomical fine-tuning processes, including myelination and synaptic pruning. In parallel to such structural changes in neuroanatomical organization, development of functional organization may also be associated with cognitive behaviors in children. We examined 6- to 10-year-old children's cortical thickness, functional organization, and cognitive performance. We used structural magnetic resonance imaging (MRI) to identify areas with cortical thinning, resting-state fMRI to identify functional organization in parallel to cortical development, and working memory/response inhibition tasks to assess executive functioning. We found that neuroanatomical changes in the form of cortical thinning spread over bilateral frontal, parietal, and occipital regions. These regions were engaged in 3 functional networks: sensorimotor and auditory, executive control, and default mode network. Furthermore, we found that working memory and response inhibition only associated with regional functional connectivity, but not topological organization (i.e., local and global efficiency of information transfer) of these functional networks. Interestingly, functional connections associated with "bottom-up" as opposed to "top-down" processing were more clearly related to children's performance on working memory and response inhibition, implying an important role for brain systems involved in late childhood. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  7. Parallel computation for biological sequence comparison: comparing a portable model to the native model for the Intel Hypercube.

    PubMed

    Nadkarni, P M; Miller, P L

    1991-01-01

    A parallel program for inter-database sequence comparison was developed on the Intel Hypercube using two models of parallel programming. One version was built using machine-specific Hypercube parallel programming commands. The other version was built using Linda, a machine-independent parallel programming language. The two versions of the program provide a case study comparing these two approaches to parallelization in an important biological application area. Benchmark tests with both programs gave comparable results with a small number of processors. As the number of processors was increased, the Linda version was somewhat less efficient. The Linda version was also run without change on Network Linda, a virtual parallel machine running on a network of desktop workstations.
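
    Linda's machine independence comes from its coordination model: processes communicate only through a shared tuple space via a few primitives (out deposits a tuple, rd reads a matching tuple, in reads and removes one). A toy single-process sketch of that model, with hypothetical class names and only the matching behavior needed for illustration (real Linda adds eval and spans machines):

```python
import threading

class TupleSpace:
    """Toy Linda-style tuple space.

    out() deposits a tuple; rd() returns a matching tuple; in_() returns
    and removes one. Patterns use None as a wildcard field. Readers block
    on a condition variable until a matching tuple is deposited.
    """

    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    @staticmethod
    def _matches(pattern, tup):
        return len(pattern) == len(tup) and all(
            p is None or p == t for p, t in zip(pattern, tup))

    def out(self, tup):
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()

    def _take(self, pattern, remove):
        with self._cond:
            while True:
                for tup in self._tuples:
                    if self._matches(pattern, tup):
                        if remove:
                            self._tuples.remove(tup)
                        return tup
                self._cond.wait()

    def in_(self, pattern):
        return self._take(pattern, remove=True)

    def rd(self, pattern):
        return self._take(pattern, remove=False)
```

    A sequence-comparison worker pool would, for example, out() one tuple per database entry and have workers in_() entries and out() score tuples, which is why the same source could run unchanged on the Hypercube and on Network Linda.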

  8. Storing files in a parallel computing system based on user-specified parser function

    DOEpatents

    Faibish, Sorin; Bent, John M; Tzelnic, Percy; Grider, Gary; Manzanares, Adam; Torres, Aaron

    2014-10-21

    Techniques are provided for storing files in a parallel computing system based on a user-specified parser function. A plurality of files generated by a distributed application in a parallel computing system are stored by obtaining a parser from the distributed application for processing the plurality of files prior to storage; and storing one or more of the plurality of files in one or more storage nodes of the parallel computing system based on the processing by the parser. The plurality of files comprise one or more of a plurality of complete files and a plurality of sub-files. The parser can optionally store only those files that satisfy one or more semantic requirements of the parser. The parser can also extract metadata from one or more of the files and the extracted metadata can be stored with one or more of the plurality of files and used for searching for files.
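
    The patented idea can be illustrated with a minimal sketch: the application supplies a parser that decides which files meet its semantic requirements and what metadata to attach before placement on storage nodes. All names below, and the round-robin placement policy, are hypothetical, not the patented mechanism itself:

```python
def store_files(files, parser, storage_nodes):
    """Store only the files the user-supplied parser accepts.

    `parser(name, payload)` returns a metadata dict for files that meet
    its semantic requirements, or None to skip the file. Accepted files
    are assigned to storage nodes round-robin by input index
    (a hypothetical placement policy for illustration).
    """
    stored = []
    for i, (name, payload) in enumerate(files):
        meta = parser(name, payload)
        if meta is None:
            continue                  # file fails the parser's requirements
        node = storage_nodes[i % len(storage_nodes)]
        stored.append({"file": name, "node": node, "meta": meta})
    return stored
```

    The extracted metadata travelling with each stored file is what later enables semantic search over the stored data, as the abstract describes.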

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Janjusic, Tommy; Kartsaklis, Christos

    Memory scalability is an enduring problem and bottleneck that plagues many parallel codes. Parallel codes designed for High Performance Systems are typically designed over the span of several, and in some instances 10+, years. As a result, optimization practices which were appropriate for earlier systems may no longer be valid and thus require careful optimization consideration. Specifically, parallel codes whose memory footprint is a function of their scalability must be carefully considered for future exa-scale systems. In this paper we present a methodology and tool to study the memory scalability of parallel codes. Using our methodology we evaluate an application's memory footprint as a function of scalability, which we coined memory efficiency, and describe our results. In particular, using our in-house tools we can pinpoint the specific application components which contribute to the application's overall memory footprint (application data structures, libraries, etc.).

  10. Differential Draining of Parallel-Fed Propellant Tanks in Morpheus and Apollo Flight

    NASA Technical Reports Server (NTRS)

    Hurlbert, Eric; Guardado, Hector; Hernandez, Humberto; Desai, Pooja

    2015-01-01

    Parallel-fed propellant tanks are an advantageous configuration for many spacecraft. Parallel-fed tanks allow the center of gravity (cg) to be maintained over the engine(s), as opposed to serial-fed propellant tanks, which result in a cg shift as propellants are drained from one tank before the other. Parallel-fed tanks also allow for tank isolation if that is needed. Parallel tanks and feed systems have been used in several past vehicles, including the Apollo Lunar Module. The design of the feed system connecting the parallel tanks is critical to maintaining balance in the propellant tanks. The design must account for and minimize the effect of manufacturing variations that could cause delta-p or mass flowrate differences, which would lead to propellant imbalance. Other sources of differential draining will be discussed. Fortunately, physics provides some self-correcting behaviors that tend to equalize any initial imbalance. Whether active control of the propellant level in each tank is required, or can be avoided, is also an important question to answer. In order to provide flight data on parallel-fed tanks and differential draining for cryogenic propellants (as well as any other fluid), a vertical test bed (flying lander) for terrestrial use was employed. The Morpheus vertical test bed is a parallel-fed propellant tank system that uses a passive design to keep the propellant tanks balanced. The system is operated in blowdown. The Morpheus vehicle was instrumented with a capacitance level sensor in each propellant tank in order to measure the draining of propellants over 34 tethered and 12 free flights. Morpheus did experience an approximately 20 lbm imbalance in one pair of tanks. The cause of this imbalance will be discussed. This paper discusses the analysis, design, flight simulation, vehicle dynamic modeling, and flight test of the Morpheus parallel-fed propellant system. The Apollo LEM data are also examined in this summary report of the flight data.

  11. Parallel Fortran-MPI software for numerical inversion of the Laplace transform and its application to oscillatory water levels in groundwater environments

    USGS Publications Warehouse

    Zhan, X.

    2005-01-01

    A parallel Fortran-MPI (Message Passing Interface) software package for numerical inversion of the Laplace transform, based on a Fourier series method, was developed to meet the need of solving intensive computational problems involving the response of oscillatory water levels to hydraulic tests in a groundwater environment. The software is a parallel version of ACM (Association for Computing Machinery) Transactions on Mathematical Software (TOMS) Algorithm 796. Running 38 test examples indicated that implementation of MPI techniques with a distributed memory architecture speeds up the processing and improves efficiency. Applications to oscillatory water levels in a well during aquifer tests are presented to illustrate how this package can be applied to solve complicated environmental problems involving differential and integral equations. The package is free and is easy to use for people with little or no previous experience with MPI who wish to get off to a quick start in parallel computing. © 2004 Elsevier Ltd. All rights reserved.
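
    The Fourier-series approach approximates the Bromwich inversion integral by a trapezoidal sum along the vertical line Re(s) = a, giving f(t) ≈ (e^(at)/T) [F(a)/2 + Σ_{k=1..N} Re(F(a + ikπ/T) e^(ikπt/T))], valid for 0 < t < 2T with the aliasing error controlled by the choice of a. A minimal serial Python sketch of this formula (this is not the TOMS Algorithm 796 code; the function name and parameter defaults are illustrative):

```python
import cmath
import math

def invert_laplace(F, t, T=10.0, a=0.9, N=20000):
    """Fourier-series inversion of a Laplace transform F(s) at time t.

    Trapezoidal discretization of the Bromwich integral along Re(s) = a,
    valid for 0 < t < 2T; larger a*T shrinks the aliasing error, larger N
    shrinks the truncation error.
    """
    total = 0.5 * F(complex(a, 0.0)).real
    for k in range(1, N + 1):
        s = complex(a, k * math.pi / T)
        total += (F(s) * cmath.exp(1j * k * math.pi * t / T)).real
    return math.exp(a * t) / T * total
```

    For F(s) = 1/(s + 1), whose inverse is e^(-t), the sketch recovers f(1) ≈ 0.368 with the defaults; a parallel version can distribute the k-sum, or the set of evaluation times t, across MPI processes.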

  12. 49 CFR 572.76 - Limbs assembly and test procedure.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... between 1g and 2g. (ii) Place the dummy legs in a plane parallel to the dummy's midsagittal plane with the knee pivot center line perpendicular to the dummy's midsagittal plane, and with the feet flat on the... parallel to the midsagittal plane at the specified velocity. (5) Guide the test probe during impact so that...

  13. An iterative method for systems of nonlinear hyperbolic equations

    NASA Technical Reports Server (NTRS)

    Scroggs, Jeffrey S.

    1989-01-01

    An iterative algorithm for the efficient solution of systems of nonlinear hyperbolic equations is presented. Parallelism is evident at several levels. In the formation of the iteration, the equations are decoupled, thereby providing large-grain parallelism. Parallelism may also be exploited within the solves for each equation. Convergence of the iteration is established via a bounding function argument. Experimental results in two dimensions are presented.

  14. Theoretical Compton profile anisotropies in molecules and solids. IV. Parallel–perpendicular anisotropies in alkali fluoride molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matcha, R.L.; Pettitt, B.M.; Ramirez, B.I.

    1979-07-15

    Calculations of Compton profiles and parallel–perpendicular anisotropies in alkali fluorides are presented and analyzed in terms of molecular charge distributions and wave function character. It is found that the parallel profile associated with the valence pi orbital is the principal factor determining the relative shapes of the total profile anisotropies in the low momentum region.

  15. Draco, Version 6.x.x

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, Kelly; Budge, Kent; Lowrie, Rob

    2016-03-03

    Draco is an object-oriented component library geared towards numerically intensive, radiation (particle) transport applications built for parallel computing hardware. It consists of semi-independent packages and a robust build system. The packages in Draco provide a set of components that can be used by multiple clients to build transport codes. The build system can also be extracted for use in clients. Software includes smart pointers, Design-by-Contract assertions, unit test framework, wrapped MPI functions, a file parser, unstructured mesh data structures, a random number generator, root finders and an angular quadrature component.

  16. Harvey Mudd 2014-2015 Computer Science Conduit Clinic Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aspesi, G; Bai, J; Deese, R

    2015-05-12

    Conduit, a new open-source library developed at Lawrence Livermore National Laboratory, provides a C++ application programming interface (API) to describe and access scientific data. Conduit’s primary use is for in-memory data exchange in high performance computing (HPC) applications. Our team tested and improved Conduit to make it more appealing to potential adopters in the HPC community. We extended Conduit’s capabilities by prototyping four libraries: one for parallel communication using MPI, one for I/O functionality, one for aggregating performance data, and one for data visualization.

  17. The BLAZE language: A parallel language for scientific programming

    NASA Technical Reports Server (NTRS)

    Mehrotra, P.; Vanrosendale, J.

    1985-01-01

    A Pascal-like scientific programming language, Blaze, is described. Blaze contains array arithmetic, forall loops, and APL-style accumulation operators, which allow natural expression of fine-grained parallelism. It also employs an applicative or functional procedure invocation mechanism, which makes it easy for compilers to extract coarse-grained parallelism using machine-specific program restructuring. Thus Blaze should allow one to achieve highly parallel execution on multiprocessor architectures, while still providing the user with conceptually sequential control flow. A central goal in the design of Blaze is portability across a broad range of parallel architectures. The multiple levels of parallelism present in Blaze code, in principle, allow a compiler to extract the types of parallelism appropriate for the given architecture while neglecting the remainder. The features of Blaze are described, and it is shown how the language would be used in typical scientific programming.

  18. Parallel and serial computing tools for testing single-locus and epistatic SNP effects of quantitative traits in genome-wide association studies

    PubMed Central

    Ma, Li; Runesha, H Birali; Dvorkin, Daniel; Garbe, John R; Da, Yang

    2008-01-01

    Background Genome-wide association studies (GWAS) using single nucleotide polymorphism (SNP) markers provide opportunities to detect epistatic SNPs associated with quantitative traits and to detect the exact mode of an epistasis effect. Computational difficulty is the main bottleneck for epistasis testing in large scale GWAS. Results The EPISNPmpi and EPISNP computer programs were developed for testing single-locus and epistatic SNP effects on quantitative traits in GWAS, including tests of three single-locus effects for each SNP (SNP genotypic effect, additive and dominance effects) and five epistasis effects for each pair of SNPs (two-locus interaction, additive × additive, additive × dominance, dominance × additive, and dominance × dominance) based on the extended Kempthorne model. EPISNPmpi is the parallel computing program for epistasis testing in large scale GWAS and achieved excellent scalability for large scale analysis and portability for various parallel computing platforms. EPISNP is the serial computing program based on the EPISNPmpi code for epistasis testing in small scale GWAS using commonly available operating systems and computer hardware. Three serial computing utility programs were developed for graphical viewing of test results and epistasis networks, and for estimating CPU time and disk space requirements. Conclusion The EPISNPmpi parallel computing program provides an effective computing tool for epistasis testing in large scale GWAS, and the epiSNP serial computing programs are convenient tools for epistasis analysis in small scale GWAS using commonly available computer hardware. PMID:18644146

  19. Radiative transfer in spherical shell atmospheres. 2: Asymmetric phase functions

    NASA Technical Reports Server (NTRS)

    Kattawar, G. W.; Adams, C. N.

    1977-01-01

    The effects are investigated of sphericity on the radiation reflected from a planet with a homogeneous, conservative scattering atmosphere of optical thicknesses of 0.25 and 1.0. A Henyey-Greenstein phase function with asymmetry factors of 0.5 and 0.7 is considered. Significant differences were found when these results were compared with the plane-parallel calculations. Also large violations of the reciprocity theorem, which is only true for plane-parallel calculations, were noted. Results are presented for the radiance versus height distributions as a function of planetary phase angle.
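    For reference, the Henyey-Greenstein phase function with asymmetry factor g has the standard form (normalized here so that its average over cos Θ is unity; conventions differing by a factor of 4π are also common):

```latex
p(\cos\Theta) = \frac{1 - g^{2}}{\left(1 + g^{2} - 2g\cos\Theta\right)^{3/2}},
\qquad
g = \tfrac{1}{2}\int_{-1}^{1} p(\cos\Theta)\,\cos\Theta\,\mathrm{d}(\cos\Theta).
```

Setting g = 0 recovers isotropic scattering; the values g = 0.5 and 0.7 used in this record give increasingly forward-peaked scattering.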

  20. Automatic Generation of Directive-Based Parallel Programs for Shared Memory Parallel Systems

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Yan, Jerry; Frumkin, Michael

    2000-01-01

    The shared-memory programming model is a very effective way to achieve parallelism on shared-memory parallel computers. As hardware and software technologies have advanced, the performance of parallel programs built with compiler directives has improved substantially. The introduction of OpenMP directives, the industry standard for shared-memory programming, has minimized the issue of portability. Due to its ease of programming and its good performance, the technique has become very popular. In this study, we have extended CAPTools, a computer-aided parallelization toolkit, to automatically generate directive-based (OpenMP) parallel programs. We outline techniques used in the implementation of the tool and present test results on the NAS parallel benchmarks and ARC3D, a CFD application. This work demonstrates the great potential of using computer-aided tools to quickly port parallel programs and also achieve good performance.

  1. 49 CFR 572.10 - Limbs.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... planes parallel to its midsagittal plane (knee pivot centerline perpendicular to the midsagittal plane...) Impact the knee with the test probe moving horizontally and parallel to the midsagittal plane at the...

  2. Developing a Shuffled Complex-Self Adaptive Hybrid Evolution (SC-SAHEL) Framework for Water Resources Management and Water-Energy System Optimization

    NASA Astrophysics Data System (ADS)

    Rahnamay Naeini, M.; Sadegh, M.; AghaKouchak, A.; Hsu, K. L.; Sorooshian, S.; Yang, T.

    2017-12-01

    Meta-heuristic optimization algorithms have gained a great deal of attention in a wide variety of fields. The simplicity and flexibility of these algorithms, along with their robustness, make them attractive tools for solving optimization problems. Different optimization methods, however, hold algorithm-specific strengths and limitations. Performance of each individual algorithm obeys the "No-Free-Lunch" theorem: no single algorithm can consistently outperform all others across the full range of possible optimization problems. From the user's perspective, it is a tedious process to compare, validate, and select the best-performing algorithm for a specific problem or a set of test cases. In this study, we introduce a new hybrid optimization framework, entitled Shuffled Complex-Self Adaptive Hybrid EvoLution (SC-SAHEL), which combines the strengths of different evolutionary algorithms (EAs) in a parallel computing scheme and allows users to select the most suitable algorithm tailored to the problem at hand. The concept of SC-SAHEL is to execute different EAs as separate parallel search cores and to let all participating EAs compete during the course of the search. The newly developed SC-SAHEL algorithm is designed to automatically select the best-performing algorithm for the given optimization problem. It is effective in finding the global optimum for several strenuous benchmark test functions, and computationally efficient compared to individual EAs. We benchmark the proposed SC-SAHEL algorithm over 29 conceptual test functions and two real-world case studies: one hydropower reservoir model and one hydrological model (SAC-SMA). Results show that the proposed framework outperforms individual EAs in an absolute majority of the test problems, and can provide results competitive with the fittest EA while yielding more comprehensive information during the search. The proposed framework is also flexible for merging additional EAs, boundary-handling techniques, and sampling schemes, and has good potential to be used in optimal operation and management of water-energy systems.
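    The competing-search-cores idea can be sketched in a few lines (a hypothetical toy, not the authors' SC-SAHEL implementation; `sphere`, `step_random`, and `step_greedy` are invented stand-ins for a benchmark function and two EA search cores):

```python
# Toy sketch of competing parallel search cores (not the SC-SAHEL code):
# each core evolves its own candidate solution on the same objective, and
# the framework reports whichever core performs best.
import random

def sphere(x):                      # simple benchmark objective (minimize)
    return sum(v * v for v in x)

def step_random(x, rng):            # core 1: random perturbation
    return [v + rng.uniform(-0.5, 0.5) for v in x]

def step_greedy(x, rng):            # core 2: contract toward the origin
    return [0.8 * v for v in x]

rng = random.Random(42)
cores = {"random": step_random, "greedy": step_greedy}
state = {name: [5.0, -3.0] for name in cores}

for _ in range(50):                 # let the cores compete
    for name, step in cores.items():
        cand = step(state[name], rng)
        if sphere(cand) < sphere(state[name]):   # keep only improvements
            state[name] = cand

best = min(state, key=lambda n: sphere(state[n]))
print(best, sphere(state[best]))
```

In the full framework the cores would exchange members of shuffled complexes and under-performing cores would be switched to the winning algorithm; the sketch keeps only the compete-and-select skeleton.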

  3. Parallel computing on Unix workstation arrays

    NASA Astrophysics Data System (ADS)

    Reale, F.; Bocchino, F.; Sciortino, S.

    1994-12-01

    We have tested arrays of general-purpose Unix workstations used as MIMD systems for massive parallel computations. In particular we have solved numerically a demanding test problem with a 2D hydrodynamic code, originally developed to study astrophysical flows, by executing it on arrays either of DECstations 5000/200 on an Ethernet LAN, or of DECstations 3000/400, equipped with powerful Alpha processors, on an FDDI LAN. The code is appropriate for data-domain decomposition, and we have used a library for parallelization previously developed in our Institute, and easily extended to work on Unix workstation arrays by using the PVM software toolset. We have compared the parallel efficiencies obtained on arrays of several processors to those obtained on a dedicated MIMD parallel system, namely a Meiko Computing Surface (CS-1), equipped with Intel i860 processors. We discuss the feasibility of using non-dedicated parallel systems and conclude that the convenience depends essentially on the size of the computational domain as compared to the relative processor power and network bandwidth. We point out that for future perspectives a parallel development of processor and network technology is important, and that the software still offers great opportunities for improvement, especially in terms of latency times in the message-passing protocols. In conditions of significant gain in terms of speedup, such workstation arrays represent a cost-effective approach to massive parallel computations.
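    The parallel efficiencies compared in this record follow the usual definitions of speedup and efficiency; a minimal sketch (hypothetical helper functions, not from the paper):

```python
# Standard definitions behind the parallel-efficiency comparisons above
# (hypothetical helpers, not the authors' code).
def speedup(t_serial, t_parallel):
    """Ratio of one-processor time to p-processor time."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_procs):
    """Speedup per processor; 1.0 is ideal linear scaling."""
    return speedup(t_serial, t_parallel) / n_procs

print(speedup(100.0, 12.5))         # 8.0
print(efficiency(100.0, 12.5, 10))  # 0.8
```

Communication latency and bandwidth show up as efficiency dropping below 1.0 as processors are added, which is why the record ties convenience to domain size versus network bandwidth.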

  4. Parallelization and implementation of approximate root isolation for nonlinear system by Monte Carlo

    NASA Astrophysics Data System (ADS)

    Khosravi, Ebrahim

    1998-12-01

    This dissertation solves a fundamental problem, isolating the real roots of nonlinear systems of equations by Monte Carlo, using an algorithm published by Bush Jones. This algorithm requires only function values and can be applied readily to complicated systems of transcendental functions. The implementation of this sequential algorithm provides scientists with the means to utilize function analysis in mathematics or other fields of science. The algorithm, however, is so computationally intensive that it is limited to a very small number of variables, which makes it infeasible for large systems of equations. A computational technique was also needed for investigating a methodology for preventing the algorithm from converging to the same root along different paths of computation. The research provides techniques for improving the efficiency and correctness of the algorithm. The sequential algorithm for this technique was corrected and a parallel algorithm is presented. This parallel method has been formally analyzed and is compared with other known methods of root isolation. The effectiveness, efficiency, and enhanced overall performance of the parallel processing of the program in comparison to sequential processing are discussed. The message-passing model was used for this parallel processing, and it is presented and implemented on an Intel i860 MIMD architecture. The parallel processing proposed in this research has been implemented in an ongoing high-energy physics experiment: the algorithm has been used to track neutrinos in the Super-K detector. This experiment is located in Japan, and data can be processed on-line or off-line, locally or remotely.
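    The core idea of isolating real roots by random sampling can be sketched for a single equation (an illustrative reduction; the dissertation's algorithm targets systems of equations and is not reproduced here):

```python
# Sketch of Monte Carlo root isolation for one nonlinear equation
# (illustrative only, not the dissertation's algorithm). Random sample
# points are sorted, and a sign change of f between neighbouring samples
# isolates an interval containing a real root -- using function values only.
import random

def isolate_roots(f, lo, hi, samples=2000, seed=1):
    rng = random.Random(seed)
    xs = sorted(rng.uniform(lo, hi) for _ in range(samples))
    intervals = []
    for a, b in zip(xs, xs[1:]):
        if f(a) == 0 or f(a) * f(b) < 0:   # sign change => root in [a, b]
            intervals.append((a, b))
    return intervals

f = lambda x: (x - 1) * (x - 2) * (x - 3)   # roots at 1, 2, 3
boxes = isolate_roots(f, 0.0, 4.0)
print(len(boxes))   # expect 3 isolating intervals
```

Each returned interval brackets a single sign change and can then be refined by bisection; dense enough sampling is what keeps two nearby roots from sharing one gap, which is part of what makes the method computationally intensive.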

  5. Multi-petascale highly efficient parallel supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaflop-scale includes node architectures based upon System-on-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC). The ASIC nodes are interconnected by a five-dimensional torus network that maximizes the throughput of packet communications between nodes and minimizes latency. The network implements a collective network and a global asynchronous network that provides global barrier and notification functions. Integrated into the node design is a list-based prefetcher. The memory system implements transactional memory, thread-level speculation, and a multiversioning cache that improves soft error rate and at the same time supports DMA functionality, allowing for parallel message-passing processing.

  6. Development and parallelization of a direct numerical simulation to study the formation and transport of nanoparticle clusters in a viscous fluid

    NASA Astrophysics Data System (ADS)

    Sloan, Gregory James

    The direct numerical simulation (DNS) offers the most accurate approach to modeling the behavior of a physical system, but carries an enormous computation cost. There exists a need for an accurate DNS to model the coupled solid-fluid system seen in targeted drug delivery (TDD), nanofluid thermal energy storage (TES), as well as other fields where experiments are necessary but experiment design may be costly. A parallel DNS can greatly reduce the large computation times required, while providing the same results and functionality as the serial counterpart. A D2Q9 lattice Boltzmann method approach was implemented to solve the fluid phase. The use of domain decomposition with message passing interface (MPI) parallelism resulted in an algorithm that exhibits super-linear scaling in testing, which may be attributed to the caching effect. Decreased performance on a per-node basis for a fixed number of processes confirms this observation. A multiscale approach was implemented to model the behavior of nanoparticles submerged in a viscous fluid, and used to examine the mechanisms that promote or inhibit clustering. Parallelization of this model using a master-worker algorithm with MPI gives less-than-linear speedup for a fixed number of particles and a varying number of processes. This is due to the inherent inefficiency of the master-worker approach. Lastly, these separate simulations are combined, and two-way coupling is implemented between the solid and fluid phases.
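    The master-worker pattern the record describes can be sketched with a thread pool standing in for MPI ranks (illustrative only; the particle update is an invented drift step, not the paper's multiscale model):

```python
# Sketch of the master-worker pattern described above, with a thread pool
# standing in for MPI ranks (illustrative only). The master hands out
# independent particle-update tasks; idle workers pull the next one. All
# scheduling funnels through the master, which is the inefficiency the
# record notes.
from concurrent.futures import ThreadPoolExecutor

def update_particle(p):
    """Invented drift step: advance position by velocity over dt."""
    x, y, vx, vy = p
    dt = 0.1
    return (x + vx * dt, y + vy * dt, vx, vy)

particles = [(0.0, 0.0, 1.0, 2.0), (1.0, 1.0, -1.0, 0.0)]
with ThreadPoolExecutor(max_workers=4) as pool:          # the "workers"
    particles = list(pool.map(update_particle, particles))  # master dispatch
print(particles)
```

Domain decomposition, by contrast, assigns each process a fixed region and exchanges only boundary data, which is why the fluid solver in this record scales so much better than the particle model.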

  7. Sequential or parallel decomposed processing of two-digit numbers? Evidence from eye-tracking.

    PubMed

    Moeller, Korbinian; Fischer, Martin H; Nuerk, Hans-Christoph; Willmes, Klaus

    2009-02-01

    While reaction time data have shown that decomposed processing of two-digit numbers occurs, there is little evidence about how decomposed processing functions. Poltrock and Schwartz (1984) argued that multi-digit numbers are compared in a sequential digit-by-digit fashion starting at the leftmost digit pair. In contrast, Nuerk and Willmes (2005) favoured parallel processing of the digits constituting a number. These models (i.e., sequential decomposition, parallel decomposition) make different predictions regarding the fixation pattern in a two-digit number magnitude comparison task and can therefore be differentiated by eye fixation data. We tested these models by evaluating participants' eye fixation behaviour while selecting the larger of two numbers. The stimulus set consisted of within-decade comparisons (e.g., 53_57) and between-decade comparisons (e.g., 42_57). The between-decade comparisons were further divided into compatible and incompatible trials (cf. Nuerk, Weger, & Willmes, 2001) and trials with different decade and unit distances. The observed fixation pattern implies that the comparison of two-digit numbers is not executed by sequentially comparing decade and unit digits as proposed by Poltrock and Schwartz (1984) but rather in a decomposed but parallel fashion. Moreover, the present fixation data provide first evidence that digit processing in multi-digit numbers is not a pure bottom-up effect, but is also influenced by top-down factors. Finally, implications for multi-digit number processing beyond the range of two-digit numbers are discussed.
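    The decade-unit compatibility manipulation described above can be made concrete with a small classifier (hypothetical helper, assuming unequal unit digits for between-decade pairs):

```python
# Sketch of the decade-unit compatibility classification for two-digit
# comparison stimuli (cf. Nuerk, Weger, & Willmes, 2001). A between-decade
# pair is compatible when the decade and unit comparisons point the same
# way (42_57: 4 < 5 and 2 < 7) and incompatible otherwise (47_62).
def classify(n1, n2):
    d1, u1 = divmod(n1, 10)
    d2, u2 = divmod(n2, 10)
    if d1 == d2:
        return "within-decade"
    decade_larger = d1 > d2
    unit_larger = u1 > u2          # assumes unequal unit digits
    return "compatible" if decade_larger == unit_larger else "incompatible"

print(classify(53, 57))  # within-decade
print(classify(42, 57))  # compatible
print(classify(47, 62))  # incompatible
```

Under parallel decomposed processing the irrelevant unit comparison interferes on incompatible trials, which is the compatibility effect the eye-tracking design exploits.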

  8. Study of adaptation to altered gravity through systems analysis of motor control.

    PubMed

    Fox, R A; Daunton, N G; Corcoran, M L

    1998-01-01

    Maintenance of posture and production of functional, coordinated movement demand integration of sensory feedback with spinal and supra-spinal circuitry to produce adaptive motor control in altered gravity (G). To investigate neuroplastic processes leading to optimal performance in altered G we have studied motor control in adult rats using a battery of motor function tests following chronic exposure to various treatments (hyper-G, hindlimb suspension, chemical destruction of hair cells, space flight). These treatments differentially affect muscle fibers, vestibular receptors, and behavioral compensations and, in consequence, differentially disrupt air righting, swimming, posture and gait. The time-course of recovery from these disruptions varies depending on the function tested and the duration and type of treatment. These studies, with others (e.g., D'Amelio et al. in this volume), indicate that adaptation to altered gravity involves alterations in multiple sensory-motor systems that change at different rates. We propose that the use of parallel studies under different altered G conditions will most efficiently lead to an understanding of the modifications in central (neural) and peripheral (sensory and neuromuscular) systems that underlie sensory-motor adaptation in active, intact individuals.

  9. Study of adaptation to altered gravity through systems analysis of motor control

    NASA Astrophysics Data System (ADS)

    Fox, R. A.; Daunton, N. G.; Corcoran, M. L.

    Maintenance of posture and production of functional, coordinated movement demand integration of sensory feedback with spinal and supra-spinal circuitry to produce adaptive motor control in altered gravity (G). To investigate neuroplastic processes leading to optimal performance in altered G we have studied motor control in adult rats using a battery of motor function tests following chronic exposure to various treatments (hyper-G, hindlimb suspension, chemical destruction of hair cells, space flight). These treatments differentially affect muscle fibers, vestibular receptors, and behavioral compensations and, in consequence, differentially disrupt air righting, swimming, posture and gait. The time-course of recovery from these disruptions varies depending on the function tested and the duration and type of treatment. These studies, with others (e.g., D'Amelio et al. in this volume), indicate that adaptation to altered gravity involves alterations in multiple sensory-motor systems that change at different rates. We propose that the use of parallel studies under different altered G conditions will most efficiently lead to an understanding of the modifications in central (neural) and peripheral (sensory and neuromuscular) systems that underlie sensory-motor adaptation in active, intact individuals.

  10. Executive functioning and general cognitive ability in pregnant women and matched controls.

    PubMed

    Onyper, Serge V; Searleman, Alan; Thacher, Pamela V; Maine, Emily E; Johnson, Alicia G

    2010-11-01

    The current study compared the performances of pregnant women with education- and age-matched controls on a variety of measures that assessed perceptual speed, short-term and working memory capacity, subjective memory complaints, sleep quality, level of fatigue, executive functioning, episodic and prospective memory, and crystallized and fluid intelligence. A primary purpose was to test the hypothesis of Henry and Rendell (2007) that pregnancy-related declines in cognitive functioning would be especially evident in tasks that place a high demand on executive processes. We also investigated a parallel hypothesis: that the pregnant women would experience a broad-based reduction in cognitive capability. Very limited support was found for the executive functioning hypothesis. Pregnant women scored lower only on the measure of verbal fluency (Controlled Oral Word Association Test, COWAT) but not on the Wisconsin Card Sorting Task or on any working memory measures. Furthermore, group differences in COWAT performance disappeared after controlling for verbal IQ (Shipley vocabulary). In addition, there was no support for the general decline hypothesis. We conclude that pregnancy-associated differences in performance observed in the current study were relatively mild and rarely reached either clinical or practical significance.

  11. Wake vortex effects on parallel runway operations

    DOT National Transportation Integrated Search

    2003-01-06

    Aircraft wake vortex behavior in ground effect between two parallel runways at Frankfurt/Main International Airport was studied. The distance and time of vortex demise were examined as a function of crosswind, aircraft type, and a measure of atmosphe...

  12. Simple Approaches to Minimally-Instrumented, Microfluidic-Based Point-of-Care Nucleic Acid Amplification Tests

    PubMed Central

    Mauk, Michael G.; Song, Jinzhao; Liu, Changchun; Bau, Haim H.

    2018-01-01

    Designs and applications of microfluidics-based devices for molecular diagnostics (nucleic acid amplification tests, NAATs) in infectious disease testing are reviewed, with emphasis on minimally instrumented, point-of-care (POC) tests for resource-limited settings. Microfluidic cartridges (‘chips’) that combine solid-phase nucleic acid extraction; isothermal enzymatic nucleic acid amplification; pre-stored, paraffin-encapsulated lyophilized reagents; and real-time or endpoint optical detection are described. These chips can be used with a companion module for separating plasma from blood through a combined sedimentation-filtration effect. Three reporter types (fluorescence, colorimetric dyes, and bioluminescence) and a new paradigm for end-point detection based on a diffusion-reaction column are compared. Multiplexing (parallel amplification and detection of multiple targets) is demonstrated. Low-cost detection and added functionality (data analysis, control, communication) can be realized using a cellphone platform with the chip. Some related and similar-purposed approaches by others are surveyed. PMID:29495424

  13. Cellular automata with object-oriented features for parallel molecular network modeling.

    PubMed

    Zhu, Hao; Wu, Yinghui; Huang, Sui; Sun, Yan; Dhar, Pawan

    2005-06-01

    Cellular automata are an important modeling paradigm for studying the dynamics of large, parallel systems composed of multiple, interacting components. However, to model biological systems, cellular automata need to be extended beyond large-scale parallelism and intensive communication in order to capture two fundamental properties characteristic of complex biological systems: hierarchy and heterogeneity. This paper proposes extensions to a cellular automata language, Cellang, to meet this purpose. The extended language, with object-oriented features, can be used to describe the structure and activity of parallel molecular networks within cells. Capabilities of this new programming language include object structure to define molecular programs within a cell, a floating-point data type and mathematical functions to perform quantitative computation, message passing capability to describe molecular interactions, as well as new operators, statements, and built-in functions. We discuss relevant programming issues of these features, including the object-oriented description of molecular interactions with molecule encapsulation, message passing, and the description of heterogeneity and anisotropy at the cell and molecule levels. By enabling the integration of modeling at the molecular level with system behavior at cell, tissue, organ, or even organism levels, the program will help improve our understanding of how complex and dynamic biological activities are generated and controlled by parallel functioning of molecular networks.
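    The synchronous, locally communicating update that cellular automata perform can be sketched in a few lines (plain Python with an invented majority rule, not Cellang):

```python
# Minimal cellular-automaton update loop (plain Python, not Cellang).
# Every cell computes its next state from its neighbours; the update is
# logically parallel because all reads use the old generation and all
# writes go to the new one.
def step(cells):
    n = len(cells)
    return [
        # invented toy rule: a cell takes the majority of itself and its
        # two neighbours (periodic boundary)
        1 if cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n] >= 2 else 0
        for i in range(n)
    ]

gen0 = [0, 1, 1, 0, 0, 1, 0, 0]
gen1 = step(gen0)
print(gen1)  # [0, 1, 1, 0, 0, 0, 0, 0]
```

The Cellang extensions described above layer object structure and explicit message passing on top of exactly this kind of synchronous neighbourhood update, so that each "cell" can carry a molecular program rather than a single scalar state.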

  14. Parallelism Effects and Verb Activation: The Sustained Reactivation Hypothesis

    ERIC Educational Resources Information Center

    Callahan, Sarah M.; Shapiro, Lewis P.; Love, Tracy

    2010-01-01

    This study investigated the processes underlying parallelism by evaluating the activation of a parallel element (i.e., a verb) throughout "and"-coordinated sentences. Four points were tested: (1) approximately 1,600ms after the verb in the first conjunct (PP1), (2) immediately following the conjunction (PP2), (3) approximately 1,100ms after the…

  15. Energy distribution functions of kilovolt ions in a modified Penning discharge.

    NASA Technical Reports Server (NTRS)

    Roth, J. R.

    1973-01-01

    The distribution function of ion energy parallel to the magnetic field of a modified Penning discharge has been measured with a retarding potential energy analyzer. These ions escaped through one of the throats of the magnetic mirror geometry. Simultaneous measurements of the ion energy distribution function perpendicular to the magnetic field have been made with a charge-exchange neutral detector. The ion energy distribution functions are approximately Maxwellian, and the parallel and perpendicular kinetic temperatures are equal within experimental error. These results suggest that turbulent processes previously observed in this discharge Maxwellianize the velocity distribution along a radius in velocity space, and result in an isotropic energy distribution.
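    For reference, a Maxwellian distribution of the velocity component parallel to the field, with parallel kinetic temperature T∥, has the standard one-dimensional form (n is the ion density, m the ion mass, k Boltzmann's constant):

```latex
f(v_{\parallel}) = n \left( \frac{m}{2\pi k T_{\parallel}} \right)^{1/2}
\exp\!\left( -\frac{m v_{\parallel}^{2}}{2 k T_{\parallel}} \right)
```

The reported result that the parallel and perpendicular kinetic temperatures are equal within experimental error is what makes the overall energy distribution isotropic.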

  16. Energy distribution functions of kilovolt ions in a modified Penning discharge.

    NASA Technical Reports Server (NTRS)

    Roth, J. R.

    1972-01-01

    The distribution function of ion energy parallel to the magnetic field of a modified Penning discharge has been measured with a retarding potential energy analyzer. These ions escaped through one of the throats of the magnetic mirror geometry. Simultaneous measurements of the ion energy distribution function perpendicular to the magnetic field have been made with a charge-exchange neutral detector. The ion energy distribution functions are approximately Maxwellian, and the parallel and perpendicular kinetic temperatures are equal within experimental error. These results suggest that turbulent processes previously observed in this discharge Maxwellianize the velocity distribution along a radius in velocity space, and result in an isotropic energy distribution.

  17. Chaste: A test-driven approach to software development for biological modelling

    NASA Astrophysics Data System (ADS)

    Pitt-Francis, Joe; Pathmanathan, Pras; Bernabeu, Miguel O.; Bordas, Rafel; Cooper, Jonathan; Fletcher, Alexander G.; Mirams, Gary R.; Murray, Philip; Osborne, James M.; Walter, Alex; Chapman, S. Jon; Garny, Alan; van Leeuwen, Ingeborg M. M.; Maini, Philip K.; Rodríguez, Blanca; Waters, Sarah L.; Whiteley, Jonathan P.; Byrne, Helen M.; Gavaghan, David J.

    2009-12-01

    Chaste ('Cancer, heart and soft-tissue environment') is a software library and a set of test suites for computational simulations in the domain of biology. Current functionality has arisen from modelling in the fields of cancer, cardiac physiology and soft-tissue mechanics. It is released under the LGPL 2.1 licence. Chaste has been developed using agile programming methods. The project began in 2005 when it was reasoned that the modelling of a variety of physiological phenomena required both a generic mathematical modelling framework and a generic computational/simulation framework. The Chaste project evolved from the Integrative Biology (IB) e-Science Project, an inter-institutional project aimed at developing a suitable IT infrastructure to support physiome-level computational modelling, with a primary focus on cardiac and cancer modelling.

    Program summary
    Program title: Chaste
    Catalogue identifier: AEFD_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFD_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: LGPL 2.1
    No. of lines in distributed program, including test data, etc.: 5 407 321
    No. of bytes in distributed program, including test data, etc.: 42 004 554
    Distribution format: tar.gz
    Programming language: C++
    Operating system: Unix
    Has the code been vectorised or parallelized?: Yes, parallelized using MPI.
    RAM: < 90 megabytes for two of the scenarios described in Section 6 of the manuscript (monodomain re-entry on a slab; cylindrical crypt simulation); up to 16 gigabytes (distributed across processors) for a full-resolution bidomain cardiac simulation.
    Classification: 3
    External routines: Boost, CodeSynthesis XSD, CxxTest, HDF5, METIS, MPI, PETSc, Triangle, Xerces
    Nature of problem: Chaste may be used for solving coupled ODE and PDE systems arising from modelling biological systems. Use of Chaste in two application areas is described in this paper: cardiac electrophysiology and intestinal crypt dynamics.
    Solution method: Coupled multi-physics with PDE, ODE and discrete mechanics simulation.
    Running time: The largest cardiac simulation described in the manuscript takes about 6 hours to run on a single 3 GHz core. See the results section (Section 6) of the manuscript for discussion of parallel scaling.
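    The test-driven style described above (CxxTest suites in C++) can be illustrated with a stdlib-Python analogue (hypothetical; `euler` and the tolerance are invented for the sketch, and Chaste itself is C++ tested with CxxTest): the test pins down the expected solver behaviour, and the implementation is written to satisfy it.

```python
# Test-first sketch in the spirit described above (Python stdlib analogue
# of a CxxTest suite; invented example, not Chaste code). The test fixes
# the expected behaviour of a forward-Euler ODE step.
import math
import unittest

def euler(f, y0, t0, t1, n):
    """Integrate y' = f(t, y) from t0 to t1 with n forward-Euler steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

class TestEuler(unittest.TestCase):
    def test_exponential_decay(self):
        # y' = -y, y(0) = 1  =>  y(1) = exp(-1); Euler converges as h -> 0
        approx = euler(lambda t, y: -y, 1.0, 0.0, 1.0, 10000)
        self.assertAlmostEqual(approx, math.exp(-1), places=4)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestEuler)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

In a test-driven workflow the suite above would be written before (or alongside) the solver and run on every change, which is how a library of Chaste's size keeps its numerics trustworthy.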

  18. Evaluation of stability of osteosynthesis with K-wires on an artificial model of tibial malleolus fracture.

    PubMed

    Bumči, Igor; Vlahović, Tomislav; Jurić, Filip; Žganjer, Mirko; Miličić, Gordana; Wolf, Hinko; Antabak, Anko

    2015-11-01

    Paediatric ankle fractures comprise approximately 4% of all paediatric fractures and 30% of all epiphyseal fractures. Integrity of the ankle "mortise", which consists of the tibial and fibular malleoli, is significant for stability and function of the ankle joint. Tibial malleolar fractures are classified as SH III or SH IV intra-articular fractures and, in cases where the fragments are displaced, anatomic reposition and fixation is mandatory. Type SH III-IV fractures of the tibial malleolus are usually treated with open reduction and fixation with cannulated screws that are parallel to the physis. Two K-wires are used for temporary stabilisation of fragments during reduction. A third "guide wire" for the screw is then placed parallel with the physis. Considering the rules of mechanics, it is assumed that the two temporary pins with the additional third pin placed parallel to the physis create a strong triangle and thus provide strong fracture fixation. To prove this hypothesis, an experiment was conducted on artificial models of the lower end of the tibia from the company "Sawbones". Each model had been sawn in a way that imitates a fracture of the medial malleolus and then reattached with 1.8 mm pins in various combinations. Prepared models were then tested for tensile and pressure forces. The least stable model was that in which the fractured pieces were attached with only two parallel pins. The most stable model comprised three pins, where two crossed pins were inserted in the opposite compact bone and the third pin was inserted through the epiphysis parallel with and below the growth plate. A potential method of choice for fixation of tibial malleolar fractures comprises three K-wires, where two crossed pins are placed in the opposite compact bone and one is parallel with the growth plate. The benefits associated with this method include shorter operating times and avoidance of a second operation for screw removal. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. A divide-conquer-recombine algorithmic paradigm for large spatiotemporal quantum molecular dynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shimojo, Fuyuki; Hattori, Shinnosuke; Department of Physics, Kumamoto University, Kumamoto 860-8555

    We introduce an extension of the divide-and-conquer (DC) algorithmic paradigm called divide-conquer-recombine (DCR) to perform large quantum molecular dynamics (QMD) simulations on massively parallel supercomputers, in which interatomic forces are computed quantum mechanically in the framework of density functional theory (DFT). In DCR, the DC phase constructs globally informed, overlapping local-domain solutions, which in the recombine phase are synthesized into a global solution encompassing large spatiotemporal scales. For the DC phase, we design a lean divide-and-conquer (LDC) DFT algorithm, which significantly reduces the prefactor of the O(N) computational cost for N electrons by applying a density-adaptive boundary condition at the peripheries of the DC domains. Our globally scalable and locally efficient solver is based on a hybrid real-reciprocal space approach that combines: (1) a highly scalable real-space multigrid to represent the global charge density; and (2) a numerically efficient plane-wave basis for local electronic wave functions and charge density within each domain. Hybrid space-band decomposition is used to implement the LDC-DFT algorithm on parallel computers. A benchmark test on an IBM Blue Gene/Q computer exhibits an isogranular parallel efficiency of 0.984 on 786 432 cores for a 50.3 × 10^6-atom SiC system. As a test of production runs, an LDC-DFT-based QMD simulation involving 16 661 atoms is performed on the Blue Gene/Q to study on-demand production of hydrogen gas from water using LiAl alloy particles. As an example of the recombine phase, LDC-DFT electronic structures are used as a basis set to describe global photoexcitation dynamics with nonadiabatic QMD (NAQMD) and kinetic Monte Carlo (KMC) methods. The NAQMD simulations are based on linear-response time-dependent density functional theory to describe electronic excited states and a surface-hopping approach to describe transitions between the excited states. A series of techniques is employed for efficiently calculating the long-range exact exchange correction and excited-state forces. The NAQMD trajectories are analyzed to extract the rates of various excitonic processes, which are then used in the KMC simulation to study the dynamics of the global exciton flow network. This has allowed the study of large-scale photoexcitation dynamics in a 6400-atom amorphous molecular solid, reaching experimental time scales.

  20. Computational Issues in Damping Identification for Large Scale Problems

    NASA Technical Reports Server (NTRS)

    Pilkey, Deborah L.; Roe, Kevin P.; Inman, Daniel J.

    1997-01-01

    Two damping identification methods are tested for efficiency in large-scale applications. One is an iterative routine, and the other a least-squares method. Numerical simulations have been performed on multiple-degree-of-freedom models to test the effectiveness of the algorithms and the usefulness of parallel computation for these problems. High Performance Fortran is used to parallelize the algorithms. Tests were performed using the IBM SP2 at NASA Ames Research Center. The least-squares method tested incurs high communication costs, which reduces the benefit of high-performance computing. This method's memory requirement grows at a very rapid rate, meaning that larger problems can quickly exceed available computer memory. The iterative method's memory requirement grows at a much slower pace and is able to handle problems with 500+ degrees of freedom on a single processor. This method benefits from parallelization, and significant speedup can be seen for problems of 100+ degrees of freedom.

  1. Respiratory muscle function in infants with spinal muscular atrophy type I.

    PubMed

    Finkel, Richard S; Weiner, Daniel J; Mayer, Oscar H; McDonough, Joseph M; Panitch, Howard B

    2014-12-01

    To determine the feasibility and safety of respiratory muscle function testing in weak infants with a progressive neuromuscular disorder. Respiratory insufficiency is the major cause of morbidity and mortality in infants with spinal muscular atrophy type I (SMA-I). Tests of respiratory muscle strength, endurance, and breathing patterns can be performed safely in SMA-I infants. Useful data can be collected which parallels the clinical course of pulmonary function in SMA-I. An exploratory study of respiratory muscle function testing and breathing patterns in seven infants with SMA-I seen in our neuromuscular clinic. Measurements were made at initial study visit and, where possible, longitudinally over time. We measured maximal inspiratory (MIP) and transdiaphragmatic pressures, mean transdiaphragmatic pressure, airway occlusion pressure at 100 msec of inspiration, inspiratory and total respiratory cycle time, and aspects of relative thoracoabdominal motion using respiratory inductive plethysmography (RIP). The tension time index of the diaphragm and of the respiratory muscles, phase angle (Φ), phase relation during the total breath, and labored breathing index were calculated. Age at baseline study was 54-237 (median 131) days. Reliable data were obtained safely for MIP, phase angle, labored breathing index, and the invasive and non-invasive tension time indices, even in very weak infants. Data obtained corresponded to the clinical estimate of severity and predicted the need for respiratory support. The testing employed was both safe and feasible. Measurements of MIP and RIP are easily performed tests that are well tolerated and provide clinically useful information for infants with SMA-I. © 2014 Wiley Periodicals, Inc.

  2. Liquid-Nitrogen Test for Blocked Tubes

    NASA Technical Reports Server (NTRS)

    Wagner, W. R.

    1984-01-01

    Nondestructive test identifies obstructed tube in array of parallel tubes. Trickle of liquid nitrogen allowed to flow through tube array until array accumulates substantial formation of frost from moisture in air. Flow stopped and warm air introduced into inlet manifold to heat tubes in array. Tubes still frosted after others defrosted identified as obstructed tubes. Applications include inspection of flow systems having parallel legs.

  3. Sustainability Attitudes and Behavioral Motivations of College Students: Testing the Extended Parallel Process Model

    ERIC Educational Resources Information Center

    Perrault, Evan K.; Clark, Scott K.

    2018-01-01

    Purpose: A planet that can no longer sustain life is a frightening thought--and one that is often present in mass media messages. Therefore, this study aims to test the components of a classic fear appeal theory, the extended parallel process model (EPPM) and to determine how well its constructs predict sustainability behavioral intentions. This…

  4. Parallel computation for biological sequence comparison: comparing a portable model to the native model for the Intel Hypercube.

    PubMed Central

    Nadkarni, P. M.; Miller, P. L.

    1991-01-01

    A parallel program for inter-database sequence comparison was developed on the Intel Hypercube using two models of parallel programming. One version was built using machine-specific Hypercube parallel programming commands. The other version was built using Linda, a machine-independent parallel programming language. The two versions of the program provide a case study comparing these two approaches to parallelization in an important biological application area. Benchmark tests with both programs gave comparable results with a small number of processors. As the number of processors was increased, the Linda version was somewhat less efficient. The Linda version was also run without change on Network Linda, a virtual parallel machine running on a network of desktop workstations. PMID:1807632
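Linda's coordination model rests on a small set of tuple-space operations (out, in, rd, eval) that are independent of any one machine. As an illustration only (not the authors' code), a minimal single-process Python analog of the first three primitives might look like the sketch below; the class and method names are assumptions, and real Linda coordinates distributed processes rather than threads in one interpreter.

```python
# Minimal in-process analog of Linda's tuple-space primitives out()/in_()/rd().
# Illustrative sketch only: names are assumptions, not the paper's code.
import threading

class TupleSpace:
    def __init__(self):
        self._tuples = []
        self._cv = threading.Condition()

    def out(self, tup):
        # deposit a tuple into the space
        with self._cv:
            self._tuples.append(tup)
            self._cv.notify_all()

    def _find(self, pattern):
        # None fields in the pattern act as wildcards ("formals")
        for t in self._tuples:
            if len(t) == len(pattern) and all(
                p is None or p == v for p, v in zip(pattern, t)
            ):
                return t
        return None

    def in_(self, pattern):
        # destructive read: blocks until a matching tuple exists, then removes it
        with self._cv:
            while (t := self._find(pattern)) is None:
                self._cv.wait()
            self._tuples.remove(t)
            return t

    def rd(self, pattern):
        # non-destructive read: blocks until a match exists, leaves it in place
        with self._cv:
            while (t := self._find(pattern)) is None:
                self._cv.wait()
            return t
```

A sequence-comparison worker could deposit results with out(("score", seq_id, score)) while a collector blocks in in_(("score", None, None)); Network Linda extends the same model across a network of workstations.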

  5. Production of yarns composed of oriented nanofibers for ophthalmological implants

    NASA Astrophysics Data System (ADS)

    Shynkarenko, A.; Klapstova, A.; Krotov, A.; Moucka, M.; Lukas, D.

    2017-10-01

    Parallelized nanofibrous structures are commonly used in the medical sector, especially for ophthalmological implants. In this research, a self-fabricated device was tested for improved collection and twisting of parallel nanofibers. Previously, manual techniques were used to collect the nanofibers before twist was applied, whereas in our device different parameters can be optimized to obtain parallel nanofibers and then apply the twist. The device brings automation to the technique of achieving parallel fibrous structures for medical applications.

  6. Data communications in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2013-11-12

    Data communications in a parallel active messaging interface (`PAMI`) of a parallel computer composed of compute nodes that execute a parallel application, each compute node including application processors that execute the parallel application and at least one management processor dedicated to gathering information regarding data communications. The PAMI is composed of data communications endpoints, each endpoint composed of a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes and the endpoints coupled for data communications through the PAMI and through data communications resources. Embodiments function by gathering call site statistics describing data communications resulting from execution of data communications instructions and identifying, in dependence upon the call site statistics, a data communications algorithm for use in executing a data communications instruction at a call site in the parallel application.

  7. Computational efficiency of parallel combinatorial OR-tree searches

    NASA Technical Reports Server (NTRS)

    Li, Guo-Jie; Wah, Benjamin W.

    1990-01-01

    The performance of parallel combinatorial OR-tree searches is analytically evaluated. This performance depends on the complexity of the problem to be solved, the error allowance function, the dominance relation, and the search strategies. The exact performance may be difficult to predict due to the nondeterminism and anomalies of parallelism. The authors derive the performance bounds of parallel OR-tree searches with respect to the best-first, depth-first, and breadth-first strategies, and verify these bounds by simulation. They show that a near-linear speedup can be achieved with respect to a large number of processors for parallel OR-tree searches. Using the bounds developed, the authors derive sufficient conditions for assuring that parallelism will not degrade performance and necessary conditions for allowing parallelism to have a speedup greater than the ratio of the numbers of processors. These bounds and conditions provide the theoretical foundation for determining the number of processors required to assure a near-linear speedup.

  8. The BLAZE language - A parallel language for scientific programming

    NASA Technical Reports Server (NTRS)

    Mehrotra, Piyush; Van Rosendale, John

    1987-01-01

    A Pascal-like scientific programming language, BLAZE, is described. BLAZE contains array arithmetic, forall loops, and APL-style accumulation operators, which allow natural expression of fine grained parallelism. It also employs an applicative or functional procedure invocation mechanism, which makes it easy for compilers to extract coarse grained parallelism using machine specific program restructuring. Thus BLAZE should allow one to achieve highly parallel execution on multiprocessor architectures, while still providing the user with conceptually sequential control flow. A central goal in the design of BLAZE is portability across a broad range of parallel architectures. The multiple levels of parallelism present in BLAZE code, in principle, allow a compiler to extract the types of parallelism appropriate for the given architecture while neglecting the remainder. The features of BLAZE are described and it is shown how this language would be used in typical scientific programming.

  9. Linearly exact parallel closures for slab geometry

    NASA Astrophysics Data System (ADS)

    Ji, Jeong-Young; Held, Eric D.; Jhang, Hogun

    2013-08-01

    Parallel closures are obtained by solving a linearized kinetic equation with a model collision operator using the Fourier transform method. The closures expressed in wave number space are exact for time-dependent linear problems to within the limits of the model collision operator. In the adiabatic, collisionless limit, an inverse Fourier transform is performed to obtain integral (nonlocal) parallel closures in real space; parallel heat flow and viscosity closures for density, temperature, and flow velocity equations replace Braginskii's parallel closure relations, and parallel flow velocity and heat flow closures for density and temperature equations replace Spitzer's parallel transport relations. It is verified that the closures reproduce the exact linear response function of Hammett and Perkins [Phys. Rev. Lett. 64, 3019 (1990)] for Landau damping given a temperature gradient. In contrast to their approximate closures where the vanishing viscosity coefficient numerically gives an exact response, our closures relate the heat flow and nonvanishing viscosity to temperature and flow velocity (gradients).

  10. Effects of an antiandrogenic oral contraceptive pill compared with metformin on blood coagulation tests and endothelial function in women with the polycystic ovary syndrome: influence of obesity and smoking.

    PubMed

    Luque-Ramírez, Manuel; Mendieta-Azcona, Covandonga; del Rey Sánchez, José M; Matíes, Milagro; Escobar-Morreale, Héctor F

    2009-03-01

    To study the blood clotting tests and endothelial function of polycystic ovary syndrome (PCOS) patients and non-hyperandrogenic women, and their changes during PCOS treatment, as a function of the presence of obesity and smoking. Case-control study followed by a randomized clinical trial. Blood clotting and endothelial function were analyzed in 40 PCOS patients and 20 non-hyperandrogenic women. Thirty-four PCOS women were randomized to an oral contraceptive containing 35 microg ethinyl-estradiol plus 2 mg cyproterone acetate (Diane 35 Diario) or metformin (850 mg twice daily), monitoring the changes on these parameters during 24 weeks of treatment. The influence of obesity and smoking was also analyzed. Blood clotting and endothelial function tests were similar among PCOS patients and controls with the exception of a higher platelet count in the former. Obesity increased circulating fibrinogen levels, prothrombin activity and platelet counts, and reduced prothrombin and activated partial thromboplastin times. Smoking increased fibrinogen levels, platelet counts, and prothrombin activity, and reduced prothrombin time, in relation to the larger waist circumference of smokers. Irrespective of the treatment received, PCOS patients showed a decrease in prothrombin time and an increase in prothrombin activity, with a parallel increase in homocysteine levels in metformin users. The activated partial thromboplastin time decreased markedly in the patients treated with Diane 35 Diario. Finally, flow-mediated dilation improved in non-smokers irrespective of the drug received, but worsened in smokers. Oral contraceptives and metformin may exert deleterious effects on blood clotting tests of PCOS women, yet the effects of metformin appear to be milder. Because smoking potentiates some of these effects and deteriorates endothelial function, smoking cessation should be promoted in PCOS patients.

  11. The plant metacaspase AtMC1 in pathogen-triggered programmed cell death and aging: functional linkage with autophagy

    PubMed Central

    Coll, N S; Smidler, A; Puigvert, M; Popa, C; Valls, M; Dangl, J L

    2014-01-01

    Autophagy is a major nutrient recycling mechanism in plants. However, its functional connection with programmed cell death (PCD) is a topic of active debate and remains not well understood. Our previous studies established the plant metacaspase AtMC1 as a positive regulator of pathogen-triggered PCD. Here, we explored the linkage between plant autophagy and AtMC1 function in the context of pathogen-triggered PCD and aging. We observed that autophagy acts as a positive regulator of pathogen-triggered PCD in a parallel pathway to AtMC1. In addition, we unveiled an additional, pro-survival homeostatic function of AtMC1 in aging plants that acts in parallel to a similar pro-survival function of autophagy. This novel pro-survival role of AtMC1 may be functionally related to its prodomain-mediated aggregate localization and potential clearance, in agreement with recent findings using the single budding yeast metacaspase YCA1. We propose a unifying model whereby autophagy and AtMC1 are part of parallel pathways, both positively regulating HR cell death in young plants, when these functions are not masked by the cumulative stresses of aging, and negatively regulating senescence in older plants. PMID:24786830

  12. Radiation-Hard SpaceWire/Gigabit Ethernet-Compatible Transponder

    NASA Technical Reports Server (NTRS)

    Katzman, Vladimir

    2012-01-01

    A radiation-hard transponder was developed utilizing submicron/nanotechnology from IBM. The device consumes low power and has a low fabrication cost. This device utilizes a Plug-and-Play concept, and can be integrated into intra-satellite networks, supporting SpaceWire and Gigabit Ethernet I/O. A space-qualified, 100-pin package also was developed, allowing space-qualified (class K) transponders to be delivered within a six-month time frame. The novel, optical, radiation-tolerant transponder was implemented as a standalone board, containing the transponder ASIC (application specific integrated circuit) and optical module, with an FPGA (field-programmable gate array) friendly parallel interface. It features improved radiation tolerance, high data rate, low power consumption, and advanced functionality. The transponder utilizes a patented current mode logic library of radiation-hardened-by-architecture cells. The transponder was developed, fabricated, and radiation tested up to 1 MRad. It was fabricated using the 90-nm CMOS (complementary metal oxide semiconductor) 9SF process from IBM, and incorporates full BIT (built-in test) circuitry, allowing a loop-back test. The low-speed parallel LVCMOS (low-voltage complementary metal oxide semiconductor) bus is compatible with Actel FPGAs. The output LVDS (low-voltage differential signaling) interface operates up to 1.5 Gb/s. Built-in CDR (clock-data recovery) circuitry provides robust synchronization and incorporates two alarm signals, synch loss and signal loss. The ultra-linear peak detector scheme allows on-line control of the amplitude of the input signal. Power consumption is less than 300 mW. The developed transponder, with a 1.25 Gb/s serial data rate, incorporates a 10-to-1 serializer with an internal clock multiplication unit and a deserializer with an internal clock and data recovery block, which can operate with 8B10B-encoded signals. Three loop-back test modes are provided to facilitate the built-in test functionality.
The design is based on a proprietary library of differential current switching logic cells implemented in the standard 90-nm CMOS 9SF technology from IBM. The proprietary low-power LVDS physical interface is fully compatible with the SpaceWire standard, and can be directly connected to the SFP MSA (small form factor pluggable Multiple Source Agreement) optical transponder. The low-speed parallel interfaces are fully compatible with the standard 1.8 V CMOS input/output devices. The utilized proprietary annular CMOS layout structures provide TID tolerance above 1.2 MRad. The complete chip consumes less than 150 mW of power from a single 1.8-V positive supply source.

  13. Impact of equalizing currents on losses and torque ripples in electrical machines with fractional slot concentrated windings

    NASA Astrophysics Data System (ADS)

    Toporkov, D. M.; Vialcev, G. B.

    2017-10-01

    The implementation of parallel branches is a commonly used method of realizing fractional slot concentrated windings in electrical machines. If rotor eccentricity is present in a machine with parallel branches, equalizing currents can arise. A simulation approach for the equalizing currents in the parallel branches of an electrical machine winding, based on magnetic field calculation using the Finite Element Method, is discussed in the paper. The high accuracy of the model is provided by dynamic updating of the inductances in the differential equation system describing the machine, using pre-computed flux-linkage tables. These tabulated functions give the dependence of the flux linkage of each parallel branch on the branch currents and the rotor position angle, so self- and mutual inductances can be calculated by partial differentiation. Calculated results obtained for an electric machine specimen are presented. They show that an adverse combination of design choices and rotor eccentricity leads to large equalizing currents and winding heating. Additional torque ripples also arise, and their harmonic content differs from that of the cogging torque or of the ripples caused by rotor eccentricity alone.
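The inductance-recovery step the abstract describes, taking partial derivatives of tabulated flux-linkage functions with respect to branch currents, can be sketched numerically. In the sketch below the table comes from an assumed linear two-branch model (all inductance values are made up) purely to exercise the finite-difference step; in the paper the table would come from FEM field calculations and would also depend on rotor angle.

```python
# Sketch: self- and mutual inductances as partial derivatives of a tabulated
# flux-linkage function psi1(i1, i2). Values here are ASSUMED, for illustration.
import numpy as np

i1 = np.linspace(-10.0, 10.0, 41)   # branch-1 current samples, A
i2 = np.linspace(-10.0, 10.0, 41)   # branch-2 current samples, A
I1, I2 = np.meshgrid(i1, i2, indexing="ij")

L11_true, L12_true = 2.0e-3, 0.5e-3        # assumed inductances, H
psi1 = L11_true * I1 + L12_true * I2       # flux linkage of branch 1, Wb

# L11 = d(psi1)/d(i1) and L12 = d(psi1)/d(i2), by finite differences
dpsi1_di1, dpsi1_di2 = np.gradient(psi1, i1, i2)
L11 = dpsi1_di1.mean()   # recovered self-inductance of branch 1
L12 = dpsi1_di2.mean()   # recovered mutual inductance between branches
```

For the linear table the finite differences recover the assumed inductances exactly; with a real FEM table the derivatives would vary with operating point and rotor position.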

  14. Progressive Vascular Functional and Structural Damage in a Bronchopulmonary Dysplasia Model in Preterm Rabbits Exposed to Hyperoxia.

    PubMed

    Jiménez, Julio; Richter, Jute; Nagatomo, Taro; Salaets, Thomas; Quarck, Rozenn; Wagennar, Allard; Wang, Hongmei; Vanoirbeek, Jeroen; Deprest, Jan; Toelen, Jaan

    2016-10-24

    Bronchopulmonary dysplasia (BPD) is caused by preterm neonatal lung injury and results in oxygen dependency and pulmonary hypertension. Current clinical management fails to reduce the incidence of BPD, which calls for novel therapies. Fetal rabbits have a lung development that mimics humans and can be used as a translational model to test novel treatment options. In preterm rabbits, exposure to hyperoxia leads to parenchymal changes, yet vascular damage has not been studied in this model. In this study we document the early functional and structural changes of the lung vasculature in preterm rabbits that are induced by hyperoxia after birth. Pulmonary artery Doppler measurements, micro-CT barium angiograms and media thickness of peripheral pulmonary arteries were affected after seven days of hyperoxia when compared to controls. The parenchyma was also affected both at the functional and structural level. Lung function testing showed higher tissue resistance and elastance, with a decreased lung compliance and lung capacity. Histologically hyperoxia leads to fewer and larger alveoli with thicker walls, less developed distal airways and more inflammation than normoxia. In conclusion, we show that the rabbit model develops pulmonary hypertension and developmental lung arrest after preterm lung injury, which parallel the early changes in human BPD. Thus it enables the testing of pharmaceutical agents that target the cardiovascular compartment of the lung for further translation towards the clinic.

  15. Progressive Vascular Functional and Structural Damage in a Bronchopulmonary Dysplasia Model in Preterm Rabbits Exposed to Hyperoxia

    PubMed Central

    Jiménez, Julio; Richter, Jute; Nagatomo, Taro; Salaets, Thomas; Quarck, Rozenn; Wagennar, Allard; Wang, Hongmei; Vanoirbeek, Jeroen; Deprest, Jan; Toelen, Jaan

    2016-01-01

    Bronchopulmonary dysplasia (BPD) is caused by preterm neonatal lung injury and results in oxygen dependency and pulmonary hypertension. Current clinical management fails to reduce the incidence of BPD, which calls for novel therapies. Fetal rabbits have a lung development that mimics humans and can be used as a translational model to test novel treatment options. In preterm rabbits, exposure to hyperoxia leads to parenchymal changes, yet vascular damage has not been studied in this model. In this study we document the early functional and structural changes of the lung vasculature in preterm rabbits that are induced by hyperoxia after birth. Pulmonary artery Doppler measurements, micro-CT barium angiograms and media thickness of peripheral pulmonary arteries were affected after seven days of hyperoxia when compared to controls. The parenchyma was also affected both at the functional and structural level. Lung function testing showed higher tissue resistance and elastance, with a decreased lung compliance and lung capacity. Histologically hyperoxia leads to fewer and larger alveoli with thicker walls, less developed distal airways and more inflammation than normoxia. In conclusion, we show that the rabbit model develops pulmonary hypertension and developmental lung arrest after preterm lung injury, which parallel the early changes in human BPD. Thus it enables the testing of pharmaceutical agents that target the cardiovascular compartment of the lung for further translation towards the clinic. PMID:27783043

  16. Vortex-induced vibration of two parallel risers: Experimental test and numerical simulation

    NASA Astrophysics Data System (ADS)

    Huang, Weiping; Zhou, Yang; Chen, Haiming

    2016-04-01

    The vortex-induced vibration of two identical rigidly mounted risers in a parallel arrangement was studied using Ansys-CFX and model tests. The vortex shedding and forces were recorded to determine the effect of spacing on the two-degree-of-freedom oscillation of the risers. CFX was used to study a single riser and two parallel risers at spacings of 2-8D, taking the coupling effect into account. Because of the limited width of the water channel, only three riser spacings, 2D, 3D, and 4D, were tested to validate the characteristics of the two parallel risers against the numerical simulation. The results indicate that the lift force changes significantly with increasing spacing, and at 3D spacing the lift force of the two parallel risers reaches its maximum. The vortex shedding of the risers at 3D spacing shows that a variable velocity field with the same frequency as the vortex shedding is generated in the overlapped area, thus equalizing the period of the drag force to that of the lift force. It can be concluded that the interaction between the two parallel risers is significant at small spacings, since the trajectory of each riser changes from an oval to a figure-of-eight as the spacing is increased. The phase difference of the lift force between the two risers also varies with spacing.

  17. A concise evidence-based physical examination for diagnosis of acromioclavicular joint pathology: a systematic review.

    PubMed

    Krill, Michael K; Rosas, Samuel; Kwon, KiHyun; Dakkak, Andrew; Nwachukwu, Benedict U; McCormick, Frank

    2018-02-01

    The clinical examination of the shoulder joint is an undervalued diagnostic tool for evaluating acromioclavicular (AC) joint pathology. Applying evidence-based clinical tests enables providers to make an accurate diagnosis and minimize costly imaging procedures and potential delays in care. The purpose of this study was to create a decision tree analysis enabling simple and accurate diagnosis of AC joint pathology. A systematic review of the Medline, Ovid and Cochrane Review databases was performed to identify level one and two diagnostic studies evaluating clinical tests for AC joint pathology. Individual test characteristics were combined in series and in parallel to improve sensitivities and specificities. A secondary analysis utilized subjective pre-test probabilities to create a clinical decision tree algorithm with post-test probabilities. The optimal special test combination to screen and confirm AC joint pathology combined Paxinos sign and O'Brien's Test, with a specificity of 95.8% when performed in series; whereas, Paxinos sign and Hawkins-Kennedy Test demonstrated a sensitivity of 93.7% when performed in parallel. Paxinos sign and O'Brien's Test demonstrated the greatest positive likelihood ratio (2.71); whereas, Paxinos sign and Hawkins-Kennedy Test reported the lowest negative likelihood ratio (0.35). No combination of special tests performed in series or in parallel creates more than a small impact on post-test probabilities to screen or confirm AC joint pathology. Paxinos sign and O'Brien's Test is the only special test combination that has a small and sometimes important impact when used both in series and in parallel. Physical examination testing is not beneficial for diagnosis of AC joint pathology when pretest probability is unequivocal. In these instances, it is of benefit to proceed with procedural tests to evaluate AC joint pathology. Ultrasound-guided corticosteroid injections are diagnostic and therapeutic. 
An ultrasound-guided AC joint corticosteroid injection may be an appropriate new standard for treatment and surgical decision-making. II - Systematic Review.
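The series/parallel combination rules behind this kind of analysis are standard: applying tests "in series" (positive only if both are positive) trades sensitivity for specificity, while "in parallel" (positive if either is positive) does the opposite. A small sketch with purely illustrative sensitivities and specificities (not the review's figures), valid under the usual assumption that the tests are conditionally independent given disease status:

```python
# Standard rules for combining two diagnostic tests, assuming conditional
# independence given disease status. Input values below are illustrative only.

def combine_series(se1, sp1, se2, sp2):
    # "in series": call positive only if BOTH tests are positive
    # sensitivity falls, specificity rises
    return se1 * se2, 1.0 - (1.0 - sp1) * (1.0 - sp2)

def combine_parallel(se1, sp1, se2, sp2):
    # "in parallel": call positive if EITHER test is positive
    # sensitivity rises, specificity falls
    return 1.0 - (1.0 - se1) * (1.0 - se2), sp1 * sp2

def likelihood_ratios(se, sp):
    # LR+ = Se / (1 - Sp); LR- = (1 - Se) / Sp
    return se / (1.0 - sp), (1.0 - se) / sp

se_s, sp_s = combine_series(0.90, 0.80, 0.80, 0.70)    # Se 0.72, Sp 0.94
se_p, sp_p = combine_parallel(0.90, 0.80, 0.80, 0.70)  # Se 0.98, Sp 0.56
```

This is why the review screens with a parallel pair (high combined sensitivity) and confirms with a series pair (high combined specificity), and why likelihood ratios near 1 translate into only small shifts in post-test probability.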

  18. A cost-benefit model comparing the California Milk Cell Test and Milk Electrical Resistance Test.

    PubMed

    Petzer, Inge-Marie; Karzis, Joanne; Meyer, Isabel A; van der Schans, Theodorus J

    2013-04-24

    The indirect effects of mastitis treatment are often overlooked in cost-benefit analyses, but it may be beneficial for the dairy industry to consider them. The cost of mastitis treatment may increase when the duration of intra-mammary infections is prolonged due to misdiagnosis of host-adapted mastitis. Laboratory diagnosis of mastitis can be costly and time consuming, therefore cow-side tests such as the California Milk Cell Test (CMCT) and Milk Electrical Resistance (MER) need to be utilised to their full potential. The aim of this study was to determine the relative benefit of using these two tests separately and in parallel. This was done using a partial-budget analysis and a cost-benefit model to estimate the benefits and costs of each respective test and the parallel combination thereof. Quarter milk samples (n = 1860) were taken from eight different dairy herds in South Africa. Milk samples were evaluated by means of the CMCT, a hand-held MER meter and cyto-microbiological laboratory analysis. After determining the most appropriate cut-off points for the two cow-side tests, the sensitivity and specificity of the CMCT (Se = 1.00, Sp = 0.66), MER (Se = 0.92, Sp = 0.62) and the tests done in parallel (Se = 1.00, Sp = 0.87) were calculated. The input data used for the partial-budget analysis and the cost-benefit model were based on South African figures at the time of the study, and on literature. The total estimated financial benefit of correct diagnosis of host-adapted mastitis per cow for the CMCT, MER and the tests done in parallel was R898.73, R518.70 and R1064.67 respectively. This involved taking the expected benefit of a correct test result per cow, the expected cost of an error per cow and the cost of the test into account. The CMCT was shown to be 11% more beneficial than the MER test, whilst using the tests in parallel was shown to be the most beneficial method for evaluating the mastitis-control programme.
Therefore, it is recommended that the combined tests should be used strategically in practice to monitor udder health and promote a pro-active udder health approach when dealing with host-adapted pathogens.

  19. Testing for carryover effects after cessation of treatments: a design approach.

    PubMed

    Sturdevant, S Gwynn; Lumley, Thomas

    2016-08-02

    Recently, trials addressing noisy measurements with diagnosis occurring by exceeding thresholds (such as diabetes and hypertension) have been published which attempt to measure carryover - the impact that treatment has on an outcome after cessation. The design of these trials has been criticised, and simulations have been conducted which suggest that the parallel designs used are not adequate to test this hypothesis; two proposed solutions are that either a different parallel design or a cross-over design could allow for diagnosis of carryover. We undertook a systematic simulation study to determine the ability of a cross-over or a parallel-group trial design to detect carryover effects on incident hypertension in a population with prehypertension. We simulated blood pressure and focused on varying the criteria used to diagnose systolic hypertension. Using the difference in cumulative incidence of hypertension to analyse parallel-group or cross-over trials resulted in none of the designs having an acceptable Type I error rate: under the null hypothesis of no carryover, the error rate is well above the nominal 5 % level. When a treatment is effective during the intervention period, reliable testing for a carryover effect is difficult. Neither parallel-group nor cross-over designs using the difference in cumulative incidence appear to be a feasible approach. Future trials should ensure their design and analysis is validated by simulation.

  20. Optimizing transformations of stencil operations for parallel cache-based architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bassetti, F.; Davis, K.

    This paper describes a new technique for optimizing serial and parallel stencil and stencil-like operations for cache-based architectures. The technique takes advantage of the semantic knowledge implicit in stencil-like computations. It is implemented as a source-to-source program transformation; because of its specificity it could not be expected of a conventional compiler. Empirical results demonstrate a uniform factor-of-two speedup, and the experiments clearly show this benefit to be a consequence, as intended, of the reduction in cache misses. The test codes are based on a 5-point stencil obtained by discretization of the Poisson equation, applied to a two-dimensional uniform grid using the Jacobi method as an iterative solver. Results are presented for a 1-D tiling on a single processor, and in parallel using a 1-D data partition. For the parallel case, both blocking and non-blocking communication are tested. The same set of experiments has been performed for the 2-D tiling case; however, 2-D data partitioning is not discussed here, so the parallel 2-D case is 2-D tiling with 1-D data partitioning.
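The test kernel they describe, a 5-point Jacobi sweep with 1-D tiling, can be sketched as follows. This Python/NumPy analog (an assumption, since the original is compiled code produced by a source-to-source transformation) shows that the tiled traversal computes exactly the same sweep while visiting the grid in fixed-size row blocks; the cache-miss reduction itself only materializes in compiled code.

```python
# 1-D (row-block) tiling of a 5-point Jacobi sweep for the Poisson equation.
# Function names and the tile size are illustrative assumptions.
import numpy as np

def jacobi_sweep(u, f, h2):
    # one untiled Jacobi sweep over the interior points:
    # u_new = (u_N + u_S + u_W + u_E - h^2 * f) / 4
    v = u.copy()
    v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                            + u[1:-1, :-2] + u[1:-1, 2:]
                            - h2 * f[1:-1, 1:-1])
    return v

def jacobi_sweep_tiled(u, f, h2, tile=8):
    # identical sweep, processing interior rows in blocks of `tile` rows so
    # each block (and its halo rows) stays cache-resident while it is reused
    v = u.copy()
    n = u.shape[0]
    for r0 in range(1, n - 1, tile):
        r1 = min(r0 + tile, n - 1)
        v[r0:r1, 1:-1] = 0.25 * (u[r0 - 1:r1 - 1, 1:-1] + u[r0 + 1:r1 + 1, 1:-1]
                                 + u[r0:r1, :-2] + u[r0:r1, 2:]
                                 - h2 * f[r0:r1, 1:-1])
    return v
```

Because the transformation only reorders the traversal, the two functions produce bit-identical results for any grid, which is what makes it safe to apply mechanically at the source level.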

  1. FILMPAR: A parallel algorithm designed for the efficient and accurate computation of thin film flow on functional surfaces containing micro-structure

    NASA Astrophysics Data System (ADS)

    Lee, Y. C.; Thompson, H. M.; Gaskell, P. H.

    2009-12-01

    FILMPAR is a highly efficient and portable parallel multigrid algorithm for solving a discretised form of the lubrication approximation to three-dimensional, gravity-driven, continuous thin film free-surface flow over substrates containing micro-scale topography. While generally applicable to problems involving heterogeneous and distributed features, for illustrative purposes the algorithm is benchmarked on a distributed memory IBM BlueGene/P computing platform for the case of flow over a single trench topography, enabling direct comparison with complementary experimental data and existing serial multigrid solutions. Parallel performance is assessed as a function of the number of processors employed and shown to lead to super-linear behaviour for the production of mesh-independent solutions. In addition, the approach is used to solve for the case of flow over a complex inter-connected topographical feature and a description provided of how FILMPAR could be adapted relatively simply to solve for a wider class of related thin film flow problems. Program summaryProgram title: FILMPAR Catalogue identifier: AEEL_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEEL_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 530 421 No. of bytes in distributed program, including test data, etc.: 1 960 313 Distribution format: tar.gz Programming language: C++ and MPI Computer: Desktop, server Operating system: Unix/Linux Mac OS X Has the code been vectorised or parallelised?: Yes. 
Tested with up to 128 processors RAM: 512 MBytes Classification: 12 External routines: GNU C/C++, MPI Nature of problem: Thin film flows over functional substrates containing well-defined single and complex topographical features are of enormous significance, having a wide variety of engineering, industrial and physical applications. However, despite recent modelling advances, the accurate numerical solution of the equations governing such problems is still at a relatively early stage. Indeed, recent studies employing a simplifying long-wave approximation have shown that highly efficient numerical methods are necessary to solve the resulting lubrication equations in order to achieve the level of grid resolution required to accurately capture the effects of micro- and nano-scale topographical features. Solution method: A portable parallel multigrid algorithm has been developed for the above purpose, for the particular case of flow over submerged topographical features. Within the multigrid framework adopted, a W-cycle is used to accelerate convergence in respect of the time dependent nature of the problem, with relaxation sweeps performed using a fixed number of pre- and post-Red-Black Gauss-Seidel Newton iterations. In addition, the algorithm incorporates automatic adaptive time-stepping to avoid the computational expense associated with repeated time-step failure. Running time: 1.31 minutes using 128 processors on BlueGene/P with a problem size of over 16.7 million mesh points.
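
The Red-Black Gauss-Seidel smoothing named in the solution method relies on a two-colour ordering of grid points. A minimal sketch for the plain linear Poisson problem is below; FILMPAR itself applies a Newton variant to the discretised lubrication equations, which is not reproduced here:

```python
# Red-black ordering sketch: update all "red" interior points ((i+j) even),
# then all "black" points ((i+j) odd). Points of one colour depend only on
# neighbours of the other colour, so each half-sweep is trivially
# parallelisable. Plain linear Poisson smoother, not FILMPAR's Newton form.

def red_black_gs_sweep(u, f, h2):
    n = len(u)
    for parity in (0, 1):                  # 0 = red half-sweep, 1 = black
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                if (i + j) % 2 == parity:
                    u[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j] +
                                      u[i][j - 1] + u[i][j + 1] - h2 * f[i][j])
    return u
```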

  2. Cognitive predictors of everyday functioning in older adults: results from the ACTIVE Cognitive Intervention Trial.

    PubMed

    Gross, Alden L; Rebok, George W; Unverzagt, Frederick W; Willis, Sherry L; Brandt, Jason

    2011-09-01

    The present study sought to predict changes in everyday functioning using cognitive tests. Data from the Advanced Cognitive Training for Independent and Vital Elderly trial were used to examine the extent to which competence in different cognitive domains--memory, inductive reasoning, processing speed, and global mental status--predicts prospectively measured everyday functioning among older adults. Coefficients of determination for baseline levels and trajectories of everyday functioning were estimated using parallel process latent growth models. Each cognitive domain independently predicts a significant proportion of the variance in baseline and trajectory change of everyday functioning, with inductive reasoning explaining the most variance (R2 = .175) in baseline functioning and memory explaining the most variance (R2 = .057) in changes in everyday functioning. Inductive reasoning is an important determinant of current everyday functioning in community-dwelling older adults, suggesting that successful performance in daily tasks is critically dependent on executive cognitive function. On the other hand, baseline memory function is more important in determining change over time in everyday functioning, suggesting that some participants with low baseline memory function may reflect a subgroup with incipient progressive neurologic disease.

  3. Study design and rationale for Optimal aNtiplatelet pharmacotherapy guided by bedSIDE genetic or functional TESTing in elective percutaneous coronary intervention patients (ONSIDE TEST): a prospective, open-label, randomised parallel-group multicentre trial (NCT01930773).

    PubMed

    Kołtowski, Łukasz; Aradi, Daniel; Huczek, Zenon; Tomaniak, Mariusz; Sibbing, Dirk; Filipiak, Krzysztof J; Kochman, Janusz; Balsam, Paweł; Opolski, Grzegorz

    2016-01-01

    High platelet reactivity (HPR) and the presence of CYP2C19 loss-of-function alleles are associated with a higher risk of periprocedural myocardial infarction in clopidogrel-treated patients undergoing percutaneous coronary intervention (PCI). It is unknown whether personalised treatment based on platelet function testing or genotyping can prevent such complications. The ONSIDE TEST is a multicentre, prospective, open-label, randomised controlled clinical trial aiming to assess whether optimisation of antiplatelet therapy based on either phenotyping or genotyping is superior to conventional care. Patients will be randomised into phenotyping, genotyping, or control arms. In the phenotyping group, patients will be tested with the VerifyNow P2Y12 assay before PCI, and patients with a platelet reactivity unit (PRU) value greater than 208 will be switched over to prasugrel, while others will continue on clopidogrel therapy. In the genotyping group, carriers of the *2 loss-of-function allele will receive prasugrel for PCI, while wild-type subjects will be treated with clopidogrel. Patients in the control arm will be treated with standard-dose clopidogrel. The primary endpoint of the study is the prevalence of periprocedural myocardial injury within 24 h after PCI in the controls as compared to the phenotyping and genotyping groups. Secondary endpoints include cardiac death, myocardial infarction, definite or probable stent thrombosis, or urgent repeat revascularisation within 30 days of PCI. The primary safety outcome is Bleeding Academic Research Consortium (BARC) type 3 and 5 bleeding within 30 days of PCI. The ONSIDE TEST trial is expected to verify the clinical utility of an individualised antiplatelet strategy in preventing periprocedural myocardial injury by either phenotyping or genotyping. ClinicalTrials.gov: NCT01930773.
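
The allocation rules stated in the abstract can be encoded compactly. The sketch below is purely illustrative: it is not trial software, and the function name is invented:

```python
# Illustrative encoding of the allocation rules as stated in the abstract:
# the phenotyping arm switches to prasugrel when the VerifyNow P2Y12
# result exceeds 208 PRU, the genotyping arm switches for CYP2C19 *2
# carriers, and the control arm stays on clopidogrel.

def assigned_drug(arm, pru=None, carries_star2=None):
    if arm == "phenotyping":
        return "prasugrel" if pru > 208 else "clopidogrel"
    if arm == "genotyping":
        return "prasugrel" if carries_star2 else "clopidogrel"
    if arm == "control":
        return "clopidogrel"
    raise ValueError("unknown arm: %r" % arm)
```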

  4. Efficient Parallel Kernel Solvers for Computational Fluid Dynamics Applications

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He

    1997-01-01

    Distributed-memory parallel computers dominate today's parallel computing arena. These machines, such as the Intel Paragon, IBM SP2, and Cray Origin2000, have successfully delivered high-performance computing power for solving some of the so-called "grand-challenge" problems. Despite initial success, parallel machines have not been widely accepted in production engineering environments due to the complexity of parallel programming. On a parallel computing system, a task has to be partitioned and distributed appropriately among processors to reduce communication cost and to attain load balance. More importantly, even with careful partitioning and mapping, the performance of an algorithm may still be unsatisfactory, since conventional sequential algorithms may be serial in nature and may not be implemented efficiently on parallel machines. In many cases, new algorithms have to be introduced to increase parallel performance. In order to achieve optimal performance, in addition to partitioning and mapping, a careful performance study should be conducted for a given application to find a good algorithm-machine combination. This process, however, is usually painful and elusive. The goal of this project is to design and develop efficient parallel algorithms for highly accurate Computational Fluid Dynamics (CFD) simulations and other engineering applications. The work plan is to 1) develop highly accurate parallel numerical algorithms, 2) conduct preliminary testing to verify the effectiveness and potential of these algorithms, and 3) incorporate the newly developed algorithms into actual simulation packages. This work plan has been accomplished. Two highly accurate, efficient Poisson solvers have been developed and tested, based on two different approaches: (1) adopting a mathematical geometry that has a better capacity to describe the fluid, and (2) using a compact scheme to gain high-order accuracy in the numerical discretization. 
The previously developed Parallel Diagonal Dominant (PDD) algorithm and Reduced Parallel Diagonal Dominant (RPDD) algorithm have been carefully studied on different parallel platforms for different applications, and a NASA simulation code developed by Man M. Rai and his colleagues has been parallelized and implemented based on data dependency analysis. These achievements are addressed in detail in the paper.

  5. Comparison of two tension-band fixation materials and techniques in transverse patella fractures: a biomechanical study.

    PubMed

    Rabalais, R David; Burger, Evalina; Lu, Yun; Mansour, Alfred; Baratta, Richard V

    2008-02-01

    This study compared the biomechanical properties of 2 tension-band techniques with stainless steel wire and ultra high molecular weight polyethylene (UHMWPE) cable in a patella fracture model. Transverse patella fractures were simulated in 8 cadaver knees and fixated with figure-of-8 and parallel wire configurations in combination with Kirschner wires. Identical configurations were tested with UHMWPE cable. Specimens were mounted to a testing apparatus and the quadriceps was used to extend the knees from 90 degrees to 0 degrees; 4 knees were tested under monotonic loading, and 4 knees were tested under cyclic loading. Under monotonic loading, average fracture gap was 0.50 and 0.57 mm for steel wire and UHMWPE cable, respectively, in the figure-of-8 construct compared with 0.16 and 0.04 mm, respectively, in the parallel wire construct. Under cyclic loading, average fracture gap was 1.45 and 1.66 mm for steel wire and UHMWPE cable, respectively, in the figure-of-8 construct compared with 0.45 and 0.60 mm, respectively, in the parallel wire construct. A statistically significant effect of technique was found, with the parallel wire construct performing better than the figure-of-8 construct in both loading models. There was no effect of material or interaction. In this biomechanical model, parallel wires performed better than the figure-of-8 configuration in both loading regimens, and UHMWPE cable performed similarly to 18-gauge steel wire.

  6. Parallel iterative methods: Applications in neutronics and fluid mechanics [Methodes iteratives paralleles: Applications en neutronique et en mecanique des fluides]

    NASA Astrophysics Data System (ADS)

    Qaddouri, Abdessamad

    In this thesis, parallel computing is applied successively to neutronics and to fluid mechanics. In each of these two applications, iterative methods are used to solve the system of algebraic equations resulting from the discretization of the equations of the physical problem. In the neutronics problem, the computation of the collision probability (CP) matrices, as well as a multigroup iterative scheme using an inverse power method, are parallelized. In the fluid mechanics problem, a finite element code using a preconditioned GMRES-type iterative algorithm is parallelized. The thesis is presented as six articles followed by a conclusion. The first five articles deal with the neutronics applications and trace the evolution of our work in this field. This evolution proceeds through a parallel computation of the CP matrices and a parallel multigroup algorithm tested on a one-dimensional problem (article 1), then through two parallel algorithms, one multiregion and the other multigroup, tested on two-dimensional problems (articles 2-3). These first two stages are followed by the application of two acceleration techniques, neutron rebalancing and residual minimization, to the two parallel algorithms (article 4). Finally, the multigroup algorithm and the parallel computation of the CP matrices were implemented in the production code DRAGON, where the tests are more realistic and can be three-dimensional (article 5). The sixth article (article 6), devoted to the fluid mechanics application, deals with the parallelization of a finite element code, FES, in which the graph partitioner METIS and the PSPARSLIB library are used.

  7. Post-9/11/2001 lung function trajectories by sex and race in World Trade Center-exposed New York City emergency medical service workers.

    PubMed

    Vossbrinck, Madeline; Zeig-Owens, Rachel; Hall, Charles B; Schwartz, Theresa; Moir, William; Webber, Mayris P; Cohen, Hillel W; Nolan, Anna; Weiden, Michael D; Christodoulou, Vasilios; Kelly, Kerry J; Aldrich, Thomas K; Prezant, David J

    2017-03-01

    To determine whether lung function trajectories after 9/11/2001 (9/11) differed by sex or race/ethnicity in World Trade Center-exposed Fire Department of the City of New York emergency medical service (EMS) workers. Serial cross-sectional study of pulmonary function tests (PFTs) taken between 9/11 and 9/10/2015. We used data from routine PFTs (forced expiratory volume in 1 s (FEV 1 ) and FEV 1 % predicted), conducted at 12-18 month intervals. FEV 1 and FEV 1 % predicted were assessed over time, stratified by sex, and race/ethnicity. We also assessed FEV 1 and FEV 1 % predicted in current, former and never-smokers. Among 1817 EMS workers, 334 (18.4%) were women, 979 (53.9%) self-identified as white and 939 (51.6%) were never-smokers. The median follow-up was 13.1 years (IQR 10.5-13.6), and the median number of PFTs per person was 11 (IQR 7-13). After large declines associated with 9/11, there was no discernible recovery in lung function. In analyses limited to never-smokers, the trajectory of decline in adjusted FEV 1 and FEV 1 % predicted was relatively parallel for men and women in the 3 racial/ethnic groups. Similarly, small differences in FEV 1 annual decline between groups were not clinically meaningful. Analyses including ever-smokers were essentially the same. 14 years after 9/11, most EMS workers continued to demonstrate a lack of lung function recovery. The trajectories of lung function decline, however, were parallel by sex and by race/ethnicity. These findings support the use of routine, serial measures of lung function over time in first responders and demonstrate no sex or racial sensitivity to exposure-related lung function decline. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  8. Parallel Processing Systems for Passive Ranging During Helicopter Flight

    NASA Technical Reports Server (NTRS)

    Sridhar, Bavavar; Suorsa, Raymond E.; Showman, Robert D. (Technical Monitor)

    1994-01-01

    The complexity of rotorcraft missions involving operations close to the ground results in high pilot workload. In order to allow a pilot time to perform mission-oriented tasks, sensor aiding and automation of some of the guidance and control functions are highly desirable. Images from an electro-optical sensor provide a covert way of detecting objects in the flight path of a low-flying helicopter. Passive ranging consists of processing a sequence of images using techniques based on optical flow computation and recursive estimation. The passive ranging algorithm has to extract obstacle information from imagery at rates varying from five to thirty or more frames per second, depending on the helicopter speed. We have implemented and tested the passive ranging algorithm off-line using helicopter-collected images. However, the real-time data and computation requirements of the algorithm are beyond the capability of any off-the-shelf microprocessor or digital signal processor. This paper describes the computational requirements of the algorithm and uses parallel processing technology to meet them. Various issues in the selection of a parallel processing architecture are discussed, and four different computer architectures are evaluated regarding their suitability to process the algorithm in real time. Based on this evaluation, we conclude that real-time passive ranging is a realistic goal and can be achieved within a short time.

  9. Parallel RNAi screens across different cell lines identify generic and cell type-specific regulators of actin organization and cell morphology.

    PubMed

    Liu, Tao; Sims, David; Baum, Buzz

    2009-01-01

    In recent years RNAi screening has proven a powerful tool for dissecting gene functions in animal cells in culture. However, to date, most RNAi screens have been performed in a single cell line, and results then extrapolated across cell types and systems. Here, to dissect generic and cell type-specific mechanisms underlying cell morphology, we have performed identical kinome RNAi screens in six different Drosophila cell lines, derived from two distinct tissues of origin. This analysis identified a core set of kinases required for normal cell morphology in all lines tested, together with a number of kinases with cell type-specific functions. Most significantly, the screen identified a role for minibrain (mnb/DYRK1A), a kinase associated with Down's syndrome, in the regulation of actin-based protrusions in CNS-derived cell lines. This cell type-specific requirement was not due to the peculiarities in the morphology of CNS-derived cells and could not be attributed to differences in mnb expression. Instead, it likely reflects differences in gene expression that constitute the cell type-specific functional context in which mnb/DYRK1A acts. Using parallel RNAi screens and gene expression analyses across cell types we have identified generic and cell type-specific regulators of cell morphology, which include mnb/DYRK1A in the regulation of protrusion morphology in CNS-derived cell lines. This analysis reveals the importance of using different cell types to gain a thorough understanding of gene function across the genome and, in the case of kinases, the difficulties of using the differential gene expression to predict function.

  10. Large-eddy simulation/Reynolds-averaged Navier-Stokes hybrid schemes for high speed flows

    NASA Astrophysics Data System (ADS)

    Xiao, Xudong

    Three LES/RANS hybrid schemes have been proposed for the prediction of high-speed separated flows. Each method couples the k-zeta (enstrophy) RANS model with an LES subgrid-scale one-equation model by using a blending function that is coordinate-system independent. Two of these functions are based on the turbulence dissipation length scale and the grid size, while the third has no explicit dependence on the grid. To implement the LES/RANS hybrid schemes, a new rescaling-reintroducing method is used to generate time-dependent turbulent inflow conditions. The hybrid schemes have been tested on a Mach 2.88 flow over a 25-degree compression-expansion ramp and a Mach 2.79 flow over a 20-degree compression ramp. A special computation procedure has been designed to prevent the separation zone from expanding upstream to the recycle plane. The code is parallelized using the Message Passing Interface (MPI) and is optimized for running on an IBM SP3 parallel machine. The scheme was validated first for a flat plate. It was shown that the blending function has to be monotonic to prevent the RANS region from appearing in the LES region. In the 25-degree ramp case, the hybrid schemes provided better agreement with experiment in the recovery region. Grid refinement studies demonstrated the importance of using a grid-independent blending function and yielded further improved agreement with experiment in the recovery region. In the 20-degree ramp case, with a relatively finer grid, the hybrid scheme characterized by the grid-independent blending function predicted the flow field well in both the separation region and the recovery region. Therefore, with an appropriately fine grid, the current hybrid schemes are promising for the simulation of shock wave/boundary layer interaction problems.
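
A common way to build the kind of monotonic blending function the abstract calls for is to blend on the ratio of a turbulence length scale to the local grid scale. The sketch below uses a generic tanh form for illustration; it is not the paper's actual functions:

```python
import math

# Hedged sketch of a monotonic RANS/LES blending function built from a
# turbulence length scale l_turb and a grid scale delta (a generic tanh
# form; the paper's specific functions are not reproduced). F -> 1 (RANS)
# where the grid is too coarse to resolve the turbulence (l_turb << delta)
# and F -> 0 (LES) where it is fine enough.

def rans_les_blend(l_turb, delta, sharpness=4.0):
    """Blending weight F in [0, 1], monotonically decreasing in l_turb/delta."""
    ratio = l_turb / delta
    return 0.5 * (1.0 - math.tanh(sharpness * (ratio - 1.0)))

def hybrid_viscosity(nu_rans, nu_les, f_blend):
    """Blend the RANS and LES eddy viscosities with weight f_blend."""
    return f_blend * nu_rans + (1.0 - f_blend) * nu_les
```

Monotonicity of the blend matters here for exactly the reason the abstract gives: a non-monotonic function can let RANS regions reappear inside the LES region.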

  11. Research on the supercapacitor support schemes for LVRT of variable-frequency drive in the thermal power plant

    NASA Astrophysics Data System (ADS)

    Han, Qiguo; Zhu, Kai; Shi, Wenming; Wu, Kuayu; Chen, Kai

    2018-02-01

    To address the low voltage ride through (LVRT) problem of the variable-frequency drives (VFDs) powering major auxiliary equipment in thermal power plants, a scheme that parallels a supercapacitor with the DC link of the VFD is put forward. Two solutions are proposed: direct parallel support and voltage-boost parallel support of the supercapacitor. The capacitor values for the relevant motor loads are calculated according to the law of energy conservation and verified by Matlab simulation. Finally, a test prototype was built, and the test results prove the feasibility of the proposed schemes.
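
The energy-conservation sizing the abstract refers to can be illustrated with a back-of-envelope calculation. The figures below are hypothetical, not the paper's load data, and converter losses are ignored:

```python
# Back-of-envelope supercapacitor sizing by energy conservation: during a
# voltage sag, the capacitor on the DC link must supply the load power
# p_load_w for t_sag_s seconds while its voltage falls from v_max to
# v_min, so 0.5 * C * (v_max**2 - v_min**2) = p_load_w * t_sag_s.
# Losses and converter efficiency are ignored; all numbers are made up.

def supercap_capacitance(p_load_w, t_sag_s, v_max, v_min):
    """Capacitance (farads) needed to ride through the sag."""
    return 2.0 * p_load_w * t_sag_s / (v_max ** 2 - v_min ** 2)

# Example: a 75 kW auxiliary drive riding through a 0.5 s sag while the
# DC link drops from 540 V to 430 V needs roughly 0.7 F.
c_example = supercap_capacitance(75e3, 0.5, 540.0, 430.0)
```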

  12. Intelligent flight control systems

    NASA Technical Reports Server (NTRS)

    Stengel, Robert F.

    1993-01-01

    The capabilities of flight control systems can be enhanced by designing them to emulate functions of natural intelligence. Intelligent control functions fall in three categories. Declarative actions involve decision-making, providing models for system monitoring, goal planning, and system/scenario identification. Procedural actions concern skilled behavior and have parallels in guidance, navigation, and adaptation. Reflexive actions are spontaneous, inner-loop responses for control and estimation. Intelligent flight control systems learn knowledge of the aircraft and its mission and adapt to changes in the flight environment. Cognitive models form an efficient basis for integrating 'outer-loop/inner-loop' control functions and for developing robust parallel-processing algorithms.

  13. Multi-thread parallel algorithm for reconstructing 3D large-scale porous structures

    NASA Astrophysics Data System (ADS)

    Ju, Yang; Huang, Yaohui; Zheng, Jiangtao; Qian, Xu; Xie, Heping; Zhao, Xi

    2017-04-01

    Geomaterials inherently contain many discontinuous, multi-scale, geometrically irregular pores, forming a complex porous structure that governs their mechanical and transport properties. The development of an efficient reconstruction method for representing porous structures can significantly contribute toward providing a better understanding of the governing effects of porous structures on the properties of porous materials. In order to improve the efficiency of reconstructing large-scale porous structures, a multi-thread parallel scheme was incorporated into the simulated annealing reconstruction method. In the method, four correlation functions, which include the two-point probability function, the linear-path functions for the pore phase and the solid phase, and the fractal system function for the solid phase, were employed for better reproduction of the complex well-connected porous structures. In addition, a random sphere packing method and a self-developed pre-conditioning method were incorporated to cast the initial reconstructed model and select independent interchanging pairs for parallel multi-thread calculation, respectively. The accuracy of the proposed algorithm was evaluated by examining the similarity between the reconstructed structure and a prototype in terms of their geometrical, topological, and mechanical properties. Comparisons of the reconstruction efficiency of porous models with various scales indicated that the parallel multi-thread scheme significantly shortened the execution time for reconstruction of a large-scale well-connected porous model compared to a sequential single-thread procedure.
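
Of the four correlation functions named above, the two-point probability function is the simplest to state: the probability that two points a given distance apart both lie in the pore phase. Below is a minimal estimator along one axis of a binary image, an illustration rather than the authors' implementation:

```python
# Minimal estimator of the two-point probability function S2(r) for the
# pore phase of a binary image (1 = pore, 0 = solid), sampled along the
# horizontal axis with periodic wrap-around. Simulated-annealing
# reconstruction drives descriptors like S2 of the reconstructed image
# toward those of the target.

def two_point_probability(img, r):
    """Fraction of pixel pairs separated horizontally by r that are both pore."""
    rows, cols = len(img), len(img[0])
    hits = sum(1 for i in range(rows) for j in range(cols)
               if img[i][j] == 1 and img[i][(j + r) % cols] == 1)
    return hits / (rows * cols)
```

S2(0) is simply the porosity; for a checkerboard image, S2 alternates between 0 at odd lags and the porosity at even lags.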

  14. Evidence for parallel consolidation of motion direction and orientation into visual short-term memory.

    PubMed

    Rideaux, Reuben; Apthorp, Deborah; Edwards, Mark

    2015-02-12

    Recent findings have indicated that the capacity to consolidate multiple items into visual short-term memory in parallel varies as a function of the type of information. That is, while color can be consolidated in parallel, evidence suggests that orientation cannot. Here we investigated the capacity to consolidate multiple motion directions in parallel and reexamined this capacity using orientation. This was achieved by determining the shortest exposure duration necessary to consolidate a single item, then examining whether two items, presented simultaneously, could be consolidated in that time. The results show that parallel consolidation of direction and orientation information is possible, and that parallel consolidation of direction appears to be limited to two items. Additionally, we demonstrate the importance of adequate separation between the feature intervals used to define items when attempting to consolidate in parallel, suggesting that when multiple items are consolidated in parallel, as opposed to serially, the resolution of their representations suffers. Finally, we used facilitation of spatial attention to show that the deterioration of item resolution occurs during parallel consolidation, as opposed to storage. © 2015 ARVO.

  15. Parallelization of a blind deconvolution algorithm

    NASA Astrophysics Data System (ADS)

    Matson, Charles L.; Borelli, Kathy J.

    2006-09-01

    Often it is of interest to deblur imagery in order to obtain higher-resolution images. Deblurring requires knowledge of the blurring function - information that is often not available separately from the blurred imagery. Blind deconvolution algorithms overcome this problem by jointly estimating both the high-resolution image and the blurring function from the blurred imagery. Because blind deconvolution algorithms are iterative in nature, they can take minutes to days to deblur an image, depending on how many frames of data are used for the deblurring and the platforms on which the algorithms are executed. Here we present our progress in parallelizing a blind deconvolution algorithm to increase its execution speed. This progress includes sub-frame parallelization and a code structure that is not specialized to a specific computer hardware architecture.
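
The joint estimation described above alternates between refining the image with the blur held fixed and refining the blur with the image held fixed. The 1-D toy below shows that alternating structure with Richardson-Lucy-style multiplicative updates; the specific algorithm parallelized in the paper is not reproduced here:

```python
# Toy 1-D blind deconvolution: alternate Richardson-Lucy-style updates of
# the image estimate x and the blur estimate h, each holding the other
# fixed, under circular convolution. Illustrative only; real blind
# deconvolution algorithms differ in detail.

def cconv(x, h):
    """Circular convolution of two equal-length sequences."""
    n = len(x)
    return [sum(x[(i - k) % n] * h[k] for k in range(n)) for i in range(n)]

def rl_update(est, psf, y):
    """One multiplicative update of `est` toward matching the data y."""
    model = cconv(est, psf)
    ratio = [yi / max(mi, 1e-12) for yi, mi in zip(y, model)]
    n = len(est)
    corr = [sum(ratio[(i + k) % n] * psf[k] for k in range(n)) for i in range(n)]
    return [e * c for e, c in zip(est, corr)]

def blind_deconvolve(y, x0, h0, iters=5):
    x, h = x0[:], h0[:]
    for _ in range(iters):
        x = rl_update(x, h, y)     # image step, blur held fixed
        h = rl_update(h, x, y)     # blur step, image held fixed
        s = sum(h)                 # x is not normalized, so renormalize
        h = [hi / s for hi in h]   # the blur estimate after its update
    return x, h
```

A useful sanity check on this structure: if the current estimates already reproduce the data exactly, the updates leave them (essentially) unchanged.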

  16. Introducing PROFESS 2.0: A parallelized, fully linear scaling program for orbital-free density functional theory calculations

    NASA Astrophysics Data System (ADS)

    Hung, Linda; Huang, Chen; Shin, Ilgyou; Ho, Gregory S.; Lignères, Vincent L.; Carter, Emily A.

    2010-12-01

    Orbital-free density functional theory (OFDFT) is a first principles quantum mechanics method to find the ground-state energy of a system by variationally minimizing with respect to the electron density. No orbitals are used in the evaluation of the kinetic energy (unlike Kohn-Sham DFT), and the method scales nearly linearly with the size of the system. The PRinceton Orbital-Free Electronic Structure Software (PROFESS) uses OFDFT to model materials from the atomic scale to the mesoscale. This new version of PROFESS allows the study of larger systems with two significant changes: PROFESS is now parallelized, and the ion-electron and ion-ion terms scale quasilinearly, instead of quadratically as in PROFESS v1 (L. Hung and E.A. Carter, Chem. Phys. Lett. 475 (2009) 163). At the start of a run, PROFESS reads the various input files that describe the geometry of the system (ion positions and cell dimensions), the type of elements (defined by electron-ion pseudopotentials), the actions you want it to perform (minimize with respect to electron density and/or ion positions and/or cell lattice vectors), and the various options for the computation (such as which functionals you want it to use). Based on these inputs, PROFESS sets up a computation and performs the appropriate optimizations. Energies, forces, stresses, material geometries, and electron density configurations are some of the values that can be output throughout the optimization. New version program summary Program title: PROFESS Catalogue identifier: AEBN_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEBN_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 68 721 No. of bytes in distributed program, including test data, etc.: 1 708 547 Distribution format: tar.gz Programming language: Fortran 90 Computer: Intel with ifort; AMD Opteron with pathf90 Operating system: Linux Has the code been vectorized or parallelized?: Yes. Parallelization is implemented through domain decomposition using MPI. RAM: Problem dependent, but 2 GB is sufficient for up to 10,000 ions. Classification: 7.3 External routines: FFTW 2.1.5 (http://www.fftw.org) Catalogue identifier of previous version: AEBN_v1_0 Journal reference of previous version: Comput. Phys. Comm. 179 (2008) 839 Does the new version supersede the previous version?: Yes Nature of problem: Given a set of coordinates describing the initial ion positions under periodic boundary conditions, recovers the ground-state energy, electron density, ion positions, and cell lattice vectors predicted by orbital-free density functional theory. The computation of all terms is effectively linear scaling. Parallelization is implemented through domain decomposition, and up to ~10,000 ions may be included in the calculation on just a single processor, limited by RAM. For example, when optimizing the geometry of ~50,000 aluminum ions (plus vacuum) on 48 cores, a single iteration of conjugate gradient ion geometry optimization takes ~40 minutes wall time. However, each CG geometry step requires two or more electron density optimizations, so step times will vary. Solution method: Computes energies as described in the text; minimizes this energy with respect to the electron density, ion positions, and cell lattice vectors. Reasons for new version: To allow much larger systems to be simulated using PROFESS. Restrictions: PROFESS cannot use nonlocal (such as ultrasoft) pseudopotentials. A variety of local pseudopotential files are available at the Carter group website (http://www.princeton.edu/mae/people/faculty/carter/homepage/research/localpseudopotentials/). 
Also, due to the current state of the kinetic energy functionals, PROFESS is only reliable for main group metals and some properties of semiconductors. Running time: Problem dependent: the test example provided with the code takes less than a second to run. Timing results for large scale problems are given in the PROFESS paper and Ref. [1].

  17. Parallel evolution of the glycogen synthase 1 (muscle) gene Gys1 between Old World and New World fruit bats (Order: Chiroptera).

    PubMed

    Fang, Lu; Shen, Bin; Irwin, David M; Zhang, Shuyi

    2014-10-01

    Glycogen synthase, which catalyzes the synthesis of glycogen, is especially important for Old World (Pteropodidae) and New World (Phyllostomidae) fruit bats that ingest high-carbohydrate diets. Glycogen synthase 1, encoded by the Gys1 gene, is the glycogen synthase isozyme that functions in muscles. To determine whether Gys1 has undergone adaptive evolution in bats with carbohydrate-rich diets, in comparison to insect-eating sister bat taxa, we sequenced the coding region of the Gys1 gene from 10 species of bats, including two Old World fruit bats (Pteropodidae) and a New World fruit bat (Phyllostomidae). Our results show no evidence for positive selection in the Gys1 coding sequence on the ancestral Old World and the New World Artibeus lituratus branches. Tests for convergent evolution indicated convergence of the sequences and one parallel amino acid substitution (T395A) was detected on these branches, which was likely driven by natural selection.

  18. Compact forced simple-shear sample for studying shear localization in materials

    DOE PAGES

    Gray, George Thompson; Vecchio, K. S.; Livescu, Veronica

    2015-11-06

    In this paper, a new specimen geometry, the compact forced simple-shear specimen (CFSS), has been developed as a means to achieve simple shear testing of materials over a range of temperatures and strain rates. The stress and strain state in the gage section is designed to produce essentially "pure" simple shear, mode II in-plane shear, in a compact sample geometry. The 2-D plane of shear can be directly aligned along specified directional aspects of a material's microstructure of interest, i.e., systematic shear loading parallel, at 45°, and orthogonal to anisotropic microstructural features in a material, such as the pancake-shaped grains typical in many rolled structural metals, or to specified directions in fiber-reinforced composites. Finally, the shear-stress shear-strain response and the damage evolution parallel and orthogonal to the pancake grain morphology in 7039-Al are shown to vary significantly as a function of orientation to the microstructure.

  19. EFFECT OF ROENTGEN RADIATION ON β-GLUCURONIDASE IN RAT TESTIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arata, L.; Santoro, R.; Severi, M.A.

    1962-04-30

    The testes were irradiated with a single 600-r dose and enzyme activity was determined in homogenates of testis, at 10-day intervals, up to the 50th postirradiation day. In comparison with the control value of 47.9 (units/mg fresh tissue), β-glucuronidase activity fell to 30.5 by the 10th day, then progressively rose to 78.4, 126.0, 242.0, and 275.0 in the subsequent 10-day periods. A parallel drop, followed by a rise, occurred in total activity of testis. Testicular weight fell, and seminal vesicular weight fell and then rose, during the 50-day period. Thus, the transient sterility and destruction of germinal epithelium induced by irradiation were reflected by a decrease in β-glucuronidase activity, whereas regeneration of this epithelium followed the rise in enzyme activity. Such parallel changes in epithelial function and enzyme activity were previously noted in vitamin E-deficient rats. (H.H.D.)

  20. Accelerating a three-dimensional eco-hydrological cellular automaton on GPGPU with OpenCL

    NASA Astrophysics Data System (ADS)

    Senatore, Alfonso; D'Ambrosio, Donato; De Rango, Alessio; Rongo, Rocco; Spataro, William; Straface, Salvatore; Mendicino, Giuseppe

    2016-10-01

    This work presents an effective implementation of a numerical model for complete eco-hydrological Cellular Automata modeling on Graphics Processing Units (GPUs) with OpenCL (Open Computing Language) for heterogeneous computation (i.e., on CPUs and/or GPUs). Different types of parallel implementations were carried out (e.g., use of fast local memory, loop unrolling, etc.), showing increasing performance improvements in terms of speedup, also adopting some original optimization strategies. Moreover, numerical analysis of the results (i.e., comparison of CPU and GPU outcomes in terms of rounding errors) proved satisfactory. Experiments were carried out on a workstation with two CPUs (Intel Xeon E5440 at 2.83 GHz), one AMD R9 280X GPU, and one nVIDIA Tesla K20c GPU. Results have been extremely positive, but further testing should be performed to assess the functionality of the adopted strategies on other complete models and their ability to fruitfully exploit the resources of parallel systems.
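The data-parallel pattern an OpenCL kernel expresses can be illustrated with a minimal sketch: one "work-item" per cell, each reading only the old buffer so that all updates are independent. The toy 1-D averaging rule below is a stand-in assumption; the abstract does not detail the actual eco-hydrological transition function.

```python
def step(cells):
    """One synchronous CA update: each 'work-item' computes one cell of the
    new buffer from the old buffer's neighborhood (periodic boundaries).
    Reads touch only the old state, so every update is independent and
    could run concurrently, exactly as in an OpenCL kernel."""
    n = len(cells)
    return [(cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n]) / 3.0
            for i in range(n)]

state = [0.0] * 8
state[4] = 9.0       # a single loaded cell
state = step(state)  # mass spreads to the neighborhood; total is conserved
```

Because the rule is a plain average, the total "mass" is conserved at every step, which mirrors the kind of rounding-error check (CPU vs. GPU outcomes) the authors describe.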

  1. Design and energetic evaluation of a prosthetic knee joint actuator with a lockable parallel spring.

    PubMed

    Geeroms, J; Flynn, L; Jimenez-Fabian, R; Vanderborght, B; Lefeber, D

    2017-02-03

    Existing damping knee prostheses have disadvantages that cause an asymmetric gait and a higher metabolic cost during level walking compared to non-amputees. Most existing active knee prostheses that could benefit amputees use a significant amount of energy and require a considerable motor. In this work, a novel semi-active actuator with a lockable parallel spring for a prosthetic knee joint has been developed and tested. This actuator is able to approximate the behavior of a healthy knee during most of the gait cycle of level walking. The actuator is expanded with a series-elastic actuator to mimic the full gait cycle and enable its use in other functional tasks such as stair climbing and sit-to-stance. The proposed novel actuator reduces the energy consumption for the same trajectory relative to a compliant or directly driven active prosthetic knee joint and improves the approximation of healthy knee behavior during level walking compared to passive or variable-damping knee prostheses.

  2. Quasi-Newton parallel geometry optimization methods

    NASA Astrophysics Data System (ADS)

    Burger, Steven K.; Ayers, Paul W.

    2010-07-01

    Algorithms for parallel unconstrained minimization of molecular systems are examined. The overall framework of minimization is the same except for the choice of directions for updating the quasi-Newton Hessian. Ideally these directions are chosen so that the updated Hessian gives steps that are the same as those of the Newton method. Three approaches to determining the update directions are presented: the straightforward approach of simply cycling through the Cartesian unit vectors (finite difference), a concurrent set of minimizations, and the Lanczos method. We show the importance of using preconditioning and a multiple secant update in these approaches. The Lanczos algorithm requires an initial set of directions to start the method, and a number of possibilities are explored. To test the methods we used the standard 50-dimensional analytic Rosenbrock function. Results are also reported for the histidine dipeptide, the isoleucine tripeptide, and cyclic adenosine monophosphate. All of these systems show a significant speed-up as the number of processors increases, up to about eight processors.
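A minimal sketch of two ingredients named above: the 50-dimensional Rosenbrock test function, and the simplest update-direction strategy, cycling through Cartesian unit vectors with finite differences to probe curvature. This is an illustration of the idea, not the authors' implementation; the step size `h` is an assumed value.

```python
def rosenbrock(x):
    """Extended Rosenbrock function; global minimum 0 at x = (1, ..., 1)."""
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))

def directional_curvature(f, x, i, h=1e-4):
    """Central finite-difference estimate of the curvature of f along the
    i-th Cartesian unit vector. Each direction is independent of the
    others, which is what makes this strategy trivially parallel."""
    xp = list(x); xp[i] += h
    xm = list(x); xm[i] -= h
    return (f(xp) - 2.0 * f(x) + f(xm)) / h ** 2

x0 = [0.0] * 50
# In a parallel setting, each processor would evaluate one direction.
curvs = [directional_curvature(rosenbrock, x0, i) for i in range(3)]
```

Each curvature estimate costs two extra function evaluations, which is why the number of useful processors saturates near the handful of directions probed per iteration.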

  3. Multi-Sensor Data Fusion Identification for Shearer Cutting Conditions Based on Parallel Quasi-Newton Neural Networks and the Dempster-Shafer Theory.

    PubMed

    Si, Lei; Wang, Zhongbin; Liu, Xinhua; Tan, Chao; Xu, Jing; Zheng, Kehong

    2015-11-13

    In order to efficiently and accurately identify the cutting condition of a shearer, this paper proposed an intelligent multi-sensor data fusion identification method using the parallel quasi-Newton neural network (PQN-NN) and the Dempster-Shafer (DS) theory. The vibration acceleration signals and current signal of six cutting conditions were collected from a self-designed experimental system and some special state features were extracted from the intrinsic mode functions (IMFs) based on the ensemble empirical mode decomposition (EEMD). In the experiment, three classifiers were trained and tested by the selected features of the measured data, and the DS theory was used to combine the identification results of three single classifiers. Furthermore, some comparisons with other methods were carried out. The experimental results indicate that the proposed method performs with higher detection accuracy and credibility than the competing algorithms. Finally, an industrial application example in the fully mechanized coal mining face was demonstrated to specify the effect of the proposed system.

  4. Nemesis I: Parallel Enhancements to ExodusII

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hennigan, Gary L.; John, Matthew S.; Shadid, John N.

    2006-03-28

    NEMESIS I is an enhancement to the EXODUS II finite element database model used to store and retrieve data for unstructured parallel finite element analyses. NEMESIS I adds data structures which facilitate the partitioning of a scalar (standard serial) EXODUS II file onto parallel disk systems found on many parallel computers. Since the NEMESIS I application programming interface (API) can be used to append information to an existing EXODUS II file, standard EXODUS II applications can still be used on files which contain NEMESIS I information. The NEMESIS I information is written and read via C or C++ callable functions which comprise the NEMESIS I API.

  5. Programming parallel architectures: The BLAZE family of languages

    NASA Technical Reports Server (NTRS)

    Mehrotra, Piyush

    1988-01-01

    Programming multiprocessor architectures is a critical research issue. An overview is given of the various approaches to programming these architectures that are currently being explored. It is argued that two of these approaches, interactive programming environments and functional parallel languages, are particularly attractive since they remove much of the burden of exploiting parallel architectures from the user. Also described is recent work by the author in the design of parallel languages. Research on languages for both shared and nonshared memory multiprocessors is described, as well as the relations of this work to other current language research projects.

  6. Item Selection for the Development of Parallel Forms from an IRT-Based Seed Test Using a Sampling and Classification Approach

    ERIC Educational Resources Information Center

    Chen, Pei-Hua; Chang, Hua-Hua; Wu, Haiyan

    2012-01-01

    Two sampling-and-classification-based procedures were developed for automated test assembly: the Cell Only and the Cell and Cube methods. A simulation study based on a 540-item bank was conducted to compare the performance of the procedures with the performance of a mixed-integer programming (MIP) method for assembling multiple parallel test…

  7. GRADSPMHD: A parallel MHD code based on the SPH formalism

    NASA Astrophysics Data System (ADS)

    Vanaverbeke, S.; Keppens, R.; Poedts, S.

    2014-03-01

    We present GRADSPMHD, a completely Lagrangian parallel magnetohydrodynamics code based on the SPH formalism. The implementation of the equations of SPMHD in the “GRAD-h” formalism assembles known results, including the derivation of the discretized MHD equations from a variational principle, the inclusion of time-dependent artificial viscosity, resistivity and conductivity terms, as well as the inclusion of a mixed hyperbolic/parabolic correction scheme for satisfying the ∇·B = 0 constraint on the magnetic field. The code uses a tree-based formalism for neighbor finding and can optionally use the tree code for computing the self-gravity of the plasma. The structure of the code closely follows the framework of our parallel GRADSPH FORTRAN 90 code which we added previously to the CPC program library. We demonstrate the capabilities of GRADSPMHD by running 1-, 2-, and 3-dimensional standard benchmark tests and we find good agreement with previous work done by other researchers. The code is also applied to the problem of simulating the magnetorotational instability in 2.5D shearing box tests as well as in global simulations of magnetized accretion disks. We find good agreement with available results on this subject in the literature. Finally, we discuss the performance of the code on a parallel supercomputer with distributed memory architecture. Catalogue identifier: AERP_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AERP_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 620503 No. of bytes in distributed program, including test data, etc.: 19837671 Distribution format: tar.gz Programming language: FORTRAN 90/MPI. Computer: HPC cluster. Operating system: Unix. Has the code been vectorized or parallelized?: Yes, parallelized using MPI.
RAM: ~30 MB for a Sedov test including 15625 particles on a single CPU. Classification: 12. Nature of problem: Evolution of a plasma in the ideal MHD approximation. Solution method: The equations of magnetohydrodynamics are solved using the SPH method. Running time: The test provided takes approximately 20 min using 4 processors.

  8. Design and testing of a regenerative magnetorheological actuator for assistive knee braces

    NASA Astrophysics Data System (ADS)

    Ma, Hao; Chen, Bing; Qin, Ling; Liao, Wei-Hsin

    2017-03-01

    In this paper, a multifunctional magnetorheological actuator with power regeneration capability, named regenerative magnetorheological actuator (RMRA), is designed for gait assistance in the knee joint. RMRA has motor and magnetorheological (MR) brake parts working in parallel that can harvest energy through regenerative braking. This novel design provides multiple functions with good energy efficiency. The configuration and basic design of the RMRA are first introduced. Then geometrical optimization of the MR brake is conducted based on a parameterized model, and multiple factors are considered in the design objectives: braking torque, weight, and power consumption. After the optimal design is obtained, an RMRA prototype is fabricated and associated driver circuits are designed. Finally, multiple functions of the RMRA, especially three different braking modes, are modeled and tested. Experimental results of RMRA output performances in all working modes match the modeling and simulation. Assistive knee braces with the developed RMRA are promising for future applications in gait assistance and rehabilitation.

  9. HYPOTHESIS TESTING FOR HIGH-DIMENSIONAL SPARSE BINARY REGRESSION

    PubMed Central

    Mukherjee, Rajarshi; Pillai, Natesh S.; Lin, Xihong

    2015-01-01

    In this paper, we study the detection boundary for minimax hypothesis testing in the context of high-dimensional, sparse binary regression models. Motivated by genetic sequencing association studies for rare variant effects, we investigate the complexity of the hypothesis testing problem when the design matrix is sparse. We observe a new phenomenon in the behavior of detection boundary which does not occur in the case of Gaussian linear regression. We derive the detection boundary as a function of two components: a design matrix sparsity index and signal strength, each of which is a function of the sparsity of the alternative. For any alternative, if the design matrix sparsity index is too high, any test is asymptotically powerless irrespective of the magnitude of signal strength. For binary design matrices with the sparsity index that is not too high, our results are parallel to those in the Gaussian case. In this context, we derive detection boundaries for both dense and sparse regimes. For the dense regime, we show that the generalized likelihood ratio is rate optimal; for the sparse regime, we propose an extended Higher Criticism Test and show it is rate optimal and sharp. We illustrate the finite sample properties of the theoretical results using simulation studies. PMID:26246645
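The extended Higher Criticism test proposed above builds on the classic Higher Criticism statistic of Donoho and Jin: the maximal standardized deviation of the ordered p-values from their uniform-null expectation. The sketch below shows that standard statistic, not the authors' sparse-binary-design extension; the example p-values are illustrative.

```python
import math

def higher_criticism(pvals):
    """Classic HC statistic: max over i of
    sqrt(n) * (i/n - p_(i)) / sqrt(p_(i) * (1 - p_(i)))
    for the sorted p-values p_(1) <= ... <= p_(n)."""
    n = len(pvals)
    hc = float("-inf")
    for i, p in enumerate(sorted(pvals), start=1):
        if 0.0 < p < 1.0:  # guard the denominator
            hc = max(hc, math.sqrt(n) * (i / n - p)
                         / math.sqrt(p * (1.0 - p)))
    return hc

# One unusually small p-value among mostly unremarkable ones drives HC up,
# which is the regime (sparse, weak signals) where the test is designed to work.
hc = higher_criticism([0.01, 0.2, 0.5, 0.9])
```

Large HC values lead to rejection of the global null; the detection boundary in the paper characterizes when any such test can succeed as a function of design-matrix sparsity and signal strength.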

  10. Repeated Low-Level Blast Exposure: A Descriptive Human Subjects Study.

    PubMed

    Carr, Walter; Stone, James R; Walilko, Tim; Young, Lee Ann; Snook, Tianlu Li; Paggi, Michelle E; Tsao, Jack W; Jankosky, Christopher J; Parish, Robert V; Ahlers, Stephen T

    2016-05-01

    The relationship between repeated exposure to blast overpressure and neurological function was examined in the context of breacher training at the U.S. Marine Corps Weapons Training Battalion Dynamic Entry School. During this training, Students are taught to apply explosive charges to achieve rapid ingress into secured buildings. For this study, both Students and Instructors participated in neurobehavioral testing, blood toxin screening, vestibular/auditory testing, and neuroimaging. Volunteers wore instrumentation during training to allow correlation of human response measurements and blast overpressure exposure. The key findings of this study were from high-memory demand tasks and were limited to the Instructors. Specific tests showing blast-related mean differences were California Verbal Learning Test II, Automated Neuropsychological Assessment Metrics subtests (Match-to-Sample, Code Substitution Delayed), and Delayed Matching-to-Sample 10-second delay condition. Importantly, apparent deficits were paralleled with functional magnetic resonance imaging using the n-back task. The findings of this study are suggestive, but not conclusive, owing to small sample size and effect. The observed changes yield descriptive evidence for potential neurological alterations in the subset of individuals with occupational history of repetitive blast exposure. This is the first study to integrate subject instrumentation for measurement of individual blast pressure exposure, neurocognitive testing, and neuroimaging. Reprint & Copyright © 2016 Association of Military Surgeons of the U.S.

  11. Correlates of cognitive dysfunction in multiple sclerosis.

    PubMed

    Heesen, C; Schulz, K H; Fiehler, J; Von der Mark, U; Otte, C; Jung, R; Poettgen, J; Krieger, T; Gold, S M

    2010-10-01

    Cognitive impairment is one of the most frequent symptoms in patients with multiple sclerosis (MS) but its underlying mechanisms are poorly understood. A number of pathogenetic correlates have previously been proposed including psychosocial factors (such as depression and fatigue), inflammation, neurodegeneration, and neuroendocrine dysregulation. However, these different systems have never been studied in parallel and their differential contributions to cognitive impairment in MS are unknown. We studied a well-characterized cohort of cognitively impaired (CI, n=25) and cognitively preserved (CP, n=25) MS patients based on a comprehensive neuropsychological testing battery, a test of hypothalamo-pituitary-adrenal axis functioning (dexamethasone-corticotropin-releasing hormone suppression test, Dex-CRH test) as well as peripheral blood and MRI markers of inflammatory activity. CI patients had significantly higher disability. In addition, CI patients showed higher levels of fatigue and depression. Fatigue was more closely associated with measures of attention while depression showed strongest correlations with memory tests. Furthermore, percentage of IFNγ-positive CD4+ and CD8+ T cells showed modest correlations with processing speed and working memory. MRI markers of inflammation or global atrophy were not associated with neuropsychological function. Compared to previous studies, the number of patients exhibiting HPA axis hyperactivity was very low and no correlations were found with neuropsychological function. We conclude that fatigue and depression are the main correlates of cognitive impairment, which show domain-specific associations with measures of attention and memory. Copyright © 2010 Elsevier Inc. All rights reserved.

  12. Follow-up of cortical activity and structure after lesion with laser speckle imaging and magnetic resonance imaging in nonhuman primates

    NASA Astrophysics Data System (ADS)

    Peuser, Jörn; Belhaj-Saif, Abderraouf; Hamadjida, Adjia; Schmidlin, Eric; Gindrat, Anne-Dominique; Völker, Andreas Charles; Zakharov, Pavel; Hoogewoud, Henri-Marcel; Rouiller, Eric M.; Scheffold, Frank

    2011-09-01

    The nonhuman primate model is suitable to study mechanisms of functional recovery following lesion of the cerebral cortex (motor cortex), on which therapeutic strategies can be tested. To interpret behavioral data (time course and extent of functional recovery), it is crucial to monitor the properties of the experimental cortical lesion, induced by infusion of the excitotoxin ibotenic acid. In two adult macaque monkeys, ibotenic acid infusions produced a restricted, permanent lesion of the motor cortex. In one monkey, the lesion was monitored over 3.5 weeks, combining laser speckle imaging (LSI) as metabolic readout (cerebral blood flow) and anatomical assessment with magnetic resonance imaging (T2-weighted MRI). The cerebral blood flow, measured online during subsequent injections of the ibotenic acid in the motor cortex, exhibited a dramatic increase, still present after one week, in parallel to a MRI hypersignal. After 3.5 weeks, the cerebral blood flow was strongly reduced (below reference level) and the hypersignal disappeared from the MRI scan, although the lesion was permanent as histologically assessed post-mortem. The MRI data were similar in the second monkey. Our experiments suggest that LSI and MRI, although they reflect different features, vary in parallel during a few weeks following an excitotoxic cortical lesion.

  13. Probability, not linear summation, mediates the detection of concentric orientation-defined textures.

    PubMed

    Schmidtmann, Gunnar; Jennings, Ben J; Bell, Jason; Kingdom, Frederick A A

    2015-01-01

    Previous studies investigating signal integration in circular Glass patterns have concluded that the information in these patterns is linearly summed across the entire display for detection. Here we test whether an alternative form of summation, probability summation (PS), modeled under the assumptions of Signal Detection Theory (SDT), can be rejected as a model of Glass pattern detection. PS under SDT alone predicts that the exponent β of the Quick- (or Weibull-) fitted psychometric function should decrease with increasing signal area. We measured spatial integration in circular, radial, spiral, and parallel Glass patterns, as well as comparable patterns composed of Gabors instead of dot pairs. We measured the signal-to-noise ratio required for detection as a function of the size of the area containing signal, with the remaining area containing dot-pair or Gabor-orientation noise. Contrary to some previous studies, we found that the strength of summation never reached values close to linear summation for any stimuli. More importantly, the exponent β systematically decreased with signal area, as predicted by PS under SDT. We applied a model for PS under SDT and found that it gave a good account of the data. We conclude that probability summation is the most likely basis for the detection of circular, radial, spiral, and parallel orientation-defined textures.
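A simple way to see how probability summation differs from linear summation is to simulate a max-rule observer, one standard way of modeling PS under SDT. The Monte Carlo sketch below is an assumption-laden illustration (channel count, signal level, and 2AFC max rule are all chosen for the example), not the authors' model fit.

```python
import random

def pc_2afc(signal, q, n, trials=20000, seed=0):
    """Proportion correct in 2AFC for a max-rule observer monitoring n
    unit-variance Gaussian channels; the signal interval adds `signal`
    to q of them. The observer picks the interval with the larger maximum."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        sig_int = max(rng.gauss(signal if i < q else 0.0, 1.0)
                      for i in range(n))
        noise_int = max(rng.gauss(0.0, 1.0) for i in range(n))
        if sig_int > noise_int:
            correct += 1
    return correct / trials

pc_small = pc_2afc(1.5, q=1, n=4)  # signal confined to one channel
pc_large = pc_2afc(1.5, q=4, n=4)  # signal fills all monitored channels
```

Performance improves as signal area grows, but by less than a linear-summation observer would predict, consistent with the summation strengths reported above.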

  14. Optimization of the coherence function estimation for multi-core central processing unit

    NASA Astrophysics Data System (ADS)

    Cheremnov, A. G.; Faerman, V. A.; Avramchuk, V. S.

    2017-02-01

    The paper considers the use of parallel processing on a multi-core central processing unit to optimize the evaluation of the coherence function arising in digital signal processing. The coherence function, along with other methods of spectral analysis, is commonly used for vibration diagnosis of rotating machinery and its particular nodes. An algorithm is given for evaluating the function for signals represented by digital samples. The algorithm is analyzed for its software implementation and computational problems. Optimization measures are described, including algorithmic, architecture, and compiler optimization, and their results are assessed for multi-core processors from different manufacturers. The speed-up of parallel execution with respect to sequential execution was studied, and results are presented for Intel Core i7-4720HQ and AMD FX-9590 processors. The results show the comparatively high efficiency of the optimization measures taken. In particular, acceleration indicators and average CPU utilization were significantly improved, showing a high degree of parallelism in the constructed functions. The developed software underwent state registration and will be used as part of a software and hardware solution for rotating machinery fault diagnosis and pipeline leak location with the acoustic correlation method.
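The quantity being optimized is the magnitude-squared coherence, C_xy(f) = |S_xy(f)|² / (S_xx(f) S_yy(f)), estimated by averaging cross- and auto-spectra over segments. The sketch below illustrates that standard estimator with assumed details (segment length, naive O(N²) DFT to stay dependency-free); it is not the paper's optimized implementation.

```python
import cmath
import random

def dft(x):
    """Naive O(N^2) discrete Fourier transform (illustrative only)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

def coherence(x, y, seg_len=16):
    """Magnitude-squared coherence C_xy(f) = |S_xy|^2 / (S_xx * S_yy),
    with spectra averaged over non-overlapping segments. Averaging is
    essential: with a single segment the estimate is identically 1."""
    n_seg = len(x) // seg_len
    sxx = [0.0] * seg_len
    syy = [0.0] * seg_len
    sxy = [0j] * seg_len
    for s in range(n_seg):
        xf = dft(x[s * seg_len:(s + 1) * seg_len])
        yf = dft(y[s * seg_len:(s + 1) * seg_len])
        for k in range(seg_len):
            sxx[k] += abs(xf[k]) ** 2
            syy[k] += abs(yf[k]) ** 2
            sxy[k] += xf[k] * yf[k].conjugate()
    return [abs(sxy[k]) ** 2 / (sxx[k] * syy[k]) for k in range(seg_len)]

rng = random.Random(1)
x = [rng.gauss(0.0, 1.0) for _ in range(128)]
z = [rng.gauss(0.0, 1.0) for _ in range(128)]
c_same = coherence(x, x)   # identical signals: coherence near 1 in every bin
c_indep = coherence(x, z)  # independent noise: low average coherence
```

The per-segment DFTs are independent of one another, which is exactly the structure that makes the computation amenable to the multi-core parallelization the paper describes.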

  15. Parallel hyperbolic PDE simulation on clusters: Cell versus GPU

    NASA Astrophysics Data System (ADS)

    Rostrup, Scott; De Sterck, Hans

    2010-12-01

    Increasingly, high-performance computing is looking towards data-parallel computational devices to enhance computational performance. Two technologies that have received significant attention are IBM's Cell Processor and NVIDIA's CUDA programming model for graphics processing unit (GPU) computing. In this paper we investigate the acceleration of parallel hyperbolic partial differential equation simulation on structured grids with explicit time integration on clusters with Cell and GPU backends. The message passing interface (MPI) is used for communication between nodes at the coarsest level of parallelism. Optimizations of the simulation code at the several finer levels of parallelism that the data-parallel devices provide are described in terms of data layout, data flow and data-parallel instructions. Optimized Cell and GPU performance are compared with reference code performance on a single x86 central processing unit (CPU) core in single and double precision. We further compare the CPU, Cell and GPU platforms on a chip-to-chip basis, and compare performance on single cluster nodes with two CPUs, two Cell processors or two GPUs in a shared memory configuration (without MPI). We finally compare performance on clusters with 32 CPUs, 32 Cell processors, and 32 GPUs using MPI. Our GPU cluster results use NVIDIA Tesla GPUs with GT200 architecture, but some preliminary results on recently introduced NVIDIA GPUs with the next-generation Fermi architecture are also included. This paper provides computational scientists and engineers who are considering porting their codes to accelerator environments with insight into how structured grid based explicit algorithms can be optimized for clusters with Cell and GPU accelerators. It also provides insight into the speed-up that may be gained on current and future accelerator architectures for this class of applications. 
Program summaryProgram title: SWsolver Catalogue identifier: AEGY_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEGY_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GPL v3 No. of lines in distributed program, including test data, etc.: 59 168 No. of bytes in distributed program, including test data, etc.: 453 409 Distribution format: tar.gz Programming language: C, CUDA Computer: Parallel Computing Clusters. Individual compute nodes may consist of x86 CPU, Cell processor, or x86 CPU with attached NVIDIA GPU accelerator. Operating system: Linux Has the code been vectorised or parallelized?: Yes. Tested on 1-128 x86 CPU cores, 1-32 Cell Processors, and 1-32 NVIDIA GPUs. RAM: Tested on Problems requiring up to 4 GB per compute node. Classification: 12 External routines: MPI, CUDA, IBM Cell SDK Nature of problem: MPI-parallel simulation of Shallow Water equations using high-resolution 2D hyperbolic equation solver on regular Cartesian grids for x86 CPU, Cell Processor, and NVIDIA GPU using CUDA. Solution method: SWsolver provides 3 implementations of a high-resolution 2D Shallow Water equation solver on regular Cartesian grids, for CPU, Cell Processor, and NVIDIA GPU. Each implementation uses MPI to divide work across a parallel computing cluster. Additional comments: Sub-program numdiff is used for the test run.

  16. Analysis of a parallelized nonlinear elliptic boundary value problem solver with application to reacting flows

    NASA Technical Reports Server (NTRS)

    Keyes, David E.; Smooke, Mitchell D.

    1987-01-01

    A parallelized finite difference code based on the Newton method for systems of nonlinear elliptic boundary value problems in two dimensions is analyzed in terms of computational complexity and parallel efficiency. An approximate cost function depending on 15 dimensionless parameters is derived for algorithms based on stripwise and boxwise decompositions of the domain and a one-to-one assignment of the strip or box subdomains to processors. The sensitivity of the cost functions to the parameters is explored in regions of parameter space corresponding to model small-order systems with inexpensive function evaluations and also a coupled system of nineteen equations with very expensive function evaluations. The algorithm was implemented on the Intel Hypercube, and some experimental results for the model problems with stripwise decompositions are presented and compared with the theory. In the context of computational combustion problems, multiprocessors of either message-passing or shared-memory type may be employed with stripwise decompositions to realize speedup of O(n), where n is mesh resolution in one direction, for reasonable n.

  17. On Improving Efficiency of Differential Evolution for Aerodynamic Shape Optimization Applications

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.

    2004-01-01

    Differential Evolution (DE) is a simple and robust evolutionary strategy that has been proven effective in determining the global optimum for several difficult optimization problems. Although DE offers several advantages over traditional optimization approaches, its use in applications such as aerodynamic shape optimization, where the objective function evaluations are computationally expensive, is limited by the large number of function evaluations often required. In this paper various approaches for improving the efficiency of DE are reviewed and discussed. Several approaches that have proven effective for other evolutionary algorithms are modified and implemented in a DE-based aerodynamic shape optimization method that uses a Navier-Stokes solver for the objective function evaluations. Parallelization techniques on distributed computers are used to reduce turnaround times. Results are presented for standard test optimization problems and for the inverse design of a turbine airfoil. The efficiency improvements achieved by the different approaches are evaluated and compared.

  18. Improvements on non-equilibrium and transport Green function techniques: The next-generation TRANSIESTA

    NASA Astrophysics Data System (ADS)

    Papior, Nick; Lorente, Nicolás; Frederiksen, Thomas; García, Alberto; Brandbyge, Mads

    2017-03-01

    We present novel methods implemented within the non-equilibrium Green function (NEGF) code TRANSIESTA based on density functional theory (DFT). Our flexible, next-generation DFT-NEGF code handles devices with one or multiple electrodes (Ne ≥ 1) with individual chemical potentials and electronic temperatures. We describe its novel methods for electrostatic gating, contour optimizations, and assertion of charge conservation, as well as the newly implemented algorithms for optimized and scalable matrix inversion, performance-critical pivoting, and hybrid parallelization. Additionally, a generic NEGF "post-processing" code (TBTRANS/PHTRANS) for electron and phonon transport is presented with several novelties such as Hamiltonian interpolations, Ne ≥ 1 electrode capability, bond-currents, generalized interface for user-defined tight-binding transport, transmission projection using eigenstates of a projected Hamiltonian, and fast inversion algorithms for large-scale simulations easily exceeding 106 atoms on workstation computers. The new features of both codes are demonstrated and benchmarked for relevant test systems.

  19. Study and development of an air conditioning system operating on a magnetic heat pump cycle (design and testing of flow directors)

    NASA Astrophysics Data System (ADS)

    Wang, Pao-Lien

    1992-09-01

    This report describes the fabrication, the design of the flow directors, fluid flow direction analysis, and the testing of the flow directors of a magnetic heat pump. The objectives of the project are: (1) to fabricate a demonstration magnetic heat pump prototype with flow directors installed; and (2) to analyze and test the flow directors and verify that the working fluid loops flow in the correct directions with minimal mixing. The prototype was fabricated and tested at the Development Testing Laboratory of Kennedy Space Center. The magnetic heat pump uses rare earth metal plates that rotate in and out of a magnetic field in a clear plastic housing, with water flowing through the rotor plates to provide temperature lift. Obtaining the proper water flow direction has been a problem. Flow directors were installed as flow barriers at the separating point of the two parallel loops. The function of the flow directors was proven to be excellent both analytically and experimentally.

  20. Study and development of an air conditioning system operating on a magnetic heat pump cycle (design and testing of flow directors)

    NASA Technical Reports Server (NTRS)

    Wang, Pao-Lien

    1992-01-01

    This report describes the fabrication, the design of the flow directors, fluid flow direction analysis, and the testing of the flow directors of a magnetic heat pump. The objectives of the project are: (1) to fabricate a demonstration magnetic heat pump prototype with flow directors installed; and (2) to analyze and test the flow directors and verify that the working fluid loops flow in the correct directions with minimal mixing. The prototype was fabricated and tested at the Development Testing Laboratory of Kennedy Space Center. The magnetic heat pump uses rare earth metal plates that rotate in and out of a magnetic field in a clear plastic housing, with water flowing through the rotor plates to provide temperature lift. Obtaining the proper water flow direction has been a problem. Flow directors were installed as flow barriers at the separating point of the two parallel loops. The function of the flow directors was proven to be excellent both analytically and experimentally.

  1. Accelerating Computation of DCM for ERP in MATLAB by External Function Calls to the GPU.

    PubMed

    Wang, Wei-Jen; Hsieh, I-Fan; Chen, Chun-Chuan

    2013-01-01

    This study aims to improve the performance of Dynamic Causal Modelling for Event Related Potentials (DCM for ERP) in MATLAB by using external function calls to a graphics processing unit (GPU). DCM for ERP is an advanced method for studying neuronal effective connectivity. DCM utilizes an iterative procedure, the expectation maximization (EM) algorithm, to find the optimal parameters given a set of observations and the underlying probability model. As the EM algorithm is computationally demanding and the analysis faces a possible combinatorial explosion of models to be tested, we propose a parallel computing scheme using the GPU to achieve fast estimation of DCM for ERP. The computation of DCM for ERP is dynamically partitioned and distributed to threads for parallel processing, according to the DCM model complexity and the hardware constraints. The performance efficiency of this hardware-dependent thread arrangement strategy was evaluated using synthetic data. The experimental data were used to validate the accuracy of the proposed computing scheme and to quantify the time saving in practice. The simulation results show that the proposed scheme can accelerate the computation by a factor of 155 for the parallel part. For experimental data, the speedup factor is about 7 per model on average, depending on the model complexity and the data. This GPU-based implementation of DCM for ERP gives qualitatively the same results as the original MATLAB implementation at the group-level analysis. In conclusion, we believe that the proposed GPU-based implementation is very useful as a fast screening tool to select the most likely model and may provide implementation guidance for possible future clinical applications such as online diagnosis.

  2. Accelerating Computation of DCM for ERP in MATLAB by External Function Calls to the GPU

    PubMed Central

    Wang, Wei-Jen; Hsieh, I-Fan; Chen, Chun-Chuan

    2013-01-01

    This study aims to improve the performance of Dynamic Causal Modelling for Event Related Potentials (DCM for ERP) in MATLAB by using external function calls to a graphics processing unit (GPU). DCM for ERP is an advanced method for studying neuronal effective connectivity. DCM utilizes an iterative procedure, the expectation maximization (EM) algorithm, to find the optimal parameters given a set of observations and the underlying probability model. As the EM algorithm is computationally demanding and the analysis faces a possible combinatorial explosion of models to be tested, we propose a parallel computing scheme using the GPU to achieve fast estimation of DCM for ERP. The computation of DCM for ERP is dynamically partitioned and distributed to threads for parallel processing, according to the DCM model complexity and the hardware constraints. The performance efficiency of this hardware-dependent thread arrangement strategy was evaluated using synthetic data. The experimental data were used to validate the accuracy of the proposed computing scheme and to quantify the time saving in practice. The simulation results show that the proposed scheme can accelerate the computation by a factor of 155 for the parallel part. For experimental data, the speedup factor is about 7 per model on average, depending on the model complexity and the data. This GPU-based implementation of DCM for ERP gives qualitatively the same results as the original MATLAB implementation at the group-level analysis. In conclusion, we believe that the proposed GPU-based implementation is very useful as a fast screening tool to select the most likely model and may provide implementation guidance for possible future clinical applications such as online diagnosis. PMID:23840507
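
    The screening workflow described in the abstract, fitting many candidate models independently and keeping the best-scoring one, can be sketched as follows. This is a toy stand-in: `fit_model` and its integer score are hypothetical placeholders, not the paper's MATLAB/GPU implementation of EM.

```python
from concurrent.futures import ThreadPoolExecutor

def fit_model(model_id):
    # Toy stand-in for one EM-based DCM estimation; returns the model id
    # and a hypothetical evidence score used to rank candidate models.
    return model_id, -(model_id % 7)

def screen_models(model_ids, workers=4):
    # Evaluate many candidate models concurrently and keep the best,
    # mirroring the use of parallel hardware to tame the combinatorial
    # explosion of models to be tested.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scored = list(pool.map(fit_model, model_ids))
    return max(scored, key=lambda pair: pair[1])

print(screen_models(range(10)))  # → (0, 0)
```

    In the paper each worker would run on GPU threads rather than CPU threads; the structure of "independent fits, then a single reduction to pick the winner" is the same.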

  3. OpenCL based machine learning labeling of biomedical datasets

    NASA Astrophysics Data System (ADS)

    Amoros, Oscar; Escalera, Sergio; Puig, Anna

    2011-03-01

    In this paper, we propose a two-stage labeling method for large biomedical datasets through a parallel approach on a single GPU. Diagnostic methods, structure volume measurements, and visualization systems are of major importance for surgery planning, intra-operative imaging and image-guided surgery. In all cases, providing an automatic and interactive method to label or tag the different structures contained in the input data becomes imperative. Several approaches to label or segment biomedical datasets have been proposed to discriminate different anatomical structures in an output tagged dataset. Among existing methods, supervised learning methods for segmentation have been devised so that a non-expert user can easily analyze biomedical datasets. However, they still have some problems concerning practical application, such as slow learning and testing speeds. In addition, recent technological developments have led to widespread availability of multi-core CPUs and GPUs, as well as new software languages, such as NVIDIA's CUDA and OpenCL, making it possible to apply parallel programming paradigms on conventional personal computers. The Adaboost classifier is one of the most widely applied methods for labeling in the Machine Learning community. In a first stage, Adaboost trains a binary classifier from a set of pre-labeled samples described by a set of features. This binary classifier is defined as a weighted combination of weak classifiers, each of which is a simple decision function estimated on a single feature value. Then, at the testing stage, each weak classifier is independently applied to the features of a set of unlabeled samples. In this work, we propose an alternative representation of the Adaboost binary classifier, and we use this representation to define a new GPU-based parallelized Adaboost testing stage using OpenCL. We provide numerical experiments based on large available datasets and compare our results to CPU-based strategies in terms of time and labeling speed.
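
    The weighted-combination classifier described above can be sketched in NumPy. The stumps and weights below are hypothetical; the paper's actual contribution is an alternative OpenCL representation of this testing stage, not this reference form.

```python
import numpy as np

def stump_predict(X, feature, threshold, polarity):
    # Weak classifier: a decision stump on a single feature value.
    return polarity * np.where(X[:, feature] < threshold, 1.0, -1.0)

def adaboost_predict(X, stumps, alphas):
    # Strong classifier: weighted vote of the weak classifiers. Each
    # stump is evaluated independently of the others, which is what
    # makes the testing stage easy to parallelize on a GPU.
    scores = np.zeros(X.shape[0])
    for (feature, threshold, polarity), alpha in zip(stumps, alphas):
        scores += alpha * stump_predict(X, feature, threshold, polarity)
    return np.sign(scores)

# Hypothetical trained ensemble: two stumps, on features 0 and 1.
X = np.array([[0.2, 0.9], [0.8, 0.1]])
stumps = [(0, 0.5, 1.0), (1, 0.5, -1.0)]
alphas = [0.7, 0.3]
print(adaboost_predict(X, stumps, alphas))  # → [ 1. -1.]
```

    Because every (sample, stump) pair is independent, the double loop flattens naturally into the kind of data-parallel kernel the paper implements in OpenCL.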

  4. Performance analysis of a parallel Monte Carlo code for simulating solar radiative transfer in cloudy atmospheres using CUDA-enabled NVIDIA GPU

    NASA Astrophysics Data System (ADS)

    Russkova, Tatiana V.

    2017-11-01

    One tool to improve the performance of Monte Carlo methods for numerical simulation of light transport in the Earth's atmosphere is parallel technology. A new algorithm oriented toward parallel execution on a CUDA-enabled NVIDIA graphics processor is discussed. The efficiency of the parallelization is analyzed on the basis of calculating the upward and downward fluxes of solar radiation in both vertically homogeneous and inhomogeneous models of the atmosphere. The results of testing the new code under various atmospheric conditions, including continuous single-layered and multilayered clouds and selective molecular absorption, are presented. The results of testing the code using video cards with different compute capability are analyzed. It is shown that moving the computation from conventional PCs to the architecture of graphics processors gives more than a hundredfold increase in performance and fully reveals the capabilities of the technology used.
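
    A minimal sketch of the kernel step such codes parallelize is sampling photon free-path lengths from the exponential attenuation law s = -ln(1 - u)/k. A constant extinction coefficient k is assumed here for illustration; the function name and setup are not from the paper's code.

```python
import math
import random

def sample_free_paths(extinction, n, seed=0):
    # Sample photon free-path lengths s = -ln(1 - u) / k from the
    # exponential attenuation law for extinction coefficient k. Each
    # photon history is independent, which is why Monte Carlo radiative
    # transfer maps so naturally onto GPU threads.
    rng = random.Random(seed)
    return [-math.log(1.0 - rng.random()) / extinction for _ in range(n)]

paths = sample_free_paths(extinction=2.0, n=100000)
print(abs(sum(paths) / len(paths) - 1.0 / 2.0) < 0.02)  # mean ≈ 1/k
```

    In a GPU implementation each thread carries its own random stream and photon history; the independence of histories is what makes the hundredfold speedup reported above attainable.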

  5. Domain decomposition methods in aerodynamics

    NASA Technical Reports Server (NTRS)

    Venkatakrishnan, V.; Saltz, Joel

    1990-01-01

    Compressible Euler equations are solved for two-dimensional problems by a preconditioned conjugate gradient-like technique. An approximate Riemann solver is used to compute the numerical fluxes to second-order accuracy in space. Two ways to achieve parallelism are tested: one makes use of the parallelism inherent in triangular solves, and the other employs domain decomposition techniques. The vectorization/parallelism in triangular solves is realized by the use of a reordering technique called wavefront ordering. This process involves interpreting the triangular matrix as a directed graph and analyzing the data dependencies. It is noted that the factorization can also be done in parallel with the wavefront ordering. The performances of two ways of partitioning the domain, strips and slabs, are compared. Results on a Cray Y-MP are reported for an inviscid transonic test case. The performances of linear algebra kernels are also reported.
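
    The wavefront-ordering idea, interpreting the triangular matrix as a directed graph and grouping rows with no mutual dependencies into levels that can be solved simultaneously, can be sketched as follows. The sparsity pattern and function name are illustrative, not from the paper.

```python
def wavefront_levels(rows):
    # Assign each row of a lower-triangular solve to a wavefront level.
    # rows[i] holds the column indices j < i with a nonzero L[i][j],
    # i.e. the rows that row i depends on. Rows sharing a level have no
    # mutual dependencies and can be solved simultaneously.
    levels = []
    for deps in rows:
        levels.append(1 + max((levels[j] for j in deps), default=0))
    return levels

# Hypothetical 5x5 sparsity pattern: row index -> rows it depends on.
deps = [set(), {0}, set(), {1, 2}, {3}]
print(wavefront_levels(deps))  # → [1, 2, 1, 3, 4]
```

    Rows 0 and 2 form the first wavefront and can be solved together; the number of distinct levels bounds the sequential depth of the triangular solve.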

  6. Bioinspired engineering study of Plantae vascules for self-healing composite structures.

    PubMed

    Trask, R S; Bond, I P

    2010-06-06

    This paper presents the first conceptual study into creating a Plantae-inspired vascular network within a fibre-reinforced polymer composite laminate, which provides an ongoing self-healing functionality without incurring a mass penalty. Through the application of a 'lost-wax' technique, orthogonal hollow vascules, inspired by the 'ray cell' structures found in ring-porous hardwoods, were successfully introduced within a carbon fibre-reinforced epoxy polymer composite laminate. The influence on fibre architecture and mechanical behaviour of single vascules (located on the laminate centreline) when aligned parallel and transverse to the local host ply was characterized experimentally using a compression-after-impact test methodology. Ultrasonic C-scanning and high-resolution micro-CT X-ray imaging were undertaken to identify the influence of, and interaction between, the internal vasculature and impact damage. The results clearly show that damage morphology is influenced by vascule orientation and that a 10 J low-velocity impact damage event is sufficient to breach the vasculature; a prerequisite for any subsequent self-healing function. The residual compressive strength after a 10 J impact was found to be dependent upon vascule orientation. In general, residual compressive strength decreased to 70 per cent of undamaged strength when the vasculature was aligned parallel to the local host ply and to 63 per cent when aligned transverse. This bioinspired engineering study has illustrated the potential that a vasculature concept has to offer in terms of providing a self-healing function with minimum mass penalty, without initiating premature failure within a composite structure.

  7. Functional anatomy of the lateral collateral ligament of the elbow.

    PubMed

    Hackl, M; Bercher, M; Wegmann, K; Müller, L P; Dargel, J

    2016-07-01

    The aim of this study was to analyze the functional anatomy of the lateral collateral ligament complex (LCLC) and the surrounding forearm extensors. Using 81 human cadaveric upper extremities, the anatomy of the forearm extensors-especially the anconeus, supinator and extensor carpi ulnaris (ECU)-was analyzed. After removal of aforementioned extensors the functional anatomy of the LCLC was analyzed. The origin of the LCLC was evaluated for isometry. The insertion types of the lateral ulnar collateral ligament (LUCL) were analyzed and classified. The ECU runs parallel to the RCL to dynamically preserve varus stability. The supinator and anconeus muscle fibers coalesce with the LCLC and lengthen during pronation. The anconeus fibers run parallel to the LUCL in full flexion. The LCLC consists of the annular ligament (AL) and the isometric radial collateral ligament (RCL). During elbow flexion, its posterior branches (LUCL) tighten while the anterior branches loosen. When performing a pivot shift test, the loosened LUCL fibers do not fully tighten in full extension. The LUCL inserts along with the AL at the supinator crest. Three different insertion types could be observed. The LUCL represents the posterior branch of the RCL rather than a distinct ligament. It is non-isometric and lengthens during elbow flexion. The RCL was found to be of vital importance for neutralization of posterolateral rotatory forces. Pronation of the forearm actively stabilizes the elbow joint as the supinator, anconeus and biceps muscle work in unison to increase posterolateral rotatory stability.

  8. Grid-Based Projector Augmented Wave (GPAW) Implementation of Quantum Mechanics/Molecular Mechanics (QM/MM) Electrostatic Embedding and Application to a Solvated Diplatinum Complex.

    PubMed

    Dohn, A O; Jónsson, E Ö; Levi, G; Mortensen, J J; Lopez-Acevedo, O; Thygesen, K S; Jacobsen, K W; Ulstrup, J; Henriksen, N E; Møller, K B; Jónsson, H

    2017-12-12

    A multiscale density functional theory-quantum mechanics/molecular mechanics (DFT-QM/MM) scheme is presented, based on an efficient electrostatic coupling between the electronic density obtained from a grid-based projector augmented wave (GPAW) implementation of density functional theory and a classical potential energy function. The scheme is implemented in a general fashion and can be used with various choices for the descriptions of the QM or MM regions. Tests on H2O clusters, ranging from dimer to decamer, show that no systematic energy errors are introduced by the coupling that exceed the differences in the QM and MM descriptions. Over 1 ns of liquid-water Born-Oppenheimer QM/MM molecular dynamics (MD) was sampled by combining 10 parallel simulations, showing consistent liquid water structure over the QM/MM border. The method is applied in extensive parallel MD simulations of an aqueous solution of the diplatinum [Pt2(P2O5H2)4]4- complex (PtPOP), spanning a total time period of roughly half a nanosecond. An average Pt-Pt distance deviating only 0.01 Å from experimental results, and a ground-state Pt-Pt oscillation frequency deviating by <2% from experimental results, were obtained. The simulations highlight a remarkable harmonicity of the Pt-Pt oscillation, while also showing clear signs of Pt-H hydrogen bonding and directional coordination of water molecules along the Pt-Pt axis of the complex.

  9. Parallel evolution of chordate cis-regulatory code for development.

    PubMed

    Doglio, Laura; Goode, Debbie K; Pelleri, Maria C; Pauls, Stefan; Frabetti, Flavia; Shimeld, Sebastian M; Vavouri, Tanya; Elgar, Greg

    2013-11-01

    Urochordates are the closest relatives of vertebrates and at the larval stage, possess a characteristic bilateral chordate body plan. In vertebrates, the genes that orchestrate embryonic patterning are in part regulated by highly conserved non-coding elements (CNEs), yet these elements have not been identified in urochordate genomes. Consequently the evolution of the cis-regulatory code for urochordate development remains largely uncharacterised. Here, we use genome-wide comparisons between C. intestinalis and C. savignyi to identify putative urochordate cis-regulatory sequences. Ciona conserved non-coding elements (ciCNEs) are associated with largely the same key regulatory genes as vertebrate CNEs. Furthermore, some of the tested ciCNEs are able to activate reporter gene expression in both zebrafish and Ciona embryos, in a pattern that at least partially overlaps that of the gene they associate with, despite the absence of sequence identity. We also show that the ability of a ciCNE to up-regulate gene expression in vertebrate embryos can in some cases be localised to short sub-sequences, suggesting that functional cross-talk may be defined by small regions of ancestral regulatory logic, although functional sub-sequences may also be dispersed across the whole element. We conclude that the structure and organisation of cis-regulatory modules is very different between vertebrates and urochordates, reflecting their separate evolutionary histories. However, functional cross-talk still exists because the same repertoire of transcription factors has likely guided their parallel evolution, exploiting similar sets of binding sites but in different combinations.

  10. SOP: parallel surrogate global optimization with Pareto center selection for computationally expensive single objective problems

    DOE PAGES

    Krityakierne, Tipaluck; Akhtar, Taimoor; Shoemaker, Christine A.

    2016-02-02

    This paper presents a parallel surrogate-based global optimization method for computationally expensive objective functions that is more effective for larger numbers of processors. To reach this goal, we integrated concepts from multi-objective optimization and tabu search into single-objective surrogate optimization. Our proposed derivative-free algorithm, called SOP, uses non-dominated sorting of points for which the expensive function has been previously evaluated. The two objectives are the expensive function value of the point and the minimum distance of the point to previously evaluated points. Based on the results of non-dominated sorting, P points from the sorted fronts are selected as centers from which many candidate points are generated by random perturbations. Based on surrogate approximation, the best candidate point is subsequently selected for expensive evaluation for each of the P centers, with simultaneous computation on P processors. Centers that previously did not generate good solutions are tabu with a given tenure. We show almost sure convergence of this algorithm under some conditions. The performance of SOP is compared with two RBF-based methods. The test results show that SOP is an efficient method that can reduce the time required to find a good near-optimal solution. In a number of cases the efficiency of SOP is so good that SOP with 8 processors found an accurate answer in less wall-clock time than the other algorithms did with 32 processors.
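
    The non-dominated sorting step at the heart of SOP, over the two stated objectives (low expensive-function value, large minimum distance to already-evaluated points), can be sketched as follows. The data are illustrative and this is not the authors' code; only the first Pareto front is computed here.

```python
def pareto_front(values, distances):
    # First non-dominated front for SOP's two objectives:
    #   minimize the expensive function value,
    #   maximize the minimum distance to already-evaluated points
    # (which rewards exploration of sparsely sampled regions).
    n = len(values)
    front = []
    for i in range(n):
        dominated = any(
            values[j] <= values[i] and distances[j] >= distances[i]
            and (values[j] < values[i] or distances[j] > distances[i])
            for j in range(n)
        )
        if not dominated:
            front.append(i)
    return front

# Hypothetical evaluated points: (function value, min distance to neighbours).
f = [1.0, 2.0, 0.5, 2.5]
d = [0.1, 0.4, 0.2, 0.3]
print(pareto_front(f, d))  # → [1, 2]
```

    SOP would then draw its P centers from the sorted fronts, skipping centers currently on the tabu list, and perturb each center to generate candidates for the surrogate to rank.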

  11. Early Disruption of Extracellular Pleiotrophin Distribution Alters Cerebellar Neuronal Circuit Development and Function.

    PubMed

    Hamza, M M; Rey, S A; Hilber, P; Arabo, A; Collin, T; Vaudry, D; Burel, D

    2016-10-01

    The cerebellum is a structure of the central nervous system involved in balance, motor coordination, and voluntary movements. The elementary circuit implicated in the control of locomotion involves Purkinje cells, which receive excitatory inputs from parallel and climbing fibers, and are regulated by cerebellar interneurons. In mice as in human, the cerebellar cortex completes its development mainly after birth with the migration, differentiation, and synaptogenesis of granule cells. These cellular events are under the control of numerous extracellular matrix molecules including pleiotrophin (PTN). This cytokine has been shown to regulate the morphogenesis of Purkinje cells ex vivo and in vivo via its receptor PTPζ. Since Purkinje cells are the unique output of the cerebellar cortex, we explored the consequences of their PTN-induced atrophy on the function of the cerebellar neuronal circuit in mice. Behavioral experiments revealed that, despite a normal overall development, PTN-treated mice present a delay in the maturation of their flexion reflex. Moreover, patch clamp recording of Purkinje cells revealed a significant increase in the frequency of spontaneous excitatory postsynaptic currents in PTN-treated mice, associated with a decrease of climbing fiber innervations and an abnormal perisomatic localization of the parallel fiber contacts. At adulthood, PTN-treated mice exhibit coordination impairment on the rotarod test associated with an alteration of the synchronization gait. Altogether these histological, electrophysiological, and behavior data reveal that an early ECM disruption of PTN composition induces short- and long-term defaults in the establishment of proper functional cerebellar circuit.

  12. Per-service supervised learning for identifying desired WoT apps from user requests in natural language

    PubMed Central

    2017-01-01

    Web of Things (WoT) platforms are growing fast, and so are the needs for composing WoT apps more easily and efficiently. We have recently commenced a campaign to develop an interface where users can issue requests for WoT apps entirely in natural language. This requires an effort to build a system that can learn to identify the relevant WoT functions that fulfill a user's request. In our preceding work, we trained a supervised learning system with thousands of publicly available IFTTT app recipes based on conditional random fields (CRF). However, the sub-par accuracy and excessive training time motivated us to devise a better approach. In this paper, we present a novel solution that creates a separate learning engine for each trigger service. With this approach, parallel and incremental learning becomes possible. For inference, our system first identifies the most relevant trigger service for a given user request by using an information retrieval technique. Then, the learning engine associated with the trigger service predicts the most likely pair of trigger and action functions. We expect that such a two-phase inference method, given parallel learning engines, would improve the accuracy of identifying relevant WoT functions. We verify our new solution through empirical evaluation with training and test sets sampled from a pool of refined IFTTT app recipes. We also meticulously analyze the characteristics of the recipes to find future research directions. PMID:29149217

  13. Per-service supervised learning for identifying desired WoT apps from user requests in natural language.

    PubMed

    Yoon, Young

    2017-01-01

    Web of Things (WoT) platforms are growing fast, and so are the needs for composing WoT apps more easily and efficiently. We have recently commenced a campaign to develop an interface where users can issue requests for WoT apps entirely in natural language. This requires an effort to build a system that can learn to identify the relevant WoT functions that fulfill a user's request. In our preceding work, we trained a supervised learning system with thousands of publicly available IFTTT app recipes based on conditional random fields (CRF). However, the sub-par accuracy and excessive training time motivated us to devise a better approach. In this paper, we present a novel solution that creates a separate learning engine for each trigger service. With this approach, parallel and incremental learning becomes possible. For inference, our system first identifies the most relevant trigger service for a given user request by using an information retrieval technique. Then, the learning engine associated with the trigger service predicts the most likely pair of trigger and action functions. We expect that such a two-phase inference method, given parallel learning engines, would improve the accuracy of identifying relevant WoT functions. We verify our new solution through empirical evaluation with training and test sets sampled from a pool of refined IFTTT app recipes. We also meticulously analyze the characteristics of the recipes to find future research directions.
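
    The two-phase inference described in the abstract, first retrieving the trigger service and then deferring to that service's engine, can be sketched as follows. The services, engines and keyword "retrieval" below are toy stand-ins for the paper's IR-based ranking and per-service learning engines.

```python
def route_and_predict(request, engines, retrieve_service):
    # Two-phase inference: (1) pick the most relevant trigger service,
    # (2) let that service's dedicated engine predict the most likely
    # pair of trigger and action functions.
    service = retrieve_service(request)
    return service, engines[service](request)

# Toy per-service engines; in the paper each would be trained
# independently (and hence in parallel) on that service's recipes.
engines = {
    "weather": lambda req: ("rain_detected", "send_notification"),
    "mail": lambda req: ("new_email", "save_attachment"),
}

def retrieve_service(request):
    # Keyword match as a stand-in for a real IR ranking step.
    return "weather" if "rain" in request else "mail"

print(route_and_predict("notify me when it starts to rain", engines, retrieve_service))
# → ('weather', ('rain_detected', 'send_notification'))
```

    Because each engine only ever sees requests routed to its own service, engines can be trained and updated independently, which is what makes the parallel and incremental learning claimed above possible.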

  14. Spin dependent structure function g1 of the deuteron and the proton

    NASA Astrophysics Data System (ADS)

    Klostermann, L.

    1995-05-01

    This thesis presents a study of the spin structure of the nucleon via deep inelastic scattering (DIS) of polarized muons on polarized proton and deuterium targets. The work was done in the Spin Muon Collaboration (SMC) at CERN in Geneva. From the asymmetry in the scattering cross section for nucleon and lepton spins parallel and anti-parallel, one can determine the spin-dependent structure function g1, which contains information on the quark and gluon spin distribution functions. The interpretation, in the framework of the quark parton model (QPM), of earlier results on g1^d by the European Muon Collaboration (EMC) gave an indication that only a small fraction of the proton spin, compatible with zero, is carried by the spins of the constituent quarks. The SMC was set up to check this unexpected result with improved accuracy, and to combine measurements of g1^p and g1^d to test a fundamental sum rule in quantum chromodynamics (QCD), the Bjorken sum rule. The SMC results presented in this thesis are based on data taken in 1992 using a polarized deuterium target and polarized muons with an incident energy of 100 GeV, and 1993 data with a proton target and an incident muon energy of 190 GeV. Using all available data, the fundamental Bjorken sum rule has now been verified at the one-standard-deviation level to within 16% of its theoretical value.
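
    The raw counting asymmetry from which g1 is extracted can be illustrated as below. The event counts are hypothetical, and the beam/target polarization and dilution corrections applied in the actual analysis are omitted.

```python
def spin_asymmetry(n_antiparallel, n_parallel):
    # Raw counting-rate asymmetry between anti-parallel and parallel
    # lepton/nucleon spin configurations; the physics asymmetry (and
    # hence g1) follows only after polarization and dilution
    # corrections, which are not applied here.
    return (n_antiparallel - n_parallel) / (n_antiparallel + n_parallel)

# Hypothetical event counts in the two spin configurations.
print(spin_asymmetry(1200.0, 1000.0))  # → 0.09090909090909091
```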

  15. Micro/Nanoscale Parallel Patterning of Functional Biomolecules, Organic Fluorophores and Colloidal Nanocrystals

    PubMed Central

    2009-01-01

    We describe the design and optimization of a reliable strategy that combines self-assembly and lithographic techniques, leading to very precise micro-/nanopositioning of biomolecules for the realization of micro- and nanoarrays of functional DNA and antibodies. Moreover, based on the covalent immobilization of stable and versatile SAMs of programmable chemical reactivity, this approach constitutes a general platform for the parallel site-specific deposition of a wide range of molecules such as organic fluorophores and water-soluble colloidal nanocrystals. PMID:20596482

  16. The interaction of turbulence with parallel and perpendicular shocks

    NASA Astrophysics Data System (ADS)

    Adhikari, L.; Zank, G. P.; Hunana, P.; Hu, Q.

    2016-11-01

    Interplanetary shocks exist in most astrophysical flows and modify the properties of the background flow. We apply the six coupled turbulence transport model equations of Zank et al. (2012) to study the interaction of turbulence with parallel and perpendicular shock waves in the solar wind. We model the 1D structure of a stationary perpendicular or parallel shock wave using a hyperbolic tangent function and the Rankine-Hugoniot conditions. A reduced turbulence transport model (the 4-equation model) is applied to parallel and perpendicular shock waves and solved using a fourth-order Runge-Kutta method. We compare the model results with ACE spacecraft observations. We identify one quasi-parallel and one quasi-perpendicular event in the ACE spacecraft data sets, and compute various observed turbulence quantities, such as the fluctuating magnetic and kinetic energy, the energy in forward and backward propagating modes, and the total turbulent energy upstream and downstream of the shock. We also calculate the error associated with each observed value, and fit the observed values by a least-squares method using a Fourier series fitting function. We find that the theoretical results are in reasonable agreement with observations. The energy in turbulent fluctuations is enhanced and the correlation length is approximately constant at the shock. Similarly, the normalized cross helicity increases across a perpendicular shock and decreases across a parallel shock.
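
    The hyperbolic-tangent profile used to model the 1D shock structure can be sketched as below. The density jump of 4 (the strong-shock Rankine-Hugoniot limit for a monatomic gas) and the shock thickness are assumed for illustration; they are not values from the paper.

```python
import numpy as np

def shock_profile(x, upstream, downstream, thickness=0.1):
    # Smooth 1D stationary shock: a hyperbolic tangent joining the
    # upstream state (x -> -inf) to the downstream state (x -> +inf)
    # over the given shock thickness.
    s = 0.5 * (1.0 + np.tanh(x / thickness))
    return upstream + (downstream - upstream) * s

x = np.linspace(-1.0, 1.0, 5)
# Hypothetical density jump of 4, the strong-shock Rankine-Hugoniot limit.
rho = shock_profile(x, upstream=1.0, downstream=4.0)
print(rho)
```

    Any jump satisfying the Rankine-Hugoniot conditions (density, velocity, magnetic field) can be imposed the same way, giving a smooth background through which the turbulence transport equations are then integrated.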

  17. Verification of Electromagnetic Physics Models for Parallel Computing Architectures in the GeantV Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amadio, G.; et al.

    An intensive R&D and programming effort is required to accomplish the new challenges posed by future experimental high-energy particle physics (HEP) programs. The GeantV project aims to narrow the gap between the performance of the existing HEP detector simulation software and the ideal performance achievable, exploiting the latest advances in computing technology. The project has developed a particle detector simulation prototype capable of transporting particles in parallel through complex geometries, exploiting instruction-level microparallelism (SIMD and SIMT), task-level parallelism (multithreading) and high-level parallelism (MPI), leveraging both multi-core and many-core opportunities. We present preliminary verification results concerning the electromagnetic (EM) physics models developed for parallel computing architectures within the GeantV project. In order to exploit the potential of vectorization and accelerators and to make the physics models effectively parallelizable, advanced sampling techniques have been implemented and tested. In this paper we introduce a set of automated statistical tests in order to verify the vectorized models by checking their consistency with the corresponding Geant4 models and to validate them against experimental data.

  18. Parallel habitat acclimatization is realized by the expression of different genes in two closely related salamander species (genus Salamandra).

    PubMed

    Goedbloed, D J; Czypionka, T; Altmüller, J; Rodriguez, A; Küpfer, E; Segev, O; Blaustein, L; Templeton, A R; Nolte, A W; Steinfartz, S

    2017-12-01

    The utilization of similar habitats by different species provides an ideal opportunity to identify genes underlying adaptation and acclimatization. Here, we analysed the gene expression of two closely related salamander species: Salamandra salamandra in Central Europe and Salamandra infraimmaculata in the Near East. These species inhabit similar habitat types: 'temporary ponds' and 'permanent streams' during larval development. We developed two species-specific gene expression microarrays, each targeting over 12 000 transcripts, including an overlapping subset of 8331 orthologues. Gene expression was examined for systematic differences between temporary ponds and permanent streams in larvae from both salamander species to establish gene sets and functions associated with these two habitat types. Only 20 orthologues were associated with a habitat in both species, but these orthologues did not show parallel expression patterns across species more than expected by chance. Functional annotation of a set of 106 genes with the highest effect size for a habitat suggested four putative gene function categories associated with a habitat in both species: cell proliferation, neural development, oxygen responses and muscle capacity. Among these high effect size genes was a single orthologue (14-3-3 protein zeta/YWHAZ) that was downregulated in temporary ponds in both species. The emergence of four gene function categories combined with a lack of parallel expression of orthologues (except 14-3-3 protein zeta) suggests that parallel habitat adaptation or acclimatization by larvae from S. salamandra and S. infraimmaculata to temporary ponds and permanent streams is mainly realized by different genes with a converging functionality.

  19. Effect of IQoro® training on impaired postural control and oropharyngeal motor function in patients with dysphagia after stroke.

    PubMed

    Hägg, Mary; Tibbling, Lita

    2016-07-01

    Conclusion: All patients with dysphagia after stroke have impaired postural control. IQoro® screen (IQS) training gives a significant and lasting improvement of postural control running parallel with significant improvement of oropharyngeal motor dysfunction (OPMD). Objectives: The present investigation aimed to study the frequency of impaired postural control in patients with stroke-related dysphagia and whether IQS training has any effect on impaired postural control in parallel with its effect on OPMD. Method: A prospective clinical study was carried out with 26 adult patients with stroke-related dysphagia. The training effect was compared between patients consecutively investigated in two different time periods: in the first period, 15 patients were included in the study more than half a year after stroke; in the second period, 11 patients were included within 1 month after stroke. Postural control tests and different oropharyngeal motor tests were performed before and after 3 months of oropharyngeal sensorimotor training with an IQS, and at a late follow-up (median 59 weeks after end of training). Results: All patients had impaired postural control at baseline. Significant improvement in postural control and OPMD was observed after the completion of IQS training in both intervention groups. The improvements were still present at the late follow-up.

  20. A new free and open source tool for space plasma modeling.

    NASA Astrophysics Data System (ADS)

    Honkonen, I. J.

    2014-12-01

    I will present a new distributed-memory parallel, free and open source computational model for studying space plasma. The model is written in C++ with emphasis on good software development practices and code readability without sacrificing serial or parallel performance. As such, the model could be especially useful for education, for learning both (magneto)hydrodynamics (MHD) and computational model development. By using the latest features of the C++ standard (2011) it has been possible to develop a very modular program, which improves not only the readability of the code but also the testability of the model and decreases the effort required to make changes to various parts of the program. Major parts of the model, functionality not directly related to (M)HD, have been outsourced to other freely available libraries, which has reduced the development time of the model significantly. I will present an overview of the code architecture as well as details of different parts of the model and will show examples of using the model, including preparing input files and plotting results. A multitude of 1-, 2- and 3-dimensional test cases are included in the software distribution, and the results of, for example, Kelvin-Helmholtz, bow shock, blast wave and reconnection tests will be presented.

  1. Role of APOE Isoforms in the Pathogenesis of TBI induced Alzheimer’s Disease

    DTIC Science & Technology

    2016-10-01

    deletion, APOE targeted replacement, complex breeding, CCI model optimization, mRNA library generation, high throughput massive parallel sequencing... demonstrate that the lack of Abca1 increases amyloid plaques and decreased APOE protein levels in AD-model mice. In this proposal we will test the hypothesis... injury, inflammatory reaction, transcriptome, high throughput massive parallel sequencing, mRNA-seq., behavioral testing, memory impairment, recovery

  2. Holographic disk with high data transfer rate: its application to an audio response memory.

    PubMed

    Kubota, K; Ono, Y; Kondo, M; Sugama, S; Nishida, N; Sakaguchi, M

    1980-03-15

    This paper describes a memory realized with a high data transfer rate using the holographic parallel-processing function and its application to an audio response system that supplies many audio messages to many terminals simultaneously. Digitalized audio messages are recorded as tiny 1-D Fourier transform holograms on a holographic disk. A hologram recorder and a hologram reader were constructed to test and demonstrate the holographic audio response memory feasibility. Experimental results indicate the potentiality of an audio response system with a 2000-word vocabulary and 250-Mbit/sec bit transfer rate.

  3. Defining the Transfer Functions of the PCAD Model in North Atlantic Right Whales (Eubalaena glacialis) - Retrospective Analyses of Existing Data

    DTIC Science & Technology

    2012-09-30

    potentially providing information on nutritional state and chronic stress (Wasser et al., 2010). We tested both T3 and T4 assays for parallelism. The... EconPapers.RePEc.org/RePEc:inn:wpaper:2011-20 Hunt K.E., Rolland R.M., Kraus S.D., Wasser S.K. 2006. Analysis of fecal glucocorticoids in the North Atlantic right... version 1.2.5. http://CRAN.R-project.org/package=doMC Rolland R.M., Hunt K.E., Kraus S.D., Wasser S.K. 2005. Assessing reproductive status of right

  4. SediFoam: A general-purpose, open-source CFD-DEM solver for particle-laden flow with emphasis on sediment transport

    NASA Astrophysics Data System (ADS)

    Sun, Rui; Xiao, Heng

    2016-04-01

    With the growth of available computational resources, CFD-DEM (computational fluid dynamics-discrete element method) becomes an increasingly promising and feasible approach for the study of sediment transport. Several existing CFD-DEM solvers are applied in the chemical engineering and mining industries. However, a robust CFD-DEM solver for the simulation of sediment transport is still desirable. In this work, the development of a three-dimensional, massively parallel, and open-source CFD-DEM solver SediFoam is detailed. This solver is built based on the open-source solvers OpenFOAM and LAMMPS. OpenFOAM is a CFD toolbox that can perform three-dimensional fluid flow simulations on unstructured meshes; LAMMPS is a massively parallel DEM solver for molecular dynamics. Several validation tests of SediFoam are performed using cases of a wide range of complexities. The results obtained in the present simulations are consistent with those in the literature, which demonstrates the capability of SediFoam for sediment transport applications. In addition to the validation tests, the parallel efficiency of SediFoam is studied to test the performance of the code for large-scale and complex simulations. The parallel efficiency tests show that the scalability of SediFoam is satisfactory in simulations using up to O(10⁷) particles.

  5. Columnar Segregation of Magnocellular and Parvocellular Streams in Human Extrastriate Cortex

    PubMed Central

    2017-01-01

    Magnocellular versus parvocellular (M-P) streams are fundamental to the organization of macaque visual cortex. Segregated, paired M-P streams extend from retina through LGN into V1. The M stream extends further into area V5/MT, and parts of V2. However, elsewhere in visual cortex, it remains unclear whether M-P-derived information (1) becomes intermixed or (2) remains segregated in M-P-dominated columns and neurons. Here we tested whether M-P streams exist in extrastriate cortical columns, in 8 human subjects (4 female). We acquired high-resolution fMRI at high field (7T), testing for M- and P-influenced columns within each of four cortical areas (V2, V3, V3A, and V4), based on known functional distinctions in M-P streams in macaque: (1) color versus luminance, (2) binocular disparity, (3) luminance contrast sensitivity, (4) peak spatial frequency, and (5) color/spatial interactions. Additional measurements of resting state activity (eyes closed) tested for segregated functional connections between these columns. We found M- and P-like functions and connections within and between segregated cortical columns in V2, V3, and (in most experiments) area V4. Area V3A was dominated by the M stream, without significant influence from the P stream. These results suggest that M-P streams exist, and extend through, specific columns in early/middle stages of human extrastriate cortex. SIGNIFICANCE STATEMENT The magnocellular and parvocellular (M-P) streams are fundamental components of primate visual cortical organization. These streams segregate both anatomical and functional properties in parallel, from retina through primary visual cortex. However, in most higher-order cortical sites, it is unknown whether such M-P streams exist and/or what form those streams would take. Moreover, it is unknown whether M-P streams exist in human cortex. 
Here, fMRI evidence measured at high field (7T) and high resolution revealed segregated M-P streams in four areas of human extrastriate cortex. These results suggest that M-P information is processed in segregated parallel channels throughout much of human visual cortex; the M-P streams are more than a convenient sorting property in earlier stages of the visual system. PMID:28724749

  6. Detection of inherited mutations for breast and ovarian cancer using genomic capture and massively parallel sequencing

    PubMed Central

    Walsh, Tom; Lee, Ming K.; Casadei, Silvia; Thornton, Anne M.; Stray, Sunday M.; Pennil, Christopher; Nord, Alex S.; Mandell, Jessica B.; Swisher, Elizabeth M.; King, Mary-Claire

    2010-01-01

    Inherited loss-of-function mutations in the tumor suppressor genes BRCA1, BRCA2, and multiple other genes predispose to high risks of breast and/or ovarian cancer. Cancer-associated inherited mutations in these genes are collectively quite common, but individually rare or even private. Genetic testing for BRCA1 and BRCA2 mutations has become an integral part of clinical practice, but testing is generally limited to these two genes and to women with severe family histories of breast or ovarian cancer. To determine whether massively parallel, “next-generation” sequencing would enable accurate, thorough, and cost-effective identification of inherited mutations for breast and ovarian cancer, we developed a genomic assay to capture, sequence, and detect all mutations in 21 genes, including BRCA1 and BRCA2, with inherited mutations that predispose to breast or ovarian cancer. Constitutional genomic DNA from subjects with known inherited mutations, ranging in size from 1 to >100,000 bp, was hybridized to custom oligonucleotides and then sequenced using a genome analyzer. Analysis was carried out blind to the mutation in each sample. Average coverage was >1200 reads per base pair. After filtering sequences for quality and number of reads, all single-nucleotide substitutions, small insertion and deletion mutations, and large genomic duplications and deletions were detected. There were zero false-positive calls of nonsense mutations, frameshift mutations, or genomic rearrangements for any gene in any of the test samples. This approach enables widespread genetic testing and personalized risk assessment for breast and ovarian cancer. PMID:20616022

  7. GWM-VI: groundwater management with parallel processing for multiple MODFLOW versions

    USGS Publications Warehouse

    Banta, Edward R.; Ahlfeld, David P.

    2013-01-01

    Groundwater Management–Version Independent (GWM–VI) is a new version of the Groundwater Management Process of MODFLOW. The Groundwater Management Process couples groundwater-flow simulation with a capability to optimize stresses on the simulated aquifer based on an objective function and constraints imposed on stresses and aquifer state. GWM–VI extends prior versions of Groundwater Management in two significant ways—(1) it can be used with any version of MODFLOW that meets certain requirements on input and output, and (2) it is structured to allow parallel processing of the repeated runs of the MODFLOW model that are required to solve the optimization problem. GWM–VI uses the same input structure for files that describe the management problem as that used by prior versions of Groundwater Management. GWM–VI requires only minor changes to the input files used by the MODFLOW model. GWM–VI uses the Joint Universal Parameter IdenTification and Evaluation of Reliability Application Programming Interface (JUPITER-API) to implement both version independence and parallel processing. GWM–VI communicates with the MODFLOW model by manipulating certain input files and interpreting results from the MODFLOW listing file and binary output files. Nearly all capabilities of prior versions of Groundwater Management are available in GWM–VI. GWM–VI has been tested with MODFLOW-2005, MODFLOW-NWT (a Newton formulation for MODFLOW-2005), MF2005-FMP2 (the Farm Process for MODFLOW-2005), SEAWAT, and CFP (Conduit Flow Process for MODFLOW-2005). This report provides sample problems that demonstrate a range of applications of GWM–VI and the directory structure and input information required to use the parallel-processing capability.

  8. Repeated functional convergent effects of NaV1.7 on acid insensitivity in hibernating mammals

    PubMed Central

    Liu, Zhen; Wang, Wei; Zhang, Tong-Zuo; Li, Gong-Hua; He, Kai; Huang, Jing-Fei; Jiang, Xue-Long; Murphy, Robert W.; Shi, Peng

    2014-01-01

    Hibernating mammals need to be insensitive to acid in order to cope with conditions of high CO2; however, the molecular basis of acid tolerance remains largely unknown. The African naked mole-rat (Heterocephalus glaber) and hibernating mammals share similar environments and physiological features. In the naked mole-rat, acid insensitivity has been shown to be conferred by the functional motif of the sodium ion channel NaV1.7. There is now an opportunity to evaluate acid insensitivity in other taxa. In this study, we tested for functional convergence of NaV1.7 in 71 species of mammals, including 22 species that hibernate. Our analyses revealed a functional convergence of amino acid sequences, which occurred at least six times independently in mammals that hibernate. Evolutionary analyses determined that the convergence results from both parallel and divergent evolution of residues in the functional motif. Our findings not only identify the functional molecules responsible for acid insensitivity in hibernating mammals, but also open new avenues to elucidate the molecular underpinnings of acid insensitivity in mammals. PMID:24352952

  9. Repeated functional convergent effects of NaV1.7 on acid insensitivity in hibernating mammals.

    PubMed

    Liu, Zhen; Wang, Wei; Zhang, Tong-Zuo; Li, Gong-Hua; He, Kai; Huang, Jing-Fei; Jiang, Xue-Long; Murphy, Robert W; Shi, Peng

    2014-02-07

    Hibernating mammals need to be insensitive to acid in order to cope with conditions of high CO2; however, the molecular basis of acid tolerance remains largely unknown. The African naked mole-rat (Heterocephalus glaber) and hibernating mammals share similar environments and physiological features. In the naked mole-rat, acid insensitivity has been shown to be conferred by the functional motif of the sodium ion channel NaV1.7. There is now an opportunity to evaluate acid insensitivity in other taxa. In this study, we tested for functional convergence of NaV1.7 in 71 species of mammals, including 22 species that hibernate. Our analyses revealed a functional convergence of amino acid sequences, which occurred at least six times independently in mammals that hibernate. Evolutionary analyses determined that the convergence results from both parallel and divergent evolution of residues in the functional motif. Our findings not only identify the functional molecules responsible for acid insensitivity in hibernating mammals, but also open new avenues to elucidate the molecular underpinnings of acid insensitivity in mammals.

  10. AdiosStMan: Parallelizing Casacore Table Data System using Adaptive IO System

    NASA Astrophysics Data System (ADS)

    Wang, R.; Harris, C.; Wicenec, A.

    2016-07-01

    In this paper, we investigate the Casacore Table Data System (CTDS) used in the casacore and CASA libraries, and methods to parallelize it. CTDS provides a storage manager plugin mechanism for third-party developers to design and implement their own CTDS storage managers. With this in mind, we looked into various storage backend techniques that could enable parallel I/O for CTDS by implementing new storage managers. After carrying out benchmarks showing the excellent parallel I/O throughput of the Adaptive IO System (ADIOS), we implemented an ADIOS-based parallel CTDS storage manager. We then applied the CASA MSTransform frequency split task to verify the ADIOS storage manager. We also ran a series of performance tests to examine the I/O throughput in a massively parallel scenario.

  11. A multiarchitecture parallel-processing development environment

    NASA Technical Reports Server (NTRS)

    Townsend, Scott; Blech, Richard; Cole, Gary

    1993-01-01

    A description is given of the hardware and software of a multiprocessor test bed - the second generation Hypercluster system. The Hypercluster architecture consists of a standard hypercube distributed-memory topology, with multiprocessor shared-memory nodes. By using standard, off-the-shelf hardware, the system can be upgraded to use rapidly improving computer technology. The Hypercluster's multiarchitecture nature makes it suitable for researching parallel algorithms in computational field simulation applications (e.g., computational fluid dynamics). The dedicated test-bed environment of the Hypercluster and its custom-built software allows experiments with various parallel-processing concepts such as message passing algorithms, debugging tools, and computational 'steering'. Such research would be difficult, if not impossible, to achieve on shared, commercial systems.

  12. Development of computer tablet software for clinical quantification of lateral knee compartment translation during the pivot shift test.

    PubMed

    Muller, Bart; Hofbauer, Marcus; Rahnemai-Azar, Amir Ata; Wolf, Megan; Araki, Daisuke; Hoshino, Yuichi; Araujo, Paulo; Debski, Richard E; Irrgang, James J; Fu, Freddie H; Musahl, Volker

    2016-01-01

    The pivot shift test is a commonly used clinical examination by orthopedic surgeons to evaluate knee function following injury. However, the test can only be graded subjectively by the examiner. Therefore, the purpose of this study is to develop software for a computer tablet to quantify anterior translation of the lateral knee compartment during the pivot shift test. Based on the simple image analysis method, software for a computer tablet was developed with the following primary design constraint - the software should be easy to use in a clinical setting and it should not slow down an outpatient visit. Translation of the lateral compartment of the intact knee was 2.0 ± 0.2 mm and for the anterior cruciate ligament-deficient knee was 8.9 ± 0.9 mm (p < 0.001). Intra-tester (ICC range = 0.913 to 0.999) and inter-tester (ICC = 0.949) reliability were excellent for the repeatability assessments. Overall, the average percent error of measuring simulated translation of the lateral knee compartment with the tablet parallel to the monitor increased from 2.8% at 50 cm distance to 7.7% at 200 cm. Deviation from the parallel position of the tablet did not have a significant effect until a tablet angle of 45°. Average percent error during anterior translation of the lateral knee compartment of 6 mm was 2.2% compared to 6.2% for 2 mm of translation. The software provides reliable, objective, and quantitative data on translation of the lateral knee compartment during the pivot shift test and meets the design constraints posed by the clinical setting.

  13. BROCCOLI: Software for fast fMRI analysis on many-core CPUs and GPUs

    PubMed Central

    Eklund, Anders; Dufort, Paul; Villani, Mattias; LaConte, Stephen

    2014-01-01

    Analysis of functional magnetic resonance imaging (fMRI) data is becoming ever more computationally demanding as temporal and spatial resolutions improve, and large, publicly available data sets proliferate. Moreover, methodological improvements in the neuroimaging pipeline, such as non-linear spatial normalization, non-parametric permutation tests and Bayesian Markov Chain Monte Carlo approaches, can dramatically increase the computational burden. Despite these challenges, there do not yet exist any fMRI software packages which leverage inexpensive and powerful graphics processing units (GPUs) to perform these analyses. Here, we therefore present BROCCOLI, a free software package written in OpenCL (Open Computing Language) that can be used for parallel analysis of fMRI data on a large variety of hardware configurations. BROCCOLI has, for example, been tested with an Intel CPU, an Nvidia GPU, and an AMD GPU. These tests show that parallel processing of fMRI data can lead to significantly faster analysis pipelines. This speedup can be achieved on relatively standard hardware, but further, dramatic speed improvements require only a modest investment in GPU hardware. BROCCOLI (running on a GPU) can perform non-linear spatial normalization to a 1 mm³ brain template in 4–6 s, and run a second level permutation test with 10,000 permutations in about a minute. These non-parametric tests are generally more robust than their parametric counterparts, and can also enable more sophisticated analyses by estimating complicated null distributions. Additionally, BROCCOLI includes support for Bayesian first-level fMRI analysis using a Gibbs sampler. The new software is freely available under GNU GPL3 and can be downloaded from github (https://github.com/wanderine/BROCCOLI/). PMID:24672471
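    The second-level permutation test mentioned above is easy to illustrate in serial form. The following sketch conveys only the statistical idea, not BROCCOLI's OpenCL implementation; the function name and the difference-of-means test statistic are illustrative assumptions:

```python
import random

def permutation_test(group_a, group_b, n_perm=10000, seed=0):
    """One-sided permutation test for mean(group_a) > mean(group_b):
    shuffle the pooled values and count how often a random relabeling
    yields a mean difference at least as large as the observed one."""
    rng = random.Random(seed)
    observed = sum(group_a) / len(group_a) - sum(group_b) / len(group_b)
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = sum(pooled[:n_a]) / n_a - sum(pooled[n_a:]) / (len(pooled) - n_a)
        if diff >= observed:
            hits += 1
    return hits / n_perm  # empirical p-value
```

    Because each shuffle is independent of the others, the loop parallelizes trivially across GPU threads, which is what makes the one-minute figure for 10,000 permutations plausible.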

  14. Predicting helix orientation for coiled-coil dimers

    PubMed Central

    Apgar, James R.; Gutwin, Karl N.; Keating, Amy E.

    2008-01-01

    The alpha-helical coiled coil is a structurally simple protein oligomerization or interaction motif consisting of two or more alpha helices twisted into a supercoiled bundle. Coiled coils can differ in their stoichiometry, helix orientation and axial alignment. Because of the near degeneracy of many of these variants, coiled coils pose a challenge to fold recognition methods for structure prediction. Whereas distinctions between some protein folds can be discriminated on the basis of hydrophobic/polar patterning or secondary structure propensities, the sequence differences that encode important details of coiled-coil structure can be subtle. This is emblematic of a larger problem in the field of protein structure and interaction prediction: that of establishing specificity between closely similar structures. We tested the behavior of different computational models on the problem of recognizing the correct orientation - parallel vs. antiparallel - of pairs of alpha helices that can form a dimeric coiled coil. For each of 131 examples of known structure, we constructed a large number of both parallel and antiparallel structural models and used these to assess the ability of five energy functions to recognize the correct fold. We also developed and tested three sequence-based approaches that make use of varying degrees of implicit structural information. The best structural methods performed similarly to the best sequence methods, correctly categorizing ∼81% of dimers. Steric compatibility with the fold was important for some coiled coils we investigated. For many examples, the correct orientation was determined by smaller energy differences between parallel and antiparallel structures distributed over many residues and energy components. Prediction methods that used structure but incorporated varying approximations and assumptions showed quite different behaviors when used to investigate energetic contributions to orientation preference. 
Sequence-based methods were sensitive to the choice of residue-pair interactions scored. PMID:18506779

  15. Individual differences in control of language interference in late bilinguals are mainly related to general executive abilities

    PubMed Central

    2010-01-01

    Background Recent research based on comparisons between bilinguals and monolinguals postulates that bilingualism enhances cognitive control functions, because the parallel activation of languages necessitates control of interference. In a novel approach we investigated two groups of bilinguals, distinguished by their susceptibility to cross-language interference, asking whether bilinguals with strong language control abilities ("non-switchers") have an advantage in executive functions (inhibition of irrelevant information, problem solving, planning efficiency, generative fluency and self-monitoring) compared to those bilinguals showing weaker language control abilities ("switchers"). Methods 29 late bilinguals (21 women) were evaluated using various cognitive control neuropsychological tests [e.g., Tower of Hanoi, Ruff Figural Fluency Task, Divided Attention, Go/noGo] tapping executive functions as well as four subtests of the Wechsler Adult Intelligence Scale. The analysis involved t-tests (two independent samples). Non-switchers (n = 16) were distinguished from switchers (n = 13) by their performance observed in a bilingual picture-naming task. Results The non-switcher group demonstrated a better performance on the Tower of Hanoi and Ruff Figural Fluency task, faster reaction time in a Go/noGo and Divided Attention task, and produced significantly fewer errors in the Tower of Hanoi, Go/noGo, and Divided Attention tasks when compared to the switchers. Non-switchers performed significantly better on two verbal subtests of the Wechsler Adult Intelligence Scale (Information and Similarity), but not on the Performance subtests (Picture Completion, Block Design). Conclusions The present results suggest that bilinguals with stronger language control have indeed a cognitive advantage in the administered tests involving executive functions, in particular inhibition, self-monitoring, problem solving, and generative fluency, and in two of the intelligence tests. 
What remains unclear is the direction of the relationship between executive functions and language control abilities. PMID:20180956

  16. Early life stress induces attention-deficit hyperactivity disorder (ADHD)-like behavioral and brain metabolic dysfunctions: functional imaging of methylphenidate treatment in a novel rodent model.

    PubMed

    Bock, J; Breuer, S; Poeggel, G; Braun, K

    2017-03-01

    In a novel animal model, Octodon degus, we tested the hypothesis that, in addition to genetic predisposition, early life stress (ELS) contributes to the etiology of attention-deficit hyperactivity disorder-like behavioral symptoms and the associated brain functional deficits. Since previous neurochemical observations revealed that early life stress impairs dopaminergic functions, we predicted that these symptoms can be normalized by treatment with methylphenidate. In line with our hypothesis, the behavioral analysis revealed that repeated ELS induced locomotor hyperactivity and reduced attention towards an emotionally relevant acoustic stimulus. Functional imaging using (¹⁴C)-2-fluoro-deoxyglucose autoradiography revealed that the behavioral symptoms are paralleled by metabolic hypoactivity of prefrontal, mesolimbic and subcortical brain areas. Finally, the pharmacological intervention provided further evidence that the behavioral and metabolic dysfunctions are due to impaired dopaminergic neurotransmission. Elevating dopamine in ELS animals by methylphenidate normalized locomotor hyperactivity and attention deficit and ameliorated brain metabolic hypoactivity in a dose-dependent manner.

  17. Parallel computation of level set method for 500 Hz visual servo control

    NASA Astrophysics Data System (ADS)

    Fei, Xianfeng; Igarashi, Yasunobu; Hashimoto, Koichi

    2008-11-01

    We propose a 2D microorganism tracking system using a parallel level set method and a column parallel vision system (CPV). This system keeps a single microorganism in the middle of the visual field under a microscope by visual servoing of an automated stage. We propose a new energy function for the level set method. This function constrains the amount of light intensity inside the detected object contour, in order to control the number of detected objects. The algorithm is implemented in the CPV system, and the computation time for each frame is approximately 2 ms. A tracking experiment of about 25 s is demonstrated. We also demonstrate that a single paramecium can be tracked continuously even if other paramecia appear in the visual field and contact the tracked paramecium.
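    The intensity constraint described above can be illustrated with a minimal serial sketch. The function names and the linear-penalty form are assumptions for illustration, not the authors' CPV implementation:

```python
# Sketch of an energy term that penalizes the total image intensity
# enclosed by the level-set zero contour (phi < 0 means "inside"),
# discouraging the contour from swallowing additional bright objects.

def inside_intensity(image, phi):
    """Sum of pixel intensities where the level-set function is negative."""
    total = 0.0
    for row_img, row_phi in zip(image, phi):
        for value, p in zip(row_img, row_phi):
            if p < 0:
                total += value
    return total

def intensity_energy(image, phi, budget, weight=1.0):
    """Hypothetical penalty: zero while the enclosed intensity stays under
    `budget`, growing linearly past it. Minimizing this while evolving phi
    limits how many objects the contour can enclose."""
    excess = inside_intensity(image, phi) - budget
    return weight * max(0.0, excess)
```

    In a full level-set tracker this term would be added to the usual contour-evolution energy; here it only shows why the enclosed-intensity budget keeps the detection to a single organism.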

  18. Mechanism to support generic collective communication across a variety of programming models

    DOEpatents

    Almasi, Gheorghe [Ardsley, NY]; Dozsa, Gabor [Ardsley, NY]; Kumar, Sameer [White Plains, NY]

    2011-07-19

    A system and method for supporting collective communications on a plurality of processors that use different parallel programming paradigms, in one aspect, may comprise a schedule defining one or more tasks in a collective operation, an executor that executes the task, a multisend module to perform one or more data transfer functions associated with the tasks, and a connection manager that controls one or more connections and identifies an available connection. The multisend module uses the available connection in performing the one or more data transfer functions. A plurality of processors that use different parallel programming paradigms can use a common implementation of the schedule module, the executor module, the connection manager and the multisend module via a language adaptor specific to a parallel programming paradigm implemented on a processor.
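    The component decomposition named in the abstract (schedule, executor, multisend, connection manager) can be sketched as follows. The wiring and method names are illustrative assumptions, not the patented implementation:

```python
class Schedule:
    """Defines the ordered tasks of one collective operation."""
    def __init__(self, tasks):
        self.tasks = list(tasks)

class ConnectionManager:
    """Controls connections and identifies an available one."""
    def __init__(self, connections):
        self.free = list(connections)
    def acquire(self):
        return self.free.pop()
    def release(self, conn):
        self.free.append(conn)

class Multisend:
    """Performs the data-transfer functions associated with the tasks,
    using whatever connection the manager identifies as available."""
    def __init__(self, manager):
        self.manager = manager
        self.log = []                      # stand-in for actual network sends
    def transfer(self, payload):
        conn = self.manager.acquire()
        try:
            self.log.append((conn, payload))
        finally:
            self.manager.release(conn)

class Executor:
    """Walks the schedule and delegates each task to the multisend module."""
    def __init__(self, schedule, multisend):
        self.schedule = schedule
        self.multisend = multisend
    def run(self):
        for task in self.schedule.tasks:
            self.multisend.transfer(task)
```

    A language adaptor for a given programming paradigm would then construct these shared components, which is how the common implementation is reused across paradigms.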

  19. Investigating a method of producing "red and dead" galaxies

    NASA Astrophysics Data System (ADS)

    Skory, Stephen

    2010-08-01

    In optical wavelengths, galaxies are observed to be either red or blue. The overall color of a galaxy is due to the distribution of the ages of its stellar population. Galaxies with currently active star formation appear blue, while those with no recent star formation at all (greater than about a Gyr) have only old, red stars. This strong bimodality has led to the idea of star formation quenching, and various proposed physical mechanisms. In this dissertation, I attempt to reproduce with Enzo the results of Naab et al. (2007), in which red and dead galaxies are formed using gravitational quenching, rather than with one of the more typical methods of quenching. My initial attempts are unsuccessful, and I explore the reasons why I think they failed. Then using simpler methods better suited to Enzo + AMR, I am successful in producing a galaxy that appears to be similar in color and formation history to those in Naab et al. However, quenching is achieved using unphysically high star formation efficiencies, which is a different mechanism than Naab et al. suggests. Preliminary results of a much higher resolution, follow-on simulation of the above show some possible contradiction with the results of Naab et al. Cold gas is streaming into the galaxy to fuel starbursts, while at a similar epoch the galaxies in Naab et al. have largely already ceased forming stars in the galaxy. On the other hand, the results of the high resolution simulation are qualitatively similar to other works in the literature that show a somewhat different gravitational quenching mechanism than Naab et al. I also discuss my work using halo finders to analyze simulated cosmological data, and my work improving the Enzo/AMR analysis tool "yt". This includes two parallelizations of the halo finder HOP (Eisenstein and Hut, 1998) which allows analysis of very large cosmological datasets on parallel machines. 
The first version is "yt-HOP," which works well for datasets between about 256³ and 512³ particles, but has memory bottlenecks as the datasets get larger. These bottlenecks inspired the second version, "Parallel HOP," which is a fully parallelized method and implementation of HOP that has worked on datasets with more than 2048³ particles on hundreds of processing cores. Both methods are described in detail, as are the various effects of performance-related runtime options. Additionally, both halo finders are subjected to a full suite of performance benchmarks varying both dataset sizes and computational resources used. I conclude with descriptions of four new tools I added to yt. A Parallel Structure Function Generator allows analysis of two-point functions, such as correlation functions, using memory- and workload-parallelism. A Parallel Merger Tree Generator leverages the parallel halo finders in yt, such as Parallel HOP, to build the merger tree of halos in a cosmological simulation, and outputs the result to a SQLite database for simple and powerful data extraction. A Star Particle Analysis toolkit takes a group of star particles and can output the rate of formation as a function of time, and/or a synthetic Spectral Energy Distribution (S.E.D.) using the Bruzual and Charlot (2003) data tables. Finally, a Halo Mass Function toolkit takes as input a list of halo masses and can output the halo mass function for the halos, as well as an analytical fit for those halos using several previously published fits.
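    The two-point function machinery mentioned above rests on a simple primitive: histogramming pair separations of particle positions. A minimal serial sketch (yt's generator is parallel; the function name here is an illustrative assumption) of that building block:

```python
import itertools
import math

def pair_separation_histogram(points, bin_edges):
    """Count particle pairs whose 3D separation falls in each radial bin;
    normalizing such counts against a random catalog yields a correlation
    function estimate."""
    counts = [0] * (len(bin_edges) - 1)
    for (x1, y1, z1), (x2, y2, z2) in itertools.combinations(points, 2):
        r = math.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2 + (z1 - z2) ** 2)
        for i in range(len(bin_edges) - 1):
            if bin_edges[i] <= r < bin_edges[i + 1]:
                counts[i] += 1
                break
    return counts
```

    The all-pairs loop is O(N²), which is why the memory- and workload-parallelism described above matters for cosmological particle counts.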

  20. Endpoint-based parallel data processing with non-blocking collective instructions in a parallel active messaging interface of a parallel computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archer, Charles J; Blocksome, Michael A; Cernohous, Bob R

    Endpoint-based parallel data processing with non-blocking collective instructions in a PAMI of a parallel computer is disclosed. The PAMI is composed of data communications endpoints, each including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task. The compute nodes are coupled for data communications through the PAMI. The parallel application establishes a data communications geometry specifying a set of endpoints that are used in collective operations of the PAMI by associating with the geometry a list of collective algorithms valid for use with the endpoints of the geometry; registering in each endpoint in the geometry a dispatch callback function for a collective operation; and executing without blocking, through a single one of the endpoints in the geometry, an instruction for the collective operation.

  1. Parallel Implementation of the Discontinuous Galerkin Method

    NASA Technical Reports Server (NTRS)

    Baggag, Abdalkader; Atkins, Harold; Keyes, David

    1999-01-01

    This paper describes a parallel implementation of the discontinuous Galerkin method. Discontinuous Galerkin is a spatially compact method that retains its accuracy and robustness on non-smooth unstructured grids and is well suited for time dependent simulations. Several parallelization approaches are studied and evaluated. The most natural and symmetric of the approaches has been implemented in an object-oriented code used to simulate aeroacoustic scattering. The parallel implementation is MPI-based and has been tested on various parallel platforms such as the SGI Origin, IBM SP2, and clusters of SGI and Sun workstations. The scalability results presented for the SGI Origin show slightly superlinear speedup on a fixed-size problem due to cache effects.

  2. The Research and Implementation of Vehicle Bluetooth Hands-free Devices Key Parameters Downloading Algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao-bo; Wang, Zhi-xue; Li, Jian-xin; Ma, Jian-hui; Li, Yang; Li, Yan-qiang

    To simplify implementation of the Bluetooth function and allow information to be tracked effectively during production, vehicle Bluetooth hands-free devices must be loaded with key parameters such as the Bluetooth address, CVC license, and base plate number. The aim is therefore a simple and effective method for downloading these parameters to each vehicle Bluetooth hands-free device, and for controlling and recording the use of the parameters. In this paper, a Bluetooth Serial Peripheral Interface programmer device is used to switch the parallel port to SPI. The first step in downloading parameters is to emulate SPI with the parallel port, operating the port in accordance with SPI timing. The next step is to implement SPI data transmission and reception according to the programmed parameter options. With this new method, downloading parameters is fast and accurate, fully meets the production requirements of vehicle Bluetooth hands-free devices, and has played a large role on the production line.
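
    The bit-banging idea — driving SPI timing by toggling parallel-port pins in software — can be sketched as follows. This is an illustrative sketch only: the pin-access callables (`set_clk`, `set_mosi`, `read_miso`) are hypothetical stand-ins for the actual parallel-port register writes, not the authors' code.

```python
def spi_send_byte(byte, set_clk, set_mosi, read_miso):
    """Bit-bang one byte over SPI (mode 0), MSB first: place a data
    bit on MOSI, raise the clock so the slave latches it, sample MISO
    on the same rising edge, then drop the clock for the next bit."""
    received = 0
    for i in range(7, -1, -1):
        set_mosi((byte >> i) & 1)   # data bit onto MOSI
        set_clk(1)                  # rising edge: slave samples MOSI
        received = (received << 1) | read_miso()
        set_clk(0)                  # falling edge: prepare next bit
    return received

# Loopback harness standing in for the parallel-port pins (hypothetical):
state = {"mosi": 0}
set_clk = lambda level: None
set_mosi = lambda bit: state.update(mosi=bit)
read_miso = lambda: state["mosi"]
print(hex(spi_send_byte(0xA5, set_clk, set_mosi, read_miso)))  # 0xa5
```

    A real device would replace the three callables with writes and reads of the parallel port's data and status registers.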

  3. Using Coarrays to Parallelize Legacy Fortran Applications: Strategy and Case Study

    DOE PAGES

    Radhakrishnan, Hari; Rouson, Damian W. I.; Morris, Karla; ...

    2015-01-01

    This paper summarizes a strategy for parallelizing a legacy Fortran 77 program using the object-oriented (OO) and coarray features that entered Fortran in the 2003 and 2008 standards, respectively. OO programming (OOP) facilitates the construction of an extensible suite of model-verification and performance tests that drive the development. Coarray parallel programming facilitates a rapid evolution from a serial application to a parallel application capable of running on multicore processors and many-core accelerators in shared and distributed memory. We delineate 17 code modernization steps used to refactor and parallelize the program and study the resulting performance. Our initial studies were done using the Intel Fortran compiler on a 32-core shared memory server. Scaling behavior was very poor, and profile analysis using TAU showed that the bottleneck in the performance was due to our implementation of a collective, sequential summation procedure. We were able to improve the scalability and achieve nearly linear speedup by replacing the sequential summation with a parallel, binary tree algorithm. We also tested the Cray compiler, which provides its own collective summation procedure. Intel provides no collective reductions. With Cray, the program shows linear speedup even in distributed-memory execution. We anticipate similar results with other compilers once they support the new collective procedures proposed for Fortran 2015.
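
    The binary-tree summation that replaced the sequential reduction can be sketched serially. This Python illustration shows only the O(log n) level structure of the tree, not the authors' coarray implementation:

```python
def tree_reduce(values):
    """Pairwise (binary-tree) summation: combine operands in O(log n)
    levels instead of one long sequential chain. Each level halves the
    number of partial sums, which is what lets the additions within a
    level run concurrently on separate images or processors."""
    vals = list(values)
    while len(vals) > 1:
        paired = [vals[i] + vals[i + 1] for i in range(0, len(vals) - 1, 2)]
        if len(vals) % 2:            # odd element rides along to the next level
            paired.append(vals[-1])
        vals = paired
    return vals[0]

print(tree_reduce(range(1, 101)))    # 5050, same result as sum(), fewer dependent steps
```

    A sequential summation has n - 1 dependent additions; the tree has only about log2(n) dependent levels, which is why the replacement restored near-linear scaling.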

  4. Parallel Fixed Point Implementation of a Radial Basis Function Network in an FPGA

    PubMed Central

    de Souza, Alisson C. D.; Fernandes, Marcelo A. C.

    2014-01-01

    This paper proposes a parallel fixed point radial basis function (RBF) artificial neural network (ANN), implemented in a field programmable gate array (FPGA) trained online with a least mean square (LMS) algorithm. The processing time and occupied area were analyzed for various fixed point formats. The problems of precision of the ANN response for nonlinear classification using the XOR gate and interpolation using the sine function were also analyzed in a hardware implementation. The entire project was developed using the System Generator platform (Xilinx), with a Virtex-6 xc6vcx240t-1ff1156 as the target FPGA. PMID:25268918

  5. Radiative transfer in spherical shell atmospheres. II - Asymmetric phase functions

    NASA Technical Reports Server (NTRS)

    Kattawar, G. W.; Adams, C. N.

    1978-01-01

    This paper investigates the effects of sphericity on the radiation reflected from a planet with a homogeneous conservative-scattering atmosphere of optical thicknesses of 0.25 and 1.0. A Henyey-Greenstein phase function with asymmetry factors of 0.5 and 0.7 was considered. Significant differences were found when these results were compared with the plane-parallel calculations. Also, large violations of the reciprocity theorem, which is only true for plane-parallel calculations, were noted. Results are presented for the radiance versus height distributions as a function of planetary phase angle. These results will be useful to researchers in the field of remote sensing and planetary spectroscopy.

  6. A Parallel Independent Component Analysis Approach to Investigate Genomic Influence on Brain Function

    PubMed Central

    Liu, Jingyu; Demirci, Oguz; Calhoun, Vince D.

    2009-01-01

    Relationships between genomic data and functional brain images are of great interest but require new analysis approaches to integrate the high-dimensional data types. This letter presents an extension of a technique called parallel independent component analysis (paraICA), which enables the joint analysis of multiple modalities including interconnections between them. We extend our earlier work by allowing for multiple interconnections and by providing important overfitting controls. Performance was assessed by simulations under different conditions, and indicated reliable results can be extracted by properly balancing overfitting and underfitting. An application to functional magnetic resonance images and single nucleotide polymorphism array produced interesting findings. PMID:19834575

  7. A Parallel Independent Component Analysis Approach to Investigate Genomic Influence on Brain Function.

    PubMed

    Liu, Jingyu; Demirci, Oguz; Calhoun, Vince D

    2008-01-01

    Relationships between genomic data and functional brain images are of great interest but require new analysis approaches to integrate the high-dimensional data types. This letter presents an extension of a technique called parallel independent component analysis (paraICA), which enables the joint analysis of multiple modalities including interconnections between them. We extend our earlier work by allowing for multiple interconnections and by providing important overfitting controls. Performance was assessed by simulations under different conditions, and indicated reliable results can be extracted by properly balancing overfitting and underfitting. An application to functional magnetic resonance images and single nucleotide polymorphism array produced interesting findings.

  8. Asteroids as Calibration Standards in the Thermal Infrared -- Applications and Results from ISO

    NASA Astrophysics Data System (ADS)

    Müller, T. G.; Lagerros, J. S. V.

    Asteroids have been used extensively as calibration sources for ISO. We summarise the asteroid observational parameters in the thermal infrared and explain the important modelling aspects. Ten selected asteroids were extensively used for the absolute photometric calibration of ISOPHOT in the far-IR. Additionally, the point-like and bright asteroids turned out to be of great interest for many technical tests and calibration aspects. They have been used for testing the calibration for SWS and LWS, the validation of relative spectral response functions of different bands, for colour correction and filter leak tests. Currently, there is a strong emphasis on ISO cross-calibration, where the asteroids contribute in many fields. Well known asteroids have also been seen serendipitously in the CAM Parallel Mode and the PHT Serendipity Mode, allowing for validation and improvement of the photometric calibration of these special observing modes.

  9. Efficient Predictions of Excited State for Nanomaterials Using Aces 3 and 4

    DTIC Science & Technology

    2017-12-20

    Excited states of nanomaterials are computed by first-principles methods in the software package ACES using large parallel computers, scaling toward the exascale. Subject terms: computer modeling, excited states, optical properties, structure, stability, activation barriers, first-principles methods, parallel computing. Progress with new density functional methods is reported.

  10. Multiprocessor speed-up, Amdahl's Law, and the Activity Set Model of parallel program behavior

    NASA Technical Reports Server (NTRS)

    Gelenbe, Erol

    1988-01-01

    An important issue in the effective use of parallel processing is the estimation of the speed-up one may expect as a function of the number of processors used. Amdahl's Law has traditionally provided a guideline to this issue, although it appears excessively pessimistic in the light of recent experimental results. In this note, Amdahl's Law is amended by giving a greater importance to the capacity of a program to make effective use of parallel processing, but also recognizing the fact that imbalance of the workload of each processor is bound to occur. An activity set model of parallel program behavior is then introduced along with the corresponding parallelism index of a program, leading to upper and lower bounds to the speed-up.
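
    The classical bound, and the effect of the workload imbalance the note emphasizes, can be illustrated with a minimal sketch. The `imbalanced_speedup` form below is an assumption for illustration only, not Gelenbe's activity-set model:

```python
def amdahl_speedup(n_procs, parallel_fraction):
    """Classical Amdahl bound: speedup on n_procs processors for a
    program whose parallelizable fraction is parallel_fraction."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_procs)

def imbalanced_speedup(n_procs, parallel_fraction, imbalance=0.0):
    """Illustrative penalty: the busiest processor carries a share
    (1 + imbalance) / n_procs of the parallel work, so any imbalance
    lowers the achievable speedup further."""
    serial = 1.0 - parallel_fraction
    worst_share = parallel_fraction * (1.0 + imbalance) / n_procs
    return 1.0 / (serial + worst_share)

# Even 95% parallel work caps far below the processor count:
print(round(amdahl_speedup(32, 0.95), 2))           # 12.55
print(round(imbalanced_speedup(32, 0.95, 0.2), 2))  # 11.68
```

    As n_procs grows, the classical bound approaches 1 / (1 - p) — here 20 — regardless of processor count, which is the pessimism the note seeks to amend.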

  11. Programmable logic controller performance enhancement by field programmable gate array based design.

    PubMed

    Patel, Dhruv; Bhatt, Jignesh; Trivedi, Sanjay

    2015-01-01

    The PLC, the core element of modern automation systems, exhibits limitations such as slow speed and poor scan time because of its serial execution. An improved PLC design using an FPGA, based on a parallel execution mechanism, has been proposed to enhance performance and flexibility. ModelSim served as the simulation platform, and VHDL was used to translate, integrate and implement the logic circuit in the FPGA. Xilinx's Spartan kit was used for implementation testing, and VB for GUI development. Salient merits of the design include cost-effectiveness, miniaturization, user-friendliness and simplicity, along with lower power consumption, smaller scan time and higher speed. Various functionalities and applications, such as a typical PLC and an industrial alarm annunciator, have been developed and successfully tested. Results of simulation, design and implementation are reported. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pahl, R. J.; Trott, W. M.; Snedigar, S.

    A series of gas gun tests has been performed to examine contributions to energy release from micron-sized and nanometric aluminum powder added to sieved (212-300 μm) HMX. In the absence of added metal, 4-mm-thick, low-density (64-68% of theoretical maximum density) pressings of the sieved HMX respond to modest shock loading by developing distinctive reactive waves that exhibit both temporal and mesoscale spatial fluctuations. Parallel tests have been performed on samples containing 10% (by mass) aluminum in two particle sizes: 2-μm and 123-nm mean particle diameter, respectively. The finely dispersed aluminum initially suppresses wave growth from HMX reactions; however, after a visible induction period, the added metal drives rapid increases in the transmitted wave particle velocity. Wave profile variations as a function of the aluminum particle diameter are discussed.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castaneda, Jaime N.; Pahl, Robert J.; Snedigar, Shane

    A series of gas gun tests has been performed to examine contributions to energy release from micron-sized and nanometric aluminum powder added to sieved (212-300 μm) HMX. In the absence of added metal, 4-mm-thick, low-density (64-68% of theoretical maximum density) pressings of the sieved HMX respond to modest shock loading by developing distinctive reactive waves that exhibit both temporal and mesoscale spatial fluctuations. Parallel tests have been performed on samples containing 10% (by mass) aluminum in two particle sizes: 2-μm and 123-nm mean particle diameter, respectively. The finely dispersed aluminum initially suppresses wave growth from HMX reactions; however, after a visible induction period, the added metal drives rapid increases in the transmitted wave particle velocity. Wave profile variations as a function of the aluminum particle diameter are discussed.

  14. Multiplexed chemostat system for quantification of biodiversity and ecosystem functioning in anaerobic digestion

    PubMed Central

    Plouchart, Diane; Guizard, Guillaume; Latrille, Eric

    2018-01-01

    Continuous cultures in chemostats have proven their value in microbiology, microbial ecology, systems biology and bioprocess engineering, among others. In these systems, microbial growth and ecosystem performance can be quantified under stable and defined environmental conditions. This is essential when linking microbial diversity to ecosystem function. Here, a new system to test this link in anaerobic, methanogenic microbial communities is introduced. Rigorously replicated experiments or a suitable experimental design typically require operating several chemostats in parallel. However, this is labor intensive, especially when measuring biogas production. Commercial solutions for multiplying reactors performing continuous anaerobic digestion exist but are expensive and use comparably large reactor volumes, requiring the preparation of substantial amounts of media. Here, a flexible Lab-scale Automated and Multiplexed Anaerobic Chemostat (LAMACs) system with a working volume of 200 mL is introduced. Sterile feeding, biomass wasting and pressure monitoring are automated. One module containing six reactors fits the typical dimensions of a lab bench. Thanks to automation, the time required for reactor operation and maintenance is reduced compared to traditional lab-scale systems. Several modules can be used together; so far, the parallel operation of 30 reactors has been demonstrated. The chemostats are autoclavable. Parameters like reactor volume, flow rates and operating temperature can be freely set. The robustness of the system was tested in a two-month-long experiment in which three inocula in four replicates, i.e., twelve continuous digesters, were monitored. Statistically significant differences in the biogas production between inocula were observed. In anaerobic digestion, biogas production, and consequently pressure development in a closed environment, is a proxy for ecosystem performance. The precision of the pressure measurement is thus crucial. 
The measured maximum and minimum rates of gas production could be determined at the same precision. The LAMACs is a tool that enables us to put into practice the often-demanded replication and rigorous testing in microbial ecology as well as bioprocess engineering. PMID:29518106

  15. Whole body vibration for older persons: an open randomized, multicentre, parallel, clinical trial

    PubMed Central

    2011-01-01

    Background Institutionalized older persons have a poor functional capacity. Including physical exercise in their routine activities decreases their frailty and improves their quality of life. Whole-body vibration (WBV) training is a type of exercise that seems beneficial in frail older persons to improve their functional mobility, but the evidence is inconclusive. This trial will compare the results of exercise with WBV and exercise without WBV in improving body balance, muscle performance and fall prevention in institutionalized older persons. Methods/Design An open, multicentre and parallel randomized clinical trial with blinded assessment. 160 nursing home residents aged over 65 years and of both sexes will be identified to participate in the study. Participants will be centrally randomised and allocated to interventions (vibration or exercise group) by telephone. The vibration group will perform static/dynamic exercises (balance and resistance training) on a vibratory platform (frequency: 30-35 Hz; amplitude: 2-4 mm) over a six-week training period (3 sessions/week). The exercise group will perform the same exercise protocol but without the vibration stimulus. The primary outcome measure is static/dynamic body balance. Secondary outcomes are muscle strength and number of new falls. Follow-up measurements will be collected at 6 weeks and at 6 months after randomization. Efficacy will be analysed on an intention-to-treat (ITT) basis and 'per protocol'. The effects of the intervention will be evaluated using the t-test, Mann-Whitney test, or Chi-square test, depending on the type of outcome. The final analysis will be performed 6 weeks and 6 months after randomization. Discussion This study will help to clarify whether WBV training improves body balance, gait mobility and muscle strength in frail older persons living in nursing homes. As far as we know, this will be the first study to evaluate the efficacy of WBV for the prevention of falls. 
Trial Registration ClinicalTrials.gov: NCT01375790 PMID:22192313

  16. R package MVR for Joint Adaptive Mean-Variance Regularization and Variance Stabilization

    PubMed Central

    Dazard, Jean-Eudes; Xu, Hua; Rao, J. Sunil

    2015-01-01

    We present an implementation in the R language for statistical computing of our recent non-parametric joint adaptive mean-variance regularization and variance stabilization procedure. The method is specifically suited for handling difficult problems posed by high-dimensional multivariate datasets (p ≫ n paradigm), such as in ‘omics’-type data, among which are that the variance is often a function of the mean, variable-specific estimators of variances are not reliable, and test statistics have low power due to a lack of degrees of freedom. The implementation offers a complete set of features including: (i) normalization and/or variance stabilization function, (ii) computation of mean-variance-regularized t and F statistics, (iii) generation of diverse diagnostic plots, (iv) synthetic and real ‘omics’ test datasets, (v) computationally efficient implementation, using C interfacing, and an option for parallel computing, (vi) a manual and documentation on how to set up a cluster. To make each feature as user-friendly as possible, only one subroutine per functionality is to be handled by the end-user. It is available as an R package, called MVR (‘Mean-Variance Regularization’), downloadable from the CRAN. PMID:26819572

  17. Evolving binary classifiers through parallel computation of multiple fitness cases.

    PubMed

    Cagnoni, Stefano; Bergenti, Federico; Mordonini, Monica; Adorni, Giovanni

    2005-06-01

    This paper describes two versions of a novel approach to developing binary classifiers, based on two evolutionary computation paradigms: cellular programming and genetic programming. Such an approach achieves high computation efficiency both during evolution and at runtime. Evolution speed is optimized by allowing multiple solutions to be computed in parallel. Runtime performance is optimized explicitly using parallel computation in the case of cellular programming or implicitly taking advantage of the intrinsic parallelism of bitwise operators on standard sequential architectures in the case of genetic programming. The approach was tested on a digit recognition problem and compared with a reference classifier.
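
    The "intrinsic parallelism of bitwise operators" that the genetic programming version exploits can be illustrated by packing one fitness case per bit, so that a single word-wide operation evaluates many cases at once. The XOR target and the packing helper below are illustrative stand-ins, not the paper's evolved classifiers:

```python
# Pack one fitness case per bit so a single word-wide bitwise
# operation evaluates all cases simultaneously.
def pack(bits):
    """Pack a list of 0/1 values, one fitness case per bit."""
    word = 0
    for i, b in enumerate(bits):
        word |= b << i
    return word

# Eight fitness cases for inputs (x, y); the target concept is XOR.
xs = [0, 0, 1, 1, 0, 1, 0, 1]
ys = [0, 1, 0, 1, 1, 0, 0, 1]
target = [a ^ b for a, b in zip(xs, ys)]

X, Y, T = pack(xs), pack(ys), pack(target)
mask = (1 << len(xs)) - 1

# Candidate classifier (x AND NOT y) OR (NOT x AND y): a handful of
# bitwise operations scores all eight cases at once.
pred = ((X & ~Y) | (~X & Y)) & mask
errors = bin((pred ^ T) & mask).count("1")
print(len(xs) - errors)   # 8: every fitness case classified correctly
```

    On a 64-bit machine word this evaluates up to 64 fitness cases per instruction, which is the implicit runtime parallelism the abstract describes for sequential architectures.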

  18. Formal methods for test case generation

    NASA Technical Reports Server (NTRS)

    Rushby, John (Inventor); De Moura, Leonardo Mendonga (Inventor); Hamon, Gregoire (Inventor)

    2011-01-01

    The invention relates to the use of model checkers to generate efficient test sets for hardware and software systems. The method provides for extending existing tests to reach new coverage targets; searching *to* some or all of the uncovered targets in parallel; searching in parallel *from* some or all of the states reached in previous tests; and slicing the model relative to the current set of coverage targets. The invention provides efficient test case generation and test set formation. Deep regions of the state space can be reached within allotted time and memory. The approach has been applied to use of the model checkers of SRI's SAL system and to model-based designs developed in Stateflow. Stateflow models achieving complete state and transition coverage in a single test case are reported.

  19. Parallel-In-Time For Moving Meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Falgout, R. D.; Manteuffel, T. A.; Southworth, B.

    2016-02-04

    With steadily growing computational resources available, scientists must develop effective ways to utilize the increased resources. High performance, highly parallel software has become a standard. However, until recent years parallelism has focused primarily on the spatial domain. When solving a space-time partial differential equation (PDE), this leads to a sequential bottleneck in the temporal dimension, particularly when taking a large number of time steps. The XBraid parallel-in-time library was developed as a practical way to add temporal parallelism to existing sequential codes with only minor modifications. In this work, a rezoning-type moving mesh is applied to a diffusion problem and formulated in a parallel-in-time framework. Tests and scaling studies are run using XBraid and demonstrate excellent results for the simple model problem considered herein.

  20. Effects of Crossed Brassiere Straps on Pain, Range of Motion, and Electromyographic Activity of Scapular Upward Rotators in Women With Scapular Downward Rotation Syndrome.

    PubMed

    Kang, Min-Hyeok; Choi, Ji-Young; Oh, Jae-seop

    2015-12-01

    Scapular downward rotation syndrome manifests as an abnormally downward-rotated scapula at rest or with arm motion and typically results in neck and shoulder pain. The brassiere strap has been suggested as a possible contributing factor to scapular downward rotation and pain in the upper trapezius because of increased downward rotational force on the lateral aspect of the scapula. No study, however, has examined the influence of a modified brassiere strap on pain in and the function of the scapular muscles. To examine the effects of crossed brassiere straps on the pressure pain threshold (PPT) of the upper trapezius, neck rotation range of motion (ROM), and electromyographic activity of the scapular upward rotators in females with scapular downward rotation syndrome. Cross-over design. Laboratory. In total, 15 female subjects with scapular downward rotation syndrome were recruited at hospitals and a local university. All participants performed neck rotation and humeral elevation under 2 different conditions: parallel and crossed brassiere straps. The PPT of the upper trapezius was measured using an analog algometer, whereas neck rotation ROM was quantified with a 3-dimensional ultrasonic motion analysis system. The electromyographic activities of the upper trapezius, serratus anterior, and lower trapezius during humeral elevation were assessed with a surface electromyography system. Outcome measures were assessed under parallel and crossed brassiere strap conditions, and differences in outcomes between the conditions were analyzed using a paired t-test. The PPT and neck rotation ROM were increased when the subject was wearing the brassiere with crossed versus parallel straps (P < .001). During humeral elevation, electromyographic activity of the serratus anterior and lower trapezius was greater, and that of the upper trapezius lesser, under the crossed strap condition than under the parallel strap condition (P < .05). 
These findings provide useful information for clinicians when designing management programs to decrease pain and improve biomechanical function for females with scapular downward rotation syndrome. Copyright © 2015 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.

  1. Parallel image compression

    NASA Technical Reports Server (NTRS)

    Reif, John H.

    1987-01-01

    A parallel compression algorithm for the 16,384-processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.
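
    The vector-quantization component can be sketched with a minimal nearest-codeword search. The two-pixel codebook here is illustrative only; the MPP implementation distributed this search across the 16,384 processors:

```python
# Minimal vector quantization: each image block is replaced by the
# index of its nearest codebook vector (the lossy step); the decoder
# later reconstructs blocks from the codebook alone.
def quantize(blocks, codebook):
    indices = []
    for block in blocks:
        # squared Euclidean distance to every codeword
        dists = [sum((b - c) ** 2 for b, c in zip(block, vec))
                 for vec in codebook]
        indices.append(dists.index(min(dists)))
    return indices

codebook = [(0, 0), (10, 10), (20, 20)]        # illustrative 2-pixel codewords
blocks = [(1, 2), (9, 11), (19, 18), (0, 1)]
print(quantize(blocks, codebook))              # [0, 1, 2, 0]
```

    Compression comes from transmitting only the small indices; the dynamic, on-line variant additionally updates the codebook as the data stream is observed.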

  2. A cost-effective methodology for the design of massively-parallel VLSI functional units

    NASA Technical Reports Server (NTRS)

    Venkateswaran, N.; Sriram, G.; Desouza, J.

    1993-01-01

    In this paper we propose a generalized methodology for the design of cost-effective massively-parallel VLSI functional units. This methodology is based on a technique of generating and reducing a massive bit-array on the mask-programmable PAcube VLSI array. This methodology unifies (maintains identical data flow and control) the execution of complex arithmetic functions on PAcube arrays. It is highly regular, expandable and uniform with respect to problem size and wordlength, thereby reducing the communication complexity. The memory-functional unit interface is regular and expandable. Using this technique, functional units of dedicated processors can be mask-programmed on the naked PAcube arrays, reducing the turn-around time. The production cost of such dedicated processors can be drastically reduced since the naked PAcube arrays can be mass-produced. Analysis of the performance of functional units designed by our method yields promising results.

  3. Use Computer-Aided Tools to Parallelize Large CFD Applications

    NASA Technical Reports Server (NTRS)

    Jin, H.; Frumkin, M.; Yan, J.

    2000-01-01

    Porting applications to high performance parallel computers is always a challenging task. It is time consuming and costly. With rapid progress in hardware architectures and the increasing complexity of real applications in recent years, the problem becomes even more severe. Today, scalability and high performance mostly involve handwritten parallel programs using message-passing libraries (e.g., MPI). However, this process is very difficult and often error-prone. The recent reemergence of shared memory parallel (SMP) architectures, such as the cache coherent Non-Uniform Memory Access (ccNUMA) architecture used in the SGI Origin 2000, shows good prospects for scaling beyond hundreds of processors. Programming on an SMP is simplified by working in a globally accessible address space. The user can supply compiler directives, such as OpenMP, to parallelize the code. As an industry standard for portable implementation of parallel programs for SMPs, OpenMP is a set of compiler directives and callable runtime library routines that extend Fortran, C and C++ to express shared memory parallelism. It promises an incremental path for parallel conversion of existing software, as well as scalability and performance for a complete rewrite or an entirely new development. Perhaps the main disadvantage of programming with directives is that inserted directives may not necessarily enhance performance. In the worst cases, they can create erroneous results. While vendors have provided tools to perform error-checking and profiling, automation in directive insertion is very limited and often fails on large programs, primarily due to the lack of a thorough enough data dependence analysis. To overcome this deficiency, we have developed a toolkit, CAPO, to automatically insert OpenMP directives in Fortran programs and apply certain degrees of optimization. 
CAPO is aimed at taking advantage of the detailed inter-procedural dependence analysis provided by CAPTools, developed by the University of Greenwich, to reduce potential errors made by users. Earlier tests on the NAS Benchmarks and ARC3D demonstrated good success with this tool. In this study, we have applied CAPO to parallelize three large applications in the area of computational fluid dynamics (CFD): OVERFLOW, TLNS3D and INS3D. These codes are widely used for solving Navier-Stokes equations with complicated boundary conditions and turbulence models in multiple zones. Each comprises from 50K to 100K lines of Fortran 77. As an example, CAPO took 77 hours to complete the data dependence analysis of OVERFLOW on a workstation (SGI, 175 MHz, R10K processor). A fair amount of effort was spent on correcting false dependences due to a lack of necessary knowledge during the analysis. Even so, CAPO provides an easy way for the user to interact with the parallelization process. The OpenMP version was generated within a day after the analysis was completed. Due to the sequential algorithms involved, code sections in TLNS3D and INS3D needed to be restructured by hand to produce more efficient parallel codes. An included figure shows preliminary test results of the generated OVERFLOW with several test cases in a single zone. The MPI data points for the small test case were taken from a hand-coded MPI version. As we can see, CAPO's version achieved an 18-fold speedup on 32 nodes of the SGI O2K. For the small test case, it outperformed the MPI version. These results are very encouraging, but further work is needed. For example, although CAPO attempts to place directives on the outermost parallel loops in an interprocedural framework, it does not insert directives based on the best manual strategy. In particular, it lacks support for parallelization at the multi-zone level. 
Future work will emphasize the development of a methodology to work at the multi-zone level and with a hybrid approach. Development of tools to perform more complicated code transformations is also needed.

  4. Effects of prolonged acceleration with or without clinostat rotation on seedlings of Arabidopsis thaliana (L.) Heynh

    NASA Technical Reports Server (NTRS)

    Brown, A. H.; Dahl, A. O.; Loercher, L.

    1974-01-01

    Three 21-day tests of the effects of chronic centrifugation were carried out on populations of Arabidopsis thaliana. In addition to 1 g, the resultant g-forces tested were 2, 4, 6, 8, 16, and 20 g. Observed end points included gross morphological characters such as the size of plant organs and, at the other extreme, features of sub-cellular structure and ultrastructure. Plants were grown on banks of clinostats. The acceleration vector was directed either parallel with the plants' axes or transverse to the axes. Plant responses to chronic axial acceleration and to transverse acceleration with clinostated plants were determined. From the data obtained it was possible in some cases: (1) to determine the g-functions of specific plant developmental characters; (2) to extrapolate those functions to the hypothetical value at zero g in order to predict (tentatively) the morphology of a plant grown in space; (3) to describe morphological effects of clinostat rotation; (4) to determine which of those effects was influenced by the prevailing g-force; and (5) to put to direct test the assumption that clinostat rotation nullifies or compensates for the influence of gravity.

  5. The molecular gradient using the divide-expand-consolidate resolution of the identity second-order Møller-Plesset perturbation theory: The DEC-RI-MP2 gradient

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bykov, Dmytro; Kristensen, Kasper; Kjærgaard, Thomas

    We report an implementation of the molecular gradient using the divide-expand-consolidate resolution of the identity second-order Møller-Plesset perturbation theory (DEC-RI-MP2). The new DEC-RI-MP2 gradient method combines the precision control as well as the linear-scaling and massively parallel features of the DEC scheme with efficient evaluations of the gradient contributions using the RI approximation. We further demonstrate that the DEC-RI-MP2 gradient method is capable of calculating molecular gradients for very large molecular systems. A test set of supramolecular complexes containing up to 158 atoms and 1960 contracted basis functions has been employed to demonstrate the general applicability of the DEC-RI-MP2 method and to analyze the errors of the DEC approximation. Moreover, the test set contains molecules of complicated electronic structures and is thus deliberately chosen to stress test the DEC-RI-MP2 gradient implementation. Additionally, as a showcase example the full molecular gradient for insulin (787 atoms and 7604 contracted basis functions) has been evaluated.

  6. Tungsten inert gas (TIG) welding of Ni-rich NiTi plates: functional behavior

    NASA Astrophysics Data System (ADS)

    Oliveira, J. P.; Barbosa, D.; Braz Fernandes, F. M.; Miranda, R. M.

    2016-03-01

    It is often reported that, to successfully join NiTi shape memory alloys, fusion-based processes with reduced thermally affected regions (as in laser welding) are required. This paper describes an experimental study performed on the tungsten inert gas (TIG) welding of 1.5 mm thick plates of Ni-rich NiTi. The functional behavior of the joints was assessed. The superelasticity was analyzed by cycling tests at maximum imposed strains of 4, 8 and 12% and for a total of 600 cycles, without rupture. The superelastic plateau was observed, in the stress-strain curves, 30 MPa below that of the base material. The shape-memory effect was evidenced by bending tests with full recovery of the initial shape of the welded joints. In parallel, uniaxial tensile tests of the joints showed a tensile strength of 700 MPa and an elongation to rupture of 20%. This elongation is the highest reported for fusion welding of NiTi, including laser welding. These results can be of great interest for the widespread inclusion of NiTi in complex-shaped components requiring welding, since TIG is an inexpensive process and is simple to operate and implement in industrial environments.

  7. Cross-over studies underestimate energy compensation: The example of sucrose-versus sucralose-containing drinks.

    PubMed

    Gadah, Nouf S; Brunstrom, Jeffrey M; Rogers, Peter J

    2016-12-01

    The vast majority of preload-test-meal studies that have investigated the effects on energy intake of disguised nutrient or other food/drink ingredient manipulations have used a cross-over design. We argue that this design may underestimate the effect of the manipulation due to carry-over effects. To test this we conducted comparable cross-over (n = 69) and parallel-groups (n = 48) studies testing the effects of sucrose versus low-calorie sweetener (sucralose) in a drink preload on test-meal energy intake. The parallel-groups study included a baseline day in which only the test meal was consumed. Energy intake in that meal was used to control for individual differences in energy intake in the analysis of the effects of sucrose versus sucralose on energy intake on the test day. Consistent with our prediction, the effect of consuming sucrose on subsequent energy intake was greater when measured in the parallel-groups study than in the cross-over study (respectively 64% versus 36% compensation for the 162 kcal difference in energy content of the sucrose and sucralose drinks). We also included a water comparison group in the parallel-groups study (n = 24) and found that test-meal energy intake did not differ significantly between the water and sucralose conditions. Together, these results confirm that consumption of sucrose in a drink reduces subsequent energy intake, but by less than the energy content of the drink, whilst drink sweetness does not increase food energy intake. Crucially, though, the studies demonstrate that study design affects estimated energy compensation. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.

  8. SMMP v. 3.0—Simulating proteins and protein interactions in Python and Fortran

    NASA Astrophysics Data System (ADS)

    Meinke, Jan H.; Mohanty, Sandipan; Eisenmenger, Frank; Hansmann, Ulrich H. E.

    2008-03-01

    We describe a revised and updated version of the program package SMMP. SMMP is an open-source FORTRAN package for molecular simulation of proteins within the standard geometry model. It is designed as a simple and inexpensive tool for researchers and students to become familiar with protein simulation techniques. SMMP 3.0 sports a revised API increasing its flexibility, an implementation of the Lund force field, multi-molecule simulations, a parallel implementation of the energy function, Python bindings, and more.

    Program summary
    Title of program: SMMP
    Catalogue identifier: ADOJ_v3_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADOJ_v3_0.html
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    Programming language used: FORTRAN, Python
    No. of lines in distributed program, including test data, etc.: 52 105
    No. of bytes in distributed program, including test data, etc.: 599 150
    Distribution format: tar.gz
    Computer: Platform independent
    Operating system: OS independent
    RAM: 2 Mbytes
    Classification: 3
    Does the new version supersede the previous version?: Yes
    Nature of problem: Molecular mechanics computations and Monte Carlo simulation of proteins.
    Solution method: Utilizes ECEPP2/3, FLEX, and Lund potentials. Includes Monte Carlo simulation algorithms for canonical as well as generalized ensembles.
    Reasons for new version: API changes and increased functionality.
    Summary of revisions: Added Lund potential; parameters used in subroutines are now passed as arguments; multi-molecule simulations; parallelized energy calculation for ECEPP; Python bindings.
    Restrictions: The consumed CPU time increases with the size of the protein molecule.
    Running time: Depends on the size of the simulated molecule.
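    SMMP's canonical-ensemble sampling rests on the Metropolis acceptance rule. A minimal, generic Python sketch of that step follows; this is not SMMP's actual FORTRAN API, and the one-dimensional harmonic "energy" is an invented stand-in for a protein force field:

```python
import math
import random

def metropolis_step(x, energy, beta, rng, step=0.5):
    """Propose a perturbed state and accept it with probability
    min(1, exp(-beta * dE)) -- the canonical-ensemble Metropolis rule."""
    x_new = x + rng.uniform(-step, step)
    dE = energy(x_new) - energy(x)
    if dE <= 0 or rng.random() < math.exp(-beta * dE):
        return x_new
    return x

# Toy 1-D harmonic potential standing in for a molecular energy function.
energy = lambda x: 0.5 * x * x
rng = random.Random(42)
x = 3.0
for _ in range(10000):
    x = metropolis_step(x, energy, beta=2.0, rng=rng)
```

    Downhill moves are always accepted and uphill moves with Boltzmann probability, so the chain relaxes toward the low-energy region and then samples the canonical distribution at inverse temperature beta.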

  9. Study on the Therapeutic Benefit on Lactoferrin in Patients with Colorectal Cancer Receiving Chemotherapy

    PubMed Central

    Moastafa, Tarek M.; El-Sissy, Alaa El-Din Elsayed; El-Saeed, Gehan K.; Koura, Mai Salah El-Din

    2014-01-01

    A double-blinded, parallel, randomized controlled clinical trial was conducted on two groups of colorectal cancer patients, aged 20 to 71 years and receiving 5-fluorouracil and leucovorin calcium, to study the therapeutic benefit of orally administered bovine lactoferrin (bLF). The test group (15 patients) received oral bLF 250 mg/day alongside chemotherapy for three months. The control group (15 patients) received chemotherapy only. Serum lactoferrin (LF), serum glutathione-S-transferase (GST), interferon gamma (INF-γ), the tumor marker carcinoembryonic antigen (CEA), renal function tests, hepatic function tests, and complete blood count were measured for both groups before and at the end of the trial. Although oral bLF (250 mg/day) produced a significant improvement in the mean percent change of all parameters after 3 months of treatment, there was no significant difference between the test group and the control group after treatment. This result suggests that oral bLF has a significant therapeutic effect in colorectal cancer patients. Our study suggests that daily administration of bLF has a clinically beneficial effect for colorectal cancer patients, with better disease prognosis, although this requires further investigation. PMID:27350986

  10. Study on the Therapeutic Benefit on Lactoferrin in Patients with Colorectal Cancer Receiving Chemotherapy.

    PubMed

    Moastafa, Tarek M; El-Sissy, Alaa El-Din Elsayed; El-Saeed, Gehan K; Koura, Mai Salah El-Din

    2014-01-01

    A double-blinded, parallel, randomized controlled clinical trial was conducted on two groups of colorectal cancer patients, aged 20 to 71 years and receiving 5-fluorouracil and leucovorin calcium, to study the therapeutic benefit of orally administered bovine lactoferrin (bLF). The test group (15 patients) received oral bLF 250 mg/day alongside chemotherapy for three months. The control group (15 patients) received chemotherapy only. Serum lactoferrin (LF), serum glutathione-S-transferase (GST), interferon gamma (INF-γ), the tumor marker carcinoembryonic antigen (CEA), renal function tests, hepatic function tests, and complete blood count were measured for both groups before and at the end of the trial. Although oral bLF (250 mg/day) produced a significant improvement in the mean percent change of all parameters after 3 months of treatment, there was no significant difference between the test group and the control group after treatment. This result suggests that oral bLF has a significant therapeutic effect in colorectal cancer patients. Our study suggests that daily administration of bLF has a clinically beneficial effect for colorectal cancer patients, with better disease prognosis, although this requires further investigation.

  11. GSRP/David Marshall: Fully Automated Cartesian Grid CFD Application for MDO in High Speed Flows

    NASA Technical Reports Server (NTRS)

    2003-01-01

    With the renewed interest in Cartesian gridding methodologies for the ease and speed of gridding complex geometries in addition to the simplicity of the control volumes used in the computations, it has become important to investigate ways of extending the existing Cartesian grid solver functionalities. This includes developing methods of modeling the viscous effects in order to utilize Cartesian grids solvers for accurate drag predictions and addressing the issues related to the distributed memory parallelization of Cartesian solvers. This research presents advances in two areas of interest in Cartesian grid solvers, viscous effects modeling and MPI parallelization. The development of viscous effects modeling using solely Cartesian grids has been hampered by the widely varying control volume sizes associated with the mesh refinement and the cut cells associated with the solid surface. This problem is being addressed by using physically based modeling techniques to update the state vectors of the cut cells and removing them from the finite volume integration scheme. This work is performed on a new Cartesian grid solver, NASCART-GT, with modifications to its cut cell functionality. The development of MPI parallelization addresses issues associated with utilizing Cartesian solvers on distributed memory parallel environments. This work is performed on an existing Cartesian grid solver, CART3D, with modifications to its parallelization methodology.

  12. A parallel approach of COFFEE objective function to multiple sequence alignment

    NASA Astrophysics Data System (ADS)

    Zafalon, G. F. D.; Visotaky, J. M. V.; Amorim, A. R.; Valêncio, C. R.; Neves, L. A.; de Souza, R. C. G.; Machado, J. M.

    2015-09-01

    Computational tools to assist genomic analyses are increasingly necessary due to the rapid growth in the amount of available data. Given the high computational cost of deterministic algorithms for sequence alignment, many works concentrate their efforts on the development of heuristic approaches to multiple sequence alignment. However, selecting an approach that offers solutions with good biological significance and feasible execution time is a great challenge. This work therefore parallelizes the processing steps of the MSA-GA tool, using the multithreading paradigm in the execution of the COFFEE objective function. The standard objective function implemented in the tool is the Weighted Sum of Pairs (WSP), which produces some distortions in the final alignments when sets of sequences with low similarity are aligned. In previous studies we implemented the COFFEE objective function in the tool to smooth these distortions. Although the nature of the COFFEE objective function implies an increase in execution time, this approach presents steps that can be executed in parallel. With the improvements implemented in this work, the new approach is 24% faster than the sequential approach with COFFEE. Moreover, the COFFEE multithreaded approach is more efficient than WSP because, besides being slightly faster, its biological results are better.
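    The parallelization pattern described, scoring independent parts of an alignment in separate threads, can be sketched generically in Python. This uses a simplified sum-of-pairs column score, not the actual COFFEE function or MSA-GA code:

```python
from concurrent.futures import ThreadPoolExecutor

def column_score(column):
    """Sum-of-pairs score for one alignment column: +1 per matching
    residue pair, -1 per mismatch or gap pairing (a simplified scheme)."""
    score = 0
    for i in range(len(column)):
        for j in range(i + 1, len(column)):
            a, b = column[i], column[j]
            score += 1 if (a == b and a != '-') else -1
    return score

def alignment_score(rows, workers=4):
    # Columns are independent of one another, so each column's score
    # can be computed in its own thread and the results summed.
    columns = list(zip(*rows))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(column_score, columns))

aln = ["ACG-T",
       "ACGAT",
       "AC-AT"]
print(alignment_score(aln))  # -> 7
```

    Because the objective function decomposes into per-column terms with no shared mutable state, the parallel and sequential versions return identical scores; only wall-clock time differs.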

  13. Visual resolution in incoherent and coherent light: preliminary investigation

    NASA Astrophysics Data System (ADS)

    Sarnowska-Habrat, Katarzyna; Dubik, Boguslawa; Zajac, Marek

    2001-05-01

    In ophthalmology and optometry a number of measures are used for describing quality of human vision such as resolution, visual acuity, contrast sensitivity function, etc. In this paper we will concentrate on the vision quality understood as a resolution of periodic object being a set of equidistant parallel lines of given spacing and direction. The measurement procedure is based on presenting the test to the investigated person and determining the highest spatial frequency he/she can still resolve. In this paper we describe a number of experiments in which we use test tables illuminated with light both coherent and incoherent of different spectral characteristics. Our experiments suggest that while considering incoherent polychromatic illumination the resolution in blue light is substantially worse than in white light. In coherent illumination speckling effect causes worsening of resolution. While using laser light it is easy to generate a sinusoidal interference pattern which can serve as test object. In the paper we compare the results of resolution measurements with test tables and interference fringes.

  14. Idiopathic normal pressure hydrocephalus: the CSF tap-test may predict the clinical response to shunting.

    PubMed

    Sand, T; Bovim, G; Grimse, R; Myhr, G; Helde, G; Cappelen, J

    1994-05-01

    A follow-up study was performed in nine patients with idiopathic normal pressure hydrocephalus (NPH) 37 months (mean) after shunting and 10 non-operated controls with comparable degrees of ventricular enlargement, gait disorder, and dementia. Five operated patients vs. no controls reported sustained general improvement (p < 0.02). Objectively improved gait at follow-up (compared with preoperative status) was found in five of the six tested NPH-patients vs. none of the controls (p < 0.005). Improved gait and/or psychometric function was found in four of six NPH vs. none of eight control patients (p < 0.02) after drainage of 40 ml cerebrospinal fluid (CSF tap-test). Improved gait during the CSF tap-test predicted continued improvement at follow-up. Temporal horn size was the only radiological variable which showed a (moderate) positive correlation with resistance to CSF absorption and rate of pressure increase. The size of the third ventricle diminished in parallel with clinical improvement.

  15. Test benches for studying the properties of car tyres

    NASA Astrophysics Data System (ADS)

    Kuznetsov, N. Yu.; Fedotov, A. I.; Vlasov, V. G.

    2017-12-01

    The article describes the design of the measuring systems of test benches used to study the properties of elastic tyres. The bench has two autonomous systems - one for testing the braking properties of elastic tyres rolling in a plane-parallel manner and one for testing tyre slip properties. The system for testing braking properties determines experimental characteristics of elastic tyres as the following dependencies: longitudinal response vs time, braking torque vs slip, angular velocity vs slip, and longitudinal response vs slip. The system for studying tyre slip properties determines both steady characteristics (dependence of the lateral response in a contact area on the slipping angle) and non-steady characteristics (time variation of the slipping angle as a result of turning from -40 to +40 degrees) of tyre slip. The article presents the diagrams of bench tests of elastic tyres. The experimental results show metrological parameters and functional capabilities of the bench for studying tyre properties in driving and braking modes. The metrological indices of the recorded parameters of the measuring system for studying tyre properties are presented in the table.

  16. A lightweight, flow-based toolkit for parallel and distributed bioinformatics pipelines

    PubMed Central

    2011-01-01

    Background Bioinformatic analyses typically proceed as chains of data-processing tasks. A pipeline, or 'workflow', is a well-defined protocol, with a specific structure defined by the topology of data-flow interdependencies, and a particular functionality arising from the data transformations applied at each step. In computer science, the dataflow programming (DFP) paradigm defines software systems constructed in this manner, as networks of message-passing components. Thus, bioinformatic workflows can be naturally mapped onto DFP concepts. Results To enable the flexible creation and execution of bioinformatics dataflows, we have written a modular framework for parallel pipelines in Python ('PaPy'). A PaPy workflow is created from re-usable components connected by data-pipes into a directed acyclic graph, which together define nested higher-order map functions. The successive functional transformations of input data are evaluated on flexibly pooled compute resources, either local or remote. Input items are processed in batches of adjustable size, allowing one to tune the trade-off between parallelism and lazy-evaluation (memory consumption). An add-on module ('NuBio') facilitates the creation of bioinformatics workflows by providing domain specific data-containers (e.g., for biomolecular sequences, alignments, structures) and functionality (e.g., to parse/write standard file formats). Conclusions PaPy offers a modular framework for the creation and deployment of parallel and distributed data-processing workflows. Pipelines derive their functionality from user-written, data-coupled components, so PaPy also can be viewed as a lightweight toolkit for extensible, flow-based bioinformatics data-processing. The simplicity and flexibility of distributed PaPy pipelines may help users bridge the gap between traditional desktop/workstation and grid computing.
PaPy is freely distributed as open-source Python code at http://muralab.org/PaPy, and includes extensive documentation and annotated usage examples. PMID:21352538
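    The dataflow idea described above, reusable data-coupled components chained into map operations over a worker pool, can be sketched with only the Python standard library. This generic sketch does not use PaPy's actual API; the component functions and record format are invented for illustration:

```python
from multiprocessing import Pool

# Re-usable, data-coupled components: each is a plain function that
# transforms one input item, mirroring a node in a dataflow graph.
def parse(record):
    name, seq = record.split(":")
    return {"name": name, "seq": seq}

def gc_content(item):
    seq = item["seq"]
    item["gc"] = sum(base in "GC" for base in seq) / len(seq)
    return item

def pipeline(records, workers=2):
    """Chain the components into a two-stage map pipeline; the first
    stage is evaluated in parallel across the worker pool."""
    with Pool(workers) as pool:
        parsed = pool.imap(parse, records, chunksize=2)
        return list(map(gc_content, parsed))

if __name__ == "__main__":
    data = ["s1:ACGT", "s2:GGCC", "s3:ATAT"]
    for item in pipeline(data):
        print(item["name"], item["gc"])
```

    As in the framework described, items flow through the stages one batch at a time (here via `imap` with a `chunksize`), which bounds memory use while keeping the workers busy.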

  17. A lightweight, flow-based toolkit for parallel and distributed bioinformatics pipelines.

    PubMed

    Cieślik, Marcin; Mura, Cameron

    2011-02-25

    Bioinformatic analyses typically proceed as chains of data-processing tasks. A pipeline, or 'workflow', is a well-defined protocol, with a specific structure defined by the topology of data-flow interdependencies, and a particular functionality arising from the data transformations applied at each step. In computer science, the dataflow programming (DFP) paradigm defines software systems constructed in this manner, as networks of message-passing components. Thus, bioinformatic workflows can be naturally mapped onto DFP concepts. To enable the flexible creation and execution of bioinformatics dataflows, we have written a modular framework for parallel pipelines in Python ('PaPy'). A PaPy workflow is created from re-usable components connected by data-pipes into a directed acyclic graph, which together define nested higher-order map functions. The successive functional transformations of input data are evaluated on flexibly pooled compute resources, either local or remote. Input items are processed in batches of adjustable size, allowing one to tune the trade-off between parallelism and lazy-evaluation (memory consumption). An add-on module ('NuBio') facilitates the creation of bioinformatics workflows by providing domain specific data-containers (e.g., for biomolecular sequences, alignments, structures) and functionality (e.g., to parse/write standard file formats). PaPy offers a modular framework for the creation and deployment of parallel and distributed data-processing workflows. Pipelines derive their functionality from user-written, data-coupled components, so PaPy also can be viewed as a lightweight toolkit for extensible, flow-based bioinformatics data-processing. The simplicity and flexibility of distributed PaPy pipelines may help users bridge the gap between traditional desktop/workstation and grid computing.
PaPy is freely distributed as open-source Python code at http://muralab.org/PaPy, and includes extensive documentation and annotated usage examples.

  18. Learning Quantitative Sequence-Function Relationships from Massively Parallel Experiments

    NASA Astrophysics Data System (ADS)

    Atwal, Gurinder S.; Kinney, Justin B.

    2016-03-01

    A fundamental aspect of biological information processing is the ubiquity of sequence-function relationships—functions that map the sequence of DNA, RNA, or protein to a biochemically relevant activity. Most sequence-function relationships in biology are quantitative, but only recently have experimental techniques for effectively measuring these relationships been developed. The advent of such "massively parallel" experiments presents an exciting opportunity for the concepts and methods of statistical physics to inform the study of biological systems. After reviewing these recent experimental advances, we focus on the problem of how to infer parametric models of sequence-function relationships from the data produced by these experiments. Specifically, we retrace and extend recent theoretical work showing that inference based on mutual information, not the standard likelihood-based approach, is often necessary for accurately learning the parameters of these models. Closely connected with this result is the emergence of "diffeomorphic modes"—directions in parameter space that are far less constrained by data than likelihood-based inference would suggest. Analogous to Goldstone modes in physics, diffeomorphic modes arise from an arbitrarily broken symmetry of the inference problem. An analytically tractable model of a massively parallel experiment is then described, providing an explicit demonstration of these fundamental aspects of statistical inference. This paper concludes with an outlook on the theoretical and computational challenges currently facing studies of quantitative sequence-function relationships.
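    The mutual-information-based inference advocated above starts from the standard discrete definition I(X;Y) = Σ p(x,y) log2[p(x,y)/(p(x)p(y))]. A minimal plug-in estimator from paired observations can be written as follows (a generic illustration, not the authors' code):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) in bits from paired samples."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))   # joint counts
    px = Counter(xs)             # marginal counts for X
    py = Counter(ys)             # marginal counts for Y
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        # p_joint / (p_x * p_y) expressed with counts to avoid extra divisions
        mi += p_joint * math.log2(p_joint * n * n / (px[x] * py[y]))
    return mi

# Perfectly dependent binary variables carry one bit of information;
# independent ones carry zero.
print(mutual_information([0, 1, 0, 1], [0, 1, 0, 1]))  # -> 1.0
print(mutual_information([0, 0, 1, 1], [0, 1, 0, 1]))  # -> 0.0
```

    Unlike a likelihood, this quantity is invariant under any invertible transformation of the model's predicted activity, which is the property underlying the "diffeomorphic modes" discussed in the abstract.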

  19. Random number generators for large-scale parallel Monte Carlo simulations on FPGA

    NASA Astrophysics Data System (ADS)

    Lin, Y.; Wang, F.; Liu, B.

    2018-05-01

    Through parallelization, field programmable gate arrays (FPGAs) can achieve unprecedented speeds in large-scale parallel Monte Carlo (LPMC) simulations. FPGAs present both new constraints and new opportunities for the implementation of random number generators (RNGs), which are key elements of any Monte Carlo (MC) simulation system. Using empirical and application-based tests, this study evaluates all four RNGs used in previous FPGA-based MC studies, along with newly proposed FPGA implementations of two well-known high-quality RNGs that are suitable for LPMC studies on FPGA. One of the newly proposed FPGA implementations, a parallel version of the additive lagged Fibonacci generator (Parallel ALFG), is found to be the best among the evaluated RNGs in fulfilling the needs of LPMC simulations on FPGA.
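    An additive lagged Fibonacci generator is defined by the recurrence x[n] = (x[n-j] + x[n-k]) mod 2^m with lags j < k. The following Python sketch shows the recurrence itself; the lags (5, 17), word size, and seed are illustrative choices, not the parameters of the paper's FPGA design:

```python
class ALFG:
    """Additive lagged Fibonacci generator:
    x[n] = (x[n-j] + x[n-k]) mod 2**m, with lags j < k."""

    def __init__(self, seed_state, j=5, k=17, m=32):
        assert len(seed_state) == k and j < k
        self.state = list(seed_state)          # the last k outputs
        self.j, self.k = j, k
        self.mask = (1 << m) - 1               # reduction mod 2**m

    def next(self):
        x = (self.state[-self.j] + self.state[-self.k]) & self.mask
        self.state.append(x)
        self.state.pop(0)                      # keep only the last k values
        return x

# Illustrative seed: any k starting values (quality seeding matters in practice).
rng = ALFG(list(range(1, 18)))
print([rng.next() for _ in range(3)])  # -> [14, 16, 18]
```

    The generator's state is just a sliding window of k words updated with one addition per output, which is what makes it cheap to replicate many times in parallel on an FPGA.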

  20. Angiotensin II receptors in testes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Millan, M.A.; Aguilera, G.

    Receptors for angiotensin II (AII) were identified and characterized in testes of rats and several primate species. Autoradiographic analysis of the binding of 125I-labeled (Sar1,Ile8)AII to rat, rhesus monkey, cebus monkey, and human testicular slide-mounted frozen sections indicated specific binding to Leydig cells in the interstitium. In rat collagenase-dispersed interstitial cells fractionated by Percoll gradient, AII receptor content was parallel to that of hCG receptors, confirming that the AII receptors are in the Leydig cells. In rat dispersed Leydig cells, binding was specific for AII and its analogs and of high affinity (Kd, 4.8 nM), with a receptor concentration of 15 fmol/10(6) cells. Studies of AII receptors in rat testes during development reveal the presence of high receptor density in newborn rats which decreases toward the adult age (4934 +/- 309, 1460 +/- 228, 772 +/- 169, and 82 +/- 12 fmol/mg protein at 5, 15, 20, and 30 days of age, respectively) with no change in affinity. At all ages receptors were located in the interstitium, and the decrease in binding was parallel to the decrease in the interstitial to tubular ratio observed with age. AII receptor properties in membrane-rich fractions from prepuberal testes were similar in the rat and rhesus monkey. Binding was time and temperature dependent, reaching a plateau at 60 min at 37 C, and was increased by divalent cations, EGTA, and dithiothreitol up to 0.5 mM. In membranes from prepuberal monkey testes, AII receptors were specific for AII analogs and of high affinity (Kd, 4.2 nM) with a receptor concentration of 7599 +/- 1342 fmol/mg protein. The presence of AII receptors in Leydig cells in rat and primate testes in conjunction with reports of the presence of other components of the renin-angiotensin system in the testes suggests that the peptide has a physiological role in testicular function.

  1. Low bias negative differential conductance and reversal of current in coupled quantum dots in different topological configurations

    NASA Astrophysics Data System (ADS)

    Devi, Sushila; Brogi, B. B.; Ahluwalia, P. K.; Chand, S.

    2018-06-01

    Electronic transport through an asymmetric parallel-coupled quantum dot system hybridized with normal leads has been investigated theoretically in the Coulomb blockade regime using the Non-Equilibrium Green Function formalism. A decoupling scheme proposed by Rabani and co-workers has been adopted to close the chain of higher-order Green's functions appearing in the equations of motion. For the resonant tunneling case, calculations of the current and differential conductance are presented as the coupled quantum dot system transitions from a series to a symmetric parallel configuration. It is found that during this transition the current and differential conductance of the system increase. Furthermore, clear signatures of negative differential conductance and negative current appear in the series case, both of which disappear when the topology of the system is tuned to the asymmetric parallel configuration.

  2. On-top density functionals for the short-range dynamic correlation between electrons of opposite and parallel spin

    NASA Astrophysics Data System (ADS)

    Hollett, Joshua W.; Pegoretti, Nicholas

    2018-04-01

    Separate, one-parameter, on-top density functionals are derived for the short-range dynamic correlation between opposite and parallel-spin electrons, in which the electron-electron cusp is represented by an exponential function. The combination of both functionals is referred to as the Opposite-spin exponential-cusp and Fermi-hole correction (OF) functional. The two parameters of the OF functional are set by fitting the ionization energies and electron affinities, of the atoms He to Ar, predicted by ROHF in combination with the OF functional to the experimental values. For ionization energies, the overall performance of ROHF-OF is better than completely renormalized coupled-cluster [CR-CC(2,3)] and better than, or as good as, conventional density functional methods. For electron affinities, the overall performance of ROHF-OF is less impressive. However, for both ionization energies and electron affinities of third row atoms, the mean absolute error of ROHF-OF is only 3 kJ mol-1.

  3. An O(N) and parallel approach to integral problems by a kernel-independent fast multipole method: Application to polarization and magnetization of interacting particles

    NASA Astrophysics Data System (ADS)

    Jiang, Xikai; Li, Jiyuan; Zhao, Xujun; Qin, Jian; Karpeev, Dmitry; Hernandez-Ortiz, Juan; de Pablo, Juan J.; Heinonen, Olle

    2016-08-01

    Large classes of materials systems in physics and engineering are governed by magnetic and electrostatic interactions. Continuum or mesoscale descriptions of such systems can be cast in terms of integral equations, whose direct computational evaluation requires O(N2) operations, where N is the number of unknowns. Such a scaling, which arises from the many-body nature of the relevant Green's function, has precluded wide-spread adoption of integral methods for solution of large-scale scientific and engineering problems. In this work, a parallel computational approach is presented that relies on using scalable open source libraries and utilizes a kernel-independent Fast Multipole Method (FMM) to evaluate the integrals in O(N) operations, with O(N) memory cost, thereby substantially improving the scalability and efficiency of computational integral methods. We demonstrate the accuracy, efficiency, and scalability of our approach in the context of two examples. In the first, we solve a boundary value problem for a ferroelectric/ferromagnetic volume in free space. In the second, we solve an electrostatic problem involving polarizable dielectric bodies in an unbounded dielectric medium. The results from these test cases show that our proposed parallel approach, which is built on a kernel-independent FMM, can enable highly efficient and accurate simulations and allow for considerable flexibility in a broad range of applications.

  4. An O( N) and parallel approach to integral problems by a kernel-independent fast multipole method: Application to polarization and magnetization of interacting particles

    DOE PAGES

    Jiang, Xikai; Li, Jiyuan; Zhao, Xujun; ...

    2016-08-10

    Large classes of materials systems in physics and engineering are governed by magnetic and electrostatic interactions. Continuum or mesoscale descriptions of such systems can be cast in terms of integral equations, whose direct computational evaluation requires O( N 2) operations, where N is the number of unknowns. Such a scaling, which arises from the many-body nature of the relevant Green's function, has precluded wide-spread adoption of integral methods for solution of large-scale scientific and engineering problems. In this work, a parallel computational approach is presented that relies on using scalable open source libraries and utilizes a kernel-independent Fast Multipole Method (FMM) to evaluate the integrals in O( N) operations, with O( N) memory cost, thereby substantially improving the scalability and efficiency of computational integral methods. We demonstrate the accuracy, efficiency, and scalability of our approach in the context of two examples. In the first, we solve a boundary value problem for a ferroelectric/ferromagnetic volume in free space. In the second, we solve an electrostatic problem involving polarizable dielectric bodies in an unbounded dielectric medium. Lastly, the results from these test cases show that our proposed parallel approach, which is built on a kernel-independent FMM, can enable highly efficient and accurate simulations and allow for considerable flexibility in a broad range of applications.

  5. Assessment of hospital surge capacity using the MACSIM simulation system: a pilot study.

    PubMed

    Lennquist Montán, K; Riddez, L; Lennquist, S; Olsberg, A C; Lindberg, H; Gryth, D; Örtenwall, P

    2017-08-01

    The aim of this study was to use a simulation model developed for the scientific evaluation of methodology in disaster medicine to test surge capacity (SC) in a major hospital responding to a simulated major incident with a scenario copied from a real incident. The tested hospital was illustrated on a system of magnetic boards, where available resources, staff, and patients treated in the hospital at the time of the test were illustrated. Casualties were illustrated with simulation cards supplying all data required to determine procedures for diagnosis and treatment, all of which were connected to real consumption of time and resources. The first capacity-limiting factor was the number of resuscitation teams that could work in parallel in the emergency department (ED). This made it necessary to refer the severely injured to other hospitals. At this time, surgery (OR) and intensive care (ICU) had considerable remaining capacity. Thus, the reception of casualties could be restarted when the ED had been cleared. The next limiting factor was a lack of ventilators in the ICU, which permanently set the limit for SC. At this time, there was still residual OR capacity. With access to more ventilators, the full surgical capacity of the hospital could have been utilized. The tested model was evaluated as an accurate tool to determine SC. The results illustrate that SC cannot be determined by testing one single function in the hospital, since all functions interact with each other and different functions can be identified as limiting factors at different times during the response.

  6. The stress shadow effect: a mechanical analysis of the evenly-spaced parallel strike-slip faults in the San Andreas fault system

    NASA Astrophysics Data System (ADS)

    Zuza, A. V.; Yin, A.; Lin, J. C.

    2015-12-01

    Parallel evenly-spaced strike-slip faults are prominent in the southern San Andreas fault system, as well as other settings along plate boundaries (e.g., the Alpine fault) and within continental interiors (e.g., the North Anatolian, central Asian, and northern Tibetan faults). In southern California, the parallel San Jacinto, Elsinore, Rose Canyon, and San Clemente faults to the west of the San Andreas are regularly spaced at ~40 km. In the Eastern California Shear Zone, east of the San Andreas, faults are spaced at ~15 km. These characteristic spacings provide unique mechanical constraints on how the faults interact. Despite the common occurrence of parallel strike-slip faults, the fundamental questions of how and why these fault systems form remain unanswered. We address this issue by using the stress shadow concept of Lachenbruch (1961)—developed to explain extensional joints by using the stress-free condition on the crack surface—to present a mechanical analysis of the formation of parallel strike-slip faults that relates fault spacing and brittle-crust thickness to fault strength, crustal strength, and the crustal stress state. We discuss three independent models: (1) a fracture mechanics model, (2) an empirical stress-rise function model embedded in a plastic medium, and (3) an elastic-plate model. The assumptions and predictions of these models are quantitatively tested using scaled analogue sandbox experiments that show that strike-slip fault spacing is linearly related to the brittle-crust thickness. We derive constraints on the mechanical properties of the southern San Andreas strike-slip faults and fault-bounded crust (e.g., local fault strength and crustal/regional stress) given the observed fault spacing and brittle-crust thickness, which is obtained by defining the base of the seismogenic zone with high-resolution earthquake data. Our models allow direct comparison of the parallel faults in the southern San Andreas system with other similar strike-slip fault systems, both on Earth and throughout the solar system (e.g., the Tiger Stripe Fractures on Enceladus).

  7. Power-balancing instantaneous optimization energy management for a novel series-parallel hybrid electric bus

    NASA Astrophysics Data System (ADS)

    Sun, Dongye; Lin, Xinyou; Qin, Datong; Deng, Tao

    2012-11-01

    Energy management (EM) is a core technique for improving the fuel economy of a hybrid electric bus (HEB) and is unique to the corresponding configuration. Existing control strategies seldom consider battery power management together with internal combustion engine power management. In this paper, a power-balancing instantaneous optimization (PBIO) energy management control strategy is proposed for a novel series-parallel hybrid electric bus. According to the characteristics of the novel series-parallel architecture, the switching boundary condition between series and parallel modes as well as the control rules of the power-balancing strategy are developed. An equivalent fuel model of the battery is implemented and combined with the engine fuel model to constitute an objective function that minimizes fuel consumption at each sampling instant and coordinates the power distribution between the engine and battery in real time. To validate that the proposed strategy is effective and reasonable, a forward model is built in Matlab/Simulink for simulation, and a dSPACE AutoBox acts as the controller for hardware-in-the-loop bench testing. Both simulation and hardware-in-the-loop results demonstrate that the proposed strategy sustains the battery SOC within its operational range and keeps the engine operating point in the peak-efficiency region, while improving the fuel economy of the series-parallel hybrid electric bus (SPHEB) by up to 30.73% compared with the prototype bus; relative to a rule-based strategy, the PBIO strategy reduces fuel consumption by up to 12.38%. The proposed research shows that the PBIO algorithm is applicable in real time, improves the efficiency of the SPHEB system, and is well suited to complicated configurations.
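    The instantaneous-optimization idea — at each sampling instant, minimize engine fuel plus an equivalent fuel cost for battery power — can be sketched as a grid search over the power split. All names and the linear equivalence factor `s_eq` below are hypothetical stand-ins, not the paper's PBIO formulation.

```python
def instantaneous_split(p_demand, p_eng_max, p_bat_max, fuel_rate, s_eq):
    """Grid-search split of demanded power between engine and battery.

    Minimizes fuel_rate(p_eng) + s_eq * max(p_bat, 0), i.e. engine fuel
    plus an equivalent fuel cost for battery discharge; charging is free
    in this toy model. Hypothetical stand-in for the PBIO strategy.
    Returns (cost, engine power, battery power).
    """
    best = None
    for k in range(101):  # engine power swept in 1% steps
        p_eng = p_eng_max * k / 100.0
        p_bat = p_demand - p_eng
        if abs(p_bat) > p_bat_max:
            continue  # infeasible split: battery power limit exceeded
        cost = fuel_rate(p_eng) + s_eq * max(p_bat, 0.0)
        if best is None or cost < best[0]:
            best = (cost, p_eng, p_bat)
    return best
```

    With a linear fuel map and a discharge equivalence factor cheaper than engine fuel, the search saturates the battery at its power limit and lets the engine supply the remainder, which is the qualitative behavior a power-balancing strategy coordinates.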

  8. A time-parallel approach to strong-constraint four-dimensional variational data assimilation

    NASA Astrophysics Data System (ADS)

    Rao, Vishwas; Sandu, Adrian

    2016-05-01

    A parallel-in-time algorithm based on an augmented Lagrangian approach is proposed to solve four-dimensional variational (4D-Var) data assimilation problems. The assimilation window is divided into multiple sub-intervals, which allows parallelization of the cost function and gradient computations. The solutions to the continuity equations across interval boundaries are added as constraints. The augmented Lagrangian approach leads to a different formulation of the variational data assimilation problem than the weakly constrained 4D-Var. A combination of serial and parallel 4D-Vars to increase performance is also explored. The methodology is illustrated on data assimilation problems involving the Lorenz-96 and shallow water models.
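    The augmented-Lagrangian construction can be sketched schematically: the per-sub-interval cost terms are summed (these are the computations that can proceed in parallel), while multiplier and quadratic-penalty terms enforce state continuity across interval boundaries. The symbols below are illustrative, not taken from the paper.

```python
import numpy as np

def augmented_lagrangian(sub_costs, boundary_gaps, lambdas, mu):
    """Schematic augmented-Lagrangian objective for time-parallel 4D-Var.

    sub_costs:     per-sub-interval 4D-Var cost terms (parallelizable)
    boundary_gaps: state mismatches at sub-interval boundaries
    lambdas:       Lagrange multipliers for the continuity constraints
    mu:            quadratic penalty weight
    """
    gaps = np.asarray(boundary_gaps, dtype=float)
    return (sum(sub_costs)
            + float(np.dot(lambdas, gaps))      # multiplier terms
            + 0.5 * mu * float(np.dot(gaps, gaps)))  # penalty terms
```

    As the outer iteration drives the boundary gaps to zero, the objective reduces to the sum of sub-interval costs, recovering the strong-constraint problem.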

  9. Parallel heuristics for scalable community detection

    DOE PAGES

    Lu, Hao; Halappanavar, Mahantesh; Kalyanaraman, Ananth

    2015-08-14

    Community detection has become a fundamental operation in numerous graph-theoretic applications. Despite its potential for application, there is only limited support for community detection on large-scale parallel computers, largely owing to the irregular and inherently sequential nature of the underlying heuristics. In this paper, we present parallelization heuristics for fast community detection using the Louvain method as the serial template. The Louvain method is an iterative heuristic for modularity optimization. Originally developed in 2008, the method has become increasingly popular owing to its ability to detect high-modularity community partitions in a fast and memory-efficient manner. However, the method is also inherently sequential, thereby limiting its scalability. Here, we observe certain key properties of this method that present challenges for its parallelization, and consequently propose heuristics that are designed to break the sequential barrier. For evaluation purposes, we implemented our heuristics using OpenMP multithreading, and tested them over real-world graphs derived from multiple application domains. Compared to the serial Louvain implementation, our parallel implementation is able to produce community outputs with a higher modularity for most of the inputs tested, in a comparable number of iterations or fewer, while providing real speedups of up to 16x using 32 threads.
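    The quantity the Louvain method greedily optimizes is Newman's modularity, Q = sum_c [ e_c/m - (d_c/2m)^2 ], where e_c is the number of intra-community edges, d_c the total degree of community c, and m the edge count. A minimal serial sketch (not the authors' OpenMP implementation):

```python
def modularity(edges, community):
    """Newman modularity Q = sum_c [ e_c/m - (d_c / 2m)^2 ].

    edges:     undirected edge list of (u, v) pairs
    community: dict mapping node -> community id
    Illustrative sketch of the objective the Louvain method optimizes.
    """
    m = len(edges)
    intra = {}   # e_c: edges with both endpoints in community c
    degree = {}  # d_c: total degree of nodes in community c
    for u, v in edges:
        degree[community[u]] = degree.get(community[u], 0) + 1
        degree[community[v]] = degree.get(community[v], 0) + 1
        if community[u] == community[v]:
            c = community[u]
            intra[c] = intra.get(c, 0) + 1
    return sum(intra.get(c, 0) / m - (d / (2 * m)) ** 2
               for c, d in degree.items())
```

    For two disjoint edges placed in two communities, Q = 0.5, the maximum for that graph; Louvain repeatedly moves vertices to neighboring communities whenever the move increases Q, which is the step the parallel heuristics must coordinate.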

  10. Parallel implementation of an adaptive and parameter-free N-body integrator

    NASA Astrophysics Data System (ADS)

    Pruett, C. David; Ingham, William H.; Herman, Ralph D.

    2011-05-01

    Previously, Pruett et al. (2003) [3] described an N-body integrator of arbitrarily high order M with an asymptotic operation count of O(MN). The algorithm's structure lends itself readily to data parallelization, which we document and demonstrate here in the integration of point-mass systems subject to Newtonian gravitation. High order is shown to benefit parallel efficiency. The resulting N-body integrator is robust, parameter-free, highly accurate, and adaptive in both time-step and order. Moreover, it exhibits linear speedup on distributed parallel processors, provided that each processor is assigned at least a handful of bodies.
    Program summary:
    Program title: PNB.f90
    Catalogue identifier: AEIK_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIK_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 3052
    No. of bytes in distributed program, including test data, etc.: 68 600
    Distribution format: tar.gz
    Programming language: Fortran 90 and OpenMPI
    Computer: All shared or distributed memory parallel processors
    Operating system: Unix/Linux
    Has the code been vectorized or parallelized?: The code has been parallelized but has not been explicitly vectorized.
    RAM: Dependent upon N
    Classification: 4.3, 4.12, 6.5
    Nature of problem: High accuracy numerical evaluation of trajectories of N point masses each subject to Newtonian gravitation.
    Solution method: Parallel and adaptive extrapolation in time via power series of arbitrary degree.
    Running time: 5.1 s for the demo program supplied with the package.
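    The force evaluation at the core of any such integrator is the Newtonian pairwise acceleration sum, a_i = G * sum_{j != i} m_j (r_j - r_i) / |r_j - r_i|^3. A schematic O(N^2) Python version (not the distributed PNB.f90 implementation) is:

```python
import numpy as np

def accelerations(pos, mass, G=1.0):
    """Newtonian gravitational accelerations for N point masses.

    a_i = G * sum_{j != i} m_j (r_j - r_i) / |r_j - r_i|^3.
    Schematic O(N^2) evaluation; it is this per-body independence that
    makes the force loop amenable to data parallelization.
    """
    n = len(pos)
    acc = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i != j:
                d = pos[j] - pos[i]
                acc[i] += G * mass[j] * d / np.linalg.norm(d) ** 3
    return acc
```

    For two unit masses separated by distance 2 (with G = 1), each body accelerates toward the other with magnitude 1/4, a quick sanity check on the kernel.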

  11. Systemic lipopolysaccharide administration impairs retrieval of context-object discrimination, but not spatial, memory: Evidence for selective disruption of specific hippocampus-dependent memory functions during acute neuroinflammation

    PubMed Central

    Czerniawski, Jennifer; Miyashita, Teiko; Lewandowski, Gail; Guzowski, John F.

    2014-01-01

    Neuroinflammation is implicated in impairments in neuronal function and cognition that arise with aging, trauma, and/or disease. Therefore, understanding the underlying basis of the effect of immune system activation on neural function could lead to therapies for treating cognitive decline. Although neuroinflammation is widely thought to preferentially impair hippocampus-dependent memory, data on the effects of cytokines on cognition are mixed. One possible explanation for these inconsistent results is that cytokines may disrupt specific neural processes underlying some forms of memory but not others. In an earlier study, we tested the effect of systemic administration of bacterial lipopolysaccharide (LPS) on retrieval of hippocampus-dependent context memory and neural circuit function in CA3 and CA1 (Czerniawski and Guzowski, 2014). Paralleling impairment in context discrimination memory, we observed changes in neural circuit function consistent with disrupted pattern separation function. In the current study we tested the hypothesis that acute neuroinflammation selectively disrupts memory retrieval in tasks requiring hippocampal pattern separation processes. Male Sprague-Dawley rats given LPS systemically prior to testing exhibited intact performance in tasks that do not require hippocampal pattern separation processes: novel object recognition and spatial memory in the water maze. By contrast, memory retrieval in a task thought to require hippocampal pattern separation, context-object discrimination, was strongly impaired in LPS-treated rats in the absence of any gross effects on exploratory activity or motivation. These data show that LPS administration does not impair memory retrieval in all hippocampus-dependent tasks, and support the hypothesis that acute neuroinflammation impairs context discrimination memory via disruption of pattern separation processes in hippocampus. PMID:25451612

  12. Systemic lipopolysaccharide administration impairs retrieval of context-object discrimination, but not spatial, memory: Evidence for selective disruption of specific hippocampus-dependent memory functions during acute neuroinflammation.

    PubMed

    Czerniawski, Jennifer; Miyashita, Teiko; Lewandowski, Gail; Guzowski, John F

    2015-02-01

    Neuroinflammation is implicated in impairments in neuronal function and cognition that arise with aging, trauma, and/or disease. Therefore, understanding the underlying basis of the effect of immune system activation on neural function could lead to therapies for treating cognitive decline. Although neuroinflammation is widely thought to preferentially impair hippocampus-dependent memory, data on the effects of cytokines on cognition are mixed. One possible explanation for these inconsistent results is that cytokines may disrupt specific neural processes underlying some forms of memory but not others. In an earlier study, we tested the effect of systemic administration of bacterial lipopolysaccharide (LPS) on retrieval of hippocampus-dependent context memory and neural circuit function in CA3 and CA1 (Czerniawski and Guzowski, 2014). Paralleling impairment in context discrimination memory, we observed changes in neural circuit function consistent with disrupted pattern separation function. In the current study we tested the hypothesis that acute neuroinflammation selectively disrupts memory retrieval in tasks requiring hippocampal pattern separation processes. Male Sprague-Dawley rats given LPS systemically prior to testing exhibited intact performance in tasks that do not require hippocampal pattern separation processes: novel object recognition and spatial memory in the water maze. By contrast, memory retrieval in a task thought to require hippocampal pattern separation, context-object discrimination, was strongly impaired in LPS-treated rats in the absence of any gross effects on exploratory activity or motivation. These data show that LPS administration does not impair memory retrieval in all hippocampus-dependent tasks, and support the hypothesis that acute neuroinflammation impairs context discrimination memory via disruption of pattern separation processes in hippocampus. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. Identification of a common neurobiological substrate for mental illness.

    PubMed

    Goodkind, Madeleine; Eickhoff, Simon B; Oathes, Desmond J; Jiang, Ying; Chang, Andrew; Jones-Hagata, Laura B; Ortega, Brissa N; Zaiko, Yevgeniya V; Roach, Erika L; Korgaonkar, Mayuresh S; Grieve, Stuart M; Galatzer-Levy, Isaac; Fox, Peter T; Etkin, Amit

    2015-04-01

    Psychiatric diagnoses are currently distinguished based on sets of specific symptoms. However, genetic and clinical analyses find similarities across a wide variety of diagnoses, suggesting that a common neurobiological substrate may exist across mental illness. To conduct a meta-analysis of structural neuroimaging studies across multiple psychiatric diagnoses, followed by parallel analyses of 3 large-scale healthy participant data sets to help interpret structural findings in the meta-analysis. PubMed was searched to identify voxel-based morphometry studies through July 2012 comparing psychiatric patients to healthy control individuals for the meta-analysis. The 3 parallel healthy participant data sets included resting-state functional magnetic resonance imaging, a database of activation foci across thousands of neuroimaging experiments, and a data set with structural imaging and cognitive task performance data. Studies were included in the meta-analysis if they reported voxel-based morphometry differences between patients with an Axis I diagnosis and control individuals in stereotactic coordinates across the whole brain, did not present predominantly in childhood, and had at least 10 studies contributing to that diagnosis (or across closely related diagnoses). The meta-analysis was conducted on peak voxel coordinates using an activation likelihood estimation approach. We tested for areas of common gray matter volume increase or decrease across Axis I diagnoses, as well as areas differing between diagnoses. Follow-up analyses on other healthy participant data sets tested connectivity related to regions arising from the meta-analysis and the relationship of gray matter volume to cognition. Based on the voxel-based morphometry meta-analysis of 193 studies comprising 15 892 individuals across 6 diverse diagnostic groups (schizophrenia, bipolar disorder, depression, addiction, obsessive-compulsive disorder, and anxiety), we found that gray matter loss converged across diagnoses in 3 regions: the dorsal anterior cingulate, right insula, and left insula. By contrast, there were few diagnosis-specific effects, distinguishing only schizophrenia and depression from other diagnoses. In the parallel follow-up analyses of the 3 independent healthy participant data sets, we found that the common gray matter loss regions formed a tightly interconnected network during tasks and at rest, and that lower gray matter in this network was associated with poor executive functioning. We identified a concordance across psychiatric diagnoses in terms of the integrity of an anterior insula/dorsal anterior cingulate-based network, which may relate to executive function deficits observed across diagnoses. This concordance provides an organizing model that emphasizes the importance of shared neural substrates across psychopathology, despite likely diverse etiologies, which is currently not an explicit component of psychiatric nosology.

  14. The development of a revised version of multi-center molecular Ornstein-Zernike equation

    NASA Astrophysics Data System (ADS)

    Kido, Kentaro; Yokogawa, Daisuke; Sato, Hirofumi

    2012-04-01

    Ornstein-Zernike (OZ)-type theory is a powerful tool for obtaining the 3-dimensional solvent distribution around a solute molecule. Recently, we proposed the multi-center molecular OZ method, which is suitable for parallel computation of 3D solvation structure. The distribution function in this method consists of two components, namely reference and residue parts. Several types of function were examined as the reference part to investigate the numerical robustness of the method. As benchmarks, the method is applied to water, benzene in aqueous solution, and a single-walled carbon nanotube in chloroform solution. The results indicate that full parallelization is achieved by utilizing the newly proposed reference functions.

  15. MPI implementation of PHOENICS: A general purpose computational fluid dynamics code

    NASA Astrophysics Data System (ADS)

    Simunovic, S.; Zacharia, T.; Baltas, N.; Spalding, D. B.

    1995-03-01

    PHOENICS is a suite of computational analysis programs used for simulation of fluid flow, heat transfer, and dynamical reaction processes. The parallel version of the solver EARTH for the Computational Fluid Dynamics (CFD) program PHOENICS has been implemented using the Message Passing Interface (MPI) standard. The MPI version of PHOENICS makes this computational tool portable to a wide range of parallel machines and enables the use of high-performance computing for large-scale computational simulations. MPI libraries are available on several parallel architectures, making the program usable across different architectures as well as on heterogeneous computer networks. The Intel Paragon NX and MPI versions of the program have been developed and tested on the massively parallel supercomputers Intel Paragon XP/S 5, XP/S 35, and Kendall Square Research, and on the multiprocessor SGI Onyx computer at Oak Ridge National Laboratory. Preliminary testing of the developed program has shown scalable performance for reasonably sized computational domains.

  16. MPI implementation of PHOENICS: A general purpose computational fluid dynamics code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simunovic, S.; Zacharia, T.; Baltas, N.

    1995-04-01

    PHOENICS is a suite of computational analysis programs used for simulation of fluid flow, heat transfer, and dynamical reaction processes. The parallel version of the solver EARTH for the Computational Fluid Dynamics (CFD) program PHOENICS has been implemented using the Message Passing Interface (MPI) standard. The MPI version of PHOENICS makes this computational tool portable to a wide range of parallel machines and enables the use of high-performance computing for large-scale computational simulations. MPI libraries are available on several parallel architectures, making the program usable across different architectures as well as on heterogeneous computer networks. The Intel Paragon NX and MPI versions of the program have been developed and tested on the massively parallel supercomputers Intel Paragon XP/S 5, XP/S 35, and Kendall Square Research, and on the multiprocessor SGI Onyx computer at Oak Ridge National Laboratory. Preliminary testing of the developed program has shown scalable performance for reasonably sized computational domains.

  17. F-Nets and Software Cabling: Deriving a Formal Model and Language for Portable Parallel Programming

    NASA Technical Reports Server (NTRS)

    DiNucci, David C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    Parallel programming is still being based upon antiquated sequence-based definitions of the terms "algorithm" and "computation", resulting in programs which are architecture dependent and difficult to design and analyze. By focusing on obstacles inherent in existing practice, a more portable model is derived here, which is then formalized into a model called Soviets which utilizes a combination of imperative and functional styles. This formalization suggests more general notions of algorithm and computation, as well as insights into the meaning of structured programming in a parallel setting. To illustrate how these principles can be applied, a very-high-level graphical architecture-independent parallel language, called Software Cabling, is described, with many of the features normally expected from today's computer languages (e.g. data abstraction, data parallelism, and object-based programming constructs).

  18. The glycogen synthase 2 gene (Gys2) displays parallel evolution between Old World and New World fruit bats.

    PubMed

    Qian, Yamin; Fang, Tao; Shen, Bin; Zhang, Shuyi

    2014-01-01

    Frugivorous and nectarivorous bats rely largely on hepatic glycogenesis and glycogenolysis for postprandial blood glucose disposal and maintenance of glucose homeostasis during short time starvation, respectively. The glycogen synthase 2 encoded by the Gys2 gene plays a critical role in liver glycogen synthesis. To test whether the Gys2 gene has undergone adaptive evolution in bats with carbohydrate-rich diets in relation to their insect-eating sister taxa, we sequenced the coding region of the Gys2 gene in a number of bat species, including three Old World fruit bats (OWFBs) (Pteropodidae) and two New World fruit bats (NWFBs) (Phyllostomidae). Our results showed that the Gys2 coding sequences are highly conserved across all bat species we examined, and no evidence of positive selection was detected in the ancestral branches leading to OWFBs and NWFBs. Our explicit convergence test showed that posterior probabilities of convergence between several branches of OWFBs, and the NWFBs were markedly higher than that of divergence. Three parallel amino acid substitutions (Q72H, K371Q, and E666D) were detected among branches of OWFBs and NWFBs. Tests for parallel evolution showed that two parallel substitutions (Q72H and E666D) were driven by natural selection, while the K371Q was more likely to be fixed randomly. Thus, our results suggested that the Gys2 gene has undergone parallel evolution on amino acid level between OWFBs and NWFBs in relation to their carbohydrate metabolism.

  19. Structural considerations for functional anti-EGFR × anti-CD3 bispecific diabodies in light of domain order and binding affinity.

    PubMed

    Asano, Ryutaro; Nagai, Keisuke; Makabe, Koki; Takahashi, Kento; Kumagai, Takashi; Kawaguchi, Hiroko; Ogata, Hiromi; Arai, Kyoko; Umetsu, Mitsuo; Kumagai, Izumi

    2018-03-02

    We previously reported a functional humanized bispecific diabody (bsDb) that targeted EGFR and CD3 (hEx3-Db) and enhancement of its cytotoxicity by rearranging the domain order in the V domain. Here, we further dissected the effect of domain order in bsDbs on their cross-linking ability and binding kinetics to elucidate general rules regarding the design of functional bsDbs. Using Ex3-Db as a model system, we first classified the four possible domain orders as anti-parallel (where both chimeric single-chain components are in variable heavy domain (VH)-variable light domain (VL) or VL-VH order) and parallel types (the two chimeric single-chain components mix VH-VL and VL-VH order). Although anti-parallel Ex3-Dbs could cross-link the soluble target antigens, their cross-linking ability between soluble targets had no correlation with their growth inhibitory effects. In contrast, the binding affinity of one of the two constructs with a parallel-arrangement V domain was particularly low, and structural modeling supported this phenomenon. Similar results were observed with E2x3-Dbs, in which the V region of the anti-EGFR antibody clone in hEx3 was replaced with that of another anti-EGFR clone. Only anti-parallel types showed affinity-dependent cancer inhibitory effects in each molecule, and E2x3-LH (both components in VL-VH order) showed the most intense anti-tumor activity in vitro and in vivo. Our results showed that, in addition to rearranging the domain order of bsDbs, increasing their binding affinity may be an ideal strategy for enhancing the cytotoxicity of anti-parallel constructs and that E2x3-LH is particularly attractive as a candidate next-generation anti-cancer drug.

  20. An interfering Go/No-go task does not affect accuracy in a Concealed Information Test.

    PubMed

    Ambach, Wolfgang; Stark, Rudolf; Peper, Martin; Vaitl, Dieter

    2008-04-01

    Following the idea that response inhibition processes play a central role in concealing information, the present study investigated the influence of a Go/No-go task, performed in parallel with the Concealed Information Test (CIT) as an interfering mental activity, on the detectability of concealed information. 40 undergraduate students participated in a mock-crime experiment and simultaneously performed a CIT and a Go/No-go task. Electrodermal activity (EDA), respiration line length (RLL), heart rate (HR), and finger pulse waveform length (FPWL) were registered. Reaction times were recorded as behavioral measures in the Go/No-go task as well as in the CIT. As a within-subject control condition, the CIT was also applied without an additional task. The parallel task did not influence the mean differences in the physiological measures between the mock-crime-related probe and the irrelevant items. This finding might be due to the applied parallel task inducing a tonic rather than a phasic mental activity, which did not influence differential responding to CIT items. No physiological evidence for an interaction between the parallel task and sub-processes of deception (e.g. inhibition) was found. Subjects' performance in the Go/No-go parallel task did not contribute to the detection of concealed information. Generalizability requires further investigation with different variations of the parallel task.

  1. Type synthesis for 4-DOF parallel press mechanism using GF set theory

    NASA Astrophysics Data System (ADS)

    He, Jun; Gao, Feng; Meng, Xiangdun; Guo, Weizhong

    2015-07-01

    Parallel mechanisms are used in large-capacity servo presses to avoid the over-constraint of traditional redundant actuation. Current research mainly focuses on performance analysis for specific parallel press mechanisms. However, the type synthesis and evaluation of parallel press mechanisms are seldom studied, especially for press mechanisms with four degrees of freedom (DOF). Here, the type synthesis of 4-DOF parallel press mechanisms is carried out based on generalized function (GF) set theory. Five design criteria for 4-DOF parallel press mechanisms are first proposed. A general procedure for the type synthesis of parallel press mechanisms is obtained, which includes number synthesis, symmetrical synthesis of constraint GF sets, decomposition of motion GF sets, and design of limbs. Nine combinations of constraint GF sets of 4-DOF parallel press mechanisms, ten combinations of GF sets of active limbs, and eleven combinations of GF sets of passive limbs are synthesized. Thirty-eight kinds of press mechanisms are presented, and different structures of kinematic limbs are then designed. Finally, the geometrical constraint complexity (GCC), kinematic pair complexity (KPC), and type complexity (TC) are proposed to evaluate the press types, and the optimal press type is obtained. General methodologies of type synthesis and evaluation for parallel press mechanisms are suggested.

  2. The source of dual-task limitations: Serial or parallel processing of multiple response selections?

    PubMed Central

    Marois, René

    2014-01-01

    Although it is generally recognized that the concurrent performance of two tasks incurs costs, the sources of these dual-task costs remain controversial. The serial bottleneck model suggests that serial postponement of task performance in dual-task conditions results from a central stage of response selection that can only process one task at a time. Cognitive-control models, by contrast, propose that multiple response selections can proceed in parallel, but that serial processing of task performance is predominantly adopted because its processing efficiency is higher than that of parallel processing. In the present study, we empirically tested this proposition by examining whether parallel processing would occur when it was more efficient and financially rewarded. The results indicated that even when parallel processing was more efficient and was incentivized by financial reward, participants still failed to process tasks in parallel. We conclude that central information processing is limited by a serial bottleneck. PMID:23864266

  3. Parallel transformation of K-SVD solar image denoising algorithm

    NASA Astrophysics Data System (ADS)

    Liang, Youwen; Tian, Yu; Li, Mei

    2017-02-01

    Images obtained by observing the sun through a large telescope always suffer from noise due to the low SNR. The K-SVD denoising algorithm can effectively remove Gaussian white noise. Training dictionaries for sparse representations is a time-consuming task, due to the large size of the data involved and the complexity of the training algorithms. In this paper, OpenMP parallel programming is used to transform the serial algorithm into a parallel version. A data-parallelism model is used to transform the algorithm; the biggest change is that multiple atoms, rather than a single atom, are updated simultaneously. The denoising effect and acceleration performance were tested after completion of the parallel algorithm. The speedup of the program is 13.563 when using 16 cores. This parallel version fully utilizes multi-core CPU hardware resources, greatly reduces running time, and is easily ported to multi-core platforms.
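    A single K-SVD atom update — the step that the parallel version performs for several atoms at once — takes the best rank-1 SVD approximation of the residual restricted to the signals that use that atom. The sketch below is a generic K-SVD update in Python, not the authors' OpenMP code; updating several atoms simultaneously, as the paper does, is an approximation, since atom updates interact through the shared residual.

```python
import numpy as np

def update_atom(Y, D, X, k):
    """One K-SVD dictionary-atom update (in place).

    Y: data matrix (signals as columns), D: dictionary (atoms as columns),
    X: sparse coefficient matrix, k: index of the atom to update.
    Restrict to signals that use atom k, form the residual without atom k,
    and replace atom k and its coefficients with the residual's best
    rank-1 approximation via SVD.
    """
    users = np.nonzero(X[k])[0]
    if users.size == 0:
        return  # atom unused; nothing to update
    E = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
    U, S, Vt = np.linalg.svd(E, full_matrices=False)
    D[:, k] = U[:, 0]
    X[k, users] = S[0] * Vt[0]
```

    With a single atom and rank-1 data, one update reconstructs the data exactly, which makes a convenient correctness check before parallelizing the loop over atoms.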

  4. Data decomposition method for parallel polygon rasterization considering load balancing

    NASA Astrophysics Data System (ADS)

    Zhou, Chen; Chen, Zhenjie; Liu, Yongxue; Li, Feixue; Cheng, Liang; Zhu, A.-xing; Li, Manchun

    2015-12-01

    It is essential to adopt parallel computing technology to rapidly rasterize massive polygon data. In parallel rasterization, it is difficult to design an effective data decomposition method. Conventional methods ignore load balancing of polygon complexity in parallel rasterization and thus fail to achieve high parallel efficiency. In this paper, a novel data decomposition method based on polygon complexity (DMPC) is proposed. First, four factors that possibly affect the rasterization efficiency were investigated. Then, a metric represented by the boundary number and raster pixel number in the minimum bounding rectangle was developed to calculate the complexity of each polygon. Using this metric, polygons were rationally allocated according to the polygon complexity, and each process could achieve balanced loads of polygon complexity. To validate the efficiency of DMPC, it was used to parallelize different polygon rasterization algorithms and tested on different datasets. Experimental results showed that DMPC could effectively parallelize polygon rasterization algorithms. Furthermore, the implemented parallel algorithms with DMPC could achieve good speedup ratios of at least 15.69 and generally outperformed conventional decomposition methods in terms of parallel efficiency and load balancing. In addition, the results showed that DMPC exhibited consistently better performance for different spatial distributions of polygons.
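
    The complexity metric and balanced allocation described above can be sketched in Python. This is a minimal illustration, not the paper's implementation: the metric here simply sums the boundary vertex count and the raster pixel count of the minimum bounding rectangle, and the allocation uses a greedy longest-task-first heuristic; the paper's exact weighting and scheduling details are not reproduced.

```python
import math

def polygon_complexity(boundary_pts, cell_size=1.0):
    """Proxy for rasterization cost: boundary vertex count plus the
    number of raster pixels covered by the minimum bounding rectangle."""
    xs = [p[0] for p in boundary_pts]
    ys = [p[1] for p in boundary_pts]
    mbr_pixels = (math.ceil((max(xs) - min(xs)) / cell_size) *
                  math.ceil((max(ys) - min(ys)) / cell_size))
    return len(boundary_pts) + mbr_pixels

def allocate(polygons, n_procs):
    """Greedy longest-processing-time allocation: assign each polygon
    (heaviest first) to the currently least-loaded process."""
    loads = [0.0] * n_procs
    bins = [[] for _ in range(n_procs)]
    for i, c in sorted(enumerate(map(polygon_complexity, polygons)),
                       key=lambda t: -t[1]):
        p = loads.index(min(loads))  # least-loaded process so far
        loads[p] += c
        bins[p].append(i)
    return bins, loads
```

Each process then rasterizes only the polygons in its bin, so the per-process workloads (not merely the polygon counts) are approximately equal.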

  5. 3D magnetospheric parallel hybrid multi-grid method applied to planet–plasma interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leclercq, L., E-mail: ludivine.leclercq@latmos.ipsl.fr; Modolo, R., E-mail: ronan.modolo@latmos.ipsl.fr; Leblanc, F.

    2016-03-15

    We present a new method to exploit multiple refinement levels within a 3D parallel hybrid model, developed to study planet–plasma interactions. This model is based on the hybrid formalism: ions are treated kinetically whereas electrons are considered as an inertia-less fluid. Generally, ions are represented by numerical particles whose size equals the volume of the cells. Particles that leave a coarse grid and subsequently enter a refined region are split into particles whose volume corresponds to the volume of the refined cells. The number of refined particles created from a coarse particle depends on the grid refinement rate. In order to conserve velocity distribution functions and to avoid calculations of average velocities, particles are not coalesced. Moreover, to ensure the constancy of particles' shape function sizes, the hybrid method is adapted to allow refined particles to move within a coarse region. Another innovation of this approach is the method developed to compute grid moments at interfaces between two refinement levels. Indeed, the hybrid method is adapted to accurately account for the special grid structure at the interfaces, avoiding any overlapping grid considerations. Some fundamental test runs were performed to validate our approach (e.g. quiet plasma flow, Alfvén wave propagation). Lastly, we also show a planetary application of the model, simulating the interaction between Jupiter's moon Ganymede and the Jovian plasma.
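
    The particle-splitting step at a coarse-to-fine interface can be illustrated with a short Python sketch. The field names and the uniform tiling of child positions are assumptions made for illustration; only the weight bookkeeping and the unchanged velocities (no coalescing, so the velocity distribution is conserved) follow the abstract.

```python
import itertools

def split_particle(p, r):
    """Split a coarse macro-particle into r**3 refined particles when it
    enters a region refined by factor r. Each child keeps the parent's
    velocity (so the velocity distribution is conserved), carries
    1/r**3 of the parent's weight, and the children tile the parent's
    cell-sized volume."""
    x, y, z = p["pos"]
    dx = p["size"] / r            # refined cell size
    w = p["weight"] / r**3        # total weight is conserved
    children = []
    for i, j, k in itertools.product(range(r), repeat=3):
        children.append({
            "pos": (x + (i + 0.5) * dx - p["size"] / 2,
                    y + (j + 0.5) * dx - p["size"] / 2,
                    z + (k + 0.5) * dx - p["size"] / 2),
            "vel": p["vel"],      # unchanged: no averaging, no coalescing
            "size": dx,
            "weight": w,
        })
    return children
```

For a refinement rate r = 2, one coarse particle yields eight refined particles whose weights sum to the parent's weight.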

  6. Porting LAMMPS to GPUs.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, William Michael; Plimpton, Steven James; Wang, Peng

    2010-03-01

    LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. LAMMPS has potentials for soft materials (biomolecules, polymers) and solid-state materials (metals, semiconductors) and coarse-grained or mesoscopic systems. It can be used to model atoms or, more generically, as a parallel particle simulator at the atomic, meso, or continuum scale. LAMMPS runs on single processors or in parallel using message-passing techniques and a spatial-decomposition of the simulation domain. The code is designed to be easy to modify or extend with new functionality.

  7. Hardware Neural Network for a Visual Inspection System

    NASA Astrophysics Data System (ADS)

    Chun, Seungwoo; Hayakawa, Yoshihiro; Nakajima, Koji

    The visual inspection of defects in products is heavily dependent on human experience and instinct, which makes it difficult to reduce production costs and to shorten the inspection time and hence the total process time. Consequently, people involved in this area desire an automatic inspection system. In this paper, we propose a hardware neural network, which is expected to provide high-speed operation for automatic inspection of products. Since neural networks can learn, this is a suitable method for self-adjustment of classification criteria. To achieve high-speed operation, we use parallel and pipelining techniques. Furthermore, we use a piecewise linear function instead of a conventional activation function in order to save hardware resources. Consequently, our proposed hardware neural network achieved 6 GCPS and 2 GCUPS, which in our test sample proved to be sufficiently fast.
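
    A common piecewise linear substitute for a sigmoid activation is the clamped ramp ("hard sigmoid"), sketched below in Python. The slope and breakpoints are illustrative assumptions, since the paper's exact function is not specified here; the point is that hardware needs only a multiply (or a shift, for a power-of-two slope) and two comparators instead of an exponential.

```python
def pw_sigmoid(x, slope=0.25):
    """Piecewise-linear approximation of a sigmoid: a ramp through
    (0, 0.5) clamped to [0, 1]."""
    y = 0.5 + slope * x
    return 0.0 if y < 0.0 else (1.0 if y > 1.0 else y)
```

Like the sigmoid it replaces, the ramp is monotone, saturates at 0 and 1, and passes through 0.5 at the origin.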

  8. Superposition rheology.

    PubMed

    Dhont, J K; Wagner, N J

    2001-02-01

    The interpretation of superposition rheology data is still a matter of debate due to a lack of understanding of the viscoelastic superposition response on a microscopic level. So far, only phenomenological approaches have been described, which do not capture the shear-induced microstructural deformation that is responsible for the viscoelastic response to the superimposed flow. Experimentally there are indications that there is a fundamental difference between the viscoelastic response to an orthogonally and a parallel superimposed shear flow. We present theoretical predictions, based on microscopic considerations, for both orthogonal and parallel viscoelastic response functions for a colloidal system of attractive particles near their gas-liquid critical point. These predictions extend to values of the stationary shear rate where the system is nonlinearly perturbed, and are based on considerations on the colloidal particle level. The difference in response to orthogonal and parallel superimposed shear flow can be understood entirely in terms of microstructural distortion, where the anisotropy of the microstructure under shear flow conditions is essential. In accordance with experimental observations, we find pronounced negative values for response functions in the case of parallel superposition for an intermediate range of frequencies, provided that the microstructure is nonlinearly perturbed by the stationary shear component. For the critical colloidal systems considered here, the Kramers-Kronig relations for the superimposed response functions are found to be valid. It is argued, however, that the Kramers-Kronig relations may be violated for systems where the stationary shear flow induces a considerable amount of new microstructure.

  9. Parallel recovery of consciousness and sleep in acute traumatic brain injury.

    PubMed

    Duclos, Catherine; Dumont, Marie; Arbour, Caroline; Paquet, Jean; Blais, Hélène; Menon, David K; De Beaumont, Louis; Bernard, Francis; Gosselin, Nadia

    2017-01-17

    To investigate whether the progressive recuperation of consciousness was associated with the reconsolidation of sleep and wake states in hospitalized patients with acute traumatic brain injury (TBI). This study comprised 30 hospitalized patients (age 29.1 ± 13.5 years) in the acute phase of moderate or severe TBI. Testing started 21.0 ± 13.7 days postinjury. Consciousness level and cognitive functioning were assessed daily with the Rancho Los Amigos scale of cognitive functioning (RLA). Sleep and wake cycle characteristics were estimated with continuous wrist actigraphy. Mixed model analyses were performed on 233 days with the RLA (fixed effect) and sleep-wake variables (random effects). Linear contrast analyses were performed in order to verify if consolidation of the sleep and wake states improved linearly with increasing RLA score. Associations were found between scores on the consciousness/cognitive functioning scale and measures of sleep-wake cycle consolidation (p < 0.001), nighttime sleep duration (p = 0.018), and nighttime fragmentation index (p < 0.001). These associations showed strong linear relationships (p < 0.01 for all), revealing that consciousness and cognition improved in parallel with sleep-wake quality. Consolidated 24-hour sleep-wake cycle occurred when patients were able to give context-appropriate, goal-directed responses. Our results showed that when the brain has not sufficiently recovered a certain level of consciousness, it is also unable to generate a 24-hour sleep-wake cycle and consolidated nighttime sleep. This study contributes to elucidating the pathophysiology of severe sleep-wake cycle alterations in the acute phase of moderate to severe TBI. © 2016 American Academy of Neurology.

  10. Bioinspired engineering study of Plantae vascules for self-healing composite structures

    PubMed Central

    Trask, R. S.; Bond, I. P.

    2010-01-01

    This paper presents the first conceptual study into creating a Plantae-inspired vascular network within a fibre-reinforced polymer composite laminate, which provides an ongoing self-healing functionality without incurring a mass penalty. Through the application of a ‘lost-wax’ technique, orthogonal hollow vascules, inspired by the ‘ray cell’ structures found in ring porous hardwoods, were successfully introduced within a carbon fibre-reinforced epoxy polymer composite laminate. The influence on fibre architecture and mechanical behaviour of single vascules (located on the laminate centreline) when aligned parallel and transverse to the local host ply was characterized experimentally using a compression-after-impact test methodology. Ultrasonic C-scanning and high-resolution micro-CT X-ray was undertaken to identify the influence of and interaction between the internal vasculature and impact damage. The results clearly show that damage morphology is influenced by vascule orientation and that a 10 J low-velocity impact damage event is sufficient to breach the vasculature; a prerequisite for any subsequent self-healing function. The residual compressive strength after a 10 J impact was found to be dependent upon vascule orientation. In general, residual compressive strength decreased to 70 per cent of undamaged strength when vasculature was aligned parallel to the local host ply and a value of 63 per cent when aligned transverse. This bioinspired engineering study has illustrated the potential that a vasculature concept has to offer in terms of providing a self-healing function with minimum mass penalty, without initiating premature failure within a composite structure. PMID:19955122

  11. Isoflavones, calcium, vitamin D and inulin improve quality of life, sexual function, body composition and metabolic parameters in menopausal women: result from a prospective, randomized, placebo-controlled, parallel-group study.

    PubMed

    Vitale, Salvatore Giovanni; Caruso, Salvatore; Rapisarda, Agnese Maria Chiara; Cianci, Stefano; Cianci, Antonio

    2018-03-01

    Menopause results in metabolic changes that contribute to an increased risk of cardiovascular diseases: increases in low density lipoprotein (LDL) and triglycerides and a decrease in high density lipoprotein (HDL), while weight gain is associated with a corresponding increase in the incidence of hypertension and diabetes. The aim of this study was to evaluate the effect of a preparation of isoflavones, calcium, vitamin D and inulin in menopausal women. We performed a prospective, randomized, placebo-controlled, parallel-group study. A total of 50 patients were randomized to receive either oral preparations of isoflavones (40 mg), calcium (500 mg), vitamin D (300 IU) and inulin (3 g) or placebo (control group). Pre- and post-treatment assessments of quality of life and sexual function were performed through the Menopause-Specific Quality of Life Questionnaire (MENQOL) and Female Sexual Function Index (FSFI); evaluations of anthropometric indicators, body composition through a bioelectrical impedance analyser, lumbar spine and proximal femur T-score, and lipid profile were performed. After 12 months, a significant reduction in MENQOL vasomotor, physical and sexual domain scores (p < 0.05) and a significant increase in all FSFI domain scores (p < 0.05) were observed in the treatment group. Laboratory tests showed a significant increase in serum levels of HDL (p < 0.05). No significant changes of lumbar spine and femur neck T-score (p > 0.05) were found in the same group. According to our data analysis, isoflavones, calcium, vitamin D and inulin may exert favourable effects on menopausal symptoms and signs.

  12. A parallel simulated annealing algorithm for standard cell placement on a hypercube computer

    NASA Technical Reports Server (NTRS)

    Jones, Mark Howard

    1987-01-01

    A parallel version of a simulated annealing algorithm is presented which is targeted to run on a hypercube computer. A strategy for mapping the cells in a two dimensional area of a chip onto processors in an n-dimensional hypercube is proposed such that both small and large distance moves can be applied. Two types of moves are allowed: cell exchanges and cell displacements. The computation of the cost function in parallel among all the processors in the hypercube is described along with a distributed data structure that needs to be stored in the hypercube to support parallel cost evaluation. A novel tree broadcasting strategy is used extensively in the algorithm for updating cell locations in the parallel environment. Studies on the performance of the algorithm on example industrial circuits show that it is faster and gives better final placement results than the uniprocessor simulated annealing algorithms. An improved uniprocessor algorithm is proposed which is based on the improved results obtained from parallelization of the simulated annealing algorithm.
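
    The annealing loop with the two move types can be sketched serially in Python. The hypercube mapping, the parallel cost evaluation, and the tree-broadcast updates are beyond this illustration, as is the actual placement cost, so `wirelength` is a caller-supplied stand-in; the temperature schedule and acceptance rule are the standard Metropolis form, not the paper's tuned parameters.

```python
import math
import random

def anneal_placement(cells, n_slots, wirelength, t0=10.0, cooling=0.95,
                     steps=2000, seed=1):
    """Serial sketch of simulated-annealing placement: cells occupy
    slots, and each step proposes either a cell exchange or a cell
    displacement to a free slot; `wirelength(placement)` is the cost."""
    rng = random.Random(seed)
    placement = {c: s for c, s in zip(cells, range(n_slots))}
    cost = wirelength(placement)
    t = t0
    while t > 0.01:
        for _ in range(steps):
            trial = dict(placement)
            a = rng.choice(cells)
            if rng.random() < 0.5:                    # cell exchange
                b = rng.choice(cells)
                trial[a], trial[b] = trial[b], trial[a]
            else:                                     # cell displacement
                free = sorted(set(range(n_slots)) - set(trial.values()))
                if not free:
                    continue
                trial[a] = rng.choice(free)
            new = wirelength(trial)
            # accept improvements always, uphill moves with Boltzmann prob.
            if new < cost or rng.random() < math.exp((cost - new) / t):
                placement, cost = trial, new
        t *= cooling
    return placement, cost
```

In the parallel version described in the abstract, batches of such moves are evaluated concurrently on hypercube nodes and accepted locations are broadcast along a spanning tree.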

  13. Parallel-vector computation for structural analysis and nonlinear unconstrained optimization problems

    NASA Technical Reports Server (NTRS)

    Nguyen, Duc T.

    1990-01-01

    Practical engineering applications can often be formulated as constrained optimization problems, for which several solution algorithms exist. One approach is to convert a constrained problem into a series of unconstrained problems; unconstrained solution algorithms can also be used as components of constrained solution algorithms. Structural optimization is an iterative process: one starts with an initial design, and a finite element structural analysis is then performed to calculate the response of the system (such as displacements, stresses, and eigenvalues). Based upon the sensitivity information on the objective and constraint functions, an optimizer such as ADS or IDESIGN can be used to find a new, improved design. For the structural analysis phase, the solver for the system of simultaneous linear equations plays a key role, since it is needed for static, eigenvalue, and dynamic analysis alike. For practical, large-scale structural analysis-synthesis applications, computational time can be excessively large. Thus, it is necessary to have a new structural analysis-synthesis code which employs new solution algorithms to exploit both the parallel and vector capabilities offered by modern, high-performance computers such as the Convex, Cray-2 and Cray-YMP. The objective of this research project is, therefore, to incorporate the latest developments in the parallel-vector equation solver PVSOLVE into the widely used finite-element production code SAP-4. Furthermore, several nonlinear unconstrained optimization subroutines have also been developed and tested under a parallel computer environment. The unconstrained optimization subroutines are not only useful in their own right, but can also be incorporated into a more popular constrained optimization code, such as ADS.

  14. Real-time processing of radar return on a parallel computer

    NASA Technical Reports Server (NTRS)

    Aalfs, David D.

    1992-01-01

    NASA is working with the FAA to demonstrate the feasibility of pulse Doppler radar as a candidate airborne sensor to detect low-altitude windshears. The need to provide the pilot with timely information about possible hazards has motivated a demand for real-time processing of the radar return. Investigated here is parallel processing as a means of accommodating the high data rates required. A PC-based parallel computer, called the transputer, is used to investigate issues in real-time concurrent processing of radar signals. A transputer network is made up of an array of single-instruction-stream processors that can be networked in a variety of ways. They are easily reconfigured, and software development is largely independent of the particular network topology. The performance of the transputer is evaluated in light of the computational requirements. A number of algorithms have been implemented on the transputers in OCCAM, a language specially designed for parallel processing. These include signal processing algorithms such as the fast Fourier transform (FFT), pulse-pair, and autoregressive modeling, as well as routing software to support concurrency. The most computationally intensive task is estimating the spectrum. Two approaches have been taken, the first and most conventional of which is to use the FFT. By using table look-ups for the basis function and other optimizing techniques, an algorithm has been developed that is sufficient for real time. The other approach is to model the signal as an autoregressive process and estimate the spectrum from the model coefficients. This technique is attractive because it does not suffer from the spectral leakage problem inherent in the FFT. Benchmark tests indicate that autoregressive modeling is feasible in real time.
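
    The table look-up idea for the basis functions can be sketched in Python: the twiddle factors e^{-2πi k/n} are precomputed once, so the transform's inner loop performs no trigonometric calls. For brevity this shows a direct DFT rather than a radix-2 FFT; the OCCAM/transputer specifics are not reproduced.

```python
import cmath

def make_twiddle_table(n):
    """Precompute e^{-2*pi*i*k/n} once; real-time inner loops then use
    table look-ups instead of calling trig functions per sample."""
    return [cmath.exp(-2j * cmath.pi * k / n) for k in range(n)]

def dft(x, table=None):
    """Direct DFT using the precomputed table (O(n^2); a radix-2 FFT
    would reuse the same table at O(n log n))."""
    n = len(x)
    w = table if table is not None else make_twiddle_table(n)
    # w[(f*t) % n] == e^{-2*pi*i*f*t/n}, fetched by index, not computed
    return [sum(x[t] * w[(f * t) % n] for t in range(n)) for f in range(n)]
```

Building the table once and reusing it across every pulse return is what makes the per-sample cost a handful of multiply-adds.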

  15. Multichannel quench-flow microreactor chip for parallel reaction monitoring.

    PubMed

    Bula, Wojciech P; Verboom, Willem; Reinhoudt, David N; Gardeniers, Han J G E

    2007-12-01

    This paper describes a multichannel silicon-glass microreactor which has been utilized to investigate the kinetics of a Knoevenagel condensation reaction under different reaction conditions. The reaction is performed on the chip in four parallel channels under identical conditions but with different residence times. A special topology of the reaction coils overcomes the common problem arising from the difference in pressure drop of parallel channels having different length. The parallelization of reaction coils combined with chemical quenching at specific locations results in a considerable reduction in experimental effort and cost. The system was tested and showed good reproducibility in flow properties and reaction kinetic data generation.

  16. Parallel processing via a dual olfactory pathway in the honeybee.

    PubMed

    Brill, Martin F; Rosenbaum, Tobias; Reus, Isabelle; Kleineidam, Christoph J; Nawrot, Martin P; Rössler, Wolfgang

    2013-02-06

    In their natural environment, animals face complex and highly dynamic olfactory input. Thus vertebrates as well as invertebrates require fast and reliable processing of olfactory information. Parallel processing has been shown to improve processing speed and power in other sensory systems and is characterized by extraction of different stimulus parameters along parallel sensory information streams. Honeybees possess an elaborate olfactory system with unique neuronal architecture: a dual olfactory pathway comprising a medial projection-neuron (PN) antennal lobe (AL) protocerebral output tract (m-APT) and a lateral PN AL output tract (l-APT) connecting the olfactory lobes with higher-order brain centers. We asked whether this neuronal architecture serves parallel processing and employed a novel technique for simultaneous multiunit recordings from both tracts. The results revealed response profiles from a high number of PNs of both tracts to floral, pheromonal, and biologically relevant odor mixtures tested over multiple trials. PNs from both tracts responded to all tested odors, but with different characteristics indicating parallel processing of similar odors. Both PN tracts were activated by widely overlapping response profiles, which is a requirement for parallel processing. The l-APT PNs had broad response profiles suggesting generalized coding properties, whereas the responses of m-APT PNs were comparatively weaker and less frequent, indicating higher odor specificity. Comparison of response latencies within and across tracts revealed odor-dependent latencies. We suggest that parallel processing via the honeybee dual olfactory pathway provides enhanced odor processing capabilities serving sophisticated odor perception and olfactory demands associated with a complex olfactory world of this social insect.

  17. Analysis of pressure distortion testing

    NASA Technical Reports Server (NTRS)

    Koch, K. E.; Rees, R. L.

    1976-01-01

    The development of a distortion methodology, method D, was documented, and its application to steady state and unsteady data was demonstrated. Three methodologies based upon DIDENT, a NASA-LeRC distortion methodology based upon the parallel compressor model, were investigated by applying them to a set of steady state data. The best formulation was then applied to an independent data set. The good correlation achieved with this data set showed that method E, one of the above methodologies, is a viable concept. Unsteady data were analyzed by using the method E methodology. This analysis pointed out that the method E sensitivities are functions of pressure defect level as well as corrected speed and pattern.

  18. Research Studies on Advanced Optical Module/Head Designs for Optical Data Storage

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Preprints are presented from the recent 1992 Optical Data Storage meeting in San Jose. The papers are divided into the following topical areas: Magneto-optical media (Modeling/design and fabrication/characterization/testing); Optical heads (holographic optical elements); and Optical heads (integrated optics). Some representative titles are as follow: Diffraction analysis and evaluation of several focus and track error detection schemes for magneto-optical disk systems; Proposal for massively parallel data storage system; Transfer function characteristics of super resolving systems; Modeling and measurement of a micro-optic beam deflector; Oxidation processes in magneto-optic and related materials; and A modal analysis of lamellar diffraction gratings in conical mountings.

  19. Development of programs for computing characteristics of ultraviolet radiation

    NASA Technical Reports Server (NTRS)

    Dave, J. V.

    1972-01-01

    Efficient programs were developed for computing all four characteristics of the radiation scattered by a plane-parallel, turbid, terrestrial atmospheric model. They were written in FORTRAN 4 and tested on IBM/360 computers with a 2314 direct-access storage facility. The storage requirement varies between 200K and 750K bytes depending upon the task. The scattering phase matrix (or function) is expanded in a Fourier series whose number of terms depends upon the zenith angles of the incident and scattered radiation, as well as on the nature of the aerosols. A Gauss-Seidel procedure is used for obtaining the numerical solution of the transfer equation.
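
    The Gauss-Seidel iteration named above can be illustrated on a generic linear system A x = b; the actual radiative-transfer discretization is not reproduced here. Each unknown is updated in place using the newest values of the others, which is what distinguishes Gauss-Seidel from a Jacobi sweep.

```python
def gauss_seidel(A, b, iters=100):
    """Solve A x = b by Gauss-Seidel sweeps: update each unknown in
    place from the newest values of the others (converges for
    diagonally dominant A)."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x
```

In the radiative-transfer setting the "unknowns" are the source-function values at the atmospheric levels, swept repeatedly until the scattered field converges.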

  20. Biology and therapy of fibromyalgia. Evidence-based biomarkers for fibromyalgia syndrome

    PubMed Central

    Dadabhoy, Dina; Crofford, Leslie J; Spaeth, Michael; Russell, I Jon; Clauw, Daniel J

    2008-01-01

    Researchers studying fibromyalgia strive to identify objective, measurable biomarkers that may identify susceptible individuals, may facilitate diagnosis, or that parallel activity of the disease. Candidate objective measures range from sophisticated functional neuroimaging to office-ready measures of the pressure pain threshold. A systematic literature review was completed to assess highly investigated, objective measures used in fibromyalgia studies. To date, only experimental pain testing has been shown to coincide with improvements in clinical status in a longitudinal study. Concerted efforts to systematically evaluate additional objective measures in research trials will be vital for ongoing progress in outcome research and translation into clinical practice. PMID:18768089

  1. A landmark recognition and tracking experiment for flight on the Shuttle/Advanced Technology Laboratory (ATL)

    NASA Technical Reports Server (NTRS)

    Welch, J. D.

    1975-01-01

    The preliminary design of an experiment for landmark recognition and tracking from the Shuttle/Advanced Technology Laboratory is described. It makes use of parallel coherent optical processing to perform correlation tests between landmarks observed passively with a telescope and previously made holographic matched filters. The experimental equipment including the optics, the low power laser, the random access file of matched filters and the electro-optical readout device are described. A real time optically excited liquid crystal device is recommended for performing the input non-coherent optical to coherent optical interface function. A development program leading to a flight experiment in 1981 is outlined.

  2. Assessment of tear osmolarity and other dry eye parameters in post-LASIK eyes.

    PubMed

    Hassan, Ziad; Szalai, Eszter; Berta, Andras; Modis, Laszlo; Nemeth, Gabor

    2013-07-01

    To assess the tear osmolarity using the TearLab device after laser in situ keratomileusis (LASIK) and to compare the values with those obtained by traditional tear film tests before and after the procedure. Thirty eyes of 15 refractive surgery candidates (5 men and 10 women of mean age: 30.55 ± 11.79 years) were examined. Using a special questionnaire (Ocular Surface Disease Index), subjective dry eye complaints were evaluated, and then, the tear osmolarity was measured with the TearLab system (TearLab Corporation) and conventional dry eye tests were carried out. Examinations were performed preoperatively and at 1, 30, and 60 days after the surgery. The mean value of tear osmolarity was 303.62 ± 12.29 mOsm/L before the surgery and 303.58 ± 20.14 mOsm/L at 60 days after the treatment (P = 0.69). Mean lid parallel conjunctival folds value was 0.68 ± 0.68 before the procedure and 0.58 ± 0.65 subsequent to surgery (P = 0.25). Meibomian gland dysfunction was not detected. No significant deviation was observed in the values of Schirmer test, corneal staining, tear break-up time, and lid parallel conjunctival folds when compared with postoperatively obtained values during the follow-up period (P > 0.05). During LASIK flap creation, intact corneal innervation is damaged, and the ocular surface lacrimal functional unit can be impaired. In our study, no abnormal dry eye test results were observed before or after the procedure. Based on our results, LASIK treatment is safe for dry eye involving the administration of adequate artificial tears for a minimum of 3 months.

  3. Corral framework: Trustworthy and fully functional data intensive parallel astronomical pipelines

    NASA Astrophysics Data System (ADS)

    Cabral, J. B.; Sánchez, B.; Beroiz, M.; Domínguez, M.; Lares, M.; Gurovich, S.; Granitto, P.

    2017-07-01

    Data processing pipelines represent an important slice of the astronomical software library, comprising chains of processes that transform raw data into valuable information via data reduction and analysis. In this work we present Corral, a Python framework for astronomical pipeline generation. Corral features a Model-View-Controller design pattern on top of an SQL relational database, capable of handling custom data models, processing stages, and communication alerts; it also provides automatic quality and structural metrics based on unit testing. The Model-View-Controller pattern provides separation between the user logic and the data models, delivering at the same time multiprocessing and distributed computing capabilities. Corral represents an improvement over commonly found data processing pipelines in astronomy, since the design pattern frees the programmer from dealing with processing flow and parallelization issues, allowing them to focus on the specific algorithms needed for the successive data transformations, while at the same time providing a broad measure of quality over the created pipeline. Corral and working examples of pipelines that use it are available to the community at https://github.com/toros-astro.

  4. Multi-Sensor Data Fusion Identification for Shearer Cutting Conditions Based on Parallel Quasi-Newton Neural Networks and the Dempster-Shafer Theory

    PubMed Central

    Si, Lei; Wang, Zhongbin; Liu, Xinhua; Tan, Chao; Xu, Jing; Zheng, Kehong

    2015-01-01

    In order to efficiently and accurately identify the cutting condition of a shearer, this paper proposes an intelligent multi-sensor data fusion identification method using the parallel quasi-Newton neural network (PQN-NN) and the Dempster-Shafer (DS) theory. The vibration acceleration signals and current signal of six cutting conditions were collected from a self-designed experimental system, and special state features were extracted from the intrinsic mode functions (IMFs) obtained by ensemble empirical mode decomposition (EEMD). In the experiment, three classifiers were trained and tested on the selected features of the measured data, and DS theory was used to combine the identification results of the three single classifiers. Comparisons with other methods were also carried out. The experimental results indicate that the proposed method achieves higher detection accuracy and credibility than the competing algorithms. Finally, an industrial application example in a fully mechanized coal mining face was presented to demonstrate the effectiveness of the proposed system. PMID:26580620
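
    The DS-theory fusion step can be sketched with Dempster's rule of combination, which merges the basic probability assignments of two classifiers and renormalizes away the conflicting mass. The condition labels below are hypothetical stand-ins for the shearer cutting conditions; the rule itself is the standard one.

```python
def dempster_combine(m1, m2):
    """Dempster's rule: fuse two basic-probability assignments (dicts
    mapping frozenset hypotheses to mass), renormalizing by 1 - K where
    K is the mass assigned to empty intersections (the conflict)."""
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {h: w / (1.0 - conflict) for h, w in combined.items()}
```

Fusing a third classifier is just another application of the same rule, since the combination is associative.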

  5. Continuous-time ΣΔ ADC with implicit variable gain amplifier for CMOS image sensor.

    PubMed

    Tang, Fang; Bermak, Amine; Abbes, Amira; Benammar, Mohieddine Amor

    2014-01-01

    This paper presents a column-parallel continuous-time sigma-delta (CTSD) ADC for mega-pixel-resolution CMOS image sensors (CIS). The sigma-delta modulator is implemented with a 2nd-order resistor/capacitor-based loop filter. The first integrator uses a conventional operational transconductance amplifier (OTA) for the sake of high power-noise rejection. The second integrator is realized with a single-ended inverter-based amplifier instead of a standard OTA; as a result, the power consumption is reduced without sacrificing the noise performance. Moreover, the variable gain amplifier in the traditional column-parallel read-out circuit is merged into the front-end of the CTSD modulator. By programming the input resistance, the amplitude range of the input current can be tuned over 8 scales, which is equivalent to a traditional 2-bit preamplification function without consuming extra power or chip area. The test chip prototype is fabricated using a 0.18 μm CMOS process, and measurements show an ADC power consumption lower than 63.5 μW under a 1.4 V power supply and 50 MHz clock frequency.

  6. IDAHO NATIONAL LABORATORY TRANSPORTATION TASK REPORT ON ACHIEVING MODERATOR EXCLUSION AND SUPPORTING STANDARDIZED TRANSPORTATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D.K. Morton

    2011-09-01

    Following the defunding of the Yucca Mountain Project, it is reasonable to assume that commercial used fuel will remain in storage for the foreseeable future. This report proposes supplementing the ongoing research and development work related to potential degradation of used fuel, baskets, poisons, and storage canisters during an extended period of storage with a parallel path. This parallel path can assure criticality safety during transportation by implementing a concept that achieves moderator exclusion (no in-leakage of moderator into the used fuel cavity). Using updated risk assessment insights for additional technical justification and relying upon a component inside of the transportation cask that provides a watertight function, a strong argument can be made that moderator intrusion is not credible and should not be a required assumption for criticality evaluations during normal conditions of transportation. A demonstration testing program supporting a detailed analytical effort, as well as updated risk assessment insights, can provide the basis for moderator exclusion during hypothetical accident conditions. This report also discusses how this engineered concept can support the goal of standardized transportation.

  7. PROTO-PLASM: parallel language for adaptive and scalable modelling of biosystems.

    PubMed

    Bajaj, Chandrajit; DiCarlo, Antonio; Paoluzzi, Alberto

    2008-09-13

    This paper discusses the design goals and the first developments of PROTO-PLASM, a novel computational environment to produce libraries of executable, combinable and customizable computer models of natural and synthetic biosystems, aiming to provide a supporting framework for predictive understanding of structure and behaviour through multiscale geometric modelling and multiphysics simulations. Admittedly, the PROTO-PLASM platform is still in its infancy. Its computational framework--language, model library, integrated development environment and parallel engine--intends to provide patient-specific computational modelling and simulation of organs and biosystem, exploiting novel functionalities resulting from the symbolic combination of parametrized models of parts at various scales. PROTO-PLASM may define the model equations, but it is currently focused on the symbolic description of model geometry and on the parallel support of simulations. Conversely, CellML and SBML could be viewed as defining the behavioural functions (the model equations) to be used within a PROTO-PLASM program. Here we exemplify the basic functionalities of PROTO-PLASM, by constructing a schematic heart model. We also discuss multiscale issues with reference to the geometric and physical modelling of neuromuscular junctions.

  8. Proto-Plasm: parallel language for adaptive and scalable modelling of biosystems

    PubMed Central

    Bajaj, Chandrajit; DiCarlo, Antonio; Paoluzzi, Alberto

    2008-01-01

    This paper discusses the design goals and the first developments of Proto-Plasm, a novel computational environment to produce libraries of executable, combinable and customizable computer models of natural and synthetic biosystems, aiming to provide a supporting framework for predictive understanding of structure and behaviour through multiscale geometric modelling and multiphysics simulations. Admittedly, the Proto-Plasm platform is still in its infancy. Its computational framework—language, model library, integrated development environment and parallel engine—intends to provide patient-specific computational modelling and simulation of organs and biosystem, exploiting novel functionalities resulting from the symbolic combination of parametrized models of parts at various scales. Proto-Plasm may define the model equations, but it is currently focused on the symbolic description of model geometry and on the parallel support of simulations. Conversely, CellML and SBML could be viewed as defining the behavioural functions (the model equations) to be used within a Proto-Plasm program. Here we exemplify the basic functionalities of Proto-Plasm, by constructing a schematic heart model. We also discuss multiscale issues with reference to the geometric and physical modelling of neuromuscular junctions. PMID:18559320

  9. Effectiveness of virtual reality using Wii gaming technology in stroke rehabilitation: a pilot randomized clinical trial and proof of principle.

    PubMed

    Saposnik, Gustavo; Teasell, Robert; Mamdani, Muhammad; Hall, Judith; McIlroy, William; Cheung, Donna; Thorpe, Kevin E; Cohen, Leonardo G; Bayley, Mark

    2010-07-01

    Hemiparesis resulting in functional limitation of an upper extremity is common among stroke survivors. Although existing evidence suggests that increasing intensity of stroke rehabilitation therapy results in better motor recovery, limited evidence is available on the efficacy of virtual reality for stroke rehabilitation. In this pilot, randomized, single-blinded clinical trial with 2 parallel groups involving patients within 2 months of stroke, we compared the feasibility, safety, and efficacy of virtual reality using the Nintendo Wii gaming system (VRWii) versus recreational therapy (playing cards, bingo, or "Jenga") among those receiving standard rehabilitation to evaluate arm motor improvement. The primary feasibility outcome was the total time receiving the intervention. The primary safety outcome was the proportion of patients experiencing intervention-related adverse events during the study period. Efficacy, a secondary outcome measure, was evaluated with the Wolf Motor Function Test, Box and Block Test, and Stroke Impact Scale at 4 weeks after intervention. Overall, 22 of 110 (20%) screened patients were randomized. The mean age (range) was 61.3 (41 to 83) years. Two participants dropped out after a training session. The interventions were successfully delivered in 9 of 10 participants in the VRWii and 8 of 10 in the recreational therapy arm. The mean total session time was 388 minutes in the recreational therapy group compared with 364 minutes in the VRWii group (P=0.75). There were no serious adverse events in either group. Relative to the recreational therapy group, participants in the VRWii arm had a significant improvement in mean motor function of 7 seconds (Wolf Motor Function Test, 7.4 seconds; 95% CI, -14.5, -0.2) after adjustment for age, baseline functional status (Wolf Motor Function Test), and stroke severity. VRWii gaming technology represents a safe, feasible, and potentially effective alternative to facilitate rehabilitation therapy and promote motor recovery after stroke.

  10. Effectiveness of Virtual Reality Using Wii Gaming Technology in Stroke Rehabilitation

    PubMed Central

    Saposnik, Gustavo; Teasell, Robert; Mamdani, Muhammad; Hall, Judith; McIlroy, William; Cheung, Donna; Thorpe, Kevin E.; Cohen, Leonardo G.; Bayley, Mark

    2016-01-01

    Background and Purpose Hemiparesis resulting in functional limitation of an upper extremity is common among stroke survivors. Although existing evidence suggests that increasing intensity of stroke rehabilitation therapy results in better motor recovery, limited evidence is available on the efficacy of virtual reality for stroke rehabilitation. Methods In this pilot, randomized, single-blinded clinical trial with 2 parallel groups involving patients within 2 months of stroke, we compared the feasibility, safety, and efficacy of virtual reality using the Nintendo Wii gaming system (VRWii) versus recreational therapy (playing cards, bingo, or “Jenga”) among those receiving standard rehabilitation to evaluate arm motor improvement. The primary feasibility outcome was the total time receiving the intervention. The primary safety outcome was the proportion of patients experiencing intervention-related adverse events during the study period. Efficacy, a secondary outcome measure, was evaluated with the Wolf Motor Function Test, Box and Block Test, and Stroke Impact Scale at 4 weeks after intervention. Results Overall, 22 of 110 (20%) screened patients were randomized. The mean age (range) was 61.3 (41 to 83) years. Two participants dropped out after a training session. The interventions were successfully delivered in 9 of 10 participants in the VRWii and 8 of 10 in the recreational therapy arm. The mean total session time was 388 minutes in the recreational therapy group compared with 364 minutes in the VRWii group (P=0.75). There were no serious adverse events in either group. Relative to the recreational therapy group, participants in the VRWii arm had a significant improvement in mean motor function of 7 seconds (Wolf Motor Function Test, 7.4 seconds; 95% CI, −14.5, −0.2) after adjustment for age, baseline functional status (Wolf Motor Function Test), and stroke severity. Conclusions VRWii gaming technology represents a safe, feasible, and potentially effective alternative to facilitate rehabilitation therapy and promote motor recovery after stroke. PMID:20508185

  11. Early application of tail nerve electrical stimulation-induced walking training promotes locomotor recovery in rats with spinal cord injury.

    PubMed

    Zhang, S-X; Huang, F; Gates, M; Shen, X; Holmberg, E G

    2016-11-01

    This is a randomized controlled prospective trial with two parallel groups. The objective of this study was to determine whether early application of tail nerve electrical stimulation (TANES)-induced walking training can improve locomotor function. The study was conducted at the SCS Research Center in Colorado, USA. A contusion injury to spinal cord T10 was produced using the New York University impactor device with a 25-mm height setting in adult female Long-Evans rats. Injured rats were randomly divided into two groups (n=12 per group). One group was subjected to TANES-induced walking training beginning 2 weeks post injury, and the other group, as control, received no TANES-induced walking training. Restoration of behavior and conduction was assessed using the Basso, Beattie and Bresnahan open-field rating scale, the horizontal ladder rung walking test, and an electrophysiological test (Hoffmann reflex). Early application of TANES-induced walking training significantly improved the recovery of locomotor function and benefited the restoration of the Hoffmann reflex. TANES-induced walking training is a useful method to promote locomotor recovery in rats with spinal cord injury.

  12. Assessing Visuospatial Skills in Parkinson's: Comparison of Neuropsychological Assessment Battery Visual Discrimination to the Judgment of Line Orientation.

    PubMed

    Renfroe, Jenna B; Turner, Travis H; Hinson, Vanessa K

    2017-02-01

    The Judgment of Line Orientation (JOLO) test is widely used in assessing visuospatial deficits in Parkinson's disease (PD). The Neuropsychological Assessment Battery (NAB) offers the Visual Discrimination test, with age and education correction, parallel forms, and a co-normed standardization sample for comparisons within and between domains. However, NAB Visual Discrimination has not been validated in PD and may not measure the same construct as JOLO. A heterogeneous sample of 47 PD patients completed the JOLO and NAB Visual Discrimination within a broader neuropsychological evaluation. Pearson correlations assessed relationships between JOLO and NAB Visual Discrimination performances. Raw and demographically corrected scores from JOLO and Visual Discrimination were only weakly correlated. The NAB Visual Discrimination subtest was moderately correlated with overall cognitive functioning, whereas the JOLO was not. Despite apparent virtues, the results do not support NAB Visual Discrimination as an alternative to JOLO in assessing visuospatial functioning in PD. © The Author 2016. Published by Oxford University Press. All rights reserved.

  13. Simulated parallel annealing within a neighborhood for optimization of biomechanical systems.

    PubMed

    Higginson, J S; Neptune, R R; Anderson, F C

    2005-09-01

    Optimization problems for biomechanical systems have become extremely complex. Simulated annealing (SA) algorithms have performed well in a variety of test problems and biomechanical applications; however, despite advances in computer speed, convergence to optimal solutions for systems of even moderate complexity has remained prohibitive. The objective of this study was to develop a portable parallel version of a SA algorithm for solving optimization problems in biomechanics. The algorithm for simulated parallel annealing within a neighborhood (SPAN) was designed to minimize interprocessor communication time and closely retain the heuristics of the serial SA algorithm. The computational speed of the SPAN algorithm scaled linearly with the number of processors on different computer platforms for a simple quadratic test problem and for a more complex forward dynamic simulation of human pedaling.
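    The serial algorithm that SPAN parallelizes can be sketched as below. This is a generic single-chain simulated annealing on a one-dimensional quadratic (a stand-in for the "simple quadratic test problem" the abstract mentions), not the SPAN algorithm itself; SPAN's contribution is evaluating candidate moves within a neighborhood concurrently across processors, which this serial sketch omits. All parameter values are illustrative.

    ```python
    import math
    import random

    def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.95,
                            iters=2000, seed=0):
        """Minimal serial simulated annealing on a 1-D objective.
        Worse moves are accepted with probability exp(-delta/T);
        the temperature T decays geometrically each iteration."""
        rng = random.Random(seed)
        x, fx = x0, f(x0)
        best_x, best_f = x, fx
        t = t0
        for _ in range(iters):
            cand = x + rng.uniform(-step, step)   # random neighbor
            fc = f(cand)
            if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
                x, fx = cand, fc                  # accept the move
                if fx < best_f:
                    best_x, best_f = x, fx        # track the best seen
            t *= cooling                          # cool the temperature
        return best_x, best_f

    # Quadratic test problem with known minimum at x = 3.
    x_opt, f_opt = simulated_annealing(lambda x: (x - 3.0) ** 2, x0=10.0)
    ```

    In a SPAN-style parallelization, each iteration's candidate evaluations would be farmed out to worker processors, which is why the abstract's reported speedup scales with processor count while the acceptance heuristics stay close to the serial version.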

  14. A convenient and accurate parallel Input/Output USB device for E-Prime.

    PubMed

    Canto, Rosario; Bufalari, Ilaria; D'Ausilio, Alessandro

    2011-03-01

    Psychological and neurophysiological experiments require the accurate control of timing and synchrony for Input/Output signals. For instance, a typical Event-Related Potential (ERP) study requires an extremely accurate synchronization of stimulus delivery with recordings. This is typically done via computer software such as E-Prime, and fast communications are typically assured by the Parallel Port (PP). However, the PP is an old and disappearing technology that, for example, is no longer available on portable computers. Here we propose a convenient USB device enabling parallel I/O capabilities. We tested this device against the PP on both a desktop and a laptop machine in different stress tests. Our data demonstrate the accuracy of our system, which suggests that it may be a good substitute for the PP with E-Prime.

  15. A message passing kernel for the hypercluster parallel processing test bed

    NASA Technical Reports Server (NTRS)

    Blech, Richard A.; Quealy, Angela; Cole, Gary L.

    1989-01-01

    A Message-Passing Kernel (MPK) for the Hypercluster parallel-processing test bed is described. The Hypercluster is being developed at the NASA Lewis Research Center to support investigations of parallel algorithms and architectures for computational fluid and structural mechanics applications. The Hypercluster resembles the hypercube architecture except that each node consists of multiple processors communicating through shared memory. The MPK efficiently routes information through the Hypercluster, using a message-passing protocol when necessary and faster shared-memory communication whenever possible. The MPK also interfaces all of the processors with the Hypercluster operating system (HYCLOPS), which runs on a Front-End Processor (FEP). This approach distributes many of the I/O tasks to the Hypercluster processors and eliminates the need for a separate I/O support program on the FEP.
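    The MPK's routing decision, message passing between nodes but direct shared-memory access within a node, can be illustrated with the toy class below. This is a conceptual sketch only: the class name, the dictionary standing in for shared memory, and the single queue standing in for the interconnect are all invented for illustration and bear no relation to the actual Hypercluster implementation.

    ```python
    import queue

    class HybridChannel:
        """Toy illustration of hybrid routing: use a fast shared-memory
        mailbox when sender and receiver share a node, and a message
        queue (standing in for the network) otherwise."""

        def __init__(self, node_of):
            self.node_of = node_of        # processor id -> node id
            self.shared = {}              # same-node "shared memory" mailbox
            self.network = queue.Queue()  # inter-node message passing

        def send(self, src, dst, payload):
            if self.node_of[src] == self.node_of[dst]:
                self.shared[dst] = payload           # shared-memory path
            else:
                self.network.put((dst, payload))     # message-passing path

        def recv(self, dst):
            if dst in self.shared:
                return self.shared.pop(dst)          # fast local read
            d, payload = self.network.get()          # blocking network read
            assert d == dst, "sketch assumes one pending recipient"
            return payload

    # Processors 0 and 1 live on node 0; processor 2 lives on node 1.
    ch = HybridChannel({0: 0, 1: 0, 2: 1})
    ch.send(0, 1, "intra-node")  # routed via shared memory
    ch.send(0, 2, "inter-node")  # routed via message passing
    ```

    The point of the sketch is the dispatch in `send`: the kernel picks the cheapest transport available for each source/destination pair, which is the property the abstract credits for the MPK's efficiency.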

  16. Parallelization of Program to Optimize Simulated Trajectories (POST3D)

    NASA Technical Reports Server (NTRS)

    Hammond, Dana P.; Korte, John J. (Technical Monitor)

    2001-01-01

    This paper describes the parallelization of the Program to Optimize Simulated Trajectories (POST3D). POST3D uses a gradient-based optimization algorithm that reaches an optimum design point by moving from one design point to the next. The gradient calculations required to complete the optimization process dominate the computational time and have been parallelized using a Single Program Multiple Data (SPMD) approach on a distributed-memory NUMA (non-uniform memory access) architecture. An Origin2000 was used for the tests presented.
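    The parallelization pattern described above, distributing the per-coordinate gradient evaluations across workers, can be sketched as follows. The objective function is a made-up quadratic stand-in (POST3D's real objective is a trajectory simulation), and a thread pool stands in for the SPMD processes of the paper.

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def objective(x):
        """Hypothetical design objective (stand-in for a trajectory cost).
        Minimum at x = (0, 1, 2, ...)."""
        return sum((xi - i) ** 2 for i, xi in enumerate(x))

    def partial_derivative(f, x, i, h=1e-6):
        """Central-difference partial derivative in coordinate i."""
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        return (f(xp) - f(xm)) / (2 * h)

    def parallel_gradient(f, x, workers=4):
        """Each worker evaluates one finite-difference component,
        mirroring the SPMD distribution of gradient calculations."""
        with ThreadPoolExecutor(max_workers=workers) as pool:
            return list(pool.map(lambda i: partial_derivative(f, x, i),
                                 range(len(x))))

    grad = parallel_gradient(objective, [0.0, 0.0, 0.0])
    ```

    Because each partial derivative needs two independent objective evaluations, the gradient step is embarrassingly parallel, which is why it is the natural target for the speedups reported in the paper.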

  17. Operation of high power converters in parallel

    NASA Technical Reports Server (NTRS)

    Decker, D. K.; Inouye, L. Y.

    1993-01-01

    High power converters that are used in space power subsystems are limited in power handling capability due to component and thermal limitations. For applications, such as Space Station Freedom, where multi-kilowatts of power must be delivered to user loads, parallel operation of converters becomes an attractive option when considering overall power subsystem topologies. TRW developed three different unequal power sharing approaches for parallel operation of converters. These approaches, known as droop, master-slave, and proportional adjustment, are discussed and test results are presented.

  18. Experiment E89-044 on quasi-elastic 3He(e,e'p) scattering at Jefferson Laboratory: analysis of the two-body breakup cross sections in parallel kinematics (thesis in French)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Penel-Nottaris, Emilie

    2004-07-01

    The Jefferson Lab Hall A experiment E89-044 measured 3He(e,e'p) reaction cross sections. The separation of the longitudinal and transverse response functions for the two-body breakup reaction in parallel kinematics allows study of the electromagnetic properties of the bound proton in the 3He nucleus and of the nuclear mechanisms involved beyond the impulse approximation. Preliminary cross sections show some disagreement with theoretical predictions for forward-angle kinematics around 0 MeV/c missing momentum, and sensitivity to final-state interactions and the 3He wave function at missing momenta of 300 MeV/c.

  19. Histological assessment of the triangular fibrocartilage complex.

    PubMed

    Semisch, M; Hagert, E; Garcia-Elias, M; Lluch, A; Rein, S

    2016-06-01

    The morphological structure of the seven components of the triangular fibrocartilage complex was assessed microscopically in 11 cadaver wrists of elderly people, after staining with Hematoxylin-Eosin and Elastica van Gieson. The articular disc consisted of tightly interlaced fibrocartilage without blood vessels except in its ulnar part. The volar and dorsal radioulnar ligaments showed densely packed parallel collagen bundles. The subsheath of the extensor carpi ulnaris muscle and the ulnotriquetral and ulnolunate ligaments showed mainly mixed tight and loose parallel tissue. The ulnolunate ligament contained tighter parallel collagen bundles and clearly fewer elastic fibres than the ulnotriquetral ligament. The ulnocarpal meniscoid had an irregular morphological composition in which loose connective tissue predominated. The structure of the articular disc indicates a buffering function. The tight structure of the radioulnar and ulnolunate ligaments reflects a central stabilizing role, whereas the ulnotriquetral ligament and ulnocarpal meniscoid have less stabilizing functions. © The Author(s) 2015.

  20. Fault detection for hydraulic pump based on chaotic parallel RBF network

    NASA Astrophysics Data System (ADS)

    Lu, Chen; Ma, Ning; Wang, Zhipeng

    2011-12-01

    In this article, a parallel radial basis function network in conjunction with chaos theory (CPRBF network) is presented and applied to practical fault detection for the hydraulic pump, a critical component in aircraft. The CPRBF network consists of a number of radial basis function (RBF) subnets connected in parallel. The number of input nodes for each RBF subnet is determined by a different embedding dimension based on chaotic phase-space reconstruction. The output of the CPRBF network is a weighted sum of all RBF subnets. The network was first trained using a dataset collected in the normal, fault-free state; a residual error generator was then designed so that failures can be detected by analyzing the residual error. Finally, two case studies are presented that compare the proposed CPRBF network with traditional RBF networks in terms of prediction and detection accuracy.
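    The residual-error detection idea described above can be sketched generically. This is a common residual-generator scheme (threshold = mean + k standard deviations of the residuals seen on fault-free data), not the paper's specific CPRBF formulation, and all numbers below are hypothetical.

    ```python
    from statistics import mean, pstdev

    def residual_threshold(train_residuals, k=3.0):
        """Threshold derived from fault-free training residuals:
        mean + k * (population) standard deviation."""
        return mean(train_residuals) + k * pstdev(train_residuals)

    def detect_faults(measured, predicted, threshold):
        """Flag any sample whose absolute prediction residual
        exceeds the threshold learned from healthy operation."""
        return [abs(m - p) > threshold for m, p in zip(measured, predicted)]

    # Hypothetical numbers: residuals on healthy data are small and noisy,
    # so the threshold sits just above that noise band.
    healthy_residuals = [0.02, -0.01, 0.03, 0.0, -0.02, 0.01]
    thr = residual_threshold(healthy_residuals, k=3.0)

    # A faulty pump makes the predictor's residual jump well past thr.
    flags = detect_faults(measured=[1.00, 1.01, 1.55],
                          predicted=[1.00, 1.00, 1.00],
                          threshold=thr)
    ```

    The design choice mirrors the abstract: the predictor is fit only on normal-state data, so any systematic departure from its predictions, rather than any particular fault signature, is what triggers detection.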
