Sample records for parallel random number

  1. GASPRNG: GPU accelerated scalable parallel random number generator library

    NASA Astrophysics Data System (ADS)

    Gao, Shuang; Peterson, Gregory D.

    2013-04-01

    Graphics processors represent a promising technology for accelerating computational science applications. Many computational science applications require fast and scalable random number generation with good statistical properties, so they use the Scalable Parallel Random Number Generators library (SPRNG). We present the GPU Accelerated SPRNG library (GASPRNG) to accelerate SPRNG in GPU-based high performance computing systems. GASPRNG includes code for a host CPU and CUDA code for execution on NVIDIA graphics processing units (GPUs), along with a programming interface to support various usage models for pseudorandom numbers and computational science applications executing on the CPU, GPU, or both. This paper describes the implementation approach used to produce high performance and also describes how to use the programming interface. The programming interface allows a user to use GASPRNG the same way as SPRNG on traditional serial or parallel computers, as well as to develop tightly coupled programs executing primarily on the GPU. We also describe how to install and use GASPRNG. To help illustrate linking with GASPRNG, various demonstration codes are included for the different usage models. GASPRNG on a single GPU shows up to 280x speedup over SPRNG on a single CPU core and is able to scale for larger systems in the same manner as SPRNG. Because GASPRNG generates identical streams of pseudorandom numbers as SPRNG, users can be confident about the quality of GASPRNG for scalable computational science applications.
    Catalogue identifier: AEOI_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOI_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: UTK license.
    No. of lines in distributed program, including test data, etc.: 167900
    No. of bytes in distributed program, including test data, etc.: 1422058
    Distribution format: tar.gz
    Programming language: C and CUDA.
    Computer: Any PC or workstation with an NVIDIA GPU (tested on Fermi GTX480, Tesla C1060, Tesla M2070).
    Operating system: Linux with CUDA version 4.0 or later. Should also run on MacOS, Windows, or UNIX.
    Has the code been vectorized or parallelized?: Yes. Parallelized using MPI directives.
    RAM: 512 MB to 732 MB of main memory on the host CPU (depending on the data type of random numbers) and 512 MB of GPU global memory.
    Classification: 4.13, 6.5.
    Nature of problem: Many computational science applications consume large numbers of random numbers. For example, Monte Carlo simulations can consume limitless random numbers as long as computing resources are available. Moreover, parallel computational science applications require independent streams of random numbers to attain statistically significant results. The SPRNG library provides this capability, but at a significant computational cost. The GASPRNG library presented here accelerates the generation of independent streams of random numbers using graphical processing units (GPUs).
    Solution method: Multiple copies of random number generators in GPUs allow a computational science application to consume large numbers of random numbers from independent, parallel streams. GASPRNG is a random number generator library that allows a computational science application to employ multiple copies of random number generators to boost performance. Users can interface GASPRNG with software executing on microprocessors and/or GPUs.
    Running time: The tests provided take a few minutes to run.
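The independent-streams usage model that SPRNG and GASPRNG provide can be illustrated with a short sketch. The `make_streams` helper and its seed derivation below are hypothetical simplifications, not the SPRNG or GASPRNG API; SPRNG parameterizes its generators rather than merely reseeding one, which gives stronger independence guarantees across streams:

```python
import random

def make_streams(nstreams, master_seed):
    # One pseudorandom stream per MPI rank or GPU block, as in the
    # SPRNG usage model.  Hypothetical simplification: per-stream
    # seeds are derived from a single master seed.
    return [random.Random((master_seed << 32) | i) for i in range(nstreams)]

streams = make_streams(4, master_seed=42)
samples = [rng.random() for rng in streams]  # one draw per "rank"
```

Because each stream is seeded deterministically from the master seed, the same master seed reproduces the same set of streams regardless of how many processes consume them.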

  2. A Proposed Solution to the Problem with Using Completely Random Data to Assess the Number of Factors with Parallel Analysis

    ERIC Educational Resources Information Center

    Green, Samuel B.; Levy, Roy; Thompson, Marilyn S.; Lu, Min; Lo, Wen-Juo

    2012-01-01

    A number of psychometricians have argued for the use of parallel analysis to determine the number of factors. However, parallel analysis must be viewed at best as a heuristic approach rather than a mathematically rigorous one. The authors suggest a revision to parallel analysis that could improve its accuracy. A Monte Carlo study is conducted to…

  3. Random number generators for large-scale parallel Monte Carlo simulations on FPGA

    NASA Astrophysics Data System (ADS)

    Lin, Y.; Wang, F.; Liu, B.

    2018-05-01

    Through parallelization, field programmable gate arrays (FPGAs) can achieve unprecedented speeds in large-scale parallel Monte Carlo (LPMC) simulations. FPGAs present both new constraints and new opportunities for the implementation of random number generators (RNGs), which are key elements of any Monte Carlo (MC) simulation system. Using empirical and application-based tests, this study evaluates all four RNGs used in previous FPGA-based MC studies, together with newly proposed FPGA implementations of two well-known high-quality RNGs suitable for LPMC studies on FPGA. One of the newly proposed FPGA implementations, a parallel version of the additive lagged Fibonacci generator (Parallel ALFG), is found to be the best among the evaluated RNGs in fulfilling the needs of LPMC simulations on FPGA.
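The additive lagged Fibonacci generator named above follows the recurrence x_n = (x_{n-j} + x_{n-k}) mod m. A minimal software sketch; the lags (5, 17) and the seeding scheme are illustrative choices, not the paper's FPGA design:

```python
from collections import deque

def alfg(seed_state, j=5, k=17, m=2**32):
    # Additive lagged Fibonacci generator: x_n = (x_{n-j} + x_{n-k}) mod m.
    # The state holds the k most recent values; at least one seed value
    # should be odd so the generator attains its maximal period.
    assert len(seed_state) == k and 0 < j < k
    state = deque(seed_state, maxlen=k)
    while True:
        x = (state[-j] + state[-k]) % m
        state.append(x)
        yield x

seeds = [(i * 2654435761 + 1) % 2**32 for i in range(17)]  # seeds[0] is odd
gen = alfg(seeds)
stream = [next(gen) for _ in range(5)]
```

The bounded deque makes the k-value sliding window explicit; in hardware the same window is a bank of k registers updated in place.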

  4. A parallel Monte Carlo code for planar and SPECT imaging: implementation, verification and applications in (131)I SPECT.

    PubMed

    Dewaraja, Yuni K; Ljungberg, Michael; Majumdar, Amitava; Bose, Abhijit; Koral, Kenneth F

    2002-02-01

    This paper reports the implementation of the SIMIND Monte Carlo code on an IBM SP2 distributed memory parallel computer. Basic aspects of running Monte Carlo particle transport calculations on parallel architectures are described. Our parallelization is based on equally partitioning photons among the processors and uses the Message Passing Interface (MPI) library for interprocessor communication and the Scalable Parallel Random Number Generator (SPRNG) to generate uncorrelated random number streams. These parallelization techniques are also applicable to other distributed memory architectures. A linear increase in computing speed with the number of processors is demonstrated for up to 32 processors. This speed-up is especially significant in Single Photon Emission Computed Tomography (SPECT) simulations involving higher energy photon emitters, where explicit modeling of the phantom and collimator is required. For (131)I, the accuracy of the parallel code is demonstrated by comparing simulated and experimental SPECT images from a heart/thorax phantom. Clinically realistic SPECT simulations using the voxel-man phantom are carried out to assess scatter and attenuation correction.
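The "equally partitioning photons among the processors" strategy described above amounts to splitting the total history count as evenly as possible across MPI ranks. A minimal sketch, with a hypothetical helper name:

```python
def partition_histories(total, nprocs):
    # Split `total` photon histories as evenly as possible across
    # `nprocs` ranks: the first (total % nprocs) ranks take one extra.
    base, extra = divmod(total, nprocs)
    return [base + (1 if rank < extra else 0) for rank in range(nprocs)]

counts = partition_histories(10**6, 32)  # one entry per MPI rank
```

Each rank then simulates its share using its own SPRNG stream, so no interprocessor communication is needed until the tallies are reduced.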

  5. Parallelization of a Monte Carlo particle transport simulation code

    NASA Astrophysics Data System (ADS)

    Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.

    2010-05-01

    We have developed a high performance version of the Monte Carlo particle transport simulation code MC4. The original application code, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language to improve code portability. Several pseudo-random number generators have also been integrated and studied. The new MC4 version was then parallelized for shared- and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures, including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes of 2 dual-core AMD Opteron processors, and a 200 dual-processor HP cluster. For large problem sizes, which are limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow the study of higher particle energies with more accurate physical models, and improve statistics, as more particle tracks can be simulated in a short response time.

  6. Type I and Type II Error Rates and Overall Accuracy of the Revised Parallel Analysis Method for Determining the Number of Factors

    ERIC Educational Resources Information Center

    Green, Samuel B.; Thompson, Marilyn S.; Levy, Roy; Lo, Wen-Juo

    2015-01-01

    Traditional parallel analysis (T-PA) estimates the number of factors by sequentially comparing sample eigenvalues with eigenvalues for randomly generated data. Revised parallel analysis (R-PA) sequentially compares the kth eigenvalue for sample data to the kth eigenvalue for generated data sets, conditioned on k-…

  7. The development of GPU-based parallel PRNG for Monte Carlo applications in CUDA Fortran

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kargaran, Hamed, E-mail: h-kargaran@sbu.ac.ir; Minuchehr, Abdolhamid; Zolfaghari, Ahmad

    The implementation of Monte Carlo simulation in CUDA Fortran requires fast random number generation with good statistical properties on the GPU. In this study, a GPU-based parallel pseudo random number generator (GPPRNG) has been proposed for use in high performance computing systems. According to the type of GPU memory usage, the GPU scheme is divided into two work modes, GLOBAL-MODE and SHARED-MODE. To generate parallel random numbers based on the independent sequence method, the combination of the middle-square method and a chaotic map, along with the Xorshift PRNG, has been employed. Implementation of our developed PPRNG on a single GPU showed a speedup of 150x and 470x (with respect to the speed of a PRNG on a single CPU core) for GLOBAL-MODE and SHARED-MODE, respectively. To evaluate the accuracy of our developed GPPRNG, its performance was compared to that of some other commercially available PPRNGs, such as those of MATLAB, FORTRAN, and the Miller-Park algorithm, through specific standard tests. The results of this comparison show that the GPPRNG developed in this study can be used as a fast and accurate tool for computational science applications.
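Of the ingredients named above, the Xorshift PRNG is the simplest to sketch. Below is Marsaglia's 32-bit xorshift with the common (13, 17, 5) shift triple; the middle-square and chaotic-map stages of the paper's combined generator are not reproduced here:

```python
def xorshift32(seed):
    # Marsaglia's 32-bit xorshift with the (13, 17, 5) shift triple.
    # The state is a nonzero 32-bit word; zero is a fixed point of the
    # update and must be excluded.
    x = seed & 0xFFFFFFFF
    assert x != 0, "seed must be nonzero modulo 2**32"
    while True:
        x ^= (x << 13) & 0xFFFFFFFF
        x ^= x >> 17
        x ^= (x << 5) & 0xFFFFFFFF
        yield x

gen = xorshift32(2463534242)  # example seed from Marsaglia's paper
draws = [next(gen) for _ in range(3)]
```

The update is a bijection on nonzero 32-bit states, so a GPU thread can hold one 32-bit word of state and advance it with three shift/xor operations per draw.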

  8. Considering Horn's Parallel Analysis from a Random Matrix Theory Point of View.

    PubMed

    Saccenti, Edoardo; Timmerman, Marieke E

    2017-03-01

    Horn's parallel analysis is a widely used method for assessing the number of principal components and common factors. We discuss the theoretical foundations of parallel analysis for principal components based on a covariance matrix by making use of arguments from random matrix theory. In particular, we show that (i) for the first component, parallel analysis is an inferential method equivalent to the Tracy-Widom test, (ii) its use to test high-order eigenvalues is equivalent to the use of the joint distribution of the eigenvalues, and thus should be discouraged, and (iii) a formal test for higher-order components can be obtained based on a Tracy-Widom approximation. We illustrate the performance of the two testing procedures using simulated data generated under both a principal component model and a common factors model. For the principal component model, the Tracy-Widom test performs consistently in all conditions, while parallel analysis shows unpredictable behavior for higher-order components. For the common factor model, including major and minor factors, both procedures are heuristic approaches, with variable performance. We conclude that the Tracy-Widom procedure is preferred over parallel analysis for statistically testing the number of principal components based on a covariance matrix.

  9. Random Number Generation for High Performance Computing

    DTIC Science & Technology

    2015-01-01

    …number streams, a quality metric for the parallel random number streams. […] …with each subtask executed by a separate thread or process (henceforth, process).

  10. Parallel Algorithms for Switching Edges in Heterogeneous Graphs.

    PubMed

    Bhuiyan, Hasanuzzaman; Khan, Maleq; Chen, Jiangzhuo; Marathe, Madhav

    2017-06-01

    An edge switch is an operation on a graph (or network) in which two edges are selected randomly and one end vertex of each is swapped with the other. Edge switch operations have important applications in graph theory and network analysis, such as generating random networks with a given degree sequence, modeling and analyzing dynamic networks, and studying various dynamic phenomena over a network. The recent growth of real-world networks motivates the need for efficient parallel algorithms. The dependencies among successive edge switch operations and the requirement to keep the graph simple (i.e., no self-loops or parallel edges) as the edges are switched lead to significant challenges in designing a parallel algorithm. Addressing these challenges requires complex synchronization and communication among the processors, leading to difficulties in achieving a good speedup through parallelization. In this paper, we present distributed memory parallel algorithms for switching edges in massive networks. These algorithms provide good speedup and scale well to a large number of processors. A harmonic mean speedup of 73.25 is achieved on eight different networks with 1024 processors. One of the steps in our edge switch algorithms requires the computation of multinomial random variables in parallel. This paper presents the first non-trivial parallel algorithm for the problem, achieving a speedup of 925 using 1024 processors.
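A single edge-switch step, with the simplicity checks the abstract describes, can be sketched as follows. This is a sequential illustration of the operation, not the paper's distributed algorithm, and the function name is hypothetical:

```python
import random

def edge_switch(edges, rng, max_tries=100):
    # One edge-switch operation: select two distinct edges (a, b) and
    # (c, d) at random and swap an endpoint of each, rejecting any swap
    # that would create a self-loop or a parallel edge so the graph
    # stays simple.  Vertex degrees are preserved by construction.
    edge_set = set(edges)
    for _ in range(max_tries):
        (a, b), (c, d) = rng.sample(edges, 2)
        if a == d or c == b:
            continue  # swap would create a self-loop
        if {(a, d), (d, a), (c, b), (b, c)} & edge_set:
            continue  # swap would create a parallel edge
        edge_set -= {(a, b), (c, d)}
        edge_set |= {(a, d), (c, b)}
        return sorted(edge_set)
    return sorted(edge_set)  # no admissible switch found

switched = edge_switch([(0, 1), (2, 3), (4, 5)], random.Random(0))
```

The rejection tests are exactly the dependencies that make parallelization hard: two processors switching edges concurrently must coordinate so neither creates a loop or duplicate edge the other cannot see.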

  11. Parallel Algorithms for Switching Edges in Heterogeneous Graphs☆

    PubMed Central

    Khan, Maleq; Chen, Jiangzhuo; Marathe, Madhav

    2017-01-01

    An edge switch is an operation on a graph (or network) in which two edges are selected randomly and one end vertex of each is swapped with the other. Edge switch operations have important applications in graph theory and network analysis, such as generating random networks with a given degree sequence, modeling and analyzing dynamic networks, and studying various dynamic phenomena over a network. The recent growth of real-world networks motivates the need for efficient parallel algorithms. The dependencies among successive edge switch operations and the requirement to keep the graph simple (i.e., no self-loops or parallel edges) as the edges are switched lead to significant challenges in designing a parallel algorithm. Addressing these challenges requires complex synchronization and communication among the processors, leading to difficulties in achieving a good speedup through parallelization. In this paper, we present distributed memory parallel algorithms for switching edges in massive networks. These algorithms provide good speedup and scale well to a large number of processors. A harmonic mean speedup of 73.25 is achieved on eight different networks with 1024 processors. One of the steps in our edge switch algorithms requires the computation of multinomial random variables in parallel. This paper presents the first non-trivial parallel algorithm for the problem, achieving a speedup of 925 using 1024 processors. PMID:28757680

  12. Implementation of the DPM Monte Carlo code on a parallel architecture for treatment planning applications.

    PubMed

    Tyagi, Neelam; Bose, Abhijit; Chetty, Indrin J

    2004-09-01

    We have parallelized the Dose Planning Method (DPM), a Monte Carlo code optimized for radiotherapy class problems, on distributed-memory processor architectures using the Message Passing Interface (MPI). Parallelization has been investigated on a variety of parallel computing architectures at the University of Michigan Center for Advanced Computing, with respect to efficiency and speedup as a function of the number of processors. We have integrated the parallel pseudo random number generator from the Scalable Parallel Pseudo-Random Number Generator (SPRNG) library to run with the parallel DPM. The Intel cluster, consisting of 800 MHz Intel Pentium III processors, shows an almost linear speedup up to 32 processors for simulating 1x10^8 or more particles. The speedup results are nearly linear on an Athlon cluster (up to 24 processors, based on availability), which consists of 1.8 GHz+ Advanced Micro Devices (AMD) Athlon processors, on increasing the problem size up to 8x10^8 histories. For a smaller number of histories (1x10^8), the reduction of efficiency with the Athlon cluster (down to 83.9% with 24 processors) occurs because the processing time required to simulate 1x10^8 histories is less than the time associated with interprocessor communication. A similar trend was seen with the Opteron cluster (consisting of 1400 MHz, 64-bit AMD Opteron processors) on increasing the problem size. Because of the 64-bit architecture, Opteron processors are capable of storing and processing instructions at a faster rate and hence are faster than the 32-bit Athlon processors. We have validated our implementation with an in-phantom dose calculation study using a parallel pencil monoenergetic electron beam of 20 MeV energy. The phantom consists of layers of water, lung, bone, aluminum, and titanium. The agreement in the central axis depth dose curves and profiles at different depths shows that the serial and parallel codes are equivalent in accuracy.
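The efficiency figures quoted above follow the usual definitions: speedup S = T_serial/T_parallel and efficiency E = S/P for P processors. A small sketch with hypothetical timings (not measurements from the paper):

```python
def speedup_and_efficiency(t_serial, t_parallel, nprocs):
    # Speedup S = T_serial / T_parallel; parallel efficiency E = S / P.
    # E near 1.0 is the "almost linear" regime; interprocessor
    # communication overhead pulls E below 1 for small workloads.
    s = t_serial / t_parallel
    return s, s / nprocs

# Hypothetical timings for illustration only:
s, e = speedup_and_efficiency(t_serial=100.0, t_parallel=5.0, nprocs=24)
```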

  13. Accuracy of the Parallel Analysis Procedure with Polychoric Correlations

    ERIC Educational Resources Information Center

    Cho, Sun-Joo; Li, Feiming; Bandalos, Deborah

    2009-01-01

    The purpose of this study was to investigate the application of the parallel analysis (PA) method for choosing the number of factors in component analysis for situations in which data are dichotomous or ordinal. Although polychoric correlations are sometimes used as input for component analyses, the random data matrices generated for use in PA…

  14. Method and apparatus for routing data in an inter-nodal communications lattice of a massively parallel computer system by semi-randomly varying routing policies for different packets

    DOEpatents

    Archer, Charles Jens; Musselman, Roy Glenn; Peters, Amanda; Pinnow, Kurt Walter; Swartz, Brent Allen; Wallenfelt, Brian Paul

    2010-11-23

    A massively parallel computer system contains an inter-nodal communications network of node-to-node links. Nodes vary a choice of routing policy for routing data in the network in a semi-random manner, so that similarly situated packets are not always routed along the same path. Semi-random variation of the routing policy tends to avoid certain local hot spots of network activity, which might otherwise arise using more consistent routing determinations. Preferably, the originating node chooses a routing policy for a packet, and all intermediate nodes in the path route the packet according to that policy. Policies may be rotated on a round-robin basis, selected by generating a random number, or otherwise varied.
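The policy-variation schemes the patent describes (round-robin rotation or selection by generating a random number at the originating node) can be sketched as follows; the policy names and the function are hypothetical, chosen only to illustrate the per-packet choice:

```python
import random

# Hypothetical policy names for illustration; the patent does not list them.
ROUTING_POLICIES = ["x_first", "y_first", "z_first", "adaptive"]

def choose_policy(packet_id, rng, mode="random"):
    # The originating node picks one routing policy per packet, either
    # by round-robin rotation or by drawing a random number; all
    # intermediate nodes then route the packet by that single choice.
    if mode == "round_robin":
        return ROUTING_POLICIES[packet_id % len(ROUTING_POLICIES)]
    return rng.choice(ROUTING_POLICIES)

rng = random.Random(7)
policies = [choose_policy(i, rng, mode="round_robin") for i in range(8)]
```

Because similarly situated packets draw different policies, traffic that would otherwise concentrate on one path is spread across several, which is the hot-spot-avoidance effect the patent claims.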

  15. MIP models and hybrid algorithms for simultaneous job splitting and scheduling on unrelated parallel machines.

    PubMed

    Eroglu, Duygu Yilmaz; Ozmutlu, H Cenk

    2014-01-01

    We developed mixed integer programming (MIP) models and hybrid genetic-local search algorithms for the scheduling problem of unrelated parallel machines with job-sequence- and machine-dependent setup times and with the job splitting property. The first contribution of this paper is to introduce novel algorithms that perform splitting and scheduling simultaneously, with a variable number of subjobs. We propose a simple chromosome structure constituted by random key numbers in the hybrid genetic-local search algorithm (GAspLA). Random key numbers are used frequently in genetic algorithms, but they create additional difficulty when hybrid factors in local search are implemented. We developed algorithms that adapt the results of local search into the genetic algorithm with a minimum relocation operation of the genes' random key numbers; this is the second contribution of the paper. The third contribution of this paper is three new MIP models that perform splitting and scheduling simultaneously. The fourth contribution of this paper is the implementation of GAspLAMIP, which lets us verify the optimality of GAspLA for the studied combinations. The proposed methods are tested on a set of problems taken from the literature, and the results validate the effectiveness of the proposed algorithms.

  16. Reporting of participant flow diagrams in published reports of randomized trials.

    PubMed

    Hopewell, Sally; Hirst, Allison; Collins, Gary S; Mallett, Sue; Yu, Ly-Mee; Altman, Douglas G

    2011-12-05

    Reporting of the flow of participants through each stage of a randomized trial is essential to assess the generalisability and validity of its results. We assessed the type and completeness of information reported in CONSORT (Consolidated Standards of Reporting Trials) flow diagrams published in current reports of randomized trials. A cross sectional review of all primary reports of randomized trials which included a CONSORT flow diagram indexed in PubMed core clinical journals (2009). We assessed the proportion of parallel group trial publications reporting specific items recommended by CONSORT for inclusion in a flow diagram. Of 469 primary reports of randomized trials, 263 (56%) included a CONSORT flow diagram of which 89% (237/263) were published in a CONSORT endorsing journal. Reports published in CONSORT endorsing journals were more likely to include a flow diagram (62%; 237/380 versus 29%; 26/89). Ninety percent (236/263) of reports which included a flow diagram had a parallel group design, of which 49% (116/236) evaluated drug interventions, 58% (137/236) were multicentre, and 79% (187/236) compared two study groups, with a median sample size of 213 participants. Eighty-one percent (191/236) reported the overall number of participants assessed for eligibility, 71% (168/236) the number excluded prior to randomization and 98% (231/236) the overall number randomized. Reasons for exclusion prior to randomization were more poorly reported. Ninety-four percent (223/236) reported the number of participants allocated to each arm of the trial. However, only 40% (95/236) reported the number who actually received the allocated intervention, 67% (158/236) the number lost to follow up in each arm of the trial, 61% (145/236) whether participants discontinued the intervention during the trial and 54% (128/236) the number included in the main analysis. 
Over half of published reports of randomized trials included a diagram showing the flow of participants through the trial. However, information was often missing from published flow diagrams, even in articles published in CONSORT endorsing journals. If important information is not reported it can be difficult and sometimes impossible to know if the conclusions of that trial are justified by the data presented.

  17. Bistatic scattering from a three-dimensional object above a two-dimensional randomly rough surface modeled with the parallel FDTD approach.

    PubMed

    Guo, L-X; Li, J; Zeng, H

    2009-11-01

    We present an investigation of the electromagnetic scattering from a three-dimensional (3-D) object above a two-dimensional (2-D) randomly rough surface. A Message Passing Interface-based parallel finite-difference time-domain (FDTD) approach is used, and the uniaxial perfectly matched layer (UPML) medium is adopted for truncation of the FDTD lattices, in which the finite-difference equations can be used for the total computation domain by properly choosing the uniaxial parameters. This makes the parallel FDTD algorithm easier to implement. The parallel performance with different numbers of processors is illustrated for one rough surface realization and shows that the computation time of our parallel FDTD algorithm is dramatically reduced relative to a single-processor implementation. Finally, the composite scattering coefficients versus scattered and azimuthal angles are presented and analyzed for different conditions, including the surface roughness, the dielectric constants, the polarization, and the size of the 3-D object.

  18. An in silico approach helped to identify the best experimental design, population, and outcome for future randomized clinical trials.

    PubMed

    Bajard, Agathe; Chabaud, Sylvie; Cornu, Catherine; Castellan, Anne-Charlotte; Malik, Salma; Kurbatova, Polina; Volpert, Vitaly; Eymard, Nathalie; Kassai, Behrouz; Nony, Patrice

    2016-01-01

    The main objective of our work was to compare different randomized clinical trial (RCT) experimental designs in terms of power, accuracy of the estimation of treatment effect, and number of patients receiving active treatment using in silico simulations. A virtual population of patients was simulated and randomized in potential clinical trials. Treatment effect was modeled using a dose-effect relation for quantitative or qualitative outcomes. Different experimental designs were considered, and performances between designs were compared. One thousand clinical trials were simulated for each design based on an example of modeled disease. According to simulation results, the number of patients needed to reach 80% power was 50 for crossover, 60 for parallel or randomized withdrawal, 65 for drop the loser (DL), and 70 for early escape or play the winner (PW). For a given sample size, each design had its own advantage: low duration (parallel, early escape), high statistical power and precision (crossover), and higher number of patients receiving the active treatment (PW and DL). Our approach can help to identify the best experimental design, population, and outcome for future RCTs. This may be particularly useful for drug development in rare diseases, theragnostic approaches, or personalized medicine. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Random Walk Method for Potential Problems

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, T.; Raju, I. S.

    2002-01-01

    A local Random Walk Method (RWM) for potential problems governed by Laplace's and Poisson's equations is developed for two- and three-dimensional problems. The RWM is implemented and demonstrated in a multiprocessor parallel environment on a Beowulf cluster of computers. A speed gain of 16 is achieved as the number of processors is increased from 1 to 23.
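A random walk method for Laplace-type potential problems estimates the solution at a point as the average boundary value hit by random walks started there; walks are independent, which is why the method parallelizes so well. A minimal grid-based sketch, not the paper's RWM implementation:

```python
import random

def potential_at(x, y, n, boundary, rng, walks=2000):
    # Estimate the discrete Laplace solution at interior node (x, y) of
    # an n x n grid: run random walks to the boundary and average the
    # boundary values hit.  Each walk steps to a uniformly random
    # lattice neighbour until i or j reaches 0 or n.
    total = 0.0
    for _ in range(walks):
        i, j = x, y
        while 0 < i < n and 0 < j < n:
            di, dj = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            i, j = i + di, j + dj
        total += boundary(i, j)
    return total / walks

# Square grid with the top edge (j == 10) held at 1 and the rest at 0;
# by symmetry the exact value at the centre is 0.25.
rng = random.Random(1)
phi = potential_at(5, 5, 10, lambda i, j: 1.0 if j == 10 else 0.0, rng)
```

In a parallel setting each processor simply runs its own batch of walks with its own random stream and the partial sums are averaged at the end.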

  20. MIP Models and Hybrid Algorithms for Simultaneous Job Splitting and Scheduling on Unrelated Parallel Machines

    PubMed Central

    Ozmutlu, H. Cenk

    2014-01-01

    We developed mixed integer programming (MIP) models and hybrid genetic-local search algorithms for the scheduling problem of unrelated parallel machines with job-sequence- and machine-dependent setup times and with the job splitting property. The first contribution of this paper is to introduce novel algorithms that perform splitting and scheduling simultaneously, with a variable number of subjobs. We propose a simple chromosome structure constituted by random key numbers in the hybrid genetic-local search algorithm (GAspLA). Random key numbers are used frequently in genetic algorithms, but they create additional difficulty when hybrid factors in local search are implemented. We developed algorithms that adapt the results of local search into the genetic algorithm with a minimum relocation operation of the genes' random key numbers; this is the second contribution of the paper. The third contribution of this paper is three new MIP models that perform splitting and scheduling simultaneously. The fourth contribution of this paper is the implementation of GAspLAMIP, which lets us verify the optimality of GAspLA for the studied combinations. The proposed methods are tested on a set of problems taken from the literature, and the results validate the effectiveness of the proposed algorithms. PMID:24977204

  1. Statistical Estimation of Some Irrational Numbers Using an Extension of Buffon's Needle Experiment

    ERIC Educational Resources Information Center

    Velasco, S.; Roman, F. L.; Gonzalez, A.; White, J. A.

    2006-01-01

    In the nineteenth century many people tried to seek a value for the most famous irrational number, [pi], by means of an experiment known as Buffon's needle, consisting of throwing randomly a needle onto a surface ruled with straight parallel lines. Here we propose to extend this experiment in order to evaluate other irrational numbers, such as…
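The classical Buffon's needle experiment referenced here estimates pi from the crossing probability P = 2l/(pi d) for a needle of length l <= d dropped on parallel lines spaced d apart. A minimal Monte Carlo sketch:

```python
import math
import random

def buffon_pi(trials, rng, needle=1.0, spacing=1.0):
    # Buffon's needle with l <= d: the crossing probability is
    # P = 2 * l / (pi * d), so pi is estimated as
    # 2 * l * trials / (d * hits).
    hits = 0
    for _ in range(trials):
        center = rng.uniform(0.0, spacing / 2.0)  # needle centre to nearest line
        theta = rng.uniform(0.0, math.pi / 2.0)   # acute angle with the lines
        if center <= (needle / 2.0) * math.sin(theta):
            hits += 1
    return 2.0 * needle * trials / (spacing * hits)

estimate = buffon_pi(200_000, random.Random(2024))
```

The convergence is slow (error shrinks like 1/sqrt(trials)), which is why historical needle-throwing experiments produced only a few correct digits of pi.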

  2. [Three-dimensional parallel collagen scaffold promotes tendon extracellular matrix formation].

    PubMed

    Zheng, Zefeng; Shen, Weiliang; Le, Huihui; Dai, Xuesong; Ouyang, Hongwei; Chen, Weishan

    2016-03-01

    To investigate the effects of a three-dimensional parallel collagen scaffold on the cell shape, arrangement, and extracellular matrix formation of tendon stem cells. A parallel collagen scaffold was fabricated by a unidirectional freezing technique, while a random collagen scaffold was fabricated by a freeze-drying technique. The effects of the two scaffolds on cell shape and extracellular matrix formation were investigated in vitro by seeding tendon stem/progenitor cells and in vivo by ectopic implantation. Parallel and random collagen scaffolds were produced successfully. The parallel collagen scaffold was more akin to tendon than the random collagen scaffold. Tendon stem/progenitor cells were spindle-shaped and uniformly orientated in the parallel collagen scaffold, while cells on the random collagen scaffold had disordered orientation. Two weeks after ectopic implantation, cells had nearly the same orientation as the collagen substrate. In the parallel collagen scaffold, cells had a parallel arrangement, and more spindly cells were observed. By contrast, cells in the random collagen scaffold were disordered. The parallel collagen scaffold can induce cells into a spindly shape and parallel arrangement and promote parallel extracellular matrix formation, while the random collagen scaffold induces cells into a random arrangement. The results indicate that the parallel collagen scaffold is an ideal structure to promote tendon repair.

  3. Reporting of participant flow diagrams in published reports of randomized trials

    PubMed Central

    2011-01-01

    Background Reporting of the flow of participants through each stage of a randomized trial is essential to assess the generalisability and validity of its results. We assessed the type and completeness of information reported in CONSORT (Consolidated Standards of Reporting Trials) flow diagrams published in current reports of randomized trials. Methods A cross sectional review of all primary reports of randomized trials which included a CONSORT flow diagram indexed in PubMed core clinical journals (2009). We assessed the proportion of parallel group trial publications reporting specific items recommended by CONSORT for inclusion in a flow diagram. Results Of 469 primary reports of randomized trials, 263 (56%) included a CONSORT flow diagram of which 89% (237/263) were published in a CONSORT endorsing journal. Reports published in CONSORT endorsing journals were more likely to include a flow diagram (62%; 237/380 versus 29%; 26/89). Ninety percent (236/263) of reports which included a flow diagram had a parallel group design, of which 49% (116/236) evaluated drug interventions, 58% (137/236) were multicentre, and 79% (187/236) compared two study groups, with a median sample size of 213 participants. Eighty-one percent (191/236) reported the overall number of participants assessed for eligibility, 71% (168/236) the number excluded prior to randomization and 98% (231/236) the overall number randomized. Reasons for exclusion prior to randomization were more poorly reported. Ninety-four percent (223/236) reported the number of participants allocated to each arm of the trial. However, only 40% (95/236) reported the number who actually received the allocated intervention, 67% (158/236) the number lost to follow up in each arm of the trial, 61% (145/236) whether participants discontinued the intervention during the trial and 54% (128/236) the number included in the main analysis. 
Conclusions Over half of published reports of randomized trials included a diagram showing the flow of participants through the trial. However, information was often missing from published flow diagrams, even in articles published in CONSORT-endorsing journals. If important information is not reported, it can be difficult and sometimes impossible to know whether the conclusions of the trial are justified by the data presented. PMID:22141446

  4. Massively parallel processor computer

    NASA Technical Reports Server (NTRS)

    Fung, L. W. (Inventor)

    1983-01-01

    An apparatus for processing multidimensional data with strong spatial characteristics, such as raw image data, characterized by a large number of parallel data streams in an ordered array is described. It comprises a large number (e.g., 16,384 in a 128 x 128 array) of parallel processing elements operating simultaneously and independently on single bit slices of a corresponding array of incoming data streams under control of a single set of instructions. Each of the processing elements comprises a bidirectional data bus in communication with a register for storing single bit slices together with a random access memory unit and associated circuitry, including a binary counter/shift register device, for performing logical and arithmetical computations on the bit slices, and an I/O unit for interfacing the bidirectional data bus with the data stream source. The massively parallel processor architecture enables very high speed processing of large amounts of ordered parallel data, including spatial translation by shifting or sliding of bits vertically or horizontally to neighboring processing elements.

  5. Transparency in stereopsis: parallel encoding of overlapping depth planes.

    PubMed

    Reeves, Adam; Lynch, David

    2017-08-01

    We report that after extensive training, expert adults can accurately report the number, up to six, of transparent overlapping depth planes portrayed by brief (400 ms or 200 ms) random-element stereoscopic displays, and can well discriminate six from seven planes. Naïve subjects did poorly above three planes. Displays contained seven rows of 12 randomly located ×'s or +'s; jittering the disparities and number in each row to remove spurious cues had little effect on accuracy. Removing the central 3° of the 10° display to eliminate foveal vision hardly reduced the number of reportable planes. Experts could report how many of six planes contained +'s when the remainder contained ×'s, and most learned to report up to six planes in reverse contrast (left eye white +'s; right eye black +'s). Long-term training allowed some experts to reach eight depth planes. Results suggest that adult stereoscopic vision can learn to distinguish the outputs of six or more statistically independent, contrast-insensitive, narrowly tuned, asymmetric disparity channels in parallel.

  6. Evaluation of the accuracy of the Rotating Parallel Ray Omnidirectional Integration for instantaneous pressure reconstruction from the measured pressure gradient

    NASA Astrophysics Data System (ADS)

    Moreto, Jose; Liu, Xiaofeng

    2017-11-01

The accuracy of the Rotating Parallel Ray omnidirectional integration for pressure reconstruction from the measured pressure gradient (Liu et al., AIAA paper 2016-1049) is evaluated against both the Circular Virtual Boundary omnidirectional integration (Liu and Katz, 2006 and 2013) and the conventional Poisson equation approach. A Dirichlet condition at one boundary point and Neumann conditions at all other boundary points are applied to the Poisson solver. A direct numerical simulation database of isotropic turbulence flow (JHTDB), with homogeneously distributed random noise added to the entire field of the DNS pressure gradient, is used to assess the performance of the methods. The random noise, generated by the MATLAB function rand, has a magnitude varying randomly within the range of +/-40% of the maximum DNS pressure gradient. To account for the effect of the noise distribution pattern on the reconstructed pressure accuracy, a total of 1000 different noise distributions, generated using different random number seeds, are included in the evaluation. Final results after averaging the 1000 realizations show that the error of the reconstructed pressure, normalized by the DNS pressure variation range, is 0.15 +/-0.07 for the Poisson equation approach, 0.028 +/-0.003 for the Circular Virtual Boundary method and 0.027 +/-0.003 for the Rotating Parallel Ray method, indicating the robustness of the Rotating Parallel Ray method in pressure reconstruction. Sponsor: The San Diego State University UGP program.
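The noise-averaging procedure above (many noise realizations from different seeds, then mean and spread of the error) can be sketched in a few lines of Python. The toy 1D "gradient" field and the pass-through error measurement below are illustrative stand-ins, not the paper's DNS data or its omnidirectional integration:

```python
import numpy as np

def add_noise(true_grad, noise_frac, rng):
    # Uniform random noise within +/- noise_frac of the maximum
    # gradient magnitude, mimicking the perturbation in the paper.
    amp = noise_frac * np.abs(true_grad).max()
    return true_grad + rng.uniform(-amp, amp, size=true_grad.shape)

true_grad = np.sin(np.linspace(0.0, 2.0 * np.pi, 64))  # toy 1D "pressure gradient"

errors = []
for seed in range(1000):                 # one noise realization per seed
    rng = np.random.default_rng(seed)
    noisy = add_noise(true_grad, 0.40, rng)
    errors.append(np.abs(noisy - true_grad).mean())

mean_err, std_err = float(np.mean(errors)), float(np.std(errors))
```

Reporting mean +/- standard deviation over the 1000 seeded realizations is exactly the form of the 0.027 +/- 0.003 figures quoted in the abstract.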

  7. Demonstration of Numerical Equivalence of Ensemble and Spectral Averaging in Electromagnetic Scattering by Random Particulate Media

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Dlugach, Janna M.; Zakharova, Nadezhda T.

    2016-01-01

The numerically exact superposition T-matrix method is used to model far-field electromagnetic scattering by two types of particulate object. Object 1 is a fixed configuration which consists of N identical spherical particles (with N = 200 or 400) quasi-randomly populating a spherical volume V having a median size parameter of 50. Object 2 is a true discrete random medium (DRM) comprising the same number N of particles randomly moving throughout V. The median particle size parameter is fixed at 4. We show that if Object 1 is illuminated by a quasi-monochromatic parallel beam then it generates a typical speckle pattern having no resemblance to the scattering pattern generated by Object 2. However, if Object 1 is illuminated by a parallel polychromatic beam with a 10% bandwidth then it generates a scattering pattern that is largely devoid of speckles and closely reproduces the quasi-monochromatic pattern generated by Object 2. This result serves to illustrate the capacity of the concept of electromagnetic scattering by a DRM to encompass fixed quasi-random particulate samples provided that they are illuminated by polychromatic light.
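The speckle-suppression effect can be reproduced with a minimal phasor-sum model. The hypothetical 1D configuration of point scatterers below is a stand-in for the superposition T-matrix calculation: averaging the far-field intensity of one fixed random configuration over a finite wavenumber band lowers the speckle contrast:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 50.0, 300)    # fixed quasi-random scatterer positions
q = np.linspace(0.5, 1.5, 400)     # observation-angle parameter

def intensity(k):
    # Coherent phasor sum: far-field intensity of the fixed configuration.
    field = np.exp(1j * k * np.outer(q, x)).sum(axis=1)
    return np.abs(field) ** 2

def contrast(I):
    # Speckle contrast: std/mean of intensity across observation angles.
    return float(I.std() / I.mean())

I_mono = intensity(10.0)                               # quasi-monochromatic
ks = np.linspace(9.5, 10.5, 50)                        # ~10% bandwidth
I_poly = np.mean([intensity(k) for k in ks], axis=0)   # spectral average
```

For the fixed configuration, the monochromatic contrast is close to that of fully developed speckle, while the band-averaged pattern is markedly smoother, mirroring the Object 1 versus Object 2 comparison above.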

  8. Parallel Optical Random Access Memory (PORAM)

    NASA Technical Reports Server (NTRS)

    Alphonse, G. A.

    1989-01-01

    It is shown that the need to minimize component count, power and size, and to maximize packing density require a parallel optical random access memory to be designed in a two-level hierarchy: a modular level and an interconnect level. Three module designs are proposed, in the order of research and development requirements. The first uses state-of-the-art components, including individually addressed laser diode arrays, acousto-optic (AO) deflectors and magneto-optic (MO) storage medium, aimed at moderate size, moderate power, and high packing density. The next design level uses an electron-trapping (ET) medium to reduce optical power requirements. The third design uses a beam-steering grating surface emitter (GSE) array to reduce size further and minimize the number of components.

  9. Finite-sample corrected generalized estimating equation of population average treatment effects in stepped wedge cluster randomized trials.

    PubMed

    Scott, JoAnna M; deCamp, Allan; Juraska, Michal; Fay, Michael P; Gilbert, Peter B

    2017-04-01

Stepped wedge designs are increasingly commonplace and advantageous for cluster randomized trials when it is both unethical to assign placebo and logistically difficult to allocate an intervention simultaneously to many clusters. We study marginal mean models fit with generalized estimating equations for assessing treatment effectiveness in stepped wedge cluster randomized trials. This approach has advantages over the more commonly used mixed models: (1) the population-average parameters have an important interpretation for public health applications, and (2) it avoids untestable assumptions on latent variable distributions and parametric assumptions about error distributions, therefore providing more robust evidence on treatment effects. However, cluster randomized trials typically have a small number of clusters, rendering the standard generalized estimating equation sandwich variance estimator biased and highly variable and hence yielding incorrect inferences. We study the usual asymptotic generalized estimating equation inferences (i.e., using sandwich variance estimators and asymptotic normality) and four small-sample corrections to generalized estimating equations for stepped wedge cluster randomized trials, with parallel cluster randomized trials as a comparison. We show by simulation that the small-sample corrections provide improvement, with one correction appearing to provide at least nominal coverage even with only 10 clusters per group. These results demonstrate the viability of the marginal mean approach for both stepped wedge and parallel cluster randomized trials. We also study the comparative performance of the corrected methods for stepped wedge and parallel designs, and describe how the methods can accommodate interval censoring of individual failure times and incorporate semiparametric efficient estimators.
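A minimal numerical sketch of the marginal-mean machinery, assuming a linear (identity-link) model with an independence working correlation: the point estimate is then ordinary least squares, and the uncorrected cluster-robust "sandwich" variance sums per-cluster score contributions. The small-sample corrections studied in the paper modify the middle "meat" term (e.g., by rescaling residuals); they are not implemented here:

```python
import numpy as np

def gee_independence(X, y, cluster):
    # GEE point estimate with identity link and independence working
    # correlation (= OLS), plus the uncorrected cluster-robust
    # sandwich variance estimator.
    bread = np.linalg.inv(X.T @ X)
    beta = bread @ X.T @ y
    resid = y - X @ beta
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(cluster):          # per-cluster score outer products
        s = X[cluster == g].T @ resid[cluster == g]
        meat += np.outer(s, s)
    V = bread @ meat @ bread              # sandwich
    return beta, V

# Toy parallel cluster-randomized trial: 10 clusters of 20, half treated.
rng = np.random.default_rng(0)
n_clusters, m = 10, 20
cluster = np.repeat(np.arange(n_clusters), m)
treat = (cluster >= n_clusters // 2).astype(float)
y = (1.0 + 0.8 * treat
     + rng.normal(0, 0.5, n_clusters)[cluster]   # cluster random effect
     + rng.normal(0, 1.0, n_clusters * m))
X = np.column_stack([np.ones_like(treat), treat])
beta, V = gee_independence(X, y, cluster)
se = np.sqrt(np.diag(V))
```

With only 10 clusters, this uncorrected sandwich estimator is biased downward, which is precisely the problem the corrections discussed above address.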

  10. Heterogeneous Hardware Parallelism Review of the IN2P3 2016 Computing School

    NASA Astrophysics Data System (ADS)

    Lafage, Vincent

    2017-11-01

Parallel and hybrid Monte Carlo computation. The Monte Carlo method is the main workhorse for the computation of particle physics observables. This paper provides an overview of various HPC technologies that can be used today: multicore (OpenMP, HPX) and manycore (OpenCL). The rewrite of a twenty-year-old Fortran 77 Monte Carlo code illustrates the various programming paradigms in use beyond the language implementation. The problem of parallel random number generation is also addressed. We give a short report of the one-week school dedicated to these recent approaches, which took place at École Polytechnique in May 2016.
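The parallel random number problem mentioned above is, at bottom, about giving each worker a statistically independent, reproducible stream. One common modern recipe (shown here with NumPy's SeedSequence as an illustration, not the school's actual material) spawns child seeds from a single root seed:

```python
import numpy as np

def mc_pi(rng, n=100_000):
    # Toy Monte Carlo observable: estimate pi by dart throwing.
    xy = rng.random((n, 2))
    return 4.0 * np.mean((xy ** 2).sum(axis=1) < 1.0)

# One root seed; spawn independent, reproducible child streams, one per worker.
root = np.random.SeedSequence(20160501)
streams = [np.random.default_rng(s) for s in root.spawn(4)]

estimates = [mc_pi(rng) for rng in streams]   # would run on separate workers
pi_hat = float(np.mean(estimates))
```

Because the spawn is deterministic, rerunning with the same root seed reproduces every worker's stream exactly, while the child streams remain statistically independent of each other.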

  11. Implementation of a parallel protein structure alignment service on cloud.

    PubMed

    Hung, Che-Lun; Lin, Yaw-Ling

    2013-01-01

    Protein structure alignment has become an important strategy by which to identify evolutionary relationships between protein sequences. Several alignment tools are currently available for online comparison of protein structures. In this paper, we propose a parallel protein structure alignment service based on the Hadoop distribution framework. This service includes a protein structure alignment algorithm, a refinement algorithm, and a MapReduce programming model. The refinement algorithm refines the result of alignment. To process vast numbers of protein structures in parallel, the alignment and refinement algorithms are implemented using MapReduce. We analyzed and compared the structure alignments produced by different methods using a dataset randomly selected from the PDB database. The experimental results verify that the proposed algorithm refines the resulting alignments more accurately than existing algorithms. Meanwhile, the computational performance of the proposed service is proportional to the number of processors used in our cloud platform.
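The MapReduce structure of such a service can be sketched in plain Python. The toy "alignment score" below (a longest-common-prefix length) is a hypothetical stand-in for the actual structure alignment and refinement algorithms:

```python
from collections import defaultdict

def align(query, target):
    # Hypothetical toy score: length of the longest common prefix.
    n = 0
    for a, b in zip(query, target):
        if a != b:
            break
        n += 1
    return n

def map_phase(query, structures):
    # Map: emit (target_id, score) pairs; in Hadoop each mapper would
    # process one partition of the structure database in parallel.
    for tid, seq in structures:
        yield tid, align(query, seq)

def reduce_phase(pairs):
    # Reduce: keep the best score seen for each target id.
    best = defaultdict(int)
    for tid, score in pairs:
        best[tid] = max(best[tid], score)
    return dict(best)

db = [("1abc", "MKTAY"), ("2xyz", "MKTFF"), ("1abc", "MKQQQ")]
result = reduce_phase(map_phase("MKTAY", db))
```

The scalability claim in the abstract follows from this shape: the map phase is embarrassingly parallel over database partitions, so throughput grows with the number of processors.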

  12. Implementation of a Parallel Protein Structure Alignment Service on Cloud

    PubMed Central

    Hung, Che-Lun; Lin, Yaw-Ling

    2013-01-01

    Protein structure alignment has become an important strategy by which to identify evolutionary relationships between protein sequences. Several alignment tools are currently available for online comparison of protein structures. In this paper, we propose a parallel protein structure alignment service based on the Hadoop distribution framework. This service includes a protein structure alignment algorithm, a refinement algorithm, and a MapReduce programming model. The refinement algorithm refines the result of alignment. To process vast numbers of protein structures in parallel, the alignment and refinement algorithms are implemented using MapReduce. We analyzed and compared the structure alignments produced by different methods using a dataset randomly selected from the PDB database. The experimental results verify that the proposed algorithm refines the resulting alignments more accurately than existing algorithms. Meanwhile, the computational performance of the proposed service is proportional to the number of processors used in our cloud platform. PMID:23671842

  13. Randomized, Controlled Trial of CBT Training for PTSD Providers

    DTIC Science & Technology

    2016-10-29

trial and comparative effectiveness study is to design, implement and evaluate a cost-effective, web-based self-paced training program to provide skills...without web-centered supervision, may provide an effective means to train increasing numbers of mental health providers in relevant, evidence-based...in equal numbers to three parallel intervention conditions: a) web-based training plus web-centered supervision; b) web-based training alone; and c

  14. Parallelization of a spatial random field characterization process using the Method of Anchored Distributions and the HTCondor high throughput computing system

    NASA Astrophysics Data System (ADS)

    Osorio-Murillo, C. A.; Over, M. W.; Frystacky, H.; Ames, D. P.; Rubin, Y.

    2013-12-01

A new software application called MAD# has been coupled with the HTCondor high throughput computing system to aid scientists and educators with the characterization of spatial random fields and enable understanding of the spatial distribution of parameters used in hydrogeologic and related modeling. MAD# is an open-source desktop software application used to characterize spatial random fields using direct and indirect information through a Bayesian inverse modeling technique called the Method of Anchored Distributions (MAD). MAD relates indirect information to a target spatial random field via a forward simulation model. MAD# executes the inverse process by running the forward model multiple times to transfer information from the indirect data to the target variable. MAD# uses two parallelization profiles according to the computational resources available: a single computer with multiple cores, or multiple computers with multiple cores through HTCondor. HTCondor is a system that manages a cluster of desktop computers for submitting serial or parallel jobs, using scheduling policies, resource monitoring, and a job queuing mechanism. This poster will show how MAD# reduces the execution time of the characterization of random fields using these two parallel approaches in different case studies. A test of the approach was conducted using a 1D problem with 400 cells to characterize the saturated conductivity, residual water content, and shape parameters of the Mualem-van Genuchten model in four materials via the HYDRUS model. The number of simulations evaluated in the inversion was 10 million. Using the single-computer approach (eight cores), 100,000 simulations were evaluated in 12 hours (10 million would require approximately 1200 hours). In the evaluation on HTCondor, 32 desktop computers (132 cores) were used, with a non-continuous processing time of 60 hours over five days. HTCondor reduced the processing time for uncertainty characterization by a factor of 20 (from 1200 hours to 60 hours).
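The reported throughput figures are internally consistent, which is easy to check: 100,000 forward-model runs in 12 hours on one 8-core host extrapolates to roughly 1200 hours for the full 10 million runs, so finishing in 60 hours on HTCondor is the quoted factor-of-20 reduction:

```python
# Figures taken from the abstract above.
runs_total = 10_000_000
runs_per_hour_one_host = 100_000 / 12.0        # measured on one 8-core machine

hours_one_host = runs_total / runs_per_hour_one_host   # ~1200 hours
hours_condor = 60.0                                    # 32 machines, 132 cores
speedup = hours_one_host / hours_condor                # ~20x
```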

  15. Comparison of the analgesic efficacy of oral ketorolac versus intramuscular tramadol after third molar surgery: A parallel, double-blind, randomized, placebo-controlled clinical trial.

    PubMed

    Isiordia-Espinoza, M-A; Pozos-Guillen, A; Martinez-Rider, R; Perez-Urizar, J

    2016-09-01

Preemptive analgesia is considered an alternative for treating the postsurgical pain of third molar removal. The aim of this study was to evaluate the preemptive analgesic efficacy of oral ketorolac versus intramuscular tramadol after mandibular third molar surgery. A parallel, double-blind, randomized, placebo-controlled clinical trial was carried out. Thirty patients were randomized into two treatment groups using a series of random numbers: Group A, oral ketorolac 10 mg plus intramuscular placebo (1 mL saline solution); or Group B, oral placebo (a tablet similar to oral ketorolac) plus intramuscular tramadol 50 mg diluted in 1 mL saline solution. These treatments were given 30 min before the surgery. We evaluated the time to first analgesic rescue medication, pain intensity, total analgesic consumption and adverse effects. Patients taking oral ketorolac had a longer period of analgesic coverage and less postoperative pain when compared with patients receiving intramuscular tramadol. According to the VAS and AUC results, this study suggests that 10 mg of oral ketorolac had a superior analgesic effect compared with 50 mg of tramadol when administered before mandibular third molar surgery.
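The allocation step ("randomized into two treatment groups using a series of random numbers") can be illustrated with a simple complete-randomization sketch; the trial's actual sequence-generation details are not reported beyond that phrase, so the shuffle below is only an assumed mechanism:

```python
import random

def randomize(n_patients, seed=2016):
    # Equal-sized complete randomization: shuffle a balanced list of
    # arm labels and assign them to patients 1..n in order.
    rng = random.Random(seed)
    arms = ["A"] * (n_patients // 2) + ["B"] * (n_patients - n_patients // 2)
    rng.shuffle(arms)
    return {patient: arm for patient, arm in enumerate(arms, start=1)}

allocation = randomize(30)   # 15 to Group A (ketorolac), 15 to Group B (tramadol)
```

Fixing the seed makes the allocation list reproducible and auditable, while the shuffle guarantees exactly 15 patients per arm.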

  16. Adaptive multi-GPU Exchange Monte Carlo for the 3D Random Field Ising Model

    NASA Astrophysics Data System (ADS)

    Navarro, Cristóbal A.; Huang, Wei; Deng, Youjin

    2016-08-01

This work presents an adaptive multi-GPU Exchange Monte Carlo approach for the simulation of the 3D Random Field Ising Model (RFIM). The design is based on a two-level parallelization. The first level, spin-level parallelism, maps the parallel computation onto optimal 3D thread blocks that simulate blocks of spins in shared memory with minimal halo surface, assuming a constant block volume. The second level, replica-level parallelism, uses multi-GPU computation to handle the simulation of an ensemble of replicas. CUDA's concurrent kernel execution feature is used in order to fill the occupancy of each GPU with many replicas, providing a performance boost that is most noticeable at the smallest values of L. In addition to the two-level parallel design, the work proposes an adaptive multi-GPU approach that dynamically builds a proper temperature set free of exchange bottlenecks. The strategy is based on mid-point insertions at the temperature gaps where the exchange rate is most compromised. The extra work generated by the insertions is balanced across the GPUs independently of where the mid-point insertions were performed. Performance results show that the spin-level parallelization is approximately two orders of magnitude faster than a single-core CPU version and one order of magnitude faster than a parallel multi-core CPU version running on 16 cores. Multi-GPU performance scales well in a weak scaling setting, reaching up to 99% efficiency as long as the number of GPUs and L increase together. The combination of the adaptive approach with the parallel multi-GPU design has extended our simulations to sizes L = 32, 64 on a workstation with two GPUs. Sizes beyond L = 64 can eventually be studied using larger multi-GPU systems.
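The adaptive temperature-set strategy, mid-point insertion at the most compromised exchange gaps, reduces to a few lines. The 0.2 exchange-rate threshold and the temperature values here are illustrative assumptions, not values from the paper:

```python
def insert_midpoints(temps, rates, threshold=0.2):
    # Insert a mid-point temperature into every gap whose measured
    # replica-exchange rate falls below `threshold` (sketch of the
    # adaptive strategy; the paper's actual criterion is analogous).
    out = [temps[0]]
    for t_lo, t_hi, r in zip(temps, temps[1:], rates):
        if r < threshold:
            out.append(0.5 * (t_lo + t_hi))   # mid-point insertion
        out.append(t_hi)
    return out

temps = [1.0, 1.5, 2.0, 2.5]
rates = [0.45, 0.05, 0.30]   # measured exchange rate per adjacent pair
new_temps = insert_midpoints(temps, rates)
```

In practice this would be iterated: after each insertion round, exchange rates are re-measured and further mid-points added until no gap falls below the threshold.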

  17. Approximate ground states of the random-field Potts model from graph cuts

    NASA Astrophysics Data System (ADS)

    Kumar, Manoj; Kumar, Ravinder; Weigel, Martin; Banerjee, Varsha; Janke, Wolfhard; Puri, Sanjay

    2018-05-01

    While the ground-state problem for the random-field Ising model is polynomial, and can be solved using a number of well-known algorithms for maximum flow or graph cut, the analog random-field Potts model corresponds to a multiterminal flow problem that is known to be NP-hard. Hence an efficient exact algorithm is very unlikely to exist. As we show here, it is nevertheless possible to use an embedding of binary degrees of freedom into the Potts spins in combination with graph-cut methods to solve the corresponding ground-state problem approximately in polynomial time. We benchmark this heuristic algorithm using a set of quasiexact ground states found for small systems from long parallel tempering runs. For a not-too-large number q of Potts states, the method based on graph cuts finds the same solutions in a fraction of the time. We employ the new technique to analyze the breakup length of the random-field Potts model in two dimensions.
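The role of the quasiexact reference states can be illustrated by computing an exact ground state exhaustively for a tiny random-field Potts chain (a 1D toy, not the paper's 2D systems or its graph-cut heuristic). Brute force is feasible only at small sizes, which is exactly why such states serve as benchmarks:

```python
from itertools import product

def potts_energy(spins, J, fields):
    # E = -J * sum_<ij> delta(s_i, s_j) - sum_i h_i[s_i] on a 1D chain.
    e = -J * sum(a == b for a, b in zip(spins, spins[1:]))
    e -= sum(h[s] for h, s in zip(fields, spins))
    return e

def exact_ground_state(q, J, fields):
    # Exhaustive search over q**n configurations: tractable only for
    # small systems, in the spirit of quasiexact benchmarking.
    n = len(fields)
    return min(product(range(q), repeat=n),
               key=lambda s: potts_energy(s, J, fields))

# Illustrative random fields h_i[state] for a q = 3 chain of 4 spins.
fields = [[0.0, 0.9, 0.0], [0.0, 0.0, 0.1], [0.8, 0.0, 0.0], [0.0, 0.7, 0.0]]
gs = exact_ground_state(q=3, J=1.0, fields=fields)
```

Here the ferromagnetic coupling wins: although three of the four sites prefer other states, the ground state is uniform, the kind of competition between field and coupling terms that makes the q-state problem hard at scale.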

  18. Critical appraisal of arguments for the delayed-start design proposed as alternative to the parallel-group randomized clinical trial design in the field of rare disease.

    PubMed

    Spineli, Loukia M; Jenz, Eva; Großhennig, Anika; Koch, Armin

    2017-08-17

A number of papers have proposed or evaluated the delayed-start design as an alternative to the standard two-arm parallel-group randomized clinical trial (RCT) design in the field of rare disease. However, the discussion is felt to lack sufficient consideration of the true virtues of the delayed-start design and its implications, whether in terms of required sample size, overall information, or interpretation of the estimate in the context of small populations. We aimed to evaluate whether there are real advantages of the delayed-start design, particularly in terms of overall efficacy and sample size requirements, as a proposed alternative to the standard parallel-group RCT in the field of rare disease. We used a real-life example to compare the delayed-start design with the standard RCT in terms of sample size requirements. Then, based on three scenarios regarding the development of the treatment effect over time, the advantages, limitations and potential costs of the delayed-start design are discussed. We clarify that the delayed-start design is not suitable for drugs that establish an immediate treatment effect, but rather for drugs whose effects develop over time. In addition, the sample size will always increase as a consequence of the reduced time on placebo, which results in a decreased estimated treatment effect. A number of papers have repeated well-known arguments to justify the delayed-start design as an appropriate alternative to the standard parallel-group RCT in the field of rare disease and do not discuss the specific needs of research methodology in this field. The main point is that a limited time on placebo will result in an underestimated treatment effect and, in consequence, in larger sample size requirements compared to those expected under a standard parallel-group design. This also impacts the benefit-risk assessment.

  19. Comparison of the analgesic efficacy of oral ketorolac versus intramuscular tramadol after third molar surgery: A parallel, double-blind, randomized, placebo-controlled clinical trial

    PubMed Central

    Isiordia-Espinoza, Mario-Alberto; Martinez-Rider, Ricardo; Perez-Urizar, Jose

    2016-01-01

Background Preemptive analgesia is considered an alternative for treating the postsurgical pain of third molar removal. The aim of this study was to evaluate the preemptive analgesic efficacy of oral ketorolac versus intramuscular tramadol after mandibular third molar surgery. Material and Methods A parallel, double-blind, randomized, placebo-controlled clinical trial was carried out. Thirty patients were randomized into two treatment groups using a series of random numbers: Group A, oral ketorolac 10 mg plus intramuscular placebo (1 mL saline solution); or Group B, oral placebo (a tablet similar to oral ketorolac) plus intramuscular tramadol 50 mg diluted in 1 mL saline solution. These treatments were given 30 min before the surgery. We evaluated the time to first analgesic rescue medication, pain intensity, total analgesic consumption and adverse effects. Results Patients taking oral ketorolac had a longer period of analgesic coverage and less postoperative pain when compared with patients receiving intramuscular tramadol. Conclusions According to the VAS and AUC results, this study suggests that 10 mg of oral ketorolac had a superior analgesic effect compared with 50 mg of tramadol when administered before mandibular third molar surgery. Key words: Ketorolac, tramadol, third molar surgery, pain, preemptive analgesia. PMID:27475688

  20. Effect of loading on unintentional lifting velocity declines during single sets of repetitions to failure during upper and lower extremity muscle actions.

    PubMed

    Izquierdo, M; González-Badillo, J J; Häkkinen, K; Ibáñez, J; Kraemer, W J; Altadill, A; Eslava, J; Gorostiaga, E M

    2006-09-01

The purpose of this study was to examine the effect of different loads on repetition speed during single sets of repetitions to failure in the bench press and parallel squat. Thirty-six physically active men performed a 1-repetition maximum test in the bench press (1 RM (BP)) and half squat position (1 RM (HS)), and performed maximal power-output continuous repetition sets to failure, in random order every 10 days, with a submaximal load (60%, 65%, 70%, and 75% of 1RM, respectively) during the bench press and parallel squat. The average velocity of each repetition was recorded by linking a rotary encoder to the end part of the bar. The values of 1 RM (BP) and 1 RM (HS) were 91 +/- 17 and 200 +/- 20 kg, respectively. The number of repetitions performed for a given percentage of 1RM was significantly higher (p < 0.001) in the half squat than in the bench press. Average repetition velocity decreased at a greater rate in the bench press than in the parallel squat. The significant reductions in average repetition velocity (expressed as a percentage of the average velocity achieved during the initial repetition) were observed at a higher percentage of the total number of repetitions performed in the parallel squat (48 - 69%) than in the bench press (34 - 40%). The major finding of this study was that, for a given muscle action (bench press or parallel squat), the pattern of reduction in the relative average velocity achieved during each repetition and the relative number of repetitions performed was the same for all percentages of 1RM tested. However, relative average velocity decreased at a greater rate in the bench press than in the parallel squat. This indicates that in the bench press the significant reductions in average repetition velocity occurred when the number of repetitions was over one third (34%) of the total number of repetitions performed, whereas in the parallel squat it was nearly one half (48%). 
Conceptually, this would indicate that for a given exercise (bench press or squat) and percentage of maximal dynamic strength (1RM), the pattern of velocity decrease can be predicted over a set of repetitions, so that a minimum repetition threshold to ensure maximal speed performance is determined.

  1. Numerical Test of Analytical Theories for Perpendicular Diffusion in Small Kubo Number Turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heusen, M.; Shalchi, A., E-mail: husseinm@myumanitoba.ca, E-mail: andreasm4@yahoo.com

In the literature, one can find various analytical theories for perpendicular diffusion of energetic particles interacting with magnetic turbulence. Besides quasi-linear theory, there are different versions of the nonlinear guiding center (NLGC) theory and the unified nonlinear transport (UNLT) theory. For turbulence with high Kubo numbers, such as two-dimensional turbulence or noisy reduced magnetohydrodynamic turbulence, the aforementioned nonlinear theories provide similar results. For slab and small Kubo number turbulence, however, this is not the case. In the current paper, we compare different linear and nonlinear theories with each other and with test-particle simulations for a noisy slab model corresponding to small Kubo number turbulence. We show that UNLT theory agrees very well with all performed test-particle simulations. In the limit of long parallel mean free paths, the perpendicular mean free path asymptotically approaches the quasi-linear limit as predicted by the UNLT theory. For short parallel mean free paths we find a Rechester and Rosenbluth type of scaling, as predicted by UNLT theory as well. The original NLGC theory disagrees with all performed simulations regardless of the parallel mean free path. The random ballistic interpretation of the NLGC theory agrees much better with the simulations, but compared to UNLT theory the agreement is inferior. We conclude that for this type of small Kubo number turbulence, only the latter theory allows for an accurate description of perpendicular diffusion.

  2. Performance of multi-hop parallel free-space optical communication over gamma-gamma fading channel with pointing errors.

    PubMed

    Gao, Zhengguang; Liu, Hongzhan; Ma, Xiaoping; Lu, Wei

    2016-11-10

Multi-hop parallel relaying is considered in a free-space optical (FSO) communication system deploying binary phase-shift keying (BPSK) modulation under the combined effects of a gamma-gamma (GG) distribution and misalignment fading. Based on the best path selection criterion, the cumulative distribution function (CDF) of this cooperative random variable is derived. The performance of this optical mesh network is then analyzed in detail. A Monte Carlo simulation is also conducted to demonstrate the effectiveness of the results for the average bit error rate (ABER) and outage probability. The numerical results show that a smaller average transmitted optical power is needed to achieve the same ABER and outage probability when using the multi-hop parallel network in FSO links. Furthermore, using a larger number of hops and cooperative paths improves the quality of the communication.
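The best-path-selection CDF underlying this analysis is simply the single-path CDF raised to the number of independent paths, F_best(x) = F(x)^N. A quick Monte Carlo check, using exponential channel gains as an analytically convenient stand-in for the gamma-gamma/pointing-error model:

```python
import numpy as np

rng = np.random.default_rng(7)
n_paths, n_trials = 3, 200_000

# Stand-in i.i.d. per-path channel gains (exponential for tractability;
# the paper's model is gamma-gamma fading with pointing errors).
gains = rng.exponential(1.0, size=(n_trials, n_paths))
best = gains.max(axis=1)               # best path selection criterion

x = 1.0
F_single = 1.0 - np.exp(-x)            # analytic single-path CDF at x
empirical = float(np.mean(best <= x))  # empirical CDF of the selected path
analytic = float(F_single ** n_paths)  # CDF of the max of N i.i.d. paths
```

The same max-of-i.i.d. argument is what lets the derived CDF feed directly into closed-form ABER and outage expressions.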

  3. Parallelization of KENO-Va Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Ramón, Javier; Peña, Jorge

    1995-07-01

KENO-Va is a code integrated within the SCALE system developed by Oak Ridge that solves the transport equation through the Monte Carlo method. It is being used at the Consejo de Seguridad Nuclear (CSN) to perform criticality calculations for fuel storage pools and shipping casks. Two parallel versions of the code have been generated: one for shared-memory machines and another for distributed-memory systems using the message-passing interface PVM. In both versions the neutrons of each generation are tracked in parallel. In order to preserve the reproducibility of the results in both versions, advanced seeds for the random numbers were used. The CONVEX C3440 with four processors and shared memory at CSN was used to implement the shared-memory version. An FDDI network of 6 HP9000/735 workstations was employed to implement the message-passing version using proprietary PVM. The speedup obtained was 3.6 in both cases.
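The "advanced seeds" trick, giving each parallel tracker a far-advanced point of one and the same stream so that the parallel run reproduces the serial one, can be illustrated with NumPy's PCG64 jump-ahead (an illustration of the idea, not the KENO-Va generator):

```python
import numpy as np

STRIDE = 10_000      # draws reserved per worker/generation (illustrative)

def worker_rng(worker_id, seed=12345):
    # Advance the shared stream by worker_id * STRIDE draws without
    # generating them, then hand the worker its own segment.
    bitgen = np.random.PCG64(seed)
    bitgen.advance(worker_id * STRIDE)
    return np.random.Generator(bitgen)

# Worker 1's first draw equals draw number STRIDE of the serial stream,
# so a parallel run consumes exactly the numbers the serial run would.
serial = np.random.Generator(np.random.PCG64(12345))
serial_draws = serial.random(STRIDE + 1)
parallel_first = worker_rng(1).random()
```

Because jump-ahead is a cheap state transformation rather than actual generation, the stride can be made large enough that worker segments never overlap.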

  4. A package of Linux scripts for the parallelization of Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Badal, Andreu; Sempau, Josep

    2006-09-01

Despite the fact that fast computers are nowadays available at low cost, there are many situations where obtaining a reasonably low statistical uncertainty in a Monte Carlo (MC) simulation involves a prohibitively large amount of time. This limitation can be overcome by having recourse to parallel computing. Most tools designed to facilitate this approach require modification of the source code and the installation of additional software, which may be inconvenient for some users. We present a set of tools, named clonEasy, that implement a parallelization scheme for MC simulations that is free from these drawbacks. In clonEasy, which is designed to run under Linux, a set of "clone" CPUs is governed by a "master" computer by taking advantage of the capabilities of the Secure Shell (ssh) protocol. Any Linux computer on the Internet that can be ssh-accessed by the user can be used as a clone. A key ingredient for the parallel calculation to be reliable is the availability of an independent string of random numbers for each CPU. Many generators—such as RANLUX, RANECU or the Mersenne Twister—can readily produce these strings by initializing them appropriately and, hence, they are suitable to be used with clonEasy. This work was primarily motivated by the need to find a straightforward way to parallelize PENELOPE, a code for MC simulation of radiation transport that (in its current 2005 version) employs the generator RANECU, which uses a combination of two multiplicative linear congruential generators (MLCGs). Thus, this paper is focused on this class of generators and, in particular, we briefly present an extension of RANECU that increases its period up to ˜5×10 and we introduce seedsMLCG, a tool that provides the information necessary to initialize disjoint sequences of an MLCG to feed different CPUs. 
This program, in combination with clonEasy, makes it possible to run PENELOPE in parallel easily, without requiring specific libraries or significant alterations of the sequential code. Program summary 1. Title of program: clonEasy. Catalogue identifier: ADYD_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYD_v1_0. Program obtainable from: CPC Program Library, Queen's University of Belfast, Northern Ireland. Computer for which the program is designed and others on which it is operable: any computer with a Unix-style shell (bash), support for the Secure Shell protocol and a FORTRAN compiler. Operating systems under which the program has been tested: Linux (RedHat 8.0, SuSe 8.1, Debian Woody 3.1). Compilers: GNU FORTRAN g77 (Linux); g95 (Linux); Intel Fortran Compiler 7.1 (Linux). Programming language used: Linux shell (bash) script, FORTRAN 77. No. of bits in a word: 32. No. of lines in distributed program, including test data, etc.: 1916. No. of bytes in distributed program, including test data, etc.: 18 202. Distribution format: tar.gz. Nature of the physical problem: there are many situations where a Monte Carlo simulation involves a huge amount of CPU time. Parallelizing such calculations is a simple way of obtaining a relatively low statistical uncertainty in a reasonable amount of time. Method of solution: the presented collection of Linux scripts and auxiliary FORTRAN programs implements Secure Shell-based communication between a "master" computer and a set of "clones". The aim of this communication is to execute a code that performs a Monte Carlo simulation on all the clones simultaneously. The code is unique, but each clone is fed a different set of random seeds. Hence, clonEasy effectively permits the parallelization of the calculation. Restrictions on the complexity of the program: clonEasy can only be used with programs that produce statistically independent results using the same code, but with a different sequence of random numbers.
Users must choose the initialization values for the random number generator on each computer and combine the output from the different executions. A FORTRAN program to combine the final results is also provided. Typical running time: the execution time of each script depends largely on the number of computers used, the actions to be performed and, to a lesser extent, on the network connection bandwidth. Unusual features of the program: any computer on the Internet with a Secure Shell client/server program installed can be used as a node of a virtual computer cluster for parallel calculations with the sequential source code. The simplicity of the parallelization scheme makes the use of this package a straightforward task, which does not require installing any additional libraries. Program summary 2. Title of program: seedsMLCG. Catalogue identifier: ADYE_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYE_v1_0. Program obtainable from: CPC Program Library, Queen's University of Belfast, Northern Ireland. Computer for which the program is designed and others on which it is operable: any computer with a FORTRAN compiler. Operating systems under which the program has been tested: Linux (RedHat 8.0, SuSe 8.1, Debian Woody 3.1), MS Windows (2000, XP). Compilers: GNU FORTRAN g77 (Linux and Windows); g95 (Linux); Intel Fortran Compiler 7.1 (Linux); Compaq Visual Fortran 6.1 (Windows). Programming language used: FORTRAN 77. No. of bits in a word: 32. Memory required to execute with typical data: 500 kilobytes. No. of lines in distributed program, including test data, etc.: 492. No. of bytes in distributed program, including test data, etc.: 5582. Distribution format: tar.gz. Nature of the physical problem: statistically independent results from different runs of a Monte Carlo code can be obtained using uncorrelated sequences of random numbers on each execution.
Multiplicative linear congruential generators (MLCG), or other generators that are based on them such as RANECU, can be adapted to produce these sequences. Method of solution: for a given MLCG, the presented program calculates initialization values that produce disjoint, consecutive sequences of pseudo-random numbers. The calculated values initiate the generator at distant positions of the random number cycle and can be used, for instance, in a parallel simulation. The values are found using the formula S_J = (a^J S_0) MOD m, which gives the value that the generator will produce after J iterations of the MLCG. Restrictions on the complexity of the program: the 32-bit length restriction for the integer variables in standard FORTRAN 77 limits the produced seeds to be separated by a distance smaller than 2^31 when the distance J is expressed as an integer value. The program allows the user to input the distance as a power of 10, for the purpose of efficiently splitting the sequence of generators with a very long period. Typical running time: the execution time depends on the parameters of the MLCG used and on the distance between the generated seeds. The generation of 10^6 seeds separated by 10^12 units in the sequential cycle, for one of the MLCGs found in the RANECU generator, takes 3 s on a 2.4 GHz Intel Pentium 4 using the g77 compiler.
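The seed-splitting described above rests on the MLCG jump-ahead identity: since S_{k+1} = (a·S_k) mod m, the state after J steps is S_J = ((a^J mod m)·S_0) mod m, computable in O(log J) time by modular exponentiation. A minimal sketch (the multiplier and modulus below are those of one of RANECU's two MLCGs; the stream count and stride are illustrative):

```python
def jump_ahead(seed: int, a: int, m: int, j: int) -> int:
    """State of the MLCG S <- (a*S) mod m after j steps, in O(log j) time."""
    return (pow(a, j, m) * seed) % m

def disjoint_seeds(seed: int, a: int, m: int, n_streams: int, stride: int):
    """Initial seeds for n_streams disjoint subsequences, `stride` apart."""
    return [jump_ahead(seed, a, m, i * stride) for i in range(n_streams)]

# One of the two MLCGs combined in RANECU (L'Ecuyer 1988):
A1, M1 = 40014, 2147483563
seeds = disjoint_seeds(12345, A1, M1, n_streams=4, stride=10**12)
```

Each CPU then starts its generator from one of these seeds; as long as each run consumes fewer than `stride` numbers, the streams never overlap.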

  5. A study on the probability of twin plane formation during the nucleation of AgBr and AgCl crystals in the aqueous gelatin solution

    NASA Astrophysics Data System (ADS)

    Ohzeki, Katsuhisa; Hosoya, Yoichi

    2007-07-01

A study was made of the probability of twin plane formation during the nucleation of AgBr and AgCl crystals. The growth conditions were controlled to keep the number of nuclei constant, neither decreasing owing to dissolution nor increasing owing to the formation of new nuclei during the growth process. Under these conditions, the nuclei were grown to have {1 1 1} faces on their surfaces by controlling pAg in the reaction solution and, in the case of AgCl crystal formation, by use of a growth modifier. The number of twin planes in each crystal was determined in the conventional way from its morphology. The dependence of the number of twin planes per crystal on the probability of twin plane formation followed a Poisson distribution, indicating the random formation of twin planes on the {1 1 1} faces of a nucleus. The finding that the ratio of the number of AgCl crystals with parallel twin planes to all multiply twinned crystals was about 10% supports the random formation of a twin plane and suggests that twin plane formation took place on the {1 1 1} surfaces at the eight possible corners of a nucleus. In contrast, the ratio of the number of AgBr crystals with parallel twin planes to all multiply twinned crystals was more than 50%. This result was explained by the anisotropic growth of a singly twinned nucleus, the growth rate of {1 0 0} surfaces being higher than that of {1 1 1} surfaces.
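The Poisson comparison made in this record is easy to reproduce numerically: if twin planes form independently at random, the fraction of crystals carrying exactly k twin planes should follow P(k; λ) = λ^k e^(−λ)/k!. A small sketch (the mean λ = 0.5 below is illustrative, not a value from the paper):

```python
import math

def poisson_pmf(k: int, lam: float) -> float:
    """Probability of exactly k twin planes per crystal when twin-plane
    formation is random with mean lam twin planes per crystal."""
    return lam ** k * math.exp(-lam) / math.factorial(k)

# Expected fractions of crystals with 0, 1, 2, ... twin planes for a
# hypothetical mean of 0.5 twin planes per crystal:
expected_fractions = [poisson_pmf(k, 0.5) for k in range(5)]
```

Comparing such expected fractions against observed morphology counts is the kind of check that supports the "random formation" conclusion.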

  6. Improving diagnostic accuracy of prostate carcinoma by systematic random map-biopsy.

    PubMed

    Szabó, J; Hegedûs, G; Bartók, K; Kerényi, T; Végh, A; Romics, I; Szende, B

    2000-01-01

Systematic random, rectal ultrasound-directed map biopsy of the prostate was performed in 77 RDE (rectal digital examination)-positive and 25 RDE-negative cases, where applicable. Hypoechoic areas were found in 30% of RDE-positive and in 16% of RDE-negative cases. The carcinoma detection rate in the hypoechoic areas was 6.5% in RDE-positive and 0% in RDE-negative cases, whereas systematic map biopsy detected carcinoma in 62% of RDE-positive and 16% of RDE-negative patients. The probability of a positive diagnosis of prostate carcinoma increased in parallel with the number of biopsy samples per case. The importance of systematic map biopsy is emphasized.
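The reported rise in diagnostic yield with the number of cores follows from elementary probability: under the simplifying (and here purely hypothetical) assumption that each core independently hits tumor tissue with probability p, the chance of at least one positive core is 1 − (1 − p)^n, which grows monotonically with n:

```python
def detection_prob(p_core: float, n_cores: int) -> float:
    """P(at least one positive core), assuming independent cores.
    A toy model for illustration, not the study's statistics."""
    return 1.0 - (1.0 - p_core) ** n_cores

# With a hypothetical 15% per-core hit rate, yield vs. number of cores:
yields = [detection_prob(0.15, n) for n in (1, 6, 12)]
```

The independence assumption overstates the gain for closely spaced cores, but the qualitative trend matches the abstract's observation.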

  7. mm_par2.0: An object-oriented molecular dynamics simulation program parallelized using a hierarchical scheme with MPI and OPENMP

    NASA Astrophysics Data System (ADS)

    Oh, Kwang Jin; Kang, Ji Hoon; Myung, Hun Joo

    2012-02-01

We have revised the general-purpose parallel molecular dynamics simulation program mm_par using object-oriented programming, and have parallelized the revised version using a hierarchical scheme in order to utilize more processors for a given system size. Benchmark results are presented. New version program summary. Program title: mm_par2.0 Catalogue identifier: ADXP_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXP_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 2 390 858 No. of bytes in distributed program, including test data, etc.: 25 068 310 Distribution format: tar.gz Programming language: C++ Computer: Any system operated by Linux or Unix Operating system: Linux Classification: 7.7 External routines: We provide wrappers for the FFTW [1] and Intel MKL [2] FFT routines; the Numerical Recipes [3] FFT, random number generator, and eigenvalue solver routines; the SPRNG [4] and Mersenne Twister [5] random number generators; and a space-filling curve routine. Catalogue identifier of previous version: ADXP_v1_0 Journal reference of previous version: Comput. Phys. Comm. 174 (2006) 560 Does the new version supersede the previous version?: Yes Nature of problem: Structural, thermodynamic, and dynamical properties of fluids and solids from microscopic to mesoscopic scales. Solution method: Molecular dynamics simulation in the NVE, NVT, and NPT ensembles; Langevin dynamics simulation; dissipative particle dynamics simulation. Reasons for new version: First, object-oriented programming has been used, which is known to be open for extension and closed for modification, and to be better for maintenance. Second, version 1.0 was based on the atom decomposition and domain decomposition schemes [6] for parallelization.
However, atom decomposition is unpopular because of its poor scalability, whereas the domain decomposition scheme scales better. Domain decomposition still has a limitation in utilizing a large number of cores on recent petascale computers, because the domain size must be larger than the potential cutoff distance. To go beyond this limitation, a hierarchical parallelization scheme has been adopted in this new version and implemented using MPI [7] and OPENMP [8]. Summary of revisions: (1) Object-oriented programming has been used. (2) A hierarchical parallelization scheme has been adopted. (3) The SPME routine has been fully parallelized with a parallel 3D FFT using a volumetric decomposition scheme [9]. K.J.O. thanks Mr. Seung Min Lee for useful discussion on programming and debugging. Running time: Running time depends on system size and the methods used. For a test system containing a protein (PDB id: 5DHFR) with the CHARMM22 force field [10] and 7023 TIP3P [11] waters in a simulation box of dimensions 62.23 Å×62.23 Å×62.23 Å, the benchmark results are given in Fig. 1. Here the potential cutoff distance was set to 12 Å and the switching function was applied from 10 Å for the force calculation in real space. For the SPME [12] calculation, K1, K2, and K3 were set to 64 and the interpolation order was set to 4. The fast Fourier transforms used the Intel MKL library. All bonds involving hydrogen atoms were constrained using the SHAKE/RATTLE algorithms [13,14]. The code was compiled using Intel compiler version 11.1 and mvapich2 version 1.5. Fig. 2 shows performance gains from using the CUDA-enabled version [15] of mm_par for the 5DHFR simulation in water on an Intel Core2Quad 2.83 GHz with a GeForce GTX 580. Even though mm_par2.0 has not yet been ported to the GPU, these data are useful for estimating mm_par2.0 performance on a GPU. Fig. 1: Timing results for 1000 MD steps; 1, 2, 4, and 8 denote the number of OPENMP threads. Fig. 2: Timing results for 1000 MD steps from double-precision simulation on the CPU, single-precision simulation on the GPU, and double-precision simulation on the GPU.

  8. Study protocol of Prednisone in episodic Cluster Headache (PredCH): a randomized, double-blind, placebo-controlled parallel group trial to evaluate the efficacy and safety of oral prednisone as an add-on therapy in the prophylactic treatment of episodic cluster headache with verapamil

    PubMed Central

    2013-01-01

Background Episodic cluster headache (ECH) is a primary headache disorder that severely impairs patients' quality of life. Verapamil is the first-line agent for initiating prophylactic treatment. Because of verapamil's delayed onset of efficacy and the slow dose titration required for tolerability, clinicians frequently add prednisone to the initial prophylactic treatment of a cluster episode. This strategy is thought to effectively reduce the number and intensity of cluster attacks at the beginning of a cluster episode (before verapamil becomes effective). This study will assess the efficacy and safety of oral prednisone as an add-on therapy to verapamil and compare it with verapamil monotherapy in the initial prophylactic treatment of a cluster episode. Methods and design PredCH is a prospective, randomized, double-blind, placebo-controlled trial with parallel study arms. Eligible patients with episodic cluster headache will be randomized to a treatment intervention with prednisone or to a placebo arm. The multi-center trial will be conducted in eight German headache clinics that specialize in the treatment of ECH. Discussion PredCH is designed to assess whether oral prednisone added to the first-line agent verapamil reduces the number and intensity of cluster attacks at the beginning of a cluster episode as compared with verapamil monotherapy. Trial registration German Clinical Trials Register DRKS00004716 PMID:23889923

  9. Effects of a Short Course of Eszopiclone on Continuous Positive Airway Pressure Adherence

    DTIC Science & Technology

    2009-11-17

We collected additional data related to mood and depression, libido and erectile dysfunction, and quality of life that will be included in...onset of therapy improves long-term CPAP adherence more than placebo in adults with obstructive sleep apnea. Design: Parallel randomized, placebo...collected. (ClinicalTrials.gov registration number: NCT00612157) Setting: Academic sleep disorder center. Patients: 160 adults (mean age, 45.7 years [SD

  10. Oscillations and chaos in neural networks: an exactly solvable model.

    PubMed Central

    Wang, L P; Pichler, E E; Ross, J

    1990-01-01

We consider a randomly diluted higher-order network with noise, consisting of McCulloch-Pitts neurons that interact by Hebbian-type connections. For this model, exact dynamical equations are derived and solved for both parallel and random sequential updating algorithms. For parallel dynamics, we find a rich spectrum of different behaviors including static retrieving and oscillatory and chaotic phenomena in different parts of the parameter space. The bifurcation parameters include first- and second-order neuronal interaction coefficients and a rescaled noise level, which represents the combined effects of the random synaptic dilution, interference between stored patterns, and additional background noise. We show that a marked difference in terms of the occurrence of oscillations or chaos exists between neural networks with parallel and random sequential dynamics. PMID:2251287
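The difference between the two updating schemes can be seen in a two-neuron toy model with McCulloch-Pitts neurons (a sketch only, not the paper's diluted higher-order network): under parallel updating a mutually inhibitory pair oscillates with period 2, while random sequential updating settles into a fixed point.

```python
import random

def sign(x: float) -> int:
    return 1 if x >= 0 else -1

def parallel_update(state, W):
    """All neurons update simultaneously from the previous state."""
    n = len(state)
    return [sign(sum(W[i][j] * state[j] for j in range(n))) for i in range(n)]

def random_sequential_update(state, W, rng):
    """Neurons update one at a time in random order, seeing fresh values."""
    state, n = list(state), len(state)
    order = list(range(n))
    rng.shuffle(order)
    for i in order:
        state[i] = sign(sum(W[i][j] * state[j] for j in range(n)))
    return state

W = [[0, -1], [-1, 0]]                                # mutually inhibitory pair
osc = parallel_update([1, 1], W)                      # flips the whole state
fix = random_sequential_update([1, 1], W, random.Random(0))
```

Applying `parallel_update` twice returns the original state (a period-2 cycle), whereas the sequential result is a fixed point of further updates, mirroring the abstract's contrast between the two dynamics.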

  11. Theory and implementation of a very high throughput true random number generator in field programmable gate array

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yonggang, E-mail: wangyg@ustc.edu.cn; Hui, Cong; Liu, Chong

The contribution of this paper is proposing a new entropy extraction mechanism based on sampling phase jitter in ring oscillators to make a high throughput true random number generator in a field programmable gate array (FPGA) practical. Starting from experimental observation and analysis of the entropy source in FPGA, a multi-phase sampling method is exploited to harvest the clock jitter with a maximum entropy and fast sampling speed. This parametrized design is implemented in a Xilinx Artix-7 FPGA, where the carry chains in the FPGA are explored to realize the precise phase shifting. The generator circuit is simple and resource-saving, so that multiple generation channels can run in parallel to scale the output throughput for specific applications. The prototype integrates 64 circuit units in the FPGA to provide a total output throughput of 7.68 Gbps, which meets the requirement of current high-speed quantum key distribution systems. The randomness evaluation, as well as its robustness to ambient temperature, confirms that the new method in a purely digital fashion can provide high-speed high-quality random bit sequences for a variety of embedded applications.

  12. Theory and implementation of a very high throughput true random number generator in field programmable gate array.

    PubMed

    Wang, Yonggang; Hui, Cong; Liu, Chong; Xu, Chao

    2016-04-01

    The contribution of this paper is proposing a new entropy extraction mechanism based on sampling phase jitter in ring oscillators to make a high throughput true random number generator in a field programmable gate array (FPGA) practical. Starting from experimental observation and analysis of the entropy source in FPGA, a multi-phase sampling method is exploited to harvest the clock jitter with a maximum entropy and fast sampling speed. This parametrized design is implemented in a Xilinx Artix-7 FPGA, where the carry chains in the FPGA are explored to realize the precise phase shifting. The generator circuit is simple and resource-saving, so that multiple generation channels can run in parallel to scale the output throughput for specific applications. The prototype integrates 64 circuit units in the FPGA to provide a total output throughput of 7.68 Gbps, which meets the requirement of current high-speed quantum key distribution systems. The randomness evaluation, as well as its robustness to ambient temperature, confirms that the new method in a purely digital fashion can provide high-speed high-quality random bit sequences for a variety of embedded applications.
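The jitter-harvesting idea can be illustrated with a software toy model (purely illustrative; the paper's entropy source is a hardware ring oscillator sampled through FPGA carry chains, and every parameter below is made up): an oscillator whose edge spacing carries Gaussian jitter is sampled at a fixed, incommensurate rate, and the sampled logic level becomes the output bit.

```python
import random

def jittery_bits(n_bits, half_period=1.0, jitter=0.05, sample_dt=37.3,
                 seed=None):
    """Sample the level of a free-running oscillator whose edge spacing
    has Gaussian jitter; one output bit per sample (toy model only)."""
    rng = random.Random(seed)
    t, next_edge, level, bits = 0.0, 0.0, 0, []
    for _ in range(n_bits):
        t += sample_dt
        while next_edge < t:                          # advance oscillator edges
            next_edge += max(1e-9, rng.gauss(half_period, jitter))
            level ^= 1                                # toggle on each edge
        bits.append(level)
    return bits
```

A hardware design would add post-processing (e.g. XOR of parallel channels) and the statistical testing mentioned in the abstract; this sketch only shows where the entropy comes from.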

  13. Design of high-throughput and low-power true random number generator utilizing perpendicularly magnetized voltage-controlled magnetic tunnel junction

    NASA Astrophysics Data System (ADS)

    Lee, Hochul; Ebrahimi, Farbod; Amiri, Pedram Khalili; Wang, Kang L.

    2017-05-01

A true random number generator based on perpendicularly magnetized voltage-controlled magnetic tunnel junction devices (MRNG) is presented. Unlike MTJs used in memory applications, where a stable bit is needed to store information, in this work the MTJ is intentionally designed with small perpendicular magnetic anisotropy (PMA). This allows one to take advantage of the thermally activated fluctuations of its free layer as a stochastic noise source. Furthermore, we take advantage of the voltage dependence of the anisotropy to temporarily drive the MTJ into an unstable state when a voltage is applied. Since the MTJ has two energetically stable states, the final state is randomly chosen by thermal fluctuation. The voltage-controlled magnetic anisotropy (VCMA) effect is used to generate the metastable state of the MTJ by lowering its energy barrier. The proposed MRNG achieves a high throughput (32 Gbps) by integrating a 64 × 64 MTJ array with CMOS circuits and executing operations in parallel. Furthermore, the circuit consumes very little energy per random bit (31.5 fJ/bit) owing to the high energy efficiency of voltage-controlled MTJ switching.

  14. Parallel hyperspectral image reconstruction using random projections

    NASA Astrophysics Data System (ADS)

    Sevilla, Jorge; Martín, Gabriel; Nascimento, José M. P.

    2016-10-01

Spaceborne sensor systems are characterized by scarce onboard computing and storage resources and by communication links with reduced bandwidth. Random projection techniques have been demonstrated to be an effective and lightweight way to reduce the number of measurements in hyperspectral data, thereby reducing the volume of data transmitted to the Earth station. However, reconstructing the original data from the random projections may be computationally expensive. SpeCA is a blind hyperspectral reconstruction technique that exploits the fact that hyperspectral vectors often belong to a low-dimensional subspace. SpeCA has shown promising results in the task of recovering hyperspectral data from a reduced number of random measurements. In this manuscript we focus on the implementation of the SpeCA algorithm for graphics processing units (GPU) using the compute unified device architecture (CUDA). Experimental results conducted with synthetic and real hyperspectral datasets on an NVIDIA GeForce GTX 980 GPU reveal that the use of GPUs can provide real-time reconstruction. The achieved speedup is up to 22 times when compared with SpeCA running on one core of an Intel i7-4790K CPU (3.4 GHz) with 32 GB of memory.
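The premise SpeCA builds on (not the SpeCA algorithm itself) can be sketched in pure Python: a signal living in a p-dimensional subspace can be recovered exactly from as few as p random Gaussian measurements by solving a small linear system.

```python
import random

def random_projection(x, k, rng):
    """Compress x (length n) into k random Gaussian measurements y = Phi x."""
    n = len(x)
    phi = [[rng.gauss(0.0, 1.0) for _ in range(n)] for _ in range(k)]
    y = [sum(phi[i][j] * x[j] for j in range(n)) for i in range(k)]
    return phi, y

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small square system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= f * M[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x
```

If x = E·α for a known n×p basis E, then y = (Φ E)·α; with k = p measurements, solving that p×p system recovers α and hence x. SpeCA's harder, blind setting must also estimate the subspace, which is where the GPU acceleration pays off.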

  15. Statistical study of defects caused by primary knock-on atoms in fcc Cu and bcc W using molecular dynamics

    NASA Astrophysics Data System (ADS)

    Warrier, M.; Bhardwaj, U.; Hemani, H.; Schneider, R.; Mutzke, A.; Valsakumar, M. C.

    2015-12-01

We report on Molecular Dynamics (MD) simulations carried out in fcc Cu and bcc W using the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) code to study (i) the statistical variations in the number of interstitials and vacancies produced by energetic primary knock-on atoms (PKA) (0.1-5 keV) directed in random directions and (ii) the in-cascade cluster size distributions. Around 60-80 random directions have to be explored for the average number of displaced atoms to become steady in the case of fcc Cu, whereas for bcc W around 50-60 random directions need to be explored. The number of Frenkel pairs produced in the MD simulations is compared with that from the Binary Collision Approximation Monte Carlo (BCA-MC) code SDTRIM-SP and with the results of the NRT model. A proper choice of the damage energy, i.e. the energy required to create a stable interstitial, is essential for the BCA-MC results to match the MD results. On the computational front, in-situ processing avoids the input/output (I/O) of several terabytes of atomic position data when exploring a large number of random directions, with no difference in run time because the extra processing time is offset by the time saved in I/O.
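Sampling the PKA directions uniformly over the sphere is the one step of this workflow that is easy to get wrong (uniformly sampled angles bias toward the poles); the standard fix, shown here as a sketch, is to normalize a vector of independent Gaussian components. A cumulative-mean helper of the kind used to judge when the defect counts have become steady is included:

```python
import math
import random

def random_direction(rng):
    """Isotropic unit vector in 3D: normalize independent Gaussians."""
    while True:
        v = [rng.gauss(0.0, 1.0) for _ in range(3)]
        r = math.sqrt(sum(c * c for c in v))
        if r > 1e-12:                     # guard against a degenerate draw
            return [c / r for c in v]

def running_means(samples):
    """Cumulative averages, e.g. of defect counts per PKA direction,
    to check convergence as more random directions are explored."""
    out, total = [], 0.0
    for i, x in enumerate(samples, 1):
        total += x
        out.append(total / i)
    return out
```

Plotting `running_means` of the interstitial counts against the number of directions is what reveals the 60-80 (Cu) and 50-60 (W) thresholds reported above.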

  16. Sample size calculations for stepped wedge and cluster randomised trials: a unified approach

    PubMed Central

    Hemming, Karla; Taljaard, Monica

    2016-01-01

    Objectives To clarify and illustrate sample size calculations for the cross-sectional stepped wedge cluster randomized trial (SW-CRT) and to present a simple approach for comparing the efficiencies of competing designs within a unified framework. Study Design and Setting We summarize design effects for the SW-CRT, the parallel cluster randomized trial (CRT), and the parallel cluster randomized trial with before and after observations (CRT-BA), assuming cross-sectional samples are selected over time. We present new formulas that enable trialists to determine the required cluster size for a given number of clusters. We illustrate by example how to implement the presented design effects and give practical guidance on the design of stepped wedge studies. Results For a fixed total cluster size, the choice of study design that provides the greatest power depends on the intracluster correlation coefficient (ICC) and the cluster size. When the ICC is small, the CRT tends to be more efficient; when the ICC is large, the SW-CRT tends to be more efficient and can serve as an alternative design when the CRT is an infeasible design. Conclusion Our unified approach allows trialists to easily compare the efficiencies of three competing designs to inform the decision about the most efficient design in a given scenario. PMID:26344808
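For the parallel CRT, the classical design effect that such comparisons build on is DE = 1 + (m − 1)·ρ, where m is the cluster size and ρ the ICC; the number of clusters per arm follows by inflating an individually randomized sample size. A sketch of these standard formulas (not the paper's new SW-CRT expressions):

```python
import math

def design_effect_crt(m: int, icc: float) -> float:
    """Design effect for a parallel cluster randomized trial with
    cluster size m and intracluster correlation coefficient icc."""
    return 1.0 + (m - 1) * icc

def clusters_per_arm(n_individual: int, m: int, icc: float) -> int:
    """Clusters per arm after inflating an individually randomized
    sample size n_individual by the design effect."""
    return math.ceil(n_individual * design_effect_crt(m, icc) / m)
```

For example, 100 subjects per arm under individual randomization, clusters of size 20 and an ICC of 0.05 give DE = 1.95 and 10 clusters per arm.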

  17. An international randomized study of a home-based self-management program for severe COPD: the COMET.

    PubMed

    Bourbeau, Jean; Casan, Pere; Tognella, Silvia; Haidl, Peter; Texereau, Joëlle B; Kessler, Romain

    2016-01-01

Most hospitalizations and costs related to COPD are due to exacerbations and insufficient disease management. The COPD patient Management European Trial (COMET) is investigating a home-based multicomponent COPD self-management program designed to reduce exacerbations and hospital admissions. It is a multicenter, parallel, randomized, controlled, open-label superiority trial conducted in thirty-three hospitals in four European countries, enrolling a total of 345 patients with Global initiative for chronic Obstructive Lung Disease stage III/IV COPD. The program includes extensive patient coaching by health care professionals to improve self-management (eg, developing skills to better manage their disease), an e-health platform for reporting frequent health status updates, rapid intervention when necessary, and oxygen therapy monitoring. The comparator is usual management as per each center's routine practice. Outcomes include the yearly number of hospital days for acute care, number of exacerbations, quality of life, deaths, and costs.

  18. Xyloglucan for the treatment of acute diarrhea: results of a randomized, controlled, open-label, parallel group, multicentre, national clinical trial.

    PubMed

    Gnessi, Lucio; Bacarea, Vladimir; Marusteri, Marius; Piqué, Núria

    2015-10-30

There is a strong rationale for the use of agents with film-forming protective properties, like xyloglucan, for the treatment of acute diarrhea; however, few data from clinical trials are available. A randomized, controlled, open-label, parallel group, multicentre clinical trial was performed to evaluate the efficacy and safety of xyloglucan, in comparison with diosmectite and Saccharomyces boulardii, in adult patients with acute diarrhea due to different causes. Patients were randomized to receive a 3-day treatment. Symptoms (stool type, nausea, vomiting, abdominal pain and flatulence) were assessed by a self-administered ad-hoc questionnaire 1, 3, 6, 12, 24, 48 and 72 h after the first dose. Adverse events were also recorded. A total of 150 patients (69.3% women and 30.7% men, mean age 47.3 ± 14.7 years) were included (n = 50 in each group). A faster onset of action was observed in the xyloglucan group compared with the diosmectite and S. boulardii groups. At 6 h, xyloglucan produced a statistically significantly greater decrease in the mean number of type 6 and 7 stools than diosmectite (p = 0.031). Xyloglucan was the most efficient treatment in reducing the percentage of patients with nausea throughout the study period, particularly during the first hours (from 26% at baseline to 4% after 6 and 12 h). An important improvement in vomiting was observed in all three treatment groups. Xyloglucan was more effective than diosmectite and S. boulardii in reducing abdominal pain, with a constant improvement observed throughout the study. The clinical evolution of flatulence followed similar patterns in the three groups, with continuous improvement of the symptom. All treatments were well tolerated, with no adverse events reported. Xyloglucan is a fast, efficacious and safe option for the treatment of acute diarrhea. EudraCT number: 2014-001814-24 (date: 2014-04-28); ISRCTN number: 90311828.

  19. An efficient parallel-processing method for transposing large matrices in place.

    PubMed

    Portnoff, M R

    1999-01-01

We have developed an efficient algorithm for transposing large matrices in place. The algorithm is efficient because data are accessed either sequentially in blocks or randomly within blocks small enough to fit in cache, and because the same indexing calculations are shared among identical procedures operating on independent subsets of the data. This inherent parallelism makes the method well suited to a multiprocessor computing environment. The algorithm is easy to implement because the same two procedures are applied to the data in various groupings to carry out the complete transpose operation. Using only a single processor, we have demonstrated nearly an order-of-magnitude increase in speed over the previously published algorithm by Cate and Twigg for transposing a large rectangular matrix in place. With multiple processors operating in parallel, the processing speed increases almost linearly with the number of processors. A simplified version of the algorithm for square matrices is presented, as well as an extension for matrices large enough to require virtual memory.
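For the square case, the blocked-swap idea the abstract describes can be sketched in a few lines (the paper's rectangular in-place algorithm is considerably more involved): the matrix is stored row-major in a flat array, only blocks on or above the diagonal are visited, and each swap handles the mirrored element below the diagonal.

```python
def transpose_inplace(a, n, block=64):
    """In-place transpose of an n x n matrix stored row-major in the
    flat list a, processed in cache-sized blocks."""
    for bi in range(0, n, block):
        for bj in range(bi, n, block):            # blocks on/above the diagonal
            for i in range(bi, min(bi + block, n)):
                start = i + 1 if bj == bi else bj   # skip the diagonal itself
                for j in range(start, min(bj + block, n)):
                    a[i * n + j], a[j * n + i] = a[j * n + i], a[i * n + j]
```

Because distinct off-diagonal blocks touch disjoint parts of the array, they can be handed to different threads, which is the parallelism the abstract exploits.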

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, Benjamin S.

The Futility package contains the following: 1) definitions of the sizes of integers and real numbers; 2) a generic unit test harness; 3) definitions for some basic extensions to the Fortran language: arbitrary-length strings, a parameter list construct, exception handlers, a command line processor, timers; 4) geometry definitions: point, line, plane, box, cylinder, polyhedron; 5) file wrapper functions: standard Fortran input/output files, Fortran binary files, HDF5 files; 6) parallel wrapper functions: MPI and OpenMP abstraction layers, partitioning algorithms; 7) math utilities: BLAS, matrix and vector definitions, linear solver methods and wrappers for other TPLs (PETSc, MKL, etc.), preconditioner classes; 8) misc: random number generator, water saturation properties, sorting algorithms.

  1. LSRN: A PARALLEL ITERATIVE SOLVER FOR STRONGLY OVER- OR UNDERDETERMINED SYSTEMS*

    PubMed Central

    Meng, Xiangrui; Saunders, Michael A.; Mahoney, Michael W.

    2014-01-01

We describe a parallel iterative least squares solver named LSRN that is based on random normal projection. LSRN computes the min-length solution to min_{x ∈ ℝⁿ} ‖Ax − b‖₂, where A ∈ ℝ^{m×n} with m ≫ n or m ≪ n, and where A may be rank-deficient. Tikhonov regularization may also be included. Since A is involved only in matrix-matrix and matrix-vector multiplications, it can be a dense or sparse matrix or a linear operator, and LSRN automatically speeds up when A is sparse or a fast linear operator. The preconditioning phase consists of a random normal projection, which is embarrassingly parallel, and a singular value decomposition of size ⌈γ min(m, n)⌉ × min(m, n), where γ is moderately larger than 1, e.g., γ = 2. We prove that the preconditioned system is well-conditioned, with a strong concentration result on the extreme singular values, and hence that the number of iterations is fully predictable when we apply LSQR or the Chebyshev semi-iterative method. As we demonstrate, the Chebyshev method is particularly efficient for solving large problems on clusters with high communication cost. Numerical results show that on a shared-memory machine, LSRN is very competitive with LAPACK's DGELSD and a fast randomized least squares solver called Blendenpik on large dense problems, and it outperforms the least squares solver from SuiteSparseQR on sparse problems without sparsity patterns that can be exploited to reduce fill-in. Further experiments show that LSRN scales well on an Amazon Elastic Compute Cloud cluster. PMID:25419094

  2. Review of Recent Methodological Developments in Group-Randomized Trials: Part 1—Design

    PubMed Central

    Li, Fan; Gallis, John A.; Prague, Melanie; Murray, David M.

    2017-01-01

    In 2004, Murray et al. reviewed methodological developments in the design and analysis of group-randomized trials (GRTs). We have highlighted the developments of the past 13 years in design with a companion article to focus on developments in analysis. As a pair, these articles update the 2004 review. We have discussed developments in the topics of the earlier review (e.g., clustering, matching, and individually randomized group-treatment trials) and in new topics, including constrained randomization and a range of randomized designs that are alternatives to the standard parallel-arm GRT. These include the stepped-wedge GRT, the pseudocluster randomized trial, and the network-randomized GRT, which, like the parallel-arm GRT, require clustering to be accounted for in both their design and analysis. PMID:28426295

  3. Review of Recent Methodological Developments in Group-Randomized Trials: Part 1-Design.

    PubMed

    Turner, Elizabeth L; Li, Fan; Gallis, John A; Prague, Melanie; Murray, David M

    2017-06-01

    In 2004, Murray et al. reviewed methodological developments in the design and analysis of group-randomized trials (GRTs). We highlight the developments of the past 13 years in design; a companion article focuses on developments in analysis. As a pair, these articles update the 2004 review. We discuss developments in the topics of the earlier review (e.g., clustering, matching, and individually randomized group-treatment trials) and in new topics, including constrained randomization and a range of randomized designs that are alternatives to the standard parallel-arm GRT. These include the stepped-wedge GRT, the pseudocluster randomized trial, and the network-randomized GRT, which, like the parallel-arm GRT, require clustering to be accounted for in both their design and analysis.

  4. Reliability Evaluation for Clustered WSNs under Malware Propagation

    PubMed Central

    Shen, Shigen; Huang, Longjun; Liu, Jianhua; Champion, Adam C.; Yu, Shui; Cao, Qiying

    2016-01-01

    We consider a clustered wireless sensor network (WSN) under epidemic-malware propagation conditions and solve the problem of how to evaluate its reliability so as to ensure efficient, continuous, and dependable transmission of sensed data from sensor nodes to the sink. Facing the contradiction between malware intention and continuous-time Markov chain (CTMC) randomness, we introduce a strategic game that can predict malware infection in order to model a successful infection as a CTMC state transition. Next, we devise a novel measure to compute the Mean Time to Failure (MTTF) of a sensor node, which represents the reliability of a sensor node continuously performing tasks such as sensing, transmitting, and fusing data. Since clustered WSNs can be regarded as parallel-serial-parallel systems, the reliability of a clustered WSN can be evaluated via classical reliability theory. Numerical results show the influence of parameters such as the true positive rate and the false positive rate on a sensor node’s MTTF. Furthermore, we validate the method of reliability evaluation for a clustered WSN according to the number of sensor nodes in a cluster, the number of clusters in a route, and the number of routes in the WSN. PMID:27294934
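
    The parallel-serial-parallel evaluation above can be illustrated with classical reliability formulas. The exponential node-survival model derived from the MTTF is our simplifying assumption for the sketch, not the paper's game-theoretic CTMC model, and the function name is ours.

```python
import math

def wsn_reliability(t, mttf, nodes_per_cluster, clusters_per_route, num_routes):
    """Reliability at time t of a clustered WSN viewed as a
    parallel-series-parallel system (a hedged sketch, assuming
    exponentially distributed node lifetimes with the given MTTF)."""
    r_node = math.exp(-t / mttf)                        # one sensor node survives
    r_cluster = 1 - (1 - r_node) ** nodes_per_cluster   # nodes in parallel
    r_route = r_cluster ** clusters_per_route           # clusters in series
    return 1 - (1 - r_route) ** num_routes              # routes in parallel
```

    As expected from classical reliability theory, adding nodes to a cluster or routes to the network raises reliability, while lengthening a route (more clusters in series) lowers it.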

  5. Reliability Evaluation for Clustered WSNs under Malware Propagation.

    PubMed

    Shen, Shigen; Huang, Longjun; Liu, Jianhua; Champion, Adam C; Yu, Shui; Cao, Qiying

    2016-06-10

    We consider a clustered wireless sensor network (WSN) under epidemic-malware propagation conditions and solve the problem of how to evaluate its reliability so as to ensure efficient, continuous, and dependable transmission of sensed data from sensor nodes to the sink. Facing the contradiction between malware intention and continuous-time Markov chain (CTMC) randomness, we introduce a strategic game that can predict malware infection in order to model a successful infection as a CTMC state transition. Next, we devise a novel measure to compute the Mean Time to Failure (MTTF) of a sensor node, which represents the reliability of a sensor node continuously performing tasks such as sensing, transmitting, and fusing data. Since clustered WSNs can be regarded as parallel-serial-parallel systems, the reliability of a clustered WSN can be evaluated via classical reliability theory. Numerical results show the influence of parameters such as the true positive rate and the false positive rate on a sensor node's MTTF. Furthermore, we validate the method of reliability evaluation for a clustered WSN according to the number of sensor nodes in a cluster, the number of clusters in a route, and the number of routes in the WSN.

  6. Effect of Oral Carbohydrate Intake on Labor Progress: Randomized Controlled Trial

    PubMed Central

    Rahmani, R; Khakbazan, Z; Yavari, P; Granmayeh, M; Yavari, L

    2012-01-01

    Background: Information regarding biochemical changes in women during labor, and their consequences for maternal and neonatal health, remains limited. This study aims to explore the effectiveness of oral carbohydrate intake during labor on the duration of the active phase and other maternal and neonatal outcomes. Methods: A parallel prospective randomized controlled trial, conducted at the University Affiliated Teaching Hospital in Gonabad. In total, 190 women were randomly assigned to an intervention (N=87) or control (N=90) group. Inclusion criteria were low-risk women with singleton cephalic presentation and cervical dilatation of 3–4 cm. Allocation was determined each day by a random number generator: odd numbers assigned women to the intervention group and even numbers to the control group. The intervention was chosen according to each woman's preference from among: 3 medium dates plus 110 ml water; 3 dates plus 110 ml light tea without sugar; or 110 ml orange juice. The protocol was administered only once, but women ate and drank gradually before the second stage of labor. The control group fasted, as per routine practice. Neither participants nor caregivers or staff could be blinded to group allocation. The difference in the duration of the active phase of labor was assessed as the primary outcome measure. Results: There was a significant difference in the length of the second stage of labor (P < .05); the effect size for this variable was 0.48. There were no significant differences in other maternal and neonatal outcomes. Conclusions: Oral carbohydrate intake was an effective method for shortening the duration of the second stage of labor in low-risk women. PMID:23304677

  7. Effect of aerobic exercise on peripheral nerve functions of population with diabetic peripheral neuropathy in type 2 diabetes: a single blind, parallel group randomized controlled trial.

    PubMed

    Dixit, Snehil; Maiya, Arun G; Shastry, B A

    2014-01-01

    To evaluate the effect of moderate intensity aerobic exercise (40%-60% of Heart Rate Reserve (HRR)) on diabetic peripheral neuropathy. A parallel-group, randomized controlled trial was carried out in a tertiary health care setting in India. The study comprised an experimental group (moderate intensity aerobic exercise and standard care) and a control group (standard care). Patients with type 2 diabetes and clinical neuropathy, defined as a minimum score of seven on the Michigan Diabetic Neuropathy Score (MDNS), were randomly assigned to the experimental and control groups by computer-generated random number tables. Repeated-measures ANOVA (RANOVA) was used for data analysis (p<0.05 was considered significant). A total of 87 patients with DPN were evaluated in the study. After randomization there were 47 patients in the control group and 40 patients in the experimental group. A comparison of the two groups using RANOVA for anthropometric measures showed no significant change at eight weeks. For the distal peroneal nerve's conduction velocity there was a significant difference between the two groups at eight weeks: degrees of freedom (Df)=1, 62, F=5.14, p=0.03. The sural sensory nerve at eight weeks showed a significant difference between the two groups for conduction velocity: Df=1, 60, F=10.16, p=0.00. Significant differences in mean MDNS scores were also observed between the two groups at eight weeks (p<0.05). Moderate intensity aerobic exercise can play a valuable role in disrupting the normal progression of DPN in type 2 diabetes. Copyright © 2014 Elsevier Inc. All rights reserved.

  8. Comparison of intervention effects in split-mouth and parallel-arm randomized controlled trials: a meta-epidemiological study

    PubMed Central

    2014-01-01

    Background Split-mouth randomized controlled trials (RCTs) are popular in oral health research. Meta-analyses frequently include trials of both split-mouth and parallel-arm designs to derive combined intervention effects. However, carry-over effects may induce bias in split-mouth RCTs. We aimed to assess whether intervention effect estimates differ between split-mouth and parallel-arm RCTs investigating the same questions. Methods We performed a meta-epidemiological study. We systematically reviewed meta-analyses including both split-mouth and parallel-arm RCTs with binary or continuous outcomes published up to February 2013. Two independent authors selected studies and extracted data. We used a two-step approach to quantify the differences between split-mouth and parallel-arm RCTs. First, for each meta-analysis, we derived ratios of odds ratios (RORs) for dichotomous data and differences in standardized mean differences (∆SMDs) for continuous data; second, we pooled RORs or ∆SMDs across meta-analyses with random-effects meta-analysis models. Results We selected 18 systematic reviews, for 15 meta-analyses with binary outcomes (28 split-mouth and 28 parallel-arm RCTs) and 19 meta-analyses with continuous outcomes (28 split-mouth and 28 parallel-arm RCTs). Effect estimates did not differ between split-mouth and parallel-arm RCTs (mean ROR, 0.96, 95% confidence interval 0.52–1.80; mean ∆SMD, 0.08, -0.14–0.30). Conclusions Our study did not provide sufficient evidence for a difference in intervention effect estimates derived from split-mouth and parallel-arm RCTs. Authors should consider including split-mouth RCTs in their meta-analyses with suitable and appropriate analysis. PMID:24886043
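
    The second step of the two-step approach, pooling per-meta-analysis log-RORs (or ∆SMDs) with a random-effects model, can be sketched with a standard DerSimonian-Laird estimator. This is a generic illustration of random-effects pooling, not the authors' code.

```python
import math

def pool_random_effects(estimates, variances):
    """DerSimonian-Laird random-effects pooling of per-meta-analysis
    effect differences (e.g., log-RORs). Returns (pooled estimate, SE)."""
    k = len(estimates)
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, estimates))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                     # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]         # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, estimates)) / sum(w_star)
    return pooled, math.sqrt(1.0 / sum(w_star))
```

    A pooled log-ROR near 0 (ROR near 1), as reported above, indicates no systematic difference between the two designs.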

  9. An efficient dynamic load balancing algorithm

    NASA Astrophysics Data System (ADS)

    Lagaros, Nikos D.

    2014-01-01

    In engineering problems, randomness and uncertainties are inherent. Robust design procedures, formulated in the framework of multi-objective optimization, have been proposed in order to take these sources of randomness and uncertainty into account. These design procedures require orders of magnitude more computational effort than conventional analysis or optimum design processes, since a very large number of finite element analyses must be performed. It is therefore imperative to exploit the capabilities of computing resources in order to deal with such problems. In particular, parallel computing can be implemented at the level of metaheuristic optimization, by exploiting the physical parallelization feature of the nondominated sorting evolution strategies method, as well as at the level of the repeated structural analyses required for assessing the behavioural constraints and for calculating the objective functions. In this study an efficient dynamic load balancing algorithm for optimum exploitation of available computing resources is proposed and, without loss of generality, is applied to computing the desired Pareto front. In such problems the computation of the complete Pareto front with feasible designs only constitutes a very challenging task. The proposed algorithm achieves linear speedup factors, with almost 100% speedup factor values with reference to the sequential procedure.
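
    The core idea of dynamic load balancing, assigning each incoming structural analysis task to the currently least-loaded processor, can be sketched as follows. This is a generic greedy scheme for illustration, not the paper's specific algorithm.

```python
import heapq

def dynamic_balance(task_costs, num_workers):
    """Assign each task (by index) to the currently least-loaded worker.

    A min-heap of (accumulated load, worker id) pairs makes each
    assignment O(log num_workers). Returns a list of task-index lists,
    one per worker."""
    heap = [(0.0, w) for w in range(num_workers)]
    heapq.heapify(heap)
    assignment = [[] for _ in range(num_workers)]
    for i, cost in enumerate(task_costs):
        load, w = heapq.heappop(heap)      # least-loaded worker so far
        assignment[w].append(i)
        heapq.heappush(heap, (load + cost, w))
    return assignment
```

    When analysis costs vary widely, as they do for repeated finite element runs, this kind of dynamic assignment keeps all processors busy, which is what drives the near-linear speedup reported above.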

  10. A Randomized, Rater-Blinded, Parallel Trial of Intensive Speech Therapy in Sub-Acute Post-Stroke Aphasia: The SP-I-R-IT Study

    ERIC Educational Resources Information Center

    Martins, Isabel Pavao; Leal, Gabriela; Fonseca, Isabel; Farrajota, Luisa; Aguiar, Marta; Fonseca, Jose; Lauterbach, Martin; Goncalves, Luis; Cary, M. Carmo; Ferreira, Joaquim J.; Ferro, Jose M.

    2013-01-01

    Background: There is conflicting evidence regarding the benefits of intensive speech and language therapy (SLT), particularly because intensity is often confounded with total SLT provided. Aims: A two-centre, randomized, rater-blinded, parallel study was conducted to compare the efficacy of 100 h of SLT in a regular (RT) versus intensive (IT)…

  11. Randomized Trial of Reduced-Nicotine Standards for Cigarettes.

    PubMed

    Donny, Eric C; Denlinger, Rachel L; Tidey, Jennifer W; Koopmeiners, Joseph S; Benowitz, Neal L; Vandrey, Ryan G; al'Absi, Mustafa; Carmella, Steven G; Cinciripini, Paul M; Dermody, Sarah S; Drobes, David J; Hecht, Stephen S; Jensen, Joni; Lane, Tonya; Le, Chap T; McClernon, F Joseph; Montoya, Ivan D; Murphy, Sharon E; Robinson, Jason D; Stitzer, Maxine L; Strasser, Andrew A; Tindle, Hilary; Hatsukami, Dorothy K

    2015-10-01

    The Food and Drug Administration can set standards that reduce the nicotine content of cigarettes. We conducted a double-blind, parallel, randomized clinical trial between June 2013 and July 2014 at 10 sites. Eligibility criteria included an age of 18 years or older, smoking of five or more cigarettes per day, and no current interest in quitting smoking. Participants were randomly assigned to smoke for 6 weeks either their usual brand of cigarettes or one of six types of investigational cigarettes, provided free. The investigational cigarettes had nicotine content ranging from 15.8 mg per gram of tobacco (typical of commercial brands) to 0.4 mg per gram. The primary outcome was the number of cigarettes smoked per day during week 6. A total of 840 participants underwent randomization, and 780 completed the 6-week study. During week 6, the average number of cigarettes smoked per day was lower for participants randomly assigned to cigarettes containing 2.4, 1.3, or 0.4 mg of nicotine per gram of tobacco (16.5, 16.3, and 14.9 cigarettes, respectively) than for participants randomly assigned to their usual brand or to cigarettes containing 15.8 mg per gram (22.2 and 21.3 cigarettes, respectively; P<0.001). Participants assigned to cigarettes with 5.2 mg per gram smoked an average of 20.8 cigarettes per day, which did not differ significantly from the average number among those who smoked control cigarettes. Cigarettes with lower nicotine content, as compared with control cigarettes, reduced exposure to and dependence on nicotine, as well as craving during abstinence from smoking, without significantly increasing the expired carbon monoxide level or total puff volume, suggesting minimal compensation. Adverse events were generally mild and similar among groups. In this 6-week study, reduced-nicotine cigarettes versus standard-nicotine cigarettes reduced nicotine exposure and dependence and the number of cigarettes smoked. 
(Funded by the National Institute on Drug Abuse and the Food and Drug Administration Center for Tobacco Products; ClinicalTrials.gov number, NCT01681875.).

  12. Study of Solid State Drives performance in PROOF distributed analysis system

    NASA Astrophysics Data System (ADS)

    Panitkin, S. Y.; Ernst, M.; Petkus, R.; Rind, O.; Wenaus, T.

    2010-04-01

    Solid state drives (SSDs) are a promising storage technology for High Energy Physics parallel analysis farms. Their combination of low random access time and relatively high read speed is very well suited for situations where multiple jobs concurrently access data located on the same drive. SSDs also have lower energy consumption and higher vibration tolerance than hard disk drives (HDDs), which makes them an attractive choice in many applications ranging from personal laptops to large analysis farms. The Parallel ROOT Facility (PROOF) is a distributed analysis system which exploits the inherent event-level parallelism of high energy physics data. PROOF is especially efficient together with distributed local storage systems like Xrootd, when data are distributed over computing nodes. In such an architecture the local disk subsystem I/O performance becomes a critical factor, especially when computing nodes use multi-core CPUs. We discuss our experience with SSDs in the PROOF environment. We compare the performance of HDDs with SSDs in I/O-intensive analysis scenarios. In particular, we discuss how PROOF system performance scales with the number of simultaneously running analysis jobs.

  13. Social Desirability Bias in the Reporting of Alcohol Consumption: A Randomized Trial.

    PubMed

    Kypri, Kypros; Wilson, Amanda; Attia, John; Sheeran, Paschal; Miller, Peter; McCambridge, Jim

    2016-05-01

    To investigate reporting of alcohol consumption, we manipulated the contexts of questions in ways designed to induce social desirability bias. We undertook a two-arm, parallel-group, individually randomized trial at an Australian public university. Students were recruited by email to a web-based "Research Project on Student Health Behavior." Respondents answered nine questions about their physical activity, diet, and smoking. They were unknowingly randomized to a group presented with either (A) three questions about their alcohol consumption or (B) seven questions about their alcohol dependence and problems (under a prominent header labeled "Alcohol Use Disorders Identification Test"), followed by the same three alcohol consumption questions from (A). A total of 3,594 students (mean age = 27, SD = 10) responded and were randomized: 1,778 to Group A and 1,816 to Group B. Outcome measures were the number of days they drank alcohol, the typical number of drinks they consumed per drinking day, and the number of days they consumed six or more drinks. The primary analysis included participants with any alcohol consumption in the preceding 4 weeks (1,304 in Group A; 1,340 in Group B) using between-group, two-tailed t tests. In Groups A and B, respectively, means (and SDs) of the number of days drinking were 5.89 (5.92) versus 6.06 (6.12), p = .49; typical number of drinks per drinking day: 4.02 (3.87) versus 3.82 (3.76), p = .17; and number of days consuming six or more drinks: 1.69 (2.94) versus 1.67 (3.25), p = .56. We could not reject the null hypothesis because earlier questions about alcohol dependence and problems showed no sign of biasing the respondents' subsequent reports of alcohol consumption. These data support the validity of university students' reporting of alcohol consumption in web-based studies.

  14. Draco, Version 6.x.x

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, Kelly; Budge, Kent; Lowrie, Rob

    2016-03-03

    Draco is an object-oriented component library geared towards numerically intensive, radiation (particle) transport applications built for parallel computing hardware. It consists of semi-independent packages and a robust build system. The packages in Draco provide a set of components that can be used by multiple clients to build transport codes. The build system can also be extracted for use in clients. Software includes smart pointers, Design-by-Contract assertions, unit test framework, wrapped MPI functions, a file parser, unstructured mesh data structures, a random number generator, root finders and an angular quadrature component.

  15. Simultaneous Range-Velocity Processing and SNR Analysis of AFIT’s Random Noise Radar

    DTIC Science & Technology

    2012-03-22

    Excerpted fragments: "...reducing the overall processing time. Two computers, equipped with NVIDIA GPUs, were used to process the collected data. The specifications for each... gather the results back to the CPU. Another company, AccelerEyes, has developed a product called Jacket that claims to be better than the parallel..." Hardware specifications (recovered from a table): Computer 1: 4 processing cores, 3.33 GHz processor, 48 GB installed memory, NVIDIA Tesla 1060 GPU. Computer 2: 8 processing cores, 3.07 GHz processor, 48 GB installed memory, NVIDIA Tesla C2070 GPU.

  16. Efficient, massively parallel eigenvalue computation

    NASA Technical Reports Server (NTRS)

    Huo, Yan; Schreiber, Robert

    1993-01-01

    In numerical simulations of disordered electronic systems, one of the most common approaches is to diagonalize random Hamiltonian matrices and to study the eigenvalues and eigenfunctions of a single electron in the presence of a random potential. An effort to implement a matrix diagonalization routine for real symmetric dense matrices on massively parallel SIMD computers, the Maspar MP-1 and MP-2 systems, is described. Results of numerical tests and timings are also presented.
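
    The serial analogue of the computation described above, generating a dense real symmetric random matrix and diagonalizing it, can be sketched in a few lines. This is a minimal illustration; the paper's contribution is the massively parallel SIMD routine, not this sketch, and the function name is ours.

```python
import numpy as np

def random_hamiltonian_spectrum(n, seed=0):
    """Diagonalize a dense real symmetric random matrix, the model of a
    single electron in a random potential. Returns eigenvalues (ascending)
    and the corresponding orthonormal eigenvectors as columns."""
    rng = np.random.default_rng(seed)
    h = rng.standard_normal((n, n))
    h = (h + h.T) / 2.0               # symmetrize: real symmetric "Hamiltonian"
    evals, evecs = np.linalg.eigh(h)  # dense symmetric eigensolver
    return evals, evecs
```

    On a SIMD machine, the analogous routine distributes the matrix across the processor grid and performs the reduction and eigen-iteration steps in lockstep, which is the part the paper benchmarks.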

  17. Survival distributions impact the power of randomized placebo-phase design and parallel groups randomized clinical trials.

    PubMed

    Abrahamyan, Lusine; Li, Chuan Silvia; Beyene, Joseph; Willan, Andrew R; Feldman, Brian M

    2011-03-01

    The study evaluated the power of the randomized placebo-phase design (RPPD), a new design for randomized clinical trials (RCTs), compared with the traditional parallel groups design, assuming various response time distributions. In the RPPD, all subjects receive the experimental therapy at some point, and exposure to placebo lasts only a short fixed period of time. For the study, an object-oriented simulation program was written in R. The power of the simulated trials was evaluated using six scenarios, in which the treatment response times followed the exponential, Weibull, or lognormal distribution. The median response time was assumed to be 355 days for the placebo and 42 days for the experimental drug. Based on the simulation results, the sample size requirements to achieve the same level of power differed across response time distributions. The scenario in which the response times followed the exponential distribution had the highest sample size requirement. In most scenarios, the parallel groups RCT had higher power than the RPPD. The sample size requirement varies depending on the underlying hazard distribution, and the RPPD requires more subjects than the parallel groups design to achieve similar power. Copyright © 2011 Elsevier Inc. All rights reserved.
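
    The power estimation can be illustrated with a Monte Carlo sketch of a parallel-groups trial with exponential response times. The medians follow the abstract (355 days placebo, 42 days drug), but the Mann-Whitney test is our stand-in for the authors' analysis, and the function is a simplified illustration, not their R program.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def parallel_power(n_per_arm, median_placebo=355.0, median_drug=42.0,
                   n_sims=200, alpha=0.05, seed=0):
    """Monte Carlo power of a parallel-groups trial: simulate exponential
    response times in each arm, test whether drug responses are faster,
    and return the fraction of simulated trials that reach significance."""
    rng = np.random.default_rng(seed)
    scale_p = median_placebo / np.log(2)   # exponential scale from median
    scale_d = median_drug / np.log(2)
    hits = 0
    for _ in range(n_sims):
        placebo = rng.exponential(scale_p, n_per_arm)
        drug = rng.exponential(scale_d, n_per_arm)
        if mannwhitneyu(drug, placebo, alternative="less").pvalue < alpha:
            hits += 1
    return hits / n_sims
```

    Swapping in Weibull or lognormal response times with the same medians is what produces the distribution-dependent sample size requirements the abstract reports.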

  18. Small Private Key MQPKS on an Embedded Microprocessor

    PubMed Central

    Seo, Hwajeong; Kim, Jihyun; Choi, Jongseok; Park, Taehwan; Liu, Zhe; Kim, Howon

    2014-01-01

    Multivariate quadratic (MQ) cryptography requires the use of long public and private keys to ensure a sufficient security level, but this is not favorable to embedded systems, which have limited system resources. Recently, various approaches to MQ cryptography using reduced public keys have been studied. As a result, at CHES2011 (Cryptographic Hardware and Embedded Systems, 2011), a small public key MQ scheme was proposed, and its feasible implementation on an embedded microprocessor was reported at CHES2012. However, the implementation of a small private key MQ scheme was not reported. For efficient implementation, random number generators can help reduce the key size, but the cost of using a random number generator is much higher than that of computing MQ on modern microprocessors. Therefore, no feasible results have been reported on embedded microprocessors. In this paper, we propose a feasible implementation on embedded microprocessors of a small private key MQ scheme using a pseudo-random number generator and a hash function based on a block cipher exploiting a hardware Advanced Encryption Standard (AES) accelerator. To speed up performance, we apply various implementation methods, including parallel computation, on-the-fly computation, optimized logarithm representation, vinegar monomials and assembly programming. The proposed method reduces the private key size by about 99.9% and speeds up signature generation and verification by 5.78% and 12.19%, respectively, compared with the previous results from CHES2012. PMID:24651722

  19. Small private key MQPKS on an embedded microprocessor.

    PubMed

    Seo, Hwajeong; Kim, Jihyun; Choi, Jongseok; Park, Taehwan; Liu, Zhe; Kim, Howon

    2014-03-19

    Multivariate quadratic (MQ) cryptography requires the use of long public and private keys to ensure a sufficient security level, but this is not favorable to embedded systems, which have limited system resources. Recently, various approaches to MQ cryptography using reduced public keys have been studied. As a result, at CHES2011 (Cryptographic Hardware and Embedded Systems, 2011), a small public key MQ scheme was proposed, and its feasible implementation on an embedded microprocessor was reported at CHES2012. However, the implementation of a small private key MQ scheme was not reported. For efficient implementation, random number generators can help reduce the key size, but the cost of using a random number generator is much higher than that of computing MQ on modern microprocessors. Therefore, no feasible results have been reported on embedded microprocessors. In this paper, we propose a feasible implementation on embedded microprocessors of a small private key MQ scheme using a pseudo-random number generator and a hash function based on a block cipher exploiting a hardware Advanced Encryption Standard (AES) accelerator. To speed up performance, we apply various implementation methods, including parallel computation, on-the-fly computation, optimized logarithm representation, vinegar monomials and assembly programming. The proposed method reduces the private key size by about 99.9% and speeds up signature generation and verification by 5.78% and 12.19%, respectively, compared with the previous results from CHES2012.
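
    The key-size reduction idea, storing only a short seed and regenerating the long private-key material with a deterministic PRNG, can be sketched with a hash-based expander. This is a generic counter-mode construction for illustration, not the paper's AES-accelerated scheme, and the function name is ours.

```python
import hashlib

def expand_key(seed: bytes, length: int) -> bytes:
    """Deterministically expand a short seed into `length` bytes of
    private-key material using SHA-256 in counter mode. Storing only the
    seed is what shrinks the private key; the full material is
    regenerated on demand."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]
```

    Because the expansion is deterministic, a device can keep, say, a 16-byte seed and rebuild kilobytes of key material at signing time, trading storage for the PRNG computation the paper offloads to the hardware AES accelerator.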

  20. Fast parallel algorithm for slicing STL based on pipeline

    NASA Astrophysics Data System (ADS)

    Ma, Xulong; Lin, Feng; Yao, Bo

    2016-05-01

    In the Additive Manufacturing field, current research on data processing mainly focuses on the slicing of large STL files or complicated CAD models. To improve efficiency and reduce slicing time, a parallel algorithm has great advantages. However, traditional algorithms cannot make full use of multi-core CPU hardware resources. In this paper, a fast parallel algorithm is presented to speed up data processing. A pipeline mode is adopted to design the parallel algorithm, and the complexity of the pipeline algorithm is analyzed theoretically. To evaluate the performance of the new algorithm, the effects of thread count and layer count are investigated in a series of experiments. The experimental results show that the numbers of threads and layers are two significant factors in the speedup ratio. The tendency of speedup versus thread count reveals a positive relationship that agrees well with Amdahl's law, and the tendency of speedup versus layer count also keeps a positive relationship, in agreement with Gustafson's law. The new algorithm uses topological information to compute contours in parallel. Another parallel algorithm, based on data parallelism, is used in the experiments to show that the pipeline parallel mode is more efficient. A case study demonstrates the performance of the new parallel algorithm. Compared with the serial slicing algorithm, the new pipeline parallel algorithm makes full use of multi-core CPU hardware and accelerates the slicing process; compared with the data-parallel slicing algorithm, the pipeline parallel model achieves a much higher speedup ratio and efficiency.
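
    A minimal thread-based pipeline, with stages connected by FIFO queues so that different items occupy different stages concurrently, can be sketched as follows. This is a generic illustration of the pipeline-parallel mode, not the paper's slicing code; the stage functions are placeholders for steps such as reading triangles, intersecting layer planes, and linking contours.

```python
import queue
import threading

def pipeline(items, stages):
    """Run each stage function in its own thread, connected by queues.

    Every stage pulls from its input queue, applies its function, and
    pushes downstream; a sentinel object shuts the pipeline down. Output
    order matches input order because each stage is a single FIFO worker."""
    qs = [queue.Queue() for _ in range(len(stages) + 1)]
    SENTINEL = object()

    def worker(fn, q_in, q_out):
        while True:
            item = q_in.get()
            if item is SENTINEL:
                q_out.put(SENTINEL)
                break
            q_out.put(fn(item))

    threads = [threading.Thread(target=worker, args=(fn, qs[i], qs[i + 1]))
               for i, fn in enumerate(stages)]
    for t in threads:
        t.start()
    for item in items:
        qs[0].put(item)
    qs[0].put(SENTINEL)
    results = []
    while True:
        item = qs[-1].get()
        if item is SENTINEL:
            break
        results.append(item)
    for t in threads:
        t.join()
    return results
```

    With many layers in flight, the stages overlap and throughput approaches one item per slowest-stage interval, which is why speedup grows with layer count in the Gustafson-style scaling the abstract describes.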

  1. Redundant binary number representation for an inherently parallel arithmetic on optical computers.

    PubMed

    De Biase, G A; Massini, A

    1993-02-10

    A simple redundant binary number representation suitable for digital-optical computers is presented. By means of this representation it is possible to build an arithmetic with carry-free parallel algebraic sums carried out in constant time and parallel multiplication in log N time. This redundant number representation naturally fits the 2's complement binary number system and permits the construction of inherently parallel arithmetic units that are used in various optical technologies. Some properties of this number representation and several examples of computation are presented.
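
    The carry-free property can be sketched with the textbook radix-2 signed-digit addition rule, in which the transfer out of each position is chosen by inspecting only the next-lower digits, so all positions can be computed in parallel in constant time. Digit lists here are little-endian with digits in {-1, 0, 1}; the function name is ours, and this is a generic illustration rather than the paper's optical formulation.

```python
def sd_add(a, b):
    """Carry-free addition of two radix-2 signed-digit numbers.

    At each position i, the digit sum p = a[i] + b[i] is split as
    p = 2*t + w, choosing the transfer t by looking only at the digits
    one position lower, which guarantees the final digit w[i] + t_in
    stays in {-1, 0, 1} with no carry propagation."""
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    t = [0] * (n + 1)   # t[i+1] is the transfer out of position i
    w = [0] * n         # interim sum digits
    for i in range(n):  # independent per position: parallelizable
        p = a[i] + b[i]
        lower = (a[i - 1] + b[i - 1]) if i > 0 else 0
        if p == 2:
            t[i + 1], w[i] = 1, 0
        elif p == -2:
            t[i + 1], w[i] = -1, 0
        elif p == 1:
            t[i + 1], w[i] = (1, -1) if lower >= 0 else (0, 1)
        elif p == -1:
            t[i + 1], w[i] = (0, -1) if lower >= 0 else (-1, 1)
        # p == 0: transfer and interim digit both stay 0
    return [w[i] + t[i] for i in range(n)] + [t[n]]
```

    Since ordinary binary digits {0, 1} are valid signed digits, 2's complement operands drop straight into this representation, which is the compatibility the abstract notes.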

  2. Separating the Laparoscopic Camera Cord From the Monopolar "Bovie" Cord Reduces Unintended Thermal Injury From Antenna Coupling: A Randomized Controlled Trial.

    PubMed

    Robinson, Thomas N; Jones, Edward L; Dunn, Christina L; Dunne, Bruce; Johnson, Elizabeth; Townsend, Nicole T; Paniccia, Alessandro; Stiegmann, Greg V

    2015-06-01

    The monopolar "Bovie" is used in virtually every laparoscopic operation. The active electrode and its cord emit radiofrequency energy that couples (or transfers) to nearby conductive material without direct contact. This phenomenon is increased when the active electrode cord is oriented parallel to another wire/cord. The parallel orientation of the "Bovie" and laparoscopic camera cords causes transfer of energy to the camera cord, resulting in cutaneous burns at the camera trocar incision. We hypothesized that separating the active electrode/camera cords would reduce thermal injury occurring at the camera trocar incision in comparison to parallel oriented active electrode/camera cords. In this prospective, blinded, randomized controlled trial, patients undergoing standardized laparoscopic cholecystectomy were randomized to separated active electrode/camera cords or parallel oriented active electrode/camera cords. The primary outcome variable was thermal injury determined by histology from skin biopsied at the camera trocar incision. Eighty-four patients participated. Baseline demographics were similar in the groups for age, sex, preoperative diagnosis, operative time, and blood loss. Thermal injury at the camera trocar incision was lower in the separated versus parallel group (31% vs 57%; P = 0.027). Separation of the laparoscopic camera cord from the active electrode cord decreases thermal injury from antenna coupling at the camera trocar incision in comparison to the parallel orientation of these cords. Therefore, parallel orientation of these cords (an arrangement promoted by integrated operating rooms) should be abandoned. The findings of this study should influence the operating room setup for all laparoscopic cases.

  3. Parallel implementation of an adaptive scheme for 3D unstructured grids on the SP2

    NASA Technical Reports Server (NTRS)

    Strawn, Roger C.; Oliker, Leonid; Biswas, Rupak

    1996-01-01

    Dynamic mesh adaption on unstructured grids is a powerful tool for computing unsteady flows that require local grid modifications to efficiently resolve solution features. For this work, we consider an edge-based adaption scheme that has shown good single-processor performance on the C90. We report on our experience parallelizing this code for the SP2. Results show a 47.0X speedup on 64 processors when 10 percent of the mesh is randomly refined. Performance deteriorates to 7.7X when the same number of edges are refined in a highly-localized region. This is because almost all the mesh adaption is confined to a single processor. However, this problem can be remedied by repartitioning the mesh immediately after targeting edges for refinement but before the actual adaption takes place. With this change, the speedup improves dramatically to 43.6X.

  4. Parallel Implementation of an Adaptive Scheme for 3D Unstructured Grids on the SP2

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Biswas, Rupak; Strawn, Roger C.

    1996-01-01

    Dynamic mesh adaption on unstructured grids is a powerful tool for computing unsteady flows that require local grid modifications to efficiently resolve solution features. For this work, we consider an edge-based adaption scheme that has shown good single-processor performance on the C90. We report on our experience parallelizing this code for the SP2. Results show a 47.0X speedup on 64 processors when 10% of the mesh is randomly refined. Performance deteriorates to 7.7X when the same number of edges are refined in a highly-localized region. This is because almost all mesh adaption is confined to a single processor. However, this problem can be remedied by repartitioning the mesh immediately after targeting edges for refinement but before the actual adaption takes place. With this change, the speedup improves dramatically to 43.6X.

  5. Massively parallel polymerase cloning and genome sequencing of single cells using nanoliter microwells

    PubMed Central

    Gole, Jeff; Gore, Athurva; Richards, Andrew; Chiu, Yu-Jui; Fung, Ho-Lim; Bushman, Diane; Chiang, Hsin-I; Chun, Jerold; Lo, Yu-Hwa; Zhang, Kun

    2013-01-01

    Genome sequencing of single cells has a variety of applications, including characterizing difficult-to-culture microorganisms and identifying somatic mutations in single cells from mammalian tissues. A major hurdle in this process is the bias in amplifying the genetic material from a single cell, a procedure known as polymerase cloning. Here we describe the microwell displacement amplification system (MIDAS), a massively parallel polymerase cloning method in which single cells are randomly distributed into hundreds to thousands of nanoliter wells and simultaneously amplified for shotgun sequencing. MIDAS reduces amplification bias because polymerase cloning occurs in physically separated nanoliter-scale reactors, facilitating the de novo assembly of near-complete microbial genomes from single E. coli cells. In addition, MIDAS allowed us to detect single-copy number changes in primary human adult neurons at 1–2 Mb resolution. MIDAS will further the characterization of genomic diversity in many heterogeneous cell populations. PMID:24213699

  6. A multi-center randomized controlled trial to compare a self-ligating bracket with a conventional bracket in a UK population: Part 1: Treatment efficiency.

    PubMed

    O'Dwyer, Lian; Littlewood, Simon J; Rahman, Shahla; Spencer, R James; Barber, Sophy K; Russell, Joanne S

    2016-01-01

    To use a two-arm parallel trial to compare treatment efficiency between a self-ligating and a conventional preadjusted edgewise appliance system. A prospective multi-center randomized controlled clinical trial was conducted in three hospital orthodontic departments. Subjects were randomly allocated to receive treatment with either a self-ligating (3M SmartClip) or conventional (3M Victory) preadjusted edgewise appliance bracket system using a computer-generated random sequence concealed in opaque envelopes, with stratification for operator and center. Two operators followed a standardized protocol regarding bracket bonding procedure and archwire sequence. Efficiency of each ligation system was assessed by comparing the duration of treatment (months), total number of appointments (scheduled and emergency visits), and number of bracket bond failures. One hundred thirty-eight subjects (mean age 14 years 11 months) were enrolled in the study, of which 135 subjects (97.8%) completed treatment. The mean treatment time and number of visits were 25.12 months and 19.97 visits in the SmartClip group and 25.80 months and 20.37 visits in the Victory group. The overall bond failure rate was 6.6% for the SmartClip and 7.2% for Victory, with a similar debond distribution between the two appliances. No significant differences were found between the bracket systems in any of the outcome measures. No serious harm was observed from either bracket system. There was no clinically significant difference in treatment efficiency between treatment with a self-ligating bracket system and a conventional ligation system.

  7. Monte Carlo Techniques for Nuclear Systems - Theory Lectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; Doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. 
The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations. Beginning MCNP users are encouraged to review LA-UR-09-00380, "Criticality Calculations with MCNP: A Primer (3rd Edition)" (available at http://mcnp.lanl.gov under "Reference Collection") prior to the class. No Monte Carlo class can be complete without having students write their own simple Monte Carlo routines for basic random sampling, use of the random number generator, and simplified particle transport simulation.
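The random-sampling topic from these lectures can be illustrated with a minimal sketch (not taken from the course materials): inverse-CDF sampling of an exponential free-flight distance between collisions, with a simple tally of the mean and its statistical uncertainty. All function names and parameter values here are illustrative.

```python
import math
import random

def sample_free_flight(sigma_t, rng):
    """Inverse-CDF sampling of an exponential free-flight distance:
    distance = -ln(1 - xi) / sigma_t, with xi uniform in [0, 1)."""
    xi = rng.random()
    return -math.log(1.0 - xi) / sigma_t

def estimate_mean_free_path(sigma_t, n, seed=1):
    """Tally the sample mean and the standard error of the mean,
    the basic 'tallies and statistics' step of a Monte Carlo code."""
    rng = random.Random(seed)
    samples = [sample_free_flight(sigma_t, rng) for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return mean, math.sqrt(var / n)

# analytic mean free path is 1 / sigma_t = 0.5 for sigma_t = 2.0
mean, sem = estimate_mean_free_path(sigma_t=2.0, n=100_000)
print(mean, sem)
```

The tallied mean converges to the analytic value 1/sigma_t with an uncertainty shrinking as 1/sqrt(n).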

  8. Parallel Algorithms for Image Analysis.

    DTIC Science & Technology

    1982-06-01

    [OCR fragments of the DTIC report documentation page; recoverable fields: title "Parallel Algorithms for Image Analysis" (Technical Report TR-1180); author Azriel Rosenfeld; grant AFOSR-77-3271; keywords: image processing; image analysis; parallel processing; cellular computers.]

  9. Satisfiability Test with Synchronous Simulated Annealing on the Fujitsu AP1000 Massively-Parallel Multiprocessor

    NASA Technical Reports Server (NTRS)

    Sohn, Andrew; Biswas, Rupak

    1996-01-01

    Solving the hard Satisfiability Problem is time consuming even for modest-sized problem instances. Solving the Random L-SAT Problem is especially difficult due to the ratio of clauses to variables. This report presents a parallel synchronous simulated annealing method for solving the Random L-SAT Problem on a large-scale distributed-memory multiprocessor. In particular, we use a parallel synchronous simulated annealing procedure, called Generalized Speculative Computation, which guarantees the same decision sequence as sequential simulated annealing. To demonstrate the performance of the parallel method, we have selected problem instances varying in size from 100-variables/425-clauses to 5000-variables/21,250-clauses. Experimental results on the AP1000 multiprocessor indicate that our approach can satisfy 99.9 percent of the clauses while giving almost a 70-fold speedup on 500 processors.
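The annealing idea behind the method above can be sketched as a plain sequential simulated-annealing loop over a random 3-SAT instance. This is not the Generalized Speculative Computation scheme itself, and the cooling schedule and parameters are arbitrary illustration choices.

```python
import math
import random

def random_ksat(n_vars, n_clauses, k, rng):
    """Random k-SAT instance: each clause has k distinct literals,
    a literal is (variable_index, negated?)."""
    return [[(v, rng.random() < 0.5) for v in rng.sample(range(n_vars), k)]
            for _ in range(n_clauses)]

def unsatisfied(clauses, assign):
    # a literal (v, neg) is true when assign[v] != neg
    return sum(not any(assign[v] != neg for v, neg in cl) for cl in clauses)

def anneal(clauses, n_vars, steps=5000, t0=2.0, seed=0):
    rng = random.Random(seed)
    assign = [rng.random() < 0.5 for _ in range(n_vars)]
    cost = unsatisfied(clauses, assign)
    for step in range(steps):
        t = t0 * (1.0 - step / steps) + 1e-3        # linear cooling schedule
        v = rng.randrange(n_vars)
        assign[v] = not assign[v]                   # propose a single-bit flip
        new_cost = unsatisfied(clauses, assign)
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / t):
            cost = new_cost                         # accept the flip
        else:
            assign[v] = not assign[v]               # reject: undo the flip
    return assign, cost

# the 100-variable / 425-clause size from the smallest instance above
clauses = random_ksat(100, 425, 3, random.Random(42))
assign, cost = anneal(clauses, 100)
print(cost, "unsatisfied of", len(clauses))
```

A synchronous parallel version would evaluate speculative flips on many processors while preserving this exact acceptance sequence.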

  10. Toward a Model Framework of Generalized Parallel Componential Processing of Multi-Symbol Numbers

    ERIC Educational Resources Information Center

    Huber, Stefan; Cornelsen, Sonja; Moeller, Korbinian; Nuerk, Hans-Christoph

    2015-01-01

    In this article, we propose and evaluate a new model framework of parallel componential multi-symbol number processing, generalizing the idea of parallel componential processing of multi-digit numbers to the case of negative numbers by considering the polarity signs similar to single digits. In a first step, we evaluated this account by defining…

  11. A Simple Application of Compressed Sensing to Further Accelerate Partially Parallel Imaging

    PubMed Central

    Miao, Jun; Guo, Weihong; Narayan, Sreenath; Wilson, David L.

    2012-01-01

    Compressed Sensing (CS) and partially parallel imaging (PPI) enable fast MR imaging by reducing the amount of k-space data required for reconstruction. Past attempts to combine these two have been limited by the incoherent sampling requirement of CS, since PPI routines typically sample on a regular (coherent) grid. Here, we developed a new method, “CS+GRAPPA,” to overcome this limitation. We decomposed sets of equidistant samples into multiple random subsets. Then, we reconstructed each subset using CS and averaged the results to obtain a final CS k-space reconstruction. We used both a standard CS reconstruction and an edge- and joint-sparsity-guided CS reconstruction. We tested these intermediate results on both synthetic and real MR phantom data, and performed a human observer experiment to determine the effectiveness of decomposition, and to optimize the number of subsets. We then used these CS reconstructions to calibrate the GRAPPA complex coil weights. In vivo parallel MR brain and heart data sets were used. An objective image quality evaluation metric, Case-PDM, was used to quantify image quality. Coherent aliasing and noise artifacts were significantly reduced using two decompositions. More decompositions further reduced coherent aliasing and noise artifacts but introduced blurring. However, the blurring was effectively minimized using our new edge- and joint-sparsity-guided CS using two decompositions. Numerical results on parallel data demonstrated that the combined method greatly improved image quality as compared to standard GRAPPA, on average halving Case-PDM scores across a range of sampling rates. The proposed technique allowed the same Case-PDM scores as standard GRAPPA, using about half the number of samples. We conclude that the new method augments GRAPPA by combining it with CS, allowing CS to work even when the k-space sampling pattern is equidistant. PMID:22902065
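The decomposition step described above — splitting one equidistant sampling pattern into random, disjoint subsets whose union reproduces the original grid — can be sketched as follows. This is illustrative only; the CS reconstructions of each subset and the GRAPPA calibration are omitted.

```python
import random

def decompose_equidistant(sample_idx, n_subsets, seed=0):
    """Split one set of equidistant k-space sample positions into
    n_subsets random, disjoint subsets. Each subset on its own looks
    incoherent (as CS prefers), while the union of all subsets
    reproduces the original regular grid."""
    rng = random.Random(seed)
    shuffled = sample_idx[:]
    rng.shuffle(shuffled)
    return [sorted(shuffled[i::n_subsets]) for i in range(n_subsets)]

# equidistant pattern: every 4th of 256 k-space lines (acceleration R = 4)
grid = list(range(0, 256, 4))
subsets = decompose_equidistant(grid, n_subsets=2)
# each subset would be reconstructed with CS; the resulting k-space
# estimates are then averaged before calibrating GRAPPA coil weights
print(len(subsets[0]), len(subsets[1]))
```

The paper's observer experiment found two decompositions to be the sweet spot between artifact reduction and blurring.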

  12. TemperSAT: A new efficient fair-sampling random k-SAT solver

    NASA Astrophysics Data System (ADS)

    Fang, Chao; Zhu, Zheng; Katzgraber, Helmut G.

    The set membership problem is of great importance to many applications and, in particular, database searches for target groups. Recently, an approach to speed up set membership searches based on the NP-hard constraint-satisfaction problem (random k-SAT) has been developed. However, the bottleneck of the approach lies in finding the solution to a large SAT formula efficiently and, in particular, a large number of independent solutions is needed to reduce the probability of false positives. Unfortunately, traditional random k-SAT solvers such as WalkSAT are biased when seeking solutions to the Boolean formulas. By porting parallel tempering Monte Carlo to the sampling of binary optimization problems, we introduce a new algorithm (TemperSAT) whose performance is comparable to current state-of-the-art SAT solvers for large k with the added benefit that theoretically it can find many independent solutions quickly. We illustrate our results by comparing to the currently fastest implementation of WalkSAT, WalkSATlm.
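Parallel tempering, the engine ported to SAT sampling above, can be sketched on a toy binary problem: one Metropolis replica per temperature, with neighbouring replicas periodically attempting a configuration swap. This is a generic replica-exchange loop, not the TemperSAT implementation; the toy energy (Hamming distance to a hidden bitstring) and all parameters are illustrative.

```python
import math
import random

def parallel_tempering(energy, n_bits, temps, sweeps, seed=0):
    """Hot replicas explore the landscape; cold replicas refine.
    Swaps use the standard acceptance min(1, exp(d_beta * d_E))."""
    rng = random.Random(seed)
    reps = [[rng.random() < 0.5 for _ in range(n_bits)] for _ in temps]
    energies = [energy(r) for r in reps]
    for _ in range(sweeps):
        for i, t in enumerate(temps):            # Metropolis sweep per replica
            for _ in range(n_bits):
                b = rng.randrange(n_bits)
                reps[i][b] = not reps[i][b]
                e_new = energy(reps[i])
                if e_new <= energies[i] or rng.random() < math.exp((energies[i] - e_new) / t):
                    energies[i] = e_new
                else:
                    reps[i][b] = not reps[i][b]  # reject: undo the flip
        for i in range(len(temps) - 1):          # replica-exchange moves
            d_beta = 1.0 / temps[i] - 1.0 / temps[i + 1]
            d_e = energies[i] - energies[i + 1]
            if rng.random() < min(1.0, math.exp(d_beta * d_e)):
                reps[i], reps[i + 1] = reps[i + 1], reps[i]
                energies[i], energies[i + 1] = energies[i + 1], energies[i]
    best = min(range(len(temps)), key=lambda i: energies[i])
    return reps[best], energies[best]

# toy energy: Hamming distance to a hidden bitstring (minimum is 0)
target = [i % 2 == 0 for i in range(24)]
e = lambda s: sum(a != b for a, b in zip(s, target))
state, best_e = parallel_tempering(e, 24, temps=[0.2, 0.5, 1.0, 2.0], sweeps=50)
print(best_e)
```

For fair sampling, the key property is that each replica equilibrates at its own temperature, so repeated runs visit different ground states without the bias of a greedy solver.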

  13. Using Horn's Parallel Analysis Method in Exploratory Factor Analysis for Determining the Number of Factors

    ERIC Educational Resources Information Center

    Çokluk, Ömay; Koçak, Duygu

    2016-01-01

    In this study, the number of factors obtained from parallel analysis, a method used for determining the number of factors in exploratory factor analysis, was compared to that of the factors obtained from eigenvalue and scree plot--two traditional methods for determining the number of factors--in terms of consistency. Parallel analysis is based on…
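Horn's parallel analysis retains a factor only if its eigenvalue exceeds eigenvalues obtained from purely random data. A minimal two-variable sketch (my own illustration, not from the study): for two standardized variables, the correlation matrix [[1, r], [r, 1]] has eigenvalues 1 ± |r|, so no general eigensolver is needed.

```python
import math
import random

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)

def largest_eigenvalue_2var(xs, ys):
    """Largest eigenvalue of the 2x2 correlation matrix: 1 + |r|."""
    return 1.0 + abs(corr(xs, ys))

def parallel_analysis_threshold(n_obs, n_draws=200, q=0.95, seed=0):
    """Horn's criterion: the retention threshold is a high percentile
    of the largest eigenvalue over many random normal datasets."""
    rng = random.Random(seed)
    eigs = []
    for _ in range(n_draws):
        xs = [rng.gauss(0, 1) for _ in range(n_obs)]
        ys = [rng.gauss(0, 1) for _ in range(n_obs)]
        eigs.append(largest_eigenvalue_2var(xs, ys))
    return sorted(eigs)[int(q * n_draws) - 1]

# simulated data with one genuine common factor
rng = random.Random(1)
n = 300
f = [rng.gauss(0, 1) for _ in range(n)]
x = [fi + 0.5 * rng.gauss(0, 1) for fi in f]
y = [fi + 0.5 * rng.gauss(0, 1) for fi in f]
observed = largest_eigenvalue_2var(x, y)
threshold = parallel_analysis_threshold(n)
# retain the factor only if its eigenvalue beats the random-data threshold
print(observed > threshold)
```

Unlike the eigenvalue-greater-than-one rule, the threshold here adapts to sample size, which is why parallel analysis tends to be more consistent.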

  14. The glycogen synthase 2 gene (Gys2) displays parallel evolution between Old World and New World fruit bats.

    PubMed

    Qian, Yamin; Fang, Tao; Shen, Bin; Zhang, Shuyi

    2014-01-01

    Frugivorous and nectarivorous bats rely largely on hepatic glycogenesis and glycogenolysis for postprandial blood glucose disposal and maintenance of glucose homeostasis during short time starvation, respectively. The glycogen synthase 2 encoded by the Gys2 gene plays a critical role in liver glycogen synthesis. To test whether the Gys2 gene has undergone adaptive evolution in bats with carbohydrate-rich diets in relation to their insect-eating sister taxa, we sequenced the coding region of the Gys2 gene in a number of bat species, including three Old World fruit bats (OWFBs) (Pteropodidae) and two New World fruit bats (NWFBs) (Phyllostomidae). Our results showed that the Gys2 coding sequences are highly conserved across all bat species we examined, and no evidence of positive selection was detected in the ancestral branches leading to OWFBs and NWFBs. Our explicit convergence test showed that posterior probabilities of convergence between several branches of OWFBs, and the NWFBs were markedly higher than that of divergence. Three parallel amino acid substitutions (Q72H, K371Q, and E666D) were detected among branches of OWFBs and NWFBs. Tests for parallel evolution showed that two parallel substitutions (Q72H and E666D) were driven by natural selection, while the K371Q was more likely to be fixed randomly. Thus, our results suggested that the Gys2 gene has undergone parallel evolution at the amino acid level between OWFBs and NWFBs in relation to their carbohydrate metabolism.

  15. Sample size re-estimation and other midcourse adjustments with sequential parallel comparison design.

    PubMed

    Silverman, Rachel K; Ivanova, Anastasia

    2017-01-01

    Sequential parallel comparison design (SPCD) was proposed to reduce placebo response in a randomized trial with a placebo comparator. Subjects are randomized between placebo and drug in stage 1 of the trial, and then placebo non-responders are re-randomized in stage 2. Efficacy analysis includes all data from stage 1 and all placebo non-responding subjects from stage 2. This article investigates the possibility of re-estimating the sample size and adjusting the design parameters, the allocation proportion to placebo in stage 1 of SPCD, and the weight of stage 1 data in the overall efficacy test statistic during an interim analysis.
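The SPCD flow — randomize everyone in stage 1, re-randomize only placebo non-responders in stage 2, then pool both stages — can be sketched as a small simulation. The response probabilities and the stage-1 weight w are arbitrary illustration values, and the pooled statistic here is a simple weighted difference of response rates, not the exact SPCD test statistic from the article.

```python
import random

def respond(p, rng):
    return rng.random() < p

def simulate_spcd(n, p_placebo, p_drug, w, seed=0):
    """Simulate one SPCD trial and return the pooled treatment effect:
    w * (stage-1 rate difference) + (1 - w) * (stage-2 rate difference)."""
    rng = random.Random(seed)
    s1 = {"drug": [], "placebo": []}
    for _ in range(n):                       # stage 1: randomize all subjects
        arm = "drug" if rng.random() < 0.5 else "placebo"
        s1[arm].append(respond(p_drug if arm == "drug" else p_placebo, rng))
    nonresp = [r for r in s1["placebo"] if not r]
    s2 = {"drug": [], "placebo": []}
    for _ in nonresp:                        # stage 2: re-randomize placebo non-responders
        arm = "drug" if rng.random() < 0.5 else "placebo"
        s2[arm].append(respond(p_drug if arm == "drug" else p_placebo, rng))
    rate = lambda xs: sum(xs) / len(xs) if xs else 0.0
    d1 = rate(s1["drug"]) - rate(s1["placebo"])
    d2 = rate(s2["drug"]) - rate(s2["placebo"])
    return w * d1 + (1 - w) * d2

effect = simulate_spcd(n=2000, p_placebo=0.3, p_drug=0.5, w=0.6)
print(round(effect, 3))
```

An interim analysis of the kind the article studies would re-run such simulations under updated nuisance-parameter estimates to re-choose n, the allocation proportion, and w.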

  16. Reconfiguration and Search of Social Networks

    PubMed Central

    Zhang, Lianming; Peng, Aoyuan

    2013-01-01

    Social networks tend to exhibit topological characteristics different from regular networks and random networks, such as shorter average path length and higher clustering coefficient, and the node degree of the majority of social networks obeys an exponential distribution. Based on the topological characteristics of real social networks, a new network model suited to portraying the structure of social networks was proposed, and the characteristic parameters of the model were calculated. To find the relationship between two people in a social network, a hybrid search strategy based on k-walker random walks and high-degree seeking, using only local information of the network together with a parallel mechanism, was proposed. Simulation results show that the strategy can significantly reduce the average number of search steps, effectively improving search speed and efficiency. PMID:24574861
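The hybrid search idea — several walkers in parallel that prefer high-degree unvisited neighbours but fall back to random-walk moves — can be sketched as follows. The network model, the 70/30 move mix, and all other parameters are my illustration choices, not those of the paper.

```python
import random

def build_network(n, m, seed=0):
    """Random connected network: a spanning chain plus m extra random edges."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(1, n):
        adj[i].add(i - 1); adj[i - 1].add(i)
    while m > 0:
        a, b = rng.randrange(n), rng.randrange(n)
        if a != b and b not in adj[a]:
            adj[a].add(b); adj[b].add(a); m -= 1
    return adj

def k_walker_search(adj, start, target, k=8, max_steps=10000, seed=0):
    """k walkers move in parallel; each step a walker prefers the
    unvisited neighbour of highest degree (high-degree seeking) and
    otherwise takes a random-walk move, using only local information."""
    rng = random.Random(seed)
    positions = [start] * k
    visited = {start}
    for step in range(1, max_steps + 1):
        for i in range(k):
            nbrs = list(adj[positions[i]])
            fresh = [v for v in nbrs if v not in visited]
            if fresh and rng.random() < 0.7:      # high-degree seeking move
                nxt = max(fresh, key=lambda v: len(adj[v]))
            else:                                 # random-walk move
                nxt = rng.choice(nbrs)
            positions[i] = nxt
            visited.add(nxt)
            if nxt == target:
                return step                       # parallel steps until found
    return None

adj = build_network(500, 1500)
steps = k_walker_search(adj, start=0, target=499)
print(steps)
```

Running several walkers in parallel reduces the number of synchronous steps roughly in proportion to k, which is the speedup the abstract reports in simulation.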

  17. The studies of FT-IR and CD spectroscopy on catechol oxidase I from tobacco

    NASA Astrophysics Data System (ADS)

    Xiao, Hourong; Xie, Yongshu; Liu, Qingliang; Xu, Xiaolong; Shi, Chunhua

    2005-10-01

    A novel copper-containing enzyme named COI (catechol oxidase I) has been isolated and purified from tobacco by extracting acetone-emerged powder with phosphate buffer, centrifugation at low temperature, ammonium sulfate fractional precipitation, and column chromatography on DEAE-Sephadex (A-50), Sephadex (G-75), and DEAE-cellulose (DE-52). PAGE and SDS-PAGE were used to assess the purity of the enzyme and to determine its molecular weight. The secondary structures of COI at different pH, different temperatures, and different concentrations of guanidine hydrochloride (GdnHCl) were then studied by FT-IR, Fourier self-deconvolution spectra, and circular dichroism (CD). At pH 2.0, the contents of both α-helix and anti-parallel β-sheet decrease, and that of random coil increases, while β-turn is unchanged compared with the neutral condition (pH 7.0). At pH 11.0, the results indicate that the contents of α-helix, anti-parallel β-sheet and β-turn decrease, while random coil structure increases. According to the CD measurements, the relative average fractions of α-helix, anti-parallel β-sheet, β-turn/parallel β-sheet, aromatic residues and disulfide bond, and random coil/γ-turn are 41.7%, 16.7%, 23.5%, 11.3%, and 6.8% at pH 7.0, respectively, versus 7.2%, 7.7%, 15.2%, 10.7%, 59.2% at pH 2.0, and 20.6%, 9.5%, 15.2%, 10.5%, 44.2% at pH 11.0. Both α-helix and random coil decrease with increasing temperature, while anti-parallel β-sheet increases. After incubation in 6 mol/L guanidine hydrochloride for 30 min, the fraction of α-helix almost disappears (only 1.1% remains), while random coil/γ-turn increases to 81.8%, which coincides well with the results of the enzymatic activity experiment.

  18. Computing effective properties of random heterogeneous materials on heterogeneous parallel processors

    NASA Astrophysics Data System (ADS)

    Leidi, Tiziano; Scocchi, Giulio; Grossi, Loris; Pusterla, Simone; D'Angelo, Claudio; Thiran, Jean-Philippe; Ortona, Alberto

    2012-11-01

    In recent decades, finite element (FE) techniques have been extensively used for predicting effective properties of random heterogeneous materials. In the case of very complex microstructures, the choice of numerical methods for the solution of this problem can offer some advantages over classical analytical approaches, and it allows the use of digital images obtained from real material samples (e.g., using computed tomography). On the other hand, having a large number of elements is often necessary for properly describing complex microstructures, ultimately leading to extremely time-consuming computations and high memory requirements. With the final objective of reducing these limitations, we improved an existing freely available FE code for the computation of effective conductivity (electrical and thermal) of microstructure digital models. To allow execution on hardware combining multi-core CPUs and a GPU, we first translated the original algorithm from Fortran to C, and we subdivided it into software components. Then, we enhanced the C version of the algorithm for parallel processing with heterogeneous processors. With the goal of maximizing the obtained performance and limiting resource consumption, we utilized a software architecture based on stream processing, event-driven scheduling, and dynamic load balancing. The parallel processing version of the algorithm has been validated using a simple microstructure consisting of a single sphere located at the centre of a cubic box, yielding consistent results. Finally, the code was used for the calculation of the effective thermal conductivity of a digital model of a real sample (a ceramic foam obtained using X-ray computed tomography). On a computer equipped with dual hexa-core Intel Xeon X5670 processors and an NVIDIA Tesla C2050, the parallel version shows near-linear speedup when using only the CPU cores, and executes more than 20 times faster when additionally using the GPU.

  19. Accelerate quasi Monte Carlo method for solving systems of linear algebraic equations through shared memory

    NASA Astrophysics Data System (ADS)

    Lai, Siyan; Xu, Ying; Shao, Bo; Guo, Menghan; Lin, Xiaola

    2017-04-01

    In this paper we study the Monte Carlo method for solving systems of linear algebraic equations (SLAE) based on shared memory. Former research demonstrated that GPUs can effectively speed up these computations. Our purpose is to optimize the Monte Carlo simulation specifically for the GPU memory architecture. Random numbers are organized to be stored in shared memory, which accelerates the parallel algorithm. Bank conflicts can be avoided by our Collaborative Thread Array (CTA) scheme. The results of experiments show that the shared-memory-based strategy can speed up the computations by more than 3X in the best case.
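The underlying Monte Carlo method for SLAE can be sketched as follows (CPU-only, with none of the shared-memory optimization the paper describes): random walks estimate one component of the solution of x = Hx + b via the Neumann series x = b + Hb + H²b + ..., valid when the spectral radius of H is below 1. The stopping probability and transition scheme are illustrative choices.

```python
import random

def mc_solve_component(H, b, i, n_walks=20000, seed=0):
    """Monte Carlo estimate of component i of the solution of x = H x + b.
    Each random walk over indices scores b along the path, reweighting by
    H[cur][nxt] / P(transition) to keep the estimator unbiased."""
    rng = random.Random(seed)
    n = len(b)
    p_stop = 0.5                       # termination probability per step
    total = 0.0
    for _ in range(n_walks):
        cur, w, score = i, 1.0, 0.0
        while True:
            score += w * b[cur]
            if rng.random() < p_stop:
                break
            nxt = rng.randrange(n)     # uniform transition, prob (1 - p_stop) / n
            w *= H[cur][nxt] / ((1.0 - p_stop) / n)
            cur = nxt
        total += score
    return total / n_walks

# small system rewritten in fixed-point form x = Hx + b (spectral radius < 1)
H = [[0.0, 0.2, 0.1],
     [0.1, 0.0, 0.2],
     [0.2, 0.1, 0.0]]
b = [1.0, 2.0, 3.0]
x0 = mc_solve_component(H, b, 0)
print(round(x0, 2))
```

Because every walk is independent, the method parallelizes trivially across GPU threads; the paper's contribution is feeding those threads random numbers from shared memory without bank conflicts.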

  20. Scalable domain decomposition solvers for stochastic PDEs in high performance computing

    DOE PAGES

    Desai, Ajit; Khalil, Mohammad; Pettit, Chris; ...

    2017-09-21

    Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. And though these algorithms exhibit excellent scalabilities, significant algorithmic and implementational challenges exist to extend them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.

  1. Scalable domain decomposition solvers for stochastic PDEs in high performance computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Desai, Ajit; Khalil, Mohammad; Pettit, Chris

    Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. And though these algorithms exhibit excellent scalabilities, significant algorithmic and implementational challenges exist to extend them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.

  2. Prevention of Hypoglycemia With Predictive Low Glucose Insulin Suspension in Children With Type 1 Diabetes: A Randomized Controlled Trial.

    PubMed

    Battelino, Tadej; Nimri, Revital; Dovc, Klemen; Phillip, Moshe; Bratina, Natasa

    2017-06-01

    To investigate whether predictive low glucose management (PLGM) of the MiniMed 640G system significantly reduces the rate of hypoglycemia compared with the sensor-augmented insulin pump in children with type 1 diabetes. This randomized, two-arm, parallel, controlled, two-center open-label study included 100 children and adolescents with type 1 diabetes and glycated hemoglobin A1c ≤10% (≤86 mmol/mol) and using continuous subcutaneous insulin infusion. Patients were randomly assigned to either an intervention group with PLGM features enabled (PLGM ON) or a control group (PLGM OFF), in a 1:1 ratio, all using the same type of sensor-augmented insulin pump. The primary end point was the number of hypoglycemic events below 65 mg/dL (3.6 mmol/L), based on sensor glucose readings, during a 14-day study treatment. The analysis was performed by intention to treat for all randomized patients. The number of hypoglycemic events below 65 mg/dL (3.6 mmol/L) was significantly smaller in the PLGM ON compared with the PLGM OFF group (mean ± SD 4.4 ± 4.5 and 7.4 ± 6.3, respectively; P = 0.008). This was also true when calculated separately for night (P = 0.025) and day (P = 0.022). No severe hypoglycemic events occurred; however, there was a significant increase in time spent above 140 mg/dL (7.8 mmol/L) in the PLGM ON group (P = 0.0165). The PLGM insulin suspension was associated with a significantly reduced number of hypoglycemic events. Although this was achieved at the expense of increased time in moderate hyperglycemia, there were no serious adverse effects in young patients with type 1 diabetes. © 2017 by the American Diabetes Association.

  3. Multidisciplinary intensive functional restoration versus outpatient active physiotherapy in chronic low back pain: a randomized controlled trial.

    PubMed

    Roche-Leboucher, Ghislaine; Petit-Lemanac'h, Audrey; Bontoux, Luc; Dubus-Bausière, Valérie; Parot-Shinkel, Elsa; Fanello, Serge; Penneau-Fontbonne, Dominique; Fouquet, Natacha; Legrand, Erick; Roquelaure, Yves; Richard, Isabelle

    2011-12-15

    Randomized parallel group comparative trial with a 1-year follow-up period. To compare in a population of patients with chronic low back pain, the effectiveness of a functional restoration program (FRP), including intensive physical training and a multidisciplinary approach, with an outpatient active physiotherapy program at 1-year follow-up. Controlled studies conducted in the United States and in Northern Europe showed a benefit of FRPs, especially on return to work. Randomized studies have compared these programs with standard care. A previously reported study presented the effectiveness at 6 months of both functional restoration and active physiotherapy, with a significantly greater reduction of sick-leave days for functional restoration. A total of 132 patients with low back pain were randomized to either FRP (68 patients) or active individual therapy (64 patients). One patient did not complete the FRP; 19 patients were lost to follow-up (4 in the FRP group and 15 in the active individual treatment group). The number of sick-leave days in 2 years before the program was similar in both groups (180 ± 135.1 days in active individual treatment vs. 185 ± 149.8 days in FRP, P = 0.847). In both groups, at 1-year follow-up, intensity of pain, flexibility, trunk muscle endurance, Dallas daily activities and work and leisure scores, and number of sick-leave days were significantly improved compared with baseline. The number of sick-leave days was significantly lower in the FRP group. Both programs are efficient in reducing disability and sick-leave days. The FRP is significantly more effective in reducing sick-leave days. Further analysis is required to determine if this overweighs the difference in costs of both programs.

  4. Habit Reversal versus Object Manipulation Training for Treating Nail Biting: A Randomized Controlled Clinical Trial

    PubMed Central

    Ghanizadeh, Ahmad; Bazrafshan, Amir; Dehbozorgi, Gholamreza

    2013-01-01

    Objective This is a parallel, three group, randomized, controlled clinical trial, with outcomes evaluated up to three months after randomization for children and adolescents with chronic nail biting. The current study investigates the efficacy of habit reversal training (HRT) and compares its effect with object manipulation training (OMT) considering the limitations of the current literature. Method Ninety one children and adolescents with nail biting were randomly allocated to one of three groups: HRT (n = 30), OMT (n = 30), and wait-list or control group (n = 31). The mean length of nail was considered the main outcome. Results The mean length of the nails after one month in the HRT and OMT groups increased compared to the wait-list group (P < 0.001, P < 0.001, respectively). In the long term, both OMT and HRT increased the mean length of nails (P < 0.01), but HRT was more effective than OMT (P < 0.021). The parent-reported frequency of nail biting showed results similar to the nail-length assessment in the long term. The number of children who completely stopped nail biting within three months was 8 in the HRT group and 7 in the OMT group; for the wait-list group it was zero within one month. Conclusion This trial showed that HRT is more effective than wait-list and OMT in increasing the mean nail length of children and adolescents in the long term. PMID:24130603

  5. Can sequential parallel comparison design and two-way enriched design be useful in medical device clinical trials?

    PubMed

    Ivanova, Anastasia; Zhang, Zhiwei; Thompson, Laura; Yang, Ying; Kotz, Richard M; Fang, Xin

    2016-01-01

    Sequential parallel comparison design (SPCD) was proposed for trials with high placebo response. In the first stage of SPCD, subjects are randomized between placebo and active treatment. In the second stage, placebo nonresponders are re-randomized between placebo and active treatment. Data from the population of "all comers" and the subpopulation of placebo nonresponders are then combined to yield a single p-value for the treatment comparison. Two-way enriched design (TED) is an extension of SPCD where active treatment responders are also re-randomized between placebo and active treatment in Stage 2. This article investigates the potential uses of SPCD and TED in medical device trials.

  6. Randomized placebo controlled blinded study to assess valsartan efficacy in preventing left ventricle remodeling in patients with dual chamber pacemaker--Rationale and design of the trial.

    PubMed

    Tomasik, Andrzej; Jacheć, Wojciech; Wojciechowska, Celina; Kawecki, Damian; Białkowska, Beata; Romuk, Ewa; Gabrysiak, Artur; Birkner, Ewa; Kalarus, Zbigniew; Nowalany-Kozielska, Ewa

    2015-05-01

    Dual chamber pacing is known to have a detrimental effect on cardiac performance, and the heart failure that eventually occurs is associated with increased mortality. Experimental studies of pacing in dogs have shown contractile dyssynchrony leading to diffuse alterations in extracellular matrix. In parallel, studies on experimental ischemia/reperfusion injury have shown efficacy of valsartan to inhibit activity of matrix metalloproteinase-9, to increase the activity of tissue inhibitor of matrix metalloproteinase-3, and to preserve global contractility and left ventricle ejection fraction. To present the rationale and design of a randomized blinded trial assessing whether 12-month administration of valsartan will prevent left ventricle remodeling in patients with preserved left ventricle ejection fraction (LVEF ≥ 40%) and first implantation of a dual chamber pacemaker. A total of 100 eligible patients will be randomized into three parallel arms: placebo, valsartan 80 mg/daily, and valsartan 160 mg/daily added to previously used drugs. The primary endpoint will be assessment of valsartan efficacy to prevent left ventricle remodeling during 12-month follow-up. We assess patients' functional capacity, blood plasma activity of matrix metalloproteinases and their tissue inhibitors, NT-proBNP, tumor necrosis factor alpha, and Troponin T. Left ventricle function and remodeling is assessed echocardiographically: M-mode, B-mode, tissue Doppler imaging. If valsartan proves effective, it will be an attractive measure to improve long-term prognosis in an aging population with an increasing number of pacemaker recipients. ClinicalTrials.gov (NCT01805804). Copyright © 2015 Elsevier Inc. All rights reserved.

  7. Parallel Mitogenome Sequencing Alleviates Random Rooting Effect in Phylogeography.

    PubMed

    Hirase, Shotaro; Takeshima, Hirohiko; Nishida, Mutsumi; Iwasaki, Wataru

    2016-04-28

    Reliably rooted phylogenetic trees play irreplaceable roles in clarifying diversification in the patterns of species and populations. However, such trees are often unavailable in phylogeographic studies, particularly when the focus is on rapidly expanded populations that exhibit star-like trees. A fundamental bottleneck is known as the random rooting effect, where a distant outgroup tends to root an unrooted tree "randomly." We investigated whether parallel mitochondrial genome (mitogenome) sequencing alleviates this effect in phylogeography using a case study on the Sea of Japan lineage of the intertidal goby Chaenogobius annularis. Eighty-three C. annularis individuals were collected and their mitogenomes were determined by high-throughput and low-cost parallel sequencing. Phylogenetic analysis of these mitogenome sequences was conducted to root the Sea of Japan lineage, which has a star-like phylogeny and had not been reliably rooted. The topologies of the bootstrap trees were investigated to determine whether the use of mitogenomes alleviated the random rooting effect. The mitogenome data successfully rooted the Sea of Japan lineage by alleviating the effect, which had hindered phylogenetic analysis based on specific gene sequences. The reliable rooting of the lineage led to the discovery of a novel, northern lineage that expanded during an interglacial period with high bootstrap support. Furthermore, the finding of this lineage suggested the existence of additional glacial refugia and provided a new recent calibration point that revised the divergence time estimation between the Sea of Japan and Pacific Ocean lineages. This study illustrates the effectiveness of parallel mitogenome sequencing for solving the random rooting problem in phylogeographic studies. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  8. Feasibility of including cellular telephone numbers in random digit dialing for epidemiologic case-control studies.

    PubMed

    Voigt, Lynda F; Schwartz, Stephen M; Doody, David R; Lee, Spencer C; Li, Christopher I

    2011-01-01

    The usefulness of landline random digit dialing (RDD) in epidemiologic studies is threatened by the rapid increase in households with only cellular telephone service. This study assessed the feasibility of including cellular telephone numbers in RDD and differences between young adults with landline telephones and those with only cellular telephones. Between 2008 and 2009, a total of 9,023 cellular telephone numbers were called and 43.8% were successfully screened; 248 men and 249 women who resided in 3 Washington State counties, were 20-44 years of age, and used only cellular telephones were interviewed. They were compared with 332 men and 526 women with landline telephones interviewed as controls for 2 case-control studies conducted in parallel with cellular telephone interviewing. Cellular-only users were more likely to be college educated and less likely to have fathered/birthed a child than were their landline counterparts. Male cellular-only users were less likely to be obese and more likely to exercise, to be Hispanic, and to have lower incomes, while female cellular-only users were more likely to be single than landline respondents. Including cellular telephone numbers in RDD is feasible and should be incorporated into epidemiologic studies that rely on this method to ascertain subjects, although low screening rates could hamper the representativeness of such a sample.

  9. A sweep algorithm for massively parallel simulation of circuit-switched networks

    NASA Technical Reports Server (NTRS)

    Gaujal, Bruno; Greenberg, Albert G.; Nicol, David M.

    1992-01-01

    A new massively parallel algorithm is presented for simulating large asymmetric circuit-switched networks controlled by a randomized-routing policy that includes trunk reservation. A single instruction multiple data (SIMD) implementation is described, and corresponding experiments on a 16,384-processor MasPar parallel computer are reported. A multiple instruction multiple data (MIMD) implementation is also described, and corresponding experiments on an Intel iPSC/860 parallel computer, using 16 processors, are reported. By exploiting parallelism, our algorithm increases the possible execution rate of such complex simulations by as much as an order of magnitude.

  10. Estimation of treatment efficacy with complier average causal effects (CACE) in a randomized stepped wedge trial.

    PubMed

    Gruber, Joshua S; Arnold, Benjamin F; Reygadas, Fermin; Hubbard, Alan E; Colford, John M

    2014-05-01

    Complier average causal effects (CACE) estimate the impact of an intervention among treatment compliers in randomized trials. Methods used to estimate CACE have been outlined for parallel-arm trials (e.g., using an instrumental variables (IV) estimator) but not for other randomized study designs. Here, we propose a method for estimating CACE in randomized stepped wedge trials, where experimental units cross over from control conditions to intervention conditions in a randomized sequence. We illustrate the approach with a cluster-randomized drinking water trial conducted in rural Mexico from 2009 to 2011. Additionally, we evaluated the plausibility of assumptions required to estimate CACE using the IV approach, which are testable in stepped wedge trials but not in parallel-arm trials. We observed small increases in the magnitude of CACE risk differences compared with intention-to-treat estimates for drinking water contamination (risk difference (RD) = -22% (95% confidence interval (CI): -33, -11) vs. RD = -19% (95% CI: -26, -12)) and diarrhea (RD = -0.8% (95% CI: -2.1, 0.4) vs. RD = -0.1% (95% CI: -1.1, 0.9)). Assumptions required for IV analysis were probably violated. Stepped wedge trials allow investigators to estimate CACE with an approach that avoids the stronger assumptions required for CACE estimation in parallel-arm trials. Inclusion of CACE estimates in stepped wedge trials with imperfect compliance could enhance reporting and interpretation of the results of such trials.
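    In the simplest binary case, the IV estimator referred to above reduces to the Wald ratio: the intention-to-treat risk difference divided by the between-arm difference in compliance. A minimal Python sketch with illustrative numbers (the function name and inputs are ours, not the trial's data):

    ```python
    def cace_wald(risk_treated, risk_control, compliance_treated, compliance_control):
        """Complier average causal effect via the Wald/IV ratio:
        intention-to-treat risk difference divided by the compliance difference."""
        itt = risk_treated - risk_control
        return itt / (compliance_treated - compliance_control)

    # Hypothetical numbers (not the Mexico trial's): a -19-point ITT risk
    # difference with 85% uptake in intervention clusters and none in control.
    print(round(cace_wald(0.30, 0.49, 0.85, 0.0), 3))  # → -0.224
    ```

    With full compliance, CACE equals the ITT estimate; as compliance drops, the same ITT effect implies a larger effect among compliers, matching the small increases in magnitude reported above.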

  11. Computational efficiency of parallel combinatorial OR-tree searches

    NASA Technical Reports Server (NTRS)

    Li, Guo-Jie; Wah, Benjamin W.

    1990-01-01

    The performance of parallel combinatorial OR-tree searches is analytically evaluated. This performance depends on the complexity of the problem to be solved, the error allowance function, the dominance relation, and the search strategies. The exact performance may be difficult to predict due to the nondeterminism and anomalies of parallelism. The authors derive the performance bounds of parallel OR-tree searches with respect to the best-first, depth-first, and breadth-first strategies, and verify these bounds by simulation. They show that a near-linear speedup can be achieved with respect to a large number of processors for parallel OR-tree searches. Using the bounds developed, the authors derive sufficient conditions for assuring that parallelism will not degrade performance and necessary conditions for allowing parallelism to have a speedup greater than the ratio of the numbers of processors. These bounds and conditions provide the theoretical foundation for determining the number of processors required to assure a near-linear speedup.

  12. Exploratory graph analysis: A new approach for estimating the number of dimensions in psychological research

    PubMed Central

    Golino, Hudson F.; Epskamp, Sacha

    2017-01-01

    The estimation of the correct number of dimensions is a long-standing problem in psychometrics. Several methods have been proposed, such as parallel analysis (PA), Kaiser-Guttman’s eigenvalue-greater-than-one rule, the minimum average partial procedure (MAP), the maximum-likelihood approaches that use fit indexes such as BIC and EBIC, and the less used and studied approach called very simple structure (VSS). In the present paper a new approach to estimating the number of dimensions is introduced and compared via simulation to the traditional techniques listed above. The proposed approach is called exploratory graph analysis (EGA), since it is based on the graphical lasso with the regularization parameter specified using EBIC. The number of dimensions is verified using walktrap, a random-walk algorithm used to identify communities in networks. In total, 32,000 data sets were simulated to fit known factor structures, with the data sets varying across different criteria: number of factors (2 and 4), number of items (5 and 10), sample size (100, 500, 1,000 and 5,000) and correlation between factors (orthogonal, .20, .50 and .70), resulting in 64 different conditions. For each condition, 500 data sets were simulated using lavaan. The results show that EGA performs comparably to parallel analysis, EBIC, eBIC and the Kaiser-Guttman rule in a number of situations, especially when the number of factors is two. However, EGA was the only technique able to correctly estimate the number of dimensions in the four-factor structure when the correlation between factors was .70, showing an accuracy of 100% for a sample size of 5,000 observations. Finally, EGA was used to estimate the number of factors in a real dataset, in order to compare its performance with the other six techniques tested in the simulation study. PMID:28594839
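    Of the benchmark methods listed, parallel analysis is the simplest to sketch: retain as many dimensions as there are sample eigenvalues exceeding the mean eigenvalues of same-sized random data. A toy NumPy illustration (our own code, not the authors' R/lavaan pipeline; names are illustrative):

    ```python
    import numpy as np

    def parallel_analysis(data, n_sims=200, seed=0):
        """Horn's parallel analysis: count sample eigenvalues of the correlation
        matrix that exceed the mean eigenvalues of same-shaped random data."""
        rng = np.random.default_rng(seed)
        n, p = data.shape
        obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
        sims = np.empty((n_sims, p))
        for i in range(n_sims):
            noise = rng.standard_normal((n, p))
            sims[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False)))[::-1]
        return int(np.sum(obs > sims.mean(axis=0)))

    # Toy two-factor structure: items 0-4 load on one factor, items 5-9 on another.
    rng = np.random.default_rng(1)
    factors = rng.standard_normal((500, 2))
    loadings = np.zeros((2, 10))
    loadings[0, :5] = 0.8
    loadings[1, 5:] = 0.8
    items = factors @ loadings + 0.6 * rng.standard_normal((500, 10))
    print(parallel_analysis(items))  # → 2
    ```

    EGA replaces the eigenvalue comparison with community detection on a regularized partial-correlation network, but the simulation design above (known loadings, varying sample size and factor correlation) is exactly what this kind of toy generator produces.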

  13. Exploratory graph analysis: A new approach for estimating the number of dimensions in psychological research.

    PubMed

    Golino, Hudson F; Epskamp, Sacha

    2017-01-01

    The estimation of the correct number of dimensions is a long-standing problem in psychometrics. Several methods have been proposed, such as parallel analysis (PA), Kaiser-Guttman's eigenvalue-greater-than-one rule, the minimum average partial procedure (MAP), the maximum-likelihood approaches that use fit indexes such as BIC and EBIC, and the less used and studied approach called very simple structure (VSS). In the present paper a new approach to estimating the number of dimensions is introduced and compared via simulation to the traditional techniques listed above. The proposed approach is called exploratory graph analysis (EGA), since it is based on the graphical lasso with the regularization parameter specified using EBIC. The number of dimensions is verified using walktrap, a random-walk algorithm used to identify communities in networks. In total, 32,000 data sets were simulated to fit known factor structures, with the data sets varying across different criteria: number of factors (2 and 4), number of items (5 and 10), sample size (100, 500, 1,000 and 5,000) and correlation between factors (orthogonal, .20, .50 and .70), resulting in 64 different conditions. For each condition, 500 data sets were simulated using lavaan. The results show that EGA performs comparably to parallel analysis, EBIC, eBIC and the Kaiser-Guttman rule in a number of situations, especially when the number of factors is two. However, EGA was the only technique able to correctly estimate the number of dimensions in the four-factor structure when the correlation between factors was .70, showing an accuracy of 100% for a sample size of 5,000 observations. Finally, EGA was used to estimate the number of factors in a real dataset, in order to compare its performance with the other six techniques tested in the simulation study.

  14. Effect of alignment of easy axes on dynamic magnetization of immobilized magnetic nanoparticles

    NASA Astrophysics Data System (ADS)

    Yoshida, Takashi; Matsugi, Yuki; Tsujimura, Naotaka; Sasayama, Teruyoshi; Enpuku, Keiji; Viereck, Thilo; Schilling, Meinhard; Ludwig, Frank

    2017-04-01

    In some biomedical applications of magnetic nanoparticles (MNPs), the particles are physically immobilized. In this study, we explore the effect of the alignment of the magnetic easy axes on the dynamic magnetization of immobilized MNPs under an AC excitation field. We prepared three immobilized MNP samples: (1) a sample in which easy axes are randomly oriented, (2) a parallel-aligned sample in which easy axes are parallel to the AC field, and (3) an orthogonally aligned sample in which easy axes are perpendicular to the AC field. First, we show that the parallel-aligned sample has the largest hysteresis in the magnetization curve and the largest harmonic magnetization spectra, followed by the randomly oriented and orthogonally aligned samples. For example, a 1.6-fold increase was observed in the area of the hysteresis loop of the parallel-aligned sample compared to that of the randomly oriented sample. To quantitatively discuss the experimental results, we perform a numerical simulation based on a Fokker-Planck equation, in which probability distributions for the directions of the easy axes are taken into account in simulating the prepared MNP samples. We obtained quantitative agreement between experiment and simulation. These results indicate that the dynamic magnetization of immobilized MNPs is significantly affected by the alignment of the easy axes.

  15. Physical therapy for urinary incontinence in postmenopausal women with osteoporosis or low bone density: a randomized controlled trial.

    PubMed

    Sran, Meena; Mercier, Joanie; Wilson, Penny; Lieblich, Pat; Dumoulin, Chantale

    2016-03-01

    To assess the effectiveness of 12 weekly physical therapy sessions for urinary incontinence (UI), compared with a control intervention, in reducing the number of UI episodes measured with the 7-day bladder diary at 3 months and 1 year postrandomization. A single parallel-group randomized controlled trial was conducted at one outpatient public health center, in postmenopausal women aged 55 years and over with osteoporosis or low bone density and UI. Women were randomized to physical therapy (PT) for UI or to osteoporosis education. The primary outcome measure was the number of leakage episodes on the 7-day bladder diary, assessed at baseline, after treatment, and at 1 year. The secondary outcome measures included the pad test and disease-specific quality-of-life and self-efficacy questionnaires assessed at the same timepoints. Forty-eight women participated (24 per group). Two participants dropped out of each group and one participant died before the 3-month follow-up. Intention-to-treat analysis was undertaken. At 3 months and 1 year, there was a statistically significant difference in the number of leakage episodes on the 7-day bladder diary (3 mo: P = 0.04; 1 y: P = 0.01) in favor of the PT group. The effect size was 0.34 at 1 year. No harms were reported. After a 12-week course of once-weekly PT for UI, PT group participants had a 75% reduction in the weekly median number of leakage episodes, whereas the control group showed no improvement. At 1 year, the PT group participants maintained this improvement, whereas the control group's incontinence worsened.

  16. Anomalous Anticipatory Responses in Networked Random Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, Roger D.; Bancel, Peter A.

    2006-10-16

    We examine an 8-year archive of synchronized, parallel time series of random data from a world-spanning network of physical random event generators (REGs). The archive is a publicly accessible matrix of normally distributed 200-bit sums recorded at 1 Hz which extends from August 1998 to the present. The primary question is whether these data show non-random structure associated with major events such as natural or man-made disasters, terrible accidents, or grand celebrations. Secondarily, we examine the time course of apparently correlated responses. Statistical analyses of the data reveal consistent evidence that events which strongly affect people engender small but significant effects. These include suggestions of anticipatory responses in some cases, leading to a series of specialized analyses to assess possible non-random structure preceding precisely timed events. A focused examination of data collected around the time of earthquakes with Richter magnitude 6 and greater reveals non-random structure with a number of intriguing, potentially important features. Anomalous effects in the REG data are seen only when the corresponding earthquakes occur in populated areas. No structure is found if they occur in the oceans. We infer that an important contributor to the effect is the relevance of the earthquake to humans. Epoch averaging reveals evidence for changes in the data some hours prior to the main temblor, suggestive of reverse causation.

  17. Field Line Random Walk in Isotropic Magnetic Turbulence up to Infinite Kubo Number

    NASA Astrophysics Data System (ADS)

    Sonsrettee, W.; Wongpan, P.; Ruffolo, D. J.; Matthaeus, W. H.; Chuychai, P.; Rowlands, G.

    2013-12-01

    In astrophysical plasmas, the magnetic field line random walk (FLRW) plays a key role in the transport of energetic particles. In the present work, we consider isotropic magnetic turbulence, which is a reasonable model for interstellar space. Theoretical conceptions of the FLRW have been strongly influenced by studies of the limit of weak fluctuations (or a strong mean field) (e.g., Isichenko 1991a, b). In this case, the behavior of the FLRW can be characterized by the Kubo number R = (b/B0)(l∥/l⊥), where l∥ and l⊥ are the turbulence coherence scales parallel and perpendicular to the mean field, respectively, and b is the root mean squared fluctuation field. In the 2D limit (R ≫ 1), there has been an apparent conflict between concepts of Bohm diffusion, which is based on Corrsin's independence hypothesis, and percolative diffusion. Here we have used three non-perturbative analytic techniques based on Corrsin's independence hypothesis for B0 = 0 (R = ∞): diffusive decorrelation (DD), random ballistic decorrelation (RBD), and a general ordinary differential equation (ODE), and compared them with direct computer simulations. All the analytical models and computer simulations agree that isotropic turbulence for R = ∞ has a field line diffusion coefficient that is consistent with Bohm diffusion. Partially supported by the Thailand Research Fund, NASA, and NSF.

  18. Self field triggered superconducting fault current limiter

    DOEpatents

    Tekletsadik, Kasegn D [Rexford, NY

    2008-02-19

    A superconducting fault current limiter array with a plurality of superconductor elements arranged in a meandering array having an even number of superconductors parallel to each other and arranged in a plane that is parallel to an odd number of the plurality of superconductors, where the odd number of superconductors are parallel to each other and arranged in a plane that is parallel to the even number of the plurality of superconductors, when viewed from a top view. The even number of superconductors are coupled at the upper end to the upper end of the odd number of superconductors. A plurality of lower shunt coils each coupled to the lower end of each of the even number of superconductors and a plurality of upper shunt coils each coupled to the upper end of each of the odd number of superconductors so as to generate a generally orthogonal uniform magnetic field during quenching using only the magnetic field generated by the superconductors.

  19. GPURFSCREEN: a GPU based virtual screening tool using random forest classifier.

    PubMed

    Jayaraj, P B; Ajay, Mathias K; Nufail, M; Gopakumar, G; Jaleel, U C A

    2016-01-01

    In-silico methods are an integral part of the modern drug discovery paradigm. Virtual screening, an in-silico method, is used to refine data models and reduce the chemical space on which wet lab experiments need to be performed. Virtual screening of a ligand data model requires large-scale computations, making it a highly time-consuming task. This process can be sped up by implementing parallelized algorithms on a Graphical Processing Unit (GPU). Random forest is a robust classification algorithm that can be employed in virtual screening. A ligand-based virtual screening tool (GPURFSCREEN) that uses random forests on GPU systems is proposed and evaluated in this paper. This tool produces optimized results at a lower execution time for large bioassay data sets. The quality of results produced by our tool on the GPU is the same as that in a regular serial environment. Considering the magnitude of data to be screened, the parallelized virtual screening has a significantly lower running time at high throughput. The proposed parallel tool outperforms its serial counterpart by successfully screening billions of molecules in the training and prediction phases.
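    Random forests parallelize naturally because each tree is trained independently on its own bootstrap sample, which is the property a GPU implementation exploits. A toy CPU illustration in NumPy using one-level trees (decision stumps) and omitting per-split feature subsampling; this is a sketch of the bagging-and-voting idea, not the GPURFSCREEN implementation:

    ```python
    import numpy as np

    def fit_stump(X, y):
        """Exhaustively pick the best one-feature threshold rule by training accuracy."""
        best_acc, best = -1.0, None
        for j in range(X.shape[1]):
            for t in np.unique(X[:, j]):
                for flip in (False, True):
                    acc = np.mean(((X[:, j] > t) ^ flip) == y)
                    if acc > best_acc:
                        best_acc, best = acc, (j, t, flip)
        return best

    def predict_stump(stump, X):
        j, t, flip = stump
        return (X[:, j] > t) ^ flip

    def random_forest(X, y, n_trees=25, seed=0):
        """Fit each tree on an independent bootstrap sample; the trees never
        communicate, so this loop is embarrassingly parallel."""
        rng = np.random.default_rng(seed)
        stumps = []
        for _ in range(n_trees):
            idx = rng.integers(0, len(X), len(X))   # bootstrap resample
            stumps.append(fit_stump(X[idx], y[idx]))
        return stumps

    def predict_forest(stumps, X):
        votes = np.mean([predict_stump(s, X) for s in stumps], axis=0)
        return votes > 0.5                          # majority vote

    # Toy "active/inactive" labels driven by a single descriptor.
    rng = np.random.default_rng(1)
    X = rng.standard_normal((200, 3))
    y = X[:, 1] > 0.2
    stumps = random_forest(X, y)
    print(np.mean(predict_forest(stumps, X) == y) > 0.9)  # → True
    ```

    A real random forest grows deep trees and samples a random feature subset at every split; the independence of trees, which is what makes the GPU speedup possible, is the same in both cases.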

  20. Detecting Spatial Patterns in Biological Array Experiments

    PubMed Central

    ROOT, DAVID E.; KELLEY, BRIAN P.; STOCKWELL, BRENT R.

    2005-01-01

    Chemical genetic screening and DNA and protein microarrays are among a number of increasingly important and widely used biological research tools that involve large numbers of parallel experiments arranged in a spatial array. It is often difficult to ensure that uniform experimental conditions are present throughout the entire array, and as a result, one often observes systematic spatially correlated errors, especially when array experiments are performed using robots. Here, the authors apply techniques based on the discrete Fourier transform to identify and quantify spatially correlated errors superimposed on a spatially random background. They demonstrate that these techniques are effective in identifying common spatially systematic errors in high-throughput 384-well microplate assay data. In addition, the authors employ a statistical test to allow for automatic detection of such errors. Software tools for using this approach are provided. PMID:14567791
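    The Fourier idea in this abstract can be sketched in a few lines: a spatially systematic error that repeats with a fixed period concentrates power in one bin of the 2-D discrete Fourier transform, whereas a spatially random background spreads power across all bins. An illustrative NumPy sketch on a toy 16x24 (384-well) plate, not the authors' software:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    plate = rng.standard_normal((16, 24))           # 384-well plate: random background
    plate += 1.5 * (np.arange(16) % 2)[:, None]     # systematic every-other-row artifact

    spectrum = np.abs(np.fft.fft2(plate - plate.mean())) ** 2
    spectrum[0, 0] = 0.0                            # discard the residual DC term

    # The alternating-row error concentrates power at row frequency 8 (the Nyquist
    # bin for 16 rows) and column frequency 0; the random noise cannot compete.
    peak = tuple(int(i) for i in np.unravel_index(np.argmax(spectrum), spectrum.shape))
    print(peak)  # → (8, 0)
    ```

    Thresholding such peaks against the noise floor gives the automatic detection test the authors describe; the peak's location identifies the period of the systematic error (here, every other row, as a pipetting robot might produce).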

  1. Parallel search for conjunctions with stimuli in apparent motion.

    PubMed

    Casco, C; Ganis, G

    1999-01-01

    A series of experiments was conducted to determine whether apparent motion tends to follow the similarity rule (i.e. is attribute-specific) and to investigate the underlying mechanism. Stimulus duration thresholds were measured during a two-alternative forced-choice task in which observers detected either the location or the motion direction of target groups defined by the conjunction of size and orientation. Target element positions were randomly chosen within a nominally defined rectangular subregion of the display (target region). The target region was presented either statically (followed by a 250 ms duration mask) or dynamically, displaced by a small distance (18 min of arc) from frame to frame. In the motion display, the position of both target and background elements was changed randomly from frame to frame within the respective areas to abolish spatial correspondence over time. Stimulus duration thresholds were lower in the motion than in the static task, indicating that target detection in the dynamic condition does not rely on the explicit identification of target elements in each static frame. Increasing the distractor-to-target ratio was found to reduce detectability in the static, but not in the motion task. This indicates that the perceptual segregation of the target is effortless and parallel with motion but not with static displays. The pattern of results holds regardless of the task or search paradigm employed. The detectability in the motion condition can be improved by increasing the number of frames and/or by reducing the width of the target area. Furthermore, parallel search in the dynamic condition can be conducted with both short-range and long-range motion stimuli. Finally, apparent motion of conjunctions is insufficient on its own to support location decision and is disrupted by random visual noise. 
Overall, these findings show that (i) the mechanism underlying apparent motion is attribute-specific; (ii) the motion system mediates temporal integration of feature conjunctions before they are identified by the static system; and (iii) target detectability in these stimuli relies upon a nonattentive, cooperative, directionally selective motion mechanism that responds to high-level attributes (conjunction of size and orientation).

  2. Design of a switch matrix gate/bulk driver controller for thin film lithium microbatteries using microwave SOI technology

    NASA Technical Reports Server (NTRS)

    Whitacre, J.; West, W. C.; Mojarradi, M.; Sukumar, V.; Hess, H.; Li, H.; Buck, K.; Cox, D.; Alahmad, M.; Zghoul, F. N.; hide

    2003-01-01

    This paper presents a design approach that can attain any random grouping pattern among the microbatteries. In this case, the result is the ability to charge microbatteries in parallel and to discharge microbatteries in parallel, or pairs of microbatteries in series.

  3. Exploring equivalence domain in nonlinear inverse problems using Covariance Matrix Adaptation Evolution Strategy (CMAES) and random sampling

    NASA Astrophysics Data System (ADS)

    Grayver, Alexander V.; Kuvshinov, Alexey V.

    2016-05-01

    This paper presents a methodology to sample the equivalence domain (ED) in nonlinear partial differential equation (PDE)-constrained inverse problems. For this purpose, we first applied a state-of-the-art stochastic optimization algorithm, the Covariance Matrix Adaptation Evolution Strategy (CMAES), to identify low-misfit regions of the model space. These regions were then randomly sampled to create an ensemble of equivalent models and quantify uncertainty. CMAES is aimed at exploring model space globally and is robust on very ill-conditioned problems. We show that the number of iterations required to converge grows at a moderate rate with respect to the number of unknowns and that the algorithm is embarrassingly parallel. We formulated the problem using the generalized Gaussian distribution, which enabled us to seamlessly use arbitrary norms for the residual and regularization terms. We show that various regularization norms facilitate studying different classes of equivalent solutions. We further show how the performance of the standard Metropolis-Hastings Markov chain Monte Carlo algorithm can be substantially improved by using the information CMAES provides. This methodology was tested using individual and joint inversions of magnetotelluric, controlled-source electromagnetic (EM) and global EM induction data.
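    A full CMA-ES update is lengthy, but the core loop the abstract relies on — sample a Gaussian population, keep the low-misfit members, and re-center the search distribution on them — can be sketched briefly. The code below is a simplified (mu, lambda) evolution strategy with a fixed step-size decay, a toy stand-in for CMA-ES; the function names and the quadratic misfit are our own:

    ```python
    import numpy as np

    def simple_es(misfit, dim, n_iter=60, pop=40, elite=10, sigma=1.0, seed=0):
        """Minimal (mu, lambda) evolution strategy: sample, select, re-center.
        The pop misfit evaluations per iteration are independent of one another,
        which is what makes the real CMAES embarrassingly parallel."""
        rng = np.random.default_rng(seed)
        mean = np.zeros(dim)
        for _ in range(n_iter):
            samples = mean + sigma * rng.standard_normal((pop, dim))
            fitness = np.array([misfit(s) for s in samples])
            best = samples[np.argsort(fitness)[:elite]]   # low-misfit members
            mean = best.mean(axis=0)                      # re-center on the elite
            sigma *= 0.9                                  # shrink the search radius
        return mean

    # Toy "inverse problem": recover model parameters minimizing a quadratic misfit.
    target = np.array([1.0, -2.0, 0.5])
    misfit = lambda m: np.sum((m - target) ** 2)
    m_hat = simple_es(misfit, dim=3)
    print(np.round(m_hat, 2))
    ```

    CMA-ES additionally adapts the full covariance matrix of the sampling distribution rather than a scalar sigma, which is what makes it robust on the ill-conditioned problems the abstract targets; the low-misfit samples collected along the way are the raw material for the equivalence-domain ensemble.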

  4. Rationale and design of a randomized, double-blind, parallel-group study of terutroban 30 mg/day versus aspirin 100 mg/day in stroke patients: the prevention of cerebrovascular and cardiovascular events of ischemic origin with terutroban in patients with a history of ischemic stroke or transient ischemic attack (PERFORM) study.

    PubMed

    Bousser, M G; Amarenco, P; Chamorro, A; Fisher, M; Ford, I; Fox, K; Hennerici, M G; Mattle, H P; Rothwell, P M

    2009-01-01

    Ischemic stroke is the leading cause of mortality worldwide and a major contributor to neurological disability and dementia. Terutroban is a specific TP receptor antagonist with antithrombotic, antivasoconstrictive, and antiatherosclerotic properties, which may be of interest for the secondary prevention of ischemic stroke. This article describes the rationale and design of the Prevention of cerebrovascular and cardiovascular Events of ischemic origin with teRutroban in patients with a history oF ischemic strOke or tRansient ischeMic Attack (PERFORM) Study, which aims to demonstrate the superiority of the efficacy of terutroban versus aspirin in secondary prevention of cerebrovascular and cardiovascular events. The PERFORM Study is a multicenter, randomized, double-blind, parallel-group study being carried out in 802 centers in 46 countries. The study population includes patients aged > or =55 years, having suffered an ischemic stroke (< or =3 months) or a transient ischemic attack (< or =8 days). Participants are randomly allocated to terutroban (30 mg/day) or aspirin (100 mg/day). The primary efficacy endpoint is a composite of ischemic stroke (fatal or nonfatal), myocardial infarction (fatal or nonfatal), or other vascular death (excluding hemorrhagic death of any origin). Safety is being evaluated by assessing hemorrhagic events. Follow-up is expected to last for 2-4 years. Assuming a relative risk reduction of 13%, the expected number of primary events is 2,340. To obtain statistical power of 90%, this requires inclusion of at least 18,000 patients in this event-driven trial. The first patient was randomized in February 2006. The PERFORM Study will explore the benefits and safety of terutroban in secondary cardiovascular prevention after a cerebral ischemic event. Copyright 2009 S. Karger AG, Basel.

  5. Whole body vibration for older persons: an open randomized, multicentre, parallel, clinical trial

    PubMed Central

    2011-01-01

    Background Institutionalized older persons have a poor functional capacity. Including physical exercise in their routine activities decreases their frailty and improves their quality of life. Whole-body vibration (WBV) training is a type of exercise that seems beneficial in frail older persons to improve their functional mobility, but the evidence is inconclusive. This trial will compare the results of exercise with WBV and exercise without WBV in improving body balance, muscle performance and fall prevention in institutionalized older persons. Methods/Design An open, multicentre and parallel randomized clinical trial with blinded assessment. 160 nursing home residents aged over 65 years and of both sexes will be identified to participate in the study. Participants will be centrally randomized and allocated to interventions (vibration or exercise group) by telephone. The vibration group will perform static/dynamic exercises (balance and resistance training) on a vibratory platform (frequency: 30-35 Hz; amplitude: 2-4 mm) over a six-week training period (3 sessions/week). The exercise group will perform the same exercise protocol but without the vibration stimulus. The primary outcome measure is static/dynamic body balance. Secondary outcomes are muscle strength and number of new falls. Follow-up measurements will be collected at 6 weeks and at 6 months after randomization. Efficacy will be analysed on an intention-to-treat (ITT) basis and 'per protocol'. The effects of the intervention will be evaluated using the t-test, Mann-Whitney test, or Chi-square test, depending on the type of outcome. The final analysis will be performed 6 weeks and 6 months after randomization. Discussion This study will help to clarify whether WBV training improves body balance, gait mobility and muscle strength in frail older persons living in nursing homes. As far as we know, this will be the first study to evaluate the efficacy of WBV for the prevention of falls.
Trial Registration ClinicalTrials.gov: NCT01375790 PMID:22192313

  6. Aerobic training and l-arginine supplementation promotes rat heart and hindleg muscles arteriogenesis after myocardial infarction.

    PubMed

    Ranjbar, Kamal; Rahmani-Nia, Farhad; Shahabpour, Elham

    2016-09-01

    Arteriogenesis is a main defense mechanism preventing heart and local tissue dysfunction in occlusive artery disease. TGF-β and angiostatin have a pivotal role in arteriogenesis. We tested the hypothesis that aerobic training and l-arginine supplementation promote cardiac and skeletal muscle arteriogenesis after myocardial infarction (MI) in parallel with upregulation of TGF-β and downregulation of angiostatin. For this purpose, 4 weeks after LAD occlusion, 50 male Wistar rats were randomly distributed into five groups: (1) sham surgery without MI (sham, n = 10), (2) control-MI (Con-MI, n = 10), (3) l-arginine-MI (La-MI, n = 10), (4) exercise training-MI (Ex-MI, n = 10), and (5) exercise and l-arginine-MI (Ex + La-MI). Exercise training groups ran on a treadmill for 10 weeks at moderate intensity. Rats in the l-arginine-treated groups drank water containing 4 % l-arginine. Arteriolar density at different diameters (11-25, 26-50, 51-75, and 76-150 μm) and TGF-β and angiostatin gene expression were measured in cardiac (area at risk) and skeletal (soleus and gastrocnemius) muscles. Smaller arterioles decreased in the heart after MI. Aerobic training and l-arginine increased the number of cardiac arterioles with 11-25 and 26-50 μm diameters in parallel with TGF-β overexpression. In gastrocnemius muscle, the number of arterioles/mm(2) increased only in the 11-25 μm range in response to training with and without l-arginine, in parallel with angiostatin downregulation. Soleus arteriolar density did not differ between experimental groups at any size. These results show that 10 weeks of aerobic exercise training and l-arginine supplementation promote arteriogenesis of the heart and gastrocnemius muscles in parallel with overexpression of TGF-β and downregulation of angiostatin in MI rats.

  7. Effect of Probiotic Curd on Salivary pH and Streptococcus mutans: A Double Blind Parallel Randomized Controlled Trial.

    PubMed

    Srivastava, Shivangi; Saha, Sabyasachi; Kumari, Minti; Mohd, Shafaat

    2016-02-01

    Dairy products like curd seem to be the most natural way to ingest probiotics, which can reduce Streptococcus mutans levels and increase salivary pH, thereby reducing dental caries risk. To estimate the role of probiotic curd on salivary pH and Streptococcus mutans count over a period of 7 days. This double blind parallel randomized clinical trial was conducted at the institution with 60 caries-free volunteers belonging to the age group of 20-25 years, who were randomly allocated into two groups. The Test Group consisted of 30 subjects who consumed 100 ml of probiotic curd daily for seven days, while an equally sized Control Group was given 100 ml of regular curd for seven days. Saliva samples were assessed at baseline and after ½ hour, 1 hour, and 7 days of the intervention period using a pH meter and Mitis Salivarius Bacitracin agar to estimate salivary pH and S. mutans count. Data were statistically analysed using paired and unpaired t-tests. The study revealed a reduction in salivary pH after ½ hour and 1 hour in both groups. However, after 7 days, normal curd showed a statistically significant (p < 0.05) reduction in salivary pH, while probiotic curd showed a statistically significant (p < 0.05) increase in salivary pH. Similarly, with regard to S. mutans colony counts, probiotic curd showed a statistically significant reduction (p < 0.05) as compared to normal curd. Short-term consumption of probiotic curd showed marked salivary pH elevation and reduction of salivary S. mutans counts and thus, given its cost-effectiveness, could be exploited as a long-term remedy for the prevention of enamel demineralization.

  8. bFGF-containing electrospun gelatin scaffolds with controlled nano-architectural features for directed angiogenesis

    PubMed Central

    Montero, Ramon B.; Vial, Ximena; Nguyen, Dat Tat; Farhand, Sepehr; Reardon, Mark; Pham, Si M.; Tsechpenakis, Gavriil; Andreopoulos, Fotios M.

    2011-01-01

    Current therapeutic angiogenesis strategies are focused on the development of biologically responsive scaffolds that can deliver multiple angiogenic cytokines and/or cells in ischemic regions. Herein, we report on a novel electrospinning approach to fabricate cytokine-containing nanofibrous scaffolds with tunable architecture to promote angiogenesis. Fiber diameter and uniformity were controlled by varying the concentration of the polymeric (i.e. gelatin) solution, the feed rate, needle to collector distance, and electric field potential between the collector plate and injection needle. Scaffold fiber orientation (random vs. aligned) was achieved by alternating the polarity of two parallel electrodes placed on the collector plate, thus dictating fiber deposition patterns. Basic fibroblast growth factor (bFGF) was physically immobilized within the gelatin scaffolds at variable concentrations and human umbilical vein endothelial cells (HUVEC) were seeded on top of the scaffolds. Cell proliferation and migration were assessed as a function of growth factor loading and scaffold architecture. HUVECs successfully adhered onto gelatin B scaffolds and cell proliferation was directly proportional to the loading concentration of the growth factor (0–100 ng/mL bFGF). Fiber orientation had a pronounced effect on cell morphology and orientation. Cells were spread along the fibers of the electrospun scaffolds with the aligned orientation and developed a spindle-like morphology parallel to the scaffold's fibers. In contrast, cells seeded onto the scaffolds with random fiber orientation did not demonstrate any directionality and appeared to have a rounder shape. Capillary formation (i.e. sprout length and number of sprouts per bead), assessed in a 3-D in vitro angiogenesis assay, was a function of bFGF loading concentration (0 ng, 50 ng and 100 ng per scaffold) for both types of electrospun scaffolds (i.e. with aligned or random fiber orientation). PMID:22200610

  9. Effects of Intravenous Patient-Controlled Sufentanil Analgesia and Music Therapy on Pain and Hemodynamics After Surgery for Lung Cancer: A Randomized Parallel Study.

    PubMed

    Wang, Yichun; Tang, Haoke; Guo, Qulian; Liu, Jingshi; Liu, Xiaohong; Luo, Junming; Yang, Wenqian

    2015-11-01

    Postoperative pain is caused by surgical injury and trauma; is stressful to patients; and includes a series of physiologic, psychological, and behavioral reactions. Effective postoperative analgesia helps improve postoperative pain, perioperative safety, and hospital discharge rates. This study aimed to observe the influence of postoperative intravenous sufentanil patient-controlled analgesia combined with music therapy versus sufentanil alone on hemodynamics and analgesia in patients with lung cancer. This was a randomized parallel study performed in 60 patients in American Society of Anesthesiologists class I or II undergoing lung cancer resection at the Affiliated Cancer Hospital of Xiangya School of Medicine, Central South University. Patients were randomly assigned to a music therapy (MT) group and a control (C) group. The MT group underwent preoperative and postoperative music intervention while the C group did not. Both groups received intravenous patient-controlled sufentanil analgesia. The primary outcome was the visual analogue scale (VAS) score at 24 hours after surgery. The secondary outcomes included hemodynamic changes (systolic blood pressure, diastolic blood pressure, heart rate), changes on the Self-Rating Anxiety Scale (SAS), total consumption of sufentanil, number of uses, sedation, and adverse effects. The postoperative sufentanil dose and analgesia frequency were recorded. Compared with the C group, the MT group had significantly lower VAS score, systolic and diastolic blood pressure, heart rate, and SAS score within 24 hours after surgery (p < 0.01). In addition, postoperative analgesia frequency and sufentanil dose were reduced in the MT group (p < 0.01). Combined music therapy and sufentanil improves intravenous patient-controlled analgesia effects compared with sufentanil alone after lung cancer surgery. Lower doses of sufentanil could be administered to more effectively improve patients' cardiovascular parameters.

  10. Evaluation of Parallel Analysis Methods for Determining the Number of Factors

    ERIC Educational Resources Information Center

    Crawford, Aaron V.; Green, Samuel B.; Levy, Roy; Lo, Wen-Juo; Scott, Lietta; Svetina, Dubravka; Thompson, Marilyn S.

    2010-01-01

    Population and sample simulation approaches were used to compare the performance of parallel analysis using principal component analysis (PA-PCA) and parallel analysis using principal axis factoring (PA-PAF) to identify the number of underlying factors. Additionally, the accuracies of the mean eigenvalue and the 95th percentile eigenvalue criteria…
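
    Parallel analysis, as evaluated in this study, retains factors whose observed eigenvalues exceed eigenvalues obtained from random data, using either the mean or the 95th percentile of the random eigenvalues as the criterion. A minimal sketch of the PA-PCA variant with the percentile criterion, assuming NumPy and synthetic data (all names here are illustrative, not from the paper):

```python
import numpy as np

def parallel_analysis(data, n_sims=200, percentile=95, seed=0):
    """Horn's parallel analysis (PA-PCA variant): keep components whose
    observed correlation-matrix eigenvalues exceed the chosen percentile
    of eigenvalues from random normal data of the same shape.
    Setting percentile=50 approximates the mean-eigenvalue criterion."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rand = np.empty((n_sims, p))
    for i in range(n_sims):
        r = rng.standard_normal((n, p))
        rand[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(r, rowvar=False)))[::-1]
    threshold = np.percentile(rand, percentile, axis=0)
    return int(np.sum(obs > threshold))

# Synthetic check: six observed variables driven by two independent factors.
rng = np.random.default_rng(1)
factors = rng.standard_normal((500, 2))
loadings = np.array([[1.0, 1.0, 1.0, 0.0, 0.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]])
data = factors @ loadings + 0.5 * rng.standard_normal((500, 6))
print(parallel_analysis(data))  # recovers the 2 underlying factors
```

The PA-PAF variant compared in the study works analogously but uses reduced correlation matrices (communalities on the diagonal) rather than PCA eigenvalues.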

  11. Parallel computation for biological sequence comparison: comparing a portable model to the native model for the Intel Hypercube.

    PubMed

    Nadkarni, P M; Miller, P L

    1991-01-01

    A parallel program for inter-database sequence comparison was developed on the Intel Hypercube using two models of parallel programming. One version was built using machine-specific Hypercube parallel programming commands. The other version was built using Linda, a machine-independent parallel programming language. The two versions of the program provide a case study comparing these two approaches to parallelization in an important biological application area. Benchmark tests with both programs gave comparable results with a small number of processors. As the number of processors was increased, the Linda version was somewhat less efficient. The Linda version was also run without change on Network Linda, a virtual parallel machine running on a network of desktop workstations.
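
    Linda coordinates workers through a shared tuple space (its `out`/`in` operations) instead of machine-specific message passing, which is what made the portable version of this program possible. A toy, serial stand-in for that model, with a queue-backed tuple space and `difflib` similarity in place of a real sequence-comparison scorer (all names hypothetical):

```python
from collections import deque
from difflib import SequenceMatcher

class TupleSpace:
    """Toy stand-in for a Linda tuple space: 'out' posts a tuple,
    'take' (Linda's 'in') removes and returns a matching one."""
    def __init__(self):
        self._tuples = deque()
    def out(self, tup):
        self._tuples.append(tup)
    def take(self, tag):
        for i, t in enumerate(self._tuples):
            if t[0] == tag:
                del self._tuples[i]
                return t
        return None

def worker(space, query):
    # Each worker repeatedly takes a ("task", name, seq) tuple and
    # posts a ("result", name, score) tuple back into the space.
    while (task := space.take("task")) is not None:
        _, name, seq = task
        score = SequenceMatcher(None, query, seq).ratio()
        space.out(("result", name, score))

space = TupleSpace()
db = {"s1": "ACGTACGT", "s2": "ACGTTTTT", "s3": "GGGGCCCC"}
for name, seq in db.items():
    space.out(("task", name, seq))

# In real Linda many workers on separate nodes would drain the task
# pool concurrently; here a single worker drains it serially.
worker(space, query="ACGTACGA")

results = {}
while (r := space.take("result")) is not None:
    results[r[1]] = r[2]
best = max(results, key=results.get)
print(best)  # s1 matches the query most closely
```

The uncoupled task/result tuples are what let the same code run unchanged on Network Linda across desktop workstations.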

  12. 3-D parallel program for numerical calculation of gas dynamics problems with heat conductivity on distributed memory computational systems (CS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sofronov, I.D.; Voronin, B.L.; Butnev, O.I.

    1997-12-31

    The aim of the work performed is to develop a 3D parallel program for numerical calculation of gas dynamics problems with heat conductivity on distributed memory computational systems (CS), satisfying the condition that the numerical results be independent of the number of processors involved. Two basically different approaches to the structure of massively parallel computations have been developed. The first approach uses a 3D data matrix decomposition that is reconstructed at each temporal cycle and is a development of parallelization algorithms for multiprocessor CS with shared memory. The second approach is based on a 3D data matrix decomposition that is not reconstructed during a temporal cycle. The program was developed on the 8-processor CS MP-3 made in VNIIEF and was adapted to the massively parallel CS Meiko-2 at LLNL by joint efforts of the VNIIEF and LLNL staffs. A large number of numerical experiments has been carried out with different numbers of processors, up to 256, and the efficiency of parallelization has been evaluated as a function of the number of processors and their parameters.
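
    The second approach above fixes a decomposition of the 3D data matrix across processors, with fictitious (ghost) cells duplicating neighbor data at subdomain boundaries. A minimal sketch of how one rank's slab plus ghost layers would be carved from the global grid, assuming NumPy and a simple one-axis slab decomposition (the paper's actual decomposition strategy is not specified at this level of detail):

```python
import numpy as np

def decompose_1d(n, nprocs, rank):
    """Slab decomposition of n cells along one axis: return the
    [start, stop) range owned by this rank, as evenly as possible."""
    base, extra = divmod(n, nprocs)
    start = rank * base + min(rank, extra)
    stop = start + base + (1 if rank < extra else 0)
    return start, stop

def local_block_with_ghosts(global_field, nprocs, rank):
    """Extract this rank's slab plus one layer of fictitious (ghost)
    cells on each side, clamped at the physical boundary. In a real
    code the ghost layers are refreshed by interprocessor exchange
    each temporal cycle instead of read from a global array."""
    n = global_field.shape[0]
    start, stop = decompose_1d(n, nprocs, rank)
    lo, hi = max(start - 1, 0), min(stop + 1, n)
    return global_field[lo:hi]

field = np.arange(10 * 4 * 4, dtype=float).reshape(10, 4, 4)
for rank in range(3):
    print(rank, local_block_with_ghosts(field, 3, rank).shape)
```

Because every rank owns a disjoint interior and only reads ghost copies, the summed result is independent of the processor count, which is the condition the abstract emphasizes.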

  13. Multicenter, double-blind, randomized, placebo-controlled, parallel-group study of the efficacy, safety, and tolerability of THC:CBD extract and THC extract in patients with intractable cancer-related pain.

    PubMed

    Johnson, Jeremy R; Burnell-Nugent, Mary; Lossignol, Dominique; Ganae-Motan, Elena Doina; Potts, Richard; Fallon, Marie T

    2010-02-01

    This study compared the efficacy of a tetrahydrocannabinol:cannabidiol (THC:CBD) extract, a nonopioid analgesic endocannabinoid system modulator, and a THC extract, with placebo, in relieving pain in patients with advanced cancer. In total, 177 patients with cancer pain, who experienced inadequate analgesia despite chronic opioid dosing, entered a two-week, multicenter, double-blind, randomized, placebo-controlled, parallel-group trial. Patients were randomized to THC:CBD extract (n = 60), THC extract (n = 58), or placebo (n = 59). The primary analysis of change from baseline in mean pain Numerical Rating Scale (NRS) score was statistically significantly in favor of THC:CBD compared with placebo (improvement of -1.37 vs. -0.69), whereas the THC group showed a nonsignificant change (-1.01 vs. -0.69). Twice as many patients taking THC:CBD showed a reduction of more than 30% from baseline pain NRS score when compared with placebo (23 [43%] vs. 12 [21%]). The associated odds ratio was statistically significant, whereas the number of THC group responders was similar to placebo (12 [23%] vs. 12 [21%]) and did not reach statistical significance. There was no change from baseline in median dose of opioid background medication or mean number of doses of breakthrough medication across treatment groups. No significant group differences were found in the NRS sleep quality or nausea scores or the pain control assessment. However, the results from the European Organisation for Research and Treatment of Cancer Quality of Life Cancer Questionnaire showed a worsening in nausea and vomiting with THC:CBD compared with placebo (P = 0.02), whereas THC had no difference (P = 1.0). Most drug-related adverse events were mild/moderate in severity. This study shows that THC:CBD extract is efficacious for relief of pain in patients with advanced cancer pain not fully relieved by strong opioids. Copyright 2010 U.S. Cancer Pain Relief Committee. Published by Elsevier Inc. All rights reserved.

  14. Paging memory from random access memory to backing storage in a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Inglett, Todd A; Ratterman, Joseph D; Smith, Brian E

    2013-05-21

    Paging memory from random access memory (`RAM`) to backing storage in a parallel computer that includes a plurality of compute nodes, including: executing a data processing application on a virtual machine operating system in a virtual machine on a first compute node; providing, by a second compute node, backing storage for the contents of RAM on the first compute node; and swapping, by the virtual machine operating system in the virtual machine on the first compute node, a page of memory from RAM on the first compute node to the backing storage on the second compute node.
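
    The claim describes evicting a page from RAM on one compute node into backing storage provided by a second node, and swapping it back on demand. A minimal in-process sketch of that mechanism; the LRU eviction policy and 4 KiB page size are illustrative assumptions, not taken from the patent:

```python
PAGE_SIZE = 4096  # bytes per page (illustrative)

class RemoteBackingStore:
    """Stand-in for the second compute node that holds evicted pages."""
    def __init__(self):
        self._pages = {}
    def store(self, page_id, data):
        self._pages[page_id] = data
    def load(self, page_id):
        return self._pages.pop(page_id)

class PagedRAM:
    """Local RAM with a fixed frame budget; on overflow the least
    recently used page is swapped out to the remote backing store."""
    def __init__(self, max_frames, backing):
        self.max_frames = max_frames
        self.backing = backing
        self.frames = {}  # page_id -> data; insertion order tracks recency
    def touch(self, page_id, data=None):
        if page_id in self.frames:
            self.frames[page_id] = self.frames.pop(page_id)  # mark recent
        else:
            if data is None:                         # page fault: swap in
                data = self.backing.load(page_id)
            if len(self.frames) >= self.max_frames:  # evict the LRU page
                victim, vdata = next(iter(self.frames.items()))
                del self.frames[victim]
                self.backing.store(victim, vdata)
            self.frames[page_id] = data
        return self.frames[page_id]

ram = PagedRAM(max_frames=2, backing=RemoteBackingStore())
ram.touch("p0", b"\x00" * PAGE_SIZE)
ram.touch("p1", b"\x01" * PAGE_SIZE)
ram.touch("p2", b"\x02" * PAGE_SIZE)   # evicts p0 to the remote node
page = ram.touch("p0")                 # page fault: p0 swapped back in
print(sorted(ram.frames))
```

In the patented design the swap traffic rides the parallel computer's interconnect, which is typically much faster than local disk; that is the point of using a second node's RAM as backing storage.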

  15. Accelerating Monte Carlo simulations with an NVIDIA ® graphics processor

    NASA Astrophysics Data System (ADS)

    Martinsen, Paul; Blaschke, Johannes; Künnemeyer, Rainer; Jordan, Robert

    2009-10-01

    Modern graphics cards, commonly used in desktop computers, have evolved beyond a simple interface between processor and display to incorporate sophisticated calculation engines that can be applied to general purpose computing. The Monte Carlo algorithm for modelling photon transport in turbid media has been implemented on an NVIDIA ® 8800 GT graphics card using the CUDA toolkit. The Monte Carlo method relies on following the trajectory of millions of photons through the sample, often taking hours or days to complete. The graphics-processor implementation, processing roughly 110 million scattering events per second, was found to run more than 70 times faster than a similar, single-threaded implementation on a 2.67 GHz desktop computer. Program summary Program title: Phoogle-C/Phoogle-G Catalogue identifier: AEEB_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEB_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 51 264 No. of bytes in distributed program, including test data, etc.: 2 238 805 Distribution format: tar.gz Programming language: C++ Computer: Designed for Intel PCs. Phoogle-G requires an NVIDIA graphics card with support for CUDA 1.1 Operating system: Windows XP Has the code been vectorised or parallelized?: Phoogle-G is written for SIMD architectures RAM: 1 GB Classification: 21.1 External routines: Charles Karney's random number library, Microsoft Foundation Class library, NVIDIA CUDA library [1]. Nature of problem: The Monte Carlo technique is an effective algorithm for exploring the propagation of light in turbid media. However, accurate results require tracing the path of many photons within the media. The independence of photons naturally lends the Monte Carlo technique to implementation on parallel architectures. Generally, parallel computing can be expensive, but recent advances in consumer-grade graphics cards have opened the possibility of high-performance desktop parallel computing. Solution method: In this pair of programmes we have implemented the Monte Carlo algorithm described by Prahl et al. [2] for photon transport in infinite scattering media to compare the performance of two readily accessible architectures: a standard desktop PC and a consumer-grade graphics card from NVIDIA. Restrictions: The graphics card implementation uses single precision floating point numbers for all calculations. Only photon transport from an isotropic point source is supported. The graphics-card version has no user interface; the simulation parameters must be set in the source code. The desktop version has a simple user interface; however, some properties can only be accessed through an ActiveX client (such as Matlab). Additional comments: The random number library used has an LGPL (http://www.gnu.org/copyleft/lesser.html) licence. Running time: Runtime can range from minutes to months depending on the number of photons simulated and the optical properties of the medium. References: [1] http://www.nvidia.com/object/cuda_home.html. [2] S. Prahl, M. Keijzer, S.L. Jacques, A. Welch, SPIE Institute Series 5 (1989) 102.
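
    The property the summary leans on is photon independence: each trajectory can be simulated with no communication, so batches map directly onto GPU threads. A small CPU-side sketch of survival-weighted photon transport in an infinite scattering medium; the coefficients and weight cutoff are illustrative, and Prahl et al.'s actual algorithm also tracks direction via a phase function, which is omitted here:

```python
import numpy as np

def simulate_photons(n_photons, mu_s=10.0, mu_a=1.0, cutoff=1e-3, seed=0):
    """Random-walk Monte Carlo in an infinite, isotropically scattering
    medium: each photon takes exponentially distributed steps between
    interactions and keeps the surviving fraction mu_s/(mu_s+mu_a) of
    its weight at each one. Returns the mean total path length travelled
    before the weight drops below the cutoff. Photons are independent,
    so batches can be farmed out to GPU threads or MPI ranks freely."""
    rng = np.random.default_rng(seed)
    mu_t = mu_s + mu_a
    albedo = mu_s / mu_t
    paths = np.zeros(n_photons)
    for i in range(n_photons):
        weight, path = 1.0, 0.0
        while weight > cutoff:
            path += rng.exponential(1.0 / mu_t)  # step to next interaction
            weight *= albedo                      # deposit absorbed fraction
        paths[i] = path
    return paths.mean()

# With mu_s=10, mu_a=1 the weight decays deterministically, so every
# photon takes exactly 73 steps of mean length 1/11: the expected mean
# path is 73/11, about 6.64.
print(round(simulate_photons(2000), 3))
```

On a GPU the outer loop over photons becomes the thread grid; the only shared state is the accumulated tally, which is why the speedups reported above are attainable.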

  16. Parallel-line number dependence of magneto-impedance effect in multilayer permalloy [Ni80Fe20/Cu]N films

    NASA Astrophysics Data System (ADS)

    Yohanasari, R. H.; Utari; Purnama, B.

    2017-11-01

    In this paper, we studied the magneto-impedance effect in multilayered [Ni80Fe20/Cu]N films with varying numbers of parallel lines on a Cu PCB substrate. The films were prepared by electrodeposition at room temperature with Pt as an electrode. The results show that the magneto-impedance ratio increases with the number of parallel lines on the Cu PCB; the maximum magneto-impedance ratio, 4.5%, was obtained on the Cu PCB substrate with four parallel lines. Likewise, under frequency variation, the magneto-impedance ratio increases with increasing frequency.

  17. EFFECTIVENESS OF DIALECTICAL BEHAVIOR THERAPY VERSUS COLLABORATIVE ASSESSMENT AND MANAGEMENT OF SUICIDALITY TREATMENT FOR REDUCTION OF SELF-HARM IN ADULTS WITH BORDERLINE PERSONALITY TRAITS AND DISORDER-A RANDOMIZED OBSERVER-BLINDED CLINICAL TRIAL.

    PubMed

    Andreasson, Kate; Krogh, Jesper; Wenneberg, Christina; Jessen, Helle K L; Krakauer, Kristine; Gluud, Christian; Thomsen, Rasmus R; Randers, Lasse; Nordentoft, Merete

    2016-06-01

    Many psychological treatments have shown an effect on reducing self-harm in adults with borderline personality disorder. There is a need for a brief psychotherapeutic treatment alternative for suicide prevention in specialized outpatient clinics. The DiaS trial was designed as a pragmatic single-center, two-armed, parallel-group, observer-blinded, randomized clinical superiority trial. The participants met at least two criteria of the borderline personality disorder diagnosis and had a recent suicide attempt (within a month). The participants were offered 16 weeks of dialectical behavior therapy (DBT) versus up to 16 weeks of collaborative assessment and management of suicidality (CAMS) treatment. The primary composite outcome was the number of participants with new self-harm (nonsuicidal self-injury [NSSI] or suicide attempt) at week 28 from baseline. Other exploratory outcomes were severity of borderline symptoms, depressive symptoms, hopelessness, suicide ideation, and self-esteem. At 28 weeks, the number of participants with new self-harm in the DBT group was 21 of 57 (36.8%) versus 12 of 51 (23.5%) in the CAMS treatment group (OR: 1.90; 95% CI: 0.80-4.40; P = .14). When assessing the effect of DBT versus CAMS treatment on the individual components of the primary outcome, we observed no significant differences in the number of NSSI episodes (OR: 1.60; 95% CI: 0.70-3.90; P = .31) or attempted suicides (OR: 2.24; 95% CI: 0.80-7.50; P = .12). In adults with borderline personality traits and disorder and a recent suicide attempt, DBT does not seem superior to CAMS for reducing self-harm or suicide attempts. However, further randomized clinical trials may be needed. © 2016 Wiley Periodicals, Inc.

  18. Parallel/distributed direct method for solving linear systems

    NASA Technical Reports Server (NTRS)

    Lin, Avi

    1990-01-01

    A new family of parallel schemes for directly solving linear systems is presented and analyzed. It is shown that these schemes exhibit near-optimal performance and enjoy several important features: (1) for large enough linear systems, the design of the appropriate parallel algorithm is insensitive to the number of processors, as its performance grows monotonically with them; (2) it is especially good for large matrices, with dimensions large relative to the number of processors in the system; (3) it can be used in both distributed parallel computing environments and tightly coupled parallel computing systems; and (4) this set of algorithms can be mapped onto any parallel architecture without any major programming difficulties or algorithmic changes.

  19. Parallel computation for biological sequence comparison: comparing a portable model to the native model for the Intel Hypercube.

    PubMed Central

    Nadkarni, P. M.; Miller, P. L.

    1991-01-01

    A parallel program for inter-database sequence comparison was developed on the Intel Hypercube using two models of parallel programming. One version was built using machine-specific Hypercube parallel programming commands. The other version was built using Linda, a machine-independent parallel programming language. The two versions of the program provide a case study comparing these two approaches to parallelization in an important biological application area. Benchmark tests with both programs gave comparable results with a small number of processors. As the number of processors was increased, the Linda version was somewhat less efficient. The Linda version was also run without change on Network Linda, a virtual parallel machine running on a network of desktop workstations. PMID:1807632

  20. Code Parallelization with CAPO: A User Manual

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Frumkin, Michael; Yan, Jerry; Biegel, Bryan (Technical Monitor)

    2001-01-01

    A software tool has been developed to assist the parallelization of scientific codes. This tool, CAPO, extends an existing parallelization toolkit, CAPTools, developed at the University of Greenwich, to generate OpenMP parallel codes for shared memory architectures. It is an interactive toolkit that transforms a serial Fortran application code into an equivalent parallel version of the software - in a small fraction of the time normally required for a manual parallelization. We first discuss the way in which loop types are categorized and how efficient OpenMP directives can be defined and inserted into the existing code using in-depth interprocedural analysis. The use of the toolkit on a number of application codes, ranging from benchmarks to real-world applications, is presented. This demonstrates the great potential of using the toolkit to quickly parallelize serial programs, as well as the good performance achievable on a large number of processors. The second part of the document gives references to the parameters and the graphical user interface implemented in the toolkit. Finally, a set of tutorials is included for hands-on experience with this toolkit.

  1. Selective, Embedded, Just-In-Time Specialization (SEJITS): Portable Parallel Performance from Sequential, Productive, Embedded Domain-Specific Languages

    DTIC Science & Technology

    2012-12-01

    Contract Number: FA8750-10-1-0191. Program Element Number: 61101E. Author: Armando Fox. Only fragments of the report form survive, including glossary entries - SIMD: single instruction, multiple datastream parallel computing; Scala: a byte-compiled programming language featuring dynamic type... - and an abstract fragment: "...application performance, but usually must rely on efficiency programmers who are experts in explicit parallel programming to achieve it. Since such efficiency..."

  2. Probabilistic structural mechanics research for parallel processing computers

    NASA Technical Reports Server (NTRS)

    Sues, Robert H.; Chen, Heh-Chyun; Twisdale, Lawrence A.; Martin, William R.

    1991-01-01

    Aerospace structures and spacecraft are a complex assemblage of structural components that are subjected to a variety of complex, cyclic, and transient loading conditions. Significant modeling uncertainties are present in these structures, in addition to the inherent randomness of material properties and loads. To properly account for these uncertainties in evaluating and assessing the reliability of these components and structures, probabilistic structural mechanics (PSM) procedures must be used. Much research has focused on basic theory development and the development of approximate analytic solution methods in random vibrations and structural reliability. Practical application of PSM methods has been hampered by their computationally intensive nature. Solution of PSM problems requires repeated analyses of structures that are often large and exhibit nonlinear and/or dynamic response behavior. These methods are all inherently parallel and ideally suited to implementation on parallel processing computers. New hardware architectures and innovative control software and solution methodologies are needed to make solution of large-scale PSM problems practical.

  3. Quantitative metrics for evaluating parallel acquisition techniques in diffusion tensor imaging at 3 Tesla.

    PubMed

    Ardekani, Siamak; Selva, Luis; Sayre, James; Sinha, Usha

    2006-11-01

    Single-shot echo-planar based diffusion tensor imaging is prone to geometric and intensity distortions. Parallel imaging is a means of reducing these distortions while preserving spatial resolution. A quantitative comparison at 3 T of parallel imaging for diffusion tensor images (DTI) using k-space (generalized auto-calibrating partially parallel acquisitions; GRAPPA) and image domain (sensitivity encoding; SENSE) reconstructions at different acceleration factors, R, is reported here. Images were evaluated using 8 human subjects with repeated scans for 2 subjects to estimate reproducibility. Mutual information (MI) was used to assess the global changes in geometric distortions. The effects of parallel imaging techniques on random noise and reconstruction artifacts were evaluated by placing 26 regions of interest and computing the standard deviation of apparent diffusion coefficient and fractional anisotropy along with the error of fitting the data to the diffusion model (residual error). The larger positive values in mutual information index with increasing R values confirmed the anticipated decrease in distortions. Further, the MI index of GRAPPA sequences for a given R factor was larger than the corresponding mSENSE images. The residual error was lowest in the images acquired without parallel imaging and among the parallel reconstruction methods, the R = 2 acquisitions had the least error. The standard deviation, accuracy, and reproducibility of the apparent diffusion coefficient and fractional anisotropy in homogenous tissue regions showed that GRAPPA acquired with R = 2 had the least amount of systematic and random noise and of these, significant differences with mSENSE, R = 2 were found only for the fractional anisotropy index. Evaluation of the current implementation of parallel reconstruction algorithms identified GRAPPA acquired with R = 2 as optimal for diffusion tensor imaging.

  4. Parallel-SymD: A Parallel Approach to Detect Internal Symmetry in Protein Domains.

    PubMed

    Jha, Ashwani; Flurchick, K M; Bikdash, Marwan; Kc, Dukka B

    2016-01-01

    Internally symmetric proteins are proteins that have a symmetrical structure in their monomeric single-chain form. Around 10-15% of the protein domains can be regarded as having some sort of internal symmetry. In this regard, we previously published SymD (symmetry detection), an algorithm that determines whether a given protein structure has internal symmetry by attempting to align the protein to its own copy after the copy is circularly permuted by all possible numbers of residues. SymD has proven to be a useful algorithm to detect symmetry. In this paper, we present a new parallelized algorithm called Parallel-SymD for detecting symmetry of proteins on clusters of computers. The achieved speedup of the new Parallel-SymD algorithm scales well with the number of computing processors. Scaling is better for proteins with a larger number of residues. For a protein of 509 residues, a speedup of 63 was achieved on a parallel system with 100 processors.
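
    The reported speedup of 63 on 100 processors translates directly into standard scaling measures. A short sketch computing parallel efficiency, and the serial fraction implied by Amdahl's law (an interpretive model applied here for illustration, not an analysis from the paper):

```python
def efficiency(speedup, nprocs):
    """Parallel efficiency: fraction of ideal linear speedup achieved."""
    return speedup / nprocs

def amdahl_serial_fraction(speedup, nprocs):
    """Serial fraction f implied by Amdahl's law,
    S = 1 / (f + (1 - f) / p), solved for f."""
    return (1.0 / speedup - 1.0 / nprocs) / (1.0 - 1.0 / nprocs)

# Parallel-SymD: speedup of 63 on 100 processors for a 509-residue protein.
print(efficiency(63, 100))                        # 0.63
print(round(amdahl_serial_fraction(63, 100), 4))  # 0.0059
```

Under this model roughly 0.6% of the work behaves as serial, consistent with the observation that scaling improves for proteins with more residues (more parallel work per fixed overhead).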

  5. Parallel-SymD: A Parallel Approach to Detect Internal Symmetry in Protein Domains

    PubMed Central

    Jha, Ashwani; Flurchick, K. M.; Bikdash, Marwan

    2016-01-01

    Internally symmetric proteins are proteins that have a symmetrical structure in their monomeric single-chain form. Around 10–15% of the protein domains can be regarded as having some sort of internal symmetry. In this regard, we previously published SymD (symmetry detection), an algorithm that determines whether a given protein structure has internal symmetry by attempting to align the protein to its own copy after the copy is circularly permuted by all possible numbers of residues. SymD has proven to be a useful algorithm to detect symmetry. In this paper, we present a new parallelized algorithm called Parallel-SymD for detecting symmetry of proteins on clusters of computers. The achieved speedup of the new Parallel-SymD algorithm scales well with the number of computing processors. Scaling is better for proteins with a larger number of residues. For a protein of 509 residues, a speedup of 63 was achieved on a parallel system with 100 processors. PMID:27747230

  6. Efficiency Analysis of the Parallel Implementation of the SIMPLE Algorithm on Multiprocessor Computers

    NASA Astrophysics Data System (ADS)

    Lashkin, S. V.; Kozelkov, A. S.; Yalozo, A. V.; Gerasimov, V. Yu.; Zelensky, D. K.

    2017-12-01

    This paper describes the details of the parallel implementation of the SIMPLE algorithm for numerical solution of the Navier-Stokes system of equations on arbitrary unstructured grids. The iteration schemes for the serial and parallel versions of the SIMPLE algorithm are implemented. In the description of the parallel implementation, special attention is paid to computational data exchange among processors under the condition of the grid model decomposition using fictitious cells. We discuss the specific features for the storage of distributed matrices and implementation of vector-matrix operations in parallel mode. It is shown that the proposed way of matrix storage reduces the number of interprocessor exchanges. A series of numerical experiments illustrates the effect of the multigrid SLAE solver tuning on the general efficiency of the algorithm; the tuning involves the types of the cycles used (V, W, and F), the number of iterations of a smoothing operator, and the number of cells for coarsening. Two ways (direct and indirect) of efficiency evaluation for parallelization of the numerical algorithm are demonstrated. The paper presents the results of solving some internal and external flow problems with the evaluation of parallelization efficiency by two algorithms. It is shown that the proposed parallel implementation enables efficient computations for the problems on a thousand processors. Based on the results obtained, some general recommendations are made for the optimal tuning of the multigrid solver, as well as for selecting the optimal number of cells per processor.

  7. Matching pursuit parallel decomposition of seismic data

    NASA Astrophysics Data System (ADS)

    Li, Chuanhui; Zhang, Fanchang

    2017-07-01

    In order to improve the computation speed of matching pursuit decomposition of seismic data, a matching pursuit parallel algorithm is designed in this paper. In every iteration, we pick a fixed number of envelope peaks from the current signal, according to the number of compute nodes, and assign them evenly to the compute nodes, which search for the optimal Morlet wavelets in parallel. With the help of parallel computer systems and the Message Passing Interface, the parallel algorithm exploits the advantages of parallel computing to significantly improve the computation speed of matching pursuit decomposition, and it also has good expandability. Moreover, having each compute node search for only one optimal Morlet wavelet per iteration is the most efficient implementation.
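
    A sketch of one iteration of the scheme described above: pick the largest envelope peaks, split them into per-node chunks (processed serially here, where MPI ranks would run concurrently), and let each chunk search a small Morlet dictionary for the best-matching atom. The atom form and parameter grids are illustrative assumptions, not the paper's:

```python
import numpy as np

def morlet(t, center, freq, scale):
    """Unit-norm, real-valued Morlet-like atom centred at `center`."""
    u = (t - center) / scale
    atom = np.exp(-0.5 * u**2) * np.cos(2 * np.pi * freq * (t - center))
    return atom / np.linalg.norm(atom)

def best_atom_at_peaks(signal, t, peaks, freqs, scales):
    """Work assigned to one compute node: search its share of envelope
    peaks for the atom with the largest projection onto the signal."""
    best = (0.0, None)
    for c in peaks:
        for f in freqs:
            for s in scales:
                coef = float(signal @ morlet(t, t[c], f, s))
                if abs(coef) > abs(best[0]):
                    best = (coef, (t[c], f, s))
    return best

def mp_iteration(signal, t, freqs, scales, n_nodes=4, k=8):
    # Pick the k largest local maxima of |signal| as candidate centres.
    mag = np.abs(signal)
    interior = np.arange(1, len(signal) - 1)
    is_peak = (mag[interior] >= mag[interior - 1]) & (mag[interior] >= mag[interior + 1])
    peaks = interior[is_peak]
    peaks = peaks[np.argsort(mag[peaks])[::-1][:k]]
    # Distribute the peaks evenly; each chunk would go to one MPI rank.
    chunks = np.array_split(peaks, n_nodes)
    results = [best_atom_at_peaks(signal, t, c, freqs, scales) for c in chunks]
    coef, params = max(results, key=lambda r: abs(r[0]))
    residual = signal - coef * morlet(t, *params)  # subtract the chosen atom
    return params, residual

t = np.linspace(0, 1, 512)
signal = np.exp(-0.5 * ((t - 0.5) / 0.05) ** 2) * np.cos(2 * np.pi * 30 * (t - 0.5))
params, residual = mp_iteration(signal, t, freqs=[20, 30, 40], scales=[0.02, 0.05, 0.1])
print(params[1])  # recovers the 30 Hz component
```

Subsequent iterations repeat the same peak-pick/search/subtract cycle on the residual, which is where distributing the search across nodes pays off.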

  8. Study on the effects of microencapsulated Lactobacillus delbrueckii on the mouse intestinal flora.

    PubMed

    Sun, Qingshen; Shi, Yue; Wang, Fuying; Han, Dequan; Lei, Hong; Zhao, Yao; Sun, Quan

    2015-01-01

    To evaluate the protective effects of microencapsulation on Lactobacillus delbrueckii, a randomized, parallel experimental design was used. A lincomycin hydrochloride-induced intestinal malfunction mouse model was successfully established; the L. delbrueckii microcapsules were then given to the mice. The clinical behaviour, the number of intestinal flora, the mucous IgA content in the small intestine, and the IgG and IL-2 levels in peripheral blood were monitored. Histological sections were also prepared. The L. delbrueckii microcapsules showed stronger probiotic effects, as indicated by a higher bifidobacterium count in cecal contents. The sIgA content in the microcapsule-treated group was significantly higher than that in the non-encapsulated L. delbrueckii-treated group (p < 0.05). Intestinal pathological damage in the L. delbrueckii microcapsule-treated group showed obvious restoration. The L. delbrueckii microcapsules could relieve intestinal tissue pathological damage and play an important role in curing antibiotic-induced intestinal flora dysfunction.

  9. The factorization of large composite numbers on the MPP

    NASA Technical Reports Server (NTRS)

    Mckurdy, Kathy J.; Wunderlich, Marvin C.

    1987-01-01

    The continued fraction method for factoring large integers (CFRAC) was an ideal algorithm to be implemented on a massively parallel computer such as the Massively Parallel Processor (MPP). After much effort, the first 60-digit number was factored on the MPP using about 6 1/2 hours of array time. Although this result added about 10 digits to the size of number that could be factored using CFRAC on a serial machine, it was already badly beaten by the implementation of Davis and Holdridge on the CRAY-1 using the quadratic sieve, an algorithm which is clearly superior to CFRAC for large numbers. An algorithm is illustrated which is ideally suited to the single-instruction multiple-data (SIMD) massively parallel architecture, and some of the modifications needed to make the parallel implementation effective and efficient are described.
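
    For context, the heart of CFRAC is the continued-fraction expansion of sqrt(N), whose convergent numerators A_k satisfy A_{k-1}^2 ≡ (-1)^k Q_k (mod N) with small Q_k. A minimal serial sketch of that expansion (the MPP parallelization itself is beyond this snippet):

```python
# Sketch of the congruences CFRAC mines: the continued-fraction expansion of
# sqrt(N) yields numerators A_{k-1} with A_{k-1}^2 ≡ (-1)^k Q_k (mod N) and
# small Q_k (< 2*sqrt(N)); pairs whose residues multiply to a perfect square
# give the x^2 ≡ y^2 (mod N) relation that splits N.
import math

def cfrac_residues(N, steps):
    """First `steps` congruences from the continued-fraction expansion of sqrt(N)."""
    a0 = math.isqrt(N)
    m, d, a = 0, 1, a0
    A_prev, A = 1, a0            # convergent numerators, kept mod N
    out = []
    for k in range(1, steps + 1):
        m = d * a - m
        d = (N - m * m) // d     # this is Q_k, guaranteed < 2*sqrt(N)
        a = (a0 + m) // d
        out.append((A, (-1) ** k * d))
        A_prev, A = A, (a * A + A_prev) % N
    return out

N = 13290059                     # Morrison-Brillhart's textbook CFRAC example
pairs = cfrac_residues(N, 6)
assert all((x * x - r) % N == 0 for x, r in pairs)   # each pair is a valid congruence
```

    Trial-dividing the many small residues Q_k over a factor base is the embarrassingly parallel part that made CFRAC attractive for a SIMD machine like the MPP.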

  10. Computer-assisted enzyme immunoassays and simplified immunofluorescence assays: applications for the diagnostic laboratory and the veterinarian's office.

    PubMed

    Jacobson, R H; Downing, D R; Lynch, T J

    1982-11-15

    A computer-assisted enzyme-linked immunosorbent assay (ELISA) system, based on the kinetics of the reaction between substrate and enzyme molecules, was developed for testing large numbers of sera in laboratory applications. Systematic and random errors associated with conventional ELISA technique were identified, leading to results formulated on a statistically validated, objective, and standardized basis. In a parallel development, an inexpensive system for field and veterinary office applications contained many of the qualities of the computer-assisted ELISA. This system uses a fluorogenic indicator (rather than the enzyme-substrate interaction) in a rapid test (15 to 20 minutes' duration), which promises broad application in serodiagnosis.

  11. Network harness: bundles of routes in public transport networks

    NASA Astrophysics Data System (ADS)

    Berche, B.; von Ferber, C.; Holovatch, T.

    2009-12-01

    Public transport routes sharing the same grid of streets and tracks are often found to proceed in parallel along shorter or longer sequences of stations. Similar phenomena are observed in other networks built with space-consuming links such as cables, vessels, pipes, neurons, etc. In the case of public transport networks (PTNs) this behavior may be easily worked out on the basis of sequences of stations serviced by each route. To quantify this behavior we use the recently introduced notion of network harness. It is described by the harness distribution P(r, s): the number of sequences of s consecutive stations that are serviced by r parallel routes. For certain PTNs that we have analyzed, we observe that the harness distribution may be described by power laws. These power laws indicate a certain level of organization and planning which may be driven partly by the need to minimize the costs of infrastructure and partly by the fact that points of interest tend to be clustered in certain locations of a city. This effect may be seen as a result of the strong interdependence of the evolutions of both the city and its PTN. To further investigate the significance of the empirical results we have studied one- and two-dimensional models of randomly placed routes modeled by different types of walks. While in one dimension an analytic treatment was successful, the two-dimensional case was studied by simulations, showing that the empirical results for real PTNs deviate significantly from those expected for randomly placed routes.
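
    A minimal sketch of how the harness distribution P(r, s) might be tallied from route station sequences. The counting conventions below (direction-insensitive matching of consecutive windows) are assumptions for illustration, not the authors' exact definition:

```python
# Hypothetical tally of the harness distribution P(r, s): the number of
# s-station sequences serviced by exactly r routes (direction-insensitive).
from collections import Counter

def windows(route, s):
    """All sequences of s consecutive stations along a route."""
    return [tuple(route[i:i + s]) for i in range(len(route) - s + 1)]

def harness(routes, s):
    """P(r, s) as a Counter mapping r -> number of such sequences."""
    count = Counter()
    canonical = {min(w, tuple(reversed(w)))          # merge the two directions
                 for rt in routes for w in windows(rt, s)}
    for seq in canonical:
        r = sum(seq in windows(rt, s) or tuple(reversed(seq)) in windows(rt, s)
                for rt in routes)
        count[r] += 1
    return count

# Three toy routes sharing the corridor 2-3-4 (the third runs it in reverse).
routes = [[1, 2, 3, 4, 5], [9, 2, 3, 4, 8], [7, 4, 3, 2, 6]]
P = harness(routes, 3)
assert P[3] == 1     # exactly one 3-station corridor is shared by all 3 routes
```

    Repeating the tally over a range of s values gives the empirical P(r, s) whose tail behaviour the paper fits with power laws.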

  12. Spatial and temporal accuracy of asynchrony-tolerant finite difference schemes for partial differential equations at extreme scales

    NASA Astrophysics Data System (ADS)

    Kumari, Komal; Donzis, Diego

    2017-11-01

    Highly resolved computational simulations on massively parallel machines are critical in understanding the physics of a vast number of complex phenomena in nature governed by partial differential equations. Simulations at extreme levels of parallelism present many challenges, with communication between processing elements (PEs) being a major bottleneck. In order to fully exploit the computational power of exascale machines one needs to devise numerical schemes that relax global synchronizations across PEs. Such asynchronous computation, however, has a degrading effect on the accuracy of standard numerical schemes. We have developed asynchrony-tolerant (AT) schemes that maintain their order of accuracy despite relaxed communications. We show, analytically and numerically, that these schemes retain their numerical properties with multi-step higher-order temporal Runge-Kutta schemes. We also show that for a range of optimized parameters, the computation time and error for AT schemes are less than for their synchronous counterparts. The stability of the AT schemes, which depends on the history and the random nature of the delays, is also discussed. Support from NSF is gratefully acknowledged.

  13. Current control of time-averaged magnetization in superparamagnetic tunnel junctions

    NASA Astrophysics Data System (ADS)

    Bapna, Mukund; Majetich, Sara A.

    2017-12-01

    This work investigates spin transfer torque control of the time-averaged magnetization in a small 20 nm × 60 nm nanomagnet with a low thermal stability factor, Δ ˜ 11. Here, the nanomagnet is part of a magnetic tunnel junction and fluctuates between parallel and anti-parallel magnetization states with respect to the magnetization of the reference layer, generating a telegraph signal in the current versus time measurements. The response of the nanomagnet to an external field is first analyzed to characterize the magnetic properties. We then show that the time-averaged magnetization in the telegraph signal can be fully controlled between +1 and -1 by voltage over a small range of 0.25 V. NIST Statistical Test Suite analysis is performed to test the true randomness of the telegraph signal the device generates when operated near critical current values for spin transfer torque. Utilizing the probabilistic nature of the telegraph signal generated at two different voltages, a prototype demonstration is shown for multiplication of two numbers using an artificial AND logic gate.
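
    A toy illustration of turning a two-level telegraph signal into bits and applying a frequency check in the spirit of the NIST suite's monobit test. The signal levels, threshold, and cutoff are invented for the sketch; the actual analysis runs the full Statistical Test Suite:

```python
# Toy digitization of a telegraph signal plus a frequency check in the spirit
# of the NIST suite's monobit test. Signal levels, threshold, and cutoff are
# invented for the sketch; the real analysis runs the full Statistical Test Suite.
import random

def telegraph_to_bits(trace, threshold):
    """Map a two-level telegraph current trace to bits, one sample per bit."""
    return [1 if x > threshold else 0 for x in trace]

def monobit_ok(bits, z_max=3.0):
    """Frequency (monobit) check: the standardized sum of +/-1-mapped bits
    should stay small for an unbiased source."""
    s = sum(2 * b - 1 for b in bits)
    return abs(s) / len(bits) ** 0.5 <= z_max

rng = random.Random(42)
# Synthetic telegraph trace: two current levels (P/AP states) plus read noise.
trace = [(1.0 if rng.random() < 0.5 else 0.2) + rng.gauss(0.0, 0.05)
         for _ in range(4096)]
bits = telegraph_to_bits(trace, threshold=0.6)
ok = monobit_ok(bits)            # an unbiased 50/50 source should pass
```

    Biasing the dwell probabilities away from 50/50 (as the voltage control in the paper does) would shift the bit frequency and eventually fail this check, which is exactly what makes the two operating points usable as weighted random inputs to the AND-gate demonstration.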

  14. Research on energy-saving optimal control of trains in a following operation under a fixed four-aspect autoblock system based on multi-dimension parallel GA

    NASA Astrophysics Data System (ADS)

    Lu, Qiheng; Feng, Xiaoyun

    2013-03-01

    After analyzing the working principle of the four-aspect fixed autoblock system, an energy-saving control model was created based on the dynamics equations of the trains in order to study the energy-saving optimal control strategy for trains in a following operation. Besides safety and punctuality, the model's main objectives were energy consumption and time error. Based on this model, the static and dynamic speed restraints under a four-aspect fixed autoblock system were put forward. A multi-dimension parallel genetic algorithm (GA) with an external penalty function was adopted to solve the problem. By using real-number coding and dividing ramps into three parts, the convergence of the GA was sped up and the length of the chromosomes was shortened. A zero-mean Gaussian random disturbance vector was superposed in the mutation operator. The simulation results showed that the method could reduce energy consumption effectively while preserving safety and punctuality.
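
    The zero-mean Gaussian mutation can be sketched as a generic real-coded GA operator (the parameter values and the speed-profile encoding below are illustrative assumptions, not the paper's):

```python
# Generic real-coded GA mutation with a zero-mean Gaussian disturbance, in the
# spirit of the operator described above; parameter values are illustrative.
import random

def gaussian_mutation(chromosome, sigma, rate, rng):
    """Add N(0, sigma) noise to each gene independently with probability `rate`."""
    return [g + rng.gauss(0.0, sigma) if rng.random() < rate else g
            for g in chromosome]

rng = random.Random(7)
speeds = [20.0, 22.5, 25.0, 23.0]     # toy real-coded speed profile (one gene per ramp)
mutant = gaussian_mutation(speeds, sigma=0.5, rate=0.3, rng=rng)

# Zero mean: averaged over many mutations the disturbance introduces no drift.
drift = sum(gaussian_mutation([0.0], 0.5, 1.0, rng)[0] for _ in range(2000)) / 2000
```

    Because the disturbance has zero mean, mutation explores the neighbourhood of a speed profile without systematically biasing it toward faster or slower running.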

  15. On efficiency of fire simulation realization: parallelization with greater number of computational meshes

    NASA Astrophysics Data System (ADS)

    Valasek, Lukas; Glasa, Jan

    2017-12-01

    Current fire simulation systems are capable of utilizing available high-performance computing (HPC) platforms to model fires efficiently in parallel. In this paper, the efficiency of a corridor fire simulation on an HPC computer cluster is discussed. The parallel MPI version of the Fire Dynamics Simulator is used to test the efficiency of selected strategies for allocating the cluster's computational resources using a greater number of computational cores. Simulation results indicate that if the number of cores used is not a multiple of the total number of cores per cluster node, there are allocation strategies which provide more efficient calculations.

  16. Parallel Flux Tensor Analysis for Efficient Moving Object Detection

    DTIC Science & Technology

    2011-07-01

    computing as well as parallelization to enable real time performance in analyzing complex video [3, 4]. There are a number of challenging computer vision... We use the trace of the flux tensor matrix, referred to as Tr J_F, defined as Tr J_F = ∫_Ω W(x − y) (I_xt²(y) + I_yt²(y) + I_tt²(y)) dy.

  17. EFFECT OF A NOVEL ESSENTIAL OIL MOUTHRINSE WITHOUT ALCOHOL ON GINGIVITIS: A DOUBLE-BLINDED RANDOMIZED CONTROLLED TRIAL

    PubMed Central

    Botelho, Marco Antonio; Bezerra, José Gomes; Correa, Luciano Lima; Fonseca, Said Gonçalves da Cruz; Montenegro, Danusa; Gapski, Ricardo; Brito, Gerly Anne Castro; Heukelbach, Jörg

    2007-01-01

    Several different plant extracts have been evaluated with respect to their antimicrobial effects against oral pathogens and for reduction of gingivitis. Given that a large number of these substances have been associated with significant side effects that contraindicate their long-term use, new compounds need to be tested. The aim of this study was to assess the short-term safety and efficacy of a Lippia sidoides ("alecrim pimenta")-based essential oil mouthrinse on gingival inflammation and bacterial plaque. Fifty-five patients were enrolled into a pilot, double-blinded, randomized, parallel-armed study. Patients were randomly assigned to undergo a 7-day treatment regimen with either the L. sidoides-based mouthrinse or 0.12% chlorhexidine mouthrinse. The results demonstrated decreased plaque index, gingival index and gingival bleeding index scores at 7 days, as compared to baseline. There was no statistically significant difference (p>0.05) between test and control groups for any of the clinical parameters assessed throughout the study. Adverse events were mild and transient. The findings of this study demonstrated that the L. sidoides-based mouthrinse was safe and efficacious in reducing bacterial plaque and gingival inflammation. PMID:19089126

  18. Exercise counseling to enhance smoking cessation outcomes: the Fit2Quit randomized controlled trial.

    PubMed

    Maddison, Ralph; Roberts, Vaughan; McRobbie, Hayden; Bullen, Christopher; Prapavessis, Harry; Glover, Marewa; Jiang, Yannan; Brown, Paul; Leung, William; Taylor, Sue; Tsai, Midi

    2014-10-01

    Regular exercise has been proposed as a potential smoking cessation aid. This study aimed to determine the effects of an exercise counseling program on cigarette smoking abstinence at 24 weeks. A parallel, two-arm, randomized controlled trial was conducted. Adult cigarette smokers (n = 906) who were insufficiently active and interested in quitting were randomized to receive the Fit2Quit intervention (10 exercise telephone counseling sessions over 6 months) plus usual care (behavioral counseling and nicotine replacement therapy) or usual care alone. There were no significant group differences in 7-day point-prevalence and continuous abstinence at 6 months. The more intervention calls successfully delivered, the lower the probability of smoking (OR, 0.88; 95% CI 0.81-0.97, p = 0.01) in the intervention group. A significant difference was observed for leisure time physical activity (difference = 219.11 MET-minutes/week; 95% CI 52.65-385.58; p = 0.01). Telephone-delivered exercise counseling may not be sufficient to improve smoking abstinence rates over and above existing smoking cessation services. (Australasian Clinical Trials Registry Number: ACTRN12609000637246).

  19. Curious parallels and curious connections--phylogenetic thinking in biology and historical linguistics.

    PubMed

    Atkinson, Quentin D; Gray, Russell D

    2005-08-01

    In The Descent of Man (1871), Darwin observed "curious parallels" between the processes of biological and linguistic evolution. These parallels mean that evolutionary biologists and historical linguists seek answers to similar questions and face similar problems. As a result, the theory and methodology of the two disciplines have evolved in remarkably similar ways. In addition to Darwin's curious parallels of process, there are a number of equally curious parallels and connections between the development of methods in biology and historical linguistics. Here we briefly review the parallels between biological and linguistic evolution and contrast the historical development of phylogenetic methods in the two disciplines. We then look at a number of recent studies that have applied phylogenetic methods to language data and outline some current problems shared by the two fields.

  20. Eigensolution of finite element problems in a completely connected parallel architecture

    NASA Technical Reports Server (NTRS)

    Akl, F.; Morel, M.

    1989-01-01

    A parallel algorithm is presented for the solution of the generalized eigenproblem in linear elastic finite element analysis. The algorithm is based on a completely connected parallel architecture in which each processor is allowed to communicate with all other processors. The algorithm is successfully implemented on a tightly coupled MIMD parallel processor. A finite element model is divided into m domains each of which is assumed to process n elements. Each domain is then assigned to a processor or to a logical processor (task) if the number of domains exceeds the number of physical processors. The effect of the number of domains, the number of degrees-of-freedom located along the global fronts, and the dimension of the subspace on the performance of the algorithm is investigated. For a 64-element rectangular plate, speed-ups of 1.86, 3.13, 3.18, and 3.61 are achieved on two, four, six, and eight processors, respectively.

  1. The energy density distribution of an ideal gas and Bernoulli’s equations

    NASA Astrophysics Data System (ADS)

    Santos, Leonardo S. F.

    2018-05-01

    This work discusses the energy density distribution in an ideal gas and the consequences of Bernoulli's equation and the corresponding relation for compressible fluids. The aim of this work is to study how Bernoulli's equation determines the energy flow in a fluid, although Bernoulli's equation does not describe the energy density itself. A model from molecular dynamics considerations that describes an ideal gas at rest with uniform density is modified to explore the gas in motion with non-uniform density and gravitational effects. The difference between the component of a particle's velocity that is parallel to the gas velocity and the gas velocity itself is called the 'parallel random speed'. The pressure from the 'parallel random speed' is denominated the parallel pressure. The modified model predicts that the energy density is the sum of the kinetic and gravitational potential energy densities plus two terms with the static and parallel pressures. Applying Bernoulli's equation and the corresponding relation for compressible fluids to the energy density expression results in two new formulations. For an incompressible and a compressible gas, the energy density expressions are written as functions of the stagnation, static and parallel pressures, without any dependence on the kinetic or gravitational potential energy densities. These expressions of the energy density are the main contributions of this work. When the parallel pressure is uniform, the energy density distribution for the incompressible approximation and the compressible gas does not converge to zero in the limit of null static pressure. This result is rather unusual because the temperature tends to zero for null pressure. When the gas is considered incompressible and the parallel pressure is equal to the static pressure, the energy density maintains this unusual behaviour at small pressures. If the parallel pressure is equal to the static pressure, the energy density converges to zero in the limit of null pressure only if the gas is compressible. Only the last situation describes intuitive behaviour for an ideal gas.

  2. Ensemble Smoother implemented in parallel for groundwater problems applications

    NASA Astrophysics Data System (ADS)

    Leyva, E.; Herrera, G. S.; de la Cruz, L. M.

    2013-05-01

    Data assimilation is a process that links forecasting models and measurements, drawing on the benefits of both sources. The Ensemble Kalman Filter (EnKF) is a sequential data-assimilation method that was designed to address two of the main problems with using the Extended Kalman Filter (EKF) on nonlinear models in large state spaces, i.e., the need for a closure scheme and the massive computational requirements associated with the storage and subsequent integration of the error covariance matrix. The EnKF has gained popularity because of its simple conceptual formulation and relative ease of implementation. It has been used successfully in various applications in meteorology and oceanography and, more recently, in petroleum engineering and hydrogeology. The Ensemble Smoother (ES) is a method similar to the EnKF; it was proposed by Van Leeuwen and Evensen (1996). Herrera (1998) proposed a version of the ES which we call the Ensemble Smoother of Herrera (ESH) to distinguish it from the former. It was introduced for space-time optimization of groundwater monitoring networks. In recent years, this method has been used for data assimilation and parameter estimation in groundwater flow and transport models. The ES method uses Monte Carlo simulation, which consists of generating repeated realizations of the random variable considered, using a flow and transport model. However, a large number of model runs is often required for the moments of the variable to converge. Therefore, depending on the complexity of the problem, a serial computer may require many hours of continuous use to apply the ES. For this reason, the process must be parallelized to complete in a reasonable time. In this work we present the results of a parallelization strategy that reduces the execution time for running a large number of realizations. The software GWQMonitor, by Herrera (1998), implements all the algorithms required for the ESH in Fortran 90. We developed a Python script using mpi4py to execute GWQMonitor in parallel through the MPI library. Our approach is to calculate the initial inputs for each realization and run groups of these realizations on separate processors. The only modification to GWQMonitor was the final calculation of the covariance matrix. This strategy was applied to the study of a simplified aquifer in a single-layer rectangular domain. We show the speedup and efficiency for different numbers of processors.
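
    The strategy of fixing per-realization inputs up front and farming groups of realizations out to separate processors can be sketched in miniature. Here a thread pool and a one-draw toy "model" stand in for the mpi4py ranks and full GWQMonitor runs; all names are illustrative:

```python
# Miniature version of the strategy: per-realization inputs (seeds) fixed up
# front, realizations farmed out in groups, ensemble moments computed at the
# end. A thread pool and a one-draw toy "model" stand in for mpi4py ranks and
# full GWQMonitor runs.
from concurrent.futures import ThreadPoolExecutor
import random
import statistics

def run_realization(seed):
    """One Monte Carlo realization: a single draw standing in for a full
    flow-and-transport model run."""
    rng = random.Random(seed)
    return rng.gauss(10.0, 2.0)      # e.g. a simulated head at one location

seeds = list(range(200))             # initial inputs computed before dispatch
with ThreadPoolExecutor(max_workers=4) as pool:
    heads = list(pool.map(run_realization, seeds))

mean = statistics.mean(heads)        # ensemble moments the ES works with
var = statistics.variance(heads)
```

    Because each realization depends only on its own pre-computed inputs, the runs are embarrassingly parallel; only the final moment (covariance) calculation needs a gather step, which matches the one modification the authors made to GWQMonitor.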

  3. IOPA: I/O-aware parallelism adaption for parallel programs

    PubMed Central

    Liu, Tao; Liu, Yi; Qian, Chen; Qian, Depei

    2017-01-01

    With the development of multi-/many-core processors, applications need to be written as parallel programs to improve execution efficiency. For data-intensive applications that use multiple threads to read/write files simultaneously, an I/O sub-system can easily become a bottleneck when too many of these types of threads exist; on the contrary, too few threads will cause insufficient resource utilization and hurt performance. Therefore, programmers must pay much attention to parallelism control to find the appropriate number of I/O threads for an application. This paper proposes a parallelism control mechanism named IOPA that can adjust the parallelism of applications to adapt to the I/O capability of a system and balance computing resources and I/O bandwidth. The programming interface of IOPA is also provided to programmers to simplify parallel programming. IOPA is evaluated using multiple applications with both solid state and hard disk drives. The results show that the parallel applications using IOPA can achieve higher efficiency than those with a fixed number of threads. PMID:28278236
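
    The parallelism-adaptation idea can be sketched as a simple hill-climbing controller. This is a generic illustration of adapting thread count to measured I/O throughput, not IOPA's actual mechanism; the function names and the synthetic throughput curve are assumptions:

```python
# Generic sketch of parallelism adaptation: hill-climb the I/O thread count on
# a measured-throughput callback, growing until the I/O sub-system saturates.
# This illustrates the idea only; it is not IOPA's actual mechanism.

def adapt_parallelism(throughput, n=1, n_max=64, steps=20):
    """Adjust thread count n while the throughput callback keeps improving."""
    best = throughput(n)
    for _ in range(steps):
        if n < n_max and throughput(n + 1) > best:
            n += 1
        elif n > 1 and throughput(n - 1) > best:
            n -= 1
        else:
            break                     # neither direction helps: settled
        best = throughput(n)
    return n

# Synthetic I/O curve: linear gains up to 8 threads, contention beyond.
curve = lambda n: n * 100 if n <= 8 else 800 - (n - 8) * 50
assert adapt_parallelism(curve, n=1) == 8     # ramps up to the knee
assert adapt_parallelism(curve, n=16) == 8    # backs off once saturated
```

    The controller converges to the knee of the curve from either side, which is the behaviour the abstract describes: too few threads underutilize resources, too many saturate the I/O sub-system.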

  4. IOPA: I/O-aware parallelism adaption for parallel programs.

    PubMed

    Liu, Tao; Liu, Yi; Qian, Chen; Qian, Depei

    2017-01-01

    With the development of multi-/many-core processors, applications need to be written as parallel programs to improve execution efficiency. For data-intensive applications that use multiple threads to read/write files simultaneously, an I/O sub-system can easily become a bottleneck when too many of these types of threads exist; on the contrary, too few threads will cause insufficient resource utilization and hurt performance. Therefore, programmers must pay much attention to parallelism control to find the appropriate number of I/O threads for an application. This paper proposes a parallelism control mechanism named IOPA that can adjust the parallelism of applications to adapt to the I/O capability of a system and balance computing resources and I/O bandwidth. The programming interface of IOPA is also provided to programmers to simplify parallel programming. IOPA is evaluated using multiple applications with both solid state and hard disk drives. The results show that the parallel applications using IOPA can achieve higher efficiency than those with a fixed number of threads.

  5. Parallel eigenanalysis of finite element models in a completely connected architecture

    NASA Technical Reports Server (NTRS)

    Akl, F. A.; Morel, M. R.

    1989-01-01

    A parallel algorithm is presented for the solution of the generalized eigenproblem in linear elastic finite element analysis, (K)(phi) = (M)(phi)(omega), where (K) and (M) are of order N, and (omega) is of order q. The concurrent solution of the eigenproblem is based on the multifrontal/modified subspace method and is achieved in a completely connected parallel architecture in which each processor is allowed to communicate with all other processors. The algorithm was successfully implemented on a tightly coupled multiple-instruction multiple-data parallel processing machine, Cray X-MP. A finite element model is divided into m domains each of which is assumed to process n elements. Each domain is then assigned to a processor or to a logical processor (task) if the number of domains exceeds the number of physical processors. The macrotasking library routines are used in mapping each domain to a user task. Computational speed-up and efficiency are used to determine the effectiveness of the algorithm. The effect of the number of domains, the number of degrees-of-freedom located along the global fronts and the dimension of the subspace on the performance of the algorithm is investigated. A parallel finite element dynamic analysis program, p-feda, is documented and the performance of its subroutines in a parallel environment is analyzed.

  6. Consumption of cranberry polyphenols enhances human γδ-T cell proliferation and reduces the number of symptoms associated with colds and influenza: a randomized, placebo-controlled intervention study.

    PubMed

    Nantz, Meri P; Rowe, Cheryl A; Muller, Catherine; Creasy, Rebecca; Colee, James; Khoo, Christina; Percival, Susan S

    2013-12-13

    Our main objective was to evaluate the ability of cranberry phytochemicals to modify immunity, specifically γδ-T cell proliferation, after daily consumption of a cranberry beverage, and its effect on health outcomes related to cold and influenza symptoms. The study was a randomized, double-blind, placebo-controlled, parallel intervention. Subjects drank a low calorie cranberry beverage (450 ml) made with a juice-derived, powdered cranberry fraction (n = 22) or a placebo beverage (n = 23), daily, for 10 wk. PBMC were cultured for six days with autologous serum and PHA-L stimulation. Cold and influenza symptoms were self-reported. The proliferation index of γδ-T cells in culture was almost five times higher after 10 wk of cranberry beverage consumption (p < 0.001). In the cranberry beverage group, the incidence of illness was not reduced; however, significantly fewer symptoms of illness were reported (p = 0.031). Consumption of the cranberry beverage modified the ex vivo proliferation of γδ-T cells. As these cells are located in the epithelium and serve as a first line of defense, improving their function may be related to reducing the number of symptoms associated with a cold and flu.

  7. Exploring the Sensitivity of Horn's Parallel Analysis to the Distributional Form of Random Data

    ERIC Educational Resources Information Center

    Dinno, Alexis

    2009-01-01

    Horn's parallel analysis (PA) is the method of consensus in the literature on empirical methods for deciding how many components/factors to retain. Different authors have proposed various implementations of PA. Horn's seminal 1965 article, a 1996 article by Thompson and Daniel, and a 2004 article by Hayton, Allen, and Scarpello all make assertions…

  8. An Intrinsic Algorithm for Parallel Poisson Disk Sampling on Arbitrary Surfaces.

    PubMed

    Ying, Xiang; Xin, Shi-Qing; Sun, Qian; He, Ying

    2013-03-08

    Poisson disk sampling plays an important role in a variety of visual computing applications, due to its useful statistical properties in distribution and the absence of aliasing artifacts. While many effective techniques have been proposed to generate Poisson disk distributions in Euclidean space, relatively little work has been reported on the surface counterpart. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. We propose a new technique for parallelizing the dart throwing. Rather than the conventional approaches that explicitly partition the spatial domain to generate the samples in parallel, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. It is worth noting that our algorithm is accurate, as the generated Poisson disks are uniformly and randomly distributed without bias. Our method is intrinsic in that all the computations are based on the intrinsic metric and are independent of the embedding space. This intrinsic feature allows us to generate Poisson disk distributions on arbitrary surfaces. Furthermore, by manipulating the spatially varying density function, we can obtain adaptive sampling easily.
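
    The priority-based conflict resolution can be sketched in the plane (the paper works with intrinsic geodesic distances on surfaces). Processing candidates serially in priority order reproduces exactly what the parallel threads compute, since a candidate survives iff no higher-priority survivor lies within the disk radius:

```python
# Plane-domain sketch of the priority idea: give every candidate a random,
# unique priority; a candidate survives iff no higher-priority survivor lies
# within the disk radius. A serial priority-order loop reproduces what the
# parallel threads compute; the paper uses intrinsic (geodesic) distances on
# surfaces rather than Euclidean distances in the plane.
import math
import random

def poisson_disk_by_priority(candidates, radius, rng):
    """Resolve dart-throwing conflicts by random unique priorities."""
    prioritized = sorted(candidates, key=lambda _: rng.random())
    accepted = []
    for p in prioritized:
        if all(math.dist(p, q) >= radius for q in accepted):
            accepted.append(p)
    return accepted

rng = random.Random(1)
cands = [(rng.random(), rng.random()) for _ in range(500)]
samples = poisson_disk_by_priority(cands, radius=0.1, rng=rng)
# Every surviving pair respects the Poisson-disk separation.
assert all(math.dist(a, b) >= 0.1
           for i, a in enumerate(samples) for b in samples[i + 1:])
```

    Because the priorities are random and independent of position, the surviving set is unbiased with respect to the candidate distribution, which is the property the paper emphasizes.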

  9. A comparison between orthogonal and parallel plating methods for distal humerus fractures: a prospective randomized trial.

    PubMed

    Lee, Sang Ki; Kim, Kap Jung; Park, Kyung Hoon; Choy, Won Sik

    2014-10-01

    With the continuing improvements in implants for distal humerus fractures, it is expected that newer types of plates, which are anatomically precontoured, thinner and less irritating to soft tissue, would have comparable outcomes when used in a clinical study. The purpose of this study was to compare the clinical and radiographic outcomes in patients with distal humerus fractures who were treated with orthogonal and parallel plating methods using precontoured distal humerus plates. Sixty-seven patients with a mean age of 55.4 years (range 22-90 years) were included in this prospective study. The subjects were randomly assigned to receive 1 of 2 treatments: orthogonal or parallel plating. The following results were assessed: operating time, time to fracture union, presence of a step or gap at the articular margin, varus-valgus angulation, functional recovery, and complications. No intergroup differences were observed based on radiological and clinical results between the groups. In our practice, no significant differences were found between the orthogonal and parallel plating methods in terms of clinical outcomes, mean operation time, union time, or complication rates. There were no cases of fracture nonunion in either group; heterotopic ossification was found in 3 patients in the orthogonal plating group and 2 patients in the parallel plating group. In our practice, no significant differences were found between the orthogonal and parallel plating methods in terms of clinical outcomes or complication rates. However, the orthogonal plating method may be preferred in cases of coronal shear fractures, where posterior-to-anterior fixation may provide additional stability to the intraarticular fractures. Additionally, the parallel plating method may be the preferred technique for fractures that occur at the most distal end of the humerus.

  10. LDRD final report on massively-parallel linear programming : the parPCx system.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parekh, Ojas; Phillips, Cynthia Ann; Boman, Erik Gunnar

    2005-02-01

    This report summarizes the research and development performed from October 2002 to September 2004 at Sandia National Laboratories under the Laboratory-Directed Research and Development (LDRD) project ''Massively-Parallel Linear Programming''. We developed a linear programming (LP) solver designed to use a large number of processors. LP is the optimization of a linear objective function subject to linear constraints. Companies and universities have expended huge efforts over decades to produce fast, stable serial LP solvers. Previous parallel codes run on shared-memory systems and have little or no distribution of the constraint matrix. We have seen no reports of general LP solver runsmore » on large numbers of processors. Our parallel LP code is based on an efficient serial implementation of Mehrotra's interior-point predictor-corrector algorithm (PCx). The computational core of this algorithm is the assembly and solution of a sparse linear system. We have substantially rewritten the PCx code and based it on Trilinos, the parallel linear algebra library developed at Sandia. Our interior-point method can use either direct or iterative solvers for the linear system. To achieve a good parallel data distribution of the constraint matrix, we use a (pre-release) version of a hypergraph partitioner from the Zoltan partitioning library. We describe the design and implementation of our new LP solver called parPCx and give preliminary computational results. We summarize a number of issues related to efficient parallel solution of LPs with interior-point methods including data distribution, numerical stability, and solving the core linear system using both direct and iterative methods. 
We describe a number of applications of LP specific to US Department of Energy mission areas and we summarize our efforts to integrate parPCx (and parallel LP solvers in general) into Sandia's massively-parallel integer programming solver PICO (Parallel Integer and Combinatorial Optimizer). We conclude with directions for long-term future algorithmic research and for near-term development that could improve the performance of parPCx.
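The computational core named above, assembling and solving a linear system at each interior-point iteration, can be sketched in dense form as follows. This is a hypothetical illustration only, not parPCx code: parPCx works with sparse, distributed matrices through Trilinos, and the names `A`, `x`, `s` (constraint matrix, primal iterate, dual slack iterate) are assumptions for the sketch.

```python
import numpy as np

def normal_equations_solve(A, x, s, rhs):
    """Core linear solve of one interior-point LP iteration: assemble the
    normal-equations matrix M = A D A^T with D = diag(x / s) and solve
    M dy = rhs. A dense NumPy solve stands in for the sparse direct or
    iterative solvers an LP code would use."""
    D = np.diag(x / s)
    M = A @ D @ A.T
    return np.linalg.solve(M, rhs)

# Tiny example: 2 constraints, 3 variables, strictly positive iterates.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
x = np.array([1.0, 2.0, 1.0])
s = np.array([0.5, 1.0, 2.0])
dy = normal_equations_solve(A, x, s, np.array([1.0, -1.0]))
```

In a parallel setting it is exactly the assembly of `M` and the solve that are distributed, which is why the data distribution of the constraint matrix matters so much.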

  11. A parallel time integrator for noisy nonlinear oscillatory systems

    NASA Astrophysics Data System (ADS)

    Subber, Waad; Sarkar, Abhijit

    2018-06-01

    In this paper, we adapt a parallel time integration scheme to track the trajectories of noisy non-linear dynamical systems. Specifically, we formulate a parallel algorithm to generate the sample path of a nonlinear oscillator defined by stochastic differential equations (SDEs) using the so-called parareal method for ordinary differential equations (ODEs). The presence of the Wiener process in SDEs causes difficulties in the direct application of any numerical integration technique for ODEs, including the parareal algorithm. The parallel implementation of the algorithm involves two SDE solvers, namely a fine-level scheme to integrate the system in parallel and a coarse-level scheme to generate and correct the required initial conditions to start the fine-level integrators. For the numerical illustration, a randomly excited Duffing oscillator is investigated in order to study the performance of the stochastic parallel algorithm with respect to a range of system parameters. The distributed implementation of the algorithm uses the Message Passing Interface (MPI).
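In its deterministic form, the parareal iteration the authors adapt can be sketched as below. This is a minimal serial emulation under stated assumptions: the fine solves in the list comprehension are the embarrassingly parallel part that would be distributed via MPI, and the SDE setting additionally requires consistent Wiener increments per time slice, which this ODE sketch omits.

```python
import math

def parareal(coarse, fine, y0, t_grid, iters):
    """Minimal serial emulation of the parareal iteration. The fine solves
    (list comprehension) are independent per slice and would run in
    parallel; the coarse sweep is the cheap sequential correction."""
    n = len(t_grid) - 1
    U = [y0]
    for k in range(n):                        # initial coarse prediction
        U.append(coarse(U[-1], t_grid[k], t_grid[k + 1]))
    for _ in range(iters):
        F = [fine(U[k], t_grid[k], t_grid[k + 1]) for k in range(n)]
        V = [y0]
        for k in range(n):                    # sequential correction sweep
            g_new = coarse(V[-1], t_grid[k], t_grid[k + 1])
            g_old = coarse(U[k], t_grid[k], t_grid[k + 1])
            V.append(g_new + F[k] - g_old)
        U = V
    return U

def euler(y, t0, t1, nsteps):                 # explicit Euler for dy/dt = -y
    dt = (t1 - t0) / nsteps
    for _ in range(nsteps):
        y -= dt * y
    return y

t_grid = [0.0, 0.25, 0.5, 0.75, 1.0]
U = parareal(lambda y, a, b: euler(y, a, b, 1),    # coarse: 1 step/slice
             lambda y, a, b: euler(y, a, b, 20),   # fine: 20 steps/slice
             1.0, t_grid, iters=4)
```

After as many iterations as time slices, the parareal trajectory matches the serial fine integration exactly; the method pays off when it converges in far fewer iterations than slices.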

  12. Predictors of short- and long-term adherence with a Mediterranean-type diet intervention: the PREDIMED randomized trial.

    PubMed

    Downer, Mary Kathryn; Gea, Alfredo; Stampfer, Meir; Sánchez-Tainta, Ana; Corella, Dolores; Salas-Salvadó, Jordi; Ros, Emilio; Estruch, Ramón; Fitó, Montserrat; Gómez-Gracia, Enrique; Arós, Fernando; Fiol, Miquel; De-la-Corte, Francisco Jose Garcia; Serra-Majem, Lluís; Pinto, Xavier; Basora, Josep; Sorlí, José V; Vinyoles, Ernest; Zazpe, Itziar; Martínez-González, Miguel-Ángel

    2016-06-14

    Dietary intervention success requires strong participant adherence, but very few studies have examined factors related to both short-term and long-term adherence. A better understanding of predictors of adherence is necessary to improve the design and execution of dietary intervention trials. This study was designed to identify participant characteristics at baseline and study features that predict short-term and long-term adherence with interventions promoting the Mediterranean-type diet (MedDiet) in the PREvención con DIeta MEDiterránea (PREDIMED) randomized trial. Analyses included men and women living in Spain aged 55-80 at high risk for cardiovascular disease. Participants were randomized to the MedDiet supplemented with either complementary extra-virgin olive oil (EVOO) or tree nuts. The control group and participants with insufficient information on adherence were excluded. PREDIMED began in 2003 and ended in 2010. Investigators assessed covariates at baseline and dietary information was updated yearly throughout follow-up. Adherence was measured with a validated 14-point Mediterranean-type diet adherence score. Logistic regression was used to examine associations between baseline characteristics and adherence at one and four years of follow-up. Participants were randomized to the MedDiet supplemented with EVOO (n = 2,543; 1,962 after exclusions) or tree nuts (n = 2,454; 2,236 after exclusions). A higher number of cardiovascular risk factors, larger waist circumference, lower physical activity levels, lower total energy intake, poorer baseline adherence to the 14-point adherence score, and allocation to MedDiet + EVOO each independently predicted poorer adherence. Participants from PREDIMED recruiting centers with a higher total workload (measured as total number of persons-years of follow-up) achieved better adherence. No adverse events or side effects were reported. 
To maximize adherence in dietary interventions, additional efforts to promote adherence should be directed at participants with lower baseline adherence to the intended diet and poorer health status. The design of multicenter nutrition trials should prioritize a few large centers with more participants in each, rather than many small centers. This study was registered at controlled-trials.com (http://www.controlled-trials.com/ISRCTN35739639). International Standard Randomized Controlled Trial Number (ISRCTN): 35739639. Registration date: 5 October 2005. Trial design: parallel randomized trial.

  13. A Parallel Genetic Algorithm for Automated Electronic Circuit Design

    NASA Technical Reports Server (NTRS)

    Long, Jason D.; Colombano, Silvano P.; Haith, Gary L.; Stassinopoulos, Dimitris

    2000-01-01

    Parallelized versions of genetic algorithms (GAs) are popular primarily for three reasons: the GA is an inherently parallel algorithm, typical GA applications are very compute intensive, and powerful computing platforms, especially Beowulf-style computing clusters, are becoming more affordable and easier to implement. In addition, the low communication bandwidth required allows the use of inexpensive networking hardware such as standard office ethernet. In this paper we describe a parallel GA and its use in automated high-level circuit design. Genetic algorithms are a trial-and-error search technique guided by principles of Darwinian evolution. Just as the genetic material of two living organisms can intermix to produce offspring that are better adapted to their environment, GAs expose genetic material, frequently strings of 1s and 0s, to the forces of artificial evolution: selection, mutation, recombination, etc. GAs start with a pool of randomly-generated candidate solutions which are then tested and scored with respect to their utility. Solutions are then bred by probabilistically selecting high quality parents and recombining their genetic representations to produce offspring solutions. Offspring are typically subjected to a small amount of random mutation. After a pool of offspring is produced, this process iterates until a satisfactory solution is found or an iteration limit is reached. Genetic algorithms have been applied to a wide variety of problems in many fields, including chemistry, biology, and many engineering disciplines. There are many styles of parallelism used in implementing parallel GAs. One such method is called the master-slave or processor farm approach. In this technique, slave nodes are used solely to compute fitness evaluations (the most time consuming part). The master processor collects fitness scores from the nodes and performs the genetic operators (selection, reproduction, variation, etc.).
Because of dependency issues in the GA, it is possible to have idle processors. However, as long as the load at each processing node is similar, the processors are kept busy nearly all of the time. In applying GAs to circuit design, a suitable genetic representation is that of a circuit-construction program. We discuss one such circuit-construction programming language and show how evolution can generate useful analog circuit designs. This language has the desirable property that virtually all sets of combinations of primitives result in valid circuit graphs. Our system allows circuit size (number of devices), circuit topology, and device values to be evolved. Using a parallel genetic algorithm and circuit simulation software, we present experimental results as applied to three analog filter and two amplifier design tasks. For example, a figure shows an 85 dB amplifier design evolved by our system, and another figure shows the performance of that circuit (gain and frequency response). In all tasks, our system is able to generate circuits that achieve the target specifications.
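The master-slave scheme described above can be sketched in a few lines of Python: the `map(fitness, P)` call is the step a master would farm out to slave nodes (e.g. via an MPI scatter or a process pool). This is a hypothetical minimal sketch, not the authors' circuit-design system; the classic "one-max" problem (maximize the number of 1 bits) stands in for the expensive circuit-simulation fitness.

```python
import random

random.seed(7)

def evolve(fitness, genome_len=16, pop=40, gens=30, pmut=0.05):
    """Master-side GA loop: selection, crossover, and mutation happen on
    the master; the fitness map() is the parallelizable slave work."""
    P = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop)]
    for _ in range(gens):
        scores = list(map(fitness, P))        # step farmed out to slaves
        def pick():                           # binary tournament selection
            a, b = random.randrange(pop), random.randrange(pop)
            return P[a] if scores[a] >= scores[b] else P[b]
        nxt = []
        while len(nxt) < pop:
            mom, dad = pick(), pick()
            cut = random.randrange(1, genome_len)   # one-point crossover
            child = mom[:cut] + dad[cut:]
            nxt.append([1 - g if random.random() < pmut else g
                        for g in child])            # per-gene mutation
        P = nxt
    return max(P, key=fitness)

best = evolve(sum)   # "one-max": fitness = number of 1 bits
```

Because only the fitness map needs remote workers, swapping `map` for `multiprocessing.Pool.map` parallelizes the expensive part without touching the genetic operators.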

  14. AZTEC: A parallel iterative package for solving linear systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hutchinson, S.A.; Shadid, J.N.; Tuminaro, R.S.

    1996-12-31

    We describe a parallel linear system package, AZTEC. The package incorporates a number of parallel iterative methods (e.g. GMRES, biCGSTAB, CGS, TFQMR) and preconditioners (e.g. Jacobi, Gauss-Seidel, polynomial, domain decomposition with LU or ILU within subdomains). Additionally, AZTEC allows for the reuse of previous preconditioning factorizations within Newton schemes for nonlinear methods. Currently, a number of different users are using this package to solve a variety of PDE applications.
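As a small-scale analogue of the method/preconditioner pairings listed above, here is a Jacobi-preconditioned conjugate-gradient solve in NumPy. This is an illustrative sketch of one such pairing for a symmetric positive-definite system, not AZTEC's C interface (AZTEC's Krylov methods such as GMRES also handle nonsymmetric systems).

```python
import numpy as np

def jacobi_pcg(A, b, tol=1e-10, maxit=200):
    """Conjugate gradient with a Jacobi (diagonal) preconditioner:
    precondition with M^{-1} = diag(A)^{-1}, the cheapest of the
    preconditioner choices a package like AZTEC offers."""
    Minv = 1.0 / np.diag(A)
    x = np.zeros_like(b)
    r = b - A @ x
    z = Minv * r
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = Minv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
x = jacobi_pcg(A, b)
```

In a parallel package the matrix-vector product `A @ p` and the dot products become the distributed operations; the preconditioner application stays local per subdomain.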

  15. Parallel pivoting combined with parallel reduction

    NASA Technical Reports Server (NTRS)

    Alaghband, Gita

    1987-01-01

    Parallel algorithms for the triangularization of large, sparse, and unsymmetric matrices are presented. The method combines parallel reduction with a new parallel pivoting technique, control over the generation of fill-ins, and a check for numerical stability, all done in parallel with the work distributed over the active processes. The technique uses the compatibility relation between pivots to identify parallel pivot candidates and uses the Markowitz number of pivots to minimize fill-in. This technique is not a preordering of the sparse matrix; it is applied dynamically as the decomposition proceeds.
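The two ideas named above, the Markowitz count and pivot compatibility, can be sketched together: rank candidate pivots by the Markowitz number (r_i - 1)(c_j - 1) and greedily accept any candidate whose row and column are still unused, so the accepted pivots can be eliminated in parallel. This is a simplified sketch of the selection idea only, not the paper's full dynamic algorithm (which also checks numerical stability).

```python
def compatible_pivots(nonzeros, m, n):
    """Greedily select mutually compatible pivots (no shared row or
    column) from the nonzero positions of an m x n sparse matrix,
    preferring low Markowitz counts to limit fill-in."""
    row_cnt = [0] * m
    col_cnt = [0] * n
    for i, j in nonzeros:
        row_cnt[i] += 1
        col_cnt[j] += 1
    markowitz = lambda ij: (row_cnt[ij[0]] - 1) * (col_cnt[ij[1]] - 1)
    used_rows, used_cols, pivots = set(), set(), []
    for i, j in sorted(nonzeros, key=markowitz):     # low fill-in first
        if i not in used_rows and j not in used_cols:  # compatibility
            pivots.append((i, j))
            used_rows.add(i)
            used_cols.add(j)
    return pivots

pivots = compatible_pivots([(0, 0), (0, 1), (1, 1), (2, 2)], 3, 3)
```

Every pivot in the returned set touches disjoint rows and columns, which is exactly what lets one elimination step process them concurrently.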

  16. Frequency of RNA–RNA interaction in a model of the RNA World

    PubMed Central

    STRIGGLES, JOHN C.; MARTIN, MATTHEW B.; SCHMIDT, FRANCIS J.

    2006-01-01

    The RNA World model for prebiotic evolution posits the selection of catalytic/template RNAs from random populations. The mechanisms by which these random populations could be generated de novo are unclear. Non-enzymatic and RNA-catalyzed nucleic acid polymerizations are poorly processive, which means that the resulting short-chain RNA population could contain only limited diversity. Nonreciprocal recombination of smaller RNAs provides an alternative mechanism for the assembly of larger species with concomitantly greater structural diversity; however, the frequency of any specific recombination event in a random RNA population is limited by the low probability of an encounter between any two given molecules. This low probability could be overcome if the molecules capable of productive recombination were redundant, with many nonhomologous but functionally equivalent RNAs being present in a random population. Here we report fluctuation experiments to estimate the redundancy of the set of RNAs in a population of random sequences that are capable of non-Watson-Crick interaction with another RNA. Parallel SELEX experiments showed that at least one in 10^6 random 20-mers binds to the P5.1 stem–loop of Bacillus subtilis RNase P RNA with affinities equal to that of its naturally occurring partner. This high frequency predicts that a single RNA in an RNA World would encounter multiple interacting RNAs within its lifetime, supporting recombination as a plausible mechanism for prebiotic RNA evolution. The large number of equivalent species implies that the selection of any single interacting species in the RNA World would be a contingent event, i.e., one resulting from historical accident. PMID:16495233

  17. Randomized Controlled Trial of Video Self-Modeling Following Speech Restructuring Treatment for Stuttering

    ERIC Educational Resources Information Center

    Cream, Angela; O'Brian, Sue; Jones, Mark; Block, Susan; Harrison, Elisabeth; Lincoln, Michelle; Hewat, Sally; Packman, Ann; Menzies, Ross; Onslow, Mark

    2010-01-01

    Purpose: In this study, the authors investigated the efficacy of video self-modeling (VSM) following speech restructuring treatment to improve the maintenance of treatment effects. Method: The design was an open-plan, parallel-group, randomized controlled trial. Participants were 89 adults and adolescents who undertook intensive speech…

  18. Group Cognitive Behavioural Therapy and Group Recreational Activity for Adults with Autism Spectrum Disorders: A Preliminary Randomized Controlled Trial

    ERIC Educational Resources Information Center

    Hesselmark, Eva; Plenty, Stephanie; Bejerot, Susanne

    2014-01-01

    Although adults with autism spectrum disorder are an increasingly identified patient population, few treatment options are available. This "preliminary" randomized controlled open trial with a parallel design developed two group interventions for adults with autism spectrum disorders and intelligence within the normal range: cognitive…

  19. Algorithms and programming tools for image processing on the MPP, part 2

    NASA Technical Reports Server (NTRS)

    Reeves, Anthony P.

    1986-01-01

    A number of algorithms were developed for image warping and pyramid image filtering. Techniques were investigated for the parallel processing of a large number of independent irregular shaped regions on the MPP. In addition some utilities for dealing with very long vectors and for sorting were developed. Documentation pages for the algorithms which are available for distribution are given. The performance of the MPP for a number of basic data manipulations was determined. From these results it is possible to predict the efficiency of the MPP for a number of algorithms and applications. The Parallel Pascal development system, which is a portable programming environment for the MPP, was improved and better documentation including a tutorial was written. This environment allows programs for the MPP to be developed on any conventional computer system; it consists of a set of system programs and a library of general purpose Parallel Pascal functions. The algorithms were tested on the MPP and a presentation on the development system was made to the MPP users group. The UNIX version of the Parallel Pascal System was distributed to a number of new sites.

  20. A multicenter, randomized, controlled trial of osteopathic manipulative treatment on preterms.

    PubMed

    Cerritelli, Francesco; Pizzolorusso, Gianfranco; Renzetti, Cinzia; Cozzolino, Vincenzo; D'Orazio, Marianna; Lupacchini, Mariacristina; Marinelli, Benedetta; Accorsi, Alessandro; Lucci, Chiara; Lancellotti, Jenny; Ballabio, Silvia; Castelli, Carola; Molteni, Daniela; Besana, Roberto; Tubaldi, Lucia; Perri, Francesco Paolo; Fusilli, Paola; D'Incecco, Carmine; Barlafante, Gina

    2015-01-01

    Despite some preliminary evidence, it is still largely unknown whether osteopathic manipulative treatment improves preterm clinical outcomes. The present multi-center, randomized, single-blind, parallel-group clinical trial enrolled newborns who met the criteria of gestational age between 29 and 37 weeks, without any congenital complication, from 3 different public neonatal intensive care units. Preterm infants were randomly assigned to usual prenatal care (control group) or osteopathic manipulative treatment (study group). The primary outcome was the mean difference in length of hospital stay between groups. A total of 695 newborns were randomly assigned to either the study group (n=352) or the control group (n=343). A statistically significant difference was observed between the two groups for the primary outcome (13.8 and 17.5 days for the study and control group respectively, p<0.001, effect size: 0.31). Multivariate analysis showed a reduction of the length of stay of 3.9 days (95% CI -5.5 to -2.3, p<0.001). Furthermore, there were significant reductions with treatment as compared to usual care in cost (difference between study and control group: 1,586.01€; 95% CI 1,087.18 to 6,277.28; p<0.001) but not in daily weight gain. There were no complications associated with the intervention. Osteopathic treatment significantly reduced the number of days of hospitalization and was cost-effective in a large cohort of preterm infants.

  1. Statistical design of quantitative mass spectrometry-based proteomic experiments.

    PubMed

    Oberg, Ann L; Vitek, Olga

    2009-05-01

    We review the fundamental principles of statistical experimental design, and their application to quantitative mass spectrometry-based proteomics. We focus on class comparison using Analysis of Variance (ANOVA), and discuss how randomization, replication and blocking help avoid systematic biases due to the experimental procedure, and help optimize our ability to detect true quantitative changes between groups. We also discuss the issues of pooling multiple biological specimens for a single mass analysis, and calculation of the number of replicates in a future study. When applicable, we emphasize the parallels between designing quantitative proteomic experiments and experiments with gene expression microarrays, and give examples from that area of research. We illustrate the discussion using theoretical considerations, and using real-data examples of profiling of disease.

  2. Designing Feature and Data Parallel Stochastic Coordinate Descent Method forMatrix and Tensor Factorization

    DTIC Science & Technology

    2016-05-11

    AFRL-AFOSR-JP-TR-2016-0046: Designing Feature and Data Parallel Stochastic Coordinate Descent Method for Matrix and Tensor Factorization. U Kang (Korea). Grant number: FA2386

  3. Resistance of a plate in parallel flow at low Reynolds numbers

    NASA Technical Reports Server (NTRS)

    Janour, Zbynek

    1951-01-01

    The present paper gives the results of measurements of the resistance of a plate placed parallel to the flow in the range of Reynolds numbers from 10 to 2300; in this range the resistance deviates from the formula of Blasius. The lower limit of validity of the Blasius formula is determined and also the increase in resistance at the edges parallel to the flow in the case of a plate of finite width.

  4. Trial protocol: a parallel group, individually randomized clinical trial to evaluate the effect of a mobile phone application to improve sexual health among youth in Stockholm County.

    PubMed

    Nielsen, Anna; De Costa, Ayesha; Bågenholm, Aspasia; Danielsson, Kristina Gemzell; Marrone, Gaetano; Boman, Jens; Salazar, Mariano; Diwan, Vinod

    2018-02-05

    Genital Chlamydia trachomatis infection is a major public health problem worldwide, affecting mostly youth. Sweden introduced an opportunistic screening approach in 1982, accompanied by treatment, partner notification, and case reporting. After an initial decline in infection rate until the mid-1990s, the number of reported cases increased over the following two decades and has now stabilized at a high level of 37,000 reported cases in Sweden per year (85% of cases in youth). Sexual risk-taking among youth is also reported to have significantly increased over the last 20 years. Mobile health (mHealth) interventions could be particularly suitable for youth and sexual health promotion, as the intervention is delivered in a familiar and discrete way to a tech-savvy at-risk population. This paper presents a protocol for a randomized trial to study the effect of an interactive mHealth application (app) on condom use among the youth of Stockholm. 446 youth resident in Stockholm will be recruited into this two-arm, parallel-group, individually randomized trial. Recruitment will be from Youth Health Clinics or via the trial website. Participants will be randomized to receive either the intervention (which comprises an interactive app on safe sexual health that will be installed on their smart phones) or a control group (standard of care). Youth will be followed up for 6 months, with questionnaire responses submitted periodically via the app. Self-reported condom use over 6 months will be the primary outcome. Secondary outcomes will include presence of an infection, Chlamydia tests during the study period, and proxy markers of safe sex. Analysis is by intention to treat. This trial exploits the high mobile phone usage among youth to provide a phone app intervention in the area of sexual health. If successful, the results will have implications for health service delivery and health promotion among the youth.
From a methodological perspective, this trial is expected to provide information on the strengths and challenges of implementing a partially app-based (internet-based) trial in this context. ISRCTN 13212899; date of registration: June 22, 2017.

  5. Solving very large, sparse linear systems on mesh-connected parallel computers

    NASA Technical Reports Server (NTRS)

    Opsahl, Torstein; Reif, John

    1987-01-01

    The implementation of Pan and Reif's Parallel Nested Dissection (PND) algorithm on mesh-connected parallel computers is described. This is the first known algorithm that allows very large, sparse linear systems of equations to be solved efficiently in polylog time using a small number of processors. How the processor bound of PND can be matched to the number of processors available on a given parallel computer, by slowing down the algorithm by constant factors, is described. Also, for the important class of problems where G(A) is a grid graph, a unique memory mapping that reduces the inter-processor communication requirements of PND to those that can be executed on mesh-connected parallel machines is detailed. A description of an implementation on the Goodyear Massively Parallel Processor (MPP), located at Goddard, is given. Also, a detailed discussion of data mappings and performance issues is given.

  6. Hybrid Parallelism for Volume Rendering on Large-, Multi-, and Many-Core Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Howison, Mark; Bethel, E. Wes; Childs, Hank

    2012-01-01

    With the computing industry trending towards multi- and many-core processors, we study how a standard visualization algorithm, ray-casting volume rendering, can benefit from a hybrid parallelism approach. Hybrid parallelism provides the best of both worlds: using distributed-memory parallelism across a large number of nodes increases available FLOPs and memory, while exploiting shared-memory parallelism among the cores within each node ensures that each node performs its portion of the larger calculation as efficiently as possible. We demonstrate results from weak and strong scaling studies, at levels of concurrency ranging up to 216,000, and with datasets as large as 12.2 trillion cells. The greatest benefit from hybrid parallelism lies in the communication portion of the algorithm, the dominant cost at higher levels of concurrency. We show that reducing the number of participants with a hybrid approach significantly improves performance.
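The "fewer communication participants" point can be made concrete with a toy sketch: cores inside a node combine their partial results in shared memory, so only one participant per node enters the inter-node communication phase. Summing numbers stands in for compositing partial images here; this is a hypothetical illustration, not the paper's renderer, and the node/core counts are arbitrary.

```python
from concurrent.futures import ThreadPoolExecutor

def hybrid_sum(data, n_nodes=4, cores_per_node=6):
    """Hybrid reduction sketch: intra-node shared-memory combine first,
    then an inter-node phase with only n_nodes participants instead of
    n_nodes * cores_per_node."""
    per_node = [data[i::n_nodes] for i in range(n_nodes)]
    node_partials = []
    for chunk in per_node:                        # one "node" at a time
        with ThreadPoolExecutor(cores_per_node) as pool:  # its cores
            shards = [chunk[i::cores_per_node]
                      for i in range(cores_per_node)]
            partials = list(pool.map(sum, shards))
        node_partials.append(sum(partials))       # combined once per node
    return sum(node_partials)  # inter-node phase: n_nodes participants

total = hybrid_sum(list(range(100)))
```

With pure distributed-memory parallelism every core would join the final reduction; the hybrid variant cuts the participant count by the per-node core count, which is where the communication savings come from.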

  7. Fast parallel molecular algorithms for DNA-based computation: factoring integers.

    PubMed

    Chang, Weng-Long; Guo, Minyi; Ho, Michael Shan-Hui

    2005-06-01

    The RSA public-key cryptosystem is an algorithm that converts input data to an unrecognizable encryption and converts the unrecognizable data back into its original decryption form. The security of the RSA public-key cryptosystem is based on the difficulty of factoring the product of two large prime numbers. This paper demonstrates how to factor the product of two large prime numbers, a breakthrough in basic biological operations, using a molecular computer. In order to achieve this, we propose three DNA-based algorithms, a parallel subtractor, a parallel comparator, and parallel modular arithmetic, and formally verify our designed molecular solutions for factoring the product of two large prime numbers. Furthermore, this work indicates that cryptosystems using public keys are perhaps insecure, and also presents clear evidence of the ability of molecular computing to perform complicated mathematical operations.

  8. Parallel ALLSPD-3D: Speeding Up Combustor Analysis Via Parallel Processing

    NASA Technical Reports Server (NTRS)

    Fricker, David M.

    1997-01-01

    The ALLSPD-3D Computational Fluid Dynamics code for reacting flow simulation was run on a set of benchmark test cases to determine its parallel efficiency. These test cases included non-reacting and reacting flow simulations with varying numbers of processors. Also, the tests explored the effects of scaling the simulation with the number of processors in addition to distributing a constant size problem over an increasing number of processors. The test cases were run on a cluster of IBM RS/6000 Model 590 workstations with ethernet and ATM networking plus a shared memory SGI Power Challenge L workstation. The results indicate that the network capabilities significantly influence the parallel efficiency, i.e., a shared memory machine is fastest and ATM networking provides acceptable performance. The limitations of ethernet greatly hamper the rapid calculation of flows using ALLSPD-3D.
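The speedup and parallel-efficiency figures such benchmark studies report are computed as below; the numbers in the example are illustrative, not ALLSPD-3D's measured timings.

```python
def parallel_efficiency(t_serial, t_parallel, n_procs):
    """Speedup S = T_serial / T_parallel and efficiency E = S / P,
    the two standard metrics of a strong-scaling benchmark."""
    speedup = t_serial / t_parallel
    return speedup, speedup / n_procs

# Illustrative values: 100 s serial run, 12.5 s on 8 processors.
s, e = parallel_efficiency(100.0, 12.5, 8)
```

Efficiency near 1.0 indicates the interconnect (shared memory or ATM in the tests above) is keeping up; ethernet's limitations show up as efficiency falling well below 1.0 as processors are added.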

  9. Effects of professional oral health care on elderly: randomized trial.

    PubMed

    Morino, T; Ookawa, K; Haruta, N; Hagiwara, Y; Seki, M

    2014-11-01

    To better understand the role of professional oral health care for the elderly in improving geriatric oral health, the effects of short-term professional oral health care (once per week for 1 month) on oral microbiological parameters were assessed. A parallel, open-label, randomized controlled trial was undertaken in a nursing home for the elderly in Shizuoka, Japan. Thirty-four dentate elderly over 74 years were randomly assigned by ID number to the intervention (17/34) and control (17/34) groups. The outcomes were changes in oral microbiological parameters within the intervention period: numbers of bacteria in unstimulated saliva (total bacteria, Streptococcus, Fusobacterium, and Prevotella), detection of opportunistic pathogens, and an index of oral hygiene (Dental Plaque Index, DPI). Each parameter was evaluated before and after the intervention period. Four participants were lost owing to death (1), bone fracture (1), refusal to participate (1), and multi-antibiotic usage (1). Finally, 30 elderly were analysed (14/intervention and 16/control). At baseline, no difference was found between the control and intervention groups. After the intervention period, the percentage of Streptococcus species increased significantly in the intervention group (Intervention, 86% [12/14]; Control, 50% [8/16]: Fisher's, right-tailed, P < 0.05). Moreover, DPI significantly improved in the intervention group (Intervention, 57% [8/14]; Control, 13% [2/16]: Fisher's, two-tailed, P < 0.05). The improvement in DPI extended for 3 months after intervention. No side effects were reported. Short-term professional oral health care can improve oral conditions in the elderly. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  10. Sunscreen use and intentional exposure to ultraviolet A and B radiation: a double blind randomized trial using personal dosimeters

    PubMed Central

    Autier, P; Doré, J-F; Reis, A C; Grivegnée, A; Ollivaud, L; Truchetet, F; Chamoun, E; Rotmensz, N; Severi, G; Césarini, J-P

    2000-01-01

    A previous randomized trial found that sunscreen use could extend intentional sun exposure, thereby possibly increasing the risk of cutaneous melanoma. In a similarly designed trial, we examined the effect of using sunscreens with different sun protection factors (SPF) on actual exposure to ultraviolet B (UVB) and ultraviolet A (UVA) radiation. In June 1998, 58 European participants 18–24 years old were randomized to receive an SPF 10 or an SPF 30 sunscreen and were asked to complete daily records of their sun exposure during their summer holidays; 44 of them used a personal UVA and UVB dosimeter in a standard way during their sunbathing sessions. The median daily sunbathing duration was 2.4 hours in the SPF 10 group and 3.0 hours in the SPF 30 group (P = 0.054). The increase in daily sunbathing duration was paralleled by an increase in daily UVB exposure, but not by changes in UVA or UVB accumulated over all sunbathing sessions, or in daily UVA exposure. Of all participants, those who used the SPF 30 sunscreen and had no sunburn spent the highest number of hours in sunbathing activities. Differences between the two SPF groups in total number of sunbathing hours, daily sunbathing duration, and daily UVB exposure were largest among participants without sunburn during holidays. Among those with sunburn, the differences between the two groups tended to diminish. In conclusion, sunscreens used during sunbathing tended to increase the duration of exposure to doses of ultraviolet radiation below the sunburn threshold. © 2000 Cancer Research Campaign PMID:11027441

  11. Data decomposition method for parallel polygon rasterization considering load balancing

    NASA Astrophysics Data System (ADS)

    Zhou, Chen; Chen, Zhenjie; Liu, Yongxue; Li, Feixue; Cheng, Liang; Zhu, A.-xing; Li, Manchun

    2015-12-01

    It is essential to adopt parallel computing technology to rapidly rasterize massive polygon data. In parallel rasterization, it is difficult to design an effective data decomposition method. Conventional methods ignore load balancing of polygon complexity in parallel rasterization and thus fail to achieve high parallel efficiency. In this paper, a novel data decomposition method based on polygon complexity (DMPC) is proposed. First, four factors that possibly affect the rasterization efficiency were investigated. Then, a metric represented by the boundary number and raster pixel number in the minimum bounding rectangle was developed to calculate the complexity of each polygon. Using this metric, polygons were rationally allocated according to the polygon complexity, and each process could achieve balanced loads of polygon complexity. To validate the efficiency of DMPC, it was used to parallelize different polygon rasterization algorithms and tested on different datasets. Experimental results showed that DMPC could effectively parallelize polygon rasterization algorithms. Furthermore, the implemented parallel algorithms with DMPC could achieve good speedup ratios of at least 15.69 and generally outperformed conventional decomposition methods in terms of parallel efficiency and load balancing. In addition, the results showed that DMPC exhibited consistently better performance for different spatial distributions of polygons.
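The allocation idea can be sketched as a greedy longest-processing-time assignment over a per-polygon complexity score. This is a simplified stand-in for the paper's DMPC metric, which combines the boundary point count and the raster pixel count of the minimum bounding rectangle; here each polygon is just assumed to be a `(boundary_points, mbr_pixels)` pair and the product serves as its complexity.

```python
def decompose_by_complexity(polygons, n_procs):
    """Assign polygons to processes so that total complexity, not just
    polygon count, is balanced: sort by complexity descending, then give
    each polygon to the currently least-loaded process."""
    complexity = lambda p: p[0] * p[1]   # boundary points x MBR pixels
    buckets = [[] for _ in range(n_procs)]
    loads = [0] * n_procs
    for poly in sorted(polygons, key=complexity, reverse=True):
        k = loads.index(min(loads))      # least-loaded process so far
        buckets[k].append(poly)
        loads[k] += complexity(poly)
    return buckets, loads

polys = [(10, 100), (20, 50), (5, 40), (8, 25), (30, 10), (4, 5)]
buckets, loads = decompose_by_complexity(polys, 2)
```

Splitting by count alone could put all the complex polygons on one process; weighting by complexity is what keeps the per-process rasterization times comparable.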

  12. Pelvic floor muscle training versus watchful waiting or pessary treatment for pelvic organ prolapse (POPPS): design and participant baseline characteristics of two parallel pragmatic randomized controlled trials in primary care.

    PubMed

    Wiegersma, Marian; Panman, Chantal M C R; Kollen, Boudewijn J; Vermeulen, Karin M; Schram, Aaltje J; Messelink, Embert J; Berger, Marjolein Y; Lisman-Van Leeuwen, Yvonne; Dekker, Janny H

    2014-02-01

    Pelvic floor muscle training (PFMT) and pessaries are commonly used in the conservative treatment of pelvic organ prolapse (POP). Because there is a lack of evidence regarding the optimal choice between these two interventions, we designed the "Pelvic Organ prolapse in primary care: effects of Pelvic floor muscle training and Pessary treatment Study" (POPPS). POPPS consists of two parallel, open-label, randomized controlled trials performed in primary care in women aged ≥55 years, recruited through a postal questionnaire. In POPPS trial 1, women with mild POP receive either PFMT or watchful waiting. In POPPS trial 2, women with advanced POP receive either PFMT or pessary treatment. Patient recruitment started in 2009 and finished in December 2012. The primary outcome of both POPPS trials is improvement in POP-related symptoms. Secondary outcomes are quality of life, sexual function, POP-Q stage, pelvic floor muscle function, post-void residual volume, patients' perception of improvement, and costs. All outcomes are measured 3, 12, and 24 months after the start of treatment. Cost-effectiveness will be calculated based on societal costs, using the PFDI-20 and the EQ-5D as outcomes. In this paper, the POPPS design, the challenges encountered and our solutions, and participant baseline characteristics are presented. For both trials, the target numbers of patients in each treatment group were achieved, giving this study sufficient statistical power. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  13. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, because practical problems often involve both a large number of measurements and numerous model parameters, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multicore computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate to large-scale problems.
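
    The subspace-recycling idea can be sketched with Golub-Kahan bidiagonalization, a standard Krylov projection for least-squares problems (the paper's Julia/MADS implementation is not shown here, and all names below are illustrative): the subspace is built once from the Jacobian, and every additional damping parameter then costs only a small dense solve.

```python
import numpy as np

def golub_kahan(J, b, k):
    """k steps of Golub-Kahan bidiagonalization, giving J @ V = U @ B
    with orthonormal U (m x (k+1)), V (n x k) and lower-bidiagonal B."""
    m, n = J.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    B = np.zeros((k + 1, k))
    beta = np.linalg.norm(b)
    U[:, 0] = b / beta
    v = J.T @ U[:, 0]
    for i in range(k):
        alpha = np.linalg.norm(v)
        V[:, i] = v / alpha
        B[i, i] = alpha
        u = J @ V[:, i] - alpha * U[:, i]
        gamma = np.linalg.norm(u)
        B[i + 1, i] = gamma
        U[:, i + 1] = u / gamma
        if i + 1 < k:
            v = J.T @ U[:, i + 1] - gamma * V[:, i]
    return U, B, V, beta

def lm_steps(J, r, lambdas, k):
    """Levenberg-Marquardt steps (J'J + lam*I) d = -J' r for several
    damping parameters lam, recycling one Krylov subspace for all."""
    _, B, V, beta = golub_kahan(J, -r, k)
    e1 = np.zeros(k + 1)
    e1[0] = beta
    steps = []
    for lam in lambdas:
        # small (2k+1) x k damped problem in the projected space
        A = np.vstack([B, np.sqrt(lam) * np.eye(k)])
        rhs = np.concatenate([e1, np.zeros(k)])
        y, *_ = np.linalg.lstsq(A, rhs, rcond=None)
        steps.append(V @ y)
    return steps
```

    With k much smaller than the number of parameters, trying many damping parameters per iteration becomes cheap, which is one source of the kind of speed-up the abstract reports.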

  14. Immediate versus delayed loading of strategic mini dental implants for the stabilization of partial removable dental prostheses: a patient cluster randomized, parallel-group 3-year trial.

    PubMed

    Mundt, Torsten; Al Jaghsi, Ahmad; Schwahn, Bernd; Hilgert, Janina; Lucas, Christian; Biffar, Reiner; Schwahn, Christian; Heinemann, Friedhelm

    2016-07-30

    Acceptable short-term survival rates (>90 %) of mini-implants (diameter < 3.0 mm) are only documented for mandibular overdentures. Sound data for mini-implants as strategic abutments for a better retention of partial removable dental prosthesis (PRDP) are not available. The purpose of this study is to test the hypothesis that immediately loaded mini-implants show more bone loss and less success than strategic mini-implants with delayed loading. In this four-center (one university hospital, three dental practices in Germany), parallel-group, controlled clinical trial, which is cluster randomized on patient level, a total of 80 partially edentulous patients with an unfavourable number and distribution of remaining abutment teeth in at least one jaw will receive supplementary mini-implants to stabilize their PRDP. The mini-implants are loaded either immediately after implant placement (test group) or after a delay of four months (control group). Follow-up of the patients will be performed for 36 months. The primary outcome is the radiographic bone level changes at implants. The secondary outcome is the implant success as a composite variable. Tertiary outcomes include clinical, subjective (quality of life, satisfaction, chewing ability) and dental or technical complications. Strategic implants under an existing PRDP are only documented for standard-diameter implants. Mini-implants could be a minimally invasive and low-cost solution for this treatment modality. The trial is registered at Deutsches Register Klinischer Studien (German register of clinical trials) under DRKS-ID: DRKS00007589 ( www.germanctr.de ) on January 13th, 2015.

  15. Very fast motion planning for highly dexterous-articulated robots

    NASA Technical Reports Server (NTRS)

    Challou, Daniel J.; Gini, Maria; Kumar, Vipin

    1994-01-01

    Due to the inherent danger of space exploration, the need for greater use of teleoperated and autonomous robotic systems in space-based applications has long been apparent. Autonomous and semi-autonomous robotic devices have been proposed for carrying out routine functions associated with scientific experiments aboard the shuttle and space station. Finally, research into the use of such devices for planetary exploration continues. To accomplish their assigned tasks, all such autonomous and semi-autonomous devices will require the ability to move themselves through space without hitting themselves or the objects which surround them. In space it is important to execute the necessary motions correctly when they are first attempted because repositioning is expensive in terms of both time and resources (e.g., fuel). Finally, such devices will have to function in a variety of different environments. Given these constraints, a means for fast motion planning to insure the correct movement of robotic devices would be ideal. Unfortunately, motion planning algorithms are rarely used in practice because of their computational complexity. Fast methods have been developed for detecting imminent collisions, but the more general problem of motion planning remains computationally intractable. However, in this paper we show how the use of multicomputers and appropriate parallel algorithms can substantially reduce the time required to synthesize paths for dexterous articulated robots with a large number of joints. We have developed a parallel formulation of the Randomized Path Planner proposed by Barraquand and Latombe. We have shown that our parallel formulation is capable of formulating plans in a few seconds or less on various parallel architectures including: the nCUBE2 multicomputer with up to 1024 processors (nCUBE2 is a registered trademark of the nCUBE corporation), and a network of workstations.
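
    The parallel formulation can be illustrated with a toy grid-world sketch (this is not Barraquand and Latombe's actual planner, and all names are illustrative): several randomized searches with statistically independent seeds can run concurrently, and the first or shortest resulting path is kept. Here the "parallel" searches run sequentially for simplicity.

```python
import heapq
import random

def randomized_best_first(grid, start, goal, seed):
    """Randomized greedy best-first search on a 4-connected grid.
    grid[y][x] == 1 marks an obstacle. Random tie-breaking noise plays
    the role of the planner's random moves; independent seeds give
    independent searches that could run on separate processors."""
    rng = random.Random(seed)
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start) + rng.random(), start, [start])]
    seen = {start}
    while frontier:
        _, (x, y), path = heapq.heappop(frontier)
        if (x, y) == goal:
            return path
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < len(grid[0]) and 0 <= ny < len(grid) \
               and grid[ny][nx] == 0 and (nx, ny) not in seen:
                seen.add((nx, ny))
                heapq.heappush(frontier, (h((nx, ny)) + rng.random(),
                                          (nx, ny), path + [(nx, ny)]))
    return None  # no path exists

def parallel_plan(grid, start, goal, seeds):
    """Mimic the parallel formulation: independent randomized searches
    (run sequentially here); keep the shortest path found."""
    paths = [randomized_best_first(grid, start, goal, s) for s in seeds]
    paths = [p for p in paths if p is not None]
    return min(paths, key=len) if paths else None
```
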

  16. Efficacy and safety of bilastine in Japanese patients with perennial allergic rhinitis: A multicenter, randomized, double-blind, placebo-controlled, parallel-group phase III study.

    PubMed

    Okubo, Kimihiro; Gotoh, Minoru; Asako, Mikiya; Nomura, Yasuyuki; Togawa, Michinori; Saito, Akihiro; Honda, Takayuki; Ohashi, Yoshihiro

    2017-01-01

    Bilastine, a novel non-sedating second-generation H1 antihistamine, has been approved in most European countries since 2010. This study aimed to evaluate the superiority of bilastine over placebo in Japanese patients with perennial allergic rhinitis (PAR). This randomized, double-blind, placebo-controlled, parallel-group, phase III study (trial registration number JapicCTI-142600) evaluated the effect of a 2-week treatment period with bilastine (20 mg once daily), fexofenadine (60 mg twice daily), or a matched placebo (double dummy) in patients with PAR. All patients were instructed to record individual nasal and ocular symptoms in diaries daily. The primary endpoint was the mean change in total nasal symptom scores (TNSS) from baseline to Week 2 (Days 10-13). A total of 765 patients were randomly allocated to receive bilastine, fexofenadine, or placebo (256, 254, and 255 patients, respectively). The mean change in TNSS from baseline at Week 2 was significantly decreased by bilastine (-0.98) compared to placebo (-0.63, P = 0.023). Bilastine and fexofenadine showed no significant difference in the primary endpoint. However, the mean change in TNSS from baseline on Day 1 showed a significantly greater decrease with bilastine (-0.99) than with placebo (-0.28, P < 0.001) or fexofenadine (-0.62, P = 0.032). The active drugs also improved instantaneous TNSS 1 h after the first and before the second drug administration on Day 1 (P < 0.05). The study drugs were well tolerated. After the 2-week treatment period, bilastine 20 mg once daily was effective and tolerable in Japanese patients with PAR, and exhibited a rapid onset of action. Copyright © 2016 Japanese Society of Allergology. Production and hosting by Elsevier B.V. All rights reserved.

  17. Sequential or parallel decomposed processing of two-digit numbers? Evidence from eye-tracking.

    PubMed

    Moeller, Korbinian; Fischer, Martin H; Nuerk, Hans-Christoph; Willmes, Klaus

    2009-02-01

    While reaction time data have shown that decomposed processing of two-digit numbers occurs, there is little evidence about how decomposed processing functions. Poltrock and Schwartz (1984) argued that multi-digit numbers are compared in a sequential digit-by-digit fashion starting at the leftmost digit pair. In contrast, Nuerk and Willmes (2005) favoured parallel processing of the digits constituting a number. These models (i.e., sequential decomposition, parallel decomposition) make different predictions regarding the fixation pattern in a two-digit number magnitude comparison task and can therefore be differentiated by eye fixation data. We tested these models by evaluating participants' eye fixation behaviour while selecting the larger of two numbers. The stimulus set consisted of within-decade comparisons (e.g., 53_57) and between-decade comparisons (e.g., 42_57). The between-decade comparisons were further divided into compatible and incompatible trials (cf. Nuerk, Weger, & Willmes, 2001) and trials with different decade and unit distances. The observed fixation pattern implies that the comparison of two-digit numbers is not executed by sequentially comparing decade and unit digits as proposed by Poltrock and Schwartz (1984) but rather in a decomposed but parallel fashion. Moreover, the present fixation data provide first evidence that digit processing in multi-digit numbers is not a pure bottom-up effect, but is also influenced by top-down factors. Finally, implications for multi-digit number processing beyond the range of two-digit numbers are discussed.

  18. TU-AB-BRC-12: Optimized Parallel Monte Carlo Dose Calculations for Secondary MU Checks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    French, S; Nazareth, D; Bellor, M

    Purpose: Secondary MU checks are an important tool used during a physics review of a treatment plan. Commercial software packages offer varying degrees of theoretical dose calculation accuracy, depending on the modality involved. Dose calculations of VMAT plans are especially prone to error due to the large approximations involved. Monte Carlo (MC) methods are not commonly used due to their long run times. We investigated two methods to increase the computational efficiency of MC dose simulations with the BEAMnrc code. Distributed computing resources, along with optimized code compilation, will allow for accurate and efficient VMAT dose calculations. Methods: The BEAMnrc package was installed on a high performance computing cluster accessible to our clinic. MATLAB and PYTHON scripts were developed to convert a clinical VMAT DICOM plan into BEAMnrc input files. The BEAMnrc installation was optimized by running the VMAT simulations through profiling tools which indicated the behavior of the constituent routines in the code, e.g. the bremsstrahlung splitting routine and the specified random number generator. This information aided in determining the most efficient parallel compilation configuration for the specific CPUs available on our cluster, resulting in the fastest VMAT simulation times. Our method was evaluated with calculations involving 10^8-10^9 particle histories, which are sufficient to verify patient dose using VMAT. Results: Parallelization allowed the calculation of patient dose on the order of 10-15 hours with 100 parallel jobs. Due to the compiler optimization process, further speed increases of 23% were achieved when compared with the open-source compiler BEAMnrc packages. Conclusion: Analysis of the BEAMnrc code allowed us to optimize the compiler configuration for VMAT dose calculations. In future work, the optimized MC code, in conjunction with the parallel processing capabilities of BEAMnrc, will be applied to provide accurate and efficient secondary MU checks.
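
    The history-splitting strategy behind such parallel MC jobs can be sketched with a toy photon-attenuation problem (this is not BEAMnrc, and all names are illustrative): the total number of particle histories is divided among jobs, each job draws from a statistically independent random number stream, and the tallies are combined at the end.

```python
import numpy as np

def transmission_job(seed, histories, mu=0.2, depth=5.0):
    """One 'job': track `histories` photons through a slab of thickness
    `depth` with attenuation coefficient `mu`; count those that exit."""
    rng = np.random.default_rng(seed)
    free_path = rng.exponential(1.0 / mu, size=histories)
    return np.count_nonzero(free_path > depth)

def parallel_transmission(total_histories, n_jobs, mu=0.2, depth=5.0):
    """Split the histories across jobs with independent RNG streams
    (SeedSequence.spawn); on a cluster each job would run as a
    separate process, and only the integer tallies are combined."""
    streams = np.random.SeedSequence(42).spawn(n_jobs)
    per_job = total_histories // n_jobs
    hits = sum(transmission_job(s, per_job, mu, depth) for s in streams)
    return hits / (per_job * n_jobs)
```

    Because each stream is independent, the combined estimate is statistically equivalent to a single serial run, which is the property that makes history-splitting safe.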

  19. Effectiveness of a Web-Based Intervention to Reduce Alcohol Consumption among French Hazardous Drinkers: A Randomized Controlled Trial

    ERIC Educational Resources Information Center

    Guillemont, Juliette; Cogordan, Chloé; Nalpas, Bertrand; Nguyen-Thanh, Viêt; Richard, Jean-Baptiste; Arwidson, Pierre

    2017-01-01

    This study aims to evaluate the effectiveness of a web-based intervention to reduce alcohol consumption among hazardous drinkers. A two-group parallel randomized controlled trial was conducted among adults identified as hazardous drinkers according to the Alcohol Use Disorders Identification Test. The intervention delivers personalized normative…

  20. Supervised Home Training of Dialogue Skills in Chronic Aphasia: A Randomized Parallel Group Study

    ERIC Educational Resources Information Center

    Nobis-Bosch, Ruth; Springer, Luise; Radermacher, Irmgard; Huber, Walter

    2011-01-01

    Purpose: The aim of this study was to prove the efficacy of supervised self-training for individuals with aphasia. Linguistic and communicative performance in structured dialogues represented the main study parameters. Method: In a cross-over design for randomized matched pairs, 18 individuals with chronic aphasia were examined during 12 weeks of…

  1. A Randomized Controlled Trial of Trauma-Focused Cognitive Behavioral Therapy for Sexually Exploited, War-Affected Congolese Girls

    ERIC Educational Resources Information Center

    O'Callaghan, Paul; McMullen, John; Shannon, Ciaran; Rafferty, Harry; Black, Alastair

    2013-01-01

    Objective: To assess the efficacy of trauma-focused cognitive behavioral therapy (TF-CBT) delivered by nonclinical facilitators in reducing posttraumatic stress, depression, and anxiety and conduct problems and increasing prosocial behavior in a group of war-affected, sexually exploited girls in a single-blind, parallel-design, randomized,…

  2. A Randomized trial of an Asthma Internet Self-management Intervention (RAISIN): study protocol for a randomized controlled trial.

    PubMed

    Morrison, Deborah; Wyke, Sally; Thomson, Neil C; McConnachie, Alex; Agur, Karolina; Saunderson, Kathryn; Chaudhuri, Rekha; Mair, Frances S

    2014-05-24

    The financial costs associated with asthma care continue to increase while care remains suboptimal. Promoting optimal self-management, including the use of asthma action plans, along with regular health professional review has been shown to be an effective strategy and is recommended in asthma guidelines internationally. Despite evidence of benefit, guided self-management remains underused, however the potential for online resources to promote self-management behaviors is gaining increasing recognition. The aim of this paper is to describe the protocol for a pilot evaluation of a website 'Living well with asthma' which has been developed with the aim of promoting self-management behaviors shown to improve outcomes. The study is a parallel randomized controlled trial, where adults with asthma are randomly assigned to either access to the website for 12 weeks, or usual asthma care for 12 weeks (followed by access to the website if desired). Individuals are included if they are over 16-years-old, have a diagnosis of asthma with an Asthma Control Questionnaire (ACQ) score of greater than, or equal to 1, and have access to the internet. Primary outcomes for this evaluation include recruitment and retention rates, changes at 12 weeks from baseline for both ACQ and Asthma Quality of Life Questionnaire (AQLQ) scores, and quantitative data describing website usage (number of times logged on, length of time logged on, number of times individual pages looked at, and for how long). Secondary outcomes include clinical outcomes (medication use, health services use, lung function) and patient reported outcomes (including adherence, patient activation measures, and health status). Piloting of complex interventions is considered best practice and will maximise the potential of any future large-scale randomized controlled trial to successfully recruit and be able to report on necessary outcomes. 
Here we will provide results across a range of outcomes which will provide estimates of efficacy to inform the design of a future full-scale randomized controlled trial of the 'Living well with asthma' website. This trial is registered with Current Controlled Trials ISRCTN78556552 on 18/06/13.

  3. An intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces.

    PubMed

    Ying, Xiang; Xin, Shi-Qing; Sun, Qian; He, Ying

    2013-09-01

    Poisson disk sampling has excellent spatial and spectral properties, and plays an important role in a variety of visual computing applications. Although many promising algorithms have been proposed for multidimensional sampling in Euclidean space, very few studies have been reported with regard to the problem of generating Poisson disks on surfaces due to the complicated nature of the surface. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. In sharp contrast to the conventional parallel approaches, our method neither partitions the given surface into small patches nor uses any spatial data structure to maintain the voids in the sampling domain. Instead, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. Our algorithm guarantees that the generated Poisson disks are uniformly and randomly distributed without bias. It is worth noting that our method is intrinsic and independent of the embedding space. This intrinsic feature allows us to generate Poisson disk patterns on arbitrary surfaces in R^n. To our knowledge, this is the first intrinsic, parallel, and accurate algorithm for surface Poisson disk sampling. Furthermore, by manipulating the spatially varying density function, we can obtain adaptive sampling easily.
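
    The priority rule can be sketched in a toy planar version (the paper works intrinsically on surfaces; this sketch is Euclidean, and all names are illustrative): every candidate gets a random, unique priority, and in each round all candidates that out-rank every conflicting live candidate are accepted at once, so conflicts are resolved with priorities alone, without a shared spatial data structure.

```python
import numpy as np

def priority_poisson_disk(candidates, r, seed=0):
    """Accept a maximal subset of `candidates` (N x 2 array) such that
    no two accepted points are closer than r, using random priorities
    for conflict resolution."""
    rng = np.random.default_rng(seed)
    pri = rng.random(len(candidates))
    alive = np.ones(len(candidates), dtype=bool)
    accepted = []
    while alive.any():
        idx = np.flatnonzero(alive)
        pts = candidates[idx]
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
        conflict = d < r
        np.fill_diagonal(conflict, False)
        # accept a candidate iff it has the top priority in its conflict
        # neighborhood (each check is independent -> parallelizable)
        wins = np.array([pri[i] > pri[idx[conflict[a]]].max(initial=-1.0)
                         for a, i in enumerate(idx)])
        winners = idx[wins]
        accepted.extend(winners.tolist())
        # retire winners and every live candidate within r of a winner
        near = (np.linalg.norm(pts[:, None, :]
                               - candidates[winners][None, :, :],
                               axis=2) < r).any(axis=1)
        alive[idx[near]] = False
    return candidates[np.array(accepted)]
```

    Termination is guaranteed because the highest-priority live candidate always wins its round; two same-round winners can never conflict, so the minimum-distance property holds by construction.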

  4. SOP: parallel surrogate global optimization with Pareto center selection for computationally expensive single objective problems

    DOE PAGES

    Krityakierne, Tipaluck; Akhtar, Taimoor; Shoemaker, Christine A.

    2016-02-02

    This paper presents a parallel surrogate-based global optimization method for computationally expensive objective functions that is more effective for larger numbers of processors. To reach this goal, we integrated concepts from multi-objective optimization and tabu search into single-objective surrogate optimization. Our proposed derivative-free algorithm, called SOP, uses non-dominated sorting of points for which the expensive function has been previously evaluated. The two objectives are the expensive function value of the point and the minimum distance of the point to previously evaluated points. Based on the results of non-dominated sorting, P points from the sorted fronts are selected as centers from which many candidate points are generated by random perturbations. Based on surrogate approximation, the best candidate point is subsequently selected for expensive evaluation for each of the P centers, with simultaneous computation on P processors. Centers that previously did not generate good solutions are tabu with a given tenure. We show almost sure convergence of this algorithm under some conditions. The performance of SOP is compared with two RBF based methods. The test results show that SOP is an efficient method that can reduce time required to find a good near optimal solution. In a number of cases the efficiency of SOP is so good that SOP with 8 processors found an accurate answer in less wall-clock time than the other algorithms did with 32 processors.
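
    The two-objective non-dominated sorting at the heart of the center selection can be sketched as follows (a minimal illustration, not the SOP implementation; all names are assumptions): minimize the expensive function value while maximizing the minimum distance to previously evaluated points, trading exploitation against exploration.

```python
import numpy as np

def min_distances(X):
    """Minimum distance from each evaluated point to any other one."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1)

def nondominated_front(f, dist):
    """Indices of points not dominated under the two objectives:
    minimize the expensive function value f (exploitation) and
    maximize the minimum distance dist (exploration)."""
    n = len(f)
    front = []
    for i in range(n):
        dominated = any(
            (f[j] <= f[i] and dist[j] >= dist[i]) and
            (f[j] < f[i] or dist[j] > dist[i])
            for j in range(n))
        if not dominated:
            front.append(i)
    return front
```

    In SOP the P centers are then drawn from the sorted fronts, each center spawning its own batch of randomly perturbed candidates for one processor.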

  5. Optimal parallel solution of sparse triangular systems

    NASA Technical Reports Server (NTRS)

    Alvarado, Fernando L.; Schreiber, Robert

    1990-01-01

    A method for the parallel solution of triangular sets of equations is described that is appropriate when there are many right-hand sides. By preprocessing, the method can reduce the number of parallel steps required to solve Lx = b compared to parallel forward or backsolve. Applications are to iterative solvers with triangular preconditioners, to structural analysis, or to power systems applications, where there may be many right-hand sides (not all available a priori). The inverse of L is represented as a product of sparse triangular factors. The problem is to find a factored representation of this inverse of L with the smallest number of factors (or partitions), subject to the requirement that no new nonzero elements be created in the formation of these inverse factors. A method from an earlier reference is shown to solve this problem. This method is improved upon by constructing a permutation of the rows and columns of L that preserves triangularity and allows for the best possible such partition. A number of practical examples and algorithmic details are presented. The parallelism attainable is illustrated by means of elimination trees and clique trees.
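
    The factored-inverse idea can be sketched with a dense toy version that uses dependency levels as one valid (not necessarily minimal) no-fill partition (this is an illustrative sketch, not the paper's optimal partitioning; all names are assumptions).

```python
import numpy as np

def row_levels(M):
    """Dependency level of each row of a lower-triangular matrix; rows
    in the same level depend only on rows from strictly lower levels."""
    n = M.shape[0]
    lev = np.zeros(n, dtype=int)
    for i in range(n):
        deps = np.flatnonzero(M[i, :i])
        lev[i] = lev[deps].max() + 1 if deps.size else 0
    return lev

def inverse_factors(L):
    """Represent L^{-1} as D followed by one factor per level: each
    F_l = I + N_l collects the strictly-lower rows of one level, and
    since N_l @ N_l = 0, its inverse I - N_l creates no new nonzeros.
    Applying one factor is a single parallel step (a sparse mat-vec)."""
    n = L.shape[0]
    D = np.diag(1.0 / np.diag(L))
    M = D @ L                              # unit lower triangular
    lev = row_levels(M)
    invs = [D]
    for l in range(1, lev.max() + 1):
        N = np.zeros((n, n))
        for i in np.flatnonzero(lev == l):
            N[i, :i] = M[i, :i]
        invs.append(np.eye(n) - N)         # exact inverse of I + N
    return invs

def solve_lower(L, b):
    """x = L^{-1} b, applying the factored inverse left to right."""
    x = b.copy()
    for F in inverse_factors(L):
        x = F @ x
    return x
```

    The number of factors equals the number of dependency levels, so reordering L to flatten its elimination tree (as the paper does) directly reduces the number of parallel steps.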

  6. Butterbur root extract and music therapy in the prevention of childhood migraine: an explorative study.

    PubMed

    Oelkers-Ax, Rieke; Leins, Anne; Parzer, Peter; Hillecke, Thomas; Bolay, Hans V; Fischer, Jochen; Bender, Stephan; Hermanns, Uta; Resch, Franz

    2008-04-01

    Migraine is very common in school-aged children, but despite a number of pharmacological and non-pharmacological options for prophylaxis, randomized controlled evidence in children is small. Evidence-based prophylactic drugs may have considerable side effects. This study aimed to assess the efficacy of a butterbur root extract (Petadolex) and music therapy in primary school children with migraine. Prospective, randomized, partly double-blind, placebo-controlled, parallel-group trial. Following an 8-week baseline, patients were randomized and received either butterbur root extract (n=19), music therapy (n=20) or placebo (n=19) over 12 weeks. All participants received additionally headache education ("treatment as usual") from the baseline onwards. Reduction of headache frequency after treatment (8-week post-treatment) as well as 6 months later (8-week follow-up) was the efficacy variable. Data analysis of subjects completing the respective study phase showed that during post-treatment, only music therapy was superior to placebo (p=0.005), whereas in the follow-up period both music therapy and butterbur root extract were superior to placebo (p=0.018 and p=0.044, respectively). All groups showed a substantial reduction of attack frequency already during baseline. Butterbur root extract and music therapy might be superior to placebo and may represent promising treatment approaches in the prophylaxis of paediatric migraine.

  7. A randomized controlled trial of intranasal ketamine in migraine with prolonged aura.

    PubMed

    Afridi, Shazia K; Giffin, Nicola J; Kaube, Holger; Goadsby, Peter J

    2013-02-12

    The aim of our study was to test the hypothesis that ketamine would affect aura in a randomized controlled double-blind trial, and thus to provide direct evidence for the role of glutamatergic transmission in human aura. We performed a double-blinded, randomized parallel-group controlled study investigating the effect of 25 mg intranasal ketamine on migraine with prolonged aura in 30 migraineurs using 2 mg intranasal midazolam as an active control. Each subject recorded data from 3 episodes of migraine. Eighteen subjects completed the study. Ketamine reduced the severity (p = 0.032) but not duration of aura in this group, whereas midazolam had no effect. These data provide translational evidence for the potential importance of glutamatergic mechanisms in migraine aura and offer a pharmacologic parallel between animal experimental work on cortical spreading depression and the clinical problem. This study provides class III evidence that intranasal ketamine is effective in reducing aura severity in patients with migraine with prolonged aura.

  8. Random-subset fitting of digital holograms for fast three-dimensional particle tracking [invited].

    PubMed

    Dimiduk, Thomas G; Perry, Rebecca W; Fung, Jerome; Manoharan, Vinothan N

    2014-09-20

    Fitting scattering solutions to time series of digital holograms is a precise way to measure three-dimensional dynamics of microscale objects such as colloidal particles. However, this inverse-problem approach is computationally expensive. We show that the computational time can be reduced by an order of magnitude or more by fitting to a random subset of the pixels in a hologram. We demonstrate our algorithm on experimentally measured holograms of micrometer-scale colloidal particles, and we show that 20-fold increases in speed, relative to fitting full frames, can be attained while introducing errors in the particle positions of 10 nm or less. The method is straightforward to implement and works for any scattering model. It also enables a parallelization strategy wherein random-subset fitting is used to quickly determine initial guesses that are subsequently used to fit full frames in parallel. This approach may prove particularly useful for studying rare events, such as nucleation, that can only be captured with high frame rates over long times.
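
    The random-subset idea can be sketched with a 2-D Gaussian standing in for the scattering model (the abstract notes the method works for any model; this sketch assumes SciPy's least_squares, and all names are illustrative): the fit uses only a random fraction of the pixels, so each residual evaluation is proportionally cheaper.

```python
import numpy as np
from scipy.optimize import least_squares

def gauss2d(params, xx, yy):
    """A 2-D Gaussian spot standing in for the hologram model."""
    x0, y0, s, a = params
    return a * np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * s ** 2))

def fit_random_subset(image, frac, p0, seed=0):
    """Fit the model to a random fraction `frac` of the pixels; the
    cost of each residual evaluation drops in proportion to `frac`."""
    rng = np.random.default_rng(seed)
    yy, xx = np.indices(image.shape)
    keep = rng.random(image.size) < frac
    xs, ys, zs = xx.ravel()[keep], yy.ravel()[keep], image.ravel()[keep]
    return least_squares(lambda p: gauss2d(p, xs, ys) - zs, p0).x
```

    As in the paper's parallelization strategy, such a cheap subset fit could supply initial guesses for subsequent full-frame fits run in parallel.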

  9. The emergence of understanding in a computer model of concepts and analogy-making

    NASA Astrophysics Data System (ADS)

    Mitchell, Melanie; Hofstadter, Douglas R.

    1990-06-01

    This paper describes Copycat, a computer model of the mental mechanisms underlying the fluidity and adaptability of the human conceptual system in the context of analogy-making. Copycat creates analogies between idealized situations in a microworld that has been designed to capture and isolate many of the central issues of analogy-making. In Copycat, an understanding of the essence of a situation and the recognition of deep similarity between two superficially different situations emerge from the interaction of a large number of perceptual agents with an associative, overlapping, and context-sensitive network of concepts. Central features of the model are: a high degree of parallelism; competition and cooperation among a large number of small, locally acting agents that together create a global understanding of the situation at hand; and a computational temperature that measures the amount of perceptual organization as processing proceeds and that in turn controls the degree of randomness with which decisions are made in the system.
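
    The role of computational temperature can be illustrated with a temperature-scaled softmax choice rule (an illustrative stand-in, not Copycat's exact decision formula): high temperature makes choices nearly uniform (little perceptual organization), while low temperature lets the best-urgency option dominate.

```python
import numpy as np

def choose_probabilities(urgencies, T):
    """Probability of selecting each candidate action given its urgency
    and a temperature T; numerically stable softmax."""
    u = np.asarray(urgencies, dtype=float)
    w = np.exp((u - u.max()) / T)
    return w / w.sum()
```
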

  10. [Immunological and clinical study on therapeutic efficacy of inosine pranobex].

    PubMed

    Gołebiowska-Wawrzyniak, Maria; Markiewicz, Katarzyna; Kozar, Agata; Derentowicz, Piotr; Czerwińska-Kartowicz, Iwona; Jastrzebska-Janas, Krystyna; Wacławek, Jolanta; Wawrzyniak, Zbigniew M; Siwińska-Gołebiowska, Henryka

    2005-09-01

    Many studies in vitro and in vivo have shown immunomodulating and antiviral activities of inosine pranobex. The object of this research was to examine the potential beneficial effects of inosine pranobex (Groprinosin) on the immune system in children with cellular immunodeficiency as a prophylaxis of recurrent infections, mainly of viral origin. 50 mg/kg b.w/day of inosine pranobex in divided doses was given to a group of 30 children aged 3-15 years for 10 days in each of 3 consecutive months. Clinical and immunological investigations were done before and after the treatment. A statistically significant rise in CD3 T lymphocyte numbers (p = 0.02), including CD4 T lymphocytes (p = 0.02), as well as a statistically significant improvement of their function (p = 0.005), evaluated with the blastic transformation method, were found. These laboratory findings were parallel to clinical benefits. A control study was performed in a randomized group of children treated in the same way with garlic (Alliofil).

  11. The parallel algorithm for the 2D discrete wavelet transform

    NASA Astrophysics Data System (ADS)

    Barina, David; Najman, Pavel; Kleparnik, Petr; Kula, Michal; Zemcik, Pavel

    2018-04-01

    The discrete wavelet transform can be found at the heart of many image-processing algorithms. Until now, the transform on general-purpose processors (CPUs) was mostly computed using a separable lifting scheme. As the lifting scheme consists of a small number of operations, it is preferred for processing using single-core CPUs. However, considering a parallel processing using multi-core processors, this scheme is inappropriate due to a large number of steps. On such architectures, the number of steps corresponds to the number of points that represent the exchange of data. Consequently, these points often form a performance bottleneck. Our approach appropriately rearranges calculations inside the transform, and thereby reduces the number of steps. In other words, we propose a new scheme that is friendly to parallel environments. When evaluated on multi-core CPUs, our scheme consistently outperforms the original lifting scheme. The evaluation was performed on 61-core Intel Xeon Phi and 8-core Intel Xeon processors.
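
    As background, the classic lifting baseline can be sketched with a single-level CDF 5/3 transform in 1-D (the float variant, not the paper's rearranged scheme; boundary handling here is a simple mirror and all names are illustrative). Each predict and update step is fully data-parallel, but the barrier *between* the steps is exactly the kind of data-exchange point the proposed scheme reduces.

```python
import numpy as np

def dwt53(x):
    """Single-level CDF 5/3 lifting step on an even-length signal."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    right = np.roll(even, -1)
    right[-1] = even[-1]              # mirror boundary
    d = odd - 0.5 * (even + right)    # predict (parallel over i), then sync
    left = np.roll(d, 1)
    left[0] = d[0]
    s = even + 0.25 * (left + d)      # update  (parallel over i)
    return s, d

def idwt53(s, d):
    """Inverse transform: undo update, then undo predict."""
    left = np.roll(d, 1)
    left[0] = d[0]
    even = s - 0.25 * (left + d)
    right = np.roll(even, -1)
    right[-1] = even[-1]
    odd = d + 0.5 * (even + right)
    x = np.empty(2 * len(s))
    x[0::2], x[1::2] = even, odd
    return x
```
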

  12. Orchid: a novel management, annotation and machine learning framework for analyzing cancer mutations.

    PubMed

    Cario, Clinton L; Witte, John S

    2018-03-15

    As whole-genome tumor sequence and biological annotation datasets grow in size, number and content, there is an increasing basic science and clinical need for efficient and accurate data management and analysis software. With the emergence of increasingly sophisticated data stores, execution environments and machine learning algorithms, there is also a need for the integration of functionality across frameworks. We present orchid, a Python-based software package for the management, annotation and machine learning of cancer mutations. Building on technologies of parallel workflow execution, in-memory database storage and machine learning analytics, orchid efficiently handles millions of mutations and hundreds of features in an easy-to-use manner. We describe the implementation of orchid and demonstrate its ability to distinguish tissue of origin in 12 tumor types based on 339 features using a random forest classifier. Orchid and our annotated tumor mutation database are freely available at https://github.com/wittelab/orchid. Software is implemented in Python 2.7, and makes use of MySQL or MemSQL databases. Groovy 2.4.5 is optionally required for parallel workflow execution. JWitte@ucsf.edu. Supplementary data are available at Bioinformatics online.

  13. A Framework to Analyze the Performance of Load Balancing Schemes for Ensembles of Stochastic Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahn, Tae-Hyuk; Sandu, Adrian; Watson, Layne T.

    2015-08-01

    Ensembles of simulations are employed to estimate the statistics of possible future states of a system, and are widely used in important applications such as climate change and biological modeling. Ensembles of runs can naturally be executed in parallel. However, when the CPU times of individual simulations vary considerably, a simple strategy of assigning an equal number of tasks per processor can lead to serious work imbalances and low parallel efficiency. This paper presents a new probabilistic framework to analyze the performance of dynamic load balancing algorithms for ensembles of simulations where many tasks are mapped onto each processor, and where the individual compute times vary considerably among tasks. Four load balancing strategies are discussed: most-dividing, all-redistribution, random-polling, and neighbor-redistribution. Simulation results with a stochastic budding yeast cell cycle model are consistent with the theoretical analysis. Especially significant is the provable global decrease in load imbalance for the local rebalancing algorithms, which matters because the global rebalancing algorithms raise scalability concerns. The overall simulation time is reduced by up to 25%, and the total processor idle time by 85%.
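
    The contrast between static equal assignment and dynamic balancing can be sketched with two toy schedulers (illustrative stand-ins, not the paper's four strategies or its probabilistic framework; all names are assumptions). The dynamic version uses a central queue, a serial stand-in for random polling or redistribution, and satisfies Graham's list-scheduling bound makespan <= total/P + max task time.

```python
import heapq
import numpy as np

def static_round_robin(times, P):
    """Assign an equal number of tasks per processor, in arrival order;
    return the makespan (time until the last processor finishes)."""
    loads = np.zeros(P)
    for i, t in enumerate(times):
        loads[i % P] += t
    return loads.max()

def dynamic_greedy(times, P):
    """Central-queue dynamic balancing: each processor pulls the next
    task as soon as it becomes idle; return the makespan."""
    heap = [0.0] * P
    heapq.heapify(heap)
    for t in times:
        heapq.heappush(heap, heapq.heappop(heap) + t)
    return max(heap)
```

    With highly variable task times (e.g. lognormal, as is typical for stochastic cell-cycle simulations), the static makespan can far exceed the total/P lower bound, while the dynamic makespan provably stays within one maximum task time of it.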

  14. Multiplex single-molecule interaction profiling of DNA-barcoded proteins.

    PubMed

    Gu, Liangcai; Li, Chao; Aach, John; Hill, David E; Vidal, Marc; Church, George M

    2014-11-27

    In contrast with advances in massively parallel DNA sequencing, high-throughput protein analyses are often limited by ensemble measurements, individual analyte purification and hence compromised quality and cost-effectiveness. Single-molecule protein detection using optical methods is limited by the number of spectrally non-overlapping chromophores. Here we introduce a single-molecular-interaction sequencing (SMI-seq) technology for parallel protein interaction profiling leveraging single-molecule advantages. DNA barcodes are attached to proteins collectively via ribosome display or individually via enzymatic conjugation. Barcoded proteins are assayed en masse in aqueous solution and subsequently immobilized in a polyacrylamide thin film to construct a random single-molecule array, where barcoding DNAs are amplified into in situ polymerase colonies (polonies) and analysed by DNA sequencing. This method allows precise quantification of various proteins with a theoretical maximum array density of over one million polonies per square millimetre. Furthermore, protein interactions can be measured on the basis of the statistics of colocalized polonies arising from barcoding DNAs of interacting proteins. Two demanding applications, G-protein coupled receptor and antibody-binding profiling, are demonstrated. SMI-seq enables 'library versus library' screening in a one-pot assay, simultaneously interrogating molecular binding affinity and specificity.

  15. Multiplex single-molecule interaction profiling of DNA barcoded proteins

    PubMed Central

    Gu, Liangcai; Li, Chao; Aach, John; Hill, David E.; Vidal, Marc; Church, George M.

    2014-01-01

    In contrast with advances in massively parallel DNA sequencing, high-throughput protein analyses are often limited by ensemble measurements, individual analyte purification and hence compromised quality and cost-effectiveness. Single-molecule (SM) protein detection achieved using optical methods is limited by the number of spectrally nonoverlapping chromophores. Here, we introduce a single molecular interaction-sequencing (SMI-Seq) technology for parallel protein interaction profiling leveraging SM advantages. DNA barcodes are attached to proteins collectively via ribosome display or individually via enzymatic conjugation. Barcoded proteins are assayed en masse in aqueous solution and subsequently immobilized in a polyacrylamide (PAA) thin film to construct a random SM array, where barcoding DNAs are amplified into in situ polymerase colonies (polonies) and analyzed by DNA sequencing. This method allows precise quantification of various proteins with a theoretical maximum array density of over one million polonies per square millimeter. Furthermore, protein interactions can be measured based on the statistics of colocalized polonies arising from barcoding DNAs of interacting proteins. Two demanding applications, G-protein coupled receptor (GPCR) and antibody binding profiling, were demonstrated. SMI-Seq enables “library vs. library” screening in a one-pot assay, simultaneously interrogating molecular binding affinity and specificity. PMID:25252978

  16. Eigensolution of finite element problems in a completely connected parallel architecture

    NASA Technical Reports Server (NTRS)

    Akl, Fred A.; Morel, Michael R.

    1989-01-01

    A parallel algorithm is presented for the solution of the generalized eigenproblem in linear elastic finite element analysis, [K][phi] = [M][phi][omega], where [K] and [M] are of order N and [omega] is of order q. The parallel algorithm is based on a completely connected parallel architecture in which each processor is allowed to communicate with all other processors. The algorithm has been successfully implemented on a tightly coupled multiple-instruction-multiple-data (MIMD) parallel processing computer, the Cray X-MP. A finite element model is divided into m domains, each of which is assumed to contain n elements. Each domain is then assigned to a processor, or to a logical processor (task) if the number of domains exceeds the number of physical processors. The macro-tasking library routines are used in mapping each domain to a user task. Computational speed-up and efficiency are used to determine the effectiveness of the algorithm. The effects of the number of domains, the number of degrees-of-freedom located along the global fronts and the dimension of the subspace on the performance of the algorithm are investigated. For a 64-element rectangular plate, speed-ups of 1.86, 3.13, 3.18 and 3.61 are achieved on two, four, six and eight processors, respectively.
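    The speed-up and efficiency figures reported above follow the usual definitions S_p = T_1/T_p and E_p = S_p/p; a small sketch computing the efficiencies implied by the quoted speed-ups:

    ```python
    def efficiency(speedup, n_procs):
        """Parallel efficiency E_p = S_p / p, where speedup S_p = T_1 / T_p."""
        return speedup / n_procs

    # Efficiencies implied by the 64-element plate example's reported speed-ups.
    reported = {2: 1.86, 4: 3.13, 6: 3.18, 8: 3.61}
    eff = {p: round(efficiency(s, p), 2) for p, s in reported.items()}
    # Efficiency falls from 0.93 on two processors to 0.45 on eight, the
    # usual signature of growing inter-domain (global front) overhead.
    ```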

  17. A Multibody Formulation for Three Dimensional Brick Finite Element Based Parallel and Scalable Rotor Dynamic Analysis

    DTIC Science & Technology

    2010-05-01

    connections near the hub end, and containing up to 0.48 million degrees of freedom. The models are analyzed for scalability and timing for hover and... will enable the modeling of critical couplings that occur in hingeless and bearingless hubs with advanced flex structures. Second, it will enable the

  18. A Parallel Fast Sweeping Method for the Eikonal Equation

    NASA Astrophysics Data System (ADS)

    Baker, B.

    2017-12-01

    Recently, there has been an exciting emergence of probabilistic methods for travel time tomography. Unlike gradient-based optimization strategies, probabilistic tomographic methods are resistant to becoming trapped in a local minimum and provide a much better quantification of parameter resolution than, say, appealing to ray density or performing checkerboard reconstruction tests. The benefits associated with random sampling methods, however, are only realized by successive computation of predicted travel times in potentially strongly heterogeneous media. To this end, this abstract is concerned with expediting the solution of the Eikonal equation. While many Eikonal solvers use a fast marching method, the proposed solver will use the iterative fast sweeping method because the eight fixed sweep orderings in each iteration are natural targets for parallelization. To reduce the number of iterations and grid points required, the high-accuracy finite difference stencil of Nobel et al. (2014) is implemented. A directed acyclic graph (DAG) is created with a priori knowledge of the sweep ordering and finite difference stencil. By performing a topological sort of the DAG, sets of independent nodes are identified as candidates for concurrent updating. Additionally, the proposed solver will also address scalability during earthquake relocation, a necessary step in local and regional earthquake tomography and a barrier to extending probabilistic methods from active source to passive source applications, by introducing an asynchronous parallel forward solve phase for all receivers in the network. Synthetic examples using the SEG over-thrust model will be presented.
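    The independence the DAG exposes can be seen in the 2-D case: within one sweep ordering (say i and j both increasing), a node depends only on its already-updated neighbors, so every node on a given anti-diagonal can be updated concurrently. A minimal sketch of this standard grouping (not the author's exact DAG construction):

    ```python
    def sweep_levels(nx, ny):
        """Group nodes of an nx-by-ny grid by level i + j. Within one
        fast-sweeping pass with i and j both increasing, node (i, j) depends
        only on (i-1, j) and (i, j-1), so all nodes on one anti-diagonal
        (constant i + j) are independent and can be updated concurrently."""
        levels = [[] for _ in range(nx + ny - 1)]
        for i in range(nx):
            for j in range(ny):
                levels[i + j].append((i, j))
        return levels

    levels = sweep_levels(4, 3)
    # levels[0] holds the single corner node; deeper levels hold several
    # mutually independent nodes, the concurrency the solver exploits.
    ```

    The other seven sweep orderings yield the same level structure with the diagonals traversed in a different order, which is why each fixed ordering parallelizes the same way.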

  19. Message-passing-interface-based parallel FDTD investigation on the EM scattering from a 1-D rough sea surface using uniaxial perfectly matched layer absorbing boundary.

    PubMed

    Li, J; Guo, L-X; Zeng, H; Han, X-B

    2009-06-01

    A message-passing-interface (MPI)-based parallel finite-difference time-domain (FDTD) algorithm for the electromagnetic scattering from a 1-D randomly rough sea surface is presented. The uniaxial perfectly matched layer (UPML) medium is adopted for truncation of the FDTD lattices, in which the finite-difference equations can be used for the total computation domain by properly choosing the uniaxial parameters. This makes the parallel FDTD algorithm easier to implement. The parallel performance with different numbers of processors is illustrated for one sea surface realization, and the computation time of the parallel FDTD algorithm is dramatically reduced compared to a single-process implementation. Finally, some numerical results are shown, including the backscattering characteristics of the sea surface for different polarizations and the bistatic scattering from a sea surface at large incident angle and large wind speed.
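    The parallel FDTD idea rests on domain decomposition with halo exchange. The sketch below mimics this serially for a generic 1-D explicit stencil: each "rank" owns a slab of the grid and refreshes one ghost cell per side before every update, reproducing the undivided computation exactly. This is a generic illustration, not the paper's electromagnetic update:

    ```python
    def serial_steps(u, n_steps, c=0.25):
        """Reference: explicit stencil update on one undivided grid."""
        for _ in range(n_steps):
            u = [u[i] if i in (0, len(u) - 1)
                 else u[i] + c * (u[i - 1] - 2 * u[i] + u[i + 1])
                 for i in range(len(u))]
        return u

    def decomposed_steps(u, n_ranks, n_steps, c=0.25):
        """Same update with an MPI-style 1-D domain decomposition, executed
        serially: each 'rank' owns a contiguous slab and, before every step,
        copies one ghost value per side from its neighbours (standing in
        for MPI point-to-point halo exchange)."""
        n = len(u)
        bounds = [n * r // n_ranks for r in range(n_ranks + 1)]
        for _ in range(n_steps):
            new = list(u)
            for r in range(n_ranks):
                lo, hi = bounds[r], bounds[r + 1]
                left = [u[lo - 1]] if lo > 0 else []   # ghost from rank r-1
                right = [u[hi]] if hi < n else []      # ghost from rank r+1
                local = left + u[lo:hi] + right
                off = len(left)
                for i in range(lo, hi):
                    if 0 < i < n - 1:  # physical boundary values held fixed
                        k = i - lo + off
                        new[i] = local[k] + c * (local[k - 1] - 2 * local[k] + local[k + 1])
            u = new
        return u

    u0 = [0.0] * 16
    u0[8] = 1.0  # point disturbance
    # The decomposed run reproduces the undivided one exactly.
    same = serial_steps(u0, 5) == decomposed_steps(u0, 3, 5)
    ```

    Since each slab reads only its own cells plus the two ghosts, the per-step communication volume is constant per rank, which is why the measured computation time drops as processors are added.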

  20. An Overview of Kinematic and Calibration Models Using Internal/External Sensors or Constraints to Improve the Behavior of Spatial Parallel Mechanisms

    PubMed Central

    Majarena, Ana C.; Santolaria, Jorge; Samper, David; Aguilar, Juan J.

    2010-01-01

    This paper presents an overview of the literature on kinematic and calibration models of parallel mechanisms, the influence of sensors on mechanism accuracy, and parallel mechanisms used as sensors. The most relevant classifications to obtain and solve kinematic models and to identify geometric and non-geometric parameters in the calibration of parallel robots are discussed, examining the advantages and disadvantages of each method, presenting new trends and identifying unsolved problems. This overview attempts to answer, and to show the solutions developed by the most up-to-date research for, some of the most frequent questions that appear in the modelling of a parallel mechanism, such as how to measure, the number of sensors and necessary configurations, the type and influence of errors, or the number of necessary parameters. PMID:22163469

  1. Parallel consistent labeling algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samal, A.; Henderson, T.

    Mackworth and Freuder have analyzed the time complexity of several constraint satisfaction algorithms. Mohr and Henderson have given new algorithms, AC-4 and PC-3, for arc and path consistency, respectively, and have shown that the arc consistency algorithm is optimal in time complexity and of the same order space complexity as the earlier algorithms. In this paper, the authors give parallel algorithms for solving node and arc consistency. They show that any parallel algorithm for enforcing arc consistency in the worst case must have O(na) sequential steps, where n is the number of nodes and a is the number of labels per node. They give several parallel algorithms to do arc consistency, and it is shown that they all have optimal time complexity. The results of running the parallel algorithms on a BBN Butterfly multiprocessor are also presented.
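    The sequential arc-consistency computation that such parallel algorithms build on can be sketched with the classic AC-3 worklist algorithm (illustrative only; the AC-4 bookkeeping discussed above differs):

    ```python
    from collections import deque

    def ac3(domains, constraints):
        """AC-3: repeatedly revise arcs (x, y), pruning values of x that have
        no supporting value in y, until no domain changes.
        `constraints[(x, y)]` is a predicate over (vx, vy)."""
        arcs = deque(constraints)
        while arcs:
            x, y = arcs.popleft()
            ok = constraints[(x, y)]
            pruned = {vx for vx in domains[x]
                      if not any(ok(vx, vy) for vy in domains[y])}
            if pruned:
                domains[x] -= pruned
                # Re-examine arcs pointing at x, whose supports may be gone.
                arcs.extend((z, w) for (z, w) in constraints if w == x and z != y)
        return domains

    # Example: x < y < z, each over the label set {1, 2, 3}.
    doms = {v: {1, 2, 3} for v in "xyz"}
    lt, gt = (lambda a, b: a < b), (lambda a, b: a > b)
    cons = {("x", "y"): lt, ("y", "x"): gt, ("y", "z"): lt, ("z", "y"): gt}
    ac3(doms, cons)
    # Arc consistency alone pins the domains down to x=1, y=2, z=3.
    ```

    The parallel versions analyzed above distribute these arc revisions across processors, which is why the worst case still requires O(na) sequential steps.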

  2. Implementation of parallel moment equations in NIMROD

    NASA Astrophysics Data System (ADS)

    Lee, Hankyu Q.; Held, Eric D.; Ji, Jeong-Young

    2017-10-01

    As collisionality is low (the Knudsen number is large) in many plasma applications, kinetic effects become important, particularly in parallel dynamics for magnetized plasmas. Fluid models can capture some kinetic effects when integral parallel closures are adopted. The adiabatic and linear approximations are used in solving general moment equations to obtain the integral closures. In this work, we present an effort to incorporate non-adiabatic (time-dependent) and nonlinear effects into parallel closures. Instead of analytically solving the approximate moment system, we implement exact parallel moment equations in the NIMROD fluid code. The moment code is expected to provide a natural convergence scheme by increasing the number of moments. Work in collaboration with the PSI Center and supported by the U.S. DOE under Grant Nos. DE-SC0014033, DE-SC0016256, and DE-FG02-04ER54746.

  3. Monte Carlo investigation of thrust imbalance of solid rocket motor pairs

    NASA Technical Reports Server (NTRS)

    Sforzini, R. H.; Foster, W. A., Jr.

    1976-01-01

    The Monte Carlo method of statistical analysis is used to investigate the theoretical thrust imbalance of pairs of solid rocket motors (SRMs) firing in parallel. Sets of the significant variables are selected using a random sampling technique and the imbalance calculated for a large number of motor pairs using a simplified, but comprehensive, model of the internal ballistics. The treatment of burning surface geometry allows for the variations in the ovality and alignment of the motor case and mandrel as well as those arising from differences in the basic size dimensions and propellant properties. The analysis is used to predict the thrust-time characteristics of 130 randomly selected pairs of Titan IIIC SRMs. A statistical comparison of the results with test data for 20 pairs shows the theory underpredicts the standard deviation in maximum thrust imbalance by 20% with variability in burning times matched within 2%. The range in thrust imbalance of Space Shuttle type SRM pairs is also estimated using applicable tolerances and variabilities and a correction factor based on the Titan IIIC analysis.
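    The Monte Carlo procedure can be sketched as: draw each motor's parameters from their tolerance distributions, evaluate a thrust model, and accumulate imbalance statistics over many random pairs. The toy thrust proxy and variabilities below are invented for illustration and stand in for the paper's internal-ballistics model:

    ```python
    import random
    import statistics

    def sample_thrust(rng):
        """Toy thrust proxy for one motor: burn rate and throat area drawn
        from normal tolerance distributions (an illustrative stand-in for
        the paper's ballistics model, with invented variabilities)."""
        burn_rate = rng.gauss(1.0, 0.010)    # normalized, 1.0% variability
        throat_area = rng.gauss(1.0, 0.005)  # normalized, 0.5% variability
        return burn_rate ** 1.8 / throat_area

    def imbalance_stats(n_pairs, seed=42):
        """Draw random motor pairs and return the mean and standard deviation
        of the absolute thrust imbalance across the ensemble."""
        rng = random.Random(seed)
        imbalances = [abs(sample_thrust(rng) - sample_thrust(rng))
                      for _ in range(n_pairs)]
        return statistics.mean(imbalances), statistics.stdev(imbalances)

    mean_imb, sd_imb = imbalance_stats(130)  # 130 random pairs, as in the study
    ```

    The statistical comparison in the abstract is then a matter of comparing such sampled moments against the corresponding moments of the test data.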

  4. A randomized evaluation of a computer-based physician's workstation: design considerations and baseline results.

    PubMed Central

    Rotman, B. L.; Sullivan, A. N.; McDonald, T.; DeSmedt, P.; Goodnature, D.; Higgins, M.; Suermondt, H. J.; Young, C. Y.; Owens, D. K.

    1995-01-01

    We are performing a randomized, controlled trial of a Physician's Workstation (PWS), an ambulatory care information system developed for use in the General Medical Clinic (GMC) of the Palo Alto VA. Goals for the project include selecting appropriate outcome variables and developing a statistically powerful experimental design with a limited number of subjects. As PWS provides real-time drug-ordering advice, we retrospectively examined drug costs and drug-drug interactions in order to select outcome variables sensitive to our short-term intervention as well as to estimate the statistical efficiency of alternative design possibilities. Drug cost data revealed that the mean daily cost per physician per patient was 99.3 cents +/- 13.4 cents, with a range from $0.77 to $1.37. The rate of major interactions per prescription for each physician was 2.9% +/- 1%, with a range from 1.5% to 4.8%. Based on these baseline analyses, we selected a two-period parallel design for the evaluation, which maximized statistical power while minimizing sources of bias. PMID:8563376

  5. Remote Monitoring of Cardiac Implantable Electronic Devices (CIED)

    PubMed Central

    Zeitler, Emily P.; Piccini, Jonathan P.

    2016-01-01

    With increasing indications and access to cardiac implantable electronic devices (CIEDs) worldwide, the number of patients needing CIED follow-up continues to rise. In parallel, the technology available for managing these devices has advanced considerably. In this setting, remote monitoring (RM) has emerged as a complement to routine in-office care. Rigorous studies, randomized and otherwise, have demonstrated advantages to CIED patient management systems that incorporate RM, resulting in authoritative guidelines from relevant professional societies recommending RM for all eligible patients. In addition to clinical benefits, CIED management programs that include RM have been shown to be cost effective and associated with high patient satisfaction. Finally, RM programs hold promise for the future of CIED research in light of the massive data collected through RM databases converging with unprecedented computational capability. This review outlines the available data associated with clinical outcomes in patients managed with RM, with an emphasis on randomized trials; the impact of RM on patient satisfaction, cost-effectiveness and healthcare utilization; and possible future directions for the use of RM in clinical practice and research. PMID:27134007

  6. Self-correcting random number generator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humble, Travis S.; Pooser, Raphael C.

    2016-09-06

    A system and method for generating random numbers. The system may include a random number generator (RNG), such as a quantum random number generator (QRNG), configured to self-correct or adapt in order to substantially achieve randomness from the output of the RNG. By adapting, the RNG may generate a random number that may be considered random regardless of whether the random number itself is tested as such. As an example, the RNG may include components to monitor one or more characteristics of the RNG during operation, and may use the monitored characteristics as a basis for adapting, or self-correcting, to provide a random number according to one or more performance criteria.
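    A generic version of the monitor-and-adapt loop can be sketched as follows: track a running statistic of the raw output (here, the ones-fraction per window) and, when it drifts outside a tolerance, route that window through a corrective post-processor such as von Neumann debiasing. This is an illustrative sketch, not the patented system:

    ```python
    import random

    def von_neumann(bits):
        """Von Neumann debiasing: map non-overlapping pairs 01->0, 10->1,
        discard 00 and 11."""
        return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]

    def self_correcting_stream(raw_bits, window=1000, tol=0.02):
        """Monitor the ones-fraction over each window of raw RNG output; if
        it drifts outside 0.5 +/- tol, pass that window through a corrective
        post-processor (von Neumann debiasing) before emitting it."""
        out = []
        for i in range(0, len(raw_bits), window):
            chunk = raw_bits[i:i + window]
            ones = sum(chunk) / len(chunk)
            out.extend(chunk if abs(ones - 0.5) <= tol else von_neumann(chunk))
        return out

    rng = random.Random(1)
    biased = [1 if rng.random() < 0.6 else 0 for _ in range(10000)]  # 60% ones
    corrected = self_correcting_stream(biased)
    # The corrected stream's ones-fraction is pulled back toward 0.5,
    # at the cost of emitting fewer bits from the corrected windows.
    ```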

  7. Efficacy of orally administered prednisolone versus partial endodontic treatment on pain reduction in emergency care of acute irreversible pulpitis of mandibular molars: study protocol for a randomized controlled trial.

    PubMed

    Kérourédan, Olivia; Jallon, Léonard; Perez, Paul; Germain, Christine; Péli, Jean-François; Oriez, Dominique; Fricain, Jean-Christophe; Arrivé, Elise; Devillard, Raphaël

    2017-03-28

    Irreversible pulpitis is a highly painful inflammatory condition of the dental pulp which represents a common dental emergency. Recommended care is partial endodontic treatment. The dental literature reports major difficulties in achieving adequate analgesia to perform this emergency treatment, especially in the case of mandibular molars. In current practice, short-course, orally administered corticotherapy is used for the management of oral pain of inflammatory origin. The efficacy of intraosseous local steroid injections for irreversible pulpitis in mandibular molars has already been demonstrated but resulted in local comorbidities. Oral administration of short-course prednisolone is simple and safe, but its efficacy in managing pain caused by irreversible pulpitis has not yet been demonstrated. This trial aims to evaluate the noninferiority of short-course, orally administered corticotherapy versus partial endodontic treatment for the emergency care of irreversible pulpitis in mandibular molars. This study is a noninferiority, open-label, randomized controlled clinical trial conducted at the Bordeaux University Hospital. One hundred and twenty subjects will be randomized into two 1:1 parallel arms: the intervention arm will receive one oral dose of prednisolone (1 mg/kg) during the emergency visit, followed by one morning dose each day for 3 days, and the reference arm will receive partial endodontic treatment. Both groups will receive planned complete endodontic treatment 72 h after enrollment. The primary outcome is the proportion of patients with pain intensity below 5 on a numeric scale 24 h after the emergency visit. Secondary outcomes include comfort during care, the number of injected anesthetic cartridges when performing complete endodontic treatment, the number of analgesic drugs taken and the number of patients coming back for consultation after 72 h.
This randomized trial will assess the ability of short-term corticotherapy to reduce pain in irreversible pulpitis as a simple and rapid alternative to partial endodontic treatment and to enable planning of endodontic treatment in optimal analgesic conditions. ClinicalTrials.gov, identifier: NCT02629042 . Registered on 7 December 2015. (Version n°1.1 28 July 2015).

  8. Understanding decimal proportions: discrete representations, parallel access, and privileged processing of zero.

    PubMed

    Varma, Sashank; Karl, Stacy R

    2013-05-01

    Much of the research on mathematical cognition has focused on the numbers 1, 2, 3, 4, 5, 6, 7, 8, and 9, with considerably less attention paid to more abstract number classes. The current research investigated how people understand decimal proportions--rational numbers between 0 and 1 expressed in the place-value symbol system. The results demonstrate that proportions are represented as discrete structures and processed in parallel. There was a semantic interference effect: When understanding a proportion expression (e.g., "0.29"), both the correct proportion referent (e.g., 0.29) and the incorrect natural number referent (e.g., 29) corresponding to the visually similar natural number expression (e.g., "29") are accessed in parallel, and when these referents lead to conflicting judgments, performance slows. There was also a syntactic interference effect, generalizing the unit-decade compatibility effect for natural numbers: When comparing two proportions, their tenths and hundredths components are processed in parallel, and when the different components lead to conflicting judgments, performance slows. The results also reveal that zero decimals--proportions ending in zero--serve multiple cognitive functions, including eliminating semantic interference and speeding processing. The current research also extends the distance, semantic congruence, and SNARC effects from natural numbers to decimal proportions. These findings inform how people understand the place-value symbol system, and the mental implementation of mathematical symbol systems more generally. Copyright © 2013 Elsevier Inc. All rights reserved.

  9. Immediate versus early non-occlusal loading of dental implants placed flapless in partially edentulous patients: a 3-year randomized clinical trial.

    PubMed

    Merli, Mauro; Moscatelli, Marco; Mariotti, Giorgia; Piemontese, Matteo; Nieri, Michele

    2012-02-01

    To compare immediate versus early non-occlusal loading of dental implants placed flapless in a 3-year, parallel group, randomized clinical trial. The study was conducted in a private dental clinic between July 2005 and July 2010. Patients 18 years or older were randomized to receive implants for fixed partial dentures in cases of partial edentulism. The test group was represented by immediate non-occlusal implant loading, whereas the control group was represented by early non-occlusal implant loading. The outcome variables were implant failure, complications and radiographic bone level at implant sites 3 years after loading, measured from the implant-abutment junction to the most coronal point of bone-to-implant contact. Randomization was computer-generated with allocation concealment by opaque sequentially numbered sealed envelopes, and the measurer was blinded to group assignment. Sixty patients were randomized: 30 to the immediately loaded group and 30 to the early loaded group. Four patients dropped out; however, the data of all patients were included in the analysis. No implant failure occurred. Two complications occurred in the control group and one in the test group. The mean bone level at 3 years was 1.91 mm for test group and 1.59 mm for control group. The adjusted difference in bone level was 0.26 mm (CI 95% -0.08 to 0.59, p = 0.1232). The null hypothesis of no difference in failure rates, complications and bone level between implants that were loaded immediately or early at 3 years cannot be rejected in this randomized clinical trial. © 2011 John Wiley & Sons A/S.

  10. Effect of caffeine on SPECT myocardial perfusion imaging during regadenoson pharmacologic stress: rationale and design of a prospective, randomized, multicenter study.

    PubMed

    Tejani, Furqan H; Thompson, Randall C; Iskandrian, Ami E; McNutt, Bruce E; Franks, Billy

    2011-02-01

    Caffeine attenuates the coronary hyperemic response to adenosine by competitive A2A receptor blockade. This study aims to determine whether oral caffeine administration compromises diagnostic accuracy in patients undergoing vasodilator stress myocardial perfusion imaging (MPI) with regadenoson, a selective adenosine A2A agonist. This multicenter, randomized, double-blind, placebo-controlled, parallel-group study includes patients with suspected coronary artery disease who regularly consume caffeine. Each participant undergoes three SPECT MPI studies: a rest study on day 1 (MPI-1), a regadenoson stress study on day 3 (MPI-2), and a regadenoson stress study on day 5 with double-blind administration of oral caffeine 200 or 400 mg or placebo capsules (MPI-3; n = 90 per arm). Only participants with ≥ 1 reversible defect on the second MPI study undergo the subsequent stress MPI test. The primary endpoint is the difference in the number of reversible defects on the two stress tests using a 17-segment model. Pharmacokinetic/pharmacodynamic analyses will evaluate the effect of caffeine on the regadenoson exposure-response relationship. Safety will also be assessed. The results of this study will show whether the consumption of caffeine equivalent to 2-4 cups of coffee prior to an MPI study with regadenoson affects the diagnostic validity of stress testing (ClinicalTrials.gov number, NCT00826280).

  11. Comparison of intra-articular injections of hyaluronic acid and corticosteroid in the treatment of osteoarthritis of the hip in comparison with intra-articular injections of bupivacaine. Design of a prospective, randomized, controlled study with blinding of the patients and outcome assessors.

    PubMed

    Colen, Sascha; van den Bekerom, Michel P J; Bellemans, Johan; Mulier, Michiel

    2010-11-16

    Although intra-articular hyaluronic acid is well established as a treatment for osteoarthritis of the knee, its use in hip osteoarthritis is not based on large randomized controlled trials. There is a need for more rigorously designed studies on hip osteoarthritis treatment as this subject is still very much under debate. Randomized, controlled trial with a three-armed, parallel-group design. Approximately 315 patients complying with the inclusion and exclusion criteria will be randomized into one of the following treatment groups: infiltration of the hip joint with hyaluronic acid, with a corticosteroid or with 0.125% bupivacaine. The following outcome measure instruments will be assessed at baseline, i.e. before the intra-articular injection of one of the study products, and then again at six weeks, 3 and 6 months after the initial injection: pain (100 mm VAS), Harris Hip Score and HOOS, patient assessment of their clinical status (worse, stable or better than at the time of enrollment) and intake of pain rescue medication (number per week). In addition, patients will be asked if they have complications/adverse events. The six-month follow-up period for all patients will begin on the date the first injection is administered. This randomized, controlled, three-arm study will hopefully provide robust information on two of the intra-articular treatments used in hip osteoarthritis, in comparison to bupivacaine. NCT01079455.

  12. Safety and Efficacy of ABT-089 in Pediatric Attention-Deficit/Hyperactivity Disorder: Results from Two Randomized Placebo-Controlled Clinical Trials

    ERIC Educational Resources Information Center

    Wilens, Timothy E.; Gault, Laura M.; Childress, Ann; Kratochvil, Christopher J.; Bensman, Lindsey; Hall, Coleen M.; Olson, Evelyn; Robieson, Weining Z.; Garimella, Tushar S.; Abi-Saab, Walid M.; Apostol, George; Saltarelli, Mario D.

    2011-01-01

    Objective: To assess the safety and efficacy of ABT-089, a novel alpha[subscript 4]beta[subscript 2] neuronal nicotinic receptor partial agonist, vs. placebo in children with attention-deficit/hyperactivity disorder (ADHD). Method: Two multicenter, randomized, double-blind, placebo-controlled, parallel-group studies of children 6 through 12 years…

  13. The Acute Effect of Methylphenidate in Brazilian Male Children and Adolescents with ADHD: A Randomized Clinical Trial

    ERIC Educational Resources Information Center

    Szobot, C. M.; Ketzer, C.; Parente, M. A.; Biederman, J.; Rohde, L. A.

    2004-01-01

    Objective: To evaluate the acute efficacy of methylphenidate (MPH) in Brazilian male children and adolescents with ADHD. Method: In a 4-day, double-blind, placebo-controlled, randomized, fixed-dose escalation, parallel-group trial, 36 ADHD children and adolescents were allocated to two groups: MPH (n = 19) and placebo (n = 17). Participants were…

  14. Parallel-aware, dedicated job co-scheduling within/across symmetric multiprocessing nodes

    DOEpatents

    Jones, Terry R.; Watson, Pythagoras C.; Tuel, William; Brenner, Larry; Caffrey, Patrick; Fier, Jeffrey

    2010-10-05

    In a parallel computing environment comprising a network of SMP nodes each having at least one processor, a parallel-aware co-scheduling method and system for improving the performance and scalability of a dedicated parallel job having synchronizing collective operations. The method and system uses a global co-scheduler and an operating system kernel dispatcher adapted to coordinate interfering system and daemon activities on a node and across nodes to promote intra-node and inter-node overlap of said interfering system and daemon activities as well as intra-node and inter-node overlap of said synchronizing collective operations. In this manner, the impact of random short-lived interruptions, such as timer-decrement processing and periodic daemon activity, on synchronizing collective operations is minimized on large processor-count SPMD bulk-synchronous programming styles.

  15. Research on the adaptive optical control technology based on DSP

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaolu; Xue, Qiao; Zeng, Fa; Zhao, Junpu; Zheng, Kuixing; Su, Jingqin; Dai, Wanjun

    2018-02-01

    Adaptive optics is a real-time compensation technique, supported by a high-speed control system, for wavefront errors caused by atmospheric turbulence. However, the randomness and rapidity of atmospheric changes introduce great difficulties into the design of adaptive optical systems: the large number of complex real-time operations leads to large delays, which are a major obstacle. To solve this problem, hardware operation and a parallel processing strategy are proposed, and a high-speed adaptive optical control system based on a DSP is developed. A hardware counter is used to check the system. The results show that the system can complete one closed-loop control cycle in 7.1 ms, improving the control bandwidth of the adaptive optical system. Using this system, wavefront measurement and closed-loop experiments were carried out, with good results.

  16. Electron parallel closures for various ion charge numbers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ji, Jeong-Young, E-mail: j.ji@usu.edu; Held, Eric D.; Kim, Sang-Kyeun

    2016-03-15

    Electron parallel closures for the ion charge number Z = 1 [J.-Y. Ji and E. D. Held, Phys. Plasmas 21, 122116 (2014)] are extended to 1 ≤ Z ≤ 10. Parameters are computed for various Z, adopting the same functional form as the Z = 1 kernels. The parameters vary smoothly in Z and hence can be used to interpolate parameters and closures for noninteger, effective ion charge numbers.

  17. Statistical considerations for grain-size analyses of tills

    USGS Publications Warehouse

    Jacobs, A.M.

    1971-01-01

    Relative percentages of sand, silt, and clay from samples of the same till unit are not identical because of different lithologies in the source areas, sorting in transport, random variation, and experimental error. Random variation and experimental error can be isolated from the other two as follows. For each particle-size class of each till unit, a standard population is determined by using a normally distributed, representative group of data. New measurements are compared with the standard population and, if they compare satisfactorily, the experimental error is not significant and random variation is within the expected range for the population. The outcome of the comparison depends on numerical criteria derived from a graphical method rather than on a more commonly used one-way analysis of variance with two treatments. If the number of samples and the standard deviation of the standard population are substituted in a t-test equation, a family of hyperbolas is generated, each of which corresponds to a specific number of subsamples taken from each new sample. The axes of the graphs of the hyperbolas are the standard deviation of new measurements (horizontal axis) and the difference between the means of the new measurements and the standard population (vertical axis). The area between the two branches of each hyperbola corresponds to a satisfactory comparison between the new measurements and the standard population. Measurements from a new sample can be tested by plotting their standard deviation vs. difference in means on axes containing a hyperbola corresponding to the specific number of subsamples used. If the point lies between the branches of the hyperbola, the measurements are considered reliable. But if the point lies outside this region, the measurements are repeated. 
Because the critical segment of the hyperbola is approximately a straight line parallel to the horizontal axis, the test is simplified to a comparison between the means of the standard population and the means of the subsample. The minimum number of subsamples required to prove significant variation between samples caused by different lithologies in the source areas and sorting in transport can be determined directly from the graphical method. The minimum number of subsamples required is the maximum number to be run for economy of effort. © 1971 Plenum Publishing Corporation.
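The acceptance test the abstract describes reduces to a one-sample t-test check. A minimal sketch (hypothetical names; the critical value t_crit would come from a t table for n − 1 degrees of freedom at the chosen significance level):

```python
import math
from statistics import mean, stdev

def within_hyperbola(new_vals, mu0, t_crit):
    """Accept the new measurements if the point (s, |xbar - mu0|) lies
    between the two branches of the t-test hyperbola, i.e. the mean of
    the n subsample measurements does not differ significantly from the
    standard-population mean mu0."""
    n = len(new_vals)
    xbar, s = mean(new_vals), stdev(new_vals)
    return abs(xbar - mu0) <= t_crit * s / math.sqrt(n)
```

If the function returns False, the point falls outside the hyperbola and, per the abstract, the measurements are repeated.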

  18. Large N Limits in Tensor Models: Towards More Universality Classes of Colored Triangulations in Dimension d≥2

    NASA Astrophysics Data System (ADS)

    Bonzom, Valentin

    2016-07-01

    We review an approach, called random tensor models, which aims at studying discrete (pseudo-)manifolds in dimension d ≥ 2. More specifically, we focus on generalizing the two-dimensional notion of p-angulations to higher dimensions. To do so, we consider families of triangulations built out of simplices with colored faces. Those simplices can be glued to form new building blocks, called bubbles, which are pseudo-manifolds with boundaries. Bubbles can in turn be glued together to form triangulations. The main challenge is to classify the triangulations built from a given set of bubbles with respect to their numbers of bubbles and simplices of codimension two. While the colored triangulations which maximize the number of simplices of codimension two at fixed number of simplices are series-parallel objects called melonic triangulations, this no longer always holds when restricting attention to colored triangulations built from specific bubbles. This opens up the possibility of new universality classes of colored triangulations. We present three existing strategies to find those universality classes. The first two strategies consist of building new bubbles from old ones for which the problem can be solved. The third strategy is a bijection between those colored triangulations and stuffed, edge-colored maps, which are some sort of hypermaps whose hyperedges are replaced with edge-colored maps. We then show that the present approach can lead to enumeration results and identification of universality classes, by working out the example of quartic tensor models. They feature a tree-like phase, a planar phase similar to two-dimensional quantum gravity and a phase transition between them which is interpreted as a proliferation of baby universes.
While this work is written in the context of random tensors, it is almost exclusively of combinatorial nature and we hope it is accessible to interested readers who are not familiar with random matrices, tensors and quantum field theory.

  19. Multivariate test power approximations for balanced linear mixed models in studies with missing data.

    PubMed

    Ringham, Brandy M; Kreidler, Sarah M; Muller, Keith E; Glueck, Deborah H

    2016-07-30

    Multilevel and longitudinal studies are frequently subject to missing data. For example, biomarker studies for oral cancer may involve multiple assays for each participant. Assays may fail, resulting in missing data values that can be assumed to be missing completely at random. Catellier and Muller proposed a data analytic technique to account for data missing at random in multilevel and longitudinal studies. They suggested modifying the degrees of freedom for both the Hotelling-Lawley trace F statistic and its null case reference distribution. We propose parallel adjustments to approximate power for this multivariate test in studies with missing data. The power approximations use a modified non-central F statistic, which is a function of (i) the expected number of complete cases, (ii) the expected number of non-missing pairs of responses, or (iii) the trimmed sample size, which is the planned sample size reduced by the anticipated proportion of missing data. The accuracy of the method is assessed by comparing the theoretical results to the Monte Carlo simulated power for the Catellier and Muller multivariate test. Over all experimental conditions, the closest approximation to the empirical power of the Catellier and Muller multivariate test is obtained by adjusting power calculations with the expected number of complete cases. The utility of the method is demonstrated with a multivariate power analysis for a hypothetical oral cancer biomarkers study. We describe how to implement the method using standard, commercially available software products and give example code. Copyright © 2015 John Wiley & Sons, Ltd.
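The first adjustment above, replacing the planned sample size with the expected number of complete cases, can be sketched with a noncentral F power computation. A simplified, hypothetical illustration (not Catellier and Muller's exact degrees-of-freedom formulas), assuming SciPy:

```python
from scipy.stats import f, ncf

def approx_power(delta, df1, n_planned, p_missing, n_params, alpha=0.05):
    """Approximate power of an F test by swapping the planned sample size
    for the expected number of complete cases (a hypothetical
    simplification of the adjustment described in the abstract)."""
    n_complete = n_planned * (1.0 - p_missing)  # expected complete cases
    df2 = n_complete - n_params                 # error degrees of freedom
    ncp = delta * n_complete                    # noncentrality grows with n
    f_crit = f.ppf(1.0 - alpha, df1, df2)       # critical value under the null
    return ncf.sf(f_crit, df1, df2, ncp)        # P(reject) under the alternative
```

As expected, anticipated missingness lowers the approximate power, since both the noncentrality parameter and the error degrees of freedom shrink with the number of complete cases.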

  20. Implementation of Shifted Periodic Boundary Conditions in the Large-Scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) Software

    DTIC Science & Technology

    2015-08-01

    Report by N Scott Weingarten (Weapons and Materials Research Directorate, ARL) and James P Larentzos (Engility) on the implementation of shifted periodic boundary conditions in the Large-Scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) software. Approved for public release.

  1. A Multicenter, Randomized, Controlled Trial of Osteopathic Manipulative Treatment on Preterms

    PubMed Central

    Cerritelli, Francesco; Pizzolorusso, Gianfranco; Renzetti, Cinzia; Cozzolino, Vincenzo; D’Orazio, Marianna; Lupacchini, Mariacristina; Marinelli, Benedetta; Accorsi, Alessandro; Lucci, Chiara; Lancellotti, Jenny; Ballabio, Silvia; Castelli, Carola; Molteni, Daniela; Besana, Roberto; Tubaldi, Lucia; Perri, Francesco Paolo; Fusilli, Paola; D’Incecco, Carmine; Barlafante, Gina

    2015-01-01

    Background Despite some preliminary evidence, it is still largely unknown whether osteopathic manipulative treatment improves preterm clinical outcomes. Materials and Methods The present multi-center, randomized, single-blind, parallel-group clinical trial enrolled newborns from 3 different public neonatal intensive care units who met the criteria for gestational age between 29 and 37 weeks, without any congenital complication. Preterm infants were randomly assigned to usual prenatal care (control group) or osteopathic manipulative treatment (study group). The primary outcome was the mean difference in length of hospital stay between groups. Results A total of 695 newborns were randomly assigned to either the study group (n = 352) or the control group (n = 343). A statistically significant difference was observed between the two groups for the primary outcome (13.8 and 17.5 days for the study and control groups, respectively; p<0.001, effect size: 0.31). Multivariate analysis showed a reduction of the length of stay of 3.9 days (95% CI -5.5 to -2.3, p<0.001). Furthermore, there were significant reductions with treatment as compared to usual care in cost (difference between study and control group: 1,586.01€; 95% CI 1,087.18 to 6,277.28; p<0.001) but not in daily weight gain. There were no complications associated with the intervention. Conclusions Osteopathic treatment significantly reduced the number of days of hospitalization and was cost-effective in a large cohort of preterm infants. PMID:25974071

  2. The RESPIRE trials: Two phase III, randomized, multicentre, placebo-controlled trials of Ciprofloxacin Dry Powder for Inhalation (Ciprofloxacin DPI) in non-cystic fibrosis bronchiectasis.

    PubMed

    Aksamit, Timothy; Bandel, Tiemo-Joerg; Criollo, Margarita; De Soyza, Anthony; Elborn, J Stuart; Operschall, Elisabeth; Polverino, Eva; Roth, Katrin; Winthrop, Kevin L; Wilson, Robert

    2017-07-01

    The primary goals of long-term disease management in non-cystic fibrosis bronchiectasis (NCFB) are to reduce the number of exacerbations and improve quality of life. However, currently no therapies are licensed for this. Ciprofloxacin Dry Powder for Inhalation (Ciprofloxacin DPI) has the potential to be the first long-term intermittent therapy approved to reduce exacerbations in NCFB patients. The RESPIRE programme consists of two international phase III prospective, parallel-group, randomized, double-blinded, multicentre, placebo-controlled trials of the same design. Adult patients with idiopathic or post-infectious NCFB, a history of ≥2 exacerbations in the previous 12 months, and positive sputum culture for one of seven pre-specified pathogens, undergo stratified randomization 2:1 to receive twice-daily Ciprofloxacin DPI 32.5 mg or placebo using a pocket-sized inhaler in one of two regimens: 28 days on/off treatment or 14 days on/off treatment. The treatment period is 48 weeks plus an 8-week follow-up after the last dose. The primary efficacy endpoints are time to first exacerbation after treatment initiation and frequency of exacerbations using a stringent definition of exacerbation. Secondary endpoints, including frequency of events using different exacerbation definitions, microbiology, quality of life and lung function, will also be evaluated. The RESPIRE trials will determine the efficacy and safety of Ciprofloxacin DPI. The strict entry criteria and stratified randomization, the inclusion of two treatment regimens and a stringent definition of exacerbation should clarify the patient population best positioned to benefit from long-term inhaled antibiotic therapy. Additionally, RESPIRE will increase understanding of NCFB treatment and could lead to an important new therapy for sufferers. 
The RESPIRE trials are registered in ClinicalTrials.gov, ID number NCT01764841 (RESPIRE 1; date of registration January 8, 2013) and NCT02106832 (RESPIRE 2; date of registration April 4, 2014). Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  3. Impact of Attending Physicians' Comments on Residents' Workloads in the Emergency Department: Results from Two J(^o^)PAN Randomized Controlled Trials.

    PubMed

    Kuriyama, Akira; Umakoshi, Noriyuki; Fujinaga, Jun; Kaihara, Toshie; Urushidani, Seigo; Kuninaga, Naoki; Ichikawa, Motohiro; Ienaga, Shinichiro; Sasaki, Akira; Ikegami, Tetsunori

    2016-01-01

    To examine whether peppy comments from attending physicians increased the workload of residents working in the emergency department (ED). We conducted two parallel-group, assessor-blinded, randomized trials at the ED in a tertiary care hospital in western Japan. Participants were twenty-five residents who examined either ambulatory (J(^o^)PAN-1 Trial) or transferred patients (J(^o^)PAN-2 Trial) in the ED on weekdays. Participants were randomly assigned either to receive a peppy message from the attending physicians, such as "Hope you have a quiet day!" (intervention group), or not (control group). Both trials were conducted from June 2014 through March 2015. For each trial, residents recorded the number of patients examined during their shifts and rated the busyness and difficulty of their shifts on a 5-point Likert scale. A total of 169 randomizations (intervention group, 81; control group, 88) were performed for the J(^o^)PAN-1 Trial, and 178 (intervention group, 85; control group, 93) for the J(^o^)PAN-2 Trial. In the J(^o^)PAN-1 trial, no differences were observed in the number of ambulatory patients examined during their shifts (5.5 vs 5.7; p = 0.48), the busyness of their shifts (2.8 vs 2.8; p = 0.58), or the difficulty of their shifts (3.1 vs 3.1; p = 0.94). However, in the J(^o^)PAN-2 trial, although busyness (2.8 vs 2.7; p = 0.40) and difficulty (3.1 vs 3.2; p = 0.75) were similar between groups, the intervention group examined more transferred patients than the control group (4.4 vs 3.9; p = 0.01). Peppy comments from attending physicians had a minimal jinxing effect on the workload of residents working in the ED. University Hospital Medical Information Network Clinical Trials Registry (UMIN-CTR), UMIN000017193 and UMIN000017194.

  4. Six-Month Evaluation of a Sodium Bicarbonate-Containing Toothpaste for Reduction of Established Gingivitis: A Randomized USA-Based Clinical Trial.

    PubMed

    Jose, Anto; Pratten, Jonathan; Bosma, Mary-Lynn; Milleman, Kimberly R; Milleman, Jeffery L; Wang, Nan

    2018-03-01

    Short-term use of sodium bicarbonate (NaHCO3)-containing toothpaste reduces plaque and improves clinical measures of gingivitis. To examine this over a longer period, we compared efficacy and tolerability of twice-daily brushing for 24 weeks with 67% or 0% NaHCO3-containing toothpastes in USA-based participants with moderate gingivitis (ClinicalTrials.gov: NCT02207400). This was a six-month, randomized, examiner-blind, parallel-group clinical trial. Investigators randomized adults with blood in expectorate after brushing and ≥ 20 gingival bleeding sites to 67% NaHCO3 (n = 123; n = 107 completed study) or 0% NaHCO3 (n = 123; n = 109 completed study) toothpastes. Primary efficacy variables included between-treatment differences in number of bleeding sites and Modified Gingival Index (MGI) score at 24 weeks. Secondary efficacy variables included Bleeding Index and Turesky modification of the Quigley-Hein Plaque Index (overall and interproximal sites) at six, 12, and 24 weeks. A subset of 50 participants underwent sampling to assess plaque microbiology over the course of treatment. Compared with the 0% NaHCO3 toothpaste, the 67% NaHCO3 toothpaste produced statistically significant improvements at Week 24 in number of bleeding sites (46.7% difference) and MGI (33.9% difference), and for all other endpoints (all p < 0.0001). There was no significant between-treatment difference in the proportion of participants harboring opportunistic pathogens. Products were generally well tolerated, with two and five treatment-related adverse events reported in the 67% and 0% NaHCO3 toothpaste groups, respectively. Gingival bleeding, gingivitis, and plaque indices were significantly improved at six, 12, and 24 weeks with twice-daily brushing with 67% NaHCO3-containing toothpaste in participants with moderate gingivitis. Copyright © by the YES Group, Inc.

  5. A randomized controlled trial evaluating the efficacy of a 67% sodium bicarbonate toothpaste on gingivitis.

    PubMed

    Lomax, A; Patel, S; Wang, N; Kakar, K; Kakar, A; Bosma, M L

    2017-11-01

    In previous studies, toothpastes with high levels of sodium bicarbonate (>50%) have reduced gingival inflammation and oral malodour. This study compared the effects of brushing for 6 weeks with 67% (test group) or 0% (control group) sodium bicarbonate toothpaste on gingival health. This was a single-centre, single examiner-blind, randomized, controlled, two-treatment, parallel-group study. Eligible subjects (≥18 years) had ≥20 gradable teeth, mild-to-moderate gingivitis, a positive response to bleeding on brushing and ≥20 bleeding sites. The primary objective was to compare the number of bleeding sites following twice-daily use of 67% sodium bicarbonate toothpaste or 0% sodium bicarbonate toothpaste after 6 weeks. Secondary endpoints included Modified Gingival Index (MGI), Bleeding Index (BI) and volatile sulphur compounds (VSC), assessed after 6 weeks. Safety was assessed by treatment-emergent oral soft tissue abnormalities and adverse events. Of 148 patients randomized (74 to each treatment), 66 (89.2%) completed the study in the test group, compared with 69 (93.2%) in the control group. Compared with the control group, the test group had a significant reduction in the number of bleeding sites at Week 6 (absolute difference −11.0 [−14.0, −8.0], P < 0.0001; relative difference −25.4%), together with significant reductions in MGI and BI (both P < 0.0001). Although the median reductions from baseline for VSC were numerically greater in the test group, the difference did not reach statistical significance (P = 0.9701). This 67% sodium bicarbonate toothpaste provided statistically significant improvements in gingival health and bleeding after 6 weeks of use. © 2016 The Authors. International Journal of Dental Hygiene published by John Wiley & Sons Ltd.

  6. Oxcarbazepine in migraine headache: a double-blind, randomized, placebo-controlled study.

    PubMed

    Silberstein, S; Saper, J; Berenson, F; Somogyi, M; McCague, K; D'Souza, J

    2008-02-12

    To evaluate the efficacy, safety, and tolerability of oxcarbazepine (1,200 mg/day) vs placebo as prophylactic therapy for patients with migraine headaches. This multicenter, double-blind, randomized, placebo-controlled, parallel-group trial consisted of a 4-week single-blind baseline phase and a 15-week double-blind phase consisting of a 6-week titration period, an 8-week maintenance period, and a 1-week down-titration period, after which patients could enter a 13-week open-label extension phase. During the 6-week titration period, oxcarbazepine was initiated at 150 mg/day and increased by 150 mg/day every 5 days to a maximum tolerated dose of 1,200 mg/day. The primary outcome measure was change from baseline in the number of migraine attacks during the last 28-day period of the double-blind phase. Eighty-five patients were randomized to receive oxcarbazepine and 85 to receive placebo. There was no difference between the oxcarbazepine and placebo groups in mean change in number of migraine attacks from baseline during the last 28 days of the double-blind phase (-1.30 vs -1.74; p = 0.2274). Adverse events were reported for 68 oxcarbazepine-treated patients (80%) and 55 placebo-treated patients (65%). The majority of adverse events were mild or moderate in severity. The most common adverse events (≥15% of patients) in the oxcarbazepine-treated group were fatigue (20.0%), dizziness (17.6%), and nausea (16.5%); no adverse event occurred in more than 15% of the placebo-treated patients. Overall, oxcarbazepine was safe and well tolerated; however, oxcarbazepine did not show efficacy in the prophylactic treatment of migraine headaches.

  7. Reducing intrusive traumatic memories after emergency caesarean section: A proof-of-principle randomized controlled study.

    PubMed

    Horsch, Antje; Vial, Yvan; Favrod, Céline; Harari, Mathilde Morisod; Blackwell, Simon E; Watson, Peter; Iyadurai, Lalitha; Bonsall, Michael B; Holmes, Emily A

    2017-07-01

    Preventative psychological interventions to aid women after traumatic childbirth are needed. This proof-of-principle randomized controlled study evaluated whether the number of intrusive traumatic memories mothers experience after emergency caesarean section (ECS) could be reduced by a brief cognitive intervention. 56 women after ECS were randomized to one of two parallel groups in a 1:1 ratio: intervention (usual care plus cognitive task procedure) or control (usual care). The intervention group engaged in a visuospatial task (computer-game 'Tetris' via a handheld gaming device) for 15 min within six hours following their ECS. The primary outcome was the number of intrusive traumatic memories related to the ECS recorded in a diary for the week post-ECS. As predicted, compared with controls, the intervention group reported fewer intrusive traumatic memories (M = 4.77, SD = 10.71 vs. M = 9.22, SD = 10.69, d = 0.647 [95% CI: 0.106, 1.182]) over 1 week (intention-to-treat analyses, primary outcome). There was a trend towards reduced acute stress re-experiencing symptoms (d = 0.503 [95% CI: -0.032, 1.033]) after 1 week (intention-to-treat analyses). Time series analysis of daily intrusions data confirmed the predicted difference between groups. 72% of women rated the intervention "rather" to "extremely" acceptable. This represents a first step in the development of an early (and potentially universal) intervention to prevent postnatal posttraumatic stress symptoms that may benefit both mother and child. ClinicalTrials.gov, www.clinicaltrials.gov, NCT02502513. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  8. Study protocol: Münster tinnitus randomized controlled clinical trial-2013 based on tailor-made notched music training (TMNMT).

    PubMed

    Pantev, Christo; Rudack, Claudia; Stein, Alwina; Wunderlich, Robert; Engell, Alva; Lau, Pia; Wollbrink, Andreas; Shaykevich, Alex

    2014-03-02

    Tinnitus is a result of hyper-activity/hyper-synchrony of auditory neurons coding the tinnitus frequency, which has developed into synchronous mass activity owing to the lack of inhibition. We assume that removal of exactly these frequency components from an auditory stimulus will cause the brain to reorganize around tonotopic regions coding the tinnitus frequency. Based on this assumption a novel treatment for tonal tinnitus - tailor-made notched music training (TMNMT) (Proc Natl Acad Sci USA 107:1207-1210, 2010; Ann N Y Acad Sci 1252:253-258, 2012; Frontiers Syst Neurosci 6:50, 2012) - has been introduced and will be tested in this clinical trial on a large number of tinnitus patients. A randomized controlled trial (RCT) in parallel-group design will be performed in a double-blinded manner. The choice of the intervention we are going to apply is based on two "proof of concept" studies in humans (Proc Natl Acad Sci USA 107:1207-1210, 2010; Ann N Y Acad Sci 1252:253-258, 2012; Frontiers Syst Neurosci 6:50, 2012; PloS One 6(9):e24685, 2011) and on a recent animal study (Front Syst Neurosci 7:21, 2013). The RCT includes 100 participants with chronic, tonal tinnitus who will listen to tailor-made notched music (TMNM) for two hours a day for three months. The effect of TMNMT is assessed by the tinnitus handicap questionnaire and visual analogue scales (VAS) measuring perceived tinnitus loudness, distress and handicap. This is the first randomized controlled trial applying TMNMT to a larger number of patients with tonal tinnitus. Our data will more reliably verify the effectiveness of this completely non-invasive, low-cost treatment approach for tonal tinnitus. Current Controlled Trials ISRCTN04840953.

  9. Efficacy of high doses of penicillin versus amoxicillin in the treatment of uncomplicated community acquired pneumonia in adults. A non-inferiority controlled clinical trial.

    PubMed

    Llor, Carl; Pérez, Almudena; Carandell, Eugenia; García-Sangenís, Anna; Rezola, Javier; Llorente, Marian; Gestoso, Salvador; Bobé, Francesc; Román-Rodríguez, Miguel; Cots, Josep M; Hernández, Silvia; Cortés, Jordi; Miravitlles, Marc; Morros, Rosa

    2017-10-20

    Community-acquired pneumonia (CAP) is treated with penicillin in some northern European countries. To evaluate whether high-dose penicillin V is as effective as high-dose amoxicillin for the treatment of non-severe CAP. Multicentre, parallel, double-blind, controlled, randomized clinical trial. 31 primary care centers in Spain. Patients from 18 to 75 years of age with no significant associated comorbidity and with symptoms of lower respiratory tract infection and radiological confirmation of CAP were randomized to receive either penicillin V 1.6 million units or amoxicillin 1000 mg three times per day for 10 days. The main outcome was clinical cure at 14 days, and the primary hypothesis was that penicillin V would be non-inferior to amoxicillin with regard to this outcome, with a margin of 15% for the difference in proportions. EudraCT register 2012-003511-63. A total of 43 subjects (amoxicillin: 28; penicillin: 15) were randomized. Clinical cure was observed in 10 (90.9%) patients assigned to penicillin and in 25 (100%) patients assigned to amoxicillin with a difference of -9.1% (95% CI, -41.3% to 6.4%; p=.951) for non-inferiority. In the intention-to-treat analysis, amoxicillin was found to be 28.6% superior to penicillin (95% CI, 7.3-58.1%; p=.009 for superiority). The number of adverse events was similar in both groups. There was a trend favoring high-dose amoxicillin versus high-dose penicillin in adults with uncomplicated CAP. The main limitation of this trial was the low statistical power due to the low number of patients included. Copyright © 2017 Elsevier España, S.L.U. All rights reserved.

  10. Visual analysis of inter-process communication for large-scale parallel computing.

    PubMed

    Muelder, Chris; Gygi, Francois; Ma, Kwan-Liu

    2009-01-01

    In serial computation, program profiling is often helpful for optimization of key sections of code. When moving to parallel computation, not only does the code execution need to be considered but also communication between the different processes which can induce delays that are detrimental to performance. As the number of processes increases, so does the impact of the communication delays on performance. For large-scale parallel applications, it is critical to understand how the communication impacts performance in order to make the code more efficient. There are several tools available for visualizing program execution and communications on parallel systems. These tools generally provide either views which statistically summarize the entire program execution or process-centric views. However, process-centric visualizations do not scale well as the number of processes gets very large. In particular, the most common representation of parallel processes is a Gantt chart with a row for each process. As the number of processes increases, these charts can become difficult to work with and can even exceed screen resolution. We propose a new visualization approach that affords more scalability and then demonstrate it on systems running with up to 16,384 processes.
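One common scalable alternative to per-process Gantt rows is to aggregate communication at the process-pair level before visualizing it. A minimal sketch of that aggregation step (hypothetical event format of (source, destination) pairs; this is a generic illustration, not the paper's visualization technique):

```python
def comm_matrix(events, n_procs):
    """Aggregate point-to-point message events into an n x n count
    matrix. Unlike one Gantt row per process, the matrix summary can be
    downsampled or clustered regardless of how many events occurred."""
    m = [[0] * n_procs for _ in range(n_procs)]
    for src, dst in events:
        m[src][dst] += 1  # count messages from src to dst
    return m
```

The resulting matrix can then be rendered as a heatmap whose resolution is independent of the trace length.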

  11. Extracting random numbers from quantum tunnelling through a single diode.

    PubMed

    Bernardo-Gavito, Ramón; Bagci, Ibrahim Ethem; Roberts, Jonathan; Sexton, James; Astbury, Benjamin; Shokeir, Hamzah; McGrath, Thomas; Noori, Yasir J; Woodhead, Christopher S; Missous, Mohamed; Roedig, Utz; Young, Robert J

    2017-12-19

    Random number generation is crucial in many aspects of everyday life, as online security and privacy depend ultimately on the quality of random numbers. Many current implementations are based on pseudo-random number generators, but information security requires true random numbers for sensitive applications like key generation in banking, defence or even social media. True random number generators are systems whose outputs cannot be determined, even if their internal structure and response history are known. Sources of quantum noise are thus ideal for this application due to their intrinsic uncertainty. In this work, we propose using resonant tunnelling diodes as practical true random number generators based on a quantum mechanical effect. The output of the proposed devices can be directly used as a random stream of bits or can be further distilled using randomness extraction algorithms, depending on the application.
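A classical example of the "further distilling" step mentioned above is von Neumann debiasing, one of the simplest randomness-extraction algorithms (shown here as a generic illustration, not the authors' method):

```python
def von_neumann_extract(bits):
    """Von Neumann debiasing: map raw bit pairs 01 -> 0 and 10 -> 1,
    discarding 00 and 11. For independent but biased raw bits, the
    output bits are unbiased (at the cost of throwing some bits away)."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:          # keep only the unequal pairs
            out.append(a)   # 01 -> 0, 10 -> 1
    return out
```

More sophisticated extractors (e.g. universal hashing) trade this simplicity for higher output rates.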

  12. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, conventional methods for inverse modeling can be computationally expensive because the number of measurements is often large and the model parameters are numerous. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10¹ to ~10² in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.
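The core idea, solving the damped Levenberg-Marquardt subproblem with a Krylov method rather than a direct QR/SVD factorization, can be sketched as follows. This is a generic LSQR formulation assuming SciPy; the paper's subspace recycling across damping parameters is not reproduced here:

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def lm_step(J, r, lam):
    """One Levenberg-Marquardt step: minimize ||J d + r||^2 + lam ||d||^2
    using LSQR, a Krylov-subspace solver, without ever forming J^T J."""
    return lsqr(J, -r, damp=np.sqrt(lam))[0]
```

Because LSQR only needs matrix-vector products with J and its transpose, each step scales to highly parameterized models where assembling and factorizing the normal equations would be prohibitive.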

  13. A Behavior-Based Intervention That Prevents Sexual Assault: the Results of a Matched-Pairs, Cluster-Randomized Study in Nairobi, Kenya.

    PubMed

    Baiocchi, Michael; Omondi, Benjamin; Langat, Nickson; Boothroyd, Derek B; Sinclair, Jake; Pavia, Lee; Mulinge, Munyae; Githua, Oscar; Golden, Neville H; Sarnquist, Clea

    2017-10-01

    The study's design was a cluster-randomized, matched-pairs, parallel trial of a behavior-based sexual assault prevention intervention in the informal settlements. The participants were primary school girls aged 10-16. Classroom-based interventions for girls and boys were delivered by instructors from the same settlements, at the same time, over six 2-h sessions. The girls' program had components of empowerment, gender relations, and self-defense. The boys' program promotes healthy gender norms. The control arm of the study received a health and hygiene curriculum. The primary outcome was the rate of sexual assault in the prior 12 months at the cluster level (school level). Secondary outcomes included the generalized self-efficacy scale, the distribution of number of times victims were sexually assaulted in the prior period, skills used, disclosure rates, and distribution of perpetrators. Difference-in-differences estimates are reported with bootstrapped confidence intervals. Fourteen schools with 3147 girls from the intervention group and 14 schools with 2539 girls from the control group were included in the analysis. We estimate a 3.7% decrease, p = 0.03 and 95% CI = (0.4, 8.0), in risk of sexual assault in the intervention group due to the intervention (initially 7.3% at baseline). We estimate an increase in mean generalized self-efficacy score of 0.19 (baseline average 3.1, on a 1-4 scale), p = 0.0004 and 95% CI = (0.08, 0.39). This innovative intervention that combined parallel training for young adolescent girls and boys in school settings showed significant reduction in the rate of sexual assault among girls in this population.

  14. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE PAGES

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, conventional methods for inverse modeling can be computationally expensive because the number of measurements is often large and the model parameters are numerous. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10¹ to ~10² in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.

  15. Generation of physical random numbers by using homodyne detection

    NASA Astrophysics Data System (ADS)

    Hirakawa, Kodai; Oya, Shota; Oguri, Yusuke; Ichikawa, Tsubasa; Eto, Yujiro; Hirano, Takuya; Tsurumaru, Toyohiro

    2016-10-01

Physical random numbers generated by quantum measurements are, in principle, impossible to predict. We have demonstrated the generation of physical random numbers by using a high-speed balanced photodetector to measure the quadrature amplitudes of vacuum states. Using this method, random numbers were generated at 500 Mbps, which is more than one order of magnitude faster than previously reported [Gabriel et al., Nature Photonics 4, 711-715 (2010)]. The Crush test battery of the TestU01 suite consists of 31 tests in 144 variations, and we used them to statistically analyze these numbers. The generated random numbers passed 14 of the 31 tests. To improve the randomness, we performed a hash operation, in which each random number was multiplied by a random Toeplitz matrix; the resulting numbers passed all of the tests in the TestU01 Crush battery.
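The Toeplitz-hashing post-processing step amounts to a binary matrix-vector product over GF(2); a minimal sketch (illustrative, not the authors' implementation):

```python
import numpy as np

def toeplitz_extract(raw_bits, seed_bits, n_out):
    """Compress raw bits with a random binary Toeplitz matrix over GF(2).
    An n_out x n_in Toeplitz matrix is fully specified by n_out + n_in - 1
    seed bits via T[i, j] = seed_bits[i - j + n_in - 1]."""
    raw = np.asarray(raw_bits)
    seed = np.asarray(seed_bits)
    n_in = raw.size
    assert seed.size == n_out + n_in - 1
    i = np.arange(n_out)[:, None]
    j = np.arange(n_in)[None, :]
    T = seed[i - j + n_in - 1]           # build the Toeplitz matrix
    return (T @ raw) % 2                 # matrix-vector product mod 2
```

Because n_out < n_in, the hash trades raw bit rate for improved statistical quality, which is why the hashed output passes the full Crush battery.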

  16. A generator for unique quantum random numbers based on vacuum states

    NASA Astrophysics Data System (ADS)

    Gabriel, Christian; Wittmann, Christoffer; Sych, Denis; Dong, Ruifang; Mauerer, Wolfgang; Andersen, Ulrik L.; Marquardt, Christoph; Leuchs, Gerd

    2010-10-01

    Random numbers are a valuable component in diverse applications that range from simulations over gambling to cryptography. The quest for true randomness in these applications has engendered a large variety of different proposals for producing random numbers based on the foundational unpredictability of quantum mechanics. However, most approaches do not consider that a potential adversary could have knowledge about the generated numbers, so the numbers are not verifiably random and unique. Here we present a simple experimental setup based on homodyne measurements that uses the purity of a continuous-variable quantum vacuum state to generate unique random numbers. We use the intrinsic randomness in measuring the quadratures of a mode in the lowest energy vacuum state, which cannot be correlated to any other state. The simplicity of our source, combined with its verifiably unique randomness, are important attributes for achieving high-reliability, high-speed and low-cost quantum random number generators.

  17. Randomized trial of four-layer and two-layer bandage systems in the management of chronic venous ulceration.

    PubMed

    Moffatt, Christine J; McCullagh, Lynn; O'Connor, Theresa; Doherty, Debra C; Hourican, Catherine; Stevens, Julie; Mole, Trevor; Franks, Peter J

    2003-01-01

To compare a four-layer bandage system with a two-layer system in the management of chronic venous leg ulceration, a prospective randomized open parallel groups trial was undertaken. In total, 112 patients newly presenting to leg ulcer services with chronic leg ulceration, screened to exclude the presence of arterial disease (ankle brachial pressure index <0.8) and causes of ulceration other than venous disease, were entered into the trial. Patients were randomized to receive either four-layer (Profore) or two-layer (Surepress) high-compression elastic bandage systems. In all, 109 out of 112 patients had at least one follow-up. After 24 weeks, 50 out of 57 (88%) patients randomized to the four-layer bandage system with follow-up had ulcer closure (full epithelialization) compared with 40 out of 52 (77%) on the two-layer bandage, hazard ratio = 1.18 (95% confidence interval 0.69-2.02), p = 0.55. After 12 weeks, 40 out of 57 (70%) patients randomized to the four-layer bandage system with follow-up had ulcer closure compared with 30 out of 52 (58%) on the two-layer bandage, odds ratio = 4.23 (95% confidence interval 1.29-13.86), p = 0.02. Withdrawal rates were significantly greater on the two-layer bandage (30 out of 54; 56%) compared with the four-layer bandage system (8 out of 58; 14%), p < 0.001, and the number of patients with at least one device-related adverse incident was significantly greater on the two-layer bandaging system (15 out of 54; 28%) compared with four-layer bandaging (5 out of 54; 9%), p = 0.01. The higher mean cost of treatment in the two-layer bandaging system arm over 24 weeks ($1374 [£916] vs. $1314 [£876]) was explained by the increased mean number of bandage changes (1.5 vs. 1.1 per week) with the two-layer system. In conclusion, the four-layer bandage offers advantages over the two-layer bandage in terms of reduced withdrawal from treatment, fewer adverse incidents, and lower treatment cost.

  18. Evaluation of piezocision and laser-assisted flapless corticotomy in the acceleration of canine retraction: a randomized controlled trial.

    PubMed

    Alfawal, Alaa M H; Hajeer, Mohammad Y; Ajaj, Mowaffak A; Hamadah, Omar; Brad, Bassel

    2018-02-17

To evaluate the effectiveness of two minimally invasive surgical procedures in the acceleration of canine retraction: piezocision and laser-assisted flapless corticotomy (LAFC). Trial design: A single-centre randomized controlled trial with a compound design (two-arm parallel-group design and a split-mouth design for each arm). Participants: 36 Class II division I patients (12 males, 24 females; age range: 15 to 27 years) requiring extraction of the first upper premolars followed by canine retraction. Groups: piezocision group (PG; n = 18) and laser-assisted flapless corticotomy group (LG; n = 18). A split-mouth design was applied for each group, where the flapless surgical intervention was randomly allocated to one side and the other side served as a control. Outcomes: the rate of canine retraction (primary outcome), anchorage loss and canine rotation, assessed at 1, 2, 3 and 4 months following the onset of canine retraction; the duration of canine retraction was also recorded. Random sequence: computer-generated random numbers. Allocation concealment: sequentially numbered, opaque, sealed envelopes. Blinding: single blinded (outcomes' assessor). Seventeen patients in each group were included in the statistical analysis. The rate of canine retraction was significantly greater on the experimental side than on the control side in both groups, by two-fold in the first month and 1.5-fold in the second month (p < 0.001). The overall duration of canine retraction was also significantly reduced on the experimental side compared with the control side in both groups, by about 25% (p ≤ 0.001). There were no significant differences between the experimental and control sides regarding loss of anchorage and upper canine rotation in either group (p > 0.05), and no significant differences between the two flapless techniques for the studied variables at any evaluation time (p > 0.05).
Piezocision and laser-assisted flapless corticotomy appeared to be effective treatment methods for accelerating canine retraction without any significant untoward effect on anchorage or canine rotation during rapid retraction. ClinicalTrials.gov (Identifier: NCT02606331 ).

  19. Integrating photo-stimulable phosphor plates into dental and dental hygiene radiography curricula.

    PubMed

    Tax, Cara L; Robb, Christine L; Brillant, Martha G S; Doucette, Heather J

    2013-11-01

It is not known whether the integration of photo-stimulable phosphor (PSP) plates into dental and dental hygiene curricula creates unique learning challenges for students. The purpose of this two-year study was to determine if dental hygiene students had more and/or different types of errors when using PSP plates compared to film and whether the PSP imaging plates had any particular characteristics that needed to be addressed in the learning process. Fifty-nine first-year dental hygiene students at one Canadian dental school were randomly assigned to two groups (PSP or film) before exposing their initial full mouth series on a teaching manikin using the paralleling technique. The principal investigator determined the number and types of errors based on a specific set of performance criteria. The two groups (PSP vs. film) were compared for total number and type of errors made. Results of the study indicated that the difference in the total number of errors made using PSP or film was not statistically significant; however, there was a difference in the types of errors made, with the PSP group having more horizontal errors than the film group. In addition, the study identified a number of unique characteristics of the PSP plates that required special consideration when teaching this technology.

  20. The implementation of an aeronautical CFD flow code onto distributed memory parallel systems

    NASA Astrophysics Data System (ADS)

    Ierotheou, C. S.; Forsey, C. R.; Leatham, M.

    2000-04-01

The parallelization of an industrially important in-house computational fluid dynamics (CFD) code for calculating the airflow over complex aircraft configurations using the Euler or Navier-Stokes equations is presented. The code discussed is the flow solver module of the SAUNA CFD suite. This suite uses a novel grid system that may include block-structured hexahedral or pyramidal grids, unstructured tetrahedral grids or a hybrid combination of both. To assist in the rapid convergence to a solution, a number of convergence acceleration techniques are employed including implicit residual smoothing and a multigrid full approximation storage scheme (FAS). Key features of the parallelization approach are the use of domain decomposition and encapsulated message passing to enable the execution in parallel using a single programme multiple data (SPMD) paradigm. In the case where a hybrid grid is used, a unified grid partitioning scheme is employed to define the decomposition of the mesh. The parallel code has been tested using both structured and hybrid grids on a number of different distributed memory parallel systems and is now routinely used to perform industrial scale aeronautical simulations.
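The domain-decomposition idea can be illustrated with a serial stand-in for the message-passing layer: split the field into subdomains, copy ghost values from neighbours, and update each subdomain independently (a toy 1D stencil under assumed boundary handling, not SAUNA's actual scheme):

```python
import numpy as np

def decomposed_sweep(u, n_parts):
    """SPMD-style domain decomposition sketch: partition a 1D field into
    subdomains with one ghost cell per side, 'exchange' ghost values from
    the neighbouring subdomains (standing in for message passing), then
    apply a smoothing stencil to each subdomain independently."""
    parts = np.array_split(u, n_parts)
    new = []
    for k, p in enumerate(parts):
        left = parts[k - 1][-1] if k > 0 else p[0]           # ghost cells
        right = parts[k + 1][0] if k < n_parts - 1 else p[-1]
        ext = np.concatenate(([left], p, [right]))
        new.append(0.5 * (ext[:-2] + ext[2:]))               # stencil update
    return np.concatenate(new)
```

Because each subdomain sees correct neighbour values through its ghost cells, the decomposed sweep reproduces the serial result exactly, which is the property an encapsulated message-passing implementation must preserve.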

  1. Collimator of multiple plates with axially aligned identical random arrays of apertures

    NASA Technical Reports Server (NTRS)

    Hoover, R. B.; Underwood, J. H. (Inventor)

    1973-01-01

A collimator is disclosed for examining the spatial location of distant sources of radiation and for imaging, by projection, small near sources of radiation. The collimator consists of a plurality of plates, all of which are pierced with an identical random array of apertures. The plates are mounted perpendicular to a common axis, with like apertures on consecutive plates axially aligned so as to form radiation channels parallel to the common axis. For near sources, the collimator is interposed between the source and a radiation detector and is translated perpendicular to the common axis so as to project radiation traveling parallel to the common axis incident to the detector. For far sources, the collimator is scanned by rotating it in elevation and azimuth with a detector to determine the angular distribution of the radiation from the source.

  2. The effect of selection environment on the probability of parallel evolution.

    PubMed

    Bailey, Susan F; Rodrigue, Nicolas; Kassen, Rees

    2015-06-01

Across the great diversity of life, there are many compelling examples of parallel and convergent evolution-similar evolutionary changes arising in independently evolving populations. Parallel evolution is often taken to be strong evidence of adaptation occurring in populations that are highly constrained in their genetic variation. Theoretical models suggest a few potential factors driving the probability of parallel evolution, but experimental tests are needed. In this study, we quantify the degree of parallel evolution in 15 replicate populations of Pseudomonas fluorescens evolved in five different environments that varied in resource type and arrangement. We identified repeated changes across multiple levels of biological organization from phenotype, to gene, to nucleotide, and tested the impact of 1) selection environment, 2) the degree of adaptation, and 3) the degree of heterogeneity in the environment on the degree of parallel evolution at the gene level. We saw, as expected, that parallel evolution occurred more often between populations evolved in the same environment; however, the extent of parallel evolution varied widely. The degree of adaptation did not significantly explain variation in the extent of parallelism in our system, but the number of available beneficial mutations correlated negatively with parallel evolution. In addition, the degree of parallel evolution was significantly higher in populations evolved in a spatially structured, multiresource environment, suggesting that environmental heterogeneity may be an important factor constraining adaptation. Overall, our results stress the importance of environment in driving parallel evolutionary changes and point to a number of avenues for future work for understanding when evolution is predictable. © The Author 2015. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  3. Multirate parallel distributed compensation of a cluster in wireless sensor and actor networks

    NASA Astrophysics Data System (ADS)

    Yang, Chun-xi; Huang, Ling-yun; Zhang, Hao; Hua, Wang

    2016-01-01

The stabilisation problem for one of the clusters with bounded multiple random time delays and packet dropouts in wireless sensor and actor networks is investigated in this paper. A new multirate switching model is constructed to describe the features of this single-input multiple-output linear system. Because controller design under the multiple constraints of the multirate switching model is difficult, the model is converted to a Takagi-Sugeno fuzzy model. By designing a multirate parallel distributed compensation, a sufficient condition is established to ensure that this closed-loop fuzzy control system is globally exponentially stable. The multirate parallel distributed compensation gains can be obtained by solving an auxiliary convex optimisation problem. Finally, two numerical examples are given to show that, compared with solving a switching controller, the multirate parallel distributed compensation can be obtained easily. Furthermore, it has stronger robust stability than an arbitrary switching controller and single-rate parallel distributed compensation under the same conditions.

  4. A path-level exact parallelization strategy for sequential simulation

    NASA Astrophysics Data System (ADS)

    Peredo, Oscar F.; Baeza, Daniel; Ortiz, Julián M.; Herrero, José R.

    2018-01-01

    Sequential Simulation is a well known method in geostatistical modelling. Following the Bayesian approach for simulation of conditionally dependent random events, Sequential Indicator Simulation (SIS) method draws simulated values for K categories (categorical case) or classes defined by K different thresholds (continuous case). Similarly, Sequential Gaussian Simulation (SGS) method draws simulated values from a multivariate Gaussian field. In this work, a path-level approach to parallelize SIS and SGS methods is presented. A first stage of re-arrangement of the simulation path is performed, followed by a second stage of parallel simulation for non-conflicting nodes. A key advantage of the proposed parallelization method is to generate identical realizations as with the original non-parallelized methods. Case studies are presented using two sequential simulation codes from GSLIB: SISIM and SGSIM. Execution time and speedup results are shown for large-scale domains, with many categories and maximum kriging neighbours in each case, achieving high speedup results in the best scenarios using 16 threads of execution in a single machine.
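The re-arrangement stage can be illustrated with a toy grouping pass (a hypothetical helper, simplifying the paper's path-level strategy; real codes also account for the kriging neighbourhood search and conditioning data):

```python
import numpy as np

def greedy_levels(path_coords, radius):
    """Group nodes along the simulation path into consecutive batches such
    that no two nodes in a batch lie within each other's search radius.
    Nodes in a batch can then be simulated concurrently without changing
    the realization, since neither conditions on the other's value."""
    levels = []
    for p in path_coords:
        # append to the current batch only if p conflicts with none of it,
        # preserving the original path order between batches
        if levels and all(
            np.linalg.norm(np.asarray(p, float) - np.asarray(q, float)) > radius
            for q in levels[-1]
        ):
            levels[-1].append(p)
        else:
            levels.append([p])
    return levels
```

A small search radius lets many nodes share a batch (high parallelism), while a large radius degenerates toward the original fully sequential path.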

  5. A random rule model of surface growth

    NASA Astrophysics Data System (ADS)

    Mello, Bernardo A.

    2015-02-01

Stochastic models of surface growth are usually based on randomly choosing a substrate site at which to perform iterative steps, as in the etching model of Mello et al. (2001) [5]. In this paper I modify the etching model to perform a sequential, instead of random, substrate scan. Randomness is introduced not in the site selection but in the choice of the rule to be followed at each site. The change positively affects the study of dynamic and asymptotic properties, by reducing the finite-size effect and the short-time anomaly and by increasing the saturation time. It also has computational benefits: better use of the cache memory and the possibility of parallel implementation.
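The "sequential scan, random rule" idea can be sketched as follows (the two rules used here, deposition and an etching-like smoothing step, are illustrative stand-ins, not the paper's exact rule set):

```python
import numpy as np

def random_rule_sweep(h, rng, p_grow=0.5):
    """One sequential sweep over a periodic 1D substrate: sites are visited
    in fixed order, and the randomness lies in which rule fires at each
    site, not in which site is picked. Illustrative rules only."""
    L = len(h)
    for i in range(L):
        if rng.random() < p_grow:
            h[i] += 1                                  # deposit at the site
        else:
            # etching-like smoothing: raise the site to its tallest neighbour
            h[i] = max(h[i], h[(i - 1) % L], h[(i + 1) % L])
    return h
```

Because the scan order is fixed, consecutive sites are touched in cache-friendly order, and independent stretches of the substrate can in principle be updated in parallel, which is the computational benefit the abstract points to.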

  6. An Efficient Multicore Implementation of a Novel HSS-Structured Multifrontal Solver Using Randomized Sampling

    DOE PAGES

Ghysels, Pieter; Li, Xiaoye S.; Rouet, Francois-Henry; ...

    2016-10-27

Here, we present a sparse linear system solver that is based on a multifrontal variant of Gaussian elimination and exploits low-rank approximation of the resulting dense frontal matrices. We use hierarchically semiseparable (HSS) matrices, which have low-rank off-diagonal blocks, to approximate the frontal matrices. For HSS matrix construction, a randomized sampling algorithm is used together with interpolative decompositions. The combination of the randomized compression with a fast ULV HSS factorization leads to a solver with lower computational complexity than the standard multifrontal method for many applications, resulting in speedups up to 7-fold for problems in our test suite. The implementation targets many-core systems by using task parallelism with dynamic runtime scheduling. Numerical experiments show performance improvements over state-of-the-art sparse direct solvers. The implementation achieves high performance and good scalability on a range of modern shared memory parallel systems, including the Intel Xeon Phi (MIC). The code is part of a software package called STRUMPACK - STRUctured Matrices PACKage, which also has a distributed memory component for dense rank-structured matrices.
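The randomized-sampling ingredient can be illustrated with a basic randomized range finder for a low-rank block (a generic sketch of the technique; STRUMPACK's HSS construction additionally uses interpolative decompositions and a hierarchical block structure):

```python
import numpy as np

def randomized_lowrank(A, k, p=5, seed=0):
    """Randomized range finder: sample the range of A with k + p random
    probes, orthonormalize the samples, and project. Returns Q, B such
    that A is approximately Q @ B, with p a small oversampling margin."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    Y = A @ rng.standard_normal((n, k + p))  # random samples of range(A)
    Q, _ = np.linalg.qr(Y)                   # orthonormal basis of samples
    B = Q.T @ A                              # small (k + p) x n factor
    return Q, B
```

The key cost is k + p matrix-vector products with A, which is what makes randomized compression attractive inside a multifrontal solver, where A is only needed through its action on vectors.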

  7. Pseudo-Random Number Generator Based on Coupled Map Lattices

    NASA Astrophysics Data System (ADS)

    Lü, Huaping; Wang, Shihong; Hu, Gang

    A one-way coupled chaotic map lattice is used for generating pseudo-random numbers. It is shown that with suitable cooperative applications of both chaotic and conventional approaches, the output of the spatiotemporally chaotic system can easily meet the practical requirements of random numbers, i.e., excellent random statistical properties, long periodicity of computer realizations, and fast speed of random number generations. This pseudo-random number generator system can be used as ideal synchronous and self-synchronizing stream cipher systems for secure communications.
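A minimal sketch of such a generator, assuming a ring-coupled logistic-map lattice and naive byte extraction (the paper's lattice, map, and output stage differ in detail, and this toy is not cryptographically vetted):

```python
import numpy as np

def cml_bytes(seed, n_bytes, L=8, eps=0.9, warmup=200):
    """Pseudo-random bytes from a one-way coupled map lattice on a ring:
    x_{n+1}[i] = (1 - eps) * f(x_n[i]) + eps * f(x_n[i-1]),
    with the chaotic logistic map f(x) = 4x(1 - x). Illustrative only."""
    f = lambda v: 4.0 * v * (1.0 - v)
    # spread the scalar seed into L distinct initial site values in (0, 1)
    x = (seed * np.arange(1, L + 1) * 0.137) % 0.8 + 0.1
    for _ in range(warmup):                 # discard the transient
        x = (1 - eps) * f(x) + eps * f(np.roll(x, 1))
    out = bytearray()
    while len(out) < n_bytes:
        x = (1 - eps) * f(x) + eps * f(np.roll(x, 1))
        out.extend(int(v * 256) % 256 for v in x)  # crude bit extraction
    return bytes(out[:n_bytes])
```

The one-way coupling is what the cipher application relies on: driving a second lattice with the same input stream makes it synchronize to the first, giving the self-synchronizing stream-cipher behaviour the abstract describes.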

  8. Effective components of feedback from Routine Outcome Monitoring (ROM) in youth mental health care: study protocol of a three-arm parallel-group randomized controlled trial

    PubMed Central

    2014-01-01

Background: Routine Outcome Monitoring refers to regular measurements of clients’ progress in clinical practice, aiming to evaluate and, if necessary, adapt treatment. Clients fill out questionnaires and clinicians receive feedback about the results. Studies concerning feedback in youth mental health care are rare. The effects of feedback, the importance of specific aspects of feedback, and the mechanisms underlying the effects of feedback are unknown. In the present study, several potentially effective components of feedback from Routine Outcome Monitoring in youth mental health care in the Netherlands are investigated. Methods/Design: We will examine three different forms of feedback through a three-arm parallel-group randomized controlled trial. 432 children and adolescents (aged 4 to 17 years) and their parents, who have been referred to mental health care institution Pro Persona, will be randomly assigned to one of three feedback conditions (144 participants per condition). Randomization will be stratified by age of the child or adolescent and by department. All participants fill out questionnaires at the start of treatment, one and a half months after the start of treatment, every three months during treatment, and at the end of treatment. Participants in the second and third feedback conditions fill out an additional questionnaire. In condition 1, clinicians receive basic feedback regarding clients’ symptoms and quality of life. In condition 2, the feedback of condition 1 is extended with feedback regarding possible obstacles to a good outcome and with practical suggestions. In condition 3, the feedback of condition 2 is discussed with a colleague while following a standardized format for case consultation. The primary outcome measure is symptom severity and secondary outcome measures are quality of life, satisfaction with treatment, number of sessions, length of treatment, and rates of dropout.
We will also examine the role of being not on track (not responding to treatment). Discussion: This study contributes to the identification of effective components of feedback and a better understanding of how feedback functions in real-world clinical practice. If the different feedback components prove to be effective, this can help to support and improve the care for youth. Trial registration: Dutch Trial Register NTR4234. PMID: 24393491

  9. Load Balancing in Stochastic Networks: Algorithms, Analysis, and Game Theory

    DTIC Science & Technology

    2014-04-16

The classic randomized load balancing model is the so-called supermarket model, which describes a system in which customers arrive to a service center with n parallel servers according ... Keywords: mean-field limits, supermarket model, thresholds, game, randomized load balancing.
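The power-of-d-choices rule at the heart of the supermarket model can be sketched in a few lines (a toy illustration with assumed parameters, not the report's mean-field analysis):

```python
import random

def power_of_d(n_queues, n_customers, d, seed=0):
    """Supermarket-model dispatch rule: each arriving customer samples d
    queues uniformly at random and joins the shortest of the sampled ones.
    Returns the resulting queue-length distribution (arrivals only)."""
    rng = random.Random(seed)
    q = [0] * n_queues
    for _ in range(n_customers):
        choices = rng.sample(range(n_queues), d)
        q[min(choices, key=lambda i: q[i])] += 1
    return q
```

Comparing d = 1 (purely random assignment) with d = 2 reproduces the classic observation that even two choices flatten the maximum load dramatically.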

  10. Omega 3/6 Fatty Acids for Reading in Children: A Randomized, Double-Blind, Placebo-Controlled Trial in 9-Year-Old Mainstream Schoolchildren in Sweden

    ERIC Educational Resources Information Center

    Johnson, Mats; Fransson, Gunnar; Östlund, Sven; Areskoug, Björn; Gillberg, Christopher

    2017-01-01

    Background: Previous research has shown positive effects of Omega 3/6 fatty acids in children with inattention and reading difficulties. We aimed to investigate if Omega 3/6 improved reading ability in mainstream schoolchildren. Methods: We performed a 3-month parallel, randomized, double-blind, placebo-controlled trial followed by 3-month active…

  11. An overview of confounding. Part 1: the concept and how to address it.

    PubMed

    Howards, Penelope P

    2018-04-01

Confounding is an important source of bias, but it is often misunderstood. We consider how confounding occurs and how to address confounding using examples. Study results are confounded when the effect of the exposure on the outcome mixes with the effects of other risk and protective factors for the outcome. This problem arises when these factors are present to different degrees among the exposed and unexposed study participants, but not all differences between the groups result in confounding. Thinking about an ideal study where all of the population of interest is exposed in one universe and is unexposed in a parallel universe helps to distinguish confounders from other differences. In an actual study, an observed unexposed population is chosen to stand in for the unobserved parallel universe. Differences between this substitute population and the parallel universe result in confounding. Confounding by identified factors can be addressed analytically and through study design, but only randomization has the potential to address confounding by unmeasured factors. Nevertheless, a given randomized study may still be confounded. Confounded study results can lead to incorrect conclusions about the effect of the exposure of interest on the outcome. © 2018 Nordic Federation of Societies of Obstetrics and Gynecology.

  12. Incomplete caries removal and indirect pulp capping in primary molars: a randomized controlled trial.

    PubMed

    Bressani, Ana Eliza Lemes; Mariath, Adriela Azevedo Souza; Haas, Alex Nogueira; Garcia-Godoy, Franklin; de Araujo, Fernando Borba

    2013-08-01

    To compare the effect of incomplete caries removal (ICR) and indirect pulp capping (IPC) with calcium hydroxide (CH) or an inert material (wax) on color, consistency and contamination of the remaining dentin of primary molars. This double-blind, parallel-design, randomized controlled trial included 30 children presenting one primary molar with deep caries lesion. Children were randomly assigned after ICR to receive IPC with CH or wax. All teeth were then restored with resin composite. Baseline dentin color and consistency were evaluated after ICR, and dentin samples were collected for contamination analyses using scanning electron microscopy. After 3 months, restorations were removed and the three parameters were re-evaluated. In both groups, dentin became significantly darker after 3 months. No cases of yellow dentin were observed after 3 months with CH compared to 33.3% of the wax cases (P < 0.05). A statistically significant difference over time was observed only for CH regarding consistency. CH stimulated a dentin hardening process in a statistically higher number of cases than wax (86.7% vs. 33.3%; P = 0.008). Contamination changed significantly over time in CH and wax without significant difference between groups. It was concluded that CH and wax arrested the carious process of the remaining carious dentin after indirect pulp capping, but CH showed superior dentin color and consistency after 3 months.

  13. Repeated tender point injections of granisetron alleviate chronic myofascial pain--a randomized, controlled, double-blinded trial.

    PubMed

    Christidis, Nikolaos; Omrani, Shahin; Fredriksson, Lars; Gjelset, Mattias; Louca, Sofia; Hedenberg-Magnusson, Britt; Ernberg, Malin

    2015-01-01

Serotonin (5-HT) mediates pain via peripheral 5-HT3 receptors. Results from a few studies indicate that intramuscular injections of 5-HT3 antagonists may reduce musculoskeletal pain. The aim of this study was to investigate whether repeated intramuscular tender-point injections of the 5-HT3 antagonist granisetron alleviate pain in patients with myofascial temporomandibular disorders (M-TMD). This prospective, randomized, controlled, double-blind, parallel-arm trial (RCT) was carried out at two centers in Stockholm, Sweden. The randomization was performed, by a researcher who did not participate in data collection, with an internet-based application (www.randomization.com). 40 patients with a diagnosis of M-TMD according to the Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD) were randomized to receive repeated injections, one week apart, with either granisetron (GRA; 3 mg) or isotonic saline as control (CTR). The median weekly pain intensities decreased significantly at all follow-ups (1, 2 and 6 months) in the GRA group (Friedman test; P < 0.05), but not in the CTR group (Friedman test; P > 0.075). The numbers needed to treat (NNT) were 4 at the 1- and 6-month follow-ups and 3.3 at the 2-month follow-up, in favor of granisetron. Repeated intramuscular tender-point injections with the serotonin type 3 antagonist granisetron thus provide a new pharmacological treatment possibility for patients with myofascial pain, with a clinically relevant pain-reducing effect in the temporomandibular region in both the short and the long term. European Clinical Trials Database 2005-006042-41; ClinicalTrials.gov NCT02230371.

  14. Study protocol: a randomized controlled trial investigating the effects of a psychosexual training program for adolescents with autism spectrum disorder.

    PubMed

    Visser, Kirsten; Greaves-Lord, Kirstin; Tick, Nouchka T; Verhulst, Frank C; Maras, Athanasios; van der Vegt, Esther J M

    2015-08-28

Previous research shows that adolescents with autism spectrum disorder (ASD) run several risks in their psychosexual development and that these adolescents can have limited access to reliable information on puberty and sexuality, emphasizing the need for specific guidance of adolescents with ASD in their psychosexual development. Few studies have investigated the effects of psychosexual training programs for adolescents with ASD, and to date no randomized controlled trials are available to study the effects of psychosexual interventions for this target group. The randomized controlled trial (RCT) described in this study protocol aims to investigate the effects of the Tackling Teenage Training (TTT) program on the psychosexual development of adolescents with ASD. This parallel clinical trial, conducted in the South-West of the Netherlands, has a simple equal randomization design with an intervention and a waiting-list control condition. Two hundred adolescents and their parents participate in this study. We assess the participants in both conditions using self-report as well as parent-report questionnaires at three time points during 1 year: at baseline (T1), post-treatment (T2), and at follow-up (T3). To our knowledge, the current study is the first to use a randomized controlled design to study the effects of a psychosexual training program for adolescents with ASD. It has a number of methodological strengths, namely a large sample size, a wide range of functionally relevant outcome measures, the use of multiple informants, and a standardized research and intervention protocol. Some limitations of the described study are also identified, for instance the lack of a comparison between two treatment conditions and the absence of blinded observational measures to investigate the ecological validity of the research results. Dutch Trial Register NTR2860. Registered on 20 April 2011.

  15. Reducing patient delay in Acute Coronary Syndrome (RAPiD): research protocol for a web-based randomized controlled trial examining the effect of a behaviour change intervention.

    PubMed

    Farquharson, Barbara; Johnston, Marie; Smith, Karen; Williams, Brian; Treweek, Shaun; Dombrowski, Stephan U; Dougall, Nadine; Abhyankar, Purva; Grindle, Mark

    2017-05-01

    To evaluate the efficacy of a behaviour change technique-based intervention and compare two possible modes of delivery (text + visual and text-only) with usual care. Patient delay prevents many people from achieving optimal benefit of time-dependent treatments for acute coronary syndrome. Reducing delay would reduce mortality and morbidity, but interventions to change behaviour have had mixed results. Systematic inclusion of behaviour change techniques or a visual mode of delivery might improve the efficacy of interventions. A three-arm web-based, parallel randomized controlled trial of a theory-based intervention. The intervention comprises 12 behaviour change techniques systematically identified following systematic review and a consensus exercise undertaken with behaviour change experts. We aim to recruit n = 177 participants who have experienced acute coronary syndrome in the previous 6 months from a National Health Service Hospital. Consenting participants will be randomly allocated in equal numbers to one of three study groups: i) usual care, ii) usual care plus text-only behaviour change technique-based intervention or iii) usual care plus text + visual behaviour change technique-based intervention. The primary outcome will be the change in intention to phone an ambulance immediately with symptoms of acute coronary syndrome ≥15-minute duration, assessed using two randomized series of eight scenarios representing varied symptoms before and after delivery of the interventions or control condition (usual care). Funding granted January 2014. Positive results changing intentions would lead to a randomized controlled trial of the behaviour change intervention in clinical practice, assessing patient delay in the event of actual symptoms. Registered at ClinicalTrials.gov: NCT02820103. © 2016 John Wiley & Sons Ltd.

  16. The efficacy of traditional acupuncture on patients with chronic neck pain: study protocol of a randomized controlled trial.

    PubMed

    Yang, Yiling; Yan, Xiaoxia; Deng, Hongmei; Zeng, Dian; Huang, Jianpeng; Fu, Wenbin; Xu, Nenggui; Liu, Jianhua

    2017-07-10

    A large number of randomized trials on the use of acupuncture to treat chronic pain have been conducted. However, there is considerable controversy regarding the effectiveness of acupuncture. We designed a randomized trial involving patients with chronic neck pain (CNP) to investigate whether acupuncture is more effective than a placebo in treating CNP. A five-arm, parallel, single-blinded, randomized, sham-controlled trial was designed. Patients with CNP of more than 3 months' duration are being recruited from Guangdong Provincial Hospital of Chinese Medicine (China). Following examination, 175 patients will be randomized into one of five groups (35 patients in each group) as follows: a traditional acupuncture group (group A), a shallow-puncture group (group B), a non-acupoint acupuncture group (group C), a non-acupoint shallow-puncture group (group D) and a sham-puncture group (group E). The interventions will last for 20 min and will be carried out twice a week for 5 weeks. The primary outcome will be evaluated by changes in the Northwick Park Neck Pain Questionnaire (NPQ). Secondary outcomes will be measured by the pain threshold, the Short Form McGill Pain Questionnaire-2 (SF-MPQ-2), the 36-Item Short-Form Health Survey (SF-36) and diary entries. Analysis of the data will be performed at baseline, at the end of the intervention and at 3 months' follow-up. The safety of acupuncture will be evaluated at each treatment period. The purpose of this trial is to determine whether traditional acupuncture is more effective for chronic pain relief than sham acupuncture in adults with CNP, and to determine which type of sham acupuncture is the optimal control for clinical trials. Chinese Clinical Trial Registry: ChiCTR-IOR-15006886 . Registered on 2 July 2015.

  17. Compact holographic optical neural network system for real-time pattern recognition

    NASA Astrophysics Data System (ADS)

    Lu, Taiwei; Mintzer, David T.; Kostrzewski, Andrew A.; Lin, Freddie S.

    1996-08-01

    One of the important characteristics of artificial neural networks is their capability for massive interconnection and parallel processing. Recently, specialized electronic neural network processors and VLSI neural chips have been introduced in the commercial market. The number of parallel channels they can handle is limited because of the limited parallel interconnections that can be implemented with 1D electronic wires. High-resolution pattern recognition problems can require a large number of neurons for parallel processing of an image. This paper describes a holographic optical neural network (HONN) that is based on high-resolution volume holographic materials and is capable of performing massive 3D parallel interconnection of tens of thousands of neurons. A HONN with more than 16,000 neurons packaged in an attaché case has been developed. Rotation-shift-scale-invariant pattern recognition operations have been demonstrated with this system. System parameters such as the signal-to-noise ratio, dynamic range, and processing speed are discussed.

  18. Optimum parallel step-sector bearing lubricated with an incompressible fluid

    NASA Technical Reports Server (NTRS)

    Hamrock, B. J.

    1983-01-01

    The dimensionless parameters normally associated with a step sector thrust bearing are the film thickness ratio, the dimensionless step location, the number of sectors, the radius ratio, and the angular extent of the lubrication feed groove. The optimum number of sectors and the optimum parallel step configuration for a step sector thrust bearing are presented, considering load capacity or stiffness and assuming an incompressible fluid.

  19. A connectionist model for diagnostic problem solving

    NASA Technical Reports Server (NTRS)

    Peng, Yun; Reggia, James A.

    1989-01-01

    A competition-based connectionist model for solving diagnostic problems is described. The problems considered are computationally difficult in that (1) multiple disorders may occur simultaneously and (2) a global optimum in the space exponential to the total number of possible disorders is sought as a solution. The diagnostic problem is treated as a nonlinear optimization problem, and global optimization criteria are decomposed into local criteria governing node activation updating in the connectionist model. Nodes representing disorders compete with each other to account for each individual manifestation, yet complement each other to account for all manifestations through parallel node interactions. When equilibrium is reached, the network settles into a locally optimal state. Three randomly generated examples of diagnostic problems, each of which has 1024 cases, were tested, and the decomposition plus competition plus resettling approach yielded very high accuracy.

  20. Memory Retrieval Given Two Independent Cues: Cue Selection or Parallel Access?

    ERIC Educational Resources Information Center

    Rickard, Timothy C.; Bajic, Daniel

    2004-01-01

    A basic but unresolved issue in the study of memory retrieval is whether multiple independent cues can be used concurrently (i.e., in parallel) to recall a single, common response. A number of empirical results, as well as potentially applicable theories, suggest that retrieval can proceed in parallel, though Rickard (1997) set forth a model that…

  1. MPI_XSTAR: MPI-based Parallelization of the XSTAR Photoionization Program

    NASA Astrophysics Data System (ADS)

    Danehkar, Ashkbiz; Nowak, Michael A.; Lee, Julia C.; Smith, Randall K.

    2018-02-01

    We describe a program for the parallel implementation of multiple runs of XSTAR, a photoionization code that is used to predict the physical properties of an ionized gas from its emission and/or absorption lines. The parallelization program, called MPI_XSTAR, has been developed and implemented in the C++ language by using the Message Passing Interface (MPI) protocol, a conventional standard of parallel computing. We have benchmarked parallel multiprocessing executions of XSTAR, using MPI_XSTAR, against a serial execution of XSTAR, in terms of the parallelization speedup and the computing resource efficiency. Our experience indicates that the parallel execution runs significantly faster than the serial execution; however, the efficiency in terms of computing resource usage decreases as the number of processors used in the parallel computation increases.
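    The speedup and efficiency metrics used in this benchmark can be sketched as follows. This is a minimal illustration of the two definitions, not MPI_XSTAR code; the timings below are hypothetical.

```python
def speedup(t_serial, t_parallel):
    """Parallel speedup: ratio of serial to parallel wall-clock time."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_procs):
    """Resource efficiency: speedup per processor (1.0 = ideal scaling)."""
    return speedup(t_serial, t_parallel) / n_procs

# Hypothetical timings: adding processors keeps cutting runtime,
# but by less than the processor count, so efficiency drops.
t_serial = 100.0
for n, t_par in [(2, 55.0), (4, 30.0), (8, 18.0)]:
    print(n, round(speedup(t_serial, t_par), 2),
          round(efficiency(t_serial, t_par, n), 2))
```

    This mirrors the abstract's observation: speedup grows with processor count while efficiency falls.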

  2. Employing online quantum random number generators for generating truly random quantum states in Mathematica

    NASA Astrophysics Data System (ADS)

    Miszczak, Jarosław Adam

    2013-01-01

    The presented package for the Mathematica computing system allows the harnessing of quantum random number generators (QRNG) for investigating the statistical properties of quantum states. The described package implements a number of functions for generating random states. The new version of the package adds the ability to use the on-line quantum random number generator service and implements new functions for retrieving lists of random numbers. Thanks to the introduced improvements, the new version provides faster access to high-quality sources of random numbers and can be used in simulations requiring large amounts of random data. New version program summary Program title: TRQS Catalogue identifier: AEKA_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEKA_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 18 134 No. of bytes in distributed program, including test data, etc.: 2 520 49 Distribution format: tar.gz Programming language: Mathematica, C. Computer: Any supporting Mathematica in version 7 or higher. Operating system: Any platform supporting Mathematica; tested with GNU/Linux (32 and 64 bit). RAM: Case-dependent Supplementary material: Fig. 1 mentioned below can be downloaded. Classification: 4.15. External routines: Quantis software library (http://www.idquantique.com/support/quantis-trng.html) Catalogue identifier of previous version: AEKA_v1_0 Journal reference of previous version: Comput. Phys. Comm. 183(2012)118 Does the new version supersede the previous version?: Yes Nature of problem: Generation of random density matrices and utilization of high-quality random numbers for the purpose of computer simulation. 
Solution method: Use of a physical quantum random number generator and an on-line service providing access to the source of true random numbers generated by a quantum random number generator. Reasons for new version: Added support for the high-speed on-line quantum random number generator and improved methods for retrieving lists of random numbers. Summary of revisions: The presented version provides two significant improvements. The first one is the ability to use the on-line Quantum Random Number Generation service developed by PicoQuant GmbH and the Nano-Optics groups at the Department of Physics of Humboldt University. The on-line service supported in the version 2.0 of the TRQS package provides faster access to true randomness sources constructed using the laws of quantum physics. The service is freely available at https://qrng.physik.hu-berlin.de/. The use of this service allows using the presented package without the need for a physical quantum random number generator. The second improvement introduced in this version is the ability to retrieve arrays of random data directly from the used source. This increases the speed of the random number generation, especially in the case of an on-line service, where it reduces the time necessary to establish the connection. Thanks to the speed improvement of the presented version, the package can now be used in simulations requiring larger amounts of random data. Moreover, the functions for generating random numbers provided by the current version of the package more closely follow the pattern of functions for generating pseudo-random numbers provided in Mathematica. Additional comments: Speed comparison: The implementation of the support for the QRNG on-line service provides a noticeable improvement in the speed of random number generation. For samples of real numbers of size 10^1, 10^2, …, 10^7, the times required to generate these samples using the Quantis USB device and the QRNG service are compared in Fig. 1. 
The presented results show that the use of the on-line service provides faster access to random numbers. One should note, however, that the speed gain can increase or decrease depending on the connection speed between the computer and the server providing random numbers. Running time: Depends on the used source of randomness and the amount of random data used in the experiment. References: [1] M. Wahl, M. Leifgen, M. Berlin, T. Röhlicke, H.-J. Rahn, O. Benson, An ultrafast quantum random number generator with provably bounded output bias based on photon arrival time measurements, Applied Physics Letters, Vol. 98, 171105 (2011). http://dx.doi.org/10.1063/1.3578456.
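    The batching improvement described above (fetching arrays of random data in one request instead of one number at a time) can be sketched generically. This is not the TRQS package's code; `os.urandom` merely stands in for the true randomness source, and the class name is hypothetical.

```python
import os

class BatchedRandomSource:
    """Fetch random bytes in large blocks and serve them from a local
    buffer, so that per-number overhead (e.g. a network round-trip to
    an on-line QRNG service) is paid once per block, not once per value.
    os.urandom stands in for the true randomness source."""

    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.buffer = b""

    def _refill(self):
        # One bulk fetch amortizes the connection cost over many values.
        self.buffer = os.urandom(self.block_size)

    def random_uint32(self):
        if len(self.buffer) < 4:
            self._refill()
        value = int.from_bytes(self.buffer[:4], "big")
        self.buffer = self.buffer[4:]
        return value

src = BatchedRandomSource()
sample = [src.random_uint32() for _ in range(5)]
print(len(sample))
```

    With a 4096-byte block, roughly a thousand 32-bit values are served per fetch, which is the same amortization idea that makes the array-retrieval functions in version 2.0 faster.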

  3. VARIANCE ANISOTROPY IN KINETIC PLASMAS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parashar, Tulasi N.; Matthaeus, William H.; Oughton, Sean

    Solar wind fluctuations admit well-documented anisotropies of the variance matrix, or polarization, related to the mean magnetic field direction. Typically, one finds a ratio of perpendicular variance to parallel variance of the order of 9:1 for the magnetic field. Here we study the question of whether a kinetic plasma spontaneously generates and sustains parallel variances when initiated with only perpendicular variance. We find that parallel variance grows and saturates at about 5% of the perpendicular variance in a few nonlinear times irrespective of the Reynolds number. For sufficiently large systems (Reynolds numbers) the variance approaches values consistent with the solar wind observations.
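    The perpendicular-to-parallel variance ratio quoted above can be computed from a set of fluctuation vectors by projecting onto the mean-field direction. A minimal sketch (the synthetic data and the ~9:1 target are illustrative, not the paper's simulation output):

```python
import numpy as np

def variance_anisotropy(b, b0):
    """Ratio of field variance perpendicular vs parallel to the mean-field
    direction b0, from an (N, 3) array of fluctuation vectors b."""
    b0_hat = b0 / np.linalg.norm(b0)
    db = b - b.mean(axis=0)                 # remove the mean fluctuation
    par = db @ b0_hat                       # parallel component
    perp = db - np.outer(par, b0_hat)       # perpendicular remainder
    return perp.var(axis=0).sum() / par.var()

# Synthetic fluctuations: variance 4.5 in each perpendicular direction,
# 1.0 parallel, mimicking the ~9:1 solar-wind ratio quoted above.
rng = np.random.default_rng(0)
b0 = np.array([0.0, 0.0, 1.0])
fluct = rng.standard_normal((20000, 3)) * np.array([4.5**0.5, 4.5**0.5, 1.0])
print(round(variance_anisotropy(fluct, b0), 1))
```

    With 20 000 samples, the estimate lands close to the constructed ratio of 9.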

  4. SPSS and SAS programs for determining the number of components using parallel analysis and Velicer's MAP test.

    PubMed

    O'Connor, B P

    2000-08-01

    Popular statistical software packages do not have the proper procedures for determining the number of components in factor and principal components analyses. Parallel analysis and Velicer's minimum average partial (MAP) test are validated procedures, recommended widely by statisticians. However, many researchers continue to use alternative, simpler, but flawed procedures, such as the eigenvalues-greater-than-one rule. Use of the proper procedures might be increased if these procedures could be conducted within familiar software environments. This paper describes brief and efficient programs for using SPSS and SAS to conduct parallel analyses and the MAP test.
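    The logic of Horn's parallel analysis described here can be sketched outside SPSS/SAS as well: compare the observed eigenvalues against eigenvalues obtained from random data of the same shape, and retain components that exceed the random benchmark. A hedged NumPy sketch, not O'Connor's program (the synthetic two-factor data set is hypothetical):

```python
import numpy as np

def parallel_analysis(data, n_iter=100, percentile=95, seed=0):
    """Horn's parallel analysis: retain components whose correlation-matrix
    eigenvalues exceed the given percentile of eigenvalues from random
    normal data of the same (n, p) shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rand_eigs = np.empty((n_iter, p))
    for i in range(n_iter):
        r = rng.standard_normal((n, p))
        rand_eigs[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(r, rowvar=False)))[::-1]
    threshold = np.percentile(rand_eigs, percentile, axis=0)
    return int(np.sum(obs > threshold))

# Synthetic data with two strong underlying components plus noise.
rng = np.random.default_rng(1)
factors = rng.standard_normal((300, 2))
loadings = rng.standard_normal((2, 8))
data = factors @ loadings + 0.3 * rng.standard_normal((300, 8))
print(parallel_analysis(data))  # typically recovers the 2 built-in components
```

    Unlike the eigenvalues-greater-than-one rule criticized in the abstract, the retention threshold here adapts to the sample size and the number of variables.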

  5. Comparison of the pharmacokinetics and safety of three formulations of infliximab (CT-P13, EU-approved reference infliximab and the US-licensed reference infliximab) in healthy subjects: a randomized, double-blind, three-arm, parallel-group, single-dose, Phase I study.

    PubMed

    Park, Won; Lee, Sang Joon; Yun, Jihye; Yoo, Dae Hyun

    2015-01-01

    To compare the pharmacokinetics (PK), safety and tolerability of biosimilar infliximab (CT-P13 [Remsima(®), Inflectra(®)]) with two formulations of the reference medicinal product (RMP) (Remicade(®)) from either Europe (EU-RMP) or the USA (US-RMP). This was a double-blind, three-arm, parallel-group study (EudraCT number: 2013-003173-10). Healthy subjects received single doses (5 mg/kg) of CT-P13 (n = 71), EU-RMP (n = 71) or US-RMP (n = 71). The primary objective was to compare the PK profiles for the three formulations. Assessments of comparative safety and tolerability were secondary objectives. Baseline demographics were well balanced across the three groups. Primary end points (Cmax, AUClast and AUCinf) were equivalent between all formulations (CT-P13 vs EU-RMP; CT-P13 vs US-RMP; EU-RMP vs US-RMP). All other PK end points supported the high similarity of the three treatments. Tolerability profiles of the formulations were similar. The PK profile of CT-P13 is highly similar to EU-RMP and US-RMP. All three formulations were equally well tolerated.

  6. A Fast Multiple Sampling Method for Low-Noise CMOS Image Sensors With Column-Parallel 12-bit SAR ADCs.

    PubMed

    Kim, Min-Kyu; Hong, Seong-Kwan; Kwon, Oh-Kyong

    2015-12-26

    This paper presents a fast multiple sampling method for low-noise CMOS image sensor (CIS) applications with column-parallel successive approximation register analog-to-digital converters (SAR ADCs). The 12-bit SAR ADC using the proposed multiple sampling method decreases the A/D conversion time by repeatedly converting a pixel output to 4-bit after the first 12-bit A/D conversion, reducing noise of the CIS by one over the square root of the number of samplings. The area of the 12-bit SAR ADC is reduced by using a 10-bit capacitor digital-to-analog converter (DAC) with four scaled reference voltages. In addition, a simple up/down counter-based digital processing logic is proposed to perform complex calculations for multiple sampling and digital correlated double sampling. To verify the proposed multiple sampling method, a 256 × 128 pixel array CIS with 12-bit SAR ADCs was fabricated using a 0.18 μm CMOS process. The measurement results show that the proposed multiple sampling method reduces each A/D conversion time from 1.2 μs to 0.45 μs and random noise from 848.3 μV to 270.4 μV, achieving a dynamic range of 68.1 dB and an SNR of 39.2 dB.
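    The 1/√N noise-reduction rule invoked in this abstract can be checked with a toy simulation: averaging N independent noisy readings of the same pixel shrinks the read-noise standard deviation by √N. A minimal sketch assuming Gaussian noise (the pixel value and trial counts are hypothetical; only the single-sample noise figure of 848.3 μV is taken from the abstract):

```python
import random
import statistics

def averaged_reading(true_value, sigma, n_samples, rng):
    """Average of n noisy samples of the same pixel value."""
    return sum(true_value + rng.gauss(0.0, sigma)
               for _ in range(n_samples)) / n_samples

rng = random.Random(42)
sigma = 848.3e-6   # single-sample noise from the abstract, in volts
trials = 2000
for n in (1, 16):
    readings = [averaged_reading(0.5, sigma, n, rng) for _ in range(trials)]
    print(n, statistics.pstdev(readings))  # n=16 noise is ~1/4 of n=1
```

    Averaging 16 samples cuts the noise by about a factor of 4, the same scaling the sensor exploits by converting only the low-order 4 bits on repeat samplings.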

  7. Split-mouth and parallel-arm trials to compare pain with intraosseous anaesthesia delivered by the computerised Quicksleeper system and conventional infiltration anaesthesia in paediatric oral healthcare: protocol for a randomised controlled trial

    PubMed Central

    Smaïl-Faugeron, Violaine; Muller-Bolla, Michèle; Sixou, Jean-Louis; Courson, Frédéric

    2015-01-01

    Introduction Local anaesthesia is commonly used in paediatric oral healthcare. Infiltration anaesthesia is the most frequently used, but recent developments in anaesthesia techniques have introduced an alternative: intraosseous anaesthesia. We propose to perform a split-mouth and parallel-arm multicentre randomised controlled trial (RCT) comparing the pain caused by the insertion of the needle for the injection of conventional infiltration anaesthesia, and intraosseous anaesthesia by the computerised QuickSleeper system, in children and adolescents. Methods and analysis Inclusion criteria are patients 7–15 years old with at least 2 first permanent molars belonging to the same dental arch (for the split-mouth RCT) or with a first permanent molar (for the parallel-arm RCT) requiring conservative or endodontic treatment limited to pulpotomy. The setting of this study is the Department of Paediatric Dentistry at 3 University dental hospitals in France. The primary outcome measure will be pain reported by the patient on a visual analogue scale concerning the insertion of the needle and the injection/infiltration. Secondary outcomes are latency, need for additional anaesthesia during the treatment and pain felt during the treatment. We will use a computer-generated permuted-block randomisation sequence for allocation to anaesthesia groups. The random sequences will be stratified by centre (and by dental arch for the parallel-arm RCT). Only participants will be blinded to group assignment. Data will be analysed by the intent-to-treat principle. In all, 160 patients will be included (30 in the split-mouth RCT, 130 in the parallel-arm RCT). Ethics and dissemination This protocol has been approved by the French ethics committee for the protection of people (Comité de Protection des Personnes, Ile de France I) and will be conducted in full accordance with accepted ethical principles. 
Findings will be reported in scientific publications and at research conferences, and in project summary papers for participants. Trial registration number ClinicalTrials.gov NCT02084433. PMID:26163031

  8. Molecular-dynamics simulations of self-assembled monolayers (SAM) on parallel computers

    NASA Astrophysics Data System (ADS)

    Vemparala, Satyavani

    The purpose of this dissertation is to investigate the properties of self-assembled monolayers, particularly alkanethiols and poly(ethylene glycol)-terminated alkanethiols. These simulations are based on realistic interatomic potentials and require scalable and portable multiresolution algorithms implemented on parallel computers. Large-scale molecular dynamics simulations of self-assembled alkanethiol monolayer systems have been carried out using an all-atom model involving a million atoms to investigate their structural properties as a function of temperature, lattice spacing and molecular chain-length. Results show that the alkanethiol chains tilt from the surface normal by a collective angle of 25° along the next-nearest-neighbor direction at 300K. At 350K the system transforms to a disordered phase characterized by small tilt angle, flexible tilt direction, and random distribution of backbone planes. With increasing lattice spacing, a, the tilt angle increases rapidly from a nearly zero value at a = 4.7 Å to as high as 34° at a = 5.3 Å at 300K. We also studied the effect of end groups on the tilt structure of SAM films. We characterized the system with respect to temperature, the alkane chain length, lattice spacing, and the length of the end group. We found that the gauche defects were predominant only in the tails, and the gauche defects increased with the temperature and number of EG units. The effect of an electric field on the structure of a poly(ethylene glycol) (PEG)-terminated alkanethiol self-assembled monolayer (SAM) on gold has been studied using a parallel molecular dynamics method. An applied electric field triggers a conformational transition from all-trans to a mostly gauche conformation. The polarity of the electric field has a significant effect on the surface structure of PEG leading to a profound effect on the hydrophilicity of the surface. 
The electric field applied anti-parallel to the surface normal causes a reversible transition to an ordered state in which the oxygen atoms are exposed. On the other hand, an electric field applied in a direction parallel to the surface normal introduces considerable disorder in the system and the oxygen atoms are buried inside.

  9. Global Load Balancing with Parallel Mesh Adaption on Distributed-Memory Systems

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Oliker, Leonid; Sohn, Andrew

    1996-01-01

    Dynamic mesh adaption on unstructured grids is a powerful tool for efficiently computing unsteady problems to resolve solution features of interest. Unfortunately, this causes load imbalance among processors on a parallel machine. This paper describes the parallel implementation of a tetrahedral mesh adaption scheme and a new global load balancing method. A heuristic remapping algorithm is presented that assigns partitions to processors such that the redistribution cost is minimized. Results indicate that the parallel performance of the mesh adaption code depends on the nature of the adaption region and show a 35.5X speedup on 64 processors of an SP2 when 35% of the mesh is randomly adapted. For large-scale scientific computations, our load balancing strategy gives almost a sixfold reduction in solver execution times over non-balanced loads. Furthermore, our heuristic remapper yields processor assignments that are less than 3% off the optimal solutions but requires only 1% of the computational time.
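    The heuristic remapping idea in this abstract (assign partitions to processors so that redistribution cost is minimized) can be sketched with a simple greedy pass over partition-processor overlaps. This is an illustrative toy, not the authors' algorithm; the `similarity` matrix and its values are hypothetical.

```python
def greedy_remap(similarity):
    """Greedy heuristic: taking the largest overlaps first, assign each new
    partition to the processor that already holds the most of its data.
    similarity[i][j] = amount of partition i's data already on processor j
    (square matrix: one partition per processor)."""
    n = len(similarity)
    pairs = sorted(((similarity[i][j], i, j)
                    for i in range(n) for j in range(n)), reverse=True)
    assigned_parts, used_procs, mapping = set(), set(), {}
    for overlap, part, proc in pairs:
        if part not in assigned_parts and proc not in used_procs:
            mapping[part] = proc
            assigned_parts.add(part)
            used_procs.add(proc)
    return mapping

# Diagonal-heavy overlaps: each partition's data mostly stays put,
# so the greedy pass keeps every partition on its current processor.
sim = [[5, 1, 0],
       [2, 6, 1],
       [0, 2, 7]]
print(greedy_remap(sim))
```

    Maximizing retained data is equivalent to minimizing the data that must move, which is why such cheap heuristics can land within a few percent of the optimal assignment, as the abstract reports.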

  10. Memory-based frame synchronizer. [for digital communication systems

    NASA Technical Reports Server (NTRS)

    Stattel, R. J.; Niswander, J. K. (Inventor)

    1981-01-01

    A frame synchronizer for use in digital communications systems wherein data formats can be easily and dynamically changed is described. The use of memory array elements provides increased flexibility in format selection and sync word selection in addition to real time reconfiguration ability. The frame synchronizer comprises a serial-to-parallel converter which converts a serial input data stream to a constantly changing parallel data output. This parallel data output is supplied to programmable sync word recognizers each consisting of a multiplexer and a random access memory (RAM). The multiplexer is connected to both the parallel data output and an address bus which may be connected to a microprocessor or computer for purposes of programming the sync word recognizer. The RAM is used as an associative memory or decoder and is programmed to identify a specific sync word. Additional programmable RAMs are used as counter decoders to define word bit length, frame word length, and paragraph frame length.
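    The RAM-as-decoder scheme described here can be modeled in a few lines: the parallel data word addresses a lookup table whose contents are 1 only at the programmed sync word. A hedged software sketch of the hardware idea (the stream contents and word width are hypothetical):

```python
def program_recognizer(sync_word, word_bits):
    """Fill a RAM-style lookup table: address = parallel data word,
    content = 1 only at the programmed sync word (RAM acts as a decoder).
    Reprogramming for a new format just means rewriting the table."""
    return [1 if addr == sync_word else 0 for addr in range(1 << word_bits)]

def scan_stream(bits, sync_word, word_bits):
    """Slide a word-wide window over the serial stream (the serial-to-
    parallel converter) and look each word up in the RAM."""
    ram = program_recognizer(sync_word, word_bits)
    hits = []
    for i in range(len(bits) - word_bits + 1):
        word = int(bits[i:i + word_bits], 2)
        if ram[word]:
            hits.append(i)
    return hits

# 8-bit sync word 0xEB (11101011) embedded in a bit stream at offset 4.
stream = "0101" + "11101011" + "0011010"
print(scan_stream(stream, 0xEB, 8))  # [4]
```

    Changing the sync word or word length only rewrites table contents, which is the flexibility the memory-based design claims over hard-wired correlators.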

  11. Fencing data transfers in a parallel active messaging interface of a parallel computer

    DOEpatents

    Blocksome, Michael A.; Mamidala, Amith R.

    2015-06-02

    Fencing data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task; the compute nodes coupled for data communications through the PAMI and through data communications resources including at least one segment of shared random access memory; including initiating execution through the PAMI of an ordered sequence of active SEND instructions for SEND data transfers between two endpoints, effecting deterministic SEND data transfers through a segment of shared memory; and executing through the PAMI, with no FENCE accounting for SEND data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all SEND instructions initiated prior to execution of the FENCE instruction for SEND data transfers between the two endpoints.

  12. Fencing direct memory access data transfers in a parallel active messaging interface of a parallel computer

    DOEpatents

    Blocksome, Michael A.; Mamidala, Amith R.

    2013-09-03

    Fencing direct memory access (`DMA`) data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to segments of shared random access memory through which the DMA controllers deliver data communications deterministically, including initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and a segment of shared memory; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints.

  13. Fencing data transfers in a parallel active messaging interface of a parallel computer

    DOEpatents

    Blocksome, Michael A.; Mamidala, Amith R.

    2015-06-09

    Fencing data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task; the compute nodes coupled for data communications through the PAMI and through data communications resources including at least one segment of shared random access memory; including initiating execution through the PAMI of an ordered sequence of active SEND instructions for SEND data transfers between two endpoints, effecting deterministic SEND data transfers through a segment of shared memory; and executing through the PAMI, with no FENCE accounting for SEND data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all SEND instructions initiated prior to execution of the FENCE instruction for SEND data transfers between the two endpoints.

  14. Fencing direct memory access data transfers in a parallel active messaging interface of a parallel computer

    DOEpatents

    Blocksome, Michael A; Mamidala, Amith R

    2014-02-11

    Fencing direct memory access (`DMA`) data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to segments of shared random access memory through which the DMA controllers deliver data communications deterministically, including initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and a segment of shared memory; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints.

  15. True random numbers from amplified quantum vacuum.

    PubMed

    Jofre, M; Curty, M; Steinlechner, F; Anzolin, G; Torres, J P; Mitchell, M W; Pruneri, V

    2011-10-10

    Random numbers are essential for applications ranging from secure communications to numerical simulation and quantitative finance. Algorithms can rapidly produce pseudo-random outcomes, series of numbers that mimic most properties of true random numbers, while quantum random number generators (QRNGs) exploit intrinsic quantum randomness to produce true random numbers. Single-photon QRNGs are conceptually simple but produce few random bits per detection. In contrast, vacuum fluctuations are a vast resource for QRNGs: they are broad-band and thus can encode many random bits per second. Direct recording of vacuum fluctuations is possible, but requires shot-noise-limited detectors, at the cost of bandwidth. We demonstrate efficient conversion of vacuum fluctuations to true random bits using optical amplification of vacuum and interferometry. Using commercially available optical components we demonstrate a QRNG at a bit rate of 1.11 Gbps. The proposed scheme has the potential to be extended to 10 Gbps and even up to 100 Gbps by taking advantage of high speed modulation sources and detectors for optical fiber telecommunication devices.

  16. Twice-weekly aripiprazole for treating children and adolescents with tic disorder, a randomized controlled clinical trial.

    PubMed

    Ghanizadeh, Ahmad

    2016-01-01

    Treating tic disorder is challenging. No trial has ever examined whether twice-weekly aripiprazole is effective for treating tic disorders. Participants of this 8-week randomized controlled parallel-group clinical trial were a clinical sample of 36 children and adolescents with tic disorder. The Yale Global Tic Severity Scale was used to assess the outcome. Both groups received a daily dosage of aripiprazole for the first 14 days. Then, one group continued to receive a daily dose of aripiprazole while the other group received a twice-weekly dosage of aripiprazole for the next 46 days. The patients were assessed at baseline and at weeks 2, 4, and 8. Tic scores decreased significantly in both groups: 22.8 (18.5) versus 22.0 (11.6). Moreover, there was no between-group difference. The final mean (SD) score of motor and vocal tics in the group treated with daily treatment was not significantly different from the twice-weekly group (Cohen's d = 0.36). The odds ratios for sedation and increased appetite were 3.05 and 3, respectively. For the first time, the current findings support that the efficacy of twice-weekly aripiprazole was not different from that of daily treatment. The rate of drowsiness in the twice-weekly treatment group was less than that of the daily treatment group. This trial was registered at http://www.irct.ir. The registration number of this trial was: IRCT201312263930N32. http://www.irct.ir/searchresult.php?id=3930&number=32.

  17. Effectiveness of hormone therapy for treating dry eye syndrome in postmenopausal women: a randomized trial.

    PubMed

    Piwkumsribonruang, Narongchai; Somboonporn, Woraruk; Luanratanakorn, Patanaree; Kaewrudee, Srinaree; Tharnprisan, Piangjit; Soontrapa, Sugree

    2010-06-01

    The efficacy of hormone therapy (HT) on dry eye syndrome remains debatable. To study the efficacy of HT on dry eye syndrome, a randomized controlled, double blind, parallel group, community-based study in 42 post-menopausal patients was conducted. The patients had dry eye syndrome and were not taking any medications. They were assigned to one of two groups. Group A comprised 21 patients given transdermal 17 beta-estradiol (50 mg/day) and medroxyprogesterone acetate (2.5 mg/day) continuously for three months, and group B comprised 21 patients given both transdermal and oral placebo. All participants in the study were included in the final analysis. The improvement of dry eye symptoms was measured by visual analog scale, tear secretion, intraocular pressure, corneal thickness, and tear breakup time determined before treatment and at 6 and 12 weeks of treatment. At 12 weeks, the number of patients who reported improvement of dry eye symptoms was greater in the HT group than that in the placebo group. However, the difference was not statistically significant (RR 0.25, 95% CI 0.04-2.80 and 0.60, 95% CI 0.33-2.03 in right and left eye, respectively). For other parameters, there was no significant difference between the two groups. According to the present study, there is no strong evidence to support the use of HT for treating dry eye syndrome. The limited number of participants included in the present study may have contributed to the insignificant effects.

  18. Fast and Precise Emulation of Stochastic Biochemical Reaction Networks With Amplified Thermal Noise in Silicon Chips.

    PubMed

    Kim, Jaewook; Woo, Sung Sik; Sarpeshkar, Rahul

    2018-04-01

    The analysis and simulation of complex interacting biochemical reaction pathways in cells is important in all of systems biology and medicine. Yet, the dynamics of even a modest number of noisy or stochastic coupled biochemical reactions is extremely time-consuming to simulate. In large part, this is because of the high cost of random-number and Poisson-process generation and the presence of stiff, coupled, nonlinear differential equations. Here, we demonstrate that we can amplify inherent thermal noise in chips to emulate randomness physically, thus alleviating these costs significantly. Concurrently, molecular flux in thermodynamic biochemical reactions maps to thermodynamic electronic current in a transistor such that stiff nonlinear biochemical differential equations are emulated exactly in compact, digitally programmable, highly parallel analog "cytomorphic" transistor circuits. For even small-scale systems involving just 80 stochastic reactions, our 0.35-μm BiCMOS chips yield a 311× speedup in the simulation time of Gillespie's stochastic algorithm over COPASI, a fast biochemical-reaction software simulator that is widely used in computational biology; they yield a 15 500× speedup over equivalent MATLAB stochastic simulations. The chip emulation results are consistent with these software simulations over a large range of signal-to-noise ratios. Most importantly, our physical emulation of Poisson chemical dynamics does not involve any inherently sequential processes and updates such that, unlike prior exact simulation approaches, it is parallelizable, asynchronous, and enables even more speedup for larger-size networks.
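As a point of reference for the software baseline above, Gillespie's direct method can be sketched in a few lines of Python. The birth-death system, rate constants, and function names below are illustrative, not taken from the paper.

```python
import random

def gillespie_birth_death(k_birth, k_death, x0, t_end, rng):
    """Gillespie direct method for the toy system:  0 -> X (rate k_birth),
    X -> 0 (rate k_death * x).  Returns the copy number at time t_end."""
    t, x = 0.0, x0
    while True:
        a1 = k_birth                # propensity of the birth reaction
        a2 = k_death * x            # propensity of the death reaction
        a0 = a1 + a2
        if a0 == 0.0:
            return x
        # waiting time to the next reaction is exponential with rate a0
        t += rng.expovariate(a0)
        if t > t_end:
            return x
        # pick which reaction fires, with probability proportional to propensity
        if rng.random() * a0 < a1:
            x += 1
        else:
            x -= 1

rng = random.Random(42)
# the stationary mean of this birth-death process is k_birth / k_death = 50
samples = [gillespie_birth_death(5.0, 0.1, 0, 200.0, rng) for _ in range(200)]
mean = sum(samples) / len(samples)
```

The per-event exponential and uniform draws in the loop are exactly the random-number overhead that the chip's amplified thermal noise replaces with physical randomness.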

  19. Nitrates for stable angina: a systematic review and meta-analysis of randomized clinical trials.

    PubMed

    Wei, Jiafu; Wu, Taixiang; Yang, Qing; Chen, Mao; Ni, Juan; Huang, Dejia

    2011-01-07

    To assess the effect (harms and benefits) of nitrates for stable angina, we searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE and EMBASE. Randomized controlled trials with both parallel and crossover designs were included. The following outcome measures were evaluated: number of angina attacks weekly and nitroglycerin consumption, quality of life, total exercise duration, time to onset of angina and time to 1 mm ST depression. Fifty-one trials with 3595 patients meeting the inclusion criteria were analyzed. Both intermittent and continuous regimens of nitrates lengthened exercise duration significantly, by 31 and 53 s respectively. The number of angina attacks was significantly reduced by 2.89 episodes weekly for continuous administration and 1.5 episodes weekly for intermittent administration. With intermittent administration, an increased dose provided an additional 21 s of exercise duration. With continuous administration, exercise duration was prolonged more in the low-dose group. Quality of life was not improved by continuous application of GTN patches and was similar between continuous and intermittent groups. In addition, 51.6% of patients receiving nitrates complained of headache. Long-term administration of nitrates was beneficial for angina prophylaxis and improved exercise performance but might be ineffective for improving quality of life. With a continuous regimen, low-dose nitrates were more effective than high-dose ones for improving exercise performance. By contrast, with an intermittent regimen, high-dose nitrates were more effective. In addition, intermittent administration could produce a zero-hour effect. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  20. Randomized controlled trial of an intervention to improve drug appropriateness in community-dwelling polymedicated elderly people.

    PubMed

    Campins, Lluís; Serra-Prat, Mateu; Gózalo, Inés; López, David; Palomera, Elisabet; Agustí, Clara; Cabré, Mateu

    2017-02-01

    Polypharmacy is frequent in the elderly population and is associated with potentially inappropriate prescribing and drug-related problems. To assess the effectiveness and safety of a medication evaluation programme for community-dwelling polymedicated elderly people. Randomized, open-label, multicentre, parallel-arm clinical trial with 1-year follow-up. Primary care centres. Polymedicated (≥8 drugs) elderly people (≥70 years). Pharmacist review of all medication according to the Good Palliative-Geriatric Practice algorithm and the Screening Tool of Older Person's Prescriptions-Screening Tool to Alert Doctors to the Right Treatment criteria, with recommendations to the patient's physician. Routine clinical practice. Recommendations and changes implemented, number of prescribed drugs, restarted drugs, primary care and emergency department consultations, hospitalizations and death. A total of 503 (252 intervention and 251 control) patients were recruited and 2709 drugs were evaluated. Overall, 26.5% of prescriptions were rated as potentially inappropriate and 21.5% were changed (9.1% discontinuation, 6.9% dose adjustment, 3.2% substitution and 2.2% new prescription). A mean of 2.62 recommendations per patient was made, and at least one recommendation was made for 95.6% of patients. The mean number of prescriptions per patient was significantly lower in the intervention group at 3- and 6-month follow-up. Discontinuations, dose adjustments and substitutions were significantly higher than in the control group at 3, 6 and 12 months. No differences were observed in the number of emergency visits, hospitalizations and deaths. The study intervention was safe and reduced potentially inappropriate medication, but did not reduce emergency visits and hospitalizations in polymedicated elderly people. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  1. The optimal number of personnel for good quality of chest compressions: A prospective randomized parallel manikin trial

    PubMed Central

    Huh, Ji Young; Nishiyama, Kei; Hayashi, Hiroyuki

    2017-01-01

    Background Long-duration chest compression (CC) deteriorates cardiopulmonary resuscitation (CPR) quality. The number of CC personnel appropriate for minimizing rescuer fatigue is largely unknown. Objective We determined the optimal number of personnel needed for 30-min CPR in a rescue team. Methods We conducted a randomized manikin trial on healthcare providers. We divided them into Groups A to D according to the rest period assigned to each group between the 2-min CCs. Groups A, B, C, and D performed CCs with 2, 4, 6, and 8 min rest periods, respectively. All participants performed CCs for 30 min; participants allocated to Groups A, B, C, and D performed eight, five, four, and three cycles, respectively. We compared the change in CC quality among these groups to investigate how the assigned rest period affects the maintenance of CC quality during the 30-min CPR. Results This study involved 143 participants (male 58 [41%]; mean age, 24 years) in the evaluation. As participants had shorter rest periods, the quality of their CCs, such as the sufficient-depth ratio, declined over the 30-min CPR. A significant decrease in the sufficient CC depth ratio was observed in the second-to-last cycle compared with the first cycle (median changes; A: −4%, B: −3%, C: 0%, and D: 0%; p < 0.01). Conclusions A 6-min rest period after 2 min of CC is vital in order to sustain the quality of CC during a 30-min CPR cycle. At least four personnel may be needed to reduce rescuer fatigue for a 30-min CPR cycle when the team consists of men and women. PMID:29267300

  2. Accurate single-scattering simulation of ice cloud using the invariant-imbedding T-matrix method and the physical-geometric optics method

    NASA Astrophysics Data System (ADS)

    Sun, B.; Yang, P.; Kattawar, G. W.; Zhang, X.

    2017-12-01

    The ice cloud single-scattering properties can be accurately simulated using the invariant-imbedding T-matrix method (IITM) and the physical-geometric optics method (PGOM). The IITM has been parallelized using the Message Passing Interface (MPI) to remove the memory limitation, so that the IITM can be used to obtain the single-scattering properties of ice clouds for sizes in the geometric optics regime. Furthermore, the results associated with random orientations can be obtained analytically once the T-matrix is given. The PGOM is also parallelized in conjunction with random orientations. The single-scattering properties of a hexagonal prism with height 400 (in units of lambda/(2*pi), where lambda is the incident wavelength) and an aspect ratio of 1 (defined as the height over twice the bottom side length) are given by using the parallelized IITM and compared with the counterparts using the parallelized PGOM. The two results are in close agreement. Furthermore, the integrated single-scattering properties, including the asymmetry factor, the extinction cross-section, and the scattering cross-section, are given over a complete size range. The present results show a smooth transition from the exact IITM solution to the approximate PGOM result. Because the IITM calculation has reached the geometric regime, the IITM and the PGOM can be efficiently employed to accurately compute the single-scattering properties of ice cloud over a wide spectral range.

  3. Lidcombe Program Webcam Treatment for Early Stuttering: A Randomized Controlled Trial.

    PubMed

    Bridgman, Kate; Onslow, Mark; O'Brian, Susan; Jones, Mark; Block, Susan

    2016-10-01

    Webcam treatment is potentially useful for health care in cases of early stuttering in which clients are isolated from specialized treatment services for geographic and other reasons. The purpose of the present trial was to compare outcomes of clinic and webcam deliveries of the Lidcombe Program treatment (Packman et al., 2015) for early stuttering. The design was a parallel, open plan, noninferiority randomized controlled trial of the standard Lidcombe Program treatment and the experimental webcam Lidcombe Program treatment. Participants were 49 children aged 3 years 0 months to 5 years 11 months at the start of treatment. Primary outcomes were the percentage of syllables stuttered at 9 months postrandomization and the number of consultations to complete Stage 1 of the Lidcombe Program. There was insufficient evidence of a posttreatment difference of the percentage of syllables stuttered between the standard and webcam Lidcombe Program treatments. There was insufficient evidence of a difference between the groups for typical stuttering severity measured by parents or the reported clinical relationship with the treating speech-language pathologist. This trial confirmed the viability of the webcam Lidcombe Program intervention. It appears to be as efficacious and economically viable as the standard, clinic Lidcombe Program treatment.

  4. Spiral bacterial foraging optimization method: Algorithm, evaluation and convergence analysis

    NASA Astrophysics Data System (ADS)

    Kasaiezadeh, Alireza; Khajepour, Amir; Waslander, Steven L.

    2014-04-01

    A biologically-inspired algorithm called Spiral Bacterial Foraging Optimization (SBFO) is investigated in this article. SBFO, previously proposed by the same authors, is a multi-agent, gradient-based algorithm that minimizes both the main objective function (local cost) and the distance between each agent and a temporary central point (global cost). A random jump is included normal to the line connecting each agent to the central point, which produces a vortex around the temporary central point. This random jump also helps cope with premature convergence, a known weakness of swarm-based optimization methods. The most important advantages of this algorithm are as follows: First, the algorithm combines a stochastic search with deterministic convergence. Second, as gradient-based methods are employed, faster convergence is demonstrated over GA, DE, BFO, etc. Third, the algorithm can be implemented in a parallel fashion in order to decentralize large-scale computation. Fourth, the algorithm has a limited number of tunable parameters, and finally SBFO has a strong certainty of convergence, which is rare in existing global optimization algorithms. A detailed convergence analysis of SBFO for continuously differentiable objective functions is also presented in this article.
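The update rule described above can be sketched in 2-D; the weights, jump magnitude, and quadratic test function below are illustrative assumptions, not the authors' exact scheme.

```python
import numpy as np

def sbfo_step(agents, grad, w_local=0.2, w_global=0.1, jump=0.05, rng=None):
    """One iteration of a simplified SBFO-style update.  Each agent descends
    its local gradient, is pulled toward the swarm's temporary central point,
    and takes a random jump normal to its line to the centre, producing the
    vortex motion described in the abstract."""
    centre = agents.mean(axis=0)
    new = np.empty_like(agents)
    for i, x in enumerate(agents):
        d = centre - x                        # pull toward the temporary centre
        n = np.array([-d[1], d[0]])           # direction normal to that line
        norm = np.linalg.norm(n)
        n = n / norm if norm > 0 else n
        new[i] = (x - w_local * grad(x) + w_global * d
                  + jump * rng.standard_normal() * n)
    return new

rng = np.random.default_rng(0)
agents = rng.uniform(-3, 3, size=(20, 2))
grad = lambda x: 2.0 * x                      # gradient of f(x) = ||x||^2
for _ in range(200):
    agents = sbfo_step(agents, grad, rng=rng)
```

With these illustrative weights the swarm contracts toward the minimizer at the origin while the normal jumps keep the agents circulating, which is the mechanism the abstract credits for avoiding premature convergence.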

  5. Random forests on Hadoop for genome-wide association studies of multivariate neuroimaging phenotypes

    PubMed Central

    2013-01-01

    Motivation Multivariate quantitative traits arise naturally in recent neuroimaging genetics studies, in which both structural and functional variability of the human brain is measured non-invasively through techniques such as magnetic resonance imaging (MRI). There is growing interest in detecting genetic variants associated with such multivariate traits, especially in genome-wide studies. Random forest (RF) classifiers, which are ensembles of decision trees, are amongst the best performing machine learning algorithms and have been successfully employed for the prioritisation of genetic variants in case-control studies. RFs can also be applied to produce gene rankings in association studies with multivariate quantitative traits, and to estimate genetic similarity measures that are predictive of the trait. However, in studies involving hundreds of thousands of SNPs and high-dimensional traits, a very large ensemble of trees must be inferred from the data in order to obtain reliable rankings, which makes the application of these algorithms computationally prohibitive. Results We have developed a parallel version of the RF algorithm for regression and genetic similarity learning tasks in large-scale population genetic association studies involving multivariate traits, called PaRFR (Parallel Random Forest Regression). Our implementation takes advantage of the MapReduce programming model and is deployed on Hadoop, an open-source software framework that supports data-intensive distributed applications. Notable speed-ups are obtained by introducing a distance-based criterion for node splitting in the tree estimation process. PaRFR has been applied to a genome-wide association study on Alzheimer's disease (AD) in which the quantitative trait consists of a high-dimensional neuroimaging phenotype describing longitudinal changes in the human brain structure. 
PaRFR provides a ranking of SNPs associated with this trait, and produces pair-wise measures of genetic proximity that can be directly compared to pair-wise measures of phenotypic proximity. Several known AD-related variants have been identified, including APOE4 and TOMM40. We also present experimental evidence supporting the hypothesis of a linear relationship between the number of top-ranked mutated states, or frequent mutation patterns, and an indicator of disease severity. Availability The Java codes are freely available at http://www2.imperial.ac.uk/~gmontana. PMID:24564704
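The map/reduce structure described above can be sketched with a toy ensemble: each "map" task fits one tree on a bootstrap resample, and the "reduce" step averages predictions. The one-split stump learner, dataset, and function names below are illustrative stand-ins for PaRFR's Hadoop trees, not its actual implementation.

```python
import random

def fit_stump(data):
    """Fit a one-split regression tree (stump) by minimising squared error."""
    best = None
    for thr in sorted({x for x, _ in data}):
        left = [y for x, y in data if x <= thr]
        right = [y for x, y in data if x > thr]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        err = (sum((y - ml) ** 2 for y in left)
               + sum((y - mr) ** 2 for y in right))
        if best is None or err < best[0]:
            best = (err, thr, ml, mr)
    _, thr, ml, mr = best
    return lambda x: ml if x <= thr else mr

def map_train(data, rng):
    """'Map' task: train one tree on a bootstrap resample of its data split."""
    boot = [rng.choice(data) for _ in data]
    return fit_stump(boot)

def reduce_predict(trees, x):
    """'Reduce' task: average the ensemble's predictions."""
    return sum(t(x) for t in trees) / len(trees)

rng = random.Random(1)
data = [(x / 10.0, (1.0 if x >= 50 else 0.0)) for x in range(100)]  # step at 5.0
forest = [map_train(data, rng) for _ in range(30)]
```

On Hadoop the map tasks run in parallel over data partitions, which is where the reported speed-ups come from; this sequential toy only shows the division of labour.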

  6. Random forests on Hadoop for genome-wide association studies of multivariate neuroimaging phenotypes.

    PubMed

    Wang, Yue; Goh, Wilson; Wong, Limsoon; Montana, Giovanni

    2013-01-01

    Multivariate quantitative traits arise naturally in recent neuroimaging genetics studies, in which both structural and functional variability of the human brain is measured non-invasively through techniques such as magnetic resonance imaging (MRI). There is growing interest in detecting genetic variants associated with such multivariate traits, especially in genome-wide studies. Random forest (RF) classifiers, which are ensembles of decision trees, are amongst the best performing machine learning algorithms and have been successfully employed for the prioritisation of genetic variants in case-control studies. RFs can also be applied to produce gene rankings in association studies with multivariate quantitative traits, and to estimate genetic similarity measures that are predictive of the trait. However, in studies involving hundreds of thousands of SNPs and high-dimensional traits, a very large ensemble of trees must be inferred from the data in order to obtain reliable rankings, which makes the application of these algorithms computationally prohibitive. We have developed a parallel version of the RF algorithm for regression and genetic similarity learning tasks in large-scale population genetic association studies involving multivariate traits, called PaRFR (Parallel Random Forest Regression). Our implementation takes advantage of the MapReduce programming model and is deployed on Hadoop, an open-source software framework that supports data-intensive distributed applications. Notable speed-ups are obtained by introducing a distance-based criterion for node splitting in the tree estimation process. PaRFR has been applied to a genome-wide association study on Alzheimer's disease (AD) in which the quantitative trait consists of a high-dimensional neuroimaging phenotype describing longitudinal changes in the human brain structure. 
PaRFR provides a ranking of SNPs associated with this trait, and produces pair-wise measures of genetic proximity that can be directly compared to pair-wise measures of phenotypic proximity. Several known AD-related variants have been identified, including APOE4 and TOMM40. We also present experimental evidence supporting the hypothesis of a linear relationship between the number of top-ranked mutated states, or frequent mutation patterns, and an indicator of disease severity. The Java codes are freely available at http://www2.imperial.ac.uk/~gmontana.

  7. 3-D inversion of airborne electromagnetic data parallelized and accelerated by local mesh and adaptive soundings

    NASA Astrophysics Data System (ADS)

    Yang, Dikun; Oldenburg, Douglas W.; Haber, Eldad

    2014-03-01

    Airborne electromagnetic (AEM) methods are highly efficient tools for assessing the Earth's conductivity structures in a large area at low cost. However, the configuration of AEM measurements, which typically have widely distributed transmitter-receiver pairs, makes the rigorous modelling and interpretation extremely time-consuming in 3-D. Excessive overcomputing can occur when working on a large mesh covering the entire survey area and inverting all soundings in the data set. We propose two improvements. The first is to use a locally optimized mesh for each AEM sounding for the forward modelling and calculation of sensitivity. This dedicated local mesh is small with fine cells near the sounding location and coarse cells far away in accordance with EM diffusion and the geometric decay of the signals. Once the forward problem is solved on the local meshes, the sensitivity for the inversion on the global mesh is available through quick interpolation. Using local meshes for AEM forward modelling avoids unnecessary computing on fine cells on a global mesh that are far away from the sounding location. Since local meshes are highly independent, the forward modelling can be efficiently parallelized over an array of processors. The second improvement is random and dynamic down-sampling of the soundings. Each inversion iteration only uses a random subset of the soundings, and the subset is reselected for every iteration. The number of soundings in the random subset, determined by an adaptive algorithm, is tied to the degree of model regularization. This minimizes the overcomputing caused by working with redundant soundings. Our methods are compared against conventional methods and tested with a synthetic example. We also invert a field data set that was previously considered to be too large to be practically inverted in 3-D. 
These examples show that our methodology can dramatically reduce the processing time of 3-D inversion to a practical level without losing resolution. Any existing modelling technique can be included into our framework of mesh decoupling and adaptive sampling to accelerate large-scale 3-D EM inversions.
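The random, dynamic down-sampling can be sketched on a toy linear inverse problem: each iteration draws a fresh random subset of soundings, with a subset size tied to the (cooling) regularization weight. The subset-size rule and all constants here are illustrative assumptions, not the paper's adaptive algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy linear forward problem: each "sounding" contributes one row of G.
n_soundings, n_model = 400, 20
G = rng.standard_normal((n_soundings, n_model))
m_true = rng.standard_normal(n_model)
d = G @ m_true

m = np.zeros(n_model)
beta = 1.0                          # Tikhonov regularization weight
for it in range(300):
    # Adaptive subset size: use fewer soundings while regularization is strong
    # (an illustrative stand-in for the paper's adaptive rule).
    n_sub = max(40, int(n_soundings / (1.0 + 5.0 * beta)))
    idx = rng.choice(n_soundings, size=n_sub, replace=False)
    Gs, ds = G[idx], d[idx]
    # Gradient of the subsampled misfit plus the regularization term
    grad = Gs.T @ (Gs @ m - ds) / n_sub + beta * m
    m -= 0.05 * grad
    beta *= 0.98                    # cool the regularization, as in inversion

err = np.linalg.norm(m - m_true) / np.linalg.norm(m_true)
```

Because each subset gradient is an unbiased estimate of the full-data gradient, the iteration converges to the same model while never touching all soundings at once, which is the source of the saving described above.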

  8. Public and private health-care financing with alternate public rationing rules.

    PubMed

    Cuff, Katherine; Hurley, Jeremiah; Mestelman, Stuart; Muller, Andrew; Nuscheler, Robert

    2012-02-01

    We develop a model to analyze parallel public and private health-care financing under two alternative public sector rationing rules: needs-based rationing and random rationing. Individuals vary in income and severity of illness. There is a limited supply of health-care resources used to treat individuals, causing some individuals to go untreated. Insurers (both public and private) must bid to obtain the necessary health-care resources to treat their beneficiaries. Given that individuals' willingness to pay for private insurance is increasing in income, the introduction of private insurance diverts treatment from relatively poor to relatively rich individuals. Further, the impact of introducing parallel private insurance depends on the rationing mechanism in the public sector. We show that the private health insurance market is smaller when the public sector rations according to need than when allocation is random. Copyright © 2010 John Wiley & Sons, Ltd.

  9. Parallel coding of conjunctions in visual search.

    PubMed

    Found, A

    1998-10-01

    Two experiments investigated whether the conjunctive nature of nontarget items influenced search for a conjunction target. Each experiment consisted of two conditions. In both conditions, the target item was a red bar tilted to the right, among white tilted bars and vertical red bars. As well as color and orientation, display items also differed in terms of size. Size was irrelevant to search in that the size of the target varied randomly from trial to trial. In one condition, the size of items correlated with the other attributes of display items (e.g., all red items were big and all white items were small). In the other condition, the size of items varied randomly (i.e., some red items were small and some were big, and some white items were big and some were small). Search was more efficient in the size-correlated condition, consistent with the parallel coding of conjunctions in visual search.

  10. [CMACPAR: a modified parallel neuro-controller for control processes].

    PubMed

    Ramos, E; Surós, R

    1999-01-01

    CMACPAR is a parallel neurocontroller oriented to real-time systems such as process control. Its main characteristics are a fast learning algorithm, a reduced number of calculations, great generalization capacity, local learning, and intrinsic parallelism. This type of neurocontroller is used in real-time applications required by refineries, hydroelectric plants, factories, etc. In this work we present the analysis and the parallel implementation of a modified scheme of the cerebellar model CMAC for n-dimensional space projection using a medium-granularity parallel neurocontroller. The proposed memory management allows a significant reduction in training time and required memory size.
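A minimal 1-D CMAC illustrates the tiling-based local learning this record refers to; the class layout and parameters below are illustrative, and the paper's n-dimensional, parallel modification is more elaborate.

```python
import math

class CMAC:
    """Minimal 1-D CMAC sketch.  Several offset tilings quantise the input;
    the prediction is the sum of one weight per tiling, and training touches
    only those few weights -- the 'local learning' that makes CMAC fast."""
    def __init__(self, n_tilings=8, n_tiles=32, lo=0.0, hi=1.0):
        self.n_tilings, self.n_tiles = n_tilings, n_tiles
        self.lo, self.width = lo, (hi - lo) / n_tiles
        self.w = [[0.0] * (n_tiles + 1) for _ in range(n_tilings)]

    def _active(self, x):
        # One active tile per tiling; tilings are offset by sub-tile shifts.
        for t in range(self.n_tilings):
            offset = t / self.n_tilings * self.width
            yield t, min(self.n_tiles, int((x - self.lo + offset) / self.width))

    def predict(self, x):
        return sum(self.w[t][i] for t, i in self._active(x))

    def train(self, x, target, lr=0.3):
        # LMS update spread evenly over the active weights only.
        err = target - self.predict(x)
        for t, i in self._active(x):
            self.w[t][i] += lr * err / self.n_tilings

net = CMAC()
for _ in range(30):
    for k in range(200):
        x = k / 200.0
        net.train(x, math.sin(2 * math.pi * x))
```

Because each update touches just `n_tilings` weights, many such updates over disjoint input regions can proceed independently, which is what makes the scheme a natural fit for the parallel implementation the abstract describes.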

  11. PlayStation purpura.

    PubMed

    Robertson, Susan J; Leonard, Jane; Chamberlain, Alex J

    2010-08-01

    A 16-year-old boy presented with a number of asymptomatic pigmented macules on the volar aspect of his index fingers. Dermoscopy of each macule revealed a parallel ridge pattern of homogeneous reddish-brown pigment. We propose that these lesions were induced by repetitive trauma from a Sony PlayStation 3 (Sony Corporation, Tokyo, Japan) vibration feedback controller. The lesions completely resolved following abstinence from gaming over a number of weeks. Although the parallel ridge pattern is typically the hallmark of early acral lentiginous melanoma, it may be observed in a limited number of benign entities, including subcorneal haematoma.

  12. Trinary signed-digit arithmetic using an efficient encoding scheme

    NASA Astrophysics Data System (ADS)

    Salim, W. Y.; Alam, M. S.; Fyath, R. S.; Ali, S. A.

    2000-09-01

    The trinary signed-digit (TSD) number system is of interest for ultrafast optoelectronic computing systems since it permits parallel carry-free addition and borrow-free subtraction of two arbitrary-length numbers in constant time. In this paper, a simple coding scheme is proposed to encode decimal numbers directly into TSD form. The coding scheme enables one to perform parallel one-step TSD arithmetic operations. The proposed coding scheme uses only a 5-combination coding table instead of the 625-combination table reported recently for the recoded TSD arithmetic technique.
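The digit set behind the TSD system can be illustrated with a balanced-ternary-style encoder in Python; this shows the standard radix-3 signed-digit representation with digits in {−1, 0, 1}, not the authors' 5-combination coding table.

```python
def to_tsd(n):
    """Encode an integer into trinary signed-digit (balanced-ternary-style)
    form: a list of digits in {-1, 0, 1}, least significant first."""
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:
            r = -1          # 2 = 3 - 1, so emit digit -1 and carry 1 upward
        digits.append(r)
        n = (n - r) // 3
    return digits or [0]

def from_tsd(digits):
    """Decode: value = sum(d_i * 3**i)."""
    return sum(d * 3 ** i for i, d in enumerate(digits))
```

Because every integer has such a signed-digit form, digit-parallel carry-free addition rules can operate position-wise on these digit vectors; the encoder above only demonstrates the representation itself.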

  13. One-step trinary signed-digit arithmetic using an efficient encoding scheme

    NASA Astrophysics Data System (ADS)

    Salim, W. Y.; Fyath, R. S.; Ali, S. A.; Alam, Mohammad S.

    2000-11-01

    The trinary signed-digit (TSD) number system is of interest for ultrafast optoelectronic computing systems since it permits parallel carry-free addition and borrow-free subtraction of two arbitrary-length numbers in constant time. In this paper, a simple coding scheme is proposed to encode decimal numbers directly into TSD form. The coding scheme enables one to perform parallel one-step TSD arithmetic operations. The proposed coding scheme uses only a 5-combination coding table instead of the 625-combination table reported recently for the recoded TSD arithmetic technique.

  14. Scalable hierarchical PDE sampler for generating spatially correlated random fields using nonmatching meshes: Scalable hierarchical PDE sampler using nonmatching meshes

    DOE PAGES

    Osborn, Sarah; Zulian, Patrick; Benson, Thomas; ...

    2018-01-30

    This work describes a domain embedding technique between two nonmatching meshes used for generating realizations of spatially correlated random fields with applications to large-scale sampling-based uncertainty quantification. The goal is to apply the multilevel Monte Carlo (MLMC) method for the quantification of output uncertainties of PDEs with random input coefficients on general and unstructured computational domains. We propose a highly scalable, hierarchical sampling method to generate realizations of a Gaussian random field on a given unstructured mesh by solving a reaction–diffusion PDE with a stochastic right-hand side. The stochastic PDE is discretized using the mixed finite element method on an embedded domain with a structured mesh, and then the solution is projected onto the unstructured mesh. This work describes implementation details on how to efficiently transfer data from the structured and unstructured meshes at coarse levels, assuming that this can be done efficiently on the finest level. We investigate the efficiency and parallel scalability of the technique for the scalable generation of Gaussian random fields in three dimensions. An application of the MLMC method is presented for quantifying uncertainties of subsurface flow problems. Here, we demonstrate the scalability of the sampling method with nonmatching mesh embedding, coupled with a parallel forward model problem solver, for large-scale 3D MLMC simulations with up to 1.9·10^9 unknowns.
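A 1-D toy version of the sampling idea, assuming a Whittle–Matérn-type operator and a central-difference discretisation in place of the paper's 3-D mixed finite elements:

```python
import numpy as np

def sample_field(n=200, corr_len=0.1, rng=None):
    """Draw one realization of a correlated Gaussian field on [0, 1] by
    solving the reaction-diffusion SPDE  (I - l^2 d2/dx2) u = white noise,
    discretised with central differences and homogeneous Dirichlet ends."""
    h = 1.0 / (n - 1)
    # Tridiagonal operator I - l^2 * Laplacian
    A = np.eye(n) * (1.0 + 2.0 * corr_len**2 / h**2)
    off = -corr_len**2 / h**2
    A += np.diag(np.full(n - 1, off), 1) + np.diag(np.full(n - 1, off), -1)
    xi = rng.standard_normal(n) / np.sqrt(h)   # discrete white-noise load
    return np.linalg.solve(A, xi)

rng = np.random.default_rng(7)
u = sample_field(rng=rng)
```

Each solve yields one field realization with correlation length set by `corr_len`, so a sparse PDE solve replaces the dense covariance factorization that would otherwise dominate large-scale sampling; this is the property the hierarchical 3-D sampler scales up.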

  15. Scalable hierarchical PDE sampler for generating spatially correlated random fields using nonmatching meshes: Scalable hierarchical PDE sampler using nonmatching meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osborn, Sarah; Zulian, Patrick; Benson, Thomas

    This work describes a domain embedding technique between two nonmatching meshes used for generating realizations of spatially correlated random fields with applications to large-scale sampling-based uncertainty quantification. The goal is to apply the multilevel Monte Carlo (MLMC) method for the quantification of output uncertainties of PDEs with random input coefficients on general and unstructured computational domains. We propose a highly scalable, hierarchical sampling method to generate realizations of a Gaussian random field on a given unstructured mesh by solving a reaction–diffusion PDE with a stochastic right-hand side. The stochastic PDE is discretized using the mixed finite element method on an embedded domain with a structured mesh, and then the solution is projected onto the unstructured mesh. This work describes implementation details on how to efficiently transfer data from the structured and unstructured meshes at coarse levels, assuming that this can be done efficiently on the finest level. We investigate the efficiency and parallel scalability of the technique for the scalable generation of Gaussian random fields in three dimensions. An application of the MLMC method is presented for quantifying uncertainties of subsurface flow problems. Here, we demonstrate the scalability of the sampling method with nonmatching mesh embedding, coupled with a parallel forward model problem solver, for large-scale 3D MLMC simulations with up to 1.9·10^9 unknowns.

  16. The effect of three ergonomics training programs on the prevalence of low-back pain among workers of an Iranian automobile factory: a randomized clinical trial.

    PubMed

    Aghilinejad, M; Bahrami-Ahmadi, A; Kabir-Mokamelkhah, E; Sarebanha, S; Hosseini, H R; Sadeghi, Z

    2014-04-01

    Many workers suffer from low-back pain. The type and severity of spinal complaints are related to workload. Lack of adherence to ergonomics recommendations is among the important causes of low-back pain. To assess the effect of 3 ergonomics training programs on the prevalence of low-back pain among workers of an Iranian automobile factory, a parallel-design, 4-arm randomized clinical trial was conducted in which 760 active workers of an automobile factory were studied. 503 workers were found eligible and randomized into 3 intervention groups (n=252) and a control group (n=251). The intervention groups consisted of 3 arms: 84 workers were educated by pamphlet, 84 by lectures, and 84 by workshop. The Nordic questionnaire was used to determine the prevalence of spinal complaints before and 1 year after the interventions. The trial is registered with the Iranian Randomized Clinical Trial Registry, number IRCT2013061213182N2. Of the 503 workers, 52 were lost to follow-up, leaving 451 workers for analysis. The prevalence of low-back pain at baseline was not significantly different among the studied arms. One year after the interventions, the prevalence did not change significantly from baseline for the lecture and pamphlet groups. However, the prevalence of LBP experienced during the last year decreased significantly (p=0.036) from 42% to 23% in participants who took part in the workshop. Training of automobile factory workers in ergonomics is more effective when delivered as a workshop than as a lecture or pamphlet.

  17. Coherent backscattering of light by complex random media of spherical scatterers: numerical solution

    NASA Astrophysics Data System (ADS)

    Muinonen, Karri

    2004-07-01

    Novel Monte Carlo techniques are described for the computation of reflection coefficient matrices for multiple scattering of light in plane-parallel random media of spherical scatterers. The present multiple scattering theory is composed of coherent backscattering and radiative transfer. In the radiative transfer part, the Stokes parameters of light escaping from the medium are updated at each scattering process in predefined angles of emergence. The scattering directions at each process are randomized using probability densities for the polar and azimuthal scattering angles: the former angle is generated using the single-scattering phase function, whereafter the latter follows from Kepler's equation. For spherical scatterers in the Rayleigh regime, randomization proceeds semi-analytically whereas, beyond that regime, a cubic-spline representation of the scattering matrix is used for numerical computations. In the coherent backscattering part, the reciprocity of electromagnetic waves in the backscattering direction allows the renormalization of the reversely propagating waves, whereafter the scattering characteristics are computed in other directions. High orders of scattering (~10 000) can be treated because of the peculiar polarization characteristics of the reverse wave: after a number of scatterings, the polarization state of the reverse wave becomes independent of that of the incident wave, that is, it becomes fully dictated by the scatterings at the end of the reverse path. The coherent backscattering part depends on the single-scattering albedo in a non-monotonous way, the most pronounced signatures showing up for absorbing scatterers. The numerical results compare favourably to the literature results for nonabsorbing spherical scatterers both in and beyond the Rayleigh regime.
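The randomization of the polar scattering angle can be sketched for the Rayleigh phase function P(μ) ∝ 1 + μ², where μ = cos θ; the bisection inversion of the cumulative distribution below is an illustrative stand-in for the semi-analytical procedure described above (azimuthal randomization via Kepler's equation is omitted).

```python
import random

def sample_cos_theta(u):
    """Invert the Rayleigh phase-function CDF
        F(mu) = (3/8) * (mu + mu**3 / 3 + 4/3),  mu in [-1, 1],
    by bisection: find mu with F(mu) = u for a uniform deviate u."""
    lo, hi = -1.0, 1.0
    for _ in range(60):                       # 60 halvings: ~machine precision
        mid = 0.5 * (lo + hi)
        if (3.0 / 8.0) * (mid + mid**3 / 3.0 + 4.0 / 3.0) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = random.Random(11)
mus = [sample_cos_theta(rng.random()) for _ in range(5000)]
mean = sum(mus) / len(mus)
```

The Rayleigh phase function is symmetric in μ, so the sampled mean of cos θ serves as a quick consistency check on the inversion.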

  18. The effect of pheniramine on fentanyl-induced cough: a randomized, double blinded, placebo controlled clinical study.

    PubMed

    Arslan, Zakir; Çalık, Eyup Serhat; Kaplan, Bekir; Ahiskalioglu, Elif Oral

    2016-01-01

There are many studies on reducing the frequency and severity of fentanyl-induced cough during anesthesia induction. We propose that pheniramine maleate, an antihistamine, may suppress this cough. We aim to observe the effect of pheniramine on fentanyl-induced cough during anesthesia induction. This is a double-blinded, prospective, three-arm parallel, randomized clinical trial of 120 patients with ASA (American Society of Anesthesiologists) physical status III and IV, aged ≥18 years and scheduled for elective open heart surgery under general anesthesia. Patients were randomly assigned to three groups of 40 patients, using computer-generated random numbers: placebo group, pheniramine group, and lidocaine group. Cough incidence differed significantly between groups. In the placebo group, 37.5% of patients had cough, whereas the frequency was significantly decreased in the pheniramine group (5%) and lidocaine group (15%) (Fisher's exact test, p=0.0007 and p=0.0188, respectively). There was no significant difference in cough incidence between the pheniramine group (5%) and lidocaine group (15%) (Fisher's exact test, p=0.4325). Cough severity also differed between groups. Post hoc tests with Bonferroni correction showed that mean cough severity in the placebo group differed significantly from that of the pheniramine group and lidocaine group (p<0.0001 and p=0.009, respectively). There was no significant difference in cough severity between the pheniramine group and lidocaine group (p=0.856). Intravenous pheniramine is as effective as lidocaine in preventing fentanyl-induced cough. Our results emphasize that pheniramine is a convenient drug to decrease this cough. Copyright © 2015 Sociedade Brasileira de Anestesiologia. Published by Elsevier Editora Ltda. All rights reserved.

  19. Randomized controlled pilot study to compare Homeopathy and Conventional therapy in Acute Otitis Media.

    PubMed

    Sinha, M N; Siddiqui, V A; Nayak, C; Singh, Vikram; Dixit, Rupali; Dewan, Deepti; Mishra, Alok

    2012-01-01

To compare the effectiveness of Homeopathy and Conventional therapy in Acute Otitis Media (AOM). A randomized placebo-controlled parallel group pilot study of homeopathic vs conventional treatment for AOM was conducted in Jaipur, India. Patients were randomized by a computer-generated random number list to receive either individualized homeopathic medicines in fifty millesimal (LM) potencies, or conventional treatment including analgesics, antipyretics and anti-inflammatory drugs. Patients who did not improve were prescribed antibiotics on the 3rd day. Outcomes were assessed by the Acute Otitis Media-Severity of Symptoms (AOM-SOS) Scale and tympanic membrane examination over 21 days. 81 patients were included, 80 completed follow-up: 41 for conventional and 40 for homeopathic treatment. In the Conventional group, all 40 (100%) patients were cured; in the Homeopathy group, 38 (95%) patients were cured while 2 (5%) patients were lost to follow-up at the last two visits. By the 3rd day of treatment, 4 patients were cured in the Homeopathy group but in the Conventional group only one patient was cured. In the Conventional group antibiotics were prescribed in 39 (97.5%) of patients; no antibiotics were required in the Homeopathy group. Six homeopathic medicines accounted for the prescriptions in 85% of patients. Individualized homeopathy appears comparable in effectiveness to conventional treatment in AOM; there were no significant differences between groups in the main outcome. Symptomatic improvement was quicker in the Homeopathy group, and there was a large difference in antibiotic requirements, favouring homeopathy. Further work on a larger scale should be conducted. Copyright © 2011 The Faculty of Homeopathy. Published by Elsevier Ltd. All rights reserved.

  20. Replacement of dietary saturated fat with unsaturated fats increases numbers of circulating endothelial progenitor cells and decreases numbers of microparticles: findings from the randomized, controlled Dietary Intervention and VAScular function (DIVAS) study.

    PubMed

    Weech, Michelle; Altowaijri, Hana; Mayneris-Perxachs, Jordi; Vafeiadou, Katerina; Madden, Jacqueline; Todd, Susan; Jackson, Kim G; Lovegrove, Julie A; Yaqoob, Parveen

    2018-06-01

Endothelial progenitor cells (EPCs) and microparticles are emerging as novel markers of cardiovascular disease (CVD) risk, which could potentially be modified by dietary fat. We have previously shown that replacing dietary saturated fatty acids (SFAs) with monounsaturated or n-6 (ω-6) polyunsaturated fatty acids (MUFAs or PUFAs, respectively) improved lipid biomarkers, blood pressure, and markers of endothelial activation, but their effects on circulating EPCs and microparticles are unclear. The Dietary Intervention and VAScular function (DIVAS) Study investigated the replacement of 9.5-9.6% of total energy (%TE) contributed by SFAs with MUFAs or n-6 PUFAs for 16 wk on EPC and microparticle numbers in United Kingdom adults with moderate CVD risk. In this randomized, controlled, single-blind, parallel-group dietary intervention, men and women aged 21-60 y (n = 190) with moderate CVD risk (≥50% above the population mean) consumed one of three 16-wk isoenergetic diets. Target compositions for total fat, SFAs, MUFAs, and n-6 PUFAs (%TE) were as follows: SFA-rich diet (36:17:11:4; n = 64), MUFA-rich diet (36:9:19:4; n = 62), and n-6 PUFA-rich diet (36:9:13:10; n = 66). Circulating EPC, endothelial microparticle (EMP), and platelet microparticle (PMP) numbers were analyzed by flow cytometry. Dietary intake, vascular function, and other cardiometabolic risk factors were determined at baseline. Relative to the SFA-rich diet, MUFA- and n-6 PUFA-rich diets decreased EMP (-47.3% and -44.9%, respectively) and PMP (-36.8% and -39.1%, respectively) numbers (overall diet effects, P < 0.01). The MUFA-rich diet increased EPC numbers (+28.4%; P = 0.023). Additional analyses that used stepwise regression models identified the augmentation index (a measure of arterial stiffness determined by pulse-wave analysis) as an independent predictor of baseline EPC and microparticle numbers. 
Replacement of 9.5-9.6%TE dietary SFAs with MUFAs increased EPC numbers, and replacement with either MUFAs or n-6 PUFAs decreased microparticle numbers, suggesting beneficial effects on endothelial repair and maintenance. Further studies are warranted to determine the mechanisms underlying the favorable effects on EPC and microparticle numbers after SFA replacement. This trial was registered at www.clinicaltrials.gov as NCT01478958.

  1. Connectionist Models and Parallelism in High Level Vision.

    DTIC Science & Technology

    1985-01-01

GRANT NUMBER(s) Jerome A. Feldman N00014-82-K-0193 9. PERFORMING ORGANIZATION NAME AND ADDRESS 10. PROGRAM ELEMENT, PROJECT, TASK Computer Science...Connectionist Models 2.1 Background and Overview. Computer science is just beginning to look seriously at parallel computation: it may turn out that...the chair. The program includes intermediate level networks that compute more complex joints and ones that compute parallelograms in the image. These

  2. On the Composition of Public-Coin Zero-Knowledge Protocols

    DTIC Science & Technology

    2011-05-31

only languages in BPP have public-coin black-box zero-knowledge protocols that are secure under an unbounded (polynomial) number of parallel repetitions...and Krawczyk [GK96b] show that only languages in BPP have constant-round public-coin (stand-alone) black-box ZK protocols with negligible soundness

  3. Memory access in shared virtual memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berrendorf, R.

    1992-01-01

    Shared virtual memory (SVM) is a virtual memory layer with a single address space on top of a distributed real memory on parallel computers. We examine the behavior and performance of SVM running a parallel program with medium-grained, loop-level parallelism on top of it. A simulator for the underlying parallel architecture can be used to examine the behavior of SVM more deeply. The influence of several parameters, such as the number of processors, page size, cold or warm start, and restricted page replication, is studied.

  4. Memory access in shared virtual memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berrendorf, R.

    1992-09-01

    Shared virtual memory (SVM) is a virtual memory layer with a single address space on top of a distributed real memory on parallel computers. We examine the behavior and performance of SVM running a parallel program with medium-grained, loop-level parallelism on top of it. A simulator for the underlying parallel architecture can be used to examine the behavior of SVM more deeply. The influence of several parameters, such as the number of processors, page size, cold or warm start, and restricted page replication, is studied.

  5. TECA: A Parallel Toolkit for Extreme Climate Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prabhat, Mr; Ruebel, Oliver; Byna, Surendra

    2012-03-12

    We present TECA, a parallel toolkit for detecting extreme events in large climate datasets. Modern climate datasets expose parallelism across a number of dimensions: spatial locations, timesteps and ensemble members. We design TECA to exploit these modes of parallelism and demonstrate a prototype implementation for detecting and tracking three classes of extreme events: tropical cyclones, extra-tropical cyclones and atmospheric rivers. We process a modern TB-sized CAM5 simulation dataset with TECA, and demonstrate good runtime performance for the three case studies.

  6. Parallel Processing and Scientific Applications

    DTIC Science & Technology

    1992-11-30

Lattice QCD Calculations on the Connection Machine), SIAM News 24, 1 (May 1991) 5. C. F. Baillie and D. A. Johnston, Crumpling Dynamically Triangulated...hypercubic lattice; in the second, the surface is randomly triangulated once at the beginning of the simulation; and in the third the random...Sharpe, QCD with Dynamical Wilson Fermions II, Phys. Rev. D44, 3272 (1991), 8. R. Gupta and C. F. Baillie, Critical Behavior of the 2D XY Model, Phys

  7. Optics Program Modified for Multithreaded Parallel Computing

    NASA Technical Reports Server (NTRS)

    Lou, John; Bedding, Dave; Basinger, Scott

    2006-01-01

A powerful high-performance computer program for simulating and analyzing adaptive and controlled optical systems has been developed by modifying the serial version of the Modeling and Analysis for Controlled Optical Systems (MACOS) program to impart capabilities for multithreaded parallel processing on computing systems ranging from supercomputers down to Symmetric Multiprocessing (SMP) personal computers. The modifications included the incorporation of OpenMP, a portable and widely supported application programming interface that can be used to explicitly add multithreaded parallelism to an application program under a shared-memory programming model. OpenMP was applied to parallelize ray-tracing calculations, one of the major computing components in MACOS. Multithreading is also used in the diffraction propagation of light in MACOS, based on pthreads (POSIX threads, where POSIX denotes the Portable Operating System Interface). In tests of the parallelized version of MACOS, the speedup in ray-tracing calculations was found to be linear, or proportional to the number of processors, while the speedup in diffraction calculations ranged from 50 to 60 percent, depending on the type and number of processors. The parallelized version of MACOS is portable, and, to the user, its interface is basically the same as that of the original serial version of MACOS.

  8. Partitioning and packing mathematical simulation models for calculation on parallel computers

    NASA Technical Reports Server (NTRS)

    Arpasi, D. J.; Milner, E. J.

    1986-01-01

    The development of multiprocessor simulations from a serial set of ordinary differential equations describing a physical system is described. Degrees of parallelism (i.e., coupling between the equations) and their impact on parallel processing are discussed. The problem of identifying computational parallelism within sets of closely coupled equations that require the exchange of current values of variables is described. A technique is presented for identifying this parallelism and for partitioning the equations for parallel solution on a multiprocessor. An algorithm which packs the equations into a minimum number of processors is also described. The results of the packing algorithm when applied to a turbojet engine model are presented in terms of processor utilization.

  9. Data preprocessing for determining outer/inner parallelization in the nested loop problem using OpenMP

    NASA Astrophysics Data System (ADS)

    Handhika, T.; Bustamam, A.; Ernastuti, Kerami, D.

    2017-07-01

Multi-thread programming using OpenMP on shared-memory architectures with hyperthreading technology allows a resource to be accessed by multiple processors simultaneously. Each processor can execute more than one thread for a certain period of time. However, speedup depends on the ability of the processor to execute threads in limited quantities, especially for sequential algorithms containing a nested loop in which the number of outer loop iterations is greater than the maximum number of threads a processor can execute. The thread distribution techniques found previously could only be applied by high-level programmers. This paper develops a parallelization procedure for low-level programmers dealing with 2-level nested loop problems in which the maximum number of threads a processor can execute is smaller than the number of outer loop iterations. Data preprocessing on the number of outer and inner loop iterations, the computational time required to execute each iteration, and the maximum number of threads a processor can execute is used as a strategy to determine which parallel region will produce optimal speedup.

  10. Random sampling causes the low reproducibility of rare eukaryotic OTUs in Illumina COI metabarcoding.

    PubMed

    Leray, Matthieu; Knowlton, Nancy

    2017-01-01

DNA metabarcoding, the PCR-based profiling of natural communities, is becoming the method of choice for biodiversity monitoring because it circumvents some of the limitations inherent to traditional ecological surveys. However, potential sources of bias that can affect the reproducibility of this method remain to be quantified. The interpretation of differences in patterns of sequence abundance and the ecological relevance of rare sequences remain particularly uncertain. Here we used one artificial mock community to explore the significance of abundance patterns and disentangle the effects of two potential biases on data reproducibility: indexed PCR primers and random sampling during Illumina MiSeq sequencing. We amplified a short fragment of the mitochondrial Cytochrome c Oxidase Subunit I (COI) for a single mock sample containing equimolar amounts of total genomic DNA from 34 marine invertebrates belonging to six phyla. We used seven indexed broad-range primers and sequenced the resulting library on two consecutive Illumina MiSeq runs. The total number of Operational Taxonomic Units (OTUs) was ∼4 times higher than expected based on the composition of the mock sample. Moreover, the total number of reads for the 34 components of the mock sample differed by up to three orders of magnitude. However, 79 out of 86 of the unexpected OTUs were represented by <10 sequences that did not appear consistently across replicates. Our data suggest that random sampling of rare OTUs (e.g., small associated fauna such as parasites) accounted for most of the variation in OTU presence-absence, whereas biases associated with indexed PCRs accounted for a larger amount of variation in relative abundance patterns. These results suggest that random sampling during sequencing leads to the low reproducibility of rare OTUs. We suggest that the strategy for handling rare OTUs should depend on the objectives of the study. 
Systematic removal of rare OTUs may avoid inflating diversity based on common β descriptors but will exclude positive records of taxa that are functionally important. Our results further reinforce the need for technical replicates (parallel PCR and sequencing from the same sample) in metabarcoding experimental designs. Data reproducibility should be determined empirically as it will depend upon the sequencing depth, the type of sample, the sequence analysis pipeline, and the number of replicates. Moreover, estimating relative biomasses or abundances based on read counts remains elusive at the OTU level.

  11. Using Computer-Generated Random Numbers to Calculate the Lifetime of a Comet.

    ERIC Educational Resources Information Center

    Danesh, Iraj

    1991-01-01

An educational technique to calculate the lifetime of a comet using software-generated random numbers is introduced to undergraduate physics and astronomy students. Discussed are the generation and eligibility of the required random numbers, background literature related to the problem, and the solution to the problem using random numbers.…

  12. Pseudo-random number generator for the Sigma 5 computer

    NASA Technical Reports Server (NTRS)

    Carroll, S. N.

    1983-01-01

    A technique is presented for developing a pseudo-random number generator based on the linear congruential form. The two numbers used for the generator are a prime number and a corresponding primitive root, where the prime is the largest prime number that can be accurately represented on a particular computer. The primitive root is selected by applying Marsaglia's lattice test. The technique presented was applied to write a random number program for the Sigma 5 computer. The new program, named S:RANDOM1, is judged to be superior to the older program named S:RANDOM. For applications requiring several independent random number generators, a table is included showing several acceptable primitive roots. The technique and programs described can be applied to any computer having word length different from that of the Sigma 5.

  13. NDL-v2.0: A new version of the numerical differentiation library for parallel architectures

    NASA Astrophysics Data System (ADS)

    Hadjidoukas, P. E.; Angelikopoulos, P.; Voglis, C.; Papageorgiou, D. G.; Lagaris, I. E.

    2014-07-01

    We present a new version of the numerical differentiation library (NDL) used for the numerical estimation of first and second order partial derivatives of a function by finite differencing. In this version we have restructured the serial implementation of the code so as to achieve optimal task-based parallelization. The pure shared-memory parallelization of the library has been based on the lightweight OpenMP tasking model allowing for the full extraction of the available parallelism and efficient scheduling of multiple concurrent library calls. On multicore clusters, parallelism is exploited by means of TORC, an MPI-based multi-threaded tasking library. The new MPI implementation of NDL provides optimal performance in terms of function calls and, furthermore, supports asynchronous execution of multiple library calls within legacy MPI programs. In addition, a Python interface has been implemented for all cases, exporting the functionality of our library to sequential Python codes. Catalog identifier: AEDG_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEDG_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 63036 No. of bytes in distributed program, including test data, etc.: 801872 Distribution format: tar.gz Programming language: ANSI Fortran-77, ANSI C, Python. Computer: Distributed systems (clusters), shared memory systems. Operating system: Linux, Unix. Has the code been vectorized or parallelized?: Yes. RAM: The library uses O(N) internal storage, N being the dimension of the problem. It can use up to O(N2) internal storage for Hessian calculations, if a task throttling factor has not been set by the user. Classification: 4.9, 4.14, 6.5. Catalog identifier of previous version: AEDG_v1_0 Journal reference of previous version: Comput. Phys. Comm. 
180(2009)1404 Does the new version supersede the previous version?: Yes Nature of problem: The numerical estimation of derivatives at several accuracy levels is a common requirement in many computational tasks, such as optimization, solution of nonlinear systems, and sensitivity analysis. For a large number of scientific and engineering applications, the underlying functions correspond to simulation codes for which analytical estimation of derivatives is difficult or almost impossible. A parallel implementation that exploits systems with multiple CPUs is very important for large scale and computationally expensive problems. Solution method: Finite differencing is used with a carefully chosen step that minimizes the sum of the truncation and round-off errors. The parallel versions employ both OpenMP and MPI libraries. Reasons for new version: The updated version was motivated by our endeavors to extend a parallel Bayesian uncertainty quantification framework [1], by incorporating higher order derivative information as in most state-of-the-art stochastic simulation methods such as Stochastic Newton MCMC [2] and Riemannian Manifold Hamiltonian MC [3]. The function evaluations are simulations with significant time-to-solution, which also varies with the input parameters such as in [1, 4]. The runtime of the N-body-type of problem changes considerably with the introduction of a longer cut-off between the bodies. In the first version of the library, the OpenMP-parallel subroutines spawn a new team of threads and distribute the function evaluations with a PARALLEL DO directive. This limits the functionality of the library as multiple concurrent calls require nested parallelism support from the OpenMP environment. Therefore, either their function evaluations will be serialized or processor oversubscription is likely to occur due to the increased number of OpenMP threads. 
In addition, the Hessian calculations include two explicit parallel regions that compute first the diagonal and then the off-diagonal elements of the array. Due to the barrier between the two regions, the parallelism of the calculations is not fully exploited. These issues have been addressed in the new version by first restructuring the serial code and then running the function evaluations in parallel using OpenMP tasks. Although the MPI-parallel implementation of the first version is capable of fully exploiting the task parallelism of the PNDL routines, it does not utilize the caching mechanism of the serial code and, therefore, performs some redundant function evaluations in the Hessian and Jacobian calculations. This can lead to: (a) higher execution times if the number of available processors is lower than the total number of tasks, and (b) significant energy consumption due to wasted processor cycles. Overcoming these drawbacks, which become critical as the time of a single function evaluation increases, was the primary goal of this new version. Due to the code restructure, the MPI-parallel implementation (and the OpenMP-parallel in accordance) avoids redundant calls, providing optimal performance in terms of the number of function evaluations. Another limitation of the library was that the library subroutines were collective and synchronous calls. In the new version, each MPI process can issue any number of subroutines for asynchronous execution. We introduce two library calls that provide global and local task synchronizations, similarly to the BARRIER and TASKWAIT directives of OpenMP. The new MPI-implementation is based on TORC, a new tasking library for multicore clusters [5-7]. TORC improves the portability of the software, as it relies exclusively on the POSIX-Threads and MPI programming interfaces. 
It allows MPI processes to utilize multiple worker threads, offering a hybrid programming and execution environment similar to MPI+OpenMP, in a completely transparent way. Finally, to further improve the usability of our software, a Python interface has been implemented on top of both the OpenMP and MPI versions of the library. This allows sequential Python codes to exploit shared and distributed memory systems. Summary of revisions: The revised code improves the performance of both parallel (OpenMP and MPI) implementations. The functionality and the user-interface of the MPI-parallel version have been extended to support the asynchronous execution of multiple PNDL calls, issued by one or multiple MPI processes. A new underlying tasking library increases portability and allows MPI processes to have multiple worker threads. For both implementations, an interface to the Python programming language has been added. Restrictions: The library uses only double precision arithmetic. The MPI implementation assumes the homogeneity of the execution environment provided by the operating system. Specifically, the processes of a single MPI application must have identical address space and a user function resides at the same virtual address. In addition, address space layout randomization should not be used for the application. Unusual features: The software takes into account bound constraints, in the sense that only feasible points are used to evaluate the derivatives, and given the level of the desired accuracy, the proper formula is automatically employed. Running time: Running time depends on the function's complexity. The test run took 23 ms for the serial distribution, 25 ms for the OpenMP with 2 threads, 53 ms and 1.01 s for the MPI parallel distribution using 2 threads and 2 processes respectively and yield-time for idle workers equal to 10 ms. References: [1] P. Angelikopoulos, C. Paradimitriou, P. 
Koumoutsakos, Bayesian uncertainty quantification and propagation in molecular dynamics simulations: a high performance computing framework, J. Chem. Phys. 137 (14). [2] H.P. Flath, L.C. Wilcox, V. Akcelik, J. Hill, B. van Bloemen Waanders, O. Ghattas, Fast algorithms for Bayesian uncertainty quantification in large-scale linear inverse problems based on low-rank partial Hessian approximations, SIAM J. Sci. Comput. 33 (1) (2011) 407-432. [3] M. Girolami, B. Calderhead, Riemann manifold Langevin and Hamiltonian Monte Carlo methods, J. R. Stat. Soc. Ser. B (Stat. Methodol.) 73 (2) (2011) 123-214. [4] P. Angelikopoulos, C. Papadimitriou, P. Koumoutsakos, Data driven, predictive molecular dynamics for nanoscale flow simulations under uncertainty, J. Phys. Chem. B 117 (47) (2013) 14808-14816. [5] P.E. Hadjidoukas, E. Lappas, V.V. Dimakopoulos, A runtime library for platform-independent task parallelism, in: PDP, IEEE, 2012, pp. 229-236. [6] C. Voglis, P.E. Hadjidoukas, D.G. Papageorgiou, I. Lagaris, A parallel hybrid optimization algorithm for fitting interatomic potentials, Appl. Soft Comput. 13 (12) (2013) 4481-4492. [7] P.E. Hadjidoukas, C. Voglis, V.V. Dimakopoulos, I. Lagaris, D.G. Papageorgiou, Supporting adaptive and irregular parallelism for non-linear numerical optimization, Appl. Math. Comput. 231 (2014) 544-559.

  14. Design considerations for parallel graphics libraries

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1994-01-01

    Applications which run on parallel supercomputers are often characterized by massive datasets. Converting these vast collections of numbers to visual form has proven to be a powerful aid to comprehension. For a variety of reasons, it may be desirable to provide this visual feedback at runtime. One way to accomplish this is to exploit the available parallelism to perform graphics operations in place. In order to do this, we need appropriate parallel rendering algorithms and library interfaces. This paper provides a tutorial introduction to some of the issues which arise in designing parallel graphics libraries and their underlying rendering algorithms. The focus is on polygon rendering for distributed memory message-passing systems. We illustrate our discussion with examples from PGL, a parallel graphics library which has been developed on the Intel family of parallel systems.

  15. BCYCLIC: A parallel block tridiagonal matrix cyclic solver

    NASA Astrophysics Data System (ADS)

    Hirshman, S. P.; Perumalla, K. S.; Lynch, V. E.; Sanchez, R.

    2010-09-01

A block tridiagonal matrix is factored with minimal fill-in using a cyclic reduction algorithm that is easily parallelized. Storage of the factored blocks allows the application of the inverse to multiple right-hand sides which may not be known at factorization time. Scalability with the number of block rows is achieved with cyclic reduction, while scalability with the block size is achieved using multithreaded routines (OpenMP, GotoBLAS) for block matrix manipulation. This dual scalability is a noteworthy feature of this new solver, as well as its ability to efficiently handle arbitrary (non-powers-of-2) block row and processor numbers. A comparison with a state-of-the-art parallel sparse solver is presented. It is expected that this new solver will allow many physical applications to optimally use the parallel resources on current supercomputers. Example usage of the solver in magneto-hydrodynamic (MHD), three-dimensional equilibrium solvers for high-temperature fusion plasmas is cited.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reed, D.A.; Grunwald, D.C.

The spectrum of parallel processor designs can be divided into three sections according to the number and complexity of the processors. At one end there are simple, bit-serial processors. Any one of these processors is of little value, but when it is coupled with many others, the aggregate computing power can be large. This approach to parallel processing can be likened to a colony of termites devouring a log. The most notable examples of this approach are the NASA/Goodyear Massively Parallel Processor, which has 16K one-bit processors, and the Thinking Machines Connection Machine, which has 64K one-bit processors. At the other end of the spectrum, a small number of processors, each built using the fastest available technology and the most sophisticated architecture, are combined. An example of this approach is the Cray X-MP. This type of parallel processing is akin to four woodmen attacking the log with chainsaws.

  17. Collectively loading programs in a multiple program multiple data environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aho, Michael E.; Attinella, John E.; Gooding, Thomas M.

Techniques are disclosed for loading programs efficiently in a parallel computing system. In one embodiment, nodes of the parallel computing system receive a load description file which indicates, for each program of a multiple program multiple data (MPMD) job, nodes which are to load the program. The nodes determine, using collective operations, a total number of programs to load and a number of programs to load in parallel. The nodes further generate a class route for each program to be loaded in parallel, where the class route generated for a particular program includes only those nodes on which the program needs to be loaded. For each class route, a node is selected using a collective operation to be a load leader which accesses a file system to load the program associated with a class route and broadcasts the program via the class route to other nodes which require the program.

  18. Efficacy and safety of sacubitril/valsartan (LCZ696) in Japanese patients with chronic heart failure and reduced ejection fraction: Rationale for and design of the randomized, double-blind PARALLEL-HF study.

    PubMed

    Tsutsui, Hiroyuki; Momomura, Shinichi; Saito, Yoshihiko; Ito, Hiroshi; Yamamoto, Kazuhiro; Ohishi, Tomomi; Okino, Naoko; Guo, Weinong

    2017-09-01

    The prognosis of heart failure patients with reduced ejection fraction (HFrEF) in Japan remains poor, although there is growing evidence for increasing use of evidence-based pharmacotherapies in Japanese real-world HF registries. Sacubitril/valsartan (LCZ696) is a first-in-class angiotensin receptor neprilysin inhibitor shown to reduce mortality and morbidity in the recently completed largest outcome trial in patients with HFrEF (PARADIGM-HF trial). The prospectively designed phase III PARALLEL-HF (Prospective comparison of ARNI with ACE inhibitor to determine the noveL beneficiaL trEatment vaLue in Japanese Heart Failure patients) study aims to assess the clinical efficacy and safety of LCZ696 in Japanese HFrEF patients, and show similar improvements in clinical outcomes as the PARADIGM-HF study enabling the registration of LCZ696 in Japan. This is a multicenter, randomized, double-blind, parallel-group, active controlled study of 220 Japanese HFrEF patients. Eligibility criteria include a diagnosis of chronic HF (New York Heart Association Class II-IV) and reduced ejection fraction (left ventricular ejection fraction ≤35%) and increased plasma concentrations of natriuretic peptides [N-terminal pro B-type natriuretic peptide (NT-proBNP) ≥600pg/mL, or NT-proBNP ≥400pg/mL for those who had a hospitalization for HF within the last 12 months] at the screening visit. The study consists of three phases: (i) screening, (ii) single-blind active LCZ696 run-in, and (iii) double-blind randomized treatment. Patients tolerating LCZ696 50mg bid during the treatment run-in are randomized (1:1) to receive LCZ696 100mg bid or enalapril 5mg bid for 4 weeks followed by up-titration to target doses of LCZ696 200mg bid or enalapril 10mg bid in a double-blind manner. The primary outcome is the composite of cardiovascular death or HF hospitalization and the study is an event-driven trial. 
The design of the PARALLEL-HF study is aligned with that of the PARADIGM-HF study and aims to assess the efficacy and safety of LCZ696 in Japanese HFrEF patients. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  19. Performance evaluation of canny edge detection on a tiled multicore architecture

    NASA Astrophysics Data System (ADS)

    Brethorst, Andrew Z.; Desai, Nehal; Enright, Douglas P.; Scrofano, Ronald

    2011-01-01

    In the last few years, a variety of multicore architectures have been used to parallelize image processing applications. In this paper, we focus on assessing the parallel speed-ups of different Canny edge detection parallelization strategies on the Tile64, a tiled multicore architecture developed by the Tilera Corporation. Included in these strategies are different ways Canny edge detection can be parallelized, as well as differences in data management. The two parallelization strategies examined were loop-level parallelism and domain decomposition. Loop-level parallelism is achieved through the use of OpenMP, and it is capable of parallelization across the range of values over which a loop iterates. Domain decomposition is the process of breaking down an image into subimages, where each subimage is processed independently, in parallel. The results of the two strategies show that, for the same number of threads, programmer-implemented domain decomposition exhibits higher speed-ups than compiler-managed loop-level parallelism implemented with OpenMP.
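    The domain-decomposition strategy described above can be illustrated with a short sketch (the function below is a hypothetical illustration, not the Tile64 code): each horizontal strip carries extra "halo" rows on its interior edges, so the stencil stages of Canny (smoothing, gradient computation) can run on each strip independently, one strip per core.

```python
def decompose(rows, n_parts, halo=1):
    """Split an image with `rows` rows into horizontal strips.

    Returns (read_range, write_range) pairs of row indices. The read
    range extends the write range by `halo` rows at interior edges so
    that a stencil can be applied to each strip without communication.
    """
    base, extra = divmod(rows, n_parts)
    strips, start = [], 0
    for i in range(n_parts):
        stop = start + base + (1 if i < extra else 0)
        read = (max(0, start - halo), min(rows, stop + halo))
        strips.append((read, (start, stop)))
        start = stop
    return strips

# A 480-row image split across 4 cores: each core writes 120 rows
parts = decompose(rows=480, n_parts=4)
```

Each core would then run the full Canny pipeline on its read range and emit only its write range, which is the "data management" difference the paper contrasts with loop-level parallelism.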

  20. An On-Demand Optical Quantum Random Number Generator with In-Future Action and Ultra-Fast Response

    PubMed Central

    Stipčević, Mario; Ursin, Rupert

    2015-01-01

    Random numbers are essential for our modern information-based society, e.g. in cryptography. Unlike frequently used pseudo-random generators, physical random number generators do not depend on complex algorithms but rather on a physical process to provide true randomness. Quantum random number generators (QRNG) rely on a process which, even in principle, can be described only by a probabilistic theory. Here we present a conceptually simple implementation which offers 100% efficiency of producing a random bit upon request and simultaneously exhibits ultra-low latency. A careful technical and statistical analysis demonstrates its robustness against imperfections of the actual implemented technology and enables randomness of very long sequences to be estimated quickly. Generated random numbers pass standard statistical tests without any post-processing. The setup described, as well as the theory presented here, demonstrate the maturity and overall understanding of the technology. PMID:26057576

  1. The RANDOM computer program: A linear congruential random number generator

    NASA Technical Reports Server (NTRS)

    Miles, R. F., Jr.

    1986-01-01

    The RANDOM Computer Program is a FORTRAN program for generating random number sequences and testing linear congruential random number generators (LCGs). The linear congruential form of random number generator is discussed, and the selection of parameters of an LCG for a microcomputer is described. This document describes the following: (1) The RANDOM Computer Program; (2) RANDOM.MOD, the computer code needed to implement an LCG in a FORTRAN program; and (3) The RANCYCLE and the ARITH Computer Programs that provide computational assistance in the selection of parameters for an LCG. The RANDOM, RANCYCLE, and ARITH Computer Programs are written in Microsoft FORTRAN for the IBM PC microcomputer and its compatibles. With only minor modifications, the RANDOM Computer Program and its LCG can be run on most microcomputers or mainframe computers.
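    The linear congruential recurrence at the heart of such a generator can be sketched in a few lines (the multiplier, increment, and modulus below are common textbook values, not the parameters selected in the report):

```python
def lcg(seed, a=1103515245, c=12345, m=2**31):
    """Linear congruential generator: x_{n+1} = (a*x_n + c) mod m."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

gen = lcg(seed=42)
sample = [next(gen) for _ in range(5)]  # five pseudorandom integers in [0, m)
```

The statistical quality of the stream depends entirely on the choice of a, c, and m, which is why the report devotes companion programs (RANCYCLE, ARITH) to parameter selection.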

  2. Review of Recent Methodological Developments in Group-Randomized Trials: Part 2-Analysis.

    PubMed

    Turner, Elizabeth L; Prague, Melanie; Gallis, John A; Li, Fan; Murray, David M

    2017-07-01

    In 2004, Murray et al. reviewed methodological developments in the design and analysis of group-randomized trials (GRTs). We have updated that review with developments in analysis of the past 13 years, with a companion article to focus on developments in design. We discuss developments in the topics of the earlier review (e.g., methods for parallel-arm GRTs, individually randomized group-treatment trials, and missing data) and in new topics, including methods to account for multiple-level clustering and alternative estimation methods (e.g., augmented generalized estimating equations, targeted maximum likelihood, and quadratic inference functions). In addition, we describe developments in analysis of alternative group designs (including stepped-wedge GRTs, network-randomized trials, and pseudocluster randomized trials), which require clustering to be accounted for in their design and analysis.

  3. Dose finding with the sequential parallel comparison design.

    PubMed

    Wang, Jessie J; Ivanova, Anastasia

    2014-01-01

    The sequential parallel comparison design (SPCD) is a two-stage design recommended for trials with possibly high placebo response. A drug-placebo comparison in the first stage is followed in the second stage by placebo nonresponders being re-randomized between drug and placebo. We describe how SPCD can be used in trials where multiple doses of a drug or multiple treatments are compared with placebo and present two adaptive approaches. We detail how to analyze data in such trials and give recommendations about the allocation proportion to placebo in the two stages of SPCD.

  4. Parallel MR Imaging with Accelerations Beyond the Number of Receiver Channels Using Real Image Reconstruction.

    PubMed

    Ji, Jim; Wright, Steven

    2005-01-01

    Parallel imaging using multiple phased-array coils and receiver channels has become an effective approach to high-speed magnetic resonance imaging (MRI). To obtain high spatiotemporal resolution, the k-space is subsampled and later interpolated using multiple channel data. Higher subsampling factors result in faster image acquisition. However, the subsampling factors are upper-bounded by the number of parallel channels. Phase constraints have been previously proposed to overcome this limitation with some success. In this paper, we demonstrate that in certain applications it is possible to obtain acceleration factors potentially up to twice the number of channels by using a real image constraint. Data acquisition and processing methods to manipulate and estimate the image phase information are presented for improving image reconstruction. In-vivo brain MRI experimental results show that accelerations up to 6 are feasible with 4-channel data.

  5. Hierarchical Parallelization of Gene Differential Association Analysis

    PubMed Central

    2011-01-01

    Background: Microarray gene differential expression analysis is a widely used technique that deals with high dimensional data and is computationally intensive for permutation-based procedures. Microarray gene differential association analysis is even more computationally demanding and must take advantage of multicore computing technology, which is the driving force behind increasing compute power in recent years. In this paper, we present a two-layer hierarchical parallel implementation of gene differential association analysis. It takes advantage of both fine- and coarse-grain (with granularity defined by the frequency of communication) parallelism in order to effectively leverage the non-uniform nature of parallel processing available in the cutting-edge systems of today. Results: Our results show that this hierarchical strategy matches data sharing behavior to the properties of the underlying hardware, thereby reducing the memory and bandwidth needs of the application. The resulting improved efficiency reduces computation time and allows the gene differential association analysis code to scale its execution with the number of processors. The code and biological data used in this study are downloadable from http://www.urmc.rochester.edu/biostat/people/faculty/hu.cfm. Conclusions: The performance sweet spot occurs when using a number of threads per MPI process that allows the working sets of the corresponding MPI processes running on the multicore to fit within the machine cache. Hence, we suggest that practitioners follow this principle in selecting the appropriate number of MPI processes and threads within each MPI process for their cluster configurations. We believe that the principles of this hierarchical approach to parallelization can be utilized in the parallelization of other computationally demanding kernels. PMID:21936916

  6. Hierarchical parallelization of gene differential association analysis.

    PubMed

    Needham, Mark; Hu, Rui; Dwarkadas, Sandhya; Qiu, Xing

    2011-09-21

    Microarray gene differential expression analysis is a widely used technique that deals with high dimensional data and is computationally intensive for permutation-based procedures. Microarray gene differential association analysis is even more computationally demanding and must take advantage of multicore computing technology, which is the driving force behind increasing compute power in recent years. In this paper, we present a two-layer hierarchical parallel implementation of gene differential association analysis. It takes advantage of both fine- and coarse-grain (with granularity defined by the frequency of communication) parallelism in order to effectively leverage the non-uniform nature of parallel processing available in the cutting-edge systems of today. Our results show that this hierarchical strategy matches data sharing behavior to the properties of the underlying hardware, thereby reducing the memory and bandwidth needs of the application. The resulting improved efficiency reduces computation time and allows the gene differential association analysis code to scale its execution with the number of processors. The code and biological data used in this study are downloadable from http://www.urmc.rochester.edu/biostat/people/faculty/hu.cfm. The performance sweet spot occurs when using a number of threads per MPI process that allows the working sets of the corresponding MPI processes running on the multicore to fit within the machine cache. Hence, we suggest that practitioners follow this principle in selecting the appropriate number of MPI processes and threads within each MPI process for their cluster configurations. We believe that the principles of this hierarchical approach to parallelization can be utilized in the parallelization of other computationally demanding kernels.

  7. Tuning HDF5 subfiling performance on parallel file systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Byna, Suren; Chaarawi, Mohamad; Koziol, Quincey

    Subfiling is a technique used on parallel file systems to reduce locking and contention issues when multiple compute nodes interact with the same storage target node. Subfiling provides a compromise between the single-shared-file approach, which instigates the lock contention problems on parallel file systems, and having one file per process, which generates a massive and unmanageable number of files. In this paper, we evaluate and tune the performance of the recently implemented subfiling feature in HDF5. Specifically, we explain the implementation strategy of the subfiling feature in HDF5, provide examples of using the feature, and evaluate and tune parallel I/O performance of this feature with parallel file systems of the Cray XC40 system at NERSC (Cori), which include a burst buffer storage and a Lustre disk-based storage. We also evaluate I/O performance on the Cray XC30 system, Edison, at NERSC. Our results show a 1.2X to 6X performance advantage with subfiling compared to writing a single shared HDF5 file. We present our exploration of configurations, such as the number of subfiles and the number of Lustre storage targets for storing files, as optimization parameters to obtain superior I/O performance. Based on this exploration, we discuss recommendations for achieving good I/O performance as well as limitations of using the subfiling feature.
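    The essence of subfiling is a many-to-few mapping from writers to files. A minimal sketch of such a mapping (a hypothetical contiguous-block scheme for illustration, not HDF5's actual implementation):

```python
def subfile_for_rank(rank, n_ranks, n_subfiles):
    """Map an MPI rank to a subfile index, grouping consecutive ranks
    so each subfile is written by a contiguous block of processes."""
    ranks_per_file = -(-n_ranks // n_subfiles)  # ceiling division
    return rank // ranks_per_file

# 64 writers sharing 4 subfiles: ranks 0-15 -> subfile 0, 16-31 -> 1, ...
mapping = [subfile_for_rank(r, n_ranks=64, n_subfiles=4) for r in range(64)]
```

Choosing the number of subfiles (and how many storage targets back each one) is exactly the tuning space the paper explores: too few subfiles reintroduces lock contention, too many recreates the file-per-process management problem.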

  8. Magnetosheath Filamentary Structures Formed by Ion Acceleration at the Quasi-Parallel Bow Shock

    NASA Technical Reports Server (NTRS)

    Omidi, N.; Sibeck, D.; Gutynska, O.; Trattner, K. J.

    2014-01-01

    Results from 2.5-D electromagnetic hybrid simulations show the formation of field-aligned, filamentary plasma structures in the magnetosheath. They begin at the quasi-parallel bow shock and extend far into the magnetosheath. These structures exhibit anticorrelated, spatial oscillations in plasma density and ion temperature. Closer to the bow shock, magnetic field variations associated with density and temperature oscillations may also be present. Magnetosheath filamentary structures (MFS) form primarily in the quasi-parallel sheath; however, they may extend to the quasi-perpendicular magnetosheath. They occur over a wide range of solar wind Alfvénic Mach numbers and interplanetary magnetic field directions. At lower Mach numbers with lower levels of magnetosheath turbulence, MFS remain highly coherent over large distances. At higher Mach numbers, magnetosheath turbulence decreases the level of coherence. Magnetosheath filamentary structures result from localized ion acceleration at the quasi-parallel bow shock and the injection of energetic ions into the magnetosheath. The localized nature of ion acceleration is tied to the generation of fast magnetosonic waves at and upstream of the quasi-parallel shock. The increased pressure in flux tubes containing the shock accelerated ions results in the depletion of the thermal plasma in these flux tubes and the enhancement of density in flux tubes void of energetic ions. This results in the observed anticorrelation between ion temperature and plasma density.

  9. Parallel Directionally Split Solver Based on Reformulation of Pipelined Thomas Algorithm

    NASA Technical Reports Server (NTRS)

    Povitsky, A.

    1998-01-01

    In this research an efficient parallel algorithm for 3-D directionally split problems is developed. The proposed algorithm is based on a reformulated version of the pipelined Thomas algorithm that starts the backward step computations immediately after the completion of the forward step computations for the first portion of lines This algorithm has data available for other computational tasks while processors are idle from the Thomas algorithm. The proposed 3-D directionally split solver is based on the static scheduling of processors where local and non-local, data-dependent and data-independent computations are scheduled while processors are idle. A theoretical model of parallelization efficiency is used to define optimal parameters of the algorithm, to show an asymptotic parallelization penalty and to obtain an optimal cover of a global domain with subdomains. It is shown by computational experiments and by the theoretical model that the proposed algorithm reduces the parallelization penalty about two times over the basic algorithm for the range of the number of processors (subdomains) considered and the number of grid nodes per subdomain.
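    The serial kernel being pipelined is the classical Thomas algorithm: a forward elimination sweep followed by a backward substitution sweep, and it is precisely these two sweeps that the reformulated algorithm overlaps across processors. A standalone sketch of the serial kernel (our own illustration, not the paper's code):

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i].

    Forward elimination normalizes each row against the previous one;
    backward substitution then recovers the unknowns in reverse order.
    """
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward sweep
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # backward sweep
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 4x4 system constructed so the exact solution is [1, 1, 1, 1]
x = thomas(a=[0.0, 1.0, 1.0, 1.0],
           b=[2.0, 2.0, 2.0, 2.0],
           c=[1.0, 1.0, 1.0, 0.0],
           d=[3.0, 4.0, 4.0, 3.0])
```

In the pipelined setting, each processor owns a contiguous block of rows; the reformulation lets a processor begin its backward sweep on early lines while downstream processors are still finishing forward sweeps.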

  10. Highly efficient spatial data filtering in parallel using the opensource library CPPPO

    NASA Astrophysics Data System (ADS)

    Municchi, Federico; Goniva, Christoph; Radl, Stefan

    2016-10-01

    CPPPO is a compilation of parallel data processing routines developed with the aim of creating a library for "scale bridging" (i.e. connecting different scales by means of closure models) in a multi-scale approach. CPPPO features a number of parallel filtering algorithms designed for use with structured and unstructured Eulerian meshes, as well as Lagrangian data sets. In addition, data can be processed on the fly, allowing the collection of relevant statistics without saving individual snapshots of the simulation state. Our library is provided with an interface to the widely used CFD solver OpenFOAM®, and can be easily connected to any other software package via interface modules. Also, we introduce a novel, extremely efficient approach to parallel data filtering, and show that our algorithms scale super-linearly on multi-core clusters. Furthermore, we provide a guideline for choosing the optimal Eulerian cell selection algorithm depending on the number of CPU cores used. Finally, we demonstrate the accuracy and the parallel scalability of CPPPO in a showcase focusing on heat and mass transfer from a dense bed of particles.

  11. Semi-Individualized Homeopathy Add-On Versus Usual Care Only for Premenstrual Disorders: A Randomized, Controlled Feasibility Study.

    PubMed

    Klein-Laansma, Christien T; Jong, Mats; von Hagens, Cornelia; Jansen, Jean Pierre C H; van Wietmarschen, Herman; Jong, Miek C

    2018-03-22

    Premenstrual syndrome and premenstrual dysphoric disorder (PMS/PMDD) bother a substantial number of women. Homeopathy seems a promising treatment, but it needs investigation using reliable study designs. The feasibility of organizing an international randomized pragmatic trial of a homeopathic add-on treatment (usual care [UC] + HT) compared with UC alone was evaluated. A multicenter, randomized, controlled pragmatic trial with parallel groups. The study was organized in general and private homeopathic practices in the Netherlands and Sweden and in an outpatient university clinic in Germany. Women diagnosed as having PMS/PMDD, based on prospective daily rating by the daily record of severity of problems (DRSP) during a period of 2 months, were included and randomized. Women were to receive UC + HT or UC for 4 months. Homeopathic medicine selection was according to a previously tested prognostic questionnaire and electronic algorithm. Usual care was as provided by the women's general practitioner according to their preferences. Before and after treatment, the women completed diaries (DRSP), the Measure Yourself Concerns and Wellbeing questionnaire, and other questionnaires. Intention-to-treat (ITT) and per-protocol (PP) analyses were performed. In Germany, the study could not proceed because of legal limitations. In Sweden, recruitment proved extremely difficult. In the Netherlands and Sweden, 60 women were randomized (UC + HT: 28; UC: 32), and data of 47/46 women were analyzed (ITT/PP). After 4 months, relative mean change of DRSP scores in the UC + HT group was significantly better than in the UC group (p = 0.03). With respect to recruitment and differing legal status, it does not seem feasible to perform a larger, international, pragmatic randomized trial of (semi-)individualized homeopathy for PMS/PMDD. Since the added value of HT compared with UC was demonstrated by significant differences in symptom score changes, further studies are warranted.

  12. Probiotic capsules and xylitol chewing gum to manage symptoms of pharyngitis: a randomized controlled factorial trial

    PubMed Central

    Little, Paul; Stuart, Beth; Wingrove, Zoe; Mullee, Mark; Thomas, Tammy; Johnson, Sophie; Leydon, Gerry; Richards-Hall, Samantha; Williamson, Ian; Yao, Lily; Zhu, Shihua; Moore, Michael

    2017-01-01

    BACKGROUND: Reducing the use of antibiotics for upper respiratory tract infections is needed to limit the global threat of antibiotic resistance. We estimated the effectiveness of probiotics and xylitol for the management of pharyngitis. METHODS: In this parallel-group factorial randomized controlled trial, participants in primary care (aged 3 years or older) with pharyngitis underwent randomization by nurses who provided sequential intervention packs. Pack contents for 3 kinds of material and advice were previously determined by computer-generated random numbers: no chewing gum, xylitol-based chewing gum (15% xylitol; 5 pieces daily) and sorbitol gum (5 pieces daily). Half of each group were also randomly assigned to receive either probiotic capsules (containing 24 × 109 colony-forming units of lactobacilli and bifidobacteria) or placebo. The primary outcome was mean self-reported severity of sore throat and difficulty swallowing (scale 0–6) in the first 3 days. We used multiple imputation to avoid the assumption that data were missing completely at random. RESULTS: A total of 1009 individuals consented, 934 completed the baseline assessment, and 689 provided complete data for the primary outcome. Probiotics were not effective in reducing the severity of symptoms: mean severity scores 2.75 with no probiotic and 2.78 with probiotic (adjusted difference −0.001, 95% confidence interval [CI] −0.24 to 0.24). Chewing gum was also ineffective: mean severity scores 2.73 without gum, 2.72 with sorbitol gum (adjusted difference 0.07, 95% CI −0.23 to 0.37) and 2.73 with xylitol gum (adjusted difference 0.01, 95% CI −0.29 to 0.30). None of the secondary outcomes differed significantly between groups, and no harms were reported. INTERPRETATION: Neither probiotics nor advice to chew xylitol-based chewing gum was effective for managing pharyngitis. Trial registration: ISRCTN, no. ISRCTN51472596 PMID:29255098

  13. Probiotic capsules and xylitol chewing gum to manage symptoms of pharyngitis: a randomized controlled factorial trial.

    PubMed

    Little, Paul; Stuart, Beth; Wingrove, Zoe; Mullee, Mark; Thomas, Tammy; Johnson, Sophie; Leydon, Gerry; Richards-Hall, Samantha; Williamson, Ian; Yao, Lily; Zhu, Shihua; Moore, Michael

    2017-12-18

    Reducing the use of antibiotics for upper respiratory tract infections is needed to limit the global threat of antibiotic resistance. We estimated the effectiveness of probiotics and xylitol for the management of pharyngitis. In this parallel-group factorial randomized controlled trial, participants in primary care (aged 3 years or older) with pharyngitis underwent randomization by nurses who provided sequential intervention packs. Pack contents for 3 kinds of material and advice were previously determined by computer-generated random numbers: no chewing gum, xylitol-based chewing gum (15% xylitol; 5 pieces daily) and sorbitol gum (5 pieces daily). Half of each group were also randomly assigned to receive either probiotic capsules (containing 24 × 10 9 colony-forming units of lactobacilli and bifidobacteria) or placebo. The primary outcome was mean self-reported severity of sore throat and difficulty swallowing (scale 0-6) in the first 3 days. We used multiple imputation to avoid the assumption that data were missing completely at random. A total of 1009 individuals consented, 934 completed the baseline assessment, and 689 provided complete data for the primary outcome. Probiotics were not effective in reducing the severity of symptoms: mean severity scores 2.75 with no probiotic and 2.78 with probiotic (adjusted difference -0.001, 95% confidence interval [CI] -0.24 to 0.24). Chewing gum was also ineffective: mean severity scores 2.73 without gum, 2.72 with sorbitol gum (adjusted difference 0.07, 95% CI -0.23 to 0.37) and 2.73 with xylitol gum (adjusted difference 0.01, 95% CI -0.29 to 0.30). None of the secondary outcomes differed significantly between groups, and no harms were reported. Neither probiotics nor advice to chew xylitol-based chewing gum was effective for managing pharyngitis. Trial registration: ISRCTN, no. ISRCTN51472596. © 2017 Joule Inc. or its licensors.

  14. Cooperative storage of shared files in a parallel computing system with dynamic block size

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2015-11-10

    Improved techniques are provided for parallel writing of data to a shared object in a parallel computing system. A method is provided for storing data generated by a plurality of parallel processes to a shared object in a parallel computing system. The method is performed by at least one of the processes and comprises: dynamically determining a block size for storing the data; exchanging a determined amount of the data with at least one additional process to achieve a block of the data having the dynamically determined block size; and writing the block of the data having the dynamically determined block size to a file system. The determined block size comprises, e.g., a total amount of the data to be stored divided by the number of parallel processes. The file system comprises, for example, a log structured virtual parallel file system, such as a Parallel Log-Structured File System (PLFS).
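    The example rule in the abstract (block size equals the total data to be stored divided by the number of parallel processes) is simple enough to state directly; the sketch below adds ceiling rounding so the blocks cover all of the data (that rounding is our assumption, not a detail from the patent):

```python
def dynamic_block_size(total_bytes, n_procs):
    """Dynamic block size per the example rule: total data to be
    stored divided by the number of parallel processes, rounded up
    so n_procs blocks always cover the whole object."""
    return -(-total_bytes // n_procs)  # ceiling division

# 1 GiB written cooperatively by 256 processes -> 4 MiB blocks
size = dynamic_block_size(2**30, 256)
```

Each process would then exchange data with its neighbours until it holds exactly one block of this size before issuing its write, which is the cooperative step the patent describes.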

  15. Optimisation of a parallel ocean general circulation model

    NASA Astrophysics Data System (ADS)

    Beare, M. I.; Stevens, D. P.

    1997-10-01

    This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by a number of factors, for which optimisations are discussed and implemented. The resulting ocean code is portable and, in particular, allows science to be achieved on local workstations that could otherwise only be undertaken on state-of-the-art supercomputers.

  16. Parallel computation with the force

    NASA Technical Reports Server (NTRS)

    Jordan, H. F.

    1985-01-01

    A methodology, called the force, supports the construction of programs to be executed in parallel by a force of processes. The number of processes in the force is unspecified, but potentially very large. The force idea is embodied in a set of macros which produce multiprocessor FORTRAN code and has been studied on two shared memory multiprocessors of fairly different character. The method has simplified the writing of highly parallel programs within a limited class of parallel algorithms and is being extended to cover a broader class. The individual parallel constructs which comprise the force methodology are discussed. Of central concern are their semantics, implementation on different architectures and performance implications.

  17. Random sphere packing model of heterogeneous propellants

    NASA Astrophysics Data System (ADS)

    Kochevets, Sergei Victorovich

    It is well recognized that combustion of heterogeneous propellants is strongly dependent on the propellant morphology. Recent developments in computing systems make it possible to start three-dimensional modeling of heterogeneous propellant combustion. A key component of such large scale computations is a realistic model of industrial propellants which retains the true morphology---a goal never achieved before. The research presented develops the Random Sphere Packing Model of heterogeneous propellants and generates numerical samples of actual industrial propellants. This is done by developing a sphere packing algorithm which randomly packs a large number of spheres with a polydisperse size distribution within a rectangular domain. First, the packing code is developed, optimized for performance, and parallelized using the OpenMP shared memory architecture. Second, the morphology and packing fraction of two simple cases of unimodal and bimodal packs are investigated computationally and analytically. It is shown that both the Loose Random Packing and Dense Random Packing limits are not well defined, and the growth rate of the spheres is identified as the key parameter controlling the efficiency of the packing. For a properly chosen growth rate, computational results are found to be in excellent agreement with experimental data. Third, two strategies are developed to define numerical samples of polydisperse heterogeneous propellants: the Deterministic Strategy and the Random Selection Strategy. Using these strategies, numerical samples of industrial propellants are generated. The packing fraction is investigated and it is shown that the experimental values of the packing fraction can be achieved computationally. It is strongly believed that this Random Sphere Packing Model of propellants is a major step forward in the realistic computational modeling of heterogeneous propellant combustion.
In addition, a method of analysis of the morphology of heterogeneous propellants is developed which uses the concept of multi-point correlation functions. A set of intrinsic length scales of local density fluctuations in random heterogeneous propellants is identified by performing a Monte-Carlo study of the correlation functions. This method of analysis shows great promise for understanding the origins of the combustion instability of heterogeneous propellants, and is believed to become a valuable tool for the development of safe and reliable rocket engines.
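    The flavour of random sphere packing can be conveyed by the simplest scheme, random sequential addition: propose uniformly random centres and reject any that overlap a sphere already placed. (This is far cruder than the growth-rate algorithm developed in the thesis and saturates at much lower packing fractions, but it illustrates the non-overlap constraint at the heart of the model.)

```python
import random

def pack_spheres(n, radius, box=1.0, max_tries=20000, seed=0):
    """Random sequential addition of equal spheres in a cubic box:
    keep proposing uniform random centres, accepting a sphere only if
    it overlaps no previously accepted sphere."""
    rng = random.Random(seed)
    centers = []
    tries = 0
    while len(centers) < n and tries < max_tries:
        tries += 1
        p = tuple(rng.uniform(radius, box - radius) for _ in range(3))
        if all(sum((a - b) ** 2 for a, b in zip(p, q)) >= (2 * radius) ** 2
               for q in centers):
            centers.append(p)
    return centers

# A dilute pack: 50 spheres of radius 0.05 in the unit cube
spheres = pack_spheres(n=50, radius=0.05, seed=1)
```

Replacing fixed radii with spheres that grow at a controlled rate, as the thesis does, is what allows the dense, polydisperse packs of industrial propellants to be reached.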

  18. Fully Parallel MHD Stability Analysis Tool

    NASA Astrophysics Data System (ADS)

    Svidzinski, Vladimir; Galkin, Sergei; Kim, Jin-Soo; Liu, Yueqiang

    2014-10-01

    Progress on full parallelization of the plasma stability code MARS will be reported. MARS calculates eigenmodes in 2D axisymmetric toroidal equilibria in MHD-kinetic plasma models. It is a powerful tool for studying MHD and MHD-kinetic instabilities and is widely used by the fusion community. The parallel version of MARS is intended for simulations on local parallel clusters. It will be an efficient tool for simulating MHD instabilities with low, intermediate, and high toroidal mode numbers within both the fluid and kinetic plasma models already implemented in MARS. Parallelization of the code includes parallelization of the construction of the matrix for the eigenvalue problem and parallelization of the inverse-iteration algorithm implemented in MARS for the solution of the formulated eigenvalue problem. Construction of the matrix is parallelized by distributing the load among processors assigned to different magnetic surfaces. Parallelization of the solution of the eigenvalue problem is made by repeating steps of the present MARS algorithm using parallel libraries and procedures. Initial results of the code parallelization will be reported. Work is supported by the U.S. DOE SBIR program.

  19. Research in Parallel Algorithms and Software for Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Domel, Neal D.

    1996-01-01

    Phase I is complete for the development of a Computational Fluid Dynamics parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well, and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.

  1. Secure uniform random-number extraction via incoherent strategies

    NASA Astrophysics Data System (ADS)

    Hayashi, Masahito; Zhu, Huangjun

    2018-01-01

    To guarantee the security of uniform random numbers generated by a quantum random-number generator, we study secure extraction of uniform random numbers when the environment of a given quantum state is controlled by the third party, the eavesdropper. Here we restrict our operations to incoherent strategies that are composed of the measurement on the computational basis and incoherent operations (or incoherence-preserving operations). We show that the maximum secure extraction rate is equal to the relative entropy of coherence. By contrast, the coherence of formation gives the extraction rate when a certain constraint is imposed on the eavesdropper's operations. The condition under which the two extraction rates coincide is then determined. Furthermore, we find that the exponential decreasing rate of the leaked information is characterized by Rényi relative entropies of coherence. These results clarify the power of incoherent strategies in random-number generation, and can be applied to guarantee the quality of random numbers generated by a quantum random-number generator.
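    The key quantity here is concrete enough to compute directly: the relative entropy of coherence is C_r(ρ) = S(Δ(ρ)) − S(ρ), where Δ dephases the state in the computational basis and S is the von Neumann entropy. A minimal qubit-only sketch (restricted to real-valued 2x2 density matrices for brevity):

    ```python
    import math

    def entropy(eigs):
        """Von Neumann entropy in bits from a list of eigenvalues."""
        return -sum(p * math.log2(p) for p in eigs if p > 1e-12)

    def eig2x2(rho):
        """Eigenvalues of a 2x2 Hermitian matrix with real entries."""
        t = rho[0][0] + rho[1][1]
        d = rho[0][0] * rho[1][1] - rho[0][1] * rho[1][0]
        s = math.sqrt(max(t * t - 4 * d, 0.0))
        return [(t + s) / 2, (t - s) / 2]

    def relative_entropy_of_coherence(rho):
        """C_r(rho) = S(diag(rho)) - S(rho): the entropy gained by
        dephasing in the computational basis."""
        return entropy([rho[0][0], rho[1][1]]) - entropy(eig2x2(rho))

    plus = [[0.5, 0.5], [0.5, 0.5]]  # |+><+|, a maximally coherent qubit
    print(relative_entropy_of_coherence(plus))  # → 1.0 bit
    ```

    The maximally coherent qubit yields one extractable uniform bit, while any diagonal (incoherent) state yields zero, consistent with the extraction-rate result stated above.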

  2. Development of gallium arsenide high-speed, low-power serial parallel interface modules: Executive summary

    NASA Technical Reports Server (NTRS)

    1988-01-01

    Final report to NASA LeRC on the development of gallium arsenide (GaAs) high-speed, low-power serial/parallel interface modules. The report discusses the development and test of a family of 16-, 32- and 64-bit parallel-to-serial and serial-to-parallel integrated circuits using a self-aligned gate MESFET technology developed at the Honeywell Sensors and Signal Processing Laboratory. Lab testing demonstrated 1.3 GHz clock rates at a power of 300 mW. This work was accomplished under contract number NAS3-24676.

  3. Vehicle lateral motion regulation under unreliable communication links based on robust H∞ output-feedback control schema

    NASA Astrophysics Data System (ADS)

    Li, Cong; Jing, Hui; Wang, Rongrong; Chen, Nan

    2018-05-01

    This paper presents a robust control schema for vehicle lateral motion regulation under unreliable communication links via controller area network (CAN). The communication links between the system plant and the controller are assumed to be imperfect, and therefore data packet dropouts occur frequently. The paper takes the form of parallel distributed compensation and treats the dropouts as random binary numbers that follow a Bernoulli distribution. Both the tire cornering stiffness uncertainty and external disturbances are considered to enhance the robustness of the controller. In addition, a robust H∞ static output-feedback control approach is proposed to realize the lateral motion control with relatively low-cost sensors. The stochastic stability of the closed-loop system and preservation of the guaranteed H∞ performance are investigated. Simulation results based on the CarSim platform using a high-fidelity, full-car model verify the effectiveness of the proposed control approach.
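    The dropout model is easy to emulate: each packet's arrival is an i.i.d. Bernoulli variable, and a typical networked controller holds the last received measurement across a dropout. A sketch of that plumbing only (the paper's controller synthesis is not reproduced here):

    ```python
    import random

    def simulate_link(n_packets, arrival_prob, seed=0):
        """Each packet's arrival is an i.i.d. Bernoulli trial:
        1 = delivered, 0 = dropped."""
        rng = random.Random(seed)
        return [1 if rng.random() < arrival_prob else 0
                for _ in range(n_packets)]

    def zero_order_hold(samples, arrivals):
        """When a packet is dropped, reuse the last successfully
        received measurement."""
        held, last = [], 0.0
        for s, a in zip(samples, arrivals):
            if a:
                last = s
            held.append(last)
        return held

    arrivals = simulate_link(100000, arrival_prob=0.9)
    ```

    Over many packets the empirical delivery rate approaches the Bernoulli parameter, which is the statistic the stochastic stability analysis is built on.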

  4. Expansion of Protein Farnesyltransferase Specificity Using “Tunable” Active Site Interactions

    PubMed Central

    Hougland, James L.; Gangopadhyay, Soumyashree A.; Fierke, Carol A.

    2012-01-01

    Post-translational modifications play essential roles in regulating protein structure and function. Protein farnesyltransferase (FTase) catalyzes the biologically relevant lipidation of up to several hundred cellular proteins. Site-directed mutagenesis of FTase coupled with peptide selectivity measurements demonstrates that molecular recognition is determined by a combination of multiple interactions. Targeted randomization of these interactions yields FTase variants with altered and, in some cases, bio-orthogonal selectivity. We demonstrate that FTase specificity can be “tuned” using a small number of active site contacts that play essential roles in discriminating against non-substrates in the wild-type enzyme. This tunable selectivity extends in vivo, with FTase variants enabling the creation of bioengineered parallel prenylation pathways with altered substrate selectivity within a cell. Engineered FTase variants provide a novel avenue for probing both the selectivity of prenylation pathway enzymes and the effects of prenylation pathway modifications on the cellular function of a protein. PMID:22992747

  5. Unsteady flow past an airfoil pitched at constant rate

    NASA Technical Reports Server (NTRS)

    Lourenco, L.; Vandommelen, L.; Shib, C.; Krothapalli, A.

    1992-01-01

    The unsteady flow past a NACA 0012 airfoil undergoing a constant-rate pitching-up motion is investigated experimentally by the PIDV technique in a water towing tank. The Reynolds number is 5000, based upon the airfoil's chord and the free-stream velocity. The airfoil pitches impulsively from 0 to 30 deg with a dimensionless pitch rate alpha of 0.131. Instantaneous velocity and associated vorticity data have been acquired over the entire flow field. The primary vortex dominates the flow behavior after it separates from the leading edge of the airfoil. Complete stall emerges after this vortex detaches from the airfoil and triggers the shedding of a counter-rotating vortex near the trailing edge. A parallel computational study using the discrete vortex, random walk approximation has also been conducted. In general, the computational results agree very well with the experiment.
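    In discrete-vortex random-walk methods of the kind referenced above, viscous diffusion is mimicked by giving each vortex element an independent Gaussian displacement with variance 2νΔt per coordinate. A sketch of that single step (not the full vortex dynamics, and not this paper's specific code):

    ```python
    import math
    import random

    def random_walk_step(positions, nu, dt, rng):
        """Random-walk diffusion: each vortex element takes an independent
        Gaussian step with variance 2*nu*dt per coordinate."""
        sigma = math.sqrt(2.0 * nu * dt)
        return [(x + rng.gauss(0.0, sigma), y + rng.gauss(0.0, sigma))
                for (x, y) in positions]

    rng = random.Random(0)
    moved = random_walk_step([(0.0, 0.0)] * 20000, nu=1.0, dt=0.5, rng=rng)
    ```

    A large ensemble released from the origin should show a sample variance per coordinate close to the theoretical value 2νΔt, which is the consistency check used when validating such solvers.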

  6. Carbon nanotube bundles with tensile strength over 80 GPa.

    PubMed

    Bai, Yunxiang; Zhang, Rufan; Ye, Xuan; Zhu, Zhenxing; Xie, Huanhuan; Shen, Boyuan; Cai, Dali; Liu, Bofei; Zhang, Chenxi; Jia, Zhao; Zhang, Shenli; Li, Xide; Wei, Fei

    2018-05-14

    Carbon nanotubes (CNTs) are one of the strongest known materials. When assembled into fibres, however, their strength becomes impaired by defects, impurities, random orientations and discontinuous lengths. Fabricating CNT fibres with strength reaching that of a single CNT has been an enduring challenge. Here, we demonstrate the fabrication of CNT bundles (CNTBs) that are centimetres long with tensile strength over 80 GPa using ultralong defect-free CNTs. The tensile strength of CNTBs is controlled by the Daniels effect owing to the non-uniformity of the initial strains in the components. We propose a synchronous tightening and relaxing strategy to release these non-uniform initial strains. The fabricated CNTBs, consisting of a large number of components with parallel alignment, defect-free structures, continuous lengths and uniform initial strains, exhibit a tensile strength of 80 GPa (corresponding to an engineering tensile strength of 43 GPa), which is far higher than that of any other strong fibre.

  7. ISCFD Nagoya 1989 - International Symposium on Computational Fluid Dynamics, 3rd, Nagoya, Japan, Aug. 28-31, 1989, Technical Papers

    NASA Astrophysics Data System (ADS)

    Recent advances in computational fluid dynamics are discussed in reviews and reports. Topics addressed include large-scale LESs for turbulent pipe and channel flows, numerical solutions of the Euler and Navier-Stokes equations on parallel computers, multigrid methods for steady high-Reynolds-number flow past sudden expansions, finite-volume methods on unstructured grids, supersonic wake flow on a blunt body, a grid-characteristic method for multidimensional gas dynamics, and CIC numerical simulation of a wave boundary layer. Consideration is given to vortex simulations of confined two-dimensional jets, supersonic viscous shear layers, spectral methods for compressible flows, shock-wave refraction at air/water interfaces, oscillatory flow in a two-dimensional collapsible channel, the growth of randomness in a spatially developing wake, and an efficient simplex algorithm for the finite-difference and dynamic linear-programming method in optimal potential control.

  8. A wide bandwidth CCD buffer memory system

    NASA Technical Reports Server (NTRS)

    Siemens, K.; Wallace, R. W.; Robinson, C. R.

    1978-01-01

    A prototype system was implemented to demonstrate that CCDs can be applied advantageously to the problem of low-power digital storage and particularly to the problem of interfacing widely varying data rates. CCD shift register memories (8K bit) were used to construct a feasibility-model 128 K-bit buffer memory system. Serial data with rates between 150 kHz and 4.0 MHz can be stored in 4K-bit, randomly accessible memory blocks. Peak power dissipation during a data transfer is less than 7 W, while idle power is approximately 5.4 W. The system features automatic data input synchronization with the recirculating CCD memory block start address. System expansion to accommodate parallel inputs or a greater number of memory blocks can be performed in a modular fashion. Since the control logic does not increase proportionally with memory capacity, the power requirements per bit of storage can be reduced significantly in a larger system.
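    The rate-decoupling idea, filling fixed-size blocks at the input clock and draining them independently at the output clock, can be mimicked in software. A sketch with illustrative sizes (the hardware used 4K-bit CCD blocks; `BlockBuffer` is a hypothetical name):

    ```python
    from collections import deque

    class BlockBuffer:
        """Fixed-size blocks fill at the input rate and drain independently,
        decoupling producer and consumer clocks. When every block is full,
        the oldest is overwritten, as in a recirculating memory."""
        def __init__(self, block_bits=4096, n_blocks=32):
            self.block_bits = block_bits
            self.blocks = deque(maxlen=n_blocks)
            self.current = []

        def write_bit(self, bit):
            self.current.append(bit)
            if len(self.current) == self.block_bits:
                self.blocks.append(self.current)  # block full: commit it
                self.current = []

        def read_block(self):
            return self.blocks.popleft() if self.blocks else None
    ```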

  9. Acoustic receptivity and transition modeling of Tollmien-Schlichting disturbances induced by distributed surface roughness

    NASA Astrophysics Data System (ADS)

    Raposo, Henrique; Mughal, Shahid; Ashworth, Richard

    2018-04-01

    Acoustic receptivity to Tollmien-Schlichting waves in the presence of surface roughness is investigated for a flat plate boundary layer using the time-harmonic incompressible linearized Navier-Stokes equations. It is shown to be an accurate and efficient means of predicting receptivity amplitudes and, therefore, to be more suitable for parametric investigations than other approaches with direct-numerical-simulation-like accuracy. Comparison with the literature provides strong evidence of the correctness of the approach, including the ability to quantify non-parallel flow effects. These effects are found to be small for the efficiency function over a wide range of frequencies and local Reynolds numbers. In the presence of a two-dimensional wavy-wall, non-parallel flow effects are quite significant, producing both wavenumber detuning and an increase in maximum amplitude. However, a smaller influence is observed when considering an oblique Tollmien-Schlichting wave. This is explained by considering the non-parallel effects on receptivity and on linear growth which may, under certain conditions, cancel each other out. Ultimately, we undertake a Monte Carlo type uncertainty quantification analysis with two-dimensional distributed random roughness. Its power spectral density (PSD) is assumed to follow a power law with an associated uncertainty following a probabilistic Gaussian distribution. The effects of the acoustic frequency over the mean amplitude of the generated two-dimensional Tollmien-Schlichting waves are studied. A strong dependence on the mean PSD shape is observed and discussed according to the basic resonance mechanisms leading to receptivity. The growth of Tollmien-Schlichting waves is predicted with non-linear parabolized stability equations computations to assess the effects of stochasticity in transition location.

  10. Random numbers from vacuum fluctuations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, Yicheng; Kurtsiefer, Christian, E-mail: christian.kurtsiefer@gmail.com; Center for Quantum Technologies, National University of Singapore, 3 Science Drive 2, Singapore 117543

    2016-07-25

    We implement a quantum random number generator based on a balanced homodyne measurement of vacuum fluctuations of the electromagnetic field. The digitized signal is directly processed with a fast randomness extraction scheme based on a linear feedback shift register. The random bit stream is continuously read in a computer at a rate of about 480 Mbit/s and passes an extended test suite for random numbers.
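    The extraction stage can be sketched in a few lines: raw, possibly biased bits are mixed into the feedback of a maximal-length LFSR and the evolving state is read out. This illustrates the general idea only; the paper's actual extractor design and parameters are not reproduced here, and a real extractor compresses, emitting fewer bits than it consumes.

    ```python
    def lfsr_extract(raw_bits, out_len, state=0xACE1):
        """Mix raw bits into the feedback of a 16-bit maximal-length
        Fibonacci LFSR (taps 16, 14, 13, 11) and emit the state's low bit.
        Parameters are illustrative, not those of the paper's hardware."""
        out = []
        it = iter(raw_bits)
        for _ in range(out_len):
            fb = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
            fb ^= next(it, 0)                  # inject one raw bit
            state = (state >> 1) | (fb << 15)  # shift; new bit enters at top
            out.append(state & 1)
        return out
    ```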

  11. Investigating the Randomness of Numbers

    ERIC Educational Resources Information Center

    Pendleton, Kenn L.

    2009-01-01

    The use of random numbers is pervasive in today's world. Random numbers have practical applications in such far-flung arenas as computer simulations, cryptography, gambling, the legal system, statistical sampling, and even the war on terrorism. Evaluating the randomness of extremely large samples is a complex, intricate process. However, the…

  12. Scalable Domain Decomposed Monte Carlo Particle Transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Brien, Matthew Joseph

    2013-12-05

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.

  13. Optimization of Monte Carlo dose calculations: The interface problem

    NASA Astrophysics Data System (ADS)

    Soudentas, Edward

    1998-05-01

    High energy photon beams are widely used for radiation treatment of deep-seated tumors. The human body contains many types of interfaces between dissimilar materials that affect dose distribution in radiation therapy. Experimentally, significant radiation dose perturbations have been observed at such interfaces. The EGS4 Monte Carlo code was used to calculate dose perturbations at boundaries between dissimilar materials (such as bone/water) for 60Co and 6 MeV linear accelerator beams using a UNIX workstation. A simple test of the reliability of a random number generator was also developed. A systematic study of the adjustable parameters in EGS4 was performed in order to minimize calculational artifacts at boundaries. Calculations of dose perturbations at boundaries between different materials showed that there is a 12% increase in dose at the water/bone interface and a 44% increase in dose at the water/copper interface, with the increase mainly due to electrons produced in water and backscattered from the high atomic number material. The dependence of the dose increase on the atomic number was also investigated. The clinically important case of using two parallel opposed beams for radiation therapy was investigated, where increased doses at boundaries have been observed. The Monte Carlo calculations can provide accurate dosimetry data under conditions of electronic non-equilibrium at tissue interfaces.
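    The abstract mentions a simple reliability test for the random number generator without describing it; a generic stand-in for such a check is a chi-square uniformity test on binned samples, sketched here under that assumption:

    ```python
    import random

    def chi_square_uniformity(samples, n_bins=10):
        """Bin samples from [0, 1) and compute the chi-square statistic
        against the uniform expectation. A healthy generator gives values
        near n_bins - 1; values far above that flag a suspect generator."""
        counts = [0] * n_bins
        for u in samples:
            counts[min(int(u * n_bins), n_bins - 1)] += 1
        expected = len(samples) / n_bins
        return sum((c - expected) ** 2 / expected for c in counts)

    rng = random.Random(42)
    stat = chi_square_uniformity([rng.random() for _ in range(10000)])
    ```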

  14. A Domain Decomposition Parallelization of the Fast Marching Method

    NASA Technical Reports Server (NTRS)

    Herrmann, M.

    2003-01-01

    In this paper, the first domain decomposition parallelization of the Fast Marching Method for level sets has been presented. Parallel speedup has been demonstrated in both the optimal and non-optimal domain decomposition cases. The parallel performance of the proposed method is strongly dependent on separately load balancing the number of nodes on each side of the interface. A load imbalance of nodes on either side of the domain leads to an increase in communication and rollback operations. Furthermore, the amount of inter-domain communication can be reduced by aligning the inter-domain boundaries with the interface normal vectors. In the case of optimal load balancing and aligned inter-domain boundaries, the proposed parallel FMM algorithm is highly efficient, reaching efficiency factors of up to 0.98. Future work will focus on the extension of the proposed parallel algorithm to higher order accuracy. Also, to further enhance parallel performance, the coupling of the domain decomposition parallelization to the G(sub 0)-based parallelization will be investigated.

  15. Electron heating in a Monte Carlo model of a high Mach number, supercritical, collisionless shock

    NASA Technical Reports Server (NTRS)

    Ellison, Donald C.; Jones, Frank C.

    1987-01-01

    Preliminary work in the investigation of electron injection and acceleration at parallel shocks is presented. A simple model of electron heating that is derived from a unified shock model which includes the effects of an electrostatic potential jump is described. The unified shock model provides a kinetic description of the injection and acceleration of ions and a fluid description of electron heating at high Mach number, supercritical, and parallel shocks.

  16. Parallel Ray Tracing Using the Message Passing Interface

    DTIC Science & Technology

    2007-09-01

    Ray-tracing software is available for lens design and for general optical systems modeling. It tends to be designed to run on a single processor and can be very... (Cameron, Senior Member, IEEE) Index terms: National Aeronautics and Space Administration (NASA), optical ray tracing, parallel computing, parallel processing, prime numbers, ray tracing

  17. Why caution is recommended with post-hoc individual patient matching for estimation of treatment effect in parallel-group randomized controlled trials: the case of acute stroke trials.

    PubMed

    Jafari, Nahid; Hearne, John; Churilov, Leonid

    2013-11-10

    A post-hoc individual patient matching procedure was recently proposed within the context of parallel group randomized clinical trials (RCTs) as a method for estimating treatment effect. In this paper, we consider a post-hoc individual patient matching problem within a parallel group RCT as a multi-objective decision-making problem focussing on the trade-off between the quality of individual matches and the overall percentage of matching. Using acute stroke trials as a context, we utilize exact optimization and simulation techniques to investigate a complex relationship between the overall percentage of individual post-hoc matching, the size of the respective RCT, and the quality of matching on variables highly prognostic for a good functional outcome after stroke, as well as the dispersion in these variables. It is empirically confirmed that a high percentage of individual post-hoc matching can only be achieved when the differences in prognostic baseline variables between individually matched subjects within the same pair are sufficiently large and that the unmatched subjects are qualitatively different to the matched ones. It is concluded that the post-hoc individual matching as a technique for treatment effect estimation in parallel-group RCTs should be exercised with caution because of its propensity to introduce significant bias and reduce validity. If used with appropriate caution and thorough evaluation, this approach can complement other viable alternative approaches for estimating the treatment effect. Copyright © 2013 John Wiley & Sons, Ltd.

  18. Scattering Properties of Heterogeneous Mineral Particles with Absorbing Inclusions

    NASA Technical Reports Server (NTRS)

    Dlugach, Janna M.; Mishchenko, Michael I.

    2015-01-01

    We analyze the results of numerically exact computer modeling of scattering and absorption properties of randomly oriented poly-disperse heterogeneous particles obtained by placing microscopic absorbing grains randomly on the surfaces of much larger spherical mineral hosts or by imbedding them randomly inside the hosts. These computations are paralleled by those for heterogeneous particles obtained by fully encapsulating fractal-like absorbing clusters in the mineral hosts. All computations are performed using the superposition T-matrix method. In the case of randomly distributed inclusions, the results are compared with the outcome of Lorenz-Mie computations for an external mixture of the mineral hosts and absorbing grains. We conclude that internal aggregation can affect strongly both the integral radiometric and differential scattering characteristics of the heterogeneous particle mixtures.

  19. Parallel Calculation of Sensitivity Derivatives for Aircraft Design using Automatic Differentiation

    NASA Technical Reports Server (NTRS)

    Bischof, C. H.; Green, L. L.; Haigler, K. J.; Knauff, T. L., Jr.

    1994-01-01

    Sensitivity derivative (SD) calculation via automatic differentiation (AD) typical of that required for the aerodynamic design of a transport-type aircraft is considered. Two ways of computing SD via code generated by the ADIFOR automatic differentiation tool are compared for efficiency and applicability to problems involving large numbers of design variables. A vector implementation on a Cray Y-MP computer is compared with a coarse-grained parallel implementation on an IBM SP1 computer, employing a Fortran M wrapper. The SD are computed for a swept transport wing in turbulent, transonic flow; the number of geometric design variables varies from 1 to 60 with coupling between a wing grid generation program and a state-of-the-art, 3-D computational fluid dynamics program, both augmented for derivative computation via AD. For a small number of design variables, the Cray Y-MP implementation is much faster. As the number of design variables grows, however, the IBM SP1 becomes an attractive alternative in terms of compute speed, job turnaround time, and total memory available for solutions with large numbers of design variables. The coarse-grained parallel implementation also can be moved easily to a network of workstations.
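    ADIFOR works by source transformation of Fortran, but the chain-rule bookkeeping it generates is the same as forward-mode automatic differentiation, which can be illustrated with dual numbers in a few lines (an illustration of the principle, not ADIFOR itself):

    ```python
    class Dual:
        """Forward-mode AD: carry (value, derivative) pairs through
        arithmetic, propagating the chain rule alongside the computation."""
        def __init__(self, val, dot=0.0):
            self.val, self.dot = val, dot

        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val + o.val, self.dot + o.dot)
        __radd__ = __add__

        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val * o.val,
                        self.val * o.dot + self.dot * o.val)  # product rule
        __rmul__ = __mul__

    def f(x):
        return 3 * x * x + 2 * x + 1   # f'(x) = 6x + 2

    x = Dual(2.0, 1.0)   # seed the derivative dx/dx = 1
    y = f(x)             # y.val = f(2) = 17, y.dot = f'(2) = 14
    ```

    Seeding one design variable at a time in this fashion is exactly why the cost of forward-mode SD computation grows with the number of design variables, the scaling issue the paper's parallel implementations address.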

  20. Implementing the PM Programming Language using MPI and OpenMP - a New Tool for Programming Geophysical Models on Parallel Systems

    NASA Astrophysics Data System (ADS)

    Bellerby, Tim

    2015-04-01

    PM (Parallel Models) is a new parallel programming language specifically designed for writing environmental and geophysical models. The language is intended to enable implementers to concentrate on the science behind the model rather than the details of running on parallel hardware. At the same time PM leaves the programmer in control - all parallelisation is explicit and the parallel structure of any given program may be deduced directly from the code. This paper describes a PM implementation based on the Message Passing Interface (MPI) and Open Multi-Processing (OpenMP) standards, looking at issues involved with translating the PM parallelisation model to MPI/OpenMP protocols and considering performance in terms of the competing factors of finer-grained parallelisation and increased communication overhead. In order to maximise portability, the implementation stays within the MPI 1.3 standard as much as possible, with MPI-2 MPI-IO file handling the only significant exception. Moreover, it does not assume a thread-safe implementation of MPI. PM adopts a two-tier abstract representation of parallel hardware. A PM processor is a conceptual unit capable of efficiently executing a set of language tasks, with a complete parallel system consisting of an abstract N-dimensional array of such processors. PM processors may map to single cores executing tasks using cooperative multi-tasking, to multiple cores or even to separate processing nodes, efficiently sharing tasks using algorithms such as work stealing. While tasks may move between hardware elements within a PM processor, they may not move between processors without specific programmer intervention. Tasks are assigned to processors using a nested parallelism approach, building on ideas from Reyes et al. (2009). The main program owns all available processors. 
When the program enters a parallel statement then either processors are divided out among the newly generated tasks (number of new tasks < number of processors) or tasks are divided out among the available processors (number of tasks > number of processors). Nested parallel statements may further subdivide the processor set owned by a given task. Tasks or processors are distributed evenly by default, but uneven distributions are possible under programmer control. It is also possible to explicitly enable child tasks to migrate within the processor set owned by their parent task, reducing load imbalance at the potential cost of increased inter-processor message traffic. PM incorporates some programming structures from the earlier MIST language presented at a previous EGU General Assembly, while adopting a significantly different underlying parallelisation model and type system. PM code is available at www.pm-lang.org under an unrestrictive MIT license. Reference: Ruymán Reyes, Antonio J. Dorta, Francisco Almeida, Francisco de Sande, 2009. Automatic Hybrid MPI+OpenMP Code Generation with llc, Recent Advances in Parallel Virtual Machine and Message Passing Interface, Lecture Notes in Computer Science Volume 5759, 185-195
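    The two-way rule described above (fewer tasks than processors: split the processor set among tasks; more tasks: spread tasks across processors) can be sketched directly. `distribute` and `assign` are illustrative names, not PM syntax:

    ```python
    def distribute(n_items, n_owners):
        """Divide n_items as evenly as possible among n_owners; the first
        n_items % n_owners owners each receive one extra item."""
        base, extra = divmod(n_items, n_owners)
        return [base + (1 if i < extra else 0) for i in range(n_owners)]

    def assign(n_tasks, n_procs):
        """Sketch of the abstract's rule: with fewer tasks than processors,
        split the processor set among tasks; otherwise spread the tasks
        across the available processors."""
        if n_tasks < n_procs:
            return ("procs_per_task", distribute(n_procs, n_tasks))
        return ("tasks_per_proc", distribute(n_tasks, n_procs))
    ```

    For example, 3 tasks on 8 processors yields processor counts [3, 3, 2] per task, while 10 tasks on 4 processors yields task counts [3, 3, 2, 2] per processor.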

  1. Counting or Chunking?

    PubMed Central

    Spotorno, Nicola; McMillan, Corey T.; Powers, John P.; Clark, Robin; Grossman, Murray

    2014-01-01

    A growing body of empirical data shows that the ability to manipulate quantities in a precise and efficient fashion is rooted in cognitive mechanisms devoted to specific aspects of number processing. The Analog Number System (ANS) has a reasonable representation of quantities up to about 4, and represents larger quantities on the basis of a numerical ratio between quantities. In order to represent the precise cardinality of a number, the ANS may be supported by external algorithms such as language, leading to a “Precise Number System”. In the setting of limited language, other number-related systems can appear. For example, the Parallel Individuation System (PIS) supports a “chunking mechanism” that clusters units of larger numerosities into smaller subsets. In the present study we investigated number processing in non-aphasic patients with Corticobasal Syndrome (CBS) and Posterior Cortical Atrophy (PCA), two neurodegenerative conditions that are associated with progressive parietal atrophy. The present study investigated these number systems in CBS and PCA by assessing the property of the ANS associated with smaller and larger numerosities, and the chunking property of the PIS. The results revealed that CBS/PCA patients are impaired in simple calculations (e.g., addition and subtraction) and that their performance strongly correlates with the size of the numbers involved in these calculations, revealing a clear magnitude effect. This magnitude effect correlated with gray matter atrophy in parietal regions. Moreover, a numeral-dots transcoding task showed that CBS/PCA patients are able to take advantage of clustering in the spatial distribution of the dots of the array. The relative advantage associated with chunking compared to a random spatial distribution correlated with both parietal and prefrontal regions. These results shed light on the properties of systems for representing number knowledge in non-aphasic patients with CBS and PCA. PMID:25278132

  2. Analysis of Uniform Random Numbers Generated by RANDU and URN Using Ten Different Seeds.

    DTIC Science & Technology

    The statistical properties of the numbers generated by two uniform random number generators, RANDU and URN, each using ten different seeds are...The testing is performed on a sequence of 50,000 numbers generated by each uniform random number generator using each of the ten seeds . (Author)
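    RANDU is the classic cautionary example in this area: the multiplicative congruential generator x_{k+1} = 65539·x_k mod 2^31. Because 65539 = 2^16 + 3, every output satisfies x_{k+2} = 6·x_{k+1} − 9·x_k (mod 2^31), which confines consecutive triples to only 15 planes in the unit cube. A few lines make the defect directly checkable:

    ```python
    def randu(seed, n):
        """RANDU: x_{k+1} = 65539 * x_k mod 2**31 (seed should be odd)."""
        x, out = seed, []
        for _ in range(n):
            x = (65539 * x) % (2 ** 31)
            out.append(x)
        return out

    xs = randu(1, 1000)
    # Every consecutive triple obeys the lattice identity that ruins RANDU:
    assert all(xs[k + 2] == (6 * xs[k + 1] - 9 * xs[k]) % 2 ** 31
               for k in range(len(xs) - 2))
    ```

    The identity follows from 65539² = 6·65539 − 9 (mod 2^31); statistical batteries of the kind applied in this report detect exactly such hidden linear structure.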

  3. Random bits, true and unbiased, from atmospheric turbulence

    PubMed Central

    Marangon, Davide G.; Vallone, Giuseppe; Villoresi, Paolo

    2014-01-01

    Random numbers are a fundamental ingredient for secure communications and numerical simulation, as well as for games and for information science in general. Physical processes with intrinsic unpredictability may be exploited to generate genuine random numbers. Optical propagation through strong atmospheric turbulence is exploited here for this purpose, by observing a laser beam after a 143 km free-space path. In addition, we developed an algorithm to extract the randomness of the beam images at the receiver without post-processing. The numbers passed very selective randomness tests for qualification as genuine random numbers. The extracting algorithm can be easily generalized to random images generated by different physical processes. PMID:24976499

  4. Lamellar cationic lipid-DNA complexes from lipids with a strong preference for planar geometry: A Minimal Electrostatic Model.

    PubMed

    Perico, Angelo; Manning, Gerald S

    2014-11-01

    We formulate and analyze a minimal model, based on condensation theory, of the lamellar cationic lipid (CL)-DNA complex of alternately charged lipid bilayers and DNA monolayers in a salt solution. Each lipid bilayer, composed of a random mixture of cationic and neutral lipids, is assumed to be a rigid uniformly charged plane. Each DNA monolayer, located between two lipid bilayers, is formed by the same number of parallel DNAs with a uniform separation distance. For the electrostatic calculation, the model lipoplex is collapsed to a single plane with charge density equal to the net lipid and DNA charge. The free energy difference between the lamellar lipoplex and a reference state of the same number of free lipid bilayers and free DNAs is calculated as a function of the fraction of CLs, of the ratio of the number of CL charges to the number of negative charges of the DNA phosphates, and of the total number of planes. At the isoelectric point the free energy difference is minimal. The complex formation, already favoured by the decrease of the electrostatic charging free energy, is driven further by the free energy gain due to the release of counterions from the DNAs and from the lipid bilayers, if strongly charged. This minimal model compares well with experiment for lipids having a strong preference for planar geometry and with major features of more detailed models of the lipoplex. © 2014 Wiley Periodicals, Inc.

  5. Real-time fast physical random number generator with a photonic integrated circuit.

    PubMed

    Ugajin, Kazusa; Terashima, Yuta; Iwakawa, Kento; Uchida, Atsushi; Harayama, Takahisa; Yoshimura, Kazuyuki; Inubushi, Masanobu

    2017-03-20

    Random number generators are essential for applications in information security and numerical simulations. Most optical-chaos-based random number generators produce random bit sequences by offline post-processing with large optical components. We demonstrate a real-time hardware implementation of a fast physical random number generator with a photonic integrated circuit and a field programmable gate array (FPGA) electronic board. We generate 1-Tbit random bit sequences and evaluate their statistical randomness using NIST Special Publication 800-22 and TestU01. All of the BigCrush tests in TestU01 are passed using 410-Gbit random bit sequences. A maximum real-time generation rate of 21.1 Gb/s is achieved for random bit sequences in binary format stored in a computer, which can be directly used for applications involving secret keys in cryptography and random seeds in large-scale numerical simulations.
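    The statistical batteries cited above (NIST SP 800-22 and TestU01) bundle many individual tests. As an illustration of the kind of check involved, here is a minimal sketch, not the paper's code, of the simplest test in the NIST suite, the frequency (monobit) test, which checks that 0s and 1s occur in roughly equal numbers:

    ```python
    import math

    def monobit_frequency_test(bits):
        """NIST SP 800-22 frequency (monobit) test: converts bits to +/-1,
        sums them, and compares the normalized sum to a half-normal
        distribution. Returns the p-value; p >= 0.01 is a conventional pass."""
        n = len(bits)
        s = sum(1 if b else -1 for b in bits)      # +1 for each 1, -1 for each 0
        s_obs = abs(s) / math.sqrt(n)
        return math.erfc(s_obs / math.sqrt(2))

    # A perfectly alternating sequence is exactly balanced, so it passes this
    # particular test (it would fail others in the suite, e.g. the runs test).
    p = monobit_frequency_test([i % 2 for i in range(1000)])
    ```

    Passing the full BigCrush battery, as the 410-Gbit sequences in this work do, is a far stronger statement: BigCrush applies on the order of a hundred such tests, many sensitive to long-range structure.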

  6. Big Data: A Parallel Particle Swarm Optimization-Back-Propagation Neural Network Algorithm Based on MapReduce.

    PubMed

    Cao, Jianfang; Cui, Hongyan; Shi, Hao; Jiao, Lijuan

    2016-01-01

    A back-propagation (BP) neural network can solve complicated random nonlinear mapping problems; therefore, it can be applied to a wide range of problems. However, as the sample size increases, the time required to train BP neural networks becomes lengthy, and classification accuracy decreases as well. To improve the classification accuracy and runtime efficiency of the BP neural network algorithm, we proposed a parallel design and realization method for a particle swarm optimization (PSO)-optimized BP neural network based on MapReduce on the Hadoop platform. The PSO algorithm was used to optimize the BP neural network's initial weights and thresholds, improving the accuracy of the classification algorithm. The MapReduce parallel programming model was utilized to achieve parallel processing of the BP algorithm, thereby addressing the hardware and communication overhead the BP neural network incurs on big data. Datasets on 5 different scales were constructed using the scene image library from the SUN Database. The classification accuracy of the parallel PSO-BP neural network algorithm is approximately 92%, and the system efficiency is approximately 0.85, which presents obvious advantages when processing big data. The algorithm proposed in this study demonstrated both higher classification accuracy and improved time efficiency, which represents a significant improvement obtained from applying parallel processing to an intelligent algorithm on big data.
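    The PSO step described above can be sketched on a single machine; this is an illustrative stand-in, not the paper's MapReduce implementation, and the loss function and parameter values are hypothetical placeholders for a network's training error as a function of its initial weights:

    ```python
    import random

    def pso_minimize(loss, dim, n_particles=20, iters=100,
                     w=0.7, c1=1.5, c2=1.5, bound=1.0):
        """Minimal particle swarm optimization: each particle remembers its
        personal best position, and the swarm shares a global best that
        attracts all particles."""
        pos = [[random.uniform(-bound, bound) for _ in range(dim)]
               for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest = [p[:] for p in pos]
        pbest_val = [loss(p) for p in pos]
        g = min(range(n_particles), key=lambda i: pbest_val[i])
        gbest, gbest_val = pbest[g][:], pbest_val[g]
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * r1 * (pbest[i][d] - pos[i][d])
                                 + c2 * r2 * (gbest[d] - pos[i][d]))
                    pos[i][d] += vel[i][d]
                v = loss(pos[i])
                if v < pbest_val[i]:
                    pbest[i], pbest_val[i] = pos[i][:], v
                    if v < gbest_val:
                        gbest, gbest_val = pos[i][:], v
        return gbest, gbest_val

    # Stand-in for a network's loss as a function of its initial weight vector.
    sphere = lambda x: sum(xi * xi for xi in x)
    best, val = pso_minimize(sphere, dim=5)
    ```

    In the paper's setting, each MapReduce mapper would evaluate particle fitness on a data shard; here everything runs in one process purely to show the update rule.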

  7. The Tera Multithreaded Architecture and Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Bokhari, Shahid H.; Mavriplis, Dimitri J.

    1998-01-01

    The Tera Multithreaded Architecture (MTA) is a new parallel supercomputer currently being installed at San Diego Supercomputing Center (SDSC). This machine has an architecture quite different from contemporary parallel machines. The computational processor is a custom design and the machine uses hardware to support very fine grained multithreading. The main memory is shared, hardware randomized and flat. These features make the machine highly suited to the execution of unstructured mesh problems, which are difficult to parallelize on other architectures. We report the results of a study carried out during July-August 1998 to evaluate the execution of EUL3D, a code that solves the Euler equations on an unstructured mesh, on the 2-processor Tera MTA at SDSC. Our investigation shows that parallelization of an unstructured code is extremely easy on the Tera. We were able to get an existing parallel code (designed for a shared-memory machine) running on the Tera by changing only the compiler directives. Furthermore, a serial version of this code was compiled to run in parallel on the Tera by judicious use of directives to invoke the "full/empty" tag bits of the machine to obtain synchronization. This version achieves 212 and 406 Mflop/s on one and two processors respectively, and requires no attention to partitioning or placement of data, issues that would be of paramount importance in other parallel architectures.

  8. Efficiency of parallel direct optimization

    NASA Technical Reports Server (NTRS)

    Janies, D. A.; Wheeler, W. C.

    2001-01-01

    Tremendous progress has been made at the level of sequential computation in phylogenetics. However, little attention has been paid to parallel computation. Parallel computing is particularly suited to phylogenetics because of the many ways large computational problems can be broken into parts that can be analyzed concurrently. In this paper, we investigate the scaling factors and efficiency of random addition and tree refinement strategies using the direct optimization software, POY, on a small (10 slave processors) and a large (256 slave processors) cluster of networked PCs running LINUX. These algorithms were tested on several data sets composed of DNA and morphology ranging from 40 to 500 taxa. Various algorithms in POY show fundamentally different properties within and between clusters. All algorithms are efficient on the small cluster for the 40-taxon data set. On the large cluster, multibuilding exhibits excellent parallel efficiency, whereas parallel building is inefficient. These results are independent of data set size. Branch swapping in parallel shows excellent speed-up for 16 slave processors on the large cluster. However, there is no appreciable speed-up for branch swapping with the further addition of slave processors (>16). This result is independent of data set size. Ratcheting in parallel is efficient with the addition of up to 32 processors in the large cluster. This result is independent of data set size. © 2001 The Willi Hennig Society.

  9. Parallel discrete event simulation: A shared memory approach

    NASA Technical Reports Server (NTRS)

    Reed, Daniel A.; Malony, Allen D.; Mccredie, Bradley D.

    1987-01-01

    With traditional event list techniques, evaluating a detailed discrete event simulation model can often require hours or even days of computation time. Parallel simulation mimics the interacting servers and queues of a real system by assigning each simulated entity to a processor. By eliminating the event list and maintaining only sufficient synchronization to ensure causality, parallel simulation can potentially provide speedups that are linear in the number of processors. A set of shared memory experiments is presented using the Chandy-Misra distributed simulation algorithm to simulate networks of queues. Parameters include queueing network topology and routing probabilities, number of processors, and assignment of network nodes to processors. These experiments show that Chandy-Misra distributed simulation is a questionable alternative to sequential simulation of most queueing network models.
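    The causality rule at the heart of the Chandy-Misra algorithm can be sketched briefly: a logical process may safely consume the minimum-timestamp message only when every input channel holds at least one message (possibly a "null" message carrying only a timestamp guarantee). The following is a simplified, hypothetical illustration of that rule, not the paper's implementation:

    ```python
    from collections import deque

    class LP:
        """Logical process with one FIFO channel per predecessor. It may
        advance only when all input channels are non-empty, so the message
        it consumes is provably the globally earliest it will ever see."""
        def __init__(self, name, inputs):
            self.name = name
            self.channels = {src: deque() for src in inputs}
            self.clock = 0.0
            self.log = []

        def receive(self, src, timestamp, payload):
            self.channels[src].append((timestamp, payload))

        def can_advance(self):
            return all(self.channels.values()) if self.channels else False

        def step(self):
            # Consume from the channel whose head has the smallest timestamp.
            src = min(self.channels, key=lambda s: self.channels[s][0][0])
            t, payload = self.channels[src].popleft()
            assert t >= self.clock          # causality is never violated
            self.clock = t
            self.log.append((t, src, payload))
            return t, payload

    # Two producers feed one server; the 'null' message from B promises that
    # B sends nothing before t=2.5, letting the server process A's jobs.
    server = LP("server", inputs=["A", "B"])
    server.receive("A", 1.0, "job")
    server.receive("B", 2.5, "null")
    server.receive("A", 2.0, "job")
    while server.can_advance():
        server.step()
    ```

    The blocking in `can_advance` is exactly where deadlock can arise in the full algorithm; null messages (or deadlock detection and recovery) are the standard remedies.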

  10. Parallelization of MRCI based on hole-particle symmetry.

    PubMed

    Suo, Bing; Zhai, Gaohong; Wang, Yubin; Wen, Zhenyi; Hu, Xiangqian; Li, Lemin

    2005-01-15

    The parallel implementation of a multireference configuration interaction (MRCI) program based on hole-particle symmetry is described. The platform used to implement the parallelization is an Intel-architecture cluster consisting of 12 nodes, each of which is equipped with two 2.4-GHz XEON processors, 3 GB of memory, and a 36-GB disk, and is connected by a Gigabit Ethernet switch. The dependence of speedup on molecular symmetries and task granularities is discussed. Test calculations show that the speedup when the number of nodes is doubled is about 1.9 (for C1 and Cs), 1.65 (for C2v), and 1.55 (for D2h). The largest calculation performed on this cluster involves 5.6 x 10(8) CSFs.

  11. Statistical power in parallel group point exposure studies with time-to-event outcomes: an empirical comparison of the performance of randomized controlled trials and the inverse probability of treatment weighting (IPTW) approach.

    PubMed

    Austin, Peter C; Schuster, Tibor; Platt, Robert W

    2015-10-15

    Estimating statistical power is an important component of the design of both randomized controlled trials (RCTs) and observational studies. Methods for estimating statistical power in RCTs have been well described and can be implemented simply. In observational studies, statistical methods must be used to remove the effects of confounding that can occur due to non-random treatment assignment. Inverse probability of treatment weighting (IPTW) using the propensity score is an attractive method for estimating the effects of treatment using observational data. However, sample size and power calculations have not been adequately described for these methods. We used an extensive series of Monte Carlo simulations to compare the statistical power of an IPTW analysis of an observational study with time-to-event outcomes with that of an analysis of a similarly-structured RCT. We examined the impact of four factors on the statistical power function: number of observed events, prevalence of treatment, the marginal hazard ratio, and the strength of the treatment-selection process. We found that, on average, an IPTW analysis had lower statistical power compared to an analysis of a similarly-structured RCT. The difference in statistical power increased as the magnitude of the treatment-selection model increased. The statistical power of an IPTW analysis tended to be lower than the statistical power of a similarly-structured RCT.
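    The IPTW construction compared above is simple to state: treated subjects are weighted by the inverse of their propensity score, controls by the inverse of one minus it, creating a pseudo-population in which treatment is independent of measured confounders. A toy sketch with illustrative, made-up data (not from the study):

    ```python
    def iptw_weights(treated, propensity):
        """Inverse probability of treatment weights: 1/e(x) for treated
        subjects and 1/(1 - e(x)) for controls, where e(x) is the
        propensity score (probability of treatment given covariates)."""
        return [1.0 / p if t else 1.0 / (1.0 - p)
                for t, p in zip(treated, propensity)]

    def weighted_mean(values, weights):
        return sum(v * w for v, w in zip(values, weights)) / sum(weights)

    # Toy cohort: treatment was assigned preferentially to high-propensity
    # subjects, so a naive contrast is confounded; weighting rebalances it.
    treated    = [1, 1, 1, 0, 0, 0]
    propensity = [0.8, 0.8, 0.2, 0.8, 0.2, 0.2]
    outcome    = [3.0, 3.2, 1.1, 2.9, 1.0, 0.9]

    w = iptw_weights(treated, propensity)
    effect = (weighted_mean([y for y, t in zip(outcome, treated) if t],
                            [wi for wi, t in zip(w, treated) if t])
              - weighted_mean([y for y, t in zip(outcome, treated) if not t],
                              [wi for wi, t in zip(w, treated) if not t]))
    ```

    In the time-to-event setting of the study, these weights would enter a weighted Cox model rather than a weighted mean difference; the weighting step itself is unchanged.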

  12. Effect of omega-3 and ascorbic acid on inflammation markers in depressed shift workers in Shahid Tondgoyan Oil Refinery, Iran: a randomized double-blind placebo-controlled study

    PubMed Central

    Khajehnasiri, Farahnaz; Mortazavi, Seyed Bagher; Allameh, Abdolamir; Akhondzadeh, Shahin

    2013-01-01

    The present study aimed to assess the effect of supplementation of omega-3 and/or vitamin C on serum interleukin-6 and high sensitivity C-reactive protein concentration and depression scores among shift workers in Shahid Tondgoyan oil refinery. The study design was a randomized, double-blind, placebo-controlled, parallel trial. In total, 136 shift workers with a depression score ≥10 on the 21-item Beck Depression Rating Scale were randomly assigned to receive omega-3 (180 mg eicosapentaenoic acid and 120 mg docosahexaenoic acid) and/or vitamin C 250 mg or placebo twice daily (with the same taste and shape as omega-3 and vitamin C) for 60 days in four groups. Depression score, interleukin-6 and high sensitivity C-reactive protein were measured at baseline and after 60 days. This study showed that supplementation of omega-3 plus vitamin C is associated with a decrease in depression score (p<0.05). Supplementation of omega-3 without vitamin C is associated with a reduction in depression score (p<0.0001) and high sensitivity C-reactive protein concentration (p<0.01). Therefore, omega-3 supplementation showed a better effect on reducing depression score and high sensitivity C-reactive protein, but supplementation of vitamin C along with omega-3 did not have a significant effect on change in C-reactive protein level compared to omega-3 alone. (Registration number: IRCT201202189056N1) PMID:23874068

  13. Efficacy and safety of electroacupuncture with different acupoints for chemotherapy-induced nausea and vomiting: study protocol for a randomized controlled trial.

    PubMed

    Chen, Bo; Hu, Shu-xiang; Liu, Bao-hu; Zhao, Tian-yi; Li, Bo; Liu, Yan; Li, Ming-yue; Pan, Xing-fang; Guo, Yong-ming; Chen, Ze-lin; Guo, Yi

    2015-05-12

    Many patients experience nausea and vomiting during chemotherapy treatment. Evidence demonstrates that electroacupuncture is beneficial for controlling chemotherapy-induced nausea and vomiting (CINV). However, the acupoint or acupoint combination with the best efficacy for controlling CINV remains unidentified. This study consists of a randomized controlled trial (RCT) with four parallel arms: a control group and three electroacupuncture groups (one with Neiguan (PC6), one with Zhongwan (CV12), and one with both PC6 and CV12). The control group received the standard antiemetic only, while the other three groups received electroacupuncture stimulation at the different acupoints in addition to the standard antiemetic. The intervention is done once daily from the first day (day 1) to the fourth day (day 4) of chemotherapy treatment. The primary outcome measures include frequency of nausea, vomiting and retching. The secondary outcome measures are the grade of constipation and diarrhea, electrogastrogram, assessment of quality of life, assessment of anxiety and depression, and other adverse effects during the chemotherapy. Assessments are scheduled from one day pre-chemotherapy (day 0) to the fifth day of chemotherapy (day 5). Follow-ups are done from day 6 to day 21. The aim of this study is to evaluate the efficacy and safety of electroacupuncture with different acupoints in the management of CINV. The registration number of the randomized controlled trial is NCT02195908. The date of registration was 21 July 2014.

  14. Comparison of acarbose and voglibose in diabetes patients who are inadequately controlled with basal insulin treatment: randomized, parallel, open-label, active-controlled study.

    PubMed

    Lee, Mi Young; Choi, Dong Seop; Lee, Moon Kyu; Lee, Hyoung Woo; Park, Tae Sun; Kim, Doo Man; Chung, Choon Hee; Kim, Duk Kyu; Kim, In Joo; Jang, Hak Chul; Park, Yong Soo; Kwon, Hyuk Sang; Lee, Seung Hun; Shin, Hee Kang

    2014-01-01

    We studied the efficacy and safety of acarbose in comparison with voglibose in type 2 diabetes patients whose blood glucose levels were inadequately controlled with basal insulin alone or in combination with metformin (or a sulfonylurea). This study was a 24-week prospective, open-label, randomized, active-controlled multi-center study. Participants were randomized to receive either acarbose (n=59, 300 mg/day) or voglibose (n=62, 0.9 mg/day). The mean HbA1c at week 24 was significantly decreased by approximately 0.7% from baseline in both the acarbose (from 8.43% ± 0.71% to 7.71% ± 0.93%) and voglibose groups (from 8.38% ± 0.73% to 7.68% ± 0.94%). The mean fasting plasma glucose level and self-monitoring of blood glucose data from 1 hr before and after each meal were significantly decreased at week 24 in comparison to baseline in both groups. The levels 1 hr after dinner at week 24 were significantly decreased in the acarbose group (from 233.54 ± 69.38 to 176.80 ± 46.63 mg/dL) compared with the voglibose group (from 224.18 ± 70.07 to 193.01 ± 55.39 mg/dL). In conclusion, both acarbose and voglibose are efficacious and safe in patients with type 2 diabetes who are inadequately controlled with basal insulin. (ClinicalTrials.gov number, NCT00970528).

  15. Comparison of Acarbose and Voglibose in Diabetes Patients Who Are Inadequately Controlled with Basal Insulin Treatment: Randomized, Parallel, Open-Label, Active-Controlled Study

    PubMed Central

    Lee, Mi Young; Lee, Moon Kyu; Lee, Hyoung Woo; Park, Tae Sun; Kim, Doo Man; Chung, Choon Hee; Kim, Duk Kyu; Kim, In Joo; Jang, Hak Chul; Park, Yong Soo; Kwon, Hyuk Sang; Lee, Seung Hun; Shin, Hee Kang

    2014-01-01

    We studied the efficacy and safety of acarbose in comparison with voglibose in type 2 diabetes patients whose blood glucose levels were inadequately controlled with basal insulin alone or in combination with metformin (or a sulfonylurea). This study was a 24-week prospective, open-label, randomized, active-controlled multi-center study. Participants were randomized to receive either acarbose (n=59, 300 mg/day) or voglibose (n=62, 0.9 mg/day). The mean HbA1c at week 24 was significantly decreased by approximately 0.7% from baseline in both the acarbose (from 8.43% ± 0.71% to 7.71% ± 0.93%) and voglibose groups (from 8.38% ± 0.73% to 7.68% ± 0.94%). The mean fasting plasma glucose level and self-monitoring of blood glucose data from 1 hr before and after each meal were significantly decreased at week 24 in comparison to baseline in both groups. The levels 1 hr after dinner at week 24 were significantly decreased in the acarbose group (from 233.54 ± 69.38 to 176.80 ± 46.63 mg/dL) compared with the voglibose group (from 224.18 ± 70.07 to 193.01 ± 55.39 mg/dL). In conclusion, both acarbose and voglibose are efficacious and safe in patients with type 2 diabetes who are inadequately controlled with basal insulin. (ClinicalTrials.gov number, NCT00970528) PMID:24431911

  16. Effect of Herbal and Fluoride Mouth Rinses on Streptococcus mutans and Dental Caries among 12–15-Year-Old School Children: A Randomized Controlled Trial

    PubMed Central

    Shenoy Panchmal, Ganesh; Kumar, Vijaya; Jodalli, Praveen S.; Sonde, Laxminarayan

    2017-01-01

    To assess and compare the effect of herbal and fluoride mouth rinses on Streptococcus mutans count and glucan synthesis by Streptococcus mutans and dental caries, a parallel group placebo controlled randomized trial was conducted among 240 schoolchildren (12–15 years old). Participants were randomly divided and allocated into Group I (0.2% fluoride group), Group II (herbal group), and Group III (placebo group). All received 10 ml of respective mouth rinses every fortnight for a period of one year. Intergroup and intragroup comparison were done for Streptococcus mutans count and glucan synthesis by Streptococcus mutans and dental caries. Streptococcus mutans count showed a statistically significant difference between Group I and Group III (p = 0.035) and also between Group II and Group III (p = 0.039). Glucan concentration levels showed a statistically significant difference (p = 0.024) between Group II and Group III at 12th month. Mean DMF scores showed no statistical difference between the three groups (p = 0.139). No difference in the level of significance was seen in the intention-to-treat and per-protocol analysis. The present study showed that both herbal and fluoride mouth rinses, when used fortnightly, were equally effective and could be recommended for use in school-based health education program to control dental caries. Trial registration number is CTRI/2015/08/006070. PMID:28352285

  17. Protection of xenon against postoperative oxygen impairment in adults undergoing Stanford Type-A acute aortic dissection surgery: Study protocol for a prospective, randomized controlled clinical trial.

    PubMed

    Jin, Mu; Cheng, Yi; Yang, Yanwei; Pan, Xudong; Lu, Jiakai; Cheng, Weiping

    2017-08-01

    The available evidence shows that hypoxemia after Stanford Type-A acute aortic dissection (AAD) surgery is a frequent cause of several adverse consequences. The pathogenesis of postoperative hypoxemia after AAD surgery is complex, and ischemia/reperfusion and inflammation are likely to be underlying risk factors. Xenon, recognized as an ideal anesthetic and anti-inflammatory treatment, might be a possible treatment for these adverse effects. The trial is a prospective, double-blind, four-group, parallel, randomized controlled, single-center clinical trial. We will recruit 160 adult patients undergoing Stanford type-A AAD surgery. Patients will be allocated a study number and will be randomized on a 1:1:1:1 basis to receive 1 of the 3 treatment options (pulmonary static inflation with 50% xenon, 75% xenon, or 100% xenon) or no treatment (control group, pulmonary static inflation with 50% nitrogen). The aims of this study are to clarify the lung protection capability of xenon and its possible mechanisms in patients undergoing Stanford type-A AAD surgery. This trial uses an innovative design to account for the xenon effects on postoperative oxygen impairment, and it also delineates the mechanism for any benefit from xenon. The investigational xenon group is considered a treatment intervention, as it includes 3 groups of pulmonary static inflation with 50%, 75%, and 100% xenon. It is suggested that future trials might define an appropriate concentration of xenon for the best practice intervention.

  18. Fluid dynamics during Random Positioning Machine micro-gravity experiments

    NASA Astrophysics Data System (ADS)

    Leguy, Carole A. D.; Delfos, René; Pourquie, Mathieu J. B. M.; Poelma, Christian; Westerweel, Jerry; van Loon, Jack J. W. A.

    2017-06-01

    A Random Positioning Machine (RPM) is a device used to study the role of gravity on biological systems. This is accomplished through continuous reorientation of the sample such that the net influence of gravity is randomized over time. The aim of this study is to predict fluid flow behavior during such RPM simulated microgravity studies, which may explain differences found between RPM and space flight experiments. An analytical solution is given for a cylinder as a model for an experimental container. Then, a dual-axis rotating frame is used to mimic the motion characteristics of an RPM with sinusoidal rotation frequencies of 0.2 Hz and 0.1 Hz, while Particle Image Velocimetry is used to measure the velocity field inside a flask. To reproduce the same experiment numerically, a Direct Numerical Simulation model is used. The analytical model predicts that an increase in the Womersley number leads to higher shear stresses at the cylinder wall and a decrease in fluid angular velocity inside the cylinder. The experimental results show that periodic single-axis rotation induces a fluid motion parallel to the wall and that a complex flow is observed for two-axis rotation, with a maximum wall shear stress of 8.0 mPa (80 mdyn/cm²). The experimental and numerical results show that oscillatory motion inside an RPM induces flow motion that can, depending on the experimental samples, reduce the quality of the simulated microgravity. Thus, it is crucial to determine the appropriate oscillatory frequency of the axes to design biological experiments.
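    For reference, the Womersley number invoked by the analytical model is the standard dimensionless ratio of oscillatory inertial to viscous forces; for a cylinder of radius R rotated at angular frequency ω, with fluid density ρ and dynamic viscosity μ:

    ```latex
    \alpha = R \sqrt{\frac{\omega \rho}{\mu}}
    ```

    Large α confines the oscillatory motion to a thin layer near the wall, so the core fluid lags the rotating wall, consistent with the model's prediction of higher wall shear stress and lower interior angular velocity.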

  19. Verification of recursive probabilistic integration (RPI) method for fatigue life management using non-destructive inspections

    NASA Astrophysics Data System (ADS)

    Chen, Tzikang J.; Shiao, Michael

    2016-04-01

    This paper verified a generic and efficient assessment concept for probabilistic fatigue life management. The concept is developed based on an integration of damage tolerance methodology, simulation methods [1, 2], and a probabilistic algorithm, RPI (recursive probability integration) [3-9], considering maintenance for damage-tolerance and risk-based fatigue life management. RPI is an efficient semi-analytical probabilistic method for risk assessment subject to various uncertainties such as the variability in material properties including crack growth rate, initial flaw size, and repair quality, random process modeling of flight loads for failure analysis, and inspection reliability represented by probability of detection (POD). In addition, unlike traditional Monte Carlo simulation (MCS), which requires a rerun of MCS when the maintenance plan is changed, RPI can repeatedly use a small set of baseline random crack growth histories, excluding maintenance-related parameters, from a single MCS for various maintenance plans. In order to fully appreciate the RPI method, a verification procedure was performed. In this study, MC simulations on the order of several hundred billion trials were conducted for various flight conditions, material properties, inspection scheduling, POD, and repair/replacement strategies. Since MC simulation is time-consuming, the simulations were conducted in parallel on DoD High Performance Computers (HPC) using a specialized random number generator for parallel computing. The study has shown that the RPI method is several orders of magnitude more efficient than traditional Monte Carlo simulation.
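    A common way to give each parallel Monte Carlo worker an independent, reproducible random stream is to spawn child seeds from a single master seed. This sketch shows the general technique (here via NumPy's `SeedSequence`, not the specialized generator used in the study); the per-worker sampling job is a hypothetical placeholder:

    ```python
    import numpy as np

    def independent_streams(master_seed, n_workers):
        """Spawn statistically independent generators for parallel Monte
        Carlo: each worker gets its own stream, so runs are reproducible
        and streams do not overlap regardless of how much each draws."""
        children = np.random.SeedSequence(master_seed).spawn(n_workers)
        return [np.random.default_rng(c) for c in children]

    # Hypothetical per-worker job: sample lognormal initial flaw sizes,
    # one batch per parallel worker, each from its own stream.
    streams = independent_streams(master_seed=42, n_workers=4)
    samples = [rng.lognormal(mean=-2.0, sigma=0.5, size=3) for rng in streams]
    ```

    Stream independence is what makes the billions of crack-growth histories statistically valid when split across HPC nodes; naive per-worker seeding (e.g. seed = worker rank) risks correlated or overlapping sequences.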

  20. Plain packaging of cigarettes and smoking behavior: study protocol for a randomized controlled study.

    PubMed

    Maynard, Olivia M; Leonards, Ute; Attwood, Angela S; Bauld, Linda; Hogarth, Lee; Munafò, Marcus R

    2014-06-25

    Previous research on the effects of plain packaging has largely relied on self-report measures. Here we describe the protocol of a randomized controlled trial investigating the effect of the plain packaging of cigarettes on smoking behavior in a real-world setting. In a parallel group randomization design, 128 daily cigarette smokers (50% male, 50% female) will attend an initial screening session and be assigned plain or branded packs of cigarettes to smoke for a full day. Plain packs will be those currently used in Australia where plain packaging has been introduced, while branded packs will be those currently used in the United Kingdom. Our primary study outcomes will be smoking behavior (self-reported number of cigarettes smoked and volume of smoke inhaled per cigarette as measured using a smoking topography device). Secondary outcomes measured pre- and post-intervention will be smoking urges, motivation to quit smoking, and perceived taste of the cigarettes. Secondary outcomes measured post-intervention only will be experience of smoking from the cigarette pack, overall experience of smoking, attributes of the cigarette pack, perceptions of the on-packet health warnings, behavior changes, views on plain packaging, and the rewarding value of smoking. Sex differences will be explored for all analyses. This study is novel in its approach to assessing the impact of plain packaging on actual smoking behavior. This research will help inform policymakers about the effectiveness of plain packaging as a tobacco control measure. Current Controlled Trials ISRCTN52982308 (registered 27 June 2013).

  1. Low-Energy Truly Random Number Generation with Superparamagnetic Tunnel Junctions for Unconventional Computing

    NASA Astrophysics Data System (ADS)

    Vodenicarevic, D.; Locatelli, N.; Mizrahi, A.; Friedman, J. S.; Vincent, A. F.; Romera, M.; Fukushima, A.; Yakushiji, K.; Kubota, H.; Yuasa, S.; Tiwari, S.; Grollier, J.; Querlioz, D.

    2017-11-01

    Low-energy random number generation is critical for many emerging computing schemes proposed to complement or replace von Neumann architectures. However, current random number generators are always associated with an energy cost that is prohibitive for these computing schemes. We introduce random number bit generation based on specific nanodevices: superparamagnetic tunnel junctions. We experimentally demonstrate high-quality random bit generation that represents an orders-of-magnitude improvement in energy efficiency over current solutions. We show that the random generation speed improves with nanodevice scaling, and we investigate the impact of temperature, magnetic field, and cross talk. Finally, we show how alternative computing schemes can be implemented using superparamagnetic tunnel junctions as random number generators. These results open the way for fabricating efficient hardware computing devices leveraging stochasticity, and they highlight an alternative use for emerging nanodevices.

  2. Global Load Balancing with Parallel Mesh Adaption on Distributed-Memory Systems

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Oliker, Leonid; Sohn, Andrew

    1996-01-01

    Dynamic mesh adaptation on unstructured grids is a powerful tool for efficiently computing unsteady problems to resolve solution features of interest. Unfortunately, this causes load imbalances among processors on a parallel machine. This paper describes the parallel implementation of a tetrahedral mesh adaption scheme and a new global load balancing method. A heuristic remapping algorithm is presented that assigns partitions to processors such that the redistribution cost is minimized. Results indicate that the parallel performance of the mesh adaption code depends on the nature of the adaption region and show a 35.5X speedup on 64 processors of an SP2 when 35 percent of the mesh is randomly adapted. For large scale scientific computations, our load balancing strategy gives an almost sixfold reduction in solver execution times over non-balanced loads. Furthermore, our heuristic remapper yields processor assignments that are within 3 percent of the optimal solution, but requires only 1 percent of the computational time.

  3. Fencing direct memory access data transfers in a parallel active messaging interface of a parallel computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blocksome, Michael A.; Mamidala, Amith R.

    2013-09-03

    Fencing direct memory access (`DMA`) data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to segments of shared random access memory through which the DMA controllers deliver data communications deterministically, including initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and a segment of shared memory; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints.

  4. Hypercube Expert System Shell - Applying Production Parallelism.

    DTIC Science & Technology

    1989-12-01

    possible processor organizations, or interconnection methods, for parallel architectures. The following are examples of commonly used interconnection...this timing analysis because match speed-up available from production parallelism is proportional to the average number of affected productions

  5. [Parodontocid efficiency in complex treatment and prevention of gingivitis].

    PubMed

    Makeeva, I M; Turkina, A Iu; Poliakova, M A; Babina, K S

    2013-01-01

    The antiplaque/antigingivitis effect of an alcohol-free mouthrinse, Parodontocid, was evaluated in a randomized parallel-group clinical trial. Sixty patients with gingivitis were clinically examined to determine PHP, RMNPI, and PMA indexes. After professional dental prophylaxis, subjects were randomly assigned to two groups for a 10-day oral hygiene program. Group 1 patients used only a toothbrush and prophylactic toothpaste, while group 2 used Parodontocid in conjunction with normal brushing and flossing. Parodontocid significantly reduced plaque and gingivitis compared to the negative control.

  6. Systematic Review of Integrative Health Care Research: Randomized Control Trials, Clinical Controlled Trials, and Meta-Analysis

    DTIC Science & Technology

    2010-01-01

    to usual care (control). Also, in the pilot study of the 4 individual Noetic therapies, off-site prayer was associated with the lowest absolute...mortality in-hospital and at 6 months [16]. The parallel randomization to 4 different Noetic therapies across 5 study arms limited the assessment of...interventional cardiac care: the Monitoring and Actualisation of Noetic Trainings (MANTRA) II randomised study," Lancet, vol. 366, no. 9481, pp. 211–217, 2005. [18

  7. Quantum random number generation for loophole-free Bell tests

    NASA Astrophysics Data System (ADS)

    Mitchell, Morgan; Abellan, Carlos; Amaya, Waldimar

    2015-05-01

    We describe the generation of quantum random numbers at multi-Gbps rates, combined with real-time randomness extraction, to give very high purity random numbers based on quantum events at most tens of ns in the past. The system satisfies the stringent requirements of quantum non-locality tests that aim to close the timing loophole. We describe the generation mechanism using spontaneous-emission-driven phase diffusion in a semiconductor laser, digitization, and extraction by parity calculation using multi-GHz logic chips. We pay special attention to experimental proof of the quality of the random numbers and analysis of the randomness extraction. In contrast to widely-used models of randomness generators in the computer science literature, we argue that randomness generation by spontaneous emission can be extracted from a single source.
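    Parity-based extraction, as described above, XORs blocks of raw bits into single output bits; by the piling-up lemma, a per-bit bias ε shrinks to 2^(k-1) ε^k for blocks of k bits, at the cost of a k-fold rate reduction. A software sketch of the idea (the system described here does this in multi-GHz logic chips, not software):

    ```python
    import random
    from functools import reduce
    from operator import xor

    def parity_extract(raw_bits, k):
        """Parity extraction: XOR each disjoint block of k raw bits into
        one output bit, reducing any fixed per-bit bias at the cost of
        a k-fold reduction in output rate."""
        n = len(raw_bits) // k
        return [reduce(xor, raw_bits[i * k:(i + 1) * k]) for i in range(n)]

    # A biased source (~70% ones) moves much closer to balanced after
    # 4-bit parity extraction: bias 0.2 -> about 2**3 * 0.2**4 = 0.0128.
    random.seed(1)
    raw = [1 if random.random() < 0.7 else 0 for _ in range(40000)]
    out = parity_extract(raw, 4)
    ```

    This simple extractor assumes independent raw bits; real quantum RNG designs, including the one above, back that assumption with a physical model of the entropy source and dedicated analysis of the extraction.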

  8. Quantum Random Number Generation Using a Quanta Image Sensor

    PubMed Central

    Amri, Emna; Felk, Yacine; Stucki, Damien; Ma, Jiaju; Fossum, Eric R.

    2016-01-01

    A new quantum random number generation method is proposed. The method is based on the randomness of the photon emission process and the single photon counting capability of the Quanta Image Sensor (QIS). It has the potential to generate high-quality random numbers with remarkable data output rate. In this paper, the principle of photon statistics and theory of entropy are discussed. Sample data were collected with QIS jot device, and its randomness quality was analyzed. The randomness assessment method and results are discussed. PMID:27367698
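    One common figure of merit in randomness-quality assessments of the kind mentioned above is min-entropy, which is determined by the source's most probable outcome: H_min = -log2(max_i p_i). The empirical estimator below is a generic illustration, not the paper's assessment procedure.

    ```python
    # Empirical min-entropy estimate, a standard randomness-quality metric:
    # H_min = -log2(p_max), where p_max is the most frequent symbol's
    # observed probability. A uniform byte source would approach 8 bits/symbol.
    import math
    from collections import Counter

    def min_entropy_per_symbol(samples):
        """Estimate min-entropy in bits per symbol from observed frequencies."""
        counts = Counter(samples)
        p_max = max(counts.values()) / len(samples)
        return -math.log2(p_max)

    print(min_entropy_per_symbol([0, 1, 2, 3]))  # -> 2.0 (four equiprobable symbols)
    ```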

  9. Parallel closure theory for toroidally confined plasmas

    NASA Astrophysics Data System (ADS)

    Ji, Jeong-Young; Held, Eric D.

    2017-10-01

    We solve a system of general moment equations to obtain parallel closures for electrons and ions in an axisymmetric toroidal magnetic field. Magnetic field gradient terms are kept and treated using the Fourier series method. Assuming lowest-order density (pressure) and temperature to be flux labels, the parallel heat flow, friction, and viscosity are expressed in terms of radial gradients of the lowest-order temperature and pressure, parallel gradients of temperature and parallel flow, and the relative electron-ion parallel flow velocity. Convergence of closure quantities is demonstrated as the numbers of moments and Fourier modes are increased. Properties of the moment equations in the collisionless limit are also discussed. Combining the closures with fluid equations, parallel mass flow and electric current are also obtained. Work in collaboration with the PSI Center and supported by the U.S. DOE under Grant Nos. DE-SC0014033, DE-SC0016256, and DE-FG02-04ER54746.

  10. Parallel rendering

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.

  11. Parallel computing for probabilistic fatigue analysis

    NASA Technical Reports Server (NTRS)

    Sues, Robert H.; Lua, Yuan J.; Smith, Mark D.

    1993-01-01

    This paper presents the results of Phase I research to investigate the most effective parallel processing software strategies and hardware configurations for probabilistic structural analysis. We investigate the efficiency of both shared- and distributed-memory architectures via a probabilistic fatigue life analysis problem. We also present a parallel programming approach, the virtual shared-memory paradigm, that is applicable across both types of hardware. Using this approach, problems can be solved on a variety of parallel configurations, including networks of single- or multiprocessor workstations. We conclude that it is possible to effectively parallelize probabilistic fatigue analysis codes; however, special strategies will be needed to achieve large-scale parallelism, to keep a large number of processors busy, and to treat problems with the large memory requirements encountered in practice. We also conclude that distributed-memory architectures are preferable to shared-memory architectures for achieving large-scale parallelism; however, in the future, the currently emerging hybrid-memory architectures will likely be optimal.
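    Probabilistic fatigue analysis of this kind is embarrassingly parallel: many independent Monte Carlo samples of uncertain inputs, each mapped to a fatigue life. The sketch below is a generic illustration of that structure, not the paper's code; the Basquin-type life model and all its parameters are hypothetical, and a production run would use processes or MPI ranks rather than threads.

    ```python
    # Illustrative sketch of parallel Monte Carlo fatigue analysis.
    # Hypothetical Basquin-type model: life = C * stress**b, with the
    # stress drawn from an assumed scatter distribution per sample.
    import random
    from concurrent.futures import ThreadPoolExecutor

    def fatigue_life(seed):
        rng = random.Random(seed)            # independent stream per task
        stress = rng.gauss(300.0, 30.0)      # MPa, hypothetical scatter
        coeff, exponent = 1e12, -3.0         # hypothetical Basquin constants
        return coeff * stress ** exponent    # cycles to failure

    # Samples are independent, so they distribute over workers trivially.
    with ThreadPoolExecutor(max_workers=4) as pool:
        lives = list(pool.map(fatigue_life, range(1000)))

    prob_failure = sum(life < 3.0e4 for life in lives) / len(lives)
    print(f"estimated P(life < 3e4 cycles) = {prob_failure:.3f}")
    ```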

  12. Linearly exact parallel closures for slab geometry

    NASA Astrophysics Data System (ADS)

    Ji, Jeong-Young; Held, Eric D.; Jhang, Hogun

    2013-08-01

    Parallel closures are obtained by solving a linearized kinetic equation with a model collision operator using the Fourier transform method. The closures expressed in wave number space are exact for time-dependent linear problems to within the limits of the model collision operator. In the adiabatic, collisionless limit, an inverse Fourier transform is performed to obtain integral (nonlocal) parallel closures in real space; parallel heat flow and viscosity closures for density, temperature, and flow velocity equations replace Braginskii's parallel closure relations, and parallel flow velocity and heat flow closures for density and temperature equations replace Spitzer's parallel transport relations. It is verified that the closures reproduce the exact linear response function of Hammett and Perkins [Phys. Rev. Lett. 64, 3019 (1990)] for Landau damping given a temperature gradient. In contrast to their approximate closures where the vanishing viscosity coefficient numerically gives an exact response, our closures relate the heat flow and nonvanishing viscosity to temperature and flow velocity (gradients).

  13. Short-term and practice effects of metronome pacing in Parkinson's disease patients with gait freezing while in the 'on' state: randomized single blind evaluation.

    PubMed

    Cubo, Esther; Leurgans, Sue; Goetz, Christopher G

    2004-12-01

    In a randomized single blind parallel study, we tested the efficacy of an auditory metronome on walking speed and freezing in Parkinson's disease (PD) patients with freezing gait impairment during their 'on' function. No pharmacological treatment is effective in managing 'on' freezing in PD. Like visual cues that can help overcome freezing, rhythmic auditory pacing may provide cues that help normalize walking pace and overcome freezing. Non-demented PD patients with freezing during their 'on' state walked under two conditions, in randomized order: unassisted walking and walking with the use of an audiocassette with a metronome recording. The walking trials were randomized and gait variables were rated from videotapes by a blinded evaluator. Outcome measures were total walking time (total trial time minus total freezing time), which was considered the time over a course of specified length, freezing time, average freeze duration, and number of freezes. All outcomes were averaged across trials for each person and then compared across conditions using Signed Rank tests. Twelve non-demented PD patients with a mean age of 65.8 +/- 11.2 years and mean PD duration of 12.4 +/- 7.3 years were included. The use of the metronome slowed ambulation and increased the total walking time (P < 0.0005) only during the first visit, without affecting any freezing variable. In the nine patients who took the metronome recording home and used it daily for 1 week while walking, freezing remained unimproved. Though advocated in prior publications as a walking aid for PD patients, auditory metronome pacing slows walking and is not a beneficial intervention for freezing during 'on' periods.

  14. Vaginal prolapse repair with or without a midurethral sling in women with genital prolapse and occult stress urinary incontinence: a randomized trial.

    PubMed

    van der Ploeg, J Marinus; Oude Rengerink, Katrien; van der Steen, Annemarie; van Leeuwen, Jules H Schagen; van der Vaart, C Huub; Roovers, Jan-Paul W R

    2016-07-01

    We compared pelvic organ prolapse (POP) repair with and without midurethral sling (MUS) in women with occult stress urinary incontinence (SUI). This was a randomized trial conducted by a consortium of 13 teaching hospitals assessing a parallel cohort of continent women with symptomatic stage II or greater POP. Women with occult SUI were randomly assigned to vaginal prolapse repair with or without MUS. Women without occult SUI received POP surgery. Main outcomes were the absence of SUI at the 12-month follow-up based on the Urogenital Distress Inventory and the need for additional treatment for SUI. We evaluated 231 women, of whom 91 were randomized as follows: 43 to POP surgery with MUS and 47 without. A greater number of women in the MUS group reported absence of SUI [86 % vs. 48 %; relative risk (RR) 1.79; 95 % confidence interval (CI) 1.29-2.48]. No women in the MUS group received additional treatment for postoperative SUI; six (13 %) in the control group had a secondary MUS. Women with occult SUI reported more urinary symptoms after POP surgery and more often underwent treatment for postoperative SUI than women without occult SUI. Women with occult SUI had a higher risk of reporting SUI after POP surgery compared with women without occult SUI. Adding a MUS to POP surgery reduced the risk of postoperative SUI and the need for its treatment in women with occult SUI. Of women with occult SUI undergoing POP-only surgery, 13 % needed additional MUS. We found no differences in global impression of improvement and quality of life.

  15. The Effect of Adjuvant Zinc Therapy on Recovery from Pneumonia in Hospitalized Children: A Double-Blind Randomized Controlled Trial

    PubMed Central

    Qasemzadeh, Mohammad Javad; Fathi, Mahdi; Tashvighi, Maryam; Gharehbeglou, Mohammad; Yadollah-Damavandi, Soheila; Parsa, Yekta; Rahimi, Ebrahim

    2014-01-01

    Objectives. Pneumonia is one of the common causes of mortality in young children. Some studies have shown a beneficial effect of zinc supplements on treatment of pneumonia. The present study aimed to investigate the effects of short courses of zinc administration on recovery from this disease in hospitalized children. Methods. In a parallel double-blind randomized controlled trial at Ayatollah Golpaygani Hospital in Qom, 120 children aged 3–60 months with pneumonia were randomly assigned 1:1 to receive zinc or placebo (5 mL every 12 hours) along with the common antibiotic treatments until discharge. Primary outcome was recovery from pneumonia, defined by the incidence and resolution of clinical symptoms and the duration of hospitalization. Results. The differences between the two groups in all clinical symptoms at admittance and in variables affecting the disease, such as age and sex, were not statistically significant (P > 0.05) at baseline. Compared to the placebo group, the treatment group showed a statistically significant decrease in duration of clinical symptoms (P = 0.044) and hospitalization (P = 0.004). Conclusions. Supplemental administration of zinc can expedite the healing process and results in faster resolution of clinical symptoms in children with pneumonia. In general, zinc administration, along with common antibiotic treatments, is recommended in this group of children. It can also reduce the drug resistance caused by multiple antibiotic therapies. This trial is approved by the Medical Ethics Committee of Islamic Azad University in Iran (ID Number: 8579622-Q). This study is also registered in AEARCTR (The American Economic Association's Registry for Randomized Controlled Trials). This trial is registered with RCT ID: AEARCTR-0000187. PMID:24955282

  16. Specific music therapy techniques in the treatment of primary headache disorders in adolescents: a randomized attention-placebo-controlled trial.

    PubMed

    Koenig, Julian; Oelkers-Ax, Rieke; Kaess, Michael; Parzer, Peter; Lenzen, Christoph; Hillecke, Thomas Karl; Resch, Franz

    2013-10-01

    Migraine and tension-type headache have a high prevalence in children and adolescents. In addition to common pharmacologic and nonpharmacologic interventions, music therapy has been shown to be efficient in the prophylaxis of pediatric migraine. This study aimed to assess the efficacy of specific music therapy techniques in the treatment of adolescents with primary headache (tension-type headache and migraine). A prospective, randomized, attention-placebo-controlled parallel group trial was conducted. Following an 8-week baseline, patients were randomized to either music therapy (n = 40) or a rhythm pedagogic program (n = 38) designed as an "attention placebo" over 6 sessions within 8 weeks. Reductions of both headache frequency and intensity after treatment (8-week postline) and 6 months after treatment were taken as the efficacy variables. Treatments were delivered in equal dose and frequency by the same group of therapists. Data analysis of subjects completing the protocol showed that neither treatment was superior to the other at any point of measurement (posttreatment and follow-up). Intention-to-treat analysis revealed no impact of drop-out on these results. Both groups showed a moderate mean reduction of headache frequency posttreatment of about 20%, but only small numbers of responders (≥50% frequency reduction). Follow-up data showed no significant deteriorations or improvements. This article presents a randomized placebo-controlled trial on music therapy in the treatment of adolescents with frequent primary headache. Music therapy is not superior to an attention placebo within this study. These results draw attention to the need for adequate controls within therapeutic trials in the treatment of pain. Copyright © 2013 American Pain Society. Published by Elsevier Inc. All rights reserved.

  17. Impact of a computer-assisted Screening, Brief Intervention and Referral to Treatment on reducing alcohol consumption among patients with hazardous drinking disorder in hospital emergency departments. The randomized BREVALCO trial.

    PubMed

    Duroy, David; Boutron, Isabelle; Baron, Gabriel; Ravaud, Philippe; Estellat, Candice; Lejoyeux, Michel

    2016-08-01

    To assess the impact of a computer-assisted Screening, Brief Intervention, and Referral to Treatment (SBIRT) on daily consumption of alcohol by patients with hazardous drinking disorder detected after systematic screening during their admission to an emergency department (ED). Two-arm, parallel group, multicentre, randomized controlled trial with a centralised computer-generated randomization procedure. Four EDs in university hospitals located in the Paris area in France. Patients admitted to the ED for any reason, with hazardous drinking disorder detected after systematic screening (Alcohol Use Disorder Identification Test score ≥5 for women and ≥8 for men, or self-reported weekly alcohol consumption ≥7 drinks for women and ≥14 for men). The experimental intervention was computer-assisted SBIRT and the comparator was a placebo intervention (a computer-assisted education program on nutrition). Interventions were administered in the ED and followed by phone reinforcements at 1 and 3 months. The primary outcome was the mean number of alcohol drinks per day in the previous week, at 12 months. From May 2005 to February 2011, 286 patients were randomized to the computer-assisted SBIRT and 286 to the comparator intervention. The two groups did not differ in the primary outcome, with an adjusted mean difference of 0.12 (95% confidence interval, -0.88 to 1.11). There was no additional benefit of the computer-assisted alcohol SBIRT as compared with the computer-assisted education program on nutrition among patients with hazardous drinking disorder detected by systematic screening during their admission to an ED. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  18. Online Alcohol Assessment and Feedback for Hazardous and Harmful Drinkers: Findings From the AMADEUS-2 Randomized Controlled Trial of Routine Practice in Swedish Universities.

    PubMed

    Bendtsen, Preben; Bendtsen, Marcus; Karlsson, Nadine; White, Ian R; McCambridge, Jim

    2015-07-09

    Previous research on the effectiveness of online alcohol interventions for college students has shown mixed results. Small benefits have been found in some studies, and because online interventions are inexpensive and possible to implement on a large scale, there is a need for further study. This study evaluated the effectiveness of national provision of a brief online alcohol intervention for students in Sweden. Risky drinkers at 9 colleges and universities in Sweden were invited by mail and identified using a single screening question. These students (N=1605) gave consent and were randomized into a 2-arm parallel group randomized controlled trial consisting of immediate or delayed access to a fully automated online assessment and intervention with personalized feedback. After 2 months, there was no strong evidence of effectiveness, with no statistically significant differences in the planned analyses, although there was some indication of possible benefit in sensitivity analyses, suggesting an intervention effect of a 10% reduction (95% CI -30% to 10%) in total weekly alcohol consumption. Differences in effect sizes between universities were also seen, with participants from a major university (n=365) reducing their weekly alcohol consumption by 14% (95% CI -23% to -4%). However, lower recruitment than planned and differential attrition in the intervention and control groups (49% vs 68%) complicated interpretation of the outcome data. Any effects of current national provision are likely to be small, and further research and development work is needed to enhance effectiveness. International Standard Randomized Controlled Trial Number (ISRCTN): 02335307; http://www.isrctn.com/ISRCTN02335307 (Archived by WebCite at http://www.webcitation.org/6ZdPUh0R4).

  19. Controlled assessment of the efficacy of occlusal stabilization splints on sleep bruxism.

    PubMed

    van der Zaag, Jacques; Lobbezoo, Frank; Wicks, Darrel J; Visscher, Corine M; Hamburger, Hans L; Naeije, Machiel

    2005-01-01

    To assess the efficacy of occlusal stabilization splints in the management of sleep bruxism (SB) in a double-blind, parallel, controlled, randomized clinical trial. Twenty-one participants were randomly assigned to an occlusal splint group (n = 11; mean age = 34.2 +/- 13.1 years) or a palatal splint (ie, an acrylic palatal coverage) group (n = 10; mean age = 34.9 +/- 11.2 years). Two polysomnographic recordings that included bilateral masseter electromyographic activity were made: one prior to treatment, the other after a treatment period of 4 weeks. The number of bruxism episodes per hour of sleep (Epi/h), the number of bursts per hour (Bur/h), and the bruxism time index (ie, the percentage of total sleep time spent bruxing) were established as outcome variables at a 10% maximum voluntary contraction threshold level. A general linear model was used to test both the effects between splint groups and within the treatment phase as well as their interaction for each outcome variable. Neither occlusal stabilization splints nor palatal splints had an influence on the SB outcome variables or on the sleep variables measured on a group level. In individual cases, variable outcomes were found: Some patients had an increase (33% to 48% of the cases), while others showed no change (33% to 48%) or a decrease (19% to 29%) in SB outcome variables. The absence of significant group effects of splints in the management of SB indicates that caution is required when splints are indicated, apart from their role in the protection against dental wear. The application of splints should therefore be considered at the individual patient level.

  20. The FORCE - A highly portable parallel programming language

    NASA Technical Reports Server (NTRS)

    Jordan, Harry F.; Benten, Muhammad S.; Alaghband, Gita; Jakob, Ruediger

    1989-01-01

    This paper explains why the FORCE parallel programming language is easily portable among six different shared-memory multiprocessors, and how a two-level macro preprocessor makes it possible to hide low-level machine dependencies and to build machine-independent high-level constructs on top of them. These FORCE constructs make it possible to write portable parallel programs largely independent of the number of processes and the specific shared-memory multiprocessor executing them.

  1. The FORCE: A highly portable parallel programming language

    NASA Technical Reports Server (NTRS)

    Jordan, Harry F.; Benten, Muhammad S.; Alaghband, Gita; Jakob, Ruediger

    1989-01-01

    Here, it is explained why the FORCE parallel programming language is easily portable among six different shared-memory multiprocessors, and how a two-level macro preprocessor makes it possible to hide low-level machine dependencies and to build machine-independent high-level constructs on top of them. These FORCE constructs make it possible to write portable parallel programs largely independent of the number of processes and the specific shared-memory multiprocessor executing them.

  2. Protocol for a single-centre, parallel-arm, randomised controlled superiority trial evaluating the effects of transcatheter arterial embolisation of abnormal knee neovasculature on pain, function and quality of life in people with knee osteoarthritis

    PubMed Central

    Landers, Steve; Hely, Andrew; Harrison, Benjamin; Maister, Nick; Hely, Rachael; Lane, Stephen E; Gill, Stephen D; Page, Richard S

    2017-01-01

    Introduction Symptomatic knee osteoarthritis (OA) is common. Advanced knee OA is successfully treated with joint replacement surgery, but effectively managing mild to moderate knee OA can be difficult. Angiogenesis increases with OA and might contribute to pain and structural damage. Modifying angiogenesis is a potential treatment pathway for OA. The aim of the current study is to determine whether transcatheter arterial embolisation of abnormal neovasculature arising from the genicular arterial branches improves knee pain, physical function and quality of life in people with mild to moderate symptomatic knee OA. Methods and analysis The study is a single centre, parallel-arm, double-blinded (participant and assessor), randomised controlled superiority trial with 1:1 random block allocation. Eligible participants have mild to moderate symptomatic knee OA and will be randomly assigned to receive either embolisation of aberrant knee neovasculature of genicular arterial branches or a placebo intervention. Outcome measures will be collected prior to the intervention and again 1, 6 and 12 months postintervention. The primary outcome is change in knee pain between baseline and 12 month assessment as measured by the Knee Injury and Osteoarthritis Outcome Score (KOOS). Secondary outcomes include change in self-reported physical function (KOOS), self-reported quality of life (KOOS, EuroQol: EQ-5D-5L), self-reported knee joint stiffness (KOOS), self-reported global change, 6 min walk test performance, and 30 s chair-stand test performance. Intention-to-treat analysis will be performed including all participants as randomised. To detect a mean between-group difference in pain change of 20% at the one year reassessment with a two-sided significance level of α=0.05 and power of 80% using a two-sample t-test, we require 29 participants per arm, which allows for 20% of participants to drop out. 
Ethics and dissemination Barwon Health Human Research Ethics Committee, 30 May 2016, (ref:15/101). Study results will be disseminated via peer-reviewed publications and conference presentations. Trial registration number Universal trial number U1111-1183-8503, Australian New Zealand Clinical Trials Registry, ACTRN12616001184460, approved 29 August 2016. PMID:28554913
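    The protocol's power calculation (two-sample t-test, two-sided α=0.05, 80% power, 20% dropout, 29 per arm) can be sketched with the standard normal approximation. The function name and the standardized effect size d = 0.84 below are our assumptions for illustration; the protocol does not state d.

    ```python
    # Normal-approximation sample-size sketch for a two-sample t-test.
    # n per arm (completers) = 2 * ((z_{1-a/2} + z_power) / d)^2, then
    # inflated for dropout. d = 0.84 is an assumed effect size, not a
    # figure from the protocol.
    import math
    from statistics import NormalDist

    def n_per_arm(d, alpha=0.05, power=0.80, dropout=0.20):
        z = NormalDist()
        z_alpha = z.inv_cdf(1 - alpha / 2)   # ~1.96 for two-sided 5%
        z_beta = z.inv_cdf(power)            # ~0.84 for 80% power
        n_complete = 2 * ((z_alpha + z_beta) / d) ** 2
        return math.ceil(math.ceil(n_complete) / (1 - dropout))

    print(n_per_arm(0.84))  # -> 29 under these assumptions
    ```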

  3. Computing Maximum Cardinality Matchings in Parallel on Bipartite Graphs via Tree-Grafting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azad, Ariful; Buluc, Aydn; Pothen, Alex

    It is difficult to obtain high performance when computing matchings on parallel processors because matching algorithms explicitly or implicitly search for paths in the graph, and when these paths become long, there is little concurrency. In spite of this limitation, we present a new algorithm and its shared-memory parallelization that achieves good performance and scalability in computing maximum cardinality matchings in bipartite graphs. This algorithm searches for augmenting paths via specialized breadth-first searches (BFS) from multiple source vertices, hence creating more parallelism than single source algorithms. Algorithms that employ multiple-source searches cannot discard a search tree once no augmenting path is discovered from the tree, unlike algorithms that rely on single-source searches. We describe a novel tree-grafting method that eliminates most of the redundant edge traversals resulting from this property of multiple-source searches. We also employ the recent direction-optimizing BFS algorithm as a subroutine to discover augmenting paths faster. Our algorithm compares favorably with the current best algorithms in terms of the number of edges traversed, the average augmenting path length, and the number of iterations. Here, we provide a proof of correctness for our algorithm. Our NUMA-aware implementation is scalable to 80 threads of an Intel multiprocessor and to 240 threads on an Intel Knights Corner coprocessor. On average, our parallel algorithm runs an order of magnitude faster than the fastest algorithms available. The performance improvement is more significant on graphs with small matching number.

  4. Computing Maximum Cardinality Matchings in Parallel on Bipartite Graphs via Tree-Grafting

    DOE PAGES

    Azad, Ariful; Buluc, Aydn; Pothen, Alex

    2016-03-24

    It is difficult to obtain high performance when computing matchings on parallel processors because matching algorithms explicitly or implicitly search for paths in the graph, and when these paths become long, there is little concurrency. In spite of this limitation, we present a new algorithm and its shared-memory parallelization that achieves good performance and scalability in computing maximum cardinality matchings in bipartite graphs. This algorithm searches for augmenting paths via specialized breadth-first searches (BFS) from multiple source vertices, hence creating more parallelism than single source algorithms. Algorithms that employ multiple-source searches cannot discard a search tree once no augmenting path is discovered from the tree, unlike algorithms that rely on single-source searches. We describe a novel tree-grafting method that eliminates most of the redundant edge traversals resulting from this property of multiple-source searches. We also employ the recent direction-optimizing BFS algorithm as a subroutine to discover augmenting paths faster. Our algorithm compares favorably with the current best algorithms in terms of the number of edges traversed, the average augmenting path length, and the number of iterations. Here, we provide a proof of correctness for our algorithm. Our NUMA-aware implementation is scalable to 80 threads of an Intel multiprocessor and to 240 threads on an Intel Knights Corner coprocessor. On average, our parallel algorithm runs an order of magnitude faster than the fastest algorithms available. The performance improvement is more significant on graphs with small matching number.
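    To make the terminology concrete, the sketch below is the classical serial single-source augmenting-path algorithm for maximum cardinality bipartite matching; it is a baseline of the kind the paper improves on, not the paper's parallel multi-source tree-grafting algorithm.

    ```python
    # Serial augmenting-path maximum matching on a bipartite graph.
    # An augmenting path alternates unmatched/matched edges and ends at a
    # free vertex; flipping it along its length grows the matching by one.

    def max_bipartite_matching(adj, n_left, n_right):
        """adj[u] lists the right-vertices adjacent to left-vertex u."""
        match_right = [-1] * n_right          # right vertex -> matched left vertex

        def try_augment(u, visited):
            for v in adj[u]:
                if v not in visited:
                    visited.add(v)
                    # v is free, or its current partner can be re-matched elsewhere:
                    if match_right[v] == -1 or try_augment(match_right[v], visited):
                        match_right[v] = u
                        return True
            return False

        return sum(try_augment(u, set()) for u in range(n_left))

    # 3x3 example with a perfect matching of size 3.
    adj = [[0, 1], [0], [1, 2]]
    print(max_bipartite_matching(adj, 3, 3))  # -> 3
    ```

    Each search starts from a single free left vertex; the paper's algorithm instead grows BFS trees from many free vertices at once and grafts reusable subtrees instead of rebuilding them.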

  5. Reduced-Order Structure-Preserving Model for Parallel-Connected Three-Phase Grid-Tied Inverters: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Brian B; Purba, Victor; Jafarpour, Saber

    Given that next-generation infrastructures will contain large numbers of grid-connected inverters and these interfaces will be satisfying a growing fraction of system load, it is imperative to analyze the impacts of power electronics on such systems. However, since each inverter model has a relatively large number of dynamic states, it would be impractical to execute complex system models where the full dynamics of each inverter are retained. To address this challenge, we derive a reduced-order structure-preserving model for parallel-connected grid-tied three-phase inverters. Here, each inverter in the system is assumed to have a full-bridge topology, LCL filter at the point of common coupling, and the control architecture for each inverter includes a current controller, a power controller, and a phase-locked loop for grid synchronization. We outline a structure-preserving reduced-order inverter model for the setting where the parallel inverters are each designed such that the filter components and controller gains scale linearly with the power rating. By structure preserving, we mean that the reduced-order three-phase inverter model is also composed of an LCL filter, a power controller, current controller, and PLL. That is, we show that the system of parallel inverters can be modeled exactly as one aggregated inverter unit and this equivalent model has the same number of dynamical states as an individual inverter in the paralleled system. Numerical simulations validate the reduced-order models.
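    A minimal sketch of the circuit-level aggregation behind such a reduced-order model: when N identical inverters are paralleled, series inductances and resistances combine in parallel (divide by N) while filter capacitances add, yielding one equivalent LCL filter. The base filter values below are hypothetical, and this sketch covers only the passive filter, not the controller or PLL aggregation.

    ```python
    # Equivalent LCL filter parameters for n identical paralleled inverters.
    # Parallel inductors/resistors: value / n; parallel capacitors: value * n.
    # Base values are hypothetical placeholders, not from the record.

    def aggregate_lcl(base, n):
        """Collapse n identical paralleled LCL filters into one equivalent."""
        return {
            "L1": base["L1"] / n, "L2": base["L2"] / n,  # inverter- and grid-side inductors
            "R1": base["R1"] / n, "R2": base["R2"] / n,  # their series resistances
            "C":  base["C"] * n,                         # filter capacitors add
        }

    base = {"L1": 1.0e-3, "L2": 0.5e-3, "R1": 0.1, "R2": 0.05, "C": 20e-6}
    print(aggregate_lcl(base, 4))
    ```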

  6. Programmable quantum random number generator without postprocessing.

    PubMed

    Nguyen, Lac; Rehain, Patrick; Sua, Yong Meng; Huang, Yu-Ping

    2018-02-15

    We demonstrate a viable source of unbiased quantum random numbers whose statistical properties can be arbitrarily programmed without the need for any postprocessing such as randomness distillation or distribution transformation. It is based on measuring the arrival time of single photons in shaped temporal modes that are tailored with an electro-optical modulator. We show that quantum random numbers can be created directly in customized probability distributions and pass all randomness tests of the NIST and Dieharder test suites without any randomness extraction. The min-entropies of such generated random numbers are measured close to the theoretical limits, indicating their near-ideal statistics and ultrahigh purity. Easy to implement and arbitrarily programmable, this technique can find versatile uses in a multitude of data analysis areas.

  7. The effect of Vaccinium uliginosum extract on tablet computer-induced asthenopia: randomized placebo-controlled study.

    PubMed

    Park, Choul Yong; Gu, Namyi; Lim, Chi-Yeon; Oh, Jong-Hyun; Chang, Minwook; Kim, Martha; Rhee, Moo-Yong

    2016-08-18

    To investigate the alleviation effect of Vaccinium uliginosum extract (DA9301) on tablet computer-induced asthenopia. This was a randomized, placebo-controlled, double-blind and parallel study (Trial registration number: 2013-95). A total of 60 volunteers were randomized into DA9301 (n = 30) and control (n = 30) groups. The DA9301 group received DA9301 oral pills (1000 mg/day) for 4 weeks and the control group received placebo. Asthenopia was evaluated by administering a questionnaire containing 10 questions regarding ocular symptoms (responses scored on a scale of 0-6; total score: 60) before (baseline) and 4 weeks after receiving pills (DA9301 or placebo). The participants completed the questionnaire before and after tablet computer (iPad Air, Apple Inc.) watching at each visit. The change in total asthenopia score (TAS) was calculated and compared between the groups. TAS increased significantly after tablet computer watching at baseline in the DA9301 group (from 20.35 to 23.88; p = 0.031). However, after receiving DA9301 for 4 weeks, TAS remained stable after tablet computer watching. In the control group, TAS changes induced by tablet computer watching were not significant either at baseline or at 4 weeks after receiving placebo. Further analysis revealed the scores for "tired eyes" (p = 0.001), "sore/aching eyes" (p = 0.038), "irritated eyes" (p = 0.010), "watery eyes" (p = 0.005), "dry eyes" (p = 0.003), "eye strain" (p = 0.006), "blurred vision" (p = 0.034), and "visual discomfort" (p = 0.018) significantly improved in the DA9301 group. We found that oral intake of DA9301 (1000 mg/day for 4 weeks) was effective in alleviating asthenopia symptoms induced by tablet computer watching. The study is registered at www.clinicaltrials.gov (registration number: NCT02641470, date of registration December 30, 2015).

  8. Yogurt supplemented with probiotics can protect the healthy elderly from respiratory infections: A randomized controlled open-label trial.

    PubMed

    Pu, Fangfang; Guo, Yue; Li, Ming; Zhu, Hong; Wang, Shijie; Shen, Xi; He, Miao; Huang, Chengyu; He, Fang

    2017-01-01

To evaluate whether yogurt supplemented with a probiotic strain could protect middle-aged and elderly people from acute upper respiratory tract infections (URTI) using a randomized, blank-controlled, parallel-group design. Two hundred and five volunteers aged ≥45 years were randomly divided into two groups. The subjects in the intervention group were orally administered 300 mL/d of yogurt supplemented with a probiotic strain, Lactobacillus paracasei N1115 (N1115), 3.6×10⁷ CFU/mL for 12 weeks, while those in the control group retained their normal diet without any probiotic supplementation. The primary outcome was the incidence of URTI, and changes in serum protein, immunoglobulins, and the profiles of the T-lymphocyte subsets (total T-cells [CD3+], T-helper cells [CD4+], and T-cytotoxic-suppressor cells [CD8+]) during the intervention were the secondary outcomes. Compared to the control group, the number of persons diagnosed with an acute URTI and the number of URTI events significantly decreased in the intervention group (P=0.038, P=0.030, respectively). The risk of URTI in the intervention group was evaluated as 55% of that in the control group (relative risk =0.55, 95% CI: 0.307-0.969). The change in the percentage of CD3+ cells in the intervention group was significantly higher than in the control group (P=0.038). However, no significant differences were observed in the total protein, albumin, globulin, and prealbumin levels in both groups (P>0.05). The study suggested that yogurt with selected probiotic strains such as N1115 may reduce the risk of acute upper respiratory tract infections in the elderly. The enhancement of the T-cell-mediated natural immune defense might be one of the important underlying mechanisms for probiotics to exert their anti-infective effects.

  9. Yogurt supplemented with probiotics can protect the healthy elderly from respiratory infections: A randomized controlled open-label trial

    PubMed Central

    Pu, Fangfang; Guo, Yue; Li, Ming; Zhu, Hong; Wang, Shijie; Shen, Xi; He, Miao; Huang, Chengyu; He, Fang

    2017-01-01

Purpose To evaluate whether yogurt supplemented with a probiotic strain could protect middle-aged and elderly people from acute upper respiratory tract infections (URTI) using a randomized, blank-controlled, parallel-group design. Patients and methods Two hundred and five volunteers aged ≥45 years were randomly divided into two groups. The subjects in the intervention group were orally administered 300 mL/d of yogurt supplemented with a probiotic strain, Lactobacillus paracasei N1115 (N1115), 3.6×10⁷ CFU/mL for 12 weeks, while those in the control group retained their normal diet without any probiotic supplementation. The primary outcome was the incidence of URTI, and changes in serum protein, immunoglobulins, and the profiles of the T-lymphocyte subsets (total T-cells [CD3+], T-helper cells [CD4+], and T-cytotoxic-suppressor cells [CD8+]) during the intervention were the secondary outcomes. Results Compared to the control group, the number of persons diagnosed with an acute URTI and the number of URTI events significantly decreased in the intervention group (P=0.038, P=0.030, respectively). The risk of URTI in the intervention group was evaluated as 55% of that in the control group (relative risk =0.55, 95% CI: 0.307–0.969). The change in the percentage of CD3+ cells in the intervention group was significantly higher than in the control group (P=0.038). However, no significant differences were observed in the total protein, albumin, globulin, and prealbumin levels in both groups (P>0.05). Conclusion The study suggested that yogurt with selected probiotic strains such as N1115 may reduce the risk of acute upper respiratory tract infections in the elderly. The enhancement of the T-cell-mediated natural immune defense might be one of the important underlying mechanisms for probiotics to exert their anti-infective effects. PMID:28848330

  10. Randomized comparison of renal denervation versus intensified pharmacotherapy including spironolactone in true-resistant hypertension: six-month results from the Prague-15 study.

    PubMed

    Rosa, Ján; Widimský, Petr; Toušek, Petr; Petrák, Ondřej; Čurila, Karol; Waldauf, Petr; Bednář, František; Zelinka, Tomáš; Holaj, Robert; Štrauch, Branislav; Šomlóová, Zuzana; Táborský, Miloš; Václavík, Jan; Kociánová, Eva; Branny, Marian; Nykl, Igor; Jiravský, Otakar; Widimský, Jiří

    2015-02-01

This prospective, randomized, open-label multicenter trial evaluated the efficacy of catheter-based renal denervation (Symplicity, Medtronic) versus intensified pharmacological treatment including spironolactone (if tolerated) in patients with true-resistant hypertension. True resistance was confirmed by 24-hour ambulatory blood pressure monitoring after exclusion of secondary hypertension and confirmation of adherence to therapy by measurement of plasma antihypertensive drug levels before enrollment. One hundred six patients were randomized to renal denervation (n=52) or intensified pharmacological treatment (n=54), with baseline systolic blood pressure of 159±17 and 155±17 mm Hg and an average number of drugs of 5.1 and 5.4, respectively. A significant reduction in 24-hour average systolic blood pressure after 6 months (-8.6 [95% confidence interval: -11.8, -5.3] mm Hg; P<0.001 in the renal denervation group versus -8.1 [95% confidence interval: -12.7, -3.4] mm Hg; P=0.001 in the pharmacological group) was observed and was comparable in both groups. Similarly, a significant reduction in systolic office blood pressure (-12.4 [95% confidence interval: -17.0, -7.8] mm Hg; P<0.001 in the renal denervation group versus -14.3 [95% confidence interval: -19.7, -8.9] mm Hg; P<0.001 in the pharmacological group) was present. Between-group differences in change were not significant. The average number of antihypertensive drugs used after 6 months was significantly higher in the pharmacological group (+0.3 drugs; P<0.001). A significant increase in serum creatinine and a parallel decrease in creatinine clearance were observed in the pharmacological group; between-group differences were borderline significant. The 6-month results of this study confirmed the safety of renal denervation. In conclusion, renal denervation achieved a reduction of blood pressure comparable with intensified pharmacotherapy. © 2014 American Heart Association, Inc.

  11. Intermittent catheterization with a hydrophilic-coated catheter delays urinary tract infections in acute spinal cord injury: a prospective, randomized, multicenter trial.

    PubMed

    Cardenas, Diana D; Moore, Katherine N; Dannels-McClure, Amy; Scelza, William M; Graves, Daniel E; Brooks, Monifa; Busch, Anna Karina

    2011-05-01

    To investigate whether intermittent catheterization (IC) with a hydrophilic-coated catheter delays the onset of the first symptomatic urinary tract infection (UTI) and reduces the number of symptomatic UTIs in patients with acute spinal cord injury (SCI) compared with IC with standard, uncoated catheters. A prospective, randomized, parallel-group trial. Fifteen North American SCI centers. Participants were followed up while in the hospital or rehabilitation unit (institutional period) and up to 3 months after institutional discharge (community period). The maximal study period was 6 months. A total of 224 subjects with traumatic SCI of less than 3 months' duration who use IC. The participants were randomized within 10 days of starting IC to either single-use hydrophilic-coated (SpeediCath) or polyvinyl chloride uncoated (Conveen) catheters. The time from the first catheterization to the first antibiotic-treated symptomatic UTI was measured as well as the total number of symptomatic UTIs during the study period. The time to the first antibiotic-treated symptomatic UTI was significantly delayed in the hydrophilic-coated catheter group compared with the uncoated catheter group. The delay corresponded to a 33% decrease in the daily risk of developing the first symptomatic UTI among participants who used the hydrophilic-coated catheter. In the institutional period, the incidence of antibiotic-treated symptomatic UTIs was reduced by 21% (P < .05) in the hydrophilic-coated catheter group. The use of a hydrophilic-coated catheter for IC is associated with a delay in the onset of the first antibiotic-treated symptomatic UTI and with a reduction in the incidence of symptomatic UTI in patients with acute SCI during the acute inpatient rehabilitation. Using a hydrophilic-coated catheter could minimize UTI-related complications, treatment costs, and rehabilitation delays in this group of patients, and reduce the emergence of antibiotic-resistant organisms. 
Copyright © 2011 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.

  12. A Fast Multiple Sampling Method for Low-Noise CMOS Image Sensors With Column-Parallel 12-bit SAR ADCs

    PubMed Central

    Kim, Min-Kyu; Hong, Seong-Kwan; Kwon, Oh-Kyong

    2015-01-01

This paper presents a fast multiple sampling method for low-noise CMOS image sensor (CIS) applications with column-parallel successive approximation register analog-to-digital converters (SAR ADCs). The 12-bit SAR ADC using the proposed multiple sampling method decreases the A/D conversion time by repeatedly converting a pixel output to 4-bit after the first 12-bit A/D conversion, reducing noise of the CIS by one over the square root of the number of samplings. The area of the 12-bit SAR ADC is reduced by using a 10-bit capacitor digital-to-analog converter (DAC) with four scaled reference voltages. In addition, a simple up/down counter-based digital processing logic is proposed to perform complex calculations for multiple sampling and digital correlated double sampling. To verify the proposed multiple sampling method, a 256 × 128 pixel array CIS with 12-bit SAR ADCs was fabricated using a 0.18 μm CMOS process. The measurement results show that the proposed multiple sampling method reduces each A/D conversion time from 1.2 μs to 0.45 μs and random noise from 848.3 μV to 270.4 μV, achieving a dynamic range of 68.1 dB and an SNR of 39.2 dB. PMID:26712765
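The 1/√N noise reduction the abstract relies on can be checked numerically. The following is a minimal simulation, not the sensor's readout chain: it assumes Gaussian read noise and simply averages repeated readings of one pixel value, showing that 16 samplings cut the RMS noise by about a factor of 4.

```python
import random
import statistics

def sample_pixel(true_value, noise_rms, n_samples, rng):
    """Average n_samples noisy readings of one pixel value."""
    readings = [true_value + rng.gauss(0.0, noise_rms) for _ in range(n_samples)]
    return sum(readings) / n_samples

rng = random.Random(42)
true_value, noise_rms = 1.0, 0.1

# Empirical RMS error of single-sample vs. 16-sample averaged readouts
single = [sample_pixel(true_value, noise_rms, 1, rng) - true_value for _ in range(20000)]
averaged = [sample_pixel(true_value, noise_rms, 16, rng) - true_value for _ in range(20000)]

rms_1 = statistics.pstdev(single)
rms_16 = statistics.pstdev(averaged)
print(rms_1 / rms_16)  # close to sqrt(16) = 4
```

The measured improvement in the paper (848.3 μV to 270.4 μV, a factor of about 3.1) is somewhat below the ideal √N because other noise sources do not average down.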

  13. A double-blind, randomized, placebo-controlled, parallel-group study of THC/CBD oromucosal spray in combination with the existing treatment regimen, in the relief of central neuropathic pain in patients with multiple sclerosis.

    PubMed

    Langford, R M; Mares, J; Novotna, A; Vachova, M; Novakova, I; Notcutt, W; Ratcliffe, S

    2013-04-01

Central neuropathic pain (CNP) occurs in many multiple sclerosis (MS) patients. The provision of adequate pain relief to these patients can be very difficult. Here we report the first phase III placebo-controlled study of the efficacy of the endocannabinoid system modulator delta-9-tetrahydrocannabinol (THC)/cannabidiol (CBD) oromucosal spray (USAN name, nabiximols; Sativex, GW Pharmaceuticals, Salisbury, Wiltshire, UK), to alleviate CNP. Patients who had failed to gain adequate analgesia from existing medication were treated with THC/CBD spray or placebo as an add-on treatment, in a double-blind manner, for 14 weeks to investigate the efficacy of the medication in MS-induced neuropathic pain. This parallel-group phase of the study was then followed by an 18-week randomized-withdrawal study (14-week open-label treatment period plus a double-blind 4-week randomized-withdrawal phase) to investigate time to treatment failure and show maintenance of efficacy. A total of 339 patients were randomized to phase A (167 received THC/CBD spray and 172 received placebo). Of those who completed phase A, 58 entered the randomized-withdrawal phase. The primary endpoint of responder analysis at the 30 % level at week 14 of phase A of the study was not met, with 50 % of patients on THC/CBD spray classed as responders at the 30 % level compared to 45 % of patients on placebo (p = 0.234). However, an interim analysis at week 10 showed a statistically significant treatment difference in favor of THC/CBD spray at this time point (p = 0.046). During the randomized-withdrawal phase, the primary endpoint of time to treatment failure was statistically significant in favor of THC/CBD spray, with 57 % of patients receiving placebo failing treatment versus 24 % of patients from the THC/CBD spray group (p = 0.04).
The mean changes from baseline in Pain Numerical Rating Scale (NRS) (p = 0.028) and sleep quality NRS (p = 0.015) scores, both secondary endpoints in phase B, were also statistically significant compared to placebo, with estimated treatment differences of -0.79 and 0.99 points, respectively, in favor of THC/CBD spray treatment. The results of the current investigation were equivocal, with conflicting findings in the two phases of the study. While there was a large proportion of responders to THC/CBD spray treatment during the phase A double-blind period, the primary endpoint was not met due to a similarly large number of placebo responders. In contrast, there was a marked effect in phase B of the study, with an increased time to treatment failure in the THC/CBD spray group compared to placebo. These findings suggest that further studies are required to explore the full potential of THC/CBD spray in these patients.

  14. The Flight Deck Perspective of the NASA Langley AILS Concept

    NASA Technical Reports Server (NTRS)

    Rine, Laura L.; Abbott, Terence S.; Lohr, Gary W.; Elliott, Dawn M.; Waller, Marvin C.; Perry, R. Brad

    2000-01-01

    Many US airports depend on parallel runway operations to meet the growing demand of day-to-day operations. In the current airspace system, Instrument Meteorological Conditions (IMC) reduce the capacity of close parallel runway operations; that is, runways spaced closer than 4300 ft. These capacity losses can result in landing delays causing inconveniences to the traveling public, interruptions in commerce, and increased operating costs to the airlines. This document presents the flight deck perspective component of the Airborne Information for Lateral Spacing (AILS) concept for approaches to close parallel runways in IMC. It represents the ideas the NASA Langley Research Center (LaRC) AILS Development Team envisions to integrate a number of components and procedures into a workable system for conducting close parallel runway approaches. An initial documentation of the aspects of this concept was sponsored by LaRC and completed in 1996. Since that time a number of the aspects have evolved to a more mature state. This paper is an update of the earlier documentation.

  15. Distributed Computing for Signal Processing: Modeling of Asynchronous Parallel Computation. Appendix G. On the Design and Modeling of Special Purpose Parallel Processing Systems.

    DTIC Science & Technology

    1985-05-01

    [The distributed abstract text for this record is OCR-garbled. The recoverable fragments refer to a 5-tuple describing single-operation execution times, simulation code for generating random events, and the design problem of providing computing machinery capable of performing signal processing tasks within a given time constraint, given that the majority of available computing machinery is general purpose.]

  16. A hybrid-type quantum random number generator

    NASA Astrophysics Data System (ADS)

    Hai-Qiang, Ma; Wu, Zhu; Ke-Jin, Wei; Rui-Xue, Li; Hong-Wei, Liu

    2016-05-01

    This paper proposes a well-performing hybrid-type truly quantum random number generator based on the time interval between two independent single-photon detection signals, which is practical and intuitive, and generates the initial random number sources from a combination of multiple existing random number sources. A time-to-amplitude converter and multichannel analyzer are used for qualitative analysis to demonstrate that each and every step is random. Furthermore, a carefully designed data acquisition system is used to obtain a high-quality random sequence. Our scheme is simple and proves that the random number bit rate can be dramatically increased to satisfy practical requirements. Project supported by the National Natural Science Foundation of China (Grant Nos. 61178010 and 11374042), the Fund of State Key Laboratory of Information Photonics and Optical Communications (Beijing University of Posts and Telecommunications), China, and the Fundamental Research Funds for the Central Universities of China (Grant No. bupt2014TS01).
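The time-interval principle above can be sketched numerically. This is a toy simulation, not the authors' apparatus: pseudo-random exponential inter-arrival times stand in for single-photon detection gaps, and the extraction rule (compare consecutive intervals, discard ties) is an illustrative assumption rather than the paper's exact scheme. Because consecutive intervals are independent and identically distributed, either one is longer with equal probability, so the comparison yields unbiased bits regardless of the detection rate.

```python
import random

def bits_from_intervals(intervals):
    """One bit per pair of detection intervals: 1 if the first is longer.
    Ties are discarded, von Neumann-style, to avoid bias."""
    bits = []
    for t1, t2 in zip(intervals[::2], intervals[1::2]):
        if t1 == t2:
            continue
        bits.append(1 if t1 > t2 else 0)
    return bits

rng = random.Random(7)
# Stand-in for single-photon arrival gaps: exponential inter-arrival times
intervals = [rng.expovariate(1e6) for _ in range(100000)]
bits = bits_from_intervals(intervals)
ones_fraction = sum(bits) / len(bits)
print(ones_fraction)  # close to 0.5
```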

  17. High-speed true random number generation based on paired memristors for security electronics

    NASA Astrophysics Data System (ADS)

    Zhang, Teng; Yin, Minghui; Xu, Changmin; Lu, Xiayan; Sun, Xinhao; Yang, Yuchao; Huang, Ru

    2017-11-01

    True random number generator (TRNG) is a critical component in hardware security that is increasingly important in the era of mobile computing and internet of things. Here we demonstrate a TRNG using intrinsic variation of memristors as a natural source of entropy that is otherwise undesirable in most applications. The random bits were produced by cyclically switching a pair of tantalum oxide based memristors and comparing their resistance values in the off state, taking advantage of the more pronounced resistance variation compared with that in the on state. Using an alternating read scheme in the designed TRNG circuit, the unbiasedness of the random numbers was significantly improved, and the bitstream passed standard randomness tests. The Pt/TaOx/Ta memristors fabricated in this work have fast programming/erasing speeds of ~30 ns, suggesting a high random number throughput. The approach proposed here thus holds great promise for physically-implemented random number generation.
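The pair-comparison and alternating-read ideas can be illustrated with a small simulation. This is a hedged sketch, not the paper's circuit: the offset, noise level, and comparison rule are invented values standing in for device mismatch and cycle-to-cycle off-state variation. Comparing the two resistances directly inherits any systematic mismatch as bias; swapping the comparison order on alternate cycles cancels it.

```python
import random

def read_pair(rng, offset=0.05):
    """One switching cycle: off-state resistances of two memristors with
    cycle-to-cycle variation; device A carries a small systematic offset."""
    r_a = 1.0 + offset + rng.gauss(0.0, 0.2)
    r_b = 1.0 + rng.gauss(0.0, 0.2)
    return r_a, r_b

rng = random.Random(1)
raw, alternated = [], []
for k in range(100000):
    r_a, r_b = read_pair(rng)
    raw.append(1 if r_a > r_b else 0)
    # Alternating read: swap the comparison order each cycle so the
    # systematic offset contributes equally to 0s and 1s
    if k % 2 == 0:
        alternated.append(1 if r_a > r_b else 0)
    else:
        alternated.append(1 if r_b > r_a else 0)

print(sum(raw) / len(raw))                # biased above 0.5 by the offset
print(sum(alternated) / len(alternated))  # close to 0.5
```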

  18. High-speed true random number generation based on paired memristors for security electronics.

    PubMed

    Zhang, Teng; Yin, Minghui; Xu, Changmin; Lu, Xiayan; Sun, Xinhao; Yang, Yuchao; Huang, Ru

    2017-11-10

    True random number generator (TRNG) is a critical component in hardware security that is increasingly important in the era of mobile computing and internet of things. Here we demonstrate a TRNG using intrinsic variation of memristors as a natural source of entropy that is otherwise undesirable in most applications. The random bits were produced by cyclically switching a pair of tantalum oxide based memristors and comparing their resistance values in the off state, taking advantage of the more pronounced resistance variation compared with that in the on state. Using an alternating read scheme in the designed TRNG circuit, the unbiasedness of the random numbers was significantly improved, and the bitstream passed standard randomness tests. The Pt/TaOx/Ta memristors fabricated in this work have fast programming/erasing speeds of ∼30 ns, suggesting a high random number throughput. The approach proposed here thus holds great promise for physically-implemented random number generation.

  19. Doing better by getting worse: posthypnotic amnesia improves random number generation.

    PubMed

    Terhune, Devin Blair; Brugger, Peter

    2011-01-01

    Although forgetting is often regarded as a deficit that we need to control to optimize cognitive functioning, it can have beneficial effects in a number of contexts. We examined whether disrupting memory for previous numerical responses would attenuate repetition avoidance (the tendency to avoid repeating the same number) during random number generation and thereby improve the randomness of responses. Low suggestible and low dissociative and high dissociative highly suggestible individuals completed a random number generation task in a control condition, following a posthypnotic amnesia suggestion to forget previous numerical responses, and in a second control condition following the cancellation of the suggestion. High dissociative highly suggestible participants displayed a selective increase in repetitions during posthypnotic amnesia, with equivalent repetition frequency to a random system, whereas the other two groups exhibited repetition avoidance across conditions. Our results demonstrate that temporarily disrupting memory for previous numerical responses improves random number generation.
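The repetition-avoidance baseline behind this study can be made concrete. The sketch below is illustrative, not the authors' task: a genuinely random digit source repeats its previous digit with probability 1/10, while a generator that resamples whenever the new digit equals the last one (a crude model of human repetition avoidance) repeats never.

```python
import random

def repetition_rate(seq):
    """Fraction of responses that repeat the immediately preceding one."""
    repeats = sum(1 for a, b in zip(seq, seq[1:]) if a == b)
    return repeats / (len(seq) - 1)

rng = random.Random(0)

# A genuinely random system repeats the previous digit ~10% of the time
digits = [rng.randrange(10) for _ in range(100000)]
random_rate = repetition_rate(digits)
print(random_rate)  # about 0.1

# A repetition-avoiding generator, as human subjects tend to behave
avoiding = [rng.randrange(10)]
for _ in range(99999):
    nxt = rng.randrange(10)
    while nxt == avoiding[-1]:
        nxt = rng.randrange(10)
    avoiding.append(nxt)
print(repetition_rate(avoiding))  # 0.0
```

On this measure, the study's finding is that posthypnotic amnesia moved the high dissociative highly suggestible group from the second pattern toward the first.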

  20. Doing Better by Getting Worse: Posthypnotic Amnesia Improves Random Number Generation

    PubMed Central

    Terhune, Devin Blair; Brugger, Peter

    2011-01-01

    Although forgetting is often regarded as a deficit that we need to control to optimize cognitive functioning, it can have beneficial effects in a number of contexts. We examined whether disrupting memory for previous numerical responses would attenuate repetition avoidance (the tendency to avoid repeating the same number) during random number generation and thereby improve the randomness of responses. Low suggestible and low dissociative and high dissociative highly suggestible individuals completed a random number generation task in a control condition, following a posthypnotic amnesia suggestion to forget previous numerical responses, and in a second control condition following the cancellation of the suggestion. High dissociative highly suggestible participants displayed a selective increase in repetitions during posthypnotic amnesia, with equivalent repetition frequency to a random system, whereas the other two groups exhibited repetition avoidance across conditions. Our results demonstrate that temporarily disrupting memory for previous numerical responses improves random number generation. PMID:22195022

  1. Effects of a whole body vibration (WBV) exercise intervention for institutionalized older people: a randomized, multicentre, parallel, clinical trial.

    PubMed

    Sitjà-Rabert, Mercè; Martínez-Zapata, Ma José; Fort Vanmeerhaeghe, Azahara; Rey Abella, Ferran; Romero-Rodríguez, Daniel; Bonfill, Xavier

    2015-02-01

    To assess the efficacy of an exercise program on a whole-body vibration platform (WBV) in improving body balance and muscle performance and preventing falls in institutionalized elderly people. A multicentre randomized parallel assessor-blinded clinical trial was conducted in elderly persons living in nursing homes. Participants were randomized to an exercise program performed either on a whole body vibratory platform (WBV plus exercise group) or on a stationary surface (exercise group). The exercise program for both groups consisted of static and dynamic exercises (balance and strength training over a 6-week training period of 3 sessions per week). The frequency applied on the vibratory platform was 30 to 35 Hz and amplitude was 2 to 4 mm. The primary outcome measurement was static/dynamic body balance. Secondary outcomes were muscle strength and number of falls. Efficacy was analyzed on an intention-to-treat basis and per protocol. The effects of the intervention were evaluated using the t test, Mann-Whitney test, or chi-square test, depending on the type of outcome. Follow-up measurements were collected 6 weeks and 6 months after randomization. A total of 159 participants from 10 centers were included: 81 in the WBV plus exercise group and 78 in the control group. Mean age was 82 years, and 67.29% were women. The Tinetti test score showed a significant overall improvement in both groups (P < .001). No significant differences were found between groups at week 6 (P = .890) or month 6 (P = .718). The Timed Up and Go test did not improve (P = .599) in either group over time, and no significant differences were found between groups at week 6 (P = .757) or month 6 (P = .959). Muscle performance results from the 5 Sit-To-Stand tests improved significantly across time (P = .001), but no statistically significant differences were found between groups at week 6 (P = .709) or month 6 (P = .841). 
A total of 57 falls (35.8%) were recorded during the follow-up period, with no differences between groups (P = .406). An exercise program on a vibratory platform provides benefits similar to those of an exercise program on a stationary surface in relation to body balance, gait, functional mobility, and muscle strength in institutionalized elderly people. Longer studies in larger samples are needed to assess falls. Copyright © 2015 AMDA – The Society for Post-Acute and Long-Term Care Medicine. Published by Elsevier Inc. All rights reserved.

  2. Implementation of DFT application on ternary optical computer

    NASA Astrophysics Data System (ADS)

    Junjie, Peng; Youyi, Fu; Xiaofeng, Zhang; Shuai, Kong; Xinyu, Wei

    2018-03-01

    Because of its characteristic huge number of data bits and low energy consumption, optical computing may be used in applications such as the DFT, which require a large amount of computation and can be implemented in parallel. Accordingly, DFT implementation methods in full parallel as well as in partial parallel are presented. Based on the resources of a ternary optical computer (TOC), extensive experiments were carried out. Experimental results show that the proposed schemes are correct and feasible. They provide a foundation for further exploration of applications on the TOC that need a large amount of calculation and can be processed in parallel.
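The reason the DFT parallelizes so naturally is that each output bin is an independent sum over the input. A minimal sketch of that partitioning, on an ordinary CPU rather than the TOC, with the "workers" simulated serially by a round-robin assignment of bins:

```python
import cmath

def dft_bin(x, k):
    """One DFT output bin: X_k = sum_n x[n] * exp(-2*pi*i*k*n/N)."""
    n_pts = len(x)
    return sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / n_pts)
               for n in range(n_pts))

def partitioned_dft(x, n_workers=4):
    """Assign output bins to workers round-robin; each worker's loop is
    independent of the others, so it could run on its own processor."""
    n_pts = len(x)
    result = [0j] * n_pts
    for worker in range(n_workers):
        for k in range(worker, n_pts, n_workers):  # this worker's bins
            result[k] = dft_bin(x, k)
    return result

x = [1.0, 2.0, 3.0, 4.0]
X = partitioned_dft(x)
print([round(abs(v), 6) for v in X])
```

Partial-parallel variants would give each worker a block of bins, or split the inner sum itself, without changing the arithmetic.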

  3. Implementations of BLAST for parallel computers.

    PubMed

    Jülich, A

    1995-02-01

    The BLAST sequence comparison programs have been ported to a variety of parallel computers: the shared memory machine Cray Y-MP 8/864 and the distributed memory architectures Intel iPSC/860 and nCUBE. Additionally, the programs were ported to run on workstation clusters. We explain the parallelization techniques and consider the pros and cons of these methods. The BLAST programs are very well suited for parallelization for a moderate number of processors. We illustrate our results using the program blastp as an example. As input data for blastp, a 799 residue protein query sequence and the protein database PIR were used.
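One reason database search parallelizes well for a moderate number of processors is that database entries can be scored independently and the hits merged at the end. The sketch below illustrates that partitioning only; the shared-k-mer score is a toy stand-in for BLAST's actual alignment statistics, and all names here are illustrative, not BLAST's API.

```python
def kmer_set(seq, k=3):
    """All length-k substrings of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def score(query, subject, k=3):
    """Toy similarity score: number of shared k-mers (not BLAST's score)."""
    return len(kmer_set(query, k) & kmer_set(subject, k))

def search_chunk(query, chunk):
    """Work done by one processor: score its share of the database."""
    return [(name, score(query, seq)) for name, seq in chunk]

def partitioned_search(query, database, n_workers=3):
    """Split database entries across workers, then merge and rank hits."""
    chunks = [database[i::n_workers] for i in range(n_workers)]
    hits = []
    for chunk in chunks:  # each chunk could run on its own processor
        hits.extend(search_chunk(query, chunk))
    return sorted(hits, key=lambda h: -h[1])

db = [("seq1", "MKVLAAGGTK"), ("seq2", "MKVLHHHHHH"), ("seq3", "GGGGGGGGGG")]
best = partitioned_search("MKVLAAG", db)[0]
print(best)  # seq1 shares the most 3-mers with the query
```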

  4. n-body simulations using message passing parallel computers.

    NASA Astrophysics Data System (ADS)

    Grama, A. Y.; Kumar, V.; Sameh, A.

    The authors present new parallel formulations of the Barnes-Hut method for n-body simulations on message passing computers. These parallel formulations partition the domain efficiently incurring minimal communication overhead. This is in contrast to existing schemes that are based on sorting a large number of keys or on the use of global data structures. The new formulations are augmented by alternate communication strategies which serve to minimize communication overhead. The impact of these communication strategies is experimentally studied. The authors report on experimental results obtained from an astrophysical simulation on an nCUBE2 parallel computer.
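The approximation at the heart of the Barnes-Hut method is that a distant, compact cluster of bodies can be replaced by its center of mass. The sketch below checks that idea directly for an inverse-square attraction (G and masses set to 1); it omits the tree construction and domain partitioning that the paper parallelizes.

```python
def direct_force(p, bodies):
    """Direct sum of inverse-square attractions on a point at p."""
    fx = fy = 0.0
    for x, y, m in bodies:
        dx, dy = x - p[0], y - p[1]
        r2 = dx * dx + dy * dy
        r = r2 ** 0.5
        fx += m * dx / (r2 * r)
        fy += m * dy / (r2 * r)
    return fx, fy

def com_force(p, bodies):
    """Barnes-Hut far-field step: replace a cluster by its center of mass."""
    mtot = sum(m for _, _, m in bodies)
    cx = sum(x * m for x, y, m in bodies) / mtot
    cy = sum(y * m for x, y, m in bodies) / mtot
    return direct_force(p, [(cx, cy, mtot)])

# A tight cluster (extent ~1) seen from far away (distance ~140)
cluster = [(100.0 + dx, 100.0 + dy, 1.0) for dx in (0, 1) for dy in (0, 1)]
f_exact = direct_force((0.0, 0.0), cluster)
f_approx = com_force((0.0, 0.0), cluster)
rel_err = abs(f_exact[0] - f_approx[0]) / abs(f_exact[0])
print(rel_err)  # well under 1%
```

In the full algorithm, a cell is summarized this way only when its size-to-distance ratio is below an opening threshold; otherwise its children are visited recursively.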

  5. A Randomized Parallel Study for Simulated Internal Jugular Vein Cannulation Using Simple Needle Guide Device

    ClinicalTrials.gov

    2017-08-14

    Doctors attending a central line insertion training course for new residents of a university hospital from March 2017 to June 2017; physicians who had performed fewer than 10 ultrasound-guided internal jugular vein cannulations participated in this study

  6. On the Nonlinear Stability of Plane Parallel Shear Flow in a Coplanar Magnetic Field

    NASA Astrophysics Data System (ADS)

    Xu, Lanxi; Lan, Wanli

    2017-12-01

    Lyapunov direct method has been used to study the nonlinear stability of laminar flow between two parallel planes in the presence of a coplanar magnetic field for streamwise perturbations with stress-free boundary planes. Two Lyapunov functions are defined. By means of the first, it is proved that the transverse components of the perturbations decay unconditionally and asymptotically to zero for all Reynolds numbers and magnetic Reynolds numbers. By means of the second, it is shown that the other components of the perturbations decay conditionally and exponentially to zero for all Reynolds numbers and for magnetic Reynolds numbers below π²/(2M), where M is the maximum of the absolute value of the velocity field of the laminar flow.

  7. Quantum random number generator

    DOEpatents

    Pooser, Raphael C.

    2016-05-10

    A quantum random number generator (QRNG) and a photon generator for a QRNG are provided. The photon generator may be operated in a spontaneous mode below a lasing threshold to emit photons. Photons emitted from the photon generator may have at least one random characteristic, which may be monitored by the QRNG to generate a random number. In one embodiment, the photon generator may include a photon emitter and an amplifier coupled to the photon emitter. The amplifier may enable the photon generator to be used in the QRNG without introducing significant bias in the random number and may enable multiplexing of multiple random numbers. The amplifier may also desensitize the photon generator to fluctuations in power supplied thereto while operating in the spontaneous mode. In one embodiment, the photon emitter and amplifier may be a tapered diode amplifier.

  8. Postdural puncture headache is not an age-related symptom in children: a prospective, open-randomized, parallel group study comparing a 22-gauge Quincke with a 22-gauge Whitacre needle.

    PubMed

    Kokki, H; Salonvaara, M; Herrgård, E; Onen, P

    1999-01-01

    Many reports have shown a low incidence of postdural puncture headache (PDPH) and other complaints in young children. The objective of this open-randomized, prospective, parallel group study was to compare the use of a cutting point spinal needle (22-G Quincke) with a pencil point spinal needle (22-G Whitacre) in children. We studied the puncture characteristics, success rate and incidence of postpuncture complaints in 57 children, aged 8 months to 15 years, following 98 lumbar punctures (LP). The patient/parents completed a diary at 3 and 7 days after LP. The response rate was 97%. The incidence of PDPH was similar, 15% in the Quincke group and 9% in the Whitacre group (P=0.42). The risk of developing a PDPH was not dependent on age (r < 0.00, P=0.67). Eight of the 11 PDPHs developed in children younger than 10 years, the youngest being 23 months old.

  9. The effective propagation constants of SH wave in composites reinforced by dispersive parallel nanofibers

    NASA Astrophysics Data System (ADS)

    Qiang, FangWei; Wei, PeiJun; Li, Li

    2012-07-01

    In the present paper, the effective propagation constants of elastic SH waves in composites with randomly distributed parallel cylindrical nanofibers are studied. The surface stress effects are considered based on the surface elasticity theory and non-classical interfacial conditions between the nanofiber and the host are derived. The scattering waves from individual nanofibers embedded in an infinite elastic host are obtained by the plane wave expansion method. The scattering waves from all fibers are summed up to obtain the multiple scattering waves. The interactions among random dispersive nanofibers are taken into account by the effective field approximation. The effective propagation constants are obtained by the configurational average of the multiple scattering waves. The effective speed and attenuation of the averaged wave and the associated dynamical effective shear modulus of composites are numerically calculated. Based on the numerical results, the size effects of the nanofibers on the effective propagation constants and the effective modulus are discussed.

  10. Magnetic orientation of nontronite clay in aqueous dispersions and its effect on water diffusion.

    PubMed

    Abrahamsson, Christoffer; Nordstierna, Lars; Nordin, Matias; Dvinskikh, Sergey V; Nydén, Magnus

    2015-01-01

    The diffusion rate of water in dilute clay dispersions depends on particle concentration, size, shape, aggregation and water-particle interactions. As nontronite clay particles magnetically align parallel to the magnetic field, directional self-diffusion anisotropy can be created within such dispersions. Here we study water diffusion in exfoliated nontronite clay dispersions by diffusion NMR and time-dependent 1H-NMR-imaging profiles. The dispersion clay concentration was varied between 0.3 and 0.7 vol%. After magnetic alignment of the clay particles in these dispersions, a maximum difference of 20% was measured between the parallel and perpendicular self-diffusion coefficients in the dispersion with 0.7 vol% clay. A method was developed to measure water diffusion within the dispersion in the absence of a magnetic field (random clay orientation), as this is not possible with standard diffusion NMR. However, no significant difference in self-diffusion coefficient between random and aligned dispersions could be observed. Copyright © 2014 Elsevier Inc. All rights reserved.

  11. Capabilities of Fully Parallelized MHD Stability Code MARS

    NASA Astrophysics Data System (ADS)

    Svidzinski, Vladimir; Galkin, Sergei; Kim, Jin-Soo; Liu, Yueqiang

    2016-10-01

    Results of the full parallelization of the plasma stability code MARS will be reported. MARS calculates eigenmodes in 2D axisymmetric toroidal equilibria within MHD-kinetic plasma models. A parallel version of MARS, named PMARS, has recently been developed at FAR-TECH. The parallelized MARS is an efficient tool for the simulation of MHD instabilities with low, intermediate and high toroidal mode numbers within both the fluid and kinetic plasma models implemented in MARS. Parallelization of the code included parallelization of the construction of the matrix for the eigenvalue problem and parallelization of the inverse vector iteration algorithm implemented in MARS for the solution of the formulated eigenvalue problem. Construction of the matrix is parallelized by distributing the load among processors assigned to different magnetic surfaces. Parallelization of the solution of the eigenvalue problem is achieved by repeating the steps of the MARS algorithm using parallel libraries and procedures. The parallelized MARS is capable of calculating eigenmodes with significantly increased spatial resolution: up to 5,000 adapted radial grid points with up to 500 poloidal harmonics. Such resolution is sufficient for the simulation of kink, tearing and peeling-ballooning instabilities with physically relevant parameters. Work is supported by the U.S. DOE SBIR program.

  12. Fully Parallel MHD Stability Analysis Tool

    NASA Astrophysics Data System (ADS)

    Svidzinski, Vladimir; Galkin, Sergei; Kim, Jin-Soo; Liu, Yueqiang

    2015-11-01

    Progress on the full parallelization of the plasma stability code MARS will be reported. MARS calculates eigenmodes in 2D axisymmetric toroidal equilibria within MHD-kinetic plasma models. It is a powerful tool for studying MHD and MHD-kinetic instabilities and is widely used by the fusion community. The parallel version of MARS is intended for simulations on local parallel clusters. It will be an efficient tool for the simulation of MHD instabilities with low, intermediate and high toroidal mode numbers within both the fluid and kinetic plasma models already implemented in MARS. Parallelization of the code includes parallelization of the construction of the matrix for the eigenvalue problem and parallelization of the inverse iteration algorithm implemented in MARS for the solution of the formulated eigenvalue problem. Construction of the matrix is parallelized by distributing the load among processors assigned to different magnetic surfaces. Parallelization of the solution of the eigenvalue problem is achieved by repeating the steps of the present MARS algorithm using parallel libraries and procedures. Results of the MARS parallelization and of the development of a new fixed-boundary equilibrium code adapted for MARS input will be reported. Work is supported by the U.S. DOE SBIR program.

  13. Fast physical-random number generation using laser diode's frequency noise: influence of frequency discriminator

    NASA Astrophysics Data System (ADS)

    Matsumoto, Kouhei; Kasuya, Yuki; Yumoto, Mitsuki; Arai, Hideaki; Sato, Takashi; Sakamoto, Shuichi; Ohkawa, Masashi; Ohdaira, Yasuo

    2018-02-01

    Not so long ago, pseudo-random numbers generated by numerical formulae were considered adequate for encrypting important data files, because of the time needed to decode them. With today's ultra-high-speed processors, however, this is no longer true. So, in order to thwart ever-more advanced attempts to breach a system's protections, cryptologists have devised a method that is considered virtually impossible to decode, using what is in effect a limitless supply of physical random numbers. This research describes a method whereby a laser diode's frequency noise generates large quantities of physical random numbers. Using two types of photodetectors (APD and PIN-PD), we tested the abilities of two types of lasers (FP-LD and VCSEL) to generate random numbers. In all instances, an etalon served as the frequency discriminator, the examination pass rates were determined using the NIST FIPS 140-2 test at each bit, and the random number generation (RNG) speed was noted.

  14. Xyce parallel electronic simulator : users' guide.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mei, Ting; Rankin, Eric Lamont; Thornquist, Heidi K.

    2011-05-01

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: (1) Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). Note that this includes support for most popular parallel and serial computers; (2) Improved performance for all numerical kernels (e.g., time integrator, nonlinear and linear solvers) through state-of-the-art algorithms and novel techniques; (3) Device models which are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and (4) Object-oriented code design and implementation using modern coding practices that ensure that the Xyce Parallel Electronic Simulator will be maintainable and extensible far into the future. Xyce is a parallel code in the most general sense of the phrase - a message-passing parallel implementation - which allows it to run efficiently on the widest possible number of computing platforms. These include serial, shared-memory and distributed-memory parallel as well as heterogeneous platforms. Careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. The development of Xyce provides a platform for computational research and development aimed specifically at the needs of the Laboratory. With Xyce, Sandia has an 'in-house' capability with which both new electrical (e.g., device model development) and algorithmic (e.g., faster time-integration methods, parallel solver algorithms) research and development can be performed. As a result, Xyce is a unique electrical simulation capability, designed to meet the unique needs of the laboratory.

  15. smallWig: parallel compression of RNA-seq WIG files.

    PubMed

    Wang, Zhiying; Weissman, Tsachy; Milenkovic, Olgica

    2016-01-15

    We developed a new lossless compression method for WIG data, named smallWig, offering the best known compression rates for RNA-seq data and featuring random-access functionalities that enable visualization, summary-statistics analysis and fast queries from the compressed files. Our approach results in order-of-magnitude improvements compared with bigWig and ensures compression rates that are only a fraction of those produced by cWig. The key features of the smallWig algorithm are statistical data analysis and a combination of source-coding methods that ensure high flexibility and make the algorithm suitable for different applications. Furthermore, for general-purpose file compression, the compression rate of smallWig approaches the empirical entropy of the tested WIG data. For compression with random-query features, smallWig uses a simple block-based compression scheme that introduces only a minor overhead in the compression rate. For archival or storage-space-sensitive applications, the method relies on context-mixing techniques that lead to further improvements of the compression rate. Implementations of smallWig can be executed in parallel on different sets of chromosomes using multiple processors, thereby enabling desirable scaling for future transcriptome Big Data platforms. The development of next-generation sequencing technologies has led to a dramatic decrease in the cost of DNA/RNA sequencing and expression profiling. RNA-seq has emerged as an important and inexpensive technology that provides information about whole transcriptomes of various species and organisms, as well as different organs and cellular communities. The vast volume of data generated by RNA-seq experiments has significantly increased data storage costs and communication bandwidth requirements. Current compression tools for RNA-seq data such as bigWig and cWig either use general-purpose compressors (gzip) or suboptimal compression schemes that leave significant room for improvement. 
To substantiate this claim, we performed a statistical analysis of expression data in different transform domains and developed accompanying entropy-coding methods that bridge the gap between theoretical and practical WIG file compression rates. We tested different variants of the smallWig compression algorithm on a number of integer- and real-valued (floating point) RNA-seq WIG files generated by the ENCODE project. The results reveal that, on average, smallWig offers 18-fold compression-rate improvements, up to 2.5-fold compression-time improvements, and 1.5-fold decompression-time improvements compared with bigWig. On the tested files, the memory usage of the algorithm never exceeded 90 KB. When more elaborate context-mixing compressors were used within smallWig, the obtained compression rates were as much as 23 times better than those of bigWig. For smallWig used in the random-query mode, which also supports retrieval of the summary statistics, an overhead in the compression rate of roughly 3-17% was introduced, depending on the chosen system parameters. An increase in encoding and decoding time of 30% and 55%, respectively, represents an additional performance loss caused by enabling random data access. We also implemented smallWig using multi-processor programming. This parallelization feature decreases the encoding delay 2-3.4 times compared with a single-processor implementation, with the number of processors ranging from 2 to 8; in the same parameter regime, the decoding delay decreased 2-5.2 times. The smallWig software can be downloaded from: http://stanford.edu/~zhiyingw/smallWig/smallwig.html, http://publish.illinois.edu/milenkovic/, http://web.stanford.edu/~tsachy/. zhiyingw@stanford.edu Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
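
    The block-based random-access trade-off described above can be illustrated with a minimal sketch. This is not smallWig's actual codec (which uses transform-domain statistics and entropy coding); it only shows why independent fixed-size blocks enable point queries at a small cost in compression rate, using stdlib `zlib` and hypothetical helper names.

```python
import zlib

def compress_blocks(values, block_size):
    """Compress integer values in independent fixed-size blocks.
    Independence is what buys random access: a point query touches
    exactly one compressed block instead of the whole file."""
    blocks = []
    for i in range(0, len(values), block_size):
        raw = ",".join(map(str, values[i:i + block_size])).encode()
        blocks.append(zlib.compress(raw))
    return blocks

def query(blocks, block_size, index):
    """Random access: decompress only the block that holds `index`."""
    raw = zlib.decompress(blocks[index // block_size]).decode()
    return int(raw.split(",")[index % block_size])

coverage = [3, 3, 3, 7, 0, 0, 5, 5, 5, 5, 2, 1]   # toy WIG-like track
blocks = compress_blocks(coverage, block_size=4)
print(query(blocks, 4, 6))   # -> 5
```

    Smaller blocks make queries cheaper but raise the per-block overhead, mirroring the 3-17% compression-rate overhead the abstract reports for random-query mode.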

  16. Parallel Tetrahedral Mesh Adaptation with Dynamic Load Balancing

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Biswas, Rupak; Gabow, Harold N.

    1999-01-01

    The ability to dynamically adapt an unstructured grid is a powerful tool for efficiently solving computational problems with evolving physical features. In this paper, we report on our experience parallelizing an edge-based adaptation scheme, called 3D_TAG, using message passing. Results show excellent speedup when a realistic helicopter rotor mesh is randomly refined. However, performance deteriorates when the mesh is refined using a solution-based error indicator, since mesh adaptation for practical problems occurs in a localized region, creating a severe load imbalance. To address this problem, we have developed PLUM, a global dynamic load balancing framework for adaptive numerical computations. Even though PLUM primarily balances processor workloads for the solution phase, it reduces the load imbalance problem within mesh adaptation by repartitioning the mesh after targeting edges for refinement but before the actual subdivision. This dramatically improves the performance of parallel 3D_TAG, since refinement occurs in a more load-balanced fashion. We also present optimal and heuristic algorithms that, when applied to the default mapping of a parallel repartitioner, significantly reduce the data redistribution overhead. Finally, portability is examined by comparing performance on three state-of-the-art parallel machines.
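
    The balancing objective behind repartition-before-subdivision can be sketched with a simple greedy heuristic: assign predicted refinement work, heaviest region first, to the least-loaded processor. This is only an illustration of the goal; PLUM's actual repartitioner also minimizes edge cuts and data movement, and the region names below are hypothetical.

```python
import heapq

def greedy_partition(workloads, nprocs):
    """Assign work items, heaviest first, to the currently least-loaded
    processor.  A min-heap of (load, processor) pairs makes each
    assignment O(log nprocs)."""
    heap = [(0.0, p) for p in range(nprocs)]       # (load, processor)
    heapq.heapify(heap)
    assignment = {}
    for item, cost in sorted(workloads.items(), key=lambda kv: -kv[1]):
        load, p = heapq.heappop(heap)
        assignment[item] = p
        heapq.heappush(heap, (load + cost, p))
    return assignment

# Regions targeted for refinement carry extra predicted work.
work = {"region_a": 5.0, "region_b": 4.0, "region_c": 3.0, "region_d": 3.0}
owners = greedy_partition(work, nprocs=2)
```

    Balancing on *predicted* post-refinement work, before the subdivision actually happens, is exactly what lets the adaptation phase itself run load-balanced.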

  17. Highly accelerated cardiac cine parallel MRI using low-rank matrix completion and partial separability model

    NASA Astrophysics Data System (ADS)

    Lyu, Jingyuan; Nakarmi, Ukash; Zhang, Chaoyi; Ying, Leslie

    2016-05-01

    This paper presents a new approach to highly accelerated dynamic parallel MRI using low-rank matrix completion and the partial separability (PS) model. In data acquisition, k-space data are moderately randomly undersampled at the central k-space navigator locations, but highly undersampled at the outer k-space for each temporal frame. In reconstruction, the navigator data are reconstructed from undersampled data using structured low-rank matrix completion. After all the unacquired navigator data are estimated, the partial separability model is used to obtain partial k-t data. Then the parallel imaging method is used to reconstruct the entire dynamic image series from highly undersampled data. The proposed method has been shown to achieve high-quality reconstructions with reduction factors up to 31 and a temporal resolution of 29 ms, where the conventional PS method fails.

  18. The MANDELA study: A multicenter, randomized, open-label, parallel group trial to refine the use of everolimus after heart transplantation.

    PubMed

    Deuse, Tobias; Bara, Christoph; Barten, Markus J; Hirt, Stephan W; Doesch, Andreas O; Knosalla, Christoph; Grinninger, Carola; Stypmann, Jörg; Garbade, Jens; Wimmer, Peter; May, Christoph; Porstner, Martina; Schulz, Uwe

    2015-11-01

    In recent years a series of trials has sought to define the optimal protocol for everolimus-based immunosuppression in heart transplantation, with the goal of minimizing exposure to calcineurin inhibitors (CNIs) and harnessing the non-immunosuppressive benefits of everolimus. Randomized studies have demonstrated that immunosuppressive potency can be maintained in heart transplant patients receiving everolimus despite marked CNI reduction, although very early CNI withdrawal may be inadvisable. A potential renal advantage has been shown for everolimus, but the optimal time for conversion and the adequate reduction in CNI exposure remain to be defined. Other reasons for use of everolimus include a substantial reduction in the risk of cytomegalovirus infection, and evidence for inhibition of cardiac allograft vasculopathy, a major cause of graft loss. The ongoing MANDELA study is a 12-month multicenter, randomized, open-label, parallel-group study in which efficacy, renal function and safety are compared in approximately 200 heart transplant patients. Patients receive CNI therapy, steroids and everolimus or mycophenolic acid during months 3 to 6 post-transplant, and are then randomized at month 6 post-transplant (i) to convert to CNI-free immunosuppression with everolimus and mycophenolic acid or (ii) to continue reduced-exposure CNI, with concomitant everolimus. Patients are then followed to month 18 post-transplant. The rationale and expectations for the trial and its methodology are described herein. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Mechanical properties of electrospun bilayer fibrous membranes as potential scaffolds for tissue engineering.

    PubMed

    Pu, Juan; Komvopoulos, Kyriakos

    2014-06-01

    Bilayer fibrous membranes of poly(l-lactic acid) (PLLA) were fabricated by electrospinning, using a parallel-disk mandrel configuration that resulted in the sequential deposition of a layer with fibers aligned across the two parallel disks and a layer with randomly oriented fibers, both layers deposited in a single process step. Membrane structure and fiber alignment were characterized by scanning electron microscopy and two-dimensional fast Fourier transform. Because of the intricacies of the generated electric field, bilayer membranes exhibited higher porosity than single-layer membranes consisting of randomly oriented fibers fabricated with a solid-drum collector. However, despite their higher porosity, bilayer membranes demonstrated generally higher elastic modulus, yield strength and toughness than single-layer membranes with random fibers. Bilayer membrane deformation at relatively high strain rates comprised multiple abrupt microfracture events characterized by discontinuous fiber breakage. Bilayer membrane elongation yielded excessive necking of the layer with random fibers and remarkable fiber stretching (on the order of 400%) in the layer with fibers aligned in the stress direction. In addition, fibers in both layers exhibited multiple localized necking, attributed to the nonuniform distribution of crystalline phases in the fibrillar structure. The high membrane porosity, good mechanical properties, and good biocompatibility and biodegradability of PLLA (demonstrated in previous studies) make the present bilayer membranes good scaffold candidates for a wide range of tissue engineering applications. Copyright © 2014 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  20. Establishing a group of endpoints in a parallel computer

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.; Xue, Hanhong

    2016-02-02

    A parallel computer executes a number of tasks; each task includes a number of endpoints, and the endpoints are configured to support collective operations. In such a parallel computer, establishing a group of endpoints includes: receiving a user specification of a set of endpoints included in a global collection of endpoints, where the user specification defines the set in accordance with a predefined virtual representation of the endpoints, the predefined virtual representation is a data structure setting forth an organization of tasks and endpoints included in the global collection of endpoints, and the user specification defines the set of endpoints without specifying a particular endpoint; and defining a group of endpoints in dependence upon the predefined virtual representation of the endpoints and the user specification.

  1. Bit error rate tester using fast parallel generation of linear recurring sequences

    DOEpatents

    Pierson, Lyndon G.; Witzke, Edward L.; Maestas, Joseph H.

    2003-05-06

    A fast method for generating linear recurring sequences with parallel linear recurring sequence generators (LRSGs), using a feedback circuit optimized to balance minimum propagation delay against maximal sequence period. Parallel generation of linear recurring sequences requires decimating the sequence (creating small contiguous sections of the sequence in each LRSG). A companion matrix form is selected depending on whether the LFSR is right-shifting or left-shifting. The companion matrix is completed by selecting a primitive irreducible polynomial with 1's most closely grouped in a corner of the companion matrix. A decimation matrix is created by raising the companion matrix to the (n*k)th power, where k is the number of parallel LRSGs and n is the number of bits to be generated at a time by each LRSG. Companion matrices with 1's closely grouped in a corner will yield sparse decimation matrices. A feedback circuit comprised of XOR logic gates implements the decimation matrix in hardware. Sparse decimation matrices can be implemented with a minimum number of XOR gates, and therefore a minimum propagation delay through the feedback circuit. The LRSG of the invention is particularly well suited for use as a bit error rate tester on high-speed communication lines because it permits the receiver to synchronize to the transmitted pattern within 2n bits.
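
    The decimation idea can be demonstrated in software with GF(2) matrix arithmetic. The sketch below (an illustration, not the patented hardware circuit) uses the primitive polynomial x^4 + x + 1 with k = 2 generators each emitting n = 1 bit per step, so the decimation matrix is the companion matrix raised to the power n*k = 2; each generator starts phase-shifted and jumps k positions of the sequence per step.

```python
def mat_mult(a, b):
    """Multiply two square bit matrices over GF(2)."""
    n = len(a)
    return [[sum(a[i][t] & b[t][j] for t in range(n)) % 2
             for j in range(n)] for i in range(n)]

def mat_pow(m, e):
    """Raise a bit matrix to the e-th power over GF(2) by squaring."""
    n = len(m)
    r = [[int(i == j) for j in range(n)] for i in range(n)]
    while e:
        if e & 1:
            r = mat_mult(r, m)
        m = mat_mult(m, m)
        e >>= 1
    return r

def step(state, m):
    """Advance an LFSR state vector by one application of matrix m."""
    n = len(state)
    return [sum(m[i][j] & state[j] for j in range(n)) % 2 for i in range(n)]

def serial_bits(state, c, count):
    """Reference serial LFSR output (one companion-matrix step per bit)."""
    out = []
    for _ in range(count):
        out.append(state[0])
        state = step(state, c)
    return out

# Companion matrix for the primitive polynomial x^4 + x + 1
# (recurrence s[n+4] = s[n+1] XOR s[n]).
C = [[0, 1, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1],
     [1, 1, 0, 0]]
seed = [1, 0, 0, 0]

# k = 2 parallel generators: generator i starts i serial steps ahead,
# then each applies the decimation matrix D = C^2 per step.
D = mat_pow(C, 2)
states = [seed, step(seed, C)]
parallel = []
for _ in range(8):
    for i in range(2):
        parallel.append(states[i][0])
        states[i] = step(states[i], D)

# Interleaving the two decimated streams reproduces the serial sequence.
assert parallel == serial_bits(seed, C, 16)
```

    In hardware, each row of D with few 1's becomes a small XOR tree, which is why the patent favors sparse decimation matrices: fewer gates, shorter propagation delay.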

  2. A Parallel Framework with Block Matrices of a Discrete Fourier Transform for Vector-Valued Discrete-Time Signals.

    PubMed

    Soto-Quiros, Pablo

    2015-01-01

    This paper presents a parallel implementation of a kind of discrete Fourier transform (DFT): the vector-valued DFT. The vector-valued DFT is a novel tool to analyze the spectra of vector-valued discrete-time signals. This parallel implementation is developed in terms of a mathematical framework with a set of block matrix operations. These block matrix operations contribute to the analysis, design, and implementation of parallel algorithms on multicore processors. In this work, an implementation and experimental investigation of the mathematical framework are performed using MATLAB with the Parallel Computing Toolbox. We found that there is an advantage to using multicore processors and a parallel computing environment to reduce the high execution time. Additionally, the speedup increases as the number of logical processors and the length of the signal increase.
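
    The paper's framework is built from MATLAB block-matrix operations; the pure-Python sketch below only illustrates the structural fact that makes block-parallel dispatch possible: a vector-valued DFT decomposes into independent scalar DFTs, one per component, which can be sent to different cores.

```python
import cmath

def dft(x):
    """Naive O(N^2) scalar DFT (sufficient to show the structure)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def vector_dft(signal):
    """DFT of a vector-valued discrete-time signal, one scalar DFT per
    component.  The component transforms are independent, which is the
    block structure that permits dispatching them to separate cores."""
    d = len(signal[0])
    comps = [dft([v[c] for v in signal]) for c in range(d)]
    return [tuple(comps[c][k] for c in range(d))
            for k in range(len(signal))]

# 2-component constant signal: all energy lands in the k = 0 bin.
sig = [(1.0, 0.0)] * 4
spectrum = vector_dft(sig)
```

    In a parallel version, the list comprehension over components would become, e.g., a `concurrent.futures` map, with each worker computing one component's DFT.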

  3. Parallel-In-Time For Moving Meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Falgout, R. D.; Manteuffel, T. A.; Southworth, B.

    2016-02-04

    With steadily growing computational resources available, scientists must develop effective ways to utilize the increased resources. High performance, highly parallel software has become a standard. However, until recent years parallelism has focused primarily on the spatial domain. When solving a space-time partial differential equation (PDE), this leads to a sequential bottleneck in the temporal dimension, particularly when taking a large number of time steps. The XBraid parallel-in-time library was developed as a practical way to add temporal parallelism to existing sequential codes with only minor modifications. In this work, a rezoning-type moving mesh is applied to a diffusion problem and formulated in a parallel-in-time framework. Tests and scaling studies are run using XBraid and demonstrate excellent results for the simple model problem considered herein.

  4. Towards a high-speed quantum random number generator

    NASA Astrophysics Data System (ADS)

    Stucki, Damien; Burri, Samuel; Charbon, Edoardo; Chunnilall, Christopher; Meneghetti, Alessio; Regazzoni, Francesco

    2013-10-01

    Randomness is of fundamental importance in various fields, such as cryptography, numerical simulations, or the gaming industry. Quantum physics, which is fundamentally probabilistic, is the best option for a physical random number generator. In this article, we will present the work carried out in various projects in the context of the development of a commercial and certified high speed random number generator.

  5. Self-balanced real-time photonic scheme for ultrafast random number generation

    NASA Astrophysics Data System (ADS)

    Li, Pu; Guo, Ya; Guo, Yanqiang; Fan, Yuanlong; Guo, Xiaomin; Liu, Xianglian; Shore, K. Alan; Dubrova, Elena; Xu, Bingjie; Wang, Yuncai; Wang, Anbang

    2018-06-01

    We propose a real-time self-balanced photonic method for extracting ultrafast random numbers from broadband randomness sources. In place of electronic analog-to-digital converters (ADCs), balanced photo-detection technology is used to directly quantize optically sampled chaotic pulses into a continuous random number stream. Benefitting from ultrafast photo-detection, our method can efficiently eliminate the generation-rate bottleneck imposed by the electronic ADCs required in nearly all available fast physical random number generators. A proof-of-principle experiment demonstrates that, using our approach, 10 Gb/s real-time and statistically unbiased random numbers are successfully extracted from a bandwidth-enhanced chaotic source. The generation rate achieved experimentally here is limited by the bandwidth of the chaotic source. The method described has the potential to attain a real-time rate of 100 Gb/s.

  6. A general purpose subroutine for fast fourier transform on a distributed memory parallel machine

    NASA Technical Reports Server (NTRS)

    Dubey, A.; Zubair, M.; Grosch, C. E.

    1992-01-01

    One issue that is central to developing a general-purpose Fast Fourier Transform (FFT) subroutine on a distributed-memory parallel machine is the data distribution. Different users may wish to use the FFT routine with different data distributions. Thus, there is a need to design FFT schemes on distributed-memory parallel machines that can support a variety of data distributions. An FFT implementation on a distributed-memory parallel machine that works for a number of data distributions commonly encountered in scientific applications is presented. The problem of rearranging the data after computing the FFT is also addressed. The performance of the implementation on the Intel iPSC/860 distributed-memory parallel machine is evaluated.
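
    The data-rearrangement bookkeeping such a library must do can be sketched with two common layouts. This is an illustration of the general problem, not the iPSC/860 implementation: given a block layout and a cyclic layout of a length-n array over nprocs processors, compute which indices each processor pair must exchange.

```python
def block_owner(i, n, nprocs):
    """Owner of global index i under a block distribution."""
    block = -(-n // nprocs)                # ceiling division
    return i // block

def cyclic_owner(i, nprocs):
    """Owner of global index i under a cyclic distribution."""
    return i % nprocs

def redistribution_plan(n, nprocs):
    """For each (src, dst) processor pair, list the global indices that
    must move when converting a block layout into a cyclic layout --
    the kind of all-to-all exchange an FFT may need between stages."""
    plan = {}
    for i in range(n):
        src, dst = block_owner(i, n, nprocs), cyclic_owner(i, nprocs)
        if src != dst:
            plan.setdefault((src, dst), []).append(i)
    return plan

print(redistribution_plan(8, 2))   # -> {(0, 1): [1, 3], (1, 0): [4, 6]}
```

    Supporting several input distributions then amounts to choosing the right owner function per user layout and generating the matching exchange plan.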

  7. Implementing Shared Memory Parallelism in MCBEND

    NASA Astrophysics Data System (ADS)

    Bird, Adam; Long, David; Dobson, Geoff

    2017-09-01

    MCBEND is a general-purpose radiation transport Monte Carlo code from AMEC Foster Wheeler's ANSWERS® Software Service. MCBEND is well established in the UK shielding community for radiation shielding and dosimetry assessments. The existing MCBEND parallel capability effectively involves running the same calculation on many processors. This works very well except when the memory requirements of a model restrict the number of instances of a calculation that will fit on a machine. To utilise parallel hardware more effectively, OpenMP has been used to implement shared-memory parallelism in MCBEND. This paper describes the reasoning behind the choice of OpenMP, notes some of the challenges of multi-threading an established code such as MCBEND and assesses the performance of the parallel method implemented in MCBEND.

  8. Parallel workflow tools to facilitate human brain MRI post-processing

    PubMed Central

    Cui, Zaixu; Zhao, Chenxi; Gong, Gaolang

    2015-01-01

    Multi-modal magnetic resonance imaging (MRI) techniques are widely applied in human brain studies. To obtain specific brain measures of interest from MRI datasets, a number of complex image post-processing steps are typically required. Parallel workflow tools have recently been developed, concatenating individual processing steps and enabling fully automated processing of raw MRI data to obtain the final results. These workflow tools are also designed to make optimal use of available computational resources and to support the parallel processing of different subjects or of independent processing steps for a single subject. Automated, parallel MRI post-processing tools can greatly facilitate relevant brain investigations and are being increasingly applied. In this review, we briefly summarize these parallel workflow tools and discuss relevant issues. PMID:26029043
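
    The subject-level parallelism these workflow tools exploit can be sketched with the standard library. The step names below are hypothetical stand-ins for real post-processing stages; actual tools manage full dependency graphs, but the core pattern is the same: subjects are independent and run concurrently, while steps within one subject remain sequential.

```python
from concurrent.futures import ThreadPoolExecutor

def preprocess(subject):
    """Stand-in for an early post-processing step (e.g. registration)."""
    return subject + ":preprocessed"

def measure(prepped):
    """Stand-in for the final measure-extraction step."""
    return prepped + ":measured"

def run_pipeline(subjects, max_workers=4):
    """Run independent subjects in parallel; within one subject the
    steps stay sequential because each consumes the previous output.
    map() preserves the input order of subjects in the results."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(lambda s: measure(preprocess(s)), subjects))

results = run_pipeline(["sub01", "sub02", "sub03"])
```

    For CPU-bound neuroimaging steps a process pool (or a cluster scheduler, as the reviewed tools support) would replace the thread pool, but the dataflow structure is unchanged.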

  9. VNIR hyperspectral background characterization methods in adverse weather conditions

    NASA Astrophysics Data System (ADS)

    Romano, João M.; Rosario, Dalton; Roth, Luz

    2009-05-01

    Hyperspectral technology is currently being used by the military to detect regions of interest where potential targets may be located. Weather variability, however, may affect the ability of an algorithm to discriminate possible targets from background clutter. Nonetheless, different background characterization approaches may facilitate the ability of an algorithm to discriminate potential targets over a variety of weather conditions. In a previous paper, we introduced a new autonomous, target-size-invariant background characterization process, the Autonomous Background Characterization (ABC) method, also known as Parallel Random Sampling (PRS). It features a random sampling stage; a parallel process to mitigate the inclusion, by chance, of target samples into clutter background classes during random sampling; and a fusion of results at the end. In this paper, we demonstrate how different background characterization approaches are able to improve the performance of algorithms over a variety of challenging weather conditions. Using the Mahalanobis distance as the standard algorithm for this study, we compare the performance of different characterization methods, namely global information, two-stage global information, and our proposed method, ABC, using data collected under a variety of adverse weather conditions. For this study, we used ARDEC's Hyperspectral VNIR Adverse Weather data collection, comprising heavy, light, and transitional fog, light and heavy rain, and low-light conditions.
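
    The Mahalanobis-distance detector used as the baseline above can be shown in a minimal two-band sketch. Real VNIR pixels have many bands and the background statistics come from a characterization stage such as ABC; here the clutter samples and helper names are hypothetical, and the 2x2 covariance is inverted by hand to stay dependency-free.

```python
def background_stats(samples):
    """Mean and covariance of 2-band background (clutter) samples."""
    n = len(samples)
    mx = sum(s[0] for s in samples) / n
    my = sum(s[1] for s in samples) / n
    sxx = sum((s[0] - mx) ** 2 for s in samples) / n
    syy = sum((s[1] - my) ** 2 for s in samples) / n
    sxy = sum((s[0] - mx) * (s[1] - my) for s in samples) / n
    return (mx, my), ((sxx, sxy), (sxy, syy))

def inv2x2(m):
    """Inverse of a 2x2 covariance matrix."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return ((d / det, -b / det), (-c / det, a / det))

def mahalanobis_sq(x, mu, cov_inv):
    """Squared Mahalanobis distance of pixel x from the background."""
    dx, dy = x[0] - mu[0], x[1] - mu[1]
    (a, b), (c, d) = cov_inv
    return dx * (a * dx + b * dy) + dy * (c * dx + d * dy)

clutter = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0), (2.0, 2.0)]
mu, cov = background_stats(clutter)
ci = inv2x2(cov)
# A pixel far from the clutter statistics scores high (candidate target).
print(mahalanobis_sq((4.0, 1.0), mu, ci))   # -> 9.0
```

    The point of better background characterization is precisely to improve `mu` and `cov`: if target pixels leak into the clutter estimate, anomalous pixels score lower and detection degrades.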

  10. Efficacy and safety of pioglitazone added to alogliptin in Japanese patients with type 2 diabetes mellitus: a multicentre, randomized, double-blind, parallel-group, comparative study.

    PubMed

    Kaku, K; Katou, M; Igeta, M; Ohira, T; Sano, H

    2015-12-01

    A phase IV, multicentre, randomized, double-blind, parallel-group, comparative study was conducted in Japanese subjects with type 2 diabetes mellitus (T2DM) who had inadequate glycaemic control, despite treatment with alogliptin in addition to diet and/or exercise therapy. Subjects with glycated haemoglobin (HbA1c) concentrations of 6.9-10.5% were randomized to receive 16 weeks' double-blind treatment with pioglitazone 15 mg, 30 mg once daily or placebo added to alogliptin 25 mg once daily. The primary endpoint was the change in HbA1c from baseline at the end of treatment period (week 16). Both pioglitazone 15 and 30 mg combination therapy resulted in a significantly greater reduction in HbA1c than alogliptin monotherapy [-0.80 and -0.90% vs 0.00% (the least squares mean using analysis of covariance model); p < 0.0001, respectively]. The overall incidence rates of treatment-emergent adverse events were similar among the treatment groups. Pioglitazone/alogliptin combination therapy was effective and generally well tolerated in Japanese subjects with T2DM and is considered to be useful in clinical settings. © 2015 John Wiley & Sons Ltd.

  11. Efficacy and safety of rasagiline as an adjunct to levodopa treatment in Chinese patients with Parkinson's disease: a randomized, double-blind, parallel-controlled, multi-centre trial.

    PubMed

    Zhang, Lina; Zhang, Zhiqin; Chen, Yangmei; Qin, Xinyue; Zhou, Huadong; Zhang, Chaodong; Sun, Hongbin; Tang, Ronghua; Zheng, Jinou; Yi, Lin; Deng, Liying; Li, Jinfang

    2013-08-01

    Rasagiline mesylate is a highly potent, selective and irreversible monoamine oxidase type B (MAOB) inhibitor and is effective as monotherapy or adjunct to levodopa for patients with Parkinson's disease (PD). However, few studies have evaluated the efficacy and safety of rasagiline in the Chinese population. This study was designed to investigate the safety and efficacy of rasagiline as adjunctive therapy to levodopa treatment in Chinese PD patients. This was a randomized, double-blind, placebo-controlled, parallel-group, multi-centre trial conducted over a 12-wk period that enrolled 244 PD patients with motor fluctuations. Participants were randomly assigned to oral rasagiline mesylate (1 mg) or placebo, once daily. Altogether, 219 patients completed the trial. Rasagiline showed significantly greater efficacy compared with placebo. During the treatment period, the primary efficacy variable--mean adjusted total daily off time--decreased from baseline by 1.7 h in patients treated with 1.0 mg/d rasagiline compared to placebo (p < 0.05). Scores using the Unified Parkinson's Disease Rating Scale also improved during rasagiline treatment. Rasagiline was well tolerated. This study demonstrated that rasagiline mesylate is effective and well tolerated as an adjunct to levodopa treatment in Chinese PD patients with fluctuations.

  12. A Monte Carlo method for the simulation of coagulation and nucleation based on weighted particles and the concepts of stochastic resolution and merging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kotalczyk, G., E-mail: Gregor.Kotalczyk@uni-due.de; Kruis, F.E.

    Monte Carlo simulations based on weighted simulation particles can solve a variety of population balance problems and thus provide a solution framework for many chemical engineering processes. This study presents a novel concept for the calculation of coagulation rates of weighted Monte Carlo particles by introducing a family of transformations to non-weighted Monte Carlo particles. Tuning the accuracy (named ‘stochastic resolution’ in this paper) of those transformations allows the construction of a constant-number coagulation scheme. Furthermore, a parallel algorithm for the inclusion of newly formed Monte Carlo particles due to nucleation is presented within the scope of a constant-number scheme: the low-weight merging. This technique is found to create significantly less statistical simulation noise than the conventional technique (named ‘random removal’ in this paper). Both concepts are combined into a single GPU-based simulation method which is validated by comparison with the discrete-sectional simulation technique. Two test models describing constant-rate nucleation coupled to simultaneous coagulation in (1) the free-molecular regime or (2) the continuum regime are simulated for this purpose.
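
    The low-weight merging step can be illustrated with a toy sketch: when a nucleated particle must enter a full, fixed-size ensemble of weighted particles, the two lowest-weight particles are merged in a number- and mass-conserving way instead of one being removed at random. The function below is a hypothetical simplification for illustration, not the paper's GPU scheme:

```python
def nucleate_low_weight_merge(particles, v_nuc, w_nuc):
    """Insert a freshly nucleated particle (volume v_nuc, weight w_nuc) into a
    full, constant-size ensemble by merging the two lowest-weight particles,
    conserving their total number weight and total mass, instead of randomly
    removing a particle.  particles: list of (weight, volume) tuples."""
    particles.sort(key=lambda p: p[0])           # lowest weights first
    (w1, v1), (w2, v2) = particles[0], particles[1]
    w = w1 + w2                                   # number weight conserved
    merged = (w, (w1 * v1 + w2 * v2) / w)         # mass-conserving mean volume
    particles[:2] = [merged, (w_nuc, v_nuc)]      # ensemble size unchanged
    return particles
```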

  13. A Monte Carlo investigation of thrust imbalance of solid rocket motor pairs

    NASA Technical Reports Server (NTRS)

    Sforzini, R. H.; Foster, W. A., Jr.; Johnson, J. S., Jr.

    1974-01-01

    A technique is described for theoretical, statistical evaluation of the thrust imbalance of pairs of solid-propellant rocket motors (SRMs) firing in parallel. Sets of the significant variables, determined as a part of the research, are selected using a random sampling technique and the imbalance calculated for a large number of motor pairs. The performance model is upgraded to include the effects of statistical variations in the ovality and alignment of the motor case and mandrel. Effects of cross-correlations of variables are minimized by selecting for the most part completely independent input variables, over forty in number. The imbalance is evaluated in terms of six time-varying parameters as well as eleven single-valued ones which themselves are subject to statistical analysis. A sample study of the thrust imbalance of 50 pairs of 146 in. dia. SRMs of the type to be used on the space shuttle is presented. The FORTRAN IV computer program of the analysis and complete instructions for its use are included. Performance computation time for one pair of SRMs is approximately 35 seconds on the IBM 370/155 using the FORTRAN H compiler.
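
    The random-sampling approach can be sketched in miniature: draw independent input variables for each motor of a pair, evaluate a thrust model, and collect imbalance statistics over many pairs. The two-variable thrust model below is purely illustrative (hypothetical stand-ins for the more than forty variables and the full internal-ballistics model of the study):

```python
import random
import statistics

def motor_thrust(rng):
    """Toy thrust model: nominal thrust perturbed by independent random
    variations.  Both factors and their spreads are illustrative only."""
    burn_rate  = rng.gauss(1.0, 0.01)    # normalized burn-rate factor
    throat_dia = rng.gauss(1.0, 0.005)   # normalized throat-diameter factor
    return 1.0e6 * burn_rate / throat_dia**2   # N, illustrative scale

rng = random.Random(42)
# imbalance of 50 independently sampled motor pairs, as in the sample study
imbalances = [abs(motor_thrust(rng) - motor_thrust(rng)) for _ in range(50)]
mean_imb = statistics.mean(imbalances)
```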

  14. Kappa and Rater Accuracy: Paradigms and Parameters.

    PubMed

    Conger, Anthony J

    2017-12-01

    Drawing parallels to classical test theory, this article clarifies the difference between rater accuracy and reliability and demonstrates how category marginal frequencies affect rater agreement and Cohen's kappa (κ). Category assignment paradigms are developed: comparing raters to a standard (index) versus comparing two raters to one another (concordance), using both nonstochastic and stochastic category membership. Using a probability model to express category assignments in terms of rater accuracy and random error, it is shown that observed agreement (Po) depends only on rater accuracy and number of categories; however, expected agreement (Pe) and κ depend additionally on category frequencies. Moreover, category frequencies affect Pe and κ solely through the variance of the category proportions, regardless of the specific frequencies underlying the variance. Paradoxically, some judgment paradigms involving stochastic categories are shown to yield higher κ values than their nonstochastic counterparts. Using the stated probability model, assignments to categories were generated for 552 combinations of paradigms, rater and category parameters, category frequencies, and number of stimuli. Observed means and standard errors for Po, Pe, and κ were fully consistent with theory expectations. Guidelines for interpretation of rater accuracy and reliability are offered, along with a discussion of alternatives to the basic model.
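
    The quantities Po, Pe, and κ discussed above can be computed directly from two raters' category assignments. A minimal sketch using the standard definitions (not the article's simulation code): Po is the fraction of matching assignments, while Pe, and hence κ, also depends on the category marginal frequencies.

```python
import numpy as np

def kappa(r1, r2):
    """Observed agreement Po, chance-expected agreement Pe, and Cohen's
    kappa for two raters' category assignments (equal-length sequences)."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    cats = np.union1d(r1, r2)
    po = float(np.mean(r1 == r2))                     # observed agreement
    p1 = np.array([np.mean(r1 == c) for c in cats])   # rater 1 marginals
    p2 = np.array([np.mean(r2 == c) for c in cats])   # rater 2 marginals
    pe = float(p1 @ p2)                               # expected agreement
    return po, pe, (po - pe) / (1 - pe)
```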

  15. Quantum random number generation

    DOE PAGES

    Ma, Xiongfeng; Yuan, Xiao; Cao, Zhu; ...

    2016-06-28

    Quantum physics can be exploited to generate true random numbers, which play important roles in many applications, especially in cryptography. Genuine randomness from the measurement of a quantum system reveals the inherent nature of quantumness: coherence, an important feature that differentiates quantum mechanics from classical physics. The generation of genuine randomness is generally considered impossible with only classical means. Based on the degree of trustworthiness of devices, quantum random number generators (QRNGs) can be grouped into three categories. The first category, practical QRNG, is built on fully trusted and calibrated devices and typically can generate randomness at a high speed by properly modeling the devices. The second category is self-testing QRNG, where verifiable randomness can be generated without trusting the actual implementation. The third category, semi-self-testing QRNG, is an intermediate category which provides a tradeoff between the trustworthiness of the device and the random number generation speed.

  16. Three is much more than two in coarsening dynamics of cyclic competitions

    NASA Astrophysics Data System (ADS)

    Mitarai, Namiko; Gunnarson, Ivar; Pedersen, Buster Niels; Rosiek, Christian Anker; Sneppen, Kim

    2016-04-01

    The classical game of rock-paper-scissors has inspired experiments and spatial model systems that address the robustness of biological diversity. In particular, the game nicely illustrates that cyclic interactions allow multiple strategies to coexist for long-time intervals. When formulated in terms of a one-dimensional cellular automaton, the spatial distribution of strategies exhibits coarsening with algebraically growing domain size over time, while the two-dimensional version allows domains to break and thereby opens the possibility for long-time coexistence. We consider a quasi-one-dimensional implementation of the cyclic competition, and study the long-term dynamics as a function of rare invasions between parallel linear ecosystems. We find that increasing the complexity from two to three parallel subsystems allows a transition from complete coarsening to an active steady state where the domain size stays finite. We further find that this transition happens irrespective of whether the update is done in parallel for all sites simultaneously or done randomly in sequential order. In both cases, the active state is characterized by localized bursts of dislocations, followed by longer periods of coarsening. In the case of the parallel dynamics, we find that there is another phase transition between the active steady state and the coarsening state within the three-line system when the invasion rate between the subsystems is varied. We identify the critical parameter for this transition and show that the density of active boundaries has critical exponents that are consistent with the directed percolation universality class. On the other hand, numerical simulations with the random sequential dynamics suggest that the system may exhibit an active steady state as long as the invasion rate is finite.
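
    The one-dimensional cyclic competition can be sketched as a cellular automaton with random sequential updates, where each strategy invades the one it cyclically beats, and coarsening shows up as a shrinking number of domain walls. A minimal toy version of the single-line system (not the paper's quasi-one-dimensional multi-line setup with inter-line invasions):

```python
import random

def step(state, rng):
    """One random sequential update on a periodic ring: a random site attacks
    a random neighbour; strategy s beats strategy (s + 1) % 3 (cyclic)."""
    n = len(state)
    i = rng.randrange(n)
    j = (i + rng.choice((-1, 1))) % n     # periodic boundary conditions
    if state[i] == (state[j] + 1) % 3:    # i's strategy beats j's
        state[j] = state[i]

def domain_walls(state):
    """Number of boundaries between unequal neighbouring sites on the ring."""
    n = len(state)
    return sum(state[i] != state[(i + 1) % n] for i in range(n))

rng = random.Random(0)
state = [rng.randrange(3) for _ in range(200)]   # random initial strategies
for _ in range(20000):
    step(state, rng)
# coarsening: the wall count tends to shrink as domains grow
```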

  17. Parallel tempering simulation of the three-dimensional Edwards-Anderson model with compact asynchronous multispin coding on GPU

    NASA Astrophysics Data System (ADS)

    Fang, Ye; Feng, Sheng; Tam, Ka-Ming; Yun, Zhifeng; Moreno, Juana; Ramanujam, J.; Jarrell, Mark

    2014-10-01

    Monte Carlo simulations of the Ising model play an important role in the field of computational statistical physics, and they have revealed many properties of the model over the past few decades. However, the effect of frustration due to random disorder, in particular the possible spin glass phase, remains a crucial but poorly understood problem. One of the obstacles in the Monte Carlo simulation of random frustrated systems is their long relaxation time making an efficient parallel implementation on state-of-the-art computation platforms highly desirable. The Graphics Processing Unit (GPU) is such a platform that provides an opportunity to significantly enhance the computational performance and thus gain new insight into this problem. In this paper, we present optimization and tuning approaches for the CUDA implementation of the spin glass simulation on GPUs. We discuss the integration of various design alternatives, such as GPU kernel construction with minimal communication, memory tiling, and look-up tables. We present a binary data format, Compact Asynchronous Multispin Coding (CAMSC), which provides an additional 28.4% speedup compared with the traditionally used Asynchronous Multispin Coding (AMSC). Our overall design sustains a performance of 33.5 ps per spin flip attempt for simulating the three-dimensional Edwards-Anderson model with parallel tempering, which significantly improves the performance over existing GPU implementations.
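
    The basic idea behind multispin coding, packing one spin from each of many independent replicas into the bits of a single machine word so that flip attempts become bitwise operations, can be sketched as follows. This shows plain multispin coding only; the paper's CAMSC format adds compaction and asynchronous features not modeled here:

```python
# One 64-bit word holds the same lattice site across 64 replicas:
# bit k encodes the spin of replica k (bit 1 -> spin +1, bit 0 -> spin -1).
WORD = (1 << 64) - 1

def flip(word, accept_mask):
    """Flip the spin bit in every replica whose Metropolis acceptance bit
    is set; a single XOR updates all 64 replicas at once."""
    return word ^ (accept_mask & WORD)

def spins(word, n=64):
    """Decode the packed bits back to +/-1 spins for each replica."""
    return [1 if (word >> k) & 1 else -1 for k in range(n)]
```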

  18. The Many Ways Data Must Flow.

    ERIC Educational Resources Information Center

    La Brecque, Mort

    1984-01-01

    To break the bottleneck inherent in today's linear computer architectures, parallel schemes (which allow computers to perform multiple tasks at one time) are being devised. Several of these schemes are described. Dataflow devices, parallel number-crunchers, programming languages, and a device based on a neurological model are among the areas…

  19. Dimensionality Assessment of Ordered Polytomous Items with Parallel Analysis

    ERIC Educational Resources Information Center

    Timmerman, Marieke E.; Lorenzo-Seva, Urbano

    2011-01-01

    Parallel analysis (PA) is an often-recommended approach for assessment of the dimensionality of a variable set. PA is known in different variants, which may yield different dimensionality indications. In this article, the authors considered the most appropriate PA procedure to assess the number of common factors underlying ordered polytomously…

  20. Hardware packet pacing using a DMA in a parallel computer

    DOEpatents

    Chen, Dong; Heidelberger, Phillip; Vranas, Pavlos

    2013-08-13

    Method and system for hardware packet pacing using a direct memory access controller in a parallel computer which, in one aspect, keeps track of a total number of bytes put on the network as a result of a remote get operation, using a hardware token counter.
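
    The pacing mechanism in the claim can be modeled in software: a byte-token counter that admits packets resulting from a remote get only while the count of outstanding bytes on the network stays under a cap. The class below is a hypothetical software analogue for illustration, not the patented hardware DMA design:

```python
class TokenCounter:
    """Software model of a hardware byte-token counter for packet pacing:
    tracks bytes put on the network by a remote get and holds back further
    injection once the outstanding total would exceed the cap."""

    def __init__(self, max_outstanding_bytes):
        self.cap = max_outstanding_bytes
        self.outstanding = 0

    def try_inject(self, nbytes):
        """Attempt to put nbytes on the network; False means 'pace' (wait)."""
        if self.outstanding + nbytes > self.cap:
            return False
        self.outstanding += nbytes
        return True

    def on_ack(self, nbytes):
        """Receiver has drained nbytes; tokens are returned to the counter."""
        self.outstanding -= nbytes
```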

  1. Comparison of in-vivo failure of single-thread and dual-thread temporary anchorage devices over 18 months: A split-mouth randomized controlled trial.

    PubMed

    Durrani, Owais Khalid; Shaheed, Sohrab; Khan, Arsalan; Bashir, Ulfat

    2017-10-01

    The purpose of this study was to compare the in-vivo failure rates of single-thread and dual-thread temporary anchorage device (TAD) designs over 18 months. Thirty patients with skeletal Class II Division 1 malocclusion requiring anchorage from TADs for retraction of maxillary incisors into the extracted premolar space were recruited in this parallel group, split-mouth, randomized controlled trial. A block randomization sequence was generated with Random Allocation Software (version 2.0; Isfahan, Iran) with the allocations concealed in sequentially numbered, opaque, sealed envelopes. A total of 60 TADs (diameter, 2 mm; length, 10 mm) were placed in the maxillary arches of these patients with random allocation of the 2 types to the left and the right sides in a 1:1 ratio. All TADs were placed between the roots of the second premolar and the first molar and were immediately loaded. Patients were followed for a minimum of 12 months and a maximum of 18 months for the failure of the TADs. Data were analyzed blindly on an intention-to-treat basis. Four TADs (13.3%) failed in the single-thread group, and 6 TADs (20%) failed in the dual-thread group. The McNemar test showed a nonsignificant difference (P = 0.72) between the 2 groups. An odds ratio of 1.6 (95% confidence interval, 0.39-6.97) showed no significant associations among the variables. Most TADs failed in the first month after insertion (50%). The failure rates of dual-thread and single-thread TADs did not differ significantly when placed in the maxilla for retraction of the anterior segment. Registration: The trial was not registered before commencement. The protocol was not published before the trial. Copyright © 2016 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.

  2. A benefit-finding intervention for family caregivers of persons with Alzheimer disease: study protocol of a randomized controlled trial

    PubMed Central

    2012-01-01

    Background Caregivers of relatives with Alzheimer’s disease are highly stressed and at risk for physical and psychiatric conditions. Interventions are usually focused on providing caregivers with knowledge of dementia, skills, and/or support, to help them cope with the stress. This model, though true to a certain extent, ignores how caregiver stress is construed in the first place. Besides burden, caregivers also report rewards, uplifts, and gains, such as a sense of purpose and personal growth. Finding benefits through positive reappraisal may offset the effect of caregiving on caregiver outcomes. Design Two randomized controlled trials are planned. They are essentially the same except that Trial 1 is a cluster trial (that is, randomization based on groups of participants) whereas in Trial 2, randomization is based on individuals. Participants are randomized into three groups - benefit finding, psychoeducation, and simplified psychoeducation. Participants in each group receive a total of approximately 12 hours of training either in group or individually at home. Booster sessions are provided at around 14 months after the initial treatment. The primary outcomes are caregiver stress (subjective burden, role overload, and cortisol), perceived benefits, subjective health, psychological well-being, and depression. The secondary outcomes are caregiver coping, and behavioral problems and functional impairment of the care-recipient. Outcome measures are obtained at baseline, post-treatment (2 months), and 6, 12, 18 and 30 months. Discussion The emphasis on benefits, rather than losses and difficulties, provides a new dimension to the way interventions for caregivers can be conceptualized and delivered. By focusing on the positive, caregivers may be empowered to sustain caregiving efforts in the long term despite the day-to-day challenges. 
The two parallel trials will provide an assessment of whether the effectiveness of the intervention depends on the mode of delivery. Trial registration Chinese Clinical Trial Registry (http://www.chictr.org/en/) identifier number ChiCTR-TRC-10000881. PMID:22747914

  3. Assessment of changes following en-masse retraction with mini-implants anchorage compared to two-step retraction with conventional anchorage in patients with class II division 1 malocclusion: a randomized controlled trial.

    PubMed

    Al-Sibaie, Salma; Hajeer, Mohammad Y

    2014-06-01

    No randomized controlled trial has tried to compare treatment outcomes between the sliding en-masse retraction of upper anterior teeth supported by mini-implants and the two-step sliding retraction technique employing conventional anchorage devices. To evaluate skeletal, dental, and soft tissue changes following anterior teeth retraction. Parallel-groups randomized controlled trial on patients with class II division 1 malocclusion treated at the University of Al-Baath Dental School in Hamah, Syria between July 2011 and May 2013. One hundred and thirty-three patients with an upper dentoalveolar protrusion were evaluated and 80 patients fulfilled the inclusion criteria. Randomization was performed using computer-generated tables; allocation was concealed using sequentially numbered opaque and sealed envelopes. Fifty-six participants were analysed (mean age 22.34 ± 4.56 years). They were randomly distributed into two groups with 28 patients in each group (1:1 allocation ratio). Following first premolar extraction, space closure was accomplished using either the en-masse technique with mini-implants or the two-step technique with transpalatal arches (TPAs). The antero-posterior displacements of upper incisal edges and upper first molars were measured on lateral cephalograms at three assessment times. Assessor blinding was employed. A bodily retraction (-4.42 mm; P < 0.001) with a slight intrusion (-1.53 mm; P < 0.001) of the upper anterior teeth was achieved in the mini-implants group, whereas upper anterior teeth retraction was achieved by controlled palatal tipping in the TPA group. When retracting anterior teeth in patients with moderate to severe protrusion, the en-masse retraction based on mini-implants anchorage gave superior results compared to the two-step retraction based on conventional anchorage in terms of speed, dental changes, anchorage loss, and aesthetic outcomes.

  4. Effect of Garlic and Lemon Juice Mixture on Lipid Profile and Some Cardiovascular Risk Factors in People 30-60 Years Old with Moderate Hyperlipidaemia: A Randomized Clinical Trial.

    PubMed

    Aslani, Negar; Entezari, Mohammad Hasan; Askari, Gholamreza; Maghsoudi, Zahra; Maracy, Mohammad Reza

    2016-01-01

    This study was performed to evaluate the effects of a garlic and lemon juice mixture on the lipid profile and some cardiovascular risk factors in people 30-60 years old with moderate hyperlipidemia. In a parallel-designed randomized controlled clinical trial, a total of 112 hyperlipidemic patients aged 30-60 years were recruited from the Isfahan Cardiovascular Research Center. Baseline blood samples were taken, and height, weight, and blood pressure were recorded. Participants were randomly divided into four groups: (1) received 20 g of garlic daily plus 1 tablespoon of lemon juice, (2) received 20 g of garlic daily, (3) received 1 tablespoon of lemon juice daily, and (4) received neither garlic nor lemon juice. A study technician performed the random allocations using a random-numbers table. All participants provided 3 days of dietary records and 3 days of physical activity records during the 8 weeks. Blood samples were obtained at study baseline and after 8 weeks of intervention. Results showed a significant decrease in total cholesterol (change from baseline: 40.8 ± 6.1, P < 0.001), low-density lipoprotein cholesterol (29.8 ± 2.6, P < 0.001), and fibrinogen (111.4 ± 16.1, P < 0.001) in Group 1 in comparison with the other groups. A greater reduction in systolic and diastolic blood pressure was observed in Group 1 compared with Groups 3 and 4 (37 ± 10, P = 0.01; 24 ± 1, P = 0.02, respectively). Furthermore, a greater reduction in body mass index was observed in the mixed group compared with the lemon juice and control groups (1.6 ± 0.1, P = 0.04). Administration of garlic plus lemon juice resulted in an improvement in the lipid levels, fibrinogen, and blood pressure of patients with hyperlipidemia.

  5. Big Data: A Parallel Particle Swarm Optimization-Back-Propagation Neural Network Algorithm Based on MapReduce

    PubMed Central

    Cao, Jianfang; Cui, Hongyan; Shi, Hao; Jiao, Lijuan

    2016-01-01

    A back-propagation (BP) neural network can solve complicated random nonlinear mapping problems; therefore, it can be applied to a wide range of problems. However, as the sample size increases, the time required to train BP neural networks becomes lengthy. Moreover, the classification accuracy decreases as well. To improve the classification accuracy and runtime efficiency of the BP neural network algorithm, we proposed a parallel design and realization method for a particle swarm optimization (PSO)-optimized BP neural network based on MapReduce on the Hadoop platform using both the PSO algorithm and a parallel design. The PSO algorithm was used to optimize the BP neural network’s initial weights and thresholds and improve the accuracy of the classification algorithm. The MapReduce parallel programming model was utilized to achieve parallel processing of the BP algorithm, thereby solving the problems of hardware and communication overhead when the BP neural network addresses big data. Datasets on 5 different scales were constructed using the scene image library from the SUN Database. The classification accuracy of the parallel PSO-BP neural network algorithm is approximately 92%, and the system efficiency is approximately 0.85, which presents obvious advantages when processing big data. The algorithm proposed in this study demonstrated both higher classification accuracy and improved time efficiency, which represents a significant improvement obtained from applying parallel processing to an intelligent algorithm on big data. PMID:27304987
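
    The PSO stage can be illustrated independently of MapReduce: a minimal global-best PSO that minimizes an objective, as one might do to choose good initial network weights before gradient training. The hyperparameters below (inertia 0.7, c1 = c2 = 1.5) are common defaults, not the paper's settings:

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100, seed=0):
    """Minimal global-best particle swarm optimization of f over R^dim.
    Returns the best position found; a sketch, not the paper's parallel code."""
    rng = random.Random(seed)
    X = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                       # personal best positions
    pf = [f(x) for x in X]                      # personal best values
    g = P[min(range(n_particles), key=lambda i: pf[i])][:]   # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (0.7 * V[i][d]
                           + 1.5 * rng.random() * (P[i][d] - X[i][d])
                           + 1.5 * rng.random() * (g[d] - X[i][d]))
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < pf[i]:
                pf[i], P[i] = fx, X[i][:]
                if fx < f(g):
                    g = X[i][:]
    return g
```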

  6. Exact parallel algorithms for some members of the traveling salesman problem family

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pekny, J.F.

    1989-01-01

    The traveling salesman problem and its many generalizations comprise one of the best known combinatorial optimization problem families. Most members of the family are NP-complete problems so that exact algorithms require an unpredictable and sometimes large computational effort. Parallel computers offer hope for providing the power required to meet these demands. A major barrier to applying parallel computers is the lack of parallel algorithms. The contributions presented in this thesis center around new exact parallel algorithms for the asymmetric traveling salesman problem (ATSP), prize collecting traveling salesman problem (PCTSP), and resource constrained traveling salesman problem (RCTSP). The RCTSP is a particularly difficult member of the family since finding a feasible solution is an NP-complete problem. An exact sequential algorithm is also presented for the directed hamiltonian cycle problem (DHCP). The DHCP algorithm is superior to current heuristic approaches and represents the first exact method applicable to large graphs. Computational results presented for each of the algorithms demonstrates the effectiveness of combining efficient algorithms with parallel computing methods. Performance statistics are reported for randomly generated ATSPs with 7,500 cities, PCTSPs with 200 cities, RCTSPs with 200 cities, DHCPs with 3,500 vertices, and assignment problems of size 10,000. Sequential results were collected on a Sun 4/260 engineering workstation, while parallel results were collected using a 14 and 100 processor BBN Butterfly Plus computer. The computational results represent the largest instances ever solved to optimality on any type of computer.
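
    For a sense of what "exact" means here, small ATSP instances can be solved optimally by Held-Karp dynamic programming in O(n² · 2ⁿ) time. The sketch below illustrates the exactness requirement only; it is not the thesis's parallel branch-and-bound approach, which is what makes instances with thousands of cities tractable:

```python
from itertools import combinations

def atsp_held_karp(dist):
    """Exact asymmetric TSP tour length by Held-Karp dynamic programming.
    dist[i][j] is the (possibly asymmetric) cost from city i to city j."""
    n = len(dist)
    # dp[(S, j)] = shortest path from city 0 visiting bit-set S, ending at j
    dp = {(1 << j, j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for S in combinations(range(1, n), size):
            bits = sum(1 << j for j in S)
            for j in S:
                prev = bits ^ (1 << j)
                dp[(bits, j)] = min(dp[(prev, k)] + dist[k][j]
                                    for k in S if k != j)
    full = sum(1 << j for j in range(1, n))
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))
```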

  7. Evaluation of Tai Chi Yunshou exercises on community-based stroke patients with balance dysfunction: a study protocol of a cluster randomized controlled trial.

    PubMed

    Tao, Jing; Rao, Ting; Lin, Lili; Liu, Wei; Wu, Zhenkai; Zheng, Guohua; Su, Yusheng; Huang, Jia; Lin, Zhengkun; Wu, Jinsong; Fang, Yunhua; Chen, Lidian

    2015-02-25

    Balance dysfunction after stroke limits patients' general function and participation in daily life. Previous research has suggested that Tai Chi exercise can improve balance function in older individuals and reduce the risk of falls. But convincing evidence for the effectiveness of enhancing balance function after stroke with Tai Chi exercise is still inadequate. Considering the difficulties for stroke patients in completing the whole exercise, the current trial evaluates the benefit of Tai Chi Yunshou exercise for patients with balance dysfunction after stroke through a cluster-randomized, parallel-controlled design. A single-blind, cluster-randomized, parallel-controlled trial will be conducted. A total of 10 community health centers (5 per arm) will be selected and randomly allocated into the Tai Chi Yunshou exercise group or the balance rehabilitation training group. Each community health center will be asked to enroll 25 eligible patients into the trial. Each session will last 60 minutes, with 1 session per day, 5 times per week, for a total training period of 12 weeks. Primary and secondary outcomes will be measured at baseline, at 4, 8, and 12 weeks, and at 6-week and 12-week follow-up after randomization. Safety and economic evaluation will also be assessed. This protocol aims to evaluate the effectiveness of Tai Chi Yunshou exercise for the balance function of patients after stroke. If the outcome is positive, this project will provide an appropriate and economic balance rehabilitation technology for community-based stroke patients. Chinese Clinical Trial Registry: ChiCTR-TRC-13003641. Registration date: 22 August, 2013 http://www.chictr.org/usercenter/project/listbycreater.aspx .

  8. The efficacy and safety of Fufangdanshen tablets (Radix Salviae miltiorrhizae formula tablets) for mild to moderate vascular dementia: a study protocol for a randomized controlled trial.

    PubMed

    Tian, Jinzhou; Shi, Jing; Wei, Mingqing; Qin, Renan; Ni, Jingnian; Zhang, Xuekai; Li, Ting; Wang, Yongyan

    2016-06-08

    Vascular dementia (VaD) is the second most common subtype of dementia after Alzheimer's disease (AD). Currently, there are no medications approved for treating patients with VaD. Fufangdanshen (FFDS) tablets (Radix Salviae miltiorrhizae formula tablets) are a traditional Chinese medicine that has been reported to improve memory. However, the existing evidence for FFDS tablets in clinical practice derives from methodologically flawed studies. To further investigate the safety, tolerability, and efficacy of FFDS tablets in the treatment of mild to moderate VaD, we designed and reported the methodology for a 24-week randomized, double-blind, parallel, multicenter study. This ongoing study is a double-blind, randomized, parallel placebo-controlled trial. A total of 240 patients with mild to moderate VaD will be enrolled. After a 2-week run-in period, the eligible patients will be randomized to receive either three FFDS or placebo tablets three times per day for 24 weeks, with a follow-up 12 weeks after the last treatment. The primary efficacy measurement will be the Alzheimer's Disease Assessment Scale-cognitive subscale (ADAS-cog) and the Clinician Interview-Based Impression of Change (CIBIC-plus). The secondary efficacy measurements will include the Mini Mental State Examination (MMSE) and activities of daily living (ADL). Adverse events will also be reported. This randomized trial will be the first rigorous study on the efficacy and safety of FFDS tablets for treating cognitive symptoms in patients with VaD using a rational design. ClinicalTrials.gov: NCT01761227 . Registered on 2 January 2013.

  9. Continuous quality improvement interventions to improve long-term outcomes of antiretroviral therapy in women who initiated therapy during pregnancy or breastfeeding in the Democratic Republic of Congo: design of an open-label, parallel, group randomized trial.

    PubMed

    Yotebieng, Marcel; Behets, Frieda; Kawende, Bienvenu; Ravelomanana, Noro Lantoniaina Rosa; Tabala, Martine; Okitolonda, Emile W

    2017-04-26

    Despite the rapid adoption of the World Health Organization's 2013 guidelines, children continue to be infected with HIV perinatally because of sub-optimal adherence to the continuum of HIV care in maternal and child health (MCH) clinics. To achieve the UNAIDS goal of eliminating mother-to-child HIV transmission, multiple, adaptive interventions need to be implemented to improve adherence to the HIV continuum. The aim of this open label, parallel, group randomized trial is to evaluate the effectiveness of Continuous Quality Improvement (CQI) interventions implemented at facility and health district levels to improve retention in care and virological suppression through 24 months postpartum among pregnant and breastfeeding women receiving ART in MCH clinics in Kinshasa, Democratic Republic of Congo. Prior to randomization, the current monitoring and evaluation system will be strengthened to enable collection of high quality individual patient-level data necessary for timely indicators production and program outcomes monitoring to inform CQI interventions. Following randomization, in health districts randomized to CQI, quality improvement (QI) teams will be established at the district level and at MCH clinics level. For 18 months, QI teams will be brought together quarterly to identify key bottlenecks in the care delivery system using data from the monitoring system, develop an action plan to address those bottlenecks, and implement the action plan at the level of their district or clinics. If proven to be effective, CQI as designed here, could be scaled up rapidly in resource-scarce settings to accelerate progress towards the goal of an AIDS free generation. The protocol was retrospectively registered on February 7, 2017. ClinicalTrials.gov Identifier: NCT03048669 .

  10. A random access memory immune to single event upset using a T-Resistor

    DOEpatents

    Ochoa, A. Jr.

    1987-10-28

    In a random access memory cell, a resistance ''T'' decoupling network in each leg of the cell reduces random errors caused by the interaction of energetic ions with the semiconductor material forming the cell. The cell comprises two parallel legs each containing a series pair of complementary MOS transistors having a common gate connected to the node between the transistors of the opposite leg. The decoupling network in each leg is formed by a series pair of resistors between the transistors together with a third resistor interconnecting the junction between the pair of resistors and the gate of the transistor pair forming the opposite leg of the cell. 4 figs.

  11. Screening unlabeled DNA targets with randomly ordered fiber-optic gene arrays.

    PubMed

    Steemers, F J; Ferguson, J A; Walt, D R

    2000-01-01

    We have developed a randomly ordered fiber-optic gene array for rapid, parallel detection of unlabeled DNA targets with surface immobilized molecular beacons (MB) that undergo a conformational change accompanied by a fluorescence change in the presence of a complementary DNA target. Microarrays are prepared by randomly distributing MB-functionalized 3-microm diameter microspheres in an array of wells etched in a 500-microm diameter optical imaging fiber. Using several MBs, each designed to recognize a different target, we demonstrate the selective detection of genomic cystic fibrosis related targets. Positional registration and fluorescence response monitoring of the microspheres was performed using an optical encoding scheme and an imaging fluorescence microscope system.

  12. Random access memory immune to single event upset using a T-resistor

    DOEpatents

    Ochoa, Jr., Agustin

    1989-01-01

    In a random access memory cell, a resistance "T" decoupling network in each leg of the cell reduces random errors caused by the interaction of energetic ions with the semiconductor material forming the cell. The cell comprises two parallel legs each containing a series pair of complementary MOS transistors having a common gate connected to the node between the transistors of the opposite leg. The decoupling network in each leg is formed by a series pair of resistors between the transistors together with a third resistor interconnecting the junction between the pair of resistors and the gate of the transistor pair forming the opposite leg of the cell.

  13. Parallel 3D Multi-Stage Simulation of a Turbofan Engine

    NASA Technical Reports Server (NTRS)

    Turner, Mark G.; Topp, David A.

    1998-01-01

    A 3D multistage simulation of each component of a modern GE turbofan engine has been made. An axisymmetric view of this engine is presented in the document. This includes a fan, booster rig, high pressure compressor rig, high pressure turbine rig, and a low pressure turbine rig. In the near future, all components will be run in a single calculation for a solution of 49 blade rows. The simulation exploits two levels of parallelism: each blade row is run in parallel, and each blade row grid is decomposed into several domains that are also run in parallel. Twenty processors are used for the 4-blade-row analysis. The average passage approach developed by John Adamczyk at NASA Lewis Research Center has been further developed and parallelized. This is APNASA Version A. It is a Navier-Stokes solver using a 4-stage explicit Runge-Kutta time-marching scheme with variable time steps and residual smoothing for convergence acceleration. It has an implicit k-ε turbulence model which uses an ADI solver to factor the matrix. Between 50 and 100 explicit time steps are solved before a blade row body force is calculated and exchanged with the other blade rows. This outer iteration has been coined a "flip." Efforts have been made to make the solver linearly scalable with the number of blade rows. Enough flips are run (between 50 and 200) that the solution in the entire machine is no longer changing. The k-ε equations are generally solved every other explicit time step. One of the key requirements in the development of the parallel code was to make the parallel solution exactly (bit for bit) match the serial solution. This has helped isolate many small parallel bugs and guarantee that the parallelization was done correctly. The domain decomposition is done only in the axial direction, since the number of points axially is much larger than in the other two directions. This code uses MPI for message passing. The parallel speedup of the solver portion (no I/O or body force calculation) is reported for a grid with 227 points axially.
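The two-level "flip" iteration described above can be sketched schematically (a toy stand-in, not APNASA itself; the relaxation update and the averaged body-force exchange are invented placeholders for the real per-row solver and coupling):

```python
# Sketch of the outer/inner iteration structure: each blade row advances some
# explicit time steps, then body forces are exchanged between rows ("flip"),
# repeated until the solution stops changing.

def advance_blade_row(state, force, n_steps=50):
    """Stand-in for n explicit Runge-Kutta time steps on one blade row."""
    for _ in range(n_steps):
        state = 0.9 * state + 0.1 * force  # toy relaxation toward the force
    return state

def run_flips(n_rows=4, n_flips=50):
    states = [float(i) for i in range(n_rows)]
    for _ in range(n_flips):                      # outer "flip" loop
        forces = [sum(states) / n_rows] * n_rows  # toy body-force exchange
        states = [advance_blade_row(s, f) for s, f in zip(states, forces)]
    return states

final = run_flips()  # all rows converge to a common, unchanging solution
```

In the real code each blade row (and each axial subdomain of its grid) would run on its own MPI rank, with the force exchange implemented as message passing rather than the shared list used here.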

  14. The electrical MHD and Hall current impact on micropolar nanofluid flow between rotating parallel plates

    NASA Astrophysics Data System (ADS)

    Shah, Zahir; Islam, Saeed; Gul, Taza; Bonyah, Ebenezer; Altaf Khan, Muhammad

    2018-06-01

    The current research examines the combined effect of magnetic and electric fields on micropolar nanofluid flow between two parallel plates in a rotating system. The flow between the plates is taken under the influence of the Hall current and is assumed to be steady. The governing equations have been reduced to a set of coupled nonlinear differential equations using suitable similarity variables. An optimal approach has been used to obtain the solution of the modelled problems, and the convergence of the method has been shown numerically. The skin friction, Nusselt number, and Sherwood number have been studied for the velocity, temperature, and concentration profiles, respectively. The influences of the Hall current, rotation, Brownian motion, and thermophoresis on the micropolar nanofluid are the main focus of this work. Moreover, to illustrate the physical behavior of the embedded parameters, that is, the coupling parameter N1, viscosity parameter Re, spin gradient viscosity parameter N2, rotation parameter Kr, micropolar fluid constant N3, magnetic parameter M, Prandtl number Pr, thermophoretic parameter Nt, Brownian motion parameter Nb, and Schmidt number Sc, the results have been plotted and discussed graphically.

  15. Parasites and parallel divergence of the number of individual MHC alleles between sympatric three-spined stickleback Gasterosteus aculeatus morphs in Iceland.

    PubMed

    Natsopoulou, M E; Pálsson, S; Ólafsdóttir, G Á

    2012-10-01

    Two pairs of sympatric three-spined stickleback Gasterosteus aculeatus morphs and two single morph populations inhabiting mud and lava or rocky benthic habitats in four Icelandic lakes were screened for parasites and genotyped for MHC class IIB diversity. Parasitic infection differed consistently between G. aculeatus from different benthic habitats. Gasterosteus aculeatus from the lava or rocky habitats were more heavily infected in all lakes. A parallel pattern was also found in individual MHC allelic variation with lava G. aculeatus morphs exhibiting lower levels of variation than the mud morphs. Evidence for selective divergence in MHC allele number is ambiguous but supported by two findings in addition to the parallel pattern observed. MHC allele diversity was not consistent with diversity reported at neutral markers (microsatellites) and in Þingvallavatn the most common number of alleles in each morph was associated with lower infection levels. In the Þingvallavatn lava morph, lower infection levels by the two most common parasites, Schistocephalus solidus and Diplostomum baeri, were associated with different MHC allele numbers. © 2012 The Authors. Journal of Fish Biology © 2012 The Fisheries Society of the British Isles.

  16. Indirect vs direct bonding of mandibular fixed retainers in orthodontic patients: a single-center randomized controlled trial comparing placement time and failure over a 6-month period.

    PubMed

    Bovali, Efstathia; Kiliaridis, Stavros; Cornelis, Marie A

    2014-12-01

    The objective of this 2-arm parallel single-center trial was to compare placement time and numbers of failures of mandibular lingual retainers bonded with an indirect procedure vs a direct bonding procedure. Sixty-four consecutive patients at the postgraduate orthodontic clinic of the University of Geneva in Switzerland scheduled for debonding and mandibular fixed retainer placement were randomly allocated to either an indirect bonding procedure or a traditional direct bonding procedure. Eligibility criteria were the presence of the 4 mandibular incisors and the 2 mandibular canines, and no active caries, restorations, fractures, or periodontal disease of these teeth. The patients were randomized in blocks of 4; the randomization sequence was generated using an online randomization service (www.randomization.com). Allocation concealment was secured by contacting the sequence generator for treatment assignment; blinding was possible for outcome assessment only. Bonding time was measured for each procedure. Unpaired t tests were used to assess differences in time. Patients were recalled at 1, 2, 4, and 6 months after bonding. Mandibular fixed retainers having at least 1 composite pad debonded were considered failures. The log-rank test was used to compare the Kaplan-Meier survival curves of both procedures. A test of proportion was applied to compare the failures at 6 months between the treatment groups. Sixty-four patients were randomized in a 1:1 ratio. One patient dropped out at baseline after the bonding procedure, and 3 patients did not attend the recalls at 4 and 6 months. Bonding time was significantly shorter for the indirect procedure (321 ± 31 seconds, mean ± SD) than for the direct procedure (401 ± 40 seconds) (per-protocol analysis of 63 patients: mean difference = 80 seconds; 95% CI = 62.4-98.1; P < 0.001). The 6-month numbers of failures were 10 of 31 (32%) with the indirect technique and 7 of 29 (24%) with the direct technique (log-rank: P = 0.35; test of proportions: risk difference = 0.08; 95% CI = -0.15 to 0.31; P = 0.49). No serious harm was observed except for plaque accumulation. Indirect bonding was statistically significantly faster than direct bonding, with both techniques showing similar risks of failure. This trial was not registered. The protocol was not published before trial commencement. No funding or conflict of interest to be declared. Copyright © 2014 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
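The reported risk difference and its confidence interval can be reproduced with a standard Wald-style two-proportion calculation (a sketch, not the trial's actual statistics code):

```python
# Sketch: risk difference between two failure proportions with a Wald-style
# 95% confidence interval, using the trial's reported 6-month counts.
import math

def risk_difference(f1, n1, f2, n2, z=1.96):
    """Return (p1 - p2) and its z-based confidence interval."""
    p1, p2 = f1 / n1, f2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, (diff - z * se, diff + z * se)

# Indirect: 10 failures of 31; direct: 7 failures of 29.
diff, (low, high) = risk_difference(10, 31, 7, 29)
# diff ≈ 0.08, CI ≈ (-0.15, 0.31), matching the values reported above
```

The interval comfortably spans zero, which is consistent with the trial's conclusion that the two techniques show similar risks of failure.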

  17. Progressive Vector Quantization on a massively parallel SIMD machine with application to multispectral image data

    NASA Technical Reports Server (NTRS)

    Manohar, Mareboyana; Tilton, James C.

    1994-01-01

    A progressive vector quantization (VQ) compression approach is discussed which decomposes image data into a number of levels using full search VQ. The final level is losslessly compressed, enabling lossless reconstruction. The computational difficulties are addressed by implementation on a massively parallel SIMD machine. We demonstrate progressive VQ on multispectral imagery obtained from the Advanced Very High Resolution Radiometer instrument and other Earth observation image data, and investigate the trade-offs in selecting the number of decomposition levels and codebook training method.
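The multi-level decomposition with a lossless final level can be illustrated with a minimal sketch (hypothetical codebooks and data; full-search VQ on toy 2-component "pixels", not the authors' SIMD implementation):

```python
# Sketch of progressive VQ: each level quantizes the residual left by the
# previous level with a full-search codebook lookup, and the final residual
# is kept losslessly so the reconstruction is exact.

def nearest(codebook, v):
    """Full search: codeword with minimum squared distance to v."""
    return min(codebook, key=lambda c: sum((a - b) ** 2 for a, b in zip(c, v)))

def encode(vector, codebooks):
    """Return the chosen codeword per level plus the final lossless residual."""
    residual, chosen = list(vector), []
    for cb in codebooks:
        cw = nearest(cb, residual)
        chosen.append(cw)
        residual = [r - c for r, c in zip(residual, cw)]
    return chosen, residual

def decode(chosen, residual):
    out = list(residual)
    for cw in chosen:
        out = [o + c for o, c in zip(out, cw)]
    return out

codebooks = [[(0, 0), (10, 10)], [(0, 0), (1, -1)]]  # two decomposition levels
pixel = [11, 8]
chosen, residual = encode(pixel, codebooks)
restored = decode(chosen, residual)  # → [11, 8], exactly the original
```

Stopping the decode after fewer levels gives the progressively coarser (lossy) reconstructions; including the stored residual makes the final level lossless.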

  18. Multi-line triggering and interdigitated electrode structure for photoconductive semiconductor switches

    DOEpatents

    Mar, Alan [Albuquerque, NM; Zutavern, Fred J [Albuquerque, NM; Loubriel, Guillermo [Albuquerque, NM

    2007-02-06

    An improved photoconductive semiconductor switch comprises multiple-line optical triggering of multiple high-current parallel filaments between the switch electrodes. The switch can also have a multi-gap, interdigitated electrode for the generation of additional parallel filaments. Multi-line triggering can increase the switch lifetime at high currents by increasing the number of current filaments and reducing the current density at the contact electrodes in a controlled manner. Furthermore, the improved switch can mitigate the degradation of switching conditions with an increasing number of firings of the switch.
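The current-density argument is simple arithmetic; as an illustrative sketch (the values are invented, not from the patent):

```python
# Sketch: spreading the same total current over more parallel filaments
# lowers the per-filament current density at the contact electrodes.

def filament_current_density(total_current_a, n_filaments, filament_area_cm2):
    """Current density per filament, assuming current divides evenly."""
    return total_current_a / (n_filaments * filament_area_cm2)

single = filament_current_density(1000.0, 1, 0.5)   # one filament
multi = filament_current_density(1000.0, 10, 0.5)   # ten parallel filaments
# multi is one tenth of single: more filaments, lower density per contact
```
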

  19. Access and visualization using clusters and other parallel computers

    NASA Technical Reports Server (NTRS)

    Katz, Daniel S.; Bergou, Attila; Berriman, Bruce; Block, Gary; Collier, Jim; Curkendall, Dave; Good, John; Husman, Laura; Jacob, Joe; Laity, Anastasia; hide

    2003-01-01

    JPL's Parallel Applications Technologies Group has been exploring the issues of data access and visualization of very large data sets over the past 10 or so years. This work has used a number of types of parallel computers and today includes the use of commodity clusters. This talk will highlight some of the applications and tools we have developed, including how they use parallel computing resources and, specifically, how we are using modern clusters. Our applications focus on NASA's needs; thus our data sets are usually related to Earth and space science, including data delivered from instruments in space and data produced by telescopes on the ground.

  20. Experiences with hypercube operating system instrumentation

    NASA Technical Reports Server (NTRS)

    Reed, Daniel A.; Rudolph, David C.

    1989-01-01

    The difficulties in conceptualizing the interactions among a large number of processors make it difficult both to identify the sources of inefficiencies and to determine how a parallel program could be made more efficient. This paper describes an instrumentation system that can trace the execution of distributed memory parallel programs by recording the occurrence of parallel program events. The resulting event traces can be used to compile summary statistics that provide a global view of program performance. In addition, visualization tools permit the graphic display of event traces. Visual presentation of performance data is particularly useful, indeed, necessary for large-scale parallel computers; the enormous volume of performance data mandates visual display.
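The trace-then-summarize idea can be sketched as follows (a hypothetical minimal recorder, not the hypercube instrumentation system itself; the event names and times are invented):

```python
# Sketch: record timestamped begin/end events per processor, then compile
# summary statistics giving a global view of where time went.
from collections import defaultdict

def summarize(trace):
    """trace: list of (time, processor, event) with paired begin/end events."""
    totals, open_events = defaultdict(float), {}
    for t, proc, event in sorted(trace):
        if event.endswith("_begin"):
            open_events[(proc, event[:-6])] = t
        elif event.endswith("_end"):
            name = event[:-4]
            totals[name] += t - open_events.pop((proc, name))
    return dict(totals)

trace = [
    (0.0, 0, "compute_begin"), (4.0, 0, "compute_end"),
    (4.0, 0, "send_begin"),    (4.5, 0, "send_end"),
    (0.0, 1, "compute_begin"), (3.0, 1, "compute_end"),
]
stats = summarize(trace)  # total time per event class across processors
```

The same event stream can also drive a timeline visualization, which is what makes trace-based instrumentation useful at scale: the summary gives the global view, the trace preserves the detail.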
